\begin{document}
\title{A Unified Tool for Solving Uni-Parametric Linear Programs, Convex Quadratic Programs, and Linear Complementarity Problems}
\titlerunning{A Unified Tool for Solving upLP, upQP, and upLCP}
\author{Nathan Adelgren}
\institute{Nathan Adelgren \at
Andlinger Center for Energy and the Environment\\
Princeton University\\
Princeton, NJ, 08544\\
\email{[email protected]}, ORCID: \url{https://orcid.org/0000-0003-3836-9324}
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
We introduce a new technique for solving uni-parametric versions of linear programs, convex quadratic programs, and linear complementarity problems in which a single parameter is permitted to be present in any of the input data. We demonstrate the use of our method on a small, motivating example and present the results of a small number of computational tests showing its utility for larger-scale problems.
\keywords{parametric optimization \and linear complementarity \and linear programming \and quadratic programming}
\end{abstract}
\begin{acknowledgements}
The author would like to thank Jacob Adelgren for offering helpful advice and feedback regarding the implementation.
\end{acknowledgements}
\section{Introduction}\label{sec:intro}
In this work we consider the uni-parametric form of the Linear Complementarity Problem (LCP) in which all input data is permitted to be dependent on a single parameter $\theta \in \Theta$, where
\begin{equation}\label{eq:paramspace}
\Theta := \{\theta \in \mathbb{R}: \alpha \leq \theta \leq \beta \text{ and } \alpha,\beta \in \mathbb{R}\}
\end{equation}
is a connected interval in $\mathbb{R}$ that represents the set of ``attainable'' values for $\theta$. This problem is referred to as the uni-parametric (or single-parametric) Linear Complementarity Problem (upLCP). Let $\mathscr{A} = \{\mu \theta + \sigma: \mu, \sigma \in \mathbb{R}\}$, the set of affine functions of $\theta$. Then upLCP is as follows:
\begin{quote}
Given $M(\theta)\in \mathscr{A}^{h\times h}$ and $q(\theta)\in \mathscr{A}^h$, for each $\theta\in \Theta$ find vectors $w(\theta)$ and $z(\theta)$ that satisfy the system
\begin{equation}\label{upLCP}
\begin{array}{c}
w - M(\theta)z = q(\theta)\\[1mm]
w^\top z = 0\\[1mm]
w,z \geq 0
\end{array}
\end{equation}
or show that no such vectors exist.
\end{quote}
upLCP is said to be \emph{feasible at $\theta$} if there exist $w(\theta)$ and $z(\theta)$ that satisfy System \eqref{upLCP} and \emph{infeasible at $\theta$} otherwise. Similarly, upLCP is said to be \emph{feasible} if there exists a $\hat{\theta} \in \Theta$ at which upLCP is feasible, and \emph{infeasible} otherwise.
Now, recognize that $\Theta$ must be an infinite set, as otherwise upLCP reduces to LCP. Hence, it is not possible to determine a solution to System \eqref{upLCP} for each $\theta \in \Theta$ individually. Instead, upLCP is solved by partitioning the interval $\Theta$ into a set of \emph{invariancy intervals}. As the name ``invariancy intervals'' suggests, within each of these intervals the representation of the solution vectors $w$ and $z$ as functions of $\theta$ is invariant. The methods we propose solve upLCP whenever the following assumptions are met.
\begin{assumption}\label{asm:sufficient}
The matrix $M(\theta)$ is sufficient for all $\theta \in \Theta$.
\end{assumption}
\begin{assumption}\label{asm:feasible}
System \eqref{upLCP} is feasible for all $\theta \in \Theta$.
\end{assumption}
We point any reader interested in background information on LCP, in particular the definition of sufficient matrices, to the work of \citet{cottle2009linear}. Additionally, we note that Assumption \ref{asm:feasible} is not extremely restrictive. For example, the claim of Assumption \ref{asm:feasible} is satisfied when upLCP results from the reformulation of a biobjective linear program or biobjective quadratic program that has been scalarized using the weighted sum approach (see \citep{ehrgott2005multicriteria}, for example) and $\Theta$ is taken to be $[0,1]$.
To our knowledge, the only other works in which solution procedures are proposed for upLCP having the general form of System \eqref{upLCP} are those of \citet{valiaho1994procedure} and \citet{chakraborty2004solution}. Section 6 of the former work describes various restrictions that can be placed on the structure of $M(\theta)$ in order to guarantee finite execution of the procedures presented therein, but all are more restrictive than our Assumption \ref{asm:sufficient}. Similarly, the methods presented in the latter work require that $M(\theta)$ be a $P$-matrix for all $\theta \in \Theta$, which is again more restrictive than our Assumption \ref{asm:sufficient}. Solution methodology for the multiparametric counterpart to System \eqref{upLCP}, i.e., the case in which $\theta \in \mathbb{R}^k$ for $k > 1$, is presented in \citep{adelgren2021advancing}. However, the techniques presented therein are not directly applicable for scalar $\theta$. This motivates our current work, the remainder of which is organized as follows. The proposed algorithm for solving upLCP is presented in Section \ref{sec:algorithm}. We demonstrate the use of our proposed methodology on a motivating example in Section \ref{sec:example}. In Section \ref{sec:otherProblems} we discuss some classes of optimization problems that can be reformulated as upLCP and can therefore be solved using the methods presented in this work.
Section \ref{sec:implementation/results} contains a brief description of our implementation and the results of a small number of computational tests.
Finally, we give concluding remarks in Section \ref{sec:conclusion}.
\section{Proposed Algorithm}\label{sec:algorithm}
We begin this section with a brief set of definitions. Given an instance of upLCP, as presented in System \eqref{upLCP}, we define the matrix $G(\theta):= \left[ \begin{matrix}
I & -M(\theta)
\end{matrix} \right]$ and the index set $\mathcal{E} := \{1,\dots,2h\}$. Then a set $\B \subset \mathcal{E}$ is called a \emph{basis} if $|\B|=h$. Moreover, a basis $\B$ is \emph{complementary} if $\left|\{i,i+h\}\cap \B \right| = 1$ for each $i \in \{1,\dots,h\}$. Then, given a complementary basis $\B$, its associated \emph{invariancy interval} is the set
\begin{equation}\label{invInterval} \mathcal{II}_{\B} := \left\{ \theta \in \Theta: G(\theta)_{\bullet \B}^{-1} q(\theta) \geq 0 \right\},
\end{equation}
where $G(\theta)_{\bullet \B}$ denotes the matrix comprised of the columns of $G(\theta)$ whose indices are in $\B$. We note here that for a given complementary basis $\B$, $\mathcal{II}_{\B}$ may be the union of disjoint intervals in $\Theta$. To see this, recognize that: (i) for any complementary basis $\B$, the system $G(\theta)_{\bullet \B}^{-1} q(\theta) \geq 0$ can be equivalently written as $\dfrac{Adj\left(G(\theta)_{\bullet \B}\right)}{det\left(G(\theta)_{\bullet \B}\right)} q(\theta) \geq 0$, where $Adj(\cdot)$ and $det(\cdot)$ represent the matrix adjoint and determinant, respectively; and (ii) both the adjoint and determinant of a given matrix can be represented as polynomials in the elements of the matrix. Hence, for each complementary basis $\B$, $\mathcal{II}_{\B}$ is a possibly nonconvex subset of $\Theta$ defined by a set of rational inequalities in $\theta$. Fortunately, this structure can be somewhat improved. By Lemma 2.1 of \citep{adelgren2021advancing} we know that, under Assumption \ref{asm:sufficient}, the sign of $det\left(G(\theta)_{\bullet \B}\right)$ is invariant over $\Theta$ for any complementary basis $\B$. Thus, $\mathcal{II}_{\B}$ can be represented as
\begin{equation}\label{invInterval2} \mathcal{II}_{\B} := \left\{ \theta \in \Theta: s_{\B} Adj(G(\theta)_{\bullet \B}) q(\theta) \geq 0 \right\},
\end{equation}
where $s_{\B}$ represents the sign of $det\left(G(\theta)_{\bullet \B}\right)$ over $\Theta$. Note that in Equation \eqref{invInterval2}, $\mathcal{II}_{\B}$ is given by a system of inequalities that are polynomial in $\theta$. We now provide Algorithm \ref{alg:solveUpLCP}, in which we present a technique for solving upLCP.
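For illustration, the polynomial functions appearing in Equation \eqref{invInterval2} can be generated symbolically. The following is a minimal sketch (assuming \texttt{sympy}; the instance data is hypothetical and this is an illustration rather than our implementation) that builds the entries of $s_{\B} Adj(G(\theta)_{\bullet \B}) q(\theta)$ for a given complementary basis.
\begin{verbatim}
import sympy as sp

theta = sp.symbols('theta')
# Hypothetical 2x2 instance data M(theta) and q(theta).
M = sp.Matrix([[2, -1 + theta], [1 - theta, 3]])
q = sp.Matrix([1 - theta, -2 + 3 * theta])
h = M.rows
G = sp.eye(h).row_join(-M)  # G(theta) = [ I  -M(theta) ]

def invariancy_polynomials(basis, theta_bar=0):
    """Entries of s_B * Adj(G(theta)_B) * q(theta).  `basis` is a set of
    column indices in {0, ..., 2h-1}; s_B is evaluated at theta_bar, which
    should lie in Theta (the sign is invariant over Theta by Assumption 1)."""
    GB = G.extract(list(range(h)), sorted(basis))
    s = sp.sign(GB.det().subs(theta, theta_bar))
    return [sp.expand(s * e) for e in GB.adjugate() * q]

# Columns 0 and 3 of G correspond to the complementary basis {w_1, z_2}.
print(invariancy_polynomials({0, 3}))
\end{verbatim}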
\begin{algorithm}
\small
\caption{\small\textsc{Solve\_upLCP}($\Theta$)~--~Partition the parameter space $\Theta$.\\
\textbf{Input}: The set $\Theta$ as defined in Equation \eqref{eq:paramspace} for an instance of upLCP.\\
\textbf{Output}: A set $\mathcal{P}$ specifying a partition of $\Theta$. Each $\mathcal{I} \in \mathcal{P}$ is a tuple of the form $(\protect\B^*, \alpha^*, \beta^*)$ where $\protect\B^*$ represents a complementary basis and $\alpha^*,\beta^* \in \protect\mathbb{R}$ specify an interval $[\alpha^*,\beta^*] \subseteq \mathcal{II}_{\protect\B^*}$ such that $w(\theta)$ and $z(\theta)$ satisfy System \eqref{upLCP} for all $\theta \in [\alpha^*,\beta^*]$.
}\label{alg:solveUpLCP}
\begin{algorithmic}[1]
\State Let $\mathcal{S} = \{\Theta\}$ and $\mathcal{P} = \emptyset$.
\While{$\mathcal{S} \neq \emptyset$}{ select $[\alpha', \beta']$ from $\mathcal{S}$.}\label{alg:solveUpLCP:line:while}
\State Set $\theta^* = \frac{\alpha' + \beta'}{2}$ and compute a complementary basis $\B^*$ such that $\theta^* \in \mathcal{II}_{\B^*}$.\label{alg:solveUpLCP:line:midpoint}
\State Set $\alpha^*,\beta^* = \textsc{Get\_Extremes}(\B^*, \theta^*, \alpha', \beta')$ and add $(\B^*, \alpha^*,\beta^*)$ to $\mathcal{P}$.\label{alg:solveUpLCP:line:endpoints}
\If{$\alpha' < \alpha^*$}{ add $[\alpha',\alpha^*]$ to $\mathcal{S}$.}\label{alg:solveUpLCP:line:subint1}
\EndIf
\If{$\beta' > \beta^*$}{ add $[\beta^*,\beta']$ to $\mathcal{S}$.}\label{alg:solveUpLCP:line:endwhile}
\EndIf
\EndWhile
\State Return $\mathcal{P}$.
\end{algorithmic}
\end{algorithm}
The majority of the work done in Algorithm \ref{alg:solveUpLCP} is the processing of a set $\mathcal{S}$ of intervals using the while loop contained in lines \ref{alg:solveUpLCP:line:while}--\ref{alg:solveUpLCP:line:endwhile}. On line \ref{alg:solveUpLCP:line:midpoint}, the midpoint $\theta^*$ of the current interval is computed and a complementary basis $\B^*$ is sought for which $\theta^*$ is contained within $\mathcal{II}_{\B^*}$. Such a complementary basis can be found by fixing $\theta$ to $\theta^*$ and solving the resulting non-parametric LCP using, for example, the criss-cross method of \citet{den1993linear}. We note that the criss-cross method is a particularly good choice in this case as it is guaranteed to solve the resulting non-parametric LCP under Assumption \ref{asm:sufficient}. On line \ref{alg:solveUpLCP:line:endpoints}, the subroutine \textsc{Get\_Extremes} is used to compute $\alpha^*,\beta^* \in \mathbb{R}$ that define a subinterval $[\alpha^*,\beta^*]$ of $\mathcal{II}_{\B^*}$ that contains $\theta^*$ and should be included in the final partition of $\Theta$.
Finally, lines \ref{alg:solveUpLCP:line:subint1}--\ref{alg:solveUpLCP:line:endwhile} identify connected subintervals of $[\alpha',\beta']$ over which solutions to System \eqref{upLCP} have yet to be computed, if any exist, and those subintervals are added to the set $\mathcal{S}$.
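For concreteness, the loop structure of Algorithm \ref{alg:solveUpLCP} can be sketched as follows (a minimal Python skeleton, not our implementation; \texttt{solve\_lcp\_at} stands in for the criss-cross method and \texttt{get\_extremes} for the subroutine \textsc{Get\_Extremes} discussed below).
\begin{verbatim}
def solve_uplcp(alpha, beta, solve_lcp_at, get_extremes):
    """Partition [alpha, beta]; return a list of tuples (basis, alpha*, beta*)."""
    stack = [(alpha, beta)]    # the set S, processed last-in-first-out
    partition = []             # the set P
    while stack:
        a, b = stack.pop()
        theta_star = (a + b) / 2.0
        basis = solve_lcp_at(theta_star)   # complementary basis with theta* in II_B
        a_star, b_star = get_extremes(basis, theta_star, a, b)
        partition.append((basis, a_star, b_star))
        if a < a_star:
            stack.append((a, a_star))      # left subinterval not yet explored
        if b_star < b:
            stack.append((b_star, b))      # right subinterval not yet explored
    return partition
\end{verbatim}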
We pause now to discuss the subroutine \textsc{Get\_Extremes} in more detail. There are many ways to implement such a routine, but the strategies employed in our implementation are presented in Algorithm \ref{alg:getExtremes}.
\begin{algorithm}
\small
\caption{\small\textsc{Get\_Extremes}($\protect\B^*,\theta^*,\alpha',\beta'$)~--~Compute extreme values of $\mathcal{II}_{\protect\B}$.\\
\textbf{Input}: A complementary basis $\protect\B^*$, $\theta^* \in \mathcal{II}_{\protect\B^*}$, and $\alpha',\beta' \in \mathbb{R}$.\\
\textbf{Output}: $\alpha^*,\beta^* \in \protect\mathbb{R}$ that specify an interval $[\alpha^*,\beta^*] \subseteq \mathcal{II}_{\protect\B^*}$ such that $\theta^* \in [\alpha^*,\beta^*]$ and $w(\theta)$ and $z(\theta)$ satisfy System \eqref{upLCP} for all $\theta \in [\alpha^*,\beta^*]$.
}\label{alg:getExtremes}
\begin{algorithmic}[1]
\State Set $\alpha^* = \alpha'$ and $\beta^* = \beta'$.
\For{$i \in \B^*$}\label{alg:getExtremes:lineFor}
\If{$degree\left(\left(Adj(G(\theta)_{\bullet \B})\right)_{i\bullet} q(\theta)\right) > 0$}
\State Let $\mathcal{R} = \textsc{Get\_Real\_Roots}\left(\left(Adj(G(\theta)_{\bullet \B})\right)_{i\bullet} q(\theta)\right)$.\label{alg:getExtremes:lineGetRoots}
\For{$r \in \mathcal{R}$}\label{alg:getExtremes:lineFor2}
\If{$multiplicity(r)$ is odd}{}\label{alg:getExtremes:lineMult}
\If{$r \in (\alpha^*,\theta^*)$}{ set $\alpha^* = r$.}\label{alg:getExtremes:lineLeft}
\ElsIf{$r \in (\theta^*,\beta^*)$}{ set $\beta^* = r$.}\label{alg:getExtremes:lineRight}
\ElsIf{$r == \alpha^*$ OR $r == \beta^*$}{}\label{alg:getExtremes:lineEqual}
\If{$\left.\frac{d}{d\theta}\left(s_{\B} \left(Adj(G(\theta)_{\bullet \B})\right)_{i\bullet} q(\theta) \right)\right|_{\theta = \theta^*} > 0$}{ set $\alpha^* = r$.}
\Else{ set $\beta^* = r$.}\label{alg:getExtremes:lineRight2}
\EndIf
\EndIf
\EndIf
\EndFor
\EndIf
\EndFor
\State Return $\alpha^*,\beta^*$.
\end{algorithmic}
\end{algorithm}
At a high level, the work done in Algorithm \ref{alg:getExtremes} seeks to find the largest connected interval that is a subset of $\mathcal{II}_{\B^*}$ and contains $\theta^*$. To do this, we examine the roots of the nonconstant polynomial functions that serve as the boundaries of $\mathcal{II}_{\B^*}$ (lines \ref{alg:getExtremes:lineFor}--\ref{alg:getExtremes:lineGetRoots}). We note that any polynomial root finder capable of determining root multiplicity can be used for the subroutine \textsc{Get\_Real\_Roots} (see, for example, \citep{rouillier2004efficient}). As roots of even multiplicity cannot be restrictive, we need only consider roots of odd multiplicity (lines \ref{alg:getExtremes:lineFor2}--\ref{alg:getExtremes:lineMult}). Since we have $\theta^* \in \mathcal{II}_{\B^*}$, we know that if no root occurs directly at $\theta^*$, then the endpoints of the interval of interest are: (i) the root closest to $\theta^*$ on the left, and (ii) the root closest to $\theta^*$ on the right (lines \ref{alg:getExtremes:lineLeft}--\ref{alg:getExtremes:lineRight}). If a root does occur directly at $\theta^*$, however, then $\theta^*$ serves as one of the endpoints of the interval of interest, and we must determine the direction in which the function associated with that root increases in order to determine which endpoint it is (lines \ref{alg:getExtremes:lineEqual}--\ref{alg:getExtremes:lineRight2}). For this purpose, we use the sign of the derivative of the associated function, evaluated at $\theta^*$. Since the defining constraints of $\mathcal{II}_{\B^*}$ are given as greater-than-or-equal-to constraints, we know that if the aforementioned sign is positive, $\theta^*$ is the left endpoint of the interval of interest. Otherwise, it is the right endpoint. We note that it is possible that two functions defining boundaries of $\mathcal{II}_{\B^*}$ each have roots at $\theta^*$ and, moreover, the derivatives of the two functions, evaluated at $\theta^*$, may have opposite signs. In this case, the interval of interest reduces to a singleton at $\theta^*$ and need not be included in the partition of $\Theta$. Hence, such intervals can be rejected upon discovery or removed from the partition in a post-processing phase.
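The root-processing logic of Algorithm \ref{alg:getExtremes} can be sketched as follows (a minimal Python sketch assuming \texttt{sympy} for exact real-root isolation; our implementation instead relies on Pari/GP for this task, see Section \ref{sec:implementation/results}, and the tolerance \texttt{tol} used to detect a root lying at $\theta^*$ is a detail of this sketch only).
\begin{verbatim}
import sympy as sp

theta = sp.symbols('theta')

def get_extremes(polys, theta_star, alpha, beta, tol=1e-9):
    """polys: entries of s_B * Adj(G(theta)_B) * q(theta) as sympy expressions."""
    a_star, b_star = alpha, beta
    for f in polys:
        if sp.Poly(f, theta).degree() < 1:
            continue                            # constant boundary functions impose no limit
        roots = sp.real_roots(f, theta)         # real roots, repeated by multiplicity
        for r in set(roots):
            if roots.count(r) % 2 == 0:
                continue                        # even multiplicity: not restrictive
            rf = float(r)
            if abs(rf - theta_star) <= tol:
                # Root at theta*: the derivative sign decides which endpoint theta* is.
                if sp.diff(f, theta).subs(theta, theta_star) > 0:
                    a_star = rf
                else:
                    b_star = rf
            elif a_star < rf < theta_star:
                a_star = rf                     # closest restricting root to the left so far
            elif theta_star < rf < b_star:
                b_star = rf                     # closest restricting root to the right so far
    return a_star, b_star
\end{verbatim}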
We finish this section by stating a small number of theoretical results related to the methods proposed in Algorithms \ref{alg:solveUpLCP}--\ref{alg:getExtremes}. For the sake of space, we do not include proofs for these results -- though the proofs are not challenging -- and, as such, we state them as observations.
\begin{observation}\label{obs:multiple}
For a given complementary basis $\B$, the final partition $\mathcal{P}$ of $\Theta$ may contain more than one connected subinterval of $\mathcal{II}_{\B}$.
\end{observation}
\begin{observation}
Under Assumption \ref{asm:sufficient}, for a given complementary basis $\B$, the number of connected subintervals of $\mathcal{II}_{\B}$ present in the final partition $\mathcal{P}$ of $\Theta$ can be at most $n-h$, where $n$ is the number of unique roots of odd multiplicity of the functions $Adj(G(\theta)_{\bullet \B}) q(\theta)$ and $h = |\B|$.
\end{observation}
\begin{observation}
Under Assumptions \ref{asm:sufficient}--\ref{asm:feasible}, Algorithm \ref{alg:solveUpLCP} will complete in a finite number of iterations.
\end{observation}
\section{Motivating Example}\label{sec:example}
At the start of Chapter 1 of \citep{murty1997linear}, the authors demonstrate that an instance of LCP with $M= \left[\begin{array}{rr}
2 & -1\\
1 & 3\\
\end{array} \right]$ will have relatively nice properties. We use this as a starting point and consider the following instance of mpLCP:
\begin{equation}\label{eq:ex2}
\begin{array}{c}
w - \left[\begin{array}{cc}
2 & -1 + \theta_2\\
1 - \theta_1 & 3\\
\end{array} \right]z = \left[\begin{array}{c}
1 - \theta_1\\
-2 + 3\theta_2\\
\end{array} \right]\\[1mm]
w^\top z = 0\\[1mm]
w,z \geq 0
\end{array}
\end{equation}
We assume here that $(\theta_1, \theta_2) \in [-2,2]^2$ and note that it is straightforward to verify that $M(\theta) = \left[\begin{array}{cc}
2 & -1 + \theta_2\\
1 - \theta_1 & 3\\
\end{array} \right]$ is sufficient for all $(\theta_1, \theta_2) \in [-2,2]^2$ using Theorem 4.3 of \citep{valiaho1996criteria}. We note that beginning with an mpLCP rather than a upLCP allows us to visualize some of the challenges associated with upLCP. We compute the parametric solution to System \eqref{eq:ex2} using the techniques proposed in \citep{adelgren2021advancing}. The final partition of $\Theta$ is depicted in Figure \ref{fig:ex2} and the complementary bases associated with each region, as well as the parametric solutions associated with each basis, are provided in Table \ref{tab:ex2}.
\begin{figure}[t]
\caption{Regions over which the computed parametric solutions are valid.}\label{fig:ex2}
\centering
\includegraphics[width=70mm]{Figure1}
\end{figure}
\begin{table}
\caption{Parametric solutions to System \eqref{eq:ex2}.}\label{tab:ex2}
\centering
\begin{tabular}{ll}
\hline
Region & Basic Variables and\\
& Associated Values\\
\hline
\hline
I & $w_1 = 1 - \theta_1$\\
& $w_2 = 3\theta_2 - 2$\\
\hline
II & $w_1 = -\theta_1 - \theta_2^2 + \frac{5}{3}\theta_2 + \frac{1}{3}$\\
& $z_2 = \frac{2}{3} - \theta_2$\\
\hline
III & $z_1 = \frac{3\theta_1 + 3\theta_2^2 - 5\theta_2 - 1}{\theta_1\theta_2 - \theta_1 - \theta_2 + 7}$\\
& $z_2 = \frac{\theta_1^2 - 2\theta_1 - 6\theta_2 + 5}{\theta_1\theta_2 - \theta_1 - \theta_2 + 7}$\\
\hline
IV & $z_1 = \frac{1}{2}\theta_1 - \frac{1}{2}$\\
& $w_2 = -\frac{1}{2}\theta_1^2 + \theta_1 + 3\theta_2 - \frac{5}{2}$\\
\hline
\end{tabular}
\end{table}
We note that, in an abuse of notation, we identify a basis in Table \ref{tab:ex2} by its associated basic variables, rather than the elements of $\mathcal{E}$ that it contains. Additionally, variables having zero value are omitted from Table \ref{tab:ex2}. The same convention for identifying complementary bases and omitting variables with zero value is used throughout the remainder of this work.
We now reduce System \eqref{eq:ex2} to an instance of upLCP by fixing $\theta_2 = \frac{1}{2}\theta_1$ (depicted in Figure \ref{fig:ex2} as a dotted black line) and letting $\Theta = [-2,2]$. In observing Figure \ref{fig:ex2}, we note that this reduction will result in a realization of the scenario outlined in Observation \ref{obs:multiple}. Namely, our final partition of $\Theta$ will contain more than one interval associated with a single complementary basis. We believe that situations like this have been one of the primary barriers to the development of methods for solving upLCP as specified by System \eqref{upLCP}. We now demonstrate how the techniques outlined in Algorithms \ref{alg:solveUpLCP}--\ref{alg:getExtremes} can be used to solve this instance of upLCP. We begin by setting $\mathcal{S} = \{[-2,2]\}$ and then iterate through the while loop on line \ref{alg:solveUpLCP:line:while} of Algorithm \ref{alg:solveUpLCP}. Note that in the following discussion we process elements from the set $\mathcal{S}$ using a last-in-first-out strategy.\\
\noindent \textbf{Iteration 1}:
\begin{quote}
Here $[\alpha',\beta'] = [-2,2]$ and we set $\theta^* = 0$. Using the criss-cross method, we find that an optimal basis at $\theta^*$ is $\B_1 = \{w_1 = -\frac{1}{4}\theta_1^2 - \frac{1}{6}\theta_1 + \frac{1}{3}, z_2 = -\frac{1}{2}\theta_1 + \frac{2}{3}\}$. We now enter Algorithm \ref{alg:getExtremes} via the routine \textsc{Get\_Extremes}. The real roots associated with the function representing $w_1$ are approximately $-1.535$ and $0.869$, whereas the function associated with $z_2$ has only one real root, approximately $1.333$. Of these roots, $-1.535$ is closest to $\theta^*$ on the left and $0.869$ is closest on the right. As such, these values are returned from \textsc{Get\_Extremes} as $\alpha^*$ and $\beta^*$, respectively. On lines \ref{alg:solveUpLCP:line:subint1}--\ref{alg:solveUpLCP:line:endwhile} we now add $[-2,-1.535]$ and $[0.869,2]$ to $\mathcal{S}$ and proceed to Iteration 2.
\end{quote}
\noindent \textbf{Iteration 2}:
\begin{quote}
For this and subsequent iterations, we omit details that are analogous to those of Iteration 1. We note that here we have $[\alpha',\beta'] = [0.869,2]$ and thus $\theta^* = 1.4345$. An optimal basis at this point is $\B_2 = \{z_1 = \frac{1}{2}\theta_1 - \frac{1}{2}, w_2 = -\frac{1}{2}\theta_1^2 + \frac{5}{2}\theta_1 - \frac{5}{2} \}$ and respective sets of approximate real roots for these functions are $\{1\}$ and $\{1.382,3.618\}$. Hence, we set $\alpha^* = 1.382$ and leave $\beta^*$ as $2$. The only new interval added to $\mathcal{S}$ is then $[0.869,1.382]$.
\end{quote}
\noindent \textbf{Iteration 3}:
\begin{quote}
We now have $[\alpha',\beta'] = [0.869,1.382]$ and $\theta^* = 1.1255$. An optimal basis here is $\B_3 = \{z_1 = \frac{-3\theta_1^2 - 2\theta_1 + 4}{-2\theta_1^2 + 6\theta_1 - 28}, z_2 = \frac{-2\theta_1^2 + 10\theta_1 - 10}{-\theta_1^2 + 3\theta_1 - 14} \}$ and respective sets of approximate real roots for these functions are $\{0.869,-1.535\}$ and $\{1.382,3.618\}$. As a result, $\alpha^*$ and $\beta^*$ are left as $0.869$ and $1.382$, respectively, and no intervals are added to $\mathcal{S}$.
\end{quote}
\noindent \textbf{Iteration 4}:
\begin{quote}
Finally, we have $[\alpha',\beta'] = [-2,-1.535]$ and $\theta^* = -1.7675$. An optimal basis at this point is $\B_3$. Thus, using the previously computed sets of approximate real roots, we see that $\alpha^*$ and $\beta^*$ are left as $-2$ and $-1.535$, respectively, and no intervals are added to $\mathcal{S}$.
\end{quote}
\noindent We have now completed the execution of Algorithm \ref{alg:solveUpLCP}. The final partition of $\Theta$ consists of the four intervals $[-1.535,0.869]$, $[1.382,2]$, $[0.869,1.382]$, and $[-2,-1.535]$. The parametric solutions that are valid over these intervals are those given in the respective descriptions of $\B_1, \B_2, \B_3$ and (again) $\B_3$ above. It is straightforward to verify that this solution matches the one given in Table \ref{tab:ex2} when $\theta_2$ is fixed to $\frac{1}{2}\theta_1$. Recognize, though, that the representation of $w_2$ in the solution associated with Region I in Table \ref{tab:ex2} reduces to $w_2 = \frac{3}{2}\theta_1 - 2$. Hence, as there exists no $\theta_1 \in [-2,2]$ for which both $w_1 = 1-\theta_1$ and $w_2 = \frac{3}{2}\theta_1 - 2$ are non-negative, the complementary basis $\B' = \{w_1,w_2\}$ is not associated with any interval in the final partition of $\Theta$ when solving the reduced instance of upLCP.
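To make the walkthrough above concrete, the following minimal check (assuming \texttt{numpy}; it is not part of our implementation) numerically verifies that the parametric solution associated with $\B_1$ satisfies System \eqref{upLCP} over $[-1.535, 0.869]$ once $\theta_2$ is fixed to $\frac{1}{2}\theta_1$. The interval endpoints used below are the exact roots $(-1 \pm \sqrt{13})/3$ of the function representing $w_1$.
\begin{verbatim}
import numpy as np

a = (-1.0 - np.sqrt(13.0)) / 3.0   # ~ -1.535
b = (-1.0 + np.sqrt(13.0)) / 3.0   # ~  0.869
for t in np.linspace(a, b, 201):
    M = np.array([[2.0, -1.0 + t / 2.0], [1.0 - t, 3.0]])    # M(theta) with theta_2 = theta_1 / 2
    q = np.array([1.0 - t, -2.0 + 1.5 * t])
    w = np.array([-t**2 / 4.0 - t / 6.0 + 1.0 / 3.0, 0.0])   # w_1 basic, w_2 = 0
    z = np.array([0.0, -t / 2.0 + 2.0 / 3.0])                # z_2 basic, z_1 = 0
    assert np.allclose(w - M @ z, q)                         # w - M(theta) z = q(theta)
    assert abs(w @ z) < 1e-12                                # complementarity
    assert (w >= -1e-9).all() and (z >= -1e-9).all()         # non-negativity
\end{verbatim}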
\section{Applicability to Other Classes of Problems}\label{sec:otherProblems}
It is well known that LCP arises naturally as the system resulting from applying the Karush-Kuhn-Tucker (KKT) optimality conditions to quadratic programs (QP). Moreover, as a solution to the KKT system is guaranteed to give an optimal solution for any convex QP, the methods presented herein are directly applicable to the uni-parametric form of QP given by
\begin{align}
\min_x\quad & \frac{1}{2} x^\top Q(\theta) x + c(\theta)^\top x\nonumber\\
\text{s.t.}\quad & A(\theta) x \leq b(\theta)\label{prob:QP}\\
& x \geq 0\nonumber\\
& \theta \in \Theta\nonumber
\end{align}
whenever $Q(\theta)$ is positive semidefinite (i.e., the objective is convex) for all $\theta \in \Theta$, with $\Theta$ defined as in Equation \eqref{eq:paramspace}. We note that this implies that the methods presented herein are also applicable to: (i) biobjective problems having linear and/or convex quadratic objectives and linear constraints (via weighted-sum scalarization), and (ii) uni-parametric linear programs (by replacing $Q(\theta)$ with the zero matrix in Problem \eqref{prob:QP}). To our knowledge, the only other works that propose solution strategies for Problem \eqref{prob:QP} under assumptions similar to those implied by our Assumption \ref{asm:sufficient} are those of \citet{ritter1962verfahren}, \citet{valiaho1985unified}, and \citet{jonker2001one}.
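For reference, one standard KKT-based construction of the upLCP corresponding to Problem \eqref{prob:QP} is sketched below (assuming \texttt{sympy}; the instance data is hypothetical, and this sketch is for illustration rather than a description of our implementation).
\begin{verbatim}
import sympy as sp

theta = sp.symbols('theta')

def upqp_to_uplcp(Q, c, A, b):
    """Return M(theta), q(theta) of the upLCP equivalent to the upQP
    min 0.5 x'Qx + c'x  s.t.  Ax <= b, x >= 0, with z = (x, y) and
    w = (dual slacks of x >= 0, primal slacks of Ax <= b)."""
    m, n = A.shape
    M = sp.Matrix.vstack(Q.row_join(A.T), (-A).row_join(sp.zeros(m, m)))
    q = sp.Matrix.vstack(c, b)
    return M, q

# Hypothetical instance with two variables and one constraint;
# Q(theta) is positive semidefinite for theta in [0, 1], say.
Qm = sp.Matrix([[2, 0], [0, 2 + theta]])
cv = sp.Matrix([-1, -1 + theta])
Am = sp.Matrix([[1, 1]])
bv = sp.Matrix([2])
M, q = upqp_to_uplcp(Qm, cv, Am, bv)
\end{verbatim}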
\section{Computational Results}\label{sec:implementation/results}
Using the Python programming language, we develop an implementation of the methods described in Algorithms \ref{alg:solveUpLCP}--\ref{alg:getExtremes} in which the processing of set $\mathcal{S}$ on line \ref{alg:solveUpLCP:line:while} of Algorithm \ref{alg:solveUpLCP} is performed in parallel, when desired. Interested readers may obtain the code from \url{https://github.com/Nadelgren/upLCP_solver}. The code is written so that the user may specify any one of three types of problems: (i) upLCP, (ii) upQP, or (iii) upLP. Problems given in the form of upQP or upLP are converted to upLCP prior to utilization of the methods proposed herein, but the computed solution is ultimately provided in the context of the originally presented problem. We note that solutions for problems in the form of upQP or upLP contain not only parametric values for the problem's original decision variables, but also for slack variables for each constraint and dual variables for each constraint and non-negativity restriction.
As suggested in Section \ref{sec:algorithm}, our implementation utilizes the criss-cross method of \citet{den1993linear} to compute complementary bases on line \ref{alg:solveUpLCP:line:midpoint} of Algorithm \ref{alg:solveUpLCP}. Additionally, we employ the computer algebra system Pari/GP 2.14.0 \citep{PARI2} via the Python library CyPari2 for all symbolic algebra needed to compute and process (e.g., calculate roots, take derivatives, etc.) the polynomial functions that define each invariancy interval.
We now present numerical results for two classes of problems. Instances of the first class are obtained from biobjective QPs having convex objectives that are scalarized using the weighted-sum approach and then reformulated as upLCP. We refer to these instances as \texttt{boQP} instances. Instances of the second class are obtained using some of the techniques outlined by \citet{illes2018generating}. We refer to these instances as \texttt{sufLCP} instances. All instances, together with a detailed description of the specific techniques used for their generation, are provided along with the code at the above-referenced URL.
All tests were conducted on a machine running Linux Mint 20.0 with a 1.2 GHz Intel i3-1005G1 CPU and 12GB of RAM. When solving each instance of upLCP, we permitted the processing of set $\mathcal{S}$ to be executed in parallel, using four threads. In all, we generated five \texttt{boQP} instances for each value of $h \in \{50,75,100,125\}$ and five \texttt{sufLCP} instances for each value of $h \in \{50,75,100,125, 150, 175\}$. The CPU time required to solve each \texttt{boQP} instance, as well as the number of invariancy intervals present in the final partition for each instance, is presented in Table \ref{tab:boqp}. Analogous data is provided for \texttt{sufLCP} instances in Table \ref{tab:suflcp}.
\begin{table} \small
\centering
\caption{Results for \texttt{boQP} Instances}\label{tab:boqp}
\begin{tabular}{rrrr|rrrr}
Instance & & CPU & \# of & Instance & & CPU & \# of\\
Size ($h$) & \# & Time (s) & Intervals & Size ($h$) & \# & Time (s) & Intervals\\
\hline
\hline
50 & 1 & 18.61 & 15 & 100 & 1 & 915.22 & 33\\
& 2 & 24.90 & 13 & & 2 & 1041.95 & 38\\
& 3 & 51.05 & 30 & & 3 & 1377.16 & 39\\
& 4 & 18.17 & 22 & & 4 & 806.51 & 27\\
& 5 & 8.07 & 11 & & 5 & 1543.52 & 44\\
\hline
75 & 1 & 66.69 & 17 & 125 & 1 & 2813.07 & 38\\
& 2 & 185.42 & 27 & & 2 & 4463.14 & 51 \\
& 3 & 327.48 & 23 & & 3 & 2137.46 & 45 \\
& 4 & 266.87 & 26 & & 4 & 705.88 & 5 \\
& 5 & 222.91 & 24 & & 5 & 1429.14 & 31 \\
\hline
\end{tabular}
\end{table}
\begin{table} \small
\centering
\caption{Results for \texttt{sufLCP} Instances}\label{tab:suflcp}
\begin{tabular}{rrrr|rrrr}
Instance & & CPU & \# of & Instance & & CPU & \# of\\
Size ($h$) & \# & Time (s) & Intervals & Size ($h$) & \# & Time (s) & Intervals\\
\hline
\hline
50 & 1 & 1.42 & 15 & 125 & 1 & 496.97 & 37\\
& 2 & 3.34 & 11 & & 2 & 303.12 & 25\\
& 3 & 5.73 & 17 & & 3 & 64.05 & 44\\
& 4 & 2.81 & 10 & & 4 & 169.82 & 29\\
& 5 & 3.94 & 6 & & 5 & 283.03 & 35\\
\hline
75 & 1 & 19.88 & 20 & 150 & 1 & 369.89 & 42\\
& 2 & 27.35 & 22 & & 2 & 814.69 & 48 \\
& 3 & 39.17 & 52 & & 3 & 96.95 & 34\\
& 4 & 6.23 & 9 & & 4 & 597.42 & 61 \\
& 5 & 26.41 & 115 & & 5 & 442.07 & 37\\
\hline
100 & 1 & 80.94 & 23 & 175 & 1 & 1947.01 & 47\\
& 2 & 15.74 & 16 & & 2 & 2066.37 & 42\\
& 3 & 250.15 & 37 & & 3 & 1083.17 & 42 \\
& 4 & 22.27 & 14 & & 4 & 2228.88 & 54 \\
& 5 & 41.50 & 18 & & 5 & 2062.75 & 40 \\
\hline
\end{tabular}
\end{table}
As expected, for both classes of problems the required CPU time and the number of intervals present in the final solution increase as problem size increases.
\section{Conclusion}\label{sec:conclusion}
We have presented a new method for solving upLCP, upQP, and upLP that is capable of solving classes of problems that could not be solved by any previously known uni-parametric methodology from the literature. Moreover, we have demonstrated empirically that our proposed technique can be used to solve relatively large instances in reasonable time.
\bibliographystyle{plainnat}
\bibliography{/home/nate/Dropbox/citations}
\end{document}
\begin{document}
\title[On the metric theory of approximations by reduced fractions]{On the metric theory of approximations by reduced fractions: a quantitative Koukoulopoulos--Maynard theorem}
\author{Christoph Aistleitner}
\author{Bence Borda}
\author{Manuel Hauke}
\address{Graz University of Technology, Institute of Analysis and Number Theory, Steyrergasse 30/II, 8010 Graz, Austria}
\email{[email protected]}
\email{[email protected]}
\email{[email protected]}
\subjclass[2020]{Primary 11J83; Secondary 11A05, 11J04, 11K60}
\keywords{Diophantine approximation, metric number theory, Duffin--Schaeffer conjecture, Koukoulopoulos--Maynard theorem}
\renewcommand{\labelenumi}{\alph{enumi})}
\newcommand{\mods}[1]{\,(\mathrm{mod}\,{#1})}
\begin{abstract}
Let $\psi: \mathbb{N} \to [0,1/2]$ be given. The Duffin--Schaeffer conjecture, recently resolved by Koukoulopoulos and Maynard, asserts that for almost all reals $\alpha$ there are infinitely many coprime solutions $(p,q)$ to the inequality $|\alpha - p/q| < \psi(q)/q$, provided that the series $\sum_{q=1}^\infty \varphi(q) \psi(q) / q$ is divergent. In the present paper, we establish a quantitative version of this result, by showing that for almost all $\alpha$ the number of coprime solutions $(p,q)$, subject to $q \leq Q$, is of asymptotic order $\sum_{q=1}^Q 2 \varphi(q) \psi(q) / q$. The proof relies on the method of GCD graphs as invented by Koukoulopoulos and Maynard, together with a refined overlap estimate coming from sieve theory, and number-theoretic input on the ``anatomy of integers''. The key phenomenon is that the system of approximation sets exhibits ``asymptotic independence on average'' as the total mass of the set system increases.
\end{abstract}
\maketitle
\section{Introduction and statement of results}
A foundational result in Diophantine approximation is Dirichlet's approximation theorem, which asserts that for every real number $\alpha$ there are infinitely many coprime solutions $(p,q)$ to the inequality
\begin{equation} \label{diri}
\left| \alpha - \frac{p}{q} \right| < \frac{1}{q^2}.
\end{equation}
It is well-known that this result is optimal up to constant factors for numbers $\alpha$ whose partial quotients in the continued fraction representation are bounded (so-called badly approximable numbers). Metric number theory asks to what extent \eqref{diri} can be improved for typical reals $\alpha$, in the sense that the exceptional set has vanishing Lebesgue measure.
One of the fundamental results of metric Diophantine approximation is Khintchine's theorem~\cite{khin}. Let $\psi(q)$ be a non-negative sequence, and suppose that $q \psi(q)$ is non-increasing. Then the inequality
\begin{equation} \label{khin}
\left| \alpha - \frac{p}{q} \right| < \frac{\psi(q)}{q}
\end{equation}
has infinitely many integer solutions $(p,q)$ for almost all real numbers $\alpha$, provided that the series $\sum_{q=1}^\infty \psi(q)$ diverges. In contrast, inequality \eqref{khin} has only finitely many solutions for almost all $\alpha$ if this series converges. Very roughly speaking, this says that for typical reals the Dirichlet approximation theorem can be improved by a factor of logarithmic order. By periodicity, it is sufficient to consider $\alpha \in [0,1]$. It can easily be seen that Khintchine's theorem addresses the question whether the set system
$$
\bigcup_{p=0}^q \left( \frac{p}{q} - \frac{\psi(q)}{q}, \frac{p}{q} + \frac{\psi(q)}{q} \right) \cap [0,1], \qquad q=1,2,\dots,
$$
contains a given real $\alpha$ for infinitely resp.\ only finitely many values of $q$. If we assume that $\psi(q) \leq 1/2$ (as we will throughout this paper, to avoid degenerate situations), then the measure of such a set is exactly $2\psi(q)$. Thus the ``only finitely many'' part of Khintchine's theorem is a straightforward application of the convergence part of the Borel--Cantelli lemma. The ``infinitely many'' part of the theorem, however, is much more delicate since the divergence part of the Borel--Cantelli lemma requires some form of stochastic independence. The purpose of the monotonicity condition in the statement of Khintchine's theorem is to guarantee this stochastic independence property of the set system.
Duffin and Schaeffer~\cite{duff} showed that Khintchine's theorem generally fails without the monotonicity condition. More precisely, they constructed a function $\psi$ which is supported on a set of very smooth integers (having a large number of small prime factors), such that $\sum_{q=1}^{\infty} \psi(q)$ diverges, but for almost all $\alpha$ there are only finitely many solutions to \eqref{khin}. From a probabilistic perspective, the counterexample of Duffin and Schaeffer exploits the lack of stochastic independence in the set system, by constructing a special configuration where the overlaps between different sets of the system are too large; the crucial point here is that a fraction $p/q$ can have many different representations as a quotient of integers (as long as non-reduced representations are allowed), and thus may appear in many different elements of the set system.
Duffin and Schaeffer suggested that this lack of independence could be overcome by switching to the coprime setting. More precisely, the Duffin--Schaeffer conjecture asserted that for almost all $\alpha$ there are infinitely many \emph{coprime} solutions $(p,q)$ to \eqref{khin} if and only if the series $\sum_{q=1}^\infty \varphi(q) \psi(q)/q$ diverges, where $\varphi$ denotes the Euler totient function. Let
\begin{equation} \label{eq_def}
\mathcal{A}_q := \bigcup_{\substack{1 \leq p \leq q,\\ \gcd(p,q)=1}} \left( \frac{p}{q} - \frac{\psi(q)}{q}, \frac{p}{q} + \frac{\psi(q)}{q} \right), \qquad q=1,2,\ldots .
\end{equation}
Then, writing $\lambda$ for the Lebesgue measure and again assuming that $\psi(q) \leq 1/2$ for all $q$, we have
$$
\lambda (\mathcal{A}_q ) = \frac{2 \varphi(q) \psi(q)}{q}.
$$
Thus the ``only finitely many'' part of the Duffin--Schaeffer conjecture is again a direct consequence of the convergence part of the Borel--Cantelli lemma. However, the divergence part of the Duffin--Schaeffer conjecture has resisted a resolution for many decades. After important contributions of Gallagher~\cite{gall}, Erd\H os~\cite{erd}, Vaaler~\cite{vaal}, Pollington and Vaughan~\cite{pv}, and Beresnevich and Velani~\cite{ber_ve}, the Duffin--Schaeffer conjecture was finally solved in full generality by Koukoulopoulos and Maynard~\cite{km} in 2020. Their argument relies on an ingenious construction of what they call ``GCD graphs''. This allows them to implement a step-by-step quality increment strategy until they finally arrive at a situation where they can completely control the divisor structure which is at the heart of the problem. The final, number-theoretic input, is an ``anatomy of integers'' statement that quantifies the observation that there are only few integers that have many small prime factors.
In the present paper, we prove a quantitative version of the Koukoulopoulos--Maynard theorem. Their result states that there are infinitely many coprime solutions to \eqref{khin} for almost all $\alpha$ if the sum of measures diverges. We show that for almost all $\alpha$ the number of solutions in fact grows proportionally to the sum of measures.
\begin{thm} \label{th1}
Let $\psi:~\mathbb{N} \to [0,1/2]$ be a function such that $\sum_{q=1}^{\infty} \frac{\varphi(q) \psi(q)}{q}=\infty$. Write $S(Q)=S(Q,\alpha)$ for the number of coprime solutions $(p,q)$ to the inequality
$$
\left| \alpha - \frac{p}{q} \right| < \frac{\psi(q)}{q}, \qquad \text{subject to $q \leq Q$},
$$
and let
\begin{equation} \label{psiq_def}
\Psi(Q) = \sum_{q=1}^Q \frac{2 \varphi(q) \psi(q)}{q}.
\end{equation}
Let $C>0$ be arbitrary. Then for almost all $\alpha$,
$$
S(Q) = \Psi(Q) \left( 1 + O \left(\frac{1}{(\log \Psi(Q))^{C}}\right) \right) \qquad \text{as } Q \to \infty .
$$
\end{thm}
It is not clear to what extent the error term in the theorem can be improved. It seems to us that any result which contains a power saving, i.e.\ has a multiplicative error of order $(1 + O(\Psi(Q)^{-\varepsilon}))$ for some $\varepsilon>0$, would require a substantial improvement of the argument in the present paper. By analogy with other results from metric number theory it is reasonable to assume that Theorem~\ref{th1} actually holds with an error term $(1 + O(\Psi(Q)^{-1/2+\varepsilon}))$ for any $\varepsilon>0$, and probably even $(1 + O(\Psi(Q)^{-1/2} (\log \Psi(Q))^c))$ for some appropriate $c$. We note in passing that very precise metric estimates for the asymptotic order of $S(Q)$ are known when an extra monotonicity assumption is imposed upon $\psi$, in the spirit of Khintchine's original result; see for example Chapter 3 of~\cite{phil} and Chapter 4 of~\cite{harman}. However, from a technical perspective, the problem is of a very different nature when this extra monotonicity assumption is made. The results for the monotonic case imply as a corollary that Theorem~\ref{th1} above cannot hold in general with a multiplicative error of order $(1 + O(\Psi(Q)^{-1/2}))$ or less.
The key problem in the metric theory of approximations by reduced fractions is to control the measure of the overlaps $\mathcal{A}_q \cap \mathcal{A}_r$ in some averaged sense. Pairwise independence $\lambda (\mathcal{A}_q \cap \mathcal{A}_r) = \lambda(\mathcal{A}_q) \lambda(\mathcal{A}_r)$ would allow a direct application of the second Borel--Cantelli lemma, but it turns out that $\lambda(\mathcal{A}_q \cap \mathcal{A}_r)$ can exceed $\lambda(\mathcal{A}_q) \lambda(\mathcal{A}_r)$ by a factor as large as $\log \log (qr)$ for some configurations of $q,r,\psi$. Such an exceedingly large overlap can happen if there are many small prime factors dividing $q$ but not dividing $r$, or vice versa, and if simultaneously the greatest common divisor of $q$ and $r$ lies in a certain critical range (which is determined by the values of $\psi(q)$ and $\psi(r)$). The crucial point then is to show that such large extra factors appear only for a small number of pairs $q,r$. Consider the quotient
$$
\frac{\sum_{q,r \leq Q} \lambda( \mathcal{A}_q \cap \mathcal{A}_r)}{\left( \sum_{q=1}^Q \lambda(\mathcal{A}_q) \right)^2} = \frac{\sum_{q,r \leq Q} \lambda( \mathcal{A}_q \cap \mathcal{A}_r)}{\Psi(Q)^2}.
$$
Without imposing an absolute lower bound on $\Psi(Q)$, this quotient can be arbitrarily large. The main breakthrough of Koukoulopoulos and Maynard was to prove that
\begin{equation}\label{KMtechnical}
\frac{\sum_{q,r \leq Q} \lambda( \mathcal{A}_q \cap \mathcal{A}_r)}{\Psi(Q)^2} \ll 1 \qquad \text{provided that} \quad \Psi(Q) \geq 1.
\end{equation}
This property is called \emph{quasi-independence on average}, and is sufficient for an application of the second Borel--Cantelli lemma (in the Erd\H os--R\'enyi formulation of the lemma) -- cf.\ \cite{bv}. In the present paper, we show that even more is true: we have
$$
\frac{\sum_{q,r \leq Q} \lambda( \mathcal{A}_q \cap \mathcal{A}_r)}{\Psi(Q)^2} \to 1 \qquad \text{as} \quad \Psi(Q) \to \infty.
$$
Thus the set system $(\mathcal{A}_q)_{q \geq 1}$ moves towards pairwise independence on average as the total mass of the set system (the sum of measures of the approximation sets) tends towards infinity. Since we consider this fact, which is the key ingredient in our proof of Theorem~\ref{th1}, to be very interesting in its own right, we state it below as a separate theorem.
\begin{thm} \label{th2}
Let $\psi:~\mathbb{N} \to [0,1/2]$ be a function. Let the sets $\mathcal{A}_q$, $q=1,2,\dots$, be defined as in \eqref{eq_def}, and let $\Psi(Q)$ be defined as in \eqref{psiq_def}. Let $C>0$ be arbitrary. For any $Q \in \mathbb{N}$ such that $\Psi(Q) \ge 2$, we have
$$
\sum_{q,r \leq Q} \lambda( \mathcal{A}_q \cap \mathcal{A}_r) - \Psi(Q)^2 = O \left(\frac{\Psi(Q)^2}{(\log \Psi(Q))^{C}} \right)
$$
with an implied constant depending only on $C$.
\end{thm}
The rest of this paper is organized as follows. In Section~\ref{sec_2}, we show how Theorem~\ref{th2} implies Theorem~\ref{th1}. The following seven sections are concerned with the proof of Theorem~\ref{th2}. Section~\ref{sec_3} contains an estimate for the measure of the overlap $\mathcal{A}_q \cap \mathcal{A}_r$ for given $q$ and $r$. This estimate exploits information on the divisor structure of $q$ and $r$ in order to bound the difference between $\lambda \left(\mathcal{A}_q \cap \mathcal{A}_r\right)$ and $\lambda (\mathcal{A}_q) \lambda(\mathcal{A}_r)$, thus addressing the issue of the ``stochastic dependence'' between $\mathcal{A}_q$ and $\mathcal{A}_r$. In Section~\ref{sec_4}, we reduce Theorem~\ref{th2} to two second moment bounds. Section~\ref{sec_5} contains a brief introduction to the ``GCD graph'' machinery developed by Koukoulopoulos and Maynard~\cite{km}. In Section~\ref{sec_6}, we show how the second moment bounds follow from the existence of a ``good'' GCD subgraph. In the final two sections, we establish the existence of such a good GCD subgraph, using a modification of the iteration procedure of~\cite{km}. Our argument requires a careful balancing of the ``quality gain'' against the potential ``density loss'' coming from this iterative procedure, in such a way that information on the ``anatomy of integers'' can be exploited beyond a certain threshold. This threshold is determined by the order of the error terms coming from sieve theory (which translate into the error terms of the overlap estimate in Section~\ref{sec_3}).
For the rest of the paper, $\psi: \mathbb{N} \to [0,1/2]$ is an arbitrary function, $\mathcal{A}_q$, $q \in \mathbb{N}$, is as in \eqref{eq_def}, and $\Psi(Q)$, $Q \in \mathbb{N}$, is as in \eqref{psiq_def}.
\section{Proof of Theorem 1} \label{sec_2}
Let $C>4$ be fixed, and assume that Theorem~\ref{th2} holds. Let $\mathbbm{1}_A$ denote the indicator function of a set $A$. Formulated in probabilistic language, Theorem~\ref{th2} controls the variance of the random variables $\mathbbm{1}_{\mathcal{A}_1}, \dots, \mathbbm{1}_{\mathcal{A}_Q}$, and we obtain
\begin{eqnarray} \label{vari}
\int_0^1 \left(\sum_{q=1}^Q \mathbbm{1}_{\mathcal{A}_q} (\alpha) - \Psi(Q) \right)^2 \mathrm{d}\alpha
& = & \sum_{q,r \leq Q} \lambda( \mathcal{A}_q \cap \mathcal{A}_r) - \Psi(Q)^2 = O \left(\frac{\Psi(Q)^2}{(\log \Psi(Q))^{C}} \right).
\end{eqnarray}
Define
$$
Q_k = \min \left \{Q:~\Psi(Q) \geq e^{k^{1/\sqrt{C}}} \right \}, \qquad k \geq 1,
$$
and let
$$
\mathcal{B}_k = \left\{ \alpha \in [0,1]:~\left| \sum_{q=1}^{Q_k} \mathbbm{1}_{\mathcal{A}_q} (\alpha) - \Psi(Q_k) \right| \geq \frac{\Psi(Q_k)}{(\log \Psi(Q_k))^{C/4}} \right\}.
$$
By Chebyshev's inequality and \eqref{vari}, we have
$$
\lambda\left( \mathcal{B}_k \right) \ll (\log \Psi(Q_k))^{-C/2} \ll k^{-\sqrt{C}/2}.
$$
Since we assumed that $C>4$, we have $\sum_{k=1}^\infty \lambda(\mathcal{B}_k) < \infty$, and the Borel--Cantelli lemma implies that almost all $\alpha$ are contained in at most finitely many sets $\mathcal{B}_k$. Thus for almost all $\alpha$,
$$
\left| \sum_{q=1}^{Q_k} \mathbbm{1}_{\mathcal{A}_q} (\alpha) - \Psi(Q_k) \right| \leq \frac{\Psi(Q_k)}{(\log \Psi(Q_k))^{C/4}}
$$
holds for all $k \geq k_0(\alpha)$. Clearly, for any $Q \geq 3$ there exists a $k$ such that $Q_k \leq Q < Q_{k+1}$, which also implies that
$$
\sum_{q=1}^{Q_k} \mathbbm{1}_{\mathcal{A}_q} (\alpha) \leq \sum_{q=1}^{Q} \mathbbm{1}_{\mathcal{A}_q} (\alpha) \leq \sum_{q=1}^{Q_{k+1}} \mathbbm{1}_{\mathcal{A}_q} (\alpha).
$$
Since $\psi \leq 1/2$ by assumption, each increment $\Psi(Q)-\Psi(Q-1) = 2 \varphi(Q) \psi(Q)/Q$ is at most $1$, hence $\Psi(Q_k) \in \left[e^{k^{1/\sqrt{C}}},e^{k^{1/\sqrt{C}}} + 1\right]$, and so
$$
\Psi(Q_{k+1}) / \Psi(Q_k) = 1 + O \left(k^{-1+1/\sqrt{C}}\right) = 1 + O \left( \left( \log \Psi(Q_k) \right)^{-\sqrt{C}+1} \right).
$$
From the previous three formulas and the triangle inequality, we deduce that for almost all $\alpha$ there exists a $Q_0 = Q_0(\alpha)$ such that for all $Q \geq Q_0$,
$$
\left| \sum_{q=1}^{Q} \mathbbm{1}_{\mathcal{A}_q} (\alpha) - \Psi(Q) \right| = O \left( \frac{\Psi(Q)}{(\log \Psi(Q))^{\sqrt{C}-1}} \right).
$$
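In more detail: if $k \geq k_0(\alpha)$ and $Q_k \leq Q < Q_{k+1}$, then
$$
\sum_{q=1}^{Q} \mathbbm{1}_{\mathcal{A}_q} (\alpha) \leq \sum_{q=1}^{Q_{k+1}} \mathbbm{1}_{\mathcal{A}_q} (\alpha) \leq \Psi(Q_{k+1}) \left(1 + \frac{1}{(\log \Psi(Q_{k+1}))^{C/4}}\right) \leq \Psi(Q) \left( 1 + O \left( \left( \log \Psi(Q) \right)^{-\sqrt{C}+1} \right) \right),
$$
where the final step uses $\Psi(Q_k) \leq \Psi(Q) \leq \Psi(Q_{k+1})$, the bound on $\Psi(Q_{k+1})/\Psi(Q_k)$ above, and the inequality $C/4 \geq \sqrt{C}-1$; the matching lower bound follows analogously by comparing with $Q_k$.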
As $C$ can be chosen arbitrarily large, this proves Theorem~\ref{th1}.
\section{The overlap estimate} \label{sec_3}
In this section, we develop a new estimate for the measure of the overlaps $\mathcal{A}_q \cap \mathcal{A}_r$. For the rest of the paper, let
\begin{equation}\label{D_def}
D(q,r):= \frac{\max \left( r \psi(q), q \psi(r) \right)}{\gcd (q,r)}, \qquad q,r \in \mathbb{N}.
\end{equation}
The standard bound for the measure of $\mathcal{A}_q \cap \mathcal{A}_r$ is due to Pollington and Vaughan~\cite{pv}: for any $q \neq r$,
\begin{equation} \label{pv*}
\lambda(\mathcal{A}_q \cap \mathcal{A}_r) \ll \lambda(\mathcal{A}_q) \lambda(\mathcal{A}_r) \prod_{\substack{p \mid \frac{qr}{\gcd(q,r)^2}, \\ p>D(q,r)}} \left(1 + \frac{1}{p} \right),
\end{equation}
with an absolute implied constant. Clearly, because of the presence of the implied constant this standard bound cannot be sufficient to deduce Theorem~\ref{th2}. Below we will use a more refined argument from sieve theory which allows us to isolate a main term, and prove an upper bound of the form
$$
\lambda(\mathcal{A}_q \cap \mathcal{A}_r) \leq \lambda(\mathcal{A}_q) \lambda(\mathcal{A}_r) \left(1 + \textup{[error]}\right),
$$
with an error term that becomes small if there are not too many small primes which divide $q$ and $r$ with different multiplicities (see Lemma~\ref{lemma_over} below for details).
The following lemma is called the fundamental lemma of sieve theory. We state it in the formulation of~\cite[Theorem 18.11]{kouk}.
\begin{lemma}[Fundamental lemma of sieve theory] \label{lemma_fund}
Let $(a_n)_{n \geq 1}$ be non-negative reals, such that $\sum_{n=1}^\infty a_n < \infty$. Let $\mathcal{P}$ be a finite set of primes, and write $P = \prod_{p \in \mathcal{P}} p$. Set $y = \max \mathcal{P}$, and $A_d = \sum_{n \equiv 0 \mod d} a_n$. Assume that there exists a multiplicative function $g$ such that $g(p) < p$ for all $p \in \mathcal{P}$, a real number $x$, and positive constants $\kappa,C$ such that
$$
A_d =: x \frac{g(d)}{d} + r_d, \qquad d \mid P,
$$
and
$$
\prod_{p \in (y_1, y_2] \cap \mathcal{P}} \left( 1 - \frac{g(p)}{p} \right)^{-1} < \left( \frac{\log y_2}{\log y_1} \right)^\kappa \left(1 + \frac{C}{\log y_1} \right), \qquad 3/2 \leq y_1 \leq y_2 \leq y.
$$
Then, uniformly in $u \geq 1$ we have
$$
\sum_{(n,P)=1} a_n = \left( 1 + O ( u^{-u/2} ) \right) x \prod_{p \in \mathcal{P}} \left(1 -\frac{g(p)}{p} \right) + O \left( \sum_{d \leq y^u,~d \mid P} |r_d| \right).
$$
\end{lemma}
We will also need an estimate for the order of the partial sums of a particular multiplicative function.
\begin{lemma} \label{lemma_mean_value}
Let $\mathcal{P}$ be a set of odd primes, and define
$$
f(n) = \prod_{\substack{p \mid n,\\ p \in \mathcal{P}}} \left(1+ \frac{1}{p-2} \right).
$$
Then for any $x \ge 2$,
$$
\sum_{n \leq x} f(n) = x \prod_{p \in \mathcal{P}} \left(1 + \frac{1}{p(p-2)} \right) + O \left( \log x \right),
$$
where the implied constant is absolute.
\end{lemma}
\begin{proof}
Define $g(n) = \sum_{d \mid n} \mu(d) f(n/d)$, where $\mu$ is the M\"obius function. Note that $f$ and $g$ are multiplicative functions. We have
\begin{equation} \label{sumfnformula}
\begin{split} \sum_{n \leq x} f(n) &= \sum_{n \leq x} \sum_{d \mid n} g(d) \\
&= \sum_{d \leq x} g(d) \left \lfloor \frac{x}{d} \right\rfloor \\
&= x \sum_{d \leq x} \frac{g(d)}{d} + O \left( \sum_{d \leq x} g(d) \right) \\
&= x \sum_{d=1}^\infty \frac{g(d)}{d} + O \left(x \sum_{d>x} \frac{g(d)}{d} + \sum_{d \leq x} g(d) \right). \end{split}
\end{equation}
For $p \in \mathcal{P}$,
\begin{equation*}
g(p) = f(p) - 1 = \frac{1}{p-2}, \qquad \text{and} \qquad g(p^m) = 0,~m \geq 2,
\end{equation*}
whereas for $p \not\in \mathcal{P}$, we have $g(p^m)=0$ for all $m \geq 1$. Thus
$$
\sum_{d=1}^\infty \frac{g(d)}{d} = \prod_{p \in \mathcal{P}} \left(1 + \frac{1}{p(p-2)} \right),
$$
and it remains to estimate the error term in \eqref{sumfnformula}.
Note that $p^m g(p^m) \le p/(p-2) \le 3$ for all prime powers $p^m$. Hence by a general upper bound for the order of partial sums of multiplicative functions (see e.g.\ \cite[Theorem 14.2]{kouk}), the partial sums of $dg(d)$ satisfy
$$
\sum_{d \le x} d g(d) \ll x \exp \left( \sum_{p \le x} \frac{pg(p)-1}{p} \right) \ll x \exp \left( \sum_{p>2} \frac{2}{p(p-2)} \right) \ll x.
$$
In particular, $\sum_{x \le d \le 2x} g(d)/d \ll x^{-2} \sum_{d \le 2x} d g(d) \ll x^{-1}$, and the first error term in \eqref{sumfnformula} is $x\sum_{d>x} g(d)/d \ll 1$. Further, $\sum_{x \le d \le 2x} g(d) \le x^{-1} \sum_{d \le 2x} d g(d) \ll 1$, and the second error term in \eqref{sumfnformula} is $\sum_{d \le x} g(d) \ll \log x$, as claimed. All implied constants are absolute.
\end{proof}
\begin{lemma}[Overlap estimate] \label{lemma_over}
For any positive integers $q \neq r$ and any reals $u \ge 1$ and $T \ge 2$, we have
\begin{equation} \label{overlapclaim}
\lambda (\mathcal{A}_q \cap \mathcal{A}_r) \le \lambda (\mathcal{A}_q) \lambda (\mathcal{A}_r) \left( 1 + O \left( u^{-u/2} + \frac{T^u \log (D+2) \log T}{D} \right) \right) \prod_{\substack{p \mid \frac{qr}{\gcd(q,r)^2}, \\ p>T}} \left( 1+ \frac{1}{p-1} \right)
\end{equation}
with an absolute implied constant, where $D=D(q,r)$ is as in \eqref{D_def}. In particular, for any $C \ge 1$,
$$
\lambda (\mathcal{A}_q \cap \mathcal{A}_r) \le \lambda (\mathcal{A}_q) \lambda (\mathcal{A}_r) \left( 1 + O \left( (\log (D+2))^{-C} \right) \right) \prod_{\substack{p \mid \frac{qr}{\gcd(q,r)^2}, \\ p>A}} \left( 1+ \frac{1}{p-1} \right)
$$
with an implied constant depending only on $C$, where
\begin{equation}\label{A_def}
A=A_C(q,r) := \exp \left( \frac{\log (D+100) \log \log \log (D+100)}{8C \log \log (D+100)} +1 \right) .
\end{equation}
\end{lemma}
\begin{proof}
We follow the general strategy of Pollington and Vaughan in~\cite[Section 3]{pv}. If $D<1/2$, then $\psi(q)/q+\psi(r)/r<1/\mathrm{lcm}(q,r)$, hence $\mathcal{A}_q \cap \mathcal{A}_r = \emptyset$, and the claim trivially holds. We may thus assume throughout the rest of the proof that $D \ge 1/2$.
We set
\begin{eqnarray*}
\delta = \min \left( \frac{\psi(q)}{q}, \frac{\psi(r)}{r} \right) \qquad \text{and} \qquad \Delta = \max \left( \frac{\psi(q)}{q}, \frac{\psi(r)}{r} \right) ,
\end{eqnarray*}
and define the piecewise linear function
\begin{equation*}
w(y) = \left\{ \begin{array}{ll} 2\delta & \text{if $0 \leq y \leq \Delta - \delta$,} \\ \Delta+\delta-y & \text{if $\Delta - \delta < y \leq \Delta+\delta$,} \\ 0 & \text{otherwise.} \end{array}\right.
\end{equation*}
We can express the measure of $\mathcal{A}_q \cap \mathcal{A}_r$ as
$$
\lambda(\mathcal{A}_q \cap \mathcal{A}_r) = \sum_{\substack{1 \leq a \leq q,\\ \gcd(a,q) = 1}} \sum_{\substack{1 \leq b \leq r,\\ \gcd(b,r) = 1}} w \left(\left| \frac{a}{q} - \frac{b}{r} \right| \right).
$$
For any prime $p$, let $u = u(p,q)$ and $v = v(p,r)$ be defined by $q = \prod_p p^u$ and $r = \prod_p p^v$, and let
$$
l = \prod_{p:~u=v} p^u, \qquad m = \prod_{p:~u \neq v} p^{\min(u,v)}, \qquad n = \prod_{p:~u \neq v} p^{\max(u,v)}.
$$
Following the argument on p.\ 195 of~\cite{pv} (an application of the Chinese remainder theorem, together with a simple counting argument) leads to
$$
\sum_{\substack{1 \leq a \leq q,\\ \gcd(a,q) = 1}} \sum_{\substack{1 \leq b \leq r,\\ \gcd(b,r) = 1}} w \left( \left| \frac{a}{q} - \frac{b}{r} \right| \right) = \sum_{\substack{1 \leq c \leq ln,\\ \gcd(c,n) =1}} 2 w\left( \frac{c}{ln} \right) \varphi(m) l \prod_{p \mid \gcd(l,c)} \left(1 - \frac{1}{p} \right) \prod_{\substack{p \mid l, \\p \nmid c}} \left( 1 - \frac{2}{p} \right).
$$
Assume first that $l$ is odd. By rewriting the right-hand side of the previous formula we see that $\lambda(\mathcal{A}_q \cap \mathcal{A}_r)$ equals
\[ \begin{split} & 2 \varphi(m) \frac{\varphi(l)^2}{l} \sum_{\substack{1 \leq c \leq ln,\\ \gcd(c,n) =1}} w\left( \frac{c}{ln} \right) \prod_{p \mid \gcd(l,c)} \left(1 - \frac{1}{p} \right)^{-1} \prod_{\substack{p \mid l, \\p \nmid c}} \left( \left( 1 - \frac{2}{p} \right) \left( 1 - \frac{1}{p} \right)^{-2} \right) \\ = & 2 \varphi(m) \frac{\varphi(l)^2}{l} \prod_{\substack{p \mid l}} \left( 1 - \frac{1}{(p-1)^2} \right) \sum_{\substack{1 \leq c \leq ln,\\ \gcd(c,n) =1}} w\left( \frac{c}{ln} \right) \prod_{\substack {p \mid \gcd(l,c)}} \left(1 + \frac{1}{p-2} \right) . \end{split} \]
We now find an upper bound for this expression. First, we replace the condition $\gcd(c,n)=1$ by the weaker condition $\gcd(c,n^*)=1$, where $n^*$ denotes the $T$-smooth part of $n$ (i.e.\ $n^*=\prod_{p \le T,~\kappa \neq \nu}p^{\max (\kappa,\nu)}$). Next, we fix a large positive integer $K$, and divide $[\Delta-\delta, \Delta+\delta]$ into $K$ subintervals of equal length. Observe that the piecewise constant function
$$
w^*(y) = \frac{2 \delta}{K} \left( \left\lfloor \frac{K (\Delta+\delta-y)}{2 \delta} \right\rfloor +1 \right) = \frac{2 \delta}{K} \sum_{k=0}^{K-1} \mathbbm{1}_{[0,\Delta + \delta - 2k \delta /K]}(y)
$$
satisfies $w(y) \leq w^*(y)$ for all $y \ge 0$. Therefore $\lambda(\mathcal{A}_q \cap \mathcal{A}_r)$ is bounded above by
\begin{equation} \label{intersectionupperbound}
2 \varphi(m) \frac{\varphi(l)^2}{l} \prod_{\substack{p \mid l}} \left( 1 - \frac{1}{(p-1)^2} \right) \sum_{\substack{1 \leq c \leq ln,\\ \gcd(c,n^*) =1}} w^*\left( \frac{c}{ln} \right) \prod_{\substack {p \mid \gcd(l,c)}} \left(1 + \frac{1}{p-2} \right) .
\end{equation}
Here
$$
\sum_{\substack{1 \leq c \leq ln,\\ \gcd(c,n^*) =1}} w^* \left( \frac{c}{ln} \right) \prod_{\substack {p \mid \gcd(l,c)}} \left(1 + \frac{1}{p-2} \right) = \frac{2 \delta}{K} \sum_{k=0}^{K-1} \sum_{\substack{1 \leq c \leq ln (\Delta + \delta - 2k \delta/K),\\ \gcd(c,n^*) =1}} \prod_{\substack {p \mid \gcd(l,c)}} \left(1 + \frac{1}{p-2} \right).
$$
Now fix $k \in \{0,\dots,K-1\}$, and set
$$
a_c = \prod_{\substack {p \mid \gcd(l,c)}} \left(1 + \frac{1}{p-2} \right), \qquad 1 \leq c \leq ln (\Delta + \delta - 2k \delta/K),
$$
and $a_c = 0$ for $c > ln (\Delta + \delta - 2k \delta/K)$. Note that for $d \mid n^*$ we have $a_{dc} = a_c$ as long as $dc \leq ln (\Delta + \delta - 2k \delta/K)$. By Lemma~\ref{lemma_mean_value}, for any $d \mid n^*$ we thus have
\begin{eqnarray*}
\sum_{c \equiv 0 \mod d} a_c & = & \sum_{1 \leq c \leq \frac{ln (\Delta + \delta - 2k \delta/K)}{d}} ~\prod_{\substack {p \mid \gcd(l,c)}} \left(1 + \frac{1}{p-2} \right) \\
& = & \frac{ln (\Delta + \delta - 2k \delta/K)}{d} \prod_{p \mid l} \left(1 + \frac{1}{p(p-2)} \right) + O(\log (D+2)) .
\end{eqnarray*}
We have
$$
\sum_{\gcd(c,n^*)=1} a_c = \sum_{\substack{1 \leq c \leq ln (\Delta + \delta - 2k \delta/K),\\ \gcd(c,n^*) =1}} ~\prod_{\substack {p \mid \gcd(l,c)}} \left(1 + \frac{1}{p-2} \right),
$$
and by an application of Lemma~\ref{lemma_fund} (with $\mathcal{P}$ the set of prime divisors of $n^*$, $\max \mathcal{P} \le T$ and $|r_d| \ll \log (D+2)$) this is
$$
(1 + O (u^{-u/2})) ln \left( \Delta + \delta - \frac{2k \delta}{K} \right) \frac{\varphi(n^*)}{n^*} \prod_{p \mid l} \left(1 + \frac{1}{p(p-2)} \right) + O \left( T^u \log (D+2) \right) . $$
Since
$$
\prod_{p \mid l} \left( 1 - \frac{1}{(p-1)^2} \right) \left(1 + \frac{1}{p(p-2)} \right) = \prod_{p \mid l} \frac{p(p-2)}{(p-1)^2} \cdot \frac{(p-1)^2}{p(p-2)} = 1,
$$
formula \eqref{intersectionupperbound} thus yields that $\lambda(\mathcal{A}_q \cap \mathcal{A}_r)$ is bounded above by
$$
2 \varphi(m) \frac{\varphi(l)^2}{l} \cdot \frac{2 \delta}{K} \sum_{k=0}^{K-1} \left( \left( (1 + O (u^{-u/2})) ln \left( \Delta + \delta - \frac{2k \delta}{K} \right) \frac{\varphi(n^*)}{n^*} + O \left( T^u \log (D+2) \right) \right) \right) .
$$
Letting $K \to \infty$, and using $D = \Delta l n$ and $\varphi(n^*)/n^* \ge \prod_{p \le T} (1-1/p) \gg 1/\log T$, we obtain
\[
\begin{split} \lambda(\mathcal{A}_q \cap \mathcal{A}_r) &\le 2 \varphi(m) \frac{\varphi(l)^2}{l} 2 \delta \left( (1 + O (u^{-u/2})) ln \Delta \frac{\varphi(n^*)}{n^*} + O \left( T^u \log (D+2) \right) \right) \\ &= 4 \varphi(m) \varphi(l)^2 n \frac{\varphi(n^*)}{n^*} \delta \Delta \left( 1 + O \left( u^{-u/2} + \frac{T^u \log (D+2) \log T}{D} \right) \right) \\ &= \lambda(\mathcal{A}_q) \lambda (\mathcal{A}_r) \frac{\varphi(n^*)/n^*}{\varphi(n)/n} \left( 1 + O \left( u^{-u/2} + \frac{T^u \log (D+2) \log T}{D} \right) \right) . \end{split}
\]
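In the last equality we used that $\lambda(\mathcal{A}_q) = 2 \varphi(q) \psi(q)/q$ (the intervals comprising $\mathcal{A}_q$ are pairwise disjoint up to measure zero, since $\psi \le 1/2$), together with the identity $\varphi(q) \varphi(r) = \varphi(l)^2 \varphi(m) \varphi(n)$, which follows from the multiplicativity of $\varphi$; hence $\lambda(\mathcal{A}_q) \lambda(\mathcal{A}_r) = 4 \varphi(m) \varphi(l)^2 \varphi(n) \delta \Delta$.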
Finally, observe that
\[ \frac{\varphi(n^*)/n^*}{\varphi(n)/n} = \frac{1}{\prod_{\substack{p \mid n, \\ p>T}}\left( 1-\frac{1}{p} \right)} = \prod_{\substack{p \mid n, \\ p>T}} \left( 1+ \frac{1}{p-1} \right) . \]
This establishes \eqref{overlapclaim} for odd $l$.
Assume next that $l$ is even. Then
\[ \prod_{p \mid \gcd(l,c)} \left(1 - \frac{1}{p} \right) \prod_{\substack{p \mid l, \\p \nmid c}} \left( 1 - \frac{2}{p} \right) = \frac{1}{2} \mathbbm{1}_{\{ 2 \mid c \}} \prod_{\substack{p \mid \gcd(l,c),\\ p>2}} \left(1 - \frac{1}{p} \right) \prod_{\substack{p \mid l, \\p \nmid c,\\p>2}} \left( 1 - \frac{2}{p} \right),
\]
and similarly to before we obtain that $\lambda (\mathcal{A}_q \cap \mathcal{A}_r)$ equals
\[ \begin{split} & 4 \varphi (m) \frac{\varphi(l)^2}{l} \prod_{\substack{p \mid l, \\ p>2}} \left( 1-\frac{1}{p-1} \right) \sum_{\substack{1 \le c \le ln, \\ \gcd(c,n)=1, \\ 2 \mid c}} w \left( \frac{c}{ln} \right) \prod_{\substack{p \mid \gcd(l,c), \\ p>2}} \left( 1+\frac{1}{p-2} \right) \\ = & 4 \varphi (m) \frac{\varphi(l)^2}{l} \prod_{\substack{p \mid l, \\ p>2}} \left( 1-\frac{1}{p-1} \right) \sum_{\substack{1 \le c \le ln/2, \\ \gcd(c,n)=1}} w \left( \frac{c}{ln/2} \right) \prod_{\substack{p \mid \gcd(l,c), \\ p>2}} \left( 1+\frac{1}{p-2} \right) . \end{split}
\]
The rest of the proof for odd $l$ applies \emph{mutatis mutandis} to even $l$. This completes the proof of \eqref{overlapclaim}.
Given $C \ge 1$, let us choose
$$
u=4C \frac{\log \log (D+100)}{\log \log \log (D+100)} \quad \text{and} \quad T=\exp \left( \frac{\log (D+100) \log \log \log (D+100)}{8C \log \log (D+100)} +1 \right) .
$$
One readily checks that $u^{-u/2} \le (\log(D+100))^{-C}$. Using $4/\log \log \log 100 <10$, we also have $T^u \le (D+100)^{1/2} (\log (D+100))^{10C}$, hence
$$
\frac{T^u \log (D+2) \log T}{D} \ll \frac{(\log (D+100))^{12C}}{D^{1/2}}
$$
is negligible compared to $(\log (D+2))^{-C}$.
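For the reader's convenience, here is a quick verification of the first of the two estimates above. Writing $x=\log \log (D+100)$ and $y=\log \log \log (D+100)$ (so that $y=\log x$ and $x,y>0$), we have $u=4Cx/y$, and hence
$$
\frac{u}{2} \log u = \frac{2Cx}{y} \left( \log (4C) + y - \log y \right) \ge \frac{2Cx}{y} \cdot \frac{y}{2} = Cx,
$$
since $\log y \le y/2$ and $\log (4C) > 0$. Exponentiating gives $u^{u/2} \ge e^{Cx} = (\log (D+100))^{C}$, that is, $u^{-u/2} \le (\log (D+100))^{-C}$.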
\end{proof}
\section{Second moment bounds} \label{sec_4}
In this section, we show how two second moment bounds, stated as Propositions~\ref{prop_secondmoment1} and~\ref{prop_secondmoment2} below, together with the overlap estimate in Lemma~\ref{lemma_over} imply Theorem~\ref{th2}. These Propositions should be compared to the second moment bound of Koukoulopoulos and Maynard~\cite[Proposition 5.4]{km}, which, together with the overlap estimate of Pollington and Vaughan in equation \eqref{pv*}, implies the Duffin--Schaeffer conjecture.
Let $D(q,r)$ be as in \eqref{D_def}. For the sake of readability, let
\begin{equation}\label{L_def}
L_s (q,r) := \sum_{\substack{p \mid \frac{qr}{\gcd(q,r)^2}, \\ p \geq s}} \frac{1}{p},
\end{equation}
and
\begin{equation}\label{F_def}
F(x)=F_C(x):= \exp \left( \frac{\log (x+100) \log \log \log (x+100)}{8C \log \log (x+100)} +1 \right) .
\end{equation}
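For later use we record two simple growth properties of $F$: on the one hand, $\log F(x) = o(\log x)$ as $x \to \infty$, so that $F(x) \le x$ for all sufficiently large $x$; on the other hand, $\log F(x)/\log \log x \to \infty$, so that $(\log x)^K = \exp (K \log \log x) = o(F(x))$ for every fixed $K \ge 1$. In other words, $F$ increases faster than any power of the logarithm, but slower than any power of its argument.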
\begin{prop}\label{prop_secondmoment1} For any $Q \in \mathbb{N}$ and any real $t \ge 1$, the set
\[ \mathcal{E}_t = \left\{ (q,r) \in [1,Q]^2 \, : \, D(q,r) \le \frac{\Psi(Q)}{t} \right\} \]
satisfies
\[ \sum_{(q,r) \in \mathcal{E}_t} \frac{\varphi(q) \psi(q)}{q} \cdot \frac{\varphi(r) \psi(r)}{r} \ll \frac{\Psi(Q)^2}{t^{1/5}}, \]
with an absolute implied constant.
\end{prop}
\begin{prop}\label{prop_secondmoment2} Let $C \ge 1$ be arbitrary. For any $Q \in \mathbb{N}$ and any real $t \ge 1$, the set
\[ \mathcal{E}_t = \left\{ (q,r) \in [1,Q]^2 \, : \, D(q,r) \le t \Psi(Q) \quad \textrm{and} \quad L_{F(t)}(q,r) \ge \frac{1}{F(t)^{1/4}} \right\} \]
satisfies
\[ \sum_{(q,r) \in \mathcal{E}_t} \frac{\varphi(q) \psi(q)}{q} \cdot \frac{\varphi(r) \psi(r)}{r} \ll \frac{\Psi(Q)^2}{F(t)^{1/2}} \]
with an implied constant depending only on $C$.
\end{prop}
We now present the proof of Theorem~\ref{th2} assuming Propositions~\ref{prop_secondmoment1} and~\ref{prop_secondmoment2}.
\begin{proof}[Proof of Theorem~\ref{th2}] Fix $C>10$, and let $Q \in \mathbb{N}$ be such that $\Psi(Q) \ge 2$. We may assume that $\Psi(Q)$ is large enough in terms of $C$, since otherwise, the claim follows from the estimate \eqref{KMtechnical} of Koukoulopoulos and Maynard.
We partition the index set $[1,Q]^2$ into the sets
\[ \begin{split} \mathcal{E}^{1} &= \left\{ (q,r) \in [1,Q]^2 \, : \, q=r \right\}, \\ \mathcal{E}^{2} &= \left\{ (q,r) \in [1,Q]^2 \, : \, q \neq r, \quad D(q,r) \le \frac{\Psi(Q)}{(\log \Psi (Q))^C}, \quad L_{F(\Psi(Q))}(q,r) \le 1 \right\}, \\ \mathcal{E}^{3} &= \left\{ (q,r) \in [1,Q]^2 \, : \, q \neq r, \quad D(q,r) \le \frac{\Psi(Q)}{(\log \Psi (Q))^C}, \quad L_{F(\Psi(Q))}(q,r) > 1 \right\}, \\ \mathcal{E}^{4} &= \left\{ (q,r) \in [1,Q]^2 \, : \, q \neq r, \quad D(q,r) > \frac{\Psi(Q)}{(\log \Psi (Q))^C}, \quad L_{F(D(q,r))} (q,r) \le \frac{1}{(\log \Psi (Q))^C} \right\}, \\ \mathcal{E}^{5} &= \left\{ (q,r) \in [1,Q]^2 \, : \, q \neq r, \quad D(q,r) > \frac{\Psi(Q)}{(\log \Psi (Q))^C}, \quad L_{F(D(q,r))} (q,r) > \frac{1}{(\log \Psi (Q))^C} \right\}. \end{split} \]
The contribution of $\mathcal{E}^{1}$ is clearly negligible:
\begin{equation}\label{edgeset1sum}
\sum_{(q,r) \in \mathcal{E}^{1}} \lambda (\mathcal{A}_q \cap \mathcal{A}_r) = \sum_{q=1}^Q \lambda (\mathcal{A}_q) = \Psi (Q).
\end{equation}
Now we consider $\mathcal{E}^{2}$. For any $(q,r) \in \mathcal{E}^{2}$, the condition $L_{F(\Psi(Q))}(q,r) \le 1$ together with Mertens' theorem ensures that
\[ \begin{split} \prod_{p \mid \frac{qr}{\gcd (q,r)^2}} \left( 1 + \frac{1}{p-1} \right) &\le \exp \bigg( \sum_{\substack{p \mid \frac{qr}{\gcd (q,r)^2}, \\ p<F (\Psi(Q))}} \frac{2}{p} + \sum_{\substack{p \mid \frac{qr}{\gcd (q,r)^2}, \\ p \ge F (\Psi(Q))}} \frac{2}{p} \bigg) \\ &\ll \exp \left( 2 \log \log F(\Psi (Q)) \right) \\ &\ll (\log \Psi (Q))^2. \end{split} \]
In the last step we used the rough estimate $F(x) \le x$ for large enough $x$. The overlap estimate (Lemma~\ref{lemma_over}) thus shows that for any $(q,r) \in \mathcal{E}^{2}$,
\[ \lambda (\mathcal{A}_q \cap \mathcal{A}_r) \ll \lambda (\mathcal{A}_q) \lambda (\mathcal{A}_r) (\log \Psi (Q))^2 . \]
Applying Proposition~\ref{prop_secondmoment1} with $t=(\log \Psi (Q))^C$ leads to
\begin{equation}\label{edgeset2sum}
\sum_{(q,r) \in \mathcal{E}^{2}} \lambda (\mathcal{A}_q \cap \mathcal{A}_r) \ll \frac{\Psi(Q)^2}{(\log \Psi (Q))^{C/5-2}} .
\end{equation}
Next we consider $\mathcal{E}^{3}$. For any $(q,r) \in \mathcal{E}^{3}$, let $j(q,r)$ be the maximal integer $j$ such that $L_{F(\exp \exp (j))}(q,r) >1$; note that by construction $j(q,r) \ge \lfloor \log \log \Psi (Q) \rfloor$. Let $(q,r) \in \mathcal{E}^{3}$ with $j(q,r)=j$. By definition, $L_{F(\exp \exp (j+1))}(q,r) \le 1$, hence Mertens' theorem implies
\[ \begin{split} \prod_{p \mid \frac{qr}{\gcd (q,r)^2}} \left( 1 + \frac{1}{p-1} \right) &\le \exp \bigg( \sum_{\substack{p \mid \frac{qr}{\gcd (q,r)^2}, \\ p < F(\exp \exp (j+1))}} \frac{2}{p} + \sum_{\substack{p \mid \frac{qr}{\gcd (q,r)^2}, \\ p \ge F(\exp \exp (j+1))}} \frac{2}{p} \bigg) \\ &\ll \exp \left( 2 \log \log F (\exp \exp (j+1)) \right) \\ &\ll \exp (2j) . \end{split} \]
Thus the overlap estimate gives
\[ \lambda (\mathcal{A}_q \cap \mathcal{A}_r) \ll \lambda (\mathcal{A}_q) \lambda (\mathcal{A}_r) \exp (2j) , \]
and applying Proposition~\ref{prop_secondmoment2} with $t=\exp \exp (j)$ leads to
\begin{equation}\label{edgeset3sum}
\begin{split} \sum_{(q,r) \in \mathcal{E}^{3}} \lambda (\mathcal{A}_q \cap \mathcal{A}_r) &= \sum_{j \ge \lfloor \log \log \Psi (Q) \rfloor} \sum_{\substack{(q,r) \in \mathcal{E}^{3}, \\ j(q,r)=j}} \lambda (\mathcal{A}_q \cap \mathcal{A}_r) \\ &\ll \sum_{j \ge \lfloor \log \log \Psi (Q) \rfloor} \exp (2j) \frac{\Psi(Q)^2}{F(\exp \exp (j))^{1/2}} \\ &\ll \frac{\Psi(Q)^2}{(\log \Psi (Q))^C} . \end{split}
\end{equation}
In the last step we used the fact that $F(x)$ increases faster than any power of $\log x$.
Now we consider $\mathcal{E}^{4}$. For any $(q,r) \in \mathcal{E}^{4}$,
\[ \prod_{\substack{p \mid \frac{qr}{\gcd (q,r)^2}, \\ p>F(D(q,r))}} \left( 1+\frac{1}{p-1} \right) \le \exp \left( 2 L_{F(D(q,r))} (q,r) \right) = 1+O \left( \frac{1}{(\log \Psi(Q))^{C}} \right) . \]
The overlap estimate thus gives
\[ \lambda (\mathcal{A}_q \cap \mathcal{A}_r) \le \lambda (\mathcal{A}_q) \lambda (\mathcal{A}_r) \left( 1 + O \left( \frac{1}{(\log \Psi(Q))^{C}} \right) \right) , \]
hence
\begin{equation}\label{edgeset4sum}
\sum_{(q,r) \in \mathcal{E}^{4}} \lambda (\mathcal{A}_q \cap \mathcal{A}_r) \le \Psi (Q)^2 + O \left( \frac{\Psi(Q)^2}{(\log \Psi(Q))^{C}} \right) .
\end{equation}
Finally, we consider $\mathcal{E}^{5}$. For any $(q,r) \in \mathcal{E}^{5}$, let $i(q,r)$ be the maximal integer $i$ such that
\[ L_{F \left( \exp \exp \frac{i}{(\log \Psi(Q))^C} \right)} (q,r) > \frac{1}{2(\log \Psi (Q))^C} . \]
Note that
\[ L_{F \left( \frac{\Psi(Q)}{(\log \Psi (Q))^C} \right)} (q,r) \ge L_{F(D(q,r))} (q,r) > \frac{1}{(\log \Psi (Q))^C}, \]
therefore, since $F$ is increasing and $s \mapsto L_s(q,r)$ is non-increasing,
\[ i(q,r) \ge \left\lfloor (\log \Psi (Q))^C \log \log \frac{\Psi(Q)}{(\log \Psi (Q))^C} \right\rfloor . \]
Let $(q,r) \in \mathcal{E}^{5}$ such that $i(q,r)=i$. By definition,
\[ L_{F \left( \exp \exp \frac{i+1}{(\log \Psi(Q))^C} \right)} (q,r) \le \frac{1}{2(\log \Psi (Q))^C}, \]
hence Mertens' theorem shows that
\[ \begin{split} \prod_{p \mid \frac{qr}{\gcd (q,r)^2}} \left( 1+\frac{1}{p-1} \right) &\le \exp \Bigg( \sum_{\substack{p \mid \frac{qr}{\gcd (q,r)^2}, \\ p < F \left( \exp \exp \frac{i+1}{(\log \Psi(Q))^C} \right) }} \frac{2}{p} + \sum_{\substack{p \mid \frac{qr}{\gcd (q,r)^2}, \\ p \ge F \left( \exp \exp \frac{i+1}{(\log \Psi(Q))^C} \right) }} \frac{2}{p} \Bigg) \\ &\ll \exp \left( 2 \log \log F \left( \exp \exp \frac{i+1}{(\log \Psi(Q))^C} \right) \right) \\ &\ll \exp \left( \frac{2i}{(\log \Psi(Q))^C} \right) . \end{split} \]
The overlap estimate thus gives
\[ \lambda (\mathcal{A}_q \cap \mathcal{A}_r) \ll \lambda (\mathcal{A}_q) \lambda (\mathcal{A}_r) \exp \left( \frac{2i}{(\log \Psi(Q))^C} \right) . \]
Another application of Mertens' theorem leads to
\[ \begin{split} \sum_{F \left( \exp \exp \frac{i}{(\log \Psi(Q))^C} \right) \le p \le F \left( \exp \exp \frac{i+1}{(\log \Psi(Q))^C} \right)} \frac{1}{p} = &\log \log F \left( \exp \exp \frac{i+1}{(\log \Psi(Q))^C} \right) \\ &- \log \log F \left( \exp \exp \frac{i}{(\log \Psi(Q))^C} \right) \\ &+ O \left( \exp \left( - \sqrt{\log F \left( \exp \exp \frac{i}{(\log \Psi (Q))^C} \right)} \right) \right) \\ \le & \frac{1}{2(\log \Psi(Q))^C} . \end{split} \]
In the last step we used the facts that $h(x):=\log \log F (\exp \exp (x))$ satisfies $h'(x)=1+o(1)$, and $\log F (\exp \exp (x)) \ge e^{x/2}$ for large enough $x$ (note that $\log F (\exp \exp (x)) = \frac{e^x \log x}{8Cx}(1+o(1))$ as $x \to \infty$). It follows that
\[ L_{F \left( \exp \exp \frac{i}{(\log \Psi (Q))^C} \right)} (q,r) \le \frac{1}{2(\log \Psi (Q))^C} + \frac{1}{2(\log \Psi (Q))^C} = \frac{1}{(\log \Psi (Q))^C}, \]
hence $D(q,r) \le \exp \exp \frac{i}{(\log \Psi (Q))^C}$; indeed, if we had $D(q,r) > \exp \exp \frac{i}{(\log \Psi (Q))^C}$, then the monotonicity of $F$ would give $L_{F(D(q,r))} (q,r) \le L_{F \left( \exp \exp \frac{i}{(\log \Psi (Q))^C} \right)} (q,r) \le \frac{1}{(\log \Psi (Q))^C}$, contradicting the definition of $\mathcal{E}^{5}$. Applying Proposition~\ref{prop_secondmoment2} with $t=\exp \exp \frac{i}{(\log \Psi (Q))^C}$ thus leads to
\[ \begin{split} \sum_{\substack{(q,r) \in \mathcal{E}^{5}, \\ i(q,r)=i}} \lambda (\mathcal{A}_q \cap \mathcal{A}_r) &\ll \exp \left( \frac{2i}{(\log \Psi(Q))^C} \right) \frac{\Psi(Q)^2}{F \left( \exp \exp \frac{i}{(\log \Psi(Q))^C} \right)^{1/2}} \\ &\ll \frac{\Psi(Q)^2}{\exp \exp \frac{i}{2 (\log \Psi(Q))^C}} , \end{split} \]
and by summing over all possible values of $i$,
\begin{equation}\label{edgeset5sum}
\begin{split} \sum_{(q,r) \in \mathcal{E}^{5}} \lambda (\mathcal{A}_q \cap \mathcal{A}_r) &\ll \sum_{i \ge \left\lfloor (\log \Psi (Q))^C \log \log \frac{\Psi(Q)}{(\log \Psi (Q))^C} \right\rfloor} \frac{\Psi(Q)^2}{\exp \exp \frac{i}{2 (\log \Psi(Q))^C}} \\ &\ll \sum_{m \ge \log \log \frac{\Psi (Q)}{(\log \Psi(Q))^C}} \frac{\Psi(Q)^2 (\log \Psi(Q))^C}{\exp \exp \frac{m}{2}} \\ &\ll \frac{\Psi(Q)^2}{(\log \Psi (Q))^C} . \end{split}
\end{equation}
Combining formulas \eqref{edgeset1sum}--\eqref{edgeset5sum} shows that
\[ \sum_{q,r=1}^Q \lambda (\mathcal{A}_q \cap \mathcal{A}_r) \le \Psi (Q)^2 + O \left( \frac{\Psi (Q)^2}{(\log \Psi (Q))^{C/5-2}} \right), \]
as claimed.
\end{proof}
\section{GCD graphs: notations and basic properties} \label{sec_5}
The proof of the Duffin--Schaeffer conjecture given by Koukoulopoulos and Maynard in~\cite{km} is based on a concept called ``GCD graphs'', which they introduced in that paper. Very roughly speaking, a GCD graph encodes information on the divisor structure of a set of integers. To each GCD graph, a ``quality'' can be assigned, and the key argument in~\cite{km} is that one can iteratively pass to subgraphs of the original GCD graph in such a way that in each step either the quality increases and/or the divisor structure becomes more regular. At the end of this procedure, one has a graph that either has particularly high quality, or a very regular divisor structure. High quality directly implies that the density of the edge set, essentially controlling the influence of the bad pairs $(q,r)$ in such sets as $\mathcal{E}^{1}$ -- $\mathcal{E}^{5}$ of the previous section, is small, leading to the desired result. If one cannot achieve high quality, then one obtains a GCD subgraph that has perfect control of the divisor structure of the underlying set of integers; in this case, results on the ``anatomy of integers'' can be used to show that the problematic factor $\prod_{\substack{p \mid \frac{qr}{\gcd(q,r)^2}}} \left(1 + \frac{1}{p} \right)$ in the overlap estimate can only be large for a very small proportion of pairs $(q,r)$, again leading to the desired result.
We do not give a fully detailed presentation of the notion of a GCD graph here, and refer the reader to Section 6 of~\cite{km} instead. However, for the convenience of the reader, we will recall the basic definitions and some of the basic properties of GCD graphs.
A GCD graph is a septuple $G = (\mu,\mathcal{V},\mathcal{W},\mathcal{E},\mathcal{P},f,g)$, for which the following properties hold.
\begin{enumerate}[a)]
\item $\mu$ is a measure on $\mathbb{N}$ for which $\mu(n)<\infty$ for all $n$. This measure is extended to $\mathbb{N}^2$ by defining
$$
\mu(\mathcal{N}) = \sum_{(n_1,n_2) \in \mathcal{N}} \mu(n_1) \mu(n_2), \qquad \mathcal{N} \subseteq \mathbb{N}^2.
$$
\item The \emph{vertex sets} $\mathcal{V}$ and $\mathcal{W}$ are finite sets of positive integers.
\item The \emph{edge set} $\mathcal{E}$ is a subset of $\mathcal{V} \times \mathcal{W}$.
\item $\mathcal{P}$ is a set of primes.
\item $f$ and $g$ are functions from $\mathcal{P}$ to $\mathbb{Z}_{\geq 0}$ such that for all $p \in \mathcal{P}$,
\begin{enumerate}[(i)]
\item $p^{f(p)} \mid v$ for all $v \in \mathcal{V}$ and $p^{g(p)} \mid w$ for all $w \in \mathcal{W}$;
\item if $(v,w) \in \mathcal{E}$, then $p^{\min(f(p),g(p))} \parallel \gcd(v,w)$;
\item if $f(p) \neq g(p)$, then $p^{f(p)} \parallel v$ for all $v \in \mathcal{V}$ and $p^{g(p)} \parallel w$ for all $w \in \mathcal{W}$.
\end{enumerate}
\end{enumerate}
For two GCD graphs $G = (\mu,\mathcal{V},\mathcal{W},\mathcal{E},\mathcal{P},f,g)$ and $G' = (\mu',\mathcal{V}',\mathcal{W}',\mathcal{E}',\mathcal{P}',f',g')$ we say that $G'$ is a GCD subgraph of $G$, and write $G' \preceq G$, if
$$
\mu' = \mu, \quad \mathcal{V}' \subseteq \mathcal{V}, \quad \mathcal{W}' \subseteq \mathcal{W}, \quad \mathcal{E}' \subseteq \mathcal{E}, \quad \mathcal{P}' \supseteq \mathcal{P},
$$
and if $f$ resp.\ $g$ coincide with $f'$ resp.\ $g'$ on $\mathcal{P}$.
For given $\mathcal{V}$ and $k \geq 0$ we define $\mathcal{V}_{p^k} = \{v \in \mathcal{V}:~p^k \parallel v\}$. We write $\mathcal{E}_{p^k,p^\ell} = \mathcal{E} \cap (\mathcal{V}_{p^k} \times \mathcal{W}_{p^\ell})$. It turns out that for $p \not \in \mathcal{P}$, the GCD graph
$$
G_{p^k,p^\ell} := (\mu, \mathcal{V}_{p^k},\mathcal{W}_{p^\ell}, \mathcal{E}_{p^k,p^\ell}, \mathcal{P} \cup \{p\}, f_{p^k},g_{p^\ell})
$$
is a GCD subgraph of $G$ (where $f_{p^k}$ resp.\ $g_{p^\ell}$ are defined in such a way that they coincide with $f$ resp.\ $g$ on $\mathcal{P}$, and $f_{p^k}(p)=k$ and $g_{p^{\ell}}(p)=\ell$).
For a GCD graph $G = (\mu,\mathcal{V},\mathcal{W},\mathcal{E},\mathcal{P},f,g)$ we define
\begin{enumerate}[(i)]
\item The \emph{edge density}
$$
\delta(G) = \frac{\mu(\mathcal{E})}{\mu(\mathcal{V}) \mu(\mathcal{W})},
$$
provided that $\mu(\mathcal{V}) \mu(\mathcal{W}) \neq 0$.
If $\mu(\mathcal{V}) \mu(\mathcal{W}) = 0$, we define $\delta(G)$ to be $0$.
\item The \emph{neighborhood sets}
$$
\Gamma_G(v) = \left\{ w \in \mathcal{W}:~(v,w) \in \mathcal{E} \right\}, \qquad v \in \mathcal{V},
$$
and
$$
\Gamma_G(w) = \left\{v \in \mathcal{V}:~(v,w) \in \mathcal{E} \right\}, \qquad w \in \mathcal{W}.
$$
\item The set $\mathcal{R}(G)$ of primes that have not (yet) been accounted for in the GCD graph:
$$
\mathcal{R}(G) = \left\{ p \not\in \mathcal{P}:~\exists (v,w) \in \mathcal{E} \text{ such that } p \mid \gcd(v,w) \right\}.
$$
\item The \emph{quality}
$$
q(G) = \delta (G)^{10} \mu(\mathcal{V}) \mu(\mathcal{W}) \prod_{p \in \mathcal{P}} \frac{p^{|f(p)-g(p)|}}{\left(1 - \mathbbm{1}_{f(p)=g(p) \geq 1}/p\right)^2 \left(1 - p^{-31/30} \right)^{10}}.
$$
\end{enumerate}
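In particular, if $\mathcal{P} = \emptyset$, then the product in the definition of the quality is empty, so that $q(G) = \delta (G)^{10} \mu(\mathcal{V}) \mu(\mathcal{W})$; we will use this simple observation several times below.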
This notion of quality of a GCD graph is an ad-hoc definition, which turns out to serve the required purpose for the argument of~\cite{km}. We refer to~\cite{km} for the heuristic reasoning which led to this particular definition. It is possible that a modified notion of quality would be better suited for the argument in the present paper. However, we preferred to stick to the original definition of quality from~\cite{km}, since this allows us to directly use a large part of the iteration procedure from~\cite{km} without the need to adapt it to a modified framework.
We also introduce
\[ \mathcal{R}^{\twonotes}(G) := \left\{ p \in \mathcal{R}(G) \, : \, \forall k \ge 0 \,\,\, \min \left\{ \frac{\mu(\mathcal{V}_{p^k})}{\mu(\mathcal{V})},\frac{\mu(\mathcal{W}_{p^k})}{\mu(\mathcal{W})} \right\} \le 1 - \frac{1}{\sqrt{p}} \right\} . \]
This should be compared to the sets $\mathcal{R}^{\sharp}(G)$ and $\mathcal{R}^{\flat}(G)$ used in~\cite{km}, the latter of which is defined analogously to our $\mathcal{R}^{\twonotes}(G)$, but with $1-10^{40}/p$ instead of $1-1/\sqrt{p}$. Finally, we define
\[\mathcal{P}_{\text{diff}}(G) := \{p \in \mathcal{P} \, : \, f(p) \neq g(p) \} .\]
Among the basic properties of GCD graphs are the facts that $G_1 \preceq G_2$ and $G_2 \preceq G_3$ together imply $G_1 \preceq G_3$ (transitivity), and that $G_1 \preceq G_2$ implies $\mathcal{R}(G_1) \subseteq \mathcal{R}(G_2)$. However, in general $G_1 \preceq G_2$ does not imply that $\mathcal{R}^{\twonotes}(G_1) \subseteq \mathcal{R}^{\twonotes}(G_2)$, since passing to smaller vertex sets can change the ratios $\mu(\mathcal{V}_{p^k})/\mu(\mathcal{V})$ and $\mu(\mathcal{W}_{p^k})/\mu(\mathcal{W})$ appearing in the definition of $\mathcal{R}^{\twonotes}$.
\section{Good GCD subgraphs} \label{sec_6}
In this section, we state two results on the existence of a ``good'' GCD subgraph of an arbitrary GCD graph with trivial multiplicative data (i.e.\ $\mathcal{P}=\emptyset$) in the form of Propositions~\ref{prop_goodgcd1} and~\ref{prop_goodgcd2} below; these should be compared to~\cite[Proposition 7.1]{km}. We then show how Propositions~\ref{prop_secondmoment1} and~\ref{prop_secondmoment2} follow from Propositions~\ref{prop_goodgcd1} and~\ref{prop_goodgcd2}, respectively.
\begin{prop}\label{prop_goodgcd1}
Let $G=(\mu,\mathcal{V},\mathcal{W},\mathcal{E},\emptyset,f_\emptyset,g_\emptyset)$ be a GCD graph with trivial set of primes and edge density $\delta (G)>0$. Then there exists a GCD subgraph $G' = (\mu,\mathcal{V}',\mathcal{W}',\mathcal{E}',\mathcal{P}',f',g')$ of $G$ such that
\begin{enumerate}[a)]
\item $\mathcal{R} (G') = \emptyset$.
\item For all $v \in \mathcal{V}'$, we have $\mu(\Gamma_{G'}(v)) \geq \frac{9 \delta(G')}{10} \mu(\mathcal{W}')$.
\item For all $w \in \mathcal{W}'$, we have $\mu(\Gamma_{G'}(w)) \geq \frac{9 \delta(G')}{10} \mu(\mathcal{V}')$.
\item $q(G') \gg q(G)$ with an absolute implied constant.
\end{enumerate}
\end{prop}
\begin{prop}\label{prop_goodgcd2}
Let $G= (\mu,\mathcal{V},\mathcal{W},\mathcal{E},\emptyset,f_{\emptyset},g_{\emptyset})$ be a GCD graph with trivial set of primes, and let $C \ge 1$. Assume that
\[ \mathcal{E} \subseteq \left\{(v,w) \in \mathcal{V} \times \mathcal{W}: L_{F(t)}(v,w) \geq \frac{1}{F(t)^{1/4}}\right\} \quad \textrm{and} \quad \delta(G) \ge \frac{1}{F(t)^{1/2}}\]
with some $t \ge 1$ sufficiently large in terms of $C$. Then there exists a GCD subgraph $G' = (\mu,\mathcal{V}',\mathcal{W}',\mathcal{E}',\mathcal{P}',f',g')$ of $G$ such that
\begin{enumerate}[a)]
\item $\mathcal{R}(G') = \emptyset$.
\item For all $v \in \mathcal{V}'$, we have $\mu(\Gamma_{G'}(v)) \geq \frac{9\delta(G')}{10}\mu(\mathcal{W}')$.
\item For all $w \in \mathcal{W}'$, we have $\mu(\Gamma_{G'}(w)) \geq \frac{9\delta(G')}{10}\mu(\mathcal{V}')$.
\item One of the following holds:
\begin{enumerate}
\item[(i)] $q(G') \gg t^3 q(G)$ with an implied constant depending only on $C$.
\item[(ii)] $q(G') \gg q(G)$ with an implied constant depending only on $C$, and for any $(v,w) \in \mathcal{E}'$, if we write $v = v'\prod_{p \in \mathcal{P}'}p^{f'(p)}$ and $w = w'\prod_{p \in \mathcal{P}'}p^{g'(p)}$, then $L_{F(t)}(v',w') \geq \frac{1}{2F(t)^{1/4}}$.
\end{enumerate}
\end{enumerate}
\end{prop}
\begin{proof}[Proof of Proposition~\ref{prop_secondmoment1}]
Let $\psi: \mathbb{N} \to [0,1/2]$ be a function, let $Q \in \mathbb{N}$ and let $t \ge 1$. Consider the GCD graph $G=(\mu, \mathcal{V}, \mathcal{W}, \mathcal{E}, \emptyset, f_{\emptyset}, g_{\emptyset})$ with the measure $\mu(v)=\frac{\varphi(v)\psi(v)}{v}$, the vertex sets $\mathcal{V}=\mathcal{W}=\{1, 2, \dots, Q\}$, and the edge set
\[ \mathcal{E}= \left\{ (v,w) \in [1,Q]^2 \, : \, D(v,w) \le \frac{\Psi(Q)}{t} \right\} . \]
Note that $\mu(\mathcal{V})=\mu(\mathcal{W})=\Psi(Q)/2$. In the language of GCD graphs, the claim of Proposition \ref{prop_secondmoment1} can equivalently be written as $\mu (\mathcal{E}) \ll \Psi(Q)^2 /t^{1/5}$, that is, $\delta(G) \ll t^{-1/5}$.
By Proposition~\ref{prop_goodgcd1}, there exists a GCD subgraph $G'=(\mu, \mathcal{V}', \mathcal{W}', \mathcal{E}', \mathcal{P}', f', g')$ of $G$ having properties a)--d) of the proposition. Following the steps in~\cite[Proof of Proposition 6.3 assuming Proposition 7.1]{km}, from properties a)--c) we deduce $q(G') \ll \Psi(Q)^2/t^2$. Since $G$ has trivial set of primes, by the definition of quality and property d),
\[ \delta(G)^{10} \mu (\mathcal{V}) \mu (\mathcal{W}) = q(G) \ll q(G') \ll \frac{\Psi(Q)^2}{t^2} . \]
Therefore $\delta(G) \ll t^{-1/5}$, as claimed.
\end{proof}
For the proof of Proposition~\ref{prop_secondmoment2} we will need the following fact about the ``anatomy of integers''; compare this to~\cite[Lemma 7.3]{km}, which is a similar result for a fixed value of $c$ on the right-hand side, rather than allowing $c \to 0$, which, in view of Lemma~\ref{lemma_over} above, will be necessary for our application.
\begin{lemma}\label{lemma_anatomy}
For any real $x,t \geq 1$ and $0<c \le 1$,
$$
\bigg| \bigg\{n \leq x \, : \, \sum_{\substack{p \mid n,\\ p \geq t}} \frac{1}{p} \geq c \bigg\} \bigg| \ll xe^{- 100 ct}
$$
with an absolute implied constant.
\end{lemma}
\begin{proof}
An application of the Markov inequality gives
\[ \begin{split} \bigg| \bigg\{ n \le x \, : \, \sum_{\substack{p \mid n, \\ p \ge t}} \frac{1}{p} \ge c \bigg\} \bigg| &= \bigg| \bigg\{ n \le x \, : \, \exp \bigg( 100t \sum_{\substack{p \mid n, \\ p \ge t}} \frac{1}{p} \bigg) \ge \exp \left( 100ct \right) \bigg\} \bigg| \\ &\le e^{- 100ct} \sum_{n \le x} \prod_{\substack{p \mid n, \\ p \ge t}} e^{100t/p}
.\end{split} \]
Now let $f$ be the multiplicative function defined at prime powers as $f(p^m) = e^{100 t/p}$ if $p \ge t$, and $f(p^m)=1$ if $p<t$. Note that $f(p^m) \le e^{100}$ at all prime powers. Hence by~\cite[Theorem 14.2]{kouk} the partial sums of $f$ satisfy
\[ \begin{split} \sum_{n \le x} \prod_{\substack{p \mid n, \\ p \ge t}} e^{100t/p} = \sum_{n \le x} f(n) \ll x \exp \left( \sum_{p \le x} \frac{f(p)-1}{p} \right) &= x \exp \left( \sum_{t \le p \le x} \frac{e^{100t/p}-1}{p} \right) \\ &= x \exp \left( O \left( \sum_{p \ge t} \frac{t}{p^2} \right) \right) \\ &\ll x, \end{split} \]
where we used that $e^{100t/p}-1 \le e^{100}\, t/p$ for $p \ge t$, and that $\sum_{p \ge t} t/p^2 \ll 1$; all implied constants are absolute.
\end{proof}
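In the proof of Proposition~\ref{prop_secondmoment2} below, Lemma~\ref{lemma_anatomy} will be applied with $F(t)$ in place of the threshold parameter and with $c = 1/(4F(t)^{1/4})$, in which case the bound on the right-hand side reads $\ll x \exp \left( -25 F(t)^{3/4} \right)$.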
\begin{proof}[Proof of Proposition~\ref{prop_secondmoment2}]
Let $\psi: \mathbb{N} \to [0,1/2]$ be a function, let $Q \in \mathbb{N}$ and let $t \ge 1$. Consider the GCD graph $G=(\mu, \mathcal{V}, \mathcal{W}, \mathcal{E}, \emptyset, f_{\emptyset}, g_{\emptyset})$ with the measure $\mu(v)=\frac{\varphi(v)\psi(v)}{v}$, the vertex sets $\mathcal{V}=\mathcal{W}=\{1, 2, \dots, Q\}$, and the edge set
\[ \mathcal{E}= \left\{ (v,w) \in [1,Q]^2 \, : \, D(v,w) \le t \Psi(Q) \quad \textrm{and} \quad L_{F(t)}(v,w) \ge \frac{1}{F(t)^{1/4}} \right\} . \]
Note that $\mu(\mathcal{V})=\mu(\mathcal{W})=\Psi(Q)/2$. In the language of GCD graphs, the claim can equivalently be written as $\mu (\mathcal{E}) \ll \Psi(Q)^2/F(t)^{1/2}$, that is, $\delta(G) \ll F(t)^{-1/2}$. We may assume in the sequel that $\delta(G) \ge F(t)^{-1/2}$ and that $t$ and $F(t)$ are large enough in terms of $C$, since otherwise the claim trivially holds.
By Proposition~\ref{prop_goodgcd2}, there exists a GCD subgraph $G'=(\mu, \mathcal{V}', \mathcal{W}', \mathcal{E}', \mathcal{P}', f', g')$ of $G$ having properties a)--d) of the proposition. Let $a=\prod_{p \in \mathcal{P}'}p^{f'(p)}$ and $b=\prod_{p \in \mathcal{P}'}p^{g'(p)}$. By the definition of a GCD graph, $a \mid v$ for all $v \in \mathcal{V}'$ and $b \mid w$ for all $w \in \mathcal{W}'$. Since $\mathcal{R}(G')=\emptyset$, we also have $\gcd (v,w)=\gcd (a,b)$ for all $(v,w) \in \mathcal{E}'$. Following the steps in~\cite[Proof of Proposition 6.3 assuming Proposition 7.1]{km}, we deduce from properties a)--c) of Proposition~\ref{prop_goodgcd2} that
\begin{equation}\label{qG'upperboundfromkm}
q(G') \ll ab \Psi(Q)^2 t^2 \sum_{(v,w) \in \mathcal{E}'} \frac{1}{w_0 v_{\max}(w)} \le \Psi (Q)^2 t^2 ,
\end{equation}
where $w_0=\max \mathcal{W}'$ and $v_{\max}(w)=\max \{ v \in \mathcal{V}' \, : \, (v,w) \in \mathcal{E}' \}$.
Assume first that $G'$ satisfies property d)(i) in Proposition~\ref{prop_goodgcd2}, that is, $q(G') \gg t^3 q(G)$. Since $G$ has trivial set of primes, by the definition of quality and \eqref{qG'upperboundfromkm} we obtain
\[ \delta(G)^{10} \mu (\mathcal{V}) \mu (\mathcal{W}) = q(G) \ll t^{-3} q(G') \ll \frac{\Psi(Q)^2}{t}. \]
Therefore $\delta(G) \ll t^{-1/10} \ll F(t)^{-1/2}$, as claimed.
Assume next that $G'$ satisfies property d)(ii) in Proposition~\ref{prop_goodgcd2}, that is, $q(G') \gg q(G)$, and for any $(v,w) \in \mathcal{E}'$, if we write $v = av'$ and $w = bw'$, then $L_{F(t)}(v',w') \geq \frac{1}{2F(t)^{1/4}}$. Note that here $\gcd (v',w')=1$. As in the first case, we have
\[ \begin{split} \delta(G)^{10} \mu (\mathcal{V}) \mu (\mathcal{W}) = q(G) \ll q(G') &\ll ab \Psi(Q)^2 t^2 \sum_{(v,w) \in \mathcal{E}'} \frac{1}{w_0 v_{\max}(w)} \\ &\le \frac{ab \Psi(Q)^2 t^2}{w_0} \sum_{1 \le w' \le w_0/b} \frac{1}{v_{\max}(bw')} \sum_{\substack{1 \le v' \le v_{\max}(bw')/a, \\ L_{F(t)}(v',w') \ge 1/(2F(t)^{1/4})}} 1 . \end{split} \]
For the sake of readability, define $R_s(n)=\sum_{p \mid n,~p \ge s} 1/p$ for any $n \in \mathbb{N}$ and $s \ge 1$. Then $1/(2F(t)^{1/4}) \le L_{F(t)}(v',w') = R_{F(t)}(v') + R_{F(t)}(w')$ implies that $R_{F(t)}(v') \ge 1/(4F(t)^{1/4})$ or $R_{F(t)}(w') \ge 1/(4F(t)^{1/4})$. The previous formula thus shows that $\delta (G)^{10} \ll S_1+S_2$ with
\[ \begin{split} S_1 &= \frac{ab t^2}{w_0} \sum_{1 \le w' \le w_0/b} \frac{1}{v_{\max}(bw')} \sum_{\substack{1 \le v' \le v_{\max}(bw')/a, \\ R_{F(t)}(v') \ge 1/(4F(t)^{1/4})}} 1, \\ S_2 &= \frac{ab t^2}{w_0} \sum_{\substack{1 \le w' \le w_0/b, \\ R_{F(t)}(w') \ge 1/(4F(t)^{1/4})}} \frac{1}{v_{\max}(bw')} \sum_{1 \le v' \le v_{\max}(bw')/a} 1 . \end{split} \]
An application of Lemma~\ref{lemma_anatomy} with $x=v_{\max}(bw')/a$ and $c=1/(4F(t)^{1/4})$ yields
\[ S_1 \ll \frac{bt^2}{w_0} \sum_{1 \le w' \le w_0/b} \exp \left( -25 F(t)^{3/4} \right) = t^2 \exp \left( -25 F(t)^{3/4} \right) \ll t^{-100} . \]
Another application of Lemma~\ref{lemma_anatomy} with $x=w_0/b$ and $c=1/(4F(t)^{1/4})$ similarly yields
\[ S_2 = \frac{bt^2}{w_0} \sum_{\substack{1 \le w' \le w_0/b, \\ R_{F(t)}(w') \ge 1/(4F(t)^{1/4})}} 1 \ll t^2 \exp \left( -25 F(t)^{3/4} \right) \ll t^{-100}. \]
Therefore $\delta(G) \ll (S_1+S_2)^{1/10} \ll t^{-10} \ll F(t)^{-10}$, as claimed.
\end{proof}
\section{Four technical lemmas} \label{sec_7}
In this section, we state four lemmas on GCD subgraphs, and show that Propositions~\ref{prop_goodgcd1} and~\ref{prop_goodgcd2} follow from these four lemmas. The key technical improvement in comparison with the iteration argument of~\cite{km} is in Lemma~\ref{quality_density_lemma} below, which more carefully balances the quality gain versus the potential density loss of the iteration procedure. The ratio of quality gain vs.\ density loss which is necessary for the proof of Theorem~\ref{th2} is determined by the range of admissible parameters $u$ and $A$ in Lemma~\ref{lemma_over}, and what Lemma~\ref{quality_density_lemma} provides is just enough for a successful completion of the proof. Lemma~\ref{lem84}, which should be compared to~\cite[Lemma 8.4]{km}, and Lemma~\ref{empty_R} follow from results in~\cite{km} in a more or less straightforward way. Finally, for the convenience of the reader, we cite~\cite[Lemma 8.5]{km} in the form of Lemma~\ref{lem85}.
\begin{lemma}\label{quality_density_lemma}
Let $G = (\mu,\mathcal{V},\mathcal{W},\mathcal{E},\emptyset,f_{\emptyset},g_{\emptyset})$ be a GCD graph with trivial set of primes and $\delta(G) > 0$. Let $C \ge 1$, and let $t \ge 1$ be sufficiently large in terms of $C$. Then there exists a GCD subgraph $G' \preceq G$ such that $\mathcal{R}^{\twonotes}(G') = \emptyset$, and at least one of the following two statements holds:
\begin{enumerate}[a)]
\item $q(G') \ge t^3 q(G)$.
\item $q(G') \gg q(G), \quad \frac{\delta(G')}{\delta(G)} \ge \frac{1}{F(t)^{1/4}} ,\quad \lvert \mathcal{P}_{\text{diff}}(G')\rvert \le \log t$ with an implied constant depending only on $C$.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{lem84}
Let $G= (\mu,\mathcal{V},\mathcal{W},\mathcal{E},\mathcal{P},f,g)$ be a GCD graph. Assume that
\[\delta(G) \geq \frac{1}{s^{1/4}},\quad \mathcal{R}^{\twonotes}(G) = \emptyset, \quad
\mathcal{E} \subseteq \left\{(v,w) \in \mathcal{V} \times \mathcal{W}: L_{s}(v,w) \geq \frac{1}{s^{1/4}}\right\} \]
with a sufficiently large $s \ge 1$. Then there exists a GCD subgraph $G' = (\mu,\mathcal{V},\mathcal{W},\mathcal{E}',\mathcal{P},f,g)$ of $G$ such that
\[ q(G') \geq \frac{q(G)}{2} \quad \text{and} \quad \mathcal{E}' \subseteq \Bigg\{(v,w) \in \mathcal{V} \times \mathcal{W}: \sum_{\substack{p \mid \frac{vw}{\gcd(v,w)^2},\\ p \geq s, \,\, p \notin \mathcal{R}(G)}} \frac{1}{p} \geq \frac{3}{4s^{1/4}}\Bigg\}.\]
\end{lemma}
\begin{lemma}\label{empty_R}
Let $G=(\mu,\mathcal{V},\mathcal{W},\mathcal{E},\mathcal{P},f,g)$ be a GCD graph with $\delta(G) > 0$. Then there exists a GCD subgraph $G'=(\mu, \mathcal{V}', \mathcal{W}', \mathcal{E}', \mathcal{P}', f', g')$ of $G$ such that
\[ \mathcal{P}' \subseteq \mathcal{P} \cup \mathcal{R}(G), \quad \mathcal{R}(G') = \emptyset, \quad q(G') \gg q(G) \]
with an absolute implied constant.
\end{lemma}
\begin{lemma}[{\cite[Lemma 8.5]{km}}] \label{lem85}
Let $G= (\mu,\mathcal{V},\mathcal{W},\mathcal{E},\mathcal{P},f,g)$ be a GCD graph with $\delta(G) > 0$. Then there exists a GCD subgraph $G' = (\mu,\mathcal{V}',\mathcal{W}',\mathcal{E}',\mathcal{P},f,g)$ of $G$ such that:
\begin{enumerate}
\item $q(G') \geq q(G)$.
\item $\delta(G') \geq \delta(G)$.
\item For all $v \in \mathcal{V}'$ and $w \in \mathcal{W}'$, we have
\[\mu(\Gamma_{G'}(v)) \geq \frac{9\delta(G')}{10}\mu(\mathcal{W}') \quad \textrm{and} \quad \mu(\Gamma_{G'}(w)) \geq \frac{9\delta(G')}{10}\mu(\mathcal{V}').\]
\end{enumerate}
\end{lemma}
We now show how Lemmas~\ref{quality_density_lemma}--\ref{lem85} imply Propositions~\ref{prop_goodgcd1} and~\ref{prop_goodgcd2}.
\begin{proof}[Proof of Proposition~\ref{prop_goodgcd1}] Apply Lemma~\ref{empty_R} to $G$ to obtain a GCD subgraph $G^{(1)} \preceq G$ with $\mathcal{R}(G^{(1)})= \emptyset$ and $q(G^{(1)}) \gg q(G)$, satisfying properties a) and d). Next, apply Lemma~\ref{lem85} to $G^{(1)}$ to obtain a GCD subgraph $G^{(2)} \preceq G^{(1)}$ which additionally satisfies properties b) and c).
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop_goodgcd2}] We follow~\cite[Proof of Proposition 7.1]{km}, although the ordering of the different stages needs to be changed. It suffices to prove the existence of a GCD subgraph which satisfies properties a) and d). Indeed, applying Lemma~\ref{lem85} to such a subgraph, we obtain a GCD subgraph that satisfies all required properties a)--d).
We start by applying Lemma~\ref{quality_density_lemma} to $G$, and obtain a GCD subgraph $G^{(1)} \preceq G$ such that $\mathcal{R}^{\twonotes}(G^{(1)}) = \emptyset$, and $G^{(1)}$ satisfies at least one of the following properties:
\begin{enumerate}[A)]
\item $q(G^{(1)}) \ge t^3 q(G)$.
\item $q(G^{(1)}) \gg q(G), \quad \frac{\delta(G^{(1)})}{\delta(G)} \ge \frac{1}{F(t)^{1/4}},\quad \lvert\mathcal{P}_{\text{diff}}(G^{(1)})\rvert \le \log t$.
\end{enumerate}
We distinguish between two cases depending on whether A) or B) is satisfied.
Case A). Assume that $q(G^{(1)}) \ge t^3 q(G)$. We apply Lemma~\ref{empty_R} to obtain a GCD subgraph $G^{(2A)} \preceq G^{(1)}$ with $\mathcal{R}(G^{(2A)}) = \emptyset$ and $q(G^{(2A)}) \gg q(G^{(1)})$. Then $G^{(2A)}$ satisfies properties a) and d)(i) in Proposition~\ref{prop_goodgcd2}. This finishes the proof for Case A).
Case B). Assume that $q(G^{(1)}) \gg q(G),\;\frac{\delta(G^{(1)})}{\delta(G)} \ge \frac{1}{F(t)^{1/4}},\; \lvert\mathcal{P}_{\text{diff}}(G^{(1)})\rvert \le \log t$. First, we remove the effect of the large primes in $\mathcal{R}(G^{(1)})$ on $L_{F(t)}(v,w)$. By the assumption $\delta(G) \ge 1/ F(t)^{1/2}$, we have $\delta(G^{(1)}) \ge 1/F(t)^{1/4}$. We can thus apply Lemma~\ref{lem84} to $G^{(1)}$ with $s = F(t)$ to obtain a GCD subgraph $G^{(2B)} \preceq G^{(1)}$ with edge set $\mathcal{E}^{(2B)}$ such that
\[ q(G^{(2B)}) \geq \frac{q(G^{(1)})}{2} \quad \textrm{and} \quad \mathcal{E}^{(2B)} \subseteq \Bigg\{(v,w) \in \mathcal{V} \times \mathcal{W}: \sum_{\substack{p \mid \frac{vw}{\gcd(v,w)^2},\\ p \geq F(t), \,\, p \notin \mathcal{R}(G^{(1)})}} \frac{1}{p} \geq \frac{3}{4F(t)^{1/4}}\Bigg\}.\]
Now we remove the contribution of the primes in $\mathcal{P}_{\text{diff}}(G^{(1)})$. Using $\lvert\mathcal{P}_{\text{diff}}(G^{(1)})\rvert \le \log t$, we obtain that for any $(v,w) \in \mathcal{E}^{(2B)}$,
\[ \sum_{\substack{p \mid \frac{vw}{\gcd(v,w)^2},\\ p \geq F(t), \,\, p \in \mathcal{P}_{\text{diff}}(G^{(1)})}} \frac{1}{p} \le \frac{\log t}{F(t)} \le \frac{1}{4F(t)^{1/4}} \]
for large enough $t$. Hence for any $(v,w) \in \mathcal{E}^{(2B)}$,
\begin{equation}\label{without_diff}\sum_{\substack{p \mid \frac{vw}{\gcd(v,w)^2},\\ p \geq F(t), \,\, p \notin \mathcal{R}(G^{(1)}) \cup \mathcal{P}_{\text{diff}}(G^{(1)})}} \frac{1}{p} \geq \frac{1}{2 F(t)^{1/4}}.
\end{equation}
Finally, we apply Lemma~\ref{empty_R} to $G^{(2B)}$ to obtain a GCD subgraph $G^{(3B)} \preceq G^{(2B)}$ such that
\[ \mathcal{R}(G^{(3B)}) = \emptyset \quad \textrm{and} \quad q(G^{(3B)}) \gg q(G^{(2B)}) \gg q(G). \]
Thus $G^{(3B)}$ satisfies property a) in Proposition~\ref{prop_goodgcd2}. Following the steps in Stage 4b of~\cite[Proof of Proposition 7.1]{km}, we deduce from \eqref{without_diff} that $G^{(3B)}$ satisfies property d)(ii) as well. This finishes the proof for Case B).
\end{proof}
\section{Proof of Lemmas~\ref{lem84} and~\ref{empty_R}} \label{sec_8}
\begin{proof}[Proof of Lemma~\ref{lem84}]
Define
\[S(v,w) = \sum_{\substack{p \mid \frac{vw}{\gcd(v,w)^2}, \\ p \geq s, \,\, p \in \mathcal{R}(G)}} \frac{1}{p} . \]
Following the steps in~\cite[Proof of Lemma 8.4]{km}, from the assumptions $\mathcal{R}^{\twonotes}(G) = \emptyset$ and $\delta(G) \ge 1/s^{1/4}$ we deduce that
\[ \sum_{(v,w) \in \mathcal{E}} \mu(v) \mu(w) S(v,w) \leq \sum_{p \geq s}\frac{2 \mu(\mathcal{V})\mu(\mathcal{W})}{p^{3/2}}
\leq \frac{\mu(\mathcal{E})}{100s^{1/4}} \]
for large enough $s$. Consider the edge set
\[\mathcal{E}' := \Bigg\{(v,w) \in \mathcal{E}: S(v,w) \leq \frac{1}{4 s^{1/4}}\Bigg\} . \]
An application of the Markov inequality gives
\[ \mu(\mathcal{E}\setminus \mathcal{E'}) \leq 4s^{1/4} \sum_{(v,w) \in \mathcal{E}} \mu(v)\mu(w)S(v,w) \leq \frac{\mu(\mathcal{E})}{25}, \]
that is, $\mu(\mathcal{E'}) \geq (24/25)\mu(\mathcal{E})$. By the definition of quality, the GCD subgraph $G' := (\mu,\mathcal{V},\mathcal{W},\mathcal{E}',\mathcal{P},f,g)$ thus satisfies
\[ \frac{q(G')}{q(G)} = \left(\frac{\mu(\mathcal{E}')}{\mu(\mathcal{E})}\right)^{10} \geq \frac{1}{2}. \]
Further, for any $(v,w) \in \mathcal{E}'$ we have
\[\sum_{\substack{p \mid \frac{vw}{\gcd(v,w)^2}, \\ p \geq s, \,\, p \notin \mathcal{R}(G)}}\frac{1}{p} = L_{s}(v,w) - S(v,w) \geq \frac{3}{4s^{1/4}}, \]
as claimed.
\end{proof}
To prove Lemma~\ref{empty_R}, we will apply the following two propositions in an iterative way.
\begin{prop} \label{prop_iteration1} Let $G=(\mu,\mathcal{V},\mathcal{W},\mathcal{E},\mathcal{P},f,g)$ be a GCD graph with $\delta(G) > 0$. Then there is a GCD subgraph $G'=(\mu, \mathcal{V}', \mathcal{W}', \mathcal{E}', \mathcal{P}', f',g')$ of $G$ such that
\[ \mathcal{P}' \subseteq \mathcal{P} \cup (\mathcal{R}(G) \cap \{ p \le 10^{2000} \}), \quad \mathcal{R}(G') \subseteq \{p > 10^{2000}\}, \quad \frac{q(G')}{q(G)} \geq \frac{1}{10^{10^{3000}}}. \]
\end{prop}
\begin{proof} This is a slight modification of~\cite[Proposition 8.3]{km}, the only difference being that in our formulation the set $\mathcal{P}$ can be non-empty. The proof given in~\cite{km} actually covers the formulation stated above, since it only relies on the iterative application of~\cite[Lemma 13.2]{km}, which holds for GCD graphs with an arbitrary set of primes.
\end{proof}
\begin{prop} \label{prop_iteration2} Let $G=(\mu,\mathcal{V},\mathcal{W},\mathcal{E},\mathcal{P},f,g)$ be a GCD graph with $\delta(G) > 0$ such that $\emptyset \neq \mathcal{R}(G) \subseteq \{p > 10^{2000}\}$. Then there is a GCD subgraph $G'=(\mu, \mathcal{V}', \mathcal{W}', \mathcal{E}', \mathcal{P}', f',g')$ of $G$ such that
\[ \mathcal{P} \subsetneq \mathcal{P}' \subseteq \mathcal{P}\cup \mathcal{R}(G), \quad \mathcal{R}(G') \subsetneq \mathcal{R}(G),\quad q(G') \geq q(G). \]
\end{prop}
\begin{proof} This follows directly from~\cite[Propositions 8.1 and 8.2]{km}.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{empty_R}] First, we apply Proposition~\ref{prop_iteration1} to obtain a GCD subgraph $G^{(1)} \preceq G$ with
\[ \mathcal{R}(G^{(1)}) \subseteq \{p>10^{2000}\} \quad \textrm{and} \quad q(G^{(1)}) \gg q(G). \]
If $\mathcal{R}(G^{(1)}) = \emptyset$, we are done. Otherwise, we apply Proposition~\ref{prop_iteration2} to obtain a GCD subgraph $H_1 \preceq G^{(1)}$ with $\mathcal{R}(H_1) \subsetneq \mathcal{R}(G^{(1)})$ and $q(H_1) \geq q(G^{(1)}).$ By iterating this argument, we obtain a chain of GCD subgraphs $G^{(1)} \succeq H_1 \succeq H_2 \succeq \cdots$ with
\[ \mathcal{R}(G^{(1)}) \supsetneq \mathcal{R}(H_1) \supsetneq \mathcal{R}(H_2) \supsetneq \cdots \quad \textrm{and} \quad q(G^{(1)}) \leq q(H_1) \leq q(H_2) \leq \cdots . \]
Since $\mathcal{R}(G^{(1)})$ is a finite set, we arrive after finitely many steps at a GCD subgraph $G' \preceq G$ with $\mathcal{R}(G') = \emptyset$ and $q(G') \geq q(G^{(1)}) \gg q(G)$. Furthermore, we have $\mathcal{P}' \subseteq \mathcal{P} \cup \mathcal{R}(G)$ since this property is preserved at each step.
\end{proof}
\section{Quality increment vs.\ density loss} \label{sec_9}
The goal of this section is to prove Lemma~\ref{quality_density_lemma}. We start with three preliminary results.
\begin{lemma} \label{lemma121} Let $G=(\mu,\mathcal{V},\mathcal{W},\mathcal{E},\mathcal{P},f,g)$ be a GCD graph with $\delta(G)>0$, let $p \in\mathcal{R}(G)$, and let
\[ \alpha_k= \frac{\mu(\mathcal{V}_{p^k})}{\mu(\mathcal{V})} \qquad \text{and} \qquad \beta_l= \frac{\mu(\mathcal{W}_{p^l})}{\mu(\mathcal{W})}. \]
Then there exists a pair of non-negative integers $(k,l)=(k_p,l_p)$ such that $\alpha_k, \beta_l >0$, and
\[ \frac{\mu(\mathcal{E}_{p^k,p^l})}{\mu(\mathcal{E})} \geq \left\{ \begin{array}{ll} (\alpha_k \beta_k)^{9/10} & \textrm{if } k=l, \\ \frac{\alpha_k (1 - \beta_k) + \beta_k (1-\alpha_k) + \alpha_l (1-\beta_l) + \beta_l (1-\alpha_l)}{40 |k-l|^2} & \textrm{if } k \neq l . \end{array} \right. \]
\end{lemma}
\begin{proof} This follows from a straightforward modification of the proof of~\cite[Lemma 12.1]{km}, replacing the estimate$\frac{1}{1000} \sum_{|j|\geq 1} 2^{-|j|/20} \leq \frac{1}{10}$ by $\sum_{|j| \geq 1} \frac{1}{40 j^2} \leq \frac{1}{10}$ in one of the steps.
\end{proof}
\begin{lemma}\label{optim_2} Let $\alpha_k,\beta_k,\alpha_l,\beta_l \in [0,1]$ with $\alpha_k, \beta_l >0$ be such that $\alpha_k + \alpha_l \leq 1$ and $\beta_k + \beta_l \leq 1$, and let
\[ S = \alpha_k (1 - \beta_k) + \beta_k (1-\alpha_k) + \alpha_l (1-\beta_l) + \beta_l (1-\alpha_l). \]
If $\min\{\alpha_k,\beta_k\} \leq 1-R$ and $\min\{\alpha_{l},\beta_{l}\} \leq 1-R$ with some $R \in \left[0,1/\sqrt{2} \right]$, then $\frac{S^2}{\alpha_k\beta_{l}} \geq \frac{R}{2}$.
\end{lemma}
\begin{proof} Clearly,
\begin{equation} \label{S_larger_ab}
S \geq \alpha_k(1-\beta_k) + \beta_{l}(1-\alpha_{l}) \geq \alpha_k\beta_{l} + \beta_{l}\alpha_k = 2\alpha_k\beta_{l} .
\end{equation}
Since the conditions of the lemma and $S$ are invariant under switching $\alpha_k$ with $\beta_l$ and $\alpha_l$ with $\beta_k$, respectively, we may assume that $\alpha_k \geq \beta_{l}$.
Assume first that $\alpha_k \le 1/2$. Then $\beta_l \le 1/2$ as well, hence
\[ S = \beta_k(1 - 2\alpha_k) + \alpha_k + \alpha_{l}(1 - 2\beta_{l}) + \beta_{l} \geq \alpha_k + \beta_{l} \ge 2 \sqrt{\alpha_k \beta_l} . \]
Therefore $S^2/(\alpha_k \beta_l) \ge 4>R/2$, as claimed.
Assume next that $\alpha_k>1/2$. Formula \eqref{S_larger_ab} then gives
\[ 1 - \beta_k \le 2\alpha_k(1-\beta_k) \leq 2S \leq \frac{S^2}{\alpha_k\beta_{l}} . \]
If $\beta_k \le 1-R$, then $R \le 1-\beta_k \le S^2/(\alpha_k \beta_l)$, as claimed. If $\beta_k>1-R>1/4$, then by the assumption $\min \{ \alpha_k, \beta_k \} \le 1-R$ we have $\alpha_k \le 1-R$, and we similarly deduce
\[ R \le 1-\alpha_k \le 4 \beta_k (1-\alpha_k) \le 4S \le 2 \frac{S^2}{\alpha_k \beta_l}, \]
which finishes the proof of the statement.
\end{proof}
The following lemma is a variant of~\cite[Lemma 12.2]{km}.
\begin{lemma} \label{lemma122} Consider a GCD graph $G=(\mu,\mathcal{V},\mathcal{W},\mathcal{E},\mathcal{P},f,g)$ with $\delta(G)>0$ and a prime $p \in \mathcal{R}^{\twonotes}(G)$. Let $(k,l)=(k_p,l_p)$ be a pair of non-negative integers which satisfies the conclusion of Lemma~\ref{lemma121}. Then there is a GCD subgraph $G'=(\mu,\mathcal{V}',\mathcal{W}',\mathcal{E}',\mathcal{P}',f',g')$ of $G$ with $\mathcal{P}' = \mathcal{P} \cup \{p\}$ and $\mathcal{R}(G') \subseteq \mathcal{R}(G) \backslash \{p\}$ such that
\[ \frac{\delta(G')}{\delta(G)} \ge \left\{ \begin{array}{ll} 1 & \textrm{if } k=l, \\ \frac{1}{20 |k-l|^2} & \textrm{if } k \neq l, \end{array} \right. \]
and
\[ \frac{q(G')}{q(G)} \ge \left\{ \begin{array}{ll} 1 & \textrm{if } k=l, \\ \frac{p^{|k-l| -1/2}}{10^{15} |k-l|^{20}} & \textrm{if } k \neq l. \end{array} \right. \]
\end{lemma}
\begin{proof} We claim that $G'= G_{p^k, p^l}$ satisfies all required properties. Note that $\mathcal{P}' = \mathcal{P} \cup \{p\}$ and $\mathcal{R}(G') \subseteq \mathcal{R}(G) \backslash \{p\}$ hold by the definition of $G_{p^k, p^l}$. If $k = l$, then by Lemma~\ref{lemma121} and the definition of quality,
\[ \frac{\delta(G')}{\delta(G)} = \frac{\mu(\mathcal{E}_{p^k,p^k})}{\mu(\mathcal{E})} \cdot \frac{1}{\alpha_k\beta_k} \geq 1, \]
and
\[ \frac{q(G')}{q(G)} = \left(\frac{\mu(\mathcal{E}_{p^k,p^k})}{\mu(\mathcal{E})}\right)^{10}(\alpha_k\beta_k)^{-9}\frac{1}{(1 - \mathbbm{1}_{k \geq 1}/p)^2(1 - 1/p^{31/30})^{10}}\geq 1 , \]
as claimed. Let $S$ be as in Lemma~\ref{optim_2}. If $k \neq l$, then by Lemma~\ref{lemma121} together with \eqref{S_larger_ab},
\[ \frac{\delta(G')}{\delta(G)} = \frac{\mu(\mathcal{E}_{p^k,p^l})}{\mu(\mathcal{E})} \cdot \frac{1}{\alpha_k \beta_l} \geq \frac{S}{40 |k-l|^2 \alpha_k \beta_l} \geq \frac{1}{20 |k-l|^2}.\]
Furthermore,
\[ \begin{split} \frac{q(G')}{q(G)} = \left(\frac{\mu(\mathcal{E}_{p^k,p^l})}{\mu(\mathcal{E})}\right)^{10}(\alpha_k\beta_l)^{-9} \frac{p^{|k-l|}}{(1 - 1/p^{31/30})^{10}} &\geq \frac{S^{10}}{(40|k-l|^2)^{10}} \cdot \frac{1}{(\alpha_k \beta_l)^9} p^{|k-l|} \\ &\geq \frac{2^8 p^{|k-l|}}{40^{10} |k-l|^{20}} \cdot \frac{S^2}{\alpha_k \beta_l} . \end{split} \]
The assumption $p \in \mathcal{R}^{\twonotes}(G)$ ensures that $\min \{ \alpha_k, \beta_k \} \leq 1 -1/\sqrt{p}$ and $\min \{ \alpha_l, \beta_l \} \leq 1 - 1/\sqrt{p}$. Hence we can apply Lemma~\ref{optim_2} with $R = 1/\sqrt{p}$, which shows that
\[ \frac{q(G')}{q(G)} \geq \frac{2^7 p^{|k-l|-1/2}}{40^{10}|k-l|^{20}} > \frac{p^{|k-l|-1/2}}{10^{15}|k-l|^{20}}, \]
as claimed.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{quality_density_lemma}] We apply Lemma~\ref{lemma122} iteratively to $G$ until we obtain a GCD subgraph $G'=(\mu, \mathcal{V}', \mathcal{W}', \mathcal{E}', \mathcal{P}', f', g')$ of $G$ such that $\mathcal{R}^{\twonotes}(G')=\emptyset$. Note that each prime $p$ is used at most once, and $\mathcal{P}'$ is precisely the set of primes to which Lemma~\ref{lemma122} was applied. For each $p \in \mathcal{P}'$, let $(k_p,l_p)$ be the pair of non-negative integers with which Lemma~\ref{lemma122} is applied.\footnote{We might use primes $p \not\in \mathcal{R}^{\twonotes}(G)$ of the original GCD graph $G$, since $\mathcal{R}^{\twonotes}$ does not necessarily decrease at each step. However, $\mathcal{R}$ decreases by at least one element at each step, hence the algorithm terminates.} Since the original graph $G$ had an empty set of primes, we have $\mathcal{P}_{\textrm{diff}}(G') = \{ p \in \mathcal{P}' \, : \, k_p \neq l_p \}$. By Lemma~\ref{lemma122}, the resulting graph $G'$ satisfies
\[ \frac{\delta(G')}{\delta(G)} \ge \prod_{p \in \mathcal{P}_{\textrm{diff}}(G')} \frac{1}{20 |k_p-l_p|^2} \quad \textrm{and} \quad \frac{q(G')}{q(G)} \ge \prod_{p \in \mathcal{P}_{\textrm{diff}}(G')} \frac{p^{|k_p-l_p|-1/2}}{10^{15} |k_p-l_p|^{20}} . \]
In particular,
\begin{equation}\label{qG'/qGbound}
\frac{q(G')}{q(G)} \gg \prod_{p \in \mathcal{P}_{\textrm{diff}}(G')} p^{|k_p-l_p|/4} \gg 1.
\end{equation}
Fix $C \ge 1$, and let $t \ge 1$ be large enough in terms of $C$. Let $N=|\mathcal{P}_{\textrm{diff}}(G')|$, and for the sake of readability, in the sequel let $\log_i$ denote the $i$-fold iterated logarithm. It will be enough to show that if $q(G')<t^3 q(G)$ (i.e.\ property a) does not hold), then $\delta(G')/\delta(G) \ge 1/F(t)^{1/4}$, and $N \le \log t$ (i.e.\ property b) holds). The latter follows easily from \eqref{qG'/qGbound} and $q(G')<t^3 q(G)$:
\[ (N!)^{1/4} \le \prod_{p \in \mathcal{P}_{\textrm{diff}}(G')} p^{|k_p-l_p|/4} \ll \frac{q(G')}{q(G)} < t^3, \]
where the first inequality holds because $\mathcal{P}_{\textrm{diff}}(G')$ consists of $N$ distinct primes, so that their product is at least $N!$, and $|k_p-l_p| \ge 1$ for each of them.
Hence $N \ll (\log t)/\log_2 t$, and in particular, $N \le \log t$ for large enough $t$, as claimed. It remains to show that $q(G')<t^3 q(G)$ implies $\delta(G')/\delta(G) \ge 1/F(t)^{1/4}$.
Let $Y=\{ p \in \mathcal{P}_{\mathrm{diff}}(G') \, : \, |k_p-l_p| \ge \log_3 t \}$. Bounding the sum term by term gives
\begin{equation}\label{pnotinY}
\sum_{p \not\in Y} \log (20 |k_p-l_p|^2) \ll N \log_4 t \ll \frac{\log t \log_4 t}{\log_2t } .
\end{equation}
On the other hand, \eqref{qG'/qGbound} and the assumption $q(G')<t^3 q(G)$ lead to
\[ \log t \gg \sum_{p \in \mathcal{P}_{\mathrm{diff}}(G')} |k_p-l_p| \log p \ge \log_3 t \sum_{p \in Y} \log p \gg (\log_3 t) |Y| \log |Y|, \]
hence $|Y| \ll (\log t)/(\log_2 t \log_3 t)$. The previous formula also shows that $\sum_{p \in Y} |k_p-l_p| \ll \log t$. An application of the inequality of arithmetic and geometric means thus yields
\[ \begin{split} \sum_{p \in Y} \log (20 |k_p-l_p|^2) \le 2 \sum_{p \in Y} \log (20 |k_p-l_p|) &\le 2|Y| \log \frac{\sum_{p \in Y} 20|k_p-l_p|}{|Y|} \\ &\ll |Y| \log \left(\frac{\log t}{|Y|}\right) \\ &\ll \frac{\log t}{\log_2 t}. \end{split} \]
The previous formula and \eqref{pnotinY} thus give
\[ -\log \frac{\delta(G')}{\delta(G)} \le \sum_{p \in \mathcal{P}_{\mathrm{diff}}(G')} \log (20|k_p-l_p|^2) \ll \frac{\log t}{\log_2 t} . \]
Hence $-\log (\delta(G')/\delta(G)) \le (1/4) \log F(t)$ for large enough $t$, that is, $\delta(G')/\delta(G) \ge 1/F(t)^{1/4}$, and we obtain the desired result.
\end{proof}
\section*{Acknowledgments}
CA is supported by the Austrian Science Fund (FWF), projects F-5512, I-3466, I-4945, I-5554, P-34763, P-35322 and Y-901. BB is supported by the Austrian Science Fund (FWF), project F-5510.
\end{document}
\begin{document}
\title{Distinguishing chromatic number of Hamiltonian circulant graphs}
\begin{abstract}
The distinguishing chromatic number of a graph $G$ is the smallest number of colors needed to properly color the vertices of $G$ so that the trivial automorphism is the only symmetry of $G$ that preserves the coloring. We investigate the distinguishing chromatic number for Hamiltonian circulant graphs with maximum degree at most 4.
\end{abstract}
\section{Introduction and definitions} \label{sec: intro}
The \emph{distinguishing chromatic number} of a graph $G$, introduced by Collins and Trenk~\cite{CollinsTrenk06} and denoted by $\chi_D(G)$, is the minimum number of colors needed for a proper coloring of the vertices of $G$ such that the only automorphism of $G$ that preserves the coloring is the trivial automorphism. Any such coloring using $r$ colors is called an \emph{$r$-distinguishing} coloring, or simply a \emph{distinguishing coloring} if the value of $r$ is unimportant.
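For example, the path $P_3$ with vertex sequence $u_1 u_2 u_3$ satisfies $\chi(P_3) = 2$, but every proper 2-coloring assigns $u_1$ and $u_3$ the same color and is therefore preserved by the nontrivial automorphism exchanging the two endpoints; using three distinct colors removes this symmetry, so $\chi_D(P_3) = 3$.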
The distinguishing chromatic number was introduced as a proper-coloring analogue of the \emph{distinguishing number} of a graph defined by Albertson and Collins~\cite{AlbertsonCollins96}, which measures the difficulty of breaking symmetries in graphs by coloring vertices (i.e., providing a distinguishing coloring), though without requiring a proper coloring. In this paper, all distinguishing colorings will be assumed to be proper colorings.
In~\cite{CollinsTrenk06}, Collins and Trenk observed that both the ordinary chromatic number $\chi(G)$ and the distinguishing number $D(G)$ serve as lower bounds for $\chi_D(G)$. They proved that $\chi_D(G) = |V(G)|$ if and only if $G$ is a complete multipartite graph, and they also determined $\chi_D(G)$ for various classes of graphs $G$. In particular, they showed the following.
\begin{thm}[\cite{CollinsTrenk06}] \label{thm: cycles}
For $n \geq 3$, the distinguishing chromatic number of $C_n$ is given by
\[
\chi_D(C_n) = \begin{cases}
3 & \text{ if }n \in \{3,5\} \text{ or if }n \geq 7;\\
4 & \text{ if }n \in \{4,6\}.
\end{cases}
\]
\end{thm}
The graph $C_6$ is one example where $\chi_D$ can be strictly greater than both the chromatic number and the distinguishing number (both equal 2 for $C_6$).
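To illustrate the case $n \geq 7$, one may check that coloring the vertices $v_0, \dots, v_6$ of $C_7$ in cyclic order with the colors $1,2,1,2,1,2,3$ yields a proper coloring in which $v_6$ is the unique vertex of color 3. Any color-preserving automorphism must therefore fix $v_6$, and the reflection through $v_6$ is ruled out because it would exchange $v_0$ (color 1) with $v_5$ (color 2); hence the only color-preserving automorphism is the trivial one, and this coloring is 3-distinguishing.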
In this paper we consider circulant graphs, i.e., undirected graphs with vertices $v_0,\dots,v_{n-1}$ in which two vertices are adjacent exactly when the difference of their indices (computed in either order) modulo $n$ lies in a given set of positive integers. If this set of differences is denoted by $D$, then we denote such a graph by $C_n(D)$, and if $D=\{d_1,\dots,d_s\}$, we write the graph as $C_n(d_1,\dots,d_s)$.
For example, the cycle graph $C_n$ is equivalent to the circulant graph $C_n(1)$.
Note that the notation allows for multiple representations for a single graph, since differences may be computed in the opposite order and are reduced modulo $n$. For example, the cycle $C_n$ is also equivalent to
$C_n(n-1)$, and we may replace any element $k$ in the set of allowed differences by $n-k$ without changing the graph. Unless otherwise specified, we will assume here that for an $n$-vertex circulant graph, each difference belongs to $\{1,\dots,\lfloor n/2 \rfloor\}$.
We will extend Theorem~\ref{thm: cycles} by determining the distinguishing chromatic number for various classes of Hamiltonian circulant graphs with maximum degree at most 4. As we will see, in most cases these graphs are similar enough to cycles that the distinguishing chromatic number is 3, and in no infinite family does the number ever rise higher than 5. In particular, we show that for any tetravalent graph $C_n(1, k),$ where $k \neq n/2, n/2 -1,$ and $(n, k) \neq (10, 3),$ the distinguishing chromatic number is at most one more than the chromatic number. As a prelude, here is a summary of our main results.
\begin{thm} \label{thm:summary}
Let $k,n$ be positive integers such that $2 \leq k \leq \lfloor n/2 \rfloor$. The distinguishing chromatic number of $C_n(1,k)$ is given by \[\chi_D(C_n(1,k)) = \begin{cases}
n & \text{ if $(n, k) = (4, 2), (5, 2), (6, 2), (6, 3), (8, 3);$}\\
5 & \text{ if $(n, k) = (10, 3);$}\\
4 & \text{ if $(n, k) = (15, 4), (13, 5);$}\\
3 & \text{ if $k =n/2$ and $n \geq 8;$}\\
5 & \text{ if $k =n/2 -1$ and $n \geq 10;$ }\\
4 & \text{ if $k=2$ or $k= (n-1)/2,$ and $n \geq 7;$}\\
3 & otherwise.
\end{cases}\]
\end{thm}
By otherwise, we mean all tetravalent circulant graphs $C_n(1, k)$ such that $k \neq 2, (n-1)/2,$ and $n/2 -1,$ and $(n, k) \neq (10, 3), (15, 4), (13, 5), (5, 2),$ and $(8, 3).$ The circulant graphs $C_n(1,k)$ with $(n, k)= (4, 2), (5, 2), (6, 3), (8, 3)$ are complete graphs or complete bipartite graphs, so the corresponding results follow from~\cite{CollinsTrenk06}. As for the graph $C_6(1, 2),$ the following argument shows that its distinguishing chromatic number is 6: the vertices $v_i, v_{i+2}, v_{i-2}$ form a triangle and therefore receive three distinct colors, while the antipodal pairs $v_iv_{i+3}, v_{i+2}v_{i+5}, v_{i-2}v_{i-5}$ consist of vertices with identical neighborhoods, so the two vertices in each pair must also receive different colors. The remaining results in Theorem~\ref{thm:summary} are shown in Theorems~\ref{thm: Mobius}, \ref{thm: even n and k= n/2-1}, \ref{thm: k= 2}, \ref{thm: n even, k even}, \ref{thm:both odd}, \ref{thm:n odd and k even}, and Propositions~\ref{thm: n=10, k=3} and~\ref{thm: nondihedral}.
The rest of the paper is organized as follows. In Section~\ref{sec: isos}, we recall facts about isomorphisms and proper colorings of circulant graphs. In Section~\ref{sec: trivalent}, we determine $\chi_D$ for the trivalent Hamiltonian circulant graphs, also known as the M\"obius ladders. In Section~\ref{sec: automorphism groups}, we discuss the automorphism groups of the tetravalent graphs $C_n(1,k).$ In Section~\ref{sec: k = n/2-1}, we give an optimal distinguishing proper coloring of $C_n(1, n/2 -1).$ In Sections~\ref{sec: chi plus 1},~\ref{sec: dihedral symmetries}, and~\ref{sec: k^2 = pm 1}, we prove that the distinguishing chromatic number of tetravalent graphs $C_n(1, k),$ where $k \neq n/2, n/2 -1,$ and $(n, k) \neq (10, 3),$ is at most 1 more than the ordinary chromatic number.
Throughout this paper, we will use $V(G)$ and $E(G)$ to denote the vertex and edge sets of a graph $G$.
\section{Isomorphisms and colorings of circulant graphs} \label{sec: isos}
The results of this paper will deal principally with circulant graphs having the form $C_n(1,k)$ for various $n,k$; the inclusion of 1 as one of the differences forces the graph to be Hamiltonian. Though not investigated in this paper, we note that circulant graphs with other difference sets can also be Hamiltonian, as shown by the example $C_6(2,3)$, which is isomorphic to the triangular prism and is not isomorphic to $C_6(1,k)$ for any $k$.
\begin{figure}
\caption{Isomorphic graphs $C_7(1,2)$ and $C_7(1,3)$.}
\label{fig: isos}
\end{figure}
Recall from Section 1 that isomorphic circulant graphs can have multiple representations using distinct sets of differences. Besides the examples given there, observe that the graph $C_5$ is isomorphic to both $C_5(1)$ and to $C_5(2)$, and Figure~\ref{fig: isos} shows that $C_{7}(1,2)$ is isomorphic to $C_{7}(1,3)$. We may simplify cases in what follows, and extend our results about graphs $C_n(1,k)$ to isomorphic trivalent or tetravalent circulant graphs not expressed in this way by recalling a result of \'Ad\'am~\cite{Adam67}.
\begin{thm}[\cite{Adam67}] \label{thm: iso}
If $\gcd(n,p)=1$, then $C_n(a_1, \cdots, a_t) \cong C_n(pa_1, \cdots, pa_t)$, where multiplication is performed modulo $n$.
\end{thm}
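As a quick illustration (ours, not part of the proofs), the isomorphism between $C_7(1,2)$ and $C_7(1,3)$ shown in Figure~\ref{fig: isos} can be realized with the multiplier $p=3$ of Theorem~\ref{thm: iso}:
\begin{verbatim}
# Sketch: the map v_i -> v_{3i mod 7} carries C_7(1,2) onto C_7(1,3),
# since the differences 1 and 2 are sent to 3 and 6 = -1 (mod 7).
def circulant_edges(n, diffs):
    return {frozenset((i, (i + d) % n)) for i in range(n) for d in diffs}

n, p = 7, 3                                   # gcd(7, 3) = 1
G1, G2 = circulant_edges(n, [1, 2]), circulant_edges(n, [1, 3])
image = {frozenset(((p * a) % n, (p * b) % n)) for a, b in map(tuple, G1)}
assert image == G2
\end{verbatim}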
We specialize this result to the graphs of the form $C_n(1,k)$.
\begin{cor}
If either $a$ or $b$ is relatively prime to $n$, then $C_n(a,b) \cong C_n(1,k)$ for some $k \in \{1,\dots,\lfloor n/2 \rfloor\}$.
\end{cor}
\begin{proof}
Suppose without loss of generality that $\gcd(a,n)=1$. An elementary result of number theory shows that there exists $p$ in $\{1,\dots,n-1\}$ such that $ap \equiv 1 \pmod{n}$ and $p$ is relatively prime to $n$. Hence $C_n(a,b) \cong C_n(pa,pb) \cong C_n(1,k)$, where $k$ is either $pb$ or $n-pb$ modulo $n$, whichever belongs to $\{1,\dots,\lfloor{n/2}\rfloor\}$.
\end{proof}
\begin{cor} \label{cor: iso} If $n=k\ell\pm 1$, then $C_n(1,k) \cong C_n(1,\ell)$.
\end{cor}
\begin{proof}
If $n=k\ell\pm 1$, then $\ell$ is relatively prime to $n$, as is $n-\ell$. Letting $p=n-\ell$ in Theorem~\ref{thm: iso} shows that $C_n(1,k) \cong C_n(n-\ell, \pm 1) \cong C_n(1,\ell)$.
\end{proof}
We recall now a few results about the chromatic number $\chi(G)$ of circulant graphs $G$. The following result was conjectured by Collins, Fisher, and Hutchinson (see~\cite{CollinsEtAl98, Fisher98} as cited in~\cite{YehZhu03}) and proved by Yeh and Zhu~\cite{YehZhu03}; see also~\cite{GobelNeutel00}, \cite{Heuberger03}, and~\cite{NicolosoPietropaoli07}.
\begin{thm} \label{thm: chromatic num}
Let $k,n$ be positive integers such that $2 \leq k \leq \lfloor n/2 \rfloor$. The chromatic number of $C_n(1,k)$ is given by \[\chi(C_n(1,k)) = \begin{cases}
2 & \text{ if $k$ is odd and $n$ is even;}\\
4 & \text{ if $k = 2$ or $k = (n-1)/2$, and $n \neq 5$ and $3 \nmid n$;}\\
4 & \text{ if $k=5$ and $n=13$;}\\
5 & \text{ if $k=2$ and $n=5$;}\\
3 & otherwise.
\end{cases}\]
\end{thm}
\section{Trivalent circulant graphs} \label{sec: trivalent}
When $n$ is even, the circulant graph $C_n(1,n/2)$ is a trivalent graph also known as the \emph{M\"obius ladder} due to a drawing as a M\"obius band of 4-cycles; see Figure~\ref{fig: Mobius}, which draws $C_8(1,4)$ in two ways. These are the only trivalent circulant graphs of the form $C_n(1,k)$.
\begin{figure}
\caption{The graph $C_8(1,4)$ drawn as a M\"obius ladder.}
\label{fig: Mobius}
\end{figure}
\begin{thm} \label{thm: Mobius}
For even integers $n \geq 4$, \[\chi_D(C_n(1,n/2)) = \begin{cases}
4 & \text{ if }n=4;\\
6 & \text{ if }n=6;\\
3 & \text{ if }n \geq 8.
\end{cases}\]
\end{thm}
\begin{proof}
When $n$ is 4 or 6, the graph $C_n(1,n/2)$ is isomorphic to $K_4$ or $K_{3,3}$, respectively. By the result of Collins and Trenk~\cite{CollinsTrenk06}, distinguishing colorings of these complete multipartite graphs require as many colors as there are vertices in the graph.
For even $n \geq 8$, it has been shown (see~\cite{MortezaMirafzal}, for instance) that the M\"obius ladder with $n$ vertices has the dihedral group of order $2n$ as its automorphism group. We exhibit a distinguishing proper coloring using 3 colors based on whether or not $n$ is a multiple of 4.
If $n=4t$, where $t>1$, then the M\"obius ladder is not bipartite and hence a proper coloring will require at least 3 colors. Assign colors to the vertices as shown here (assigning the colors shown to the vertices in order of their subscripts):
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|}
Colors of $v_0,\dots,v_{n/2-1}$ & 1 & 2 & 1 & 2 & 1 & 2 & $\cdots$ & 1 & 2 \\
\hline
Colors of $v_{n/2},\dots,v_{n-1}$ & 3 & 1 & 2 & 1 & 2 & 1 & $\cdots$ & 2 & 3
\end{tabular}
\end{center}
Observe that each vertex is adjacent to vertices whose colors precede and follow its color in the table, as well as to the vertex whose color appears in the same position on the opposite row. Thus it is easy to verify that this is a proper coloring. (In fact, this coloring arises via a greedy coloring of the vertices $v_0,\dots,v_{n-1}$ in order.) Note that a color-preserving automorphism must permute the vertices with color 3, and one of these two vertices has two neighbors with color 1 while the other has only one neighbor with color 1. Hence the vertices with color 3 must be fixed under any color-preserving automorphism. The only dihedral symmetry fixing two non-antipodal vertices is the identity symmetry, so this coloring is a distinguishing coloring.
If $n=4t+2$, where $t>1$, then $C_n(1,n/2)$ is a bipartite graph. The unique partition of its vertices into two color classes allows for many dihedral symmetries, so a distinguishing proper coloring must use at least 3 colors. We obtain one by changing to color 3 the colors of a pair of nonadjacent vertices formerly colored 1 and 2 in a proper 2-coloring. One example is shown here:
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|}
Colors of $v_0,\dots,v_{n/2-1}$ & 3 & 2 & 1 & 2 & $\cdots$ & 1 \\
\hline
Colors of $v_{n/2},\dots,v_{n-1}$ & 2 & 1 & 3 & 1 & $\cdots$ & 2
\end{tabular}
\end{center}
As before, we observe that a color-preserving automorphism must permute the vertices of color 3. Since one is adjacent only to vertices of color 1, while the other is adjacent only to vertices of color 2, the automorphism fixes these vertices. As before, the identity is thus the only color-preserving automorphism, and this is a distinguishing coloring.
\end{proof}
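The colorings used in this proof can also be checked mechanically. The following sketch (an illustration of ours, in plain Python, relying on the fact cited above that the automorphism group of the M\"obius ladder on $n \geq 8$ vertices is dihedral of order $2n$) rebuilds both colorings and confirms that they are proper and preserved by no nontrivial dihedral symmetry; the indices of the two recolored vertices in the $n=4t+2$ case reflect our reading of the second table.
\begin{verbatim}
# Verification sketch for the colorings in the proof above.
def moebius_edges(n):
    return {frozenset((i, (i + d) % n)) for i in range(n) for d in (1, n // 2)}

def moebius_coloring(n):
    if n % 4 == 0:                            # n = 4t: first table
        half = [1 if i % 2 == 0 else 2 for i in range(n // 2)]
        rest = [3] + [1 if i % 2 == 0 else 2 for i in range(n // 2 - 2)] + [3]
        return half + rest
    c = [1 if i % 2 == 0 else 2 for i in range(n)]   # n = 4t+2: 2-coloring
    c[0], c[n // 2 + 2] = 3, 3                # recolor two nonadjacent vertices
    return c

def dihedral_maps(n):                         # all 2n rotations and reflections
    for s in range(n):
        yield lambda i, s=s: (i + s) % n
        yield lambda i, s=s: (s - i) % n

for n in range(8, 40, 2):
    E, c = moebius_edges(n), moebius_coloring(n)
    assert all(c[a] != c[b] for a, b in map(tuple, E))           # proper
    preserved = [f for f in dihedral_maps(n)
                 if all(c[f(i)] == c[i] for i in range(n))]
    assert len(preserved) == 1                                   # identity only
\end{verbatim}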
\section{Automorphism groups} \label{sec: automorphism groups}
For the remainder of the paper we will deal with the tetravalent graphs $C_n(1,k)$ where $1<k<n/2$. Because distinguishing colorings on graphs ``break'' nontrivial symmetries, this section will review some facts about automorphism groups of circulant graphs.
The following theorem of Poto\u{c}nik and Wilson~\cite{PotocnikWilson20} will be key to our organization. A graph is \emph{edge-transitive} if any edge may be carried to any other edge by some automorphism, and \emph{dart-transitive} if any edge may be mapped to any other edge, with the endpoint images specified, by some automorphism.
\begin{thm}[\cite{PotocnikWilson20}] \label{thm: PW automorphisms}
If $G$ is a tetravalent edge-transitive circulant graph with $n$ vertices, then it is dart-transitive and either:
\begin{enumerate}
\item[\textup{(1)}] $G$ is isomorphic to $C_n(1,k)$ for some $k$ such that $k^2 \equiv \pm 1 \pmod{n}$, or
\item[\textup{(2)}] $n$ is even,
and $G$ is isomorphic to $C_{2m}(1,m+1)$ where $m=n/2$.
\end{enumerate}
\end{thm}
As we will see later as we color the graphs $C_n(1,k)$, the converse to Theorem~\ref{thm: PW automorphisms} is true. We will provide an optimal distinguishing proper coloring for the graphs in (2) in Section~\ref{sec: k = n/2-1}, noting that $C_{2m}(1,m+1)$ is isomorphic to $C_n(1,n/2-1)$, and say no more about these graphs here. We will discuss coloring the graphs in (1) in Section~\ref{sec: k^2 = pm 1} after a few comments about these graphs at the end of this section.
What about graphs $C_n(1,k)$ that are not edge-transitive? As we will see, in that case $\operatorname{Aut}(C_n(1,k))$ is isomorphic to a dihedral group of order $2n$.
Let $E_1$ denote the set of edges of $C_n(1,k)$ joining vertices having indices differing by 1 modulo $n$, and similarly let $E_k$ denote the set of edges of $C_n(1,k)$ joining vertices whose indices' difference is $k$ modulo $n$. Observe that an automorphism $\phi:V(C_n(1,k)) \to V(C_n(1,k))$ induces a permutation on the edge set of $C_n(1,k)$ that ``carries'' edge $v_iv_j$ to edge $\phi(v_i)\phi(v_j)$ for any $i,j$.
\begin{thm} \label{thm: E_1 and E_k permutations}
If $k \neq n/2-1$ and $(n,k) \neq (10,3)$, then any automorphism of $C_n(1,k)$ either carries every edge in $E_1$ to an edge in $E_1$ or carries every edge in $E_1$ to an edge in $E_k$.
\end{thm}
\begin{proof}
The result can be verified directly for $n < 7$, so assume that $n \geq 7$.
Consider an edge of $E_1$ in $C_n(1,k)$; by symmetry we may assume that it is $v_0v_1$. Note that this edge belongs to the two 4-cycles $v_0 v_{1} v_{k+1} v_{k}$ and $v_0 v_{1} v_{1-k} v_{-k}$; these are distinct 4-cycles unless $k=n/2$, in which case they coincide. Note that for each edge in $E_k$ incident with either $v_0$ or $v_{1}$, there is a 4-cycle containing that edge and $v_0 v_{1}$. We claim now that no 4-cycle contains both $v_0 v_{1}$ and an incident edge from $E_1$. Indeed, for a 4-cycle to contain the path $v_0 v_{1} v_{2}$, both $v_0$ and $v_{2}$ must have a common neighbor $v_j$, where $j \in \{-k,-1,k\} \cap \{2-k,3,k+2\}$. Setting each element of the first set equal to each element of the second set and recalling that $n \geq 7$ and that $1 < k < n/2$, we see that the existence of a common neighbor $v_j$ implies that either $k = n/2 - 1$, a contradiction to our hypothesis, or $k=3$. A similar argument and conclusion holds if $C_n(1,k)$ has a 4-cycle containing the path $v_{-1} v_0 v_{1}$.
Let us assume for now that $k \neq 3$. Since the property of inclusion of a pair of edges in some 4-cycle is preserved by an automorphism, it follows that if an edge from $E_1$ is carried by an automorphism to an edge from $E_1$, the same must be true for the images of its incident edges from $E_1$. Working inductively outward from the first edge, we see that each edge of $E_1$ is then carried to an edge in $E_1$, and edges from $E_k$ are forced to be carried to edges in $E_k$. Conversely, if an edge from $E_1$ is carried by an automorphism of $C_n(1,k)$ to an edge in $E_k$, then no edge from $E_1$ can be carried to an edge from $E_1$, which forces all edges from $E_1$ to be carried to edges from $E_k$ and vice versa.
If instead $k=3$, observe directly that in each of $C_n(1,3)$ for $7 \leq n \leq 12$ except for $n \in \{8,10\}$, every automorphism of $C_n(1,3)$ either carries all edges in $E_1$ to edges in $E_1$ or carries all edges in $E_1$ to $E_3$, as claimed. Assume now that $k=3$ and $n > 12$. Note that the size of $n$ implies that each edge in $E_1$ belongs to exactly five 4-cycles; for $v_0v_1$ these cycles are $v_0v_1v_{n-2}v_{n-3}$, $v_0v_1v_{n-2}v_{n-1}$, $v_0v_1v_{2}v_{n-1}$, $v_0v_1v_{2}v_{3}$, and $v_0v_1v_{4}v_{3}$. In contrast each edge in $E_3$ belongs to exactly three 4-cycles; for $v_0v_3$ these are $v_0v_3v_2v_{n-1}, v_0v_3v_2v_1, v_0v_3v_4v_1$. Since the number of 4-cycles containing a given edge is preserved under any graph automorphism $\phi$, any such map $\phi$ must carry edges of $E_1$ to edges of $E_1$ and edges of $E_3$ to edges of $E_3$, as claimed.
\end{proof}
We arrive at our result; let $D_n$ be the dihedral group of order $2n$, the symmetry group of a regular $n$-gon.
\begin{cor} \label{cor: dihedral group}
If the graph $C_n(1,k)$, where $1 < k < n/2$, satisfies neither $k^2 \equiv \pm 1 \pmod{n}$ nor $k=n/2-1$, then $\operatorname{Aut}(C_n(1,k)) \cong D_{n}$.
\end{cor}
\begin{proof}
The dihedral group $D_n$ is always isomorphic to the subgroup of $\operatorname{Aut}(C_n(1,k))$ consisting of automorphisms that carry $E_1$ to $E_1$ and $E_k$ to $E_k$. These symmetries act transitively on $E_1$ and $E_k$, so if $\operatorname{Aut}(C_n(1,k))$ were to contain any automorphism carrying an edge from $E_1$ to $E_k$, or vice versa, then the compositions of this automorphism with suitable dihedral symmetries would yield automorphisms making $C_n(1,k)$ edge-transitive and hence implying that $C_n(1,k)$ satisfies conclusion (1) or (2) in Theorem~\ref{thm: PW automorphisms}, a contradiction to our hypothesis.
\end{proof}
We conclude this section by describing the automorphism groups of the edge-transitive graphs in (1) in Theorem~\ref{thm: PW automorphisms}, those graphs $C_n(1,k)$ for which $k^2 \equiv \pm 1 \pmod{n}$. These groups are more elaborate than dihedral groups, but not by much.
\begin{thm} \label{thm: Aut when k^2 equiv pm 1}
If $k$ and $n$ are integers satisfying $1 < k < n/2$ and $k^2 \equiv \pm 1 \pmod{n}$, then $|\operatorname{Aut}(C_n(1,k))|=4n$, and for any two edges $v_a v_b, v_s v_t$ in the graph, there is a unique automorphism sending $v_a$ to $v_s$ and $v_b$ to $v_t$.
\end{thm}
\begin{proof}
By symmetry it suffices to show that for any edge $v_s v_t$ in the graph, there is a unique automorphism sending $v_0$ to $v_s$ and $v_1$ to $v_t$. We claim that one such automorphism is the map $\phi_{st}$ on $\{v_0,\dots,v_{n-1}\}$ given by $\phi_{st}(v_i) = v_{s+(t-s)i}$ (with all operations performed modulo $n$). To see that it is indeed an automorphism, note that a pair $v_x,v_y$ of vertices in $C_n(1,k)$ is adjacent if and only if $x-y$ is congruent modulo $n$ to either $\pm 1$ or $\pm k$. Now the difference in the indices of $\phi_{st}(v_x)$ and $\phi_{st}(v_y)$ is \begin{equation} \label{eq: automorphism proof} s+(t-s)x - s - (t-s)y = (t-s)(x-y).\end{equation} Since $v_s v_t$ is an edge in $C_n(1,k)$, we know that $t-s \in \{\pm 1, \pm k\}$. Recalling that $k^2 \equiv \pm 1 \pmod{n}$, it is straightforward to check that $x-y \in \{\pm 1,\pm k\}$ if and only if $(t-s)(x-y) \in \{\pm 1, \pm k\}$, so $\phi_{st}$ is an element of $\operatorname{Aut}(C_n(1,k))$.
To finish our proof, we show the uniqueness of the automorphism mapping $v_0$ and $v_1$ to $v_s$ and $v_t$, respectively. Let $\rho$ be any automorphism of $C_n(1,k)$ mapping vertex $v_0$ to $v_s$ and $v_1$ to $v_t$. Taking $i \in \{1,k\}$ to be the index such that $v_sv_t \in E_i$, Theorem~\ref{thm: E_1 and E_k permutations} implies that $\rho$ carries all edges in $E_1$ to edges in $E_i$, so since $v_2$ is adjacent to $v_1$ along an edge from $E_1$, its image $\rho(v_2)$ is adjacent to $v_t$ along an edge from $E_i$. Since $\rho(v_2) \neq \rho(v_0)$, and $v_t$ only has two neighbors along edges from $E_i$, $\rho(v_2)$ is uniquely determined; we have $\rho(v_2) = \phi_{st}(v_2)$. Continuing inductively through all the vertices $v_2,\dots,v_{n-1}$, we conclude that $\rho = \phi_{st}$, as desired. Finally, since the graph has $2n$ edges and each of the resulting $4n$ ordered pairs of adjacent vertices is the image of $(v_0,v_1)$ under exactly one automorphism, we obtain $|\operatorname{Aut}(C_n(1,k))| = 4n$.
\end{proof}
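As an illustration of the proof (ours, not part of the argument), the maps $\phi_{st}$ can be checked directly on the example $C_{13}(1,5)$, for which $5^2 \equiv -1 \pmod{13}$:
\begin{verbatim}
# Sketch: the 4n affine maps phi_{st}(v_i) = v_{s+(t-s)i} are pairwise
# distinct automorphisms of C_13(1,5), one for each directed edge (v_s, v_t).
n, k = 13, 5
edges = {frozenset((i, (i + d) % n)) for i in range(n) for d in (1, k)}
darts = [(a, b) for e in edges for a, b in (tuple(e), tuple(e)[::-1])]

def phi(s, t):
    return tuple((s + (t - s) * i) % n for i in range(n))

maps = {phi(s, t) for s, t in darts}
assert len(maps) == 4 * n                                        # all distinct
for m in maps:
    assert {frozenset((m[a], m[b])) for a, b in map(tuple, edges)} == edges
\end{verbatim}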
\section{The graphs $C_n(1,n/2-1)$} \label{sec: k = n/2-1}
In this section we give an optimal distinguishing proper coloring of the edge-transitive graphs described in (2) in Theorem~\ref{thm: PW automorphisms}.
In these graphs $n$ is even, and each vertex $v_i$ has exactly the same neighbors as its \emph{antipodal vertex} $v_{i + n/2}$. It follows that $C_n(1,n/2-1)$ is isomorphic to the \emph{wreath graph} $W(n/2,2)$; in general, the wreath graph $W(a,b)$ has $ab$ vertices, partitioned into $a$ independent sets $I_1,\dots,I_a$, each of size $b$, with the vertices in each set $I_i$ being adjacent to all vertices in $I_{i-1}$ and $I_{i+1}$ (with operations in subscripts performed modulo $a$). The graph $C_{12}(1,5)$ and its interpretation as $W(6,2)$ are illustrated in Figure~\ref{fig: C12(1,5)}.
\begin{figure}
\caption{Two drawings showing $C_{12}(1,5) \cong W(6,2)$.}
\label{fig: C12(1,5)}
\end{figure}
The automorphism group of $C_n(1,k)$, when $k=n/2-1$, contains ``dihedral'' symmetries interpreted as acting on the graph when drawn as on the left in Figure~\ref{fig: C12(1,5)}. In addition, however, there are $n/2$ symmetries that interchange two vertices having the same neighborhood and fix all other vertices. (Vertices with the same neighborhood are called \emph{twins}.) Hence $\operatorname{Aut}(C_n(1,k))$ has many more than $2n$ automorphisms. Intuitively, this imposes more restrictions on distinguishing colorings (in particular, vertices having the same neighborhoods must receive distinct colors). Hence we may expect to need more colors than we do with $C_n$ to break all symmetries of $C_n(1,k)$ with a proper coloring, and indeed this is the case.
\begin{thm}\label{thm: even n and k= n/2-1}
For even integers $n \geq 6$, \[\chi_D(C_n(1,n/2-1)) = \begin{cases}
6 & \text{ if }n = 6;\\
8 & \text{ if }n = 8;\\
5 & \text{ if }n \geq 10.
\end{cases}\]
\end{thm}
\begin{proof}
It is easy to see that $C_6(1,2)$ is isomorphic to $K_{2,2,2}$ and that $C_8(1,3)$ is isomorphic to $K_{4,4}$, which are both complete multipartite graphs. This implies (see Section~\ref{sec: intro}) that $\chi_D(C_6(1,2)) = 6$ and $\chi_D(C_8(1,3)) = 8$. Suppose henceforth that $n \geq 10$. As mentioned above, a distinguishing coloring must assign different colors to twins, and for $n \geq 10$ the vertex set of $C_n(1,n/2-1)$ is partitioned into $n/2$ such twin pairs. Since our coloring is to be proper, it cannot use the same color on two vertices from ``consecutive'' pairs $\{v_i,v_{i+n/2}\}$ and $\{v_{i+1},v_{i+1+n/2}\}$. Such a coloring must then use at least 4 colors, but 4 colors are not enough for a proper coloring if $n/2$ is odd, and if $n/2$ is even, then in any proper 4-coloring the color pairs on consecutive twin pairs must alternate between a fixed pair of colors and its complement, so composing the rotation $v_i \mapsto v_{i+2}$ with twin swaps where necessary yields a nontrivial color-preserving automorphism.
Hence at least 5 colors are necessary for a distinguishing coloring. Note now that the pairs of colors assigned to pairs of twins naturally correspond to vertices in the Kneser graph $KG_{c,2}$, where $c$ is the number of colors used in the coloring. (Recall that the Kneser graph $KG_{p,q}$ is the graph whose vertices are the $q$-element subsets of a set of $p$ elements, with edges joining vertices corresponding to disjoint subsets.) Moving from one pair of
colored twins to the consecutively following pair and noting the colors used corresponds to moving along edges in $KG_{c,2}$, and the overall coloring of $C_n(1,n/2-1)$ corresponds to a closed walk in $KG_{c,2}$. For a proper coloring, any such closed walk will do, and a useful walk in $KG_{5,2}$ (i.e., the Petersen graph) consists of the pairs \[
\begin{cases}
12,34,15,\underline{23,45} & \text{if $n/2$ is odd,}\\
12,34,25,14,\underline{23,45} & \text{if $n/2$ is even;}
\end{cases}\]
here the underlined pairs are repeated as necessary to produce $n/2$ pairs of colors. Note that in each corresponding coloring of $C_n(1,n/2-1)$, each color pair from $\{12,34,15,25,14\}$ appears on at most one pair of twins. Any automorphism maps a pair of twins to a pair of twins, and each vertex in $C_n(1,n/2-1)$ belongs to a unique pair of twins when $n/2 \geq 5$. A color-preserving automorphism likewise preserves the pair of colors appearing on a pair of twins, so each twin pair carrying one of the color pairs just listed must be fixed setwise, and hence pointwise, since its two vertices have distinct colors. By inductively moving to neighboring twin pairs along the wreath, we see that all remaining vertices must be fixed as well, so the only color-preserving automorphism is the identity, as desired.
\end{proof}
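The Kneser-walk coloring above is also easy to verify computationally. In the sketch below (our illustration; the order in which the two colors of each pair are placed on the twins is an arbitrary choice of ours, and the enumeration of automorphisms uses the description of the symmetries of the wreath graph as twin swaps combined with dihedral symmetries of the cycle of twin pairs, valid for $n \geq 10$), the coloring is checked to be proper and distinguishing for $10 \leq n \leq 20$.
\begin{verbatim}
from itertools import product

def wreath_coloring(m):                # colors for C_{2m}(1, m-1), m = n/2
    if m % 2 == 1:
        walk = [(1, 2), (3, 4), (1, 5)] + [(2, 3), (4, 5)] * ((m - 3) // 2)
    else:
        walk = [(1, 2), (3, 4), (2, 5), (1, 4)] + [(2, 3), (4, 5)] * ((m - 4) // 2)
    c = [0] * (2 * m)
    for i, (a, b) in enumerate(walk):
        c[i], c[i + m] = a, b          # the twins v_i and v_{i+m}
    return c

def wreath_edges(m):
    n = 2 * m
    return {frozenset((i, (i + d) % n)) for i in range(n) for d in (1, m - 1)}

for m in range(5, 11):
    n, c, E = 2 * m, wreath_coloring(m), wreath_edges(m)
    assert all(c[a] != c[b] for a, b in map(tuple, E))           # proper
    count = 0                          # automorphisms preserving the coloring
    for swaps in product((0, 1), repeat=m):
        for s in range(m):
            for refl in (False, True):
                def f(i):
                    pair, side = i % m, i // m
                    p = (s - pair) % m if refl else (pair + s) % m
                    return p + m * (side ^ swaps[pair])
                count += all(c[f(i)] == c[i] for i in range(n))
    assert count == 1                  # only the identity
\end{verbatim}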
\section{A general upper bound} \label{sec: chi plus 1}
Having determined the symmetries of $C_n(1,k)$ when $k \neq n/2 -1$ in Section~\ref{sec: automorphism groups}, for the remaining sections we turn to distinguishing colorings. In this section we show that in many cases, the distinguishing chromatic number of $C_n(1,k)$ is at most 1 more than the ordinary chromatic number. This will allow us to exactly determine the distinguishing chromatic number whenever $C_n(1,k)$ is bipartite.
Our first result is useful in ``breaking'' symmetries in edge-transitive graphs $C_n(1,k)$.
\begin{lem} \label{lem: one more color}
Suppose that $b$ is an integer such that $1<b<n/2$ and $b \neq k$ and $\gcd(b,n) = 1$. If $C_n(1,k)$ is properly colored with $\chi(C_n(1,k))$ colors, and the colors on $v_0$ and $v_b$ are changed to be a new previously unused color, then this coloring is not preserved by any automorphism of $C_n(1,k)$ that carries $E_1$ to $E_k$.
\end{lem}
\begin{proof}
Suppose that $c:V(C_n(1,k)) \to \{1,\dots,\chi(C_n(1,k))+1\}$ is the modified coloring, and let $\phi$ be any automorphism of $C_n(1,k)$ that exchanges $E_1$ and $E_k$. Note that there are two internally vertex-disjoint paths from $v_0$ to $v_b$ along edges of $E_1$, and similarly two such paths from $v_0$ to $v_b$ along edges of $E_k$. Since $\phi$ exchanges $E_1$ and $E_k$, if $\phi$ were to preserve the coloring, then $v_0,v_b$ would either be fixed or mapped to each other under $\phi$, and the paths between them along edges of $E_k$ would be carried to the paths between $v_0,v_b$ using edges of $E_1$. Since these paths along edges in $E_1$ have lengths $b$ and $n-b$, the paths along edges in $E_k$ must have the same lengths. Hence either $bk \equiv b \pmod{n}$ or $bk \equiv n-b \pmod{n}$. Since $\gcd(b,n) = 1$, $b$ has a multiplicative inverse modulo $n$, and these congruences yield $k \equiv 1 \pmod{n}$ or $k \equiv n-1 \pmod{n}$; both statements are contradictions.
\end{proof}
Thus any color-preserving automorphism of the modified coloring in Lemma~\ref{lem: one more color} must act as a dihedral symmetry on the edges of $E_1$. Since $b<n/2$, the only such automorphisms that either fix $v_0$ and $v_b$ or interchange them are the identity automorphism and a single reflection. This will yield a general bound; first we show that such an integer $b$ as in the hypothesis of Lemma~\ref{lem: one more color} always exists for large enough $n$.
\begin{lem}\label{lem: there is a b}
For any $n \geq 13$ and for any integer $k \in \{2,\dots,\lfloor n/2 \rfloor\}$, there exists an integer $b$ such that $1<b<n/2$ and $b \neq k$ and $\gcd(b,n) = 1$.
\end{lem}
\begin{proof}
We will show the stronger statement that when $n \geq 13$, there exist two primes not dividing $n$ in $\{2,\dots,\lfloor n/2 \rfloor\}$. If $k$ were to equal one of these primes, then we could let $b$ be the other one.
It is easy to verify directly that the two primes specified exist for all $n$ satisfying $13 \leq n \leq 22$. Now suppose that $n \geq 23$, so that $2,3,5,7,11$ all lie in $\{2,\dots,\lfloor n/2 \rfloor\}$. If fewer than two of these five primes are relatively prime to $n$, then at least four of them divide $n$, so $n \geq 2\cdot 3 \cdot 5 \cdot 7= 210$.
Recall now the result known as Bertrand's Postulate, which states, in one formulation, that whenever $m$ is an integer greater than $3$, then there exists a prime number $p$ with $m < p < 2m$. It follows that there exist prime numbers $p_1,p_2,p_3$ such that $n/16 < p_1 < n/8$ and $n/8 < p_2 < n/4$ and $n/4 < p_3 < n/2$. If $n$ is not relatively prime to at least two of $p_1,p_2,p_3$, then $n \geq p_1p_2 > n^2/128$ and hence $n < 128$, a contradiction to our earlier bound on $n$.
\end{proof}
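In practice such a $b$ is easy to produce by direct search; the following small check (ours, not part of the proof) confirms its existence for $13 \leq n \leq 199$.
\begin{verbatim}
from math import gcd

def find_b(n, k):                      # an integer b with 1 < b < n/2,
    return next((b for b in range(2, (n + 1) // 2)   # b != k, gcd(b, n) = 1
                 if b != k and gcd(b, n) == 1), None)

assert all(find_b(n, k) is not None
           for n in range(13, 200) for k in range(2, n // 2 + 1))
\end{verbatim}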
\begin{thm} \label{thm: non-palindrome}
Given integers $k,n$ such that $1 < k< n/2$ and a proper coloring $c:V(C_n(1,k)) \to$ $\{1,\dots,\chi(C_n(1,k))\}$, let $b$ be an integer such that $1 < b < n/2$ and $b \neq k$ and $\gcd(n,b)=1$. If either
\[c(v_1),c(v_2),\dots,c(v_{b-1}) \quad \text{or} \quad c(v_{b+1}),c(v_{b+2}),\dots,c(v_{n-1})\]
is not a palindrome, then $\chi_D(C_{n}(1,k)) \leq
\chi(C_n(1,k))+1$.
\end{thm}
\begin{proof}
In light of Lemma~\ref{lem: one more color} and the discussion following it, since one of $c(v_1),c(v_2),\dots,c(v_{b-1})$ and $c(v_{b+1}),c(v_{b+2}),\dots,c(v_{n-1})$ is not a palindrome, recoloring $v_0$ and $v_b$ with a single new color yields a proper coloring where no reflection preserves the coloring; the only color-preserving automorphism of $C_n(1,k)$ is the identity. Hence $\chi_D(C_{n}(1,k)) \leq
\chi(C_n(1,k))+1$.
\end{proof}
In certain cases, Theorem~\ref{thm: non-palindrome} quickly yields an optimal distinguishing coloring.
\begin{thm} \label{thm: n even k odd}
Given positive integers $k,n$ such that $1 < k <n/2-1$, if $n$ is even and $k$ is odd and $(n,k) \neq (10,3)$, then $\chi_D(C_{n}(1,k)) = 3$.
\end{thm}
\begin{proof}
Note that $C_n(1,k)$ is bipartite, though a proper 2-coloring of $C_n(1,k)$ admits a nontrivial color-preserving rotation, so $\chi_D(C_n(1,k)) \geq 3$. To prove the corresponding upper bound, observe first that no value of $k$ satisfies the hypotheses for any even $n$ less than $10$. When $n \in \{10,12\}$, only $k=3$ satisfies the given inequalities, though we are given that $(n,k) \neq (10,3)$.
Assume that either $(n,k)=(12,3)$ or $n \geq 13$. Using $b=5$ in the first case and Lemma~\ref{lem: there is a b} in the latter, there is an integer $b$ such that $1<b<n/2$ and $b \neq k$ and $\gcd(b,n) = 1$. Let $c$ be a proper 2-coloring of $C_n(1,k)$; here the vertices $v_i$ with even subscripts receive one color, and the vertices with odd subscripts receive the other color. Since $n$ is even, $b$ must be odd, and hence $c(v_1),\cdots,c(v_{b-1})$ is not a palindrome, since $c(v_1) \neq c(v_{b-1})$. By Theorem~\ref{thm: non-palindrome}, there is an optimal distinguishing coloring of $C_n(1,k)$ using 3 colors.
\end{proof}
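The construction in this proof can be reproduced and checked directly for small cases. The sketch below (an illustration of ours) recolors $v_0$ and $v_b$ with a third color as described and tests the result against every automorphism, using the fact, which follows from the results of Section~\ref{sec: automorphism groups} for the graphs considered here, that each automorphism has the affine form $v_i \mapsto v_{a+mi}$ with $m \in \{\pm 1, \pm k\}$.
\begin{verbatim}
from math import gcd

def edges(n, k):
    return {frozenset((i, (i + d) % n)) for i in range(n) for d in (1, k)}

def two_plus_one_coloring(n, b):       # parity 2-coloring, then recolor v_0, v_b
    c = [1 if i % 2 == 0 else 2 for i in range(n)]
    c[0] = c[b] = 3
    return c

for n, k in [(12, 3), (14, 3), (14, 5), (16, 3), (16, 5), (18, 5), (18, 7)]:
    b = 5 if (n, k) == (12, 3) else next(
        x for x in range(2, n // 2) if x != k and gcd(x, n) == 1)
    E, c = edges(n, k), two_plus_one_coloring(n, b)
    assert all(c[u] != c[v] for u, v in map(tuple, E))           # proper
    affine = [lambda i, a=a, m=m: (a + m * i) % n
              for a in range(n) for m in (1, -1, k, -k)]
    autos = [f for f in affine
             if {frozenset((f(u), f(v))) for u, v in map(tuple, E)} == E]
    assert sum(all(c[f(i)] == c[i] for i in range(n)) for f in autos) == 1
\end{verbatim}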
Given the exceptionality of the case $(n,k)=(10,3)$ in Theorems~\ref{thm: E_1 and E_k permutations} and~\ref{thm: n even k odd}, we determine $\chi_D(C_{10}(1,3))$ next. Here the distinguishing chromatic number is quite a bit higher than the chromatic number.
\begin{prop}\label{thm: n=10, k=3}
$\chi_D(C_{10}(1,3)) = 5$.
\end{prop}
\begin{proof}
The graph $C_{10}(1,3)$ is bipartite and may be obtained by deleting the edges $v_i v_{i+5}$ from the complete bipartite graph having partite sets $A=\{v_0,v_2,v_4,v_6,v_8\}$ and $B=\{v_1,v_3,v_5,v_7,v_9\}$. Note that if some proper coloring of the vertices assigns the same color to both $v_i,v_j$ in $A$ and the same color (which must be different from the first) to $v_{i+5},v_{j+5}$ in $B$, then the involution of $V(C_{10}(1,3))$ written in cycle notation as $(v_i v_j)(v_{i+5} v_{j+5})$ is a color-preserving automorphism of the graph, so such a coloring is not distinguishing.
Let $c:V(C_{10}(1,3)) \to \{1,\dots,\ell\}$ be a distinguishing proper coloring, and suppose by way of contradiction that $\ell < 5$. By the pigeonhole principle, some color must appear on at least three vertices of the graph. Since $c$ is a proper coloring, these three vertices must lie in the same partite set (an independent set meeting both partite sets can only be a pair $\{v_i,v_{i+5}\}$); assume that it is $A$. By the symmetries of $C_{10}(1,3)$, we may assume that these vertices are $v_0,v_2,v_4$, and the color assigned is 1. As explained above, the colors on $v_5,v_7,v_9$ must then be distinct elements of $\{2,\dots,\ell\}$, which forces $\ell = 4$. Since $c$ is a proper coloring, $v_6$ and $v_8$ must also receive color 1, so every vertex of $A$ has color 1 and the five vertices of $B$ use only the colors $\{2,3,4\}$. The pigeonhole principle then forces some color to appear at least twice on vertices in $B$, and as above we obtain a color-preserving involution of the vertices of $C_{10}(1,3)$, a contradiction. Thus a distinguishing proper coloring of $C_{10}(1,3)$ requires at least 5 colors, and one can verify that the following map $c:\{v_0,\dots,v_9\} \to \{1,2,3,4,5\}$ provides one.
\begin{center}
\begin{tabular}{rccccc}
$A$: & $c(v_0) = 1$, & $c(v_2) = 2$, & $c(v_4) = 2$, & $c(v_6) = 3$, & $c(v_8) = 3$, \\
$B$: & $c(v_5) = 1$, & $c(v_7) = 4$, & $c(v_9) = 5$, & $c(v_1) = 4$, & $c(v_3) = 5$.
\end{tabular}
\end{center}
\end{proof}
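The lower-bound argument and the displayed coloring can both be double-checked by machine. The sketch below (ours) verifies that the coloring is proper and that only the identity preserves it, enumerating the automorphisms of $C_{10}(1,3)$ as permutations of the five antipodal pairs together with an optional exchange of the partite sets; that this accounts for all automorphisms is a standard fact about $K_{5,5}$ minus a perfect matching which we assume here.
\begin{verbatim}
from itertools import permutations

n = 10
E = {frozenset((i, (i + d) % n)) for i in range(n) for d in (1, 3)}
c = {0: 1, 2: 2, 4: 2, 6: 3, 8: 3, 5: 1, 7: 4, 9: 5, 1: 4, 3: 5}
assert all(c[a] != c[b] for a, b in map(tuple, E))               # proper

A = [0, 2, 4, 6, 8]                    # one partite set; the other is A + 5
preserving = 0
for perm in permutations(range(5)):    # permute the five antipodal pairs
    for swap in (0, 5):                # optionally exchange the partite sets
        f = {}
        for j in range(5):
            f[A[j]] = (A[perm[j]] + swap) % n
            f[(A[j] + 5) % n] = (A[perm[j]] + 5 + swap) % n
        assert {frozenset((f[a], f[b])) for a, b in map(tuple, E)} == E
        preserving += all(c[f[v]] == c[v] for v in range(n))
assert preserving == 1                 # only the identity preserves c
\end{verbatim}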
\section{Dihedral symmetries} \label{sec: dihedral symmetries}
In this section we restrict our attention to graphs $C_n(1,k)$ for which $\operatorname{Aut}(C_n(1,k))$ is the dihedral group $D_n$. We will show that often the general bound in the conclusion of Theorem~\ref{thm: non-palindrome} is not optimal, since we may find a distinguishing proper coloring using $\chi(C_n(1,k))$ colors.
When $\operatorname{Aut}(C_n(1,k)) \cong D_n$, every automorphism of $C_n(1,k)$ permutes the edges in $E_1$, to use the notation from Section~\ref{sec: automorphism groups}, and likewise permutes the edges of $E_k$. We may also use more intuitive language, imagining that $C_n(1,k)$ is drawn with its vertices placed, in order of their subscripts, at the vertices of a regular $n$-gon, with the edges of $E_1$ drawn as the sides and the edges of $E_k$ drawn as diagonals of this polygon. This allows us to freely speak of the elements of $\operatorname{Aut}(C_n(1,k))$ as rotations and reflections and to determine the forms of colorings that are preserved under these automorphisms. We do this in Section~\ref{subsec: colorings preserved} below before proceeding in later subsections by the values or parities of $n$ and $k$ (recalling that the case where $n$ is even and $k$ is odd was concluded in Section~\ref{sec: chi plus 1}).
\subsection{Colorings preserved by rotations and reflections} \label{subsec: colorings preserved}
We consider first reflections.
\begin{lem} \label{lem: no reflections}
If a reflection symmetry in $\operatorname{Aut}(C_n(1,k))$ is color-preserving for a given proper coloring of $C_n(1,k)$, then $n$ is even and $k$ is odd.
\end{lem}
\begin{proof}
Picture a drawing of $C_n(1,k)$ as a regular polygon with chords, with the polygon vertices drawn on a circle. Every reflection symmetry in $\operatorname{Aut}(C_n(1,k))$ has a corresponding axis of reflection that passes through the center of the circle.
If the axis of reflection passes through the midpoint of a 1-edge, then the reflection interchanges the endpoints of that edge, which have different colors since $C_n(1,k)$ is properly colored, so the reflection is not color-preserving. Hence the only possible color-preserving reflection symmetry is one where $n$ is even and the symmetry fixes two ``opposite'' vertices $v_a$ and $v_{a+n/2}$. Here $k$ must be odd, since otherwise the vertices $v_{a-k/2}$ and $v_{a+k/2}$ would have the same color (by the symmetry) but be adjacent, a contradiction.
\end{proof}
Since the case when $n$ is even and $k$ is odd was handled in Section~\ref{sec: chi plus 1}, we note that for the rest of Section~\ref{sec: dihedral symmetries}, we may ignore reflections when checking for color-preserving symmetries.
The next result deals with rotations in $\operatorname{Aut}(C_n(1,k))$.
\begin{lem} \label{lem: rotations}
If a rotation symmetry in $\operatorname{Aut}(C_n(1,k))$ is color-preserving for a given proper coloring $c$ of $C_n(1,k)$, then the sequence $c(v_0),\dots,c(v_{n-1})$ is periodic with a period that is a proper divisor of $n$.
\end{lem}
\begin{proof}
Suppose that $c:V(G) \to \{1,\dots,\chi(C_n(1,k))\}$ is a proper coloring, and that $\rho$ is a non-identity rotation in $\operatorname{Aut}(C_n(1,k))$ that preserves the coloring $c$. If $\rho(v_0) = v_r$, then clearly $c(v_i) = c(v_{i+tr})$ for all $i$ and all integers $t$. This shows that $c(v_0),\dots,c(v_{n-1})$ is periodic.
In fact, an elementary result from number theory implies that $c(v_0)$ appears on all vertices $v_j$ where $j$ is a multiple of the greatest common divisor of $r$ and $n$. Let $d=\gcd(n,r)$. Now by symmetry $c(v_i) = c(v_{i+td})$ for all $i$ and $t$, showing that the period of $c(v_0),\dots,c(v_{n-1})$ divides $d$ and hence $n$. Since $d<n$, the period is a proper divisor of $n$.
\end{proof}
In light of Lemma~\ref{lem: rotations}, for the rest of Section~\ref{sec: dihedral symmetries}, in verifying that a coloring of $C_n(1,k)$ is preserved by no non-identity symmetry, we need only check that the coloring is not preserved by any rotation $v_i \mapsto v_{i+d}$ where $d$ is a proper divisor of $n$. This allows us a quick result.
\begin{cor}
If $n$ is an odd prime and $\operatorname{Aut}(C_n(1,k)) \cong D_n$, then $\chi_D(C_n(1,k))=\chi(C_n(1,k))$.
\end{cor}
\begin{proof}
By Lemmas~\ref{lem: no reflections} and~\ref{lem: rotations}, any proper coloring of $C_n(1,k)$ is preserved only by the identity in $\operatorname{Aut}(C_n(1,k))$.
\end{proof}
\subsection{Case: $k=2$ or $k=(n-1)/2$}
Theorem~\ref{thm: chromatic num} shows that $\chi(C_n(1,2))$ and $\chi(C_n(1,(n-1)/2))$ are $4$ except when $n=5$ or when $n$ is a multiple of 3 (in the latter case, the chromatic number is $3$). We are able to give optimal distinguishing proper colorings of these graphs. Note first of all that by Corollary~\ref{cor: iso}, $C_n(1,(n-1)/2)$ is isomorphic to $C_n(1,2)$, so it suffices to restrict our attention to $C_n(1,2)$. We may also assume that $n \geq 7$, since distinguishing colorings have already been described in earlier sections for the cases $3 \leq n \leq 6$.
\begin{thm}\label{thm: k= 2}
For all $n \geq 7$, $\chi_D(C_n(1,2)) = 4$.
\end{thm}
\begin{proof}
By Theorem~\ref{thm: chromatic num}, $\chi_D(C_n(1,2)) \geq \chi(C_n(1,2)) = 4$ if $n$ is not a multiple of 3. If $n$ is a multiple of 3, then the only partition of the vertices of $C_n(1,2)$ into three independent sets is given by grouping the vertices $v_i$ by the congruence class modulo 3 of their subscripts; hence any proper 3-coloring is preserved by the rotation given by $v_i \mapsto v_{i+3}$, and as before we must have $\chi_D(C_n(1,2)) \geq 4$.
By Corollary~\ref{cor: dihedral group}, $\operatorname{Aut}(C_n(1,2))$ is isomorphic to the dihedral group of order $2n$. By Lemma~\ref{lem: no reflections}, a proper coloring of $C_n(1,2)$ will be distinguishing if and only if no color-preserving rotation symmetry exists other than the identity. If $n$ is congruent to 0 or 1 modulo 3, we obtain a proper coloring by assigning $v_0$ the color 4 and greedily coloring $v_1,v_2,\dots,v_{n-1}$ in order with the lowest available color from $\{1,2,3\}$. If $n \equiv 2 \pmod{3}$, we color $C_n(1,2)$ by assigning color 4 to vertices $v_0$ and $v_{n-3}$ and greedily coloring the remaining vertices in order of their subscripts with colors from $\{1,2,3\}$ as before. In either case the placement of color 4 allows for no nontrivial rotational symmetry: either color 4 appears on a single vertex, or it appears only on $v_0$ and $v_{n-3}$, and a rotation exchanging these two vertices would require $3 \equiv n-3 \pmod{n}$, which is impossible for $n \geq 7$. So these colorings establish that $\chi_D(C_n(1,2))=4$.
\end{proof}
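The greedy colorings in this proof are also easy to reproduce. The following sketch (ours; it uses the fact from Corollary~\ref{cor: dihedral group} that $\operatorname{Aut}(C_n(1,2))$ is dihedral for the values of $n$ considered here) builds them and confirms that they are proper and distinguishing for $7 \leq n \leq 59$.
\begin{verbatim}
def edges(n):
    return {frozenset((i, (i + d) % n)) for i in range(n) for d in (1, 2)}

def greedy_coloring(n):
    special = {0} if n % 3 in (0, 1) else {0, n - 3}
    c = {v: 4 for v in special}
    for v in range(1, n):
        if v in c:
            continue
        nbrs = [(v + d) % n for d in (1, 2, n - 1, n - 2)]
        used = {c[u] for u in nbrs if u in c}
        c[v] = min(x for x in (1, 2, 3) if x not in used)
    return [c[v] for v in range(n)]

for n in range(7, 60):
    E, c = edges(n), greedy_coloring(n)
    assert all(c[a] != c[b] for a, b in map(tuple, E))           # proper
    dihedral = [lambda i, s=s, e=e: (s + e * i) % n
                for s in range(n) for e in (1, -1)]
    assert sum(all(c[f(i)] == c[i] for i in range(n)) for f in dihedral) == 1
\end{verbatim}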
\subsection{Case: $n$ is even and $k$ is even.}
Before presenting our result when $n$ is even and $k$ is even, we establish some conventions that will also be used in later sections. Taking $n$ and $k$ to be fixed, we first define $q$ and $r$ to be the unique integers such that $n=qk+r$, where $0 \leq r < k$.
Our distinguishing colorings will often be constructed with \emph{blocks} of colors, that is, sequences of colors to be assigned to vertices $v_i$ with consecutive indices. We may also use \emph{block} to refer to the vertices being assigned that sequence of colors. For example, to color the vertices of $C_n(1,k)$ with the block $B = (1,2,3)$ means to color consecutive vertices with the repeating sequence 1, 2, 3, and we may also refer to subsets of three consecutive vertices colored 1, 2, 3 (in that order) as blocks. Our next result gives a more sophisticated example of coloring with blocks.
\begin{thm} \label{thm: n even, k even}
Given integers $k, n$ such that $2 < k < n/2 -1,$ if both $n$ and $k$ are even and $\operatorname{Aut}(C_n(1,k)) \cong D_n$, then $\chi_D(C_{n}(1,k))=3.$
\end{thm}
\begin{proof}
We give a proper 3-coloring of the vertices of $C_n(1,k)$ as follows. Consider the following blocks of $k$ terms, where each color is drawn from $\{1,2,3\}$. Here the bounds on $k$ and the assumption that $k$ is even ensure a consistent definition. For convenience hereafter we represent blocks (and later, portions of blocks) by enclosing them in rectangles.
\hfil \begin{tabular}{ccccccccccc}
$B_1$ & $=$ & ($1$, & $2$, & $3$, & $2$, & $3$, & $\cdots$, & $2$, & $3$, & $1$); \\
$B_2$ & $=$ & ($2$, & $3$, & $1$, & $3$, & $1$, & $\cdots$, & $3$, & $1$, & $2$); \\
$B_3$ & $=$ & ($3$, & $1$, & $2$, & $1$, & $2$, & $\cdots$, & $1$, & $2$, & $3$).
\end{tabular} \hfil
\textsc{Case 1: $n\geq 3k$ so $q\geq 3$}
Assign colors to $v_1,\dots,v_{(q-1)k}$ by alternating the use of blocks $B_1$ and $B_2$ on successive collections of $k$ consecutively-indexed vertices. Assign colors to $v_{(q-1)k+1},\dots,v_{qk}$ using the block $B_3$. For any remaining $r$ vertices $v_{qk+1},\dots,v_{n-1}, v_0$, begin by assigning color 2 to $v_{qk+1}$. Assign to $v_0$ the color $3$ if $r=2$ and the color $2$ otherwise. Then color any remaining vertices $v_{qk+2},\dots,v_{n-1}$ by alternating the colors $3$ and $1$. The final $r$ vertices' colors thus create a block $B'$ that is a shortened or partial version of the block $B_2$.
We illustrate this coloring in the figure below, where each row indicates the colors placed on the sets of $k$ consecutively-indexed vertices in $C_n(1,k)$, beginning in the first row with the colors on $v_1,\dots, v_k$, followed in the second row with the colors on $v_{k+1},\dots,v_{2k}$, and so on. In this way the entries surrounding a vertex's color show the colors on neighboring vertices along $1$-edges (these are the immediately following and preceding numbers) and along $k$-edges (these are the vertically aligned numbers in the previous and following rows; for convenience, the initial block $B_1$ is repeated at the end of the figure). In contrast to the collection of rectangles above, though the blocks $B_1,B_2,B_3$ occupy entire rows, here they are shown as split into two rectangles each, respectively containing $r$ and $k-r$ entries. This allows us to see how the colors from the block $B'$ (which is shaded) are aligned with colors from the preceding block $B_3$ and the following block $B_1$.
\[
\begin{array}{c}
\hspace{2.7cm}\overbrace{\hspace{2.9 cm}}^{r \text{ (even)}}\hspace{.5 cm} \overbrace{\hspace{2.2 cm}}^{k-r \text{ (even)}} \\
\begin{array}{rcc}
B_1: & \framebox{1 2 3 2 3 $\cdots$ 2 3 2} & \framebox{3 2 $\cdots$ 2 3 1}\\
B_2: & \framebox{2 3 1 3 1 $\cdots$ 3 1 3} & \framebox{1 3 $\cdots$ 3 1 2}\\
B_1: & \framebox{1 2 3 2 3 $\cdots$ 2 3 2} & \framebox{3 2 $\cdots$ 2 3 1}\\
B_2: & \framebox{2 3 1 3 1 $\cdots$ 3 1 3} & \framebox{1 3 $\cdots$ 3 1 2}\\
\vdots \hspace{0.25cm} & \vdots & \vdots\\
B_3: & \framebox{3 1 2 1 2 $\cdots$ 1 2 1} & \framebox{2 1 $\cdots$ 1 2 3}\\
B', \text{ then } B_1: & \framebox{{\color{red}{2 3 1 3 1 $\cdots$ 3 1 2}}
} &
{\framebox{1 2 $\cdots$ 2 3 2}}\\
B_1 \text{ concluded:} & \framebox{3 2 3 2 3 $\cdots$ 2 3 1}
&
\end{array}
\end{array}
\]
To see that this is a proper coloring, note that vertices with consecutive indices $i,i+1$ where $0 \leq i <n$ receive distinct colors by the patterns within the blocks and at their ends. We will also see that each vertex $v_i$ is colored differently than $v_{i-k}$ and $v_{i+k}$. This is apparent from the blocks displayed above if $v_i$ belongs to a block of vertices colored with one of $B_1,B_2,B_3$ and its neighbor $v_{i-k}$ or $v_{i+k}$ does as well, with $B'$ not appearing between the two blocks. To finish the argument, we assume that $r>0$ and consider the vertices $v_i$ for $i \in \{(q-1)k+1,\dots,n\}$, showing that none receives the same color as $v_{i+k}$. These vertices are assigned colors using the blocks $B_3$ and $B'$. In the figure above, these colors appear in the shaded rectangle and on the previous row.
Recalling that $B'$ is constructed as a shortened or truncated version of $B_2$, as we compare the first $r$ entries of $B_3$ with those of $B'$, we see that the colors on vertices $v_i,v_{i+k}$ must differ; this is apparent for colors at the beginning or middle of the blocks, and we use the fact that $r$ is even, so the final entry of $B'$ (which is 3 or 2) is sure to align with an entry of 1 in $B_3$. Likewise, as we compare the entries of $B'$ with the last $r$ entries of $B_1$, the first entry of $B'$ (which is 2) aligns with a 3 from $B_1$, since both $k$ and $r$ are even; similarly, no other color in $B'$ aligns vertically with the same color in $B_1$. Finally, comparing the final $k-r$ entries of $B_3$ with the first $k-r$ entries of $B_1$, having $k$ and $r$ be even ensures that the parities of the relevant entries in $B_3$ differ from the parities of the vertically aligned entries from $B_1$.
When $r>2,$ we obtain a distinguishing coloring by switching the color of $v_0$ to 3. As presently constructed, when $r>2,$ the vertex $v_0$ is colored 2 and each of its neighbors is colored 1, so this switch keeps the coloring proper while making $v_0$ the only vertex colored 3 all of whose neighbors are colored 1.
Therefore, any color-preserving automorphism must fix $v_0$. By Lemma~\ref{lem: no reflections}, no reflection preserves the coloring, and the only rotation fixing a vertex is the identity. This leaves only the trivial automorphism, so the coloring is distinguishing.
When $r=2,$ we will show that the vertices $v_{-2},v_{-1},v_{0}$ must be fixed by any color-preserving symmetry. The coloring will then be distinguishing, since by Lemma~\ref{lem: no reflections} no reflection preserves it and the identity is the only rotation that fixes a vertex. We depict the coloring in this case as we did before, with $k$ numbers in each row and the block $B'$ appearing as shaded.
\[
\begin{array}{c}
\overbrace{\hspace{.6 cm}}^{r=2}\hspace{.3 cm} \overbrace{\hspace{2.6 cm}}^{k-r \text{ (even)}} \\
\begin{array}{cc}
\framebox{1 2} & \framebox{3 2 $\cdots$ 3 2 3 1}\\
\framebox{2 3} & \framebox{1 3 $\cdots$ 1 3 1 2}\\
\framebox{1 2} & \framebox{3 2 $\cdots$ {\color{blue}{3 2 3}} 1}\\
\framebox{2 3} & \framebox{1 3 $\cdots$ 1 3 1 2}\\
\vdots & \vdots\\
\framebox{1 2} & \framebox{3 2 $\cdots$ 3 2 3 1}\\
\framebox{3 1} & \framebox{2 1 $\cdots$ 2 1 2 3}\\
\framebox{{\color{red}{2 3}}} &
\framebox{1 2 $\cdots$ 3 2 3 2}
\\
\framebox{3 1}
&
\end{array}
\end{array}
\]
The vertex $v_0$, which is colored 3 and is the last vertex in the shortened block $B'$, has three neighbors colored 1 and one neighbor colored 2. The vertex $v_{-1}$ is colored 2 with all of its neighbors colored 3, and $v_{-2}$ is colored 3 with one neighbor colored 1 and the rest colored 2. The only other vertices whose neighbors carry the same colors as those of $v_0$ are the second-to-last vertices of $B_1$ blocks surrounded by $B_2$ blocks; call such a vertex $v_i$. However, $v_{i-2}$ is then a vertex colored 3 with at least two neighbors colored 1 (or, when $k=4$, a vertex colored 1), so no color-preserving automorphism can carry $v_0$ to $v_i$. Hence $v_0$, and with it $v_{-1}$ and $v_{-2}$, must be fixed by any color-preserving symmetry, as claimed.
\textsc{Case 2: $n=2k+r$ so $q=2$}
Now suppose that $n = 2k + r,$ where $1< r < k.$ Since $n$ and $k$ are even, $r$ is also even. Consider the blocks of colors $E_1, E_2$ of length $k$ and $M$ of length $r$ given below.
\hfil \begin{tabular} {c}$E_1= (1, 2, 1, 2, \cdots, 1, 2, 1, 2) $ \\ $E_2= (3, 1, 3, 1, \cdots, 3, 1, 3, 1)$ \\ $M= (2, 3, 2, 3, \cdots, 2, 3, 2, 3)$ \end{tabular} \hfil
Color the sets of vertices $\{v_1, v_2, \cdots, v_k\}$ and $\{v_{k+1}, v_{k+2}, \cdots, v_{2k}\}$ using the blocks of colors $E_1$ and $E_2,$ respectively. Then use $M$ to color the remaining $r$ vertices. Below is the coloring $C$ of $C_{2k+r}(1, k)$ with a detailed representation of these blocks of colors. The light-shaded blocks in rows 3 and 4 correspond exactly to the first $k$ vertices in row 1.
\[
\begin{array}{c}
\overbrace{\hspace{2.2 cm}}^{r}\hspace{.1 cm} \overbrace{\hspace{2.2 cm}}^{k-r} \\
\left[\begin{array}{cc}
\framebox{1 2 $\cdots$ 1 2} & \framebox{1 2 $\cdots$ 1 2} \\
\framebox{3 1 $\cdots$ 3 1} & \framebox{3 1 $\cdots$ {\color{red}{3 1}}} \\
\framebox{{\color{red}{2 3}} $\cdots$ 2 3} & \color{gray}{\framebox{1 2 $\cdots$ 1 2}} \\ \color{gray}{\framebox{1 2 $\cdots$ 1 2}} &
\end{array}
\right]
\end{array}
\]
Since $r$ and $k$ are both even, we can easily see from the detailed representation above that $C$ is a proper coloring. By Lemma~\ref{lem: no reflections}, it suffices to show that no nontrivial rotation symmetry preserves the coloring. The string of vertices $v_{2k-1}, v_{2k}, v_{2k+1}, v_{2k+2}$ (as highlighted in the block diagram) is the only string of vertices labeled (3, 1, 2, 3), as every other pair of consecutive vertices labeled (1, 2) is preceded or followed by another pair of vertices labeled (1, 2). Hence, $C$ is distinguishing.
\end{proof}
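Case 2 of this construction is short enough to check mechanically. The sketch below (ours) builds the block coloring from $E_1$, $E_2$, and $M$ for a few pairs $(k,r)$ satisfying the hypotheses and confirms that it is proper and preserved by no nontrivial dihedral symmetry, which under the hypothesis $\operatorname{Aut}(C_n(1,k)) \cong D_n$ means it is distinguishing.
\begin{verbatim}
def edges(n, k):
    return {frozenset((i, (i + d) % n)) for i in range(n) for d in (1, k)}

def case2_coloring(k, r):              # blocks E_1, E_2, M around the cycle
    return [1, 2] * (k // 2) + [3, 1] * (k // 2) + [2, 3] * (r // 2)

for k, r in [(6, 4), (8, 4), (8, 6), (10, 4), (10, 6), (10, 8)]:
    n = 2 * k + r
    E, c = edges(n, k), case2_coloring(k, r)
    assert all(c[a] != c[b] for a, b in map(tuple, E))           # proper
    dihedral = [lambda i, s=s, e=e: (s + e * i) % n
                for s in range(n) for e in (1, -1)]
    assert sum(all(c[f(i)] == c[i] for i in range(n)) for f in dihedral) == 1
\end{verbatim}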
\subsection{Case: $n$ is odd and $k$ is odd}
We shall treat the cases $2k+1<n <3k$ and $n \geq 3k$ separately, splitting the former further according to the size of $r$. In each case, we propose a proper 3-coloring that is distinguishing. To do so, we conveniently label the vertices of $C_n(1,k)$ consecutively as $v_1 , \cdots , v_n,$ though the coloring works independently of which vertex is chosen as $v_1$. Note that since both $k$ and $n$ are odd, $v_{-k}= v_{(q-1)k+r}$ has an even index.
\begin{thm}\label{thm:both odd}
Given integers $k, n$ such that $2 < k < (n-1)/2,$ if both $n$ and $k$ are odd and $\operatorname{Aut}(C_n(1,k)) \cong D_n,$ then $\chi_D(C_{n}(1,k))=3.$
\end{thm}
\begin{proof} As indicated, we split the proof into three cases:
\textsc{Case 1: $ n \geq 3k$}
Let $q, r$ be integers such that $0 \leq r < k$ and $n=qk+r.$ Consider the following proper 3-coloring $C$ of the vertices: assign color 1 to all odd-indexed vertices $v_i \in \{v_1, v_2, \cdots, v_{-k}\}.$ Next, assign color 3 to even-indexed vertices $v_i \in \{v_{k+1}, \cdots, v_n\}$ (this does not include $v_n$, since $n$ is odd). Lastly, assign color 2 to all other vertices in $\{v_1, v_2, \cdots, v_k\} \cup \{v_{-k+1},v_{-k+2}, \cdots, v_n\}.$ In other words, for even $i$ such that $1< i< k,$ the vertex $v_i$ is colored 2, and for odd $i$ such that $(q-1)k+r< i \leq n$ (including $n$), we have $v_i$ colored 2. We claim that the coloring is proper. Moreover, it is distinguishing.
We proceed to prove that $C$ is proper by showing that each color class is an independent set. Consider the set of all vertices colored 1 and denote it $V_1.$ Thus, $V_1= \{v_i | i \text{ is odd and } 1 \leq i \leq (q-1)k+r \}$ by construction. We show that $V_1$ is an independent set. Take $v_i \in V_1.$ Then $v_{i-1}$, $v_{i+1}$, and $v_{i+k}$ (since $k$ is odd) are even-indexed vertices and do not belong to $V_1.$ Moreover, $v_{i-k}$ is an even-indexed vertex unless $1 \leq i \leq k,$ in which case $v_{i-k} = v_{n+i-k} \in \{v_{-k+1},v_{-k+2}, \dots, v_n\}$ has an odd index and is therefore colored 2. Therefore, $V_1$ is indeed an independent set.
A similar argument can be made for the set $V_3$ of the vertices colored 3. This time, we have $V_3 = \{v_i| i \text{ is even and } k+1 \leq i \leq n\}$ and $v_{i-1}, v_{i+1}, v_{i-k}$ are odd-indexed vertices. Moreover, $v_{i+k}$ is an odd-indexed vertex unless $(q-1)k+r+1 \leq i \leq n,$ in which case $v_{i+k} = v_{i+k-n} \in \{ v_1, v_2, \cdots, v_k\}$ has an even index and is therefore colored 2.
Lastly, we show that the set $V_2$ of vertices colored 2 is also an independent set. By construction, we have
$$V_2= \{v_i|i \text{ is even and } 1\leq i \leq k\} \cup \{v_i|i \text{ is odd and } (q-1)k+r+1 \leq i \leq n\}$$
Here is what $C$ looks like when restricted to $\{v_{-k+1}, v_{-k+2}, \cdots, v_{-1}, v_n, v_1, v_2, \cdots, v_k\}:$
\begin{center} \includegraphics[width=8cm, scale=4]{RestrictedC}
\end{center}
As we can see from the above illustration, the coloring $C$ restricted to $\{v_{-k+1}, v_{-k+2}, \cdots, v_{-1}, v_n, v_1, v_2, \cdots, v_k\}$ is a sequential-vertex coloring of the vertex list $v_{-k+1}, v_{-k+2}, \cdots, v_{-1}, v_n, v_1, v_2, \cdots, v_k,$ which begins by alternating the colors 2 and 3 on the first $k$ vertices and ends with color 3 replaced by color 1 on the last $k$ vertices. Thus, the indices of any two vertices colored 2 differ by an even number. Since 1 and $k$ are odd, two such vertices could only be adjacent if their indices differ by $n-1$ or by $n-k$; the only pair of indices differing by $n-1$ is $\{1,n\}$, and $v_1$ is colored 1, while in a pair $\{v_i, v_{i+n-k}\}$ with $1 \leq i \leq k$, the vertex $v_i$ is colored 2 only when $i$ is even, in which case $i+n-k$ is an even index greater than $k$, so that $v_{i+n-k}$ is colored 3. Therefore, the set of vertices colored 2 forms an independent set. In summary, $C$ is a proper coloring.
It remains to show that $C$ is distinguishing. By Lemma~\ref{lem: no reflections}, it suffices to show that the only color-preserving rotation symmetry is the identity. Suppose there is a non-identity color-preserving rotation. Then, by Lemma~\ref{lem: rotations}, the color sequence is periodic with period $d$ for some proper divisor $d$ of $n$; in particular, $C(v_n) = C(v_d).$ Since $d$ is a proper divisor of the odd integer $n$, the index $d$ is odd and $d \leq n/3 \leq n-k$ (using $n \geq 3k$), so by construction $v_d$ is colored 1, whereas $v_n$ is colored 2, a contradiction. Hence, $C$ is indeed distinguishing.
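The Case 1 coloring is likewise easy to verify computationally; the sketch below (ours, following the $v_1,\dots,v_n$ labeling of this subsection) checks properness and the distinguishing property for a few admissible pairs $(n,k)$ with dihedral automorphism group.
\begin{verbatim}
def edges(n, k):
    return {frozenset((i, (i + d) % n)) for i in range(n) for d in (1, k)}

def case1_coloring(n, k):              # vertices are labeled v_1, ..., v_n
    c = {}
    for i in range(1, n + 1):
        if i % 2 == 1 and i <= n - k:
            c[i % n] = 1
        elif i % 2 == 0 and i >= k + 1:
            c[i % n] = 3
        else:
            c[i % n] = 2
    return [c[v] for v in range(n)]

for n, k in [(17, 5), (21, 5), (27, 7), (33, 9)]:
    E, c = edges(n, k), case1_coloring(n, k)
    assert all(c[a] != c[b] for a, b in map(tuple, E))           # proper
    dihedral = [lambda i, s=s, e=e: (s + e * i) % n
                for s in range(n) for e in (1, -1)]
    assert sum(all(c[f(i)] == c[i] for i in range(n)) for f in dihedral) == 1
\end{verbatim}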
\textsc{Case 2: $n= 2k+r,$ where $1 < r < k$ and $2r\leq k-1$}
Note that in this case, $r$ is odd and we consider integers $\ell$ and $p$ such that $k=pr+\ell$ and $0\leq\ell<r$.
Consider the following arrangement of colors, where the first two rows are made up of $p$ blocks of length $r$ with a possibly shorter block of length $\ell$ (if $\ell >0$) added to the end. The third row consists of just one block of length $r$. Furthermore, the first row is used to color the sequence of vertices $v_1, v_2, \cdots, v_k,$ the second row to color the vertices $v_{k+1}, v_{k+2}, \cdots, v_{2k},$ and the third row to color the remaining vertices.
\[
\begin{array}{c}
\overbrace{\hspace{2.2 cm}}^{r}\hspace{.3 cm} \overbrace{\hspace{2.2 cm}}^{r} \hspace{.3cm}\overbrace{\hspace{2.2 cm}}^{r}
\hspace{1 cm}
\overbrace{\hspace{2.2 cm}}^{r} \hspace{.3 cm} \overbrace{\hspace{.5 cm}}^{\ell}\\
\left[\begin{array}{cccccc}
\framebox{\hspace{.5cm} $R_1$ \hspace{.5cm} } & \framebox{3 2 $\cdots$ 3 2 1} & \framebox{3 2 $\cdots$ 3 2 1}&\cdots & \framebox{3 2 $\cdots$ 3 2 1} & \framebox{$L_1$}\\
\framebox{\hspace{.5cm} $R_2$ \hspace{.5cm} } & \framebox{2 1 $\cdots$ 2 1 3} & \framebox{2 1 $\cdots$ 2 1 3}&\cdots & \framebox{2 1 $\cdots$ 2 1 3} & \framebox{$L_2$} \\
\framebox{\hspace{.5cm} $R_3$ \hspace{.5cm} } & & & & &
\end{array}
\right]
\end{array}
\]
where the blocks $R_1, R_2, R_3$ of length $r$ and blocks $L_1, L_2$ of length $\ell$ vary with respect to $\ell.$ In particular, we have
\[
\begin{tabular}{|c|c|c|c|c|}
\hline
& $\ell = 0$ & $\ell = 1$ & $\ell >1$ and odd & $\ell>1$ and even \\
\hline
$R_1$ & \framebox{1 3 $\cdots$ 1 3 1} & \framebox{3 2 $\cdots$ 3 2 1} & \framebox{3 2 $\cdots$ 3 2 1}& \framebox{1 3 $\cdots$ 1 3 1}\\
\hline
$R_2$ & \framebox{2 1 3 $\cdots$ 1 3 } & \framebox{2 1 $\cdots$ 2 1 3} & \framebox{2 1$\cdots$ 2 1 3}& \framebox{3 2$\cdots$ 3 2 3}\\
\hline
$R_3$ & \framebox{1 3 2 $\cdots$ 3 2} & \framebox{1 2 $\cdots$ 1 2 1} & \framebox{3 2 $\cdots$ 3 2 1}& \framebox{2 1 $\cdots$ 2 1 2}\\
\hline
$L_1$ &N/A& \framebox{3} & \framebox{2 1 $\cdots$ 2 1 3} & \framebox{3 1 $\cdots$ 3 1}\\
\hline
$L_2$ & N/A & \framebox{2} & \framebox{1 3 $\cdots$ 1 3 1} & \framebox{2 3 $\cdots$ 2 3}\\
\hline
\end{tabular}
\]
We proceed to show that the coloring is proper and distinguishing on a case-by-case basis. We start with the case where $\ell =0.$
\textsc{Subcase 1: Let $\ell = 0$ }.
Thus, $k= pr.$ Let $L_1$ and $L_2$ be empty and the other blocks $R_1, R_2,$ and $R_3$ as given below. Note that the added light-shaded blocks in rows 3 and 4 represent row 1 re-positioned so that the first $k-r$ vertices (positions) in row 1 are vertically aligned with the last $k-r$ vertices in row 2 and the last $r$ vertices (positions) in row 1 are vertically aligned with the $r$ vertices in row 3.
\[
\begin{array}{c}
\overbrace{\hspace{2.4 cm}}^{r}\hspace{.4 cm} \overbrace{\hspace{2.4 cm}}^{r} \hspace{.35cm}\overbrace{\hspace{2.5 cm}}^{r}
\hspace{1cm}
\overbrace{\hspace{2.5 cm}}^{r}\\
\left[\begin{array}{ccccc}
\framebox{1 3 1 $\cdots$ 1 3 1} & \framebox{3 2 3 $\cdots$ 3 2 1} & \framebox{3 2 3 $\cdots$ 3 2 1}& \cdots & \framebox{3 2 3 $\cdots$ 3 2 1}\\
\framebox{{\color{red}{2}} 1 3 $\cdots$ 3 1 3} & \framebox{2 1 2 $\cdots$ 2 1 3} & \framebox{2 1 2 $\cdots$ 2 1 3}& \cdots & \framebox{2 1 2 $\cdots$ 2 1 3} \\
\framebox{1 3 2 $\cdots$ 2 3 2} & \color{gray}{\framebox{1 3 1 $\cdots$ 1 3 1}} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\cdots} &
\color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} \\ \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & & &
\end{array}
\right]
\end{array}
\]
We can easily see from the above detailed arrangement of colors that the coloring is proper. By Lemma~\ref{lem: no reflections}, it suffices to show that there is not a nontrivial rotation symmetry that preserves the coloring. To this end, note that the vertex $v_{k+1}$ is the only vertex labeled 2 whose neighbors are all labeled 1. Hence, the coloring is distinguishing with respect to the dihedral group.
\textsc{Subcase 2: Let $\ell = 1$ (thus, $p$ is even). }
Let the blocks $R_1, R_2, R_3$ of length $r$ and blocks $L_1, L_2$ of length 1 be as given below.
\[
\begin{array}{c}
\overbrace{\hspace{2.6 cm}}^{r}\hspace{.4 cm} \overbrace{\hspace{2.4 cm}}^{r} \hspace{.35cm}\overbrace{\hspace{2.5 cm}}^{r}
\hspace{1cm}
\overbrace{\hspace{2.5 cm}}^{r} \hspace{.2cm} \overbrace{\hspace{.5 cm}}^{\ell}\\
\left[\begin{array}{cccccc}
\framebox{3 2 3 $\cdots$ 3 2 \, 1} & \framebox{3 2 3 $\cdots$ 3 2 1} & \framebox{3 2 3 $\cdots$ 3 2 1}& \cdots & \framebox{3 2 3 $\cdots$ 3 2 1}& \framebox{3}\\
\framebox{2 1 2 $\cdots$ 2 1 \, 3} & \framebox{2 1 2 $\cdots$ 2 1 3} & \framebox{2 1 2 $\cdots$ 2 1 3}& \cdots & \framebox{2 1 2 $\cdots$ 2 1 3}& \framebox{2}\\
\framebox{1 2 1 $\cdots$ 1 {\color{red}{2}} \, 1} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\cdots} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\framebox{3}} \\
\color{gray}{\framebox{2 3 2 $\cdots$ 2 1 $\mid$ 3}} & & &
\end{array}
\right]
\end{array}
\]
Here, $v_{-1}$ is the only vertex labeled 2 whose neighbors are all labeled 1. Similarly, the coloring is proper and distinguishing.
\textsc{Subcase 3: Let $\ell > 1$ and odd (thus, $p$ is even).}
Let the blocks $R_1, R_2, R_3$ and blocks $L_1, L_2$ be as given below.
\[
\begin{array}{c}
\overbrace{\hspace{3.9cm}}^{r}\hspace{.4 cm} \overbrace{\hspace{2.3 cm}}^{r} \hspace{.4cm}\overbrace{\hspace{2.5 cm}}^{r}
\hspace{.9cm}
\overbrace{\hspace{2.7 cm}}^{r} \hspace{.3cm} \overbrace{\hspace{2.6 cm}}^{\ell}\\
\left[\begin{array}{cccccc}
\framebox{3 2 $\cdots$ 3 2 \, 3 2 $\cdots$ 3 2 1} & \framebox{3 2 3 $\cdots$ 3 2 1} & \framebox{3 2 3 $\cdots$ 3 2 1}& \cdots & \framebox{3 2 3 $\cdots$ 3 2 1}& \framebox{2 1 2 $\cdots$ 2 1 3}\\
\framebox{2 1 $\cdots$ 2 1 \, 2 1 $\cdots$ 2 1 3} & \framebox{2 1 2 $\cdots$ 2 1 3} & \framebox{2 1 2 $\cdots$ 2 1 3}& \cdots & \framebox{2 1 2 $\cdots$ 2 {\color{red}{1 3}}}& \framebox{\color{red}{1 3 1 $\cdots$ 1 3 1}}\\
\framebox{3 2 $\cdots$ 3 2 \, 3 2 $\cdots$ 3 2 1} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\cdots} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 3}} \\
\color{gray}{\framebox{2 3 $\cdots$ 2 1 $\mid$ 2 1 $\cdots$ 2 1 3}} & & & \\
\underbrace{\hspace{1.9cm}}_{\ell - r\text{ (even)}}\hspace{.1 cm} \underbrace{\hspace{2 cm}}_{\ell\text{ (odd)}} &&&&
\end{array}
\right]
\end{array}
\]
When restricted to the sequence of vertices $v_{2k-\ell-2}, v_{2k-\ell-1}, \cdots, v_{2k},$ the above proper coloring alternates labels 1 and 3. Since $\ell > 1,$ the length of this sequence is at least five, which makes it the longest of its kind. As a result, no nontrivial rotation preserves the coloring. Therefore, by Lemma~\ref{lem: no reflections}, the coloring is distinguishing with respect to the dihedral group.
\textsc{Subcase 4: Let $\ell > 1$ and even (thus $p$ is odd).}
Let the blocks $R_1, R_2, R_3$ and blocks $L_1, L_2$ be as given below.
\[
\begin{array}{c}
\overbrace{\hspace{4 cm}}^{r}\hspace{.4 cm} \overbrace{\hspace{2.4 cm}}^{r} \hspace{.35cm}\overbrace{\hspace{2.5 cm}}^{r}
\hspace{1cm}
\overbrace{\hspace{2.7 cm}}^{r} \hspace{.2cm} \overbrace{\hspace{2.4 cm}}^{\ell}\\
\left[\begin{array}{cccccc}
\framebox{1 3 $\cdots$ 1 3 1 \, 3 $\cdots$ 1 3 1} & \framebox{3 2 3 $\cdots$ 3 2 1} & \framebox{3 2 3 $\cdots$ 3 2 1}& \cdots & \framebox{3 2 3 $\cdots$ 3 2 1}& \framebox{3 1 3 $\cdots$ 1 3 {\color{red}{1}}}\\
\framebox{{\color{red}{3}} 2 $\cdots$ 3 2 3 \, 2 $\cdots$ 3 2 3} & \framebox{2 1 2 $\cdots$ 2 1 3} & \framebox{2 1 2 $\cdots$ 2 1 3}& \cdots & \framebox{2 1 2 $\cdots$ 2 1 3}& \framebox{2 3 2 $\cdots$ 3 2 3}\\
\framebox{2 1 $\cdots$ 2 1 2 \, 1 $\cdots$ 2 1 2} & \color{gray}{\framebox{1 3 1 $\cdots$ 1 3 1}} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\cdots} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\framebox{3 2 3 $\cdots$ 2 3 2}} \\
\color{gray}{\framebox{3 2 $\cdots$ 3 2 1 $\mid$ 3 $\cdots$ 1 3 1}} & & & \\
\underbrace{\hspace{1.9cm}}_{\ell - r\text{ (odd)}}\hspace{.1 cm} \underbrace{\hspace{2 cm}}_{\ell\text{ (even)}} &&&&
\end{array}
\right]
\end{array}
\]
By construction, the coloring is proper. Moreover, $v_r$ is the only vertex labeled 1 that has all its neighbors labeled 3. Also, $v_{k+1}$ is the only vertex labeled 3 with exactly two neighbors labeled 2.
As a result, no nontrivial rotation preserves the coloring. Therefore, by Lemma~\ref{lem: no reflections}, the coloring is distinguishing with respect to the dihedral group.
\textsc{Case 3: Let $n=2k+r$, where $1 < r< k$ and $2r\geq k+1$.}
\textsc{Subcase 1: Let $2r-k < k-r$ with $2r-k>1$}.
Consider the following arrangement of sequences of colors, where the first two rows represent sequences of length $k$ and the third row is a sequence of length $r.$ The remaining light-shaded blocks in rows 3 and 4 represent row 1 re-positioned so that the first $k-r$ vertices (positions) in row 1 are vertically aligned with the last $k-r$ vertices (positions) in row 2 and the last $r$ vertices (positions) in row 1 are vertically aligned with the $r$ vertices in row 3.
\[
\begin{array}{c}
\overbrace{\overbrace{\hspace{3 cm}}^{2r-k \text{ (odd)}}\hspace{.3 cm}
\overbrace{\hspace{2.5 cm}}^{2k-3r \text{ (odd)}}}^{k-r\text{ (even)}}\hspace{.3 cm}
\overbrace{\hspace{3 cm}}^{2r-k \text{ (odd)}}
\hspace{.3 cm}
\overbrace{\overbrace{\hspace{2.5 cm}}^{2k-3r \text{ (odd)}}
\hspace{.3 cm}
\overbrace{\hspace{3 cm}}^{2r-k \text{ (odd)}}}^{k-r\text{ (even)}}\\
\left[\begin{array}{ccccc}
\framebox{2 1 2 1 2 $\cdots$ 2 1 2} & \framebox{1 2 1 2 $\cdots$ 2 1} & \framebox{3 1 3 1 3 $\cdots$ 3 1 3} & \framebox{2 1 2 1 $\cdots$ 1 2} &\framebox{1 2 1 2 1 $\cdots$ 1 2 1}\\
\framebox{3 2 1 2 1 $\cdots$ 1 2 1} & \framebox{2 1 2 1 $\cdots$ 1 2} & \framebox{1 3 1 3 1 $\cdots$ 1 3 1} & \framebox{3 2 1 2 $\cdots$ 2 1} & \framebox{2 1 2 1 2 $\cdots$ 2 1 2} \\
\framebox{1 {\color{red}{3 2 3 2 $\cdots$ 2 3 2}}} & \framebox{{\color{red}{3 2 3 2 $\cdots$ 2 3}}} & \framebox{{\color{red}{2}} 1 3 1 3 $\cdots$ 3 1 3} & \color{gray}{\framebox{2 1 2 1 $\cdots$ 1 2}} & \color{gray}{\framebox{1 2 1 2 1 $\cdots$ 1 2 1} }\\
\color{gray}{\framebox{3 1 3 1 3 $\cdots$ 3 1 3} } & \color{gray}{\framebox{2 1 2 1 $\cdots$ 1 2} } & \color{gray}{\framebox{1 2 1 2 1 $\cdots$ 1 2 1} } &
\end{array}
\right]
\\
\hspace{10 cm}\underbrace{\hspace{5.8 cm}}_{k-r\text{ (even)}}
\end{array}
\]
Use row 1 to color the sequence of vertices $v_1, v_2,\cdots, v_k,$ row 2 to color the sequence of vertices $v_{k+1}, v_{k+2}, \cdots, v_{2k},$ and the first three blocks in row 3 to color the remaining consecutively-indexed vertices. It is not hard to see from the above detailed arrangement of colors that the coloring is proper. By Lemma~\ref{lem: no reflections}, it suffices to show that there is not a nontrivial rotation symmetry that preserves the coloring. Note that the highlighted vertices above (from $v_{2k+2}$ to $v_{-2r+k +1}$) form a unique string of vertices labeled alternately 2 and 3: since $2r-k > 1$, the longest such string outside of these vertices is only 2 vertices long. Therefore, no rotational symmetry will fix the coloring and thus this coloring is distinguishing.
\textsc{Subcase 2: Let $2r-k < k-r$ with $2r-k=1$ and $2k-3r >1$}.
In this case, the coloring must be altered slightly because of the construction of the blocks in the previous subcase. Consider the coloring below in which the colors of the last two vertices are changed from the last subcase.
\[
\begin{array}{c}
\overbrace{\overbrace{\hspace{.2 cm}}^{ 2r-k}\hspace{.3 cm}
\overbrace{\hspace{2.2 cm}}^{2k-3r \text{ (odd)}}}^{k-r\text{ (even)}}
\hspace{.3 cm}
\overbrace{\hspace{.2 cm}}^{2r-k}
\hspace{.3 cm}
\overbrace{\overbrace{\hspace{2.2 cm}}^{2k-3r \text{ (odd)}}
\hspace{.3 cm}
\overbrace{\hspace{.2 cm}}^{2r-k}}^{k-r\text{ (even)}}\\
\left[\begin{array}{ccccc}
\framebox{2} & \framebox{1 2 1 2 $\cdots$ 2 1} & \framebox{3} & \framebox{2 1 2 1 $\cdots$ 1 2} &\framebox{1}\\
\framebox{{\color{red}{3}}} & \framebox{2 1 2 1 $\cdots$ 1 2} & \framebox{1} & \framebox{3 2 1 2 $\cdots$ 2 1} & \framebox{2} \\
\framebox{1} & \framebox{3 2 3 2 $\cdots$ 2 1} & \framebox{3} & \color{gray}{\framebox{2 1 2 1 $\cdots$ 1 2}} & \color{gray}{\framebox{1} }\\
\color{gray}{\framebox{3} } & \color{gray}{\framebox{2 1 2 1 $\cdots$ 1 2} } & \color{gray}{\framebox{1} } &
\end{array}
\right]
\\
\hspace{4.5 cm}\underbrace{\hspace{3.3 cm}}_{k-r\text{ (even)}}
\end{array}
\]
The coloring given above is proper, and note that the highlighted vertex, $v_{k+1}$, is the only vertex labeled 3 with two of its neighbors labeled 2 and two labeled 1. Because this vertex is unique, no rotational symmetry can fix the coloring and thus the coloring is distinguishing.
\textsc{Subcase 3: Let $2r-k = 1$ and $2k-3r = 1$ }
Note that if both $2r-k = 1$ and $2k-3r = 1$, we have the specific case of $C_{13}(1,5)$, which has distinguishing chromatic number 4 and is investigated in Section 8.
\textsc{Subcase 4: Let $2r-k > k-r$ }
Once again, consider the following arrangement of sequences of colors.
\[
\begin{array}{c}
\overbrace{\hspace{3 cm}}^{k-r \text{ (even)}}\hspace{.4 cm} \overbrace{\hspace{3.2 cm}}^{2r-k \text{ (odd)}}
\hspace{.4 cm}
\overbrace{\hspace{3 cm}}^{k-r \text{ (even)}}\\
\left[\begin{array}{ccc}
\framebox{2 1 2 1 2 $\cdots$ 1 2 1} & \framebox{3 2 $\cdots$ 3 \, 2 3 $\cdots$ 2 3} & \framebox{2 1 2 1 2 $\cdots$ 1 2 1}\\
\framebox{{\color{red}{3}} 2 1 2 1 $\cdots$ 2 1 2} & \framebox{1 3 $\cdots$ 1 \, 3 1 $\cdots$ 3 1} & \framebox{3 2 1 2 1 $\cdots$ 2 1 2} \\
\framebox{1 3 2 3 2 $\cdots$ 3 2 3} & \framebox{2 1 $\cdots$ 2 \, 1 2 $\cdots$ 1 3} & \color{gray}{\framebox{2 1 2 1 2 $\cdots$ 1 2 1} } \\
\color{gray}{\framebox{3 2 3 2 3 $\cdots$ 2 3 2}}& \color{gray}{\framebox{3 2 $\cdots$ 3 $\mid$ 2 1 $\cdots$ 2 1} } & \\
& \underbrace{\hspace{1.6cm}}_{3r-2k\text{ (odd)}}\hspace{.09 cm} \underbrace{\hspace{1.7 cm}}_{k-r\text{ (even)}} &
\end{array}
\right]
\end{array}
\]
Proceed similarly to color the vertices and observe that the coloring is proper. Moreover, the vertex $v_{k+1}$ is the only vertex labeled 3 with exactly two of its neighbors labeled 2 and the others labeled 1. Hence, the coloring is distinguishing with respect to the dihedral group.
\end{proof}
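The block colorings used in these case analyses can be spot-checked mechanically. The short Python sketch below is illustrative only (it is not part of any proof, and the helper names are ours): it builds the edge set of $C_n(1,k)$, tests whether a given coloring is proper, and tests whether only the identity element of the dihedral group preserves it.
\begin{verbatim}
# Illustrative helpers (not part of the proofs) for checking colorings of the
# circulant graph C_n(1,k); vertices are indexed 0, 1, ..., n-1.
def circulant_edges(n, k):
    edges = set()
    for v in range(n):
        edges.add(frozenset((v, (v + 1) % n)))
        edges.add(frozenset((v, (v + k) % n)))
    return edges

def is_proper(coloring, edges):
    # coloring[v] is the color of vertex v
    return all(coloring[u] != coloring[v] for u, v in map(tuple, edges))

def dihedral_maps(n):
    # rotations v -> v + s and reflections v -> s - v (mod n)
    for s in range(n):
        yield lambda v, s=s: (v + s) % n
        yield lambda v, s=s: (s - v) % n

def destroys_dihedral(coloring, n):
    # True iff only the identity dihedral symmetry preserves the coloring
    preserving = sum(all(coloring[g(v)] == coloring[v] for v in range(n))
                     for g in dihedral_maps(n))
    return preserving == 1
\end{verbatim}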
\subsection{Case: $n$ is odd and $k$ is even}
We will proceed by considering the cases where $k\geq n/3$ and $n/3 < k < n/2-1$. Furthermore, we will need to consider within the case that $n/3 < k < n/2-1$, when $r\leq k/2$ and $r>k/2$.
\begin{thm}\label{thm:n odd and k even}
Given integers $k, n$ such that $2 < k < (n-1)/2,$ if $n$ is odd and $k$ is even and $\operatorname{Aut}(C_n(1,k)) \cong D_n,$ then $\chi_D(C_{n}(1,k))=3.$
\end{thm}
\begin{proof}
\textsc{Case 1: Let $n/3<k<n/2 -1$, so that $n=2k+r$ with $r$ odd, and suppose $r\leq k/2$}.
Let $r\leq k/2$ so that $k=mr+\ell$ for some integers $\ell<r$ and $m \geq 2$.
Consider the following arrangement of colors, where the first two rows are made up of blocks of length $r$ with a shorter block of length $\ell$ (if $\ell>0$) added to the end. The third row consists of just one block of length $r$. Note that in the fourth row, the block consists of a portion of block $B$ of length $r-\ell$ (which we will call $B'$) followed by $E_1$ of length $\ell$.
\[
\begin{array}{c}
\overbrace{\hspace{1.4 cm}}^{r}\hspace{.2 cm} \overbrace{\hspace{.5 cm}}^{r}
\hspace{1 cm} \overbrace{\hspace{.4 cm}}^{r}
\hspace{.2 cm}
\overbrace{\hspace{.4 cm}}^{\ell}\\
\left[
\begin{array}{ccccc}
\framebox{\,\,\,\,\, A \,\,\,\,\,} & \framebox{B} & \cdots & \framebox{B} & \framebox{$E_1$}\\
\framebox{\,\,\,\,\, C \,\,\,\,\,} & \framebox{D} & \cdots & \framebox{D} & \framebox{$E_2$} \\
\framebox{\,\,\,\,\, E \,\,\,\,\,} & \color{gray}{\framebox{A}} & \color{gray}{\cdots} &
\color{gray}{\framebox{B}} & \color{gray}{\framebox{B}}\\
\color{gray}{\framebox{B' $\mid$ $E_1$ }} & & & &
\end{array}
\right]
\end{array}
\]
\textsc{Subcase 1: Let $\ell = 0$ }.
Let $E_1$ and $E_2$ be empty and the other blocks as given below.
\[
\begin{array}{c}
\overbrace{\hspace{2.4 cm}}^{r}\hspace{.3 cm} \overbrace{\hspace{2.4 cm}}^{r}
\hspace{1cm}
\overbrace{\hspace{2.4 cm}}^{r}\\
\left[\begin{array}{cccc}
\framebox{1 3 1 $\cdots$ 1 3 1} & \framebox{3 2 3 $\cdots$ 3 2 1} & \cdots & \framebox{3 2 3 $\cdots$ 3 2 1}\\
\framebox{{\color{red}{2}} 1 3 $\cdots$ 3 1 3} & \framebox{2 1 2 $\cdots$ 2 1 3} & \cdots & \framebox{2 1 2 $\cdots$ 2 1 3} \\
\framebox{1 3 2 $\cdots$ 2 3 2} & \color{gray}{\framebox{1 3 1 $\cdots$ 1 3 1}} & \color{gray}{\cdots} &
\color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}}\\
\color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & & &
\end{array}
\right]
\end{array}
\]
The vertex $v_{k+1}$ highlighted above is the only vertex labeled 2 whose neighbors are all labeled 1. Because this vertex is unique, the coloring is distinguishing with respect to the dihedral group.
\textsc{Subcase 2: Let $\ell = 1$ }.
Let the blocks be given as below with $E_1$ and $E_2$ of length 1.
\[
\begin{array}{c}
\overbrace{\hspace{2.4 cm}}^{r}\hspace{.3 cm} \overbrace{\hspace{2.4 cm}}^{r}
\hspace{1.2 cm}
\overbrace{\hspace{2.4 cm}}^{r} \hspace{.3 cm} \overbrace{\hspace{.2 cm}}^{\ell} \\
\left[\begin{array}{ccccc}
\framebox{3 2 3 $\cdots$ 3 2 \, 1} & \framebox{3 2 3 $\cdots$ 3 2 1} & \cdots & \framebox{3 2 3 $\cdots$ 3 2 1} & \framebox{3}\\
\framebox{2 1 2 $\cdots$ 2 1 \, 3} & \framebox{2 1 2 $\cdots$ 2 1 3} & \cdots & \framebox{2 1 2 $\cdots$ 2 1 3} & \framebox{2} \\
\framebox{1 2 1 $\cdots$ 1 {\color{red}{2}} \, 1} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\cdots} &
\color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\framebox{3}} \\
\color{gray}{\framebox{2 3 2 $\cdots$ 2 1 $\mid$ 3}} & & & &
\end{array}
\right]
\end{array}
\]
The vertex $v_{-1}$ highlighted above is the only vertex labeled 2 whose neighbors are all labeled 1. Because this vertex is unique, the coloring is distinguishing with respect to the dihedral group.
\textsc{Subcase 3: Let $\ell > 1$ and $m$ odd }.
Use the same arrangement as in Subcase 2 (with $\ell = 1$) with extended $E_1$ and $E_2$ blocks.
\[
\begin{array}{c}
\overbrace{\hspace{4.4 cm}}^{r}\hspace{.3 cm} \overbrace{\hspace{2.4 cm}}^{r}
\hspace{1 cm}
\overbrace{\hspace{2.4 cm}}^{r} \hspace{.3 cm} \overbrace{\hspace{2.4 cm}}^{\ell < r \text{ (odd)}} \\
\left[\begin{array}{ccccc}
\framebox{3 2 3 $\cdots$ 2 3 2 \, 3 $\cdots$ 3 2 1} & \framebox{3 2 3 $\cdots$ 3 2 1} & \cdots & \framebox{3 2 3 $\cdots$ 3 2 1} & \framebox{3 1 3 $\cdots$ 3 1 3}\\
\framebox{2 1 2 $\cdots$ 1 2 1 \, 2 $\cdots$ 2 1 3} & \framebox{2 1 2 $\cdots$ 2 1 3} & \cdots & \framebox{2 1 2 $\cdots$ 2 1 3} & \framebox{2 3 2 $\cdots$ 2 3 2} \\
\framebox{{\color{red}{1 2 1 $\cdots$ 2 1 2 \, 1 $\cdots$ 1 2 1}}} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\cdots} &
\color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\framebox{3 2 3 $\cdots$ 3 2 3}}\\
\color{gray}{\framebox{2 3 2 $\cdots$ 3 2 1 $\mid$ 3 $\cdots$ 3 1 3}} & & & & \\
\underbrace{\hspace{2.3 cm}}_{\ell - r\text{ (even)}}\hspace{.1 cm} \underbrace{\hspace{1.7 cm}}_{\ell\text{ (odd)}} &&&&
\end{array}
\right]
\end{array}
\]
The vertices $v_{2k+1}$ to $v_n$ highlighted above form a unique string of vertices (alternating labels 1 and 2) of length $r$, as each of the other blocks contains a 3 and is immediately preceded or followed by a vertex labeled 3. Because this string of vertices is unique, the coloring is distinguishing with respect to the dihedral group.
\textsc{Subcase 4: Let $\ell > 1$ and $m$ even and hence $\ell$ even }.
Use the blocks as given below.
\[
\begin{array}{c}
\overbrace{\hspace{4.4 cm}}^{r}\hspace{.3 cm} \overbrace{\hspace{2.4 cm}}^{r}
\hspace{1 cm}
\overbrace{\hspace{2.4 cm}}^{r} \hspace{.3 cm} \overbrace{\hspace{2.4 cm}}^{\ell < r \text{ (even)}} \\
\left[\begin{array}{ccccc}
\framebox{1 3 1 $\cdots$ 3 1 \, 3 1 $\cdots$ 1 3 1} & \framebox{3 2 3 $\cdots$ 3 2 1} & \cdots & \framebox{3 2 3 $\cdots$ 3 2 1} & \framebox{3 1 3 $\cdots$ 1 3 1}\\
\framebox{3 2 3 $\cdots$ 2 3 \, 2 3 $\cdots$ 3 2 3} & \framebox{2 1 2 $\cdots$ 2 1 3} & \cdots & \framebox{2 1 2 $\cdots$ 2 1 3} & \framebox{2 3 2 $\cdots$ 3 2 3} \\
\framebox{{\color{red}{2 1 2 $\cdots$ 1 2 \, 1 2 $\cdots$ 2 1 2}}} & \color{gray}{\framebox{1 3 1 $\cdots$ 1 3 1}} & \color{gray}{\cdots} &
\color{gray}{\framebox{3 2 3 $\cdots$ 3 2 1}} & \color{gray}{\framebox{3 2 3 $\cdots$ 2 3 2}}\\
\color{gray}{\framebox{3 2 3 $\cdots$ 2 1 $\mid$ 3 1 $\cdots$ 1 3 1}} & & & & \\
\underbrace{\hspace{2 cm}}_{\ell - r\text{ (odd)}}\hspace{.1 cm} \underbrace{\hspace{2 cm}}_{\ell\text{ (even)}} &&&&
\end{array}
\right]
\end{array}
\]
The vertices $v_{2k+1}$ to $v_n$ highlighted above form a unique string of vertices (alternating labels 2 and 1) of length $r$, as each of the other blocks contains a 3 and is immediately preceded or followed by a vertex labeled 3. Because this string of vertices is unique, the coloring is distinguishing with respect to the dihedral group.
\textsc{Case 2: Let $n/3<k<n/2 -1$, so that $n=2k+r$ with $r$ odd, and suppose $r > k/2$}.
It can be verified that the following arrangement of these sequences yields a proper coloring $C$ of $C_n(1, k)$ that is also distinguishing.
\textsc{Subcase 1: Let $2r-k < k-r$ }.
\[
\begin{array}{c}
\overbrace{\hspace{4cm}}^{k-r \text{ (odd)}}\hspace{.4 cm} \overbrace{\hspace{3.3 cm}}^{2r-k \text{ (even)}}
\hspace{.4 cm}
\overbrace{\hspace{3 cm}}^{k-r \text{ (odd)}}\\
\left[\begin{array}{ccc}
\framebox{1 2 $\cdots$ 1 2 1 2 $\cdots$ 1 2 3} & \framebox{1 3 $\cdots$ 1 3 1 $\cdots$ 1 3} & \framebox{1 2 1 2 1 $\cdots$ 1 2 1}\\
\framebox{2 3 $\cdots$ 2 3 2 3 $\cdots$ 2 3 2} & \framebox{3 1 $\cdots$ 3 1 3 $\cdots$ 3 1} & \framebox{3 1 2 1 2 $\cdots$ 2 1 2} \\
\framebox{{\color{red}{3}} 1 $\cdots$ 3 1 3 1 $\cdots$ 3 1 3} & \framebox{1 3 $\cdots$ 1 3 1 $\cdots$ 1 3} & \color{gray}{\framebox{1 2 1 2 1 $\cdots$ 1 2 3} } \\
\underbrace{\color{gray}{\framebox{1 3 $\cdots$ 1 3 1 2 $\cdots$ 1 2 1}}}_{\text{ if } \, 2r-k \, < \, k-r}& \color{gray}{\framebox{2 1 $\cdots$ 2 1 2 $\cdots$ 2 1} } &
\end{array}
\right]
\end{array}
\]
The vertex $v_{2k+1}$ highlighted above is the only vertex labeled 3 with two neighbors labeled 2 and two neighbors labeled 1. Because this vertex is unique, the coloring is distinguishing with respect to the dihedral group.
\textsc{Subcase 2: Let $2r-k > k-r$ }.
\[
\begin{array}{c}
\overbrace{\hspace{4cm}}^{k-r \text{ (odd)}}\hspace{.4 cm} \overbrace{\hspace{3.3 cm}}^{2r-k \text{ (even)}}
\hspace{.4 cm}
\overbrace{\hspace{3 cm}}^{k-r \text{ (odd)}}\\
\left[\begin{array}{ccc}
\framebox{1 2 $\cdots$ 1 2 1 2 $\cdots$ 1 2 3} & \framebox{1 3 $\cdots$ 1 3 1 $\cdots$ 1 3} & \framebox{1 2 1 2 1 $\cdots$ 1 2 1}\\
\framebox{2 3 $\cdots$ 2 3 2 3 $\cdots$ 2 3 2} & \framebox{3 1 $\cdots$ 3 1 3 $\cdots$ 3 1} & \framebox{3 1 2 1 2 $\cdots$ 2 1 2} \\
\framebox{{\color{red}{3}} 1 $\cdots$ 3 1 3 1 $\cdots$ 3 1 3} & \framebox{1 3 $\cdots$ 1 3 1 $\cdots$ 1 3} & \color{gray}{\framebox{1 2 1 2 1 $\cdots$ 1 2 3} } \\
\color{gray}{\framebox{1 3 $\cdots$ 1 3 1 3 $\cdots$ 1 3 1}} & \underbrace{\color{gray}{\framebox{3 1 $\cdots$ 3 1 2 $ \cdots$ 2 1} }}_{\text{ if } 2r-k \, > \, k -r} &
\end{array}
\right]
\end{array}
\]
The vertex $v_{2k+1}$ highlighted above is the only vertex labeled 3 with two neighbors labeled 2 and two neighbors labeled 1. Because this vertex is unique, the coloring is distinguishing with respect to the dihedral group.
\textsc{Case 3: $n=pk+r$ for some $p\geq 3$}.
We will use blocks similar to those used in Section 7.3, given below:
\hfil \begin{tabular}{ccccccccccc}
$B_1$ & $=$ & ($1$, & $2$, & $3$, & $2$, & $3$, & $\cdots$, & $2$, & $3$, & $1$); \\
$B_2$ & $=$ & ($2$, & $3$, & $1$, & $3$, & $1$, & $\cdots$, & $3$, & $1$, & $2$); \\
$B_3$ & $=$ & ($3$, & $1$, & $2$, & $1$, & $2$, & $\cdots$, & $1$, & $2$, & $3$); \\
$B_4$ & $=$ & ($3$, & $1$, & $3$, & $1$, & $3$, & $\cdots$, & $1$, & $3$, & $1$); \\
$B'$ & $=$ & ($2$, & $1$, & $2$, & $1$, & $2$, & $\cdots$, & $2$, & $1$, & $2$).
\end{tabular} \hfil
The blocks of colors $B_1$, $B_2$, $B_3$ are constructed in such a way that, as long as blocks are not repeated, adjacent vertices will not receive the same color; the same holds between $B_4$ and $B_2$. We give a coloring $C$ of the vertices of $C_n(1,k)$ using colors 1, 2, and 3 as follows: assign colors to $v_1,v_2, \ldots, v_k$ using $B_4$. Next, starting with $B_2$, assign colors to $v_{k+1}, v_{k+2}, \ldots, v_{(p-1)k}$ by alternating the use of the sequences $B_2$, $B_3$ on successive collections of $k$ consecutively-indexed vertices. Assign colors to $v_{(p-1)k+1}, v_{(p-1)k+2},\ldots ,v_{pk}$ using $B_1$. Lastly, use the shortened block $B'$ of length $r$ on the remaining vertices, noting that if $r=1$ we are left with the single vertex $v_n$ labeled 2.
Use the blocks as given below; the first arrangement is for $p$ odd, so that the alternation of $B_2$ and $B_3$ ends with a $B_2$, and the second is for $p$ even, where the alternation ends with a $B_3$.
\[
\begin{array}{cc}
\begin{array}{cc}
\begin{array}{c}
{\color{red}{B_4}} \\
B_2 \\
B_3 \\
\vdots \\
B_2 \\
B_1 \\
B'
\end{array}
\begin{array}{cc}
\overbrace{\hspace{2.9 cm}}^{r \text{ (odd)}}\hspace{.3 cm} \overbrace{\hspace{2.2 cm}}^{k-r \text{ (odd)}}\\
\left[\begin{array}{cc}
\framebox{{\color{red}{3 1 3 1 3 $\cdots$ 3 1 3}}} & \framebox{{\color{red}{1 3 $\cdots$ 1 3 1}}}\\
\framebox{2 3 1 3 1 $\cdots$ 1 3 1} & \framebox{3 1 $\cdots$ 3 1 2}\\
\framebox{3 1 2 1 2 $\cdots$ 2 1 2} & \framebox{1 2 $\cdots$ 1 2 3}\\
$\vdots$ & $\vdots$\\
\framebox{2 3 1 3 1 $\cdots$ 1 3 1} & \framebox{3 1 $\cdots$ 3 1 2}\\
\framebox{1 2 3 2 3 $\cdots$ 3 2 3} & \framebox{2 3 $\cdots$ 2 3 1}\\
\framebox{2 1 2 1 2 $\cdots$ 2 1 2} & \color{gray}{\framebox{3 1 $\cdots$ 3 1 3}}\\
\color{gray}{\framebox{1 3 1 3 1 $\cdots$ 1 3 1}} &
\end{array}
\right]
\end{array}
\end{array}
&
\begin{array}{cc}
\begin{array}{c}
{\color{red}{B_4}} \\
B_2 \\
B_3 \\
\vdots \\
B_3 \\
B_1 \\
B'
\end{array}
\begin{array}{cc}
\overbrace{\hspace{2.9 cm}}^{r \text{ (odd)}}\hspace{.3 cm} \overbrace{\hspace{2.2 cm}}^{k-r \text{ (odd)}}\\
\left[\begin{array}{cc}
\framebox{{\color{red}{3 1 3 1 3 $\cdots$ 3 1 3}}} & \framebox{{\color{red}{1 3 $\cdots$ 1 3 1}}}\\
\framebox{2 3 1 3 1 $\cdots$ 1 3 1} & \framebox{3 1 $\cdots$ 3 1 2}\\
\framebox{3 1 2 1 2 $\cdots$ 2 1 2} & \framebox{1 2 $\cdots$ 1 2 3}\\
$\vdots$ & $\vdots$\\
\framebox{3 1 2 1 2 $\cdots$ 2 1 2} & \framebox{1 2 $\cdots$ 1 2 3}\\
\framebox{1 2 3 2 3 $\cdots$ 3 2 3} & \framebox{2 3 $\cdots$ 2 3 1}\\
\framebox{2 1 2 1 2 $\cdots$ 2 1 2} & \color{gray}{\framebox{3 1 $\cdots$ 3 1 3}}\\
\color{gray}{\framebox{1 3 1 3 1 $\cdots$ 1 3 1}} &
\end{array}
\right]
\end{array}
\end{array}
\end{array}
\]
The vertices in the block $B_4$ in both cases above form a unique string of $r$ vertices with alternating labels 3 and 1. Since each of the other blocks contains multiple vertices labeled 2, the string is unique and the coloring is distinguishing with respect to the dihedral group.
\end{proof}
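As an illustration, the block coloring described in Case 3 above can be assembled and checked mechanically. The sketch below is illustrative only; it reuses the \texttt{circulant\_edges}, \texttt{is\_proper}, and \texttt{destroys\_dihedral} helpers from the earlier sketch, and the instance $(n,k,p,r)=(23,6,3,5)$ is chosen purely as an example.
\begin{verbatim}
# Illustrative sketch: assemble the Case 3 coloring B_4, B_2, B_3, ..., B_1, B'
# for n = p*k + r (p >= 3, k even, r odd) and run the earlier checks.
def alternating(a, b, length):
    return [a if i % 2 == 0 else b for i in range(length)]

def case3_coloring(n, k, p, r):
    assert n == p * k + r and p >= 3
    B1 = [1] + alternating(2, 3, k - 2) + [1]
    B2 = [2] + alternating(3, 1, k - 2) + [2]
    B3 = [3] + alternating(1, 2, k - 2) + [3]
    B4 = alternating(3, 1, k)
    Bprime = alternating(2, 1, r)
    middle = []
    for i in range(p - 2):               # blocks coloring v_{k+1}, ..., v_{(p-1)k}
        middle += B2 if i % 2 == 0 else B3
    return B4 + middle + B1 + Bprime     # colors of v_1, ..., v_n (0-indexed list)

n, k, p, r = 23, 6, 3, 5                 # example instance only
col = case3_coloring(n, k, p, r)
print(is_proper(col, circulant_edges(n, k)), destroys_dihedral(col, n))
\end{verbatim}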
\section{Graphs $C_n(1,k)$ where $k^2 \equiv \pm 1 \pmod{n}$}
\label{sec: k^2 = pm 1}
\begin{figure}
\caption{A proper distinguishing coloring of $C_{13}(1,5)$.}
\label{C_13(1,5)}
\end{figure}
Theorem~\ref{thm: Aut when k^2 equiv pm 1} and other results in Section~\ref{sec: automorphism groups} showed that when $k^2 \equiv \pm 1 \pmod{n}$, the graphs $C_n(1,k)$ are dart-transitive, but any automorphism not corresponding to a dihedral symmetry on the Hamiltonian cycle comprising the edges in $E_1$ corresponds to a symmetry that maps the edges of $E_k$ to this Hamiltonian cycle while mapping the edges of $E_1$ to those in $E_k$. It follows that, given a coloring $C$ that destroys all dihedral symmetries of $C_n(1,k),$ if $C$ also fixes an edge, then $C$ destroys all non-dihedral symmetries as well. We begin with the case of $C_{13}(1,5)$ to illustrate this idea.
\begin{prop} \label{prop: C13 1 5}
$\chi_D(C_{13}(1,5))= \chi(C_{13}(1, 5))= 4$.
\end{prop}
\begin{proof}
Consider the following coloring: starting at $v_1$, assign colors to $v_1, v_2, \cdots, v_{12}$ by repeatedly using the block of colors $B= (1, 2,3)$, and assign color 4 to $v_{13}.$ We can see in Figure~\ref{C_13(1,5)} that the coloring is proper. Moreover, it is distinguishing since the uniquely colored vertex $v_{13}$ is fixed and the edge $v_{12}v_{13}$ is also fixed, as it is the only edge with endpoints colored 4 and 3.
\end{proof}
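The dihedral part of this argument can be verified in the same spirit as before; the fragment below (illustrative only) reuses the helpers from the earlier sketch, with the vertices relabeled $0,\ldots ,12$ so that $v_1$ corresponds to $0$ and $v_{13}$ to $12$. The edge-fixing part of the argument is checked separately in the proof above.
\begin{verbatim}
# Illustrative check of the coloring used in the proposition (vertices 0..12).
n, k = 13, 5
coloring = [1 + (v % 3) for v in range(12)] + [4]   # 1,2,3 repeated, then 4
edges = circulant_edges(n, k)
print(is_proper(coloring, edges), destroys_dihedral(coloring, n))
\end{verbatim}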
The graph $C_{15}(1,4)$ will form an exception for our next main result, so we establish its distinguishing chromatic number here. Unlike with $C_{13}(1,5)$, the distinguishing chromatic number will be greater than the chromatic number, which is 3, so more effort is required for the lower bound. We begin with a lemma that incidentally shows that the distinguishing coloring of $C_8(1,4)$ in Section~\ref{sec: trivalent} was unique up to the names of the colors.
\begin{lem} \label{lem: Cn(1,4) pre-labeling}
Let $n$ be an integer with $n \geq 8$. In any distinguishing 3-coloring of $C_n(1,4)$, there exist 8 consecutively-indexed vertices receiving colors in the pattern $C,A,B,C,B,C,A,B$, where $\{A,B,C\}$ is the set of colors used.
\end{lem}
\begin{proof}
Observe first that in any proper coloring there must be at least one index $i$ such that $v_i,v_{i+1},v_{i+2}$ receive three distinct colors; if this is not the case, then $C_n(1,4)$ would be properly colored with just two colors, and this is a contradiction, since $v_0,v_1,v_2,v_3,v_4$ are the vertices of a 5-cycle in $C_n(1,4)$.
Furthermore, we claim that in any \emph{distinguishing} proper coloring of $C_n(1,4)$ using three colors, there must be an index $j$ such that $v_j,v_{j+1},v_{j+2}$ do \emph{not} receive three distinct colors. Indeed, if no such $j$ existed, then the proper coloring would consist of the pattern $A,B,C$ repeated around the vertices, requiring that $n$ be a multiple of 3 and that the map $v_k \mapsto v_{k+3}$ be a color-preserving automorphism of $C_n(1,4)$, a contradiction.
It follows that there must be an index $i$ such that $v_i,v_{i+1},v_{i+2}$ are colored with three distinct colors, but $v_{i+1},v_{i+2},v_{i+3}$ are not. By symmetry, we may suppose that $i=0$ and that the labels on $v_{0},v_1,v_2,v_3$ are $A,B,C,B$. Since $v_4$ is adjacent to both $v_3$ and $v_0$, the color on $v_4$ must be $C$. Likewise, the colors on $v_5$ and $v_6$ must be $A$ and $B$, respectively, and the color on $v_{n-1}$ (which is adjacent to both $v_0$ and $v_3$) must be $C$. This completes the proof.
\end{proof}
\begin{thm}
$\chi_D(C_{15}(1,4)) = 4.$
\end{thm}
\begin{proof}
We first show that no distinguishing proper coloring can use 3 colors. Suppose to the contrary that some coloring $c$ does. By Lemma~\ref{lem: Cn(1,4) pre-labeling}, we may suppose that $v_{14},v_0,\dots,v_{6}$ are respectively labeled with $C,A,B,C,B,C,A,B$. Since $v_{10}$ is adjacent to $v_6$ and to $v_{14}$, we have $c(v_{10})=A$. We now proceed by cases on the colors of $v_7,v_8,v_9,v_{11},v_{12},v_{13}$, showing that any proper 3-coloring admits a color-preserving automorphism.
\textsc{Case: $c(v_9)=B$ or $c(v_{11})=C$}. Suppose first that $c(v_9)=B$. Then $c(v_8)=A$ and $c(v_{13})=A$, since each vertex has a neighbor already colored with $B$ and with $C$. Then $c(v_7)=C$ and $c(v_{12})=C$, and consequently $c(v_{11})=B$. At this point all the vertices have been colored, and one can verify that the permutation $(v_1 \ v_{11})(v_2 \ v_7)(v_4 \ v_{14})(v_5 \ v_{10})(v_8 \ v_{13})$ is an automorphism of $C_{15}(1,4)$ preserving the coloring $c$.
Note that the starting pattern of colors appears on vertices $v_6,v_5,v_4,v_3,v_2,v_1,v_0,v_{14}$, in this order, if the roles of colors $B$ and $C$ are switched, so the case where $c(v_{11})=C$ follows exactly the same argument as above to conclude that there is an automorphism of $C_{15}(1,4)$ preserving the coloring.
Assume henceforth that $c(v_9) \neq B$ and $c(v_{11})\neq C$. This forces $c(v_9)=C$ and $c(v_{11})=B$.
\textsc{Case: $c(v_8)=A$ or $c(v_{12})=A$}. By the symmetry argument above, it suffices to suppose that $c(v_8)=A$. It follows that $c(v_7)=C$ and that $c(v_{11})=B$ and $c(v_{12})=C$. Now, if $c(v_{13})=A$, we get the color-preserving automorphism $(v_1 \ v_{11})(v_2 \ v_7)(v_4 \ v_{14})(v_5 \ v_{10})(v_8 \ v_{13})$. If $c(v_{13})=B$, then we get the color-preserving automorphism $(v_0 \ v_{10})(v_1 \ v_6)(v_3 \ v_{13})(v_4 \ v_9)(v_7 \ v_{12})$.
Having handled the cases above, assume henceforth that $c(v_8) \neq A$ and $c(v_{12})\neq A$. This forces $c(v_8)=B$ and $c(v_{12})=C$.
\textsc{Case: $c(v_7)=A$ or $c(v_{13})=A$}. By the prior symmetry argument, it suffices to assume that $c(v_7)=A$. No matter what color $v_{13}$ receives, we have the color-preserving symmetry $(v_0 \ v_5)(v_{2} \ v_{12})(v_3 \ v_8)(v_6 \ v_{11})(v_9 \ v_{14})$.
In light of these cases, we see that $c(v_7) \neq A$ and $c(v_{13})\neq A$. This forces $c(v_7)=C$ and $c(v_{13})=B$. However, then the map $v_{i} \mapsto v_{i+5}$ yields a color-preserving automorphism.
This concludes the necessary cases; we have shown that no proper 3-coloring of $C_{15}(1,4)$ is distinguishing. On the other hand, $\chi(C_{15}(1,4)) = 3$, and the coloring that alternates colors $A,B,C$ around the vertices is a palindrome-free coloring of $C_{n}(1,4)$ satisfying the hypotheses of Theorem 6.3, which then implies that $\chi_D(C_{15}(1,4))=4$.
\end{proof}
With the exceptions dealt with, we now prove the general result. We will show that the colorings introduced in Section 7 also destroy any symmetry that swaps the edges comprising a Hamiltonian cycle on $E_1$ with those comprising a Hamiltonian cycle on $E_k$. We do this by finding an edge that is fixed by the coloring. Note that this is more than is necessary, as most graphs covered in Section 7 do not admit an edge-swapping symmetry of the kind discussed in this section, but it suffices to prove our result.
\begin{thm} \label{thm: nondihedral}
For $k^2 \equiv \pm 1 \pmod{n}$ with $k>3$, $\chi_D(C_n(1,k)) = 3,$ except for $C_{13}(1, 5)$ and $C_{15}(1, 4)$ where the distinguishing chromatic number is 4.
\end{thm}
\begin{proof}
As in the dihedral case, we proceed on a case-by-case basis. Note that the corresponding theorem and case in Section 7 will be listed for reference at the beginning of each case.
\textsc{Case: $n$ is odd and $k$ is odd (referencing Theorem 7.6 in Section 7.4)}
\textsc{Subcase 1: $n \geq 3k$ (referencing Theorem 7.6 Case 1)}
Consider the distinguishing coloring $C$ given in the case of its dihedral analogue. It can be easily verified that $v_k$ is the only vertex labeled 1 with exactly two neighbors colored 2 and the others colored 3. Similarly, $v_n$ is the only vertex colored 2 with exactly two neighbors colored 1 and the others colored 3. By construction, both vertices $v_0$ and $v_k$ are fixed and, hence, the edge $v_0v_k$ is fixed.
\textsc{Subcase 2: $n= 2k + r$ (referencing Theorem 7.6 Case 2)}
Suppose that $\displaystyle 2r \geq k+1.$ We consider the same subcases, together with the corresponding distinguishing colorings, discussed in the dihedral section. If $2r-k > k-r$ (previously referred to as Subcase 4), the vertex $v_{k+1}$ is the only vertex labeled 3 with exactly two of its neighbors labeled 2 and the others labeled 1. Moreover, the neighbors labeled 1, namely $v_k$ and $v_{2k+1}$, have distinct neighborhoods. Hence, the edge $v_kv_{k+1}$ is fixed. If $2r-k < k-r$ with $2r-k > 1$ (previously referred to as Subcase 1), the vertex $v_{k-r+2}$ is labeled 1 with all neighbors labeled 3. Switch the color of $v_{k-r+2}$ to 2. Thus, $v_{k-r+2}$ becomes the only vertex labeled 2 that has all its neighbors labeled 3. Most importantly, the coloring is still distinguishing with respect to the dihedral group. Moreover, since $v_{-r+2}$ has a unique neighborhood among the neighbors of $v_{k-r+2},$ the edge $v_{k-r+2}v_{-r+2}$ is fixed. If $2r-k < k-r$ with $2r-k = 1$ and $2k-3r > 1$ (previously referred to as Subcase 2), recall that the vertex $v_{k+1}$ is the only vertex labeled 3 with two of its neighbors labeled 2 and two labeled 1. Moreover, the neighbors labeled 1, namely $v_{2k+1}$ and $v_k,$ have distinct neighborhoods. Thus, the edge $v_{k+1}v_k$ is fixed. Lastly, if $2r-k < k-r$ with $2r-k = 2k-3r= 1$ (previously referred to as Subcase 3), then we have the special graph $C_{13}(1, 5)$ (see Theorem~\ref{thm: chromatic num}), for which a distinguishing coloring was constructed at the beginning of this section.
\textsc{Subcase 3: $n= 2k + r$ (referencing Theorem 7.6 Case 3)}
Suppose $\displaystyle 2r \leq k-1,$ and write $k = pr+\ell.$ Again, we consider the same subcases discussed in the section dedicated to the dihedral group. If $\ell = 0$ (previously referred to as Subcase 1), the vertex $v_{k+1}$ is the only 2-colored vertex whose neighbors are all colored 1. Thus, $v_{k+1}$ is fixed. Moreover, $v_2$ is colored 3 and all its neighbors are colored 1. Switch the color of $v_2$ to 2. This move makes the neighborhood of $v_1$ distinct from those of the other neighbors of $v_{k+1},$ while the coloring remains distinguishing with respect to the dihedral group. Moreover, we have $v_1v_{k+1}$ as a fixed edge. If $\ell$ is even and $\ell \neq 0$ (previously referred to as Subcase 4), the vertex $v_{k+1}$ is fixed by virtue of being the only 3-colored vertex with exactly two neighbors colored 2 and the others colored 1. Moreover, since $v_1$ is uniquely colored among the neighbors of $v_{k+1},$ we thus have $v_{k+1}v_1$ as a fixed edge. If $\ell$ is odd (previously referred to as Subcases 2 and 3), the vertex $v_{k+r+2}$ is colored 1 with all its neighbors colored 2, and the vertex $v_{k+r+1}$ is colored 2 with exactly three neighbors colored 3 and the other one, $v_{k+r+2},$ colored 1. Fix the vertex $v_{k+r+1}$ by switching the color of $v_{k+r+2}$ from 1 to 3. The switch makes $v_{k+r+1}$ the only 2-colored vertex whose neighbors are all colored 3. Most importantly, the coloring remains distinguishing with respect to the dihedral group. Moreover, $v_{k+r}$ has a distinct neighborhood among the neighbors of $v_{k+r+1}.$ Therefore, the edge $v_{k+r}v_{k+r+1}$ is fixed.
\textsc{Case: $n$ is odd and $k$ is even (referencing Theorem 7.7 in Section 7.5)}
\textsc{Subcase 1: $n \geq 3k$ (referencing Theorem 7.7 Case 3)}
Let $n= qk+r,$ where $q \geq 3.$ Since $n$ is odd and $k$ is even, $r$ is also odd and $r\neq 0.$
First, suppose that $q$ is even. Consider the coloring $C$ given in the case of its dihedral analogue, which is the case where the block $B_1$ is preceded by a block $B_3.$ If $r < k-1,$ the vertices $v_1$ and $v_2$ are labeled 3 and 1, respectively. Switch the color of $v_1$ to 1 and the color of $v_2$ to 2. Note that the modified $C$ is still proper. Furthermore, this move makes $v_1$ a vertex labeled 1 with all its neighbors labeled 2. Thus, $v_1$ is fixed, since there exists no other vertex labeled 1 with all its neighbors labeled 2, and the modified $C$ is distinguishing with respect to the dihedral group. Moreover, $v_2$ has a distinct neighborhood among the neighbors of $v_1$; in particular, $v_2$ has three neighbors labeled 3. Therefore, the edge $v_1v_2$ is also fixed. Hence, the modified $C$ is distinguishing with respect to the nondihedral symmetries as well. If $r = k-1 \geq 3,$ the vertex $v_{qk+1}$ (the first vertex in block $B'$) is colored 2 with all its neighbors colored 1. Now, fix the vertex $v_{qk+1}$ by switching the colors of all other such like-vertices (vertices labeled 2 with all neighbors labeled 1) to 3; for example, such a vertex can be found in a block $B_3$ squeezed between two blocks $B_2.$ Therefore, the modified $C$ is distinguishing with respect to the dihedral group. Moreover, the vertex $v_{qk+2}$ has a unique neighborhood among the neighbors of $v_{qk+1}$; in particular, at least three of its neighbors are labeled 2. Therefore, the edge $v_{qk+1}v_{qk+2}$ is fixed. Hence, the modified $C$ is distinguishing with respect to the nondihedral symmetries as well.
Now suppose that $q$ is odd. Consider the coloring $C$ given in the case of its dihedral analogue, which is the case where the block $B_1$ is preceded by a block $B_2.$ If $q >3,$ there exists at least one block $B_3$ squeezed between two blocks $B_2$, and the first vertex of the block $B_1$, namely $v_{(q-1)k+1},$ is labeled 1 with all its neighbors labeled 2. Since no other vertex labeled 1 has the same neighborhood, $v_{(q-1)k+1}$ is thus fixed. Moreover, the vertex $v_{qk+1}$ has a distinct neighborhood among the neighbors of $v_{(q-1)k+1}$; in particular, all the neighbors of $v_{qk+1}$ are labeled 1. Therefore, the edge $v_{(q-1)k+1}v_{qk+1}$ is fixed. Hence, $C$ is distinguishing with respect to the nondihedral symmetries as well. If $q=3$ and $k \neq 4,$ the vertex $v_{k+3}$ is labeled 1 with all its neighbors labeled 3. Switch its color to 2. This move makes $v_{k+3}$ a vertex labeled 2 with all its neighbors labeled 3. Thus, $v_{k+3}$ is fixed since there is no other such vertex, and the modified $C$ is distinguishing with respect to the dihedral group. Moreover, $v_{k+4}$ has a distinct neighborhood among the neighbors of $v_{k+3}.$ More specifically, $v_{k+4}$ has exactly two neighbors labeled 2 and two neighbors labeled 1. Therefore, the edge $v_{k+3}v_{k+4}$ is fixed. Hence, the modified $C$ is distinguishing with respect to the nondihedral symmetries as well. If $q=3$ and $k=4,$ then we have the exceptional graph $C_{15}(1, 4)$.
\textsc{Subcase 2: $n=2k+r$ (referencing Theorem 7.7 Cases 1 and 2)}
Suppose $\displaystyle r > k/2.$ Consider the distinguishing coloring $C$ given in the case of its dihedral analogue. Then $v_1$ is fixed, since it is the only vertex colored 1 that has exactly two neighbors colored 2 (namely $v_2$ and $v_{k+1}$) and the others colored 3. Since $v_2$ can be distinguished from $v_{k+1},$ the edge $v_1v_2$ is fixed, giving the desired result.
Suppose $\displaystyle r \leq k/2,$ and write $k = pr+\ell.$ Consider the distinguishing coloring $C$ given in the case of its dihedral analogue. The same argument used in the case where both $n$ and $k$ are odd also works here.
\textsc{Case: $n$ is even and $k$ is even (referencing Theorem 7.5 in Section 7.3)}
\textsc{Subcase 1: $n \geq 3k$ (referencing Theorem 7.5 Case 1)}
Again, consider the coloring $C$ given in the case of its dihedral analogue. Recall that when $r>2$, if we switch the color of vertex $v_0$ to 3, we obtain a proper coloring that is distinguishing with respect to the dihedral group. Moreover, $v_{-1}$ has a distinct neighborhood among the neighbors of $v_0$; in particular, $v_{-1}$ has three of its neighbors colored 3, while every other neighbor of $v_0$ is adjacent to at most two vertices colored 3. Therefore, the edge $v_{-1}v_0$ is fixed, giving the desired result. When $r=2,$ the same argument as in the dihedral case holds.
\textsc{Subcase 2: $n=2k+r$ (referencing Theorem 7.5 Case 2)}
Suppose $r \leq k/2.$ Consider the distinguishing coloring $C$ given in the case of its dihedral analogue. With respect to $C,$ the vertex $v_{k+1}$ is fixed since it is the only vertex colored 3 that has exactly two neighbors colored 1 (namely $v_1$ and $v_{k+2}$) and the others colored 2. Moreover, $v_2$ is colored 2 and all its neighbors are colored 1. Switch the color of $v_2$ to 3. This move makes the neighborhood of $v_1$ distinct from those of the other neighbors of $v_{k+1},$ so we have $v_1v_{k+1}$ as a fixed edge.
The case $r > k/2$ is handled similarly.
Note that the case where $n$ is even and $k$ is odd is already addressed in Theorem~\ref{thm: n even k odd}.
\end{proof}
\end{document}
\begin{document}
\title{Test for entanglement using physically \\
observable witness operators and positive maps}
\author{Kai Chen}
\email{[email protected]}
\author{Ling-An Wu}
\email{[email protected]}
\affiliation{Laboratory of Optical Physics, Institute of Physics, Chinese Academy of
Sciences, Beijing 100080, P.R. China}
\begin{abstract}
Motivated by the Peres-Horodecki criterion and the realignment criterion we
develop a more powerful method to identify entangled states for any
bipartite system through a universal construction of the witness operator.
The method also gives a new family of positive but non-completely positive
maps of arbitrary high dimensions, which provide a much better test than the
witness operators themselves. Moreover, we find that there are two types of
positive maps that can detect $2\times N$ and $4\times N$ bound entangled
states. Since entanglement witnesses are physical observables and may be
measured locally our construction could be of great significance for future
experiments.
\end{abstract}
\pacs{03.67.Mn, 03.65.Ta, 03.65.Ud}
\date{\today}
\maketitle
\section{Introduction}
Quantum entangled states lie at the heart of the rapidly developing field of
quantum information science, which encompasses important potential
applications such as quantum communication, quantum computation and quantum
lithography \cite{pre98,nielsen,zeilinger}. However, the fundamental nature
of entangled states has tantalized physicists since the earliest days of
quantum mechanics, and even today is by no means fully understood.
One of the most basic problems is how one can tell whether a quantum state
is entangled, and how entangled it still is after undergoing some noisy quantum
process (e.g., long-distance quantum communication).
A pure entangled state is a quantum state which cannot be factorized, i.e., $
\left\vert \Psi \right\rangle _{AB}\neq \left\vert \psi \right\rangle
_{A}\left\vert \phi \right\rangle _{B}$, and shows remarkable nonlocal
quantum correlations. From a practical point of view, the state of a
composite quantum system which usually becomes a mixed state after a noisy
process, is called \emph{unentangled} or \emph{separable} if it can be
prepared in a \textquotedblleft local\textquotedblright\ or
\textquotedblleft classical\textquotedblright\ way. It can then be expressed
as an ensemble realization of pure product states $\left\vert \psi
_{i}\right\rangle _{A}\left\vert \phi _{i}\right\rangle _{B}$ occurring with
a certain probability $p_{i}$ and the density matrix $\rho
_{AB}=\sum_{i}p_{i}\rho _{i}^{A}\otimes \rho _{i}^{B},$ where $\rho
_{i}^{A}=\left\vert \psi _{i}\right\rangle _{A}\left\langle \psi
_{i}\right\vert $, $\rho _{i}^{B}=\left\vert \phi _{i}\right\rangle
_{B}\left\langle \phi _{i}\right\vert $, $\sum_{i}p_{i}=1$, and $\left\vert
\psi _{i}\right\rangle _{A}$, $\left\vert \phi _{i}\right\rangle _{B}$ are
normalized pure states of subsystems $A$ and $B$, respectively \cite
{werner89}. If no convex linear combination of $\rho _{i}^{A}\otimes \rho
_{i}^{B}$ exists for a given $\rho _{AB}$, then the state is called
\textquotedblleft entangled\textquotedblright .
However, for a generic mixed state $\rho _{AB}$, finding a separable
decomposition or proving that it does not exist is a non-trivial task (see
the recent good reviews \cite{lbck00,terhal01,3hreview,bruss01} and
references therein). There have been many efforts in recent years to analyze
the separability and quantitative character of quantum entanglement. The
Bell inequalities satisfied by a separable system give the first necessary
condition for separability \cite{Bell64}. In 1996, Peres made an important
step forward by showing that, for a separable state, the partial transposition
with respect to one subsystem of a bipartite density matrix is positive, $
\rho ^{T_{A}}\geq 0$. This is known as the PPT (positive partial
transposition) criterion or Peres-Horodecki criterion \cite{peres}. By
establishing a close connection between positive map theory and
separability, Horodecki \textit{et al.} promptly showed that this is a
sufficient condition for separability for bipartite systems of $2\times 2$
and $2\times 3$ \cite{3hPLA223}. Regarding the quantitative character of
entanglement, Wootters succeeded in giving an elegant formula to compute the
\textquotedblleft \textit{entanglement of formation}\textquotedblright\ \cite
{be96} of $2\times 2$ mixtures, thus giving also a separability criterion
\cite{wo98}.
Very recently, Rudolph and other authors \cite
{ru02,ChenQIC03,chenPLA02,rupra} proposed a new operational criterion for
separability, the \emph{realignment} criterion (named thus following the
suggestion of Ref. \cite{Horo02}), which is equivalent to the \emph{
computational cross norm} criterion of Ref. \cite{ru02}. The criterion is
very simple to apply and shows a dramatic ability to detect bound entangled
states (BESs) \cite{hPLA97} in any high dimension \cite{ChenQIC03}. It is
even strong enough to detect the true tripartite entanglement shown in Ref.
\cite{Horo02}.
An alternative method to detect entanglement is to construct so-called
entanglement witnesses (EWs) \cite{3hPLA223,terhal00,lkch00} and positive
maps (PMs) \cite{pmaps}. Entanglement witnesses \cite{3hPLA223,terhal00} are
physical observables that can \textquotedblleft detect\textquotedblright\
the presence of entanglement. Starting from the witness operators one can
also obtain PMs \cite{Jami72} that detect more entanglement. Although there
are constructions of EWs related to the unextendible product bases of
Ref. \cite{terhal00} and to the existence of \textquotedblleft
edge\textquotedblright\ positive partial transpose entangled states (PPTES)
in Ref. \cite{lkch00}, a universal construction of EWs and PMs for a general
bipartite quantum state has yet to be discovered.
The aim of this paper is to introduce a new powerful technique for universal
construction of EWs and PMs for any bipartite density matrix and to obtain a
stronger operational test for identifying entanglement. Our starting point
will be the $PPT$ criterion and the realignment criterion for separability.
The universal construction is given in Sec. \ref{sec2}, and several typical
examples of entangled states which can be recognized by the corresponding
EWs and PMs are presented in Sec. \ref{sec3}. We show that many of the
recognized bound entangled states cannot be detected by the realignment
criterion or any of the EWs and PMs constructed previously in the
literature. Moreover, we demonstrate in Sec. \ref{sec4} that there are two
types of positive maps that can detect systematically the $2\times N$ and $
4\times N$ bound entangled states. A brief summary and discussion are given
in the last section.
\section{Universal construction of entangled witnesses and positive maps for
identifying entanglement}
\label{sec2}
In this section we will give two universal constructions for EWs and PMs for
any bipartite density matrix that provide stronger operational tests for
identifying entanglement. The starting points are the $PPT$ criterion \cite
{peres,3hPLA223} and the realignment criterion for separability \cite
{ru02,ChenQIC03,chenPLA02,rupra}. The main tools that we shall use are drawn
from the general theory of matrix analysis \cite{hornt1,hornt}.
\subsection{Some notation}
\label{sec2.1} The various matrix operations from \cite{hornt1,hornt} that
we need will employ the following notation.
\noindent \textbf{Definition:} \emph{For each $m\times n$ matrix $A=[a_{ij}]$
, where $a_{ij}$ is the matrix entry of A, we define the vector $vec(A)$ as}
\begin{equation*}
vec(A)=[a_{11},\cdots ,a_{m1},a_{12},\cdots ,a_{m2},\cdots ,a_{1n},\cdots
,a_{mn}]^{T}.
\end{equation*}
Here the superscript \textquotedblleft $T$" means standard transposition.
Let $Z$ be an $m\times m$ block matrix with block size $n\times n$. We
define the following \textquotedblleft realignment\textquotedblright\
operation $\mathcal{R}$ to change $Z$ to a realigned matrix $\widetilde{Z}$
of size $m^{2}\times n^{2}$ that contains the same elements as $Z$ but in
different positions as follows:
\begin{equation}
\mathcal{R}(Z)\equiv \widetilde{Z}\equiv \left[
\begin{array}{c}
vec(Z_{1,1})^{T} \\
\vdots \\
vec(Z_{m,1})^{T} \\
\vdots \\
vec(Z_{1,m})^{T} \\
\vdots \\
vec(Z_{m,m})^{T}
\end{array}
\right] . \label{realign}
\end{equation}
For example, a $2\times 2$ bipartite density matrix $\rho $ can be
transformed as
\begin{align}
\rho & =\left(
\begin{array}{cc|cc}
\rho _{11} & \rho _{12} & \rho _{13} & \rho _{14} \\
\rho _{21} & \rho _{22} & \rho _{23} & \rho _{24} \\ \hline
\rho _{31} & \rho _{32} & \rho _{33} & \rho _{34} \\
\rho _{41} & \rho _{42} & \rho _{43} & \rho _{44}
\end{array}
\right) \notag \\
& \longrightarrow \mathcal{R}(\rho )=\left(
\begin{array}{cccc}
\rho _{11} & \rho _{21} & \rho _{12} & \rho _{22} \\ \hline
\rho _{31} & \rho _{41} & \rho _{32} & \rho _{42} \\ \hline
\rho _{13} & \rho _{23} & \rho _{14} & \rho _{24} \\ \hline
\rho _{33} & \rho _{43} & \rho _{34} & \rho _{44}
\end{array}
\right) .
\end{align}
\subsection{The realignment criterion}
\label{sec2.2}
Motivated by the Kronecker product approximation technique for a matrix \cite
{loan,pits}, we developed a very simple method to obtain the realignment
criterion in Ref. \cite{ChenQIC03} (called the cross norm criterion in \cite
{ru02}). To recollect, the criterion says that, \emph{for any separable $
m\times n$ bipartite density matrix $\rho _{AB},$ the $m^{2}\times n^{2}$
matrix $\mathcal{R}(\rho _{AB})$ should satisfy $||\mathcal{R}(\rho
_{AB})||\leq 1,$ where $||\cdot ||$ means the trace norm defined
as $||G||=Tr((GG^{\dagger })^{1/2})$. Thus $||\mathcal{R}(\rho _{AB})||>1$
implies the presence of entanglement in $\rho _{AB}.$}
This criterion is strong enough to detect most of the bound entangled states
in the literature, as shown in Ref. \cite{ChenQIC03}, and holds even for
genuine multipartite systems, as shown in Ref. \cite{Horo02}.
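For readers who wish to experiment numerically, the realignment operation and the trace-norm test can be transcribed in a few lines. The NumPy sketch below is illustrative only (the function names are ours, not part of the criterion); as a check, it returns $||\mathcal{R}(\rho )||=2>1$ for the two-qubit maximally entangled state.
\begin{verbatim}
# Illustrative NumPy sketch of the realignment operation R and the
# trace-norm test; not part of the original derivation.
import numpy as np

def realign(rho, m, n):
    # rho is (m*n) x (m*n), viewed as an m x m array of n x n blocks Z_{p,q};
    # the rows of the returned m^2 x n^2 matrix are vec(Z_{p,q})^T.
    rows = []
    for q in range(m):            # second block index
        for p in range(m):        # first block index
            block = rho[p*n:(p+1)*n, q*n:(q+1)*n]
            rows.append(block.flatten(order='F'))   # vec = column stacking
    return np.array(rows)

def trace_norm(G):
    return np.linalg.svd(G, compute_uv=False).sum()

# Example: the two-qubit maximally entangled state has ||R(rho)|| = 2 > 1.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi, phi)
print(trace_norm(realign(rho, 2, 2)))
\end{verbatim}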
\subsection{Entanglement witnesses and positive maps}
\label{sec2.3}
\noindent \textbf{Entanglement witness:} an entanglement witness is a
Hermitian operator $W=W^{\dagger }$ acting on the Hilbert space $\mathcal{H}=
\mathcal{H}_{A}\otimes \mathcal{H}_{B}$ that satisfies $Tr(W\rho _{A}\otimes
\rho _{B})\geq 0$ for any pure separable state $\rho _{A}\otimes \rho _{B}$,
and has at least one negative eigenvalue. If a density matrix $\rho $
satisfies $Tr(W\rho )<0$, then $\rho $ is an entangled state and we say that
$W$ \textquotedblleft detects\textquotedblright\ entanglement in $\rho $
\cite{3hPLA223,terhal00,lkch00}. It has been shown in Ref. \cite{3hPLA223}
that a given density matrix is entangled if and only if there exists an EW
that detects it.
\noindent \textbf{Positive map:} it was shown in Ref. \cite{3hPLA223} that $
\rho $ is separable iff for any positive map $\Lambda $ the inequality
\begin{equation}
(Id_{A}\otimes \Lambda )\rho \geq 0 \label{posimap}
\end{equation}
holds, where $Id_{A}$ means the identity operator acting on the $A$
subsystem. In practice, detecting entanglement only involves finding those
maps which are positive but \emph{not} completely positive (non-CP), since a
CP map will satisfy Eq. (\ref{posimap}) for any given $\rho $ \cite
{3hPLA223}.
It was shown in \cite{Jami72} that there is a close connection between a
positive map and the entanglement witness, i.e., the \emph{Jamio\l kowski
isomorphism}
\begin{equation}
W=(Id_{A}\otimes\Lambda)P_{+}^{m}, \label{isomor}
\end{equation}
where $P_{+}^{m}=|\Phi\rangle\langle\Phi|$ and $|\Phi\rangle=\frac{1}{\sqrt {
m}}\sum_{i=1}^{m}|\,ii\rangle$ is the maximally entangled state in $\mathcal{
H}_{A}\otimes\mathcal{H}_{A}$.
\subsection{Universal construction of EWs}
\label{sec2.4}
With the above mentioned notation and concepts in mind we will now derive
the main result of this paper: two universal constructions for EWs and PMs
to identify entanglement of bipartite quantum systems in arbitrary
dimensions.
\noindent
\vskip0.2cm \textbf{Theorem 1:} \label{theorem1} \emph{For any density
matrix $\rho $, we can associate with it an EW defined as
\begin{equation}
W=Id-(\mathcal{R}^{-1}(U^{\ast }V^{T}))^{T}, \label{witness}
\end{equation}
where $U,V$ are the unitary matrices that yield the singular value
decomposition (SVD) of $\mathcal{R}(\rho )$, i.e., $\mathcal{R}(\rho
)=U\Sigma V^{\dagger }.$}
\vskip0.2cm \noindent \textbf{Proof:} Using a result of matrix analysis (see
chapter $3$ of \cite{hornt}), we have
\begin{align}
||\mathcal{R}(\rho )||& =\max \{|Tr(X^{\dagger }\mathcal{R}(\rho )Y)|:X\in
M_{m^{2},q}, \notag \\
& Y\in M_{n^{2},q},X^{\dagger }X=Id=Y^{\dagger }Y\},
\end{align}
where $q=\min \{m^{2},n^{2}\}$. We thus find that the maximum value for $
|Tr(X^{\dagger }\mathcal{R}(\rho )Y)|$ occurs at $X=U$ and $Y=V$. This is
because
\begin{eqnarray*}
|Tr(U^{\dagger }\mathcal{R}(\rho )V)| &=&|Tr(U^{\dagger }U\Sigma V^{\dagger
}V)| \\
&=&|Tr\Sigma |=Tr\Sigma =||\mathcal{R}(\rho )||.
\end{eqnarray*}
In the same way, for a separable state $\rho _{sep}$, $|Tr(X^{\dagger }
\mathcal{R}(\rho _{sep})Y)|$ has its maximum value at $X=U^{\prime }$ and $
Y=V^{\prime }$ where $\mathcal{R}(\rho _{sep})=U^{\prime }\Sigma ^{\prime }{
V^{\prime }}^{\dagger }$ is the SVD of $\mathcal{R}(\rho _{sep})$. From the
realignment criterion for separability we have $||\mathcal{R}(\rho
_{sep})||\leq 1$, thus
\begin{eqnarray*}
&&|Tr(U^{\dagger }\mathcal{R}(\rho _{sep})V)| \\
&\leq &|Tr({U^{\prime }}^{\dagger }\mathcal{R}(\rho _{sep})V^{\prime })|=||
\mathcal{R}(\rho _{sep})||\leq 1.
\end{eqnarray*}
Since $\mathcal{R}(\rho ^{\prime })$ is just a rearrangement of entries in $
\rho ^{\prime }$, we find by direct observation that $W_{2}=(\mathcal{R}
^{-1}(W_{1}^{T}))^{T}$ if we require $|Tr(W_{1}\mathcal{R}(\rho ^{\prime
}))|=|Tr(W_{2}\rho ^{^{\prime }})|$ to hold for all $\rho ^{\prime }$. Here $
\mathcal{R}^{-1}(W_{1}^{T})$ means the inverse of $\mathcal{R}$, realigning
the entries of $W_{1}^{T}$ according to Eq. (\ref{realign}). Letting $
W_{1}=VU^{\dagger }$, since $Tr(W\rho _{sep})=1-Tr(W_{2}\rho _{sep})\geq
1-|Tr(W_{2}\rho _{sep})|=1-|Tr(W_{1}\mathcal{R}(\rho _{sep}))|\geq 0$, we
have the EW
\begin{eqnarray}
W=Id-W_{2} &=&Id-(\mathcal{R}^{-1}(W_{1}^{T}))^{T} \notag \\
&=&Id-(\mathcal{R}^{-1}(U^{\ast }V^{T}))^{T}.
\end{eqnarray}
\rule{1ex}{1ex}
\vskip0.2cm Whenever we have an entangled state $\rho$ which can be detected
by the realignment criterion, i.e., $||\mathcal{R}(\rho)||>1$, it can also
be detected by the EW in Theorem 1, since $Tr(W\rho)=1-Tr(VU^{\dagger}
\mathcal{R}(\rho))=1-||\mathcal{R}(\rho)||<0$. It should be remarked that
for an $m\times m$ system, we have $(\mathcal{R}^{-1}(U^{\ast}V^{T}))^{T}
\equiv\mathcal{R}(VU^{\dagger})$ which is a simpler expression for practical
operation by direct observation.
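A direct numerical transcription of Theorem 1 is sketched below, for illustration only. It reuses the realign helper from the earlier sketch and employs the economy-size SVD so that $U^{\ast }V^{T}$ has size $m^{2}\times n^{2}$; because the SVD is not unique when singular values coincide, the particular matrix returned may differ from the witnesses quoted later by that freedom, although the relation $Tr(W\rho )=1-||\mathcal{R}(\rho )||$ is unaffected.
\begin{verbatim}
# Illustrative sketch of the Theorem 1 witness W = Id - (R^{-1}(U* V^T))^T,
# reusing realign() from the earlier sketch.
import numpy as np

def inverse_realign(Y, m, n):
    # Inverse rearrangement: Y is m^2 x n^2; returns Z with R(Z) = Y.
    Z = np.zeros((m*n, m*n), dtype=complex)
    for q in range(m):
        for p in range(m):
            Z[p*n:(p+1)*n, q*n:(q+1)*n] = Y[q*m + p].reshape((n, n), order='F')
    return Z

def witness_from_realignment(rho, m, n):
    U, s, Vh = np.linalg.svd(realign(rho, m, n), full_matrices=False)
    W1 = U.conj() @ Vh.conj()        # U* V^T, since V = Vh^dagger
    # one may prefer to work with the Hermitian part (W + W^dagger)/2
    return np.eye(m*n) - inverse_realign(W1, m, n).T

# For the two-qubit maximally entangled state this gives
# Tr(W rho) = 1 - ||R(rho)|| = -1 < 0.
\end{verbatim}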
As for the $PPT$ criterion, we can also have a universal construction for
EWs as follows:
\vskip0.2cm \noindent \textbf{Theorem 2:} \emph{For any density matrix $\rho$
, we can associate with it another EW defined as
\begin{equation}
W=Id-(VU^{\dagger})^{T_{A}}, \label{witness2}
\end{equation}
where $U,V$ are unitary matrices that yield the SVD of $\rho^{T_{A}}$, i.e.,
$\rho^{T_{A}}=U\Sigma V^{\dagger}.$}
\vskip0.2cm \noindent\textbf{Proof:} For any separable density matrix $
\rho_{sep}$, we have $||\rho_{sep}^{T_{A}}||=1$ due to positivity of $
\rho_{sep}^{T_{A}}$. Similar to the procedure in the proof of Theorem 1, we
have
\begin{eqnarray}
Tr(W\rho _{sep})&=&1-Tr((VU^{\dagger})^{T_{A}}\rho_{sep}) \notag \\
&\geq& 1-|Tr((VU^{\dagger})^{T_{A} }\rho_{sep})| \notag \\
&=&1-|Tr(VU^{\dagger}\rho_{sep}^{T_{A}})| \notag \\
&\geq& 1-||\rho_{sep}^{T_{A}}||=0.
\end{eqnarray}
Thus $W$ is an EW.
\rule{1ex}{1ex}
\vskip0.2cm Whenever we have an entangled state $\rho$ which can be detected
by the $PPT$ criterion, i.e., $||\rho^{T_{A}}||>1$, it can also be detected
by the EW in Theorem 2, since $Tr(W\rho)=1-Tr((VU^{\dagger})^{T_{A}}
\rho)=1-||\rho^{T_{A}}||<0$.
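Theorem 2 can be transcribed analogously. The sketch below is illustrative only (partial_transpose_A is our helper name): it takes the SVD of $\rho ^{T_{A}}$ and returns $W=Id-(VU^{\dagger })^{T_{A}}$, so that $Tr(W\rho )=1-||\rho ^{T_{A}}||$.
\begin{verbatim}
# Illustrative sketch of the Theorem 2 witness W = Id - (V U^dagger)^{T_A}.
import numpy as np

def partial_transpose_A(X, m, n):
    # transpose the first (m-dimensional) factor of an (m*n) x (m*n) matrix
    return X.reshape(m, n, m, n).transpose(2, 1, 0, 3).reshape(m*n, m*n)

def witness_from_ppt(rho, m, n):
    U, s, Vh = np.linalg.svd(partial_transpose_A(rho, m, n))
    V = Vh.conj().T
    return np.eye(m*n) - partial_transpose_A(V @ U.conj().T, m, n)

# Tr(W rho) = 1 - ||rho^{T_A}||, which is negative exactly when rho
# violates the PPT criterion (e.g. -1 for the two-qubit Bell state).
\end{verbatim}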
\subsection{Optimization of EWs and universal construction of PMs}
\label{sec2.5}
The universal EWs that we have constructed from Theorem 1 and Theorem 2 are
no weaker than the $PPT$ criterion and the realignment criterion. We
anticipate that better tests should exist. Motivated by the idea developed
in Ref. \cite{lkch00}, we now derive a better witness $W^{\prime }$ from
Theorems 1 and 2:
\begin{equation}
W^{\prime }=W-\epsilon Id, \label{optimize}
\end{equation}
where $\epsilon =\min Tr(W\rho _{A}\otimes \rho _{B})$ for all possible pure
states $\rho _{A}$ and $\rho _{B}.$ We observe that $Tr(W^{\prime }\rho
_{sep})=Tr(W\rho _{sep})-\epsilon \geq 0$ since $Tr(W\sum_{i}p_{i}\rho
_{i}^{A}\otimes \rho _{i}^{B})=\sum_{i}p_{i}Tr(W\rho _{i}^{A}\otimes \rho
_{i}^{B})\geq \sum_{i}p_{i}\epsilon =\epsilon $. Thus $W^{\prime }$ is a
reasonable EW.
Let us see how the minimum of $Tr(W\rho _{A}\otimes \rho _{B})$ for a given $
W$ is obtained. Partitioning $W$ to be an $m\times m$ block matrix $W_{i,j}$
$(i,j=1,\ldots m)$ with block size $n\times n$, we have
\begin{eqnarray}
\epsilon &=&\min Tr(W\rho _{A}\otimes \rho _{B}) \notag \\
&=&\min Tr\left\{ \left[ \sum\nolimits_{i,j}W_{i,j}(\rho _{A})_{ji}\right]
\rho _{B}\right\} .
\end{eqnarray}
Here $W_{i,j}$ is an $n\times n$ matrix while $(\rho _{A})_{ji}$ is a single
entry of $\rho _{A}$. Using a known result of matrix analysis:
\begin{equation*}
\lambda _{\min }(G)=\min Tr(U^{\dagger }GU)=\min Tr(GUU^{\dagger }),
\end{equation*}
where $U^{\dagger }U=1$ and $\lambda _{\min }(G)$ is the minimum eigenvalue
of $G$, we deduce that $\epsilon $ is in fact the minimum eigenvalue of $
G=\sum\nolimits_{i,j}W_{i,j}(\rho _{A})_{ji}$. Thus the problem changes to
that of finding $\lambda _{\min }(G)$ for all possible $\rho _{A}$, which
can be done with common numerical optimization software. This is much
simpler when the first subsystem has low dimension, such as $m=2,3$.
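For $m=2$ this minimization is easy to carry out numerically. The sketch below is illustrative only: it uses a coarse grid over pure states of the first subsystem rather than a proper optimizer, assumes $W=W^{\dagger }$, and estimates $\epsilon $ as the minimum over $\rho _{A}$ of $\lambda _{\min }\big(\sum\nolimits_{i,j}W_{i,j}(\rho _{A})_{ji}\big)$, as noted above.
\begin{verbatim}
# Illustrative sketch: estimate epsilon = min Tr(W rho_A (x) rho_B) for m = 2
# via a coarse grid over pure states |psi_A> = (cos(t/2), e^{i ph} sin(t/2)).
import numpy as np

def epsilon_estimate(W, n, steps=200):
    best = np.inf
    for t in np.linspace(0.0, np.pi, steps):
        for ph in np.linspace(0.0, 2*np.pi, steps, endpoint=False):
            psi = np.array([np.cos(t/2), np.exp(1j*ph)*np.sin(t/2)])
            rhoA = np.outer(psi, psi.conj())
            # G = sum_{i,j} W_{i,j} (rho_A)_{ji}, with W_{i,j} the n x n blocks of W
            G = sum(rhoA[j, i] * W[i*n:(i+1)*n, j*n:(j+1)*n]
                    for i in range(2) for j in range(2))
            best = min(best, np.linalg.eigvalsh(G).min())
    return best   # the improved witness is then W' = W - best * Id
\end{verbatim}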
However, the ability of the optimized $W^{\prime }$ to detect entanglement
is still not ideal, and some stronger entanglement tests than the EWs should
be found. The Jamio\l kowski isomorphism Eq. (\ref{isomor}) between a PM and
an EW serves as a good candidate. A positive map can in fact detect more
entanglement than its corresponding witness operator \cite{3hreview}.
According to Eq. (\ref{isomor}) the positive map $\Lambda $ corresponding to
a given $W$ is of the form
\begin{equation}
\Lambda (|i\rangle \langle j|)=\langle i|W|j\rangle , \label{pomap}
\end{equation}
where $\{|i\rangle \}_{i=1}^{m}$ is an orthogonal basis for $\mathcal{H}^{A}$
.
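In numerical practice the map test can be applied as follows. The sketch below is illustrative only: it builds the blocks $\Lambda (|i\rangle \langle j|)=\langle i|W|j\rangle $ from a Hermitian witness $W$ and applies $Id\otimes \Lambda $ to a state whose second factor is assumed to have dimension $m$; the overall normalization of $\Lambda $ is irrelevant for the sign of the smallest eigenvalue.
\begin{verbatim}
# Illustrative sketch: build Lambda(|i><j|) = <i|W|j> from the n x n blocks
# of W and apply (Id (x) Lambda) to a state on C^dA (x) C^m.
import numpy as np

def map_blocks(W, m, n):
    return [[W[i*n:(i+1)*n, j*n:(j+1)*n] for j in range(m)] for i in range(m)]

def id_tensor_map(rho, blocks, dA, m, n):
    R4 = rho.reshape(dA, m, dA, m)
    out = np.zeros((dA, n, dA, n), dtype=complex)
    for a in range(dA):
        for b in range(dA):
            Y = R4[a, :, b, :]       # the (a,b) "block" of rho, an m x m matrix
            out[a, :, b, :] = sum(Y[i, j] * blocks[i][j]
                                  for i in range(m) for j in range(m))
    return out.reshape(dA*n, dA*n)

# rho is flagged as entangled if (Id (x) Lambda) rho has a negative eigenvalue:
#   np.linalg.eigvalsh(id_tensor_map(rho, map_blocks(W, m, n), dA, m, n)).min() < 0
\end{verbatim}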
\vskip0.2cm Using Theorems 1 and 2 we now have a universal EW construction
to detect entanglement for a given quantum state, which is not weaker than
the two criteria for separability. We can also detect all other possible
entangled states (in particular the same class of states) using the
constructed EW. Moreover, we can use the optimizing procedure of Eq. (\ref
{optimize}) to get a better witness, while the best detection can be
obtained with the positive map $\Lambda $ given by Eq. (\ref{pomap}). We say
that our constructions are universal in the sense that we can, in principle,
find an EW and a PM to detect entanglement in most of the entangled states
(especially the bound entangled states). If we have some prior knowledge of
a quantum state, we can even detect its entanglement with just a single EW
and a PM from the constructions of Theorems 1 and 2, without having to apply
full quantum state tomography \cite{tomog1}. The $2\times 2$ Werner state
\cite{VoPRA01} is such an example, as shown in the following section. To our
surprise, we find that even for some separable states we can still obtain
good EWs to detect many entangled states. Additional examples are given
below for bound entangled states to depict this character. The constructed
EWs, their optimized versions and corresponding PM are always more powerful
than the $PPT$ criterion and the realignment criterion.
\section{Application of EWs and PMs to entangled states}
\label{sec3} In this section we give several typical examples to display the
power of our constructions to distinguish entangled states, in particular,
the bound entangled states from separable states. Example 1 is a simple
$2\times 2$ Werner state, which shows that we can still obtain good EWs
and PMs even from a separable state. Since the $PPT$ criterion is strong
enough to detect all non-$PPT$ entangled states we shall just examine PPTES
in the following two examples. We have tested most of the bipartite BESs in
the literature \cite{UPB,hPLA97,bruss} and found that the optimized EW $
W^{\prime }$ and corresponding PM are always much more powerful than the
realignment criterion.
\vskip0.2cm \noindent \emph{Example 1:} $2\times 2$ Werner state.
Consider the family of two-dimensional Werner states \cite{werner89},
\begin{equation}
\rho =\frac{1}{6}\big(\left( 2-f\right) \mathbb{I}_{4}+(2f-1)V\big),
\label{werner}
\end{equation}
where $-1\leq f\leq 1$, $V(\alpha \otimes \beta )=\beta \otimes \alpha $,
the operator $V$ has the representation $V=\sum_{i,j=0}^{1}|ij\rangle
\langle ji|$, and $\rho $ is inseparable if and only if $-1\leq f<0$. As is
well known, the entanglement in a $2\times 2$ Werner state can be detected
completely using the \emph{PPT} criterion and the realignment criteria. It
is straightforward to verify that we can obtain a single EW
\begin{equation}
W=
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
,
\end{equation}
from Theorem 1 whenever $-1\leq f<1/2$. Since $Tr(W\rho )=f$, which is negative
precisely when $\rho $ is inseparable, this EW and the corresponding PM detect
all of the entangled $2\times 2$ Werner states. This is a surprising result: a
witness operator associated with a separable state ($0\leq f<1/2$) still detects
the entanglement of the whole Werner family.
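This example is easy to check numerically. A short sketch (NumPy; the product basis ordering $|00\rangle ,|01\rangle ,|10\rangle ,|11\rangle $ is assumed, in which the witness above coincides with the swap operator $V$) confirms that $Tr(W\rho )=f$:
\begin{verbatim}
import numpy as np

# Swap operator V = sum_{i,j} |ij><ji| in the basis |00>, |01>, |10>, |11>;
# it coincides with the witness W of Example 1.
V = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        V[2*i + j, 2*j + i] = 1.0
W = V

def werner(f):
    # rho = (1/6) [ (2 - f) I_4 + (2f - 1) V ]
    return ((2 - f) * np.eye(4) + (2*f - 1) * V) / 6.0

for f in [-1.0, -0.5, 0.0, 0.25, 0.75]:
    print(f, np.trace(W @ werner(f)))   # prints Tr(W rho) = f
\end{verbatim}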
\vskip0.2cm \noindent \emph{Example 2}: $3\times3$ BES constructed from
unextendible product bases
\noindent In Ref. \cite{UPB}, Bennett {\it et al.} introduced a $3\times 3$
inseparable BES from the following unextendible product basis:
\begin{align}
{|\psi _{0}\rangle }& ={\frac{1}{\sqrt{2}}}{|0\rangle }({|0\rangle }-{|1\rangle }),\ \ \ \ {|\psi _{1}\rangle }={\frac{1}{\sqrt{2}}}({|0\rangle }-{|1\rangle }){|2\rangle }, \notag \\
{|\psi _{2}\rangle }& ={\frac{1}{\sqrt{2}}}{|2\rangle }({|1\rangle }-{|2\rangle }),\ \ \ \ {|\psi _{3}\rangle }={\frac{1}{\sqrt{2}}}({|1\rangle }-{|2\rangle }){|0\rangle }, \notag \\
{|\psi _{4}\rangle }& ={\frac{1}{3}}({|0\rangle }+{|1\rangle }+{|2\rangle })({|0\rangle }+{|1\rangle }+{|2\rangle }),
\end{align}
from which a bound entangled density matrix can be constructed as
\begin{equation}
\rho =\frac{1}{4}(Id-\sum_{i=0}^{4}{|\psi _{i}\rangle \langle \psi _{i}|}).
\end{equation}
Consider a mixture of this state and the maximally mixed state, $\rho
_{p}=p\rho +(1-p)Id/9$. A direct calculation with the realignment criterion
shows that $\rho _{p}$ is still entangled for $p>88.97\%$. This is much stronger
than the optimal witness given in Refs. \cite{terhal00,guhne}, which detects
entanglement in $\rho _{p}$ only for $p>94.88\%$. Using Theorem 1 for $\rho $,
we obtain an EW (not optimized) and further a positive map that detect
entanglement in $\rho _{p}$ when $p>88.41\%$. A surprising result is that the
EW generated from $\rho _{p}$ via Theorem 1 still detects entanglement in $\rho
$ for all values of $0<p\leq 1$. Numerical calculation shows that the best PM is
the one corresponding to the EW generated from $\rho _{p}$ with $p\doteq 0.3$,
which is very likely a separable state (we have no operational separability
criterion to guarantee this, but we can estimate that it is so). The best PM
detects entanglement in $\rho _{p}$ when $p>87.44\%$, which is, to our
knowledge, the strongest test up to now.
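For readers who wish to reproduce these thresholds, the following sketch (NumPy; the Kronecker ordering with the first qutrit as the first factor is assumed, and the realignment norm is computed as the sum of singular values of the realigned matrix) builds $\rho _{p}$ and scans $p$ for the point where $||\mathcal{R}(\rho _{p})||$ exceeds $1$, which should reproduce a threshold near the quoted $88.97\%$:
\begin{verbatim}
import numpy as np

e = np.eye(3)                       # qutrit basis |0>, |1>, |2>

def ket(a, b):                      # product vector a (x) b
    return np.kron(a, b)

psi = [ket(e[0], (e[0] - e[1]) / np.sqrt(2)),
       ket((e[0] - e[1]) / np.sqrt(2), e[2]),
       ket(e[2], (e[1] - e[2]) / np.sqrt(2)),
       ket((e[1] - e[2]) / np.sqrt(2), e[0]),
       ket(e.sum(0), e.sum(0)) / 3.0]

rho = (np.eye(9) - sum(np.outer(v, v) for v in psi)) / 4.0

def realignment_norm(rho, m, n):
    r = rho.reshape(m, n, m, n)                    # entries <ij| rho |kl>
    R = r.transpose(0, 2, 1, 3).reshape(m*m, n*n)  # R_{(ik),(jl)} = rho_{ij,kl}
    return np.linalg.svd(R, compute_uv=False).sum()

for p in np.linspace(0.85, 0.95, 11):
    rho_p = p * rho + (1 - p) * np.eye(9) / 9.0
    print(round(p, 2), realignment_norm(rho_p, 3, 3) > 1)
\end{verbatim}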
\vskip0.2cm \noindent \emph{Example 3}: $3\times3$ chessboard BES
\noindent Bru{\ss } and Peres constructed a seven-parameter family of PPTES
in~Ref. \cite{bruss}. Using the above-mentioned constructions we perform a
systematic test for these states. Choosing a state with a relatively large $
||\mathcal{R}(\rho )||=1.164$ in the family and constructing a PM from the EW
corresponding to $\rho $, we can detect about $9.48\%$ of $10^{4}$ randomly
chosen density matrices $\sigma $ satisfying $\sigma =\sigma ^{T_{A}}$. Half of
these detected BESs cannot be detected by the realignment criterion, and it
should also be noted that we only used one PM here. For every state in this
family (including those that cannot be detected by the realignment criterion)
we can almost always obtain an EW and a PM which detect some BESs in this
family, many of which, by direct numerical calculation, cannot be recognized by
the realignment criterion.
\vskip0.2cm Actually, we have a third choice in constructing an EW with
\begin{equation*}
W=\epsilon Id-\rho ,
\end{equation*}
where $\epsilon =\max Tr(\rho (\rho _{A}\otimes \rho _{B}))$ for a given
density matrix $\rho $, as first proposed in Ref. \cite{lbck00}. We can
calculate $\epsilon $ following the same procedure as for optimizing the EW,
and find that $\epsilon $ is in fact the maximum over all possible $\rho _{A}$
of the largest eigenvalue of $G=\sum\nolimits_{i,j}\rho _{i,j}(\rho _{A})_{ji}$,
where the $\rho _{i,j}$ are the $n\times n$ blocks of $\rho $.
The optimal witness given in Ref. \cite{guhne} for states constructed from
unextendible product bases is in fact equivalent to this method.
Since some of these PMs can detect BES, they cannot be decomposed into the form
$\Lambda _{1}+T\circ \Lambda _{2}$, where $\Lambda _{1}$ and $\Lambda _{2}$ are
completely positive maps and $T$ is the standard transposition \cite{3hPLA223}.
Thus our construction gives a universal method to find indecomposable positive
linear maps in any dimension.
\section{Two indecomposable positive maps}
\label{sec4}
Here we also show that two indecomposable positive maps, which were first
given in Ref. \cite{tang86}, can systematically detect the $2\times 4$ BES
given by Horodecki \cite{hPLA97}. One map $\Lambda :M_{4}\longrightarrow
M_{2}$ is defined as $\Lambda :[a_{ij}]_{i,j=1}^{4}\longrightarrow $
\begin{equation}
\left(
\begin{array}{c|c}
\begin{array}{r}
(1-\varepsilon )a_{11}+a_{22} \\
+2a_{33}+a_{44}
\end{array}
&
\begin{array}{r}
-2a_{23}-2a_{34} \\
+ua_{31}-a_{12}
\end{array}
\\ \hline
\begin{array}{r}
-2a_{32}-2a_{43} \\
+ua_{13}-a_{21}
\end{array}
&
\begin{array}{r}
u^{2}a_{11}-ua_{14}+2a_{22} \\
-ua_{41}+a_{44}
\end{array}
\end{array}
\right) , \label{4x2pm}
\end{equation}
where $0<u<1$ and $0<\varepsilon \leq u^{2}/6$. The density matrix $\rho $
of the $2\times 4$ BES in Ref. \cite{hPLA97} is real and symmetric, and has
the form:
\begin{equation}
\rho ={\frac{1}{7b+1}}\left[
\begin{array}{cccccccc}
b & 0 & 0 & 0 & 0 & b & 0 & 0 \\
0 & b & 0 & 0 & 0 & 0 & b & 0 \\
0 & 0 & b & 0 & 0 & 0 & 0 & b \\
0 & 0 & 0 & b & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & {\frac{1+b}{2}} & 0 & 0 & {\frac{\sqrt{1-b^{2}}}{2}} \\
b & 0 & 0 & 0 & 0 & b & 0 & 0 \\
0 & b & 0 & 0 & 0 & 0 & b & 0 \\
0 & 0 & b & 0 & {\frac{\sqrt{1-b^{2}}}{2}} & 0 & 0 & {\frac{1+b}{2}}
\end{array}
\right] ,
\end{equation}
where $0<b<1$. Assuming $\varepsilon =u^{2}/6$, we see that $(Id_{A}\otimes
\Lambda )\rho $ can detect all the entanglement in $\rho $ for $0<b<1$, as
shown in Fig. \ref{fig1}, where we have plotted $f=\min \{0,\lambda _{\min }
\left[ (Id_{A}\otimes \Lambda )\rho \right] \}$, and $\lambda _{\min }$
means the minimum eigenvalue. It is straightforward to verify that the dual
map $\Lambda ^{\prime }:M_{2}\longrightarrow M_{4}$ to $\Lambda $ in Ref.
\cite{tang86} can also detect $\rho $ by the action of $(\Lambda ^{\prime
}\otimes Id_{B})\rho $. If we take $\rho _{p}=p\rho +(1-p)Id/8$, the PM $
\Lambda $ detects entanglement in $\rho _{p}$ for $p>99.26\%$ with $u=0.849$
and $b=0.218$, which is a stronger test than the PM constructed from an
optimal EW in Ref. \cite{lkch00}, which gives $p>99.65\%$. For any $2\times N$
or $4\times N$ system, the maps $\Lambda $ and $\Lambda ^{\prime }$ are
expected to give a very strong test for recognizing BES.
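The detection reported above can be reproduced with a short numerical sketch (NumPy; we assume the basis ordering $|0\rangle |0\rangle ,\ldots ,|1\rangle |3\rangle $ for the $2\times 4$ system, and we transcribe Eq.~(\ref{4x2pm}) with $0$-based indices, so that \texttt{a[0,0]} corresponds to $a_{11}$ in the text):
\begin{verbatim}
import numpy as np

def Lam(a, u, eps):
    # The positive map of Eq. (4x2pm) applied to a 4x4 matrix a.
    return np.array([
        [(1 - eps)*a[0,0] + a[1,1] + 2*a[2,2] + a[3,3],
         -2*a[1,2] - 2*a[2,3] + u*a[2,0] - a[0,1]],
        [-2*a[2,1] - 2*a[3,2] + u*a[0,2] - a[1,0],
         u**2*a[0,0] - u*a[0,3] + 2*a[1,1] - u*a[3,0] + a[3,3]]])

def horodecki(b):
    # The real symmetric 2x4 bound entangled state given above.
    rho = np.zeros((8, 8))
    for i in [0, 1, 2, 3, 5, 6]:
        rho[i, i] = b                          # diagonal 'b' entries
    for i in [0, 1, 2]:
        rho[i, i + 5] = rho[i + 5, i] = b      # off-diagonal 'b' entries
    rho[4, 4] = rho[7, 7] = (1 + b) / 2
    rho[4, 7] = rho[7, 4] = np.sqrt(1 - b**2) / 2
    return rho / (7*b + 1)

def f_detect(b, u):
    eps, rho, out = u**2 / 6, horodecki(b), np.zeros((4, 4))
    for k in range(2):              # apply Id_A (x) Lambda block by block
        for l in range(2):
            out[2*k:2*k+2, 2*l:2*l+2] = Lam(rho[4*k:4*k+4, 4*l:4*l+4], u, eps)
    return min(0.0, np.linalg.eigvalsh(out)[0])

print(f_detect(0.218, 0.849))       # a negative value signals entanglement
\end{verbatim}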
\begin{figure}
\caption{Detection of a Horodecki $2\times 4$ bound entangled state with the
indecomposable positive map $\Lambda $ of Eq.~(\protect\ref{4x2pm}).}
\label{fig1}
\end{figure}
\section{Conclusion}
\label{sec5}
Summarizing, we have presented a universal construction for entanglement
witnesses and positive maps that can detect entanglement systematically and
operationally. They are stronger than both the $PPT$ and the realignment
criteria, and provide a powerful method to detect entanglement since
entanglement witnesses are physical observables and may be measured locally
\cite{guhne}. The construction gives us a new subtle way to enhance
significantly our ability to detect entanglement beyond previously known
methods. Even when associated with some separable states, the construction
gives EWs and PMs to identify entanglement. If we have some prior knowledge
of a quantum state, we can even detect its entanglement with just a single
EW and a PM. In addition, our construction gives a systematic way to obtain
positive but non-CP maps, which may also be of interest to the mathematics
community. Moreover, we find that two types of positive maps can detect
completely a $2\times 4$ bound entangled state and promise to give a very
strong test for any $2\times N$ and $4\times N$ systems. We hope that our
method can shed some light on the final solution of the separability
problem, as well as motivate new interdisciplinary studies connected with
mathematics.
\end{document}
\begin{document}
\title{Dirichlet product of the arithmetic derivative with a multiplicative arithmetic function}
\begin{abstract}
We define the derivative of an integer to be the map sending every prime to 1 and satisfying the Leibniz rule. The aim of this article is to calculate the Dirichlet product of
this map with a multiplicative arithmetic function.
\end{abstract}
\section{Introduction}
Barbeau \cite{Barb} defined the arithmetic derivative as the
function $\delta\;:\; \mathbb N \rightarrow \mathbb N$ determined by the rules:
\begin{enumerate}
\item $\delta(p) = 1$ for any prime $p \in \mathbb P := \{2,3,5,7,\ldots, p_{i},\ldots \}$.
\item $\delta(ab) = \delta(a)b + a\delta(b)$ for any $a,b \in \mathbb N$ (the Leibniz rule).
\end{enumerate}
Let $n$ be a positive integer. If $n=\prod_{i=1}^{s}p_{i}^{\alpha_{i}}$ is the prime factorization of $n$, then
the arithmetic derivative
of $n$ is given by (see, e.g., \cite{Barb, Ufna}):
\begin{equation}
\delta(n)=n\sum \limits_{i=1}^s\frac{\alpha_i}{p_i}
=n\sum \limits_{p^{\alpha}||n}\frac{\alpha}{p}
\end{equation}
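For instance, for $n=12=2^{2}\cdot 3$ this formula gives
\begin{equation*}
\delta(12)=12\Big(\frac{2}{2}+\frac{1}{3}\Big)=16 .
\end{equation*}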
A brief summary of the history of the arithmetic derivative and its generalizations
to other number sets can be found, e.g., in \cite{Stay}.
\\
First of all, to cultivate analytic number theory one must acquire considerable
skill in operating with arithmetic functions. We begin with a few elementary considerations.
\begin{definition}[arithmetic function]
An \textbf{arithmetic function} is a function $f:\mathbb{N}\longrightarrow \mathbb{C}$ with
domain of definition the set of natural numbers $\mathbb{N}$ and range a subset of the set of complex numbers $\mathbb{C}$.
\end{definition}
\begin{definition}[multiplicative function]
A function $f$ is called a \textbf{multiplicative function} if and
only if
\begin{equation}\label{eq:1}
f(nm)=f(n)f(m)
\end{equation}
for every pair of coprime integers $n$, $m$. If (\ref{eq:1}) is satisfied for every pair of integers $n$ and $m$, not necessarily coprime, then the function $f$ is
called \textbf{completely multiplicative}.
\end{definition}
Clearly, if $f$ is a multiplicative function, then
$f(n)=f(p_1^{\alpha _1})\ldots f(p_s^{\alpha _s})$
for any positive integer $n$ such that
$n = p_1^{\alpha _1}\ldots p_s^{\alpha _s}$, and if $f$ is completely multiplicative, then $f(n)=f(p_1)^{\alpha _1}\ldots f(p_s)^{\alpha _s}$.
\begin{example}
Let $n\in\mathbb{N}^*$. The following classical arithmetic functions are used in this article:
\begin{enumerate}
\item \textbf{Identity function }: The function defined by $Id(n)=n$ for all positive integer n.
\item \textbf{The Euler phi function}
:\;$\varphi(n)=\sum \limits_{\underset{gcd(k,n)=1}{k=1}}^n 1$.
\item \textbf{The number of distinct prime divisors of n} :\;$\omega(n)=\sum \limits_{p|n}1$ .
\item \textbf{The M\"obius function}
:\;$\mu(n)=\left\{ \begin{array}{cl}
1 & \textrm{if }\;\;n=1\\
0 & \textrm{if }\;\; p^2|n\ \textrm{ for some prime } p\\
(-1)^{\omega(n)} & \textrm{otherwise}\\
\end{array}\right.$
\item \textbf{The number of positive divisors of $n$}, defined by $\tau(n)=\sum \limits_{d|n}1$.
\item \textbf{The sum of divisors of $n$}, defined by $\sigma(n)=\sum \limits_{d|n}d$.
\end{enumerate}
\end{example}
Now, if $f,g:\mathbb{N}\longrightarrow \mathbb{C}$ are two arithmetic functions from the positive integers to the complex numbers, the Dirichlet convolution $f*g$ is a new arithmetic function defined by:
\begin{equation}
(f*g)(n)=\sum \limits_{d|n}f(d)g(\frac{n}{d})=
\sum \limits_{ab=n}f(a)g(b)
\end{equation}
where the sum extends over all positive divisors $d$ of $n$, or equivalently over all distinct pairs $(a, b)$ of positive integers whose product is $n$.\\
In particular, we have $(f*g)(1)=f(1)g(1)$, $(f*g)(p)=f(1)g(p)+f(p)g(1)$ for any prime $p$, and for any prime power $p^m$ we have:
\begin{equation}
(f*g)(p^m)=\sum \limits_{j=0}^m f(p^j)g(p^{m-j})
\end{equation}
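These definitions are straightforward to implement. The following short Python sketch (using sympy's \texttt{divisors} and \texttt{factorint}; the function names are ours) computes the arithmetic derivative and the Dirichlet convolution directly from the definitions:
\begin{verbatim}
from sympy import divisors, factorint

def delta(n):
    # Arithmetic derivative: delta(n) = n * sum over p^a || n of a/p
    return sum(n * a // p for p, a in factorint(n).items())

def dirichlet(f, g, n):
    # (f * g)(n) = sum over d | n of f(d) g(n/d)
    return sum(f(d) * g(n // d) for d in divisors(n))

Id = lambda n: n
one = lambda n: 1

print(delta(12))                 # 16
print(dirichlet(one, Id, 12))    # sigma(12) = 28
print(dirichlet(Id, delta, 12))  # (Id * delta)(12) = 48
\end{verbatim}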
This product occurs naturally in the study of Dirichlet series such as the Riemann zeta function. It describes the multiplication of two Dirichlet series in terms of their coefficients:
\begin{equation}\label{eq:5}
\bigg(\sum \limits_{n\geq 1}\frac{\big(f*g\big)(n)}{n^s}\bigg)=\bigg(\sum \limits_{n\geq 1}\frac{f(n)}{n^s} \bigg)
\bigg( \sum \limits_{n\geq 1}\frac{g(n)}{n^s} \bigg)
\end{equation}
where the Riemann zeta function is defined by $$\zeta(s)= \sum \limits_{n\geq 1} \frac{1}{n^s}.$$
These functions are widely studied in the literature (see, e.g., \cite{book1, book2, book3}).\\
Before passing to the main result we need the following property: if $f$ and $g$ are multiplicative functions, then $f*g$ is multiplicative.
\section{Main results }
In this section we give a new result on the Dirichlet product of the arithmetic derivative with a multiplicative arithmetic function $f$, and we give a relation between $\tau$ and the arithmetic derivative.
\begin{theorem}
Let $f$ be a multiplicative function and let $n$ and $m$ be two positive integers such that $\gcd(n,m)=1$. Then we have:
\begin{equation}
(f*\delta)(nm)=\big(Id*f\big)(n).\big(f*\delta\big)(m)+\big(Id*f\big)(m).\big(f*\delta\big)(n)
\end{equation}
\end{theorem}
\begin{proof}
Let $n$ and $m$ be two positive integers such that $\gcd(n,m)=1$, and let $f$ be a multiplicative arithmetic function. Then we have:
\begin{align*}
(f*\delta)(nm) &=\sum \limits_{d|nm} f\big(\frac{nm}{d}\big)\delta(d)
=\sum \limits_{\underset{d_2|m}{d_1|n}} f(\frac{nm}{d_1d_2})\delta(d_1d_2)
=\sum \limits_{\underset{d_2|m}{d_1|n}} f(\frac{n}{d_1}) f(\frac{m}{d_2}) \bigg(d_1\delta(d_2)+d_2\delta(d_1)\bigg)
\\
&=\sum \limits_{\underset{d_2|m}{d_1|n}}\bigg( d_1f(\frac{n}{d_1}) f(\frac{m}{d_2})\delta(d_2)
+d_2f(\frac{m}{d_2})f(\frac{n}{d_1})\delta(d_1)
\bigg)
\\
& =\bigg(\sum \limits_{d_1|n} d_1f(\frac{n}{d_1})\bigg)\bigg(\sum \limits_{d_2|m} f(\frac{m}{d_2})\delta(d_2)\bigg)
+
\bigg(\sum \limits_{d_2|m} d_2f(\frac{m}{d_2})\bigg)\bigg(\sum \limits_{d_1|n} f(\frac{n}{d_1})\delta(d_1)\bigg)
\\
&=\big(Id*f\big)(n).\big(f*\delta\big)(m)+\big(Id*f\big)(m).\big(f*\delta\big)(n)
\end{align*}
\end{proof}
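As a quick numerical sanity check of the theorem (a sketch only; we repeat the small helper functions of the previous snippet and take $f=\varphi$, the Euler phi function, with the coprime pair $n=8$, $m=9$):
\begin{verbatim}
from sympy import divisors, factorint, totient

def delta(n):
    return sum(n * a // p for p, a in factorint(n).items())

def dirichlet(f, g, n):
    return sum(f(d) * g(n // d) for d in divisors(n))

Id, phi = (lambda n: n), totient
n, m = 8, 9                                    # gcd(8, 9) = 1
lhs = dirichlet(phi, delta, n * m)
rhs = (dirichlet(Id, phi, n) * dirichlet(phi, delta, m)
       + dirichlet(Id, phi, m) * dirichlet(phi, delta, n))
print(lhs, rhs)                                # both equal 538
\end{verbatim}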
\begin{lemma}
Let $f$ be a multiplicative function. For any natural number $n$, if $n=\prod_{i=1}^{s}p_{i}^{\alpha_{i}}$ is the prime factorization of $n$, then:
\begin{equation}
\big(f*\delta\big)(n)=\big(Id*f\big)(n)\sum \limits_{i=1}^s
\frac{\big(f*\delta\big)(p^{\alpha_i}_i)}{\big(Id*f\big)(p^{\alpha_i}_i)}
\end{equation}
\end{lemma}
\begin{proof}
Let $n$ be a positive integer such that $n = p_1^{\alpha _1}\ldots p_s^{\alpha _s}$ and let $f$ be a multiplicative arithmetic function. Then:
\begin{align*}
\big(f*\delta\big)(n) &=\big(f*\delta\big)(p_1^{\alpha _1}\ldots p_s^{\alpha _s})
\\
&=
\big(Id*f\big)(p_2^{\alpha _2}\ldots p_s^{\alpha _s}).\big(f*\delta\big)(p^{\alpha_1}_1)
+
\big(Id*f\big)(p^{\alpha_1}_1).\big(f*\delta\big)(p_2^{\alpha _2}\ldots p_s^{\alpha _s})
\\
& =\big(Id*f\big)(n).
\frac{\big(f*\delta\big)(p^{\alpha_1}_1)}{\big(Id*f\big)(p_1^{\alpha _1})}
+
\big(Id*f\big)(p^{\alpha_1}_1).
\bigg[
\big(Id*f\big)(p_3^{\alpha _3}\ldots p_s^{\alpha _s}).\big(f*\delta\big)(p^{\alpha_2}_2)
+\\
&+
\big(Id*f\big)(p^{\alpha_2}_2).\big(f*\delta\big)(p_3^{\alpha _3}\ldots p_s^{\alpha _s})
\bigg]
\\
&=\big(Id*f\big)(n)\frac{\big(f*\delta\big)(p^{\alpha_1}_1)}{\big(Id*f\big)(p_1^{\alpha _1})}
+
\big(Id*f\big)(n)\frac{\big(f*\delta\big)(p^{\alpha_2}_2)}{\big(Id*f\big)(p_2^{\alpha _2})}
+\\
&+
\big(Id*f\big)(p^{\alpha_1}_1)\big(Id*f\big)(p^{\alpha_2}_2)\big(f*\delta\big)(p_3^{\alpha _3}\ldots p_s^{\alpha _s})
\\
\vdots\\
&=\big(Id*f\big)(n)\frac{\big(f*\delta\big)(p^{\alpha_1}_1)}{\big(Id*f\big)(p_1^{\alpha _1})}
+
\big(Id*f\big)(n)\frac{\big(f*\delta\big)(p^{\alpha_2}_2)}{\big(Id*f\big)(p_2^{\alpha _2})}
+ \ldots
+
\big(Id*f\big)(n)\frac{\big(f*\delta\big)(p^{\alpha_s}_s)}{\big(Id*f\big)(p_s^{\alpha _s})}
\\
&=\big(Id*f\big)(n)
\bigg[
\frac{\big(f*\delta\big)(p^{\alpha_1}_1)}{\big(Id*f\big)(p_1^{\alpha _1})}+\ldots+ \frac{\big(f*\delta\big)(p^{\alpha_s}_s)}{\big(Id*f\big)(p_s^{\alpha _s})}
\bigg]\\
&=\big(Id*f\big)(n)\sum \limits_{i=1}^s
\frac{\big(f*\delta\big)(p^{\alpha_i}_i)}{\big(Id*f\big)(p^{\alpha_i}_i)}
\end{align*}
\end{proof}
We now give another proof, by induction on $s$, that if $n=\prod_{i=1}^{s}p_{i}^{\alpha_{i}}$ then $(f*\delta)(n)=\big(Id*f\big)(n)\sum \limits_{i=1}^s
\frac{\big(f*\delta\big)(p^{\alpha_i}_i)}{\big(Id*f\big)(p^{\alpha_i}_i)}$.
\begin{proof}
Consider $n\in\mathbb{N}$ and write $n=\prod_{i=1}^{s}p_{i}^{\alpha_{i}}$ where all the $p_i$ are distinct.\\
For $s=1$, it is clear that $(f*\delta)(n)=\big(Id*f\big)(n)\sum \limits_{i=1}^1
\frac{\big(f*\delta\big)(p^{\alpha_i}_i)}{\big(Id*f\big)(p^{\alpha_i}_i)}
=\big(Id*f\big)(p^{\alpha_1})
\frac{\big(f*\delta\big)(p^{\alpha_1}_1)}{\big(Id*f\big)(p^{\alpha_1}_1)}=\big(f*\delta\big)(p^{\alpha_1}_1).
$\\
Assume now that the formula holds for $n=\prod_{i=1}^{s}p_{i}^{\alpha_{i}}$; then we have:
\begin{align*}
\big(f*\delta\big)(n.p^{\alpha_{s+1}}_{s+1})
&=\big(Id*f\big)(p^{\alpha_{s+1}}_{s+1}).\big(f*\delta\big)(n)+\big(Id*f\big)(n).\big(f*\delta\big)(p^{\alpha_{s+1}}_{s+1})\\
&=\big(Id*f\big)(p^{\alpha_{s+1}}_{s+1}).
\big(Id*f\big)(n)\sum \limits_{i=1}^s
\frac{\big(f*\delta\big)(p^{\alpha_i}_i)}{\big(Id*f\big)(p^{\alpha_i}_i)}
+
\big(Id*f\big)(p^{\alpha_{s+1}}_{s+1}).
\big(Id*f\big)(n)
\frac{\big(f*\delta\big)(p^{\alpha_{s+1}}_{s+1})}{\big(Id*f\big)(p^{\alpha_{s+1}}_{s+1})}\\
&=\big(Id*f\big)(n.p^{\alpha_{s+1}}_{s+1})\sum \limits_{i=1}^s
\frac{\big(f*\delta\big)(p^{\alpha_i}_i)}{\big(Id*f\big)(p^{\alpha_i}_i)}
+
\big(Id*f\big)(n.p^{\alpha_{s+1}}_{s+1})
\frac{\big(f*\delta\big)(p^{\alpha_{s+1}}_{s+1})}{\big(Id*f\big)(p^{\alpha_{s+1}}_{s+1})}\\
&=\big(Id*f\big)(n.p^{\alpha_{s+1}}_{s+1})\bigg[\sum \limits_{i=1}^s
\frac{\big(f*\delta\big)(p^{\alpha_i}_i)}{\big(Id*f\big)(p^{\alpha_i}_i)}
+
\frac{\big(f*\delta\big)(p^{\alpha_{s+1}}_{s+1})}{\big(Id*f\big)(p^{\alpha_{s+1}}_{s+1})}\bigg]\\
&=\big(Id*f\big)(n.p^{\alpha_{s+1}}_{s+1})\sum \limits_{i=1}^{s+1}
\frac{\big(f*\delta\big)(p^{\alpha_i}_i)}{\big(Id*f\big)(p^{\alpha_i}_i)}
\end{align*}
\end{proof}
\begin{proposition}\label{prop_6}
Let $\delta$ be the arithmetic derivative and $\tau$ the number-of-divisors function. Then we have:
\begin{equation}
\big(Id*\delta\big)(n)=\frac{1}{2}\tau(n)\delta(n)
\end{equation}
\end{proposition}
\begin{proof}
We have $(Id*Id)(n)=\sum\limits_{d|n} \frac{n}{d}d
=n\sum \limits_{d|n} 1 =n\tau(n)$,\\
and
$(Id*\delta)(p^{\alpha})=
\sum \limits_{j=1}^{\alpha} \delta(p^j)Id(p^{\alpha-j})
=
\sum \limits_{j=1}^{\alpha} jp^{j -1}p^{\alpha-j}
=\frac{1}{2}\alpha(\alpha+1)p^{\alpha-1}
$.\\
Then, by the Lemma, for every positive integer $n$ such that $n = p_1^{\alpha _1}\ldots p_s^{\alpha _s}$, we have:
\begin{align*}
\big(Id*\delta\big)(n)&=\big(Id*Id\big)(n)\sum \limits_{i=1}^s
\frac{\big(Id*\delta\big)(p^{\alpha_i}_i)}{\big(Id*Id\big)(p^{\alpha_i}_i)}\\
&=n\tau(n)\sum \limits_{i=1}^s
\frac{\frac{1}{2}\alpha_i(\alpha_{i}+1)p^{\alpha_i-1}_i}{p^{\alpha_i}_i\tau(p^{\alpha_i}_i)}\\
&=n\tau(n)\sum \limits_{i=1}^s
\frac{\frac{1}{2}\alpha_i(\alpha_{i}+1)p^{\alpha_i-1}_i}{p^{\alpha_i}_i(\alpha_i+1)}\\
&=\frac{1}{2} n\tau(n)\sum \limits_{i=1}^s \frac{\alpha_i}{p_i}=
\frac{1}{2}\tau(n)\delta(n)
\end{align*}
\end{proof}
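As a check, for $n=12$ we have $\tau(12)=6$ and $\delta(12)=16$, and indeed
\begin{equation*}
\big(Id*\delta\big)(12)=\sum \limits_{d|12}\frac{12}{d}\,\delta(d)
=6+4+12+10+16=48=\frac{1}{2}\,\tau(12)\,\delta(12) .
\end{equation*}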
So, by Proposition \ref{prop_6} and equality (\ref{eq:5}), we have the following relation between the arithmetic derivative and the function $\tau$:
\begin{equation}
2\zeta(s-1)\sum \limits_{n\geq 1} \frac{\delta(n)}{n^s}
=
\sum \limits_{n\geq 1} \frac{\delta(n)\tau(n)}{n^s}
\end{equation}
Let us define a new arithmetic function, called the \textbf{En-naoui} function, by:
\begin{equation}
\Phi_\varphi(n)=n\sum \limits_{p|n}\bigg(1-\frac{1}{p}\bigg)
\end{equation}
Then we have the following identity relating eight arithmetic functions:
\begin{equation}
\big(\mu*\delta\big)(n)=
\varphi(n)\bigg(
\delta(n)-2\omega(n)+B(n)+\frac{\Phi_\varphi(n)}{n}+
\frac{\big(B*Id\big)(n)}{\sigma(n)}
\bigg) .
\end{equation}
where $B$ is the arithmetic function defined by
$B(n)=\sum \limits_{p^{\alpha}||n}\alpha p$.
This equality will be proved in a forthcoming article devoted to this new function.
\end{document}
\begin{document}
{\LARGE \bf
\begin{center}
Oscillations and pattern formation in\\ a slow-fast prey-predator system
\end{center}
}
\vspace*{1cm}
\centerline{\bf Pranali Roy Chowdhury$^1$, Sergei Petrovskii$^{2,3}$, Malay Banerjee$^1$}
\centerline{ $^1$ Indian Institute of Technology Kanpur, Kanpur - 208016, India}
\centerline{ $^2$ School of Mathematics \& Actuarial Science, University of Leicester, Leicester LE1 7RH, UK}
\centerline{ $^3$ Peoples Friendship University of Russia (RUDN University), 6 Miklukho-Maklaya St,}
\centerline{ Moscow 117198, Russian Federation}
\noindent
\begin{abstract}
We consider the properties of a slow-fast prey-predator system in time and space. We first argue that the simplicity of the prey-predator system is apparent rather than real and that many of its hidden properties have been poorly studied or overlooked altogether. We further focus on the case where, in the slow-fast system, the prey growth is affected by a weak Allee effect. We first consider this system in the non-spatial case and study it comprehensively using a variety of mathematical techniques. In particular, we show that the interplay between the Allee effect and the existence of multiple timescales may lead to a regime shift where small-amplitude oscillations in the population abundances abruptly change to large-amplitude oscillations.
We then consider the spatially explicit slow-fast prey-predator system and reveal the effect of different time scales on the pattern formation. We show that a decrease in the timescale ratio may lead to another regime shift where the spatiotemporal pattern becomes spatially correlated, leading to large-amplitude oscillations in the spatially averaged population densities and potential species extinction.
\noindent
{\bf Keywords:} Slow-fast time scale; relaxation oscillation; canard cycle; spatial pattern; regime shift
\end{abstract}
\section{Introduction}
\label{intro}
In the natural environment, interactions in a population community are usually quite complex \cite{0b,0a,0c}. This ubiquitous complexity has several different sources, such as the complexity of the food web, the nonlinearity of species feedbacks, the multiplicity of temporal and spatial scales, etc.
It is extremely difficult, in fact hardly possible at all to capture the entire complexity of ecological interactions in a single mathematical model or framework. Instead, the usual means of analysis tend to focus on a particular aspect or feature of the ecological system. For instance, while the food web theory endeavours to link the properties of a realistic population community to the complexity of the corresponding food web, in particular by analysing the web connectivity and revealing the bottlenecks \cite{Allesina04,Polis96}, a lot of attention focuses on the properties of simpler `building blocks' from which the web is made \cite{Jordan02}. A variety of blocks of intermediate complexity have been considered, a few examples are given by the three-species competition system \cite{Hofbauer98}, intraguild predation \cite{Holt97} and a three-species resource-consumer-predator food chain \cite{Hastings91}.
Arguably, the most basic block is the prey-predator system.
It has been a focus of research for almost a century \cite{Rosenzweig63,Volterra26} and there is a tendency to think of it as fully studied, textbook material \cite{Gyll}. However, this is far from true. The apparent mathematical simplicity of the prey-predator system (usually associated with the classical Rosenzweig-MacArthur model as a paradigm \cite{Rosenzweig63,0c}) is superficial rather than real, and there has recently been a surge of interest and an increase in the mathematical modelling literature dealing with its `hidden', overlooked properties, with more than a hundred papers published in the first quarter of 2021 alone\footnote{Data are taken from the Web of Science}. New properties readily arise as soon as one introduces relatively small (i.e.~preserving the defining structure of the model), biologically motivated changes into the paradigmatic system, e.g.~adding explicit heterogeneous space \cite{Zou20}, changing the specialist predator to a generalist one \cite{Rodrigues20,Sen20}, changing the properties of the predator's functional response \cite{Arditi89,Huang14}, considering different types of density dependence in the population growth or mortality \cite{Edwards99,Jiang21}, or taking into account the fact that the intraspecific dynamics of prey and predator often occur on very different time scales \cite{Kooi18,Poggiale20}.
While one of the generic properties of a prey-predator system is its intrinsic capability to produce sustained population cycles (due to the emergence of a stable limit cycle in a certain parameter range \cite{May72}), with many fundamental implications for the population dynamics, another equally important property is its capacity to exhibit pattern formation, in particular due to the Turing instability \cite{Segel72,Turing52}. The latter has been a focus of many groundbreaking studies that linked the patterns observed in various biological and ecological systems to the dissipative instability in a prey-predator (or, more generically, activator-inhibitor) system, e.g.~see \cite{Gurney98,Hastings97,Jansen95,McCauley90,Mimura78,Murray68,Murray75,Murray76,Murray81,Murray82,Murray88,Rosenzweig71}, also \cite{Murray89} for an exhaustive review of earlier research.
Other studies also discovered and considered in detail a possibility of non-Turing pattern formation, in particular due to the interplay between the Hopf bifurcation and diffusion \cite{Pascual93,Petrovskii01,Petrovskii99,Petrovskii02b,Scheffer97,Sherratt95} as well as pattern formation resulting from the Turing-Hopf bifurcation \cite{Baurmann07}.
Interestingly, in spite of the large number of modelling papers concerned with the prey-predator system, there are still a number of issues poorly investigated. One such issue is the interaction between different types of density dependence and the existence of different time scales, either in a spatial or nonspatial system.
Indeed, one context where the prey-predator framework has been particularly successful in providing new insight into the mechanisms of ecological interactions is the large-magnitude, nearly periodical fluctuations in population size that have been observed in many species and ecosystems. In such a case, typically, a large outbreak in population abundance is followed by a population decline, often to a small population size or density. For instance, the fluctuations in populations of \textit{snowshoe hares} and \textit{Canadian lynx} in the Canadian Boreal forest were modelled with the help of a tri-trophic food web model \cite{Stenseth97}, where a population explosion of the lynx is observed every 9-11 years, followed by a rapid decline in the population of hares. Similarly, in lake plankton ecosystems a seasonal abundance of zooplankton (particularly \textit{Daphnia}) is frequently observed, which completely grazes down the algal biomass, thus resulting in clear-water phases in lakes \cite{Scheffer97}. The exact causes of these fluctuations are still a debatable issue among researchers. However, one of the common traits observed in the above examples is that the bottom level of a multi-trophic system, the basal prey, has faster growth and decay compared to its consumers. Another example is the \textit{spruce budworm}, whose population can increase several hundredfold within a span of a few years whereas the leaves of adult trees do not grow at a comparable rate; the resulting outbreaks destroyed the \textit{balsam} forests of eastern North America \cite{Ludwig78}. To capture this type of rapid growth/decay of the interacting species, researchers introduced mathematical models with slow-fast time scales. A small time-scale parameter is introduced either in the prey growth equation or in the predator growth equation, depending upon the species under consideration.
In a rather general case, the prey-predator interaction in a nonspatial system can be modeled by a system of coupled ordinary differential equations
\begin{equation}
\begin{aligned}
u' &= uf(u)-vg(u,v),\\
v' &= evg(u,v) - m(v) v,
\end{aligned}
\end{equation}
where $u$ and $v$ denote the prey and predator densities, respectively, at time $t$. (A spatially explicit approach includes diffusion terms, hence turning the ODEs into PDEs, see Section \ref{sec:spatial}.) Here both species are assumed to be distributed homogeneously within their habitat. The function $f(u)$ represents the per capita growth rate of prey, $g(u,v)$ describes the prey-predator interaction, $m(v)$ describes the per capita mortality rate of predators in the absence of prey, and $e$ is known as the conversion efficiency. Assuming that the prey population grows much faster than its predator, a time-scale parameter $0<\varepsilon\ll1$ is introduced, which transforms the original model into the following slow-fast model
\begin{equation}
\begin{aligned}
\varepsilon u' &= uf(u)-vg(u,v),\\
v' &= evg(u,v) - m(v) v.
\end{aligned}
\end{equation}
This type of slow-fast prey-predator model was first studied, to the best of our knowledge, by Rinaldi and Muratori \cite{Rinaldi92}, where cyclic coexistence along a slow-fast limit cycle was discussed. They also analyzed cyclic fluctuations in the population densities of a three-species model in a slow-fast setting with one and two time-scale parameters.
In the mathematical literature, slow-fast systems are considered as singularly perturbed ordinary differential equations, where $\varepsilon$ is the singular perturbation parameter. The standard stability and bifurcation analysis performed for prey-predator models is not enough to analyze the complete dynamics exhibited by slow-fast systems, and many mathematical techniques were developed to study this class of systems. In the late 1970s, Neil Fenichel \cite{Fenichel79} introduced a geometric approach based on invariant manifold theory to study singularly perturbed coupled systems, known as Geometric Singular Perturbation Theory (GSPT). Using this theory the dynamics of the full slow-fast system is studied by reducing it to sub-systems of lower dimension and thereby studying the complete dynamics of the subsystems. The application of Fenichel's theory in the context of biology was well explained by G. Hek \cite{Hek10}. But this theory fails to approximate the dynamics near non-hyperbolic equilibrium points, where the system encounters a singularity. Later, in 2001, Krupa and Szmolyan \cite{Krupa01A,Krupa01B} extended Fenichel's theory to overcome the difficulty around non-hyperbolic points using the blow-up technique. This was based on the pioneering work of Dumortier \cite{Dumortier78,Dumortier93,Dumortier96}. The main idea is to blow up the non-hyperbolic equilibrium points of the system to the unit sphere $S^3 \subset \mathbb{R}^4$, so that the trajectories of the blown-up system are mapped on and around the sphere. In case the blown-up space still has non-hyperbolic points, a sequence of blow-up maps can be used to desingularize the system.
Before the development of mathematical tools to study this class of systems, the Dutch physicist Van der Pol \cite{Dumortier96,VanderPol26} observed large-amplitude periodic oscillations consisting of slow and fast dynamics, which he named relaxation oscillations. These are periodic solutions consisting of slow curvilinear motion and sudden fast jumps. These types of slow-fast limit cycles were later observed in many chemical and biological systems \cite{Kooi18,Muratori89,Rinaldi92,Wang19,Wang19AML}. Another type of periodic solution observed in singularly perturbed systems is the canard solution. This was first investigated by E. Beno\^{i}t {\it et al.} \cite{Benoit81} while studying the Van der Pol oscillator. Dumortier and Roussarie, in their seminal work \cite{Dumortier96}, analyzed this phenomenon through a geometric approach, using the blow-up technique and with the help of invariant manifold theory. A canard is a solution of a singularly perturbed system which follows an attracting slow manifold, passes close to the bifurcation point of the critical manifold, and then follows a repelling slow manifold for $\mathcal{O}(1)$ time. It was observed that a Hopf bifurcation is necessary for the existence of a canard solution \cite{Krupa01B}. The fast transition from small stable limit cycles appearing through the Hopf bifurcation to large-amplitude relaxation oscillations via a sequence of canard cycles within an exponentially small range of the parameter is known as a canard explosion. In real-world ecosystems this phenomenon can be related to a sudden outbreak or decline of a particular species \cite{Ludwig78,Scheffer00,Scheffer97,Siteur16,Stenseth97}.
Over the last few years, several works have been devoted to prey-predator systems with slow-fast time scales. In \cite{Muratori89,Muratori92,Rinaldi92}, the authors analyzed the periodic bursting of high- and low-frequency oscillations in interacting population models with two and three trophic levels and a slow-fast time scale. A novel 1-fast-3-slow dynamical system has been developed in \cite{Piltz17} to consider the adaptive change of diet of a predator population that switches its feeding between two prey populations. The classical Rosenzweig–MacArthur (RM) model and the Mass Balance chemostat model in the slow-fast setting are studied in \cite{Kooi18}, where the authors have shown that the RM model exhibits a canard explosion in the oscillatory regime of the parameter space whereas the latter model does not exhibit such a phenomenon. They used the asymptotic expansion technique to determine the canard explosion point. In \cite{Poggiale20}, the authors used the blow-up technique to obtain an analytical expression of the bifurcation thresholds for which a maximal canard solution occurs in the RM model. The existence and uniqueness of the relaxation oscillation cycle have been studied for the Leslie-Gower model with the help of the entry-exit function and GSPT in \cite{Wang19AML}. The rich and complex slow-fast dynamics of the predator-prey model with Beddington-DeAngelis functional response is studied in \cite{Saha21}. To the best of our knowledge, there is no work so far in the literature considering the Allee effect in a slow-fast prey-predator model. In this paper, we first consider the slow-fast dynamics of the classical Rosenzweig–MacArthur (RM) model with a multiplicative weak Allee effect in the prey growth equation, using GSPT and the blow-up technique.
In population ecology, the Allee effect is a widely observed phenomenon, especially at low population density, which describes a positive relationship between population density and the per capita population growth rate. The main causes of the Allee effect include difficulties in mate finding, inbreeding depression, cooperative defense mechanisms, etc. Here we are mostly concerned with the demographic Allee effect, which can be classified as either a strong or a weak Allee effect. For the strong Allee effect, the per capita growth rate is negative below some critical population density (the Allee threshold), and the growth rate becomes positive above that threshold, whereas in the case of the weak Allee effect the per capita growth rate is small but remains positive even at low population densities. With the introduction of the time-scale parameter, however, the per capita growth rate becomes much higher even at low density; thus the species can recover from the endemic level, and extinction is prevented. In this paper, we incorporate the weak Allee effect in the prey's growth in order to capture the true essence of the slow-fast cycle.
The main objective of this paper is to provide a detailed slow-fast analysis of the temporal model based on the various mathematical approaches discussed above, and to numerically investigate the corresponding spatially extended slow-fast model. In prey-predator models, the oscillatory dynamics of the system arises from the Hopf bifurcation, but in the slow-fast setting, apart from the Hopf-bifurcating limit cycle, the system exhibits various other interesting periodic solutions, namely canard cycles and relaxation oscillations. Here we explore these solutions analytically and numerically with the help of the sophisticated slow-fast techniques discussed above.
Over the last few decades, significant work has been done to study the mechanisms of spatial dispersal of species with the help of reaction-diffusion systems. In this regard, the study of the invasion of exotic species has emerged as a topic of particular interest for many theoretical and field ecologists. Biological invasion is a complex phenomenon that starts with the local introduction of an exotic species; once it becomes established in a particular region, it starts spreading and occupying new areas \cite{Lewis16}. The study of the rate and pattern of spread is of primary importance, as invasions have a huge environmental impact and can also be used as a control measure for other species. The invasion of exotic species takes place via the propagation of continuous wave fronts, as well as via irregular movement of separate population patches \cite{Lewis00}. In \cite{Morozov06,Petrovskii02a}, the authors have shown that patchy invasion is possible in deterministic models as a result of the Allee effect. To the best of our knowledge, there exists hardly any work in the literature providing such a complete analysis of the slow-fast prey-predator model, or addressing the effect of an explicit time-scale parameter on pattern formation. Here, we first perform exhaustive numerical simulations to examine the pattern of spread and the patchy invasion of the species, and then we examine how the invasion of the species is affected by the time-scale parameter.
This paper is divided into two parts: in the first part we provide a mathematical analysis of the temporal slow-fast model, and the second part contains exhaustive numerical simulations investigating the effect of a varying time scale on biological invasion. In section 2, we introduce the non-dimensionalised temporal model and perform a standard stability analysis. Then in section 3 we introduce the slow-fast system. In section 4 we discuss GSPT and the blow-up technique for a detailed mathematical analysis of slow-fast systems; the existence and uniqueness of the relaxation oscillation is studied there, followed by the phenomenon of canard explosion. In section 5, we consider the corresponding slow-fast spatio-temporal model to examine how the spread of invasive species is affected by the time-scale parameter. Finally, we draw the conclusions of our work in section 6.
\section{Temporal Model and its linear stability analysis}\label{Section:2}
We consider the classical Rosenzweig-MacArthur prey-predator model with the multiplicative weak Allee effect in prey growth \cite{Courchamp08,Murray89,Sen11}. Let $u$ and $v$ be the prey and its specialist predator densities, respectively, at time $t$. In appropriately chosen dimensionless variables and parameter (see \cite{Morozov06} for details), the model is given by the following equations:
\begin{subequations} \label{eq:temp_weak}
\begin{eqnarray}
\dfrac{du}{dt} &=& f(u,v):=\gamma u(1-u)(u+\beta)-v\Big(\frac{u}{1+\alpha u}\Big),\\
\dfrac{dv}{dt} &=& g(u,v):=v\Big(\frac{u}{1+\alpha u}-\delta \Big).
\end{eqnarray}
\end{subequations}
Here and below, the sign ``:='' means ``is defined''.
We focus on the case where the growth rate of the prey population is damped by the weak Allee effect, so that $0<\beta<1$. For $\beta<0$, the Allee effect becomes strong (in this case, the prey population has another, unstable, equilibrium at $u=-\beta$); for $\beta\ge 1$, the Allee effect is absent \cite{Lewis93}. The per capita growth rate $f(u,v)/u$ is increasing for $0<u<\dfrac{1-\beta}{2}$ and decreasing for $\dfrac{1-\beta}{2}<u<1$.
The predator is a specialist predator as it does not have any alternative food source apart from $u$. The prey-dependent functional response is taken to be of Holling type II \cite{Holling65}. The system contains four positive dimensionless parameters, where $\beta$ quantifies the weak Allee effect, $\gamma$ is the coefficient
proportional to the maximum per capita growth rate, called the characteristic growth rate \cite{Jankovic14}. The parameter $\alpha$ characterizes the inverse saturation level of the functional response and $\delta$ is the natural mortality rate of the predator. Throughout this paper we will consider $\delta$ as the bifurcation parameter to determine the stability conditions of the coexisting steady state for the model (\ref{eq:temp_weak}).
\noindent Depending on the species traits, the prey population often grows much faster than its predator; one well-known example is given by the hare and the lynx, where hares reproduce much faster than lynx \cite{Stenseth97}. This motivated researchers to introduce a small time-scale parameter $\varepsilon,\ 0<\varepsilon<1$ in the basic model (\ref{eq:temp_weak}). The parameter $\varepsilon$ is interpreted as the ratio between the linear death rate of the predator and the linear growth rate of the prey \cite{Hek10,Rinaldi92}, and the assumption $\varepsilon < 1$ implies that one generation of predator can encounter several generations of prey \cite{Holling65,Kuehn15}. Therefore, considering the difference in the time scales, the slow-fast version of the dimensionless model (\ref{eq:temp_weak}) can be written as
\begin{subequations} \label{eq:temp_weak_fast}
\begin{eqnarray}
\dfrac{du}{dt} &=& f(u,v)=\gamma u(1-u)(u+\beta)-\frac{uv}{1+\alpha u},\\
\dfrac{dv}{dt} &=&\varepsilon g(u,v)=\varepsilon v \Big(\frac{u}{1+\alpha u}-\delta \Big),
\end{eqnarray}
\end{subequations}
with initial conditions $u(0)\ge 0,\ v(0)\ge 0$. Since the prey population grows faster compared to the predator, $u$ and $v$ are referred to as the fast and slow variables, respectively, and time $t$ is called the fast time. The equilibrium points of the system are independent of $\varepsilon$; thus systems (\ref{eq:temp_weak}) and (\ref{eq:temp_weak_fast}) have the same equilibrium points. The extinction equilibrium point and the prey-only equilibrium point of system (\ref{eq:temp_weak}) (as well as of (\ref{eq:temp_weak_fast})) are given by $E_0=(0,0)$ and $E_1=(1,0)$ respectively. The interior equilibrium point $E_*(u_*,v_*)$ of the system is the point where the non-trivial prey nullcline intersects the non-trivial predator nullcline in the interior of the positive quadrant, and we have
$$u_* = \dfrac{\delta}{1-\alpha \delta}, \ \ v_* = \gamma (1-u_*)(u_*+\beta)(1+\alpha u_*).$$
$E_*$ is feasible if the parametric restriction $\delta (\alpha + 1)<1$ holds. With the help of linear stability analysis, we find $E_0$ is always a saddle point. $E_1$ is stable for $\delta > \dfrac{1}{1+\alpha}$ and saddle point for $\delta < \dfrac{1}{1+\alpha}$. $E_*$ bifurcates from predator free equilibrium point $E_1$ through transcritical bifurcation at $\delta = \delta_T\equiv \dfrac{1}{1+\alpha}.$\\ Now evaluating the Jacobian matrix for the system (\ref{eq:temp_weak_fast}) at the interior equilibrium point $E_*(u_*,v_*)$ we have
\begin{equation*}\label{eq:Jacobian matrix}
J_*=
\begin{pmatrix}
\gamma(u_*(2-3u_*-2\beta)+\beta)-\dfrac{v_*}{(1+u_*\alpha)^2}&-\dfrac{u_*}{1+u_*\alpha}\\\dfrac{\varepsilon v_*}{(1+u_*\alpha)^2}&\varepsilon\Big(\dfrac{u_*}{1+u_*\alpha}-\delta\Big)
\end{pmatrix}.
\end{equation*}
From the feasibility condition of $E_*$ we always have Det$(J_*)>0$. The interior equilibrium point is stable if Tr$(J_*)<0$, and it loses its stability via super-critical Hopf bifurcation when Tr$(J_*)=0$ and is unstable for Tr$(J_*)>0$.
The Hopf threshold $\delta=\delta_H$ can be obtained by solving Tr$(J_*)=0$ which on simplification gives
\begin{equation}\label{eq:Hopf_threshold}
\delta_H = \dfrac{1+\alpha^2\beta-\sqrt{1+\alpha+\alpha^2-\alpha \beta +\alpha^2\beta+\alpha^2 \beta^2}}{\alpha(-1-\alpha+\alpha \beta +\alpha^2\beta)}.
\end{equation}
Transversality condition for Hopf bifurcation is satisfied at $\delta=\delta_H.$
The coexistence steady state $E_*(u_*,v_*)$ is stable for $\delta>\delta_H$ and it destabilizes for $\delta<\delta_H$, surrounded by a stable limit cycle. The bifurcation diagrams of the system (\ref{eq:temp_weak}) with $\delta$ as bifurcation parameter and for two different values of $\beta$ are plotted in Fig.~\ref{fig:bifurcation_eps_1}.
It is readily seen that, in the case $\beta<1/\alpha$, with an increase in the Allee parameter $\beta$ (hence making the Allee effect weaker) the Hopf bifurcation point shifts to the left. Correspondingly, the steady species coexistence occurs for a broader parameter range of $\delta$ (see Fig.~\ref{fig:bifurcation_eps_1}). Furthermore, with the increase of $\beta$ the size of the limit cycle is reduced. Thus, an increase in $\beta$ not only broadens the range of stable coexistence but also reduces the amplitude of the stable oscillatory coexistence.
\begin{figure}
\caption{The bifurcation diagram of system (\ref{eq:temp_weak}) with $\delta$ as the bifurcation parameter, for two different values of $\beta$.}
\label{fig:bifurcation_eps_1}
\end{figure}
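The Hopf threshold (\ref{eq:Hopf_threshold}) can also be located numerically. A minimal Python sketch (SciPy; the parameter values used in the figures are assumed) finds the zero of Tr$(J_*)$ as a function of $\delta$, using the expressions for $u_*$ and $v_*$ given above:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

alpha, beta, gamma = 0.5, 0.22, 3.0

def trace_J(delta):
    # Coexistence equilibrium and the trace of the Jacobian at E_*
    u = delta / (1 - alpha * delta)
    v = gamma * (1 - u) * (u + beta) * (1 + alpha * u)
    return gamma * (u * (2 - 3*u - 2*beta) + beta) - v / (1 + alpha * u)**2

# Root of Tr(J_*) = 0 inside the feasibility range 0 < delta < 1/(1+alpha)
delta_H = brentq(trace_J, 0.05, 1/(1 + alpha) - 1e-6)
print(delta_H)        # approximately 0.377 for these parameter values
\end{verbatim}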
\noindent Interestingly, the linear stability results remain unaltered in the presence of the slow-fast time scale, as the analytical conditions are independent of $\varepsilon$. However, linear stability analysis fails to capture the complete dynamics of the slow-fast system (\ref{eq:temp_weak_fast}) for $0<\varepsilon \ll 1$. The system (\ref{eq:temp_weak_fast}) exhibits catastrophic transitions which cannot be captured by standard stability analysis; rather, the model may sometimes overestimate ecological resilience \cite{Siteur16}. Therefore, to study the complete dynamics of the system we take the help of geometric singular perturbation theory and the blow-up technique, which will be discussed in the next sections.
\section{Slow-fast system}
In this section, we shall describe the dynamics of the slow-fast system (\ref{eq:temp_weak_fast}) when $0<\varepsilon\ll1$. To understand the dynamics of the system (\ref{eq:temp_weak_fast}) for sufficiently small $\varepsilon\ (>0)$ we need to consider the behaviour of two subsystems corresponding to (\ref{eq:temp_weak_fast}), which can be obtained for $\varepsilon=0$. The system in its singular limit, $\varepsilon=0$ is obtained as follows
\begin{subequations} \label{eq:layer-system}
\begin{eqnarray}
\dfrac{du}{dt} &=& f(u,v)= \gamma u(1-u)(u+\beta)-\frac{uv}{1+\alpha u},\\
\dfrac{dv}{dt} &=&0.
\end{eqnarray}
\end{subequations}
The above system is known as the fast subsystem or layer system corresponding to the slow-fast system (\ref{eq:temp_weak_fast}). The fast flow has constant predator density, determined by the initial condition $v(0)=c$, and is obtained by integrating the differential equation
\begin{equation} \label{eq:fast-flow}
\begin{aligned}
\dfrac{du}{dt} = \gamma u(1-u)(u+\beta)-\frac{uc}{1+\alpha u},
\end{aligned}
\end{equation}
with initial condition $u(0)>0$. The direction of the fast flow depends on the choice of initial conditions $u(0)$, $v(0)$ and other parameter values. Green horizontal lines are solution trajectories with appropriate direction as shown in Fig.~\ref{fig:dynamics_slow_fast}a.
\noindent Now writing system (\ref{eq:temp_weak_fast}) in terms of the slow time $\tau:=\varepsilon t$, we get the equivalent system in terms of slow time derivatives,
\begin{subequations} \label{eq:temp_weak_slow}
\begin{eqnarray}
\varepsilon \dfrac{du}{d\tau} &=& f(u,v) = \gamma u(1-u)(u+\beta)-\frac{uv}{1+\alpha u},\\
\dfrac{dv}{d\tau} &=& g(u,v) =v\Big(\frac{u}{1+\alpha u}-\delta \Big).
\end{eqnarray}
\end{subequations}
Substituting $\varepsilon = 0$ into the above system, we find the following differential-algebraic equation (DAE),
\begin{subequations} \label{eq:DAE}
\begin{eqnarray}
0&=& f(u,v) = \gamma u(1-u)(u+\beta)- \frac{uv}{1+\alpha u},\\
\dfrac{dv}{d\tau} &=& g(u,v) = v\Big(\frac{u}{1+\alpha u}-\delta \Big),
\end{eqnarray}
\end{subequations}
which is known as the slow subsystem corresponding to the slow-fast system (\ref{eq:temp_weak_slow}). The solution of the above system is constrained to the set $\{(u,v) \in \mathbb{R}^2_+:f(u,v)=0\}$, which is known as the critical manifold $C_0$. This set is in one-to-one correspondence with the set of equilibria of the system (\ref{eq:fast-flow}). The critical manifold consists of two different manifolds,
\begin{equation*}
\begin{aligned}
C_0^0 &=& &\{(u,v)\in \mathbb{R}^2_+ : u=0, v\ge0\},\\
C_0^1 &=& &\Big\{(u,v) \in \mathbb{R}^2_+ :v =q(u):= \gamma (1-u)(u+\beta)(1+\alpha u),\ 0<u<1,\ v > 0\Big\},
\end{aligned}
\end{equation*}
such that $C_0 = C^0_0 \cup C^1_0$ where $C_0^0$ is the positive $v$-axis and $C_0^1$ is a portion of a parabola as shown in Fig.~\ref{fig:dynamics_slow_fast}a, marked with black colour. The slow flow on the critical manifold is given by
\begin{equation} \label{eq:slow-flow}
\begin{aligned}
\dfrac{du}{d\tau} &=& \dfrac{g(u,q(u))}{\dot{q}(u)},
\end{aligned}
\end{equation}
where the dot denotes differentiation with respect to $u$.
\noindent The solution of the system (\ref{eq:temp_weak_fast}) for sufficiently small $\varepsilon >0$ cannot be approximated by its limiting solution at $\varepsilon=0$; therefore, $\varepsilon=0$ is the singular limit of the system (\ref{eq:temp_weak_fast}). The solution of the full system is obtained by combining the solutions of the two subsystems in the singular limit, using one or the other depending on the region of the phase space.
\noindent For $\alpha=0.5$, $\beta=0.22$, $\gamma=3$, and $\delta=0.3$ the coexistence steady state is unstable for $0<\varepsilon\leq1$ and is surrounded by a stable limit cycle. Interestingly, the size and shape of the stable limit cycle change with the variation of $\varepsilon$, as shown in Fig.~\ref{fig:dynamics_slow_fast}(b). The size and shape of the closed-curve attractor (blue) obtained for $\varepsilon=0.001$ are quite different from those of the stable limit cycle (magenta) obtained for $\varepsilon=1$. This shape does not change if we decrease $\varepsilon$ further, keeping the other parameters fixed. This observation is based upon numerical simulation, and we need a detailed analysis to understand the possible shape of the trajectories in the singular limit $\varepsilon\rightarrow0$. For $\varepsilon=0.001$, the closed-curve attractor (blue) consists of two horizontal segments on which the flow is fast, and a curvilinear segment and a vertical segment where the flow is slow. They are obtained by concatenating the solutions of the layer and the reduced subsystem, respectively. The two horizontal segments of the attractor (blue) are the perturbed trajectories corresponding to the layer system; they signify the fast growth or decay of the prey species while the predator density remains essentially unaltered. The vertical portion close to the $v$-axis and the curvilinear part are close to the critical manifolds $C^0_0$ and $C^1_0$, respectively. The change in shape and size of the attractor does not depend solely upon the magnitude of $\varepsilon$; rather, it is determined by the magnitudes of the parameters involved in the reaction kinetics together with the time-scale parameter.
\noindent Now we fix $\varepsilon=0.01$ and the other parameters as mentioned above, except $\delta$. A small variation in $\delta$ just below $\delta_H$ results in a rapid change in the size and shape of the periodic attractor (see Fig.~\ref{fig:dynamics_slow_fast}(c)). A small limit cycle (cyan) appears for $\delta=0.3762$, known as a canard cycle without head. This cycle encounters a change in curvature when $\delta=0.376165$, and the resulting cycle is known as a canard cycle with head (blue). Decreasing $\delta$ further to $0.36$, the system settles down to a closed cycle known as a relaxation oscillation. A further decrease in $\delta$ does not alter the size and shape of the closed attractor, and the trajectories converge to the stable relaxation oscillation cycle even for $\varepsilon$ sufficiently small. In the next section, we will derive the analytical conditions for the existence of canard cycles and relaxation oscillations. The analytical results will help us to identify the domains in the parametric plane where we can find these different types of closed-curve attractors.
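The slow-fast attractors described above are easy to reproduce numerically. A minimal Python sketch (SciPy's stiff integrator; the parameter values quoted above and an arbitrary initial condition are assumed) integrates system (\ref{eq:temp_weak_fast}) over a long time window so that the trajectory settles on the closed attractor:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta, eps = 0.5, 0.22, 3.0, 0.3, 0.01

def rhs(t, y):
    u, v = y
    du = gamma*u*(1 - u)*(u + beta) - u*v/(1 + alpha*u)  # fast prey equation
    dv = eps*v*(u/(1 + alpha*u) - delta)                 # slow predator equation
    return [du, dv]

# Stiff solver and a long time horizon; the tail of the trajectory traces
# out the relaxation-oscillation cycle for these parameter values.
sol = solve_ivp(rhs, (0, 2000), [0.5, 0.3], method="LSODA",
                rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])
\end{verbatim}
Repeating the run with $\delta$ slightly below $\delta_H$ (e.g. the values quoted above) should reproduce the canard cycles, although their extreme sensitivity to $\delta$ and $\varepsilon$ requires tight tolerances.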
\begin{figure}
\caption{(a) Dynamics of the slow-fast system (\ref{eq:temp_weak_fast}) in its singular limit; (b) closed-curve attractors of system (\ref{eq:temp_weak_fast}) for different values of $\varepsilon$; (c) canard cycles and relaxation oscillation for different values of $\delta$.}
\label{fig:dynamics_slow_fast}
\end{figure}
\section{Analysis of slow-fast system}
The critical manifold $C_0^1$ can be divided into two parts: one part consists of attractors of the fast subsystem and the other part is repelling in nature. The attracting and repelling parts of the manifold are separated by a non-degenerate fold point $P$. The fold point $P(u_f,v_f)$ is characterized by the following conditions \cite{Krupa01A}.
$$ \frac{\partial{f}}{\partial u}(u_f,v_f)=0,\ \frac{\partial f}{\partial v}(u_f,v_f)\ne 0,\ \frac{\partial^2f}{\partial u^2}(u_f,v_f)\ne 0,\ \text{and}\ g(u_f,v_f)\ne 0.$$
The components of the fold point are given by $$u_f = \dfrac{(\alpha-\alpha \beta -1)+(1+\alpha+\alpha^2-\alpha \beta+\alpha^2\beta+\alpha^2 \beta^2)^{\frac{1}{2}}}{3\alpha}, $$ $$ v_f = \gamma (1-u_f)(u_f+\beta)(1+\alpha u_f), $$
which is the maximum point on the critical manifold. The fold point divides the critical manifold into normally hyperbolic attracting ($C^{1,a}_0$) and repelling ($C^{1,r}_0$) submanifolds given by
\begin{equation*}
\begin{aligned}
C_0^{1,a} &=\Big\{(u,v) \in \mathbb{R}^2_+ :v =q(u), u_f<u\le1\Big\} \\
C_0^{1,r} &=\Big\{(u,v) \in \mathbb{R}^2_+ :v =q(u), 0\le u<u_f\Big\}.
\end{aligned}
\end{equation*}
The point of intersection of $C^1_0$ with the vertical $v$-axis is $T_C(0,\beta \gamma)$, which is the transcritical bifurcation point for the fast subsystem. It follows from Fenichel's theorem \cite{Fenichel79,Kuehn15} that there exist locally invariant slow sub-manifolds $C^1_{\varepsilon}$ and $C_{\varepsilon}^0$ which are diffeomorphic to the respective critical manifolds $C^1_0$ and $C^0_0$, except at the non-hyperbolic points $P$ and $T_C$. Since $C^1_0$ can be written explicitly as $v=q(u)$, we assume that the invariant manifold $C_{\varepsilon}^1$ can be written as a perturbation of $v=q(u)$ as follows, with $\varepsilon$ as the perturbation parameter,
\begin{equation} \label{eq:invariant-manifold}
C_{\varepsilon}^1=\Big\{(u,v) \in \mathbb{R}^2_+ :v =q(u,\varepsilon),\ 0< u<1,\ v >0\Big\},
\end{equation}
where
$q(u,\varepsilon) = q_0(u) + \varepsilon q_1(u) + \varepsilon^2 q_2(u)+\cdots$, and
$C^0_{\varepsilon}=\{(u,v)\in \mathbb{R}^2_+ : u=0, v\ge0\}$.
Using the invariance condition and the asymptotic expansion of $q(u,\varepsilon)$, we can find the perturbed invariant manifold approximated up to the desired order. The approximation of $q(u,\varepsilon)$ up to second order is provided in Appendix A with explicit expressions for $q_0$, $q_1$, $q_2$. The approximations of invariant manifold for different values of $\varepsilon$ are shown in Fig.~\ref{fig:perturbed_manifolds}. This approximation has two non-removable discontinuities in the vicinity of the non-hyperbolic points $P$ and $T_C$.
\noindent The critical manifold $C^1_0$ is normally hyperbolic except at the points $P(u_f,v_f)$ and $T_C(0,\beta \gamma)$, and so is $C^1_\varepsilon$. Thus any trajectory starting near the attracting (repelling) submanifold $C^{1,a}_0$ $(C^{1,r}_0)$ cannot cross the fold point $P$ (transcritical point $T_C$). We can see from Fig.~\ref{fig:dynamics_slow_fast} that for sufficiently small values of $\varepsilon$ the trajectories pass close enough to the attracting manifold $C^{1,a}_0$ and cross the point $P$. Fenichel's theory is not adequate to determine an analytical expression for perturbed sub-manifolds that stay close to $C_0^{1}$ and remain continuous in the vicinity of the non-hyperbolic points.
\begin{figure}
\caption{Second order approximation of the perturbed manifold from GSPT with $\varepsilon=1$ (magenta), $\varepsilon=0.1$ (green), $\varepsilon=0.01$ (red), $\varepsilon=0.001$ (blue), for the parameter values $\alpha=0.5,\ \beta=0.22,\ \gamma=3,\ \delta=0.3$.}
\label{fig:perturbed_manifolds}
\end{figure}
\noindent Therefore, to construct a trajectory passing through the vicinity of the point $P$ we must remove the singularity at this point. Depending on the parameter $\delta$, the predator nullcline intersects either $C^{1,a}_0$ or $C_0^{1,r}$, or passes through the point $P$. Thus the coexistence equilibrium point $E_*$ lies either on $C^{1,a}_0$ or $C_0^{1,r}$, or coincides with $P$. When $E_*$ lies on $C_0^{1,a}$ it is globally asymptotically stable and every trajectory converges to $E_*$. When $E_*$ coincides with the fold point $P$ we have $f_u(E_*)=0$, $f_v(E_*)\ne 0$, $f_{uu}(E_*)\ne 0$, and $g(E_*)= 0$; this point is called the canard point. For the model under consideration (\ref{eq:temp_weak_fast}), the Hopf point coincides with the canard point. The solution passing through this point is known as the canard solution. For $\delta<\delta_H$, the coexistence equilibrium point $E_*$ lies on the repelling sub-manifold $C_0^{1,r}$; it is unstable, and we obtain a special kind of periodic solution consisting of two fast segments (almost horizontal) and two slow segments (passing close to $C^{1,a}_0$ and $C^0_0$), called a relaxation oscillation. We will discuss the existence of such solutions in subsequent subsections.
\noindent To remove the singularity at the fold point, we use a blow-up transformation at the non-hyperbolic fold point, which extends the system over the 3-sphere in $\mathbb{R}^4$, denoted by $S^3 = \{x\in \mathbb{R}^4: ||x||=1\}$. Using the blow-up technique we remove the singularity from the system and determine the canard solution passing through this point.
\noindent To apply the blow-up technique, first, we transform the slow-fast system (\ref{eq:temp_weak_fast}) into its desired slow-fast normal form.
\subsection{Slow-fast normal form}
Here we consider a topologically equivalent form of the system (\ref{eq:temp_weak_fast}) by re-scaling the time with the help of a transformation $t \rightarrow (1+\alpha u)t$, where $(1+\alpha u) >0$ \cite{Kuznetsov04}. The transformed system is
\begin{equation}\label{eq:topo_eq}
\begin{aligned}
\dfrac{du}{dt} &= \gamma u(1-u)(u+\beta)(1+\alpha u)-uv \equiv F(u,v,\delta),\\
\dfrac{dv}{dt} &= \varepsilon\left(uv-\delta v(1+\alpha u)\right) \equiv \varepsilon G(u,v,\delta).
\end{aligned}
\end{equation}
The fold point $P$ coincides with the coexistence equilibrium point at $\delta=\delta_*$. As a consequence, the following conditions hold
\begin{equation}\label{eq:folded_singularity}
\begin{aligned}
&F(u_*,v_*,\delta_*)=0,\ F_u(u_*,v_*,\delta_*)= 0,\ F_v(u_*,v_*,\delta_*)\ne 0,\ F_{uu}(u_*,v_*,\delta_*)\ne 0,\\
&\ G_u(u_*,v_*,\delta_*)\ne 0,\ G_{\delta}(u_*,v_*,\delta_*)\ne 0 \ \text{and} \ G(u_*,v_*,\delta_*)=0.
\end{aligned}
\end{equation}
Using the transformation $U=u-u_*,\ V=v-v_*,\ \lambda=\delta-\delta_*$, we translate the fold point to the origin, and together with the conditions (\ref{eq:folded_singularity}) the system reduces to the slow-fast normal form near $(0,0)$ as follows
\begin{equation}\label{eq:slow-fast_normal_form}
\begin{aligned}
\dfrac{dU}{dt} &=-Vh_1(U,V)+U^2h_2(U,V)+\varepsilon h_3(U,V),\\
\dfrac{dV}{dt} &=\varepsilon\left(Uh_4(U,V)-\lambda h_5(U,V)+Vh_6(U,V)\right),
\end{aligned}
\end{equation}
where the $h_i$'s, $i=1,2,\dots,6$, are given in Appendix B.
Here $\lambda$ measures the perturbation of $\delta$ from $\delta_*$ and is considered as the bifurcation parameter for the system (\ref{eq:slow-fast_normal_form}). The bifurcation parameter $\lambda$ and the time-scale parameter $\varepsilon$ are assumed to be independent of time. We now extend the above system to $\mathbb{R}^4$ by augmenting system (\ref{eq:slow-fast_normal_form}) with the equations $\dfrac{d\lambda}{dt}=0$ and $\dfrac{d\varepsilon}{dt} =0$, and study the dynamics of the system in the vicinity of $(0,0,0,0)$.
\subsection{Blow-up transformation}
The fold point $P$ of the system (\ref{eq:temp_weak_fast}) and the equilibrium point $E_*$ coincide at the Hopf bifurcation threshold. Hence $P$ is now a canard point. Let us consider the blow-up space $S^3 = \{(\bar{U},\bar{V},\bar{\lambda},\bar{\varepsilon})\in \mathbb{R}^4: \bar{U}^2+\bar{V}^2+\bar{\lambda}^2+\bar{\varepsilon}^2=1\}$ and an interval $\mathcal{I}:=[0,\rho]$ where $\rho>0$ is a small constant. We define a manifold $\mathcal{M}:=S^3\times\mathcal{I}$ and the blow-up map $\Phi$, $\Phi: \mathcal{M} \rightarrow \mathbb{R}^4$, where
\begin{equation}\label{eq:blow-up-map}
\Phi(\bar{U},\bar{V},\bar{\lambda},\bar{\varepsilon},\bar{r}) = (\bar{r}\bar{U},\bar{r}^2\bar{V},\bar{r}\bar{\lambda},\bar{r}^2\bar{\varepsilon}):=(U,V,\lambda,\varepsilon).
\end{equation}
Using the above map we can write the transformed system as follows
\begin{equation}\label{eq:blow_up_system}
\begin{aligned}
\dfrac{d\bar{U}}{dt} &= \dfrac{1}{\bar{r}}\left( \dfrac{dU}{dt} - \bar{U}\dfrac{d\bar{r}}{dt}\right),\
\dfrac{d\bar{V}}{dt} = \dfrac{1}{\bar{r}^2}\left(\dfrac{dV}{dt} - 2\bar{r} \bar{V}\dfrac{d\bar{r}}{dt}\right),\\
\dfrac{d\bar{\lambda}}{dt} &= \dfrac{1}{\bar{r}}\left( \dfrac{d\lambda}{dt} - \bar{\lambda}\dfrac{d\bar{r}}{dt}\right),\
\dfrac{d\bar{\varepsilon}}{dt} = \dfrac{1}{\bar{r}^2}\left(\dfrac{d\varepsilon}{dt} - 2\bar{r} \bar{\varepsilon}\dfrac{d\bar{r}}{dt}\right),
\end{aligned}
\end{equation}
where $\dfrac{dU}{dt},\dfrac{dV}{dt}, \dfrac{d\lambda}{dt}, \dfrac{d\varepsilon}{dt}$ are given in (\ref{eq:slow-fast_normal_form}). To study the dynamics of the transformed system on and around the hemisphere $S^3_{\varepsilon\ge 0}$ we introduce charts with directional blow-up maps \cite{Kuehn15,Tu}. Along each direction of the coordinate axes we define the charts $K_1$, $K_2$, $K_3$ and $K_4$ by setting $\bar{V}=1$, $\bar{\varepsilon}=1$, $\bar{U}=1$ and $\bar{\lambda}=1$, respectively, in (\ref{eq:blow_up_system}). The charts $K_1$ and $K_3$ describe the dynamics in the neighborhood of the equator of $S^3$, and $K_2$ describes the dynamics in a neighborhood of the positive hemisphere. Here, we mainly focus on chart $K_2$ to prove the existence of a periodic solution for $0<\bar{\varepsilon}\ll1$. Re-scaling the time with the transformation $\bar{t}:=\bar{r}t$, we desingularize the system (\ref{eq:blow_up_system}) so that the multiplicative factor $\bar{r}$ disappears. The transformed version of the system (\ref{eq:blow_up_system}) can be written in chart $K_2$ as follows
\begin{equation}\label{eq:desingularized_system_K2}
\begin{aligned}
\dfrac{d\bar{U}}{dt} &=-\bar{V}b_1+\bar{U}^2b_2+\bar{r}\left(a_1\bar{U}-a_2\bar{U}\bar{V}+a_3\bar{U}^3\right)+O(\bar{r}(\bar{\lambda}+\bar{r})),\\
\dfrac{d\bar{V}}{dt} &= \bar{U}b_3-\bar{\lambda}b_4 + \bar{r}\left(a_4\bar{U}^2+a_5\bar{V}\right)+O(\bar{r}(\bar{\lambda}+\bar{r})),\\
\dfrac{d\bar{\lambda}}{dt} &= 0,\\
\dfrac{d\bar{\varepsilon}}{dt} &=0,
\end{aligned}
\end{equation}
where $b_j$'s are given in Appendix B,
\begin{equation}\label{eq:a_i}
\ a_1 = a_4 = 0,\ \
a_2 = 1,\ \
a_3 = -\left(\gamma+\alpha \gamma(4u_*+\beta-1)\right),\ \
a_5 = u_*-(1+u_*\alpha)\delta_*.
\end{equation}
\noindent The condition for the destabilization of the coexistence equilibrium point through a singular Hopf bifurcation is summarized in the following theorem. The Hopf bifurcation of the system (\ref{eq:slow-fast_normal_form}) occurs at $\lambda=0$. At the Hopf bifurcation, the purely imaginary eigenvalues of the corresponding Jacobian matrix are functions of $\varepsilon$ which tend to zero as $\varepsilon \rightarrow 0$. On the slow time scale, the eigenvalues are instead functions of $1/\varepsilon$. Thus on both time scales the Hopf bifurcation is singular as $\varepsilon \rightarrow 0.$
\begin{theorem}
Let $(U,V)=(0,0)$ be the canard point of the transformed system (\ref{eq:slow-fast_normal_form}) at $\lambda=0$, such that $(0,0)$ is a folded singularity and $G(0,0,0)=0$. Then for sufficiently small $\varepsilon$ there exists a singular Hopf bifurcation curve $\lambda=\mathcal{\lambda_H}(\sqrt{\varepsilon})$ such that the equilibrium point of the system (\ref{eq:slow-fast_normal_form}) is stable for $\lambda>\mathcal{\lambda_H}(\sqrt{\varepsilon})$, and
\begin{equation}
\mathcal{\lambda_H}(\sqrt{\varepsilon})=-\dfrac{b_3(a_1+a_5)}{2b_2b_4}\varepsilon+O(\varepsilon^{3/2}).
\end{equation}
\end{theorem}
\begin{proof}
The proof of the theorem is given in Appendix B.
\end{proof}
\noindent The singular Hopf bifurcation curve for the system (\ref{eq:topo_eq}) is thus given by
\begin{equation} \label{eq:Singular_Hopf_curve}
\begin{aligned}
\mathcal{\delta_H}(\sqrt{\varepsilon}) = & \dfrac{1+\alpha^2\beta-\sqrt{1+\alpha+\alpha^2-\alpha\beta+\alpha^2\beta+\alpha^2\beta^2}}{\alpha(-1-\alpha+\alpha\beta+\alpha^2\beta)}- \\ & \quad \dfrac{b_3(a_1+a_5)}{2b_2b_4}\varepsilon+O(\varepsilon^{3/2}).
\end{aligned}
\end{equation}
In Fig.~\ref{fig:Singular_Hopf_Maximal_Canard} the singular Hopf bifurcation curve (red) is plotted in the $\delta$-$\varepsilon$ parametric plane; it shows how the singular Hopf bifurcation threshold changes with the variation in $\varepsilon$.
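\noindent For concreteness, the leading-order (i.e.\ $\varepsilon\rightarrow 0$) term of (\ref{eq:Singular_Hopf_curve}) can be evaluated directly. The following Python sketch is an illustration of ours only: it computes this leading term for $\alpha=0.5$, $\beta=0.22$ and returns a value consistent with the temporal Hopf threshold $\delta_H\approx 0.3768$ used later in the text; the $O(\varepsilon)$ correction would additionally require the coefficients $a_i$, $b_i$ of Appendix B.
\begin{verbatim}
# Leading-order singular Hopf threshold of (eq:Singular_Hopf_curve); sketch only.
from math import sqrt

def delta_star(alpha, beta):
    """epsilon -> 0 limit of the singular Hopf curve delta_H(sqrt(eps))."""
    disc = 1 + alpha + alpha**2 - alpha*beta + alpha**2*beta + alpha**2*beta**2
    num = 1 + alpha**2*beta - sqrt(disc)
    den = alpha*(-1 - alpha + alpha*beta + alpha**2*beta)
    return num/den

print(delta_star(0.5, 0.22))   # approximately 0.3769
\end{verbatim}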
Once the coexistence equilibrium loses stability through the Hopf bifurcation at the canard point, we find a closed orbit as an attractor surrounding the unstable equilibrium point. From this point a small-amplitude stable canard cycle originates, enclosing the point $P$, and then forms a canard cycle with head depending on the parameter values. The following theorem provides an analytical expression for the maximal canard curve in the $(\lambda,\varepsilon)$ plane.
\begin{theorem}
Let $(U,V)=(0,0)$ be the canard point of the slow-fast normal form (\ref{eq:slow-fast_normal_form}) at $\lambda=0$, such that $(0,0)$ is a folded singularity and $G(0,0,0)=0$. Then for $\varepsilon>0$ sufficiently small there exists a maximal canard curve $\lambda=\lambda_c(\sqrt{\varepsilon})$ such that the slow flow on the normally hyperbolic invariant submanifold $\mathcal{M}_{\varepsilon}^{1,a}$ connects with $\mathcal{M}_{\varepsilon}^{1,r}$ in the blow-up space. Moreover, $\lambda_c(\sqrt{\varepsilon})$ is given by
\begin{equation}
\begin{aligned}
\lambda_c(\sqrt{\varepsilon}) &= -\dfrac{1}{A_5}\Big(\dfrac{3A_1}{4A_4^2}+\dfrac{A_2}{2A_4}+A_3\Big)\varepsilon + O(\varepsilon^{3/2})
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof} The proof of this theorem is given in Appendix C.
\end{proof}
\noindent The maximal canard curve, along which the canard cycle with head appears for the system (\ref{eq:topo_eq}), is given by
\begin{equation}\label{eq:Maximal_canard_curve}
\begin{aligned}
\delta_c(\sqrt{\varepsilon}) = & \dfrac{1+\alpha^2\beta-\sqrt{1+\alpha+\alpha^2-\alpha\beta+\alpha^2\beta+\alpha^2\beta^2}}{\alpha(-1-\alpha+\alpha\beta+\alpha^2\beta)} - \\ & \quad \dfrac{1}{A_5}\Big(\dfrac{3A_1}{4A_4^2}+\dfrac{A_2}{2A_4}+A_3\Big)\varepsilon + O(\varepsilon^{3/2}).
\end{aligned}
\end{equation}
Keeping $\alpha$, $\beta$ and $\varepsilon\ (>0)$ fixed, $\delta_c(\sqrt{\varepsilon})$ gives the threshold for the existence of the canard cycle with head. A schematic diagram of the threshold curves in the $\delta$-$\varepsilon$ plane is illustrated in Fig.~\ref{fig:Singular_Hopf_Maximal_Canard}; these curves divide the $\delta$-$\varepsilon$ parametric plane into four domains.
\begin{figure}
\caption{Schematic diagram showing the singular Hopf bifurcation curve $\mathcal{\delta_H}(\sqrt{\varepsilon})$ and the maximal canard curve $\delta_c(\sqrt{\varepsilon})$ dividing the $\delta$-$\varepsilon$ parametric plane into four domains.}
\label{fig:Singular_Hopf_Maximal_Canard}
\end{figure}
In domain I, where $\delta > \delta_\mathcal{H}$, the coexistence equilibrium point is stable. For a fixed $\varepsilon>0$, as we decrease $\delta$ from domain I to domain II, small-amplitude canard cycles appear after crossing the Hopf bifurcation threshold $\delta = \delta_\mathcal{H}.$ In domain II, that is when $\delta_c <\delta < \delta_\mathcal{H},$ the system experiences a transition from canard cycles without head towards canard cycles with head: the size of the canard cycle increases on decreasing $\delta$, and the shape of the cycle changes to a canard with head at $\delta=\delta_c.$ The canard cycle with head persists in a narrow domain III, where $\delta_{ro}<\delta<\delta_c$. On further decreasing $\delta$, that is, when $\delta\le\delta_{ro}$, the unstable equilibrium point is surrounded by a stable periodic attractor called a relaxation oscillation. This periodic attractor consists of two slow segments (close to the critical manifold) and two fast segments (almost horizontal and away from the critical manifold), concatenated.
For sufficiently small $\varepsilon$, this transition from small canard cycles to relaxation oscillation through canard cycles with head takes place within a narrow interval of the parameter $\delta$; the phenomenon is known as a canard explosion. This mechanism is further illustrated with the help of a numerical example in Subsection \ref{sec:Canard_explosion}.
\subsection{Entry-Exit Function} \label{subsec:entry_exit}
The canard cycle and the relaxation oscillation pass through the fold point $P$ when $\varepsilon=0$, and through the vicinity of $P$ for $\varepsilon\ll1$. We now prove the existence of a trajectory that jumps from the fold point to the other attracting slow manifold $C^0_0$ through a fast horizontal flow, continues along $C^0_0$ for a certain time, and then leaves $C^0_0$ at a certain point. This exit point is determined by the entry-exit function, from which we can find the coordinates of the point where the trajectory leaves the slow manifold $C^0_0$. To do this, we first rewrite the system (\ref{eq:temp_weak_fast}) as a Kolmogorov system \cite{Freedman80} as follows
\begin{equation}\label{eq:entry_exit}
\begin{aligned}
\dfrac{du}{dt} &= uf_1(u,v) = u\Big(\gamma (1-u)(u+\beta)-\frac{v}{1+\alpha u}\Big),\\
\dfrac{dv}{dt} &= \varepsilon vg_1(u,v) = \varepsilon v \Big(\frac{u}{1+\alpha u}-\delta \Big).
\end{aligned}
\end{equation}
We can verify that $f_1(0,v)=\gamma \beta-v$ and $g_1(0,v)=-\delta<0$, which implies that $f_1(0,v)<0$ if $v> \gamma \beta$ and $f_1(0,v)>0$ if $v< \gamma \beta$.
$T_C(0,\beta \gamma)$ on the vertical axis is the transcritical bifurcation point, and we can divide the slow manifold $C^0_0$ into two parts, $V^+:=\{(u,v):u=0, v>\gamma \beta\}$ and $V^-:=\{(u,v):u=0,\ v<\gamma \beta\}$. Clearly $V^+$ is attracting and $V^-$ is repelling.
\noindent Let us fix $\varepsilon>0$ and let $u_{max}$ be the point at which the critical manifold $C^1_0$ attains its maximum, obtained from the extremum condition $\dot{q_0}(u)=0,$ where $q_0(u)$ is given in Appendix A.
Solving for $u_{max}$ we find
\begin{equation} \label{eq:u_max}
u_{max} = \dfrac{(\alpha-\alpha \beta -1)+\sqrt{1+\alpha+\alpha^2-\alpha \beta+\alpha^2\beta+\alpha^2 \beta^2}}{3\alpha}
\end{equation}
and from the expression of $C_0^1$ we have
\begin{equation}\label{eq:v_max}
\begin{aligned}
v_{max} = q(u_{max})\ = \ \gamma (1-u_{max})(u_{max}+\beta)(1+\alpha u_{max}).
\end{aligned}
\end{equation}
Now we consider a trajectory starting from a point, say $(u_{1},v_{1})$, where $u_1 < u_{max}$ and $v_1 = v_{max}$. The trajectory gets attracted toward the attracting manifold $V^+$ and starts moving downward, maintaining proximity to $V^+.$ One might expect the trajectory to leave the vertical axis at the bifurcation point $T_C$, where the manifold loses its stability \cite{Muratori89}. Instead, the trajectory crosses the point $T_C$ and continues to move vertically downward, remaining close to the repelling part $V^-$ for a certain time, until a minimum predator population $p(v_1)$ is attained such that $0<p(v_{1})<\gamma \beta$. After leaving the slow manifold near the point $p(v_1)$, the trajectory moves along a fast horizontal segment and gets attracted to the attracting slow manifold $C_0^{1,a}$. The exit point is determined by the entry-exit function $p(v_1)$, which is defined implicitly as
\begin{equation*}
\int_{p(v_{1})}^{v_{1}}\dfrac{f_1(0,v)}{vg_1(0,v)}dv=0.
\end{equation*}
For simplicity we define $v_{0}:=p(v_{1})$, then we have
\begin{equation}\label{eq:en_ex_integral}
\begin{aligned}
\int_{v_{0}}^{v_{1}}\dfrac{v-\gamma\beta}{v\delta}dv = 0\,\,
\implies(v_{1}-v_{0})-\gamma\beta\ln\Big(\frac{v_{1}}{v_{0}}\Big) =0
\end{aligned}
\end{equation}
Substituting (\ref{eq:v_max}) into equation (\ref{eq:en_ex_integral}) we obtain a transcendental equation in $v_0$ which we solve numerically to obtain the exit point.\\ For the parameter values $\alpha=0.5$, $\beta=0.2$, $\gamma=3$, $\delta=0.3$ we obtain $u_{max}=0.472$,\ $v_{1} = 1.316$, and solving the transcendental equation (\ref{eq:en_ex_integral}) we get $p(v_{1}) = 0.207509$, which is the exit point from the manifold $C_0^0$.
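\noindent These numbers can be reproduced with a few lines of code. The sketch below is a straightforward numerical check of ours (not part of the original computations): it evaluates $u_{max}$ and $v_{1}=v_{max}$ from (\ref{eq:u_max})--(\ref{eq:v_max}) and solves the transcendental equation (\ref{eq:en_ex_integral}) for $v_0=p(v_1)$ by root bracketing.
\begin{verbatim}
# Entry-exit point on the slow manifold C_0^0; numerical sketch.
from math import sqrt, log
from scipy.optimize import brentq

alpha, beta, gamma, delta = 0.5, 0.2, 3.0, 0.3

# u_max and v_max from (eq:u_max)-(eq:v_max)
u_max = ((alpha - alpha*beta - 1)
         + sqrt(1 + alpha + alpha**2 - alpha*beta
                + alpha**2*beta + alpha**2*beta**2)) / (3*alpha)
v1 = gamma*(1 - u_max)*(u_max + beta)*(1 + alpha*u_max)

# Solve (v1 - v0) - gamma*beta*log(v1/v0) = 0 for v0 in (0, gamma*beta)
f = lambda v0: (v1 - v0) - gamma*beta*log(v1/v0)
v0 = brentq(f, 1e-6, gamma*beta - 1e-6)

print(u_max, v1, v0)   # approximately 0.472, 1.316, 0.2075
\end{verbatim}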
\begin{theorem}
Let $P$ be the fold point on the critical manifold $C^1_0$ where the slow flow on the attracting manifold $C_{0}^{1,a}$ is given by (\ref{eq:slow-flow}). Also assume that the coexistence equilibrium point lies on the normally hyperbolic repelling critical submanifold under the parametric restriction,
$$\dfrac{\delta}{1-\alpha \delta}< \dfrac{(\alpha-\alpha \beta -1)+\sqrt{1+\alpha+\alpha^2-\alpha \beta+\alpha^2\beta+\alpha^2 \beta^2}}{3\alpha}$$
and let $U$ denote a small neighborhood of a singular trajectory $\gamma_0$ consisting of alternating slow and fast segments. Then for sufficiently small $\varepsilon$ there exists a unique attracting limit cycle $\gamma_\varepsilon \subset U$ such that $\gamma_\varepsilon \rightarrow \gamma_0$ as $\varepsilon \rightarrow 0$.
\end{theorem}
\begin{proof}
The proof is given in Appendix D.
\end{proof}
\noindent These cycles are shown with the help of a numerical example in Appendix D.
For $\varepsilon$ sufficiently small, all trajectories asymptotically converge to this stable limit cycle, which consists of alternating slow and fast transitions of the prey and predator densities. The cycle can be interpreted as follows: when the predator population reaches a high density, there is a rapid decline in the prey population due to excessive consumption by the specialist predator, and the prey reaches a considerably low level. As a consequence, the predator population declines slowly until it reaches a low threshold density at which the prey population again starts growing. The prey then regenerates within a very short time while the predator density remains more or less fixed. Subsequently, the predator population starts growing slowly due to the abundance of resources. Finally, when the predator density reaches its maximum level, the slow-fast cycle completes, and this dynamics continues with time.
\subsection{Canard Explosion} \label{sec:Canard_explosion}
In the previous sub-sections, we observed the periodic dynamics of the slow-fast system near the canard point, where the predator nullcline intersects the non-trivial prey nullcline at the fold point. This occurs at a certain threshold of the parameter $\delta$. At this point, the coexistence equilibrium point loses stability through a singular Hopf bifurcation and a small-amplitude stable limit cycle is observed. As the parameter $\delta$ decreases further, the Hopf bifurcating stable cycle grows in size and settles down to a relaxation oscillation. The fast transition in the size of the limit cycle, from small canard cycles to relaxation oscillation, occurs in an exponentially small range of the parameter $\delta$. This phenomenon is known as the canard explosion.
\noindent The family of canard cycles is shown in Fig.~\ref{fig:dynamics_slow_fast}(c) for fixed $\varepsilon$ and three values of $\delta$ close to the singular Hopf bifurcation threshold $\delta_H$. The coexistence equilibrium is stable for $\delta>\delta_H$ and, being the global attractor, the trajectory converges to the stable steady state for any initial condition. For $\delta$ just below $\delta_H$, a stable limit cycle grows in size and a new periodic solution emerges, known as the canard cycle without head (cyan in Fig.~\ref{fig:dynamics_slow_fast}(c)). This marks the onset of the canard explosion. Decreasing $\delta$ slightly further, we obtain another canard cycle known as the canard with head (blue in Fig.~\ref{fig:dynamics_slow_fast}(c)). This cycle is special in the sense that, from the vicinity of the fold point, it follows the repelling slow manifold $C^{1,r}_0$ for $O(1)$ time before jumping to the other attracting manifold. A maximal canard is obtained at $\delta=\delta_c$. After crossing the maximal canard threshold, the system settles down to a large stable periodic solution called relaxation oscillation, which marks the end of the canard explosion. This orbit is characterized by the fact that the slow flow, on reaching the vicinity of the fold point, jumps directly to the other attracting slow manifold, as studied in the previous section. The strength of the Allee effect has a significant influence on the amplitude of the stable oscillatory coexistence of the two species. The change in the amplitude of the limit cycle corresponding to the canard explosion is shown in Fig.~\ref{fig:canard_cycles}.
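\noindent The canard explosion can also be reproduced numerically by integrating the temporal model for a range of $\delta$ just below $\delta_H$ and recording the amplitude of the prey oscillation. The following sketch is our own illustration (a stiff integrator is used because of the small $\varepsilon$, and the sampled $\delta$ values are indicative only); it produces an amplitude curve of the type shown in Fig.~\ref{fig:canard_cycles}.
\begin{verbatim}
# Amplitude of the limit cycle of the temporal slow-fast model versus delta; sketch.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, eps = 0.5, 0.22, 3.0, 0.01

def rhs(t, y, delta):
    u, v = y
    du = gamma*u*(1-u)*(u+beta) - u*v/(1+alpha*u)
    dv = eps*(u*v/(1+alpha*u) - delta*v)
    return [du, dv]

for delta in np.linspace(0.377, 0.360, 18):
    sol = solve_ivp(rhs, (0, 6000), [0.6, 1.0], args=(delta,),
                    method="LSODA", rtol=1e-8, atol=1e-10)
    u = sol.y[0][sol.t > 4000]            # discard the transient
    print(delta, u.max() - u.min())       # oscillation amplitude in u
\end{verbatim}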
\begin{figure}
\caption{The bifurcation diagram showing the change in the amplitude of the canard cycles, plotted against $\delta$ for $\alpha=0.5,\ \gamma=3,\ \varepsilon=0.01,\ \beta=0.22$.}
\label{fig:canard_cycles}
\end{figure}
\noindent For smaller values of $\beta$, the size of the canard cycle is very large and the canard explosion occurs in an exponentially small interval. However, on increasing the value of $\beta$, the size of the limit cycle shrinks and, instead of a sudden change in the size of the cycle, we observe a gradual increase in the amplitude of the periodic solution (Fig.~\ref{fig:canard_different_allee_strength}). Although the transition from canard cycle to relaxation oscillation takes place in a much wider parametric interval in this case, it is difficult to distinctively identify the different periodic solutions. For smaller values of $\beta$, when the prey population is almost absent, the predator population also slowly declines to an almost endemic level. But on increasing the strength of the Allee effect, the predator density never collapses but remains in the system at a much higher density. The system then experiences a sudden outbreak of the prey population within a very short interval of time. Again, because of the abundance of resources, the predator population grows slowly and reaches the maximum capacity. Due to the exploitation of the resources, there is a fast decline in the prey density, and this cycle continues.
\begin{figure}
\caption{(a) The canard cycles for $\alpha=0.5,\ \beta=0.8,\ \gamma=3,\ \varepsilon=0.01$ and for different values of $\delta$, i.e.\ $\delta=0.234$ (green), $\delta=0.233$ (blue), $\delta=0.231$ (magenta); (b) the bifurcation diagram showing the change in the size of the cycles.}
\label{fig:canard_different_allee_strength}
\end{figure}
\section{Spatio-temporal model}\label{sec:spatial}
We now consider the spatio-temporal model corresponding to the model (\ref{eq:temp_weak_fast}) with slow-fast time scale. Here the prey and predator densities are functions of time and space: $u(t,{\bf x})$ and $v(t,{\bf x})$ denote the prey and predator densities, respectively, at time $t$ and spatial location ${\bf x}$. In the case of one-dimensional (1D) space ${\bf x}=x\in\mathbb{R}$, and for two-dimensional (2D) space ${\bf x}=(x,y)\in\mathbb{R}^2$. For simplicity we assume that ${\bf x}$ belongs to a bounded domain $D\subset\mathbb{R}$ or $D\subset\mathbb{R}^2$, respectively. The spatio-temporal dynamics of the prey-predator interaction is described by the following reaction-diffusion equations
\begin{subequations} \label{eq:sptemp_weak}
\begin{eqnarray}
u_t &=& \gamma u(1-u)(u+\beta)-\frac{uv}{1+\alpha u}+\nabla^2u,\\
v_t &=& \varepsilon \left(\frac{uv}{1+\alpha u}-\delta v\right)+d\nabla^2v,
\end{eqnarray}
\end{subequations}
where $d$ is the ratio of the diffusivity coefficients of predator to prey and $\nabla^2$ is the Laplacian operator. The above spatio-temporal model is subject to no-flux boundary conditions and non-negative initial conditions. The model (\ref{eq:sptemp_weak}) cannot produce any stationary Turing pattern, since it can be shown that the Turing instability condition is not satisfied. However, instability of the coexistence steady state due to the Hopf bifurcation, combined with the diffusivity of the two species, leads to dynamic patterns through the formation of traveling waves, waves of invasion, and spatio-temporal chaos. The mechanisms responsible for this kind of pattern formation are described in \cite{Lewis16}.
\noindent An analytical condition for the existence of a traveling wave leading to successful invasion by the specialist predator is derived in the next subsection. For simplicity of the mathematical calculation, we restrict ourselves to one-dimensional space to explain the existence of traveling waves. The analogous patterns in a two-dimensional spatial domain are presented separately.
\subsection{Existence of Traveling wave}
To study the successful invasion by the predator we consider the system (\ref{eq:sptemp_weak}) in one-dimensional space and re-write it as
\begin{subequations} \label{eq:sptem_onedim}
\begin{eqnarray}
\frac{\partial u(t,x)}{\partial t} &=& f(u,v) +\frac{\partial^2u}{\partial x^2},\\
\frac{\partial v(t,x)}{\partial t} &=& \varepsilon g(u,v) +d\frac{\partial^2v}{\partial x^2},
\end{eqnarray}
\end{subequations}
where $f(u,v) = \gamma u(1-u)(u+\beta)-\frac{uv}{1+\alpha u}$ and $g(u,v)= \frac{uv}{1+\alpha u}-\delta v.$ The predator is introduced in a small domain where the prey density is at its carrying capacity. The successful invasion of the predator is characterized by the existence of a traveling wave joining the predator-free steady state with the coexistence steady state. Depending upon the stability of the coexistence steady state, we can find monotone, non-monotone, and periodic traveling waves, as explained below with the help of numerical examples.
\noindent We begin by deriving the minimum speed of the traveling wave which results in the successful invasion of the specialist predator into the space already inhabited by its prey. For this, we first consider a single-species model with linear growth:
$$\frac{\partial v(t,x)}{\partial t} = \alpha v+D\frac{\partial^2v}{\partial x^2},$$
where $\alpha$, $D\,>\,0$ are parameters with the obvious meaning. Strictly speaking, the above equation does not possess a traveling wave solution. However, for a compact initial condition, it is known that the tail of the profile propagates with the constant speed $c_{min}=2\sqrt{D\alpha}$, see \cite{Lewis16}, sometimes referred to as the Fisher spreading speed.
\noindent For the invasion of the predator into space inhabited by its prey, we consider the tail of the profile where $u\approx 1$ and $v\approx 0$, and linearize (\ref{eq:sptem_onedim}b) around $(1,0)$:
\begin{equation} \label{eq:TW_linearized}
\frac{\partial v(t,x)}{\partial t} = \varepsilon \Big(\frac{1}{\alpha+1}-\delta \Big)v+d\frac{\partial^2v}{\partial x^2}.
\end{equation}
Clearly, the speed of the traveling wave, at the onset of successful invasion, is given by
\begin{equation}
c_v = 2\Big(\varepsilon d \Big[\frac{1}{\alpha+1}-\delta\Big] \Big)^{1/2}.
\end{equation}
The feasibility condition is $\delta (\alpha + 1)<1$. The expression for $c_v$ indicates that the speed of the traveling wave is reduced by a factor of order $\sqrt{\varepsilon}$. We consider the existence of a traveling wave starting from the predator-free steady state, which leads to the successful establishment of the predators; this requires the consideration of the system (\ref{eq:sptem_onedim}) with the following conditions
$$ u(t,x) = 1,\ \text{and \ } v(t,x) = 0,\ \text{as\ } x\rightarrow -\infty,\ \forall \,t, $$
$$ u(t,x) = u_*,\ \text{and \ } v(t,x) = v_*,\ \text{as\ } x\rightarrow \infty,\ \forall \,t .$$
We consider the traveling wave solution of the system (\ref{eq:sptem_onedim}) in the form $u(t,x)=\phi(\xi)$, $v(t,x)=\psi(\xi)$ where $\xi = x-ct$ and $c$ is the wave speed. The functions $\phi(\xi)$ and $\psi(\xi)$ thus satisfy the equations
\begin{equation} \label{eq:TW_2ODE}
\begin{aligned}
\dfrac{d^2\phi}{d\xi^2} + c\dfrac{d\phi}{d\xi} + f(\phi,\psi) &= 0,\\
d\dfrac{d^2\psi}{d\xi^2} + c\dfrac{d\psi}{d\xi} + \varepsilon g(\phi,\psi) &=0.
\end{aligned}
\end{equation}
Substituting $p(\xi)= -\dfrac{d\phi}{d\xi}$ and $q(\xi)= -\dfrac{d\psi}{d\xi}$, from (\ref{eq:TW_2ODE}) we can derive four coupled ordinary differential equations as follows
\begin{equation} \label{eq:TW_4ODE}
\begin{aligned}
\dfrac{d\phi}{d\xi} &= -p,\\
\dfrac{dp}{d\xi} & = -cp + f(\phi,\psi),\\
\dfrac{d\psi}{d\xi} &= -q,\\
\dfrac{dq}{d\xi} &= \dfrac{1}{d}(-cq+\varepsilon g(\phi,\psi)).
\end{aligned}
\end{equation}
The three homogeneous steady states ($E_0,\ E_1,\ E_*$) of the spatio-temporal model (\ref{eq:sptem_onedim}) correspond to the three steady states of the system (\ref{eq:TW_4ODE}), namely $Q_0(0,0,0,0)$, $Q_1(1,0,0,0)$ and $Q_*(u_*,0,v_*,0)$.
To ensure the successful invasion of the predator, we focus on the dynamics of the system (\ref{eq:TW_4ODE}) around $Q_1$ and $Q_*$. The Jacobian matrix of the system (\ref{eq:TW_4ODE}) evaluated at $Q_1$ is
\begin{equation}
J_{Q_1} =
\begin{pmatrix}
0&-1&0&0\\ -\gamma(1+\beta)& -c& -\frac{1}{1+\alpha}&0\\0&0&0&-1\\0&0& \frac{\varepsilon}{d}(\frac{1}{1+\alpha}-\delta)&-\frac{c}{d}
\end{pmatrix}
\end{equation}
The eigenvalues of the matrix $J_{Q_1}$ are $\lambda _{1,2} = -\frac{c}{2}\pm \frac{\sqrt{c^2+4\gamma(1+\beta)}}{2}$ and
$\lambda _{3,4} = -\frac{c}{2d}\pm \frac{\sqrt{\Gamma}}{2d(1+\alpha)}$, where $\Gamma=(1+\alpha)^2c^2-4\varepsilon d(1+\alpha)(1-\delta-\alpha\delta)$. The first two eigenvalues are real, whereas $\lambda_{3,4}$ are real for $c^2\ge \frac{4\varepsilon d(1-\delta-\alpha\delta)}{1+\alpha}$. A traveling wave exists only if all the eigenvalues are real; otherwise the trajectories spiral around $Q_1$, which leads to negative population densities. Hence, the minimum speed of a traveling wave originating from the predator-free steady state is
\begin{equation}\label{eq:c_min}
c_{min} = \Big[\frac{4\varepsilon d(1-\delta-\alpha\delta)}{1+\alpha}\Big]^{1/2}.
\end{equation}
Note that $c_{min}$ depends on $\varepsilon$; thus for $\varepsilon<1$ the wave speed decreases. The minimum wave speed derived here is the same as that derived earlier with the help of the linearized equation.
The expressions for the eigenvalues of the Jacobian matrix $J_{Q_*}$ are quite complicated and hence, for the sake of brevity, we do not write them explicitly here. For $c\ge c_{min}$, the eigenvalues of $J_{Q_*}$ can be real or complex; depending on this we find a monotone traveling wave, or a non-monotone or periodic traveling wave, originating from the steady state $E_1$.
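\noindent As a cross-check, the relevant quantities can be computed numerically from (\ref{eq:TW_4ODE}) without writing the eigenvalues of $J_{Q_*}$ explicitly. The sketch below is our own illustration (the finite-difference Jacobian is a shortcut of ours): it evaluates $c_{min}$ from (\ref{eq:c_min}) and the eigenvalues of the Jacobian of (\ref{eq:TW_4ODE}) at $Q_1$ and $Q_*$ for a given wave speed, here $c=c_{min}$.
\begin{verbatim}
# Minimum wave speed and eigenvalues of the traveling-wave system (eq:TW_4ODE); sketch.
import numpy as np

alpha, beta, gamma, delta, eps, d = 0.5, 0.22, 3.0, 0.38, 1.0, 1.0

def f(u, v): return gamma*u*(1-u)*(u+beta) - u*v/(1+alpha*u)
def g(u, v): return u*v/(1+alpha*u) - delta*v

def rhs(y, c):
    phi, p, psi, q = y
    return np.array([-p, -c*p + f(phi, psi), -q, (-c*q + eps*g(phi, psi))/d])

def jac(y, c, h=1e-6):
    # numerical Jacobian by central differences
    J = np.zeros((4, 4))
    for j in range(4):
        e = np.zeros(4); e[j] = h
        J[:, j] = (rhs(y + e, c) - rhs(y - e, c)) / (2*h)
    return J

c_min = np.sqrt(4*eps*d*(1 - delta - alpha*delta)/(1 + alpha))
u_star = delta/(1 - alpha*delta)
v_star = gamma*(1 - u_star)*(u_star + beta)*(1 + alpha*u_star)

Q1 = np.array([1.0, 0.0, 0.0, 0.0])
Qs = np.array([u_star, 0.0, v_star, 0.0])
print("c_min =", c_min)                       # approximately 1.07 for delta = 0.38
print("eig(J_Q1):", np.linalg.eigvals(jac(Q1, c_min)))
print("eig(J_Q*):", np.linalg.eigvals(jac(Qs, c_min)))
\end{verbatim}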
\begin{figure}
\caption{Monotone traveling wave, non-monotone traveling wave and periodic traveling wave obtained for the model (\ref{eq:sptem_onedim}
\label{fig:TW_1d_1}
\end{figure}
To illustrate the existence of various types of traveling waves, we fix the parameter values $\alpha=0.5$, $\beta=0.22$, $\gamma=3$, $\varepsilon=1$ and consider $\delta$ as the variable parameter. From (\ref{eq:c_min}) we find that a traveling wave exists for $\delta<2/3$. The eigenvalues of $J_{Q_*}$ are real for $0.52<\delta<0.667$ and complex conjugate for $\delta\le 0.52$. Complex conjugate eigenvalues with negative real parts correspond to a non-monotone traveling wave, and those with positive real parts correspond to a periodic traveling wave. The three types of traveling waves are shown in Fig.~\ref{fig:TW_1d_1} for three different values of $\delta$. For $\delta\, =\,0.38$, the minimum wave speed is $c_{min}\approx 1.07$ and the complex conjugate eigenvalues with negative real part are $-1.22\pm0.423i$, whereas for $\delta=0.3$, $c_{min}\approx 1.211$ and the complex eigenvalues with positive real part are $0.034\pm 0.405i$. The initial condition used in Fig.~\ref{fig:TW_1d_1} is given by
$$u(0,x)\,=\,\left\{\begin{array}{ll}
u_*, & 0\leq x\leq3\\
1, & 3< x\leq 300 \\
\end{array}\right.,\,\,\,
v(0,x)\,=\,\left\{\begin{array}{ll}
v_*, & 0\leq x\leq3\\
0, & 3< x\leq 300 \\
\end{array}\right..$$
\noindent The traveling wave emerging from the above initial conditions connects the prey-only steady state $E_1$ to the coexistence state $E_*$. Its shape depends on the parameter values: for $\delta=0.6$, the profile is monotone (see Fig.~\ref{fig:TW_1d_1}a). A non-monotone traveling wave emerges for $\delta=0.38$ (see Fig.~\ref{fig:TW_1d_1}b). The range of the spatio-temporal oscillation increases for values of $\delta$ close to the temporal Hopf bifurcation threshold. For $\delta=0.3$, we find a periodic traveling wave with a plateau behind the oscillatory front, corresponding to the steady state $E_*$, which is unstable for these parameter values: a phenomenon known as dynamical stabilization \cite{Malchow02,Petrovskii00,Petrovskii01TPB,Sherratt98}.
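\noindent A minimal numerical sketch of the 1D simulations is given below. It is our own re-implementation for illustration, using an explicit Euler step and a standard three-point Laplacian with no-flux boundaries; the grid and time step are chosen for illustration only and are not claimed to be those used for the figures.
\begin{verbatim}
# 1D simulation of (eq:sptem_onedim) with the step-like initial condition; sketch only.
import numpy as np

alpha, beta, gamma, delta, eps, d = 0.5, 0.22, 3.0, 0.3, 1.0, 1.0
L, dx, dt, T = 300.0, 0.5, 0.01, 400.0
x = np.arange(0, L + dx, dx)

u_star = delta/(1 - alpha*delta)
v_star = gamma*(1 - u_star)*(u_star + beta)*(1 + alpha*u_star)

u = np.where(x <= 3, u_star, 1.0)
v = np.where(x <= 3, v_star, 0.0)

def lap(w):
    # three-point Laplacian with no-flux (reflecting) boundaries
    wp = np.concatenate(([w[1]], w, [w[-2]]))
    return (wp[2:] - 2*w + wp[:-2]) / dx**2

for _ in range(int(T/dt)):
    fu = gamma*u*(1-u)*(u+beta) - u*v/(1+alpha*u)
    gv = u*v/(1+alpha*u) - delta*v
    u, v = u + dt*(fu + lap(u)), v + dt*(eps*gv + d*lap(v))
\end{verbatim}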
\begin{figure}
\caption{Periodic traveling waves for the model (\ref{eq:sptem_onedim}) for the parameter values discussed in the text.}
\label{fig:PTW_1d}
\end{figure}
We mention here that the properties of the emerging traveling wave are rather robust with regard to the choice of initial conditions.
For instance, for a different initial condition given by
$$u(0,x)\,=\,\left\{\begin{array}{ll}
1, & 0\leq x\leq3\\
0, & 3< x\leq 300 \\
\end{array}\right.,\,\,\,
v(0,x)\,=\,\left\{\begin{array}{ll}
0.2, & 0\leq x\leq2\\
0, & 2< x\leq 300 \\
\end{array}\right.,$$
the emerging periodic traveling waves shown in Fig.~\ref{fig:TW_1d_1}c and Fig.~\ref{fig:PTW_1d}a are qualitatively similar.
Numerical simulations for smaller values of $\delta$ show an increase in the magnitude and period of the oscillating front, as shown in Fig.~\ref{fig:PTW_1d}b-c. For the numerical simulations we have chosen a spatial domain of size $[0,300]$; a further increase in the domain size does not affect the qualitative properties of the traveling waves.
To understand the effect of the slow-fast time scale on the resulting pattern formation, we consider the change in the periodic traveling wave shown in Fig.~\ref{fig:PTW_1d}a for $\varepsilon<1$. The model (\ref{eq:sptem_onedim}) is simulated for three different values of $\varepsilon$ (see Fig.~\ref{fig:PTW_SF_1d}). With the decrease in $\varepsilon$, we observe that the oscillatory wake of the invading predator front, separating the predator-free area from the onset of spatio-temporal oscillations, shrinks and eventually disappears for smaller values of $\varepsilon$, so that dynamical stabilization does not occur. Also, the size of the predator-free area, where the prey exists at its carrying capacity, increases with the decrease in $\varepsilon$. The size of the predator-free patch increases further for $\varepsilon<0.25$.
\begin{figure}
\caption{Periodic traveling waves for the model (\ref{eq:sptem_onedim}) for different values of $\varepsilon$.}
\label{fig:PTW_SF_1d}
\end{figure}
\subsection{Patterns in two dimensions}
Finally, we consider spatio-temporal pattern formation over a two-dimensional spatial domain. The nonlinear reaction-diffusion system (\ref{eq:sptemp_weak}) is solved numerically using a five-point finite difference scheme for the Laplacian operator and a forward Euler scheme for the temporal part, with the initial conditions (\ref{eq:IC1_1}) and (\ref{eq:IC2}). Equal diffusivities are considered throughout, {\it i.e.}, $d=1$. Two types of initial conditions are used to study the successful invasion and establishment of both species. In \cite{Morozov06}, Morozov {\it et al.} used the following initial condition, which describes a small amount of both species being introduced within a small elliptic domain
\begin{equation}\label{eq:IC1_1}
u(0,x,y)\,=\,\left\{
\begin{array}{ll}
u_0, & \frac{(x-x_1)^2}{\Delta_{11}}+\frac{(y-y_1)^2}{\Delta_{12}}\,\le\,1\\
0, & \text{otherwise}\\
\end{array}\right. \text{,}\ \
v(0,x,y)\,=\,\left\{
\begin{array}{ll}
v_0, & \frac{(x-x_2)^2}{\Delta_{21}}+\frac{(y-y_2)^2}{\Delta_{22}}\,\le\,1\\
0, & \text{otherwise}\\
\end{array}\right.
\end{equation}
where $u_0$ and $v_0$ measure the initial densities of the prey (native) and predator (invasive) species, respectively. The other initial condition we consider here is a small-amplitude heterogeneous perturbation of the homogeneous steady state (see \cite{Medvinsky02} for details):
\begin{equation}\label{eq:IC2}
\begin{aligned}
u(x,y,0) &= u_* - e_1(x-0.1y-225)(x-0.1y-675),\\
v(x,y,0) &= v_* - e_2(x-450)-e_3(y-450),
\end{aligned}
\end{equation}
where $(u_*,v_*)$ is the homogeneous steady state and for numerical simulation we choose $e_1 = 2\times 10^{-7}$, $e_2 = 3\times 10^{-5}$, and $e_3 = 2\times 10^{-4}$.
\noindent First we simulate the spatio-temporal model with the initial condition (\ref{eq:IC1_1}) in a square domain $L\times L$ with $L=300$, with grid spacing $\Delta x = \Delta y=1$ and time step $\Delta t =0.01$. The simulation results are verified with other choices of $\Delta x$ and $\Delta t$ to ensure that the obtained results are free from numerical artifacts. We consider a small elliptic domain within which the prey is at its carrying capacity ($u_0=1$) and a small predator population ($v_0=0.2$) is introduced. The other parameter values are $x_1 = 153.5$, $y_1 = 145$, $x_2 = 150$, $y_2=150$, $\Delta_{11} = 12.5$, $\Delta_{12} = 12.5$, $\Delta_{21} = 5$, $\Delta_{22}=10$. We simulate the model for a sufficiently long time so that the invading waves can cover the whole domain and hit the domain boundary. The values of $\alpha$, $\beta$, $\gamma$ and $\varepsilon$ are the same as in the previous subsection. We first examine the change in the resulting pattern by varying the parameter $\delta$. In Fig.~\ref{fig:weak_allee_delta_0dot3}, the two-dimensional spatial distribution of the prey population density at two different time points is shown for $\delta=0.3$. We have omitted the spatial distribution of the predator population as it exhibits patterns similar to those of the prey species. Choosing $\delta=0.3$, just below the temporal Hopf bifurcation threshold ($\delta_H= 0.3768$), we observe concentric circular rings as the initial transient pattern, which eventually settles down to an interacting spiral pattern once the transients are over. With the advancement of time, the expanding circular rings hit the domain boundary and break into irregular patches. These irregular spiral patches cover the whole domain, and the system dynamics can be identified as interacting spiral chaos (see Fig.~\ref{fig:weak_allee_delta_0dot3}b). Note that the initial invading waves are periodic traveling waves (Fig.~\ref{fig:weak_allee_delta_0dot3}a), but at large times we observe irregular spatio-temporal oscillations (Fig.~\ref{fig:weak_allee_delta_0dot3}b). This type of chaotic dynamics persists in the vicinity of the temporal Hopf bifurcation threshold ($0.2<\delta<\delta_H=0.3768$).
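\noindent For reference, a condensed version of the scheme just described (five-point Laplacian, forward Euler in time, and the elliptic initial condition (\ref{eq:IC1_1})) is sketched below in Python. It is an illustrative re-implementation of ours, not the original simulation code; in particular, the no-flux boundaries are handled here by simple edge padding.
\begin{verbatim}
# 2D simulation of (eq:sptemp_weak) with initial condition (eq:IC1_1); sketch only.
import numpy as np

alpha, beta, gamma, delta, eps, d = 0.5, 0.22, 3.0, 0.3, 1.0, 1.0
L, dx, dt, T = 300, 1.0, 0.01, 1000.0
x = np.arange(0, L + dx, dx)
X, Y = np.meshgrid(x, x, indexing="ij")

# Elliptic patches of (eq:IC1_1)
u0, v0 = 1.0, 0.2
x1, y1, x2, y2 = 153.5, 145.0, 150.0, 150.0
D11, D12, D21, D22 = 12.5, 12.5, 5.0, 10.0
u = np.where((X-x1)**2/D11 + (Y-y1)**2/D12 <= 1, u0, 0.0)
v = np.where((X-x2)**2/D21 + (Y-y2)**2/D22 <= 1, v0, 0.0)

def lap(w):
    # five-point Laplacian; edge padding approximates no-flux boundaries
    wp = np.pad(w, 1, mode="edge")
    return (wp[:-2, 1:-1] + wp[2:, 1:-1]
            + wp[1:-1, :-2] + wp[1:-1, 2:] - 4*w) / dx**2

for _ in range(int(T/dt)):
    fu = gamma*u*(1-u)*(u+beta) - u*v/(1+alpha*u)
    gv = u*v/(1+alpha*u) - delta*v
    u, v = u + dt*(fu + lap(u)), v + dt*(eps*gv + d*lap(v))
\end{verbatim}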
\begin{figure}
\caption{Spatio-temporal pattern of prey with initial condition (\ref{eq:IC1_1}) for $\delta=0.3$ at two different time points.}
\label{fig:weak_allee_delta_0dot3}
\end{figure}
\noindent Keeping the other parameters fixed, we further decrease $\delta \ (\le 0.2)$ and find propagating circular rings which are periodic traveling waves. The number of rings, that is, the number of population patches within the fixed domain, decreases with the decrease in the magnitude of $\delta$. These periodic traveling waves do not break after hitting the boundary, and the spatio-temporal dynamics remains unaltered. This result is in agreement with Sherratt et al.~\cite{Sherratt97}, namely that the system exhibits oscillatory dynamics as a result of a successful invasion. Periodic traveling fronts for $\delta\le 0.2$ are shown in Fig.~\ref{fig:weak_allee_delta_0dot1}.
\begin{figure}
\caption{Spatio-temporal pattern of prey with initial condition (\ref{eq:IC1_1}) for $\delta\le 0.2$, showing periodic traveling fronts.}
\label{fig:weak_allee_delta_0dot1}
\end{figure}
\noindent Now we consider the effect of the slow-fast time scale on the resulting patterns. In the previous subsection, we explained the reduction of the traveling wave speed with the decrease in the magnitude of $\varepsilon$. As a result, the time taken by the predators to invade the entire domain for $\varepsilon\ll1$ is much longer than for $\varepsilon=1$. For $\varepsilon<1$ we find two kinds of distinctive changes in the resulting patterns. The width of the population patches increases (see Fig~\ref{fig:weak_allee_eps2}a), and the spatio-temporal chaotic dynamics changes to a periodic temporal oscillation of a nearly homogeneous distribution of prey and predator densities (see Fig~\ref{fig:weak_allee_eps2}d). The time evolution of the spatial averages of the prey and predator populations is analogous to the temporal canard cycle.
\begin{figure}
\caption{Spatio-temporal pattern of prey, and phase plots of the spatially averaged prey and predator populations with respect to time, for $\alpha = 0.5,\ \beta = 0.22,\ \gamma = 3,\ \delta = 0.3$, with $\varepsilon=0.1$ (left) and $\varepsilon=0.01$ (right). Upper panel shows the pattern at (a) $t=10000$, (b) $t=5000$; lower panel shows the phase trajectory of the spatially averaged densities for (c) $t \in [2000, 10000]$, (d) $t \in [5000,10000]$.}
\label{fig:weak_allee_eps2}
\end{figure}
\noindent The choice of the initial condition and the domain size play an important role in pattern formation. We simulate the system (\ref{eq:sptemp_weak}) with the second kind of initial condition (\ref{eq:IC2}) over a square domain $L\times L$ with $L=900$. The initial condition represents a small heterogeneous perturbation of the homogeneous steady state $(u_*,v_*)$. Choosing the same parameter set as in Fig.~\ref{fig:weak_allee_delta_0dot3}, initially we find two spirals rotating about their fixed centers. The regular spirals are destroyed with the advancement of time, and the interacting spiral pattern engulfs the whole domain (see Fig.~\ref{fig:IC2_delta0.3}b-c). These patches move, break and form new patches, but qualitatively the dynamics of the system does not change with time.
\begin{figure}
\caption{Spatio-temporal pattern of prey density for $\alpha = 0.5,\ \beta = 0.22,\ \gamma = 3,\ \delta = 0.3, \ \varepsilon = 1,$ at different time intervals. }
\label{fig:IC2_delta0.3}
\end{figure}
\noindent Exhaustive numerical simulations indicate that for $\delta$ close to the temporal Hopf bifurcation threshold, the system always exhibits spatio-temporal chaos. The duration and type of the transient patterns depend upon the initial condition and the size of the domain. Now considering $\varepsilon < 1$, we find persistent interacting spirals with thick arms (see Fig.~\ref{fig:IC2_delta0.3_eps0dot1}a for $\varepsilon=0.1$). The regular spiral grows in size and does not break down, even after hitting the boundary, when $\varepsilon$ is significantly small, say $\varepsilon=0.01$. The irregularity of the temporal evolution of the spatial averages of both populations decreases and moves towards periodic or quasi-periodic oscillation with the decrease in the magnitude of $\varepsilon$. This claim is justified by the time evolution of the spatial averages presented in the lower panel of Fig.~\ref{fig:IC2_delta0.3_eps0dot1}.
\begin{figure}
\caption{Upper panel (a,b,c) represents spatial distribution of prey for $\alpha = 0.5,\ \beta = 0.22,\ \gamma = 3,\ \delta = 0.3$
for different values of $\varepsilon$ (a) $t=12000$, (b) $t=55000$, (c) $t=10000$ ; lower panel (d,e,f) represents phase trajectory of spatially averaged densities obtained after removing initial transients (d) $t\in [3000,12000],$ (e) $t\in [3000,55000]$, (f) $t\in [5000,10000]$.}
\label{fig:IC2_delta0.3_eps0dot1}
\end{figure}
\section{Discussion and Conclusions}
Understanding the effects that the existence of multiple time scales may have on the population dynamics of interacting species, in particular by promoting or hampering their persistence, has been attracting increasing attention over the last two decades. In particular, some preliminary yet significant work has been done to understand changes in the oscillatory coexistence in the presence of slow-fast time scales \cite{Hek10,Kooi18,Muratori89,Rinaldi92}. However, the effect of slow-fast dynamics in spatially explicit systems, e.g.~as given by the corresponding reaction-diffusion equations, remains poorly investigated. This paper aims to bridge this gap, at least partially. As our baseline system, we consider the classical Rosenzweig-MacArthur prey-predator model with a multiplicative weak Allee effect in the prey's growth. We pay particular attention to the interplay between the strength of the weak Allee effect (quantified by the parameter $0<\beta<1$) and the difference in the time scales for prey and predator (quantified by $\epsilon\le 1$).
We first provide a detailed slow-fast analysis for the corresponding nonspatial system. In doing so, we have obtained the following results:
\begin{itemize}
\item in the presence of slow-fast dynamics ($\epsilon\ll 1$) and a weak Allee effect, a decrease in the predator mortality may lead to a regime shift where small-amplitude oscillations in the population abundance change to large-amplitude oscillations (see Fig.~\ref{fig:canard_cycles}). This change becomes more abrupt in case the Allee effect is `not too weak' (i.e.~$\beta$ is sufficiently small), cf.~Figs.~\ref{fig:canard_cycles} and \ref{fig:canard_different_allee_strength}.
\end{itemize}
On a more technical side,
we have derived an asymptotic expansion in $\varepsilon$ for the approximate invariant manifolds and have explained the dynamics of the system near the hyperbolic submanifolds. This theory cannot be extended to the non-hyperbolic points. To unravel the complete geometry of the manifolds and their intersection as they pass through the non-hyperbolic points, we followed the blow-up technique \cite{Dumortier96,Krupa01A,Krupa01B}. We considered the slow-fast normal form of the model by translating the fold point to the origin. As the transformed system has a singularity at the origin, it is blown up to a sphere $S^3$ and the trajectories of the blow-up system are mapped on and around the sphere. Using the blow-up analysis we have found an analytical expression for the singular Hopf bifurcation curve $(\mathcal{\lambda_H}(\sqrt{\varepsilon}))$, along which the eigenvalues become singular as $\varepsilon\rightarrow 0$. A particular kind of slow-fast solution, known as a canard (with or without head), has been found explicitly with the help of Melnikov's distance function in the blow-up space. We have also calculated an analytical expression for the maximal canard curve $(\lambda_c(\sqrt{\varepsilon}))$. Another type of periodic solution, consisting of two concatenated slow and two fast segments and known as relaxation oscillation, has also been obtained. Analytically, we have proved the existence and uniqueness of the relaxation oscillation cycle using the entry-exit function \cite{Rinaldi92,Wang19AML} and validated our results numerically.
\noindent The difference in the time scales for the growth and decay of the prey and predator species captures some interesting features of the respective populations. As the prey population grows on the faster time scale, the predator population remains essentially unchanged during the rapid growth and decay of the prey population. On the other hand, the change in the predator population occurs slowly compared to that of the prey population. This type of growth and decay of the two constituent species is observed for steady-state coexistence as well as for oscillatory coexistence. The presence of the weak Allee effect in the prey growth acts as a system saver: the limiting relaxation oscillation cycle is smaller in size when the magnitude of the Allee effect is comparatively large (cf.~Fig.~\ref{fig:canard_different_allee_strength}). It reduces the chance of (possibly localized) extinction, as the periodic attractor remains away from both axes.
To understand the change in dynamic behavior, we have chosen the predator mortality rate $\delta$ as the bifurcation parameter. For a predator mortality rate greater than the Hopf threshold the system stabilizes at the coexistence steady state, whereas the system shows oscillatory dynamics for a mortality rate less than the threshold. For the model under consideration, the Hopf threshold is independent of the time scale parameter. But the combination of the mortality rate and the time scale parameter ($0<\varepsilon\ll1$) has an enormous effect on the nature of the oscillatory coexistence. For a fixed $\varepsilon>0$, as we decrease $\delta$ below the Hopf threshold, we observe a fast transition from small-amplitude oscillatory coexistence to relaxation oscillation within an exponentially small range of the parameter $\delta$, via a family of canard cycles (Fig.~\ref{fig:canard_cycles}); this is known as a canard explosion. This type of dynamics is observed in ecosystems where the growth rates of the interacting species (of resource-consumer type) differ by some orders of magnitude. The reason behind this can be interpreted as follows: when the predator density is at a maximum level, the prey population collapses rapidly due to over-consumption. Since the predator is a specialist, the lack of a food source causes the predator population to decay slowly to a lower density. This reduces the grazing pressure on the prey; as a result, the prey population revives, leading to a sudden outbreak. Again, with increasing food resources, the predator population increases slowly until it reaches a level which can be supported by the abundance of prey, and thus the cycle continues. Empirical evidence suggests that this type of oscillatory dynamics is observed in the real world, for example in the food web of the Canadian boreal forest \cite{Stenseth97}, where the outbreak of the hare population follows a cycle of almost 11 years; in aquatic ecosystems, in the seasonal cycles of Daphnia and algae \cite{Scheffer00,Scheffer97}; and in forest ecosystems, where insect pests defoliate adult trees \cite{Ludwig78}.
We then considered the effect of multiple time scales in one-dimensional and two-dimensional spatial extension of our slow-fast system.
In the 1D case, the minimum speed of the traveling wave of the predator invading the space already occupied by its prey (observed in the case of compact initial conditions) is found analytically, while the patterns emerging in the wake of the front are investigated by means of numerical simulations. In the 2D case, the effect of the interplay between the weak Allee effect and the multiple time scales is studied in simulations.
The following result is worth highlighting:
\begin{itemize}
\item in the presence of a weak Allee effect, a decrease in the time scale ratio (i.e.~for $\epsilon\ll 1$) may lead to a regime shift where the pattern becomes correlated across the whole spatial domain, resulting in large-amplitude oscillations of the spatially averaged population density; see Fig.~\ref{fig:IC2_delta0.3_eps0dot1}. Since the corresponding trajectory in the phase plane $(\langle u\rangle,\langle v\rangle)$ comes close to the vertical axis, the immediate ecological implication is a likely extinction of prey.
\end{itemize}
On a more technical side,
our main interest was to study how the invasion of the species takes place and how it is affected by the introduction of the time scale parameter. For values of $\delta$ less than the Hopf bifurcation threshold ($\delta<\delta_H$), we find spatio-temporal chaotic patterns. The onset of spatio-temporal chaos and the duration of the transient oscillations are strongly influenced by the initial distribution of the two species. Fig.~\ref{fig:weak_allee_delta_0dot3} shows periodic traveling waves as transient dynamics before spatio-temporal chaos sets in. However, a small-amplitude heterogeneous perturbation around the homogeneous steady state reduces the duration of the transient dynamics, and the system quickly enters the spatio-temporal chaotic regime. For $\delta$ significantly less than $\delta_H$ (see Fig.~\ref{fig:weak_allee_delta_0dot1}) we find only periodic traveling waves, which indicates that the continuous alteration of population patches mimics the temporal dynamics of large-amplitude oscillations.
Consideration of the time scale difference in the growth rates of prey and predator has some stabilizing effect on the spatio-temporal pattern formation scenario. On one hand, it increases the size of the coexisting population patches over the domain; on the other hand, it drives the spatio-temporal chaotic pattern to periodic or quasi-periodic oscillatory dynamics, as shown in Fig.~\ref{fig:IC2_delta0.3_eps0dot1}. One prominent feature visible from the numerical simulations is that the spatio-temporal chaotic pattern engulfs the entire domain of size $900\times900$ by $t=1700$ for $\varepsilon=1$, whereas it takes quite a long time for $\varepsilon=0.1$. The pattern obtained for $\varepsilon=0.1$ in Fig.~\ref{fig:IC2_delta0.3_eps0dot1}(b) is an interacting spiral but not chaotic, as it does not show any sensitivity to the initial condition. The time evolution of the average prey-predator density changes from chaotic to quasi-periodic oscillation with the decrease in the magnitude of $\varepsilon$, as shown in the lower panel of Fig.~\ref{fig:IC2_delta0.3_eps0dot1}.
In this work, we have considered a prey-predator model with a specialist predator, in which the consumption of prey by the predator follows a prey-dependent functional response. As a result, the periodic solution arising through the Hopf instability is stable and there is no possibility of a global bifurcation through which one or more species can collapse. The large-amplitude oscillatory coexistence obtained from the temporal model changes to a periodic traveling wave once the individuals are assumed to be distributed heterogeneously over their habitat. The difference in the time scales for the growth of resource and consumer leads to the establishment of the species over larger patches, and the speed of the invasive wave decreases with decreasing $\varepsilon$. The irregularity of population patches (spatio-temporal chaotic patterns) as a part of successful invasion is observed for parameter values close to the Hopf bifurcation threshold and $\varepsilon$ close to or equal to 1. A decrease in the magnitude of $\varepsilon$ reduces the irregular oscillations, but the duration of the transient oscillations is enhanced. The study of spatio-temporal pattern formation with a difference in time scales in the context of ecological systems is quite unexplored in the literature. This kind of study can provide better insight into the establishment of invasive species. More realistic phenomena could be captured by considering a long food chain model with multiple time scales, or a two-species model with a generalist predator and a predator-dependent functional response, which we will study in future work.
\appendix
\section*{Appendix A} \label{App:GSPT}
Here we follow the geometric singular perturbation technique, as given by Fenichel \cite{Fenichel79}, to find the analytical expression for the locally perturbed invariant manifold $C^1_{\varepsilon}$. Since $v=q(u,\varepsilon)$, the invariance condition gives $$\dfrac{dv}{dt} = \dfrac{dq(u,\varepsilon)}{du}\dfrac{du}{dt}.$$ Using the explicit expressions for $\dfrac{du}{dt}$ and $\dfrac{dv}{dt}$ from (\ref{eq:temp_weak_fast}) we get
\begin{equation} \label{eq:invariance_condition}
\begin{aligned}
\varepsilon q(u,\varepsilon)(u(1-\alpha \delta)-\delta) = u \dfrac{dq(u,\varepsilon)}{du}(\gamma (1-u)(u+\beta)(1+\alpha u)-q(u,\varepsilon)).
\end{aligned}
\end{equation}
Substituting the asymptotic expansion of $q(u,\varepsilon)$ from (\ref{eq:invariant-manifold}) and assuming $u\ne0$, $\dot{q_0}(u)\ne0$, we equate the $\varepsilon$-free terms on both sides to obtain
\begin{equation}\label{eq:q0}
\begin{aligned}
q_0(u) &= \gamma (1-u)(u+\beta)(1+\alpha u),\\
\end{aligned}
\end{equation}
which is exactly the critical manifold. Now equating the coefficients of $\varepsilon$ from both sides of (\ref{eq:invariance_condition}) we get
\begin{equation}\label{eq:q1}
\begin{aligned}
q_1(u) = \dfrac{q_0(u)(u(1-\alpha \delta)-\delta)}{-u\dot{q}_0(u)}.
\end{aligned}
\end{equation}
Similarly we obtain $q_2(u)$ by equating the coefficients of $\varepsilon^2,$
\begin{equation}\label{eq:q2}
\begin{aligned}
q_2(u) = \dfrac{q_1(u)(u(1-\alpha \delta)-\delta)+u q_1(u)\dot{q}_1(u)}{-u\dot{q}_0(u)}.
\end{aligned}
\end{equation}
Proceeding as above, we find $q_r(u),\ r=3,4,\cdots,$ by equating the coefficients of $\varepsilon^r$ in (\ref{eq:invariance_condition}).
Therefore the second-order approximation of the perturbed invariant manifold is given by
$$q(u,\varepsilon)=q_0(u)+\varepsilon q_1(u)+\varepsilon^2 q_2(u),$$ where $q_0,q_1,q_2$ are given in (\ref{eq:q0})-(\ref{eq:q2}).
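For illustration, the following short Python (sympy) sketch, which is not part of the derivation above, recovers $q_1$ and $q_2$ by expanding the invariance condition (\ref{eq:invariance_condition}) in powers of $\varepsilon$ and checks them against the closed forms (\ref{eq:q1}) and (\ref{eq:q2}); both printed differences simplify to zero.
\begin{verbatim}
import sympy as sp

u, eps, alpha, beta, gamma, delta = sp.symbols(
    'u epsilon alpha beta gamma delta', positive=True)

q0 = gamma*(1 - u)*(u + beta)*(1 + alpha*u)           # critical manifold
q1 = sp.Function('q1')(u)                             # unknown corrections
q2 = sp.Function('q2')(u)
q = q0 + eps*q1 + eps**2*q2                           # truncated expansion

# invariance condition: eps*q*(u*(1-alpha*delta)-delta) = u*q'*(q0 - q)
res = sp.expand(eps*q*(u*(1 - alpha*delta) - delta)
                - u*sp.diff(q, u)*(q0 - q))

q1_sol = sp.solve(res.coeff(eps, 1), q1)[0]           # order epsilon
q1_paper = q0*(u*(1 - alpha*delta) - delta)/(-u*sp.diff(q0, u))
print(sp.simplify(q1_sol - q1_paper))                 # 0

res2 = sp.expand(res.subs(q1, q1_sol).doit())
q2_sol = sp.solve(res2.coeff(eps, 2), q2)[0]          # order epsilon^2
q2_paper = (q1_paper*(u*(1 - alpha*delta) - delta)
            + u*q1_paper*sp.diff(q1_paper, u))/(-u*sp.diff(q0, u))
print(sp.simplify(q2_sol - q2_paper))                 # 0
\end{verbatim}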
\section*{Appendix B} \label{App:singular_Hopf}
We apply the blow-up transformation to the slow-fast normal form (\ref{eq:slow-fast_normal_form}), where $$h_1(U,V)=u_*+U, \ \ h_3(U,V)=0,\ \ h_5(U,V)=(v_*+V)(1+\alpha u_*)+Uv_*\alpha,$$ $$h_2(U,V)=-\gamma(-1+6u_*^2\alpha+3u_*(1+\alpha(\beta-1))+\beta-\alpha \beta)-U\gamma(1+\alpha(4u_*+\beta-1)),$$
$$ h_4(U,V)=(v_*+V)(1-\alpha\delta_*),\ \
h_6(U,V)=u_*-(1+u_*\alpha)\delta_*.$$
On the chart $K_2$ we have $\bar{\varepsilon}=1$, so the blow-up transformation defined in (\ref{eq:blow-up-map}) reduces to
\begin{equation}\label{eq:blow-up_K2}
\bar{r} = \sqrt{\varepsilon},\ U=\sqrt{\varepsilon}\bar{U},\ V=\varepsilon \bar{V},\ \lambda=\sqrt{\varepsilon}\bar{\lambda}.
\end{equation}
Using the transformation (\ref{eq:blow-up_K2}) and removing the overbars, we can write the system (\ref{eq:desingularized_system_K2}) as
\begin{equation}\label{eq:desingularized}
\begin{aligned}
U_t &= -b_1V+b_2U^2+\sqrt{\varepsilon}\mathcal{G}_1(U,V) + O(\sqrt{\varepsilon}(\lambda+\sqrt{\varepsilon})),\\
V_t &= b_3U-b_4\lambda+\sqrt{\varepsilon}\mathcal{G}_2(U,V) + O(\sqrt{\varepsilon}(\lambda+\sqrt{\varepsilon})),\\
\end{aligned}
\end{equation}
where
\begin{equation} \label{eq:b_i's}
\begin{aligned}
\ & b_1 =u_*,\
\ b_2 =-\gamma(-1+6u_*^2\alpha+3u_*(1+\alpha(\beta-1))+\beta-\alpha \beta),\\
\ & b_3 = v_*(1-\alpha \delta_*),\
\ b_4 = v_*(1+\alpha u_*),
\end{aligned}
\end{equation}
and
\begin{equation} \label{eq:G1 and G2}
\begin{aligned}
\mathcal{G}_1(U,V) = a_1U-a_2UV+a_3U^3,\ \ \mathcal{G}_2(U,V) = a_4U^2+a_5V.
\end{aligned}
\end{equation}
Let the equilibrium point of the system (\ref{eq:desingularized}) be $(U_e,V_e)$, where $U_e=\dfrac{b_4\lambda}{b_3}+O(2)$ and $V_e=O(2)$ with $O(2):=O(\lambda^2,\lambda\sqrt{\varepsilon},\varepsilon)$. Linearizing the system about this equilibrium point, we obtain the Jacobian matrix
\begin{equation}
\mathcal{J}:=
\begin{pmatrix}
2U_eb_2+a_1 \sqrt{\varepsilon}+O(2)&-b_1+O(2)\\
b_3+O(2)&a_5\sqrt{\varepsilon}+O(2)
\end{pmatrix}
\end{equation}
At the Hopf bifurcation we have $\mathrm{Trace}\,\mathcal{J}=0$, which implies
\begin{equation}
\dfrac{2b_2b_4\lambda}{b_3}+\sqrt{\varepsilon}(a_1+a_5)+O(2)=0,
\end{equation}
and applying the blow-down map $\lambda_H=\lambda\sqrt{\varepsilon}$ we get the singular Hopf bifurcation curve $\lambda_H(\sqrt{\varepsilon})$ for the slow-fast normal form (\ref{eq:slow-fast_normal_form}) as
\begin{equation}
\lambda_H(\sqrt{\varepsilon})=-\dfrac{b_3(a_1+a_5)}{2b_2b_4}\varepsilon+O(\varepsilon^{3/2}).
\end{equation}
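As a quick consistency check, the leading-order trace condition can be solved symbolically and pushed through the blow-down map; the short sketch below (illustrative only) reproduces the coefficient of $\varepsilon$ in $\lambda_H(\sqrt{\varepsilon})$.
\begin{verbatim}
import sympy as sp

lam, eps = sp.symbols('lambda epsilon', positive=True)
a1, a5, b2, b3, b4 = sp.symbols('a1 a5 b2 b3 b4', positive=True)

# leading-order trace condition: 2*b2*b4*lam/b3 + sqrt(eps)*(a1 + a5) = 0
lam_sol = sp.solve(2*b2*b4*lam/b3 + sp.sqrt(eps)*(a1 + a5), lam)[0]
lam_H = sp.simplify(lam_sol*sp.sqrt(eps))      # blow-down: lam_H = lam*sqrt(eps)
print(lam_H)                                   # -b3*epsilon*(a1 + a5)/(2*b2*b4)
\end{verbatim}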
\section*{Appendix C} \label{App:maximal_canard}
Here we prove the existence of the maximal canard curve and give an analytical expression for it. To that end, we first prove the following proposition. In the chart $K_2$ of the blow-up space we consider the desingularized system (\ref{eq:desingularized}) as
\begin{equation}\label{eq:desingularized_r}
\begin{aligned}
U_t &= -b_1V+b_2U^2+r\mathcal{G}_1(U,V) + O(\lambda r,r^2),\\
V_t &= b_3U-b_4\lambda+r\mathcal{G}_2(U,V) + O(\lambda r,r^2),\\
r_t &=0,\\
\lambda_t &=0,
\end{aligned}
\end{equation}
where $b_1,\ b_2,\ b_3,\ b_4,\ \mathcal{G}_1$ and $\mathcal{G}_2$ are given in (\ref{eq:b_i's}) and (\ref{eq:G1 and G2}). The dynamics of the system on the sphere is obtained by putting $r=0$ in (\ref{eq:desingularized_r}) for different values of $\lambda$ in the vicinity of $0$. Thus, taking $r=0,\ \lambda=0$, the above system is integrable and we have
\begin{equation} \label{eq:riccati eqn}
\begin{aligned}
U_t &= -b_1V+b_2U^2,\\
V_t &= b_3U.
\end{aligned}
\end{equation}
This is a Riccati equation and the solution of this equation helps in proving our main theorem. \\
\textbf{Proposition 1} The solution of the system (\ref{eq:riccati eqn}) is given by $H(U,V) = c,$ where $$H(U,V) = e^{-\dfrac{2b_2}{b_3}V}\Big(\dfrac{b_3}{2}U^2-\dfrac{b_1b_3^2}{4b_2^2}-\dfrac{b_1b_3}{2b_2}V\Big)$$ and
$$\dfrac{dU}{dt} = -e^{\dfrac{2b_2}{b_3}V}\dfrac{\partial H}{\partial V},$$
\begin{equation} \label{eq:H(u,v)}
\begin{aligned}
\dfrac{dV}{dt} = e^{\dfrac{2b_2}{b_3}V}\dfrac{\partial H}{\partial U}.
\end{aligned}
\end{equation}
\begin{proof}
We can write the above Riccati system (\ref{eq:riccati eqn}) as
\begin{equation}
\dfrac{dV}{dU}=\dfrac{b_3U}{-b_1V+b_2U^2}
\end{equation}
Setting $W=U^2$, this becomes the linear equation $\dfrac{dW}{dV}-\dfrac{2b_2}{b_3}W=-\dfrac{2b_1}{b_3}V$, with integrating factor $e^{-\dfrac{2b_2}{b_3}V}$. Multiplying by the integrating factor and integrating, we get
$$e^{-\dfrac{2b_2}{b_3}V} \Big(U^2-\dfrac{b_1}{b_2}V-\dfrac{b_1b_3}{2b_2^2}\Big)=c_0.$$
Multiplying by $\dfrac{b_3}{2}$, we obtain the solution of the system (\ref{eq:riccati eqn}) as
$$e^{-\dfrac{2b_2}{b_3}V}\Big(\dfrac{b_3}{2}U^2-\dfrac{b_1b_3^2}{4b_2^2}-\dfrac{b_1b_3}{2b_2}V\Big) = c,$$ where $c=c_0\dfrac{b_3}{2}$ is a constant. The solution determined by $c=0$ is a parabola of the form $$U^2=\dfrac{b_1b_3}{2b_2^2}+\dfrac{b_1}{b_2}V.$$
\end{proof}
\textit{Proof of theorem 4.2:}
We write the solution of the system (\ref{eq:riccati eqn}) in the parametric form
\begin{equation}\label{eq:parametric_sol}
\eta(t) = (U(t),V(t))= \Big(t,\dfrac{b_2}{b_1}t^2-\dfrac{b_3}{2b_2}\Big),\ t\in \mathbb{R}
\end{equation}
For $\varepsilon=0$ the attracting and repelling submanifolds of the critical manifold $\mathcal{M}^1_0$ intersect along the equator of the blow-up space $S^3$. By Fenichel's theory, for $\varepsilon>0$ there exist perturbed invariant attracting and repelling submanifolds $\mathcal{M}_{\varepsilon}^{1,a}$ and $\mathcal{M}_{\varepsilon}^{1,r}$. Along the curve (\ref{eq:parametric_sol}), the attracting submanifold $\mathcal{M}_{\varepsilon}^{1,a}$ and the repelling submanifold $\mathcal{M}_{\varepsilon}^{1,r}$ in the blow-up space intersect, and the solution trajectory lying in that intersection is called the maximal canard. We use a Melnikov function to calculate the distance between these invariant manifolds \cite{Krupa01A}, \cite{Kuehn15}, which is given by
\begin{equation}\label{eq:Melnikov_distance}
D_{r,\lambda} = d_r r + d_{\lambda} \lambda + O(r^2),
\end{equation}
where
\begin{equation}\label{eq:melnikov_formula}
\begin{aligned}
d_r = \int_{-\infty}^{\infty}\nabla H(\eta(t))^T\mathcal{G}(\eta(t))dt,\\
d_{\lambda} = \int_{-\infty}^{\infty}\nabla H(\eta(t))^T\begin{pmatrix}
0\\-b_4
\end{pmatrix}dt,
\end{aligned}
\end{equation}
where $\mathcal{G},\ H$ and $b_4$ are defined in (\ref{eq:G1 and G2}), (\ref{eq:H(u,v)}) and (\ref{eq:b_i's}), respectively. The distance between the submanifolds $\mathcal{M}_{\varepsilon}^{1,a}$ and $\mathcal{M}_{\varepsilon}^{1,r}$ is given by eq.~(\ref{eq:Melnikov_distance}), and since the maximal canard lies in the intersection of these manifolds, we must have $D_{r,\lambda}=0$.
We now calculate the Melnikov-type integrals $d_r$ and $d_{\lambda}$ using (\ref{eq:parametric_sol}) and (\ref{eq:melnikov_formula}).
Therefore,
\begin{equation}
\begin{aligned}
d_r &= \int_{-\infty}^{\infty}\Big[(a_1U-a_2UV+a_3U^3 )\dfrac{\partial H(\eta(t))}{\partial U}+(a_4U^2+a_5V)\dfrac{\partial H(\eta(t))}{\partial V}\Big]dt\\
&= \int_{-\infty}^{\infty}e^{-\dfrac{2b_2}{b_3}V}\Big[(a_1U-a_2UV+a_3U^3 )b_3U+(a_4U^2+a_5V)(b_1V-b_2U^2)\Big]dt\\
&= e\int_{-\infty}^{\infty}e^{-A_4t^2}\Big(A_1t^4+A_2t^2+A_3\Big)dt
\end{aligned}
\end{equation}
where,
$$A_1 = a_3b_3-\dfrac{a_2b_2b_3}{b_1},\ A_2 = a_1b_3+ \dfrac{a_2b_3^2}{2b_2}-\dfrac{a_4b_1b_3}{2b_2}-\dfrac{a_5b_3}{2},\ A_3 = \dfrac{a_5b_1b_3^2}{4b_2^2},\ A_4 = \dfrac{2b_2^2}{b_1b_3}.$$
Now, using the Gaussian moment identities $\int_{-\infty}^{\infty}t^2e^{-A_4t^2}\,dt=\frac{1}{2A_4}\int_{-\infty}^{\infty}e^{-A_4t^2}\,dt$ and $\int_{-\infty}^{\infty}t^4e^{-A_4t^2}\,dt=\frac{3}{4A_4^2}\int_{-\infty}^{\infty}e^{-A_4t^2}\,dt$, obtained by integration by parts, we get
\begin{equation}
\begin{aligned}
d_r = e\Big(\dfrac{3A_1}{4A_4^2}+\dfrac{A_2}{2A_4}+A_3\Big)\int_{-\infty}^{\infty}e^{-A_4t^2}dt,
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
d_{\lambda} &= -\int_{-\infty}^{\infty}b_4\dfrac{\partial H}{\partial V}dt\\
&= b_4\int_{-\infty}^{\infty}e^{-\dfrac{2b_2}{b_3}V}(-b_1V+b_2U^2)dt\\
&= e A_5\int_{-\infty}^{\infty}e^{-A_4t^2}dt,
\end{aligned}
\end{equation}
where $A_5 = \dfrac{b_1b_3b_4}{2b_2}$. Since $d_{\lambda}\ne0$, by the implicit function theorem we can solve explicitly for $\lambda$ from (\ref{eq:Melnikov_distance}):
\begin{equation}
\begin{aligned}
\lambda(r) &=-\dfrac{d_r}{d_\lambda}r + O(r^2)
= -\dfrac{1}{A_5}\Big(\dfrac{3A_1}{4A_4^2}+\dfrac{A_2}{2A_4}+A_3\Big)r + O(r^2).
\end{aligned}
\end{equation}
Now, using the blow-down map $\lambda_c=\lambda\sqrt{\varepsilon}$, we obtain the maximal canard curve for the slow-fast normal form (\ref{eq:slow-fast_normal_form}) as
\begin{equation}
\begin{aligned}
\lambda_c(\sqrt{\varepsilon}) &= -\dfrac{1}{A_5}\Big(\dfrac{3A_1}{4A_4^2}+\dfrac{A_2}{2A_4}+A_3\Big)\varepsilon + O(\varepsilon^{3/2}).
\end{aligned}
\end{equation}
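The Melnikov-type computation above can be checked symbolically. The following sketch (illustrative; the $a_i$ are treated as free real parameters and the $b_i$ as positive parameters) evaluates $d_r$ and $d_\lambda$ along the parametrisation (\ref{eq:parametric_sol}) and confirms the coefficient $-\frac{1}{A_5}\Big(\frac{3A_1}{4A_4^2}+\frac{A_2}{2A_4}+A_3\Big)$ appearing in the maximal canard curve.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', real=True)
a1, a2, a3, a4, a5 = sp.symbols('a1 a2 a3 a4 a5', real=True)
b1, b2, b3, b4 = sp.symbols('b1 b2 b3 b4', positive=True)

U, V = t, b2/b1*t**2 - b3/(2*b2)                      # eta(t)
w = sp.exp(-2*b2/b3*V)                                 # weight exp(-2 b2 V / b3)

G1 = a1*U - a2*U*V + a3*U**3
G2 = a4*U**2 + a5*V
dHdU = w*b3*U
dHdV = w*(b1*V - b2*U**2)

d_r   = sp.integrate(sp.expand(G1*dHdU + G2*dHdV), (t, -sp.oo, sp.oo))
d_lam = sp.integrate(-b4*dHdV, (t, -sp.oo, sp.oo))

A1 = a3*b3 - a2*b2*b3/b1
A2 = a1*b3 + a2*b3**2/(2*b2) - a4*b1*b3/(2*b2) - a5*b3/2
A3 = a5*b1*b3**2/(4*b2**2)
A4 = 2*b2**2/(b1*b3)
A5 = b1*b3*b4/(2*b2)
gauss = sp.sqrt(sp.pi/A4)                              # integral of exp(-A4 t^2)

print(sp.simplify(d_r - sp.E*(3*A1/(4*A4**2) + A2/(2*A4) + A3)*gauss))  # 0
print(sp.simplify(d_lam - sp.E*A5*gauss))                                # 0
print(sp.simplify(-d_r/d_lam))         # coefficient of r in lambda(r)
\end{verbatim}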
\section*{Appendix D} \label{App:Entry-exit}
Here we prove the existence of a unique attracting limit cycle, called a relaxation oscillation. To study the dynamics of the system (\ref{eq:entry_exit}) we define two sections of the flow as
\begin{equation*}
\begin{aligned}
& \Delta^{in}=\{(u_+,v):u_+\ll u_{max},\ v\in (v_1-\rho,v_1+\rho)\},\\
& \Delta^{out}=\{(u_+,v):u_+\ll u_{max},\ v\in (v_0-\rho^2,v_0+\rho^2)\},
\end{aligned}
\end{equation*}
where $u_{max},\ v_1,\ v_0$ are defined in subsection \ref{subsec:entry_exit} and $\rho$ is a sufficiently small positive number.
\noindent Let us define a return map $\Pi:\Delta^{in}\rightarrow \Delta^{in}$ as a composition of two maps $$\Phi:\Delta^{in}\rightarrow\Delta^{out},\ \ \Psi:\Delta^{out}\rightarrow \Delta^{in},$$ such that $\Pi = \Psi \circ \Phi$. Let us fix $\varepsilon>0$ and take a point $(u_+,v_+)$ on the section $\Delta^{in}$. Now we consider a trajectory of the system (\ref{eq:entry_exit}) starting from the initial point $(u_+,v_+)$. From the analysis of the entry-exit function we can say that this trajectory will be attracted to $V_+$ and will leave $V_-$ at the point $(0,p(v_+)),$ where $p$ is the entry-exit function. The trajectory then jumps into the section $\Delta^{out}$ at the point $(u_+,p(v_+)).$ Thus, the map $\Phi$ is defined with the help of the entry-exit function as $\Phi(u_+,v_+)=(u_+,p(v_+)).$\\
Now to study the map $\Psi$ we consider two trajectories $\gamma_{\varepsilon}^1, \gamma_{\varepsilon}^2$ starting from the section $\Delta^{out}$. These trajectories get attracted toward $C_{\varepsilon}^{1,a}$ where the slow flow is given by $\dfrac{du}{d\tau}=\dfrac{g(u,q(u,\varepsilon))}{\dot{q}(u,\varepsilon)}.$
\noindent They follow the slow perturbed manifold until the vicinity of the fold point, where they contract exponentially toward each other \cite{Wang19AML} and jump into $\Delta^{in}.$ From Theorem 2.1 of \cite{Krupa01A} we have that the map $\Pi$ is a contraction. Using the contraction mapping theorem, we conclude that $\Pi$ has a unique fixed point, which gives rise to a unique relaxation oscillation cycle $\gamma_{\varepsilon}$. Further, from Fenichel's theory we infer that $\gamma_{\varepsilon}$ converges to $\gamma_0$ as $\varepsilon \rightarrow 0.$
\noindent For the parameter values $\alpha=0.5,\ \beta=0.2,\ \delta=0.3,$ the unique attracting cycle $\gamma_{\varepsilon}$ for $\varepsilon=0.1$ is shown below; it converges to $\gamma_0$ as $\varepsilon \rightarrow 0.$
\begin{figure}
\caption{Singular trajectory $\gamma_0$ (blue) and the unique attracting limit cycle $\gamma_{\varepsilon}$ for $\varepsilon=0.1$.}
\label{fig:entry-exit}
\end{figure}
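For readers who wish to reproduce such a cycle numerically, the sketch below integrates the fast-slow system that is consistent with the invariance condition (\ref{eq:invariance_condition}) of Appendix A, namely $\dot{u}=u\big(\gamma(1-u)(u+\beta)(1+\alpha u)-v\big)$, $\dot{v}=\varepsilon v\big(u(1-\alpha\delta)-\delta\big)$; the value $\gamma=1$ used here is an assumption made purely for illustration.
\begin{verbatim}
# Illustrative only: vector field read off from the invariance condition of
# Appendix A; gamma = 1 is an assumed value chosen for illustration.
from scipy.integrate import solve_ivp

alpha, beta, delta, gamma, eps = 0.5, 0.2, 0.3, 1.0, 0.1

def rhs(t, y):
    u, v = y
    du = u*(gamma*(1 - u)*(u + beta)*(1 + alpha*u) - v)
    dv = eps*v*(u*(1 - alpha*delta) - delta)
    return [du, dv]

sol = solve_ivp(rhs, (0.0, 400.0), [0.8, 0.2], max_step=0.05,
                rtol=1e-8, atol=1e-10)
u_tail = sol.y[0][sol.t > 200.0]            # discard the transient
print("u range on the attracting cycle:", u_tail.min(), u_tail.max())
\end{verbatim}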
\end{document}
\begin{document}
\begin{abstract}
We consider partial matchings, which are finite graphs consisting of edges and vertices of degree zero or one, and we study transformations between two states of partial matchings. We introduce a method of presenting such transformations, based on the notion of the lattice presentation of a partial matching and the lattice polytope associated with a pair of lattice presentations, and we investigate transformations with minimal area.
\end{abstract}
\title{Transformations of partial matchings
}
\section{Introduction}\label{sec1}
A {\it partial matching}, or a {\it chord diagram},
is a finite graph consisting of edges and vertices of degree zero or one, with the vertex set $\{1,2,\ldots, m\}$ for a positive integer $m$ \cite{Reidys}. A partial matching is used to present secondary structures of polymeric molecules such as RNAs (see Section \ref{sec2-1}). The aim of this paper is to establish mathematical basics for discussing partial matchings, from a viewpoint of lattice presentations.
In this paper, we consider transformations between two states of partial matchings.
In Section \ref{sec2}, we introduce the lattice presentation of a partial matching, and we clarify correspondence between structures of chord diagrams and lattice presentations. In Section \ref{sec3}, we introduce the notion of the lattice polytope associated with a pair of partial matchings, and we discuss the equivalence of lattice polytopes. In Section \ref{sec4}, we introduce transformations of lattice presentations and lattice polytopes, and the area of a transformation. We give a lower estimate of the area of a transformation by using the area of a lattice polytope, and in certain cases we construct transformations with minimal area (Theorem \ref{thm3-10}). In Section \ref{sec-red}, we introduce the notion of the reduced graph of a lattice polytope, and we show that there exists a transformation of a lattice polytope with minimal area if and only if its reduced graph is an empty graph (Theorem \ref{thm-red}). Section \ref{sec4-3} is devoted to showing lemmas and propositions. In Section \ref{sec5}, we consider simple connected lattice polytopes. We show that when a lattice polytope $P$ is connected and simple, a certain division of $P$ into $n$ rectangles presents a transformation with minimal area, and in certain cases, any transformation with minimal area is presented by such a division of $P$ (Corollary \ref{cor5-2}).
\section{Partial matchings and our motivation}\label{sec2-1}
Partial matchings present secondary structures of polymeric molecules such as RNAs. An RNA is a single-strand chain of simple units of nucleotides called nucleobases, with a backbone with an orientation from 5'-end to 3'-end, which folds back on itself. The nucleobases consist of 4 types,
guanine (G), uracil (U), adenine (A), and cytosine (C), and there are interactions between nucleobases: adenine and uracil, guanine and cytosine, which form A-U, G-C
base pairs. The secondary structure of an RNA is the information of base pairs. For an RNA strand, regard each nucleobase as a vertex, and label the vertices by integers $1,2,\ldots$ from 5'-end to 3'-end, and connect two vertices forming each base pair with an edge. Then we have a chord diagram presenting the secondary structure.
Chord diagrams have been studied to investigate RNA secondary structures, which consist of nesting structures formed by \lq\lq parallel'' bonds, and pseudo-knot structures containing \lq\lq cross-serial'' bonds. In particular, a special kind of structure called a \lq\lq $k$-noncrossing structure'' plays an important role and enumerations of $k$-noncrossing structures are investigated \cite{CDDSY, JQR, JR, Reidys}.
Our motivation for this research is to give a new method of investigating RNA secondary structures.
Partial matchings are used to predict the most probable forms of RNA secondary structures with optimal free energy, by means of dynamic programming, calculating the energy for every possible state of partial matchings and investigating one partial matching at a time; there are many references, for example see \cite{PM, Reidys, Tinoco-al, Zuker-Sankoff}.
In this paper, from a different viewpoint, we
focus on investigating paths between two fixed states of partial matchings.
\section{Lattice presentations}\label{sec2}
In this section, we introduce the notion of the lattice presentation of a partial matching, and we see the correspondence between structures of chord diagrams and lattice presentations.
We recall the precise definition of a partial matching, or a chord diagram.
A {\it partial matching}, or a {\it chord diagram},
is a finite graph consisting of edges and vertices of degree zero or one, with the vertex set $\{1,2,\ldots, m\}$ for a positive integer $m$.
A chord diagram is represented by drawing the vertices $1,2,\ldots, m$ in the horizontal line and the edges in the upper half plane. We denote by $(x,y)$ ($x, y \in \{1,2,\ldots,m\}$, $x < y$) the edge connecting the vertices $x$ and $y$ and call it an {\it arc}, and we call a vertex with degree zero an {\it isolated vertex}. In this paper, we use the term \lq\lq partial matching'' when we consider its lattice presentation, and the term \lq\lq chord diagram'' when we consider the chord diagram itself.
For the $xy$-plane $\mathbb{R}^2$,
put $C=\{(x,y)\in \mathbb{R}^2 \mid x, y>0\}$, and
we denote by $C_+$ (respectively $C_-$) the half plane $\{(x,y)\in C \mid x\leq y\}$ (respectively $\{(x,y)\in C \mid x\geq y\}$). We call a point $(x,y)$ of $C$ such that $x$ and $y$ are integers a {\it lattice point}.
\begin{definition}
For a partial matching $\delta$,
the {\it lattice presentation} $\Delta$ of $\delta$ is a set of lattice points in $C \backslash \partial C_+$ such that each element $(x,y)$ of $\Delta$ presents an arc $(x,y)$ when $x<y$ (respectively an arc $(y,x)$ when $y<x$) of $\delta$. Note that for each arc $(x,y)$ of $\delta$, we have two points $(x,y) \in C_+$ and $(y,x) \in C_-$ of $\Delta$.
\end{definition}
\begin{example}
The left figure of Figure \ref{g-1} is a chord diagram $\delta$ with five arcs $(1, 5)$, $(2,4)$, $(3,7)$, $(6,8)$, $(10, 12)$, and two isolated vertices $9$ and $11$, and the right figure of Figure \ref{g-1} is the lattice presentation of $\delta$.
\end{example}
\begin{figure}
\caption{A chord diagram (left) and the lattice presentation (right).}
\label{g-1}
\end{figure}
Partial matchings are investigated by considering special structures called $k$-nesting and $k$-noncrossing; for example, see \cite{CDDSY, JQR, JR, Reidys}. We see the correspondence of a partial matching with such a structure and the lattice presentation.
By definition of lattice presentation of a partial matching, we have the following propositions.
For two points $v_1, v_2$ of $C$, we denote by $R(v_1, v_2)$ the rectangle in $C$ whose diagonal vertices are $v_1$ and $v_2$.
For a point $v=(x,y) \in C_+$, put $v^*=(y,x) \in C_-$.
Two arcs $(x_1, y_1)$ and $(x_2, y_2)$ of a chord diagram are said to be {\it separated} if the intervals $[x_1, y_1]$ and $[x_2, y_2]$ in $\mathbb{R}$ are disjoint. Two arcs are said to be {\it non-separated} if they are not separated.
\begin{proposition}\label{prop2-3}
For a lattice presentation $\Delta=\{ v_1, v_2, v_1^*, v_2^* \}$ with $v_j \in C_+, v_j^* \in C_-$ $(j=1,2)$, the following conditions are mutually equivalent (see Figure \ref{fig-2}).
\begin{enumerate}[$(1)$]
\item
A lattice presentation $\Delta$ presents separated arcs.
\item
We have $R(v_1, v_2) \not\subset C_+$.
\item
We have $R(v_1, v_2) \cap R(v_1^*,v_2^*) \neq \emptyset$.
\end{enumerate}
\end{proposition}
\begin{proof}
Put $v_1=(x_1, y_1)$ and $v_2=(x_2, y_2)$. Let us assume $x_1<x_2$. Then,
two arcs $(x_1, y_1)$ and $(x_2, y_2)$ are separated if and only if $y_1<x_2$. Let us denote the vertices of the rectangle $R(v_1, v_2)$ by $v_1=(x_1, y_1), u_1=(x_1, y_2), u_2=(x_2, y_1)$, and $v_2=(x_2, y_2)$. Since $x_j<y_j$ ($j=1,2$), with the assumption $x_1<x_2$, we see that $v_1, u_1, v_2 \in C_+$, and $y_1<x_2$ if and only if $u_2=(x_2, y_1) \in C_-$, which is equivalent to the condition that $R(v_1, v_2) \not\subset C_+$. Since $R(v_1^*, v_2^*)$ is the mirror reflection of $R(v_1, v_2)$ with respect to the line $\partial C_+$, and $v_1$ and $v_2$ are not in $\partial C_+$, $R(v_1, v_2) \not\subset C_+$ if and only if $R(v_1, v_2) \cap R(v_1^*,v_2^*) \neq \emptyset$.
\end{proof}
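The criterion of Proposition \ref{prop2-3} is straightforward to check computationally; the small Python sketch below (for illustration) tests whether two arcs are separated both directly from the intervals and via the rectangle condition $R(v_1, v_2)\not\subset C_+$, and the two tests agree on the printed examples.
\begin{verbatim}
def separated(arc1, arc2):
    (x1, y1), (x2, y2) = sorted([arc1, arc2])     # arcs (x, y) with x < y
    return y1 < x2                                # the intervals are disjoint

def rectangle_not_in_C_plus(v1, v2):
    corners = [(v1[0], v1[1]), (v1[0], v2[1]),
               (v2[0], v1[1]), (v2[0], v2[1])]
    return any(x > y for (x, y) in corners)       # a corner lies below x = y

pairs = [((1, 5), (7, 10)),   # separated
         ((2, 4), (1, 5)),    # nested
         ((1, 5), (3, 7))]    # crossing
for a, b in pairs:
    print(a, b, separated(a, b), rectangle_not_in_C_plus(a, b))
\end{verbatim}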
\begin{figure}
\caption{Separated arcs of a chord diagram and the lattice presentation, where we shadow rectangles $R(v_1, v_2)$ and $R(v_1^*, v_2^*)$. For this figure, $v_1=(1,5)$ and $v_2=(7,10)$.}
\label{fig-2}
\end{figure}
For a rectangle $R(v_1, v_2)$, we say it is {\it of type I} (respectively {\it of type II, III, IV}) if the vector from $v_1$ to $v_2$, $v_2-v_1\in \mathbb{R}^2$ is in the first (respectively second, third, fourth) quadrant.
Let $k$ be a positive integer.
A chord diagram is called {\it $k$-nesting} if its arcs consist of $k$ distinct arcs $(x_1, y_1)$, $(x_2, y_2)$, $\ldots$, $(x_k, y_k)$ such that
\begin{equation}\label{nest}
x_1<x_2<\cdots<x_k<y_k<y_{k-1}<\cdots<y_1,
\end{equation}
see Figure \ref{fig-3}.
\begin{proposition}
For a lattice presentation $\Delta=\{ v_1, \ldots, v_k, v_1^*, \dots, v_k^* \}$ with $v_j \in C_+, v_j^* \in C_-$ $(j=1,\dots, k)$, the following conditions are mutually equivalent.
\begin{enumerate}[$(1)$]
\item
A lattice presentation $\Delta$ presents a $k$-nesting chord diagram.
\item
Each rectangle $R(v_{i}, v_{j})$ is of type II or IV for $i, j=1,\dots, k, i \neq j$.
\item
Changing the indices if necessary, we have the following: $R(v_{j}, v_{j+1})$ is of type IV for each $j=1,\dots, k-1$.
\end{enumerate}
\end{proposition}
\begin{proof}
Put $v_j=(x_j, y_j)$ ($j=1,\dots, k$).
The relation (\ref{nest}) is equivalent to $x_{i}<x_{j}<y_{j}<y_{i}$ when $i<j$ (respectively $x_{j}<x_{i}<y_{i}<y_{j}$ when $i>j$). Since $v_i, v_j \in C_+$, this is equivalent to $x_{i}<x_{j}$ and $y_{j}<y_{i}$ when $i<j$ (respectively $x_{j}<x_{i}$ and $y_{i}<y_{j}$ when $i>j$), which is equivalent to the condition that $R(v_{i}, v_{j})$ is of type IV when $i<j$ (respectively of type II when $i>j$). Thus the conditions 1 and 2 are equivalent.
Again, the relation (\ref{nest}) is equivalent to $x_{j}<x_{j+1}<y_{j+1}<y_{j}$ for $j=1,\dots, k-1$, which is equivalent to the condition that $R(v_{j}, v_{j+1})$ is of type IV for each $j=1,\dots, k-1$.
Thus the conditions 1 and 3 are equivalent.
\end{proof}
\begin{figure}
\caption{A nesting chord diagram and the lattice presentation.}
\label{fig-3}
\end{figure}
A chord diagram is called {\it $k$-crossing} if its arcs consist of $k$ distinct arcs $(x_1, y_1), (x_2, y_2), \ldots, (x_k, y_k)$ such that
\begin{equation}\label{crossing}
x_1<x_2<\cdots<x_k<y_1<y_2<\cdots<y_k,
\end{equation}
see Figure \ref{fig-4}.
A chord diagram is called {\it $k$-noncrossing} if there exists no $k$-crossing subgraph.
\begin{proposition}\label{prop2-5}
For a lattice presentation $\Delta=\{ v_1, \ldots, v_k, v_1^*, \dots, v_k^* \}$ with $v_j \in C_+, v_j^* \in C_-$ $(j=1,\dots, k)$, the following conditions are mutually equivalent.
\begin{enumerate}[$(1)$]
\item
A lattice presentation $\Delta$ presents a $k$-crossing chord diagram.
\item
Each rectangle $R(v_{i}, v_{j})$ is of type I or III and contained in $C_+$ for $i, j=1,\dots, k, i \neq j$.
\item
Changing the indices if necessary, we have the following: $R (v_1, v_k)$ is contained in $C_+$ and $R(v_{j}, v_{j+1})$ is of type I for each $j=1,\dots, k-1$.
\end{enumerate}
\end{proposition}
\begin{proof}
Put $v_j=(x_j, y_j)$ ($j=1,\dots, k$).
The relation (\ref{crossing}) is equivalent to $x_{i}<x_{j}<y_{i}<y_{j}$ when $i<j$ (respectively $x_{j}<x_{i}<y_{j}<y_{i}$ when $i>j$).
This is equivalent to $x_{i}<x_{j}$, $y_{i}<y_{j}$, and $x_{j}<y_{i}$ when $i<j$ (respectively $x_{j}<x_{i}$, $y_{j}<y_{i}$, and $x_i<y_j$ when $i>j$). When $i<j$, $x_{i}<x_{j}$ and $y_{i}<y_{j}$ if and only if $R(v_{i}, v_{j})$ is of type I, and since $R(v_i, v_j)$ is a rectangle whose vertices are $(x_i, y_i), (x_i, y_j), (x_j, y_i)$ and $(x_j, y_j)$, with the condition $x_i<x_j$ and $y_i<y_j$, we see that $x_j<y_i$ if and only if $R(v_i, v_j)$ is contained in $C_+$. Thus, by the same argument, we see that the condition 1 is equivalent to the condition that $R(v_{i}, v_{j})$ is of type I when $i<j$ (respectively of type III when $i>j$) and contained in $C_+$. Thus the conditions 1 and 2 are equivalent.
Again, the relation (\ref{crossing}) is equivalent to $x_{j}<x_{j+1}$ and $y_{j}<y_{j+1}$ for $j=1,\dots, k-1$, and $x_k<y_1$. Then, by the same argument, we see that the conditions 1 and 3 are equivalent.
\end{proof}
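The rectangle-type criteria of the two propositions above can be implemented directly. The following illustrative sketch classifies the rectangle $R(v_i,v_j)$ by the quadrant of $v_j-v_i$ and uses condition $(3)$ of each proposition to detect nesting and crossing configurations of points in $C_+$.
\begin{verbatim}
def rect_type(v, w):                       # quadrant of w - v
    dx, dy = w[0] - v[0], w[1] - v[1]
    if dx > 0 and dy > 0: return "I"
    if dx < 0 and dy > 0: return "II"
    if dx < 0 and dy < 0: return "III"
    return "IV"

def is_nesting(points):                    # points of C_+, one per arc
    v = sorted(points)
    return all(rect_type(v[j], v[j+1]) == "IV" for j in range(len(v)-1))

def is_crossing(points):
    v = sorted(points)
    consecutive = all(rect_type(v[j], v[j+1]) == "I" for j in range(len(v)-1))
    return consecutive and v[-1][0] < v[0][1]   # R(v_1, v_k) stays in C_+

print(is_nesting([(1, 8), (2, 7), (3, 6)]), is_crossing([(1, 8), (2, 7), (3, 6)]))
print(is_nesting([(1, 5), (3, 7), (4, 9)]), is_crossing([(1, 5), (3, 7), (4, 9)]))
\end{verbatim}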
\begin{figure}
\caption{A crossing chord diagram and the lattice presentation.}
\label{fig-4}
\end{figure}
\section{Lattice polytopes}\label{sec3}
In this section, we give the definition of the lattice polytope associated with two partial matchings.
We take the standard basis $\mathbf{e}_1, \mathbf{e}_2$ of the $xy$-plane $\mathbb{R}^2$, where $\mathbf{e}_1=(1,0)$ and $\mathbf{e}_2=(0,1)$.
For two distinct points $v, w$ of $\mathbb{R}^2$, we denote by $\overline{vw}$ the segment connecting $v$ and $w$. We say a segment $\overline{vw}$ is {\it in the $x$-direction} or {\it in the $y$-direction} if $w=v+x\mathbf{e}_1$ or $w=v+y\mathbf{e}_2$ respectively for some $x,y \in \mathbb{R}$: we say a segment is in the $x$-direction or in the $y$-direction if it is parallel to the $x$-axis or the $y$-axis, respectively. We denote by $R(v, w)$ the rectangle in $\mathbb{R}^2$ whose diagonal vertices are $v$ and $w$.
For a point $v=(x_1,y_1)$ of $\mathbb{R}^2$, we call $x_1$ (respectively $y_1$) the {\it $x$-component} (respectively the {\it $y$-component}) of $v$.
For a set of points of $\mathbb{R}^2$, $\{ v_1, \ldots, v_n\}$ with $v_j=(x_j, y_j)$ ($j=1,\dots, n$), we call the set $\{ x_1, \ldots, x_n, y_1,\ldots, y_n\}$ the {\it set of $x$, $y$-components} of $\{ v_1,\ldots, v_n\}$.
The set of $x$, $y$-components of a lattice presentation $\Delta$ with $n$ arcs is a set $X$ of $2n$ distinct points of $\mathbb{R}$; as a multiset, the $x$, $y$-components of $\Delta$ consist of two copies of $X$.
\begin{definition}
A {\it lattice polytope} is a polytope $P$ consisting of a finite number of vertices of degree zero (called {\it isolated vertices}) with multiplicity $2$, vertices of degree $2$ with multiplicity $1$, edges, and faces satisfying the following conditions.
\begin{enumerate}[$(1)$]
\item
Each edge is either in the $x$-direction or in the $y$-direction.
\item
The $x$-components (respectively $y$-components) of isolated vertices and edges in the $x$-direction (respectively $y$-direction) are distinct.
\item
The boundary $\partial P$ is equipped with a coherent orientation as an immersion of a union of several circles, where we call the union of edges of $P$ the {\it boundary} of $P$, and denote it by $\partial P$.
\end{enumerate}
The set of vertices is divided into two sets $X_0$ and $X_1$
such that each edge in the $x$-direction is oriented from a vertex of $X_0$ to a vertex of $X_1$, and each isolated vertex belongs to both $X_0$ and $X_1$ with multiplicity 1. We denote $X_0$ (respectively $X_1$) by $\mathrm{Ver}_0(P)$ (respectively $\mathrm{Ver}_1(P)$), and call it the set of {\it initial vertices} (respectively {\it terminal vertices}).
\end{definition}
In graph theory, a lattice polytope is defined as a polytope whose vertices are lattice points \cite{Barvinok2, Diestel}, but in this paper, when we say that $P$ is a lattice polytope, we assume that each edge of $P$ is either in the $x$-direction or in the $y$-direction.
For a lattice polytope $P$, we denote by $-P^*$ the orientation-reversed mirror reflection of $P$ with respect to the line $x=y$.
\begin{proposition}\label{prop1}
Let $\Delta, \Delta'$ be two lattice presentations with the same set of $x$, $y$-components.
Then, $\Delta$ and $\Delta'$ form a pair of lattice polytopes $P, -P^*$ such that each edge is either in the $x$-direction or in the $y$-direction and connects a point of $\Delta$ and a point of $\Delta'$. See Figure \ref{fig3-1}.
\end{proposition}
\begin{definition}\label{def3}
We call a lattice polytope $P$ as in Proposition \ref{prop1} the {\it lattice polytope associated with lattice presentations} $\Delta$ and $\Delta'$, and denote it by $P(\Delta, \Delta')$. Note that we have a choice of $P$.
We denote by $\mathrm{Ver}_0(P)$ (respectively $\mathrm{Ver}_1(P)$) the vertices of $P$ coming from $\Delta$ (respectively $\Delta'$), which consist of a half of the elements of $\Delta$ (respectively $\Delta'$).
We give an orientation of $P$ by giving each edge in the $x$-direction (respectively in the $y$-direction) the orientation from a vertex of $\mathrm{Ver}_0(P)$ to a vertex of $\mathrm{Ver}_1(P)$ (respectively from a vertex of $\mathrm{Ver}_1(P)$ to a vertex of $\mathrm{Ver}_0(P)$).
\end{definition}
\begin{figure}
\caption{The lattice polytope associated with $\Delta$ and $\Delta'$, where $\Delta$ is presented by black circles and $\Delta'$ is presented by X marks.}
\label{fig3-1}
\end{figure}
\begin{proof}[Proof of Proposition \ref{prop1}]
We take the set of points $\Delta$ and $\Delta'$ as the set of vertices.
Since the set of $x$, $y$-components is a set of $2n$ distinct points in $\mathbb{R}$, for each point $v$ of $\Delta$, $v$ is either an isolated vertex satisfying $v=v'$ for $v' \in \Delta'$, or there are exactly two points $v'$ and $w'$ of $\Delta'$ such that $\overline{vv'}$ and $\overline{vw'}$ are in the $x$-direction and the $y$-direction, respectively. The same argument applies to each point of $\Delta'$. Thus we have a pair of lattice polytopes $P, -P^*$ satisfying the required conditions. That $-P^*$ is the mirror reflection of $P$ with respect to the line $x=y$ follows from the fact that
for each point of $\Delta$ or $\Delta'$, its mirror reflection is also a point of $\Delta$ or $\Delta'$.
\end{proof}
Since the sets of $x$, $y$-components of the vertices of $P$ and of $-P^*$ are the same, and the union of the vertices of $P\cup (-P^*)$ forms $\Delta \cup \Delta'$, the set of $x$, $y$-components of the vertices of $P$ is the same as that of $\Delta$ and $\Delta'$. Since each edge of a lattice polytope is either in the $x$-direction or the $y$-direction, the sets of $x$, $y$-components of $\mathrm{Ver}_0(P)$ and $\mathrm{Ver}_1(P)$ coincide. Thus the set of $x$, $y$-components of $\mathrm{Ver}_i(P)$ ($i=0,1$) is the same as that of $\Delta$ and $\Delta'$, and we have the following.
\begin{remark}\label{rem3-3}
Let $n$ be the number of the vertices of $\mathrm{Ver}_i(P)$ ($i=0,1$).
Then, the set $X$ of $x$, $y$-components of $\mathrm{Ver}_i(P)$ ($i=0,1$) is the same as that of $\Delta$ and $\Delta'$, and it consists of $2n$ distinct elements.
Thus the multiset of $x$, $y$-components of
the vertices of $P$ is the set of two copies of $X$.
Conversely, for a pair of sets $V_0$ and $V_1$ of points of $\mathbb{R}^2$ with the same set of $x$, $y$-components consisting of $2n$ distinct elements of $\mathbb{R}$,
there is a unique lattice polytope $P$ such that $\mathrm{Ver}_i(P)=V_i$ $(i=0,1)$.
\end{remark}
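Remark \ref{rem3-3} gives a concrete recipe for recovering the lattice polytope from the two vertex sets: each non-isolated initial vertex is joined, by an edge in the $x$-direction, to the unique terminal vertex sharing its $y$-component, and, by an edge in the $y$-direction, to the unique terminal vertex sharing its $x$-component. A small illustrative Python sketch:
\begin{verbatim}
def lattice_polytope_edges(V0, V1):
    by_x = {x: (x, y) for (x, y) in V1}
    by_y = {y: (x, y) for (x, y) in V1}
    x_edges, y_edges = [], []
    for v in V0:
        if v in V1:                       # isolated vertex common to V0 and V1
            continue
        x_edges.append((v, by_y[v[1]]))   # same y-component: x-direction edge
        y_edges.append((v, by_x[v[0]]))   # same x-component: y-direction edge
    return x_edges, y_edges

V0 = [(1, 5), (2, 4), (3, 7)]
V1 = [(1, 4), (2, 7), (3, 5)]
print(lattice_polytope_edges(V0, V1))
\end{verbatim}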
\begin{remark}
For a lattice polytope $P$, we denote by $-P$ the lattice polytope obtained from $P$ by orientation-reversal. Then, for a lattice polytope $P=P(\Delta, \Delta')$ associated with lattice presentations $\Delta$ and $\Delta'$,
$-P(\Delta, \Delta')=P(\Delta', \Delta)$.
Further, since we equipped $P$ with orientation such that each edge in the $x$-direction (respectively in the $y$-direction) is oriented from a vertex of $\mathrm{Ver}_0(P)$ to a vertex of $\mathrm{Ver}_1(P)$ (respectively from a vertex of $\mathrm{Ver}_1(P)$ to a vertex of $\mathrm{Ver}_0(P)$), the orientation changes when we rotate $P$ by $\pi/2$:
the $\pi/2$-rotation of unoriented $P$ is the polytope $-\rho(P)$, where $\rho(P)$ is the $\pi/2$-rotation of oriented $P$. Similarly,
the mirror reflection of unoriented $P$ is the polytope $-P^*$, the orientation-reversed mirror reflection of $P$.
\end{remark}
We introduce the equivalence relation among lattice polytopes.
Let $P$ be a lattice polytope with $2n$ vertices.
Since the set of $x$, $y$-components of $\mathrm{Ver}_i(P)$
($i=0,1$) consists of $2n$ distinct elements, for $\mathrm{Ver}_i(P)=\{v_1, \ldots, v_n\}$ with $v_j=(x_j, y_j)$ ($j =1,2,\ldots,n$) satisfying $x_1<x_2 <\ldots<x_n$,
we take an element $\sigma$ of the symmetric group $\mathfrak{S}_n$ of $n$ elements determined by
$y_{\sigma^{-1}(1)}<y_{\sigma^{-1}(2)}<\ldots<y_{\sigma^{-1}(n)}$. We denote the element $\sigma$ of $\mathfrak{S}_n$ by $\sigma_i(P)$ $(i=0,1)$.
Let $\pi$ be a permutation \[
\pi=\begin{pmatrix} 1 & 2 & \ldots & n-1& n \\
n & n-1 & \ldots& 2 & 1\end{pmatrix} \in \mathfrak{S}_n. \]
\begin{proposition}\label{prop3-4}
Let $\sigma$ be an element of $\mathfrak{S}_n$, which will be identified with a set of points of $\mathbb{R}^2$, $\{ (j,\sigma(j))-(n/2, n/2) \mid j=1,2,\ldots,n\}$. Then, the mirror reflection of $\sigma$ with respect to the line $x=n+1$ is $\pi\sigma$, and the $\pi/2$ rotation of $\sigma$ is $\sigma^{-1}\pi$.
\end{proposition}
\begin{definition}
Let $P, P'$ be lattice polytopes
consisting of $2n$ vertices.
Then, we say $P$ and $P'$ are {\it equivalent} if $(\sigma_0(P), \sigma_1(P))$ and $(\sigma_0(P'), \sigma_1(P'))$, or $(\sigma_0(P), \sigma_1(P))$ and $((\sigma_0(P'))^{-1}, (\sigma_1(P'))^{-1})$, are related by right or left multiplication by the permutation $\pi$.
\end{definition}
\begin{proof}[Proof of Proposition \ref{prop3-4}]
Let $S$ be a matrix presenting $\sigma$.
The matrix presenting $\pi$, which will be denoted by $P$, is
\[P=\begin{pmatrix} 0 & \cdots &0 & 1\\
0 & \cdots & 1 & 0\\
\vdots & \ddots & \vdots\\
1 & \cdots & 0 &0\end{pmatrix}.
\]
Since the right multiplication by $P$ changes the $j$th column of the operated matrix to the $(n+1-j)$th column $(j=1,2,\ldots,n)$, $SP$ presents the mirror reflection of $\sigma$ with respect to the line $x=n+1$. Since the order of a product of matrices is the reversed order of the product of the presented elements of $\mathfrak{S}_n$, we see that $\pi \sigma$ is the mirror reflection of $\sigma$ with respect to the line $x=n+1$.
Let $\{(x_j, y_j) \mid j=1,2,\ldots,n\}$ ($x_1<x_2<\ldots<x_n$) be the set of points of $\mathbb{R}^2$ identified with $\sigma$. Then, we have $y_{\sigma^{-1}(1)}<y_{\sigma^{-1}(2)}<\ldots<y_{\sigma^{-1}(n)}$. Let us rotate $\sigma$ by $\pi/2$. Then, the set of points changes to $\{(-y_j, x_j) \mid j=1,2,\ldots,n\}$.
Put $x_j'=-y_{\sigma^{-1}(n+1-j)}$ and $y_j'=x_{\sigma^{-1}(n+1-j)}$. Let $\rho$ be the element of $\mathfrak{S}_n$ presenting the $\pi/2$ rotation of $\sigma$. Since $-y_{\sigma^{-1}(n)}<-y_{\sigma^{-1}(n-1)}<\ldots<-y_{\sigma^{-1}(1)}$, $x'_1<x'_2<\ldots<x'_n$, hence we see that $y'_{\rho^{-1}(1)}<y'_{\rho^{-1}(2)}<\ldots<y'_{\rho^{-1}(n)}$. Since $y_j'=x_{\sigma^{-1}(n+1-j)}=x_{\sigma^{-1}\pi(j)}$, $x_j=y'_{\pi \sigma(j)}$ ($j=1,2,\ldots,n$). Thus $\rho^{-1}=\pi \sigma$, and we see $\rho=\sigma^{-1}\pi$.
\end{proof}
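The rotation statement of Proposition \ref{prop3-4} can also be checked numerically, reading compositions as $(\sigma\tau)(j)=\sigma(\tau(j))$; the illustrative sketch below rotates the point set associated with a random permutation by $\pi/2$, reads off the resulting permutation as in the proof, and compares it with $\sigma^{-1}\pi$.
\begin{verbatim}
import random

def perm_of_points(points):              # permutation read off a point set
    pts = sorted(points)                 # order by x-component
    ranks = sorted(p[1] for p in pts)
    return [ranks.index(p[1]) + 1 for p in pts]

n = 7
sigma = list(range(1, n + 1)); random.shuffle(sigma)
points = [(j, sigma[j - 1]) for j in range(1, n + 1)]
rotated = [(-y, x) for (x, y) in points]               # rotate by pi/2

pi_rev = list(range(n, 0, -1))                          # pi : j -> n+1-j
sigma_inv = [sigma.index(k) + 1 for k in range(1, n + 1)]
expected = [sigma_inv[pi_rev[j - 1] - 1] for j in range(1, n + 1)]

print(perm_of_points(rotated) == expected)              # True
\end{verbatim}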
In particular, by Proposition \ref{prop3-4} and by definition, we have the following.
For a lattice polytope $P$, we call a subgraph of $P$ whose boundary is homeomorphic to an immersed circle a {\it component} of $P$, and
we say a lattice polytope is {\it connected} if it consists of one component.
\begin{proposition}
For a lattice polytope $P$,
$P \sim -P ^*$.
Thus, a connected lattice polytope $P$ associated with lattice presentations of partial matchings is unique up to equivalence.
\end{proposition}
\section{Describing transformations of partial matchings by lattice polytopes}\label{sec4}
In this section, we consider transformations of lattice presentations of partial matchings.
In Section \ref{sec4-1}, we introduce transformations of lattice presentations and lattice polytopes. In Section \ref{sec4-2}, we introduce the area of a transformation and we give a lower estimate of the area of a transformation and in certain cases construct a transformation with minimal area (Theorem \ref{thm3-10}). Lemmas and Propositions are shown in Section \ref{sec4-3}.
\subsection{Transformations of lattice presentations and lattice polytopes}\label{sec4-1}
For two arcs $a_1=(x_1, y_1)$ and $a_2=(x_2, y_2)$ of a chord diagram, we consider new arcs $a_1'=(x_1', y_1')$ and $a_2'=(x_2', y_2')$, where $\{x_1', x_2', y_1', y_2'\}=\{x_1, x_2, y_1, y_2\}$ with $\{a_1, a_2\} \neq \{a_1', a_2'\}$. We consider a new chord diagram obtained from exchanging $a_1, a_2$ to $a_1', a_2'$. We call the new chord diagram the result of a {\it transformation} between two arcs $a_1$ and $a_2$.
For a lattice presentation of a chord diagram, we define
a {\it transformation} as the operation presenting a transformation of the chord diagram.
For two lattice presentations $\Delta$ and $\Delta'$ with the same set of $x, y$-components, we define a {\it transformation from $\Delta$ to $\Delta'$} as a sequence of transformations such that the initial and the terminal lattice presentations are $\Delta$ and $\Delta'$, respectively. We denote it by $\Delta \to \Delta'$.
\begin{definition}
Let $\Delta$ be a set of lattice points. For a point $u$ of $\mathbb{R}^2$, we denote by $x(u)$ and $y(u)$ the $x$ and $y$-components of $u$ respectively. For two distinct points $v, w$ of $\Delta$,
we consider the rectangle $R(v, w)$ one pair of whose diagonal vertices are $v$ and $w$. Put $\tilde{v}=(x(w), y(v))$ and $\tilde{w}=(x(v), y(w))$, which form the other pair of diagonal vertices of $R(v,w)$. Then, consider a new set of lattice points obtained from $\Delta$, from removing $v,w$ and adding $\tilde{v}, \tilde{w}$.
We call the new set of lattice points the result of a {\it transformation of $\Delta$ by the rectangle} $R(v, w)$, and denote it by $t( \Delta, R(v, w))$.
For two sets of lattice points $\Delta$ and $\Delta'$ with the same set of $x, y$-components, we define a {\it transformation from $\Delta$ to $\Delta'$} as a sequence of transformations by rectangles such that the initial and the terminal lattice points are $\Delta$ and $\Delta'$ respectively. We will denote it by $\Delta \to \Delta'$.
\end{definition}
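In coordinates, a transformation by the rectangle $R(v, w)$ simply exchanges the pair $v, w$ for the other pair of diagonal vertices $(x(w), y(v))$ and $(x(v), y(w))$ of the same rectangle; the following short Python sketch (for illustration) performs one such move.
\begin{verbatim}
def transform_by_rectangle(points, v, w):
    new_points = set(points) - {v, w}
    new_points |= {(w[0], v[1]), (v[0], w[1])}   # the other diagonal pair
    return new_points

Delta = {(1, 5), (2, 4), (3, 7)}
print(transform_by_rectangle(Delta, (1, 5), (2, 4)))   # {(1, 4), (2, 5), (3, 7)}
\end{verbatim}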
Recall that for a lattice polytope $P$, we denote by $-P^*$ the orientation-reversed mirror reflection of $P$ with respect to the line $x=y$.
\begin{lemma}\label{lem4-2}
A transformation between two arcs of a chord diagram is presented by a transformation of the presenting lattice presentation $\Delta$ by rectangles $R(v,w)$ and $-R(v,w)^*$
for $v,w \in \Delta$.
\end{lemma}
\begin{proof}
Since two arcs of a chord diagram are either separated, nesting, or crossing,
it suffices to consider the following three cases: (1) $a_1, a_2$ are separated and $a_1', a_2'$ are nesting, (2) $a_1, a_2$ are nesting and $a_1', a_2'$ are crossing, and (3) $a_1, a_2$ are crossing and $a_1', a_2'$ are separated.
Since the transformation is described as in Figure \ref{fig4-1}, we have the required result.
\end{proof}
\begin{figure}
\caption{Transformations between two arcs and the presenting transformations of lattice presentations.}
\label{fig4-1}
\end{figure}
Let $P$ be a lattice polytope.
Then, for $v, w \in \mathrm{Ver}_0(P)$, consider a new set of vertices obtained from $\mathrm{Ver}_0(P)$ by a transformation by $R(v,w)$.
Since the multiset of $x$, $y$-components is preserved, the resulting new vertices
and $\mathrm{Ver}_1(P)$ form a new lattice polytope (see Remark \ref{rem3-3}).
\begin{definition}
For a lattice polytope $P$ and $v,w \in \mathrm{Ver}_0(P)$,
we call the lattice polytope $P'$ determined by the lattice points $\mathrm{Ver}_0(P')=t(\mathrm{Ver}_0(P), R(v, w))$ and $\mathrm{Ver}_1(P')=\mathrm{Ver}_1(P)$
the result of a {\it transformation of $P$ by the rectangle} $R=R(v, w)$ and denote it by $t(P , R)$.
For two lattice polytopes $P$ and $P'$ with the same set of $x, y$-components, we define a {\it transformation from $P$ to $P'$} as a sequence of transformations by rectangles $R_j$ such that the initial and the terminal lattice polytopes are $P$ and $P'$, respectively. We denote it by $P \to P'$.
\end{definition}
For a pair of lattice polytopes $P$ and $-P^*$, and rectangles $R$ and $-R^*$, we denote $t(P , R ) \cup t(-P^*, -R^* )$ by $t(P\cup (-P^*) , R \cup (-R ^*))$, and call it the result of a {\it transformation of $P\cup (-P^*)$ by rectangles} $R \cup (-R ^*)$.
Note that $t(-P^*, -R^*)=-t(P, R)^*$ and $t(P\cup (-P^*) , R \cup (-R ^*))=t(P , R ) \cup (-t(P, R )^*)$.
\begin{definition}
For two pairs of lattice polytopes $P, -P^*$ and $P', -P'^*$ with the same set of $x, y$-components, we define a {\it transformation from $P\cup (-P^*)$ to $P'\cup (-P'^*)$} as a sequence of transformations by rectangles $R_j \cup (-R_j^*)$ such that the initial and the terminal lattice polytopes are $P\cup (-P^*)$ and $P'\cup (-P'^*)$, respectively, and denote it by $P\cup (-P^*) \to P'\cup (-P'^*)$.
\end{definition}
For lattice presentations of partial matchings $\Delta, \Delta'$ with the same set of $x, y$-components, and an associated lattice polytope $P$,
$\Delta=\mathrm{Ver}_0(P \cup (-P^*))$, $\Delta'=\mathrm{Ver}_1(P \cup (-P^*))$.
Thus, by Lemma \ref{lem4-2}, we have the following.
When we consider transformations of lattice polytopes, we regard isolated vertices $\mathrm{Ver}_1(Q)$ as a lattice polytope $Q'$ whose initial and terminal vertices are the isolated vertices: $\mathrm{Ver}_0(Q')=\mathrm{Ver}_1(Q')=\mathrm{Ver}_1(Q)$.
\begin{proposition}
Let $\Delta$, $\Delta'$ be lattice presentations of partial matchings with the same set of $x, y$-components, and let $P$ be a lattice polytope associated with $\Delta$ and $\Delta'$.
Then, a transformation $\Delta \to \Delta'$ is described by a transformation of lattice
polytopes $P \cup (-P^*) \to \mathrm{Ver}_1(P \cup (-P^*))$.
In particular, a transformation of a lattice
polytope $P \to \mathrm{Ver}_1(P)$ presents a transformation $\Delta \to \Delta'$.
\end{proposition}
\begin{example}
We consider lattice presentations $\Delta$ and $\Delta'$ as in Figure \ref{fig3-1}, where we denote the points of $\Delta=\mathrm{Ver}_0(P \cup (-P^*))$ by black circles and the points of $\Delta'=\mathrm{Ver}_1(P \cup (-P^*))$ by X marks. In Figure \ref{fig4-2}, we give an example of a transformation $\Delta \to \Delta'$, described by a transformation of lattice
polytopes $P \cup (-P^*) \to \mathrm{Ver}_1(P \cup (-P^*))$, where we denote by shadowed rectangles the used rectangles. In this case, it suffices to see a transformation $P \to \mathrm{Ver}_1(P)$, which induces a transformation $P \cup (-P^*) \to \mathrm{Ver}_1(P \cup (-P^*))$.
\begin{figure}
\caption{Transformation of lattice polytopes $P \cup (-P^*)$ associated with lattice presentations of partial matchings, where the vertices of $\mathrm{Ver}_0(P \cup (-P^*))$ (respectively $\mathrm{Ver}_1(P \cup (-P^*))$) are indicated by black circles (respectively X marks).}
\label{fig4-2}
\end{figure}
\end{example}
\subsection{Areas of transformations}\label{sec4-2}
For a lattice polytope $P$, let us define the area of $|P|$, $\mathrm{Area}|P|$, as follows.
Recall that for a lattice polytope $P$,
we give an orientation of $P$ by giving each edge in the $x$-direction (respectively in the $y$-direction) the orientation from a vertex of $\mathrm{Ver}_0(P)$ to a vertex of $\mathrm{Ver}_1(P)$ (respectively from a vertex of $\mathrm{Ver}_1(P)$ to a vertex of $\mathrm{Ver}_0(P)$). Then, $\partial P$ has a coherent orientation as an immersion of a union of several circles.
The space $\mathbb{R}^2$, which contains $P$, is divided into several regions $A_1,\ldots, A_m$ by $\partial P$. For each region $A_i$ ($i=1,\dots,m$), let $\omega(A_i)$ be the {\it rotation number} of $P$ with respect to $A_i$, which is the sum of rotation numbers of the components
of $P$, with respect to $A_i$. Here, the {\it rotation number} of a connected lattice polytope $Q$ with respect to a region $A$ of $\mathbb{R}^2 \backslash\partial Q$ is the rotation number of a map $f$ from $\partial Q=\mathbb{R}/2\pi \mathbb{Z}$ to $\mathbb{R}/2\pi\mathbb{Z}$ which maps $x \in \partial Q$ to the argument of the vector from a fixed interior point of $A$ to $x$. Here, the {\it rotation number} of the map $f$ is defined by
$(F(x+2\pi)-F(x))/2\pi$, where $F: \mathbb{R} \to \mathbb{R}$ is a lift of $f$ and $x \in \mathbb{R}$.
We define $\mathrm{Area}(A_i)$ for a region $A_i$ by the area induced from $\mathrm{Area}(R)=|x_2-x_1||y_2-y_1|$ for a rectangle $R$ whose diagonal vertices are $(x_1, y_1)$ and $(x_2, y_2)$.
Then, we define the {\it area} of $P$, denoted by $\mathrm{Area}(P)$, by $\mathrm{Area}(P)=\sum_{i=1}^m \omega(A_i)\mathrm{Area}(A_i)$, and the {\it area} of $|P|$, denoted by $\mathrm{Area}|P|$, by $\mathrm{Area}|P|=\sum_{i=1}^m |\omega(A_i)\mathrm{Area}(A_i)|$.
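Since $\omega(A_i)$ counts how many times the oriented boundary winds around $A_i$, the signed quantity $\mathrm{Area}(P)$ coincides with the sum of the algebraic (shoelace) areas of the boundary circuits of $P$; the unsigned quantity $\mathrm{Area}|P|$, by contrast, still requires the decomposition into regions. An illustrative Python sketch of the signed computation:
\begin{verbatim}
def shoelace(circuit):              # vertices of one oriented circuit, in order
    s = 0.0
    for (x1, y1), (x2, y2) in zip(circuit, circuit[1:] + circuit[:1]):
        s += x1*y2 - x2*y1
    return s/2.0

def signed_area(circuits):          # Area(P) for the boundary circuits of P
    return sum(shoelace(c) for c in circuits)

# one counterclockwise rectangular circuit: Area = 3
print(signed_area([[(1, 4), (2, 4), (2, 7), (1, 7)]]))
\end{verbatim}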
\begin{remark}
For two lattice polytopes $P_1$ and $P_2$, $\mathrm{Area}|P_1 \cup P_2|=\mathrm{Area}|(-P_1^*)\cup (-P_2^*)|$, but $\mathrm{Area}|P_1\cup P_2|$ is not always equal to $\mathrm{Area}|P_1\cup (-P_2^*)|$.
Thus, for a lattice polytope $P$ associated with lattice presentations $\Delta$ and $\Delta'$, $\mathrm{Area}|P|$ depends on the choice of the components of $P$.
\end{remark}
\begin{definition}
Let $\Delta, \Delta'$ be two lattice presentations of partial matchings with the same set of $x, y$-components.
Let us consider a transformation
$\Delta=\Delta_0 \to \Delta_1\to \cdots \to \Delta_k=\Delta'$, with $\Delta_j=t(\Delta_{j-1}, R_j\cup (-R_j^*))$ for a rectangle $R_j$ ($j=1,2,\ldots,k$).
Then, we call $\sum_{j=1}^k |\mathrm{Area}(R_j)|$ the {\it area of a transformation $\Delta \to \Delta'$}.
\end{definition}
We say that a connected lattice polytope $P$ is {\it simple} if the face of $P$ is homeomorphic to a 2-disk in $\mathbb{R}^2$.
For a lattice polytope $P$ with regions $A_1, \ldots, A_m$ divided by $\partial P$,
equip each region $A_i$ with the right-handed (resp. left-handed) orientation when $\omega(A_i)$ is positive (resp. negative) $(i=1,\ldots,m)$. Then the union of $|\omega(A_i)|$ copies of the closure of $A_i$ has boundary $\partial P$. We call this union the {\it region} of $P$, and denote it by the same notation $P$.
Further, if edges $\overline{uv}$ and $\overline{wz}$ of $P$ have a transverse intersection, then we call the intersection point a {\it crossing}.
\begin{theorem}\label{thm3-10}
Let $\Delta, \Delta'$ be two lattice presentations of partial matchings with the same set of $x, y$-components.
We consider a transformation
$\Delta=\Delta_0 \to \Delta_1\to \cdots \to \Delta_k=\Delta'$, with $\Delta_j=t(\Delta_{j-1}, R_j\cup (-R_j^*))$ for a rectangle $R_j$ $(j=1,2,\ldots,k)$. Let $P$ be a lattice polytope associated with $\Delta$ and $\Delta'$.
Then
\begin{equation}\label{eq0}
\sum _{j=1}^k \mathrm{Area}(R_j) =\frac{1}{2}\mathrm{Area}(P\cup (-P^*))
\end{equation}
and
\begin{equation}\label{eq1}
\sum_{j=1}^k |\mathrm{Area}(R_j)|\geq \frac{1}{2}\mathrm{Area}|P\cup (-P^*)|.
\end{equation}
Further, when $P$ satisfies either the following condition $(1)$ or $(2)$,
\begin{equation}\label{eq2}
\sum_{j=1}^k |\mathrm{Area}(R_j)| \geq \mathrm{Area}|P|,
\end{equation}
and there exist transformations which realize the equality in $(\ref{eq2})$, where the conditions are as follows.
\begin{enumerate}[$(1)$]
\item
The rotation number $\omega(A)$ is equal to $\epsilon$ for any region $A$ surrounded by an embedded closed path in $\partial P$, where $\epsilon\in \{+1, -1\}$ (see Figure \ref{fig4-3} for example).
\item
The lattice polytopes $P$ and $-P^*$ are disjoint, and,
when we regard crossings as vertices, $P$ is regarded as the union of simple lattice polytopes $P_1, \ldots, P_m$ and $Q_{i1}, \ldots, Q_{in_i}$ $(i=1,\ldots,m)$ such that $P_i \cap P_j$ $(i \neq j)$ is empty or consists of one crossing in $\partial P_i \cap \partial P_j$ and $Q_{i1}, \ldots, Q_{in_i}$ are mutually disjoint and contained in the interior of $P_i$ (see Figure $\ref{fig4-4}$ for example).
\end{enumerate}
\end{theorem}
\begin{figure}
\caption{Example of a lattice polytope $P$ satisfying the condition (1), where the vertices of $\mathrm{Ver}_0(P)$ (respectively $\mathrm{Ver}_1(P)$) are indicated by black circles (respectively X marks).}
\label{fig4-3}
\end{figure}
\begin{figure}
\caption{Example of a lattice polytope $P$ satisfying the condition (2), where the vertices of $\mathrm{Ver}_0(P)$ (respectively $\mathrm{Ver}_1(P)$) are indicated by black circles (respectively X marks).}
\label{fig4-4}
\end{figure}
\begin{remark}
The condition (1) indicates that each connected component of $P$ is in the form of the projected image into the $xy$-plane of a closed braid in the $xyz$-space in general position with respect to the $z$-axis \cite{Birman}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm3-10}]
For a point $u$ of $\mathbb{R}^2$, we denote by $x(u)$ and $y(u)$ the $x$ and $y$-components of $u$ respectively.
For a rectangle $R(v,w)$,
we consider the other pair of diagonal vertices of $R(v,w)$,
$\tilde{v}=(x(w), y(v))$ and $\tilde{w}=(x(v), y(w))$.
Then, we assign to $R(v,w)$ an orientation such that the edges are oriented from $v$ to $\tilde{v}$, from $\tilde{v}$ to $w$, from $w$ to $\tilde{w}$, and from $\tilde{w}$ to $v$.
Then, this induces an orientation of $P$ and $-P^*$ which coincides with the original orientation of $P$ and $-P^*$,
and by Lemma \ref{lem4-15} we see that
\[
\sum _{j=1}^k \mathrm{Area}(R_j \cup (-R_j^*))=\mathrm{Area}(P \cup (-P^*)),
\]
and
\[
\sum _{j=1}^k \mathrm{Area}|R_j \cup (-R_j^*)|\geq\mathrm{Area}|P \cup (-P^*)|.
\]
Let $Q$ be a lattice polytope. Then, since the mirror reflection of an edge of $Q$ in the $x$-direction (respectively $y$-direction) is an edge of $Q^*$ in the $y$-direction (respectively $x$-direction), the orientation of any closed path $C$ in $\partial Q$ and $C^*$ in $\partial Q^*$ coincide as an embedded circle in $\mathbb{R}^2$. Hence
$\mathrm{Area}(Q)=\mathrm{Area}(Q^*)$, and hence, for $j=1,\ldots, k$,
\begin{equation*} \mathrm{Area}(R_j\cup (-R_j^*))
= \mathrm{Area}(R_j)+\mathrm{Area}(-R_j^*)=2 \mathrm{Area}(R_j)
\end{equation*}
and
\begin{equation*} \mathrm{Area}|R_j\cup (-R_j^*)|
= | \mathrm{Area}(R_j)|+|\mathrm{Area}(-R_j^*)|=2 |\mathrm{Area}(R_j)|.
\end{equation*}
Hence we have (\ref{eq0}) and (\ref{eq1}).
When the lattice polytope $P$ satisfies condition (1) or (2), $\mathrm{Area}(P \cup (-P^*))=\mathrm{Area}(P)+\mathrm{Area}(-P^*)$. Since $\mathrm{Area}(P)=\mathrm{Area}(-P^*)$, we have (\ref{eq2}). In these cases, the boundary $\partial P$ can be divided into a union of boundaries of simple connected lattice polytopes.
In order to show that there exists a transformation which realizes the equality of (\ref{eq2}),
first we show the following claims.
Claim (a). For a simple connected lattice polytope $Q$, there is a transformation $\mathrm{Ver}_0(Q) \to \mathrm{Ver}_1(Q)$ by rectangles $R_j$ $(j=1,\ldots, k)$ such that
\begin{equation}\label{eq3-2}
\sum _{j=1}^k |\mathrm{Area}(R_j)|=|\mathrm{Area}(Q)|.
\end{equation}
Claim (b).
For simple connected lattice polytopes $Q, Q_1, \ldots, Q_l$ such that $Q_1, \ldots, Q_l$ are mutually disjoint and contained in the interior of $Q$, and $Q \cup Q_1 \cup \cdots \cup Q_l$ is a region obtained from a 2-disk $Q$ by removing $Q_1,\ldots, Q_l$, there is a transformation $\mathrm{Ver}_0(Q \cup \cup_{i=1}^l Q_i) \to \mathrm{Ver}_1(Q\cup \cup_{i=1}^l Q_i)$ by rectangles $R_j$ $(j=1,\ldots,k)$ such that
\begin{equation}\label{eq3-3}
\sum _{j=1}^k |\mathrm{Area}(R_j)|=\mathrm{Area}|Q \cup \cup_{i=1}^lQ_i|=\big||\mathrm{Area}(Q)|-\sum_{i=1}^l|\mathrm{Area}(Q_i)|\big|.
\end{equation}
First we show Claim (a).
By Proposition \ref{lem3-15}, there is a rectangle $R=R(v,w)$ contained in the region $Q$ with $v,w \in \mathrm{Ver}_0(Q)$ such that $Q\backslash R$ forms a simple lattice polytope $Q'$.
This implies that
$|\mathrm{Area}(R)|+|\mathrm{Area}(Q')|=|\mathrm{Area}(Q)|$.
Hence, in order to show Claim (a), it suffices to show that there is a transformation $\mathrm{Ver}_0(Q') \to \mathrm{Ver}_1(Q')$ satisfying (\ref{eq3-2}) with $Q=Q'$. Let $n$ be the number of vertices of $\mathrm{Ver}_0(Q)$. Since $v,w$ of $R=R(v,w)$ are vertices of $\mathrm{Ver}_0(Q)$, we see that if $Q'$ is connected, then the number of vertices of $\mathrm{Ver}_0(Q')$ is $n-1$. If $Q'$ is not connected, then $Q'$ consists of 2 components, and the total number of vertices of $\mathrm{Ver}_0(Q')$ is $n$; thus the number of vertices of $\mathrm{Ver}_0(Q')$ coming from each connected component is less than $n$. If the number of vertices of $\mathrm{Ver}_0(Q')$ is 2, then $Q'$ is a rectangle, and we have (\ref{eq3-2}) with $Q=Q'$. Thus, by induction on the number of vertices of the connected components, we can construct a transformation satisfying (\ref{eq3-2}).
For Claim (b), by a similar argument, we can show that there exists a transformation satisfying (\ref{eq3-3}). We can take rectangles $R$ inductively; when the vertices $v,w$ of a used rectangle $R(v,w)$ come from distinct components,
the number $l$ of simple connected lattice polytopes is reduced by one.
Now, we construct a transformation such that $\sum_j |\mathrm{Area}(R_j)|=\mathrm{Area}|P|$, assuming that $P$ satisfies the condition (1) or (2).
We show the case when $P$ is connected, and $Q_{i1}, \ldots, Q_{in_i}$ $(i=1,\ldots, m)$ of the condition (2) are empty graphs.
Let us regard $\partial P$ as an immersion of a circle.
(Step 1)
Take an interval $I$ of $\partial P$ which starts from a crossing $x$ and comes back to $x$. If there is another interval $J$ in $I$ which starts from a crossing $y$ and comes back to $y$, then take $J$ instead of $I$. Repeating this process, we have an interval $I$ of $\partial P$ which bounds a simple connected lattice polytope $Q_1$ whose vertices consists of the vertices of $P$ and one crossing $x$.
(Step 2) For a lattice polytope $Q_1$ of Step 1,
if $x \in \mathrm{Ver}_1(Q_1)$, then put $P_1=Q_1$. Then, by Claim (a), we have a transformation
$P \to (P\backslash P_1)\cup \mathrm{Ver}_1(P_1)$.
If $x \in \mathrm{Ver}_0(Q_1)$, then consider $Q_2=P\backslash Q_1$, and repeat Step 1 several times. Since $x \in \mathrm{Ver}_0(Q_1)$ becomes a vertex in $\mathrm{Ver}_1(Q_2)$, we obtain $Q_n$ such that the crossings are in $\mathrm{Ver}_1(Q_n)$. Put $P_1=Q_n$, and then by Claim (a) we see that there is a transformation $P \to (P\backslash P_1) \cup \mathrm{Ver}_1(P_1)$.
By repeating this process, we have a transformation $P \to \mathrm{Ver}_1(P)$, which is a transformation $ \Delta \to \Delta'$ such that $\sum_j |\mathrm{Area}(R_j)|=\mathrm{Area}|P|$. See Figures \ref{fig4-5} and \ref{fig4-6}.
The case when $Q_{i1}, \ldots, Q_{in_i}$ $(i=1,\ldots, m)$ of condition (2) may not be empty graphs can be shown by the same argument using Claim (b). Thus there is a transformation such that $\sum_j |\mathrm{Area}(R_j)|=\mathrm{Area}|P|$ when $P$ satisfies condition (1) or (2).
\end{proof}
\begin{figure}
\caption{Example of a transformation of a lattice polytope $P$ satisfying the condition (1). We have a transformation of $P$ by a composite of transformations of $P_1, P_2, P_3$, where $P_1, P_2, P_3$ are indicated by shadowed regions, and the vertices of $\mathrm{Ver}_0$ (respectively $\mathrm{Ver}_1$) are indicated by black circles (respectively X marks).}
\label{fig4-5}
\end{figure}
\begin{figure}
\caption{Example of a transformation of a lattice polytope $P$ satisfying the condition (2). We have a transformation of $P$ by a composite of transformations of $P_1, P_2, P_3, P_4$, where the vertices of $\mathrm{Ver}_0$ (respectively $\mathrm{Ver}_1$) are indicated by black circles (respectively X marks).}
\label{fig4-6}
\end{figure}
\begin{proposition}\label{prop3-18}
There exists a lattice polytope $P$ such that for any transformation of $P$ by rectangles $R_1, \ldots, R_k$, $\sum_{j=1}^k|\mathrm{Area}(R_j)|>\mathrm{Area}|P|$.
\end{proposition}
\begin{proof}
We consider a lattice polytope as illustrated by the leftmost figure of Figure \ref{fig4-7}, where the vertices of $\mathrm{Ver}_0(P)$ (respectively $\mathrm{Ver}_1(P)$) are indicated by black circles (respectively X marks), and the numbers in regions divided by $\partial P$ denote the rotation numbers.
Assume that there is a transformation of $P$ by rectangles $R_1, \ldots, R_k$ such that $\sum_{j=1}^k|\mathrm{Area}(R_j)|=\mathrm{Area}|P|$. Then, by Lemma \ref{lem4-15}, any $R_j$ is disjoint from every region whose rotation number is zero. Thus, we have only one applicable rectangle $R_1=R(v_1, w_1)$ for $v_1,w_1 \in \mathrm{Ver}_0(P)$ as in the figure. Then, by a transformation of $P$ by $R_1$, we have a lattice polytope $P_1$ as in the middle figure of Figure \ref{fig4-7}. By the same argument, we have only one applicable rectangle $R_2=R(v_2, w_2)$ for $v_2,w_2 \in \mathrm{Ver}_0(P_1)$ as in the figure. Then, by a transformation of $P_1$ by $R_2$, we have a lattice polytope $P_2$ as in the right figure of Figure \ref{fig4-7}. Now, ignoring the isolated vertex, every possible rectangle of $P_2$ has nonempty intersection with a region whose rotation number is zero. This implies that there is no transformation of $P$ by rectangles $R_1, \ldots, R_k$ such that $\sum_{j=1}^k|\mathrm{Area}(R_j)|=\mathrm{Area}|P|$, and the required result follows.
\end{proof}
\begin{figure}
\caption{Example of a transformation of a lattice polytope $P$ satisfying $\sum_{j=1}^k|\mathrm{Area}(R_j)|>\mathrm{Area}|P|$.}
\label{fig4-7}
\end{figure}
\begin{corollary}
There exist lattice presentations $\Delta, \Delta'$ such that any transformation $\Delta \to \Delta'$ satisfies $\sum_{j=1}^k(|\mathrm{Area}(R_j)|+|\mathrm{Area}(-R_j^*)|)>\frac{1}{2}\mathrm{Area}|P\cup (-P^*)|$, where $R_1, \ldots, R_k$ are used rectangles and $P$ is a lattice polytope associated with $\Delta, \Delta'$. Thus, there exist lattice presentations $\Delta, \Delta'$ such that the minimal area of transformations $\Delta \to \Delta'$ is greater than $\frac{1}{2}\mathrm{Area}|P\cup (-P^*)|$.
\end{corollary}
\begin{proof}
Take lattice presentations $\Delta, \Delta'$ presented by lattice polytopes $P, -P^*$ such that $P\cap (-P^*)=\emptyset$ and $P$ is the lattice polytope given in Proposition \ref{prop3-18}. Then $\Delta, \Delta'$ are the required presentations.
\end{proof}
The transformation with minimal area $\mathrm{Area}|P|$ which we constructed in the proof of Theorem \ref{thm3-10}
is described by a transformation of $P$, but we have the following proposition.
\begin{proposition}
There exist lattice presentations of partial matchings $\Delta, \Delta'$ and a transformation $\Delta \to \Delta'$ by rectangles $R_j \cup (-R_j^*)$ $(j=1,\ldots,k)$ such that
$\sum_{j=1}^k |\mathrm{Area}(R_j)|=\mathrm{Area}|P|$
but $R_1=R(v,w)$ satisfies $v \in \mathrm{Ver}_0(P)$ and $w \in \mathrm{Ver}_0(-P^*)$. Thus, there exists a transformation $\Delta \to \Delta'$ with minimal area $\mathrm{Area}|P|$ which is not presented by a transformation of $P$. We have examples satisfying one of the following conditions.
\begin{enumerate}[$(1)$]\label{prop4-13}
\item
The lattice polytope $P$ is connected and simple. In this case, there is also a transformation $\Delta \to \Delta'$ with minimal area which is presented by a transformation of $P$.
\item
The lattice polytope $P$ is connected but not simple. In this example, no transformation $\Delta \to \Delta'$ with minimal area is presented by a transformation of $P$.
\end{enumerate}
\end{proposition}
\begin{proof}
Case (1). We consider lattice presentations with associated lattice polytopes as illustrated in Figure \ref{fig4-8a}, where the shadowed polytope is $P$, and the vertices of $\mathrm{Ver}_0(P \cup (-P^*))$ (respectively $\mathrm{Ver}_1(P\cup (-P^*))$) are indicated by black circles (respectively X marks), and the numbers in regions divided by $\partial (P\cup (-P^*))$ denote the rotation numbers. Then, the transformation as shown in Figure \ref{fig4-9a} satisfies $\sum_j|\mathrm{Area}(R_j)|=\mathrm{Area}|P|$ but $R_1=R(v,w)$ satisfies $v \in \mathrm{Ver}_0(P)$ and $w \in \mathrm{Ver}_0(-P^*)$. The latter statement is obvious, and also follows from Theorem \ref{thm3-10}.
Case (2).
We consider lattice presentations with associated lattice polytopes as illustrated in Figure \ref{fig4-8}, and the transformation shown in Figure \ref{fig4-9} satisfies the required condition. The latter statement is obvious, since by Theorem \ref{thm3-10} the minimal area for the lattice polytope $P$ is the sum of the two areas surrounded by $\partial P$.
\end{proof}
\begin{figure}
\caption{Example of lattice polytopes $P \cup (-P^*)$ of Proposition \ref{prop4-13} (1).}
\label{fig4-8a}
\end{figure}
\begin{figure}
\caption{Example of a transformation of lattice polytopes $P \cup (-P^*)$ given by Figure \ref{fig4-8a}.}
\label{fig4-9a}
\end{figure}
\begin{figure}
\caption{Example of lattice polytopes $P \cup (-P^*)$ of Proposition \ref{prop4-13} (2).}
\label{fig4-8}
\end{figure}
\begin{figure}
\caption{Example of a transformation of lattice polytopes $P \cup (-P^*)$ given by Figure \ref{fig4-8}.}
\label{fig4-9}
\end{figure}
\begin{corollary}\label{cor4-12}
Let $\Delta, \Delta'$ be two lattice presentations of partial matchings.
We consider a transformation $\Delta \to \Delta'$ with minimal area. Then, if $P$ and $-P^*$ are disjoint and $P$ satisfies the condition $(1)$ or $(2)$ of Theorem \ref{thm3-10}, then $\Delta \to \Delta'$ is described by a transformation of $P$.
Further, in this case, the transformations consist of those which, as chord diagrams, change nesting arcs to crossing arcs and vice versa.
\end{corollary}
\begin{proof}
By Lemma \ref{lem3-14} and the proof of Theorem \ref{thm3-10}, if $P$ and $-P^*$ are disjoint and $P$ satisfies the condition $1$ or $2$ of Theorem \ref{thm3-10}, then $\Delta \to \Delta'$ is described by a transformation of $P$.
Since $P$ and $-P^*$ are disjoint, any used rectangle $R$ is disjoint from $-R^*$, and it follows from Proposition \ref{prop2-3} that the two arcs of the presented chord diagram are non-separated before and after the transformation; hence we see that the transformation changes nesting arcs to crossing arcs and vice versa.
\end{proof}
Let $f(\Delta, \Delta')$ (respectively $f(P(\Delta, \Delta'))$) be the number of transformations $\Delta \to \Delta'$ with minimal area (respectively the number of transformations of $P=P(\Delta, \Delta')$ with minimal area). Note that by definition, $f(\Delta, \Delta')=f(P\cup (-P^*))$.
By Lemma \ref{lem3-14} and the existence of a transformation with minimal area by Theorem \ref{thm3-10}, we have the following corollary.
\begin{corollary}\label{cor3-15}
If a lattice polytope $P=P(\Delta, \Delta')$ satisfies the condition $1$ or $2$ of Theorem \ref{thm3-10}, then
the number of transformations with minimal area $f(\Delta, \Delta')$ is equal to $f(P)$. Further, by the definition of equivalence, $f(P)=f(P')$ for equivalent lattice polytopes $P$ and $P'$.
Thus, for pairs of lattice presentations $\Delta_i$, $\Delta_i'$ $(i=1,2)$ whose lattice polytopes $P_i=P(\Delta_i, \Delta_i')$ are such that $P_i$ and $-P_i^*$ are disjoint and $P_i$ satisfies the condition (1) or (2) of Theorem \ref{thm3-10} $(i=1,2)$,
$f(\Delta_1, \Delta_1')=f(\Delta_2, \Delta_2')$ if $P_1$ and $P_2$ are equivalent.
\end{corollary}
\section{Reduced graphs}\label{sec-red}
In this section, we introduce the notion of the reduced graph of a lattice polytope. Then we can determine whether or not a lattice polytope has a transformation with minimal area by studying its reduced graph (Theorem \ref{thm-red}).
\begin{definition}\label{def-graph}
For a lattice polytope $P$, let $\partial P$ be the boundary of $P$ equipped with the vertices of $P$, which are of the two types $\mathrm{Ver}_0(P)$ and $\mathrm{Ver}_1(P)$, with the orientations of the edges, and with labels assigned to the regions divided by $\partial P$ denoting the rotation number of each region. We regard $\partial P$ as a graph consisting of a finite number of immersed oriented circles with transverse intersection points, equipped with several vertices and an integral label for each divided region.
Then, we consider the following deformations, where an {\it arc} is a connected component of $\partial P$ minus the intersection points.
\begin{enumerate}[(I)]
\item
Reduce several vertices on an arc to one vertex on the arc (see Figure \ref{2017-0516-01} (I)).
\item
The local move illustrated in Figure \ref{2017-0516-01} (II).
\item
The local move illustrated in Figure \ref{2017-0516-01} (III).
\item
Remove a closed arc bounding a 2-disk $E$ whose interior is disjoint from the graph, provided that the label assigned to $E$ is not zero (see Figure \ref{2017-0516-01} (IV)).
\begin{figure}
\caption{The local deformations (I)--(IV), where $\delta \in \{+1, -1\}$.}
\label{2017-0516-01}
\end{figure}
\end{enumerate}
We consider a graph obtained from the deformations (I)--(IV) such that no more deformations can be applied. It is unique up to an ambient isotopy of $\mathbb{R}^2$ by Lemma \ref{lem0515}. We call the graph the {\it reduced graph} of a lattice polytope $P$.
\end{definition}
For example, we obtain in Figure \ref{2017-0517-01} the reduced graph of the lattice polytope given in the leftmost figure of Figure \ref{fig4-7} in Proposition \ref{prop3-18}.
\begin{figure}
\caption{Obtaining the reduced graph of the lattice polytope given in the leftmost figure of Figure \ref{fig4-7}.}
\label{2017-0517-01}
\end{figure}
\begin{lemma}\label{lem0515}
The reduced graph of a lattice polytope is unique up to an ambient isotopy of $\mathbb{R}^2$.
\end{lemma}
\begin{proof}
It is obvious that the deformation (I) commutes with the deformation (II). Consider a local graph as illustrated in the left figure of Figure \ref{2017-0517-02}. Let $\delta i$ be the label of the region encircled by an arc whose closure is a circle, where $\delta \in \{+1, -1\}$ and $i$ is a positive integer. Then, the label of the adjoining region is $\delta (i-1)$.
Hence there does not exist a local graph where both deformations (II) and (III) are applicable; see Figure \ref{2017-0517-02}.
Consider a local graph where both deformations (II) and (IV) are applicable. Then, the result by the deformation (II) is equal to the result by deformation (IV) after applying the deformation (I) as illustrated in Figure \ref{2017-0517-03}.
Thus the reduced graph is unique.
\end{proof}
\begin{figure}
\caption{The deformation (II) cannot be applied.}
\label{2017-0517-02}
\end{figure}
\begin{figure}
\caption{The relation between deformations (II) and (IV).}
\label{2017-0517-03}
\end{figure}
By the proof of Theorem \ref{thm3-10}, we have the following theorem. We give the proof at the end of the next section.
\begin{theorem}\label{thm-red}
Let $P$ be a lattice polytope.
Then, the reduced graph of $P$ is an empty graph if and only if there exists a transformation of $P$ with the minimal area $\mathrm{Area}|P|$.
\end{theorem}
\section{Lemmas and Propositions}\label{sec4-3}
For a point $u$ of $\mathbb{R}^2$, we denote by $x(u)$ and $y(u)$ the $x$ and $y$-components of $u$, respectively.
\begin{lemma}\label{lem4-15}
For a transformation of a lattice polytope $P$ by rectangles $R_j$,
the union of the rectangles $R_j$ forms regions whose boundaries constitute the boundary of $P$.
\end{lemma}
\begin{proof}
It suffices to show the case $P$ is connected.
Put $P_0=R_1$ and let $P_j$ be a lattice polytope with region $P_j=P_{j-1} \cup R_j$
and $\mathrm{Ver}_1(P_j)=t(\mathrm{Ver}_1(P_{j-1})\cup \{v,w\}, R(v,w))$. For $P_0=R_1$, $R_1$ forms a region of $P_0$. Hence, by induction, it suffices to show that $P' \cup R(v,w)$ forms a region of a lattice polytope $P$, for a lattice polytope $P'$ and a rectangle $R(v,w)$ such that $\mathrm{Ver}_1(P)=t(\mathrm{Ver}_1(P')\cup \{v,w\}, R(v,w))$. Since we assume that $P$ is connected, at least one of $v,w$ is in $\mathrm{Ver}_1(P')$. We have two cases: (Case 1) $v \in \mathrm{Ver}_1(P')$ and $w \not\in \mathrm{Ver}_1(P')$, and (Case 2) $v,w \in \mathrm{Ver}_1(P')$.
In both cases, $I=\partial P' \cap \partial R(v,w)$ is either $\{v\}$, or $\{v,w\}$ or an interval or intervals of $\partial R(v,w)$ containing $v$ or $w$. The orientation of edges of $\partial P'$ in the $x$-direction (respectively $y$-direction) is toward $v$ or $w$ (respectively from $v$ or $w$), and the orientation of edges of $\partial R(v,w)$ in the $x$-direction (respectively $y$-direction) is from $v$ or $w$ (respectively toward $v$ or $w$). Hence the orientations of the intervals of $I \subset \partial P'$ and $I \subset \partial R(v,w)$ are opposite, and the intervals are canceled when we take a union of rectangles, to form one region. Further, put $\tilde{v}=(x(w), y(v))$ and $\tilde{w}=(x(v), y(w))$, the other diagonal vertices of $R(v,w)$. Then, the region $P' \cup R(v,w)$ forms a lattice polytope $P$ with $\mathrm{Ver}_0(P)=\mathrm{Ver}_0(P') \cup \{w\}$ and $\mathrm{Ver}_1(P)=(\mathrm{Ver}_1(P')\backslash \{v\}) \cup \{\tilde{v}, \tilde{w}\}=t(\mathrm{Ver}_1(P')\cup \{v,w\}, R(v,w))$ for Case (1) (see Figure \ref{fig4-10}), and $\mathrm{Ver}_0(P)=\mathrm{Ver}_0(P')$ and $\mathrm{Ver}_1(P)=(\mathrm{Ver}_1(P')\backslash \{v,w\}) \cup \{\tilde{v}, \tilde{w}\}=t(\mathrm{Ver}_1(P')\cup \{v,w\}, R(v,w))$ for Case (2) (see Figure \ref{fig4-11}). Thus we have the required result.
\end{proof}
\begin{figure}
\caption{Examples of Case (1) of lattice polytopes $P'$ and $P$ with region $P=P' \cup R(v,w)$ and $\mathrm{Ver}_1(P)=t(\mathrm{Ver}_1(P')\cup \{v,w\}, R(v,w))$.}
\label{fig4-10}
\end{figure}
\begin{figure}
\caption{Examples of Case (2) of lattice polytopes $P'$ and $P$ with region $P=P' \cup R(v,w)$ and $\mathrm{Ver}_1(P)=t(\mathrm{Ver}_1(P')\cup \{v,w\}, R(v,w))$.}
\label{fig4-11}
\end{figure}
\begin{proposition}\label{lem3-15}
Let $P$ be a simple connected lattice polytope.
Then, there exists a rectangle $R(v,w)$ $(v,w \in \mathrm{Ver}_0(P))$ contained in $P$ such that $t(P, R(v,w))$ is a simple lattice polytope with region $P\backslash R(v,w)$.
\end{proposition}
\begin{proof}
By Lemma \ref{lem3-18} and \ref{lem3-19}, we have the required result.
\end{proof}
\begin{lemma}\label{lem3-18}
Let $P$ be a simple connected lattice polytope.
Then, there exists a rectangle $R(v,w)$ $(v,w \in \mathrm{Ver}_0(P))$ contained in $P$. \end{lemma}
For a vertex $v \in \mathrm{Ver}_0(P)$, we denote by $v'$ the vertex of $\mathrm{Ver}_1(P)$ such that the edge $\overline{vv'}$ is in the $x$-direction. Recall that
$\mathbf{e}_1=(1,0)$ and $\mathbf{e}_2=(0,1)$,
the standard basis of the $xy$-plane $\mathbb{R}^2$.
\begin{proof}
For $v \in \mathrm{Ver}_0(P)$, take a point $u \in \partial P$ such that $u=v+x \mathbf{e}_1$ for some $x$ and $\overline{v u} \cap \overline{vv'} \neq \{v\}$ and $\overline{v u} \subset P$. This point $u$ is unique. Then, take
another point $u'=u+y\mathbf{e}_2$ for some $y$ such that $R(v, u') \subset P$ and $(u'+\mathbb{R}\mathbf{e}_1) \cap \partial P$ consists of edges of $\partial P$. Note that there may be several choices of $u'$. Fix $u'$. Then,
since an edge of $P$ in the $x$-direction with a fixed $x$-component is unique, $(u'+\mathbb{R}\mathbf{e}_1) \cap \partial P=\overline{ww'}$ for some unique $w\in \mathrm{Ver}_0(P)$ and $w' \in \mathrm{Ver}_1(P)$.
Note that when $u'$ is a vertex of $P$, then $u' \in \mathrm{Ver}_0(P)$ and $w=u'$.
By construction, $R(v,w)$ is the required rectangle.
For $v$, we can construct another rectangle $R(v,z)$ in a similar way by first taking an interval in the $y$-direction and then making it fat in the $x$-direction. See Figure \ref{fig4-12}.
\end{proof}
\begin{figure}
\caption{The vertices $v, v', u, u', w$ and $w'$, where the vertices of $\mathrm{Ver}_0(P)$ (respectively $\mathrm{Ver}_1(P)$) are indicated by black circles (respectively X marks).}
\label{fig4-12}
\end{figure}
\begin{lemma}\label{lem3-19}
Let $P$ be a connected simple lattice polytope and let $R(v,w)$ be the rectangle constructed in Lemma \ref{lem3-18}.
Then, the result of transformation $t(P, R(v, w))$ is a simple lattice polytope with region $P\backslash R(v,w)$.
\end{lemma}
\begin{proof}
Put $R=R(v,w)$.
Let $u$ be a point in $\partial P$ as in the proof of Lemma \ref{lem3-18}.
First we see that $\partial R \cap \partial P$ consists of $\partial R=\partial P$, one interval, or two intervals. Since $v,w$ are vertices of $P$ and an edge of $P$ in the $x$-direction (respectively $y$-direction) with a fixed $x$-component (respectively $y$-component) is unique, $\partial R \cap \partial P$ consists of the union of points $v$, $w$, and intervals containing $v$ or $w$. Hence it suffices to show that there are intervals in $\partial R \cap \partial P$ containing $v$ and $w$.
Assume $x(v)<x(v')$. Put the other pair of diagonal points of $R(v,w)$ by $\tilde{v}=(x(w), y(v))$ and $\tilde{w}=(x(v), y(w))$.
Recall that we give an orientation of $P$ by giving each edge in the $x$-direction (respectively in the $y$-direction) the orientation from a vertex of $\mathrm{Ver}_0(P)$ to a vertex of $\mathrm{Ver}_1(P)$ (respectively from a vertex of $\mathrm{Ver}_1(P)$ to a vertex of $\mathrm{Ver}_0(P)$), and $\partial P$ has a coherent orientation as an immersion of a circle.
Now, consider a point moving continuously in an interval.
When a point $p$ passes a point $q$ from one direction and passes $q$ again for the second time, $p$ comes back from the other direction. Since $P$ is simple, this implies, under the assumption $x(v)<x(v')$, that $x(w)<x(w')$ and $x(w)<x(u)$ cannot hold simultaneously; thus, if $x(w)<x(w')$, then $x(w)=x(u)=x(u')$, hence $w=u'$. Thus, by construction, we see that if $x(w)<x(w')$, then $\tilde{v}=u$.
When $x(w)<x(w')$, then $\tilde{v}=u$ and $w=u'$, hence $\overline{uu'}=\overline{\tilde{v} w}$ is an interval in $\partial R \cap \partial P$ containing $w$.
When $x(w)>x(w')$, by construction, $x(w)\geq x(\tilde{w})$, hence $\overline{w\tilde{w}}\cap \overline{ww'}$ is an interval in $\partial R \cap \partial P$ containing $w$. Further, in both cases, $\overline{vv'}$ is an interval in $\partial R \cap \partial P$ containing $v$.
Thus $\partial R \cap \partial P$ consists of $\partial R=\partial P$, one interval, or two intervals.
Since $\partial P$ has an orientation, we denote the vertices of $\mathrm{Ver}_0(P)$ and $\mathrm{Ver}_1(P)$ by $v_1,\ldots, v_n$ and $v'_1, \ldots, v_n'$, respectively, such that $\overline{v_j v_j'}$ $(j=1,\ldots, n)$ is an edge in the $x$-direction and the vertices appear in the order $v_1, v_1', v_2, v_2', \ldots, v_n, v_n'$ on $\partial P$ with respect to the orientation.
In the above argument, we assume that $v=v_i$ and $w=v_j$ with $i<j$.
Then, let $P_1$ and $P_2$ be lattice polytopes determined by vertices
\[
v_1, v_1', \ldots, v_{i-1}, v'_{i-1}, \tilde{v}_j, v_j', v_{j+1}, \ldots, v_n, v_n'\]
and
\[
\tilde{v}_i, v_i', v_{i+1}, v_{i+1}', \ldots, v_{j-1}, v_{j-1}',\]
respectively such that
\begin{eqnarray*}
&&\mathrm{Ver}_0(P_1)=\{ v_1, \ldots, v_{i-1}, \tilde{v}_j, v_{j+1}, \ldots, v_n \}, \\
&& \mathrm{Ver}_1(P_1)=\{v_1', \ldots, v'_{i-1}, v_j', \ldots, v_n'\}, \\
&&\mathrm{Ver}_0(P_2)=\{ \tilde{v}_i, v_{i+1}, \ldots, v_{j-1} \},\\
&& \mathrm{Ver}_1(P_2)=\{v_i', v_{i+1}', \ldots, v_{j-1}'\},
\end{eqnarray*}
see Figure \ref{fig4-13}.
Since both $P$ and $R=R(v_i,v_j)$ are simple and connected, $P_1$ and $P_2$ are simple connected lattice polytopes.
Since $\partial R \cap \partial P$ consists of either $\partial R=\partial P$, one interval,
or two intervals containing $v$ and $w$,
$P_1 \cap P_2=\emptyset$. Note that when $\partial R=\partial P$ (respectively when $\partial R \cap \partial P$ consists of one interval), each of $P_1$ and $P_2$ (respectively one of $P_1$ and $P_2$) consists of one isolated vertex.
Hence the region of $P_1 \cup P_2$ is formed by $P\backslash R(v,w)$, and we have the required result.
\end{proof}
\begin{figure}
\caption{Lattice polytopes $P$, $P_1$ and $P_2$, where the vertices of $\mathrm{Ver}_0$ (respectively $\mathrm{Ver}_1$) are indicated by black circles (respectively X marks).}
\label{fig4-13}
\end{figure}
Let $\Delta=\Delta_0 \to \Delta_1 \to \cdots \to \Delta_k=\Delta'$ be a transformation with $\Delta_j=t(\Delta_{j-1}, R_j\cup (-R_j^*))$ for a rectangle $R_j=R_j(v_j, w_j)$ $(j=1,2,\ldots,k)$.
For a lattice polytope $P$ associated with $\Delta$ and $\Delta'$, put $V_0=\mathrm{Ver}_0(P)$.
We define $V_j$ inductively by $V_{j}=t(V_{j-1}, R_j(v_j, w_j))$ $(j=1,2,\ldots,k)$,
if the diagonal vertices $v_{j}, w_{j}$ of $R_{j}(v_{j}, w_{j})$ satisfy $v_{j}, w_{j} \in V_{j-1}$. Note that $\Delta_j=V_j \cup (-V_j^*)$ if $V_j$ can be defined, and $V_k=\mathrm{Ver}_1(P)$.
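As a concrete illustration, the following Python sketch applies this vertex-set operation; it assumes, following the formulas in the proof of Lemma \ref{lem4-15}, that $t(S, R(v,w))$ exchanges the diagonal pair $v,w$ for the opposite pair $\tilde{v}=(x(w),y(v))$, $\tilde{w}=(x(v),y(w))$, and the vertex data are made up for the example.
\begin{verbatim}
# Minimal illustrative sketch (not from the paper): we assume that
# t(S, R(v, w)) replaces the diagonal pair {v, w} of the rectangle
# R(v, w) in the vertex set S by the other diagonal pair, i.e. it is
# the symmetric difference of S with the four corners of R(v, w).

def t(S, v, w):
    corners = {v, w, (w[0], v[1]), (v[0], w[1])}
    return S ^ corners          # symmetric difference of two sets

# Hypothetical data: V_0 -> V_1 -> V_2 for two rectangles R_1, R_2
# whose diagonal vertices are taken from the current vertex set.
V0 = {(0, 0), (2, 1), (1, 2), (3, 3)}
V1 = t(V0, (0, 0), (2, 1))
V2 = t(V1, (1, 2), (3, 3))
print(V1, V2)
\end{verbatim}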
Then we have the following.
\begin{lemma}\label{lem3-14}
If $P$ and $-P^*$ are disjoint and $P$ satisfies the condition $1$ or $2$ of Theorem \ref{thm3-10} and further the area of the transformation $\Delta \to \Delta'$ is minimal, then $V_j$ can be defined for all $j=1,\ldots,k$.
\end{lemma}
\begin{proof}
Assume that the area is minimal.
By the proof of Theorem \ref{thm3-10}, the area is minimal when the used rectangles can be divided into several sets such that each set of rectangles forms a simple lattice polytope, or a set of simple lattice polytopes which bound a region obtained from a 2-disk $D^2$ by removing several mutually disjoint disks in the interior of $D^2$. Thus the vertices of each rectangle are contained in one of such regions. Since $P$ and $-P^*$ are disjoint, each of these regions is contained in either the region of $P$ or the region of $-P^*$. If $v_1 \in V_{0}=\mathrm{Ver}_0(P)$ and $w_1 \in -V_{0}^*$, then $R(v_1, w_1)$, together with other rectangles, forms a region which contains a vertex $v_1$ of $P$ and a vertex $w_1$ of $-P^*$. Since the components of $P$ and $-P^*$ are distinct, this is a contradiction, and hence we can assume that $v_1, w_1 \in V_0$ and we have $V_1$. Since the area of $\Delta_0 \to \Delta_1 \to \cdots \to \Delta_k$ is minimal, the area of the transformation $\Delta _1=V_1 \cup (-V_1^*) \to \cdots \to \Delta_k$ is also minimal. Hence, by repeating the same argument, we see that $V_j$ can be defined for all $j=1,\ldots,k$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm-red}]
We consider local deformations (I)--(IV) given in Definition \ref{def-graph}, but instead of deformations (III) and (IV), which contain one vertex, we consider (III) as the local move with one or two vertices in the arc whose closure is a circle, and (IV) as the local move with two vertices on the closed arc. Then, by the proof of Theorem \ref{thm3-10}, we see that all possible transformations which realize the minimal area are in the corresponding graphs presented by deformations (I)--(IV).
From a transformation of a lattice polytope by rectangles with minimal area, we obtain a sequence of graphs related by deformations (I)--(IV). Thus, the reduced graph of a lattice polytope $P$ is an empty graph if there exists a transformation of $P$ with the minimal area. Conversely, since all possible transformations which realize the minimal area are presented by deformations (I)--(IV), if there does not exist a transformation of $P$ with the minimal area, then the reduced graph is not empty. Thus we have the required result.
\end{proof}
\section{Simple lattice polygons}\label{sec5}
In this section, we consider simple lattice polytopes with one component, which we call simple {\it lattice polygons}.
By the proof of Proposition \ref{lem3-15}, we have the following.
\begin{proposition}\label{prop3-20}
Let $P$ be a simple lattice polygon and let $R(v,w)$ $(v,w \in \mathrm{Ver}_0(P))$ be a rectangle contained in $P$.
Then, the result of transformation $t(P, R(v, w))$ is a simple lattice polygon with region $P\backslash R(v,w)$ if and only if $\partial R(v,w)$ contains an interval of $\partial P$.
\end{proposition}
\begin{proof}
If $\partial R(v, w)$ $(v,w \in \mathrm{Ver}_0(P))$ contains an interval of $\partial P$, then
$R(v,w)$ is a rectangle constructed by the way shown in Lemma \ref{lem3-18}, and it follows from Lemma \ref{lem3-19} that $t(P, R(v, w))$ is a simple lattice polygon with region $P\backslash R(v,w)$.
If $\partial R(v,w)$ does not contain an interval of $\partial P$, then $R(v,w) \cap \partial P=\{v,w\}$. Then, the result $t(P, R(v, w))$, which consists of
the lattice polytopes $P_1$ and $P_2$ in the proof of Lemma \ref{lem3-19}, satisfies
$P_1 \cap P_2=R(v,w)$, hence the region of $t(P, R(v, w))$ is not the region of $P\backslash R(v,w)$. Thus we have the required result.
\end{proof}
By Corollary \ref{cor4-12},
in particular, we have the following.
\begin{corollary}\label{cor5-2}
Let $\Delta, \Delta'$ be two lattice presentations of partial matchings such that a lattice polytope $P$ associated with $\Delta, \Delta'$ is a simple lattice polygon. Let $n$ be half the number of the non-isolated vertices of $P$. Then, a division of $P$ into $n$ rectangles $R_1,\ldots, R_n$ presents a transformation $\Delta \to \Delta'$ with minimal area, where $R_1, \ldots, R_n$ satisfy the condition that they induce a transformation of $P$.
In particular, if $P$ and $-P^*$ are disjoint, then any transformation $\Delta \to \Delta'$ with minimal area is presented by such a division of $P$; further, the transformations consist of those which, as chord diagrams, changes nesting arcs to crossing arcs and vice versa.
\end{corollary}
\begin{definition}
In the situation of Corollary \ref{cor5-2},
we describe a transformation by a division of $P$ into $n$ rectangles $R_1,\ldots, R_n$, where $R_1, \ldots, R_n$ satisfy the condition that they induce a transformation of $P$,
and assigning each rectangle $R_j$ with the label $j$ ($j=1,\ldots,n$).
We call such a division of $P$
the {\it division of a simple lattice polygon $P$} presenting a transformation; see Figure \ref{fig5-1}.
\end{definition}
\begin{figure}
\caption{Division of a simple lattice polygon $P$ presenting a transformation.}
\label{fig5-1}
\end{figure}
\begin{proof}[Proof of Corollary \ref{cor5-2}]
When $n=1$, then $P=R_1$, which is a division by one rectangle. Assume that a simple lattice polygon with $2(n-1)$ vertices is divided by $n-1$ rectangles. By the proof of Lemma \ref{lem3-19}, $P$ is divided into $R \cup P_1 \cup P_2$ for a rectangle $R$ and simple lattice polygons $P_1$ and $P_2$ with $P_1 \cap P_2=\emptyset$ such that the sum of the numbers of vertices of $P_1$ and $P_2$ is $2n$. Then, the number of vertices of $P_1$ and that of $P_2$ are each at most $2(n-1)$, and hence, by the assumption, $P$ is divided into $n$ rectangles. Thus, by induction on $n$, together with Corollary \ref{cor4-12}, we have the required result.
\end{proof}
\end{document}
|
\begin{document}
\title[On Locally $\phi $-semisymmetric Kenmotsu Manifolds]{On Locally $\phi $-semisymmetric Kenmotsu Manifolds}
\maketitle
\vskip0.2in
\footnotesize MSC 2010 Classifications:53C25, 53D15.
\vskip0.1in
Keywords and phrases: Kenmotsu manifold, locally $\phi $- symmetric, $\phi $-semisymmetric, manifold of constant curvature.
\vskip0.1in
\normalsize
{\bf Abstract} The object of the present paper is to study locally $\phi $-semisymmetric Kenmotsu manifolds along with a characterization of this notion.
\section{Introduction}
Let $M$ be an $n$-dimensional ($n\geq 3$) connected smooth Riemannian manifold endowed with the Riemannian metric $g$. Let $ \nabla $, $R$, $S$ and
$r$ be the Levi-Civita connection, curvature tensor, Ricci tensor and the scalar curvature of $M$ respectively. The manifold $M$ is called locally symmetric due to Cartan (\cite{ce1}, \cite{ce2}) if the local geodesic symmetry at $p\in M$ is an isometry, which is equivalent to the fact that
$\nabla R=0$. Generalizing the concept of local symmetry, the notion of semisymmetric manifold was introduced by Cartan \cite{ce3} and fully classified by
Szabo (\cite{sz1}, \cite{sz2}, \cite{sz3}). The manifold $M$ is said to be semisymmetric if
$ (R(U,V).R)(X,Y)Z=0$, for all vector fields $X$, $Y$, $Z$, $U$, $V$ on $M$, where $R(U,V)$ is considered as the derivation of the tensor algebra at each point of $M$.
In 1977 Takahashi \cite{tt} introduced the notion of local $\phi $- symmetry on a Sasakian manifold. A Sasakian manifold is said to be locally $\phi $-symmetric if
\begin{equation} \phi ^2((\nabla_WR)(X,Y)Z)=0 ,\end{equation}
for all horizontal vector fields $X$, $Y$, $Z$, $W$ on $M$, that is, all vector fields orthogonal to $\xi$, where $\phi $ is the structure tensor of the manifold $M$. The concept of local $\phi $-symmetry
on various structures and their generalizations or extensions are studied in (\cite{ak}, \cite{aks}, \cite{ats}, \cite{aats}). By extending the notion of semisymmetry and generalizing the concept of local $\phi $-symmetry of Takahashi \cite{tt}, the first author and his coauthor introduced \cite{aaha} the notion of local $\phi $-semisymmetry on a Sasakian manifold. A Sasakian manifold $M$, $n\geq 3$, is said to be locally $\phi $-semisymmetric if
\begin{equation} \phi ^2((R(U,V).R)(X,Y)Z)=0, \end{equation}
for all horizontal vector fields $X$, $Y$, $Z$, $U$, $V$ on $M$. In the present paper we study locally $\phi $-semisymmetric Kenmotsu manifolds.
The paper is organized as follows:
In section 2 some rudimentary facts and curvature-related properties of Kenmotsu manifolds are discussed. In section 3 we study locally $\phi $-semisymmetric Kenmotsu manifolds and obtain a characterization of this notion.
\section{Preliminaries}
Let $M$ be a $(2n+1)$-dimensional connected smooth manifold endowed with an almost contact
metric structure $(\phi ,\xi ,\eta ,g),$ where $\phi $ is a tensor field of type $(1,1),$ $\xi $
is a vector field, $\eta $ is a $1$-form and $g$ is a Riemannian metric on $M$ such that \cite{bde}
\begin{equation} \phi ^2X=-X+\eta( X)\xi ,\hspace{10 pt}\eta (\xi )=1.\label{21}\end{equation}
\begin{equation} g(\phi X,\phi Y)=g(X,Y)-\eta (X)\eta (Y)\hspace{10 pt} \label{22}\end{equation}
for all vector fields X, Y on $M$.\\
Then we have \cite{bde}
\begin{equation} \phi \xi =0,\hspace{10 pt}\eta (\phi X)=0,\hspace{10 pt}\eta (X)=g(X,\xi ).\label{23}\end{equation}
\begin{equation} g(\phi X,X)=0. \label{24a}\end{equation}
\begin{equation} g(\phi X,Y)=-g(X,\phi Y)\label{24b}\end{equation}
for all vector fields X, Y on $M$.\\
If
\begin{equation} (\nabla_X \phi )Y=-g(X, \phi Y)\xi -\eta (Y)\phi X, \label{24} \end{equation}
\begin{equation}\nabla_X \xi=X - \eta (X)\xi,\label{25}\end{equation}
holds on $M$, then it is called a Kenmotsu manifold \cite{kk}.\\
In a Kenmotsu manifold the following relations hold \cite{kk}
\begin{equation} (\nabla_X \eta )Y= g(X, Y) -\eta (X)\eta (Y),\label{26}\end{equation}
\begin{equation} \eta (R(X, Y)Z)=g(X, Z)\eta (Y)-g(Y, Z)\eta (X), \label{27} \end{equation}
\begin{equation} R(X, Y)\xi =\eta (X)Y-\eta (Y)X, \label{28} \end{equation}
\begin{equation} R(X, \xi )Z =g(X,Z)\xi -\eta (Z)X, \label{29} \end{equation}
\begin{equation} R(X, \xi )\xi =\eta (X)\xi -X, \label{29d} \end{equation}
\begin{equation} S(X,\xi )=-2n\eta (X), \label{29a}\end{equation}
\begin{equation} (\nabla_WR)(X, Y)\xi = g(X, W)Y -g(Y, W)X- R(X, Y)W,\label{29b}\end{equation}
\begin{equation} (\nabla_WR)(X, \xi )Z = g(X, Z)W -g(W, Z)X- R(X, W)Z,\label{29c}\end{equation}
for all vector fields $X$, $Y$, $Z$ and $W$ on $M$.\\
In a Kenmotsu manifold we also have \cite{kk}
\begin{equation} \begin{array}{rcl} R(X,Y)\phi W&=&g(Y,W)\phi X-g(X,W)\phi Y+g(X,\phi W)Y-g(Y,\phi W)X\\
&+&\phi R(X,Y)W .\end{array}\label{29e}\end{equation}
Applying $\phi $ and using (\ref{21}) we get from (\ref{29e})
\begin{equation} \begin{array}{rcl} \phi R(X,Y)\phi W&=&-g(Y,W)X+g(X,W)Y+g(X,\phi W)\phi Y-g(Y,\phi W)\phi X\\
&-&R(X,Y)W .\end{array}\label{29f}\end{equation}
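One can check (\ref{29f}) directly: applying $\phi $ to (\ref{29e}) and using $\phi ^2V=-V+\eta (V)\xi $, the $\xi $-terms that appear,
\[
[g(Y,W)\eta (X)-g(X,W)\eta (Y)+\eta (R(X,Y)W)]\xi ,
\]
vanish by (\ref{27}).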
In view of (\ref{29f}) we obtain from (\ref{29b})
\begin{equation}(\nabla_WR)(X, Y)\xi =g(Y,\phi W)\phi X-g(X,\phi W)\phi Y+\phi R(X,Y)\phi W.\label{29g}\end{equation}
\section{ Locally $\phi $-semisymmetric Kenmotsu Manifolds}
\begin{definition} A Kenmotsu manifold $M$ is said to be locally $\phi $-semisymmetric if
\begin{equation} \phi ^2((R(U,V).R)(X,Y)Z)=0, \label{302} \end{equation}
for all horizontal vector fields $X$, $Y$, $Z$, $U$, $V$ on $M$.
\end{definition}
First we suppose that $M$ is a Kenmotsu manifold such that
\begin{equation} \phi ^2((R(U,V).R)(X,Y)\xi )=0, \label{303}\end{equation}
for all horizontal vector fields $X$, $Y$, $U$ and $V$ on $M$.
Differentiating (\ref{29g}) covariantly with respect to a horizontal vector field $U$, we get
\begin{equation} \begin{array} {rcl} && (\nabla_U\nabla_VR)(X, Y)\xi \\
&=&[g(X,\phi V)g(U,\phi Y)-g(Y,\phi V)g(U,\phi X)+g(\phi U, R(X,Y)\phi V)]\xi \\
&+&\phi (\nabla_U R)(X,Y)\phi V.\end{array}\label{30}\end{equation}
Using (\ref{29e}) we obtain from (\ref{30})
\begin{equation} \begin{array} {rcl}(\nabla_U\nabla_VR)(X, Y)\xi &=&[g(Y,V)g(U,X)-g(X,V)g(U,Y)+g(R(X,Y)V,U)]\xi \\
&+&\phi (\nabla_U R)(X,Y)\phi V.\end{array}\label{31}\end{equation}
Interchanging $U$ and $V$ on (\ref{31}) we get
\begin{equation} \begin{array} {rcl}(\nabla_V\nabla_UR)(X, Y)\xi &=&[g(Y,U)g(V,X)-g(X,U)g(V,Y)+g(R(X,Y)U,V)]\xi \\
&+&\phi (\nabla_V R)(X,Y)\phi U.\end{array}\label{32}\end{equation}
From (\ref{31}) and (\ref{32}) it follows that
\begin{equation} \begin{array} {rcl} (R(U,V).R)(X,Y)\xi &=&2[g(Y,V)g(U,X)-g(X,V)g(U,Y)-R(X, Y, U, V)]\xi \\
&+&\phi \{(\nabla_U R)(X,Y)\phi V- (\nabla_V R)(X,Y)\phi U\} .\end{array}\label{33}\end{equation}
Again from (\ref{303}) we have
\begin{equation} (R(U,V).R)(X,Y)\xi =0. \label{34}\end{equation}
From (\ref{33}) and (\ref{34}) we have
\begin{equation} \begin{array} {rcl}&& 2[g(Y,V)g(U,X)-g(X,V)g(U,Y)-R(X, Y, U, V)]\xi \\
&+&\phi \{(\nabla_U R)(X,Y)\phi V- (\nabla_V R)(X,Y)\phi U\} \\
&=& 0 .\end{array}\label{35}\end{equation}
Applying $\phi $ on (\ref{35}) and using (\ref{29e}), (\ref{29g}) and (\ref{23}) we get
\begin{equation} (\nabla_U R)(X,Y)\phi V- (\nabla_V R)(X,Y)\phi U =0.\label{36} \end{equation}
In view of (\ref{35}) and (\ref{36}) we get
\begin{equation} R(X, Y, U, V)=g(Y,V)g(U,X)-g(X,V)g(U,Y) ,\label{37}\end{equation}
\begin{equation} R(X, Y, U, V)=-\{ g(X,V)g( U,Y)-g(Y,V)g(U,X)\} ,\label{37i}\end{equation}
for all horizontal vector fields $X$, $Y$, $U$ and $V$ on $M$. Hence $M$ is of constant $\phi $-holomorphic sectional curvature $-1$
and hence of constant curvature $-1$. This leads to the following:
\begin{theorem} If a Kenmotsu manifold $M$ satisfies the condition $ \phi ^2((R(U,V).R)(X,Y)\xi)=0 $, for all horizontal
vector fields $X$, $Y$, $U$ and $V$ on $M$, then $M$ is a manifold of constant curvature $-1$.
\end{theorem}
We consider a Kenmotsu manifold which is locally $\phi $-semisymmetric. Then from (\ref{302}) we have
\begin{equation} (R(U,V).R)(X,Y)Z=g((R(U,V).R)(X,Y)Z,\xi )\xi , \label{38} \end{equation}
from which we get
\begin{equation} (R(U,V).R)(X,Y)Z=-g((R(U,V).R)(X,Y)\xi, Z )\xi \label{39} \end{equation}
for all horizontal vector fields $X$, $Y$, $Z$, $U$, $V$ on $M$.
Now taking inner product on both sides of (\ref{33}) with a horizontal vector field $Z$, we obtain
\begin{equation} g((R(U,V).R)(X,Y)\xi, Z) =g(\phi (\nabla_U R)(X,Y)\phi V,Z)- g(\phi (\nabla_V R)(X,Y)\phi U,Z) .\label{39a}\end{equation}
Using (\ref{24b}) and (\ref{39}) we get from (\ref{39a})
\begin{equation} (R(U,V).R)(X,Y)Z=[g((\nabla_U R)(X,Y)\phi V,\phi Z)- g((\nabla_V R)(X,Y)\phi U,\phi Z)]\xi. \label{39b} \end{equation}
Differentiating (\ref{29e}) covariantly with respect to a horizontal vector field $V$, we get
\begin{equation} \begin{array} {rcl} && (\nabla_VR)(X, Y)\phi Z \\
&=&[-g(Y,Z)g(V,\phi X)+g(X,Z)g(V,\phi Y)-g(V, R(X,Y)Z)]\xi \\
&+&\phi (\nabla_V R)(X,Y)Z.\end{array}\label{39c}\end{equation}
Taking inner product on both sides of (\ref{39c}) with a horizontal vector field $U$, we obtain
\begin{equation} g\{(\nabla_VR)(X, Y)\phi Z,U\} =g\{\phi (\nabla_VR)(X, Y) Z, U\}.\label{39d}\end{equation}
Using (\ref{24b}) we get from above
\begin{equation} g\{(\nabla_VR)(X, Y)\phi Z,U\} =-g\{ (\nabla_VR)(X, Y) Z, \phi U\}.\label{39e}\end{equation}
In view of (\ref{39e}) we obtain from (\ref{39b})
\begin{equation} (R(U,V).R)(X,Y)Z =[-g((\nabla_U R)(X,Y) V,\phi^2 Z)+ g((\nabla_V R)(X,Y)U,\phi^2 Z)]\xi, \label{39f} \end{equation}
which implies that
\begin{equation} (R(U,V).R)(X,Y)Z =[g((\nabla_U R)(X,Y) V, Z)- g((\nabla_V R)(X,Y)U,Z)]\xi, \label{39h} \end{equation}
i.e.
\begin{equation} (R(U,V).R)(X,Y)Z =[-(\nabla_U R)(X,Y,Z,V)+ (\nabla_V R)(X,Y,Z,U)]\xi ,\label{39i} \end{equation}
for any horizontal vector field $X$, $Y$, $Z$, $U$, $V$ on $M$. Hence we can state the following:
\begin{theorem} A Kenmotsu manifold $M$, $n\geq3$, is locally $\phi $-semisymmetric
if and only if the relation (\ref{39i}) holds for all horizontal vector fields $X$, $Y$, $Z$, $U$, $V$ on $M$.\end{theorem}
\section{ Characterization of Locally $\phi $-semisymmetric Kenmotsu Manifolds}
In this section we investigate the condition of local $\phi $-semisymmetry of a Kenmotsu manifold for arbitrary vector fields
on $M$. To find this we need the following results.
\begin{lemma} For any horizontal vector fields $X$, $Y$ and $Z$ on a Kenmotsu manifold $M$, we have
\begin{equation} (\nabla_\xi R)(X,Y)Z=(\ell _\xi R)(X,Y)Z+2R(X,Y)Z.\end{equation}\end{lemma}
\begin{proof} Let $X^\ast$, $Y^\ast$ and $Z^\ast$ be $\xi $-invariant horizontal vector field extensions of $X$, $Y$ and $Z$ respectively.
Since $X^\ast $ is the $\xi $-invariant extension of $X$, we get by using (\ref{25})
\begin{equation} \nabla _\xi X^\ast =\nabla_{X^\ast}\xi =X^\ast. \label{41}\end{equation}
Now making use of invariance of $X^\ast$, $Y^\ast$ and $Z^\ast$ by $\xi $ and using (\ref{41}) we get
\begin{equation} \begin{array} {rcl} (\ell _\xi R)(X^\ast ,Y^\ast )Z^\ast &=&[\xi ,R(X^\ast ,Y^\ast )Z^\ast ]\\
&=&\nabla_\xi (R(X^\ast ,Y^\ast )Z^\ast )-\nabla_{R(X^\ast ,Y^\ast )Z^\ast }\xi \\
&=&(\nabla_\xi R)(X^\ast ,Y^\ast )Z^\ast +R(\nabla_\xi X^\ast ,Y^\ast )Z^\ast +R(X^\ast ,\nabla_\xi Y^\ast )Z^\ast\\
&+&R(X^\ast ,Y^\ast )\nabla_\xi Z^\ast -R(X^\ast ,Y^\ast )Z^\ast \\
&=&(\nabla_\xi R)(X^\ast ,Y^\ast )Z^\ast +R( X^\ast ,Y^\ast )Z^\ast +R(X^\ast ,Y^\ast )Z^\ast\\
&+&R(X^\ast ,Y^\ast )Z^\ast -R(X^\ast ,Y^\ast )Z^\ast \\
&=&(\nabla_\xi R)(X^\ast ,Y^\ast )Z^\ast +2R( X^\ast ,Y^\ast )Z^\ast \end{array} \end{equation}
Hence we get the conclusion.\end{proof}
\begin{lemma} For any vector fields $X$, $Y$ and $Z$ on a Kenmotsu manifold $M$ we have
\begin{equation} \begin{array} {rcl} R(\phi ^2X,\phi ^2Y)\phi ^2Z&=&-R(X,Y)Z+\eta (Z)\{\eta (X)Y-\eta (Y)X\}\\
&+&\{\eta (Y)g(X,Z)-\eta (X)g(Y,Z)\}\xi \end{array} \end{equation}\end{lemma}
Now Lemma 4.1 and Lemma 4.2 together imply the following:
\begin{lemma} For any vector fields $X$, $Y$, $Z$ and $U$ on a Kenmotsu manifold $M$, we have
\begin{equation} \begin{array} {rcl}&&(\nabla_{\phi ^2U} R)(\phi ^2X,\phi ^2Y)\phi ^2Z \\
&=& (\nabla_U R)(X,Y)Z-\eta (X)H_1(Y,U)Z+\eta (Y)H_1(X,U)Z+\eta (Z)H_1(X,Y)U\\
&+&\eta (U)[\eta (Z)\{\eta (X)\ell _\xi Y-\eta (Y)\ell _\xi X\}-(\ell _\xi R)(X,Y)Z]\\
&+&2\eta (U)[R(X,Y)Z-\eta (Z)\{\eta (X)Y-\eta (Y)X\}\\
&-&\{\eta (Y)g(X,Z)-\eta (X)g(Y,Z)\}\xi].\end{array} \label{42} \end{equation}
where the tensor field $H_1$ of type (1, 3) is given by
\begin{equation} H_1(X,Y)Z=R(X,Y)Z-g(X,Z)Y+g(Y,Z)X, \label{4.66} \end{equation}
for all vector fields $X$, $Y$, $Z$ on $M$.
\end{lemma}
Now let $X$, $Y$, $Z$, $U$, $V$ be arbitrary vector fields on $M$.\\
Now we compute $(R(\phi ^2U, \phi ^2V).R)(\phi ^2X, \phi ^2Y)\phi ^2Z$
in two different ways. Firstly from (\ref{39i}), (\ref{21}) and (\ref{42}) we get
\begin{equation} \begin{array}{rcl} &&(R(\phi ^2U,\phi ^2V).R)(\phi ^2X,\phi ^2Y)\phi ^2Z\\
&=&\{(\nabla_U R)(X,Y,Z,V)-(\nabla_V R)(X,Y,Z,U)\}\xi\\
&+&\{\eta (U)\eta \{(\nabla_V R)(X,Y)Z\}-\eta (V)\eta \{(\nabla_U R)(X,Y)Z\}\}\xi\\
&-&\eta (X)\{H(Y,U,Z,V)-H(Y,V,Z,U)\}\xi \\
&+& \eta (Y)\{H(X,U,Z,V)-H(X,V,Z,U)\}\xi \\
&+&\eta (Z)\{H(X,Y,U,V)-H(X,Y,V,U)\}\xi\\
&+&\eta (X)\eta (Z)\{\eta(U)g(\ell _\xi Y,V)-\eta (V)g(\ell _\xi Y,U)\}\xi \\
&-&\eta (Y)\eta (Z)\{\eta (U)g(\ell _\xi X,V)-\eta (V)g(\ell _\xi X,U)\}\xi \\
&+&2\{\eta (U)R(X,Y,Z,V)-\eta (V)R(X,Y,Z,U)\}\xi \\
&+&2\eta (Z)\eta (V)\{\eta (X)g(Y,U)-\eta (Y)g(X,U)\}\xi \\
&-&2\eta (Z)\eta (U)\{\eta (X)g(Y,V)-\eta (Y)g(X,V)\}\xi , \end{array}\label{43} \end{equation}
where $H(X,Y,Z,U)=g(H_1(X,Y)Z,U)$ and the tensor field $H_1$ of type (1, 3) is given by (\ref{4.66})\\
Secondly we have
\begin{equation} \begin{array}{rcl}&& (R(\phi ^2U,\phi ^2V).R)(\phi ^2X,\phi ^2Y)\phi ^2Z=R(\phi ^2U,\phi ^2V)R(\phi ^2X,\phi ^2Y)\phi ^2Z\\
&-&R(R(\phi ^2U,\phi ^2V)\phi ^2X,\phi ^2Y)\phi ^2Z-R(\phi ^2X,R(\phi ^2U,\phi ^2V)\phi ^2Y)\phi ^2Z\\
&-&R(\phi ^2X,\phi ^2Y)R(\phi ^2U,\phi ^2V)\phi ^2Z.\end{array}\label{44} \end{equation}
By straightforward calculation from (\ref{44}) we get
\begin{equation} \begin{array}{rcl} &&(R(\phi ^2U,\phi ^2V).R)(\phi ^2X,\phi ^2Y)\phi ^2Z\\
&=&-(R(U,V).R)(X,Y)Z\\
&+&\eta (X)\{\eta (V)H_1(U,Y)Z-\eta (U)H_1(V,Y)Z\}\\
&+&\eta (Y)\{\eta (V)H_1(X,U)Z-\eta (U)H_1(X,V)Z\}\\
&+&\eta (Z)\{\eta (V)H_1(X,Y)U-\eta (U)H_1(X,Y)V\}\\
&+&\{\eta (V)H(X,Y,Z,U)-\eta (U)H(X,Y,Z,V)\}\xi ,
\end{array}\label{45} \end{equation}
where $H(X,Y,Z,U)=g(H_1(X,Y)Z,U)$ and the tensor field $H_1$ of type (1, 3) is given by (\ref{4.66})\\
From (\ref{43}) and (\ref{45}) we obtain
\begin{equation} \begin{array}{rcl} && (R(U,V).R)(X,Y)Z\\
&=&[-(\nabla_U R)(X,Y,Z,V)+(\nabla_V R)(X,Y,Z,U)]\xi\\
&+&[\eta (V)\eta \{(\nabla_U R)(X,Y)Z\}-\eta (U)\eta \{(\nabla_V R)(X,Y)Z\}]\xi\\
&+&\eta (X)[\{H(Y,U,Z,V)-H(Y,V,Z,U)\}\xi+\eta (V)H_1(U,Y)Z-\eta (U)H_1(V,Y)Z]\\
&-& \eta (Y)[\{H(X,U,Z,V)-H(X,V,Z,U)\}\xi-\eta (V)H_1(X,U)Z+\eta (U)H_1(X,V)Z]\\
&-&\eta (Z)[\{H(X,Y,U,V)-H(X,Y,V,U)\}\xi+\eta (V)H_1(X,U)Z-\eta (U)H_1(X,V)Z]\\
&+&\{\eta (V)H(X,Y,Z,U)-\eta (U)H(X,Y,Z,V)\}\xi\\
&-&2\{\eta (U)R(X,Y,Z,V)-\eta (V)R(X,Y,Z,U)\}\xi \\
&+&\{\eta (U)(\ell _\xi R)(X,Y,Z,V)-\eta (V)(\ell _\xi R)(X,Y,Z,U)\}\xi\\
&-&\eta (Z)\eta (X)\{\eta (U)g(\ell _\xi Y,V)-\eta (V)g(\ell _\xi Y,U)\}\xi \\
&+&\eta (Z)\eta (Y)\{\eta (U)g(\ell _\xi X,V)-\eta (V)g(\ell _\xi X,U)\}\xi \\
&-&2\eta (Z)\eta (V)\{\eta (X)g(Y,U)-\eta (Y)g(X,U)\}\xi \\
&+&2\eta (Z)\eta (U)\{\eta (X)g(Y,V)-\eta (Y)g(X,V)\}\xi . \end{array}\label{46} \end{equation}
Thus in a locally $\phi $-semisymmetric Kenmotsu manifold the relation (\ref{46}) holds for all arbitrary vector fields
$X$, $Y$, $Z$, $U$, $V$ on $M$. Next, if the relation (\ref{46}) holds in a Kenmotsu manifold, then for all horizontal vector fields
$X$, $Y$, $Z$, $U$, $V$ on $M$, we get the relation (\ref{39i}) and hence the manifold is locally $\phi $-semisymmetric. \\
Thus we can state the following:
\begin{theorem} A Kenmotsu manifold $M$ is locally $\phi $-semisymmetric if and only if the relation (\ref{46}) holds for
arbitrary vector fields $X$, $Y$, $Z$, $U$, $V$ on $M$.\end{theorem}
\begin{thebibliography}{99}
\bibitem{bde} Blair, D. E., Contact manifolds in Riemannian geometry. Lecture Notes in Math. No. 509. Springer 1976.
\bibitem{ce1} Cartan, E., Sur une classe remarquable d'espaces de Riemann, I, Bull. de la Soc. Math. de France, 54(1926), 214-216.
\bibitem{ce2} Cartan, E., Sur une classe remarquable d'espaces de Riemann, II, Bull. de la Soc. Math. de France, 55(1927), 114-134.
\bibitem{ce3} Cartan, E., Le\c{c}ons sur la g\'{e}om\'{e}trie des espaces de Riemann, 2nd ed., Paris, 1946.
\bibitem{kk} Kenmotsu, K., A class of almost contact Riemannian manifolds, Tohoku Math. J., 24(1972), 93-102.
\bibitem{ak} Shaikh, A. A., Baishya, K. K., On $\phi $-Symmetric LP- Sasakian manifolds, Yokohama Math. J., 52(2005), 97-112.
\bibitem{aks} Shaikh, A. A., Baishya, K. K. and Eyasmin, S., On $\phi $-recurrent generalized ($k, \mu $)-contact metric manifolds, Lobachevski J. Math.,
27(2007), 3-13.
\bibitem{ats} Shaikh, A. A., Basu, T. and Eyasmin, S., On locally $\phi $-symmetric $(LCS)_n$-manifolds, Int. J. of Pure and Appl. Math., 41(8)(2007),
1161-1170.
\bibitem{aats} Shaikh, A. A., Basu, T. and Eyasmin, S., On the existence of $\phi $-recurrent $(LCS)_n$-manifolds, Extracta Mathematica, 23(1)(2008)
, 71-83.
\bibitem{aaha} Shaikh, A. A., Mondal, C. K. and Ahmad, H., On locally $\phi $-semisymmetric Sasakian manifolds, arXiv:1302.2139v3 [math.DG], 11 Feb 2017.
\bibitem{sz1} Szab\'{o}, Z. I., Structure theorems on Riemannian spaces satisfying $R(X, Y).R=0$, I, The local version, J. Diff. Geom. 17(1982), 531-582.
\bibitem{sz2} Szab\'{o}, Z. I., Structure theorems on Riemannian spaces satisfying $R(X, Y).R=0$, II, Global version, Geom. Dedicata, 19(1983), 65-108.
\bibitem{sz3} Szab\'{o} , Z. I., Classification and construction of complete hypersurfaces satisfying $R(X, Y).R=0$, Acta. Sci. Math., 47(1984), 321-348.
\bibitem{tt} Takahashi, T., Sasakian $\phi $-symmetric spaces, Tohoku Math. J., 29(1977), 91-113.
\end{thebibliography}
\hspace{10pt}
\hspace{10pt}
\end{document}
|
\begin{document}
\draft
\title{A protocol for secure and deterministic quantum key expansion}
\author{Xiang-Bin Wang\thanks{Email address: [email protected]}\\
IMAI Quantum Computation and Information Project,
ERATO, JST,
Daini Hongo White Bldg. 201, \\5-28-3, Hongo, Bunkyo,
Tokyo 133-0033, Japan}
\maketitle
\begin{abstract}
In all existing protocols for private communication with encryption and
decryption, the pre-shared key can be used only {\it one time}.
We give a deterministic quantum key expansion
protocol where the pre-shared key can be recycled. Our protocol
costs fewer qubits and almost zero classical communication.
Since the bit values of the expanded key are deterministic,
this protocol can also be used for direct communication.
Our protocol includes authentication steps; therefore we
need not worry about the case in which
Alice and Bob are completely isolated.
\end{abstract}
\section{Introduction}
Information processing with quantum systems enables us to do novel tasks
which seem to be impossible with its classical counterpart
\cite{wies,shor,bene}.
Among all of the non-trivial quantum algorithms,
quantum key distribution (QKD)
\cite{bene,eker,ben2,ben3,brus,gisi,maye}
is one of the most important and interesting
quantum information processing tasks due to its relatively low technical overhead:
the only things required are quantum state preparation, transmission and measurement.
It needs neither quantum memory nor collective quantum operation such
as the controlled-NOT (CNOT) gate.
Therefore, QKD will be the first practical quantum information
processor \cite{gisi}.
QKD makes it possible for two remote parties,
Alice and Bob, to communicate with unconditional
security: they first build up a secure shared key and then
use this key as a one-time pad to send the private message.
However, in the standard BB84\cite{bene} protocol, at least half of the transmitted qubits
are discarded due to the mismatch between the preparation bases and the measurement bases of
the qubits. Also, the standard BB84 protocol does not include authentication.
This makes it insecure in the case that Alice and Bob are completely isolated:
the eavesdropper (Eve) may intercept all classical and quantum information, and
the actual situation then
is that each of Alice and Bob is doing QKD with Eve separately.
In this Letter, we shall give an efficient protocol to expand the key deterministically
or to make direct communication, with
authentication included. Our protocol
has the advantage of lower cost in both classical communication and quantum state transmission.
The pre-shared key can be recycled in our protocol.
The requirement of pre-sharing a secret string is not a serious drawback
of our protocol. In the case authentication is required for security,
all protocols need a pre-shared secret string;
in the case that authentication is thought to be unnecessary,
our protocol does not require pre-sharing anything initially:
they may first use any standard QKD protocol to
generate a secret random string and then use this string as the pre-shared string.
The initial version of the QKD protocol\cite{bene} proposed by Bennett and Brassard
can be made fully efficient by delaying the measurement.
This delay requires quantum memories, which
are a very difficult technique.
Another method is to assign
significantly different probabilities to the different bases
\cite{lo2}. Although unconditional security of the scheme is
given \cite{lo2}, it has the disadvantage that a larger number of
key bits must be generated at one time.
Roughly speaking, with the basis mismatch rate set to $\epsilon$,
the number of qubits it
needs to generate at one time is $\epsilon^{-2}$ times that of the
standard BB84\cite{bene}. In a
recently proposed QKD protocol without public announcement of basis
(PAB) \cite{hwa2,wang}, there is no measurement mismatch.
However, the protocol in its present form has the disadvantage
that one must make many batches of keys before any batch is used to
encrypt and transmit classical messages. Note that
they must discard the pre-shared secret string after the key expansion.
To really have an advantage in the efficiency, one should generate
as many secret bits as possible at one time, by that protocol.
Blindly generating too many secret bits at one time means a higher cost:
First, the complexity of decoding the error correction code rises rapidly
with the size of the code. Second, the quantum channel could be expensive.
In practice, it could be the case that we don't know how many secret bits are
needed in the future communication.
For example, a detective is sent to his enemy country Duba from the country VSA.
He is scheduled
to work in Duba for only one month and then come back to the headquarter
in VSA. The so called secret bits will be useless after that month.
The existing protocols for quantum direct communication can save some
cost of classical communications. Unfortunately, they are either
insecure\cite{cai,hoff,woj} or only quasisecure\cite{guo}.
Moreover, all of them require quantum memory.
So far, it seems that our protocol is the only one which has the advantage of
lower cost of both quantum state transmission and classical communication
while still maintaining unconditional security.
\section{Our protocols and security proof}
We shall use a reduction technique. We first reduce the classical protocol
to a quantum protocol (the one that uses perfect entangled pairs and quantum memories), and then reduce
the quantum protocol back to a classical protocol (the one without any entangled
pair or quantum memory).
We start with a trivial scheme, {\it Protocol 1}.\\
{\it Protocol 1, Classical protocol}\\
Alice and Bob share a secret key, i.e., $g$-bit random string, $G$. Alice wants to
send an $N$-bit classical binary string $s$ to Bob, $g>N$. She chooses
the first $N$ bits of $G$ and denotes
this substring as $b$. She prepares an $N$-qubit string $q$
which is in the
quantum state $|b\oplus s\rangle$, and sends these $N$ qubits to Bob. Here $\oplus$
is the summation modulo 2. Suppose the values of the
$i$th element in string $b$ and $s$ are $b_i$ and $s_i$, respectively,
given any value of $b_i\oplus s_i$, she just prepares the $i$th quantum state
$|b_i\oplus s_i\rangle$ accordingly. All qubit states in $q$ are prepared
in $Z$ basis.
Bob measures each of the qubits in the $Z$ basis
and obtains an $N$-bit classical string; taking the
$\oplus$ operation of this string
and the string $b$, he obtains the message string. Alice and Bob discard
the string $b$.
This is just
classical private communication with one-time-pad.
Obviously, the message string $s$ is perfectly
secure no matter how noisy the quantum channel is.
Though there may be bit-flip errors in the transmitted message,
there is no information leakage.
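To make the bookkeeping explicit, here is a minimal classical sketch of {\it Protocol 1} (an illustration only; the key and message values are made up): since every qubit is prepared and measured in the $Z$ basis, the protocol reduces to the classical one-time-pad $b\oplus s$.
\begin{verbatim}
# Classical view of Protocol 1: encrypt with b, decrypt with b.
def xor_bits(x, y):
    return [xi ^ yi for xi, yi in zip(x, y)]

b = [1, 0, 1, 1, 0]         # first N bits of the pre-shared string G
s = [0, 1, 1, 0, 0]         # Alice's message string
c = xor_bits(b, s)          # bit values carried by the Z-basis qubits
assert xor_bits(b, c) == s  # Bob's Z measurements plus b recover s
\end{verbatim}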
In this protocol, the
one-time-pad cannot be recycled. Since all qubits are prepared in $Z$ basis,
Eve in principle can have full information of $b\oplus s$ without disturbing
the quantum string $q$ at all.
For the purpose of recycling the one-time-pad, we reduce it to our
{\it Protocol 2}, a quantum protocol. Later on, we shall classicalize
{\it Protocol 2}.\\
{\it Protocol 2:} Secure communication with recyclable quantum one-time-pad.\\
Alice and Bob share $g$ pairs of (exponentially) perfect entangled pairs of
$|\phi^+\rangle= \frac{1}{\sqrt 2}(|00\rangle+|11\rangle)$. For convenience
we shall call this pair state as EPR pair.
Alice wants to
send $N$-bit classical binary string $s$ to Bob. According to each individual
bit information, she prepares an $N-$qubit quantum state $|s\rangle$, all of
them being prepared in $Z$ basis.
She chooses her halves of first $N$
pairs from $g$ pairs and number them from 1 to $N$. We denote these $N$ pairs
by $E$, Alice's halves of $E$ by $E_A$, and Bob's halves of $E$ by $E_B$.
To each pair consisting of the $i$th qubit in $|s\rangle$
and the $i$th qubit in $E_A$, she applies a CNOT operation with
the $i$th qubit in $E_A$ as the control
qubit and the $i$th qubit in $|s\rangle$ as the target
qubit. Here $i$ runs from 1 to $N$. She sends those $N$ target qubits to Bob.
Bob applies a CNOT operation to each pair consisting of the $i$th received qubit and the
$i$th qubit of $E_B$, with the received qubit being the
target qubit and the qubit in $E_B$ being the control qubit.
Bob then measures each target qubit in the $Z$ basis and obtains
a classical string. He uses this string as the message from Alice.\\
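The following is a minimal statevector sketch of {\it Protocol 2} for a single message bit (an illustration only, not part of the protocol itself). The qubit order is Alice's EPR half, Bob's EPR half, message qubit; it shows that Bob's $Z$ measurement returns Alice's bit with probability one.
\begin{verbatim}
import numpy as np

def cnot(n, control, target):
    # CNOT on n qubits as a permutation matrix; qubit 0 is the leftmost bit.
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(b << (n - 1 - k) for k, b in enumerate(bits)), i] = 1.0
    return U

s = 1                                                # Alice's message bit
phi_plus = np.array([1., 0., 0., 1.]) / np.sqrt(2)   # shared EPR pair
state = np.kron(phi_plus, np.eye(2)[s])              # |phi+>_{AB} (x) |s>_m
state = cnot(3, 0, 2) @ state                        # Alice's CNOT (E_A controls m)
state = cnot(3, 1, 2) @ state                        # Bob's CNOT  (E_B controls m)
p_one = sum(abs(state[i]) ** 2 for i in range(8) if i & 1)
print(p_one)                                         # 1.0: Bob reads s = 1
\end{verbatim}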
The message $s$ in this protocol is as secure as that in {\it Protocol 1}.
\\
{\it Proof.}
Imagine the case that Alice measures each qubit in $E_A$ in the $Z$ basis
in the beginning, then {\it protocol 2} is identical to {\it Protocol 1}.
However, no one except Alice knows whether she has taken the measurement.
Therefore she can choose not to measure her halves of entangled pairs.
This is just {\it Protocol 2}. In {\it Protocol 2}, $N$ EPR pairs have
been used as a quantum shared key, however, we don't have to
discard them after the message $s$ has been decrypted.
Instead, Alice and Bob may do purification to those $N$ pairs, given the
information of
bit-flip rate and phase-flip rate. After the purification, the outcome pairs
can be re-used as (almost) perfect entangled pairs. So the next question is on how
to do the purification efficiently. The bit-flip rate is defined as the percentage of pairs
which have been changed into state
$|\psi^+\rangle=\frac{1}{\sqrt 2}(|01\rangle +|10\rangle)$ or state
$|\psi^-\rangle=\frac{1}{\sqrt 2}(|01\rangle -|10\rangle)$;
phase-flip rate is defined as the percentage of pairs
which have been changed into state
$|\phi^-\rangle=\frac{1}{\sqrt 2}(|00\rangle -|11\rangle)$ or state
$|\psi^-\rangle$. Or mathematically,
if we consider the Pauli channel consisting of the following operations:
\begin{eqnarray}
\sigma_x= \left( \begin{array}{cc} 0 & 1 \\
1 & 0
\end{array} \right),
\sigma_y= \left( \begin{array}{cc} 0 & -i \\
i & 0
\end{array} \right),
\sigma_z= \left( \begin{array}{cc} 1 & 0 \\
0 & -1
\end{array} \right)\end{eqnarray}
the channel operation $\sigma_x$ or $\sigma_y$ causes a bit-flip,
the channel operation $\sigma_y$ or $\sigma_z$ will cause a phase-flip.
One direct way to know the bit-flip rate and phase-flip rate is to let
Alice and Bob randomly take some samples of those pairs and then measure
the samples in $Z$ ( $\{|0\rangle,|1\rangle\}$)or in $X$
( $\{|\pm\rangle=\frac{1}{2}(|0\rangle\pm |1\rangle\})$)basis in each side,
and obtain the statistical
values of those flip rates for the remained pairs. However,
in testing the phase-flip rates with samples of those used EPR pairs,
the corresponding message bits must be discarded because once the bit values
of EPR pairs are announced, Eve has a way to attack encrypted message bits.
Moreover, we want to reduce
the protocol back to a classical protocol; therefore we don't directly sample the
entangled pairs. We can have a better way for the error test.
Consider the initial state of an entangled pair and the
quantum state of the message bit $|\chi_A\rangle$,
\begin{eqnarray}
|h_0\rangle = |\phi^+\rangle\otimes |\chi_A\rangle.\label{initial}
\end{eqnarray}
In the most general case $|\chi_A\rangle=\alpha |0\rangle +\beta |1\rangle$ and $|\alpha|^2+|\beta|^2=1$.
In our {\it protocol 2}, there will be Alice's CNOT operation, transmission and Bob's CNOT operation to the message qubit.
In transmission, the encrypted quantum state of message bit
could bear a flipping error of $\sigma_x,\sigma_z$ or $\sigma_y$.
It is easy to
see that, after Bob's CNOT operation, $\sigma_x$ error of transmission channel
will cause a $\sigma_x$ error to
the the message qubit only, $\sigma_z$ error of
transmission channel will cause a $\sigma_z$ error to
the EPR pair and $\sigma_z$ error to the message qubit, while $\sigma_y$ error
of the transmission channel will cause a $\sigma_z$ error to the EPR pair {\it and} a $\sigma_y$ error
to the message qubit. That is to say, the final state will be
\begin{eqnarray}
|h_f\rangle=|\phi^+\rangle\otimes (\sigma_x |\chi_A\rangle)
\end{eqnarray}
given a $\sigma_x$ flip to the encrypted message qubit in transmission;
\begin{eqnarray}
|h_f\rangle=(\sigma_z|\phi^+\rangle)\otimes (\sigma_z |\chi_A\rangle).\label{phase}
\end{eqnarray}
given a $\sigma_z$ flip to the encrypted qubit in transmission;
and
\begin{eqnarray}
|h_f\rangle=(\sigma_z|\phi^+\rangle)\otimes (\sigma_y|\chi_A\rangle)
\end{eqnarray}
given a $\sigma_y$ flip to the encrypted qubit in transmission.
We now show eq.(\ref{phase}). The other two equations can be shown in a similar way.
Consider the initial state defined by eq.(\ref{initial}). After the CNOT operation done by Alice, the state is changed
to
\begin{eqnarray}
|h_0'\rangle=\frac{1}{\sqrt 2}|00\rangle\otimes |\chi_A\rangle +\frac{1}{\sqrt 2}|11\rangle \otimes(\alpha |1\rangle+
\beta|0\rangle).
\end{eqnarray}
Suppose there is a phase-flip to the encrypted qubit during the transmission; the total state is then changed to
\begin{eqnarray}
|h_0''\rangle = \frac{1}{\sqrt 2}|00\rangle\otimes (\alpha |0\rangle-\beta|1\rangle)
+\frac{1}{\sqrt 2}|11\rangle \otimes(-\alpha |1\rangle+
\beta|0\rangle).
\end{eqnarray}
After Bob applies the CNOT operation, the final state is
\begin{eqnarray}
|h_f\rangle=|\phi^-\rangle\otimes |\chi_A\rangle= (\sigma_z |\phi^+\rangle)\otimes (\sigma_z |\chi_A\rangle).
\end{eqnarray}
This completes the proof. Although there could be phase-flips to the transmitted qubits, as we have shown already,
in principle there is no information leakage of the original message. Therefore we disregard those phase-flips to the
message qubits.
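As an independent numerical sanity check (not part of the protocol), the following short Python/numpy
sketch verifies eq.~(\ref{phase}): starting from the state of eq.~(\ref{initial}), Alice's CNOT, a
$\sigma_z$ on the transmitted qubit and Bob's CNOT together produce
$(\sigma_z|\phi^+\rangle)\otimes(\sigma_z |\chi_A\rangle)$. The qubit ordering, the choice of which EPR
half controls each CNOT, and the amplitudes $\alpha,\beta$ are our own illustrative assumptions.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(qubit_ops, n=3):
    # tensor product of single-qubit operators on n qubits (qubit 0 leftmost)
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, qubit_ops.get(k, I2))
    return out

def cnot(control, target, n=3):
    # CNOT with the given control and target qubit indices
    P0 = np.array([[1, 0], [0, 0]], dtype=complex)
    P1 = np.array([[0, 0], [0, 1]], dtype=complex)
    X  = np.array([[0, 1], [1, 0]], dtype=complex)
    return op_on({control: P0}, n) + op_on({control: P1, target: X}, n)

# qubit 0: Alice's EPR half, qubit 1: Bob's EPR half, qubit 2: message qubit
a, b = 0.6, 0.8j                                       # |a|^2 + |b|^2 = 1
phi_plus  = np.array([1, 0, 0,  1], dtype=complex) / np.sqrt(2)
phi_minus = np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2)
chi = np.array([a, b], dtype=complex)

h0 = np.kron(phi_plus, chi)                            # eq. (initial)
hf = cnot(1, 2) @ op_on({2: sz}) @ cnot(0, 2) @ h0     # Alice CNOT, sigma_z in transit, Bob CNOT
print(np.allclose(hf, np.kron(phi_minus, sz @ chi)))   # True, i.e. eq. (phase)
\end{verbatim}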
Note that the model of the Pauli channel and classical statistics works perfectly here\cite{sho2,lo3,wangs}, given \emph{arbitrary}
channel noise, including any type of collective noise.
Therefore, if we know the bit-flip rate and phase-flip rate of the channel, we can deduce exactly
the flipping rate of those used EPR pairs.
Therefore we can simply
mix some extra qubits (test qubits) into the transmission of the message qubits.
We do not apply any CNOT operations (quantum encryption or decryption)
to those test
qubits. Half of the test qubits should be prepared in the $X$ basis and half in the
$Z$ basis, and all of the test qubits should be mixed randomly with the message qubits. Bob needs to know the measurement
basis of each qubit so as not to destroy any message qubits.
Bob also needs to know which qubits are for testing
and the original state of each test qubit so as to estimate the
flip rates of the transmission. Therefore, besides $N$ EPR pairs,
they must also share a classical
string $b'$ carrying the information of the bases, positions
and bit values of the test qubits.
Suppose that, after reading the test qubits, Bob finds the error rate of
those test qubits measured in the $X$ basis to be
$t_0$. Then they may safely assume $(t_0+\delta)N$ phase-flips on the
used EPR pairs, where
$\delta$ is a very small number.
The probability that the phase-flip rate of those used EPR pairs
is larger than $t_0+\delta$
is exponentially small. As we have shown earlier, there is no bit-flip error on the used entangled pairs.
Therefore they may purify the used pairs by the standard purification protocol\cite{sho2,ben4}, which costs
only $N\cdot H(t_0+\delta)$ pairs, where
$H(x)= -x\log_2 x -(1-x)\log_2(1-x)$ is the binary entropy function.
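For a rough sense of scale, the cost $N\cdot H(t_0+\delta)$ can be evaluated directly; the numbers below are illustrative choices of our own, not values prescribed by the protocol.
\begin{verbatim}
import numpy as np

def H(x):                                 # binary entropy, in bits
    return -x*np.log2(x) - (1 - x)*np.log2(1 - x)

N, t0, delta = 10**6, 0.03, 0.005         # illustrative values only
print(N * H(t0 + delta))                  # ~2.2e5 pairs consumed by the correction
\end{verbatim}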
Since their purpose is to re-use those pairs securely for private
communication in the future, instead of
really reproducing the perfect EPR pairs,
they need not complete the full procedure of the purification.
Instead, as has been shown in Ref.~\cite{sho2},
no one except Alice herself knows it if she measures
all EPR pairs in the $Z$ basis
at the beginning of the protocol.
Therefore the CSS code can be
classicalized\cite{sho2} if the purpose is the security of
private communication instead of real entanglement purification. Consequently,
the EPR pairs initially shared before running the protocol
can be replaced by a classical random string, and after they run
the protocol they recycle the random string
by a classical Hamming code with the phase-error rate input being $t_0+\delta$.
{\it Protocol 3} can help them to do quantum key expansion efficiently, without any quantum memory or
entanglement resource:
\\
1. Alice and Bob pre-share a secret classical random string $G$.
They are sure that the bit-flip rate and phase-flip rate
of the \emph{physical} channel
are less than $t_x-\delta$ and $t_z-\delta$, respectively.
(In quantum cryptography, the knowledge of the flipping
rates of the physical channel does not guarantee security in any sense.)
They choose two Hamming codes $C_x$ and $C_z$ which can correct
$(t_x+\delta)M$ and $(t_z+\delta)M$ bit errors, respectively.
We suppose $t_x+\delta < 11\%$ and $t_z+\delta < 11\%$.
2. Alice plans to send $N$ deterministic bits, string $s$, to Bob.
Alice and Bob
take an $M$-bit substring $b$, an $M'$-bit substring $b'$, a 200-bit
substring $c$ and a 200-bit substring $d$ from $G$, from left to right.
Here $M= \frac{N}{1-H(t+\delta)}$.
3. Alice expands the message string $s$ to $S$
by Hamming code $C_x$. Obviously, there are $M$ bits in the expanded
string $S$.
She encrypts the expanded string $S$ with string $b$, i.e., she prepares
an $M$-qubit quantum state $|S_q\rangle=|S\oplus b\rangle$ in the $Z$ basis.
All these encrypted message qubits are placed in order. She also
produces $rN=2k$ test qubits and mixes them with the qubits in $|S_q\rangle$.
The position, bit value and preparation basis of each test qubit
are determined by substring $b'$. This requires substring $b'$ to include
$M'=\left(\begin{array}{c}M+2k\\2k\end{array}\right)+4k$ bits.
The bit values (0 or 1), positions and bases ($X$ or
$Z$) of the test qubits are totally random, since $b'$ is random.
After the mixing, she has a quantum sequence $q$ which contains
$M+2k$ qubits.
4. Alice transmits sequence $q$ to Bob.
5. Bob reads $b'$. After receiving sequence $q$ from Alice,
he measures each qubit in the correct basis.
He then separates the test bits and the message bits,
recovering their original positions in each string.
Bob reads the test bits and checks the error rate (authentication).
If he finds the
bit-flip rate $t_{x0}> t$ or
phase-flip rate $t_{z0}> t$ on the test bits, he sends substring $c\oplus d$
to Alice by classical communication and aborts the protocol, with string
$c$ being deleted from $G$.
If he finds the bit-flip rate $t_{x0}\le t$ and
phase-flip rate $t_{z0}\le t$ on the test bits, he sends substring $c$
to Alice by classical communication and continues the protocol.
6. Bob deletes $c$ from $G$.
He decrypts the encrypted expanded message string with $b$ and then
decodes it by Hamming code $C_x$, obtaining the message string.
The probability that Bob's decoded string is not identical
to the original message string $s$ is exponentially close to 0.
The key expansion part (or communication part) has been completed now.
7. Alice reads the 200-bit classical message from Bob.
If it is not $c$, she aborts the protocol with string $c$ being deleted
from $G$. (This is also authentication.) If it is $c$, she deletes
substring $c$ from $G$ and carries out
the next step.
8. Alice and Bob replace $b$ by the coset of $b+C_z$
as the recycled string.\\
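To make the classical post-processing of steps 3 and 6 concrete, here is a minimal Python sketch in
which a toy $[7,4]$ Hamming code stands in for $C_x$, the quantum transmission is replaced by a
classical bit array suffering at most one bit-flip, and the block sizes and random choices are our
own illustrative assumptions.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

# systematic [7,4] Hamming code (corrects one bit-flip per block)
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
Hm = np.array([[1,1,0,1,1,0,0],
               [1,0,1,1,0,1,0],
               [0,1,1,1,0,0,1]])

def encode(s4):                        # step 3: expand s -> S
    return s4 @ G % 2

def correct(r7):                       # step 6: syndrome decoding of C_x
    syn = Hm @ r7 % 2
    if syn.any():
        j = int(np.where((Hm.T == syn).all(axis=1))[0][0])
        r7 = r7.copy(); r7[j] ^= 1
    return r7[:4]                      # systematic code: data sits in the first 4 bits

s  = rng.integers(0, 2, 4)             # message block
b  = rng.integers(0, 2, 7)             # block of the pre-shared string, used as a one-time pad
Sq = encode(s) ^ b                     # encrypted expanded block, sent as Z-basis qubits
noisy = Sq.copy(); noisy[rng.integers(7)] ^= 1      # one bit-flip in transit
print(np.array_equal(correct(noisy ^ b), s))        # Bob decrypts with b, decodes: True
\end{verbatim}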
{\it Remark 1}.
Our cost of qubit-transmission is less than half of
that in the BB84 protocol. Our cost of classical communication is almost
zero.
{\it Remark 2}. After the protocol, strings $b'$ and $d$ can be re-used safely.
In our protocol, even if Alice announces $b',d$, Eve's information
about the message $s$ is 0. Therefore the mutual information between
$s$ and $\{b',d\}$ is
$I(s:\{b',d\})=0$. Therefore, if the message $s$ is announced while $\{b',d\}$
is not announced,
Eve's information about $\{b',d\}$ must also be 0. Consequently, Eve's
information about $\{b',d\}$ must be zero after the protocol.\\
{\it Remark 3.} If we want to reduce the number of pre-shared bits, we can
use fewer test bits, i.e., reduce the value of $r$. In our protocol,
the total number of qubits needed is $r^{-1}$ times that of the BB84 protocol. To avoid
too large a key expansion at one time, we can choose to raise the value
of $\delta$, given a small $r$.
\section{Existing protocols of direct communication with qubits are insecure.}
Our protocol cannot be replaced by
any existing direct communication protocol\cite{bf,cai,hoff,woj,guo} with quantum states.
The insecurity of existing direct communication protocols has been pointed
out already for the case of a noisy channel\cite{cai,hoff}. Here we show
that these protocols are not exponentially secure even with a noiseless
quantum channel. We suppose that there are $m$ test qubits and $N$ message qubits.
Consider the best case, in which they find no error on
the test bits. Even in such a case, the message is still polynomially insecure: Eve has
a non-negligible probability of obtaining a few bits of information about
the message. For example, Eve just intercepts one qubit in transmission,
measures it in the $Z$ basis ($\{|0\rangle,|1\rangle\}$) and then resends it to Bob.
Suppose the physical channel itself is noiseless. Obviously, there is
a probability of $N/(N+m)$ that Bob finds no error on the test bits while Eve
has one bit of information about the message. In particular, in certain
cases, even a 1-bit leakage of the message is disastrous\cite{lo3}.
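With illustrative sizes of our own choosing, this probability is already close to one.
\begin{verbatim}
N, m = 10_000, 200         # illustrative sizes only
print(N / (N + m))         # ~0.98: the intercepted qubit was a message qubit
\end{verbatim}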
This type of
direct private communication is insecure {\it even with a noiseless quantum channel}:
zero error on the test bits only guarantees fewer than $\delta$ errors on
the message bits; it does not guarantee zero phase-flips of the message bits.
{\it In principle, there is no way to verify zero phase-flip error of the untested
bits by looking
at the test bits only.}
The insecurity of the existing protocols
is due to the lack of a privacy amplification step, which is the main
issue in the security of private communication.
One cannot directly append a privacy amplification
step here, since this may change the message bits and therefore destroy the message.
One of the non-trivial points of our protocol is that {\it the transmitted message bits
in our protocol are unconditionally secure without any privacy amplification, no matter
how noisy the channel is}. We only need to correct the \emph{bit}-flip
errors in
the message, and this does not change the message itself.
\section{Discussions}
Our protocol can obviously be used for both key expansion and direct communication.
In the security proof, we have used the pre-condition that Eve has zero
information about the pre-shared string $G$. However, strictly speaking,
this condition does not hold in our real protocol. First, as we have argued,
the pre-shared string can be generated by the standard BB84 QKD protocol, where
Eve's information about the shared key is exponentially close to 0 rather than
strictly 0. Second,
Eve's information about the recycled string is also exponentially close to 0
rather than 0. Eve's exponentially small prior information is not a problem
for the security of classical private communication. However, here we have
used quantum states to carry
the classical message, so Eve may store her \emph{quantum} information about the
pre-shared (or recycled) secret string and directly attack
the decoded message or the updated key in the end.
By the universality
of quantum composability\cite{comp}, we know that an exponentially small amount
of prior information held by Eve about the pre-shared string or the recycled string only gives her an exponentially
small amount of information about the private message or the updated shared string. Therefore our protocol is unconditionally secure in the real case that
Eve has an exponentially small amount of information about the pre-shared key.
\acknowledgments
I am
very grateful to Prof. H. Imai for his long-term support. I thank D. Leung and H.-K. Lo for pointing
out Ref.~\cite{comp}.
\begin{references}
\bibitem{wies} M. A. Nielsen and I. L. Chuang,
Quantum Computation and Quantum Information,
Cambridge University Press, UK, 2000.
\bibitem{shor} P. Shor, Proc. 35th Ann. Symp. on Found. of
Computer Science. (IEEE Comp. Soc. Press,
Los Alomitos, CA, 1994) 124-134.
\bibitem{bene} C.H. Bennett and G. Brassard, in : Proc. IEEE
Int. Conf. on Computers, systems, and signal
processing, Bangalore (IEEE, New York, 1984)
p.175.
\bibitem{eker} A.K. Ekert, Phys. Rev. Lett. {\bf67}, 661 (1991).
\bibitem{ben2} C.H. Bennett, G. Brassard, and N.D. Mermin, Phys.
Rev.Lett. {\bf68}, 557 (1992).
\bibitem{ben3} C.H. Bennett,
Phys. Rev. Lett. {\bf68}, 3121 (1992) ;
A.K. Ekert, Nature {\bf358}, 14 (1992).
\bibitem{brus} D. Bru\ss, Phys. Rev. Lett. {\bf81},
3018 (1998).
\bibitem{gisi} N. Gisin, G. Ribordy, W. Tittel, H. Zbinden,
Rev. Mod. Phys. {\bf 74}, 145 (2002),
references therein.
\bibitem{maye} D. Mayers,
J. Assoc. Comput. Mach. {\bf 48}, 351 (2001).
\bibitem{lo2} H.-K. Lo, H.F. Chau, and M. Ardehali,
quant-ph/0011056.
\bibitem{hwa2} W.Y. Hwang, I.G. Koh, and Y.D. Han,
Phys. Lett. A {\bf 244}, 489 (1998)
\bibitem{wang} W.-Y. Hwang, X.-B. Wang, K. Matsumoto and H. Imai,
Phys. Rev. A {\bf 67}, 012302 (2003).
\bibitem{sho2} P.W. Shor and J. Preskill, Phys. Rev. Lett.
{\bf 85}, 441 (2000).
\bibitem{lo3} H.-K. Lo and H.F. Chau, Science {\bf283},
2050 (1999).
\bibitem{wangs}X.-B. Wang, quant-ph/0403058.
\bibitem{ben4} C.H. Bennett, D.P. DiVincenzo, J.A. Smolin, and
W.K. Wootters, Phys. Rev. A {\bf54},
3824 (1996).
\bibitem{bf} K. Bostrom and T. Felbinger, Phys. Rev. Lett. {\bf 89}, 187902 (2002).
\bibitem{cai} Q.Y. Cai, Phys. Rev. Lett. {\bf 91}, 109801 (2003), and references therein.
\bibitem{hoff} H. Hoffmann, K. Bostroem and T. Felbinger, quant-ph/0406115, and references therein.
\bibitem{woj} A. Wojcik, Phys. Rev. Lett. {\bf 90}, 157901 (2003), and references therein.
\bibitem{guo} P. Xue, C. Han, B. Yu, X.-M. Lin and G.-C. Guo, Phys. Rev. A {\bf 69}, 052318 (2004).
\bibitem{comp} D. Mayers
http://www.msri.org/publications/ln/msri/2002/qip/mayers/1/index.html;
D. Leung et al, http://www.iqc.ca/conferences/qip/presentations/leung.pdf
\end{references}
\end{document}
\begin{document}
\title[Restriction of representations to parabolic subgroups]{Restriction of irreducible unitary representations of Spin(N,1) to parabolic subgroups}
\author{Gang Liu}
\address{Gang Liu, Institut Elie Cartan de Lorraine, CNRS-UMR 7502, Universit\'e de Lorraine, 3 rue Augustin Fresnel, 57045 Metz, France}
\email{[email protected]}
\author{Yoshiki Oshima}
\address{Yoshiki Oshima, Department of Pure and Applied Mathematics, Graduate School of Information Science and
Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan.}
\email{[email protected]}
\author{Jun Yu}
\address{Jun Yu, Beijing International Center for Mathematical Research, Peking University, No. 5 Yiheyuan Road,
Beijing 100871, China}
\email{[email protected]}
\keywords{unitary representations, branching laws, discrete series, Fourier transform, moment map,
method of coadjoint orbits.}
\subjclass[2010]{22E46}
\begin{abstract}In this paper, we obtain explicit branching laws for all irreducible unitary representations of
$\operatorname{Sp}in(N,1)$ restricted to a parabolic subgroup $P$. The restriction turns out to be a finite direct sum of
irreducible unitary representations of $P$. We also verify Duflo's conjecture for the branching law of tempered
representations of $\operatorname{Sp}in(N,1)$ with respect to a parabolic subgroup $P$. That is to show: in the framework of
the orbit method, the branching law of a tempered representation is determined by the behavior of the moment map
from the corresponding coadjoint orbit. A few key tools used in this work include: Fourier transform, Knapp-Stein
intertwining operator, Casselman-Wallach globalization, Zuckerman translation principle, du Cloux's results for
smooth representations of semi-algebraic groups.
\end{abstract}
\maketitle
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}\label{S:introduction}
The unitary dual problem concerning the classification of irreducible unitary representations of a Lie group and
the branching law problem concerning the decomposition of the restriction of irreducible unitary representations
to a closed Lie subgroup are two of the most important problems in the representation theory of real Lie groups.
The orbit method of Kirillov (\cite{Kirillov2}, \cite{Kirillov}) and Kostant (\cite{Auslander-Kostant}, \cite{Kostant})
relates both problems to the geometry of coadjoint orbits.
In a series of seminal papers \cite{Kobayashi}, \cite{Kobayashi2}, \cite{Kobayashi3} Kobayashi initiated the study of
discrete decomposability and admissibility for representations when restricted to non-compact subgroups. Let $G$ be a
Lie group and let $H$ be a closed Lie subgroup. For an irreducible unitary representation $\pi$ of $G$, the restriction of
$\pi$ to $H$, denoted by $\pi|_H$, is said to be \emph{discretely decomposable} if it is a direct sum of irreducible
unitary representations of $H$. If moreover, all irreducible unitary representations of $H$ have only finite
multiplicities in $\pi$, then $\pi|_H$ is said to be \emph{admissible}. Kobayashi established criteria for the
admissibility for a large class of unitary representations with respect to reductive subgroups. Based on his work,
branching laws for admissible restriction have been studied in many papers including \cite{Duflo-Vargas2},
\cite{Gross-Wallach}, \cite{Kobayashi}, \cite{Kobayashi2}, \cite{Kobayashi3}, \cite{Kobayashi4},
\cite{Moellers-Oshima2}, \cite{Oshima}, \cite{Sekiguchi}. In this paper we set $G=\operatorname{Sp}in(N,1)$ for $N>2$ and set $H$ to
be a minimal parabolic subgroup of $G$, which we denote by $P(\subset G)$. In the first half of the paper we obtain
explicit branching laws for all irreducible unitary representations of $G$. The formulas are given in
\S\ref{SS:branchinglaw} and \S\ref{SS:branchinglaw2}. We find that the restriction is always a finite direct sum of
irreducible unitary representations of $P$. Hence in particular, the restriction is admissible.
In the second half of the paper, we study moment maps of coadjoint orbits, which are related to branching laws via the
so-called orbit method. Let $\pi$ be an irreducible unitary representation of $G$ associated to a $G$-coadjoint orbit
$\mathcal{O}$ in $\mathfrak{g}^{\ast}$. It is well known that equipped with the Kirillov-Kostant-Souriau symplectic
form, $\mathcal{O}$ becomes a $G$-Hamiltonian space and hence an $H$-Hamiltonian space. The corresponding moment map
is the natural projection $q\colon \mathcal{O} \rightarrow\mathfrak{h}^{\ast}$. The orbit method predicts that the
branching law of $\pi|_H$ is given in terms of the geometry of the moment map $q$ (see \cite{Kirillov}). In fact,
when $G$ and $H$ are unipotent groups, Corwin-Greenleaf~\cite{Corwin-Greenleaf} proved that the multiplicity of an
$H$-representation associated with an $H$-coadjoint orbit $\mathcal{O}'$ coincides almost everywhere with the cardinality
of $q^{-1}(\mathcal{O}')/H$. Concerning more general Lie groups, recently Duflo formulated a precise conjecture which
describes a connection between the branching law of the restriction to a closed subgroup of discrete series on the
representation theory side and the moment map from strongly regular coadjoint orbits on the geometry side. The
conjecture is inspired by Heckman's thesis \cite{Heckman} and the ``quantization commuting with reduction" program
\cite{Guillemin-Sternberg}.
\begin{conjecture}\label{C:Duflo}
Let $\pi$ be a discrete series of a real almost algebraic group $G$, which is attached to a coadjoint orbit
$\mathcal{O}_{\pi}$. Let $H$ be a closed almost algebraic subgroup, and
let $q\colon \mathcal{O}_{\pi}\rightarrow \mathfrak{h}^{\ast}$
be the moment map from $\mathcal{O}_{\pi}$. Then,
\begin{itemize}
\item[(i)] $\pi\vert_{H}$ is $H$-admissible (in the sense of Kobayashi) if and only if the moment map $q\colon
\mathcal{O}_{\pi}\rightarrow\mathfrak{h}^{\ast}$ is \textit{weakly proper}.
\item[(ii)] If $\pi\vert_{H}$ is $H$-admissible, then each irreducible $H$-representation $\sigma$ which appears
in $\pi\vert_{H}$ is attached to a \textit{strongly regular} $H$-coadjoint orbit $\mathcal{O}'_{\sigma}$ (in the sense
of Duflo) contained in $q(\mathcal{O}_{\pi})$.
\item[(iii)] If $\pi\vert_{H}$ is $H$-admissible, then the multiplicity of each such $\sigma$ can be expressed
geometrically in terms of the \textit{reduced space} $q^{-1}(\mathcal{O}'_{\sigma})/H$.
\end{itemize}
\end{conjecture}
Let us give some explanations for the conjecture. The notion of ``almost algebraic group" is defined in \cite{Duflo2}.
Recall that an element $f\in\mathfrak{g}^{\ast}$ is called strongly regular
if $f$ is regular (i.e., the coadjoint orbit containing $f$ is of maximal dimension) and its ``reductive factor"
$\mathfrak{s}(f):= \{X\in \mathfrak{g}(f):\operatorname{ad}(X)\text{ is semisimple} \}$ is of maximal dimension among reductive factors
of all regular elements in $\mathfrak{g}^*$ ($f$ is regular implies that $\mathfrak{g}(f)$ is commutative). Let
$\Upsilon_{sr}$ denote the set of strongly regular elements in $\mathfrak{h}^*$. A coadjoint orbit $\mathcal{O}$ is called
strongly regular if there exists an element $f\in \mathcal{O}$ (then every element in $\mathcal{O}$) which is strongly regular.
``Weakly proper" in (i) means that the preimage (for $q$) of each compact subset which is contained in $q(\mathcal{O}_{\pi})
\cap\Upsilon_{sr}$ is compact in $\mathcal{O}_{\pi}$. Note that it is known that the classic properness condition is not
sufficient to characterize the $H$-admissibility when $H$ is not reductive (see \cite{Liu2}, \cite{Liu3}).
If $G$ is compact, then Duflo's conjecture is a special case of the $\text{Spin}^c$ version of \emph{quantization commutes
with reduction} principle (see \cite{Paradan2}). More generally, if $G$ and $H$ are both reductive, then the assertions (ii)
and (iii) of the conjecture are consequences of a recent work of Paradan \cite{Paradan3}. Note that in this case, Duflo-Vargas
and Paradan \cite{Duflo-Vargas1}, \cite{Duflo-Vargas2}, \cite{Paradan3} proved that $\pi\vert_{H}$ is $H$-admissible if and
only if the moment map $q\colon \mathcal{O}_{\pi}\rightarrow\mathfrak{h}^{\ast}$ is \textit{proper}. In order to prove the
assertion (i) of the conjecture in this case, one needs to prove the equivalence between properness and weak properness of
the moment map.
It is a fact that if $\pi$ is a tempered $G$-representation, then each irreducible $H$-representation appearing in the
spectrum of $\pi\vert_{H}$ is tempered. Thus when $\pi$ is a tempered representation (with regular infinitesimal character)
of a reductive group $G$, Conjecture~\ref{C:Duflo} still makes sense. In this paper, based on our explicit branching laws
and an explicit description of the moment map, we verify Conjecture~\ref{C:Duflo} for the restriction to a minimal parabolic
subgroup of all tempered representations of $\operatorname{Sp}in(N,1)$. In our setting the restriction is admissible for any irreducible
unitary representation $\pi$ while the moment map $q\colon \mathcal{O}\to \mathfrak{h}^{\ast}$ is weakly proper for any
$\mathcal{O}$. The restriction $\pi\vert_{H}$ is always multiplicity-free and the reduced space $q^{-1}(\mathcal{O}'_{\sigma})/H$
is always a singleton. In addition, we extend the conjecture to Vogan-Zuckerman's derived functor modules
$A_{\mathfrak{q}}(\lambda)$ and verify it. The representations $A_{\mathfrak{q}}(\lambda)$ are possibly non-tempered and are considered
to be associated with possibly singular elliptic orbits. For the conjecture in this case we need a certain modification
in the correspondence of orbits and representations for the parabolic subgroup (see \S\ref{SS:nontempered} for details).
For the proof of our branching laws, a key idea is
to consider the non-compact picture ($N$-picture)
of principal series representations of $G$
and to take the classical Fourier transform.
Such an idea appeared in Kobayashi-Mano~\cite{Kobayashi-Mano}
for the construction of
an $L^2$-model (called the Schr\"{o}dinger model) of a minimal representation of $\operatorname{O}(p,q)$.
This was extended to other groups and representations in
\cite{HKMO}, \cite{Moellers}, \cite{Moellers-Oshima}.
They embed an irreducible representation into a degenerate principal series
and then take the Fourier transform of the non-compact picture.
We will apply this method in our case and obtain $L^2$-models for all unitary principal series (see Appendix~\ref{S:principalSeries}) and for all irreducible unitary representations of $G$
with infinitesimal character $\rho$ and some complementary series (see Appendix \ref{S:trivial}).
The treatment of the latter representations is more involved. More precisely, we realize any such
representation as the image of a normalized Knapp-Stein intertwining operator between two non-unitary principal
series. Then, applying the Fourier transform to the non-compact picture of the non-unitary principal series, we
obtain the $L^2$-model. In this process, we find an explicit formula for the Fourier transformed counter-part of
the normalized Knapp-Stein intertwining operator and analyze the growth property at infinity and the singularity
at zero of the Fourier transformed functions (or distributions) carefully. Since the $P$-action on the $L^2$-model
is of a simple form, we have explicit branching laws for these representations. In order to obtain branching laws
for all irreducible unitary representations of $G$, we employ du Cloux's results \cite{duCloux} on moderate growth
smooth Fr\'echet representations of semi-algebraic groups and Zuckerman translation principle (\cite{Zuckerman},
\cite{Knapp}).
On the geometry side, let $\mathcal{O}_{f}=G/G^{f}$ be a coadjoint $G$-orbit. By parametrizing the double coset
space $P\backslash G/G^{f}$ we find explicit representatives of $P$-orbits in $\mathcal{O}_{f}$. By calculating
the Pfaffian and the characteristic polynomial of a related skew-symmetric matrix, we are able to identify the
$P$-class of the moment map image of each representative. With this, we calculate the image and show geometric
properties of the moment map. The moment map is always weakly proper, but it is not proper unless
$\mathcal{O}_{f}$ is a zero orbit. This supports Duflo's belief that weak properness is the correct counterpart
of $H$-admissibility. We prove moreover that the reduced space for each regular $P$-coadjoint orbit in the image
of the moment map is a singleton. By comparing the branching law of discrete series (or unitary principal series)
and the behavior of the moment map from the corresponding coadjoint orbit, we verify Conjecture~\ref{C:Duflo}
when $G=\operatorname{Sp}in(N,1)$ and $H$ is a parabolic subgroup.
One might compare our branching laws with
Kirillov's conjecture which says that the restriction to
a mirabolic subgroup of any irreducible unitary representation of
$\operatorname{G}L_{n}(k)$ (for $k$ an archimedean or non-archimedean local field) is irreducible.
Kirillov's conjecture was proved by Bernstein~\cite{Bernstein} for $p$-adic groups.
It awaited nearly ten years for a breakthrough by Sahi~\cite{Sahi} who proved it for tempered representations of
$\operatorname{G}L_{n}(k)$ for $k$ an archimedean local field. It was finally proved by Baruch~\cite{Baruch} over archimedean local
fields in general through a qualitative approach by studying invariant distributions. The restriction to a mirabolic
subgroup of a general irreducible unitary representation of $\operatorname{G}L_{n}(\mathbb{R})$ or $\operatorname{G}L_{n}(\mathbb{C})$ is determined
by combining ~\cite{Sahi} (which sets up a strategy to attack this problem and treats tempered representations),
~\cite{Sahi2} (which treats Stein complementary series), ~\cite{Sahi-Stein} (which treats Speh representations) and
~\cite{AGS} (which treats Speh complementary series). In the literature, there is another related/similar work by
Rossi-Vergne~\cite{Rossi-Vergne} concerning the restriction to a minimal parabolic subgroup of holomorphic
(or anti-holomorphic) discrete series of a Hermitian simple Lie group. As for the restriction of irreducible unitary
representations of $\operatorname{Sp}in(N,1)$ ($N\geq 2$) to a minimal parabolic subgroup, we note that the branching law is known
in the literature only when $N=2$ or $3$ by Martin~\cite{Martin}, and when $N=4$ by Fabec~\cite{Fabec}. Note that
Fourier transform is used in all of these works. On the geometry side, we describe explicitly the moment map image
for any coadjoint orbit of $G=\operatorname{Sp}in(N,1)$. For the mirabolic subgroup of $\operatorname{G}L_{n}(\mathbb{R})$ (or $\operatorname{G}L_{n}(\mathbb{C})$),
similar calculation was done in \cite{Liu-Yu}. Kobayashi-Nasrin~\cite{Kobayashi-Nasrin} studied the moment map image
which corresponds to the restriction of holomorphic discrete series of scalar type with respect to holomorphic symmetric
pairs studied in \cite{Kobayashi4}.
Representations of four groups $G=\operatorname{Sp}in(m+1,1)$, $G_{1}=\operatorname{SO}_{e}(m+1,1)$, $G_{2}=\operatorname{O}(m+1,1)$, $G_{3}=\operatorname{SO}(m+1,1)$,
(resp.\ their parabolic subgroups, maximal compact subgroups, etc.) are studied (resp.\ arise) in this paper. We
remark on the advantages of studying representations of each of these groups $G$, $G_{1}$, $G_{2}$: the advantage
of studying representations of $G_{2}$ is using the matrix $s=\operatorname{diag}\{I_{m},-1,1\}$ to define and study intertwining
operators; the advantage of studying representations of $G$ is applying the Zuckerman translation principle; then,
representations of $G_{1}$ serve as a bridge connecting representations of $G_{2}$ and representations of $G$.
The group $G_{3}$ and its representations are used only in Appendix~\ref{S:GGP}.
Readers might be curious if methods and results of this paper can be generalized in more general setting. On the
representation theory side, we remark that the decomposition of unitary principal series as done in Appendix~\ref{S:principalSeries} can
be generalized well. The way of treating representations with trivial infinitesimal character and some complementary
series can be generalized partially. However, the comparison of the $L^2$ spectrum and the smooth quotient as done in
$\S$\ref{SS:CW-Cloux} and $\S$\ref{SS:ResPS} can hardly be generalized. On the geometry side, the method of moment
map calculation in this paper for a coadjoint orbit $\mathcal{O}_{g}=G\cdot f$ ($f\in\mathfrak{g}^{\ast}$) is
applicable whenever the double coset space $G^{f}\backslash G/P$ has a simple description, which is the case if
$\dim P$ is as large as possible and $f$ is as singular as possible.
The paper is organized as follows. In Section~\ref{S:repP} we introduce notation used throughout the paper. We give a
classification of irreducible unitary representations of $P$ and coadjoint $P$-orbits. In Section~\ref{S:resP} we obtain
branching laws of irreducible unitary representations of $G$ when restricted to $P$. We require an explicit calculation
of the Fourier transform of a vector in the lowest $K$-type of discrete series, which will be done in Appendix~\ref{S:trivial}.
Sections~\ref{S:elliptic} and \ref{S:non-elliptic} are devoted to the description of the moment map $q\colon\mathcal{O}
\to\mathfrak{p}^{\ast}$. In Section~\ref{S:Duflo} we verify Conjecture~\ref{C:Duflo} in our setting. In Appendices~\ref{S:principalSeries} and
\ref{S:trivial} we see the Fourier transformed picture more explicitly for particular representations: unitary principal series, those
with infinitesimal character $\rho$ and some complementary series. This yields the $L^2$-models and the decomposition
into irreducible $P$-representations of these representations. In Appendix~\ref{S:MN.K}, we show that $\bar{\pi}|_{MN}$ is
determined by the $K$ type of $\bar{\pi}$ for any irreducible unitarizable representation $\pi$ of $G$. In Appendix~\ref{S:GGP},
we explain that branching laws shown in this paper are related to a case of Bessel model of the local
Gan-Gross-Prasad conjecture (\cite{Gan-Gross-Prasad},\cite{Gan-Gross-Prasad2}).
\smallskip
\noindent\textbf{Acknowledgements.} We would like to thank Professors Michel Duflo and Pierre Torasso for suggesting this
problem. The authors thank Professor David Vogan for helpful suggestions and for providing the reference \cite{duCloux}.
Yoshiki Oshima would like to thank Professor Bent {\O}rsted for helpful comments about Appendix~\ref{S:trivial}. Jun Yu
would like to thank Professor David Vogan for encouragement and to thank Chengbo Zhu, Wen-wei Li, Lei Zhang for helpful
communications. Yoshiki Oshima was partially supported by JSPS Kakenhi Grant Number JP16K17562. Jun Yu was partially
supported by the NSFC Grant 11971036.
\section{Preliminaries}\label{S:repP}
\subsection{Notation and conventions}\label{SS:notation}
{\it Indefinite orthogonal and spin groups of real rank one.} Fix a positive integer $m$. Let $I_{m+1,1}$ be
the $(m+2)\times (m+2)$-matrix given as \[I_{m+1,1}=\begin{pmatrix}I_{m+1}&\\&-1\end{pmatrix}.\]
Put \begin{align*}
&G_{2}=\operatorname{O}(m+1,1)=\{X\in M_{m+2}(\mathbb{R}):XI_{m+1,1}X^{t}=I_{m+1,1}\},\\
&G_{3}=\operatorname{SO}(m+1,1)=\{X\in\operatorname{O}(m+1,1):\det X=1\},\\
&G_{1}=\operatorname{SO}_{e}(m+1,1),\\& G=\operatorname{Sp}in(m+1,1),
\end{align*}
where $\operatorname{SO}_{e}(m+1,1)$ is the identity component of $\operatorname{O}(m+1,1)$ (and of $\operatorname{SO}(m+1,1)$), and $\operatorname{Sp}in(m+1,1)$ is
a universal covering group of $\operatorname{SO}_{e}(m+1,1)$. The Lie algebras of $G,G_{1},G_{2},G_{3}$ are all equal to
\begin{equation*}
\mathfrak{g}
=\mathfrak{so}(m+1,1)=\{X\in\mathfrak{gl}(m+2,\mathbb{R}): XI_{m+1,1}+I_{m+1,1}X^{t}=0\}.
\end{equation*}
\smallskip
{\it Cartan decomposition.} Write
\begin{align*}
&K=\operatorname{Sp}in(m+1),\\
&K_{1}=\{\operatorname{diag}\{Y,1\}:Y\in\operatorname{SO}(m+1)\},\\
&K_{2}=\{\operatorname{diag}\{Y,t\}:Y\in\operatorname{O}(m+1),t\in\{\pm{1}\}\},\\
&K_{3}=\{\operatorname{diag}\{Y,t\}:Y\in\operatorname{O}(m+1),t=\det Y\}.
\end{align*}
Then, $K,K_{1},K_{2},K_{3}$ are maximal compact subgroups of $G,G_{1},G_2,G_{3}$ respectively.
Their Lie algebras are equal to \[\mathfrak{k}=\{\operatorname{diag}\{Y,0\}: Y\in\mathfrak{so}(m+1)\}.\]
Write \[\mathfrak{s}=
\Bigl\{\begin{pmatrix}0_{(m+1)\times (m+1)}&\alpha^{t}\\ \alpha&0\end{pmatrix}
:\alpha\in M_{1\times (m+1)}(\mathbb{R})\Bigr\}.\]
Then, $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{s}$,
which is a Cartan decomposition for $\mathfrak{g}$.
The corresponding Cartan involution $\theta$ of $G_{1}$ (or $G_{2}$, $G_{3}$) is given by
$\theta=\operatorname{Ad}(I_{m+1,1})$.
{\it Restricted roots and Iwasawa decomposition.}
Put \begin{equation*}
H_0=\begin{pmatrix}0_{m\times m}&0_{m\times 1}&0_{m\times 1}\\0_{1\times m}&0&1\\0_{1\times m}&1&0\\
\end{pmatrix}\textrm{ and }\mathfrak{a}=\mathbb{R}\cdot H_{0},\end{equation*}
which is a maximal abelian subspace in $\mathfrak{s}$.
Define $\lambda_{0}\in\mathfrak{a}^*$ by
$\lambda_{0}(H_{0})=1$.
Then, the restricted root system $\Delta(\mathfrak{g}_{\mathbb{C}},\mathfrak{a})$ consists of two roots
$\{\pm{\lambda_0}\}$. Let $\lambda_0$ be a positive restricted root.
Then the associated positive nilpotent part is
\begin{equation*}
\mathfrak{n}
=\Biggl\{ \begin{pmatrix}0_{m\times m}&-\alpha^{t}&\alpha^{t}\\
\alpha&0&0\\\alpha&0&0\\ \end{pmatrix}:
\alpha\in M_{1\times m}(\mathbb{R})\Biggr\}.
\end{equation*}
Let $\rho'$ be half the sum of positive roots in $\Delta(\mathfrak{n},\mathfrak{a})$.
Then
\[\rho'=\frac{m}{2}\lambda_{0}\textrm{ and } \rho'(H_0)=\frac{m}{2}.\]
One has the {\it Iwasawa decomposition}
$\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{n}$.
{\it Standard parabolic and opposite parabolic subalgebras.}
Let \begin{equation*}
\mathfrak{m}=
Z_{\mathfrak{k}}(\mathfrak{a})=\{\operatorname{diag}\{Y,0_{2\times 2}\}:Y\in\mathfrak{so}(m,\mathbb{R})\}.
\end{equation*}
Write
$\mathfrak{p}=\mathfrak{m}+\mathfrak{a}+\mathfrak{n}$,
which is a parabolic subalgebra of $\mathfrak{g}$.
We have the opposite nilradical
\begin{equation*}\label{Eq:nbar}
\bar{\mathfrak{n}}=
\Biggl\{\begin{pmatrix}0_{m\times m}&\alpha^{t}&\alpha^{t}\\-\alpha&0&0\\
\alpha&0&0\\
\end{pmatrix}:\alpha\in M_{1\times m}(\mathbb{R})\Biggr\}
\end{equation*}
and the opposite parabolic subalgebra
$\bar{\mathfrak{p}}=\mathfrak{m}+\mathfrak{a}+\bar{\mathfrak{n}}$.
{\it Subgroups.} Let $A$ (or $A_{1}$, $A_{2}$, $A_{3}$), $N$ (or $N_{1}$, $N_{2}$, $N_{3}$), $\bar{N}$
(or $\bar{N}_{1}$, $\bar{N}_{2}$, $\bar{N}_{3}$) be analytic subgroups of $G$ (or $G_{1}$, $G_{2}$, $G_{3}$)
with Lie algebras $\mathfrak{a},\mathfrak{n},\bar{\mathfrak{n}}$ respectively.
In particular,
\begin{equation*}
A_{1}=\Biggl\{\begin{pmatrix}I_{m}&&\\
&r&s\\ &s&r\\
\end{pmatrix}:r,s\in\mathbb{R},\ r^{2}-s^{2}=1,\ r>0\Biggr\}.
\end{equation*}
By the two-fold covering
$G\twoheadrightarrow G_{1}$ and the inclusions $G_{1}\subset G_{2}$ and $G_{1}\subset G_{3}$ we identify
$A$, $A_{2}$, $A_{3}$ with $A_{1}$, identify $N$, $N_{2}$, $N_{3}$ with $N_{1}$, and identify $\bar{N}$,
$\bar{N}_{2}$, $\bar{N}_{3}$ with $\bar{N}_{1}$.
Put \[M=Z_{K}(\mathfrak{a}),\quad M_{1}=Z_{K_1}(\mathfrak{a}),\quad M_{2}=Z_{K_{2}}(\mathfrak{a}),
\quad M_{3}=Z_{K_{3}}(\mathfrak{a}).\] Set \[P=MAN,\quad P_{1}=M_{1}AN,\quad P_{2}=M_{2}AN,
\quad P_{3}=M_{3}AN\] and \[\bar{P}=MA\bar{N}, \quad \bar{P}_{1}=M_{1}A\bar{N},
\quad \bar{P}_{2}=M_{2}A\bar{N},\quad \bar{P}_{3}=M_{3}A\bar{N}.\]
Then, the Lie algebras of $M$ (or $M_{1}$, $M_2$, $M_{3}$), $P$ (or $P_{1}$, $P_2$, $P_{3}$), $\bar{P}$
(or $\bar{P}_{1}$, $\bar{P}_2$, $\bar{P}_{3}$) are equal to $\mathfrak{m}$, $\mathfrak{p}$, $\bar{\mathfrak{p}}$
respectively. Note that
\begin{equation*}
M=\operatorname{Spin}(m),\ M_{1}=\operatorname{SO}(m),\ M_{2}=\operatorname{O}(m)\times\Delta_{2}(\operatorname{O}(1)),\ M_{3}=\operatorname{SO}(m)\times
\Delta_{2}(\operatorname{O}(1)),\end{equation*} where $\Delta_{2}(\operatorname{O}(1))=\{\operatorname{diag}\{t,t\}:t=\pm{1}\}\subset\operatorname{O}(2)$.
For $\operatorname{Sp}in(2,1)\cong\operatorname{SL}_{2}(\mathbb{R})$,
the Langlands classification of irreducible
$(\mathfrak{g},K)$-modules requires a different parametrization because $M$ is disconnected.
For convenience in this paper we treat only $\operatorname{Sp}in(m+1,1)$ ($m>1$),
though the case of $m=1$ is even easier (which was treated in \cite{Martin}).
{\it Nilpotent elements.} For a row vector
$\alpha\in \mathbb{R}^{m}$ write
\begin{equation*}
X_{\alpha}=\begin{pmatrix}
0_{m\times m}&-\alpha^{t}&\alpha^{t}\\ \alpha&0&0\\ \alpha&0&0\\
\end{pmatrix},\quad
\bar{X}_{\alpha}=
\begin{pmatrix}
0_{m\times m}&\alpha^{t}&\alpha^{t}\\ -\alpha&0&0\\ \alpha&0&0\\
\end{pmatrix}.
\end{equation*}
The following Lie brackets will be used later:
\begin{align}\label{Eq:brackets}
&[X_{\alpha},\bar{X}_{\beta}]=
\operatorname{diag}\{2(\alpha^t\beta-\beta^t\alpha),0_{2\times 2}\}+2\alpha\beta^t H_0, \\ \nonumber
&[\operatorname{diag}\{Y,0_{2\times 2}\},\,X_{\alpha}]=X_{\alpha Y^t},\\ \nonumber
&[\operatorname{diag}\{Y,0_{2\times 2}\},\,\bar{X}_{\beta}]=\bar{X}_{\beta Y^t},\\ \nonumber
&[H_0,X_{\alpha}]=X_{\alpha},\\ \nonumber
&[H_0,\bar{X}_{\beta}]=-\bar{X}_{\beta},
\end{align}
where $\alpha,\beta\in \mathbb{R}^{m}$
and $Y\in\mathfrak{so}(m,\mathbb{R})$.
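As a quick numerical illustration (our own check, with $m=4$ and random $\alpha,\beta$; it is not
needed for the arguments below), the following Python/numpy sketch verifies the first and the last
two relations in \eqref{Eq:brackets}.
\begin{verbatim}
import numpy as np

def X(alpha):                           # X_alpha in the nilradical
    m = len(alpha); M = np.zeros((m + 2, m + 2))
    M[:m, m], M[:m, m + 1] = -alpha, alpha
    M[m, :m] = M[m + 1, :m] = alpha
    return M

def Xbar(beta):                         # \bar X_beta in the opposite nilradical
    m = len(beta); M = np.zeros((m + 2, m + 2))
    M[:m, m], M[:m, m + 1] = beta, beta
    M[m, :m], M[m + 1, :m] = -beta, beta
    return M

m = 4
a, b = np.random.default_rng(0).normal(size=(2, m))
H0 = np.zeros((m + 2, m + 2)); H0[m, m + 1] = H0[m + 1, m] = 1

lhs = X(a) @ Xbar(b) - Xbar(b) @ X(a)
rhs = np.zeros((m + 2, m + 2))
rhs[:m, :m] = 2 * (np.outer(a, b) - np.outer(b, a))
rhs += 2 * (a @ b) * H0
print(np.allclose(lhs, rhs))                                  # [X_a, Xbar_b]
print(np.allclose(H0 @ X(a) - X(a) @ H0, X(a)),               # [H_0, X_a] =  X_a
      np.allclose(H0 @ Xbar(b) - Xbar(b) @ H0, -Xbar(b)))     # [H_0, Xbar_b] = -Xbar_b
\end{verbatim}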
Put
$n_{\alpha}=\operatorname{exp}(X_{\alpha})$ and $\bar{n}_{\alpha}=\operatorname{exp}(\bar{X}_{\alpha})$.
Then by $n_{\alpha}=I+X_{\alpha}+\frac{1}{2}X_{\alpha}^2$ and
$\bar{n}_{\alpha}=I+\bar{X}_{\alpha}+\frac{1}{2}\bar{X}_{\alpha}^2$ we have
\begin{equation*}
n_{\alpha}=
\begin{pmatrix}I_{m}&-\alpha^{t}&\alpha^{t}\\
\alpha&1-\frac{1}{2}|\alpha|^2&\frac{1}{2}|\alpha|^2\\
\alpha&-\frac{1}{2}|\alpha|^{2}&1+\frac{1}{2}|\alpha|^2\\
\end{pmatrix}
\end{equation*} and
\begin{equation*}
\bar{n}_{\alpha}=
\begin{pmatrix}I_{m}&\alpha^{t}&\alpha^{t}\\
-\alpha&1-\frac{1}{2}|\alpha|^2&-\frac{1}{2}|\alpha|^2\\
\alpha&\frac{1}{2}|\alpha|^2&1+\frac{1}{2}|\alpha|^2\\
\end{pmatrix}.
\end{equation*}
Via the maps $\alpha\mapsto X_{\alpha}$ and $\alpha\mapsto n_{\alpha}$,
one identifies $\mathfrak{n}$ and $N$ with the Euclidean space $\mathbb{R}^{m}$.
We have $\theta(n_{\alpha})=\bar{n}_{-\alpha}$.
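The following small numpy check (our own, with $m=4$ and a random $\alpha$) confirms that the
displayed matrix $n_{\alpha}$ equals $I+X_{\alpha}+\frac{1}{2}X_{\alpha}^2$, hence equals
$\exp(X_{\alpha})$ since $X_{\alpha}^{3}=0$, and that it preserves the form $I_{m+1,1}$, i.e.\ lies
in $\operatorname{O}(m+1,1)$.
\begin{verbatim}
import numpy as np

m = 4
a = np.random.default_rng(1).normal(size=m)
a2 = a @ a                                   # |alpha|^2

X = np.zeros((m + 2, m + 2))                 # X_alpha
X[:m, m], X[:m, m + 1] = -a, a
X[m, :m] = X[m + 1, :m] = a

n = np.eye(m + 2)                            # n_alpha as displayed above
n[:m, m], n[:m, m + 1] = -a, a
n[m, :m] = n[m + 1, :m] = a
n[m, m] = 1 - a2/2;  n[m, m + 1] = a2/2
n[m + 1, m] = -a2/2; n[m + 1, m + 1] = 1 + a2/2

I_form = np.diag([1.0]*(m + 1) + [-1.0])     # the form I_{m+1,1}
print(np.allclose(n, np.eye(m + 2) + X + X @ X / 2))   # n_alpha = exp(X_alpha)
print(np.allclose(n @ I_form @ n.T, I_form))           # n_alpha lies in O(m+1,1)
\end{verbatim}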
{\it Invariant bilinear form.}
For $X,Y\in\mathfrak{g}$, define
\begin{equation}\label{Eq:form}
(X,Y)=\frac{1}{2}\operatorname{tr}(XY).
\end{equation}
Then $(\cdot,\cdot)$ is a nondegenerate symmetric bilinear form on $\mathfrak{g}$,
which is invariant under the adjoint action of $G$, $G_{1}$ and $G_{2}$.
Define $\iota\colon \mathfrak{g}\to \mathfrak{g}^{\ast}$ by
\begin{equation}\label{Eq:identification1}
\iota(X)(Y)=(X,Y)\ (\forall Y\in\mathfrak{g}).
\end{equation}
Then, $\iota$ is an isomorphism of $G_{2}$ (or $G$, $G_{1}$) modules.
Define $\operatorname{pr}\colon \mathfrak{g}\to \mathfrak{p}^{\ast}$ by
\begin{equation}\label{Eq:identificaiton2}
\operatorname{pr}(X)(Y)=(X,Y)\
(\forall Y\in\mathfrak{p}).
\end{equation}
Then, the kernel of $\operatorname{pr}$ is $\mathfrak{n}$, and $\operatorname{pr}$ gives a $P$ (or
$P_{1}$, $P_{2}$) module isomorphism \[\mathfrak{g}/\mathfrak{n}\cong\mathfrak{p}^{\ast}.\]
Since $\bar{\mathfrak{p}}$ is a complement of $\mathfrak{n}$ in $\mathfrak{g}$,
we can identify $\bar{\mathfrak{p}}$ with
$\mathfrak{p}^{\ast}$ by
$X\mapsto\operatorname{pr}(X)\ (X\in\bar{\mathfrak{p}})$.
{\it Roots and weights.}
Let $n':=\lfloor \frac{m+1}{2}\rfloor$.
For $\vec{a}=(a_1,\dots,a_{n'})\in \mathbb{R}^{n'}$,
let
\begin{align}\label{Eq:ta}
t_{\vec{a}}
:=\begin{pmatrix}
0&a_1&& &&\\
-a_{1}&0&&&&\\
&&\ddots&&&\\
&&&0&a_{n'}&\\
&&&-a_{n'}&0&\\
&&&&&0_{(m+2-2n')\times (m+2-2n')}\\
\end{pmatrix}.
\end{align}
Then
\[\mathfrak{t}=\{t_{\vec{a}} : a_1,\dots,a_{n'}\in \mathbb{R}\}\]
is a maximal abelian subalgebra of $\mathfrak{k}=\operatorname{Lie} K$.
Write $T$ for the corresponding maximal torus in $K$.
Define $\epsilon'_i \in \mathfrak{t}_{\mathbb{C}}^*$ by
\[\epsilon'_i\colon t_{\vec{a}} \mapsto \mathbf{i}a_i.\]
The root system $\Delta(\mathfrak{k}_{\mathbb{C}},\mathfrak{t}_{\mathbb{C}})$
is given by
\begin{align*}
&\{\pm{\epsilon'_{i}}\pm{\epsilon'_{j}}, \pm{\epsilon'_{k}}:1\leq i<j\leq n',\ 1\leq k\leq n'\}
\ \text{ if $m$ is even and},\\
&\{\pm{\epsilon'_{i}}\pm{\epsilon'_{j}}:1\leq i<j\leq n'\}
\ \text{ if $m$ is odd}.
\end{align*}
We denote the weight
$c_1\epsilon'_1+\cdots +c_{n'}\epsilon'_{n'}\in \mathfrak{t}_{\mathbb{C}}^*$
by $(c_1,\dots,c_{n'})$.
The similar notation will be used for
elements in $(\mathfrak{t}\cap \mathfrak{m})_{\mathbb{C}}^*$
and $(\mathfrak{t}\cap \mathfrak{m}')_{\mathbb{C}}^*$,
where $\mathfrak{m}'$ denotes the Lie algebra of
the group $M'$ defined in \eqref{Eq:M'}.
\begin{remark}\label{R:weight-orbit}
The bilinear form \eqref{Eq:form} on $\mathfrak{t}$ is given as
$(t_{\vec{a}},t_{\vec{b}})=-\vec{a}\cdot \vec{b}^t$.
Hence by using the isomorphism
$\iota\colon \mathfrak{g}_{\mathbb{C}}\to \mathfrak{g}_{\mathbb{C}}^*$
defined as \eqref{Eq:identification1},
we have for example
$\epsilon'_1 = \mathbf{i}\cdot \iota(t_{(-1,0,\dots,0)})|_{\mathfrak{t}}$.
\end{remark}
Define $T_{s}:=(T\cap M)\times A$.
Then $T\cap M$ is a maximal torus of $M$ and
$T_{s}$ is a Cartan subgroup of $G$.
Let $n:=\lfloor \frac{m+2}{2}\rfloor$.
Note that $m=2n-2$ and $n=n'+1$ if $m$ is even;
$m=2n-1$ and $n=n'$ if $m$ is odd.
Define $\epsilon_i\in (\mathfrak{t}_s)_{\mathbb{C}}^*$ by
\begin{align*}
&\epsilon_i=\epsilon'_i \text{ on $\mathfrak{t}\cap\mathfrak{m}$},
\quad \epsilon_i=0 \text{ on $\mathfrak{a}$}\ \text{ for $1\leq i<n$},\\
&\epsilon_n=0 \text{ on $\mathfrak{t}\cap\mathfrak{m}$},
\quad \epsilon_n=\langlembda_0 \text{ on $\mathfrak{a}$}.
\end{align*}
The root system
$\Delta=\Delta(\mathfrak{g}_{\mathbb{C}},(\mathfrak{t}_{s})_{\mathbb{C}})$
is given by
\begin{align*}
&\{\pm{\epsilon_{i}}\pm \epsilon_{j}: 1\leq i<j\leq n\}
\ \text{ if $m$ is even and},\\
&\{\pm{\epsilon_{i}}\pm \epsilon_{j}, \pm{\epsilon_{k}}
: 1\leq i<j\leq n,\ 1\leq k\leq n\}
\ \text{ if $m$ is odd},
\end{align*}
where
\[\{\pm{\epsilon_{i}}\pm{\epsilon_{j}},\pm{\epsilon_{k}}
:1\leq i<j\leq n-1,\ 1\leq k \leq n-1\}\]
are roots of $MA$.
Choose a positive system
\begin{align*}
&\Delta^+=\{\epsilon_{i}\pm \epsilon_{j}:1\leq i<j\leq n\}
\ \text{ if $m$ is even and},\\
&\Delta^+=\{\epsilon_{i}\pm \epsilon_{j}, \epsilon_{k}:1\leq i<j\leq n,\ 1\leq k\leq n\}
\ \text{ if $m$ is odd}.
\end{align*}
Then the corresponding simple roots are $\{\epsilon_{1}-\epsilon_{2},\dots,\epsilon_{n-1}-\epsilon_{n},
\epsilon_{n-1}+\epsilon_{n}\}$ for even $m$ and $\{\epsilon_{1}-\epsilon_{2},\dots,\epsilon_{n-1}-
\epsilon_{n},\epsilon_{n}\}$ for odd $m$. A weight of $T_{s}$ is of the form \[\gamma=c_1\epsilon_1+
\cdots+c_n\epsilon_n,\] which we denote by $(c_{1},\dots,c_{n})$. Put \[\mu=(c_{1},\dots,c_{n-1})
=c_1\epsilon_1+\cdots +c_{n-1}\epsilon_{n-1},\] which vanishes on $\mathfrak{a}$ and may be regarded
as a weight of $\mathfrak{t}\cap \mathfrak{m}$; put \[\nu=c_{n}\epsilon_{n},\] which vanishes on
$\mathfrak{t}\cap \mathfrak{m}$ and may be regarded as a weight of $\mathfrak{a}$. Then $\gamma=
(\mu,\nu)=\mu+\nu$.
The vector $\rho:=\frac{1}{2}\sum_{\alpha\in\Delta^{+}}\alpha$ is given as
\[\Bigl(n-\frac{1}{2},\dots,\frac{3}{2},\frac{1}{2}\Bigr)
\text{ for $m$ odd}; \
(n-1,\dots,1,0)
\text{ for $m$ even}.\]
{\it Reflections.} For a row vector
$0\neq x\in\mathbb{R}^{m}$, write
\begin{equation}\label{Eq:rx}
r_{x}=I_{m}-
\frac{2}{|x|^2}x^{t}x\in\operatorname{O}(m),
\end{equation}
which is a reflection.
The action of $r_{x}$ on $\mathbb{R}^{m}$ is given by
\[r_{x}(y)=y-\frac{2yx^t}{|x|^2}x\quad (\forall y\in\mathbb{R}^{m}).\]
Let $r_{x}$ also denote the element \[\operatorname{diag}\{r_{x},I_{2}\}\in G_{2}=\operatorname{O}(m+1,1).\]
Write
\begin{equation}\label{Eq:s}s=\operatorname{diag}\{I_{m},-1,1\}\in\operatorname{O}(m+1,1).
\end{equation}
For $x\in\mathbb{R}^{m}$, write
\begin{align}\label{Eq:sx}
s_{x}=
\begin{pmatrix}I_{m}-\frac{2x^{t}x}{1+|x|^2}&
-\frac{2x^{t}}{1+|x|^2}&0\\
\frac{2x}{1+|x|^2}&\frac{1-|x|^2}{1+|x|^2}&0\\
0&0&1\\
\end{pmatrix}\in\operatorname{O}(m+1,1).
\end{align}
\if 0
Let $y=(x,1)$.
Write $r'_{x}$ for both
\[I_{m+1}-\mathfrak{a}c{2}{|y|^{2}}y^{t}y\in \operatorname{O}(m+1)\] and
\[\operatorname{d\!}iag\operatorname{B}igl\{I_{m+1}-\mathfrak{a}c{2}{|y|^2}y^{t}y,\, 1\operatorname{B}igr\}\in\operatorname{O}(m+1,1).\]
Then,
$s_{x}=sr'_{x}$.
\fi
{\it Unitarily induced representations of $P$, $P_{1}$, $P_{2}$ and $P_{3}$.}
Through the map
\[Y\mapsto\begin{pmatrix}Y&\\&I_{2}\\\end{pmatrix},\]
one identifies $M_{1}$ with $\operatorname{SO}(m)$.
With this identifications, the adjoint action of $M_{1}A$ on $\mathfrak{n}$ is given by
\[\operatorname{Ad}(Y,a)X_\alpha=e^{\lambda_0(\log a)} X_{\alpha Y^t}\ (\forall \alpha\in \mathbb{R}^{m},
\ \forall Y\in\operatorname{SO}(m),\ \forall a\in A).\] Moreover, identify $\mathbb{R}^{m}$ with
$\mathfrak{n}^{\ast}$ by $\xi(X_{\alpha})=\xi\alpha^t$. Then, the coadjoint action of $M_{1}A$
on $\mathfrak{n}^{\ast}$ is given by \begin{equation}\label{Eq:MAaction}
\operatorname{Ad}^{\ast}(Y,a)\xi = e^{-\lambda_0(\log a)} \xi Y^t\
(\forall \xi\in \mathbb{R}^{m},\ \forall Y\in\operatorname{SO}(m),\ \forall a\in A).
\end{equation}
Let \[\xi_0=(0,\dots,0,1)\in\mathfrak{n}^{\ast}.\] Put \[M'_{1}=\operatorname{Stab}_{M_{1}A}\xi_0.\]
Then,\begin{equation*}
M'_{1}=\Bigl\{\begin{pmatrix}Y&\\ &I_{3}\\ \end{pmatrix}:Y\in\operatorname{SO}(m-1)\Bigr\}.\end{equation*}
For a (not necessarily irreducible) unitary representation $(\tau,V_{\tau})$ of $M'_{1}$, let
\begin{equation*}
I_{P_{1},\tau}=\operatorname{Ind}_{M'_{1}\ltimes N}^{M_{1}AN}(\tau\otimes e^{\mathbf{i}\xi_{0}})\end{equation*}
be a unitarily induced representation. It consists of functions
$h\colon M_{1}AN \rightarrow V_{\tau}$ with
\[h(pm'n)=(\tau\otimes e^{\mathbf{i}\xi_{0}})(m',n)^{-1}h(p)\]
for all $(p,m',n)\in P_{1}\times M'_{1}\times N$
and $\langle h,h\rangle<\infty$, where
\begin{equation*}
\langle h_1,h_2\rangle:=\int_{M_{1}A/M_{1}'}
\langle h_{1}(ma),h_{2}(ma)\rangle_{\tau}\operatorname{d\!} {}_{l}ma
\end{equation*}
for $h_1,h_2\in\operatorname{Ind}_{M'_{1}\ltimes N}^{M_{1}AN}(\tau\otimes e^{\mathbf{i}\xi_{0}})$.
Here $\operatorname{d\!} {}_{l}ma$ is a left $M_{1}A$-invariant measure on $(M_{1}A)/M'_{1}$,
and $\langle\cdot,\cdot \rangle_{\tau}$
denotes an $M'_{1}$-invariant inner product on $V_{\tau}$.
The action of $P_{1}$ on $I_{P_{1},\tau}$ is given by
$(p\cdot h)(x)=h(p^{-1}x)$ for $h\in I_{P_{1},\tau}$ and $p,x\in P_1$.
Similarly, put \begin{equation}\label{Eq:M'}M'=\operatorname{Stab}_{MA}\xi_0,\quad M'_{2}=\operatorname{Stab}_{M_{2}A_{2}}\xi_0,
\quad M'_{3}=\operatorname{Stab}_{M_{3}A_{3}}\xi_0,\end{equation} and define a unitarily induced representation
$I_{P,\tau}$ (or $I_{P_{2},\tau}$, $I_{P_{3},\tau}$) from a unitary representation $\tau$ of $M'$
(or $M'_{2}$, $M'_{3}$). One has \begin{equation*}
M'\cong\operatorname{Sp}in(m-1),\quad M'_{2}\cong\operatorname{O}(m-1)\times\operatorname{O}(1),\quad M'_{3}\cong\operatorname{SO}(m-1)\times\operatorname{O}(1).
\end{equation*}
\smallskip
{\it Induced representations of $G$.} For a finite-dimensional irreducible complex
linear representation $(\sigma,V_{\sigma})$ of $M$ and a character $e^\nu$ of $A$,
form the smoothly induced representation
\[I(\sigma,\nu)
=\operatorname{Ind}_{MA\bar{N}}^{G}(\sigma\otimes e^{\nu-\rho'}\otimes\mathbf{1}_{\bar{N}})\]
which consists of smooth functions
$h\colon G\rightarrow V_{\sigma}$ with
\[h(gma\bar{n})=\sigma(m)^{-1}e^{(-\nu+\rho')\log a}h(g)\]
for any $(g,m,a,\bar{n})\in G\times M\times A\times\bar{N}$.
The action of $G$ on $I(\sigma,\nu)$ is given by
$(g\cdot h)(x)=h(g^{-1}x)$ for $h\in I(\sigma,\nu)$ and $g,x\in G$.
Write $I(\sigma,\nu)_K$ for the space of functions
$h\in I(\sigma,\nu)$ such that $h|_{K}$ is a $K$-finite function.
Then, $I(\sigma,\nu)_K$ is the space of $K$-finite vectors
in $I(\sigma,\nu)$, and it is a $(\mathfrak{g},K)$-module.
When $\nu$ is a unitary character,
let $\bar{I}(\sigma,\nu)$ denote the space of all functions
$h\colon G\rightarrow V_{\sigma}$ with
\[h(gma\bar{n})=\sigma(m)^{-1}e^{(-\nu+\rho')\log a}h(g)\]
for any $(g,m,a,\bar{n})\in G\times M\times A\times\bar{N}$,
and $h_{N}=h|_{N}\in L^{2}(N,V_{\sigma},\operatorname{d\!} n)$.
This is called a unitary principal series representation of $G$.
The invariant inner product on $\bar{I}(\sigma,\nu)$ is defined by:
\begin{equation*}
\langle f_1,f_2\rangle=\int_{N}(f_{1}(n),
f_{2}(n))\operatorname{d\!} n
\end{equation*}
for $f_{1},f_{2}\in L^{2}(N,V_{\sigma},\operatorname{d\!} n)$.
When $\sigma$ factors through $M_{1}$, the smoothly induced representation $I(\sigma,\nu)$ factors through $G_{1}$,
and the $(\mathfrak{g},K)$-module $I(\sigma,\nu)_K$ factors through a $(\mathfrak{g},K_{1})$-module. If $\nu$ is a unitary
character, then the unitarily induced representation $\bar{I}(\sigma,\nu)$ factors through $G_{1}$.
In the above, let $\mu$ denote the highest weight of $\sigma$.
Then for simplicity we denote
$I(\sigma,\nu), I(\sigma,\nu)_K, \bar{I}(\sigma,\nu)$
by $I(\mu,\nu),I(\mu,\nu)_K, \bar{I}(\mu,\nu)$, respectively.
For a finite-dimensional irreducible complex linear representation $(\sigma,V_{\sigma})$ of $M_{2}$ and a character
$e^\nu$ of $A$, we define the smoothly induced representation \[I(\sigma,\nu)=\operatorname{Ind}_{M_{2}A\bar{N}}^{G_{2}}(\sigma\otimes e^{\nu-\rho'}\otimes\mathbf{1}_{\bar{N}})\] similarly.
{\it $(\mathfrak{g},K)$-modules and $G$-representations.}
For a $(\mathfrak{g},K)$-module $V$,
we denote by $V^{\operatorname{sm}}$ a Casselman-Wallach globalization of $V$
(\cite{Casselman} and \cite{Wallach}).
If $V$ is unitarizable, we write $\bar{V}$
for a Hilbert space completion of $V$.
For $(\mathfrak{g},K_{1})$-modules (or $(\mathfrak{g},K_{2})$-modules)
and $G_{1}$-representations (or $G_{2}$-representations),
we take similar conventions.
{\it Irreducible finite-dimensional representations.}
Write $F_{\lambda}$ (resp.\ $V_{K,\lambda}$, $V_{M,\mu}$, $V_{M',\mu}$)
for an irreducible finite-dimensional
representation of $G$ (resp.\ $K$, $M$, $M'$) with highest weight $\lambda$
(resp.\ $\lambda$, $\mu$, $\mu$).
{\it $L^{p}$ space.} We take Fourier transform on $N$ (or $\mathfrak{n}$). For a finite-dimensional Hilbert space $V$,
for brevity let $L^{p}$ denote the space of $V$-valued functions $h$ on $N$ (or $\mathfrak{n}$) such that $|h(x)|$ is
an $L^{p}$ integrable function.
\subsection{Iwasawa decomposition and Bruhat decomposition}\label{SS:group-algebra}
By a direct calculation,
one shows the following opposite Iwasawa decomposition for elements in $N$.
\begin{lemma}\label{L:Iwasawa}
The opposite Iwasawa decomposition of a general element in $N$ is given by
\begin{equation}\label{Eq:Iwasawa3}
n_{x}=
\begin{pmatrix}
I_{m}-\frac{2x^{t}x}{1+|x|^2}&-\frac{2x^{t}}{1+|x|^2}&0\\
\frac{2x}{1+|x|^2}&\frac{1-|x|^2}{1+|x|^2}&0\\0&0&1\\
\end{pmatrix}\exp(-\log(1+|x|^2)H_{0})\bar{n}_{\frac{x}{1+|x|^2}}.
\end{equation}
\end{lemma}
One can write (\ref{Eq:Iwasawa3}) as
\[n_{x}=s_{x}\exp(-\log(1+|x|^2)H_{0})\bar{n}_{\frac{x}{1+|x|^2}},\]
where $s_{x}$ is as in (\ref{Eq:sx}).
By the Bruhat decomposition, \[G_{2}=NM_{2}A\bar{N}\sqcup sM_{2}A\bar{N}.\] By this, $sn_{x}\in NM_{2}A\bar{N}$ for any
$0\neq x\in\mathbb{R}^{m}$. By direct calculation one shows the following decomposition.
\begin{lemma}\label{L:barn-iwasawa}
For $0\neq x\in\mathbb{R}^{m}$, we have
\begin{equation*}
sn_{x}=n_{\frac{x}{|x|^{2}}}r_{x}e^{-(2\log|x|)H_{0}}\bar{n}_{\frac{x}{|x|^{2}}},
\end{equation*}
where $r_{x}$ is as in (\ref{Eq:rx}).
\end{lemma}
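Both decompositions can also be checked numerically; the following numpy sketch (our own, with $m=3$
and a random $x$) verifies Lemma~\ref{L:Iwasawa} and Lemma~\ref{L:barn-iwasawa}, using the explicit
matrices for $n_{x}$, $\bar{n}_{x}$, $s_{x}$ and $r_{x}$ given above.
\begin{verbatim}
import numpy as np

m = 3
x = np.random.default_rng(2).normal(size=m)
x2 = x @ x                                    # |x|^2

def n(v):                                     # n_v
    v2 = v @ v; M = np.eye(m + 2)
    M[:m, m], M[:m, m + 1] = -v, v
    M[m, :m] = M[m + 1, :m] = v
    M[m, m] -= v2/2; M[m, m + 1] += v2/2
    M[m + 1, m] -= v2/2; M[m + 1, m + 1] += v2/2
    return M

def nbar(v):                                  # \bar n_v
    v2 = v @ v; M = np.eye(m + 2)
    M[:m, m], M[:m, m + 1] = v, v
    M[m, :m], M[m + 1, :m] = -v, v
    M[m, m] -= v2/2; M[m, m + 1] -= v2/2
    M[m + 1, m] += v2/2; M[m + 1, m + 1] += v2/2
    return M

def expH0(t):                                 # exp(t H_0)
    M = np.eye(m + 2)
    M[m, m] = M[m + 1, m + 1] = np.cosh(t)
    M[m, m + 1] = M[m + 1, m] = np.sinh(t)
    return M

sx = np.eye(m + 2)                            # s_x as in (Eq:sx)
sx[:m, :m] -= 2*np.outer(x, x)/(1 + x2)
sx[:m, m] = -2*x/(1 + x2); sx[m, :m] = 2*x/(1 + x2)
sx[m, m] = (1 - x2)/(1 + x2)
rx = np.eye(m + 2); rx[:m, :m] -= 2*np.outer(x, x)/x2   # diag{r_x, I_2}
s = np.diag([1.0]*m + [-1.0, 1.0])                       # s = diag{I_m, -1, 1}

print(np.allclose(n(x), sx @ expH0(-np.log(1 + x2)) @ nbar(x/(1 + x2))))
print(np.allclose(s @ n(x),                              # -log|x|^2 = -2 log|x|
      n(x/x2) @ rx @ expH0(-np.log(x2)) @ nbar(x/x2)))
\end{verbatim}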
\subsection{Irreducible unitary representations of $P$}\label{SS:rep-P}
The classification of irreducible unitary representations of $P$ (or $P_{1},P_{2}$) is obtained by using Mackey's
little group method (see e.g.\ \cite{Wolf}). In the following we illustrate the classification of infinite-dimensional
irreducible unitary representations of $P$ (or $P_{1},P_{2}$).
\begin{proposition}\label{P:Mackey}
Any infinite-dimensional irreducible unitary representation of $P$ (or $P_{1},P_{2}$) is isomorphic to $I_{P,\tau}$
(or $I_{P_{1},\tau},I_{P_{2},\tau}$) for a unique (up to isomorphism) irreducible finite-dimensional unitary
representation $\tau$ of $M'$ (or $M'_{1},M'_{2}$).
\end{proposition}
\begin{proof}
We sketch a proof for $P$. The proof for $P_{1}$ (or $P_{2}$) is similar. Let $\pi$ be an irreducible unitary
representation of $P$. If $\pi|_{N}$ is trivial, then $\pi$ factors through $P\to MA$ and is finite-dimensional.
Assume that $\pi|_{N}$ is non-trivial; then the support of the spectrum of $\pi|_{N}$ is not equal to $\{0\}$.
As the spectrum of $\pi|_{N}$ is an $MA$-stable subset of $\mathfrak{n}^{\ast}$ and $MA$ acts transitively
on $\mathfrak{n}^{\ast}-\{0\}$, $\xi_0$ is in the support. By Mackey's method, one then shows $\pi\cong I_{P,\tau}$
for a unique finite-dimensional irreducible unitary representation $\tau$ of $M'$ up to an isomorphism.
\end{proof}
\subsection{Abstract classification of coadjoint orbits in $\mathfrak{p}^{\ast}$}\label{SS:P-orbit}
Write $L=MA$ and $\mathfrak{l}=\mathfrak{m}\oplus\mathfrak{a}$. Then, $P=N\rtimes L$ is a Levi decomposition of $P$.
Write $L_{1}=M_{1}A\subset P_{1}$. Then, $P_{1}=N\rtimes L_{1}$.
There are exact sequences of $P$-modules
\[0\rightarrow\mathfrak{n}\rightarrow\mathfrak{p}\rightarrow\mathfrak{l}\rightarrow 0
\ \text{ and }\
0\rightarrow\mathfrak{l}^{\ast}\rightarrow\mathfrak{p}^{\ast}\rightarrow\mathfrak{n}^{\ast}\rightarrow 0.\]
Note that the action of $P$ on $\mathfrak{p}$ (or $\mathfrak{p}^{\ast}$) factors through $P_{1}$.
We have
\[L_{1}=M_1 A \cong \operatorname{SO}(m)\times\mathbb{R}_{>0},\quad
\mathfrak{n}^{\ast}\cong \mathbb{R}^{m},\]
and $L_{1}$ acts on $\mathfrak{n}^{\ast}$
as in \eqref{Eq:MAaction}.
Thus, $\{0\}$ and $\mathfrak{n}^{\ast}-\{0\}$ are the only two $L$-orbits in
$\mathfrak{n}^{\ast}$.
Write $$\psi_{n}:\mathfrak{p}^{\ast}\rightarrow\mathfrak{n}^{\ast}\textrm{ and }\psi_{l}:\mathfrak{p}^{\ast}\rightarrow
\mathfrak{l}^{\ast}$$ for projection maps corresponding to the inclusions $\mathfrak{n}\hookrightarrow\mathfrak{p}$ and $\mathfrak{l}\hookrightarrow\mathfrak{p}$.
Write
\[\phi_{l}:\mathfrak{l}^{\ast}\rightarrow\mathfrak{p}^{\ast}\textrm{ and }
\phi_{n}:\mathfrak{n}^{\ast}\rightarrow\mathfrak{p}^{\ast}\]
for inclusions corresponding to projections $\mathfrak{p}
\rightarrow\mathfrak{l}$ and $\mathfrak{p}\rightarrow\mathfrak{n}$ from $\mathfrak{p}=\mathfrak{l}+\mathfrak{n}$. Then,
$\psi_{n}$ and $\phi_{l}$ are $P$-equivariant maps. The maps $\psi_{l}$ and $\phi_{n}$ are $L$-equivariant, but not
$P$-equivariant.
For any $\xi\in\mathfrak{n}^{\ast}$, put $\tilde{\xi}=\phi_{n}(\xi)\in\mathfrak{p}^{\ast}$. Then,
\[\psi_{n}^{-1}(P\cdot \xi) =\mathfrak{l}^{\ast}+P\cdot\tilde{\xi}.\]
Write $P^{\xi}=\operatorname{Stab}_{P}(\xi)$ and $L^{\xi}=\operatorname{Stab}_{L}(\xi)$.
Then,
$P^{\xi}=N\rtimes L^{\xi}$.
\begin{proposition}\label{P:orbit-reduction}
In the above setting, we have
\[(\mathfrak{l}^{\ast}+P\cdot\tilde{\xi})/P
\cong(\mathfrak{l}^{\ast}+\tilde{\xi})/P^{\xi}
\cong(\mathfrak{l}^{\ast}/\operatorname{ad}^*(\mathfrak{n})(\tilde{\xi}))/L^{\xi}\cong(\mathfrak{l}^{\xi})^{\ast}/L^{\xi}.\]
\end{proposition}
\begin{proof}
We have
\[\mathfrak{l}^{\ast}+\tilde{\xi}\subset\mathfrak{l}^{\ast}+P\cdot\tilde{\xi}
\ \text{ and }\
\mathfrak{l}^{\ast}+P\cdot \tilde{\xi}=P(\mathfrak{l}^{\ast}+\tilde{\xi}).\]
On the other hand, for any $g\in P$,
\[g(\mathfrak{l}^{\ast}+\tilde{\xi})
\cap(\mathfrak{l}^{\ast}+\tilde{\xi})\neq\emptyset\]
if and only if $g\in P^{\xi}$. Thus,
\begin{equation*}
(\mathfrak{l}^{\ast}+P\cdot\tilde{\xi})/P\cong(\mathfrak{l}^{\ast}+\tilde{\xi})/P^{\xi}.
\end{equation*}
As $\mathfrak{n}$ is abelian, it acts trivially on $\mathfrak{n}^{\ast}$.
Thus, $$\operatorname{ad}^*(\mathfrak{n})
(\mathfrak{p}^{\ast})\subset\mathfrak{l}^{\ast}.$$ Moreover, $\mathfrak{n}$ acts trivially on $\mathfrak{l}^{\ast}$.
Consequently, $N$ acts on $\mathfrak{l}^{\ast}+\tilde{\xi}$ through translations: for $X\in\mathfrak{n}$ and $\eta\in \mathfrak{l}^{\ast}$,
\begin{equation*}
\operatorname{exp}(X)\cdot(\eta+\tilde{\xi})=(\eta+\operatorname{ad}^*(X)\tilde{\xi})+\tilde{\xi}.
\end{equation*}
Since $P^{\xi}=L^{\xi}N$, we get
\begin{equation*}
(\mathfrak{l}^{\ast}+\tilde{\xi})/P^{\xi}\cong(\mathfrak{l}^{\ast}/\operatorname{ad}^*(\mathfrak{n})(\tilde{\xi}))/L^{\xi}.
\end{equation*}
For $X\in\mathfrak{l}$ and $Y\in\mathfrak{n}$, \[(\operatorname{ad}^*(X)(\xi))(Y)=(\operatorname{ad}^*(X)(\tilde{\xi}))(Y)=-\tilde{\xi}([X,Y])
=-(\operatorname{ad}^*(Y)(\tilde{\xi}))(X).\] By this, \[X\in\mathfrak{l}^{\xi}\Leftrightarrow\tilde{\xi}|_{\operatorname{ad}(X)(\mathfrak{n})}
=0\Leftrightarrow(\operatorname{ad}^*(\mathfrak{n})(\tilde{\xi}))(X)=0.\] Thus, the annihilator of $\mathfrak{l}^{\xi}$
($\subset\mathfrak{l}$) in $\mathfrak{l}^{\ast}$ is $\operatorname{ad}^{\ast}(\mathfrak{n})(\tilde{\xi})$. Hence,
\begin{equation}\label{Eq:orbit-reduc3}\mathfrak{l}^{\ast}/\operatorname{ad}^*(\mathfrak{n})(\tilde{\xi})
\cong(\mathfrak{l}^{\xi})^{\ast}.\end{equation} This finishes the proof of the proposition.
\end{proof}
Note that $\mathfrak{l}=\mathfrak{m}\oplus\mathfrak{a}$ with $\mathfrak{m}$ a compact semisimple Lie algebra
and $\mathfrak{a}$ a one-dimensional abelian Lie algebra. Thus, $\mathfrak{l}^{\ast}=\mathfrak{m}^{\ast}
\oplus\mathfrak{a}^{\ast}$.
Let
\[\mathfrak{l}^{\ast}_{\xi}
=\{\eta\in \mathfrak{l}^{\ast} \mid (\eta,\operatorname{ad}^*(\mathfrak{n})(\tilde{\xi}))=0\}.
\]
Here, $(\cdot,\cdot)$ is the invariant form on $\mathfrak{l}^{\ast}$ induced by the
restriction on $\mathfrak{l}$ of the invariant form defined in (\ref{Eq:form}).
We note that $L^{\xi}\cdot \tilde{\xi}=\tilde{\xi}$ and hence
$L^{\xi}\cdot \operatorname{ad}^*(\mathfrak{n})(\tilde{\xi})=\operatorname{ad}^*(\mathfrak{n})(\tilde{\xi})$.
\begin{lemma}\label{L:p-standard1}
We have $\mathfrak{l}^{\ast}=\mathfrak{l}^{\ast}_{\xi}\oplus\operatorname{ad}^*(\mathfrak{n})(\tilde{\xi})$, and
$\mathfrak{l}^{\ast}_{\xi}\cong(\mathfrak{l}^{\xi})^{\ast}$ as $L^{\xi}$-modules.
\end{lemma}
\begin{proof}
When $\xi=0$, we have $L^{\xi}=L$, $\operatorname{ad}^*(\mathfrak{n})(\tilde{\xi})=0$
and $\mathfrak{l}^{\ast}_{\xi}=\mathfrak{l}^{\ast}$.
Then, the two statements in the lemma are clear.
When $\xi\neq 0$, one shows by a direct calculation that
\begin{equation*}
\mathfrak{a}^{\ast}\subset\operatorname{ad}^{\ast}(\mathfrak{n})(\tilde{\xi}).
\end{equation*}
Then the form on $\mathfrak{l}^{\ast}$ induced by $(\cdot,\cdot)$
is nondegenerate when restricted to $\operatorname{ad}^{\ast}(\mathfrak{n})(\tilde{\xi})$. Hence,
$\mathfrak{l}^{\ast}=\mathfrak{l}^{\ast}_{\xi}\oplus\operatorname{ad}^{\ast}(\mathfrak{n})(\tilde{\xi})$.
This is clearly a decomposition of $L^{\xi}$-modules. Combining with \eqref{Eq:orbit-reduc3}, we get
\begin{equation*}
\mathfrak{l}^{\ast}_{\xi}\cong\mathfrak{l}^{\ast}/\operatorname{ad}^{\ast}(\mathfrak{n})(\tilde{\xi})\cong
(\mathfrak{l}^{\xi})^{\ast}.\qedhere
\end{equation*}
\end{proof}
\begin{lemma}\label{L:p-standard3}\begin{enumerate}
\item[(1)]Every $P$-orbit in $\mathfrak{l}^{\ast}+P\cdot\tilde{\xi}$ has a representative of the form
$\eta+\tilde{\xi}$, where $\eta\in\mathfrak{l}^{\ast}_{\xi}$.
\item[(2)]Two elements $\eta+\tilde{\xi}$ and $\eta'+\tilde{\xi}$ $(\eta,\eta'\in\mathfrak{l}^{\ast}_{\xi})$
are in the same $P$-orbit if and only if $\eta$ and $\eta'$ are in the same $L^{\xi}$-orbit.
\item[(3)]For an element $\eta+\tilde{\xi}(\in\mathfrak{p}^{\ast})$ with $\eta\in\mathfrak{l}^{\ast}_{\xi}\cong
(\mathfrak{l}^{\xi})^{\ast}$, if $\xi\neq 0$, then $\operatorname{Stab}_{P}(\eta+\tilde{\xi})=\operatorname{Stab}_{L^{\xi}}(\eta)$;
if $\xi=0$, then $\operatorname{Stab}_{P}(\eta+\tilde{\xi})=\operatorname{Stab}_{L}(\eta)N$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) and (2) follow from Proposition~\ref{P:orbit-reduction} and Lemma~\ref{L:p-standard1}.
(3) Suppose that $g\cdot (\eta+\tilde{\xi})=\eta+\tilde{\xi}$ where $\eta\in\mathfrak{l}^{\ast}_{\xi}$ and
$g\in P$. First, we have $g\in P^{\xi}=L^{\xi}N$ by projecting to $\mathfrak{n}^{\ast}$. Write $g=g_1g_2$ with
$g_{1}\in L^{\xi}$ and $g_{2}\in N$. Since $g_2\cdot \eta=\eta$, we have \[g\cdot(\eta+\tilde{\xi})=
g_1\cdot\eta+ g_1g_2\cdot \tilde{\xi}=g_1\cdot \eta + g_1g_2g_1^{-1}\cdot \tilde{\xi}.\]
Since $g_1\cdot\eta\in\mathfrak{l}^{\ast}_{\xi}$ and $g_1g_2g_1^{-1}\cdot\tilde{\xi}-\tilde{\xi}\in
\operatorname{ad}^*(\mathfrak{n})(\tilde{\xi})$, the decomposition of Lemma~\ref{L:p-standard1} gives
$g_1\cdot \eta=\eta$ and $g_1g_2g_1^{-1}\cdot\tilde{\xi}=\tilde{\xi}$.
In particular, $g_{1}\in\operatorname{Stab}_{L^{\xi}}(\eta)$, which shows the statement when $\xi=0$.
When $\xi\neq 0$, we have $L\cdot \xi=\mathfrak{n}^*-\{0\}$,
which implies $\dim(\mathfrak{l}\cdot \xi) = \dim\mathfrak{n}$.
Then by Lemma \ref{L:p-standard1},
$\dim \mathfrak{n} =\dim(\operatorname{ad}^{\ast}(\mathfrak{n})(\tilde{\xi}))$
and hence the map
\[\mathfrak{n}\rightarrow\mathfrak{l}^{\ast},\quad Y\mapsto
\operatorname{ad}^{\ast}(Y)(\tilde{\xi})\] is injective.
Thus, $g_{1}g_{2}g_{1}^{-1}=1$ and $g_{2}=1$. Therefore,
$\operatorname{Stab}_{P}(\eta+\tilde{\xi})=\operatorname{Stab}_{L^{\xi}}(\eta)$.
\end{proof}
For a given element $\xi\in\mathfrak{n}^{\ast}$, Proposition \ref{P:orbit-reduction} reduces the classification of $P$-orbits
intersecting $\mathfrak{l}^{\ast}+\tilde{\xi}$ to the classification of coadjoint $L^{\xi}$-orbits in
$(\mathfrak{l}^{\xi})^{\ast}$. Moreover, for an element $\eta\in\mathfrak{l}^{\ast}_{\xi}$, Lemma \ref{L:p-standard3} (2) shows that $\eta+\tilde{\xi}$
is a canonical form; in Lemma \ref{L:p-standard3} (3), we calculated $\operatorname{Stab}_{P}(\eta+\tilde{\xi})$
in terms of $\operatorname{Stab}_{L^{\xi}}(\eta)$. Note that, when $\xi\neq 0$, one has $L^{\xi}\cong\operatorname{Spin}(m-1)$.
\begin{definition}\label{D:depth}
We say a coadjoint $P$-orbit $\mathcal{O}$ in $\mathfrak{p}^{\ast}$ has {\it depth zero} if it is contained
in $\mathfrak{l}^{\ast}$; otherwise we say $\mathcal{O}$ has {\it depth one}.
\end{definition}
\subsection{Explicit parametrization of coadjoint orbits in $\mathfrak{p}^{\ast}$}\label{SS:P-orbit2}
We now give a parametrization of depth one coadjoint $P$-orbits.
For $Y\in\mathfrak{so}(m)$, $\beta\in\mathbb{R}^{m}$ and $a\in\mathbb{R}$, put
\begin{equation*}
X_{Y,\beta,a}=
\begin{pmatrix}
Y&\beta^{t}&\beta^{t}\\
-\beta&0&a\\
\beta&a&0\\
\end{pmatrix}\in\overline{\mathfrak{p}}.
\end{equation*}
Then, for any
$X=\begin{pmatrix}
Y&\beta_{1}^{t}&\beta_2^{t}\\
-\beta_{1}&0&a\\
\beta_{2}&a&0\\
\end{pmatrix}\in\mathfrak{g},$
one has
$\operatorname{pr}(X)=\operatorname{pr}(X_{Y,\frac{\beta_{1}+\beta_{2}}{2},a})$;
for any
$f\in\mathfrak{p}^{\ast}$, there exists a unique triple
\[(Y,\beta,a)\in\mathfrak{so}(m)\times \mathbb{R}^{m}\times\mathbb{R}\]
such that
$f=\operatorname{pr}(X_{Y,\beta,a})$.
\begin{lemma}\label{L:p-standard4}
For $0\neq\beta\in\mathbb{R}^{m}$, put
$\xi=\psi_{n}(\operatorname{pr}(X_{0,\beta,0}))\in\mathfrak{n}^{\ast}$.
In order that $\operatorname{pr}(X_{Y,0,a})\in\mathfrak{l}^{\ast}_{\xi}$
it is necessary and sufficient that $a=0$ and $Y\beta^{t}=0$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
\operatorname{pr}(X_{Y,0,a})\in\mathfrak{l}^{\ast}_{\xi}
\Leftrightarrow
(\operatorname{pr}(X_{Y,0,a}), \operatorname{ad}^*(\mathfrak{n})(\tilde{\xi}))=0
\Leftrightarrow
([X_{Y,0,a}, X_{0,\beta,0}], \mathfrak{n})=0.
\end{align*}
Since $[X_{Y,0,a}, X_{0,\beta,0}]=X_{0,\beta Y^t - a\beta,0}\in\bar{\mathfrak{n}}$
by \eqref{Eq:brackets},
$\operatorname{pr}(X_{Y,0,a})\in\mathfrak{l}^{\ast}_{\xi}$
is equivalent to $\beta Y^t - a\beta=0$.
If $\beta Y^t - a\beta=0$,
then $0=(\beta Y^t - a\beta)\beta^t = - a\beta\beta^t$, since $\beta Y^{t}\beta^{t}=0$ by the skew-symmetry of $Y$.
Hence $a=0$ and then $\beta Y^t=0$.
The lemma follows.
\end{proof}
\begin{lemma}\label{p:standard5}
For a general triple $(Y,\beta,a)\in\mathfrak{so}(m)\times\mathbb{R}^{m}\times\mathbb{R}$ with
$\beta\neq 0$, put $\xi=\psi_{n}(\operatorname{pr}(X_{Y,\beta,a}))\in\mathfrak{n}^{\ast}$.
Then, there exists a unique $\gamma\in\mathbb{R}^{m}$
such that $\operatorname{Ad}^*(n_{\gamma})(\operatorname{pr}(X_{Y,\beta,a}))\in\mathfrak{l}^{\ast}_{\xi}+\tilde{\xi}$.
Moreover,
\[\operatorname{Ad}^*(n_{\gamma})(\operatorname{pr}(X_{Y,\beta,a}))=
\operatorname{pr}(X_{Y-\frac{1}{|\beta|^2}(Y\beta^{t}\beta-\beta^{t}\beta Y^{t}),\beta,0}).\]
\end{lemma}
\begin{proof}
For $\gamma\in \mathbb{R}^{m}$, we calculate by using \eqref{Eq:brackets}
\[\operatorname{pr}(\operatorname{Ad}(n_{\gamma})(X_{Y,\beta,a}))
=\operatorname{pr}(X_{Y+2\gamma^t\beta-2\beta^t\gamma,\,\beta,\,a+2\gamma\beta^t}).\]
By Lemma \ref{L:p-standard4}, in order that
$\operatorname{Ad}^*(n_{\gamma})(\operatorname{pr}(X_{Y,\beta,a}))\in\mathfrak{l}^{\ast}_{\xi}+\tilde{\xi}$,
it is necessary and sufficient that $a+2\gamma\beta^t=0$ and
$\beta(Y+2\gamma^t\beta-2\beta^t\gamma)^t=0$.
From these two equations, one solves that
\[\gamma=-\frac{1}{2|\beta|^2}(\beta Y^t+a\beta).\]
Then we have \[\operatorname{Ad}^*(n_{\gamma})(\operatorname{pr}(X_{Y,\beta,a}))=
\operatorname{pr}(X_{Y-\frac{1}{|\beta|^2}(Y\beta^t\beta-\beta^t\beta Y^t),\,\beta,\,0}).\qedhere\]
\end{proof}
\begin{lemma}\label{L:class-matrix}
Assume that $\beta\neq 0$ and $Y\beta^{t}=0$. Then, the orbit $P\cdot\operatorname{pr}(X_{Y,\beta,0})$ is
determined by the class of $Z_{Y,\beta}$ with respect to the conjugation action of $\operatorname{SO}(m+1)$, where
\begin{equation*}
Z_{Y,\beta}=
\begin{pmatrix}Y&\frac{\beta^t}{|\beta|}\\
-\frac{\beta}{|\beta|}&0\\
\end{pmatrix}.
\end{equation*}
\end{lemma}
\begin{proof}
We assume $m$ is odd and $m=2n-1$.
Put
\[H'=
\begin{pmatrix}0&1\\ -1&0\\
\end{pmatrix}.\]
Then, there exists
$(W,a) \in \operatorname{SO}(m)\times \mathbb{R}_{>0}$ such that
\[WYW^{-1}=\operatorname{diag}\{x_{1}H',\dots,x_{n-1}H',0\}\]
and
$a\beta W^t=(\underbrace{0,\dots,0}_{m-1},1)$,
where $x_1\geq x_{2}\geq\cdots\geq x_{n-2}\geq |x_{n-1}|$.
By Lemma~\ref{L:p-standard3},
the orbit $P\cdot \operatorname{pr}(X_{Y,\beta,0})$ is determined by the tuple $(x_1,\dots,x_{n-1})$.
Since $Z_{Y,\beta}$ is conjugate to
\[\operatorname{diag}\{x_{1}H',\dots,x_{n-1}H',H'\},\]
the class of $Z_{Y,\beta}$ with respect to the conjugation action of $\operatorname{SO}(m+1)$
is also determined by the tuple $(x_1,\dots,x_{n-1})$.
Hence, the conclusion of the lemma follows.
The case where $m$ is even is similar.
\end{proof}
Suppose that $m$ is odd and $m+1=2n$.
Then it is known that
the $\operatorname{SO}(2n)$-conjugacy class of $Z_{Y,\beta}$ is determined by its singular values
and the sign of its Pfaffian (which can be $1$, $-1$ or $0$).
Here, the singular values of $Z_{Y,\beta}$
mean the square roots of eigenvalues of $(Z_{Y,\beta})^t Z_{Y,\beta}$.
From the proof of Lemma \ref{L:class-matrix},
we see that singular values of $Z_{Y,\beta}$ are
\[\{x_1,x_1,x_2,x_2,\dots,x_{n-1},x_{n-1},1,1\}\]
and the singular values of $Y$ are
\[\{x_1,x_1,x_2,x_2,\dots,x_{n-1},x_{n-1},0\}.\]
The sign of the Pfaffian of $Z_{Y,\beta}$ is
equal to the sign of $x_{n-1}$.
Next, suppose that $m$ is even and $m+1=2n-1$.
Then $Z_{Y,\beta}$ has an eigenvalue $0$
and the singular values are
\[\{x_1,x_1,x_2,x_2,\dots,x_{n-2},x_{n-2},1,1,0\}\]
with $x_1\geq x_2\geq \cdots \geq x_{n-2}\geq 0$.
The singular values of $Y$ are
\[\{x_1,x_1,x_2,x_2,\dots,x_{n-2},x_{n-2},0,0\}.\]
The $\operatorname{SO}(2n-1)$-conjugacy class of $Z_{Y,\beta}$
is determined by the tuple $(x_1,\dots,x_{n-2})$.
The Pfaffian does not appear in this case.
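As an illustration (a small case worked out directly from the description above), take $m=3$ (so $n=2$), $Y=x_{1}H'\oplus 0\in\mathfrak{so}(3)$ and $\beta=(0,0,1)$, so that $Y\beta^{t}=0$. Then
\begin{equation*}
Z_{Y,\beta}=
\begin{pmatrix}
0&x_{1}&0&0\\
-x_{1}&0&0&0\\
0&0&0&1\\
0&0&-1&0
\end{pmatrix},
\end{equation*}
whose singular values are $\{x_{1},x_{1},1,1\}$ and whose Pfaffian equals $x_{1}$; its sign is the sign of $x_{1}=x_{n-1}$, in accordance with the statements above.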
\begin{remark}
It is easy to see that if a coadjoint orbit in $\mathfrak{p}^{\ast}$
is strongly regular (see Section~\ref{S:introduction}), then it has depth one.
In the above notation the orbit $P\cdot\operatorname{pr}(X_{Y,\beta,0})$ is
strongly regular if and only if $x_1>\cdots >x_{n-2}>|x_{n-1}|>0$ for odd $m$
and $x_1>\cdots > x_{n-2}>0$ for even $m$.
\end{remark}
\section{Restriction to $P$ of irreducible representations of $G$}\label{S:resP}
\subsection{Moderate growth smooth representations of $G$ (or $P$)}\label{SS:CW-Cloux}
Let $\mathcal{C}_{K}(G)$ denote the category of Harish-Chandra modules, i.e., finitely generated admissible
$(\mathfrak{g},K)$-modules. For a $G$-representation $\pi$, let $\pi_{K}$ be the space of $K$-finite vectors in $\pi$.
Let $\mathcal{C}(G)$ denote the category of moderate growth, smooth Fr\'echet $G$-representations $\pi$ such that
$\pi_{K}\in\mathcal{C}_{K}(G)$.
The morphisms in $\mathcal{C}(G)$ are defined to be continuous intertwiners with images that are direct summands in
the category of Fr\'echet spaces.
The {\it Casselman-Wallach theorem} asserts that the functor
\[\mathcal{C}(G)\rightarrow \mathcal{C}_{K}(G),\quad \pi\mapsto \pi_{K}\]
gives an equivalence of abelian categories.
For an object $V\in\mathcal{C}_{K}(G)$, write $V^{\operatorname{sm}}\in\mathcal{C}(G)$ for a
{\it Casselman-Wallach globalization} of $V$. Then $(V^{\operatorname{sm}})_{K}\cong V$.
In \cite{duCloux}, du Cloux studied the category of moderate growth, smooth Fr\'echet representations of a real
semi-algebraic group. We recall some results of \cite{duCloux} in our setting. Let $\mathcal{C}(P)$
(resp.\ $\mathcal{C}(M')$) denote the category of moderate growth, smooth Fr\'echet representations of $P$
(resp.\ $M'$). The morphisms are continuous intertwiners.
Let $\mathscr{S}(\mathfrak{n})$ (resp.\ $\mathscr{S}(\mathfrak{n}^{\ast})$)
be the Schwartz space on $\mathfrak{n}$ (resp.\ $\mathfrak{n}^{\ast}$)
with the algebra structure by the convolution product
(resp.\ by the usual multiplication) of functions.
The inverse Fourier transform gives an algebra isomorphism
$\mathscr{S}(\mathfrak{n})\xrightarrow{\sim} \mathscr{S}(\mathfrak{n}^{\ast})$.
Let $\mathscr{S}(\mathfrak{n}^{\ast}-\{0\})$ be the Schwartz space on
$\mathfrak{n}^{\ast}-\{0\}$. In other words, it consists of $f|_{\mathfrak{n}^{\ast}-\{0\}}$ with
$f\in\mathscr{S}(\mathfrak{n}^{\ast})$ such that $f$ and all its (higher) derivatives vanish at
$0\,(\in \mathfrak{n}^{\ast})$.
A representation $E\in \mathcal{C}(P)$ can be viewed as a moderate growth, smooth Fr\'echet representation of $N$
by restriction. Then, via the exponential map $\mathfrak{n}\cong N$, the Fr\'echet space $E$ becomes an
$\mathscr{S}(\mathfrak{n})$-module, and then an $\mathscr{S}(\mathfrak{n}^{\ast}-\{0\})$-module by
$\mathscr{S}(\mathfrak{n}^{\ast}-\{0\})\subset \mathscr{S}(\mathfrak{n}^{\ast})\cong\mathscr{S}(\mathfrak{n})$.
We shall define a functor $\Psi\colon \mathcal{C}(P)\to \mathcal{C}(M')$ as follows. This functor is given as
$E\to E(x_0)$ in \cite[Th\'eor\`eme 2.5.8]{duCloux}. Recall that $\xi_0\in\mathfrak{n}^{\ast}-\{0\}$ is defined in
\S\ref{SS:notation} and the stabilizer of $\xi_0$ for the coadjoint action of $P$ on $\mathfrak{n}^{\ast}$
is $\operatorname{Stab}_{P}(\xi_0)=M'N$. Define the following algebra by adding the constant function $1$, which becomes the unit
of the algebra: \[\tilde{\mathscr{S}}(\mathfrak{n}^{\ast}-\{0\})=\mathbb{C} 1\oplus \mathscr{S}(\mathfrak{n}^{\ast}-
\{0\}).\]
Define an ideal $\mathfrak{m}_{\xi_0}$ by
\[\mathfrak{m}_{\xi_0}=\{f\in \tilde{\mathscr{S}}(\mathfrak{n}^{\ast}-\{0\}) : f(\xi_0)=0\}.\]
For $E\in \mathcal{C}(P)$,
\cite[Lemme 2.5.7]{duCloux} shows that
the subspace $\mathfrak{m}_{\xi_0}\cdot E$ is closed
and stable by the action of $M'N$.
Hence the quotient $E/(\mathfrak{m}_{\xi_0}\cdot E)$
is a Fr\'echet space with a natural $M'N$-action on it.
The action of $\tilde{\mathscr{S}}(\mathfrak{n}^*-\{0\})$ on $E/(\mathfrak{m}_{\xi_0}\cdot E)$
factors through the evaluation map
$\tilde{\mathscr{S}}(\mathfrak{n}^*-\{0\})\ni f \mapsto f(\xi_0)$.
Hence $N$ acts on $E/(\mathfrak{m}_{\xi_0}\cdot E)$ by $e^{\mathbf{i}\xi_0}$.
When we view $E/(\mathfrak{m}_{\xi_0}\cdot E)$ as a representation of $M'$ we write
\begin{equation*}
\Psi(E):=E/(\mathfrak{m}_{\xi_0}\cdot E).
\end{equation*}
By \cite[Th\'eor\`eme 2.5.8]{duCloux},
$\Psi(E)\in \mathcal{C}(M')$ and then $\Psi$ defines a
functor $\mathcal{C}(P)\to \mathcal{C}(M')$.
Let $F$ be a finite-dimensional representation of $P$ such that the $N$-action is trivial. Then it is easy to see that
there is a natural isomorphism
\begin{equation}\label{Jtensor}
\Psi(E)\otimes (F|_{M'}) \cong \Psi(E\otimes F).
\end{equation}
Next, we define the induction from $M'$-representations to $P$-representations.
Let $(\tau,V_{\tau})\in \mathcal{C}(M')$.
Then $\tau\otimes e^{\mathbf{i}\xi_{0}}$ is a smooth Fr\'echet representation of $M'N$.
One defines in a natural way the smoothly induced representation
$C^{\infty}\operatorname{Ind}_{M'N}^{P}(\tau\otimes e^{\mathbf{i}\xi_{0}})$.
Let $\mathscr{S}(P,V_{\tau})$ be the space of Schwartz functions on $P$
taking values in $V_{\tau}$.
For $f\in\mathscr{S}(P,V_{\tau})$, define $\bar{f}\in C^{\infty}(P,V_{\tau})$ by
\[\bar{f}(g)=\int_{M'N}(\tau\otimes e^{\mathbf{i}\xi_{0}})(mn)f(gmn)\operatorname{d\!} m\operatorname{d\!} n.\]
Then one has $\bar{f}\in C^{\infty}\operatorname{Ind}_{M'N}^{P}(\tau\otimes e^{\mathbf{i}\xi_{0}})$. Let
\[\mathscr{S}\operatorname{Ind}_{M'N}^{P}(\tau\otimes e^{\mathbf{i}\xi_{0}})=
\{\bar{f}:f\in\mathscr{S}(P,V_{\tau})\}.\] Then
$\mathscr{S}\operatorname{Ind}_{M'N}^{P}(\tau\otimes e^{\mathbf{i}\xi_0})$ is a dense
subspace of $C^{\infty}\operatorname{Ind}_{M'N}^{P}(\tau\otimes e^{\mathbf{i}\xi_0})$
and $\mathscr{S}\operatorname{Ind}_{M'N}^{P}
(\tau\otimes e^{\mathbf{i}\xi_0})\in\mathcal{C}(P)$.
Let \[\mathscr{O}_{M}(P,\tau\otimes e^{\mathbf{i}\xi_{0}})=\{f\in C^{\infty}(P,V_{\tau}): h\cdot f\in\mathscr{S}(P,V_{\tau})
\ (\forall h\in\mathscr{S}(P))\}.\]
Let
\begin{align*}
&\mathscr{O}_{M}\operatorname{Ind}_{M'N}^{P}(\tau\otimes e^{\mathbf{i}\xi_{0}}) \\
& =\{f\in\mathscr{O}_{M}(P,\tau\otimes e^{\mathbf{i}\xi_0}):
f(gmn)=(\tau\otimes e^{\mathbf{i}\xi_{0}})(mn)^{-1}f(g)\ (\forall g\in P,mn\in M'N)\}.
\end{align*}
This is not a Fr\'echet space, but $P$ naturally acts on it.
Then
\[\mathscr{S}\operatorname{Ind}_{M'N}^{P}
(\tau\otimes e^{\mathbf{i}\xi_0})\subset\mathscr{O}_{M}\operatorname{Ind}_{M'N}^{P}(\tau\otimes e^{\mathbf{i}\xi_0})\subset C^{\infty}\operatorname{Ind}_{M'N}^{P}(\tau\otimes e^{\mathbf{i}\xi_0}).\]
Since $P/(M'N)\cong \mathfrak{n}^*-\{0\}$,
these three spaces become $\mathscr{S}(\mathfrak{n}^*-\{0\})$-modules by multiplication.
Since $N$ is nilpotent and $M'$ is compact, the group $M'N$ is unimodular and the restriction to $M'N$
of the modulus character of $P$ is also trivial. Let $E\in\mathcal{C}(P)$. The natural map $E\to\Psi(E)$
is $M'$-intertwining and corresponds to a $P$-intertwiner
\[u\colon E\to C^{\infty}\operatorname{Ind}_{M'N}^{P}(\Psi(E)\otimes e^{\mathbf{i}\xi_{0}})\] by the Frobenius
reciprocity. The following is a part of \cite[Th\'eor\`eme 2.5.8]{duCloux} applied to the group $P$.
\begin{fact}\label{F:duCloux}
Let $E\in\mathcal{C}(P)$ and let $u\colon E\to C^{\infty}\operatorname{Ind}_{M'N}^{P}(\Psi(E)\otimes e^{\mathbf{i}\xi_{0}})$
be as above. Then \begin{align*}
&\mathscr{S}\operatorname{Ind}_{M'N}^{P}(\Psi(E)\otimes e^{\mathbf{i}\xi_{0}})\subset\operatorname{Im}(u)\subset
\mathscr{O}_{M}\operatorname{Ind}_{M'N}^{P}(\Psi(E)\otimes e^{\mathbf{i}\xi_{0}}),\text{ and }\\&\operatorname{Ker}(u)=
\{v\in E : \mathscr{S}(\mathfrak{n}^*-\{0\}) \cdot v = 0\}.
\end{align*}
\end{fact}
We need several lemmas below.
\begin{lemma}\label{L:Frobenius}
Let $E\in\mathcal{C}(P)$ and $W\in \mathcal{C}(M')$. Let
$\varphi\colon E\hookrightarrow C^{\infty}\operatorname{Ind}_{M'N}^{P}(W\otimes e^{\mathbf{i}\xi_{0}})$
be an injective $P$-intertwining map that is also a homomorphism of $\mathscr{S}(\mathfrak{n}^*-\{0\})$-modules.
Then the kernel of the map $\bar{\varphi}\colon E\to W$ given by $\bar{\varphi}(v)=(\varphi(v))(e)$
equals $\mathfrak{m}_{\xi_0}\cdot E$.
\end{lemma}
\begin{proof}
Take any $f\in \mathfrak{m}_{\xi_0}$ and $v\in E$. Since $\varphi$ is an $\mathscr{S}(\mathfrak{n}^{\ast}-\{0\})$-homomorphism,
$\varphi(fv)=f\varphi(v)$ and $\bar{\varphi}(fv)=f(\xi_0)\bar{\varphi}(v)$. Hence, $\operatorname{Ker}(\bar{\varphi})\supset
\mathfrak{m}_{\xi_0}\cdot E$. Then, $\bar{\varphi}$ descends to $\bar{\varphi}\colon \Psi(E)\to W$.
The map $\varphi$ factors as
\[E\xrightarrow{u} C^{\infty}\operatorname{Ind}_{M'N}^{P}(\Psi(E)\otimes e^{\mathbf{i}\xi_{0}})
\to C^{\infty}\operatorname{Ind}_{M'N}^{P}(W\otimes e^{\mathbf{i}\xi_{0}}).\]
By Fact~\ref{F:duCloux},
$\operatorname{Im}(u)\supset \mathscr{S}\operatorname{Ind}_{M'N}^{P}(\Psi(E)\otimes e^{\mathbf{i}\xi_{0}})$
and then
$\mathscr{S}\operatorname{Ind}_{M'N}^{P}(\Psi(E)\otimes e^{\mathbf{i}\xi_{0}})
\to C^{\infty}\operatorname{Ind}_{M'N}^{P}(W\otimes e^{\mathbf{i}\xi_{0}})$
is injective.
Therefore, $\bar{\varphi}\colon \Psi(E)\to W$ is also injective, and hence
$\operatorname{Ker}(\bar{\varphi})=\mathfrak{m}_{\xi_0}\cdot E$.
\end{proof}
\begin{lemma}\label{L:exact}
Let $0\to E_1 \to E_2 \to E_3 \to 0$ be a sequence
in $\mathcal{C}(P)$ which is exact as vector spaces.
Then the induced sequence $0\to \Psi(E_1) \to \Psi(E_2) \to \Psi(E_3) \to 0$
in $\mathcal{C}(M')$ is also exact as vector spaces.
\end{lemma}
\begin{proof}
It is easy to see that
$\Psi(E_1) \to \Psi(E_2) \to \Psi(E_3) \to 0$ is exact.
Assuming $E_1 \hookrightarrow E_2$ is an injective homomorphism in $\mathcal{C}(P)$,
we will show that $\Psi(E_1) \to \Psi(E_2)$ is injective.
By Fact~\ref{F:duCloux}, we have
\begin{align*}
&u_i\colon E_i\to \mathscr{O}_{M}\operatorname{Ind}_{M'N}^{P}(\Psi(E_i)\otimes e^{\mathbf{i}\xi_{0}}),\\
&\operatorname{Ker}(u_i)=\{v\in E_i :
\mathscr{S}(\mathfrak{n}^*-\{0\}) \cdot v = 0\},\\
&\mathscr{S}\operatorname{Ind}_{M'N}^{P}(\Psi(E_i)\otimes e^{\mathbf{i}\xi_{0}})\subset\operatorname{Im}(u_i)\quad (i=1,2).
\end{align*}
By this description of $\operatorname{Ker}(u_i)$,
we have $\operatorname{Ker}(u_1)=E_1\cap \operatorname{Ker}(u_2)$ and hence
the natural map $\operatorname{Im}(u_1)\to \operatorname{Im}(u_2)$ is injective.
By composing
\[\mathscr{S}\operatorname{Ind}_{M'N}^{P}(\Psi(E_1)\otimes e^{\mathbf{i}\xi_{0}})\subset\operatorname{Im}(u_1)
\hookrightarrow \operatorname{Im}(u_2)
\subset \mathscr{O}_{M}\operatorname{Ind}_{M'N}^{P}(\Psi(E_2)\otimes e^{\mathbf{i}\xi_{0}}),\]
we obtain an injective map \[\mathscr{S}\operatorname{Ind}_{M'N}^{P}(\Psi(E_1)\otimes e^{\mathbf{i}\xi_{0}})
\hookrightarrow \mathscr{O}_{M}\operatorname{Ind}_{M'N}^{P}(\Psi(E_2)\otimes e^{\mathbf{i}\xi_{0}}).\]
This is induced from the map $\Psi(E_1)\to \Psi(E_2)$, which must also be injective.
\end{proof}
For $E\in \mathcal{C}(P)$, $\Psi(E)$ is the maximal Hausdorff quotient of $E$ on which $\mathfrak{n}$ acts by
$\mathbf{i}\xi_0$ in the following sense. Let $F$ be the linear span of the set
\[\{X\cdot v - \mathbf{i}\xi_0(X) v \mid v\in E,\ X\in\mathfrak{n}\}\] and let $F^{\operatorname{cl}}$ be the
closure of $F$ in $E$. Then $F^{\operatorname{cl}}$ is stable under the $M'N$-action and
\begin{equation}\label{Eq:max_quotient}
\Psi(E)\cong E/F^{\operatorname{cl}}.
\end{equation} Take any $f\in \mathfrak{m}_{\xi_0}$ and $v\in E$ and consider the inverse Fourier transform
$\mathcal{F}(f)$ of $f$. Calculate the vector $fv$ by the definition of the $\mathscr{S}(\mathfrak{n})$-action on $E$, i.e., the
integral of $\mathcal{F}(f)(n)nv$ over $n\in N$. Since the projection $p\colon E\to E/F^{\operatorname{cl}}$ respects the $N$-action and $N$
acts by $e^{\mathbf{i}\xi_0}$ on $E/F^{\operatorname{cl}}$, we have \[p(fv)=\int_{N} \mathcal{F}(f)(n)e^{\mathbf{i}(\xi_0,n)}p(v)\,dn=f(\xi_0)p(v)=0.\]
Then, the map $E\to E/F^{\operatorname{cl}}$ factors through
$\Psi(E)\to E/F^{\operatorname{cl}}$. On the other hand, since $\mathfrak{n}$ acts on $E/(\mathfrak{m}_{\xi_0}\cdot E)$
by $\mathbf{i}\xi_0$, we get a map $E/F^{\operatorname{cl}}\to\Psi(E)$, which is the inverse of the above map.
Thus \eqref{Eq:max_quotient} follows.
For $V\in \mathcal{C}(G)$, write $V|_P\in \mathcal{C}(P)$ for the representation obtained by restriction of the action
of $G$ to $P$.
\begin{lemma}\label{L:tensor}
Let $V\in \mathcal{C}(G)$ and
let $F$ be a finite-dimensional representation of $P$.
Then there exists an isomorphism of $M'$-representations:
\[\Psi(V|_P)\otimes (F|_{M'}) \cong \Psi((V\otimes F)|_P).\]
\end{lemma}
\begin{proof}
There exists a filtration $0=F_0\subset F_1\subset \cdots \subset F_k =F$
of $P$-subrepresentations such that $N$ acts trivially on $F_i/F_{i-1}$.
Then \eqref{Jtensor} and Lemma~\ref{L:exact} imply
\[\Psi(V|_P)\otimes (F_i |_{M'}) \cong \Psi(V|_P\otimes F_i)\] inductively, and
we obtain the conclusion of the lemma.
\end{proof}
\begin{remark}\label{R:CHM}
For $V\in \mathcal{C}(G)$, the dimension of $\Psi(V|_P)$ is finite and
$\Psi(V|_P)\cong H_0(\mathfrak{n}, V\otimes e^{-\mathbf{i}\xi_0})$,
namely, the subspace $F$ defined above \eqref{Eq:max_quotient} is closed.
This is proved in \cite[\S8]{CHM}.
The exactness of $\Psi$ for representations of $G$ is also proved there.
Vectors in the dual space of $\Psi(V|_P)$ are none other than Whittaker vectors.
\end{remark}
We will apply the above lemmas to study the restriction of unitary representations.
Let $V$ be a non-trivial irreducible
unitarizable $(\mathfrak{g},K)$-module.
Write $\bar{V}$ for its Hilbert space completion and $V^{\operatorname{sm}}$ for the
Casselman-Wallach globalization. By Proposition~\ref{P:Mackey}, an irreducible unitary representation of $P$ is either
finite-dimensional or equal to $I_{P,\tau}$ for an irreducible representation $\tau$ of $M'$. In the former case, it
factors through $P/N (\cong MA)$. Hence these are parametrized by irreducible unitary representations
$\sigma\otimes e^{\nu}$ of $MA$. Then by general theory, the restriction of $\bar{V}$ to $P$ decomposes into
irreducibles as \begin{align*}
\bar{V}|_P \cong \int^{\oplus} (\sigma\otimes e^\nu)^{m(\sigma,\nu)} d\mu
\oplus \bigoplus_{\tau} (I_{P,\tau})^{m(\tau)}.
\end{align*}
Here, the first term on the right hand side is a direct integral of irreducible unitary representations and the second
term is a Hilbert space direct sum. We will show that actually the first term on the right hand side does not appear and
the second term is a finite sum. Since any vector $v\in\int^{\oplus}(\sigma\otimes e^\nu)^{m(\sigma,\nu)}d\mu$ is
$N$-invariant, \[V\ni v' \mapsto (v',v) \in \mathbb{C} \] defines an $\mathfrak{n}$-invariant vector of the algebraic dual space $V^*$.
Since it is known that $H^0(\mathfrak{n},V^*)$ is finite-dimensional (\cite[Corollary 2.4]{Casselman-Osborne}),
$\int^{\oplus} (\sigma\otimes e^\nu)^{m(\sigma,\nu)} d\mu$ is also finite-dimensional and in particular it only has a
discrete spectrum. Suppose that $\sigma\otimes e^{\nu}$ appears in $\bar{V}|_P$ as a direct summand. Then by the Frobenius
reciprocity, we obtain an intertwining map \[V\hookrightarrow \operatorname{Ind}_P^G (\sigma\otimes e^{\nu}\otimes\mathbf{1}_N)
\bigl(\cong \operatorname{Ind}_{\bar{P}}^{G}(\sigma\otimes e^{-\nu}\otimes\mathbf{1}_{\bar{N}})=I(\sigma,-\nu+\rho')\bigr).\]
Since $\nu\in \mathbf{i}\mathfrak{a}^*$, $V$ is isomorphic to the unique irreducible subrepresentation of $I(\sigma,-\nu+\rho')$.
By considering the leading exponents of matrix coefficients of $V$, \cite[Theorem 9.1.4]{Collingwood} (which in turn is
implied by a theorem of Howe-Moore in \cite{Howe-Moore}) implies that $\nu=0$ and $V$ is trivial, which is not the
case. Hence, $\int^{\oplus}(\sigma\otimes e^\nu)^{m(\sigma,\nu)}d\mu=0$.
Let $\bar{\tau}:=\sum^{\oplus}_{\tau} \tau^{m(\tau)}$ be the Hilbert direct sum.
Let $\operatorname{Ind}_{M'N}^P(\bar{\tau}\otimes e^{\mathbf{i}\xi_0})$
be the unitarily ($L^2$-)induced representation. Then
\[\bar{V}|_P \cong \bigoplus_{\tau} (I_{P,\tau})^{m(\tau)}
\cong \operatorname{Ind}_{M'N}^P(\bar{\tau}\otimes e^{\mathbf{i}\xi_{0}}).\]
Let $\bar{\tau}^{\infty}$ be the
set of smooth vectors in $\bar{\tau}$ as a representation of $M'$.
Then by the Sobolev embedding theorem (on $P/M'N\cong\mathbb{R}^{m}-\{0\}$),
the smooth vectors in $\bar{V}$ lie in
$C^{\infty}\operatorname{Ind}_{M'N}^P(\bar{\tau}^{\infty} \otimes e^{\mathbf{i} \xi_{0}})$.
Hence we obtain an injective $P$-intertwining map
\[V^{\operatorname{sm}}\hookrightarrow
C^{\infty}\operatorname{Ind}_{M'N}^P(\bar{\tau}^{\infty}\otimes e^{\mathbf{i}\xi_{0}}).\]
Now, applying Lemma~\ref{L:Frobenius}
and using the density of $V^{\operatorname{sm}}$ in $\bar{V}$,
we conclude that
\[\Psi(V^{\operatorname{sm}}|_P)\cong\bar{\tau}^{\infty}.\]
By Remark~\ref{R:CHM} or Proposition~\ref{P:res-induced} shown later,
$\Psi(V^{\operatorname{sm}}|_P)$ is always finite-dimensional.
Therefore, $\bar{\tau}^{\infty}=\bar{\tau}$. We thus obtain the following.
\begin{lemma}\label{L:unitary-J}
Suppose that $V$ is a non-trivial irreducible unitarizable $(\mathfrak{g},K)$-module. Then
\begin{align*}
\bar{V}|_{P}\cong \operatorname{Ind}_{M'N}^{P}(\Psi(V^{\operatorname{sm}}|_P)\otimes e^{\mathbf{i}\xi_{0}}).
\end{align*}
\end{lemma}
\subsection{Restrictions of principal series representations}\label{SS:ResPS}
We calculate $\Psi(I(\sigma,\nu)|_P)$, where $I(\sigma,\nu)$ is a (not necessarily unitary) principal series
representation of $G$.
\begin{proposition}\label{P:res-induced}
Let $I(\sigma,\nu)=\operatorname{Ind}_{\bar{P}}^{G}(V_{\sigma}\otimes e^{\nu-\rho'}\otimes \mathbf{1}_{\bar{N}})$
be a principal series representation.
Then \[\Psi(I(\sigma,\nu)|_P)\cong V_{\sigma}|_{M'}.\]
\end{proposition}
\begin{remark}
Proposition~\ref{P:res-induced} is proved in \cite[Lemma 8.5]{CHM}
by using the Bruhat filtration.
However, we include the proof here because the argument of the Fourier transform
below will be used for concrete calculations in Appendix~\ref{S:trivial}.
\end{remark}
In order to prove the proposition, we consider the restriction of the functions
$f\in I(\sigma,\nu)$ to $N$ and take inverse Fourier transform.
For $f\in I(\sigma,\nu)$, let $f_{N}=f|_{N}$. We have the map \[I(\sigma,\nu)\to C^{\infty}(N,V_{\sigma}),
\quad f\mapsto f_{N}.\] The action of $P=MAN$ on $I(\sigma,\nu)$ is compatible with the following $P$-action
on $C^{\infty}(N,V_{\sigma})$: for $F\in C^{\infty}(N,V_{\sigma})$ and $n\in N$,
\begin{align}\label{Eq:P-action} \nonumber
&(n'\cdot F)(n)=F(n'^{-1}n)\quad (n'\in N); \\
&(a\cdot F)(n)=e^{(\nu-\rho')\log a}F(a^{-1}na)\quad (a\in A); \\ \nonumber
&(m_0\cdot F)(n)=\sigma(m_0)F(m_0^{-1}nm_0)\quad (m_0\in M).
\end{align}
Next, define the inverse Fourier transform of $f_N \in C^{\infty}(N,V_{\sigma})$.
If $f_N$ is $L^1$, then its inverse Fourier transform is defined as
a function on $\mathfrak{n}^*$ as
\begin{equation}\label{Eq:Ff}
\widehat{f_{N}}(\xi)=\mathcal{F}(f_{N})(\xi)=
(2\pi)^{-\frac{m}{2}}\int_{\mathbb{R}^{m}}e^{\mathbf{i}(\xi,x)}f(n_{x})\operatorname{d\!} x.
\end{equation}
In general, for $f\in I(\sigma,\nu)$,
the function $f_{N}$ on $N\cong \mathbb{R}^m$ is of at most polynomial growth at infinity.
Hence $f_{N}$ is a tempered distribution.
The map \eqref{Eq:Ff} extends for tempered distributions and
we obtain $\widehat{f_{N}}(\xi)$ as distributions on $\mathfrak{n}^*$.
The action of $G$ on the Fourier transformed picture is defined as
\begin{equation*}
g(\widehat{f_{N}})=\widehat{(gf)_{N}}
\end{equation*}
for $f\in I(\sigma,\nu)$ and $g\in G$.
Then the $P$-action is given as follows:
for $f\in I(\sigma,\nu)$ and $\xi\in\mathfrak{n}^{\ast}$,
\begin{align}\label{Eq:P-action2}\nonumber
&(n_{x}\cdot \widehat{f_{N}})(\xi)
=e^{\mathbf{i}(\xi,x)}\widehat{f_{N}}(\xi)\quad (x\in \mathbb{R}^{m});\\
&(a\cdot \widehat{f_{N}})(\xi)=e^{(\nu+\rho')\log a}
\widehat{f_{N}}(\operatorname{Ad}^{\ast}(a^{-1})\xi) \quad (a\in A);\\ \nonumber
&(m_0\cdot \widehat{f_{N}})(\xi)=\sigma(m_0)\widehat{f_{N}}(\operatorname{Ad}^{\ast}(m_0^{-1})\xi)
\quad (m_0\in M).
\end{align}
\begin{lemma}\label{L:FourierSmooth}
Let $f\in I(\sigma,\nu)$.
Then the restriction of $\widehat{f_{N}}$ to
$\mathfrak{n}^*-\{0\}$ is a $C^{\infty}$-function.
\end{lemma}
\begin{proof}
We first prove a similar claim for the group $G_2=\operatorname{O}(m+1,1)$.
Let $I(\sigma,\nu)$ be a principal series representation of $G_2$
for an irreducible representation $\sigma$ of $M_2$ and
take $f\in I(\sigma,\nu)$.
To prove that $\widehat{f_N}|_{\mathfrak{n}^*-\{0\}}$ is a smooth function,
we need to see the behavior of $f(n_x)$ as $x\to \infty$.
This is equivalent to the behavior of $f(s n_x)$ near $x=0$,
where $s=\operatorname{diag}\{I_m,-1,1\}$.
Put $F(x):=f(sn_x)$ for $x\in \mathbb{R}^m$.
By Lemma~\ref{L:barn-iwasawa},
\[
F(x)=f\bigl(n_{\frac{x}{|x|^2}}r_x e^{-(2\log |x|)H_0}\bar{n}_{\frac{x}{|x|^2}}\bigr)
= |x|^{2(\nu-\rho')(H_0)} \sigma(r_x) f\bigl(n_{\frac{x}{|x|^2}}\bigr).
\]
Since $F$ is smooth, $|f(n_x)|$ is bounded by $C|x|^{2(-\nu+\rho')(H_0)}$
as $x\to \infty$ for some constant $C>0$.
The $G$-action on $I(\sigma,\nu)$ differentiates to the $\mathfrak{g}$-action.
Take $X_y\in \mathfrak{n}$ for $y\in \mathbb{R}^m$ and consider the function
$X_y\cdot f\in I(\sigma,\nu)$.
We have
\[(X_y\cdot f)(sn_x) = \frac{d}{dt}\Bigl|_{t=0} f(n_{ty}^{-1} s n_x).\]
By Lemma~\ref{L:barn-iwasawa} again,
\[n_{ty}^{-1} s n_x
=n_{-ty+\frac{x}{|x|^2}} r_x e^{-(2\log |x|)H_0}\bar{n}_{\frac{x}{|x|^2}}.\]
Putting $z:=-ty+\frac{x}{|x|^2}$, we have
\begin{align*}
&\quad n_{-ty+\frac{x}{|x|^2}} r_x e^{-(2\log |x|)H_0}\bar{n}_{\frac{x}{|x|^2}}\\
&=s n_{\frac{z}{|z|^2}} r_z e^{-(2\log |z|)H_0}\bar{n}_{\frac{z}{|z|^2}}
r_x e^{-(2\log |x|)H_0}\bar{n}_{\frac{x}{|x|^2}}\\
&\in s n_{\frac{z}{|z|^2}} r_z r_x e^{-(2\log |z|+2\log |x|)H_0}\bar{N}.
\end{align*}
Hence
\[
(X_y\cdot f)(sn_x)
= \frac{d}{dt}\Bigl|_{t=0} |z|^{2(\nu-\rho')(H_0)} |x|^{2(\nu-\rho')(H_0)}
\sigma(r_z r_x) F\Bigl(\frac{z}{|z|^2}\Bigr).
\]
Note that $r_z=r_{-t|x|^2 y + x}$.
We calculate
\begin{align*}
&\frac{d}{dt}\Bigl|_{t=0} |z|^{2(\nu-\rho')(H_0)} |x|^{2(\nu-\rho')(H_0)}
= - 2(\nu-\rho')(H_0) (y,x), \\
&\frac{d}{dt}\Bigl|_{t=0} r_z r_x
= 2 (y^t x - x^t y), \\
&\frac{d}{dt}\Bigl|_{t=0} F\Bigl(\frac{z}{|z|^2}\Bigr)
= \bigl(2 (x,y)x - y|x|^2\bigr) (\nabla_{y} F)(x).
\end{align*}
Combining the above equations, we see that if $F(x)$ vanishes at $x=0$ of order $k$,
then $(X_y\cdot f)(sn_x)$ vanishes at $x=0$ of order $k+1$.
Hence $(X_y\cdot f)(n_x)$ is bounded by $C|x|^{2(-\nu+\rho')(H_0)-1}$ for some $C$.
By repeating this, $(X_y^l\cdot f)(n_x)$ is bounded by $C|x|^{2(-\nu+\rho')(H_0)-l}$ for $l>0$.
Then for any $k>0$, if $l$ is sufficiently large,
then $P(x)(X_y^l \cdot f)(n_x)$ is in $L^1$ for any polynomial $P(x)$ of degree $k$.
Therefore, its inverse Fourier transform is continuous.
By
\begin{align}
\label{Eq:Mult-Differ}
\mathcal{F}(x_{j}h)=-\mathbf{i}\partial_{\xi_j} \mathcal{F}(h),\quad
\mathcal{F}(\partial_{x_j}h) =-\mathbf{i}\xi_j \mathcal{F}(h),
\end{align}
we have
\[
\mathcal{F} (P(x)(X_y^l \cdot f)(n_x))
= P(-\mathbf{i}\partial_{\xi}) (-\mathbf{i}\xi,y)^l \cdot \widehat{f_N}(\xi).
\]
Hence $\widehat{f_N}(\xi)$ is $C^k$ on the set where $(\xi,y)\neq 0$.
Since $k$ and $y$ are arbitrary, we have proved that $\widehat{f_N}$ is $C^{\infty}$ on
$\mathfrak{n}^*-\{0\}$.
To prove the claim for $G=\operatorname{Spin}(m+1,1)$, fix $m_0\in M_2$ such that
$m_0s\in M_1=\operatorname{SO}(m)$ and use a lift of $m_0s$ in $M=\operatorname{Spin}(m)$ instead of $s$
in the above argument.
\end{proof}
Recall $\xi_0=(0,\dots,0,1)\in \mathfrak{n}^*$.
For $h\in C^{\infty}(\mathfrak{n}^{\ast}-\{0\},V_{\sigma})$,
define a function $h_{at,\nu}$ on $P$ by
\begin{align*}
h_{at,\nu}(p)
=(p^{-1}\cdot h)(\xi_0) \quad (p\in P).
\end{align*}
More concretely,
\begin{align}\label{Eq:anti-trivialization}
&\quad h_{at,\nu}(p)\\ \nonumber
&=e^{-\mathbf{i}(\xi_0,x)}
e^{(-\nu-\rho')\log a}\sigma(m_0)^{-1}h(\operatorname{Ad}^*(m_0a)\xi_{0}) \\ \nonumber
&=e^{-\mathbf{i}(\xi_0,x)}
|\operatorname{Ad}^*(m_0a)(\xi_{0})|^{\frac{2\nu(H_0)+m}{2}}
\sigma(m_0)^{-1}h(\operatorname{Ad}^*(m_0a)\xi_{0})
\end{align}
for $p=m_0an_{x}\in P$.
We call $h_{at,\nu}$ the anti-trivialization of $h$.
The term `anti-trivialization' comes from:
$h$ is a function on $\mathfrak{n}^{\ast}-\{0\}$,
i.e., a section of the trivial bundle on $P/M'N\cong\mathfrak{n}^{\ast}-\{0\}$, and $h_{at,\nu}$
is a section of the vector bundle
$P\times_{M'N}(\sigma|_{M'}\otimes e^{\mathbf{i} \xi_{0}})$
on $P/M'N$.
\begin{lemma}\label{L:anti-trivialization}
The image of the map
$C^{\infty}(\mathfrak{n}^{\ast}-\{0\},V_{\sigma})\ni h\mapsto h_{at,\nu}$
is equal to the representation space of the smoothly induced representation
$C^{\infty}\operatorname{Ind}_{M'N}^{P}(\sigma|_{M'}\otimes e^{\mathbf{i}\xi_{0}})$.
The map $h\mapsto h_{at,\nu}$ respects the actions of $P$
and $\mathscr{S}(\mathfrak{n}^*-\{0\})$.
\end{lemma}
\begin{proof}
For any $m'\in M'$, $n_{x}\in N$ and $p\in P$, we have
\begin{align*}
&\quad h_{at,\nu}(pm'n_{x})\\
&=(n_{x}^{-1}(m')^{-1}p^{-1}\cdot h)(\xi_0)\\
&=e^{-\mathbf{i}(\xi_0,x)}
\sigma(m')^{-1}(p^{-1}\cdot h)(\operatorname{Ad}^*(m')\xi_{0})\\
&=(\sigma\otimes e^{\mathbf{i} \xi_{0}})(m',n_{x})^{-1}h_{at,\nu}(p),
\end{align*}
where we used $\operatorname{Ad}^*(m')\xi_0=\xi_0$.
This shows that $h_{at,\nu}$ is a section of the vector bundle
$P\times_{M'N}(\sigma|_{M'}\otimes e^{\mathbf{i}\xi_{0}})$.
It directly follows from the definition of $h_{at,\nu}$ that
the map $h\mapsto h_{at,\nu}$ respects the $P$-actions.
The actions of $\mathscr{S}(\mathfrak{n}^*-\{0\})$ on
$C^{\infty}\operatorname{Ind}_{M'N}^{P}(\sigma|_{M'}\otimes e^{\mathbf{i}\xi_{0}})$
and $C^{\infty} (\mathfrak{n}^{\ast}-\{0\},V_{\sigma})$
are given by multiplications.
Hence the map $h\mapsto h_{at,\nu}$ is an $\mathscr{S}(\mathfrak{n}^*-\{0\})$-homomorphism.
The inverse map
\[C^{\infty}\operatorname{Ind}_{M'N}^{P}(\sigma|_{M'}\otimes e^{\mathbf{i}\xi_{0}})
\to C^{\infty} (\mathfrak{n}^{\ast}-\{0\},V_{\sigma}),
\quad h' \mapsto h'_{t,\nu}\]
is given as follows:
for any $\xi\in \mathfrak{n}^{\ast}-\{0\}$, choose
$m_0a\in MA$ such that $\xi=\operatorname{Ad}^*(m_0a)\xi_0$ and define
\[h'_{t,\nu}(\xi)=|\xi|^{-\frac{m+2\nu(H_0)}{2}}\sigma(m_0)h'(m_0a).\]
It is easy to see that the maps $h\mapsto h_{at,\nu}$
and $h'\mapsto h'_{t,\nu}$ are inverse to each other.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{P:res-induced}]
By Lemmas~\ref{L:FourierSmooth} and \ref{L:anti-trivialization},
we obtain a map
\[
\varphi\colon
I(\sigma,\nu) \to C^{\infty}\operatorname{Ind}_{M'N}^{P}(\sigma|_{M'}\otimes e^{\mathbf{i}\xi_{0}}),
\quad f\mapsto (\widehat{f_N})_{at,\nu}
\]
which respects the actions of $P$ and $\mathscr{S}(\mathfrak{n}^*-\{0\})$.
If $f\in \operatorname{Ker} \varphi$, then the support of $\widehat{f_N}$ is contained in $\{0\}$,
or equivalently, $f_N$ is a polynomial.
Hence $\mathscr{S}(\mathfrak{n}^*-\{0\})\cdot (\operatorname{Ker} \varphi)=0$
and $\Psi(I(\sigma,\nu)|_P)\cong \Psi(I(\sigma,\nu)|_P/\operatorname{Ker} \varphi)$.
Then by Lemma~\ref{L:Frobenius}, there exists an injective map
$\Psi(I(\sigma,\nu)|_P)\to \sigma|_{M'}$.
To show the surjectivity, take any vector $v\in \sigma|_{M'}$ and take a function
$h\in \mathscr{S}\operatorname{Ind}_{M'N}^{P}(\sigma|_{M'}\otimes e^{\mathbf{i}\xi_{0}})$
(or a function $h\in C^{\infty}\operatorname{Ind}_{M'N}^{P}(\sigma|_{M'}\otimes e^{\mathbf{i}\xi_{0}})$ which is
compactly supported modulo $M'N$) such that $h(e)=v$. Then there exists $f\in I(\sigma,\nu)$ such that
$(\widehat{f_N})_{at,\nu}=h$, which implies that the map $\Psi(I(\sigma,\nu)|_P)\to \sigma|_{M'}$ is surjective.
\end{proof}
Using Lemma \ref{L:unitary-J} and the determination of the restriction to $P$ of irreducible unitary representations
of $G$ with trivial infinitesimal character and some complementary series in Appendix \ref{S:trivial}, one can prove
Proposition~\ref{P:res-induced} by the translation principle, similarly to the argument in Theorem \ref{T:branching-ds}. However, that approach
is much more complicated than the above proof.
\smallskip
By Casselman's subrepresentation theorem, any moderate growth,
irreducible admissible smooth Fr\'echet representation $\pi$ of $G$ is
a subrepresentation of a principal series representation $I(\sigma,\nu)$.
Then by Lemma~\ref{L:exact} and Proposition~\ref{P:res-induced},
$\Psi(\pi|_P)\subset \Psi(I(\sigma,\nu)|_P)\cong\sigma|_{M'}$.
In particular, $\Psi(\pi|_P)$ is finite-dimensional.
Let $K(G)$ (resp.\ $K(M')$) be the Grothendieck group of
the category of Harish-Chandra modules
(resp.\ the category of finite-dimensional representations of $M'$).
By Lemma~\ref{L:exact},
$\mathcal{C}(G)\ni \pi \mapsto \Psi(\pi|_{P})$ induces a homomorphism
$\Psi \colon K(G)\to K(M')$.
\subsection{Classification of irreducible representations of $G$}\label{SS:irreducible}
In this subsection we recall the classification of irreducible
admissible representations $\pi\in \mathcal{C}(G)$.
Suppose first that $m$ is odd, and write $m=2n-1$, so that $G=\operatorname{Spin}(2n,1)$.
The infinitesimal character $\gamma$ of $\pi$ is conjugate to
\[(\mu+\rho_{M},\nu)=\Bigl(a_{1}+n-\frac{3}{2},\cdots,a_{n-1}+\frac{1}{2},a\Bigr),\]
where $\mu=(a_{1},\dots,a_{n-1})$ and $\nu=a\lambda_{0}$.
We have $a_{1}\geq \cdots\geq a_{n-1}\geq 0$;
and $a_{1},\dots,a_{n-1}$ are all integers or all half-integers.
The weight $\gamma$ is integral if and only if $a-(a_{j}+\frac{1}{2})\in\mathbb{Z}$.
The singularity of integral $\gamma$ has several possibilities:
\begin{enumerate}
\item If $a\neq a_{j}+n-j-\frac{1}{2}$ for $1\leq j\leq n-1$ and $a\neq 0$,
then $\gamma$ is regular.
Write $\Lambda_{0}$ for the set of integral regular dominant weights.
\item If $a=a_{j}+n-j-\frac{1}{2}$ for some $1\leq j \leq n - 1$, then up to conjugation
\[\gamma=\Bigl(a_{1}+n-\frac{3}{2},\dots,a_{j}+n-j-\frac{1}{2},a_{j}+n-j-\frac{1}{2},
\dots,a_{n-1}+\frac{1}{2}\Bigr),\]
and $\alpha_{j}=\epsilon_{j}-\epsilon_{j+1}$ is the only simple root orthogonal to $\gamma$.
Write $\Lambda_{j}$ for the set of such integral dominant weights.
\item If $a=0$, then
\[\gamma=\Bigl(a_{1}+n-\frac{3}{2},\dots,a_{n-1}+\frac{1}{2},0\Bigr).\]
$a_{j}$ ($1\leq j\leq n-1$) are half-integers, and $\alpha_{n}=\epsilon_{n}$
is the only simple root orthogonal to $\gamma$.
Write $\Lambda_{n}$ for the set of such integral dominant weights.
\end{enumerate}
To describe irreducible representations with the infinitesimal character $\gamma$,
we introduce some notation for each type of $\gamma$.
For a weight
\[\gamma=\Bigl(a_{1}+n-\frac{1}{2},\cdots,a_{n-1}+\frac{3}{2},a_{n}+\frac{1}{2}\Bigr)
\in\Lambda_{0}\]
with $a_1\geq \cdots \geq a_{n-1}\geq a_n\geq 0$, let
\[\mu_{j}=(a_{1}+1,\cdots,a_{j}+1,a_{j+2},\cdots,a_{n})
\textrm{ and }\nu_{j}=\Bigl(a_{j+1}+n-\frac{1}{2}-j\Bigr)\lambda_{0}\]
for $0\leq j\leq n-1$.
Put
\[I_{j}^{\pm}(\gamma)= I(\mu_j,\pm\nu_j)
=\operatorname{Ind}_{MA\bar{N}}^{G}(V_{M,\mu_{j}}\otimes e^{\pm\nu_j-\rho'}
\otimes\mathbf{1}_{\bar{N}}).\]
For each $j$, there are nonzero intertwining operators
\[J_{j}^{+}(\gamma)\colon I_{j}^{+}(\gamma)
\rightarrow I_{j}^{-}(\gamma)
\text{ and }
J_{j}^{-}(\gamma)\colon I_{j}^{-}(\gamma)\rightarrow I_{j}^{+}(\gamma).\]
Write $\pi_{j}(\gamma)$ (resp.\ $\pi'_{j}(\gamma)$)
for the image of $J_{j}^{-}(\gamma)$ (resp.\ $J_{j}^{+}(\gamma)$).
Put \[\lambda^+=
(a_{1}+1,\dots,a_{n-1}+1,a_{n}+1)\textrm{ and }
\lambda^-=(a_{1}+1,\dots,a_{n-1}+1,-(a_{n}+1)).\] Write $\pi^{+}(\gamma)$ for a
discrete series with the lowest $K$-type $V_{K,\lambda^+}$,
and write $\pi^{-}(\gamma)$ for a discrete series with the lowest $K$-type $V_{K,\lambda^-}$.
Let $1\leq j\leq n-1$. For a weight
\[\gamma=\Bigl(a_{1}+n-\frac{3}{2},\dots,a_{j}+n-j-\frac{1}{2},
a_{j}+n-j-\frac{1}{2},\dots,a_{n-1}+\frac{1}{2}\Bigr)\in\Lambda_{j},\]
write \[\mu=(a_{1},\dots,a_{n-1})\textrm{ and }
\nu=\Bigl( a_{j}+n-j-\frac{1}{2}\Bigr)\lambda_0.\]
Put
\[\pi(\gamma)=I(\mu,\nu).\]
For a weight
\[\gamma=\Bigl(a_{1}+n-\frac{3}{2},a_{2}+n-\frac{5}{2},\dots,a_{n-1}+\frac{1}{2},0\Bigr)
\in\Lambda_{n},\]
write
\[\mu=(a_{1},\dots,a_{n-1}) \text{ and } I(\gamma)=I(\mu,0).\]
By Schmid's identity (\cite[Theorem 12.34]{Knapp})
$I(\gamma)$ is the direct sum of two limits of discrete series
(\cite[Theorem 12.26]{Knapp}).
Write $\pi^{+}(\gamma)$ (resp.\ $\pi^{-}(\gamma)$)
for a limit of discrete series with
the lowest $K$-type $V_{K,\lambda^+}$ (resp.\ $V_{K,\lambda^-}$), where
\[\lambda^+=\Bigl(a_{1},a_{2},\dots,a_{n-1},\frac{1}{2}\Bigr)\textrm{ and }
\lambda^-=\Bigl(a_{1},a_{2},\dots,a_{n-1},-\frac{1}{2}\Bigr).\]
For a non-integral weight
\[\gamma=\Bigl(a_{1}+n-\frac{3}{2},\cdots,a_{n-1}+\frac{1}{2},a\Bigr),\]
write \[\mu=(a_{1},\dots,a_{n-1})
\textrm{ and } \nu=a\lambda_0.\]
Put
\[\pi(\gamma)=I(\mu,\nu).\]
Note that $I(\mu,\nu)\cong I(\mu,-\nu)$.
By using the above notation,
the Langlands classification of irreducible representations of $G$
is given as follows.
In Fact~\ref{F:gamma-classification},
an irreducible representation of $G$ means
an irreducible, moderate growth, smooth Fr\'echet representation.
\begin{fact}\label{F:gamma-classification}
\begin{enumerate}
\item
For $\gamma\in\Lambda_{0}$, any irreducible representation of $G$
with infinitesimal character $\gamma$ is
equivalent to one of
\[\{\pi_{0}(\gamma),\dots,\pi_{n-1}(\gamma),\pi^{+}(\gamma),\pi^{-}(\gamma)\}.\]
When $0\leq j\leq n-2$,
$\pi_{j+1}(\gamma)\cong \pi'_{j}(\gamma)$; $\pi_{0}(\gamma)$ is a finite-dimensional module;
and $\pi'_{n-1}(\gamma)\cong\pi^{+}(\gamma)\oplus \pi^{-}(\gamma)$.
\item
For $\gamma\in\Lambda_{j}$ ($1\leq j\leq n-1$),
any irreducible representation of $G$
with infinitesimal character $\gamma$ is
equivalent to $\pi(\gamma)$.
\item
For $\gamma\in\Lambda_{n}$, any irreducible representation of $G$
with infinitesimal character $\gamma$ is
equivalent to $\pi^{+}(\gamma)$ or $\pi^{-}(\gamma)$.
\item
For a non-integral weight $\gamma$,
any irreducible representation of $G$
with infinitesimal character $\gamma$ is
equivalent to $\pi(\gamma)$.
\end{enumerate}
\end{fact}
Among these representations, unitarizable ones are given as follows (\cite{Hirai}).
\begin{fact}\label{F:unitarizable}
\begin{enumerate}
\item
For $\gamma\in\Lambda_{0}$, $\pi^{+}(\gamma)$ and $\pi^{-}(\gamma)$ are unitarizable
(discrete series).
$\pi_j(\gamma)$ is unitarizable if and only if $a_i=0$ for any $j<i\leq n$.
\item
For $\gamma\in\Lambda_{j}$ ($1\leq j\leq n-1$),
$\pi(\gamma)$ is unitarizable if and only if $a_i=0$ for any $j\leq i\leq n-1$.
\item
For $\gamma\in\Lambda_{n}$,
$\pi^+(\gamma)$ and $\pi^-(\gamma)$ are unitarizable
(limit of discrete series).
\end{enumerate}
\end{fact}
\begin{fact}
For a non-integral weight $\gamma$,
$\pi(\gamma)$ is unitarizable if and only if
at least one of the following two conditions holds.
\begin{enumerate}
\item
$a\in \mathbf{i}\mathbb{R}$\ (unitary principal series).
\item
$a\in \mathbb{R}$, $|a|< n-\frac{1}{2}$, $a_i\in \mathbb{Z}$ for $1\leq i\leq n-1$
and $a_j=0$ for any $n-|a|-\frac{1}{2}<j\leq n-1$ (complementary series).
\end{enumerate}
\end{fact}
\begin{remark}\label{R:complAq}
The unitarizable representations $\pi(\gamma)$ for $\gamma\in \Lambda_{j}$
in Fact~\ref{F:unitarizable} (2) can be regarded as a complementary series
and also as $A_{\mathfrak{q}}(\lambda)$ as we see below.
\end{remark}
The unitarizable $(\mathfrak{g},K)$-modules with integral infinitesimal character
are isomorphic to Vogan-Zuckerman's derived functor modules $A_{\mathfrak{q}}(\lambda)$.
General references for $A_{\mathfrak{q}}(\lambda)$
are e.g.\ \cite{Knapp-Vogan}, \cite{Vogan}.
For $0\leq j\leq n-1$
let $\mathfrak{q}_j$ be a $\theta$-stable parabolic subalgebra of $\mathfrak{g}_{\mathbb{C}}$
such that the real form of the Levi component of $\mathfrak{q}_j$
is isomorphic to $\mathfrak{u}(1)^{j}+\mathfrak{so}(2(n-j),1)$.
For the normalization of parameters, we follow the book of Knapp-Vogan \cite{Knapp-Vogan}.
In particular, $A_{\mathfrak{q}}(\lambda)$ has infinitesimal character $\lambda+\rho$.
\begin{remark}\label{R:Aqsingular}
The parameter $\lambda=(a_1,\dots,a_{j},0,\dots,0)$ for $\mathfrak{q}_j$
is in the good range if and only if
$a_1\geq a_2\geq \cdots \geq a_{j} \geq 0$.
It is in the weakly fair range if and only if
$a_i+1\geq a_{i+1}$ for $1\leq i\leq j-1$ and $a_j\geq -n+j$.
When $\lambda$ is in the weakly fair range, $A_{\mathfrak{q}}(\lambda)$ is nonzero
if and only if $a_1\geq \cdots \geq a_{j}$ and $a_{j-1} \geq -1$.
\end{remark}
Let $\gamma\in \Lambda_0$ and $0\leq j\leq n-1$ such that $\pi_j(\gamma)$ is unitarizable.
Then
\begin{align*}
\pi_j(\gamma)_K \cong A_{\mathfrak{q}_{j}}(\lambda),
\end{align*}
where $\lambda=(a_1,\dots,a_{j},0,\dots,0)$.
Let $\gamma\in \Lambda_j\ (1\leq j\leq n-1)$ and $1\leq i\leq j$.
Assume that $a_{i}=\cdots=a_n=0$.
Then
\begin{align*}
\pi(\gamma)_K \cong A_{\mathfrak{q}_{i}}(\lambda),
\end{align*}
where $\lambda=(a_1-1,\dots,a_{i-1}-1,i-j-1,0,\dots,0)$.
Suppose next that $m$ is even, and write $m=2n-2$, so that $G=\operatorname{Spin}(2n-1,1)$.
This case is similar to and simpler than the previous case.
The infinitesimal character $\gamma$ of $\pi\in\mathcal{C}(G)$ is conjugate to
\[(\mu+\rho_{M},\nu)=(a_{1}+n-2,a_{2}+n-3,\cdots,a_{n-1},a),\]
where $\mu=(a_{1},\dots,a_{n-1})$ and $\nu=a\lambda_{0}$.
We have $a_{1}\geq \cdots\geq a_{n-1}\geq 0$;
and $a_{1},\dots,a_{n-1}$ are all integers or all half-integers.
The weight $\gamma$ is integral if and only if $a-a_{j}\in\mathbb{Z}$.
The singularity of integral $\gamma$ has the following possibilities:
\begin{enumerate}
\item If $a\neq a_{j}+n-j-1$ for $1\leq j\leq n-1$, then $\gamma$ is regular.
Write $\Lambda_{0}$ for the set of integral regular dominant weights.
\item If $a=a_{j}+n-j-1$ for some $1\leq j \leq n - 1$, then up to conjugation
\[\gamma=(a_{1}+n-2,\dots,a_{j}+n-j-1,a_{j}+n-j-1,\dots,a_{n-1}).\]
Write $\Lambda_{j}$ for the set of such integral dominant weights.
\end{enumerate}
We introduce some notation for each type of $\gamma$.
For a weight
\[\gamma=(a_{1}+n-1,a_{2}+n-2,\cdots,a_{n-1}+1,a_{n})
\in\Lambda_{0}\]
with $a_1\geq \cdots \geq a_{n-1}\geq a_n\geq 0$, let
\[\mu_{j}=(a_{1}+1,\cdots,a_{j}+1,a_{j+2},\cdots,a_{n})
\textrm{ and }\nu_{j}=(a_{j+1}+n-j-1)\lambda_{0}\]
for $0\leq j\leq n-1$.
Put
\[I_{j}^{\pm}(\gamma)=I(\mu_{j},\pm \nu_j).\]
For each $j$, there are nonzero intertwining operators
\[J_{j}^{+}(\gamma)\colon I_{j}^{+}(\gamma)
\rightarrow I_{j}^{-}(\gamma)
\text{ and }
J_{j}^{-}(\gamma)\colon I_{j}^{-}(\gamma)\rightarrow I_{j}^{+}(\gamma).\]
Write $\pi_{j}(\gamma)$ (resp.\ $\pi'_{j}(\gamma)$)
for the image of $J_{j}^{-}(\gamma)$ (resp.\ $J_{j}^{+}(\gamma)$).
Let $1\leq j\leq n-1$. For a weight
\[\gamma=(a_{1}+n-2,\dots,a_{j}+n-j-1,
a_{j}+n-j-1,\dots,a_{n-1})\in\Lambda_{j},\]
write
\[\mu=(a_{1},\dots,a_{n-1})\textrm{ and }
\nu=(a_{j}+n-j-1)\lambda_0.\]
Put
\[\pi(\gamma)=I(\mu,\nu).\]
For a non-integral weight
\[\gamma=(a_{1}+n-2,a_{2}+n-3,\cdots,a_{n-1},a),\]
write \[\mu=(a_{1},\dots,a_{n-1}) \textrm{ and } \nu=a\lambda_0.\]
Put
\[\pi(\gamma)=I(\mu,\nu).\]
Note that $I(\mu,\nu) \cong I(\mu, -\nu)$.
Using this notation, the Langlands classification is given as follows.
\begin{fact}\label{F:gamma-classification2}
\begin{enumerate}
\item
For $\gamma\in\Lambda_{0}$,
any irreducible representation of $G$
with infinitesimal character $\gamma$ is
equivalent to one of
\[\{\pi_{0}(\gamma),\dots,\pi_{n-1}(\gamma)\}.\]
When $0\leq j\leq n-2$,
$\pi_{j+1}(\gamma)\cong \pi'_{j}(\gamma)$;
$\pi_{n-1}(\gamma)\cong \pi'_{n-1}(\gamma)$; and
$\pi_{0}(\gamma)$ is a finite-dimensional module.
If $a_n=0$, then $\pi_{n-1}(\gamma)$ is tempered.
\item
For $\gamma\in\Lambda_{j}$ ($1\leq j\leq n-1$),
any irreducible representation of $G$
with infinitesimal character $\gamma$ is
equivalent to $\pi(\gamma)$.
\item
For a non-integral weight $\gamma$,
any irreducible representation of $G$
with infinitesimal character $\gamma$ is
equivalent to $\pi(\gamma)$.
\end{enumerate}
\end{fact}
Among these representations, unitarizable ones are given as follows (\cite{Hirai}).
\begin{fact}\label{F:unitarizable2}
\begin{enumerate}
\item
For $\gamma\in\Lambda_{0}$,
$\pi_j(\gamma)$ is unitarizable if and only if $a_i=0$ for any $j<i\leq n$.
\item
For $\gamma\in\Lambda_{j}$ ($1\leq j\leq n-1$),
$\pi(\gamma)$ is unitarizable if and only if $a_i=0$ for any $j\leq i\leq n-1$.
\end{enumerate}
\end{fact}
\begin{fact}
For a non-integral weight $\gamma$,
$\pi(\gamma)$ is unitarizable if and only if
at least one of the following two conditions holds.
\begin{enumerate}
\item
$a\in \mathbf{i}\mathbb{R}$\ (unitary principal series).
\item
$a\in \mathbb{R}$, $|a|<n-1$, $a_i\in \mathbb{Z}$ for $1\leq i\leq n-1$,
and $a_j=0$ for all $n-|a|-1<j\leq n-1$ (complementary series).
\end{enumerate}
\end{fact}
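As an illustration of these criteria (a specialization only, nothing beyond the statements above), take $n=2$, so that $G=\operatorname{Spin}(3,1)$ and $\mu=(a_{1})$ with $a_{1}\in\mathbb{Z}$. For a non-integral weight $\gamma=(a_{1},a)$,
\[
\pi(\gamma)\ \text{is unitarizable}
\iff
a\in\mathbf{i}\mathbb{R}
\ \text{ or }\
\bigl(a\in\mathbb{R},\ |a|<1,\ a_{1}=0\bigr),
\]
which is essentially the classical description of the unitary principal series and the complementary series of $\operatorname{SL}(2,\mathbb{C})\cong\operatorname{Spin}(3,1)$.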
For $0\leq j\leq n-1$
let $\mathfrak{q}_j$ be a $\theta$-stable parabolic subalgebra of $\mathfrak{g}_{\mathbb{C}}$
such that the real form of the Levi component of $\mathfrak{q}_j$
is isomorphic to $\mathfrak{u}(1)^{j}+\mathfrak{so}(2(n-j)-1,1)$.
Remarks~\ref{R:complAq} and \ref{R:Aqsingular} remain valid verbatim.
Let $\gamma\in \Lambda_0$ and $0\leq j\leq n-1$ such that $\pi_j(\gamma)$ is unitarizable.
Then
\begin{align*}
\pi_j(\gamma)_K \cong A_{\mathfrak{q}_{j}}(\langlembda),
\end{align*}
where $\langlembda=(a_1,\operatorname{d\!}ots,a_{j},0,\operatorname{d\!}ots,0)$.
Let $\gamma\in \Lambda_j\ (1\leq j\leq n-1)$ and $1\leq i\leq j$.
Assume that $a_{i}=\cdots=a_{n-1}=0$.
Then
\begin{align*}
\pi(\gamma)_K \cong A_{\mathfrak{q}_{i}}(\langlembda),
\end{align*}
where $\langlembda=(a_1-1,\operatorname{d\!}ots,a_{i-1}-1,i-j-1,0,\operatorname{d\!}ots,0)$.
\subsection{Branching laws for the restriction from $\operatorname{Sp}in(2n,1)$ to $P$}\langlebel{SS:branchinglaw}
We deduce branching laws for the restriction to $P$
of all irreducible unitary representations of $G$.
In this subsection suppose $G=\operatorname{Sp}in(2n,1)$.
A similar result for the group $\operatorname{Sp}in(2n-1,1)$ will be given in the next subsection.
By Fact~\ref{F:unitarizable},
many of the irreducible unitary representations of $G$
are completions of principal series representations.
This is the case if the infinitesimal character $\gamma$ lies in
$\Lambda_j\ (1\leq j\leq n-1)$ or $\gamma$ is not integral.
\begin{theorem}\langlebel{T:branching-ps}
Suppose that an irreducible unitary representation $\pi$ of $\operatorname{Sp}in(2n,1)$ is
isomorphic to the completion of a principal series representation
$I(\mu,\nu)$, where $\mu=(a_1,\operatorname{d\!}ots,a_{n-1})$ and $a_1\geq a_2\geq \operatorname{d\!}ots\geq a_{n-1}\geq 0$.
Then
\[ \pi|_{P}\cong \bigoplus_{\tau}
\operatorname{Ind}_{M'N}^P(V_{M',\tau}\otimes e^{\mathbf{i}\xi_0}), \]
where $\tau=(b_1,\operatorname{d\!}ots,b_{n-1})$ runs over tuples such that
\[a_{1}\geq b_1 \geq a_{2}\geq b_2 \geq
\cdots \geq a_{n-1} \geq |b_{n-1}|\] and $b_{i}-a_{1}\in\mathbb{Z}$.
\end{theorem}
\begin{proof}
By Lemma~\ref{L:unitary-J} and Proposition~\ref{P:res-induced},
the theorem follows from the well-known branching law from $M=\operatorname{Sp}in(2n-1)$ to $M'=\operatorname{Sp}in(2n-2)$
(see e.g.\ \cite[Theorem 8.1.3]{Goodman-Wallach}).
\end{proof}
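For illustration (this is merely the case $n=2$ of the theorem, under the same hypotheses), take $G=\operatorname{Spin}(4,1)$, so $M=\operatorname{Spin}(3)$, $M'=\operatorname{Spin}(2)$ and $\mu=(a_{1})$; then
\[
\pi|_{P}\cong \bigoplus_{\substack{b_{1}\,:\ a_{1}\geq |b_{1}|,\\ b_{1}-a_{1}\in\mathbb{Z}}}
\operatorname{Ind}_{M'N}^{P}\bigl(V_{M',(b_{1})}\otimes e^{\mathbf{i}\xi_{0}}\bigr),
\]
so each character of $M'=\operatorname{Spin}(2)$ in the range $|b_{1}|\leq a_{1}$ occurs exactly once.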
Next, let $\gamma\in \Lambda_0$, namely, $\gamma$ is a regular integral weight.
Recall that in \S\ref{SS:irreducible}
we defined $\pi_j(\gamma)$ to be the image of the intertwining operator
$J_j^{-}(\gamma)\colon I_j^-(\gamma) \to I_j^+(\gamma)$.
We give branching laws for $\pi_j(\gamma)$ for $1\leq j\leq n-1$
when it is unitarizable.
\begin{theorem}\langlebel{T:branching-regular}
Let $1\leq j\leq n-1$ and let
\[\gamma=\operatorname{B}igl(a_1+n-\mathfrak{a}c{1}{2},\operatorname{d\!}ots,a_{j}+n-j+\mathfrak{a}c{1}{2},
n-j-\mathfrak{a}c{1}{2},\operatorname{d\!}ots,\mathfrak{a}c{1}{2} \operatorname{B}igr),\]
where $a_1\geq \cdots \geq a_{j}\geq 0$ are integers.
Then
\[ \bar{\pi}_j(\gamma)|_{P}\cong
\bigoplus_{\tau}
\operatorname{Ind}_{M'N}^P(V_{M',\tau}\otimes e^{\mathbf{i}\xi_0}), \]
where $\tau=(b_1,\operatorname{d\!}ots,b_{j-1},0,\operatorname{d\!}ots,0)$ runs over tuples of integers such that
\[a_{1}+1 \geq b_1 \geq a_{2}+1 \geq b_2 \geq
\cdots \geq a_{j-1}+1 \geq b_{j-1}\geq a_{j}+1.\]
\end{theorem}
\begin{proof}
Let $0\leq i < j$.
It is known that $[\pi_i(\gamma)]+[\pi_{i+1}(\gamma)]=[I_i^+(\gamma)]$
in the Grothendieck group $K(G)$.
Hence by Proposition~\ref{P:res-induced}
\[
\operatorname{P}si([\pi_i(\gamma)])+\operatorname{P}si([\pi_{i+1}(\gamma)]) = [V_{M,\mu_i}|_{M'}],
\]
where $\mu_i=(a_1+1,\operatorname{d\!}ots,a_{i}+1,a_{i+2},\operatorname{d\!}ots,a_{j},0,\operatorname{d\!}ots,0)$.
Since $\pi_0(\gamma)$ is finite-dimensional, $\operatorname{P}si([\pi_0(\gamma)])=0$.
Then by induction on $i$, we have
\[\operatorname{P}si([\pi_i(\gamma)])= \bigoplus_{\tau} [V_{M',\tau}],
\]
where $\tau=(b_1,\operatorname{d\!}ots,b_{j-1},0,\operatorname{d\!}ots,0)$ runs over tuples of integers such that
\begin{align*}
&a_{1}+1 \geq b_1 \geq a_{2}+1 \geq \cdots \geq b_{i-1} \geq a_{i}+1, \text{ and } \\
&a_{i+1}\geq b_{i} \geq a_{i+2} \geq b_{i+1} \geq \operatorname{d\!}ots\geq a_{j}\geq |b_{j-1}|.
\end{align*}
Hence the theorem follows from Lemma~\ref{L:unitary-J}.
\end{proof}
We have the following formula for $A_{\mathfrak{q}}(\langlembda)$
by Theorems~\ref{T:branching-ps} and \ref{T:branching-regular}.
For $0\leq j\leq n-1$
let $\mathfrak{q}_j$ be a $\theta$-stable parabolic subalgebra of $\mathfrak{g}_{\mathbb{C}}$
such that the real form of the Levi component of $\mathfrak{q}_j$
is isomorphic to $\mathfrak{u}(1)^{j}+\mathfrak{so}(2(n-j),1)$.
For a weakly fair parameter $\langlembda=(a_1,\operatorname{d\!}ots,a_j,0,\operatorname{d\!}ots,0)$, we have
\[ \overline{A_{\mathfrak{q}_{j}}(\langlembda)}|_{P}\cong
\bigoplus_{\tau}
\operatorname{Ind}_{M'N}^P(V_{M',\tau}\otimes e^{\mathbf{i}\xi_0}), \]
where $\tau=(b_1,\operatorname{d\!}ots,b_{j-1},0,\operatorname{d\!}ots,0)$ runs over tuples of integers such that
\[a_{1}+1 \geq b_1 \geq a_{2}+1 \geq b_2 \geq
\cdots \geq a_{j-1}+1 \geq b_{j-1}\geq \operatorname{max}\{a_{j}+1,0\}.\]
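For instance (simply reading off the displayed formula), for $j=1$ the tuple $\tau$ has no free entries, so the sum reduces to the single term with $\tau=(0,\dots,0)$:
\[
\overline{A_{\mathfrak{q}_{1}}(\lambda)}\big|_{P}\cong
\operatorname{Ind}_{M'N}^{P}\bigl(V_{M',(0,\dots,0)}\otimes e^{\mathbf{i}\xi_{0}}\bigr).
\]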
The remaining representations are (limits of) discrete series representations.
The following formula is proved by the translation principle
together with the case $\gamma=\rho$.
The case $\gamma=\rho$ requires an explicit calculation
for the lowest $K$-type and will be proved later in Proposition~\ref{P:P-restriction2}.
\begin{theorem}\langlebel{T:branching-ds}
Let
\[\gamma=\operatorname{B}igl(a_1+n-\mathfrak{a}c{1}{2}, a_2+n-\mathfrak{a}c{3}{2},
\operatorname{d\!}ots, a_{n}+\mathfrak{a}c{1}{2} \operatorname{B}igr),\]
where $a_1\geq \cdots \geq a_{n}\geq -\mathfrak{a}c{1}{2}$ are all integers
or all half-integers.
Then
\[ \bar{\pi}^{\pm}(\gamma)|_{P}\cong
\bigoplus_{\tau}
\operatorname{Ind}_{M'N}^P(V_{M',\tau}\otimes e^{\mathbf{i}\xi_0}), \]
where $\tau=(b_1,\operatorname{d\!}ots,b_{n-1})$ runs over tuples such that
\[a_{1}+1 \geq b_1 \geq a_{2}+1 \geq \cdots \geq b_{n-2}
\geq a_{n-1}+1 \geq {\mp} b_{n-1}\geq a_{n}+1\]
and $b_{i}-a_{1}\in\mathbb{Z}$.
\end{theorem}
\begin{proof}
By Lemma \ref{L:unitary-J},
it suffices to calculate $\operatorname{P}si([\pi^{\pm}(\gamma)])$ for
$\gamma\in\Lambda_{0}\sqcup \Lambda_{n}$.
The same argument as in the proof of Theorem~\ref{T:branching-regular} yields
\begin{align}\langlebel{Eq:ds-pair}
\operatorname{P}si([\pi^+(\gamma)])+\operatorname{P}si([\pi^-(\gamma)])
=\operatorname{P}si([\pi'_{n-1}(\gamma)])
=\bigoplus_{\tau} [V_{M',\tau}],
\end{align}
where $\tau=(b_1,\operatorname{d\!}ots,b_{n-1})$ runs over tuples of integers such that
\begin{align*}
&a_{1}+1 \geq b_1 \geq a_{2}+1 \geq \cdots \geq b_{n-2} \geq a_{n-1}+1\geq |b_{n-1}|\geq a_{n}+1.
\end{align*}
Therefore, it suffices to show that the two modules $\operatorname{P}si([\pi^+(\gamma)])$ and $\operatorname{P}si([\pi^-(\gamma)])$
are separated by the sign of $b_{n-1}$.
We first prove the statement for $\gamma\in\Lambda_0$
by induction on $|\gamma|$.
When $\gamma=\rho$, the conclusion follows from Proposition~\ref{P:P-restriction2}.
Let $\gamma\in\Lambda_{0}-\{\rho\}$
and assume that the conclusion holds for weights in $\Lambda_{0}$
having norm strictly smaller than $|\gamma|$.
Write $\omega_{k}$ for the $k$-th fundamental weight, namely,
\[
\omega_{k}=(\underbrace{1,\operatorname{d\!}ots,1}_{k},\underbrace{0,\operatorname{d\!}ots,0}_{n-k})
\text{ for $1\leq k\leq n-1$ and }
\omega_{n}=
\operatorname{B}igl(\underbrace{\mathfrak{a}c{1}{2},\operatorname{d\!}ots,\mathfrak{a}c{1}{2}}_{n}\operatorname{B}igr).
\]
Then, one finds $\gamma'\in\Lambda_{0}$ and a fundamental weight $\omega_{k}$
such that $\gamma=\gamma'+\omega_{k}$.
By the Zuckerman translation principle (\cite{Vogan}, \cite{Zuckerman}),
$\pi^{\pm}(\gamma)$ occurs as a composition factor of
$\pi^{\pm}(\gamma')\otimes F_{\omega_{k}}$.
Hence by Lemma~\ref{L:tensor},
if an irreducible $M'$-representation $V_{M',\mu}$
with $\mu=(b_{1},\operatorname{d\!}ots,b_{n-1})$ occurs in $\operatorname{P}si([\pi^{+}(\gamma)])$,
then it also occurs in $\operatorname{P}si([\pi^{+}(\gamma')])\otimes [F_{\omega_{k}}|_{M'}]$.
For any irreducible $V_{M',\mu'}$ in $\operatorname{P}si([\pi^{+}(\gamma')])$
with $\mu'=(b'_{1},\operatorname{d\!}ots,b'_{n-1})$, one has $b'_{n-1}\leq -1$
by the induction hypothesis, and
for any weight $\mu''$ appearing in $F_{\omega_{k}}|_{M'}$
with $\mu''=(b''_{1},\operatorname{d\!}ots,b''_{n-1})$,
we have $b''_{n-1} \in\{1,-1,\mathfrak{a}c{1}{2},-\mathfrak{a}c{1}{2}\}$.
Hence $b_{n-1}\leq 0$.
Therefore, we get $b_{n-1}\leq -1$ from \eqref{Eq:ds-pair}.
The statement for $\pi^{-}(\gamma)$ is similarly proved.
Next, suppose that $\gamma\in\Lambda_{n}$.
Let $\gamma'=\gamma+\omega_{n}\in \Lambda_{0}$.
Then again by the translation principle,
$\pi^{\pm}(\gamma)$ occurs as a composition factor of
$\pi^{\pm}(\gamma')\otimes F_{\omega_{n}}$.
Then by using the result for $\operatorname{P}si([\pi^{\pm}(\gamma')])$ proved above,
the statement for $\operatorname{P}si([\pi^{\pm}(\gamma)])$ is similarly obtained.
\end{proof}
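As an illustration of Theorem~\ref{T:branching-ds} (a direct specialization, not an additional claim), take $n=2$, so $G=\operatorname{Spin}(4,1)$ and $M'=\operatorname{Spin}(2)$, and let $\gamma=(a_{1}+\frac{3}{2},a_{2}+\frac{1}{2})$. Then
\[
\bar{\pi}^{+}(\gamma)\big|_{P}\cong
\bigoplus_{-(a_{1}+1)\leq b_{1}\leq -(a_{2}+1)}
\operatorname{Ind}_{M'N}^{P}\bigl(V_{M',(b_{1})}\otimes e^{\mathbf{i}\xi_{0}}\bigr),
\quad
\bar{\pi}^{-}(\gamma)\big|_{P}\cong
\bigoplus_{a_{2}+1\leq b_{1}\leq a_{1}+1}
\operatorname{Ind}_{M'N}^{P}\bigl(V_{M',(b_{1})}\otimes e^{\mathbf{i}\xi_{0}}\bigr),
\]
where $b_{1}$ runs over values with $b_{1}-a_{1}\in\mathbb{Z}$; in particular each restriction is a finite direct sum.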
\subsection{Branching laws for the restriction from $\operatorname{Sp}in(2n-1,1)$ to $P$}\langlebel{SS:branchinglaw2}
Let $G=\operatorname{Sp}in(2n-1,1)$.
Branching laws for the restriction to $P$ are similar to the previous case where $G=\operatorname{Sp}in(2n,1)$.
\begin{theorem}\langlebel{T:branching-ps2}
Suppose that an irreducible unitary representation $\pi$ of $\operatorname{Sp}in(2n-1,1)$ is
isomorphic to the completion of a principal series representation
$I(\mu,\nu)$, where $\mu=(a_1,\operatorname{d\!}ots,a_{n-1})$ and $a_1\geq \operatorname{d\!}ots\geq a_{n-2}\geq |a_{n-1}|$.
Then
\[ \pi|_{P}\cong \bigoplus_{\tau}
\operatorname{Ind}_{M'N}^P(V_{M',\tau}\otimes e^{\mathbf{i}\xi_0}), \]
where $\tau=(b_1,\operatorname{d\!}ots,b_{n-2})$ runs over tuples such that
\[a_{1}\geq b_1 \geq a_{2}\geq b_2 \geq
\cdots \geq a_{n-2} \geq b_{n-2}\geq |a_{n-1}|\] and $b_{i}-a_{1}\in\mathbb{Z}$.
\end{theorem}
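For example (again a direct specialization of the theorem), for $n=3$, so $G=\operatorname{Spin}(5,1)$, $M=\operatorname{Spin}(4)$ and $M'=\operatorname{Spin}(3)$, the statement reads
\[
\pi|_{P}\cong\bigoplus_{\substack{b_{1}\,:\ a_{1}\geq b_{1}\geq |a_{2}|,\\ b_{1}-a_{1}\in\mathbb{Z}}}
\operatorname{Ind}_{M'N}^{P}\bigl(V_{M',(b_{1})}\otimes e^{\mathbf{i}\xi_{0}}\bigr).
\]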
\begin{theorem}\langlebel{T:branching-regular2}
Let $1\leq j\leq n-1$ and let
\[\gamma=(a_1+n-1,\operatorname{d\!}ots,a_{j}+n-j, n-j-1,\operatorname{d\!}ots, 0),\]
where $a_1\geq \cdots \geq a_{j}\geq 0$ are integers.
Then
\[ \bar{\pi}_j(\gamma)|_{P}\cong
\bigoplus_{\tau}
\operatorname{Ind}_{M'N}^P(V_{M',\tau}\otimes e^{\mathbf{i}\xi_0}), \]
where $\tau=(b_1,\operatorname{d\!}ots,b_{j-1},0,\operatorname{d\!}ots,0)$ runs over tuples of integers such that
\[a_{1}+1 \geq b_1 \geq a_{2}+1 \geq b_2 \geq
\cdots \geq a_{j-1}+1 \geq b_{j-1}\geq a_{j}+1.\]
\end{theorem}
For $0\leq j\leq n-1$
let $\mathfrak{q}_j$ be a $\theta$-stable parabolic subalgebra of $\mathfrak{g}_{\mathbb{C}}$
such that the real form of the Levi component of $\mathfrak{q}_j$
is isomorphic to $\mathfrak{u}(1)^{j}+\mathfrak{so}(2(n-j)-1,1)$.
For a weakly fair parameter $\langlembda=(a_1,\operatorname{d\!}ots,a_j,0,\operatorname{d\!}ots,0)$, we have
\[ \overline{A_{\mathfrak{q}_{j}}(\langlembda)}|_{P}\cong
\bigoplus_{\tau}
\operatorname{Ind}_{M'N}^P(V_{M',\tau}\otimes e^{\mathbf{i}\xi_0}), \]
where $\tau=(b_1,\operatorname{d\!}ots,b_{j-1},0,\operatorname{d\!}ots,0)$ runs over tuples of integers such that
\[a_{1}+1 \geq b_1 \geq a_{2}+1 \geq b_2 \geq
\cdots \geq a_{j-1}+1 \geq b_{j-1}\geq \operatorname{max}\{a_{j}+1,0\}.\]
\section{Moment map for elliptic coadjoint orbits}\langlebel{S:elliptic}
In this section and the next section we calculate the projection of semisimple coadjoint orbits
for $G$ by the natural map $\mathfrak{g}^*\to \mathfrak{p}^*$.
We treat elliptic orbits for $G=\operatorname{Sp}in(2n,1)$ in this section.
Non-elliptic orbits and the case $G=\operatorname{Sp}in(2n-1,1)$ will be treated in the next section.
Throughout this section we assume that $m$ is odd, so that $G=\operatorname{Sp}in(2n,1)$.
In Definition~\ref{D:depth},
we divided the coadjoint orbits $\mathcal{O}$ for $P$ into two types,
depth zero and depth one, according to whether
$\mathcal{O}\subset \mathfrak{l}^*$ or not.
We then saw at the end of \S\ref{SS:P-orbit2} that
the depth one coadjoint orbits for $P$ are parametrized by
the singular values of the matrix $Y$ there
and by the sign of the Pfaffian of $Z_{Y,\beta}$.
\subsection{$P$-orbits in $\mathcal{O}_{f}$}\langlebel{SS:doubleCoset}
For $a_1\geq a_2\geq\cdots\geq a_{n-1}\geq|a_{n}|\geq 0$, write
$\vec{a}=(a_{1},a_{2},\operatorname{d\!}ots,a_{n}).$
As in \eqref{Eq:ta}, put
\begin{equation*}
t_{\vec{a}}=
\begin{pmatrix}
0& a_1&&&&\\
-a_{1}&0&&&&\\
&&\operatorname{d\!}dots&&& \\
&&&0& a_{n}&\\
&&& -a_{n}&0&\\
&&&&&0\\
\end{pmatrix}.
\end{equation*}
By the isomorphism
$\iota\colon \mathfrak{g}\xrightarrow{\sim} \mathfrak{g}^*$ in
\eqref{Eq:identification1},
we put \[f=f_{\vec{a}}=\iota(t_{\vec{a}}),\] which is an elliptic
element in $\mathfrak{g}^{\ast}$.
Moreover, each elliptic coadjoint orbit in $\mathfrak{g}^{\ast}$ contains
$f_{\vec{a}}$ for a unique vector $\vec{a}$.
We first consider regular orbits, i.e., assume that
\[a_1>a_2>\cdots >a_{n-1}>|a_{n}|>0.\]
Write $G^{f}$ for the stabilizer of $f$ in $G$.
Then, $G^{f} = T$, where $T$ is the pre-image in $G$
of the maximal torus
\begin{equation*}
T_1=\operatorname{B}iggl\{
\begin{pmatrix}
y_{1}&z_1&&&&\\
-z_{1}&y_{1}&&&&\\
&&\operatorname{d\!}dots&&&\\
&&&y_{n}&z_{n}&\\
&&&-z_{n}&y_{n}&\\
&&&&&1\\
\end{pmatrix}:
y_{1}^{2}+z_{1}^{2}=\cdots=y_{n}^{2}+z_{n}^{2}=1
\operatorname{B}iggr\}
\end{equation*}
of $G_{1}$.
Put $\mathcal{O}_{f}=G\cdot f$ and then $\mathcal{O}_{f}\cong G/G^{f}$.
Parametrizing the $P$-orbits in $\mathcal{O}_{f}$ is equivalent
to parametrizing the double cosets in $P\backslash G/G^{f}$.
Since the map
\[ P\backslash G/G^{f}\ni P g G^{f}\mapsto G^{f} g^{-1} P\in G^{f}\backslash G/P\]
is a bijection,
it is also equivalent to
parametrizing the $G^{f}$-orbits in $G/P$.
Write
\begin{equation*}
X_{n}=\{\vec{x}=(x_1,\operatorname{d\!}ots,x_{2n},x_{0}):
x_{0}^{2}=\sum_{i=1}^{2n} x_{i}^{2} \text{ and } x_{0}>0\}/\sim.
\end{equation*}
Here, for two vectors $\vec{x}$ and $\vec{x'}$, we defined
\[\vec{x}\sim\vec{x'}\Leftrightarrow\exists s>0
\textrm{ such that }\vec{x'}=s\vec{x}.\]
As a manifold, $X_{n}\cong S^{2n-1}$.
The group $G$ acts transitively on $X_{n}$ by
\[g\cdot[\vec{x}]=[\vec{x}\,g_{1}^{t}],\]
where $g_{1}\in G_{1}$ denotes the image of $g\in G$ under the projection $G\to G_{1}$.
Put
\begin{equation*}
v_{0}=[(0,\operatorname{d\!}ots,0,1,1)].
\end{equation*}
Then $\operatorname{Stab}_{G}(v_{0})=P$ and hence $X_{n}\cong G/P.$
Therefore, to parametrize $G^{f}$-orbits in
$G/P$ is equivalent to parametrize $T$-orbits in $X_{n}$.
Let
\begin{equation*}
B=\operatorname{B}igl\{\vec{b}=(b_1,\operatorname{d\!}ots,b_{n}):b_1,\operatorname{d\!}ots,b_n\geq 0,\
\sum_{i=1}^{n-1} b_{i}^{2}=1-2b_{n}\operatorname{B}igr\}.
\end{equation*}
Then, $0\leq b_{n}\leq\mathfrak{a}c{1}{2}$ for any $\vec{b}\in B$.
Write
\begin{equation*}
\alpha=\alpha_{\vec{b}}=(0,b_1,0,b_{2},\operatorname{d\!}ots,0,b_{n-1},0)
\text{ and }
\bar{X}_{\vec{b}}=
\begin{pmatrix}
0_{2n-1}&\alpha^{t}&\alpha^{t}\\
-\alpha&0&0\\
\alpha&0&0\\
\end{pmatrix}.
\end{equation*}
Put
\begin{equation*}
\bar{n}_{\vec{b}}=
\operatorname{exp}(\bar{X}_{\vec{b}}) =
\begin{pmatrix}
I_{2n-1}&\alpha^{t}&\alpha^{t}\\
-\alpha&1-\mathfrak{a}c{1}{2}|\alpha|^2&-\mathfrak{a}c{1}{2}|\alpha|^2\\
\alpha&\mathfrak{a}c{1}{2}|\alpha|^2&1+\mathfrak{a}c{1}{2}|\alpha|^2\\
\end{pmatrix}\in\bar{N}.
\end{equation*}
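This closed form of the exponential can be checked directly: $\bar{X}_{\vec{b}}$ is nilpotent of order three, since
\[
\bar{X}_{\vec{b}}^{2}=
\begin{pmatrix}
0_{2n-1}&0&0\\
0&-|\alpha|^{2}&-|\alpha|^{2}\\
0&|\alpha|^{2}&|\alpha|^{2}\\
\end{pmatrix},
\qquad
\bar{X}_{\vec{b}}^{3}=0,
\]
so that $\operatorname{exp}(\bar{X}_{\vec{b}})=I+\bar{X}_{\vec{b}}+\frac{1}{2}\bar{X}_{\vec{b}}^{2}$.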
Then,
\begin{equation*}
\bar{n}_{\vec{b}}^{-1}\cdot v_{0}=[(0,-b_1,0,-b_2,\operatorname{d\!}ots,0,-b_{n-1},0,b_{n},1-b_{n})].
\end{equation*}
\begin{lemma}\langlebel{L:T-orbits}
The map $B\rightarrow X_{n}/T$ defined by
\begin{equation*}
\vec{b}\mapsto(\bar{n}_{\vec{b}})^{-1} \cdot v_{0}
\end{equation*} is a bijection.
\end{lemma}
\begin{proof}
Identify the image of $T$ in $G_{1}$ with $\operatorname{U}(1)^{n}$.
Then, $T$ acts on $X_{n}$ by
\begin{align*}&
(y_{1}+z_{1}\mathbf{i},\operatorname{d\!}ots,y_{n}+z_{n}\mathbf{i})\cdot[(x_1,\operatorname{d\!}ots,x_{2n},x_{0})]&\\&=[(y_{1}x_{1}+z_{1}x_{2},
-z_{1}x_{1}+y_{1}x_{2},\operatorname{d\!}ots,y_{n}x_{2n-1}+z_{n}x_{2n},-z_{n}x_{2n-1}+y_{n}x_{2n},
x_{0})].&\end{align*}
Then each $T$-orbit in $X_{n}$ has a unique representative of the form
\[ [(0,-b_1,\operatorname{d\!}ots,0, -b_{n-1},0,b_{n},1-b_{n})],\]
where $b_{i}\geq 0$ ($1\leq i\leq n$).
Moreover, the equation \[\sum_{i=1}^{2n} x_{i}^{2}=x_{0}^{2}\] leads to the equation
\[\sum_{i=1}^{n-1} b_{i}^{2}=1-2b_{n}.\]
By this, the map $\vec{b}\mapsto(\bar{n}_{\vec{b}})^{-1}\cdot v_{0}$ is a bijection.
\end{proof}
By Lemma~\ref{L:T-orbits}, we obtain the following.
\begin{lemma}\langlebel{L:P-orbits}
Each $P$-orbit in $\mathcal{O}_{f}=G\cdot f$ contains some $\bar{n}_{\vec{b}}\cdot f$
for a unique tuple $\vec{b}\in B$.
\end{lemma}
\subsection{The moment map $\mathcal{O}_{f}\rightarrow\mathfrak{p}^{\ast}$}\langlebel{SS:moment}
In \S\ref{SS:P-orbit2}, we showed an explicit parametrization of coadjoint $P$-orbits.
In this subsection, we use the results in \S\ref{SS:P-orbit2} to calculate
the image of the moment map $q\colon \mathcal{O}_f \rightarrow\mathfrak{p}^{\ast}$.
Here, the moment map $q$ is defined by the composition
of the inclusion $\mathcal{O}_f\hookrightarrow \mathfrak{g}^*$
and the dual map $\mathfrak{g}^*\to \mathfrak{p}^*$
of the inclusion $\mathfrak{p}\hookrightarrow \mathfrak{g}$.
Recall the map $\operatorname{pr}$ defined in \eqref{Eq:identificaiton2}.
Put
\begin{equation*}
H'=
\begin{pmatrix}
0&1\\
-1&0\\
\end{pmatrix}.
\end{equation*}
\begin{lemma}\langlebel{L:gf}
We have $q(\bar{n}_{\vec{b}}\cdot f)=\operatorname{pr}(X_{Y,\beta,0})$, where
\begin{align*}
&\beta=(-a_{1}b_{1},0,\operatorname{d\!}ots,-a_{n-1}b_{n-1},0,a_{n}b_{n})\neq 0,\\
&Y=\operatorname{d\!}iag\{a_{1}H',\operatorname{d\!}ots,a_{n-1}H',0\} + (\beta')^t \alpha - \alpha^t \beta',
\text{ and }
\beta'=(\underbrace{0,\operatorname{d\!}ots,0}_{2n-2},a_{n}).
\end{align*}
\end{lemma}
\begin{proof}
Write $Y'=\operatorname{d\!}iag\{a_{1}H',\operatorname{d\!}ots,a_{n-1}H',0\}$.
Then following notation in \S\ref{SS:notation} we have
$t_{\vec{a}}=
\operatorname{d\!}iag\{Y',0_{2\times 2}\} - X_{\mathfrak{a}c{\beta'}{2}}+\bar{X}_{\mathfrak{a}c{\beta'}{2}}$.
Using \eqref{Eq:brackets} we calculate
\begin{align*}
&\operatorname{A}d(\bar{n}_{\vec{b}}) (t_{\vec{a}})
= t_{\vec{a}} + [\bar{X}_{\alpha}, t_{\vec{a}}]
+ \mathfrak{a}c{1}{2} [[\bar{X}_{\alpha}, [\bar{X}_{\alpha}, t_{\vec{a}}]] \\
&= \bigl( \operatorname{d\!}iag\{Y',0_{2\times 2}\} - X_{\mathfrak{a}c{\beta'}{2}}
+ \bar{X}_{\mathfrak{a}c{\beta'}{2}}\bigr) \\
&\quad - \bigl(\bar{X}_{\alpha(Y')^t}
- \operatorname{d\!}iag\{(\beta')^t \alpha - \alpha^t \beta', 0_{2\times 2}\}\bigr)
- \mathfrak{a}c{1}{2}
\bar{X}_{\alpha \alpha^t \beta' - \alpha(\beta')^t \alpha}.
\end{align*}
Hence the lemma follows from
\[
Y'+(\beta')^t \alpha - \alpha^t \beta' = Y \text{ and }
\mathfrak{a}c{\beta'}{2}-\alpha(Y')^t
- \mathfrak{a}c{1}{2}(\alpha \alpha^t \beta' - \alpha(\beta')^t \alpha)
=\beta.
\qedhere
\]
\end{proof}
Put
\begin{align*}
&Y_{\vec{b}}=Y-\mathfrak{a}c{1}{|\beta|^2}(Y\beta^{t}\beta-
(Y\beta^{t}\beta)^{t}), \quad
Z_{\vec{b}}=
\begin{pmatrix}
Y_{\vec{b}}&\mathfrak{a}c{\beta^{t}}{|\beta|}\\
-\mathfrak{a}c{\beta}{|\beta|}&0\\
\end{pmatrix}.
\end{align*}
By Lemmas \ref{L:p-standard4}, \ref{p:standard5} and \ref{L:class-matrix},
the $P$-conjugacy class of $q(\bar{n}_{\vec{b}}\cdot f)$ is determined by
the sign of the Pfaffian of $Z_{\vec{b}}$
and singular values of $Y_{\vec{b}}$.
Put
\begin{align*}
&\gamma_{1}=(a_{1}b_{1},\operatorname{d\!}ots,a_{n-1}b_{n-1},-a_{n}b_{n}),\\
&\gamma_{2}=((a_{1}^{2}-a_{n}^{2}b_{n})b_{1}, \operatorname{d\!}ots,(a_{n-1}^{2}-a_{n}^{2}b_{n})b_{n-1},0).
\end{align*}
For a permutation $\sigma$ on $\{1,2,\operatorname{d\!}ots,2n\}$, let
$Q_{\sigma}=(x_{ij})_{1\leq i,j\leq 2n}$ be the permutation matrix corresponding to $\sigma$,
that is, $x_{i,j}=1$ if $j=\sigma(i)$; and $x_{i,j}=0$ if $j\neq\sigma(i)$.
\begin{lemma}\langlebel{L:Zb}
Let $\sigma$ be the permutation
\[\sigma(i) =\begin{cases} 2i-1 &(1\leq i\leq n) \\
2(i-n) & (n+1\leq i\leq 2n) \end{cases}.\]
Then
\[Q_{\sigma}Z_{\vec{b}}Q_{\sigma}^{-1}
=\begin{pmatrix}
0_{n}&Z\\
-Z^{t}&0_{n}\\
\end{pmatrix},\]
where
\begin{equation}\langlebel{Eq:M2}
Z= \begin{pmatrix} a_{1}&\ldots&0&\mathfrak{a}c{-a_{1}b_{1}}{|\beta|}\\
\vdots&\operatorname{d\!}dots&\vdots& \vdots\\
0&\ldots&a_{n-1}&\mathfrak{a}c{-a_{n-1}b_{n-1}}{|\beta|}\\
a_{n}b_{1}&\ldots&a_{n}b_{n-1} &\mathfrak{a}c{a_{n}b_{n}}{|\beta|}\\
\end{pmatrix}-\mathfrak{a}c{\gamma_{1}^{t}\gamma_{2}}{|\beta|^{2}}.
\end{equation}
\end{lemma}
\begin{proof}
By calculation
\[ Y\beta^{t}=(0,(a_{1}^{2}-a_{n}^{2}b_{n})b_{1},\operatorname{d\!}ots,0,(a_{n-1}^{2}-a_{n}^{2}b_{n})b_{n-1},
0)^{t}.\]
Substituting the expressions for $Y$, $\beta$, and $Y\beta^{t}$ into
\[Z_{\vec{b}}
= \begin{pmatrix}
Y-\mathfrak{a}c{1}{|\beta|^{2}}(Y\beta^{t})\beta
+ \mathfrak{a}c{1}{|\beta|^2}\beta^{t} (Y\beta^{t})^{t}&\mathfrak{a}c{\beta^{t}}{|\beta|}\\
-\mathfrak{a}c{\beta}{|\beta|}&0\\
\end{pmatrix}\]
we obtain the stated form of $Z_{\vec{b}}$.
It is easy to see that
$Q_{\sigma}Z_{\vec{b}}Q_{\sigma}^{-1}$ has the block form stated in the lemma.
\end{proof}
\begin{lemma}\langlebel{L:Zb2}
The Pfaffian of $Z_{\vec{b}}$ is equal to
\[\mathfrak{a}c{1-b_{n}}{|\beta|}\operatorname{pr}od_{1\leq i\leq n}a_{i}.\]
\end{lemma}
\begin{proof}
By Lemma \ref{L:Zb}, the Pfaffian of $Z_{\vec{b}}$ is equal to $\operatorname{d\!}et Z$.
Since $\gamma_{1}^{t}$ is proportional
to the right most column of the first matrix in the right hand side of \eqref{Eq:M2}
and the last entry of $\gamma_2$ is zero,
the term $\mathfrak{a}c{1}{|\beta|^2}
\gamma_{1}^{t}\gamma_{2}$ makes no contribution to $\operatorname{d\!}et Z$.
Therefore,
\begin{align*}
\operatorname{d\!}et Z&=
\operatorname{d\!}et \begin{pmatrix} a_{1}&\ldots&0&\mathfrak{a}c{-a_{1}b_{1}}{|\beta|}\\
\vdots&\operatorname{d\!}dots&\vdots&\vdots\\0&\ldots& a_{n-1}&\mathfrak{a}c{-a_{n-1}b_{n-1}}{|\beta|}\\
a_{n}b_{1}&\ldots&a_{n}b_{n-1}&\mathfrak{a}c{a_{n}b_{n}}{|\beta|}\\
\end{pmatrix}\\
&=\mathfrak{a}c{1-b_{n}}{|\beta|}\operatorname{pr}od_{1\leq i\leq n}a_{i}. \qedhere
\end{align*}
\end{proof}
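As a quick consistency check (illustrative only), take $n=2$: then $\beta=(-a_{1}b_{1},0,a_{2}b_{2})$, $|\beta|^{2}=a_{1}^{2}b_{1}^{2}+a_{2}^{2}b_{2}^{2}$, and, the $\gamma_{1}^{t}\gamma_{2}$ term dropping out as in the proof,
\[
\det Z=\det\begin{pmatrix}
a_{1}&\frac{-a_{1}b_{1}}{|\beta|}\\[1mm]
a_{2}b_{1}&\frac{a_{2}b_{2}}{|\beta|}
\end{pmatrix}
=\frac{a_{1}a_{2}(b_{2}+b_{1}^{2})}{|\beta|}
=\frac{1-b_{2}}{|\beta|}\,a_{1}a_{2},
\]
where the last equality uses $b_{1}^{2}=1-2b_{2}$ from the definition of $B$, in agreement with Lemma~\ref{L:Zb2}.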
Write $Z'$ for the $n\times (n-1)$ matrix obtained from $Z$ by removing the last column.
Put \begin{equation*}
h_{\vec{b}}(x)=\operatorname{d\!}et(xI_{n-1}-(Z')^{t}Z').
\end{equation*}
Then we claim that the singular values of $Y_{\vec{b}}$
are square roots of zeros of $h_{\vec{b}}(x)$.
Indeed, let $W\in \operatorname{SO}(n)$ be a matrix such that
the right most column of $WZ$ is $(0,0,\operatorname{d\!}ots,0,1)^t$.
Then $WZ=\operatorname{d\!}iag\{Z'_0,1\}$ for some $(n-1)\times (n-1)$ matrix $Z'_0$.
Hence the eigenvalues of $Z^t Z$ are
the eigenvalues of $(Z')^{t}Z'=(Z'_0)^tZ'_0$ plus $1$.
Since the eigenvalues of $Z^t Z$ are the same as
those of $(Z_{\vec{b}})^t Z_{\vec{b}}$
and they are the singular values of $Y_{\vec{b}}$ plus $1$
(see the last part of \S\ref{SS:P-orbit2}),
the claim follows.
\begin{proposition}\langlebel{P:hb3}
We have
\begin{equation}\langlebel{Eq:hb2}
h_{\vec{b}}(x)
= \sum_{1\leq i\leq n}\mathfrak{a}c{a_{i}^{2}b_{i}^{2}}{|\beta|^{2}}
\operatorname{pr}od_{1\leq j\leq n,\, j\neq i}(x-a_{j}^{2}).
\end{equation}
For any $1\leq i\leq n$, $a_{i}^{2}$ is a root of $h_{\vec{b}}(x)$ if and only if $b_{i}=0$.
\end{proposition}
\begin{proof}
Put
\begin{align*}
&\gamma_{3}=(a_{n}b_{1},\operatorname{d\!}ots,a_{n}b_{n-1}), \\
&\gamma_{4}=\mathfrak{a}c{1}{|\beta|}
\bigl(
(a_{1}^{2}-a_{n}^{2}b_{n})b_{1},\operatorname{d\!}ots,
(a_{n-1}^{2}-a_{n}^{2}b_{n})b_{n-1}\bigr).
\end{align*}
By a direct calculation we see that
\[(Z')^{t}Z'=\operatorname{d\!}iag\{a_{1}^{2},\operatorname{d\!}ots,a_{n-1}^{2}\}
+\gamma_{3}^{t}\gamma_{3}- \gamma_{4}^{t}\gamma_{4}.\]
From this we calculate that
\[h_{\vec{b}}(a_{i}^{2})=\mathfrak{a}c{a_{i}^{2}b_{i}^{2}}{|\beta|^{2}}
\operatorname{pr}od_{1\leq j\leq n,\, j\neq i}(a_{i}^{2}-a_{j}^{2})\] for $1\leq i\leq n-1$.
Since $h_{\vec{b}}(x)$ is a monic polynomial of degree $n-1$,
we get
\[h_{\vec{b}}(x)=
\sum_{1\leq i\leq n}\mathfrak{a}c{a_{i}^{2}b_{i}^{2}}{|\beta|^{2}}
\operatorname{pr}od_{1\leq j\leq n,\, j\neq i}(x-a_{j}^{2}). \qedhere\]
\end{proof}
\begin{corollary}\langlebel{C:hb4}
The polynomial $h_{\vec{b}}(x)$
has $n-1$ positive roots, which lie in the intervals
\[[a_{n}^{2},a_{n-1}^{2}],\operatorname{d\!}ots,[a_{2}^{2},a_{1}^{2}],\]
respectively.
\end{corollary}
\begin{proof}
First assume that none of $b_{i}$ is zero.
Then by Proposition~\ref{P:hb3}, $h_{\vec{b}}(a_{i}^{2})$ and $h_{\vec{b}}
(a_{i+1}^{2})$ have different signs.
Thus, $h_{\vec{b}}(x)$ has a zero in $(a_{i+1}^{2},a_{i}^{2})$
for each $1\leq i\leq n-1$.
Hence, the $n-1$ zeros of $h_{\vec{b}}(x)$ lie in the intervals
\[(a_{n}^{2},a_{n-1}^{2}),\operatorname{d\!}ots,(a_{2}^{2},a_{1}^{2}),\]
respectively. Therefore, $h_{\vec{b}}(x)$ has no double zeros.
In general, among $\{b_1,\operatorname{d\!}ots,b_{n}\}$
let $b_{i_{1}},\operatorname{d\!}ots,b_{i_{l}}$ with $1\leq i_{1}<\cdots<i_{l}\leq n$
be all nonzero members.
Write $I=\{i_{1},\operatorname{d\!}ots,i_{l}\}$ and $J=\{1,\operatorname{d\!}ots,n\}-\{i_{1},\operatorname{d\!}ots,i_{l}\}$.
By Proposition~\ref{P:hb3},
\[h_{\vec{b}}(x)=\operatorname{B}igl(\sum_{1\leq j\leq l}\mathfrak{a}c{a_{i_{j}}^{2}b_{i_{j}}^{2}}{|\beta|^{2}}
\operatorname{pr}od_{1\leq k\leq l,\, k\neq j}(x-a_{i_{k}}^{2})\operatorname{B}igr)\operatorname{pr}od_{i\in J}(x-a_{i}^{2}).\]
Thus, $a_{i}^{2}$ ($i\in J$) are zeros of
$h_{\vec{b}}(x)$. By a similar argument as above, one shows that other $l-1$ zeros of $h_{\vec{b}}(x)$ lie in
the intervals \[(a_{i_{l}}^{2},a_{i_{l-1}}^{2}),\operatorname{d\!}ots,(a_{i_{2}}^{2},a_{i_{1}}^{2})\] respectively. This shows
that: $h_{\vec{b}}(x)$ has $n-1$ positive roots, which lie in the intervals \[[a_{n}^{2},a_{n-1}^{2}],\operatorname{d\!}ots,
[a_{2}^{2},a_{1}^{2}]\] respectively.
\end{proof}
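For example (purely as an illustration of Proposition~\ref{P:hb3} and Corollary~\ref{C:hb4}), when $n=2$ the polynomial is linear:
\[
h_{\vec{b}}(x)=\frac{a_{1}^{2}b_{1}^{2}}{|\beta|^{2}}(x-a_{2}^{2})
+\frac{a_{2}^{2}b_{2}^{2}}{|\beta|^{2}}(x-a_{1}^{2}),
\qquad
|\beta|^{2}=a_{1}^{2}b_{1}^{2}+a_{2}^{2}b_{2}^{2},
\]
with the single zero
\[
x_{1}^{2}=\frac{a_{1}^{2}a_{2}^{2}(b_{1}^{2}+b_{2}^{2})}{a_{1}^{2}b_{1}^{2}+a_{2}^{2}b_{2}^{2}}
\in[a_{2}^{2},a_{1}^{2}].
\]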
By Corollary \ref{C:hb4}, every zero of $h_{\vec{b}}(x)$ has multiplicity at most two,
the only possible double zeros are
$a_{2}^{2},\operatorname{d\!}ots,a_{n-1}^{2}$,
and for each $2\leq i\leq n-2$, $a_{i}^{2}$ and $a_{i+1}^{2}$ cannot
both be double zeros.
By \eqref{Eq:hb2}, in order that $a_{i}^{2}$ for $2\leq i\leq n-1$ is a double zero of
$h_{\vec{b}}(x)$ it is necessary and sufficient that
$b_{i}=0$ and
\[\sum_{1\leq k\leq n,\, k\neq i}\mathfrak{a}c{a_{k}^{2}b_{k}^{2}}{|\beta|^{2}}
\operatorname{pr}od_{1\leq j\leq n,\, j\neq i,k}(a_{i}^{2}-a_{j}^{2})=0.\]
\smallskip
Let $x_1\geq\cdots\geq x_{n-1}\geq 0$ be square roots of zeros of $h_{\vec{b}}(x)$.
By Corollary \ref{C:hb4},
$a_{i+1}\leq x_{i}\leq a_{i}$ for each $1\leq i\leq n-2$,
and $|a_{n}|\leq x_{n-1}\leq a_{n-1}$.
Write \[\vec{x}=(x_{1},\operatorname{d\!}ots,x_{n-1}).\]
\begin{corollary}\langlebel{C:characteristic3}
The map $\vec{b}\mapsto\vec{x}$
gives a bijection from $B$ to
\[[a_{2},a_{1}]\times\cdots\times[a_{n-1},a_{n-2}]\times[|a_{n}|,a_{n-1}].\]
\end{corollary}
\begin{proof}
For $\vec{b}=(b_1,\operatorname{d\!}ots,b_{n})\in B$ and $\vec{b'}=(b'_{1},\operatorname{d\!}ots,b'_{n})\in B$, suppose $h_{\vec{b}}(x)$ and $h_{\vec{b'}}(x)$ have the same zeros.
Then, $h_{\vec{b}}=h_{\vec{b'}}$.
Thus, $h_{\vec{b}}(a_{i}^{2})=h_{\vec{b'}}(a_{i}^{2})$ for each $1\leq i\leq n$.
By Proposition \ref{P:hb3}, this implies that
$b_{i}=b'_{i}$ for each $i$. Thus, $\vec{b}=\vec{b'}$.
This shows the injectivity.
The singular values $x_1,\operatorname{d\!}ots,x_{n-1}$ give a polynomial
\[p(x)=\operatorname{pr}od_{1\leq i\leq n-1}(x-x_{i}^2).\]
Since $(-1)^{i-1}p(a_{i}^{2})\geq 0$, we can write
\[p(x)= \sum_{1\leq i\leq n} c_i \operatorname{pr}od_{1\leq j\leq n,\, j\neq i}(x-a_{j}^2)\]
for some $c_i\geq 0$ with $\sum_{i=1}^n c_i=1$;
indeed, evaluating at $x=a_{i}^{2}$ gives
$c_{i}=p(a_{i}^{2})\big/\prod_{1\leq j\leq n,\, j\neq i}(a_{i}^{2}-a_{j}^{2})\geq 0$,
and comparing leading coefficients gives $\sum_{i=1}^n c_i=1$.
Hence there exists a unique tuple $\vec{b}\in B$
such that $h_{\vec{b}}(x)=p(x)$.
This shows the surjectivity.
\end{proof}
\begin{proposition}\langlebel{P:pf1}
The image of the moment map $q(\mathcal{O}_{f})$
consists of all depth one coadjoint orbits of $P$
with the sign of the Pfaffian equal to the sign of $a_{n}$,
and with singular values $(x_1,\operatorname{d\!}ots, x_{n-1})$
such that
\[a_1\geq x_1\geq a_2\geq x_2\geq \cdots\geq
a_{n-1}\geq x_{n-1}\geq |a_{n}|.\]
Moreover, $q$ maps different $P$-orbits in
$\mathcal{O}_{f}$ to different $P$-orbits in $\mathfrak{p}^{\ast}$.
\end{proposition}
\begin{proof}
By the form of $q(\bar{n}_{\vec{b}}\cdot f)$ in Lemma \ref{L:gf}, we have $\beta\neq 0$. Thus, the $P$-orbit containing
$q(\bar{n}_{\vec{b}}\cdot f)$ has depth one. By Lemma \ref{L:Zb2}, the Pfaffian of $Z_{\vec{b}}$ has the same sign as
the sign of $a_{n}$. The other statements follow from Corollary \ref{C:characteristic3}.
\end{proof}
The stabilizers of the orbits are given as follows.
\begin{proposition}\langlebel{P:pf3}
For any $\vec{b}\in B$,
\[\operatorname{Stab}_{P_{1}}(\bar{n}_{\vec{b}}\cdot f)\cong\operatorname{SO}(2)^{r}
\text{ and }
\operatorname{Stab}_{P_{1}}(q(\bar{n}_{\vec{b}}\cdot f)) \cong
\operatorname{U}(2)^{s}\times\operatorname{SO}(2)^{n-1-2s},\]
where $r$ is the number of zeros among $b_{1},\operatorname{d\!}ots,b_{n}$
and $s$ is the number of double zeros of $h_{\vec{b}}(x)$.
\end{proposition}
\begin{proof}
The first isomorphism
follows from $\operatorname{Stab}_{P_{1}}(\bar{n}_{\vec{b}}\cdot f)
\cong \operatorname{Stab}_{T_{1}}(\bar{n}_{\vec{b}}^{-1}\cdot v_0)$.
The second isomorphism follows from
the description above of the $P_{1}$-orbit of $q(\bar{n}_{\vec{b}}\cdot f)$
and Lemma~\ref{L:p-standard3}~(3).
\end{proof}
\begin{lemma}\langlebel{L:proper1}
For any compact set $\Omega\subset\mathfrak{p}^{\ast}-\mathfrak{l}^{\ast}$, $q^{-1}(\Omega)$ is compact.
\end{lemma}
\begin{proof}
Write a general element in $q^{-1}(\Omega)$ as
\[f'=an'm \bar{n}_{\vec{b}}\cdot f, \]
where $a\in A$, $n'\in N$, $m\in M$, and $\vec{b}\in B$.
Then \[q(f')=an'm\cdot q(\bar{n}_{\vec{b}}\cdot f)\in\Omega.\]
Note that $M$ and $B$ are compact.
Hence, $m$ and $\vec{b}$ are bounded.
Write \[m\cdot q(\bar{n}_{\vec{b}}\cdot f)=\eta_{1}+\phi_{n}(\xi_{1}),\]
where $m^{-1}\cdot \xi_1$ is given by the vector $\beta$ as in Lemma \ref{L:gf}.
Then
\[\mathfrak{a}c{1}{2}|a_{n}|\leq|a_{n}||\vec{b}|\leq |\xi_{1}|
=|\beta|\leq|a_{1}||\vec{b}|\leq|a_{1}|,\]
where we used $\mathfrak{a}c{1}{2}\leq|\vec{b}|=1-b_{n}\leq 1$.
Write $an'm\cdot q(\bar{n}_{\vec{b}}\cdot f)=\eta+\phi_{n}(\xi)$,
where $\eta\in\mathfrak{l}^{\ast}$ and $\xi\in\mathfrak{n}^{\ast}$.
By the compactness of
$\Omega\subset\mathfrak{p}^{\ast}-\mathfrak{l}^{\ast}$,
$|\eta|$ is bounded from above,
and $|\xi|$ is bounded from both above and below.
We have \[an'\cdot(\eta_{1}+\phi_{n}(\xi_{1}))=\eta+\phi_{n}(\xi).\]
Write $n'=\operatorname{exp}(X)$ and $\xi_{1}=\operatorname{pr}(Y)$, where $X\in\mathfrak{n}$ and $Y\in\bar{\mathfrak{n}}$.
Then,
\[an'\cdot(\eta_{1}+\phi_{n}(\xi_{1}))
=\eta_{1}+\operatorname{pr}([X,Y])+e^{-\langlembda_{0}\log a}\phi_{n}(\xi_{1}).\]
Thus, $\eta=\eta_{1}+\operatorname{pr}([X,Y])$ and $\xi=e^{-\langlembda_{0}\log a} \xi_{1}$.
Now, $|\xi_{1}|,|Y|,|\xi|$ are bounded both from above and below,
and $|\eta|,|\eta_{1}|$ are bounded from above.
Thus, $\log a$ is bounded both from above and below, and $|X|$ is bounded from above.
This shows the compactness of $q^{-1}(\Omega)$.
\end{proof}
\begin{proposition}\langlebel{P:pf2}
The image of the moment map $q\colon \mathcal{O}_{f}=G\cdot f\rightarrow\mathfrak{p}^{\ast}$ lies in the set of depth
one elements. For any $g\in G$, the reduced space $q^{-1}(q(g\cdot f))/\operatorname{Stab}_{P}(q(g\cdot f))$ is a singleton.
The moment map $q$ is weakly proper, but not proper.
\end{proposition}
\begin{proof}
The first and the second statements follow from Proposition \ref{P:pf1}.
By Lemma \ref{L:proper1}, we see that $q$ is weakly proper.
Since the closure of every depth one orbit contains a depth zero orbit,
$q(\mathcal{O}_{f})$ is not closed in $\mathfrak{p}^{\ast}$.
Hence $q\colon \mathcal{O}_{f}\to \mathfrak{p}^*$ is not proper.
\end{proof}
\subsection{Singular elliptic coadjoint orbits}\langlebel{SS:singular-elliptic}
Now we consider singular elliptic coadjoint orbits. That is, we allow some of the singular values $a_{1},\operatorname{d\!}ots,a_{n}$
to be equal.
Consider the case when $a_{n}\neq 0$.
Let $0=i_{0}<i_{1}<\cdots<i_{l}=n$ be such that $|a_{i}|=|a_{j}|$
if $i_{k-1}<i\leq j\leq i_{k}$ for some $1\leq k\leq l$,
and $|a_{i_{k}}|>|a_{i_{k}+1}|$ for any $1\leq k\leq l-1$.
Write $n_{k}=i_{k}-i_{k-1}$ for $1\leq k\leq l$.
Then
\begin{equation*}
\operatorname{Stab}_{G_{1}}(f)=\operatorname{U}(n_{1})\times \cdots\times\operatorname{U}(n_{l}).
\end{equation*}
Put
\begin{equation*}
B_{l}=\{\vec{b}=(b'_{1},\operatorname{d\!}ots,b'_{l}):b'_{i}
\geq 0,\sum_{1\leq i\leq l-1}b_{i}'^{2}=1-2b'_{l}\}.
\end{equation*}
For any $\vec{b}=(b'_{1},\operatorname{d\!}ots,b'_{l})\in B_{l}$,
we have $0\leq b'_{l}\leq\mathfrak{a}c{1}{2}$.
For $\vec{b}\in B_{l}$, write
\begin{equation*}
\alpha= \alpha_{\vec{b}}=(0,b_1,0,b_{2},\operatorname{d\!}ots,0,b_{n-1},0),
\end{equation*}
where $b_{i}=b'_{k}$ if $i=i_{k}$ for some $1\leq k\leq l-1$,
and $b_{i}=0$ if otherwise.
Put $b_{n}=b'_{l}$.
Put
\begin{equation*}
\bar{X}_{\vec{b}}=\begin{pmatrix}
0_{2n-1}&\alpha^{t}&\alpha^{t}\\
-\alpha&0&0\\
\alpha&0&0\\
\end{pmatrix} \text{ and }
\bar{n}_{\vec{b}}=\operatorname{exp}(\bar{X}_{\vec{b}})
\end{equation*}
Then
\begin{equation*}
\bar{n}_{\vec{b}}^{-1}\cdot
v_{0}=[(0,-b_1,0,-b_2,\operatorname{d\!}ots,0,-b_{n-1},0,
b_{n},1-b_{n})].
\end{equation*}
We have the following analogue of Lemma \ref{L:P-orbits}.
\begin{lemma}\langlebel{L:P-orbits3}
Each $P$-orbit in $\mathcal{O}_{f}=G\cdot f$ contains some $\bar{n}_{\vec{b}}\cdot f$
for a unique tuple $\vec{b} \in B_{l}$.
\end{lemma}
With this, we argue as in the case of regular orbits.
Then Lemma \ref{L:Zb2}, Proposition \ref{P:hb3}, Corollary \ref{C:hb4}, Corollary \ref{C:characteristic3}, Proposition \ref{P:pf1} and Lemma \ref{L:proper1}
all extend to this singular case,
and Proposition \ref{P:pf2} holds verbatim.
Similarly to Proposition \ref{P:pf3}, we can describe $\operatorname{Stab}_{P_{1}}(\bar{n}_{\vec{b}}\cdot f)$ and
$\operatorname{Stab}_{P_{1}}(q(\bar{n}_{\vec{b}}\cdot f))$ for $\vec{b}\in B_{l}$. Let $n_{k}+s_{k}$ be the multiplicity
of $a_{k}^{2}$ for $1\leq k\leq l$ as a zero of $h_{\vec{b}}(x)$.
Then $s_{k}\in\{-1,0,1\}$; and $s_{k}\in\{0,1\}$ if and only if $b'_{k}=0$.
Put \[r_{k}=\operatorname{B}igl\lfloor\mathfrak{a}c{s_{k}}{2}\operatorname{B}igr\rfloor\text{ and }s=-\sum_{1\leq k\leq l}s_{k}.\]
Then $r_{k}\in\{-1,0\}$, and $r_{k}=0$ if and only if $b'_{k}=0$.
The stabilizers are given as follows.
\begin{proposition}\langlebel{P:Stab2}
For $\vec{b}\in B_{l}$,
\[\operatorname{Stab}_{P_{1}}(\bar{n}_{\vec{b}}\cdot f)\cong\operatorname{U}(n_{1}+r_{1})\times\cdots\times\operatorname{U}(n_{l}+r_{l})\]
and \[\operatorname{Stab}_{P_{1}}(q(\bar{n}_{\vec{b}}\cdot f))\cong\operatorname{U}(n_{1}+s_{1})
\times\cdots\times\operatorname{U}(n_{l}+s_{l})\times\operatorname{U}(1)^{s}.\]
\end{proposition}
Elliptic orbits for $a_{n}=0$
can be regarded as degenerations of
the non-elliptic regular orbits which are studied in the next section.
Thus, we treat them in the next section.
\section{Moment map for non-elliptic semisimple coadjoint orbits}\langlebel{S:non-elliptic}
We continue the study of the moment map for coadjoint orbits.
In this section we treat the remaining types of orbits.
Suppose first that $m$ is odd and $m=2n-1$.
The case where $m$ is even will be mentioned at the end of this section.
\subsection{$P$-orbits in $\mathcal{O}_{f}$}\langlebel{SS:doubleCoset2}
Let $a_1\geq a_2\geq\cdots\geq a_{n-1}\geq 0$ and $a_{n}\geq 0$.
Write $\vec{a}=(a_1,\operatorname{d\!}ots,a_{n})$ and
\begin{align*}
t'_{\vec{a}}
=\begin{pmatrix}
0&a_1&& &&&&\\
-a_{1}&0&&&&&&\\
&&\operatorname{d\!}dots&&&&&\\
&&&0&a_{n-1}&&&\\&&&-a_{n-1}&0&&&\\
&&&&&0&&\\&&&&&&0&a_{n}\\&&&&&&a_{n}&0\\
\end{pmatrix}.
\end{align*}
Put $f=\iota(t'_{\vec{a}})$.
Then $\mathcal{O}_{f}=G\cdot f$ is a non-elliptic semisimple
coadjoint orbit in $\mathfrak{g}^{\ast}$ if $a_n>0$.
All non-elliptic semisimple coadjoint orbits in $\mathfrak{g}^{\ast}$
are of this form.
We first consider regular orbits,
that is, we assume that $a_1>a_2>\cdots>a_{n-1}>0$ and $a_{n}>0$.
Let $G^{f}$ be the stabilizer of $f$ in $G$.
Then $G^{f}=T_{s}$, where $T_{s}$ is the pre-image in $G$ of a maximal torus
\begin{align*}
&\operatorname{B}iggl\{
\begin{pmatrix}
y_{1}&z_{1}&&&&&&\\
-z_{1}&y_{1}&&&&&&\\
&&\operatorname{d\!}dots&&&&&\\
&&&y_{n-1}&z_{n-1}&&&\\
&&&-z_{n-1}&y_{n-1}&&&\\
&&&&&1&&\\
&&&&&&y_{n}&z_{n}\\
&&&&&&z_{n}&y_{n}\\
\end{pmatrix}\\
&\qquaduad\qquaduad
:y_{1}^{2}+z_{1}^{2}=\cdots=y_{n-1}^{2}+z_{n-1}^{2}
=y_{n}^{2}-z_{n}^{2}=1,\, y_{n}>0
\operatorname{B}iggr\}
\end{align*} of $G_{1}$.
As in the previous section,
we identify $G/P$ with the set
\[X_{n}=\{\vec{x}=(x_1,\operatorname{d\!}ots,x_{2n},x_{0}):x_{0}^{2}
=\sum_{1\leq i\leq 2n}x_{i}^{2},\, x_{0}>0\}/\sim\]
and consider $T_s$-orbits in $X_n$.
Set \begin{equation*}
B'=\{(b_1,\operatorname{d\!}ots,b_{n-1},b_{n}):b_1,\operatorname{d\!}ots,b_{n-1} \geq 0,
\sum_{1\leq i\leq n}b_{i}^{2}=1\}.
\end{equation*}
For each $\vec{b}\in B'$, write
\begin{align}\langlebel{Eq:Xbnb}
&\alpha=\alpha_{\vec{b}}=
\operatorname{B}igl(\mathfrak{a}c{a_{n}b_1}{\sqrt{a_{n}^{2}+a_{1}^{2}}},\mathfrak{a}c{-a_{1}b_1}
{\sqrt{a_{n}^{2}+a_{1}^{2}}},\operatorname{d\!}ots,\mathfrak{a}c{a_{n}b_{n-1}}{\sqrt{a_{n}^{2}+a_{n-1}^{2}}},
\mathfrak{a}c{-a_{n-1}b_{n-1}}{\sqrt{a_{n}^{2}+a_{n-1}^{2}}},b_{n}\operatorname{B}igr), \\ \nonumber
&\qquaduad \bar{X}_{\vec{b}}
=\begin{pmatrix} 0_{2n-1}&\alpha^{t}&\alpha^{t}\\
-\alpha&0&0\\
\alpha&0&0\\
\end{pmatrix}
\text{ and }
\bar{n}_{\vec{b}}=\operatorname{exp}(\bar{X}_{\vec{b}}).
\end{align}
Note that $|\alpha|=1$. Let $g_{\infty}=1$ and let $g'_{\infty}$ be a pre-image in $G$ of
$\operatorname{d\!}iag\{I_{2n-2},-1,-1,1\}\in G_1$.
Then $\bar{n}_{\vec{b}}^{-1}\cdot v_{0}$ is equal to
\begin{align}\langlebel{Eq:nbv0}
\operatorname{B}igl[\operatorname{B}igl(\mathfrak{a}c{-a_{n}b_1}{\sqrt{a_{n}^{2}+a_{1}^{2}}},
\mathfrak{a}c{a_{1}b_1}{\sqrt{a_{n}^{2}+a_{1}^{2}}},\operatorname{d\!}ots,
\mathfrak{a}c{-a_{n}b_{n-1}}{\sqrt{a_{n}^{2}+a_{n-1}^{2}}},
\mathfrak{a}c{a_{n-1}b_{n-1}}{\sqrt{a_{n}^{2}+a_{n-1}^{2}}},
-b_{n},0,1\operatorname{B}igr)\operatorname{B}igr],
\end{align}
and
\[g_{\infty}^{-1}\cdot v_{0}=v_{0}, \quad (g'_{\infty})^{-1}\cdot v_{0}=v'_{0},\]
where $v'_0:=[(0,\operatorname{d\!}ots,0,-1,1)]$.
Note that $v_0$ and $v'_{0}$ are the only $T_{s}$-fixed points in $X_{n}$.
The following lemma is easy to show.
\begin{lemma}\langlebel{L:Ts-orbits}
The map
\[B'\rightarrow(X_{n}-\{v_{0},v'_{0}\})/T_{s},\quad
\vec{b}\mapsto (\bar{n}_{\vec{b}})^{-1} \cdot v_{0}\]
is a bijection.
\end{lemma}
The following lemma follows directly from Lemma \ref{L:Ts-orbits}.
\begin{lemma}\langlebel{L:P-orbits2}
Each $P$-orbit in $\mathcal{O}_{f}-(P\cdot f\sqcup Pg'_{\infty}\cdot f)$
contains some $\bar{n}_{\vec{b}}\cdot f$ for a unique tuple $\vec{b}\in B'$.
\end{lemma}
Put
\[H'=\begin{pmatrix}0 & 1 \\ -1 & 0 \end{pmatrix}
\text{ and } H=\begin{pmatrix}0 & 1 \\ 1 & 0 \end{pmatrix}.\]
By Lemma \ref{L:P-orbits2}, $\{\bar{n}_{\vec{b}}\cdot f:\vec{b}\in B'\}$
together with $g_{\infty}\cdot f=f$ and
\[g'_{\infty}\cdot f=\iota(\operatorname{d\!}iag\{a_{1}H',\operatorname{d\!}ots,a_{n-1}H',0,-a_{n}H\})\]
represent all $P$-orbits in $\mathcal{O}_{f}=G\cdot f$.
\subsection{The moment map $\mathcal{O}_{f}\rightarrow\mathfrak{p}^{\ast}$}\langlebel{SS:moment2}
Let $Y=\operatorname{d\!}iag\{a_{1}H',\operatorname{d\!}ots,a_{n-1}H',0\}$.
We have $t'_{\vec{a}}=\operatorname{d\!}iag\{Y,0_{2\times 2}\}+a_nH_0\in \mathfrak{m}+\mathfrak{a}$.
For each $\vec{b}\in B'$, by \eqref{Eq:brackets},
\begin{align*}
\operatorname{A}d(\bar{n}_{\vec{b}}) (t'_{\vec{a}})
&= t'_{\vec{a}} + [\bar{X}_{\alpha}, t'_{\vec{a}}] \\
&= \operatorname{d\!}iag\{Y,0_{2\times 2}\} + a_nH_0
+ \bar{X}_{\alpha Y} + a_n \bar{X}_{\alpha}.
\end{align*}
Here the second order term $\frac{1}{2}[\bar{X}_{\alpha},[\bar{X}_{\alpha},t'_{\vec{a}}]]$ vanishes because $[\bar{X}_{\alpha},t'_{\vec{a}}]\in\bar{\mathfrak{n}}$ and $\bar{\mathfrak{n}}$ is abelian. Hence
\[\operatorname{A}d(\bar{n}_{\vec{b}})(t'_{\vec{a}})
=\begin{pmatrix}
Y& \beta^{t}&\beta^{t}\\
-\beta &0&a_{n}\\
\beta&a_{n}&0\\
\end{pmatrix},
\] where
\[\beta=a_{n}\alpha+\alpha Y
=(b_{1}\sqrt{a_{n}^{2}+a_{1}^{2}},0,\operatorname{d\!}ots, b_{n-1}\sqrt{a_{n}^{2}+a_{n-1}^{2}},0,a_{n}b_{n}).\]
Put
\[Y_{\vec{b}}=Y-\mathfrak{a}c{1}{|\beta|^2}(Y\beta^{t}\beta-(Y\beta^{t}\beta)^{t}),
\quad
Z_{\vec{b}}=\begin{pmatrix}
Y_{\vec{b}}&\mathfrak{a}c{\beta}{|\beta|}\\-\mathfrak{a}c{\beta}{|\beta|}&0\\
\end{pmatrix}.\]
We have
\[Y\beta^{t}
=(0,-a_{1}b_{1}\sqrt{a_{n}^{2}+a_{1}^{2}},\operatorname{d\!}ots,
0,-a_{n-1}b_{n-1}\sqrt{a_{n}^{2}+a_{n-1}^{2}},0)^{t}.\]
Note that we always have $\beta\neq 0$.
By Lemmas \ref{L:p-standard4}, \ref{p:standard5} and \ref{L:class-matrix},
the $P$-conjugacy class of $q(\bar{n}_{\vec{b}}\cdot f)$
is determined by the sign of the Pfaffian of $Z_{\vec{b}}$
and singular values of $Y_{\vec{b}}$.
Put
\begin{align*}
&\gamma_{1}=(b_{1}\sqrt{a_{n}^{2}+a_{1}^{2}},\operatorname{d\!}ots,b_{n-1}\sqrt{a_{n}^{2}+a_{n-1}^{2}},a_{n}
b_{n}),\\
&\gamma_{2}=(a_{1}b_{1}\sqrt{a_{n}^{2}+a_{1}^{2}},\operatorname{d\!}ots,
a_{n-1}b_{n-1}\sqrt{a_{n}^{2}+a_{n-1}^{2}},-|\beta|).
\end{align*}
By a direct calculation we have
\begin{lemma}\langlebel{L:Zb3}
Let $\sigma$ be the permutation
\[\sigma(i) =\begin{cases} 2i-1 &(1\leq i\leq n) \\
2(i-n) & (n+1\leq i\leq 2n) \end{cases}.\]
Then
\[Q_{\sigma}Z_{\vec{b}} Q_{\sigma}^{-1}=
\begin{pmatrix} 0_{n}&Z\\
-Z^{t}&0_{n}\\
\end{pmatrix},\] where
\[Z=\operatorname{d\!}iag\{a_{1},\operatorname{d\!}ots,a_{n-1},0\}-\mathfrak{a}c{1}{|\beta|^2}\gamma_{1}^{t}\gamma_{2}.\]
\end{lemma}
By Lemma~\ref{L:Zb3},
the Pfaffian of $Z_{\vec{b}}$ equals $\operatorname{d\!}et Z$.
Since $Z=D-\gamma_{1}^{t}\gamma_{2}/|\beta|^{2}$ with $D=\operatorname{diag}\{a_{1},\dots,a_{n-1},0\}$,
the identity $\det(D+uv^{t})=\det D+v^{t}\operatorname{adj}(D)\,u$ shows that only the $(n,n)$ cofactor
$a_{1}\cdots a_{n-1}$ of $D$ contributes, and hence
\[\operatorname{P}f(Z_{\vec{b}})=\mathfrak{a}c{b_{n}}{|\beta|}\operatorname{pr}od_{1\leq i\leq n}a_{i}.\]
Write $Z'$ for the $n\times (n-1)$ matrix obtained from $Z$ by removing the last column.
Then the singular values of $Y_{\vec{b}}$ are square roots of eigenvalues of $(Z')^{t}Z'$
as we saw in \S\ref{SS:moment}.
Write
\[h_{\vec{b}}(x)=\operatorname{d\!}et(xI_{n-1}-(Z')^{t}Z').\]
\begin{lemma}\langlebel{Ls:characteristic1}
We have \[h_{\vec{b}}(x)=\operatorname{pr}od_{1\leq i\leq n-1}(x-a_{i}^{2})
+\sum_{1\leq i\leq n-1}\operatorname{B}igl(\operatorname{pr}od_{1\leq j\leq n-1,\, j\neq i}(x-a_{j}^{2})\operatorname{B}igr)
\mathfrak{a}c{a_{i}^{2}b_{i}^{2}(a_{n}^{2}+a_{i}^{2})}{|\beta|^2}.\]
\end{lemma}
\begin{proof}
Put
\[\gamma_{3}=\mathfrak{a}c{1}{|\beta|}
\bigl(a_{1}b_{1}\sqrt{a_{n}^{2}+a_{1}^{2}},\operatorname{d\!}ots,
a_{n-1}b_{n-1}\sqrt{a_{n}^{2}+a_{n-1}^{2}}\bigr).\]
We calculate \[(Z')^{t}Z'=\operatorname{d\!}iag\{a_{1}^{2},\operatorname{d\!}ots,a_{n-1}^{2}\}-\gamma_{3}^{t}\gamma_{3}.\]
Then, one shows easily that
\begin{align*}
&\operatorname{d\!}et(xI_{n-1}-(Z')^{t}Z') \\
&=\operatorname{pr}od_{1\leq i\leq n-1}(x-a_{i}^{2})
+\sum_{1\leq i\leq n-1}\operatorname{B}igl(\operatorname{pr}od_{1\leq j\leq n-1,\, j\neq i}(x-a_{j}^{2})\operatorname{B}igr)
\mathfrak{a}c{a_{i}^{2}b_{i}^{2}(a_{n}^{2}+a_{i}^{2})}{|\beta|^2}. \qedhere
\end{align*}
\end{proof}
\begin{proposition}\langlebel{Ps:determinant}
For $1\leq i\leq n-1$,
\[h_{\vec{b}}(a_{i}^{2})=\mathfrak{a}c{a_{i}^{2}b_{i}^{2}(a_{n}^{2}+
a_{i}^{2})}{|\beta|^{2}}\operatorname{pr}od_{1\leq j\leq n-1,\, j\neq i}(a_{i}^{2}-a_{j}^{2});\]
and \[h_{\vec{b}}(0)=(-1)^{n-1}
\mathfrak{a}c{a_{n}^{2}b_{n}^{2}}{|\beta|^{2}}\operatorname{pr}od_{1\leq i\leq n-1}a_{i}^{2}.\]
\end{proposition}
\begin{proof}
This follows by an easy calculation from Lemma \ref{Ls:characteristic1}.
\end{proof}
With Proposition \ref{Ps:determinant} in place of Proposition \ref{P:hb3}, the following corollary can be shown in the same way as Corollary \ref{C:hb4}.
\begin{corollary}\langlebel{Cs:characteristic2}
The polynomial $h_{\vec{b}}(x)$ has $n-1$ non-negative roots,
which lie in the intervals
\[ [0,a_{n-1}^{2}], [a_{n-1}^{2},a_{n-2}^{2}],\operatorname{d\!}ots,[a_{2}^{2},a_{1}^{2}]\]
respectively; $a_{i}^{2}$ ($1\leq i\leq n-1$)
is a root if and only if $b_{i}=0$; $0$ is a root if and only if $b_{n}=0$.
\end{corollary}
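As an illustration (only a specialization of the formulas above), for $n=2$ we have $|\beta|^{2}=b_{1}^{2}(a_{1}^{2}+a_{2}^{2})+a_{2}^{2}b_{2}^{2}$ and
\[
h_{\vec{b}}(x)=x-a_{1}^{2}+\frac{a_{1}^{2}b_{1}^{2}(a_{1}^{2}+a_{2}^{2})}{|\beta|^{2}},
\]
whose unique zero is $a_{1}^{2}a_{2}^{2}b_{2}^{2}/|\beta|^{2}\in[0,a_{1}^{2}]$; it equals $0$ exactly when $b_{2}=0$ and equals $a_{1}^{2}$ exactly when $b_{1}=0$, as asserted in Corollary~\ref{Cs:characteristic2}.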
By Corollary \ref{Cs:characteristic2}, every zero of $h_{\vec{b}}(x)$ has multiplicity at most two; the only possible double zeros are $a_{2}^{2},\operatorname{d\!}ots,a_{n-1}^{2}$;
and $a_{i}^{2}$ and $a_{i+1}^{2}$ cannot both be double zeros.
Moreover, $a_{i}^{2}$ for $2\leq i\leq n-1$ is a double zero if and only if $b_{i}=0$ and
\[\sum_{1\leq k\leq n,\, k\neq i}\mathfrak{a}c{a_{k}^{2}b_{k}^{2}}{|\beta|^{2}}
\operatorname{pr}od_{1\leq j\leq n,\, j\neq i,k}(a_{i}^{2}-a_{j}^{2})=0.\]
By Corollary \ref{Cs:characteristic2},
write $x_1^{2}\geq\cdots\geq x_{n-1}^{2}$ for zeros of $h_{\vec{b}}(x)$.
Choose $x_1,\operatorname{d\!}ots,x_{n-1}$ such that $x_i\geq 0$ for $1\leq i\leq n-2$
and $\operatorname{sgn} x_{n-1} = \operatorname{sgn} b_n$.
Then
\[a_1\geq x_1\geq a_2\geq x_2\geq \cdots \geq a_{n-1}\geq |x_{n-1}|.
\]
Write
$\vec{x}=(x_{1},\operatorname{d\!}ots,x_{n-1})$.
The following corollary and propositions are analogues of Corollary \ref{C:characteristic3}, Proposition \ref{P:pf1},
and Proposition \ref{P:pf3}, respectively. The proofs are similar.
\begin{corollary}\langlebel{Cs:characteristic3}
The map $\vec{b}\mapsto \vec{x}$
gives a bijection from $B'$ to
\[[a_{2},a_{1}]\times\cdots\times[a_{n-1},a_{n-2}]
\times[-a_{n-1}, a_{n-1}].\]
\end{corollary}
\begin{proposition}\langlebel{Ps:pf1}
The image of the moment map $q(\mathcal{O}_f)$
consists of two depth zero orbits $P\cdot f$, $Pg'_{\infty}\cdot f$,
and all depth one $P$-coadjoint orbits with singular values
$(x_1,\operatorname{d\!}ots, x_{n-1})$ such that
\[a_1\geq x_1\geq a_2\geq x_2\geq \cdots\geq
a_{n-1}\geq x_{n-1}\geq 0.\]
Moreover, $q$ maps different $P$-orbits in $\mathcal{O}_{f}$
to different $P$-orbits in $\mathfrak{p}^{\ast}$.
\end{proposition}
\begin{proposition}\langlebel{Ps:pf3}
For any $\vec{b}\in B'$,
\[\operatorname{Stab}_{P_{1}}(\bar{n}_{\vec{b}}\cdot f)\cong\operatorname{SO}(2)^{r}\]
and
\[\operatorname{Stab}_{P_{1}}(q(\bar{n}_{\vec{b}}\cdot f)) \cong\operatorname{U}(2)^{s}\times
\operatorname{SO}(2)^{n-1-2s},\]
where $r$ is the number of zeros among $b_{1},\operatorname{d\!}ots,b_{n-1}$ and
$s$ is the number of double zeros of $h_{\vec{b}}(x)$.
\end{proposition}
Lemma \ref{L:proper1} has the following analogue. The proof is the same.
\begin{lemma}\langlebel{Ls:proper1}
For any compact set $\Omega\subset\mathfrak{p}^{\ast}-\mathfrak{l}^{\ast}$, $q^{-1}(\Omega)$ is compact.
\end{lemma}
By Proposition \ref{Ps:pf1} and Lemma \ref{Ls:proper1}, the following proposition follows.
\begin{proposition}\langlebel{Ps:pf2}
For any $g\in G$, the reduced space
\[q^{-1}(q(g\cdot f))/\operatorname{Stab}_{P}(q(g\cdot f))\] is a singleton.
The moment map $q$ is weakly proper, but not proper.
\end{proposition}
\begin{proof}
The first and the second claims are direct consequences of
Proposition \ref{Ps:pf1} and Lemma \ref{Ls:proper1}, respectively.
For the last claim it is enough to see that $q^{-1}(q(f))$ is non-compact.
\end{proof}
\subsection{Singular semisimple coadjoint orbits}\langlebel{SS:singular-nonelliptic}
Now we consider singular semisimple coadjoint orbits, that is, we allow some of the singular values
$a_{1},\operatorname{d\!}ots,a_{n-1}$ to be equal, $a_{n-1}=0$ or $a_{n}=0$.
First consider the case when $a_{n-1}\neq 0$ and $a_{n}\neq 0$.
Let $0=i_{0}<i_{1}<\cdots<i_{l-1}=n-1$ be such that
$a_{i}=a_{j}$ if $i_{k-1}<i\leq j\leq i_{k}$ for some $1\leq k\leq l-1$,
and $a_{i_{k}}>a_{i_{k}+1}$ for any $1\leq k\leq l-2$.
Write $n_{k}=i_{k}-i_{k-1}$ for $1\leq k\leq l-1$.
Then,
\[\operatorname{Stab}_{G}(f)\cong\operatorname{U}(n_{1})\times\cdots\times\operatorname{U}(n_{l-1})
\times\mathbb{R}_{>0}.\]
Put
\[B'_{l}=\{\vec{b}=(b'_{1},\operatorname{d\!}ots,b'_{l}):
b'_{1},\operatorname{d\!}ots,b'_{l-1}\geq 0,\sum_{1\leq i\leq l}(b_{i}')^{2}=1\}.
\]
For each $\vec{b}=(b'_{1},\operatorname{d\!}ots,b'_{l})\in B'_{l}$, write
\begin{equation*}
\alpha=\alpha_{\vec{b}}=
\operatorname{B}igl(\mathfrak{a}c{a_{n}b_1}
{\sqrt{a_{n}^{2}+a_{1}^{2}}},\mathfrak{a}c{-a_{1}b_1}{\sqrt{a_{n}^{2}+a_{1}^{2}}},\operatorname{d\!}ots,\mathfrak{a}c{a_{n}b_{n-1}}
{\sqrt{a_{n}^{2}+a_{n-1}^{2}}},\mathfrak{a}c{-a_{n-1}b_{n-1}}{\sqrt{a_{n}^{2}+a_{n-1}^{2}}},b_{n}\operatorname{B}igr),
\end{equation*}
where $b_{i}=b'_{k}$ if $i=i_{k}$ for some $1\leq k\leq l-1$;
$b_{i}=0$ if $i\leq n-1$ and $i\neq i_{k}$ for all $1\leq k\leq l-1$; and $b_{n}=b'_{l}$.
Put $\bar{X}_{\vec{b}}$ and $\bar{n}_{\vec{b}}$ as in \eqref{Eq:Xbnb}.
Then $\bar{n}_{\vec{b}}^{-1}\cdot v_{0}$ equals \eqref{Eq:nbv0}.
We have the following analogue of Lemma \ref{L:P-orbits2}.
\begin{lemma}\langlebel{Ls:P-orbits2}
Each $P$-orbit in $\mathcal{O}_{f}-(P\cdot f\sqcup Pg'_{\infty}\cdot f)$
contains some $\bar{n}_{\vec{b}}\cdot f$ for a unique tuple $\vec{b}\in B'_{l}$.
\end{lemma}
With this, we argue as in the case of regular non-elliptic semisimple orbits.
All the results in the previous subsection extend.
In particular, Propositions \ref{Ps:pf1} and \ref{Ps:pf2} hold verbatim.
Next, consider the case when $a_{n-1}=0$ and $a_{n}\neq 0$.
Let $0=i_{0}<i_{1}<\cdots<i_{l}=n-1$ be such that
$a_{i}=a_{j}$ if $i_{k-1}<i\leq j\leq i_{k}$ for some $1\leq k\leq l$,
and $a_{i_{k}}>a_{i_{k}+1}$ for any $1\leq k\leq l-1$.
Write $n_{k}=i_{k}-i_{k-1}$ for $1\leq k\leq l$.
Then,
\[\operatorname{Stab}_{G_{1}}(f)\cong\operatorname{U}(n_{1})\times\cdots\times\operatorname{U}(n_{l-1})\times\operatorname{SO}(2n_{l}+1)
\times\mathbb{R}_{>0}.\]
Put
\[B'_{l}=\{\vec{b}=(b'_{1},\operatorname{d\!}ots,b'_{l}):b'_1,\operatorname{d\!}ots,b'_{l}\geq 0,
\sum_{1\leq i\leq l}(b_{i}')^{2}=1\}.\]
For each $\vec{b}=(b'_{1},\operatorname{d\!}ots,b'_{l})\in B'_{l}$,
write
\[\alpha=\alpha_{\vec{b}}=
\operatorname{B}igl(\mathfrak{a}c{a_{n}b_1}{\sqrt{a_{n}^{2}+a_{1}^{2}}},
\mathfrak{a}c{-a_{1}b_1}{\sqrt{a_{n}^{2}+a_{1}^{2}}},
\operatorname{d\!}ots,\mathfrak{a}c{a_{n}b_{n-1}}{\sqrt{a_{n}^{2}+a_{n-1}^{2}}},
\mathfrak{a}c{-a_{n-1}b_{n-1}}{\sqrt{a_{n}^{2}+a_{n-1}^{2}}},b_{n}\operatorname{B}igr),
\] where $b_{i}=b'_{k}$ if $i=i_{k}$ for some $1\leq k\leq l-1$;
$b_{i}=0$ if $i\leq n-1$ and $i\neq i_{k}$ for all $1\leq k\leq l-1$;
and $b_{n}=b'_{l}$.
Put $\bar{X}_{\vec{b}}$ and $\bar{n}_{\vec{b}}$ as in \eqref{Eq:Xbnb}.
Then $\bar{n}_{\vec{b}}^{-1}\cdot v_{0}$ equals \eqref{Eq:nbv0}.
We have the following analogue of Lemma \ref{L:P-orbits2}.
\begin{lemma}\langlebel{Ls:P-orbits3}
Each $P$-orbit
in $\mathcal{O}_{f}-(P\cdot f\sqcup Pg'_{\infty}\cdot f)$
contains some $\bar{n}_{\vec{b}}\cdot f$ for a unique tuple $\vec{b}\in B'_{l}$.
\end{lemma}
With this, we argue as in the case of regular non-elliptic semisimple orbits.
All results extend.
In particular, Propositions \ref{Ps:pf1} and \ref{Ps:pf2} hold verbatim.
Next, consider the case when $a_{n}=0$ and $a_{1}\neq 0$.
Then $\mathcal{O}_f$ is elliptic.
Let $0=i_{0}<i_{1}<\cdots<i_{l}=n$ be such that $a_{i}=a_{j}$ if
$i_{k-1}<i\leq j\leq i_{k}$ for some $1\leq k\leq l$,
and $a_{i_{k}}>a_{i_{k}+1}$ for any $1\leq k\leq l-1$.
Write $n_{k}=i_{k}-i_{k-1}$ for $1\leq k\leq l$.
Then,
\[\operatorname{Stab}_{G_{1}}(f)\cong\operatorname{U}(n_{1})\times
\cdots\times\operatorname{U}(n_{l-1})\times\operatorname{SO}_{e}(2n_{l},1).
\]
Put
\[B'_{l}=\{\vec{b}=(b'_{1},\operatorname{d\!}ots,b'_{l-1}):
b'_{1},\operatorname{d\!}ots,b'_{l-1}\geq 0,\sum_{1\leq i\leq l-1}(b_{i}')^{2}=1\}.
\]
For each $\vec{b}=(b'_{1},\operatorname{d\!}ots,b'_{l-1})\in B'_{l}$,
write
\[\alpha=\alpha_{\vec{b}}=(0,-b_{1},\operatorname{d\!}ots,0,-b_{n-1},0),\]
where $b_{i}=b'_{k}$ if $i=i_{k}$ for some $1\leq k\leq l-1$;
and $b_{i}=0$ if $i\neq i_{k}$ for all $1\leq k\leq l-1$.
Put $\bar{X}_{\vec{b}}$ and $\bar{n}_{\vec{b}}$ as in \eqref{Eq:Xbnb}.
Then,
\[\bar{n}_{\vec{b}}^{-1}\cdot v_{0}=[(0,b_{1},\operatorname{d\!}ots,0,b_{n-1},0,0,1)].\]
Because $a_{n}=0$, one has $g'_{\infty}\cdot f=f$. The following lemma can be proved in the same way as
Lemma \ref{L:P-orbits2}.
\begin{lemma}\langlebel{Ls:P-orbits4}
Each $P$-orbit in $\mathcal{O}_{f}-P\cdot f$ contains some
$\bar{n}_{\vec{b}}\cdot f$ for a unique tuple $\vec{b}\in B'_{l}$.
\end{lemma}
With this, we argue as in the case of regular non-elliptic semisimple orbits.
All results extend.
In particular, Propositions \ref{Ps:pf1} and \ref{Ps:pf2} hold verbatim.
However, the range of zeros of $h_{\vec{b}}(x)$ becomes a bit different:
due to $a_{n}=0$,
we see that $x^{n_l}$ divides $h_{\vec{b}}(x)$.
The analogue of Corollary \ref{Cs:characteristic3} is as follows.
\begin{corollary}\langlebel{Cs:characteristic3-an0}
The map $\vec{b}\mapsto\vec{x}$ gives a bijection from $B'_{l}$ to
\[[a_{2},a_{1}]\times\cdots\times [a_{i_{l-1}},a_{i_{l-1}-1}]\times\{0\}^{n_l}.\]
\end{corollary}
For the most degenerate case where $a_{1}=\operatorname{d\!}ots=a_{n-1}=a_{n}=0$, one has $f=0$. Then, the image of
the moment map is equal to $\{0\}$.
\subsection{Non-semisimple coadjoint orbits}\langlebel{SS:non-semisimple}
Non-semisimple coadjoint orbits of $\operatorname{Sp}in(2n,1)$ can be thought of as limits of elliptic orbits.
For $a_1\geq \cdots\geq a_{n-1}\geq 0$, define $s_{\vec{a}}\in \mathfrak{g}$ by
\[s_{\vec{a}}=\operatorname{d\!}iag\{a_1 H',\operatorname{d\!}ots,a_{n-1}H' , U\},\ \text{ where }
U=\begin{pmatrix} 0 & 1 & 1 \\ -1 & 0 & 0 \\ 1 & 0 & 0\end{pmatrix}\]
and put $f=\iota(s_{\vec{a}})$.
We first assume that $a_1>a_2>\cdots >a_{n-1}>0$.
Then the coadjoint orbit $\mathcal{O}_f = G\cdot f$ is regular
and is the limit of regular
elliptic coadjoint orbits defined in \S\ref{SS:doubleCoset} when $a_n\to +0$.
The image of the moment map $q\colon \mathcal{O}_f \to \mathfrak{p}^*$
is obtained along the same lines as the arguments in Section~\ref{S:elliptic} for elliptic orbits.
Instead of the set $B$ in \S\ref{SS:doubleCoset}, we let
\begin{equation*}
B=\operatorname{B}igl\{\vec{b}=(b_1,\operatorname{d\!}ots,b_{n}):b_1,\operatorname{d\!}ots,b_{n-1}\geq 0,\
\sum_{i=1}^{n-1} b_{i}^{2}=1-2b_{n}\operatorname{B}igr\}.
\end{equation*}
Here, the condition $b_n\geq 0$ is not imposed.
Put $v'_0=[(0,\operatorname{d\!}ots,0,-1,1)]\in X_{n}$.
Then
\begin{lemma}\langlebel{L:T-orbits2}
The map
\[B\rightarrow(X_{n}-\{v'_{0}\})/G^f ,\quad
\vec{b}\mapsto (\bar{n}_{\vec{b}})^{-1} \cdot v_{0}\]
is a bijection.
\end{lemma}
To obtain the image $q(\mathcal{O}_f)$, we follow the argument in \S\ref{SS:moment}.
Similarly to Lemma~\ref{L:gf}, we have
$q(\bar{n}_{\vec{b}}\cdot f)=\operatorname{pr}(X_{Y,\beta,0})$,
where
\begin{align*}
Y=\operatorname{d\!}iag\{a_1H',\operatorname{d\!}ots,a_{n-1}H',0\} \text{ and }
\beta=(-a_1b_1,0,\operatorname{d\!}ots,-a_{n-1}b_{n-1},0,1).
\end{align*}
Then Lemma~\ref{L:Zb} holds if the matrix $Z$ there is replaced by
\begin{equation*}
\begin{pmatrix} a_{1}&\ldots&0&\mathfrak{a}c{-a_{1}b_{1}}{|\beta|}\\
\vdots&\operatorname{d\!}dots&\vdots& \vdots\\
0&\ldots&a_{n-1}&\mathfrak{a}c{-a_{n-1}b_{n-1}}{|\beta|}\\
0&\ldots&0 &\mathfrak{a}c{1}{|\beta|}\\
\end{pmatrix}-\mathfrak{a}c{\gamma_{1}^{t}\gamma_{2}}{|\beta|^{2}},
\end{equation*}
where $\gamma_{1}=(a_{1}b_{1},\operatorname{d\!}ots,a_{n-1}b_{n-1},-1)$,
$\gamma_{2}=(a_{1}^{2}b_{1}, \operatorname{d\!}ots,a_{n-1}^{2}b_{n-1},0)$. Note that this matrix is the
limit of the matrix $Z$ in Lemma~\ref{L:Zb} when $b_{n}=a_{n}^{-1}$ and $a_{n}\to 0$.
As in Lemma~\ref{L:Zb2},
the Pfaffian is
$\operatorname{P}f(Z_{\vec{b}})=\mathfrak{a}c{1}{|\beta|}a_1\cdots a_{n-1}>0$.
As in Proposition~\ref{P:hb3},
\begin{equation*}
h_{\vec{b}}(x)
= \sum_{1\leq i\leq n-1}\frac{a_{i}^{2}b_{i}^{2}}{|\beta|^{2}}\,
x\prod_{1\leq j\leq n-1,\, j\neq i}(x-a_{j}^{2})
+ \frac{1}{|\beta|^{2}}\prod_{1\leq j\leq n-1}(x-a_{j}^{2}).
\end{equation*}
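We note in passing a direct consequence of this formula: since $\beta$ above satisfies
\[
|\beta|^{2}=\sum_{i=1}^{n-1}a_{i}^{2}b_{i}^{2}+1,
\]
the coefficient of $x^{n-1}$ in the display is $\bigl(\sum_{i}a_{i}^{2}b_{i}^{2}+1\bigr)/|\beta|^{2}=1$, so $h_{\vec{b}}(x)$ is a monic polynomial of degree $n-1$.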
Analogously to Proposition~\ref{Ps:pf1}, we have
\begin{proposition}\langlebel{Ps:pf4}
The image of the moment map $q(\mathcal{O}_f)$
consists of one depth zero orbit $Pg'_{\infty}\cdot f$,
and all depth one $P$-coadjoint orbits with singular values
$(x_1,\operatorname{d\!}ots, x_{n-1})$ such that
\[a_1\geq x_1\geq a_2\geq x_2\geq \cdots\geq
a_{n-1}\geq x_{n-1}>0\]
and with the positive Pfaffian.
Moreover, $q$ maps different $P$-orbits in $\mathcal{O}_{f}$
to different $P$-orbits in $\mathfrak{p}^{\ast}$.
\end{proposition}
Notice that $x_{n-1}$ is strictly positive.
As an analogue of Lemma~\ref{L:proper1},
we have
\begin{lemma}
For any compact set
$\Omega\subset (\mathfrak{p}^{\ast}-\mathfrak{l}^{\ast})\cap q(\mathcal{O}_f)$,
the inverse image $q^{-1}(\Omega)$ is compact.
\end{lemma}
Then Proposition~\ref{P:pf2} holds without change of words.
Proposition~\ref{Ps:pf4} extends
to the case where some of $a_1,\operatorname{d\!}ots,a_{n-1}$ coincide and $a_{n-1}\neq 0$.
This case can be thought of as a limit of singular elliptic orbits treated
in \S\ref{SS:singular-elliptic}.
When $a_{n-1}=0$, the non-semisimple orbit $\mathcal{O}_f$ is
a limit of singular elliptic orbits treated
at the end of \S\ref{SS:singular-nonelliptic}.
Similarly to Corollary~\ref{Cs:characteristic3-an0},
the range of zeros of $h_{\vec{b}}(x)$ becomes
\[
[a_{2},a_{1}]\times \cdots\times [a_j,a_{j-1}]\times (0,a_{j}] \times \{0\}^{n-j-1},
\]
where $a_j>a_{j+1}=0$.
By replacing $U$ with $-U$, namely, taking $\operatorname{d\!}iag\{a_1 H',\operatorname{d\!}ots,a_{n-1}H' , -U\}$, we have other non-semisimple
orbits. This case is similar to the above. The only difference is that the Pfaffian in Proposition~\ref{Ps:pf4}
becomes negative.
\subsection{Moment map for $\operatorname{Sp}in(2n-1,1)$}
Suppose that $m$ is even and $G=\operatorname{Sp}in(2n-1,1)$.
Let $a_1\geq\cdots\geq a_{n-2}\geq|a_{n-1}|\geq 0$ and $a_{n}\geq 0$.
Write
\[t'_{\vec{a}}=
\begin{pmatrix}
0&a_1&&&&&\\
-a_{1}&0&&&&&\\
&&\operatorname{d\!}dots&&&&\\
&&&0&a_{n-1}&&\\
&&&-a_{n-1}&0&&\\
&&&&&0&a_{n}\\
&&&&&a_{n}&0\\
\end{pmatrix}.\]
Put $f = \iota(t'_{\vec{a}})$.
Then $\mathcal{O}_{f}=G\cdot f$ is a semisimple coadjoint orbit in $\mathfrak{g}^{\ast}$.
Moreover, all semisimple coadjoint orbits in $\mathfrak{g}^{\ast}$ are of this form.
Let $q\colon \mathcal{O}_{f}\rightarrow\mathfrak{p}^{\ast}$ be the moment map.
Let $g_{\infty}=1$ and
let $g'_{\infty}$ be a pre-image in $G$ of $\operatorname{d\!}iag\{I_{2n-3},-1,-1,1\}\in G_{1}$.
The following proposition summarizes the results concerning the moment map $q$.
It is analogous to
Propositions~\ref{Ps:pf1} and \ref{Ps:pf2},
and can be proved along a similar line as them.
\begin{proposition}\langlebel{P:Dn-qf}
The image of the moment map $q(\mathcal{O}_f)$ consists of depth zero orbit(s)
$P\cdot q(f)$ and $P\cdot q(g'_{\infty}f)$
(these are the same orbit if and only if $a_{n-1}=a_{n}=0$),
and depth one coadjoint $P$-orbits with singular values
$(x_1,\operatorname{d\!}ots, x_{n-2})$ such that
\[a_1\geq x_1\geq a_2\geq x_2\geq \cdots\geq
a_{n-2}\geq x_{n-2}\geq |a_{n-1}|.\]
If $g\in G$ and $g\cdot f \not\in Pf \cup Pg'_{\infty}f$,
then the reduced space \[q^{-1}(q(g\cdot f))/\operatorname{Stab}_{P}(q(g\cdot f))\] is a singleton.
The moment map $q$ is weakly proper.
It is not proper unless $f=0$.
\end{proposition}
A similar result holds for non-semisimple orbits.
\section{Verification of Duflo's conjecture in the case of $\operatorname{Sp}in(m+1,1)$}\langlebel{S:Duflo}
The orbit method associates some unitary representations
of Lie groups to coadjoint orbits.
With that, algebraic properties of representations are
reflected by geometric properties of coadjoint orbits.
The celebrated Duflo's conjecture (Conjecture~\ref{C:Duflo}) gives a connection between
the branching law of unitary representations and the moment map of coadjoint orbits.
Here we verify Conjecture~\ref{C:Duflo} in our setting.
\subsection{Tempered representations}\langlebel{SS:tempered}
We follow Duflo's way of associating coadjoint orbits to tempered representations.
Suppose first that $m$ is odd and $G=\operatorname{Sp}in(2n,1)$.
Put
\[
H'=\begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix},\
H=\begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix},
\text{ and }
U= \begin{pmatrix} 0 & 1 & 1 \\ -1 & 0 & 0 \\ 1 & 0 & 0\end{pmatrix}.
\]
Let
\[\gamma= \Bigl(a_{1}+n-\frac{1}{2}, a_{2}+n-\frac{3}{2},
\dots,a_{n}+\frac{1}{2}\Bigr)\in\Lambda_{0}\]
be a regular integral weight so that
$a_1,\operatorname{d\!}ots,a_n$ are all integers or all half-integers and
$a_1\geq \operatorname{d\!}ots \geq a_n \geq 0$.
Let $\pi^{+}(\gamma)$ (resp.\ $\pi^{-}(\gamma)$)
be a discrete series representation of $G$
with infinitesimal character $\gamma$ and
the lowest $K$-type $V_{K,\langlembda^+}$ (resp.\ $V_{K,\langlembda^-}$),
where
\[\langlembda^+=(a_1+1,\cdots,a_{n}+1) \text{ and }
\langlembda^-=(a_1+1,\cdots,a_{n-1}+1,-(a_{n}+1)).\]
In light of Remark~\ref{R:weight-orbit},
the orbit $\mathcal{O}$ associated to $\pi^{+}(\gamma)$ is
$G\cdot \iota(t_{-\gamma})$, where
\[
t_{-\gamma}
= - \operatorname{diag}\Bigl\{\Bigl(a_1+n-\frac{1}{2}\Bigr)H',
\Bigl(a_2+n-\frac{3}{2}\Bigr)H',\dots,
\Bigl(a_n+\frac{1}{2}\Bigr)H',0 \Bigr\}.
\]
Putting
\[
\vec{a}'=(a'_1,\dots,a'_n)
=\Bigl(a_{1}+n-\frac{1}{2}, a_{2}+n-\frac{3}{2},
\dots, (-1)^n\Bigl(a_{n}+\frac{1}{2}\Bigr)\Bigr),
\]
$t_{-\gamma}$ is $G$-conjugate to $t_{\vec{a}'}$.
Hence $\mathcal{O}=G\cdot \iota(t_{\vec{a}'})$.
Similarly, the orbit associated to $\pi^-(\gamma)$ is
the coadjoint $G$-orbit through
\[
\iota\Bigl(
\operatorname{diag}\Bigl\{\Bigl(a_1+n-\frac{1}{2}\Bigr)H',
\Bigl(a_2+n-\frac{3}{2}\Bigr)H',\dots,
(-1)^{n-1} \Bigl(a_n+\frac{1}{2}\Bigr)H',0\Bigr\}\Bigr).
\]
When \[ \gamma = \Bigl(a_{1}+n-\frac{1}{2}, a_{2}+n-\frac{3}{2},
\dots,a_{n-1}+\frac{3}{2},0\Bigr)\in\Lambda_{n}\]
and $\pi^{+}(\gamma)$ is a limit of discrete series,
the corresponding orbit is not semisimple:
it is the coadjoint $G$-orbit through
\[
\iota\Bigl(
\operatorname{diag}\Bigl\{\Bigl(a_1+n-\frac{1}{2}\Bigr)H',\dots,
\Bigl(a_{n-1}+\frac{3}{2}\Bigr)H',
(-1)^{n} U \Bigr\}\Bigr).
\]
For $\pi^{-}(\gamma)$, replace $(-1)^{n} U$ by $(-1)^{n-1}U$.
Next, let
\[I(\mu,\nu)
=\operatorname{Ind}_{MA\bar{N}}^{G}
(V_{M,\mu}\otimes e^{\nu-\rho'}\otimes\mathbf{1}_{\bar{N}})\]
be a unitary principal series representation of $G$.
Write $\mu=(a_1,\operatorname{d\!}ots,a_{n-1})$ and $\nu=\mathbf{i}a_{n}\langlembda_0$.
Then the corresponding orbit $\mathcal{O}$ is
the $G$-coadjoint orbit through
\[
\iota\Bigl(
\operatorname{diag}\Bigl\{\Bigl(a_1+n-\frac{3}{2}\Bigr)H',
\Bigl(a_2+n-\frac{5}{2}\Bigr)H',\dots,
(-1)^{n-1} \Bigl(a_{n-1}+\frac{1}{2}\Bigr)H',0,
a_n H\Bigr\}\Bigr).
\]
For $P$-representations,
let $V_{M',\mu}$ be an irreducible representation of $M'$
with highest weight $\mu=(b_1,\operatorname{d\!}ots,b_{n-1})$.
Let
$I_{P,V_{M',\mu}}=\operatorname{Ind}_{M'N}^{P}(V_{M',\mu}\otimes e^{\mathbf{i}\xi_0})$
be the unitarily induced representation of $P$.
Then the corresponding orbit is $P\cdot \operatorname{pr} (Z_{Y,\beta})$
in the notation of \S\ref{SS:P-orbit2} such that the singular values
of $Y$ are
\[
(x_1,\operatorname{d\!}ots,x_{n-1})=(b_1+n-2,\,b_2+n-3,\operatorname{d\!}ots,b_{n-2}+1,|b_{n-1}|)
\]
and the sign of the Pfaffian of $Z_{Y,\beta}$ equals
the sign of $(-1)^{n-1}b_{n-1}$.
\begin{theorem}\langlebel{T:Duflo1}
Let $P$ be a minimal parabolic subgroup of $G=\operatorname{Spin}(2n,1)$.
Let $\pi$ be a tempered representation of $G$,
which is associated to a regular coadjoint orbit $\mathcal{O}\subset\mathfrak{g}^{\ast}$.
Write $q\colon \mathcal{O} \rightarrow\mathfrak{p}^{\ast}$ for the moment map.
\begin{enumerate}
\item[(1)] The restriction of $\bar{\pi}$ to $P$ decomposes
into a finite direct sum of
irreducible unitarily induced representations of $P$ from $M'N$,
and this decomposition is multiplicity-free.
\item[(2)] Let $\tau$ be an irreducible unitarily induced representation of $P$ which is associated to a coadjoint orbit $\mathcal{O}'\subset\mathfrak{p}^{\ast}$.
Assume that $Z(G)$, the center of $G$, acts by the same scalar
on $\pi$ and on $\tau$.
Then for $\tau$ to appear in $\pi|_{P}$
it is necessary and sufficient that
$\mathcal{O'}\subset q(\mathcal{O})$.
\item[(3)] The moment map $q\colon \mathcal{O}\rightarrow\mathfrak{p}^{\ast}$ is weakly proper,
but not proper.
\item[(4)] The reduced space $q^{-1}(\mathcal{O'})/P$ is a singleton.
\end{enumerate}
\end{theorem}
\begin{proof}
Statement (1) follows from Theorem \ref{T:branching-ds} for (limit of) discrete series
and Theorem \ref{T:branching-ps} or \ref{T:branching-principal}
for unitary principal series.
Statements (3) and (4) follow from
Propositions \ref{P:pf2}, \ref{Ps:pf2}, \ref{Ps:pf4}.
It remains to show Statement (2), that is,
to compare the restriction of tempered representations and
the image of moment map of corresponding coadjoint orbits.
For $a_1\geq a_2\geq \cdots\geq a_{n}\geq -\frac{1}{2}$,
\[\gamma=
\Bigl(a_{1}+n-\frac{1}{2},a_{2}+n-\frac{3}{2},
\dots,a_{n}+\frac{1}{2}\Bigr)\in\Lambda_{0}\cup\Lambda_{n},\]
the restriction of the (limit of) discrete series $\pi^+(\gamma)$ is given
by Theorem \ref{T:branching-ds}:
\[\bar{\pi}^{+}(\gamma)|_{P}=\bigoplus_{\mu}I_{P,V_{M',\mu}}\]
where $\mu=(b_1,\operatorname{d\!}ots,b_{n-1})$ runs over tuples such that
\begin{equation}\langlebel{Eq:interlacing}
a_{1}+1\geq b_{1}\geq a_{2}\geq
\cdots\geq b_{n-2}\geq a_{n-1}+1\geq -b_{n-1}\geq a_{n}+1
\end{equation}
and $b_i-a_1\in \mathbb{Z}$.
On the other hand, the moment map image of the corresponding orbit $\mathcal{O}$
was studied in \S\ref{SS:moment} and \S\ref{SS:non-semisimple}.
Let $\mathcal{O}'$ be a coadjoint $P$-orbit which corresponds
to a unitary representation $\tau=I_{P,V_{M',\mu}}$
with $\mu=(b_1,\operatorname{d\!}ots,b_{n-1})$.
Then the singular values for the $P$-orbit $\mathcal{O}'$ are
\[
(x_1,\operatorname{d\!}ots,x_{n-1})=(b_1+n-1,\operatorname{d\!}ots,b_{n-2}+1,|b_{n-1}|)
\]
and the sign of the Pfaffian equals $\operatorname{sgn}\bigl((-1)^{n-1}b_{n-1}\bigr)$.
Assume that
the center $Z(G)$ acts on $\pi$ and
$\tau$ by the same scalar, which is equivalent to $b_i-a_i\in \mathbb{Z}$.
Then by Proposition~\ref{P:pf1},
$\mathcal{O}'\subset q(\mathcal{O})$ if and only if
\begin{align*}
a_1+n-\frac{1}{2}\geq x_1 \geq a_2+n-\frac{3}{2} \geq x_2
\geq \cdots \geq a_{n-1}+\frac{3}{2}
\geq x_{n-1}\geq a_{n}+\frac{1}{2}
\end{align*}
and the sign of the Pfaffian equals $(-1)^n$.
Under our assumption $b_i-a_i\in \mathbb{Z}$,
this is equivalent to
\eqref{Eq:interlacing}.
Therefore, Statement (2) for $\pi=\pi^+(\gamma)$ is proved.
The case of $\pi=\pi^-(\gamma)$ is similar.
For unitary principal series representations, use Theorem~\ref{T:branching-ps} and Proposition~\ref{Ps:pf1}.
\end{proof}
Suppose next that $m$ is even and $G=\operatorname{Sp}in(2n-1,1)$.
Then the tempered representations of $\operatorname{Sp}in(2n-1,1)$ are all unitary principal series.
A result similar to Theorem \ref{T:Duflo1} is implied by
Theorem \ref{T:branching-ps2} and Proposition \ref{P:Dn-qf}.
The proof is along the same line as above.
\subsection{Non-tempered representations}\langlebel{SS:nontempered}
We may associate non-tempered representations to
some non-regular coadjoint orbits.
For example, the derived functor module $A_{\mathfrak{q}}(\langlembda)$
is associated to elliptic orbits.
Fix $1\leq j\leq n-1$.
Let $\mathfrak{q}_j$ be a $\theta$-stable parabolic subalgebra of $\mathfrak{g}_{\mathbb{C}}$
whose Levi component has the real form isomorphic to
$\mathfrak{u}(1)^j\oplus \mathfrak{so}(m-2j+1,1)$.
Then for $A_{\mathfrak{q}_j}(\langlembda)$ with $\langlembda=(a_1,\operatorname{d\!}ots,a_j,0,\operatorname{d\!}ots,0)$
in the good range, we associate the singular elliptic coadjoint orbit through
\[
\iota\Bigl(\operatorname{diag}\Bigl\{\Bigl(a_1+\frac{m}{2}\Bigr)H',\dots,
\Bigl(a_j+\frac{m}{2}-j+1\Bigr)H',0,\dots,0\Bigr\}\Bigr).
\]
The branching law of $\overline{A_{\mathfrak{q}_j}(\langlembda)}|_{P}$ was obtained
in \S\ref{SS:branchinglaw} and \S\ref{SS:branchinglaw2}.
According to the formulas there, we observe that
$I_{P,V_{M',\mu}}$ occurs in the restriction
only if $\mu$ is of the form $\mu=(b_1,\cdots,b_{j-1},0,\operatorname{d\!}ots,0)$.
Then for $I_{P,V_{M',\mu}}$ with $\mu$ in this form, we associate the coadjoint $P$-orbit
$P\cdot \operatorname{pr}(Z_{Y,\beta})$ such that the singular values of $Y$ are
\[\Bigl(b_1+\frac{m-1}{2},\cdots,b_{j-1}+\frac{m+3}{2}-j,0,\dots,0\Bigr).\]
We remark that this correspondence is different from the one in \S\ref{SS:tempered}
even for the same representation of $P$.
This is related to the fact that
the same representation of a compact group
can be cohomologically induced from different parabolic subalgebras.
By restricting our consideration to the coadjoint $P$-orbit of
the above singular type for fixed $j$,
an analogue of Theorem~\ref{T:Duflo1} for $A_{\mathfrak{q}_j}(\langlembda)$
follows from
branching laws in \S\ref{SS:branchinglaw}, \S\ref{SS:branchinglaw2},
and results about orbits, Corollary~\ref{Cs:characteristic3-an0}.
\appendix
\section{Unitary principal series representations}\langlebel{S:principalSeries}
In Section~\ref{S:resP} we obtain branching laws for
all irreducible unitary representations of $\operatorname{Sp}in(m+1,1)$
when restricted to $P$.
In Appendices~\ref{S:principalSeries} and \ref{S:trivial}
we give more concrete description of
the decomposition into $P$-representations
for particular types of representations
without using the arguments in \S\ref{SS:CW-Cloux}
or the result of du Cloux~\cite{duCloux}.
We treat unitary principal series representations in Appendix~\ref{S:principalSeries}
and representations with trivial infinitesimal character in Appendix~\ref{S:trivial}.
For a finite-dimensional irreducible unitary representation $\sigma$ of $M$ and a unitary character $\nu$ of $A$,
we have a unitary principal series representation $\bar{I}(\sigma,\nu)$.
For $f\in\bar{I}(\sigma,\nu)$, let $f_{N}=f|_{N}$.
Through the map
\[\bar{I}(\sigma,\nu)\xrightarrow{\sim} L^{2}(N,V_{\sigma},\operatorname{d\!} n),\quad f\mapsto f_{N},\]
one identifies $\bar{I}(\sigma,\nu)$ with $L^{2}(N,V_{\sigma},\operatorname{d\!} n)$.
The action of $P=MAN$ on $\bar{I}(\sigma,\nu)$
induces its action on $L^{2}(N,V_{\sigma},\operatorname{d\!} n)$
given by \eqref{Eq:P-action}.
As in \eqref{Eq:Ff},
define the inverse Fourier transform of $f_{N}\in L^{2}(N,V_{\sigma},\operatorname{d\!} n)$
as a function on $\mathfrak{n}^{\ast}$ by
\begin{equation*}
\widehat{f_{N}}(\xi)=\mathcal{F}(f_{N})(\xi)=
(2\pi)^{-\frac{m}{2}}\int_{\mathbb{R}^{m}}e^{\mathbf{i}(\xi,x)}f(n_{x})\operatorname{d\!} x.
\end{equation*}
By classical Fourier theory, the map
\[f_{N}\mapsto \mathcal{F}(f_{N})=\widehat{f_{N}}\]
gives an isomorphism of Hilbert spaces
\[\mathcal{F}\colon L^{2}(N,V_{\sigma},\operatorname{d\!} n)
\rightarrow L^{2}(\mathfrak{n}^{\ast}-\{0\},V_{\sigma},\operatorname{d\!} \xi)
\bigl(\simeq L^{2}(\mathfrak{n}^{\ast},V_{\sigma},\operatorname{d\!} \xi)\bigr).\]
The $P$-action on $\widehat{f_{N}}$ is given by \eqref{Eq:P-action2}.
Recall $\xi_0=(0,\operatorname{d\!}ots,0,1)\in \mathfrak{n}^*$.
For $h\in L^{2}(\mathfrak{n}^{\ast}-\{0\},V_{\sigma},\operatorname{d\!} x)$,
define a function $h_{at,\nu}$ on $P$ by $h_{at,\nu}(p)=(p^{-1}\cdot h)(\xi_0)$
and then
\begin{align*}
h_{at,\nu}(p)
&=e^{-\mathbf{i}(\xi_0,x)}
|\operatorname{Ad}^*(m_0a)(\xi_{0})|^{\frac{2\nu(H_0)+m}{2}}
(\sigma(m_0)^{-1}h(\operatorname{Ad}^*(m_0a)\xi_{0}))
\end{align*}
for $p=m_0an_{x}\in P$ as in \eqref{Eq:anti-trivialization}.
\begin{proposition}\langlebel{P:anti-trivialization}
The image of the map
$L^{2}(\mathfrak{n}^{\ast}-\{0\},V_{\sigma},\operatorname{d\!} x)\ni h\mapsto h_{at,\nu}$ is equal to the representation space of the unitarily induced representation
$\operatorname{Ind}_{M'N}^{MAN}(\sigma|_{M'}\otimes e^{\mathbf{i} \xi_{0}})$.
The map $h\mapsto h_{at,\nu}$ intertwines the $P$-actions and
preserves inner products up to a scalar.
\end{proposition}
\begin{proof}
For $m_0a\in MA$, write $\xi=\operatorname{A}d^*(m_0a)\xi_{0}$.
Since both $\operatorname{d\!} {}_{l}m_0a$ and $|\xi|^{-m}\operatorname{d\!}\xi$
are $MA$ invariant measures on $MA/M'=\mathfrak{n}^{\ast}-\{0\}$,
we have $\operatorname{d\!} {}_{l}m_0a=c|\xi|^{-m}\operatorname{d\!}\xi$ for some constant $c>0$.
Then $|h_{at,\nu}(m_0a)|^{2}=|\xi|^{m}|h(\xi)|^{2}$ and
hence $\| h_{at,\nu}\|^{2}=c\|h\|^{2}$.
Therefore, the map $h\mapsto h_{at,\nu}$
sends $L^{2}(\mathfrak{n}^{\ast}-\{0\},V_{\sigma},\operatorname{d\!} x)$
to $\operatorname{Ind}_{M'N}^{MAN}(\sigma|_{M'}\otimes e^{\mathbf{i} \xi_{0}})$
and it preserves inner products up to a scalar.
The remaining assertions can be seen as in Lemma~\ref{L:anti-trivialization}.
\end{proof}
Proposition~\ref{P:anti-trivialization}
gives the branching law for unitary principal series
more directly than that in Section~\ref{S:resP}.
\begin{theorem}\langlebel{T:branching-principal}
For a finite-dimensional unitary representation $\sigma$ of $M$, let
\[\sigma|_{M'}=\bigoplus_{j=1}^s \tau_{j}\]
be the decomposition of $\sigma$ into a direct sum of irreducible unitary representations of $M'$.
Then,
\[\bar{I}(\sigma,\nu)|_{P}=\bigoplus_{j=1}^s I_{P,\tau_{j}}.\]
\end{theorem}
\begin{proof}
By realizing $\bar{I}(\sigma,\nu)$ with the non-compact picture of induced representation, we identify $\bar{I}(\sigma,\nu)$ with $L^{2}(N,V_{\sigma},\operatorname{d\!} n)$.
From \eqref{Eq:P-action},
we know the action of $P$ on $L^{2}(N,V_{\sigma},\operatorname{d\!} n)$.
Taking the inverse Fourier transform,
the action of $P$ on the Fourier transformed picture is given by \eqref{Eq:P-action2}.
Applying the anti-trivialization, by Proposition \ref{P:anti-trivialization}
we identify $\bar{I}(\sigma,\nu)$
with $\operatorname{Ind}_{M'N}^{MAN}(\sigma|_{M'}\otimes e^{\mathbf{i}\xi_{0}})$.
Hence the conclusion follows.
\end{proof}
\section{Restriction to $P$ of irreducible representations of $G$ with trivial infinitesimal character
and some complementary series}\langlebel{S:trivial}
In this section we study
representations of $G$ (or $G_2$) with trivial infinitesimal character
and some complementary series representations in detail.
In particular we prove Proposition~\ref{P:P-restriction2}, a branching law
for a discrete series, which was used in \S\ref{SS:branchinglaw}.
\subsection{Principal series with infinitesimal character $\rho$}\langlebel{SS:rep-O(m+1,1)}
Let \[m>1 \text{ and } n:= \Bigl\lfloor \frac{m}{2} \Bigr\rfloor +1\] and use the notation in Section~\ref{S:repP}.
For each $0\leq j\leq n-1$,
define a representation of $M_{2}=\operatorname{O}(m)\times\Delta_{2}(\operatorname{O}(1))$,
denoted by $(\sigma_j, V_j)$, as
\[
V_j:=\bigwedge^{j}\mathbb{C}^{m}
\]
on which $\operatorname{O}(m)$ acts naturally and $\operatorname{D}elta_{2}(\operatorname{O}(1))$
acts trivially.
The restriction $V_j|_{M_1}$ is irreducible and has the highest weight
\begin{equation*}
\mu_{j}=(\underbrace{1,\operatorname{d\!}ots,1}_{j},\underbrace{0,\operatorname{d\!}ots,0}_{n-j-1}).
\end{equation*}
Put
\begin{align*}
&I_{j}(\nu):=\operatorname{Ind}_{M_{2}A\bar{N}}^{G_{2}}(V_j\otimes e^{\nu-\rho'}\otimes \mathbf{1}_{\bar{N}}).
\end{align*}
\subsection{Normalized Knapp-Stein intertwining operators}\langlebel{SS:intertwining}
The formal Knapp-Stein intertwining operator \[J'_{j}(\nu):I_{j}(-\nu)\rightarrow I_{j}(\nu)\] is defined by
\begin{equation}\langlebel{Eq:intertwining3}(J'_{j}(\nu)f)(g)=\int_{N}f(gsn)\operatorname{d\!} n
\end{equation}
for $f\in I_{j}(-\nu)$ and $g\in G_{2}$, where $s=\operatorname{diag}\{I_{m},-1,1\}$ as in (\ref{Eq:s}).
\begin{proposition}\langlebel{L:intertwining1}
When $\operatorname{Re}\nu(H_0)>0$, one has
\begin{equation}\langlebel{Eq:Jf1}(J'_{j}(\nu)f)(n_{x})=\int_{\mathbb{R}^{m}}
|y|^{-2(\rho'-\nu)(H_{0})}\sigma_j(r_{y})f(n_{x-y})\operatorname{d\!} y
\end{equation}
for $x\in\mathbb{R}^{m}$, and the integral on the right hand side converges absolutely.
When $f|_{K_{2}}$ is fixed, the value of $(J'_{j}(\nu)f)(n_{x})$ varies holomorphically with respect to $\nu$ when
$\operatorname{Re}\nu(H_0)>0$.
\end{proposition}
\begin{proof}
Using Lemma \ref{L:barn-iwasawa}, one has
\begin{align*}
(J'_{j}(\nu)f)(n_{x})
&=\int_{\mathbb{R}^{m}}
f(n_{x}sn_{y})\operatorname{d\!} y\\
&=\int_{\mathbb{R}^{m}}\sigma_j(r_{y})^{-1}e^{(-2\log|y|)(\rho'+\nu)(H_{0})}
f(n_{x+\frac{y}{|y|^{2}}})\operatorname{d\!} y\\
&=\int_{\mathbb{R}^{m}}\sigma_j(r_{y})^{-1}e^{(2\log|y|)(\rho'+\nu)(H_{0})}
|y|^{-4\rho'(H_0)}f(n_{x-y})\operatorname{d\!} y\\
&=\int_{\mathbb{R}^{m}}|y|^{-2(\rho'-\nu)(H_{0})}\sigma_j(r_{y})^{-1}f(n_{x-y})\operatorname{d\!} y.
\end{align*}
Note that $r_y^2=I$ and hence $\sigma_j(r_{y})^{-1}=\sigma_j(r_{y})$.
By Lemma \ref{L:Iwasawa}, for $f\in I_{j}(-\nu)$,
\begin{equation*}
f(n_{x})=(1+|x|^{2})^{-(\rho'+\nu)(H_{0})}f(s_{x}).
\end{equation*}
Let $C=\operatorname{max}_{x\in K_{2}}|f(x)|$. Then,
\[|f(n_{x})|\leq C(1+|x|^{2})^{-(\rho'+\nu)(H_{0})}\] for all $x\in\mathbb{R}^{m}$.
Since \[(-2(\rho'-\nu)(H_{0}))+(-2(\rho'+\nu)(H_{0}))=-4\rho'(H_0)=-2m<-m,\]
the integral in (\ref{Eq:Jf1}) converges absolutely in the range $|y|\geq |x|+1$.
For a fixed $x$, $\sigma_j(r_{y})f(n_{x-y})$ is bounded
in the range $|y|\leq |x|+1$.
By \[-\operatorname{Re}2(\rho'-\nu)(H_{0})=-m+2\operatorname{Re}\nu(H_0)>-m,\]
the integral in (\ref{Eq:Jf1}) also converges absolutely on $|y|\leq |x|+1$.
By the absolute convergence, the integral on the right hand side of (\ref{Eq:Jf1}) varies holomorphically with respect to $\nu$.
\end{proof}
The Knapp-Stein intertwining operator in the current
setting was studied in \cite[\S8.3]{Kobayashi-Speh}.
Following \cite[(8.12)]{Kobayashi-Speh}, define
an $\operatorname{E}nd(V_j)$-valued distribution on $\mathbb{R}^{m}$ by
\begin{equation}
\label{Eq:Tdistribution}
(T_{j}(\nu))(x)=\frac{1}{\Gamma(\nu(H_0))}|x|^{-2(\rho'-\nu)(H_{0})}\sigma_j(r_{x}).
\end{equation}
Define
\begin{equation*}
(J_{j}(\nu)f)(n_{x})=\int_{\mathbb{R}^{m}}(T_{j}(\nu))(y)f(x-y)\operatorname{d\!} y.
\end{equation*}
We call $J_{j}(\nu)$ the {\it normalized Knapp-Stein intertwining operator.} The following is
\cite[Lemma 8.7]{Kobayashi-Speh}, which is implied by Lemma \ref{L:Riesz}.
\begin{lemma}\langlebel{L:continuation}
Both the distribution $T_{j}(\nu)$ and the intertwining operator $J_{j}(\nu)$ admit holomorphic continuations to the
whole $\mathfrak{a}^{\ast}_{\mathbb{C}}$.
\end{lemma}
Assume $\nu(H_{0})\in \mathbb{R}$.
For two functions $f,h$ in the image of $J_{j}(\nu)$,
choose $\tilde{f},\tilde{h}\in I_{j}(-\nu)$
such that $J_{j}(\nu)\tilde{f}=f$ and $J_{j}(\nu)\tilde{h}=h$.
Define
\begin{equation}\langlebel{Eq:Hermitian1}(f,h):=(\tilde{f}|h)=
\int_{\mathbb{R}^{m}}(\tilde{f}(n_{x}),h(n_{x}))\operatorname{d\!} x,\end{equation}
where $(\tilde{f}(n_{x}),h(n_{x}))$ is the inner product on
$V_{M,\mu_{j}}$.
By the following lemma, $(f,h)$ is a well-defined $G$-invariant
Hermitian form on the image of $J_j(\nu)$.
\begin{lemma}\langlebel{L:Hermitian2}
For any $\nu\in\mathfrak{a}_{\mathbb{C}}^*$,
$\tilde{f}\in I_{j}(-\nu)$ and $\tilde{h}\in I_{j}(-\bar{\nu})$,
\begin{equation*}
\int_{\mathbb{R}^{m}}((J_{j}(\nu)\tilde{f})(n_{x}), \tilde{h}(n_{x}))\operatorname{d\!} x
=\int_{\mathbb{R}^{m}}(\tilde{f}(n_{x}), (J_j(\bar{\nu})\tilde{h})(n_{x}))\operatorname{d\!} x.
\end{equation*}
\end{lemma}
\begin{proof}
Let $\phi(g)=(\tilde{f}(g), (J_j(\bar{\nu})\tilde{h})(g))$ ($g\in G_{2}$).
Then it satisfies the relation $\phi(gma\bar{n})=e^{2\rho'(\log a)}\phi(g)$.
By this, $\phi$ gives a left $G_{2}$-invariant density form on $G_{2}/\bar{P}_{2}$.
Hence, \[\int_{\mathbb{R}^{m}}\phi(n_{x})\operatorname{d\!} x=\int_{K_{2}}\phi(k)\operatorname{d\!} k\]
(\cite[Chapter V, \S 6]{Knapp}).
Therefore, the integral in \eqref{Eq:Hermitian1} converges.
Put
\[(\tilde{f}, \tilde{h})_{\nu}
= \int_{\mathbb{R}^m} \bigl(\tilde{f}(n_{x}), (J_{j}(\bar{\nu}) \tilde{h})(n_x)\bigr) \operatorname{d\!} x. \]
Fix $\tilde{f}|_{K_{2}}, \tilde{h}|_{K_{2}}$ and vary $\nu$.
By Lemma \ref{L:continuation} the value of $(\tilde{f},\tilde{h})_{\nu}$
varies holomorphically with respect to $\nu$.
When $\operatorname{Re}\nu(H_0)>0$, by Proposition~\ref{L:intertwining1}
the intertwining integral converges absolutely. Hence
\begin{align*}
&\quad(\tilde{f},\tilde{h})_{\nu}\\
&=\int_{\mathbb{R}^{m}}\int_{\mathbb{R}^{m}}(\tilde{f}(n_{y}),(T_{j}({\bar{\nu}}))(y-x)\tilde{h}(n_{x}))
\operatorname{d\!} x\operatorname{d\!} y\\
&=\int_{\mathbb{R}^{m}}\int_{\mathbb{R}^{m}}((T_{j}(\nu))(x-y)\tilde{f}(n_{y}),
\tilde{h}(n_{x}))\operatorname{d\!} y\operatorname{d\!} x.
\end{align*}
By holomorphicity, this holds for all $\nu\in\mathfrak{a}^{\ast}_{\mathbb{C}}$.
\end{proof}
\subsection{Fourier transformed picture}\langlebel{SS:Fourier2}
For $f\in I_{j}(\nu)$, let $f_{N}=f|_{N}$. As in \eqref{Eq:Ff}, define the inverse Fourier transform of
$f_{N}$ as a function (or a distribution) on $\mathfrak{n}^{\ast}$ by
\begin{equation*}
\widehat{f_{N}}(\xi)=\mathcal{F}(f_{N})(\xi)
=(2\pi)^{-\mathfrak{a}c{m}{2}}\int_{\mathbb{R}^{m}}e^{\mathbf{i}(\xi,x)}f(n_{x})\operatorname{d\!} x.
\end{equation*}
Since $f_{N}$ is a tempered distribution, $\widehat{f_{N}}$ is a tempered distribution as well. The action of
$P_2$ on the Fourier transformed picture is defined as $p\widehat{f_{N}}=\widehat{(p f)_{N}}$ for $p\in P_2$.
The $P_2$-actions on $I_j(\nu)$ and $\mathcal{F}I_j(\nu)$ are given as \eqref{Eq:P-action} and \eqref{Eq:P-action2},
respectively.
\if 0
\begin{lemma}\langlebel{L:action-P2}
The group $P_{2}=M_{2}AN$ acts on the noncompact picture of $I_{j}(\nu)$ as follows:
for $f\in I_{j}(\nu)$ and $n\in N$,
\begin{equation}\langlebel{Eq:nf3}(n'f)(n)=f(n'^{-1}n)\ (\forall n'\in N);\end{equation}
\begin{equation}\langlebel{Eq:af3}(af)(n)=e^{(\nu-\rho')\log a}f(\operatorname{A}d(a^{-1})\bar{n})\ (\forall a\in A);\end{equation} \begin{equation}\langlebel{Eq:mf3}(mf)(n)=\sigma_j(m)f(m^{-1}nm)\ (\forall m\in M_{2}).\end{equation}
\end{lemma}
\begin{lemma}\langlebel{L:action-P-Fourier2}
The group $P_{2}=M_{2}AN$ acts on the Fourier transformed picture of $I_{\operatorname{d\!}elta,\epsilon}(\mu,\nu)$ as follows:
for any $f\in I_{j}(\nu)$ and any $\xi\in\mathfrak{n}^{\ast}$, \begin{equation}\langlebel{Eq:nf4}
(n_{x}\widehat{f_{N}})(\xi)=e^{2\pi\mathbf{i}(\xi,x)}\widehat{f_{N}}(\xi)\ (\forall n_{x}\in N);\end{equation} \begin{equation}\langlebel{Eq:af4}(a\widehat{f_{N}})(\xi)=e^{(\nu+\rho')\log a}\widehat{f_{N}}(\operatorname{A}d^{\ast}(a^{-1})\xi)
\ (\forall a\in A);\end{equation} \begin{equation}\langlebel{Eq:mf4}(m\widehat{f_{N}})(\xi)=\sigma_j(m)\widehat{f_{N}}
(\operatorname{A}d^{\ast}(m^{-1})\xi)\ (\forall m\in M_{2}).\end{equation}
\end{lemma}
\fi
When $\operatorname{Re}\alpha<m$, the function $|x|^{-\alpha}$ on $\mathbb{R}^{m}$ is locally $L^1$
and is a tempered distribution.
The lemma below follows from \cite[Chapter I, \S3.9]{Gelfand-Shilov}
and it implies Lemma \ref{L:continuation}.
\begin{lemma}\langlebel{L:Riesz}
The distribution $|x|^{-\alpha}$ on $\mathbb{R}^{m}$, originally defined when $\operatorname{Re}\alpha<m$, admits a unique meromorphic extension to all $\alpha\in\mathbb{C}$ as tempered distributions.
Moreover, the extension has only simple poles,
which are at $\alpha=m+2k$ with $k\in\mathbb{Z}_{\geq 0}$.
The distribution $\Gamma(\frac{m-\alpha}{2})^{-1} |x|^{-\alpha}$ admits a holomorphic continuation to the whole complex plane and is a tempered distribution
for every $\alpha\in\mathbb{C}$.
\end{lemma}
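Although Lemma~\ref{L:Riesz} is standard, we briefly indicate, for orientation, where the poles come from. For a Schwartz function $\varphi$ on $\mathbb{R}^{m}$, polar coordinates give
\[
\int_{|x|\leq 1}|x|^{-\alpha}\varphi(x)\operatorname{d\!} x
=\int_{0}^{1}r^{m-1-\alpha}\Phi(r)\operatorname{d\!} r,
\qquad
\Phi(r)=\int_{S^{m-1}}\varphi(r\omega)\operatorname{d\!}\omega,
\]
and $\Phi$ extends to a smooth even function of $r$. Expanding $\Phi$ at $r=0$ in even powers and using $\int_{0}^{1}r^{m-1-\alpha+2k}\operatorname{d\!} r=\frac{1}{m+2k-\alpha}$, one sees that the meromorphic continuation in $\alpha$ can have (at most simple) poles only at $\alpha=m+2k$ with $k\in\mathbb{Z}_{\geq 0}$. These are exactly the poles of $\Gamma(\frac{m-\alpha}{2})$, which is consistent with the last assertion of the lemma.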
It follows from \cite[Chapter II, \S 3.3]{Gelfand-Shilov} that
\begin{align}\label{Eq:Riesz1}
\mathcal{F}|x|^{-\alpha}=d_{\alpha}|\xi|^{-(m-\alpha)},
\text{ where }
d_{\alpha}=2^{\frac{m}{2}-\alpha}\frac{\Gamma(\frac{m-\alpha}{2})}{\Gamma(\frac{\alpha}{2})}.
\end{align}
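As a quick consistency check on the constant $d_{\alpha}$: with the normalization used here, $\mathcal{F}\circ\mathcal{F}$ sends $f(x)$ to $f(-x)$, and $|x|^{-\alpha}$ is even, so applying \eqref{Eq:Riesz1} twice forces $d_{\alpha}d_{m-\alpha}=1$ for $0<\operatorname{Re}\alpha<m$. Indeed,
\[
d_{\alpha}d_{m-\alpha}
=2^{\frac{m}{2}-\alpha}\frac{\Gamma(\frac{m-\alpha}{2})}{\Gamma(\frac{\alpha}{2})}
\cdot 2^{\frac{m}{2}-(m-\alpha)}\frac{\Gamma(\frac{\alpha}{2})}{\Gamma(\frac{m-\alpha}{2})}=1.
\]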
From \eqref{Eq:Mult-Differ} and \eqref{Eq:Riesz1}, one verifies the following two equalities.
For $1\leq k,l \leq m$ with $k\neq l$,
\begin{align}\langlebel{Eq:Riesz2}
&\mathcal{F}(|x|^{-\alpha-2}x_{k}x_{l})=
-d_{\alpha+2}(m-\alpha-2)
(m-\alpha)\xi_{k}\xi_{l}|\xi|^{-(m-\alpha)-2}, \\ \nonumber
&\mathcal{F}(|x|^{-\alpha-2}x_{k}^2)
=d_{\alpha+2}(m-\alpha-2)
(|\xi|^{2}-(m-\alpha)\xi_{k}^{2})|\xi|^{-(m-\alpha)-2}.
\end{align}
In Lemma \ref{L:Riesz3} below, we give a formula for the Fourier transformed counterpart of the
intertwining kernel $T_{j}(\nu)$. The formula \eqref{Eq:fourier-T} is crucial for us: it enables us to
establish algebraic and analytic properties of the Fourier transformed picture of the intertwining image, and to
construct $L^2$-models for irreducible unitary representations with infinitesimal character $\rho$ and
for some complementary series $I(\mu_{j},\nu)$ of $G_{2}=\operatorname{O}(m+1,1)$ when $0<\nu(H_0)<\frac{m}{2}-j$.
\begin{lemma}\label{L:Riesz3}
Let $0\leq j\leq n-1$. Then
\begin{equation}
\label{Eq:fourier-T}
\mathcal{F}T_{j}(\nu)
=\frac{2^{(2\nu-\rho')(H_0)}|\xi|^{-2\nu(H_0)}}{\Gamma(1+(\rho'-\nu)(H_0))}
\big(\rho'(H_0)-j-\nu(H_0)\sigma_j(r_{\xi})\big).
\end{equation}
\end{lemma}
\begin{proof}
Choose an orthonormal basis $\{v_1,\operatorname{d\!}ots,v_{m}\}$ of $\mathbb{C}^{m}$.
The exterior product $\bigwedge^{j}\mathbb{C}^{m}$ has a basis
$v_{I}=v_{i_{1}}\wedge\cdots\wedge v_{i_{j}}$,
where $I=\{i_{1},\operatorname{d\!}ots,i_{j}\}$ with $1\leq i_{1}<\cdots<i_{j}\leq m$.
By this,
\begin{align*}
&\quad\sigma_j(r_{x})v_{I} \\
&=r_{x}(v_{i_1})\wedge \cdots \wedge r_{x}(v_{i_{j}})\\
&= (v_{i_{1}}-2|x|^{-2}(v_{i_{1}},x)x) \wedge\cdots
\wedge (v_{i_{j}}-2|x|^{-2}(v_{i_{j}},x)x)\\
&=v_{I}+\sum_{k=1}^{j}2(-1)^{k}|x|^{-2} (v_{i_{k}},x)
x \wedge v_{i_{1}}\wedge\cdots\wedge v_{i_{k-1}}\wedge v_{i_{k+1}}\cdots\wedge v_{i_{j}}\\
&=v_{I}+
\sum_{k=1}^{j}\sum_{l=1}^{m}2(-1)^{k}|x|^{-2} (v_{i_{k}},x)(v_{l},x)
v_{l} \wedge v_{i_{1}} \wedge \cdots \wedge v_{i_{k-1}} \wedge v_{i_{k+1}} \cdots \wedge
v_{i_{j}}\\
&=\sum_{J}c_{I,J}(x)v_{J},
\end{align*}
where $J$ runs over the subsets $J\subset \{1,\dots,m\}$ with $|J|=j$, and
\[c_{I,J}(x)=
\begin{cases}
2(-1)^{k+a(I,k,l)}|x|^{-2} (v_{i_{k}},x)(v_{l},x)
&\text{if $J=(I-\{i_{k}\})\sqcup\{l\}\neq I$},\\
|x|^{-2}\Bigl(\sum_{l'\neq i_{k}\,(1\leq k\leq j)}(v_{l'},x)^{2}-\sum_{k=1}^{j}(v_{i_{k}},x)^{2}\Bigr)
&\text{if $I=J$},\\
0&\text{if $|I\cap J|<j-1$}.
\end{cases}\]
Here, $a(I,k,l)$ is the number of indices $k'\in\{1,\dots,k-1,k+1,\dots,j\}$ such that $i_{k'}<l$.
\if 0
\begin{align*}
&c_{I,J}(x)=2 (-1)^{k+a(I,k,l)}|x|^{-2} (v_{i_{k}},x)(v_{l},x)
\text{ if $J=(I-\{i_{k}\})\sqcup\{l\}\neq I$},\\
&c_{I,J}(x)=|x|^{-2}\operatorname{B}igl(\sum_{\substack{l'\neq i_{k}\\ (1\leq \forall k\leq j)}}(v_{l'},x)^{2}
-\sum_{k=1}^{j}(v_{i_{k}},x)^{2}\operatorname{B}igr) \text{ if $I=J$},\\
&c_{I,J}(x)=0 \text{ if $|I\cap J|<j-1$}.
\end{align*}
\begin{align*}
c_{I,J}(x)=
\begin{cases}
0 & \text{ if $|I\cap J|<j-2$};\\
(-1)^{k+a(I,k,l)}\mathfrak{a}c{2(v_{i_{k}},x)(v_{l},x)}{(x,x)} & \text{ if $J=(I-\{i_{k}\})\sqcup\{l\}, l\neq i_{k}$};\\
\mathfrak{a}c{\sum_{l'\neq i_{k}\ (1\leq k\leq j-1)}(v_{l'},x)^{2}
-\sum_{k=1}^{j-1}(v_{i_{k}},x)^{2}}{(x,x)}
& \text{ if $I=J$},
\end{cases}
\end{align*}
\fi
Put $\alpha=2(\rho'-\nu)(H_0)$. Using the two formulas \eqref{Eq:Riesz2} and the above expression of $\sigma_j(r_{x})v_{I}$,
one finds that the inverse Fourier transform of
\[(T_{j}(\nu))(x)=\frac{|x|^{-2(\rho'-\nu)(H_{0})}}{\Gamma(\nu(H_0))}\sigma_j(r_{x})\]
is equal to
\[d_{\alpha+2}\frac{(m-\alpha-2)|\xi|^{-(m-\alpha)}}{\Gamma(\nu(H_0))}
(m-2j-(m-\alpha)\sigma_j(r_{\xi})).\]
Note that
\begin{align*}
m-\alpha=2\nu(H_0),\quad
\frac{d_{\alpha+2}}{d_{\alpha}}
=\frac{1}{\alpha(m-\alpha-2)}, \quad
\frac{d_{\alpha}}{\Gamma(\nu(H_0))}=\frac{2^{\frac{m}{2}-\alpha}}{\Gamma(\frac{\alpha}{2})}.
\end{align*}
Multiplying these together, we find that $\mathcal{F}T_{j}(\nu)$ is equal to
\[\frac{2^{(2\nu-\rho')(H_0)}|\xi|^{-2\nu(H_0)}}{\Gamma(1+(\rho'-\nu)(H_0))}
\bigl(\rho'(H_0)-j-\nu(H_0)\sigma_j(r_{\xi})\bigr).
\qedhere\]
\end{proof}
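As a check of \eqref{Eq:fourier-T}, consider $j=0$, where $\sigma_{0}$ is the trivial representation. With $\alpha=2(\rho'-\nu)(H_{0})$ and $\rho'(H_{0})=\frac{m}{2}$, formula \eqref{Eq:Riesz1} gives directly
\[
\mathcal{F}T_{0}(\nu)
=\frac{d_{\alpha}}{\Gamma(\nu(H_{0}))}|\xi|^{-2\nu(H_{0})}
=\frac{2^{(2\nu-\rho')(H_{0})}}{\Gamma((\rho'-\nu)(H_{0}))}|\xi|^{-2\nu(H_{0})}
=\frac{2^{(2\nu-\rho')(H_{0})}|\xi|^{-2\nu(H_{0})}}{\Gamma(1+(\rho'-\nu)(H_{0}))}\,(\rho'-\nu)(H_{0}),
\]
which agrees with \eqref{Eq:fourier-T} for $j=0$.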
After writing a draft of this paper,
the authors noticed that Lemma~\ref{L:Riesz3} was proved in a more general form
in \cite[Theorem 3.2 and Remark 4.9]{Fischmann-Orsted}.
The Knapp-Stein intertwining operators and $L^2$ inner products for complementary series
in the Fourier transformed picture
were previously studied in \cite{Speh-Venkataramana}.
The authors thank Professor {\O}rsted for kindly showing references.
\if 0
Put \[\nu_{j}=\operatorname{B}igl(\mathfrak{a}c{m}{2}-j+1\operatorname{B}igr)\langlembda_{0}.\]
The following formulas directly follow from Lemma \ref{L:Riesz3}:
\begin{align}
\langlebel{Eq:intertwining-T1}
&\mathcal{F}T_{j}(\nu_j)
=\mathfrak{a}c{2^{\mathfrak{a}c{m}{2}-2j+2}|\xi|^{-m+2j-2}}{\operatorname{G}amma(j)}
\operatorname{B}igl(\mathfrak{a}c{m}{2}-j+1\operatorname{B}igr)(1-\sigma_j(r_{\xi}))\text{ for $2\leq j\leq n$},\\
\nonumber
&\mathcal{F}T_{j}(-\nu_{j})
=\mathfrak{a}c{2^{-\mathfrak{a}c{3m}{2}+2j-2}|\xi|^{m-2j+2}}{\operatorname{G}amma(m-j+2)}
\operatorname{B}igl(\mathfrak{a}c{m}{2}-j+1\operatorname{B}igr)(1+\sigma_j(r_{\xi}))\text{ for $1\leq j\leq n$}.
\end{align}
\fi
We need some general facts about the convolution and the Fourier transform.
\begin{fact}\langlebel{L:Fourier-convolution1}
\begin{enumerate}
\item Let $1\leq p,q,r\leq\infty$ be such that $\frac{1}{r}=\frac{1}{p}+\frac{1}{q}$.
For any complex valued functions
$u_{1}\in L^{p}(\mathbb{R}^{m})$ and $u_{2}\in L^{q}(\mathbb{R}^{m})$, one has
$u_{1}u_{2}\in L^{r}(\mathbb{R}^{m})$ and
\begin{equation*}
\| u_{1}u_{2}\|_{r}
\leq \|u_{1}\|_{p}\|u_{2}\|_{q}.
\end{equation*}
\item Let $1\leq p,q,r\leq\infty$ be such that $\frac{1}{r}=\frac{1}{p}+\frac{1}{q}-1$.
For any complex valued functions $u_{1}\in L^{p}(\mathbb{R}^{m})$
and $u_{2}\in L^{q}(\mathbb{R}^{m})$, one has
$u_{1}\ast u_{2}\in L^{r}(\mathbb{R}^{m})$ and
\begin{equation*}
\|u_{1}\ast u_{2}\|_{r}\leq\|u_{1}\|_{p}
\|u_{2}\|_{q}.
\end{equation*}
\item Let $1\leq p\leq 2$ and $\frac{1}{q}=1-\frac{1}{p}$. For any complex valued function
$u\in L^{p}(\mathbb{R}^{m})$, one has $\widehat{u}\in L^{q}(\mathbb{R}^{m})$ and
\begin{equation*}
\|\widehat{u}\|_{q}\leq(2\pi)^{\frac{m}{2}-\frac{m}{p}}\|u\|_{p}.
\end{equation*}
\end{enumerate}
\end{fact}
\begin{proof}
(1) is the classical H\"older's inequality.
(2) is \cite[Corollary 4.5.2]{Hormander} and (3) is \cite[Theorem 7.1.13]{Hormander}.
\end{proof}
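We note that, with the normalization of the Fourier transform used here (kernel $(2\pi)^{-\frac{m}{2}}e^{\mathbf{i}(\xi,x)}$), the constant in (3) interpolates the two extreme cases: for $p=2$ it equals $(2\pi)^{0}=1$, which is Plancherel's theorem, while for $p=1$ it equals $(2\pi)^{-\frac{m}{2}}$, which follows directly from the definition, since
\[
|\widehat{u}(\xi)|\leq(2\pi)^{-\frac{m}{2}}\int_{\mathbb{R}^{m}}|u(x)|\operatorname{d\!} x
=(2\pi)^{-\frac{m}{2}}\|u\|_{1}.
\]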
The following is a version of the convolution theorem.
\begin{lemma}\langlebel{L:Fourier-convolution2}
Let $p_{1},p_{2}\geq 1$ be such that
$\frac{1}{p_{1}}+\frac{1}{p_{2}}\geq\frac{3}{2}$.
For any complex valued functions $u_{1}\in L^{p_{1}}(\mathbb{R}^{m})$
and $u_{2}\in L^{p_{2}}(\mathbb{R}^{m})$, one has
\begin{equation}\label{Eq:Tconvolution}
\widehat{u_{1}\ast u_{2}}=(2\pi)^{\frac{m}{2}}\widehat{u_{1}}\widehat{u_{2}}.
\end{equation}
\end{lemma}
\begin{proof}
As $\frac{1}{p_{1}}+\frac{1}{p_{2}}\geq\frac{3}{2}$ and $p_{1},p_{2}\geq 1$, we have $1\leq p_{1},p_{2}\leq 2$.
Let $p,q_1,q_2$ be given by \[\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_2}-1,\quad\frac{1}{q_{1}}=1-\frac{1}{p_{1}},
\quad\frac{1}{q_2}=1-\frac{1}{p_{2}}.\] Then $1\leq p\leq 2$ and $q_{1},q_{2}\geq 2$. Let $q\geq 2$ be given by
\[\frac{1}{q}=1-\frac{1}{p}=\frac{1}{q_{1}}+\frac{1}{q_2}.\] By Fact~\ref{L:Fourier-convolution1},
\[u_{1}\ast u_{2}\in L^{p},\quad
\widehat{u_{1}}\in L^{q_{1}},\quad \widehat{u_{2}}\in L^{q_{2}}.\]
Again by Fact~\ref{L:Fourier-convolution1},
\[\widehat{u_{1}\ast u_{2}}\in L^{q}
\text{ and }
\widehat{u_{1}}\widehat{u_{2}}\in L^{q}.\]
Put
\[\phi(u_1,u_2)=\widehat{u_{1}\ast u_{2}}-(2\pi)^{\frac{m}{2}}\widehat{u_{1}}\widehat{u_{2}}.\]
Then $\phi$ gives a map
$L^{p_{1}}\times L^{p_{2}}\rightarrow L^{q}$, which is bilinear and bounded on both variables. By the classical
convolution theorem, $\phi(u_1,u_{2})=0$ whenever $u_1,u_{2}$ are both Schwartz functions. Since the space of
Schwartz functions is dense in both $L^{p_{1}}$ and $L^{p_{2}}$, we get $\phi(u_1,u_2)=0$ for any
$u_{1}\in L^{p_{1}}$ and $u_{2}\in L^{p_{2}}$. Thus, (\ref{Eq:Tconvolution}) follows.
\end{proof}
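For instance, taking $p_{1}=p_{2}=\frac{4}{3}$ (so that $\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{3}{2}$) in the proof above gives $p=2$, $q_{1}=q_{2}=4$, and $q=2$, so \eqref{Eq:Tconvolution} is then an equality of $L^{2}$ functions; the cases $(p_{1},p_{2})=(1,1)$ and $(2,1)$ are the ones used in the proof of Lemma~\ref{L:Riesz5} below.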
Recall some facts about the (single variable) $K$-Bessel function. See e.g.\ \cite{Watson} for more details.
The $K$-Bessel function $K_{\alpha}(x)$ is a solution to the second order linear ordinary differential
equation \begin{equation*}
x^{2}y''+xy'-(x^{2}+\alpha^{2})y=0
\end{equation*}
that has the asymptotic behavior
\begin{equation}\label{Eq:Bessel2}K_{\alpha}(x)
=\sqrt{\frac{\pi}{2x}}e^{-x}\Bigl(1+O\Bigl(\frac{1}{x}\Bigr)\Bigr)
\quad (x\to +\infty).
\end{equation}
This uniquely determines the holomorphic function $K_{\alpha}(x)$ on $\mathbb{C} - \mathbb{R}_{\leq 0}$.
Note that $K_{\alpha}(x)=K_{-\alpha}(x)$.
When $x\to +0$, $K_{\alpha}(x)$ behaves as
\begin{equation*}
K_{\alpha}(x)=\begin{cases}
\frac{\Gamma(\alpha)}{2}(\frac{x}{2})^{-\alpha}(1+o(1))
&\textrm{ if $\operatorname{Re}\alpha>0$},\\
(\frac{\Gamma(\alpha)}{2}(\frac{x}{2})^{-\alpha}
+\frac{\Gamma(-\alpha)}{2}(\frac{x}{2})^{\alpha})(1+o(1))
&\textrm{ if $\alpha\in \mathbf{i}\mathbb{R} -\{0\}$},\\
-\log(\frac{x}{2})(1+o(1))&\textrm{ if $\alpha=0$}.
\end{cases}
\end{equation*}
Define
\begin{equation*}
\tilde{K}_{\alpha}(x):=\Bigl(\frac{|x|}{2}\Bigr)^{\alpha}K_{\alpha}(|x|)
\quad \text{ for $x\in \mathbb{R}-\{0\}$}.
\end{equation*}
Then as $x\to +0$,
\begin{equation}\label{Eq:Bessel4}
\tilde{K}_{\alpha}(x)
=\begin{cases}
\frac{\Gamma(\alpha)}{2}(1+o(1))
& \textrm{ if $\operatorname{Re}\alpha>0$},\\
(\frac{\Gamma(\alpha)}{2}
+ \frac{\Gamma(-\alpha)}{2}(\frac{x}{2})^{2\alpha})(1+o(1))
& \textrm{ if $\alpha\in \mathbf{i}\mathbb{R} -\{0\}$},\\
-\log(\frac{x}{2})(1+o(1))
& \textrm{ if $\alpha=0$},\\
\frac{\Gamma(-\alpha)}{2}(\frac{x}{2})^{2\alpha}(1+o(1))
& \textrm{ if $\operatorname{Re}\alpha<0$}.
\end{cases}
\end{equation}
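For example, for $\alpha=\frac{1}{2}$ one has the closed form $K_{\frac{1}{2}}(x)=\sqrt{\frac{\pi}{2x}}\,e^{-x}$, so \eqref{Eq:Bessel2} holds with the error term identically zero, and
\[
\tilde{K}_{\frac{1}{2}}(x)=\Bigl(\frac{|x|}{2}\Bigr)^{\frac{1}{2}}K_{\frac{1}{2}}(|x|)=\frac{\sqrt{\pi}}{2}\,e^{-|x|},
\]
which is consistent with \eqref{Eq:Bessel4}, since $\frac{\Gamma(\frac{1}{2})}{2}=\frac{\sqrt{\pi}}{2}$.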
The function $y=\tilde{K}_{\alpha}(x)$ solves the second order linear ODE
\begin{equation*}
\langlebel{Eq:Bessel5}
xy''+(1-2\alpha)y'-xy=0.
\end{equation*}
It follows (\cite[III.71(5)]{Watson}) that
\begin{equation}\label{Eq:KBessel-recursive}
\tilde{K}_{\alpha+1}'(x)=-\frac{x}{2}\tilde{K}_{\alpha}(x),
\end{equation}
since both sides satisfy the same second order linear ODE and have the same asymptotic behavior as $x\to +\infty$.
Then \[\tilde{K}''_{\alpha+1}
=\tilde{K}_{\alpha+1}+\frac{2\alpha+1}{x}\tilde{K}'_{\alpha+1}
=\tilde{K}_{\alpha+1}-\frac{2\alpha+1}{2}\tilde{K}_{\alpha}\]
and hence
\begin{equation*}
\tilde{K}_{\alpha}=\frac{1}{\alpha+\frac{1}{2}}
(\tilde{K}_{\alpha+1}- \tilde{K}''_{\alpha+1}).
\end{equation*}
By this one shows that
\[\frac{\tilde{K}_{\alpha}(x)}{\Gamma(\alpha+\frac{1}{2})}\]
admits a holomorphic continuation to the whole complex plane and gives tempered distributions on $\mathbb{R}$
for any $\alpha\in \mathbb{C}$.
\if 0
One shows that \[\mathfrak{a}c{1}{\operatorname{G}amma(\alpha+\mathfrak{a}c{1}{2})}\tilde{K}_{\alpha}(x)\] admits a holomorphic continuation
to the whole complex plane such that it is a tempered distribution for any $\alpha$.
\fi
By \cite[Chapter II, \S2.5]{Gelfand-Shilov}, for each $\lambda\in \mathbb{C}$,
\begin{equation*}
\mathcal{F}(1+x^2)^{\lambda}
=\frac{\sqrt{2}}{\Gamma(-\lambda)}\tilde{K}_{-\lambda-\frac{1}{2}}(\xi)
\end{equation*}
as tempered distributions on $\mathbb{R}$.
On $\mathbb{R}^{m}$, define the modified $K$-Bessel function by \[\tilde{K}_{\alpha}(x)=
\Bigl(\frac{|x|}{2}\Bigr)^{\alpha} K_{\alpha}(|x|).\] Write
\[E=\sum_{1\leq i\leq m}x_{i}\partial_{x_{i}}\quad\textrm{ and }\quad\Delta=\sum_{1\leq i\leq m}(\partial_{x_{i}})^{2}.\]
By \eqref{Eq:KBessel-recursive} we have \begin{equation}\label{Eq:KBessel-recursive2}
(\partial_{x_{j}}\tilde{K}_{\alpha+1})(x)=-\frac{x_{j}}{2}\tilde{K}_{\alpha}(x).
\end{equation} Then, one shows that \[|x|^{2}\Delta \tilde{K}_{\alpha}+(2-m-2\alpha)E\tilde{K}_{\alpha}-
|x|^{2}\tilde{K}_{\alpha}=0\] and \[E\tilde{K}_{\alpha+1}=-\frac{|x|^{2}}{2}\tilde{K}_{\alpha}.\] By these,
we get \[\tilde{K}_{\alpha}=\frac{1}{\alpha+\frac{m}{2}}(-\Delta\tilde{K}_{\alpha+1}+\tilde{K}_{\alpha+1}).\]
Then, one shows that \[\frac{\tilde{K}_{\alpha}(x)}{\Gamma(\alpha+\frac{m}{2})}\] admits a holomorphic continuation
in $\alpha$ to the whole complex plane and is a tempered distribution on $\mathbb{R}^m$ for each $\alpha$.
\begin{lemma}\label{L:Fourier-KBessel3}
For all $\lambda\in \mathbb{C}$, \[\mathcal{F}(1+|x|^2)^{\lambda}
=\frac{2^{1-\frac{m}{2}}}{\Gamma(-\lambda)}\tilde{K}_{-\lambda-\frac{m}{2}}(|\xi|)\]
as tempered distributions on $\mathbb{R}^m$.
\end{lemma}
\begin{proof}
By the holomorphicity of both sides with respect to $\lambda$,
we may assume that $\operatorname{Re}\lambda<-\frac{m}{2}$.
We calculate the Fourier transform of $(1+|x|^{2})^{\lambda}$ on $\mathbb{R}^{m}$
by reducing it to the case of $m=1$.
Let $m\geq 2$.
Write $\Omega_{m}$ for the volume of the $(m-1)$-dimensional sphere. It is well-known that
\[\Omega_{m}=\frac{2\pi^{\frac{m}{2}}}{\Gamma(\frac{m}{2})}.\]
Letting $x=(x_{1},\sqrt{1+x_{1}^{2}}\, z)$ with $z\in\mathbb{R}^{m-1}$, one gets
\begin{align*}
&\int_{\mathbb{R}^{m}}(1+|x|^2)^{\lambda}e^{\mathbf{i}(\xi,x)}\operatorname{d\!} x\\
&=\int_{\mathbb{R}^{m}}(1+|x|^2)^{\lambda}e^{\mathbf{i}x_{1}|\xi|}\operatorname{d\!} x\\
&=\int_{\mathbb{R}^{m}}(1+x_{1}^{2})^{\lambda+\frac{m-1}{2}}
e^{\mathbf{i}x_{1}|\xi|}(1+|z|^2)^{\lambda}\operatorname{d\!} x_1\operatorname{d\!} z\\
&=\sqrt{2\pi}\,\mathcal{F}(1+x_{1}^{2})^{\lambda+\frac{m-1}{2}}
\times \Omega_{m-1}
\int_{0}^{\infty}(1+r^{2})^{\lambda}r^{m-2}\operatorname{d\!} r\\
&=\frac{2\sqrt{\pi}}{\Gamma(-\lambda-\frac{m-1}{2})}
\tilde{K}_{-\lambda-\frac{m}{2}}(|\xi|)
\times \frac{2\pi^{\frac{m-1}{2}}}
{\Gamma(\frac{m-1}{2})}\frac{\Gamma(\frac{m-1}{2})\Gamma(-\lambda-\frac{m-1}{2})}{2\Gamma(-\lambda)}\\
&=(2\pi)^{\frac{m}{2}}
\frac{2^{1-\frac{m}{2}}}{\Gamma(-\lambda)}\tilde{K}_{-\lambda-\frac{m}{2}}(|\xi|).
\end{align*}
For the first equality, we used a rotation of coordinates
to reduce to the case $\xi=(|\xi|,0,\dots,0)$.
\end{proof}
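For reference, the radial integral $\int_{0}^{\infty}(1+r^{2})^{\lambda}r^{m-2}\operatorname{d\!} r$ appearing in the computation above is a Beta integral: for $\operatorname{Re}\lambda<-\frac{m-1}{2}$, the substitution $t=r^{2}$ gives
\[
\int_{0}^{\infty}(1+r^{2})^{\lambda}r^{m-2}\operatorname{d\!} r
=\frac{1}{2}\int_{0}^{\infty}(1+t)^{\lambda}t^{\frac{m-3}{2}}\operatorname{d\!} t
=\frac{\Gamma(\frac{m-1}{2})\Gamma(-\lambda-\frac{m-1}{2})}{2\Gamma(-\lambda)},
\]
which is the factor used there.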
Now we show properties of $K_{2}$-finite functions in $I(\sigma,\nu)$ (particularly $I_{j}(\nu)$) and their Fourier
transforms.
\begin{lemma}\langlebel{L:f-polynomial}
For any tuple $\alpha=(\alpha_1,\operatorname{d\!}ots,\alpha_{m})$ with $\alpha_{i}\in\mathbb{Z}_{\geq 0}$ and any $K_{2}$-finite
function $f\in I(\sigma,\nu)_{K_{2}}$, each entry of \[(1+|x|^{2})^{(\rho'-\nu)(H_{0})}\partial^{\alpha}f(n_{x})\]
is a polynomial of the functions $(1+|x|^{2})^{-1}$, $x_{i}(1+|x|^{2})^{-1}$, $x_{i}x_{j}(1+|x|^{2})^{-1}$
($1\leq i,j\leq m$). In particular, \[|\partial^{\alpha}f(n_{x})|\leq C(1+|x|^2)^{(\operatorname{Re}\nu-\rho')(H_0)}\]
for a constant $C$ depending on $f$ and $\alpha$.
\end{lemma}
\begin{proof}
By Lemma~\ref{L:Iwasawa}, we have \[f(n_{x})=(1+|x|^{2})^{-(\rho'-\nu)(H_{0})}f(s_{x}),\] where
\[s_{x}=\begin{pmatrix}I_{m}-\frac{2x^{t}x}{1+|x|^2}&-\frac{2x^{t}}{1+|x|^{2}}&0\\
\frac{2x}{1+|x|^2}&\frac{1-|x|^{2}}{1+|x|^{2}}&0\\0&0&1\\
\end{pmatrix}.\]
Since $f$ is a $K_{2}$-finite function, $f(s_{x})$ is a polynomial of its entries. Hence the claim holds when
$\alpha=0$ by Lemma \ref{L:Fourier-KBessel3}. By taking derivatives, the lemma is proved.
\end{proof}
\begin{lemma}\langlebel{L:f-Fourier}
\begin{enumerate}
\item[(1)]For any $\nu\in\mathfrak{a}^{\ast}_{\mathbb{C}}$ and any $K_{2}$-finite function $f\in I(\sigma,\nu)_{K_{2}}$,
$\widehat{f_{N}}(\xi)$ is a smooth function on $\mathfrak{n}^{\ast}-\{0\}$ and it decays fast near infinity in
the sense that $\widehat{f_{N}}(\xi)e^{|\xi|}$ has a polynomial bound as $\xi\rightarrow\infty$.
\item[(2)]If $\operatorname{Re}\nu(H_0)<0$, then $\widehat{f_{N}}(\xi)$ is continuous near $\xi=0$; if $\operatorname{Re}\nu(H_0)=0$, then
$\widehat{f_{N}}(\xi)$ is bounded by a multiple of $\log|\xi|$ near $\xi=0$; if $0<\operatorname{Re}\nu(H_0)<\rho'(H_0)$,
then $\widehat{f_{N}}(\xi)$ is bounded by a multiple of $|\xi|^{-2\operatorname{Re}\nu(H_0)}$ near $\xi=0$.
\item[(3)]If $\operatorname{Re}\nu(H_0)<\rho'(H_0)$, then $\widehat{f_{N}}$ belongs to $L^1(\mathfrak{n}^*)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) By Lemma \ref{L:f-polynomial}, $f(n_{x})$ is of the form \[f(n_{x})=(1+|x|^{2})^{-(\rho'-\nu)(H_{0})}h(x)\]
where $h(x)$ is a polynomial of $(1+|x|^{2})^{-1}$, $x_{i}(1+|x|^{2})^{-1}$, $x_{i}x_{j}(1+|x|^{2})^{-1}$
($1\leq i,j\leq m$). By Lemma \ref{L:Fourier-KBessel3}, the inverse Fourier transform of $(1+|x|^{2})^{\lambda}$
is \[\frac{2^{1-\frac{m}{2}}}{\Gamma(-\lambda)}\tilde{K}_{-\lambda-\frac{m}{2}}(|\xi|).\] Then
$\widehat{f_{N}}(\xi)$ is a linear combination of terms of the form
\[\frac{\partial^{\alpha}\tilde{K}_{-\nu(H_0)+k}(|\xi|)}{\Gamma((\rho'-\nu)(H_0)+k)}\] with $k\geq 0$ and
$|\alpha|\leq 2k$. By the recursive relation \eqref{Eq:KBessel-recursive2} and the smoothness and asymptotic
behavior \eqref{Eq:Bessel2} of the $K$-Bessel function, one sees that $\widehat{f_{N}}(\xi)$ is a smooth function
over $\mathfrak{n}^{\ast}-\{0\}$ and it decays fast near infinity in the sense that $\widehat{f_{N}}(\xi)e^{|\xi|}$
has a polynomial bound as $\xi\rightarrow\infty$.
\noindent (2) By the recursive relation \eqref{Eq:KBessel-recursive2}, each $\partial^{\alpha}\tilde{K}_{-\nu(H_0)+k}(|\xi|)$
($|\alpha|\leq 2k$) is a finite linear combination of terms of the form $\xi^{\beta}\tilde{K}_{-\nu(H_0)+k-t}(|\xi|)$
where $t\geq|\beta|\geq 0$ and $2t-|\beta|=|\alpha|\leq 2k$. When $\operatorname{Re}\nu(H_0)<\rho'(H_0)$, we have
$\operatorname{Re}(-\nu(H_0)+k)>-\mathfrak{a}c{m}{2}$.
Moreover, by \eqref{Eq:Bessel4} one shows that the singularity at 0 of
$\xi^{\beta}\tilde{K}_{-\nu(H_0)+k-t}(|\xi|)$ is not worse than that described in (2).
\noindent (3) By (1) and (2), $\widehat{f_{N}}|_{\mathfrak{n}^*-\{0\}}$ is in $L^1$ when $\operatorname{Re}\nu(H_0)<\rho'(H_0)$. Let $h:=
\widehat{f_{N}}|_{\mathfrak{n}^*-\{0\}}$. To prove (3), we only need to show that $\widehat{f_{N}}=h$ as a
distribution. Since the support of $\widehat{f_{N}}-h$ is contained in $\{0\}$, its Fourier transform
$\mathcal{F}^{-1}(\widehat{f_{N}}-h)$ is a polynomial. Both $\mathcal{F}^{-1}(\widehat{f_{N}})(x)=f_{N}(x)$ and
$\mathcal{F}^{-1}(h)(x)$ tend to 0 as $x\to \infty$. Hence $\mathcal{F}^{-1}(\widehat{f_{N}}-h)=0$ and
$\widehat{f_{N}}=h$.
\end{proof}
\begin{lemma}\langlebel{L:f-L1}
Let $0\leq j\leq n-1$ and $f\in I_{j}(\nu)_{K_{2}}$. \begin{enumerate}
\item[(1)]If $\operatorname{Re}\nu(H_0)<0$, then $f_{N}\in L^{1}$, and $|\xi|^{k}\widehat{f_{N}}$ is a bounded function over
$\mathfrak{n}^{\ast}$ for any $k\geq 0$.
\item[(2)]If $0\leq\operatorname{Re}\nu(H_0)<\rho'(H_0)$, put \[p_{0}= \frac{\rho'(H_0)}{\rho'(H_0)-\operatorname{Re}\nu(H_0)}\geq 1
\textrm{ and } q_0=\frac{\rho'(H_0)}{\operatorname{Re}\nu(H_0)}\geq 1.\] Then, $f_{N}\in L^{p}$ for any $p$ with $p>p_0$ and
$\widehat{f_{N}}\in L^{q}$ for any $q$ with $1\leq q<q_0$.
\end{enumerate}
\end{lemma}
\begin{proof}
If $\operatorname{Re}\nu(H_0)<0$, then $\partial^{\alpha}f(n_{x})\in L^{1}$ for any tuple $\alpha$ by Lemma~\ref{L:f-polynomial}.
Hence, $\xi^{\alpha}\widehat{f_{N}}(\xi)$ is bounded over $\mathfrak{n}^{\ast}$. Therefore, for any $k\geq 0$,
$|\xi|^{k}\widehat{f_{N}}$ is a bounded function over $\mathfrak{n}^{\ast}$. If $0\leq\operatorname{Re}\nu(H_0)<\rho'(H_0)$,
then by Lemma~\ref{L:f-polynomial} one gets $f_{N}\in L^{p}$ for any $p$ with $p>p_0$. By Lemma~\ref{L:f-Fourier},
one sees that $\widehat{f_{N}}\in L^{q}$ for any $q$ with $1\leq q<q_0$.
\end{proof}
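Concretely, the exponent $p_{0}$ in (2) reflects the bound of Lemma~\ref{L:f-polynomial}: since $|f(n_{x})|\leq C(1+|x|^2)^{(\operatorname{Re}\nu-\rho')(H_0)}$ and $2\rho'(H_{0})=m$, the function $|f_{N}|^{p}$ is integrable on $\mathbb{R}^{m}$ as soon as $2p(\rho'-\operatorname{Re}\nu)(H_{0})>m$, that is, as soon as
\[
p>\frac{m}{2(\rho'-\operatorname{Re}\nu)(H_{0})}=\frac{\rho'(H_{0})}{(\rho'-\operatorname{Re}\nu)(H_{0})}=p_{0}.
\]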
\begin{lemma}\langlebel{L:Riesz5}
Let $0\leq j\leq n-1$ and $f\in I_{j}(-\nu)_{K_{2}}$. If $-\rho'(H_0)<\operatorname{Re}\nu(H_0)<\rho'(H_0)$, then
\begin{equation}\langlebel{Eq:intertwining-Fourier}
\mathcal{F}((J_{j}(\nu)f)_{N})=(2\pi)^{\mathfrak{a}c{m}{2}}\mathcal{F}(T_{j}(\nu))\cdot \mathcal{F}(f_{N}).
\end{equation}
\end{lemma}
\begin{proof}
Recall that the operator $J_{j}(\nu)$ is defined as the convolution with $T_j(\nu)$ when $\operatorname{Re}\nu(H_0)>0$ and
it extends holomorphically to all $\nu\in \mathfrak{a}^{\ast}_{\mathbb{C}}$. Then $J_{j}(\nu)f\in I_{j}(\nu)_{K_{2}}$ and when
$-\rho'(H_0)<\operatorname{Re}\nu(H_0)<\rho'(H_0)$, $\mathcal{F}((J_{j}(\nu)f)_{N})$ is in $L^1$ by Lemma~\ref{L:f-Fourier}.
By Lemma~\ref{L:Riesz3} and Lemma~\ref{L:f-Fourier}, the right hand side of \eqref{Eq:intertwining-Fourier} is
$L^1$ near $0$, and decays rapidly when $|\xi|\to \infty$. Hence it is in $L^1(\mathfrak{n}^*)$.
Let $f_{0}: K_{2}\rightarrow V_{M,\mu_{j}}$ be a fixed $K_{2}$-finite function satisfying
\[f_{0}(km_0)=\sigma_j(m_0)^{-1}f_{0}(k),\quad (k,m_0)\in K_{2}\times M_{2}.\]
Let $f=f_{\nu}$ be defined by
\[f_{\nu}(ka\bar{n})=e^{-(\nu-\rho')(H_{0})} f_{0}(k),\quad
(k,a,\bar{n})\in K_{2}\times A\times\bar{N}.\]
Write
\[T_{\nu}:=T_{j}(\nu),\quad g_{\nu}:=T_{\nu}\ast (f_\nu)_{N} = J_{j}(\nu)(f_\nu)_{N}.\]
It is enough to show
\begin{equation}\langlebel{Eq:convolution2}
\mathcal{F}(g_{\nu})
=(2\pi)^{\mathfrak{a}c{m}{2}}\mathcal{F}(T_{\nu})\cdot \mathcal{F}((f_{\nu})_N).
\end{equation}
We first show (\ref{Eq:convolution2}) when $0<\nu(H_0)<\frac{1}{2}\rho'(H_0)$. Let
\[T_{\nu}^{+}=T_{\nu}\chi_{|x|\leq 1},\ T_{\nu}^{-}=T_{\nu}\chi_{|x|>1}, \text{ and }
g_{\nu}^{+}=T_{\nu}^{+}\ast f_{\nu},\ g_{\nu}^{-}=
T_{\nu}^{-}\ast f_{\nu},\]
where $\chi_{|x|\leq 1}$ and $\chi_{|x|>1}$
are characteristic functions of ${|x|\leq 1}$ and ${|x|> 1}$, respectively.
Then $T_{\nu}^{+}\in L^{1}$ and $T_{\nu}^{-}\in L^2$.
By Lemma~\ref{L:f-L1}, $(f_{\nu})_{N}\in L^{1}$.
Hence Lemma~\ref{L:Fourier-convolution2} implies that
\begin{align*}
&\mathcal{F}(g_{\nu}^{+})
=(2\pi)^{\mathfrak{a}c{m}{2}}\mathcal{F}(T_{\nu}^{+})\cdot \mathcal{F}((f_{\nu})_N)
\text{ and }\\
&\mathcal{F}(g_{\nu}^{-})
=(2\pi)^{\mathfrak{a}c{m}{2}}\mathcal{F}(T_{\nu}^{-})\cdot \mathcal{F}((f_{\nu})_N).
\end{align*}
Taking the sum, one gets \eqref{Eq:convolution2}.
\if 0
By Lemma~\ref{L:f-Fourier},
$\mathcal{F}(f_{\nu})$ and $\mathcal{F}(g_{\nu})$ are smooth over $\mathfrak{n}^{\ast}-\{0\}$. By Lemma
\ref{L:Riesz3}, $\mathcal{F}(T_{\nu})$ is smooth over $\mathfrak{n}^{\ast}-\{0\}$. Hence,
\[\mathcal{F}(g_{\nu})=\mathcal{F}(f_{\nu})\mathcal{F}(T_{\nu})\] over $\mathfrak{n}^{\ast}-\{0\}$.
\fi
When $-\rho'(H_0)<\operatorname{Re}\nu(H_0)<\rho'(H_0)$,
both sides of \eqref{Eq:convolution2} are smooth on $\mathfrak{n}^{\ast}-\{0\}$
and they are in $L^1(\mathfrak{n}^*)$.
Moreover, they are holomorphic in $\nu$.
Therefore, \eqref{Eq:convolution2} holds
for all $\nu$ with $-\rho'(H_0)<\operatorname{Re}\nu(H_0)<\rho'(H_0)$.
\if 0
by Lemma \ref{L:f-Fourier}, $\mathcal{F}(f_{\nu})$ and $\mathcal{F}(g_{\nu})$
are smooth over $\mathfrak{n}^{\ast}-\{0\}$. By Lemma \ref{L:Riesz3}, $\mathcal{F}(T_{\nu})$ is smooth over
$\mathfrak{n}^{\ast}-\{0\}$. Since $\mathcal{F}(f_{\nu})$, $\mathcal{F}(g_{\nu})$, $\mathcal{F}(T_{\nu})$ all vary
holomorphically with respect to $\nu$, it follows that \[\mathcal{F}(g_{\nu})=\mathcal{F}(f_{\nu})\mathcal{F}(T_{\nu})\]
holds true over $\mathfrak{n}^{\ast}-\{0\}$.
\fi
\end{proof}
The following proposition follows from Lemma \ref{L:Riesz3} and Lemma \ref{L:Riesz5}.
\begin{proposition}\langlebel{P:Riesz5}
Let $0\leq j\leq n-1$ and $-\rho'(H_0)<\operatorname{Re}\nu(H_0)<\rho'(H_0)$. Then for any function $f\in I_{j}(-\nu)_{K_2}$,
\begin{equation*}
\mathcal{F}((J_{j}(\nu)f)_{N})(\xi)=\frac{\pi^{\frac{m}{2}}|\frac{1}{2}\xi|^{-2\nu(H_0)}}{\Gamma(\frac{m}{2}-
\nu(H_0)+1)}\Bigl(\frac{m}{2}-j-\nu(H_0)\sigma_j(r_{\xi})\Bigr)\mathcal{F}(f_{N})(\xi).
\end{equation*}
\end{proposition}
Put \[\nu_{j}=\Bigl(\frac{m}{2}-j\Bigr)\lambda_{0}.\] Then Proposition~\ref{P:Riesz5} gives
\begin{proposition}\langlebel{P:Riesz4}
Let $1\leq j\leq n-1$. Then for any function $f\in I_{j}(-\nu_{j})_{K_2}$, \begin{equation*}
\mathcal{F}((J_{j}(\nu_{j})f)_N)(\xi)=\frac{\pi^{\frac{m}{2}}|\frac{1}{2}\xi|^{-m+2j}}{\Gamma(j+1)}
\Bigl(\frac{m}{2}-j\Bigr)(1-\sigma_j(r_{\xi}))\mathcal{F}(f_N)(\xi);\end{equation*}
and for any function $f\in I_{j}(\nu_{j})_{K_2}$, \begin{equation*}
\mathcal{F}((J_{j}(-\nu_{j})f)_N)(\xi)=\frac{\pi^{\frac{m}{2}}|\frac{1}{2}\xi|^{m-2j}}{\Gamma(m-j+1)}
\Bigl(\frac{m}{2}-j\Bigr)(1+\sigma_j(r_{\xi}))\mathcal{F}(f_N)(\xi).
\end{equation*}
\end{proposition}
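Indeed, Proposition~\ref{P:Riesz4} is Proposition~\ref{P:Riesz5} with $\nu=\pm\nu_{j}$ (note that $|\nu_{j}(H_{0})|=\frac{m}{2}-j<\rho'(H_{0})$ for $1\leq j\leq n-1$, so Proposition~\ref{P:Riesz5} applies): for $\nu=\nu_{j}$ one has $\nu(H_{0})=\frac{m}{2}-j$, hence
\[
\Bigl|\tfrac{1}{2}\xi\Bigr|^{-2\nu(H_{0})}=\Bigl|\tfrac{1}{2}\xi\Bigr|^{-m+2j},\quad
\Gamma\Bigl(\tfrac{m}{2}-\nu(H_{0})+1\Bigr)=\Gamma(j+1),\quad
\tfrac{m}{2}-j-\nu(H_{0})\sigma_j(r_{\xi})=\Bigl(\tfrac{m}{2}-j\Bigr)(1-\sigma_j(r_{\xi})),
\]
and the same substitution with $\nu=-\nu_{j}$ gives the second formula.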
\begin{lemma}\langlebel{L:intertwining2}
Let $1\leq j\leq n-1$. \begin{enumerate}
\item[(1)]For any function $f\in I_{j}(-\nu_{j})_{K_2}$, \[(1+\sigma_j(r_{\xi}))\mathcal{F}((J_{j}(\nu_{j})f)_N)
=0;\] if $J_{j}(\nu_{j})f=0$, then $(1-\sigma_j(r_{\xi}))(\mathcal{F}(f_N))=0$.
\item[(2)]For any function $f\in I_{j}(\nu_{j})_{K_2}$, \[(1-\sigma_j(r_{\xi}))\mathcal{F}((J_{j}(-\nu_{j})f)_N)
=0;\] if $J_{j}(-\nu_{j})f=0$, then $(1+\sigma_j(r_{\xi}))\mathcal{F}(f_N)=0$.
\end{enumerate}
\end{lemma}
\begin{proof}
By Proposition \ref{P:Riesz4}, one gets the four equalities on $\mathfrak{n}^{\ast}-\{0\}$. By Lemma \ref{L:f-Fourier},
the Fourier transformed functions appearing in the four equalities are always in $L^1$. Hence, the equalities hold
as tempered distributions.
\end{proof}
\if 0
\begin{proposition}\langlebel{L:L-L'}
For each $1\leq j\leq n$,
the image of $J_{j}(\nu_{j})$
equals the kernel of $J_{j}(-\nu_{j})$, and the image of
$J_{j}(-\nu_{j})$ equals the kernel of $J_{j}(\nu_{j})$.
\end{proposition}
\begin{proof}
This follows from the composition series of the induced modules $L_{j}(\pm{\nu_{j}})$, and is well-known. For example, it
is implied by \cite[Theorem 5.2.1]{Collingwood}. When $2\leq j\leq n$, we give a proof using Proposition \ref{P:Riesz4}.
We know that $J_{j}(\nu)$ is given by the convolution against $T_{j}(\nu)$. In the Fourier transformed picture, by
Proposition \ref{P:Riesz4} one sees that the convolution becomes multiplication. Proposition \ref{P:Riesz4} also indicates
that the kernel of $\mathcal{F}(T_{j}(\pm{\nu_{j}}))$ is equal to the image of $\mathcal{F}(T_{j}(\mp{\nu_{j}}))$. Hence,
the conclusion follows.
\end{proof}
\fi
For two functions $f_1,f_2\in I_{j}(\nu)_{K_2}$ which lie in the image of $J_{j}(\nu)$, take a function
$\tilde{f}_{1}\in I_{j}(-\nu)_{K_2}$ such that $J_{j}(\nu)\tilde{f}_{1}=f_{1}$. Define
\begin{equation*}
(\mathcal{F}((\tilde{f}_{1})_{N})|\mathcal{F}((f_{2})_{N}))=
\int_{\mathfrak{n}^{\ast}}(\mathcal{F}(\tilde{f}_{1})(\xi),\mathcal{F}(f_{2})(\xi))\operatorname{d\!}\xi.
\end{equation*}
\begin{lemma}\label{L:innerProduct2}
Let $1\leq j\leq n-1$ and $-\rho'(H_0)<\nu(H_0)<\rho'(H_0)$. For any two functions $f_1,f_2\in I_{j}(\nu)_{K_{2}}$
which lie in the image of $J_{j}(\nu)$, we have
\begin{equation}\label{Eq:innerProduct3}
(\mathcal{F}((\tilde{f}_{1})_{N})|\mathcal{F}((f_{2})_{N}))=(f_{1},f_{2}).
\end{equation}
Here, the right hand side is defined in \eqref{Eq:Hermitian1}.
\end{lemma}
\begin{proof}
When $-\frac{1}{2}\rho'(H_0)<\nu(H_0)<\frac{1}{2}\rho'(H_0)$,
both $\tilde{f}_1$ and $f_{2}$ are in $L^2$. By Parseval's theorem, one gets
\[(\mathcal{F}((\tilde{f}_{1})_{N})|\mathcal{F}((f_{2})_{N}))
=((\tilde{f}_{1})_{N}|(f_{2})_{N})=(f_{1},f_{2}).\]
When $0<\nu(H_0)<\rho'(H_0)$, write $h_2=\mathcal{F}((f_2)_{N})$
so that $(f_{2})_{N}$ is the Fourier transform of $h_2$.
By Lemma \ref{L:f-L1}, $(\tilde{f}_{1})_{N}\in L^{1}$, $h_{2}\in L^{1}$.
Then
\begin{align*}
(\mathcal{F}((\tilde{f}_{1})_{N})|h_{2})
&=(2\pi)^{\frac{m}{2}} \int_{\mathbb{R}^m}\int_{\mathbb{R}^m}
(\tilde{f}_1)_N(x) e^{\mathbf{i}x\xi} \overline{h_2(\xi)} \operatorname{d\!} x \operatorname{d\!}\xi \\
&=((\tilde{f}_{1})_{N}|(f_{2})_{N})=(f_{1},f_{2}).
\end{align*}
\if 0
Put \begin{align*}&&\phi(h_1,(F_{2})_{N})\\&=&(h_{1}|\mathcal{F}((F_{2})_{N}))-
(\mathcal{F}^{-1}(h_{1})|(F_{2})_{N})\\&=&(\mathcal{F}((f_{1})_{N})|\mathcal{F}((F_{2})_{N}))-
((f_{1})_{N}|(F_{2})_{N}).\end{align*} Viewing $h_1,F_2$ as functions
in $L^{1}$, then $\phi$ is a sesquilinear form and is bounded on both variables. When both $h_1$ and $(F_{2})_{N}$
are Schwartz functions, $\phi(h_1,(F_2)_{N})=0$ by the classical Parseval's theorem. Using Schwartz functions to
approximate $h_1$ and $(F_{2})_{N}$, we get $\phi\equiv 0$. Hence, \[(\mathcal{F}((f_{1})_{N}),\mathcal{F}
((f_{2})_{N}))\!=\!(\mathcal{F}((f_{1})_{N})|\mathcal{F}((F_{2})_{N}))\!=\!((f_{1})_{N}|(F_{2})_{N})=(f_{1},f_{2}).\]
\fi
When $-\rho'(H_0)<\nu(H_0)< 0$, we have
$(f_{2})_{N}\in L^{1}$ and $\mathcal{F}((\tilde{f}_1)_{N})\in L^1$.
Then \eqref{Eq:innerProduct3} is proved in the same way.
\if 0
By Lemma \ref{L:f-L1}, $(f_{1})_{N}\in L^{1}$
and $h_{2}\in L^{1}$. Then, $(F_{2})_{N}=\mathcal{F}^{-1}((h_2)_{N})$. Put\begin{align*}&&\phi((f_1)_{N},h_{2})
\\&=&(\mathcal{F}((f_{1})_{N})|h_{2})-(f_{1}|\mathcal{F}^{-1}(h_{2}))\\&=&(\mathcal{F}(f_{1})|\mathcal{F}(F_{2}))-
((f_{1})_{N}|(\tilde{f}_{2})_{N}).\end{align*}
Then one proves similarly as in the case of $0<\nu(H_0)<\rho'(H_0)$.
\fi
\end{proof}
Similarly to \S\ref{SS:irreducible}, we write $\pi_j$ for the image of $J_{j}(\nu_j)$
and write $\pi'_j$ for the image of $J_{j}(-\nu_j)$.
The following fact is a special case of
Facts~\ref{F:gamma-classification}, \ref{F:unitarizable},
\ref{F:gamma-classification2}, \ref{F:unitarizable2}
(cf.\ \cite{Collingwood}, \cite{Kobayashi-Speh}).
\begin{fact}\label{L:rho-classification}
For each $j$, both $\pi_{j}$ and $\pi'_{j}$ are irreducible representations of $G_{2}$. When $0\leq j\leq n-2$,
$\pi_{j+1}\cong \pi'_{j}$ and they are infinite-dimensional unitarizable representations, and $\pi_{0}$ is the
trivial representation. If $m$ is odd, then $\pi'_{n-1}$ is a discrete series; if $m$ is even, then
$I_{n-1}(\pm{\nu_{n-1}})$ is irreducible and hence $\pi'_{n-1}\cong \pi_{n-1}\cong \pi'_{n-2}$.
\end{fact}
We write $\bar{\pi}_j$ and $\bar{\pi}'_j$ for the Hilbert completion of $\pi_j$ and $\pi'_j$, respectively.
They are irreducible unitary representations of $G_2$.
\if 0
\begin{lemma}\langlebel{L:innerProduct3}
Let $2\leq j\leq n$.
For a function $f\in I_{j}(\nu_{j})$ which lies in the image of $J_{j}(\nu_{j})$, \begin{equation}
\langlebel{Eq:innerProduct5}(\mathcal{F}(f_{N}),\mathcal{F}(f_{N}))\!=\!\int_{\overline{\mathfrak{n}}^{\ast}}
\mathfrak{a}c{\operatorname{G}amma(j)}{2n+1-2j}\mathfrak{a}c{|\xi|^{2\nu_{j}(H_0)}}{\pi^{(\rho'-2\nu_{j})(H_0)}}|\mathcal{F}(f_{N})(\xi)|^{2}\operatorname{d\!}\xi;
\end{equation} for a function $f\in I_{j}(-\nu_{j})$ which lies in the image of $J_{j}(0\nu_{j})$, \begin{equation}
\langlebel{Eq:innerProduct6}(\mathcal{F}(f_{N}),\mathcal{F}(f_{N}))\!=\!\int_{\overline{\mathfrak{n}}^{\ast}}
\mathfrak{a}c{\operatorname{G}amma(2n+1-j)}{2n+1-2j}\mathfrak{a}c{|\xi|^{-2\nu_{j}(H_0)}}{\pi^{(\rho'+2\nu_{j})(H_0)}}|\mathcal{F}(f_{N})(\xi)|^{2}
\operatorname{d\!}\xi.\end{equation}
\end{lemma}
\fi
The following proposition follows from Proposition~\ref{P:Riesz4} and Lemma~\ref{L:innerProduct2}.
\begin{proposition}\label{P:image-Fourier2}
For each $j$ with $1\leq j\leq n-1$, the inverse Fourier transform $\mathcal{F}$ gives maps \[\mathcal{F}\colon
J_{j}(\nu_{j})(I_j(-\nu_j))\to L^{2}(\mathfrak{n}^{\ast}-\{0\},(V_j)^{-\sigma_j(r_{\xi})},|\xi|^{m-2j}\operatorname{d\!} \xi),\]
and \[\mathcal{F}\colon J_{j}(-\nu_{j})(I_j(\nu_j))\to L^{2}(\mathfrak{n}^{\ast}-\{0\},
(V_j)^{\sigma_j(r_{\xi})},|\xi|^{-m+2j}\operatorname{d\!} \xi),\] which are isometries up to scalars.
\end{proposition}
\begin{proof}
We prove the first statement. The second statement can be proved similarly.
We first prove the statement for $K_{2}$-finite functions. Let $f\in I_{j}(\nu_{j})_{K_2}$ be equal to
$J_{j}(\nu_{j})(\tilde{f})$ for some function $\tilde{f}\in I_{j}(-\nu_{j})_{K_2}$. By Lemma~\ref{L:intertwining2}, $(1+\sigma_j(r_{\xi}))\mathcal{F}(f_{N})(\xi)=0$. Hence, $\mathcal{F}(f_{N})(\xi)\in (V_j)^{-\sigma_j(r_{\xi})}$
for all $\xi\in\mathfrak{n}^{\ast}-\{0\}$. By Lemma \ref{L:innerProduct2},
\[(\mathcal{F}(f_{N})|\mathcal{F}(\tilde{f}_{N}))=(f_{N},f_{N})<\infty.\] By Proposition~\ref{P:Riesz4},
\[\mathcal{F}(f_N)(\xi) =\frac{\pi^{\frac{m}{2}}|\frac{1}{2}\xi|^{-m+2j}}{\Gamma(j+1)}
\Bigl(\frac{m}{2}-j\Bigr)(1-\sigma_j(r_{\xi}))\mathcal{F}(\tilde{f}_N)(\xi).\]
Therefore, the statement follows for $K_{2}$-finite functions.
For a general smooth function $f\in J_{j}(\nu_{j})(I_j(-\nu_j))$, write $f=\sum_{\sigma\in\widehat{K}}f_{\sigma}$
for the $K_{2}$-type decomposition of $f$. Put \[f_{n}=\sum_{|\sigma|<n}f_{\sigma},\] where $|\sigma|$ denotes
the norm of the highest weight. Then, $\lim_{n\rightarrow\infty}f_{n}=f$ both as sections of a vector bundle on
$K_{2}/M_{2}\cong S^{m}$ with respect to the $C^{0}$-norm and as vectors in $\bar{\pi}_{j}$ with respect to the
inner product. Since $\lim_{n\rightarrow\infty}f_{n}=f$ as sections of the vector bundle, we have
$\lim_{n\rightarrow\infty}(f_{n})|_{N}=f|_{N}$ as tempered distributions.
Then, \[\lim_{n\rightarrow\infty}\mathcal{F}((f_{n})|_{N})=\mathcal{F}(f|_{N})\] as tempered distributions.
Since $\lim_{n\rightarrow\infty}f_{n}=f$ with respect to the inner product of $\bar{\pi}_{j}$, there exists a
function \[g\in L^{2}(\mathfrak{n}^{\ast}-\{0\},(V_j)^{-\sigma_j(r_{\xi})}, |\xi|^{m-2j}\operatorname{d\!} \xi)\] such that \[\lim_{n\rightarrow\infty}\mathcal{F}((f_{n})|_{N})=g\] in this $L^2$ space. Then one must have
\[\mathcal{F}(f|_{N})=g\in L^{2}(\mathfrak{n}^{\ast}-\{0\},(V_j)^{-\sigma_j(r_{\xi})},|\xi|^{m-2j}\operatorname{d\!} \xi).
\qedhere\]
\end{proof}
\begin{remark}
In light of Proposition~\ref{P:Riesz5} and Lemma~\ref{L:innerProduct2}, the Hermitian form
\eqref{Eq:Hermitian1} on $I_j(\nu)$ is positive definite if $-\frac{m}{2}+j < \nu(H_0)< \frac{m}{2}-j$.
In this case, the Hilbert completion of $I_j(\nu)$ is called a complementary series representation
(cf.\ \cite[Remark 4.10]{Fischmann-Orsted}, \cite[Section 4]{Speh-Venkataramana}). One can determine
their restriction to $P_{2}$ similarly as that in Proposition~\ref{P:P'-restriction1}.
\end{remark}
\subsection{Anti-trivialization and restriction to $P_2$}\label{SS:restriction2}
As in \eqref{Eq:anti-trivialization},
for a $V_j$-valued function $h$ on $\mathfrak{n}^{\ast}-\{0\}$,
define its anti-trivialization by
$h_{at,\nu}(p)=(p^{-1}\cdot h)(\xi_0)$ for $p\in P$.
\begin{proposition}\label{P:anti-trivialization2}
The map $h\mapsto h_{at,\nu}$ gives an isomorphism \[L^{2}(\mathfrak{n}^{\ast}-\{0\},V_j,|\xi|^{2\nu(H_0)}
\operatorname{d\!}\xi)\xrightarrow{\sim}\operatorname{Ind}_{M'_{2}N}^{M_{2}AN}(V_j|_{M'_{2}} \otimes e^{\mathbf{i}\xi_{0}}),\] which
preserves inner products and actions of $P_{2}$. Moreover, the map $h\mapsto h_{at,\nu_{j}}$ gives
\[L^{2}(\mathfrak{n}^{\ast}-\{0\},(V_j)^{-\sigma_j(r_{\xi})},|\xi|^{m-2j}\operatorname{d\!} \xi)\xrightarrow{\sim}
\operatorname{Ind}_{M'_{2}N}^{M_{2}AN}((V_j)^{-\sigma_j(r_{\xi_0})}\otimes e^{\mathbf{i}\xi_{0}})\] and the map
$h\mapsto h_{at,-\nu_{j}}$ gives \[L^{2}(\mathfrak{n}^{\ast}-\{0\},(V_j)^{\sigma_j(r_{\xi})},|\xi|^{-m+2j}
\operatorname{d\!}\xi)\xrightarrow{\sim}\operatorname{Ind}_{M'_{2}N}^{M_{2}AN}((V_j)^{\sigma_j(r_{\xi_0})}\otimes e^{\mathbf{i}\xi_{0}}).\]
\end{proposition}
\begin{proof}
The first statement can be shown in the same way as the proof of Proposition~\ref{P:anti-trivialization}.
The second and third statements follow easily from the first.
\end{proof}
\begin{proposition}\label{P:P'-restriction1}
For each $j$ with $1\leq j\leq n-1$, the restriction of unitary representations $\bar{\pi}_j,\bar{\pi}'_j$ to
$P_2$ are as follows: \[\bar{\pi}'_j|_{P_2}\simeq\operatorname{Ind}_{M'_{2}N}^{M_{2}AN}\bigl(\bigwedge^{j}\mathbb{C}^{m-1}
\otimes e^{\mathbf{i}\xi_{0}}\bigr)\textrm{ and }\bar{\pi}_j|_{P_2}\simeq\operatorname{Ind}_{M'_{2}N}^{M_{2}AN}
\bigl(\bigwedge^{j-1}\mathbb{C}^{m-1}\otimes e^{\mathbf{i}\xi_{0}}\bigr).\]
\end{proposition}
\begin{proof}
Recall that the actions of $P_2$ on $\pi_{j}$, $\pi'_{j}$
and on their Fourier transforms are
given by \eqref{Eq:P-action} and \eqref{Eq:P-action2}.
Then by Proposition~\ref{P:image-Fourier2},
$\mathcal{F}$ composed with anti-trivialization gives
a $P_{2}$-equivariant isometric (up to scalar) embedding from $\pi_{j}$ or $\pi'_{j}$
into the corresponding unitarily induced representation of $P_{2}$ in the proposition.
Extending to the unitary completion, it gives
a $P_{2}$-equivariant embedding
from the unitary completion of $\pi_{j}$ or $\pi'_{j}$
into that unitarily induced representation.
Since $V_j|_{M_{2}}\cong\bigwedge^{j}\mathbb{C}^{m}$, and since
\[(\bigwedge^{j}\mathbb{C}^{m})^{\sigma_j(r_{\xi_0})}\cong\bigwedge^{j}\mathbb{C}^{m-1}
\text{ and }
(\bigwedge^{j}\mathbb{C}^{m})^{-\sigma_j(r_{\xi_0})}\cong\bigwedge^{j-1}\mathbb{C}^{m-1}\]
are irreducible representations of $M'_{2}$,
the unitarily induced representations in the proposition are irreducible.
Therefore, the $P_{2}$-equivariant embeddings are isomorphisms.
\end{proof}
\subsection{Restriction to $P$}\label{SS:restriction3}
For each $j$ with $0\leq j\leq n-1$, write \[\pi_{j}(\rho)=\pi_{j}|_{G_{1}}\textrm{ and }\pi'_{j}(\rho)=
\pi'_{j}|_{G_{1}}.\] These representations of $G_1$ are also regarded as representations of $G$ via the
covering map $G\to G_1$. We use the same notation for these representations of $G$, which agrees with the
notation in \S\ref{SS:irreducible}.
In this subsection we suppose $m$ is odd so that $m=2n-1$ and $G=\operatorname{Spin}(2n,1)$. Write $\pi^{+}(\rho)$ for the
discrete series with lowest $K$-type $V_{K,(\underbrace{\scriptstyle 1,\dots,1}_{n})}$; and write $\pi^{-}(\rho)$
for the discrete series with lowest $K$-type $V_{K,(\underbrace{\scriptstyle 1,\dots,1}_{n-1},-1)}$. Then $\pi'_{n-1}(\rho)\cong\pi^{+}(\rho)\oplus\pi^{-}(\rho)$. Other representations $\pi_{j}$
($0\leq j\leq n-1$) are irreducible. Write $\bar{\pi}_{j}(\rho)$, $\bar{\pi}^{+}(\rho)$, $\bar{\pi}^{-}(\rho)$
for the Hilbert completion of $\pi_{j}(\rho)$, $\pi^{+}(\rho)$, $\pi^{-}(\rho)$, respectively.
The following proposition is a direct consequence of Proposition~\ref{P:P'-restriction1}.
It is a special case of Theorem~\ref{T:branching-regular}.
\begin{proposition}\label{P:P-restriction1}
For each $1\leq j\leq n-1$, \begin{align*}
&\bar{\pi}_{j}(\rho)|_{P}\cong\operatorname{Ind}_{M'N}^{MAN}(\bigwedge^{j-1}\mathbb{C}^{2n-2}\otimes e^{\mathbf{i}\xi_{0}})
\text { and }\\
&\bar{\pi}^{+}(\rho)|_{P}\oplus\bar{\pi}^{-}(\rho)|_{P}
\cong\operatorname{Ind}_{M'N}^{MAN}(\bigwedge^{n-1}\mathbb{C}^{2n-2}\otimes e^{\mathbf{i}\xi_{0}}).
\end{align*}
The restrictions $\bar{\pi}_{j}(\rho)|_{P}$ ($1\leq j\leq n-1$), $\bar{\pi}^{+}(\rho)|_{P}$,
$\bar{\pi}^{-}(\rho)|_{P}$ are all irreducible.
\end{proposition}
The restriction $\bigwedge^{n-1}\mathbb{C}^{2n-2}|_{M'}$
is the direct sum of two finite-dimensional irreducible representations of $M'=\operatorname{Spin}(2n-2)$
with highest weights
\[\mu^+=(\underbrace{1,\dots,1}_{n-1})\textrm{ and }\mu^-=(\underbrace{1,\dots,1}_{n-2},-1),\]
respectively.
After Proposition~\ref{P:P-restriction1}, we need to determine
whether $\bar{\pi}^{-}(\rho)|_{P}$ is isomorphic to
$\operatorname{Ind}_{M'N}^{MAN}(V_{M',\mu^+}\otimes e^{\mathbf{i}\xi_{0}})$
or $\operatorname{Ind}_{M'N}^{MAN}(V_{M',\mu^-}\otimes e^{\mathbf{i}\xi_{0}})$.
In order to do this, we calculate the Fourier transform
of a specific $K$-type function in $\pi^{-}(\rho)$.
For $f$ in a small $K$-type, the explicit form of $f|_N$
was given in Kobayashi-Speh \cite[\S8.2]{Kobayashi-Speh}.
We follow their description and then we calculate its Fourier transform.
One has \[\bigwedge^{n}\mathbb{C}^{2n}\cong V_{K,\lambda^+}\oplus V_{K,\lambda^-},
\text{ where }
\lambda^+=(\underbrace{1,\dots,1}_{n})
\textrm{ and }\lambda^-=(\underbrace{1,\dots,1}_{n-1},-1).\]
Note that $V_{K,\lambda^+}$ is the lowest $K$-type of $\pi^{+}(\rho)$,
and $V_{K,\lambda^-}$ is the lowest $K$-type of $\pi^{-}(\rho)$.
Put $V=\mathbb{C}^{2n}$ and $V'=\mathbb{C}^{2n-1}$. Let $\{e_{j}:1\leq j\leq 2n\}$ be the standard orthonormal basis
of $V$. Then, $V=V'\oplus\mathbb{C}e_{2n}$ and this decomposition induces
\[\bigwedge^{n}V=\bigwedge^{n}V'\oplus \Bigl(\bigwedge^{n-1}V'\wedge\mathbb{C}e_{2n}\Bigr).\]
Let $p\colon V\rightarrow V'$ be the projection along $\mathbb{C}e_{2n}$.
Then it induces the projection $\bigwedge^{n}V\rightarrow\bigwedge^{n}V'$
along $\bigwedge^{n-1}V'\wedge\mathbb{C}e_{2n}$, still denoted by $p$.
Note that $p$ is $M$-equivariant.
For each $u\in\bigwedge^{n}V$, define
\[f_{u}(ka\bar{n})=e^{(\rho'+\nu_{n-1})\log a}p(k^{-1}u),
\quad ka\bar{n}\in KA\bar{N}, \]
which belongs to
$\operatorname{Ind}_{MA\bar{N}}^{G}(\bigwedge^{n}\mathbb{C}^{2n-1}\otimes
e^{-\nu_{n-1}-\rho'}\otimes\mathbf{1}_{\bar{N}})
\cong I_{n-1}(-\nu_{n-1})$.
This isomorphism is induced by $\bigwedge^{n}V'|_{M'}\cong\bigwedge^{n-1}V'|_{M'}$.
By (\ref{Eq:Iwasawa3}), we have
\begin{equation*}
f_{u}(n_{x})=(1+|x|^{2})^{-n}p(s_{x}^{-1}u),
\end{equation*}
where $s_x$ is as in \eqref{Eq:sx}.
Write $v_{j}=e_{2j-1}+\mathbf{i}e_{2j}$ for each $1\leq j\leq n$.
Put
\[u^{+}=v_{1}\wedge\cdots\wedge v_{n} \text{ and }
u^{-}=v_{1}\wedge\cdots\wedge v_{n-1}\wedge(e_{2n-1}-\mathbf{i}e_{2n}).\]
Then $u^{+}$ (resp.\ $u^{-}$)
is a nonzero vector in $\bigwedge^{n}\mathbb{C}^{2n}$
with highest weight $\lambda^+$ (resp.\ $\lambda^-$).
Then $f_{u^{-}}$ gives a function in
$\pi^{-}(\rho)\subset
\operatorname{Ind}_{MA\bar{N}}^{G}(\bigwedge^{n}\mathbb{C}^{2n-1}\otimes
e^{-\nu_{n-1}-\rho'}\otimes\mathbf{1}_{\bar{N}})$,
corresponding to a highest weight vector of the lowest $K$-type of $\pi^{-}(\rho)$.
Below we calculate the inverse Fourier transform of $f_{u^{-}}$.
Put $y=(x,1)\in\mathbb{R}^{2n}$. Let $r'_{x}$ denote both
\[I_{2n}-\frac{2y^{t}y}{|y|^{2}}\in\operatorname{O}(2n) \text{ and }
\operatorname{diag}\Bigl\{I_{2n}-
\frac{2y^{t}y}{|y|^{2}},1\Bigr\}\in\operatorname{O}(2n,1).\]
Note that $s_{x}=sr'_{x}$. Then
\[p(s_{x}^{-1}u^{-})=p(r'_{x}su^{-})=p(r'_{x}u^{+}).\]
Set $x_{2n}=1$ for notational convenience,
but we keep $|x|^2=x_1^2+\cdots+x_{2n-1}^2$.
\begin{lemma}\label{L:rv+}
One has
\begin{align}\label{Eq:rv+}
&\ \ r'_{x}u^{+}\\\nonumber &=u^{+}+\sum_{1\leq k\leq n}(-1)^{k}\frac{2(x_{2k-1}+\mathbf{i}x_{2k})}
{1+|x|^{2}}(x_{2k-1}e_{2k-1}+x_{2k}e_{2k})\\ \nonumber
&\qquad \qquad \qquad \qquad \wedge v_{1}\wedge \cdots \wedge\hat{v_{k}}\wedge\cdots\wedge v_{n}
\\ \nonumber
&\quad +\sum_{1\leq k<j\leq n}(-1)^{j-k}\frac{2\mathbf{i}(x_{2k-1}+\mathbf{i}x_{2k})
(x_{2j-1}+\mathbf{i}x_{2j})}
{1+|x|^{2}}e_{2j-1}\wedge e_{2j}\\ \nonumber
&\qquad \qquad \qquad \qquad
\wedge v_{1}\wedge \cdots\wedge\hat{v_{k}}\wedge\cdots\wedge\hat{v_{j}}
\wedge\cdots\wedge v_{n}\\ \nonumber
&\quad +\sum_{1\leq j<k\leq n}(-1)^{j-k-1}\frac{2\mathbf{i}(x_{2k-1}+\mathbf{i}x_{2k})
(x_{2j-1}+\mathbf{i}x_{2j})}{1+|x|^{2}}e_{2j-1}\wedge e_{2j} \\ \nonumber
&\qquad\qquad\qquad \qquad
\wedge v_{1}\wedge \cdots\wedge\hat{v_{j}}
\wedge\cdots\wedge\hat{v_{k}}\wedge\cdots\wedge v_{n}.
\end{align}
\end{lemma}
\begin{proof}
We have
\[r'_{x}u^{+}=r'_{x}v_1\wedge \cdots \wedge r'_{x}v_n.\]
Since $r'_{x}v_k-v_k$ is proportional to $y$,
\[r'_{x}u^{+}
=u^{+}
+ \sum_{k=1}^n (-1)^{k-1}
(r'_{x}v_k -v_k) \wedge
v_1\wedge \cdots \wedge \hat{v_k} \wedge
\cdots \wedge v_n.\]
Then we calculate
\begin{align*}
r'_{x}v_k-v_k
&=-\frac{2(v_k,y)}{|y|^2}y \\
&= - \sum_{i=1}^n \frac{2(x_{2k-1}+\mathbf{i}x_{2k})}{1+|x|^2}(x_{2i-1}e_{2i-1}+x_{2i}e_{2i}).
\end{align*}
The term for $i=k$ corresponds to the
second term of the right hand side of \eqref{Eq:rv+}.
Then the lemma follows from
\[(x_{2j-1}e_{2j-1}+x_{2j}e_{2j})\wedge v_j
=\mathbf{i}(x_{2j-1}+\mathbf{i}x_{2j}) e_{2j-1} \wedge e_{2j}.
\qedhere\]
\end{proof}
To calculate the inverse Fourier transform
$\mathcal{F}((f_{u^{-}})_N)$ we need some formulas.
\begin{align} \label{Eq:F-formula1}
&\mathcal{F}(1+|x|^2)^{-n}
=\frac{2^{\frac{1}{2}-n}\pi^{\frac{1}{2}}}{(n-1)!}e^{-|\xi|},\\
\label{Eq:F-formula2}
&\mathcal{F}(1+|x|^2)^{-(n+1)}=\frac{2^{-\frac{1}{2}-n}\pi^{\frac{1}{2}}}{n!}(1+|\xi|)e^{-|\xi|}.
\end{align}
First, the Fourier transform of the function of one variable
$\sqrt{\frac{\pi}{2}}e^{-|\xi|}$ is equal to $(1+x^{2})^{-1}$.
By the Fourier inversion formula, we get $\mathcal{F}(1+x^{2})^{-1}=\sqrt{\frac{\pi}{2}}e^{-|\xi|}$.
Using $(1+x^{2})^{-2}=(1+\frac{x}{2}\frac{d}{dx})(1+x^{2})^{-1}$,
we see that $\mathcal{F}(1+x^{2})^{-2}=\frac{\sqrt{\pi}}{2\sqrt{2}}(1+|\xi|)e^{-|\xi|}$.
Then as in the proof of Lemma~\ref{L:Fourier-KBessel3},
we obtain \eqref{Eq:F-formula1} and \eqref{Eq:F-formula2}.
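In more detail, the second one-variable computation can be written out as follows, using only the identity $\mathcal{F}(xg')=-\frac{d}{d\xi}(\xi\,\mathcal{F}g)$ (which holds for either choice of sign in the unitary Fourier transform):
\begin{align*}
\mathcal{F}\bigl((1+x^{2})^{-2}\bigr)
&=\mathcal{F}\Bigl(\bigl(1+\tfrac{x}{2}\tfrac{d}{dx}\bigr)(1+x^{2})^{-1}\Bigr)
=\Bigl(\tfrac{1}{2}-\tfrac{\xi}{2}\tfrac{d}{d\xi}\Bigr)\mathcal{F}\bigl((1+x^{2})^{-1}\bigr)\\
&=\Bigl(\tfrac{1}{2}-\tfrac{\xi}{2}\tfrac{d}{d\xi}\Bigr)\sqrt{\tfrac{\pi}{2}}\,e^{-|\xi|}
=\frac{\sqrt{\pi}}{2\sqrt{2}}(1+|\xi|)e^{-|\xi|}\qquad(\xi\neq 0).
\end{align*}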
Then by \eqref{Eq:Mult-Differ}, we obtain the following.
\begin{align} \label{Eq:F-formula3}
&\mathcal{F}(x_j(1+|x|^2)^{-(n+1)})
= \mathbf{i} \frac{2^{-\frac{1}{2}-n}\pi^{\frac{1}{2}}}{n!}\xi_je^{-|\xi|},\\
\label{Eq:F-formula4}
&\mathcal{F}(x_j^2(1+|x|^2)^{-(n+1)})
= \frac{2^{-\frac{1}{2}-n}\pi^{\frac{1}{2}}}{n!}
\Bigl(1-\frac{\xi_j^2}{|\xi|}\Bigr)e^{-|\xi|},\\
\label{Eq:F-formula5}
&\mathcal{F}(x_jx_k(1+|x|^2)^{-(n+1)})
= -\frac{2^{-\frac{1}{2}-n}\pi^{\frac{1}{2}}}{n!}
\frac{\xi_j\xi_k}{|\xi|} e^{-|\xi|}\quad (j\neq k).
\end{align}
\begin{lemma}\label{P:Ffnx}
We have
\[
\mathcal{F}((f_{u^{-}})_{N})
=\frac{2^{-\frac{1}{2}-n}\pi^{\frac{1}{2}}}{n!}e^{-|\xi|}
(|\xi|(1-r_{\xi})u+2u'\wedge\xi)\]
at $0\neq\xi\in\mathbb{R}^{2n-1}$, where
\begin{align*}
&u=v_{1}\wedge\cdots\wedge v_{n-1}\wedge e_{2n-1}\in\bigwedge^{n}V' \text{ and }\\
& u'=v_{1}\wedge\cdots\wedge v_{n-1}\in\bigwedge^{n-1}V'.
\end{align*}
\end{lemma}
\begin{proof}
By Lemma \ref{L:rv+}, we have
\begin{align*}
&f_{u^{-}}(n_{x})
=(1+|x|^{2})^{-n}v_{1}\wedge\cdots \wedge v_{n-1}\wedge e_{2n-1}\\
&+\sum_{1\leq k\leq n-1}(-1)^{k}2(1+|x|^{2})^{-n-1} (x_{2k-1}+\mathbf{i}x_{2k})
(x_{2k-1}e_{2k-1}+x_{2k}e_{2k})\\
&\qquad\qquad\qquad\qquad
\wedge v_{1}\wedge\cdots\wedge\hat{v_{k}}\wedge\cdots\wedge v_{n-1}\wedge e_{2n-1}\\
&+(-1)^{n}2(1+|x|^{2})^{-n-1}(x_{2n-1}+\mathbf{i})x_{2n-1}e_{2n-1}\wedge
v_{1}\wedge\cdots\wedge v_{n-1}\\
&+\sum_{1\leq k<j\leq n-1}(-1)^{j-k}2(1+|x|^{2})^{-n-1}
\mathbf{i}(x_{2k-1}+\mathbf{i}x_{2k})(x_{2j-1}+\mathbf{i}x_{2j})\\
&\qquad\qquad\qquad
e_{2j-1}\wedge e_{2j}\wedge v_{1} \wedge\cdots\wedge\hat{v_{k}}\wedge\cdots\wedge
\hat{v_{j}}\wedge\cdots\wedge v_{n-1}\wedge e_{2n-1}\\
&+\sum_{1\leq j<k\leq n-1}(-1)^{j-k-1}
2(1+|x|^{2})^{-n-1}\mathbf{i}(x_{2k-1}+\mathbf{i}x_{2k})(x_{2j-1}+\mathbf{i}x_{2j})\\
&\qquad\qquad\qquad e_{2j-1}\wedge e_{2j}\wedge v_{1}
\wedge\cdots\wedge\hat{v_{j}}\wedge\cdots\wedge\hat{v_{k}}\wedge\cdots\wedge
v_{n-1}\wedge e_{2n-1}\\
&+\sum_{1\leq j\leq n-1}(-1)^{j-n-1}
2(1+|x|^{2})^{-n-1}\mathbf{i}(x_{2n-1}+\mathbf{i})(x_{2j-1}+\mathbf{i}x_{2j})\\
&\qquad\qquad\qquad\qquad e_{2j-1}\wedge e_{2j}\wedge v_{1}\wedge
\cdots\wedge\hat{v_{j}}\wedge\cdots\wedge v_{n-1}.
\end{align*}
Then by \eqref{Eq:F-formula1} -- \eqref{Eq:F-formula5},
\begin{align*}
&\Bigl(\frac{2^{\frac{1}{2}-n}\pi^{\frac{1}{2}}}{n!}e^{-|\xi|}\Bigr)^{-1}
\mathcal{F}((f_{u^{-}})_{N})
= nv_{1}\wedge\cdots \wedge v_{n-1}\wedge e_{2n-1}\\
&+\sum_{1\leq k\leq n-1}(-1)^{k}
\Bigl(\Bigl(1-\frac{\xi_{2k-1}^{2}}{|\xi|}
-\mathbf{i}\frac{\xi_{2k-1}\xi_{2k}}{|\xi|}\Bigr)e_{2k-1}
+\Bigl(\mathbf{i}-\mathbf{i}\frac{\xi_{2k}^{2}}{|\xi|}
-\frac{\xi_{2k-1}\xi_{2k}}{|\xi|}\Bigr)e_{2k}\Bigr)\\
&\qquad\qquad\qquad\qquad
\wedge v_{1}\wedge\cdots\wedge\hat{v_{k}}\wedge\cdots\wedge v_{n-1}\wedge e_{2n-1}\\
&+(-1)^{n}\Bigl(1-\frac{\xi_{2n-1}^{2}}{|\xi|}-\xi_{2n-1}\Bigr)
e_{2n-1}\wedge v_{1}\wedge\cdots\wedge v_{n-1}\\
&+\sum_{1\leq k<j\leq n-1}(-1)^{j-k-1}\mathbf{i}|\xi|^{-1}(\xi_{2k-1}+\mathbf{i}\xi_{2k})
(\xi_{2j-1}+\mathbf{i}\xi_{2j})\\
&\qquad\qquad\qquad e_{2j-1}\wedge e_{2j}\wedge
v_{1}\wedge\cdots\wedge\hat{v_{k}}\wedge\cdots\wedge\hat{v_{j}}\wedge\cdots
\wedge v_{n-1}\wedge e_{2n-1}\\
&+\sum_{1\leq j<k\leq n-1}(-1)^{j-k}
\mathbf{i}|\xi|^{-1}(\xi_{2k-1}+\mathbf{i}\xi_{2k})(\xi_{2j-1}+\mathbf{i}\xi_{2j})\\
&\qquad\qquad\qquad
e_{2j-1}\wedge e_{2j}\wedge v_{1}\wedge\cdots\wedge\hat{v_{j}}\wedge\cdots\wedge\hat{v_{k}}
\wedge \cdots\wedge v_{n-1}\wedge e_{2n-1}\\
&+\sum_{1\leq j\leq n-1}(-1)^{j-n}\mathbf{i}(1+|\xi|^{-1}\xi_{2n-1})
(\xi_{2j-1}+\mathbf{i}\xi_{2j})\\
&\qquad\qquad\qquad\qquad
e_{2j-1}\wedge e_{2j}\wedge v_{1}\wedge\cdots\wedge\hat{v_{j}}\wedge\cdots\wedge v_{n-1}.
\end{align*}
Similarly to Lemma \ref{L:rv+}, we have
\begin{align*}
r_{\xi}(u)&=
u+\sum_{1\leq k\leq n-1}(-1)^{k}\frac{2(\xi_{2k-1}+\mathbf{i}\xi_{2k})}{|\xi|^{2}}
(\xi_{2k-1}e_{2k-1}+\xi_{2k}e_{2k})\\
&\qquad\qquad\qquad\qquad
\wedge v_{1}\wedge\cdots\wedge\hat{v_{k}}\wedge\cdots\wedge v_{n-1}\wedge e_{2n-1}\\
&+(-1)^{n}\frac{2\xi_{2n-1}}{|\xi|^{2}}\xi_{2n-1}e_{2n-1}\wedge v_{1}\wedge\cdots\wedge v_{n-1}\\
&+\sum_{1\leq k<j\leq n-1}(-1)^{j-k}\frac{2\mathbf{i}(\xi_{2k-1}+\mathbf{i}\xi_{2k})(\xi_{2j-1}+\mathbf{i}\xi_{2j})}
{|\xi|^{2}}e_{2j-1}\wedge e_{2j}\\
&\qquad\qquad\qquad\qquad
\wedge v_{1}\wedge\cdots\wedge\hat{v_{k}}\wedge\cdots\wedge\hat{v_{j}}\wedge
\cdots\wedge v_{n-1}\wedge e_{2n-1}\\
&+\sum_{1\leq j<k\leq n-1}(-1)^{j-k-1}\frac{2\mathbf{i}(\xi_{2k-1}+\mathbf{i}\xi_{2k})
(\xi_{2j-1}+\mathbf{i}\xi_{2j})}{|\xi|^{2}}e_{2j-1}\wedge e_{2j}\\
&\qquad\qquad\qquad\qquad \wedge v_{1}\wedge\cdots\wedge\hat{v_{j}}
\wedge\cdots\wedge\hat{v_{k}}\wedge\cdots\wedge v_{n-1}\wedge e_{2n-1}\\
&+\sum_{1\leq j\leq n-1}(-1)^{j-n-1}
\frac{2\mathbf{i}\xi_{2n-1}(\xi_{2j-1}+\mathbf{i}\xi_{2j})}{|\xi|^{2}}e_{2j-1}\wedge e_{2j}\\
&\qquad\qquad\qquad\qquad
\wedge v_{1}\wedge \cdots\wedge\hat{v_{j}}\wedge\cdots\wedge v_{n-1}.
\end{align*}
The lemma follows from these equations.
\end{proof}
\begin{proposition}\label{P:P-restriction2}
One has
\[\bar{\pi}^{+}(\rho)|_{P}\cong\operatorname{Ind}_{M'N}^{MAN}(V_{M',\mu^-}\otimes e^{\mathbf{i}\xi_{0}})
\text{ and }
\bar{\pi}^{-}(\rho)|_{P} \cong\operatorname{Ind}_{M'N}^{MAN}(V_{M',\mu^+}\otimes e^{\mathbf{i}\xi_{0}}).\]
\end{proposition}
\begin{proof}
Let $h:=\mathcal{F}((f_{u^{-}})_N)$.
By Lemma~\ref{P:Ffnx},
\[h(\xi)=\frac{2^{-\frac{1}{2}-n}\pi^{\frac{1}{2}}}{n!}e^{-|\xi|}(|\xi|(1-r_{\xi})u+2u'\wedge\xi)\]
for $\xi\neq 0$.
Evaluating at $\xi=\xi_0=e_{2n-1}$, we have
\[h(e_{2n-1})=\frac{2^{-\frac{1}{2}-n}\pi^{\frac{1}{2}}}{n!\,e}\cdot (4u).\]
Hence
\[h_{at,\nu}(e)=h(\xi_0) = cu\]
for a constant $c\neq 0$.
This is a highest weight vector for $M'$ with weight $\mu^+$
in the representation $\bigwedge^{n}V'|_{M'}\cong\bigwedge^{n-1}V'|_{M'}$.
Hence the inverse Fourier transform of $f_{u^{-}}$
must lie in $\operatorname{Ind}_{M'N}^{MAN}(V_{M',\mu^+}\otimes e^{\mathbf{i}\xi_{0}})$.
Therefore,
\[\bar{\pi}^{-}(\rho)|_{P}\cong\operatorname{Ind}_{M'N}^{MAN}(V_{M',\mu^+}\otimes e^{\mathbf{i}\xi_{0}})\]
and then
\[\bar{\pi}^{+}(\rho)|_{P}\cong\operatorname{Ind}_{M'N}^{MAN}(V_{M',\mu^-}\otimes e^{\mathbf{i}\xi_{0}}).
\qedhere\]
\end{proof}
\section{Obtaining $\bar{\pi}|_{MN}$ from $\bar{\pi}|_{K}$}\label{S:MN.K}
By the Cartan decomposition $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ and the decomposition $\mathfrak{g}=
\mathfrak{m}\oplus\mathfrak{a}\oplus\mathfrak{n}\oplus\bar{\mathfrak{n}}$, we have $\mathfrak{k}=\mathfrak{m}
\oplus\{X+\theta(X):X\in\mathfrak{n}\}$. By this, we can view $\mathfrak{m}+\mathfrak{n}$ as the limit
$\lim_{t\rightarrow+\infty}\operatorname{Ad}(\exp(tH_{0}))(\mathfrak{k})$. On the group level, we can view $MN$ as the limit
$\lim_{t\rightarrow+\infty}\exp(tH_{0})K\exp(-tH_{0})$.
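Concretely, write $\alpha$ for the restricted root of $\mathfrak{a}$ on $\mathfrak{n}$, so that $\alpha(H_0)>0$. For $Y\in\mathfrak{m}$ and $X\in\mathfrak{n}$ we have
\[
\operatorname{Ad}(\exp(tH_{0}))\bigl(Y+X+\theta(X)\bigr)=Y+e^{t\alpha(H_0)}X+e^{-t\alpha(H_0)}\theta(X),
\]
so that $e^{-t\alpha(H_0)}\operatorname{Ad}(\exp(tH_{0}))(X+\theta(X))=X+e^{-2t\alpha(H_0)}\theta(X)\rightarrow X$ as $t\rightarrow+\infty$; in this sense $\operatorname{Ad}(\exp(tH_{0}))(\mathfrak{k})$ converges to $\mathfrak{m}\oplus\mathfrak{n}$.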
From this viewpoint, we may expect that $\bar{\pi}|_{MN}$ is determined
by $\bar{\pi}|_{K}$ for any unitarizable irreducible representation $\pi$ of $G$. Here we observe the relationship between two restrictions. The writing of this section is motivated by a question of Professor David Vogan.
As in Section \ref{S:repP}, write $I_{P,\tau}=\operatorname{Ind}_{M'N}^{P}(\tau\otimes e^{\mathbf{i}\xi_{0}})$ for a
unitarily induced representation of $P$ where $\tau$ is a finite-dimensional unitary representation of
$M'$. Then, $I_{P,\tau}$ is irreducible when $\tau$ is so. For any finite-dimensional unitary representation
$\tau$ of $M'$ and any $0\neq t\in\mathbb{R}$, write $I_{t,\tau}=\operatorname{Ind}_{M'N}^{MN}(\tau\otimes
e^{\mathbf{i}t\xi_{0}})$ for a unitarily induced representation of $MN$. Then, $I_{t,\tau}$ is irreducible when $\tau$ is so.
Using Mackey's theory for unitarily induced representations and considering the action of $MN$ on $P/M'N$,
the following lemma follows easily.
\begin{lemma}\label{L:P-MN}
We have \[I_{P,\tau}|_{MN}\cong\int_{t>0}I_{t,\tau}\operatorname{d\!} t.\]
\end{lemma}
\begin{corollary}\label{C:P-MN}
Let $\pi$ be an infinite-dimensional irreducible unitarizable representation of $G=\operatorname{Spin}(m+1,1)$. Then
$\bar{\pi}|_{P}$ and $\bar{\pi}|_{MN}$ determine each other.
\end{corollary}
\begin{proof}
We have shown that $\bar{\pi}|_{P}$ is a finite direct sum of $I_{P,\tau}$. Then, the conclusion follows as the
spectra of $I_{P,\tau}|_{MN},I_{P,\tau'}|_{MN}$ are disjoint whenever $\tau\not\cong\tau'$.
\end{proof}
In \S\ref{SS:CW-Cloux}, we constructed a homomorphism \[\Psi\colon K(G)\rightarrow K(M').\] Write $\widehat{K}$
for the set of isomorphism classes of finite-dimensional irreducible representations of $K$ and write
$\mathbb{Z}^{\widehat{K}}$ for the abelian group of functions $\widehat{K}\rightarrow\mathbb{Z}$ with addition
given by point-wise addition. Taking the multiplicities of irreducible representations of $K$ appearing in
$\pi|_{K}$ ($\pi\in\mathcal{C}(G)$), we obtain a homomorphism \[m\colon K(G)\rightarrow\mathbb{Z}^{\widehat{K}}.\]
Write $\mathbb{Z}(K)$ for the quotient group of $\mathbb{Z}^{\widehat{K}}$ by the subgroup of functions
$f\colon \widehat{K}\rightarrow\mathbb{Z}$ such that $\sharp\{[\sigma]\in\widehat{K} : f([\sigma])\neq 0\}$ is finite.
Let \[p\colon\mathbb{Z}^{\widehat{K}}\rightarrow\mathbb{Z}(K)\] be the quotient map.
As in Section \ref{S:repP}, put $n=\lfloor\frac{m+2}{2}\rfloor$ and $n'=\lfloor\frac{m+1}{2}\rfloor$. Then,
\[n=\begin{cases} n'\quad\ &\textrm{ if }m\textrm{ is odd};\\
n'+1\quad &\textrm{ if }m\textrm{ is even}.
\end{cases}\]
The ranks of $K=\operatorname{Spin}(m+1)$, $M=\operatorname{Spin}(m)$, $M'=\operatorname{Spin}(m-1)$ are equal to $n'$, $n-1$, $n'-1$,
respectively. For a highest weight $\vec{b}=(b_1,\dots,b_{n'-1})$ of $M'$, write $V_{M',\vec{b}}$ for an irreducible
representation of $M'$ with highest weight $\vec{b}$. Then, $[V_{M',\vec{b}}]$ is a basis of $K(M')$.
Let \[\phi\colon K(M')\rightarrow\mathbb{Z}(K)\] be defined by \[\phi([V_{M',\vec{b}}])=\sum_{k\geq 0}[V_{K,(k+b_1,
b_1,\dots,b_{n'-2},(-1)^{m}b_{n'-1})}].\]
\begin{proposition}\label{P:P-K}
We have $\phi\circ\Psi=p\circ m$.
\end{proposition}
\begin{proof}
When $m$ is even, $K(G)$ is generated by induced representations $I(\sigma,\nu)$ and finite-dimensional
representations. Then, the conclusion follows from Proposition~\ref{P:res-induced} and branching laws for the
pair $M\subset K$ (giving $K$ types of induced representations).
When $m$ is odd,
$K(G)$ is generated by induced representations $I(\sigma,\nu)$, (limits of) discrete
series and finite-dimensional representations. Then, the conclusion follows from Proposition~\ref{P:res-induced},
branching laws for the pair $M\subset K$, Theorem \ref{T:branching-ds} and Blattner's formula (giving
$K$ types of discrete series and limits of discrete series).
\end{proof}
\begin{corollary}\label{C:P-K}
For any $\pi\in\mathcal{C}(G)$, $\Psi(\pi)$ is determined by $\pi_{K}|_{K}$.
\end{corollary}
\begin{proof}
Note that $\Psi(\pi)$ is a finite direct sum of finite-dimensional irreducible unitary representations of $M'$.
Then, the conclusion follows from Proposition~\ref{P:P-K} directly.
\end{proof}
\begin{corollary}\label{C:MN-K}
Let $\pi$ be a unitarizable irreducible representation of $G$. Then $\bar{\pi}|_{MN}$ is determined by
$\bar{\pi}|_{K}$.
\end{corollary}
\begin{proof}
When $\pi$ is a unitarizable irreducible representation, $\bar{\pi}|_{K}$ and $\pi_{K}|_{K}$ determine each other.
By Corollary \ref{C:P-K}, $\Psi(\pi)$ is determined by $\pi_{K}|_{K}$. By Lemma \ref{L:unitary-J}, $\bar{\pi}|_{P}$
and $\Psi(\pi)$ determine each other. By Corollary \ref{C:P-MN}, $\bar{\pi}|_{P}$ and $\bar{\pi}|_{MN}$ determine each
other. Then, the conclusion of the corollary follows.
\end{proof}
\section{A case of the Bessel model and relation with the local GGP conjecture}\label{S:GGP}
Take $G_{3}=\operatorname{SO}(m+1,1)$ and let $P_{3}=M_{3}AN$ be a standard minimal parabolic subgroup. Put $H_{3}=
M'_{3}\ltimes N$. Then, determining $\operatorname{Hom}_{H_{3}}(\pi,\tau\otimes e^{\mathbf{i}\xi_{0}})$ for irreducibles
$\pi\in\mathcal{C}(G_{3})$ and $\tau\in\mathcal{C}(M'_{3})$ is related to a case of the Bessel model in the
local Gan-Gross-Prasad conjecture. More precisely, this is the case $\dim W^{\perp}=3$ for the Bessel
model of $G_{3}=\operatorname{SO}(m+1,1)$ as described in \S 2 of \cite{Gan-Gross-Prasad}. Note that Bessel models were
studied by Gomez-Wallach in a more general setting (\cite{Gomez-Wallach}).
First, define categories $\mathcal{C}(G_{3})$, $\mathcal{C}(P_{3})$, $\mathcal{C}(M_{3})$ similarly to
$\mathcal{C}(G)$, $\mathcal{C}(P)$, $\mathcal{C}(M)$ in \S \ref{S:repP}. For any $\pi\in\mathcal{C}(P_{3})$, as in
\S \ref{S:repP}, define $\Psi(\pi)=\pi/\mathfrak{m}_{\xi_0}\cdot\pi$. Then $\Psi(\pi)\in\mathcal{C}(M'_{3})$ and
$\Psi$ defines a functor $\mathcal{C}(P_{3})\rightarrow\mathcal{C}(M'_{3})$. Then, for any $\pi\in\mathcal{C}(G_{3})$
we have \[\Psi(\pi)\cong\bigoplus_{\tau\in\widehat{M'_{3}}}n_{\tau}(\pi)\tau,\] where $n_{\tau}(\pi)=
\dim\operatorname{Hom}_{H_{3}}(\pi,\tau\otimes e^{\mathbf{i}\xi_{0}})$. By an analogue of Proposition~\ref{P:res-induced} for the
pair $P_{3}\subset G_{3}$ and Casselman's subrepresentation theorem, it follows that when $\pi\in\mathcal{C}(G_{3})$
is irreducible, $n_{\tau}(\pi)=0$ or $1$ (the multiplicity one theorem) and there are only finitely many $\tau\in
\widehat{M'_{3}}$ such that $n_{\tau}(\pi)=1$. Moreover, $\Psi(\pi)$ can be calculated from the Langlands parameter
of $\pi$ by the results in this paper. Hence, we know exactly for which $\tau\in\widehat{M'_{3}}$ one has $n_{\tau}(\pi)=1$. The
multiplicity one theorem in this case was shown before in a very general setting (cf.\ \cite[\S 15]{Gan-Gross-Prasad}
and references therein). However, to the authors' knowledge, determining for which pairs $(\pi,\tau)$ one has
$n_{\tau}(\pi)=1$ in this case is new. This is related to the local Gan-Gross-Prasad conjecture in the Bessel model case
(\cite[Conjecture 17.1]{Gan-Gross-Prasad}, \cite[Conjecture 6.1]{Gan-Gross-Prasad2}).
Second, for any unitarizable irreducible infinite-dimensional representation $\pi\in\mathcal{C}(G_{3})$, an analogue
of Lemma \ref{L:unitary-J} indicates that $\bar{\pi}|_{P_{3}}\cong I_{P_{3}}(\Psi(\pi)\otimes e^{\mathbf{i}\xi_{0}})$.
That is to say, ``$L^{2}$ spectrum $=$ smooth quotient'' in this case.
\end{document}
\begin{document}
\title{High-bit-rate quantum key distribution with entangled internal degrees of freedom of photons}
\author{Isaac~Nape}
\affiliation{School of Physics, University of the Witwatersrand, Private Bag 3, Wits 2050, South Africa}
\author{Bienvenu~Ndagano}
\affiliation{School of Physics, University of the Witwatersrand, Private Bag 3, Wits 2050, South Africa}
\author{Benjamin~Perez-Garcia}
\affiliation{School of Physics, University of the Witwatersrand, Private Bag 3, Wits 2050, South Africa}
\affiliation{Photonics and Mathematical Optics Group, Tecnol\'{o}gico de Monterrey, Monterrey 64849, Mexico}
\author{Stirling~Scholes}
\affiliation{School of Physics, University of the Witwatersrand, Private Bag 3, Wits 2050, South Africa}
\author{Raul~I.~Hernandez-Aranda}
\affiliation{Photonics and Mathematical Optics Group, Tecnol\'{o}gico de Monterrey, Monterrey 64849, Mexico}
\author{Thomas Konrad}
\affiliation{College of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000, South Africa}
\author{Andrew~Forbes}
\email[Corresponding author: ]{[email protected]}
\affiliation{School of Physics, University of the Witwatersrand, Private Bag 3, Wits 2050, South Africa}
\date{\today}
\begin{abstract}
\noindent \textbf{Quantum communication over long distances is integral to information security and has been demonstrated in free space and fibre with two-dimensional polarisation states of light. Although increased bit rates can be achieved using high-dimensional encoding with spatial modes of light, the efficient detection of high-dimensional states remains a challenge to realise the full benefit of the increased state space. Here we exploit the entanglement between spatial modes and polarization to realise a four-dimensional quantum key distribution (QKD) protocol. We introduce a detection scheme which employs only static elements, allowing for the detection of all basis modes in a high-dimensional space deterministically. As a result we are able to realise the full potential of our high-dimensional state space, demonstrating efficient QKD at high secure key and sift rates, with the highest capacity-to-dimension reported to date. This work opens the possibility to increase the dimensionality of the state-space indefinitely while still maintaining deterministic detection and will be invaluable for long distance ``secure and fast" data transfer}.
\end{abstract}
\pacs{}
\maketitle
\section{Introduction}
The use of polarization encoded qubits has become ubiquitous in quantum communication protocols with single photons \cite{Hubel2007,Ursin2007,Ma2012,Herbst2015}. Most notably, they have enabled unconditionally secure cryptography protocols through quantum key distribution (QKD) over appreciable distances \cite{Jennewein2000,Poppe2004,Peng2007}. With the increasing technological prowess in the field, faster and more efficient key generation, together with robustness to third-party attacks, have become paramount issues to address. A topical approach to overcome these hurdles is through higher-dimensional QKD: increasing the dimensionality, $d$, of a QKD protocol leads to better security and higher secure key rates, with each photon carrying up to $\log_2(d)$ bits of information \cite{bechmann2000quantum,cerf2002security}.
Employing spatial modes of light, particularly those carrying orbital angular momentum (OAM), has shown considerable improvements in the data transfer rates of classical communication systems \cite{Wang2012, Sleiffer2012,Huang2013}. However, realizing high-dimensional quantum communication remains challenging. To date, the list of reports on high-dimensional QKD with spatial modes remains short, and includes protocols in up to $d=7$ \cite{Groblacher2006,mafu2013higher,mirhosseini2015high}. It is worth noting that due to experimental limitations, the secret key rate of a given QKD protocol does not scale indefinitely with the dimension, i.e., given a certain set of experimental parameters there exists an optimal number of dimensions that maximizes the secret key rate \cite{leach2012secure}.
Photons with complex spatial and polarization structure, commonly known as vector modes, have been used as information carriers for polarisation encoded qubits in alignment-free QKD \cite{Souza2008,vallone2014free}, exploiting the fact that vector modes that carry OAM exhibit rotational symmetry, removing the need to align the detectors in order to reconcile the encoding and decoding bases, as would be the case in QKD with only polarization. In these vector modes, the spatial and polarization degrees of freedom (DoFs) are coupled in a non-separable manner, reminiscent of entanglement in quantum mechanics. This non-separability can be used to encode information and has been done so with classical light \cite{Milione2015e,Li2016}, for example, in mode division multiplexing \cite{Milione2015f}.
Here we use the non-separability of vector OAM modes (vector vortex modes) to realize four-dimensional QKD based on the ``BB84'' protocol \cite{bennett1984quantum}. Rather than carrying information encoded in one DoF, the non-separable state can itself constitute a basis for a higher dimensional space that combines two DoFs, namely the spatial and polarisation DoFs. To fully benefit from the increased state space, we introduce a new detection scheme that, deterministically and without dimension dependent sifting loss, can detect all basis elements in our high-dimensional space. This differs from previous schemes that have used mode filters as detectors, sifting through the space one mode at a time, thus removing all benefit of the dimensionality of the space (see for example ref. \cite{mafu2013higher}).
Our approach combines manipulations of the dynamic and Pancharatnam-Berry phase with static optical elements and, in principle, allows detection of the basis elements with unit probability. We demonstrate high-dimensional encoding/decoding in our entangled space, obtaining a detection fidelity as high as $97\%$, with a secret key rate of $1.63$ bits per photon and a quantum error rate of $3\%$. As a means of comparison to other protocols, we calculate the capacity-to-dimension ratio and show that our scheme is more efficient than any other reported to date.
\section{Results}
\begin{figure}
\caption{\textbf{Modes in a four-dimensional hyper-entangled space.}
\label{fig: basis states}
\end{figure}
\begin{figure*}
\caption{\textbf{Deterministic detection of the full state space.}
\label{fig:figure2}
\end{figure*}
\textbf{High-dimensional encoding.} The first QKD demonstrations were performed using the polarisation DoF, namely, states in the space spanned by left- circular $\ket{L}$ and right-circular polarization $\ket{R}$, i.e., ${\cal{H}_{\sigma}}= \mbox{span}\{\ket{L}, \ket{R}\}$, and later using the spatial mode of light as a DoF, e.g., space spanned by the OAM modes $\ket{\ell}$ and $\ket{-\ell}$, i.e. $\cal{H}_{\ell} = \mbox{span}\{\ket{\ell}, \ket{-\ell}\}$.
Using entangled states in both DoFs allows one to access an even larger state space, i.e., ${\cal H}_{\Omega} = \cal{H}_{\sigma}\otimes\cal{H}_{\ell}$, described by the higher-order Poincar\'e sphere \cite{Milione2011a,Milione2012a}. When many $\ell$ subspaces are combined, the dimension $d$ of the final space incorporating $N$ OAM values ($\ell \in \Omega \subset \mathbf{N}$) is given by $d=4N$. This opens the way to infinite dimensional encoding using such hyper-entangled states.
For example, using only the $|\ell| $ subspace of OAM ($N = 1$) leads to a four dimensional space spanned by $\{ \ket{\ell,L},\ket{-\ell,L}, \ket{\ell,R},\ket{-\ell,R} \} $. It is precisely in this four-dimensional subspace that, here, we define our vector and scalar modes. Alice randomly prepares photons in modes from two sets: a vector mode set, $\ket{\psi}_{\ell,\theta}$, and a mutually unbiased scalar mode set, $\ket{\phi}_{\ell,\theta}$, defined as
\begin{eqnarray}
\ket{\psi}_{\ell,\theta} &=& \frac{1}{\sqrt{2}}(\ket{R}\ket{\ell} + e^{i\theta} \ket{L}\ket{-\ell}), \label{eq: vector mode 1}\\
\ket{\phi}_{\ell,\theta} &=& \frac{1}{\sqrt{2}}\left(\ket{R} + e^{i\left(\theta-\frac{\pi}{2}\right)} \ket{L}\right)\ket{\ell}, \label{eq: scalar mode 1}
\end{eqnarray}
\noindent where each photon carries $\ell\hbar$ quanta of OAM, $\ket{R}$ and $\ket{L}$ are, respectively, the right and left circular polarization eigenstates and $\theta = [0,\pi]$ is the intra-modal phase. For a given $|\ell|$ OAM subspace, there exist four orthogonal modes in both the vector basis (Eq.~\ref{eq: vector mode 1}) and its mutually unbiased counterpart (Eq.~\ref{eq: scalar mode 1}), such that $\left|\braket{\psi}{\phi}\right|^2 = 1/d$ with $d=4$. These vector and scalar modes can be generated by manipulating the dynamic or geometric phase of light \cite{Forbes2016,Ndagano2016,Naidoo2016,Lu2016}.
Here we employ geometric phase control through a combination of $q$-plates \cite{Marrucci2006,marrucci2011spin}
and wave plates to create all vector and scalar modes in the four-dimensional space (see Methods and Supplementary Information). Our four vector modes for QKD then become:
\begin{eqnarray}
\ket{00} &=& \frac{1}{\sqrt{2}}(\ket{R}\ket{\ell} + \ket{L}\ket{-\ell}), \\
\ket{01} &=& \frac{1}{\sqrt{2}}(\ket{R}\ket{\ell} - \ket{L}\ket{-\ell}), \\
\ket{10} &=& \frac{1}{\sqrt{2}}(\ket{L}\ket{\ell} + \ket{R}\ket{-\ell}), \\
\ket{11} &=& \frac{1}{\sqrt{2}}(\ket{L}\ket{\ell} - \ket{R}\ket{-\ell}),
\end{eqnarray}
with corresponding MUBs
\begin{eqnarray}
\ket{00} &=& \frac{1}{\sqrt{2}}\ket{D}\ket{-\ell}, \\
\ket{01} &=& \frac{1}{\sqrt{2}}\ket{D}\ket{\ell}, \\
\ket{10} &=& \frac{1}{\sqrt{2}}\ket{A}\ket{-\ell}, \\
\ket{11} &=& \frac{1}{\sqrt{2}}\ket{A}\ket{\ell},
\end{eqnarray}
where $D$ and $A$ refer to diagonal and anti-diagonal polarisation states. For the purpose of demonstration, we use vector and scalar modes in the $\ell = \pm 1$ and $\ell = \pm 10$ OAM subspaces, shown graphically in Fig.~\ref{fig: basis states}.
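The mutual unbiasedness of the two bases can be checked directly. The following minimal sketch (not part of the experimental code; it assumes the conventions $\ket{D}=(\ket{R}+\ket{L})/\sqrt{2}$, $\ket{A}=(\ket{R}-\ket{L})/\sqrt{2}$ and the basis ordering $\{\ket{R,\ell},\ket{R,-\ell},\ket{L,\ell},\ket{L,-\ell}\}$) verifies that each set is orthonormal and that $\left|\braket{\psi}{\phi}\right|^{2}=1/4$ for every cross pair:
\begin{verbatim}
import numpy as np

s = 1/np.sqrt(2)
# Vector modes |00>,|01>,|10>,|11> in the basis {|R,l>,|R,-l>,|L,l>,|L,-l>}.
vector = np.array([[s, 0, 0,  s],
                   [s, 0, 0, -s],
                   [0, s, s,  0],
                   [0, -s, s, 0]])
# Mutually unbiased scalar modes |D,-l>, |D,l>, |A,-l>, |A,l>.
scalar = np.array([[0, s, 0,  s],
                   [s, 0, s,  0],
                   [0, s, 0, -s],
                   [s, 0, -s, 0]])

print(np.allclose(vector @ vector.T, np.eye(4)))   # True: orthonormal set
print(np.allclose(scalar @ scalar.T, np.eye(4)))   # True: orthonormal set
print(np.abs(vector @ scalar.T)**2)                # every entry equals 0.25
\end{verbatim}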
\textbf{High-dimensional decoding.} At the receiver's end, Bob randomly opts to measure the received photon in either the scalar or vector basis. The randomness of the choice between the two bases is implemented here with a 50:50 beam splitter (BS) as shown in Fig.~\ref{fig:figure2}(a). Prior QKD experiments beyond two-dimensions have used filtering based techniques that negate the very benefit of the increased state space: by filtering for only one mode at a time, the effective data transfer rate is reduced by a factor $1/d$. We introduce a new scheme to deterministically detect the modes, as detailed in Fig.~\ref{fig:figure2} (b) and (c), that has a number of practical advantages for quantum cryptography. Consider a vector mode as defined in Eq.~\ref{eq: vector mode 1}. The sorting of the different vector modes is achieved through a combination of geometric phase control and multi-path interference. First, a polarisation grating based on geometric phase acts as a beam splitter for left- and right-circularly polarised photons, creating two paths
\begin{equation}
\ket{\Psi}_{\ell,\theta} \rightarrow \frac{1}{\sqrt{2}} \left( \ket{\ell}_a\ket{R}_a + e^{i\theta} \ket{-\ell}_b\ket{L}_b \right),
\label{eq: vector mode2}
\end{equation}
where the subscript $a$ and $b$ refer to the polarisation-marked paths.
The photon paths $a$ and $b$ are interfered at a 50:50 BS, resulting in the following state after the BS:
\begin{equation}
\ket{\Psi'}_{\ell,\theta} = \frac{1+e^{i(\delta+\theta+\frac{\pi}{2})}}{2} \ket{\ell}_{c} + i\frac{1+e^{i(\delta+\theta-\frac{\pi}{2})}}{2} \ket{-\ell}_{d}
\label{eq: path interference}
\end{equation}
where the subscripts $c$ and $d$ refer to the output ports of the beam splitter and $\delta$ is the dynamic phase difference between the two paths. Note that the polarisation of the two paths is automatically reconciled in each of the output ports of the beam splitter due to the difference of parity in the number of reflections for each input arm. Also note that at this point it is not necessary to retain the polarisation kets in the expression of the photon state since the polarisation information is contained in the path.
In our setup we set $\delta = \pi/2$, reducing the state in Eq. \ref{eq: path interference} to
\begin{equation}
\ket{\Psi'}_{\ell,\theta} = \frac{1-e^{i\theta}}{2} \ket{\ell}_{c} + i\frac{1+e^{i\theta}}{2} \ket{-\ell}_{d}
\label{eq: path interference2}
\end{equation}
The measurement system is completed by passing each of the outputs in $c$ and $d$ through a mode sorter and collecting the photons using four multimode fibres coupled to avalanche photodiodes. The mode sorters are refractive (lossless) aspheres that map OAM to position \cite{Berkhout2010a, Fickler2014a, Lavery2013, Dudley2013} (see Supplementary Information for a layout of the detection system). While it is trivial to measure such hyper-entangled (non-separable) vector states at the classical level \cite{Milione2015e,Milione2015f,Ndagano2015},
with our approach each such state is detected with unit probability at the single photon level. For example, consider the modes $\ket{00}$ and $\ket{01}$, where $\theta = 0$ and $\theta = \pi$, respectively. The mapping is such that
\begin{eqnarray}
\ket{00} &\rightarrow& \ket{\Psi'}_{\ell,0} = i \ket{-\ell}_{d}, \\
\ket{01} &\rightarrow& \ket{\Psi'}_{\ell,\pi} = - \ket{\ell}_{c}.
\label{eq:mapping}
\end{eqnarray}
\noindent The combination of path ($c$ or $d$) and lateral location ($+\ell$ or $-\ell$) uniquely determines the original vector mode, as shown in Fig.~\ref{fig:figure2}(d).
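As a quick numerical illustration of this deterministic routing (a sketch only, tracking just the two output amplitudes of Eq.~\ref{eq: path interference2}; the modes $\ket{10}$ and $\ket{11}$ are treated identically once the polarisation grating exchanges the path labels):
\begin{verbatim}
import numpy as np

def sorter_probabilities(theta):
    """Output probabilities of the vector-mode sorter for delta = pi/2."""
    amp_c = (1 - np.exp(1j*theta)) / 2        # port c, carrying |+l>
    amp_d = 1j*(1 + np.exp(1j*theta)) / 2     # port d, carrying |-l>
    return abs(amp_c)**2, abs(amp_d)**2

for theta, label in [(0.0, "|00>"), (np.pi, "|01>")]:
    p_c, p_d = sorter_probabilities(theta)
    print(label, round(p_c, 2), round(p_d, 2))
# |00> (theta = 0)  -> P(c) = 0.0, P(d) = 1.0: the photon always exits port d
# |01> (theta = pi) -> P(c) = 1.0, P(d) = 0.0: the photon always exits port c
\end{verbatim}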
The scalar mode detector works on an analogous principle but without the need of the BS to resolve the intermodal phases (see Supplementary Information). The polarisation states are resolved by first performing a unitary transformation that maps linear to circular basis, and passing the scalar mode through the polarisation grating. The OAM states are subsequently sorted using the mode sorters.
A graphical illustration of the experimental performance of both the scalar and vector analysers is shown in Fig.~\ref{fig:figure2}(e), where modes from the $\ell = \pm 1$ and $\ell = \pm 10$ subsets were measured with high fidelity (close to unity).
\begin{figure}
\caption{\textbf{Crosstalk analysis in four dimensions.}
\label{fig: figure3}
\end{figure}
\textbf{High dimensional cryptography.} We performed a four-dimensional prepare-and-measure BB84 scheme \cite{bennett1984quantum} using mutually unbiased vector and scalar modes. Light from our source was attenuated to the single photon level with an average photon number of $\mu = 0.008$. Alice prepared an initial state in either the $\ket{\psi}_{\ell}$ (vector) or $\ket{\phi}_{\ell}$ (scalar) basis and transmitted it to Bob, who made his measurements as detailed in the previous section. Through optical projection onto both the vector and scalar bases as laid out in Fig.~\ref{fig: figure3}(a), we determined the crosstalk matrices shown in Fig.~\ref{fig: figure3}(c) and (d), relating the input and measured modes within, respectively, the subspaces $\ell = \pm 1$ and $\ell = \pm 10$. The average fidelity of detection, measured for modes prepared and detected in identical bases, is $0.965\pm0.004$ while the overlap between modes from MUBs is $|\braket{\phi}{\psi}|^2 = 0.255 \pm 0.004$, in good agreement with theory (0.25).
From the measured crosstalk matrices in Fig.~\ref{fig: figure3}(c) and (d), we performed a security analysis on our QKD scheme in dimensions $d=4$ for the two OAM subspaces ($\pm 1$ and $\pm 10$). The results of the analysis are summarised in Table \ref{tab:table2}. From the measured detection fidelity $F$, we computed the mutual information between Alice and Bob in $d$-dimensions as follows \cite{cerf2002security}
\begin{equation}
I_{AB}=\log_{2}(d) + F\log_{2}(F)+ (1-F)\log_{2}\left(\frac{1-F}{d-1}\right).
\end{equation}
The measured $I_{AB}$ for $d=4$ is nearly double ($1.7\times$) the maximum of one bit per photon achievable with qubit states.
Assuming a third party, Eve, uses an ideal quantum cloning machine to extract information, the associated cloning fidelity, $F_E$, in $d$-dimensions is given by \cite{cerf2002security}
\begin{equation}
F_{E}=\frac{F}{d}+\frac{(d-1)(1-F)}{d} + \frac{2\sqrt{(d-1)F(1-F)}}{d}.
\end{equation}
In our four-dimensional protocol, the efficiency of Eve's cloning machine is reduced to as low as $0.41$, well below the corresponding limit for a two-dimensional protocol ($0.5$).
Thus, increasing the dimensionality of QKD protocols does indeed provide, in addition to higher mutual information capacity, higher robustness to cloning-based attacks.
The mutual information shared between Alice and Eve, conditioned on Bob's error -- that is, assuming that a wrong measurement by Bob is the result of Eve extracting the correct information -- is computed in $d$ dimensions as follows \cite{cerf2002security}
\begin{equation}
\begin{split}
I_{AE}= & \log_{2}(d) + (F+F_E-1)\log_{2}\left(\frac{F+F_E-1}{F}\right)\\ & +(1-F_E)\log_{2}\left(\frac{1-F_E}{(d-1)F}\right).
\end{split}
\end{equation}
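The entries of Table~\ref{tab:table2} can be reproduced from the measured fidelity alone. The following minimal sketch (not the analysis code used for the experiment; it simply evaluates the three expressions above, together with $Q=1-F$ and the key-rate bound $R=I_{AB}-I_{AE}$) recovers the tabulated values:
\begin{verbatim}
import numpy as np

def security_analysis(F, d=4):
    I_AB = np.log2(d) + F*np.log2(F) + (1-F)*np.log2((1-F)/(d-1))
    F_E  = F/d + (d-1)*(1-F)/d + 2*np.sqrt((d-1)*F*(1-F))/d
    I_AE = (np.log2(d) + (F+F_E-1)*np.log2((F+F_E-1)/F)
            + (1-F_E)*np.log2((1-F_E)/((d-1)*F)))
    return I_AB, F_E, I_AE, 1-F, I_AB - I_AE

for F in (0.96, 0.97):   # measured fidelities for |l| = 1 and |l| = 10
    I_AB, F_E, I_AE, Q, R = security_analysis(F)
    print(f"F={F}: I_AB={I_AB:.2f} F_E={F_E:.2f} I_AE={I_AE:.2f} "
          f"Q={Q:.2f} R={R:.2f}")
# F=0.96: I_AB=1.69 F_E=0.44 I_AE=0.17 Q=0.04 R=1.52
# F=0.97: I_AB=1.76 F_E=0.41 I_AE=0.13 Q=0.03 R=1.63
\end{verbatim}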
\begin{table}
\centering
\caption{Summary of the security analysis on the high dimensional protocol showing the experimental and theoretical values of the detection fidelity ($F$), mutual information $I_{AB}$ between Alice and Bob, Eve's cloning fidelity $(F_{E})$ and mutual information with Alice $I_{AE}$, as well as the quantum error rate $Q$ and secret key rate $R$.}
\label{tab:table2}
\begin{tabular}{|c|c|c|c|}
\hline
& $d=4$ ($|\ell|$ = 1) & $d=4$ ($|\ell|$ = 10) & \\
\hline
Measures & experiment & experiment & ideal \\
\hline
$F$ & $0.96$ & $0.97$ & $1.00$\\
\hline
$I_{AB} $ & $1.69$ & $1.76$ & $2.00$ \\
\hline
$F_{E} $ & $0.44$ & $0.41$ & $0.25$ \\
\hline
$I_{AE} $ & $0.17$ & $0.13$ & $0.00$ \\
\hline
$Q$ & $0.04$ & $0.03$ & $0.00$ \\
\hline
$R$ & $1.52$ & $1.63$ & $2.00$ \\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\caption{\textbf{High dimensional BB84.}
\label{fig:figure4}
\end{figure*}
The consequent measured quantum error rate of $Q = 1-F = 0.04$ is well below the $0.11$ and $0.18$ bounds for unconditional security against coherent attacks in two and four dimensions \cite{cerf2002security}, respectively. The lower bound on the secret key rate, $R=\max\left(I_{AB} - I_{AE},I_{AB} - I_{BE}\right)$ \cite{gisin2002quantum}, yields a value as high as $1.63$ bits per photon, well above the Shannon limit of one bit per photon achievable with qubit states. While the security of the protocol can be increased with privacy amplification, the measured four-dimensional secret key rate demonstrates the potential of such hyper-entangled modes for high-bandwidth quantum communication.
Finally, we performed a four dimensional prepare-and-measure BB84 scheme using mutually unbiased vector and scalar modes. For each mode, Alice and Bob assign the bit values $00, 01, 10 $ and $11$, as shown in Fig.~\ref{fig:figure4}(a).
During the transmission, Alice randomly prepares her photon in a vector (scalar) mode state while Bob randomly measures the photon with either the vector or scalar analyser detailed in Fig. \ref{fig:figure2}. At the end of the transmission, Alice and Bob reconcile the prepare and measure bases and discard measurements in complementary bases, as described in Fig.~\ref{fig:figure4}(b). We performed this transmission using a sequence of 100 modes and retained a sifted key of 49 spatial modes (98 bits), which was used to encrypt and decrypt a picture as shown in Fig.~\ref{fig:figure4}(c).
\section{Discussion and conclusion}
The prepare-and-measure quantum cryptography scheme we report here realises the potential of hyper-entanglement between spatial modes and polarisation as a means of achieving higher-bandwidth optical communication at the single-photon level as well as classically. Our secret key rate of 1.63 bits per photon represents a significant increase in data transfer rates as compared to QKD with conventional polarisation eigenstates, limited to one bit per photon under ideal conditions. The secret key rate we obtained exceeds previously reported \cite{mafu2013higher} $d = 4$ laboratory results by more than $43\%$. In order to compare the efficiency of higher-dimensional protocols we define the information per photon per dimension as a figure of merit. Using this, we find that we achieve a value of $0.41$, compared to reported values of $0.17$ ($d = 5$) \cite{mafu2013higher} and $0.24$ ($d = 7$) \cite{mirhosseini2015high}, highlighting the efficiency of our scheme.
An important aspect of our scheme is the deterministic measurement of all higher dimensional states, allowing, in principle, unit detection probability by Bob for any prepared mode by Alice. This makes it possible to increase the dimensionality of quantum cryptography protocols without compromising on the sifting rate, the fraction of the transmitted bits that constitute the key, unlike with other methods where the data transfer rate is decreased due to filtering for one mode at a time, thereby decreasing the detection probability for a given mode by a factor $1/d$. As a consequence our sift rate is two times greater than would be possible with conventional probabilistic (filtering-based) detection schemes (See Supplementary Information for an experimental comparison). We point out that our scheme would likewise increase the signal-to-noise of classical mode division multiplexing communication systems: rather than distribute the signal across $d$ modes, each with $1/d$ of the signal, we can achieve full signal on each mode with a factor $d$ greater signal-to-noise ratio \cite{Ruffato2016}.
In conclusion, we have demonstrated a four-dimensional QKD protocol using a deterministic detection scheme that realises the full benefit of the dimensionality of the state space. Using modes with entangled spatial and polarisation DoFs, we demonstrated the efficiency of the approach using the BB84 scheme. The system performance confirms that the QKD protocol is capable of realising a high number of bits per photon at high sift rates and high data transfer rates, substantially improving on previously reported results. It is anticipated that, due to the identical scattering of vector and scalar OAM modes in turbulence \cite{Cox2016}, no benefit will be derived by Eve (mutual information between Bob/Alice and Eve) from this mode set. When combined with real-time error correction \cite{Ndagano2016}
and the possibility to increase the dimensionality of the state space indefinitely while still maintaining unit-probability detection, we foresee that this approach will be invaluable for long-distance ``secure and fast'' data transfer.
\section{Methods}
\textbf{Generating vector and scalar modes using a $q$-plate.} We used $q$-plates to couple the polarisation and orbital angular momentum degrees of freedom through geometric phase control. With locally varying birefringence across a wave plate, the geometric phase imparted by a $q$-plate was engineered to produce the following transformation
\begin{eqnarray}
\left| \ell, L \right\rangle \xrightarrow{q\text{-plate}} \left| \ell + 2q, R \right\rangle, \label{eq:Qplate1}\\
\left| \ell, R \right\rangle \xrightarrow{q\text{-plate}} \left| \ell - 2q, L \right\rangle,
\label{eq:Qplate2}
\end{eqnarray}
where $q$ is the topological charge of the $q$-plate. The vector modes investigated here were generated by transforming an input linearly polarised Gaussian mode with quarter- or half-wave plates and $q=1/2$ and $q=5$ plates, producing either separable (scalar) or non-separable (vector) superpositions of the qubit states in Eqs. \ref{eq:Qplate1} and \ref{eq:Qplate2}. The generated states and the corresponding element settings are given in the table below; an illustrative numerical sketch of the transformation follows the table:
\begin{table}[h!]
\caption{Generation of MUBs of vector and scalar modes from an input horizontally polarised Gaussian beam}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Mode & $\lambda/4 (\alpha_1) $ & $\lambda/2 (\theta_1)$ & $q$-plate & $\lambda/4 (\alpha_2)$ & $\lambda/2 (\theta_2)$ \\
\hline
\hline
$\ket{\psi}_{\ell,0}$ & -- & 0 & $|q|$ & -- & -- \\
\hline
$\ket{\psi}_{\ell,\pi}$ & -- & $\pi/4$ & $|q|$ & -- & -- \\
\hline
$\ket{\psi}_{-\ell,0}$ & -- & -- & $|q|$ & -- & 0 \\
\hline
$\ket{\psi}_{-\ell,\pi}$ & -- & -- & $|q|$ & -- & $\pi/4$ \\
\hline
\hline
$\ket{\phi}_{\ell,0}$ & $-\pi/4$ & -- & $|q|$ & $-\pi/4$ & $\pi/4$ \\
\hline
$\ket{\phi}_{\ell,\pi}$ & $-\pi/4$ & -- & $|q|$ & $-\pi/4$ & $-\pi/4$ \\
\hline
$\ket{\phi}_{-\ell,0}$ & $\pi/4$ & -- & $|q|$ & $\pi/4$ & $\pi/4$ \\
\hline
$\ket{\phi}_{-\ell,\pi}$ & $\pi/4$ & -- & $|q|$ & $\pi/4$ & $-\pi/4$ \\
\hline
\end{tabular}
\end{table}
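As an illustrative aside (not part of the experimental procedure; the function name and state representation below are assumptions made purely for the example), the ideal $q$-plate action of Eqs. \ref{eq:Qplate1} and \ref{eq:Qplate2} can be sketched numerically as a map on $(\ell,\text{polarisation})$ labels:
\begin{verbatim}
# Minimal sketch of the ideal q-plate action on (OAM, circular polarisation) kets:
# |l, L> -> |l + 2q, R> and |l, R> -> |l - 2q, L>.
import math

def q_plate(state, q):
    """Apply an ideal q-plate of charge q to a state {(l, pol): amplitude}."""
    out = {}
    for (l, pol), amp in state.items():
        key = (l + 2 * q, 'R') if pol == 'L' else (l - 2 * q, 'L')
        out[key] = out.get(key, 0) + amp
    return out

# Horizontally polarised Gaussian input, |0, H> = (|0, L> + |0, R>) / sqrt(2),
# through a q = 1/2 plate gives the vector mode (|+1, R> + |-1, L>) / sqrt(2).
gauss_H = {(0, 'L'): 1 / math.sqrt(2), (0, 'R'): 1 / math.sqrt(2)}
print(q_plate(gauss_H, 0.5))
\end{verbatim}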
\begin{thebibliography}{39}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {H{\"{u}}bel}\ \emph {et~al.}(2007)\citenamefont
{H{\"{u}}bel}, \citenamefont {Vanner}, \citenamefont {Lederer}, \citenamefont
{Blauensteiner}, \citenamefont {Lor{\"{u}}nser}, \citenamefont {Poppe},\ and\
\citenamefont {Zeilinger}}]{Hubel2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{H{\"{u}}bel}}, \bibinfo {author} {\bibfnamefont {M.~R.}\ \bibnamefont
{Vanner}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Lederer}},
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Blauensteiner}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Lor{\"{u}}nser}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Poppe}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
10.1364/OE.15.007853} {\bibfield {journal} {\bibinfo {journal} {Opt.
Express}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {7853-7862}
(\bibinfo {year} {2007})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ursin}\ \emph {et~al.}(2007)\citenamefont {Ursin},
\citenamefont {Tiefenbacher}, \citenamefont {Schmitt-Manderbach},
\citenamefont {Weier}, \citenamefont {Scheidl}, \citenamefont {Lindenthal},
\citenamefont {Blauensteiner}, \citenamefont {Jennewein}, \citenamefont
{Perdigues}, \citenamefont {Trojek}, \citenamefont {{\"{O}}mer},
\citenamefont {F{\"{u}}rst}, \citenamefont {Meyenburg}, \citenamefont
{Rarity}, \citenamefont {Sodnik}, \citenamefont {Barbieri}, \citenamefont
{Weinfurter},\ and\ \citenamefont {Zeilinger}}]{Ursin2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Ursin}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Tiefenbacher}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Schmitt-Manderbach}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weier}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Scheidl}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Lindenthal}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Blauensteiner}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Jennewein}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Perdigues}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Trojek}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {{\"{O}}mer}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {F{\"{u}}rst}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Meyenburg}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Rarity}}, \bibinfo {author} {\bibfnamefont
{Z.}~\bibnamefont {Sodnik}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Barbieri}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Weinfurter}}, \ and\ \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase 10.1038/nphys629}
{\bibfield {journal} {\bibinfo {journal} {Nat. Phys.}\ }\textbf
{\bibinfo {volume} {3}},\ \bibinfo {pages} {481-486} (\bibinfo {year}
{2007})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ma}\ \emph {et~al.}(2012)\citenamefont {Ma},
\citenamefont {Herbst}, \citenamefont {Scheidl}, \citenamefont {Wang},
\citenamefont {Kropatschek}, \citenamefont {Naylor}, \citenamefont
{Wittmann}, \citenamefont {Mech}, \citenamefont {Kofler}, \citenamefont
{Anisimova}, \citenamefont {Makarov}, \citenamefont {Jennewein},
\citenamefont {Ursin},\ and\ \citenamefont {Zeilinger}}]{Ma2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-S.}\ \bibnamefont
{Ma}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Herbst}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Scheidl}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Kropatschek}}, \bibinfo {author} {\bibfnamefont
{W.}~\bibnamefont {Naylor}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Wittmann}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Mech}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Kofler}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Anisimova}},
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Jennewein}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Ursin}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
10.1038/nature11472} {\bibfield {journal} {\bibinfo {journal} {Nat.}\
}\textbf {\bibinfo {volume} {489}},\ \bibinfo {pages} {269-73} (\bibinfo {year}
{2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Herbst}\ \emph {et~al.}(2015)\citenamefont {Herbst},
\citenamefont {Scheidl}, \citenamefont {Fink}, \citenamefont {Handsteiner},
\citenamefont {Wittmann}, \citenamefont {Ursin},\ and\ \citenamefont
{Zeilinger}}]{Herbst2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Herbst}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Scheidl}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fink}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Handsteiner}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Wittmann}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Ursin}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
10.1073/pnas.1517007112} {\bibfield {journal} {\bibinfo {journal}
{Proc. Natl. Acad. Sci. USA}\ }\textbf {\bibinfo
{volume} {112}},\ \bibinfo {pages} {14202-14205} (\bibinfo {year} {2015})}
\BibitemShut {NoStop}
\bibitem [{\citenamefont {Jennewein}\ \emph {et~al.}(2000)\citenamefont
{Jennewein}, \citenamefont {Simon}, \citenamefont {Weihs}, \citenamefont
{Weinfurter},\ and\ \citenamefont {Zeilinger}}]{Jennewein2000}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Jennewein}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Simon}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Weihs}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
10.1103/PhysRevLett.84.4729} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo
{pages} {4729} (\bibinfo {year} {2000})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Poppe}\ \emph {et~al.}(2004)\citenamefont {Poppe},
\citenamefont {Fedrizzi}, \citenamefont {Ursin}, \citenamefont {B\"ohm},
\citenamefont {Lor\"unser}, \citenamefont {Maurhardt}, \citenamefont {Peev},
\citenamefont {Suda}, \citenamefont {Kurtsiefer}, \citenamefont {Weinfurter},
\citenamefont {Jennewein},\ and\ \citenamefont {Zeilinger}}]{Poppe2004}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Poppe}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fedrizzi}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ursin}}, \bibinfo
{author} {\bibfnamefont {H.~R.}\ \bibnamefont {B\"ohm}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Lor\"unser}}, \bibinfo {author}
{\bibfnamefont {O.}~\bibnamefont {Maurhardt}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Peev}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Suda}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Kurtsiefer}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Weinfurter}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Jennewein}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Zeilinger}},\ }\href {\doibase 10.1364/OPEX.12.003865} {\bibfield {journal}
{\bibinfo {journal} {Opt. Express}\ }\textbf {\bibinfo {volume} {12}},\
\bibinfo {pages} {3865-3871} (\bibinfo {year} {2004})}\BibitemShut{NoStop}
\bibitem [{\citenamefont {Peng}\ \emph {et~al.}(2007)\citenamefont {Peng},
\citenamefont {Zhang}, \citenamefont {Yang}, \citenamefont {Gao},
\citenamefont {Ma}, \citenamefont {Yin}, \citenamefont {Zeng}, \citenamefont
{Yang}, \citenamefont {Wang},\ and\ \citenamefont {Pan}}]{Peng2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~Z.}\ \bibnamefont
{Peng}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zhang}},
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Yang}}, \bibinfo {author}
{\bibfnamefont {W.~B.}\ \bibnamefont {Gao}}, \bibinfo {author} {\bibfnamefont
{H.~X.}\ \bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {H.~P.}\
\bibnamefont {Zeng}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Yang}}, \bibinfo {author} {\bibfnamefont {X.~B.}\ \bibnamefont {Wang}}, \
and\ \bibinfo {author} {\bibfnamefont {J.~W.}\ \bibnamefont {Pan}},\ }\href
{\doibase 10.1103/PhysRevLett.98.010505} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {98}},\
\bibinfo {pages} {010505} (\bibinfo {year} {2007})}\BibitemShut{NoStop}
\bibitem [{\citenamefont {Bechmann-Pasquinucci}\ and\ \citenamefont
{Tittel}(2000)}]{bechmann2000quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bechmann-Pasquinucci}}\ and\ \bibinfo {author} {\bibfnamefont
{W.}~\bibnamefont {Tittel}},\ }\href{\doibase 10.1103/PhysRevA.61.062308} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {61}},\ \bibinfo
{pages} {062308} (\bibinfo {year} {2000})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Cerf}\ \emph {et~al.}(2002)\citenamefont {Cerf},
\citenamefont {Bourennane}, \citenamefont {Karlsson},\ and\ \citenamefont
{Gisin}}]{cerf2002security}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont
{Cerf}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Bourennane}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Karlsson}}, \ and\
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\
}\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {127902} (\bibinfo
{year} {2002})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2012)\citenamefont {Wang},
\citenamefont {Yang}, \citenamefont {Fazal}, \citenamefont {Ahmed},
\citenamefont {Yan}, \citenamefont {Huang}, \citenamefont {Ren},
\citenamefont {Yue}, \citenamefont {Dolinar}, \citenamefont {Tur},\ and\
\citenamefont {Willner}}]{Wang2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Wang}}, \bibinfo {author} {\bibfnamefont {J.-Y.}\ \bibnamefont {Yang}},
\bibinfo {author} {\bibfnamefont {I.~M.}\ \bibnamefont {Fazal}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Ahmed}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Yan}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Ren}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Yue}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Dolinar}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Tur}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.~E.}\ \bibnamefont {Willner}},\ }\href {\doibase
10.1038/nphoton.2012.138} {\bibfield {journal} {\bibinfo {journal} {Nat.
Phot.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {488-496}
(\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Sleiffer}\ \emph {et~al.}(2012)\citenamefont
{Sleiffer}, \citenamefont {Jung}, \citenamefont {Veljanovski}, \citenamefont
{van Uden}, \citenamefont {Kuschnerov}, \citenamefont {Chen}, \citenamefont
{Inan}, \citenamefont {Nielsen}, \citenamefont {Sun}, \citenamefont
{Richardson}, \citenamefont {Alam}, \citenamefont {Poletti}, \citenamefont
{Sahu}, \citenamefont {Dhar}, \citenamefont {Koonen}, \citenamefont
{Corbett}, \citenamefont {Winfield}, \citenamefont {Ellis},\ and\
\citenamefont {de~Waardt}}]{Sleiffer2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Sleiffer}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Jung}},
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Veljanovski}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {van Uden}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Kuschnerov}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Inan}}, \bibinfo {author} {\bibfnamefont {L.~G.}\
\bibnamefont {Nielsen}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Sun}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Richardson}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Alam}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Poletti}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Sahu}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Dhar}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Koonen}},
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Corbett}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Winfield}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Ellis}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {de~Waardt}},\ }\href {\doibase
10.1364/OE.20.00B428} {\bibfield {journal} {\bibinfo {journal} {Opt.
Express}\ }\textbf {\bibinfo {volume} {20}},\ \bibinfo {pages} {B428-B438}
(\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2014)\citenamefont {Huang},
\citenamefont {Xie}, \citenamefont {Yan}, \citenamefont {Ahmed},
\citenamefont {Ren}, \citenamefont {Yue}, \citenamefont {Rogawski},
\citenamefont {Willner}, \citenamefont {Erkmen}, \citenamefont {Birnbaum},
\citenamefont {Dolinar}, \citenamefont {Lavery}, \citenamefont {Padgett},
\citenamefont {Tur},\ and\ \citenamefont {Willner}}]{Huang2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Huang}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Xie}}, \bibinfo
{author} {\bibfnamefont {Y.}~\bibnamefont {Yan}}, \bibinfo {author}
{\bibfnamefont {N.}~\bibnamefont {Ahmed}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Ren}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Yue}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Rogawski}},
\bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Willner}}, \bibinfo
{author} {\bibfnamefont {B.~I.}\ \bibnamefont {Erkmen}}, \bibinfo {author}
{\bibfnamefont {K.~M.}\ \bibnamefont {Birnbaum}}, \bibinfo {author}
{\bibfnamefont {S.~J.}\ \bibnamefont {Dolinar}}, \bibinfo {author}
{\bibfnamefont {M.~P.~J.}\ \bibnamefont {Lavery}}, \bibinfo {author}
{\bibfnamefont {M.~J.}\ \bibnamefont {Padgett}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Tur}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~E.}\ \bibnamefont {Willner}},\ }\href {\doibase
10.1364/OL.39.000197} {\bibfield {journal} {\bibinfo {journal} {Opt.
Lett.}\ }\textbf {\bibinfo {volume} {39}},\ \bibinfo {pages} {197-200}
(\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gr{\"{o}}blacher}\ \emph {et~al.}(2006)\citenamefont
{Gr{\"{o}}blacher}, \citenamefont {Jennewein}, \citenamefont {Vaziri},
\citenamefont {Weihs},\ and\ \citenamefont {Zeilinger}}]{Groblacher2006}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Gr{\"{o}}blacher}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Jennewein}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vaziri}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Weihs}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
10.1088/1367-2630/8/5/075} {\bibfield {journal} {\bibinfo {journal} {New
J. Phys.}\ }\textbf {\bibinfo {volume} {8}} \bibinfo {pages}{75} (\bibinfo {year}
{2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mafu}\ \emph {et~al.}(2013)\citenamefont {Mafu},
\citenamefont {Dudley}, \citenamefont {Goyal}, \citenamefont {Giovannini},
\citenamefont {McLaren}, \citenamefont {Padgett}, \citenamefont {Konrad},
\citenamefont {Petruccione}, \citenamefont {L{\"u}tkenhaus},\ and\
\citenamefont {Forbes}}]{mafu2013higher}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Mafu}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dudley}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Goyal}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Giovannini}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {McLaren}}, \bibinfo {author} {\bibfnamefont
{M.~J.}\ \bibnamefont {Padgett}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Konrad}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Petruccione}}, \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {L{\"u}tkenhaus}}, \ and\ \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Forbes}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo
{pages} {032305} (\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mirhosseini}\ \emph {et~al.}(2015)\citenamefont
{Mirhosseini}, \citenamefont {Maga{\~n}a-Loaiza}, \citenamefont
{O’Sullivan}, \citenamefont {Rodenburg}, \citenamefont {Malik},
\citenamefont {Lavery}, \citenamefont {Padgett}, \citenamefont {Gauthier},\
and\ \citenamefont {Boyd}}]{mirhosseini2015high}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Mirhosseini}}, \bibinfo {author} {\bibfnamefont {O.~S.}\ \bibnamefont
{Maga{\~n}a-Loaiza}}, \bibinfo {author} {\bibfnamefont {M.~N.}\ \bibnamefont
{O’Sullivan}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Rodenburg}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Malik}},
\bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Lavery}}, \bibinfo
{author} {\bibfnamefont {M.~J.}\ \bibnamefont {Padgett}}, \bibinfo {author}
{\bibfnamefont {D.~J.}\ \bibnamefont {Gauthier}}, \ and\ \bibinfo {author}
{\bibfnamefont {R.~W.}\ \bibnamefont {Boyd}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo
{volume} {17}},\ \bibinfo {pages} {033033} (\bibinfo {year}
{2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Leach}\ \emph {et~al.}(2012)\citenamefont {Leach},
\citenamefont {Bolduc}, \citenamefont {Gauthier},\ and\ \citenamefont
{Boyd}}]{leach2012secure}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Leach}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Bolduc}},
\bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Gauthier}}, \ and\
\bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont {Boyd}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf
{\bibinfo {volume} {85}},\ \bibinfo {pages} {060304} (\bibinfo {year}
{2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Souza}\ \emph {et~al.}(2008)\citenamefont {Souza},
\citenamefont {Borges}, \citenamefont {Khoury}, \citenamefont {Huguenin},
\citenamefont {Aolita},\ and\ \citenamefont {Walborn}}]{Souza2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Souza}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Borges}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Khoury}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Huguenin}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Aolita}}, \ and\ \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Walborn}},\ }\href {\doibase
10.1103/PhysRevA.77.032345} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. A}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {032345}
(\bibinfo {year} {2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Vallone}\ \emph {et~al.}(2014)\citenamefont
{Vallone}, \citenamefont {D’Ambrosio}, \citenamefont {Sponselli},
\citenamefont {Slussarenko}, \citenamefont {Marrucci}, \citenamefont
{Sciarrino},\ and\ \citenamefont {Villoresi}}]{vallone2014free}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Vallone}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{D’Ambrosio}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Sponselli}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Slussarenko}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Marrucci}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Sciarrino}},
\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Villoresi}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {113}},\ \bibinfo {pages} {060503}
(\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Milione}\ \emph
{et~al.}(2015{\natexlab{a}})\citenamefont {Milione}, \citenamefont {Nguyen},
\citenamefont {Leach}, \citenamefont {Nolan},\ and\ \citenamefont
{Alfano}}]{Milione2015e}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Milione}}, \bibinfo {author} {\bibfnamefont {T.~A.}\ \bibnamefont {Nguyen}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Leach}}, \bibinfo
{author} {\bibfnamefont {D.~A.}\ \bibnamefont {Nolan}}, \ and\ \bibinfo
{author} {\bibfnamefont {R.~R.}\ \bibnamefont {Alfano}},\ }\href {\doibase
10.1364/OL.40.004887} {\bibfield {journal} {\bibinfo {journal} {Opt.
Lett.}\ }\textbf {\bibinfo {volume} {40}},\ \bibinfo {pages} {4887-4890}
(\bibinfo {year} {2015}{\natexlab{a}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Li}\ \emph {et~al.}(2016)\citenamefont {Li},
\citenamefont {Wang},\ and\ \citenamefont {Zhang}}]{Li2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Li}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Wang}}, \ and\
\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Zhang}},\ }\href
{\doibase 10.1364/OE.24.015143} {\bibfield {journal} {\bibinfo {journal}
{Opt. Express}\ }\textbf {\bibinfo {volume} {24}},\ \bibinfo {pages}
{15143-15159} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Milione}\ \emph
{et~al.}(2015{\natexlab{b}})\citenamefont {Milione}, \citenamefont {Lavery},
\citenamefont {Huang}, \citenamefont {Ren}, \citenamefont {Xie},
\citenamefont {Nguyen}, \citenamefont {Karimi}, \citenamefont {Marrucci},
\citenamefont {Nolan}, \citenamefont {Alfano},\ and\ \citenamefont
{Willner}}]{Milione2015f}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Milione}}, \bibinfo {author} {\bibfnamefont {M.~P.~J.}\ \bibnamefont
{Lavery}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Huang}},
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ren}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Xie}}, \bibinfo {author} {\bibfnamefont
{T.~A.}\ \bibnamefont {Nguyen}}, \bibinfo {author} {\bibfnamefont
{E.}~\bibnamefont {Karimi}}, \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Marrucci}}, \bibinfo {author} {\bibfnamefont {D.~A.}\
\bibnamefont {Nolan}}, \bibinfo {author} {\bibfnamefont {R.~R.}\ \bibnamefont
{Alfano}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~E.}\ \bibnamefont
{Willner}},\ }\href {\doibase 10.1364/OL.40.001980} {\bibfield {journal}
{\bibinfo {journal} {Opt. Lett.}\ }\textbf {\bibinfo {volume} {40}},\
\bibinfo {pages} {1980-1983} (\bibinfo {year} {2015}{\natexlab{b}})} \BibitemShut {NoStop}
\bibitem [{\citenamefont {Bennett}(1984)}]{bennett1984quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont
{Bennett}},\ }In\ \href@noop {} {\emph {\bibinfo {booktitle} {International
Conference on Computer System and Signal Processing, IEEE, 1984,}}} \bibinfo {pages} {175--179}\ (\bibinfo
{year} {1984})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Milione}\ \emph {et~al.}(2011)\citenamefont
{Milione}, \citenamefont {Sztul}, \citenamefont {Nolan},\ and\ \citenamefont
{Alfano}}]{Milione2011a}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Milione}}, \bibinfo {author} {\bibfnamefont {H.~I.}\ \bibnamefont {Sztul}},
\bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Nolan}}, \ and\
\bibinfo {author} {\bibfnamefont {R.~R.}\ \bibnamefont {Alfano}},\ }\href
{\doibase 10.1103/PhysRevLett.107.053601} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {107}},\
\bibinfo {pages} {053601} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Milione}\ \emph {et~al.}(2012)\citenamefont
{Milione}, \citenamefont {Evans}, \citenamefont {Nolan},\ and\ \citenamefont
{Alfano}}]{Milione2012a}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Milione}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Evans}},
\bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Nolan}}, \ and\
\bibinfo {author} {\bibfnamefont {R.~R.}\ \bibnamefont {Alfano}},\ }\href
{\doibase 10.1103/PhysRevLett.108.190401} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {108}},\
\bibinfo {pages} {190401} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Forbes}\ \emph {et~al.}(2016)\citenamefont {Forbes},
\citenamefont {Dudley},\ and\ \citenamefont {McLaren}}]{Forbes2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Forbes}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dudley}}, \
and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {McLaren}},\ }\href
{\doibase 10.1364/AOP.8.000200} {\bibfield {journal} {\bibinfo {journal}
{Adv. Opt. Phot.}\ }\textbf {\bibinfo {volume} {8}},\
\bibinfo {pages} {200-227} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ndagano}\ \emph {et~al.}(2016)\citenamefont
{Ndagano}, \citenamefont {Perez-Garcia}, \citenamefont {Roux}, \citenamefont
{McLaren}, \citenamefont {Rosales-Guzman}, \citenamefont {Zhang},
\citenamefont {Mouane}, \citenamefont {Hernandez-Aranda}, \citenamefont
{Konrad},\ and\ \citenamefont {Forbes}}]{Ndagano2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Ndagano}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Perez-Garcia}}, \bibinfo {author} {\bibfnamefont {F.~S.}\ \bibnamefont
{Roux}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {McLaren}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Rosales-Guzman}},
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhang}}, \bibinfo
{author} {\bibfnamefont {O.}~\bibnamefont {Mouane}}, \bibinfo {author}
{\bibfnamefont {R.~I.}\ \bibnamefont {Hernandez-Aranda}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Konrad}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Forbes}},\ }\href
{http://arxiv.org/abs/1605.05144}\ \Eprint {http://arxiv.org/abs/1605.05144} {arXiv:1605.05144}
{\ (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Naidoo}\ \emph {et~al.}(2016)\citenamefont {Naidoo},
\citenamefont {Roux}, \citenamefont {Dudley}, \citenamefont {Litvin},
\citenamefont {Piccirillo}, \citenamefont {Marrucci},\ and\ \citenamefont
{Forbes}}]{Naidoo2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Naidoo}}, \bibinfo {author} {\bibfnamefont {F.~S.}\ \bibnamefont {Roux}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dudley}}, \bibinfo
{author} {\bibfnamefont {I.}~\bibnamefont {Litvin}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Piccirillo}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Marrucci}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Forbes}},\ }\href {\doibase
10.1038/nphoton.2016.37} {\bibfield {journal} {\bibinfo {journal} {Nat.
Phot.}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {327-332}
(\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lu}\ \emph {et~al.}(2016)\citenamefont {Lu},
\citenamefont {Huang}, \citenamefont {Wang}, \citenamefont {Wang},\ and\
\citenamefont {Alfano}}]{Lu2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~H.}\ \bibnamefont
{Lu}}, \bibinfo {author} {\bibfnamefont {T.~D.}\ \bibnamefont {Huang}},
\bibinfo {author} {\bibfnamefont {J.~G.}\ \bibnamefont {Wang}}, \bibinfo
{author} {\bibfnamefont {L.~W.}\ \bibnamefont {Wang}}, \ and\ \bibinfo
{author} {\bibfnamefont {R.~R.}\ \bibnamefont {Alfano}},\ }\href {\doibase
10.1038/srep39657} {\bibfield {journal} {\bibinfo {journal} {Sci.
Rep.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {39657}
(\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Marrucci}\ \emph {et~al.}(2006)\citenamefont
{Marrucci}, \citenamefont {Manzo},\ and\ \citenamefont
{Paparo}}]{Marrucci2006}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Marrucci}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Manzo}}, \
and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Paparo}},\ }\href
{\doibase 10.1103/PhysRevLett.96.163905} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {96}},\
\bibinfo {pages} {163905} (\bibinfo {year} {2006})} \BibitemShut {NoStop}
\bibitem [{\citenamefont {Marrucci}\ \emph {et~al.}(2011)\citenamefont
{Marrucci}, \citenamefont {Karimi}, \citenamefont {Slussarenko},
\citenamefont {Piccirillo}, \citenamefont {Santamato}, \citenamefont
{Nagali},\ and\ \citenamefont {Sciarrino}}]{marrucci2011spin}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Marrucci}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Karimi}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Slussarenko}}, \bibinfo
{author} {\bibfnamefont {B.}~\bibnamefont {Piccirillo}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Santamato}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Nagali}}, \ and\ \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Sciarrino}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {J. Opt.}\ }\textbf {\bibinfo
{volume} {13}},\ \bibinfo {pages} {064001} (\bibinfo {year}
{2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Berkhout}\ \emph {et~al.}(2010)\citenamefont
{Berkhout}, \citenamefont {Lavery}, \citenamefont {Courtial}, \citenamefont
{Beijersbergen},\ and\ \citenamefont {Padgett}}]{Berkhout2010a}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~C.~G.}\
\bibnamefont {Berkhout}}, \bibinfo {author} {\bibfnamefont {M.~P.~J.}\
\bibnamefont {Lavery}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Courtial}}, \bibinfo {author} {\bibfnamefont {M.~W.}\ \bibnamefont
{Beijersbergen}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~J.}\
\bibnamefont {Padgett}},\ }\href {\doibase 10.1103/PhysRevLett.105.153601}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\
}\textbf {\bibinfo {volume} {105}},\ \bibinfo {pages} {153601} (\bibinfo
{year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Fickler}\ \emph {et~al.}(2014)\citenamefont
{Fickler}, \citenamefont {Lapkiewicz}, \citenamefont {Huber}, \citenamefont
{Lavery}, \citenamefont {Padgett},\ and\ \citenamefont
{Zeilinger}}]{Fickler2014a}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Fickler}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Lapkiewicz}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Huber}}, \bibinfo
{author} {\bibfnamefont {M.~P.}\ \bibnamefont {Lavery}}, \bibinfo {author}
{\bibfnamefont {M.~J.}\ \bibnamefont {Padgett}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
10.1038/ncomms5502} {\bibfield {journal} {\bibinfo {journal} {Nat.
Commun}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {4502}
(\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lavery}\ \emph {et~al.}(2013)\citenamefont {Lavery},
\citenamefont {Robertson}, \citenamefont {Sponselli}, \citenamefont
{Courtial}, \citenamefont {Steinhoff}, \citenamefont {Tyler}, \citenamefont
{Wilner},\ and\ \citenamefont {Padgett}}]{Lavery2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~P.~J.}\
\bibnamefont {Lavery}}, \bibinfo {author} {\bibfnamefont {D.~J.}\
\bibnamefont {Robertson}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Sponselli}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Courtial}},
\bibinfo {author} {\bibfnamefont {N.~K.}\ \bibnamefont {Steinhoff}}, \bibinfo
{author} {\bibfnamefont {G.~A.}\ \bibnamefont {Tyler}}, \bibinfo {author}
{\bibfnamefont {A.~E.}\ \bibnamefont {Wilner}}, \ and\ \bibinfo {author}
{\bibfnamefont {M.~J.}\ \bibnamefont {Padgett}},\ }\href {\doibase
10.1088/1367-2630/15/1/013024} {\bibfield {journal} {\bibinfo {journal}
{New J. Phys.}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {013024} (\bibinfo {year}
{2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dudley}\ \emph {et~al.}(2013)\citenamefont {Dudley},
\citenamefont {Mhlanga}, \citenamefont {Lavery}, \citenamefont {McDonald},
\citenamefont {Roux}, \citenamefont {Padgett},\ and\ \citenamefont
{Forbes}}]{Dudley2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Dudley}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Mhlanga}},
\bibinfo {author} {\bibfnamefont {M.~P.~J.}\ \bibnamefont {Lavery}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {McDonald}}, \bibinfo {author}
{\bibfnamefont {F.~S.}\ \bibnamefont {Roux}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Padgett}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Forbes}},\ }\href {\doibase
10.1364/OE.21.000165} {\bibfield {journal} {\bibinfo {journal} {Opt.
Express}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {165-171}
(\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ndagano}\ \emph {et~al.}(2015)\citenamefont
{Ndagano}, \citenamefont {Br{\"{u}}ning}, \citenamefont {McLaren},
\citenamefont {Duparr{\'{e}}},\ and\ \citenamefont {Forbes}}]{Ndagano2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Ndagano}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Br{\"{u}}ning}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{McLaren}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Duparr{\'{e}}}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Forbes}},\ }\href {\doibase 10.1364/OE.23.017330} {\bibfield {journal}
{\bibinfo {journal} {Opt. Express}\ }\textbf {\bibinfo {volume} {23}},\
\bibinfo {pages} {17330-17336} (\bibinfo {year} {2015})}
\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gisin}\ \emph {et~al.}(2002)\citenamefont {Gisin},
\citenamefont {Ribordy}, \citenamefont {Tittel},\ and\ \citenamefont
{Zbinden}}]{gisin2002quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Gisin}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ribordy}},
\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Tittel}}, \ and\ \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\
}\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {145-195} (\bibinfo {year}
{2002})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ruffato}\ \emph {et~al.}(2016)\citenamefont
{Ruffato}, \citenamefont {Massari},\ and\ \citenamefont
{Romanato}}]{Ruffato2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Ruffato}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Massari}}, \
and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Romanato}},\ }\href
{\doibase 10.1038/srep24760} {\bibfield {journal} {\bibinfo {journal}
{Sci. Rep.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages}
{24760} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Cox}\ \emph {et~al.}(2016)\citenamefont {Cox},
\citenamefont {Rosales-Guzm{\'{a}}n}, \citenamefont {Lavery}, \citenamefont
{Versfeld},\ and\ \citenamefont {Forbes}}]{Cox2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Cox}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Rosales-Guzm{\'{a}}n}}, \bibinfo {author} {\bibfnamefont {M.~P.~J.}\
\bibnamefont {Lavery}}, \bibinfo {author} {\bibfnamefont {D.~J.}\
\bibnamefont {Versfeld}}, \ and\ \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Forbes}},\ }\href {\doibase 10.1364/OE.24.018105}
{\bibfield {journal} {\bibinfo {journal} {Opt. Express}\ }\textbf
{\bibinfo {volume} {24}},\ \bibinfo {pages} {18105-18113} (\bibinfo {year}
{2016})}
\BibitemShut {NoStop}
\end{thebibliography}
\section*{Materials and correspondence}
Correspondence and requests for materials should be addressed to A.F.
\section*{Acknowledgments}
We express our gratitude to Lorenzo Marrucci for providing us with $q$-plates and Miles Padgett and Martin Lavery for the mode sorters. B.N. acknowledges financial support from the National Research Foundation of South Africa and I. N. from the Department of Science and Technology (South Africa). B.P.G. and R.I.H. acknowledge support from CONACyT.
\section*{Authors' contributions}
Experiments were performed by B.N., I.N., S.S. and B.P.G. All authors contributed to the data analysis and interpretation of the results. A.F., I.N. and B.N. wrote the manuscript with inputs from all the authors. A.F. supervised the project.
\section*{Competing financial interests}
The authors declare no competing financial interests.
\section{Supplementary information }
\textbf{Sorting of scalar OAM mode.} We use a compact phase element to perform a geometric transformation on OAM modes such that azimuthal phase is mapped to a transverse phase variation, i.e., a tilted wavefront. The first optical element of our OAM mode sorter performs a conformal mapping in the standard Cartesian coordinates, from a position in the input plane $(x,y)$ to one in the output plane $(u,v)$, such that
\begin{figure}
\caption{\textbf{Sorting the modes.}}
\label{fig:figure5}
\end{figure}
\begin{eqnarray}
u &=& \frac{d}{2\pi}\arctan\left(\frac{y}{x}\right) \\
v &=& -\frac{d}{2\pi}\ln\left(\frac{\sqrt{x^2+y^2}}{b}\right)
\end{eqnarray}
where $d$ is the aperture size of the free-form optics and $b$ is a scaling factor that controls the translation of the transformed beam in the $u$ direction of the new coordinate system. The result is that, after passing through a second phase-correcting optic and then a Fourier-transforming lens (of focal length $f$), the input OAM ($\ell$) is mapped to an output position ($X_{\ell}$) following
\begin{equation}
X_{\ell} = \frac{\lambda f \ell}{d}.
\end{equation}\\
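For illustration only (the numerical values of the wavelength, $f$ and $d$ below are assumptions for the example, not the parameters of our setup), the log-polar mapping and the OAM-dependent output position can be sketched as:
\begin{verbatim}
# Sketch of the log-polar mapping (u, v) and the output position X_l = lambda*f*l/d.
import numpy as np

def log_polar_map(x, y, d, b):
    """Map input-plane coordinates (x, y) to sorter coordinates (u, v)."""
    u = (d / (2 * np.pi)) * np.arctan2(y, x)      # arctan2 handles all quadrants
    v = -(d / (2 * np.pi)) * np.log(np.sqrt(x**2 + y**2) / b)
    return u, v

def output_position(l, wavelength, f, d):
    """Lateral position of the focused spot for input OAM l."""
    return wavelength * f * l / d

# Example with assumed values: 633 nm light, f = 0.3 m, d = 8 mm.
for l in (-10, -1, 1, 10):
    print(l, output_position(l, 633e-9, 0.3, 8e-3))
\end{verbatim}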
\textbf{Crosstalk analysis.} The crosstalk analysis of the vector ($\ket{\psi}$) and scalar ($\ket{\phi}$) modes is represented by a matrix of detection probabilities for each of the modes sent by Alice (rows) and measured by Bob (columns). The entries are partitioned into four quadrants: the diagonal quadrants correspond to the outcomes of measurements in matching bases, while the off-diagonal quadrants show the outcomes of measurements in complementary bases.\\
\begin{figure}
\caption{\textbf{Crosstalk analysis matrix for the four-dimensional sets of vector and scalar modes.}}
\label{fig:figure6}
\end{figure}
\textbf{Filter-based detection system.} The filter-based detection system relies on the use of beam splitters in combination with $q$-plates, wave plates and polarisers. While it is common practice for the measurement process to be the reverse of the generation process -- as is the case in linear optics -- this approach would fail in measuring high-dimensional vector mode spaces. This is because vector modes within one subset require oppositely charged $q$-plates. The best approach to probe the high-dimensional space would require the use of beam splitters, as shown in Fig.~\ref{fig:figure7}, however, at the cost of reducing the detection probability by a factor of $1/2$, thus halving the sift rate and secure key rate; for a key that is $N$ bits long, one would require sending, on average, $4N$ bits. We have tested this by building the system depicted in Fig.~\ref{fig:figure7} and performing the same prepare-and-measure QKD protocol as detailed in the main text. For a 200 bit transmission we were only able to produce a key with $25\%$ of the transmitted bits, as compared to $50\%$ using the scheme described in Fig.~\ref{fig:figure2} of the main text. This highlights one of the advantages of a deterministic detection system over a probabilistic (filter-based) one.
\begin{figure}
\caption{\textbf{Filter-based system for detecting the vector and scalar modes.}}
\label{fig:figure7}
\end{figure}
\end{document}
\begin{document}
\title{Matrix product states for quantum metrology}
\author{Marcin Jarzyna, Rafa{\l} Demkowicz-Dobrza{\'n}ski}
\affiliation{Faculty of Physics, University of Warsaw, ul. Ho\.{z}a 69, PL-00-681 Warszawa, Poland}
\begin{abstract}
We demonstrate that the optimal states in lossy quantum interferometry may be efficiently simulated using low rank matrix product states.
We argue that this should be expected in all realistic quantum metrological protocols with uncorrelated noise and is related
to the elusive nature of the Heisenberg precision scaling in the asymptotic limit of large number of probes.
\end{abstract}
\pacs{03.65.Ta, 06.20Dk, 02.70.-c, 42.50.St}
\maketitle
Over the recent years, advancements in quantum engineering have pushed
non-classical concepts such as entanglement and squeezing, previously regarded as largely academic
topics, close to practical applications. Quantum features of light and atoms
helped to improve the performance of measuring devices that operate in the regime where the precision
is limited by the fundamental laws of physics \cite{Giovanetti}.
One of the most spectacular examples of practical applications of quantum metrology
can be found in gravitational wave detectors \cite{LIGO}, where
the original idea \cite{Caves} of employing squeezed states of light to improve the sensitivity of an interferometer
has found its full scale realization \cite{GEO, GEO2011}. No less impressive are
experiments with trapped entangled ions demonstrating spectroscopic resolution enhancement crucial for the operation of the atomic clocks \cite{clock, Schmidt, Ospelkaus}.
When standard sources of laser light are being used, any interferometric
experiment may be fully described by treating each photon individually and claiming that \emph{each photon interferes only with itself}.
Sensing a phase delay $\phi$ between the two arms of the interferometer via intensity measurements
may be regarded as many independent repetitions of single photon interferometric experiments.
$N$ independent experiments result in data that allow the parameter $\phi$ to be estimated with an error scaling as
$1/\sqrt{N}$---the so-called standard quantum limit or shot-noise limit.
If, however, an experiment cannot be split into $N$ independent processes, as is e.g. the case with the $N$ probing photons being entangled,
the above reasoning is invalid and one can in principle achieve the $1/N$ estimation precision---the Heisenberg scaling \cite{Giovanetti2004, Berry, Lee, Zwierz}---with
the help of e.g. the N00N states \cite{Dowling}.
\begin{figure}
\caption{Quantum metrology with matrix product states. $N$ parallel quantum channels act on the input state $\ket{\Psi}$.}
\label{fig:bscr}
\end{figure}
Still, in all realistic experimental setups, decoherence typically makes the relevant quantum features such as squeezing or entanglement die out very quickly \cite{Huelga, Shaji}. Recently, it has been rigorously shown for optical interferometry with loss \cite{Janek, Knysh}, as well as for more general decoherence models \cite{Escher,RafalJanek:Fujiwara}, that if decoherence acts independently on
each of the probes one can get at best a $c/\sqrt{N}$ asymptotic scaling of the error---a precision that is better than the classical one only by a constant factor $c$, which depends on the type of decoherence and its strength. One can therefore appreciate the Heisenberg-like decrease in uncertainty
only in the regime of small $N$, where the precise meaning of ``small'' depends on the decoherence strength \cite{Escher}, and
in typical cases is of the order of $10$ photons/atoms.
This indicates that in the limit of a large number of probes, almost optimal performance can be achieved
by dividing the probes into independent groups where only the probes from a given group are entangled among each other. Clearly, the size of the group that is needed to approach the fundamental $c/\sqrt{N}$ bound up to a given
accuracy will depend on the strength of decoherence. Nevertheless, irrespective of how small the decoherence strength is, for $N$ large enough
the size of the group will saturate at some point and therefore asymptotically the optimal state may be regarded as only locally correlated.
A natural class of states efficiently representing locally correlated states are the Matrix Product States (MPSs) \cite{MPS1,MPS3,MPS4,MPS5,MPS6},
which have proved to be highly successful in simulating low-energy states of complex spin systems. Until now no attempt has been made, however,
to employ MPSs for quantum metrology purposes. Establishing this connection is the essence of the present paper.
The basic quantum metrology scheme is depicted in Fig.~\ref{fig:bscr}. An $N$-probe input state $\ket{\Psi}$ travels through $N$ parallel noisy channels $\Lambda_\phi$ whose action is parameterized by an unknown value $\phi$.
A measurement $\hat{\Pi}_{x}$ is performed on the output density matrix $\hat{\rho}_{\phi}=\Lambda_\phi^{\otimes N}(\ket{\Psi}\bra{\Psi})$, yielding a result $x$ with probability $p(x|\phi)=\t{Tr}(\hat{\rho}_{\phi} \hat{\Pi}_x)$. The estimation procedure is completed by specifying
an estimator function $\tilde{\phi}(x)$. Eventually we are left with the estimated value of the parameter, $\tilde{\phi}$, which in general will be
different from $\phi$. We denote the average uncertainty of estimation by $\Delta \phi=\sqrt{\langle (\tilde{\phi} - \phi)^2 \rangle}$,
where the average is performed over different measurement results $x$.
The main goal of theoretical quantum metrology is to find strategies that minimize $\Delta \phi$. For this purpose
one has to find the optimal estimator, measurement and input state. This in general is a difficult task.
To simplify the problem one may resort to the quantum Cramer-Rao inequality \cite{Braunstein, Helstrom, Nielsen, Paris}
\begin{equation}\label{eq:precF}
\Delta\phi\geq\frac{1}{\sqrt{k F(\hat{\rho}_\phi)}}, \quad F(\hat{\rho}_\phi)= \t{Tr}( \hat{\rho}_\phi \hat{L}_\phi^2)
\end{equation}
that bounds the precision of any unbiased estimation strategy based on $k$ independent repetitions of an experiment.
$F(\hat{\rho}_\phi)$ is the Quantum Fisher Information (QFI) written
in terms of $\hat{L}_\phi$---the so called symmetric logarithmic derivative (SLD)---defined implicitly as:
$2 \frac{d \hat{\rho}_\phi}{d \phi} = \hat{L}_\phi \hat{\rho}_\phi +\hat{\rho}_\phi \hat{L}_\phi$. For pure states
the formula for QFI simplifies to $F(\ket{\Psi_\phi}) = 4(\braket{\dot{\Psi}_\phi}{\dot{\Psi}_\phi} -
|\braket{\dot{\Psi}_\phi}{\Psi_\phi}|^2)$, where $\ket{\dot{\Psi}_\phi}= \frac{d \ket{\Psi_\phi}}{d \phi}$.
The bound is known to be saturable in the asymptotic limit
of $k \rightarrow \infty$ in the sense that there exists a measurement and an estimator that yield equality in \eqref{eq:precF}.
The main benefit of using the QFI is that, since it depends neither on the measurement nor on the estimator, the only remaining
optimization problem is the maximization of $F(\hat{\rho}_\phi)$ over input states.
Since the optimal states in the regime of large number of probes $N$ (not $k$)
may be regarded as consisting of independent groups, the Cramer-Rao bound may be saturated even for $k=1$ provided $N$ is
large enough \cite{Nielsen, Guta}. This makes the QFI an even more appealing quantity than in the decoherence-free case
where some controversies arise on the practical use of the strategies based on the optimization of the QFI \cite{Anisimov, Giovanetti2012}.
Maximization of QFI over the most general input states for large $N$ may still be challenging, though, and even if successful might not provide
an insight into the structure of the optimal states. This is the place where MPSs come in useful.
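As a minimal numerical illustration of the pure-state formula (the code below is an illustrative sketch; the function names and the finite-difference step are assumptions made for the example, not part of any protocol):
\begin{verbatim}
# Numerical check of F = 4(<dPsi|dPsi> - |<dPsi|Psi>|^2) via a finite difference.
import numpy as np

def qfi_pure(psi_of_phi, phi, eps=1e-6):
    """psi_of_phi: callable returning a normalised state vector for a given phi."""
    psi = psi_of_phi(phi)
    dpsi = (psi_of_phi(phi + eps) - psi_of_phi(phi - eps)) / (2 * eps)
    return 4.0 * (np.vdot(dpsi, dpsi) - np.abs(np.vdot(dpsi, psi)) ** 2).real

# Example: N00N state of N probes, with the phase written as exp(i N phi).
N = 10
def noon(phi):
    v = np.zeros(N + 1, dtype=complex)
    v[0], v[N] = 1 / np.sqrt(2), np.exp(1j * N * phi) / np.sqrt(2)
    return v

print(qfi_pure(noon, phi=0.3))   # approximately N^2 = 100
\end{verbatim}
This reproduces the Heisenberg value $F=N^2$ for a N00N state.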
A general MPS of $N$ qubits is defined as
\begin{equation}
\ket{\Psi}_{\textrm{MPS}}=\frac{1}{\sqrt{\mathcal{N}}}\sum_{\sigma_{1}\dots \sigma_{N}=0}^{1}\textrm{Tr}(A_{\sigma_{1}}^{[1]}\dots A_{\sigma_{N}}^{[N]})\ket{\sigma_{1}\dots \sigma_{N}},
\end{equation}
where $A_{\sigma_k}^{[k]}$ are square complex matrices of dimension $D\times D$, $D$ is called the bond dimension and $\mathcal{N}$ is the normalization factor.
In operational terms, an MPS is generated by assuming that each qubit is substituted by a pair of $D$-dimensional virtual systems. Adjacent systems corresponding to different neighboring particles are prepared in maximally entangled states $\ket{\varphi_{D}}=\frac{1}{\sqrt{D}}\sum_{\alpha=1}^{D}\ket{\alpha,\alpha}$ (Fig.~\ref{fig:bscr}) and
maps $A_{\sigma_k}^{[k]}=\sum_{\alpha,\beta=1}^{D}A_{\sigma_k,\alpha,\beta}\ket{\sigma_k}\bra{\alpha, \beta}$ are applied to the pair of virtual systems
corresponding to the $k$-th particle \cite{MPS5}.
Such a description of a state is very efficient provided the bond dimension $D$ increases slowly with $N$.
In the most favorable case, when $D$ may be assumed to be bounded, $D < D_\t{max}$, the number of coefficients needed to specify an $N$ qubit state
in the asymptotic regime of large $N$ scales as $N D_{\t{max}}^2$ (linear in $N$), as opposed to the standard $2^N$ scaling.
It should be noted, however, that
in many quantum metrological models, in particular the ones based on the QFI, the search for the optimal input probe states
may be restricted to symmetric (bosonic) states \cite{Huelga, Escher, Kolenderski, Rafal:OptStates}.
Even though the description of a symmetric $N$ qubit pure state is efficient and requires only $N+1$ parameters,
the use of MPSs may still offer a significant advantage as the symmetric MPS description
involves matrices $A$ which are identical for different particles: $A^{[k]}_\sigma = A_\sigma$ and
commute under the trace---$\t{Tr}(A_{\sigma_1} \dots A_{\sigma_N})$ does not depend on the order of matrices.
Provided $D$ is asymptotically bounded or grows slowly with $N$, one can still
benefit significantly from the use of MPS in the large $N$ regime.
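For small systems the trace formula can be evaluated directly; the following sketch (bond dimension, matrices and function name chosen purely for illustration) builds the full state vector of a periodic, site-independent MPS:
\begin{verbatim}
# Sketch: state vector of a periodic, site-independent MPS from Tr(A_{s1}...A_{sN}).
import itertools
import numpy as np

def mps_state_vector(A0, A1, N):
    A = [np.asarray(A0, dtype=complex), np.asarray(A1, dtype=complex)]
    amps = np.empty(2 ** N, dtype=complex)
    for idx, bits in enumerate(itertools.product((0, 1), repeat=N)):
        M = np.eye(A[0].shape[0], dtype=complex)
        for s in bits:
            M = M @ A[s]
        amps[idx] = np.trace(M)
    return amps / np.linalg.norm(amps)

# Example: D = 2 diagonal matrices give a GHZ/N00N-like state
# (only |00...0> and |11...1> have non-zero amplitudes).
A0, A1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
print(np.nonzero(mps_state_vector(A0, A1, N=4))[0])   # -> [ 0 15]
\end{verbatim}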
In order to demonstrate the power of the MPS approach, we apply it to the most thoroughly analyzed and
relevant model in quantum metrology---the lossy interferometer.
We will not specify the nature of the physical systems (atoms, photons) but will rather refer to abstract two-level probes, with orthogonal states $\ket{0}$, $\ket{1}$.
The parameter to be estimated is the relative phase delay $\phi$ a probe experiences being in $\ket{1}$
vs. $\ket{0}$ state. The decoherence
mechanism amounts to a loss of probes where each of the probes is lost independently of the others with probability $1-\eta$.
As such, this is an example of a general scheme depicted in Fig.~\ref{fig:bscr}.
Since the distinguishability of probes offers no advantage for phase estimation \cite{Rafal:OptStates} we move to the symmetric state description
where the general $N$ probe state
reads $\ket{\Psi} = \sum_{n=0}^{N}\alpha_n\ket{n,N-n}$, and $\ket{n,N-n}$ represents $n$ and $N-n$ probes in states $\ket{0}$ and $\ket{1}$ respectively.
The output state $\hat{\rho}_{\phi}$ can be written explicitly as:
\begin{equation}
\label{eq:rho}
\hat{\rho}_\phi=\sum_{l_0=0}^{N}\sum_{l_1=0}^{N-l_0}p_{l_0 l_1}\ket{\Psi^{l_0 l_1}_\phi}\bra{\Psi^{l_0 l_1}_\phi}
\end{equation}
where
\begin{multline}
\ket{\Psi^{l_0 l_1}_\phi}=\frac{1}{\sqrt{p_{l_0 l_1}}}\sum_{n=l_0}^{N-l_1}\alpha_n e^{\mathrm{i} n \phi}\beta_{l_0 l_1}^{n}\ket{n-l_0,N-n-l_1}\\
\beta_{l_0 l_1}^{n}(\eta)= \sqrt{B_{l_0}^{n}B_{l_1}^{N-n}}, \ B_{l}^{n}={{n}\choose{l}}\eta^{n-l}(1-\eta)^{l}
\end{multline}
and $p_{l_0 l_1}$ is a normalization factor which can be interpreted as a probability to lose $l_0$, $l_1$ probes in states
$\ket{0}$ and $\ket{1}$ respectively.
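The binomial structure of these coefficients is straightforward to evaluate numerically; the following sketch (an illustration with assumed function names, not the code used for the figures) computes $B_{l}^{n}$, $\beta_{l_0 l_1}^{n}$ and the weights $p_{l_0 l_1}$ and checks that the latter sum to one:
\begin{verbatim}
# Sketch: B_l^n = C(n,l) eta^(n-l) (1-eta)^l and the weights p_{l0 l1}.
import numpy as np
from math import comb

def B(l, n, eta):
    return comb(n, l) * eta ** (n - l) * (1 - eta) ** l

def beta(l0, l1, n, N, eta):
    return np.sqrt(B(l0, n, eta) * B(l1, N - n, eta))

def p_l0l1(alpha, l0, l1, eta):
    """Probability of losing l0 probes from |0> and l1 probes from |1>."""
    N = len(alpha) - 1
    return sum(abs(alpha[n]) ** 2 * beta(l0, l1, n, N, eta) ** 2
               for n in range(l0, N - l1 + 1))

# Sanity check: the weights sum to one for any normalised input state.
N, eta = 6, 0.9
alpha = np.random.randn(N + 1) + 1j * np.random.randn(N + 1)
alpha /= np.linalg.norm(alpha)
print(round(sum(p_l0l1(alpha, l0, l1, eta)
                for l0 in range(N + 1) for l1 in range(N + 1 - l0)), 10))   # -> 1.0
\end{verbatim}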
As the output state $\hat{\rho}_\phi$ is mixed, QFI is not explicitly given in terms of the input state parameters as it requires
involved calculation of the SLD or other equivalent quantities \cite{Escher, Toth, Fujiwara}.
In general, due to convexity, QFI for a mixed state is smaller than a weighted sum
of QFIs for pure states into which a mixed state may be decomposed. Nevertheless,
it was shown in \cite{Dorner} that assuming additional knowledge of how many photons were lost
while in the state $\ket{0}$ and how many while in the state $\ket{1}$
does not improve the estimation precision appreciably. Hence, one can approximate the QFI by a weighted sum of the QFIs of all pure states entering the mixture $\hat{\rho}_\phi$:
\begin{equation}
\label{eq:fishapp}
F(\hat{\rho}_\phi) \lesssim \tilde{F}(\hat{\rho}_\phi) = \sum_{l_0=0}^N \sum_{l_1=0}^{N-l_0} p_{l_0,l_1} F(\ket{\Psi^{l_0,l_1}_\phi}).
\end{equation}
This approximation simplifies the calculations significantly since in our case
$F(\ket{\Psi_\phi^{l_0l_1}}) = 4(\bra{\Psi_\phi^{l_0l_1}} \hat{n}^2 \ket{\Psi_\phi^{l_0 l_1}} - |\bra{\Psi_\phi^{l_0l_1}} \hat{n} \ket{\Psi_\phi^{l_0l_1}}|^2)$ with $\hat{n}$ being the excitation number operator, $\hat{n} \ket{n,N-n} = n \ket{n, N-n}$.
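A minimal sketch of this approximation (reusing the binomial factors above; again an illustration rather than the code behind our results) is:
\begin{verbatim}
# Sketch of F_tilde: sum over (l0, l1) of p_{l0 l1} * 4 Var(n) on the conditional states.
import numpy as np
from math import comb

def B(l, n, eta):
    return comb(n, l) * eta ** (n - l) * (1 - eta) ** l

def F_tilde(alpha, eta):
    N = len(alpha) - 1
    total = 0.0
    for l0 in range(N + 1):
        for l1 in range(N + 1 - l0):
            ns = np.arange(l0, N - l1 + 1)
            w = np.array([abs(alpha[n]) ** 2 * B(l0, n, eta) * B(l1, N - n, eta)
                          for n in ns])
            p = w.sum()
            if p == 0:
                continue
            w = w / p                                 # conditional distribution over n
            total += p * 4.0 * ((w * ns**2).sum() - (w * ns).sum() ** 2)
    return total

# Lossless N00N state recovers F = N^2; losses degrade it rapidly.
N = 8
noon = np.zeros(N + 1); noon[0] = noon[N] = 1 / np.sqrt(2)
print(F_tilde(noon, eta=1.0), F_tilde(noon, eta=0.9))   # -> 64.0 and about 27.6
\end{verbatim}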
Direct optimization of formula \eqref{eq:fishapp} over the input state parameters $\alpha_n$
involves $N+1$ variables.
This approach was taken in \cite{Dorner, Rafal:OptStates}. Here we
consider the class of symmetric MPSs parameterized by two
diagonal (ensuring commutativity under the trace) $D \times D$ matrices $A_{0}$, $A_{1}$.
These MPSs are parameterized by $2D$ complex numbers, instead of $N+1$, and read explicitly:
\begin{equation}
\ket{\Psi}_{\textrm{MPS}}=\frac{1}{\sqrt{\mathcal{N}}}\sum_{n=0}^{N}\sqrt{{{N}\choose{n}}}\textrm{Tr}(A_{0}^n A_{1}^{N-n})\ket{n,N-n}.
\end{equation}
Thanks to the simple form of Eq.~\eqref{eq:fishapp}, it is possible to compute $\tilde{F}$ directly in terms of the $A_\sigma$ matrices, and there is no need to go back to the less efficient standard description, as would be the case with the formula \eqref{eq:precF}.
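For concreteness, the amplitudes generated by such a symmetric MPS can be obtained directly from the diagonals of $A_0$ and $A_1$ and then fed into the evaluation of $\tilde{F}$ sketched above (the particular diagonal values below are illustrative, not the optimized ones):
\begin{verbatim}
# Sketch: alpha_n proportional to sqrt(C(N,n)) * Tr(A0^n A1^(N-n)) for diagonal A0, A1.
import numpy as np
from math import comb

def symmetric_mps_amplitudes(a0_diag, a1_diag, N):
    a0 = np.asarray(a0_diag, dtype=complex)
    a1 = np.asarray(a1_diag, dtype=complex)
    alpha = np.array([np.sqrt(comb(N, n)) * np.sum(a0**n * a1**(N - n))
                      for n in range(N + 1)])
    return alpha / np.linalg.norm(alpha)

# Example: D = 2 with complementarily ordered diagonals.
N = 20
alpha = symmetric_mps_amplitudes([0.9, 0.4], [0.4, 0.9], N)
print(np.round(np.abs(alpha) ** 2, 4))   # probability distribution |alpha_n|^2 over n
\end{verbatim}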
\begin{figure*}
\caption{(color online) Log-log plots of the phase-estimation precision with losses ($\eta=0.9$) as a function of the
number of probes $N$, optimized over input states, in (a) the QFI approach, where $\Delta \phi = 1/\sqrt{\tilde{F}}$, and (b) the error-propagation approach of Eq.~\eqref{eq:precLoss}.}
\label{fig:combined}
\end{figure*}
Fig.~\ref{fig:combined}a illustrates the precision obtained using MPS for the case of relatively small losses $\eta=0.9$.
As one can see, the MPS approximation is excellent. In particular, the upper-right inset shows that already $D=5$ is sufficient
to obtain less than $1\%$ discrepancy for $N \leq 500$.
We have confirmed this observation for different $\eta$ and observed
that for higher losses (lower $\eta$) a lower $D$ is required to obtain a given level of
approximation for a particular $N$---an effect that should be even more pronounced for larger $N$, reflecting the fact
that stronger decoherence diminishes the role of quantum correlations.
Moreover, we have observed that the optimal matrices $A_{0}$, $A_{1}$ have the same diagonal values, ordered complementarily---the
highest entry in $A_0$ is paired with the lowest one in $A_1$, and so on. The higher $N$ is, the closer the diagonal values approach each other, as can be seen from the lower-left inset of Fig.~\ref{fig:combined}. This confirms the intuition that with increasing $N$ the optimal states
become less distinct from the product state, for which all diagonal values of $A_\sigma$ are equal.
The peculiarity of phase estimation is that in the decoherence-free case the optimal QFI is achieved for the N00N
state which, even though it has non-local correlations, is an example of an MPS with $D=2$. This makes MPS
capable of approximating the optimal states very well even for low loss and small $N$ $[N \lesssim 1/(1-\eta)]$---an ability that in general will not hold for other estimation problems.
Taking now a more operational approach, not based on the QFI, one may consider a concrete measurement
scheme with a particular observable $\hat{O}$ being measured.
The simple error-propagation formula for $\phi$ yields
$\Delta \phi = \Delta \hat{O} /|\frac{\t{d}\langle \hat{O}\rangle}{\t{d}\phi}|$.
In the Ramsey spectroscopy setup \cite{Wineland1}, or equivalently in the Mach-Zehnder interferometer with a photon-number-difference measurement,
one effectively measures a component of the total angular momentum operator $\hat{J}$ of $N$ spin-1/2 particles---if a qubit with states $\ket{0}$, $\ket{1}$ is treated as a spin-1/2 particle \cite{Lee}. If the phase-dependent rotation $\hat{U}=e^{i\phi\hat{J}_{z}}$ is being sensed by the measurement
of the $\hat{J}_x$ observable, the explicit formula for estimation uncertainty at the optimal operation point $\phi=0$
calculated for $\hat{\rho}_\phi$ from Eq.~\eqref{eq:rho} reads
\begin{equation}\label{eq:precLoss}
\Delta \phi=\sqrt{\frac{\Delta^2\hat{J}_x}{\langle\hat{J}_y\rangle^2}+\frac{1-\eta}{\eta}\frac{N}{4\langle\hat{J}_y\rangle^2}}.
\end{equation}
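As a hedged consistency check of Eq.~\eqref{eq:precLoss} (assuming, as an illustration, that the moments are evaluated for the input product state of $N$ probes polarized along $y$), one has $\Delta^2\hat{J}_x = N/4$ and $\langle\hat{J}_y\rangle = N/2$, so that
\begin{equation*}
\Delta \phi=\sqrt{\frac{N/4}{N^2/4}+\frac{1-\eta}{\eta}\,\frac{N}{4\,(N^2/4)}}
=\sqrt{\frac{1}{N}+\frac{1-\eta}{\eta N}}=\frac{1}{\sqrt{\eta N}},
\end{equation*}
i.e.\ the classical shot-noise limit in the presence of losses; the optimized (squeezed) input states discussed below improve on this value.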
The search for the optimal state amounts to minimizing Eq.~\eqref{eq:precLoss}. Since the latter depends only on the first and second moments of $\hat{J}$,
it is simple to evaluate numerically using MPS. Results are presented in Fig.~\ref{fig:combined}b.
It is clear that MPS are capable of capturing the essential feature of the
optimal states---the squeezing of $\hat{J}_x$---with relatively low bond dimensions $D$.
Moreover, the upper-right inset indicates that the required bond dimension $D$ is reduced much more significantly with increasing decoherence strength
than in the QFI approach.
The lower-left inset confirms again that the structure of the optimal states gets closer to the product state structure with increasing $N$.
We have also applied the MPS approach to Ramsey spectroscopy with other decoherence models including independent dephasing, depolarization and spontaneous emission and have obtained completely analogous results.
In summary, we have shown that MPS are very well suited for achieving the optimal performance in realistic quantum metrological setups and
may reduce the numerical effort while searching for the optimal estimation strategies.
Even though we have based our presentation on a single model of lossy phase estimation
we anticipate these conclusions to be valid in all metrological setups where decoherence makes the asymptotic Heisenberg scaling
unachievable---the intuitive argument being that no large scale strong correlations are needed to reach the optimal performance.
An intriguing open question remains: is it possible, as it is in many-body physics problems,
to obtain an exponential reduction in numerical complexity thanks to the use of MPS?
This is not possible when the optimal states
are known to be symmetric, as in the lossy phase estimation.
In problems, however, where distinguishability of the probes is essential, as e.g. in
Bayesian multiparameter estimation \cite{Chiribella, Bagan}, MPS might demonstrate their full potential when the impact of decoherence is
taken into account.
We thank Janek Ko{\l}ody{\'n}ski and Konrad Banaszek for fruitful discussions and support.
This research was supported by the European Commission under the Integrating Project Q-ESSENCE,
Polish NCBiR under the ERA-NET CHIST-ERA project QUASAR
and the Foundation for Polish Science under the TEAM programme.
\end{document}
\begin{document}
\title{Artificial boundary conditions for linearized
stationary incompressible viscous flow around a rotating and translating body}
\date{}
\author{P. Deuring $^1$, S. Kra\v cmar $^{2,3}$, \v S. Ne\v casov\' a$^4$}
\maketitle
\date{}
\centerline{$^1$ Univ. Littoral C\^ote d'Opale, Laboratoire de math\'ematiques} \centerline{pures
et appliqu\'ees Joseph Liouville}
\centerline{e-mail: {\tt [email protected]}}
\centerline{$^2$ Department of Technical Mathematics, Czech Technical University}
\centerline{$^3$ Institute of Mathematics of the Academy of Sciences of the Czech Republic}
\centerline{e-mail: {\tt [email protected]}}
\centerline{$^4$ Institute of Mathematics of the Academy of Sciences of the Czech Republic}
\centerline{e-mail: {\tt [email protected]}}
\date{}
\vskip0.25cm
\begin{abstract}
We consider the linearized and nonlinear stationary incompressible viscous flow around a rotating and translating body in the exterior domain $\mathbb R^3 \setminus \overline{\mathcal D}$, where $\mbox{$\mathcal D$} \subset \mathbb{R}^3 $ is open and bounded,
with Lipschitz boundary. We derive pointwise estimates of the pressure in both cases. Moreover, we consider the linearized problem in the truncation
$\mathcal D_R:=B_R \backslash \overline{ \mathcal D}$
of the exterior domain
$\mathbb R^3 \setminus \overline{\mathcal D}$
under certain artificial boundary conditions on the truncating boundary $\partial B_R$,
and then compare this solution with the solution in the exterior domain $\mathbb R^3 \setminus \overline{\mathcal D}$ to get the truncation error estimate.
\end{abstract}
\section{Introduction}
We consider the systems of equations
\begin{equation}\label{1.0}
\begin{array}{crl}
-\Delta u (z) + ( \tau e_1- \varrho e_1 \times z)\cdot \nabla u(z) + \varrho e_1 \times u (z)
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \qquad\qquad\qquad\qquad
+\tau(u(z)\cdot \nabla) u(z)+\nabla \pi(z) = F(z)\\
{\rm div}\,u(z)=0 \ \text{for} \ z\in \mathbb R^3 \setminus \overline{\mathcal D}
\end{array}
\end{equation}
\begin{equation}\label{eq_1.1}
\begin{array}{crl}
-\Delta u (z) + ( \tau e_1- \varrho e_1 \times z)\cdot \nabla u(z) + \varrho e_1 \times u (z)
+\nabla \pi(z) = F(z)\\
{\rm div}\,u(z)=0 \ \text{for} \ z\in \mathbb R^3 \setminus \overline{\mathcal D}\\
\end{array}
\end{equation}where $\mbox{$\mathcal D$} \subset \mathbb{R}^3 $ is open and bounded,
with Lipschitz boundary.
Problems (\ref{1.0}) and (\ref{eq_1.1}), together with some boundary conditions on $ \partial \mathcal D $, constitute mathematical models (non-linear and linear, respectively) describing the stationary flow of a viscous incompressible fluid around a rigid body which moves at a constant velocity and rotates at a constant angular velocity, where we consider the case in which the rotation is parallel to the velocity at infinity. { For details concerning the derivation of the model, see \cite{Fa2, G2}. The description and analysis of the case when the rotation is not parallel to the velocity at infinity can be found in \cite{FGTN, GH}}.
The aim of this paper is twofold.
First, we derive pointwise estimates of the pressure in the linear
and
in the non-linear case, in
order to complement the pointwise estimates of the velocity and its gradient from \cite{DKN5,3} and thus to obtain complete decay information on both parts $u,\,\pi $ of solutions to systems (\ref{1.0}) and (\ref{eq_1.1}).
{ Let us mention that the decay of pressure was also investigated in the work of Galdi, Kyed \cite{GK1} and in case of pure rotation see \cite{FGK}}.
Second, we solve the linear system (\ref{eq_1.1}) in the truncation
$\mathcal D_R:=B_R \backslash \overline{ \mathcal D}$
of the exterior domain
$\mathbb R^3 \setminus \overline{\mathcal D}$
under certain artificial boundary conditions on the truncating boundary $\partial B_R$,
and then compare this solution with the solution of (\ref{eq_1.1}) in the exterior domain, i.e. we derive an error estimate for the method of artificial boundary conditions. For this purpose we use the pointwise estimates of the velocity and of the pressure.
{ A mathematical analysis of the Navier-Stokes equations with artificial boundary conditions was performed by many authors, but without considering the rotation of the body; see e.g.
\cite{BoFa,BrMu,DeKr,DeKr1}. The present article can be seen as a first result on the motion of viscous fluids around a rotating and translating body with an artificial boundary condition.}
The paper is organized as follows: In the rest of this section we introduce notation and give some auxiliary results.
Section 2 deals with pointwise estimates of the pressure for the linear system (\ref{eq_1.1}). In Section 3 we consider the linear system (\ref{eq_1.1}) with artificial boundary conditions; the error estimate for the velocity is derived by comparison with the solution of the system in the exterior domain.
First let us introduce some notation.
\subsection*{Definitions and notation related to the rotational system}
Define
$ s(y):=1+|y|-y_1$ for $y \in \mathbb{R}^3 $,
$\mathcal D_R:=B_{R}\setminus\overline {\mathcal D}$,
$B_R^c:=\mathbb R^3\setminus\overline{B_R}$,
\noindent
where $B_R:=\{x\in\mathbb R^3 ; |x|<R\}$ for $R>0$ such that $B_R \supset\overline{\mathcal D}. $
\noindent
So, $\mathcal D_R$ is the truncation of the exterior domain $ \overline{\mathcal D}^c:=\mathbb R ^3\setminus\overline{\mathcal D}\, $ by the ball $B_R$. The boundary $\partial \mathcal D_R$ consists of the parts $\partial \mathcal D$ and $\partial B_R$; the latter we call the truncating boundary.
Fix $\tau \in (0,\infty)$, $\rho \in \mathbb R \setminus \{0\}$,
and put
$e_1:=(1,0,0),\;\Omega : = \rho \begin{pmatrix} 0&0&0\\ 0&0&-1\\ 0&1&0 \end{pmatrix}$,
so that
$\color{black}
\Omega \cdot z = \rho e_1 \times z
\color{black}\
$
for $z\in \mathbb R^3$.
For $U \subset\mathbb R^3$ open, $u\in W^{2,1}_{{\rm loc}}(U)^3$, $z\in U$, put
\begin{align*}
(Lu) (z):= & -\Delta u(z) + \tau \partial_1 u(z) - (\rho e_1 \times z) \cdot \nabla u (z)
+\rho e_1 \times u(z),\qquad \qquad \ \\
\ \ \ (L^*u)(z):= & - \Delta u(z) - \tau \partial_1 u(z) + (\rho e_1 \times z) \cdot \nabla u(z)
-\rho e_1 \times u(z).
\end{align*}
Put
\begin{eqnarray*}
&&K(z,t):= (4\pi t)^{-3/2}e^{-|z|^2/(4t)} \quad (z\in\mathbb R^3, t\in (0,\infty)),\\
&&\Lambda (z,t):=\left(K(z,t)\delta_{jk}+ \partial z_j \partial z_k
\left(\int_{\mathbb R^3}(4\pi |z-y|)^{-1} K(y,t) dy\right)\right)_{1\le j,k\le 3} \qquad \\
&&(z\in \mathbb R^3,\ t>0),\\
&&
\Gamma(x,y,t):=
\color{black}
\Lambda (x-\tau te_1 - e^{-t\Omega}y,t)
\color{black}
\cdot e^{-t\Omega},\\
&&
\widetilde{\Gamma}(x,y,t):= \Lambda (x+\tau te_1 - e^{t\Omega}y,t)\cdot e^{t\Omega}
\quad (x,y\in\mathbb R^3, t>0),\\
&&\mathcal Z(x,y):= \int^{\infty}_0 \Gamma (x,y,t) dt,
\ \widetilde{ \mathcal Z}(x,y):=\int^{\infty}_0 \widetilde{\Gamma} (x,y,t) dt,
\\
&&
(x,y\in\mathbb R^3, \, x\not = y).
\end{eqnarray*}
For $q\in (1,2)$, $f\in L^q(\mathbb R^3)^3$, put
$$
\mathcal R(f)(x) : = \int_{\mathbb R^3} \mathcal Z(x,y) f(y) dy \quad (x\in \mathbb R^3);
$$
see \cite[Lemma 3.1]{2}.
We will use the space
$D^{1,2}_0( \overline{ \mathcal D}^c)^3:=\{v \in L^6( \overline{ \mathcal D}^c)^3
\cap H^1_{loc}( \overline{ \mathcal D}^c)^3\, :\,
\nabla v \in L^2( \overline{ \mathcal D}^c)^9,\; v| \partial \mathcal D =0\}$ equipped with the norm
$\|\nabla u\|_2$, where $v|\partial\mathcal D$ means the trace of $v$ on $\partial \mathcal D$.
For $p \in (1, \infty ),$ define $M _p$ as the space of all pairs
of functions $(u,\pi )$ such that
$
u \in W^{2,p}_{loc}( \overline{ \mbox{$\mathcal D$} }^c)^3,\;
\pi \in W^{1,p}_{loc}( \overline{ \mbox{$\mathcal D$} }^c),
$
\begin{eqnarray*} &&
u| \mbox{$\mathcal D$} _R \in W^{1,p}( \mbox{$\mathcal D$} _R)^3,
\quad
\pi | \mbox{$\mathcal D$} _R \in L^p( \mbox{$\mathcal D$} _R),
\quad
u| \partial \mbox{$\mathcal D$} \in W^{2-1/p,\,p}( \partial \mbox{$\mathcal D$} )^3,
\\ && \hspace{3em} \nonumber
\mbox{div}\, u| \mbox{$\mathcal D$} _R \in W^{1,p}( \mbox{$\mathcal D$} _R),
\quad
L(u)+ \nabla \pi | \mbox{$\mathcal D$} _R \in L^p( \mbox{$\mathcal D$} _R)^3
\end{eqnarray*}
for some $ R \in (0, \infty ) $ with $ \overline{ \mbox{$\mathcal D$} } \subset B_R$.
We write $C$ for generic constants.
It should be clear from the context on which parameters these constants depend.
In order to avoid possible ambiguities,
we sometimes use the notation $C (\gamma _1,\, ...,\, \gamma _n)$
to indicate that the constant in question depends in particular on
$\gamma _1,\, ...,\, \gamma _n \in (0, \infty ) $, for some $n \in \mathbb{N} $;
the relevant constant may depend on other parameters as well.
\subsection*{Auxiliary results to asymptotic behavior of the pressure}
\begin{lem}\label{Theorem 1} {\rm (Weyl's lemma).}
Let $n\in\mathbb N$, $U\subset \mathbb R^n$ open, $u \in L^1_{{\rm loc}}(U)$ with $\int_Uu\cdot \Delta l\, dx =0$ for $l\in C^{\infty}_0(U)$. Then $u\in C^{\infty}(U)$ and $\Delta u =0$.
\end{lem}
{\it Proof:} An elementary proof is given in
\cite[Appendix]{4}.
$\Box$
For $q\in (1,3/2)$, $h\in L^q(\mathbb R^3)$, put
\[
\mathcal N(h)(x) := \int_{\mathbb R^3} - (4\pi|x-y|)^{-1}h(y) dy \quad (x\in \mathbb R^3).
\]
For $q\in (1,3)$, $h\in L^q(\mathbb R^3)$, put
\[
\mathcal S(h)(x):=\left(\int_{\mathbb R^3}(4\pi|x-y|^3)^{-1} (x-y)_j\cdot h(y)dy\right)_{1\le j\le 3}
\quad (x\in\mathbb R^3).
\]
For $q\in (1,3)$, $h\in L^q(\mathbb R^3)^3$, put
\[
\mathcal P(h)(x):=\int_{\mathbb R^3} (4\pi|x-y|^3)^{-1} ((x-y)\cdot h(y))dy \quad (x\in\mathbb R^3).
\]
Note that $\mathcal S(h)$ is a vector-valued function with $h$ being scalar, whereas $\mathcal P(h)$ is a scalar
function with $h$ being vector-valued.
\begin{lem}\label{Theorem 2}
{\it Let $q\in (1,3/2)$, $h\in L^q(\mathbb R^3)$.
Then $\mathcal N(h) \in W^{2,q}_{{\rm loc}} (\mathbb R^3) \cap L^{(1/q-2/3)^{-1}}(\mathbb R^3)$,
$\Delta\mathcal N(h)=h$.
If $h\in W^{1,q}(\mathbb R^3)$,
then $\partial_l \mathcal N(h) = \mathcal N(\partial_lh)$ $(1\le l \le 3)$.
Let $q\in (1,3)$, $h\in L^q (\mathbb R^3)$. Then
$
\mathcal S(h) \in W^{1,q}_{{\rm loc}}(\mathbb R^3)^3
,
\ {\rm div}\,\mathcal S(h)=h.
$
If $q\in (1,3/2)$,
then $\nabla \mathcal N(h) = \mathcal S(h)$.
If $h\in W^{1,q}(\mathbb R^3)$,
then $\mathcal S(h)\in W^{2,q}_{{\rm loc}}(\mathbb R^3)^3
$.
Let $q\in (1,3)$, $h\in L^q(\mathbb R^3)^3$. Then
\begin{align*}
&\mathcal P(h)\in W^{1,q}_{{\rm loc}}(\mathbb R^3) \cap L^{(1/q-1/3)^{-1}}(\mathbb R^3),
\\&
\left(\int_{\mathbb R^3}\left(\int_{\mathbb R^3}|x-y|^{-2}|h(y)|dy\right)^{(1/q-1/3)^{-1}}dx\right)^{1/q-1/3}
\le C\|h\|_q.
\end{align*}
}
\end{lem}
{\it Proof:} The assertion of Lemma \ref{Theorem 2} follows from the well-known Hardy-Littlewood-Sobolev inequality, the Calderon-Zyg\-mund inequality,
and density arguments.
$\Box$
\begin{lem}\label{Fa}
\cite[Lemma 2.2]{KrNoPo}
Let $B\in\mathbb R,\, S\in(0,\infty).$
Then
\begin{equation}
\int_{\partial B_R}s(x)^{-B}\, do_x\le C(S,B)\cdot R^{2-\min\{1,B\}}\cdot\sigma(R)
\end{equation}
for $R\in[S,\infty)$, with $\sigma(R):=1$
if $B\ne1$, and $\sigma(R)=\ln(1+R)$ if $B=1$.
\end{lem}
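For orientation (this computation is not part of the proof in \cite{KrNoPo}), the exponent $2-\min\{1,B\}$ can be recovered directly: writing points of $\partial B_R$ in spherical coordinates with polar angle $\theta$ measured from the positive $x_1$-axis, one has $s(x)=1+R(1-\cos\theta)$, and the substitution $t=1+R(1-\cos\theta)$ gives
\begin{eqnarray*}
\int_{\partial B_R}s(x)^{-B}\, do_x
= 2\pi R^2\int_0^{\pi}\bigl(1+R(1-\cos\theta)\bigr)^{-B}\sin\theta\, d\theta
= 2\pi R\int_1^{1+2R}t^{-B}\, dt,
\end{eqnarray*}
which, for $R\in[S,\infty)$, is bounded by $C(S,B)\,R^{2-B}$ if $B<1$, by $C(S)\,R\,\ln(1+R)$ if $B=1$, and by $C(B)\,R$ if $B>1$.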
\section{Decay estimates }
In the first part of this section we recall some known results from \cite{2} and \cite{3}
about the decay of the velocity part of the solution of system (\ref{eq_1.1}), and, in order to get the full decay characterization of the solution, we derive the decay of the pressure part of the solution of (\ref{eq_1.1}). In the second part of this section we extend the result for the pressure to the non-linear case of (\ref{1.0}).
\subsection*{Decay estimates in the linear case}
Our starting point is a decay result from \cite{3} for the velocity part $u$ of a solution to
(\ref{eq_1.1}).
\begin{thm}\label{thm_1.1}
{\rm (\cite[Theorem 3.12]{3})}
Suppose that \mbox{$\mathcal D$} is $C^2$-bounded.
Let $p\in (1,\infty)$, $(u,\pi)\in M_p$. Put $F = L(u)+\nabla \pi$.
Suppose there are numbers $S_1,S,\gamma \in (0,\infty)$, $A\in [2,\infty)$, $B\in \mathbb R$
such that $S_1 < S$,
\[\overline{\mathcal D}\cup {\rm supp}({\rm div}\,u) \subset B_{S_1}, \quad u|{B_{S}^c} \in L^6 (B^c_S)^3,
\quad
\nabla u|{B^c_S} \in L^2(B_S^c)^9,
\]
\[
A+\min \{1,B\} \ge 3, \ |F(z)|\le \gamma|z|^{-A}s(z)^{-B} \ \text{for} \ z\in B^c_{S_1}.
\]
Then
\begin{equation}\label{1.6}
\displaystyle|u(y)|\le
C\, (|y|s(y))^{-1}\,l_{A,B}(y),
\end{equation}
\begin{equation}\label{1.7}
| \nabla u(y)| \le
C\, (|y|s(y))^{-3/2} \, s(y)^{ \max{(0,7/2-A-B)}}
\color{black}
\,l_{A,B}(y)
\color{black}
\end{equation}
for $y\in B^c_S$,
where
the function $l_{A,B}$
is given by
$$
l_{A,B}(y) = \left\{
\begin{array}{ll}
1 & \text{if} \quad A + \min \{1,B\} > 3,\\
\max \bigl(1, \ln |y|\bigr) & \text{if} \quad A + \min \{1,B\} = 3.
\end{array} \right.
$$
\end{thm}
The requirements $u|{B_{S}^c} \in L^6 (B^c_S)^3,\; \nabla u|{B^c_S} \in L^2(B_S^c)^9$
should be interpreted as decay conditions on $u$.
It may be deduced from Theorem \ref{thm_1.1} that inequalities
(\ref{1.6}) and (\ref{1.7}) hold under assumptions weaker than those stated in that
theorem. We specify this more general situation in the ensuing corollary, which in addition
indicates some properties of $F$ that will be useful in the following.
\begin{cor} \label{corollary2.1}
Let $p \in (1, \infty ),\; \gamma ,\,S_1,\, S \in (0, \infty ) $ with
$\overline{ \mbox{$\mathcal D$} }\subset B_{S_1},\; S_1<S,\; A \in [2, \infty ),\;B \in \mathbb{R} $
with $A+\min\{1,B\}\ge 3$. Let $F:\overline{ \mbox{$\mathcal D$} }^c \mapsto \mathbb{R}^3 $
be measurable with $F| \mbox{$\mathcal D$} _{S_1} \in L^p( \mbox{$\mathcal D$} _{S_1})^3$
and
$|F(z)|\le \gamma|z|^{-A}s(z)^{-B} \ \text{for} \ z\in B^c_{S_1}.$
Let $u \in W^{1,p}_{loc}( \overline{ \mbox{$\mathcal D$} }^c)^3$ with
$u|{B_{S}^c} \in L^6 (B^c_S)^3$,
$\nabla u|{B^c_S} \in L^2(B_S^c)^9$,
${\rm supp}({\rm div}\,u)\subset B_{S_1}$,
\begin{eqnarray} \label{2.*}&&\hspace{-3em}
\int_{ \overline{ \mbox{$\mathcal D$} }^c}\bigl[\, \nabla u \cdot \nabla \varphi
+ \bigl(\, \tau \, \partial _1u - ( \varrho \, e_1 \times z) \cdot \nabla u
+ ( \varrho \, e_1 \times u) - F \,\bigr) \cdot \varphi \,\bigr]\, dz
\\&&\nonumber \hspace{-3em}
=0 \quad \mbox{for}\;\;
\varphi \in C ^{ \infty } _0( \overline{ \mbox{$\mathcal D$} }^c)^3\;\;\mbox{with}\;\;
{\rm div}\, \varphi =0.
\end{eqnarray}
Then inequalities (\ref{1.6}) and (\ref{1.7}) hold for $y \in B_S^c$.
Moreover $F \in L^q(\overline{ \mbox{$\mathcal D$} }^c)^3$ for $q \in (1,p]$.
If $p\ge 6/5,$ the function $F$ may be considered as a bounded linear functional on
$D ^{1,2}_0( \overline{ \mbox{$\mathcal D$} }^c)^3$, in the usual sense.
\color{black}
Let $\pi \in L^p_{loc}( \overline{ \mbox{$\mathcal D$}}^c)$ with
\begin{eqnarray} \label{2.**}&&\hspace{-2em}
\int_{ \overline{ \mbox{$\mathcal D$} }^c}\bigl[\, \nabla u \cdot \nabla \varphi
+ \bigl(\, \tau \, \partial _1u - ( \varrho \, e_1 \times z) \cdot \nabla u
+ ( \varrho \, e_1 \times u) - F \,\bigr) \cdot \varphi
\\&&\nonumber
-\pi\, {\rm div}\, \varphi \,\bigr]\, dz
=0 \quad \mbox{for}\;\;
\varphi \in C ^{ \infty } _0( \overline{ \mbox{$\mathcal D$} }^c)^3.
\end{eqnarray}
Fix some number $S_0 \in (0,S_1)$ with $\overline{ \mbox{$\mathcal D$}}\cup {\rm supp}({\rm div}\, u)
\subset B_{S_0}.$
Then the relations
$u| \overline{ B_{S_0}}^c \in W^{2,p}_{loc}(\overline{B_{S_0}}^c)^3,\;
\pi \in W^{1,p}_{loc}(\overline{B_{S_0}}^c)$
and
$L(u| \overline{ B_{S_0}}^c)+\nabla \pi = F| \overline{ B_{S_0}}^c$
hold.
\color{black}
\end{cor}
{\it Proof:}
For $z \in B_{S_1}^c$, using that $1\le s(z)\le 1+2|z|\le C(S_1)\,|z|$, we have
\begin{eqnarray*}&&\hspace{-1em}
|F(z)|
\le
\gamma \, C(S_1,A)\,|z| ^{-2} \, s(z)^{-A+2-B}
\le
\gamma \,C(S_1,A)\,|z| ^{-2} \, s(z) ^{-A+2-\min\{1,B\}}
\\[1ex]&&\hspace{-1em}
\le
\gamma \,C(S_1,A)\, |z| ^{-2} \, s(z) ^{-1}.
\end{eqnarray*}
Thus for $q \in (1, \infty ),$ with
Lemma \ref{Fa},
\begin{eqnarray*}
\int_{ B_{S_1}^c}|F(z)|^q\, dz
\le
C\, \int_{ S_1} ^{ \infty } r^{-2q}\, \int_{ \partial B_r}s(z) ^{-q}\, do_z\, dr
\le
C\, \int_{ S_1} ^{ \infty } r^{-2q+1}\,dr < \infty .
\end{eqnarray*}
It follows that $F \in L^q( \overline{ \mbox{$\mathcal D$}}^c)^3$ for $q \in (1,p]$.
According to \cite[Theorem II.6.1]{Galdi}, the inequality $\|v\|_6\le C\, \| \nabla v\|_2$
holds for $v \in
\color{black}
D ^{1,2}_0(\overline{ \mbox{$\mathcal D$}}^c)^3
\color{black}
$.
Thus, if $p\ge 6/5$, so that $F \in L^{6/5}(\overline{ \mbox{$\mathcal D$}}^c)^3$,
the function $F$ may be considered as a bounded linear functional on
$D ^{1,2}_0(\overline{ \mbox{$\mathcal D$}}^c)^3$.
The $L^p$-integrability of $F$ and the assumptions on $u$ imply that the function
\begin{eqnarray} \label{BB}
G(z):=F(z)-
\bigl(\, \tau \, e_1 - ( \varrho \, e_1 \times z) \,\bigr) \cdot \nabla u(z)
- \bigl(\, \varrho \, e_1 \times u(z) \,\bigr) ,\;\;
z \in \overline{ \mbox{$\mathcal D$}}^c,
\end{eqnarray}
belongs to $L^p_{loc}(\overline{ \mbox{$\mathcal D$}}^c)^3$.
\color{black}
The choice of $S_0$ (see at the end of Corollary \ref{corollary2.1}) means in particular that
${\rm div}\,(u| \overline{ B_{S_0}}^c)=0.$
This equation,
(\ref{2.**}),
the relation $G \in L^p_{loc}(\overline{ \mbox{$\mathcal D$}}^c)^3$
and interior regularity of solutions to the Stokes system (see \cite[Theorem IV.4.1]{Galdi}
for example) imply
the claims in the last sentence of Corollary \ref{corollary2.1}.
\color{black}
Put $ S_0 ^{\prime} :=(S_0+S_1)/2,\;
A_{S_0 ^{\prime} ,R}:=B_R \backslash B_{S_0 ^{\prime} } $ for $R \in (S_0 ^{\prime} , \infty ).$
Then
$u| A_{S_0 ^{\prime} ,R} \in W^{2,p}(A_{S_0 ^{\prime} ,R})^3$ and
$\pi | A_{S_0 ^{\prime} ,R} \in W^{1,p}(A_{S_0 ^{\prime} ,R})^3$
for $R \in (S_0 ^{\prime} , \infty ),$
so
$(u| A_{S_0 ^{\prime} ,R},\, \pi| A_{S_0 ^{\prime} ,R}) \in M_p$,
with $B_{S_0 ^{\prime} } $ in the role of \mbox{$\mathcal D$}.
Note that $S_0<S_0 ^{\prime} <S_1< S.$ Thus the assumptions of Theorem \ref{thm_1.1} are
satisfied with \mbox{$\mathcal D$} replaced by $B_{S_0 ^{\prime} }$. As a consequence inequalities
(\ref{1.6}) and (\ref{1.7}) hold.
$\Box $
\noindent
\begin{rem}\label{rem_existence} Solutions as considered in
Corollary \ref{corollary2.1}
exist if, for example,
Dirichlet boundary conditions are prescribed on $\partial \mathcal D$. In fact,
as stated in \cite[Theorem VIII.1.2]{Galdi},
if $F$ is a bounded linear functional on the space
$D^{1,2}_0( \overline{ \mathcal D}^c)^3$,
and if $b \in H ^{1/2} ( \partial \overline{ \mathcal D}^c)^3,$
then
there is a function
$u \in L^6( \overline{ \mathcal D}^c)^3\cap W^{1,1}_{loc}( \overline{ \mathcal D}^c)^3
$
such that
$ \nabla u \in L^2( \overline{ \mathcal D}^c)^9$
and
$u$ satisfies the equations (\ref{2.*}) and ${\rm div}\,u =0$ (weak form of
(\ref{eq_1.1})), as well as the boundary conditions $u| \partial \mbox{$\mathcal D$} =b$.
\color{black}
Existence of a pressure $\pi \in L^p_{loc}( \overline{ \mathcal D}^c)$ with (\ref{2.**})
holds according to \cite[Lemma VIII.1.1]{Galdi}.
\color{black}
\end {rem}
The main result of this section, dealing with the asymptotics of the pressure, is stated in
\begin{thm}\label{thm_Main_1}
{\it
Let $p,\, \gamma ,\, S_1,\, S,\, A,\, B,\, F,\, u$
be given as in Corollary \ref{corollary2.1}, but with the stronger assumptions
$A=5/2,\; B \in (1/2, \, \infty )$ on $A$ and $B$.
\color{black}
Let $\pi \in L^p_{loc}( \overline{ \mbox{$\mathcal D$}}^c)$
such that (\ref{2.**}) holds.
\color{black}
Then there is $c_0 \in \mathbb{R} $ such that
\begin{eqnarray} \label{CC}
|\pi(x)+c_0|\le C \, |x| ^{-2} \quad \mbox{for}\;\; x \in
B_S^c.
\end{eqnarray}
}
\end{thm}
{\it Proof:}
\color{black}
By Corollary \ref{corollary2.1} we have $F \in L^q(\overline{ \mathcal D}^c)^3$ for $q \in (1,p]$.
Fix some number $S_0 \in (0,S_1) $ with
$\overline{ \mbox{$\mathcal D$}}\cup {\rm supp}({\rm div}\, u)\subset B_{S_0}$.
Then again by Corollary \ref{corollary2.1}, the relations
$u| \overline{ B_{S_0}}^c \in W^{2,p}_{loc}( \overline{ B_{S_0}}^c)^3,
\;\pi | \overline{ B_{S_0}}^c \in W^{1,p}_{loc}( \overline{ B_{S_0}}^c)$
and $L(u| \overline{ B_{S_0}}^c )+\nabla (\pi | \overline{ B_{S_0}}^c )=F | \overline{ B_{S_0}}^c $ hold.
\color{black}
Note that $S_0<S_1<S$.
Take $\phi \in C^{\infty}(\mathbb R^3)$ with
$$
\phi|B_{S_1+\frac14(S-S_1)}=0, \ \phi|B^c_{S_1+\frac34(S-S_1)} = 1,
$$
and put
$\widetilde u : = \phi \cdot u$, $\widetilde{\pi} := \phi \cdot \pi$,
with $\widetilde u, \widetilde{\pi}$ to be considered as functions in $\mathbb R^3$.
By the choice of $\phi $ and the properties of $u$ and $\pi$,
we get
$\widetilde u \in W^{2,q}_{{\rm loc}}(\mathbb R^3)^3$,
$\widetilde{\pi}\in W^{1,q}_{{\rm loc}}(\mathbb R^3)$ for $q\in [1,p]$,
$\widetilde u |B_S^c=u|B_S^c\in L^6(\mathbb R^3)^3$,
$\nabla \widetilde u|B_S^c=\nabla u|B_S^c \in L^2(\mathbb R^3)^9$. Put
\begin{align*}
g_l(z):= &
\color{black}
- 2\sum^3_{k=1} \partial_k \phi(z) \partial_k u_l(z) - \Delta \phi(z) u_l(z)
+ \tau \partial_1 \phi(z) u_l(z)
\color{black}
\\
&- \sum^3_{k=1} (\varrho e_1 \times z)_k \cdot \partial_k \phi(z) \cdot u_l (z) + \partial_l \phi(z) \pi(z)
\end{align*}
for $z\in\mathbb R^3$, $1\le l \le 3$, and set $\gamma :={\rm div}\, \widetilde{ u}$. Then
\begin{align}
&{\rm supp}(g) \subset \overline{ B_{S_1+3(S-S_1)/4}} \setminus B_{S_1+(S-S_1)/4},
\ g\in L^q(\mathbb R^3)^3 \ \text{for} \ q\in [1,p], \nonumber
\\&
L\widetilde u + \nabla \widetilde{\pi} = g + \phi \cdot F,
\
\gamma
= \nabla \phi \cdot u, \label{eq1}
\end{align}
in particular $\mbox { supp}
( \gamma )
\subset \overline{ B_{S_1+3(S-S_1)/4}} \setminus B_{S_1+(S-S_1)/4}$,
$
\color{black}
\gamma
\in W^{2,q}(\mathbb R^3),
\color{black}
$
$g+ \phi \cdot F \in L^q(\mathbb R^3)^3$ for $q\in (1,p]$.
Let $x\in \mathbb R^3$, $\varepsilon >0$ with $\overline{ B_{\varepsilon}(x)}\subset B_{S_1}$, where $B_\varepsilon(x)=\{y\in\mathbb R^3;|y-x|<\varepsilon\}$.
Since $\widetilde u|B_{S_1} = 0$, $\widetilde{\pi}|B_{S_1}=0$,
it follows from \cite[Theorem 3.11]{3} with $\mathcal D$ replaced by $B_{\varepsilon}(x)$ that
$$
\widetilde u(y) = \mathcal R(g+\phi F)(y) + \mathcal S(
\gamma
)(y)
\ \text{for} \ y\in \overline{B_{\varepsilon}(x)}^c.
$$
Since this is true for any $x\in \mathbb R^3$, $\varepsilon >0$ with
$\overline{B_{\varepsilon}(x)} \subset B_{S_1}$, it follows that
\begin{equation}\label{eq2}
\widetilde u = \mathcal R(g+\phi F)+\mathcal S(
\gamma
) \ \text{in} \ \mathbb R^3.
\end{equation}
But $\mathcal S(
\gamma
) \in W^{2,q}_{{\rm loc}}(\mathbb R^3)^3$
for $q\in [1,\min \{3,p\})$ by Lemma \ref{Theorem 2}, so from \eqref{eq2}
$$
\mathcal R(g+ \phi F)\in W_{{\rm loc}}^{2,q}(\mathbb R^3)^3 \ \text{for} \ q \in [1,\min\{3,p\}).
$$
This relation and
\cite[(3.11) and the
\color{black}
inequality
\color{black}
following (3.15)]{2}
imply
\begin{equation}\label{eq3}
\sup_{z\in\mathbb R^3} \int_{\mathbb R^3} |\mathcal Z(z,y)\cdot (g+ \phi F)(y)|dy<\infty.
\end{equation}
Let $\psi\in C^{\infty}_0 (\mathbb R^3)^3$. Due to \eqref{eq3}, we may apply Fubini's theorem, to obtain
\begin{align}\label{eq4}
A&:=\int_{\mathbb R^3}\psi(x) (L\mathcal R(g+\phi F))(x)dx
= \int_{\mathbb R^3}(L^*\psi)(x) \mathcal R(g+\phi F)(x)dx
\\
&=\int_{\mathbb R^3}\int_{\mathbb R^3}[(L^*\psi)(x)]^T\cdot \mathcal Z(x,y) \cdot (g+ \phi F)(y) dydx\nonumber\\
&= \int_{\mathbb R^3}\int_{\mathbb R^3} [(L^*\psi)(z)]^T \cdot \mathcal Z(x,y) \cdot (g+\phi F)(y) dx dy.\nonumber
\end{align}
But for $a,b,x,y \in\mathbb R^3$ with $x\not=y$,
$$
a^T\cdot \mathcal Z(x,y) \cdot b = \int^{\infty}_0 a^T \Gamma (x,y,t) b dt,
$$
hence with \cite[Lemma 2.10]{2},
\begin{align*}
a^T\cdot \mathcal Z(x,y) \cdot b &= \int^{\infty}_0 a^T \cdot e^{-t\Omega} \Lambda (e^{t\Omega}x - \tau te_1 - y,t) \cdot b dt\\
&= \int^{\infty}_0 b^T [e^{-t\Omega} \Lambda (e^{t\Omega} x - \tau t e_1 - y,t) ]^T a dt\\
&= \int^{\infty}_0 b^T \cdot \Lambda (e^{t\Omega} x - \tau te_1 - y,t) e^{t\Omega} \cdot a dt\\
&= \int^{\infty}_0 b^T \Lambda (y + \tau te_1 - e^{t\Omega} x,t) e^{t\Omega} a dt\\
&= \int^{\infty}_0 b^T \widetilde {\Gamma} (y,x,t) \cdot a dt = b^T \cdot \widetilde{\mathcal Z}(y,x)\cdot a.
\end{align*}
Therefore from \eqref{eq4}
\begin{equation}\label{eq5}
A = \int_{\mathbb R^3} (g+ \phi F)(y)^T\cdot \int_{\mathbb R^3} \widetilde{\mathcal Z}(y,x) \cdot (L^*\psi)(x)dxdy.
\end{equation}
Since $\psi\in C^{\infty}_0(\mathbb R^3)^3$, we may choose $x_0 \in \mathbb R^3$, $\varepsilon >0$ such that
$$
\overline{B_{\varepsilon}(x_0)}\subset \mathbb R^3 \setminus {\rm supp}\,(\psi).
$$
Thus we get from \cite[Theorem 4.3]{1}
\color{black}
with $\mathcal D,U,\omega,u,L$ replaced by $B_{\varepsilon}(x_0)$,
$\tau e_1$,
$-\rho e_1,\, \psi,\, L^*$, respectively,
\color{black}
and with $\pi=0$, that
$$
\int_{\mathbb R^3} \widetilde{\mathcal Z}(y,x)\cdot (L^*\psi)(x) dx = \psi(y) - \mathcal S ({\rm div}\,\psi)(y)
$$
for $y\in\mathbb R^3 \setminus\overline{B_{\varepsilon}(x_0)}$. Since this is true for any
$x_0 \in\mathbb R^3$,
$\varepsilon >0$ with $\overline{B_{\varepsilon}(x_0)} \subset \mathbb R^3 \setminus {\rm supp}(\psi)$,
the preceding equation holds for any $y\in \mathbb R^3$. It follows from~\eqref{eq5}
\begin{equation}\label{eq6}
\int_{\mathbb R^3}\psi(x)(L\mathcal R(g+ \phi F))(x) dx
=\int_{\mathbb R^3} (g+ \phi F)(y) \cdot (\psi(y) - \mathcal S({\rm div}\,\psi)(y)) dy.
\end{equation}
Again recalling that $g+\phi F\in L^q(\mathbb R^3)^3$ for $q\in (1,p]$, we get with Lemma \ref{Theorem 2} that
\begin{equation}\label{eq7}
\int_{\mathbb R^3}\psi(x) \nabla \mathcal P(g+\phi F)(x)dx
= \int_{\mathbb R^3} - {\rm div}\,\psi(x) \cdot \mathcal P(g+\phi F)(x)dx.
\end{equation}
Put $q_0 := \min \{6/5,p\}$, and note that $q_0\in (1,3/2)$, $q_0 \le p$.
Thus, by H\"older's inequality
and Lemma \ref{Theorem 2},
\begin{align*}
&
\int_{\mathbb R^3}|{\rm div}\,\psi(x)\, \mathcal P(g+
\color{black}
\phi
\color{black}
\, F)(x)|\, dx
\\
&\le \int_{\mathbb R^3}\int_{\mathbb R^3} |{\rm div}\,\psi(x)
(4\pi|x-y|^3)^{-1} (x-y)\cdot (g+\phi F)(y)| dydx
\\[1ex]
&\le \|{\rm div}\,\psi\|_{(4/3-1/q_0)^{-1}}
\\&\hspace{2em}
\left(\int_{\mathbb R^3}
\left(\int_{\mathbb R^3}(4\pi|x-y|^2)^{-1} |(g+\phi F)(y)| dy\right)^{(1/q_0-1/3)^{-1}}dx\right)^{1/q_0-1/3}\\
&\le C \cdot \|{\rm div}\,\psi\|_{(4/3-1/q_0)^{-1}}\cdot \|g+\phi F\|_{q_0} < \infty.
\end{align*}
As a consequence, we may apply Fubini's theorem to deduce from (\ref{eq7}) that
\begin{align}
&\int_{\mathbb R^3} \psi(x)\nabla \mathcal P(g+\phi F)(x)dx\label{eq8}
\\
&=-\int_{\mathbb R^3}\int_{\mathbb R^3} ({\rm div}\,\psi)(x)
(4\pi|x-y|^{3})^{-1}(x-y)\cdot (g+\phi F)(y)dxdy\nonumber\\
&=\int_{\mathbb R^3} (g+\phi F)(y) \cdot \mathcal S({\rm div}\,\psi)(y)dy.\nonumber
\end{align}
From \eqref{eq6} and \eqref{eq8},
\begin{align*}
&\int_{\mathbb R^3} \psi(x) ((L\mathcal R(g+\phi F))(x) + \nabla \mathcal P(g+\phi F)(x))dx\\
&= \int_{\mathbb R^3} \psi(x) (g+\phi F)(x) dx.
\end{align*}
Since this is true for any $\psi\in C^{\infty}_0 (\mathbb R^3)^3$, we have found that
\begin{equation}\label{eq9}
L\mathcal R(g+\phi F)+\nabla \mathcal P(g+\phi F)=g+\phi F.
\end{equation}
On the other hand, by \eqref{eq2} and \eqref{eq1}
$$
L\mathcal R(g+\phi F)+L\mathcal S(
\gamma
) + \nabla \widetilde{\pi} = L\widetilde u + \nabla \widetilde {\pi} = g+ \phi F.
$$
By subtracting this equation from \eqref{eq9}, we get
\begin{equation}\label{eq10}
\nabla\mathcal P(g+\phi F)-L\mathcal S(
\gamma
) - \nabla \widetilde{\pi}=0.
\end{equation}
Next we consider the term
${\rm div} \bigl(\, L\mathcal S(\gamma ) \,\bigr) $.
Recall that
\color{black}
$q_0< 3/2,\; q_0\le p$ and
$ \gamma \in W^{2,q}(\mathbb R^3)$ for $q\in [1,p]$
(see (\ref{eq1})),
\color{black}
so by Lemma \ref{Theorem 2}
\begin{equation}\label{eq11}
\begin{cases}
\mathcal S(\gamma) \in W^{2,q_0}_{{\rm loc}}(\mathbb R^3)^3, \ {\rm div}\,\mathcal S(\gamma) = \gamma, \ \mathcal N(\gamma) \in W^{2,q_0}_{{\rm loc}} (\mathbb R^3),\\
\nabla \mathcal N(\gamma)=\mathcal S(\gamma).
\end{cases}
\end{equation}
\color{black}
Since
$e_1 \times \mathcal S( \gamma )=(0,\,-\mathcal S_3 ( \gamma ),\, \mathcal S_2( \gamma ))$
and because of the equation $\nabla \mathcal N( \gamma )=\mathcal S( \gamma )$
in (\ref{eq11}), we may conclude that
\begin{eqnarray} \label{2.17b}
{\rm div}(e_1 \times \mathcal S( \gamma ))
=
- \partial_2\mathcal S_3(\gamma) + \partial_3\mathcal S_2(\gamma)
=
- \partial_2\partial_3 \mathcal N(\gamma) + \partial_3\partial_2 \mathcal N(\gamma) = 0.
\end{eqnarray}
Moreover, for $z\in\mathbb R^3$, $1\le j\le 3$,
\begin{eqnarray} \label{2.17c}
(e_1\times z) \cdot \nabla {\mathcal S} _j(\gamma)(z)
=
-z_3\partial _2\mathcal S_j(\gamma)(z)+ z_2 \partial _3 \mathcal S _j(\gamma )(z).
\end{eqnarray}
Put
$\varphi (z):=-z_3 \partial_2 \gamma(z) + z_2\partial_3\gamma(z) $
for $z \in \mathbb{R}^3 .$
Then with (\ref{2.17c}), the equation ${\rm div}\mathcal S( \gamma )=\gamma $
in \eqref{eq11}, and the second and third equation in (\ref{2.17b}),
\begin{eqnarray*} &&\hspace{-2em}
{\rm div}_z \bigl(\, (e_1\times z)\cdot\nabla \mathcal S(\gamma)(z) \,\bigr)
=
\sum_{j = 1}^3 \partial z_j \bigl(\, (e_1\times z)\cdot\nabla \mathcal S_j(\gamma)(z) \,\bigr)
\\&&\hspace{-2em}
=
-z_3 \partial _2{\rm div}_z\mathcal S( \gamma )(z) +z_2 \partial _3{\rm div}_z\mathcal S( \gamma )(z)
+ \sum_{j = 1}^3 \bigl(\, \partial z_j(-z_3) \partial _2\mathcal S_j( \gamma )(z)
+\partial z_j(z_2) \partial _3\mathcal S_j( \gamma )(z) \,\bigr)
\\&&\hspace{-2em}
=
\varphi (z)- \partial _2\mathcal S_3( \gamma )(z)+\partial _3\mathcal S_2( \gamma )(z)
=
\varphi (z).
\end{eqnarray*}
\color{black}
Let $\psi\in C^{\infty}_0(\mathbb R^3)$. Then it follows that
\begin{align*}
&\int_{\mathbb R^3}\nabla \psi \cdot (\rho e_1 \times \mathcal S(\gamma)) dx =0,\\
&\int_{\mathbb R^3}\nabla \psi(z) [(\rho e_1\times z)\cdot \nabla \mathcal S(\gamma) (z)]dz
= \int_{\mathbb R^3} \psi(z)(-\varphi(z)) dz.
\end{align*}
Obviously, again with \eqref{eq11},
$$
\int_{\mathbb R^3}\nabla \psi \cdot \Delta \mathcal S(\gamma) dx = \int_{\mathbb R^3} \nabla \Delta \psi \cdot \mathcal S(\gamma) dx = -\int_{\mathbb R^3} \Delta\psi \cdot \gamma dx
= \int_{\mathbb R^3} \psi \cdot (-\Delta \gamma) dx,
$$
and similarly,
$$
\int_{\mathbb R^3} \nabla \psi(\tau \partial_1\mathcal S(\gamma))
\color{black}
dx
\color{black}
= \int_{\mathbb R^3} \psi(-\tau\partial_1 \gamma) dx.
$$
Combining these equations, we get
$$
\int_{\mathbb R^3} \nabla \psi \cdot L\mathcal S(\gamma) dx = \int_{\mathbb R^3} \psi(\varphi+ \Delta \gamma - \tau \partial_1 \gamma) dx.
$$
Now from \eqref{eq10}
\begin{equation}\label{eq12}
\int_{\mathbb R^3}\nabla \psi [\nabla \mathcal P(g+\phi F)-\nabla (\phi \pi)]dx
= \int_{\mathbb R^3}\psi(\varphi+ \Delta \gamma - \tau \partial_1 \gamma)dx.
\end{equation}
Since $\gamma \in W^{2,q}(\mathbb R^3)$ for $q\in [1,p]$
and ${\rm supp}(\gamma) \subset B_{S}\backslash B_{S_1}$
\color{black}
(due to (\ref{eq1})),
\color{black}
it follows that
$\varphi+ \Delta \gamma - \tau \partial_1 \gamma \in L^q(\mathbb R^3)$ for $q\in [1,p]$,
so we may consider
$\mathcal N (\varphi + \Delta \gamma - \tau \partial_1 \gamma)$. Lemma \ref{Theorem 2} yields
\begin{align*}
&\mathcal N(\varphi+ \Delta \gamma - \tau \partial_1 \gamma) \in W^{2,q_0}_{{\rm loc}} (\mathbb R^3),\\
&\Delta \mathcal N(\varphi+ \Delta \gamma
- \tau\partial_1 \gamma) = \varphi+ \Delta \gamma - \tau \partial_1 \gamma.
\end{align*}
Therefore from \eqref{eq12}
$$
\int_{\mathbb R^3} \nabla \psi [\nabla \mathcal P(g+\phi F)- \nabla \mathcal N(\varphi+ \Delta \gamma - \tau\partial_1 \gamma) - \nabla (\phi \pi) ] dx = 0.
$$
Lemma \ref{Theorem 1} now yields
\begin{eqnarray}
Q := \mathcal P(g+ \phi F)
- \mathcal N (\varphi+ \Delta \gamma - \tau \partial_1\gamma) - \phi \pi \in C^{\infty}(\mathbb R^3),
\quad
\label{99}
\Delta Q = 0.
\end{eqnarray}
Now we again apply Lemma \ref{Theorem 2}. Since $g+ \phi \cdot F \in L^{q_0}(\mathbb R^3)^3$, we have
$$
\mathcal P(g+ \phi F)\in L^{(1/q_0-1/3)^{-1}}(\mathbb R^3).
$$
Moreover $\varphi+ \Delta \gamma - \tau \partial_1 \gamma \in L^{q_0}(\mathbb R^3)$, so
$$
\mathcal N(\varphi+ \Delta \gamma - \tau \partial_1 \gamma) \in L^{(1/q_0 - 2/3)^{-1}}(\mathbb R^3).
$$
Since $q_0\le p,$ and in view of our remarks at the beginning of this proof we know that
$
u| \overline{ B_{S_0}}^c\in W^{2,q_0}_{loc}( \overline{ B_{S_0}}^c),\;
\pi| \overline{ B_{S_0}}^c\in W^{1,q_0}_{loc}( \overline{ B_{S_0}}^c),\;
L(u| \overline{ B_{S_0}}^c )+\nabla (\pi | \overline{ B_{S_0}}^c )=F | \overline{ B_{S_0}}^c ,\;
\mbox{div} ( u| \overline{ B_{S_0}}^c) =0
$
and
$F\in L^{q_0}(\mathbb R^3)^3$.
By the choice of $u$ in Corollary \ref{corollary2.1},
we have
$u|B_{S}^c\in L^6(B_{S}^c)^3$.
\color{black}
Therefore
\color{black}
\cite[Theorem~2.1]{3} yields that there is $c_0\in\mathbb R$ such that
$$
(\pi + c_0)|B^c_{2 S} \in L^{3q_0/(3-q_0)}(B_{2\,S}^c)+L^{3}(B^c_{2\, S}).
$$
But by (\ref{99}),
\begin{eqnarray*}
Q-c_0
=
\mathcal P(g+ \phi F)
- \mathcal N (\varphi+ \Delta \gamma - \tau \partial_1\gamma)
- \phi\,( \pi+c_0) + ( \phi -1)\, c_0,
\end{eqnarray*}
where ${\rm supp}( \phi -1) \subset B_{S}$ and ${\rm supp}( \phi ) \subset B_{S_1}^c$.
We may conclude that
\begin{eqnarray}
\label{100}&&
Q - c_0
\in L^{(1/q_0 - 1/3)^{-1}}( \mathbb{R}^3 ) + L^{(1/q_0-2/3)^{-1}}( \mathbb{R}^3 )
+ L^{q_0}(\mathbb R^3)
\\&&\nonumber \hspace{2em}
+ L^{3q_0/(3-q_0)}(\mathbb R^3) + L^{3}( \mathbb{R}^3 ).
\end{eqnarray}
Let $\varepsilon \in (0,\infty)$,
and let $(Q- c_0)_{\varepsilon}$
be the usual Friedrichs mollification of $Q- c_0$ associated with $\varepsilon$.
Due to (\ref{99}), (\ref{100}) and standard properties of the Friedrichs mollifier,
the function $(Q- c_0)_{\varepsilon}$ is bounded and satisfies
$\Delta (Q- c_0)_{\varepsilon}=0$.
Now Liouville's theorem yields that $(Q- c_0)_{\varepsilon}$ is constant; since this function belongs to a sum of $L^q$-spaces with finite exponents by (\ref{100}), the constant vanishes, so $(Q- c_0)_{\varepsilon}=0$.
Since this is true for any $\varepsilon >0$
\color{black}
and because $Q \in C ^{ \infty } ( \mathbb{R}^3 ),$
\color{black}
we may conclude that $Q- c_0=0$, that is,
\begin{eqnarray*}
\phi \,(\pi + c_0)=\mathcal P(g+\phi F)-\mathcal N(\varphi +\Delta \gamma - \tau \partial_1 \gamma)
+( \phi -1)\, c_0,
\end{eqnarray*}
hence
\begin{equation}\label{eq13}
(\pi + c_0)|B_{S}^c
=\bigl[\mathcal P(g+\phi F)-\mathcal N(\varphi +\Delta \gamma - \tau \partial_1 \gamma)\bigr]|B_{S}^c,
\end{equation}
where we used that ${\rm supp}( \phi -1) \subset B_{S}$ and $\phi |B_{S}^c=1$.
Since ${\rm supp}(g) \subset \overline{ B_{S_1+3(S-S_1)/4}}$, we have
\begin{equation}\label{eq14}
|\mathcal P(g)(x)| \le c \cdot |x|^{-2} \ \text{for} \ x \in B^c_{S}.
\end{equation}
Due to the assumptions
$A=5/2,\; B \in (1/2, \, \infty )$
and because $\phi F|B_{S_1+(S-S_1)/4} = 0$ and $S_1<S_1+(S-S_1)/4<S$,
we get by \cite[Theorem 3.2]{Farwig2} or \cite[Theorem 3.4]{KrNoPo}
that
\begin{equation}\label{eq16}
|\mathcal P(\phi \, F)(x)| \le c \, |x|^{-2}\ \text{for} \ x\in B^c_{S}.
\end{equation}
\color{black}
Note that according to \cite[Theorem 3.4 (iii)]{KrNoPo}, a logarithmic factor
should be added on the right-hand
side of (\ref{eq16})
in the case $A=5/2,\; B=1$. But this factor is superfluous. In fact, if the relation
$|F(z)|\le \gamma |z|^{-A}s(z)^{-B}\;(z \in B_{S_1}^c)$
is valid with $A=5/2,\; B=1,$
it holds in the case $A=5/2,\; B=3/4,$ too. But then \cite[Theorem 3.4 (i), (iii)]{KrNoPo}
yields that (\ref{eq16}) holds as it is, without additional factor.
\color{black}
Define $\zeta (x)=-x_3 \gamma(x)$, $\tilde{\zeta }(x):=x_2\gamma (x)$ for $x\in\mathbb R^3$.
Then ${\rm supp}( \zeta ) \cup \,{\rm supp}(\tilde{\zeta})\subset B_{S}\setminus \overline{B_{S_1}}$,
\begin{eqnarray*}&&
\zeta,\tilde{\zeta} \in W^{2,q}(\mathbb R^3) \ \text{for} \ q\in [1,p],
\\&&
\varphi= \partial_2 \zeta + \partial_3 \tilde{\zeta}.
\end{eqnarray*}
It follows with Lemma \ref{Theorem 2} that
$$
\mathcal N(\varphi )
= \partial_2 \mathcal N(\zeta)+\partial_3\mathcal N(\tilde{\zeta})
= \mathcal S_2 (\zeta) + \mathcal S_3(\tilde{\zeta}).
$$
Similarly, since ${\rm supp}(\gamma)\subset B_{S}\setminus \overline{B_{S_1}}$,
$\gamma \in W^{2,q}(\mathbb R^3)$ for $q\in [1,p]$,
$$
\mathcal N(\Delta \gamma -\tau \partial _1 \gamma )
= \sum^3_{k=1} \mathcal S_k(\partial_k\gamma)
- \tau \mathcal S_1(\gamma).
$$
Together
$$
\mathcal N(\varphi+ \Delta \gamma - \tau \partial_1 \gamma)
= \mathcal S_2 (\zeta) + \mathcal S_3(\tilde{\zeta})
+ \sum^3_{k=1} \mathcal S_k (\partial_k\gamma) - \tau \mathcal S_1 (\gamma).
$$
Since supp$(\zeta) \cup {\rm supp}(\tilde{\zeta}) \cup {\rm supp}(\gamma)
\subset B_{S_1+3(S-S_1)/4}$,
we may conclude that
\begin{equation}\label{eq17}
|\mathcal N(\varphi+ \Delta \gamma -\tau\partial_1 \gamma)(x)| \le C\,
|x|^{-2}\ \text{for} \ x \in B^c_{S}.
\end{equation}
Inequality (\ref{CC}) follows from
\eqref{eq13}--\eqref{eq17}.
$\Box$
We remark that Theorem \ref{thm_Main_1} remains valid if the assumptions on $A$ and $B$
are replaced by
the conditions $A\ge 5/2,\; A+\min\{1,B\}>3$, which are weaker than the assumptions of Theorem
\ref{thm_Main_1} (though still stronger than those in Corollary \ref{corollary2.1}). This observation is made precise by the
ensuing corollary. Its proof is obvious, but this modified version of Theorem \ref{thm_Main_1}
is still interesting because its requirements on $A$ and $B$ are closer to the ones in
Corollary \ref{corollary2.1} than those stated in Theorem \ref{thm_Main_1}.
\begin{cor} \label{corollary2.2}
Let $p,\, \gamma ,\, S_1,\, S,\, A,\, B,\, F,\, u$
be given as in Corollary \ref{corollary2.1}, but with the stronger assumptions
$A\ge 5/2,\; A+\min\{1,B\}>3$ on $A$ and $B$.
Let $\pi \in L^p_{loc}( \overline{ \mbox{$\mathcal D$}}^c)$ such that (\ref{2.**}) holds.
Then there is $c_0 \in \mathbb{R} $ such that inequality (\ref{CC}) is valid.
\end{cor}
{\it Proof:}
Put $B ^{\prime} := A-5/2+\min\{1,B\}$. Since $A+\min\{1,B\}>3$, we have $B ^{\prime} \in
(1/2,\, \infty ).$ Moreover, since $A\ge 5/2$, we find for
\color{black}
$z \in B_{S_1}^c$
\color{black}
that
\begin{eqnarray*}
|F(z)|
\le
\gamma \, C(S_1,A)\, |z|^{-5/2}\, s(z)^{-A+5/2-B}
\le
\gamma \,C(S_1,A)\, |z|^{-5/2}\, s(z)^{-B ^{\prime} }.
\end{eqnarray*}
Thus the assumptions of Theorem \ref{thm_Main_1} are satisfied with $B$ replaced by $B ^{\prime} $
and with a modified parameter $\gamma $. This implies the conclusion of Theorem \ref{thm_Main_1}.
$\Box $
\subsection*{Decay estimates in the non-linear case}
Let us assume now the non-linear case, i.e. the system (\ref{1.0}).
First, recall the result about the decay properties of the velocity in this non-linear case:
\begin{thm}\label{nonlin}
\cite[Theorem 1.1]{DKN5}
Let $
\gamma ,\, S_1 \in (0, \infty ) ,\; p_0 \in (1, \infty ),\; A \in (2, \infty ),\; B \in [0,\,3/2]$
with
$\overline{\mathcal D}\subset B_{S_1}, \, $
$ A+\min\{B,1\}>3,\; A+B\ge7/2 $.
Take $F: {\mathbb R}^3\mapsto\mathbb R^3 $ measurable with
$F|{B_{S_1}}\in L^{p_0}(B_{S_1})^3$,
$$|F(y)|\le\gamma\cdot|y|^{-A}\cdot s(y)^{-B}\ \hbox{for}\ y\in B^c_{S_1}.$$
\noindent
Let $u\in L^6(\overline{ \mathcal D}^c)^3\cap W^{1,1}_{loc}(\overline{ \mathcal D}^c)^3,
\pi\in L^2_{loc}(\overline{ \mathcal D}^c)$
with
$\nabla u \in L^2(\overline{ \mathcal D}^c)^9, \hbox{\rm div}\,u=0 $ and
\begin{eqnarray*}
\displaystyle{\int_{\overline{ \mathcal D}^c}
\left[\nabla u\cdot\nabla \varphi+((\tau e_{1}-\rho e_1\times z )\cdot\nabla u+\rho
e_1\times u
\qquad\qquad\qquad\qquad\right.} & \\
\displaystyle{ \left.+\tau(u\cdot\nabla)u-F)\cdot\varphi-\pi\,\hbox{\rm div}\,\varphi \right] \,
\hbox{\rm d}x=0
}
\end{eqnarray*}
for $\varphi\in C_0^{\infty}(\overline{ \mathcal D}^c)^3.$ Let $S\in(S_1,\infty).$ Then
\begin{eqnarray} \label{AA}
\displaystyle{|\partial^\alpha u(x)|\le C\,
(|x|s(x))}^{-1-|\alpha|/2} \ \ \hbox{for}\ x\in B^c _S,\ \alpha\in\mathbb N^3_0
\ \hbox{with}\ |\alpha|\le1
.
\end{eqnarray}
\end{thm}
Now, using
Theorems \ref{thm_Main_1} and \ref{nonlin},
we are in the position to prove the result on the decay of the pressure in the non-linear case:\
\begin{thm}\label{theorem2.7}
Consider the situation in Theorem \ref{nonlin}. Suppose in addition that
$A\ge 5/2$.
Then there is $c_0 \in \mathbb{R} $ such that inequality (\ref{CC}) holds.
\end{thm}
{\it Proof:}
Observe that $(u \cdot \nabla )u \in L^{3/2}( \overline{ \mbox{$\mathcal D$} }^c)^3$ by H\"older's inequality, since $u\in L^6( \overline{ \mbox{$\mathcal D$} }^c)^3$ and $\nabla u\in L^2( \overline{ \mbox{$\mathcal D$} }^c)^9$.
Thus, putting $p :=\min\{3/2,\, p_0\},\; \widetilde{ F}:=F-\tau \,(u \cdot \nabla )u,$
we get $\widetilde{ F}|\mathcal D_{S_1}\in L^p(\mathcal D_{S_1})^3$.
Put $B ^{\prime} := \min\{5/2,\, A+B-5/2\}$. Since $A\ge 5/2$, we have
\begin{eqnarray*}
|F(z)|
\le
\gamma \,C(S_1,A)\, |z|^{-5/2}\, s(z)^{-B ^{\prime} }
\quad \mbox{for}\;\;
z \in B_{S_1}^c.
\end{eqnarray*}
On the other hand, by Theorem \ref{nonlin} with $(S_1+S)/2$ in the place of $S$,
\begin{eqnarray*} &&
| \bigl(\, u(z) \cdot \nabla \,\bigr) u(z) |
\le
C\, |z|^{-5/2}\,s(z)^{-5/2}
\le
C\, |z|^{-5/2}\,s(z)^{-B ^{\prime} }
\end{eqnarray*}
for $ z \in B_{(S_1+S)/2}^c.$
In this way we get
$
| \widetilde{ F}(z)|
\le
C\, |z|^{-5/2}\, s(z)^{-B ^{\prime} }
$
for
$
z \in B_{(S_1+S)/2}^c
$.
We further note that $B ^{\prime} \in (1/2,\, \infty )$. This is obvious
in the case $B ^{\prime} = 5/2$. If $B ^{\prime} < 5/2$, we have
$B ^{\prime} = A+B-5/2$. Due to the assumption $A+\min\{1,B\}> 3$ in Theorem \ref{nonlin},
we thus get $B ^{\prime} \in (1/2, \, \infty )$. (The requirement $A+B\ge 7/2$ in
Theorem \ref{nonlin} even yields $B ^{\prime} \ge 1$, but if this requirement is
weakened in a suitable way, pointwise decay of $u$ and $\nabla u$ could still be
proved. However, this point is not elaborated in \cite{DKN5}, and therefore is not reflected in
Theorem \ref{nonlin}. But we still take account of it here by avoiding the use of the assumption
$A+B\ge 7/2$.)
We further have $u \in W^{1,p}_{loc}( \overline{ \mbox{$\mathcal D$} }^c)^3,\;
\pi \in
\color{black}
L^p_{loc}(\overline{ \mbox{$\mathcal D$} }^c)
\color{black}
,$ and
equation (\ref{2.**}) holds with $F$ replaced by $\widetilde{ F}$. Since in addition
$u|B_S^c \in L^6(B_S^c)^3,\; \nabla u|B_S^c \in L^2(B_S^c)^9$ and ${\rm div}\, u =0,$
we see that the assumptions of Theorem \ref{thm_Main_1} are satisfied with $p$ as defined
above and with $(S_1+S)/2,\; B ^{\prime} ,\, \widetilde{ F}$ in the role of $S_1,\,B$ and $F$, respectively.
Thus Theorem \ref{thm_Main_1} implies the conclusion of Theorem \ref{theorem2.7}.
$\Box $
\section{Formulation of the problem with artificial boundary conditions }
Recall that we defined $\mathcal D _R = B_R\setminus\overline{\mathcal D}$.
We introduce the subspace $W_R$ of $H^1(\mathcal D_R)^3$ by setting
$$
W_R := \{v\in H^1(\mathcal D_R)^3 : v|{\partial \mathcal D} = 0\},
$$
where $v|\partial\mathcal D$ means the trace of $v$ on $\partial\mathcal D.$
\begin{lem}\label{lem_2.1}
{\rm
\color{black}
(\cite[Lemma 4.1]{DeFEM})
\color{black}
}
The estimate
$$
\|u\|_2 \le C \, (R\, \left\|\nabla u\right\|_2 + R^{1/2}\, \left\|u|{\partial B_R}\right\|_2)
$$
holds
for $R\in (0,\infty)$ with $\overline{\mathcal D} \subset B_R$ and for $u\in W_R$.
\end{lem}
\rm
We introduce an inner product $(\cdot,\,\cdot )^{(R)}$ in $W_R$ by defining
\begin{eqnarray*}
(v,w)^{(R)}= &\int_{\mathcal D_R}
\nabla v \cdot \nabla w
\, dx
+ \int_{\partial B_R} ( \tau /2)v\cdot w \,do_x \ \text{for} \ \ v,w \in W_R.
\end{eqnarray*}
The space $W_R$ equipped with this inner product is a Hilbert space.
The norm generated by this scalar product $(\cdot,\cdot)^{(R)}$ is denoted by $|\cdot|^{(R)}$, that is
$$
|v|^{(R)} : = \left(\left\|\nabla v\right\|^2_2+( \tau /2)\left\|v{|\partial B_R}\right\|^2_2\right)^{1/2} \ \text{for} \ v\in W_{R}.
$$
We define the bilinear forms
\begin{align*}
&a_R :H^1(\mathcal D_R)^3\times H^1(\mathcal D_R)^3\rightarrow \mathbb R,\\
&\beta_R : H^1(\mathcal D_R)^3 \times L^2(\mathcal D_R) \rightarrow \mathbb R,\\&\delta_R:
H^1(\mathcal D_R)^3\times H^1(\mathcal D_R)^3\rightarrow \mathbb R,\\
a_R (u,w):=&\int_{\mathcal D_R} [\nabla u\cdot \nabla w+ \tau \partial_1 u\cdot w ] dx\\
&
+\frac{\tau }{2} \int_{\partial B_R} (u(x) \cdot w(x)) \left(1-\frac{x_1}{R}\right) do_x,\\
\beta_R (w,\sigma):= &- \int_{\mathcal D_R} ({\rm div} \, w) \, \sigma dx,
\\
\delta_R (u,w):=&
\color{black}
\int_{\mathcal D_R} \bigl[\, -\bigl(\, ( \varrho e_1\times x) \cdot \nabla \,\bigr) u(x)
+\bigl(\, \varrho e_1\times u(x) \,\bigr) \,\bigr] \cdot w(x)\, dx
\color{black}
\\ \hbox{for} \ \
u,w \in H^1(\mathcal D_R)^3,& \ \sigma \in L^2(\mathcal D_R),\
R\in (0,\infty)
\; \mbox{with}\; \overline{ \mathcal{D}}\subset B_R.
\end{align*}
\begin{lem}\label{lem_2.3}
{\it
Let $R \in (0, \infty ) $ with $\overline{ \mathcal{D}}\subset B_R$. Then
\begin{align*}
|a_R (u,w) + \delta _R(u,w)|
\le& C(R)\, |u|^{(R)}\,|w|^{(R)}
\end{align*}
for $ u,w \in H^1(\mathcal D_R)^3$.
}
\end{lem}
{\it Proof:} The proof of Lemma \ref{lem_2.3} is based on the use of Lemma \ref{lem_2.1}. $\Box$
The key observation in this section is stated in the following lemma, which is the basis of the theory
presented in this section.
\begin{lem}\label{lem_2.4}
{\it
Let $R\in (0, \infty ) $ with $\overline{ \mathcal{D}}\subset B_R$,
and let
$w\in W_R$. Then the equation $(|w|^{(R)})^2 = a_R(w,w)+\delta _R(w,w)$ holds.
}
\end{lem}
\noindent
{\it Proof:} Using the definitions $a_R(\cdot,\cdot),\, \delta_R(\cdot,\cdot)$, we get
\begin{eqnarray*} &&
a_R(w,w)+\delta_R(w,w)
\\&&
= \int_{\mathcal D_R} \left[|\nabla w|^2 + \tau
\partial _1 \left(\frac{|w|^2}{2}\right)
- ( \varrho e_1\times x) \cdot \nabla \left(\frac{|w|^2}{2}\right)\right] dx
\\&&
+\frac{\tau }{2} \int_{\partial B_R} |w(x)|^2 \left(1-\frac{x_1}{R}\right)do_x
\\&&
=\int_{\mathcal D_R} |\nabla w|^2\, dx
+ \int_{\partial B_R}\left(\frac{\tau }{2} |w(x)|^2
\frac{x_1}{R} - \frac12 ( \varrho e_1\times x)\cdot \frac{x}{R} |w(x)|^2\right)do_x
\\&&
+\frac{\tau }{2} \int_{\partial B_R} |w(x)|^2\left(1-\frac{x_1}{R} \right) do_x
\\&&
=\int_{\mathcal D_R}|\nabla w|^2\, dx + \frac{\tau }{2} \int_{\partial B_R} |w(x)|^2\, do_x = (|w|^{(R)})^2.
\end{eqnarray*}
Here we applied the divergence theorem (using that $w|\partial \mathcal D =0$, that ${\rm div}\,(\varrho e_1\times x)=0$, and that the outward unit normal on $\partial B_R$ is $x/R$), together with the fact that
\color{black}
$$( \omega \times x)\cdot x =0 \; \mbox{for}\;\; x,\, \omega \in \mathbb{R}^3 .$$
\color{black}
$\Box$
As in \cite{DeKr},
we obtain that the bilinear form $\beta_R$ is stable:
\begin{thm}\label{thm_2.2}
{\rm (\cite[Corollary 4.3]{DeKr})}
{\it
Let $R>0$ with $\overline{ \mathcal D}\subset B_R$.
Then
\[
\inf\limits_{\rho \in L^2(\mathcal D_R), \rho \not=0}
\sup_{v\in W_R, v\not=0} \frac{\beta_R(v,\rho)}{|v|^{(R)}\|\rho\|_2} \ge C(R).
\]
}
\end{thm}
\color{black}
We note that functions from $W^{1,1}_{loc}( \overline{ \mathcal D}^c)$
with $L^2$-integrable gradient are
$L^2$-integrable on truncated exterior domains:
\begin{lem}[\mbox{\cite[Lemma II.6.1]{Galdi}}] \label{lemma3.xx}
Let $ w \in W^{1,1}_{loc}( \overline{ \mathcal D}^c)$
with $\nabla w \in L^2( \overline{ \mathcal D}^c)^3$, and let $R \in (0, \infty ) $
with $\overline{ \mathcal D}\subset B_R.$ Then
$w{| \mathcal D_R}
\in L^2(\mathcal D_R).
$
In particular the trace of $w$ on $\partial \mathcal D$ is well defined.
\end{lem}
The preceding lemma is implicitly used in the ensuing theorem, where
we introduce
an
extension operator
$\mbox{$\mathfrak E$}: H ^{1/2} ( \partial \mathcal{D})^3 \mapsto W^{1,1}_{loc}( \overline{ \mathcal{D}}^c)^3$
such that
$\mbox{div}\, \mbox{$\mathfrak E$} (b)=0
$.
\begin{thm}[\mbox{\cite[Exercise III.3.8]{Galdi}}]\label{thm_2.1}
{\it
There is an operator
$\mbox{$\mathfrak E$}$
from
$H^{1/2}(\partial \mathcal D)^3 $
into
$W^{1,1}_{loc}( \overline{ \mathcal{D}}^c)^3$
satisfying the relations
$
\nabla \mbox{$\mathfrak E$} (b) \in L^2( \overline{ \mathcal{D}}^c)^9,\;
\mbox{$\mathfrak E$} (b)|{\partial \mathcal{D}} = b$
and
${\rm div}\,\mbox{$\mathfrak E$} (b) =0
$
for
$
b\in H^{1/2}(\partial \mathcal{D})^3.
$
}
\end{thm}
\color{black}
In view of Lemma \ref{lem_2.3} and \ref{lem_2.4}
and Theorem \ref{thm_2.1} and \ref{thm_2.2}, the theory of mixed variational problems
yields
\begin{thm}\label{thm_2.3}
{\it
Let $S>0$ with $\overline{ \mathcal D}\subset B_S,\;
R \in [2S, \infty ),\;
F \in
L^{6/5}( \mathcal{D}_R)^3, \;
b \in H ^{1/2} ( \partial \mathcal{D})^3.$
Then there is a uniquely determined pair of functions
$(\widetilde V, P )= \bigl(\, \widetilde V(R,F,b),\, P(R,F,b) \,\bigr) \in W_R \times L^2( \mathcal{D}_R)$
such that
\begin{eqnarray} &&
a_R( \widetilde V,g) + \delta _R( \widetilde V,g) +\beta _R(g, P )
\label{eq_2_1}\\&&
=
\int_{ \mathcal{D}_R}F \cdot g\, dx - a_R \bigl(\, \mbox{$\mathfrak E$} (b)|\mathcal{D}_R,\, g \,\bigr)
-\delta _R \bigl(\, \mbox{$\mathfrak E$} (b)|\mathcal{D}_R,\, g \,\bigr)
\;\; \mbox{for}\;\; g \in W_R,
\nonumber\\[1ex]&&
\beta _R( \widetilde V, \sigma )=0
\;\; \mbox{for}\;\; \sigma \in L^2( \mathcal{D}_R),
\label{eq_2_2}\end{eqnarray}
where the operator \mbox{$\mathfrak E$} was introduced in Theorem \ref{thm_2.1}.
}
\end{thm}
Let us interpret variational problem (\ref{eq_2_1}), (\ref{eq_2_2})
as a boundary value problem.
Define the expression used in the boundary condition on the artificial boundary $\partial B_R:$\
$$\mathcal L _R (u,\pi)(x):= \left(\sum^3_{j=1} \partial_j u_k(x) \frac{x_j}{R}
\color{black}
- \pi(x) \frac{x_k}{R}
\color{black}
+ \frac{\tau }{2}
\left(1-\frac{x_1}{R}\right) u_k(x)\right)_{1\le k\le 3}$$
for $x \in \partial B_R,\; R \in (0, \infty ) $ with $\overline{ \mbox{$\mathcal D$} } \subset B_R,\;
\color{black}
u\in W^{2,\, 6/5}(\mathcal D_R)^3,
\color{black}
\ \pi\in W^{1,\, 6/5}(\mathcal D_R)$.
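To indicate how this boundary operator arises, here is a hedged formal sketch (for sufficiently smooth functions; the precise statement is Lemma \ref{lem_interpret} below). Put $V:=\widetilde V(R,F,b)+\mathfrak E(b)|\mathcal D_R$ and $P:=P(R,F,b)$. Then (\ref{eq_2_1}) means $a_R(V,g)+\delta_R(V,g)+\beta_R(g,P)=\int_{\mathcal D_R}F\cdot g\, dx$ for $g\in W_R$, and integrating by parts, using $g|\partial \mathcal D=0$ and the outward unit normal $x/R$ on $\partial B_R$, one obtains
\begin{eqnarray*} &&
\int_{\mathcal D_R}\bigl[\,-\Delta V + (\tau e_1-\varrho e_1\times x)\cdot \nabla V
+\varrho e_1\times V +\nabla P - F\,\bigr]\cdot g\, dx
\\&&\hspace{3em}
+\int_{\partial B_R}\mathcal L_R(V,P)(x)\cdot g(x)\, do_x =0
\qquad \mbox{for}\;\; g\in W_R.
\end{eqnarray*}
Choosing first $g\in C^{\infty}_0(\mathcal D_R)^3$ yields the differential equations in $\mathcal D_R$, and the arbitrariness of $g|\partial B_R$ then yields the artificial boundary condition $\mathcal L_R(V,P)=0$ on $\partial B_R$.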
\begin{lem}\label{lem_interpret} Assume that $ \mathcal D$ is $\mathcal C^2$-bounded. Let
$
\color{black}
S \in (0, \infty )
\color{black}
$ with
$\overline{ \mathcal D}\subset B_S,\;
\color{black}
R\in[2S, \infty),
\color{black}
\;
F \in L^{6/5}( \mathcal{D}_R)^3$ and
$b\in W^{7/6,\,6/5 }( \partial\mathcal{D})^3.$
Put $V:=\widetilde V(R,F,b)+\mathfrak E(b)|\mathcal D_R$,
with $V(R,F,b)$ from Theorem \ref{thm_2.3} and $\mbox{$\mathfrak E$} (b)$
from Theorem \ref{thm_2.1}.
Suppose that
$
V\in W^{2,6/5}(\mathcal D_R)^3$
and
$ P=P(R,F,b)\in
W^{1,\, 6/5}(\mathcal D_R)$, with $P(R,F,b)$ also introduced in Theorem \ref{thm_2.3}.
Then
\begin{equation}
\begin{array}{crl}
-\Delta V (z) + ( \tau e_1- \varrho e_1 \times z)\cdot \nabla V(z) + \varrho e_1 \times V (z)+\nabla P(z)
= F(z),
\\
{\rm div}\,V(z)=0 \
\end{array}
\end{equation}
\noindent
for $z\in \mathcal D_R,$ and
$V|\partial\mathcal D =b, \ \ \mathcal L_R(V,P)=0.$
\end{lem}
The proof of Lemma \ref{lem_interpret} is obvious. This lemma
means that a solution of variational problem (\ref{eq_2_1}), (\ref{eq_2_2}) may be considered as a weak solution of the modified Oseen system with rotation in $\mathcal D_R$, under the Dirichlet boundary condition on $\partial \mathcal D$ and under the artificial boundary condition $\mathcal L_{R}(V,P)=0 $ on $\partial B_R$. The solution of (\ref{eq_2_1}), (\ref{eq_2_2}) will be now compared to the exterior modified Oseen flow
introduced in
Corollary \ref{corollary2.1}:
\begin{thm}\label{thm_Main_2}
Suppose that \mbox{$\mathcal D$} is $C^2$-bounded. Let $\gamma ,\, S_1 \in (0, \infty ) $
with $\overline{ \mathcal D} \subset B_{S_1}, \; A \in [5/2,\, \infty ),\; B \in \mathbb{R} $
with
$\ A+\min\{1,B\} > 3$.
Let $F:\overline{\mathcal D}^c \mapsto \mathbb{R}^3 $ be measurable with
$F| \mathcal D_{S_1} \in L^{6/5}(\mathcal D_{S_1})^3$ and
$|F(z)|\le \gamma\,|z|^{-A}s(z)^{-B} \ \text{for} \ z\in B^c_{S_1}$.
Let $ b \in W^{7/6,\, 6/5}( \partial \mbox{$\mathcal D$} )^3,\;
u \in W^{1,1}_{loc}( \overline{ \mathcal D}^c)^3\cap L^6(\overline{ \mathcal D}^c)^3$
such that
$
\nabla u \in L^2(\overline{ \mathcal D}^c)^9,\; {\rm div}\, u =0,\;
u| \partial \mbox{$\mathcal D$} =b
$
and equation (\ref{2.*}) is satisfied.
For $R \in [2S_1,\, \infty ),$ put
$
V_R:=\widetilde V(R,F,b) + \mbox{$\mathfrak E$} (b),
$
with $\mbox{$\mathfrak E$} (b)$ from Theorem \ref{thm_2.1},
and
$\widetilde V(R,F,b)
$
from Theorem \ref{thm_2.3}.
Then
\begin{align*}
&\bigl|\,u|_{\mathcal D_R}-V_R\,\bigr|^{(R)} \le C\,R^{-1} \quad \mbox{for}\;\; R \in [2S_1, \infty ).
\end{align*}
\end{thm}
We note that since $W^{2,\, 6/5}( \mbox{$\mathcal D$} ) \subset H^{1}( \mbox{$\mathcal D$} )$
by a Sobolev inequality, we have
$W^{7/6,\, 6/5}( \partial \mbox{$\mathcal D$} ) \subset H^{1/2}( \partial \mbox{$\mathcal D$} )$,
as follows with the usual
\color{black}
lifting and trace properties.
\color{black}
As a consequence,
$b \in H ^{1/2} ( \partial \mbox{$\mathcal D$})^3$, so the term $\mbox{$\mathfrak E$} (b)$
is well defined. We further remark that by Corollary \ref{corollary2.1} with $p=6/5$,
the function $F$ may be considered as a bounded linear functional on
$\mbox{$\mathcal D$} ^{1,2}_0( \overline{ \mbox{$\mathcal D$} }^c)^3.$
Therefore, as explained in Remark \ref{rem_existence}, a function $u$ with properties as stated in
Theorem \ref{thm_Main_2} does in fact exist.
\noindent
{\it Proof of Theorem \ref{thm_Main_2}:}
All conditions in Corollary \ref{corollary2.1} are verified if $\gamma ,\, S_1,\, A,\, B,\, F,\, u$
are given as in Theorem \ref{thm_Main_2}, and if $p=6/5$ and $S=2\,S_1.$
Note in this respect that the conditions on $u$ in Theorem \ref{thm_Main_2} obviously imply
$u \in W^{1,\, 6/5}_{loc}( \overline{ \mbox{$\mathcal D$} }^c)^3$.
Corollary \ref{corollary2.1} now yields that
$F \in L^{6/5}(\overline{ \mbox{$\mathcal D$} }^c)^3$ and that the function $u$
satisfies inequalities (\ref{1.6}) and (\ref{1.7})
with $S=2\, S_1$.
On the other hand, since
$u \in W^{1,\, 6/5}_{loc}( \overline{ \mbox{$\mathcal D$} }^c)^3$,
the function $G$ already considered in the proof of Corollary \ref{corollary2.1} (see (\ref{BB}))
belongs to $L^{6/5}_{loc}( \overline{ \mbox{$\mathcal D$} }^c)^3$.
Therefore, by interior regularity of solutions to the Stokes system (see \cite[Theorem IV.4.1]{Galdi}),
we may deduce from the equations (\ref{2.*}) and ${\rm div}\, u =0$ that
$u \in W^{2,\, 6/5}_{loc}( \overline{ \mbox{$\mathcal D$} }^c)^3$
and that there is $\pi \in W^{1,\, 6/5}_{loc}( \overline{ \mbox{$\mathcal D$} }^c)$ with
$L(u)+\nabla \pi =F$. In particular the pair $(u,\pi )$ verifies (\ref{2.**}).
In view of our assumptions on $A$ and $B$, we thus see that the requirements in
Corollary \ref{corollary2.2}
are fulfilled
for $\gamma ,\, S_1,\, A,\, B,\, F,\, u$
as in Theorem \ref{thm_Main_2} and for $p=6/5$ and $S=2\,S_1$.
As a consequence, Corollary \ref{corollary2.2} yields that there is $c_0 \in \mathbb{R} $ such that
(\ref{CC}) holds with $S=2\, S_1$.
Take $R \in [2\, S_1, \, \infty ).$
Since
$u \in W^{2,6/5}_{loc}( \overline{ \mbox{$\mathcal D$} }^c)^3$,
we have
$
\color{black}
u | \partial B_R \in W^{7/6,\,6/5}( \partial B_R)^3
\color{black}
$.
Combining this relation with the assumption
$
\color{black}
b
\in W^{7/6,\,6/5}( \partial \mbox{$\mathcal D$})^3
\color{black}
$
and the boundary condition $u| \partial \mbox{$\mathcal D$} =b,$
we get $u| \partial \mbox{$\mathcal D$} _R \in W^{7/6,\,6/5}( \partial \mbox{$\mathcal D$}_R)^3$.
Moreover our requirements on $u$ yield that
$u| \mbox{$\mathcal D$} _R \in W^{1,\,6/5}( \mbox{$\mathcal D$}_R)^3$.
Since $F \in L^{6/5}( \overline{ \mbox{$\mathcal D$} }^c)^3$,
as already mentioned, we get
$G| \mbox{$\mathcal D$} _R \in L^{6/5}( \mbox{$\mathcal D$}_R)^3$,
with $G$ from (\ref{BB}).
Recalling that \mbox{$\mathcal D$} is supposed to be $C^2$-bounded,
we may now apply the result in \cite[Lemma IV.6.1]{Galdi} on boundary regularity
of solutions to the Stokes system. This reference yields that
$u| \mbox{$\mathcal D$} _R \in W^{2,\,6/5}( \mbox{$\mathcal D$}_R)^3,\;
\pi | \mbox{$\mathcal D$} _R \in W^{1,\,6/5}( \mbox{$\mathcal D$}_R)$
and that the pair $(u,\pi) $ solves (\ref{eq_1.1}).
Let $P_R:=P(R,F,b)$ be given as in Theorem \ref{thm_2.3},
and put
$
w:=u-V_R,\; \kappa := \pi - P_R\, $,
and let $\, g \in W_R$.
Note that by Theorem \ref{thm_2.3}, we have
$$
a_R(V_R,g)+\delta _R(V_R,g) + \beta _R(g,P_R) = \int_{ \mathcal D_R}F \cdot g\, dx.
$$
Thus
\begin{eqnarray*} &&\hspace{-2em}
a_R (w,g) +\delta_R(w,g) + \beta _R (g,\kappa)
\\&&\hspace{-2em}
=a_{R} (u|_{\mathcal D_R},g) +\delta _R(u|_{\mathcal D_R},g)
+ \beta_R(g,\pi|_{\mathcal D_R})- \bigl(\, \underbrace{a_R (V_R ,g )
+ \delta _R (V_R,g) +\beta_R(g, P_R )\bigr)}
\\&&\hspace{25em}
_{=\int_{\mathcal D_R} F\cdot g\, dx} \,
\end{eqnarray*}
\begin{eqnarray*}
&&\hspace{-2em}=\int_{\mathcal D_R} \bigl(\, \nabla u \cdot \nabla g + \tau \partial_1 u \cdot g
- ( \varrho e_1 \times x) \cdot \nabla u\cdot g
+( \varrho e_1 \times u) \cdot g
\\&&
\hspace{18em}- \pi\,{\rm div} \, g - F \cdot g \,\bigr) \, dx
\\&&\hspace{-1em} +\frac{\tau }{2} \int_{\partial B_R} u(x) \cdot g(x) \left(1-\frac{x_1}{R}\right) do_x
\\&&\hspace{-2em}
=\displaystyle\int_{\mathcal D_R} [
-\Delta u \cdot g + \tau \partial_1 u \cdot g
- ( \varrho e_1\times x) \cdot \nabla u\cdot g
+( \varrho e_1 \times u) \cdot g
\\&&\hspace{18em}+ \nabla \pi \cdot g - F\cdot g
]\, dx
\\&&
\underbrace{\hspace{-1EM}+\int_{\partial B_R}\Bigl(
\sum^3_{j,k=1} \bigl[\, \partial_j u_k(x)\, g_k(x) \frac{x_j}{R}
-\pi (x)\delta_{jk} g_k(x) \frac{x_j}{R} \,\bigr]+ \frac{\tau }{2} u(x) \cdot g(x) (1-\frac{x_1}{R})\Bigr) do_x.}
\\&&
\hspace{15em}
_{=\int_{\partial B_R}\mathcal L_R(u,\pi)\cdot g \ do}
\end{eqnarray*}
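The second of these equalities (the passage from $\nabla u\cdot\nabla g$ to $-\Delta u\cdot g$) is obtained by integrating by parts in $\mathcal D_R$. Writing $n$ for the outward unit normal of $\mathcal D_R$, so that $n=x/R$ on $\partial B_R$, we used
$$
\int_{\mathcal D_R}\nabla u \cdot \nabla g\, dx
= -\int_{\mathcal D_R}\Delta u \cdot g\, dx
+ \int_{\partial \mathcal D_R}\sum^3_{j,k=1} \partial_j u_k(x)\, g_k(x)\, n_j\, do_x
$$
and
$$
-\int_{\mathcal D_R}\pi\, {\rm div}\, g\, dx
= \int_{\mathcal D_R}\nabla \pi \cdot g\, dx
- \int_{\partial \mathcal D_R}\pi(x)\, g(x)\cdot n\, do_x;
$$
the integrals over $\partial \mathcal D$ give no contribution because the test functions $g\in W_R$ vanish on $\partial \mathcal D$.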
\noindent
Since the pair $(u,\pi)$ solves (\ref{eq_1.1}),
we now get
\begin{eqnarray}
a_R (w,g) +\delta_R(w,g) + \beta _R (g,\kappa)
=
\int_{\partial B_R}
\mathcal L_R (u,\pi)(x) \cdot g(x)\, do_x.
\end{eqnarray}
Let $c\in\mathbb R$ be an arbitrary constant.
For $g:=w$ we get with Lemma \ref{lem_2.4} that
\begin{eqnarray}\label{eq_w}
(|w|^{(R)})^2 &=&a_R (w,w) +\delta_R(w,w) + \beta _R (w,\kappa)\nonumber\\ &=&
\int_{\partial B_R}
\mathcal L_R (u,\pi+c)(x) \cdot w(x)\, do_x\, ,
\end{eqnarray}
\color{black}
because, by the assumptions on $u$ and Theorems \ref{thm_2.1} and \ref{thm_2.3},
\color{black}
$$
\int_{\partial B_R}\Bigl[
\sum^3_{j,k=1} \, c\delta_{jk} w_k(x) \frac{x_j}{R} \,\Bigr] do_x=
\int_{
\color{black}
\partial {\mathcal D}
\color{black}
}
\, c\,w\cdot n \, do_x+\int_{ {\mathcal D}_R}
\, c\,\hbox{div}\, w \, d x=0,
$$
\color{black}
where $n$ denotes the outward unit normal to
$\mathcal D.$
\color{black}
Let $c_0$ be the constant introduced above as part of
estimate (\ref{CC}).
Because
$$
\int_{\partial B_R} \mathcal L_R (u,\pi+c_0)(x) \cdot w(x)\, do_x
\le \|\mathcal L_R(u,\pi+c_{0})\|_2\,
\|w|_{
\color{black}
\partial B_R
\color{black}
} \|_2
\le C\, \|\mathcal L_R (u,\pi+c_0)\|_2 \cdot |w|^{(R)},
$$
we get from (\ref{eq_w}) $$|w|^{(R)}\le C \|\mathcal L_R (u,\pi+c_0)\|_2.
$$
It remains to establish the estimate
$\|\mathcal L_R (u,\pi+c_{0}) \|_2\le
C\,R^{-1}$.
\color{black}
We start by observing that
\color{black}
\begin{eqnarray*}&&
\hspace{-4em}\|\mathcal L_R (u,\pi+c_{0}) \|_2
\\&&
\hspace{-2em}\le C \Big[\|\nabla u|_{\partial B_R} \|_2 + \|[\pi(x)+c_0] |_{\partial B_R}\|_2 +
\left(\int_{\partial B_R} \left(1-\frac{x_1}{R}\right)^2|u(x)|^2do_x\right)^{1/2}\Big].
\end{eqnarray*}
As explained above, inequalities (\ref{1.6}), (\ref{1.7}) and (\ref{CC})
are valid with $S=2\, S_1$. According to (\ref{1.6}) and (\ref{CC}), we have
$
| u(x)| \le C\, (|x|s(x))^{-1},
$
and $|\pi(x)+c_0|\le C\,|x|^{-2} $
for $x \in B_{2\cdot S_1}^c$.
Inequality (\ref{1.7}) yields
$
| \nabla u(x)| \le C\,
|x|^{
\color{black}
-3/2
\color{black}
}
s(x)^{-B ^{\prime} }
$
for $x$ as before, with
$B ^{\prime} :=3/2-\max\{0,\, 7/2-A-B\}$.
If $B\ge 1$, we recall that $A\ge 5/2$, getting $A+B\ge 7/2,$
hence $B ^{\prime} =3/2$.
On the other hand, if $B<1$, then
$
\color{black}
\min
\color{black}
\{1,B\}=B$, so that the assumption
$A+
\color{black}
\min
\color{black}
\{1,B\}>3$
becomes $A+B>3$, hence $B ^{\prime} > 1$. Thus we get in any case that $B ^{\prime} >1>1/2$.
In view of these observations, and with Lemma \ref{Fa}, we obtain
\begin{eqnarray*}
\|\mathcal L_R(u,\pi +c_0)\|_2 &\le& C
\left[ \left(
\int_{\partial B_R}\, |x|^{-3}s(x)^{-2\,B ^{\prime} } \, do_x
\right)^{1/2}
+ \left(\int_{\partial B_R} |\pi(x)+c_0|^2 do_x\right)^{1/2}\right.
\\&&\hspace{3em}
+\left. \left(\frac1{R^2} \int_{\partial B_R}(|x|-x_1)^2|u(x)|^2do_x\right)^{1/2}\right]
\\
&\le& C \left[ \left(\frac1{R^3}\int_{\partial B_R}s(x)^{-2\, B ^{\prime} } do_x\right)^{1/2}
+ \left(\frac{1}{R^4}\int_{\partial B_R} 1\, do_x\right)^{1/2}
\right.
\\&&\hspace{3em}
\left.+\left(\frac1{R^2}\int_{\partial B_R} s(x)^2(|x|s(x))^{-2}do_x\right)^{1/2} \right]
\\
&\le&
C \left[\left(\frac{1}{R^2} \right)^{1/2}
+\left(\frac{1}{R^2} \right)^{1/2} +\left(\frac1{R^4}\int_{\partial B_R}1\, do_x\right)^{1/2}\right]\le CR^{-1}.
\end{eqnarray*}
This completes the proof of Theorem \ref{thm_Main_2}.
$\Box $
\textbf{Acknowledgements:} \vskip0.25cm \textit{ The works of S.K. and \v{S}. N. were supported by Grant No. 16-03230S of GA\v{C}R in the framework of RVO 67985840, S.K. is supported by RVO 12000. Final version was supported by Grant No. 19-04243S of GA\v CR.}
\end{document}
\begin{document}
\begin{comment}
60 Probability theory and stochastic processes
60C Combinatorial probability
60C05 Combinatorial probability
60F Limit theorems [See also 28Dxx, 60B12]
60F05 Central limit and other weak theorems
60F17 Functional limit theorems; invariance principles
60G Stochastic processes
60G09 Exchangeability
60G55 Point processes
\end{comment}
\begin{abstract}
We give explicit bounds for the tail probabilities for sums of independent
geometric or exponential variables, possibly with different parameters.
\end{abstract}
\maketitle
\section{Introduction and notation}\label{S:intro}
Let $X=\sum_{i=1}^n X_i$, where $n\ge1$ and $X_i$, $i=1,\dots,n$,
are independent geometric random variables
with possibly different distributions:
$X_i\sim \Ge(p_i)$
with $0<p_i\le 1$, \ie,
\begin{equation}\label{geo}
\P(X_i=k) = p_i(1-p_i)^{k-1},\qquad k=1,2,\dots.
\end{equation}
Our goal is to estimate the tail probabilities $\P(X\ge x)$.
(Since $X$ is integer-valued, it suffices to consider integer $x$. However,
it is convenient to allow arbitrary real $x$, and we do so.)
We define
\begin{align}
\mu&:=\E X = \sum_{i=1}^n \E X_i = \sum_{i=1}^n \frac{1}{p_i},
\\
\px&:=\min_i p_i.
\end{align}
We shall see that $\px$ plays an important role in our estimates, which
roughly speaking show that the tail probabilities of $X$ decrease at
about the same rate as the tail probabilities of $\Ge(\px)$, \ie, as
for the variable $X_i$
with smallest $p_i$ and thus fattest tail.
Recall the simple and well-known fact that \eqref{geo} implies that,
for any non-zero $z$ such that $|z|(1-p_i)<1$,
\begin{equation}\label{pgf}
\E z^{X_i} = \sum_{k=1}^\infty z^k \P(X_i=k)=\frac{p_i z}{1-(1-p_i)z}
=\frac{p_i }{z\qw-1+p_i}.
\end{equation}
For future use, note that
since $x\mapsto-\ln(1-x)$ is convex on $(0,1)$ and $0$ for $x=0$,
\begin{equation}\label{tp}
-\ln(1-x) \le -\frac{x}y\ln(1-y),
\qquad 0< x\le y<1.
\end{equation}
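To verify \eqref{tp}, let $f(t):=-\ln(1-t)$; then $f$ is convex on $[0,1)$ with $f(0)=0$, and hence
\begin{equation*}
-\ln(1-x)=f\Bigl(\frac{x}{y}\,y+\Bigl(1-\frac{x}{y}\Bigr)\cdot 0\Bigr)
\le \frac{x}{y}\,f(y)+\Bigl(1-\frac{x}{y}\Bigr)f(0)
= -\frac{x}{y}\ln(1-y).
\end{equation*}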
\begin{remark}
The theorems and corollaries below
hold also, with the same proofs, for infinite sums
$X=\sum_{i=1}^\infty X_i$, provided $\E X=\sum_i p_i\qw<\infty$.
\end{remark}
\section{Upper bounds for the upper tail}
We begin with a simple upper bound obtained by
the classical method of estimating the moment generating function
(or probability generating function) and
using the standard inequality (an instance of Markov's inequality)
\begin{equation}
\label{markov}
\P(X\ge x)\le z^{-x}\E z^X,
\qquad z\ge1,
\end{equation}
or equivalently
\begin{equation}
\label{markov2}
\P(X\ge x)\le e^{-tx}\E e^{tX},
\qquad t\ge0.
\end{equation}
(Cf.~the related ``Chernoff bounds'' for the binomial distribution that are
proved by this method, see \eg{} \cite[Theorem 2.1]{JLR}, and
see \eg{} \cite{BLM} for other applications of this method.
See also \eg{} \cite[Chapter 2]{DemboZeitouni} or \cite[Chapter 27]{Kallenberg}
for more general large deviation theory.)
\begin{theorem}\label{T1}
For any $p_1,\dots,p_n\in(0,1]$ and any $\gl\ge1$,
\begin{equation}\label{xp}
\P(X\ge \lex)
\le
e^{-\px\mu(\gl-1-\ln \gl)}.
\end{equation}
\end{theorem}
\begin{proof}
If $0\le t<p_i$, then $e^{-t}-1+p_i\ge p_i-t>0$, and thus by \eqref{pgf},
\begin{equation}
\E e^{t X_i}
= \frac{p_i}{e^{-t}-1+p_i}
\le \frac{p_i}{p_i-t}
=\Bigpar{1-\frac{t}{p_i}}\qw.
\end{equation}
Hence, if $0\le t <\px=\min_i p_i$, then
\begin{equation}
\E e^{t X} = \prod_{i=1}^n \E e^{t X_i}
\le \prod_{i=1}^n \Bigpar{1-\frac{t}{p_i}}\qw
\end{equation}
and, by \eqref{markov2},
\begin{equation}\label{bo}
\begin{split}
\P(X\ge \gl \mu)\le e^{-t\gl\mu} \E e^{t X}
\le \exp \biggpar{-t\gl\mu + \sum_{i=1}^n-\ln \Bigpar{1-\frac{t}{p_i}}}.
\end{split}
\end{equation}
By \eqref{tp} and $0<\px/p_i\le1$,
we have, for $0\le t<\px$,
\begin{equation}\label{tpp}
-\ln \Bigpar{1-\frac{t}{p_i}}
\le-\frac{\px}{p_i}\ln\Bigpar{1-\frac{t}{\px}}.
\end{equation}
Consequently,
\eqref{bo} yields
\begin{equation}\label{box}
\begin{split}
\P(X\ge \gl \mu)
&\le \exp\biggpar{-t\gl\mu -\ln \Bigpar{1-\frac{t}{\px}} \sum_{i=1}^n
\frac{\px}{p_i}}
\\&
= \exp\biggpar{-t\gl\mu -\px\mu\ln \Bigpar{1-\frac{t}{\px}}}.
\end{split}
\end{equation}
Choosing $t=(1-\gl\qw)\px$ (which is optimal in \eqref{box}), we obtain
\eqref{xp}.
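Indeed, with this choice $t\gl\mu=(\gl-1)\px\mu$ and $1-t/\px=\gl\qw$, so the exponent in \eqref{box} equals
\begin{equation*}
-t\gl\mu-\px\mu\ln\Bigl(1-\frac{t}{\px}\Bigr)
=-(\gl-1)\px\mu+\px\mu\ln\gl
=-\px\mu(\gl-1-\ln\gl).
\end{equation*}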
\end{proof}
As a corollary we obtain a bound that is generally much cruder, but has the
advantage of not depending on the $p_i$'s at all.
\begin{corollary}\label{C1}
For any $p_1,\dots,p_n\in(0,1]$ and any $\gl\ge1$,
\begin{equation}\label{c1}
\P(X\ge \lex)
\le \gl e^{1-\gl} = e\gl e^{-\gl}.
\end{equation}
\end{corollary}
\begin{proof}
Use $\mu\ge1/p_i$ for each $i$, and thus $\mu\px\ge1$ in \eqref{xp}.
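Indeed, since $\gl-1-\ln\gl\ge0$ for $\gl\ge1$, \eqref{xp} then gives
\begin{equation*}
\P(X\ge \lex)
\le e^{-\px\mu(\gl-1-\ln \gl)}
\le e^{-(\gl-1-\ln \gl)}
= \gl e^{1-\gl}.
\end{equation*}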
(Alternatively, use $t=(1-\gl\qw)/\mu$ in \eqref{box}.)
\end{proof}
The bound in \refT{T1}
is rather sharp in many cases.
Also the cruder \eqref{c1} is almost
sharp for $n=1$ (a single $X_i$) and small
$\px=p_1$; in this case
$\mu=1/p_1$ and
\begin{equation}
\P(X\ge\lex)=(1-p_1)^{\ceil{\gl\mu}-1}
=\exp\bigpar{-\gl+O(\gl p_1)} .
\end{equation}
Nevertheless,
we can improve \eqref{xp} somewhat, in particular when $\px=\min_i p_i$ is
not small,
by using more careful estimates.
\begin{theorem}\label{T2}
For any $p_1,\dots,p_n\in(0,1]$ and any $\gl\ge1$,
\begin{equation}\label{t2}
\P(X\ge \lex)
\le\gl\qw (1-\px)^{(\gl-1-\ln\gl)\mu}.
\end{equation}
\end{theorem}
The proof is given below. We note that \refT{T2} implies a minor improvement
of \refC{C1}:
\begin{corollary}\label{C2}
For any $p_1,\dots,p_n\in(0,1]$ and any $\gl\ge1$,
\begin{equation}\label{c2}
\P(X\ge \lex)
\le e^{1-\gl} .
\end{equation}
\end{corollary}
\begin{proof}
Use \eqref{t2} and $(1-\px)^\mu\le e^{-\px\mu}\le e^{-1}$.
\end{proof}
We begin the proof of \refT{T2}
with two lemmas yielding a minor improvement
of \eqref{markov}
using the fact that the variables are
geometric. (The lemmas actually use only that one of the variables is
geometric.)
\begin{lemma}
\label{L0}
\begin{thmenumerate}
\item
For any integers $j$ and $k$ with $j\ge k$,
\begin{equation}\label{ao}
\P(X\ge j)\ge(1-\px)^{j-k} \P(X\ge k).
\end{equation}
\item
For any real numbers $x$ and $y$ with $x\ge y$,
\begin{equation}\label{aoxy}
\P(X\ge x)\ge(1-\px)^{x-y+1} \P(X\ge y).
\end{equation}
\end{thmenumerate}
\end{lemma}
\begin{proof}
(i).
We may without loss of generality assume that $\px=p_1$.
Then, for any integers $i,j,k$ with $j\ge k$,
\begin{equation}
\P(X\ge j\mid X-X_1=i)
=\P(X_1\ge j-i)
=(1-\px)^{(j-i-1)_+},
\end{equation}
and similarly for $ \P(X\ge k\mid X-X_1=i)$.
Since $(j-i-1)_+\le j-k+(k-i-1)_+$, it follows that
\begin{equation}
\P(X\ge j\mid X-X_1=i)
\ge(1-\px)^{j-k} \P(X\ge k\mid X-X_1=i)
\end{equation}
for every $i$, and thus \eqref{ao} follows by taking the expectation.
(ii). For real $x$ and $y$ we obtain from \eqref{ao}
\begin{equation}
\begin{split}
\P(X\ge x)&=\P(X\ge\ceil x)\ge(1-\px)^{\ceil x-\ceil y} \P(X\ge \ceil y)
\\&
\ge(1-\px)^{x- y+1} \P(X\ge y).
\end{split}
\end{equation}
\end{proof}
\begin{lemma}\label{L1}
For any $x\ge0$ and $z\ge1$ with $z(1-\px)<1$,
\begin{equation}\label{l1}
\P(X\ge x)\le \frac{1-z(1-\px)}{\px}z^{-x}\E z^X.
\end{equation}
\end{lemma}
\begin{proof}
Since $z\ge1$, \eqref{ao} implies that for every $k\ge1$,
\begin{equation}
\begin{split}
\E z^X
&\ge
\E(z^X\cdot\ett{X\ge k})
=\E\biggpar{\biggpar{z^k+(z-1)\sum_{j=k}^{X-1}z^j}\ett{X\ge k}}
\\&
=\E\biggpar{z^k\ett{X\ge k}+(z-1)\sum_{j=k}^{\infty}z^j\ett{X\ge j+1}}
\\&
=z^k\P\xpar{X\ge k}+(z-1)\sum_{j=k}^{\infty}z^j\P\xpar{X\ge j+1}
\\&
\ge z^k\P\xpar{X\ge k}\biggpar{1+(z-1)\sum_{j=k}^{\infty}z^{j-k}(1-\px)^{j+1-k}}
\\&
=
z^k\P\xpar{X\ge k}\biggpar{1+\frac{(z-1)(1-\px)}{1-z(1-\px)}}
\\&
=
z^k\P\xpar{X\ge k}\frac{\px}{1-z(1-\px)}.
\end{split}
\end{equation}
The result \eqref{l1}
follows when $x=k$ is a positive integer. The general case
follows by taking $k=\max(\ceil x,1)$ since then $\P(X\ge x)=\P(X\ge k)$.
\end{proof}
\begin{proof}[Proof of \refT{T2}]
We may assume that $\px<1$. (Otherwise every $p_i=1$ and $X_i=1$ a.s., so
$X=n=\mu$ a.s\punkt{} and the result is trivial.)
We then choose
\begin{equation}
\label{z}
z:=\frac{\gl-\px}{\gl(1-\px)},
\end{equation}
\ie,
\begin{equation}\label{zqw}
z\qw = \frac{\gl(1-\px)}{\gl-\px}
= 1-\frac{(\gl-1)\px}{\gl-\px}
;
\end{equation}
note that $z\qw\le1$ so $z\ge1$
and $z\qw>1-\px\ge1-p_i$ for every $i$.
Thus, by \eqref{pgf},
\begin{equation}\label{qk}
\E z^X = \prod_{i=1}^n \E z^{X_i}
=\prod_{i=1}^n\frac{p_i}{z\qw-1+p_i}
=\prod_{i=1}^n\frac{1}{1-(1-z\qw)/p_i}
.
\end{equation}
By \eqref{qk}, \eqref{tpp} (with $t=1-z\qw<\px$) and \eqref{zqw},
\begin{equation}\label{sw}
\begin{split}
\ln \E z^X
&=-\sum_{i=1}^n\ln\Bigpar{1-\frac{1-z\qw}{p_i}}
\le
-\sum_{i=1}^n\frac{\px}{p_i}\ln\Bigpar{1-\frac{1-z\qw}{\px}}
\\&
= -\sum_{i=1}^n\frac{\px}{p_i}\ln\Bigpar{1-\frac{\gl-1}{\gl-\px}}
= -\mu\px\ln\frac{1-\px}{\gl-\px}
= \mu\px\ln\frac{\gl-\px}{1-\px}.
\end{split}
\end{equation}
Furthermore, by \eqref{z},
\begin{equation}
\frac{1-z(1-\px)}{\px}
=
\frac{1-(\gl-\px)/\gl}{\px} = \frac{1}{\gl}.
\end{equation}
Hence, \refL{L1}, \eqref{z} and \eqref{sw} yield
\begin{equation}\label{bx}
\begin{split}
\ln \P(X\ge \gl\mu)
&\le -\ln\gl -\gl\mu \ln z +\ln\E z^X
\\&
\le -\ln\gl -\gl\mu \ln \frac{\gl-\px}{\gl(1-\px)}
+\mu\px\ln\frac{\gl-\px}{1-\px}
\\&
=-\ln\gl+\gl\mu\ln (1-\px) + \mu f(\gl),
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
f(\gl)&:=
-\gl \ln \frac{\gl-\px}{\gl}
+\px\ln\frac{\gl-\px}{1-\px}
\\&\phantom:
=
-(\gl-\px) \ln \xpar{\gl-\px}
+\gl\ln\gl
-\px\ln\xpar{1-\px}
.
\end{split}
\end{equation}
We have $f(1)=-\ln(1-\px)$ and, for $\gl\ge1$,
using \eqref{tp},
\begin{equation}\label{f'}
\begin{split}
f'(\gl)
=
- \ln \xpar{\gl-\px}
+\ln\gl
=-\ln\Bigpar{1-\frac{\px}{\gl}}
\le
-\frac{1}{\gl}\ln\xpar{1-{\px}}.
\end{split}
\end{equation}
Consequently, by integrating \eqref{f'}, for all $\gl\ge1$,
\begin{equation}
f(\gl)\le -\ln(1-\px) -\ln\gl\cdot\ln(1-\px),
\end{equation}
and the result \eqref{t2} follows by \eqref{bx}.
\end{proof}
\begin{remark}
Note that for large $\gl$, the exponents above are roughly linear in
$\gl$, while for $\gl=1+o(1)$ we have $\gl-1-\ln\gl\sim\frac12(\gl-1)^2$ so
the exponents are quadratic in $\gl-1$. The latter is to be expected from
the central limit theorem. However, if $\gl=1+\eps$ with $\eps$ very small
and the central limit theorem is applicable, then $\P(X\ge(1+\eps)\mu)$ is
roughly $\exp(-\eps^2\mu^2/(2\gss))$, where $\gss=\Var X = \sum_{i=1}^n \Var X_i
=\sum_{i=1}^n\frac{1-p_i}{p_i^2}$. Hence,
in this case
the exponents in \eqref{xp} and \eqref{t2} are
asymptotically too small by a factor of roughly,
for small $p_i$,
\begin{equation}
\frac{\px\mu}{\mu^2/\gss}\approx \frac{\px\sum_{i=1}^n p_i\qww}{\sum_{i=1}^n p_i\qw},
\end{equation}
which may be much smaller than 1. (For example if $p_2=\dots=p_n$ and
$p_1=p_2/ n\qqq$.)
\end{remark}
\section{Upper bounds for the lower tail}
We can similarly bound the probability $\P(X\le\lex)$ for $\gl\le1$.
We give only a simple bound corresponding to \refT{T1}.
(Note that $\gl-1-\ln\gl>0$ for both $\gl\in(0,1)$ and $\gl\in(1,\infty)$.)
\begin{theorem}\label{TL1}
For any $p_1,\dots,p_n\in(0,1]$ and any $\gl\le1$,
\begin{equation}\label{xp-}
\P(X\le \lex)
\le
e^{-\px\mu(\gl-1-\ln \gl)}.
\end{equation}
\end{theorem}
\begin{proof}
We follow closely the proof of \refT{T1}.
If $t\ge0$, then by \eqref{pgf},
\begin{equation}
\E e^{-t X_i}
= \frac{p_i}{e^{t}-1+p_i}
\le \frac{p_i}{t+p_i}
=\Bigpar{1+\frac{t}{p_i}}\qw.
\end{equation}
Hence
\begin{equation}
\E e^{-t X} = \prod_{i=1}^n \E e^{-t X_i}
\le \prod_{i=1}^n \Bigpar{1+\frac{t}{p_i}}\qw
\end{equation}
and, in analogy to \eqref{markov2},
\begin{equation}\label{bo-}
\begin{split}
\P(X\le \gl \mu)\le e^{t\gl\mu} \E e^{-t X}
\le \exp \biggpar{t\gl\mu -\sum_{i=1}^n\ln \Bigpar{1+\frac{t}{p_i}}}.
\end{split}
\end{equation}
In analogy with \eqref{tpp}, still by the convexity of $-\ln x$,
\begin{equation}\label{tpp-}
-\ln \Bigpar{1+\frac{t}{p_i}}
\le-\frac{\px}{p_i}\ln\Bigpar{1+\frac{t}{\px}},
\end{equation}
and \eqref{bo-} yields
\begin{equation}\label{box-}
\begin{split}
\P(X\le \gl \mu)
&\le \exp\biggpar{t\gl\mu -\ln \Bigpar{1+\frac{t}{\px}} \sum_{i=1}^n
\frac{\px}{p_i}}
\\&
= \exp\biggpar{t\gl\mu -\px\mu\ln \Bigpar{1+\frac{t}{\px}}}.
\end{split}
\end{equation}
Choosing $t=(\gl\qw-1)\px$, we obtain \eqref{xp-}.
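Indeed, this choice gives $t\ge0$, $t\gl\mu=(1-\gl)\px\mu$ and $1+t/\px=\gl\qw$, so the exponent in \eqref{box-} equals
\begin{equation*}
t\gl\mu-\px\mu\ln\Bigl(1+\frac{t}{\px}\Bigr)
=(1-\gl)\px\mu+\px\mu\ln\gl
=-\px\mu(\gl-1-\ln\gl).
\end{equation*}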
\end{proof}
\section{A lower bound}
We show also a general lower bound for the upper tail probabilities,
which shows that for constant $\gl>1$,
the exponents in Theorems \ref{T1} and \ref{T2} are at most
a constant factor away from best possible.
\begin{theorem}
\label{TL}
For any $p_1,\dots,p_n\in(0,1]$ and any $\gl\ge1$,
\begin{equation}\label{tl}
\P(X\ge\lex)
\ge \frac{(1-\px)^{1+1/\px}}{2\px\mu}(1-\px)^{(\gl-1)\mu}.
\end{equation}
\end{theorem}
\begin{lemma}\label{LA}
If $A\ge1$ and $0\le x\le 1/A$, then
\begin{equation}
A\bigpar{x+\ln(1-x)}
\le \ln\bigpar{1-Ax^2/2}.
\end{equation}
\end{lemma}
\begin{proof}
Let $f(x):=A\bigpar{x+\ln(1-x)}-\ln\bigpar{1-Ax^2/2}$.
Then $f(0)=0$ and
\begin{equation}
f'(x)=A\Bigpar{1-\frac{1}{1-x}}+\frac{Ax}{1-Ax^2/2}
=
-\frac{Ax}{1-x}+\frac{Ax}{1-Ax^2/2}
\le0
\end{equation}
for $0\le x<1/A\le1$, since then $0<1-x\le 1-Ax^2/2$.
Hence $f(x)\le0$ for $0\le x\le 1/A$.
\end{proof}
\begin{proof}[Proof of \refT{TL}]
Let $\eps:=1/(\px\mu)$.
By \refT{TL1} (with $\gl=1-\eps$) and \refL{LA} (with $A=\px\mu\ge1$),
\begin{equation}
\begin{split}
\P(X\le(1-\eps)\mu)
\le \exp\bigpar{-\px\mu(-\eps-\ln(1-\eps))}
\le 1-\frac{\px\mu\eps^2}2
=1-\frac{1}{2\px\mu}.
\end{split}
\end{equation}
Hence,
$\P(X\ge(1-\eps)\mu) \ge \xqfrac{1}{2\px\mu}$, and by \refL{L0}(ii),
\begin{equation*}
\P(X\ge\lex)
\ge (1-\px)^{(\gl-1+\eps)\mu+1}
\P(X\ge(1-\eps)\mu)
\ge (1-\px)^{(\gl-1+\eps)\mu+1}\frac{1}{2\px\mu},
\end{equation*}
which completes the proof since $\eps\mu=1/\px$.
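More explicitly, since $\eps\mu=1/\px$,
\begin{equation*}
(1-\px)^{(\gl-1+\eps)\mu+1}
=(1-\px)^{1+\eps\mu}\,(1-\px)^{(\gl-1)\mu}
=(1-\px)^{1+1/\px}(1-\px)^{(\gl-1)\mu},
\end{equation*}
which together with the factor $\frac{1}{2\px\mu}$ gives the right-hand side of \eqref{tl}.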
\end{proof}
\section{Exponential distributions}
In this section we assume that $X=\sum_{i=1}^n X_i$ where $X_i$, $i=1,\dots,n$, are
independent random variables with exponential distributions:
$X_i\sim\Exp(a_i)$, with density function $a_ie^{-a_i x}$, $x>0$,
and expectation $\E X_i=1/a_i$. (Thus $a_i$ can be interpreted as a rate.)
The exponential distribution is the continuous analogue of the geometric
distributions, and the results above have (simpler) analogues for
exponential distributions.
We now define
\begin{align}
\mu&:=\E X = \sum_{i=1}^n \E X_i = \sum_{i=1}^n \frac{1}{a_i},
\\
\ax&:=\min_i a_i.
\end{align}
\begin{theorem}
\label{Texp}
Let $X=\sum_{i=1}^n X_i$ with $X_i\sim\Exp(a_i)$ independent.
\begin{romenumerate}[-10pt]
\item
For any $\gl\ge1$,
\begin{equation}
\P(X\ge \lex)
\le\gl\qw e^{-\ax\mu(\gl-1-\ln\gl)}.
\end{equation}
\item For any $\gl\ge1$, we have also the simpler but weaker
\begin{equation}
\P(X\ge \lex)
\le e^{1-\gl} .
\end{equation}
\item
For any $\gl\le1$,
\begin{equation}
\P(X\le \lex)
\le
e^{-\ax\mu(\gl-1-\ln \gl)}.
\end{equation}
\item
For any $\gl\ge1$,
\begin{equation}
\P(X\ge\lex)
\ge \frac{1}{2e\ax\mu}e^{-\ax\mu(\gl-1)}.
\end{equation}
\end{romenumerate}
\end{theorem}
\begin{proof}
Let $X_i\NN\sim\Ge(a_i/N)$ be independent (for $N>\max_i a_i$).
Then $X_i\NN/N\dto X_i$, where $\dto$ denotes convergence in distribution,
and thus $X\NN/N\dto X$, where
$X\NN:=\sum_{i=1}^n X_i\NN$.
Furthermore, $\mu\NN:=\E X\NN=N\mu$ and $\px:=\min_i (a_i/N)=\ax/N$.
The results follow by taking the limit as \Ntoo{} in \eqref{t2}, \eqref{c2},
\eqref{xp-} and \eqref{tl}.
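For example, for part (i), applying \eqref{t2} to $X\NN$ gives
$\P(X\NN\ge\gl\mu\NN)\le\gl\qw(1-\ax/N)^{(\gl-1-\ln\gl)N\mu}$, and
\begin{equation*}
(1-\ax/N)^{(\gl-1-\ln\gl)N\mu}\to e^{-\ax\mu(\gl-1-\ln\gl)}
\qquad\mbox{as } N\to\infty.
\end{equation*}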
(Alternatively, we may imitate the proofs above, using $\E
e^{tX_i}=a_i/(a_i-t)$ for $t<a_i$.)
\end{proof}
\newcommand\AAP{\emph{Adv. Appl. Probab.} }
\newcommand\JAP{\emph{J. Appl. Probab.} }
\newcommand\JAMS{\emph{J. \AMS} }
\newcommand\MAMS{\emph{Memoirs \AMS} }
\newcommand\PAMS{\emph{Proc. \AMS} }
\newcommand\TAMS{\emph{Trans. \AMS} }
\newcommand\AnnMS{\emph{Ann. Math. Statist.} }
\newcommand\AnnPr{\emph{Ann. Probab.} }
\newcommand\CPC{\emph{Combin. Probab. Comput.} }
\newcommand\JMAA{\emph{J. Math. Anal. Appl.} }
\newcommand\RSA{\emph{Random Struct. Alg.} }
\newcommand\ZW{\emph{Z. Wahrsch. Verw. Gebiete} }
\newcommand\DMTCS{\jour{Discr. Math. Theor. Comput. Sci.} }
\newcommand\AMS{Amer. Math. Soc.}
\newcommand\Springer{Springer-Verlag}
\newcommand\Wiley{Wiley}
\newcommand\vol{\textbf}
\newcommand\jour{\emph}
\newcommand\book{\emph}
\newcommand\inbook{\emph}
\def\no#1#2,{\unskip#2, no. #1,}
\newcommand\toappear{\unskip, to appear}
\newcommand\arxiv[1]{\url{arXiv:#1}}
\newcommand\arXiv{\arxiv}
\def\nobibitem#1\par{}
\end{document}
\begin{document}
\title{Estimates on fractional higher derivatives of weak solutions for
the Navier-Stokes equations}
\markboth{Fractional higher derivatives of weak solutions for
Navier-Stokes}{Fractional higher derivatives of weak solutions for
Navier-Stokes}
\renewcommand{\sectionmark}[1]{}
\begin{abstract}
We study weak solutions of the 3D Navier-Stokes equations in whole space
with $L^2$ initial data.
It will be proved that
$\nabla^\alpha u $ is locally integrable
in space-time
for any real $\alpha$ such that
$1< \alpha <3$, which says that almost third derivative is locally integrable.
Up to now, only second derivative $\nabla^2 u$ has been known to
be locally integrable by standard parabolic regularization.
We also present sharp estimates of
those quantities in weak-$L_{loc}^{4/(\alpha+1)}$.
These estimates depend only
on the $L^2$ norm of initial data and integrating domains.
Moreover, they are valid even for $\alpha\geq 3$ as
long as $u$ is smooth.
The proof uses a good approximation of the Navier-Stokes equations and a blow-up technique, which allow us to focus on a local study.
For the local study, we use the De Giorgi method with a new pressure decomposition.
To handle non-locality of the fractional Laplacian, we will adopt
some properties of the Hardy space and Maximal functions.
\end{abstract}
\textbf{Mathematics Subject Classification}: 76D05, 35Q30.
\section{Introduction and main result}
\qquad In this paper, any derivative signs ($\nabla,\Delta,(-\Delta)^{\alpha/2},D,\partial$, etc.) denote derivatives only in the space variable
$x\in \mathbb{R}^3$ unless time
variable $t\in\mathbb{R}$
is clearly specified.
We study the 3-D Navier-Stokes equations
\begin{equation}\label{navier}
\begin{split}
\partial_tu+(u\cdot\nabla)u+ \nabla P -\Delta u&=0 \quad\mbox{and}\\
\ebdiv u&=0,\quad \quad t\in ( 0,\infty ),\quad x\in \mathbb{R}^3
\end{split}
\end{equation} with $L^2$ initial data
\begin{equation}\label{initial_condition}
u_0\in L^2(\mathbb{R}^3) ,\quad \ebdiv u_0= 0.
\end{equation}
Regularity of weak solutions for the 3D Navier-Stokes equations has long history.
Leray \cite{leray} in the 1930s and Hopf \cite{hopf} in the 1950s
proved the existence of a global-time
weak solution for any given $L^2$ initial data.
Such Leray-Hopf weak solutions $u$ lie
in $L^\infty(0,\infty;L^2(\mathbb{R}^3))$ and
$\nabla u$ do in $L^2(0,\infty;L^2(\mathbb{R}^3))$ and
satisfy the energy inequality:
\begin{equation*}
\|u(t)\|_{L^2(\mathbb{R}^3)}^2+
2\|\nabla u\|^2_{L^2(0,t;L^2(\mathbb{R}^3))}
\leq \|u_0\|_{L^2(\mathbb{R}^3)}^2 \quad \mbox{ for a.e. } t<\infty.
\end{equation*}
Until now, regularity and uniqueness of such weak solutions have remained
open in general.\\% while those of local-time smooth solution has been shown
Instead, many criteria which
ensure regularity of weak solutions
have been developed. Among
them the most famous one is Lady{\v{z}}enskaja-Prodi-Serrin Criteria
(\cite{lady},\cite{prodi} and \cite{Serrin}),
which says:
if $u\in L^p((0,T);L^q(\mathbb{R}^3)) $ for
some $p$ and $q$ satisfying $\frac{2}{p}
+\frac{3}{q}=1$ and $p<\infty$, then it is regular. Recently, the limit case $p=\infty$
was established in the paper of
Escauriaza, Ser{\"e}gin and {\v{S}}ver{\'a}k
\cite{seregin}. We may impose similar conditions on derivatives of the velocity, the vorticity
or the pressure (see
Beale, Kato and Majda \cite{bkm}, Beir{\~a}o da Veiga \cite{beirao} and
Berselli and Galdi \cite{berselli}). Also, many other conditions
exist (e.g. see
Cheskidov and Shvydkoy \cite{Cheski},
Chan \cite{chan}
and
\cite{vas_bjorn}). \\
On the other hand, many efforts have been devoted to measuring the size
of the possible singular set.
This approach has been initiated by
Scheffer \cite{scheffer}. Then, Caffarelli, Kohn and Nirenberg \cite{ckn}
improved the result and showed that possible singular sets have zero one-dimensional Hausdorff measure for
a certain class of weak solutions (suitable weak
solutions)
satisfying the following additional inequality
\begin{equation}\label{suitable}
\partial_t \frac{|u|^2}{2} + \ebdiv (u \frac{|u|^2}{2}) + \ebdiv (u P) +
|\nabla u |^2 - \Delta\frac{|u|^2}{2} \leq 0
\end{equation} in the sense of distribution.
There are many other proofs of this fact (e.g. see Lin
\cite{lin}, \cite{vas:partial}
and Wolf \cite{wolf}). Similar criteria for interior points with other quantities
can be found in many places (e.g. see Struwe \cite{struwe},
Gustafson, Kang and Tsai \cite{gusta}, Ser{\"e}gin \cite{seregin2}
and Chae, Kang and Lee \cite{chae}).
Also, Robinson and Sadowski \cite{robinson} and Kukavica \cite{kukavica}
studied
box-counting dimensions
of
singular sets.\\
In this paper, our main concern is about space-time $L^p_{(t,x)}=L^p_tL^p_x$
estimates of higher derivatives for weak
solutions assuming only $L^2$
initial data.
$\nabla u\in L^{2}((0,\infty)\times\mathbb{R}^3)$ is obvious from the energy
inequality, and
simple interpolation gives $u\in L^{10/3}$.
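Indeed, by interpolation of Lebesgue spaces and the Sobolev inequality
$\|u(t)\|_{L^6(\mathbb{R}^3)}\leq C\|\nabla u(t)\|_{L^2(\mathbb{R}^3)}$,
\begin{equation*}
\int_0^\infty\|u(t)\|_{L^{10/3}}^{10/3}dt
\leq\int_0^\infty\|u(t)\|_{L^{2}}^{4/3}\|u(t)\|_{L^{6}}^{2}dt
\leq C\|u\|_{L^\infty(0,\infty;L^2)}^{4/3}\|\nabla u\|_{L^2((0,\infty)\times\mathbb{R}^3)}^{2},
\end{equation*}
and the right-hand side is controlled by the energy inequality.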
For second derivatives of weak solutions,
from standard parabolic regularization theory (see
Lady{\v{z}}enskaja, Solonnikov and
Ural$'$ceva
\cite{LSU}),
we know $\nabla^2 u \in L^{5/4}$ by considering
$(u\cdot\nabla)u$ as a source term.
With different ideas, Constantin \cite{peter} showed
$L^{\frac{4}{3}-\epsilon}$ for any small $\epsilon>0$ in the periodic setting,
and later Lions \cite{lions} improved it up to
weak-$L^{\frac{4}{3}}$ (or $ L^{\frac{4}{3},\infty}$)
by assuming $\nabla u_0$ lying in the space
of all bounded measures in $\mathbb{R}^3$.
They used natural structure of the equation with some interpolation technique.
On the other hand, Foia{\c{s}}, Guillop{\'e} and Temam \cite{foias_guillope_tem:higher} and Duff \cite{duff}
obtained other kinds of estimates for higher derivatives
of weak solutions while Giga and Sawada \cite{giga}
and Dong and Du \cite{dong} covered mild solutions. For asymptotic behavior,
we refer to Schonbek and Wiegner \cite{Scho_and_Wiegner}.\\
Recently in \cite{vas:higher}, it has been shown that, for any small
$\epsilon > 0$, any integer $d\geq1$ and any smooth solution $u$ on $(0,T)$,
we have bounds of $\nabla^d u$ in
$L_{loc}^{\frac{4}{d+1}-\epsilon}$,
which
depend only on $L^2$ norm of initial data
once we fix $\epsilon$, $d$ and the domain of integration.
It can be considered as a
natural extension of the result of Constantin \cite{peter} for higher derivatives.
But the idea is completely different in the sense that
\cite{vas:higher} used the Galilean invariance of transport part of
the equation and the partial regularity criterion in the version
of \cite{vas:partial}, which re-proved
the famous result of Caffarelli, Kohn and Nirenberg \cite{ckn} by using a parabolic
version of the De Giorgi method \cite{De_Giorgi}.
It is noteworthy that this method
gave full regularity to
the critical Surface Quasi-Geostrophic equation in
\cite{caf_vas}.
The limit
non-linear scaling
$p=\frac{4}{d+1}$
appears from
the following invariance of the Navier-Stokes scaling
$u_\lambda(t,x)=
\lambda u(\lambda^2 t,\lambda x)$:
\begin{equation}\label{best_scaling}
\|\nabla^d u_\lambda\|^p_{L^p}=\lambda^{-1}\|\nabla^d u\|^p_{L^p}.
\end{equation}
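To see this, for any exponent $p$ and any integer $d\geq1$, a change of variables gives
\begin{equation*}
\|\nabla^d u_\lambda\|^p_{L^p}
=\int\!\!\int\bigl|\lambda^{1+d}(\nabla^d u)(\lambda^2t,\lambda x)\bigr|^p dx\,dt
=\lambda^{(1+d)p-5}\,\|\nabla^d u\|^p_{L^p},
\end{equation*}
where the two norms are taken over a space-time cylinder and its rescaled image, and the exponent $(1+d)p-5$ equals $-1$ exactly when $p=\frac{4}{d+1}$.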
In this paper, our main result is better than
the above result of \cite{vas:higher} in the following three directions.
First, we achieve the limit case weak-$L^{\frac{4}{d+1}}$
(or $ L^{\frac{4}{d+1},\infty}$) as Lions \cite{lions} did for second derivatives.
Second, we make similar bounds for fractional derivatives as well as classical derivatives.
Last, we consider not only smooth solutions but also global-time weak solutions.
These three improvements will give us that
$\nabla^{3-\epsilon}u$, which is almost a third derivative of the weak solution, is locally integrable
on $(0,\infty)\times\mathbb{R}^3$. \\
Our precise result is the following:
\begin{thm}\label{main_thm}
There exist universal constants
$C_{d,\alpha}$ which depend only on integer $d\geq1$ and real $\alpha\in[0,2)$
with the following two properties $(I)$ and $(II)$:\\
(I)
Suppose that we have
a smooth solution $ u $ of \eqref{navier}
on $ (0,T)\times \mathbb{R}^3 $ for some $0<T\leq\infty$
with some initial data \eqref{initial_condition}. Then it satisfies
\begin{equation}\label{main_thm_eq}
\quad\|(-\Delta)^{\frac{\alpha}{2}}\nabla^d u\|
_{L^{p,\infty}(t_0,T;L^{p,\infty}(K))}
\leq C_{d,\alpha}\Big(\|u_0\|_{L^2(\mathbb{R}^3)}^{2}
+ \frac{|K|}{t_0}\Big)^{\frac{1}{p}}
\end{equation}
for any $t_0\in(0,T)$, any integer $d\geq 1$, any $\alpha\in[0,2)$
and any bounded open subset
$K$ of $ \mathbb{R}^3$, where $p = \frac{4}{d+\alpha+1}$
and $|\cdot|$ denotes the Lebesgue measure in $\mathbb{R}^3$. \\
(II) For any initial data \eqref{initial_condition}, we can construct a
suitable weak solution $u$ of \eqref{navier} on $ (0,\infty)\times \mathbb{R}^3 $ such that
$(-\Delta)^{\frac{\alpha}{2}}\nabla^d u$
is locally integrable in $(0,\infty)\times \mathbb{R}^3$
for $d=1,2$ and for $\alpha\in[0,2)$ with $(d+\alpha)<3$.
Moreover,
the estimate \eqref{main_thm_eq} holds
with $T=\infty$
under the same setting of the above part $(I)$
as long as $(d+\alpha)<3$.
\end{thm}
Let us begin with some simple remarks.
\begin{rem}\label{frac_rem}
For any suitable weak solution $u$,
we can define $(-\Delta)^{\alpha/2}\nabla^d u$
in the sense of distributions $\mathcal{D}^\prime$ for any integer $d\geq0$ and for any real
$\alpha\in[0,2)$:
\begin{equation}\label{fractional_distribution}
\langle(-\Delta)^{\alpha/2}\nabla^d u;\psi \rangle_{\mathcal{D}^\prime,\mathcal{D}}
= (-1)^{d}\int_{(0,\infty)\times\mathbb{R}^3}u\cdot(-\Delta)^{\alpha/2}\nabla^d \psi\mbox{ } dxdt
\end{equation} for any $\psi\in \mathcal{D}=C_c^\infty((0,\infty)\times\mathbb{R}^3)$
where
$(-\Delta)^{\alpha/2}$ in the right hand side is
the traditional
fractional Laplacian in $\mathbb{R}^3$ defined by the Fourier transform.
Note that
$(-\Delta)^{\alpha/2}\nabla^d \psi$ lies in $L_t^\infty L^2_x$.
Thus, this definition from \eqref{fractional_distribution} makes sense due to
$u\in L_t^\infty L^2_x$. Note also $(-\Delta)^{0}=Id$. For more general extensions
of this fractional Laplacian operator, we
recommend
Silvestre \cite{silve:fractional}.
\end{rem}
\begin{rem}\label{rmk_dissipation of energy}
Since we impose only \eqref{initial_condition} on $u_0$, the estimate \eqref{main_thm_eq}
is a
(quantitative)
regularization result for higher derivatives.
Also, in the proof, we will see that $\|u_0\|_{L^2(\mathbb{R}^3)}^{2}$ in \eqref{main_thm_eq} can be
relaxed to $\|\nabla u\|_{L^2((0,T)\times\mathbb{R}^3)}^{2}$. Thus
it says that any (higher) derivatives can be controlled by having only
$L^2$ estimate of dissipation of energy.
\end{rem}
\begin{rem}\label{weak_L_p}
The result of the part $(I)$ for $\alpha=0$ extends the result of the previous paper \cite{vas:higher}
because for any $0<q<p<\infty$ and any bounded subset $\Omega\subset\mathbb{R}^n$, we have
\begin{equation*}
\|f\|_{L^{q}(\Omega)}\leq C\cdot
\|f\|_{L^{p,\infty}(\Omega)}
\end{equation*} where C depends only
on $p$, $q$, dimension $n$ and Lebesgue measure of $\Omega$ (e.g. see Grafakos \cite{grafakos}).
\end{rem}
\begin{rem}
The assumption ``smoothness'' in the part $(I)$ is about pure differentiability.
For example, the result of the part $(I)$ for $d\geq1$
and $\alpha=0$
holds once we know that $u$ is $d$-times differentiable. In addition,
constants in \eqref{main_thm_eq} are independent of any possible blow-up time $T$.
\end{rem}
\begin{rem}
$p=4/(d+\alpha+1)$ is a very interesting relation as mentioned before. Due to this $p$, the estimate \eqref{main_thm_eq}
is
a non-linear estimate while many other \emph{a priori} estimates are linear.
Also, from the part $(II)$ when $(d+\alpha)$ is very close to $3$,
we can see that almost third derivatives of weak solutions are locally integrable.
Moreover, suppose that the part $(II)$ were true for $d=\alpha=0$, even though
we can NOT prove it here.
This would imply that this weak solution $u$ could lie in $L^{4,\infty}$, which is beyond the
best known
estimate $u\in L^{10/3}$ from $L^2$ initial data. \\
\end{rem}
Before presenting the main ideas, we want to mention that
Caffarelli, Kohn and Nirenberg \cite{ckn}
contains two different kinds of
local regularity criteria. The first one is quantitative, and it says that
if $\| u\|_{L^3(Q(1))}$ and $\| P\|_{L^{3/2}(Q(1))}$ are small,
then $u$ is bounded by some universal constant in $Q({1/2})$.
The second one says that
$u$ is locally bounded near the origin
if $\limsup_{r\rightarrow 0}$ $r^{-1}\|\nabla u\|^2_{L^2(Q(r))}$ is
small. So it is qualitative
in the sense that
the conclusion says not that $u$ is bounded by a universal constant but
that $\sup |u|$ for some local neighborhood is not infinite.\\
On the other hand,
there is a different quantitative local regularity criterion in
\cite{vas:partial}, which showed
that for any $p>1$, there exists $\epsilon_p$ such that
\begin{equation}\label{vas:partial_result}
\mbox{if}\quad
\|u\|_{L^\infty_tL^2_x(Q_1)}+\|\nabla u\|_{L^2_tL^2_x(Q_1)}+
\|P\|_{L^p_tL^1_x(Q_1)}\leq \epsilon_p, \mbox{ then } |u|\leq 1
\mbox { in } Q_{1/2}
\end{equation} Recently, this criterion was used in
\cite{vas:higher} in order to obtain higher derivative estimates.
The main proposition in \cite{vas:higher} says that
if both $ \||\nabla u|^2+|\nabla^2 P|\|_{L^1(Q(1))}$
and some other
quantity about pressure
are small, then $u$ is bounded by $1$
at the origin
once $u$ has a mean zero property in space.
We can observe that $\|\nabla u\|^2_{L^2(Q(1))}$ and $\|\nabla^2 P\|_{L^1(Q(1))}$
have the same best scaling like \eqref{best_scaling} among
all the other quantities which we can obtain from $L^2$ initial data.
However, the other quantity about pressure
has a slightly worse scaling. That is the reason that
the limit case $L^{\frac{4}{d+1},\infty}$ has been missing in \cite{vas:higher}.\\
Here are the main ideas of proof.
First,
in order to obtain the missing limit case
$ L^{\frac{4}{d+1},\infty}$, we will see that it requires an equivalent estimate
of \eqref{vas:partial_result} for $p=1$.
Here we extend this result up to $ p=1$ for some
approximation of the Navier-Stokes (see the proposition
\ref{partial_problem_II_r}).
To obtain this first goal, we will introduce a new pressure decomposition
(see the lemma \ref{lem_pressure_decomposition}),
which will be used in the De Giorgi-type argument. This allows us to
remove the pressure term with bad scaling
in \cite{vas:higher}.
Then,
by using the Galilean invariance property and some blow-up technique with the standard Navier-Stokes scaling,
we can carry out our local study in order to obtain a better
version of a quantitative partial regularity criterion
for some
approximation of the Navier-Stokes
(see the proposition
\ref{local_study_thm}).
As a result, we can prove the $ L^{\frac{4}{d+1},\infty}$ estimate
for classical derivatives (the case $\alpha=0$).\\
Second,
the result for fractional derivatives ($0<\alpha<2$ case) is not obvious at all
because there is no proper interpolation theorem
for $L_{loc}^{p,\infty}$ spaces.
For example, due to the non-locality of the fractional Laplacian operator, the fact $\nabla^2 u\in L_{loc}^{\frac{4}{3},\infty}$
with $\nabla^3 u\in L_{loc}^{1,\infty}$
does not imply the case of fractional derivatives even if we assume $u$ is smooth.
Moreover, even though we assume that
$\nabla^2 u\in L^{\frac{4}{3}}(\mathbb{R}^3)$
and $\nabla^3 u\in L^{1}(\mathbb{R}^3)$ which we can NOT prove here,
the standard interpolation theorem still requires $L^p(\mathbb{R}^3)$ for some $p>1$ (we refer
Bergh and L{\"o}fstr{\"o}m \cite{bergh}).
To overcome
the difficulty,
we will use the Maximal functions
of $u$, which capture its long-range behavior.
Unfortunately, second derivatives of pressure, which lie in the Hardy space $\mathcal{H}
\subset L^1(\mathbb{R}^3)$
from
Coifman, Lions, Meyer and Semmes \cite{clms},
do not have an integrable Maximal function since
the Maximal operator is not bounded on $L^1$. In order to handle non-local parts of pressure,
we will use some
property of Hardy space,
which says that some integrable functions play
a role similar to that of the Maximal function (see \eqref{hardy_property}).\\
Finally, the result $(II)$ for weak solutions comes from
the specific approximation of the Navier-Stokes equations that
Leray \cite{leray} used in order to construct a global-time weak solution:
$\partial_tu_n+((u_n*\phi_{(1/n)})\cdot\nabla)u_n+ \nabla P_n -\Delta u_n=0$ and $
\ebdiv u_n=0$ where $\phi$ is a fixed mollifier in $\mathbb{R}^3$,
and $\phi_{(1/n)}$ is defined by $\phi_{(1/n)}(\cdot)=n^3\phi(n\mbox{ }\cdot)$.
The main advantage for us of adopting this approximation is that
it has a strong existence theory of global-time smooth solutions
$u_n$ for each $n$, and it is well-known that there exists
a suitable weak solution $u$ as a weak limit.
In fact, for any integer $d\geq1$
and for any $\alpha\in[0,2)$,
we will obtain bounds for $u_n$ in the form of \eqref{main_thm_eq}
with $T=\infty$,
which is uniform in $n$.
Since $p=4/(d+\alpha+1)$ is greater than $1$ for the case $(d+\alpha)<3$,
we conclude that $(-\Delta)^{\frac{\alpha}{2}}
\nabla^d u $ exists as a locally integrable function, by weak compactness of $L^p$ for $p>1$. \\
\noindent However, to prove \eqref{main_thm_eq} uniformly for the approximation
is nontrivial because our proof is based on local study while
the approximation is not
scaling-invariant with the standard Navier-Stokes scaling:
after the scaling, the advection velocity $u*\phi_{(1/n)}$
depends on the original velocity $u$ more non-locally than before.
Moreover, when we consider the case of fractional derivatives of weak solutions, it requires
even Maximal functions of Maximal functions to handle the non-local parts of
the advection velocity,
which depends on the original velocity
non-locally.\\
The paper is organized as follows. In the next section, preliminaries with
the main propositions
\ref{partial_problem_II_r} and
\ref{local_study_thm}
will be introduced.
Then we prove those propositions \ref{partial_problem_II_r} and \ref{local_study_thm}
in sections \ref{proof_partial_prob_II_r} and
\ref{proof_local study},
respectively.
Finally we will explain how
the proposition \ref{local_study_thm}
implies the part $(II)$ of the theorem
\ref{main_thm}
for $\alpha=0$ and for $0<\alpha<2$ in subsections
\ref{prof_main_thm_II_alpha_0}
and
\ref{prof_main_thm_II_alpha_not_0} respectively while
the part $(I)$ will be covered in the subsection \ref{proof_main_thm_I}.
After that,
the appendix contains some missing proofs of
technical lemmas.\\
\section{Preliminaries, definitions and main propositions}\label{prelim}
We begin this section by fixing some notation and recalling
some well-known results from analysis. After that we will present
definitions of two approximations and two main propositions. In this paper, any derivatives, convolutions and Maximal functions
are with respect to space variable $x\in\mathbb{R}^3$ unless time variable
is specified.\\
\noindent \textbf{Notations for general purpose}\\
We define
$B(r) =\mbox{ the ball in } \mathbb{R}^3$
centered at the origin with radius $r$,
$Q(r) =(-r^2,0)\times B(r)$, the cylinder in $ \mathbb{R}\times\mathbb{R}^3$ and
$B(x;r) =\mbox{ the ball in } \mathbb{R}^3$
centered at x with radius $r$.\\
To the end of this paper, we fix $\phi \in C^{\infty}(\mathbb{R}^3)$ satisfying:\\
\begin{equation*}\begin{split}
\int_{\mathbb{R}^3} &\phi(x)dx = 1,\quad
supp(\phi) \subset B(1),\quad
0 \leq \phi \leq 1\\
\end{split}\end{equation*}
\begin{equation*}\begin{split}
\phi(x)&=1 \mbox{ for } |x|\leq \frac{1}{2} \quad\mbox{ and }\quad
\phi \mbox{ is radial. }
\end{split}\end{equation*}
For real number $r > 0 $ , we define functions $\phi_r \in C^{\infty}(\mathbb{R}^3)$
by $\phi_r(x) =\frac{1}{r^3}\phi(\frac{x}{r})$. Moreover, for $r=0$,
we define $\phi_r=\phi_0=\delta_0$ as the Dirac-delta function, which implies that
the convolution between $\phi_0$ and any function becomes
the function itself.
From Young's inequality for convolutions, we can observe
\begin{equation}\label{young}
\|f*\phi_r\|_{L^p(B(a))}\leq\|f\|_{L^p(B(a+r))}
\end{equation} due to $supp(\phi_r)\subset B(r)$ for any $p\in[1,\infty]$,
for any $f\in L^p_{loc}$ and for any $a,r>0$.\\
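Indeed, for $x\in B(a)$ the value $(f*\phi_r)(x)$ involves $f$ only on $B(a+r)$, so that
\begin{equation*}
\|f*\phi_r\|_{L^p(B(a))}
=\|(f\mathbf{1}_{B(a+r)})*\phi_r\|_{L^p(B(a))}
\leq\|\phi_r\|_{L^1(\mathbb{R}^3)}\|f\mathbf{1}_{B(a+r)}\|_{L^p(\mathbb{R}^3)}
=\|f\|_{L^p(B(a+r))}.
\end{equation*}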
\noindent \textbf{ $L^p$, weak-$L^p$ and Sobolev spaces $W^{n,p}$}\\
Let $K$ be an open subset of $\mathbb{R}^n$.
For $0<p<\infty $, we define $L^{p}(K)$ by the standard way
with (quasi) norm $\|f\|_{L^{p}(K)}
= (\int_K|f|^pdx)^{(1/p)}$.
From the Banach--Alaoglu theorem,
any sequence which is bounded in $L^{p}(K)$ for $p\in(1,\infty)$
has a weak limit from some subsequence due to the weak-compactness.\\
Also, for $0<p<\infty $, the weak-$L^p(K)$ space
(or $L^{p,\infty}(K)$) is defined by \\
\begin{equation*}\begin{split}
L^{p,\infty}(K) = \{ f \mbox{ measurable in } K\subset\mathbb{R}^n
\quad: \sup_{\alpha>0}\Big(\alpha^p\cdot|\{|f|>\alpha\}\cap K|\Big)<\infty\}
\end{split}\end{equation*} with (quasi) norm $\|f\|_{L^{p,\infty}(K)}
= \sup_{\alpha>0}\Big(\alpha\cdot|\{|f|>\alpha\}\cap K|^{1/p}\Big)$.
From Chebyshev's inequality, we have $\|f\|_{L^{p,\infty}(K)}\leq\|f\|_{L^{p}(K)}$ for any $0<p<\infty$.
Also, for $0<q<p<\infty$, $L^{p,\infty}(K)\subset L^{q}(K)$
once $K$ is bounded (see Remark \ref{weak_L_p} above).\\
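For completeness, the inequality $\|f\|_{L^{p,\infty}(K)}\leq\|f\|_{L^{p}(K)}$ follows from
\begin{equation*}
\alpha^p\cdot|\{|f|>\alpha\}\cap K|
\leq\int_{\{|f|>\alpha\}\cap K}|f|^p dx
\leq\|f\|_{L^{p}(K)}^p
\qquad\mbox{for every } \alpha>0.
\end{equation*}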
For any integer $n\geq0$ and for any $p\in[1,\infty]$, we denote $W^{n,p}(\mathbb{R}^3)$ and $W^{n,p}(B(r))$
as the standard Sobolev spaces for the whole space $\mathbb{R}^3$ and for any ball $B(r)$ in $\mathbb{R}^3$,
respectively.\\
\noindent \textbf{The Maximal function $\mathcal{M}$ and the Riesz transform $\mathcal{R}_j$}\\
The Maximal function $\mathcal{M}$ in $\mathbb{R}^d$ is
defined by the following standard way:
\begin{equation*}\begin{split}
\mathcal{M}(f)(x)=&\sup_{r>0}\frac{1}{|B(r)|}\int_{B(r)}|f(x+y)|dy.
\end{split}\end{equation*}
Also, we can express this Maximal operator as a
supremum of convolutions: $\mathcal{M}(f)=C\sup_{\delta>0}\Big(\chi_\delta *|f|\Big)$
where $\chi=\mathbf{1}_{\{|x|<1\}}$ is the characteristic function of the unit ball,
and $\chi_\delta(\cdot)=(1/\delta^3)\chi(\cdot/\delta)$.
One of properties of the Maximal function is that
$\mathcal{M}$ is bounded
from $L^{p}(\mathbb{R}^d)$ to $L^{p}(\mathbb{R}^d)$
for $p\in(1,\infty]$ and
from $L^{1}(\mathbb{R}^d)$ to $L^{1,\infty}(\mathbb{R}^d)$.
In this paper,
we denote $\mathcal{M}$ and $\mathcal{M}^{(t)}$ as the Maximal functions
in $\mathbb{R}^3$ and in
$\mathbb{R}^1$, respectively. \\
For $1\leq j\leq 3$, the Riesz Transform $\mathcal{R}_j$ in $\mathbb{R}^3$ is defined by:
\begin{equation*}\begin{split}
\widehat{\mathcal{R}_j(f)}(x)=\mathit{i}\frac{x_j}{|x|}\hat{f}(x)
\end{split}\end{equation*}
for any $f\in\mathcal{S}$ (the Schwartz space).
Moreover we can extend this definition to functions in $L^{p}(\mathbb{R}^3)$
for $1<p<\infty$, and it is well known that
$\mathcal{R}_j$ is bounded in $L^{p}$ for the same range of $p$.\\
\noindent \textbf{The Hardy space $\mathcal{H}$}\\
The Hardy space $\mathcal{H}$ in $\mathbb{R}^3$ is defined by
\begin{equation*}
\mathcal{H}(\mathbb{R}^3)=\{f\in L^1(\mathbb{R}^3)\quad: \quad\sup_{\delta>0}|
\mathcal{P}_\delta * f|\in L^1(\mathbb{R}^3) \}
\end{equation*} where $\mathcal{P}=
C
(1+|x|^2)^{-2}$ is the
Poisson kernel and $\mathcal{P}_\delta$ is defined
by $\mathcal{P}_\delta(\cdot)=\delta^{-3}\mathcal{P}(\cdot/\delta)$.
A norm of
$\mathcal{H}$ is defined by $L^1$ norm of $\sup_{\delta>0}|
\mathcal{P}_\delta * f|$. Thus $\mathcal{H}$
is a subspace of $L^1(\mathbb{R}^3)$ and
$\|f\|_{L^{1}(\mathbb{R}^3)}
\leq\|f\|_{\mathcal{H}(\mathbb{R}^3)}$ for any $f\in\mathcal{H}$.
Moreover,
the Riesz Transform is bounded from $\mathcal{H}$ to $\mathcal{H}$.\\
\noindent One important application of the Hardy space is compensated compactness
(see Coifman, Lions, Meyer and Semmes \cite{clms}). Especially, it says that if $E,B\in L^2(\mathbb{R}^3)$
and $curlE=\ebdiv B=0$ in distribution, then
$E\cdot B\in\mathcal{H}(\mathbb{R}^3)$ and we have
\begin{equation*}
\|E\cdot B\|_{\mathcal{H}(\mathbb{R}^3)}\leq
C\cdot\|E\|_{L^2(\mathbb{R}^3)}\cdot\|B\|_{L^2(\mathbb{R}^3)}
\end{equation*} for some universal constant $C$.
In order to obtain some regularity of the second derivatives of the pressure,
we can combine compensated compactness
with boundedness of the Riesz transform in $\mathcal{H}(\mathbb{R}^3)$.
For example, if $u$ is a weak solution
of the Navier-Stokes \eqref{navier},
then a corresponding pressure $P$ satisfies \begin{equation}\label{pressure_hardy}
\|\nabla^2 P\|_{L^1(0,\infty;\mathcal{H}(\mathbb{R}^3))}\leq
C\cdot\|\nabla u\|_{L^2(0,\infty;L^2(\mathbb{R}^3))}^2
\end{equation} (see Lions \cite{lions} or the lemma 7 in \cite{vas:higher}).\\
\begin{comment}
Or if $u$ is a solution of \eqref{navier_Problem I-n} in (Problem I-n)
then \begin{equation}\begin{split}\label{pressure_hardy_Problem I-n}
\|\nabla^2 P\|_{L^1(0,\infty;\mathcal{H}(\mathbb{R}^3))}&\leq
C\|\Delta P\|_{L^1(0,\infty;\mathcal{H}(\mathbb{R}^3))}\\
&\leq C\cdot\|\nabla (u * \phi_{1/n})\|_{L^2(0,\infty;L^2(\mathbb{R}^3))}
\|\nabla u\|_{L^2(0,\infty;L^2(\mathbb{R}^3))}\\
&\leq C\cdot\|\nabla u\|_{L^2(0,\infty;L^2(\mathbb{R}^3))}^2.\\
\end{split}\end{equation} from $-\Delta P=\ebdiv\ebdiv
\mathcal{B}ig( (u * \phi_{1/n})\otimes u$\mathcal{B}ig).\\
\end{comment}
\noindent Now it is well known that if we replace the
Poisson kernel $\mathcal{P}$ with any function
$\mathcal{G}\in C^\infty(\mathbb{R}^3)$ with compact support, then we have a constant
$C$ depending only on $\mathcal{G}$ such that
\begin{equation}\begin{split}\label{hardy_property}
\| \sup_{\delta>0}|\mathcal{G}_{\delta}* f|
\|_{L^{1}(\mathbb{R}^3)}
\leq C\| \sup_{\delta>0}|\mathcal{P}_{\delta}* f|
\|_{L^{1}(\mathbb{R}^3)}
= C\|f\|_{\mathcal{H}(\mathbb{R}^3)}
\end{split}\end{equation}
where $\mathcal{G}_\delta(\cdot)=\mathcal{G}(\cdot/\delta)/\delta^3$.
(see Fefferman and Stein \cite{fefferman} or
see Stein \cite{ste:harmonic}, Grafakos \cite{grafakos} for modern texts).
Due to the supremum and the convolution in \eqref{hardy_property},
we can say that
even though the Maximal function
$\sup_{\delta>0}\Big(\chi_\delta *|f|\Big)$ of any non-trivial Hardy space function $f$
is not integrable, there exist at least
integrable functions $\Big(\sup_{\delta>0}\Big|\mathcal{G}_{\delta}* f\Big|\Big)$, which can capture non-local data as
Maximal functions do.
However, note the position of the absolute value sign in \eqref{hardy_property},
which is outside of the convolution while
it is inside of the convolution
for the Maximal function.
It implies that
\eqref{hardy_property} is slightly weaker than the Maximal function
in the sense of controlling non-local data.
This weakness is the reason that we introduce
certain definitions of $\zeta$ and $h^{\alpha}$
in the following.\\
\noindent \textbf{Some notations which will be useful for fractional derivatives
$ {(-\Delta)^{{\alpha}/{2}}}$}\\
The following two definitions of $\zeta$ and $h^{\alpha}$ will be used
only in the proof for fractional derivatives.
We define $\zeta$ by $\zeta(x)=\phi(\frac{x}{2})-\phi(x)$.
Then we have
\begin{equation}\begin{split}\label{property_zeta}
& \zeta \in C^{\infty}(\mathbb{R}^3),\quad supp(\zeta)\subset B(2),
\quad\zeta(x)=0 \mbox{ for } |x|\leq \frac{1}{2} \\
&\mbox{ and }\sum_{j=k}^{\infty}\zeta(\frac{x}{2^j})=1 \mbox{ for }|x|\geq 2^{k}
\mbox{ for any integer } k.
\end{split}\end{equation}
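The last property is a telescoping sum: since $\zeta(x/2^j)=\phi(x/2^{j+1})-\phi(x/2^{j})$,
\begin{equation*}
\sum_{j=k}^{J}\zeta\Bigl(\frac{x}{2^j}\Bigr)
=\phi\Bigl(\frac{x}{2^{J+1}}\Bigr)-\phi\Bigl(\frac{x}{2^{k}}\Bigr)
\longrightarrow 1
\qquad\mbox{as } J\to\infty,
\end{equation*}
because $\phi(x/2^{k})=0$ for $|x|\geq 2^{k}$ and $\phi(x/2^{J+1})=1$ once $2^{J+1}\geq 2|x|$.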
\noindent In addition, we define function $h^{\alpha}$
for $\alpha>0$
by
$h^{\alpha}(x)=\zeta(x)/|x|^{3+\alpha}$.
Also we define $(h^{\alpha})_{\delta}$ and
$(\nabla^{d}{h^{\alpha})_\delta}$ by
$(h^{\alpha})_{\delta}(x)=\delta^{-3}h^{\alpha}(x/\delta)$ and
$(\nabla^{d}{h^{\alpha})_\delta}(x)
=\delta^{-3}(\nabla^{d}{h^{\alpha})}(x/\delta)$
for $\delta>0$ and for positive integer $d$, respectively.
Then they satisfy
\begin{equation}\begin{split}\label{property_h}
& (h^{\alpha})_{\delta} \in C^{\infty}(\mathbb{R}^3),
\quad supp((h^{\alpha})_{\delta})\subset B(2\delta)-
B(\delta/2), \\
&\mbox{ and }\frac{1}{|x|^{3+\alpha}}\cdot\zeta
(\frac{x}{2^j})=\frac{1}{(2^j)^{\alpha }} \cdot (h^{\alpha})_{2^j}(x)
\mbox{ for any integer } j.
\end{split}\end{equation}
\ \\
\noindent \textbf{The definition of the fractional Laplacian
$ {(-\Delta)^{{\alpha}/{2}}}$}\\
For $-3<\alpha\leq2$
and for $f\in \mathcal{S}(\mathbb{R}^3)$
(the Schwartz space), $ (-\Delta)^{\frac{\alpha}{2}}f$
is defined by the Fourier transform:
\begin{equation}\label{fractional_fourier}
\widehat{(-\Delta)^{\frac{\alpha}{2}}f}(\xi)=|\xi|^\alpha \hat{f}(\xi)
\end{equation}
Note that
$(-\Delta)^{0}=Id$.
Especially, for $\alpha\in(0,2)$, the fractional Laplacian can also be defined
by the singular integral for any $f\in\mathcal{S}$:
\begin{equation}\label{fractional_integral}
(-\Delta)^{\frac{\alpha}{2}}f(x)=C_{\alpha}\cdot P.V.\int_{\mathbb{R}^3}
\frac{f(x)-f(y)}{|x-y|^{3+\alpha}}dy.\\
\end{equation}
We introduce two notions of approximations to the Navier-Stokes equations.
The first one, (Problem I-n), is the approximation Leray \cite{leray} used,
while the second one, (Problem II-r), will be used in the local study after we apply a certain scaling
to (Problem I-n).\\
\noindent \textbf{Definition of {(Problem I-n)}:
the first approximation
to Navier-Stokes}\\
\begin{defn}\label{Problem I-n} Let $n\geq1$ be either an integer or the infinity $\infty$, and let $0<T\leq \infty$.
Suppose that $u_0$ satisfies \eqref{initial_condition}.
We say that $(u,P)\in [C^{\infty}\big((0,T)\times\mathbb{R}^3\big)]^2$
is a solution of {(Problem I-n)} on $(0,T)$ for the data $u_0$
if it satisfies
\begin{equation}\label{navier_Problem I-n}
\begin{split}
\partial_tu+((u*\phi_{{\frac{1}{n}}})\cdot\nabla)u+ \nabla P -\Delta u&=0\\
\ebdiv u&=0 \quad t\in ( 0,T ),\quad x\in \mathbb{R}^3
\end{split}
\end{equation}
and
\begin{equation}\label{initial_condition_Problem I-n}
u(t)\rightarrow u_0*\phi_{\frac{1}{n}} \mbox{ in } L^2 \mbox{-sense as }
t \rightarrow 0.
\end{equation}
\end{defn}
\begin{rem}
When $n=\infty$, \eqref{navier_Problem I-n} is the Navier-Stokes on $(0,T)\times\mathbb{R}^3$ with initial value $u_0$.
\end{rem}
\begin{rem}\label{remark_leray}
If $n$ is not the infinity but a positive integer, then for any given $u_0$ of \eqref{initial_condition}, we have existence
and uniqueness theory of (Problem I-n) on $(0,\infty)$ with the energy equality
\begin{equation}\label{energy_eq_Problem I-n}
\|u(t)\|_{L^2(\mathbb{R}^3)}^2+
2\|\nabla u\|^2_{L^2(0,t;L^2(\mathbb{R}^3))}
= \|u_0*\phi_{\frac{1}{n}}\|_{L^2(\mathbb{R}^3)}^2.\\
\end{equation} for any $t<\infty$
and it is well-known that we can extract a sub-sequence which
converges to a suitable weak
solution $u$ of \eqref{navier} and \eqref{suitable} with the initial data
$u_0$ of \eqref{initial_condition}
by limiting procedure on a sequence of solutions of (Problem I-n)
(see Leray \cite{leray}, or see Lions \cite{lions}, Lemari{\'e}-Rieusset \cite{lemarie} for modern texts).
\end{rem}
\begin{rem}
As mentioned in the introduction section, we can observe that this notion (Problem I-n)
is not invariant under the standard Navier-Stokes scaling $u(t,x)\rightarrow
\epsilon u(\epsilon^2 t,\epsilon x)$
due to the advection velocity $(u*\phi_{1/n})$
unless $n$ is the infinity. \\
\end{rem}
\noindent \textbf{Definition of {(Problem II-r)}:
the second approximation
to Navier-Stokes}\\
\begin{defn}\label{problem II-r} Let $0\leq r<\infty$ be real.
We say that $(u,P)\in [C^{\infty}\big((-4,0)\times\mathbb{R}^3\big)]^2$ is a solution of {(Problem II-r)}
if it satisfies
\begin{equation}\label{navier_Problem II-r}
\begin{split}
\partial_tu+(w
\cdot\nabla)u+ \nabla P -\Delta u&=0\\
\ebdiv u&=0, \quad t\in (-4,0 ), \quad x\in \mathbb{R}^3
\end{split}
\end{equation}
where $w$ is the difference of two functions:
\begin{equation}\begin{split}\label{w_Problem II-n}
w(t,x)
&= w^\prime(t,x) - w^{\prime\prime}(t),
\quad t\in (-4,0 ), x\in \mathbb{R}^3 \\
\end{split}\end{equation} which are defined from $u$ in the following way:
\begin{equation*}\begin{split}
w^\prime(t,x)&=(u*\phi_{r})(t,x) \quad\mbox{ and } \quad
w^{\prime\prime}(t)=\int_{\mathbb{R}^3}\phi(y)(u*\phi_{r})(t,y)dy.
\end{split}\end{equation*}
\end{defn}
\begin{rem}
This notion of {(Problem II-r)} gives us the mean zero property for
the advection velocity $w$: $\int_{\mathbb{R}^3}\phi(x)w(t,x)dx = 0 $ on $(-4,0)$ (see the short verification after this remark). Also, this $w$ is divergence free by definition.
Moreover, by multiplying $u$ to \eqref{navier_Problem II-r},
we have
\begin{equation}\label{suitable_Problem II-r}
\partial_t \frac{|u|^2}{2} + \ebdiv (w \frac{|u|^2}{2}) + \ebdiv (u P) +
|\nabla u |^2 - \Delta\frac{|u|^2}{2}= 0\\
\end{equation} in the classical sense,
because our definition requires $u$ to be $C^{\infty}$.
\end{rem}
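For the reader's convenience, the mean zero property claimed in the remark above can be checked in one line; here we only use the normalization $\int_{\mathbb{R}^3}\phi(y)dy=1$ of the fixed function $\phi$ (an assumption carried over from its earlier definition) together with the fact that $w^{\prime\prime}(t)$ does not depend on $x$:
\begin{equation*}
\int_{\mathbb{R}^3}\phi(x)w(t,x)dx
=\int_{\mathbb{R}^3}\phi(x)(u*\phi_r)(t,x)dx
-w^{\prime\prime}(t)\int_{\mathbb{R}^3}\phi(x)dx
=w^{\prime\prime}(t)-w^{\prime\prime}(t)=0.
\end{equation*}
Similarly, $\ebdiv w=\ebdiv(u*\phi_r)=(\ebdiv u)*\phi_r=0$, which is the divergence free property mentioned above.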
\begin{rem}
We will introduce a specially designed $\epsilon$-scaling
which serves as a bridge between (Problem I-n) and (Problem II-r)
(it can be found in \eqref{special designed scaling}).
This scaling
is based on the Galilean invariance in order to obtain
the mean zero property for the velocity $u$:
$\int_{\mathbb{R}^3}\phi(x)u(t,x)dx = 0 $ on $(-4,0)$.
Moreover, after this $\epsilon$-scaling is applied to solutions of (Problem I-n),
the resulting functions
satisfy not the conditions of (Problem II-$\frac{1}{n}$) but
those of (Problem II-$\frac{1}{n\epsilon}$) (see \eqref{special designed scaling result}).
These facts will be
stated precisely in the section \ref{proof_main_thm_II}.
\end{rem}
\begin{rem}
When $r=0$, the equation \eqref{navier_Problem II-r} is
the Navier-Stokes on $(-4,0)\times\mathbb{R}^3$
once
we assume the mean zero property for $u$.\\
\end{rem}
Now we present two main local-study propositions which require
the notion of (Problem II-r). They are partial regularity theorems
for solutions of (Problem II-r).
The main difficulty in proving these two propositions is that
$\bar{\eta}$ and $\bar\delta>0$ must be independent of $r$ in $[0,\infty)$.
We will prove this independence very carefully; it is the heart of the sections
\ref{proof_partial_prob_II_r} and
\ref{proof_local study}.\\
\noindent \textbf{The first local study proposition
for (Problem II-r)}\\
The following is a quantitative version of the partial regularity theorem,
which extends that of \cite{vas:partial} up to $p=1$. The proof
is based on the De Giorgi iteration together with a new pressure decomposition,
the lemma \ref{lem_pressure_decomposition}, which will appear later.
\begin{prop}\label{partial_problem_II_r}
There exists a $\bar\delta>0$ with the following property:\\
If $u$ is a solution of (Problem II-r) for some $0\leq r<\infty$ verifying both
\begin{equation*}\begin{split}
&\| u\|_{L^{\infty}(-2,0;L^{2}(B(\frac{5}{4})))}+
\|P\|_{L^1(-2,0;L^{1}(B(1)))}+\| \nabla u\|_{L^{2}(-2,0;L^{2}(B(\frac{5}{4})))}
\leq \bar{\delta}\\
\end{split}\end{equation*}
\begin{equation*}\begin{split}
\mbox{ and }\quad\quad&\| \mathcal{M}(|\nabla u|)
\|_{L^{2}(-2,0;L^{2}(B(2)))}\leq \bar{\delta},
\end{split}\end{equation*}
then we have
\begin{equation*}
|u(t,x)|\leq 1 \mbox{ on } [-\frac{3}{2},0]\times B(\frac{1}{2}).
\end{equation*}\\
\end{prop}
The above proposition, whose proof will appear in the section \ref{proof_partial_prob_II_r},
contains two badly scaling terms,
$\| u\|_{L_t^{\infty}L_x^{2}}$ and $\|P\|_{L_t^1L_x^{1}}$,
while the following proposition \ref{local_study_thm} does not contain them.
Instead, the proposition \ref{local_study_thm} assumes
the mean-zero property on $u$ together with additional terms.
We will see later that these additional terms have
the best scaling, like $|\nabla u|^2$ (see also \eqref{best_scaling}).\\
\noindent \textbf{The second local study proposition
for (Problem II-r)}\\
\begin{prop}\label{local_study_thm}
There exists a $\bar{\eta}>0$ and there exist
constants $C_{d,\alpha}$ depending only on $d$ and $\alpha$
with the following property:\\
If $u$ is a solution of (Problem II-r) for some $0\leq r<\infty$ verifying both
\begin{equation}\label{local_study_condition1}
\int_{\mathbb{R}^3}\phi(x)u(t,x)dx = 0
\quad \mbox{ for } t\in(-4,0) \mbox{ and}\\
\end{equation}
\begin{equation}\begin{split}\label{local_study_condition2}
&\int_{-4}^{0}\int_{B(2)}\Big(|\nabla u|^2(t,x)+
|\nabla^2 P|(t,x)+|\mathcal{M}(|\nabla u|)|^2(t,x)\Big)dxdt\leq \bar{\eta},\\
\end{split}\end{equation}
then $|\nabla^d u|\leq C_{d,0}$ on
$Q(\frac{1}{3})=(-(\frac{1}{3})^2,0)\times B(\frac{1}{3})$ for every integer $d\geq 0$.\\
Moreover if we assume further
\begin{equation}\begin{split}\label{local_study_condition3}
&\int_{-4}^{0}\int_{B(2)}
\Big(
|\mathcal{M}(\mathcal{M}(|\nabla u|))|^2+
|\mathcal{M}(|\mathcal{M}(|\nabla u|)|^q)|^{2/q}\\+
&|\mathcal{M}(|\nabla u|^q)|^{2/q}+
\sum_{m=d}^{d+4} \sup_{\delta>0}(|(\nabla^{m-1}{h^{\alpha})_\delta}
*\nabla^2 P|)
\Big)dxdt\leq \bar{\eta}
\end{split}\end{equation}
for some integer $d\geq 1$ and for some real $\alpha\in(0,2)$
where $q=12/(\alpha+6)$,
then $|(-\Delta)^{\frac{\alpha}{2}}\nabla^d u|\leq C_{d,\alpha}$
on $Q(\frac{1}{6})$ for such $(d,\alpha)$.
\end{prop}
\begin{rem}
For the definitions of $h^{\alpha}$ and $(\nabla^{m-1}{h^{\alpha})_\delta}$, see around \eqref{property_h}.
\end{rem}
The proof will be given in the section \ref{proof_local study},
and it will use the conclusion of the previous proposition \ref{partial_problem_II_r}.
Moreover, we will use an induction argument and the integral representation
of the fractional Laplacian in order to get estimates
for the fractional case.
The maximal function term in \eqref{local_study_condition2}
is introduced to estimate the non-local part of $u$, while
the maximal-of-maximal function terms in \eqref{local_study_condition3}
estimate the non-local part of $w$, which is itself non-local.
On the other hand, because $\nabla^2 P$ has only $L^1$ integrability,
we cannot use the maximal function of $\nabla^2 P$.
Instead, we use
certain integrable functions, namely the last term of \eqref{local_study_condition3}.
This term plays the role of capturing the
non-local information of the pressure (see \eqref{hardy_property}).
These points will be stated
clearly in the sections \ref{proof_local study} and \ref{proof_main_thm_II}.\\
\section{Proof of the first local study proposition \ref{partial_problem_II_r} }\label{proof_partial_prob_II_r}
\qquad This section is devoted to proving the proposition \ref{partial_problem_II_r}, which is
a
partial regularity theorem for (Problem II-r).
Remember that we are looking for a $\bar\delta$ which must
be independent of $r$. \\
In the first subsection \ref{definition_for_thoerem_partial_problem_II_r},
we present some lemmas about the advection velocity $w$ and
a new pressure
decomposition. After that, the two big lemmas \ref{lem_partial_1}
and \ref{lem_partial_2}
in the subsections
\ref{proof_lem_partial_1} and \ref{proof_lem_partial_2}, which give us control for large $r$ and small $r$ respectively, follow. Then
the actual proof of the proposition
\ref{partial_problem_II_r} will appear in the last subsection \ref{combine_de_giorgi},
where we combine those two big lemmas.\\
\subsection{A control on the
advection velocity $w$ and a new pressure decomposition}\label{definition_for_thoerem_partial_problem_II_r}
\quad The following lemma
says that
the convolution of any function with $\phi_r$
can be controlled by the value of
the maximal function at a single point, up to a factor involving $1/r$.
Of course, this is useful when $r$ is away from $0$.
\begin{lem}\label{convolution_lem}
Let $f$ be an integrable function in $\mathbb{R}^3$.
Then for any integer $d\geq0$,
there exists $C=C(d)$ such that
\begin{equation*}
\|\nabla^d (f*\phi_r)\|_{L^{\infty}(B(2))}\leq \frac{C}{r^d}
\cdot(1+\frac{4}{r})^3\cdot\inf_{x\in B(2)}\mathcal{M}f(x)
\end{equation*} for any $0<r<\infty$.
\end{lem}
\begin{proof}
Let $z,x\in B(2)$. Then
\begin{equation*}\begin{split}
& |\nabla^d (f*\phi_r)(z)| =|(f*\nabla^d \phi_r)(z)|
=|\int_{B(z,r)}f(y)\nabla^d\phi_r(z-y)dy|\\
&\leq\|\nabla^d \phi_r\|_{L^{\infty}}\int_{B(z,r)}|f(y)|dy
=\frac{\|\nabla^d \phi\|_{L^{\infty}}}{r^{d+3}}\int_{B(z,r)}|f(y)|dy \\
&\leq\frac{\|\nabla^d \phi\|_{L^{\infty}}}{r^{d+3}}\frac{(r+4)^3}{(r+4)^3}
\int_{B(x,r+4)}|f(y)|dy
\leq\frac{C}{r^d}
\cdot(1+\frac{4}{r})^3\cdot\mathcal{M}f(x).
\end{split}\end{equation*}
We used $B(z,r)\subset B(x,r+4)$. Then we take $\sup$ in $z$ and $\inf$ in $x$.
Recall that $\phi(\cdot)$ is the fixed function in this paper.\\
\end{proof}
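For completeness, we record the scaling identity used in the third line of the proof; this is a minimal computation, assuming the standard mollifier convention $\phi_r(x)=r^{-3}\phi(x/r)$ fixed earlier in the paper:
\begin{equation*}
\nabla^d\phi_r(x)=\frac{1}{r^{d+3}}(\nabla^d\phi)\Big(\frac{x}{r}\Big),
\qquad\mbox{so that}\qquad
\|\nabla^d\phi_r\|_{L^{\infty}}=\frac{\|\nabla^d\phi\|_{L^{\infty}}}{r^{d+3}}.
\end{equation*}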
The following corollary is just an application of the previous lemma
to solutions of (Problem II-r).
\begin{cor}\label{convolution_cor}
Let $u$ be a solution of (Problem II-r) for $0<r<\infty$. Then
for any integer $d\geq0$,
there exists $C=C(d)$ such that
\begin{equation*}
\| w\|_{L^2(-4,0;L^{\infty}(B(2)))}\leq {C}
\cdot(1+\frac{4}{r})^3\cdot\|\mathcal{M}(|\nabla u|)\|_{L^2{(Q(2))}}
\end{equation*} and
\begin{equation*}
\|\nabla^d w\|_{L^2(-4,0;L^{\infty}(B(2)))}\leq \frac{C}{r^{d-1}}
\cdot(1+\frac{4}{r})^3\cdot\|\mathcal{M}(|\nabla u|)\|_{L^2{(Q(2))}}
\end{equation*} if $d\geq1$.
\end{cor}
\begin{proof}
Recall $\int_{\mathbb{R}^3}w(t,y)\phi(y)dy=0$ and $supp(\phi)\subset B(1)$. Thus for $z\in B(2)$
\begin{equation*}\begin{split}
|w(t,z)|=&\Big|\int_{\mathbb{R}^3}w(t,z)\phi(y)dy-\int_{\mathbb{R}^3}w(t,y)
\phi(y)dy\Big|\\
\leq&\|\nabla w(t,\cdot)\|_{L^{\infty}(B(2))}\int_{\mathbb{R}^3}|z-y|\phi(y)dy\\
\leq&C\|(\nabla u) *\phi_r(t,\cdot)\|_{L^{\infty}(B(2))}\cdot\int_{\mathbb{R}^3}\phi(y)dy\\
\leq&{C}\cdot(1+\frac{4}{r})^3\cdot \inf_{x\in B(2)}\mathcal{M}(|\nabla u|)(t,x).
\end{split}\end{equation*}
For the last inequality, we applied the lemma \ref{convolution_lem} to $\nabla u$.
For $d\geq1$, use $\nabla^d w =\nabla^{d-1}\Big[(\nabla u)*\phi_r\Big]$.
\end{proof}
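For completeness, let us note how the pointwise infimum bound above yields the $L^2(Q(2))$ norm on the right-hand side of the corollary; this is only the elementary observation that an infimum is dominated by an average:
\begin{equation*}
\inf_{x\in B(2)}\mathcal{M}(|\nabla u|)(t,x)
\leq\Big(\frac{1}{|B(2)|}\int_{B(2)}|\mathcal{M}(|\nabla u|)(t,x)|^2dx\Big)^{1/2},
\end{equation*}
so squaring and integrating in time over $(-4,0)$ gives the stated bounds with $\|\mathcal{M}(|\nabla u|)\|_{L^2(Q(2))}$, up to a constant.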
To use De Giorgi type argument, we require more notations
which will be used only in this section.
\begin{equation}\begin{split}\label{def_s_k}
\mbox{For real }&k\geq 0, \mbox{ define }\\
B_k &=\mbox{ the ball in } \mathbb{R}^3 \mbox{ centered at the origin with radius }
\frac{1}{2}(1+\frac{1}{2^{3k}}),\\
T_k &= -\frac{1}{2}(3+\frac{1}{2^k}),\\
Q_k &=[T_k,0]\times B_k \quad\mbox{ and}\\
s_k &= \mbox{ distance between } B^c_{k-1} \mbox{ and } B_{k-\frac{5}{6}}\\
&=2^{-3k}\Big((\sqrt{2}-1)2\sqrt{2}\Big).
\end{split}\end{equation} Also we define $s_\infty=0$.
Note that $0<s_1 <\frac{1}{4}$ and the sequence $\{s_k\}_{k=1}^{\infty}$ is strictly decreasing to zero
as $k$ goes to $\infty$.\\
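For the reader's convenience, the explicit value of $s_k$ recorded above follows directly from the radii of $B_{k-1}$ and $B_{k-\frac{5}{6}}$:
\begin{equation*}
s_k=\frac{1}{2}\Big(1+\frac{1}{2^{3(k-1)}}\Big)-\frac{1}{2}\Big(1+\frac{1}{2^{3(k-\frac{5}{6})}}\Big)
=\frac{1}{2}\cdot\frac{1}{2^{3k}}\big(8-2^{\frac{5}{2}}\big)
=2^{-3k}\big(4-2\sqrt{2}\big)
=2^{-3k}\Big((\sqrt{2}-1)2\sqrt{2}\Big),
\end{equation*}
and in particular $s_1=\frac{4-2\sqrt{2}}{8}\approx 0.146<\frac{1}{4}$.\\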
\noindent For each integer $k\geq 0 $, we define and fix a function $\psi_k \in C^{\infty}(\mathbb{R}^3)$ satisfying:\\
\begin{equation}\begin{split}
&\psi_k = 1 \quad\mbox{ in } B_{k-\frac{2}{3}} , \quad\psi_k = 0 \quad\mbox{ in } B_{k-\frac{5}{6}}^C\\
&0\leq \psi_k(x) \leq 1 , \quad
|\nabla\psi_k(x)|\leq C2^{3k} \mbox{ and }
|{\nabla}^2\psi_k(x)|\leq C2^{6k} \mbox{ for }x\in\mathbb{R}^3.
\end{split}\end{equation} This $\psi_k$ plays the role of a cut-off function for $B_k$.\\
To prove the proposition \ref{partial_problem_II_r}, we need the following
important lemma about pressure decomposition. Here
we decompose our pressure term into three parts:
a non-local part which depends on $k$,
a local part which depends on $k$ and
a non-local part which does not depend on $k$
and will be absorbed into the velocity component later.
\begin{lem}\label{lem_pressure_decomposition}
There exists a constant ${\Lambda_1}>0$ with
the following property:\\
Suppose $A_{ij}\in L^1(B_0) $ for $ 1\leq i,j\leq 3 $
and $ P\in L^1(B_0)$
with $-\Delta P = \sum_{ij}\partial_i \partial_j A_{ij}$ in $B_0$.
Then, there exists a function $P_3 $ with $P_3|_{B_{\frac{2}{3}}} \in L^{\infty} $
such that, for any $k\geq1$, we can decompose $P$ as\\
\begin{equation}\label{pressure_decomposition_expressition}
P = P_{1,k} + P_{2,k} + P_{3}\quad\mbox{ in } B_{\frac{1}{3}},
\end{equation}
and they satisfy
\begin{equation}\label{lem_pressure_decomposition_p1k} \|\nabla P_{1,k}\|_{L^{\infty}(B_{k-\frac{1}{3}})}
+ \|P_{1,k}\|_{L^{\infty}(B_{k-\frac{1}{3}})}
\leq {\Lambda_1}
2^{12k} \sum_{ij}\|A_{ij}\|_{L^1(B_{\frac{1}{6}})},
\end{equation}
\begin{equation}\label{lem_pressure_decomposition_p2k} -\Delta P_{2,k} =
\sum_{ij}\partial_i \partial_j (\psi_k A_{ij}) \quad\quad \mbox{in } \mathbb{R}^3 \quad\mbox{ and}
\end{equation}
\begin{equation}\label{lem_pressure_decomposition_p3} \|\nabla P_{3}\|_{L^{\infty}(B_{\frac{2}{3}})} \leq {\Lambda_1}
(\|P\|_{L^1(B_{\frac{1}{6}})} + \sum_{ij}\|A_{ij}\|_{L^1(B_{\frac{1}{6}})}).
\end{equation}
Note that ${\Lambda_1}$ is a totally independent constant. \\
\end{lem}
\begin{proof}
The product rule and the hypothesis imply
\begin{equation*}\begin{split}
-\Delta(\psi_1 P) &= -\psi_1 \Delta P - 2\ebdiv((\nabla \psi_1)P) + P\Delta\psi_1\\
&= \psi_1\sum_{ij}\partial_i \partial_j A_{ij} - 2\ebdiv((\nabla \psi_1)P) + P\Delta\psi_1\\
&= - \Delta P_{1,k} - \Delta P_{2,k} - \Delta P_3
\end{split}
\end{equation*}
where $P_{1,k}$, $ P_{2,k} $ and $ P_3 $ are defined by
\begin{equation*}\begin{split}
- \Delta P_{1,k} &= \sum_{ij}\partial_i \partial_j ((\psi_1 -\psi_k) A_{ij}) \\
- \Delta P_{2,k} &= \sum_{ij}\partial_i \partial_j (\psi_k A_{ij}) \\
- \Delta P_3\mbox{ } &= - \sum_{ij}\partial_j[(\partial_i \psi_1)(A_{ij})]
- \sum_{ij}\partial_i[(\partial_j \psi_1)(A_{ij})] \\&+ \sum_{ij}(\partial_i \partial_j \psi_1)(A_{ij})
- 2\ebdiv((\nabla \psi_1)P) + P\Delta\psi_1.
\end{split}\end{equation*}
$P_{1,k}$ and $P_3$ are defined by the representation formula
${(-\Delta)}^{-1}(f) = \frac{1}{4\pi}(\frac{1}{|x|} * f)$\\
while $P_{2,k}$ by the Riesz transforms.\\
\\
Since $\psi_1 = 1 $ on $ B_{\frac{1}{3}}$, we have $\Delta P = \Delta(\psi_1 P)$ on $ B_{\frac{1}{3}}$.
Thus \eqref{pressure_decomposition_expressition} holds.\\
\\
By definition of $P_{2,k}$, \eqref{lem_pressure_decomposition_p2k} holds.\\
\\
For \eqref{lem_pressure_decomposition_p1k} and \eqref{lem_pressure_decomposition_p3},
the argument follows the proof of the lemma 3 of \cite{vas:partial} directly.
For completeness, we present a proof here. Note that $ (\psi_1 -\psi_k) $
is supported in $ (B_{\frac{1}{6}} - B_{k-\frac{2}{3}} )$ and
$ \nabla\psi_1 $ is supported in $ (B_{\frac{1}{6}} - B_{\frac{1}{3}} )$. Thus
for $x\in B_{k-\frac{1}{3}}$,
\begin{equation*}\begin{split}
|P_{1,k}(x)| &= \Bigg|\frac{1}{4\pi}\int_{(B_{\frac{1}{6}} - B_{k-\frac{2}{3}} )}
\frac{1}{|x-y|}\sum_{ij}(\partial_i \partial_j ((\psi_1 -\psi_k) A_{ij}))(y)dy\Bigg|\\
&\leq \sup_{y\in B^C_{k-\frac{2}{3}}}(|\nabla^2_y\frac{1}{|x-y|}|) \cdot \sum_{ij}
\int_{B_{\frac{1}{6}}}|A_{ij}(y)|dy\\
&\leq C\cdot\sup_{y\in B^C_{k-\frac{2}{3}}}(\frac{1}{|x-y|^3}) \cdot
\sum_{ij}\|A_{ij}\|_{L^1(B_{\frac{1}{6}})}
\leq C_1\cdot2^{9k} \cdot
\sum_{ij}\|A_{ij}\|_{L^1(B_{\frac{1}{6}})}.
\end{split}\end{equation*}
We used integration by parts and facts $|x-y| \geq 2^{-3k}$ and $ |(\psi_1 -\psi_k)|\leq 1 $ . \\
\\ In the same way, for $x\in B_{k-\frac{1}{3}}$,
\begin{equation*}\begin{split}
|\nabla P_{1,k}(x)|
&\leq C_2\cdot2^{12k} \cdot
\sum_{ij}\|A_{ij}\|_{L^1(B_{\frac{1}{6}})}.\\
\end{split}\end{equation*}
For $x\in B_{\frac{2}{3}}$,
\begin{equation*}\begin{split}
|\nabla P_{3}(x)|
= & \Bigg|\frac{1}{4\pi}\int_{(B_{\frac{1}{6}} - B_{\frac{1}{3}} )}
(\nabla_y\frac{1}{|x-y|})\Big[-\sum_{ij}\partial_j[(\partial_i \psi_1)(A_{ij})]
- \sum_{ij}\partial_i[(\partial_j \psi_1)(A_{ij})] \\
&\quad + \sum_{ij}(\partial_i \partial_j \psi_1)(A_{ij})
- 2\ebdiv((\nabla \psi_1)P) + P\Delta\psi_1 \Big]dy\Bigg|\\
\leq& C_3\Big(\sum_{ij}\|A_{ij}\|_{L^1(B_{\frac{1}{6}})}
+ \|P\|_{L^1(B_{\frac{1}{6}})}\Big).
\end{split}\end{equation*}
These prove \eqref{lem_pressure_decomposition_p1k} and \eqref{lem_pressure_decomposition_p3}, and we keep the
constant ${\Lambda_1} = \max(C_1,C_2,C_3)$ for future use.
\end{proof}
Before presenting the De Giorgi arguments for
large $r$ and small $r$, we need more notation.
In the following two main lemmas \ref{lem_partial_1}
and \ref{lem_partial_2}, $P_3$
will be constructed from solutions $(u,P)$ for (Problem II-r) by
using the previous lemma \ref{lem_pressure_decomposition} and
it will be clearly shown
that $\nabla P_3$ has $L_t^1L_x^{\infty}$ bound. Thus we can define
\begin{equation}\begin{split}\label{def_e_k}
E_k(t) = &\frac{1}{2}(1-2^{-k}) + \int_{-1}^{t}\|\nabla P_3(s,\cdot)
\|_{L^{\infty}(B_{\frac{2}{3}})}ds, \\
& \mbox{ for } t \in[-2,0] \mbox{ and for } k\geq 0.
\end{split}\end{equation}
Note that $E_k$ depends on $t$.
We also define the following quantities, as in \cite{vas:partial}:
\begin{equation*}\begin{split}
v_k &= (|u|-E_k)_{+},\\
d_k & = \sqrt{\frac{E_k \mathbf{1}_{\{|u|\geq E_k\}}}{|u|}|\nabla|u||^2 +
\frac{v_k}{|u|}|\nabla u|^2} \quad\mbox{ and}\\
U_k &= \sup_{t\in[T_k,0]}\Big(\int_{B_k}|v_k|^2 dx \Big) + \int\int_{Q_k}|d_k|^2 dx dt\\
&= \|v_k\|_{L^{\infty}(T_k,0;L^2(B_k))}^2 + \|d_k\|^2_{L^2(Q_k)}.
\end{split}\end{equation*}
In this way, $P_3$ will be absorbed into $v_k$, which is the key
idea of the proof of
the proposition \ref{partial_problem_II_r}. \\
\subsection{De Giorgi argument to get a control for large $r$}\label{proof_lem_partial_1}
The following big lemma says that
we can obtain a certain uniform non-linear estimate
of the form $W_k\leq C^k\cdot W_{k-1}^\beta $ when $r$ is large.
Then an elementary lemma gives us the conclusion (we will see
the lemma \ref{lem_recursive} later).
On the other hand,
for small $r$, we have the factor $(1/r^3)$, which blows up
as $r$ goes to zero. This weak point means that
we still need some extra work after this lemma
(namely the next big lemma \ref{lem_partial_2}).
\begin{lem}\label{lem_partial_1}
There exist universal constants ${\delta}_1>0$ and $\bar{C}_1>1$ such that
if $u$ is a solution of (Problem II-r) for some $0<r<\infty$ verifying both
\begin{equation*}\begin{split}
&\| u\|_{L^{\infty}(-2,0;L^{2}(B(\frac{5}{4})))}+
\|P\|_{L^1(-2,0;L^{1}(B(1)))}+\| \nabla u\|_{L^{2}(-2,0;L^{2}(B(\frac{5}{4})))}
\leq {\delta}_1\\ \mbox{ and }
&\| \mathcal{M}(|\nabla u|)\|_{L^{2}(-2,0;L^{2}(B(2)))}\leq {\delta}_1,\\
\end{split}\end{equation*}
then we have
\begin{equation*}
U_k \leq \begin{cases}& (\bar{C}_1)^k U_{k-1}^{\frac{7}{6}} ,\quad \mbox{ for any } k\geq 1 \quad\mbox{ if } r\geq s_{1}\\
&\frac{1}{r^3}\cdot (\bar{C}_1)^k U_{k-1}^{\frac{7}{6}} ,
\quad \mbox{ for any } k\geq 1 \quad\mbox{ if } r< s_{1}. \end{cases}
\end{equation*}
\end{lem}
\begin{rem}
$s_1$ is a pre-fixed constant defined in
\eqref{def_s_k} such that $0<s_1<1/4$, and
$({\delta}_1,\bar{C}_1)$ is independent of any
$ 0<r<\infty$. It will be clear that
the exponent $7/6$ is not optimal and that we can make it arbitrarily close to $4/3$.
However, any exponent bigger than $1$ is enough for our study.
\end{rem}
\begin{proof}
We assume ${\delta_{1}} <1$. First we claim that there exists
a universal constant ${\Lambda_2}\geq 1$
such that
\begin{equation}\label{w_iu_j}
\||w|\cdot|u|\|_{L^{2}(-2,0;L^{3/2}(B_{\frac{1}{6}}))}\leq {\Lambda_2}\cdot{\delta_{1}} \quad\mbox{for any } 0<r<\infty.
\end{equation}
In order to prove the above claim \eqref{w_iu_j}, we need to separate it into
a large $r$ case and a small $r$ case:\\
\textbf{(I)-large $r$ case.} From the corollary \ref{convolution_cor} if $r\geq s_{1}$, then
\begin{equation}\begin{split}\label{w_large_r}
\| w\|_{L^2(-4,0;L^{\infty}(B(2)))}
&\leq {C}
\cdot(1+\frac{4}{s_{1}})^3\cdot\|\mathcal{M}(|\nabla u|)\|_{L^2{(Q(2))}}\\
&\leq {C}\|\mathcal{M}(|\nabla u|)\|_{L^2{(Q(2))}}\leq {C}{\delta_{1}}.
\end{split}\end{equation} Likewise,
\begin{equation}\begin{split}\label{nabla_w_large_r}
\|\nabla w\|_{L^2(-4,0;L^{\infty}(B(2)))}
&\leq {C}{\delta_{1}}.
\end{split}\end{equation}
With H\"older's inequality and $B_{\frac{1}{6}}\subset B_{0}=B(1)\subset B(\frac{5}{4})\subset B(2)$,
\begin{equation*}\begin{split}
\||w|\cdot|u|\|_{L^{2}(-2,0;L^{3/2}(B_{\frac{1}{6}}))}&\leq C
\| u\|_{L^{\infty}(-2,0;L^{2}(B(\frac{5}{4})))}\cdot\| w\|_{L^2(-4,0;L^{\infty}(B(2)))}\\
&\leq C \cdot{\delta_{1}}^2\leq C_1 \cdot{\delta_{1}},
\end{split}\end{equation*} so we obtained \eqref{w_iu_j} for $r\geq s_{1}$.\\
\textbf{(II)-small $r$ case.} On the other hand, if $r< s_{1}$, then
\begin{equation}\begin{split}\label{w_small_r_with_r^3}
\| w\|_{L^2(-4,0;L^{\infty}(B(2)))}&\leq {C}
\cdot(1+\frac{4}{r})^3\cdot\|\mathcal{M}(|\nabla u|)\|_{L^2{(Q(2))}}\\
&\leq {C}\frac{1}{r^3}\|\mathcal{M}(|\nabla u|)\|_{L^2{(Q(2))}}\leq {C}\frac{1}{r^3}{\delta_{1}}
\end{split}\end{equation} and
\begin{equation}\begin{split}\label{nabla_w_small_r_with_r^3}
\|\nabla w\|_{L^2(-4,0;L^{\infty}(B(2)))}
&\leq {C}\frac{1}{r^3}{\delta_{1}}.
\end{split}\end{equation}
However, this is not enough to prove \eqref{w_iu_j} because the $\frac{1}{r^3}$ factor
blows up as $r$ goes to zero. So, instead, we use the idea that $w$ and $u$ are similar when $r$ is small:
\begin{equation*}\begin{split}
\| u\|_{L^{4}(-2,0;L^{3}(B_0))}
&\leq
C\Big(\| u\|_{L^{\infty}(-2,0;L^{2}(B_0))}+
\| \nabla u\|_{L^{2}(-2,0;L^{2}(B_0))}\Big)
\leq C{\delta_{1}}
\end{split}\end{equation*}
and
\begin{equation*}\begin{split}
\| w^{\prime}\|_{L^{4}(-2,0;L^{3}(B_{\frac{1}{6}}))}
&= \| u*\phi_r\|_{L^{4}(-2,0;L^{3}(B_{\frac{1}{6}}))}
\leq\| u\|_{L^{4}(-2,0;L^{3}(B_0))} \leq C{\delta_{1}} \\
\end{split}\end{equation*}
because $u*\phi_r$ in $B_{\frac{1}{6}}$ depends only on $u$ in $B_0$
(recall that $r\leq s_1$, that $s_1$ is the distance between $B^c_{0}$ and $B_{\frac{1}{6}}$,
and refer to \eqref{young}).
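For completeness, the first display of this small $r$ case (the bound on $\| u\|_{L^{4}(-2,0;L^{3}(B_0))}$) is the standard interpolation and Sobolev argument; a minimal sketch, with constants depending only on the ball $B_0$ and the length of the time interval, reads
\begin{equation*}
\|u(t)\|_{L^{3}(B_0)}\leq\|u(t)\|_{L^{2}(B_0)}^{1/2}\|u(t)\|_{L^{6}(B_0)}^{1/2}
\leq C\|u\|_{L^{\infty}(-2,0;L^{2}(B_0))}^{1/2}
\big(\|u(t)\|_{L^{2}(B_0)}+\|\nabla u(t)\|_{L^{2}(B_0)}\big)^{1/2},
\end{equation*}
and taking the $L^4$ norm in time over $(-2,0)$ gives
$\| u\|_{L^{4}(-2,0;L^{3}(B_0))}\leq C\big(\| u\|_{L^{\infty}(-2,0;L^{2}(B_0))}+\| \nabla u\|_{L^{2}(-2,0;L^{2}(B_0))}\big)$.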
For $w^{\prime\prime}$,
\begin{equation}\begin{split}\label{w_prime_prime_small_r}
\| w^{\prime\prime}\|_{L^\infty(-2,0;L^{\infty}(B(2)))}
&=\| w^{\prime\prime}\|_{L_t^\infty((-2,0))}\\
&=\|\int_{\mathbb{R}^3}\phi(y)(u*\phi_{r})(y)dy\|_{L_t^\infty((-2,0))}\\
&\leq C\|\|u*\phi_{r}\|_{L_x^1(B(1))}\|_{L_t^\infty((-2,0))}\\
&\leq C\|\|u\|_{L_x^1(B(\frac{5}{4}))}\|_{L_t^\infty((-2,0))}\\
&\leq {C}\| u\|_{L^\infty(-2,0;L^{2}(B(\frac{5}{4})))}\\
&\leq {C}{\delta_{1}}\\
\end{split}\end{equation} because $ w^{\prime\prime}$ is constant in $x$,
$supp(\phi)\subset B(1)$, and $u*\phi_r$ in $B(1)$ depends only on $u$
in $B(1+s_{1})$,
which is a subset of $B(\frac{5}{4})$.
As a result, we have
\begin{equation}\begin{split}\label{w_u}
\||w|\cdot|u|\|_{L^{2}(-2,0;L^{3/2}(B_{\frac{1}{6}}))}&\leq C
\| u\|_{L^{4}(-2,0;L^{3}(B(1)))}
\cdot\| w\|_{L^4(-2,0;L^{3}(B(\frac{1}{6})))}\\
&\leq C{\delta_{1}}\cdot
\| |w^{\prime}|+|w^{\prime\prime}|\|_{L^4(-2,0;L^{3}(B(\frac{1}{6})))}\\
&\leq C \cdot{\delta_{1}}^2 \leq C_2 \cdot{\delta_{1}},
\end{split}\end{equation} so that we obtained \eqref{w_iu_j} for $r< s_{1}$.\\
\noindent
Hence, taking
\begin{equation}\label{def_breve_C}
{\Lambda_2}=\max(C_1,C_2,1),
\end{equation} we have
\eqref{w_iu_j}, and ${\Lambda_2}$ is independent of $0<r<\infty$
as long as $\delta_1<1$.
From now on, we assume that $\delta_1<1$ is sufficiently
small that $10 \cdot {\Lambda_1}\cdot{\Lambda_2}\cdot\delta_1\leq 1/2$
(recall that ${\Lambda_1}$ comes from the lemma \ref{lem_pressure_decomposition}). \\
Thanks to the lemma \ref{lem_pressure_decomposition} and \eqref{w_iu_j}, by putting
$A_{ij}=w_iu_j$ we can decompose $P$ by\\
\begin{equation*}
P = P_{1,k} + P_{2,k} + P_{3}\quad\mbox{ in } [-2,0]\times B_{\frac{1}{3}}
\end{equation*} for each $k\geq1$
with following properties:\\
\begin{equation}\begin{split}\label{eq_pressure_decomposition_p1k}
\| |\nabla P_{1,k}|
+ |P_{1,k}|\|_{L^{2}(-2,0;L^{\infty}(B_{k-\frac{1}{3}}))}
&\leq {\Lambda_1} 2^{12k}\sum_{ij}\|w_iu_j\|_{L^{2}(-2,0;L^1(B_{\frac{1}{6}}))}\\
&\leq 9\cdot{\Lambda_1}\cdot{\Lambda_2} \cdot {\delta_{1}}\cdot2^{12k}
\leq 2^{12k} \quad\mbox{ for any } k\geq1,\\
\end{split}\end{equation}
\begin{equation}\label{eq_pressure_decomposition_p2k}
-\Delta P_{2,k} = \sum_{ij} \partial_i \partial_j (\psi_k w_i u_j)
\quad\quad \mbox{in } [-2,0]\times\mathbb{R}^3 \quad\mbox{ for any } k\geq1
\quad \mbox{ and} \\
\end{equation}
\begin{equation}\begin{split}\label{eq_pressure_decomposition_p3}
\|\nabla P_{3}\|_{L^{1}(-2,0;L^{\infty}(B_{\frac{2}{3}}))} &\leq {\Lambda_1}
\Big(\|P\|_{L^{1}(-2,0;L^1(B(1)))} + \sum_{ij}\|w_iu_j\|_{L^{2}(-2,0;L^1(B(1)))}\Big)\\
&\leq {\Lambda_1}({\delta_{1}}+ 9\cdot{\Lambda_2}\cdot{\delta_{1}})
\leq 10\cdot{\Lambda_1}\cdot {\Lambda_2}\cdot{\delta_{1}}\leq \frac{1}{2}.
\end{split}\end{equation}
\noindent Note that the above \eqref{eq_pressure_decomposition_p3}
ensures that $E_k$ is well-defined and
satisfies $0\leq E_k \leq 1$ (see the definition of $E_k$ in \eqref{def_e_k}).\\
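Indeed, for the upper bound one simply combines \eqref{def_e_k} with \eqref{eq_pressure_decomposition_p3}: for every $t\in[-2,0]$ and every $k\geq 0$,
\begin{equation*}
E_k(t)\leq\frac{1}{2}(1-2^{-k})+\int_{-2}^{0}\|\nabla P_3(s,\cdot)\|_{L^{\infty}(B_{\frac{2}{3}})}ds
\leq\frac{1}{2}+\frac{1}{2}=1.
\end{equation*}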
In the following remarks \ref{lem10_39}--\ref{rem_lem10_39}, we gather some easy results, which were obtained in \cite{vas:partial}, without proof. They can be found in
the lemmas
4, 6 and the remark of the lemma 4 of \cite{vas:partial}. Note that any constants $C$ in the following remarks
do not depend on $k$.\\% as long as $k\geq 0.\\
\begin{rem}\label{lem10_39}
For any $k\geq 0$, the function $u$ can be decomposed by
$u=u\frac{v_k}{|u|} + u(1-\frac{v_k}{|u|})$.
Also we have
\begin{equation}\begin{split}\label{d_k}
&\Big|u(1-\frac{v_k}{|u|})\Big|\leq 1,
\quad\frac{v_k}{|u|}|\nabla u|\leq d_k, \quad
\mathbf{1}_{|u|\geq E_k}|\nabla|u||\leq d_k,\\
&|\nabla v_k|\leq d_k \quad\mbox{ and }\quad
\big|\nabla\frac{uv_k}{|u|}\big|\leq 3d_k.
\end{split}\end{equation}
\end{rem}
\begin{rem}\label{lem12_39}
For any $k\geq1$ and
for any $q\geq1$,
\begin{equation*}\begin{split}
\|\mathbf{1}_{v_k>0}\|_{L^q(Q_{k-1})}&\leq C 2^{\frac{10k}{3q}}U^{\frac{5}{3q}}_{k-1} \quad\mbox{ and }\quad
\|\mathbf{1}_{v_k>0}\|_{L^{\infty}(T_{k-1},0;L^q(B_{k-1}))}
\leq C 2^{\frac{2k}{q}}U^{\frac{1}{q}}_{k-1}.\\
\end{split}\end{equation*}
\end{rem}
\begin{rem}\label{rem_lem10_39}
For any $k\geq1$,
$\|v_{k-1}\|_{L^{\frac{10}{3}}(Q_{k-1})}\leq C U_{k-1}^{\frac{1}{2}}.$
\end{rem}
\ \\
From the above remarks \ref{lem10_39}--\ref{rem_lem10_39}, we have for any $1\leq p\leq\frac{10}{3}$,
\begin{equation}\begin{split}\label{raise_of_power}
\|v_k\|_{L^{p}(Q_{k-1})}&=\|v_k\mathbf{1}_{v_k>0}\|_{L^{p}(Q_{k-1})}\\
&\leq \|v_k\|_{L^{\frac{10}{3}}(Q_{k-1})}\cdot\|\mathbf{1}_{v_k>0}\|_{L^{1/(\frac{1}{p}-\frac{3}{10})}(Q_{k-1})}\\
&\leq \|v_{k-1}\|_{L^{\frac{10}{3}}(Q_{k-1})}\cdot C 2^{\frac{10k}{3}\cdot(\frac{1}{p}-\frac{3}{10})}U^{\frac{5}{3}\cdot(\frac{1}{p}-\frac{3}{10})}_{k-1}\\
&\leq C2^{\frac{7k}{3}}U^{\frac{5}{3p}}_{k-1}.\\
\end{split}\end{equation}
Likewise, for any $1\leq p\leq2$,
\begin{equation}\begin{split}\label{raise_of_power2}
\|v_k\|_{L^{\infty}(T_{k-1},0;L^{p}(B_{k-1}))}
&\leq C2^{k}U^{\frac{1}{p}}_{k-1}
\end{split}\end{equation} and
\begin{equation}\begin{split}\label{raise_of_power3}
\|d_k\|_{L^{p}(Q_{k-1})}&
\leq C2^{\frac{5k}{3}}U^{\frac{5}{3p}-\frac{1}{3}}_{k-1}.\\
\end{split}\end{equation}\\
Second, we claim that,
for every $k\geq 1$, the function $v_k$ verifies:
\begin{equation}\label{eq_suitable_inequality_for_v_k_prob_II_r}\begin{split}
\partial_t \frac{v_k^2}{2} + &\ebdiv (w \frac{v_k^2}{2}) + d_k^2 - \Delta\frac{v_k^2}{2} \\
&+\ebdiv (u (P_{1,k} + P_{2,k})) + (\frac{v_k}{|u|}-1)u\cdot\nabla (P_{1,k} + P_{2,k}) \leq 0
\end{split}\end{equation}
in $(-2,0)\times B_{\frac{2}{3}}$.\\
\begin{rem}
Note that the above inequality \eqref{eq_suitable_inequality_for_v_k_prob_II_r} does not contain the $P_3$ term. We will see that this fact comes from
the definition of $E_k(t)$ in \eqref{def_e_k}.
\end{rem}
Indeed, observe that
$ \frac{v_k^2}{2} = \frac{|u|^2}{2} + \frac{v_k^2 -|u|^2}{2}$
and note that $E_k$ does not depend on space variable but on time variable.
Thus we can compute,
for time derivatives,
\begin{equation*}\begin{split}
\partial_t&(\frac{v^2_k - |u|^2}{2}) = v_k\partial_{t}v_k - u\partial_{t}u
= v_k\partial_{t}|u| - v_k\partial_t E_k - u\partial_{t}u \\
&=u (\frac{v_k}{|u|} -1)\partial_t u - v_k\partial_t E_k
= u (\frac{v_k}{|u|} -1)\partial_t u - v_k
\|\nabla P_3(t,\cdot)\|_{L^{\infty}(B_{\frac{2}{3}})}
\end{split}\end{equation*} while,
for any space derivatives $\partial_{\alpha}$,
\begin{equation*}\begin{split}
\partial_{\alpha}(\frac{v^2_k - |u|^2}{2})
& =u (\frac{v_k}{|u|} -1)\partial_{\alpha} u.
\end{split}\end{equation*}
Then we proceed in the same way as the lemma 5 of \cite{vas:partial}:
first, we multiply \eqref{navier_Problem II-r} by $u (\frac{v_k}{|u|} -1)$,
and then we add the result to \eqref{suitable_Problem II-r}. We omit the details, which can be found in the proof of the lemma 5 of \cite{vas:partial}.
As a result,
we have
\begin{equation*}\begin{split}
0 \geq & \quad\partial_t \frac{v_k^2}{2} + \ebdiv (w \frac{v_k^2}{2}) + d_k^2
- \Delta\frac{v_k^2}{2} +v_k\|\nabla P_3(t,\cdot)\|_{L^{\infty}(B_{\frac{2}{3}})}\\
&+\ebdiv (u P) + (\frac{v_k}{|u|}-1)u\cdot\nabla P \\
=&\quad\partial_t \frac{v_k^2}{2} + \ebdiv (w \frac{v_k^2}{2}) + d_k^2 - \Delta\frac{v_k^2}{2}
+\Big(v_k\|\nabla P_3(t,\cdot)\|_{L^{\infty}(B_{\frac{2}{3}})} + \frac{v_k}{|u|}u\cdot \nabla P_3\Big)\\
&+\ebdiv (u (P_{1,k}+P_{2,k})) + (\frac{v_k}{|u|}-1)u\cdot\nabla (P_{1,k}+P_{2,k}).
\end{split}\end{equation*}
For the last equality, we used the fact $P = P_{1,k} + P_{2,k} + P_{3} $ in $ B_{\frac{1}{3}} $ and
\begin{equation}\label{easyp_3}
\ebdiv (u P_3) + (\frac{v_k}{|u|}-1)u\cdot\nabla P_3= \frac{v_k}{|u|}u\cdot \nabla P_3.
\end{equation}
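For clarity, \eqref{easyp_3} follows from the divergence free condition $\ebdiv u=0$, since
\begin{equation*}
\ebdiv (u P_3)=P_3\,\ebdiv u+u\cdot\nabla P_3=u\cdot\nabla P_3 .
\end{equation*}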
Thus we proved the claim \eqref{eq_suitable_inequality_for_v_k_prob_II_r}
due to
\begin{equation*}
v_k\|\nabla P_3(t,\cdot)\|_{L^{\infty}(B_{\frac{2}{3}})} + \frac{v_k}{|u|}u\cdot \nabla P_3 \geq 0
\quad \mbox{ on } (-2,0)\times B_{\frac{2}{3}}.
\end{equation*}
For any integer $k$, we introduce a cut-off function $\eta_k(x)\in C^{\infty}(\mathbb{R}^3)$ satisfying
\begin{equation*}\begin{split}
&\eta_k = 1 \quad\mbox{ in } B_{k} \quad , \quad
\eta_k = 0 \quad\mbox{ in } B_{k-\frac{1}{3}}^C\quad , \quad
0\leq \eta_k \leq 1, \\
&|\nabla\eta_k|\leq C2^{3k} \quad \mbox{and} \quad
|{\nabla}^2\eta_k|\leq C2^{6k},
\quad \mbox{ for }\mbox{any }x\in\mathbb{R}^3.
\end{split}\end{equation*}
Multiplying \eqref{eq_suitable_inequality_for_v_k_prob_II_r} by $\eta_k$
and integrating over $[\sigma,t]\times
\mathbb{R}^3 $ for $T_{k-1}\leq \sigma\leq T_k\leq t \leq 0$,
\begin{equation*}\begin{split}
&\int_{\mathbb{R}^3}\eta_k(x)\frac{|v_k(t,x)|^2}{2}dx
+ \int_{\sigma}^t\int_{\mathbb{R}^3}\eta_k(x)d^2_k(s,x)dxds\\
&\leq \int_{\mathbb{R}^3}\eta_k(x)\frac{|v_k(\sigma,x)|^2}{2}dx\\
+&\int_{\sigma}^t\int_{\mathbb{R}^3}(\nabla\eta_k)(x)w(s,x)\frac{|v_k(s,x)|^2}{2}dxds
+\int_{\sigma}^t\int_{\mathbb{R}^3}(\Delta\eta_k)(x)\frac{|v_k(s,x)|^2}{2}dxds\\
-&\int_{\sigma}^t\int_{\mathbb{R}^3}\eta_k(x)
\Big(\ebdiv (u (P_{1,k} + P_{2,k})) + (\frac{v_k}{|u|}-1)u
\cdot\nabla (P_{1,k} + P_{2,k})\Big)(s,x)dxds.
\end{split}\end{equation*}
Integrating in $\sigma\in[T_{k-1},T_k]$ and dividing by
$-(T_{k-1}-T_k)=2^{-(k+1)}$,
\begin{equation*}\begin{split}
&\sup_{t\in[T_k,0]}\Big(\int_{\mathbb{R}^3}\eta_k(x)\frac{|v_k(t,x)|^2}{2}dx
+ \int_{T_k}^t\int_{\mathbb{R}^3}\eta_k(x)d^2_k(s,x)dxds\Big)\\
&\leq 2^{k+1}\cdot \int_{T_{k-1}}^{T_k}\int_{\mathbb{R}^3}\eta_k(x)\frac{|v_k(\sigma,x)|^2}{2}dxd\sigma\\
+&\int_{T_{k-1}}^{0}\Big|\int_{\mathbb{R}^3}\nabla\eta_k(x)w(s,x)\frac{|v_k(s,x)|^2}{2}dx\Big|ds
+\int_{T_{k-1}}^{0}\Big|\int_{\mathbb{R}^3}\Delta\eta_k(x)\frac{|v_k(s,x)|^2}{2}dx\Big|ds\\
+&\int_{T_{k-1}}^{0}\Big|\int_{\mathbb{R}^3}\eta_k(x)
\Big(\ebdiv (u (P_{1,k} + P_{2,k})) + (\frac{v_k}{|u|}-1)u
\cdot\nabla (P_{1,k} + P_{2,k})\Big)(s,x)dx\Big|ds.\\
\end{split}\end{equation*}
From $\eta_k=1$ on $ B_k$,
\begin{equation*}\begin{split}
U_k
&\leq\sup_{t\in[T_k,0]}\Big(\int_{\mathbb{R}^3}\eta_k(x)\frac{|v_k(t,x)|^2}{2}dx\Big)
+ \int_{T_k}^0\int_{\mathbb{R}^3}\eta_k(x)d^2_k(s,x)dxds\\
&\leq2\cdot\sup_{t\in[T_k,0]}\Big(\int_{\mathbb{R}^3}\eta_k(x)\frac{|v_k(t,x)|^2}{2}dx
+ \int_{T_k}^t\int_{\mathbb{R}^3}\eta_k(x)d^2_k(s,x)dxds\Big).
\end{split}\end{equation*}
Thus we have
\begin{equation}\begin{split}\label{1234_decompo_1}
&U_k
\leq (I)+(II)+(III)+(IV)
\end{split}\end{equation} where
\begin{equation}\begin{split}\label{1234_decompo_2}
&(I)=C2^{6k}\int_{Q_{k-1}}|v_k(s,x)|^2dxds,\\
&(II)=\int_{Q_{k-1}}|\nabla\eta_k(x)|\cdot|w(s,x)|\cdot|v_k(s,x)|^2dxds,\\
&(III)=2\int_{T_{k-1}}^{0}\Big|\int_{\mathbb{R}^3}\eta_k(x)
\Big(\ebdiv (u P_{1,k}) + (\frac{v_k}{|u|}-1)u\cdot\nabla P_{1,k}\Big)(s,x)dx\Big|ds\quad\mbox{ and}\\
&(IV)=2\int_{T_{k-1}}^{0}\Big|\int_{\mathbb{R}^3}\eta_k(x)
\Big(\ebdiv (u P_{2,k}) + (\frac{v_k}{|u|}-1)u\cdot\nabla P_{2,k}\Big)(s,x)dx\Big|ds.
\end{split}\end{equation}
For $(I)$, by using \eqref{raise_of_power}, for any $0<r<\infty$,
\begin{equation}\begin{split}\label{(I)}
&(I)= C2^{6k}\|v_k\|^2_{L^{2}(Q_{k-1})}
\leq C2^{10k}U^{\frac{5}{3}}_{k-1}.
\end{split}\end{equation}\\
For $(II)$ with $r\geq s_{1}$, by using \eqref{w_large_r} and \eqref{raise_of_power2},
\begin{equation}\begin{split}\label{(II-1)}
(II)
&\leq C2^{3k}\|w\|_{L^2(-4,0;L^{\infty}(B(2)))}\cdot
\||v_k|^2\|_{L^2(T_{k-1},0;L^{1}(B_{k-1}))}\\
&\leq C2^{3k}{\delta_{1}}\|v_k\|_{L^{\infty}(T_{k-1},0;L^{\frac{6}{5}}(B_{k-1}))}
\cdot\|v_k\|_{L^2(T_{k-1},0;L^{6}(B_{k-1}))}\\
&\leq C2^{4k}{\delta_{1}}U^{\frac{5}{6}}_{k-1}
\cdot\Big(\|v_{k-1}\|_{L^{\infty}(T_{k-1},0;L^{2}(B_{k-1}))}+
\|\nabla v_{k-1}\|_{L^2(T_{k-1},0;L^{2}(B_{k-1}))}\Big)\\
&\leq C2^{4k}\cdot{\delta_{1}}\cdot U^{\frac{5}{6}}_{k-1}
\cdot U^{\frac{1}{2}}_{k-1}\leq C2^{4k}\cdot{\delta_{1}}\cdot U^{\frac{4}{3}}_{k-1}
\leq C2^{4k}\cdot U^{\frac{4}{3}}_{k-1}.
\end{split}\end{equation}
For $r<s_{1}$, following the above steps but using
\eqref{w_small_r_with_r^3} instead of \eqref{w_large_r},
we get
\begin{equation}\begin{split}\label{(II-2)}
&(II)\leq C\frac{1}{r^3}2^{4k}\cdot U^{\frac{4}{3}}_{k-1}.
\end{split}\end{equation}
For $(III) $ (non-local pressure term), observe that
\begin{equation*}\begin{split}
\ebdiv (u P_{1,k}) + (\frac{v_k}{|u|}-1)u\cdot\nabla P_{1,k}
= \frac{v_k}{|u|}u\cdot\nabla P_{1,k}
\end{split}\end{equation*}
because everything is smooth.
Thus, by using \eqref{eq_pressure_decomposition_p1k}
and \eqref{raise_of_power}, for any $0<r<\infty$,
\begin{equation}\begin{split}\label{(III)}
(III)&\leq C\cdot\|\frac{v_k}{|u|}u\cdot\nabla P_{1,k}\|_{L^1(Q_{k-1})}
\leq C\||v_k|\cdot|\nabla P_{1,k}|\|_{L^1(Q_{k-1})}\\
&\leq\|v_k\|_{L^{2}(T_{k-1},0;L^{1}(B_{k-1}))}
\cdot\|\nabla P_{1,k}\|_{L^{2}(T_{k-1},0;L^{\infty}(B_{k-1}))}\\
&\leq\|\mathbf{1}_{v_k>0}\|_{L^{2}(T_{k-1},0;L^{2}(B_{k-1}))}
\|v_k\|_{L^{\infty}(T_{k-1},0;L^{2}(B_{k-1}))}
\cdot 2^{12k}\\
&\leq C2^{\frac{43k}{3}}U^{\frac{5}{6}}_{k-1}
U^{\frac{1}{2}}_{k-1}
\leq C2^{\frac{43k}{3}}U^{\frac{4}{3}}_{k-1}.
\end{split}\end{equation}
For $(IV) $ (local pressure term), as we did for $(III)$, observe
\begin{equation*}\begin{split}
\ebdiv (u P_{2,k}) + (\frac{v_k}{|u|}-1)u\cdot\nabla P_{2,k}
= \frac{v_k}{|u|}u\cdot\nabla P_{2,k}.
\end{split}\end{equation*}
By definition of $P_{2,k}$, we have
\begin{equation*}\begin{split}
-\Delta P_{2,k}& = \sum_{ij} \partial_i \partial_j (\psi_k w_i u_j)= \sum_{ij} \partial_i ((\partial_j \psi_k) w_i u_j
+\psi_k (\partial_jw_i) u_j)\\
&= \sum_{ij} \partial_i \Big((\partial_j \psi_k) w_i u_j(1-\frac{v_k}{|u|})
+(\partial_j \psi_k) w_i u_j\frac{v_k}{|u|}\\
&\quad\quad+\psi_k (\partial_jw_i) u_j(1-\frac{v_k}{|u|})
+\psi_k (\partial_jw_i) u_j\frac{v_k}{|u|}\Big)\\
\end{split}\end{equation*} and
\begin{equation*}\begin{split}
-\Delta (\nabla P_{2,k})&= \sum_{ij} \partial_i\nabla
\Big((\partial_j \psi_k) w_i u_j(1-\frac{v_k}{|u|})
+(\partial_j \psi_k) w_i u_j\frac{v_k}{|u|}\\
&\quad\quad+\psi_k (\partial_jw_i) u_j(1-\frac{v_k}{|u|})
+\psi_k (\partial_jw_i) u_j\frac{v_k}{|u|}\Big).
\end{split}\end{equation*}\\
\noindent Thus we can decompose $\nabla P_{2,k}$ by the Riesz transform into
\begin{equation*}\begin{split}
\nabla P_{2,k}= G_{1,k}+G_{2,k}+G_{3,k}+G_{4,k}
\end{split}\end{equation*} where
\begin{equation*}\begin{split}
&G_{1,k} =\sum_{ij} (\partial_i\nabla)(-\Delta)^{-1}\Big(
(\partial_j \psi_k) w_i u_j(1-\frac{v_k}{|u|})\Big),\\
&G_{2,k} =\sum_{ij} (\partial_i\nabla)(-\Delta)^{-1}\Big(
(\partial_j \psi_k) w_i u_j\frac{v_k}{|u|}\Big),\\
&G_{3,k} =\sum_{ij} (\partial_i\nabla)(-\Delta)^{-1}\Big(
\psi_k (\partial_jw_i) u_j(1-\frac{v_k}{|u|})\Big)\quad\mbox{ and}\\
&G_{4,k} =\sum_{ij} (\partial_i\nabla)(-\Delta)^{-1}\Big(
\psi_k (\partial_jw_i) u_j\frac{v_k}{|u|}\Big).
\end{split}\end{equation*}
\noindent From $L^p$-boundedness of the Riesz transform with the fact
$supp(\psi_k)\subset B_{k-(5/6)}\subset B_{k-1}$, we have
\begin{equation*}\begin{split}
&\|G_{2,k}\|_{L^2(T_{k-1},0;L^2(\mathbb{R}^3))}
\leq C2^{3k}\|w\|_{L^2(T_{k-1},0;L^{\infty}(B_{k-1}))}
\cdot\|v_k\|_{L^{\infty}(T_{k-1},0;L^2(B_{k-1}))},\\%\mbox{ and}\\
&\|G_{4,k}\|_{L^2(T_{k-1},0;L^2(\mathbb{R}^3))}
\leq C\cdot\|\nabla w\|_{L^2(T_{k-1},0;L^{\infty}(B_{k-1}))}
\cdot\|v_k\|_{L^{\infty}(T_{k-1},0;L^2(B_{k-1}))}.\\
\end{split}\end{equation*}
For any $1<p<\infty$,
\begin{equation*}\begin{split}
&\|G_{1,k}\|_{L^2(T_{k-1},0;L^p(\mathbb{R}^3))}
\leq C_p\cdot2^{3k}\|w\|_{L^2(T_{k-1},0;L^{\infty}(B_{k-1}))}\quad\mbox{ and}\\
&\|G_{3,k}\|_{L^2(T_{k-1},0;L^p(\mathbb{R}^3))}
\leq C_p\cdot\|\nabla w\|_{L^2(T_{k-1},0;L^{\infty}(B_{k-1}))}.
\end{split}\end{equation*}
Therefore, by using \eqref{nabla_w_large_r} and \eqref{nabla_w_small_r_with_r^3}
\begin{equation*}
\||G_{2,k}|+|G_{4,k}|\|_{L^2(T_{k-1},0;L^2(\mathbb{R}^3))}\leq
\begin{cases} &C\cdot2^{3k}\cdot U^{\frac{1}{2}}_{k-1}
,\quad \quad\mbox{ if } r\geq s_{1}\\
&C\cdot2^{3k}\cdot\frac{1}{r^3}\cdot U^{\frac{1}{2}}_{k-1}
,\quad \quad\mbox{ if } r< s_{1}
\end{cases}\end{equation*} and, for any $1<p<\infty$,
\begin{equation*}
\||G_{1,k}|+|G_{3,k}|\|_{L^2(T_{k-1},0;L^p(\mathbb{R}^3))}\leq
\begin{cases} &C_p\cdot2^{3k}
,\quad \quad\mbox{ if } r\geq s_{1}\\
&C_p\cdot2^{3k}\cdot\frac{1}{r^3}
,\quad \quad\mbox{ if } r< s_{1}.
\end{cases}\end{equation*}
Thus, by using the above estimates
and \eqref{raise_of_power}, for $r\geq s_{1}$ and $p>5$,
\begin{equation*}\begin{split}
(IV)&\leq C\cdot\|\frac{v_k}{|u|}u\cdot\nabla P_{2,k}\|_{L^1(Q_{k-1})}
\leq C\||v_k|\cdot|\nabla P_{2,k}|\|_{L^1(Q_{k-1})}\\
&\leq C\||v_k|\cdot(|G_{1,k}|+|G_{3,k}|)\|_{L^1(Q_{k-1})}
+ C\||v_k|\cdot(|G_{2,k}|+|G_{4,k}|)\|_{L^1(Q_{k-1})}\\
&\leq\|v_k\|_{L^{2}(T_{k-1},0;L^{\frac{p}{p-1}}(B_{k-1}))}
\cdot\||G_{1,k}|+|G_{3,k}|\|_{L^{2}(T_{k-1},0;L^{p}(B_{k-1}))}\\
&\quad\quad+\|v_k\|_{L^{2}(T_{k-1},0;L^{2}(B_{k-1}))}
\cdot\||G_{2,k}|+|G_{4,k}|\|_{L^{2}(T_{k-1},0;L^{2}(B_{k-1}))}\\
&\leq C\cdot C_p\cdot 2^{\frac{16k}{3}}U^{\frac{4p-5}{3p}}_{k-1}.
\end{split}\end{equation*}
\noindent In the same way, for $r< s_{1}$ and $p>5$,
\begin{equation*}\begin{split}
(IV)&\leq C\cdot C_p\cdot\frac{1}{r^3}
2^{\frac{16k}{3}}U^{\frac{4p-5}{3p}}_{k-1}.
\end{split}\end{equation*}
\noindent Thus, by taking $p=10$,
\begin{equation}\label{(IV)}
(IV)\leq \begin{cases}& C\cdot
2^{\frac{16k}{3}}U^{\frac{7}{6}}_{k-1}
,\quad \quad\mbox{ if } r\geq s_{1}\\
& C\cdot\frac{1}{r^3}
2^{\frac{16k}{3}}U^{\frac{7}{6}}_{k-1}
,\quad \quad\mbox{ if } r< s_{1}.
\end{cases}\end{equation}
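Here the exponent $\frac{7}{6}$ simply comes from the arithmetic $\frac{4p-5}{3p}\big|_{p=10}=\frac{35}{30}=\frac{7}{6}$; in fact any fixed $p>5$ yields an exponent strictly bigger than $1$, which is all that is needed below.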
Finally, combining \eqref{(I)}, \eqref{(II-1)},
\eqref{(II-2)}, \eqref{(III)} and \eqref{(IV)} gives us
\begin{equation*}
(I)+(II)+(III)+(IV)\leq \begin{cases}& C^k\cdot U^{\frac{7}{6}}_{k-1}
,\quad \quad\mbox{ if } r\geq s_{1}\\
& \frac{1}{r^3}\cdot C^k\cdot U^{\frac{7}{6}}_{k-1}
,\quad \quad\mbox{ if } r< s_{1}.
\end{cases}\end{equation*}
\end{proof}
\subsection{De Giorgi argument to get a control for small $r$}\label{proof_lem_partial_2}
The following big lemma enables us to avoid
the weak point of the previous lemma \ref{lem_partial_1}
when we handle small $r$, including the case $r=0$.\\
Recall first the definition of $s_k$ in \eqref{def_s_k}. It is the distance
between $B^c_{k-1}$ and $B_{k-\frac{5}{6}}$,
and $s_k$ is strictly decreasing to zero as $k\rightarrow\infty$.
For any $0<r<s_{1}$ we
define $k_r$ as
the integer such that $s_{k_r+1}< r\leq s_{k_r}$.
Note that
$k_r$ is integer-valued, $k_r\geq 1$,
and $k_r$ increases to $\infty$ as $r$ goes to zero.
For the case $r=0$, we simply
define $k_r=k_0=\infty$.\\
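For the reader's convenience, the rate at which $k_r$ grows can be made explicit. Since $s_k=D\cdot2^{-3k}$ with $D=(\sqrt{2}-1)2\sqrt{2}$ (see \eqref{def_s_k}), the defining inequality $s_{k_r+1}<r$ gives, for $0<r<s_1$,
\begin{equation*}
D\,2^{-3(k_r+1)}<r
\qquad\Longrightarrow\qquad
k_r>\frac{1}{3}\log_2\frac{D}{r}-1,
\end{equation*}
so $k_r$ grows like $\frac{1}{3}\log_2\frac{1}{r}$ as $r\rightarrow0$; this is the rate referred to in the remark after the lemma below.\\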
\begin{lem}\label{lem_partial_2}
There exist universal constants ${\delta}_2>0$ and $\bar{C}_2>1$ such that
if $u$ is a solution of (Problem II-r) for some $0\leq r<s_{1}$ verifying both
\begin{equation*}\begin{split}
&\| u\|_{L^{\infty}(-2,0;L^{2}(B(\frac{5}{4})))}+
\|P\|_{L^1(-2,0;L^{1}(B(1)))}+\| \nabla u\|_{L^{2}(-2,0;L^{2}(B(\frac{5}{4})))}
\leq {\delta}_2\\
\mbox{ and }
& \| \mathcal{M}(|\nabla u|)\|_{L^{2}(-4,0;L^{2}(B(2)))}\leq {\delta}_2,
\end{split}\end{equation*}
then we have
\begin{equation*}
U_k \leq (\bar{C}_2)^k U_{k-1}^{\frac{7}{6}}\quad\text{ for any integer }
k \mbox{ such that } 1\leq k\leq k_r.
\end{equation*}
\end{lem}
\begin{rem}
Note that ${\delta}_2$ and $\bar{C}_2$ are independent of $r\in[0, s_1)$.
The exponent $7/6$ is not optimal and can be made arbitrarily close to $4/3$.
\end{rem}
\begin{rem}
This lemma says that even though $r$ is very small, we can obtain the above
uniform estimate for the first few steps $k\leq k_r$. Moreover, the number $k_r$ of
these steps increases to infinity at a certain rate as $r$ goes to zero. In the subsection \ref{combine_de_giorgi}, we will see that this rate is enough to obtain a uniform estimate
for any small $r$ once we combine the two lemmas \ref{lem_partial_1}
and \ref{lem_partial_2}.
\end{rem}
\begin{proof}
In this proof, we can borrow any inequalities
in the proof of the previous lemma \ref{lem_partial_1}
except those which depend on $r$ and blow up as $r$ goes to zero.\\% (having $\frac{1}{r^3}$ factor).\\
Let $0\leq r<s_{1}$ and take any integer $k$ such that $1\leq k\leq k_r$.
As with ${\delta}_1$ in the previous lemma \ref{lem_partial_1}, we assume ${\delta}_2$ to be so small that
\begin{equation*}\begin{split}
{\delta}_2<1,\quad 10{\Lambda_1} {\Lambda_2}{\delta}_2\leq\frac{1}{2}.
\end{split}\end{equation*}
We begin this proof by decomposing $w^\prime$ by
\begin{equation*}\begin{split}
w^\prime
=u*\phi_r
&=\Big(u(1-\frac{v_k}{|u|})\Big)*\phi_r
+ \Big(u\frac{v_k}{|u|}\Big)*\phi_r =w^{\prime,1} + w^{\prime,2}.
\end{split}\end{equation*}
Thus the advection velocity $w$ has a new decomposition: $w=w^{\prime} -w^{\prime\prime}=
(w^{\prime,1} +w^{\prime,2}) -w^{\prime\prime}
=(w^{\prime,1} -w^{\prime\prime}) +w^{\prime,2} $.
We will verify that $w^{\prime,1} -w^{\prime\prime}$ is bounded and
$w^{\prime,2}$ can be controlled locally.
First, for $w^{\prime,1}$,
\begin{equation}\begin{split}\label{w_1_prime}
|w^{\prime,1}(t,x)|=\Big|\Big(\Big(u(1-\frac{v_k}{|u|})\Big)*\phi_r\Big)(t,x)\Big|
&\leq\|u(1-\frac{v_k}{|u|})(t,\cdot)\|_{L^{\infty}(\mathbb{R}^3)} \leq 1
\end{split}\end{equation} for any $t\geq -4 $ and any $x\in\mathbb{R}^3$.
From \eqref{w_prime_prime_small_r},
we still have
\begin{equation}\begin{split}\label{w_prime_prime_small_r_again}
\| w^{\prime\prime}\|_{L^\infty(-2,0;L^{\infty}(B(2)))}\leq {C}{\delta}_2\leq C.
\end{split}\end{equation}
\noindent Combining above two results,
\begin{equation}\begin{split}\label{w_1_prime_w_prime_prime_small_r}
\||w^{\prime,1}|+|w^{\prime\prime}|\|_{L^\infty(-2,0;L^{\infty}(B(2)))}\leq C.
\end{split}\end{equation}
\noindent For $w^{\prime,2}$, we observe that any $L^p$
norm of $w^{\prime,2}=\Big(u\frac{v_k}{|u|}\Big)*\phi_r$
in $B_{k-\frac{5}{6}}$ is less than or equal to that of $v_k$ in $B_{k-1}$ because
$r\leq s_{k_r}\leq s_{k}$ and $s_k$
is the distance between $B^c_{k-1}$ and $B_{k-\frac{5}{6}}$ (see
\eqref{young}). Thus we have, for any
$1\leq p\leq\infty$,
\begin{equation}\begin{split}\label{w_2_prime_small_r}
\|w^{\prime,2}&\|_{L^{p}(T_{k-1},0;L^{p}(B_{k-\frac{5}{6}}))}
= \| \Big(u\frac{v_k}{|u|}\Big)*\phi_r\|_{L^{p}(T_{k-1},0;L^{p}(B_{k-\frac{5}{6}}))}\\
&\leq \| |v_k|*\phi_r\|_{L^{p}(T_{k-1},0;L^{p}(B_{k-\frac{5}{6}}))}
\leq\| v_k\|_{L^{p}(Q_{k-1})}.
\end{split}\end{equation}
So, by using \eqref{raise_of_power}, we have
\begin{equation}\begin{split}\label{raise_of_power_ w_prime_2}
&\| w^{\prime,2}\|_{L^{p}(T_{k-1},0;L^{p}(B_{k-\frac{5}{6}}))}
\leq C2^{\frac{7k}{3}}U^{\frac{5}{3p}}_{k-1},
\quad\mbox{ for any } 1\leq p\leq\frac{10}{3}.
\end{split}\end{equation}
\begin{rem}
The above computations say that, for any small $r$,
the advection velocity $w$ can be decomposed into
a bounded part $(w^{\prime,1} -w^{\prime\prime})$ and another part,
$w^{\prime,2}$, which has a good contribution
to the power of $U_{k-1}$.
\end{rem}
Recall that the transport term estimate \eqref{w_iu_j} is valid for any
$0<r<\infty$. Moreover,
the argument around \eqref{w_u} shows that \eqref{w_iu_j} holds even for the case $r=0$.
Thus, for any $r\in[0,s_1)$, we have the same pressure estimates
\eqref{eq_pressure_decomposition_p1k},
\eqref{eq_pressure_decomposition_p2k} and
\eqref{eq_pressure_decomposition_p3}. Thus
we can follow the proof of the previous lemma \ref{lem_partial_1} up to
\eqref{1234_decompo_1} without any modification.
It remains to control $(I)$--$(IV)$. \\
For $(I)$, \eqref{(I)} holds here too because \eqref{(I)} is independent of $r$.\\
For $(II)$, by using \eqref{w_1_prime_w_prime_prime_small_r}
and \eqref{raise_of_power_ w_prime_2} with
the fact $supp(\eta_k)\subset B_{k-\frac{1}{3}}\subset B_{k-\frac{5}{6}}$, we have
\begin{equation}\begin{split}\label{second_(II)}
&(II)=\||\nabla\eta_k|\cdot|w|\cdot|v_k|^2\|_{L^1(Q_{k-1})}\\
&\leq C2^{3k}\Big(\|(|w^{\prime,1}|+|w^{\prime\prime}|)\cdot|v_k|^2\|_{L^1(Q_{k-1})}
+\||w^{\prime,2}|\cdot|v_k|^2\|_{L^{1}(T_{k-1},0;L^1(B_{k-\frac{5}{6}}))}\Big)\\
&\leq C2^{3k}\|v_k\|^2_{L^2(Q_{k-1})}
+C2^{3k}\|w^{\prime,2}\|_{L^{\frac{10}{3}}(T_{k-1},0;L^\frac{10}{3}(B_{k-\frac{5}{6}}))}\cdot
\||v_k|^2\|_{L^\frac{10}{7}(Q_{k-\frac{5}{6}})}\\
&\leq C2^{\frac{23k}{3}}U^{\frac{5}{3}}_{k-1}
+C2^{10k}U^{\frac{5}{3}}_{k-1}\leq C2^{10k}U^{\frac{5}{3}}_{k-1}.
\end{split}\end{equation}
For $(III)$(non-local pressure term),
we have \eqref{(III)} here too since \eqref{(III)} is independent of $r$.\\
For $(IV)$(local pressure term),
by definition of $P_{2,k}$ and decomposition of $w$,
\begin{equation*}\begin{split}
-\Delta P_{2,k}
&= \sum_{ij} \partial_i \partial_j\Big( \psi_k w_i u_j(1-\frac{v_k}{|u|})
+ \psi_k w_i u_j\frac{v_k}{|u|}\Big)\\
&= \sum_{ij} \partial_i \partial_j\Big( \psi_k (w^{\prime,1}_i-w^{\prime\prime}_i) u_j(1-\frac{v_k}{|u|})+\psi_k w^{\prime,2}_i u_j(1-\frac{v_k}{|u|})\\
&\quad\quad+ \psi_k (w^{\prime,1}_i-
w^{\prime\prime}_i) u_j\frac{v_k}{|u|}+ \psi_k
w^{\prime,2}_i u_j\frac{v_k}{|u|}\Big).
\end{split}\end{equation*}
\noindent Thus we can decompose $ P_{2,k}$ by
\begin{equation*}\begin{split}
P_{2,k}= P_{2,k,1}+P_{2,k,2}+P_{2,k,3}+P_{2,k,4}
\end{split}\end{equation*} where
\begin{equation*}\begin{split}
&P_{2,k,1} =\sum_{ij} (\partial_i\partial_j)(-\Delta)^{-1}\Big(
\psi_k (w^{\prime,1}_i-w^{\prime\prime}_i) u_j(1-\frac{v_k}{|u|})\Big),\\
&P_{2,k,2} =\sum_{ij} (\partial_i\partial_j)(-\Delta)^{-1}\Big(
\psi_k w^{\prime,2}_i u_j(1-\frac{v_k}{|u|})\Big),\\
&P_{2,k,3} =\sum_{ij} (\partial_i\partial_j)(-\Delta)^{-1}\Big(
\psi_k (w^{\prime,1}_i-w^{\prime\prime}_i) u_j\frac{v_k}{|u|}\Big)\quad\mbox{ and}\\
&P_{2,k,4} =\sum_{ij} (\partial_i\partial_j)(-\Delta)^{-1}\Big(
\psi_k w^{\prime,2}_i u_j\frac{v_k}{|u|}\Big).
\end{split}\end{equation*}
By using $\Big|u\Big(1-\frac{v_k}{|u|}\Big)\Big|\leq 1$
and the fact
$\psi_k$ is supported in $B_{k-\frac{5}{6}}$ with
\eqref{w_1_prime_w_prime_prime_small_r},
\begin{equation}\begin{split}\label{second_P_{2,k,1}}
\|P_{2,k,1}\|_{L^p(T_{k-1},0;L^p(\mathbb{R}^3))}&\leq C_p,\quad\mbox{ for } 1<p<\infty\\
\end{split}\end{equation} and,
with
\eqref{raise_of_power_ w_prime_2},
\begin{equation}\begin{split}\label{second_P_{2,k,2}}
\|P_{2,k,2}\|_{L^p(T_{k-1},0;L^p(\mathbb{R}^3))}&\leq
C_p\||\psi_k|\cdot |w^{\prime,2}|\|_{L^p(T_{k-1},0;L^{p}(\mathbb{R}^3))}\\
&\leq CC_p2^{\frac{7k}{3}}U^{\frac{5}{3p}}_{k-1}\quad\mbox{ for } 1\leq p\leq\frac{10}{3}.\\
\end{split}\end{equation}
Observe that, for $i=1,2$, writing $G_i=P_{2,k,i}$,
\begin{equation}\begin{split}\label{second_P_{2,k,1}_P_{2,k,2}}
&\ebdiv \Big(u G_i\Big) + \Big(\frac{v_k}{|u|}-1\Big)u\cdot\nabla G_i
=\ebdiv \Big(v_k\frac{u}{|u|}G_i\Big) -
G_i\ebdiv\Big(\frac{uv_k}{|u|}\Big).
\end{split}\end{equation}
For $P_{2,k,1}$, by using \eqref{d_k}, \eqref{raise_of_power}, \eqref{raise_of_power3},
\eqref{second_P_{2,k,1}_P_{2,k,2}} and \eqref{second_P_{2,k,1}} with $p=10$
\begin{equation}\begin{split}\label{second_P_{2,k,1}_TOTAL}
&\int_{T_{k-1}}^{0}\Big|\int_{\mathbb{R}^3}\eta_k(x)
\Big(\ebdiv (u P_{2,k,1}) + (\frac{v_k}{|u|}-1)u\cdot\nabla P_{2,k,1}\Big)(s,x)dx\Big|ds\\
&\leq C2^{3k}\|v_k\cdot|P_{2,k,1}|\|_{L^1(Q_{k-1})} +
3\|d_k\cdot|P_{2,k,1}|\|_{L^1(Q_{k-1})}\\
&\leq C2^{3k}\|v_k\|_{L^\frac{10}{9}(Q_{k-1})}
\cdot\|P_{2,k,1}\|_{L^{10}(Q_{k-1})} +
3\|d_k\|_{L^\frac{10}{9}(Q_{k-1})}\cdot\|P_{2,k,1}\|_{L^{10}(Q_{k-1})}\\
&\leq C2^{\frac{16k}{3}}U^{\frac{3}{2}}_{k-1}+
C2^{\frac{5k}{3}}U^{\frac{7}{6}}_{k-1}
\leq C2^{\frac{16k}{3}}U^{\frac{7}{6}}_{k-1}.
\end{split}\end{equation}
Likewise, for $P_{2,k,2}$, by using \eqref{second_P_{2,k,2}} instead of \eqref{second_P_{2,k,1}}
\begin{equation}\begin{split}\label{second_P_{2,k,2}_TOTAL}
&\int_{T_{k-1}}^{0}\Big|\int_{\mathbb{R}^3}\eta_k(x)
\Big(\ebdiv (u P_{2,k,2}) + (\frac{v_k}{|u|}-1)u\cdot\nabla P_{2,k,2}\Big)(s,x)dx\Big|ds\\
&\leq C2^{\frac{23k}{3}}U^{\frac{5}{3}}_{k-1}+
C2^{4k}U^{\frac{4}{3}}_{k-1}
\leq C2^{\frac{23k}{3}}U^{\frac{4}{3}}_{k-1}.
\end{split}\end{equation}
\noindent From definitions of
$P_{2,k,3}$ and $P_{2,k,4}$
with $\ebdiv(w)=0$, we have
\begin{equation*}\begin{split}
-\Delta\nabla (P_{2,k,3}+P_{2,k,4})
&= \sum_{ij} \partial_i \partial_j\nabla\Big( \psi_k w_i u_j\frac{v_k}{|u|}\Big)\\
&= \sum_{ij} \nabla \partial_j
\Big( (\partial_i\psi_k) w_i
u_j\frac{v_k}{|u|}+
\psi_k w_i
\partial_i(u_j\frac{v_k}{|u|})\Big).
\end{split}\end{equation*}
\noindent Then we use the fact
$w=(w^{\prime,1} -w^{\prime\prime}) +w^{\prime,2} $
so that
we can decompose
\begin{equation*}
\nabla(P_{2,k,3}+P_{2,k,4})=H_{1,k}+H_{2,k}+H_{3,k}+H_{4,k}
\end{equation*}
where
\begin{equation*}\begin{split}
H_{1,k}&= \sum_{ij}( \nabla \partial_j)(-\Delta)^{-1}
\Big((\partial_i\psi_k) (w^{\prime,1}_i-w^{\prime\prime}_i)
u_j\frac{v_k}{|u|} \Big),\\
H_{2,k}&= \sum_{ij}( \nabla \partial_j)(-\Delta)^{-1}
\Big((\partial_i\psi_k) w^{\prime,2}_i
u_j\frac{v_k}{|u|} \Big),\\
H_{3,k}&= \sum_{ij}( \nabla \partial_j)(-\Delta)^{-1}
\Big(\psi_k (w^{\prime,1}_i-w^{\prime\prime}_i)
\partial_i(u_j\frac{v_k}{|u|}) \Big)\quad\mbox{ and}\\
H_{4,k}&= \sum_{ij}( \nabla \partial_j)(-\Delta)^{-1}
\Big(\psi_k w^{\prime,2}_i
\partial_i(u_j\frac{v_k}{|u|}) \Big).
\end{split}\end{equation*}
By using $|u|\leq 1+v_k$,
\begin{equation}\begin{split}\label{second_P_{2,k,3}_P_{2,k,4}_TOTAL}
&\int_{T_{k-1}}^{0}\Big|\int_{\mathbb{R}^3}\eta_k(x)
\Big(\ebdiv (u (P_{2,k,3}+P_{2,k,4})) + (\frac{v_k}{|u|}-1)u\cdot
\nabla (P_{2,k,3}+P_{2,k,4})\Big)
dx\Big|ds\\
&\leq C^{3k}\int_{Q_{k-1}}\Big((1+v_k
)\cdot|(P_{2,k,3}+P_{2,k,4})(s,x)|+|\nabla (P_{2,k,3}+P_{2,k,4})
|\Big)dxds\\
&\leq C^{3k}\Big(\|P_{2,k,3}\|_{L^1(Q_{k-1})}+\|v_k\cdot|P_{2,k,3}|\|_{L^1(Q_{k-1})}\\
&\quad\quad+\|P_{2,k,4}\|_{L^1(Q_{k-1})}
+\|v_k\cdot|P_{2,k,4}|\|_{L^1(Q_{k-1})}\\
&\quad\quad+\|H_{1,k}\|_{L^1(Q_{k-1})}+\|H_{2,k}\|_{L^1(Q_{k-1})}+\|H_{3,k}\|_{L^1(Q_{k-1})}
+\|H_{4,k}\|_{L^1(Q_{k-1})}\Big).\\
\end{split}\end{equation}
From \eqref{raise_of_power} and \eqref{w_1_prime_w_prime_prime_small_r} with
the Riesz transform,
\begin{equation}\begin{split}\label{second_P_{2,k,3}_TOTAL}
\|P_{2,k,3}\|_{L^1(Q_{k-1})}&
\leq C\|P_{2,k,3}\|_{L^\frac{10}{9}(T_{k-1},0;L^\frac{10}{9}(\mathbb{R}^3))}
\leq C\|v_k\|_{L^\frac{10}{9}(Q_{k-1})}
\leq C2^{\frac{7k}{3}}U^{\frac{3}{2}}_{k-1}.
\end{split}\end{equation}
\noindent Likewise
\begin{equation}\begin{split}\label{second_H_{1,k}_TOTAL}
\|H_{1,k}\|_{L^1(Q_{k-1})}&
\leq C2^{\frac{16k}{3}}U^{\frac{3}{2}}_{k-1}
\end{split}\end{equation} and
\begin{equation}\begin{split}\label{second_v_k_P_{2,k,3}_TOTAL}
\|v_k\cdot|P_{2,k,3}|\|_{L^1(Q_{k-1})}
&\leq \|v_k\|_{L^2(Q_{k-1})}
\|P_{2,k,3}\|_{L^2(Q_{k-1})}\\
&\leq C2^{\frac{7k}{3}}U^{\frac{5}{6}}_{k-1}\cdot
C2^{\frac{7k}{3}}U^{\frac{5}{6}}_{k-1}
\leq C2^{\frac{14k}{3}}U^{\frac{5}{3}}_{k-1}.\\
\end{split}\end{equation}
Using \eqref{raise_of_power}, \eqref{raise_of_power_ w_prime_2},
\eqref{d_k} and \eqref{raise_of_power3}, we have
\begin{equation}\begin{split}\label{second_P_{2,k,4}_TOTAL}
\|P_{2,k,4}\|_{L^1(Q_{k-1})}
&\leq C2^{\frac{14k}{3}}U^{\frac{3}{2}}_{k-1},
\end{split}\end{equation}
\begin{equation}\begin{split}\label{second_H_{2,k}_TOTAL}
\|H_{2,k}\|_{L^1(Q_{k-1})}
&\leq C2^{\frac{23k}{3}}U^{\frac{3}{2}}_{k-1},
\end{split}\end{equation}
\begin{equation}\begin{split}\label{second_v_k_P_{2,k,4}_TOTAL}
\|v_k\cdot|P_{2,k,4}|\|_{L^1(Q_{k-1})}
&\leq C2^{\frac{21k}{3}}U^{\frac{5}{3}}_{k-1},
\end{split}\end{equation}
\begin{equation}\begin{split}\label{second_H_{3,k}_TOTAL}
\|H_{3,k}\|_{L^1(Q_{k-1})}
&\leq C2^{\frac{5k}{3}}U^{\frac{7}{6}}_{k-1}
\end{split}\end{equation} and
\begin{equation}\begin{split}\label{second_H_{4,k}_TOTAL}
\|H_{4,k}\|_{L^1(Q_{k-1})}
&\leq C2^{4k}U^{\frac{7}{6}}_{k-1}.
\end{split}\end{equation}
Combining \eqref{second_P_{2,k,1}_TOTAL},
\eqref{second_P_{2,k,2}_TOTAL} and
\eqref{second_P_{2,k,3}_P_{2,k,4}_TOTAL} together with
\eqref{second_P_{2,k,3}_TOTAL}, $\cdots$,
\eqref{second_H_{4,k}_TOTAL}, we obtain
\begin{equation}\begin{split}\label{second_(IV)}
&(IV)\leq C 2^{\frac{23k}{3}} U^{\frac{7}{6}}_{k-1}.
\end{split}\end{equation}
Finally we combine \eqref{second_(II)} and \eqref{second_(IV)} together with \eqref{(I)} and \eqref{(III)} in the previous lemma
in order to finish the proof of this lemma \ref{lem_partial_2}.
\end{proof}
\subsection{Combining the two De Giorgi arguments}\label{combine_de_giorgi}
First we present one small lemma. Then the actual proof
of the proposition \ref{partial_problem_II_r} will follow.
The following small lemma says that certain non-linear estimates
give zero limit if the initial term is sufficiently small.
This fact is one of key arguments of De Giorgi method.
\begin{lem}\label{lem_recursive}
Let $C>1$ and $\beta>1$. Then there exists a constant $C_0^{*}$
such that for every sequence verifying both $ 0 \leq W_0 < C^{*}_0$
and \begin{equation*}
0\leq W_{k} \leq C^k \cdot W_{k-1}^{\beta} \quad \mbox{ for any } k\geq 1,
\end{equation*} we have $\lim_{k \to\infty} W_k = 0$.
\end{lem}
\begin{proof}
The proof is quite standard; see the lemma 1 in \cite{vas:partial}.
\end{proof}
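Although the proof is standard, we sketch one possible choice of constants for the reader's convenience (this is only a sketch, and the precise constant is not needed elsewhere). Set
\begin{equation*}
a=\frac{2}{\beta-1},\qquad b=\frac{\beta+1}{(\beta-1)^2},\qquad C_0^{*}=C^{-b}.
\end{equation*}
We claim $W_k\leq C^{-ak-b}$ for all $k\geq0$. The case $k=0$ is the hypothesis $W_0<C^{*}_0$. For the induction step,
\begin{equation*}
W_k\leq C^{k}W_{k-1}^{\beta}\leq C^{\,k-\beta(a(k-1)+b)}
=C^{-ak-b}\cdot C^{\,k(1-a(\beta-1))+a\beta-b(\beta-1)}
=C^{-ak-b}\cdot C^{\,1-k}\leq C^{-ak-b}
\end{equation*}
for $k\geq1$, since $C>1$. As $a>0$ and $C>1$, it follows that $W_k\rightarrow0$.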
Finally we are ready to prove the proposition \ref{partial_problem_II_r}.
\begin{proof}[Proof of proposition \ref{partial_problem_II_r}]
Suppose that $u$ is a solution of (Problem II-r) for some $0\leq r<\infty$ verifying
\begin{equation*}\begin{split}
&\| u\|_{L^{\infty}(-2,0;L^{2}(B(\frac{5}{4})))}+
\|P\|_{L^1(-2,0;L^{1}(B(1)))}+\| \nabla u\|_{L^{2}(-2,0;L^{2}(B(\frac{5}{4})))}
\leq {\delta}\\ \mbox{ and }
&\| \mathcal{M}(|\nabla u|)\|_{L^{2}(-4,0;L^{2}(B(2)))}\leq {\delta}\\
\end{split}\end{equation*} where $\delta$ will be chosen within the proof.\\
From two big lemmas \ref{lem_partial_1} and \ref{lem_partial_2} by assuming
$\delta\leq\min({\delta}_1,\delta_2)$, we have
\begin{equation}\label{before_combine}
U_k \leq
\begin{cases}& (\bar{C}_1)^k U_{k-1}^{\frac{7}{6}} ,
\quad \mbox{ for any } k\geq 1 \quad\mbox{ if } r\geq s_{1}.\\
&\frac{1}{r^3}\cdot (\bar{C}_1)^k U_{k-1}^{\frac{7}{6}}
,\quad \mbox{ for any } k\geq 1 \quad\mbox{ if } 0<r< s_{1}.\\%\quad\mbox{ and}\\
&(\bar{C}_2)^k U_{k-1}^{\frac{7}{6}}
\quad\text{ for } k = 1,2,\cdots,k_r \quad\mbox{ if } 0\leq r< s_{1}.
\end{cases}
\end{equation}
Note that $k_r=\infty$ if $r=0$. Thus we can combine
the case $r=0$ with the case $r\geq s_1$ into one estimate:
\begin{equation*}
U_k \leq
(\bar{C}_3)^k U_{k-1}^{\frac{7}{6}}
\quad \mbox{ for any } k\geq 1 \quad\mbox{ if either } r\geq s_1 \mbox{ or } r=0.
\end{equation*} where we define $\bar{C}_3=\max(\bar{C}_1,\bar{C}_2)$.\\
We consider now the case $0<r<s_1$.
Recall that
$s_k=D\cdot 2^{-3k}$ where $D=\mathcal{B}ig((\sqrt{2}-1)2\sqrt{2}\mathcal{B}ig)>1$
and $s_{k_r+1}< r\leq s_{k_r}$ for any $r\in(0,s_1)$. It gives us
$r\geq D\cdot2^{-3(k_r+1)}$.
Thus if $k\geq k_r$ and if $0<r< s_1$, then the second line in \eqref{before_combine} becomes
\begin{equation}\begin{split}
U_k
&\leq\frac{1}{r^3}\cdot (\bar{C}_1)^k U_{k-1}^{\frac{7}{6}}
\leq\frac{2^{9{(k_r+1)}}}{D^3}\cdot (\bar{C}_1)^k U_{k-1}^{\frac{7}{6}}\\
&\leq{2^{9{(k+1)}}}\cdot (\bar{C}_1)^k U_{k-1}^{\frac{7}{6}}
\leq({2^{18}}\cdot\bar{C}_1)^k U_{k-1}^{\frac{7}{6}}.
\end{split}\end{equation}
So we have for any $r\in(0,s_1)$,
\begin{equation*}
U_k \leq
\begin{cases}
& ({2^{18}}\cdot\bar{C}_1)^k U_{k-1}^{\frac{7}{6}}
,\quad \mbox{ for any } k\geq k_r. \\%\quad\mbox{ if } r< s_{1}\quad\mbox{ and}\\
&(\bar{C}_2)^k U_{k-1}^{\frac{7}{6}}\quad\text{ for }
k = 1,2,\cdots,k_r.
\end{cases}
\end{equation*}
Define $\bar{C}=\max({2^{18}}\cdot\bar{C}_1,\bar{C}_2,\bar{C}_3)
=\max({2^{18}}\cdot\bar{C}_1,\bar{C}_2)$. Then we can combine
all three cases $r=0$, $0<r<s_1$ and $s_1\leq r<\infty$ into one uniform estimate:
\begin{equation*}
U_k \leq
(\bar{C})^k U_{k-1}^{\frac{7}{6}}
\quad \mbox{ for any } k\geq 1 \quad\mbox{ and for any } 0\leq r<\infty.
\end{equation*}
Finally, by using the recursive lemma
\ref{lem_recursive}, we obtain $C^{*}_0$ such that $U_k\rightarrow 0$ as
$k\rightarrow \infty$ whenever $ U_0 < C^{*}_0$. This condition $ U_0 < C^{*}_0$
is guaranteed once
we take $\delta$ so small that
${\delta}\leq\sqrt{\frac{C^{*}_0}{2}}$
because
\begin{equation*}\begin{split}
U_0
&\leq \mathcal{B}ig(\| u\|_{L^{\infty}(-2,0;L^{2}(B(\frac{5}{4})))}+
\|P\|_{L^1(-2,0;L^{1}(B(1)))}+
\| \nabla u\|_{L^{2}(-2,0;L^{2}(B(\frac{5}{4})))}\big)^2.
\end{split}\end{equation*}
Thus we fix ${\delta}=\min(\sqrt{\frac{C^{*}_0}{2}},\delta_1,\delta_2)$
which does not depend on any $r\in [0,\infty)$.
Observe that for any $k\geq 1$,
\begin{equation*}
\sup_{-\frac{3}{2}\leq t\leq 0}\int_{B(\frac{1}{2})}(|u(t,x)|-1)^2_{+}dx \leq U_k
\end{equation*} from $E_k\leq 1 $ and
$ (-\frac{3}{2},0)\times B(\frac{1}{2})\subset Q_k$. Since
$U_k\rightarrow 0$, the conclusion of the proposition \ref{partial_problem_II_r} follows.
\end{proof}
\section{Proof of the second local study proposition \ref{local_study_thm}}\label{proof_local study}
First we present technical lemmas, whose proofs will be given in the appendix.
In the subsection \ref{new_step1_2_together}, it will be explained how to apply the previous local study
proposition \ref{partial_problem_II_r} in order to get an $L^\infty$-bound of
the velocity $u$. Then, the subsections \ref{new_step3} and
\ref{new_step4} will give us $L^\infty$-bounds for classical derivatives $\nabla^d u$
and for fractional derivatives $(-\Delta)^{\alpha/2}\nabla^d u$, respectively.
\subsection{Some lemmas}
The following lemma is an estimate for higher derivatives of the pressure;
a similar lemma can be found
in \cite{vas:higher}. However, they differ in the sense that
here we only require the $(n-1)$th order norm of $v_1$ to control $n$th
derivatives of the pressure (see \eqref{ineq_parabolic_pressure}), while \cite{vas:higher} requires one more order,
i.e. the $n$th order. This improvement follows from the divergence structure, and it
will be useful for the bootstrap argument in the
subsection \ref{new_step3}
when $r$ is large (we will see \eqref{step3_second_claim}).
\begin{lem}\label{higher_pressure} Suppose that we have $v_1,v_2\in
(C^\infty(B(1)))^3$ with $\ebdiv v_1=\ebdiv v_2=0$ and $P \in
C^\infty(B(1))$ which satisfy
\begin{equation*}\begin{split}
-\Delta P &=\ebdiv\ebdiv(v_2\otimes v_1)
\end{split}\end{equation*}
on $B(1)\subset \mathbb{R}^3$.\\% for some $0<a<\infty$.\\
\noindent Then, for any $n\geq 2$, $0<b<a<1$ and $1<p<\infty$, we have the
following two estimates:
\begin{equation}\begin{split}\label{ineq_parabolic_pressure}
\|\nabla^n P\|_{L^{p}(B(b))}
&\leq C_{(a,b,n,p)}\mathcal{B}ig(
\| v_2 \|_{W^{n-1,p_2}(B(a))}\cdot
\| v_1 \|_{W^{n-1,p_1}(B(a))}\\
&\quad\quad\quad\quad\quad\quad+\| P \|_{L^{1}(B(a))}\mathcal{B}ig)
\end{split}\end{equation} where
$\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}$, and
\begin{equation}\begin{split}\label{ineq_parabolic_pressure2}
\|\nabla^n P\|_{L^{\infty}(B(b))}
&\leq C_{(a,b,n)}\mathcal{B}ig(
\| v_2 \|_{W^{n,\infty}(B(a))}\cdot
\| v_1 \|_{W^{n,\infty}(B(a))}\\
&\quad\quad\quad\quad\quad\quad+\| P \|_{L^{1}(B(a))}\mathcal{B}ig)
\end{split}\end{equation}
\noindent Note that these constants are independent of $v_1,v_2$ and $P$. Also,
$\infty$ is allowed for $p_1$ and $p_2$; e.g., if $p_1=\infty$, then $p_2=p$.
\end{lem}
\begin{proof}
See the appendix.
\end{proof}
The following is a local result based on parabolic regularization.
It will be used in the subsection \ref{new_step3}
to prove
\eqref{step3_first_claim} and \eqref{step3_second_claim}.
\begin{lem}\label{lem_parabolic_v_1_v_2_pressure}
Suppose that we have a
smooth solution $(v_1,v_2,P)$ on $Q(1)=(-1,0)\times B(1)$ of
\begin{equation*}\begin{split}
&\partial_t(v_1)+\ebdiv(v_2\otimes v_1)+ \nabla P -\Delta v_1=0\\
&\ebdiv (v_1)=0 \mbox{ and } \ebdiv (v_2)=0.
\end{split}\end{equation*}
Then, for any $n\geq 1$, $0<b<a<1$, $1< p_1<\infty$ and $1<p_2<\infty$, we have
\begin{equation}\begin{split} \label{ineq_parabolic_v_1_v_2}
&\|\nabla^n v_1 \|_{L^{p_1}(-({b})^2,0 ;L^{p_2}(B({b})))}
\leq C_{(a,b,n,p_1,p_2)}
\mathcal{B}ig(\|v_2\otimes v_1 \|_{L^{p_1}(-a^2,0;W^{n-1,p_2}(B(a)))}\\&
\quad\quad\quad\quad\quad\quad\quad\quad+
\| v_1 \|_{L^{p_1}(-a^2,0;W^{n-1,p_2}(B(a)))}+
\| P \|_{L^{1}(-a^2,0;L^{1}(B(a)))}\mathcal{B}ig)\\
\end{split}\end{equation} where $v_2\otimes v_1$ is the matrix whose
$(i,j)$ component is the product of the $j$-th component $v_{2,j}$ of $v_2$
and the $i$-th component $v_{1,i}$ of $v_1$, and $\mathcal{B}ig(\ebdiv(v_2\otimes v_1)\mathcal{B}ig)_i
=\sum_j\partial_j(v_{2,j} v_{1,i})$.\\% =\sum_jv_{2,j}\partial_j v_{1,i}$.\\
Note that these constants are independent of $v_1,v_2$ and $P$.
\end{lem}
The proof of this lemma \ref{lem_parabolic_v_1_v_2_pressure} is omitted because it
is based on a standard parabolic regularization result (e.g. Solonnikov \cite{Solo}),
and the precise argument
is essentially contained in \cite{vas:higher},
except that
here we consider
\begin{equation*}\begin{split}
&(v_1)_t+\ebdiv(v_2\otimes v_1)+ \nabla P -\Delta v_1=0\\
\end{split}\end{equation*} while \cite{vas:higher} covered
\begin{equation*}\begin{split}
&(u)_t+\ebdiv(u\otimes u)+ \nabla P -\Delta u=0.\\
\end{split}\end{equation*}
The following lemma will be
used in the subsection \ref{new_step3},
especially when we prove \eqref{step3_second_claim} for large $r$.
\begin{lem}\label{lem_a_half_upgrading_large_r}
Suppose that we have a
smooth solution $(v_1,v_2,P)$ on $Q(1)=(-1,0)\times B(1)$ of
\begin{equation*}\begin{split}
&\partial_t(v_1)+(v_2\cdot\nabla)(v_1)+ \nabla P -\Delta v_1=0\\
&\ebdiv (v_1)=0 \mbox{ and } \ebdiv (v_2)=0.
\end{split}\end{equation*}
Then, for any $n\geq 0$ and $0<b<a<1$, we have
\begin{equation*}\begin{split}
\|\nabla^n {v_1} &\|_{L^{\infty}(-{({b})}^2,0 ;L^{1}(B{({b})}))}
\leq\\ & C_{(a,b,n)}\mathcal{B}ig[
\mathcal{B}ig(\| v_2\|_{L^2(-{{a}^2},0;W^{n,\infty}(B({a})))} +1\mathcal{B}ig)
\cdot
\| {v_1} \|_{L^{2}(-{a}^2,0;W^{n,{1}}(B{(a)}))}\\
&\quad\quad\quad\quad\quad\quad\quad\quad+ \|\nabla^{n+1}P \|_{L^{1}(-{a}^2,0;L^{1}(B{(a)}))} \mathcal{B}ig]
\end{split}\end{equation*}
and, for any ${p}\geq 1$,
\begin{equation*}\begin{split}
&\|\nabla^{n} {v_1} \|^{p+\frac{1}{2}}_{L^{\infty}(-{({b})}^2,0 ;L^{p+\frac{1}{2}}(B{({b})}))}\leq\\
& \quad\quad C_{(a,b,n,p)}\mathcal{B}ig[
\mathcal{B}ig(\| v_2\|_{L^2(-{{a}^2},0;W^{n,\infty}(B({a})))} +1\mathcal{B}ig)
\cdot
\|{v_1} \|_{L^{2}(-{({a})}^2,0;W^{n,2p}(B{({a})}))}\\
&\quad\quad\quad\quad\quad\quad+ \|\nabla^{n+1} P\|_{L^{1}(-{({a})}^2,0;L^{2p}(B{({a})}))}
\mathcal{B}ig]\cdot
\| {v_1} \|^{p-\frac{1}{2}}_{L^{\infty}(-{({a})}^2,0;W^{n,p}(B{(a)}))}.
\end{split}\end{equation*}
Note that such constants are independent of any $v_1,v_2$ and $P$.
\end{lem}
\begin{proof}
See the appendix.
\end{proof}
The following non-local version of a Sobolev-type lemma will be useful
when we handle fractional derivatives via Maximal
functions. We will see
in the subsection \ref{new_step4}
that the power $({1+\frac{3}{p}})$ of $M$
on the right-hand side of the following estimate is crucial
in order to obtain the required estimate \eqref{step4_third_claim}.
\begin{lem}\label{lem_Maximal 2.5 or 4}
Let $M_0>0$ and $1\leq p <\infty$. Then there exists $C=C(M_0,p)$ with
the following property: \\
For any $M\geq M_0$ and
for any $f\in C^1(\mathbb{R}^3)$
such that $ \int_{\mathbb{R}^3}\phi(x)f(x)dx=0$, we have
\begin{equation*}
\|f\|_{L^p(B(M))}\leq CM^{1+\frac{3}{p}}\cdot\mathcal{B}ig(
\|\mathcal{M}(|\nabla f|^p)\|^{1/p}_{L^{1}(B(1))}
+\|\nabla f\|_{L^1(B(2))}\mathcal{B}ig).
\end{equation*}
\end{lem}
\begin{proof}
See the appendix.
\end{proof}
With the above lemmas, we are ready to prove the proposition \ref{local_study_thm}.
\begin{proof} [Proof of proposition \ref{local_study_thm}]
We divide this proof into three stages.\\
Stage 1 in subsection \ref{new_step1_2_together}: First, we will obtain an $L_t^{\infty}L_x^2$-local bound for $u$
by using the mean-zero property of $u$ and $w$.
Then, an $L^\infty$-local bound of $u$
follows thanks to the first local study
proposition \ref{partial_problem_II_r}.\\
Stage 2 in subsection \ref{new_step3}: We will get an $L^\infty$-local bound for $\nabla^d u$ for $d\geq1$ by using
an induction argument with
a boot-strapping.
This is not obvious, especially when $r$ is large, because $w$
depends on a non-local part of $u$, while our knowledge about the $L^\infty$-bound of
$u$ from the stage 1 is only local.\\
Stage 3 in subsection \ref{new_step4}: We will achieve an $L^\infty$-local bound for $(-\Delta)^{\alpha/2}\nabla^d u$ for $d\geq1$ with $0<\alpha<2$ from the integral representation
of the fractional Laplacian. The non-locality of this fractional operator
forces us to adopt the more involved condition \eqref{local_study_condition3}.\\
\subsection{Stage 1: to obtain $L^{\infty}$-local bound for $u$.}\label{new_step1_2_together}
First we suppose that $u$ satisfies all conditions of the proposition \ref{local_study_thm}
except \eqref{local_study_condition3} (the condition \eqref{local_study_condition3} will be assumed only in the stage 3). Our goal is to find a sufficiently small $\bar{\eta}$ which is independent of
$r\in[0,\infty)$.\\
Assume $\bar{\eta}\leq 1$ and
define $\bar{r}_0=\frac{1}{4}$ for this subsection.
From \eqref{local_study_condition1}, we get
\begin{equation*}
\|u\|_{L^2(-4,0;L^{6}(B(2)))}\leq C\|\nabla u\|_{L^2(-4,0;L^{2}(B(2)))}
\leq {C}\cdot\bar{\eta}.
\end{equation*}
From the corollary \ref{convolution_cor}, if $r\geq\bar{r}_0$, then
\begin{equation*}\begin{split}
\| w\|_{L^2(-4,0;L^{\infty}(B(2)))}&
\leq {C}\cdot\bar{\eta}.
\end{split}\end{equation*}
On the other hand, if $0\leq r<\bar{r}_0$, then
\begin{equation*}\begin{split}
\| w^{\prime}\|_{L^2(-4,0;L^{6}(B(\frac{7}{4})))}
&\leq {C}\| u\|_{L^2(-4,0;L^{6}(B(2)))}
\leq {C}\bar{\eta}
\end{split}\end{equation*} because $\phi_r$ is supported
in $B(r)\subset B(\bar{r}_0)$, and $w=u*\phi_r$ (see \eqref{young}).
For $w^{\prime\prime}$,
\begin{equation*}\begin{split}
&\| w^{\prime\prime}\|_{L^2(-4,0;L^{\infty}(B(2)))}
\leq\|\|u*\phi_{r}\|_{L^1(B(1))}\|_{L^2((-4,0))}
\leq\|\|u\|_{L^1(B(2))}\|_{L^2((-4,0))}\\
&\leq {C}\| u\|_{L^2(-4,0;L^{6}(B(2)))}
\leq {C}\bar{\eta}.
\end{split}\end{equation*}
Thus $\| w\|_{L^2(-4,0;L^{6}(B(\frac{7}{4})))}
\leq {C}\bar{\eta}$ if $r<\bar{r}_0$ from $w= w^{\prime} +w^{\prime\prime}$.\\
In sum, for any $0\leq r<\infty$,
\begin{equation}\label{step1_w}
\| w\|_{L^2(-4,0;L^{6}(B(\frac{7}{4})))}\leq {C}\bar{\eta}.
\end{equation}
Since the equation \eqref{navier_Problem II-r} depends only on $\nabla P$,
without loss of generality, we may assume
$\int_{\mathbb{R}^3}\phi(x)P(t,x)dx=0$ for $t\in(-4,0)$. Then, with the mean zero property \eqref{local_study_condition1} of $u$, we have
\begin{equation*}\label{step1_nabla_p}
\|\int_{\mathbb{R}^3}\phi(x)\nabla P(\cdot,x) dx\|_{L^1(-4,0)}\leq C \bar{\eta}^{\frac{1}{2}}
\end{equation*} after integration in $x$.\\
From Sobolev,
\begin{equation*}\begin{split}
\|\nabla P\|
_{L^1(-4,0;L^{\frac{3}{2}}(B(\frac{7}{4})))}
&\leq C \bar{\eta}^{\frac{1}{2}}\\
\mbox{ and } \quad \| P\|_{L^1(-4,0;L^{3}(B(\frac{7}{4})))}
&
\leq C \bar{\eta}^{\frac{1}{2}}.\\
\end{split}
\end{equation*}
Then, following step 1 and step 2 of the proof of the proposition 10 in \cite{vas:higher},
we can obtain
\begin{equation*}
\| u\|_{L^{\infty}(-3,0;L^{\frac{3}{2}}(B(\frac{6}{4})))}
\leq {C}\bar{\eta}^{\frac{1}{3}}.\\
\end{equation*} and then
\begin{equation*}
\| u\|_{L^{\infty}(-2,0;L^{2}(B(\frac{5}{4})))}\leq {C}\bar{\eta}^{\frac{1}{4}}
\end{equation*} for $0\leq r<\infty$. Details are omitted.\\
Finally, by taking $0<\bar\eta<1$ such that $C\bar\eta^\frac{1}{4}\leq\bar\delta$,
we have all assumptions of the proposition \ref{partial_problem_II_r}. As a result,
we have
$|u(t,x)|\leq 1 \mbox{ on } [-\frac{3}{2},0]\times B(\frac{1}{2})$.\\
\subsection{Stage 2: to obtain $L^\infty$ local bound for $\nabla^d u$.}\label{new_step3}
Here we cover only classical derivatives, i.e. $\alpha=0$.
For any integer $d\geq1$, our goal is to find $C_{d,0}$ such that
$|((-\Delta)^{\frac{0}{2}}\nabla^d) u(t,x)|=|\nabla^d u(t,x)|\leq C_{d,0}$ on
$(-(\frac{1}{3})^2,0)\times(B(\frac{1}{3}))$. \\
We define a strictly decreasing sequence of balls
and parabolic cylinders from
$(-(\frac{1}{2})^2,0)\times B(\frac{1}{2})$
to $(-(\frac{1}{3})^2,0)\times(B(\frac{1}{3}))$ by
\begin{equation*}\begin{split}
&\bar{B}_n =B(\frac{1}{3}+\frac{1}{6}\cdot 2^{-n})=B(l_n) \\
&\bar{Q}_n=(-(\frac{1}{3}+\frac{1}{6}\cdot 2^{-n})^2,0)\times \bar{B}_n
=(-(l_n)^2,0)\times \bar{B}_n
\end{split}\end{equation*} where $l_n=\frac{1}{3}+\frac{1}{6}\cdot 2^{-n}$.\\
First, in order to cover the small $r$ case, we claim the following:\\
There exist two positive sequences
$\{\bar{r}_n\}_{n=0}^{\infty}$
and $\{C_{n,small}\}_{n=0}^{\infty}$ such that for any integer $n\geq0$
and for any $r\in [0,\bar{r}_n)$,
\begin{equation}\begin{split}\label{step3_first_claim}
&\| \nabla^n u\|_{L^{\infty}(\bar{Q}_{11n})}\leq C_{n,small}.
\end{split}\end{equation}
Indeed, from the previous subsection \ref{new_step1_2_together} (the stage 1), \eqref{step3_first_claim} holds for $n=0$
by taking $\bar{r}_0=1$ and $C_{0,small}=1$.
We define $\bar{r}_n= $ the distance between
$\bar{B}_{11n}$ and $(\bar{B}_{11n-1})^c$ for $n \geq 1$.
Then $\{\bar{r}_n\}_{n=0}^{\infty}$ is decreasing
to zero as $n$ goes to $\infty$. Moreover,
we
can control $w$ by $u$
as long as $0\leq r<\bar{r}_n$: for any $n\geq 1$,
\begin{equation}\begin{split}\label{intermidiate balls}
&\|w\|_{L^{p_1}(-(l_{m})^2,0;L^{p_2}({\bar{B}_{m}}))}\leq
\mathcal{B}ig(\|u\|_{L^{p_1}(-(l_{m-1})^2,0;L^{p_2}({\bar{B}_{m-1}}))} + C\mathcal{B}ig) \quad\mbox{ and}\\
&\|\nabla^{k}w\|_{L^{p_1}(-(l_{m})^2,0;L^{p_2}({\bar{B}_{m}}))}\leq
\|\nabla^{k}u\|_{L^{p_1}(-(l_{m-1})^2,0;L^{p_2}({\bar{B}_{m-1}}))}
\end{split}\end{equation} for any integer $m$ such that
$m\leq 11\cdot n$, for any $ k\geq1$
and for any $p_1\in [1,\infty]$ and $p_2\in [1,\infty]$ (see \eqref{young}).\\% $1\leq p_1,p_2\leq\infty$.\\
We will use an induction with a boot-strapping.
First we fix $d\geq 1$ and suppose that \eqref{step3_first_claim} is true up to $n=(d-1)$. This implies that,
for any $r\in [0,\bar{r}_{d-1})$,
\begin{equation*}\begin{split}
\|u\|_{L^{\infty}(-l_{s}^2,0;W^{d-1,{\infty}} (\bar{B}_{s}))}
\leq C
\end{split}\end{equation*} where $s=11(d-1)$.
We want to
show that \eqref{step3_first_claim} is also true for the case $n=d$.\\
\noindent From \eqref{intermidiate balls},
$\|w\|_{L^{\infty }(-l_{s+{1 }}^2,0;W^{{ d-1 } ,{ \infty}} (\bar{B}_{s+{ 1 }}))}\leq C$ and,
from the lemma \ref{lem_parabolic_v_1_v_2_pressure} with
$v_2=w$ and $v_1=u$,\quad
$\|u\|_{L^{16}(-l_{s+{ 2}}^2,0;W^{{ d } ,{32 }} (\bar{B}_{s+{ 2 }}))}\leq C$.
Then, we use \eqref{intermidiate balls}
and the lemma \ref{lem_parabolic_v_1_v_2_pressure} in turn:
\begin{equation*}\begin{split}
&\rightarrow w\in{L^{16}(-l_{s+{ 3}}^2,0;W^{{ d } ,{32 }} (\bar{B}_{s+{3 }}))}
\rightarrow
u\in{L^{8}(-l_{s+{ 4}}^2,0;W^{{ d+1 } ,{16 }} (\bar{B}_{s+{ 4 }}))}\\
&\rightarrow w\in{L^{8}W^{{ d+1 } ,{16 }} }
\rightarrow ... \rightarrow
u\in{L^{2}W^{{ d+3 } ,{4 }} }
\end{split}\end{equation*} Then, from Sobolev,
\begin{equation*}\begin{split}
&\rightarrow u\in{L^{2}W^{{ d+2 } ,{\infty}} }\rightarrow
w\in{L^{2}(-l_{s+{ 9}}^2,0;W^{{ d+2 } ,{\infty}} (\bar{B}_{s+{ 9 }}))}.
\end{split}\end{equation*}
This estimate gives us
\begin{equation*}\begin{split}
&\Delta(\nabla^d u) ,
\ebdiv(\nabla^d(w\otimes u))\mbox{ and }
\nabla(\nabla^dP) \in{L^{1}(-l_{s+{ 10}}^2,0;L^{\infty}(\bar{B}_{s+{10 }}))}
\end{split}\end{equation*}
where
we used
\eqref{ineq_parabolic_pressure2}
for the pressure term. Thus
\begin{equation*}\begin{split}
&\partial_t(\nabla^d u)\in{L^{1}(-l_{s+{ 10}}^2,0;L^{\infty}(\bar{B}_{s+{10 }}))}.
\end{split}\end{equation*}
Finally, we obtain that for any $r\in [0,\bar{r}_{d})$
\begin{equation*}\begin{split}
&\|\nabla^d u\|_{L^{\infty}(-l_{s+{ 11}}^2,0;L^{\infty}(\bar{B}_{s+{11 }}))}\leq C.
\end{split}\end{equation*} where $C$ depends only on $d$.
By the induction argument, we showed the above claim \eqref{step3_first_claim}. \\
Now we introduce the second claim:\\
There exists a sequence
$\{C_{n,large}\}_{n=0}^{\infty}$ such that for any integer $n\geq0$
and for any $r\geq\bar{r}_n$,
\begin{equation}\begin{split}\label{step3_second_claim}
&\| \nabla^n u\|_{L^{\infty}(\bar{Q}_{21\cdot n})}\leq C_{n,large} \\
\end{split}\end{equation} where $\bar{r}_n$ comes from previous claim
\eqref{step3_first_claim}.\\
Before proving the above second claim \eqref{step3_second_claim},
we need the following two observations \textbf{(I),(II)}
from the lemmas \ref{lem_parabolic_v_1_v_2_pressure} and
\ref{higher_pressure}:\\
\textbf{(I).} From the corollary \ref{convolution_cor} for any $n\geq 0$,
if $r\geq\bar{r}_n$, then
\begin{equation*}\begin{split}\label{nabla_n_w_large_r}
\| w\|_{L^2(-4,0;W^{n,\infty}(B(2)))}&
\leq {C_n}.
\end{split}\end{equation*}
We use \eqref{ineq_parabolic_v_1_v_2} in the lemma \ref{lem_parabolic_v_1_v_2_pressure}
with $v_1=u$ and $v_2=w$.
Then it
becomes
\begin{equation}\begin{split} \label{ineq_nabla_n_u_large_r}
\|\nabla^n u \|_{L^{p_1}(-(l_m)^2,0 ;L^{p_2}(\bar{B}_m))}
&\leq C_{(m,n,p_2)}
\mathcal{B}ig(\| u \|_{L^{\frac{2p_1}{2-p_1}}
(-(l_{m-1})^2,0;W^{n-1,p_2}({\bar{B}_{m-1}}))}+
1\mathcal{B}ig)\\
\end{split}\end{equation}
for $n\geq 1$, $m\geq 1$, $1< p_1\leq 2$ and $1<p_2<\infty$. (For
the case $p_1=2$, we may interpret $\frac{2p_1}{2-p_1}=\infty$.)\\
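A minimal sketch of how \eqref{ineq_nabla_n_u_large_r} follows (one possible route): the nonlinear term in \eqref{ineq_parabolic_v_1_v_2} is estimated by the Leibniz rule and H\"older's inequality in time, with $\frac{1}{p_1}=\frac{1}{2}+\frac{2-p_1}{2p_1}$,
\begin{equation*}\begin{split}
\|w\otimes u\|_{L^{p_1}(-(l_{m-1})^2,0;W^{n-1,p_2}(\bar{B}_{m-1}))}
\leq C\,\|w\|_{L^{2}(-4,0;W^{n-1,\infty}(B(2)))}\cdot
\|u\|_{L^{\frac{2p_1}{2-p_1}}(-(l_{m-1})^2,0;W^{n-1,p_2}(\bar{B}_{m-1}))}.
\end{split}\end{equation*}
By \textbf{(I)}, the factor involving $w$ is bounded by a constant for $r\geq\bar{r}_n$, while the remaining terms $\|u\|_{L^{p_1}(-(l_{m-1})^2,0;W^{n-1,p_2}(\bar{B}_{m-1}))}$ (controlled on the bounded time interval by the $L^{\frac{2p_1}{2-p_1}}$ norm) and the pressure term (bounded by a constant thanks to the Stage 1 estimates) are absorbed into $C_{(m,n,p_2)}$ and the ``$+1$''.\\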
\textbf{(II).} Moreover, \eqref{ineq_parabolic_pressure} in the lemma
\ref{higher_pressure} becomes
\begin{equation}\begin{split} \label{ineq_nabla_n_pressure_large_r}
\|\nabla^n P\|_{L^{1}(-(l_m)^2,0;L^{p}(\bar{B}_m))}
&\leq C_{(m,n,p)}\mathcal{B}ig(
\| u \|_{L^{2}(-(l_{m-1})^2,0;W^{n-1,p}({\bar{B}_{m-1}}))}+1\mathcal{B}ig)
\end{split}\end{equation} for $n\geq 2$ and $1<p<\infty$.\\
Now we are ready to prove the second claim \eqref{step3_second_claim}
by an induction with a boot-strapping. From the previous subsection
\ref{new_step1_2_together} (the stage 1), \eqref{step3_second_claim} holds for $n=0$
with $C_{0,large}=1$. Fix $d\geq 1$ and suppose that we
have \eqref{step3_second_claim} up to $n=(d-1)$. This implies that,
for any $r\geq\bar{r}_{d-1}$,
\begin{equation*}\begin{split}
\|u\|_{L^{\infty}(-l_s^2,0;W^{d-1,{\infty}} (\bar{B}_{s}))}
\leq C_{d-1,large}
\end{split}\end{equation*} where $s=21(d-1)$.
We want to
show \eqref{step3_second_claim} for $n=d$.\\
\noindent By using \eqref{ineq_nabla_n_u_large_r} with $n=d, p_1=2$ and $p_2=11$,
\begin{equation*}\begin{split}
\|u\|_{L^{ 2}(-l_{s+{1 }}^2,0;W^{{ d } ,{11 }} (\bar{B}_{s+{ 1 }}))}\leq C
\end{split}\end{equation*}
and,
from \eqref{ineq_nabla_n_pressure_large_r} with $n=d+1$, $m=s+2$ and $p=11$,
\begin{equation*}\begin{split}
\|\nabla^{d+{1 }}P\|_{L^{1 }(-l_{s+{ 2}}^2,0;L^{ 11 } (\bar{B}_{s+{ 2}}))}\leq C.
\end{split}\end{equation*}
Combining the above two results with the
lemma \ref{lem_a_half_upgrading_large_r} for $v_1=u$ and $v_2=w$,
we can increase the integrability in space by $0.5$ at each step, up to $6$:
\begin{equation*}\begin{split}
&\|u\|_{L^{\infty }(-l_{s+{3 }}^2,0;W^{{ d } ,{ 1}} (\bar{B}_{s+{ 3 }}))}\leq C,\\
&\|u\|_{L^{\infty }(-l_{s+{ 4}}^2,0;W^{{ d } ,{1.5 }} (\bar{B}_{s+{ 4 }}))}\leq C,\\
&\quad\quad\quad\quad\quad\quad\cdots, \quad\mbox{ and} \\
&\|u\|_{L^{\infty }(-l_{s+{ 13}}^2,0;W^{{ d } ,{6 }} (\bar{B}_{s+{ 13 }}))}\leq C.\\
\end{split}\end{equation*}
By using \eqref{ineq_nabla_n_u_large_r} and \eqref{ineq_nabla_n_pressure_large_r}
again, we have
\begin{equation*}\begin{split}
&\|u\|_{L^{ 2}(-l_{s+{14 }}^2,0;W^{{d+1 } ,{ 6}} (\bar{B}_{s+{ 14 }}))}\leq C
\quad\mbox{and}\\
&\|\nabla^{d+{ 2 }}P\|_{L^{1 }(-l_{s+{ 15}}^2,0;L^{6 } (\bar{B}_{s+{ 15 }}))}\leq C.
\end{split}\end{equation*}
Combining the above two results with the lemma \ref{lem_a_half_upgrading_large_r} again,
we have
\begin{equation*}\begin{split}
&\|u\|_{L^{\infty }(-l_{s+{16 }}^2,0;W^{{ d+1 } ,{ 1}} (\bar{B}_{s+{ 16 }}))}\leq C,\\
&\quad\quad\quad\quad\quad\quad\cdots,\quad\mbox{ and} \\
&\|u\|_{L^{\infty }(-l_{s+{21 }}^2,0;W^{{ d+1} ,{ 3.5}} (\bar{B}_{s+{21 }}))}\leq C.\\
\end{split}\end{equation*}
Finally, from Sobolev's inequality,
\begin{equation*}\begin{split}
&\|\nabla^d u\|_{L^{\infty }(-l_{s+{21 }}^2,0;L^{\infty} (\bar{B}_{s+{ 21 }}))}
\leq C
\end{split}\end{equation*} where $C$ depends
only on $d$, not on $u$ nor on $r$, as long as $r\geq \bar{r}_d$.
By induction, we have proved the second claim \eqref{step3_second_claim}.\\
Define for any $n\geq0$, $C_{n,0} = \max({C_{n,small},C_{n,large}})$ where
$C_{n,small}$ and $C_{n,large}$ come from \eqref{step3_first_claim}
and \eqref{step3_second_claim} respectively.
Then we have:
\begin{equation}\begin{split}\label{step3_conclusion}
&\| \nabla^n u\|_{L^{\infty}(Q(\frac{1}{3}))}\leq C_{n,0} \\
\end{split}\end{equation} for any $n\geq0$ and for any $0\leq r<\infty$
because $Q(\frac{1}{3})\subset \bar{Q}_{n}$.
It ends this stage 2.\\
\subsection{Stage 3: to obtain $L^\infty$ local bound for $(-\Delta)^{\alpha/2}\nabla^d u$.}\label{new_step4}
From now on, we
assume further that $(u,P)$ satisfies \eqref{local_study_condition3} as well as
all the other conditions of the proposition \ref{local_study_thm}.
In the following proof, we will not
divide the proof into a small $r$ part and a large $r$ part.\\% as you will see.
Fix an integer $d\geq 1$ and
a real $\alpha$ with $0<\alpha<2$.
In particular, any constant which appears below may depend on $d$ and $\alpha$,
but it will be independent of $ r\in [0,\infty)$ and of the solution $(u,P)$.\\
First, we claim:\\
There exists a constant $C=C({d,\alpha})$
such that
\begin{equation}\begin{split}\label{step4_first_claim}
|(-\Delta)^{\frac{\alpha}{2}}\nabla^d u(t,x)|
\leq C({d,\alpha})+\mathcal{B}ig|\int_{|y|\geq {(1/6)}}
\frac{\nabla^{d} u(t,x-y)}{|y|^{3+\alpha}}dy\mathcal{B}ig|
\end{split}\end{equation} for $ |x|\leq (1/6)$ and
for $ -(1/3)^2\leq t\leq 0$.\\% for $-{(a_{n+1})}^2\leq t\leq 0$.\\
\noindent To prove \eqref{step4_first_claim},
we first recall the Taylor expansion of any $C^2$ function $f$ at $x$:
$f(y)-f(x)=(\nabla f)(x)\cdot(y-x)+R(x,y)$, and we have
an error estimate $|R|\leq C|x-y|^2\cdot\|\nabla^2 f\|_{L^\infty(B(x;|x-y|))}$.
Note that if we integrate the first order term $(\nabla f)(x)\cdot(y-x)$
in $y$ on any sphere with the center $x$, we have zero by symmetry.
As a result, if we
take any $x$ and $t$ for $ |x|\leq (1/6)$ and
for $ -(1/3)^2\leq t\leq 0$ respectively, then we have
\begin{equation*}\begin{split}
|(-\Delta)^{\frac{\alpha}{2}}\nabla^d u(t,x)|&
=\mathcal{B}ig|P.V.\int_{\mathbb{R}^3}\frac{\nabla^d u(t,x)-\nabla^d u(t,y)}{|x-y|^{3+\alpha}}dy\mathcal{B}ig|\\
&\leq \sup_{z\in B((1/3))}(|\nabla^{d+2}u(t,z)|)\cdot\int_{|x-y|< (1/6)}\frac{1}{|x-y|^{3+\alpha-2}}dy\\
&\quad+ \sup_{z\in B((1/3))}(|\nabla^d u(t,z)|)\cdot
\int_{|x-y|\geq (1/6)}\frac{1}{|x-y|^{3+\alpha}}dy\\
&\quad+\mathcal{B}ig|\int_{|x-y|\geq (1/6)}\frac{\nabla^d u(t,y)}{|x-y|^{3+\alpha}}dy\mathcal{B}ig|\\
&\leq C({d,\alpha}) +\mathcal{B}ig|\int_{|y|\geq (1/6)}
\frac{\nabla^d u(t,x-y)}{|y|^{3+\alpha}}dy\mathcal{B}ig|
\end{split}\end{equation*}
where we used
the result \eqref{step3_conclusion} of the previous
subsection \ref{new_step3} (the stage 2)
together with the Taylor expansion of
$\nabla^d u(t,\cdot)$ at $x$
in order to reduce the singularity by two orders at the point $x=y$. This
proves the first claim \eqref{step4_first_claim}.\\
Second, we claim:\\
There exists $C=C({d,\alpha})$ such that
\begin{equation}\begin{split}\label{step4_second_claim}
\mathcal{B}ig|\int_{|y|\geq {{(1/6)}}}\frac{\nabla^{d} u(t,x-y)}
{|y|^{3+\alpha}}dy\mathcal{B}ig|
\leq C({d,\alpha})+\sum_{j=k}^{\infty}(\frac{1}{2^{\alpha}})^j\cdot
|({(h^{\alpha})_{2^j}}*\nabla^d u)(t,x)|\\
\end{split}\end{equation}
for $ |x|\leq (1/6)$ and
for $ -(1/3)^2\leq t\leq 0$ where
$k$ is the integer such that
$2^k\leq (1/6)< 2^{k+1}$ (i.e. from now on, we fix $ k=-3$). Recall that $ h^\alpha$ is defined around \eqref{property_h}.\\%(k may be negative)\\
\noindent To prove the above second claim \eqref{step4_second_claim}: (Recall \eqref{property_zeta}
and \eqref{property_h})
\begin{equation*}\begin{split}
&\mathcal{B}ig|\int_{|y|\geq {{(1/6)}}}\frac{\nabla^{d} u(t,x-y)}
{|y|^{3+\alpha}}dy\mathcal{B}ig|=
\mathcal{B}ig|\int_{|y|\geq {{(1/6)}}}\sum_{j=k}^{\infty}\zeta
(\frac{y}{2^j})\frac{\nabla^{d} u(t,x-y)}
{|y|^{3+\alpha}}dy\mathcal{B}ig|\\
&=\mathcal{B}ig|\int_{|y|\geq {{(1/6)}}}\sum_{j=k}^{\infty}
\frac{1}{(2^j)^{\alpha }} \cdot {(h^{\alpha})_{2^j}}(y)\nabla^{d} u(t,x-y)dy\mathcal{B}ig|\\
\end{split}\end{equation*}
\begin{equation*}\begin{split}
&\leq\sum_{j=k}^{k+1}\frac{1}{(2^j)^{\alpha }}
\cdot\mathcal{B}ig|\int_{|y|\geq {{(1/6)}}} {(h^{\alpha})_{2^j}}(y)\nabla^{d} u(t,x-y)dy\mathcal{B}ig|\\
&\quad+\sum_{j=k+2}^{\infty}\frac{1}{(2^j)^{\alpha }}
\cdot\mathcal{B}ig|\int_{|y|\geq {{(1/6)}}} {(h^{\alpha})_{2^j}}(y)\nabla^{d} u(t,x-y)dy\mathcal{B}ig|\\
&=(I)+(II).
\end{split}\end{equation*}
\noindent For $(I)$,
\begin{equation*}\begin{split}
(I)&\leq\sum_{j=k}^{k+1}\frac{1}{(2^j)^{\alpha }}
\cdot\mathcal{B}ig(\mathcal{B}ig|\int_{\mathbb{R}^3} {(h^{\alpha})_{2^j}}(y)\nabla^{d} u(t,x-y)dy\mathcal{B}ig|\\
&\quad\quad\quad\quad\quad\quad\quad\quad +
\int_{|y|\leq {{(1/6)}}} |{(h^{\alpha})_{2^j}}(y)|\cdot|\nabla^{d} u(t,x-y)|dy\mathcal{B}ig)\\
&\leq\sum_{j=k}^{k+1}\frac{1}{(2^j)^{\alpha }}
\mathcal{B}ig(| ({(h^{\alpha})_{2^j}}*\nabla^{d} u)(t,x)|
+ C\cdot \sup_{z\in B(1/3)}|\nabla^{d} u(t,z)|\mathcal{B}ig)\\
&= \sum_{j=k}^{k+1}(\frac{1}{2^{\alpha}})^j\cdot| ({(h^{\alpha})_{2^j}}
*\nabla^{d} u)(t,x)|+C({d,\alpha}).
\end{split}\end{equation*}
\noindent For $(II)$, by using $supp( h^{\alpha}_{2^j})
\subset (B(2^{j-1}))^C\subset (B(1/6))^C$
for any $j\geq k+2$,
\begin{equation*}\begin{split}
(II)&=
\sum_{j=k+2}^{\infty}\frac{1}{(2^j)^{\alpha }}
\cdot\mathcal{B}ig|\int_{\mathbb{R}^3} {(h^{\alpha})_{2^j}}(y)\nabla^{d} u(t,x-y)dy\mathcal{B}ig|\\
&= \sum_{j=k+2}^{\infty}(\frac{1}{2^{\alpha}})^j
\cdot| ({(h^{\alpha})_{2^j}}*\nabla^{d} u)(t,x)|.
\end{split}\end{equation*} We showed the second claim
\eqref{step4_second_claim}.\\
Third, we claim:\\%that
There exists $C=C({d,\alpha})$ such that
\begin{equation}\begin{split}\label{step4_third_claim}
\|{(h^{\alpha})_M}*\nabla^d u\|_{L^{\infty}( -(1/6)^2,0;L^1(B(1/6)))}
\leq C({d,\alpha})\cdot M^{1-d}\\
\end{split}\end{equation} for any $M\geq 2^k$. (Recall
$k= -3$.)\\% is the integer such that
\noindent To prove the above third claim \eqref{step4_third_claim},
we first convolve the equation
\eqref{navier_Problem II-r} with
$ \nabla^d[{(h^{\alpha})_M}]$.
Then we have
\begin{equation*}\begin{split}
(\nabla^d [{(h^{{\alpha}})_M}]*u)_t &+(\nabla^d [{(h^{{\alpha}})_M}]*\mathcal{B}ig((w\cdot\nabla)u\mathcal{B}ig))\\&
+(\nabla^d [{(h^{{\alpha}})_M}]*\nabla P) -(\nabla^d [{(h^{{\alpha}})_M}]*\Delta u)=0
\end{split}\end{equation*} so that
\begin{equation*}\begin{split}
( \nabla^{d-1}[{(h^{{\alpha}})_M}]&*\nabla u)_t +(\nabla^d {[(h^{{\alpha}})_M}]*\mathcal{B}ig((w\cdot\nabla)u\mathcal{B}ig))\\& +(\nabla^{d-1}[{(h^{{\alpha}})_M}]*\nabla^2 P) -\Delta(
\nabla^{d-1}
[{(h^{{\alpha}})_M}]* \nabla u)=0.
\end{split}\end{equation*}
\noindent Define a cut-off $\Phi(t,x)$ by
\begin{equation*}\begin{split}
&0\leq\Phi(t,x)\leq 1 \quad , \quad
supp(\Phi)\subset (-4,0)\times B({2})\\
&\Phi(t,x) = 1 \mbox{ for } (t,x)\in (-(1/6)^2,0)\times B({(1/6)}).
\end{split}
\end{equation*}
\noindent Multiply by $\Phi(t,x)\frac{(\nabla^{d-1}[{(h^{{\alpha}})_M}]*\nabla u)(t,x)}{|(\nabla^{d-1}[{(h^{{\alpha}})_M}]*\nabla u) (t,x)|}$, and then
integrate in $x$:
\begin{equation*}\begin{split}
&\frac{d}{dt}\int_{\mathbb{R}^3}\Phi(t,x)|(\nabla^{d-1}[{(h^{{\alpha}})_M}]*
\nabla u) (t,x)|dx\\
&\leq\int_{\mathbb{R}^3}(|\partial_t\Phi(t,x)|+|\Delta\Phi(t,x)|)
|(\nabla^{d-1}[{(h^{{\alpha}})_M}]*\nabla u) (t,x)|dx \\
&\quad\quad+\int_{\mathbb{R}^3}|\Phi(t,x)||(\nabla^{d-1}[{(h^{{\alpha}})_M}]*\nabla^2 P)|dx \\
&\quad\quad+\int_{\mathbb{R}^3}|\Phi(t,x)||\nabla^d [{(h^{{\alpha}})_M}]*
\mathcal{B}ig((w\cdot\nabla)u\mathcal{B}ig)|dx.
\end{split}\end{equation*}
\noindent Then integrating on $[-4,t]$ for any $t\in[-(1/6),0]$ gives
\begin{equation*}\begin{split}
&\|{(h^{{\alpha}})_M}*\nabla^d u\|_{L^{\infty}( -(1/6)^2,0;L^1(B(1/6)))}\\
&=\|\nabla^{d-1}[{(h^{{\alpha}})_M}]*\nabla u\|_{L^{\infty}( -(1/6)^2,0;L^1(B(1/6)))}\\
&\leq C\mathcal{B}ig(\|\nabla^{d-1}[{(h^{{\alpha}})_M}]*\nabla u\|_{L^{1}( -4,0;L^1(B(2)))} \\
&+ \|\nabla^{d-1}[{(h^{{\alpha}})_M}]*\nabla^2 P\|_{L^{1}( -4,0;L^1(B(2)))} \\
&+ \|\nabla^d [{(h^{{\alpha}})_M}]*\mathcal{B}ig((w\cdot\nabla)u\mathcal{B}ig)\|
_{L^{1}( -4,0;L^1(B(2)))}\mathcal{B}ig) \\
&=(I) + (II)+ (III).
\end{split}\end{equation*}
For $(I)$, we use the simple observations
$\nabla^m[(f)_\delta]=\delta^{-m}\cdot(\nabla^mf)_\delta$
and \\$|(f)_\delta*\nabla u|(x)\leq C_f\cdot\mathcal{M}(|\nabla u|)(x) $
for any $f\in C^\infty_0(\mathbb{R}^3)$ so that
\begin{equation*}\begin{split}
|(\nabla^{d-1}[{(h^{{\alpha}})_M}]*\nabla u)(t,x)|&
=M^{-(d-1)}\cdot|((\nabla^{d-1}{h^{{\alpha}})_M}*\nabla u)(t,x)|\\
&\leq C\cdot M^{-(d-1)}\cdot\mathcal{M}(|\nabla u|)(t,x)
\end{split}\end{equation*} for any $0<M<\infty$ so that
\begin{equation*}\begin{split}
(I)&=\|(\nabla^{d-1}[{(h^{{\alpha}})_M}]*\nabla u)\|_{L^{1}( -4,0;L^1(B(2)))}\\
&\leq C\cdot M^{-(d-1)}\cdot\|\mathcal{M}(|\nabla u|)\|_{L^{1}( -4,0;L^1(B(2)))}\\
&\leq C\cdot M^{-(d-1)}\cdot\|\mathcal{M}(|\nabla u|)\|_{L^{2}( -4,0;L^2(B(2)))}
\leq C\cdot M^{1-d}
\end{split}\end{equation*} for any $0<M<\infty$.\\
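For completeness, the first observation is a direct scaling computation; assuming the standard mollifier normalization $(f)_\delta(x)=\delta^{-3}f(x/\delta)$ (which is consistent with the bound on $\|\nabla^d [{(h^{{\alpha}})_M}]\|_{L^{\infty}(\mathbb{R}^3)}$ used below),
\begin{equation*}
\nabla^m[(f)_\delta](x)=\delta^{-3-m}\cdot(\nabla^m f)(x/\delta)=\delta^{-m}\cdot(\nabla^m f)_\delta(x).
\end{equation*}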
For $(II)$, we use our global information about pressure in \eqref{local_study_condition3}
thanks to the property of the Hardy space \eqref{hardy_property}:
\begin{equation}\begin{split}\label{pressure_hardy_used}
(II)&=\|\nabla^{d-1}[{(h^{{\alpha}})_M}]*\nabla^2 P\|_{L^{1}( -4,0;L^1(B(2)))}\\
&=M^{-(d-1)}\cdot\|(\nabla^{d-1}{h^{{\alpha}})_M}*\nabla^2 P\|_{L^{1}( -4,0;L^1(B(2)))}\\
&\leq M^{-(d-1)}\cdot
\| \sup_{\delta>0}(|(\nabla^{d-1}{h^{{\alpha}})_\delta}*\nabla^2 P|)\|_{L^{1}(-4,0;L^{1}(B(2)))}\\&
\leq C \cdot M^{1-d}
\end{split}\end{equation} for any $0<M<\infty$.\\
For $(III)$, we use the following useful facts \textbf{(1, }$\cdots$\textbf{, 5)}:\\
\noindent \textbf{1.} From $supp((h^{{\alpha}})_M)\subset B(2M)$,
\begin{equation*}\begin{split}
\quad&\|\nabla^d [{(h^{{\alpha}})_M}]*\mathcal{B}ig((w\cdot\nabla)u\mathcal{B}ig)(t,\cdot)\|
_{L^1(B(2))} \\
&\leq\int_{B(2)}
\int_{\mathbb{R}^3}\mathcal{B}ig|\mathcal{B}ig((w\cdot\nabla)u\mathcal{B}ig)(t,y)\cdot (\nabla^d [{(h^{{\alpha}})_M}])(x-y)\mathcal{B}ig|dy
dx\\
&\leq\int_{B(2M+2)}\mathcal{B}ig|\mathcal{B}ig((w\cdot\nabla)u\mathcal{B}ig)(t,y)\mathcal{B}ig|\cdot\mathcal{B}ig[
\int_{{B(2)}}| (\nabla^d [{(h^{{\alpha}})_M}])(x-y)|dx \mathcal{B}ig]
dy\\
&\leq C\|\nabla^d [{(h^{{\alpha}})_M}]\|_{L^{\infty}(\mathbb{R}^3)}\cdot
\|\mathcal{B}ig((w\cdot\nabla)u\mathcal{B}ig)(t,\cdot)\|_{L^1{(B(2M+2))}}
\\
&\leq C \cdot\frac{1}{M^{3+d}}\cdot
\|\mathcal{B}ig((w\cdot\nabla)u\mathcal{B}ig)(t,\cdot)\|_{L^1{(B(2M+2))}}
\\
&\leq C \cdot\frac{1}{M^{3+d}}\cdot
\|w(t,\cdot)\|_{L^{q^\prime}{(B(2M+2))}}
\cdot\|\nabla u(t,\cdot)\|_{L^q{(B(2M+2))}}
\end{split}\end{equation*} where $q=12/(\alpha +6)$ and $1/q+1/q^\prime = 1$.\\
Note: Because $0<\alpha<2$, we know $12/8<q<2$.
\begin{equation*}\begin{split}
\textbf{2.}\quad& \|w(t,\cdot)\|_{L^{q}{(B(2M+2))}}\\
&\quad\leq
CM^{1+\frac{3}{q}}\cdot\mathcal{B}ig(
\|\mathcal{M}(|\nabla w|^q)(t,\cdot)\|^{1/q}_{L^{1}(B(1))}
+\|\nabla w(t,\cdot)\|_{L^1(B(2))}\mathcal{B}ig)\\
&\quad\leq
CM^{1+\frac{3}{q}}\cdot\mathcal{B}ig(
\|\mathcal{M}(|\mathcal{M}(|\nabla u|)|^q)(t,\cdot)\|^{1/q}_{L^{1}(B(1))}
+\|\mathcal{M}(|\nabla u|)(t,\cdot)\|_{L^1(B(2))}\mathcal{B}ig)\\
\end{split}\end{equation*} for any $M\geq 2^k$.\\
\noindent For the first inequality, we used the lemma \ref{lem_Maximal 2.5 or 4}
and for the second one, we used the fact
$|\nabla w (t,x)|=|(\nabla u * \phi_r)(t,x)|\leq
C|\mathcal{M}(|\nabla u|)(t,x)|$ where
$C$ is independent of $0\leq r<\infty$. (For $r>0$, it follows
from the definitions of the convolution and the Maximal function, while
for $r=0$, it follows from the Lebesgue differentiation theorem together with the
continuity of $\nabla u$.)
So, for any $M\geq 2^k$, from \eqref{local_study_condition3},
\begin{equation*}\begin{split}
&\|w\|_{L^2(-4,0;L^q(B(2M+2)))}\\
&\quad\leq
CM^{1+\frac{3}{q}}\mathcal{B}ig(
\|\|\mathcal{M}(|\mathcal{M}(|\nabla u|)|^q)\|^{1/q}_{L_x^{1}(B(1))}\|
_{L_t^2(-4,0)}\\
&\quad\quad\quad\quad\quad\quad+\|\|\mathcal{M}(|\nabla u|)\|_{L_x^1(B(2))}\|
_{L_t^2(-4,0)}\mathcal{B}ig)\\
&\quad\leq
CM^{1+\frac{3}{q}}\mathcal{B}ig(
\|\mathcal{M}(|\mathcal{M}(|\nabla u|)|^q)\|^{1/q}_{L^{2/q}(-4,0;L^1(B(2)))}\\
&\quad\quad\quad\quad\quad\quad+\|\mathcal{M}(|\nabla u|)\|_{L^2(-4,0;L^1(B(2)))}\mathcal{B}ig)\\
&\quad\leq
CM^{1+\frac{3}{q}}.
\end{split}\end{equation*}
Before stating the third fact, we need the following two observations:\\
From the standard Sobolev-Poincare inequality on balls (e.g. see
Saloff-Coste \cite{sobolev}), we have $C$ such that
\begin{equation}\begin{split}\label{Sobolev-Poincare inequality}
\|f-\mathcal{B}ar{f}\|_{L^{3q/(3-q)}(B(M))}\leq C\cdot\|\nabla f\|_{L^q(B(M))}
\end{split}\end{equation} for any $0 <M <\infty$ and for any $f$
whose derivatives are in $L^q_{loc}(\mathbb{R}^3)$ where $\mathcal{B}ar{f}
=\int_{B}fdx/|B|$ is
the mean value on $B$. Note that $C$ is independent of $M$.\\
On the other hand, once we fix $M_0>0$, there exists $C=C(M_0)$ with
the following property: \\
For any $p$ with $1\leq p<\infty$, for any $M\geq M_0$
and for any $f\in L^p_{loc}(\mathbb{R}^3)$, we have
\begin{equation}\label{lem_Maximal q}
\|f\|_{L^p(B(M))}\leq CM^{\frac{3}{p}}\cdot
\|\mathcal{M}(|f|^p)\|^{1/p}_{L^{1}(B(2))}
\end{equation}
To prove \eqref{lem_Maximal q}, it is enough to show that
\begin{equation*}
\|g\|_{L^1(B(M))}\leq CM^{3}\cdot
\|\mathcal{M}(g)\|_{L^{1}(B(2))},
\end{equation*}
since applying this to $g=|f|^p$ and taking $p$-th roots gives \eqref{lem_Maximal q}.
For any $z\in B(2)$,
\begin{equation*}\begin{split}
\int_{B(M)} |g(x)|dx&\leq\int_{B(M+2)} |g(z+x)|dx
=(M+2)^3\cdot\frac{1}{(M+2)^3}\int_{B(z;M+2)} |g(y)|dy\\
&\leq C(M+2)^3\mathcal{M}(g)(z)
\leq C_{M_0}M^{3}\mathcal{M}(g)(z).
\end{split}\end{equation*} Then we integrate this inequality in $z$ over $B(2)$.\\
Now we state the third fact.
\begin{equation*}\begin{split}
\textbf{3.} \quad& \|w(t,\cdot)\|_{L^{3q/(3-q)}{(B(2M+2))}}\\
&\quad\quad\quad\leq C\cdot\|\nabla w(t,\cdot)\|_{L^q(B(2M+2))}+
\|\mathcal{B}ar{w}(t,\cdot)\|_{L^{3q/(3-q)}(B(2M+2))}\\
&\quad\quad\quad\leq
C\cdot M^{3/q} \cdot \|\mathcal{M}(|\nabla w|^q)(t,\cdot)\|
_{L^1(B(2))}^{1/q}
\\&\quad\quad\quad\quad+
CM^{-3}\|w(t,\cdot)\|_{L^1(B(2M+2))}\cdot CM^{3\cdot\frac{3-q}{3q}}\\
&\quad\quad\quad\leq
C\cdot M^{3/q} \cdot \|\mathcal{M}(|\mathcal{M}(|\nabla u|)|^q)(t,\cdot)\|
_{L^1(B(2))}^{1/q}
\\&\quad\quad\quad\quad+
CM^{\frac{3}{q}-4}\|w(t,\cdot)\|_{L^1(B(2M+2))}\\
&\quad\quad\quad\leq
C\cdot M^{3/q} \cdot \|\mathcal{M}(|\mathcal{M}(|\nabla u|)|^q)(t,\cdot)\|
_{L^1(B(2))}^{1/q}
\\&\quad\quad\quad\quad+
CM^{\frac{3}{q}-4} CM^{1+\frac{3}{1}}\cdot\mathcal{B}ig(
\|\mathcal{M}(|\nabla w|^1)(t,\cdot)\|^{1/1}_{L^{1}(B(1))}
+\|\nabla w(t,\cdot)\|_{L^1(B(2))}\mathcal{B}ig)\\
&\quad\quad\quad\leq
C\cdot M^{3/q} \cdot \|\mathcal{M}(|\mathcal{M}(|\nabla u|)|^q)(t,\cdot)\|
_{L^1(B(2))}^{1/q}
\\&\quad\quad\quad\quad+
CM^{\frac{3}{q}}\mathcal{B}ig(
\|\mathcal{M}(|\mathcal{M}(|\nabla u|)|)(t,\cdot)\|_{L^{1}(B(1))}
+\|\mathcal{M}(|\nabla u|)(t,\cdot)\|_{L^1(B(2))}\mathcal{B}ig)\\
\end{split}\end{equation*}
Here we used \eqref{Sobolev-Poincare inequality} for the first inequality;
\eqref{lem_Maximal q} and the definition of the mean value
for the second one; the bound
$|\nabla w (t,x)|\leq
C|\mathcal{M}(|\nabla u|)(t,x)|$ for the third one; the lemma \ref{lem_Maximal 2.5 or 4} (with $p=1$)
for the fourth one; and the same bound on $|\nabla w|$ again for the fifth one. So, by taking the $L^2$-norm in time on $[-4,0]$ with \eqref{local_study_condition3},
\begin{equation*}\begin{split}
&\|w\|_{L^2(-4,0;L^{\frac{3q}{3-q}}(B(2M+2)))}
\leq
CM^{\frac{3}{q}}
\end{split}\end{equation*} for any $M\geq 2^k$.
\begin{equation*}\begin{split}
\textbf{4.}\quad& \|w(t,\cdot)\|_{L^{q^\prime}{(B(2M+2))}}\leq
\|w(t,\cdot)\|^{\theta}_{L^{q}{(B(2M+2))}}\cdot
\|w(t,\cdot)\|^{1-\theta}_{L^{3q/(3-q)}{(B(2M+2))}}\quad\quad
\end{split}\end{equation*} where $q^\prime=q/(q-1)$ and $\theta=(4q-6)/q$.\\
Note: Because $12/8<q<2$, we have $0<\theta<1$.
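Also, for the record, the interpolation exponents indeed match: a direct computation gives
\begin{equation*}
\frac{\theta}{q}+(1-\theta)\cdot\frac{3-q}{3q}
=\frac{4q-6}{q^{2}}+\frac{(2-q)(3-q)}{q^{2}}
=\frac{q^{2}-q}{q^{2}}
=\frac{1}{q^{\prime}},
\end{equation*}
and $q<q^{\prime}<\frac{3q}{3-q}$ because $\frac{3}{2}<q<2$.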
So, for any $M\geq 2^k$,
\begin{equation*}\begin{split}
&\|w\|_{L^2(-4,0;L^{q^\prime}(B(2M+2)))}\\&\quad\leq
\|w\|^{\theta}_{L^2(-4,0;L^{q}{(B(2M+2)))}}\cdot
\|w\|^{1-\theta}_{L^2(-4,0;L^{3q/(3-q)}{(B(2M+2)))}}\\
&\leq C\cdot (M^{1+(3/q)})^{\theta}(M^{3/q})^{1-\theta}
= C\cdot M^{4-\frac{3}{q} }.
\end{split}\end{equation*}
\noindent \textbf{5.} From \eqref{lem_Maximal q}, for any $M\geq 2^k$,
\begin{equation*}\begin{split}
\quad& \|\nabla u(t,\cdot)\|_{L^q{(B(2M+2))}}
\leq C\cdot M^{3/q} \cdot \|\mathcal{M}(|\nabla u|^q)(t,\cdot)\|_{L^1(B(2))}^{1/q}.
\end{split}\end{equation*} So, for any $M\geq 2^k$, from \eqref{local_study_condition3},
\begin{equation*}\begin{split}
\|\nabla u\|_{L^2(-4,0;L^q(B(2M+2)))}
&\leq C\cdot M^{3/q} \cdot \|\|\mathcal{M}(|\nabla u|^q)\|
_{L_x^1(B(2))}^{1/q}\|_{L_t^2(-4,0)}\\
&\leq C\cdot M^{3/q} \cdot \|\mathcal{M}(|\nabla u|^q)\|
_{L^{2/q}(-4,0;L^1(B(2)))}^{1/q}
\leq C\cdot M^{3/q}.
\end{split}\end{equation*}
\noindent Using the above five results $\textbf{(1, }\cdots\textbf{, 5)}$ all together, we have, for any $M\geq 2^k$,
\begin{equation*}\begin{split}
(III)
&\leq C \cdot\frac{1}{M^{3+d}}\cdot
\|w\|_{L^{2}( -4,0;L^{q^{\prime}}(B(2M+2)))}
\|\nabla u\|_{L^{2}( -4,0;L^q(B(2M+2)))}\\
&\leq C\cdot\frac{1}{M^{3+d}}\cdot M^{4-(3/q)}\cdot M^{3/q}
= C\cdot M^{1-d}
\end{split}\end{equation*}
which proves the above third claim \eqref{step4_third_claim}.\\
Finally, we combine the three claims
\eqref{step4_first_claim}, \eqref{step4_second_claim}
and \eqref{step4_third_claim}:
\begin{equation*}\begin{split}
&\|(-\Delta)^{\frac{\alpha}{2}}
\nabla^d u\|_{L^\infty(-{(1/6)}^2,0;
L^{1}(B((1/6))))}\\
&\quad\leq \|
C\mathcal{B}ig(1+\mathcal{B}ig|\int_{|y|\geq {(1/6)}}
\frac{\nabla^{d} u(\cdot_t,\cdot_x-y)}{|y|^{3+\alpha}}dy\mathcal{B}ig|\mathcal{B}ig)\|
_{L^\infty(-{(1/6)}^2,0;
L^{1}(B((1/6))))}\\
&\quad\leq C+C\sum_{j=k}^{\infty}(\frac{1}{2^{{\alpha}}})^j\cdot \|
|({(h^{{\alpha}})_{2^j}}*\nabla^d u)(\cdot_t,\cdot_x)|\|
_{L^\infty(-{(1/6)}^2,0;
L^{1}(B((1/6))))}\\
&\quad\leq C+C\sum_{j=k}^{\infty}(\frac{1}{2^{{\alpha}}})^j
\cdot (2^j)^{1-d}
\leq C+C\sum_{j=k}^{\infty}(\frac{1}{2^{d+\alpha-1}})^j
\leq C
\end{split}\end{equation*} because $d+\alpha-1>0$
from $d\geq 1$ and $\alpha>0$.\\
\noindent In exactly the same way, we can also prove that
\begin{equation*}
\|(-\Delta)^{\frac{\alpha}{2}}
\nabla^m u\|_{L^\infty(-{(1/6)}^2,0;
L^{1}(B((1/6))))}\leq C
\end{equation*} for $m=d+1, ... ,d+4$. By repeated uses of Sobolev's inequality,
\begin{equation*}
\|(-\Delta)^{\frac{\alpha}{2}}
\nabla^d u\|_{L^\infty(-{(1/6)}^2,0;
L^{\infty}(B((1/6))))}\leq C({d,\alpha})
\end{equation*} and this finishes the proof of the proposition \ref{local_study_thm}.
\end{proof}
\section{Proof of the main theorem \ref{main_thm}}\label{proof_main_thm_II}
We begin this section by presenting one small lemma about pivot quantities.
After that,
the subsection \ref{prof_main_thm_II_alpha_0} covers the part (II) for $\alpha=0$,
while the subsection \ref{prof_main_thm_II_alpha_not_0} covers the part (II) for $0<\alpha<2$.
Finally, the part (I) for $0\leq\alpha<2$ follows in the subsection \ref{proof_main_thm_I}.
\subsection{$L^1$ Pivot quantities}
The following lemma says that the $L^1$ space-time norm of our pivot quantities can be controlled by the $L^2$ spatial norm of the initial data. These quantities have
the best scaling, like $|\nabla u|^2$ and $|\nabla^2 P|$, among all \emph{a priori} quantities available from $L^2$ initial data (also see \eqref{best_scaling}).
\begin{lem}\label{lemma7_problem I-n}
There exist constants $C>0$ and $C_{d,\alpha}>0$
for each integer $d\geq1$ and real $\alpha\in(0,2)$ with the following property:\\
If $(u,P)$ is a solution of (Problem I-n) for some $1\leq n \leq \infty$, then we have
\begin{equation*}
\int_0^{\infty}\int_{\mathbb{R}^3}\big(|\nabla u(t,x)|^2 +
|\nabla^2P(t,x)| + |\mathcal{M}(|\nabla u|)(t,x)|^2\big)dxdt
\leq C\|u_0\|^2_{L^2(\mathbb{R}^3)}
\end{equation*} and
\begin{equation*}\begin{split}
\int_0^{\infty}\int_{\mathbb{R}^3}&\mathcal{B}ig(
|\mathcal{M}(\mathcal{M}(|\nabla u|))|^2+|\mathcal{M}(|\nabla u|^q)|^{2/q}+
|\mathcal{M}(|\mathcal{M}(|\nabla u|)|^q)|^{2/q}\\
&+
\sum_{m=d}^{d+4} \sup_{\delta>0}(|(\nabla^{m-1}{h^{\alpha})_\delta}*\nabla^2 P|)
\mathcal{B}ig)dxdt
\leq C_{d,\alpha}\|u_0\|^2_{L^2(\mathbb{R}^3)}
\end{split}\end{equation*} for any integer $d\geq1$ and any real $\alpha\in(0,2)$
where $q=q(\alpha)$ is defined by $12/(\alpha+6)$.
\begin{rem}
The definitions of $h^{\alpha}$ and
$(\nabla^{m-1}{h^{\alpha})_\delta}$ can be found around \eqref{property_h}.
\end{rem}
\begin{rem}
In the following proof, we will see that every quantity on the left-hand sides of the above two estimates can be controlled by
the dissipation of energy $\|\nabla u\|_{L^2((0,\infty)\times\mathbb{R}^3)}^{2}$
alone.
This explains the latter part of the remark \ref{rmk_dissipation of energy}.
\end{rem}
\end{lem}
\begin{proof}
From \eqref{energy_eq_Problem I-n},
\begin{equation*}\begin{split}
\|\nabla u\|^2_{L^2(0,\infty;L^2(\mathbb{R}^3))}
&\leq \|u_0*\phi_{\frac{1}{n}}\|_{L^2(\mathbb{R}^3)}^2\leq \|u_0\|_{L^2(\mathbb{R}^3)}^2.
\end{split}\end{equation*}
\noindent For the pressure term,
we use the boundedness of the Riesz transform on the Hardy space $\mathcal{H}$
and the compensated compactness result of
Coifman, Lions, Meyer and Semmes \cite{clms}:
\begin{equation}\begin{split}\label{pressure_hardy_Problem I-n}
\|\nabla^2 P\|_{L^1(0,\infty;L^1(\mathbb{R}^3))}&\leq
\|\nabla^2 P\|_{L^1(0,\infty;\mathcal{H}(\mathbb{R}^3))}
\leq C\|\Delta P\|_{L^1(0,\infty;\mathcal{H}(\mathbb{R}^3))}\\
&=\|\ebdiv\ebdiv
\mathcal{B}ig( (u * \phi_{1/n})\otimes u\mathcal{B}ig)\|_{L^1(0,\infty;\mathcal{H}(\mathbb{R}^3))}\\
&\leq C\cdot\|\nabla (u * \phi_{1/n})\|_{L^2(0,\infty;L^2(\mathbb{R}^3))}
\|\nabla u\|_{L^2(0,\infty;L^2(\mathbb{R}^3))}\\
&\leq C\cdot\|\nabla u\|_{L^2(0,\infty;L^2(\mathbb{R}^3))}^2
\leq C\|u_0\|_{L^2(\mathbb{R}^3)}^2.
\end{split}\end{equation}
\noindent For Maximal functions,
\begin{equation*}\begin{split}
\| \mathcal{M}(\mathcal{M}(|\nabla u|))\|^2_{L^2(0,\infty;L^2(\mathbb{R}^3))}&\leq
C\cdot\|\mathcal{M}(|\nabla u|)\|^2_{L^2(0,\infty;L^2(\mathbb{R}^3))}\\
&\leq C\cdot\|\nabla u\|^2_{L^2(0,\infty;L^2(\mathbb{R}^3))}\\
&\leq C\cdot\|u_0\|^2_{L^2(\mathbb{R}^3)}.\\
\end{split}\end{equation*}
\noindent Let $d\geq1$ and $0<\alpha<2$ and take $q=12/(\alpha+6)$. From
$1<(2/q)<(4/3)$,
\begin{equation*}\begin{split}
\|\mathcal{M}(|\nabla u|^q)\|^{2/q}_{L^{2/q}(0,\infty;L^{2/q}(\mathbb{R}^3))}
&\leq C\cdot\||\nabla u|^q\|^{2/q}_{L^{2/q}(0,\infty;L^{2/q}(\mathbb{R}^3))}\\
&= C\cdot\|\nabla u\|^2_{L^{2}(0,\infty;L^2(\mathbb{R}^3))}\\
&\leq C\cdot\|u_0\|^{2}_{L^2(\mathbb{R}^3)}
\end{split}\end{equation*} and
\begin{equation*}\begin{split}
\| \mathcal{M}(|\mathcal{M}(|\nabla u|)|^q)
\|^{2/q}_{L^{2/q}(0,\infty;L^{2/q}(\mathbb{R}^3))}
&\leq C\cdot\||\mathcal{M}(|\nabla u|)|^q\|^{2/q}_{L^{2/q}(0,\infty;L^{2/q}(\mathbb{R}^3))}\\
&\leq C\cdot\|\mathcal{M}(|\nabla u|)\|^{2}_{L^{2}(0,\infty;L^{2}(\mathbb{R}^3))}\\
&\leq C\cdot\|\nabla u\|^2_{L^{2}(0,\infty;L^2(\mathbb{R}^3))}\\
&\leq C\cdot\|u_0\|^{2}_{L^2(\mathbb{R}^3)}
\end{split}\end{equation*}
where $C$ depends
only
on $\alpha$.\\
\noindent Thanks to the property of Hardy space \eqref{hardy_property}
with \eqref{pressure_hardy_Problem I-n}, we have
\begin{equation*}\begin{split}
\sum_{m=d}^{d+4}\| \sup_{\delta>0}(|(\nabla^{m-1}{h^{\alpha})_\delta}
*\nabla^2 P|)\|_{L^{1}(0,\infty;L^{1}(\mathbb{R}^3))}
&\leq\sum_{m=d}^{d+4}C\|\nabla^2 P
\|_{L^1(0,\infty;\mathcal{H}(\mathbb{R}^3))}\\
&\leq C\|u_0\|_{L^2(\mathbb{R}^3)}^2
\end{split}\end{equation*} where the above $C$ depends only on $d$ and $\alpha$.
\end{proof}
We are ready to prove the main theorem \ref{main_thm}.
\begin{rem}\label{n_infty_rem}
In the following subsections \ref{prof_main_thm_II_alpha_0}
and \ref{prof_main_thm_II_alpha_not_0}, we consider solutions of
{(Problem I-n)} for positive integers $n$.
However it will be clear that
every computation in these subsections can also be verified for the case $n=\infty$
once we assume that the smooth solution $u$ of the Navier-Stokes exists.
This $n=\infty$ case (the original Navier-Stokes)
will be covered in the subsection \ref{proof_main_thm_I}. \\
\end{rem}
We focus on the $\alpha=0$ case of the part (II) first.\\
\subsection{Proof of theorem \ref{main_thm} part (II)
for $\alpha=0$ case}\label{prof_main_thm_II_alpha_0}
\begin{proof}[Proof of theorem \ref{main_thm} part (II) for the $\alpha=0$ case]
\ \\
Let any $u_0$ of \eqref{initial_condition} be given.
From Leray's construction,
there exists a sequence $\{u_n\}_{n=1}^{\infty}$ of $C^{\infty}$ solutions of
{(Problem I-n)} on $(0,\infty)$ with
corresponding pressures $\{P_n\}_{n=1}^{\infty}$.
From now on, our goal is to obtain an estimate for $\nabla^d u_n$ which is uniform in $n$.\\
\noindent For each $n$, $\epsilon>0$, $t>0$ and $x\in\mathbb{R}^3$,
define a new flow $X_{n,\epsilon}(\cdot,t,x)$ by solving
\begin{equation*}\begin{split}
&\frac{\partial X_{n,\epsilon} }{\partial s}
(s,t,x) = u_{n}*\phi_{\frac{1}{n}}*\phi_{\epsilon}(s,X_{n,\epsilon}(s,t,x))
\quad \mbox{ for } s\in[0,t],\\
&X_{n,\epsilon}(t,t,x) =x.
\end{split}
\end{equation*}
For convenience, we define $F_n(t,x)$ and $g_n(t)$.
\begin{equation*}\begin{split}
F_n(t,x)& = \big(|\nabla u_n|^2 + |\nabla^2P_n|
+ |\mathcal{M}(\nabla u_n)|^2\big)(t,x),\quad
g_n(t) = \int_{\mathbb{R}^3}F_n(t,x)dx.
\end{split}\end{equation*}
We
define for $n$, $t>0$ and $0<4\epsilon^2\leq t$
\begin{equation*}
\Omega_{n,\epsilon,t} = \{ x\in \mathbb{R}^3\quad |\quad \frac{1}{\epsilon}
\int_{t-4{\epsilon}^2}^{t}\int_{B({2\epsilon})}F_n(s,X_{n,\epsilon}
(s,t,x)+y)dyds\leq{\bar{\eta}}\}
\end{equation*} where $\bar{\eta}$ comes from the proposition
\ref{local_study_thm}.
We measure the size of $(\Omega_{n,\epsilon,t})^C$:
\begin{equation}\begin{split}\label{omega_estimate}
|(\Omega_{n,\epsilon,t})^C|
&= |\{ x\in \mathbb{R}^3\quad |\quad \frac{1}{\epsilon}
\int_{t-4{\epsilon}^2}^{t}\int_{B({2\epsilon})}F_n(s,X_{n,\epsilon}(s,t,x)+y)dyds > {\bar{\eta}}\}|\\
&\leq\frac{1}{{\bar{\eta}}}\int_{\mathbb{R}^3}\mathcal{B}ig(\frac{1}{\epsilon}
\int_{t-4{\epsilon}^2}^{t}\int_{B({2\epsilon})}F_n(s,X_{n,\epsilon}(s,t,x)+y)dyds\mathcal{B}ig)dx\\
&=\frac{1}{{\bar{\eta}}\epsilon}\mathcal{B}ig(\int_{B({2\epsilon})}
\int_{-4{\epsilon}^2}^{0}\int_{\mathbb{R}^3}F_n(t+s,X_{n,\epsilon}(t+s,t,x)+y)dxdsdy\mathcal{B}ig)\\
&=\frac{1}{{\bar{\eta}}\epsilon}\mathcal{B}ig(\int_{B({2\epsilon})}
\int_{-4{\epsilon}^2}^{0}\int_{\mathbb{R}^3}F_n(t+s,z+y)dzdsdy\mathcal{B}ig)\\
&\leq\frac{1}{{\bar{\eta}}\epsilon}\mathcal{B}ig(\int_{B({2\epsilon})}1 dy\mathcal{B}ig)\mathcal{B}ig(
\int_{-4{\epsilon}^2}^{0}\int_{\mathbb{R}^3}F_n(t+s,\bar{z})d\bar{z}ds\mathcal{B}ig)\\
&\leq\frac{C\epsilon^2}{{\bar{\eta}}}\mathcal{B}ig(
\int_{-4{\epsilon}^2}^{0}\int_{\mathbb{R}^3}F_n(t+s,\bar{z})d\bar{z}ds\mathcal{B}ig)\\
&\leq C\frac{\epsilon^4}{\bar{\eta}}\mathcal{B}ig(\frac{1}{4\epsilon^2}
\int_{-4\epsilon^2}^0g_n(t+s)ds\mathcal{B}ig)
\leq \epsilon^4\mathcal{M}^{(t)}
(\frac{C}{\bar{\eta}}g_n\cdot \mathbf{1}_{(0,\infty)})(t)
=\epsilon^4 \tilde{g}_n(t)\
\end{split}\end{equation} where $\tilde{g}_n= \mathcal{M}^{(t)}
(\frac{C}{\bar{\eta}}g_n\cdot \mathbf{1}_{(0,\infty)})$ and $\mathcal{M}^{(t)}$ is
the Maximal function in $\mathbb{R}^1$. For the change of variables $x\mapsto z=X_{n,\epsilon}(t+s,t,x)$ in the fourth line, we used the fact
that the flow $X_{n,\epsilon}(\cdot,t,x)$ is incompressible.
From the fact that the Maximal operator is bounded from $L^1$ to $L^{1,\infty}$ together with
the lemma \ref{lemma7_problem I-n},
$\|\tilde{g}_n(\cdot)\|_{L^{1,\infty}(0,\infty)}\leq
\frac{C}{\bar{\eta}}\|g_n(\cdot)\|_{L^{1}(0,\infty)}\leq
\frac{C}{\bar{\eta}}\|u_0\|^2_{L^2(\mathbb{R}^3)}$.\\
Now we fix $n, t, \epsilon$ and $ x$ with $n\geq1$, $0<t<\infty$, $0<4\epsilon^2\leq t$ and $x\in\Omega_{n,\epsilon,t}$.
We define ${v}, {Q}$ on $(-4,\infty)\times\mathbb{R}^3$ by
using the Galilean invariance:
\begin{equation}\begin{split}\label{special designed scaling}
{v}(s,y) =& \epsilon u_n(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon y)\\
&- \epsilon (u_n*\phi_{\epsilon})(t+\epsilon^2s,X_{n,\epsilon}(t+\epsilon^2s,t,x))\\
{Q}(s,y) =& \epsilon^2 P_n(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon y)\\
&+ \epsilon y \partial_s[ (u_n*\phi_{\epsilon})(t+\epsilon^2s,
X_{n,\epsilon}(t+\epsilon^2s,t,x))].
\end{split}\end{equation}
\begin{rem}
This specially designed
$\epsilon$-scaling will give the mean zero property to both the velocity
and the advection
velocity of the resulting equation \eqref{special designed scaling result}.
\end{rem}
\noindent Let us denote $\mathcal{B}ox$ and $\Diamond$ by $\mathcal{B}ox= \big(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon y\big)$ and
$\Diamond=\big(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)\big)$,
respectively.
Then the chain rule gives us
\begin{equation*}\begin{split}
& \partial_s{v}(s,y) =
\epsilon^3\partial_t(u_n)(\mathcal{B}ox)
+ \epsilon^3 \big((u_{n}*\phi_{\frac{1}{n}}*\phi_{\epsilon})(\Diamond)\cdot\nabla\big)u_n(\mathcal{B}ox)
- \epsilon \partial_s[(u_{n}*\phi_{\epsilon})(\Diamond)],\\
&\big({v} *_y \phi_{\frac{1}{n\epsilon}}\big)(s,y)
=\epsilon (u_n*\phi_{\frac{1}{n}})(\mathcal{B}ox)-\epsilon(u_{n}*\phi_{\epsilon})(\Diamond),\\
&\int_{\mathbb{R}^3}\big({v} *_y \phi_{\frac{1}{n\epsilon}}\big)(s,z)\phi(z)dz
=\epsilon (u_n*\phi_{\frac{1}{n}}*\phi_{\epsilon})(\Diamond)-\epsilon(u_{n}*\phi_{\epsilon})(\Diamond),\\
&\bigg(\mathcal{B}ig(\big({v} *_y \phi_{\frac{1}{n\epsilon}}\big)(s,y) -
\int_{\mathbb{R}^3}\big({v} *_y \phi_{\frac{1}{n\epsilon}}\big)(s,z)\phi(z)dz\mathcal{B}ig)\cdot\nabla\bigg)
{v}(s,y)=\\
&\epsilon^3 \mathcal{B}ig((u_n*\phi_{\frac{1}{n}})(\mathcal{B}ox)\cdot\nabla\mathcal{B}ig)u_n(\mathcal{B}ox)
-\epsilon^3 \mathcal{B}ig((u_n*\phi_{\frac{1}{n}}*\phi_{\epsilon})(\Diamond)\cdot\nabla\mathcal{B}ig)u_n(\mathcal{B}ox),\\
&-\Delta_y {v}(s,y) = -\epsilon^3\Delta_y u_n(\mathcal{B}ox) \mbox{ and} \\
&\nabla_y{Q}(s,y)= \epsilon^3 \nabla P_n(\mathcal{B}ox)
+ \epsilon \partial_s[ (u_n*\phi_{\epsilon})(\Diamond))].
\end{split}\end{equation*}
\noindent Thus, for $(s,y)\in (-4,\infty)\times\mathbb{R}^3$,
\begin{equation}\begin{split}\label{special designed scaling result}
\mathcal{B}ig[\partial_s{v}+
\bigg(\mathcal{B}ig(\big({v} * \phi_{\frac{1}{n\epsilon}}\big) -
\int\big({v} * \phi_{\frac{1}{n\epsilon}}\big)\phi \mathcal{B}ig)\cdot\nabla\bigg)
{v} +\nabla{Q}-\Delta {v}\mathcal{B}ig](s,y)=0.
\end{split}\end{equation} As a result,
$({v}(\cdot_s,\cdot_y),{Q}(\cdot_s,\cdot_y))$ is a solution of (Problem II-$\frac{1}{n\epsilon}$).\\
From the definition of the Maximal function, we can verify that
$|\mathcal{M}(\nabla {v})|^2$
behaves like $|\nabla v|^2$ under the scaling
in the following sense:
\begin{equation}\begin{split}\label{Maximal_scaling}
\mathcal{M}(\nabla {v})(s,y)
&=\sup_{M>0}\frac{C}{M^3}\int_{B(M)}\epsilon^2(\nabla {u_n})
\big(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon (y+z)\big)dz\\
&=\sup_{\epsilon M>0}\frac{C}{\epsilon^3M^3}\int_{B(\epsilon M)}\epsilon^2(\nabla {u_n})
\big(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon y+\bar{z}\big)d\bar{z}\\
&=\epsilon^2\mathcal{M}(\nabla {u_n})
(\mathcal{B}ox)
\end{split}\end{equation} As a result,
\begin{equation*}\begin{split}
&\int_{-4}^{0}\int_{B(2)}\big(|\nabla {v}(s,y)|^2 +
|\nabla^2{Q}(s,y)| + |\mathcal{M}(\nabla {v})(s,y)|^2\big)dyds \\
&=\epsilon^4\int_{-4}^{0}\int_{B(2)}\mathcal{B}ig[|\nabla {u_n}|^2 +
|\nabla^2{P_n}| + |\mathcal{M}(\nabla {u_n})|^2\mathcal{B}ig](\mathcal{B}ox)dyds \\
&=\epsilon^4\int_{-4}^{0}\int_{B(2)}F_n(\mathcal{B}ox)dyds \\
&=\epsilon^{-1}\int_{t-4\epsilon^2}^{t}\int_{B(2\epsilon)}F_n
\big(s,X_{n,\epsilon}(s,t,x)+y\big)dyds
\leq {\bar{\eta}}
\end{split}\end{equation*} where the first two equalities come from the definition of $
(v,Q)$ and of $F_n$, and the last one follows from the change of variables
$\big(t+\epsilon^2 s,\epsilon y\big)\rightarrow (s,y)$.
Moreover, it satisfies
\begin{equation}\label{local_study_condition_satisfied}\begin{split}
&\int_{\mathbb{R}^3}\phi(z){v}(s,z)dz = 0, \quad -4<s<0.
\end{split}\end{equation}
So $(v,Q)$ satisfies all conditions (\ref{local_study_condition1},
\ref{local_study_condition2})
in the proposition \ref{local_study_thm} with $r=1/(n\epsilon)\in[0,\infty)$. \\
The conclusion of the proposition \ref{local_study_thm} implies that if $x\in\Omega_{n,\epsilon,t}$ for some $n,t$ and $\epsilon$ such that
$4\epsilon^2\leq t$ then
$|\nabla^d {v}(0,0)|\leq C_{d}.$
As a result, using $\nabla^d {v}(0,0)=
\epsilon^{d+1} \nabla^d u_n(t,x)$ for any integer $d\geq 1$,
we have
\begin{equation*}
|\{x\in \mathbb{R}^3 | \quad|\nabla^d u_n(t,x)|>
\frac{C_{d}}{\epsilon^{d+1}}\}|\leq|\Omega_{n,\epsilon,t}^C|
\leq \epsilon^4\cdot\tilde{g}_n(t).\
\end{equation*}
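\noindent Here the identity $\nabla^d {v}(0,0)= \epsilon^{d+1} \nabla^d u_n(t,x)$ is immediate from the definition \eqref{special designed scaling}: the second term in the definition of ${v}$ does not depend on $y$, so for any integer $d\geq 1$,
\begin{equation*}
\nabla^d_y {v}(s,y)=\epsilon^{d+1}\,(\nabla^d u_n)\big(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon y\big),
\end{equation*}
and $X_{n,\epsilon}(t,t,x)=x$ at $(s,y)=(0,0)$.\\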
\noindent Let $K$ be any open bounded subset of $\mathbb{R}^3$.
Also define $p=4/(d+1)$. Then, for any $t>0$,
\begin{equation*}
\beta^p\cdot\mathcal{B}ig|\{x\in K: |
(\nabla^d u_n)(t,x)|
> \beta\}\mathcal{B}ig|\leq
\begin{cases}& \beta^p\cdot|K|
,\quad\mbox{ if } \beta\leq C\cdot t^{-2/p}\\
& C\cdot \tilde{g}_n(t) ,
\quad\mbox{ if } \beta>C\cdot t^{-2/p}.\\
\end{cases}
\end{equation*} Thus,
\begin{equation*}
\|(\nabla^d u_n)(t,\cdot)\|^p_{L^{p,\infty}(K)}
\leq C\cdot\max\big( \tilde{g}_n(t),\frac{|K|}{t^2} \big)
\end{equation*}
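\noindent (To see how the two cases above arise, note that for $\beta> C\cdot t^{-2/p}$, with a suitably large constant $C$, one may choose $\epsilon=(C_{d}/\beta)^{\frac{1}{d+1}}$, which satisfies $4\epsilon^2\leq t$, so that
\begin{equation*}
\beta^p\cdot\mathcal{B}ig|\{x\in K: |
(\nabla^d u_n)(t,x)|
> \beta\}\mathcal{B}ig|
\leq \beta^{p}\cdot\epsilon^{4}\cdot\tilde{g}_n(t)
= (C_{d})^{p}\cdot\tilde{g}_n(t),
\end{equation*}
while for $\beta\leq C\cdot t^{-2/p}$ we simply bound the measure of the set by $|K|$.)\\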
\noindent We pick any $t_0>0$. If we take the ${L^{1,\infty}(t_0,\infty)}$-norm
of the above inequality, then we obtain
\begin{equation}\begin{split}\label{integer_estimate}
\quad\|\nabla^d u_n\|^p_{L^{p,\infty}(t_0,\infty;L^{p,\infty}(K))}
&\leq C\mathcal{B}ig(\| \tilde{g}_n\|_{L^{1,\infty}{(0,\infty)}} + |K|\cdot\|\frac{1}
{{|\cdot|}^2}\|_{L^{1,\infty}(t_0,\infty)}\mathcal{B}ig)\\
&\leq C\mathcal{B}ig(\|u_0\|^2_{L^{2}(\mathbb{R}^3)} +
\frac{|K|}{t_0}\mathcal{B}ig)
\end{split}\end{equation} where $C$ depends only on $d\geq1$.\\
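For the last inequality, with the usual definition of the weak $L^1$ quasinorm, an elementary computation gives
\begin{equation*}
\|\frac{1}{{|\cdot|}^2}\|_{L^{1,\infty}(t_0,\infty)}
=\sup_{\lambda>0} \lambda\cdot|\{t>t_0: t^{-2}>\lambda\}|
=\sup_{\lambda>0} \lambda\cdot(\lambda^{-1/2}-t_0)_{+}
=\frac{1}{4t_0}.
\end{equation*}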
We observe that
the above
estimate is uniform in $n$.
It is well known that both $\nabla u$ and $\nabla^2 u$ are locally integrable functions for any suitable weak solution $u$
which can be obtained by a limiting argument of $u_n$ (e.g. see
Lions \cite{lions}). Thus, the
above estimate \eqref{integer_estimate} holds
even for $u$ with $d=1,2$.\\
\begin{rem}
In fact, for the case $d=1$, the above estimate says $\nabla u\in L^{2,\infty}_{loc}$, which is useless because
we know a better estimate $\nabla u\in L^2$.
\end{rem}
\begin{rem}
For $d \geq 3$,
the above estimate \eqref{integer_estimate} does not give us any
direct information about higher derivatives $\nabla^d u$ of a weak
solution $u$
because full regularity of weak solutions is still open, so
$\nabla^d u$ may not be locally integrable for $d\geq3$.
Instead,
the only thing we can say is that, for $d\geq3$, higher derivatives $\nabla^d u_n$
of the Leray approximations $u_n$ have $L^{4/(d+1),\infty}_{loc}$ bounds which are uniform in $n\geq1$.
\end{rem}
\end{proof}
From now on, we will prove the $0<\alpha<2$ case of the part (II).
\subsection{Proof of theorem \ref{main_thm} part (II)
for $0<\alpha<2$ case}\label{prof_main_thm_II_alpha_not_0}
\begin{proof}[Proof of theorem \ref{main_thm} part (II) for the $0<\alpha<2$ case]
\ \\
We fix $d\geq 1$ and $0<\alpha<2$.
Then,
for any positive integer $n$, any $t>0$ and $x\in\mathbb{R}^3$, this time we define $F_n(t,x)$ by:
\begin{equation*}\begin{split}
F_n(t,x) = \mathcal{B}ig(&|\nabla u_n(t,x)|^2 + |\nabla^2P_n(t,x)| +
|\mathcal{M}(\nabla u_n)(t,x)|^2\\&+
|\mathcal{M}(\mathcal{M}(|\nabla u_n|))|^2+
(\mathcal{M}(|\mathcal{M}(|\nabla u_n|)|^q))^{2/q}\\+
&|\mathcal{M}(|\nabla u_n|^q)|^{2/q}+
\sum_{m=d}^{d+4} \sup_{\delta>0}(|(\nabla^{m-1}{h^{\alpha})_\delta}*\nabla^2 P_n|)
\mathcal{B}ig).
\end{split}\end{equation*}
We use the same definitions of $g_n$, $\tilde{g}_n$, $X_{n,\epsilon}$ and $\Omega_{n,\epsilon,t}$
as in the previous subsection \ref{prof_main_thm_II_alpha_0}, which treated the case $\alpha=0$.
Note that they depend on $d$ and
$\alpha$, and we have
$\|\tilde{g}_n\|_{L^{1,\infty}(0,\infty)}\leq
\frac{C_{d,\alpha}}{\bar{\eta}}\cdot\|u_0\|^2_{L^2(\mathbb{R}^3)}$
from Lemma \ref{lemma7_problem I-n}. \\
Now we pick any $x\in\Omega_{n,\epsilon,t}$ and any $\epsilon$ such that
$4\epsilon^2\leq t$, and define $v$ and $Q$
as in the previous subsection \ref{prof_main_thm_II_alpha_0} (see \eqref{special designed scaling}).\\
\noindent In order to follow the previous subsection \ref{prof_main_thm_II_alpha_0},
the only thing that remains is to verify that every quantity in
$F_n(t,x)$
scales in the same way as $|\nabla v|^2$ under the transformation
\eqref{special designed scaling}.
For the iterated maximal function,
\begin{equation*}\begin{split}
&\mathcal{M}(\mathcal{M}(|\nabla v|))(s,y)\\
&=\sup_{M>0}\frac{C}{M^3}\int_{B(M)}\mathcal{M}(|\nabla v|)(s,y+z)dz\\
&=\sup_{M>0}\frac{C}{M^3}\int_{B(M)}\epsilon^2\mathcal{M}
(|\nabla u_n|)\big(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon (y+z)\big)dz\\
&=\epsilon^2\mathcal{M}(\mathcal{M}(|\nabla u_n|))(\Box)
\end{split}\end{equation*} where $\Box=\big(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon y\big)$
and we used the idea of \eqref{Maximal_scaling}
for the second and third equalities. Likewise,
$\mathcal{M}(|\nabla v|^q)(s,y)
=\epsilon^{2q}\cdot\mathcal{M}(|\nabla u_n|^q)(\Box)$
and
$\mathcal{M}(|\mathcal{M}(|\nabla v|)|^q)(s,y)
=\epsilon^{2q}\cdot\mathcal{M}(|\mathcal{M}(|\nabla u_n|)|^q)(\Box)$. \\
\noindent Also, we have for any function $\mathcal{G}\in C_0^\infty$,
\begin{equation*}\begin{split}
\sup_{\delta>0}&(|{\mathcal{G}_{\delta}}*\nabla^2 Q|)(s,y)=
\sup_{\delta>0}\Big|\int_{\mathbb{R}^3}\frac{1}{\delta^{3}}{\mathcal{G}}(\frac{z}{\delta})\cdot
(\nabla^2 Q)(s,y-z)dz\Big|\\
&=
\sup_{\delta>0}\Big|\int_{\mathbb{R}^3}\frac{\epsilon^4}{\delta^{3}}{\mathcal{G}}(\frac{z}{\delta})\cdot
(\nabla^2 P_n)\big(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon (y-z)\big)dz\Big|\\
&=
\sup_{\delta>0}\Big|\int_{\mathbb{R}^3}\frac{\epsilon^4}{\epsilon^3
\delta^{3}}{\mathcal{G}}(\frac{z}{\epsilon\delta})\cdot
(\nabla^2 P_n)\big(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon y-z\big)dz\Big|
\end{split}\end{equation*}
\begin{equation*}\begin{split}
&=
\sup_{\epsilon\delta>0}\Big|\int_{\mathbb{R}^3}\epsilon^4
{\mathcal{G}}_{\epsilon\delta}(z)\cdot
(\nabla^2 P_n)\big(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon y-z\big)dz\Big|\\
&=
\sup_{\epsilon\delta>0}\epsilon^4\Big|
\Big({\mathcal{G}}_{\epsilon\delta}*
(\nabla^2 P_n)\Big)\big(t+\epsilon^2 s,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon y\big)\Big|\\
&=\epsilon^4
\sup_{\delta>0}\Big|
{\mathcal{G}_{\delta}}*
(\nabla^2 P_n)\Big|(\Box).
\end{split}\end{equation*} Thus, by taking $\mathcal{G}=(\nabla^{m-1}{h^{\alpha}})$,
we have \begin{equation*}\begin{split}
\sup_{\delta>0}(|(\nabla^{m-1}h^{\alpha})_\delta*\nabla^2 Q|)(s,y)
&=\epsilon^4
\sup_{\delta>0}\Big|
(\nabla^{m-1}h^{\alpha})_\delta*
(\nabla^2 P_n)\Big|\big(\Box\big).
\end{split}\end{equation*}
\noindent As a result,
we have
\begin{equation*}\begin{split}
&\int_{-4}^{0}\int_{B(2)}\Big[|\nabla {v}|^2 +
|\nabla^2{Q}| + |\mathcal{M}(\nabla {v})|^2\\
&\quad+|\mathcal{M}(\mathcal{M}(|\nabla v|))|^2+
|\mathcal{M}(|\mathcal{M}(|\nabla v|)|^q)|^{2/q}\\
&\quad+|\mathcal{M}(|\nabla v|^q)|^{2/q}+
\sum_{m=d}^{d+4} \sup_{\delta>0}(
|(\nabla^{m-1}h^{\alpha})_\delta*\nabla^2 Q|)
\Big](s,y)
dyds \\
&=\epsilon^4\int_{-4}^{0}\int_{B(2)}F_n(\Box)dyds \\
&=\epsilon^{-1}\int_{t-4\epsilon^2}^{t}\int_{B(2\epsilon)}F_n
\big(s,X_{n,\epsilon}(s,t,x)+y\big)dyds
\leq {\bar{\eta}}.\\
\end{split}\end{equation*}
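For the reader's convenience, the last two equalities are just the change of variables $s'=t+\epsilon^2 s$, $y'=\epsilon y$ (as recorded in the definition of $\Box$ above), which gives
\begin{equation*}
dy\,ds=\epsilon^{-3}\,\epsilon^{-2}\,dy'\,ds'=\epsilon^{-5}\,dy'\,ds',
\end{equation*}
so that the prefactor $\epsilon^{4}$ becomes $\epsilon^{4-5}=\epsilon^{-1}$, while the domain $(-4,0)\times B(2)$ becomes $(t-4\epsilon^2,t)\times B(2\epsilon)$.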
\noindent Then $(v,Q)$ satisfies condition
\eqref{local_study_condition3} as well as
\eqref{local_study_condition1} and
\eqref{local_study_condition2}
of Proposition \ref{local_study_thm}
with $r=1/(n\epsilon)\in[0,\infty)$. In sum, if $x\in\Omega_{n,\epsilon,t}$
and $4\epsilon^2\leq t$, then
\begin{equation*}
| (-\Delta)^{\alpha/2}\nabla^d {v}(0,0)|\leq C_{d,\alpha}.
\end{equation*}
Because $u_n$ is a smooth solution of (Problem I-n),
$(-\Delta)^{\alpha/2}\nabla^d u_n$ is not only a distribution
but also a locally integrable function.
Indeed, a boot-strapping argument easily shows that $\nabla^d u_n(t)$ behaves well enough at infinity
to use the integral
representation \eqref{fractional_integral} pointwise; for example,
$(C^2\cap W^{2,\infty})$ is enough (for a better approach, see Silvestre \cite{silve:fractional}). Also, it is easily verified that
the resulting function $(-\Delta)^{\alpha/2}[\nabla^d u_n(t,\cdot)](x)$
obtained from the integral representation \eqref{fractional_integral}
satisfies the definition in Remark \ref{frac_rem}. \\
\noindent
As a result, it makes sense to talk about pointwise values
of $(-\Delta)^{\alpha/2}\nabla^d u_n$. Thus,
from the simple observation:
for any integer $d\geq 1$ and any real $0<\alpha<2$,
\begin{equation*}
(-\Delta)^{\alpha/2}\nabla^d {v}(0,0)=
\epsilon^{d+\alpha+1} (-\Delta)^{\alpha/2}\nabla^d u_n(t,x),
\end{equation*}
we can
deduce the following set inclusion:
\begin{equation}\label{fractional_inclusion}
\{x\in \mathbb{R}^3 | \quad|(-\Delta)^{\frac{\alpha}{2}}\nabla^d u_n(t,x)|>
\frac{C_{d,\alpha}}{\epsilon^{d+\alpha +1}}\}\qquad\subset\qquad\Omega_{n,\epsilon,t}^C.
\end{equation}
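The scaling identity used above can be sketched via the singular-integral representation: assuming, as in \eqref{special designed scaling}, that $v(s,y)=\epsilon\,u_n\big(t+\epsilon^2 s,\,X_{n,\epsilon}(t+\epsilon^2 s,t,x)+\epsilon y\big)$ with $X_{n,\epsilon}(t,t,x)=x$, and that \eqref{fractional_integral} is (up to its normalizing constant) the usual formula, we have
\begin{equation*}\begin{split}
(-\Delta)^{\alpha/2}\nabla^d {v}(0,0)
&=c_\alpha\,\mathrm{P.V.}\!\int_{\mathbb{R}^3}\frac{\nabla^d v(0,0)-\nabla^d v(0,z)}{|z|^{3+\alpha}}\,dz\\
&=c_\alpha\,\epsilon^{d+1}\,\mathrm{P.V.}\!\int_{\mathbb{R}^3}\frac{(\nabla^d u_n)(t,x)-(\nabla^d u_n)(t,x+\epsilon z)}{|z|^{3+\alpha}}\,dz
=\epsilon^{d+\alpha+1}\,(-\Delta)^{\alpha/2}\nabla^d u_n(t,x),
\end{split}\end{equation*}
after substituting $z\mapsto z/\epsilon$ in the last integral.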
\noindent Thus we have
for any $0<t<\infty$ and for any $0<4\epsilon^2\leq t$
\begin{equation*}
|\{x\in \mathbb{R}^3 | \quad|(-\Delta)^{\frac{\alpha}{2}}\nabla^d u_n(t,x)|>
\frac{C_{d,\alpha}}{\epsilon^{d+\alpha +1}}\}|\leq|\Omega_{n,\epsilon,t}^C|
\leq \epsilon^4\cdot \tilde{g}_n(t).
\end{equation*}
\noindent Define $p=4/(d+\alpha+1)$. As in the case $\alpha=0$, we obtain
\begin{equation*}\begin{split}
\|(-\Delta)^{\frac{\alpha}{2}}\nabla^d u_n\|^p_{L^{p,\infty}
(t_0,\infty;L^{p,\infty}(K))}
&\leq C\Big(\|u_0\|^2_{L^{2}(\mathbb{R}^3)} +
\frac{|K|}{t_0}\Big)
\end{split}\end{equation*}
for any integer $n,d\geq1$, for any real $\alpha\in(0,2)$, for
any bounded open subset $K$ of $\mathbb{R}^3$ and for any $t_0\in(0,\infty)$ where $C$ depends only on $d$ and $\alpha$.\\
If we further restrict to $(d+\alpha)<3$, then
$ p = \frac{4}{d+\alpha+1}>1 $. This implies
$(-\Delta)^{\alpha/2}\nabla^d u_n\in L^q_{loc}((t_0,\infty)\times K)$
for every $q$ between $1$ and $p$,
with a norm that is uniformly bounded in $n$.
Thus,
by the weak compactness of $L^q$ for $q>1$,
we conclude that
if $u$ is a suitable weak solution obtained as a limit
of the $u_n$, then every higher derivative
$(-\Delta)^{\alpha/2}\nabla^d u$,
defined as in Remark \ref{frac_rem},
lies in $L^1_{loc}$
as long as $(d+\alpha)<3$, with the same estimate
\begin{equation}\begin{split}\label{final_comment}
\|(-\Delta)^{\frac{\alpha}{2}}\nabla^d u\|^p_{L^{p,\infty}
(t_0,\infty;L^{p,\infty}(K))}
&\leq C_{d,\alpha}\Big(\|u_0\|^2_{L^{2}(\mathbb{R}^3)} +
\frac{|K|}{t_0}\Big).
\end{split}\end{equation}
\end{proof}
\subsection{Proof of theorem \ref{main_thm} part (I)}\label{proof_main_thm_I}
\begin{proof}[Proof of Theorem \ref{main_thm}, part (I)]
Suppose that $(u,P)$ is a smooth solution of the Navier-Stokes equations
\eqref{navier}
on $(0,T)$ with \eqref{initial_condition}. Then it satisfies all conditions of
{(Problem I-n)} for $n=\infty$ on $(0,T)$.
As mentioned in Remark \ref{n_infty_rem}, we follow
every step of Subsections
\ref{prof_main_thm_II_alpha_0}
and \ref{prof_main_thm_II_alpha_not_0} except for their final arguments,
which impose $d<3$ or $(d+\alpha)<3$.
Indeed,
under the scaling \eqref{special designed scaling},
the resulting function $(v,Q)$ is a solution of (Problem II-r)
for $r=0$.\\
\noindent
Recall that $u$ is smooth by assumption.
As a result, no restriction such as $d<3$ or $(d+\alpha)<3$ is needed here, because
we no longer require a limiting argument, which is what called for weak compactness.
Thus we obtain \eqref{final_comment}
for any integer $d\geq1$,
any real $\alpha\in[0,2)$
and any $t_0\in(0,T)$.
This finishes
the proof of part (I) of the main Theorem \ref{main_thm}.
\end{proof}
\section*{A. Appendix: proofs of some technical lemmas}\label{appendix}
\begin{proof}[Proof of Lemma \ref{higher_pressure}]
Fix $(n,a,b,p)$ such that $n\geq 2$, $0<b<a<1$ and $1<p<\infty$.
Let $\alpha$ be any multi index such that
$|\alpha|=n$ and $D^{\alpha}=\partial_{\alpha_1} \partial_{\alpha_2}
D^{\beta}$ where $\beta$ is a multi index with $|\beta|=n-2$.\\
Observe that from $\ebdiv(v_2)=0$ and $\ebdiv(v_1)=0$,
\begin{equation*}\begin{split}
-\Delta(D^{\alpha}P) &=\ebdiv\ebdiv D^{\alpha}(v_2\otimes v_1)\\
&=D^{\alpha}\Big(\sum_{ij} (\partial_j v_{2,i})(\partial_iv_{1,j})\Big)\\
&=\partial_{\alpha_1} \partial_{\alpha_2}
H\\
\end{split}\end{equation*} where $H=D^{\beta}\Big(\sum_{ij}
(\partial_j v_{2,i})(\partial_iv_{1,j})\Big)$
and $v_k=(v_{k,1},v_{k,2},v_{k,3})$ for $k=1,2$.\\
\noindent Then for any $(p_1,p_2)$ such that $\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}$
\begin{equation*}\begin{split}
\|H\|_{L^{p}(B(a))}
&\leq C
\| v_2 \|_{W^{n-1,p_2}(B(a))}\cdot
\| v_1 \|_{W^{n-1,p_1}(B(a))}
\end{split}\end{equation*}
where $C$ is independent of the choice of $p_1$ and $p_2$,
and
\begin{equation*}\begin{split}
\|H\|_{W^{1,{\infty}}(B(a))}&
\leq C
\| v_2 \|_{W^{n,\infty}(B(a))}\cdot
\| v_1 \|_{W^{n,\infty}(B(a))}.
\end{split}\end{equation*}
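The first of these bounds is the Leibniz rule combined with H\"older's inequality; schematically (a sketch, suppressing the binomial constants),
\begin{equation*}
\|H\|_{L^{p}(B(a))}
\leq C\sum_{\gamma\leq\beta}\sum_{i,j}
\big\|(D^{\gamma}\partial_j v_{2,i})\,(D^{\beta-\gamma}\partial_i v_{1,j})\big\|_{L^{p}(B(a))}
\leq C\sum_{\gamma\leq\beta}
\| v_2 \|_{W^{|\gamma|+1,p_2}(B(a))}\cdot\| v_1 \|_{W^{n-1-|\gamma|,p_1}(B(a))},
\end{equation*}
and each summand is controlled by $\| v_2 \|_{W^{n-1,p_2}(B(a))}\cdot\| v_1 \|_{W^{n-1,p_1}(B(a))}$ since $|\gamma|\leq|\beta|=n-2$; the second bound follows in the same way, taking one more derivative and sup norms.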
Fix a function $\psi \in C^{\infty}(\mathbb{R}^3)$ satisfying:
\begin{equation*}\begin{split}
&\psi = 1 \quad\mbox{ in } B(b+\frac{a-b}{3}),\quad
\psi = 0 \quad\mbox{ in } (B(b+\frac{2(a-b)}{3}))^C \mbox{ and }
0\leq \psi \leq 1.
\end{split}\end{equation*}
\noindent We decompose $D^{\alpha}P$ by using $\psi$:
\begin{equation*}\begin{split}
-\Delta({\psi} D^{\alpha}P)
&= -{\psi} \Delta D^{\alpha}P
- 2\ebdiv((\nabla {\psi})(D^{\alpha}P)) +
(D^{\alpha} P)\Delta{\psi}\\
&= {\psi}\partial_{\alpha_1} \partial_{\alpha_2}H
- 2\ebdiv((\nabla {\psi})(D^{\alpha}P)) +
(D^{\alpha} P)\Delta{\psi}\\
&= - \Delta Q_{1} - \Delta Q_{2} - \Delta Q_{3}\\
\end{split}
\end{equation*} where
\begin{equation*}\begin{split}
- \Delta Q_{1} &= \partial_{\alpha_1} \partial_{\alpha_2} ({\psi}H), \\
- \Delta Q_{2} &= - \partial_{\alpha_2}[(\partial_{\alpha_1} {\psi})(H)]
-\partial_{\alpha_1}[(\partial_{\alpha_2} {\psi})(H)]
+ (\partial_{\alpha_1} \partial_{\alpha_2} {\psi})(H) \quad\mbox{ and} \\
- \Delta Q_{3}\mbox{ } &= - 2\ebdiv((\nabla {\psi})(D^{\alpha}P)) +
(D^{\alpha} P)\Delta{\psi}.
\end{split}\end{equation*}
Here $Q_{2}$ and $Q_{3}$ are defined by the representation formula
${(-\Delta)}^{-1}(f) = \frac{1}{4\pi}(\frac{1}{|x|} * f)$,\\
while $Q_{1}$ is defined via the Riesz transforms.\\
\noindent Then, by the $L^p$-boundedness of the Riesz transforms,
\begin{equation*}\begin{split}
\|Q_{1}\|_{L^{p}(B(b))}
&\leq
C\|\psi H\|_{L^{p}(\mathbb{R}^3)}
\leq C
\|H\|_{L^{p}(B(a))}\\
&\leq C
\| v_2 \|_{W^{n-1,p_2}(B(a))}\cdot
\| v_1 \|_{W^{n-1,p_1}(B(a))}.\\
\end{split}\end{equation*}
\noindent Moreover, using the Sobolev embedding,
\begin{equation*}\begin{split}
\|Q_{1}\|_{L^\infty(B(b))}
&\leq
C\Big(\|Q_{1}\|_{L^4(B(b))}+\|\nabla Q_{1}\|_{L^4(B(b))}\Big)\\
&\leq C
\|H\|_{W^{1,{4}}(B(a))}
\leq C
\|H\|_{W^{1,{\infty}}(B(a))}\\
&\leq C
\| v_2 \|_{W^{n,\infty}(B(a))}\cdot
\| v_1 \|_{W^{n,\infty}(B(a))}.\\
\end{split}\end{equation*}
\noindent For $x\in B(b)$,
\begin{equation*}\begin{split}
|Q_2(x)|
&= \Bigg|\frac{1}{4\pi}\int_{(B(b+\frac{2(a-b)}{3})-B(b+\frac{a-b}{3}))}
\frac{1}{|x-y|} \Big( \partial_{\alpha_2}[(\partial_{\alpha_1} {\psi})(H)](y)\\
&\quad\quad\quad\quad -\partial_{\alpha_1}[(\partial_{\alpha_2} {\psi})(H)](y)
+ (\partial_{\alpha_1} \partial_{\alpha_2} {\psi})(H)(y)\Big)dy\Bigg|\\
&\leq 2\|\nabla \psi\|_{L^\infty}\cdot
\sup_{y\in B(b+\frac{a-b}{3})^C}(|\nabla_y\frac{1}{|x-y|}|)
\cdot \|H\|_{L^1(B(a))}\\
&\quad\quad+ \|\nabla^2 \psi\|_{L^\infty}\cdot
\sup_{y\in B(b+\frac{a-b}{3})^C}(|\frac{1}{|x-y|}|)
\cdot \|H\|_{L^1(B(a))}\\
&\leq C\cdot \|H\|_{L^1(B(a))}\\
\end{split}\end{equation*} because $|x-y|\geq (a-b)/3$.
Likewise, for $x\in B(b)$,
\begin{equation*}\begin{split}
|Q_3(x)|
&\leq C\mathcal{B}ig(\sum_{k=0}^{n}\|\nabla^{k+1} \psi\|_{L^\infty}\mathcal{B}ig)\cdot
\mathcal{B}ig(\sum_{k=0}^{n}\sup_{y\in B(b+\frac{a-b}{3})^C}
|\nabla^{k+1}_y\frac{1}{|x-y|}|\mathcal{B}ig)
\cdot \|P\|_{L^1(B(a))}\\
&+ C\mathcal{B}ig(\sum_{k=0}^{n}\|\nabla^{k+2} \psi\|_{L^\infty}\mathcal{B}ig)\cdot
\mathcal{B}ig(\sum_{k=0}^{n}\sup_{y\in B(b+\frac{a-b}{3})^C}
|\nabla^{k}_y\frac{1}{|x-y|}|\mathcal{B}ig)
\cdot \|P\|_{L^1(B(a))}\\
&\leq C\cdot \|P\|_{L^1(B(a))}.
\end{split}\end{equation*}
\noindent Finally,
\begin{equation*}\begin{split}
\|\nabla^n &P\|_{L^{p}(B(b))}
\leq \|Q_1\|_{L^{p}(B(b))}+
C\||Q_2|+|Q_3|\|_{L^{\infty}(B(b))}\\
&\leq C\cdot
\|H\|_{L^p(B(a))} +
C\cdot \|H\|_{L^1(B(a))}
+C\cdot \|P\|_{L^1(B(a))}\\
&\leq C_{a,b,p,n}\Big(
\| v_2 \|_{W^{n-1,p_2}(B(a))}\cdot
\| v_1 \|_{W^{n-1,p_1}(B(a))}
+ \|P\|_{L^1(B(a))}\Big)
\end{split}\end{equation*} and
\begin{equation*}\begin{split}
\|\nabla^n &P\|_{L^{\infty}(B(b))}
\leq \||Q_1|+|Q_2|+|Q_3|\|_{L^{\infty}(B(b))}\\
&\leq C\cdot
\|H\|_{W^{1,\infty}(B(a))} +
C\cdot \|H\|_{L^1(B(a))}
+C\cdot \|P\|_{L^1(B(a))}\\
&\leq C_{a,b,n}\Big(
\| v_2 \|_{W^{n,\infty}(B(a))}\cdot
\| v_1 \|_{W^{n,\infty}(B(a))}
+ \|P\|_{L^1(B(a))}\Big).
\end{split}\end{equation*}
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem_a_half_upgrading_large_r}]
We fix $(n,a,b)$ such that $n\geq 0$ and $0<b<a<1$,
and let $\alpha$ be a multi-index with $|\alpha|=n$. Then,
applying $D^{\alpha}$ to \eqref{navier_Problem II-r}, we have
\begin{equation}
\begin{split}\label{eq_d_alpha_large_r}
0
=&\partial_t(D^{\alpha}{v_1})+\sum_{\beta\leq\alpha, |\beta|>0}\binom{\alpha}{\beta}
((D^{\beta}{v_2})
\cdot\nabla)(D^{\alpha-\beta}{v_1} )+({v_2}
\cdot\nabla)(D^{\alpha}{v_1} )\\&
\quad\quad\quad\quad\quad
\quad\quad\quad\quad\quad
+ \nabla(D^{\alpha} P) -\Delta(D^{\alpha} {v_1} ).
\end{split}
\end{equation}
\noindent We define $\Phi(t,x)\in C^\infty$ by
$0\leq\Phi\leq 1,
\Phi = 1 \mbox{ on } Q_{{b}} \mbox{ and }
\Phi = 0 \mbox{ on } Q_{{a}}^C.$
We observe that, for $p\geq \frac{1}{2}$ and for $f\in C^\infty$,
\begin{equation*}\begin{split}
&({p}+\frac{1}{2})|f|^{{p}-\frac{3}{2}}f\cdot\partial_{x} f = \partial_{x}|f|^{{p}+\frac{1}{2}}
\mbox{ and } ({p}+\frac{1}{2})|f|^{{p}-\frac{3}{2}}f\cdot\Delta f
\leq \Delta(|f|^{{p}+\frac{1}{2}}),
\end{split}\end{equation*} which can be verified by a direct computation,
using the fact that $|\nabla f|\geq |\nabla|f||$.\\
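For the reader's convenience, here is the formal computation behind this observation, at points where $f\neq 0$ and with $q:=p+\frac{1}{2}\geq 1$:
\begin{equation*}\begin{split}
\partial_x\big(|f|^{q}\big)&=q\,|f|^{q-2}f\,\partial_x f,\\
\Delta\big(|f|^{q}\big)&=q\,|f|^{q-2}f\,\Delta f+q(q-1)\,|f|^{q-2}|\nabla f|^2
\;\geq\;q\,|f|^{q-2}f\,\Delta f;
\end{split}\end{equation*}
the inequality $|\nabla f|\geq|\nabla|f||$ mentioned above is what is needed to justify the computation across the zero set of $f$.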
\noindent Now
we multiply \eqref{eq_d_alpha_large_r} by $(p+\frac{1}{2}){\Phi}\frac{D^{\alpha}v_1}
{|D^{\alpha}v_1|^{(3/2)-p}}$,
use the above observation
and integrate in $x$. Then we have, for any $p\geq \frac{1}{2}$,\\
\begin{equation*}\begin{split}
&\frac{d}{dt}\int_{\mathbb{R}^3}{\Phi}(t,x)|D^{\alpha}{v_1} (t,x)|^{p+\frac{1}{2}}dx\\
&\leq\int_{\mathbb{R}^3}(|\partial_t{\Phi}(t,x)|+|\Delta{\Phi}(t,x)|)
|D^{\alpha}{v_1} (t,x)|^{p+\frac{1}{2}}dx \\
&\quad+(p+\frac{1}{2})\int_{\mathbb{R}^3}|\nabla D^{\alpha}P(t,x)||D^{\alpha}{v_1} (t,x)|^{p-\frac{1}{2}}dx \\
&\quad+(p+\frac{1}{2})\sum_{\beta\leq\alpha, |\beta|>0}\binom{\alpha}{\beta}
\int_{\mathbb{R}^3}\Big|(D^{\beta}{v_2}(t,x)
\cdot\nabla)D^{\alpha-\beta}{v_1} (t,x)\Big||D^{\alpha}{v_1} (t,x)|^{p-\frac{1}{2}}dx \\
&\quad\quad -\int_{\mathbb{R}^3}{\Phi}(t,x)(v_2(t,x)
\cdot\nabla)(|D^{\alpha}{v_1} (t,x)|^{p+\frac{1}{2}})dx \\
\end{split}
\end{equation*}
\begin{equation*}\begin{split}
&\leq C\||\nabla^n {v_1} (t,\cdot)|^{p+\frac{1}{2}}\|_{L^{1}(B{(a)})} \\
&\quad+C\|\nabla^{n+1} P(t,\cdot)\|_{L^{2p}(B{(a)})}
\cdot\||\nabla^n {v_1} (t,\cdot)|^{p-\frac{1}{2}}\|_{L^{\frac{2p}{2p-1}}(B{(a)})} \\
&+C \| {v_2} (t,\cdot)\|_{W^{n,\infty}(B{(a)})}\cdot
\|{v_1} (t,\cdot)\|_{W^{n,p+\frac{1}{2}}(B{(a)})}\cdot
\||\nabla^n {v_1} (t,\cdot)|^{p-\frac{1}{2}}\|_{L^{\frac{p+\frac{1}{2}}{p-\frac{1}{2}}}(B{(a)})} \\
&-\int_{\mathbb{R}^3}{\Phi}(t,x)\ebdiv\Big({v_2} (t,x)
\otimes|D^{\alpha}{v_1} (t,x)|^{p+\frac{1}{2}}\Big)dx \\
\end{split}
\end{equation*}
\begin{equation*}\begin{split}
&\leq C\|{v_1} (t,\cdot)\|^{p+\frac{1}{2}}_{W^{n,p+\frac{1}{2}}(B{(a)})}\\
&\quad+C\|\nabla^{n+1} P(t,\cdot)\|_{L^{2p}(B{(a)})}
\cdot\|\nabla^n {v_1} (t,\cdot)\|^{p-\frac{1}{2}}_{L^{p}(B{(a)})} \\
&\quad+C \| {v_2} (t,\cdot)\|_{W^{n,\infty}(B{(a)})}
\cdot\|{v_1} (t,\cdot)\|^{p+\frac{1}{2}}_{W^{n,p+\frac{1}{2}}(B{(a)})} \\
&\quad+C\| {v_2} (t,\cdot)\|_{L^{\infty}(B{(a)})}\cdot
\|\nabla^n {v_1} (t,\cdot)\|^{p+\frac{1}{2}}_{L^{p+\frac{1}{2}}(B{(a)})}.\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
\end{split}
\end{equation*}
\noindent Then integrating on $[-a^2,t]$ for any $t\in[-b^2,0]$ gives
\begin{equation*}\begin{split}
&\|D^{\alpha} {v_1} \|^{p+\frac{1}{2}}_{L^{\infty}(-{({b})}^2,0 ;L^{p+\frac{1}{2}}(B{({b})}))}\\
&\leq C\|{v_1} \|^{p+\frac{1}{2}}
_{L^{p+\frac{1}{2}}(-{({a})}^2,0;W^{n,p+\frac{1}{2}}(B{({a})}))}\\
&\quad+C\|\nabla^{n+1} P\|_{L^{1}(-{({a})}^2,0;L^{2p}(B{({a})}))}\cdot
\|\nabla^n {v_1} \|^{p-\frac{1}{2}}_{L^{\infty}(-{({a})}^2,0;L^{p}(B{(a)}))}\\
&\quad+C \| {v_2} \|_{L^{2}(-{({a})}^2,0;W^{n,\infty}(B{({a})}))}
\cdot\|{v_1} \|^{p+\frac{1}{2}}_{L^{2p+1}(-{({a})}^2,0;W^{n,p+\frac{1}{2}}(B{({a})}))} \\
&\quad+C\| {v_2} \|_{L^{2}(-{({a})}^2,0;L^{\infty}(B{(a)}))}\cdot
\|\nabla^n {v_1} \|^{p+\frac{1}{2}}_{L^{2p+1}(-{({a})}^2,0;L^{p+\frac{1}{2}}(B{(a)}))}\\
\end{split}
\end{equation*}
\noindent Thus for the case $p=1/2$, we have
\begin{equation*}\begin{split}
&\|D^{\alpha} {v_1} \|_{L^{\infty}(-{({b})}^2,0 ;L^{1}(B{({b})}))}\\
&\leq C\mathcal{B}ig[
\mathcal{B}ig(\| v_2\|_{L^2(-{{a}^2},0;W^{n,\infty}(B({a})))} +1\mathcal{B}ig)
\cdot
\| {v_1} \|_{L^{2}(-{a}^2,0;W^{n,{1}}(B{(a)}))}\\
&\quad\quad\quad\quad\quad\quad\quad\quad+ \|\nabla^{n+1}P
\|_{L^{1}(-{a}^2,0;L^{1}(B{(a)}))} \mathcal{B}ig]
\end{split}\end{equation*}
while, for the case $p\geq 1$, we have
\begin{equation*}\begin{split}
&\|D^{\alpha} {v_1} \|^{p+\frac{1}{2}}_{L^{\infty}(-{({b})}^2,0 ;L^{p+\frac{1}{2}}(B{({b})}))}\\
&\leq C
\mathcal{B}ig(\| v_2\|_{L^2(-{{a}^2},0;W^{n,\infty}(B({a})))} +1\mathcal{B}ig)
\\&\quad\quad\quad\cdot
\mathcal{B}ig(
\|{v_1} \|^{\frac{1}{p+\frac{1}{2}}}
_{L^{2}(-{({a})}^2,0;W^{n,2p}(B{({a})}))}
\cdot
\|{v_1} \|^{1-\frac{1}{p+\frac{1}{2}}}_{L^{\infty}(-{({a})}^2,0;W^{n,p}(B{({a})}))}\mathcal{B}ig)^{p+\frac{1}{2}}\\
&\quad\quad\quad+C\|\nabla^{n+1} P\|_{L^{1}(-{({a})}^2,0;L^{2p}(B{({a})}))}\cdot
\| {v_1} \|^{p-\frac{1}{2}}_{L^{\infty}(-{({a})}^2,0;W^{n,p}(B{(a)}))}\\
&\leq C_{a,b,n,p}\mathcal{B}ig[
\mathcal{B}ig(\| v_2\|_{L^2(-{{a}^2},0;W^{n,\infty}(B({a})))} +1\mathcal{B}ig)
\cdot
\|{v_1} \|_{L^{2}(-{({a})}^2,0;W^{n,2p}(B{({a})}))}\\
&\quad\quad\quad\quad\quad\quad+ \|\nabla^{n+1} P\|_{L^{1}(-{({a})}^2,0;L^{2p}(B{({a})}))}
\mathcal{B}ig]\cdot
\| {v_1} \|^{p-\frac{1}{2}}_{L^{\infty}(-{({a})}^2,0;W^{n,p}(B{(a)}))}.
\end{split}
\end{equation*}
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem_Maximal 2.5 or 4}]
Fix any $M_0>0$ and $1\leq p<\infty$ first. Then, for any $M\geq M_0$ and
for any $f\in C^1(\mathbb{R}^3)$
such that $ \int_{\mathbb{R}^3}\phi(x)f(x)dx=0$,
we have
\begin{equation*}\begin{split}
&\|f\|_{L^p(B(M))}=
\mathcal{B}ig(\int_{B(M)}\mathcal{B}ig|\int_{\mathbb{R}^3}(f(x)-f(y))
\phi(y)dy\mathcal{B}ig|^{p}dx\mathcal{B}ig)^{1/p}\\
&\leq C\mathcal{B}ig(\int_{B(M)}\mathcal{B}ig(\int_{B(1)}\mathcal{B}ig|f(x)-f(y)\mathcal{B}ig|
dy\mathcal{B}ig)^{p}dx\mathcal{B}ig)^{1/p}\\
&\leq C\mathcal{B}ig(\int_{B(M)}\mathcal{B}ig(\int_{B(1)}
\int_0^1\mathcal{B}ig|(\nabla f)((1-t)x+ty)\cdot(x-y)\mathcal{B}ig|dt
dy\mathcal{B}ig)^{p}dx\mathcal{B}ig)^{1/p}
\end{split}\end{equation*}
\begin{equation*}\begin{split}
&\leq C(M+1)\mathcal{B}ig(\int_{B(M)}\mathcal{B}ig(\int_{B(1)}
\int_0^1\mathcal{B}ig|(\nabla f)((1-t)x+ty)\mathcal{B}ig|dt
dy\mathcal{B}ig)^{p}dx\mathcal{B}ig)^{1/p}\\
&\leq C(M+1)\mathcal{B}ig(\int_{B(M)}\mathcal{B}ig(\int_{B(1)}
\int_0^{\frac{M}{M+1}}\mathcal{B}ig|(\nabla f)((1-t)x+ty)\mathcal{B}ig|dt
dy\mathcal{B}ig)^{p}dx\mathcal{B}ig)^{1/p}\\
&\quad + C(M+1)\mathcal{B}ig(\int_{B(M)}\mathcal{B}ig(\int_{B(1)}
\int_{\frac{M}{M+1}}^1\mathcal{B}ig|(\nabla f)((1-t)x+ty)\mathcal{B}ig|dt
dy\mathcal{B}ig)^{p}dx\mathcal{B}ig)^{1/p}\\
&=(I)+(II)
\end{split}\end{equation*} where we used $x\in B(M)$ and $y\in B(1)$.\\
\noindent For $(I)$,
\begin{equation*}\begin{split}
&(I)
\leq C_{M_0}\mathcal{B}ig(\int_{B(1)}
\int_0^{\frac{M}{M+1}}\mathcal{B}ig(\int_{B(M)}
\mathcal{B}ig|(\nabla f)((1-t)x+ty)\mathcal{B}ig|^{p}dx\mathcal{B}ig)^{1/p}dt
dy\mathcal{B}ig)\\
&\leq C_{M_0}\cdot M
\int_0^{\frac{M}{M+1}}\frac{1}{(1-t)^{3/p}}\mathcal{B}ig(\int_{B((1-t)M+1)}
\mathcal{B}ig|(\nabla f)(z)\mathcal{B}ig|^{p}dz\mathcal{B}ig)^{1/p}dt\\
&\leq C_{M_0}\cdot M
\int_0^{\frac{M}{M+1}}\frac{1}{(1-t)^{3/p}}
\mathcal{B}ig(
\int_{B(1)}
\int_{B((1-t)M+2)}
\mathcal{B}ig|(\nabla f)(z+u)\mathcal{B}ig|^{p}dzdu\mathcal{B}ig)^{1/p}dt\\
&\leq C_{M_0}\cdot M
\int_0^{\frac{M}{M+1}}\frac{((1-t)M+2)^{3/p}}{(1-t)^{3/p}}\mathcal{B}ig(
\int_{B(1)}
\mathcal{M}(|\nabla f|^{p})(u)du\mathcal{B}ig)^{1/p}dt\\
&\leq C_{M_0,p}\cdot M \cdot\|\mathcal{M}(|\nabla f|^{p})\|^{1/p}_{L^1(B(1))}
\int_0^{\frac{M}{M+1}}\mathcal{B}ig(M^{3/p}+\frac{1}{(1-t)^{3/p}}\mathcal{B}ig)
dt
\\
&\leq C_{M_0,p}\cdot M \cdot\|\mathcal{M}(|\nabla f|^{p})\|^{1/p}_{L^1(B(1))}
\mathcal{B}ig(M^{3/p}+\int_{\frac{1}{M+1}}^{1}\frac{1}{s^{3/p}}
ds\mathcal{B}ig)
\\
&\leq C_{M_0}\cdot M \cdot\|\mathcal{M}(|\nabla f|^{p})\|^{1/p}_{L^1(B(1))}
\mathcal{B}ig(M^{3/p}+{(M+1)}^{3/p}
\mathcal{B}ig)
\\
&\leq C_{M_0,p}\cdot M^{1+\frac{3}{p}}
\cdot\|\mathcal{M}(|\nabla f|^{p})\|^{1/p}_{L^1(B(1))}
\end{split}\end{equation*} where we used
an integral version of Minkowski's inequality
and $(1+M)\leq C_{M_0}\cdot M$ from $M\geq M_0$
for the first inequality.\\
\noindent For $(II)$, observe that if $\frac{M}{M+1}\leq t\leq 1$, then
$0\leq 1-t\leq \frac{1}{M+1}$ and
\begin{equation*}\begin{split}
|(1-t)x+ty|\leq (1-t)\cdot|x|+t|y|\leq \frac{M}{M+1}+1\leq 2
\end{split}\end{equation*} because $x\in B(M)$ and $y\in B(1)$. Thus
\begin{equation*}\begin{split}
(II)
& \leq
C_{M_0}\cdot M\mathcal{B}ig(\int_{B(M)}\mathcal{B}ig(
\int_{\frac{M}{M+1}}^1\frac{1}{t^3}\int_{B(2)}\mathcal{B}ig|(\nabla f)(z)\mathcal{B}ig|dz
dt\mathcal{B}ig)^{p}dx\mathcal{B}ig)^{1/p}\\
& \leq
C_{M_0}\cdot M\cdot M^{3/p}\cdot
\int_{B(2)}|(\nabla f)(z)|dz
\cdot
\int_{\frac{M}{M+1}}^1\frac{1}{t^3}
dt\\
& \leq
C_{M_0} M^{1+\frac{3}{p}}\cdot
\|\nabla f\|_{L^1(B(2))}.
\end{split}\end{equation*}
\end{proof}
\end{document}
\begin{document}
\title{Cusp formation for a nonlocal evolution equation}
\author{Vu Hoang \and Maria Radosz}
\maketitle
\begin{abstract}
In this paper, we introduce a nonlocal evolution equation inspired by the C\'ordoba-C\'ordoba-Fontelos nonlocal transport equation.
The C\'ordoba-C\'ordoba-Fontelos equation can be regarded as a model for the 2D surface quasigeostrophic equation or the Birkhoff-Rott equation.
We prove blowup in finite time, and more importantly, investigate conditions under which the solution forms a cusp in finite time.
\end{abstract}
\section{Introduction}
\label{intro}
The non-linear, non-local active scalar equation
\begin{equation}\label{CCF}
\theta_t + u \theta_y = 0, ~~ u = H\theta
\end{equation}
for $\theta = \theta(y, t)$ was proposed by A. C\'ordoba, D. C\'ordoba and A. Fontelos \cite{CCF} as a one-dimensional analogue of the
2D surface quasigeostrophic equation (SQG) \cite{constantin1994formation}
and the Birkhoff-Rott equation \cite{Saffmann}. We refer to \eqref{CCF} as the CCF equation.
The study of one-dimensional equations modeling various aspects of three- and two-dimensional fluid mechanics problems has a
long-standing tradition (\cite{Castro,CLM, HouLuo1,HouLi}).
It is well-known that smooth solutions of \eqref{CCF} lose their regularity in finite time (\cite{CCF,SashaRev,silvestre2014transport}).
However, little is understood about the precise way in which the singularity forms. A particularly simple scenario
is as follows: we consider smooth, even solutions, i.e. $\theta(-y, t) = \theta(y, t)$, such that
the initial condition $\theta_0(y)$ has a single nondegenerate maximum and decays sufficiently quickly at infinity.
Numerical simulations \cite{CCF, silvestre2014transport} seem to indicate that the solution forms a \emph{cusp} at
the singular time $T_s$, so that
\begin{equation}\label{universalCusp}
\theta(y, T_s) \sim \theta_0(0)- \text{const.}\,|y|^{1/2}
\end{equation}
close to the origin. In \cite{silvestre2014transport}, the authors make the conjecture that, in a generic sense,
all maxima eventually develop into cusps of the form \eqref{universalCusp}.
Besides the original blowup proof \cite{CCF}, various proofs of blowup for \eqref{CCF} have been found recently \cite{silvestre2014transport}.
In \cite{Tam}, a discrete model for \eqref{CCF} was studied.
Many of the known proofs work when an additional viscosity term is present in \eqref{CCF}. On the other hand, none of them explains the cusp formation, since
the shape of the solution is not controlled at the singular time. The task of establishing that solutions of \eqref{CCF}
exhibit cusp formation appears to be challenging.
We believe that making progress on the question of cusp formation is important for the following reasons. In order to prove cusp formation
for \eqref{CCF}, we need to develop insight into the intrinsic mechanism of singularity formation. This mechanism is not well-understood
and seems to favor blowup at certain predetermined points. Moreover, the solution apparently remains regular outside of these points,
even at the time of blowup.
This situation is similar to recent numerical observations for the three-dimensional Euler equations
by T. Hou and G. Luo \cite{HouLuo1}. There, the authors exhibit solutions for which the magnitude of the vorticity
vector appears to become infinite at an intersection point of a domain boundary with a symmetry axis.
Their blowup scenario is referred to as the \emph{hyperbolic flow scenario}.
In \cite{HouLuo2}, a 1D model problem for axisymmetric flow was introduced, and finite time blowup
was proven in \cite{sixAuthors}. We would also like to mention \cite{Pengfei}, in which the authors study a
simplified model (proposed in \cite{CKY}) of the equation introduced by T. Hou and G. Luo in \cite{HouLuo2},
proving the existence of self-similar solutions using computer-assisted means.
A sufficiently detailed understanding of the singularity formation mechanism for nonlocal active scalar equations
like \eqref{CCF} is very likely a prerequisite for obtaining a blowup proof for the 3D Euler equations.
In this paper, our goal is to lay groundwork for future investigation by studying a nonlocal
active scalar equation for which the singular behavior of the solution at the blowup time can be characterized.
We develop new techniques that allow us to obtain some control on the singular shape.
Our model problem is related to \eqref{CCF} and reads as follows:
\begin{equation}\label{outerModel}
\begin{split}
&\theta_t + u \theta_y = 0, \\
&u_y (y, t) = \int_{y}^\infty \frac{ \theta_y(z, t)}{z}~dz, ~~u(0, t) = 0.
\end{split}
\end{equation}
We consider solutions $\theta$ defined on $[0, \infty)$ and think of them as being
extended to $\mathbb{R}$ as even functions.
Our main result, Theorem \ref{blowupThm}, states that solutions blow up in finite time, either
forming a cusp or a needle-like singularity. To the best of our knowledge, this is the first
time such a scenario has been rigorously established.
Our paper is structured as follows: in the remaining part of this introductory section, we motivate the introduction of
\eqref{outerModel}. In Section \ref{mainRes}, we state our main results.
The remainder of our paper provides the proofs of Theorems
\ref{thmWellposed} and \ref{blowupThm}.
\subsection{Derivation of the model equation.}
An essential issue with nonlocal transport equations is predicting from the outset where the
singularity will form. The even symmetry of $\theta$ helps, by creating a stagnant
point of the flow at the origin. At this stage of our understanding of blowup scenarios, however, this still does not rule out
the possibility of a singularity also forming somewhere else.
Our goal is to simplify by writing down a model where singularities can only form at a given point,
following ideas of ``hyperbolic approximation" or ``hyperbolic cut-off"
from \cite{kiselev2013small} and \cite{CKY}. We first describe
how this applies to \eqref{CCF}.
The velocity gradient of equation \eqref{CCF} is given by $u_y = H\theta_y$. Using the odd symmetry of
$\theta_y$, we can write
\begin{equation*}
u_y(y, t) = PV\!\!\!\int_0^{2 y} \frac{2 z \theta_y(z, t)\, dz}{z^2-y^2} + \int_{2 y}^\infty \frac{2 z \theta_y(z, t)\, dz}{z^2-y^2}.
\end{equation*}
In a first approximation we only retain the long-range part of the interaction,
and therefore consider the model Biot-Savart law
\begin{equation}\label{cutOff}
u_y(y, t) = \int_{y}^\infty \frac{ \theta_y(z, t)\, dz}{z}, ~~u(0, t) = 0.
\end{equation}
The long-range part of the interaction has been approximated by shifting the
kernel singularity to the origin (and we have also dropped the non-essential factor $2$).
In general, the hyperbolic cut-off emphasizes the non-local role of the fluid surrounding the singular point
as a leading part of the blowup mechanism. Due to certain monotonicity properties,
the intrinsic blowup mechanism becomes transparent (see Remark \ref{expl}).
Our main inspiration to derive \eqref{outerModel} came from \cite{kiselev2013small}, where
the hyperbolic flow scenario for the 2D Euler equation on a disc was considered.
There, a useful new representation of the velocity field for the 2D Euler equation was discovered. Briefly,
the velocity field close to a stagnant point of the flow at the intersection of a domain boundary and a symmetry
axis of the solution was decomposed into a main part and error term
$$
u_1(x_1, x_2, t) = - x_1\iint_{y_1\geq x_1, y_2\geq x_2}\frac{y_1 y_2}{|y|^4}\omega(y, t) ~dy_1 dy_2 + x_1 B_1(x_1, x_2, t).
$$
The error term $B_1$ could be controlled in a certain sector. Note that, as in \eqref{cutOff}, the main part of the
velocity field is given by an integral over a kernel with singularity at $y = (0, 0)$.
Finally we note that K.~Choi, A.~Kiselev and Y.~Yao use a similar process in \cite{CKY}
to approximate a one-dimensional model of
an equation proposed by T.~Hou and G.~Luo in \cite{HouLuo2}.
\section{Main results}\label{mainRes}
We prove that for the model \eqref{outerModel}, solutions blow up in
finite time for generic data, and that the singularity is either a cusp or a needle-like
singularity.
We are looking for solutions of \eqref{outerModel} in the class
\begin{equation}
\theta(y, t) \in C( [0, T),\,C_0^2(\mathbb{R}^+))\cap C^1([0, T), \, C^1(\mathbb{R}^+)), ~~\theta_y(0, t) = 0,
\end{equation}
where $\mathbb{R}^+ = [0, \infty)$ and $C_0^2(\mathbb{R}^+)$ denotes the twice continuously differentiable
functions with compact support in $\mathbb{R}^+$.
This is a natural class for solutions of \eqref{outerModel} which have a single maximum at the origin.
Our results are as follows:
\begin{theorem}\label{thmWellposed}
The problem \eqref{outerModel} is locally well-posed for compactly supported initial data
\begin{equation*}
\theta_0 \in C_0^2(\mathbb{R}^+), ~~\partial_y\theta_0(0)=0.
\end{equation*}
\end{theorem}
\begin{theorem}\label{blowupThm}
Assume the initial data $\theta_0$ is compactly supported, nonincreasing, nonnegative and such that
\begin{equation*}
\partial_{yy}\theta_0(0) < 0.
\end{equation*}
Then there exist a finite time $T_s>0$ and constants
$\nu\in (0, 1), c > 0$ depending on the initial data
such that $\theta(y, t) \in C^1( [0, T_s),\,C_0^2(\mathbb{R}^+))$ and
$$\theta(y, T_s)=\lim_{t \to T_s} \theta(y, t)$$
exists for all $y\in \mathbb{R}^+$ and
\begin{equation*}
\theta(\cdot, T_s)\in C^2(0, \infty).
\end{equation*}
Moreover,
\begin{equation}\label{cuspBound}
\theta(y, T_s) \leq \theta_0(0) - c |y|^{\nu} \quad (y\in \mathbb{R}^+).
\end{equation}
\end{theorem}
The singularity formed at time $T_s$ is at least a cusp, but can potentially also be ``needle-like'' (see Figure \ref{f1}). Needle formation arises when
$$
\lim_{y\to 0+} \theta(y, T_s) < \theta_0(0).
$$
To the best of our knowledge,
this is the first time singularity formation of the type described is rigorously established for a nonlocal active
scalar transport equation.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.6]{fig7}
\end{center}
\caption{Illustration of the upper bound (black), $\theta(\cdot, T_s)$ at singular time with cusp (grey) and a scenario with ``needle formation'' (light grey). Note that all three have the same value at $y=0$.
}\label{f1}
\end{figure}
\section{Proofs}
\subsection{An equation for the flow map.}
Our approach is characterized by working in Lagrangian coordinates and exploiting the properties of the flow map.
From now on, $x$ denotes the Lagrangian space variable. The flow map $\Phi$ associated to \eqref{outerModel} satisfies
\begin{equation}\label{eq_flowmap}
\Phi_t(x, t) = u(\Phi(x, t), t).
\end{equation}
Note that because of $u(0,t)=0$, $\Phi(0,t)=0$ holds.
The basic equation for the stretching (the derivative of the flow map, $\phi = \partial_{x}\Phi$) follows from differentiating \eqref{eq_flowmap} in $x$:
\begin{equation}\label{basic_Stretch}
\phi_t(x, t) = u_y(\Phi(x, t), t) \phi(x, t).
\end{equation}
With some nonnegative, compactly supported, smooth $g: [0, \infty)\to \mathbb{R}$, $g\not \equiv 0$, one can make an ansatz for the solution $\theta$ by setting
\begin{equation}\label{def_te}
\theta(y, t) = g(\Phi^{-1}(y, t)).
\end{equation}
The following relation holds:
\begin{equation}\label{rel:basic}
\theta_y(y, t) = g'(\Phi^{-1}(y, t))/\phi(\Phi^{-1}(y, t)).
\end{equation}
Computing $u_y(\Phi(x, t), t)$ using \eqref{cutOff}, \eqref{rel:basic} and the substitution $z = \Phi^{-1}(y, t)$ gives
\begin{equation}\label{rel_u}
u_y(\Phi(x, t), t) = \int_{\Phi(x, t)}^\infty \frac{g'(\Phi^{-1}(y, t))}{y ~\phi(\Phi^{-1}(y, t))}\, dy = \int_x^{\infty} \frac{g'(z)}{\Phi(z,t)}\, dz.
\end{equation}
Combining this with \eqref{basic_Stretch}, we obtain the following central evolution equation for $\phi$:
\begin{align}\label{main_eq}
\begin{split}
&\phi_t(x, t) = \phi(x, t) \int_x^{\infty} \frac{g'(z) ~dz}{\Phi(z, t)}, \qquad\Phi(z, t) = \int_0^z \phi(\sigma, t)\, d\sigma,\\
&\phi(x, 0) = \phi_0(x).
\end{split}
\end{align}
Here, $(\phi_0, g)$ are given functions with $(\phi_0, g)\in C^1(\mathbb{R}^+)\times C^2_0(\mathbb{R}^+)$ having the properties
\begin{equation}\label{cond_g}
\inf_{\mathbb{R}^+} \phi_0 > 0, ~~ g'(0) = 0.
\end{equation}
We shall consider solutions $\phi\in C^1(\mathbb{R}^+\times[0, T])$ with the additional property
\begin{equation}\label{eq_inf}
\inf_{(x, t)\in \mathbb{R}^+\times [0, T]} \phi(x, t) > 0.
\end{equation}
The following lemma clarifies the relation between $\theta$ and $(\phi_0, g)$:
\begin{lemma}\label{backToTheta}
Let $\theta_0 \in C^2_0(\mathbb{R}^+)$ with $\partial_y \theta_0(0) = 0$ be given.
Suppose $$\phi\in C^1(\mathbb{R}^+\times[0, T])$$ is a solution of \eqref{main_eq} with given $(\phi_0, g)$ satisfying
\eqref{cond_g} and
\begin{equation}\label{choice_g}
\theta_0(y) = g(\Phi_0^{-1}(y)) \quad (y\in \mathbb{R}^+), ~~\text{where }\Phi_0(x) = \int_0^x \phi_0(\sigma)\,d\sigma.
\end{equation}
Then $\theta(y, t) = g(\Phi^{-1}(y,t))\in C( [0, T),\,C_0^2(\mathbb{R}^+))\cap C^1([0, T), \, C^1(\mathbb{R}^+))$ is a solution of \eqref{outerModel} as long as \eqref{eq_inf} holds.
\end{lemma}
\begin{proof}
Observe first that the definition of $g$ in \eqref{choice_g} implies $g'(0) = 0$ via the assumption $\partial_y \theta_0(0) = 0$.
Let $\theta(y, t)$ be defined by $\theta(y, t) = g(\Phi^{-1}(y,t))$ (note that $\Phi^{-1}(\cdot, t)$ is well-defined for
all $t\in [0, T]$ because of \eqref{eq_inf}). A calculation (see \eqref{rel_u}) shows that the integral
$$
\int_x^{\infty} \frac{g'(z)}{\Phi(z,t)}\, dz
$$
equals $u_y(\Phi(x, t), t)$, where $u$ is the velocity field \eqref{cutOff}. Using \eqref{main_eq}, this implies
$$
\partial_{t}\partial_{x}\Phi(x, t) = \partial_{x}\Phi(x, t)\, u_y(\Phi(x, t), t).
$$
Taking into account $\Phi(0, t) = 0$ and integrating with respect to $x$, we obtain $\partial_t \Phi(x, t) = u(\Phi(x, t), t)$. By differentiating
$$\theta(\Phi(x,t),t)=g(x)$$
in time, we obtain $\theta_t(\Phi(x,t),t)+\Phi_t(x,t)\,\theta_y(\Phi(x,t),t)=0$, and hence $\theta$ satisfies equation \eqref{outerModel}. The initial condition $\theta(y, 0) = \theta_0(y)$ follows
from \eqref{choice_g}. Finally, the required regularity for $\theta$ is also straightforward to verify.
To check, for instance, $\theta(\cdot, t)\in C^2(\mathbb{R}^+)$, we observe first
that \eqref{rel:basic} implies that $\theta_y$ is continuous on $\mathbb{R}^+$. Differentiating \eqref{rel:basic}, we obtain
$$
\theta_{yy}(y, t) = \frac{g''(\Phi^{-1}(y, t))- g'(\Phi^{-1}(y, t)) \phi'(\Phi^{-1}(y, t), t) \phi(\Phi^{-1}(y, t), t)^{-1}}{\phi({\Phi^{-1}(y, t)}, t)^2}
$$
and thus $\theta_{yy}\in C(\mathbb{R}^+)$.
\end{proof}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.6]{cuspfig2.pdf}
\end{center}
\caption{Illustration of $\phi$.
}\label{f2}
\end{figure}
\begin{remark}\label{expl}
A fairly clear picture of the blowup mechanism emerges by visualizing the solution of \eqref{main_eq}.
Later, we will choose $\phi_0$ to be a positive constant and adjust $g$ so that we obtain solutions of \eqref{outerModel}
by using Lemma \ref{backToTheta}. The function $\phi(x, t)$ is monotone nondecreasing in $x$ and is depicted in Figure \ref{f2} for times $t>0$.
The blowup happens when $\phi(0, t)$ reaches zero in finite time. The intuitive reason is that, at the singular time $T_s$, the odd continuation of the
inverse $\Phi^{-1}(\cdot,T_s)$ to $(-\infty,\infty)$ is not differentiable at $x=0$. The main driving mechanism for blowup in finite time is the behavior of the
following integral:
$$
-\int_0^\infty \frac{g'(z) ~dz}{\Phi(z, t)}.
$$
We shall show that it grows at least like $\phi(0, t)^{-1+\beta}$ for some $\beta\in (0, 1)$, thus implying
$\phi_t(0, t) \leq - C \phi(0, t)^{\beta}$.
\end{remark}
The following theorem shows that nontrivial solutions of \eqref{main_eq} cannot be
defined on an infinite time interval. We present this theorem for illustrative purposes, to show that finite-time blowup via
a contradiction argument can easily be obtained for \eqref{main_eq}. The task of characterizing
the singular shape is, however, much more challenging (see Section \ref{sec_proof_blowupThm} below).
\begin{theorem}\label{thm:finite}
Suppose \eqref{cond_g} holds for $(\phi_0, g)$ and that $g(0)> 0$.
There can be no solution $\phi\in C^1(\mathbb{R}^+\times[0, \infty))$ of \eqref{main_eq} satisfying
\eqref{eq_inf}.
\end{theorem}
\begin{proof}
For the purpose of deriving a contradiction, let us assume that $\phi(\cdot, t)$ is defined for $t\in [0, \infty)$. Condition \eqref{eq_inf}
implies $\Phi(x, t) \geq 0$ for all times. Let $[0, a] = \operatorname{supp} g$. By taking the derivative of
$\Phi(a, t) = \int_0^a \phi(\sigma, t)\, d\sigma$ with respect to $t$ and using \eqref{main_eq} we get
$$
\Phi_t(a, t) = \int_0^a \int_{\sigma}^{a} \phi(\sigma, t)\frac{g'(z) }{\Phi(z, t)} ~dz\, d\sigma.
$$
The integral on the right-hand side can be written as
\begin{equation*}
\int_{0}^a\int_0^z\phi(\sigma, t)\frac{g'(z) }{\Phi(z, t)}\, d\sigma dz = \int_0^a \Phi(z, t) \frac{g'(z) }{\Phi(z, t)}\, dz = g(a)-g(0).
\end{equation*}
Because of $g(a) = 0$, we get $\Phi_t(a, t) = - g(0)<0$, so $\Phi(a,t)=\Phi(a,0)-g(0)\,t$ becomes negative for large times, a contradiction.
\end{proof}
\subsection{Local existence}
\begin{theorem}[Local existence and uniqueness] \label{locEx}
The problem \eqref{main_eq} has a unique local solution $\phi\in C^1(\mathbb{R}^+\times [0,T])$ for some $T>0$,
provided that $\phi_0\in C^1(\mathbb{R}^+)$, $g\in C_0^2(\mathbb{R}^+)$ and \eqref{cond_g} holds for $(\phi_0,g)$. Moreover, $$\inf_{(x, t)\in \mathbb{R}^+\times [0, T]} \phi(x, t) > 0.$$
\end{theorem}
\begin{proof} The proof is standard, so we just sketch a few details. Let $\operatorname{supp} g' = [0, a]$.
Rewrite \eqref{main_eq} as an integral equation by integrating \eqref{main_eq} in time:
\begin{align}\label{int_eq}
\phi(x,t) & = \phi_0(x)+\int_0^t \phi(x,s)\,\left(\int_x^\infty\frac{g'(z)}{\Phi(z,s)}~dz\right)~ds =:\phi_0(x)+\mathcal{G}[\phi](x,t).
\end{align}
Given $(g,\phi_0)$, let $X_{T,\mu}$ be the set of functions $\phi$ defined by the following conditions:
\begin{align*}
\phi \in C\left([0,T], C^1(\mathbb{R}^+)\right),~\|\phi-\phi_0\|_{C\left([0,T],C^1[0, a]\right)}\le \mu,~~\phi(x, t) = \phi_0(x)~\text{for}~x\geq a,
\end{align*}
where $T,\mu>0$ are to be chosen below. The norm $\|\cdot\|_{C\left([0,T], C^1[0, a]\right)}$
is given by
$$
\max_{t\in [0, T]}\left(\|f(\cdot, t)\|_\infty+\|f_x(\cdot, t)\|_{\infty}\right),
$$
with $\|\cdot\|_{\infty}$ denoting the supremum norm on $[0, a]$.
First we need to show that the operator $\phi_0+\mathcal{G}$ is well-defined.
For $\phi \in X_{T,\mu}$ we have the following estimate:
\begin{align*}
\Phi(z,t) & = \int_0^z \phi(\sigma,t)~d\sigma
\ge -\left|\int_0^z (\phi(\sigma,t) -\phi_0(\sigma))~d\sigma\right|+\int_0^z \phi_0(\sigma)~d\sigma\\
& \ge -\mu z+\Big(\displaystyle\inf_{\substack{\mathbb{R}^+}}\phi_0\Big)z
=\Big(\displaystyle\inf_{\substack{\mathbb{R}^+}}\phi_0-\mu\Big)z.
\end{align*}
So if $\mu=\frac{1}{4}\displaystyle\inf_{\substack{\mathbb{R}^+}}\phi_0$, then we have
$$
\frac{g'(z)}{\Phi(z,t)} \leq C(g, \phi_0) \quad (0\leq z \leq a)
$$
because of $g'(0) = 0$, i.e. $g'(z)\le \text{const.}\,z$. Using $\|\partial_x\phi\|_\infty\le \mu + \|\partial_x\phi_0\|_\infty$, $\|\phi\|_\infty\le \mu+ \|\phi_0\|_\infty$ and
the lower bound above, we obtain
\begin{equation*}
|\mathcal{G}[\phi](x, t)| \leq C(g,\phi_0) T, ~~|\partial_x \mathcal{G}[\phi](x, t)| \leq C(g,\phi_0) T,
\end{equation*}
and $\mathcal{G}[\phi](x, t) = 0$ for all $x\geq a, t\in [0, T]$.
Hence $\phi\mapsto\phi_0+\mathcal{G}[\phi]$ maps $X_{T,\mu}$ into $X_{T, \mu}$ for sufficiently
small $T>0$. A straightforward, but tedious, calculation shows the inequality
\begin{align*}
\|\mathcal{G}[\phi]-\mathcal{G}[\psi]\|_{C([0,T], C^1[0, a])} & \le C(\mu, g, a, \phi_0)\, T \,\|\phi-\psi\|_{C([0,T], C^1[0, a])}.
\end{align*}
The proof is concluded by applying the Contraction Mapping Theorem to the operator $\phi_0+\mathcal{G}$, choosing $T>0$
sufficiently small to ensure the contraction property. Note that a $\phi\in X_{T, \mu}$ satisfying the equation \eqref{int_eq} lies
automatically in $C^1(\mathbb{R}^+\times[0, T])$.
\end{proof}
The proof of Theorem \ref{thmWellposed} now follows from Theorem \ref{locEx}, by taking the local solution
of \eqref{main_eq} and defining $\theta$ via \eqref{def_te}. More precisely, we may take e.g. $\phi_0 \equiv 1$ and
$g = \theta_0$.
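Concretely, with $\phi_0\equiv 1$ we have $\Phi_0(x)=x$, so the compatibility condition \eqref{choice_g} reduces to
\begin{equation*}
g(x)=\theta_0(\Phi_0(x))=\theta_0(x),
\end{equation*}
and $g'(0)=\partial_y\theta_0(0)=0$, so that \eqref{cond_g} holds and Lemma \ref{backToTheta} applies.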
\subsection{Proof of Theorem \ref{blowupThm}.}\label{sec_proof_blowupThm}
\emph{Setup for the proof and preparatory lemmas.} For convenience, we set
\begin{equation*}
\phi_0(x) = \epsilon_0>0,
\end{equation*}
with a small $0<\epsilon_0<1$ to be chosen later, instead of starting with $\phi_0(x) \equiv 1$ (see also Remark \ref{why_eps_0}).
To obtain the given initial condition $\theta_0$ we set
\begin{equation}\label{def_g}
g(z; \epsilon_0) := \theta_0(\epsilon_0 z) \in C^2(\mathbb{R}^+).
\end{equation}
Note that \eqref{def_g} ensures that $\theta$ defined by $\theta(y, t) = g(\Phi^{-1}(y,t);\epsilon_0)$ satisfies the initial
condition $\theta(\cdot, 0) = \theta_0$ for any $\epsilon_0>0$. Observe carefully that varying $\epsilon_0$ does not correspond to
a rescaling of the initial data for the equation \eqref{outerModel}.
Now fix constants $0<K_0<K_1$ such that
\begin{equation}\label{eq_K1}
\epsilon_0^2 K_0 z \leq -g'(z; \epsilon_0) \leq \epsilon_0^2 K_1 z \quad (0 \leq z \leq 1).
\end{equation}
To see that such constants exist, observe that $g''(z; \epsilon_0) = \epsilon_0^2 \partial_{yy}\theta_0(\epsilon_0 z)$ and $g'(0; \epsilon_0) = 0$;
just take $K_0$ to be slightly smaller and $K_1$ to be slightly larger than $-\partial_{yy}\theta_0(0)> 0$, and choose $\epsilon_0$ sufficiently small so that
\eqref{eq_K1} holds for $0 \leq z \leq 1$. In the following, we write $g(z; \epsilon_0) = g(z)$ for convenience.
We write
\begin{equation}
\epsilon(t) = \phi(0, t).
\end{equation}
By assumption, $\theta_0$ is nonincreasing, so that $g' \leq 0$; moreover, \eqref{eq_K1} shows $g'<0$ on $(0,1]$.
Note that \eqref{main_eq} implies $\epsilon_t(t) < 0$ as long as the smooth solution can be continued.
\begin{lemma}\label{lem_eq_eta}
The following equation holds for $\eta(x, t) = \phi(x, t)/\epsilon(t)$:
\begin{equation}\label{eq_eta}
\eta_t(x, t) = -\eta(x,t) \int_0^x \frac{g'(z)\, dz}{\Phi(z, t)}.
\end{equation}
Moreover, $\phi(x, t)$ is monotone nondecreasing in $x$ for fixed $t$.
\end{lemma}
\begin{proof} A direct computation gives
\begin{equation*}
\eta_t(x, t) = \frac{\epsilon(t) \phi_t(x, t)- \phi(x, t) \epsilon_t(t)}{\epsilon^2(t)} = -\eta(x, t) \int_0^x \frac{g'(z)~dz}{\Phi(z, t)}.
\end{equation*}
To see that $\phi$ is monotone in $x$, observe that because of $g'\leq 0$, we have for $x_1 < x_2$
\begin{equation*}
\phi_t(x_1, t) \leq \phi(x_1, t) \int_{x_2}^\infty \frac{g'(z)\,dz}{\Phi(z, t)}\leq \frac{\phi(x_1, t)}{\phi(x_2, t)} \phi_t(x_2, t)
\end{equation*}
and thus $(\log\phi(x_1, t))_{t}\leq (\log\phi(x_2, t))_{t}$; the monotonicity follows since $\phi(x_1,0)=\phi(x_2,0)$.
\end{proof}
\begin{lemma}\label{lemma_cont}
A solution $\phi(x, t)$ satisfies the bound $\phi(x, t) \geq \epsilon(t)$ for all $x\in \mathbb{R}^+$. If $T>0$ is such that $\phi\in C^1(\mathbb{R}^+\times [0, T))$
and $\lim_{t\to T} \epsilon(t) > 0$, then the solution $\phi$ can be continued to a slightly larger time interval.
\end{lemma}
\begin{proof}
This follows from Lemma \ref{lem_eq_eta}. From equation \eqref{eq_eta} and $\eta(x, 0) = 1$ we immediately get
$\eta(x, t)\geq 1$, so that in fact $\min_{\substack{x\in \mathbb{R}^+}} \phi(x, t) = \epsilon(t)$. Now apply Theorem \ref{locEx}.
\end{proof}
Let $\beta \in (0, 1)$ and define
\begin{equation*}
l(t) = \epsilon^{\beta}(t).
\end{equation*}
Let $\kappa>1$ be such that
\begin{equation}\label{cond_ka}
\epsilon_0 \leq \kappa \epsilon_0^{1/\beta}.
\end{equation}
We consider now the following bootstrap or barrier property \eqref{eq:boot}.
\begin{equation}\tag{B}\label{eq:boot}
\text{$\phi(x, t) \leq \kappa \epsilon(t)$ for all $x\in [0, l(t)]$}
\end{equation}
The continuity of $t\mapsto\phi(\cdot,t)$ implies that \eqref{eq:boot} holds for some short time interval $[0, \tau), \tau>0$.
We now extend the validity of the bootstrap property, with uniform $\kappa$, to the whole existence interval
of our solution $\phi$ by utilizing a kind of \emph{continuous induction}, or \emph{bootstrap argument}, or alternatively speaking a
\emph{nonlocal maximum principle}.
\begin{itemize}
\item In order to state the argument in a clear fashion, we now let $$\phi\in C^1([0, T_s)\times \mathbb{R}^+)$$
be the solution of \eqref{main_eq} with data $(\epsilon_0, g(\cdot; \epsilon_0))$ given by \eqref{def_g},
defined on its maximal existence interval $[0, T_s)$. That is, we continue the local solution $\phi$ for as long
as $\epsilon(t)>0$.
\end{itemize}
Note that by Theorem \ref{thm:finite}, $T_s$ is finite. However, in the following we will not use Theorem \ref{thm:finite}; we prove blowup in finite time together with a characterization of the solution at the singular time.
Observe that $\lim_{t\to T_s} \epsilon(t) = 0$.
\begin{lemma}\label{lem:boundInnerZone}
Suppose \eqref{eq:boot} holds on the time interval $[0, T)$, $T>0$. Then
\begin{equation*}
\eta(x, t) \leq \exp\left( \epsilon_0^2 K_1 l(t) \int_0^t \epsilon^{-1}(s)\, ds\right) \quad (x\in [0, l(t)], 0 \leq t < T).
\end{equation*}
Moreover,
\begin{equation}\label{upperBd}
\phi(x, t)\leq \kappa x^{1/\beta} \quad (l(t) \leq x < \infty,~0\leq t <T).
\end{equation}
\end{lemma}
\begin{proof}
We have $\phi(x, t)\geq \epsilon(t)$, so $\Phi(z, t) \geq \epsilon(t) z$. By \eqref{eq_eta}, \eqref{eq_K1} and $x\leq l(t)$, we have for $0 < s < t$
and $x\in [0, l(t)]$:
$$
\eta_t(x,s) \leq \eta(x, s)\, \epsilon_0^2 K_1 \frac{l(t)}{\epsilon(s)},
$$
from which the first statement follows by integrating in $s$ and using $\eta(x,0)=1$.
For all $l(t) \leq x \leq \epsilon_0^{\beta}$, let $t_x \leq t$ be the uniquely defined time such that
$$
l(t_x) = \epsilon(t_x)^{\beta} = x.
$$
By the bootstrap assumption, $\phi(x, t_x)\leq \kappa \epsilon(t_x) = \kappa x^{1/\beta}$.
From \eqref{main_eq} and $g'\leq 0$ it follows that $\phi(x, t)$ is nonincreasing
in $t$ for fixed $x$. Consequently,
$\phi(x, t) \leq \phi(x, t_x)\leq \kappa x^{1/\beta}$ for $t\geq t_x$.
If $x \geq \epsilon_0^{\beta}$, then we observe that $\phi(x, t) \leq \epsilon_0 \leq \kappa \epsilon_0^{1/\beta} \leq \kappa x^{1/\beta}$
by condition \eqref{cond_ka}, also noting that $x\geq \epsilon_0^{\beta}\geq \epsilon_0$ since $\epsilon_0<1$.
\end{proof}
The bootstrap assumption also gives a lower bound on $-\epsilon_t$. In the following lemma, the structure of the Biot-Savart law of \eqref{outerModel} enters in
a crucial way.
\begin{lemma}\label{lem:lowerBoundEps}
Suppose \eqref{eq:boot} holds on $[0, T)$. Then
\begin{equation}\label{lowerBdEps_t}
- \epsilon_t (t)\geq \frac{\epsilon_0^2 K_0 c_\beta}{2\kappa}\, \epsilon^{\beta}(t)
\end{equation}
where $c_\beta = \frac{\beta (\beta+1)}{(2\beta+1)(1-\beta)}$, provided $\epsilon_0^{1-\beta} \leq 1/2$.
\end{lemma}
\begin{proof}
For $z\geq l(t)$, using the upper bound \eqref{upperBd} and \eqref{eq:boot} yields
\begin{align*}
\Phi(z, t) &= \Phi(l(t), t) + \int_{l(t)}^z \phi(\sigma,t)\, d\sigma
\leq \Phi(l(t), t) + \int_{l(t)}^z \kappa \sigma^{\frac{1}{\beta}}\, d\sigma\\
&\leq \Phi(l(t), t) + \frac{\kappa \beta}{\beta+1} (z^{\frac{1}{\beta} + 1}- l^{\frac{1}{\beta}+1}(t))\le \kappa \epsilon(t) l(t) + \frac{\kappa \beta}{\beta+1} (z^{\frac{1}{\beta} + 1}- l^{\frac{1}{\beta}+1}(t))\\
&\le \kappa\left( l^{\frac{1}{\beta}+ 1}(t) + \frac{\beta}{\beta+1}z^{\frac{1}{\beta} + 1}\right).
\end{align*}
Using this and \eqref{eq_K1},
\begin{align*}
\int_0^\infty \frac{-g'(z) \,dz}{\Phi(z,t)}
&\ge \frac{\epsilon_0^2 K_0}{\kappa} \int_{l(t)}^1 \frac{z ~dz}{l^{1+\frac{1}{\beta}}(t) + \frac{\beta}{1+\beta} z^{1+\frac{1}{\beta}}}
\ge \frac{\epsilon_0^2 K_0}{\kappa} \int_{l(t)}^1 \frac{z~dz}{(1+ \frac{\beta}{1+\beta}) z^{1+\frac{1}{\beta}}}\\
&= \frac{\epsilon_0^2 K_0}{\kappa} \left(1+\frac{\beta}{1+\beta}\right)^{-1}\frac{\beta}{1-\beta} (l^{1-\frac{1}{\beta}}(t)-1) \\
&= \frac{\epsilon_0^2 K_0}{\kappa}\,\frac{\beta(\beta+1)}{(2\beta+1)(1-\beta)}\,(\epsilon^{\beta-1}(t)-1)
= \frac{\epsilon_0^2 K_0 c_\beta}{\kappa} (\epsilon^{\beta-1}(t)-1).
\end{align*}
Thus
\begin{equation*}
-\epsilon_t (t) \ge \epsilon(t) \int_0^\infty\frac{-g'(z)~dz}{\Phi(z, t)} \ge \frac{\epsilon_0^2 K_0 c_\beta}{\kappa}\epsilon^{\beta}(t) (1-\epsilon^{1-\beta}(t)) \geq \frac{\epsilon_0^2 K_0 c_\beta}{2 \kappa}\epsilon^{\beta}(t)
\end{equation*}
provided $\epsilon_0^{1-\beta}\leq 1/2$.
\end{proof}
\begin{lemma}\label{lemma_5}
Suppose the following three conditions hold:
\begin{align}\label{threecond}
\begin{split}
\epsilon_0 &\leq \kappa \epsilon_0^{1/\beta} , \\
\epsilon_0^{1-\beta} &\leq 1/2,\\
\frac{2 K_1}{K_0 c_\beta \beta} &< \frac{\log \kappa}{\kappa}.
\end{split}
\end{align}
Then \eqref{eq:boot} holds on the whole interval $[0, T_s)$.
\end{lemma}
\begin{proof} Observe first that \eqref{eq:boot} holds for $\phi(\cdot, 0)$ since $\kappa > 1$.
Because of continuity in time, there exists a small time interval $[0, \delta)$ in which \eqref{eq:boot}
holds. Suppose \eqref{eq:boot} does not hold on the whole interval $[0, T_s)$, and let
$$
T := \sup\{ t\in [0, T_s): \text{$\eqref{eq:boot}$ holds on $[0, t]$}\} < T_s.
$$
Observe that \eqref{eq:boot} holds on $[0, T)$ and that
the monotonicity of $\phi$ implies that
\begin{equation}\label{eq_contr}
\phi(l(T), T) = \kappa \epsilon(T).
\end{equation}
The first two conditions of \eqref{threecond} allow us to use
Lemma \ref{lem:lowerBoundEps} to estimate
\begin{align*}
\epsilon_0^2 K_1 l(T) \int_0^T \frac{ds}{\epsilon(s)} &= \epsilon_0^2 K_1 l(T) \int_0^T \frac{-\epsilon_t(s)}{-\epsilon_t(s) \epsilon(s)}\, ds \leq \frac{2 K_1 \kappa}{K_0 c_\beta} l(T) \int_0^T \frac{- \epsilon_t(s)\,ds}{\epsilon^{1+\beta}(s)} \\
&\leq \frac{2 K_1 \kappa}{K_0 c_\beta \beta} l(T) (\epsilon^{-\beta}(T)-\epsilon_0^{-\beta}) \leq \frac{2 K_1 \kappa}{K_0 c_\beta \beta} (1-(\epsilon(T)/\epsilon_0)^{\beta}) \\
&\leq \frac{2 K_1 \kappa}{K_0 c_\beta \beta}.
\end{align*}
Thus by Lemma \ref{lem:boundInnerZone},
$$
\eta(l(T), T) \leq \exp\left(\frac{2 K_1 \kappa}{K_0 c_\beta \beta}\right).
$$
Now, using the third line of \eqref{threecond},
\begin{equation*}
\phi(l(T), T)\leq \epsilon(T) \eta(l(T), T) \leq \epsilon(T)\exp\left(\frac{2 K_1 \kappa}{K_0 c_\beta \beta}\right) < \kappa \epsilon(T),
\end{equation*}
contradicting \eqref{eq_contr}.
\end{proof}
\emph{Conclusion of the proof of Theorem \ref{blowupThm}}.
We show that it is possible to choose $0< \beta < 1$, $0< \varepsilon_0 < 1$, $\kappa>0$ such that the conditions \eqref{threecond}
are satisfied. This is done as follows: fix $\kappa > 2$ and observe that the first and second conditions of
\eqref{threecond} together are equivalent to
\begin{align}\label{cond2}
-\frac{\beta}{1-\beta} \log \kappa \leq \log \varepsilon_0 \leq -\frac{\log 2}{1-\beta}.
\end{align}
\eqref{cond2} can be satisfied by some $\varepsilon_0 \in (0,1)$ provided $\beta$ can be chosen such that
$-\frac{\beta}{1-\beta} \log \kappa < -\frac{\log 2}{1-\beta}$, i.e.
\begin{align}\label{cond3}
\log \kappa > \frac{\log 2}{\beta}.
\end{align}
Since $\kappa > 2$, \eqref{cond3} holds if $\beta$ is close enough to $1$. The third condition of \eqref{threecond}
also holds if $\beta$ is close enough to $1$, which follows from $c_\beta\to \infty$ as $\beta\to 1$.
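For concreteness (this is only an illustration of the choice just described; how close $\beta$ must be to $1$ depends on the fixed constants $K_0$ and $K_1$), one may take $\kappa = 4$, so that \eqref{cond3} amounts to $\beta > \log 2/\log 4 = 1/2$. After picking $\beta \in (1/2,1)$ close enough to $1$ for the third condition of \eqref{threecond} to hold, the choice
\begin{equation*}
\varepsilon_0 = 2^{-\frac{1}{1-\beta}}, \qquad\text{i.e.}\qquad \log \varepsilon_0 = -\frac{\log 2}{1-\beta} \geq -\frac{\beta}{1-\beta}\log 4,
\end{equation*}
satisfies both inequalities in \eqref{cond2}.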
Taking into account \eqref{threecond}, we can now apply Lemma \ref{lemma_5} to see that \eqref{eq:boot} holds on the
whole existence interval $[0, T_s)$ of the solution. \eqref{lowerBdEps_t} now implies that
\begin{equation*}
\varepsilon_t(t) \leq - C \varepsilon^{\beta}(t)
\end{equation*}
for some fixed constant $C>0$, and hence $T_s$ must be finite and $l(T_s)=\varepsilon^\beta(T_s)=0$.
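To spell out why this forces $T_s < \infty$ (a one-line integration of the differential inequality above): since $0<\beta<1$,
\begin{equation*}
\frac{d}{dt}\,\varepsilon^{1-\beta}(t) = (1-\beta)\,\varepsilon^{-\beta}(t)\,\varepsilon_t(t) \leq -C(1-\beta),
\qquad\text{so}\qquad
\varepsilon^{1-\beta}(t) \leq \varepsilon_0^{1-\beta} - C(1-\beta)\,t,
\end{equation*}
and $\varepsilon(t)$ must reach zero no later than $t = \varepsilon_0^{1-\beta}/\bigl(C(1-\beta)\bigr)$, which therefore bounds $T_s$.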
We now show that $\theta$ defined by \eqref{def_te} is regular outside the origin, even at time $t=T_s$.
Note that $\phi(x, t)$ for fixed $x$ is monotone nonincreasing in $t$ and $\phi(x, t) \in [0, \varepsilon_0]$ for
$t < T_s$. Hence the pointwise limit
\begin{equation*}
\phi(x, T_s) := \lim_{t\to T_s} \phi(x, t)
\end{equation*}
exists and is $\geq 0$ everywhere.
Let $s_0\geq 0$ be the infimum of all numbers $x > 0$ with the property that $\phi(x, T_s) > 0$ (see Figure \ref{f2}).
Note that the set over which the infimum is taken is not empty, since $\phi(x, t) = \varepsilon_0 > 0$ for all $0 \leq t< T_s$ if $x$ is
outside the support of $g'$. Moreover,
\begin{equation}\label{lowerbd_Phi}
\Phi(x, t) \geq \varepsilon_0 (x-a) \quad (x \geq a,\ 0\leq t < T_s)
\end{equation}
if $\operatorname{supp} g' = [0, a]$.
From $\Phi(x, t) = \int_0^x \phi(\sigma, t)\, d\sigma$, we see that $\Phi(x, t)$ is nonincreasing in time for fixed $x$
and $\Phi(x, T_s)\geq 0$. Consequently, the pointwise limit $\Phi(x, T_s)$ exists, and
\begin{equation*}
\text{$\Phi(x, T_s) = 0$ for $0 \leq x \leq s_0$, and $\Phi(x, T_s) > 0$ for all $x > s_0$.}
\end{equation*}
Observe that $\Phi(x,T_s)$ is continuous and strictly increasing in $x$ for $x\ge s_0$ since $\phi(x,T_s)>0$ for $x>s_0$.
An elementary argument (using e.g. the fact that $\Phi(x, t)$ is nonincreasing in time) proves that,
as $t\to T_s$, $\Phi^{-1}(y, t)$ converges for all $y > 0$ to a limit $\Phi^{-1}(y, T_s)$.
The function $\Phi^{-1}(\cdot, T_s)$ is the inverse of $\Phi(\cdot, T_s)$ restricted to the
interval $(s_0, \infty)$. \eqref{def_te} shows that the pointwise limit $\theta(y, T_s)$ exists for all $y> 0$.
By Lemma \ref{reg1} below, $\phi(\cdot, T_s) \in C^1(s_0, \infty)$, and so, again by \eqref{rel:basic}, $\theta(\cdot, T_s)\in C^2(0,\infty)$.
This proves that $\theta$ remains regular outside the origin even at the singular time $T_s$.
Finally, we prove the bound \eqref{cuspBound}. First we look at the case $s_0 = 0$. A key observation
is that from \eqref{upperBd} we get the upper bound
\begin{align*}
\Phi(z, T_s) \leq C(\beta, \kappa)\, z^{\frac{1}{\beta}+1} \quad (0 \leq z < \infty),
\end{align*}
implying
\begin{equation}\label{eq_PhiInv}
\Phi^{-1}(y, T_s) \geq \widetilde C(\beta, \kappa)\, y^{\frac{\beta}{\beta+1}}.
\end{equation}
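The step from the upper bound on $\Phi(\cdot, T_s)$ to \eqref{eq_PhiInv} is a short computation: evaluating the upper bound at $z = \Phi^{-1}(y, T_s)$ gives $y \leq C(\beta,\kappa)\,\Phi^{-1}(y, T_s)^{\frac{\beta+1}{\beta}}$, hence
\begin{equation*}
\Phi^{-1}(y, T_s) \geq \Bigl(\frac{y}{C(\beta,\kappa)}\Bigr)^{\frac{\beta}{\beta+1}},
\end{equation*}
so one may take $\widetilde C(\beta, \kappa) = C(\beta,\kappa)^{-\frac{\beta}{\beta+1}}$.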
From \eqref{rel:basic} we get, using the substitution $\Phi^{-1}(\tilde y, t) = z$,
\begin{align}\nonumber
\theta(y, t) = \theta(0, t) + \int_0^y \frac{g'(\Phi^{-1}(\tilde y, t))}{\phi(\Phi^{-1}(\tilde y, t))}\, d\tilde y = \theta_0(0) + \int_0^{\Phi^{-1}(y, t)} g'(z)\, dz.
\end{align}
Now pass to the limit $t\to T_s$ and continue the estimate by using first \eqref{eq_K1} and then \eqref{eq_PhiInv}:
\begin{align}\nonumber
\int_0^{\Phi^{-1}(y, T_s)} g'(z)\, dz \leq - C(\varepsilon_0, K_0, \beta, \kappa)\, y^{\frac{2\beta}{\beta+1}}
\end{align}
for all $y\in [0, \Phi(1, T_s)]$. Note that $\Phi(1, T_s)>0$ because $s_0=0$.
Since $\theta(y, T_s)\leq \theta(\Phi(1, T_s), T_s)$ for $y \geq \Phi(1, T_s)$,
we can define $\nu = \frac{2\beta}{\beta+1} < 1$ and adjust the constant such that \eqref{cuspBound}
holds (note that $\theta(y, T_s)$ has compact support).
If $s_0 > 0$, we note that $\sup_{\mathbb{R}_+}\theta(y, T_s) < \theta(0, T_s) = \theta_0(0)$. Hence the constant $c$ can be adjusted such that
\eqref{cuspBound} holds.
This concludes the proof of Theorem \ref{blowupThm}.
\begin{lemma}\label{reg1}
Suppose $\lim_{t\to T_s}\Phi(x_0, t) > 0$. Then $\phi(\cdot, T_s) \in C^1[x_0, \infty)$.
\end{lemma}
\begin{proof}
Since, for fixed $t$, $\Phi(x, t)$ is non-decreasing in $x$, we have $\Phi(x, t) \geq \Phi(x_0, t)>0$ for all $x \geq x_0$, $t < T_s$. Using this lower bound for $\Phi$,
the $C^1$-norm of the right-hand side of equation \eqref{main_eq} on
the interval $[x_0, \infty)$ can be bounded:
\begin{equation*}
\|\phi_t(\cdot, t)\|_{C^1[x_0, \infty)} \leq C(x_0)
\end{equation*}
for all $t< T_s$, where $C(x_0)$ depends on $x_0$ but not on $t$. This implies the convergence $\phi(\cdot, t)\to \phi(\cdot, T_s)$
in $C^1([x_0, \infty))$ as $t\to T_s$.
\end{proof}
\begin{remark}\label{why_eps_0}
It is not necessary to choose $(\varepsilon_0, g(\cdot; \varepsilon_0))$ with sufficiently small $\varepsilon_0$
for the pair $(\phi_0, g)$. In fact, we could have worked with the canonical choice $(1, \theta_0)$.
In this case, one first proves a bound of the form $-\varepsilon_t \leq k \varepsilon$ with $k>0$. This means
that after some positive time $T_0$, $\varepsilon(t)$ is small enough that \eqref{threecond} can
be satisfied.
\end{remark}
\begin{remark}
For the moment, we leave open the question whether a needle-like discontinuity can really form, or whether generically a cusp is
obtained at the singular time. In this context, it is interesting to observe that a suitable lower bound
\begin{equation*}
\phi(x, T_s) \geq c_0 x^{p}
\end{equation*}
with $c_0>0$, $p>1$ would suffice to exclude the needle scenario, and would nicely complement \eqref{upperBd}.
A lower bound for $\phi(\cdot, t)$ for all $t < T_s$ was obtained by A.~Zlato\v{s} \cite{ZlatosPer}; however,
his lower bound does not give any information in the limit $t\to T_s$.
\end{remark}
\begin{thebibliography}{10}
\bibitem{Castro}
A.~Castro and D.~C\'ordoba, Infinite energy solutions of the surface
quasi-geostrophic equation,
\emph{Advances in Mathematics}, 225:1820--1829, 2010.
\bibitem{sixAuthors}
K.~Choi, T.Y.~Hou, A.~Kiselev, G.~Luo, V.~\v{S}ver\'{a}k and Y.~Yao, On the finite-time blowup of a 1d model for the 3d axisymmetric {Euler} equations, \emph{arXiv:1407.4776}, 2014.
\bibitem{CKY}
K.~Choi, A.~Kiselev and Y.~Yao, Finite time blow up for a 1d model of 2d {Boussinesq} system, \emph{Comm. Math. Phys.}, 334(3):1667--1679, 2015.
\bibitem{ConstRev}
P.~Constantin, On the Euler equations of incompressible fluids, \emph{Bull. Amer. Math. Soc.}, 44(4):603--621, 2007.
\bibitem{CLM}
P.~Constantin, P.D.~Lax and A.~Majda, A simple one-dimensional model for the three-dimensional vorticity equation, \emph{Comm. Pure Appl. Math.}, 38:715--724, 1985.
\bibitem{constantin1994formation}
P.~Constantin, A.~Majda and E.~Tabak, Formation of strong fronts in the 2-d quasigeostrophic thermal active scalar, \emph{Nonlinearity}, 7(6):1495--1533, 1994.
\bibitem{CCF}
A.~C\'ordoba, D.~C\'ordoba and M.A.~Fontelos, Formation of singularities for a transport equation with nonlocal velocity, \emph{Ann. of Math. (2)}, 162(3):1377--1389, 2005.
\bibitem{Tam}
T.~Do, A Discrete Model for Nonlocal Transport Equations with Fractional Dissipation, \emph{arXiv:1412.3391}, 2014.
\bibitem{HouLuo1}
T.Y.~Hou and G.~Luo, Toward the finite-time blowup of the 3d axisymmetric {Euler}
equations: A numerical investigation, \emph{Multiscale Model. Simul.}, 12(4):1722--1776, 2014.
\bibitem{HouLuo2}
T.Y.~Hou and G.~Luo, Potentially singular solutions of the 3D axisymmetric Euler equations, \emph{PNAS}, 111(36):12968--12973, 2014,
DOI 10.1073/pnas.1405238111.
\bibitem{HouLi} T.Y.~Hou and C.~Li, Dynamic stability of the three-dimensional axisymmetric Navier-Stokes equations
with swirl, \emph{Comm. Pure Appl. Math.}, vol. LXI, 661--697, 2008.
\bibitem{Pengfei}
T.Y.~Hou and P.~Liu, Self-similar singularity of a 1D model for the 3D axisymmetric Euler equations, \emph{Research in the Mathematical Sciences}, 2:5, 2015,
DOI 10.1186/s40687-015-0021-1.
\bibitem{SashaRev}
A.~Kiselev, Nonlocal maximum principles for active scalars, \emph{Adv. Math.}, 227:1806--1826, 2011.
\bibitem{kiselev2013small}
A.~Kiselev and V.~\v{S}ver\'{a}k, Small scale creation for solutions of the incompressible two dimensional {Euler} equation, \emph{Ann. of Math. (2)}, 180(3):1205--1220, 2014.
\bibitem{Saffmann}
P.G.~Saffman, Vortex Dynamics, Cambridge University Press, Cambridge, 1992.
\bibitem{silvestre2014transport}
L.~Silvestre and V.~Vicol, On a transport equation with nonlocal drift, \emph{Trans. Amer. Math. Soc.}, 2015,
DOI: http://dx.doi.org/10.1090/tran6651.
\bibitem{ZlatosPer} A.~Zlato\v{s}, \emph{Personal communication}, October 2015.
\end{thebibliography}
\end{document}
\begin{document}
\title{The integral Chow ring of the stack of 1-pointed hyperelliptic curves}
\begin{abstract}
In this paper we give a complete description of the integral Chow ring of the stack $\mathscr{H}_{g,1}$ of 1-pointed hyperelliptic curves, lifting relations and generators from the Chow ring of $\mathscr{H}_g$. We also give a geometric interpretation for the generators.
\end{abstract}
\pagenumbering{Roman}
\tableofcontents
\pagenumbering{arabic}
\section*{Introduction}
After they were introduced by Mumford in \cite{Mum},
the rational Chow rings $ {\rm CH} (\mathscr{M}_{g,n})_{ \mathbb{Q}}$ and $ {\rm CH}(\overline{ \mathscr{M}}_{g,n})_{ \mathbb{Q}}$ of moduli spaces of smooth or stable curves have been the subject of extensive investigations. The integral versions $ {\rm CH} (\mathscr{M}_{g,n})$ and $ {\rm CH}(\overline{ \mathscr{M}}_{g,n})$ of these rings have been introduced by Edidin and Graham in \cite{EdGra},
and are much harder to compute. The known results are for $ {\rm CH}( \mathscr{M}_{1,1})$ \cite{EdGra},
$ {\rm CH}( \mathscr{M}_{2})$ \cite{Vis3}
and $ {\rm CH}(\overline{ \mathscr{M}}_{2})$ \cite{Lar}.
A stack whose Chow ring is particularly amenable to be studied with the equivariant techniques of \cite{EdGra}
is the stack $ \mathscr{H}_{g}$ of smooth hyperelliptic curves of genus $g \geq 2$, introduced in \cite{ArVis},
where its Picard group is computed. Its Chow ring has been computed by Edidin--Fulghesu \cite{EdFul}
for even $g$, and by Fulghesu--Viviani and Di Lorenzo for odd $g$ \cite{FulViv, DiLor}
(the second paper introduces new ideas to fill a gap in the first). The result is the following.
\begin{teorema}[Edidin--Fulghesu, Fulghesu--Viviani, Di Lorenzo]
If $g$ is even, then
\[
{\rm CH} (\mathscr{H}_{g}) =
\mathbb{Z}[c_1, c_2]/\bigl(2(2g + 1)c_{1}, g(g - 1)c_{1}^{2} - 4g(g + 1)c_{2}\bigr)\,.
\]
If $g$ is odd, then
\[
{\rm CH} (\mathscr{H}_{g} )=
\mathbb{Z}[\tau, c_{2}, c_{3}]/
\bigl(4(2g + 1)\tau, 8\tau^{2} - 2(g^{2} - 1)c_{2}, 2c_{3}\bigr)\,.
\]
\end{teorema}
Here the $c_{i}$'s are Chern classes of certain natural vector bundles on $ \mathscr{H}_{g}$ and $\tau$ is the first Chern class of a certain line bundle.
In this paper we compute the Chow ring of the stack $ \mathscr{H}_{g,1}$ of smooth $1$-pointed hyperelliptic curves of genus $g$ for any $g \geq 2$. Our main result is as follows.
\begin{teorema}(See \hyperref[pri]{Theorem \ref{pri}})
\begin{enumerate}
\item The ring $ {\rm CH} (\mathscr{H}_{g,1})$ is generated by two elements $t_1$, $t_2$ of degree~$1$.
\item The pullback $ {\rm CH} (\mathscr{H}_{g}) \longrightarrow {\rm CH} (\mathscr{H}_{g,1})$ is as follows.
\begin{enumerate}
\item If $g$ is even, it sends $c_{1}$ to $t_1+t_2$ and $c_{2}$ to $t_1t_2$ ($t_1$ and $t_2$ are the Chern roots of the vector bundle which defines the $c_i$'s).
\item If $g$ is odd, it sends $\tau$ to $t_1$, $c_{2}$ to $-t_2^{2}$ and $c_{3}$ to $0$.
\end{enumerate}
\item The ideal of relations for $\mathscr{H}_{g,1}$ is generated by the image of the ideal of relations for $\mathscr{H}_g$.
\end{enumerate}
\end{teorema}
The generators can be interpreted geometrically (see \hyperref[gen]{Section \ref{gen}}).
From this, with the results of Edidin--Fulghesu and Di Lorenzo we get the following.
\begin{corollario}(See \hyperref[p]{Theorem \ref{p}}, \hyperref[d]{Theorem \ref{d}})
\begin{enumerate}
\item If $g$ is even, then
\[
{\rm CH} (\mathscr{H}_{g,1}) = \frac{\mathbb{Z}[t_1,t_2]}{\bigl(2(2g+1)(t_1+t_2), g(g-1)(t_1^2+t_2^2) - 2g(g+3)t_1t_2\bigr)}\,.
\]
\item If $g$ is odd, then
\[
{\rm CH}( \mathscr{H}_{g,1} )= \frac{\mathbb{Z}[t_1,t_2]}{\bigl(4(2g+1)t_1,8t_1^{2} +2g(g+1)t_2^{2}\bigr)}\,.
\]
\end{enumerate}
\end{corollario}
To prove the main result we use equivariant techniques. The strategy is the following. Recall from \cite{ArVis}
that $ \mathscr{H}_{g}$ is a quotient $[X_{g}/G]$, where $X_{g} \subseteq \mathbb{A}(2g+2)$ is an open subset of the space of binary forms in two variables of degree $2g+2$, and $G$ is either ${\rm GL}_{2}$ (when $g$ is even), or $\mathbb{G}_m \times {\rm PGL}_{2}$ (when $g$ is odd). In \hyperref[des]{Section \ref{des}} of the present paper we express $ \mathscr{H}_{g,1}$ as a quotient $[Y_{g}/B]$, where $B \subseteq G$ is a Borel subgroup and $Y_{g}$ is an open subset of a $(2g+3)$-dimensional representation $\widetilde{\mathbb{A}}(2g+2)$ of $B$; the tautological map $ \mathscr{H}_{g,1} \longrightarrow \mathscr{H}_{g}$ comes from a nonlinear finite flat $B$-equivariant map $\widetilde{\mathbb{A}}(2g+2) \longrightarrow \mathbb{A}(2g+2)$. Thus, the Chow ring $ {\rm CH} (\mathscr{H}_{g,1})$ is the equivariant Chow ring $ {\rm CH}_{B}(Y_{g})$; this easily proves parts (1) and (2) of the main theorem.
To prove part (3) one needs to projectivize (as is done in all the previous papers \cite{Vis3, EdFul, FulViv, DiLor});
however, because the standard action of $\mathbb{G}_m$ does not commute with the action of $B$, one needs a weighted action, yielding a weighted projective stack, which maps to the projectivization $\mathbb{P}^{2g+2}$ of $\mathbb{A}(2g+2)$. In \hyperref[red]{Section \ref{red}} we prove some technical results about Chow envelopes for quotient stacks and we use them to compute the relations in \hyperref[eq]{Section \ref{eq}} and \hyperref[de]{Section \ref{de}}, which represent the technical heart of this paper. Finally, \hyperref[gen]{Section \ref{gen}} contains the geometric interpretation of the generators of the Chow group in both our cases.
\section{Description of $\mathscr{H}_{g,1}$ as a quotient stack}\label{des}
Let us define precisely the actors of this paper. We will work over a fixed field $k$ of characteristic different from $2$. Throughout the paper the genus $g$ will be assumed to be at least $2$.
\begin{comment}
\begin{definition}
A relative uniform cyclic cover of degree $r$ of a scheme $P$ over $S$ consists of a morphism of $S$-schemes $f:C \rightarrow P$ together with an action of the group scheme $\mu_r$ on $C$, such that for each point $q \in P$, there is an affine neighborhood $V={\rm Spec}\, R$ of $q$ in $Y$, together with an element $h_V \in R$ that is not a zero divisor, and an isomorphism of $V$-schemes $f^{-1}(V) \simeq {\rm Spec}\, R[x]/(x^r-h_V)$ which is $\mu_r$-equivariant, when the right-hand side is given by the obvious action. Moreover, for every geometric point $s: {\rm Spec}\, \overline{k}(s) \rightarrow S$ the restriction $h\vert_{V_s}$ of $h$ over the fibre of $s$ is not a zero divisor. Equivalently, the Cartier divisor $\Delta_f$ defined locally by $\{h_V\}_V$, called branch divisor, is flat over $S$.
Furthermore, suppose $P$ is a Brauer-Severi scheme over $S$ of relative dimension $1$. We will say that a $S$-morphism $C\rightarrow P$ is a relative uniform cyclic cover of degree $r$ and branch degree $d$ if $C \rightarrow P$ is a relative uniform cyclic cover of degree $r$ and for every closed point $s: {\rm Spec}\, \overline{k}(s) \rightarrow S$ the induced map $C_s \rightarrow \mathbb{P}_{\overline{k}(s)}^1$ obtained by restricting to the fiber $s$ has a branch divisor of degree $d$.
Finally we will say that a relative uniform cyclic cover $C\rightarrow P$ of degree $r$ and branch degree $d$ is smooth if $C$ is smooth over $S$.
We will denote by $\mathscr{H}(r,d)$ the category of smooth relative uniform cyclic cover of Brauer-Severi schemes of relative dimension $1$ of degree $r$ and branch degree $d$. An arrow from $(C_1 \rightarrow P_1 \rightarrow S_1)$ to $(C_2 \rightarrow P_2 \rightarrow S_2)$ is a commutative diagram
$$ \xymatrix{ C_1 \ar[r] \ar[d] & P_1 \ar[r] \ar[d] & S_1 \ar[d] \\
C_2 \ar[r] & P_2 \ar[r] & S_2 }$$
where both squares are cartesian and the left-hand column is $\mu_r$-equivariant.
\end{definition}
The category fibred in groupoids $\mathscr{H}(r,d)$ is actually a stack. For a thorough and more precise discussion, we refer to \cite{ArVis}.
\end{comment}
\begin{definition}
Let $S$ be a base scheme over $k$. A hyperelliptic curve of genus $g$ over $S$ is a morphism of $S$-schemes $C\rightarrow P\rightarrow S$ where $C\rightarrow S$ is a family of smooth curves of genus $g$, $P \rightarrow S$ is a Brauer-Severi scheme of relative dimension $1$ and $C\rightarrow P$ is finite and flat of degree $2$.
\end{definition}
We define $\mathscr{H}_g$ to be the category fibered in groupoids over the category of $k$-schemes whose objects are hyperelliptic curves of genus $g$. An arrow between $(C\rightarrow P \rightarrow S)$ and $(C'\rightarrow P'\rightarrow S')$ is a commutative diagram like the following:
$$ \xymatrix{ C \ar[r] \ar[d] & P \ar[r] \ar[d] & S \ar[d] \\
C' \ar[r] & P' \ar[r] & S' .} $$
Let $\mathscr{H}_{g,1}$ be the fibered category of hyperelliptic curves of genus $g$ over $k$ with a marked point. An object $$(C \rightarrow P \rightarrow S, \sigma:S\rightarrow C)$$ in $\mathscr{H}_{g,1}$ is a pair defined by $C\rightarrow P\rightarrow S$, a hyperelliptic curve of genus $g$ over $S$, and by $\sigma:S \rightarrow C$, a section of $C\rightarrow S$. A morphism between $(C\rightarrow P\rightarrow S,\sigma)$ and $(C'\rightarrow P'\rightarrow S',\sigma')$ is just an arrow in $\mathscr{H}_g$ which commutes with the sections.
\begin{osservazione}
Both $\mathscr{H}_g$ and $\mathscr{H}_{g,1}$ are Deligne-Mumford stacks: in fact the natural maps $\mathscr{H}_{g} \rightarrow \mathscr{M}_g$ and $\mathscr{H}_{g,1} \rightarrow \mathscr{M}_{g,1}$ are closed immersions, where $\mathscr{M}_g$ (respectively $\mathscr{M}_{g,1}$), the stack of smooth curves of genus $g$ (respectively of smooth curves of genus $g$ with a marked point), is a Deligne-Mumford stack (see \cite[Theorem 8.4.5]{Oll}). Clearly the functor $\mathscr{H}_{g,1} \rightarrow \mathscr{H}_g$ forgetting the section is the universal curve over $\mathscr{H}_g$.
\end{osservazione}
As proved in \cite{ArVis}, the stack $\mathscr{H}_g$ is equivalent to the fibered category $\mathscr{H}_g'$ defined as follows: an object $(P\rightarrow S, \mathscr{L},i:\mathscr{L}^{\otimes 2} \hookrightarrow \mathcal{O}_P)$ is defined by a Brauer-Severi scheme $P \rightarrow S$ of relative dimension $1$, an invertible sheaf $\mathcal{L}$ on $P$ and an injection $i$ such that $\mathcal{L}$ restricts to an invertible sheaf of degree $-(g+1)$ on any geometric fiber, the injection $i$ remains injective when restricted to any geometric fiber and the Cartier divisor $\Delta_i$ associated to the image of $i$ (called branch divisor) is smooth over $S$; an arrow between $(P\rightarrow S, \mathscr{L},i:\mathscr{L}^{\otimes 2} \hookrightarrow \mathcal{O}_P)$ and $(P'\rightarrow S', \mathscr{L}',i':\mathscr{L}'^{\otimes 2} \hookrightarrow \mathcal{O}_{P'})$ is a commutative diagram
$$\xymatrix{ P \ar[r] \ar[d]^{\phi_0} & S \ar[d]\\
P' \ar[r] & S' }$$
plus an isomorphism $\phi_1:\mathcal{L} \simeq \phi_0^*\mathcal{L}'$ of $\mathcal{O}_P$-modules such that the following diagram commutes:
$$ \xymatrix{ \mathcal{L}^{\otimes 2} \ar[rr]^{\phi_1} \ar[dr]_{i} && \phi_0^*\mathcal{L}'^{\otimes 2} \ar[dl]^{\phi_0^*i'} \\
& \mathcal{O}_P. & } $$
We can recover the morphism $C\rightarrow P$ as the morphism
$$ \underline{{\rm Spec}\,}_{\mathcal{O}_P}(\mathcal{O}_P \oplus \mathscr{L}) \longrightarrow P$$
where $\mathcal{O}_P \oplus \mathscr{L}$ is the $\mathcal{O}_P$-algebra defined by the injection $i$. In fact, given such an injection, we can endow $\mathcal{O}_P \oplus \mathscr{L}$ with the structure of an $\mathcal{O}_P$-algebra where the multiplication is defined in the following way:
$$ (f,s) \cdot (f',s'):= (ff'+i(ss'),fs'+f's).$$
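As a quick illustration of this algebra structure (a sketch, working Zariski-locally on an open $U\subseteq P$ where $\mathscr{L}$ is trivialized by a generator $e$, so that $i$ corresponds to multiplication by a local section $h$ of $\mathcal{O}_U$): the element $x:=(0,e)$ satisfies
$$ x^2 = (i(e\otimes e),\,0) = (h,\,0), $$
so locally $\mathcal{O}_P \oplus \mathscr{L} \simeq \mathcal{O}_U[x]/(x^2 - h)$, recovering the familiar local form of a double cover branched along the zero divisor of $h$.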
Consider an object $(C\rightarrow P \rightarrow S,\sigma) \in \mathscr{H}_{g,1}(S)$. Using the description above, we only need to understand how to translate the information of the section $\sigma:S\rightarrow C$ in relation to the Brauer-Severi scheme $P\rightarrow S$, its invertible sheaf $\mathscr{L}$ and the injection $i:\mathscr{L}^{\otimes 2} \hookrightarrow \mathcal{O}_P$.
First, we recall that given a morphism $\sigma_P:S\rightarrow P$ and an $\mathcal{O}_P$-algebra $\mathcal{A}$ one has the functorial bijective map
$$ {\rm Hom}_P(S,\underline{{\rm Spec}\,}_{\mathcal{O}_P}(\mathcal{A})) \rightarrow {\rm Hom}_{\mathcal{O}_S-{\rm alg}}(\sigma_P^*(\mathcal{A}),\mathcal{O}_S). $$
Therefore, if we define $\sigma_P:=f \circ \sigma$ with $f:C\rightarrow P$ the finite flat morphism of degree $2$, the datum of the section $\sigma$ is equivalent to a pair $(\sigma_P,j)$ where $\sigma_P$ is a section of $P\rightarrow S$ and $$j \in {\rm Hom}_{\mathcal{O}_S-{\rm alg}}(\mathcal{O}_S\oplus\sigma_P^*(\mathscr{L}),\mathcal{O}_S).$$
We notice that ${\rm Hom}_{\mathcal{O}_S-{\rm alg}}(\mathcal{O}_S\oplus\sigma_P^*(\mathscr{L}),\mathcal{O}_S)$ is the subset of ${\rm Hom}_{\mathcal{O}_S-{\rm mod}}(\sigma_P^*(\mathscr{L}),\mathcal{O}_S)$ such that
the two maps $\sigma_P^*(i):\sigma_P^*(\mathscr{L}^{\otimes 2}) \rightarrow \mathcal{O}_S$ and
$j^{\otimes 2}:\sigma_P^*(\mathscr{L})^{\otimes 2} \rightarrow \mathcal{O}_S$ coincide (up to the canonical isomorphism $\sigma_P^*(\mathscr{L}^{\otimes 2}) \cong \sigma_P^*(\mathscr{L})^{\otimes 2}$).
Thus we define $\mathscr{H}_{g,1}'$ as the category fibred in groupoids whose objects are $$(P\rightarrow S, \mathscr{L},i:\mathscr{L}^{\otimes 2} \hookrightarrow \mathcal{O}_P, \sigma_P, j)$$
where $(P\rightarrow S, \mathscr{L},i) \in \mathscr{H}_g'(S)$, $\sigma_P$ is a section of $P\rightarrow S$ and $j: \sigma_P^*(\mathscr{L}) \rightarrow \mathcal{O}_S$ is a morphism of $\mathcal{O}_S$-modules such that $j^{\otimes 2}= \sigma_P^*(i)$.
The morphisms are defined in the natural way.
We have just proved the following statement.
\begin{proposizione}
There is an equivalence of fibred categories in groupoids between $\mathscr{H}_{g,1}$ and $\mathscr{H}_{g,1}'$.
\end{proposizione}
For the sake of simplicity, the section of the Brauer-Severi scheme will be denoted just by $\sigma$. We denote by $\sigma_{\infty}$ the section $S\rightarrow\mathbb{P}_S^1$ defined by the map $S\rightarrow {\rm Spec}\, k \hookrightarrow \mathbb{P}^1_k $ sending $S$ to $[0:1]$ in $\mathbb{P}_k^1$. The next step will be to describe this stack $\mathscr{H}_{g,1}'$ as a quotient stack. Let ${\rm H}_{g,1}'$ be the auxiliary fibred category whose objects over a base scheme $S$ are given as pairs consisting of an object $(P\rightarrow S, \mathscr{L},i, \sigma, j)$ in $\mathscr{H}_{g,1}'(S)$, plus an isomorphism $$\phi:(P,\mathcal{L},\sigma) \simeq (\mathbb{P}_S^1,\mathcal{O}(-g-1),\sigma_{\infty})$$
which consists of an isomorphism of $S$-schemes $\phi_0:P \simeq \mathbb{P}_S^1$ with the property that $\phi_0 \circ \sigma = \sigma_{\infty}$, plus an isomorphism $\phi_1:\mathscr{L}\simeq \phi_0^*\mathcal{O}(-g-1)$. The arrows in ${\rm H}_{g,1}'$ are arrows in $\mathscr{H}_{g,1}'$ preserving the isomorphism $\phi$.
\begin{osservazione}\label{re}
Clearly, ${\rm H}_{g,1}'$ is a category fibred in groupoids over the category of $k$-schemes and it is straightforward to verify that the groupoid ${\rm H}_{g,1}'(S)$ is in fact equivalent to a set for every $k$-scheme $S$. This implies that ${\rm H}_{g,1}'$ is equivalent to a functor.
Notice that we have an action of the group scheme $\underline{{\rm Aut}}_k(\mathbb{P}_k^1,\mathcal{O}(-g-1),\sigma_{\infty})$ on the functor ${\rm H}_{g,1}'$ defined by composing $\phi:(P,\mathcal{L},\sigma) \simeq (\mathbb{P}_S^1,\mathcal{O}(-g-1),\sigma_{\infty})$ with an element of the group ${\rm Aut}_S(\mathbb{P}_S^1,\mathcal{O}(-g-1),\sigma_{\infty})$ for every $S$-point.
\end{osservazione}
Before giving the description of $\mathscr{H}_{g,1}$ as a quotient stack, let us introduce some notation.
Let $\mathbb{A}(n)$ be the affine space of homogeneous polynomials in two variables of degree $n$, which is an affine space of dimension $n+1$, and let $\mathbb{A}_{sm}(n)$ be the open affine subscheme of $\mathbb{A}(n)$ defined as the complement of the discriminant locus.
We consider the closed subscheme $\widetilde{\mathbb{A}}(n) \hookrightarrow \mathbb{A}(n) \times\mathbb{A}^1$ defined as
$$ \widetilde{\mathbb{A}}(n)(S) := \{ (f,s) \in (\mathbb{A}(n)\times\mathbb{A}^1)(S) \vert f(0,1)=s^2 \}. $$ We denote by $\widetilde{\mathbb{A}}_{sm}(n)$ the intersection of $\mathbb{A}_{sm}(n)\times \mathbb{A}^1$ with $\widetilde{\mathbb{A}}(n)$ inside $\mathbb{A}(n)\times \mathbb{A}^1$, seeing it as an open subscheme of $\widetilde{\mathbb{A}}(n)$.
\begin{proposizione}
The following hold:
\begin{enumerate}
\item the group scheme $\underline{{\rm Aut}}(\mathbb{P}^1,\mathcal{O}(-g-1),\sigma_{\infty})$ is isomorphic to ${\rm B}_{2}/\mu_{g+1}$, where ${\rm B}_{2}$ is the subgroup of lower triangular matrices inside ${\rm GL}_{2}$ and $\mu_{g+1}$, the group of $(g+1)$-th roots of unity, is embedded in ${\rm B}_2$ via the natural inclusion inside the subgroup of diagonal matrices.
\item The functor ${\rm H}_{g,1}'$ is naturally isomorphic to $\widetilde{\mathbb{A}}_{sm}(2g+2)$.
\item The action of $\underline{{\rm Aut}}(\mathbb{P}^1,\mathcal{O}(-g-1),\sigma_{\infty})$ on ${\rm H}_{g,1}'$ translates into the action of ${\rm B}_2/\mu_{g+1}$ on $\widetilde{\mathbb{A}}_{sm}(2g+2)$ defined by
$$A\cdot\Big(f(\underline{x}),s\Big):=\Big(f\big(A^{-1}\underline{x}\big),c^{-(g+1)}s\Big)$$ where
$$A= \begin{bmatrix}
a & 0 \\
b & c
\end{bmatrix} \in {\rm B}_2/\mu_{g+1}.$$
\end{enumerate}
\end{proposizione}
\begin{proof}
There is a natural isomorphism (c.f. proof of \cite[Theorem 4.1]{ArVis})
$$\underline{{\rm Aut}}(\mathbb{P}^1,\mathcal{O}(-g-1)) \longrightarrow {\rm GL}_2/\mu_{g+1} $$
which follows from the exact sequence of sheaves of groups
$$ \xymatrix { 0 \ar[r] & \mu_{g+1} \ar[r] & \underline{{\rm Aut}}(\mathbb{P}^1,\mathcal{O}(1)) \ar[r]^<<<<<{\alpha} & \underline{{\rm Aut}}(\mathbb{P}^1,\mathcal{O}(-g-1)) \ar[r] & 0} $$
where $\alpha(\phi_0,\phi_1)= (\phi_0,\phi_1^{\otimes(-g-1)}) $. The same exact sequence leads us to the isomorphism
$$\underline{{\rm Aut}}(\mathbb{P}^1,\mathcal{O}(-g-1),\sigma_{\infty})\simeq \underline{{\rm Aut}}(\mathbb{P}^1,\mathcal{O}(1),\sigma_{\infty})/\mu_{g+1}.$$
If we identify $\underline{{\rm Aut}}(\mathbb{P}^1,\mathcal{O}(1))$ with ${\rm GL}_2$, the subgroup $\underline{{\rm Aut}}(\mathbb{P}^1,\mathcal{O}(1),\sigma_{\infty})$ corresponds to ${\rm B}_2$ inside ${\rm GL}_2$. This proves the first claim.
Consider an element $(P\rightarrow S, \mathscr{L},i, \sigma, j,\phi)$ in ${\rm H}_{g,1}'(S)$: the pushforward of the inclusion $i:\mathscr{L}^{\otimes 2}\hookrightarrow \mathcal{O}_P$ along $\phi=(\phi_0,\phi_1)$ induces an inclusion
$$ \mathcal{O}_{\mathbb{P}_S^1}(-2g-2) \hookrightarrow \mathcal{O}_{\mathbb{P}_S^1} $$
which we keep denoting $i$. We identify such inclusion with an element $f \in {\rm H}^0(\mathbb{P}_S^1,\mathcal{O}_{\mathbb{P}_S^1}(2g+2))=\mathbb{A}(2g+2)(S)$. Observe that actually $f$ belongs to $\mathbb{A}_{sm}(2g+2)$ because by construction the divisor associated to $f$ has to be smooth over $S$. Using again the isomorphism $\phi$ we can describe $j$ as an element of ${\rm Hom}_{\mathcal{O}_S}(\sigma_{\infty}^*\mathcal{O}_{\mathbb{P}_S^1}(-g-1),\mathcal{O}_S)$ such that $j^{\otimes 2}= \sigma_P^*(i)$, or equivalently as an element $s \in H^0(S,\sigma_{\infty}^*\mathcal{O}_{\mathbb{P}_S^1}(g+1))$ such that $\sigma_{\infty}^*(f)=s^{\otimes 2}$.
We have a non-canonical isomorphism $\sigma_{\infty}^*\mathcal{O}_{\mathbb{P}_S^1}(g+1)\simeq \mathcal{O}_S$ given by the association $f \mapsto f(0,1)$, therefore we are considering $j$ as an element $s$ in $H^0(S,\mathcal{O}_S)=\mathbb{A}^1(S)$. In the same way, given $f \in \mathbb{A}_{sm}(2g+2)$ induced by the inclusion $i$, we have that $\sigma_{\infty}^*(i)$ will be represented by $f(0,1)$ in $\mathbb{A}^1(S)$. The condition above is represented through this identification by $f(0,1)=s^2$. This gives us a base-preserving functor from ${\rm H}_{g,1}'$ to $\widetilde{\mathbb{A}}_{sm}(2g+2)$, seeing it as a closed subscheme of $\mathbb{A}_{sm}(2g+2) \times \mathbb{A}^1$. There is also a base-preserving functor in the other direction sending an element $(f,s) \in \widetilde{\mathbb{A}}_{sm}(2g+2)(S)$ to the object inside ${\rm H}_{g,1}'(S)$ of the form
$$ \big( \mathbb{P}_S^1 \rightarrow S,\mathcal{O}(-g-1),f,\sigma_{\infty},s, {\rm id} \big). $$
It is straightforward to see that it is a quasi-inverse to the previous functor. This proves the second claim.
As far as the action is concerned, it is a classical fact that in general, given an automorphism of $(\mathbb{P}^1,\mathcal{O}(1))$ expressed by a matrix $A$ in ${\rm GL}_2$,
the corresponding automorphism of $(\mathbb{P}^1,\mathcal{O}(-1))$ is expressed by the matrix $A^{-1}$ and, tensoring $n$ times, we get an automorphism of $(\mathbb{P}^1,\mathcal{O}(-n))$ defined by
$$f(x) \longmapsto f(A^{-1}x).$$
Since the equation $f(0,1)=s^2$ is invariant for the action of the group ${\rm GL}_2$ on $\mathbb{A}(2g+2)\times \mathbb{A}^1$ described above, we get an induced action on $\widetilde{\mathbb{A}}(2g+2)$ and this clearly proves the third claim, since we are restricting the ${\rm GL}_2$-action to the Borel subgroup.
\end{proof}
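As a sanity check of the invariance used above (for the ${\rm B}_2/\mu_{g+1}$-action in claim (3); the computation is only an unwinding of homogeneity): for $A$ lower triangular as above, $A^{-1}$ sends $(0,1)$ to $(0, c^{-1})$, so
$$ f\bigl(A^{-1}(0,1)\bigr) = f(0, c^{-1}) = c^{-(2g+2)} f(0,1) = c^{-(2g+2)} s^2 = \bigl(c^{-(g+1)} s\bigr)^2, $$
i.e. the pair $\bigl(f(A^{-1}\underline{x}), c^{-(g+1)}s\bigr)$ again satisfies the defining equation of $\widetilde{\mathbb{A}}(2g+2)$.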
\begin{proposizione}
If we denote the group scheme $\underline{{\rm Aut}}_k(\mathbb{P}_k^1,\mathcal{O}(-g-1),\sigma_{\infty})\,$ by $G$, then the natural forgetful map
$$ {\rm H}_{g,1}' \longrightarrow \mathscr{H}_{g,1}'$$
is a $G$-torsor and in particular $\mathscr{H}_{g,1}' \simeq \big[ {\rm H}_{g,1}' /G \big]$.
\end{proposizione}
\begin{proof}
Consider an object $(P\rightarrow S, \mathscr{L},i, \sigma, j)$ in $\mathscr{H}_{g,1}'(S)$, then we can find an fppf covering $S'\rightarrow S$ such that there exists an isomorphism $\phi$ between the pullback to $S'$ of the pair $(P,\mathscr{L})$ and $(\mathbb{P}_{S'}^1,\mathcal{O}(-g-1))$.
We denote by $\tilde{\sigma}$ the composition $\phi_0 \circ \sigma$. Using the transitivity of the action of ${\rm GL}_2$ on $\mathbb{P}^1$, we can find an element $T$ of ${\rm GL}_2(S)$, up to passing to an fppf covering again, which sends $\tilde{\sigma}$ to $\sigma_{\infty}$. This implies that, given an atlas $H$ of $\mathscr{H}_{g,1}'$, we can find an fppf covering of $H$ such that the pullback of the morphism $${\rm H}_{g,1}' \longrightarrow \mathscr{H}_{g,1}'$$ along this covering is a trivial $G$-torsor. This concludes the proof.
\end{proof}
We denote by $\rm{PB}_2$ the Borel subgroup of ${\rm PGL}_2$.
Using \cite[Proposition 4.4]{ArVis}, we get the following proposition.
\begin{proposizione}\label{a}
Let $g \geq 2$ be an integer.
\begin{itemize}
\item[i)]If $g$ is even, then the homomorphism of group schemes
$$ {\rm B}_2/\mu_{g+1} \longrightarrow {\rm B}_2 $$
defined by $$[A] \mapsto {\rm det}(A)^{g/2}A$$ is an isomorphism.
\item[ii)]If $g$ is odd, then the homomorphism of group schemes
$$ {\rm B}_2/\mu_{g+1} \longrightarrow \mathbb{G}_m \times\rm{PB}_2 $$
defined by
$$ [A] \mapsto ({\rm det}(A)^{(g+1)/2},[A])$$
is an isomorphism.
\end{itemize}
\end{proposizione}
Putting together all the preceding results, we finally obtain the description we need.
\begin{corollario} Let $g \geq 2$ be an integer. The stack $\mathscr{H}_{g,1}$ is equivalent to the quotient stack
$$ \Big[ \widetilde{\mathbb{A}}_{sm}(2g+2)/ G\Big] $$
where the group $G$ and its action on $\widetilde{\mathbb{A}}_{sm}(2g+2)$ are described by the following formulas:
\begin{itemize}
\item if $g$ is even, then $G = {\rm B}_2$ and the action is given by
$$ A\cdot(f(x),s):= \bigg(({\rm det}A)^gf(A^{-1}x),a^{\frac{g}{2}}{c^{-\frac{g+2}{2}}}s\bigg) $$
where
$$A= \begin{pmatrix}
a & 0 \\
b & c
\end{pmatrix} \in {\rm B}_2.$$
\item if $g$ is odd, then $G = \mathbb{G}_m \times\rm{PB}_2$ and the action is given by
$$ (\alpha,A)\cdot(f(x),s):= \Big(\alpha^{-2}{\rm det}(A)^{g+1}f(A^{-1}x),\alpha^{-1}a^{\frac{g+1}{2}}{c^{-\frac{g+1}{2}}}s\Big)$$
where $$A= \begin{bmatrix}
a & 0 \\
b & c
\end{bmatrix} \in \rm{PB}_2.$$
\end{itemize}
\end{corollario}
\begin{osservazione}
Notice that in both the even and odd genus case, the group $G$ is in fact a Borel (maximal connected solvable) subgroup of $\rm{GL}_2/\mu_{g+1}$.
\end{osservazione}
\section{Reduction to the weighted projectivization }\label{red}
From now on $G$ will be one of the two groups described in \hyperref[a]{Proposition \ref{a}} depending on the parity of the genus.
We have now found the description of $\mathscr{H}_{g,1}'$ as a quotient stack. Using this presentation, we know that the integral Chow ring of $\mathscr{H}_{g,1}'$ can be computed as the $G$-equivariant Chow ring of $\widetilde{\mathbb{A}}_{sm}(2g+2)$ as defined in \cite[Proposition 19]{EdGra}, i.e.
$$ {\rm CH}^*(\mathscr{H}_{g,1}')= {\rm CH}^*_{G}(\widetilde{\mathbb{A}}_{sm}(2g+2)).$$
The following remark explains why we can reduce ourselves to the computation of $T$-equivariant Chow ring of $\widetilde{\mathbb{A}}_{sm}(2g+2)$, where $T$ is the maximal torus of the diagonal matrices inside $G$.
\begin{osservazione}
Recall that the group $G$ is defined as:
\begin{itemize}
\item $G = \mathbb{G}_m \times\rm{PB}_2$ if $g$ is odd,
\item $G = {\rm B}_2$ if $g$ is even.
\end{itemize}
We notice that both of them are unipotent split extensions of a 2-dimensional split torus $T$ because $G$ is a Borel subgroup of $\rm{GL}_2/\mu_{g+1}$. If $g$ is odd, we construct the following isomorphism explicitly for the sake of later computations:
$$ \mathbb{G}_m \times\rm{PB}_2 \longrightarrow \mathbb{G}_m^2 \ltimes \mathbb{G}_a $$
defined by
$$ (\alpha, [A]) \mapsto (\alpha, a/c, b/c) $$
where
$$A= \begin{bmatrix}
a & 0 \\
b & c
\end{bmatrix} \in \rm{PB}_2.$$
Using the result \cite[Lemma 2.3]{RoVis}, we deduce that the homomorphism
$$ {\rm CH}^*_T(X) \longrightarrow {\rm CH}^*_G(X) $$
induced by the projection $G \rightarrow T$ is in fact an isomorphism of rings for every smooth $G$-scheme $X$ (it is in fact an isomorphism of graded groups for $X$ an algebraic scheme). Therefore we can consider directly the action of the $2$-dimensional split torus $T$ inside $G$.
Using this identification, we get the following description of the action of $T$ on the affine scheme $\mathbb{A}(2g+2) \times \mathbb{A}^1$:
\begin{itemize}
\item if $g$ is even,
$$ (t_0,t_1)\cdot(f(x_0,x_1),s):= \Big((t_0t_1)^{g} f(x_0/t_0,x_1/t_1), t_0^{\frac{g}{2}}t_1^{-\frac{g+2}{2}}s \Big) ; $$
\item if $g$ is odd,
$$ (\alpha,\rho)\cdot(f(x_0,x_1),s) = \Big(\alpha^{-2}\rho^{g+1}f(x_0/\rho,x_1),\alpha^{-1}\rho^{\frac{g+1}{2}}s\Big).$$
\end{itemize}
\end{osservazione}
From now on, we have to concentrate on computing the $T$-equivariant Chow ring of $\widetilde{\mathbb{A}}_{sm}(2g+2)$, where $T$ will be the split 2-dimensional torus and the action will be the one described above depending on whether $g$ is odd or even.
We will use the localization sequence to compute the Chow group we are interested in. In fact, if we manage to describe $\widetilde{\mathbb{A}}_{sm}(2g+2)$ as an open subscheme $U$ of a $T$-representation $V$, the localization sequence will give us the following explicit description:
$$ {\rm CH}_T(\widetilde{\mathbb{A}}_{sm}(2g+2)) = \frac{{\rm CH}({\rm B}T)}{I} $$
where $I$ is the ideal generated by the pushforwards of cycles supported on the closed subscheme $V\setminus U$.
\begin{osservazione}
There is a natural isomorphism of $k$-schemes (without considering the $T$-action) $$\xi_n:\widetilde{\mathbb{A}}(n) \simeq \mathbb{A}^{n+1}$$ described by the formula
$$ \xi_n(a_0,\dots,a_n,s)=(a_0,\dots,a_{n-1},s).$$
Therefore $\widetilde{\mathbb{A}}(2g+2)$ can be identified with the $k$-scheme $\mathbb{A}^{2g+3}$ endowed with the unique action that makes $\xi_{2g+2}$ into a $T$-equivariant isomorphism.
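(A minimal sanity check, assuming the coefficient convention $f=\sum_i a_i x_0^{\,n-i}x_1^{\,i}$, so that $f(0,1)=a_n$: the defining equation of $\widetilde{\mathbb{A}}(n)$ reads $a_n = s^2$, and the inverse of $\xi_n$ is
$$ (a_0,\dots,a_{n-1},s) \longmapsto \bigl((a_0,\dots,a_{n-1},s^2),\, s\bigr).) $$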
Under this identification, $\widetilde{\mathbb{A}}(2g+2)$ is clearly a $T$-representation and the natural projection map $$\varphi_{2g+2} :\widetilde{\mathbb{A}}(2g+2) \hookrightarrow \mathbb{A}(2g+2) \times \mathbb{A}^1 \longrightarrow \mathbb{A}(2g+2) $$is $T$-equivariant. Recall that $\widetilde{\mathbb{A}}_{sm}(n)$ is defined as the intersection of $\mathbb{A}_{sm}(n) \times \mathbb{A}^1$ and $\widetilde{\mathbb{A}}(n)$ inside $\mathbb{A}(n)\times \mathbb{A}^1$; thus,
we have that $$\widetilde{\mathbb{A}}_{sm}(2g+2)=\varphi_{2g+2}^{-1}(\mathbb{A}_{sm}(2g+2)).$$
Therefore $\widetilde{\mathbb{A}}_{sm}(2g+2)$ is a $T$-invariant open subset of the $T$-representation $\widetilde{\mathbb{A}}(2g+2)$. If we denote by $\widetilde{\Delta}$ the complement of $\widetilde{\mathbb{A}}_{sm}(2g+2)$ inside $\widetilde{\mathbb{A}}(2g+2)$, then set-theoretically the following holds:
$$ \widetilde{\Delta}= \varphi_{2g+2}^{-1}(\Delta) $$
where $\Delta$ is the discriminant locus inside $\mathbb{A}(2g+2)$.
\end{osservazione}
The problem now is to describe the image of the pushforward along the inclusion
$$\widetilde{\Delta} \hookrightarrow \widetilde{\mathbb{A}}(2g+2)$$
at the level of Chow group. The idea is to construct a stratification of $\widetilde{\Delta}$ lifting the one introduced in \cite[Proposition 4.1]{EdFul}. We will pass to the projectivization of $\widetilde{\mathbb{A}}(2g+2)$, considering $\widetilde{\mathbb{A}}(2g+2)$
as $T$-representation through the identification $\xi_{2g+2}$. Notice that $\widetilde{\Delta}$ is invariant for the action of $\mathbb{G}_m$ defined by the formula $\lambda\cdot (h,t):=(\lambda^2 h,\lambda t)$. We will denote by $\mathbb{P}(2^N,1)$ the quotient stack of $\widetilde{\mathbb{A}}(N)\setminus 0$ by $\mathbb{G}_m$ using this weighted action, where $N:=2g+2$.
The following proposition explains why we can pass to the weighted projective setting without losing any information about Chow group. First, suppose we have two group schemes $G$ and $H$ acting on a scheme $X$ such that their actions commute. From now on, the Chow group ${\rm CH}([X/(G\times H)])$ defined as the $G \times H$-equivariant Chow group of $X$ will be also denoted by ${\rm CH}_H([X/G])$ or ${\rm CH}_G([X/H])$.
\begin{proposizione}\label{GM}
Let $X$ be a smooth algebraic scheme over $k$ with an action of $\mathbb{G}_m$ and an action of a group $G$ such that the two actions commute, then the natural morphism of rings
$$ {\rm CH}([X/(\mathbb{G}_m\times G)]) \longrightarrow {\rm CH}([X/G])$$
induced by the pullback along the $\mathbb{G}_m$-torsor
$$ [X/G] \longrightarrow [X/(\mathbb{G}_m\times G)]$$
is surjective. Moreover, its kernel is the ideal generated by $c_1(\mathscr{L})$ in ${\rm CH}([X/(\mathbb{G}_m\times G)])$, where $\mathscr{L}$ is the line bundle associated to the $\mathbb{G}_m$-torsor.
\end{proposizione}
\begin{proof}
Because torsors are stable under base change and representable as morphisms of stacks, we can reduce to the case of an $\mathbb{G}_m \times G$-equivariant $\mathbb{G}_m$-torsor in the category of algebraic spaces, where this result is well known.
\end{proof}
Using the previous proposition and writing down the following commutative diagram of ${\rm CH}({\rm B}T)$-algebras:
$$\xymatrix{ {\rm CH}_T(\widetilde{\Delta} \setminus 0) \ar[r] & {\rm CH}_T(\widetilde{\mathbb{A}}(N)\setminus 0) \ar[r] & {\rm CH}(\mathscr{H}_{g,1}) \ar[r] & 0 \\
{\rm CH}_T([\widetilde{\Delta}\setminus 0/\mathbb{G}_m]) \ar[r] \ar[u] & {\rm CH}_T(\mathbb{P}(2^N,1)) \ar[r] \ar[u] & {\rm CH}_T([\widetilde{\mathbb{A}}_{sm}(N)\setminus 0/\mathbb{G}_m]) \ar[r] \ar[u] & 0 \ } $$
we can reduce the computation to the weighted projective setting and then set the first Chern class of the line bundle associated to the $\mathbb{G}_m$-torsor equal to $0$.
The rest of the section will be dedicated to computing the $T$-equivariant Chow group of $\mathbb{P}(2^N,1)$, i.e.
$$ {\rm CH}_{T}(\mathbb{P}(2^N,1)):={\rm CH}_{T \times \mathbb{G}_m}(\widetilde{\mathbb{A}}(N)\setminus 0). $$
Let $T$ be a split torus of dimension $r$, i.e. $T\simeq \mathbb{G}_m^r$.
\begin{osservazione}
Edidin and Graham have already proved in \cite[Section 3.2]{EdGra} that
$${\rm CH}({\rm B}T) \simeq \mathbb{Z}[T_1,\dots,T_r] $$
where $T_j=c_1^{(\mathbb{G}_{m})_j}(\mathbb{A}^1)$, $(\mathbb{G}_m)_j$ is the $j$-th factor of the product $T$ and $\mathbb{A}^1$ is the representation of $\mathbb{G}_m$ with weight $1$.
Suppose $T$ acts on $\mathbb{A}^{n+1}$. We can decompose the representation $\mathbb{A}^{n+1}$ in a product of irreducible representations $\mathbb{A}^1_0 \times \dots\times \mathbb{A}^1_n$ where $T$ acts on $\mathbb{A}^1_i$ with some weights $(m_1^i,\dots,m_r^i) \in \mathbb{Z}^r$ for every $0\leq i\leq n$. If we denote by $ p_i(T_1,\dots,T_r)$ the first Chern classes $c_1^T(\mathbb{A}^1_i)$, then we get
$$ p_i(T_1,\dots,T_r) = \sum_{j=1}^r m_j^i T_j \in \mathbb{Z}[T_1,\dots,T_r]$$
for every $0\leq i \leq n$.
\end{osservazione}
\begin{proposizione}\label{c}
In the setting of the previous remark, we get the following result:
$$ {\rm CH}_T(\mathbb{A}^{n+1}\setminus 0) = \frac{\mathbb{Z}[T_1,\dots,T_r]}{\Big(\prod_{i=0}^{n}p_i(T_1,...,T_r)\Big)}. $$
\end{proposizione}
\begin{proof}
Consider the localization sequence for the $T$-invariant open subscheme $\mathbb{A}^{n+1} \setminus 0 \hookrightarrow \mathbb{A}^{n+1}$:
$$ \xymatrix{{\rm CH}({\rm B}T) \ar[r]^{(i_0)_*} & {\rm CH}_T(\mathbb{A}^{n+1}) \ar[d]^{(i_0)^*} \ar[r] & {\rm CH}_T(\mathbb{A}^{n+1}\setminus 0) \ar[r] & 0\\
& {\rm CH}({\rm B}T) } $$
where $i_0$ is the $0$-section of the $T$-equivariant vector bundle over ${\rm Spec}\, k$. Using the self-intersection formula, we get that the image through $(i_0)_*$ of ${\rm CH}({\rm B}T)$ is the ideal (inside ${\rm CH}({\rm B}T)$) generated by $c_{n+1}^T(\mathbb{A}^{n+1})$. Thus we get the following equality
$$c_{n+1}^T(\mathbb{A}^{n+1}) = \prod_{i=0}^n c_1^T(\mathbb{A}_i^1) = \prod_{i=0}^{n}p_i(T_1,\dots,T_r)$$
and the statement follows.
\end{proof}
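As a familiar special case of the proposition (stated only as a consistency check): for $T=\mathbb{G}_m$ acting on $\mathbb{A}^{n+1}$ with all weights equal to $1$, every $p_i(T_1)=T_1$ and the formula gives
$$ {\rm CH}_{\mathbb{G}_m}(\mathbb{A}^{n+1}\setminus 0) = \mathbb{Z}[T_1]/(T_1^{n+1}) = {\rm CH}(\mathbb{P}^n), $$
recovering the Chow ring of projective space.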
Recall that
$$ \mathbb{P}(2^N,1) \simeq \Big[ \big(\widetilde{\mathbb{A}}(N)\setminus 0\big) / \mathbb{G}_m \Big] $$
where $\mathbb{G}_m$ acts with weights $(2,\dots,2,1)$ and we have an action of a $2$-dimensional split torus $T$ over $\widetilde{\mathbb{A}}(N)$. Furthermore, recall that by definition
$$ {\rm CH}_T(\mathbb{P}(2^N,1)):= {\rm CH}_{T \times \mathbb{G}_m}(\widetilde{\mathbb{A}}(N)\setminus 0). $$
We denote by $p_i(T_0,T_1)$ the first $T$-equivariant Chern class of the $i$-th factor of the product $\widetilde{\mathbb{A}}(N) \simeq \mathbb{A}^1_0 \times \dots\times \mathbb{A}^1_N$, and by $h$ the first $\mathbb{G}_m$-equivariant Chern class of the irreducible representation of $\mathbb{G}_m$ with weight $1$ (here we are considering $\widetilde{\mathbb{A}}(N)$ a $T$-representation through the identification $\xi_N$). The previous proposition gives us the following result.
\begin{corollario}
In the above setting, we get
$$ {\rm CH}_T(\mathbb{P}(2^N,1)) \simeq \frac{\mathbb{Z}[T_0,T_1,h]}{\Bigl((h+p_N(T_0,T_1))\prod_{i=0}^{N-1}(2h+p_i(T_0,T_1))\Bigr)} $$
for some homogeneous polynomials $p_i(T_0,T_1)$ of degree $1$.
\end{corollario}
\begin{osservazione}
We can easily compute the polynomials $p_i$ mentioned in the previous corollary, but it is not necessary.
\end{osservazione}
\section{Equivariant Chow envelope for $\widetilde{\Delta}$}\label{eq}
In this section, we recall briefly the theory of Chow envelopes for quotient stacks and then we find a Chow envelope for $\widetilde{\Delta}$. The idea is to modify the one described in \cite[Section 4]{EdFul} so to suit this weighted projective setting. Fix again $N:=2g+2$.
\begin{definition}
Let $f:\mathscr{X} \rightarrow \mathscr{Y}$ be a proper, representable morphism of quotient stacks. We say that $f$ has the property $\mathcal{N}$ if the morphism of groups
$$ f_*:{\rm CH}(\mathscr{X}) \rightarrow {\rm CH}(\mathscr{Y}) $$
is surjective.
We say that a morphism of algebraic stacks $f:\mathscr{X} \rightarrow \mathscr{Y}$ is a Chow envelope if $f(K): \mathscr{X}(K) \rightarrow \mathscr{Y}(K)$ is essentially surjective for every extension of fields $K/k$, i.e. for every element $y \in \mathscr{Y}(K)$ there exist an object $x \in \mathscr{X}(K)$ and an isomorphism $\eta: f(K)(x) \rightarrow y$ in the groupoid $\mathscr{Y}(K)$.
\end{definition}
\begin{osservazione}\label{Env}
It is a classical fact that if $f:X \rightarrow Y$ is a proper morphism of algebraic spaces over $k$ such that $f$ is a Chow envelope, then $f$ has the property $\mathcal{N}$.
\end{osservazione}
We want to prove the same for quotient stacks.
\begin{lemma}\label{lem2}
Let $G$ be a group scheme over $k$ and suppose we have an action of $G$ on two algebraic spaces $X$ and $Y$ and a map $f: X \rightarrow Y$ which is $G$-equivariant. We denote by $f^G$ the induced morphism of quotient stacks and we assume it is proper and representable. If $f^G$ is a Chow envelope, then $f^G$ has the property $\mathcal{N}$.
\end{lemma}
\begin{proof}
We need to prove that
$$f_*^G: {\rm CH}_i^G(X) \longrightarrow {\rm CH}_i^G(Y) $$
is surjective for every $i \in \mathbb{N}$. Fix $i \in \mathbb{N}$. We consider an approximation $U \subset V$, where $V$ is a representation of $G$, ${\rm codim}_V(V\setminus U)>i$ and $G$ acts freely on $U$; therefore $(X \times U)/G$ is an algebraic space and
$$ {\rm CH}_i^G(X)={\rm CH}_{i+l-g}\bigl((X \times U)/G\bigr), $$
where $l=\dim V$ and $g$ denotes here the dimension of $G$.
If we consider the following cartesian diagram
$$ \xymatrix{ (X \times U)/G \ar[r]^{f_U} \ar[d] & (Y \times U)/G \ar[d] \\ [X/G] \ar[r]^{f^G} & [Y/G] } $$
we get that $f_U(K)$ is surjective for every extension of fields $K/k$ because $f^G$ has the same property and being surjective is stable under base change. \hyperref[Env]{Remark \ref{Env}} implies the surjectivity of $(f_U)_*$ and therefore of $f_*^G$.
\end{proof}
\begin{osservazione}
The previous lemma is a natural corollary of \cite[Lemma 3.3]{EdGra}, using the fact that being a Chow envelope is a property invariant under base change.
\end{osservazione}
We recall that a special group scheme $T$ is a group scheme (over $k$) such that every $T$-torsor $P\rightarrow S$ is locally trivial in the Zariski topology, i.e. there exists a Zariski covering $\{U_i \rightarrow S\}_{i\in I}$ such that $P\times_{S} U_i \rightarrow U_i$ is a trivial $T$-torsor for every $i \in I$.
\begin{osservazione}\label{boh}
Given a special group $T$ acting on an algebraic space $X$ over $k$, then the $T$-torsor $$X \rightarrow [X/T]$$ is clearly a Chow envelope thanks to $T$ being special.
\end{osservazione}
\begin{corollario}\label{ChEnv}
Let $G,T$ be two group schemes over $k$ and suppose we have an action of $G \times T$ on two algebraic spaces $X$ and $Y$ and a map $f: X \rightarrow Y$ which is $G\times T$-equivariant. Assume that $T$ is a special group. Suppose that $f^G$ is a Chow envelope and $f^{G\times T}$ is a proper representable morphism. Then $f^{G \times T}: [X/(G\times T)] \rightarrow [Y/(G \times T)]$ has the property $\mathcal{N}$.
\end{corollario}
\begin{proof}
If we consider the cartesian diagram of quotient stacks
$$ \xymatrix{ [X/G] \ar[r]^{f^G} \ar[d] & [Y/G] \ar[d] \\
[X/(G\times T)] \ar[r]^{f^{G\times T}} & [Y/(G \times T)]} $$
we notice that the two vertical maps are $T$-torsors. \hyperref[boh]{Remark \ref{boh}} easily implies that $f^{G\times T}$ is a Chow envelope. Therefore $f^{G\times T}$ has the property $\mathcal{N}$.
\end{proof}
Let us recall the setting we are studying. We have a morphism
$$ \varphi_n:\widetilde{\mathbb{A}}(n) \rightarrow \mathbb{A}(n) $$
which is defined on points as $(f,s) \mapsto f$; it induces a morphism $\phi_n:\mathbb{P}(2^n,1) \rightarrow \mathbb{P}^n$ for every $n \in \mathbb{N}$. In the case of smooth hyperelliptic curves (without the datum of the section), we have the following exact sequence:
$$ \xymatrix{{\rm CH}_{{\rm GL}_2}(\Delta) \ar[r] & {\rm CH}_{{\rm GL}_2}(\mathbb{A}(2g+2)) \ar[r] & {\rm CH}(\mathscr{H}_g) \ar[r] & 0 } $$
where $\Delta$ is the discriminant locus, which is naturally $\mathbb{G}_m$-invariant with respect to the standard action (see \cite[Corollary 4.7]{ArVis}). By abuse of notation we denote by $\Delta$ the image of the discriminant locus in $\mathbb{P}^N$. We know that, if we define
$$ \Delta_r:=\{\, h \in \mathbb{P}^N \mid h=f^2g \text{ in some field extension, with } \deg(f)=r \,\},$$
for every $r\leq N/2$ (recall that $N:=2g+2$), the chain of closed subsets $\Delta_{r+1} \subset \Delta_r$ is a stratification of $\Delta$ and the coproduct of the maps
$$ \pi_r: \mathbb{P}^r \times \mathbb{P}^{N-2r} \longrightarrow \mathbb{P}^N$$
defined by $\pi_r(h,g)=h^2g$ forms a Chow envelope for $\Delta$ when ${\rm char}(k)>N-2$ (see \cite[Lemma 3.2]{Vis3}).
If we denote the projectivization of $\widetilde{\Delta}$ in $\mathbb{P}(2^N,1)$ by $\overline{\Delta}$, we clearly have $\overline{\Delta}=\phi_N^{-1}(\Delta)$ (set-theoretically). Thus, we consider the pullback of the stratification of $\Delta$ through the map $\phi_N$. To be precise, we have the stratification $\overline{\Delta}_{r+1}\subset \overline{\Delta}_r $ given by the following:
$$ \overline{\Delta}_r:=\{\,(h,s) \in \mathbb{P}(2^N,1) \text{ such that, in some field extension, } h=f^2g \text{ with } \deg(f) = r\,\}.$$
Our aim is to construct a Chow envelope (of quotient stacks) for $\overline{\Delta} \subset \mathbb{P}(2^N,1)$. We recall that $\widetilde{\mathbb{A}}(n)$ is defined as the closed subscheme of $\mathbb{A}(n) \times \mathbb{A}^1$ given by the pairs $$(f,s) \in \mathbb{A}(n) \times \mathbb{A}^1$$ satisfying the equation $f(0,1)=s^2$.
We consider the morphisms of schemes
$$ c_r:\mathbb{A}(r) \times \widetilde{\mathbb{A}}(N-2r) \longrightarrow \widetilde{\mathbb{A}}(N) $$
defined by
\begin{equation*}\label{c_r}
(f,(g,s)) \mapsto (f^2g,f(0,1)s). \tag{*}
\end{equation*}
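Note (a one-line verification) that $c_r$ indeed takes values in $\widetilde{\mathbb{A}}(N)$: since $(g,s) \in \widetilde{\mathbb{A}}(N-2r)$ means $g(0,1)=s^2$, we have
$$ (f^2g)(0,1) = f(0,1)^2\, g(0,1) = \bigl(f(0,1)\, s\bigr)^2. $$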
Furthermore, we endow $\mathbb{A}(r) \times \widetilde{\mathbb{A}}(N-2r)$ with a $T$-action.
\begin{osservazione} \label{act}
We have to define the $T$-action on our product $\mathbb{A}(r)\times \widetilde{\mathbb{A}}(N-2r)$:
\begin{itemize}
\item[i)] if $g$ is even, the action is described by:
$$(t_0,t_1).p(x_0,x_1) :=(t_0t_1)^{g/2}p(x_0/t_0,x_1/t_1)$$ for every $(t_0,t_1) \in T$, for every $p \in \mathbb{A}(r)$;
$$(t_0,t_1).(q(x_0,x_1),s):= (q(x_0/t_0,x_1/t_1),t_1^{r-g-1}s)$$ for every $(t_0,t_1)\in T$, for every $(q,s) \in \widetilde{\mathbb{A}}(N-2r);$
\item[ii)] if $g$ is odd, the action is described by:
$$(\alpha,\rho).p(x_0,x_1) :=\alpha^{-1}\rho^{(g+1)/2}p(x_0/\rho,x_1)$$ for every $(\alpha,\rho) \in T$, for every $p \in \mathbb{A}(r)$;
$$(\alpha,\rho).(q(x_0,x_1),s):= (q(x_0/\rho,x_1),s)$$ for every $(\alpha,\rho)\in T$, for every $(q,s) \in \widetilde{\mathbb{A}}(N-2r)$.
\end{itemize}
In both cases, a straightforward computation shows that $c_r$ is a $T$-equivariant morphism.
\end{osservazione}
To construct the Chow envelope, we need to pass to the projective setting. We need to construct two different morphisms and we will prove that the coproduct is the Chow envelope required. Therefore we consider the action of $\mathbb{G}_m \times \mathbb{G}_m$ on $\mathbb{A}(r)\times \widetilde{\mathbb{A}}(N-2r)$ (as always $N:=2g+2$) defined by the product of the two actions:
$$\lambda\cdot (f_0,\dots,f_r) := (\lambda f_0,\dots,\lambda f_r)$$
or equivalently $\lambda\cdot f:= \lambda f$ for every $\lambda \in \mathbb{G}_m$ and $f=(f_0,\dots,f_r)\in \mathbb{A}(r)$;
$$ \mu \cdot (g_0,\dots,g_{N-2r},s):=(\mu^2g_0,\dots,\mu^2g_{N-2r},\mu s) $$
or equivalently $\mu \cdot (g,s):= (\mu^2 g,\mu s)$ for every $\mu \in \mathbb{G}_m$ and $(g,s) \in \widetilde{\mathbb{A}}(N-2r)$. We have denoted the quotient stack $[\widetilde{\mathbb{A}}(N-2r)\setminus 0/\mathbb{G}_m]$ for the action described above by
$\mathbb{P}(2^{N-2r},1)$.
Clearly this action commutes with the one of the torus $T$ described in \hyperref[act]{Remark \ref{act}}. The morphism $c_r$ is equivariant for the morphism of group schemes $\mathbb{G}_m \times \mathbb{G}_m \times T \rightarrow \mathbb{G}_m \times T$ described as $$(\lambda,\mu,t) \mapsto (\lambda\mu,t),$$
i.e. $c_r((\lambda,\mu,t)\cdot (f,(g,s)))= (\lambda\mu,t)\cdot c_r(f,(g,s))$.
We denote by $$a_r:\mathbb{P}^r \times \mathbb{P}(2^{N-2r},1) \rightarrow \mathbb{P}(2^N,1)$$ the morphism induced by $c_r$ on the quotients of $(\mathbb{A}(r)\setminus 0) \times (\widetilde{\mathbb{A}}(N-2r)\setminus 0)$ by $\mathbb{G}_m\times \mathbb{G}_m$ and of $\widetilde{\mathbb{A}}(N)\setminus0$ by $\mathbb{G}_m$.
\begin{osservazione}
Unfortunately, the morphism $a_r$ is not a Chow envelope of $\overline{\Delta}_r$. Consider in fact an object $(h,0) \in \overline{\Delta}_r(k) \subset \mathbb{P}(2^N,1)(k)$ such that $h=f^2g$ where $f,g \in k[x_0,x_1]$ are homogeneous polynomials of degrees $r$ and $N-2r$ respectively, with $g(0,1) \in k\setminus k^2$ and $f(0,1)=0$. Thus, if we hope to find an element $(g,s)$ in $\mathbb{P}(2^{N-2r},1)$ such that $g(0,1)=s^2$, we need to pass to an extension of $k$. Therefore it will be a surjective morphism of algebraic stacks, but not a Chow envelope.
\end{osservazione}
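For instance (purely to illustrate the arithmetic obstruction, ignoring the other conditions defining $\overline{\Delta}_r$): over $k=\mathbb{Q}$ one may take $g$ with $g(0,1)=2$; any lift $(g,s)$ would require $s^2=2$, so $s=\pm\sqrt{2}\notin\mathbb{Q}$ and a lift exists only after a quadratic extension.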
We then need to construct another morphism to obtain a Chow envelope in the case $f(0,1)=0$. The construction is the following. Consider the closed immersion $i_r:\mathbb{A}(r-1) \hookrightarrow \mathbb{A}(r)$ defined by $$ (f_0,\dots,f_{r-1}) \mapsto (f_0,\dots,f_{r-1},0)$$ and let $U_r$ be the open complement of this closed subset. Notice that the closed immersion can be expressed in the language of homogeneous polynomials as $f \mapsto x_0f$, if $f \in \mathbb{A}(r-1)$.
We consider the morphism
$$ d_r: \mathbb{A}(r-1) \times \mathbb{A}(N-2r) \rightarrow \widetilde{\mathbb{A}}(N)$$ defined by the equation
\begin{equation*}\label{d_r}
(f,g) \mapsto ((x_0f)^2g,0) \tag{**}
\end{equation*}
and a $\mathbb{G}_m$-action on $\mathbb{A}(N-2r)$ described by
$$ \mu \cdot (g_0,\dots,g_{N-2r}):=(\mu^2g_0,\dots,\mu^2g_{N-2r})$$
or equivalently $\mu\cdot g:=\mu^2 g$ for $\mu \in \mathbb{G}_m$ and $g \in \mathbb{A}(N-2r)$. We will denote by $\mathbb{P}(2^{N-2r+1})$ the quotient stack $[\mathbb{A}(N-2r) \setminus 0/\mathbb{G}_m]$ with this action. In the same way, $d_r$ is equivariant for the group scheme homomorphism $\mathbb{G}_m\times \mathbb{G}_m \times T \rightarrow \mathbb{G}_m \times T$ and therefore gives us the morphism $b_r:\mathbb{P}^{r-1} \times \mathbb{P}(2^{N-2r+1}) \rightarrow \mathbb{P}(2^N,1)$. In this case the action of $T$ on $\mathbb{A}(N-2r)$ is just the restriction to the diagonal torus $T$ of the standard action $A\cdot f(x):= f(A^{-1}x)$ for every $A\in {\rm GL}_2$ and $f\in \mathbb{A}(N-2r)$.
\begin{lemma}\label{b}
In the setting above, the two maps
$$b_r: \mathbb{P}^{r-1} \times \mathbb{P}(2^{N-2r+1}) \longrightarrow \mathbb{P}(2^N,1)$$
and
$$a_r: \mathbb{P}^r \times \mathbb{P}(2^{N-2r},1) \longrightarrow \mathbb{P}(2^N,1) $$
are representable proper morphisms of quotient stacks.
\end{lemma}
\begin{proof}
Representability follows directly from the following fact (see \cite[Lemma 99.6.2]{stacks-project}): suppose $G$ and $H$ are two group schemes with a group homomorphism $\phi:G \rightarrow H$ and suppose $X$ is a $G$-scheme, $Y$ is a $H$-scheme and $f: X \rightarrow Y$ is an equivariant morphism, i.e. $f(g\cdot x)=\phi(g)\cdot f(x)$ for every $x \in X(S)$ and $g\in G(S)$. Then the induced morphism $[X/G]\rightarrow [Y/H]$ between the quotient stacks is representable if and only if for every $k$-scheme $S$ and for every $x \in X(S)$ the following map
$$ {\rm Stab}_G(x) \rightarrow {\rm Stab}_H(f(x)) $$ is injective, where ${\rm Stab}_G(x):=\{g\in G(S)\vert g\cdot x=x\}$.
Properness follows from the fact that the source of the morphism is a proper stack and the target is separated (see \cite[Proposition 10.1.6]{Oll}).
\end{proof}
\begin{osservazione}\label{dia}
If we consider now the commutative diagram
$$\xymatrix{ \mathbb{A}(r) \times \widetilde{\mathbb{A}}(N-2r) \ar[r]^<<<<{c_r} \ar[d]^{{\rm Id}\times \varphi_{N-2r}} & \widetilde{\mathbb{A}}(N) \ar[d]^{\varphi_N} \\
\mathbb{A}(r) \times \mathbb{A}(N-2r) \ar[r]^<<<<{\pi_r} & \mathbb{A}(N)},$$
we can pass to the projective setting to get the following commutative diagram
$$\xymatrix{ \mathbb{P}^r \times \mathbb{P}(2^{N-2r},1) \ar[r]^<<<<{a_r} \ar[d]^{{\rm Id}\times \phi_{N-2r}} & \mathbb{P}(2^N,1) \ar[d]^{\phi_N} \\
\mathbb{P}^r \times \mathbb{P}^{N-2r} \ar[r]^<<<<<<{\pi_r} & \mathbb{P}^{N}}$$
which shows that $a_r$ factors through the closed immersion $\overline{\Delta}_r \subset \mathbb{P}(2^N,1)$. In a similar way, the same can be shown for $b_r$.
\end{osservazione}
\begin{lemma}\label{lem}
Consider the morphisms
$$b_r: \mathbb{P}^{r-1} \times \mathbb{P}(2^{N-2r+1}) \longrightarrow \overline{\Delta}_r $$
and
$$a_r: \mathbb{P}^r \times \mathbb{P}(2^{N-2r},1) \longrightarrow \overline{\Delta}_r $$
and let $\omega_r$ be the coproduct morphism. If ${\rm char}(k)>2g$, then $\omega_r$ restricted to the preimage of $\overline{\Delta}_r\setminus\overline{\Delta}_{r+1}$ is a Chow envelope for every $1\leq r \leq N/2$ (where $\overline{\Delta}_{N/2+1}:=\emptyset$).
\end{lemma}
\begin{proof}
Let us denote $\overline{\Delta}_r\setminus\overline{\Delta}_{r+1}$ by $D$. Let $K$ be an extension of $k$ and $(h,t)\in D(K)$; thus $h \in \Delta_r\setminus \Delta_{r+1}$ and therefore $h=f^2g$ with $f,g \in K[x_0,x_1]$ homogeneous polynomials, where $g$ is square free and $\deg f=r$ (see \cite[Lemma 3.2]{Vis3}). Moreover, if $f(0,1)\neq 0$ the equation
$$ t^2=h(0,1)=f(0,1)^2g(0,1)$$
gives us that $a_r(K)(f,(g,s))=(h,t)$ for $s=t/f(0,1)$ (indeed $s^2=t^2/f(0,1)^2=g(0,1)$, as required). On the other hand, if $f(0,1)=0$ (so that $f=x_0f'$), then $t=0$; therefore we can consider $(f',g)$ as an element of $( \mathbb{P}^{r-1} \times \mathbb{P}(2^{N-2r+1}))(K)$ and get
$b_r(f',g)=(h,0)$. We have then proved the statement.
\end{proof}
\begin{comment}
\begin{osservazione}
Suppose we have $G:=\mathbb{G}_m$ and $T$ a torus, a scheme $X$ with an action of $G\times G \times T$ and a scheme $Y$ with an action of $G \times T$. Furthermore, we consider a morphism $X \rightarrow Y$ equivariant in respect to the map $G\times G \times T \rightarrow G \times T$ defined as $(g,h,t)\mapsto(gh,t)$. Suppose that if we consider the action of $G$ on $X$ given by
$g\cdot x:=(g,g^{-1})\cdot x$, we obtain a free action with an algebraic space $Z$ as a quotient, and considering the action of $G$ on $Z$
\end{osservazione}
\end{comment}
\begin{proposizione}
Suppose ${\rm char}(k) > 2g$. The morphism
$$ \omega := \bigsqcup_{r=1}^{g+1} \omega_r $$
is surjective at the level of $T$-equivariant Chow rings, i.e. $$\omega_*: \bigoplus_{r=1}^{g+1}\Big[{\rm CH}_T\big(\mathbb{P}^{r-1}\times \mathbb{P}(2^{N-2r+1})\big) \oplus {\rm CH}_T\big(\mathbb{P}^r \times \mathbb{P}(2^{N-2r},1)\big)\Big] \rightarrow {\rm CH}_T(\overline{\Delta})$$
is surjective.
\end{proposizione}
\begin{proof}
First, notice that \hyperref[ChEnv]{Corollary \ref{ChEnv}} states that it is enough to prove that $\omega$ is a Chow envelope. Consider the stratification
$$ 0\subset \overline{\Delta}_{N/2} \subset \dots \subset \overline{\Delta}_1=\overline{\Delta}.$$
\hyperref[lem]{Lemma \ref{lem}} states that $\omega_r$ is a Chow envelope when restricted to $\overline{\Delta}_r \setminus \overline{\Delta}_{r+1}$. Therefore the coproduct is a Chow envelope for $\overline{\Delta}$, proving the statement.
\end{proof}
Therefore, we just need to describe the image of the morphisms $(b_r)_*$ and $(a_r)_*$ inside ${\rm CH}_T(\mathbb{P}(2^N,1))$.
\section{Description of the Chow ring of $\mathscr{H}_{g,1}$}\label{de}
Finally, we can explicitly compute the relations in ${\rm CH}_T(\mathbb{P}(2^N,1))$ using the pushforwards along $a_r$ and $b_r$ in $\mathbb{P}(2^N,1)$ (these two morphisms are the ones induced on the weighted projective stacks by $c_r$ and $d_r$, whose descriptions are given in equations \hyperref[c_r]{(*)} and \hyperref[d_r]{(**)} respectively). Our goal is to prove that every relation for $\mathscr{H}_{g,1}$ is in the image of the map $\varphi_{2g+2}^*: {\rm CH}_T(\mathbb{A}(2g+2)) \rightarrow {\rm CH}_T(\widetilde{\mathbb{A}}(2g+2))$.
\begin{osservazione}
Let us consider the two Chow groups computed using \hyperref[c]{Proposition \ref{c}}:
\begin{itemize}
\item[i)] $$ {\rm CH}_T(\mathbb{P}^r\times \mathbb{P}(2^{N-2r},1)) \simeq \frac{\mathbb{Z}[T_0,T_1,u_1,v_1]}{(p_1(u_1,T_0,T_1),q_1(v_1,T_0,T_1))} $$
where $u_1=c_1(\mathcal{O}_{\mathbb{P}^r}(1))$, $v_1=c_1(\mathcal{O}_{\mathbb{P}(2^{N-2r},1)}(1))$ and $p_1$ is a monic polynomial in the variable $u_1$ of degree $r+1$;
\item[ii)] $$ {\rm CH}_T(\mathbb{P}^{r-1}\times \mathbb{P}(2^{N-2r+1})) \simeq \frac{\mathbb{Z}[T_0,T_1,u_2,v_2]}{(p_2(u_2,T_0,T_1),q_2(v_2,T_0,T_1))} $$
where $u_2=c_1(\mathcal{O}_{\mathbb{P}^{r-1}}(1))$, $v_2=c_1(\mathcal{O}_{\mathbb{P}(2^{N-2r+1})}(1))$ and $p_2$ is a monic polynomial in the variable $u_2$ of degree $r$.
\end{itemize}
Notice that because $c_r$ (respectively $d_r$) is $T$-equivariant, the image of the pushforward of $a_r$ (respectively $b_r$) will be the ideal generated by the pushforwards of the elements of the form $u_1^iv_1^j$ (respectively $u_2^iv_2^j$) where $i\leq r$ (respectively $i\leq r-1$).
Recall the description of the $T$-equivariant Chow ring of $\mathbb{P}(2^N,1)$:
$$ {\rm CH}_T(\mathbb{P}(2^N,1)) \simeq \frac{\mathbb{Z}[T_0,T_1,t]}{(P(t,T_0,T_1))}. $$
\end{osservazione}
\begin{lemma}
Using the notation above, we have the following equations:
$$ a_r^*(t)= u_1+v_1 $$
and
$$ b_r^*(t) = u_2 + v_2.$$
\end{lemma}
\begin{proof}
Let us show the formula for $a_r$. We recall the morphism
$$c_r: (\mathbb{A}(r) \setminus 0) \times (\widetilde{\mathbb{A}}(N-2r) \setminus 0) \longrightarrow \widetilde{\mathbb{A}}(N) \setminus 0 $$
defined by the formula (see \hyperref[c_r]{(*)})
$$ c_r(f,(g,s))=(f^2g, f(0,1)s) $$
and we consider the action of $\mathbb{G}_m$ on the product $(\mathbb{A}(r) \setminus 0) \times (\widetilde{\mathbb{A}}(N-2r) \setminus 0)$ defined by the formula
$$ \lambda.(f,(g,s))= (\lambda f, (\lambda^{-2}g, \lambda^{-1} s)) $$
for every $\lambda \in \mathbb{G}_m$.
A straightforward computation shows that $c_r(\lambda.(f,(g,s)))=c_r(f,(g,s))$
therefore we have an induced $T$-equivariant morphism of stacks (in fact it is a morphism of schemes; see \cite[Proposition 23]{EdGra})
$$ [c_r]:\frac{(\mathbb{A}(r) \setminus 0) \times (\widetilde{\mathbb{A}}(N-2r) \setminus 0)}{\mathbb{G}_m} \longrightarrow \widetilde{\mathbb{A}}(N) \setminus 0$$
which fits into the following commutative diagram:
$$ \xymatrix{ \frac{(\mathbb{A}(r) \setminus 0) \times (\widetilde{\mathbb{A}}(N-2r) \setminus 0)}{\mathbb{G}_m} \ar[rr]^{[c_r]} \ar[d] && \widetilde{\mathbb{A}}(N) \setminus 0 \ar[d] \\
\mathbb{P}^r \times \mathbb{P}(2^{N-2r},1) \ar[rr]^{a_r} && \mathbb{P}(2^N,1) } $$
where the vertical maps are the natural projection maps (clearly the left one is still defined after quotienting by $\mathbb{G}_m$).
We leave as an easy verification to the reader that the vertical map on the left is in fact a $\mathbb{G}_m$-torsor under the action described by the formula
$$ \mu.[f,(g,s)]= [f, (\mu^2 g, \mu s)].$$
Using the fact that the category of $G$-torsors over a fixed stack is in fact a groupoid for every group scheme $G$, we deduce that the diagram above is in fact cartesian and consequently we get the following equality at the level of first Chern classes
$$ c_1^T(\mathcal{L}) = a_r^*\big(c_1^T(\mathcal{O}_{\mathbb{P}(2^N,1)}(-1))\big) $$
where $\mathcal{L}$ is the line bundle associated to the $\mathbb{G}_m$-torsor
$$ \frac{(\mathbb{A}(r) \setminus 0) \times (\widetilde{\mathbb{A}}(N-2r) \setminus 0)}{\mathbb{G}_m} \longrightarrow \mathbb{P}^r \times \mathbb{P}(2^{N-2r},1); $$
explicitly, the line bundle can be described as
$$ \mathcal{L}=\frac{(\mathbb{A}(r) \setminus 0) \times (\widetilde{\mathbb{A}}(N-2r) \setminus 0) \times \mathbb{A}^1}{\mathbb{G}_m \times \mathbb{G}_m} $$
where the action can be described as follows: $$(\lambda,\mu). (f,(g,s),v)= (\lambda f, (\lambda^{-2}\mu^2 g, \lambda^{-1}\mu s), \mu v)$$
for every $(\lambda,\mu) \in \mathbb{G}_m \times \mathbb{G}_m$ and for every point $(f,(g,s),v)$ in $(\mathbb{A}(r) \setminus 0) \times (\widetilde{\mathbb{A}}(N-2r) \setminus 0) \times \mathbb{A}^1$.
Using the group isomorphism $\mathbb{G}_m \times \mathbb{G}_m \rightarrow \mathbb{G}_m \times \mathbb{G}_m$ described by the association $(\lambda, \mu) \mapsto (\lambda, \lambda^{-1} \mu)$, we deduce that $\mathcal{L}$ is in fact the quotient above with the new action
$(\lambda,\mu)(f,(g,s),v)= (\lambda f, (\mu^2 g, \mu s), \lambda\mu v)$ which describes exactly the line bundle whose first $T$-equivariant Chern class is $c_1^T(\mathcal{O}_{\mathbb{P}^r}(-1))+c_1^T(\mathcal{O}_{\mathbb{P}(2^{N-2r},1)}(-1))$.
This concludes the proof. The same idea can be used to prove the statement for $b_r$.
\end{proof}
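The torus reparametrization used in the last step of the proof can be checked by elementary weight bookkeeping. The following sketch (illustrative only, and not part of the argument; the dictionary keys are just labels for the coordinates $f$, $g$, $s$, $v$) records the $(\lambda,\mu)$-weights of the action on $(f,(g,s),v)$ and verifies that the substitution $\mu \mapsto \lambda\mu$ produces the weights of the action $(\lambda f, (\mu^2 g, \mu s), \lambda\mu v)$.
\begin{verbatim}
# Illustrative weight bookkeeping (not part of the proof).
# Each coordinate scales by lambda^a * mu^b under the original action.
old_weights = {"f": (1, 0), "g": (-2, 2), "s": (-1, 1), "v": (0, 1)}

# Reparametrize the torus by (lambda, mu) -> (lambda, lambda*mu):
# the mu-exponent is added to the lambda-exponent.
new_weights = {name: (a + b, b) for name, (a, b) in old_weights.items()}

# These are exactly the weights of (lambda f, (mu^2 g, mu s), lambda*mu v).
assert new_weights == {"f": (1, 0), "g": (0, 2), "s": (0, 1), "v": (1, 1)}
\end{verbatim}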
\begin{osservazione}\label{rem}
Using projection formula, it is immediate to prove that the ideal we are looking for is generated by the pushforwards through the map $a_r$ of $u_1^i$ (respectively through the map $b_r$ of $u_2^i$) for every $i\leq r$ (respectively for $i\leq r-1$).
\end{osservazione}
Let us consider the pull-back of the Chow envelope $\pi_r$ through the morphism of algebraic stacks $\phi_N$, i.e. the following cartesian diagrams
$$ \xymatrix{\mathscr{R}_r \ar[d]_{p} \ar[r]&\mathscr{P}_r \ar[d]^q \ar[r]_{\varpi_r}
& \mathbb{P}(2^N,1) \ar[d]_{\phi_N} \\ \mathbb{P}^{r-1}\times \mathbb{P}^{N-2r} \ar[r]^{i_r \times {\rm Id}} &
\mathbb{P}^r \times \mathbb{P}^{N-2r} \ar[r]^>>>>>>>>>{\pi_r} & \mathbb{P}^N, } $$
in particular we are defining $\mathscr{P}_r$ and $\mathscr{R}_r$ as fiber products of the diagrams above.
The commutative diagram described in \hyperref[dia]{Remark \ref{dia}}
$$ \xymatrix{ \mathbb{P}^r \times \mathbb{P}(2^{N-2r},1) \ar[d]_{{\rm Id} \times \phi_{N-2r}} \ar[r]^<<<<<{a_r}
& \mathbb{P}(2^N,1) \ar[d]_{\phi_N} \\
\mathbb{P}^r \times \mathbb{P}^{N-2r} \ar[r]^>>>>>>>>>{\pi_r} & \mathbb{P}^N } $$
induces a morphism of algebraic stacks $\alpha_r: \mathbb{P}^r \times \mathbb{P}(2^{N-2r},1) \rightarrow \mathscr{P}_r$ such that $\varpi_r \circ \alpha_r = a_r$ and $q \circ \alpha_r = {\rm Id} \times \phi_{N-2r}$.
In the same way, we consider the following commutative diagram:
$$ \xymatrix{ \mathbb{P}^{r-1} \times \mathbb{P}(2^{N-2r+1}) \ar[d]_{{\rm Id} \times \iota_{N-2r}} \ar[r]^<<<<<{b_r}
& \mathbb{P}(2^N,1) \ar[d]_{\phi_N} \\
\mathbb{P}^{r-1} \times \mathbb{P}^{N-2r} \ar[r]^>>>>>>>>>{\pi_r \circ (i_r\times {\rm Id})} & \mathbb{P}^N } $$
where $\iota_{N-2r}:\mathbb{P}(2^{N-2r+1}) \rightarrow \mathbb{P}^{N-2r}$ is the quotient map induced by the identity on the affine space $\mathbb{A}(N-2r)\setminus 0$ with the two different actions (on the source we have the action of $\mathbb{G}_m$ with all weights equal to $2$, on the target all the weights are equal to $1$). Therefore we get the morphism $\beta_r: \mathbb{P}^{r-1} \times \mathbb{P}(2^{N-2r+1}) \rightarrow \mathscr{R}_r$ together with the equalities $\varpi_r\vert_{\mathscr{R}_r} \circ \beta_r = b_r$ and $ p \circ \beta_r = {\rm Id}\times\iota_{N-2r}$.
\begin{proposizione}
In the setting above, the two morphisms $\beta_r$ and $\alpha_r$ are representable and proper. Furthermore, we get that $\beta_r(K)$ and $\alpha_r\vert_{\alpha_r^{-1}(\mathscr{P}_r \setminus \mathscr{R}_r)}(K)$ are equivalences of groupoids for every extension of fields $K/k$.
\end{proposizione}
\begin{proof}
Representability and properness follow from the representability and properness of $b_r$ and $a_r$. Let us start with $a_r$ restricted to the preimage of the open $\mathscr{S}_r:=\mathscr{P}_r\setminus \mathscr{R}_r$ inside $\mathscr{P}_r$. An object inside $\mathscr{S}_r(K)$ is a triplet of the form $(f,g,(h,t))$ where $(f,g) \in (\mathbb{P}^r \times \mathbb{P}^{N-2r})(K)$ with $f(0,1)\neq 0$ and $(h,t)\in \mathbb{P}(2^N,1)(K)$ with $h=f^2g$. The only morphisms are the identities if $t\neq 0$; otherwise we have ${\rm Hom}_{\mathscr{S}_r}((f,g,(h,0)),(f,g,(h,0)))=\mu_2(K)$, where $\mu_2$ denotes the group of second roots of unity. We can describe the morphism $\alpha_r$ in the following way:
$$ \alpha_r(f,(g,s))=(f,g,a_r(f,(g,s)))=(f,g,(f^2g,f(0,1)s)).$$
Therefore, because ${\rm Hom}_{\mathbb{P}(2^{N-2r},1)}((g,s),(g,s))$ is the trivial group if $s\neq 0$ and it is $\mu_2(K)$ if $s=0$, the fact that $f(0,1)\neq 0 $ implies that $\alpha_r$ is fully faithful (it is faithful because of representability). As far as essential surjectivity is concerned, the idea is the same as in \hyperref[lem]{Lemma \ref{lem}}. Consider an element $(f,g,(f^2g,t)) \in \mathscr{S}_r(K)$; we get $t^2=f(0,1)^2g(0,1)$, therefore if we define $s:=t/f(0,1)$ we get $\alpha_r(f,(g,s))=(f,g,(f^2g,t))$. We have proved that $\alpha_r(K)\vert_{\alpha_r^{-1}(\mathscr{S}_r)}$ is essentially surjective, and therefore an equivalence.
As far as $\beta_r$ is concerned, the proof works in the same way. Recall that the map $i_r: \mathbb{P}^{r-1} \hookrightarrow \mathbb{P}^r$ can be described as $f\mapsto x_0f$. Thus we have the following description of $\beta_r$:
$$ \beta_r(K)(f,g)=(f,g,((x_0f)^2g,0))$$
for every $(f,g)\in (\mathbb{P}^{r-1} \times \mathbb{P}(2^{N-2r+1}))(K)$ and again full faithfulness and essential surjectivity are straightforward.
\end{proof}
\begin{lemma}\label{us}
In the setting of previous proposition, we get the following equalities at the level of Chow rings:
$(\beta_r)_*(1)=1$ and $(\alpha_r)_*(1)=1.$
\end{lemma}
\begin{proof}
The statement follows easily from the fact that the property of being an equivalence on points is stable under base change, thus we can pass to an approximation (see proof of \hyperref[lem2]{Lemma \ref{lem2}}) and we just need to prove the following statement: given a proper morphism $\alpha:X \rightarrow Y$ of algebraic spaces over $k$ such that $\alpha(K)$ is bijective for every field extension $K/k$, then $\alpha_*(1)=1$ at the level of Chow group. This follows from the fact that the hypothesis of the claim implies $\alpha$ is a birational morphism.
\end{proof}
\begin{osservazione}
Before stating and proving the theorem, we recall that for every flat morphism of Deligne-Mumford separated stacks we have an induced pullback morphism at the level of Chow ring (we do not need representability) and the same is true in the $T$-equivariant case. Moreover, the compatibility formula applies between flat pullbacks and proper representable pushforward (see \cite[Lemma 3.9]{Vis1}).
\end{osservazione}
We are finally ready to prove our main theorem.
\begin{teorema}\label{pri}
Let $g\geq 2$ be an integer, and ${\rm char}(k)>2g$. Then every relation coming from $\overline{\Delta}$ inside $\mathbb{P}(2^{2g+2},1)$ is the image through the morphism $\phi_{2g+2}^*$ of a relation coming from $\Delta$ inside $\mathbb{P}^{2g+2}$.
We denote the ideal inside ${\rm CH}({\rm B}{\rm GL}_2/\mu_{g+1})$ defining the relations for $\mathscr{H}_g$ by $J$ and the ideal generated by the image of $J$ through the map $\varphi_{2g+2}^*$ by $I$. Then we get the following isomorphism
$$ {\rm CH}(\mathscr{H}_{g,1}) \simeq \frac{ {\rm CH}({\rm B}T)}{I}.$$
\end{teorema}
\begin{proof}
The first statement implies the remaining part of the theorem by using the surjectivity of the pullback in the case of a $\mathbb{G}_m$-torsor quotient map (see \hyperref[GM]{Proposition \ref{GM}}) and diagram chasing. Therefore we just need to prove that the image of the pushforward of $b_r$ and $a_r$ are contained inside the image of $\phi_{2g+2}^*$. As usual, we set $N:=2g+2$. By \hyperref[rem]{Remark \ref{rem}} it is enough to prove that $(a_r)_*(u_1^i)$ (respectively $(b_r)_*(u_2^i)$) is the image of some element through $\phi_N^*$ for every $i\leq r$ (respectively for every $i\leq r-1$).
The proof is the same for both $a_r$ and $b_r$. Let us deal with $a_r$. We have the following chain of equalities, for every $i\leq r$:
\begin{eqnarray}
(a_r)_*(u_1^i) = (\varpi_r)_*(\alpha_r)_*(u_1^i) = (\varpi_r)_*(\alpha_r)_*({\rm Id}\times \phi_{N-2r})^*(c_1(\mathcal{O}_{\mathbb{P}^r}(1)\boxtimes \mathcal{O}_{\mathbb{P}^{N-2r}})^i) \nonumber \\ = (\varpi_r)_*(\alpha_r)_*\alpha_r^*q^*(c_1(\mathcal{O}_{\mathbb{P}^r}(1)\boxtimes \mathcal{O}_{\mathbb{P}^{N-2r}})^i) \nonumber.
\end{eqnarray}
By abuse of notation, the element $c_1(\mathcal{O}_{\mathbb{P}^r}(1)\boxtimes \mathcal{O}_{\mathbb{P}^{N-2r}})$ will be denoted by $u_1$. Using projection formula and \hyperref[us]{Lemma \ref{us}} we get
$$ (\alpha_r)_*(\alpha_r)^*(q^*(u_1^i))= q^*(u_1^i) $$
because $u_1$ is the first Chern class of a line bundle.
Therefore, we get the following equality:
$$ (a_r)_*(u_1^i)= (\varpi_r)_* q^* (u_1^i) = \phi_N^*(\pi_r)_*(u_1^i) $$
which concludes the proof.
\end{proof}
Using the theorem above and the description of the Chow ring of $\mathscr{H}_g$ obtained in \cite[Theorem 1.1]{EdFul} when $g$ is even and \cite[Theorem 6.2]{DiLor} when $g$ is odd, we get the following two descriptions of the Chow ring of the stack of pointed hyperelliptic curves.
\begin{teorema}\label{p}
If $g$ is even, $g\geq 2$ and ${\rm char}(k)>2g$, then we have the following isomorphism
$$ {\rm CH}(\mathscr{H}_{g,1})\simeq \frac{\mathbb{Z}[T_0,T_1]}{\bigg(2(2g+1)(T_0+T_1),g(g-1)(T_0^2+T_1^2)-2g(g+3)T_0T_1\bigg)}.$$
\end{teorema}
\begin{teorema}\label{d}
If $g$ is odd, $g\geq 3$ and ${\rm char}(k)>2g$, then we have the following isomorphism
$$ {\rm CH}(\mathscr{H}_{g,1})\simeq \frac{\mathbb{Z}[\tau,\rho]}{\bigg(4(2g+1)\tau,8\tau^2+2g(g+1)\rho^2\bigg)}.$$
\end{teorema}
In the case of $g$ even, the theorem is a consequence of making the map
$$\phi_N^*:{\rm CH}_{{\rm GL}_2}(\mathbb{A}(N)) \rightarrow {\rm CH}_T(\widetilde{\mathbb{A}}(N)) $$
explicit; the morphism is clearly induced by
$$ {\rm CH}({\rm BGL}_2)\simeq \mathbb{Z}[c_1,c_2] \rightarrow \mathbb{Z}[T_0,T_1]\simeq {\rm CH}({\rm BT})$$
where $c_1 \mapsto T_0+T_1$ and $c_2 \mapsto T_0T_1$, because it is the pullback of the inclusion of the diagonal torus $T=(\mathbb{G}_m)^2 \hookrightarrow {\rm GL}_2$ (a small symbolic check of this substitution is sketched below).
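The following sketch (illustrative only, not part of the proof) verifies with a computer algebra system the algebraic identity behind this substitution. We assume, purely for illustration, that one of the relations for $\mathscr{H}_g$ in \cite{EdFul} can be written in the form $g(g-1)c_1^2-4g(g+1)c_2$; under $c_1 \mapsto T_0+T_1$ and $c_2 \mapsto T_0T_1$ its image coincides with the second relation displayed in \hyperref[p]{Theorem \ref{p}}.
\begin{verbatim}
# Illustrative check with SymPy (not part of the proof).
# Assumed (hypothetical) form of a relation for H_g, g even: g(g-1)c1^2 - 4g(g+1)c2.
from sympy import symbols, expand, simplify

g, T0, T1 = symbols('g T0 T1')
c1, c2 = T0 + T1, T0*T1                      # substitution c1 -> T0+T1, c2 -> T0*T1
image  = expand(g*(g - 1)*c1**2 - 4*g*(g + 1)*c2)
target = expand(g*(g - 1)*(T0**2 + T1**2) - 2*g*(g + 3)*T0*T1)
assert simplify(image - target) == 0          # the two polynomials agree
\end{verbatim}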
In the case $g$ odd, we have to understand the morphism
$$\phi_N^*:{\rm CH}_{\mathbb{P}{\rm GL}_2\times \mathbb{G}_m}(\mathbb{A}(N)) \rightarrow {\rm CH}_T(\widetilde{\mathbb{A}}(N)); $$
this morphism too is induced by
$$ {\rm CH}({\rm B}\mathbb{P}{\rm GL}_2 \times {\rm B}\mathbb{G}_m)\simeq \frac{\mathbb{Z}[\tau,c_1,c_2,c_3]}{(c_1,2c_3)} \rightarrow \mathbb{Z}[\tau,\rho]\simeq {\rm CH}({\rm BT});$$
which is the pullback of the group homomorphism
$$ \mathbb{G}_m \times \mathbb{G}_m \rightarrow \mathbb{G}_m \times\mathbb{P}{\rm GL}_2$$
given by $(t, l) \mapsto (t, A)$ where $A$ is the class in $\mathbb{P}{\rm GL}_2$ given by the matrix
$$A(l)= \begin{bmatrix}
l & 0 \\
0 & 1
\end{bmatrix} \in \mathbb{P}{\rm GL}_2.$$
We notice that $\tau$ in the Chow group is the generator of ${\rm CH}({\rm B}\mathbb{G}_m)$ inside the product, thus we get $\phi_N^*(\tau)=\tau$. To understand the images of $c_1,c_2,c_3$ we need to make explicit where they come from geometrically. Pandharipande shows in \cite{Pan} that they are the Chern classes of the adjoint representation of $\mathbb{P}{\rm GL}_2$ on its Lie algebra. It is straightforward to see that for every $l\in \mathbb{G}_m$ the matrix $A(l)$ acts with eigenvalues $l,1,1/l$, and therefore the pullback on the Chow rings is defined by the following assignments (a small symbolic check follows the list):
\begin{itemize}
\item $c_1 \mapsto 0;$
\item $c_2 \mapsto -\rho^2;$
\item $c_3 \mapsto 0.$
\end{itemize}
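As a sanity check (illustrative only, and not part of the argument), the restriction of the adjoint representation to the one-parameter subgroup $l \mapsto A(l)$ has weights $\rho$, $0$, $-\rho$, and the equivariant Chern classes are the elementary symmetric functions in these weights. The following sketch verifies that these are $0$, $-\rho^2$ and $0$, matching the assignments above.
\begin{verbatim}
# Illustrative check with SymPy (not part of the proof).
# Weights of the adjoint representation restricted to l -> A(l): rho, 0, -rho.
from sympy import symbols, expand, Poly

rho, x = symbols('rho x')
weights = [rho, 0, -rho]
# Chern classes = elementary symmetric functions in the weights, read off from
# (x + rho)(x + 0)(x - rho) = x^3 + c1*x^2 + c2*x + c3.
chern_poly = Poly(expand((x + weights[0])*(x + weights[1])*(x + weights[2])), x)
lead, c1, c2, c3 = chern_poly.all_coeffs()
assert (c1, c2, c3) == (0, -rho**2, 0)
\end{verbatim}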
\begin{comment}
$$A= \begin{bmatrix}
a & b\\
c& d
\end{bmatrix} \in \mathbb{P}{\rm GL}_2$$
will be mapped to
$$Ad(A)=\frac{1}{{\rm det}(A)}\begin{pmatrix}
d^2 & bd & -b^2\\
2cd & ad+bc & -2ab \\
-c^2 & -ac & a^2
\end{pmatrix} \in {\rm GL}_3.$$
Restricting the morphism to the diagonal subgroup $\mathbb{G}_m$ inside $\mathbb{P}{\rm GL}_2$, i.e. taking $a=l$, $b=0$, $c=0$ and $d=1$, we get the map
$$ l \mapsto X= \begin{pmatrix}
1/l & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & l
\end{pmatrix} \in {\rm GL}_3$$
\end{comment}
Mapping the two relations of $\mathscr{H}_g$ for $g$ odd through this map gives us the statement of \hyperref[d]{Theorem \ref{d}}. Finally, we have the explicit description of the integral Chow ring of the stack of 1-pointed hyperelliptic curves of genus $g$.
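For the reader's convenience, the following sketch (illustrative only, not part of the proof) carries out this substitution symbolically. We assume, purely for illustration, that the two relations for $\mathscr{H}_g$ with $g$ odd in \cite{DiLor} can be written as $4(2g+1)\tau$ and $8\tau^2-2g(g+1)c_2$; under $\tau \mapsto \tau$, $c_1 \mapsto 0$, $c_2 \mapsto -\rho^2$, $c_3 \mapsto 0$ they become the relations of \hyperref[d]{Theorem \ref{d}}.
\begin{verbatim}
# Illustrative check with SymPy (not part of the proof).
# Assumed (hypothetical) form of the relations for H_g, g odd.
from sympy import symbols, expand

g, tau, rho, c2 = symbols('g tau rho c2')
subs = {c2: -rho**2}                          # c1 -> 0, c2 -> -rho^2, c3 -> 0, tau -> tau
rel1 = (4*(2*g + 1)*tau).subs(subs)
rel2 = (8*tau**2 - 2*g*(g + 1)*c2).subs(subs)
assert expand(rel1 - 4*(2*g + 1)*tau) == 0
assert expand(rel2 - (8*tau**2 + 2*g*(g + 1)*rho**2)) == 0
\end{verbatim}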
\section{Generators of the Chow ring}\label{gen}
In this last part, we will give the geometric interpretation of the generators of the Chow ring of $\mathscr{H}_{g,1}$. We divide it into two cases, depending on the parity of the genus. In the even case, we will prove the following theorem.
\begin{teorema}
Suppose $g$ is an even positive integer and ${\rm char}(k)>2g$, then in the notation of \hyperref[p]{Theorem \ref{p}} we have
$ T_0 = c_1(\mathcal{B}_g) $ and $T_1 = c_1(\mathcal{A}_g)$ where $\mathcal{A}_g$ and $\mathcal{B}_g$ are the two line bundles on $\mathscr{H}_{g,1}$ defined as $$\mathcal{B}_g(\pi:C\rightarrow S,\sigma):=\sigma^*\omega_{C/S}^{\otimes g/2}((1-g/2)W)$$ and $$\mathcal{A}_g(\pi:C\rightarrow S,\sigma):=\pi_*\omega_{C/S}^{\otimes g/2}((1-g/2)W-\sigma)$$ with $W$ the Weierstrass divisor associated to a relative hyperelliptic curve $C\rightarrow S$.
\end{teorema}
Let us start with the description in the non-pointed case. Edidin and Fulghesu have given the explicit formula for the generators of the Chow ring of $\mathscr{H}_g$. We recall that the generators are $c_1(\mathbb{V}_g)$ and $c_2(\mathbb{V}_g)$ where the $2$-dimensional vector bundle $\mathbb{V}_g$ is defined by the following: for every morphism $S\rightarrow \mathscr{H}_g$ associated to the hyperelliptic curve $\pi:C\rightarrow S$, we have
$$ \mathbb{V}_g(S)= \pi_*(\omega_{C/S}^{\otimes g/2}((1-g/2)W)) $$
where $\omega_{C/S}$ is the relative canonical bundle and $W$ is the ramification divisor. They proved that this is in fact the pullback over the natural map $\mathscr{H}_g \rightarrow {\rm BGL}_2$ of the ${\rm GL}_2$-representation $ E \otimes ({\rm det}E)^{\otimes g/2}$ where $E$ is the standard representation of ${\rm GL}_2$ (which is in turn the pullback through the isomorphism ${\rm B}({\rm GL}_2/\mu_{g+1}) \simeq {\rm BGL}_2$
of the standard representation $E$).
\begin{lemma}\label{le}
Let $\pi:C\rightarrow S$ be a smooth curve over $S$, i.e. a smooth, proper morphism such that every geometric fiber is a connected one dimensional scheme. Suppose $\mathcal{L}$ is a line bundle on $C$ such that $\mathcal{L}\vert_{C_s}$ is globally generated for every geometric point $s$ in $S$. Moreover, suppose there exists a section $\sigma: S \rightarrow C$ of the morphism $\pi$. Then the following sequence
$$ \xymatrix{ 0 \ar[r] & \pi_*\mathcal{L}(-\sigma) \ar[r] & \pi_*\mathcal{L} \ar[r] & \sigma^*\mathcal{L} \ar[r] & 0} $$
is exact.
\end{lemma}
\begin{proof}
We consider the exact sequence induced by $\sigma$
$$ \xymatrix{ 0 \ar[r] & \mathcal{O}_C(-\sigma) \ar[r] & \mathcal{O}_C \ar[r] & \sigma_*\sigma^*\mathcal{O}_C \ar[r] & 0} $$
and we tensor it with $\mathcal{L}$ to obtain
$$ \xymatrix{ 0 \ar[r] & \mathcal{L}(-\sigma) \ar[r] & \mathcal{L} \ar[r] & \sigma_*\sigma^*\mathcal{L} \ar[r] & 0}. $$
Clearly $R^1\pi_*(\sigma_*\sigma^*\mathcal{L})=0$ because $\sigma_*\sigma^*\mathcal{L}$ restricted to every geometric fiber is supported on a point. Therefore the natural map
$$ \xi: R^1\pi_*\mathcal{L}(-\sigma) \longrightarrow R^1\pi_*\mathcal{L}$$
is surjective. Because $R^2\pi_*\mathcal{F}=0$ for every coherent sheaf $\mathcal{F}$, using cohomology and base change we know that $\xi$ restricts to the morphism
$$ \xi_s:H^1(C_s,\mathcal{L}_s(-\sigma(s))) \longrightarrow H^1(C_s,\mathcal{L}_s) $$
for every geometric point $s \in S$. Recall that on a smooth curve $C$ over an algebraically closed field a divisor $D$ is globally generated (without base points) if and only if $$h^1(D-P)=h^1(D)$$ for every $P$ closed point on $C$. Being $\xi_s$ surjective, this clearly implies $\xi_s$ isomorphism for every geometric point and therefore $\xi$ isomorphism.
Thus, applying $\pi_*$ to the exact sequence
$$ \xymatrix{ 0 \ar[r] & \mathcal{L}(-\sigma) \ar[r] & \mathcal{L} \ar[r] & \sigma_*\sigma^*\mathcal{L} \ar[r] & 0}$$
we get the statement.
\end{proof}
Let $\phi$ be the natural morphism $\mathscr{H}_{g,1}\rightarrow \mathscr{H}_g$.
Our idea is to define the two line bundles $\mathcal{A}_g$ and $\mathcal{B}_g$, whose first Chern classes generate the Chow ring of $\mathscr{H}_{g,1}$, using the fact that $\phi^*c(\mathbb{V}_g)=c(\mathcal{A}_g)c(\mathcal{B}_g)$. Fix a morphism $S\rightarrow \mathscr{H}_{g,1}$ induced by the $1$-pointed hyperelliptic curve $(C\rightarrow S,\sigma)$. We define
$$ \mathcal{A}_g:=\pi_*\mathcal{L}(-\sigma), \quad \mathcal{B}_g:=\sigma^{*}\mathcal{L}$$
where $\mathcal{L}:= \omega_{C/S}^{\otimes g/2}((1-g/2)W)$. Notice that $\mathcal{L}_s$ is the line bundle $f^*\mathcal{O}_{\mathbb{P}^1_{\overline{k}(s)}}(1)$ where $f:C_s \rightarrow \mathbb{P}^1_{\overline{k}(s)}$ is the degree $2$ cover of $\mathbb{P}^1$ defining the hyperelliptic curve; therefore $\mathcal{L}_s$ is globally generated for every geometric point $ s \in S$. As we are in the hypotheses of \hyperref[le]{Lemma \ref{le}}, we get that $c(\mathbb{V}_g)=c(\mathcal{A}_g)c(\mathcal{B}_g)$.
\begin{osservazione}
Recall that a vector bundle over ${\rm BB}_2$, where ${\rm B}_2$ is the Borel subgroup of ${\rm GL}_2$, is equivalent to a flag of ${\rm GL}_2$-representations $0\subset F \subset E$ of length $2$. Notice that at the level of Chern classes it is the same as supposing that $E \simeq F\oplus E/F$, which is confirmed by the fact that ${\rm B}_2$-equivariant Chow group is isomorphic to the $T$-equivariant one, where $T$ is the maximal torus inside ${\rm GL}_2$.
Given such a flag, we will say that $c(F)$ and $c(E/F)$ are the total Chern classes induced by it.
\end{osservazione}
In our situation, $T_0$ and $T_1$ will be the first Chern classes induced by the flag
$$0\subset F\otimes ({\rm det}E)^{\otimes g/2} \subset E\otimes({\rm det}E)^{\otimes g/2},$$
where $E$ is the standard representation of ${\rm GL}_2$, and $F$ is the subrepresentation stable under the standard action of ${\rm B}_2$ (in our setting the Borel subgroup is identified with the lower triangular matrices).
We consider as usual a $1$-pointed hyperelliptic curve $(\pi:C \rightarrow S,\sigma)$ and $\mathcal{L}$ is defined by $\mathcal{L}:=\omega_{C/S}^{\otimes g/2}((1-g/2)W)$. We start by taking the flag of vector bundles of $\mathscr{H}_{g,1}$ induced by \hyperref[le]{Lemma \ref{le}}:
$$ 0 \subset \pi_*\mathcal{L}(-\sigma) \subset \pi_*\mathcal{L} $$
and let us consider the pullback to the atlas of $\mathscr{H}_{g,1}$ that we denoted by ${\rm H}_{g,1}'$ (see \hyperref[re]{Remark \ref{re}}).
This is described by taking the pushforward of the flag $0\subset \mathcal{L}(-\sigma) \subset \mathcal{L}$ through $\pi:C \rightarrow S$, which is the composition of the two maps $p:\mathbb{P}^1_S \rightarrow S$ and $f: C \rightarrow \mathbb{P}^1_S$. If we first take the pushforward by $f$, which is finite and flat of degree 2, we get
$$ \xymatrix{ 0 \ar[r] & \mathcal{O}_{\mathbb{P}_S^1}\oplus \mathcal{O}_{\mathbb{P}^1_S}(-g) \ar[r] & \mathcal{O}_{\mathbb{P}^1_S}(1)\oplus \mathcal{O}_{\mathbb{P}^1_S}(-g) \ar[r] & (\sigma_{\infty})_*\sigma_{\infty}^*\mathcal{O}_{\mathbb{P}^1_S}(1) \ar[r] & 0.} $$
Finally, taking the pushforward through $p$ we get the pullback to $S$ of the standard flag $0\subset F\subset E$ of ${\rm B}_2$ (up to the isomorphism ${\rm B}({\rm B}_2/\mu_{g+1}) \simeq {\rm BB}_2$). Notice that $p_*\mathcal{O}_{\mathbb{P}^1_S}(-g)=0$.
If we compare it to the presentation exhibited in \hyperref[p]{Theorem \ref{p}}, we have proved the following description of the generators
$$ T_0=c_1(\mathcal{A}_g) \quad T_1=c_1(\mathcal{B}_g)$$
where $\mathcal{A}_g$ and $\mathcal{B}_g$ are the two line bundles described above.
Now we pass to the odd case. The result is the following.
\begin{teorema}
Suppose $g$ is an odd positive integer with $g>1$ and ${\rm char}(k) > 2g$, in the notation of \hyperref[d]{Theorem \ref{d}}, we have $\tau=c_1(\mathcal{L})$ and $\rho=c_1(\mathcal{R})$ where $$\mathcal{L}(\pi:C\rightarrow S,\sigma):=\pi_*\omega_{C/S}^{\otimes \frac{g+1}{2}}\Big(\frac{1-g}{2}W\Big)$$
and $\mathcal{R}(\pi:C \rightarrow S,\sigma):=\pi_*\omega_{C/S}^{-1}(W-2\sigma)$, with $W$ the Weierstrass divisor associated to an hyperelliptic curve $C\rightarrow S$.
\end{teorema}
If $g$ is odd, we can apply the same reasoning, using the geometric description of the two generators of ${\rm CH}(\mathscr{H}_g)$ given in \cite{DiLor}.
We consider the two generators $\tau$ and $\rho$ as in \hyperref[d]{Theorem \ref{d}}. Clearly, by construction, $\tau$ is the same as the one in \cite{DiLor};
therefore $\tau=c_1(\mathcal{L})$ where $\mathcal{L}$ is functorially defined by
$$ \mathcal{L}((\pi:C\rightarrow S),\sigma) = \pi_*\omega_{C/S}^{\otimes \frac{g+1}{2}}\Big(\frac{1-g}{2}W\Big)$$
where $W$ is the ramification divisor. The other generators in the case of $\mathscr{H}_g$ are the Chern classes of the $3$-dimensional vector bundle $\mathcal{E}$ defined functorially by
$$\mathcal{E}(\pi:C\rightarrow S)= \pi_*\omega_{C/S}^{-1}(W).$$
We repeat the procedure used in the case of $g$ even. If we consider a morphism $ S \rightarrow \mathscr{H}_{g,1}$ induced by the element $(\pi:C\rightarrow S,\sigma)$, we can consider the flag on $S$
$$ 0 \subset \pi_*\omega_{C/S}^{-1}(W)(-2\sigma) \subset \pi_*\omega_{C/S}^{-1}(W)(-\sigma) \subset \pi_*\omega_{C/S}^{-1}(W) $$
that corresponds to the flag of ${\rm B_3}$-representations
$$ 0\subset \mathcal{O}_S \subset \mathcal{O}_S^{\oplus 2} \subset \mathcal{O}_S^{\oplus 3} $$
induced by the inclusions $f \mapsto (0,f)$ and $(g,h) \mapsto (0,g,h)$. Using the adjoint representation map $Ad: \mathbb{G}_m\subset {\rm PB}_2 \rightarrow {\rm B}_3$ we easily get that $\rho=c_1(\mathcal{R})$ where $\mathcal{R}$ is functorially defined by
$$ \mathcal{R}(\pi:C\rightarrow S,\sigma):=\pi_*\omega_{C/S}^{-1}(W-2\sigma).$$
We have thus given the geometric interpretation of the generators of ${\rm CH}(\mathscr{H}_{g,1})$ in both the cases of $g$ even and odd.
\input{CRH.bbl}
\end{document}
\begin{document}
\title{\bf New thought on Matsumura-Nishida theory in the $L_p$-$L_q$ maximal
regularity framework}
\author{
Yoshihiro Shibata
\thanks{Department of Mathematics,
Waseda University,
Ohkubo 3-4-1, Shinjuku-ku, Tokyo 169-8555, Japan. \endgraf
e-mail address: [email protected] \endgraf
Adjunct faculty member in the Department of Mechanical Engineering and
Materials Science, University of Pittsburgh \endgraf
partially supported by Top Global University Project and JSPS Grant-in-aid
for Scientific Research (A) 17H0109.
}
}
\date{}
\maketitle
\begin{abstract}
This paper is devoted to proving the global well-posedness of
the initial-boundary value problem for the Navier-Stokes
equations describing the motion of viscous, compressible, barotropic
fluid flows in a three dimensional exterior domain with non-slip
boundary conditions. This was first proved in an excellent paper
due to Matsumura and Nishida \cite{MN2} in 1983.
In \cite{MN2}, they used the energy method, and
their requirement was that space derivatives of the mass density up to third order
and space derivatives of the velocity field up to fourth order
belong to $L_2$
in space-time; a detailed statement of the Matsumura and
Nishida theorem is given in Theorem \ref{thm:NS.1} in Sect. 1.
This requirement is essentially used to estimate the $L_\infty$ norm
of the necessary order of derivatives in order to close the iteration scheme
with the help of Sobolev inequalities, and also to treat the material derivatives
of the mass density.
On the other hand, this paper gives the global wellposedness of the same problem
as in \cite{MN2}
in $L_2$ in time and $L_2\cap L_6$ in space maximal regularity
class, which is an improvement of the Matsumura and Nishida theory in \cite{MN2}
from the point of view of the minimal requirement of the regularity of solutions.
In fact, after changing
the material derivatives to time derivatives by the Lagrange
transformation, estimates obtained by combining
the maximal $L_2$ in time and $L_2\cap L_6$ in space regularity with
$L_p$-$L_q$ decay estimates of the Stokes equations with
non-slip conditions in the compressible viscous fluid flow case
enable us to use the standard Banach fixed point argument.
Moreover, one of the purposes of this paper is to present a framework to prove the
$L_p$-$L_q$ maximal regularity for parabolic-hyperbolic type equations with
non-homogeneous boundary conditions and how to combine
the maximal $L_p$-$L_q$ regularity and $L_p$-$L_q$ decay estimates
of linearized equations to prove the global well-posedness of quasilinear problems
in unbounded domains, which gives a new thought of proving the global
well-posedness of the initial-boundary value problem for a system of
parabolic or parabolic-hyperbolic equations with non-homogeneous
boundary conditions.
\end{abstract}
{\small 2020 Mathematics Subject Classification. 35Q30, 76N10}\\
{\small Key words and phrases. Navier-Stokes equations, compressible viscous
barotropic fluid, global well-posedness, \\
the maximal $L_p$ space}
\section{Introduction}\label{sec:1}
A.~Matsumura and T.~Nishida \cite{MN2} proved the existence of unique
solutions of equations governing the flow of viscous, compressible, and
heat conduction fluids in an exterior domain of 3 dimensional Euclidean
space ${\Bbb R}^3$ for all times, provided the initial data are sufficiently small.
Although Matsumura and Nishida \cite{MN2} considered the
viscous, barotropic,
and heat conductive fluid, in this paper we only consider the
viscous, compressible, barotropic fluid for simplicity and reprove
the Matsumura and Nishida theory in view of the $L_2$ in time and $L_2 \cap L_6$
in space maximal regularity theorem.
To describe in more detail, we start with
description of equations considered in this paper.
Let $\Omega$ be a three dimensional exterior domain, that is
the complement, $\Omega^c$, of $\Omega$ is a bounded domain in
the three dimensional Euclidean space ${\Bbb R}^3$. Let $\Gamma$ be the
boundary of $\Omega$, which is a compact $C^2$ hypersurface.
Let $\rho=\rho(x, t)$ and ${\bold v} = (v_1(x, t),
v_2(x, t), v_3(x, t))^\top$ be respectively the mass density and the velocity field,
where $M^\top$ denotes
the transposed $M$, $t$ is a time variable and $x=(x_1, x_2, x_3)
\in \Omega$. Let ${\frak p}= {\frak p}(\rho)$ be the fluid pressure, which
is a smooth function defined on $(0, \infty)$ such that ${{\frak r}ak p}'(\rho) > 0$ for $\rho >0$.
We consider the following equations:
\begin{equation}\label{eq:1.1}\begin{aligned}
\partial_t\rho + {\rm div}\,(\rho {\bold v}) = 0&&\quad&\text{in $\Omega\times (0, T)$}, \\
\rho(\partial_t{\bold v} + {\bold v}\cdot\nabla{\bold v}) - {\rm Div}\,(\mu{\bold D}({\bold v}) + \nu {\rm div}\,{\bold v}{\bold I}
-{\frak p}(\rho){\bold I}) = 0&&\quad&\text{in $\Omega\times (0, T)$}, \\
{\bold v}|_{\Gamma} = 0, \quad
(\rho, {\bold v})|_{t=0} = (\rho_* + \theta_0, {\bold v}_0)
&&\quad&\text{in $\Omega$}.
\end{aligned}\end{equation}
Here, $\partial_t = \partial/\partial t$, ${\bold D}({\bold v}) = \nabla{\bold v} + (\nabla{\bold v})^\top$ is the deformation tensor,
${\rm div}\, {\bold v} = \sum_{j=1}^3 \partial v_j/\partial x_j$, for a $3\times 3$ matrix
$K$ with $(i, j)$ th component $K_{ij}$, ${\rm Div}\, K =(\sum_{j=1}^3 \partial K_{1j}/\partial x_j,
\sum_{j=1}^3 \partial K_{2j}/\partial x_j, \sum_{j=1}^3 \partial K_{3j}/\partial x_j)^\top$,
$\mu$ and $\nu$ are two viscous constants such that
$\mu > 0$ and $\mu + \nu > 0$, and $\rho_*$ is a positive constant
describing the mass density of a
reference body.
According to Matsumura and Nishida \cite{MN2}, we have
the global well-posedness of equations \eqref{eq:1.1} in the $L_2$ framework
stated as follows:
\begin{thm}[\cite{MN2}]\label{thm:NS.1}
Let $\Omega$ be a three dimensional exterior domain, the boundary
of which is a smooth $2$ dimensional compact hypersurface. Then,
there exists a small number $\epsilon > 0$ such that for any
initial data $(\theta_0, {\bold v}_0) \in H^3(\Omega)^4$
satisfying the smallness condition: $\|(\theta_0, {\bold v}_0)\|_{H^3(\Omega)}
\leq \epsilon$ and the compatibility conditions of order 1, that is,
${\bold v}_0$ and $\partial_t{\bold v}|_{t=0}$ vanish at $\Gamma$,
problem \eqref{eq:1.1} admits unique solutions $\rho = \rho_*+\theta$
and ${\bold v}$ with
\begin{align*}
\theta \in C^0((0, \infty), H^3(\Omega)) \cap C^1((0, \infty), H^2(\Omega)),
\quad \nabla \rho \in L_2((0, \infty), H^2(\Omega)^3), \\
{\bold v} \in C^0((0, \infty), H^3(\Omega)^3) \cap C^1((0, \infty), H^1(\Omega)^3),
\quad \nabla {\bold v} \in L_2((0, \infty), H^3(\Omega)^9).
\end{align*}
\end{thm}
Matsumura and
Nishida \cite{MN2} proved Theorem \ref{thm:NS.1} essentially
by energy method. One of key issues in \cite{MN2} is to estimate
$\sup_{t \in (0, \infty)} \|{\bold v}(\cdot, t)\|_{H^1_\infty(\Omega)}$
by Sobolev's inequality, namely
\begin{equation}\label{sob:1}
\sup_{t \in (0, \infty)} \|{\bold v}(\cdot, t)\|_{H^1_\infty(\Omega)}
\leq C\sup_{t \in (0, \infty)} \|{\bold v}(\cdot, t)\|_{H^3(\Omega)}.
\end{equation}
Recently, Enomoto and Shibata \cite{ES2} proved
the global wellposedness of equations \eqref{eq:1.1} for
$(\theta_0, {\bold v}_0) \in H^2(\Omega)^4$ with small norms. Namely,
they proved the following theorem.
\begin{thm}[\cite{ES2}]\label{thm:ES.1} Let $\Omega$
be a three dimensional exterior domain, the boundary
of which is a smooth $2$ dimensional compact hypersurface. Then,
there exists a small number $\epsilon > 0$ such that for any
initial data $(\theta_0, {\bold v}_0) \in H^2(\Omega)^4$
satisfying $\|(\theta_0, {\bold v}_0)\|_{H^2(\Omega)}
\leq \epsilon$ and the compatibility condition: ${\bold v}_0|_{\Gamma}=0$,
problem \eqref{eq:1.1} admits unique solutions $\rho = \rho_*+\theta$
and ${\bold v}$ with
\begin{align*}
\theta \in C^0((0, \infty), H^2(\Omega)) \cap C^1((0, \infty), H^1(\Omega)),
\quad \nabla \rho \in L_2((0, \infty), H^1(\Omega)^3), \\
{\bold v} \in C^0((0, \infty), H^2(\Omega)^3) \cap C^1((0, \infty), L_2(\Omega)^3),
\quad \nabla {\bold v} \in L_2((0, \infty), H^2(\Omega)^9).
\end{align*}
\end{thm}
The method used in the proof of Enomoto and Shibata \cite{ES2} is
essentially the same as that in
Matsumura and Nishida \cite{MN2}. The only difference is that
\eqref{sob:1} is replaced by
$\int^\infty_0\|\nabla{\bold v}\|_{L_\infty(\Omega)}^2\,dt
\leq C\int^\infty_0\|\nabla{\bold v}\|_{H^2(\Omega)}^2\,dt$
in \cite{ES2}.
As a conclusion, in the $L_2$ framework the least regularity we need is that
$\nabla \rho \in L_2((0, \infty), H^1(\Omega)^3)$ and $\nabla {\bold v} \in
L_2((0, \infty), H^2(\Omega)^9)$.
In this paper, we improve this point by solving the equations \eqref{eq:1.1}
in the $L_p$-$L_q$ maximal regularity class; that is,
the following theorem is the main result of this paper.
\begin{thm}\label{thm:main0}
Let $\Omega$ be an exterior domain in ${\Bbb R}^3$, whose boundary $\Gamma$
is a compact $C^2$ hypersurface, and let $T \in (0, \infty)$.
Let $0 < \sigma < 1/6$ and
$p=2$ or $p=1+\sigma$. Let $b$ be the number defined by
$b= (3-\sigma)/\{2(2+\sigma)\}$ when $p=2$ and $b= (1-\sigma)/\{2(2+\sigma)\}$
when $p=1+\sigma$. Let $r= 2(2+\sigma)/(4+\sigma)=(1/2+1/(2+\sigma))^{-1}$.
Set
\begin{align*}
&{\mathcal I} = \{(\theta_0, {\bold v}_0) \mid \theta_0 \in ( \bigcap_{q=r, 2, 2+\sigma, 6}
H^1_q(\Omega)), \quad {\bold v}_0 \in (\bigcap_{q=2, 2+\sigma, 6}
B^{2(1-1/p)}_{q,p}(\Omega)^3) \cap L_r(\Omega)^3\}, \\
&\|(\theta_0, {\bold v}_0)\|_{{\mathcal I}} = \sum_{q=2, 2+\sigma, 6} \|\theta_0\|_{H^1_q(\Omega)}
+ \sum_{q=2,2+\sigma, 6}\|{\bold v}_0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
+ \|(\theta_0, {\bold v}_0)\|_{H^{1,0}_r(\Omega)}.
\end{align*}
Here and hereafter, we write $\|(\theta, {\bold v})\|_{H^{\ell, m}_q(\Omega)}
= \|\theta\|_{H^\ell_q(\Omega)} + \|{\bold v}\|_{H^m_q(\Omega)}$ and
$H^0_q(\Omega) = L_q(\Omega)$.
Then, there exists a small constant
$\epsilon \in (0, 1)$ independent of $T$ such that if
initial data $(\theta_0, {\bold v}_0) \in {\mathcal I}$ satisfy the compatibility condition:
${\bold v}_0|_\Gamma=0$ and the smallness condition :
$\|(\theta_0, {\bold v}_0)\|_{{\mathcal I}} \leq \epsilon^2$, then problem \eqref{eq:1.1}
admits unique solutions $\rho=\rho_*+\theta$ and ${\bold v}$ with
\begin{equation}\label{sol:1}\begin{aligned}
\theta &\in H^1_p((0, T), L_2(\Omega) \cap L_6(\Omega))
\cap L_p((0, T), H^1_2(\Omega) \cap H^1_6(\Omega)), \\
{\bold v} & \in H^1_p((0, T), L_2(\Omega)^3 \cap L_6(\Omega)^3) \cap
L_p((0, T), H^2_2(\Omega)^3 \cap H^2_6(\Omega)^3).
\end{aligned}\end{equation}
Moreover, setting
\begin{align*}
{\mathcal E}_T(\theta, {\bold v}) &
= \|<t>^b(\theta, {\bold v})\|_{L_\infty((0, T), L_2(\Omega) \cap L_6(\Omega))}
+ \|<t>^b\nabla(\theta, {\bold v})\|_{L_p((0, T), H^{0,1}_2(\Omega)
\cap H^{0,1}_{2+\sigma}(\Omega))} \\
&+ \|<t>^b(\theta, {\bold v})\|_{L_p((0, T), H^{1,2}_6(\Omega))}
+ \|<t>^b\partial_t(\theta, {\bold v})\|_{L_p((0, T), L_2(\Omega) \cap L_6(\Omega))},
\end{align*}
we have ${\mathcal E}_T(\theta, {\bold v}) \leq \epsilon$.
\end{thm}
\begin{remark} \thetag1~ $T>0$ is taken arbitrarily and $\epsilon>0$ is
chosen independently of $T$, and so Theorem \ref{thm:main0}
tells us the global wellposedness of equations \eqref{eq:1.1} on the
time interval $(0, \infty)$. \\
\thetag2~
In the case $p=2$, Theorem \ref{thm:main0} gives an
extension of Matsumura and Nishida theorem \cite{MN2}.
Roughly speaking, if we assume that $(\theta_0, {\bold v}_0)
\in H^3_2(\Omega)^4$, then $(\theta_0, {\bold v}_0)
\in (H^1_2(\Omega) \cap H^1_6(\Omega))\times (B^1_{2,2}(\Omega) \cap
B^1_{6,2}(\Omega))$, and so the global wellposedness holds in
the class as
$$\theta \in H^1_2((0, T), H^1_2(\Omega) \cap H^1_6(\Omega)),
\quad {\bold v} \in H^1_2((0, T), L_2(\Omega)^3 \cap L_6(\Omega)^3)
\cap L_2((0, T), H^2_2(\Omega)^3\cap H^2_6(\Omega)^3)
$$
under the additional condition:
$(\theta_0, {\bold v}_0) \in H^{1,0}_r(\Omega)$.
On the other hand, choosing $p=1+\sigma$ gives
the minimal regularity assumption of initial velocity field
in the $L_2 \cap L_6$ framework.
\end{remark}
As related topics, we consider the Cauchy problem, that is,
$\Omega= {\Bbb R}^3$ without boundary conditions. A. Matsumura and T. Nishida
\cite{MN1} proved a global wellposedness theorem, the statement
of which is essentially
the same as in Theorem \ref{thm:NS.1}, and the proof is based on
the energy method. R.~Danchin \cite{Danchin} proved the global wellposedness
in the critical space by using the Littlewood-Paley decomposition.
\begin{thm}[\cite{Danchin}]\label{Danchin}
Let $\Omega= {\Bbb R}^N$ $(N\geq 2)$. Assume that $\mu > 0$ and
$\mu+\nu > 0$. Let $B^s = \dot B^s_{2,1}({\Bbb R}^N)$ and
$$F^s = (L_2((0, \infty), B^s) \cap C((0, \infty), B^s \cap B^{s-1}))
\times (L_1((0, \infty), B^{s+1}) \cap C((0, \infty), B^{s-1}))^N.
$$
Then, there exists an $\epsilon > 0$ such that if initial data
$\theta_0 \in B^{N/2}({\Bbb R}^N) \cap B^{N/2-1}({\Bbb R}^N)$ and
${\bold v}_0 \in B^{N/2-1}({\Bbb R}^N)^N$ satisfy the condition:
$$\|\theta_0\|_{B^{N/2}({\Bbb R}^N) \cap B^{N/2-1}({\Bbb R}^N)}
+ \|{\bold v}_0\|_{B^{N/2-1}({\Bbb R}^N)} \leq \epsilon, $$
then problem \eqref{eq:1.1} with $\Omega={\Bbb R}^N$ and $T=\infty$
admits a unique solution $\rho=\rho_*+\theta$ and ${\bold v}$ with
$(\theta, {\bold v}) \in F^{N/2}$.
\end{thm}
In the case where $\Omega= {\Bbb R}^3$ or ${\Bbb R}^N$, there are a lot of works
concerning \eqref{eq:1.1}, but we do not mention them
further,
because we are interested only in the global wellposedness
in exterior domains.
For more information on references, refer to
Enomoto and Shibata \cite{ES1}.
Concerning the $L_1$ in time maximal
regularity in exterior domains, the incompressible viscous fluid flow
has been treated by Danchin and Mucha \cite{DM}.
To obtain $L_1$ maximal regularity in time, we have to use $\dot B^s_{q,1}$
in space, which is a slightly more regular space than $H^s_q$, and
decay estimates for the semigroup on $\dot B^s_{q,1}$ are needed to
control terms arising from the cut-off procedure near the boundary.
Detailed arguments related to these facts can be found in \cite{DM}.
To treat \eqref{eq:1.1} in an exterior domain in the $L_1$
in time maximal regularity framework,
we have to prepare not only $L_1$ maximal regularity for
model problems in the whole space and the half space but also decay properties
of the semigroup in $\dot B^s_{q,1}$, and so this is left for future work.
From Theorem \ref{thm:main0},
we may say that problem \eqref{eq:1.1} can be solved in $L_{1+\sigma}$ in time
and $L_2\cap L_6$ in space maximal regularity class for any small
$\sigma \in (0, 1/6)$.
The paper is organized as follows. In Sect. 2, equations \eqref{eq:1.1}
are rewritten in Lagrange coordinates to eliminate ${\bold v}\cdot\nabla\rho$,
and a main result for the equations in Lagrangian
description is stated.
In Sect. 3, we give an $L_p$-$L_q$ maximal regularity theorem
in some abstract setting. In Sect. 4, we give estimates of nonlinear terms.
In Sect. 5, we prove the main results stated in Sect. 2.
In Sect. 6,
Theorem \ref{thm:main0} is proved by using the main result in Sect. 2.
In Sect. 7, we discuss the $N$ dimensional case.
The main point of our
proof is to obtain maximal regularity estimates with
decay properties of solutions to linearized
equations, the Stokes equations with non-slip conditions.
To explain the idea, we write linearized equations
as $\partial_t u - Au = f$ and $u|_{t=0}=u_0$ symbolically,
where $f$ is a function corresponding
to the nonlinear terms and $A$ is a closed linear operator with domain $D(A)$.
We write $u=u_1+u_2$, where $u_1$ is a solution to time shifted
equations: $\partial_t u_1 + \lambda_1u_1- Au_1 = f$ and $u_1|_{t=0}=u_0$ with some large
positive number $\lambda_1$ and $u_2$ is a solution to compensating equations:
$\partial_t u_2 -Au_2 = \lambda_1u_1$ and $u_2|_{t=0} = 0$. Since the fundamental solutions
to shifted equations have exponential decay properties, $u_1$ has the same
decay properties as these of nonlinear terms $f$. Moreover $u_1$ belongs to
the domain of $A$ for all positive time. By Duhamel principle $u_2$ is given by
$u_2= \lambda_1\int^t_0 T(t-s)u_1(s)\,ds$, where $\{T(t)\}_{t\geq 0}$ is a continuous
analytic semigroup associated with $A$. By using $L_p$-$L_q$ decay properties
of $\{T(t)\}_{t\geq 0}$ in the interval $0 < s < t-1$ and standard
estimates of $C_0$ analytic semigroup: $\|T(t-s)u_0\|_{D(A)} \leq C\|u_0\|_{D(A)}$
for $t-1 < s < t$, where $\|\cdot\|_{D(A)}$ denotes a domain norm,
we obtain maximal $L_p$-$L_q$ regularity of $u_2$ with decay
properties. This method seems to be a new thought to prove the
global wellposedness and to be applicable to many
quasilinear problems of parabolic type or parabolic-hyperbolic mixture type
appearing in mathematical physics.
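For the reader's convenience, the splitting just described can be summarized by the following standard formulas; this is only a schematic restatement of the strategy, using that the semigroup generated by $A-\lambda_1$ is $e^{-\lambda_1 t}T(t)$:
\begin{align*}
u_1(t) &= e^{-\lambda_1 t}T(t)u_0 + \int^t_0 e^{-\lambda_1(t-s)}T(t-s)f(s)\,ds, \\
u_2(t) &= \lambda_1\int^t_0 T(t-s)u_1(s)\,ds
= \lambda_1\Bigl(\int^{t-1}_0 + \int^t_{t-1}\Bigr)T(t-s)u_1(s)\,ds \quad(t>1),
\end{align*}
where the first integral in the decomposition of $u_2$ is estimated by the $L_p$-$L_q$ decay
properties of $\{T(t)\}_{t\geq 0}$ and the second one by the bound
$\|T(t-s)u_1(s)\|_{D(A)} \leq C\|u_1(s)\|_{D(A)}$.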
To end this section, symbols of functional spaces used in this paper are given. Let
$L_p(\Omega)$, $H^m_p(\Omega)$ and $B^s_{q,p}(\Omega)$ denote
the standard Lebesgue spaces, Sobolev spaces and Besov spaces,
while their norms are written as $\|\cdot\|_{L_p(\Omega)}$,
$\|\cdot\|_{H^m_p(\Omega)}$ and $\|\cdot\|_{B^s_{q,p}(\Omega)}$.
We write $H^m(\Omega) = H^m_2(\Omega)$, $H^0_q(\Omega)=L_q(\Omega)$
and $W^s_q(\Omega)=
B^s_{q,q}(\Omega)$. For any Banach space $X$ with norm $\|\cdot\|_X$,
$L_p((a, b), X)$ and
$H^m_p((a, b), X)$ denote respectively the standard $X$-valued Lebesgue spaces
and Sobolev spaces, while their time weighted norms are defined by
$$\|<t>^b f\|_{L_p((a, b), X)}
= \begin{cases} \Bigl(\int^b_a(<t>^b\|f(t)\|_X)^p\,dt\Bigr)^{1/p}
\quad& (1 \leq p < \infty), \\
{\rm esssup}_{t \in (a, b)} <t>^b\|f(t)\|_X\quad &(p=\infty),
\end{cases}
$$
where $<t> = (1 + t^2)^{1/2}$. Let
$X^n = \{ {\bold v}=(u_1, \ldots, u_n) \mid u_i \in X \enskip (i=1, \ldots, n)\}$, but we write
$\|\cdot\|_{X^n} = \|\cdot\|_X$ for simplicity.
Let $H^{\ell, m}_q(\Omega)
= \{(\rho, {\bold v}) \mid \rho \in H^\ell_q(\Omega), {\bold v} \in H^m_q(\Omega)^3\}$
and $\|(\rho, {\bold v})\|_{H^{\ell, m}_q(\Omega)} = \|\rho\|_{H^\ell_q(\Omega)}
+ \|{\bold v}\|_{H^m_q(\Omega)}$.
The letter $C$ denotes generic constants and $C_{a, b, \cdots}$ denotes
that constants depend on quantities $a$, $b$, $\ldots$. $C$ and $C_{a,b, \cdots}$
may change from line to line.
\section{Equations in Lagrange coordinates and statement of main results}
To prove Theorem \ref{thm:main0}, we write equations \eqref{eq:1.1} in Lagrange
coordinates $\{y\}$. Let $\zeta=\zeta(y, t)$ and ${\bold u}={\bold u}(y, t)$ be the mass density
and the velocity field in Lagrange coordinates $\{y\}$, and
for a while we assume that
\begin{equation}\label{lag:3} {\bold u} \in H^1_p((0, T), L_6(\Omega)^3) \cap
L_p((0, T), H^2_6(\Omega)^3),
\end{equation}
and that the quantity
$\|<t>^b\nabla{\bold u}\|_{L_p((0, T), H^1_6(\Omega))}$ is small enough
for some $b > 0$ with $bp' > 1$, where $1/p + 1/p' = 1$.
We consider
the Lagrange transformation:
\begin{equation}\label{lag:1} x = y + \int^t_0 {\bold u}(y, s)\,ds
\end{equation}
and assume that
\begin{equation}\label{lag:2}
\int^T_0\|\nabla{\bold u}(\cdot, t)\|_{L_\infty(\Omega)}\,dt <\delta
\end{equation}
with some small number $\delta > 0$.
If $0 < \delta < 1$, then
for $x_i = y_i + \int^t_0{\bold u}(y_i, s)\,ds$ we have
$$|x_1-x_2| \geq (1-\int^T_0\|\nabla{\bold u}(\cdot, t)\|_{L_\infty(\Omega)}\,dt)
|y_1-y_2|,$$
and so the correspondence \eqref{lag:1} is one to one. Moreover,
applying a method due to
Str\"ohmer \cite{Strohmer1}, we see that the correspondence \eqref{lag:1}
is a $C^{1+\omega}$ ($\omega \in (0, 1/2)$) diffeomorphism from $\overline{\Omega}$
onto itself for any $t \in (0, T)$.
In fact, let $J =
{\bold I} + \int^t_0\nabla{\bold u}(y, s)\,ds$, which is the Jacobian of the map
defined by \eqref{lag:1}, and then by Sobolev's imbedding theorem
and H\"older's inequality
for $\omega \in (0, 1/2)$ we have
\begin{equation}\label{lag:4}
\sup_{t \in (0, T)} \|\int^t_0\nabla{\bold u}(\cdot, s)\,ds\|_{C^{\omega}(\overline{\Omega})}
\leq C_\omega\Bigl(\int^T_0<s>^{-bp'}\,ds\Bigr)^{1/p'}
\Bigl(\int^T_0\|<s>^b\nabla{\bold u}(\cdot, s)\|_{H^1_6(\Omega)}^p\,ds\Bigr)^{1/p}<\infty
\end{equation}
and we may assume that the right hand side of \eqref{lag:4} is small enough
and \eqref{lag:2} holds in the process of
constructing a solution.
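The finiteness of the first factor in \eqref{lag:4} requires $bp'>1$. The following numerical sketch (illustrative only, not part of the proof) samples $\sigma$ in the admissible range $0<\sigma<1/6$ and checks this inequality for both choices of $p$ and $b$ made in Theorem \ref{thm:main0}.
\begin{verbatim}
# Illustrative numerical check (not part of the proof): b*p' > 1 for 0 < sigma < 1/6.
def b_times_p_prime(sigma, p):
    b = (3 - sigma) / (2 * (2 + sigma)) if p == 2 else (1 - sigma) / (2 * (2 + sigma))
    p_prime = p / (p - 1)              # Hoelder conjugate exponent
    return b * p_prime

for k in range(1, 1000):
    sigma = k / 6000                   # samples of sigma in (0, 1/6)
    assert b_times_p_prime(sigma, 2) > 1
    assert b_times_p_prime(sigma, 1 + sigma) > 1
\end{verbatim}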
By \eqref{lag:1}, we have
$$\frac{\partial x}{\partial y} = {\bold I} + \int^t_0\frac{\partial {\bold u}}{\partial y}(y, s)\,ds, $$
and so choosing $\delta > 0$ small enough, we may assume that
there exists a $3\times 3$ matrix ${\bold V}_0({\bold k})$ of $C^\infty$ functions of
the variables ${\bold k}$ for $|{\bold k}| < \delta$, where ${\bold k}$ is the variable corresponding
to $\int^t_0\nabla{\bold u}\,ds$,
such that
$\frac{\partial y}{\partial x} = {\bold I} + {\bold V}_0({\bold k})$ and ${\bold V}_0(0) = 0$. Let
$V_{0ij}({\bold k})$ be the $(i, j)$-th component of the $3\times 3$ matrix
${\bold V}_0({\bold k})$, and then we have
\begin{equation}\label{lag:5}
\frac{\partial}{\partial x_i} = \frac{\partial}{\partial y_i} + \sum_{j=1}^3V_{0ij}({\bold k})\frac{\partial}{\partial y_j}.
\end{equation}
Let $X_t(x) = y$ be the inverse map of Lagrange transform \eqref{lag:1} and
set $\rho(x, t) = \zeta(X_t(x), t)$ and ${\bold v}(x, t) = {\bold u}(X_t(x), t)$.
Setting
$${\mathcal D}_{\rm div}\,({\bold k})\nabla{\bold u} = \sum_{i, j=1}^3V_{0ij}({\bold k})\frac{\partial u_i}{\partial y_j},$$
we have ${\rm div}\,{\bold v} = {\rm div}\,{\bold u} + {\mathcal D}_{\rm div}\,({\bold k})\nabla{\bold u}$.
Let $\zeta = \rho_* + \eta$, and then
\begin{align*}
\frac{\partial}{\partial t} \rho + {\rm div}\,(\rho{\bold u})= \frac{\partial \eta}{\partial t} +
(\rho_* + \eta)({\rm div}\,{\bold u} + {\mathcal D}_{\rm div}\,({\bold k})\nabla{\bold u}).
\end{align*}
Setting
\begin{equation}\label{form:1}
{\mathcal D}_{\bold D}({\bold k})\nabla{\bold u} = {\bold V}_0({\bold k})\nabla {\bold u} + ({\bold V}_0({\bold k})\nabla {\bold u})^\top,
\end{equation}
we have
${\bold D}({\bold v}) = \nabla {\bold v} + (\nabla{\bold v})^\top
= ({\bold I} + {\bold V}_0({\bold k}))\nabla {\bold u} + (({\bold I} + {\bold V}_0({\bold k}))\nabla {\bold u})^\top
= {\bold D}({\bold u}) + {\mathcal D}_{\bold D}({\bold k})\nabla{\bold u}$.
Moreover,
\begin{align*}{\rm Div}\,(\mu{\bold D}({\bold v}) + \nu{\rm div}\,{\bold v}{\bold I})
&= ({\bold I}+{\bold V}_0({\bold k}))\nabla\bigl(\mu ({\bold D}({\bold u}) + {\mathcal D}_{\bold D}({\bold k})\nabla{\bold u})
+ \nu ({\rm div}\,{\bold u} + {\mathcal D}_{\rm div}\,({\bold k})\nabla{\bold u}){\bold I}\bigr) \\
& = {\rm Div}\,(\mu{\bold D}({\bold u}) + \nu{\rm div}\,{\bold u}{\bold I}) + {\bold V}_1({\bold k})\nabla^2{\bold u}
+ ({\bold V}_2({\bold k})\int^t_0\nabla^2{\bold u}\,ds)\nabla{\bold u}
\end{align*}
with
\begin{equation}\label{form:2}\begin{aligned}
{\bold V}_1({\bold k})\nabla^2{\bold u} &= \mu {\mathcal D}_{\bold D}({\bold k})\nabla^2{\bold u}
+ \nu {\mathcal D}_{\rm div}\,({\bold k})\nabla^2{\bold u} {\bold I} \\
&+ {\bold V}_0({\bold k})(\mu\nabla{\bold D}({\bold u})
+ \nu\nabla {\rm div}\,{\bold u} {\bold I} + \mu{\mathcal D}_{\bold D}({\bold k})\nabla^2{\bold u} +
\nu {\mathcal D}_{\rm div}\,({\bold k})\nabla^2{\bold u} {\bold I}), \\
({\bold V}_2({\bold k})\int^t_0\nabla^2{\bold u}\,ds)\nabla{\bold u}
&= ({\bold I}+{\bold V}_0({\bold k}))\bigl(\mu (d_{\bold k}{\mathcal D}_{\bold D}({\bold k})\int^t_0\nabla^2{\bold u}\,ds)\nabla{\bold u}
+\nu (d_{\bold k}{\mathcal D}_{\rm div}\,({\bold k}) \int^t_0\nabla^2{\bold u}\,ds)\nabla{\bold u}\,{\bold I}\bigr).
\end{aligned}\end{equation}
Here, $d_{\bold k} F({\bold k})$ denotes the derivative of $F$ with respect to ${\bold k}$.
Note that ${\bold V}_1(0) = 0$. Moreover, we write
\begin{equation}\label{form:3}
\nabla {\frak p}(\rho)
= {\frak p}'(\rho_*)\nabla\eta +({\frak p}'(\rho_*+\eta) - {\frak p}'(\rho_*))\nabla\eta
+ {\frak p}'(\rho_*+\eta){\bold V}_0({\bold k})\nabla\eta.
\end{equation}
The material derivative $\partial_t{\bold v} + {\bold v}\cdot\nabla{\bold v}$ is changed to $\partial_t{\bold u}$.
Summing up, we have obtained
\begin{equation}\label{eq:2.1}\begin{aligned}
\partial_t\eta + \rho_*{\rm div}\, {\bold u} = F(\eta, {\bold u})&&\quad&\text{in $\Omega \times(0, T)$}, \\
\rho_*\partial_t{\bold u}- {\rm Div}\,(\mu{\bold D}({\bold u})
+ \nu{\rm div}\,{\bold u}{\bold I} - {\frak p}'(\rho_*)\eta{\bold I}) = {\bold G}(\eta, {\bold u})&&
\quad&\text{in $\Omega \times(0, T)$}, \\
{\bold u}|_\Gamma=0, \quad (\eta, {\bold u})|_{t=0} = (\theta_0, {\bold v}_0)&&
\quad&\text{in $\Omega$}.
\end{aligned}\end{equation}
Here, we have set
\begin{equation}\label{non:linear1}\begin{aligned}
{\bold k} &= \int^t_0\nabla{\bold u}(\cdot, s)\,ds, \\
F(\eta, {\bold u})&= -\rho_* {\mathcal D}_{\rm div}\,({\bold k})\nabla{\bold u}
- \eta({\rm div}\,{\bold u} + {\mathcal D}_{\rm div}\,({\bold k})\nabla{\bold u}),
\\
{\bold G}(\eta, {\bold u}) &= -\eta\partial_t{\bold u}+ {\bold V}_1({\bold k})\nabla^2{\bold u}
+ ({\bold V}_2({\bold k})\int^t_0\nabla^2{\bold u}\,ds)\nabla{\bold u}\\
&\qquad - ({\frak p}'(\rho_*+\eta)-{\frak p}'(\rho_*))\nabla\eta
- {\frak p}'(\rho_*+\eta){\bold V}_0({\bold k})\nabla\eta
\end{aligned}\end{equation}
and ${\mathcal D}_{\rm div}\,({\bold k})\nabla{\bold u}$, ${\bold V}_1({\bold k})$, and ${\bold V}_2({\bold k})$
have been defined above and in \eqref{form:2}. Note that ${\mathcal D}_{\rm div}\,(0)=0$,
${\bold V}_1(0)=0$, and ${\bold G}(0, 0)=0$.
The following theorem is the main result of this paper.
{\bold e}gin{thm}\label{mainthm:2}
Let $\Omega$ be an exterior domain in ${\Bbb R}^3$, whose boundary $\Gamma$
is a compact $C^2$ hypersurface. Let $0 < \sigma < 1/6$ and
$p=2$ or $p=1+\sigma$. Let $b$ be the number defined by
$b= (3-\sigma)/(2(2+\sigma))$ when $p=2$ and $b= (1-\sigma)/(2(2+\sigma))$
when $p=1+\sigma$. Let $r= 2(2+\sigma)/(4+\sigma)$ and let $T \in (0, \infty]$.
Set
{\bold e}gin{align*}
&{\mathcal I} = \{(\theta_0, {\bold v}_0) \mid \theta_0 \in ( \bigcap_{q=r, 2, 2+\sigma, 6}
H^1_q(\Omega)), \enskip {\bold v}_0 \in (\bigcap_{q=2, 2+\sigma, 6}
B^{2(1-1/p)}_{q,p}(\Omega)^3) \cap L_r(\Omega)^3\}, \\
&\|(\theta_0, {\bold v}_0)\|_{{\mathcal I}} = \sum_{q=2, 2+\sigma, 6} \|\theta_0\|_{H^1_q(\Omega)}
+ \sum_{q=2,2+\sigma, 6}\|{\bold v}_0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
+ \|(\theta_0, {\bold v}_0)\|_{H^{1,0}_r(\Omega)}.
\end{align*}
Then, there exists a small constant
$\epsilon \in (0, 1)$ independent of $T$ such that if
initial data $(\theta_0, {\bold v}_0) \in {\mathcal I}$ satisfy the compatibility condition:
${\bold v}_0|_\Gamma=0$ and the smallness condition:
$\|(\theta_0, {\bold v}_0)\|_{{\mathcal I}} \leq \epsilon^2$, then problem \eqref{eq:2.1}
admits unique solutions $\zeta=\rho_*+\eta$ and ${\bold u}$ with
{\bold e}gin{equation}\label{sol:1*}{\bold e}gin{aligned}
\eta &\in H^1_p((0, T), H^1_2(\Omega) \cap H^1_6(\Omega)), \\
{\bold u} & \in H^1_p((0, T), L_2(\Omega)^3 \cap L_6(\Omega)^3) \cap
L_p((0, T), H^2_2(\Omega)^3 \cap H^2_6(\Omega)^3)
\end{aligned}\end{equation}
possessing the estimate $E_T(\eta, {\bold u}) \leq \epsilon$. Here, we have set
$$E_T(\eta, {\bold u}) = {\mathcal E}_T(\eta, {\bold u}) +
\|<t>^b\partial_t\nabla(\eta, {\bold u})\|_{L_p((0, T), L_q(\Omega))}$$
and ${\mathcal E}_T(\eta, {\bold u})$ is the quantity defined in Theorem \ref{thm:main0}.
\end{thm}
{\bold e}gin{remark}
\thetag1~ The choice of $\epsilon$ is independent of $T>0$, and so
solutions of equations \eqref{eq:2.1} exist for any time $t \in (0, \infty)$. \\
\thetag2~
For any natural number $m$,
$B^m_{q, 2}(\Omega) \subset H^m_q(\Omega)$ for
$2 < q < \infty$ and $B^m_{2,2} = H^m$.
\\
\thetag3~ The condition: $0 < \sigma < 1/6$ guarantees that
$bp' > 1$. \\
\thetag4~
Choosing $\sigma>0$ so small that
$H^2_6 \subset C^{1+\sigma}$, we see that Theorem \ref{mainthm:2}
implies
$$\int^T_0\|{\bold u}(\cdot, s)\|_{C^{1+\sigma}(\Omega)}\,ds < \delta $$
with some small number $\delta > 0$,
which guarantees that the Lagrange transform given in \eqref{lag:1} is a $C^{1+\sigma}$
diffeomorphism on $\Omega$. Moreover, Theorem \ref{thm:main0} follows
from Theorem \ref{mainthm:2}, the proof of which will be given in
Sect. 6 below.
\end{remark}
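For completeness, here is a quick verification of \thetag3 (an elementary computation
using the values of $b$ and $p$ from Theorem \ref{mainthm:2}):
{\bold e}gin{align*}
p=2:&\quad p'=2, \qquad bp' = {\frak r}ac{3-\sigma}{2+\sigma} > 1 \iff \sigma < {\frak r}ac12; \\
p=1+\sigma:&\quad p'={\frak r}ac{1+\sigma}{\sigma}, \qquad
bp' = {\frak r}ac{(1-\sigma)(1+\sigma)}{2\sigma(2+\sigma)} > 1
\iff 3\sigma^2+4\sigma-1 < 0 \iff \sigma < {\frak r}ac{\sqrt7-2}{3}.
\end{align*}
Since $(\sqrt7-2)/3 > 1/6$, the assumption $0 < \sigma < 1/6$ indeed gives $bp' > 1$ in both cases.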
\section{${\mathcal R}$-bounded solution operators}
This section gives a general framework for proving maximal $L_p$ regularity
($1 < p < \infty$), and so the
problem is formulated in an abstract setting. Let $X$, $Y$, and $Z$ be
three UMD Banach spaces such that $X \subset Z \subset Y$
and $X$ is dense in $Y$, where the inclusions are continuous.
Let $A$ be a closed linear operator from $X$ into $Y$ and
let $B$ be a linear operator from $X$ into $Y$ and also
from $Z$ into $Y$. Moreover, we assume that
$$\|Ax\|_Y \leq C\|x\|_X, \quad \|Bx\|_Z \leq C\|x\|_X,
\quad \|Bz\|_Y \leq C\|z\|_Z $$
with some constant $C$ for any $x \in X$ and $z \in Z$.
Let $\omega \in (0, \pi/2)$ be a fixed number and set
{\bold e}gin{align*}
\Sigma_\omega &= \{\lambda \in {\Bbb C}\setminus\{0\}
\mid |\arg\lambda| < \pi-\omega\},
\quad
\Sigma_{\omega, \lambda_0} = \{\lambda \in \Sigma_\omega
\mid |\lambda| \geq \lambda_0\}.
\end{align*}
We consider an abstract boundary
value problem with parameter $\lambda \in \Sigma_{\omega, \lambda_0}$:
{\bold e}gin{equation}\label{1}
\lambda u - A u = f, \quad Bu = g.
\end{equation}
Here, $Bu = g$ represents boundary conditions, constraints such as the
divergence condition for the Stokes equations in the case of incompressible
viscous fluid flow, or both.
The simplest example is the following:
$$\lambda u - \Delta u = f \enskip\text{in $\Omega$}, \quad
{\frak r}ac{\partial u}{\partial \nu} = g \enskip\text{on $\Gamma$}, \\
$$
where $\Omega$ is a uniform $C^2$ domain in ${\Bbb R}^N$, $\Gamma$ its boundary,
$\nu$ the unit outer normal to $\Gamma$, and $\partial/\partial\nu = \nu\cdot\nabla$
with $\nabla = (\partial/\partial x_1, \ldots, \partial/\partial x_N)$ for $x=(x_1, \ldots, x_N) \in {\Bbb R}^N$.
In this case, it is standard to choose $X = H^2_q(\Omega)$, $Y = L_q(\Omega)$,
$Z = H^1_q(\Omega)$ with $1 < q < \infty$, $A = \Delta$, and $B = \partial/\partial\nu$.
The problem formulated in \eqref{1} corresponds to the parameter-elliptic problems
which have been studied by Agmon \cite{Agmon},
Agmon, Douglis and Nirenberg \cite{ADN},
Agranovich and Visik \cite{AV}, Denk and Volevich \cite{DV}, and the
references therein;
the main goal there is to prove the unique existence of solutions possessing
the estimate:
$$
|\lambda|\|u\|_Y + \|u\|_X
\leq C(\|f\|_Y + |\lambda|^\alpha\|g\|_Y + \|g\|_Z)
$$
for some $\alpha \in {\Bbb R}$. From this estimate,
we can derive the generation of a $C^0$ analytic
semigroup associated with $A$ when $Bu=0$.
But to prove the maximal $L_p$ regularity with $1 < p < \infty$
for the corresponding nonstationary problem:
{\bold e}gin{align}\label{2}
&\partial_t v- A v = f, \quad Bv= g \quad\text{for $t > 0$}, \quad v|_{t=0} = v_0,
\end{align}
especially in the cases where $Bv=g \not=0$,
further consideration is needed. Below, we introduce a framework based on
the Weis operator valued Fourier multiplier theorem.
To state this theorem, we make some preparations.
{\bold e}gin{dfn} Let $E$ and $F$ be two Banach spaces and let ${\mathcal L}(E, F)$ be the
set of all bounded linear operators from $E$ into $F$. We say that
an operator family ${\mathcal T} \subset {\mathcal L}(E, F)$ is ${\mathcal R}$ bounded if
there exist a constant $C$ and an exponent $q \in [1, \infty)$ such that for any
integer $n$, $\{T_j\}_{j=1}^n \subset {\mathcal T}$ and $\{f_j\}_{j=1}^n
\subset E$, the inequality:
$$\int^1_0\|\sum_{j=1}^n r_j(u)T_jf_j\|_F^q\,d u
\leq C\int^1_0\|\sum_{j=1}^n r_j(u)f_j\|_E^q\,du$$
is valid, where the Rademacher functions $r_k$, $k \in {\Bbb N}$, are
given by $r_k: [0, 1] \to \{-1, 1\}$; $t \mapsto {\rm sign}(\sin 2^k\pi t)$.
The smallest such $C$ is called ${\mathcal R}$ bound of ${\mathcal T}$ on ${\mathcal L}(E, F)$,
which is denoted by ${\mathcal R}_{{\mathcal L}(E, F)}{\mathcal T}$.
\end{dfn}
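For orientation, we note a standard special case (not used in the sequel): when $E$ and $F$ are
Hilbert spaces and $q=2$, ${\mathcal R}$ boundedness reduces to uniform boundedness. Indeed, by the
orthonormality of the Rademacher functions in $L_2((0,1))$,
{\bold e}gin{align*}
\int^1_0\Bigl\|\sum_{j=1}^n r_j(u)T_jf_j\Bigr\|_F^2\,du
= \sum_{j=1}^n\|T_jf_j\|_F^2
\leq \sup_j\|T_j\|_{{\mathcal L}(E, F)}^2\sum_{j=1}^n\|f_j\|_E^2
= \sup_j\|T_j\|_{{\mathcal L}(E, F)}^2\int^1_0\Bigl\|\sum_{j=1}^n r_j(u)f_j\Bigr\|_E^2\,du,
\end{align*}
while the converse inequality $\sup_j\|T_j\|_{{\mathcal L}(E, F)} \leq {\mathcal R}_{{\mathcal L}(E, F)}{\mathcal T}$
follows by taking $n=1$. In general Banach spaces, ${\mathcal R}$ boundedness is strictly stronger than
uniform boundedness.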
For $m(\xi) \in L_\infty({\Bbb R}\setminus\{0\}, {\mathcal L}(E, F))$, we set
$$T_mf = {\mathcal F}^{-1}_\xi[m(\xi){\mathcal F}[f](\xi)], \quad f \in {\mathcal S}({\Bbb R}, E),$$
where ${\mathcal F}$ and ${\mathcal F}_\xi^{-1}$ denote the Fourier transform and
the inverse Fourier transform, respectively.
{\bold e}gin{thm}[Weis's operator valued Fourier multiplier theorem]
Let $E$ and $F$ be two UMD Banach spaces.
Let $m(\xi) \in C^1({\Bbb R}\setminus\{0\}, {\mathcal L}(E, F))$ and assume that
{\bold e}gin{align*}
{\mathcal R}_{{\mathcal L}(E, F)}(\{m(\xi) &\mid \xi \in {\Bbb R}\setminus\{0\}\}) \leq r_b \\
{\mathcal R}_{{\mathcal L}(E, F)}(\{\xi m'(\xi) &\mid \xi \in {\Bbb R}\setminus\{0\}\}) \leq r_b
\end{align*}
with some constant $r_b > 0$. Then, for any $p \in (1, \infty)$,
$T_m \in {\mathcal L}(L_p({\Bbb R}, E), L_p({\Bbb R}, F))$ and
$$\|T_mf\|_{L_p({\Bbb R}, F)} \leq C_pr_b\|f\|_{L_p({\Bbb R}, E)}
$$
with some constant $C_p$ depending solely on $p$.
\end{thm}
{\bold e}gin{remark} For a proof, refer to Weis \cite{Weis}.
\end{remark}
We introduce the following assumption. Recall that $\omega$ is a fixed number
such that $0 < \omega < \pi/2$.
{\bold e}gin{assump}\label{assump:1} Let $X$, $Y$ and $Z$
be UMD Banach spaces. There exist a constant $\lambda_0$,
$\alpha \in {\Bbb R}$, and an operator family ${\mathcal S}(\lambda)$ with
$${\mathcal S}(\lambda)\in {\rm Hol}\,(\Sigma_{\omega,\lambda_0},
{\mathcal L}(Y\times Y \times Z, X))$$
such that for any $f \in Y$ and $g \in Z$, $u={\mathcal S}(\lambda)(f, \lambda^\alpha g, g)$
is a solution of equations \eqref{1}, and the estimates:
{\bold e}gin{align*}
{\mathcal R}_{{\mathcal L}(Y\times Y \times Z, X)}(\{(\tau\partial_\tau)^\ell{\mathcal S}(\lambda) \mid
\lambda \in \Sigma_{\omega,\lambda_0}\}) &\leq r_b \\
{\mathcal R}_{{\mathcal L}(Y\times Y \times Z, Y)}(\{(\tau\partial_\tau)^\ell(\lambda{\mathcal S}(\lambda)) \mid
\lambda \in \Sigma_{\omega,\lambda_0}\}) &\leq r_b
\end{align*}
for $\ell=0,1$ are valid,
where $\lambda = \gamma + i\tau \in \Sigma_{\omega, \lambda_0}$.
${\mathcal S}(\lambda)$ is called an ${\mathcal R}$-bounded solution operator
or an ${\mathcal R}$ solver of equations \eqref{1}.
\end{assump}
We now consider an initial-boundary value problem:
{\bold e}gin{equation}\label{4}
\partial_t u - Au = f, \quad Bu = g \quad(t>0), \quad u|_{t=0} = u_0.
\end{equation}
This problem is divided into the following two equations:
{\bold e}gin{alignat}3
\partial_t u - Au &= f &\quad Bu &= g &\quad&(t \in {\Bbb R}); \label{eq:1} \\
\partial_t u - Au &= 0 &\quad Bu &= 0 &\quad&(t>0), \quad u|_{t=0} = u_0.
\label{eq:2}
\end{alignat}
From the definition of ${\mathcal R}$-boundedness with $n=1$ we see that
$u={\mathcal S}(\lambda)(f, 0, 0)$ satisfies the equations:
$$\lambda u -Au = f, \quad Bu = 0,$$
and the estimate:
$$|\lambda|\|u\|_Y + \|u\|_X \leq C\|f\|_Y.$$
Thus, $A$ generates a $C^0$ analytic semigroup $\{T(t)\}_{t\geq 0}$ such that
$u = T(t)u_0$ solves equations \eqref{eq:2} uniquely and
{\bold e}gin{equation}\label{semi.est.1}\|u(t)\|_Y \leq r_be^{\lambda_0t}\|u_0\|_Y,
\quad \|\partial_tu(t)\|_Y \leq r_bt^{-1}e^{\lambda_0 t}\|u_0\|_Y,
\quad \|\partial_tu(t)\|_Y \leq r_be^{\lambda_0 t}\|u_0\|_X.
\end{equation}
These estimates and the trace method of real interpolation theory
yield the following theorem.
{\bold e}gin{thm}[Maximal regularity for initial value problem] \label{max.thm.1}
Let $1 < p < \infty$ and set ${\mathcal D}= (Y, X_B)_{1-1/p, p}$,
where $X_B = \{u_0 \in X \mid Bu_0=0\}$, and $(\cdot, \cdot)_{1-1/p, p}$
denotes a real interpolation functor. Then, for any $u_0 \in {\mathcal D}$,
problem \eqref{eq:2} admits a unique solution $u$ with
$$e^{-\lambda_0t}u \in L_p({\Bbb R}_+, X) \cap H^1_p({\Bbb R}_+, Y)
\quad({\Bbb R}_+=(0, \infty))$$
possessing the estimate:
$$\|e^{-\lambda_0t}\partial_tu\|_{L_p({\Bbb R}_+, Y)}
+ \|e^{-\lambda_0t}u\|_{L_p({\Bbb R}_+, X)} \leq C\|u_0\|_{(Y, X)_{1-1/p, p}}.
$$
\end{thm}
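A minimal sketch of the real-interpolation step behind Theorem \ref{max.thm.1}, recorded for
orientation (standard semigroup theory; details are omitted and the exponential weight is
suppressed): since $\partial_tT(t)u_0 = AT(t)u_0$, the estimates \eqref{semi.est.1} give the
pointwise bound
$$\|\partial_t T(t)u_0\|_Y \leq r_b e^{\lambda_0 t}\min\bigl(t^{-1}\|u_0\|_Y,\ \|u_0\|_X\bigr)
\quad (t>0),$$
and the trace-method (equivalently, $K$-method) characterization of $(Y, X_B)_{1-1/p, p}$
converts a bound of this form into
$\|e^{-\lambda_0 t}\partial_tu\|_{L_p({\Bbb R}_+, Y)} \leq C\|u_0\|_{(Y, X_B)_{1-1/p, p}}$;
the term $\|e^{-\lambda_0t}u\|_{L_p({\Bbb R}_+, X)}$ is handled in the same way.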
The ${\mathcal R}$-bounded solution operator plays an essential role in proving
the following theorem.
{\bold e}gin{thm}[Maximal regularity for boundary value problem] \label{max.thm.2}
Let $1 < p < \infty$. Then for any $f$ and $g$ with
$e^{-\gamma t}f \in L_p({\Bbb R}, Y)$ and $
e^{-\gamma t}g \in L_p({\Bbb R}, Z) \cap H^\alpha_p({\Bbb R}, Y)$
for any $\gamma > \lambda_0$, problem \eqref{eq:1}
admits a unique solution $u$ with
$e^{-\gamma t} u \in L_p({\Bbb R}, X) \cap H^1_p({\Bbb R}, Y)$
for any $\gamma > \lambda_0$ possessing the estimate:
{\bold e}gin{align*}
&\|e^{-\gamma t}\partial_tu\|_{L_p({\Bbb R}, Y)}
+ \|e^{-\gamma t}u\|_{L_p({\Bbb R}, X)} \leq C(
\|e^{-\gamma t}f\|_{L_p({\Bbb R}, Y)} \\
&\quad + (1+\gamma)^\alpha\|e^{-\gamma t}g\|_{H^\alpha_p({\Bbb R}, Y)}
+ \|e^{-\gamma t}g\|_{L_p({\Bbb R}, Z)})
\end{align*}
for any $\gamma > \lambda_0$. Here, the constant
$C$ may depend on $\lambda_0$ but is independent of
$\gamma$ whenever $\gamma > \lambda_0$, and we have set
$$H^\alpha_p({\Bbb R}, Y) = \{h \in {\mathcal S}'({\Bbb R}, Y) \mid
\|h\|_{H^\alpha_p({\Bbb R}, Y)} : =
\|{\mathcal F}^{-1}_\xi[(1+|\xi|^2)^{\alpha/2}{\mathcal F}[h](\xi)]\|_{L_p({\Bbb R}, Y)} < \infty\}.
$$
\end{thm}
{\bold e}gin{proof}
Let ${\mathcal L}$ and ${\mathcal L}^{-1}$ denote the Laplace transform and the
inverse Laplace transform, respectively, defined by setting
{\bold e}gin{align*}{\mathcal L}[f](\lambda) &= \int_{\Bbb R} e^{-\lambda t}f(t)\,d t =
\int_{\Bbb R} e^{-i\tau t}(e^{-\gamma t}f(t))\,d t
= {\mathcal F}[e^{-\gamma t}f(t)](\tau) \quad(\lambda = \gamma + i\tau),\\
{\mathcal L}^{-1}[f](t) &= {\frak r}ac{1}{2\pi}\int_{\Bbb R} e^{\lambda t}f(\tau)\,d \tau =
{\frak r}ac{e^{\gamma t}}{2\pi}\int_{\Bbb R} e^{i\tau t}f(\tau)\,d \tau
= e^{\gamma t}{\mathcal F}^{-1}[f](t).
\end{align*}
We consider equations:
$$\partial_tu-Au = f, \quad Bu = g \quad\text{for $t \in {\Bbb R}$}.$$
Applying Laplace transformation yields that
$$\lambda {\mathcal L}[u](\lambda) -A{\mathcal L}[u](\lambda) ={\mathcal L}[ f](\lambda),
\quad B{\mathcal L}[u](\lambda) = {\mathcal L}[g](\lambda).
$$
Applying the ${\mathcal R}$-bounded solution operator ${\mathcal S}(\lambda)$ yields that
$${\mathcal L}[u](\lambda) = {\mathcal S}(\lambda)({\mathcal L}[f](\lambda),
\lambda^\alpha{\mathcal L}[g](\lambda), {\mathcal L}[g](\lambda)),
$$
and so
$$u = {\mathcal L}^{-1}[{\mathcal S}(\lambda){\mathcal L}[(f, \Lambda^\alpha g, g)](\lambda)],$$
where $\Lambda^\alpha g = {\mathcal L}^{-1}[\lambda^\alpha {\mathcal L}[g]]$.
Moreover,
$$\partial_tu = {\mathcal L}^{-1}[\lambda{\mathcal S}(\lambda){\mathcal L}[(f, \Lambda^\alpha g, g)](\lambda)].
$$
Using Fourier transformation and inverse Fourier transformation, we rewrite
{\bold e}gin{align*}
u &= e^{\gamma t}{\mathcal F}^{-1}[{\mathcal S}(\lambda){\mathcal F}[e^{-\gamma t}(f, \Lambda^\alpha g, g)]
(\tau)](t), \\
\partial_tu &= e^{\gamma t}{\mathcal F}^{-1}[\lambda{\mathcal S}(\lambda)
{\mathcal F}[e^{-\gamma t}(f, \Lambda^\alpha g, g)](\tau)](t).
\end{align*}
Applying the assumption of ${\mathcal R}$-bounded solution operators
and Weis's operator valued Fourier
multiplier theorem yields that
{\bold e}gin{align*}
&\|e^{-\gamma t}\partial_tu\|_{L_p({\Bbb R}, Y)} + \|e^{-\gamma t}u\|_{L_p({\Bbb R}, X)}\\
&\quad
\leq C_pr_b(\|e^{-\gamma t}f\|_{L_p({\Bbb R}, Y)}
+ (1 + \gamma)^\alpha\|e^{-\gamma t}g\|_{H^\alpha_p({\Bbb R}, Y)}
+ \|e^{-\gamma t}g\|_{L_p({\Bbb R}, Z)})
\end{align*}
for any $\gamma > \lambda_0$. The uniqueness follows from the generation of
the analytic semigroup and Duhamel's principle.
\end{proof}
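As a side remark on where the factor $(1+\gamma)^\alpha$ in Theorem \ref{max.thm.2} comes from
(an elementary bound, stated here for the case $\alpha \geq 0$): writing $\lambda = \gamma + i\tau$,
$$|\lambda|^\alpha = (\gamma^2+\tau^2)^{\alpha/2} \leq \bigl((1+\gamma)^2(1+\tau^2)\bigr)^{\alpha/2}
= (1+\gamma)^\alpha(1+\tau^2)^{\alpha/2},$$
so that, applying the Fourier multiplier theorem to the multiplier
$\lambda^\alpha(1+\tau^2)^{-\alpha/2}$ and recalling the definition of $H^\alpha_p({\Bbb R}, Y)$,
one obtains
$$\|e^{-\gamma t}\Lambda^\alpha g\|_{L_p({\Bbb R}, Y)}
\leq C(1+\gamma)^\alpha\|e^{-\gamma t}g\|_{H^\alpha_p({\Bbb R}, Y)}.$$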
We now consider the time-shifted
equations:
{\bold e}gin{equation}\label{eq:3}
\partial_tu + \lambda_1 u - Au = f, \quad Bu = g \quad\text{for $t \in (0, \infty)$},
\quad u|_{t=0} = u_0.
\end{equation}
As a first step, we consider the following time-shifted equations without initial data:
{\bold e}gin{equation}\label{eq:4}
\partial_tu + \lambda_1 u - Au = f, \quad Bu = g \quad\text{for $t \in {\Bbb R}$}.
\end{equation}
Then, we have the following theorem which guarantees the polynomial decay
of solutions.
{\bold e}gin{thm}\label{max:thm.2*}
Let $\lambda_0$ be a constant appearing in Assumption \ref{assump:1} and
let $\lambda_1 > \lambda_0$.
Let $1 < p < \infty$ and $b \geq 0$.
Then, for any $f$ and $g$ with
$<t>^bf \in L_p({\Bbb R}, Y)$ and $<t>^bg \in L_p({\Bbb R}, Z) \cap H^\alpha_p({\Bbb R}, Y)$,
problem \eqref{eq:4} admits a unique solution
$w \in H^1_p((0, \infty), Y) \cap L_p((0, \infty), X)$ possessing the estimate:
{\bold e}gin{equation}\label{eq:8}{\bold e}gin{aligned}
&\|<t>^bw\|_{L_p((0, \infty), X)} + \|<t>^b\partial_tw\|_{L_p((0, \infty), Y)} \\
&\quad \leq C(\|<t>^bf\|_{L_p({\Bbb R}, Y)}
+ \|<t>^bg\|_{H^\alpha_p({\Bbb R}, Y)} + \|<t>^bg\|_{L_p({\Bbb R}, Z)}).
\end{aligned}\end{equation}
\end{thm}
{\bold e}gin{proof}
Since $ik + \lambda_1 \in \Sigma_{\omega, \lambda_0}$ for $k \in {\Bbb R}$,
we set
$w = {\mathcal F}^{-1}[{\mathcal S}(ik + \lambda_1)({\mathcal F}[f], (ik+\lambda_1)^\alpha {\mathcal F}[g], {\mathcal F}[g])]$,
and then $w$ satisfies equations:
$$\partial_tw + \lambda_1w - Aw=f, \quad Bw=g \quad\text{for $t \in {\Bbb R}$},
$$
and the estimate:
{\bold e}gin{equation}\label{est:3.1}
\|\partial_t w\|_{L_p({\Bbb R}, Y)} + \|w\|_{L_p({\Bbb R}, X)} \leq C(\|f\|_{L_p({\Bbb R}, Y)}
+ \|g\|_{H^\alpha_p({\Bbb R}, Y)} + \|g\|_{L_p({\Bbb R}, Z)}).
\end{equation}
This proves the theorem in the case where $b=0$. When $0 < b \leq 1$,
we observe that
$$\partial_t(<t>^bw) +\lambda_1(<t>^bw) - A(<t>^bw) = <t>^bf
+ b<t>^{b-2}tw, \quad B(<t>^bw) = <t>^bg,$$
and so noting that $\|<t>^{b-2}t w\|_Y \leq C\|w\|_Y \leq C\|w\|_X$,
we have
{\bold e}gin{align*}
&\|<t>^bw\|_{L_p((0, \infty), X)} + \|<t>^b\partial_tw\|_{L_p((0, \infty), Y)} \\
&\quad \leq C(\|<t>^{b-2}tw\|_{L_p({\Bbb R}, Y)} + \|<t>^bf\|_{L_p({\Bbb R}, Y)}
+ \|<t>^bg\|_{H^\alpha_p({\Bbb R}, Y)} + \|<t>^bg\|_{L_p({\Bbb R}, Z)}) \\
&\quad \leq C( \|<t>^bf\|_{L_p({\Bbb R}, Y)}
+ \|<t>^bg\|_{H^\alpha_p({\Bbb R}, Y)} + \|<t>^bg\|_{L_p({\Bbb R}, Z)}).
\end{align*}
If $b > 1$, then repeated use of this argument yields the theorem, which
completes the proof of Theorem \ref{max:thm.2*}.
\end{proof}
Finally, we consider equations \eqref{eq:3}. Let $w$ be a solution of
\eqref{eq:4}, the unique existence of which is guaranteed by Theorem
\ref{max:thm.2*}. Let $v= u-w$, and then $v$ satisfies equations:
{\bold e}gin{equation}\label{eq:5}
\partial_tv + \lambda_1 v - Av = 0, \quad Bv = 0 \quad\text{for $t \in (0, \infty)$},
\quad v|_{t=0} = u_0-w|_{t=0}
\end{equation}
Let $\{T(t)\}_{t\geq0}$ be a continuous analytic semigroup
satisfying \eqref{semi.est.1}. Set $u_1 = u_0-w|_{t=0}$ and
$v = e^{-\lambda_1 t}T(t)u_1$, and then
{\bold e}gin{gather} \partial_t v + \lambda_1v - Av=0, \quad Bv=0,
\quad v|_{t=0} = u_1, \label{eq:6} \\
\|v(t)\|_Y \leq r_be^{-(\lambda_1-\lambda_0)t}
\|u_1\|_Y, \, \|\partial_t v(t)\|_Y \leq r_bt^{-1}e^{-(\lambda_1-\lambda_0)t}\|u_1\|_Y,
\, \|\partial_t v(t)\|_Y \leq r_be^{-(\lambda_1-\lambda_0)t}\|u_1\|_X.
\label{eq:7}
\end{gather}
Thus, the trace method of real interpolation theory yields the following theorem.
{\bold e}gin{thm}\label{max.thm.1*}
Let $1 < p < \infty$ and $b>0$. Let ${\mathcal D}$ be the same space as in Theorem
\ref{max.thm.1}. If $u_1 \in {\mathcal D}$ and $f$ and $g$ satisfy the same condition
as in Theorem \ref{max:thm.2*}, then
problem \eqref{eq:3} admits a unique solution
$u \in L_p({\Bbb R}_+, X) \cap H^1_p({\Bbb R}_+, Y)$ $({\Bbb R}_+=(0, \infty))$
possessing the estimate:
{\bold e}gin{equation}\label{eq:9}{\bold e}gin{aligned}
&\|<t>^b\partial_tu\|_{L_p({\Bbb R}_+, Y)}
+ \|<t>^bu\|_{L_p({\Bbb R}_+, X)} \\
&\quad \leq C(\|u_0\|_{(Y, X)_{1-1/p, p}}
+ \|<t>^bf\|_{L_p({\Bbb R}, Y)}
+ \|<t>^bg\|_{H^\alpha_p({\Bbb R}, Y)} + \|<t>^bg\|_{L_p({\Bbb R}, Z)}).
\end{aligned}\end{equation}
\end{thm}
{\bold e}gin{proof}
Let $v = e^{-\lambda_1 t}\,T(t)u_1$, and then $v$ satisfies equations
\eqref{eq:6}. Since $u_1 \in {\mathcal D}$, by the trace method of real interpolation
theory and \eqref{eq:7}, we have
{\bold e}gin{equation}\label{est:3.2}
\|e^{(\lambda_1-\lambda_0)t}v\|_{L_p((0, \infty), X)}
+ \|e^{(\lambda_1-\lambda_0)t}\partial_tv\|_{L_p((0, \infty), Y)}
\leq C\|u_1\|_{(Y, X)_{1-1/p,p}}.
\end{equation}
Since $w$ satisfies \eqref{eq:8},
trace method of real interpolation theory yields that
{\bold e}gin{equation}\label{est:3.3}{\bold e}gin{aligned}
\|w|_{t=0}\|_{(Y, X)_{1-1/p,p}} &\leq C(\|w\|_{L_p((0, \infty), X)}
+ \|\partial_tw\|_{L_p((0, \infty), Y)}) \\
& \leq C(\|<t>^bf\|_{L_p({\Bbb R}, Y)}
+ \|<t>^bg\|_{H^\alpha_p({\Bbb R}, Y)} + \|<t>^bg\|_{L_p({\Bbb R}, Z)}),
\end{aligned}\end{equation}
because $<t>^b \geq 1$.
Thus, $u = v+w$ satisfies equations \eqref{eq:3}
and the estimate \eqref{eq:9}. The uniqueness of solutions follows
from the generation of the continuous analytic semigroup and Duhamel's principle.
This completes the proof
of Theorem \ref{max.thm.1*}. \end{proof}
\section{Estimates of nonlinear terms}
In what follows, let $T > 0$ be any positive time and let $b$ and $p$ be
the positive number and the exponent given in Theorem \ref{thm:main0}
and Theorem \ref{mainthm:2}.
Let ${\mathcal U}^i_T$ ($i=1,2$) be the underlying spaces for the linearized equations of
equations \eqref{eq:2.1}, which are defined by
{\bold e}gin{equation}\label{eq:5.1}{\bold e}gin{aligned}
&{\mathcal U}^1_T = \{\theta
\in H^1_p((0, T), H^1_2(\Omega) \cap H^1_6(\Omega))\mid \theta|_{t=0}=\theta_0,
\quad \sup_{t \in (0, T)}\|\theta(\cdot,t)\|_{L_\infty(\Omega)} \leq \rho_*/2\},\\
&{\mathcal U}^2_T = \{{\bold v} \in L_p((0, T), H^2_2(\Omega)^3
\cap H^2_6(\Omega)^3) \cap
H^1_p((0, T), L_2(\Omega)^3 \cap L_6(\Omega)^3) \mid \\
&\hskip6.7cm {\bold v}|_{t=0} = {\bold v}_0, \quad
\int^T_0\|\nabla{\bold v}(\cdot, s)\|_{L_\infty(\Omega)}\,ds \leq \delta\}.
\end{aligned}\end{equation}
Recall that our energy $E_T(\eta,{\bold u})$ has been defined by
{\bold e}gin{align*}
E_T(\eta, {\bold u})& =
\|<t>^{b}\nabla(\eta, {\bold u})\|_{L_p((0, T), H^{0,1}_2(\Omega) \cap H^{0,1}_{2+\sigma}
(\Omega))}
+ \|<t>^{b}(\eta, {\bold u})\|_{L_\infty((0, T), L_2(\Omega)\cap L_6(\Omega))}
\\
&+ \|<t>^{b}\partial_t(\eta, {\bold u})\|_{L_p((0, T), H^{1,0}_2(\Omega) \cap H^{1,0}_{6}
(\Omega))}
+\|<t>^{b}(\eta, {\bold u})\|_{L_p((0, T), H^2_6(\Omega))}.
\end{align*}
Note that by using a standard interpolation inequality we have
{\bold e}gin{equation}\label{eq:5.2}
\|f\|_{L_{2+\sigma}(\Omega)} \leq \|f\|_{L_2(\Omega)}^{1-\sigma/4}
\|f\|_{L_6(\Omega)}^{\sigma/4}.
\end{equation}
And therefore, for $(\theta, {\bold v}) \in {\mathcal U}^1_T\times{\mathcal U}^2_T$,
we know that
{\bold e}gin{equation}\label{eq:5.3}{\bold e}gin{aligned}
\|<t>^b(\theta, {\bold v})\|_{L_\infty((0, T), L_{2+\sigma}(\Omega))}
&\leq C_\sigma \sum_{q=2, 6}\|<t>^b(\theta, {\bold v})
\|_{L_\infty((0, T), L_q(\Omega))},
\\
\|<t>^b\partial_t(\theta, {\bold v})\|_{L_p((0, T), H^{1,0}_{2+\sigma}(\Omega))}
&\leq C_\sigma \sum_{q=2, 6}\|<t>^b\partial_t(\theta, {\bold v})\|_{L_p((0, T), H^{1,0}_q(\Omega))},
\end{aligned}\end{equation}
where $bp' > 1$.
Notice that for any $\theta \in {\mathcal U}^1_T$
we have
{\bold e}gin{equation}\label{cond:1}
\rho_*/2 \leq |\rho_*+\tau\theta(y, t)| \leq 3\rho_*/2 \quad\text{for
$(y,t) \in \Omega\times(0, T)$ and $|\tau| \leq 1$}.
\end{equation}
For ${\bold v} \in {\mathcal U}^2_T$ let
${\bold k}_{\bold v}= \int^t_0\nabla{\bold v}(\cdot,s)\,ds$, and then
$|{\bold k}_{\bold v}(y, t)| \leq \delta$ for any $(y, t) \in \Omega\times(0, T)$.
Moreover, for $q=2, 2+\sigma$ and $6$ by H\"older's inequality
{\bold e}gin{equation}\label{eq:4.4}
\sup_{t \in (0, T)}
\|{\bold k}_{\bold v}\|_{H^1_q(\Omega)} \leq \int^T_0\|\nabla{\bold v}(\cdot, t)\|_{H^1_q(\Omega)}\,dt
\leq C\Bigl(\int^\infty_0<t>^{-p'b}\,dt\Bigr)^{1/p'}\|<t>^b\nabla{\bold v}\|_{L_p((0, T),
H^1_q(\Omega))},
\end{equation}
where $bp' > 1$.
In what follows, for notational simplicity we use the following
abbreviation: $\|f\|_{H^1_q(\Omega)} = \|f\|_{H^1_q}$,
$\|f\|_{L_q(\Omega)} = \|f\|_{L_q}$, $\|f\|_{L_\infty((0, T), X)}
= \|f\|_{L_\infty(X)}$, and $\|<t>^bf\|_{L_p((0, T), X)}
= \|f\|_{L_{p,b}(X)}$.
Let $(\theta, {\bold v}) \in {\mathcal U}^1_T\times{\mathcal U}^2_T$ and
$(\theta_i, {\bold v}_i) \in {\mathcal U}^1_T\times {\mathcal U}^2_T$ ($i=1,2$).
The purpose of this section is to give necessary estimates of
$(F(\theta, {\bold v}), {\bold G}(\theta, {\bold v}))$ and of the difference
$(F(\theta_1, {\bold v}_1) -F(\theta_2, {\bold v}_2), {\bold G}(\theta_1, {\bold v}_1)
- {\bold G}(\theta_2, {\bold v}_2))$ to prove the global wellposedness of
equations \eqref{eq:2.1}.
Recall that
{\bold e}gin{equation}\label{nonlinear:4.1}{\bold e}gin{aligned}
F(\theta, {\bold v}) &= \rho_*{\mathcal D}_{\rm div}\,({\bold k})\nabla{\bold v} +\theta{\rm div}\,{\bold v} + \theta{\mathcal D}_{\rm div}\,({\bold k})\nabla{\bold v},
\\
{\bold G}(\theta, {\bold v}) &= \theta\partial_t{\bold v}+ {\bold V}_1({\bold k})\nabla^2{\bold v}
+ ({\bold V}_2({\bold k})\int^t_0\nabla^2{\bold v}\,ds)\nabla{\bold v}\\
&\qquad - ({{\frak r}ak p}'(\rho_*+\theta)-{{\frak r}ak p}'(\rho_*))\nabla\theta
- {{\frak r}ak p}'(\rho_*+\theta){\bold V}_0({\bold k})\nabla\theta.
\end{aligned}\end{equation}
We start by estimating $\|F(\theta, {\bold v})\|_{L_{p,b}(H^1_r)}$.
Recall that $r^{-1} = 2^{-1} + (2+\sigma)^{-1}$ and we use the estimates:
{\bold e}gin{equation}\label{geneq:4.1}{\bold e}gin{aligned}
\|fg\|_{L_{p,b}(H^1_r)} &\leq C\|f\|_{L_\infty(H^1_{2+\sigma})}
\|g\|_{L_{p,b}(H^1_2)}, \\
\|fgh\|_{L_{p,b}(H^1_r)} &\leq C(\|f\|_{L_\infty(H^1_6)}
\|g\|_{L_\infty(H^1_{2+\sigma})}
+ \|f\|_{L_\infty(H^1_{2+\sigma})}\|g\|_{L_\infty(H^1_6)})
\|h\|_{L_{p,b}(H^1_2)},
\end{aligned}\end{equation}
as follows from H\"older's inequality and Sobolev's inequality :
$\|f\|_{L_\infty} \leq C\|f\|_{H^1_6}$.
Let $dG({\bold k})$ denote the derivative of $G({\bold k})$ with
respect to ${\bold k}$ and let $C_{\rm div}\,$ be a constant such that
$\sup_{|{\bold k}| < \delta}|{\mathcal D}_{\rm div}\,({\bold k})| < C_{\rm div}\,$,
$\sup_{|{\bold k}| < \delta}|d{\mathcal D}_{\rm div}\,({\bold k})| < C_{\rm div}\,$, and
$\sup_{|{\bold k}| < \delta}|d(d{\mathcal D}_{\rm div}\,)({\bold k})| < C_{\rm div}\,$.
Then, noting ${\mathcal D}_{\rm div}\,(0)=0$, by \eqref{eq:4.4} we have
{\bold e}gin{equation}\label{eq:4.0}{\bold e}gin{aligned}
\|{\mathcal D}_{\rm div}\,({\bold k}_{{\bold v}})\|_{H^1_q}
&\leq C_{\rm div}\, \|{\bold k}_{{\bold v}}\|_{H^1_q} \leq C\|\nabla{\bold v}\|_{L_{p,b}(H^1_q)}
\quad\text{for ${\bold v}
\in {\mathcal U}^2_T$ and $q=2, 2+\sigma$ and $6$}.
\end{aligned}\end{equation}
Moreover, for ${\bold v}_1$, ${\bold v}_2 \in {\mathcal U}^2_T$ writing
$${\mathcal D}_{\rm div}\,({\bold k}_{{\bold v}_1}) - {\mathcal D}_{\rm div}\,({\bold k}_{{\bold v}_2})
= \int^1_0d{\mathcal D}_{\rm div}\,({\bold k}_{{\bold v}_2}+ \tau({\bold k}_{{\bold v}_1}-{\bold k}_{{\bold v}_2}))\,d\tau
\,({\bold k}_{{\bold v}_1}-{\bold k}_{{\bold v}_2}), $$
and noting that
$|{\bold k}_{{\bold v}_2}+ \tau({\bold k}_{{\bold v}_1}-{\bold k}_{{\bold v}_2})| = |(1-\tau){\bold k}_{{\bold v}_2}
+ \tau {\bold k}_{{\bold v}_1}| \leq (1-\tau)\delta + \tau \delta = \delta$,
we have
{\bold e}gin{equation}\label{eq:4.1}{\bold e}gin{aligned}
&\|{\mathcal D}_{\rm div}\,({\bold k}_{{\bold v}_1})- {\mathcal D}_{\rm div}\,({\bold k}_{{\bold v}_2})\|_{H^1_q}\\
&\quad\leq C_{\rm div}\, (\|{\bold k}_{{\bold v}_1}-{\bold k}_{{\bold v}_2}\|_{L_\infty(H^1_q)}
+ \sum_{i=1,2}\|\nabla {\bold k}_{{\bold v}_i}\|_{L_\infty(L_q)}
\|{\bold k}_{{\bold v}_1}-{\bold k}_{{\bold v}_2}\|_{L_\infty(L_\infty)})\\
&\quad \leq C(\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_q)}
+ \sum_{i=1,2}\|\nabla {\bold v}_i\|_{L_{p,b}(H^1_q)}
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)}).
\end{aligned}\end{equation}
Since $\theta= \theta|_{t=0} + \int^t_0\partial_s\theta\,ds$,
for $X \in \{L_q, H^1_q\}$ with $q=2$, $2+\sigma$ and $6$
{\bold e}gin{equation}\label{eq:4.2}{\bold e}gin{aligned}
\|\theta(\cdot, t)\|_{X} &\leq \|\theta_0\|_{X}
+ \int^T_0\|(\partial_s\theta)(\cdot, s)\|_{X}\,ds
\\
&\leq\|\theta_0\|_{X} + \Bigl(\int^\infty_0<t>^{-p'b}\,dt\Bigr)^{1/p'}
\|\partial_s\theta\|_{L_{p,b}(X)}.
\end{aligned}\end{equation}
In particular, by Sobolev's inequality
{\bold e}gin{equation}\label{eq:4.3}
\|\theta(\cdot, t)\|_{L_\infty} \leq C(\|\theta_0\|_{H^1_6}
+ \|\partial_t\theta\|_{L_{p,b}(H^1_{6})}).
\end{equation}
For $\theta \in {\mathcal U}^1_T$ and ${\bold v} \in {\mathcal U}^2_T$,
combining \eqref{geneq:4.1}, \eqref{eq:4.0},
\eqref{eq:4.1},
\eqref{eq:4.2}, and \eqref{eq:4.3} yields that
{\bold e}gin{equation}\label{nones:4.1}{\bold e}gin{aligned}
&\|F(\theta, {\bold v})\|_{L_{p,b}(H^1_r)} \leq C[\|\nabla{\bold v}\|_{L_{p,b}(H^1_{2+\sigma})}
\|\nabla{\bold v}\|_{L_{p,b}(H^1_2)}
+ (\|\theta_0\|_{H^1_{2+\sigma}}+\|\partial_t\theta\|_{L_{p,b}(H^1_{2+\sigma})})
\|\nabla{\bold v}\|_{L_{p,b}(H^1_2)}\\
&\quad +\{(\|\theta_0\|_{H^1_6}+\|\partial_t\theta\|_{L_{p,b}(H^1_6)})
\|\nabla{\bold v}\|_{L_{p,b}(H^1_{2+\sigma})}
+ (\|\theta_0\|_{H^1_{2+\sigma}}+\|\partial_t\theta\|_{L_{p,b}(H^1_{2+\sigma})})
\|\nabla{\bold v}\|_{L_{p,b}(H^1_6)}\}\\
&\hskip12.4cm\times\|\nabla{\bold v}\|_{L_{p,b}(H^1_2)}].
\end{aligned}\end{equation}
Analogously, for $\theta_i \in {\mathcal U}^1_T$ and ${\bold v}_i \in {\mathcal U}^2_T$
($i=1,2$),
\allowdisplaybreaks
{\bold e}gin{align}
&\|F(\theta_1, {\bold v}_1)-F(\theta_2, {\bold v}_2)\|_{L_{p,b}(L_r)} \nonumber \\
&\leq C[(\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_{2+\sigma})}
+ \sum_{i=1,2}\|\nabla{\bold v}_i\|_{L_{p,b}(H^1_{2+\sigma})}
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)})\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_2)}
\nonumber\\
&+ \|\nabla{\bold v}_2\|_{L_{p,b}(H^1_{2+\sigma})}\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_2)}
+ \|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(H^1_{2+\sigma})}
\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_2)} \nonumber\\
& + (\|\theta_0\|_{H^1_{2+\sigma}}+\|\partial_t\theta_2\|_{L_{p, b}(H^1_{2+\sigma})})
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_2)} \nonumber\\
&
+(\|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(H^1_6)}
\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_{2+\sigma})}
+ \|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(H^1_{2+\sigma})}
\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_6)})\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_2)} \nonumber\\
&+\{(\|\theta_0\|_{H^1_6}+\|\partial_t\theta_2\|_{L_{p,b}(H^1_6)})
(\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_{2+\sigma})}
+ \sum_{i=1,2}\|\nabla{\bold v}_i\|_{L_{p,b}(H^1_{2+\sigma})}
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)})
\nonumber\\
&+ (\|\theta_0\|_{H^1_{2+\sigma}}+\|\partial_t\theta_2\|_{L_{p,b}(H^1_{2+\sigma})})
(\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)}
+ \sum_{i=1,2}\|\nabla{\bold v}_i\|_{L_{p,b}(H^1_6)}
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)})\} \nonumber \\
&\hskip10cm \times \|\nabla{\bold v}_1\|_{L_{p,b}(H^1_2)} \nonumber \\
&+\{(\|\theta_0\|_{H^1_6}+\|\partial_t\theta_2\|_{L_{p,b}(H^1_6)})
\|\nabla{\bold v}_2\|_{L_{p,b}(H^1_{2+\sigma})}
+ (\|\theta_0\|_{H^1_{2+\sigma}}+\|\partial_t\theta_2\|_{L_{p,b}(H^1_{2+\sigma})})
\|\nabla{\bold v}_2\|_{L_{p,b}(H^1_6)}\} \nonumber \\
&\hskip10cm \times\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_2)}].
\label{nones:4.2}
\end{align}
We now estimate $\|F(\theta, {\bold v})\|_{L_{p,b}(H^1_q)}$
and $\|F(\theta_1, {\bold v}_1) - F(\theta_2, {\bold v}_2)\|_{L_{p, b}(H^1_q)}$
with $q=2$, $2+\sigma$ and $6$.
For this purpose, we use the following estimates:
{\bold e}gin{align*}
\|fg\|_{L_{p, b}(H^1_q)} &\leq C\{\|f\|_{L_\infty(H^1_q)}\|g\|_{L_{p, b}(H^1_6)}
+ \|f\|_{L_\infty(H^1_6)}\|g\|_{L_{p,b}(H^1_q)}\}, \\
\|fgh\|_{L_{p, b}(H^1_q)} &\leq C\{\|f\|_{L_\infty(H^1_q)}
\|g\|_{L_\infty(H^1_6)}\|h\|_{L_{p, b}(H^1_6)}
+ \|f\|_{L_\infty(H^1_6)}\|g\|_{L_\infty(H^1_q)}\|h\|_{L_{p, b}(H^1_6)} \\
&\qquad + \|f\|_{L_\infty(H^1_6)}\|g\|_{L_\infty(H^1_6)}\|h\|_{L_{p, b}(H^1_q)}\}.
\end{align*}
And then, using \eqref{eq:4.0}, \eqref{eq:4.1}, \eqref{eq:4.2}, we have
\allowdisplaybreaks
{\bold e}gin{align}
&\|F(\theta,{\bold v})\|_{L_{p,b}(H^1_q)}
\leq C\{\|\nabla{\bold v}\|_{L_{p,b}(H^1_q)}\|\nabla{\bold v}\|_{L_{p,b}(H^1_6)}
+ (\|\theta_0\|_{H^1_q}+\|\partial_t\theta\|_{L_{p,b}(H^1_q)})\|\nabla{\bold v}\|_{L_{p,b}(H^1_6)}
\nonumber \\
&\quad + (\|\theta_0\|_{H^1_6}+\|\partial_t\theta\|_{L_{p,b}(H^1_6)})
\|\nabla{\bold v}\|_{L_{p,b}(H^1_q)}
+ (\|\theta_0\|_{H^1_q}+\|\partial_t\theta\|_{L_{p,b}(H^1_q)})\|\nabla{\bold v}\|_{L_{p,b}(H^1_6)}^2
\nonumber \\
&\quad + (\|\theta_0\|_{H^1_6}+\|\partial_t\theta\|_{L_{p,b}(H^1_6)})\|\nabla{\bold v}\|_{L_{p,b}(H^1_q)}
\|\nabla{\bold v}\|_{L_{p,b}(H^1_6)}\}; \label{nones:4.3} \\
&\|F(\theta_1, {\bold v}_1) - F(\theta_2, {\bold v}_2)\|_{L_{p, b}(H^1_q)} \nonumber \\
&\quad \leq C\{(\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_q)} +
\sum_{i=1,2}\|\nabla{\bold v}_i\|_{L_{p,b}(H^1_q)}\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)})
\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_6)} \nonumber \\
&\quad + (\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)} +
\sum_{i=1,2}\|\nabla{\bold v}_i\|_{L_{p,b}(H^1_6)}\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)})
\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_q)}\nonumber \\
&\quad +\|\nabla{\bold v}_2\|_{L_{p,b}(H^1_q)}\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)}
+ \|\nabla{\bold v}_2\|_{L_{p,b}(H^1_6)}\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_q)} \nonumber \\
&\quad + \|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(H^1_q)}\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_6)}
+ \|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(H^1_6)}
\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_q)} \nonumber \\
&\quad + (\|\theta_0\|_{H^1_q} + \|\partial_t\theta_2\|_{L_{p,b}(H^1_q)})
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)}
+ (\|\theta_0\|_{H^1_6} + \|\partial_t\theta_2\|_{L_{p,b}(H^1_6)})
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_q)}
\nonumber\\
&\quad +\|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(H^1_q)}\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_6)}^2
+\|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(H^1_6)}\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_q)}
\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_6)} \nonumber \\
&\quad + (\|\theta_0\|_{H^1_q}+\|\partial_t\theta_2\|_{L_{p,b}(H^1_q)})(
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)}
+ \sum_{i=1,2}\|\nabla{\bold v}_i\|_{L_{p,b}(H^1_6)}\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)})
\nonumber \\
&\hskip13.2cm\times
\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_6)}\nonumber \\
&\quad + (\|\theta_0\|_{H^1_6}+\|\partial_t\theta_2\|_{L_{p,b}(H^1_6)})(
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_q)}
+ \sum_{i=1,2}\|\nabla{\bold v}_i\|_{L_{p,b}(H^1_q)}\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)})
\nonumber \\
&\hskip13.2cm\times
\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_6)}\nonumber \\
&\quad + (\|\theta_0\|_{H^1_6}+\|\partial_t\theta_2\|_{L_{p,b}(H^1_6)})(
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)}
+ \sum_{i=1,2}\|\nabla{\bold v}_i\|_{L_{p,b}(H^1_6)}\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)})
\nonumber \\
&\hskip13.2cm\times
\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_q)}\nonumber \\
&\quad +(\|\theta_0\|_{H^1_q}+\|\partial_t\theta_2\|_{L_{p,b}(H^1_q)})
\|\nabla{\bold v}_2\|_{L_{p,b}(H^1_6)}
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)} \nonumber \\
&\quad +(\|\theta_0\|_{H^1_6}+\|\partial_t\theta_2\|_{L_{p,b}(H^1_6)})
\|\nabla{\bold v}_2\|_{L_{p,b}(H^1_q)}
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)} \nonumber \\
&\quad +(\|\theta_0\|_{H^1_6}+\|\partial_t\theta_2\|_{L_{p,b}(H^1_6)})
\|\nabla{\bold v}_2\|_{L_{p,b}(H^1_6)}
\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_q)}\}.
\label{nones:4.4}
\end{align}
We next estimate $\|{\bold G}(\theta, {\bold v})\|_{L_{p,b}(L_r)}$
and $\|{\bold G}(\theta_1, {\bold v}_1)-{\bold G}(\theta_2, {\bold v}_2)\|_{L_{p,b}(L_r)}$. For
this purpose, we use the estimates:
{\bold e}gin{equation}\label{geneq:4.2}{\bold e}gin{aligned}
\|fg\|_{L_{p,b}(L_r)} & \leq \|f\|_{L_\infty(L_{2+\sigma})}\|g\|_{L_{p,b}(L_2)}, \\
\|fgh\|_{L_{p,b}(L_r)} & \leq \|f\|_{L_\infty(L_\infty)}
\|g\|_{L_\infty(L_{2+\sigma})}\|h\|_{L_{p,b}(L_2)}.
\end{aligned}\end{equation}
Employing the same argument as in \eqref{eq:4.0} and \eqref{eq:4.1}
and using ${\bold V}_i(0)=0$ ($i=0,1$),
we have
{\bold e}gin{equation}\label{eq:4.0.1}{\bold e}gin{aligned}
&\|{\bold V}_i({\bold k})\|_{L_\infty(L_q)} \leq \sup_{|{\bold k}| < \delta}|d{\bold V}_i({\bold k})|
\int^T_0\|\nabla{\bold v}(\cdot, s)\|_{L_q}\,ds \leq C\|\nabla{\bold v}\|_{L_{p,b}(L_q)}; \\
&\|{\bold V}_i({\bold k}_{{\bold v}_1}) - {\bold V}_i({\bold k}_{{\bold v}_2})\|_{L_\infty(L_q)}
\leq C\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(L_q)},
\end{aligned}\end{equation}
where $q=2, 2+\sigma$ and $6$.
Moreover, $\|{\bold V}_2({\bold k})\|_{L_\infty(L_\infty)} \leq \sup_{|{\bold k}| < \delta}|{\bold V}_2({\bold k})|$,
{\bold e}gin{align*}
&\|{\bold V}_i({\bold k})\|_{L_\infty(L_\infty)} \leq \sup_{|{\bold k}| < \delta}|d{\bold V}_i({\bold k})|
\int^T_0\|\nabla{\bold v}(\cdot, s)\|_{H^1_6}\,ds \leq C\|\nabla{\bold v}\|_{L_{p,b}(H^1_6)};
\quad(i=0,1), \\
&\|{\bold V}_i({\bold k}_{{\bold v}_1}) - {\bold V}_i({\bold k}_{{\bold v}_2})\|_{L_\infty(L_\infty)}
\leq C\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)}
\quad(i=0,1,2)
\end{align*}
as follows from $|{\bold V}_2({\bold k}_{{\bold v}_1}) - {\bold V}_2({\bold k}_{{\bold v}_2})|
\leq \sup_{|{\bold k}| \leq \delta}|(d{\bold V}_2)({\bold k})||{\bold k}_{{\bold v}_1}-{\bold k}_{{\bold v}_2}|$.
Writing
{\bold e}gin{align*}
{{\frak r}ak p}'(\rho_*+\theta)-{{\frak r}ak p}'(\rho_*)
&= \int^1_0{{\frak r}ak p}''(\rho_*
+\tau\theta)\,d\tau\,\theta, \\
{{\frak r}ak p}'(\rho_*+\theta_1)-{{\frak r}ak p}'(\rho_*+\theta_2) &
= \int^1_0{{\frak r}ak p}''(\rho_* + \theta_2
+\tau(\theta_1-\theta_2))\,d\tau\,(\theta_1-\theta_2),
\end{align*}
by \eqref{cond:1} and \eqref{eq:4.2} we have
{\bold e}gin{equation}\label{p-est.1}{\bold e}gin{aligned}
&\|({{\frak r}ak p}'(\rho_*+\theta)-{{\frak r}ak p}'(\rho_*))\nabla\theta\|_{L_{p,b}(L_r)}
\leq C(\|\theta_0\|_{L_{2+\sigma}}+\|\partial_t\theta\|_{L_{p,b}(L_{2+\sigma})})
\|\nabla\theta\|_{L_{p,b}(L_2)}, \\
&\|({{\frak r}ak p}'(\rho_*+\theta_1)-{{\frak r}ak p}'(\rho_*))\nabla\theta_1
-({{\frak r}ak p}'(\rho_*+\theta_2)-{{\frak r}ak p}'(\rho_*))\nabla\theta_2\|_{L_{p,b}(L_r)}
\\
&\quad
\leq C\{\|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(L_{2+\sigma})}
\|\nabla\theta_1\|_{L_{p,b}(L_2)}
+ (\|\theta_0\|_{L_{2+\sigma}}+\|\partial_t\theta_2\|_{L_{p,b}(L_{2+\sigma})})
\|\nabla(\theta_1-\theta_2)\|_{L_{p, b}(L_2)}\}, \\
&\|({{\frak r}ak p}'(\rho_*+\theta)-{{\frak r}ak p}'(\rho_*))\nabla\theta\|_{L_{p,b}(L_q)}
\leq C(\|\theta_0\|_{H^1_6}+\|\partial_t\theta\|_{L_{p,b}(H^1_6)})
\|\nabla\theta\|_{L_{p,b}(L_q)}, \\
&\|({{\frak r}ak p}'(\rho_*+\theta_1)-{{\frak r}ak p}'(\rho_*))\nabla\theta_1
-({{\frak r}ak p}'(\rho_*+\theta_2)-{{\frak r}ak p}'(\rho_*))\nabla\theta_2\|_{L_{p,b}(L_q)}
\\
&\quad
\leq C\{\|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(H^1_6)}
\|\nabla\theta_1\|_{L_{p,b}(L_q)}
+ (\|\theta_0\|_{H^1_6}+\|\partial_t\theta_2\|_{L_{p,b}(H^1_6)})
\|\nabla(\theta_1-\theta_2)\|_{L_{p, b}(L_q)}\},
\end{aligned}\end{equation}
for $q=2, 2+\sigma$ and $6$.
Combining these estimates above, we have
{\bold e}gin{align}
&\|{\bold G}(\theta, {\bold v})\|_{L_{p, b}(L_r)}
\leq C\{(\|\theta_0\|_{L_{2+\sigma}}+\|\partial_t\theta\|_{L_{p,b}(L_{2+\sigma})})
(\|\partial_t{\bold v}\|_{L_{p,b}(L_2)} + \|\nabla\theta\|_{L_{p,b}(L_2)}) \nonumber \\
&\quad + \|\nabla{\bold v}\|_{L_{p,b}(L_{2+\sigma})}(\|\nabla^2{\bold v}\|_{L_{p,b}(L_2)}
+ \|\nabla\theta\|_{L_{p,b}(L_2)})\};
\label{nones:4.5} \\
&\|{\bold G}(\theta_1, {\bold v}_1) - {\bold G}(\theta_2, {\bold v}_2)\|_{L_{p,b}(L_r)}
\leq C\{\|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(L_{2+\sigma})}\|\partial_t{\bold v}_1\|_{L_{p,b}(L_2)}
\nonumber \\
&\quad + (\|\theta_0\|_{L_{2+\sigma}} + \|\partial_t\theta_2\|_{L_{p,b}(L_{2+\sigma})})
\|\partial_t({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(L_2)}
+ \|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(L_2)}\|\nabla^2{\bold v}_1\|_{L_{p, b}(L_{2+\sigma})}
\nonumber \\
&\quad + \|\nabla{\bold v}_2\|_{L_{p,b}(L_{2+\sigma})}\|\nabla^2({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(L_2)}
+ \|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(L_2)}\|\nabla^2{\bold v}_1\|_{L_{p,b}(L_{2+\sigma})}
\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_6)} \nonumber \\
&\quad + \|\nabla^2({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(L_2)}\|\nabla{\bold v}_1\|_{L_{p,b}(L_{2+\sigma})}
+ \|\nabla^2{\bold v}_2\|_{L_{p,b}(L_{2+\sigma})}\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(L_2)}
\nonumber \\
&\quad +\|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(L_2)}
\|\nabla\theta_1\|_{L_{p,b}(L_{2+\sigma})}
+ \|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(L_2)}\|\nabla\theta_1\|_{L_{p,b}(L_{2+\sigma})}
\nonumber \\
&\quad
+ \|\nabla{\bold v}_2\|_{L_{p,b}(L_{2+\sigma})}\|\nabla(\theta_1-\theta_2)\|_{L_{p,b}(L_2)}
+(\|\theta_0\|_{L_{2+\sigma}} + \|\partial_t\theta_2\|_{L_{p,b}(L_{2+\sigma})})
\|\nabla(\theta_1-\theta_2)\|_{L_{p,b}(L_2)}\}. \label{nones:4.6}
\end{align}
Finally, we estimate $\|{\bold G}(\theta, {\bold v})\|_{L_{p,b}(L_q)}$ and
$\|{\bold G}(\theta_1, {\bold v}_1) - {\bold G}(\theta_2, {\bold v}_2)\|_{L_{p,b}(L_q)}$
with $q=2$, $2+\sigma$, and $6$.
For this purpose, we use the following estimates:
{\bold e}gin{align*}
\|fg\|_{L_{p, b}(L_q)} &\leq C\|f\|_{L_\infty(H^1_6)}\|g\|_{L_{p, b}(L_q)}, \\
\|fgh\|_{L_{p, b}(L_q)} &\leq C\|f\|_{L_\infty(L_\infty)}
\|g\|_{L_\infty(H^1_6)}\|h\|_{L_{p, b}(L_q)}.
\end{align*}
And then, using \eqref{eq:4.0.1}, \eqref{p-est.1}, \eqref{eq:4.2} and \eqref{eq:4.3},
for $q=2, 2+\sigma$ and $6$ we have
{\bold e}gin{align}
&\|{\bold G}(\theta, {\bold v})\|_{L_{p,b}(L_q)}
\leq C\{(\|\theta_0\|_{H^1_6} + \|\partial_t\theta\|_{L_{p,b}(H^1_6)})
(\|\partial_t{\bold v}\|_{L_{p,b}(L_q)} + \|\nabla\theta\|_{L_{p,b}(L_q)}) \nonumber \\
&\quad +\|\nabla{\bold v}\|_{L_{p,b}(H^1_6)}
(\|\nabla^2{\bold v}\|_{L_{p,b}(L_q)} + \|\nabla\theta\|_{L_{p,b}(L_q)})\};
\label{nones:4.7}\\
&\|{\bold G}(\theta_1, {\bold v}_1) - {\bold G}(\theta_2, {\bold v}_2)\|_{L_{p,b}(L_q)}
\leq C(\|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(H^1_6)}\|\partial_t{\bold v}_1\|_{L_{p,b}(L_q)}
\nonumber \\
&\quad+(\|\theta_0\|_{H^1_6} + \|\partial_t\theta_2\|_{L_{p,b}(H^1_6)})
\|\partial_t({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(L_q)}
+ \|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)}\|\nabla^2{\bold v}_1\|_{L_{p,b}(L_q)}
\nonumber \\
&\quad + \|\nabla{\bold v}_2\|_{L_{p,b}(H^1_6)}\|\nabla^2({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(L_q)}
+\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)}\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_6)}
\|\nabla^2{\bold v}_1\|_{L_{p,b}(L_q)} \nonumber \\
&\quad + \|\nabla^2({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(L_q)}\|\nabla{\bold v}_1\|_{L_{p,b}(H^1_6)}
+ \|\nabla^2{\bold v}_2\|_{L_{p,b}(L_q)}\|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)}
\nonumber \\
&\quad +\|\partial_t(\theta_1-\theta_2)\|_{L_{p,b}(H^1_6)}
\|\nabla\theta_1\|_{L_{p,b}(L_q)}
+ \|\nabla({\bold v}_1-{\bold v}_2)\|_{L_{p,b}(H^1_6)}\|\nabla\theta_1\|_{L_{p,b}(L_q)}
\nonumber \\
&\quad
+ \|\nabla{\bold v}_2\|_{L_{p,b}(H^1_6)}\|\nabla(\theta_1-\theta_2)\|_{L_{p,b}(L_q)}
+(\|\theta_0\|_{H^1_6} + \|\partial_t\theta_2\|_{L_{p,b}(H^1_6)})
\|\nabla(\theta_1-\theta_2)\|_{L_{p,b}(L_q)}).
\label{nones:4.8}
\end{align}
\section{A priori estimates for solutions of linearized equations}
Let ${\mathcal V}_{T, \epsilon} = \{(\theta, {\bold v}) \in {\mathcal U}^1_T\times {\mathcal U}^2_T \mid
E_T(\theta, {\bold v}) \leq \epsilon\}$. For $(\theta, {\bold v}) \in {\mathcal V}_{T, \epsilon}$,
we consider linearized equations:
{\bold e}gin{equation}\label{linearized:5.1}{\bold e}gin{aligned}
\partial_t\eta + \rho_*{\rm div}\, {\bold u} = F(\theta, {\bold v})
&&\quad&\text{in $\Omega \times(0, T)$}, \\
\rho_*\partial_t{\bold u}- {\rm Div}\,(\mu{\bold D}({\bold u}) + \nu{\rm div}\,{\bold u}{\bold I} - {{\frak r}ak p}'(\rho_*)\eta)
= {\bold G}(\theta, {\bold v})&&
\quad&\text{in $\Omega \times(0, T)$}, \\
{\bold u}|_\Gamma=0, \quad (\eta, {\bold u})|_{t=0} = (\theta_0, {\bold v}_0)&&
\quad&\text{in $\Omega$}.
\end{aligned}\end{equation}
We first show that equations \eqref{linearized:5.1} admit unique solutions
$\eta$ and ${\bold u}$ with
{\bold e}gin{equation}\label{eq:5.4}{\bold e}gin{aligned}
\eta &\in H^1_p((0, T), H^1_2(\Omega) \cap H^1_6(\Omega)), \\
{\bold u} &\in H^1_p((0, T), L_2(\Omega)^3 \cap L_6(\Omega)^3)
\cap L_p((0, T), H^2_2(\Omega)^3\cap H^2_6(\Omega)^3)
\end{aligned}\end{equation}
possessing the estimate:
{\bold e}gin{equation}\label{eq:5.5}
E_T(\eta, {\bold u}) \leq C(\epsilon^2 + \epsilon^3)
\end{equation}
with some constant $C$ independent of $T$ and $\epsilon$.
To prove \eqref{eq:5.5}, we divide $\eta$ and ${\bold u}$ into two parts:
$\eta=\eta_1+ \eta_2$ and ${\bold u}={\bold u}_1+{\bold u}_2$, where $\eta_1$ and ${\bold u}_1$
are solutions of time shifted equations:
{\bold e}gin{equation}\label{linearized:5.2}{\bold e}gin{aligned}
\partial_t\eta_1 +\lambda_1\eta_1+ \rho_*{\rm div}\, {\bold u}_1= F(\theta, {\bold v})
&&\quad&\text{in $\Omega \times(0, T)$}, \\
\rho_*(\partial_t{\bold u}_1+ \lambda_1{\bold u}_1)
- {\rm Div}\,(\mu{\bold D}({\bold u}_1) + \nu{\rm div}\,{\bold u}_1{\bold I} - {{\frak r}ak p}'(\rho_*)\eta_1) = {\bold G}(\theta, {\bold v})&&
\quad&\text{in $\Omega \times(0, T)$}, \\
{\bold u}_1|_\Gamma=0, \quad (\eta_1, {\bold u}_1)|_{t=0} = (\theta_0, {\bold v}_0)&&
\quad&\text{in $\Omega$},
\end{aligned}\end{equation}
and $\eta_2$ and ${\bold u}_2$ are solutions to compensation equations:
{\bold e}gin{equation}\label{linearized:5.3}{\bold e}gin{aligned}
\partial_t\eta_2 + \rho_*{\rm div}\, {\bold u}_2= \lambda_1\eta_1
&&\quad&\text{in $\Omega \times(0, T)$}, \\
\rho_*\partial_t{\bold u}_2
- {\rm Div}\,(\mu{\bold D}({\bold u}_2) + \nu{\rm div}\,{\bold u}_2{\bold I} - {{\frak r}ak p}'(\rho_*)\eta_2)
= \rho_*\lambda_1{\bold u}_1&&
\quad&\text{in $\Omega \times(0, T)$}, \\
{\bold u}_2|_\Gamma=0, \quad (\eta_2, {\bold u}_2)|_{t=0} = (0, 0)&&
\quad&\text{in $\Omega$}.
\end{aligned}\end{equation}
We first treat equations \eqref{linearized:5.2}. For this purpose, we use the
result stated in Sect. 3. We consider a resolvent problem corresponding to
equations \eqref{linearized:5.1} given as follows:
{\bold e}gin{equation}\label{resolvent:5.1}{\bold e}gin{aligned}
\lambda \zeta + \rho_*{\rm div}\, {\bold w} = f &&\quad&\text{in $\Omega$}, \\
\rho_*\lambda {\bold w}- {\rm Div}\,(\mu{\bold D}({\bold w}) + \nu{\rm div}\,{\bold w}{\bold I} - {{\frak r}ak p}'(\rho_*)\zeta) = {\bold g}&&
\quad&\text{in $\Omega$}, \\
{\bold w}|_\Gamma=0 &&\quad&\text{on $\Gamma$}.
\end{aligned}\end{equation}
Enomoto and Shibata \cite{ES1} proved the existence
of ${\mathcal R}$ bounded solution operators
associated with \eqref{resolvent:5.1}. Namely, we know the following theorem.
{\bold e}gin{thm} \label{thm:5.1}
Let $\Omega$ be a uniform $C^2$ domain in
${\Bbb R}^N$. Let $0 < \omega < \pi/2$ and $1 < q < \infty$. Set
$H^{1,0}_q(\Omega) = H^1_q(\Omega)\times L_q(\Omega)^3$
and $H^{1,2}_q(\Omega) = H^1_q(\Omega)\times H^2_q(\Omega)^3$.
Then, there exist a large number $\lambda_0 > 0$ and
operator families ${\mathcal P}(\lambda)$ and ${\mathcal S}(\lambda)$ with
$${\mathcal P}(\lambda) \in {\rm Hol}\,(\Sigma_{\omega, \lambda_0},
{\mathcal L}(H^{1,0}_q(\Omega), H^1_q(\Omega))),
\quad
{\mathcal S}(\lambda) \in {\rm Hol}\,(\Sigma_{\omega, \lambda_0},
{\mathcal L}(H^{1,0}_q(\Omega), H^2_q(\Omega)^3))$$
such that for any $\lambda \in \Sigma_{\omega, \lambda_0}$
and $(f, {\bold g}) \in H^{1,0}_q(\Omega)$, $\zeta = {\mathcal P}(\lambda)(f, {\bold g})$
and ${\bold w} = {\mathcal S}(\lambda)(f, {\bold g})$ are unique solutions of Stokes
resolvent problem \eqref{resolvent:5.1} and
{\bold e}gin{align*}
{\mathcal R}_{{\mathcal L}(H^{1,0}_q(\Omega), H^1_q(\Omega))}(\{(\tau\partial_\tau)^\ell
(\lambda^k {\mathcal P}(\lambda)) \mid \lambda \in \Sigma_{\omega, \lambda_0}\})
\leq r_b, \\
{\mathcal R}_{{\mathcal L}(H^{1,0}_q(\Omega), H^{2-j}_q(\Omega)^3)}(\{(\tau\partial_\tau)^\ell
(\lambda^{j/2}{\mathcal S}(\lambda)) \mid \lambda \in \Sigma_{\omega, \lambda_0}\})
\leq r_b
\end{align*}
for $\ell=0,1$, $k=0,1$ and $j = 0,1,2$.
\end{thm}
In view of Theorem \ref{thm:5.1} and consideration in Sect. 3,
there exists a continuous analytic semigroup
$\{S(t)\}_{t\geq 0}$ associated with equations \eqref{linearized:5.2} such
that
{\bold e}gin{equation}\label{exp:5.1}
\|S(t)(f, {\bold g})\|_{H^{1,0}_q(\Omega)} \leq C_qe^{-\lambda_2t}\|(f, {\bold g})\|_{H^{1,0}_q(\Omega)}
\end{equation}
for any $t> 0$ and $(f, {\bold g}) \in H^{1,0}_q(\Omega)$ with some constant
$\lambda_2 > 0$. Moreover, from
Theorem \ref{max:thm.2*} we have the following theorem.
{\bold e}gin{thm}\label{thm:exp:5.1} Let $1<p, q < \infty$.
Let $b \geq 0$. Then, there exists a large constant
$\lambda_1 > 0$ such that for any $(f, {\bold g})$ with
$<t>^b(f, {\bold g}) \in L_p({\Bbb R}, H^{1,0}_q)$ and
initial data $(\theta_0, {\bold v}_0) \in H^1_q(\Omega)
\times B^{2(1-1/p)}_{q,p}(\Omega)^3$ satisfying the compatibility condition:
${\bold v}_0|_\Gamma=0$, problem:
{\bold e}gin{equation}\label{linearized:5.4}{\bold e}gin{aligned}
\partial_t\rho +\lambda_1\rho+ \rho_*{\rm div}\, {\bold w}= f
&&\quad&\text{in $\Omega \times(0, T)$}, \\
\rho_*(\partial_t{\bold w}+ \lambda_1{\bold w})
- {\rm Div}\,(\mu{\bold D}({\bold w}) + \nu{\rm div}\,{\bold w}{\bold I} - {{\frak r}ak p}'(\rho_*)\rho) = {\bold g}&&
\quad&\text{in $\Omega \times(0, T)$}, \\
{\bold w}|_\Gamma=0, \quad (\rho, {\bold w})|_{t=0} = (\theta_0, {\bold v}_0)&&
\quad&\text{in $\Omega$},
\end{aligned}\end{equation}
admits unique solutions $\rho\in H^1_p((0, T), H^1_q(\Omega))$
and ${\bold w}\in H^1_p((0, T), L_q(\Omega)^3) \cap L_p((0, T), H^2_q(\Omega)^3)$
possessing the estimate:
{\bold e}gin{align*}
&\|<t>^b(\rho, \partial_t\rho)\|_{L_p((0, T), H^1_q(\Omega))}
+ \|<t>^b\partial_t{\bold w}\|_{L_p((0, T), L_q(\Omega))}
+ \|<t>^b{\bold w}\|_{L_p((0, T), H^2_q(\Omega))} \\
&\quad \leq C(\|\theta_0\|_{H^1_q(\Omega)} + \|{\bold v}_0\|_{B^{2(1-1/p)}_{q, p}(\Omega)}
+ \|<t>^b(f, {\bold g})\|_{L_p((0, T), H^{1,0}_q(\Omega))}).
\end{align*}
Here, $C$ is a constant independent of $T>0$.
\end{thm}
Applying Duhamel's principle to equations \eqref{linearized:5.2} yields that
$$(\eta_1, {\bold u}_1) = S(t)(\theta_0, {\bold v}_0) + \int^t_0 S(t-s)
(F(\theta, {\bold v}), {\bold G}(\theta,{\bold v}))(\cdot, s)\,ds.$$
Thus, by \eqref{exp:5.1}, we have
{\bold e}gin{equation}\label{eq:5.6}{\bold e}gin{aligned}
&\|<t>^b(\eta_1, {\bold u}_1)\|_{L_p((0, T), H^{1,0}_r(\Omega))} \\
&\quad \leq C
(\|(\theta_0, {\bold v}_0)\|_{H^{1,0}_r(\Omega)} + \|<t>^b(F(\theta, {\bold v}), {\bold G}(\theta, {\bold v}))
\|_{L_p((0, T), H^{1,0}_r(\Omega))}).
\end{aligned}\end{equation}
In fact, setting $I(t) = \int^t_0 S(t-s)(F(\theta, {\bold v}), {\bold G}(\theta, {\bold v}))(\cdot, s)
\,ds$, by \eqref{exp:5.1} we have
{\bold e}gin{align*}
<t>^b\|I(t)\|_{H^{1,0}_r(\Omega)} &\leq C_r
<t>^b\Bigl\{\int^{t/2}_0 + \int_{t/2}^t \Bigr\}e^{-\lambda_2(t-s)}
\|(F(\theta, {\bold v}), {\bold G}(\theta,{\bold v}))(\cdot, s)\|_{H^{1,0}_r(\Omega)}\,ds \\
&= C_r (II(t)+ III(t)).
\end{align*}
In $II(t)$, using $e^{-\lambda_2(t-s)} \leq e^{-(\lambda_2/2)t}$ as follows from
$0 < s < t/2$, by H\"older's inequality we have
$$
II(t) \leq <t>^be^{-(\lambda_2/2)t}\Bigl(\int^\infty_0<s>^{-p'b}\,ds\Bigr)^{1/p'}
\Bigl(\int^T_0 (<s>^b\|(F(\theta, {\bold v}),
{\bold G}(\theta,{\bold v}))(\cdot, s)\|_{H^{1,0}_r(\Omega)})^p\,ds\Bigr)^{1/p},$$
and so we have
$$\Bigl(\int^T_0II(t)^p\,dt\Bigr)^{1/p}
\leq C\Bigl(\int^\infty_0(<t>^{b}e^{-(\lambda_2/2)t})^p\,dt\Bigr)^{1/p}
\|<t>^b(F(\theta, {\bold v}), {\bold G}(\theta, {\bold v}))\|_{L_p((0, T), H^{1,0}_r(\Omega))}.
$$
On the other hand, using $<t>^b \leq C_b<s>^b$ for $t/2 < s < t$,
by H\"older's inequality we have
$$
III(t) \leq C_b\Bigl(\int^t_{t/2} e^{-\lambda_2(t-s)}\,ds\Bigr)^{1/p'}
\Bigl(\int^t_{t/2}e^{-\lambda_2(t-s)}(<s>^b
\|(F(\theta, {\bold v}), {\bold G}(\theta, {\bold v}))(\cdot, s)\|_{H^{1,0}_r(\Omega)})^p\,
ds\Bigr)^{1/p}.
$$
Setting $L = \int^\infty_0 e^{-\lambda_2 t}\,dt$, by Fubini's theorem we have
$$\Bigl(\int^T_0III(t)^p\,dt\Bigr)^{1/p}
\leq C_b L \|<t>^b(F(\theta, {\bold v}), {\bold G}(\theta, {\bold v}))\|_{L_p((0, T), H^{1,0}_r(\Omega))}.
$$
Combining these two estimates yields \eqref{eq:5.6}.
Moreover, applying Theorem \ref{thm:exp:5.1} to equations \eqref{linearized:5.2}
yields that
{\bold e}gin{equation}\label{eq:5.7}{\bold e}gin{aligned}
&\|<t>^b\partial_t(\eta_1, {\bold u}_1)\|_{L_p((0, T), H^{1,0}_q(\Omega))}
+ \|<t>^b(\eta_1, {\bold u}_1)\|_{L_p((0, T), H^{1,2}_q(\Omega))} \\
&\quad \leq C_q(\|\theta_0\|_{H^1_q(\Omega)} + \|{\bold v}_0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
+ \|<t>^b(F(\theta, {\bold v}), {\bold G}(\theta, {\bold v}))\|_{L_p((0, T), H^{1,0}_q(\Omega))})
\end{aligned}\end{equation}
for $q=2$, $2+\sigma$ and $6$. Recalling that
$\|(\theta_0, {\bold v}_0)\|_{{\mathcal I}} \leq \epsilon^2$, by \eqref{nones:4.1},
\eqref{nones:4.3}, \eqref{nones:4.5}, \eqref{nones:4.7},
\eqref{eq:5.6}, and \eqref{eq:5.7}, we have
{\bold e}gin{equation}\label{eq:5.8}{\bold e}gin{aligned}
&
\sum_{q=2, 2+\sigma, 6}(\|<t>^b\partial_t(\eta_1, {\bold u}_1)\|_{L_p((0, T), H^{1,0}_q(\Omega))}
+ \|<t>^b(\eta_1, {\bold u}_1)\|_{L_p((0, T), H^{1,2}_q(\Omega))} )
\\
&\quad + \|<t>^b(\eta_1, {\bold u}_1)\|_{L_p((0, T), H^{1,0}_r(\Omega))}
\leq C(\epsilon^2 + \epsilon^3 + \epsilon^4).
\end{aligned}\end{equation}
Here, $C$ is a constant independent of $T$ and $\epsilon$.
By the trace method of real interpolation theory,
{\bold e}gin{align*}
&\|<t>^b{\bold u}_1\|_{L_\infty((0, T), L_q(\Omega))} \\
&\quad \leq C(\|{\bold v}_0\|_{B^{2(1-1/p)}_{q,p}(\Omega)} +
\|<t>^b\partial_t{\bold u}_1\|_{L_p((0, T), L_q(\Omega))}
+ \|<t>^b{\bold u}_1\|_{L_p((0, T), H^2_q(\Omega))}),
\end{align*}
and so by \eqref{eq:5.8} and $\|(\theta_0, {\bold v}_0)\|_{\mathcal I} \leq \epsilon^2$,
{\bold e}gin{equation}\label{add.est.1}
\sum_{q=2, 2+\sigma, 6}\|<t>^b{\bold u}_1\|_{L_\infty((0, T), L_q(\Omega))}
\leq C(\epsilon^2 + \epsilon^3 + \epsilon^4).
\end{equation}
We now estimate $\eta_2$ and ${\bold u}_2$. Let $\{T(t)\}_{t\geq 0}$ be
a continuous analytic semigroup associated with problem:
{\bold e}gin{equation}\label{modeleq:5.1}{\bold e}gin{aligned}
\partial_t \rho + \rho_*{\rm div}\, {\bold v} = 0 &&\quad&\text{in $\Omega\times(0, \infty)$}, \\
\rho_*\partial_t {\bold v}- {\rm Div}\,(\mu{\bold D}({\bold v}) + \nu{\rm div}\,{\bold v}{\bold I} - {{\frak r}ak p}'(\rho_*)\rho) = 0&&
\quad&\text{in $\Omega\times(0, \infty)$}, \\
{\bold v}|_\Gamma=0, \quad (\rho, {\bold v})|_{t=0}=(\theta_0, {\bold v}_0)&&\quad
&\text{in $\Omega$}.
\end{aligned}\end{equation}
By Theorem \ref{thm:5.1} and consideration in Sect. 3, we know
the existence of $C^0$ analytic semigroup $\{T(t)\}_{t\geq 0}$ associated with
\eqref{modeleq:5.1}. Moreover, by Enomoto and Shibata \cite{ES2},
we know that $\{T(t)\}_{t\geq 0}$ possesses the following $L_p$-$L_q$
decay estimates:
Setting $(\theta, {\bold v}) = T(t)(f, {\bold g})$, we have
{\bold e}gin{equation}\label{lp-lq}{\bold e}gin{aligned}
\|(\theta, {\bold v})(\cdot, t)\|_{L_p} &\leq C_{p,q}t^{-{\frak r}ac32\left({\frak r}ac1q-{\frak r}ac1p\right)}
[(f, {\bold g})]_{p,q}\quad (t>1);\\
\|\nabla (\theta, {\bold v})(\cdot, t)\|_{L_p} &\leq C_{p,q}t^{-\sigma(p,q)}
[(f, {\bold g})]_{p,q} \quad(t>1);\\
\|\nabla^2{\bold v}(\cdot, t)\|_{L_p} &\leq C_{p,q}t^{-{\frak r}ac{3}{2q}}[(f, {\bold g})]_{p,q}
\quad(t > 1); \\
\|\partial_t (\theta, {\bold v})(\cdot, t)\|_{L_p} &\leq Ct^{-{\frak r}ac{3}{2q}}
[(f, {\bold g})]_{p,q}\quad(t > 1).
\end{aligned}\end{equation}
Here, $1 \leq q \leq 2 \leq p < \infty$, $[(f, {\bold g})]_{p,q}
= \|(f, {\bold g})\|_{H^{1,0}_p} + \|(f, {\bold g})\|_{L_q}$, $H^{m,n}_p = H^m_p\times H^n_p
\ni (f, {\bold g})$ and
$$\sigma(p,q) = {\frak r}ac32\left({\frak r}ac1q- {\frak r}ac1p\right)+ {\frak r}ac12
\quad(2 \leq p \leq 3), \quad \text{and}\quad
{\frak r}ac{3}{2q}\quad(p \geq 3).
$$
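Since the data fed into $T(t)$ below are measured with the source exponent $q=r$, it may help to
record the corresponding rates (a quick computation using $1/r = 1/2 + 1/(2+\sigma)$):
{\bold e}gin{align*}
\sigma(2, r) &= {\frak r}ac32\Bigl({\frak r}ac1r-{\frak r}ac12\Bigr)+{\frak r}ac12
= {\frak r}ac12 + {\frak r}ac{3}{2(2+\sigma)} = {\frak r}ac{5+\sigma}{2(2+\sigma)}, \\
\sigma(2+\sigma, r) &= {\frak r}ac32\Bigl({\frak r}ac1r-{\frak r}ac1{2+\sigma}\Bigr)+{\frak r}ac12 = {\frak r}ac54,
\qquad {\frak r}ac{3}{2r} = {\frak r}ac34 + {\frak r}ac{3}{2(2+\sigma)},
\end{align*}
and the smallest of these values is $\sigma(2, r)$, which is the number $\ell$ introduced below.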
Moreover, we use
{\bold e}gin{equation}\label{lp-lq*}
\|(\theta, {\bold v})(\cdot, t)\|_{H^{1,2}_q} \leq M\|(f, {\bold g})\|_{H^{1,2}_q}
\quad(0 < t < 2)
\end{equation}
which follows from standard estimates for a continuous analytic
semigroup.
Applying Duhamel's principle to equations \eqref{linearized:5.3}
yields that
$$(\eta_2, {\bold u}_2) = \lambda_1\int^t_0T(t-s)(\eta_1, \rho_*{\bold u}_1)(\cdot, s)\,ds.
$$
Let
$$
[[(\eta_1, {\bold u}_1)(\cdot, s)]] = \|(\eta_1, {\bold u}_1)(\cdot, s)\|_{H^{1,0}_r(\Omega)}
+ \sum_{q=2, 2+\sigma, 6}
(\|(\eta_1, {\bold u}_1)(\cdot, s)\|_{H^{1,2}_q(\Omega)}
+ \|\partial_t(\eta_1, {\bold u}_1)(\cdot, s)\|_{H^{1,0}_q(\Omega)}).
$$
We set
$$\tilde E_T(\eta_1, {\bold u}_1) := \Bigl(\int^T_0(<t>^b[[(\eta_1, {\bold u}_1)(\cdot, t)]])^p\,
dt\Bigr)^{1/p},$$
and then, by \eqref{eq:5.8} we have
{\bold e}gin{equation}\label{eq:5.9}
\tilde E_T(\eta_1, {\bold u}_1) \leq C(\epsilon^2 + \epsilon^3 + \epsilon^4).
\end{equation}
First we consider the case: $2 \leq t \leq T$.
Notice that
{\bold e}gin{align*}
&(1/2) + (3/2)(1/2 + 1/(2+\sigma) - 1/2) =
(3/2)(1/2+1/(2+\sigma) -1/6) \\
&\leq
(1/2) + (3/2)(1/2 + 1/(2+\sigma) - 1/(2+\sigma))
\leq 3/(2r),
\end{align*}
where $1/r = 1/2 + 1/(2+\sigma)$. Let
$\ell= (1/2) + (3/2)(1/2 + 1/(2+\sigma) - 1/2) = (5+\sigma)/(4+2\sigma)$,
and then all the decay rates used below, which are obtained by \eqref{lp-lq},
are greater than or equal to $\ell$, so that $(t-s)^{-\ell}$ is the slowest decay.
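Two elementary properties of $\ell$ used below (a quick check):
$\ell = {\frak r}ac12 + {\frak r}ac{3}{2(2+\sigma)} = {\frak r}ac{5+\sigma}{4+2\sigma}$, and
$\ell > 1$ if and only if $\sigma < 1$; in particular
$\int^\infty_1 s^{-\ell}\,ds < \infty$, which is what is needed for the constant $L$
appearing in the estimate of $II_{q}(t)$ below.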
Let $(\eta_3, {\bold u}_3) = (\nabla\eta_2, {\bold a}r\nabla^1 \nabla{\bold u}_2)$
when $q=2$ or $2+\sigma$, and $(\eta_3, {\bold u}_3) =
({\bold a}r\nabla^1\eta_2, {\bold a}r\nabla^2{\bold u}_2)$ when $q=6$.
Here, ${\bold a}r\nabla^m f = (\partial_x^\alpha f \mid |\alpha| \leq m)$.
And then,
{\bold e}gin{align*}
&\|(\eta_3, {\bold u}_3)(\cdot, t)\|_{L_q(\Omega)} \\
&\leq C\Bigl\{\int^{t/2}_0 + \int^{t-1}_{t/2} + \int^t_{t-1}\Bigr\}
\|(\nabla, {\bold a}r\nabla^1\nabla)\enskip\text{or}\enskip ({\bold a}r\nabla^1, {\bold a}r\nabla^2)
T(t-s)(\eta_1, {\bold u}_1)(\cdot, s)\|_{L_q(\Omega)}\,ds \\
& = I_{q} + II_{q} + III_{q}.
\end{align*}
By \eqref{lp-lq}, we have
{\bold e}gin{align*}
I_{q}(t) &\leq C\int^{t/2}_0
(t-s)^{-\ell}
[[(\eta_1, {\bold u}_1)(\cdot, s)]]\,ds \\
&\leq C(t/2)^{-\ell}\int^{t/2}_0<s>^{-b}<s>^b
[[(\eta_1, {\bold u}_1)(\cdot, s)]]\,ds \\
& \leq Ct^{-\ell}
\Bigl(\int^T_0<s>^{-bp'}\,ds\Bigr)^{1/p'}\Bigl(\int^T_0
(<s>^b[[(\eta_1, {\bold u}_1)(\cdot, s)]])^p\,ds\Bigr)^{1/p} \\
& \leq Ct^{-\ell}\tilde E_T(\eta_1, {\bold u}_1).
\end{align*}
Recalling that $b = (3-\sigma)/(2(2+\sigma))$ when $p=2$ and
$b=(1-\sigma)/(2(2+\sigma))$ when $p=1+\sigma$, we see that
$\ell- b = (2+2\sigma)/(2(2+\sigma)) > 1/2$ when $p=2$ and $
\ell-b = 1$ when $p=1+\sigma$.
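For the reader's convenience, here is the elementary arithmetic behind these values; recall that $\ell = (5+\sigma)/(4+2\sigma) = (5+\sigma)/(2(2+\sigma))$:
$$
\ell - b = \frac{(5+\sigma)-(3-\sigma)}{2(2+\sigma)} = \frac{2+2\sigma}{2(2+\sigma)} > \frac12 \quad(p=2),
\qquad
\ell - b = \frac{(5+\sigma)-(1-\sigma)}{2(2+\sigma)} = \frac{4+2\sigma}{2(2+\sigma)} = 1 \quad(p=1+\sigma).
$$
In both cases $(\ell - b)p > 1$, so that $\int^\infty_1 t^{-(\ell-b)p}\,dt < \infty$; together with the condition $bp' > 1$, which keeps $\bigl(\int^T_0<s>^{-bp'}\,ds\bigr)^{1/p'}$ bounded independently of $T$, this is exactly what is used in the next estimate.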
Thus, we have
$$\int^T_1(<t>^bI_{q}(t))^p\,dt
\leq C\tilde E_T(\eta_1, {\bold u}_1)^p.
$$
We next estimate $II_q(t)$. By \eqref{lp-lq} we have
$$II_{q}(t) \leq C\int^{t-1}_{t/2}(t-s)^{-\ell}
[[(\eta_1, {\bold u}_1)(\cdot, s)]]\,ds.$$
By H\"older's inequality and $<t>^b \leq C_b<s>^b$ for
$s \in (t/2, t-1)$, we have
{\bold e}gin{align*}
<t>^bII_{q}(t) & \leq
C\int^{t-1}_{t/2}(t-s)^{-\ell/p'}(t-s)^{-\ell/p}<s>^b [[(\eta_1, {\bold u}_1)(\cdot, s)]]\,
ds\\
&\leq C\Bigl(\int^{t-1}_{t/2}(t-s)^{-\ell}\,ds\Bigr)^{1/p'}
\Bigl(\int^{t-1}_{t/2}(t-s)^{-\ell}(<s>^b[[(\eta_1, {\bold u}_1)(\cdot, s)]]^p)\,ds\Bigr)^{1/p}.
\end{align*}
Setting $\int^\infty_1 s^{-\ell}\,ds =L$, by Fubini's theorem we have
{\bold e}gin{align*}
\int^T_2(<t>^b II_{q}(t))^p\,dt
&\leq CL^{p/p'}\int^{T-1}_1(<s>^b[[(\eta_1, {\bold u}_1)(\cdot, s)]])^p
\Bigl(\int^{2s}_{s+1} (t-s)^{-\ell}\,dt\Bigr) \,ds \\
& \leq CL^p\tilde E_T(\eta_1, {\bold u}_1)^p.
\end{align*}
Using the standard estimate \eqref{lp-lq*} for the continuous analytic semigroup, we have
{\bold e}gin{align*}
III_{q}(t) &\leq C\int^t_{t-1}
\|(\eta_1, {\bold u}_1)(\cdot, s)\|_{H^{1,2}_{q}}\,ds
\leq C\int^t_{t-1} [[(\eta_1, {\bold u}_1)(\cdot, s)]]\,ds.
\end{align*}
Thus, employing the same argument as in estimating $II_{q}(t)$,
we have
$$\int^T_2(<t>^b III_{q}(t))^p\,dt
\leq C\tilde E_T(\eta_1, {\bold u}_1)^p.
$$
Combining the three estimates above yields that
{\bold e}gin{equation}\label{eq:5.10}
\int^T_2(<t>^b\|(\eta_3, {\bold u}_3)(\cdot, t)\|_{L_q(\Omega)})^p\,dt
\leq C\tilde E_T(\eta_1, {\bold u}_1)^p,
\end{equation}
when $T > 2$.
For $0 < t < \min(2, T)$, using \eqref{lp-lq*}
and employing the same argument as in estimating
$III_q(t)$ above, we have
$$\int^{\min(2, T)}_0(<t>^b\|(\eta_3, {\bold u}_3)(\cdot, t)\|_{L_q(\Omega)})^p\,dt
\leq C\tilde E_T(\eta_1, {\bold u}_1)^p,
$$
which, combined with \eqref{eq:5.10}, yields that
{\bold e}gin{equation}\label{eq:5.11}
\int^T_0(<t>^b\|(\eta_3, {\bold u}_3)(\cdot, t)\|_{L_q(\Omega)})^p\,dt
\leq C\tilde E_T(\eta_1, {\bold u}_1)^p
\end{equation}
for $q=2$, $2+\sigma$, and $6$.
Since
$$\partial_t(\eta_2, {\bold u}_2) = \lambda_1(\eta_1, \rho_*{\bold u}_1)(\cdot, t)
+\lambda_1\int^t_0\partial_tT(t-s)(\eta_1, \rho_*{\bold u}_1)(\cdot,s)\,ds, $$
employing the same argument as in proving \eqref{eq:5.11},
we have
{\bold e}gin{equation}\label{eq:5.12}
\int^T_0(<t>^b\|\partial_t(\eta_2, {\bold u}_2)(\cdot, t)\|_{L_q(\Omega)})^p\,dt
\leq C\tilde E_T(\eta_1, {\bold u}_1)^p
\end{equation}
for $q=2$, $2+\sigma$, and $6$.
We now estimate $\sup_{2 < t < T}<t>^b \|(\eta_2, {\bold u}_2)\|_{L_q(\Omega)}$
for $q=2, 2+\sigma$ and $6$.
Let $q=2$, $2+\sigma$ and $6$ in what follows. For $2 < t < T$,
{\bold e}gin{align*}
\|(\eta_2, {\bold u}_2)(\cdot, t)\|_{L_q(\Omega)}
&\leq C\Bigl\{\int^{t/2}_0 + \int^{t-1}_{t/2} + \int^t_{t-1}\Bigr\}
\|T(t-s)(\eta_1, {\bold u}_1)(\cdot, s)\|_{L_q(\Omega)}\,ds \\
& = I_{q, 0} + II_{q, 0} + III_{q, 0}.
\end{align*}
By \eqref{lp-lq}, we have
{\bold e}gin{align*}
I_{q,0}(t) &\leq C\int^{t/2}_0(t-s)^{-3/2(2+\sigma)}[[(\eta_1, {\bold u}_1)(\cdot, s)]]\,ds
\\
&\leq C(t/2)^{-3/2(2+\sigma)}\int^{t/2}_0<s>^{-b}<s>^b[[(\eta_1, {\bold u}_1)(\cdot, s)]]\,ds
\\
&\leq Ct^{-3/2(2+\sigma)}\Bigl(\int^\infty_0<s>^{-p'b}\,ds\Bigr)^{1/p'}
\tilde E_T(\eta_1, {\bold u}_1).
\end{align*}
Note that $3/2(2+\sigma) = (3/2)(1/r-1/2) < (3/2)(1/r-1/(2+\sigma))<(3/2)(1/r-1/6)$.
By \eqref{lp-lq}, we also have
{\bold e}gin{align*}
II_{q,0}(t) & \leq C\int^{t-1}_{t/2}(t-s)^{-3/2(2+\sigma)}[[(\eta_1, {\bold u}_1)(\cdot, s)]]\,ds
\\
& \leq C\Bigl(\int^{t-1}_{t/2}((t-s)^{-3/2(2+\sigma)}<s>^{-b})^{p'}\,ds\Bigr)^{1/p'}
\Bigl(\int^{t-1}_{t/2}(<s>^b[[(\eta_1, {\bold u}_1)(\cdot, s)]])^p\,ds\Bigr)^{1/p}\\
&\leq C<t>^{-b}\tilde E_T(\eta_1, {\bold u}_1),
\end{align*}
where we have used $3p'/2(2+\sigma) > 1$. By \eqref{lp-lq*}, we have
{\bold e}gin{align*}III_{q,0}(t) & \leq C\int^t_{t-1}[[(\eta_1, {\bold u}_1)(\cdot, s)]]\,ds\\
& \leq C<t>^{-b}\int^t_{t-1}<s>^b[[(\eta_1, {\bold u}_1)(\cdot, s)]]\,ds \\
&\leq C<t>^{-b}\Bigl(\int^t_{t-1}\,ds\Bigr)^{1/p'}\tilde E_T(\eta_1, {\bold u}_1).
\end{align*}
Since $b< 3/2(2+\sigma)$,
combining the estimates above yields that
{\bold e}gin{equation}\label{eq:5.13}
\sup_{2 < t < T}<t>^b\|(\eta_2, {\bold u}_2)(\cdot, t)\|_{L_q(\Omega)}
\leq C\tilde E_T(\eta_1, {\bold u}_1).
\end{equation}
For $0 < t < \min(2, T)$, by the standard estimate
\eqref{lp-lq*} for the continuous analytic semigroup, we have
$$
\sup_{0 < t < \min(2, T)}<t>^b\|(\eta_2, {\bold u}_2)(\cdot, t)\|_{L_q(\Omega)}
\leq C\tilde E_T(\eta_1, {\bold u}_1),
$$
which, combined with \eqref{eq:5.13}, yields that
{\bold e}gin{equation}\label{eq:5.14*}
\|<t>^b(\eta_2, {\bold u}_2)(\cdot, t)\|_{L_\infty((0, T), L_q(\Omega))}
\leq C\tilde E_T(\eta_1, {\bold u}_1)
\end{equation}
for $q=2, 2+\sigma$ and $6$.
Recalling that
$\eta=\eta_1+\eta_2$ and ${\bold u}={\bold u}_1+{\bold u}_2$,
noting that
$E_T(\eta_1, {\bold u}_1) \leq C(\tilde E_T(\eta_1, {\bold u}_1)+\|(\theta_0, {\bold v}_0)\|_{\mathcal I})$
as follows from \eqref{add.est.1}, and combining
\eqref{eq:5.11}, \eqref{eq:5.12}, \eqref{eq:5.14*}, and \eqref{eq:5.9} yield that
{\bold e}gin{equation}\label{eq:5.14}
E_T(\eta, {\bold u}) \leq C(\epsilon^2 + \epsilon^3 + \epsilon^4).
\end{equation}
If we choose $\epsilon > 0$ so small that
$C(\epsilon + \epsilon^2 + \epsilon^3) < 1$ in \eqref{eq:5.14}, we have
$E_T(\eta, {\bold u}) \leq \epsilon$. Moreover, by \eqref{eq:4.3}
$$\sup_{t \in (0, T)} \|\eta(\cdot, t)\|_{L_\infty(\Omega)}
\leq C(\|\eta_0\|_{H^1_6} + \|\partial_t\eta\|_{L_p((0, T), H^1_6(\Omega))})
\leq C(\epsilon^2 +\epsilon^3 + \epsilon^4).$$
Thus, choosing $\epsilon > 0$ so small that
$C(\epsilon^2 +\epsilon^3 + \epsilon^4) \leq \rho_*/2$,
we see that
$\sup_{t \in (0, T)} \|\eta(\cdot, t)\|_{L_\infty(\Omega)} \leq \rho_*/2$.
And also,
$$\int^T_0\|\nabla{\bold u}(\cdot, s)\|_{L_\infty(\Omega)}\,ds
\leq \Bigl(\int^\infty_0<s>^{-p'b}\,ds\Bigr)^{1/p'}
\|<t>^b\nabla{\bold u}\|_{L_p((0, T), H^1_6(\Omega))}
\leq C_{p', b}(\epsilon^2 + \epsilon^3 + \epsilon^4).
$$
Thus, choosing $\epsilon > 0$ so small that
$C_{p', b}(\epsilon^2 + \epsilon^3 + \epsilon^4) \leq \delta$,
we see that
$\int^T_0\|\nabla{\bold u}(\cdot, s)\|_{L_\infty(\Omega)}\,ds \leq \delta$.
From the considerations above, it follows that $(\eta, {\bold u})
\in {\mathcal V}_{T, \epsilon}$. Let ${\mathcal S}$ be the operator defined by
${\mathcal S}(\theta, {\bold v}) = (\eta, {\bold u})$ for $(\theta, {\bold v})
\in {\mathcal V}_{T, \epsilon}$; then ${\mathcal S}$ maps ${\mathcal V}_{T, \epsilon}$
into itself.
We now show that ${\mathcal S}$ is a contraction map. Let
$(\theta_i, {\bold v}_i) \in {\mathcal V}_{T, \epsilon}$ ($i=1,2$) and set
$(\eta, {\bold u}) = (\eta_1, {\bold u}_1) - (\eta_2, {\bold u}_2) = {\mathcal S}(\theta_1, {\bold v}_1)-
{\mathcal S}(\theta_2, {\bold v}_2)$, and $F = F(\theta_1, {\bold v}_1)-F(\theta_2, {\bold v}_2)$ and
${\bold G} = {\bold G}(\theta_1, {\bold v}_1) - {\bold G}(\theta_2, {\bold v}_2)$. And then,
from \eqref{linearized:5.1} it follows that
{\bold e}gin{equation}\label{difeq:5.1}{\bold e}gin{aligned}
\partial_t\eta + \rho_*{\rm div}\, {\bold u} = F
&&\quad&\text{in $\Omega \times(0, T)$}, \\
\rho_*\partial_t{\bold u}- {\rm Div}\,(\mu{\bold D}({\bold u}) + \nu{\rm div}\,{\bold u}{\bold I} - {{\frak r}ak p}'(\rho_*)\eta)
= {\bold G}&&
\quad&\text{in $\Omega \times(0, T)$}, \\
{\bold u}|_\Gamma=0, \quad (\eta, {\bold u})|_{t=0} = (0, 0)&&
\quad&\text{in $\Omega$}.
\end{aligned}\end{equation}
By \eqref{nones:4.2}, \eqref{nones:4.4}, \eqref{nones:4.6}, and \eqref{nones:4.8},
we have
$$\|(F, {\bold G})\|_{L_p((0, T), H^{1,0}_r(\Omega))}
+ \sum_{q=2, 2+\sigma, 6}\|(F, {\bold G})\|_{L_p((0, T), H^{1,0}_q(\Omega))}
\leq C(\epsilon + \epsilon^2+ \epsilon^3)E_T((\theta_1, {\bold v}_1)-(\theta_2, {\bold v}_2)).
$$
Applying the same argument as in proving \eqref{eq:5.14} to equations
\eqref{difeq:5.1} and recalling $(\eta, {\bold u}) = {\mathcal S}(\theta_1, {\bold v}_1)-
{\mathcal S}(\theta_2, {\bold v}_2)$,
we have
$$E_T( {\mathcal S}(\theta_1, {\bold v}_1)-
{\mathcal S}(\theta_2, {\bold v}_2))
\leq C(\epsilon + \epsilon^2+ \epsilon^3)E_T((\theta_1, {\bold v}_1)-(\theta_2, {\bold v}_2)),
$$
for some constant $C$ independent of $\epsilon$ and $T$. Thus, choosing
$\epsilon > 0$ so small that $C(\epsilon + \epsilon^2+ \epsilon^3) < 1$, we see
that ${\mathcal S}$ is a contraction map on ${\mathcal V}_{T, \epsilon}$. Since the
contraction mapping principle also yields the uniqueness of solutions in
${\mathcal V}_{T, \epsilon}$, the proof of Theorem \ref{mainthm:2} is complete.
\section{A proof of Theorem \ref{thm:main0}}
We shall prove Theorem \ref{thm:main0} with the help of Theorem \ref{mainthm:2}.
In what follows, let $b$ and $p$ be the constants given in Theorem \ref{mainthm:2},
and $q=2, 2+\sigma$ and $6$.
As was stated in Sect.~2, the Lagrange transform \eqref{lag:1} gives a
$C^{1+\omega}$ ($\omega \in (0, 1/2)$) diffeomorphism on $\Omega$ and
$dx= \det({\bold I} + {\bold k})\,dy$, where $\{x\}$ and $\{y\}$
denote the Euler coordinates and the Lagrange coordinates on $\Omega$, respectively,
and ${\bold k} = \int^t_0\nabla {\bold u}(\cdot, s)\,ds$.
By \eqref{lag:2}, $\|{\bold k}\|_{L_\infty(\Omega)} \leq \delta < 1$. In particular,
choosing $\delta>0$ smaller if necessary, we may assume that
$C^{-1}\leq \det({\bold I} + \int^t_0\nabla{\bold u}(\cdot, s) \,ds)\leq C$ with some constant
$C > 0$ for any
$(x, t) \in \Omega\times(0, T)$. Let $y = X_t(x)$ be the inverse map of the Lagrange transform \eqref{lag:1}, and set $\theta(x, t) = \eta(X_t(x), t)$ and ${\bold v}(x, t)
= {\bold u}(X_t(x), t)$. We have
$$\|(\theta, {\bold v})\|_{L_q(\Omega)} \leq C\|(\eta, {\bold u})\|_{L_q(\Omega)}.$$
Noting that $(\eta, {\bold u})(y, t) = (\theta, {\bold v})(y+\int^t_0{\bold u}(y, s)\,ds, t)$,
the chain rule of composite functions yields that
{\bold e}gin{align*}
\|\nabla(\theta, {\bold v})\|_{L_q(\Omega)} \leq C
(1-\|{\bold k}\|_{L_\infty(\Omega)})^{-1}\|\nabla(\eta, {\bold u})\|_{L_q(\Omega)}; \\
\|\nabla^2{\bold v}\|_{L_q(\Omega)} \leq C\bigl((1-\|{\bold k}\|_{L_\infty(\Omega)})^{-2}
\|\nabla^2{\bold u}\|_{L_q(\Omega)} + (1-\|{\bold k}\|_{L_\infty(\Omega)})^{-1}
\|\nabla{\bold k}\|_{L_q(\Omega)}\|\nabla{\bold u}\|_{L_\infty(\Omega)}\bigr).
\end{align*}
Thus, using
$\|\nabla{\bold k}\|_{L_q(\Omega)} \leq C\|<t>^b\nabla^2{\bold u}\|_{L_p((0, T), L_q(\Omega))}$
and $\|\nabla{\bold u}\|_{L_\infty(\Omega)}\leq C\|\nabla {\bold u}\|_{H^1_6(\Omega)}$, we have
{\bold e}gin{align*}
\|<t>^b\nabla(\theta, {\bold v})\|_{L_\infty((0, T), L_2(\Omega) \cap L_6(\Omega))}
&\leq C\|<t>^b\nabla(\eta, {\bold u})\|_{L_\infty((0, T), L_2(\Omega) \cap L_6(\Omega))};
\\
\|<t>^b(\theta, {\bold v})\|_{L_p((0, T), L_6(\Omega))}
&\leq C\|<t>^b(\eta, {\bold u})\|_{L_p((0, T), L_6(\Omega))};
\\
\|<t>^b(\theta, {\bold v})\|_{L_\infty((0, T), L_2(\Omega) \cap L_6(\Omega))}
&\leq C\|<t>^b(\eta, {\bold u})\|_{L_\infty((0, T), L_2(\Omega) \cap L_6(\Omega))};
\\
\|<t>^b\nabla^2{\bold v}\|_{L_p((0, T), L_2(\Omega) \cap L_6(\Omega))}
&\leq C(\|<t>^b\nabla^2{\bold u}\|_{L_p((0, T), L_2(\Omega) \cap L_6(\Omega))}\\
&+ \|<t>^b\nabla^2{\bold u}\|_{L_p((0, T), L_q(\Omega))}\|<t>^b\nabla {\bold u}
\|_{L_p((0, T), H^1_6(\Omega))}).
\end{align*}
Since
$\partial_t(\eta, {\bold u})(y, t) = \partial_t[(\theta,{\bold v})(y+ \int^t_0{\bold u}(y, s)\,ds, t)]
= \partial_t(\theta, {\bold v})(x, t) +{\bold u}\cdot \nabla(\theta, {\bold v})(x, t)$,
we have
{\bold e}gin{align*}
\|\partial_t(\theta, {\bold v})\|_{L_q(\Omega)}
\leq C\bigl(\|\partial_t(\eta, {\bold u})\|_{L_q(\Omega)} + \|{\bold u}\|_{L_\infty(\Omega)}
\|\nabla\eta\|_{L_q(\Omega)}
+ \|{\bold u}\|_{L_q(\Omega)}\|\nabla{\bold u}\|_{L_\infty(\Omega)}\bigr).
\end{align*}
Since
$\|\nabla\eta\|_{L_\infty((0, T), L_q(\Omega))}
\leq \|\nabla \theta_0\|_{L_q(\Omega)}
+ C\|<t>^b\partial_t\eta\|_{L_p((0, T), H^1_q(\Omega))}$, we have
{\bold e}gin{align*}
\|<t>^b\partial_t(\theta, {\bold v})\|_{L_p((0, T), L_q(\Omega))}
&\leq C(\|<t>^b\partial_t(\eta, {\bold u})\|_{L_p((0, T), L_q(\Omega))} \\
&+ (\|\nabla \theta_0\|_{L_q(\Omega)}
+ \|<t>^b\partial_t\eta\|_{L_p((0, T), H^1_q(\Omega))})\|<t>^b
{\bold u}\|_{L_p((0, T), H^1_6(\Omega))}\\
&+ \|<t>^b{\bold u}\|_{L_\infty((0, T), L_q(\Omega))}
\|<t>^b\nabla{\bold u}\|_{L_p((0, T), H^1_6(\Omega))}).
\end{align*}
By Theorem \ref{mainthm:2} we see that there exists a small constant
$\epsilon > 0$ such that if the initial data $(\theta_0,{\bold v}_0) \in {\mathcal I}$
satisfies the compatibility condition: ${\bold v}_0|_\Gamma=0$, and
the smallness condition: $\|(\theta_0, {\bold v}_0)\|_{{\mathcal I}} \leq \epsilon^2$,
then problem \eqref{eq:1.1} admits unique solutions
$\rho = \rho_*+\theta$ and ${\bold v}$ satisfying the regularity conditions
\eqref{sol:1} and ${\mathcal E}(\theta, {\bold v}) \leq \epsilon$. This completes the proof
of Theorem \ref{thm:main0}.
\section{Comment on the proof}
Let $N \geq 3$ and $\Omega$ be an exterior domain in
${\Bbb R}^N$.
Assume that $L_p$-$L_q$ decay estimates like \eqref{lp-lq} are valid
for the $C^0$ analytic semigroup.
We choose $q_1=2$, $q_2 = 2+\sigma$, and $q_3$ in such a way that $q_3 > N$ and
$${\frak r}ac12 + {\frak r}ac{N}{2(2+\sigma)} \leq {\frak r}ac{N}{2}\Bigl({\frak r}ac12+{\frak r}ac{1}{2+\sigma}
-{\frak r}ac{1}{q_3}\Bigr). $$
Namely, we may take $q_3 =6$ when $N=3$, and any $q_3 >N$ when $N \geq 4$ (note that $N \geq 2N/(N-2)$ for $N \geq 4$).
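As a quick check of the case $N=3$ with $q_3 = 6$, which is the choice made in the proof above, the displayed requirement holds with equality:
$$
\frac12 + \frac{3}{2(2+\sigma)}
= \frac32\Bigl(\frac13 + \frac{1}{2+\sigma}\Bigr)
= \frac32\Bigl(\frac12 + \frac{1}{2+\sigma} - \frac16\Bigr).
$$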
If $L_1$-in-space estimates held, then the global well-posedness could be
established with $q_1=q_2=2$. But, so far,
$L_1$-in-space estimates are not available, and so we have chosen
$q_1=2$ and $q_2 = 2+\sigma$.
Let
$p$ and $b$ be chosen in such a way that
$$\Bigl({\frak r}ac12 + {\frak r}ac{N}{2(2+\sigma)} - b\Bigr)p > 1,
\quad bp' > 1.$$
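For instance, in the case $N = 3$ treated above, the choice $p = 2$ and $b = (3-\sigma)/(2(2+\sigma))$ satisfies both conditions provided $0 < \sigma < 1/2$:
$$
\Bigl(\frac12 + \frac{3}{2(2+\sigma)} - b\Bigr)p
= \Bigl(\frac{5+\sigma}{2(2+\sigma)} - \frac{3-\sigma}{2(2+\sigma)}\Bigr)\cdot 2
= \frac{2+2\sigma}{2+\sigma} > 1,
\qquad
bp' = \frac{3-\sigma}{2+\sigma} > 1.
$$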
We write the equations abstractly as
$$\partial_tu - Au = f, \quad Bu = g \quad(t > 0), \quad u|_{t=0}=u_0.$$
Here, $Bu=g$ corresponds to the boundary conditions,
and $f$ and $g$ correspond to the nonlinear terms. The first reduction
is to let $u_1$ be a solution of the equations:
$$\partial_tu_1 + \lambda_1 u_1 - Au_1 = f,
\quad Bu_1 = g \quad(t > 0), \quad u_1|_{t=0}=u_0.$$
Then, $u_1$ has the same decay properties as nonlinear terms $f$ and $g$ have.
If $u_1$ does not belong to the domain of the operator $(A, B)$
(as in the free boundary condition or slip boundary condition cases),
we additionally choose $u_2$ as
a solution of the equations:
$$\partial_tu_2 + \lambda_1 u_2 - Au_2 = \lambda_1u_1, \quad Bu_2 = 0
\quad(t > 0), \quad u_2|_{t=0}=0,$$
with very large constant $\lambda_1 > 0$.
Since $u_2$ belongs to the domain of the operator $A$ for any $t > 0$,
we choose $u_3$
as a solution of the equations:
$$\partial_tu_3 - Au_3 = \lambda_1 u_2, \quad Bu_3 = 0 \quad(t > 0), \quad u_3|_{t=0}=0.$$
And then, by the Duhamel principle, we have
$$u_3= \lambda_1\int^t_0T(t-s)u_2(s)\,ds,
$$
and we use the estimate \eqref{lp-lq}
for $0 < s < t-1$ and
a standard semigroup estimate for $t-1 < s < t$, that is,
$\|T(t-s)u_2(s)\|_{D(A)} \leq C\|u_2(s)\|_{D(A)}$ for $t-1<s<t$, where $\|\cdot\|_{D(A)}$
is the domain norm.
When $N=2$, the method above fails, because
$${\frak r}ac12 + {\frak r}ac{2}{2(2+\sigma)} < 1.$$
And so, the Matsumura-Nishida method seems to be the only way to prove the
global well-posedness in a two-dimensional exterior domain.
{\bold e}gin{thebibliography}{9}
\bibitem{Agmon} S.~Agmon, {\it On the eigenfunctions and on the eigenvalues
of general elliptic boundary value problems}, Commun. Pure Appl. Math.,
{\bf 15} (1962), 119--147.
\bibitem{ADN} S.~Agmon, A.~Douglis and L.~Nirenberg,
{\it Estimates near the boundary for solutions of elliptic partial differential
equations satisfying general boundary conditions, I},
Commun. Pure Appl. Math., {\bf 12} (1959), 623--727.
\bibitem{AV} M.~S.~Agranovich and M.~I.~Vishik, {\it Elliptic problems
with parameter and parabolic problems of general form} (Russian),
Uspekhi Mat. Nauk. {\bf 19}(1964) 53--161.
English transl. in Russian Math. Surv., {\bf 19}(1964), 53--157.
\bibitem{Danchin} R.~Danchin, {\it Global existence in critical spaces for
compressible Navier-Stokes equations}, Invent. Math. {\bf 141} (2000),
579--614.
\bibitem{DM} R.~Danchin and P.~Mucha, {\it Critical functional framework
and maximal regularity in action on systems of incompressible flows},
M\'emoires de la Soci\'et\'e Math\'ematique de France 1,
November 2013. DOI:10.24033/msmf.451
\bibitem{DV} R.~Denk and L.~Volevich, {\it Parameter-elliptic boundary
value problems connected with the Newton polygon}, Diff. Int. Eqns.,
{\bf 15}(3) (2002), 289--326.
\bibitem{ES1} Y.~Enomoto and Y.~Shibata, {\it On the ${\mathcal R}$-sectoriality and
the initial boundary value problem for the viscous compressible fluid flow},
Funkcial Ekvac., {\bf 56} (2013), 441--505.
\bibitem{ES2} Y.~Enomoto and Y.~Shibata, {\it Global existence of classical
solutions and optimal decay rate for compressible flows via the
theory of semigroups}, Chapter 39 pp. 2085--2181
in Y. Giga and A. Novotn\'y (eds.),
Handbook of Mathematical Analysis in Mechanics of Viscous Fluids,
Springer International Publishing AG, part of Springer Nature 2018.
https://doi.org/10.1007/978-3-319-13344-7\_52.
\bibitem{MN1} A.~Matsumura and T.~Nishida, {\it The initial
value problem for the equations of motion of compressible viscous and heat-conductive
gases}, J. Math. Kyoto Univ., {\bf 20} (1980), 67--104.
\bibitem{MN2} A.~Matsumura and T.~Nishida, {\it Initial boundary value
problems for the equations of motion of compressible viscous
and heat-conductive fluids}, Commun. Math. Phys.,
{\bf 89} (1983), 445--464.
\bibitem{S1} Y.~Shibata, {\it ${\mathcal R}$ Boundedness, Maximal Regularity and Free Boundary
Problems for the Navier Stokes Equations}, pp 193--462 in Mathematical
Analysis of the Navier-Stokes Equations, eds. G.~P.~Galdi and Y.~Shibata,
Lecture Notes in Math. 2254 CIME, Springer Nature Switzerland AG 2020.
ISBN 978-3-030-36226-3.
\bibitem{Strohmer1} G.~Str\"ohmer, {\it About a certain class of parabolic-hyperbolic
systems of differential equations}, Analysis {\bf 9} (1989), 1--39.
\bibitem{Weis} L.~Weis, {\it Operator-valued Fourier multiplier theorems and maximal
$L_p$-regularity}, Math. Ann., {\bf 319}(4)(2001), 735--758.
\end{thebibliography}
\end{document}
\begin{document}
\title{Jacquet modules of principal series generated by the trivial $K$-type}
\begin{abstract}
We propose a new approach for the study of the Jacquet module of a Harish-Chandra module of a real semisimple Lie group.
Using this method, we investigate the structure of the Jacquet module of a principal series representation generated by the trivial $K$-type.
\end{abstract}
\section{Introduction}\label{sec:Introduction}
Let $G$ be a real semisimple Lie group.
By Casselman's subrepresentation theorem, any irreducible admissible representation $U$ is realized as a subrepresentation of a certain non-unitary principal series representation.
Such an embedding is a powerful tool to study an irreducible admissible representation, but the subrepresentation theorem does not tell us how it can be realized.
Casselman~\cite{MR562655} introduced the Jacquet module $J(U)$ of $U$.
This important object retains all the information about the embeddings given by the subrepresentation theorem.
For example, Casselman's subrepresentation theorem is equivalent to $J(U)\ne 0$.
However, the structure of $J(U)$ is very intricate and difficult to determine.
In this paper we give generators of the Jacquet module of a principal series representation generated by the trivial $K$-type.
Let $\mathbb{Z}$ be the ring of integers,
$\mathfrak{g}_0$ the Lie algebra of $G$,
$\theta$ a Cartan involution of $\mathfrak{g}_0$,
$\mathfrak{g}_0 = \mathfrak{k}_0\mathrm{op}lus\mathfrak{a}_0\mathrm{op}lus\mathfrak{n}_0$ the Iwasawa decomposition of $\mathfrak{g}_0$,
$\mathfrak{m}_0$ the centralizer of $\mathfrak{a}_0$ in $\mathfrak{k}_0$,
$W$ the little Weyl group for $(\mathfrak{g}_0,\mathfrak{a}_0)$,
$e\in W$ the unit element of $W$,
$\Sigma$ the restricted root system for $(\mathfrak{g}_0,\mathfrak{a}_0)$,
$\mathfrak{g}_{0,\alpha}$ the root space for $\alpha\in\Sigma$,
$\Sigma^+$ the positive system of $\Sigma$ such that $\mathfrak{n}_0 = \bigoplus_{\alpha\in\Sigma^+}\mathfrak{g}_{0,\alpha}$,
$\rho = \sum_{\alpha\in\Sigma^+}(\dim \mathfrak{g}_{0,\alpha}/2)\alpha$,
$\mathcal{P} = \{\sum_{\alpha\in\Sigma^+}n_\alpha\alpha\mid n_\alpha\in\mathbb{Z}\}$,
$\mathcal{P}^+ = \{\sum_{\alpha\in\Sigma^+}n_\alpha\alpha \mid n_\alpha\in\mathbb{Z},\ n_\alpha\ge 0\}$ and
$U(\lambda)$ the principal series representation with an infinitesimal character $\lambda$ generated by the trivial $K$-type.
In this paper we prove the following theorem.
\begin{thm}[Theorem~\ref{thm:definition of boundary value map for G/K}, Theorem~\ref{thm:relations of v_i}]\label{thm:generators and relations}
Assume that $\lambda$ is regular.
Set $\mathcal{W}(w) = \{w'\in W\mid w\lambda - w'\lambda\in2\mathcal{P}^+\}$ for $w\in W$.
Then there exist generators $\{v_w\mid w\in W\}$ of $J(U(\lambda))$ such that
\[
\begin{cases}
\text{$(H - (\rho + w\lambda))v_w \in \sum_{w'\in \mathcal{W}(w)}U(\mathfrak{g})v_{w'}$ for all $H\in \mathfrak{a}_0$},\\
\text{$Xv_w \in \sum_{w'\in \mathcal{W}(w)}U(\mathfrak{g})v_{w'}$ for all $X\in \mathfrak{m}_0\mathrm{op}lus\theta(\mathfrak{n}_0)$}.
\end{cases}
\]
\end{thm}
We enumerate $W = \{w_1,w_2,\dots,w_r\}$ such that $\re w_1\lambda\ge \re w_2\lambda\ge \dots \ge \re w_r\lambda$.
Set $V_i = \sum_{j\ge i}U(\mathfrak{g})v_{w_j}$.
Then by Theorem~\ref{thm:generators and relations} we have a surjective map $M(w_i\lambda)\to V_i/V_{i + 1}$, where $M(w_i\lambda)$ is a generalized Verma module (see Definition~\ref{defn:generalized Verma module}).
This map is in fact an isomorphism.
Namely we can prove the following theorem.
\begin{thm}[Theorem~\ref{thm:Main theorem for regular case}]\label{thm:existence of filtration}
There exists a filtration $J(U(\lambda)) = V_1 \supset V_2\supset \dots \supset V_{r + 1} = 0$ of $J(U(\lambda))$ such that $V_i / V_{i + 1} \simeq M(w_i\lambda)$.
Moreover if $w\lambda - \lambda \not\in 2\mathcal{P}$ for $w\in W\setminus\{e\}$ then $J(U(\lambda)) \simeq \bigoplus_{w\in W} M(w\lambda)$.
\end{thm}
This theorem does not need the assumption that $\lambda$ is regular.
In the case where $G$ is split and $U(\lambda)$ is irreducible, Collingwood~\cite{MR1089304} proved Theorem~\ref{thm:existence of filtration}.
For example, we obtain the following in the case $\mathfrak{g}_0 = \mathfrak{sl}(2,\mathbb{R})$:
Choose a basis $\{H,E_+,E_-\}$ of $\mathfrak{g}_0$ such that $\mathbb{R} H = \mathfrak{a}_0$, $\mathbb{R} E_+ = \mathfrak{n}_0$, $[H,E_\pm] = \pm 2E_\pm$ and $E_- = \theta(E_+)$.
Then $\Sigma^+ = \{2\alpha\}$ where $\alpha(H) = 1$.
Let $\lambda = r\alpha$ for $r\in \mathbb{C}$.
We may assume $\re r \ge 0$.
By Theorems~\ref{thm:generators and relations} and \ref{thm:existence of filtration}, we have the exact sequence
\[
\xymatrix{
0 \ar[r] & M(-r\alpha) \ar[r] & J(U(r\alpha)) \ar[r] & M(r\alpha) \ar[r] & 0.
}
\]
Consider the case $\lambda$ is integral, i.e., $2r\in \mathbb{Z}$.
If $r\not\in \mathbb{Z}$ then this sequence splits by Theorem~\ref{thm:existence of filtration}.
On the other hand, if $r\in \mathbb{Z}$ then by the direct calculation using the method introduced in this paper we can show it does not split.
Notice that $U(r\alpha)$ is irreducible if and only if $r\in\mathbb{Z}$.
Then we have the following:
if $\lambda$ is integral, then $J(U(\lambda))$ is isomorphic to a direct sum of generalized Verma modules if and only if $U(\lambda)$ is reducible.
Our method is based on the paper of Kashiwara and Oshima~\cite{MR0482870}.
In Section~\ref{sec:Jacquet modules and fundamental properties} we show fundamental properties of Jacquet modules and introduce a certain extension of the universal enveloping algebra.
An analog of the theory of Kashiwara and Oshima is established in Section~\ref{sec:construction of special elements}.
In Section~\ref{sec:Structure of Jacquet modules (regular case)} we prove our main theorem in the case of a regular infinitesimal character using the result of Section~\ref{sec:construction of special elements}.
We complete the proof in Section~\ref{sec:Structure of Jacquet modules (singular case)} using the translation principle.
\subsection*{Acknowledgments}
The author is grateful to his advisor Hisayosi Matumoto for his advice and support.
He would also like to thank Professor Toshio Oshima for his comments.
\subsection*{Notations}
Throughout this paper we use the following notations.
As usual we denote the ring of integers, the set of non-negative integers, the set of positive integers, the real number field and the complex number field by $\mathbb{Z},\mathbb{Z}_{\ge 0},\mathbb{Z}_{> 0},\mathbb{R}$ and $\mathbb{C}$ respectively.
Let $\mathfrak{g}_0$ be a real semisimple Lie algebra.
Fix a Cartan involution $\theta$ of $\mathfrak{g}_0$.
Let $\mathfrak{g}_0 = \mathfrak{k}_0\mathrm{op}lus \mathfrak{s}_0$ be the decomposition of $\mathfrak{g}_0$ into the $+1$ and $-1$ eigenspaces for $\theta$.
Take a maximal abelian subspace $\mathfrak{a}_0$ of $\mathfrak{s}_0$ and let $\mathfrak{g}_0 = \mathfrak{k}_0\mathrm{op}lus \mathfrak{a}_0\mathrm{op}lus \mathfrak{n}_0$ be the corresponding Iwasawa decomposition of $\mathfrak{g}_0$.
Set $\mathfrak{m}_0 = \{X\in \mathfrak{k}_0\mid \text{$[H,X] = 0$ for all $H\in \mathfrak{a}_0$}\}$.
Then $\mathfrak{p}_0 = \mathfrak{m}_0\mathrm{op}lus \mathfrak{a}_0 \mathrm{op}lus \mathfrak{n}_0$ is a minimal parabolic subalgebra of $\mathfrak{g}_0$.
Write $\mathfrak{g}$ for the complexification of $\mathfrak{g}_0$ and $U(\mathfrak{g})$ for the universal enveloping algebra of $\mathfrak{g}$.
We apply analogous notations to other Lie algebras.
Set $\mathfrak{a}^* = \Hom_\mathbb{C}(\mathfrak{a},\mathbb{C})$ and $\mathfrak{a}_0^* = \Hom_\mathbb{R}(\mathfrak{a}_0,\mathbb{R})$.
Let $\Sigma\subset\mathfrak{a}^*$ be the restricted root system for $(\mathfrak{g},\mathfrak{a})$ and $\mathfrak{g}_\alpha$ the root space for $\alpha\in\Sigma$.
Let $\Sigma^+$ be the positive root system determined by $\mathfrak{n}$, i.e., $\mathfrak{n} = \bigoplus_{\alpha\in\Sigma^+}\mathfrak{g}_\alpha$.
$\Sigma^+$ determines the set of simple roots $\Pi = \{\alpha_1,\alpha_2,\dots,\alpha_l\}$.
We define the total order on $\mathfrak{a}_0^*$ by the following; for $c_i,d_i\in\mathbb{R}$ we define $\sum_i c_i \alpha_i > \sum_i d_i \alpha_i$ if and only if there exists an integer $k$ such that $c_1 = d_1,\dots,c_k = d_k$ and $c_{k + 1} > d_{k + 1}$.
Let $\{H_1,H_2,\dots,H_l\}$ be the dual basis of $\{\alpha_i\}$.
Write $W$ for the little Weyl group for $(\mathfrak{g}_0,\mathfrak{a}_0)$ and $e$ for the unit element of $W$.
Set
$\mathcal{P} = \{\sum_{\alpha\in\Sigma^+}n_\alpha\alpha\mid n_\alpha\in\mathbb{Z}\}$,
$\mathcal{P}^+ = \{\sum_{\alpha\in\Sigma^+}n_\alpha\alpha\mid n_\alpha\in\mathbb{Z}_{\ge 0}\}$
and $\mathcal{P}^{++} = \mathcal{P}^+\setminus\{0\}$.
Let $m$ be the dimension of $\mathfrak{n}$.
Fix a basis $E_1,E_2,\dots,E_m$ of $\mathfrak{n}$ such that each $E_i$ is a restricted root vector.
Let $\beta_i$ be the restricted root such that $E_i\in \mathfrak{g}_{\beta_i}$.
For $\mathbf{n} = (\mathbf{n}_i)\in\mathbb{Z}_{\ge 0}^m$ we denote $E_1^{\mathbf{n}_1}E_2^{\mathbf{n}_2}\dotsm E_m^{\mathbf{n}_m}$ by $E^{\mathbf{n}}$.
For $x = (x_1,x_2,\dots,x_n)\in\mathbb{Z}_{\ge 0}^n$, we write $\lvert x\rvert = x_1 + x_2 + \dots + x_n$ and $x! = x_1!\,x_2!\dotsm x_n!$.
For a $\mathbb{C}$-algebra $R$, let $M(r,r',R)$ be the space of $r\times r'$ matrices with entries in $R$ and $M(r,R) = M(r,r,R)$.
Write $1_r\in M(r,R)$ for the identity matrix.
\section{Jacquet modules and fundamental properties}\label{sec:Jacquet modules and fundamental properties}
\begin{defn}[Jacquet module]
Let $U$ be a $U(\mathfrak{g})$-module.
Define modules $\widehat{J}(U)$ and $J(U)$ by
\begin{align*}
\widehat{J}(U) & = \varprojlim_k U/\mathfrak{n}^k U,\\
J(U) & = \widehat{J}(U)_{\text{\normalfont $\mathfrak{a}$-finite}} = \{u\in \widehat{J}(U)\mid \dim U(\mathfrak{a})u < \infty\}.
\end{align*}
We call $J(U)$ the Jacquet module of $U$.
\end{defn}
Set $\widehat{\mathcal{E}}(\mathfrak{n}) = \varprojlim_k U(\mathfrak{n})/\mathfrak{n}^kU(\mathfrak{n})$.
\begin{prop}\label{prop:algebraic property of E(n)}
\begin{enumerate}
\item The $\mathbb{C}$-algebra $\widehat{\mathcal{E}}(\mathfrak{n})$ is right and left Noetherian.
\item The $\mathbb{C}$-algebra $\widehat{\mathcal{E}}(\mathfrak{n})$ is flat over $U(\mathfrak{n})$.
\item If $U$ is a finitely generated $U(\mathfrak{n})$-module then $\varprojlim_k U/\mathfrak{n}^kU = \widehat{\mathcal{E}}(\mathfrak{n})\otimes_{U(\mathfrak{n})}U$.
\item Let $S = (S_k)$ be an element of $M(r,\mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n}))$ and $(a_n)\in \mathbb{C}^{\mathbb{Z}_{\ge 0}}$.
Define $\sum_{n = 0}^\infty a_nS^n = (\sum_{n = 0}^k a_n S_k^n)_k$.
Then $\sum_{n = 0}^\infty a_nS^n \in M(r,\widehat{\mathcal{E}}(\mathfrak{n}))$.
\end{enumerate}
\end{prop}
\begin{proof}
Since Stafford and Wallach~\cite[Theorem~2.1]{MR656493} show that $\mathfrak{n}U(\mathfrak{n})\subset U(\mathfrak{n})$ satisfies the Artin-Rees property, the usual argument of the proof for commutative rings is applicable to prove (1), (2) and (3).
(4) is obvious.
\end{proof}
\begin{cor}
Let $S$ be an element of $M(r,\widehat{\mathcal{E}}(\mathfrak{n}))$ such that $S - 1_r \in M(r,\mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n}))$.
Then $S$ is invertible.
\end{cor}
\begin{proof}
Set $T = 1_r - S$.
By Proposition~\ref{prop:algebraic property of E(n)}, $R = \sum_{n = 0}^\infty T^n\in M(r,\widehat{\mathcal{E}}(\mathfrak{n}))$.
Then $SR = RS = 1_r$.
\end{proof}
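For instance, a minimal illustration with $r = 1$: the element $1 - E_1\in\widehat{\mathcal{E}}(\mathfrak{n})$ satisfies $(1 - E_1) - 1 \in \mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n})$, so it is invertible, and the proof above exhibits its inverse as the geometric series $\sum_{n = 0}^\infty E_1^n$, which belongs to $\widehat{\mathcal{E}}(\mathfrak{n})$ by Proposition~\ref{prop:algebraic property of E(n)}.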
We can prove the following proposition in a similar way to that of Goodman and Wallach~\cite[Lemma~2.2]{MR597811}.
For the sake of completeness we give a proof.
\begin{prop}\label{prop:generalized result of Goodman and Wallach}
Let $U$ be a $U(\mathfrak{a}\mathrm{op}lus\mathfrak{n})$-module which is finitely generated as a $U(\mathfrak{n})$-module.
Assume that every element of $U$ is $\mathfrak{a}$-finite.
For $\mu\in\mathfrak{a}^*$ set
\[
U_\mu = \{u\in U\mid \text{For all $H\in\mathfrak{a}$ there exists a positive integer $N$ such that $(H - \mu(H))^Nu = 0$}\}.
\]
Then
\[
\widehat{J}(U) \simeq \prod_{\mu\in\mathfrak{a}^*}U_\mu.
\]
\end{prop}
\begin{proof}
For $k\in\mathbb{Z}_{>0}$ put $S_k = \{\mu\in\mathfrak{a}^* \mid U_\mu\ne 0,\ U_\mu\not\subset \mathfrak{n}^kU\}$.
Since $U$ is finitely generated, $\dim U/\mathfrak{n}^kU < \infty$.
Therefore $S_k$ is a finite set.
Define a map $\varphi\colon \prod_{\mu\in\mathfrak{a}^*} U_\mu\to \widehat{J}(U)$ by
\[
\varphi((x_\mu)_{\mu\in\mathfrak{a}^*}) = \left(\sum_{\mu\in S_k} x_\mu \pmod{\mathfrak{n}^kU}\right)_k.
\]
First we show that $\varphi$ is injective.
Assume $\varphi((x_\mu)_{\mu\in\mathfrak{a}^*}) = 0$.
We have $\sum_{\mu\in S_k}x_\mu \in \mathfrak{n}^kU$ for all $k\in\mathbb{Z}_{> 0}$.
Since $\mathfrak{n}^kU$ is $\mathfrak{a}$-stable and $S_k$ is a finite set, $x_\mu\in\mathfrak{n}^kU$ for all $\mu\in\mathfrak{a}^*$, thus we have $x_\mu = 0$.
We have to show that $\varphi$ is surjective.
Let $x = (x_k\pmod{\mathfrak{n}^k U})_k$ be an element of $\widehat{J}(U)$.
Since every element of $U$ is $\mathfrak{a}$-finite, we have $U = \bigoplus_{\mu\in\mathfrak{a}^*}U_\mu$.
Let $p_\mu\colon U\to U_\mu$ be the projection.
The $U(\mathfrak{n})$-module $U$ is finitely generated and therefore for all $\mu\in\mathfrak{a}^*$ there exists a positive integer $k_\mu$ such that $\mathfrak{n}^{k_\mu}U\cap U_\mu = 0$.
Notice that if $i,i' > k_\mu$ then $p_\mu(x_i) = p_\mu(x_{i'})$.
Hence we have $\varphi((p_\mu(x_{k_\mu}))_{\mu\in\mathfrak{a}^*}) = x$.
\end{proof}
We define an $(\mathfrak{a}\mathrm{op}lus\mathfrak{n})$-representation structure of $U(\mathfrak{n})$ by $(H + X)(u) = Hu - uH + Xu$ for $H\in \mathfrak{a}$, $X\in \mathfrak{n}$, $u\in U(\mathfrak{n})$.
Then $U(\mathfrak{n})$ is a $U(\mathfrak{a}\mathrm{op}lus\mathfrak{n})$-module.
By Proposition~\ref{prop:generalized result of Goodman and Wallach} $\widehat{\mathcal{E}}(\mathfrak{n}) = \prod_{\mu\in\mathfrak{a}^*}U(\mathfrak{n})_\mu$.
The following results are corollaries of Proposition~\ref{prop:generalized result of Goodman and Wallach}.
\begin{cor}\label{cor:E(n) as vector space}
A linear map
\[
\begin{array}{ccc}
\mathbb{C}[[X_1,X_2,\dots,X_m]] & \longrightarrow & \widehat{\mathcal{E}}(\mathfrak{n})\\
\sum_{\mathbf{n}\in\mathbb{Z}_{\ge 0}^m} a_{\mathbf{n}} X^{\mathbf{n}}
& \longmapsto &
\left(\sum_{\lvert\mathbf{n}\rvert \le k}a_{\mathbf{n}} E^{\mathbf{n}}\pmod{\mathfrak{n}^kU(\mathfrak{n})}\right)_k
\end{array}
\]
is bijective, where $X^{\mathbf{n}} = X_1^{\mathbf{n}_1}X_2^{\mathbf{n}_2}\dotsm X_m^{\mathbf{n}_m}$ for $\mathbf{n} = (\mathbf{n}_1,\mathbf{n}_2,\dots,\mathbf{n}_m)\in\mathbb{Z}_{\ge 0}^m$.
\end{cor}
\begin{proof}
By the Poincar\'e-Birkhoff-Witt theorem $\{E^{\mathbf{n}}\mid \sum_i\mathbf{n}_i\beta_i = \mu\}$ is a basis of $U(\mathfrak{n})_\mu$.
This implies the corollary since $\widehat{\mathcal{E}}(\mathfrak{n})= \prod_{\mu\in\mathfrak{a}^*}U(\mathfrak{n})_\mu$.
\end{proof}
We denote the image of $\sum_{\mathbf{n}\in\mathbb{Z}_{\ge 0}^m} a_{\mathbf{n}}X^{\mathbf{n}}$ under the map in Corollary~\ref{cor:E(n) as vector space} by $\sum_{\mathbf{n}\in\mathbb{Z}_{\ge 0}^m} a_{\mathbf{n}}E^{\mathbf{n}}$.
\begin{cor}\label{cor:Jacquet module of trivial case}
Let $U$ be a $U(\mathfrak{g})$-module which is finitely generated as a $U(\mathfrak{n})$-module.
Assume that all elements are $\mathfrak{a}$-finite.
Then $J(U) = U$.
\end{cor}
\begin{proof}
This follows from the following equation.
\[
J(U) = \widehat{J}(U)_{\text{$\mathfrak{a}$-finite}} = \left(\prod_{\mu\in\mathfrak{a}^*}U_\mu\right)_{\text{$\mathfrak{a}$-finite}} = \bigoplus_{\mu\in\mathfrak{a}^*}U_\mu = U.
\]
\end{proof}
Put $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n}) = \widehat{\mathcal{E}}(\mathfrak{n})\otimes_{U(\mathfrak{n})}U(\mathfrak{g})$.
We can define a $\mathbb{C}$-algebra structure of $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})$ by
\begin{align*}
(f\otimes 1)(1\otimes u) & = f\otimes u,\\
(1\otimes u)(1\otimes u') & = 1\otimes (uu'),\\
(f\otimes 1)(f'\otimes 1) & = (ff')\otimes 1,\\
(1\otimes X)(f\otimes 1) & = \sum_{{\mathbf{n}}\in\mathbb{Z}_{\ge 0}^m}\frac{1}{{\mathbf{n}}!}\frac{\partial^{\lvert{\mathbf{n}}\rvert}}{\partial E^{\mathbf{n}}}f\otimes ((\ad(E))^{\mathbf{n}})'(X),
\end{align*}
where $u,u'\in U(\mathfrak{g})$, $X\in\mathfrak{g}$, $f,f'\in\widehat{\mathcal{E}}(\mathfrak{n})$, $((\ad(E))^{\mathbf{n}})' = (-\ad(E_m))^{{\mathbf{n}}_m}
\dotsm(-\ad(E_1))^{{\mathbf{n}}_1}$ and
\[
\frac{\partial^{\lvert{\mathbf{n}}\rvert}}{\partial E^{\mathbf{n}}}\left(\sum_{\mathbf{m}\in\mathbb{Z}_{\ge 0}^m} a_{\mathbf{m}} E^{\mathbf{m}}\right) = \sum_{\mathbf{m}\in\mathbb{Z}_{\ge 0}^m} a_{\mathbf{m}}\frac{{\mathbf{m}}!}{({\mathbf{m}}-{\mathbf{n}})!}E^{{\mathbf{m}}-{\mathbf{n}}}.
\]
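As an illustration of the last rule, consider the $\mathfrak{sl}(2,\mathbb{R})$ setting of the introduction, so that $m = 1$ and $E_1 = E_+$, and normalize $[E_+,E_-] = H$ (a normalization adopted only for this illustration). Since $(-\ad(E_+))(E_-) = -H$, $(-\ad(E_+))^2(E_-) = -2E_+$ and $(-\ad(E_+))^n(E_-) = 0$ for $n \ge 3$, the rule gives, for $f = E_+^k$ and $X = E_-$,
\[
(1\otimes E_-)(E_+^k\otimes 1)
= E_+^k\otimes E_- - kE_+^{k-1}\otimes H - k(k-1)E_+^{k-1}\otimes 1
\]
(using $E_+^{k-2}\otimes E_+ = E_+^{k-1}\otimes 1$ in the balanced tensor product), which is consistent with the identity $E_-E_+^k = E_+^kE_- - kE_+^{k-1}H - k(k-1)E_+^{k-1}$ in $U(\mathfrak{g})$.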
Notice that $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U \simeq \widehat{\mathcal{E}}(\mathfrak{n})\otimes_{U(\mathfrak{n})}U$ as an $\widehat{\mathcal{E}}(\mathfrak{n})$-module for a $U(\mathfrak{g})$-module $U$.
By Proposition~\ref{prop:algebraic property of E(n)}, $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})$ is flat over $U(\mathfrak{g})$.
Notice that if $\mathfrak{b}$ is a subalgebra of $\mathfrak{g}$ which contains $\mathfrak{n}$ then $\widehat{\mathcal{E}}(\mathfrak{n})\otimes_{U(\mathfrak{n})}U(\mathfrak{b})$ is a subalgebra of $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})$.
Put $\widehat{\mathcal{E}}(\mathfrak{b},\mathfrak{n}) = \widehat{\mathcal{E}}(\mathfrak{n})\otimes_{U(\mathfrak{n})}U(\mathfrak{b})$.
Let $U$ be a $U(\mathfrak{a}\mathrm{op}lus\mathfrak{n})$-module such that $U = \bigoplus_{\mu\in\mathfrak{a}^*}U_\mu$.
Set
\[
V = \left\{(u_\mu)_\mu\in\prod_{\mu\in\mathfrak{a}^*}U_\mu\Biggm| \text{there exists an element $\nu \in \mathfrak{a}_0^*$ such that $u_\mu = 0$ for $\re\mu < \nu$}\right\}.
\]
Then we can define an $\mathfrak{a}$-module homomorphism
\[
\varphi\colon \widehat{\mathcal{E}}(\mathfrak{a}\mathrm{op}lus\mathfrak{n},\mathfrak{n})\otimes_{U(\mathfrak{a}\mathrm{op}lus\mathfrak{n})}U\simeq \left(\prod_{\mu\in\mathfrak{a}^*} U(\mathfrak{n})_\mu\right)\otimes_{U(\mathfrak{n})} \left(\bigoplus_{\mu'\in\mathfrak{a}^*}U_{\mu'}\right) \to V
\]
by
$\varphi((f_\mu)_{\mu\in\mathfrak{a}^*}\otimes (u_{\mu'})_{\mu'\in\mathfrak{a}^*}) = (\sum_{\mu + \mu' = \lambda}f_\mu u_{\mu'})_{\lambda\in\mathfrak{a}^*}$.
Notice that the composition $U\to \widehat{\mathcal{E}}(\mathfrak{a}\mathrm{op}lus\mathfrak{n},\mathfrak{n})\otimes_{U(\mathfrak{a}\mathrm{op}lus\mathfrak{n})}U\to V$ is equal to the inclusion map $U\hookrightarrow V$.
We consider the case $U = U(\mathfrak{g})$.
Define an $(\mathfrak{a}\mathrm{op}lus \mathfrak{n})$-module structure of $U(\mathfrak{g})$ by $(H + X)(u) = Hu - uH + Xu$ for $H\in \mathfrak{a}$, $X\in \mathfrak{n}$, $u\in U(\mathfrak{g})$.
We have a map
\begin{multline*}
\varphi\colon \widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\to \\
\left\{(P_\mu)_{\mu\in\mathfrak{a}^*}\in\prod_{\mu\in\mathfrak{a}^*}U(\mathfrak{g})_\mu\,\Bigg|\,\text{there exists an element $\nu \in \mathfrak{a}_0^*$ such that $P_\mu = 0$ for $\re\mu < \nu$}\right\}.
\end{multline*}
We write $\varphi(P) = (P^{(\mu)})_{\mu\in\mathfrak{a}^*}$.
Put $P^{(H,z)} = \sum_{\mu(H) = z}P^{(\mu)}$ for $z\in \mathbb{C}$ and $H\in \mathfrak{a}$ such that $\re\alpha(H) > 0$ for all $\alpha\in\Sigma^+$.
By the definition we have the following proposition.
\begin{prop}\label{prop:fundamental property of ^(lambda)}
\begin{enumerate}
\item Assume that $U$ is finitely generated as a $U(\mathfrak{n})$-module.
Let $\varphi\colon \widehat{\mathcal{E}}(\mathfrak{a}\mathrm{op}lus\mathfrak{n},\mathfrak{n})\otimes_{U(\mathfrak{n})}U\to \prod_{\mu\in\mathfrak{a}^*}U_\mu$ be an $\mathfrak{a}$-module homomorphism defined as above.
Then $\varphi$ coincides with the map given in Proposition~\ref{prop:generalized result of Goodman and Wallach}.
In particular $\varphi$ is an isomorphism.
\item We have $(PQ)^{(\lambda)} = \sum_{\mu + \mu' = \lambda}P^{(\mu)}Q^{(\mu')}$ for $P,Q\in\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})$ and $\lambda\in\mathfrak{a}^*$.
\item We have
\[
\left(\sum_{\mathbf{n}\in\mathbb{Z}_{\ge 0}^m}a_{\mathbf{n}}E^{\mathbf{n}}\right)^{(\lambda)} = \sum_{\sum_i \mathbf{n}_i\beta_i = \lambda}a_{\mathbf{n}}E^{\mathbf{n}}
\]
for $\lambda\in\mathfrak{a}^*$.
\end{enumerate}
\end{prop}
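For example, in the $\mathfrak{sl}(2,\mathbb{R})$ setting of the introduction we have $m = 1$ and $\beta_1 = 2\alpha$, so (3) simply reads
\[
\Bigl(\sum_{n\in\mathbb{Z}_{\ge 0}}a_{n}E_+^{n}\Bigr)^{(\lambda)} =
\begin{cases}
a_{k}E_+^{k} & (\lambda = 2k\alpha,\ k\in\mathbb{Z}_{\ge 0}),\\
0 & (\text{otherwise}).
\end{cases}
\]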
\begin{prop}\label{prop:induced equation}
Let $U$ be a $U(\mathfrak{g})$-module which is finitely generated as a $U(\mathfrak{n})$-module.
We take generators $v_1,v_2,\dots,v_n$ of an $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})$-module $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U$ and set $V = \sum_i U(\mathfrak{g})v_i\subset \widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U$.
Define the surjective map $\psi\colon \widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}V \to \widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U$ by $\psi(f\otimes v) = fv$.
Assume that there exist weights $\lambda_i\in\mathfrak{a}^*$ and a positive integer $N$ such that $(H - \lambda_i(H))^Nv_i = 0$ for all $H\in \mathfrak{a}$ and $1 \le i \le n$.
Let $\varphi\colon \widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}V \to \prod_{\mu\in\mathfrak{a}^*} V_\mu$ be the map defined as above.
Then there exists a unique map $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U\to \prod_{\mu\in\mathfrak{a}^*} V_\mu$ such that the diagram
\[
\xymatrix{
\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}V \ar[r]^(.57){\varphi} \ar[d]^\psi &
\prod_{\mu\in\mathfrak{a}^*}V_\mu\\
\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U \ar@{.>}[ru]}
\]
is commutative.
\end{prop}
\begin{proof}
Set $\widehat{U} = \widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U$ and $\widehat{V} = \widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}V$.
Take $f^{(i)}\in \widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})$ and $v^{(i)}\in V$ such that $\psi(\sum_i f^{(i)}\otimes v^{(i)}) = 0$.
We have to show $\varphi(\sum_i f^{(i)}\otimes v^{(i)}) = 0$.
Since $\widehat{V} = \widehat{\mathcal{E}}(\mathfrak{n})\otimes_{U(\mathfrak{n})}V$, we may assume $f^{(i)}\in \widehat{\mathcal{E}}(\mathfrak{n})$.
We can write $f^{(i)} = (f_\mu^{(i)})_{\mu\in\mathfrak{a}^*}$ by the isomorphism $\widehat{\mathcal{E}}(\mathfrak{n}) \simeq \prod_{\mu\in\mathfrak{a}^*}U(\mathfrak{n})_\mu$.
Since $V = \bigoplus_{\mu'\in\mathfrak{a}^*}V_{\mu'}$, we can write $v^{(i)} = \sum_{\mu'\in\mathfrak{a}^*}v_{\mu'}^{(i)}$, $v_{\mu'}^{(i)}\in V_{\mu'}$.
We have to show $\sum_i\sum_{\mu + \mu' = \lambda}f^{(i)}_\mu v^{(i)}_{\mu'} = 0$ for all $\lambda\in\mathfrak{a}^*$.
Since $U$ is a finitely generated $U(\mathfrak{n})$-module we have $\widehat{U} = \varprojlim_k U/\mathfrak{n}^kU = \varprojlim_k \widehat{U}/\mathfrak{n}^k\widehat{U}$.
It is sufficient to prove $\sum_i\sum_{\mu + \mu' = \lambda}f^{(i)}_\mu v^{(i)}_{\mu'}\in \mathfrak{n}^k\widehat{U}$ for all $k\in\mathbb{Z}_{>0}$.
Fix $\lambda\in\mathfrak{a}^*$ and $k\in\mathbb{Z}_{>0}$.
We can choose an element $\nu\in\mathfrak{a}^*_0$ such that
$\bigoplus_{\re\mu \ge \nu}U(\mathfrak{n})_\mu\subset \mathfrak{n}^kU(\mathfrak{n})$.
Then $0 = \psi(\sum_i f^{(i)}\otimes v^{(i)}) \equiv \sum_i\sum_{\re\mu < \nu}\sum_{\mu'\in\mathfrak{a}^*}f^{(i)}_\mu v^{(i)}_{\mu'}\pmod{\mathfrak{n}^k\widehat{U}}$.
Notice that the following two sets are finite.
\begin{gather*}
\{\mu \mid \text{$\re(\mu) < \nu$ and there exists an integer $i$ such that $f_\mu^{(i)} \ne 0$}\},\\
\{\mu'\mid \text{there exists an integer $i$ such that $v_{\mu'}^{(i)} \ne 0$}\}.
\end{gather*}
This implies $\sum_i\sum_{\mu + \mu' = \lambda}f_\mu^{(i)}v_{\mu'}^{(i)} \in \mathfrak{n}^k\widehat{U}$.
\end{proof}
The following result is a corollary of Proposition~\ref{prop:induced equation}.
\begin{cor}\label{cor:induced equation}
In the setting of Proposition~\ref{prop:induced equation}, we have the following.
Let $P_i$ $(1 \le i \le n)$ be elements of $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})$ such that $\sum_{i = 1}^nP_iv_i = 0$.
Then $\sum_iP_i^{(\lambda - \lambda_i)}v_i = 0$ for all $\lambda\in\mathfrak{a}^*$.
\end{cor}
\section{Construction of special elements}\label{sec:construction of special elements}
Let $\Lambda$ be a subset of $\mathcal{P}$.
Put $\Lambda^+ = \Lambda\cap \mathcal{P}^+$ and $\Lambda^{++} = \Lambda\cap \mathcal{P}^{++}$.
We define vector spaces $U(\mathfrak{g})_\Lambda, U(\mathfrak{n})_\Lambda, \widehat{\mathcal{E}}(\mathfrak{n})_\Lambda$ and $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})_\Lambda$ by
\begin{align*}
U(\mathfrak{g})_\Lambda & = \{P\in U(\mathfrak{g})\mid \text{$P^{(\mu)} = 0$ for all $\mu\not\in\Lambda$}\},\\
U(\mathfrak{n})_\Lambda & = \{P\in U(\mathfrak{n})\mid \text{$P^{(\mu)} = 0$ for all $\mu\not\in\Lambda$}\},\\
\widehat{\mathcal{E}}(\mathfrak{n})_\Lambda & = \{P\in \widehat{\mathcal{E}}(\mathfrak{n})\mid \text{$P^{(\mu)} = 0$ for all $\mu\not\in\Lambda$}\},\\
\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})_\Lambda & = \{P \in \widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\mid \text{$P^{(\mu)} = 0$ for all $\mu\not\in\Lambda$}\}.
\end{align*}
Put $(\mathfrak{n}U(\mathfrak{n}))_\Lambda = \mathfrak{n}U(\mathfrak{n})\cap U(\mathfrak{n})_\Lambda$ and $(\mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n}))_\Lambda = \mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n})\cap \widehat{\mathcal{E}}(\mathfrak{n})_\Lambda$.
We assume that $\Lambda$ is a subgroup of $\mathfrak{a}^*$.
Then $U(\mathfrak{g})_\Lambda, U(\mathfrak{n})_\Lambda, \widehat{\mathcal{E}}(\mathfrak{n})_\Lambda$ and $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})_\Lambda$ are $\mathbb{C}$-algebras.
Let $U$ be a $U(\mathfrak{g})_\Lambda$-module which is finitely generated as a $U(\mathfrak{n})_\Lambda$-module.
Let $u_1,u_2,\dots,u_N$ be generators of $U$ as a $U(\mathfrak{n})_\Lambda$-module.
Put $u = {}^t(u_1,u_2,\dots,u_N)$, $\overline{U} = U/(\mathfrak{n}U(\mathfrak{n}))_\Lambda U$ and $\overline{u} = u\pmod{(\mathfrak{n}U(\mathfrak{n}))_\Lambda U}$.
The module $\overline{U}$ has an $\mathfrak{a}$-module structure induced from that of $U$.
By the assumption we have $\dim \overline{U} < \infty$.
Let $\lambda_1,\lambda_2,\dots,\lambda_r \in \mathfrak{a}^*$ $(\re\lambda_1 \ge \re \lambda_2\ge \dots \ge \re\lambda_r)$ be the eigenvalues of $\mathfrak{a}$ on $\overline{U}$, counted with multiplicities.
We can choose a basis $\overline{v_1},\overline{v_2},\dots,\overline{v_r}$ of $\overline{U}$ and a linear map $\overline{Q}\colon\mathfrak{a}\to M(r,\mathbb{C})$ such that
\[
\begin{cases}
\text{$H\overline{v} = \overline{Q}(H)\overline{v}$ for all $H\in \mathfrak{a}$},\\
\text{$\overline{Q}(H)_{ii} = \lambda_i(H)$ for all $H\in \mathfrak{a}$}, \\
\text{if $i > j$ then $\overline{Q}(H)_{ij} = 0$ for all $H\in \mathfrak{a}$}, \\
\text{if $\lambda_i \ne \lambda_j$ then $\overline{Q}(H)_{ij} = 0$ for all $H\in \mathfrak{a}$},
\end{cases}
\]
where $\overline{v} = {}^t(\overline{v_1},\overline{v_2},\dots,\overline{v_r})$.
Take $\overline{A}\in M(N,r,\mathbb{C})$ and $\overline{B}\in M(r,N,\mathbb{C})$ such that $\overline{v} = \overline{B}\overline{u}$ and $\overline{u} = \overline{A}\overline{v}$.
Set $\widehat{U} = \widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})_\Lambda\otimes_{U(\mathfrak{g})_\Lambda}U$.
\begin{thm}\label{thm:definition of boundary value map}
There exist matrices $A\in M(N,r,\widehat{\mathcal{E}}(\mathfrak{n})_\Lambda)$ and $B\in M(r,N,\widehat{\mathcal{E}}(\mathfrak{n})_\Lambda)$ such that the following conditions hold:
\begin{itemize}
\item There exists a linear map $Q\colon \mathfrak{a}\to M(r,U(\mathfrak{n})_\Lambda)$ such that
\[
\begin{cases}
\text{$Hv = Q(H)v$ for all $H\in \mathfrak{a}$},\\
\text{$Q(H) - \overline{Q}(H)\in M(r,(\mathfrak{n}U(\mathfrak{n}))_\Lambda)$ for all $H\in\mathfrak{a}$},\\
\text{if $\lambda_i - \lambda_j \not\in\Lambda^+$ then $Q(H)_{ij} = 0$ for all $H\in \mathfrak{a}$},\\
\text{if $\lambda_i - \lambda_j \in \Lambda^+$ then $[H',Q(H)_{ij}] = (\lambda_i - \lambda_j)(H')Q(H)_{ij}$ for all $H,H'\in \mathfrak{a}$},
\end{cases}
\]
where $v = Bu$.
\item We have $u = ABu$.
\item We have $A - \overline{A} \in M(N,r,(\mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n}))_\Lambda)$ and $B - \overline{B}\in M(r,N,(\mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n}))_\Lambda)$.
\end{itemize}
\end{thm}
For the proof we need some lemmas.
Put $w = \overline{B}u\in \widehat{U}^r$.
\begin{lem}\label{lem:lemma for proof of boudary value map}
For $H\in\mathfrak{a}$ there exists a matrix $R\in M(r,(\mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n}))_\Lambda)$ such that $Hw = (\overline{Q}(H) + R)w$ in $\widehat{U}^r$.
\end{lem}
\begin{proof}
Since $w \pmod{((\mathfrak{n}U(\mathfrak{n}))_\Lambda U)^r} = \overline{v}$, we have $Hw - \overline{Q}(H)w \in ((\mathfrak{n}U(\mathfrak{n}))_\Lambda U)^r$.
Hence there exists a matrix $R_1 \in M(r,N,(\mathfrak{n}U(\mathfrak{n}))_\Lambda)$ such that $Hw - \overline{Q}(H)w = R_1u$.
Similarly we can choose a matrix $S\in M(N,(\mathfrak{n}U(\mathfrak{n}))_\Lambda)$ which satisfies $u = \overline{A}w + Su$.
Put $R = R_1(1 - S)^{-1}\overline{A}$.
Then $(H - \overline{Q}(H) - R)w = R_1u - R_1(1 - S)^{-1}\overline{A}w = 0$.
\end{proof}
\begin{lem}\label{lem:Lemma of linear algebra}
Let $\lambda\in\mathbb{C}$ and $Q_0,R\in M(r,\mathbb{C})$.
Assume that $Q_0$ is an upper triangular matrix.
Then there exist matrices $L,T\in M(r,\mathbb{C})$ such that
\[
\begin{cases}
\lambda L - [Q_0,L] = T + R,\\
\text{if $(Q_0)_{ii} - (Q_0)_{jj} \ne \lambda$ then $T_{ij} = 0$}.
\end{cases}
\]
\end{lem}
\begin{proof}
Since $(Q_0)_{ij} = 0$ for $i > j$, we have
\begin{align*}
(\lambda L - [Q_0,L])_{ij} & = \lambda L_{ij} - \sum_{k = 1}^r ((Q_0)_{ik}L_{kj} - L_{ik}(Q_0)_{kj})\\
& = \lambda L_{ij} - \sum_{k = i}^r (Q_0)_{ik}L_{kj} + \sum_{k = 1}^jL_{ik}(Q_0)_{kj}\\
& = (\lambda - ((Q_0)_{ii} - (Q_0)_{jj}))L_{ij} - \sum_{k = i + 1}^r (Q_0)_{ik}L_{kj} + \sum_{k = 1}^{j - 1}L_{ik}(Q_0)_{kj}.
\end{align*}
Hence we can choose $L_{ij}$ and $T_{ij}$ inductively on $(j - i)$.
\end{proof}
\begin{lem}\label{lem:Main lemma of def of boundary value map}
Let $H$ be an element of $\mathfrak{a}$ such that $\alpha(H) > 0$ for all $\alpha\in \Sigma^+$.
Let $Q_0\in M(r,\mathbb{C})$, $R\in M(r,(\mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n}))_\Lambda)$.
Assume $(Q_0)_{ij} = 0$ for $i > j$.
Set $\mathcal{L}^{++} = \{\lambda(H)\mid \lambda\in\Lambda^{++}\}$.
Then there exist matrices $L\in M(r,\widehat{\mathcal{E}}(\mathfrak{n})_\Lambda)$ and $T\in M(r,(\mathfrak{n}U(\mathfrak{n}))_\Lambda)$ such that
\[
\begin{cases}
L \equiv 1_r \pmod{(\mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n}))_\Lambda},\\
(H1_r - Q_0 - T)L = L(H1_r - Q_0 - R),\\
\text{if $(Q_0)_{ii} - (Q_0)_{jj} \not\in\mathcal{L}^{++}$ then $T_{ij} = 0$},\\
\text{if $(Q_0)_{ii} - (Q_0)_{jj}\in\mathcal{L}^{++}$ then $[H,T_{ij}] = ((Q_0)_{ii} - (Q_0)_{jj})T_{ij}$}.
\end{cases}
\]
\end{lem}
\begin{proof}
Set $\mathcal{L} = \{\lambda(H)\mid \lambda\in\Lambda\}$ and $\mathcal{L}^+ = \{\lambda(H)\mid \lambda\in\Lambda^+\}$.
Put $f(\mathbf{n}) = \sum_i \mathbf{n}_i\beta_i$ for $\mathbf{n} = (\mathbf{n}_i) \in \mathbb{Z}^m$.
Set $\widetilde{\Lambda} = \{\mathbf{n}\in \mathbb{Z}_{\ge 0}^m\mid f(\mathbf{n}) \in \Lambda\}$.
We define the order on $\widetilde{\Lambda}$ by $\mathbf{n} < \mathbf{n}'$ if and only if $f(\mathbf{n}) < f(\mathbf{n}')$.
By Corollary~\ref{cor:E(n) as vector space}, we can write $R = \sum_{\mathbf{n}\in \widetilde{\Lambda}}R_{\mathbf{n}}E^{\mathbf{n}}$ where $R_{\mathbf{n}}\in M(r,\mathbb{C})$.
We have $R_{\mathbf{0}} = 0$ where $\mathbf{0} = (0)_i\in \widetilde{\Lambda}$ since $R\in M(r,(\mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n}))_\Lambda)$.
We have to show the existence of $L$ and $T$.
Write $L = 1_r + \sum_{\mathbf{n}\in\widetilde{\Lambda}}L_{\mathbf{n}}E^{\mathbf{n}}$ and $T = \sum_{\mathbf{n}\in\widetilde{\Lambda}}T_{\mathbf{n}}E^{\mathbf{n}}$.
Then $(H1_r - Q_0 - T)L = L(H1_r - Q_0 - R)$ is equivalent to
\[
f(\mathbf{n})(H)L_{\mathbf{n}} - [Q_0,L_{\mathbf{n}}] = T_{\mathbf{n}} + S_{\mathbf{n}} - R_{\mathbf{n}},
\]
where $S_{\mathbf{n}}$ is defined by
\[
\sum_{\mathbf{n}\in\widetilde{\Lambda}}S_{\mathbf{n}}E^{\mathbf{n}} = T(L - 1_r) - (L - 1_r)R.
\]
By Proposition~\ref{prop:fundamental property of ^(lambda)} the above equation is equivalent to
\[
\sum_{f(\mathbf{n}) = \mu} S_{\mathbf{n}}E^{\mathbf{n}} =
\sum_{f(\mathbf{k}) + f(\mathbf{l}) = \mu}T_{\mathbf{k}}L_{\mathbf{l}}E^{\mathbf{k}}E^{\mathbf{l}} -
\sum_{f(\mathbf{k}) + f(\mathbf{l}) = \mu}L_{\mathbf{k}}R_{\mathbf{l}}E^{\mathbf{k}}E^{\mathbf{l}}
\]
for all $\mu\in\mathfrak{a}^*$.
Notice that $L_{\mathbf{0}} = T_{\mathbf{0}} = 0$.
$S_{\mathbf{n}}$ can be defined from the data $\{T_{\mathbf{k}}\mid \mathbf{k} < \mathbf{n}\}$, $\{L_{\mathbf{k}}\mid \mathbf{k} < \mathbf{n}\}$ and $\{R_{\mathbf{k}}\mid \mathbf{k} < \mathbf{n}\}$.
Now we prove the existence of $L$ and $T$.
We choose the $L_{\mathbf{n}}$ and $T_{\mathbf{n}}$ which satisfy
\[
\begin{cases}
L_{\mathbf{0}} = 0,\quad T_{\mathbf{0}} = 0,\\
f(\mathbf{n})(H)L_{\mathbf{n}} - [Q_0,L_{\mathbf{n}}] = T_{\mathbf{n}} + S_{\mathbf{n}} - R_{\mathbf{n}},\\
\text{if $(Q_0)_{ii} - (Q_0)_{jj}\ne f(\mathbf{n})(H)$ then $(T_{\mathbf{n}})_{ij} = 0$}.
\end{cases}
\]
By Lemma~\ref{lem:Lemma of linear algebra}, we can choose such $L_{\mathbf{n}}$ and $T_{\mathbf{n}}$ inductively.
Put $L = 1_r + \sum_{\mathbf{n}\in\widetilde{\Lambda}}L_{\mathbf{n}}E^{\mathbf{n}}$ and $T = \sum_{\mathbf{n}\in\widetilde{\Lambda}}T_{\mathbf{n}}E^{\mathbf{n}}$.
Since
\[
[H,T_{ij}] = \sum_{\mathbf{n}\in\widetilde{\Lambda}}(f(\mathbf{n})(H))(T_{\mathbf{n}})_{ij}E^{\mathbf{n}} = ((Q_0)_{ii} - (Q_0)_{jj})T_{ij},
\]
$L$ and $T$ satisfy the conditions of the lemma.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:definition of boundary value map}]
We can choose positive integers $C = (C_i)\in\mathbb{Z}_{>0}^l$ such that
\[
\textstyle\{\alpha\in\Lambda^{++}\mid \alpha(\sum_i C_iH_i) = (\lambda_i - \lambda_j)(\sum_i C_iH_i)\} =
\begin{cases}
\{\lambda_i - \lambda_j\} & (\lambda_i - \lambda_j \in \Lambda^{++}),\\
\emptyset & (\lambda_i - \lambda_j \not\in \Lambda^{++}).
\end{cases}
\]
The existence of such $C$ is shown by Oshima~\cite[Lemma~2.3]{MR810637}.
Set $X = \sum_i C_iH_i$.
Notice that $(\lambda_i - \lambda_j)(X) \in \{\mu(X)\mid \mu\in\Lambda^{++}\}$ if and only if $\lambda_i - \lambda_j\in \Lambda^{++}$.
By Lemma~\ref{lem:Main lemma of def of boundary value map}, there exist $T\in M(r,(\mathfrak{n}U(\mathfrak{n}))_\Lambda)$ and $L\in M(r,\widehat{\mathcal{E}}(\mathfrak{n})_\Lambda)$ such that
\[
\begin{cases}
L \equiv 1_r \pmod{(\mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n}))_{\Lambda}},\\
(X1_r - \overline{Q}(X) - T)L = L(X1_r - \overline{Q}(X) - R),\\
\text{if $\lambda_i - \lambda_j \not\in \Lambda^{++}$ then $T_{ij} = 0$},\\
\text{if $\lambda_i - \lambda_j \in \Lambda^{++}$ then $[X,T_{ij}] = (\lambda_i - \lambda_j)(X)T_{ij}$}.
\end{cases}
\]
Let $S\in M(N,(\mathfrak{n}U(\mathfrak{n}))_\Lambda)$ be such that $u = \overline{A}w + Su$.
Put $A = (1 - S)^{-1}\overline{A}L^{-1}$, $B = L\overline{B}$ and $v = {}^t(v_1,v_2,\dots,v_r) = Bu = Lw$; then $ABu = (1 - S)^{-1}\overline{A}L^{-1}L\overline{B}u = (1 - S)^{-1}\overline{A}w = u$.
Moreover, we have $(X1_r - \overline{Q}(X) - T)v = 0$.
Since $[X,T_{ij}] = (\lambda_i - \lambda_j)(X)T_{ij}$, we have $(X - \lambda_i(X))^rv_i = 0$.
Fix a positive integer $k$ such that $1\le k\le l$.
We can choose a matrix $R_k\in M(r,(\mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n}))_\Lambda)$ such that $H_kw = (\overline{Q}(H_k) + R_k)w$ by Lemma~\ref{lem:lemma for proof of boudary value map}.
Set $T_k = H_k1_r - \overline{Q}(H_k) - L(H_k1_r - \overline{Q}(H_k) - R_k)L^{-1}$.
Then we have $(H_k1_r - \overline{Q}(H_k) - T_k)v = 0$, i.e.,
\[
H_kv_i - \sum_{j = 1}^r\overline{Q}(H_k)_{ij}v_j - \sum_{j = 1}^r(T_k)_{ij}v_j = 0
\]
for each $i = 1,2,\dots,r$.
By Corollary~\ref{cor:induced equation}, we have
\[
H_kv_i - \sum_{j = 1}^r\overline{Q}(H_k)_{ij}v_j - \sum_{j = 1}^r (T_k)_{ij}^{(X,(\lambda_i - \lambda_j)(X))}v_j = 0.
\]
Define $T_k'\in M(r,(\mathfrak{n}U(\mathfrak{n}))_\Lambda)$ by $(T_k')_{ij} = (T_k)_{ij}^{(X,(\lambda_i - \lambda_j)(X))}$.
Since $(T_k)_{ij}^{(\mu)} = 0$ for $\mu\not\in\Lambda^{++}$, we have
\[
(T_k)_{ij}^{(X,(\lambda_i - \lambda_j)(X))}
= \sum_{\mu \in \Lambda^{++},\ \mu(X) = (\lambda_i - \lambda_j)(X)}(T_k)_{ij}^{(\mu)} =
\begin{cases}
(T_k)_{ij}^{(\lambda_i - \lambda_j)} & (\lambda_i - \lambda_j\in\Lambda^{++}),\\
0 & (\lambda_i - \lambda_j\not\in\Lambda^{++})
\end{cases}
\]
by the condition of $C$.
In particular $[H,(T_k')_{ij}] = (\lambda_i - \lambda_j)(H)(T_k')_{ij}$ for all $H\in \mathfrak{a}$.
Put $Q(\sum x_iH_i) = \overline{Q}(\sum x_iH_i) + \sum x_iT_i'$ for $(x_1,x_2,\dots,x_l)\in \mathbb{C}^l$ then $Q$ satisfies the condition of the theorem.
\end{proof}
Set $\rho = \sum_{\alpha\in\Sigma^+}(\dim \mathfrak{g}_\alpha/2)\alpha$.
From the Iwasawa decomposition $\mathfrak{g} = \mathfrak{k}\mathrm{op}lus\mathfrak{a}\mathrm{op}lus\mathfrak{n}$ we have the decomposition into the direct sum
\[
U(\mathfrak{g}) = \mathfrak{n}U(\mathfrak{a}\mathrm{op}lus\mathfrak{n})\mathrm{op}lus U(\mathfrak{a})\mathrm{op}lus U(\mathfrak{g})\mathfrak{k}.
\]
Let $\chi_1$ be the projection of $U(\mathfrak{g})$ to $U(\mathfrak{a})$ with respect to this decomposition and $\chi_2$ the algebra automorphism of $U(\mathfrak{a})$ defined by $\chi_2(H) = H - \rho(H)$ for $H\in\mathfrak{a}$.
We define $\chi\colon U(\mathfrak{g})^\mathfrak{k}\to U(\mathfrak{a})$ by $\chi = \chi_2\circ\chi_1$ where $U(\mathfrak{g})^\mathfrak{k} = \{u\in U(\mathfrak{g})\mid \text{$Xu = uX$ for all $X\in \mathfrak{k}$}\}$.
It is known that the image of $U(\mathfrak{g})^\mathfrak{k}$ under $\chi$ is contained in the set of $W$-invariant elements in $U(\mathfrak{a})$.
Fix $\lambda\in \mathfrak{a}^*$.
We can define the algebra homomorphism $U(\mathfrak{a})\to \mathbb{C}$ by $H\mapsto \lambda(H)$ for $H\in \mathfrak{a}$.
We denote this map by the same letter $\lambda$.
Put $\chi_\lambda = \lambda\circ\chi$.
Set $U(\lambda) = U(\mathfrak{g})/(U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})$, $u_\lambda = 1\mod{(U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})}\in U(\lambda)$ and $U(\lambda)_0 = U(\mathfrak{g})_{2\mathcal{P}}u_\lambda$.
Before applying Theorem~\ref{thm:definition of boundary value map} to $U(\lambda)_0$, we give some lemmas.
\begin{lem}\label{lem:deformation by k}
Let $u\in U(\mathfrak{g})_\mu$ where $\mu \in \mathfrak{a}^*$.
Then there exists an element $x\in U(\mathfrak{g})\mathfrak{k}$ such that $u + x\in U(\mathfrak{a}\oplus\mathfrak{n})_{\mu + 2\mathcal{P}}$.
\end{lem}
\begin{proof}
Set $\overline{\mathfrak{n}} = \theta(\mathfrak{n})$.
We may assume $u\in U(\overline{\mathfrak{n}})_\mu$.
Let $\{U_n(\overline{\mathfrak{n}})\}_{n\in\mathbb{Z}_{\ge 0}}$ be the standard filtration of $U(\overline{\mathfrak{n}})$ and $n$ the smallest integer such that $u\in U_n(\overline{\mathfrak{n}})$.
We will prove the existence of $x$ by induction on $n$.
If $n = 0$ then the lemma is obvious.
Assume $n > 0$.
We may assume that there exist a restricted root $\alpha\in\Sigma^+$, an element $u_0\in U_{n - 1}(\overline{\mathfrak{n}})_{\mu + \alpha}$ and a vector $E_{-\alpha}\in \mathfrak{g}_{-\alpha}$ such that $u = u_0 E_{-\alpha}$.
Set $E_\alpha = \theta(E_{-\alpha})$, $u_1 = u_0E_\alpha$, $u_2 = E_\alpha u_0$ and $u_3 = u_1 - u_2$.
Then $u + u_2 + u_3 = u + u_1\in U(\mathfrak{g})\mathfrak{k}$, $u_1,u_2\in U(\mathfrak{g})_{\mu + 2\alpha}$ and $u_3\in U_{n - 1}(\mathfrak{g})_{\mu + 2\alpha}$.
Using the Poincar\'e-Birkhoff-Witt theorem and the induction hypothesis we can choose an element $u_5\in U(\mathfrak{a}\oplus\mathfrak{n})_{\mu + 2\mathcal{P}}$ such that $u_3 - u_5\in U(\mathfrak{g})\mathfrak{k}$.
Again by the induction hypothesis we can choose an element $u_6\in U(\mathfrak{a}\oplus\mathfrak{n})_{\mu + \alpha + 2\mathcal{P}}$ such that $u_0 - u_6\in U(\mathfrak{g})\mathfrak{k}$.
Then $u + u_5 + E_\alpha u_6\in U(\mathfrak{g})\mathfrak{k}$, $u_5\in U(\mathfrak{a}\oplus\mathfrak{n})_{\mu + 2\mathcal{P}}$ and $E_\alpha u_6\in U(\mathfrak{a}\oplus\mathfrak{n})_{\mu + 2\alpha + 2\mathcal{P}}$.
\end{proof}
\begin{lem}\label{lem:lemma for only t^2}
The following hold.
\begin{enumerate}
\item $\Ker \chi_\lambda\subset U(\mathfrak{g})_{2\mathcal{P}}$.
\item $U(\mathfrak{a}\oplus\mathfrak{n})\cap (\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k}) \subset U(\mathfrak{a}\oplus \mathfrak{n})_{2\mathcal{P}} \cap (\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})$.
\item $U(\mathfrak{a}\oplus\mathfrak{n})\cap (U(\mathfrak{a}\oplus\mathfrak{n})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})
\subset U(\mathfrak{a}\oplus \mathfrak{n})(U(\mathfrak{a}\oplus\mathfrak{n})\cap (\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k}))$.
\item $U(\mathfrak{a}\oplus \mathfrak{n})\cap (U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k}) = U(\mathfrak{a}\oplus\mathfrak{n})((U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})\cap U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}})$.
\end{enumerate}
\end{lem}
\begin{proof}
(1)
It is sufficient to prove $U(\mathfrak{g})^\mathfrak{k} \subset U(\mathfrak{g})_{2\mathcal{P}}$.
Let $G$ be a connected Lie group whose Lie algebra is $\mathfrak{g}_0$ and $K$ its maximal compact subgroup such that $\Lie(K) = \mathfrak{k}_0$.
Since $K$ is connected, $U(\mathfrak{g})^\mathfrak{k} = U(\mathfrak{g})^K = \{u\in U(\mathfrak{g})\mid \text{$\Ad(k)u = u$ for all $k\in K$}\}$.
Assume that $G$ has the complexification $G_\mathbb{C}$.
Fix a maximal abelian subspace $\mathfrak{t}_0$ of $\mathfrak{m}_0$.
Let $K_{\mathrm{split}}$ and $A_{\mathrm{split}}$ be the analytic subgroups with Lie algebras given as the intersections of $\mathfrak{k}_0$ and $\mathfrak{a}_0$ with $[Z_{\mathfrak{g}_0}(\mathfrak{t}_0),Z_{\mathfrak{g}_0}(\mathfrak{t}_0)]$ where $Z_{\mathfrak{g}_0}(\mathfrak{t}_0)$ is the centralizer of $\mathfrak{t}_0$ in $\mathfrak{g}_0$.
Let $F$ be the centralizer of $A_{\mathrm{split}}$ in $K_{\mathrm{split}}$.
Since $F\subset K$, we have $U(\mathfrak{g})^K \subset U(\mathfrak{g})^F$.
On the other hand, we have $U(\mathfrak{g})^F\subset U(\mathfrak{g})_{2\mathcal{P}}$ (See Knapp~\cite[Theorem~7.55]{MR1920389} and Lepowsky~\cite[Proposition~6.1, Proposition~6.4]{MR0379613}).
Hence (1) follows.
(2)
Let $u\in \Ker\chi_\lambda$ and $x\in U(\mathfrak{g})\mathfrak{k}$ be such that $u + x \in U(\mathfrak{a}\oplus\mathfrak{n})$.
We can write $u = \sum_\mu u_\mu$ where $u_\mu \in U(\mathfrak{g})_\mu$.
By (1), we have $u_\mu = 0$ for $\mu\not\in 2\mathcal{P}$.
Let $\mu \in 2\mathcal{P}$.
By Lemma~\ref{lem:deformation by k}, there exists an element $y_\mu\in U(\mathfrak{g})\mathfrak{k}$ such that $u_\mu + y_\mu\in U(\mathfrak{a}\oplus\mathfrak{n})_{\mu + 2\mathcal{P}} = U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}$.
Put $y = \sum_\mu y_\mu$.
Then $u + y \in U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}$.
Since $u + y\in U(\mathfrak{a}\oplus\mathfrak{n})$ and $x,y\in U(\mathfrak{g})\mathfrak{k}$ we have $y = x$ by the Poincar\'e-Birkhoff-Witt theorem.
Hence we have $u + x = u + y\in U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}$.
(3)
Let $\sum_i x_iu_i\in U(\mathfrak{a}\oplus\mathfrak{n})\Ker\chi_\lambda$ where $x_i\in U(\mathfrak{a}\oplus\mathfrak{n})$ and $u_i\in \Ker\chi_\lambda$.
We write $u_i = \sum_j z_j^{(i)}v_j^{(i)}$ where $z_j^{(i)}\in U(\mathfrak{a}\oplus\mathfrak{n})$ and $v_j^{(i)}\in U(\mathfrak{k})$.
Let $y\in U(\mathfrak{g})\mathfrak{k}$ and assume $\sum_i x_iu_i + y\in U(\mathfrak{a}\oplus\mathfrak{n})$.
By the Poincar\'e-Birkhoff-Witt theorem, $\sum_i x_iu_i + y = \sum_{i,j} x_iz_j^{(i)}v_{j,0}^{(i)}$ where $v_{j,0}^{(i)}$ is the constant term of $v_j^{(i)}$.
Hence $\sum_i x_iu_i + y = \sum_i x_i(u_i + \sum_j z_j^{(i)}(v_{j,0}^{(i)} - v_j^{(i)})) \in U(\mathfrak{a}\oplus\mathfrak{n})(U(\mathfrak{a}\oplus\mathfrak{n})\cap (\Ker \chi_\lambda + U(\mathfrak{g})\mathfrak{k}))$.
(4)
Since $\Ker\chi_\lambda \subset U(\mathfrak{g})^\mathfrak{k}$, we have
\[
U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k} = U(\mathfrak{a}\oplus\mathfrak{n})(\Ker\chi_\lambda)U(\mathfrak{k}) + U(\mathfrak{g})\mathfrak{k} = U(\mathfrak{a}\oplus\mathfrak{n})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k}.
\]
By (2) and (3), we have
\begin{align*}
U(\mathfrak{a}\oplus\mathfrak{n})\cap (U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})
& = U(\mathfrak{a}\oplus\mathfrak{n})\cap (U(\mathfrak{a}\oplus\mathfrak{n})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})\\
& \subset U(\mathfrak{a}\oplus\mathfrak{n})(U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}\cap (\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k}))\\
& \subset U(\mathfrak{a}\oplus\mathfrak{n})((U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})\cap U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}).
\end{align*}
This implies (4).
\end{proof}
\begin{lem}\label{lem:only t^2}
We have the following equations.
\begin{enumerate}
\item $U(\lambda)_0 = U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}u_\lambda$.
\item $U(\mathfrak{n})\otimes_{U(\mathfrak{n})_{2\mathcal{P}}}U(\lambda)_0 \simeq U(\lambda)$ under the map $p\otimes u\mapsto pu$.
\end{enumerate}
\end{lem}
\begin{proof}
(1) This is obvious from Lemma~\ref{lem:deformation by k}.
(2)
Let $I = U(\mathfrak{a}\oplus \mathfrak{n})\cap (U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})$.
We have $U(\mathfrak{a}\oplus\mathfrak{n})\otimes_{U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}}U = U(\mathfrak{n})\otimes_{U(\mathfrak{n})_{2\mathcal{P}}}U$ for any $U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}$-module $U$ since $U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}} = U(\mathfrak{a})\otimes U(\mathfrak{n})_{2\mathcal{P}}$.
By (1), we have $U(\lambda)_0 = U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}/(I\cap U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}})$.
Hence
\begin{align*}
U(\mathfrak{n})\otimes_{U(\mathfrak{n})_{2\mathcal{P}}}U(\lambda)_0
& = U(\mathfrak{a}\oplus\mathfrak{n})\otimes_{U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}}U(\lambda)_0\\
& = U(\mathfrak{a}\oplus\mathfrak{n})\otimes_{U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}}(U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}/(I\cap U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}))\\
& = U(\mathfrak{a}\oplus\mathfrak{n})/(U(\mathfrak{a}\oplus\mathfrak{n})(I\cap U(\mathfrak{a}\oplus\mathfrak{n})_{2\mathcal{P}}))\\
& = U(\mathfrak{a}\oplus\mathfrak{n})/I\\
& = U(\lambda)
\end{align*}
by Lemma~\ref{lem:lemma for only t^2} (4).
\end{proof}
\begin{lem}\label{lem:soe finiteness property of U(n)_2P}
Let $\{U_n(\mathfrak{n})\}_{n\in\mathbb{Z}_{\ge 0}}$ be the standard filtration of $U(\mathfrak{n})$ and $U_n(\mathfrak{n})_{2\mathcal{P}} = U_n(\mathfrak{n})\cap U(\mathfrak{n})_{2\mathcal{P}}$.
Set $U_{-1}(\mathfrak{n}) = U_{-1}(\mathfrak{n})_{2\mathcal{P}} = 0$, $R = \Gr U(\mathfrak{n})_{2\mathcal{P}} = \bigoplus_n U_n(\mathfrak{n})_{2\mathcal{P}}/U_{n - 1}(\mathfrak{n})_{2\mathcal{P}}$ and $R' = \Gr U(\mathfrak{n}) = \bigoplus_n U_n(\mathfrak{n})/U_{n - 1}(\mathfrak{n})$.
\begin{enumerate}
\item $R'$ is a finitely generated $R$-module.
\item $U(\mathfrak{n})$ is a finitely generated $U(\mathfrak{n})_{2\mathcal{P}}$-module.
\item $U(\mathfrak{n})_{2\mathcal{P}}$ is right and left Noetherian.
\item $U(\lambda)_0$ is a finitely generated $U(\mathfrak{n})_{2\mathcal{P}}$-module.
\end{enumerate}
\end{lem}
\begin{proof}
(1)
Let $\Gamma = \{E^\varepsilon\mid \varepsilon\in\{0,1\}^m\}$.
We denote the principal symbol of $u\in U(\mathfrak{n})$ by $\sigma(u)$.
Notice that if $u\in U(\mathfrak{n})_{2\mathcal{P}}$ then $\sigma(u)$ is the principal symbol of $u$ as an element of $U(\mathfrak{n})_{2\mathcal{P}}$.
We will prove that $\{\sigma(E)\mid E\in \Gamma\}$ generates $R'$ as an $R$-module.
Let $x\in R'$.
We may assume that $x$ is homogeneous, thus there exists an element $u\in U(\mathfrak{n})$ such that $x = \sigma(u)$.
Moreover we may assume that there exists a tuple of non-negative integers $p = (p_1,p_2,\dots,p_m)$ such that $u = E^p$.
Choose $\varepsilon_i\in \{0,1\}$ such that $\varepsilon_i\equiv p_i\pmod{2}$.
Set $q_i = (p_i - \varepsilon_i)/2 \in \mathbb{Z}_{\ge 0}$, $\varepsilon = (\varepsilon_1,\varepsilon_2,\dots,\varepsilon_m)$ and $q = (q_1,q_2,\dots,q_m)$.
Then we have $x = \sigma(E^p) = \sigma(E^{2q})\sigma(E^\varepsilon)$.
Since $\sigma(E^{2q})\in R$, this implies that $\{\sigma(E)\mid E\in \Gamma\}$ generates $R'$ as an $R$-module.
(2) This is a direct consequence of (1).
(3)
By the Poincar\'e-Birkhoff-Witt theorem, $R'$ is isomorphic to a polynomial ring.
In particular $R'$ is Noetherian.
By the theorem of Eakin-Nagata and (1), $R$ is Noetherian.
This implies (3).
(4)
Since $U(\lambda)$ is a finitely generated $U(\mathfrak{n})$-module and $U(\mathfrak{n})$ is a finitely generated $U(\mathfrak{n})_{2\mathcal{P}}$-module, $U(\lambda)$ is a finitely generated $U(\mathfrak{n})_{2\mathcal{P}}$-module.
Hence $U(\lambda)_0$ is a finitely generated $U(\mathfrak{n})_{2\mathcal{P}}$-module by (3).
\end{proof}
We enumerate $W = \{w_1,w_2,\dots,w_r\}$ such that $\re w_1\lambda\ge \re w_2\lambda\ge \dots \ge \re w_r\lambda$.
\begin{thm}\label{thm:definition of boundary value map for G/K}
There exist matrices $A\in M(1,r,\widehat{\mathcal{E}}(\mathfrak{n})_{2\mathcal{P}})$ and $B\in M(r,1,\widehat{\mathcal{E}}(\mathfrak{a}\oplus\mathfrak{n},\mathfrak{n})_{2\mathcal{P}})$ such that $v_\lambda = Bu_\lambda\in(\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U(\lambda))^r$ satisfies the following conditions:
\begin{itemize}
\item There exists a linear map $Q\colon \mathfrak{a}\to M(r,U(\mathfrak{n})_{2\mathcal{P}})$ such that
\[
\begin{cases}
\text{$Hv_\lambda = Q(H)v_\lambda$ for all $H\in\mathfrak{a}$},\\
\text{$Q(H)_{ii} = (\rho + w_i\lambda)(H)$ for all $H\in\mathfrak{a}$},\\
\text{if $w_i\lambda - w_j\lambda \not\in2\mathcal{P}^+$ then $Q(H)_{ij} = 0$ for all $H\in\mathfrak{a}$},\\
\text{if $w_i\lambda - w_j\lambda \in 2\mathcal{P}^+$ then $[H',Q(H)_{ij}] = (w_i\lambda - w_j\lambda)(H')Q(H)_{ij}$ for all $H,H'\in \mathfrak{a}$}.
\end{cases}
\]
\item We have $u_\lambda = Av_\lambda$.
\item Let $(v_1,v_2,\dots,v_r) = v_\lambda$. Then $\{v_i\pmod{\mathfrak{n}U(\lambda)}\}$ is a basis of $U(\lambda)/\mathfrak{n}U(\lambda)$.
\end{itemize}
\end{thm}
\begin{proof}
Let $u_1,u_2,\dots,u_N$ be generators of $U(\lambda)_0$ as a $U(\mathfrak{n})_{2\mathcal{P}}$-module.
These are also generators of $U(\lambda)$ as a $U(\mathfrak{n})$-module by Lemma~\ref{lem:only t^2}.
We choose matrices $C = {}^t(C_1,C_2,\dots,C_N)\in M(N,1,U(\mathfrak{a}\oplus \mathfrak{n})_{2\mathcal{P}})$ and $D = (D_1,D_2,\dots,D_N)\in M(1,N,U(\mathfrak{n})_{2\mathcal{P}})$ such that ${}^t(u_1,u_2,\dots,u_N) = Cu_\lambda$ and $u_\lambda = D\,{}^t(u_1,u_2,\dots,u_N)$.
Notice that $U(\mathfrak{n})_{2\mathcal{P}} + \mathfrak{n}U(\mathfrak{n}) = U(\mathfrak{n})$.
By Lemma~\ref{lem:only t^2},
\begin{align*}
U(\lambda)/\mathfrak{n}U(\lambda) & =
(U(\mathfrak{n})/\mathfrak{n}U(\mathfrak{n}))\otimes_{U(\mathfrak{n})}U(\lambda)\\
& = (U(\mathfrak{n})/\mathfrak{n}U(\mathfrak{n}))\otimes_{U(\mathfrak{n})}U(\mathfrak{n})\otimes_{U(\mathfrak{n})_{2\mathcal{P}}}U(\lambda)_0\\
& = (U(\mathfrak{n})/\mathfrak{n}U(\mathfrak{n}))\otimes_{U(\mathfrak{n})_{2\mathcal{P}}}U(\lambda)_0\\
& = ((U(\mathfrak{n})_{2\mathcal{P}} + \mathfrak{n}U(\mathfrak{n}))/\mathfrak{n}U(\mathfrak{n}))\otimes_{U(\mathfrak{n})_{2\mathcal{P}}}U(\lambda)_0\\
& = (U(\mathfrak{n})_{2\mathcal{P}}/(\mathfrak{n}U(\mathfrak{n})\cap U(\mathfrak{n})_{2\mathcal{P}}))\otimes_{U(\mathfrak{n})_{2\mathcal{P}}}U(\lambda)_0\\
& = (U(\mathfrak{n})_{2\mathcal{P}}/(\mathfrak{n}U(\mathfrak{n}))_{2\mathcal{P}})\otimes_{U(\mathfrak{n})_{2\mathcal{P}}}U(\lambda)_0\\
& = U(\lambda)_0/(\mathfrak{n}U(\mathfrak{n}))_{2\mathcal{P}}U(\lambda)_0.
\end{align*}
On the other hand,
\begin{align*}
U(\lambda)/\mathfrak{n}U(\lambda) & = U(\mathfrak{g})/(\mathfrak{n}U(\mathfrak{g}) + U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})\\
& = (\mathfrak{n}U(\mathfrak{g}) + U(\mathfrak{a}) + U(\mathfrak{g})\mathfrak{k})/(\mathfrak{n}U(\mathfrak{g}) + U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})\\
& = U(\mathfrak{a})/((\mathfrak{n}U(\mathfrak{g}) + U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})\cap U(\mathfrak{a})).
\end{align*}
By the definition of $\chi_\lambda$, we have
\[
(\mathfrak{n}U(\mathfrak{g}) + U(\mathfrak{g})\Ker\chi_\lambda + U(\mathfrak{g})\mathfrak{k})\cap U(\mathfrak{a}) = \sum_{p\in U(\mathfrak{a})^W}U(\mathfrak{a})(\chi_2^{-1}(p) - \lambda(p))
\]
where $U(\mathfrak{a})^W$ is the $\mathbb{C}$-algebra of $W$-invariant elements of $U(\mathfrak{a})$.
By the result of Oshima~\cite[Proposition~2.8]{MR1039854}, the set of eigenvalues of $H\in\mathfrak{a}$ on $U(\mathfrak{a})/(\sum_{p\in U(\mathfrak{a})^W}U(\mathfrak{a})(\chi_2^{-1}(p) - \lambda(p)))$ is $\{(\rho + w\lambda)(H) \mid w\in W\}$ with multiplicities.
We take matrices $A'\in M(N,r,\widehat{\mathcal{E}}(\mathfrak{n})_{2\mathcal{P}})$ and $B'\in M(r,N,\widehat{\mathcal{E}}(\mathfrak{n})_{2\mathcal{P}})$ such that the conditions of Theorem~\ref{thm:definition of boundary value map} hold.
Put $A = DA'$, $B = B'C$ then $A,B$ satisfy the conditions of the theorem.
\end{proof}
\section{Structure of Jacquet modules (regular case)}\label{sec:Structure of Jacquet modules (regular case)}
In this section we assume that $\lambda$ is regular, i.e., $w\lambda \ne \lambda$ for all $w\in W\setminus\{ e\}$.
Let $r = \# W$ and $v_\lambda = (v_1,v_2,\dots,v_r)\in(\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U(\lambda))^r$ as in Theorem~\ref{thm:definition of boundary value map for G/K}.
Set $\mathcal{W}(i) = \{j\mid w_i\lambda - w_j\lambda\in 2\mathcal{P}^+\}$ for each $i = 1,2,\dots,r$.
\begin{thm}\label{thm:relations of v_i}
We have $Xv_i\in \sum_{j\in\mathcal{W}(i)}U(\mathfrak{g})v_j$ for all $X\in\theta(\mathfrak{n})\oplus\mathfrak{m}$.
\end{thm}
Let $A = {}^t(A^{(1)},A^{(2)},\dots,A^{(r)})$ be as in Theorem~\ref{thm:definition of boundary value map for G/K} and $\overline{A} = {}^t(\overline{A^{(1)}},\overline{A^{(2)}},\dots,\overline{A^{(r)}})$ an element of $M(r,1,\mathbb{C})$ such that $A^{(i)} - \overline{A^{(i)}} \in \mathfrak{n}\widehat{\mathcal{E}}(\mathfrak{n})$.
\begin{lem}
We have $\overline{A^{(i)}} \ne 0$ for each $i = 1,2,\dots,r$.
\end{lem}
\begin{proof}
Put $\overline{U(\lambda)} = U(\lambda)/\mathfrak{n}U(\lambda)$, $\overline{u_\lambda} = u_\lambda\pmod{\mathfrak{n}U(\lambda)}$ and $\overline{v_i} = v_i\pmod{\mathfrak{n}U(\lambda)}$.
Let $\overline{B} = (\overline{B^{(1)}},\overline{B^{(2)}},\dots,\overline{B^{(r)}})$ be a matrix in $M(1,r,U(\mathfrak{a}))$ such that $\overline{v_j} = \overline{B^{(j)}}\overline{u_\lambda}$.
Then we have $\overline{v_j} = \sum_i \overline{A^{(i)}}\,\overline{B^{(j)}}\overline{v_i}$.
For each $H\in \mathfrak{a}$, $\overline{v_j}$ is a generalized eigenvector of $H$ with eigenvalue $(\rho + w_j\lambda)(H)$, and $\overline{B^{(j)}}\overline{v_i}$ is one with eigenvalue $(\rho + w_i\lambda)(H)$.
Since $\lambda$ is regular, these eigenvalues are distinct for $i \ne j$, so in the expression above only the term with $i = j$ survives; hence $\overline{v_j} = \overline{A^{(j)}}\,\overline{B^{(j)}}\overline{v_j}$ and $\overline{A^{(j)}} \ne 0$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:relations of v_i}]
Put $f(\mathbf{n}) = \sum_i \mathbf{n}_i\beta_i$ for $\mathbf{n} = (\mathbf{n}_i) \in \mathbb{Z}^m$.
Set $\widetilde{\Lambda} = \{\mathbf{n}\in \mathbb{Z}_{\ge 0}^m\mid f(\mathbf{n}) \in 2\mathcal{P}\}$.
We write $A^{(j)} = \sum_{\mathbf{n}\in\widetilde{\Lambda}} A^{(j)}_{\mathbf{n}} E^{\mathbf{n}}$.
Let $\alpha\in\Sigma^+$ and $E_\alpha\in\mathfrak{g}_\alpha$.
Since $\mathfrak{k}u_\lambda = 0$, we have $(\theta(E_\alpha) + E_\alpha)u_\lambda = 0$.
Hence $(\theta(E_\alpha) + E_\alpha)\sum_j\sum_{\mathbf{n}} A^{(j)}_{\mathbf{n}} E^{\mathbf{n}}v_j = 0$.
By applying Corollary~\ref{cor:induced equation} we have
\[
\sum_{j = 1}^r\left(\sum_{{\mathbf{n}}\in\widetilde{\Lambda}} A^{(j)}_{\mathbf{n}} (\theta(E_\alpha) + E_\alpha)E^{\mathbf{n}}\right)^{(w_i\lambda - w_j\lambda - \alpha )}v_j = 0
\]
for $i = 1,2,\dots,r$.
On one hand, if $w_i\lambda - w_j\lambda\not\in 2\mathcal{P}^+$ then
\[
\left(\sum_{{\mathbf{n}}\in\widetilde{\Lambda}} A^{(j)}_{\mathbf{n}} (\theta(E_\alpha) + E_\alpha)E^{\mathbf{n}}\right)^{(w_i\lambda - w_j\lambda - \alpha )} = 0.
\]
On the other hand
\[
\left(\sum_{{\mathbf{n}}\in\widetilde{\Lambda}} A^{(i)}_{\mathbf{n}} (\theta(E_\alpha) + E_\alpha)E^{\mathbf{n}}\right)^{(-\alpha)} = A^{(i)}_{\mathbf{0}}\theta(E_\alpha).
\]
Hence we have
\[
A_{\mathbf{0}}^{(i)}\theta(E_\alpha)v_i \in \sum_{j\in \mathcal{W}(i)}U(\mathfrak{g})v_j.
\]
Since $A^{(i)}_{\mathbf{0}} = \overline{A^{(i)}} \ne 0$, we have
\[
\theta(E_{\alpha})v_i \in \sum_{j\in \mathcal{W}(i)}U(\mathfrak{g})v_j.
\]
Next let $X$ be an element of $\mathfrak{m}$.
By Corollary~\ref{cor:induced equation}, we have
\[
\sum_{j = 1}^r\left(\sum_{\mathbf{n}\in\widetilde{\Lambda}} A^{(j)}_{\mathbf{n}} XE^{\mathbf{n}}\right)^{(w_i\lambda - w_j\lambda)}v_j = 0.
\]
We can prove $Xv_i \in \sum_{j\in \mathcal{W}(i)}U(\mathfrak{g})v_j$ by the same argument.
\end{proof}
Put $V(\lambda) = \sum_i U(\mathfrak{g})v_i \subset \widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U(\lambda)$.
\begin{cor}\label{cor:V = J(U)}
\[
V(\lambda) = J(U(\lambda)).
\]
\end{cor}
\begin{proof}
By Theorem~\ref{thm:relations of v_i}, $V(\lambda)$ is finitely generated as a $U(\mathfrak{n})$-module.
By applying Proposition~\ref{prop:generalized result of Goodman and Wallach}, we see that the map
$\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}V(\lambda) \to \prod_{\mu\in\mathfrak{a}^*}V(\lambda)_\mu$ is bijective.
Hence
$\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}V(\lambda)\to
\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U(\lambda)$
is injective by Proposition~\ref{prop:induced equation}.
This map is also surjective since $v_1,v_2,\dots,v_r$ are generators of $\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U(\lambda)$.
We have
$\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}V(\lambda) =
\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U(\lambda)$.
Since $U(\lambda)$ and $V(\lambda)$ are finitely generated as $U(\mathfrak{n})$-modules, we have
\begin{align*}
\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}U(\lambda) = \widehat{J}(U(\lambda)),\\
\widehat{\mathcal{E}}(\mathfrak{g},\mathfrak{n})\otimes_{U(\mathfrak{g})}V(\lambda) = \widehat{J}(V(\lambda)),
\end{align*}
by Proposition~\ref{prop:algebraic property of E(n)}.
Hence we have $J(U(\lambda)) = J(V(\lambda)) = V(\lambda)$ by Corollary~\ref{cor:Jacquet module of trivial case}.
\end{proof}
Recall the definition of a generalized Verma module.
Set $\overline{\mathfrak{p}} = \theta(\mathfrak{p})$, $\overline{\mathfrak{n}} = \theta(\mathfrak{n})$ and $\rho = \sum_{\alpha\in\Sigma^+}(\dim \mathfrak{g}_\alpha/2)\alpha$.
\begin{defn}[Generalized Verma module]\label{defn:generalized Verma module}
Let $\mu\in\mathfrak{a}^*$.
Define the one-dimensional representation $\mathbb{C}_{\rho + \mu}$ of $\overline{\mathfrak{p}}$ by $(X + Y + Z)v = (\rho + \mu)(Y)v$ for $X\in \mathfrak{m}$, $Y\in\mathfrak{a}$, $Z\in\overline{\mathfrak{n}}$, $v\in \mathbb{C}_{\rho + \mu}$.
We define a $U(\mathfrak{g})$-module $M(\mu)$ by
\[
M(\mu) = U(\mathfrak{g})\otimes_{U(\overline{\mathfrak{p}})}\mathbb{C}_{\rho + \mu}.
\]
This is called a generalized Verma module.
\end{defn}
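Note that, since $\mathfrak{g} = \mathfrak{n}\oplus\overline{\mathfrak{p}}$, the Poincar\'e-Birkhoff-Witt theorem gives an isomorphism $M(\mu) \simeq U(\mathfrak{n})\otimes\mathbb{C}_{\rho + \mu}$ of $U(\mathfrak{n})$-modules; in particular $M(\mu)$ is a free $U(\mathfrak{n})$-module of rank one.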
Set $V_i = \sum_{j \ge i}U(\mathfrak{g})v_j$.
By the universality of tensor products, any $U(\overline{\mathfrak{p}})$-module homomorphism $\mathbb{C}_{\rho + \mu} \to U$ extends uniquely to a $U(\mathfrak{g})$-module homomorphism $M(\mu)\to U$ for any $U(\mathfrak{g})$-module $U$.
In particular we have the surjective $U(\mathfrak{g})$-module homomorphism $M(w_i\lambda)\to V_i/V_{i + 1}$.
We shall show that $V_i/V_{i + 1}$ is isomorphic to a generalized Verma module using the character theory.
Let $G$ be a connected Lie group such that $\Lie(G) = \mathfrak{g}_0$, $K$ its maximal compact subgroup with Lie algebra $\mathfrak{k}_0$, $P$ the parabolic subgroup whose Lie algebra is $\mathfrak{p}_0$ and $P = MAN$ the Langlands decomposition of $P$, where the Lie algebra of $M$ (resp.\ $A$, $N$) is $\mathfrak{m}_0$ (resp.\ $\mathfrak{a}_0$, $\mathfrak{n}_0$).
Since $U(w\lambda) = U(\lambda)$ for $w\in W$, we may assume that $\re\lambda$ is dominant, i.e., $\re\lambda(H_i) \le 0$ for each $i = 1,2,\dots,l$.
By the result of Kostant~\cite[Theorem~2.10.3]{MR0399361}, $U(\lambda)$ is isomorphic to the space of $K$-finite vectors of the non-unitary principal series representation $\Ind_{P}^G (1\otimes\lambda)_K$.
The character of this representation is calculated by Harish-Chandra (See Knapp~\cite[Proposition~10.18]{MR1880691}).
Before we state it, we prepare some notations.
Let $H = TA$ be the maximally split Cartan subgroup,
$\mathfrak{h}_0$ its Lie algebra,
$T = H\cap M$,
$\Delta$ the root system of $H$,
$\Delta^+$ the positive system compatible with $\Sigma^+$,
$\Delta_I$ the set of imaginary roots,
$\Delta_I^+ = \Delta^+ \cap \Delta_I$ and
$\xi_\alpha$ the one-dimensional representation of $H$ whose differential is $\alpha$ for $\alpha\in\mathfrak{h}^*$.
Under these notations, the distribution character $\Theta_G(U(\lambda))$ of $U(\lambda)$ is as follows:
\[
\Theta_G(U(\lambda))(ta) = \frac{\sum_{w \in W}\xi_{\rho + w\lambda}(a)}{\prod_{\alpha\in\Delta^+\setminus\Delta_I^+}\lvert1 - \xi_\alpha(ta)\rvert}
\quad (t\in T,\ a\in A).
\]
We will use the Osborne conjecture, which was proved by Hecht and Schmid~\cite[Theorem~3.6]{MR716371}.
To state it, we must define a character of $J(U)$ for a Harish-Chandra module $U$.
Recall that $J(U)$ is an object of the category $\mathcal{O}'_P$, i.e.,
\begin{enumerate}
\item the actions of $M\cap K$ and $\mathfrak{g}$ are compatible,
\item $J(U)$ splits under $\mathfrak{a}$ into a direct sum of generalized weight spaces, each of them being a Harish-Chandra module for $MA$,
\item $J(U)$ is $U(\overline{\mathfrak{n}})$- and $Z(\mathfrak{g})$-finite
\end{enumerate}
(See Hecht and Schmid~\cite[(34)Lemma]{MR705884}).
For an object $V$ of $\mathcal{O}'_P$, we define the character $\Theta_P(V)$ of $V$ by
\[
\Theta_P(V) = \sum_{\mu\in\mathfrak{a}^*}\Theta_{MA}(V_\mu),
\]
where $V_\mu$ is a generalized $\mu$-weight space of $V$.
Let $G'$ be the set of regular elements of $G$.
Set
\begin{gather*}
A^{-} = \{a\in A\mid \text{$\alpha(\log a) < 0$ for all $\alpha\in\Sigma^+$}\},\\
(MA)^{-} = \text{interior of $\left\{g\in MA \Biggm| \text{$\prod_{\alpha\in\Delta^+\setminus\Delta_I^+}(1 - \xi_\alpha(ga)) \ge 0$ for all $a\in A^{-}$}\right\}$ in $MA$}.
\end{gather*}
Then the Osborne conjecture says that $\Theta_G(U)$ and $\Theta_P(J(U))$ coincide on $(MA)^{-}\cap G'$ (See Hecht and Schmid~\cite[(42)Lemma]{MR705884}).
It is easy to calculate the character of a generalized Verma module.
We have
\[
\Theta_P(M(\mu))(ta) = \frac{\xi_{\rho + \mu}(a)}{\prod_{\alpha\in\Delta^+\setminus\Delta^+_I}(1 - \xi_\alpha(ta))} \quad (t\in T, \ a\in A).
\]
Consequently we have
\[
\Theta_P(J(U(\lambda))) = \sum_{w\in W}\Theta_P(M(w\lambda)).
\]
This implies the following theorem when $\lambda$ is regular.
\begin{thm}\label{thm:Main theorem for regular case}
For an arbitrary $\lambda\in\mathfrak{a}^*$, there exists a filtration $0 = V_{r + 1} \subset V_r \subset \dots \subset V_1 = J(U(\lambda))$ of $J(U(\lambda))$ such that $V_i / V_{i + 1}$ is isomorphic to $M(w_i\lambda)$.
Moreover if $w\lambda - \lambda\not\in 2\mathcal{P}$ for all $w\in W\setminus \{e\}$ then $J(U(\lambda)) \simeq \bigoplus_{w\in W}M(w\lambda)$.
\end{thm}
\section{Structure of Jacquet modules (singular case)}\label{sec:Structure of Jacquet modules (singular case)}
In this section, we shall prove Theorem~\ref{thm:Main theorem for regular case} in the singular case using the translation principle.
We retain notations in Section~\ref{sec:Structure of Jacquet modules (regular case)}.
Let $\lambda'$ be an element of $\mathfrak{a}^*$ such that the following conditions hold:
\begin{itemize}
\item The weight $\lambda'$ is regular.
\item The weight $(\lambda - \lambda')/2$ is integral.
\item The real part of $\lambda'$ belongs to the same Weyl chamber as the real part of $\lambda$.
\end{itemize}
First we define the translation functor $T_{\lambda'}^\lambda$.
Let $U$ be a $U(\mathfrak{g})$-module which has an infinitesimal character $\lambda'$. (We regard $\mathfrak{a}^*\subset \mathfrak{h}^*$.)
We define $T_{\lambda'}^\lambda(U)$ by $T_{\lambda'}^\lambda(U) = P_\lambda(U\otimes E_{\lambda - \lambda'})$ where:
\begin{itemize}
\item $E_{\lambda - \lambda'}$ is the finite-dimensional irreducible representation of $\mathfrak{g}$ with an extreme weight $\lambda - \lambda'$.
\item $P_\lambda(V) = \{v\in V\mid \text{for some $n > 0$ and all $z\in Z(\mathfrak{g})$, $(z - \lambda(\widetilde{\chi}(z)))^nv = 0$}\}$ where $Z(\mathfrak{g})$ is the center of $U(\mathfrak{g})$ and $\widetilde{\chi}\colon Z(\mathfrak{g})\to U(\mathfrak{h})$ is the Harish-Chandra homomorphism.
\end{itemize}
Notice that $P_\lambda$ and $T_{\lambda'}^\lambda$ are exact functors; indeed, tensoring with the finite-dimensional module $E_{\lambda - \lambda'}$ is exact, and on $Z(\mathfrak{g})$-finite modules $P_\lambda$ is the projection onto a direct summand.
Theorem~\ref{thm:Main theorem for regular case} in the singular case follows from the following two equations.
\begin{enumerate}
\item $T_{\lambda'}^\lambda(U(\lambda')) = U(\lambda)$.
\item $T_{\lambda'}^\lambda(M(w\lambda')) = M(w\lambda)$.
\end{enumerate}
The following lemma is important to prove these equations.
\begin{lem}\label{lem:lemma for weight (by Vogan)}
Let $\nu$ be a weight of $E_{\lambda - \lambda'}$ and $w\in W$.
Assume $\nu = w\lambda - \lambda'$.
Then $\nu = \lambda - \lambda'$.
\end{lem}
\begin{proof}
See Vogan~\cite[Lemma~7.2.18]{MR632407}.
\end{proof}
\begin{proof}[Proof of\/ $T_{\lambda'}^\lambda(U(\lambda')) = U(\lambda)$]
We may assume that $\lambda'$ is dominant.
Notice that we have $U(\lambda') \simeq \Ind_P^G(1\otimes \lambda')_K$.
Let $0 = E_0 \subset E_1 \subset E_2\subset \dots\subset E_n = E_{\lambda - \lambda'}$ be a $P$-stable filtration with the trivial induced action of $N$ on $E_i/E_{i - 1}$.
We may assume that $E_i / E_{i - 1}$ is irreducible.
Let $\nu_i $ be the highest weight of $E_i / E_{i - 1}$.
Notice that $\Ind_P^G(1\otimes \lambda')\otimes E_{\lambda - \lambda'} = \Ind_P^G((1\otimes \lambda')\otimes E_{\lambda - \lambda'})$.
Then $\Ind_P^G(1\otimes \lambda')\otimes E_{\lambda - \lambda'}$ has a filtration $\{M_i\}$ such that $M_i/M_{i - 1} \simeq \Ind_P^G((1\otimes \lambda')\otimes (E_i/E_{i - 1}))$.
Since $\Ind_P^G((1\otimes \lambda')\otimes (E_i/E_{i - 1}))$ has an infinitesimal character $\lambda' + \nu_i$, $P_\lambda(M_i/M_{i - 1}) = 0$ if $\nu_i \ne w\lambda - \lambda'$ for all $w\in W$.
By Lemma~\ref{lem:lemma for weight (by Vogan)} we have $T_{\lambda'}^\lambda(\Ind_P^G(1\otimes \lambda')) = \Ind_P^G((1\otimes \lambda')\otimes (E_i/E_{i - 1}))$, where $i$ is the index with $\nu_i = \lambda - \lambda'$.
By the conditions on $\lambda'$, the action of $M$ on the $(\lambda - \lambda')$-weight space of $E_{\lambda - \lambda'}$ is trivial.
Consequently $T_{\lambda'}^\lambda(\Ind_P^G(1\otimes \lambda')) = \Ind_P^G((1\otimes\lambda')\otimes(\lambda - \lambda')) = \Ind_P^G(1\otimes \lambda)$.
\end{proof}
\begin{proof}[Proof of\/ $T_{\lambda'}^\lambda(M(w\lambda')) = M(w\lambda)$.]
We may assume $w = e\in W$.
Since $M(\lambda')\otimes E_{\lambda - \lambda'} = U(\mathfrak{g})\otimes_{U(\overline{\mathfrak{p}})} (\mathbb{C}_{\rho + \lambda'}\otimes E_{\lambda - \lambda'})$,
the equation follows by the same argument as in the proof of $T_{\lambda'}^\lambda(U(\lambda')) = U(\lambda)$.
\end{proof}
\end{document}
\begin{document}
\date{\today}
\title{Nice sets and invariant densities in complex dynamics}
\begin{abstract}
In complex dynamics, we construct a so-called nice set (one for which the first return map is Markov) around any point which is in the Julia set but not in the post-singular set, adapting a construction of Rivera-Letelier. This simplifies the study of absolutely continuous invariant measures. We prove a converse to a recent theorem of Kotus and \'Swi\c atek, so for a certain class of meromorphic maps the absolutely continuous invariant measure is finite if and only if an integrability condition is satisfied.
\end{abstract}
\section{Introduction}
In dynamical systems, invariant measures which are absolutely continuous with respect to some natural reference measure (often Lebesgue measure, but also other \emph{conformal} measures) are of great interest. We shall study such measures in the setting of complex one-dimensional dynamics. Both rational and transcendental maps are considered. In the transcendental setting one must deal with maps with unbounded derivative.
This work starts with a useful construction (in Proposition \ref{prop:nice}) of simply connected \emph{nice sets} around each point in the Julia set but not in the post-singular set. First return maps to such sets have good Markov properties. We use this to give a very simple proof of a couple of essentially known results concerning existence and smoothness of invariant densities (Theorem \ref{thm:BockConf}). Then we give a converse to a theorem of Kotus and \'Swi\c atek (\cite{KotSwi:Mero}): for a class of meromorphic maps admitting absolutely continuous invariant measures (\emph{acims}), the acims are finite if and \emph{only if} an integrability condition is satisfied (Theorem \ref{thm:converse}).
We shall introduce some notation and definitions now, before discussing the results in more detail in subsections \ref{sec:niceresults}-\ref{sec:converse}.
Let $f : \ccc \to \cbar$ be a transcendental meromorphic function or let $f: \cbar \to \cbar$ be a rational map of degree $\geq 2$. The Fatou set is defined as usual using normal families: a point $z$ is in the Fatou set if and only if there is a neighbourhood of $z$ on which the iterates of $f$ are well-defined and form a normal family. The Julia set, $\J(f)$, of $f$ is the complement in $\cbar$ of the Fatou set. We use the spherical metric.
The $\omega$-limit set of a point $z$ is denoted $\omega(z) \subset \cbar$. Given a point $z \in \cbar$, we set
$$z \in \Orb(z):= \{f^i(z) : i \geq 0 \mbox{ and }f^i(z)\mbox{ is defined} \}.$$ Similarly, for a set $A$, we write $\Orb(A) := \bigcup_{z \in A} \Orb(z)$.
Let $S(f) \subset \cbar$ denote the set of \emph{singular values} of $f$: a point $z$ is an element of $\cbar \setminus S(f)$ \emph{if and only if} there is a neighbourhood of $z$ on which all inverse branches are well-defined, univalent maps. $S(f)$ contains all critical and asymptotic values.
Set
$$
\scrP(f) := \overline{\Orb(S(f))}.
$$
$\scrP(f)$ is called the \emph{post-singular set}.
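For instance, the exponential map $f(z) = e^z$ has no critical values and its only finite asymptotic value is $0$, so the orbit $0, 1, e, e^{e}, \dots$ of $0$ is contained in $\scrP(f)$.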
All measures considered shall be assumed Borelian without further reference. We denote by $\scrM_\infty(f)$ the set of all conservative, $\sigma$-finite, ergodic $f$-invariant measures. We set
$$\scrM(f):= \{ \mu \in \scrM_\infty(f) : \mu(\cbar) = 1\}.$$
If the Fatou set is non-empty, or if one wants to consider the pressure function and thermodynamic formalism, then conformal measures are of interest.
\begin{dfn} Let $p, t \in \arr$.
We call a measure \emph{$(t,p)$-conformal}, with respect to some metric $\rho$, if
\begin{itemize}
\item
it is finite on compact sets disjoint from $\scrP(f)$;
\item
it has Jacobian $\exp(p)|Df|_\rho^t$, wherever this is finite, where $|Df|_\rho$ denotes the norm of the derivative with respect to that metric, and with the convention that $0^0 =1$;
\item
if $|Df(x)|_\rho^t = \infty$, then $m(x) = 0$ and $m(f(x))$ is zero or finite.
\end{itemize}
\end{dfn}
With this definition, if $t >0$ then critical values have measure zero, critical points need not have. If $t < 0$ then critical points have measure zero, critical values need not have.
Standard and spherical Lebesgue measures are $(2,0)$-conformal with respect to the Euclidean and spherical metrics respectively.
If the metric is unspecified, the norm of the derivative $|Df|$ shall henceforth be taken with respect to the spherical metric.
\subsection{Nice sets}\label{sec:niceresults}
\begin{dfn} An open set $U$ is called \emph{nice} if $f^n(\partial U) \cap U =
\emptyset$ for all $n >0$.
\end{dfn}
This implies that every pair of pullbacks of a nice set $U$
(connected components of $f^{-n}(U), f^{-n'}(U)$ for some $n,n'\geq 0$)
is either nested or disjoint.
In Section \ref{sec:nice} we adapt a construction of nice sets due to J. Rivera-Letelier (\cite{Rivera:Connecting, PRL:StatTCE}) to prove the following result. In the setting of real interval dynamics, existence of nice sets (or \emph{regularly returning} sets) is easy to prove using (pre-)periodic points. In higher dimensions the existence of such sets is highly non-trivial.
\begin{prop}\label{prop:nice}
Suppose $z\in \J(f)\setminus \scrP(f)$.
Then $z$ is contained in a simply connected nice set $U_z$ with arbitrarily small diameter satisfying $\dist(U_z, \scrP(f)) > 0$.
\end{prop}
\subsection{Existence and smoothness of invariant densities}
Call a point $z$ \emph{univalently inaccessible} if there is an open set $U$ with non-empty intersection with the Julia set, such that there does \emph{not exist} a triple $(y,n,m)$ satisfying the following:
\begin{itemize}
\item
$y \in U$;
\item
$n,m \geq 0$;
\item
$f^n(z) = f^m(y)$;
\item
$0 < |Df^n(z)||Df^m(y)| < \infty$.
\end{itemize}
Let $E$ denote the set of all univalently inaccessible points.
For most maps $E\cap \J(f) = \emptyset$.
The simplest map for which $E\cap \J(f)$ is non-empty is the Chebyshev polynomial $f:z\mapsto 4z(1-z)$, for which $E = \{0, 1, \infty\}$.
\begin{thm} \label{thm:BockConf}
Let $f : \ccc \to \cbar$ be a meromorphic function.
Let $p,t \in \arr$ and suppose $m$ is a $(t,p)$-conformal measure with respect to either the Euclidean metric or the spherical metric.
Suppose
$m\left(\{ z \in \J : \omega(z) \not\subset \scrP(f)\}\right) > 0$ and $m(E)=0$.
Then $f$ admits a non-atomic measure $\mu \in \scrM_\infty(f)$, equivalent to $m$ with the support of $\mu$ equal to the Julia set.
On a neighbourhood of any point in $\J \setminus \scrP(f)$,
there is a density function $\frac{d\mu}{dm}$ which
is analytic and non-zero. Moreover, $m(\scrP(f)) = 0$.
\end{thm}
\remark If $\J = \cbar$, then it follows immediately that the density is analytic on $\J \setminus \scrP$. In any case, this local result is sufficient for most purposes.
Theorem \ref{thm:BockConf} will not surprise the specialist. For rational maps, when the reference measure is Lebesgue measure, ergodicity and conservativity of Lebesgue measure date back to the work of Lyubich \cite{Lyubich:Typical}. Grzegorczyk \emph{et al}, in \cite{GPS:Rational}, give a detailed proof of the existence of the absolutely continuous invariant probability measure. For meromorphic maps, ergodicity and conservativity of Lebesgue measure were shown by H. Bock in \cite{Bock, Bock2}. Kotus and Urba\'nski (\cite{KotUrb03}) showed that in fact there is a $\sigma$-finite invariant measure. In
\cite{przytycki1999rtr}, Przytycki and Urba\'nski showed something similar, including analyticity of the density, in the rational setting for $(t,0)$-conformal measures. They also discuss what happens when the Julia set is contained in a finite union of \emph{real-analytic} sets.
In \cite{KotSwi08}, it is shown that, if the acim with respect to Lebesgue measure is a probability measure, then it has density bounded from below on an open set. Compare \cite{MeBS08}, where a predecessor of Theorem \ref{thm:BockConf} can be found.
However, the union of these proofs sprawls unnecessarily and we wish to give a reasonably short and elegant proof, also in Section \ref{sec:nice}, of a more general result covering both the rational and transcendental cases.
\subsection{Finiteness of acims} \label{sec:converse}
In this subsection, we only use the Euclidean metric.
\begin{thm}\label{thm:converse}
Let $f : \ccc \to \cbar$ be a meromorphic function such that
there exists a positive Lebesgue measure set of points $z \in \J(f)$ such that $\omega(z) \not\subset \scrP(f)$.
Let $A$ be a forward-invariant, bounded set and suppose
$f$ admits
a pole of order $M$ which is not an omitted value.
If
the $\sigma$-finite measure given by Theorem \ref{thm:BockConf} is finite, then
\begin{equation}\label{eqn:est}
\int_{|z| > r_0} \frac{-\log \dist(f(z),A)}{|z|^{2+\frac{2}{M}}} dm < \infty
\end{equation}
for some $r_0 > 0$, where integration is with respect to Euclidean Lebesgue measure.
\end{thm}
\remark By Theorem \ref{thm:BockConf}, $\J(f) = \cbar$. One can give a version of this theorem where the reference measure is $(t,p)$-conformal rather than Lebesgue measure. One must replace $(2+2/M)$ by
$(t+t/M)$, then, provided the conformal measure has support equal to $\cbar$, the proof is the same. We choose not to do so in order to keep the discussion concise.
\remark Suppose $f$ admits an asymptotic value whose orbit is bounded. Then (\ref{eqn:est}) implies
$$
\int_{|z| > r_0} \frac{-\log \dist(f(z),a)}{|z|^{2+\frac{2}{M}}} dm < \infty
$$
One can rewrite the inequality as
$$
\int_{r_0}^\infty \frac{m(r,a)}{r^{1+ \frac{2}{M}}} dr < \infty
$$
where
$$
m(r,a) = \int_0^{2\pi} \log^+ \frac{1}{\dist(f(r e^{i\theta}), a)} d\theta.
$$
The notation $m(r,a)$ for this quantity is from Nevanlinna theory (see \cite{Tsuji:Book}, \cite{KotSwi:Mero}).
Theorem \ref{thm:converse} is new. Note that it is only interesting for transcendental maps, since in the rational setting the integral is always finite. Recently Kotus and \'Swi\c atek proved the following nice result which motivated our theorem.
\begin{Fact}[\cite{KotSwi:Mero}] \label{Fact:KS}
Let $m$ denote Lebesgue measure.
Let $f : \ccc \to \cbar$ be a meromorphic map with finitely many singular values. Suppose all poles of $f$ have multiplicities bounded by $M$. Suppose also that $\J(f) = \cbar$ and $\scrP(f) \cap (\Crit(f) \cup \{\infty\}) = \emptyset$ and that, for some $r_0 >0$,
$$
\int_{r_0}^\infty \frac{m(r,a)}{r^{1+\frac{2}{M}}} dr < \infty
$$
for each asymptotic value $a$.
Then there exists $\mu \in \scrM(f)$ which is absolutely continuous with respect to Lebesgue measure.
\end{Fact}
They also provide an example of a map satisfying all of the assumptions bar the integrability one and show that this map does not admit a \emph{finite}, absolutely continuous, invariant measure. Theorem \ref{thm:converse} states that this integrability condition
(and indeed a stronger one, integrability of the logarithmic distance to any forward invariant, bounded set) is necessary for a finite probability measure to exist for a whole class of maps including those considered in Fact \ref{Fact:KS}.
One could be tempted to view Fact \ref{Fact:KS} as an analogue to the equivalent result for Misiurewicz rational maps. However, compare it with the following result of Benedicks and Misiurewicz which we would view as more relevant:
\begin{Fact}[\cite{MisBen:Flat}] \label{Fact:MisBen}
Let $f : I \to I$ be a $C^3$ map of the interval $I$ with a unique critical point $c$. Suppose that $c \notin \omega(c)$ and that all periodic points are repelling. Then there exists a $\sigma$-finite, conservative, ergodic, invariant measure $\mu$ which is absolutely continuous with respect to Lebesgue measure.
The measure $\mu$ is finite if and only if
$$
\int \log |f'(x)| dx > -\infty.$$
\end{Fact}
They also showed that this inequality is equivalent to
$$
\int -\log \dist(f(x),f(c)) dx < \infty,$$
strikingly similar to Fact \ref{Fact:KS}. The reason the interval setting is perhaps more similar is that rational maps are very rigid: the critical points are all like $z^n$ for some $n$, so they cannot scrunch up space too much. On the interval, one can have flat critical points of type $\exp(-|x|^{-\alpha})$ for $\alpha >0$.
Fact \ref{Fact:MisBen} implies that for the maps considered with a critical point of this type, the measure is finite if and only if $\alpha < 1$. For meromorphic maps, large regions near infinity can get mapped very close to asymptotic values. Theorem \ref{thm:converse} roughly says that if (Lebesgue-) many points get mapped too close to an asymptotic value and then take too long to leave a bounded region, this is an obstruction to the measure being finite.
In prior work \cite{Me:Cusp}, we generalised one direction of Fact \ref{Fact:MisBen}:
\begin{Fact}[\cite{Me:Cusp}] \label{Fact:MeCusp} Let $f : I \to I$ be a $C^{1+\epsilon}$ map of the interval $I$. Suppose there exists $\mu \in \scrM(f)$ such that the entropy of $\mu$ is positive and that $\mu$ is absolutely continuous with respect to Lebesgue measure. Then the support of $\mu$ is a finite union $X$ of closed intervals on which
$$
\int_X \log |f'(x)| dx > -\infty.$$
\end{Fact}
We view Theorem \ref{thm:converse} not only as a converse to Fact \ref{Fact:KS}, but also as a first generalisation of Fact \ref{Fact:MeCusp}
in the meromorphic setting.
\section{Nice sets and invariant densities} \label{sec:nice}
Our proof of Theorem \ref{thm:BockConf} relies on an adaptation of the construction of \emph{nice sets} due to J. Rivera-Letelier (\cite{Rivera:Connecting, PRL:StatTCE}). His construction is around critical points and uses different methods to guarantee decrease of preimages along backward branches. We use nice sets to find induced Markov maps on a neighbourhood of each point of $\ccc \setminus \scrP(f)$. Theorem \ref{thm:BockConf} will then follow by standard arguments.
Recall that we use the spherical metric, not the Euclidean one.
For $n \geq 0, z\in \cbar, 0<R$ and $U$ an open set compactly contained in $B(z,R)$, let $\scrV_n(U,z,R)$ denote the collection of open sets $V$ such that $f^n (V)= U$, $V \subset V'$ for some
connected $V'$ on which $f^n$ is univalent and $f^n(V') = B(z,R)$. Let $\scrV(U,z,R) := \bigcup_{n>0} \scrV_n(U,z,R)$.
The following lemma is standard, and very similar to Lemma 1 of \cite{GPS:Rational}, for example.
\begin{lem} \label{lem:decrease} Let $z \in \J(f), R > 0$ and $U$ be an open set compactly contained in $B(z,R)$.
For each $K>0$ there exists an $N \geq 1$ such that if $n \geq N$ and $V \in \scrV_n(U,z,R)$, then $|Df^n(z')| > K$ for all $z' \in V$.
\end{lem}
\beginpf
There are infinitely many periodic orbits (we assume $f$ is not a Möbius transformation (in particular $\J\ne \emptyset$)), so we can fix three points whose orbits are disjoint from a small neighbourhood $W$ of $z$ with $W \subset B(z,R)$.
Then the holomorphic inverse branches of $f^{-n}_{|W}$ omit the three points, so they form a normal family by Montel's Theorem.
By the Koebe principle, there is a uniform bound on the distortion of $f^n$ on each $V \in \scrV_n(U,z,R)$ for all $n$.
Suppose such an $N$ (from the statement) does not exist for some $K>0$. Then, by uniformly bounded distortion, there is a sequence of holomorphic inverse branches $g_n$ on $B(z,R)$ with derivative at $z$ uniformly bounded away from zero.
Then, by Montel, there is a subsequence which converges to a non-constant function $g$ on a neighbourhood of $z$.
Then a neighbourhood of $g(z)$ is mapped infinitely often by iterates of $f$ onto a neighbourhood of $z$ and into $W$. It follows that $z \notin \J(f)$, contradiction.
\eprf
Now we need to show that nothing too nasty happens for small $n$. The complication is that $z$ could be in the backward orbit of a pole, and so be an accumulation point for the set $f^{-k}(z)$ for some fixed $k$. If this were impossible, the following lemma would be simpler.
\begin{lem} \label{lem:branches}
Let $z \in \cbar$, $R >0$, $\tau >0$ and $N \geq 1$. Let $r < R$. Then there exists $M>0$ so that, for $n \leq N$ all holomorphic inverse branches $g$ of $f^{-n}$ on $B(z,R)$ satisfy $|Dg| < M$ on $B(z,r)$. Moreover, all but a finite number of such inverse branches satisfy $|Dg| < \tau$ on $B(z,r)$.
\end{lem}
\beginpf
The key is that the images of $B(z,r)$ by any two distinct inverse branches of $f^{-n}$ are disjoint. Thus at most finitely many of these images can exceed any given size (Koebe implies the images are not too distorted, and the Riemann sphere has finite area), so firstly there is an upper bound on the size (and, equivalently, on the derivative), and secondly all but a finite number of such inverse branches have small images. If the image is small, the derivative $|Dg|$ is small.
\eprf
\begin{lem} \label{lem:knice} Let $z \in \J(f)$ and $R >0$. Suppose $z$ is not a parabolic periodic point. Let $\kappa > 1$. For all $r>0$ sufficiently small, there exists an open, simply connected set $U = U_r(z)$ such that
\begin{itemize}
\item
$B(z,r) \subset U \subset B(z,\kappa r)$;
\item
if $n > 0$,
$V \in \scrV_n(U,z,R)$ and $V \cap U \ne \emptyset$ then $V \subset U$;
\item there exists $\theta >1$ so that, for all $n$, if $V \in \scrV_n(U,z,R)$ and $V\cap U \ne \emptyset$, then $|Df^n(z')| > \theta$ for all $z' \in V$.
\end{itemize}
\end{lem}
\beginpf
We call an inverse branch $g$ \emph{$z$-periodic} if $g(z) = z$. This of course can only happen if $z$ is periodic.
By Lemma \ref{lem:decrease}, there exist $\delta$ and $N$ such that for all $n \geq N$ and all $W \in \scrV_n(B(z,\delta),z,R)$, $|Df^n(z')| > 2\kappa/(\kappa-1)$ for all $z' \in W$.
Given a holomorphic non-$z$-periodic inverse branch $g$ of $f^n$ on $B(z,R)$, if $r$ is sufficiently small then $g(B(z,r)) \cap B(z,\kappa r) = \emptyset$. This allows one to disregard a finite number of branches to deduce the following, by
Lemma \ref{lem:branches}: There exists $0< r < \delta$ sufficiently small that, if $n < N$, any non-$z$-periodic holomorphic inverse branch $g$ of $f^n$ on $B(z,R)$ such that $g(B(z,r)) \cap B(z,\kappa r) \ne \emptyset$ satisfies
$$
|Dg| < (\kappa-1)/2\kappa
$$
on $B(z,r)$.
In particular, if $W \in \scrV_n(B(z,\kappa r),z,R)$ for some $n > 0$ and $W \cap B(z,\kappa r) \ne \emptyset$ then either $|Df^n| > 2\kappa/(\kappa-1)$ on $W$ or $z$ is periodic and $z \in W$.
If $z$ is periodic we also require $r$ to be small enough that $|Df| > 1$ on $B(z,\kappa r)$.
Set $U_0 := B(z,r)$.
For $n > 0$ define $U_n$ as the connected component of
$$
U_0 \cup \bigcup_{i=1}^n \bigcup_{W \in \scrV_i(B(z,r),z,R)} W$$
containing $z$ and $U_r'(z) := \bigcup_{n \geq 0} U_n$. We
prove by induction that $U_n \subset B(z,\kappa r)$ for all $n \geq
0$. This is clearly true for $n = 0$. So suppose it is true for all $n
\leq k$. We must show it holds for $n = k+1$.
Let $X$ be a connected component of $U_{k+1} \setminus U_0$. Then
there is a minimal $m \geq 0$ such that there is a $y\in X$ and $V \in \scrV_m(B(z,r),z,R)$ with $y \in V$. Let $g$ be the corresponding inverse branch, so $f^m(y) \in U_0$ and $g(f^m(y)) = y$.
Since $y \notin U_0$, $m \geq 1$ and $g$ is not $z$-periodic. Consider $f^m(X)$. This set is
contained in $U_{k+1-m}$, and so by hypothesis is contained in
$B(z,\kappa r)$. But then $X$ is contained in $g(B(z,\kappa r))$
so $|X| < ((\kappa-1)/2\kappa) |B(z,\kappa r)| = r(\kappa-1)$. Thus $X \subset B(z,\kappa r)$ as required, completing the inductive argument.
Fill in the set $U'_r(z)$ to get a simply connected set $U = U_r(z)$ (the smallest simply connected set containing $U_r'(z)$ and contained in $B(z, \kappa r)$).
Now suppose there exists a $V \in \scrV_n(U,z,R)$ and a point $y \in V \cap \partial U$. Then $f^n(y)$ is in some $W \in \scrV_m(B(z,r),z,R)$. But this means $y$ is contained in some element of $\scrV_{n+m}(B(z,r),z,R)$ which must intersect $U$ since $y\in \partial U$, so $y$ is contained in $U$, contradiction.
\eprf
Note that one can construct a similarly defined set around parabolic points if one excludes periodic inverse branches too.
\noindent \textbf{Proof of Proposition \ref{prop:nice}:}
Recall that an open set $U$ is called \emph{nice} if $f^n(\partial U) \cap U =
\emptyset$ for all $n >0$.
Suppose $z\in \J(f)\setminus \scrP(f)$. Then $z$ is not parabolic. Take some small $R < \dist(z,\scrP(f))$.
Then all inverse branches on $B(z,R)$ are holomorphic.
Let $U$ be the set given by
Lemma \ref{lem:knice}. We claim this set is nice. Indeed, suppose $x \in \partial U$ and $f^n(x) \in U$. Then $x \in V$, for some $V \in \scrV_n(U, z, R)$. But $V \subset U$ so $x \in U$, contradiction.
\eprf
Given a nice set $U$, we write $r_U(z) :=\inf\{k \geq 1: f^{k}(z) \in U\}$ for the first return time to $U$ and define the first return map $\phi_U$ by
$$
\phi_U(z) := f^{r_U(z)}(z)
$$
for $z \in U$ such that $r_U(z)$ is defined (finite). By the nested or disjoint property, the domain of definition of $\phi_U$ is a countable union of connected components $U_i$ on which $r_U$ is constant. If $U$ is connected then
$$
\phi_U : U_i \to U
$$
is biholomorphic for each $i$. Let $m$ be a $(t,p)$-conformal measure for $f$. Then the Jacobian of $\phi_U$ at $z$ is $\exp(p r_U(z)) |D\phi_U(z)|_\rho^t$.
\begin{lem}[Folklore Theorem]\label{Fact:Folklore}
Let $U$ be a bounded, simply-connected nice set such that $\dist(U, \scrP(f)) > 0$. Let $\Lambda_U :=\bigcap_{k\geq0} \phi_U^{-k}(U)$. If $m(\Lambda_U) >0$ then there exists a non-atomic, ergodic, absolutely continuous, $\phi_U$-invariant, probability measure $\nu$ with
a density $\rho_\nu$ which is non-zero and analytic on $U$. The measure $\nu$ is the unique absolutely continuous invariant probability measure on $U$.
Moreover, $m(U \setminus \Lambda_U)=0$ and $m$-almost every point in $U$ is recurrent.
\end{lem}
\beginpf
This is standard, see for example Theorem 6.1.3 in \cite{MauldinUrbanski:Book}.
That the density is smooth is easy, the argument for analyticity can also be found in \cite{przytycki1999rtr} or \cite{MPU:Rigidity}.
To deduce $m(U \setminus \Lambda_U)=0$, note, by bounded distortion and conformality of the measure, there exists $C>0$ such that, for any two subsets $A,B\subset U$ and $k\geq0$,
$$
C^{-1}\frac{m(A)}{m(B)} \leq \frac{m(\phi_U^{-k}(A))}{m(\phi_U^{-k}(B))} \leq C\frac{m(A)}{m(B)}.
$$
Let $A_n$ denote the set of points $x\in U$ for which $\phi^n_U(x)$ is defined, but $\phi^{n+1}_U(x)$ is not. Then $U = \Lambda_U \cup \bigsqcup_{n\geq0}A_n$, and $\phi_U^{-1}(A_n) = A_{n+1}$. Setting $A := A_0$ and $B:=\Lambda_U$ in the above inequality, we have $m(A_n) \geq c\, m(A_0)$ for some constant $c>0$ and all $n$. Since the sets $A_n$ are disjoint and the measure of $U$ is finite, $m(A_0) = 0$ and hence $m(A_n) = 0$ for each $n$, as required.
That $\nu$ is non-atomic follows by a similar argument. Remark just that $\phi$ has more than one branch (an infinity of branches, actually), so that if $m$ admits an atom, then it has a non-periodic atom at a point $q$; set $A_0 := \{q\}$ and $A_n := \phi_U^{-n}(A_0)$.
Recurrence follows from ergodicity.
\eprf
\noindent \textbf{Proof of Theorem \ref{thm:BockConf}.}
Consider the compact sets
$$C_k :=\ccc \setminus\left(B\left(\scrP(f),\frac{1}{k}\right) \cup \{z:|z| >k\}\right).
$$
Recall that, according to the hypotheses, there is a positive measure set of points $z$ with $\omega(z) \not\subset \scrP(f)$. But then there is some $C_k$ and a positive measure set of points $z$ with $\omega(z) \cap C_k \ne \emptyset$. Each point in $C_k$ is contained in a nice set given by Proposition~\ref{prop:nice}, so there is a finite cover of $C_k$ by such sets, and thus there is one, $U$ say, and a set $A$ of positive measure such that, for all $z \in A$, $\omega(z) \cap U \ne \emptyset$.
We can write $A = \bigcup_{n \geq 0} A_n$ where $A_n = \{y \in A : r_{U}(y) = n\}$. Then there is some $A_n$
of positive measure.
Then $f^n(A_n)$ has positive measure and
$f^n(A_n) \subset \bigcap_{k\geq 0} \phi_{U}^{-k}(U)$. We can apply the Folklore Theorem.
By the Folklore Theorem, $m$-almost every point in $U$ is recurrent.
We have $\scrP(f) \cap U = \emptyset$ and $\scrP(f)$ is forward-invariant.
Points in $\scrP(f)\setminus E$ are, by definition, not univalently inaccessible. By conformality, it follows then that $\scrP(f)\setminus E$ has zero measure. By hypothesis, $m(E)=0$, so $m(\scrP(f)) = 0$.
Let $\nu$ be the $\phi_U$-invariant measure given by the Folklore Theorem.
Let $\{U_j\}_j$ denote the connected components of the domain of $\phi_U$, and $r_j$ the first return time to $U$ on $U_j$. Now
$$
\mu := \sum_j \sum_{k=0}^{r_j-1} f^k_*\nu_{|U_j}
$$
is in $\scrM_\infty(f)$, is absolutely continuous with respect to $m$, and is in fact equivalent to $m$.
Let $y \in \J \setminus \scrP(f)$. We consider a nice set $U_y \ni y$ given by Proposition~\ref{prop:nice} and the first return map $\phi_{U_y}$. Since $\mu/\mu(U_y)$ is an ergodic, absolutely continuous, invariant, probability measure for $\phi_{U_y}$, its density is analytic on $U_y$ (using the Folklore Theorem). Thus the density of $\mu$ is analytic on a neighbourhood of $y$.
\eprf
\section{Finite mass}
Now let us prove Theorem \ref{thm:converse}. For these final paragraphs we use the Euclidean metric.
Let $p$ be a pole which is not an omitted value. There exists a point $y \in \ccc$ and an $n \geq 0$ such that $f^n(y) = p$ and $y \notin \scrP(f)$.
By Theorem \ref{thm:BockConf}, the density $\rho$ of the absolutely continuous invariant measure $\mu$
is bounded away from zero on a neighbourhood of $y$. Thus $\rho$ is bounded away from zero on a neighbourhood of $p$. If $p$ has multiplicity $M > 0$, then it follows that there exist $c,r_0 > 0$ such that
$$
\rho(z) > c \frac{1}{|z|^{2+\frac{2}{M}}}
$$
for all $z$ satisfying $|z| \geq r_0$. Indeed, near $p$ we have $|f(w)| \asymp |w-p|^{-M}$ and $|Df(w)| \asymp |w-p|^{-(M+1)}$, so the preimage $w$ of $z$ near $p$ satisfies $|Df(w)|^2 \asymp |z|^{2+\frac{2}{M}}$, and invariance of $\mu$ gives $\rho(z) \geq \rho(w)/|Df(w)|^2$.
Let $A$ be any subset of $\ccc$ such that $\Orb(A)$ is bounded. Then there exists $\varepsilon >0$ such that $B(\Orb(A),2\varepsilon)$ contains no poles.
The derivative $|f'|$ is bounded from above by a positive constant $K > 1$ on $B(\Orb(A),\varepsilon)$. Now let $x \in \ccc$.
Denote by $n(x)$ the least integer $n \geq 1$ such that $\dist(f^n(x), \Orb(A)) > \varepsilon$. Then $\varepsilon < K^{n(x)} \dist(f(x),\Orb(A))$, so
$$
n(x) > (1/\log K) (\log \varepsilon - \log \dist(f(x), \Orb(A))),$$
so $n(x) > - c_1 \log \dist(f(x),A) -c_2$ for some constants $c_1, c_2 > 0$.
Consider $r_0$ as before, but large enough that $\Orb(A) \subset B(0, r_0 - \varepsilon)$.
We shall consider the first return time $r_W$ to $W := \{|z| \geq r_0\}$.
By Kac's Lemma, $\int_W r_W(z) d\mu(z) = \mu(\ccc)$. Combined with the above density estimate,
$$ \int_W r_W(z) \frac{1}{|z|^{2+\frac{2}{M}}} dm < \mu(\ccc)/c.$$
For $z \in W$,
$$
r_W(z) \geq n(f(z)) >
- c_1 \log \dist(f(z),A) -c_2$$
so, if $\mu$ is finite, then
$$
+\infty > \int_W \frac{-\log \dist (f(z),A) }{|z|^{2+\frac{2}{M}}} dm $$
as required.
\eprf
\section*{Acknowledgments}
The author is grateful to IMPAN in Warsaw where this work was started, supported by the EU training network ``Conformal Structures and Dynamics''. The author currently enjoys the support of the
Göran Gustafssons Stiftelse and the Knut och Alice Wallenbergs Stiftelse. Thanks are due to a first referee and to Bartek Skorulski for helpful conversations and remarks.
\end{document}
\begin{document}
\title{\sc Containing All Permutations}
\begin{abstract}
Numerous versions of the question ``what is the shortest object containing all permutations of a given length?'' have been asked over the past fifty years: by Karp (via Knuth) in 1972; by Chung, Diaconis, and Graham in 1992; by Ashlock and Tillotson in 1993; and by Arratia in 1999. The large variety of questions of this form, which have previously been considered in isolation, stands in stark contrast to the dearth of answers. We survey and synthesize these questions and their partial answers, introduce infinitely more related questions, and then establish an improved upper bound for one of these questions.
\end{abstract}
\pagestyle{main}
\section{Introduction}
\label{sec-intro}
What is the shortest object containing all permutations of length $n$? As we shall describe, there are a variety of such problems, going by an assortment of names including superpatterns and superpermutations. Throughout, we call all such problems \emph{universal permutation problems}. The diversity of these problems stems from the multiple possible definitions of the terms involved.
To state these problems, it is necessary to view permutations as words. A \emph{word} is simply a finite sequence of \emph{letters} or \emph{entries} drawn from some \emph{alphabet}. The \emph{length} of the word $w$, denoted $|w|$ throughout, is its number of letters, and if $w$ is a word of length at least $i$, then we denote by $w(i)$ its $i$th letter. From this viewpoint, a \emph{permutation} of length $n$ is a word consisting of the letters $[n]=\{1,2,\dots,n\}$, each occurring precisely once. Permutations are thus a special type of words over the positive integers $\mathbb{P}$. Two words $u,v\in\mathbb{P}^n$ (that is, both of length $n$, with positive integer letters) are \emph{order-isomorphic} if, for all indices $i,j\in[n]$, we have
\[
u(i)>u(j) \iff v(i)>v(j).
\]
In all universal permutation problems considered here, the object that is to contain all permutations of length $n$, called the \emph{universal} object, is a word, but there are two different types of containment. Sometimes we insist that the word $w$ contain each such permutation $\pi$ as a contiguous subsequence, or \emph{factor}, by which we mean that $w$ can be expressed as a concatenation $w=upv$ where the word $p$ is order-isomorphic to $\pi$. At other times we merely insist that $w$ contain each such permutation $\pi$ as a \emph{subsequence}, by which we mean that there are indices $1\le i_1<i_2<\cdots<i_n\le|w|$ so that the word $p=w(i_1)w(i_2)\cdots w(i_n)$ is order-isomorphic to $\pi$.
These notions of containment give rise to two different universal permutation problems. To obtain infinitely many, we vary the size of the alphabet that the letters of the universal word $w$ can be drawn from. In the strictest form, we insist that $w$ be a word over the alphabet $[n]$, meaning that $w$ is only allowed the symbols of the permutations it must contain. In this case, the notion of order-isomorphism reduces to equality: a word $p\in[n]^n$ is order-isomorphic to a permutation $\pi$ of length $n$ if and only if $p=\pi$. At the other end of the spectrum, we allow the letters of $w$ to be arbitrary positive integers. Between these extremes, another interesting case stands out: when the alphabet is $[n+1]$, thus allowing the universal word one more symbol than the permutations it must contain. Table~\ref{tab-six-problems} displays the best upper bounds established to date for the six versions of this question that have garnered the most interest. In the case of the rightmost two cells of the upper row, these upper bounds are known to be the actual answers.
\begin{table}
\caption{Current best upper bounds for the lengths of the shortest universal words in six flavors of universal permutation problems (for large $n$).}
\[
\begin{array}{r|ccccc}
&\text{words over $[n]$}
&
&\text{words over $[n+1]$}
&
&\text{words over $\mathbb{P}$}\\[4pt]
\hline\\[-6pt]
\text{factor}
&n!+(n-1)!+(n-2)!
&
&n!+n-1
&
&n!+n-1
\\
&\quad\quad\ +(n-3)!+n-3
\\[16pt]
\text{subsequence}
&\displaystyle\left\lceil n^2-\frac{7}{3}n+\frac{19}{3}\right\rceil
&
&\displaystyle\frac{n^2+n}{2}
&
&\displaystyle\left\lceil\frac{n^2+1}{2}\right\rceil
\end{array}
\]
\label{tab-six-problems}
\end{table}
The bounds shown in Table~\ref{tab-six-problems} weakly decrease as we move from left to right (a word over $[n]$ is also a word over $[n+1]$, which is also a word over $\mathbb{P}$) and also as we go from top to bottom (factors are also subsequences). Another notable feature of this table is that the lengths of the shortest universal words over the alphabet $[n]$ seem to be significantly greater than the lengths of the shortest universal words over the alphabet $[n+1]$, whose lengths seem to be either equal to or close to those of the shortest universal words over the largest possible alphabet, $\mathbb{P}$.
We remark that some research in this area has sought a universal \emph{permutation} instead of a universal word, but this is in fact equivalent to finding a universal word over $\mathbb{P}$, as we briefly explain. The word $u\in\mathbb{P}^n$ is \emph{order-homomorphic} to the word $v\in\mathbb{P}^n$ if, for all indices $i,j\in[n]$, we have
\[
u(i)>u(j) \implies v(i)>v(j).
\]
Less formally, if $u$ is order-homomorphic to $v$, then all strict inequalities between entries of $u$ also hold between the corresponding entries of $v$, but equalities between entries of $u$ may be broken in $v$. It is clear that every word over $\mathbb{P}$ is order-homomorphic to at least one permutation (one simply needs to ``break ties'' among the letters of the word), and it follows that if $u$ contains the permutation $\pi$ (as a factor or subsequence) and $u$ is order-homomorphic to $v$, then $v$ also contains $\pi$ (in the same sense---indeed, in the same indices---that $u$ contains it). As every permutation is also a word over $\mathbb{P}$, it follows that finding a universal permutation, in either the factor or subsequence setting, is equivalent to finding a universal word over $\mathbb{P}$.
Each of the subsequent five sections of this paper is devoted to the examination of one of the cells of Table~\ref{tab-six-problems} (except for Section~\ref{sec-factors-n+1}, which considers both the upper-center and upper-right cells). While the results described in Sections~\ref{sec-factors-n}--\ref{sec-seq-n+1} are previously known, the results of Section~\ref{sec-subseq-P} appear for the first time here. In the final section, we briefly describe some further variations on universal permutation problems.
\section{As Factors, Over $[n]$}
\label{sec-factors-n}
The case in the upper-left of Table~\ref{tab-six-problems} dates to a 1993 paper of Ashlock and Tillotson~\cite{ashlock:construction-of:} and can be restated as follows.
\begin{quote}
What is the length of the shortest word over the alphabet $[n]$ that contains each permutation of length $n$ as a factor?
\end{quote}
This version of the universal permutation problem has recently attracted a surprising amount of attention, including an article in \emph{The Verge}~\cite{griggs:an-anonymous-4c:} and two in \emph{Quanta Magazine}~\cite{honner:unscrambling-th:,klarreich:mystery-math-wh:}, and investigations are very much ongoing.
We call a word over the alphabet $[n]$ that contains all permutations of length $n$ as factors an \emph{$n$-superpermutation}. A (not particularly good) lower bound on the length of $n$-superpermutations is easy to establish by observing that every word $w$ has at most $|w|-n+1$ many factors of length $n$.
\begin{observation}
\label{obs-number-of-factors}
Every $n$-superpermutation has length at least $n!+n-1$.
\end{observation}
In the cases of $n=1$ and $n=2$, the shortest $n$-superpermutations are easy to find. The word $1$ meets the demands for $n=1$ and the word $121$ is as short as possible for $n=2$. The shortest $3$-superpermutation has length $9$---one more than the lower bound above---and its optimality can be shown with a slightly more delicate argument, which we now present. First, there is a word of length $9$,
\[
123121321,
\]
that contains all permutations of length $3$ as factors. Now suppose that the word $w$ over the alphabet $[3]$ contains all permutations of length $3$ as factors. We say that the letter $w(i)$ is \emph{wasted} if the factor $w(i-2)w(i-1)w(i)$ is not equal to a new permutation of length $3$---either because not all of the letters are defined, or because it contains a repeated letter, or because that permutation occurs earlier in $w$. As each nonwasted letter corresponds to the first occurrence of a permutation, we have
\[
|w|
=
3! + (\text{\# of wasted letters in $w$}).
\]
Clearly the first two letters of $w$ are wasted. If $w$ contains an additional wasted letter, then its length must be at least $9$. Suppose then that $w$ does not contain any additional wasted letters. Thus each of the factors
\[
w(1)w(2)w(3),\
w(2)w(3)w(4),\
w(3)w(4)w(5),\
\text{and}\
w(4)w(5)w(6)
\]
must be equal to different permutations. However, the only way for these factors to be equal to permutations at all is to have $w(4)=w(1)$, $w(5)=w(2)$, and $w(6)=w(3)$, and this implies that $w(4)w(5)w(6)=w(1)w(2)w(3)$, a contradiction.
Computations by hand become more difficult at $n=4$, but we invite the reader to check that the word
\[
123412314231243121342132413214321
\]
of length $33$ is a $4$-superpermutation, and that no shorter word suffices.
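These small examples are easy to check by computer. The short Python sketch below is our own illustration (the helper name \texttt{is\_superpermutation} is ours, not standard terminology); it collects all length-$n$ factors and confirms that the words of lengths $9$ and $33$ displayed above are indeed superpermutations.
\begin{verbatim}
from itertools import permutations

def is_superpermutation(word, n):
    # Collect all length-n factors of word and check that every
    # permutation of 1..n (written with single-digit symbols) appears.
    factors = {word[i:i + n] for i in range(len(word) - n + 1)}
    return all("".join(map(str, p)) in factors
               for p in permutations(range(1, n + 1)))

assert is_superpermutation("123121321", 3)
assert is_superpermutation("123412314231243121342132413214321", 4)
\end{verbatim}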
As Ashlock and Tillotson~\cite{ashlock:construction-of:} noticed, the lengths of these superpermutations are, respectively,
\begin{eqnarray*}
1!&=&1,\\
2!+1!&=&3,\\
3!+2!+1!&=&9,\text{ and}\\
4!+3!+2!+1!&=&33.
\end{eqnarray*}
They also gave a recursive construction establishing the following result.
\begin{proposition}[Ashlock and Tillotson~{\cite[Theorem 3 and Lemma 5]{ashlock:construction-of:}}]
\label{prop-factor-[n]-upper-bound}
If there is an $(n-1)$-superpermutation of length $m$, then there is an $n$-superpermutation of length $n!+m$.
\end{proposition}
Proposition~\ref{prop-factor-[n]-upper-bound} guarantees an $n$-superpermutation of length at most $n!+\cdots+2!+1!$. Given this construction and the lower bounds they had been able to compute, Ashlock and Tillotson made the natural conjecture that the shortest $n$-superpermutation has length $n!+\cdots+2!+1!$ for all $n$. They further conjectured that all of the shortest $n$-superpermutations were unique up to the relabeling of their letters.
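One way to realize a recursion of this type in code is sketched below (a Python illustration of our own, which may differ in detail from the construction of Ashlock and Tillotson): scan the $(n-1)$-superpermutation and, whenever a window forms a permutation of $[n-1]$, append the symbol $n$ followed by a copy of that window. For the words built this way starting from the single letter $1$, each permutation occurs exactly once as a factor (as can be checked for small $n$), so the length grows by exactly $n!$ at each step; the sketch assumes single-digit symbols, so $n\le 9$.
\begin{verbatim}
def is_perm_word(w):
    # True if w uses each of the symbols 1..len(w) exactly once.
    return sorted(w) == [str(i) for i in range(1, len(w) + 1)]

def extend(sp, n):
    # Given an (n-1)-superpermutation sp, append the symbol n and a copy
    # of sigma after every window sigma of sp that is a permutation.
    out = []
    for i in range(len(sp)):
        out.append(sp[i])
        window = sp[i - n + 2:i + 1] if i >= n - 2 else ""
        if len(window) == n - 1 and is_perm_word(window):
            out.append(str(n) + window)
    return "".join(out)

sp = "1"
for n in range(2, 6):
    sp = extend(sp, n)
    print(n, len(sp))   # prints 2 3, 3 9, 4 33, 5 153
\end{verbatim}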
For about twenty years, very little progress seemed to have been made on these conjectures, although they were rediscovered many times on Internet forums such as MathExchange and StackOverflow (references to some of these rediscoveries are given in Johnston's article~\cite{johnston:non-uniqueness-:}). Then, in 2013, Johnston~\cite{johnston:non-uniqueness-:} constructed multiple distinct $n$-superpermutations of length $n!+\cdots+2!+1!$ for all $n\ge 5$, proving that at least one of Ashlock and Tillotson's two conjectures must be false, although giving no hint as to which one. A year later, Benjamin Chaffin verified the $n=5$ case of the length conjecture by computer (see Johnston's blog post~\cite{johnston:all-minimal-sup:} for details), showing that no word of length less than $153=5!+4!+3!+2!+1!$ is a $5$-superpermutation. This showed, via Johnston's constructions, that Ashlock and Tillotson's uniqueness conjecture was certainly false, although their length conjecture might still have held.
The next case of the length conjecture to be verified would be $n=6$, where the conjectured shortest length was $6!+5!+4!+3!+2!+1!=873$. However, only weeks after Chaffin's verification of the length conjecture for $n=5$, Houston~\cite{houston:tackling-the-mi:}---by viewing the problem as an instance of the traveling salesman problem---found a $6$-superpermutation of length only $872$.
Whether this is the shortest $6$-superpermutation is the focus of an ongoing distributed computing project at
\[
\text{\url{www.supermutations.net}.}
\]
Regardless of the outcome of that project, the $6$-superpermutation of length $872$ and Proposition~\ref{prop-factor-[n]-upper-bound} reduce the upper bound on the length of the shortest $n$-superpermutation to {$n!+\cdots+3!+2!$} for all $n\ge 6$.
After breaking the length conjecture of Ashlock and Tillotson in 2014, Houston created a Google discussion group called Superpermutators, where those interested in the problem could work on it in a loose and Polymath-esque manner, and most of the subsequent research mentioned here has been communicated there.
The next breakthrough was made shortly after John Baez tweeted about Houston's construction in September 2018. This tweet caused Greg Egan, who is known for his science fiction novels (coincidentally including one entitled \emph{Permutation City}~\cite{egan:permutation-cit:}), to become interested in the problem. Egan found inspiration in an unpublished manuscript of Williams~\cite{williams:hamiltonicity-o:}. In that paper, Williams showed how to construct Hamiltonian paths and cycles in the Cayley graph on the symmetric group $S_n$ generated by the two permutations denoted by $(12\cdots n)$ and $(12)$ in cycle notation (see Sawada and Williams~\cite{sawada:a-hamilton-path:} for a published, streamlined construction). Williams's construction had solved a forty-year-old conjecture of Nijenhuis and Wilf~\cite{nijenhuis:combinatorial-a:} (later included by Knuth as an exercise with a difficulty rating of $48/50$ in Volume 4A of the \emph{Art of Computer Programming}~\cite[Problem 71 of Section 7.2.1.2]{knuth:the-art-of-comp:4a}), and, in October 2018, Egan showed how it could be adapted to prove the following.
\begin{theorem}[Egan~\cite{egan:superpermutatio:}]
\label{thm-egan-upper-bound}
For all $n\ge 4$, there is an $n$-superpermutation of length at most
\[
n!+(n-1)!+(n-2)!+(n-3)!+n-3.
\]
\end{theorem}
For $n=6$, the construction of Theorem~\ref{thm-egan-upper-bound} is worse than Houston's (Theorem~\ref{thm-egan-upper-bound} gives a $6$-superpermutation of length $873$), but for $n\ge 7$ this bound is strictly less than the bound of $n!+\cdots+3!+2!$ implied by Houston's construction and Proposition~\ref{prop-factor-[n]-upper-bound}.
The efforts described above yield upper bounds. For lower bounds, Ashlock and Tillotson improved on Observation~\ref{obs-number-of-factors} by focusing on wasted letters as we did earlier in the $n=3$ case. For general $n$, we say that the letter $w(i)$ is \emph{wasted} if the factor
\[
w(i-n+1)w(i-n+2)\cdots w(i)
\]
is either not a permutation of length $n$, or occurs earlier in $w$. The crucial observation is that if neither $w(i)$ nor $w(i+1)$ are wasted letters, then the permutations ending at those letters are cyclic rotations of each other. The $n!$ permutations of length $n$ can be partitioned into $(n-1)!$ disjoint cyclic classes, where the \emph{cyclic class} of the permutation $\pi$ consists of all of its cyclic rotations. For example, the cyclic class of the permutation $12345$ is
\[
\{12345, 23451, 34512, 45123, 51234\}.
\]
Our reasoning above implies that upon completing a cyclic class (having visited all of its members), the next letter in the word (if there is one) must be wasted. Any $n$-superpermutation must complete all $(n-1)!$ cyclic classes, and thus doing so requires at least $(n-1)!-1$ wasted letters. Together with the $n-1$ letters at the beginning of $w$, which are trivially wasted, we obtain the following result.
\begin{proposition}[Ashlock and Tillotson~{\cite[proof of Theorem 18]{ashlock:construction-of:}}]
\label{prop-factor-[n]-lower-bound}
For all $n\ge 1$, every $n$-superpermutation has length at least
\[
n!+(n-1)!+n-2.
\]
\end{proposition}
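The wasted-letter bookkeeping is easy to automate. The Python sketch below (our own illustration; the helper name is ours) counts wasted letters and checks the identity $|w|=n!+(\text{\# of wasted letters})$ on the $3$-superpermutation displayed earlier.
\begin{verbatim}
from math import factorial

def wasted_letters(w, n):
    # A letter w(i) is wasted unless the length-n window ending at it is a
    # permutation of 1..n that has not occurred earlier in w.
    target = set("".join(map(str, range(1, n + 1))))
    seen, wasted = set(), 0
    for i in range(len(w)):
        window = w[i - n + 1:i + 1] if i >= n - 1 else ""
        if len(window) == n and set(window) == target and window not in seen:
            seen.add(window)
        else:
            wasted += 1
    return wasted

w = "123121321"
assert len(w) == factorial(3) + wasted_letters(w, 3)
\end{verbatim}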
At least since a 2013 blog post of Johnston~\cite{johnston:the-minimal-sup:}, it had been known that there was an argument (on a website devoted to anime) claiming to improve on the lower bound provided by Proposition~\ref{prop-factor-[n]-lower-bound}. However, the argument was far from what most mathematicians would consider a proof, and there had been no efforts to make it into one, in part because the claimed lower bound was so far from what was thought to be the correct answer at the time. However, Egan's breakthrough quickly inspired several participants of the Superpermutators group to re-examine the argument. In the process, it was realized not only that the argument was correct, but that it did not originate on the anime website where Johnston had found it. Instead, it had been copied there from a series of anonymous posts in 2011 on the somewhat-notorious Internet forum 4chan.
The crux of the argument is an idea that we call a trajectory (though the original proof called it a $2$-loop). The proof of Proposition~\ref{prop-factor-[n]-lower-bound} suggests that in building an $n$-superpermutation, one might try to complete an entire cyclic class, then waste a single letter to enter a new cyclic class, and so on. For example, in the $n=5$ case, suppose we visit the cyclic class of $12345$ in order,
\[
12345, 23451, 34512, 45123, 51234.
\]
Once we have come to the $4$ of $51234$, there is a unique way to waste a single letter to move to a different cyclic class; this is to append the letters $15$, and doing so moves us to the cyclic class of the permutation $23415$. It would then be natural to complete this cyclic class, by visiting the permutations
\[
23415, 34152, 41523, 15234, 52341.
\]
After that it would again be natural to waste a letter to traverse to the cyclic class of $34125$ and complete that class by visiting the permutations
\[
34125, 41253, 12534, 25341, 53412.
\]
Finally, by wasting a letter to enter and complete the cyclic class of $41235$, we would encounter the permutations
\[
41235, 12354, 23541, 35412, 54123
\]
in that order. However, from that point there would be no way to waste a single letter to enter a new cyclic class; by appending a $5$ we would cycle back to $41235$, while appending a $45$ would return us to $12345$. In general, by following this procedure one would visit ${n-1}$ cyclic classes before reaching a point where wasting a single letter would either cause us to stay in the same cyclic class or to return to the initial permutation. We define the \emph{trajectory} of the permutation $\pi$ of length $n$ to consist of the sequence of $n(n-1)$ permutations visited by following this procedure starting at $\pi$, and thus the above sequence of permutations is the trajectory of $12345$. We caution the reader that trajectories do \emph{not} partition the set of permutations; while $23451$ lies in the trajectory of $12345$, the trajectories of $23451$ and of $12345$ contain different sets of permutations.
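This procedure is easy to carry out mechanically: within a cyclic class each successive permutation is a left rotation of its predecessor, and the jump out of a completed class replaces $\tau_1\tau_2\tau_3\cdots\tau_n$ by $\tau_3\cdots\tau_n\tau_2\tau_1$. The Python sketch below (our own; the name \texttt{trajectory} is not standard notation) reproduces the $20$ permutations listed above when applied to $12345$.
\begin{verbatim}
def trajectory(pi):
    # Visit n-1 cyclic classes: traverse each class by left rotations,
    # then waste one letter to jump to the next class.
    n, visited, cur = len(pi), [], pi
    for _ in range(n - 1):
        for _ in range(n):
            visited.append(cur)
            cur = cur[1:] + cur[0]          # next permutation in the class
        last = visited[-1]
        cur = last[2:] + last[1] + last[0]  # change cyclic class
    return visited

print(trajectory("12345"))   # the 20 permutations listed above
\end{verbatim}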
As we read a superpermutation from left to right, we keep track of which trajectory we are on. We begin on the trajectory of the first permutation we see in the word. After that, we say that we \emph{change} trajectories whenever the word deviates from the above pattern of traversing an entire cyclic class, wasting a letter, traversing an entire cyclic class, etc., and the trajectory we change \emph{to} is the trajectory of the first permutation encountered after a change of trajectories. Changing trajectories obviously requires at least one wasted letter because one must at least change cyclic classes to change trajectories. We view the wasted letter immediately before entering the new trajectory (that is, encountering a new permutation) as the letter wasted to change trajectories. As each trajectory contains $n(n-1)$ permutations, any $n$-superpermutation must change trajectory at least $(n-2)!-1$ times, and doing so requires at least $(n-2)!-1$ wasted letters.
To improve on Proposition~\ref{prop-factor-[n]-lower-bound}, we now argue as follows. As in the proof of Proposition~\ref{prop-factor-[n]-lower-bound}, any $n$-superpermutation must complete all $(n-1)!$ cyclic classes, and doing so requires at least $(n-1)!-1$ wasted letters. We view the letter wasted immediately after completing a cyclic class as the letter wasted to leave a completed cyclic class. For example, suppose that our word begins with the prefix
\[
123451234\underline{1}523\underline{1}4.
\]
Thus we begin on the trajectory of $12345$. The next four letters, $1234$, complete the cyclic class of $12345$. The letter immediately after that (the first $\underline{1}$ above) is wasted to leave that completed cyclic class. We then visit the permutations $23415$, $34152$, and $41523$ in that order before wasting another letter (the second $\underline{1}$ above) to change trajectories.
Finally, we note that the letters wasted to leave completed cyclic classes and those wasted to change trajectories must be distinct---indeed, this claim amounts to saying that when one has just completed a cyclic class, wasting a single letter does not change trajectories. This completes the proof of the following result.
\begin{theorem}[Anonymous 4chan poster]
\label{thm-4chan-lower-bound}
For all $n\ge 1$, every $n$-superpermutation has length at least
\[
n!+(n-1)!+(n-2)!+n-3.
\]
\end{theorem}
Houston has shown (in the Superpermutators group) that the bound in Theorem~\ref{thm-4chan-lower-bound} can be increased by $1$. For general $n$, Theorem~\ref{thm-egan-upper-bound} and this improvement to Theorem~\ref{thm-4chan-lower-bound} are the best results established so far. There had been some hope in the Superpermutators group that perhaps Egan's construction could be made one letter shorter for $n\ge 7$, while the lower bound could be increased by $(n-3)!-1$, so that the two met at
\[
n!+(n-1)!+(n-2)!+(n-3)!+n-4,
\]
but this has also been shown to be false in the $n=7$ case. In this case, the original length conjecture of Ashlock and Tillotson suggested that the length of the shortest $7$-superpermutation should be $7!+6!+5!+4!+3!+2!+1!=5913$, while Egan's Theorem~\ref{thm-egan-upper-bound} gives a $7$-superpermutation of length $7!+6!+5!+4!+4=5908$. In February 2019, Bogdan Coanda made several theoretical improvements to the computer search for superpermutations and used these to find a $7$-superpermutation of length $7!+6!+5!+4!+3=5907$, thus matching the wishful thinking above. (Continuing the tradition of ``publishing'' progress on this problem in unorthodox places, Coanda announced his construction pseudonymously in the comment section of a YouTube video~\cite{parker:superpermutatio:} about the problem.) Shortly thereafter, Egan and Houston modified Coanda's approach to construct a $7$-superpermutation of length $7!+6!+5!+4!+2=5906$.
\section{As Factors, Over $[n+1]$ and $\mathbb{P}$}
\label{sec-factors-n+1}
In moving from the previous universal permutation problem to this one, we see for the first of two times the dramatic effect of adding a letter to the alphabet. Not only does the addition of a single letter seem to significantly shorten the universal words, but it changes the problem from one that remains wide open to one solved a decade ago.
A \emph{de Bruijn word} of order $n$ over the alphabet $[k]$ is a word $w$ of length $k^n$ such that every word in $[k]^n$ occurs exactly once as a \emph{cyclic factor} in $w$, or equivalently, every such word occurs exactly once as a factor in the longer word
\[
w(1)w(2)\cdots w(k^n)\ w(1)w(2)\cdots w(n-1).
\]
These words were (mis)named for de Bruijn (see \cite{bruijn:acknowledgement:}) because in addition to establishing that such words exist, he showed that there are precisely $(k!)^{k^{n-1}}/k^n$ of them. An example of a de Bruijn word, written cyclically, is shown on the left of Figure~\ref{fig-debruijn}.
\begin{figure}
\caption{On the left, a de Bruijn word of order $3$ over the alphabet $[3]$. On the right, a universal cycle for the permutations of length $4$ over the alphabet $[5]$.}
\label{fig-debruijn}
\end{figure}
In their highly influential 1992 paper, Chung, Diaconis, and Graham~\cite{chung:universal-cycle:} explored generalizations of de Bruijn words to other types of objects, including permutations. (In fact, Diaconis and Graham~\cite[Chapter 4]{diaconis:magical-mathema:} state that their motivation was a magic trick.) As they defined it, a \emph{universal cycle} (frequently shortened to \emph{ucycle}) for the permutations of length $n$ would be a word $w$ of length $n!$ (over some alphabet) such that every permutation of length $n$ is order-isomorphic to a cyclic factor of $w$, or equivalently, to a factor of the slightly longer word $w(1)w(2)\cdots w(n!)\ w(1)w(2)\cdots w(n-1)$. An example of a universal cycle over $[5]$, written cyclically, for the permutations of length $4$ is shown on the right of Figure~\ref{fig-debruijn}.
If such a universal cycle $w$ were to exist (which was the question they were interested in, leaving enumerative concerns for later), then the word
\[
w(1)w(2)\cdots w(n!)\ w(1)w(2)\cdots w(n-1)
\]
would be, in our terms, a shortest possible answer to the universal permutation problem for factors over the alphabet $\mathbb{P}$. In this way, their universal cycle of length $4!=24$ for the permutations of length $4$ shown on the right of Figure~\ref{fig-debruijn} is converted (starting at noon and proceeding clockwise) into the universal word
\[
123412534153214532413254\ 123
\]
of length $4!+4-1=27$. Thus, together with the trivial lower bound of $n! + n - 1$ noted in Observation~\ref{obs-number-of-factors}, the answer to the question posed in the upper-righthand cell of Table~\ref{tab-six-problems} is implied by the following result.
\begin{theorem}[Chung, Diaconis, and Graham~\cite{chung:universal-cycle:}]
\label{thm-univ-factor-P}
For all positive integers $n$, there is a universal cycle over the alphabet $[6n]$ for the permutations of length $n$.
\end{theorem}
Chung, Diaconis, and Graham left open the question of whether the alphabet $[6n]$ could be shrunk. Proposition~\ref{prop-factor-[n]-lower-bound} shows that, for $n\ge 3$, there cannot be a universal cycle over the alphabet $[n]$ for the permutations of length $n$. Therefore the result below, established by Johnson in 2009, is best possible.
\begin{theorem}[Johnson~\cite{johnson:universal-cycle:}]
\label{thm-univ-factor-n+1}
For all positive integers $n$, there is a universal cycle over the alphabet $[n+1]$ for the permutations of length $n$.
\end{theorem}
In terms of universal permutation problems, Theorem~\ref{thm-univ-factor-n+1} establishes that there is a word of length $n!+n-1$ over the alphabet $[n+1]$ that contains every permutation of length $n$ as a factor.
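The length-$27$ universal word over $[5]$ displayed above can be checked mechanically; the Python sketch below is our own illustration and verifies that every permutation of length $4$ is order-isomorphic to one of its factors.
\begin{verbatim}
from itertools import permutations

def pattern(factor):
    # Order-isomorphism type of a sequence of distinct letters.
    ranks = {v: r + 1 for r, v in enumerate(sorted(factor))}
    return tuple(ranks[v] for v in factor)

def universal_by_factors(word, n):
    pats = {pattern(word[i:i + n]) for i in range(len(word) - n + 1)
            if len(set(word[i:i + n])) == n}
    return all(p in pats for p in permutations(range(1, n + 1)))

w = [int(c) for c in "123412534153214532413254" + "123"]
assert len(w) == 27 and universal_by_factors(w, 4)
\end{verbatim}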
\section{As Subsequences, Over $[n]$}
\label{sec-seq-n}
The universal permutation problem for subsequences over the alphabet $[n]$ pre-dates the others by 20 years. In a 1972 technical report entitled ``Selected Combinatorial Research Problems'' and edited together with Chv\'atal and Klarner, Knuth~\cite[Problem 36]{chvatal:selected-combin:} stated the following problem, which he attributed to Richard Karp:
\begin{quote}
What is the shortest string of $\{1,2,\dots,n\}$ containing all permutations on $n$ elements as subsequences? (For $n = 3$, $1213121$; for $n=4$, $123412314321$; for $n=5$, M. Newey claims the shortest has length $19$.)
\end{quote}
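The two words quoted for $n=3$ and $n=4$ are easy to check by computer. The Python sketch below (our own) uses the standard greedy left-to-right test for subsequence containment; over the alphabet $[n]$, order-isomorphism reduces to equality, so it suffices to search for each permutation itself.
\begin{verbatim}
from itertools import permutations

def contains_subsequence(word, target):
    # Greedy left-to-right scan: each letter of target is matched to its
    # earliest possible occurrence in word.
    it = iter(word)
    return all(c in it for c in target)

def universal_by_subsequences(word, n):
    return all(contains_subsequence(word, "".join(map(str, p)))
               for p in permutations(range(1, n + 1)))

assert universal_by_subsequences("1213121", 3)        # length 7
assert universal_by_subsequences("123412314321", 4)   # length 12
\end{verbatim}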
To this day, the lengths of the shortest universal words in this case are known exactly only for $1\le n\le 7$. These values were computed by Newey to be $1$, $3$, $7$, $12$, $19$, $28$, and $39$ in his 1973 technical report~\cite{newey:notes-on-a-prob:}, and he observed that this sequence is equal to $n^2-2n+4$ for $3\le n\le 7$. In fact, Newey gave a construction of universal words of this length for all $n\ge 3$, meaning $n^2-2n+4$ is an upper bound on the answer to this universal permutation problem. While Newey remarked that it is an ``obvious conjecture'' that the length of the shortest universal word in this case is $n^2-2n+4$, he also suggested a competing conjecture that would imply that the lengths grow like $n^2 - n \log_2(n)$.
Simpler constructions of universal words of length $n^2-2n+4$ were presented in a 1974 paper of Adleman~\cite{adleman:short-permutati:}, a 1975 paper of Koutas and Hu~\cite{koutas:shortest-string:}, and a 1976 paper of Galbiati and Preparata~\cite{galbiati:on-permutation-:}. The latter two constructions were given a common generalization in the 1980 paper of Mohanty~\cite{mohanty:shortest-string:}. Interestingly, of these four papers, only Koutas and Hu were bold (or foolish) enough to conjecture that $n^2-2n+4$ is the true answer (it isn't).
After this initial flurry of activity, the problem laid dormant until the surprising 2011 work of Z\u{a}linescu~\cite{zu-alinescu:shorter-strings:}, who lowered the upper bound by $1$ for $n\ge 10$, constructing a word of length $n^2-2n+3$ that contains all permutations of length $n$ as subsequences. However, his upper bound stood for just over one year before being improved upon, for $n\ge 11$, by the following.
\begin{theorem}[Radomirovi\'c~\cite{radomirovic:a-construction-:}]
\label{thm-radomirovic}
For all $n\ge 7$, there is a word over the alphabet $[n]$ of length $\left\lceil n^2-7n/3+19/3\right\rceil$ containing subsequences equal to every permutation of length $n$.
\end{theorem}
For a lower bound on the length of a universal word in this context, we briefly present the elementary proof given by Kleitman and Kwiatkowski~\cite{kleitman:a-lower-bound-o:}. Let $w\in[n]^\ast$ be a word that contains each permutation of length $n$ as a subsequence. Choose $\pi(1)$ to be the symbol whose earliest occurrence in $w$ is as late as possible, and note that this occurrence cannot appear before $w(n)$. Next, choose $\pi(2)$ to be the symbol, other than $\pi(1)$, whose earliest occurrence after the first occurrence of $\pi(1)$ is as late as possible, and note that this occurrence must be at least $n-1$ symbols later. Then, choose $\pi(3)$ to be the not-yet-chosen symbol whose earliest occurrence after the point where $\pi(1)\pi(2)$ first appears as a subsequence of $w$ is as late as possible, and note that this occurrence must be at least $n-2$ symbols later. Continuing in this manner, we construct a permutation $\pi$ whose earliest possible occurrence in $w$ requires at least
\[
n + (n-1) + \cdots + 2 + 1
=
\frac{n^2+n}{2}
\]
symbols.
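This greedy adversary is straightforward to implement. The Python sketch below is our own illustration (the name \texttt{adversarial\_permutation} is ours); applied to the word $1213121$ from Karp's problem statement, it produces $321$, whose leftmost embedding ends at the last letter of the word.
\begin{verbatim}
def adversarial_permutation(word, n):
    # At each step, among the unused symbols, pick the one whose earliest
    # occurrence after the current position is as late as possible.
    # (Assumes word is universal, so every unused symbol still occurs.)
    pos, remaining, pi = -1, set(range(1, n + 1)), []
    for _ in range(n):
        nxt = {s: next(i for i in range(pos + 1, len(word)) if word[i] == s)
               for s in remaining}
        s = max(nxt, key=nxt.get)
        pi.append(s)
        pos = nxt[s]
        remaining.remove(s)
    return pi

print(adversarial_permutation([1, 2, 1, 3, 1, 2, 1], 3))   # [3, 2, 1]
\end{verbatim}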
Kleitman and Kwiatkowski~\cite{kleitman:a-lower-bound-o:} go on to prove (via a delicate inductive argument) a lower bound of $n^2-c_\epsilon n^{(7/4)+\epsilon}$, where the constant $c_\epsilon$ depends on $\epsilon$. While this later bound lacks concreteness, it does establish that the lengths of the shortest universal words in this case are asymptotic to $n^2$.
\section{As Subsequences, Over $[n+1]$}
\label{sec-seq-n+1}
As in the factor case, by adding a single symbol to our alphabet, we again see a dramatic decrease in the length of the shortest universal word. To date, this version of the problem has only been studied implicitly, in the 2009 work of Miller~\cite{miller:asymptotic-boun:}, where she established the following bound.
\begin{theorem}[Miller~\cite{miller:asymptotic-boun:}]
\label{thm-miller-perms}
For all $n\ge 1$, there is a word over the alphabet $[n+1]$ of length $(n^2+n)/2$ containing subsequences order-isomorphic to every permutation of length $n$.
\end{theorem}
To establish this result, define the \emph{infinite zigzag word} to be the word formed by alternating between ascending \emph{runs} of the odd positive integers $1357\cdots$ and descending \emph{runs} of the even positive integers $\cdots 8642$,
\[
1357\cdots\ \cdots 8642\
1357\cdots\ \cdots 8642\
1357\cdots\ \cdots 8642\ \cdots.
\]
While this object does not conform to most definitions of the word \emph{word} in combinatorics, we hope the reader forgives us the slight expansion of the definition adopted here. We are interested in the leftmost embeddings of words over $\mathbb{P}$ into the infinite zigzag word.
We also need two definitions. First, given a word $p\in\mathbb{P}^\ast$, we define the word $p^{+1}\in\mathbb{P}^\ast$ to be the word formed by adding $1$ to each letter of $p$, so $p^{+1}(i)=p(i)+1$ for all indices $i$ of $p$. Next we say that the word $p\in\mathbb{P}^\ast$ has an \emph{immediate repetition} if there is an index $i$ with $p(i)=p(i+1)$, i.e., if $p$ contains a factor equal to $\ell\ell$ for some letter $\ell\in\mathbb{P}$.
\begin{proposition}
\label{prop-miller-words}
If the word $p\in\mathbb{P}^n$ has no immediate repetitions, then either $p$ or $p^{+1}$ occurs as a subsequence of the first $n$ runs of the infinite zigzag word.
\end{proposition}
Before proving Proposition~\ref{prop-miller-words}, note that permutations do not have immediate repetitions. Thus if $\pi$ is a permutation of length $n$, Proposition~\ref{prop-miller-words} implies that either $\pi$ or $\pi^{+1}$ occurs as a subsequence in the first $n$ runs of the infinite zigzag word. Since $\pi^{+1}$ is order-isomorphic to $\pi$ and both $\pi$ and $\pi^{+1}$ are words over $[n+1]$, this implies that the restriction of the first $n$ runs of the infinite zigzag word to the alphabet $[n+1]$ contains every permutation of length $n$. For example, in the case of $n=5$ we obtain the universal word
\[
135\ 642\ 135\ 642\ 135
\]
of length $15$ over the alphabet $[6]$.
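The construction is easy to reproduce in code. The Python sketch below (our own; the helper names are not standard) builds the restriction of the first $n$ runs of the infinite zigzag word to $[n+1]$ and, for small $n$, verifies by brute force that it contains every permutation of length $n$ as an order-isomorphic subsequence.
\begin{verbatim}
from itertools import combinations, permutations

def miller_word(n):
    # First n runs of the infinite zigzag word, restricted to [n+1].
    odds = [k for k in range(1, n + 2) if k % 2 == 1]
    evens = [k for k in range(n + 1, 0, -1) if k % 2 == 0]
    word = []
    for r in range(n):
        word += odds if r % 2 == 0 else evens
    return word

def pattern(seq):
    ranks = {v: r + 1 for r, v in enumerate(sorted(seq))}
    return tuple(ranks[v] for v in seq)

def universal_by_subsequences(word, n):
    pats = {pattern(sub) for sub in combinations(word, n)
            if len(set(sub)) == n}
    return all(p in pats for p in permutations(range(1, n + 1)))

for n in range(1, 6):
    w = miller_word(n)
    assert len(w) == (n * n + n) // 2 and universal_by_subsequences(w, n)
\end{verbatim}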
The restriction of the infinite zigzag word described above consists of $n$ runs of average length $(n+1)/2$: if $n$ is odd, then all runs are of this length, while if $n$ is even, then half are of length $n/2$ and half are of length $(n+2)/2$. Thus Proposition~\ref{prop-miller-words} implies Theorem~\ref{thm-miller-perms}. While Proposition~\ref{prop-miller-words} does not appear explicitly in Miller~\cite{miller:asymptotic-boun:}, its proof, presented below, is adapted from her proof of Theorem~\ref{thm-miller-perms}.
\newenvironment{proof-of-prop-miller-words}{
\noindent {\it Proof of Proposition~\ref{prop-miller-words}.\/}}{\qed
}
\begin{proof-of-prop-miller-words}
We define the \emph{score} of the word $p\in\mathbb{P}^\ast$, denoted by $s(p)$, as the minimum number of runs that an initial segment of the infinite zigzag word must have in order to contain $p$, minus the length of $p$. Thus our goal is to show that for every word $p\in\mathbb{P}^\ast$ without immediate repetitions, either $s(p)\le 0$ or $s(p^{+1})\le 0$. In fact, we show that for such words we have $s(p)+s(p^{+1})=1$, which implies this.
We prove this claim by induction on the length of $p$. For the base case, we see that words consisting of a single odd letter are contained in the first run of the infinite zigzag word (thus corresponding to scores of $0$) while words consisting of a single even letter are contained in the second run (corresponding to scores of $1$). Thus for every $\ell\in\mathbb{P}^1$ we have $s(\ell)+s(\ell^{+1})=1$, as desired. Now suppose that the claim is true for all words $p\in\mathbb{P}^n$ without immediate repetitions and let $\ell\in\mathbb{P}$ denote a letter. We see that, for any $p \in \mathbb{P}^n$,
\[
s(p\ell)-s(p)
=
\left\{
\begin{array}{cl}
-1& \begin{array}{l}
\text{if $p(n)<\ell$ and both entries are odd or}\\
\text{if $p(n)>\ell$ and both entries are even;}
\end{array}
\\[12pt]
0& \begin{array}{l}
\text{if $p(n)$ and $\ell$ are of different parity; or}
\end{array}
\\[8pt]
+1& \begin{array}{l}
\text{if $p(n)<\ell$ and both entries are even,}\\
\text{if $p(n)=\ell$, or}\\
\text{if $p(n)>\ell$ and both entries are odd.}
\end{array}
\end{array}
\right.
\]
Because our words do not have immediate repetitions, we can ignore the possibility that $\ell=p(n)$. In the other cases, it can be seen by inspection that
\[
\big( s(p\ell)-s(p) \big)
+
\big( s\!\left((p\ell)^{+1}\right) - s\!\left(p^{+1}\right)\! \big)
=
0.
\]
By rearranging these terms, we see that
\[
s(p\ell) + s\!\left((p\ell)^{+1}\right)
=
s(p)+s\!\left(p^{+1}\right).
\]
Since $s(p)+s\!\left(p^{+1}\right)=1$ by induction, this completes the proof of the inductive claim, and thus also of the proposition.
\end{proof-of-prop-miller-words}
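The score can also be computed directly by simulating the greedy leftmost embedding into the infinite zigzag word; the case analysis displayed in the proof is precisely the update rule. The Python sketch below (our own illustration) does this and confirms the identity $s(p)+s(p^{+1})=1$ for all short words without immediate repetitions over a small alphabet.
\begin{verbatim}
from itertools import product

def runs_needed(p):
    # Runs of the infinite zigzag word used by the greedy leftmost
    # embedding of p (odd runs ascend, even runs descend).
    r, last = 0, None
    for x in p:
        if r == 0:
            r = 1 if x % 2 == 1 else 2
        elif x % 2 != last % 2:
            r += 1                       # next run, of the other parity
        elif (x % 2 == 1 and x > last) or (x % 2 == 0 and x < last):
            pass                         # fits later in the same run
        else:
            r += 2                       # wait for the next run of this parity
        last = x
    return r

def score(p):
    return runs_needed(p) - len(p)

for n in range(1, 5):
    for p in product(range(1, 6), repeat=n):
        if all(p[i] != p[i + 1] for i in range(n - 1)):
            q = tuple(x + 1 for x in p)
            assert score(p) + score(q) == 1
\end{verbatim}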
We conclude our consideration of this case by providing a lower bound. Suppose that the word $w$ over the alphabet $[n+1]$ contains subsequences order-isomorphic to every permutation of length $n$. For each letter $\ell\in[n+1]$, let $r_\ell$ denote the number of occurrences of the letter $\ell$ in $w$. To create a subsequence of $w$ that is order-isomorphic to a permutation, we must choose a letter of the alphabet $[n+1]$ to omit and then choose precisely one occurrence of each of the other letters. Thus the number of permutations that can be contained in $w$ is at most
\[
\sum_{\ell\in[n+1]} r_1\cdots r_{\ell-1}r_{\ell+1}\cdots r_{n+1}
=
r_1 \cdots r_{n+1} \sum_{\ell \in [n+1]} \frac{1}{r_{\ell}}.
\]
Setting $m=|w|=\sum r_\ell$, we see that the above quantity attains its maximum over all $(r_1,\dots,r_{n+1})\in\mathbb{R}_{\ge 0}^{n+1}$ when each $r_\ell$ is equal to $m/(n+1)$, and in that case the number of permutations contained in $w$ is at most
\[
(n+1)\left(\frac{m}{n+1}\right)^n.
\]
If $w$ is to contain all permutations of length $n$, then this quantity must be at least $n!$. Using the fact that $k!\ge (k/e)^k$ for all $k$, we must therefore have
\[
(n+1)\left(\frac{m}{n+1}\right)^n
\ge
n!
\ge
\left(\frac{n}{e}\right)^n.
\]
It follows that, asymptotically, we must have $m\ge n^2/e$.
\section{As Subsequences, Over $\mathbb{P}$}
\label{sec-subseq-P}
For the final cell of Table~\ref{tab-six-problems}, we seek a word over the positive integers $\mathbb{P}$ that contains all permutations of length $n$ as subsequences. As remarked upon in the Introduction, this is equivalent to seeking a permutation that contains all permutations of length $n$, and such a permutation is sometimes called an \emph{$n$-superpattern} (for example, by B\'ona~\cite[Chapter 5, Exercises 19--22 and Problems Plus 9--12]{bona:combinatorics-o:}). The first result about universal permutations of this type was obtained by Simion and Schmidt in 1985~\cite[Section~5]{simion:restricted-perm:}. They computed the number of $3$-universal permutations of length $m\ge 5$ to be
\[
m!
-6C_m
+5\cdot 2^m
+4{m\choose 2}
-2F_m
-14m
+20.
\]
(Here $C_m$ denotes the $m$th Catalan number and $F_m$ denotes the $m$th \emph{combinatorial} Fibonacci number, so $F_0=F_1=1$ and $F_m=F_{m-1}+F_{m-2}$ for $m\ge 2$.) However, the first to study this version of the universal permutation problem for general $n$ was Arratia~\cite{arratia:on-the-stanley-:} in 1999.
As our alphabet has only expanded from the version of the problem discussed in the previous section, the upper bound of $(n^2+n)/2$ established in Theorem~\ref{thm-miller-perms} also holds for the version of the problem discussed in this section. It should be noted that before Miller~\cite{miller:asymptotic-boun:} established Theorem~\ref{thm-miller-perms} in 2009, Eriksson, Eriksson, Linusson, and W{{\"{a}}}stlund~\cite{eriksson:dense-packing-o:} had established an upper bound for this problem asymptotically equal to $2n^2/3$.
Here, we give a new improvement to Miller's upper bound. In order to do so, we further restrict the infinite zigzag word, and then break ties between its letters to obtain a specific permutation $\zeta_n$. To this end, we define the word $z_n$ to be the restriction of the first $n$ runs of the infinite zigzag word to the alphabet $[n]$. When $n$ is even, each run of $z_n$ has length $n/2$. When $n$ is odd, $z_n$ consists of $(n+1)/2$ ascending odd runs, each of length $(n+1)/2$, and $(n-1)/2$ descending even runs, each of length $(n-1)/2$. Thus we have
\[
|z_n|
=
\left\{
\begin{array}{cl}
\displaystyle\frac{n^2}{2}&\text{if $n$ is even,}\\[12pt]
\displaystyle\frac{n^2+1}{2}&\text{if $n$ is odd.}
\end{array}
\right.
\]
Next we choose a specific permutation, $\zeta_n$, such that $z_n$ is order-homomorphic to $\zeta_n$. Recall that this means that for all indices $i$ and $j$,
\[
z_n(i)>z_n(j)
\implies
\zeta_n(i)>\zeta_n(j).
\]
In constructing $\zeta_n$, we have the freedom to break ties between equal letters of $z_n$. That is to say, if $z_n(i)=z_n(j)$ for $i\neq j$, then in constructing $\zeta_n$ we may choose whether $\zeta_n(i)<\zeta_n(j)$ or $\zeta_n(i)>\zeta_n(j)$ arbitrarily without affecting any other pair of comparisons and thus without losing any occurrences of permutations. We choose to break these ties by replacing all instances of a given letter $k\in[n]$ in $z_n$ by a decreasing subsequence in $\zeta_n$. Thus for indices $i<j$, we have
\[
z_n(i)=z_n(j)
\implies
\zeta_n(i)>\zeta_n(j).
\]
This choice uniquely determines $\zeta_n$ (up to order-isomorphism), as all comparisons between its letters are determined either in $z_n$, if the corresponding letters of $z_n$ differ, or by the rule above, if the corresponding letters of $z_n$ are the same. Figure~\ref{fig-z5-zeta5} shows the plots of $z_5$ and $\zeta_5$, where the \emph{plot} of a word $w$ over $\mathbb{P}$ is the set $\{(i,w(i))\}$ of points in the plane.
\begin{figure}
\caption{On the left, the further restriction we define of the infinite zigzag word, $z_5$. On the right, the normalized permutation formed by breaking ties, $\zeta_5$.}
\label{fig-z5-zeta5}
\end{figure}
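Concretely, $\zeta_n$ may be produced as in the Python sketch below (our own; the helper names are ours): sort the positions of $z_n$ by letter value, breaking ties so that later positions receive smaller values, and assign the values $1,2,\dots$ in that order.
\begin{verbatim}
def zigzag_restricted(n):
    # z_n: the first n runs of the infinite zigzag word, restricted to [n].
    odds = [k for k in range(1, n + 1) if k % 2 == 1]
    evens = [k for k in range(n, 0, -1) if k % 2 == 0]
    word = []
    for r in range(n):
        word += odds if r % 2 == 0 else evens
    return word

def zeta(n):
    # Break ties downward: equal letters of z_n become a decreasing
    # subsequence of zeta_n.
    z = zigzag_restricted(n)
    order = sorted(range(len(z)), key=lambda i: (z[i], -i))
    perm = [0] * len(z)
    for rank, i in enumerate(order, start=1):
        perm[i] = rank
    return perm

print(zeta(5))   # a permutation of length (5**2 + 1) // 2 = 13
\end{verbatim}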
In the following sequence of results, we show that $\zeta_n$ is \emph{almost} universal. In fact, we show that $\zeta_n$ fails to be universal only for even $n$, and in that case, the only missing permutation is the decreasing permutation $n\cdots 21$. The first of these results, Proposition~\ref{prop-distant-inv-desc}, covers almost all permutations. (In fact, Proposition~\ref{prop-distant-inv-desc-layered} shows that Proposition~\ref{prop-distant-inv-desc} handles all but $2^{n-1}$ permutations of length $n$.)
We say that two entries $\pi(j)$ and $\pi(k)$ form an \emph{inverse-descent} if $j<k$ and $\pi(j)=\pi(k)+1$. (As the name is meant to indicate, if a pair of entries forms an inverse-descent in $\pi$, then the corresponding entries of $\pi^{-1}$ form a descent.) If $\pi(j)$ and $\pi(k)$ form an inverse-descent and they are not adjacent in $\pi$ (so $k\ge j+2$), then we say that they form a \emph{distant} inverse-descent.
\begin{proposition}
\label{prop-distant-inv-desc}
If the permutation $\pi$ of length $n$ has a distant inverse-descent, then $\zeta_n$ contains a subsequence order-isomorphic to $\pi$.
\end{proposition}
\begin{proof}
Suppose that the entries $\pi(a)$ and $\pi(b)$ form a distant inverse-descent in $\pi$, meaning that $\pi(a)=\pi(b)+1$ and $b\ge a+2$. We define the word $p\in [n-1]^n$ by
\[
p(i)
=
\left\{\begin{array}{ll}
\pi(i)&\text{if $\pi(i)\le\pi(b)$,}\\
\pi(i)-1&\text{if $\pi(i)\ge\pi(a)=\pi(b)+1$.}
\end{array}\right.
\]
The word $p$ has two occurrences of the letter $\pi(b)$, but because $\pi(a)$ and $\pi(b)$ form a distant inverse-descent, these two occurrences of $\pi(b)$ in $p$ do not constitute an immediate repetition. Thus Proposition~\ref{prop-miller-words} shows that either $p$ or $p^{+1}$ occurs as a subsequence in the first $n$ runs of the infinite zigzag word. As $p$ and $p^{+1}$ are both words over $[n]$, whichever of these words occurs in the first $n$ runs of the infinite zigzag word also occurs as a subsequence of $z_n$. Suppose that this subsequence occurs in the indices $1\le i_1<i_2<\cdots<i_n\le |z_n|$, so $z_n(i_1)z_n(i_2)\cdots z_n(i_n)$ is equal to either $p$ or $p^{+1}$, and thus for $j,k \in [n]$ we have
\[
z_n(i_j) > z_n(i_k)
\iff
p(j) > p(k).
\]
Because $z_n$ is order-homomorphic to $\zeta_n$, this implies that for all pairs of indices $j,k\in[n]$ except the pair $\{a,b\}$, we have
\[
\zeta_n(i_j) > \zeta_n(i_k)
\iff
p(j) > p(k)
\iff
\pi(j) > \pi(k).
\]
Furthermore, since $p(a)=p(b)$, we have $z_n(i_a)=z_n(i_b)$, and so by our construction of $\zeta_n$ it follows that $\zeta_n(i_a)>\zeta_n(i_b)$, while we know that $\pi(a)>\pi(b)$ because those entries form an inverse-descent. This verifies that $\zeta_n(i_1)\zeta_n(i_2)\cdots \zeta_n(i_n)$ is order-isomorphic to $\pi$, completing the proof.
\end{proof}
\begin{figure}
\caption{The plot of the sum of $\pi$ and $\sigma$ is shown on the left. The figure on the right shows the plot of the layered permutation $21\ 3\ 654\ 87$ with layer lengths $2$, $1$, $3$, $2$.}
\label{fig-sum-and-layered}
\end{figure}
To describe the permutations that Proposition~\ref{prop-distant-inv-desc} does not apply to, we need the notions of sums of permutations and layered permutations. Given permutations $\pi$ and $\sigma$ of respective lengths $m$ and $n$, their \emph{(direct) sum} is the permutation $\pi\oplus\sigma$ of length $m+n$ defined by
\[
(\pi\oplus\sigma)(i)
=
\left\{\begin{array}{ll}
\pi(i)&\text{if $1\le i\le m$,}\\
\sigma(i-m)+m&\text{if $m+1\le i\le m+n$.}
\end{array}\right.
\]
Pictorially, the plot of $\pi\oplus\sigma$ then consists of the plot of $\sigma$ placed above and to the right of the plot of $\pi$, as shown on the left of Figure~\ref{fig-sum-and-layered}. A permutation is said to be \emph{layered} if it can be expressed as a sum of decreasing permutations, and in this case, these decreasing permutations are themselves called the \emph{layers}. An example of a layered permutation is shown on the right of Figure~\ref{fig-sum-and-layered}.
\begin{proposition}
\label{prop-distant-inv-desc-layered}
The permutation $\pi$ is layered if and only if it does not have a distant inverse-descent.
\end{proposition}
\begin{proof}
One direction is completely trivial: if $\pi$ is layered then all of its inverse-descents are between consecutive entries, so it does not have a distant inverse-descent. For the other direction we use induction on the length of $\pi$. The empty permutation is layered, so the base case holds. If $\pi$ is a nonempty permutation without distant inverse-descents, then it must begin with the entries $\pi(1)$, $\pi(1)-1$, $\dots$, $2$, $1$ in that order. This means that $\pi=\delta\oplus\sigma$ where $\delta$ is a nonempty decreasing permutation and $\sigma$ is a permutation shorter than $\pi$ that also does not have any distant inverse-descents. By induction, $\sigma$ is layered, and thus $\pi$ is as well, completing the proof.
\end{proof}
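Both conditions are easy to test, and the equivalence can be confirmed by brute force for small lengths; the Python sketch below is our own illustration.
\begin{verbatim}
from itertools import permutations

def has_distant_inverse_descent(pi):
    pos = {v: i for i, v in enumerate(pi)}
    return any(v + 1 in pos and pos[v + 1] + 2 <= pos[v] for v in pi)

def is_layered(pi):
    # Each layer must read top, top-1, ..., down to one more than the
    # number of entries already consumed.
    i, n = 0, len(pi)
    while i < n:
        top = pi[i]
        if list(pi[i:top]) != list(range(top, i, -1)):
            return False
        i = top
    return True

for n in range(1, 8):
    for pi in permutations(range(1, n + 1)):
        assert is_layered(pi) == (not has_distant_inverse_descent(pi))
\end{verbatim}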
Having characterized the permutations to which Proposition~\ref{prop-distant-inv-desc} does not apply, we now show that almost all of them are nevertheless contained in $\zeta_n$.
\begin{proposition}
\label{prop-layered-zeta}
If the permutation $\pi$ of length $n$ is layered and not a decreasing permutation of even length, then $\zeta_n$ contains a subsequence order-isomorphic to $\pi$.
\end{proposition}
\begin{proof}
Let $\pi$ denote an arbitrary layered permutation of length $n$. To prove the result, we compute the score of $\pi$ as in the proof of Proposition~\ref{prop-miller-words}, show that this score can only take on the values $0$ or $\pm 1$, and then describe an alternative embedding of $\pi$ in $\zeta_n$ in the case where the score of $\pi$ is $1$, except when $\pi$ is a decreasing permutation of even length.
Recall that the score of any word $\pi$, $s(\pi)$, is defined as the number of initial runs of the infinite zigzag word necessary to contain $\pi$ minus the length of $\pi$. As observed in the proof of Proposition~\ref{prop-miller-words}, the score of a word does not change upon reading a letter of opposite parity. This implies that, while reading a layered permutation, the score changes only when transitioning from one layer to the next, and thus we compute the score of $\pi$ layer-by-layer.
\begin{figure}
\caption{A directed graph describing the scoring of a layered permutation.}
\label{fig-layered-zeta-automaton}
\end{figure}
The change in score when moving from one layer of $\pi$ to the next is determined by the parity of the last entry of the layer we are leaving and the first entry of the layer we are entering. Specifically, the score changes by $-1$ if both of these entries are odd and $+1$ if both are even. This shows that in order to compute the score of the layered permutation $\pi$, we simply need to know the parities of the first and last entries of each of its layers. This information is represented by the labels of the nodes of the directed graph shown in Figure~\ref{fig-layered-zeta-automaton}.
Moreover, not all transitions between these nodes are possible, because the last entry of a layer is precisely $1$ greater than the first entry of the preceding layer. This is why there are only eight edges shown in Figure~\ref{fig-layered-zeta-automaton}. In this figure, each of those edges is labeled by the change in the score function. Note that the first layer must end with $1$ (an odd entry), and its first entry must be either odd (for a score of $0$) or even (for a score of $1$); this is equivalent to starting our walk on the graph in Figure~\ref{fig-layered-zeta-automaton} at the node labeled $(\small\textsf{even}\text{--}\textsf{even})$ before any layers are read.
From this graphical interpretation of the scoring process, it is apparent that the score of a layered permutation can take on only three values: $-1$ if it ends at the node $(\small\textsf{odd}\text{--}\textsf{even})$; $0$ if it ends at either node $(\small\textsf{even}\text{--}\textsf{even})$ or $(\small\textsf{odd}\text{--}\textsf{odd})$; or $1$ if it ends at the node $(\small\textsf{even}\text{--}\textsf{odd})$. Except in this final case, we are done.
Now suppose that we are in the final case, so the ultimate layer of $\pi$ is of $(\small\textsf{even}\text{--}\textsf{odd})$ type. The first entry of this layer is the greatest entry of $\pi$, so we know that $\pi$ has even length. If $\pi$ were a decreasing permutation then there would be nothing to prove (as we have not claimed anything in this case), so let us further suppose that $\pi$ is not a decreasing permutation, and thus that $\pi$ has at least two layers. We further divide this case into two cases. In both cases, as in the proof of Proposition~\ref{prop-distant-inv-desc}, we construct a word $p\in[n]^n$ such that if $z_n$ contains $p$, then $\zeta_n$ contains $\pi$.
First, suppose that the penultimate layer of $\pi$ is of $(\small\textsf{even}\text{--}\textsf{odd})$ type and that this layer begins with the entry $\pi(b)$. This implies that the penultimate layer of $\pi$ has at least two entries (because its first and last entries have different parities). In this case, we define $p$ by
\[
p(i)
=
\left\{\begin{array}{ll}
\pi(i)&\text{if $\pi(i)<\pi(b)$,}\\
\pi(i)-1&\text{if $\pi(i)\ge\pi(b)$.}
\end{array}\right.
\]
In other words, to form $p$ from $\pi$ we decrement the first entry of the penultimate layer and all entries of the ultimate layer. Because the penultimate layer of $\pi$ has at least two entries, performing this operation creates an immediate repetition (of the entry $\pi(b)-1$) at the beginning of this layer. For example, if $\pi=21\ 6543\ 87$ then $\pi(b)=6$ and we decrement the $6$, $8$, and $7$ to obtain the word $p=21\ 5543\ 76$.
As with our previous constructions, if $z_n$ contains an occurrence of $p$, then $\zeta_n$ will contain a copy of $\pi$. We establish that $z_n$ contains $p$ by showing that $s(p)=0$, which requires a further bifurcation into subcases. In both subcases, the scoring of $p$ is computed by considering its score in the antepenultimate layer (the layer immediately before the penultimate layer), the score change when reading the newly decremented first entry of the penultimate layer, the score penalty of $+1$ because $p$ contains an immediate repetition (namely, $\pi(b)-1$ occurs twice in a row), and finally the score change between the penultimate and ultimate layers. We label these cases by the final three nodes of the directed graph from Figure~\ref{fig-layered-zeta-automaton} visited while computing the score of $\pi$.
\begin{itemize}
\item The final three layers are of type $(\small\textsf{even}\text{--}\textsf{even})(\small\textsf{even}\text{--}\textsf{odd})(\small\textsf{even}\text{--}\textsf{odd})$. Note that this case includes the possibility that $\pi$ has only two layers. If $p$ has an antepenultimate layer, then the score while reading that layer is $0$ and the ascent between its last entry and the newly decremented first entry of the penultimate layer is of different parity (even to odd), contributing $0$ to the score. If $p$ does not have an antepenultimate layer, then $p$ begins with the newly decremented first entry of its penultimate layer, which contributes $0$ to the score. In either case, the score of $p$ is $0$ upon reading the first entry of the penultimate layer. The immediate repetition in the penultimate layer contributes $+1$ to the score, while the ascent between the last entry of the penultimate layer and the newly decremented first entry of the ultimate layer is odd and thus contributes $-1$, so $s(p) = 0$.
\item The final three layers are of type $(\small\textsf{even}\text{--}\textsf{odd})(\small\textsf{even}\text{--}\textsf{odd})(\small\textsf{even}\text{--}\textsf{odd})$. The score while reading the antepenultimate layer is $+1$. The ascent between the last entry of the antepenultimate layer and the newly decremented first entry of the penultimate layer is odd, so it contributes $-1$ to the score, the immediate repetition in the penultimate layer contributes $+1$, and the ascent between the last entry of the penultimate layer and the newly decremented first entry of the ultimate layer is odd and thus contributes $-1$, so $s(p) = 0$.
\end{itemize}
It remains to treat the case where the penultimate layer is of $(\small\textsf{even}\text{--}\textsf{even})$ type. Note that this case includes the possibility that the penultimate layer consists of a single entry. Suppose that the penultimate layer ends with the entry $\pi(a)$. We define $p$ by
\[
p(i)
=
\left\{\begin{array}{ll}
\pi(i)&\text{if $\pi(i)<\pi(a)$ or $\pi(i)=n$,}\\
\pi(i)+1&\text{if $\pi(i)\ge\pi(a)$ and $\pi(i)\neq n$.}
\end{array}\right.
\]
Thus in forming $p$ from $\pi$ we increment all entries of the penultimate layer and all but the first entry of the ultimate layer. For example, if $\pi=21\ 3\ 654\ 87$, then we increment the $6$, $5$, $4$, and $7$ to obtain the word $p=21\ 3\ 765\ 88$.
As before, if $z_n$ contains an occurrence of $p$ then $\zeta_n$ will contain a copy of $\pi$. Thus we need only show that $s(p)=0$, which we do, as in the previous case, by considering the scoring of the final three layers. As in that case, we identify two subcases.
\begin{itemize}
\item The final three layers are of type $(\small\textsf{odd}\text{--}\textsf{even})(\small\textsf{even}\text{--}\textsf{even})(\small\textsf{even}\text{--}\textsf{odd})$. The score while reading the antepenultimate layer is $-1$. The ascent between the last entry of the antepenultimate layer and the newly incremented first entry of the penultimate layer is of different parity (even to odd) and thus contributes $0$ to the score. The ascent between the newly incremented last entry of the penultimate and the first entry of the ultimate layer (which is $n$) is of different parity (odd to even) and thus contributes $0$ to the score. Finally, the immediate repetition at the beginning of the ultimate layer (the two entries equal to $n$) contributes $+1$ to the score, so $s(p)=0$.
\item The final three layers are of type $(\small\textsf{odd}\text{--}\textsf{odd})(\small\textsf{even}\text{--}\textsf{even})(\small\textsf{even}\text{--}\textsf{odd})$. The score while reading the antepenultimate layer is $0$. The ascent between the last entry of the antepenultimate layer and the newly incremented first entry of the penultimate layer contributes $-1$ to the score (as both entries are now odd). The ascent between the newly incremented last entry of the penultimate layer and the first entry of the ultimate layer (which is $n$) is of different parity (odd to even) and thus contributes $0$ to the score. Finally, the immediate repetition at the beginning of the ultimate layer contributes $+1$ to the score, so $s(p)=0$.
\end{itemize}
As we have considered all of the cases, the proof is complete.
\end{proof}
It remains only to conclude. The length of $\zeta_n$ is $(n^2+1)/2$ when $n$ is odd and $n^2/2$ when $n$ is even. When $n$ is odd, we have established that $\zeta_n$ is universal. When $n$ is even, Proposition~\ref{prop-layered-zeta} shows that $\zeta_n$ need not be universal, and indeed one can check that it is not. However, in this case we know that $\zeta_n$ contains the decreasing permutation $(n-1)\cdots 21$ (for instance because it contains the permutation $(n-1)\cdots 21\oplus 1$), so we obtain a universal word by prepending a new maximum entry to $\zeta_n$. This gives us the following bound.
\begin{theorem}
\label{thm-perms-perms}
There is a word over $\mathbb{P}$ of length $\left\lceil(n^2+1)/2\right\rceil$ containing subsequences order-isomorphic to every permutation of length $n$.
\end{theorem}
A computer search reveals that the bound in Theorem~\ref{thm-perms-perms} is best possible for $n\le 5$. Alas, for $n=6$ the bound in Theorem~\ref{thm-perms-perms} is $19$, but Arnar Arnarson [private communication] has found that the permutation
\[
6\ 14\ 10\ 2\ 13\ 17\ 5\ 8\ 3\ 12\ 9\ 16\ 1\ 7\ 11\ 4\ 15
\]
of length $17$ is universal for the permutations of length $6$. Computations have shown that no shorter permutation is universal for the permutations of length $6$.
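These computational claims can be reproduced by brute force for small $n$. The following Python sketch (our own illustration; the helper functions are ours, and the exhaustive search is only feasible for small cases) tests whether a word over $\mathbb{P}$ contains every permutation of length $n$ as an order-isomorphic subsequence, and can be applied to the length-$17$ permutation displayed above with $n=6$.
\begin{verbatim}
from itertools import combinations, permutations

def contains_pattern(word, patt):
    # True if some subsequence of word is order-isomorphic to patt
    k = len(patt)
    for idx in combinations(range(len(word)), k):
        sub = [word[i] for i in idx]
        if all((sub[a] < sub[b]) == (patt[a] < patt[b]) and
               (sub[a] > sub[b]) == (patt[a] > patt[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

def is_universal(word, n):
    # True if word contains every permutation of length n as a pattern
    return all(contains_pattern(word, p)
               for p in permutations(range(1, n + 1)))

# the length-17 permutation reported above to be universal for n = 6
w = [6, 14, 10, 2, 13, 17, 5, 8, 3, 12, 9, 16, 1, 7, 11, 4, 15]
print(is_universal(w, 6))   # True (slow brute-force check)
\end{verbatim}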
The best lower bound in this case is still the one given by Arratia~\cite{arratia:on-the-stanley-:} in his initial work on the problem. Note that if the word $w$ of length $m$ over the alphabet $\mathbb{P}$ is to contain subsequences order-isomorphic to each permutation of length $n$, then we must have
\[
{m\choose n}\ge n!.
\]
As in the analysis of the lower bound of the previous section, using the fact that $k!\ge (k/e)^k$ for all $k$, we see that for the above inequality to hold we must have
\[
\bigg(\frac{me}{n}\bigg)^n
\ge
{m\choose n}
\ge
n!
\ge
\bigg(\frac{n}{e}\bigg)^n,
\]
from which it follows that we must have $m\ge n^2/e^2$. In fact, Arratia~\cite[Conjecture 2]{arratia:on-the-stanley-:} conjectured that the length of the shortest universal permutation in this case is asymptotic to $n^2/e^2$.
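For small $n$ the counting bound can also be evaluated exactly. The short Python sketch below (our own illustration, not part of Arratia's argument) computes the least $m$ with $\binom{m}{n}\ge n!$; any word over $\mathbb{P}$ containing all permutations of length $n$ must be at least this long.
\begin{verbatim}
from math import comb, factorial

def counting_bound(n):
    # least m with C(m, n) >= n!; a lower bound on the length of any
    # word over the positive integers containing all patterns of length n
    m = n
    while comb(m, n) < factorial(n):
        m += 1
    return m

print([counting_bound(n) for n in range(2, 11)])
# the bound is of order n^2/e^2 as n grows, as in the estimate above
\end{verbatim}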
\section{Further Variations}
In case the infinitely many problems introduced so far are not enough, we conclude by briefly describing further variants that have received attention.
\begin{enumerate}
\item As observed in Section~\ref{sec-factors-n}, there is no universal cycle over the alphabet $[n]$ for the permutations of length $n$. However, Jackson~\cite{jackson:universal-cycle:} proved that there is a universal cycle over the alphabet $[n]$ for all shorthand encodings of permutations of length $n$, where the \emph{shorthand encoding} of the permutation $\pi$ of length $n$ is the word $\pi(1)\cdots\pi(n-1)$. This result and some extensions are discussed in \cite[Section 7.2.1.2, Exercises 111--113]{knuth:the-art-of-comp:4a}, where Knuth asked for an explicit construction of such a universal cycle (Jackson's proof was nonconstructive). Knuth's request was answered by Ruskey and Williams~\cite{ruskey:an-explicit-uni:}. Further constructions have been given by Holroyd, Ruskey, and Williams~\cite{holroyd:faster-generati:,holroyd:shorthand-unive:}.
\begin{figure}
\caption{Two rosaries presented by Gupta for permutations of length $6$. The rosary on the left may only be read clockwise, while the rosary on the right may be read either clockwise or counterclockwise.}
\label{fig-rosary}
\end{figure}
\item Gupta~\cite{gupta:on-permutation-:} considered a subsequence version of a universal cycle for permutations. A \emph{rosary} is a word $w$ over the alphabet $[n]$ such that every permutation of length $n$ is contained as a subsequence of the word
\[
w(k)w(k+1)\cdots w(|w|)w(1)w(2)\cdots w(k-1)
\]
for some value of $k$. In other words, thinking of the letters as being arranged in a circle as on the left of Figure~\ref{fig-rosary}, we may start anywhere we like, but must traverse the rosary clockwise, and cannot return to where we started. Gupta conjectured that one could always construct a rosary of length at most $n^2/2$. This conjecture was discussed by Guy~\cite[Problem E22]{guy:unsolved-proble:book} and proved in the case where $n$ is even by Lecouturier and Zmiaikou~\cite{lecouturier:on-a-conjecture:}. Gupta also considered the variant where one is allowed to traverse the rosary both clockwise and counterclockwise (see the right of Figure~\ref{fig-rosary}); he conjectured that one can always construct a rosary of length at most $3n^2/8+1/2$ in this version of the problem.
\item Albert and West~\cite{albert:universal-cycle:} studied the existence of universal cycles in the sense of Section~\ref{sec-factors-n+1} for permutation classes, making no restrictions on the size of the alphabet. To describe their results, we define a partial order on the set of all finite permutations where $\sigma\le\pi$ if $\pi$ contains a subsequence that is order-isomorphic to $\sigma$. If $\sigma\not\le\pi$ then we say that $\pi$ \emph{avoids} $\sigma$. A \emph{permutation class} is a set closed downward in this order. Every permutation class can be specified by giving the set of minimal elements \emph{not} in the class (this set is called the \emph{basis} of the class), and when presented in this form, we use the notation
\[
\operatorname{Av}(B)=\{\pi\::\:\pi\mbox{ avoids all $\beta\in B$}\}.
\]
Most of Albert and West's results are negative in nature, but some classes they consider, such as $\operatorname{Av}(132,312)$, do have universal cycles over the alphabet $\mathbb{P}$. They say that a permutation class with such a universal cycle is \emph{value cyclic}.
\begin{figure}
\caption{The proportion of permutations containing subsequences order-isomorphic to every permutation of length $n$, by length, for $3\le n\le 6$.}
\label{fig-plot-universal}
\end{figure}
\item At the end of his paper, Arratia~\cite{arratia:on-the-stanley-:} defines $t(n)$ to be the least integer $m$ such that at least half of all permutations of length $m$ contain subsequences order-isomorphic to every permutation of length $n$, and he states that Noga Alon has conjectured that $t(n)$ is asymptotic to $n^2/4$. Figure~\ref{fig-plot-universal} plots the proportions of these permutations of lengths $0\le m\le 40$ for $n=3$, $4$, $5$, and $6$. For $n=3$, we compute these proportions exactly using the formula of Simion and Schmidt~\cite{simion:restricted-perm:} mentioned at the beginning of Section~\ref{sec-subseq-P}, while for $n\ge 4$, these plots are obtained by random sampling to a high level of confidence. This data and further computations suggest the following values of $t(n)$ for $1\le n\le 8$:
\[
\begin{array}{cc}
n&t(n)\\\hline
1&1\\
2&3\\
3&7\\
4&13\\
5&20\\
6&28\\
7&36\\
8&48
\end{array}
\]
While the first six values of $t(n)$ above might lead the reader to suspect that $t(n)$ is the nearest integer to $\pi n^2/4$, this seems not to hold for $n=7,8$. We leave it to the reader to decide whether these values support or undermine Alon's conjecture that $t(n)\sim n^2/4$.
\item Universal words over $\mathbb{P}$ containing, as subsequences, all permutations of length $n$ from a proper permutation class have also been studied. Bannister, Cheng, Devanny, and Eppstein~\cite{bannister:superpatterns-a:} construct a universal word of length $n^2/4+\Theta(n)$ for the permutations of length $n$ in the class $\operatorname{Av}(132)$, and they show that every \emph{proper} subclass $\mathcal{C}\subsetneq\operatorname{Av}(132)$ has a universal word of length at most $O(n\log^{O(1)} n)$. In~\cite{bannister:superpatterns-a:}, among other results, Bannister, Devanny, and Eppstein find a universal word of length at most $22n^{3/2}+\Theta(n)$ for the class $\operatorname{Av}(321)$. Finally, Albert, Engen, Pantone, and Vatter~\cite{albert:universal-layer:} consider the class of layered permutations, $\operatorname{Av}(231, 312)$. In addition to verifying a conjecture of Gray~\cite{gray:bounds-on-super:}, they show that the length of the shortest universal word over $\mathbb{P}$ containing all layered permutations of length $n$ as subsequences is given \emph{precisely} by $a(0)=0$ and
\[
a(n) = (n+1)\lceil\log_2 (n+1)\rceil -2^{\lceil \log_2 (n+1)\rceil} + 1
\]
for $n\ge 1$; a short numerical evaluation of this formula is given just after this list.
\end{enumerate}
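The exact formula in the last item above is easy to evaluate; the following lines (our own illustration) print its first values.
\begin{verbatim}
def a(n):
    # a(n) = (n+1)*ceil(log2(n+1)) - 2^ceil(log2(n+1)) + 1, with a(0) = 0
    if n == 0:
        return 0
    k = n.bit_length()        # equals ceil(log2(n+1)) for n >= 1
    return (n + 1) * k - 2 ** k + 1

print([a(n) for n in range(11)])
# [0, 1, 3, 5, 8, 11, 14, 17, 21, 25, 29]
\end{verbatim}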
\noindent{\bf Acknowledgements.}
We thank Michael Albert, Arnar Arnarson, Robert Brignall, Robin Houston, and Jay Pantone for numerous fruitful discussions that improved this work. We are additionally grateful to Jay Pantone for his assistance in verifying that no permutation of length $16$ or less contains all permutations of length $6$ as subsequences.
\end{document}
\begin{document}
\author{Zilong He}
\address{}
\email{}
\author{Yong Hu}
\address{ }
\email{ }
\author{Fei Xu}
\address{ }
\email{ }
\title[ ]{On indefinite $k$-universal integral quadratic forms over number fields}
\thanks{ }
\subjclass[2010]{11E12, 11E08, 11E20, 11R11}
\date{\today}
\keywords{integral quadratic forms, local-global principle, integral representation, universal quadratic forms, quadratic fields}
\begin{abstract}
An integral quadratic lattice is called indefinite $k$-universal if it represents all integral quadratic lattices of rank $k$ for a given positive integer $k$.
For $k\geq 3$, we prove that the indefinite $k$-universal property satisfies the local-global principle over number fields.
For $k=2$, we show that a number field $F$ admits an integral quadratic lattice which is locally $2$-universal but not indefinite 2-universal if and only if the class number of $F$ is even. Moreover, there are only finitely many classes of such lattices over $F$.
For $k=1$, we prove that $F$ admits a classic integral lattice which is locally classic $1$-universal but not classic indefinite $1$-universal if and only if $F$ has a quadratic unramified extension where all dyadic primes of $F$ split completely. In this case, there are infinitely many classes of such lattices over $F$. All quadratic fields with this property are determined.
\end{abstract}
\maketitle
\section{Introduction}
As an extension of the four squares theorem of Lagrange, Ramanujan determined in \cite{Ram} all diagonal positive definite quaternary integral quadratic forms which represent all positive integers. Later, Dickson and his school called these forms universal, and further studied forms of this kind extensively in \cite{Di-27}, \cite{Di}, \cite{Ro}, \cite{Will} etc. in both the positive definite and the indefinite cases. Instead of classifying all universal quadratic forms, Conway and Schneeberger provided a surprisingly simple criterion, according to which a classic positive quadratic form is universal if and only if it represents all positive integers up to 15. In \cite{Bh}, Bhargava gave a simple proof of the Conway-Schneeberger theorem, and then corrected and completed the previous results in \cite{Will} by using this theorem.
Representation of an integral quadratic form by another is a natural generalization of representation of an integer by an integral quadratic form. Generalizing Lagrange's four squares theorem in this direction, Mordell proved that every binary positive classic quadratic form can be represented by the sum of five squares in \cite{Mor}. Ko \cite{Ko} further proved that every $k$-ary positive classic quadratic form can be represented by the sum of $k+3$ squares for $3\leq k\leq 5$. A general representation theory for positive definite quadratic forms has been established by Hsia, Kitaoka and Kneser in \cite{hsia_positivedefinite_1998}. Their result has been improved by Ellenberg and Venkatesh over $\Bbb Z$ in \cite{EV}. A natural extension of universal form to higher dimensions is a positive integral quadratic form representing all $k$-ary positive integral quadratic forms for a given positive integer $k$. B. M. Kim, M.-H. Kim and S. Raghavan called such a quadratic form $k$-universal and classified all 2-universal positive quinary diagonal quadratic forms over $\Bbb Z$ in \cite{kim_2universal_1997}. B. M. Kim, M.-H. Kim and B.-K. Oh \cite{kim_2universal_1999} further determined all 2-universal quinary positive classic quadratic forms. They also established an analogue of Conway-Schneeberger's theorem for 2-universal forms over $\Bbb Z$. It is proved in \cite{Oh} that a positive classic quadratic form over $\Bbb Z$ is $8$-universal if and only if it represents the two special forms $I_8$ and $E_8$. We refer the reader to the survey article \cite{Kim} for the case $k\geq 3$ as well as some related results over other number fields.
A natural question about the universal property arises for indefinite integral quadratic forms over number fields. Since the ring of integers of a number field need not be a principal ideal domain, we work in the more general setting of integral quadratic lattices. In order to distinguish from the positive definite case, we call an integral quadratic lattice \emph{indefinite $k$-universal} if it represents all integral quadratic lattices of rank $k$ for a given positive integer $k$. When $k=1$, this universal property has been studied in \cite{xu_indefinite_2020}. It turns out that the indefinite 1-universal property satisfies the local-global principle over $\Bbb Z$, by strong approximation for spin groups. The local universal conditions have been given in \cite{beli_universal_2020}, \cite{EG} and \cite{xu_indefinite_2020}. Together these provide an effective algorithm for determining whether a form over $\Bbb Z$ is indefinite 1-universal.
However, the local-global principle is not always true over general number fields. For example, Estes and Hsia have shown that the sum of three squares over certain imaginary quadratic fields is locally 1-universal but not indefinite 1-universal in \cite{EH0} and \cite{EH}. In \cite{xu_indefinite_2020}, the third named author and Zhang have proved that
a number field $F$ admits an integral lattice which is locally 1-universal but not indefinite 1-universal if and only if the class number of $F$ is even.
In this paper, we will extend this result to $k\geq 2$ (see Corollary \ref{2.3} and Theorem \ref{5.1}).
\begin{thm} \label{1.1} Let $k$ be a positive integer and $F$ be a number field.
(1) For $k\ge 3$, an integral quadratic lattice over $F$ is indefinite $k$-universal if and only if it is locally $k$-universal.
(2) There exists an integral quadratic lattice which is locally $2$-universal but not indefinite $2$-universal over $F$ if and only if the class number of $F$ is even. In this case, there are only finitely many classes of such integral quadratic lattices over $F$.
\end{thm}
Saying a quadratic lattice over $F$ is integral means that the norm of the lattice is contained in the ring of integers of $F$ (see Definition \ref{int}).
One can also define the notion of classic indefinite $k$-universality (see Definition\;\ref{1.3}) by restricting to classic integral lattices (see Definition \ref{int}). For the classic indefinite universal property, the first part of Theorem\;\ref{1.1} is still true but the second part is not. In fact, there is no classic $2$-universal quaternary lattice over a dyadic local field (Proposition \ref{4.6}).
As a complement to Theorem\;\ref{1.1} and \cite[Remark 1.2]{xu_indefinite_2020}, we also obtain a necessary and sufficient condition for number fields which admit classic indefinite $1$-universal lattices (see Theorem\;\ref{6.1}).
\begin{thm} \label{1.2} A number field $F$ admits a classic integral lattice which is locally classic $1$-universal but not classic indefinite 1-universal if and only if $F$ has a quadratic unramified extension where all dyadic primes of $F$ split completely. In this case, there are infinitely many classes of such integral quadratic lattices over $F$.
\end{thm}
We also classify all quadratic fields with this property (see Theorem\;\ref{6.4}).
\
Unexplained notation and terminology are adopted from \cite{omeara_quadratic_1963}. Let $F$ be a number field with ring of integers $\mathcal{O}_{F} $, $\Omega_{F} $ the set of all primes of $ F $ and $ \infty_{F} $ the subset of archimedean primes. For any $ \mathfrak{p}\in \Omega_{F} $, we denote by $ F_{\mathfrak{p}} $ the completion of $ F$ at $\frak p$ and by $F_\frak p^\times$ the set of non-zero elements of $F_\frak p$.
If $\frak p\in \infty_F$, we put $\mathcal O_{\frak p}=F_{\frak p}$. If $ \mathfrak{p}\in \Omega_{F} \setminus \infty_F$, we denote by $ \mathcal{O}_{\mathfrak{p}} $ the valuation ring of
$F_{\mathfrak{p}} $ and $ \mathcal{O}_{\mathfrak{p}}^{\times} $ the group of units. By abuse of notation, sometimes we also write $\frak p$ for the maximal ideal of ${\mathcal O}_{\frak p}$. The normalized discrete valuation of $F_{\frak p}$ is denoted by $\mathrm{ord}_{\frak p}$. We write $\pi_{\frak p}$ for a uniformizer of $F_{\frak p}$ and fix a non-square unit $\Delta_\frak p\in \mathcal{O}_{\frak p}^\times$ such that the quadratic extension $F_\frak p(\sqrt{\Delta_\frak p})/F_\frak p$ is unramified. For $\alpha, \beta\in F_\frak p^\times$, we write $(\alpha, \beta)_\frak p$ for the Hilbert symbol.
Let $V$ be a quadratic space over $F$ (resp. $F_\frak p$), by which we mean a finite dimensional vector space endowed with a non-degenerate symmetric bilinear form $$B: V\times V\to F \ (\text{resp.} \ F_\frak p) \ \ \text{with the associated quadratic form} \ \ Q(x)=B(x,x),\, x\in V . $$ We denote by $\det(V)$ the \emph{determinant} (or \emph{discriminant} in the terminology of \cite[\S\;41.B, p.87]{omeara_quadratic_1963}) of $V$. A two dimensional quadratic space is called a hyperbolic plane and denoted by $\Bbb H$ if it contains two vectors $x, y$ satisfying $Q(x)=Q(y)=0$ and $B(x,y)=1$. For a quadratic space $V$ over $F_{\frak p}$, let $S_\frak p(V)$ be its Hasse symbol.
An ${\mathcal O}_F$-lattice (resp. ${\mathcal O}_{\frak p}$-lattice) $L$ in $ V $ is a finitely generated $\mathcal{O}_{F}$-module (resp. $ \mathcal{O}_{\mathfrak{p}} $-module for $ \mathfrak{p}\in \Omega_{F} \setminus \infty_F$) inside $V$. If $FL=V$ (resp. $F_\frak p L=V$), we say that $L$ is a lattice on $V$. If the quadratic space $V$ has an orthogonal basis $ x_{1},\ldots, x_{n} $ with $ Q(x_{i})=a_{i}$ for $1\leq i\leq n$, we simply write $[a_{1},\ldots,a_{n}] $ for $V$. A lattice with such an orthogonal basis will be denoted by $ \langle a_{1},\ldots,a_{n}\rangle $.
The \emph{scale} $\mathfrak{s}(L)$ and the \emph{norm} $\mathfrak{n}(L)$ of a lattice $L$ are defined as the fractional ideals
$$ \mathfrak{s}(L) =B(L,L) \mathcal{O}_{F} \ (\text{resp.} \ B(L,L) \mathcal{O}_{\mathfrak{p}}) \ \ \ \text{and} \ \ \ \mathfrak{n}(L) =Q(L)\mathcal{O}_{F} \ (\text{resp.} \ Q(L) \mathcal{O}_{\mathfrak{p}}). $$
\begin{defn} \label{int} Let $L$ be an ${\mathcal O}_F$-lattice (resp. ${\mathcal O}_{\frak p}$-lattice).
(1) We say $L$ is \emph{integral}
if $\mathfrak{n}(L)\subseteq \mathcal{O}_{F}$ (resp. $\mathfrak{n}(L)\subseteq \mathcal{O}_{\mathfrak{p}}$).
(2) A lattice $L$ is called \emph{classic integral} if $\mathfrak{s}(L)\subseteq \mathcal{O}_{F}$ (resp. $\mathfrak{s}(L)\subseteq \mathcal{O}_{\mathfrak{p}}$).
\end{defn}
Let $W$ be another quadratic space over $F$ with the associated quadratic form $Q'$. We say that $W$ is represented by $V$ if there is an $F$-linear map $\sigma: W\to V$ such that $Q\circ \sigma=Q'$. Such a map $\sigma$ is called a \emph{representation} of $W$ into $V$. For two lattices $L$ and $M$, we say that $L$ is represented by $M$, written as $L\to M$, if there is a representation $$\sigma: FL\to FM \ \ \ \text{ such that } \ \ \ \sigma L\subseteq M. $$
The \emph{orthogonal group} $O(V)$ and the \emph{special orthogonal group} $O^+(V)$ of $V$ are defined by $$ O(V)=\{ \sigma \in GL(V): \ Q\circ \sigma=Q\} \ \ \ \text{and} \ \ \ O^+(V)= \{ \sigma \in O(V): \ \det(\sigma)=1 \}\,. $$ The symmetry $\tau_u \in O(V)$ with respect to a vector $u\in V$ such that $Q(u)\neq 0$ is defined by
$$ \tau_u(x) = x-\frac{2B(x, u)}{Q(u)} u; \ \ \ \forall x\in V . $$
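As a quick illustration of this definition (the Gram matrix and vectors below are chosen by us and play no role in the paper), one can verify numerically that $\tau_u$ preserves the quadratic form and sends $u$ to $-u$:
\begin{verbatim}
from fractions import Fraction as Fr

G = [[Fr(2), Fr(0), Fr(0)],          # Gram matrix of the diagonal form <2,-3,5>
     [Fr(0), Fr(-3), Fr(0)],
     [Fr(0), Fr(0), Fr(5)]]

def B(x, y):                          # bilinear form B(x, y)
    return sum(G[i][j] * x[i] * y[j] for i in range(3) for j in range(3))

def Q(x):                             # quadratic form Q(x) = B(x, x)
    return B(x, x)

def tau(u, x):                        # the symmetry tau_u applied to x
    c = 2 * B(x, u) / Q(u)
    return [x[i] - c * u[i] for i in range(3)]

u = [Fr(1), Fr(2), Fr(1)]             # Q(u) = -5, nonzero
x = [Fr(3), Fr(-1), Fr(4)]
assert Q(tau(u, x)) == Q(x)           # tau_u is an isometry
assert tau(u, u) == [-t for t in u]   # tau_u reflects u to -u
\end{verbatim}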
Write
$$O(L)= \{ \sigma\in O(V): \ \sigma L= L \} \ \ \ \text{and} \ \ \ O^+(L)=\{ \sigma \in O^+(V): \ \sigma L= L \} $$ for a given lattice $L$ on $V$.
Denote by $$\theta: \ O(V)\rightarrow F^\times / (F^\times)^2 $$ the spinor norm map of $V$ (cf. \cite[p.137, \S\;55]{omeara_quadratic_1963}).
For each prime $\frak p\in\Omega_F$, let $$V_{\frak p}=F_{\frak p}\otimes_F V \ \ \ \text{ and } \ \ \ L_{\frak p}={\mathcal O}_{\frak p}\otimes_{{\mathcal O}_F}L .$$ Then $O(V_{\frak p}),\,O^+(V_{\frak p})$, $O(L_{\frak p}),\,O^+(L_{\frak p})$, $\theta_\frak p$ and $\tau_{u_\frak p}$ for $u_\frak p\in V_\frak p$ with $Q(u_\frak p)\neq 0$ are defined similarly.
The adelic groups of $O(V)$ and $O^+(V)$ are denoted by $O_\Bbb A(V)$ and $O^+_\Bbb A(V)$ respectively. They act naturally on a given lattice $L$ on $V$.
The orbit of $L$ under the action of $O_\Bbb A(V)$ (or $O^+_\Bbb A(V)$) is called the \emph{genus} of $L$ and denoted by $\mathrm{gen}(L)$ (see \cite[\S 102 A]{omeara_quadratic_1963}).
\begin{defn}\label{1.3} Let $k\ge 1$ be an integer.
(1) A quadratic space over $F$ (resp. $F_\frak p$) is called \emph{$k$-universal} if it represents all $k$-dimensional quadratic spaces over $F$ (resp. $F_\frak p$). When $k=1$, we simply say universal instead of $1$-universal.
(2) An integral (resp. classic integral) $\mathcal{O}_{\mathfrak{p}}$-lattice is called \emph{$k$-universal (resp. classic $k$-universal)} if it represents all integral (resp. classic integral) $\mathcal{O}_{\mathfrak{p}}$-lattices of rank $k$ for $ \mathfrak{p}\in \Omega_{F} \setminus \infty_F$.
(3) An integral (resp. classic integral) $\mathcal{O}_{F}$-lattice is called \emph{indefinite (resp. classic indefinite) $k$-universal} if it represents all integral (resp. classic integral) $\mathcal{O}_{F}$-lattices of rank $k$.
(4) An integral (resp. classic integral) $\mathcal{O}_{F}$-lattice $L$ is called \emph{$k$-LNG (resp. classic $k$-LNG)} if $L_{\frak p}=\mathcal{O}_{\frak p}\otimes_{\mathcal{O}_F} L$ is $k$-universal (resp. classic $k$-universal) over $\mathcal{O}_{\frak p}$ for all $\frak p\in \Omega_F$ but $L$ is not indefinite $k$-universal (resp. classic indefinite $k$-universal).
\end{defn}
The paper is organized as follows. We first study $k$-universal spaces over a local field in Section\;\ref{sec2}. A consequence of the main result of this section is the first part of Theorem \ref{1.1}. In Section\;\ref{sec3}, we establish a local analogue of the Conway-Schneeberger theorem and determine all $k$-universal lattices over non-dyadic local fields in terms of Jordan decompositions. Over dyadic local fields, 2-universal quaternary lattices are classified in Section\;\ref{sec4}. We show in Section\;\ref{sec5} that these results lead to a proof of the second part of Theorem \ref{1.1}. In Section\;\ref{sec6}, we give a proof of Theorem \ref{1.2} and determine all quadratic fields which admit classic $1$-LNG lattices.
\section{On $k$-universal spaces}\label{sec2}
Before studying the universal theory for integral quadratic forms, one needs to understand the corresponding theory for quadratic spaces. By the Hasse-Minkowski Theorem (see \cite[66:3.Theorem]{omeara_quadratic_1963}), the $k$-universal property of a quadratic space over $F$ can be detected at each local completion $F_{\frak p}$ of $F$.
For convenience, we recall some known results about quadratic spaces. It is well-known that the isometry class of any quadratic space over a local field is determined completely by its dimension, determinant and Hasse symbol (see \cite[63:20 Theorem]{omeara_quadratic_1963}). These three invariants can be prescribed independently of each other, except for one-dimensional spaces and for two-dimensional spaces of determinant $-1$ (the hyperbolic plane), where the Hasse symbol is forced.
\begin{thm} \label{ind} (\cite [63:22 Theorem]{omeara_quadratic_1963}) Let $V$ be a quadratic space over $F_\frak p$ with $\frak p\in \Omega_F\setminus \infty_F$. If $\dim(V)>1$ and $V$ is not a hyperbolic plane, then there is a quadratic space $V'$ over $F_\frak p$ such that
$$ \dim(V')=\dim(V), \ \ \ \det(V')=\det(V) \ \ \ \text{and} \ \ \ S_\frak p(V')= - S_\frak p(V) . $$
\end{thm}
For representation of quadratic spaces, one has the following result.
\begin{thm}\label{rep} (\cite [63:21 Theorem]{omeara_quadratic_1963}) Let $U$ and $V$ be quadratic spaces over $F_\frak p$ with $\frak p\in \Omega_F\setminus \infty_F$. Write $\nu=\dim(V)-\dim(U) \geq 0$. Then $U \to V$ if and only if $\nu\geq 3$ or
$$ \begin{cases} U\cong V \ \ \ & \text{when $\nu=0$}\,, \\
U\perp [ \det(U) \cdot \det(V)] \cong V \ \ \ & \text{when $\nu=1$}\,, \\
U \perp \Bbb H \cong V \ \text{ or } \ \det(U)\neq -\det(V) \ \ \ & \text{when $\nu=2$}.
\end{cases} $$
\end{thm}
The main result of this section is the following theorem.
\begin{thm}\label{2.1} Fix $\frak p\in \Omega_F$ and let $V$ be a quadratic space over $F_\frak p$.
When $\frak p\in \Omega_F\setminus \infty_F$, $V$ is $k$-universal if and only if one of the following conditions holds:
(1) $\dim(V)\geq k+3$;
(2) $k=1$ and $V$ is isotropic of dimension $2$ or $3$;
(3) $k=2$ and $V\cong \Bbb H\perp \Bbb H$.
When $\frak p$ is a real prime, $V$ is $k$-universal if and only if both the positive index and the negative index of $V$ are $\ge k$.
When $\frak p$ is a complex prime, $V$ is $k$-universal if and only if $\dim(V)\geq k$.
\end{thm}
\begin{proof} First consider the case $\frak p\in\Omega_F\setminus\infty_F$. The sufficiency follows from Theorem \ref{rep} and the universal property of isotropic spaces.
\underline{Necessity.} For $k=1$, one only needs to prove that $V$ is not universal if $\dim (V) =3$ and $V$ is anisotropic by \cite[63:16.Example]{omeara_quadratic_1963}. Indeed, we claim that $V$ does not represent $-\det(V)$ in this case. Suppose not. Then the quadratic space $V\perp [ \det(V) ]$ is isotropic with discriminant 1. By \cite[63:18.Remark]{omeara_quadratic_1963}, one concludes that
$$V\perp [ \det(V) ] \cong \Bbb H \perp \Bbb H\,. $$ By computing the Hasse symbols of both sides with \cite[58:3.Remark]{omeara_quadratic_1963}, one obtains that $S_\frak p (V) =(-1, -1)_\frak p$. Therefore $V$ is isotropic by \cite[58:6]{omeara_quadratic_1963} and a contradiction is derived.
\
For $k=2$, one has $\dim(V)\geq 2$. Since there are more than two isometry classes of quadratic spaces over $F_\frak p$, one obtains that $\dim(V)\geq 3$ by Theorem \ref{rep}. We further claim that $\dim(V)\geq 4$. Suppose $\dim(V)=3$. Let $W$ be a binary quadratic space over $F_\frak p$ satisfying $W\not \cong \Bbb H$. Since $W\to V$, one has
$$V\cong W\perp [\det(V)\cdot \det(W)]$$ by Theorem \ref{rep}. By Theorem \ref{ind}, there is a binary quadratic space $W'$ over $F_\frak p$ such that $\det(W')=\det(W)$ and $S_\frak p(W')=-S_\frak p(W)$. Since $W' \to V$ as well, one has
$$V\cong W'\perp [\det(V)\cdot \det(W')]$$ by Theorem \ref{rep}.
By Witt cancellation (\cite[42:16.Theorem] {omeara_quadratic_1963}), one obtains that $W\cong W'$. This is a contradiction and the claim follows. Now one only needs to consider $\dim(V)=4$. Since $\Bbb H\to V$, one can further assume that $V=\Bbb H \perp U$ where $U$ is a binary quadratic space. Suppose that $U$ is not a hyperbolic plane. Then there is a binary quadratic space $U'$ over $F_\frak p$ such that $\det(U')=\det(U)$ and $S_\frak p(U')=-S_\frak p(U)$ by Theorem \ref{ind}. Therefore
$$\det(U')\cdot \det(V)=\det(U)\cdot \det(V)=-1.$$ Since $U'\to V$, one obtains that $V\cong U'\perp \Bbb H$ by Theorem \ref{rep}. By Witt cancellation, one concludes that $U\cong U'$ and a contradiction is derived. This implies that $U$ is a hyperbolic plane as desired.
\
For $k\geq 3$, one has $\dim(V)\geq k$. Since there are more than two isometry classes of quadratic spaces over $F_\frak p$, one obtains that $\dim(V)\geq k+1$ by Theorem \ref{rep}.
Suppose $\dim(V)=k+1$. By Theorem \ref{ind}, there are two quadratic spaces $U$ and $U'$ over $F_\frak p$ such that $$ \dim(U')=\dim(U)=k, \ \ \ \det(U')=\det(U) \ \ \ \text{ and } \ \ \ S_\frak p(U')=-S_\frak p(U) . $$
Since both $U$ and $U'$ are represented by $V$, one has
$$ V \cong U\perp [\det(V)\cdot \det(U)] \cong U'\perp [\det(V) \cdot \det(U')] $$ by Theorem \ref{rep}.
By Witt cancellation, one obtains $U\cong U'$. A contradiction is derived.
Suppose $\dim(V)=k+2$. Since $\Bbb H \to V$, there is a quadratic space $U$ of dimension $k$ such that $V\cong \Bbb H\perp U$.
By Theorem \ref{ind}, there is a quadratic space $U'$ over $F_\frak p$ such that $$\dim(U')=\dim(U), \ \ \ \det(U')=\det(U) \ \ \ \text{ and } \ \ \ S_\frak p(U')=-S_\frak p(U) . $$
This implies that
$$\det(U')\cdot \det(V)=\det(U)\cdot \det(V)=-1.$$ Since $U' \to V$, one obtains that $V\cong U'\perp \Bbb H$ by Theorem \ref{rep}. By Witt cancellation, one concludes that $U\cong U'$ and a contradiction is derived.
Therefore one concludes that $\dim(V)\geq k+3$ as desired.
\
When $\frak p$ is a real prime, the result follows from \cite[61:1.Theorem]{omeara_quadratic_1963} (for the necessity, consider the representation of the quadratic spaces $I_k$ and $-I_k$).
\
When $\frak p$ is a complex prime, the result follows from the fact that (non-degenerate) quadratic spaces of the same dimension over the complex numbers are all isomorphic.
\end{proof}
An immediate consequence of Theorem \ref{2.1} is the following result.
\begin{cor} \label{2.3} If $k\geq 3$, then there are no (classic) $k$-LNG lattices over any number field $F$. In other words, every (classic) integral $\mathcal O_F$-lattice $L$ such that $L_\frak p=L\otimes_{{\mathcal O}_F} {\mathcal O}_\frak p$ is (classic) $k$-universal over ${\mathcal O}_\frak p$ for all $\frak p\in \Omega_F$ is (classic) indefinite $k$-universal over ${\mathcal O}_F$.
\end{cor}
\begin{proof} Since $L_\frak p$ is $k$-universal over ${\mathcal O}_\frak p$ for all $\frak p\in \Omega_F$, one obtains that $F_\frak pL_\frak p$ is a $k$-universal quadratic space over $F_\frak p$. In particular, the quadratic space $FL$ is indefinite. Since $k\geq 3$, one concludes that $$\mathrm{rank}(L)=\dim(FL)=\dim(F_\frak pL_\frak p)\geq k+3$$ by Theorem \ref{2.1}. By \cite[p.135, lines 2--4]{hsia_indefinite_1998}, $L$ represents all ${\mathcal O}_F$-lattices of rank $k$ whose localizations are represented by $L_\frak p$ over ${\mathcal O}_{\frak p}$ for all $\frak p\in \Omega_F$. Since $L_\frak p$ is $k$-universal over ${\mathcal O}_\frak p$ for all $\frak p\in \Omega_F$, one concludes that $L$ is indefinite $k$-universal over ${\mathcal O}_F$. The argument is also valid for classic $\mathcal O_F$-lattices.
\end{proof}
\section{On $k$-universal lattices over non-dyadic local fields}\label{sec3}
In this section, we fix a non-dyadic prime $\frak p\in\Omega_F$, so that $F_\frak p$ is a non-dyadic local field.
Our goal is to determine all $k$-universal lattices over $\mathcal{O}_\frak p$. (Notice that the notions of integral lattices and classic lattices coincide over ${\mathcal O}_{\frak p}$, since $2\in{\mathcal O}_{\frak p}^\times$.)
Recall that $\pi_{\frak p}$ denotes a uniformizer of $F_{\frak p}$ and $\Delta_\frak p\in \mathcal{O}_{\frak p}^\times$ is chosen such that $F_\frak p(\sqrt{\Delta_\frak p})/F_\frak p$ is a quadratic unramified extension.
\begin{lem}\label{3.1} The lattice $\langle \pi_\frak p, -\Delta_\frak p \pi_\frak p \rangle$ is the $ \mathcal{O}_{\frak p} $-maximal lattice on the quadratic space $[\pi_\frak p, -\Delta_\frak p \pi_\frak p]$.
\end{lem}
\begin{proof} Let $\{x, y\}$ be an orthogonal basis of $[\pi_\frak p, -\Delta_\frak p \pi_\frak p]$ with $Q(x)=\pi_\frak p$ and $Q(y)=-\Delta_\frak p \pi_\frak p$. Since the quadratic space $[\pi_\frak p, -\Delta_\frak p \pi_\frak p]$ is anisotropic, an element $\alpha x+ \beta y$ with $\alpha, \beta \in F_\frak p$ belongs to the unique $\mathcal{O}_{\frak p} $-maximal lattice if and only if
$$Q(\alpha x+ \beta y)= \alpha^2 \pi_\frak p - \beta^2 \Delta_\frak p \pi_\frak p \in \mathcal{O}_{\frak p} $$
by \cite[91:1. Theorem]{omeara_quadratic_1963}. If $\mathrm{ord}_\frak p(\alpha)\neq \mathrm{ord}_\frak p(\beta)$, then
$$ \mathrm{ord}_\frak p(Q(\alpha x+ \beta y))= \min \{ \mathrm{ord}_\frak p(\alpha^2 \pi_\frak p), \mathrm{ord}_\frak p(\beta^2 \Delta_\frak p \pi_\frak p) \} \geq 0 .$$ This implies that $\alpha, \beta \in \mathcal{O}_{\frak p} $. Suppose $\mathrm{ord}_\frak p(\alpha)= \mathrm{ord}_\frak p(\beta) <0$. There are $\xi, \eta\in \mathcal{O}_{\frak p}^\times $ such that $\xi^2-\Delta_\frak p \eta^2 \in \pi_{\frak p}\mathcal{O}_{\frak p}$ by removing the denominators. Then $\Delta_\frak p$ is a square by \cite[63:1.Local Square Theorem]{omeara_quadratic_1963}. A contradiction is derived.
\end{proof}
Let us call a set $S$ of integral lattices of rank $k$ a \emph{testing set} for $k$-universality if every integral lattice representing all members of $S$ is $k$-universal. It is not difficult to provide a finite testing set for $k$-universality over $\mathcal{O}_{\frak p}$. The crucial part of a local analogue of the Conway-Schneeberger theorem is to find a \emph{minimal} testing set, i.e., a testing set none of whose proper subsets suffices to test $k$-universality.
\begin{prop}\label{3.2}
A minimal testing set for $k$-universality over $\mathcal{O}_{\frak p}$ is given as follows:
When $k=1$, the set consists of the following $4$ lattices
$$\langle 1 \rangle; \ \langle \Delta_\frak p \rangle; \ \langle \pi_\frak p \rangle; \ \langle \Delta_\frak p \pi_\frak p \rangle . $$
When $k=2$, the set consists of the following $7$ lattices
$$ \langle 1,-1\rangle ; \ \langle 1,-\Delta_\frak p \rangle ; \ \langle \pi_\frak p, -\Delta_\frak p \pi_\frak p \rangle ; \ \langle 1,-\pi_\frak p \rangle ; \ \langle \Delta_\frak p,-\Delta_\frak p \pi_\frak p \rangle ; \ \langle 1,-\Delta_\frak p \pi_\frak p \rangle ; \ \langle\Delta_\frak p,-\pi_\frak p\rangle . $$
When $k\geq 3$, the set consists of the following $8$ lattices
$$ \langle 1, \cdots, 1 \rangle; \ \langle \cdots, -\Delta_\frak p, \pi_\frak p , -\Delta_\frak p \pi_\frak p \rangle; \ \langle 1, \cdots, 1, \Delta_\frak p \rangle; \ \langle \cdots, -1, \pi_\frak p, -\Delta_\frak p \pi_\frak p \rangle ; $$
$$ \langle 1, \cdots, -1, -\pi_\frak p \rangle; \ \langle 1, \cdots, -\Delta_\frak p, -\Delta_\frak p \pi_\frak p \rangle ; \ \langle 1, \cdots, -1, -\Delta_\frak p \pi_\frak p \rangle ; \ \langle 1, \cdots, -\Delta_\frak p, -\pi_\frak p \rangle $$
where all entries not displayed are equal to $1$.
\end{prop}
\begin{proof}
Let us first show that the sets of lattices given in the proposition are indeed testing sets for $k$-universality in the respective cases.
Every integral $\mathcal{O}_{\frak p}$-lattice of rank $k$ is contained in an $\mathcal{O}_{\frak p}$-maximal lattice of rank $k$ by \cite[82:18]{omeara_quadratic_1963}, and all $\mathcal{O}_{\frak p}$-maximal lattices in a fixed quadratic space are isometric by \cite[91:2. Theorem]{omeara_quadratic_1963}. So it is sufficient to prove that the lattices listed in the proposition are the $\mathcal{O}_{\frak p}$-maximal lattices from all possible $k$-dimensional quadratic spaces.
There are only $4$ one-dimensional quadratic spaces over $F_\frak p$ up to isometry. The lattices listed for $k=1$ are the corresponding $\mathcal{O}_{\frak p}$-maximal lattices. The result can also follow from \cite[Lemma 2.2]{xu_indefinite_2020}.
There are precisely $7$ binary quadratic spaces over $F_\frak p$ up to isometry by \cite[63:9]{omeara_quadratic_1963} and Theorem \ref{ind}. One concludes that the lattices listed for $k=2$ come from $7$ different quadratic spaces by computing their discriminants and Hasse symbols. The listed lattices are also $\mathcal{O}_{\frak p}$-maximal by \cite[82:19]{omeara_quadratic_1963} and Lemma \ref{3.1}.
For $k\geq 3$, there are exactly $8$ quadratic spaces of dimension $k$ over $F_\frak p$ up to isometry by \cite[63:9]{omeara_quadratic_1963} and Theorem \ref{ind}. Since the listed lattices of rank $k$ are from different quadratic spaces by computing discriminants and Hasse symbols, one only needs to show that $\langle \cdots, -\Delta_\frak p, \pi_\frak p , -\Delta_\frak p \pi_\frak p \rangle$ and $\langle \cdots, -1, \pi_\frak p, -\Delta_\frak p \pi_\frak p \rangle $ are $\mathcal{O}_{\frak p}$-maximal lattices by \cite[82:19]{omeara_quadratic_1963}.
Suppose that $L$ is an integral $\mathcal{O}_{\frak p}$-lattice such that
$$ L \supseteq \langle \cdots, -\Delta_\frak p, \pi_\frak p , -\Delta_\frak p \pi_\frak p \rangle \ \ \ \ \text{or} \ \ \ \ L \supseteq \langle \cdots, -1, \pi_\frak p, -\Delta_\frak p \pi_\frak p \rangle . $$ Since both $\langle \cdots, -\Delta_\frak p \rangle$ and $\langle \cdots, -1\rangle$ are unimodular, there is a binary lattice $L_1$ on $[\pi_\frak p, -\Delta_\frak p \pi_\frak p]$ with $\frak n(L_1) \subseteq \mathcal{O}_{\frak p}$ such that
$$ L= \langle \cdots, -\Delta_\frak p \rangle \perp L_1 \ \ \ \ \text{or} \ \ \ \ L=\langle \cdots, -1\rangle \perp L_1 $$ by \cite[82:15a]{omeara_quadratic_1963}. Then $L_1\supseteq \langle \pi_\frak p , -\Delta_\frak p \pi_\frak p \rangle$. By Lemma \ref{3.1}, one concludes that $L_1= \langle \pi_\frak p , -\Delta_\frak p \pi_\frak p \rangle$ and
$$L = \langle \cdots, -\Delta_\frak p, \pi_\frak p , -\Delta_\frak p \pi_\frak p \rangle \ \ \ \ \text{or} \ \ \ \ L = \langle \cdots, -1, \pi_\frak p, -\Delta_\frak p \pi_\frak p \rangle $$ as desired.
Finally, we show that the sets of integral $\mathcal{O}_{\frak p} $-lattices listed above are minimal.
Indeed,
$$ \begin{cases}
\langle \Delta_\frak p, -\pi_\frak p, \Delta_\frak p \pi_\frak p \rangle \ \text{represents} \ \langle \Delta_\frak p \rangle, \ \langle \pi_\frak p \rangle, \ \langle \Delta_\frak p \pi_\frak p \rangle \ \text{but does not represent} \ \langle 1 \rangle \\
\langle 1, \pi_\frak p, -\Delta_\frak p \pi_\frak p \rangle \ \text{represents} \ \langle 1 \rangle, \ \langle \pi_\frak p \rangle, \ \langle \Delta_\frak p \pi_\frak p \rangle \ \text{but does not represent} \ \langle \Delta_\frak p \rangle \\
\langle -1, \Delta_\frak p, \Delta_\frak p \pi_\frak p \rangle \ \text{represents} \ \langle 1 \rangle, \ \langle \Delta_\frak p \rangle, \ \langle \Delta_\frak p \pi_\frak p \rangle \ \text{but does not represent} \ \langle \pi_\frak p \rangle \\
\langle 1, -\Delta_\frak p, \pi_\frak p \rangle \ \text{represents} \ \langle 1 \rangle, \ \langle \Delta_\frak p \rangle, \ \langle \pi_\frak p \rangle \ \text{but does not represent} \ \langle \Delta_\frak p \pi_\frak p \rangle \\
\end{cases} $$
by \cite[63:15.Example and 63:17]{omeara_quadratic_1963}. It follows that the set of integral $\mathcal{O}_{\frak p}$-lattices listed above for $k=1$ is minimal.
For $k=2$, we take a binary integral $\mathcal{O}_{\frak p}$-lattice $L$ in the above list.
If $L=\langle 1,-1\rangle$, then $ \langle 1,-\Delta_\frak p \rangle \perp \langle \pi_\frak p, -\Delta_\frak p \pi_\frak p \rangle$ does not represent $L$ by \cite[63:17]{omeara_quadratic_1963} but represents the remaining six lattices in the list by \cite[Theorem 1]{omeara_integral_1958} and \cite[63:15.Example]{omeara_quadratic_1963}.
Otherwise, there is another binary integral $\mathcal{O}_{\frak p}$-lattice $L'$ in the above list satisfying $$F_\frak pL'\not \cong F_\frak pL \ \ \ \text{and} \ \ \ \det(F_\frak pL)=\det(F_\frak p L') . $$
We claim that $L'\perp \langle 1,-1\rangle$ does not represent $L$. Suppose not. Then $F_\frak p L'\perp \Bbb H$ represents $F_\frak p L$ as quadratic spaces. This implies that $$F_\frak p L\perp \Bbb H \cong F_\frak p L'\perp \Bbb H$$ by Theorem \ref{rep}. By Witt cancellation (\cite[42:16.Theorem] {omeara_quadratic_1963}), one obtains $F_\frak pL' \cong F_\frak pL $. A contradiction is derived and the claim follows. By \cite[Theorem 1]{omeara_integral_1958} and Theorem \ref{rep}, one can verify that $L'\perp \langle 1,-1\rangle$
represents the other six lattices. Therefore, the set of binary lattices in the given list for $k=2$ is minimal.
For any integral $\mathcal{O}_{\frak p}$-lattice $L$ of rank $k\geq 3$ in our list, there is another integral $\mathcal{O}_{\frak p}$-lattice $L'$ of rank $k$ in the list satisfying $$F_\frak pL'\not \cong F_\frak pL \ \ \ \text{and} \ \ \ \det(F_\frak pL)=\det(F_\frak p L') . $$
Then $L'\perp \langle 1,-1\rangle$ does not represent $L$ but represents the remaining seven lattices in the given list by the same arguments as above. Therefore the testing set given for $k\ge 3$ is minimal.
\end{proof}
Since the lattices listed in Proposition \ref{3.2} are all $\mathcal{O}_{\frak p}$-maximal, one concludes that the minimal testing sets are unique up to isometry by \cite[91:2.Theorem]{omeara_quadratic_1963}. By applying Proposition \ref{3.2}, we can determine all $k$-universal lattices over $\mathcal{O}_{\frak p}$ for non-dyadic $\frak p$ in terms of Jordan splittings. The case $k=1$ has been treated in \cite[Prop.\;2.3]{xu_indefinite_2020}. For convenience, we treat the cases $k=2$ and $k\geq 3$ separately.
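The class counts used in the proof of Proposition \ref{3.2} ($4$ unary, $7$ binary and $8$ spaces of each dimension $k\geq 3$, up to isometry) can also be double-checked by machine over $\Bbb Q_3$, taken here only as a convenient stand-in for a non-dyadic completion. The Python sketch below (our own illustration) enumerates diagonal forms with entries in a fixed set of square-class representatives and sorts them by dimension, determinant class and the Hasse--Witt invariant $\prod_{i<j}(a_i,a_j)_\frak p$; the latter differs from the Hasse symbol of \cite{omeara_quadratic_1963} only by a factor determined by the determinant, so the number of classes is unaffected.
\begin{verbatim}
from itertools import product

p = 3                                  # any odd prime serves as an illustration

def legendre(u):                       # Legendre symbol (u/p), u prime to p
    return 1 if pow(u % p, (p - 1) // 2, p) == 1 else -1

def split(a):                          # write a = p^alpha * u with u prime to p
    alpha = 0
    while a % p == 0:
        a //= p
        alpha += 1
    return alpha, a

def hilbert(a, b):                     # Hilbert symbol (a, b)_p for odd p
    al, u = split(a)
    be, v = split(b)
    eps = ((p - 1) // 2) % 2
    return (-1) ** (al * be * eps) * legendre(u) ** be * legendre(v) ** al

nr = next(u for u in range(2, p) if legendre(u) == -1)   # a non-residue mod p
reps = [1, nr, p, nr * p]              # the four square classes of Q_p^x

def invariants(diag):                  # (dim, det class, Hasse-Witt invariant)
    det = 1
    for a in diag:
        det *= a
    alpha, u = split(det)
    hw = 1
    for i in range(len(diag)):
        for j in range(i + 1, len(diag)):
            hw *= hilbert(diag[i], diag[j])
    return len(diag), alpha % 2, legendre(u), hw

for k in (1, 2, 3, 4):
    classes = {invariants(d) for d in product(reps, repeat=k)}
    print(k, len(classes))             # prints 4, 7, 8, 8 classes respectively
\end{verbatim}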
\begin{prop}\label{3.3}
Let $ M $ be an integral $ \mathcal{O}_{\frak p} $-lattice with the following Jordan splitting $$ M=M_{1}\perp M_{2}\perp \ldots \perp M_{t} $$ where $M_1$ is unimodular. Then
$ M $ is $2$-universal if and only if one of the following conditions holds:
(a) $ \mathrm{rank} (M_{1})=3 $ and $M_{2} $ is $\frak p$-modular with $ \mathrm{rank} (M_{2})\ge 2 $.
(b) $M_{1}\cong \langle 1,1,1,1 \rangle$.
(c) $ M_{1}\cong \langle 1,1,1,\Delta_\frak p \rangle$ and $ M_{2} $ is $ \frak p$-modular with $\mathrm{rank} (M_2)\geq 1$.
(d) $\mathrm{rank} (M_1) \geq 5$.
\end{prop}
\begin{proof} \underline{Necessity}. Since $M$ represents both $ \langle 1,-1\rangle$ and $ \langle 1,-\Delta_\frak p \rangle$, we have $\mathrm{rank} (M_1)\geq 3$ by \cite[Theorem 1]{omeara_integral_1958} and Theorem \ref{rep}.
When $ \mathrm{rank} (M_{1})=3 $, the space $F_\frak p M_1$ is not $2$-universal by Theorem\;\ref{2.1}. Since $M$ represents all the 7 binary lattices listed in Proposition\;\ref{3.2}, $M_2$ must be $\frak p$-modular and $F_\frak pM_1\perp F_\frak p M_2$ is a $2$-universal quadratic space by \cite[Theorem 1]{omeara_integral_1958}. Applying Theorem \ref{2.1}, one concludes that $\mathrm{rank}(M_2)\geq 2$.
When $\mathrm{rank}(M_1)=4$, one obtains $M_1\cong \langle 1,1,1,1 \rangle$ or $\langle 1,1,1,\Delta_\frak p \rangle$ by \cite[92:1]{omeara_quadratic_1963}. Suppose $M_1\cong \langle 1,1,1,\Delta_\frak p \rangle$. Then $F_\frak pM_1$ is not a $2$-universal quadratic space by Theorem \ref{2.1}. This implies that $M_1$ cannot represent all the binary lattices listed in Proposition\;\ref{3.2} by \cite[Theorem 1]{omeara_integral_1958}. Therefore, $M_2$ is $\frak p$-modular with $\mathrm{rank}(M_2)\geq 1$ by \cite[Theorem 1]{omeara_integral_1958}.
\underline{Sufficiency}. It suffices to verify that $M$ represents all the binary lattices listed in Proposition\;\ref{3.2} if one of the conditions $(a)$, $(b)$, $(c)$ or $(d)$ holds.
If condition $(a)$ holds, then $M_1$ represents both $ \langle 1,-1\rangle $ and $\langle 1,-\Delta_\frak p \rangle$ by \cite[92:1]{omeara_quadratic_1963}. Since $M_1$ represents all elements in $ \mathcal{O}_{\mathfrak{p}}^{\times} $ by \cite[92:1b]{omeara_quadratic_1963}, one concludes that $M_1\perp M_2$ represents the other binary lattices listed in Proposition\;\ref{3.2} by \cite[Theorem 1]{omeara_integral_1958} and Theorem \ref{rep}.
If condition $(b)$ holds, then $F_\frak p M_1\cong \Bbb H\perp \Bbb H$. In this case $M_1$ represents all the binary lattices listed in Proposition\;\ref{3.2} by \cite[Theorem 1]{omeara_integral_1958} and Theorem\;\ref{2.1}.
If condition $(c)$ holds, then $M_1$ represents both $ \langle 1,-1\rangle $ and $\langle 1,-\Delta_\frak p \rangle$ and $M_1\perp M_2$ represents the other binary lattices in the testing set by \cite[Theorem 1]{omeara_integral_1958} and Theorem \ref{rep}.
If condition $(d)$ holds, then $M_1$ represents all the binary lattices in the list by \cite[Theorem 1]{omeara_integral_1958} and Theorem \ref{rep}.
\end{proof}
Finally, we determine all $k$-universal lattices for $k\geq 3$.
\begin{prop}\label{3.4}
Suppose $k\ge 3$. Let $ M $ be an integral $ \mathcal{O}_{\frak p} $-lattice with the following Jordan splitting $$ M=M_{1}\perp M_{2}\perp \ldots \perp M_{t} $$ where $M_1$ is unimodular. Then $ M $ is $k$-universal if and only if one of the following conditions holds:
(a) $ \mathrm{rank} (M_{1})=k+1 $ and $M_{2} $ is $\frak p$-modular with $ \mathrm{rank} (M_{2})\ge 2 $.
(b) $\mathrm{rank} (M_{1}) =k+2 $ and $ M_{2} $ is $ \frak p$-modular with $\mathrm{rank} (M_2)\geq 1$.
(c) $\mathrm{rank} (M_1) \geq k+3 $.
\end{prop}
\begin{proof} \underline{Necessity}. Since $M$ represents the rank $k$ lattices $ \langle 1, \cdots, 1 \rangle$ and $\langle 1, \cdots, 1, \Delta_\frak p \rangle$, we have $\mathrm{rank} (M_1)\geq k+1$ by \cite[Theorem 1]{omeara_integral_1958} and Theorem \ref{rep}.
When $ \mathrm{rank} (M_{1})=k+1$, the quadratic space $F_\frak pM_1$ is not $k$-universal by Theorem\;\ref{2.1}. Since $M$ represents the 8 lattices of rank $k$ listed
in Proposition\;\ref{3.2}, it follows from \cite[Theorem 1]{omeara_integral_1958} that $M_2$ is $\frak p$-modular and $F_\frak pM_1\perp F_\frak p M_2$ is a $k$-universal quadratic space. Then Theorem\;\ref{2.1} implies $\mathrm{rank} (M_2)\geq 2$.
When $\mathrm{rank}(M_1)=k+2$, the same argument as above shows that $M_2$ is $\frak p$-modular with $\mathrm{rank} (M_2)\geq 1$.
\underline{Sufficiency}. We need to verify that $M$ represents all the lattices of rank $k$ listed in Proposition\;\ref{3.2} under one of the conditions $(a)$, $(b)$ and $(c)$.
If condition $(a)$ or $(b)$ holds, then $M_1$ represents all unimodular lattices of rank less than $k+1$ by \cite[92:1 and 92:1b]{omeara_quadratic_1963}. Therefore
$M_1\perp M_2$ represents all the lattices of rank $k$ listed in Proposition\;\ref{3.2} by \cite[Theorem 1]{omeara_integral_1958} and Theorem \ref{rep}.
If condition $(c)$ holds, the same method shows that $M_1$ represents all the lattices of rank $k$ in the testing set given in Proposition\;\ref{3.2}.
\end{proof}
\section{On 2-universal quaternary lattices over dyadic local fields}\label{sec4}
Determining $k$-universal lattices over dyadic local fields is much more difficult than over non-dyadic fields. Based on O'Meara's work \cite{omeara_integral_1958},
a classification of (classic) $k$-universal lattices using Jordan splittings has been obtained in \cite{HeHu1} for unramified dyadic local fields. By introducing the concept of
bases of norm generators (BONGs for short), Beli has recently developed an integral representation theory over general dyadic local fields (cf. \cite{beli_representations_2006} and \cite{beli_representations_2019}). Using this theory, he classified 1-universal lattices over general dyadic fields in \cite{beli_universal_2020}. In a forthcoming work \cite{HeHu2}, all (classic) $k$-universal lattices for $k\ge 2$ will be determined by using BONGs.
In this paper our main concern is the local-global principle for $k$-universality. In view of Corollary\;\ref{2.3}, it suffices for our purposes to determine the quaternary $2$-universal lattices over a general dyadic local field.
Throughout this section, we fix a dyadic prime $\mathfrak{p}$ of the number field $F$, and we put $e= \mathrm{ord}_\frak p (2) $. For $ a\in \mathcal{O}_{\frak p}^{\times}$, define its \textit{quadratic defect} by $$ \mathfrak{d}(a)=\bigcap_{x\in F_\frak p}(a-x^{2})\mathcal{O}_{\frak p}$$
as in \cite[\S\,63A.]{omeara_quadratic_1963}.
Recall that we use $\Delta_{\frak p}$ to denote an element in ${\mathcal O}_{\frak p}^\times$ such that $F_\frak p(\sqrt{\Delta_\frak p})$ is the unique quadratic unramified extension of $F_\frak p$. This $\Delta_{\frak p}$ satisfies $ \mathfrak{d}(\Delta_{\frak p})=4\mathcal{O}_{\frak p} $. We may and we shall assume that $\Delta_{\frak p}=1-4\rho_\frak p $ with $\rho_{\frak p}\in {\mathcal O}_{\frak p}^\times$ (cf. \cite[p.251, \S\;93]{omeara_quadratic_1963}).
The Hilbert symbol computation in the following lemma is an explicit version of \cite[Lemma 3]{hsia_spinor_1975}.
\begin{lem} \label{4.1} Let $t$ be an odd integer with $1\leq t\leq 2e-1$ and $\sigma \in \mathcal{O}_{\frak p}^\times$. Then
$$\big(1-\sigma \pi_\frak p^t\,,\; 1-4\rho_\frak p \sigma^{-1} \pi_\frak p^{-t}\big)_\frak p =-1.$$
\end{lem}
\begin{proof} First, we have $\big(\Delta_\frak p\,,\, \sigma \pi_\frak p^t-4\rho_\frak p\big)_\frak p=-1$ by \cite[63:11a]{omeara_quadratic_1963}. Therefore
$$ \begin{aligned} &
\big(1-\sigma \pi_\frak p^t\,,\, 1-4\rho_\frak p \sigma^{-1} \pi_\frak p^{-t} \big)_\frak p =\bigg(1-\sigma \pi_\frak p^t\,,\, \frac{\sigma \pi_\frak p^t-4\rho_\frak p}{ \sigma \pi_\frak p^{t}}\bigg)_\frak p=\big(1-\sigma \pi_\frak p^t\,,\, \sigma \pi_\frak p^t-4\rho_\frak p\big)_\frak p \\
= & - \big((1-\sigma \pi_\frak p^t)\Delta_\frak p\,,\, \sigma \pi_\frak p^t-4\rho_\frak p\big)_\frak p = - \big((1-\sigma \pi_\frak p^t)\Delta_\frak p\,,\,
\Delta_\frak p -(1-\sigma \pi_\frak p^t )\big)_\frak p \\
= & -\left((1-\sigma \pi_\frak p^t)\Delta_\frak p\,,\, \big(\Delta_\frak p -(1-\sigma \pi_\frak p^t )\big)\Delta_\frak p\right)_\frak p=
-\bigg(\frac{1-\sigma \pi_\frak p^t}{\Delta_\frak p}\,,\, 1 - \frac{1-\sigma \pi_\frak p^t}{\Delta_\frak p}\bigg)_\frak p=-1 \end{aligned} $$
by \cite[57:10 and 63:11a]{omeara_quadratic_1963}.
\end{proof}
Let $N\frak p$ be the number of elements in the residue field of $F$ at $\frak p$. By \cite[63:9, 63:20.Theorem and 63:22.Theorem]{omeara_quadratic_1963}, there are $8(N\frak p)^e-1$ binary quadratic spaces over $F_\frak p$ up to isometry. One can classify binary $\mathcal{O}_{\frak p}$-maximal lattices accordingly.
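Both Lemma \ref{4.1} and this count admit a quick machine check in the special case $F_\frak p=\Bbb Q_2$ (so $e=1$, $\pi_\frak p=2$, $t=1$ and $N\frak p=2$), using the standard explicit formula for the $2$-adic Hilbert symbol; all the relevant quantities are determined by residues modulo $8$. The Python sketch below is our own illustration and is not part of the classification that follows.
\begin{verbatim}
def hilbert2(a, b):        # 2-adic Hilbert symbol of nonzero integers a, b
    def split(x):          # x = 2^alpha * u with u odd
        alpha = 0
        while x % 2 == 0:
            x //= 2
            alpha += 1
        return alpha, x
    eps = lambda u: ((u - 1) // 2) % 2
    omega = lambda u: ((u * u - 1) // 8) % 2
    al, u = split(a)
    be, v = split(b)
    return (-1) ** (eps(u) * eps(v) + al * omega(v) + be * omega(u))

# Lemma 4.1 over Q_2: (1 - 2*sigma, 1 - 2*rho*sigma^{-1})_2 = -1 for all units.
for sigma in (1, 3, 5, 7):
    for rho in (1, 3, 5, 7):
        b = 1 - 2 * rho * pow(sigma, -1, 8)
        assert hilbert2(1 - 2 * sigma, b) == -1

# Counting binary quadratic spaces over Q_2 by determinant class and
# Hasse-Witt invariant: there should be 8*(Np)^e - 1 = 15 of them.
reps = [1, 3, 5, 7, 2, 6, 10, 14]      # the eight square classes of Q_2^x

def sq_class(x):                       # reduce x to (valuation mod 2, unit mod 8)
    alpha = 0
    while x % 2 == 0:
        x //= 2
        alpha += 1
    return alpha % 2, x % 8

classes = {(sq_class(a * b), hilbert2(a, b)) for a in reps for b in reps}
print(len(classes))                    # 15
\end{verbatim}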
For any elements $\xi,\,\eta\in {\mathcal O}_{\frak p}$ and $\gamma\in F_{\frak p}^\times$, we write $ \gamma A(\xi,\eta) $ for the binary lattice $$ \mathcal{O}_{\frak p}x+\mathcal{O}_{\frak p}y \ \ \ \text{ with } \ \ \ Q(x)=\gamma\xi, \ B(x,y)=\gamma \ \text{ and } \ Q(y)= \gamma\eta . $$
\begin{prop}\label{4.2} Any binary $\mathcal{O}_{\frak p}$-maximal lattice is isometric to one of the following lattices:
Type I: $$\pi_\frak p^{-i} A(\pi_{\frak p}^i, \sigma \pi_\frak p^{i+1}) \ \ \ \text{ and } \ \ \ (1-4\rho_\frak p \sigma^{-1} \pi_\frak p^{-2i-1})\pi_\frak p^{-i} A(\pi_{\frak p}^i, \sigma \pi_\frak p^{i+1}) $$
with $0\leq i\leq e-1$ and $\sigma \in \mathcal{O}_{\frak p}^{\times}$.
Type II: \ $\langle 1, \sigma \pi_\frak p \rangle $ and $\langle \Delta_\frak p, \sigma \pi_\frak p \rangle $
with $\sigma \in \mathcal{O}_{\frak p}^{\times}$.
Type III: \ $ 2^{-1}A(2, 2\rho_\frak p) $ and $ 2^{-1}\pi_\frak p A(2, 2\rho_\frak p)$.
Type IV: \ $2^{-1}A(0, 0)$.
\end{prop}
\begin{proof}There is only one isometry class of $\mathcal{O}_{\frak p}$-maximal lattices in a given quadratic space (cf. \cite[91:2. Theorem]{omeara_quadratic_1963}). So we only need to prove that the lattices listed above exhaust all possible binary quadratic spaces over $F_{\frak p}$.
Suppose that $V$ is a binary quadratic space whose discriminant is a unit $\epsilon$.
If $-\epsilon$ is a square, then $V\cong \Bbb H$ and the lattice of Type IV is an $\mathcal{O}_{\frak p}$-maximal lattice by \cite[82:21]{omeara_quadratic_1963}.
If $-\epsilon \in \Delta_\frak p (\mathcal{O}_{\frak p}^{\times})^2$, there are two quadratic spaces with discriminant $\epsilon$ up to isometry by \cite[63:15.Example]{omeara_quadratic_1963}. The $\mathcal{O}_{\frak p}$-maximal lattice on the quadratic space which represents $1$ is the lattice $2^{-1}A(2, 2\rho_\frak p)$ in Type III by \cite[93:11]{omeara_quadratic_1963}. The $\mathcal{O}_{\frak p}$-maximal lattice on the other quadratic space, which represents $\pi_\frak p$, is the lattice $ 2^{-1}\pi_\frak p A(2, 2\rho_\frak p)$ in Type III, by \cite[91:1.Theorem]{omeara_quadratic_1963} and the principle of domination in \cite{riehm_integral_1964}.
Otherwise we can assume that $$-\epsilon=1-\sigma \pi_\frak p^{2i+1} \ \ \ \text{ with } \ \ \ 0\leq i\leq e-1 \ \ \ \text{and} \ \ \ \sigma\in \mathcal{O}_{\frak p}^\times $$ by \cite[63:2]{omeara_quadratic_1963}. Up to isometry there are only two quadratic spaces with discriminant $\epsilon$ by Theorem \ref{ind}. By scaling if necessary, one can assume that $V$ represents $1$. Then the lattice $\pi_\frak p^{-i} A(\pi_{\frak p}^i, \sigma \pi_\frak p^{i+1})$ in Type I is the $\mathcal{O}_{\frak p}$-maximal lattice on $V$ by \cite[91:1.Theorem]{omeara_quadratic_1963} and the principle of domination in \cite{riehm_integral_1964}. Lemma \ref{4.1} implies that $1-4\rho_\frak p \sigma^{-1} \pi_\frak p^{-2i-1}$ is not represented by $V$. So one concludes that the other quadratic space with discriminant $\epsilon$ can be obtained by scaling $V$ with $1-4\rho_\frak p \sigma^{-1} \pi_\frak p^{-2i-1}$ by \cite[63:15.Example (iii)]{omeara_quadratic_1963}. Thus, the lattice $$(1-4\rho_\frak p \sigma^{-1} \pi_\frak p^{-2i-1})\pi_\frak p^{-i} A(\pi_{\frak p}^i, \sigma \pi_\frak p^{i+1}) \ \ \ \text{in Type I} $$ is the $\mathcal{O}_{\frak p}$-maximal lattice on this space.
Now suppose that $V$ is a binary quadratic space with discriminant $\epsilon\pi_\frak p$ for some $\epsilon\in \mathcal{O}_{\frak p}^\times$. By scaling if necessary, one can assume that $V$ represents $1$. Then the lattice $\langle 1, \epsilon \pi_\frak p \rangle$ in Type II is the $\mathcal{O}_{\frak p}$-maximal lattice on $V$ by \cite[91:1.Theorem]{omeara_quadratic_1963}. Since $\Delta_\frak p$ is not represented by $V$ by \cite[63:11a]{omeara_quadratic_1963}, the other quadratic space with discriminant $\epsilon\pi_\frak p$ is $[\Delta_\frak p, \Delta_\frak p \epsilon \pi_\frak p]$, according to \cite[ 63:15.Example(iii); 63:20.Theorem and 63:22.Theorem]{omeara_quadratic_1963}. The lattice $\langle \Delta_\frak p, \Delta_\frak p\epsilon \pi_\frak p \rangle$ in Type II is the $\mathcal{O}_{\frak p}$-maximal lattice on this space.
\end{proof}
The following result can be regarded as a local analogue of the Conway-Schneeberger theorem for $2$-universal integral lattices over dyadic local fields.
\begin{cor}\label{4.3} An integral ${\mathcal O}_{\frak p}$-lattice is $2$-universal if and only if it represents all the lattices of type I, II, III and IV in Proposition $\ref{4.2}$.
\end{cor}
\begin{proof} Similarly as in the proof of Proposition\;\ref{3.2}, to test 2-universality of an integral lattice it suffices to check that it represents the $\mathcal{O}_{\frak p}$-maximal lattices in all binary quadratic spaces. So the result is immediate from Proposition \ref{4.2}.
\end{proof}
\begin{lem}\label{4.4} Suppose $M = 2^{-1} A(0, 0) \perp N$ where $N$ is an $\mathcal{O}_{\frak p}$-lattice with $\frak s(N) \subseteq 2^{-1} \frak p$. If $\frak n(N) \subseteq \frak p$, then $M$ cannot represent $ 2^{-1} A(2, 2\rho_\frak p) $.
\end{lem}
\begin{proof} Let $\{x, y\}$ be a basis for $ 2^{-1} A(0, 0) $ and $\{u, v\}$ be a basis for $ 2^{-1} A(2, 2\rho_\frak p) $, so that $Q(x)=Q(y)=0$, $B(x,y)=2^{-1}$, $Q(u)=1$, $Q(v)=\rho_\frak p$ and $B(u,v)=2^{-1}$. Suppose $M$ represents $ 2^{-1} A(2, 2\rho_\frak p) $. Then there are elements $\alpha_1, \beta_1$ and $\alpha_2, \beta_2$ in $\mathcal{O}_{\frak p}$ such that
$$ \begin{cases} u= \alpha_1 x+\beta_1 y + z_1 \\
v=\alpha_2 x+\beta_2 y +z_2 \end{cases} $$ with $z_1, z_2\in N$. A direct computation then gives
\begin{align*}
& \Delta_\frak p=1-4\rho_\frak p=4(B(u, v)^2-Q(u)Q(v)) \\
= \ & (2B(z_1, z_2)+\alpha_1\beta_2+\alpha_2\beta_1)^2- 4\alpha_1\alpha_2\beta_1\beta_2 - 4 (\alpha_1\beta_1 Q(z_2)+\alpha_2\beta_2 Q(z_1)+Q(z_1)Q(z_2)) \\
= \ & (2B(z_1, z_2)+\alpha_1\beta_2-\alpha_2\beta_1)^2 + 8\alpha_2 \beta_1 B(z_1, z_2)- 4 (\alpha_1\beta_1 Q(z_2)+\alpha_2\beta_2 Q(z_1)+Q(z_1)Q(z_2)).
\end{align*}
Since $\frak s(N) \subseteq 2^{-1} \frak p$, it follows that $2\alpha_2 \beta_1B(z_1, z_2)\in \frak p$. If $\frak n(N) \subseteq \frak p$, one concludes that $$\alpha_1\beta_1 Q(z_2)+\alpha_2\beta_2 Q(z_1)+Q(z_1)Q(z_2) \in \frak p . $$ This implies that $\Delta_\frak p$ is a square by the local square theorem, whence a contradiction. \end{proof}
Recall that the \textit{norm group} of an ${\mathcal O}_{\frak p}$-lattice $L$ is defined by $ \mathfrak{g}(L)=Q(L)+2\mathfrak{s}(L) $. We can now prove our main results in this section.
\begin{prop} \label{4.5} An integral quaternary $\mathcal{O}_{\frak p}$-lattice $M$ is $2$-universal if and only if $$M\cong 2^{-1}A(0, 0) {\mathfrak p}erp 2^{-1}A(0, 0) . $$
\end{prop}
\begin{proof} \underline{Sufficiency}. By Corollary \ref{4.3}, one only needs to verify that $2^{-1}A(0, 0) {\mathfrak p}erp 2^{-1}A(0, 0)$ represents all the lattices listed in Proposition \ref{4.2}. Since $2^{-1}A(0, 0)$ is universal, the lattices in Type II or IV are represented by $M$. Let $L$ be a lattice in Type I or III. In order to apply \cite[Theorem 7.2]{riehm_integral_1964}, we scale $M$ and $L$ by 2 and call the resulting lattices $M'$ and $L'$ respectively.
If $L$ is in Type I, then we have
$$ \frak g(L')=\frak n(L')= \frak g (M')= \frak n(M')= 2 \mathcal{O}_{\frak p}$$ by \cite[Lemma 7.1]{riehm_integral_1964} and Proposition \ref{4.2}. Since ${L'}^{\sharp}$ is isomorphic to $$ 2^{-1}{\mathfrak p}i_\frak p^{i} A({\mathfrak p}i_{\frak p}^i, \sigma {\mathfrak p}i_\frak p^{i+1}) \ \ \ \text{or} \ \ \ 2^{-1}(1-4\rho_\frak p \sigma^{-1} {\mathfrak p}i_\frak p^{-2i-1}){\mathfrak p}i_\frak p^{i} A({\mathfrak p}i_{\frak p}^i, \sigma {\mathfrak p}i_\frak p^{i+1}) $$ with $0\leq i\leq e-1$ and $$\frak n({L'}^{\sharp})= \frak p^{2i-e} \supset 2 \mathcal{O}_{\frak p}=\frak n(M') , $$ one concludes that $M'$ represents $L'$ by \cite[Theorem 7.2]{riehm_integral_1964} and Theorem \ref{2.1}.
If $L$ is in Type III, one only needs to consider the case $L\cong 2^{-1}{\mathfrak p}i_\frak p A(2, 2\rho_\frak p)$ since
$$ 2^{-1} A(2, 2\rho_\frak p) {\mathfrak p}erp 2^{-1} A(2, 2\rho_\frak p) \cong 2^{-1} A(0, 0) {\mathfrak p}erp 2^{-1} A(0, 0) . $$
Since the norm group of $L'$ satisfies
$$ \frak g(L')= \frak n(L')= \frak p^{e+1} \subset \frak g(M')=\frak n(M')= 2 \mathcal{O}_{\frak p}$$ by \cite[Lemma 7.1]{riehm_integral_1964} and Proposition \ref{4.2} and
$${L'}^{\sharp}\cong {\mathfrak p}i_\frak p^{-1} A(2, 2\rho_\frak p) \ \ \ \text{with} \ \ \ \frak n({L'}^\sharp)=\frak p^{e-1}\supset 2 \mathcal{O}_{\frak p}=\frak n(M') , $$ one concludes that $M'$ represents $L'$ by \cite[Theorem 7.2]{riehm_integral_1964} and Theorem \ref{2.1}.
\underline{Necessity}. Suppose $M$ is a quaternary $2$-universal integral lattice with a Jordan splitting $$M=M_1{\mathfrak p}erp \cdots {\mathfrak p}erp M_t . $$
Since $\frak n(M)\subseteq \mathcal{O}_{\frak p}$, one has $\frak s(M)=\frak s(M_1) \subseteq 2^{-1}\mathcal{O}_{\frak p}$. On the other hand, $M$ represents $2^{-1}A(0, 0)$. It follows that $$\frak s(M)=\frak s(M_1) =2^{-1}\mathcal{O}_{\frak p} \ \ \ \text{with} \ \ \ \mathrm{rank} (M_1)\geq 2 . $$ Therefore $\frak n(M_1)=2\frak s(M_1)$ and $\mathrm{rank} (M_1)=2$ or $4$. Since $F_\frak p M\cong \Bbb H{\mathfrak p}erp \Bbb H$ by Theorem \ref{2.1}, one concludes that $$M\cong 2^{-1} A(0, 0) {\mathfrak p}erp 2^{-1} A(0, 0)$$ in the case $M=M_1$ (that is, when $\mathrm{rank} (M_1)=4$) by \cite[93:18.Example (vi)]{omeara_quadratic_1963}.
Now we only need to consider the case $\mathrm{rank} (M_1)=2$. Since $M$ represents $2^{-1}A(0, 0)$, we can assume that $M_1 \cong 2^{-1}A(0, 0)$ in a basis $\{x_1, x_2\}$ by \cite[82:15]{omeara_quadratic_1963}. Since $M$ also represents $2^{-1} A(2, 2\rho_\frak p)$, we have $$ \frak n(M_2{\mathfrak p}erp \cdots {\mathfrak p}erp M_t) = \mathcal{O}_{\frak p} $$ by Lemma \ref{4.4}.
Since $F_\frak p(M_2{\mathfrak p}erp \cdots {\mathfrak p}erp M_t) \cong \Bbb H$, one can write
\begin{equation} \label{complement} M_2{\mathfrak p}erp \cdots {\mathfrak p}erp M_t = \begin{cases} {\mathfrak p}i_\frak p^{-i} A(\epsilon {\mathfrak p}i_\frak p^i, 0) \ \ \ & \text{for $t=2$} \\
\langle \epsilon \rangle {\mathfrak p}erp \langle \eta {\mathfrak p}i_\frak p^r\rangle \ \ \ & \text{for $t=3$} \end{cases} \end{equation}
in some basis $\{ z_1, z_2\} $, where $\epsilon$ and $\eta$ are in $\mathcal{O}_{\frak p}^\times$, $r\geq 2$ and $0\leq i\leq e-1$. Let $\{y_1, y_2\}$ be a basis for $ 2^{-1} {\mathfrak p}i_\frak p A(2, 2\rho_\frak p) $. Since $2^{-1}{\mathfrak p}i_\frak p A(2, 2\rho_\frak p) \longrightarrow M$, there are elements $a, b, c, d$ in $\mathcal{O}_{\frak p}$ such that
$$ y_1= a x_1+b x_2 + cz_1+dz_2 .$$
We claim that $a$ or $b$ is in $\mathcal{O}_{\frak p}^\times$. Suppose that both $a$ and $b$ lie in $\frak p$. Then
\begin{equation} \label{val} {\mathfrak p}i_\frak p=ab+ \begin{cases} \epsilon c^2+ 2{\mathfrak p}i_\frak p^{-i} cd \ \ \ & \text{for $t=2$} \\
\epsilon c^2 + \eta {\mathfrak p}i_\frak p^r d^2 \ \ \ & \text{for $t=3$.} \end{cases} \end{equation}
Since $a, b\in \frak p$ and the remaining terms on the right hand side of (\ref{val}) lie in $\frak p$ (note that $r\geq 2$, and that $\mathrm{ord}_\frak p(2{\mathfrak p}i_\frak p^{-i})=e-i\geq 1$), reducing (\ref{val}) modulo $\frak p$ forces $c\in \frak p$. But then every term on the right hand side of (\ref{val}) lies in $\frak p^2$, which contradicts $\mathrm{ord}_\frak p({\mathfrak p}i_\frak p)=1$.
Without loss of generality, we assume $a\in \mathcal{O}_{\frak p}^\times$. Then $M=(\mathcal{O}_{\frak p} y_1 +\mathcal{O}_{\frak p} x_2 ){\mathfrak p}erp N' $ with $$(\mathcal{O}_{\frak p} y_1 +\mathcal{O}_{\frak p} x_2 )\cong 2^{-1} A(0, 0) \ \ \ \text{and} \ \ \ N'\cong M_2{\mathfrak p}erp \cdots {\mathfrak p}erp M_t$$ by \cite[82:15, 93:11.Example and 93:14.Theorem]{omeara_quadratic_1963}. Let $\{z_1', z_2'\}$ be the basis of $N'$ corresponding to the above isometry.
Since $ 2^{-1}{\mathfrak p}i_\frak p A(2, 2\rho_\frak p) \longrightarrow M$, there are elements $\alpha, \beta, \gamma, \delta$ in $\mathcal{O}_{\frak p}$ such that
$y_2= \alpha y_1+\beta x_2 + \gamma z_1'+\delta z_2'$. Then
$$ B(y_1, y_2)= \alpha Q(y_1) + \beta B(y_1, x_2) + \gamma B(y_1, z_1')+\delta B(y_1, z_2') \in 2^{-1} \frak p . $$
Since $\frak s(N') \subseteq 2^{-1} \frak p$, both $B(y_1, z_1')$ and $B(y_1, z_2')$ are in $2^{-1} \frak p$. Therefore $\beta B(y_1, x_2)\in 2^{-1} \frak p$. Since $B(y_1, x_2)\mathcal{O}_{\frak p}=2^{-1} \mathcal{O}_{\frak p}$, one has $\beta\in \frak p$. This implies that $$2^{-1}{\mathfrak p}i_\frak p A(2, 2\rho_\frak p) \longrightarrow (\mathcal{O}_{\frak p} y_1+\frak p x_2 ){\mathfrak p}erp N' . $$
Since $\frak n(N') = \mathcal{O}_{\frak p}$, one obtains
$$2^{-1}{\mathfrak p}i_\frak p A(2, 2\rho_\frak p) \longrightarrow (\mathcal{O}_{\frak p} y_1+\frak p x_2 ){\mathfrak p}erp (\frak p z_1'+\mathcal{O}_{\frak p} z_2') $$ by inspecting the representation of $y_2$. Moreover one has $ \frak n (\frak p z_1'+\mathcal{O}_{\frak p} z_2') = \frak p^2 $ by (\ref{complement}). A contradiction is derived by applying Lemma \ref{4.4} with scaling ${\mathfrak p}i_\frak p^{-1}$.
\end{proof}
Since $2$ is not invertible in $\mathcal{O}_{\mathfrak{p}}$ in the dyadic case, classic integral lattices behave quite differently from general integral lattices.
\begin{prop} \label{4.6} There is no classic $2$-universal quaternary lattice over any dyadic local field.
\end{prop}
\begin{proof} Suppose that $M$ is a classic 2-universal quaternary lattice with a Jordan splitting $$M=M_1{\mathfrak p}erp \cdots {\mathfrak p}erp M_t . $$ Since $M$ represents all binary unimodular lattices, both proper and improper binary unimodular lattices split $M$ by \cite[82:15]{omeara_quadratic_1963}. This implies that $\mathrm{rank} (M_1)\geq 3$ by \cite[91:9.Theorem]{omeara_quadratic_1963}. Since $M$ represents $A(0,0)$, one can assume that $A(0,0)$ splits $M_1$ by \cite[82:15]{omeara_quadratic_1963}.
If $\mathrm{rank} (M_1)= 3$, one can write $$M_1=(\mathcal{O}_{\frak p} x+\mathcal{O}_{\frak p}y){\mathfrak p}erp \mathcal{O}_{\frak p} z $$ with $Q(x)=Q(y)=0$, $B(x,y)=1$ and $Q(z)=\epsilon \in \mathcal{O}_{\frak p}^\times$. Since $M$ is classic 2-universal, one obtains $F_\frak p M\cong \Bbb H {\mathfrak p}erp \Bbb H$ from Theorem \ref{2.1}. This implies that $\frak s (M_2) \subseteq \frak p^2$. Moreover, by the condition $ \langle \epsilon(1+{\mathfrak p}i_\frak p)\rangle \longrightarrow M $, there are elements $a, b, c\in \mathcal{O}_{\frak p}$ and $w\in M_2$ such that
$$ \epsilon(1+{\mathfrak p}i_\frak p)=Q(ax+by+cz+w)=2ab + \epsilon c^2+ Q(w) .$$ Since $Q(w)\in \frak p^2$, one concludes that $c\in \mathcal{O}_{\frak p}^\times$ and
$$ \frak d(1+{\mathfrak p}i_\frak p)=\frak d(c^2+2ab \epsilon^{-1} + Q(w)\epsilon^{-1}) \subseteq \frak p^2 $$ if $e>1$. But this contradicts \cite[63:5]{omeara_quadratic_1963}.
For $e=1$, we consider a basis $\{u, v\}$ of $A(2, 2\rho_\frak p)$. By the representability of this lattice by $M$, one can find elements $\alpha_1, \beta_1, \gamma_1$ and $\alpha_2, \beta_2, \gamma_2$ in $\mathcal{O}_{\frak p}$ such that
$$ \begin{cases} u= \alpha_1x+\beta_1 y + \gamma_1 z+ w_1 \\
v=\alpha_2 x+\beta_2 y +\gamma_2 z+ w_2 \end{cases} $$ for some $w_1, w_2\in M_2$. Therefore
$$ 2= 2\alpha_1\beta_1+ \gamma_1^2 \epsilon +Q(w_1) \ \ \ \text{and} \ \ \ 2\rho_\frak p= 2\alpha_2\beta_2 + \epsilon \gamma_2^2 +Q(w_2) .$$
This implies that $\gamma_1, \gamma_2 \in \frak p$. One concludes that $$ A(2, 2\rho_\frak p) \longrightarrow (\mathcal{O}_{\frak p} x+\mathcal{O}_{\frak p}y){\mathfrak p}erp \frak p z
{\mathfrak p}erp M_2 . $$
A contradiction is derived by applying Lemma \ref{4.4} with scaling 2.
Otherwise $M=M_1$. Since $ A(1,0) \longrightarrow M $ and $A({\mathfrak p}i_\frak p, 0) \longrightarrow M $, the weight $\frak w(M)$ of $M$ is equal to $\frak p$ by \cite[82:15 and 93:5.Example]{omeara_quadratic_1963}. Therefore $$M\cong A(1,0){\mathfrak p}erp A({\mathfrak p}i_\frak p, 0) \ \ \ \text{in some basis} \ \ \ \{w, x, y, z\} $$ by \cite[93:18.Example (vi)]{omeara_quadratic_1963}. Since the lattice $A(2, 2\rho_\frak p)$ with basis $\{u, v\}$ is represented by $M$, there exist $\alpha_1, \beta_1, \gamma_1, \delta_1$ and $\alpha_2, \beta_2, \gamma_2, \delta_2$ in $\mathcal{O}_{\frak p}$ such that
$$ \begin{cases} u= \alpha_1 w+\beta_1 x + \gamma_1 y+ \delta_1 z \\
v=\alpha_2 w+ \beta_2 x+\gamma_2 y +\delta_2 z .\end{cases} $$
This implies that
$$ 2=\alpha_1^2 + 2\alpha_1\beta_1+{\mathfrak p}i_\frak p \gamma_1^2+ 2\gamma_1\delta_1 \ \ \ \text{and} \ \ \ 2\rho_\frak p=\alpha_2^2 +2\alpha_2\beta_2 + {\mathfrak p}i_\frak p \gamma_2^2 + 2\gamma_2\delta_2 $$ and one obtains that $\alpha_1, \alpha_2$ are both in $\frak p$.
For $e>1$, one further has $ \gamma_1, \gamma_2\in \frak p$. Therefore
$$1= B(u,v)=\alpha_1\beta_2+\beta_1\alpha_2+\gamma_1\delta_2+\delta_1\gamma_2 \in \frak p $$ which is a contradiction.
For $e=1$, one has $A({\mathfrak p}i_\frak p, 0)\cong A(0,0)$ by \cite[93:11.Example]{omeara_quadratic_1963}. Then
$$A(2, 2\rho_\frak p) \longrightarrow (\frak p w+ \mathcal{O}_{\frak p} x) {\mathfrak p}erp (\mathcal{O}_{\frak p} y +\mathcal{O}_{\frak p} z) \cong A(0, 0) {\mathfrak p}erp {\mathfrak p}i_\frak p A({\mathfrak p}i_\frak p, 0) .$$
This contradicts Lemma \ref{4.4} with scaling $2$. \end{proof}
\section{Existence of 2-LNG lattices}\label{sec5}
In the next two sections, we apply the results of the previous sections to study the existence of (classic) $k$-LNG lattices (Definition\;\ref{1.3} (4)) over a number field.
We keep the same notation as in the previous sections.
By Corollary \ref{2.3}, there may exist $k$-LNG lattices over a number field only for $k=1$ or $2$. In this section, we treat the case $k=2$.
Our main result in this section is the following.
\begin{thm}\label{5.1}
Let $ F $ be a number field. Then there is a $2$-LNG lattice over $F$ if and only if the class number of $F$ is even. Moreover, all $2$-LNG lattices are in the genus of $2^{-1}A(0, 0){\mathfrak p}erp 2^{-1}A(0, 0)$.
\end{thm}
\begin{proof}
\underline{Necessity.} Let $M$ be a 2-LNG lattice over $F$. Since $M_\frak p$ is 2-universal for a non-dyadic prime $\frak p$, one concludes that $\mathrm{rank} (M)\geq 4$ by Proposition \ref{3.3}. Suppose that the class number of $F$ is odd. Since $M_\frak p$ is 2-universal for all $\frak p\in \Omega_F$, one has that $M_\frak p$ is also 1-universal.
So there is a single class in $\mathrm{gen}(M)$ by \cite[Proposition 3.2]{xu_indefinite_2020}. This implies that the indefinite lattice $M$ is 2-universal over $F$, contradicting the assumption that $M$ is 2-LNG.
\underline{Sufficiency.} Let
$$ M=2^{-1}A(0, 0){\mathfrak p}erp 2^{-1}A(0, 0)=(\mathcal O_F x+ \mathcal O_F y) {\mathfrak p}erp (\mathcal O_F z+ \mathcal O_F w)$$ where $Q(x)=Q(y)=Q(z)=Q(w)=0$ and $B(x, y)=B(z, w)=\frac{1}{2} $.
Since the class number of $F$ is even, there exists an unramified quadratic extension $ E=F(\sqrt{c}) $ with $ c\in \mathcal{O}_{F} $. Since $E/F$ is unramified at all finite primes, one concludes that $\mathrm{ord}_\frak p(c) \equiv 0 {\mathfrak p}mod 2$ for all $\frak p\in \Omega_F\setminus \infty_F$.
Consider the quadratic space $Fu{\mathfrak p}erp Fv$ with
$ Q(u)=1$ and $Q(v)=-c$. By \cite[81:14]{omeara_quadratic_1963}, we can define a binary $\mathcal O_F$-lattice $L$ on $Fu{\mathfrak p}erp Fv$ by local conditions as follows
$$ L_\frak p= \begin{cases} \mathcal{O}_{\frak p} u {\mathfrak p}erp \mathcal{O}_{\frak p} {\mathfrak p}i_\frak p^{-t_\frak p} v \ \ \ & \text{if $\frak p$ is non-dyadic} \\
2^{-1}A(0, 0) \ \ \ & \text{if $\frak p$ is dyadic and splits completely in $E/F$}\\
2^{-1}A(2, 2\rho_\frak p) \ \ \ & \text{if $\frak p$ is dyadic and inert in $E/F$} \end{cases} $$
where $t_\frak p=\frac{1}{2} \mathrm{ord}_{\frak p} (c)$. We claim that there is an $\mathcal O_F$-lattice in $\mathrm{gen}(M)$ which cannot represent $L$. Then such a lattice is 2-LNG.
By \cite[Theorem 4.1]{hsia_indefinite_1998}, the claim follows as soon as we prove
\begin{equation}\label{spn}
\theta_\frak p (X(M_\frak p/K_\frak p ) ) =N_{\frak P\mid \frak p} (E_\frak P^\times)
\end{equation}
for all $\frak p\in \Omega_F\setminus \infty_F$, where $\frak P$ is a prime of $E$ above $\frak p$, $N_{\frak P\mid \frak p}$ is the norm map from $E_\frak P$ to $F_\frak p$, $K_\frak p$ is a fixed sublattice of $M_\frak p$ satisfying $K_\frak p \cong L_\frak p$ and
$$X(M_\frak p/K_\frak p)= \{ \sigma \in O^+(F_\frak p M_\frak p): \ K_\frak p \subset \sigma M_\frak p \} .$$
When $\frak p$ is a non-dyadic prime of $F$, \eqref{spn} holds by \cite[92:5]{omeara_quadratic_1963} and \cite[Theorem 5.1]{hsia_indefinite_1998}.
When $\frak p$ is dyadic and splits completely in $E/F$, we have
$$\theta_\frak p (X(M_\frak p/K_\frak p ) )\supseteq \theta_\frak p(O^+((F_\frak p K_\frak p)^{\mathfrak p}erp))= \theta_\frak p(O^+(\Bbb H))= F_\frak p^\times $$
where $(F_\frak p K_\frak p)^{\mathfrak p}erp$ is the orthogonal complement of $F_\frak p K_\frak p$ in $F_\frak pM_\frak p$. Therefore (\ref{spn}) holds.
When $\frak p$ is dyadic and inert in $E/F$, we have the splitting
$$M_\frak p=K_\frak p{\mathfrak p}erp K_\frak p^{\mathfrak p}erp$$
by \cite[82:15]{omeara_quadratic_1963}. Since $\frak n(M_\frak p)=2\frak s(M_\frak p)$, one obtains that $\frak n(K_\frak p^{\mathfrak p}erp)=2\frak s(K_\frak p^{\mathfrak p}erp)$. Therefore $$K_\frak p^{\mathfrak p}erp \cong 2^{-1}A(2, 2\rho_\frak p)$$ by \cite[93:11.Example]{omeara_quadratic_1963}. For any $\sigma\in X(M_\frak p/K_\frak p )$, one also has the splitting
$$ \sigma M_\frak p=K_\frak p {\mathfrak p}erp K_\sigma \ \ \ \text{with} \ \ \ K_\sigma\cong 2^{-1}A(2, 2\rho_\frak p) $$ by the same argument as above. Since both $ K_\frak p^{\mathfrak p}erp$ and $K_\sigma$ are $\mathcal{O}_{\frak p}$-maximal lattices on $F_\frak p K_\frak p^{\mathfrak p}erp$ by \cite[82:19]{omeara_quadratic_1963}, we have $K_\frak p^{\mathfrak p}erp = K_\sigma$ by
\cite[91:1.Theorem]{omeara_quadratic_1963}. This implies that
$$ X(M_\frak p/K_\frak p ) =O^+(M_\frak p) \ \ \ \text{and} \ \ \ \theta_\frak p( X(M_\frak p/K_\frak p )) =\theta (O^+(M_\frak p))= \mathcal{O}_{\frak p}^\times (F_\frak p^\times)^2 $$
by \cite[Lemma 1]{hsia_spinor_1975}. Therefore \eqref{spn} follows from \cite[63:16.Example]{omeara_quadratic_1963}. The proof of the above claim is complete.
Suppose that $N$ is a 2-LNG lattice over $F$. Then $\mathrm{rank}(N)=4$ by \cite[\S 4 (5)]{hsia_indefinite_1998}. Since $N_\frak p$ is universal for all $\frak p\in \Omega_F$, one concludes that $$N\in \mathrm{gen}(2^{-1}A(0, 0){\mathfrak p}erp 2^{-1}A(0, 0))$$ by Theorem \ref{2.1}, Proposition \ref{3.3} and Proposition \ref{4.5}.
\end{proof}
It should be pointed out that there are no classic $2$-LNG lattices over a number field by Proposition\;\ref{4.6}.
Here we provide a concrete example of a 2-LNG integral quadratic form.
\begin{ex} Let $F=\Bbb Q(\sqrt{-5})$. Then
$$ (1+\sqrt{-5})x^2+5xy+(1-\sqrt{-5})y^2 + zw $$ is a 2-LNG integral quadratic form over $F$.
\end{ex}
\begin{proof} It is well known that the class number of $F$ is two and the Hilbert class field of $F$ is $F(\sqrt{-1})$ (see \cite[Example 3.8]{xu_indefinite_2020}). The quadratic form $xy+zw$ corresponds to the $\mathcal O_F$-lattice
$$M=2^{-1}A(0, 0){\mathfrak p}erp 2^{-1}A(0, 0)= (\mathcal O_F v_1+ \mathcal O_{F} v_2) {\mathfrak p}erp (\mathcal O_F v_3 + \mathcal O_{F} v_4) $$
where $Q(v_1)=Q(v_2)=Q(v_3)=Q(v_4)=0$ and $B(v_1, v_2)=B(v_3, v_4)=\frac{1}{2}$. By \cite[92:5]{omeara_quadratic_1963} and \cite[Lemma 1]{hsia_spinor_1975}, we find that
$$\theta_\frak p (O^+(M_\frak p))= \mathcal{O}_{\frak p}^\times (F_\frak p^\times)^2 $$ for $\frak p\in \Omega_F\setminus \infty_F$. Thus, the number $h(M)$ of proper classes in $\mathrm{gen}(M)$ is given by
$$ h(M)=\bigg[\Bbb I_F : F^\times \bigg({\mathfrak p}rod_{\frak p\in \infty_F} F_\frak p^\times\times {\mathfrak p}rod_{\frak p\in \Omega_F\setminus \infty_F} \mathcal{O}_{\frak p}^\times \bigg) \Bbb I_F^2\bigg] = [\mbox{Pic}(\mathcal O_F) : \mbox{Pic}(\mathcal O_F)^2]=2$$
by \cite[33:14.Theorem, 102:7 and 104:5.Theorem]{omeara_quadratic_1963}. Here $\Bbb I_F$ denotes the id\`ele group of $F$.
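For concreteness, the class number of $F$ can also be read off from the reduced binary quadratic forms of discriminant $-20$, namely $x^2+5y^2$ and $2x^2+2xy+3y^2$; thus $|\mbox{Pic}(\mathcal O_F)|=2$ and $[\mbox{Pic}(\mathcal O_F) : \mbox{Pic}(\mathcal O_F)^2]=2$.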
Let $\frak q= (2, 1+\sqrt{-5})$ be the non-principal prime ideal of $F$ above $2$. Define
$$N = (\frak q v_1 + \frak q^{-1} v_2) {\mathfrak p}erp (\mathcal O_F v_3 + \mathcal O_{F} v_4)\,, $$
which corresponds to the integral quadratic form $$(1+\sqrt{-5})x^2+5xy+(1-\sqrt{-5})y^2 + zw $$ as shown in \cite[Example 3.8]{xu_indefinite_2020}. Then $N_\frak p=M_\frak p$ for $\frak p\neq \frak q$ and $\sigma_\frak q (M_{\mathfrak{q}})= N_{\mathfrak{q}}$ where
$$\sigma_\frak q v_1={\mathfrak p}i_\mathfrak{q} v_1, \ \ \ \sigma_{\mathfrak{q}} v_2= {\mathfrak p}i_{\mathfrak{q}}^{-1} v_2, \ \ \ \sigma_\frak q v_3=v_3 \ \ \ \text{and} \ \ \ \sigma_\frak q v_4= v_4 .$$
Since $\sigma_\frak q = \tau_{v_1-v_2} \tau_{v_1-{\mathfrak p}i_\frak qv_2}$, one obtains that $\theta_\frak q(\sigma_\frak q)={\mathfrak p}i_\frak q (F_\frak q^\times)^2$. On the other hand, since $2$ is inert in $\Bbb Q(\sqrt{5})/\Bbb Q$, one concludes that the dyadic prime $\frak q$ of $F$ is inert in $F(\sqrt{-1})=F(\sqrt{5})$ by inspecting the extension degree of the residue fields. By using \cite[Chapter VI, \S5. (5.7) Corollary]{Neu1}, we see that
$$(i_\frak p)_{\frak p\in \Omega_F} \not \in F^\times \bigg({\mathfrak p}rod_{\frak p\in \infty_F} F_\frak p^\times\times {\mathfrak p}rod_{\frak p\in \Omega_F\setminus \infty_F} \mathcal{O}_{\frak p}^\times \bigg) \Bbb I_F^2$$ where
$$ i_\frak p= \begin{cases} {\mathfrak p}i_\frak q \ \ \ & \text{if } \frak p=\frak q \\
1 \ \ \ & \text{otherwise.} \end{cases} $$
Therefore $M$ and $N$ are exactly the representatives of the two classes in $\mathrm{gen}(M)$.
Put
$$ L= \mathcal O_F (v_1+ v_2) + \mathcal O_F (\sqrt{-5} v_2+v_3-v_4) \subset M\,. $$
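For the reader's convenience, we record the Gram data of the above basis of $L$:
$$ Q(v_1+v_2)=1, \ \ \ Q(\sqrt{-5}\, v_2+v_3-v_4)=-1 \ \ \ \text{and} \ \ \ B(v_1+v_2,\ \sqrt{-5}\, v_2+v_3-v_4)=\frac{\sqrt{-5}}{2} , $$
so that $\det(FL)=-1+\frac{5}{4}=\frac{1}{4}$ and $\det(FL)\cdot \det(FM)=\frac{1}{4}\cdot\frac{1}{16}=\big(\frac{1}{8}\big)^2$.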
Since $\det(FL)\cdot \det(FM)\in (F^\times)^2$,
$$E=F(\sqrt{-\det(FL)\cdot \det(FM)})=F(\sqrt{-1})$$
is the Hilbert class field of $F$. Moreover, one has
$$\theta_\frak p (X(M_\frak p/L_\frak p ) ) =N_{\frak P\mid \frak p} (E_\frak P^\times) $$
for any prime $\frak p\neq \frak q$ of $F$ by \cite[Theorem 5.1]{hsia_indefinite_1998}, where
$$X(M_\frak p/L_\frak p)= \{ \sigma \in O^+(F_\frak p M_\frak p): \ L_\frak p \subset \sigma M_\frak p \}.$$
Since $ L_\frak q \cong 2^{-1} A(2, 2\rho_\frak q)$, the same arguments as in Theorem\;\ref{5.1} and \cite[63:16.Example]{omeara_quadratic_1963} yield
$$X(M_\frak q/L_\frak q)= \{ \sigma \in O^+(F_\frak q M_\frak q): \ L_\frak q \subset \sigma M_\frak q \}=O^+(M_\frak q) \ \ \ \text{and} \ \ \ \theta_\frak q(X(M_\frak q/L_\frak q))=N_{\frak Q\mid \frak q} (E_\frak Q^\times)\,, $$
$\frak Q$ denoting a prime of $E$ above $\frak q$. One concludes that $L$ is not represented by $N$ by \cite[Theorem 4.1]{hsia_indefinite_1998}. Hence $N$ is $2$-LNG over $F$.
\end{proof}
\section{Existence of classic 1-LNG lattices}\label{sec6}
In \cite{xu_indefinite_2020}, it has been proved that a number field $F$ admits 1-LNG lattices if and only if the class number of $F$ is even. However, the lattices constructed in \cite[Theorem 3.5.(2)]{xu_indefinite_2020} are not classic integral. It is natural to ask which number fields admit classic 1-LNG lattices (see \cite[Remark 1.2]{xu_indefinite_2020}).
Recall that a prime $\frak p\in\infty_F$ is called ramified in a finite extension $E/F$ if $\frak p$ is real and $E$ has a complex prime above $\frak p$ (\cite[p.172]{Janusz}). We say a finite extension $E/F$ is unramified if it is unramified at all (finite or infinite) primes of $F$.
\begin{thm} \label{6.1} A number field $F$ admits a classic 1-LNG $\mathcal O_F$-lattice if and only if there is a quadratic unramified extension $E/ F $ in which every dyadic prime of $F$ splits completely. In this case, there are infinitely many classes of classic 1-LNG free ${\mathcal O}_F$-lattices.
\end{thm}
\begin{proof} \underline{Necessity}. Let $M$ be a classic 1-LNG $\mathcal{O}_F$-lattice and $K$ be an integral ${\mathcal O}_F$-lattice of rank 1 such that $K$ is not represented by $M$.
Since there is no classic 1-universal binary lattice over dyadic local fields by \cite[Corollary 2.9]{xu_indefinite_2020}, one obtains $\mathrm{rank} (M)\geq 3$.
By \cite[p.135, line 2-4]{hsia_indefinite_1998}, one concludes that $$\mathrm{rank} (M)=3 \ \ \ \text{ and } -\det(FK) \cdot \det(FM) \not \in (F^\times)^2 . $$ Let $ E=F(\sqrt{-\det(FK)\cdot \det(FM)}) $ and $L\in \mathrm{gen}(M)$ such that $K\subset L$. Then
$$ \theta_\frak p (X(L_\frak p, K_\frak p) ) = N_{\mathfrak{P}\mid \mathfrak{p}}(E_\frak P^\times)$$
by \cite[Theorem 4.1]{hsia_indefinite_1998}, where $\frak P$ is a prime of $E$ above $\frak p$, $N_{\frak P\mid \frak p}$ is the norm map from $E_\frak P$ to $F_\frak p$, and
$$ X(L_\frak p, K_\frak p) = \{ \sigma\in O^+(F_\frak p M_\frak p): \ K_\frak p\subset \sigma (L_\frak p) \} $$ for all $\frak p\in \Omega_F$.
Since $L_\frak p$ is 1-universal over $F_\frak p$ for all $\frak p\in \Omega_F$, one obtains
$$\theta_\frak p (X(L_\frak p, K_\frak p) ) \supseteq \theta_\frak p(O^+(L_\frak p)) \supseteq \begin{cases} \mathcal{O}_{\frak p}^\times \ \ \ & \frak p\in \Omega_F\setminus \infty_F \\
F_\frak p^\times \ \ \ & \frak p\in \infty_F \end{cases} $$
by \cite[Lemma 2.2]{xu_indefinite_2020}. So $N_{\mathfrak{P}\mid \mathfrak{p}}(E_\frak P^\times)\supseteq {\mathcal O}_{\frak p}^\times$ for all $\frak p\in\Omega_F$, and hence $E/F$ is an unramified extension by \cite[Chap.\;VI, (6.6) Corollary]{Neu1}.
Now let $\frak p$ be a dyadic prime of $F$. Since $L_\frak p$ is classic integral, the symmetry
$\tau_v$ is an element in $O(L_\frak p)$ for any $v\in L_\frak p$ with $0\leq \mathrm{ord}_\frak p(Q(v))\leq 1$. Then
$ \theta_\frak p(O^+(L_\frak p)) = F_\frak p^\times $
by \cite[Lemma 2.2]{xu_indefinite_2020}.
Therefore $N_{\mathfrak{P}\mid \mathfrak{p}}(E_\frak P^\times)=F_{\frak p}^\times$, and hence $E_{\mathfrak{P}}=F_{\frak p}$ by local class field theory (cf. \cite[Chap.\;V,\,(1.3) Theorem]{Neu1}). This proves that every dyadic prime of $F$ splits completely in $E/F$ as desired.
\
\underline{Sufficiency}. Suppose that $E/F$ is a quadratic unramified extension in which dyadic primes of $F$ all split completely. We can write $E=F(\sqrt{-b})$ for some $b\in \mathcal{O}_{F} $. Since $E/F$ is unramified, one has that $ \mathrm{ord}_{\frak p}(b) $ is even for all $ \mathfrak{p}\in \Omega_{F}\backslash \infty_{F} $. Consider the ternary space $V=[1,\,-1,\,-b]$. This is the unique isotropic ternary space with $\det(V)=b$.
Consider an ${\mathcal O}_F$-lattice $L=\langle 1,\,-1,\,-b\rangle$ on $V$.
Write $\beta_{\mathfrak{p}}:=b{\mathfrak p}i_{\mathfrak{p}}^{-\mathrm{ord}_{\frak p}(b)} $ for each $\frak p\in \Omega_F\setminus\infty_F$. Then $ \beta_{\frak p}\in \mathcal{O}_{F_{\mathfrak{p}}}^{\times } $. Since $\mathrm{ord}_{\frak p}(b)=0$ for almost all $\frak p$, one has $L_{\frak p}=\langle 1, -1, -\beta_{\frak p}\rangle$ at almost all primes. When $\frak p$ is dyadic, set $\kappa_{\frak p}:=1+4\rho_{\mathfrak{p}}{\mathfrak p}i_{\mathfrak{p}}^{-1} $. Then $(\kappa_{\frak p}, 1+{\mathfrak p}i_{\mathfrak{p}})_{\mathfrak{p}}=-1 $ by Lemma \ref{4.1}.
By \cite[81:14]{omeara_quadratic_1963}, one can define a ternary $ \mathcal{O}_{F} $-lattice $ M $ on $ V $ by local conditions as follows:
\begin{align*}
M_{\mathfrak{p}}=
\begin{cases}
\langle 1, -1, -\beta_{\frak p} \rangle &\text{if $ \mathfrak{p} $ is non-dyadic},\\
A(1,-{\mathfrak p}i_{\mathfrak{p}}){\mathfrak p}erp \langle -(1+{\mathfrak p}i_{\mathfrak{p}})\beta_{\frak p} \rangle &\text{if $ \mathfrak{p} $ is dyadic and $ (1+{\mathfrak p}i_{\mathfrak{p}},-\beta_{\mathfrak{p}})_{\mathfrak{p}}=1 $},\\
A(\kappa_{\mathfrak{p}},-\kappa^{-1}_{\mathfrak{p}}{\mathfrak p}i_{\mathfrak{p}}){\mathfrak p}erp \langle -(1+{\mathfrak p}i_{\mathfrak{p}})\beta_{\frak p} \rangle &\text{if $ \mathfrak{p} $ is dyadic and $ (1+{\mathfrak p}i_{\mathfrak{p}},-\beta_{\mathfrak{p}} )_{\mathfrak{p}}=-1$}\,.
\end{cases}
\end{align*}
When $ \mathfrak{p} $ is a non-dyadic prime, $ M_{\mathfrak{p}} $ is unimodular and so is 1-universal by \cite[Proposition 2.3]{xu_indefinite_2020}. When $ \mathfrak{p} $ is a dyadic prime, using Lemma \ref{4.1} we can easily check that $ S_{\mathfrak{p}}(F_{\frak p}M)=(-1,-1)_{\mathfrak{p}}$ and so $ F_{\mathfrak{p}}M $ is isotropic by \cite[58:6]{omeara_quadratic_1963}. Also, we have the weight $ \mathfrak{w}(M_{\mathfrak{p}})=\mathfrak{p} $ by \cite[93:5]{omeara_quadratic_1963}. So $ M_{\mathfrak{p}} $ is 1-universal by \cite[Proposition 2.17]{xu_indefinite_2020}.
Fix $x_\frak p \in M_\frak p$ with $Q(x_\frak p)=1$ and write
$$ X(M_\frak p, 1) = \{ \sigma\in O^+(F_\frak p M_\frak p): \ x_\frak p\in \sigma (M_\frak p) \} $$ for all $\frak p\in \Omega_F$. When $ \mathfrak{p} $ is a non-dyadic prime, $\theta_\frak p (X(M_\frak p, 1))=N_{\frak P\mid \frak p} (E_\frak P^\times) $ by the second part of \cite[Satz 3(a)]{schulzepillot_Darstellung_1980} with $ r=s=0 $. When $\mathfrak{p}$ is a dyadic prime, $ \theta_\frak p (X(M_\frak p, 1))= F_\frak p^\times $ by \cite[Theorem 2.1 (Case III)]{Xu4}. On the other hand, the prime $\frak p$ splits completely in $E/F$, thus $N_{\mathfrak{P}\mid \mathfrak{p}}(E_\frak P^\times)=F_{\mathfrak{p}}^{\times}$.
We conclude that $\theta_\frak p (X(M_\frak p, 1))=N_{\mathfrak{P}\mid \mathfrak{p}}(E_\frak P^\times) $ for all $\frak p\in \Omega_F$. By \cite[Theorem 4.1]{hsia_indefinite_1998}, there is an $\mathcal O_F$-lattice in $\mathrm{gen}(M)$ which does not represent $1$. This lattice is classic 1-LNG as desired.
Fix $P\in \mathrm{gen}(M)$ such that 1 is not represented by $P$. There is a basis $\{z_1, z_2, z_3\}$ of $FP$ such that $P= \frak b_1 z_1+ \frak b_2 z_2+ \frak b_3 z_3$ where $\frak b_1, \frak b_2$ and $\frak b_3$ are fractional ideals of ${\mathcal O}_F$ by \cite[81:3. Theorem]{omeara_quadratic_1963}. The Steinitz class of $P$ is defined as the class of $\frak b_1\frak b_2\frak b_3$ in $\mbox{Pic}({\mathcal O}_F)$ and is independent of the choice of basis of $FP$ (cf. \cite[9.3.10, p.\hskip 0.1cm 143]{Voight}). Moreover, $P$ is free if and only if the Steinitz class of $P$ is trivial in $\mbox{Pic}({\mathcal O}_F)$.
By Tchebotarev's density theorem \cite[Chapter V, (6.4) Theorem]{Neu86}, there are infinitely many non-dyadic primes $\frak q_1, \cdots, \frak q_n , \cdots$ such that the class of each $\frak q_i$ in $\mbox{Pic}({\mathcal O}_F)$ is equal to the inverse of the Steinitz class of $P$ for $i=1, \cdots, n, \cdots$. Define a sub-lattice $P_i$ of $P$ by
$$ (P_i)_{\frak p} = \begin{cases} P_\frak p \ \ \ & \text{when $\frak p \neq \frak q_i$} \\
\langle 1, -1, -\beta_{\frak p} {\mathfrak p}i_\frak p^2 \rangle \ \ \ & \text{when $\frak p=\frak q_i$} \end{cases} $$
for each $i=1, \cdots, n , \cdots $. Since the Steinitz class of $P_i$ is equal to the product of the Steinitz class of $P$ and the class of $\frak q_i$ by \cite[82:11]{omeara_quadratic_1963}, the Steinitz class of $P_i$ is trivial, and hence $P_i$ is a free ${\mathcal O}_F$-module, for $i=1, \cdots, n, \cdots$. Since $(P_i)_{\frak q_i}$ is also $1$-universal by \cite[Proposition 2.3]{xu_indefinite_2020}, one obtains that $(P_i)_\frak p$ is 1-universal over ${\mathcal O}_\frak p$ for all $\frak p\in \Omega_F$. Since $P$ cannot represent $1$, one concludes that $P_i$ cannot represent $1$ either for $i=1, \cdots, n, \cdots$.
Namely, all $P_i$ are classic 1-LNG free lattices for $i=1, \cdots, n, \cdots$. Since $\det(P_i)\neq \det(P_j)$ for $i\neq j$, no two of them lie in the same class, as desired.
\end{proof}
In the rest of this section, we determine all quadratic fields over which there are classic 1-LNG lattices. The starting point of the argument is that all quadratic unramified extensions of a quadratic field can be described explicitly by genus theory.
\begin{prop}\label{6.3} Let $F$ be a quadratic field with discriminant $d_F$ and let $p_1, \cdots, p_t$ be all the odd prime divisors of $d_F$. Write $p_i^*=(-1)^{\frac{p_i-1}{2}}p_i$ for each $1\leq i\leq t$ and put
\[
G^{(+)}=F(\sqrt{p_1^*},\cdots, \sqrt{p_t^*})={\mathbb Q}(\sqrt{d_F}\,,\,\sqrt{p_1^*},\cdots, \sqrt{p_t^*})\,.
\] Here $G^{(+)}=F$ if $t=0$.
Then a quadratic extension $E/F$ is unramified if and only if $E\subseteq G^{(+)}$ and in case $F$ is real, $E$ is totally real.
\end{prop}
\begin{proof}
Let $H$ be the Hilbert class field of $F$. Then the Artin map gives an isomorphism
between $ \mbox{Pic}(\mathcal O_F) $ and $ \mbox{Gal}(H/F) $ by \cite[Chap.\,VI, (6.9) Proposition]{Neu1}.
The subgroup $\mbox{Pic}(\mathcal O_F)^2$ generated by $\{ \frak a^2: \frak a\in \mbox{Pic}(\mathcal O_F)\} $
corresponds to an intermediate field $G$ of $H/F$. It is the \emph{genus field} of $F$ in the sense of \cite[\S\,VI.3, pp.243--244]{Janusz}, characterized as the maximal unramified abelian extension of $F$ which is abelian over ${\mathbb Q}$. Note that if $F$ is real, then $G$ must be totally real for otherwise it would be ramified at a real prime of $F$.
The field $G^{(+)}$ is called the \emph{extended genus field} of $F$ in \cite[\S\,VI.3]{Janusz} (but in \cite{Ishida} it is called the \emph{genus field} of $F$). It is the maximal abelian extension of ${\mathbb Q}$ containing $F$ which is unramified at all finite primes of $F$ (\cite[pp.3--4]{Ishida}). So clearly $G\subseteq G^{(+)}$. If $F$ is imaginary, then $G=G^{(+)}$. If $F$ is real, then $G$ coincides with the maximal totally real subfield of $G^{(+)}$.
If $E\subseteq G$, then as a subextension of an unramified extension, $E/F$ is unramified. Conversely, assume that $E/F$ is unramified. Then $E\subseteq H$. Since the quotient group
$$\mathrm{Gal}(H/F)/\mathrm{Gal}(H/E)\cong\mathrm{Gal}(E/F) $$ is 2-torsion, every square in $\mathrm{Gal}(H/F)$ lies in $\mathrm{Gal}(H/E)$. As $\mathrm{Gal}(H/G)$ corresponds to $\mbox{Pic}(\mathcal O_F)^2$ under the Artin map, we have $\mathrm{Gal}(H/G)\subseteq \mathrm{Gal}(H/E)$. Hence $E\subseteq G$. This completes the proof.
\end{proof}
\begin{thm} \label{6.4} Let $F$ be a quadratic field with discriminant $d_F$ and let $p_1, \cdots, p_t$ be all the odd prime divisors of $d_F$.
Then there is no classic ternary 1-LNG lattice over ${\mathcal O}_F$ if and only if $F$ is one of the fields in the following table:
\begin{center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular*}{16.3cm}{c|c|c}
\hline
$d_F{\mathfrak p}mod{4}$ & $F$ \text{ real} & $F$ \text{ imaginary} \\
\hline
\multirow{5}{*}{$\equiv 1$} & $\mathbb{Q}(\sqrt{p_1})$ \text{ with } $p_1\equiv 1{\mathfrak p}mod{4}$ & $\mathbb{Q}(\sqrt{-p_1})$ \text{ with } $p_1\equiv 3{\mathfrak p}mod{4}$ \\
\cline{2-3}
& $\mathbb{Q}(\sqrt{p_1p_2})$ \text{ with } $p_1\equiv p_2\equiv 3{\mathfrak p}mod{4}$ & $\mathbb{Q}(\sqrt{-p_1p_2})$ \text{ with } \\
& \qquad \text{ or } $p_1\equiv p_2\equiv -3 {\mathfrak p}mod{8}$ & $p_1\equiv -p_2\equiv{\mathfrak p}m 3{\mathfrak p}mod{8}$ \\
\cline{2-2}
& $\mathbb{Q}(\sqrt{p_1p_2p_3})$ \text{ with } $p_1\equiv -1{\mathfrak p}mod{8}$ & \\
& $p_2\equiv 3,\,p_3\equiv -3 {\mathfrak p}mod{8}$ & \\
\hline
\multirow{7}{*}{$\equiv 0$} & $\mathbb{Q}(\sqrt{2})$ & ${\mathbb Q}(\sqrt{-1}); \ {\mathbb Q}(\sqrt{-2})$\\
\cline{2-3}
& $\mathbb{Q}(\sqrt{p_1})$ \text{ with } $p_1\equiv 3{\mathfrak p}mod{4}$ & $\mathbb{Q}(\sqrt{-p_1})$ \text{ with } $p_1\equiv -3{\mathfrak p}mod{8}$ \\
\cline{2-3}
& $\mathbb{Q}(\sqrt{2p_1})$ \text{ with } $p_1\equiv -1,\,{\mathfrak p}m 3{\mathfrak p}mod{8}$ & $\mathbb{Q}(\sqrt{-2p_1})$ \text{ with } $p_1\equiv {\mathfrak p}m 3{\mathfrak p}mod{8}$ \\
\cline{2-2}
& $\mathbb{Q}(\sqrt{p_1p_2})$ \text{ or } $\mathbb{Q}(\sqrt{2p_1p_2})$ \text{ with } & \\
& $p_1\equiv -3{\mathfrak p}mod{8}$, $p_2\equiv 3{\mathfrak p}mod{4}$ & \\
\cline{2-2}
& $\mathbb{Q}(\sqrt{2p_1p_2})$ \text{ with } & \\
& $p_1\equiv -3p_2\equiv -1,\,3{\mathfrak p}mod{8}$ & \\
\hline
\end{tabular*}
\end{center}
\end{thm}
\begin{proof}
Let $G^{(+)}={\mathbb Q}(\sqrt{d_F},\,\sqrt{p_1^*},\cdots, \sqrt{p_t^*})$ as in Proposition\;\ref{6.3}.
If $t=0$, then $G^{(+)}=F$. Thus, $F$ has no quadratic unramified extensions at all, hence there is no classic 1-LNG $\mathcal{O}_F$-lattice by Theorem\;\ref{6.1}. Note that in this case $F={\mathbb Q}(\sqrt{2}), {\mathbb Q}(\sqrt{-1})$ or ${\mathbb Q}(\sqrt{-2})$.
Now let us assume $t\ge 1$. We discuss 4 cases to finish the proof.
\noindent {\bf Case 1}. $t=1$.
If $d_F\equiv 1{\mathfrak p}mod{4}$, then $F={\mathbb Q}(\sqrt{p_1})$ with $p_1\equiv 1{\mathfrak p}mod{4}$ or $F={\mathbb Q}(\sqrt{-p_1})$ with $p_1\equiv 3{\mathfrak p}mod{4}$.
In both cases, $d_F=p_1^*$ so that $G^{(+)}=F$. Thus, as in the case $t=0$, there is no classic 1-LNG $\mathcal{O}_F$-lattice.
Next assume $d_F\equiv 0{\mathfrak p}mod{4}$. If $F$ is imaginary, then $F=\Bbb Q(\sqrt{-p_1})$ with $p_1\equiv 1 {\mathfrak p}mod 4$ or $F=\Bbb Q(\sqrt{-2p_1})$. In the former case, $G^{(+)}=F(\sqrt{p_1})$ is then the only quadratic unramified extension of $F$. When $p_1\equiv 1 {\mathfrak p}mod 8$, the dyadic prime of $F$ splits completely in $F(\sqrt{p_1})/F$ by \cite[63:1.Local Square Theorem]{omeara_quadratic_1963}. In this case, there is a classic ternary 1-LNG $\mathcal O_F$-lattice by Theorem\;\ref{6.1}.
If $p_1\equiv 5 {\mathfrak p}mod 8$, then $2$ is inert in $\Bbb Q(\sqrt{p_1})/\Bbb Q$. This implies that the dyadic prime of $F$ is inert in $F(\sqrt{p_1})/F$ by inspecting the extension degree of the residue fields. In this case, one concludes from Theorem\;\ref{6.1} that there is no classic ternary 1-LNG $\mathcal O_F$-lattice.
For $F=\Bbb Q(\sqrt{-2p_1})$, we have $G^{(+)}=F(\sqrt{p_1^*})$. When $p_1\equiv {\mathfrak p}m 1 {\mathfrak p}mod 8$, the dyadic prime of $F$ splits completely in $F(\sqrt{p_1^*})/F$ by \cite[63:1.Local Square Theorem]{omeara_quadratic_1963}. In this case, there is a classic ternary 1-LNG $\mathcal O_F$-lattice by Theorem\;\ref{6.1}. When $p_1\equiv {\mathfrak p}m 3 {\mathfrak p}mod 8$, the dyadic prime of $F$ is inert in $F(\sqrt{p_1^*})/F$, so that $F$ has no classic ternary 1-LNG lattice by Theorem\;\ref{6.1}.
Now suppose $F$ is real. Then $F={\mathbb Q}(\sqrt{p_1})$ with $p_1\equiv 3{\mathfrak p}mod{4}$ or $F={\mathbb Q}(\sqrt{2p_1})$. If $F={\mathbb Q}(\sqrt{p_1})$, then $G^{(+)}=F(\sqrt{p_1^*})=F(\sqrt{-p_1})$ is not totally real. It follows that $F$ has no quadratic unramified extension, hence there is no classic ternary 1-LNG lattice over ${\mathcal O}_F$. If $F={\mathbb Q}(\sqrt{2p_1})$ with $p_1\equiv 3{\mathfrak p}mod{4}$, similar arguments show that $F$ admits no classic 1-LNG lattice. If $F={\mathbb Q}(\sqrt{2p_1})$ with $p_1\equiv -3{\mathfrak p}mod{8}$, then the dyadic prime of $F$ is inert in $G^{(+)}=F(\sqrt{p_1})$, so again there is no classic ternary 1-LNG lattice over $\mathcal O_F$. Finally, if $F={\mathbb Q}(\sqrt{2p_1})$ with $p_1\equiv 1{\mathfrak p}mod{8}$, then the quadratic extension $F(\sqrt{p_1^*})=F(\sqrt{p_1})$ satisfies the conditions in Theorem\;\ref{6.1}, whence the existence of classic ternary 1-LNG lattices over $\mathcal{O}_F$.
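For instance, among the fields covered by this case, ${\mathbb Q}(\sqrt{-17})$ and ${\mathbb Q}(\sqrt{34})$ admit classic ternary 1-LNG lattices (here $p_1=17\equiv 1$ modulo $8$), while ${\mathbb Q}(\sqrt{-5})$ and ${\mathbb Q}(\sqrt{10})$ do not (here $p_1=5\equiv -3$ modulo $8$).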
\noindent {\bf Case 2}. $F$ is imaginary and $t\ge 2$.
If $p_{i}\equiv {\mathfrak p}m 1 {\mathfrak p}mod 8$ for some $1\leq i\leq t$, then $F(\sqrt{p_{i}^*})/F$ is a quadratic extension contained in $G^{(+)}$, hence unramified. Moreover, by the local square theorem all dyadic primes of $F$ split completely in $F(\sqrt{p_{i}^*})/F$. By Theorem\;\ref{6.1}, there exist classic ternary 1-LNG lattices over ${\mathcal O}_F$ in this case.
If there are two distinct indices $1\leq i\neq j \leq t$ such that $p_{i}\equiv p_{j}\mod 8$, then $p_i^*p_j^*=p_ip_j$ and the field $F(\sqrt{p_i^*p_j^*})=F(\sqrt{p_{i} p_{j}})$
is an unramified quadratic extension by Proposition \ref{6.3}. Since $p_ip_j\equiv 1{\mathfrak p}mod{8}$, all dyadic primes of $F$ split completely in $F(\sqrt{p_ip_j})$. Thus, as in the previous case there exist classic ternary 1-LNG lattices over ${\mathcal O}_F$.
In the remaining case, we must have $t=2$ and we may assume without loss of generality that $p_1\equiv 3{\mathfrak p}mod{8}$ and $p_2\equiv -3{\mathfrak p}mod{8}$. Hence $G^{(+)}=F(\sqrt{p_1^*},\,\sqrt{p_2^*})=F(\sqrt{-p_1},\,\sqrt{p_2})$. Note that $F={\mathbb Q}(\sqrt{-p_1p_2})$ or $F={\mathbb Q}(\sqrt{-2p_1p_2})$. In the former case, $2$ splits completely in $F$ and is inert in ${\mathbb Q}(\sqrt{p_2})$. This implies that the dyadic primes of $F$ are inert in $G^{(+)}=F(\sqrt{p_2})$. So by Theorem\;\ref{6.1}, there is no classic ternary 1-LNG lattices over ${\mathcal O}_F$. In the latter case, $G^{(+)}$ contains the quadratic extension $F(\sqrt{-p_1p_2})$ of $F={\mathbb Q}(\sqrt{-2p_1p_2})$, and the dyadic prime of $F$ splits completely in $F(\sqrt{-p_1p_2})$. So by Theorem\;\ref{6.1}, $F$ admits a classic ternary 1-LNG lattice.
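For instance, taking $p_1=3$ and $p_2=5$, the field ${\mathbb Q}(\sqrt{-15})$ admits no classic ternary 1-LNG lattice, whereas ${\mathbb Q}(\sqrt{-30})$ does.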
\noindent {\bf Case 3}. $F$ is real and $t=2$.
First assume $d_F\equiv 1{\mathfrak p}mod{8}$. Then $d_F=p_1p_2$ with $p_1\equiv p_2{\mathfrak p}mod{8}$, and $G^{(+)}=F(\sqrt{p_1^*})=F(\sqrt{p_2^*})$. Similar to the imaginary case, if $p_1\equiv p_2\equiv 1{\mathfrak p}mod{8}$, there exist classic ternary 1-LNG lattices over $\mathcal O_F$. Otherwise either $G^{(+)}$ is not totally real, or $p_1\equiv p_2\equiv 5{\mathfrak p}mod{8}$. In the first case, $F$ has no quadratic unramified extension; in the second case, the unique quadratic unramified extension of $F$ is $F(\sqrt{p_1})$, but the dyadic primes of $F$ are inert in $F(\sqrt{p_1})$. Therefore, there is no classic ternary 1-LNG lattice over $\mathcal O_F$ in either case.
Next assume $d_F\equiv 5{\mathfrak p}mod{8}$. Then $d_F=p_1p_2$ with $p_1\not\equiv p_2{\mathfrak p}mod{8}$. If $p_1$ or $p_2$ is congruent to $1$ modulo $8$, then $F$ has a classic ternary 1-LNG lattice. Otherwise, we may assume $p_1\equiv 3{\mathfrak p}mod{8},\,p_2\equiv -1{\mathfrak p}mod{8}$. Then $G^{(+)}=F(\sqrt{-p_1})$ is not real, hence there is no classic ternary 1-LNG lattice over $\mathcal O_F$.
Finally, consider the case $d_F\equiv 0{\mathfrak p}mod{4}$. Then either $F={\mathbb Q}(\sqrt{p_1p_2})$ with $p_1p_2\equiv 3{\mathfrak p}mod{4}$ or $F={\mathbb Q}(\sqrt{2p_1p_2})$. If $F={\mathbb Q}(\sqrt{p_1p_2})$, we may assume without loss of generality that $p_1\equiv 1{\mathfrak p}mod{4},\,p_2\equiv 3{\mathfrak p}mod{4}$. Then $G^{(+)}=F(\sqrt{p_1},\,\sqrt{-p_2})$ contains precisely 3 quadratic extensions of $F$. The only real extension among these three extensions is $F(\sqrt{p_1})$. Arguing similarly as before, we see that $F$ has no classic ternary 1-LNG lattice if and only if $p_1\equiv -3{\mathfrak p}mod{8}$.
Now suppose $F={\mathbb Q}(\sqrt{2p_1p_2})$. As before, if $p_1$ or $p_2$ is congruent to 1 modulo 8, or if $p_1\equiv p_2{\mathfrak p}mod{8}$, then there exist classic ternary 1-LNG $\mathcal O_F$-lattices. Without loss of generality, we may now assume either $p_1\equiv -1{\mathfrak p}mod{8},\,p_2\equiv 3{\mathfrak p}mod{8}$, or $p_1\equiv 5{\mathfrak p}mod{8},\,p_2\equiv 3{\mathfrak p}mod{4}$. In the former case, the only real quadratic extension of $F$ in $G^{(+)}=F(\sqrt{-p_1},\,\sqrt{-p_2})$ is $F(\sqrt{p_1p_2})$. But the dyadic prime of $F$ is inert in $F(\sqrt{p_1p_2})$ since $p_1p_2\equiv -3{\mathfrak p}mod{8}$. In the other case, $G^{(+)}=F(\sqrt{p_1},\,\sqrt{-p_2})$ and the only real quadratic extension of $F$ in $G^{(+)}$ is $F(\sqrt{p_1})$, in which the dyadic prime of $F$ is again inert. Therefore, for $F={\mathbb Q}(\sqrt{2p_1p_2})$, there is no classic ternary 1-LNG lattice over $F$ if and only if (up to permutation) $p_1\equiv -1{\mathfrak p}mod{8},\,p_2\equiv 3{\mathfrak p}mod{8}$, or $p_1\equiv 5{\mathfrak p}mod{8},\,p_2\equiv 3{\mathfrak p}mod{4}$.
\noindent {\bf Case 4}. $F$ is real and $t\ge 3$.
As in previous discussions, if $p_i\equiv p_j{\mathfrak p}mod{8}$ for two distinct indices $i,\,j$, or if some $p_i$ is congruent to 1 modulo 8, then there exist classic ternary 1-LNG lattices over $\mathcal O_F$. The only remaining possibility is that $t=3$ and up to permutation, $p_1\equiv -1,\,p_2\equiv 3,\,p_3\equiv -3{\mathfrak p}mod{8}$. Thus, $G^{(+)}=F(\sqrt{-p_1},\,\sqrt{-p_2},\,\sqrt{p_3})$.
If $F={\mathbb Q}(\sqrt{p_1p_2p_3})$, then $G^{(+)}$ is a biquadratic extension of $F$ and the only real quadratic subextension is $F(\sqrt{p_3})$. But the dyadic primes of $F$ are inert in $F(\sqrt{p_3})$. So $F$ has no classic ternary 1-LNG lattice in this case.
If $F={\mathbb Q}(\sqrt{2p_1p_2p_3})$, then $F(\sqrt{p_1p_2p_3})$ is a real quadratic extension of $F$ contained in $G^{(+)}$ and the dyadic primes of $F$ split completely in it. Therefore classic ternary 1-LNG $\mathcal O_F$-lattices exist.
The theorem follows by summarizing all the results in the above discussions.
\end{proof}
As was shown in \cite[Example 3.8]{xu_indefinite_2020}, there are infinitely many integral (non-classic) ternary 1-LNG forms over the field $\Bbb Q(\sqrt{-5})$. However,
Theorem\;\ref{6.4} tells us in particular that over $\Bbb Q(\sqrt{-5})$ there is no classic 1-LNG lattice. On the other hand, there are classic 1-LNG lattices over $\Bbb Q(\sqrt{-14})$ according to Theorem\;\ref{6.4}.
\begin{ex} Let $F=\Bbb Q(\sqrt{-14})$ and $m$ be an odd integer. Then the classic integral ternary quadratic forms
$$(2+11 \sqrt{-14})x^2+(106+6\sqrt{-14}) xy + (32-17\sqrt{-14}) y^2 + m^2 z^2 $$
represent all integers of $F$ over ${\mathcal O}_\frak p$ for all $\frak p\in \Omega_F$ but cannot represent $2$ over ${\mathcal O}_F$.
\end{ex}
\begin{proof} Since $2$ is ramified in $F/\Bbb Q$, where $F=\Bbb Q(\sqrt{-14})$, there is a unique dyadic prime $\frak d$ in $F$. By Proposition \ref{6.3}, the genus field $G$ of $F$ is $F(\sqrt{-7})=F(\sqrt{2})$.
Since $-7(=1-8)$ is a square in $F_\frak d$, one concludes that $\frak d$ splits completely in $G/F$. By Theorem \ref{6.1}, there are infinitely many classic 1-LNG lattices. We will construct infinitely many classic 1-LNG integral quadratic forms explicitly. Consider
$$ M=(\mathcal O_F x+ \mathcal O_F y) {\mathfrak p}erp \mathcal O_F z \ \ \ \text{with} \ \ \ Q(x)=0, \ Q(y)=\sqrt{-14} \ \ \ \text{and} \ \ \ B(x, y)=Q(z)=1 $$ with the corresponding quadratic form $2xy+\sqrt{-14} y^2+ z^2$.
Then $M$ is a classic integral unimodular lattice. By \cite[Proposition 2.3 and Proposition 2.17]{xu_indefinite_2020}, one obtains that $M_\frak p$ is 1-universal over ${\mathcal O}_\frak p$ for all $\frak p\in \Omega_F\setminus \infty_F$. Therefore
$$ \theta_\frak p(O^+(M_\frak p)) \supseteq \begin{cases} \mathcal{O}_{\frak p}^\times ( F_\frak p^\times)^2 \ \ \ & \frak p\in \Omega_F\setminus \infty_F \\
F_\frak p^\times \ \ \ & \frak p\in \infty_F \end{cases} $$
by \cite[Lemma 2.2]{xu_indefinite_2020}. By \cite[33:14.Theorem, 102:7 and 104:5.Theorem]{omeara_quadratic_1963}, the number $h(M)$ of (proper) classes in $\mathrm{gen}(M)$ is bounded by
\begin{equation} \label{cls} h(M) \leq \bigg[\Bbb I_F : F^\times \bigg({\mathfrak p}rod_{\frak p\in \infty_F} F_\frak p^\times\times {\mathfrak p}rod_{\frak p\in \Omega_F\setminus \infty_F} \mathcal{O}_{\frak p}^\times \bigg) \Bbb I_F^2\bigg] \leq [\mbox{Pic}(\mathcal O_F) : \mbox{Pic}(\mathcal O_F)^2]=[G:F]=2 \end{equation}
where $\Bbb I_F$ denotes the id\`ele group of $F$. On the other hand, write
$$ X(M_\frak p, 2) = \{ \sigma\in O^+(F_\frak p M_\frak p): \ x_\frak p\in \sigma (M_\frak p) \} $$ where $x_\frak p\in M_\frak p$ with $Q(x_\frak p)=2$ for all $\frak p\in \Omega_F$.
Since $$\theta_\frak p (X(M_\frak p, 2))=N_{\frak P\mid \frak p} (G_\frak P^\times) $$ by \cite[Satz 3(a)]{schulzepillot_Darstellung_1980} and \cite[Theorem 2.0]{Xu4}, one obtains that $2$ is a spinor exception for $\mathrm{gen}(M)$ by \cite[Theorem 4.1]{hsia_indefinite_1998}. In particular, $h(M)\geq 2$. Therefore $h(M)=2$ and the equality of (\ref{cls}) holds. Moreover $G$ is also the spinor class field of $\mathrm{gen}(M)$ in the sense of \cite[p.131]{hsia_indefinite_1998}.
Since $3$ splits completely in $F$ and is inert in $\Bbb Q(\sqrt{2})/\Bbb Q$, the prime $\frak a=(3, 1-\sqrt{-14})$ of $F$ above $3$ is inert in $G/F$.
Define
$$L= (\frak a^{-1} x+ \frak a y) {\mathfrak p}erp \mathcal O_F z .$$
Since
$$ \tau_{{\mathfrak p}i_\frak a^{-1} x + y}({\mathfrak p}i_\frak a^{-1} x)= {\mathfrak p}i_\frak a^{-1} x - \frac{2 {\mathfrak p}i_\frak a^{-1}}{Q(y)+2{\mathfrak p}i_\frak a^{-1}}({\mathfrak p}i_\frak a^{-1} x + y)= \frac{Q(y)}{2+Q(y) {\mathfrak p}i_\frak a} x - \frac{2}{2+Q(y) {\mathfrak p}i_\frak a}y \in M_\frak a $$
and
$$ \tau_{{\mathfrak p}i_\frak a^{-1}x+y}({\mathfrak p}i_\frak a y) ={\mathfrak p}i_\frak a y -\frac{2({\mathfrak p}i_\frak a Q(y)+1)}{Q(y)+2{\mathfrak p}i_\frak a^{-1}} ({\mathfrak p}i_\frak a^{-1} x+y)=- \frac{2({\mathfrak p}i_\frak a Q(y)+1)}{ 2+Q(y) {\mathfrak p}i_\frak a} x-\frac{Q(y){\mathfrak p}i_\frak a^2}{ 2+Q(y) {\mathfrak p}i_\frak a} y \in M_\frak a ,$$ one obtains $ \tau_{{\mathfrak p}i_\frak a^{-1}x+y}(L_\frak a)=M_\frak a$ and hence $L\in \mathrm{gen}(M)$. Moreover, since the intersection ideal of $M$ and $L$ (see \cite[p.134]{hsia_indefinite_1998}) is $\frak a$, which is inert in $G/F$, one concludes that $L$ and $M$ are exactly the representatives of the two proper classes in $\mathrm{gen}(M)$ by \cite[Theorem 3.1]{hsia_indefinite_1998} and \cite[82:4]{omeara_quadratic_1963}.
The quadratic form corresponding to $M$ being $2xy+\sqrt{-14}y^2+z^2$, the identity
\[
2\cdot 3\cdot \sqrt{-14}+\sqrt{-14}\cdot (\sqrt{-14})^2+(4+\sqrt{-14})^2=2
\]
shows that $M$ represents $2$ over ${\mathcal O}_F$. Since $2$ is a spinor exception for $\mathrm{gen}(M)$, one concludes that $L$ cannot represent $2$.
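(The identity above corresponds to the substitution $x=3$, $y=\sqrt{-14}$, $z=4+\sqrt{-14}$ in $2xy+\sqrt{-14}y^2+z^2$; indeed $6\sqrt{-14}-14\sqrt{-14}+\big(2+8\sqrt{-14}\big)=2$.)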
Since $\frak a^{-1}=(\frac{1+\sqrt{-14}}{3}, 3)$, one can make the following substitution
$$ \begin{cases} u= \frac{1}{3} (1+ \sqrt{-14}) x+ 3 y \\
v = 2 x + (1-\sqrt{-14}) y \end{cases} $$
with $u, v\in \frak a^{-1} x+ \frak a y$. Then $\frak a^{-1} x+ \frak a y= {\mathcal O}_F u + {\mathcal O}_F v$ by \cite[81:8]{omeara_quadratic_1963}. The corresponding quadratic form of $L$ is
$$(2+11 \sqrt{-14})x^2+(106+6\sqrt{-14}) xy + (32-17\sqrt{-14}) y^2 + z^2 . $$
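This can be verified directly from $Q(x)=0$, $Q(y)=\sqrt{-14}$ and $B(x, y)=1$:
$$ Q(u)=2\cdot \tfrac{1}{3}(1+ \sqrt{-14})\cdot 3+9\sqrt{-14}=2+11\sqrt{-14}, \ \ \ Q(v)=4(1-\sqrt{-14})+(1-\sqrt{-14})^2\sqrt{-14}=32-17\sqrt{-14} $$
and
$$ 2B(u, v)= 2\Big(\tfrac{1}{3}(1+\sqrt{-14})(1-\sqrt{-14})+6\Big)+ 6(1-\sqrt{-14})\sqrt{-14}=22+84+6\sqrt{-14}=106+6\sqrt{-14} . $$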
Let $L_m= (\frak a^{-1} x+ \frak a y) {\mathfrak p}erp \mathcal O_F m z \subseteq L $ with the corresponding quadratic form $$ (2+11 \sqrt{-14})x^2+(106+6\sqrt{-14}) xy + (32-17\sqrt{-14}) y^2 + m^2 z^2 . $$ Then $L_m$ cannot represent $2$ either. On the other hand, $(L_m)_\frak p$ is 1-universal over ${\mathcal O}_\frak p$ for all $\frak p\in \Omega_F\setminus \infty_F$ by \cite[Proposition 2.3 and Proposition 2.17]{xu_indefinite_2020} as desired.
\end{proof}
\
\noindent \emph{Acknowledgments}. Part of the paper comes from discussions during our visit to Xi'an Jiaotong University in July 2021. We would like to thank Prof.\;Ping Xi for the invitation and kind hospitality. Zilong He and Yong Hu are supported by the National Natural Science Foundation of China (grant no.\,12171223). Fei Xu is supported by the National Natural Science Foundation of China (grant no.\,11631009).
\
\
Contact information of the authors:
\
Zilong HE
Department of Mathematics
Southern University of Science and Technology
Shenzhen 518055, China
Email: [email protected]
\
Yong HU
Department of Mathematics
Southern University of Science and Technology
Shenzhen 518055, China
Email: [email protected]
\
Fei XU
School of Mathematical Science
Capital Normal University
Beijing 100048, China
Email: [email protected]
\
\end{document}
\begin{document}
\titleformat{\section}
{\Large\scshape\centering\bf}{\thesection}{1em}{}
\titleformat{\subsection}
{\large\scshape\bf}{\thesubsection}{1em}{}
\title{\normalsize{\uppercase{\bf{Restriction of 3D arithmetic Laplace eigenfunctions to a plane}}}}
\begin{abstract}
We consider a random Gaussian ensemble of Laplace eigenfunctions on the 3D torus, and investigate the 1-dimensional Hausdorff measure (`length') of nodal intersections against a smooth 2-dimensional toral sub-manifold (`surface'). The expected length is universally proportional to the area of the reference surface, times the wavenumber, independent of the geometry.
For surfaces contained in a plane, we give an upper bound for the nodal intersection length variance, depending on the arithmetic properties of the plane. The bound is established via estimates on the number of lattice points in specific regions of the sphere.
\end{abstract}
{\bf Keywords:} nodal intersections, arithmetic random waves, lattice points on spheres, Gaussian random fields, Kac-Rice formulas.
\\
{\bf MSC(2010):} 11P21, 60G15.
\section{Introduction}
\subsection{Nodal sets for eigenfunctions of the Helmholtz equation}
Let $\Delta_\mathcal{M}$ be the Laplace-Beltrami operator, or Laplacian for short, on a smooth manifold $\mathcal{M}$ of dimension $d$. With motivation coming from physics and PDEs, one is interested in eigenfunctions $G$ of the Helmholtz equation
\begin{equation*}
(\Delta_\mathcal{M}+E)G=0
\end{equation*}
with eigenvalue (or `energy' in the physics terminology) $E>0$, in the high energy limit $E\to\infty$.
Of particular importance is the {\bf nodal set} (zero-locus) of $G$,
\begin{equation}
\label{nodset}
\mathcal{A}_G:=\{x\in\mathcal{M} : G(x)=0\}.
\end{equation}
Its study dates back to Hooke's and Chladni's pioneering work (17th-18th century). There is a wide range of scientific applications including telecommunications \cite{rice44}, oceanography \cite{longue, azawsc}, and photography \cite{swerli}.
It is known that $\mathcal{A}_G$ is a smooth sub-manifold of dimension $d-1$ except for a set of lower dimension \cite[Theorem 2.2]{cheng1}. For $d=2$, we call $\mathcal{A}_G$ the {\bf nodal line}, and for $d=3$, the {\bf nodal surface}.
Our setting is the three-dimensional standard flat torus $\mathcal{M}=\mathbb{T}^3=\mathbb{R}^3/\mathbb{Z}^3$. Here the Laplace eigenvalues (`energy levels') are of the form $4\pi^2 m$, $m\in S_3$, where
\begin{equation*}
S_3:=\{0<m: \ m=a_1^2+a_2^2+a_3^2,\ a_i\in\mathbb{Z}\}.
\end{equation*}
The frequencies
\begin{equation}
\label{Lambda}
\Lambda_m=\{\lambda\in\mathbb{Z}^3 : \|\lambda\|^2=m\}
\end{equation}
are the lattice points on $\sqrt{m}\mathcal{S}^2$, the sphere of radius $\sqrt{m}$. The (complex-valued) Laplace eigenfunctions may be written as \cite{brnoda}
\begin{equation}
\label{lapeig}
G(x)=G_m(x)=\sum_{\lambda\in\Lambda}
c_{\lambda}
e^{2\pi i\langle\lambda,x\rangle},
\qquad x\in\mathbb{T}^3,
\end{equation}
with $c_\lambda$ Fourier coefficients.
The dimension of the eigenspace equals the number of lattice points, i.e., the number of ways to express $m$ as a sum of three integer squares
\begin{equation}
\label{N}
N:=|\Lambda|=r_3(m).
\end{equation}
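For instance, for $m=2$ the frequencies $\Lambda_2$ are the $12$ vectors obtained from $(\pm 1,\pm 1, 0)$ by permuting the coordinates, so that $N=r_3(2)=12$.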
In what follows we will always make the (natural) assumption $m\not\equiv 0,4,7 \pmod 8$, implying
\begin{equation}
\label{totnumlp3}
(\sqrt{m})^{1-\epsilon}
\ll
N
\ll
(\sqrt{m})^{1+\epsilon}
\end{equation}
for all $\epsilon>0$ \cite[\S 1]{bosaru} and in particular $N\to\infty$. This assumption is natural in the sense that if $m\equiv 7 \pmod 8$ then $m\not\in S_3$, while multiplying $m$ by $4$ just rescales the frequency set \cite[\S 1.3]{ruwiye}. Further details on the structure of $\Lambda_m$ may be found in section \ref{seclp}.
\subsection{Nodal intersections}
One insightful approach to the study of the nodal set is given by its restriction to a fixed sub-manifold in the ambient $\mathcal{M}$, the so-called {\bf nodal intersections}. The recent papers \cite{totzel,cantot,elhtot} analyse nodal intersections on `generic' surfaces (i.e. $d=2$) against a curve. Unless the curve is contained in the nodal line, the intersection is a set of points. It is expected that in many situations, the number of nodal intersections obeys the bound $\ll\sqrt{E}$, where $E>0$ is the eigenvalue.
The nodal set of $G_m$ \eqref{lapeig} is a nodal surface on $\mathbb{T}^3$. We consider the restriction of $G_m$ to a fixed smooth $2$-dimensional sub-manifold $\Pi\subset\mathbb{T}^3$, and specifically the {\bf nodal intersection length}
\begin{equation*}
h_1(\mathcal{A}_G \cap \Pi)
\end{equation*}
where $h_1$ is $1$-dimensional Hausdorff measure, in the high energy limit $m\to\infty$. Bourgain and Rudnick found that, for $\Pi$ real-analytic, with nowhere zero Gauss-Kronecker curvature, there exists $m_\Pi$ such that for every $m\geq m_\Pi$, the surface $\Pi$ is not contained in the nodal set of any eigenfunction $G_m$ \cite[Theorem 1.2]{brnoda}. Moreover, one has the upper bound
\begin{equation}
\label{BRthm}
h_1(\mathcal{A}_G \cap \Pi)<C_\Pi\cdot\sqrt{m}
\end{equation}
for some constant $C_\Pi$ \cite[Theorem 1.1]{brgafa}, and for every eigenfunction $G_m$ the nodal intersection is non-empty \cite[Theorem 1.3]{brgafa}.
\subsection{The arithmetic waves}
\label{secarw}
The eigenvalue multiplicities allow us to randomise our setting as follows. We will be working with an ensemble of {\em random} Gaussian Laplace toral eigenfunctions (`arithmetic waves' for short \cite{orruwi, rudwi2, krkuwi})
\begin{equation}
\label{arw}
F(x)=F_m(x)=\frac{1}{\sqrt{N}}
\sum_{\lambda\in\Lambda}
a_{\lambda}
e^{2\pi i\langle\lambda,x\rangle},
\qquad x\in\mathbb{T}^3,
\end{equation}
of eigenvalue $4\pi^2m$, where $a_{\lambda}$ are complex standard Gaussian random variables \footnote{Defined on some probability space $(\Omega,\mathcal{F},\mathbb{P})$, where $\mathbb{E}$ denotes
the expectation with respect to $\mathbb{P}$.} (i.e., one has $\mathbb{E}[a_{\lambda}]=0$ and $\mathbb{E}[|a_{\lambda}|^2]=1$), independent save for the relations $a_{-\lambda}=\overline{a_{\lambda}}$ (so that $F(x)$ is real valued). The total area of the nodal surface of $F$ was studied in \cite{benmaf,cammar}. The arithmetic wave \eqref{arw} may be analogously defined on the $d$-dimensional torus $\mathbb{R}^d/\mathbb{Z}^d$. Several recent papers investigate the nodal volume \cite{rudwi2, krkuwi} and nodal intersections of arithmetic waves against a fixed {\em curve} \cite{rudwig, maff2d, roswig, ruwiye, maff3d}.
\subsection{Restriction to a surface of nowhere vanishing Gauss-Kronecker curvature}
\label{prior}
In \cite{maff18} we considered the {\bf nodal intersection length}, i.e. the random variable
\begin{equation}
\label{L}
\mathcal{L}=\mathcal{L}_m:=h_1(\mathcal{A}_{F_m}\cap\Pi)
\end{equation}
where $\Pi$ is a smooth $2$-dimensional sub-manifold of $\mathbb{T}^3$, possibly with boundary, admitting a smooth normal vector locally.
The expected intersection length is $\mathbb{E}[\mathcal{L}]=\sqrt{m}A\pi/\sqrt{3}$, where $A$ is the total area of $\Pi$ \cite[Proposition 1.2]{maff18}. This expectation is independent of the geometry, and is consistent with \eqref{BRthm}.
The main result of \cite{maff18} is the precise asymptotic of the nodal intersection length variance, against surfaces of nowhere vanishing Gauss-Kronecker curvature \cite[Theorem 1.3]{maff18}
\begin{equation}
\label{var}
\text{Var}(\mathcal{L})=\frac{\pi^2}{60}\frac{m}{N}\left[3\mathcal{I}-A^2+O\left(m^{-1/28+o(1)}\right)\right]
\end{equation}
where
\begin{equation*}
\mathcal{I}=\mathcal{I}_\Pi:=\iint_{\Pi^2}\langle\overrightarrow{n}(p),\overrightarrow{n}(p')\rangle^2dpdp'
\end{equation*}
and $\overrightarrow{n}(p)$ is the unit normal vector to $\Pi$ at the point $p$.
In this paper, we consider the {\bf other extreme} of the nowhere vanishing curvature scenario, namely, the case where $\Pi$ is {\bf contained in a plane}. The above result for the expected intersection length is valid in this case also. The integral $\mathcal{I}$ satisfies the sharp bounds \cite[Proposition 1.4]{maff18}
\begin{equation*}
\frac{A^2}{3}\leq\mathcal{I}\leq A^2,
\end{equation*}
so that the leading coefficient of \eqref{var} is always non-negative and bounded, though it may vanish, for instance when $\Pi$ is a sphere or a hemisphere \footnote{There are also (several) other examples of these so-called `static' surfaces. To establish the variance asymptotic for these seems to be a difficult problem.}: in this case the variance is of lower order than $m/N$. This behaviour is similar to the two-dimensional case \cite{rudwig, roswig}.
The theoretical maximum of the variance asymptotic is achieved in the case of intersection with a surface contained in a plane. Although this case is excluded by the assumptions of \eqref{var}, it is natural to conjecture $\text{Var}(\mathcal{L})\sim A^2m/N\cdot\pi^2/30$ for $\Pi$ confined to a plane.
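To illustrate the two extremes of the bounds on $\mathcal{I}$ (an elementary check, included here only for orientation): if $\Pi$ is contained in a plane, then $\overrightarrow{n}(p)$ is constant, so
\begin{equation*}
\mathcal{I}=\iint_{\Pi^2}\langle\overrightarrow{n}(p),\overrightarrow{n}(p')\rangle^2\,dp\,dp'=A^2,
\end{equation*}
and the leading coefficient of \eqref{var} formally becomes $\frac{\pi^2}{60}\cdot 2A^2=\frac{\pi^2}{30}A^2$, which explains the constant in the conjecture above. If instead $\Pi$ is a round sphere, then by symmetry $\frac{1}{A}\int_{\Pi}\langle\overrightarrow{n}(p),v\rangle^2\,dp=\frac{1}{3}$ for every unit vector $v$, whence $\mathcal{I}=A^2/3$ and the leading coefficient vanishes.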
\subsection{Main results}
\label{secresults}
Let $\Pi$ be a smooth $2$-dimensional sub-manifold of $\mathbb{T}^3$ contained in a plane. We denote $\overrightarrow{n}$ the unit normal vector to this plane. We distinguish between vectors/planes of the following three types, possibly after relabelling the coordinates and assuming w.l.o.g. that $n_1\neq 0$:
\begin{comment}
Given $0\neq\alpha\in\mathbb{R}^3$, we assume w.l.o.g. that $\alpha_1\neq 0$. We distinguish between vectors satisfying
\begin{align}
\label{qq}
\tag{i}
{\alpha_2}/{\alpha_1}\in\mathbb{Q} \quad&\text{and}\quad {\alpha_3}/{\alpha_1}\in\mathbb{Q};
\\
\label{qr}
\tag{ii}
{\alpha_2}/{\alpha_1}\in\mathbb{Q} \quad&\text{and}\quad {\alpha_3}/{\alpha_1}\in\mathbb{R}\setminus\mathbb{Q};
\\
\label{rr}
\tag{iii}
{\alpha_2}/{\alpha_1}\in\mathbb{R}\setminus\mathbb{Q} \quad&\text{and}\quad {\alpha_3}/{\alpha_1}\in\mathbb{R}\setminus\mathbb{Q};
\end{align}
and call them vectors of type \eqref{qq}, \eqref{qr} and \eqref{rr} respectively. If $\xi,\eta$ in \eqref{gammapl} are both of type \eqref{qq}, we say that $\Pi$ is a `rational' plane, otherwise that it is `irrational'. Recall the definition of $\kappa$ \eqref{kappa}.
\end{comment}
\begin{align}
\label{qq}
\tag{i}
{n_2}/{n_1}\in\mathbb{Q} \quad&\text{and}\quad {n_3}/{n_1}\in\mathbb{Q};
\\
\label{qr}
\tag{ii}
{n_2}/{n_1}\in\mathbb{Q} \quad&\text{and}\quad {n_3}/{n_1}\in\mathbb{R}\setminus\mathbb{Q};
\\
\label{rr}
\tag{iii}
{n_2}/{n_1}\in\mathbb{R}\setminus\mathbb{Q} \quad&\text{and}\quad {n_3}/{n_1}\in\mathbb{R}\setminus\mathbb{Q}.
\end{align}
Vectors/planes of type \eqref{qq} will also be called `rational', and the remaining types `irrational'. This terminology is borrowed from \cite{maff3d}.
As in {\cite[\S 2.3]{brgafa}} we will denote $\kappa(R)$ the maximal number of lattice points in the intersection of $R\mathcal{S}^{2}$ and any plane. The upper bound
\begin{equation}
\label{kappa3bound}
\kappa(R)\ll R^\epsilon, \quad \forall\epsilon>0
\end{equation}
is due to Jarn\'{\i}k \cite{jarnik}, see also \cite[(2.6)]{brgafa}.
\begin{thm}
\label{thmpl}
Let $\Pi$ be a smooth $2$-dimensional sub-manifold of $\mathbb{T}^3$ contained in a plane.
\begin{enumerate}[label=(\arabic*)]
\item
\label{thmpl1}
If the plane is rational, then the nodal intersection length variance satisfies the bound
\begin{equation}
\label{varplr}
\text{Var}(\mathcal{L})\ll_{\Pi}\frac{m}{N}\cdot\kappa(\sqrt{m}).
\end{equation}
\item
\label{thmpl2}
Moreover, for irrational planes we have
\begin{equation}
\label{varpli}
\text{Var}(\mathcal{L})\ll_{\Pi}\frac{m}{N}\cdot N^{a+\epsilon}
\end{equation}
for any positive $\epsilon$ where we may take:
\begin{comment}
\begin{enumerate}
\item
$a=3/7$ if $\xi$ is of type \eqref{qq} and $\eta$ of type \eqref{qr};
\item
$a=5/9$ if $\xi$ is of type \eqref{qq} and $\eta$ of type \eqref{rr};
\item
$a=3/4$ if neither of $\xi,\eta$ are of type \eqref{qq}.
\end{enumerate}
\end{comment}
\begin{enumerate}[label=(\Alph*)]
\item
$a=3/7$ for planes of type \eqref{qr};
\item
$a=3/4$ for planes of type \eqref{rr}.
\end{enumerate}
\end{enumerate}
\end{thm}
Theorem \ref{thmpl} will be proven in section \ref{secpl}. Taking into account \eqref{kappa3bound}, the bound \eqref{varplr} is within a factor $m^{\epsilon}$ of the conjectured order $m/N$. Similarly to \cite{rudwig, ruwiye, maff18}, the above results on expectation and variance have the following consequence.
\begin{thm}
Let $\Pi$ be a smooth $2$-dimensional sub-manifold of $\mathbb{T}^3$ contained in a plane, of total area $A$. Then the nodal intersection length $\mathcal{L}$ satisfies, for all $\epsilon>0$,
\begin{equation*}
\lim_{\substack{m\to\infty \\ m\not\equiv 0,4,7 \pmod 8}}\mathbb{P}\left(\left|\frac{\mathcal{L}}{\sqrt{m}}-\frac{\pi}{\sqrt{3}}A\right|>\epsilon\right)=0.
\end{equation*}
\end{thm}
\begin{proof}
By \cite[Proposition 1.2]{maff18} we have $\mathbb{E}[\mathcal{L}]=\frac{\pi}{\sqrt{3}}A\sqrt{m}$, so the Chebyshev--Markov inequality gives
\begin{equation*}
\mathbb{P}\left(\left|\frac{\mathcal{L}}{\sqrt{m}}-\frac{\pi}{\sqrt{3}}A\right|>\epsilon\right)\leq\frac{\text{Var}(\mathcal{L})}{\epsilon^2 m},
\end{equation*}
which tends to $0$ as $m\to\infty$ along $m\not\equiv 0,4,7 \pmod 8$, by Theorem \ref{thmpl} together with the lower bound $N\gg m^{1/2-o(1)}$ \eqref{totnumlp3}.
\end{proof}
Furthermore, one may improve on Theorem \ref{thmpl} conditionally on the following conjecture.
\begin{conj}[Bourgain and Rudnick {\cite[\S 2.2]{brgafa}}]
\label{brgafaconj}
Let $\chi(R,s)$ be the maximal number of lattice points in a cap of radius $s$ of the sphere $R\mathcal{S}^2$. Then for all $\epsilon>0$ and $s<R^{1-\delta}$,
\begin{gather*}
\chi(R,s)
\ll
R^\epsilon\left(1+\frac{s^2}{R}\right)
\end{gather*}
as $R\to\infty$.
\end{conj}
We have the following conditional improvement for planes of type \eqref{rr}.
\begin{thm}
\label{thmc}
Let $\Pi$ be a smooth $2$-dimensional sub-manifold of $\mathbb{T}^3$ contained in a plane. Assuming Conjecture \ref{brgafaconj}, we have for every $\epsilon>0$
\begin{equation}
\label{varplc}
\text{Var}(\mathcal{L})\ll_{\Pi}\frac{m}{N}\cdot N^{1/2+\epsilon}.
\end{equation}
\end{thm}
\noindent
Theorem \ref{thmc} will be proven in section \ref{secpl}.
\subsection{Outline of proofs and plan of the paper}
\label{secout}
The arithmetic random wave $F$ \eqref{arw} is a {\em random field}. For a smooth random field $P:T\subset_{\text{open}}\mathbb{R}^d\to\mathbb{R}^{d'}$, denote $\mathcal{V}$ the Hausdorff measure of its nodal set. For instance when $d=3$ and $d'=1$ then $\mathcal{V}$ is the nodal area. Only the case $d\geq d'$ is interesting, since otherwise the zero set of $P$ is a.s. \footnote{The expression `almost surely', or for short `a.s.', means `with probability $1$'.} empty. Under appropriate assumptions, the moments of $\mathcal{V}$ may be computed via Kac-Rice formulas \cite[Theorems 6.8 and 6.9]{azawsc}. These formulas, however, do not apply to our situation \cite[Example 1.6]{maff18} (except in the very special case of the plane containing $\Pi$ being parallel to one of the coordinate planes). To resolve this issue, in \cite{maff18} we derived Kac-Rice formulas for a random field defined on a {\em surface}, and thus computed $\mathbb{E}[\mathcal{L}]$.
Via an {\bf approximate Kac-Rice formula} \cite[Proposition 1.7]{maff18}, for surfaces of nowhere vanishing Gauss-Kronecker curvature, the problem of computing the nodal intersection length variance \eqref{var} was reduced to estimating the second moment of the {\em covariance function}
\begin{equation}
\label{rintro}
r(p,p'):=\mathbb{E}[F(p)F(p')]
\end{equation}
and of its various first and second order derivatives. The error term in \eqref{var} comes from bounding the fourth moment of $r$ and of its derivatives.
For $\Pi$ confined to a plane, we wish to prove the upper bounds in Theorem \ref{thmpl}. An {\bf approximate Kac-Rice bound} will then suffice, similarly to \cite{maff2d, ruwiye, maff3d}.
\begin{prop}[Approximate Kac-Rice bound]
\label{approxKRpl}
Let $\Pi$ be a smooth $2$-dimensional sub-manifold of $\mathbb{T}^3$ contained in a plane. Then we have
\begin{equation}
\label{varpl1}
\text{Var}(\mathcal{L})\ll m\iint_{\Pi^2}
\left(
r^2
+\frac{D\Omega D^T}{m}
+\frac{tr(H\Omega H\Omega)}{m^2}
\right)
dpdp'
\end{equation}
where $D(p,p'),H(p,p'),\Omega$ are appropriate vectors and matrices, depending on $r(p,p')$, its derivatives, and $\Pi$ \footnote{See \cite[Definition 3.3]{maff18}.}.
\end{prop}
Proposition \ref{approxKRpl} will be proven in section \ref{secappkr}. The problem of bounding the variance of $\mathcal{L}$ is thus reduced to estimating the second moment of the covariance function $r$ and its various first and second order derivatives. This, in turn, requires estimates for the number of lattice points in specific regions of the sphere $\sqrt{m}\mathcal{S}^2$, covered in section \ref{capseg}.
There are marked differences compared to the case of generic surfaces: first, if $\Pi$ is contained in a plane of unit normal $\overrightarrow{n}=(n_1,n_2,n_3)$, it admits everywhere the parametrisation
\begin{align}
\notag
\gamma:U\subset\mathbb{R}^2&\to\Pi,
\\
\label{gammapl}
(u,v)&\mapsto(P+u\xi+v\eta),
\end{align}
where $P\in\Pi$ and $\{\overrightarrow{n},\xi,\eta\}$ is an orthonormal basis of $\mathbb{R}^3$ \cite[\S 2.5, Example 1]{docarm}. Then the covariance function \eqref{rintro} has the special form
\begin{equation}
\label{rplane}
r((u,v),(u',v'))
=\frac{1}{N}\sum_{\lambda\in\Lambda} e^{2\pi i\langle\lambda,(u'-u)\xi+(v'-v)\eta\rangle},
\end{equation}
depending only on the difference $(u',v')-(u,v)$: the random field $f(u,v):=F(\gamma(u,v))$ is {\em stationary}\footnote{In particular we may assume w.l.o.g. that $P$ is the origin.}, in sharp contrast with the case of generic surfaces. This eventually leads to a method, different from that of \cite{maff18}, for controlling the second moment, and specifically the off-diagonal terms. Indeed, in our previous paper the off-diagonal terms are handled via a generalisation of Van der Corput's lemma to higher dimensions \cite[Proposition 5.4]{maff18}, applicable to surfaces $\Pi$ of nowhere vanishing Gauss-Kronecker curvature. On the other hand, if $\Pi$ is confined to a plane, the special form \eqref{rplane} of the covariance function allows us to establish the estimates \eqref{min} directly, leading to an arithmetic problem different from the one arising for generic surfaces.
Similarly to \cite{maff2d,maff3d} (nodal intersections against a straight line in two and three dimensions), in the linear case the variance upper bounds depend on the arithmetic properties of the line/plane. In Theorem \ref{thmpl}, the upper bound is stronger in the case of rational planes, and the bound for planes of type \eqref{qr} is stronger than for those of type \eqref{rr}, again similar to \cite{maff2d,maff3d}. This situation occurs because the bounds rely on estimates for lattice points in specific regions of the sphere: when
\begin{equation*}
\frac{n_3}{n_1}, \ \frac{n_3}{n_2}
\end{equation*}
are irrational numbers, the lattice point estimates are derived using simultaneous Diophantine approximation, so that the bound for the variance is stronger when the number of irrationals to approximate is smaller \cite[\S 8]{maff3d}.
\section{Kac-Rice bound: Proof of Proposition \ref{approxKRpl}}
\label{secappkr}
\subsection{Setup}
We fix a smooth $2$-dimensional sub-manifold $\Pi$ of $\mathbb{T}^3$ confined to a plane, denoting the unit normal $\overrightarrow{n}=(n_1,n_2,n_3)$. Then w.l.o.g. $\Pi$ admits everywhere the parametrisation (cf. \eqref{gammapl})
\begin{align}
\notag
\gamma:[0,A]\times[0,B]\subset\mathbb{R}^2&\to\Pi,
\\
\label{gammaplagain}
(u,v)&\mapsto p=u\xi+v\eta,
\end{align}
where $\{\overrightarrow{n},\xi,\eta\}$ is an orthonormal basis of $\mathbb{R}^3$,
\begin{equation}
\label{AB}
A:=\max\{u: u\xi+v\eta\in\Pi\},
\quad
\text{ and }
\quad
B:=\max\{v: u\xi+v\eta\in\Pi\}.
\end{equation}
Later we will choose (assuming w.l.o.g. that $n_1\neq 0$)
\begin{equation}
\label{xieta}
\xi=\frac{(n_2,-n_1,0)}{\sqrt{n_1^2+n_2^2}}, \qquad\qquad \eta=\frac{(n_1n_3,n_2n_3,-n_1^2-n_2^2)}{\sqrt{n_1^2+n_2^2}}.
\end{equation}
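One may verify directly (an elementary check, included for completeness and using $n_1^2+n_2^2+n_3^2=1$) that this is a legitimate choice:
\begin{equation*}
|\xi|^2=\frac{n_1^2+n_2^2}{n_1^2+n_2^2}=1,
\qquad
|\eta|^2=\frac{(n_1^2+n_2^2)n_3^2+(n_1^2+n_2^2)^2}{n_1^2+n_2^2}=n_1^2+n_2^2+n_3^2=1,
\end{equation*}
while $\langle\xi,\eta\rangle=\frac{n_1n_2n_3-n_1n_2n_3}{n_1^2+n_2^2}=0$ and $\langle\overrightarrow{n},\xi\rangle=\langle\overrightarrow{n},\eta\rangle=0$, so that $\{\overrightarrow{n},\xi,\eta\}$ is indeed an orthonormal basis of $\mathbb{R}^3$.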
We now introduce some necessary notation for the derivatives of the covariance function $r$ \eqref{rplane}.
\begin{defin}
\label{thedef}
Define the row vector $D:=\nabla r$,
\begin{equation*}
D((u,v),(u',v'))=\frac{2\pi i}{N}
\sum_{\lambda\in\Lambda} e^{2\pi i\langle\lambda, (u'-u)\xi+(v'-v)\eta\rangle}\cdot\lambda
\end{equation*}
\begin{comment}
Denote
\begin{equation*}
r_u:=\frac{\partial r}{\partial u},
\qquad
r_v:=\frac{\partial r}{\partial v},
\qquad
r_{u'}:=\frac{\partial r}{\partial u'},
\qquad
r_{v'}:=\frac{\partial r}{\partial v'}
\end{equation*}
the first derivatives of the covariance function \eqref{r}, noting that $r_u(u,v,u',v')=r_{u'}(u',v',u,v)$ and so forth. Define the vector
\begin{equation*}
D=D(u,v,u',v'):=\begin{pmatrix}r_u & r_v\end{pmatrix},
\end{equation*}
and $D'=D'(u,v,u',v')=D(u',v',u,v)$,
and moreover the matrix of the second mixed derivatives
\begin{equation*}
H=H(u,v,u',v'):=
\begin{pmatrix}
r_{uu'} & r_{uv'} \\ r_{u'v} & r_{vv'}
\end{pmatrix}
\end{equation*}
and $H'=H^T(u,v,u',v')=H(u',v',u,v)$.
\end{comment}
and the Hessian matrix $H:=\text{Hess}(r)$,
\begin{equation*}
H((u,v),(u',v'))=-\frac{4\pi^2}{N}
\sum_{\lambda\in\Lambda} e^{2\pi i\langle\lambda, (u'-u)\xi+(v'-v)\eta\rangle}\cdot\lambda^T\lambda.
\end{equation*}
We also introduce the matrix (the orthogonal projection onto $\mathrm{span}\{\xi,\eta\}$, namely $I_3-\overrightarrow{n}^{\,T}\overrightarrow{n}$ with $\overrightarrow{n}$ viewed as a row vector)
\begin{equation*}
\Omega:=\begin{pmatrix}
n_2^2+n_3^2 & -n_1n_2 & -n_1n_3
\\ -n_1n_2 & n_1^2+n_3^2 & -n_2n_3
\\ -n_1n_3 & -n_2n_3 & n_1^2+n_2^2
\end{pmatrix}.
\end{equation*}
\begin{comment}
We will denote $\Omega'=\Omega(p')$
\begin{equation}
\label{Q}
Q(\sigma)=
\frac{1}{n_3^2+n_3}\cdot
\begin{pmatrix}
n_1^2+n_3^2+n_3 & n_1n_2 \\ n_1n_2 & n_2^2+n_3^2+n_3
\end{pmatrix}=Q^T.
\end{equation}
with the shorthand $Q':=Q(\sigma')$
\begin{equation}
\label{matL}
L:=\begin{pmatrix}
1 & 0
\\ 0 & 1
\\ 0 & 0
\end{pmatrix}.
\end{equation}
\begin{align*}
& X:=X(\sigma,\sigma')=-
\frac{1}{(1-r^2)M}QL^T\Omega D^TD\Omega LQ,
\\
& X':=X(\sigma',\sigma),
\\
& Y:=Y(\sigma,\sigma')=-\frac{1}{M}\left[QL^T\Omega\left(H+\frac{r}{1-r^2}D^TD\right)\Omega'LQ'\right],
\\
& Y':=Y(\sigma',\sigma),
\end{align*}
\end{comment}
\end{defin}
\subsection{Proof of Proposition \ref{approxKRpl}}
We adapt, with some modifications, the proof of \cite[Proposition 1.7]{maff18}.
With the notation of the parametrisation \eqref{gammaplagain}, consider the rectangle $U$ with vertices the origin, $A\xi$, $B\eta$, and $A\xi+B\eta$. We partition it (with boundary overlaps) into small squares $U_j$ of side length $\delta\asymp 1/\sqrt{m}$.\footnote{To be precise, we need $\delta\sqrt{2}<c_0/\sqrt{4\pi m/3}$, with $c_0$ as in \cite[Lemma 3.8]{maff18}.} Writing $\Pi_j:=\Pi\cap U_j$, we denote
\begin{equation*}
\mathcal{L}_{j}:=h_1(\mathcal{A}_F \cap\Pi_j)
\end{equation*}
recalling the notations $\mathcal{A}_F$ \eqref{nodset} for the nodal set and $h_1$ for Hausdorff measure. Then for \eqref{L} one has a.s.
\begin{equation*}
\mathcal{L}=\sum_j\mathcal{L}_{j}.
\end{equation*}
It follows that
\begin{equation}
\label{snspre}
\text{Var}(\mathcal{L})
=
\sum_{i,j}\text{Cov}(\mathcal{L}_i,\mathcal{L}_j).
\end{equation}
The set $\Pi^2$ is thus partitioned (with boundary overlaps) into regions $\Pi_i\times\Pi_j=:V_{i,j}$. We call the region $V_{i,j}$ {\em singular} if there are points $p\in \Pi_i$ and $p'\in \Pi_j$ s.t. $|r(p,p')|>1/2$. The union of all singular regions is the {\em singular set} $S$. It was proven in \cite[Lemma 3.12]{maff18} that
\begin{equation}
\label{Sb}
\text{meas}(S)\ll\iint_{\Pi^2}r^2(p,p')dpdp'.
\end{equation}
We separate the summation \eqref{snspre} over singular and non-singular regions:
\begin{equation}
\label{sns}
\text{Var}(\mathcal{L})
=
\sum_{V_{i,j} \text{ non-sing}} \text{Cov}(\mathcal{L}_i,\mathcal{L}_j)
+
\sum_{V_{i,j} \text{ sing}} \text{Cov}(\mathcal{L}_i,\mathcal{L}_j).
\end{equation}
In \cite[\S 3.4]{maff18} we showed the uniform bound
\begin{equation*}
\text{Cov}(\mathcal{L}_i,\mathcal{L}_j)\ll\frac{1}{m}
\end{equation*}
hence
\begin{equation}
\label{bdsingpl}
\bigg|\sum_{V_{i,j} \text{ sing}} \text{Cov}(\mathcal{L}_i,\mathcal{L}_j)\bigg|
\ll m\iint_{\Pi^2}r^2(p,p')dpdp'
\end{equation}
via \eqref{Sb}.
For non-singular regions, Kac-Rice formulae yield (see \cite[(3.19), \S 5.2, and \S 5.3]{maff18})
\begin{equation}
\label{covpl}
\text{Cov}(\mathcal{L}_i,\mathcal{L}_j)
\ll
m\iint_{V_{i,j}}
\left(
r^2
+\frac{D\Omega D^T}{m}
+\frac{tr(H\Omega H\Omega)}{m^2}
\right)
dpdp'
\end{equation}
with $D,H,\Omega$ as in Definition \ref{thedef}. We substitute \eqref{covpl} and \eqref{bdsingpl} into \eqref{sns}, and extend the domain of integration to the whole of $\Pi^2$ via another application of \eqref{Sb}. The proof of Proposition \ref{approxKRpl} is thus complete.
\section{Lattice points on spheres}
\label{seclp}
\subsection{Background}
To estimate the second moment of the covariance function $r$ and of its derivatives (the RHS of \eqref{varpl1}), we will need several considerations on lattice points on spheres $\sqrt{m}\mathcal{S}^2$. An integer $m$ is representable as a sum of three squares if and only if it is not of the form $4^l(8k+7)$, for $k,l$ non-negative integers \cite{harwri, daven1}. Recall the notation \eqref{N} $N:=|\Lambda|=r_3(m)$ for the number of such representations. Under the natural assumption $m\not\equiv 0,4,7 \pmod 8$ one has \eqref{totnumlp3}
\begin{equation*}
(\sqrt{m})^{1-\epsilon}
\ll
N
\ll
(\sqrt{m})^{1+\epsilon}.
\end{equation*}
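For concreteness, the following brute-force sketch (ours, purely illustrative and not used anywhere in the arguments; the function name and the sample values of $m$ are arbitrary choices) enumerates the frequency set and prints $N=r_3(m)$; values $m$ of the form $4^l(8k+7)$, such as $7$ and $28$, indeed yield $N=0$.
\begin{verbatim}
from itertools import product

def lattice_points(m):
    # all lambda in Z^3 with |lambda|^2 = m (brute force)
    B = int(m ** 0.5) + 1
    return [v for v in product(range(-B, B + 1), repeat=3)
            if v[0]**2 + v[1]**2 + v[2]**2 == m]

for m in [2, 3, 5, 11, 1010, 7, 28]:
    print(m, m % 8, len(lattice_points(m)))   # N = r_3(m)
\end{verbatim}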
Subtle questions about the distribution of $\Lambda/\sqrt{m}$ on the unit sphere as $m\to\infty$ are of independent interest in number theory. The limiting equidistribution of the lattice points was conjectured and proved conditionally by Linnik, and subsequently proven unconditionally \cite{duke88,dukesp,golfom}. The finer statistics of $\Lambda/\sqrt{m}$ on shrinking sets have recently been investigated by Bourgain, Rudnick and Sarnak \cite{bosaru,bsr016}.
\begin{prop}[{\cite[Theorem 1.1]{bsr016}}]
\label{asyriesz}
Fix $0<s<2$. Suppose $m\to\infty$, $m\not\equiv 0,4,7 \pmod 8$. There is some $\delta>0$ so that
\begin{equation*}
\sum_{\lambda\neq\lambda'}\frac{m^{s/2}}{|\lambda-\lambda'|^{s}}=\frac{2^{1-s}}{2-s}\cdot N^2+O(N^{2-\delta}).
\end{equation*}
\end{prop}
\begin{comment}
Given a sphere $\mathfrak{C}\subset\mathbb{R}^3$ and a point $P\in\mathfrak{C}$, we define the {\em spherical cap} $\mathcal{T}$ centred at $P$ to be the intersection of $\mathfrak{C}$ with the ball $\mathcal{B}_s(P)$ of radius $s$ centred at $P$. We will call $s$ the {\em radius of the cap}.
\begin{defin}
\label{projlpdef}
Given an integer $m$ expressible as a sum of three squares, define
\begin{equation}
\label{projlp}
\widehat{\mathcal{E}}_m:=\mathcal{E}_m/\sqrt{m}\subset\mathcal{S}^{2}
\end{equation}
to be the projection of the set of lattice points $\mathcal{E}_m$ \eqref{E} on the unit sphere (cf. \cite[(1.5)]{bosaru} and \cite[(4.3)]{ruwiye}).
\end{defin}
Linnik conjectured (and proved under GRH) that the projected lattice points $\widehat{\mathcal{E}}_m$ become equidistributed as $m\to\infty$, $m\not\equiv 0,4,7 \pmod 8$. This result was proven unconditionally by Duke \cite{duke88,dukesp} and by Golubeva-Fomenko \cite{golfom} following a breakthrough by Iwaniec \cite{iwniec}. As a consequence, one may approximate a summation over the lattice point set by an integral over the unit sphere.
\begin{lemma}[cf. {\cite[Lemma 8]{pascbo}}]
\label{equidlemma}
Let $g(z)$ be a $C^2$-smooth function on $\mathcal{S}^2$. For $m\to\infty$, $m\not\equiv 0,4,7 \pmod 8$, we have
\begin{equation*}
\frac{1}{N}\sum_{\mu\in\mathcal{E}}g\left(\frac{\mu}{|\mu|}\right)
=
\int_{z\in\mathcal{S}^2}g(z)\frac{dz}{4\pi}
+
O_g\left(\frac{1}{m^{1/28-\epsilon}}\right).
\end{equation*}
\end{lemma}
Define the probability measures
\begin{equation}
\label{tau}
\tau_m=\frac{1}{N}\sum_{\mu\in\mathcal{E}}\delta_{\mu/|\mu|}
\end{equation}
on the unit sphere, where $\delta_x$ is the Dirac delta function at $x$. By the equidistribution of lattice points on spheres, the $\tau_m$ converge weak-* \footnote{The statement $\nu_i\Rightarrow\nu$ means that, for every smooth bounded test function $g$, one has $\int gd\nu_i\to\int gd\nu$.} to the uniform measure on the unit sphere:
\begin{equation}
\label{weak}
\tau_m\Rightarrow\frac{\sin(\varphi)d\varphi d\psi}{4\pi},
\end{equation}
as $m\to\infty$, $m\not\equiv 0,4,7 \pmod 8$.
\begin{defin}
For $s>0$, the \textbf{Riesz $s$-energy} of $n$ (distinct) points $P_1,\dots,P_n$ on $\mathcal{S}^2$ is defined as
\begin{equation*}
E_s(P_1,\dots,P_n):=\sum_{i\neq j}\frac{1}{|P_i-P_j|^s}.
\end{equation*}
\end{defin}
\noindent
Bourgain, Sarnak and Rudnick computed the following precise asymptotics for the Riesz $s$-energy of the projected lattice points $\widehat{\mathcal{E}}_m\subset\mathcal{S}^2$ \eqref{projlp}.
\begin{prop}[{\cite[Theorem 1.1]{bosaru}, \cite[Theorem 4.1]{ruwiye}}]
Fix $0<s<2$. Suppose $m\to\infty$, $m\not\equiv 0,4,7 \pmod 8$. There is some $\delta>0$ so that
\begin{equation*}
E_s(\widehat{\mathcal{E}}_m)=I(s)\cdot N^2+O(N^{2-\delta})
\end{equation*}
where
\begin{equation*}
I(s)=\frac{2^{1-s}}{2-s}.
\end{equation*}
\end{prop}
\end{comment}
\subsection{Lattice points in spherical caps and segments}
\label{capseg}
In the present subsection, we collect several bounds for lattice points in certain regions of the sphere. For a more detailed account, see e.g. \cite[\S 2]{brgafa} (spherical caps) and \cite[\S\S 5,6,8]{maff3d} (spherical segments).
\begin{defin}[{\cite[Definition 4.1]{maff3d}}]
\label{defcap}
Given a sphere $\mathfrak{S}$ in $\mathbb{R}^3$ with centre $O$ and radius $R$, and a point $P\in\mathfrak{S}$, we define the \textbf{spherical cap} $\mathcal{T}$ to be the intersection of $\mathfrak{S}$ with the ball $\mathcal{B}_s(P)$ of radius $s$ centred at $P$. We will call $s$ the \textbf{radius of the cap}, and the unit vector $\alpha:=\overrightarrow{OP}/R$ the \textbf{direction} of $\mathcal{T}$.
The intersection of $\mathfrak{S}$ with the boundary of $\mathcal{B}_s(P)$ is a circle, called the \textbf{base} of $\mathcal{T}$, and the {\bf radius of the base} will be denoted $k$. Let $Q,Q'$ be two points on the base which are diametrically opposite (note $\overline{PQ}=\overline{PQ'}=s$): we define the \textbf{opening angle} of $\mathcal{T}$ to be $\theta=\widehat{QOQ'}$. The \textbf{height} $h$ of $\mathcal{T}$ is the distance between the point $P$ and the plane of the base.
\end{defin}
We will be considering the sphere of radius
\begin{equation*}
R=\sqrt{m}.
\end{equation*}
If $s$, $h$, $k$ and $\theta$ denote the radius, height, radius of the base, and opening angle of $\mathcal{T}$ respectively, then geometric considerations give us the relations $0\leq s\leq 2R$, $0\leq h\leq 2R$, $0\leq k\leq R$, $0\leq \theta\leq \pi$, and
\begin{equation}
\label{shR}
s^2=2Rh.
\end{equation}
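As a brief justification (elementary; included only for the reader's convenience), \eqref{shR} follows from Pythagoras' theorem applied twice: the base circle lies at distance $R-h$ from the centre, so
\begin{equation*}
k^2=R^2-(R-h)^2=2Rh-h^2, \qquad s^2=k^2+h^2=2Rh.
\end{equation*}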
Let us introduce the notation
\begin{equation}
\label{chi(R,s)}
\chi(R,s)
=\max_\mathcal{T}\#\{\lambda\in\mathbb{Z}^3\cap \mathcal{T}\}
\end{equation}
for the maximal number of lattice points contained in any spherical cap $\mathcal{T}\subset R\mathcal{S}^2$ of radius $s$.
\begin{lemma}[Bourgain and Rudnick {\cite[Lemma 2.1]{brgafa}}]
\label{lemmachi}
We have for all $\epsilon>0$,
\begin{equation*}
\chi(R,s)
\ll
R^\epsilon\left(1+\frac{s^2}{R^{1/2}}\right)
\end{equation*}
as $R\to\infty$.
\end{lemma}
Compare Lemma \ref{lemmachi} with Conjecture \ref{brgafaconj}. We now introduce another particular region of the sphere, the segment (sometimes called `slab' or `annulus').
\begin{defin}
\label{defseg}
Given a sphere $\mathfrak{S}$ in $\mathbb{R}^3$ with centre $O$ and radius $R$, and two parallel planes $\Pi_1,\Pi_2$, we call \textbf{spherical segment} $\Gamma$ the region of the sphere delimited by $\Pi_1,\Pi_2$. The two \textbf{bases} of $\Gamma$ are the circles $\mathfrak{S}\cap\Pi_1$ and $\mathfrak{S}\cap\Pi_2$: we always assume the latter to be the larger. We define the \textbf{height} $h$ of the spherical segment to be the distance between $\Pi_1$ and $\Pi_2$. We will denote $k$ the \textbf{radius of the larger base}.
Moreover, let $\mathfrak{C}$ be a great circle of the sphere $\mathfrak{S}$, lying on a plane perpendicular to $\Pi_1$ and $\Pi_2$. Denote $\{A,B\}:=\mathfrak{S}\cap\Pi_1\cap\mathfrak{C}$ and $\{C,D\}:=\mathfrak{S}\cap\Pi_2\cap\mathfrak{C}$. We define the \textbf{opening angle} of $\Gamma$ to be $\theta=\widehat{AOC}+\widehat{BOD}=2\cdot\widehat{AOC}$. The \textbf{direction} of the spherical segment is the unit vector $\alpha$ that is the common direction of the two spherical caps $\mathcal{T}_1,\mathcal{T}_2$ satisfying
$
\Gamma=\mathcal{T}_2\setminus \mathcal{T}_1.
$
\end{defin}
A cap is thus a special case of a segment. It will be convenient to always assume a spherical segment $\Gamma$ to be contained in a hemisphere, so that any two of $h,k,\theta$ completely determine $\Gamma$. We always have $0\leq h\leq R$, $0\leq k\leq R$, $0\leq \theta\leq \pi$ and the relation \cite[Lemma 5.3]{maff3d}
\begin{equation}
\label{kth}
k\theta\ll h
\end{equation}
as $R\to\infty$.
Next, we state two lemmas of \cite{maff3d} which will be needed later.
\begin{lemma}[{\cite[Lemma 9.1]{maff3d}}]
\label{twoparpla}
Given $0<c<R$, fix a point $P\in R\mathcal{S}^2$, and let $\alpha$ be a unit vector. Then all points $P'\in R\mathcal{S}^2$ satisfying $|\langle P-P',\alpha\rangle|\leq c$ lie on the same spherical segment, of height (at most) $2c$ and direction $\alpha$ on $R\mathcal{S}^2$.
\end{lemma}
\begin{lemma}[{\cite[Lemma 7.1]{maff3d}}]
\label{cone}
Let $c=c(R)>0$, with $c\to 0$ as $R\to\infty$. Fix a point $P\in R\mathcal{S}^2$, and let $\alpha$ be a unit vector. Then all points $P'\in R\mathcal{S}^2$ satisfying $|\langle P-P',\alpha\rangle|\leq c|P-P'|$ lie: either on the same spherical segment, of opening angle $8c+O(c^3)$ and direction $\alpha$; or on the same spherical cap, of radius $\ll cR$ and direction $\alpha$, on $R\mathcal{S}^2$.
\end{lemma}
In \cite{maff3d} we found several upper bounds for the maximal number of lattice points belonging to a spherical segment $\Gamma$ of the sphere $R\mathcal{S}^2$,
\begin{equation}
\label{psi}
\psi=\psi(R,h,k,\theta):=\max_\Gamma\#\{\lambda\in\mathbb{Z}^3\cap \Gamma\},
\end{equation}
with $h,k,\theta$ as in Definition \ref{defseg}. Here we collect some of these bounds for convenience. Recall that $\kappa$ denotes the maximal number of spherical lattice points in a plane, and the types \eqref{qq}, \eqref{qr}, \eqref{rr} of vectors/planes defined in section \ref{secresults}.
\begin{prop}
Let $\Gamma\subset R\mathcal{S}^2$ be a spherical segment of opening angle $\theta$, height $h$, radius of larger base $k$, and direction $\alpha$. Then the number of lattice points lying on $\Gamma$ satisfies for every $\epsilon>0$:
\begin{enumerate}[label=(\arabic*)]
\item
if $\alpha$ is of type \eqref{qq},
\begin{equation}
\label{psi1}
\psi\ll_{\alpha} R^\epsilon\cdot(1+h);
\end{equation}
\item
if $\alpha$ is of type \eqref{qr} or \eqref{rr},
\begin{equation}
\label{psi2}
\psi\ll_{\alpha} R^{1/2+\epsilon}\cdot(R^{1/4}+h);
\end{equation}
\item
if $\alpha$ is of type \eqref{qr},
\begin{equation}
\label{psi3}
\psi\ll_{\alpha} \kappa(R)(1+R\cdot\theta^{1/2});
\end{equation}
\item
if $\alpha$ is of type \eqref{rr},
\begin{equation}
\label{psi4}
\psi\ll_{\alpha} \kappa(R)(1+R\cdot\theta^{1/3}).
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
The bound \eqref{psi1} was proven in \cite[Proposition 6.3]{maff3d} (see also Yesha \cite[Lemma A.1]{yesh13}). We now show that \eqref{psi2} follows directly from \cite{maff3d}. Applying \cite[Proposition 5.4]{maff3d} with the parameter denoted $\Omega$ there (not to be confused with the matrix of Definition \ref{thedef}) equal to $R^{1/4}$,
\begin{equation*}
\psi\ll
\chi(R,R^{1/4})
\cdot
\left\lceil\frac{k}{R^{1/4}}\right\rceil
\cdot
\left\lceil R^{3/4}\theta\right\rceil
\end{equation*}
so that, by Lemma \ref{lemmachi},
\begin{equation*}
\psi\ll
R^{\epsilon}
\cdot
\left(
1+\frac{k}{R^{1/4}}+R^{3/4}\theta+R^{1/2}k\theta
\right).
\end{equation*}
Since $0\leq k\leq R$, $0\leq \theta\leq \pi$ and $k\theta\ll h$ \eqref{kth}, we obtain \eqref{psi2}. The bounds \eqref{psi3} and \eqref{psi4} were shown in \cite[Proposition 8.3]{maff3d} and \cite[Proposition 6.2]{maff3d} respectively.
\end{proof}
\section{Proofs of Theorems \ref{thmpl} and \ref{thmc}}
\label{secpl}
\subsection{The bounds for the variance}
In this section, we prove Theorem \ref{thmpl}. We commence by further reducing our problem of bounding the variance to estimating a summation over the lattice points on the sphere. Recall the notations $\Lambda$ of the frequency set \eqref{Lambda}, $A,B\in\mathbb{R}^+$ \eqref{AB}, and vectors/matrices $D,H,\Omega$ (Definition \ref{thedef}).
\begin{lemma}
\label{2mompl}
Let $\Pi$ be a $2$-dimensional toral sub-manifold confined to a plane. Then
\begin{equation}
\label{rint}
\iint_{\Pi^2}
\left(
r^2
+\frac{D\Omega D^T}{m}
+\frac{tr(H\Omega H\Omega)}{m^2}
\right)dpdp'
\ll_\Pi\frac{1}{N}+\frac{\mathcal{G}}{N^2},
\end{equation}
where
\begin{equation}
\label{g}
\mathcal{G}=\mathcal{G}_{m,\Pi}:=\sum_{\substack{\lambda,\lambda'\in\Lambda_m\\\lambda\neq\lambda'}}
\left|\int_{0}^{A}\int_{0}^{B}e^{2\pi i\langle\lambda-\lambda',u\xi+v\eta\rangle}dudv\right|^2.
\end{equation}
\end{lemma}
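As a purely illustrative aside (our own numerical sketch; none of the estimates below rely on it), the quantity $\mathcal{G}$ can be evaluated directly for small $m$, since the double integral in \eqref{g} factors into one-dimensional integrals with the explicit value $\left|\int_0^A e^{2\pi iut}\,du\right|=A\,|\operatorname{sinc}(At)|$, where $\operatorname{sinc}(x)=\sin(\pi x)/(\pi x)$. The script below uses the choice of $\xi,\eta$ from \eqref{xieta}; the normal vectors, the value $m=1010$ and the side lengths $A=B=1$ are arbitrary choices made only for the example.
\begin{verbatim}
import numpy as np
from itertools import product

def lattice_points(m):
    B = int(m ** 0.5) + 1
    return np.array([v for v in product(range(-B, B + 1), repeat=3)
                     if v[0]**2 + v[1]**2 + v[2]**2 == m], dtype=float)

def G(m, n, A=1.0, B=1.0):
    n = np.asarray(n, dtype=float); n /= np.linalg.norm(n)
    s = np.hypot(n[0], n[1])
    xi = np.array([n[1], -n[0], 0.0]) / s
    eta = np.array([n[0]*n[2], n[1]*n[2], -(n[0]**2 + n[1]**2)]) / s
    L = lattice_points(m)
    d = L[:, None, :] - L[None, :, :]                  # lambda - lambda'
    val = (A*np.sinc(A*(d @ xi)))**2 * (B*np.sinc(B*(d @ eta)))**2
    np.fill_diagonal(val, 0.0)                         # drop lambda = lambda'
    return val.sum()

for n in [(1.0, 0.0, 0.0), (1.0, 2**0.5, 3**0.5)]:     # rational vs irrational
    print(n, G(1010, n))
\end{verbatim}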
The proof of Lemma \ref{2mompl} is relegated to appendix \ref{appa}. Assuming it, we deduce the following bound for the nodal intersection length variance.
\begin{cor}
Let $\Pi$ be a $2$-dimensional toral sub-manifold confined to a plane. Then
\begin{equation}
\label{varpl2}
\text{Var}(\mathcal{L})\ll_\Pi\frac{m}{N}+\frac{m}{N^2}\cdot\mathcal{G}.
\end{equation}
\end{cor}
\begin{proof}
One substitutes the estimate \eqref{rint} into the approximate Kac-Rice bound \eqref{varpl1}.
\end{proof}
In the following two lemmas we bound $\mathcal{G}$, thereby completing the proof of Theorem \ref{thmpl}. Recall that we distinguish between planes of three types, according to the unit normal $\overrightarrow{n}$ satisfying:
\begin{align}
\tag{i}
{n_2}/{n_1}\in\mathbb{Q} \quad&\text{and}\quad {n_3}/{n_1}\in\mathbb{Q};
\\
\tag{ii}
{n_2}/{n_1}\in\mathbb{Q} \quad&\text{and}\quad {n_3}/{n_1}\in\mathbb{R}\setminus\mathbb{Q};
\\
\tag{iii}
{n_2}/{n_1}\in\mathbb{R}\setminus\mathbb{Q} \quad&\text{and}\quad {n_3}/{n_1}\in\mathbb{R}\setminus\mathbb{Q}.
\end{align}
Recall further that $\kappa$ denotes the maximal number of spherical lattice points lying on a plane.
\begin{lemma}
\label{gratpl}
Let $\Pi$ be a $2$-dimensional toral sub-manifold confined to a {\em rational} plane. Then we have
\begin{equation}
\label{kbound}
\mathcal{G}
\ll_\Pi N\cdot\kappa(\sqrt{m}).
\end{equation}
\end{lemma}
Lemma \ref{gratpl} will be proven in section \ref{secratpl}. For irrational planes, we have the following.
\begin{lemma}
\label{girrpl}
For every $\epsilon>0$, one has
\begin{equation}
\label{abound}
\mathcal{G}
\ll_\Pi N^{1+a+\epsilon}
\end{equation}
where we may take:
\begin{enumerate}[label=(\Alph*)]
\item
\label{37}
$a=3/7$ if $\overrightarrow{n}$ is of type \eqref{qr};
\item
\label{34}
$a=3/4$ if $\overrightarrow{n}$ is of type \eqref{rr};
\item
\label{12}
$a=1/2$ conditionally on Conjecture \ref{brgafaconj}.
\end{enumerate}
\end{lemma}
Lemma \ref{girrpl} will be proven in sections \ref{secirrpl} and \ref{secc}. Assuming these two lemmas, we may complete the proofs of our main theorems.
\begin{proof}[Proof of Theorems \ref{thmpl} and \ref{thmc} assuming Lemmas \ref{gratpl} and \ref{girrpl}]
One substitutes \eqref{kbound} into \eqref{varpl2} to obtain \eqref{varplr}. One substitutes \eqref{abound} into \eqref{varpl2} to obtain \eqref{varpli} and \eqref{varplc}.
\end{proof}
\subsection{Rational planes}
\label{secratpl}
In this subsection we prove Lemma \ref{gratpl}. We will need a preparatory result, the proof of which will follow in appendix \ref{appa}.
\begin{lemma}
\label{trineq}
Let $\xi,\eta\in\mathbb{R}^3$, satisfying
\begin{equation*}
\langle\lambda-\lambda',\xi\rangle\cdot\langle\lambda-\lambda',\eta\rangle\neq 0.
\end{equation*}
Then
\begin{equation}
\label{min}
\left|\int_{0}^{A}\int_{0}^{B}e^{2\pi i\langle\lambda-\lambda',u\xi+v\eta\rangle}dudv\right|^2
\ll\min\left(1,\frac{1}{\langle\lambda-\lambda',\xi\rangle^2\langle\lambda-\lambda',\eta\rangle^2}\right).
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{gratpl} assuming Lemma \ref{trineq}]
We factor the double integral in \eqref{g} and split the summation
\begin{equation*}
\mathcal{G}
=\sum_{\lambda\neq\lambda'}\left|\int_{0}^{A}e^{2\pi iu\langle\lambda-\lambda',\xi\rangle}du\right|^2\cdot\left|\int_{0}^{B}e^{2\pi iv\langle\lambda-\lambda',\eta\rangle}dv\right|^2
\end{equation*}
over the set of pairs $(\lambda,\lambda')$ s.t. $\langle\lambda-\lambda',\xi\rangle\cdot\langle\lambda-\lambda',\eta\rangle\neq 0$ and its complement. Thanks to the bounds \eqref{min} of Lemma \ref{trineq},
\begin{align}
\label{ratsum}
\notag
\mathcal{G}\ll_{\Pi}
&\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\xi\rangle|=0 \ \vee \ |\langle\lambda-\lambda',\eta\rangle|=0\}
\\&+\sum_{\langle\lambda-\lambda',\xi\rangle\cdot\langle\lambda-\lambda',\eta\rangle\neq 0}\frac{1}{\langle\lambda-\lambda',\xi\rangle^2\langle\lambda-\lambda',\eta\rangle^2}.
\end{align}
We claim that there are few pairs $(\lambda,\lambda')$ satisfying $\langle\lambda-\lambda',\xi\rangle=0$. Indeed, once we fix $\lambda$, the lattice point $\lambda'$ is confined to the plane
\begin{equation}
\label{plane}
\langle\xi,(x,y,z)\rangle=l,
\end{equation}
where $l:=\langle\lambda,\xi\rangle\in\mathbb{R}$. By definition of $\kappa$, there are at most $\kappa(\sqrt{m})$ solutions $(x,y,z)\in\Lambda$ to \eqref{plane}. Therefore,
\begin{equation}
\label{ratsum1}
\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\xi\rangle|=0\}
=\sum_{\lambda\in\Lambda}
\#\{\lambda': \ \langle\lambda',\xi\rangle=\langle\lambda,\xi\rangle\}
\leq N\cdot\kappa(\sqrt{m}).
\end{equation}
Similarly, there are few pairs $(\lambda,\lambda')$ such that $\langle\lambda-\lambda',\eta\rangle=0$.
We turn to bounding the summation in \eqref{ratsum}. By assumption, $\overrightarrow{n}$ is of type \eqref{qq}. With $\xi,\eta$ as in \eqref{xieta}, both $\xi$ and $\eta$ are also of type \eqref{qq}, hence we may write $\xi=c\tilde{\xi}$ and $\eta=c'\tilde{\eta}$, where $\tilde{\xi},\tilde{\eta}\in\mathbb{Z}^3$ and $c,c'$ are nonzero real numbers. Therefore,
\begin{multline*}
\sum_{\langle\lambda-\lambda',\xi\rangle\cdot\langle\lambda-\lambda',\eta\rangle\neq 0}\frac{1}{\langle\lambda-\lambda',\xi\rangle^2\langle\lambda-\lambda',\eta\rangle^2}
\\\ll_\Pi\sum_{\lambda}\sum_{a\neq 0}\sum_{b\neq 0}\frac{1}{a^2}\frac{1}{b^2}
\cdot\#\{\lambda':\langle \tilde{\xi},\lambda'\rangle=a\in\mathbb{Z} \ \wedge \ \langle \tilde{\eta},\lambda'\rangle=b\in\mathbb{Z}\}.
\end{multline*}
For fixed $a,b$, the lattice point $\lambda'$ is confined to the intersection of the two planes
\begin{equation*}
\langle\tilde{\xi},\lambda'\rangle=a \qquad \text{ and } \qquad \langle\tilde{\eta},\lambda'\rangle=b.
\end{equation*}
Since $\tilde{\xi}\perp\tilde{\eta}$, these two planes intersect in a line, hence the number of solutions $\lambda'\in\Lambda$ cannot exceed two. It follows that
\begin{gather}
\label{ratsum2}
\sum_{\langle\lambda-\lambda',\xi\rangle\cdot\langle\lambda-\lambda',\eta\rangle\neq 0}\frac{1}{\langle\lambda-\lambda',\xi\rangle^2\langle\lambda-\lambda',\eta\rangle^2}
\ll N.
\end{gather}
Substituting \eqref{ratsum1} and \eqref{ratsum2} into \eqref{ratsum} yields \eqref{kbound}.
\end{proof}
\subsection{Irrational planes}
\label{secirrpl}
In the present subsection we prove Lemma \ref{girrpl} parts \ref{37} and \ref{34}, using the bounds for lattice points in spherical caps and segments of section \ref{capseg}. We introduce the parameters $c=c(N),\rho=\rho(N)>0$ and consider the three regimes
\begin{itemize}
\item
first regime: $|\langle\lambda-\lambda',\xi\rangle|\leq c$;
\item
second regime:
$|\langle\lambda-\lambda',\eta\rangle|\leq \rho|\lambda-\lambda'|$;
\item
third regime: $|\langle\lambda-\lambda',\xi\rangle|\geq c$, $|\langle\lambda-\lambda',\eta\rangle|\geq \rho|\lambda-\lambda'|$.
\end{itemize}
We apply the bounds \eqref{min} of Lemma \ref{trineq} to obtain
\begin{multline}
\label{irrsum}
\mathcal{G}\ll_{\Pi}
\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\xi\rangle|\leq c\}
+\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\eta\rangle|\leq \rho|\lambda-\lambda'|\}
\\+\sum_{\substack{
|\langle\lambda-\lambda',\xi\rangle|\geq c
\\
|\langle\lambda-\lambda',\eta\rangle|\geq \rho|\lambda-\lambda'|
}}\frac{1}{\langle\lambda-\lambda',\xi\rangle^2\langle\lambda-\lambda',\eta\rangle^2}.
\end{multline}
\begin{enumerate}[label=(\Alph*), listparindent=\the\parindent]
\item
\label{parta}
Let $\overrightarrow{n}$ be of type \eqref{qr}. With $\xi,\eta$ as in \eqref{xieta}, the vector $\xi$ is of type \eqref{qq} and $\eta$ is of type \eqref{qr}.
\underline{First regime}. Once we fix $\lambda$, the lattice points $\lambda'$ satisfying
\begin{equation*}
|\langle\lambda-\lambda',\xi\rangle|\leq c
\end{equation*}
lie on a spherical segment $\Gamma_\lambda$ of height at most $2c$ and direction $\xi$ (see Lemma \ref{twoparpla}). As $\xi$ is of type \eqref{qq}, we may apply \eqref{psi1}:
\begin{equation}
\label{1reg}
\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\xi\rangle|\leq c\}
\ll N R^\epsilon\left(1+c\right).
\end{equation}
\underline{Second regime}. Once we fix $\lambda$, the lattice points $\lambda'$ satisfying
\begin{equation*}
|\langle\lambda-\lambda',\eta\rangle|\leq \rho|\lambda-\lambda'|
\end{equation*}
lie on a spherical segment $\Gamma_{\lambda}$ of opening angle $8\rho+O(\rho^3)$ and direction $\eta$, or on a spherical cap $\mathcal{T}_\lambda$ of radius $\ll\rho R$ and direction $\eta$, on $R\mathcal{S}^2$ (see Lemma \ref{cone}). Later we will choose $\rho=N^{-8/7}$, so that $\rho R=o(1)$ and the number of lattice points in the cap $\mathcal{T}_\lambda$ is $\ll R^{\epsilon}$. To control the lattice points in each $\Gamma_{\lambda}$, as $\eta$ is of type \eqref{qr}, we may apply \eqref{psi3}:
\begin{equation}
\label{2reg}
\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\eta\rangle|\leq \rho\cdot|\lambda-\lambda'|\}
\ll N R^\epsilon(1+R\rho^{1/2}).
\end{equation}
\underline{Third regime}. Here we have
\begin{equation}
\label{3reg}
\sum\frac{1}{\langle\lambda-\lambda',\xi\rangle^2\langle\lambda-\lambda',\eta\rangle^2}
\leq\frac{1}{c^2\rho^2}\sum\frac{1}{|\lambda-\lambda'|^{2-\epsilon'}}\ll\frac{m^\epsilon}{c^2\rho^2}
\end{equation}
via an application of Proposition \ref{asyriesz} with $s=2-\epsilon'$, recalling that $|\lambda-\lambda'|\geq 1$ for distinct lattice points and that $N^2\ll m^{1+\epsilon}$. Substituting the estimates \eqref{1reg}, \eqref{2reg} and \eqref{3reg} into \eqref{irrsum}, we obtain
\begin{equation*}
\mathcal{G}
\ll_\Pi N R^\epsilon\left(1+c\right)+N R^\epsilon(1+R\rho^{1/2})+\frac{m^\epsilon}{c^2\rho^2}.
\end{equation*}
The choice of parameters $(c,\rho)=(N^{3/7},N^{-8/7})$, which balances the three terms (see the remark following this enumeration), yields \eqref{abound} with $a=3/7$.
\begin{comment}
\item
Now let $\xi$ be of type \eqref{qq} and $\eta$ of type \eqref{rr}. We proceed as in part 1, except that in the second regime we apply \eqref{psi4} in place of \eqref{psi3}:
\begin{equation*}
\mathcal{G}
\ll N R^\epsilon\left(1+c\right)+N R^\epsilon(1+R\rho^{1/3})+\frac{1}{c^2\rho^2}.
\end{equation*}
The optimal choice of parameters $(c,\rho)=(N^{5/9},N^{-4/3})$ yields \eqref{abound} with $a=5/9$.
\end{comment}
\item
In case $\overrightarrow{n}$ is of type \eqref{rr}, with $\xi,\eta$ as in \eqref{xieta} the vector $\xi$ is of type \eqref{qr} and $\eta$ is of type \eqref{rr}. Exchanging the names of $\xi$ and $\eta$\footnote{Alternatively, one could swap the roles of $\xi,\eta$ when defining the three regimes.}, we may assume that $\xi$ is of type \eqref{rr} and $\eta$ of type \eqref{qr}. We modify the proof of part \ref{parta} in the following way. In the first regime, by Lemma \ref{twoparpla} and \eqref{psi2},
\begin{equation*}
\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\xi\rangle|\leq c\}
\ll N R^{1/2+\epsilon}\cdot(R^{1/4}+c).
\end{equation*}
In the second regime, the lattice points in the cap $\mathcal{T}_\lambda$ of radius $\ll\rho R$ have the upper bound $R^\epsilon(1+\rho^2 R^{3/2})$ (Lemma \ref{lemmachi}), while those in each segment $\Gamma_\lambda$ are no more than $R^\epsilon(1+R\rho^{1/2})$ \eqref{psi3}. It follows that
\begin{multline*}
\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\eta\rangle|\leq \rho|\lambda-\lambda'|\}
\\\ll N R^\epsilon(1+\rho^2 R^{3/2})
+N R^\epsilon(1+R\rho^{1/2}).
\end{multline*}
Choosing e.g. $(c,\rho)=(N^{1/14},N^{-6/7})$, we obtain the bound
\begin{equation*}
\mathcal{G}
\ll_\Pi N R^\epsilon(1+\rho^2 R^{3/2}+R\rho^{1/2})+N R^{1/2+\epsilon}(R^{1/4}+c)+\frac{1}{c^2\rho^2}\ll N^{7/4+\epsilon}
\end{equation*}
proving Lemma \ref{girrpl} part \ref{34}.
\begin{comment}
Lastly, if both $\xi,\eta$ are of type \eqref{rr}, we apply \eqref{psi4} instead of \eqref{psi3} to obtain
\begin{equation*}
\mathcal{G}
\ll N R^\epsilon(1+\rho^2 R^{3/2}+R\rho^{1/3})+N R^{1/2+\epsilon}\cdot(R^{1/4}+c)+\frac{1}{c^2\rho^2}\ll N^{7/4+\epsilon}
\end{equation*}
choosing e.g. $(c,\rho)=(N^{1/6},N^{-1})$.
\end{comment}
\end{enumerate}
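\noindent
{\em Remark} (added for the reader's convenience; the computation is elementary). The choices of parameters above are obtained by balancing the three regimes. For instance, in part \ref{parta}, using $R=\sqrt{m}\ll N^{1+\epsilon}$ (valid for $m\not\equiv 0,4,7\pmod 8$), the three terms are of order $Nc$, $NR\rho^{1/2}\ll N^{2+\epsilon}\rho^{1/2}$ and $c^{-2}\rho^{-2}$ respectively, up to factors $N^{O(\epsilon)}$; equating
\begin{equation*}
Nc=N^{2}\rho^{1/2}=\frac{1}{c^{2}\rho^{2}}
\end{equation*}
gives $c=N^{3/7}$, $\rho=N^{-8/7}$ and the common value $N^{10/7}=N^{1+3/7}$, which is the bound \eqref{abound} with $a=3/7$. The choice of parameters in part (B) is found analogously.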
\subsection{Conditional result}
\label{secc}
It remains to show Lemma \ref{girrpl} part \ref{12}. Assuming Conjecture \ref{brgafaconj}, one may improve the bound \eqref{psi2} for lattice points in spherical segments of given height and larger base radius.
\begin{cor}[{\cite[Corollary 5.6]{maff3d}}]
\label{covercapscor2}
Assume Conjecture \ref{brgafaconj}. Let $\Gamma\subset R\mathcal{S}^2$ be a spherical segment of height $h$ and radius of larger base $k$. Then for every $\epsilon>0$,
\begin{equation}
\label{psi5}
\psi\ll R^{\epsilon}
\cdot
(R^{1/2}+h).
\end{equation}
\end{cor}
We introduce the parameters $c=c(N),c'=c'(N)>0$ and consider the three regimes
\begin{itemize}
\item
first regime: $|\langle\lambda-\lambda',\xi\rangle|\leq c$;
\item
second regime:
$|\langle\lambda-\lambda',\eta\rangle|\leq c'$;
\item
third regime: $|\langle\lambda-\lambda',\xi\rangle|\geq c$, $|\langle\lambda-\lambda',\eta\rangle|\geq c'$.
\end{itemize}
We apply the bounds \eqref{min} of Lemma \ref{trineq} to obtain
\begin{multline}
\label{csum}
\mathcal{G}\ll_{\Pi}
\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\xi\rangle|\leq c\}
+\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\eta\rangle|\leq c'\}
\\+\sum_{\substack{
|\langle\lambda-\lambda',\xi\rangle|\geq c
\\
|\langle\lambda-\lambda',\eta\rangle|\geq c'
}}\frac{1}{\langle\lambda-\lambda',\xi\rangle^2\langle\lambda-\lambda',\eta\rangle^2}.
\end{multline}
\underline{First regime}. Once we fix $\lambda$, the lattice points $\lambda'$ satisfying
\begin{equation*}
|\langle\lambda-\lambda',\xi\rangle|\leq c
\end{equation*}
lie on a spherical segment $\Gamma_\lambda$ of height at most $2c$ and direction $\xi$ (see Lemma \ref{twoparpla}). By \eqref{psi5},
\begin{equation}
\label{1regc}
\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\xi\rangle|\leq c\}
\ll N R^\epsilon(R^{1/2}+c).
\end{equation}
\underline{Second regime}. Similarly to the first regime,
\begin{equation}
\label{2regc}
\#\{(\lambda,\lambda'): |\langle\lambda-\lambda',\eta\rangle|\leq c'\}
\ll N R^\epsilon(R^{1/2}+c').
\end{equation}
\underline{Third regime}. Here we simply write
\begin{equation}
\label{3regc}
\sum\frac{1}{\langle\lambda-\lambda',\xi\rangle^2\langle\lambda-\lambda',\eta\rangle^2}
\leq\frac{N^2}{c^2c'^2}.
\end{equation}
Collecting the estimates \eqref{1regc}, \eqref{2regc}, \eqref{3regc}, and \eqref{csum}, we obtain
\begin{equation*}
\mathcal{G}
\ll_\Pi N R^\epsilon(R^{1/2}+c+c')+\frac{N^2}{c^2c'^2}\ll N^{3/2+\epsilon},
\end{equation*}
choosing e.g. $c=c'=N^{1/5}$, so that the terms $NR^\epsilon c$, $NR^\epsilon c'$ and $N^2/(c^2c'^2)$ are all $\ll N^{6/5+\epsilon}$, dominated by $NR^{1/2+\epsilon}\ll N^{3/2+\epsilon}$. This completes the proof of Lemma \ref{girrpl} part \ref{12}.
\appendix
\section{Proofs of auxiliary results}
\label{appa}
\noindent
In this appendix, we prove a couple of auxiliary lemmas.
\begin{proof}[Proof of Lemma \ref{2mompl}]
We follow \cite[\S 3 and \S 6]{maff2d} and \cite[\S 3]{maff3d}. Squaring $r$ we obtain
\begin{equation*}
r^2((u,v),(u',v'))
=
\frac{1}{N^2}\sum_{\lambda,\lambda'} e^{2\pi i\langle\lambda-\lambda',(u'-u)\xi+(v'-v)\eta\rangle}
\end{equation*}
and on integrating over $\Pi^2$, the contribution of the diagonal terms to \eqref{rint} is
\begin{equation}
\label{diag}
\frac{1}{N^2}\int_{0}^{A}\int_{0}^{B}\int_{0}^{A}\int_{0}^{B}\sum_{\lambda}1dudvdu'dv'
\ll\frac{1}{N}.
\end{equation}
The off-diagonal terms equal
\begin{align}
\label{od}
\notag
&\int_{([0,A]\times[0,B])^2}\frac{1}{N^2}\sum_{\lambda\neq\lambda'}e^{2\pi i\langle\lambda-\lambda',(u'-u)\xi+(v'-v)\eta\rangle}
dudvdu'dv'
\\&\notag=\frac{1}{N^2}\sum_{\lambda\neq\lambda'}
\int_{0}^{A}\int_{0}^{B}e^{2\pi i\langle\lambda-\lambda',u'\xi+v'\eta\rangle}du'dv'
\int_{0}^{A}\int_{0}^{B}e^{-2\pi i\langle\lambda-\lambda',u\xi+v\eta\rangle}dudv
\\&=\frac{1}{N^2}\sum_{\lambda\neq\lambda'}
\left|\int_{0}^{A}\int_{0}^{B}e^{2\pi i\langle\lambda-\lambda',u\xi+v\eta\rangle}dudv\right|^2=\frac{\mathcal{G}}{N^2}.
\end{align}
By \eqref{diag} and \eqref{od},
\begin{equation*}
\iint_{\Pi^2}
r^2dpdp'
\ll_\Pi\frac{1}{N}+\frac{\mathcal{G}}{N^2}.
\end{equation*}
To complete the proof of \eqref{rint}, by the symmetries it will suffice to show that
\begin{equation}
\label{suffice}
\iint_{\Pi^2}
\left(
\frac{r_u^2}{m}
+\frac{r_{uu'}^2}{m^2}
\right)dpdp'
\ll_\Pi\frac{1}{N}+\frac{\mathcal{G}}{N^2}
\end{equation}
(see Definition \ref{thedef}), where $r_u:=\partial r/\partial u$ and $r_{uu'}:=\partial^2 r/\partial u\,\partial u'$ denote partial derivatives of the covariance function with respect to the parametrisation \eqref{gammaplagain}. One has
\begin{equation*}
r_u=\frac{2\pi i}{N}\sum_{\lambda\in\Lambda}\langle\lambda,\xi\rangle e^{2\pi i\langle\lambda,(u'-u)\xi+(v'-v)\eta\rangle},
\end{equation*}
hence, as required in \eqref{suffice},
\begin{align*}
&\iint_{\Pi^2}\frac{r_u^2}{m}dpdp'
\ll_\Pi\frac{1}{N}+\int_{([0,A]\times[0,B])^2}\frac{1}{N^2}\sum_{\lambda\neq\lambda'}\left\langle\frac{\lambda}{|\lambda|},\xi\right\rangle\left\langle\frac{\lambda'}{|\lambda'|},\xi\right\rangle
\\
&\cdot e^{2\pi i\langle\lambda-\lambda',(u'-u)\xi+(v'-v)\eta\rangle}
dudvdu'dv'
\\&\leq\frac{1}{N}+\int_{([0,A]\times[0,B])^2}\frac{1}{N^2}\sum_{\lambda\neq\lambda'}e^{2\pi i\langle\lambda-\lambda',(u'-u)\xi+(v'-v)\eta\rangle}
dudvdu'dv'
\\&=\frac{1}{N}+\frac{\mathcal{G}}{N^2}
\end{align*}
where in the first inequality we isolated the diagonal terms, and in the second we used $\left|\left\langle\frac{\lambda}{|\lambda|},\xi\right\rangle\right|\leq 1$ (Cauchy--Schwarz). The calculation for the second derivatives is very similar and we omit it here.
\end{proof}
\begin{proof}[Proof of Lemma \ref{trineq}]
\begin{comment}
Recall that
\begin{multline*}
\mathcal{G}:=\sum_{\lambda\neq\lambda'}
\left|\int_{0}^{A}\int_{0}^{B}e^{2\pi i\langle\lambda-\lambda',u\xi+v\eta\rangle}dudv\right|^2
\\=\sum_{\lambda\neq\lambda'}\left|\int_{0}^{A}e^{2\pi iu\langle\lambda-\lambda',\xi\rangle}du\cdot\int_{0}^{B}e^{2\pi iv\langle\lambda-\lambda',\eta\rangle}dv\right|^2.
\end{multline*}
\end{comment}
The first upper bound in \eqref{min} is a straightforward application of the triangle inequality. To show the second bound in \eqref{min}, we integrate and apply the triangle inequality,
\begin{equation*}
\left|\int_{0}^{A} e^{2\pi iu\langle\lambda-\lambda',\xi\rangle}du\right|^2
=\frac{|e^{2\pi i A\langle\lambda-\lambda',\xi\rangle}-1|^2}{4\pi^2\langle\lambda-\lambda',\xi\rangle^2}
\leq\frac{1}{\pi^2}\cdot\frac{1}{\langle\lambda-\lambda',\xi\rangle^2}
\end{equation*}
and similarly for the integral over $[0,B]$. This completes the proof of Lemma \ref{trineq}.
\end{proof}
\begin{comment}
\section{Case of a plane}
\begin{prop}
Let $\Pi\subset\mathbb{T}^3$ be a (finite portion of a) plane. Assume it is delimited by a rectangle and call $P,Q,R,S$ the four corners. Then we have
\begin{equation*}
\mathbb{E}[\mathcal{L}]=\frac{\pi}{\sqrt{3}}\sqrt{m}\cdot \text{Area}(\Pi).
\end{equation*}
\end{prop}
\begin{proof}
Call $a=Q-P$, $b=S-P=R-Q$. We can parametrise $\Pi$ in the following way. Call $L:=|a|>0, M:=|b|>0$.
\begin{gather*}
\gamma: U=[0,L]\times[0,M]\to\mathbb{T}^3,
\\
(u,v)\mapsto P+\frac{a}{L}u+\frac{b}{M}v.
\end{gather*}
It follows that $\gamma_u=\frac{a}{L}$ and $\gamma_v=\frac{b}{M}$, so that the matrix of the First Fundamental Form is the identity:
\begin{equation*}
\begin{pmatrix}
E & F \\ F & G
\end{pmatrix}(u,v)
\equiv
\begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix}.
\end{equation*}
\noindent
By Kac-Rice,
\begin{equation*}
\mathbb{E}[\mathcal{L}]=\int_U K_1(u,v)dudv,
\end{equation*}
\begin{equation*}
K_1(u,v)=\phi_{f(u,v)}(0)\cdot\mathbb{E}[|\nabla f(u,v)|\mid f(u,v)=0],
\end{equation*}
\begin{equation*}
\phi_{f(u,v)}(0)=\frac{1}{\sqrt{2\pi\cdot 1}}\exp(-\frac{0^2}{2})=\frac{1}{\sqrt{2\pi}}.
\end{equation*}
\begin{lemma}
\begin{equation*}
\begin{pmatrix}
Var(f) & Cov(f,f_u) & Cov(f,f_v)
\\
Cov(f,f_u) & Var(f_u) & Cov(f_u,f_v)
\\
Cov(f,f_v) & Cov(f_u,f_v) & Var(f_v)
\end{pmatrix}(u,v)
=
\begin{pmatrix}
1 & 0 & 0
\\
0 & \frac{4}{3}\pi^2 m E & \frac{4}{3}\pi^2 m F
\\
0 & \frac{4}{3}\pi^2 m F & \frac{4}{3}\pi^2 m G
\end{pmatrix}(u,v).
\end{equation*}
\end{lemma}
\begin{proof}
\begin{equation*}
F(x)=\frac{1}{\sqrt{N}}
\sum_{(\mu_1,\mu_2,\mu_3)\in\mathcal{E}}
a_{\mu}
e^{2\pi i\langle\mu,x\rangle}.
\end{equation*}
\begin{equation*}
r_F(x,y)
:=
\mathbb{E}[F(x)\cdot F(y)]
=
\frac{1}{N}\sum_{\mu\in\mathcal{E}} e^{2\pi i\langle\mu,(x-y)\rangle}.
\end{equation*}
\begin{equation*}
f(u,v):=F(\gamma(u,v))=\frac{1}{\sqrt{N}}
\sum_{\mu\in\mathcal{E}}
a_{\mu}
e^{2\pi i\langle\mu,\gamma(u,v)\rangle}.
\end{equation*}
\begin{equation*}
r(u,v,u',v')
=
\frac{1}{N}\sum_{\mu\in\mathcal{E}} e^{2\pi i\langle\mu,\gamma(u,v)-\beta(u',v')\rangle}.
\end{equation*}
\begin{equation*}
\mathbb{E}[f(u,v)\cdot f(u,v)]=r(u,v,u,v)=1.
\end{equation*}
\begin{equation*}
f_u(u,v)=\frac{\partial f}{\partial u}(u,v)=\frac{1}{\sqrt{N}}
\sum_{\mu\in\mathcal{E}}
a_{\mu}
e^{2\pi i\langle\mu,\gamma(u,v)\rangle}\cdot2\pi i\cdot\langle\mu,\gamma_u(u,v)\rangle.
\end{equation*}
\begin{equation*}
\mathbb{E}[f(u,v)\cdot f_u(u,v)]=\frac{\partial}{\partial u}r(u,v,u',v')\Big|_{(u,v,u,v)}.
\end{equation*}
\begin{equation*}
\frac{\partial}{\partial u}r(u,v,u',v')=\langle\nabla r_F(\gamma(u,v)-\beta(u',v')),\gamma_u(u,v)\rangle.
\end{equation*}
\begin{equation*}
\mathbb{E}[f\cdot f_u]=\langle\nabla r_F(\gamma(u,v)-\beta(u',v')),\gamma_u(u,v)\rangle\Big|_{(u,v,u,v)}=\langle\nabla r_F(0),\gamma_u(u,v)\rangle=0.
\end{equation*}
\begin{equation*}
\mathbb{E}[f_u(u,v)\cdot f_u(u,v)]=\frac{\partial^2}{\partial u\partial u'}r(u,v,u',v')\Big|_{(u,v,u,v)}.
\end{equation*}
\begin{equation*}
\frac{\partial^2}{\partial u\partial u'}r(u,v,u',v')=-\gamma_u'(u',v')^TH_{r_F}(\gamma(u,v)-\beta(u',v'))\gamma_u(u,v).
\end{equation*}
\begin{gather*}
\mathbb{E}[f_u\cdot f_u]=-\gamma_u(u,v)^TH_{r_F}(0)\gamma_u(u,v)
\\
=-\gamma_u^T\begin{pmatrix}
-\frac{4}{3}\pi^2 m & 0 & 0 \\ 0 & -\frac{4}{3}\pi^2 m & 0 \\ 0 & 0 & -\frac{4}{3}\pi^2 m
\end{pmatrix}\gamma_u=\frac{4}{3}\pi^2 m\langle\gamma_u,\gamma_u\rangle=:\frac{4}{3}\pi^2 m E(u,v).
\end{gather*}
\begin{gather*}
\mathbb{E}[f_v\cdot f_v]=-\gamma_v(u,v)^TH_{r_F}(0)\gamma_v(u,v)=\frac{4}{3}\pi^2 m\langle\gamma_v,\gamma_v\rangle=:\frac{4}{3}\pi^2 m G(u,v).
\end{gather*}
\begin{gather*}
\mathbb{E}[f_u\cdot f_v]=-\gamma_u(u,v)^TH_{r_F}(0)\gamma_v(u,v)=\frac{4}{3}\pi^2 m\langle\gamma_u,\gamma_v\rangle=:\frac{4}{3}\pi^2 m F(u,v).
\end{gather*}
\end{proof}
\begin{equation*}
\mathbb{E}[|\nabla f(u,v)|\mid f(u,v)=0]=\mathbb{E}[|\nabla f(u,v)|]=\mathbb{E}[\sqrt{f_u^2+f_v^2}].
\end{equation*}
\begin{equation*}
\begin{pmatrix}
E & F \\ F & G
\end{pmatrix}
\equiv
\begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix}
\Rightarrow
(f_u,f_v)\sim N\bigg((0,0),\begin{pmatrix}
\frac{4}{3}\pi^2 m & 0
\\
0 & \frac{4}{3}\pi^2 m
\end{pmatrix}\bigg).
\end{equation*}
\begin{equation*}
(f_u,f_v)=\sqrt{\frac{4}{3}\pi^2 m}\cdot(X,Y),
\end{equation*}
with
\begin{equation*}
(X,Y)\sim N\bigg((0,0),\begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix}\bigg).
\end{equation*}
\begin{equation*}
\mathbb{E}[\sqrt{f_u^2+f_v^2}]=\sqrt{\frac{4}{3}\pi^2 m}\cdot\mathbb{E}[\sqrt{X^2+Y^2}].
\end{equation*}
\begin{equation*}
K_1(u,v)=\frac{1}{\sqrt{2\pi}}\cdot\sqrt{\frac{4}{3}\pi^2 m}\cdot\mathbb{E}[\sqrt{X^2+Y^2}].
\end{equation*}
\begin{gather*}
\mathbb{E}[\sqrt{X^2+Y^2}]
=
\frac{1}{\sqrt{(2\pi)^2\cdot\bigg|\begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix}\bigg|}}\cdot\iint_{\mathbb{R}^2}\sqrt{x^2+y^2}\cdot\exp(
-\frac{1}{2}(
\begin{pmatrix}
x & y
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix}^{-1}
\begin{pmatrix}
x \\ y
\end{pmatrix}
)
)dxdy
\\
=
\frac{1}{2\pi}\cdot\iint_{\mathbb{R}^2}\sqrt{x^2+y^2}\cdot\exp(
-\frac{1}{2}(
x^2+y^2
)
)dxdy
\\
=
\frac{1}{2\pi}\cdot\int_{0}^{2\pi}\int_{\mathbb{R}^{+}}\rho\cdot\exp(
-\frac{\rho^2}{2}
)\cdot\rho \ d\rho \ d\theta.
\end{gather*}
\begin{equation*}
\int_{\mathbb{R}^{+}}\rho^2\cdot\exp(-\frac{\rho^2}{2}) \ d\rho
=
\frac{\sqrt{2}}{2}\sqrt{\pi}.
\end{equation*}
\begin{equation*}
\mathbb{E}[\sqrt{X^2+Y^2}]
=
\frac{1}{2\pi}\cdot\int_{0}^{2\pi}
\frac{\sqrt{2}}{2}\sqrt{\pi}\ d\theta
=
\frac{\sqrt{2}}{2}\sqrt{\pi}.
\end{equation*}
\begin{equation*}
K_1(u,v)\equiv\frac{1}{\sqrt{2\pi}}\cdot\sqrt{\frac{4}{3}\pi^2 m}\cdot\frac{\sqrt{2}}{2}\sqrt{\pi}.
\end{equation*}
\begin{gather*}
\mathbb{E}[\mathcal{L}]=\int_U K_1(u,v)dudv=\int_{0}^{L}\int_{0}^{M}\frac{1}{\sqrt{2\pi}}\cdot\sqrt{\frac{4}{3}\pi^2 m}\cdot\frac{\sqrt{2}}{2}\sqrt{\pi}dudv=\frac{\pi}{\sqrt{3}}\sqrt{m}LM
\\
=\frac{\pi}{\sqrt{3}}\sqrt{m}|a||b|=\frac{\pi}{\sqrt{3}}\sqrt{m}\cdot \text{Area}(\Pi).
\end{gather*}
\end{proof}
\section{Reparametrise the plane}
Let $\Pi\subset\mathbb{T}^3$ be a (finite portion of a) plane. Assume it is delimited by a rectangle and call $P,Q,R,S$ the four corners. Let $a=Q-P$, $b=S-P=R-Q$. We can parametrise $\Pi$ in the following way. Take any $k\in\mathbb{R}^+$. Call $L:=k|a|>0, M:=k|b|>0$.
\begin{gather*}
\gamma: U=[0,L]\times[0,M]\to\mathbb{T}^3
\\
(u,v)\mapsto P+\frac{a}{L}u+\frac{b}{M}v
\end{gather*}
Then
\begin{equation*}
\gamma(L,0)=P+\frac{a}{L}L+\frac{b}{M}0=P+a=Q.
\end{equation*}
We have $\gamma_u=\frac{a}{L}=\frac{1}{k}\frac{a}{|a|}$ and $\gamma_v=\frac{b}{M}=\frac{1}{k}\frac{b}{|b|}$, so that the matrix of the First Fundamental Form is a scalar matrix:
\begin{equation*}
\begin{pmatrix}
E & F \\ F & G
\end{pmatrix}(u,v)
\equiv
\begin{pmatrix}
\frac{1}{k^2} & 0 \\ 0 & \frac{1}{k^2}
\end{pmatrix}.
\end{equation*}
\begin{equation*}
\mathbb{E}[|\nabla f(u,v)|\mid f(u,v)=0]=\mathbb{E}[|\nabla f(u,v)|]=\mathbb{E}[\sqrt{f_u^2+f_v^2}].
\end{equation*}
\begin{equation*}
(f_u,f_v)\sim N\bigg((0,0),\begin{pmatrix}
\frac{4}{3}\pi^2 m\cdot\frac{1}{k^2} & 0
\\
0 & \frac{4}{3}\pi^2 m\cdot\frac{1}{k^2}
\end{pmatrix}\bigg).
\end{equation*}
\begin{equation*}
(f_u,f_v)=\sqrt{\frac{4}{3}\pi^2 m}\cdot\frac{1}{k}\cdot(X,Y),
\end{equation*}
with
\begin{equation*}
(X,Y)\sim N\bigg((0,0),\begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix}\bigg).
\end{equation*}
\begin{equation*}
\mathbb{E}[\sqrt{f_u^2+f_v^2}]=\sqrt{\frac{4}{3}\pi^2 m}\cdot\frac{1}{k}\cdot\mathbb{E}[\sqrt{X^2+Y^2}].
\end{equation*}
\begin{equation*}
K_1(u,v)=\frac{1}{\sqrt{2\pi}}\cdot\sqrt{\frac{4}{3}\pi^2 m}\cdot\frac{1}{k}\cdot\mathbb{E}[\sqrt{X^2+Y^2}].
\end{equation*}
\begin{equation*}
\mathbb{E}[\sqrt{X^2+Y^2}]
=
\frac{\sqrt{2}}{2}\sqrt{\pi}.
\end{equation*}
\begin{equation*}
K_1(u,v)\equiv\frac{1}{\sqrt{2\pi}}\cdot\sqrt{\frac{4}{3}\pi^2 m}\cdot\frac{1}{k}\cdot\frac{\sqrt{2}}{2}\sqrt{\pi}.
\end{equation*}
\begin{gather*}
\mathbb{E}[\mathcal{L}]=\int_U K_1(u,v)dudv=\int_{0}^{L}\int_{0}^{M}\frac{1}{\sqrt{2\pi}}\cdot\sqrt{\frac{4}{3}\pi^2 m}\cdot\frac{1}{k}\cdot\frac{\sqrt{2}}{2}\sqrt{\pi}dudv=\frac{\pi}{\sqrt{3}}\sqrt{m}LM\cdot\frac{1}{k}
\\
=\frac{\pi}{\sqrt{3}}\sqrt{m}\cdot k|a|\cdot k|b|\cdot\frac{1}{k}=\frac{\pi}{\sqrt{3}}\sqrt{m}\cdot \text{Area}(\Pi)\cdot k.
\end{gather*}
\section{Graph of a function}
Any regular surface is locally the graph of a differentiable function (i.e., locally admits a Monge parametrisation $(x,y)\mapsto(x,y,h(x,y))$) (see \cite{docarm}, chapter 2, Proposition 3).
\begin{prop}
Let $\Sigma\subset\mathbb{T}^3$ be a smooth surface given by $z=h(x,y)$. Then we have
\begin{multline*}
K_1(x,y)
\\
=\frac{\pi}{\sqrt{3}}\sqrt{m}\cdot \frac{1}{2\pi}\int_{0}^{2\pi}[(1+h_x^2)\cos^2\theta+2h_xh_y\cos\theta\sin\theta+(1+h_y^2)\sin^2\theta]^{1/2}d\theta
\\
=\frac{\pi}{\sqrt{3}}\sqrt{m}\cdot \frac{1}{2\pi}\int_{0}^{2\pi}[1+(h_x\cos\theta+h_y\sin\theta)^2]^{1/2}d\theta
\\
=\frac{\pi}{\sqrt{3}}\sqrt{m}\cdot \frac{1}{2\pi}\int_{0}^{2\pi}[1+|\nabla h(x,y)|^2\cdot\sin^2\varphi]^{1/2}d\varphi.
\end{multline*}
\end{prop}
The zero density $K_1(x,y)$ depends only on the tangent plane at $(x,y)$ to the surface $\Sigma$.
\begin{defin}[\cite{anasro}, Definition 3.2.4]
The complete elliptic integral of the second kind is defined as
\begin{equation*}
EllipticE(k):=\int_{0}^{\frac{\pi}{2}}(1-k^2\sin^2\varphi)^{1/2}d\varphi.
\end{equation*}
\end{defin}
It follows that
\begin{equation*}
K_1(x,y)
=\frac{\pi}{\sqrt{3}}\sqrt{m}\cdot \frac{1}{\pi/2}EllipticE(\textrm{i}|\nabla h(x,y)|).
\end{equation*}
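A quick numerical sanity check of this identity is possible (a Python sketch of ours, not part of the argument; it relies on SciPy's parameter convention $\mathrm{ellipe}(m)=\int_0^{\pi/2}(1-m\sin^2 t)^{1/2}dt$, so that $EllipticE(\textrm{i}g)$ corresponds to $\mathrm{ellipe}(-g^2)$; we write $g$ for $|\nabla h(x,y)|$ to avoid a clash with the parameter $m$ above):

\begin{verbatim}
# Check numerically that (1/(2*pi)) * int_0^{2*pi} sqrt(1 + g^2 sin^2(phi)) dphi
# equals (2/pi) * EllipticE(i*g), i.e. (2/pi) * scipy.special.ellipe(-g^2).
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe

for g in [0.0, 0.5, 1.0, 3.0]:           # g plays the role of |grad h(x, y)|
    lhs, _ = quad(lambda phi: np.sqrt(1.0 + g**2 * np.sin(phi)**2),
                  0.0, 2.0 * np.pi)
    lhs /= 2.0 * np.pi
    rhs = (2.0 / np.pi) * ellipe(-g**2)
    print(g, lhs, rhs)                    # the two values agree up to quadrature accuracy
\end{verbatim}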
\begin{cor}
Let $\Sigma\subset\mathbb{T}^3$ be a smooth surface given by $z=h(x,y)$. For $0\leq\varphi\leq 2\pi$, let $\Sigma_{\varphi}$ be the surface given by $z=h(x,y)\cdot|\sin(\varphi)|$. Then we have
\begin{equation*}
\mathbb{E}[\mathcal{L}]=\frac{\pi}{\sqrt{3}}\sqrt{m}\cdot\frac{1}{\pi/2}\int_{0}^{\frac{\pi}{2}}\text{Area}(\Sigma_{\varphi})d\varphi.
\end{equation*}
\end{cor}
\section{General parametrisation}
\begin{prop}
Let $\Sigma\subset\mathbb{T}^3$ be a smooth surface parametrised by $\gamma(u,v)$, with matrix of the First Fundamental Form given by
\begin{equation*}
\begin{pmatrix}
E & F \\ F & G
\end{pmatrix}
=
\begin{pmatrix}
E(u,v) & F(u,v) \\ F(u,v) & G(u,v)
\end{pmatrix}.
\end{equation*} Then the zero-density of the random field $f=\mathcal{F}(\gamma)$ is given by
\begin{multline*}
K_1(u,v)
=\frac{\pi}{\sqrt{3}}\sqrt{m}\cdot \frac{1}{2\pi}\int_{0}^{2\pi}[E\cos^2\theta+2F\cos\theta\sin\theta+G\sin^2\theta]^{1/2}d\theta.
\end{multline*}
\end{prop}
We may also write
\begin{equation*}
K_1(u,v)
=\frac{\pi}{\sqrt{3}}\sqrt{m}\cdot \frac{1}{2\pi}\int_{0}^{2\pi}|a|d\theta,
\end{equation*}
where $a(u,v,\theta)$ is the vector
\begin{equation*}
a=\Big(\frac{(E+\sqrt{\Delta})\cos\theta+F\sin\theta}{\sqrt{E+G+2\sqrt{\Delta}}},\frac{F\cos\theta+(G+\sqrt{\Delta})\sin\theta}{\sqrt{E+G+2\sqrt{\Delta}}}\Big),
\end{equation*}
and $\Delta=EG-F^2$ is the determinant of the matrix of the First Fundamental Form.
\end{comment}
\addcontentsline{toc}{section}{References}
\Addresses
\end{document}
|
\begin{document}
\begin{center}
\textbf{Bispecial factors in circular non-pushy D0L languages}\\[0.5cm]
Karel Klouda\\[0.5cm]
\texttt{[email protected]}\footnote{Faculty of Information Technology,
Czech Technical University in Prague, Th\'{a}kurova 9, 160 00, Prague 6}\\
\texttt{http://www.kloudak.eu/}
\end{center}
\begin{abstract}
We study bispecial factors in fixed points of morphisms. In
particular, we propose a simple method of how to find all bispecial words of
non-pushy circular D0L-systems. This method can be formulated as an algorithm.
Moreover, we prove that non-pushy circular D0L-systems are exactly those
with finite critical exponent.
\end{abstract}
\noindent
\small{\emph{keywords:} bispecial factors, circular D0L systems, non-pushy
D0L systems, critical exponent}
\section{Introduction}
Bispecial factors proved to be a powerful tool for better understanding
of complexity of aperiodic sequences of symbols from a finite set. One of the
most studied families of such sequences are fixed points of morphisms. In this
paper we present a method of how to describe the structure of all bispecial
factors in a given fixed point.
The method we describe here can be partially spotted in the results of
several authors: it is a sort of inverse of the algorithm by Cassaigne from
the paper \cite{Cassaigne1994}, which is concerned with pattern avoidability. A very
similar approach was used in \cite{Avgustinovich1999} by Avgustinovich and Frid
and in \cite{Frid1998} by Frid to describe bispecial factors of biprefix
circular morphisms and marked uniform morphisms, respectively. Actually, the
fact that all bispecial factors in a fixed point can be generated as members of some
easily constructed sequences was noticed in many papers where factor
complexity was computed, see, e.g., \cite{Luca1988, Cassaigne1997}. In this paper we
formalize this approach and prove that it works for a very wide class of
morphisms, namely non-pushy and circular morphisms. Moreover, it seems that the
assumptions we need for proofs can be weakened or even omitted and the main
theorems remain true.
The paper is organized as follows. In the next section we introduce necessary
notation and notions, and also explain the importance of bispecial factors.
Since it is easier to explain the main result using examples than to formulate
it as a theorem, we do so in
Section~\ref{sec:explain_main_result}. Section~\ref{sec:main_proofs} contains
proofs of the crucial theorems and in Section~\ref{sec:infinite_branches} we
explain how to use our results to identify immediately all infinite special
branches. In Section~\ref{sec:assumptions} we prove that non-pushy and circular
morphisms are exactly those whose fixed points have finite critical exponent.
\section{Preliminaries}
Let $\mathcal{A} = \{0,1,\ldots, n - 1\}, n \geq 2$, be a finite
alphabet of $n$ letters; if needed, we denote this particular $n$-letter
alphabet as $\mathcal{A}_n$. An \emph{infinite word} over the alphabet $\mathcal{A}$ is a
sequence $\mathbf{u} = (u_i)_{i \geq 1}$ where $u_i \in \mathcal{A}$ for all $i \geq 1$. If $v = u_j u_{j+1}\cdots u_{j+n-1}$, $j,n \geq 1$, then $v$ is
said to be a \emph{factor} of $\mathbf{u}$ of length $n$; the empty word $\epsilon$
is the unique factor of length $0$. The set of all finite words over $\mathcal{A}$ is
the free monoid $\mathcal{A}^*$, the set of nonempty finite words is denoted by
$\mathcal{A}^+ = \mathcal{A}^* \setminus \{\epsilon\}$.
A map $\varphi : \mathcal{A}^* \to \mathcal{A}^*$ is called a \emph{morphism} if
$\varphi(wv) = \varphi(w)\varphi(v)$ for every $w, v \in \mathcal{A}^*$. Any morphism $\varphi$
is uniquely given by the set of images of letters $\varphi(a), a \in \mathcal{A}$.
If all these images are nonempty words, the morphism is called
\emph{non-erasing}. A famous example of a morphism is the Thue-Morse morphism
$\varphii{TM}$ defined by
\begin{equation*}
\begin{array}{rcl}
\varphii{TM}(0)&=&01,\\
\varphii{TM}(1)&=&10.
\end{array}
\end{equation*}
This paper studies infinite \emph{fixed points} of morphisms: an infinite word
$w$ is a fixed point of a morphism $\varphi$ if $\varphi(w) = w$. If $\varphi^\ell(w) = w$ for some positive $\ell$, $w$ is a \emph{periodic point} of
$\varphi$. The fixed point of $\varphii{TM}$ beginning in the letter $0$ is the infinite
word
\begin{equation}\label{eq:def_uTM}
\mathbf{u}i{TM} = \lim_{n \to \infty} \varphii{TM}^n(0) = \varphii{TM}^\omega(0) =
0110100110\cdots,
\end{equation}
which is called the \emph{Thue-Morse word}.
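For readers who wish to experiment, such fixed points are easy to generate by iteration. The following Python sketch (an illustration of ours; the function names are ad hoc) represents a morphism as a dictionary of images and prints a prefix of the Thue--Morse word.

\begin{verbatim}
# A morphism as a dictionary letter -> image; iterate it to obtain a prefix
# of its fixed point (this works whenever phi(start) begins with start).
def apply_morphism(phi, word):
    return "".join(phi[c] for c in word)

def fixed_point_prefix(phi, start, length):
    w = start
    while len(w) < length:
        w = apply_morphism(phi, w)
    return w[:length]

phi_TM = {"0": "01", "1": "10"}
print(fixed_point_prefix(phi_TM, "0", 20))   # 01101001100101101001
\end{verbatim}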
An infinite word $\mathbf{u}$ is \emph{aperiodic} if it is not
\emph{eventually periodic}, i.e., there are no finite words $v$ and $w$ such
that $\mathbf{u} = v w w w w \cdots = v w^\omega$. If a word $u = v w$, then $v$ is a
\emph{prefix} of $u$ and $w$ is its \emph{suffix}. In this case we put
$(v)^{-1}u = w$ and $u(w)^{-1} = v$. Given a morphism $\varphi$ on $\mathcal{A}$, if $\varphi(a)$
is not a suffix of $\varphi(b)$ for any distinct $a,b \in \mathcal{A}$, then
$\varphi$ is said to be \emph{suffix-free}. \emph{Prefix-free} morphisms are defined analogously.
The \emph{language of a fixed point} $\mathbf{u}$ is the set of all its factors and is
denoted by $\mathcal{L}(\mathbf{u})$. When speaking about a morphism, we usually mean a
morphism together with its particular infinite fixed point. But a morphism can have more than one fixed point and
not all of them need have the same language (they do all have the same language if the
morphism is primitive). For instance, consider the
morphism $0 \mapsto 010, 1 \mapsto 11$: it has two fixed
points, one aperiodic starting in $0$ and one periodic starting in
$1$. Therefore, instead of speaking only about a morphism we
will always speak about a morphism and its particular infinite
fixed point. A well-established way of how to do so is to treat
a morphism and its fixed point as a D0L-system (see, e.g.,
\cite{Rozenberg1986} and \cite{Rozenberg1980}).
\begin{dfn}
A triplet $G = (\mathcal{A},\varphi,w)$ is called a \emph{D0L-system}, where
$\mathcal{A}$ is an alphabet, $\varphi$ a morphism on $\mathcal{A}$, and $w \in
\mathcal{A}^+$ is an \emph{axiom}.
The language of $G$ denoted by $\mathcal{L}(G)$ is the set
of all factors of the words $\varphi^n(w), n = 0,1,\ldots$
If $\varphi$ is non-erasing, then the system
is called PD0L-system.
\end{dfn}
In what follows, when referring to a D0L-system, we always mean a
PD0L-system. In fact, for any D0L-system, it is possible to
construct its elementary (not simplifiable) version which is a
PD0L-system with an injective morphism \cite{Ehrenfeucht1978,Seebold1985}.
Clearly, if $\varphi(a) = av$ for some $a\in \mathcal{A}, v \in \mathcal{A}^+$, and if
$\varphi$ is non-erasing, then the language of the D0L-system
$(\mathcal{A},\varphi,a)$ is the language of the infinite fixed point
$\varphi^\omega(a)$.
There are several tools which help us to study the structure of the language of
D0L-systems. We mention here two basic ones: the factor complexity and
critical exponent. The factor complexity of a language is the function $C(n)$
which counts the number of factors of length $n$. The factor complexity is
usually obtained using the notion of \emph{special factors}.
\begin{dfn}
Let $w$ be a factor of the language $\mathcal{L}(G)$ of a D0L-system $G$
over $\mathcal{A}$. The set of \emph{left extensions} of $w$ is defined as
$$
\mathrm{Lext}(w) = \{a \in \mathcal{A} : aw \in \mathcal{L}(G)\}.
$$
If $\#\mathrm{Lext}(w) \geq 2$, then $w$ is said to be a \emph{left
special (LS) factor} of $\mathcal{L}(G)$.
In the analogous way we define the set of \emph{right extensions}
$\mathrm{Rext}(w)$ and a \emph{right special (RS) factor}. If $w$ is both left
and right special, then it is called \emph{bispecial~(BS)}.
\end{dfn}
The connection between special factors and the factor complexity is described
in~\cite{Cassaigne1997}; the complete knowledge of LS, RS, or BS factors makes it possible
to compute the factor complexity.
The critical exponent is related to the repetitions in the language. Let
$w$ be a finite and nonempty word. Any finite prefix $v$ of $w^{\omega} =
www\cdots$ is a \emph{power} of $w$. We denote this by $v = w^r$; in words, $v$
is an \emph{$r$-power} of $w$, where $r = \frac{|v|}{|w|}$. Further, we define the
\emph{index} of $w$ in a language $\mathcal{L}(G)$ of a D0L-system $G$ as
$$
\mathrm{ind}(w,G) = \sup\left\{ r \in \mathbb{Q} : w^r \in \mathcal{L}(G)
\right\}. $$
And finally, the \emph{critical exponent} of the language $\mathcal{L}(G)$ is
the number
$$
\sup\{ \mathrm{ind}(w,G) \mid w \in \mathcal{L}(G)\}.
$$
More details about critical exponent can be found, e.g.,
in~\cite{Krieger2007}. Examples of how knowledge of BS factors can help to
compute it for a fixed point of a morphism are in~\cite{Balkova2009}
and~\cite{Balkova2011}.
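On a finite prefix, indices can be bounded from below by brute force. The following Python sketch (ours, purely illustrative) computes, for a factor $w$, the largest power $w^r$ occurring in a given prefix; since the prefix is finite, the value is only a lower bound on $\mathrm{ind}(w,G)$.

\begin{verbatim}
# Lower bound on ind(w, G): the largest power w^r occurring in a finite prefix.
from fractions import Fraction

def apply_morphism(phi, word):
    return "".join(phi[c] for c in word)

def max_power_in(prefix, w):
    n, p = len(prefix), len(w)
    best = Fraction(0)
    for i in range(n - p + 1):
        if prefix[i:i + p] != w:
            continue
        j = i + p
        while j < n and prefix[j] == prefix[j - p]:   # extend the run of period |w|
            j += 1
        best = max(best, Fraction(j - i, p))
    return best

phi_TM = {"0": "01", "1": "10"}
u = "0"
for _ in range(12):
    u = apply_morphism(phi_TM, u)        # a prefix of the Thue-Morse word
print(max_power_in(u, "0"), max_power_in(u, "01"), max_power_in(u, "010"))
# since the Thue-Morse word is overlap-free, none of the printed values exceeds 2
\end{verbatim}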
\section{Explaining the main result} \label{sec:explain_main_result}
Since the main result of this paper is a tool rather than a
theorem, we demonstrate it using example morphisms. The tool has
two ingredients. The first one is a mapping that maps a BS factor to
another one and so, applied repeatedly, it generates sequences
of BS factors. This mapping is defined by two directed labeled
graphs. The other ingredient is a finite set of BS factors such
that the sequences generated from them by the mapping cover all BS
factors in a given fixed point.
Let us consider the morphism $\varphii{E}$ defined by $0 \mapsto 012,
1 \mapsto 112, 2 \mapsto 102$ and the corresponding D0L-system
$({\mathcal{A}}_3, \varphii{E}, 0)$ with the fixed point $\mathbf{u}i{E}$.
The factor $2112$ is LS and has left extensions $0$ and $1$. If we
apply the morphism $\varphii{E}$ to this structure -- both to the
factor and to its two extensions -- the resulting factor
$\varphii{E}(2112)$ is no longer LS since the respective extensions
$\varphii{E}(0) = 012$ and $\varphii{E}(1) = 112$ end in the same letter
$2$. In order to obtain a LS factor, we have to cut off the
longest common suffix of the new extensions, here it is $12$, and
append it to the beginning of the $\varphii{E}$-image of the factor:
the result is the LS factor $12\varphii{E}(2112)$ with left extensions
$0$ and $1$. We can proceed in the same manner and obtain another
LS factor $12\varphii{E}(12)\varphii{E}^2(2112)$ again with the same
extensions $0$ and $1$. Clearly, the same process works for RS
factors and right extensions.
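The prolongation step just described is easy to mechanise. The following short Python sketch (ours, not part of the formal development) computes the longest common suffix of the images of the two left extensions and prepends it to the $\varphii{E}$-image of the factor, reproducing $12\varphii{E}(2112)$.

\begin{verbatim}
# One left prolongation step for E: 0->012, 1->112, 2->102.
def apply(phi, w):
    return "".join(phi[c] for c in w)

def longest_common_suffix(x, y):
    i = 0
    while i < min(len(x), len(y)) and x[-1 - i] == y[-1 - i]:
        i += 1
    return x[len(x) - i:]

phi_E = {"0": "012", "1": "112", "2": "102"}
v, (a, b) = "2112", ("0", "1")                    # LS factor and its left extensions
img_a, img_b = apply(phi_E, a), apply(phi_E, b)
s = longest_common_suffix(img_a, img_b)
new_factor = s + apply(phi_E, v)
new_exts = (img_a[:len(img_a) - len(s)][-1], img_b[:len(img_b) - len(s)][-1])
print(s, new_factor, new_exts)   # 12 12102112112102 ('0', '1')
\end{verbatim}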
Let us formalize what we did in the previous paragraph. Instead of
BS factors we will use a slightly different notion of \emph{BS
triplets} $((a,b), v, (c,d))$, where $v$ is a BS factor and
$(a,b)$ and $(c,d)$ are unordered pairs of its left and right
extensions, respectively. We assume that either $avc$ and $bvd$ or
$avd$ and $bvc$ are factors. Thus, $((0,1), 2112, (0,1))$ is a BS
triplet in $\mathbf{u}i{E}$. In terms of the previous paragraph, we can
get another BS triplet from this one, namely $((0,1),
12\varphii{E}(2112),(0,1))$; we call this BS triplet the
\emph{$f$-image} of $((0,1), 2112, (0,1))$. The fact that left
extensions $(0,1)$ result again in extensions $(0,1)$ with
appending of $12$ can be represented as a directed edge from vertex
$(0,1)$ to vertex $(0,1)$ with label $12$. The edge corresponding
to the right extensions starts in $(0,1)$, ends again in $(0,1)$
and is labeled by the empty word. Applying this idea on all
possible pairs of left and right extensions gives us two directed
labeled graphs depicted in Figure~\ref{fig:GL_GR_E}. We call these graphs
\emph{graph of left and right prolongations}. With these graphs in hand, it is
easy to generate infinitely many BS triplets from a given starting one.
\begin{figure}
\caption{The graphs defining the $f$-image for the morphism~$\varphii{E}$.}
\label{fig:GL_GR_E}
\end{figure}
The other ingredient of our method relies on the fact that all BS
triplets can be generated by taking repetitively $f$-image of only
finitely many \emph{initial} BS triplets. Initial BS triplets are
those which are not $f$-images of another BS triplet. For instance, $((0,1),
2112, (0,1))$ is not initial as it is the $f$-image of the BS
triplet $((0,2), 1, (0,2))$, which is initial. Later we will show
how to find all the initial factors for a given fixed point. For
the case of $\mathbf{u}i{E}$, we have eight initial BS triplets:
\begin{center}
\begin{tabular}{ c c c c }
$((0,1),121,(0,1))$, & $((0,1),12,(0,1))$, & $((0,1),21,(0,1))$, & $((0,1),2,(0,1))$, \\
$((1,2),1,(1,2))$, & $((0,2),1,(1,2))$, & $((0,2),1,(0,2))$, & $((1,2),0,(1,2))$.
\end{tabular}
\end{center}
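Since $\varphii{E}$ is both prefix- and suffix-free, the $f$-image of a triplet with single-letter prolongations can be computed directly. The following Python sketch (ours, purely illustrative) does so and, starting from the initial triplet $((0,2),1,(0,2))$, recovers $((0,1),2112,(0,1))$ in one step and then iterates once more.

\begin{verbatim}
# f-image of a BS triplet ((a,b), v, (c,d)) for the prefix- and suffix-free
# morphism E: 0->012, 1->112, 2->102 (single-letter prolongations suffice).
def apply(phi, w):
    return "".join(phi[c] for c in w)

def lcs(x, y):                       # longest common suffix
    i = 0
    while i < min(len(x), len(y)) and x[-1 - i] == y[-1 - i]:
        i += 1
    return x[len(x) - i:]

def lcp(x, y):                       # longest common prefix
    i = 0
    while i < min(len(x), len(y)) and x[i] == y[i]:
        i += 1
    return x[:i]

def f_image(phi, triplet):
    (a, b), v, (c, d) = triplet
    fa, fb, fc, fd = (apply(phi, x) for x in (a, b, c, d))
    s, p = lcs(fa, fb), lcp(fc, fd)
    new_left = tuple(sorted((fa[:len(fa) - len(s)][-1], fb[:len(fb) - len(s)][-1])))
    new_right = tuple(sorted((fc[len(p)], fd[len(p)])))
    return new_left, s + apply(phi, v) + p, new_right

phi_E = {"0": "012", "1": "112", "2": "102"}
t = (("0", "2"), "1", ("0", "2"))
for _ in range(2):
    t = f_image(phi_E, t)
    print(t)
# first printed triplet: (('0', '1'), '2112', ('0', '1'))
\end{verbatim}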
The vertices of the graphs from Figure~\ref{fig:GL_GR_E} are just
all pairs of distinct letters, but the situation is not that
simple for all morphisms. In fact, this happens for the graph of left (right)
prolongations only if the respective morphism is suffix-free (prefix-free). The
case when the morphism is both prefix- and suffix-free has already been
solved in~\cite{Avgustinovich1999}, where the authors not only describe
all BS factors but also give a formula for the factor
complexity.
Let us consider the morphism $\varphii{S}$ defined by $0 \mapsto 0012,
1 \mapsto 2, 2 \mapsto 012$ and the corresponding D0L-system
$({\mathcal{A}}_3, \varphii{S}, 0)$.
Clearly, the morphism is not suffix-free. Let $v$ be a LS factor
with left extensions $(1,2)$. If we apply the morphism as above,
we have a problem: the longest common suffix of factors
$\varphii{S}(1) = 2$ and $\varphii{S}(2) = 012$ is $2$ and so we do not
know what the left extensions of the LS factor $2\varphii{S}(v)$ are.
A solution is to consider left extensions longer than one letter; in such a
case we say left prolongation instead of left extension. Clearly, the factor
$1$ is always preceded by $0$. Hence, let us consider the left prolongations $(01,2)$. Now, neither of $\varphii{S}(01) = 00122$
and $\varphii{S}(2) = 012$ is a suffix of the other, and so we know the new left
prolongations: again $(01,2)$. In this way we can construct a
complete graph defining the respective $f$-image, the result is in
Figure~\ref{fig:GL_GR_S} (the notation will be explained later).
To prove that a proper \emph{finite} set of pairs of left and
right extensions of arbitrary length always exists is not trivial.
It is not simple even to describe the properties such sets should
posses so that they define a correct $f$-image. We will call such
sets \emph{left} and \emph{right forky sets}, see
Definition~\ref{dfn:L_forky}. It may happen that a finite forky
set does not exist and so our method fails. Therefore we will
have to put some restriction on the D0L-systems considered: we
will assume that the systems are \emph{circular} and
\emph{non-pushy}. These notions are explained in the following
section.
Finally, we can now state the main result of this paper: Given a
circular non-pushy D0L-system with an aperiodic fixed point, there
exist finite left and right forky sets defining two directed
graphs and a finite set of initial BS triplets such that the
corresponding $f$-image applied repetitively on the initial
BS-triplets generates all BS factors.
\section{Forky sets and initial factors} \label{sec:main_proofs}
\subsection{Circular and non-pushy D0L-systems}
Any factor of a fixed point of a morphism $\varphi$ can be decomposed
into (possibly incomplete) $\varphi$-images of letters. For example,
in the case of $\varphii{E}$ we have: 01210 is a factor of
$\varphii{E}(0)\varphii{E}(2)$, i.e., 01210 is composed by
$\varphii{E}$-images of 0 and 2. We denote this using bars, i.e.,
$012|10$. The decomposition may not be unique. For instance $210$
is always decomposed as $2|10$ but we do not know whether 2 is a
suffix of $\varphii{E}(0)$ or $\varphii{E}(1)$ (it cannot be a suffix of
$\varphii{E}(2)$ since $22$ is not a factor of $\mathbf{u}i{E}$). In the case
of the factor $1$, we do not even know where to place the bar, if
anywhere.
A factor can have more than one decomposition; however, if there
is a common bar for all these decompositions, this bar is called a
\emph{synchronizing point}. Coming back to our example, $210$ has
a synchronizing point between 2 and 10, formally we say that
$(2,10)$ is a synchronizing point of $210$.
\begin{dfn}[Cassaigne \cite{Cassaigne1994}]\label{dfn:synchro_poit_injective}
Let $\varphi$ be a morphism with a fixed point
$\mathbf{u}$, $\varphi$ injective on $\mathcal{L}(\mathbf{u})$, and let $w$ be a factor of
$\mathbf{u}$. An ordered pair of factors $(w_1,w_2)$ is called a \emph{synchronizing point} of $w$ if $w = w_1w_2$ and
$$
\forall v_1, v_2 \in {\mathcal{A}}^{*}, ({v}_{1} {w} {v}_2 \in \varphi(\mathcal{L}(\mathbf{u})) \Rightarrow {v}_{1} {w}_{1} \in \varphi(\mathcal{L}(\mathbf{u})) \text{\ and\ } v_2w_2 \in
\varphi(\mathcal{L}(\mathbf{u}))).
$$
We denote this by $w = w_1|_sw_2$.
\end{dfn}
\begin{dfn}\label{dfn:circular_D0L_system}
A D0L-system $G = (\mathcal{A},\varphi,w)$ is \emph{circular} if $\varphi$ is injective on $\mathcal{L}(G)$ and if there
exists $D \in \mathbb{N}$ such that any $v \in \mathcal{L}(G)$ with $|v| \geq D$ has at least one
synchronizing point. This $D$ is called a \emph{synchronizing
delay}.
\end{dfn}
Some examples of both circular and non-circular D0L-systems
follow.
\begin{exa}
The system $G = (\mathcal{A}_2, \varphii{TM},0)$ is circular with a
synchronizing delay 4. It is clear that any $w \in \mathcal{L}(G)$
containing 00 or 11 has the synchronizing point $w =
\cdots0|_s0\cdots$ or $w = \cdots1|_s1\cdots$. To see that, let us consider
a word $w$ of length 4 not containing these two factors.
Without loss of generality, assume that $w$ begins in 1; then $w = 1010$. This word can be
decomposed into $\varphii{TM}$-images of $0$ and $1$ in exactly two
ways: $|10|10|$ and $1|01|0$. But the latter one is not admissible
since it arises from the $\varphii{TM}$-image of $000$, which is not an element of
$\mathcal{L}(G)$.
\end{exa}
\begin{exa}
The system $G = (\mathcal{A}_2,\varphi,0)$, where $\varphi(0) = 01, \varphi(1) =
11$, is not circular. Indeed, for all $n$ the word $1^n$ has no
synchronizing point since it can be decomposed as $|11|11|\cdots$ and
$1|11|11|\cdots$.
\end{exa}
This example is very simple since the respective infinite fixed point
$011111\cdots$ is eventually periodic. However, there are also
aperiodic non-circular systems.
\begin{exa}
The system $G = (\mathcal{A}_3,\varphi,0)$, where $\varphi(0) = 010, \varphi(1) =
22$, $\varphi(2) = 11$, is not circular. The argument is the same as in the previous
example, since the words $1^n$ are for all $n \in \mathbb{N}$ elements of
$\mathcal{L}(G)$. However, the infinite word $\varphi^\omega(0) = 0102201011110102\cdots$ is aperiodic.
\end{exa}
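Synchronizing points can also be explored experimentally on a finite prefix: record the positions of the bars in $\varphi(\mathbf{u})$ and intersect, over all occurrences of a factor $w$, the sets of bar offsets falling inside $w$. The Python sketch below (ours) is only a finite-prefix heuristic (a nonempty intersection is not a proof, although an empty one does exhibit two conflicting decompositions), but it reproduces the examples above.

\begin{verbatim}
# Finite-prefix heuristic: intersect, over all occurrences of w in phi(u'),
# the relative positions of the bars of the decomposition into phi-images.
def apply(phi, w):
    return "".join(phi[c] for c in w)

def image_and_cuts(phi, word):
    cuts, pos = set(), 0
    for c in word:
        cuts.add(pos)
        pos += len(phi[c])
    cuts.add(pos)
    return apply(phi, word), cuts

def candidate_sync_points(image, cuts, w):
    common = None
    for i in range(len(image) - len(w) + 1):
        if image[i:i + len(w)] != w:
            continue
        offsets = {c - i for c in cuts if i <= c <= i + len(w)}
        common = offsets if common is None else common & offsets
    return common

phi_TM = {"0": "01", "1": "10"}
u = "0"
for _ in range(10):
    u = apply(phi_TM, u)
img, cuts = image_and_cuts(phi_TM, u)
print(candidate_sync_points(img, cuts, "1010"))    # {0, 2, 4}: bars at the same offsets everywhere

phi2 = {"0": "01", "1": "11"}                       # the non-circular example above
v = "0"
for _ in range(10):
    v = apply(phi2, v)
img2, cuts2 = image_and_cuts(phi2, v)
print(candidate_sync_points(img2, cuts2, "1111"))   # set(): conflicting decompositions
\end{verbatim}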
One can notice that the languages in both non-circular
examples contain an arbitrary power of 1. It is not just a
coincidence but a general rule.
\begin{thm}[Mignosi and S\'{e}\'{e}bold \cite{Mignosi1993}] \label{thm:mignosi_seebold}
If a D0L-system is $k$-power-free (i.e., $\mathcal{L}(G)$ does not contain
the $k$-power of any word) for some $k \geq 1$, then it is circular.
\end{thm}
Thus, non-circular fixed points must have infinite critical
exponent, in fact, they must
contain an unbounded power of some word.
\begin{thm}[Ehrenfeucht and Rozenberg \cite{Ehrenfeucht1983}]
\label{thm:strongly_repetitive}
Given a D0L-system $G = (\mathcal{A}, \varphi , w)$, if $\mathcal{L}(G)$ contains a
$k$-power for all $k \in \mathbb{N}$, then $G$ is \emph{strongly repetitive}, i.e., there exists a nonempty $v \in \mathcal{L}(G)$
such that $v^\ell \in \mathcal{L}(G)$ for all $\ell \in \mathbb{N}$.
\end{thm}
We see that non-circular systems have very special properties.
Furthermore, the morphism of a non-circular system cannot be even
primitive. A morphism $\varphi$ over $\mathcal{A}$ is primitive if there is $k \in
\mathbb{N}$ such that $\varphi^k(a)$ contains $b$ for all $a,b \in \mathcal{A}$.
\begin{thm}[Moss\'{e} \cite{Mosse1996}]\label{thm:mosse}
Any D0L-system $G = (\mathcal{A}, \varphi , a)$ with $\varphi$ injective on
$\mathcal{L}(G)$ and primitive is circular\footnote{In the article
\cite{Mosse1996} the circular systems are called ``recognizable''.}.
\end{thm}
However bizarre non-circular systems may seem, there
is no known algorithm which would decide whether a given general
D0L-system is circular or not. Of course, if the respective
morphism is primitive, this is easy to verify in finitely many steps and circularity then follows from Theorem~\ref{thm:mosse}. Later on we also
prove that if the system is non-pushy, then circularity is equivalent to
the system not being strongly repetitive, which is decidable.
\begin{exa}
An example of non-primitive but circular morphism is the one given by $0 \mapsto 0010, 1 \mapsto
1$. This is the Chacon morphism~\cite{Chacon1969} and 5~is its synchronizing
delay.
\end{exa}
\subsection{Non-pushy D0L-systems}
The following two definitions and the lemma are taken from
\cite{Ehrenfeucht1983}.
\begin{dfn}
Let $G = (\mathcal{A},\varphi,w)$ be a D0L-system. A letter $b \in \mathcal{A}$ has
\emph{rank zero} if $\mathcal{L}(G_b)$, where $G_b =
(\mathcal{A},\varphi,b)$, is finite.
\end{dfn}
\begin{dfn}
A D0L-system $G = (\mathcal{A},\varphi,w)$ is \emph{pushy} if for all $n \in
\mathbb{N}$ there exists $v \in \mathcal{L}(G)$ of length $n$ which is composed of
letters that have rank zero; otherwise $G$ is \emph{non-pushy}. If $G$ is
non-pushy, then $q(G)$ denotes
$$
q(G) = \max\{|v| \mid v \in \mathcal{L}(G) \text{ is composed of letters
that have rank zero}\}. $$
\end{dfn}
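Whether a given letter has rank zero can be tested experimentally by iterating $\varphi$ on that letter and watching whether the orbit stabilises. The Python sketch below (ours) uses a length cutoff as a heuristic stopping rule, so it is only illustrative; a rigorous decision procedure is given in \cite{Ehrenfeucht1983}. Deciding pushiness additionally requires checking how long the factors over rank-zero letters can be, which we do not sketch here.

\begin{verbatim}
# Heuristic test for rank-zero letters: iterate phi on a letter and check whether
# the orbit {phi^n(b)} repeats (finite language) before a length cutoff is reached.
def apply(phi, w):
    return "".join(phi[c] for c in w)

def has_rank_zero(phi, letter, cutoff=10000):
    seen, w = set(), letter
    while len(w) <= cutoff:
        if w in seen:
            return True        # the orbit is eventually periodic, so L(G_b) is finite
        seen.add(w)
        w = apply(phi, w)
    return False               # heuristic: treated as growing once the cutoff is exceeded

chacon = {"0": "0010", "1": "1"}      # the Chacon morphism mentioned above
print({b: has_rank_zero(chacon, b) for b in chacon})    # {'0': False, '1': True}
\end{verbatim}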
\begin{lem}\label{lem:pushy_systems} \
\begin{enumerate}
\item It is decidable whether or not an arbitrary D0L-system
is pushy.
\item If $G$ is pushy, then $\mathcal{L}(G)$ is strongly repetitive (see Theorem~\ref{thm:strongly_repetitive}).
\item If $G$ is non-pushy, then $q(G)$ is effectively computable.
\item It is decidable whether or not an arbitrary D0L-system is strongly
repetitive.
\end{enumerate}
\end{lem}
\begin{cor}[Krieger \cite{Krieger2007}] \label{cor:constant_C}
Let $G = (\mathcal{A},\varphi,a), a \in \mathcal{A},$ be a non-pushy D0L-system
and let $\mathbf{u} = \varphi^\omega(a)$ be an infinite fixed point of $\varphi$. Then
there exists a non-erasing morphism $\varphi'$ and an effectively computable $C \in \mathbb{N}$ such that $\mathbf{u} =
(\varphi')^\omega(a)$ and for all $v \in \mathcal{L}(G)$ with $|\varphi'(v)| =
|v|$ we have $|v| < C$.
\end{cor}
This means that for any word $v$ of length at least $C$ we have
$|\varphi(v)| \geq |v| + 1$. More generally, if $|v| \geq KC$ then
$|\varphi(v)| \geq |v| + K$. We will use this in the proof of the main
theorem of the following subsection.
In the sequel, we always suppose that $G = (\mathcal{A},\varphi,
a)$ is such that $\varphi'$ can be taken equal to $\varphi$ (in fact
$\varphi'$ is just a power of $\varphi$, see the proof
in~\cite{Krieger2007}). This is without loss of generality since
the language is the same.
\subsection{Forky sets}
Our aim is to define properly the notion of $f$-image introduced in
Section~\ref{sec:explain_main_result}. As explained, we need to have two directed labeled graphs defined on unordered pairs of
left and right prolongations. Since left and right extensions usually
refer to letters and the vertices of our graphs might be pairs of
words, we give the following definition.
\begin{dfn}
Let $\mathbf{u}$ be an infinite word and $w$ its factor. The set
of \emph{left prolongations} of $w$ is the set
$$
\mathrm{Lpro}(w) = \{v \in \mathcal{A}^+ \mid vw \in \mathcal{L}(\mathbf{u})\}.
$$
In an analogous way we define the set of \emph{right prolongations} $\mathrm{Rpro}(w)$.
\end{dfn}
The sets $\mathrm{Lpro}(w)$ and $\mathrm{Rpro}(w)$ are, in general, infinite.
Our aim is to specify suitable finite sets $\mathcal{B}_L$ and $\mathcal{B}_R$
of (unordered) pairs of left and right prolongations which
allow us to define correctly an $f$-image of all triplets
$((w_1,w_2), v, (w_3,w_4))$, where $v$ is a BS factor, $(w_1,
w_2)$ a pair of its left and $(w_3, w_4)$ a pair of its right
prolongations from $\mathcal{B}_L$ and $\mathcal{B}_R$, respectively. The
$f$-image defined by the sets $\mathcal{B}_L$ and $\mathcal{B}_R$ is to be
a BS triplet $((w'_1,w'_2), v', (w'_3,w'_4))$, where $(w'_1,w'_2)$
and $(w'_3,w'_4)$ are again in $\mathcal{B}_L$ and $\mathcal{B}_R$ and the
factor $v' = f_L(w_1,w_2)\varphi(v)f_R(w_3,w_4)$ is BS. The mappings
$f_L$ and $f_R$ are defined as follows.
\begin{dfn}\label{dfn:f_L_f_R_intro}
Let $\varphi$ be a morphism over $\mathcal{A}$ and let $(v_1, v_2)$ be an unordered pair of words from $\mathcal{A}^+$.
We define
\begin{eqnarray*}
f_L(v_1,v_2) &=& \text{the\ longest\ common\ suffix\ of\ } \varphi(v_1) \text{\ and\ } \varphi(v_2), \\
f_R(v_1,v_2) &=& \text{the\ longest\ common\ prefix\ of\ } \varphi(v_1) \text{\ and\ } \varphi(v_2).
\end{eqnarray*}
\end{dfn}
The purpose of the following definitions is just to describe
``good'' choices of $\mathcal{B}_L$ and $\mathcal{B}_R$.
\begin{dfn}
Let $(w_1,w_2)$ and $(v_1,v_2)$ be unordered pairs of
words. We say that
\begin{itemize}
\item[(i)] $(w_1,w_2)$ is a \emph{prefix} (\emph{suffix}) of
$(v_1,v_2)$ if either $w_1$ is a prefix (suffix) of $v_1$ and $w_2$ of
$v_2$, or $w_1$ is a prefix (suffix) of $v_2$ and $w_2$ of
$v_1$;
\item[(ii)] $(w_1,w_2)$ and $(v_1,v_2)$ are \emph{L-aligned}
if
$$
(v_1 = uw_1 \text{\ or\ } w_1 = uv_1) \quad \text{and} \quad ( v_2 = u'w_2 \text{\ or\ } w_2 = u'v_2)
$$
or
$$
(v_1 = uw_2 \text{\ or\ } w_2 = uv_1) \quad \text{and} \quad (v_2 = u'w_1 \text{\ or\ } w_1 = u'v_2)
$$
for some words $u,u'$.
\end{itemize}
Analogously, we define pairs which are \emph{R-aligned}.
\end{dfn}
\begin{exa}
The pairs $(01,0)$ and $(001,10)$ are L-aligned,
while $(01,0)$ and $(011,10)$ are not L-aligned.
Schematically, the notion of L-aligned pairs of words is depicted in
Figure~\ref{fig:L_aligned_pairs}.
\begin{figure}
\caption{L-aligned and not L-aligned pairs of words.}
\label{fig:L_aligned_pairs}
\end{figure}
\end{exa}
\begin{dfn} \label{dfn:L_forky}
Let $\varphi$ be a morphism with a fixed point $\mathbf{u}$.
A finite set $\mathcal{B}_L$ of unordered pairs $(w_1,w_2)$ of nonempty factors of
$\mathbf{u}$ is called \emph{L-forky} if all the following conditions
are satisfied:
\begin{itemize}
\item[(i)] the last letters of $w_1$ and $w_2$ are different for all $(w_1, w_2) \in \mathcal{B}_L$,
\item[(ii)] no distinct pairs $(w_1, w_2)$ and $(w'_1, w'_2)$ from $\mathcal{B}_L$ are L-aligned,
\item[(iii)] for any $v_1,v_2 \in
\mathcal{L}(\mathbf{u}) \setminus \{\epsilon\}$ with distinct last letters there exists $(w_1,w_2) \in \mathcal{B}_L$ such that
$(w_1,w_2)$ and $(v_1,v_2)$ are L-aligned,
\item[(iv)] for any $(w_1, w_2) \in \mathcal{B}_L$ there exists $(w'_1, w'_2) \in \mathcal{B}_L$
such that
$$
(w'_1f_L(w_1,w_2), w'_2f_L(w_1,w_2))
$$
is a suffix of $(\varphi(w_1), \varphi(w_2))$.
\end{itemize}
Analogously we define an \emph{R-forky} set.
\end{dfn}
Since the definition may look a bit intricate, we now comment on
all the conditions. Condition $(i)$ says that $w_1$ and $w_2$ are
left prolongations of LS factors (note that all pairs of words
$w_1,w_2 \in \mathcal{L}(\mathbf{u})$ are prolongations of the empty
word, i.e., elements of $\mathrm{Lpro}(\epsilon)$). Condition~$(ii)$ is
required to avoid redundancy in $\mathcal{B}_L$. Condition~$(iii)$
ensures that any two left prolongations of any LS factor are
included in $\mathcal{B}_L$ in the following sense: if we prolong
or shorten them in a certain way we obtain a pair from
$\mathcal{B}_L$. And, finally, Condition~$(iv)$ is there because
of the definition of the $f$-image: we want to be able to apply it
repetitively. Note that due to $(ii)$ and $(iii)$ the pair $(w'_1,
w'_2)$ from $(iv)$ is uniquely given. Note also that if $(i)$ is
satisfied and the words from all the pairs of $\mathcal{B}_L$
are of the same length, then $(ii)$ and $(iii)$ are satisfied
automatically.
\begin{exa} \label{exa:vp_S_set_B}
Consider the morphism $\varphii{S}$ from
Section~\ref{sec:explain_main_result} defined by $0 \mapsto 0012,
1 \mapsto 2, 2 \mapsto 012$. This morphism is injective
and primitive and so, by Theorem~\ref{thm:mosse}, the respective D0L-system is circular.
One can easily prove that 3 is a synchronizing delay
(note that all factors containing $2$ have a synchronizing point $\cdots2|_s\cdots$).
Since $\varphii{S}$ is prefix-free, we get that the set
$$
\mathcal{B}_R = \{(0,1), (0,2), (1,2)\}
$$
is R-forky. However, this set is not L-forky:
Condition $(iv)$ is not satisfied for any pair since $\varphii{S}(1)$ is a suffix of $\varphii{S}(2)$
which is a suffix of $\varphii{S}(0)$. To remedy this, we
consider left prolongations one letter longer which are ending in $1$ and
$2$. Since the list of all factors of length 2 reads
$$
00,01,12,20,22
$$
the new pairs are
\begin{equation}\label{eq:exa_vpS_list_Lpro}
(0,01), (0,12), (0,22), (2,01).
\end{equation}
For these pairs conditions $(i)$ -- $(iii)$ are again satisfied.
But $(iv)$ is not satisfied for $(0,12)$ since $f_L(0,12) =
012$ and $(\varphii{S}(0)(012)^{-1}, \varphii{S}(12)(012)^{-1}) =
(0,2)$ has no suffix in list~\eqref{eq:exa_vpS_list_Lpro}. Hence,
we have to prolong $12$ again. There is only one possibility,
namely $012$. The resulting set
$$
\mathcal{B}_L = \{(0,01), (0,012), (0,22), (2,01)\}
$$
is then L-forky since now we get $(\varphii{S}(0)(012)^{-1}, \varphii{S}(012)(012)^{-1}) =
(0,00122)$ with a suffix $(0,22) \in \mathcal{B}_L$.
\end{exa}
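The computations of this example are easy to automate. The following Python sketch (ours) reproduces the two decisive steps: the pair $(0,12)$ fails condition $(iv)$, while its prolongation $(0,012)$ passes.

\begin{verbatim}
# Reproduce the condition (iv) computations of this example for S: 0->0012, 1->2, 2->012.
def apply(phi, w):
    return "".join(phi[c] for c in w)

def lcs(x, y):
    i = 0
    while i < min(len(x), len(y)) and x[-1 - i] == y[-1 - i]:
        i += 1
    return x[len(x) - i:]

phi_S = {"0": "0012", "1": "2", "2": "012"}
for pair in [("0", "12"), ("0", "012")]:
    a, b = apply(phi_S, pair[0]), apply(phi_S, pair[1])
    f = lcs(a, b)
    print(pair, "f_L =", f, "->", (a[:len(a) - len(f)], b[:len(b) - len(f)]))
# ('0', '12') : f_L = 012 -> ('0', '2')      no pair of the candidate list is a suffix, (iv) fails
# ('0', '012'): f_L = 012 -> ('0', '00122')  has the suffix pair ('0', '22'), so (iv) holds
\end{verbatim}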
\begin{thm} \label{thm:existence_of_the_graph}
Let $\varphi$ be a morphism on $\mathcal{A}$ with a fixed point $\mathbf{u} =
\varphi^\omega(a)$. If $(\mathcal{A},\varphi, a)$ is circular non-pushy system, then
it has L-forky and R-forky sets.
\end{thm}
\begin{proof}
Set $M = DC$, where $D$ is a synchronizing delay and
$C$ is the constant from Corollary~\ref{cor:constant_C}.
Define
\begin{multline}
\mathcal{B}_L = \{(w_1,w_2) : w_1,w_2 \in \mathcal{L}(\mathbf{u}), |w_1| =
|w_2| = M, \\ \text{ and the last letters of } w_1 \text{ and }
w_2
\text{ are different}
\}.
\end{multline}
We claim that $\mathcal{B}_L$ is L-forky. Conditions $(i)$ -- $(iii)$
from Definition~\ref{dfn:L_forky} are trivially fulfilled. It remains to prove $(iv)$.
It is clear that we must have $|f_L(w_1,w_2)| \leq
D$ for any $(w_1,w_2)$ since $f_L(w_1,w_2)$ without the last letter does not have any
synchronizing point. Now, it suffices to realize that for any
$w$ of length $M$ we have $|\varphi(w)| \geq |w| + D = M + D$ and so
$(\varphi(w_1)(f_L(w_1,w_2))^{-1}, \varphi(w_2)(f_L(w_1,w_2))^{-1})$
has a suffix in $\mathcal{B}_L$. The proof of existence of an R-forky set is
perfectly the same.
\end{proof}
This proof does not give us a general guideline how to construct
L-forky and R-forky sets since, as we have recalled earlier, we do
not have an effective algorithm computing a synchronizing delay
for a general morphism. Moreover, the L-forky set constructed in
the proof is usually far too large. As the case of our example
morphism $\varphii{S}$ from Example~\ref{exa:vp_S_set_B} shows (for this
morphism $C = 2$ and $D = 3$), there usually exists a much smaller
L-forky set.
\begin{rem}\label{rem:important_Krieger}
The techniques of the proof of
Theorem~\ref{thm:existence_of_the_graph} are the same as those
of the proof of Lemma~11 in~\cite{Krieger2007}. This Lemma,
however, is concerned with the notion of critical exponent.
\end{rem}
\begin{dfn}
Let $\varphi$ be a morphism with a fixed point $\mathbf{u}$ and let $\mathcal{B}_L$ be an L-forky set.
We define the directed labeled \emph{graph of left prolongations} $\GL{\mathcal{B}_L}_\varphi$ as follows:
\begin{itemize}
\item[(i)] the set of vertices is $\mathcal{B}_L$,
\item[(ii)] there is an edge from $(w_1,w_2)$ to $(w_3,w_4)$
if $(w_3f_L(w_1,w_2),w_4f_L(w_1,w_2))$ is a suffix
of $(\varphi(w_1),\varphi(w_2))$. The label of this edge is
$f_L(w_1,w_2)$.
\end{itemize}
In the same manner we define the \emph{graph of right prolongations} $\GR{\mathcal{B}_R}_\varphi$.
\end{dfn}
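Given forky sets, the graphs of prolongations can be tabulated automatically. The Python sketch below (ours, self-contained and therefore repeating a few small helpers) prints, for every vertex of $\mathcal{B}_L$ and $\mathcal{B}_R$ from Example~\ref{exa:vp_S_set_B}, the end-vertex and the label of its unique outgoing edge.

\begin{verbatim}
# Tabulate the graphs of prolongations for S: 0->0012, 1->2, 2->012
# and the forky sets B_L, B_R of the example above.
def apply(phi, w):
    return "".join(phi[c] for c in w)

def lcs(x, y):
    i = 0
    while i < min(len(x), len(y)) and x[-1 - i] == y[-1 - i]:
        i += 1
    return x[len(x) - i:]

def lcp(x, y):
    i = 0
    while i < min(len(x), len(y)) and x[i] == y[i]:
        i += 1
    return x[:i]

def is_suffix_pair(p, q):
    return (q[0].endswith(p[0]) and q[1].endswith(p[1])) or \
           (q[0].endswith(p[1]) and q[1].endswith(p[0]))

def is_prefix_pair(p, q):
    return (q[0].startswith(p[0]) and q[1].startswith(p[1])) or \
           (q[0].startswith(p[1]) and q[1].startswith(p[0]))

def left_edge(phi, B_L, pair):
    imgs = (apply(phi, pair[0]), apply(phi, pair[1]))
    f = lcs(*imgs)
    for cand in B_L:
        if is_suffix_pair((cand[0] + f, cand[1] + f), imgs):
            return cand, f            # end-vertex and label of the unique outgoing edge
    return None, f                    # would mean that B_L is not L-forky

def right_edge(phi, B_R, pair):
    imgs = (apply(phi, pair[0]), apply(phi, pair[1]))
    f = lcp(*imgs)
    for cand in B_R:
        if is_prefix_pair((f + cand[0], f + cand[1]), imgs):
            return cand, f
    return None, f

phi_S = {"0": "0012", "1": "2", "2": "012"}
B_L = [("0", "01"), ("0", "012"), ("0", "22"), ("2", "01")]
B_R = [("0", "1"), ("0", "2"), ("1", "2")]
for p in B_L:
    print("GL:", p, "->", left_edge(phi_S, B_L, p))
for p in B_R:
    print("GR:", p, "->", right_edge(phi_S, B_R, p))
\end{verbatim}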
As a straightforward consequence of the definition of forky sets
(especially of Condition $(iv)$) we have the following property of
the graphs.
\begin{lem}
Each vertex in a graph of left and right prolongations has
its out-degree equal to one.
Consequently, any long enough path in the graph ends in a
cycle and any component contains exactly one cycle.
\end{lem}
\begin{exa}
The graphs $\GL{\mathcal{B}_L}_{\varphii{S}}$ and $\GR{\mathcal{B}_R}_{\varphii{S}}$ for $\varphii{S}$ and for the
sets $\mathcal{B}_L$ and $\mathcal{B}_R$ from Example~\ref{exa:vp_S_set_B}
are in Figure~\ref{fig:GL_GR_S}.
\begin{figure}
\caption{The graphs $\GL{\mathcal{B}_L}_{\varphii{S}}$ and $\GR{\mathcal{B}_R}_{\varphii{S}}$ for the morphism~$\varphii{S}$.}
\label{fig:GL_GR_S}
\end{figure}
\end{exa}
\begin{dfn}
Let $\varphi$ be a morphism on $\cal{A}$ with a fixed point $\mathbf{u}$
and let $\mathcal{B}_L$ and $\mathcal{B}_R$ be L-forky and R-forky sets, respectively.
A triplet $((w_1,w_2), v, (w_3,w_4))$ is called a \emph{bispecial (BS) triplet} in $\mathbf{u}$
if $(w_1,w_2) \in \mathcal{B}_L$, $(w_3,w_4) \in \mathcal{B}_R$ and $w_1vw_3, w_2vw_4 \in \mathcal{L}(\mathbf{u})$ or $w_1vw_4, w_2vw_3 \in \mathcal{L}(\mathbf{u})$.
\end{dfn}
\begin{lem}
Let $\varphi$ be a morphism with a fixed point $\mathbf{u}$, let $\mathcal{B}_L$ be an
L-forky and $\mathcal{B}_R$ an R-forky set and let $\mathcal{T} = ((w_1,w_2), v,
(w_3,w_4))$ be a bispecial triplet of $\mathbf{u}$. Let us denote by
\begin{itemize}
\item[(i)] $g_L(w_1,w_2)$ the end of the edge of $\GL{\mathcal{B}_L}_\varphi$ starting in $(w_1,w_2)$,
\item[(ii)] $g_R(w_3,w_4)$ the end of the edge of $\GR{\mathcal{B}_R}_\varphi$ starting in $(w_3,w_4)$.
\end{itemize}
Then
$$
\mathcal{T}' = (g_L(w_1,w_2), f_L(w_1,w_2)\varphi(v)f_R(w_3,w_4), g_R(w_3,w_4))
$$
is also a bispecial triplet of $\mathbf{u}$.
\end{lem}
\begin{dfn}\label{dfn:f_mapping_for_forky_sets}
Denote $\mathcal{B} = (\mathcal{B}_L, \mathcal{B}_R)$. The bispecial triplet $\mathcal{T}'$ from the previous lemma is
called the \emph{$f_{\mathcal{B}}$-image} of a bispecial triplet $\mathcal{T} =((w_1,w_2), v, (w_3,w_4))$.
\end{dfn}
\begin{exa}
Consider again the morphism $\varphii{S}$. Let $\mathcal{B} = (\mathcal{B}_L,\mathcal{B}_R)$,
where $\mathcal{B}_L$ and $\mathcal{B}_R$ are those from
Example~\ref{exa:vp_S_set_B}. Then $((0,012),0,(0,1))$ is a bispecial
triplet since both $001$ and $01200$ are factors. Its $f_\mathcal{B}$-image reads
$((0,22),0120012,(0,2))$ for we have $g_L(0,012) =
(0,22)$, $f_L(0,012) = 012$, $g_R(0,1) = (0,2)$, and $f_R(0,1) = \epsilon$.
\end{exa}
Condition~$(iv)$ from Definition~\ref{dfn:L_forky} of forky sets
allows us to get a compact formula for the
$(f_{\mathcal{B}})^n$-image, i.e., the $f_\mathcal{B}$-image applied repeatedly
$n$ times.
\begin{lem}\label{lem:compact_form_of_f_n_image}
Let $\varphi$ be a morphism and let $((w_1,w_2),v,(w_3,w_4))$ be a bispecial triplet for some
forky sets $\mathcal{B} = (\mathcal{B}_L, \mathcal{B}_R)$. Then for all $n \in \mathbb{N}$ it
holds that its $(f_{\mathcal{B}})^n$-image equals
$$
(g_L^n(w_1,w_2),f_L(\varphi^{n-1}(w_1),\varphi^{n-1}(w_2))\varphi^n(v)f_R(\varphi^{n-1}(w_3),\varphi^{n-1}(w_4)),g_R^n(w_3,w_4)).
$$
\end{lem}
\subsection{Initial BS factors}
From the previous subsection, we know how to get a sequence of BS
factors from some starting one: we just apply
$f_\mathcal{B}$-image repetitively. The goal of the present
subsection is to prove that for circular systems there exists a
finite set of initial BS factors (triplets) such that any other BS factor is
an $(f_\mathcal{B})^n$-image of one of them.
\begin{dfn}
Let $\varphi$ be a morphism injective
on $\mathcal{L}(\mathbf{u})$, where $\mathbf{u}$ is its fixed point, let $\mathcal{B}_L$ and $\mathcal{B}_R$ be L- and R-forky sets,
and $\mathcal{T} = ((w_1,w_2), v, (w_3,w_4))$ a bispecial triplet.
Assume, without loss of generality, that $w_1vw_3, w_2vw_4 \in \mathcal{L}(\mathbf{u})$.
An ordered pair of factors $(v_1,v_2)$ is called a \emph{BS-synchronizing point} of $\mathcal{T}$
if $v = v_1v_2$ and
\begin{multline*}
\forall u_1, u_2, u_3, u_4 \in \mathcal{A}^*, (u_1 w_1 v w_3 u_3, u_2 w_2 v w_4 u_4 \in \varphi(\mathcal{L}(\mathbf{u})) \Rightarrow \\
u_1 w_1 v_1, u_2 w_2 v_1, v_2 w_3 u_3, v_2 w_4 u_4 \in \varphi(\mathcal{L}(\mathbf{u}))).
\end{multline*}
We denote this by $v = v_1|_{bs}v_2$.
\end{dfn}
The notion of BS-synchronizing point is weaker than the one of
synchronizing point: it holds that if $v = v_1|_s v_2$, then $v =
v_1 |_{bs} v_2$. The other direction is not true as it follows
from this example:
\begin{exa}
Given a morphism $0 \mapsto 010, 1 \mapsto 210, 2 \mapsto
220$, the factor $0$ has no synchronizing point. On the other
hand, if we take it as the bispecial triplet $((1,2),0,(0,2))$
it has the BS-synchronizing point $0|_{bs}$.
\end{exa}
\begin{dfn}
Let $\varphi$ be a morphism with a fixed point $\mathbf{u}$ which is injective
on $\mathcal{L}(\mathbf{u})$. A bispecial triplet $\mathcal{T} = ((w_1,w_2), v, (w_3,w_4))$
is said to be \emph{initial} if it does not have any
BS-synchronizing point.
\end{dfn}
\begin{dfn}\label{dfn:synchro_terminology}
Let $\mathcal{T} = ((w_1,w_2), v, (w_3,w_4))$ be a bispecial triplet which is not initial and let $(v_1,v_2), (v_3,v_4), \ldots,
(v_{2m-1},v_{2m})$ be all its BS-synchronizing points such
that $|v_1| < |v_3| < \cdots < |v_{2m-1}|$. The factor
$v_1$ is said to be the \emph{non-synchronized prefix}, $v_{2m}$ the \emph{non-synchronized
suffix} and the factor $(v_1)^{-1}v(v_{2m})^{-1}$ is called the \emph{synchronized factor} of $\mathcal{T}$.
\end{dfn}
Now we state the main theorem of this section.
\begin{thm}
Let $(\mathcal{A},\varphi, a)$ be a circular non-pushy D0L-system,
$\mathcal{B}_L$ and $\mathcal{B}_R$ its L-forky and R-forky set, and $\mathbf{u}
= \varphi^\omega(a)$ infinite. Then there exists a finite set $\mathcal{I}$ of
bispecial triplets such that for any bispecial factor $v$ there exist a
bispecial triplet $\mathcal{T} \in \mathcal{I}$ and $n \in \mathbb{N}$ such that
$((w_1,w_2),v,(w_3,w_4)) = (f_{\mathcal{B}})^n(\mathcal{T})$ for some
$(w_1,w_2) \in \mathcal{B}_L$ and $(w_3,w_4) \in \mathcal{B}_R$.
\end{thm}
\begin{proof}
Let $\mathcal{I}$ be the set of initial bispecial triplets. The
finiteness of $\mathcal{I}$ is a direct consequence of the
definition of circularity: elements of $\mathcal{I}$ cannot be
longer than the synchronizing delay. The rest of the statement
follows from the fact that any non-initial triplet has at least
one $f_{\mathcal{B}}$-preimage.
To prove this, it suffices to realize that the synchronized
factor of $((w_1,w_2),v,(w_3,w_4))$ has unique $\varphi$-preimage $v'$ (possibly the empty word) and
that the non-synchronized prefix (resp. suffix) must be equal
to $f_L(w'_1,w'_2)$ for some $(w'_1,w'_2) \in \mathcal{B}_L$
(resp. to $f_R(w'_3,w'_4)$ for some $(w'_3,w'_4) \in
\mathcal{B}_R$).
\end{proof}
\begin{exa}
We now find the set $\mathcal{I}$ for $\mathbf{u}i{S}$ with $\mathcal{B} =
(\mathcal{B}_L,\mathcal{B}_R)$, where $\mathcal{B}_L$ and $\mathcal{B}_R$ are from
Example~\ref{exa:vp_S_set_B}, the graphs of prolongations are in Figure~\ref{fig:GL_GR_S}. The only BS factors without
synchronizing points are $\epsilon$ and $0$ since any other factor
contains the letter~$2$ (and hence it has a synchronizing point) or is not
BS. It remains to find all corresponding triplets:
\begin{center}
\begin{tabular}{ c c c c }
$((0,01),\epsilon,(1,2))$, & $((0,01),\epsilon,(0,2))$, & $((0,012),\epsilon,(0,1))$, & $((0,012),\epsilon,(0,2))$, \\
$((0,012),\epsilon,(1,2))$, & $((0,22),\epsilon,(0,1))$, & $((2,01),\epsilon,(0,2))$, & $((0,012),0,(0,1))$.
\end{tabular}\\[2mm]
\end{center}
Since we are usually interested in nonempty BS factors, we can
replace the bispecial triplets containing $\epsilon$ with their
$f_\mathcal{B}$-images and get:
\begin{center}
\begin{tabular}{ c c c }
$((2,01),2,(0,2))$, & $((2,01),20,(0,1))$, & $((0,22),012,(0,2))$, \\
$((0,22),0120,(0,1))$, & $((0,012),012,(0,2))$, & $((0,012),0,(0,1))$.
\end{tabular}\\[2mm]
\end{center}
There are only 6 bispecial triplets since $((0,012),\epsilon,(0,1))$ and
$((0,012),\epsilon,(1,2))$ have the same $f_\mathcal{B}$-image and so do $((0,01),\epsilon,(0,2))$ and
$((2,01),\epsilon,(0,2))$.
\end{exa}
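These lists can be cross-checked against a brute-force enumeration: on a long finite prefix one can collect, for every short factor, its left and right extensions and keep the bispecial ones. Extension sets computed from a prefix are only approximations, so the Python sketch below (ours) is a plausibility check rather than a proof.

\begin{verbatim}
# Brute-force bispecial factors (up to a small length) in a prefix of the fixed
# point of S: 0->0012, 1->2, 2->012.
from collections import defaultdict

def apply(phi, w):
    return "".join(phi[c] for c in w)

phi_S = {"0": "0012", "1": "2", "2": "012"}
u = "0"
for _ in range(9):
    u = apply(phi_S, u)

def bispecial_factors(prefix, max_len):
    lext, rext = defaultdict(set), defaultdict(set)
    for ln in range(1, max_len + 1):
        for i in range(1, len(prefix) - ln):
            w = prefix[i:i + ln]
            lext[w].add(prefix[i - 1])
            rext[w].add(prefix[i + ln])
    return sorted(w for w in lext if len(lext[w]) >= 2 and len(rext[w]) >= 2)

print(bispecial_factors(u, 8))   # contains e.g. '0', '2', '012', '0120012', ...
\end{verbatim}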
\section{Infinite special branches} \label{sec:infinite_branches}
In the preceding section we have described a tool allowing us to
find all BS factors. It requires some effort to construct the
graphs $\mathrm{GL}$ and $\mathrm{GR}$ and to find all initial
bispecial triplets, but it can be done by an algorithm. However,
even if we have all these necessities in hand, it may be still a
long way to the complete knowledge of the structure of all BS
factors. Nevertheless, there is a class of special factors which
can be identified directly from the graphs $\mathrm{GL}$ and
$\mathrm{GR}$, namely, the prefixes (or suffixes) of the so-called
infinite LS (or RS) branches.
\begin{dfn}
An infinite word $\mathbf{w}$ is an \emph{infinite LS
branch} of an infinite word $\mathbf{u}$ if each prefix of $\mathbf{w}$ is~a~LS
factor of $\mathbf{u}$. We put
$$
\mathrm{Lext}(\mathbf{w}) = \bigcap_{v \text{ prefix of } \mathbf{w}}\mathrm{Lext}(v).
$$
Infinite RS branches are defined in the same manner, only that they are
infinite to the right.
\end{dfn}
Here are some (almost) obvious statements on infinite special
branches in an infinite word:
\begin{pro} \label{pro:inf_LS_branch} \
\begin{itemize}
\item[(i)] If $\mathbf{u}$ is eventually periodic, then there is
no infinite LS branch of $\mathbf{u}$,
\item[(ii)] if $\mathbf{u}$ is aperiodic, then there exists at
least one infinite LS branch of $\mathbf{u}$,
\item[(iii)] if $\mathbf{u}$ is a fixed point of a primitive
morphism, then the number of infinite LS
branches is bounded.
\end{itemize}
\end{pro}
\begin{proof}
Item $(i)$ is obvious, $(iii)$ is a direct consequence of the
fact that the first difference of complexity is
bounded~\cite{Cassaigne1996}. The proof of item $(ii)$ follows from
the famous K\"{o}nig infinity lemma~\cite{Konig1936} applied to the
sets $V_1, V_2, \ldots$, where the set $V_k$ comprises all LS
factors of length $k$ and where $v_1 \in V_i$ is connected by an
edge with $v_2 \in V_{i+1}$ if $v_1$ is a prefix of $v_2$.
\end{proof}
Imagine now that we have an L-forky set $\mathcal{B}_L$ and an
infinite LS branch $\mathbf{w}$. There must exist $(v_1,v_2) \in
\mathcal{B}_L$ such that $v_1w$ and $v_2w$ are factors for any
prefix $w$ of $\mathbf{w}$. Such a pair, together with $\mathbf{w}$, forms an infinite LS pair.
\begin{dfn}
Let $(v_1,v_2)$ be an element of an L-forky set corresponding
to a fixed point $\mathbf{u}$ of a morphism $\varphi$. The ordered pair
$((v_1,v_2),\mathbf{w})$ is called an \emph{infinite LS pair} if for any
prefix $w$ of $\mathbf{w}$ the words $v_1w$ and $v_2w$ are factors of
$\mathbf{u}$.
Further, we define the \emph{$f_{\mathcal{B}_L}$-image} of an infinite LS pair
$((v_1,v_2),\mathbf{w})$ as the infinite LS pair $((v'_1,v'_2),\mathbf{w}')$, where
$(v'_1,v'_2) = g_L(v_1,v_2)$ and $\mathbf{w}' = f_L(v_1,v_2)\varphi(\mathbf{w})$.
\end{dfn}
Having the $f_{\mathcal{B}_L}$-image of an infinite LS branch, we are
again interested in its $f_{\mathcal{B}_L}$-preimage.
\begin{lem}\label{lem:f_preimage_of_infinite_pairs}
Let $(\mathcal{A},\varphi,a), a \in \mathcal{A},$ be a circular D0L-system with
an infinite fixed point $\mathbf{u} = \varphi^\omega(a)$ and let
$\mathcal{B}_L$ be its L-forky set. Then any infinite LS pair is the $f_{\mathcal{B}_L}$-image of a unique infinite LS pair.
\end{lem}
\begin{proof}
Let $((v_1,v_2),\mathbf{w})$ be an infinite LS pair and let $D$ be a synchronizing
delay of $\varphi$. Then any prefix $w$ of $\mathbf{w}$ of length at least $D$
has the same left-most synchronizing point
$(w_1,(w_1)^{-1}w)$. Since such $w$ is LS, $w_1$ must be
the label of an edge in $\GL{\mathcal{B}_L}_\varphi$ whose end-vertex is
$(v_1,v_2)$ and whose starting vertex is some $(v'_1,v'_2)$. The infinite word
$(w_1)^{-1}\mathbf{w}$ must have a unique $\varphi$-preimage $\mathbf{w}'$.
\end{proof}
Since any infinite LS pair $((v_1,v_2),\mathbf{w})$ has an
$f_{\mathcal{B}_L}$-preimage, the in-degree of the vertex
$(v_1,v_2)$ in the graph of left prolongations $\GL{\mathcal{B}_L}_\varphi$ must be
at least one.
\begin{cor}
Let $(\mathcal{A},\varphi,a)$ be a circular D0L-system with a fixed point
$\mathbf{u}$ and an L-forky set $\mathcal{B}_L$. If $((v_1,v_2),\mathbf{w})$ is an
infinite LS pair then $(v_1,v_2)$ is a vertex of a cycle in
$\GL{\mathcal{B}_L}_\varphi$.
\end{cor}
We know that the number of infinite LS pairs in a fixed point of a
primitive morphism is finite (see
Proposition~\ref{pro:inf_LS_branch}); the following proposition
says that this is true even if we weaken the assumption from
primitive to circular and non-pushy. The proof of the proposition will,
moreover, give us a simple method of how to find all these
infinite LS pairs.
\begin{thm}\label{thm:infinite_branches}
Let $(\mathcal{A},\varphi,a), a \in \mathcal{A},$ be a circular
D0L-system with an L-forky set $\mathcal{B}_L$ such that $\varphi^{\omega}(a) = \mathbf{u}$ is
infinite. Then there exist only a finite number of infinite LS pairs.
\end{thm}
\begin{proof}
Let $((v_1,v_2),\mathbf{w})$ be an infinite LS pair and let
$(v_1,v_2)$ be a vertex of a cycle in $\GL{\mathcal{B}_L}_\varphi$. Let us
assume that the cycle is of length $k$. Then it is labeled
by words $f_L(v_1,v_2), f_L(g_L(v_1,v_2)), \ldots, f_L(g_L^{k-1}(v_1,v_2))$ where $f_L(v_1,v_2)$ is the label of the
edge starting in $(v_1,v_2)$ (see Figure~\ref{fig:proof_of_infinite_branches}).
\begin{figure}
\caption{The notation from the proof of Theorem~\ref{thm:infinite_branches}.}
\label{fig:proof_of_infinite_branches}
\end{figure}
We distinguish two cases:\\[2mm]
$(a)$ At least one of the labels of the cycle is not
the empty word. Applying $k$ times Lemma~\ref{lem:f_preimage_of_infinite_pairs} we can find the
infinite LS pair $((v_1,v_2),\mathbf{w}')$ such that $((v_1,v_2),\mathbf{w})$ is the
$(f_{\mathcal{B}_L})^{k}$-image of $((v_1,v_2),\mathbf{w}')$, i.e.,
$$
\mathbf{w} = \underbrace{f_L(g_L^{k-1}(v_1,v_2))\,\varphi\big(f_L(g_L^{k-2}(v_1,v_2))\big)\cdots
\varphi^{k-2}\big(f_L(g_L(v_1,v_2))\big)\,\varphi^{k-1}\big(f_L(v_1,v_2)\big)}_{\text{denoted by }s}\,\varphi^{k}(\mathbf{w}') = s\varphi^{k}(\mathbf{w}').
$$
Since $\mathbf{w}'$ can be expressed again as $\mathbf{w}' =
s\varphi^k(\mathbf{w}'')$ for some infinite LS pair $((v_1,v_2),\mathbf{w}'')$, we have
$$
\mathbf{w} = s\varphi^k(s)\varphi^{2k}(s)\varphi^{3k}(\mathbf{w}'').
$$
Continuing in this construction one can prove that
$s\varphi^{k}(s)\cdots\varphi^{nk}(s)$ is a prefix of $\mathbf{w}$ for all $n \in \mathbb{N}$. Therefore, we get
$$
\mathbf{w} = s\varphi^{k}(s)\varphi^{2k}(s)\varphi^{3k}(s)\cdots.
$$
We have just shown that exactly one infinite LS pair corresponds to each vertex of the cycle.
\noindent $(b)$ Now assume that all the labels of the cycle are empty words.
In such a case the $f_{\mathcal{B}_L}$-image coincides with $\varphi$-image,
meaning that $(f_{\mathcal{B}_L})^j$-image of $((v_1,v_2),\mathbf{w})$ is
$(g_L^j(v_1,v_2),\varphi^j(\mathbf{w}))$ for all $j = 1,2,\cdots$. We
want to prove that $\mathbf{w}$ must be a periodic point of $\varphi$.
Consider the directed graph whose vertices are the first
letters of $\varphi(b), b \in \mathcal{A},$ and there is an edge from $b$
to $c$ if $c$ is the first letter of $\varphi(b)$. Clearly, the
first letter of $\mathbf{w}$, say $b$, must be again a vertex of a
cycle in this graph. Let $\ell$ be the length of this cycle.
For reasons analogous to those above the
$(f_{\mathcal{B}_L})^{j\ell}$-image and $(f_{\mathcal{B}_L})^{j\ell}$-preimage
of $\mathbf{w}$ must also begin in $b$. Therefore, $\mathbf{w}$ contains the
factor $\varphi^{j\ell}(b)$ as a prefix for all $j = 1,2,\ldots$ and this implies that $\mathbf{w} =
(\varphi^\ell)^\omega(b)$, i.e., $\mathbf{w}$ is a periodic point of $\varphi$.
Since the number of vertices of $\GL{\mathcal{B}_L}_\varphi$ and of
periodic points is finite, the number of infinite LS pairs
must be finite as well.
\end{proof}
The previous proof is also a proof of the following
corollary which gives us a method of how to find all infinite LS
branches.
\begin{cor}\label{cor:how_to_find_LS_branches}
Let $(\mathcal{A},\varphi,a),a \in \mathcal{A},$ be a circular D0L-system, $\mathbf{u}
= \varphi^{\omega}(a)$ infinite with $\mathcal{B}_L$ an L-forky set and let
$((v_1,v_2),\mathbf{w})$ be an infinite LS pair. Then either $\mathbf{w}$ is a
\emph{periodic point} of $\varphi$, i.e.,
\begin{equation}\label{eq:eq_for_inf_LS_type1}
\mathbf{w} = \varphi^{\ell }(\mathbf{w}) \quad \text{for some $\ell \geq 1$},
\end{equation}
and $(v_1,v_2)$ is a vertex of a cycle in $\GL{\mathcal{B}_L}_\varphi$ labeled by
$\epsilon$ only, or $\mathbf{w} = s\varphi^{\ell }(s)\varphi^{2\ell}(s)\cdots$ is the
unique solution of the equation
\begin{equation}\label{eq:eq_for_inf_LS_type2}
\mathbf{w} = s\varphi^{\ell }(\mathbf{w}),
\end{equation}
where $(v_1,v_2)$ is a vertex of a cycle in $\GL{\mathcal{B}_L}_\varphi$ containing at
least one edge with a non-empty label, $\ell$ is the length of this
cycle and
\begin{equation}\label{eq:def_of_prefix_s}
s = f_L(g_L^{\ell-1}(v_1,v_2))\,\varphi\big(f_L(g_L^{\ell-2}(v_1,v_2))\big) \cdots
\varphi^{\ell-2}\big(f_L(g_L(v_1,v_2))\big)\,\varphi^{\ell-1}\big(f_L(v_1,v_2)\big).
\end{equation}
\end{cor}
We demonstrate this method on an example morphism.
\begin{exa}
We consider the morphism
\begin{equation} \label{eq:ex_subst}
\varphii{P}: 1 \mapsto 1211, 2 \mapsto 311, 3 \mapsto 2412, 4 \mapsto 435, 5 \mapsto 534
\end{equation}
with $\mathbf{u} = \varphii{P}^\omega(1)$. This morphism is suffix- and prefix-free and
so the set of all unordered pairs of distinct letters is L-forky. The graph
of left prolongations is in Figure~\ref{fig:GL_example}.
\begin{figure}
\caption{The graph of left prolongations $\GL{\mathcal{B}_L}_{\varphii{P}}$.}
\label{fig:GL_example}
\end{figure}
The morphism $\varphii{P}$ has five periodic points
$$
\varphii{P}^\omega(1),\varphii{P}^\omega(4),\varphii{P}^\omega(5),(\varphii{P}^2)^\omega(2),(\varphii{P}^2)^\omega(3).
$$
It is easy to show that
\begin{multline*}
\mathrm{Lext}(1) = \{1,2,3,4,5\}, \mathrm{Lext}(2) = \{1,4,5\}, \mathrm{Lext}(3) = \{1,4,5\},\\
\mathrm{Lext}(4) = \{1,2,3\}, \mathrm{Lext}(5) = \{1,2,3\}.
\end{multline*}
Looking at the graph of left prolongations depicted in
Figure~\ref{fig:GL_example}, we see that
$\varphii{P}^\omega(4)$ and $\varphii{P}^\omega(5)$ are not infinite LS branches as none
of the vertices $(1,2), (2,3)$ and $(1,3)$ is a vertex of a cycle
labeled by $\epsilon$ only. Hence, only
$\varphii{P}^\omega(1),(\varphii{P}^2)^\omega(2),(\varphii{P}^2)^\omega(3)$ are infinite
LS branches with left extensions $1,4,5$.
As for infinite LS branches corresponding to
Equation~\eqref{eq:eq_for_inf_LS_type2}, in the case of our
example, there is only one cycle which is not labeled by the empty word:
the cycle between vertices $(1,2)$ and $(2,3)$. There are two (= the
length of the cycle) equations corresponding to this cycle
$$
\mathbf{w} = \varphii{P}(11)\varphii{P}^2(\mathbf{w}) \quad \text{and} \quad \mathbf{w} =
11\varphii{P}^2(\mathbf{w}).
$$
They give us two infinite LS branches
\begin{equation*}
\begin{split}
& \varphii{P}(11)\varphii{P}^3(11)\varphii{P}^5(11)\cdots,\\
& 11\varphii{P}^2(11)\varphii{P}^4(11)\cdots,
\end{split}
\end{equation*}
the former having left extensions $1$ and $2$ and the latter $2$
and $3$.
\end{exa}
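The two branches can also be produced and checked numerically. The Python sketch below (ours) builds prefixes of $s\varphii{P}^2(s)\varphii{P}^4(s)\cdots$ for both choices of $s$ and lists, on a long finite prefix of the fixed point, the left extensions observed for a short prefix of each branch; on a sufficiently long prefix these sets contain the stated extensions.

\begin{verbatim}
# Prefixes of the two infinite LS branches of the fixed point of P: 1->1211,
# 2->311, 3->2412, 4->435, 5->534 (cycle of length 2 with labels '11' and '').
def apply(phi, w):
    return "".join(phi[c] for c in w)

phi_P = {"1": "1211", "2": "311", "3": "2412", "4": "435", "5": "534"}

def phi_iter(w, n):
    for _ in range(n):
        w = apply(phi_P, w)
    return w

def branch_prefix(s, ell, target_len):
    w, block = "", s                  # w = s . phi^ell(s) . phi^(2 ell)(s) ...
    while len(w) < target_len:
        w += block
        block = phi_iter(block, ell)
    return w[:target_len]

u = phi_iter("1", 9)                  # a long prefix of the fixed point
for s in (apply(phi_P, "11"), "11"):  # the two solutions of w = s . phi^2(w)
    pref = branch_prefix(s, 2, 10)
    exts = sorted({u[i - 1] for i in range(1, len(u) - len(pref))
                   if u[i:i + len(pref)] == pref})
    print(pref, "left extensions seen:", exts)
# on a long enough prefix the sets contain {1, 2} and {2, 3}, respectively
\end{verbatim}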
\section{Assumptions and a connection with the critical exponent}
\label{sec:assumptions}
The method of generating all BS factors of a given D0L-system that we have
described above relies on two facts: there exist L- and R-forky sets, and the
number of initial BS triplets is finite. The former was proved for circular and
non-pushy systems and the latter for circular ones only. Are these assumptions
necessary or can they be weakened?
Let us take the morphism $0 \mapsto 001, 1 \mapsto 1$, which is pushy and circular,
and its fixed point $\mathbf{u}$ starting in $0$. If we try to construct an L-forky set as it
has been defined in this paper, we will find out that it is not possible. Natural
candidates for vertices of the graph of left prolongations are the pairs
$(0,1^n)$ with $n \in \mathbb{N}$, but no finite set of such pairs satisfies property
$(iv)$ of Definition~\ref{dfn:L_forky}. So it seems that assuming the morphism
is non-pushy is unavoidable for the existence of forky
sets. However, if we relax the definition and allow the pairs of factors to be
infinitely long, we can find something like an L-forky set even for this morphism:
define a directed graph of left prolongations with only one vertex
$(0,\cdots 111)$ and one loop on this vertex with label 1, and a directed graph
of right prolongations with one vertex $(0,1)$ and a loop on it with an empty label;
then all BS factors in $\mathbf{u}$ are the iterated $f$-images of the BS-triplet $((0,\cdots
111),0,(0,1))$, namely $0, 1\varphi(0), 1\varphi(1)\varphi^2(0), \ldots$
Now, consider a morphism $0 \mapsto 001, 1 \mapsto 11$
and its fixed point $\mathbf{u}$ starting in $0$. This morphism is non-pushy and
non-circular. In this case, L- and R-forky sets exist; we can simply take
$\{(0,1)\}$, and there is only one (nonempty) initial BS-triplet with no
\emph{BS-synchronizing} point, namely $((0,1),0,(0,1))$. Its $f$-images again read $0,
1\varphi(0), 1\varphi(1)\varphi^2(0), \ldots$. In fact, to prove that the set of initial
BS-triplets is finite, we only need to know that there are not infinitely many
BS-triplets without a BS-synchronizing point, and this seems to be true even for
non-circular morphisms.
All considered morphisms for which our method does not work (or is not proved to
work) have an infinite critical exponent. The following theorem says this is not
a misleading observation but a general rule.
\begin{thm}
Let $G = ({\cal{A}}, \varphi, w)$ be a D0L-system.
Then the critical exponent of ${\cal{L}}(G)$ is finite if and only if $G$ is
circular and non-pushy.
\end{thm}
\begin{proof}
$(\Rightarrow)$: Circularity follows from Theorem~\ref{thm:mignosi_seebold}. If $G$
were pushy, then by Lemma~\ref{lem:pushy_systems} its language would be strongly repetitive, contradicting the finiteness of the critical exponent; thus $G$ is non-pushy.
\noindent $(\Leftarrow)$: Suppose the critical exponent of $\mathcal{L}(G)$ is
infinite and that $G$ is circular and non-pushy. According to
Theorem~\ref{thm:strongly_repetitive}, there exists a non-empty factor $v \in \mathcal{L}(G)$ such that for all $n \in \mathbb{N}$, $v^n \in \mathcal{L}(G)$. Take the shortest factor $v$ having such property.
Since $G$ is circular, there exists a finite synchronizing delay $D$.
Take $N \in \mathbb{N}$ such that $|v^N| \geq D$.
Then $v^N$ contains a synchronizing point, i.e., $v^N = v_1 |_s v_2$.
It is clear that $v^{N+1}$ contains at least two synchronizing points, i.e.,
$v^{N+1} = v_1 |_s v_2 v = v v_1 |_s v_2$. In general, $v^{N+k}$ contains $k+1$
synchronizing points at fixed distances equal to $|v|$. Since $\varphi$ is injective, it implies that there exists a unique $z \in \mathcal{L}(G)$ such that $v^{N+k} = p \varphi(z^{k})s$
(for some factors $p$ and $s$) and $z^k \in \mathcal{L}(G)$ for all $k \geq 0$.
According to the choice of $v$, it is clear that $|\varphi(z)| = |z| = |v|$.
Denote by ${\mathcal{L}}_1(z)$ the set of letters occurring in $z$.
It is clear that $\varphi({\mathcal{L}}_1(z)) = {\mathcal{L}}_1(v)$ and $\forall a \in
{\mathcal{L}}_1(z)$ we have $|\varphi(a)| = 1$.
We can now repeat the process: take the factor $z$ to play the role of factor $v$.
Thus, we can find an infinite sequence of factors $z_0 = z, z_1, z_2 \ldots $ such that
$\varphi({\mathcal{L}}_1(z_{k+1})) = {\mathcal{L}}_1(z_{k})$ and $|z_k| = |z|$ for all $k \geq
0$. Since $\cal{A}$ is finite, there clearly exist integers $m \neq \ell$
such that ${\mathcal{L}}_1(z_m) = {\mathcal{L}}_1(z_\ell)$. This implies that for all $k$ the
factor $z_k$ is composed of letters of rank zero, which contradicts $G$ being non-pushy.
\end{proof}
\section{Conclusion}
The tool we have introduced in this paper enables us to construct an algorithm
which finds all BS factors in a given circular non-pushy D0L-system by
producing the graphs of prolongations and the set of initial BS factors; a
slightly simplified version of it was implemented by \v{S}t\v{e}p\'{a}n Starosta
using SAGE. A sketch of the algorithm is as follows (a code sketch is given after the list):
\begin{enumerate}
\item Decide whether the input D0L-system is strongly repetitive using the
algorithm from \cite{Ehrenfeucht1983}. If it is, then by the previous theorem
and Theorem~\ref{thm:strongly_repetitive} the D0L-system is non-circular or
pushy and our method does not work. If it is not, proceed with the following steps.
\item Construct L- and R-forky sets: the details of the construction are a bit
technical, but the basic idea is the same as the one we used in
Example~\ref{exa:vp_S_set_B}.
\item Find all initial BS triplets without any BS-synchronizing point. The
fact that the system is circular ensures the algorithm stops after a finite
number of steps.
\end{enumerate}
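For concreteness, the following is a hypothetical Python sketch of the top-level structure of this algorithm. It is only an illustration of the three steps above: the helper functions are placeholders for the technical subroutines discussed in the text (they are not part of the implementation mentioned above) and are deliberately left unimplemented.
\begin{verbatim}
# A hypothetical structural sketch of the algorithm sketched above.
def is_strongly_repetitive(system):
    raise NotImplementedError   # step 1: the decidability test of the cited algorithm

def construct_forky_sets(system):
    raise NotImplementedError   # step 2: L- and R-forky sets, as in the example

def initial_bs_triplets(system, forky_sets):
    raise NotImplementedError   # step 3: finite search; stops by circularity

def find_bs_factors(system):
    """Return (forky sets, initial BS-triplets), or None if the method does not apply."""
    if is_strongly_repetitive(system):
        return None             # the system is non-circular or pushy
    forky = construct_forky_sets(system)
    return forky, initial_bs_triplets(system, forky)
\end{verbatim}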
\end{document}
\begin{document}
\baselineskip 15truept
{\it Accepted for publication in Mathematica Slovaca}
\title{Zero-divisor graphs of lower dismantlable lattices-I}
\date{}
\dedicatory{Dedicated to
Professor N. K. Thakare on his $77^{th}$ birthday}
\author[Avinash Patil, B. N. Waphare, V. V. Joshi \and H. Y. Pourali]
{Avinash Patil*, B. N. Waphare**, V. V. Joshi** \and H. Y. Pourali**}
\newcommand{\newline\indent}{\newline\indent}
\address{\llap{*\,}Department of Mathematics\newline\indent
Garware College of Commerce\newline\indent
Karve Road, Pune-411004\newline\indent
India.}
\email{[email protected]}
\address{\llap{**\,}Department of Mathematics\newline\indent
Savitribai Phule Pune University\newline\indent
Pune-411007\newline\indent
India.}
\email{[email protected]; [email protected]}
\email{[email protected]; [email protected]}
\email{[email protected]}
\begin{abstract}In this paper, we study the zero-divisor graphs
of a subclass of dismantlable lattices. These graphs are characterized in terms of the non-ancestor graphs of rooted
trees.\end{abstract} \maketitle \noindent
{\bf Keywords:} Dismantlable lattice, adjunct element,
adjunct representation, zero-divisor graph, cover graph, incomparability graph.\\
{\bf MSC(2010):} {Primary $05$C$25$, Secondary $05$C$75$}.
\section{Introduction}
Beck \cite{2} introduced the concept of zero-divisor graph of a
commutative ring $R$ with unity as follows. Let $G$ be a simple
graph whose vertices are the elements of $R$ and two vertices $x$
and $y$ are adjacent if $xy = 0$. The graph $G$ is known as the
\textit{zero-divisor graph} of $R$. He was mainly interested in the
coloring of this graph. This concept is well studied in
algebraic structures such as rings, semigroups, lattices,
semilattices as well as in ordered structures such as posets and
qosets; see Anderson et al. \cite{DP}, Alizadeh et al. \cite{1},
LaGrange \cite{11,12}, Lu and Wu \cite{13}, Joshi and Khiste
\cite{8}, Nimbhorkar et al. \cite{15}, Hala\v {s} and Jukl
\cite{4}, Joshi \cite{7}, Joshi, Waphare and Pourali \cite{9, jwp} and
Hala\v {s} and L\"{a}nger \cite{5}.
A graph is called \textit{realizable as zero-divisor graph} if it
is isomorphic to the zero-divisor graph of an algebraic structure
or an ordered structure. In \cite{12}, LaGrange characterized
simple graphs which are realizable as the zero-divisor graphs of
Boolean rings and in \cite{13}, Lu and Wu gave a class of graphs
that are realizable as the zero-divisor graphs of posets.
Recently, Joshi and Khiste \cite{8} extended the result of
LaGrange \cite{12} by characterizing simple graphs which are
realizable as zero-divisor graphs of Boolean posets.
In this paper, we provide a class of graphs, namely the
non-ancestor graphs of rooted trees, that are realizable as
zero-divisor graphs of lower dismantlable lattices. In fact we
prove:
\begin{thm}\label{t1}For a simple undirected graph $G$, the following statements are equivalent.
\begin{enumerate}
\item[$(a)$] $G\in \mathcal{G_T}$, the class of non-ancestor
graphs of rooted trees. \item[$(b)$] $G=G_{\{0\}}(L)$ for
some lower dismantlable lattice $L$ with the greatest element $1$
as a join-reducible element. \item[$(c)$] $G$ is the
incomparability graph of $(L\backslash \{0,1\},\leq) $ for some
lower dismantlable lattice $L$ with the greatest element $1$ as a
join-reducible element.
\end{enumerate}
\end{thm}
Rival \cite{17} introduced dismantlable lattices to study the
combinatorial properties of doubly irreducible elements. By
dismantlable lattice, we mean a lattice which can be completely
``dismantled" by removing one element at each stage. Kelly and Rival
\cite{10} characterized dismantlable lattices by means of crowns,
whereas Thakare, Pawar and Waphare \cite{18} gave a structure
theorem for dismantlable lattices using adjunct operation.
Now we begin with the necessary definitions and terminology.
\begin{defn}
A nonzero element $p$ of a lattice $L$ with 0 is an {\it atom} if $0\prec p$ (by
$a\prec b$, we mean there is no $c$ such that $a<c<b$). Dually, a nonzero element $d$ of a lattice $L$ with 1 is a \textit{dual atom} if $d\prec 1$.
\end{defn}
\begin{defn}[Definition 2.1, Thakare et al. \cite{18}]\label{d1}
Let $L_1$ and $L_2$ be two disjoint finite lattices and let $(a, b)$ be a pair
of elements in $L_1$ such that $a < b$ and $a \not\prec b$. Define the
partial order $\leq$ on $L = L_1 \cup L_2$ with respect to
the pair $(a,b)$ as follows.
$x \leq y$ in $L$ if
either $x,y \in L_1$ and $x \leq y$ in $L_1$;
or $x,y \in L_2$ and $x \leq y$ in $L_2$;
or $x \in L_1,$ $ y \in L_2$ and $x \leq a$ in $L_1$;
or $x \in L_2,$ $ y \in L_1$ and $b \leq y$ in $L_1$.
It is easy to see that $L$ is a lattice containing $L_1$ and $L_2$
as sublattices. The procedure of obtaining $L$ in this way is
called an {\it adjunct operation of $L_2$ to $L_1$}. The pair $(a,b)$
is called {\it an adjunct pair } and $L$ is an {\it adjunct }
of $L_2$ to $L_1$ with respect to the adjunct pair ($a,b$) and we
write $L = L_1 ]^b_a L_2$.
\end{defn}
We place the Hasse diagrams of $L_1$, $L_2$ side
by side in such a way that the greatest element $1_{L_2}$ of $L_2$
is at a lower position than $b$ and the least element $0_{L_2}$
of $L_2$ is at a higher position than $a$. Then add the
coverings $1_{L_2}\prec b$ and $ a\prec 0_{L_2}$, as shown in
Figure \ref{f1}, to obtain the Hasse diagram of $L=L_1]_a^bL_2$.
\begin{figure}
\caption{Adjunct of two lattices $L_1$ and $L_2$}
\label{f1}
\end{figure}
Clearly, $|E(L)| = |E(L_1)| + |E(L_2)|+2$, where $E(L)$ denotes
the edge set of the Hasse diagram of $L$. This also implies that the adjunct
operation preserves all the covering relations of the individual
lattices $L_1$ and $L_2$. Also note that if $x,y\in L_2$, then $a\prec 0_{L_2}\leq x\wedge y$. Hence $x\wedge y\neq 0$ in $L=L_1]_a^bL_2$.
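To make Definition~\ref{d1} concrete, the following is a minimal Python sketch (an illustration only; the representation of a lattice by its carrier set together with the set of pairs $(x,y)$ with $x\leq y$ is our assumption, not part of the paper).
\begin{verbatim}
# A minimal sketch of the adjunct L = L1 ]_a^b L2 of Definition 2.1.
# A finite lattice is represented as (elements, leq), where leq is the set of
# ordered pairs (x, y) with x <= y; the two carriers are assumed disjoint.

def adjunct(L1, L2, a, b):
    elems1, leq1 = L1
    elems2, leq2 = L2
    elems = elems1 | elems2
    leq = leq1 | leq2
    # x in L1 lies below y in L2 exactly when x <= a in L1
    leq |= {(x, y) for x in elems1 for y in elems2 if (x, a) in leq1}
    # x in L2 lies below y in L1 exactly when b <= y in L1
    leq |= {(x, y) for x in elems2 for y in elems1 if (b, y) in leq1}
    return elems, leq

# Example: C1 is the chain 0 < x < 1 (so (0,1) is a valid pair: 0 < 1 and
# 0 is not covered by 1), and C2 is the single-element chain {y}.
C1 = ({"0", "x", "1"},
      {("0", "0"), ("x", "x"), ("1", "1"), ("0", "x"), ("x", "1"), ("0", "1")})
C2 = ({"y"}, {("y", "y")})
L = adjunct(C1, C2, "0", "1")   # in L we have x || y, and x meets y at 0
\end{verbatim}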
\pagebreak
\section{Properties of zero-divisor graphs of dismantlable lattices}
Following Beck \cite{2}, Nimbhorkar et al. \cite{15}
introduced the concept of zero-divisor graph of meet-semilattices
with 0, which was further extended by Hala\v {s} and Jukl \cite{4}
to posets with 0. Recently, Joshi \cite{7} introduced the
zero-divisor graph with respect to an ideal $I$ of a poset with 0.
Note that his definition of zero-divisor graph coincides with the
definition of Lu and Wu \cite{13} when $I=\{0\}$.
A nonempty subset $I$ of a lattice $L$ is an {\it ideal} of $L$ if
$a,b\in I$ and $c\in L$ with $c\leq a$ implies $c\in I$ and $a\vee
b\in I$. An ideal $I\neq L$ is a {\it prime ideal} if $a\wedge
b\in I$ implies either $a\in I$ or $b\in I$. A prime ideal $P$ of
a lattice $L$ is a {\it minimal prime ideal} if for any prime ideal
$Q$ we have $P\subseteq Q\subseteq L$ implies either $P=Q$ or
$Q=L$.
Now, we recall the definition of zero-divisor graph given by Joshi \cite{7} when the corresponding poset is a lattice and an
ideal $I=\{0\}$.
\begin{defn}[Definition 2.1, Joshi \cite{7}]\label{d2} Let $L$ be a lattice with the least element 0. We associate a simple
undirected graph $G_{\{0\}}(L)$ as
follows. The set of vertices of $G_{\{0\}}(L)$ is\\
$V\left(G_{\{0\}}(L)\right)=\Bigl\{x \in L \setminus \{0\}~:~x\wedge y=0$ for some $y \in L\setminus \{0\} \Bigr\}$ and
distinct vertices $x, y$ are adjacent if and only if $x\wedge
y=0$. The graph $G_{\{0\}}(L)$ is called the {\it zero-divisor
graph of $L$}.
\end{defn}
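As an illustration of Definition~\ref{d2}, the following minimal Python sketch (using the same assumed representation of a finite lattice by its order relation as in the earlier sketch) computes $G_{\{0\}}(L)$.
\begin{verbatim}
from itertools import combinations

# Meet computed from the order relation: the greatest common lower bound.
def meet_from_leq(elements, leq):
    def meet(x, y):
        lower = [z for z in elements if (z, x) in leq and (z, y) in leq]
        for z in lower:
            if all((w, z) in leq for w in lower):
                return z
        raise ValueError("the given order is not a lattice")
    return meet

# Zero-divisor graph G_{0}(L): distinct nonzero x, y are adjacent iff x ^ y = 0;
# the vertices are exactly the endpoints of such edges.
def zero_divisor_graph(elements, leq, zero="0"):
    meet = meet_from_leq(elements, leq)
    nonzero = [x for x in elements if x != zero]
    edges = {frozenset((x, y)) for x, y in combinations(nonzero, 2)
             if meet(x, y) == zero}
    vertices = {v for e in edges for v in e}
    return vertices, edges
\end{verbatim}
Applied to the adjunct $C_1]_0^1C_2$ constructed in the earlier sketch, \texttt{zero\_divisor\_graph} returns the single edge $\{x,y\}$, i.e., a graph isomorphic to $K_{1,1}$.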
The following Figure \ref{f2} illustrates the zero-divisor graph
$G_{\{0\}}(L)$ of the given lattice $L$.
\begin{figure}
\caption{A lattice $L$ with its zero-divisor graph
$G_{\{0\}}(L)$}
\label{f2}
\end{figure}
Note that $\{0\}$ is a prime ideal in a lattice $L$ with 0 if and only if $V\left(G_{\{0\}}(L)\right)=\emptyset$.
Now, we reveal the structure of zero-divisor graph of
$L=L_1]_a^bL_2$ in terms of zero-divisor graphs of $L_1$ and
$L_2$. For that purpose, we need the following definitions.
\begin{defn}\label{d3}Given two graphs $G_1$ and $G_2$, the {\it union} $G_1\cup G_2$
is the graph with $V(G_1\cup G_2)=V(G_1)\cup V(G_2)$ and
$E(G_1\cup G_2)=E(G_1)\cup E(G_2)$. The {\it join} of $G_1$ and
$G_2$, denoted by $G_1+G_2$, is the graph with $V(G_1+
G_2)=V(G_1)\cup V(G_2)$, $E(G_1+ G_2)=E(G_1)\cup E(G_2)\cup J$,
where $J=\Bigl\{\{x_1,x_2\}\;|\; x_1\in V(G_1), x_2 \in
V(G_2)\Bigr\}$. The {\it null graph} on a set $S$ is the graph
whose vertex set is $S$ and whose edge set is empty; we denote
it by $N(S)$.\end{defn}
\textbf{Throughout this paper all the lattices are finite.}
The following result describes the zero-divisor graph of adjunct
of two lattices.
\begin{thm}\label{t2} Let $L_1$ and $L_2$ be two lattices. Put $L=L_1]_a ^b L_2$. Then the following statements are true.
\begin{enumerate}
\item[$(a)$] If $a\neq 0$ and $a\notin V\left(G_{\{0\}}(L_1)\right)$, then $G_{\{0\}}(L)= G_{\{0\}}(L_1)$.
\item[$(b)$] If $a\in V\big(G_{\{0\}}(L_1)\big)$, then $G_{\{0\}}(L)= G_{\{0\}}(L_1)\cup \big(G_a + N(L_2)\big)$,
where \\ $G_a=\{x\in L_1 ~|~ a,x ~ \textnormal{are adjacent in } G_{\{0\}}(L_1)\}$.
\item[$(c)$] If $a=0$, then $G_{\{0\}}(L)= G_{\{0\}}(L_1)\cup \Big( N\big(L_1^* \backslash [b)\big) + N(L_2)\Big)$, where
$[b)=\{x\in L_1~:~ x\geq b\}$ is the principal dual ideal generated by $b$ in $L_1$ and $L_1^*= L_1\backslash \{0\}$.
\end{enumerate}
\end{thm}
\begin{proof}$(a)$ Let $a\neq 0$ and $a\notin V\left(G_{\{0\}}(L_1)\right)$. If $x\in V\left(G_{\{0\}}(L)\right)$ is adjacent to some $y\in L_2$,
then $x\wedge y=0$; clearly $x\notin L_2$. As $a\leq y$ in $L$, we get $a\wedge x
=0$. Hence $a\in V\left(G_{\{0\}}(L_1)\right)$, a contradiction to
the fact that $a\notin V\left(G_{\{0\}}(L_1)\right)$. Hence no element
of $L_2$ is adjacent to any vertex of $G_{\{0\}}(L)$. Also, $G_{\{0\}}(L_1)$ is a subgraph of $G_{\{0\}}(L)$, therefore
$G_{\{0\}}(L)= G_{\{0\}}(L_1)$.
$(b)$ Now, let $a\in V\left(G_{\{0\}}(L_1)\right)$. If $x\in V\left(G_{\{0\}}(L)\right)$, then there exists a nonzero element
$y\in L$ such that $x\wedge y=0$.
This implies that at most one of $x$ and $y$ may be in $L_2$, otherwise $a\leq x\wedge y=0$, a contradiction. If $x,y\in L_1$, then
$x\in V\left(G_{\{0\}}(L_1)\right)$. Without loss of generality, let $x\in L_1$ and $y\in L_2$, which gives $x\wedge a=0$, since $a\leq y$.
Therefore $x\in G_a$.\\
Thus $V\left(G_{\{0\}}(L)\right)\subseteq V\Big(G_{\{0\}}(L_1)\cup \big(G_a + N(L_2)\big)\Big)$.
Since $ V\Big(G_{\{0\}}(L_1)\cup \big(G_a + N(L_2)\big)\Big)\subseteq V\left(G_{\{0\}}(L)\right)$, the equality holds.
Let $x$ and $y$ be adjacent in $G_{\{0\}}(L)$. Hence at most one of $x$ and $y$ may be in $L_2$. If $x,y\in L_1$,
then $x$ and $y$ are adjacent in $G_{\{0\}}(L_1)$. Now, without loss
of generality, assume that $x\in L_1$ and $y\in L_2$. Therefore $x\wedge
a=0$. Hence $x\in G_a$, \textit{i.e.}, $x$ and $y$ are adjacent in $G_a + N(L_2)$.
Now, let $x$ and $y$ be adjacent in $G_{\{0\}}(L_1)\cup
\big(G_a + N(L_2)\big)$. If $x, y\in G_{\{0\}}(L_1)$, we are through. Let $x\in G_a$ and $y\in L_2$. Then $x\wedge a=0$. We claim that $x\wedge y=0$.
Suppose $x\wedge y\neq 0$. Then we have two possibilities either $x\wedge y\in L_1$ or $x\wedge y\in L_2$. If $x\wedge y\in L_2$,
then by the definition of adjunct, we have $a\leq x\wedge y$ which yields $a\leq x\wedge y\wedge a=0$, a contradiction.
Thus $x\wedge y\notin L_2$. Hence $x\wedge y \in L_1$. Since $x\wedge y\leq y$ for $y\in L_2$, again by the definition of adjunct,
we have $x\wedge y\leq a$. This gives $x\wedge y\wedge x\leq a\wedge x=0$, a contradiction to the fact that $x\wedge y \neq 0$.
Thus we conclude that $x\wedge a=0$ if and only if $x\wedge y=0$ for any $y\in L_2$. Therefore $G_{\{0\}}(L)= G_{\{0\}}(L_1)\cup \big(G_a + N(L_2)\big)$.
$(c)$ Assume that $a=0$. Let $x\in
V\left(G_{\{0\}}(L)\right)$. Then there exists a nonzero element
$y\in L$ such that $x\wedge y=0$. Hence at most one of $x$ and $y$ may be in
$L_2$. If $x,y\in L_1$, then $x\in G_{\{0\}}(L_1)$. Without loss
of generality, let $x\in L_1$ and $y\in L_2$. If $x\geq b$, then by
the definition of adjunct, we have $x\geq y$,
a contradiction to the fact that $x\wedge y=0$. Hence $x\ngeq b$, \textit{i.e.}, $x\in N(L_1^*
\backslash [b))$. Therefore $x\in V\Big( G_{\{0\}}(L_1) \cup
\big(N(L_1^* \backslash [b)) + N(L_2)\big)\Big)$.
Now, assume that $x\in
V\Big( G_{\{0\}}(L_1) \cup \big(N(L_1^* \backslash [b)) +
N(L_2)\big)\Big)$.
If $x\in G_{\{0\}}(L_1)$, then we are through.
Let $x\in V\Big( N\big(L_1^* \backslash [b)\big) + N(L_2)\Big)$.
If $x\in N\big(L_1^* \backslash [b)\big)$, then $x\ngeq b$. For any
$y\in L_2$, we have $x||y$ and by the definition of adjunct and
$a=0$, we have $x\wedge y=0$. Therefore $x\in
V\left(G_{\{0\}}(L)\right)$, as $y$ is a nonzero element of $L$.
If
$x\in N(L_2)$, then for any atom $p$ of $L_1$, $p\wedge x=0$.
Therefore $x\in G_{\{0\}}(L)$. Hence we have
$V\left(G_{\{0\}}(L)\right)=V\Big( G_{\{0\}}(L_1) \cup \big(N(L_1^* \backslash [b)) +
N(L_2)\big)\Big)$.
Let $x$ and $y$ be adjacent in $G_{\{0\}}(L)$. Then at most one of $x$ and $y$ may be in $L_2$.
If $x,y\in L_1$, then they are adjacent in $G_{\{0\}}(L_1)$. Therefore they are adjacent in
$G_{\{0\}}(L_1)\cup \Big( N\big(L_1^* \backslash [b)\big) + N(L_2)\Big)$. Without loss of generality, let $x\in L_1$
and $y\in L_2$. As above, $x\ngeq b$. Hence $x\in N\big(L_1^* \backslash [b)\big)$ and $y\in L_2$.
Therefore they are adjacent in $N(L_1^* \backslash [b)) + N(L_2)$ and hence in $G_{\{0\}}(L_1)\cup \Big( N\big(L_1^* \backslash [b)\big) + N(L_2)\Big)$.
Conversely, suppose that $x $ and $y$ are adjacent in
$G_{\{0\}}(L_1)\cup \Big( N\big(L_1^* \backslash [b)\big) +
N(L_2)\Big)$. If both $x, y \in G_{\{0\}}(L_1)$, we are done.
Also, at most one of $x$ and $y$ may be in $L_2$. Without loss of
generality, let $x\in L_1$ and $y\in L_2$, then $x\ngeq b$,
\textit{i.e.}, $x\in N(L_1^* \backslash [b))$. Therefore $x\wedge
y=0$ in $L$. Hence $x$ and $y$ are adjacent in $G_{\{0\}}(L)$.
Therefore we get $G_{\{0\}}(L)= G_{\{0\}}(L_1)\cup \Big(
N\big(L_1^* \backslash [b)\big) + N(L_2)\Big)$.
\end{proof}
The following Figures \ref{f3}, \ref{f4} and \ref{f5} illustrate
Theorem \ref{t2}.
\begin{figure}
\caption{Theorem \ref{t2}}
\label{f3}
\end{figure}
\begin{figure}
\caption{Theorem \ref{t2}}
\label{f4}
\end{figure}
\begin{figure}
\caption{Theorem \ref{t2}}
\label{f5}
\end{figure}
Let $G$ be a graph and $x, y$ be distinct vertices in $G$. A path
is a simple graph whose vertices can be ordered so that two
vertices are adjacent if and only if they are consecutive in the
list. If $G$ has an $x$, $y$-path, then the {\it distance} from $x$
to $y$, written $d(x,y)$, is the least length of an $x$, $y$-path.
If $G$ has no such path, then $d(x,y)=\infty$. The {\it diameter}
($diam(G)$) is $\max_{x,y\in V(G)}d(x,y)$; see West \cite{6}.
\begin{cor}\label{c1}Let $L$ be an adjunct of two chains $C_1$ and $C_2$ with an adjunct pair $(a,1)$,
\textit{i.e.}, $L=C_1]_a ^1 C_2$. If $G_{\{0\}}(L)\neq\emptyset$,
then $a=0$ and $G_{\{0\}}(L)$ is a complete bipartite graph.
Hence $diam (G_{\{0\}}(L))\leq 2$. Moreover, $G_{\{0\}}(L)=
K_{m,n}$, if $|C_1|=n+2$ and $|C_2|=m$ where $m,n\in
\mathbb{N}$.\end{cor}
\begin{proof} Let $L=C_1]_a ^1 C_2$ and $G_{\{0\}}(L)\neq\emptyset$. Clearly $V(G_{\{0\}}(C_1))=\emptyset$. If $a\neq 0$, then by Theorem
\ref{t2}($a$), we have $G_{\{0\}}(L)=G_{\{0\}}(C_1)=\emptyset$, a contradiction. Therefore $a=0$.\\
Now, every element of $C_1\backslash \{0,1\}$ is adjacent to each element of $C_2$. Hence $G_{\{0\}}(L)$ is
a complete bipartite graph, in fact $G_{\{0\}}(L)= K_{m,n}$ whenever
$|C_1|=n+2$ and $|C_2|=m$ where $m,n\in \mathbb{N}$.\end{proof}
\begin{nt} Let $\mathcal{M}_n=\{0,1,a_1,a_2,\cdots,a_n\}$ be a lattice such that $0<a_i<1$, for every $i$,
$i=1,2,\cdots,n$ with $a_i\wedge a_j=0$ and $a_i\vee a_j=1$ for
every $ i\neq j$.
\end{nt}
\begin{rem}If $L$ is an adjunct of more than two chains, then $G_{\{0\}}(L)$
need not be bipartite. Consider the lattice $L=\mathcal{M}_3$ depicted in Figure \ref{f6}. Consider $L=C_1]_0 ^1 C_2]_0 ^1 C_3$
where $C_1=\{0,x,1\}$, $C_2=\{y\}$ and $C_3=\{z\}$. Then
$G_{\{0\}}(L)\cong K_3$, a non-bipartite graph.\end{rem}
\begin{figure}
\caption{A dismantlable lattice with non-bipartite zero-divisor
graph}
\label{f6}
\end{figure}
The concept of dismantlable lattice was introduced by Rival
\cite{17}.
\begin{defn}[Rival
\cite{17}]A finite lattice $L$ having $n$ elements is called {\it dismantlable},
if there exists a chain $L_1 \subset L_2 \subset \cdots \subset
L_n (= L)$ of sublattices of $L$ such that $| L_i| = i$, for all
$i$.\end{defn} The following structure theorem is due to Thakare,
Pawar and Waphare \cite{18}.
\begin{st}[Theorem 2.2, Thakare et al. \cite{18}]\label{st}A finite lattice is dismantlable if and only if it is an
adjunct of chains.\end{st}
From the above structure theorem and Corollary \ref{c1}, it is clear that if
the vertex set of the zero-divisor graph of an adjunct of two chains is
nonempty, then the lattice is a lower dismantlable lattice in the following
sense.
\begin{defn}We call a dismantlable lattice $L$ \textit{lower dismantlable} if it is a chain or
every adjunct pair in $L$ is of the form $(0,b)$ for some $b\in L$.
\end{defn}
It should be noted that any lattice of the form
$L=C_0]_0^{x_1}C_1]_0^{x_2}\cdots]_0^{x_n}C_n$ is always a lower
dismantlable lattice, where $C_i$'s are chains. Consider the
lattices depicted in Figure \ref{f7}. Observe that the lattice $L$
is lower dismantlable whereas the lattice $L'$ is not lower
dismantlable.
\begin{figure}
\caption{Examples of lower dismantlable and non-lower
dismantlable lattices}
\label{f7}
\end{figure}
\begin{defn}
An element $x$ in a lattice $L$ is \textit{join-reducible $($meet-reducible$)$} in $L$
if there exist $y,z\in L$ both distinct from $x$, such that $y\vee z=x$ $(y\wedge z= x)$; $x$ is
\textit{join-irreducible $($meet-irreducible$)$} if it is not join-reducible $($meet-reducible$)$;
$x$ is \textit{doubly irreducible} if it is both join-irreducible and meet-irreducible.
Therefore, an element $x$ is doubly irreducible in a finite lattice $L$ if and only if $x$ has at most one
lower cover and at most one upper cover. The set of all meet-irreducible $($join-irreducible$)$
elements in $L$ is denoted by \textit{$M(L)$ $(J(L))$}. The set of all doubly irreducible elements in $L$ is
denoted by $Irr(L)$ and its complement in $L$ is denoted by \textit{$Red(L)$}. Thus, if $x\in Red(L)$ then $x$
is either join-reducible or meet-reducible.
For an integer $n\geq 3$, a {\it crown} is a partially ordered set $\{x_1, y_1,x_2,y_2,\cdots, x_n, y_n\}$
in which $x_i\leq y_i$, $x_{i+1}\leq y_i$, for $i=1,2,\cdots, n-1$ and $x_1\leq y_n$ are the
only comparability relations (see Figure \ref{f8}).\end{defn}
\begin{figure}
\caption{Crown of order $2n$}
\label{f8}
\end{figure}
Note that if $L$ is a lower dismantlable lattice with the greatest element 1 as a join-reducible element, then it is easy to observe that
every nonzero nonunit element of $L$ is a vertex of $G_{\{0\}}(L)$.
The following lemma gives the properties of lower dismantlable
lattices which will be used in the sequel frequently.
\begin{lem}\label{l1}Let $L=C_0]_0^{x_1}C_1]_0^{x_2}\cdots]_0^{x_n}C_n$ be a lower dismantlable lattice,
where $C_i$'s are chains. Then for nonzero elements $a,b\in L$, we have:
\begin{enumerate}
\item[$i)$] $a\wedge b=0$ if and only if $a||b$ $($where $a||b$ means $a$ and $b$ are incomparable$)$.
\item[$ii)$] If $a\in C_i$ and $b\notin C_i$, then $a\leq b$ if and
only if $x_i\leq b$.
\item[$iii)$] If $(0,1)$ is an adjunct pair $(${\it i.e.}, $x_i=1$ for
some $i\in \{1,2,\cdots, n\})$, then
$|V\left(G_{\{0\}}(L)\right)|=|L|-2$.
\end{enumerate}\end{lem}
\begin{proof}$i)$ Suppose $a||b$ and $a\wedge b\neq
0$. It is clear that there is an adjunct pair $(a_1,b_1)$ in the
adjunct representation of $L$ such that $a_1=a\wedge b\neq 0$, a
contradiction to the definition of lower
dismantlability of $L$. The converse is obvious.\\
$ii)$ Let $a\in C_i$ and $b\notin C_i$ with $x_i\leq b$. As $C_i$ is joined at $x_i$,
we must have $a\leq x_i$. Hence $a\leq b$. Conversely, suppose
$a\leq b$. Now, $a\in C_i$ and $b\notin C_i$. If $x_i\nleq b$, we
have either $b\leq x_i$ or $x_i||b$. The second case is
impossible by $(i)$ above, as it gives $a\wedge b=0$, since $a\leq x_i$. In the first case, since it is given that
$b\notin C_i$, we have $a||b$, which yields $a\wedge b=0$, a contradiction to the fact that $a\leq b$ and $a\neq 0$. Therefore
$x_i\leq b$.\\
$iii)$ As $L$ is a lower dismantlable lattice having $(0,1)$ as an
adjunct pair, $L$ contains at least two chains in its adjunct
representation. Also $a\wedge b=0$ if and only if $a||b$. Hence
any $a\in L\backslash \{0,1\}$ is in $V\left(G_{\{0\}}(L)\right)$,
and consequently $|V\left(G_{\{0\}}(L)\right)|=|L|-2$.
\end{proof}
Now, we recall some definitions
from graph theory.
\begin{defn}A {\it cycle} is a graph with an equal number of
vertices and edges whose vertices can be placed around a circle so
that two vertices are adjacent if and only if they appear
consecutively along the circle. The {\it girth} of a graph
with a cycle, written $gr(G)$, is the length of its shortest cycle.
A graph with no cycle has infinite girth. A graph with no cycle
is \textit{acyclic}. A \textit{tree} is a connected acyclic graph.
A tree is called a \textit{rooted tree} if one vertex has been
designated the root, in which case the edges have a natural
orientation, towards or away from the root. A vertex $w$ of a
rooted tree is called an \textit{ancestor} of $v$ if $w$ is on the
unique path from $v$ to the root of the tree; see West \cite{6}.
Let $T$ be a rooted tree whose root $R$ has at least
two branches. Let $G(T)$ be the {\it non-ancestor graph} of $T$, \textit{i.e.}, $V(G(T))=V(T)\backslash \{R\}$ and two vertices
are adjacent if and only if neither is an ancestor of the other. Denote the class of non-ancestor graphs of rooted trees by $\mathcal{G_T}$.
The \textit{cover graph} of a lattice $L$, denoted by
$CG(L)$, is the graph whose vertices are the elements of $L$ and whose edges are the pairs $(x,y)$ with $x,y\in L$
satisfying $x\prec y$ or $y\prec x$. The \textit{comparability graph} of a lattice $L$, denoted by $C(L)$, is the graph whose
vertices are the elements of $L$ and two vertices $x$ and $y$ are adjacent if and only if $x$ and $y$ are comparable.
The complement of the comparability graph $C(L)$, {\it i.e.}, $C(L)^c$, is called the \textit{incomparability graph} of $L$.\end{defn}
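As an illustration (with an assumed representation, not from the paper), the non-ancestor graph $G(T)$ of a rooted tree can be computed as follows; here a rooted tree is given by a dictionary mapping each non-root vertex to its parent.
\begin{verbatim}
from itertools import combinations

def ancestors(parent, v):
    """Proper ancestors of v: the vertices on the path from v to the root, other than v."""
    result = set()
    while v in parent:
        v = parent[v]
        result.add(v)
    return result

def non_ancestor_graph(parent):
    """G(T): vertices are the non-root vertices; x ~ y iff neither is an ancestor of the other."""
    vertices = set(parent)          # every non-root vertex has a parent
    edges = {frozenset((x, y)) for x, y in combinations(vertices, 2)
             if x not in ancestors(parent, y) and y not in ancestors(parent, x)}
    return vertices, edges

# Example: root R with children a and b, and c a child of a.
# non_ancestor_graph({"a": "R", "b": "R", "c": "a"}) has edges {a,b} and {b,c}.
\end{verbatim}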
The following result is due to Kelly and Rival
\cite{10}.
\begin{thm}[Theorem 3.1, Kelly and Rival
\cite{10}]\label{t3}A finite lattice is dismantlable if and only if it contains no crown.\end{thm}
In the following theorem, we characterize the zero-divisor graph
$G_{\{0\}}(L)$ of a lower dismantlable lattice $L$ in terms of the
cover graph $CG(L)$ and the incomparability graph $C(L)^c$.
\begin{thm}\label{t4} The following statements are equivalent for a finite lattice $L$ with 1 as a join-reducible element.
\begin{enumerate}
\item[$(a)$] $L$ is a lower dismantlable lattice. \item[$(b)$]
Every nonzero element of $L$ is a meet-irreducible element.
\item[$(c)$] The cover graph $CG(L\backslash \{0\})$ of $L\backslash \{0\}$ is a tree.\item[$(d)$] The zero-divisor
graph of $L$ coincides with the incomparability graph of $L\backslash
\{0,1\}$.
\end{enumerate}
\end{thm}
\begin{proof} $(a)\Rightarrow (b)$ Let $L$ be a lower dismantlable lattice. Then
by Structure Theorem,\\ $L= C_1]_{0}^{a_1}
C_2]_{0}^{a_2}\cdots]_{0}^{a_{n-1}} C_n$, where each $C_i$ is a chain.
Let $a(\neq 0)\in L$ be an element which is not meet-irreducible.
Then $a=b\wedge c$ for some $b,c\neq a$. But then $b||c$. By Lemma
\ref{l1}, $a=b\wedge c=0$, a contradiction to $a\neq 0$. Hence every
nonzero element of $L$ is a meet-irreducible element.\\
$(b)\Rightarrow (c)$ Let $CG(L\backslash \{0\})$ be the cover graph of $L\backslash \{0\}$ and let
$C: a_1-a_2-\cdots-a_n-a_1$ be a cycle in $CG(L\backslash \{0\})$.
For distinct $a_1,a_2,a_3$ we have the following three cases.\\
\textbf{Case $(1)$:} Let $a_1\prec a_2\prec a_3$ be a chain. Then for $a_4$,
we have either $a_1\prec a_2\prec a_3\prec a_4$ is a chain or
$a_1\prec a_2\prec a_3$ and $a_4\prec a_3$. Then, $a_{i+1}\leq a_i
$, for all $i\geq 4$; otherwise we get $a_4$ as a meet reducible
element. Hence $a_n\leq a_{n-1}$ and $a_n=a_{n-1}\wedge a_1\neq 0$,
as $C: a_1-a_2-\cdots-a_n-a_1$ is a cycle. This contradicts the fact
that every nonzero element is meet-irreducible. On the other hand, if $a_1\prec a_2\prec a_3\prec
a_4$ is chain, then using the above arguments, we have $a_1\prec
a_2\prec a_3\prec a_4\prec a_5$ is a chain. Continuing in this way
we get that $a_1\prec a_2\prec \cdots\prec a_n$ is a chain and $a_1\leq
a_n$. Hence $a_1$ and $a_n$ cannot be adjacent, a contradiction to
the fact that $C$ is a
cycle in $CG(L\backslash \{0\})$.\\
\textbf{Case $(2)$:} $a_2= a_1 \wedge a_3$, which is impossible, as $a_1\wedge a_3=a_2\neq 0$ makes $a_2$ a nonzero meet-reducible element, a
contradiction to the fact that every nonzero element of $L$ is meet-irreducible.\\
\textbf{Case $(3)$:} $a_2= a_3\vee a_1$. Note that $a_3\nleq a_4$
otherwise $a_3$ is the meet of $a_2$ and $a_4$, a contradiction to the fact that every nonzero element is meet-irreducible.
Hence $a_4\leq a_3$. In fact $a_{m+1}\leq a_m$, for $m\geq 3$. Using
the above arguments, we again obtain a contradiction. Hence
$CG(L\backslash \{0\})$ is a connected acyclic graph. Therefore it is
a tree.\\
$(c)\Rightarrow (a)$ If $L$ contains a crown, then $CG(L\backslash \{0\})$ contains a
cycle, a contradiction. Hence $L$ does not contain a crown. By
applying Theorem \ref{t3}, $L$ is a dismantlable lattice. Now, let
$a$ and $b$ be incomparable elements of $L$. Suppose that $a\wedge
b\neq 0$. Let $a\wedge b=a_1\prec a_2\prec\cdots\prec a_i=a\prec
\cdots\prec a_n=a\vee b$ be a covering chain and let $a\wedge b=b_1\prec
b_2\prec\cdots\prec b_j=b\prec\cdots\prec b_m=a\vee b$ be another
covering chain, distinct from the first one (such chains exist,
since $a||b$). Then $a_1- a_2-\cdots- a_n=b_m-b_{m-1}-\cdots-b_1=a_1$ is
a cycle in $CG(L\backslash \{0\})$, a contradiction to the fact that
$CG(L\backslash \{0\})$
is a tree. Thus $L$ is a lower dismantlable lattice.\\
$(d)\Rightarrow (b)$ Suppose $G_{\{0\}}(L)=C(L\backslash
\{0,1\})^c$, for some lattice $L$. We want to show
that $L$ does not contain any nonzero meet-reducible element. Suppose on
the contrary, $L$ has a nonzero meet-reducible element say $b$. Then there
exist $a, c \in L$ with $a, c \neq b$, such that $b = a\wedge c$.
Thus $a$ and $c$ are incomparable. So there is an edge $a-c$ in
$C(L\backslash \{0,1\})^c$. But $a\wedge c\neq 0$, hence $a$ and $c$
are not adjacent in $G_{\{0\}}(L)$, a contradiction. Consequently every
nonzero element of $L$ is a meet-irreducible element.\\
$(b)\Rightarrow (d)$ Suppose every nonzero element of $L$ is meet
irreducible. By the equivalence of $(a)$ and $(b)$ above,
$L$ is a lower dismantlable lattice. Hence $a\wedge b=0$ if and only if
$a||b$. Therefore $G_{\{0\}}(L)=C(L\backslash \{0,1\})^c$.\end{proof}
Note that a result similar to the equivalence of statements $(b)$ and $(d)$
of Theorem \ref{t4} can be found in Survase \cite{22}.
Grillet and Varlet \cite{19} introduced the concept of
0-distributive lattices as a generalization of distributive
lattices. A lattice $L$ with 0 is called \emph{ 0-distributive}
if, for every triplet $(a, b, c)$ of elements of $L$, $a\wedge b =
a\wedge c = 0$ implies $a\wedge (b\vee c) = 0$. More details about
0-distributive posets can be found in Joshi and Waphare \cite{20}.
Forbidden configurations for 0-distributive lattices are
obtained by Joshi \cite{21}.
\begin{lem}\label{l2}If $L$ is an adjunct of two chains with $(0,1)$ as an
adjunct pair, then $L$ is 0-distributive.
\end{lem}
\begin{proof}Let $a\wedge b =
a\wedge c = 0$, for $a,b,c \in L$. By Lemma \ref{l1}, $b$ and $c$
are either comparable or $b\wedge c=0$. If $b\wedge c=0$, this
together with $a\wedge b = a\wedge c = 0$ gives $L$ as adjunct of at
least three chains, a contradiction. Hence $b$ and $c$ are comparable, so $b\vee c\in\{b,c\}$, and from $a\wedge b=a\wedge c=0$ we get $a\wedge (b\vee c)=0$.
\end{proof}
The following result is due to Joshi \cite{7}.
\begin{thm}[Theorem 2.14, Joshi \cite{7}]\label{t5}Let $L$ be a 0-distributive lattice. Then $G_{\{0\}}(L)$ is complete
bipartite if and only if there exist two minimal prime ideals
$P_1$ and $P_2$ of $L$ such that $P_1\cap P_2=\{0\}$.
\end{thm}
\begin{thm}\label{t6}Let $L$ be a lower dismantlable lattice having $(0,1)$ as an adjunct pair. Then the following statements are equivalent.
\begin{enumerate}
\item[$(a)$] $G_{\{0\}}(L)$ is a complete bipartite graph.
\item[$(b)$]$L$ is an adjunct of two chains only. \item[$(c)$] $L$
has exactly two atoms and exactly two dual atoms.\item[$(d)$]
There exist two minimal prime ideals $P_1$ and $P_2$ of $L$ such
that $P_1\cap P_2=\{0\}$. \item[$(e)$] $L$ is a
0-distributive lattice.\end{enumerate}
\end{thm}
\begin{proof}$(a)\Rightarrow (b)$ Suppose $G_{\{0\}}(L)=K_{m,n}$. Note that each independent (partite) set of
$K_{m,n}$ forms a chain in $L$. If $L$ is an adjunct of more than
two chains, then $L$ contains at least three
atoms, which form a triangle in $G_{\{0\}}(L)$, a contradiction. Hence $L$ is an adjunct of two chains only.
Moreover, as $(0,1)$ is an adjunct pair, $L=C_1]_0^1C_2$, where $C_1$ and $C_2$ are chains.\\
$(b)\Rightarrow (c)$ Obvious.\\
$(c)\Rightarrow (d)$ Let $d_1$ and $d_2$ be the only dual atoms of $L$.
Then $P_1=(d_1]=\{x\in L~:~x\leq d_1\}$ and $P_2=(d_2]$ are ideals of $L$. In fact, $P_1\cap P_2=\{0\}$,
otherwise we get a nonzero meet-reducible element
in $L$, a contradiction to Theorem \ref{t4}. We claim that $P_1$ and
$P_2$ are prime ideals. Let $a\wedge b\in P_1$, {\it i.e.}, $a\wedge
b\leq d_1$. If $a\wedge b\neq 0$, then $a$ and $b$ are comparable. Hence $a\in P_1$ or $b\in P_1$. Suppose $a\wedge b=0$ and $a,b\notin
P_1=(d_1]$. Then by Lemma \ref{l1}, $a||b$. Since $d_2$ is the only dual atom other than $d_1$, we have $a,b\leq d_2$, which gives $(0,d_2)$ is an
adjunct pair in $L$,
hence there exist two atoms below $d_2$, {\it i.e.}, $L$ contains three atoms (one below $d_1$ and two below $d_2$), a contradiction.
Therefore $P_1$ is a prime ideal. Similarly $P_2$ is a prime ideal.\\
$(d)\Rightarrow (a)$ Follows from Theorem \ref{t5} (Theorem 2.14 of \cite{7}).\\
$(a)\Rightarrow (e)$ Let $G_{\{0\}}(L)$ be a complete bipartite graph. By the
equivalence of statements $(a)$ and $(b)$, $L$ is an adjunct of
exactly two chains. By Lemma \ref{l2}, $L$ is
0-distributive.\\
$(e)\Rightarrow (a)$ Suppose that $L$ is 0-distributive. On the
contrary, assume that $L$ is an adjunct of more than two chains,
{\it i.e.}, $L$ contains at least three atoms, say $a,b,c$.
Clearly $a\wedge b=b\wedge c=a\wedge c=0$.
We consider the following two cases.\\
\textbf{Case $(1):$} If $a\vee b= a\vee c=b\vee c$, then $L$ contains a
sublattice isomorphic to $\mathcal{M}_3$ (as shown in Figure
\ref{f6}), a contradiction to
0-distributivity of $L$.\\
\textbf{Case $(2):$} Let $a\vee b$ $||$ $b\vee c$. Clearly $b\leq (a\vee
b)\wedge (b\vee c)$ is a nonzero meet-reducible element in $L$, a
contradiction to Theorem \ref{t4}. Hence $a\vee b$ and $b\vee c$ are
comparable. Without loss of generality, suppose $a\vee b\leq b\vee
c$. Hence $a\leq a\vee b\leq b\vee c$, which gives $a\wedge (b\vee
c)=a$, but $a\wedge b=0$ and $a\wedge c=0$, again a contradiction to
0-distributivity of $L$.
Thus in any case $L$ does not contain three atoms. Since $(0,1)$
is an adjunct pair, it is clear that $L$ is an adjunct of exactly two
chains. Therefore $G_{\{0\}}(L)$ is a complete bipartite
graph.\end{proof}
The following result is due to Joshi \cite{7}.
\begin{thm}[Theorem 2.4, Joshi \cite{7}]\label{t7}Let $L$ be a lattice. Then $G_{\{0\}}(L)$ is connected with
$diam\big(G_{\{0\}}(L)\big) \leq 3$.
\end{thm}
The following result can be found in Alizadeh et al.
\cite{1}.
\begin{thm}[Theorem 4.2, Alizadeh et al. \cite{1}]\label{t8}Let $L$ be a lattice. Then $gr(G_{\{0\}}(L))\in
\{3,4,\infty\}$.
\end{thm}
In the following theorem, we characterize the diameter and girth
of $G_{\{0\}}(L)$ for a lower dismantlable lattice $L$.
\begin{thm}\label{t9}Let $L$ be a lower dismantlable lattice which is an adjunct of $n$ chains,
where $n\geq 2$. Then $V(G_{\{0\}}(L))\neq\emptyset$ and $diam
(G_{\{0\}}(L))\leq 2$. Moreover, if $(0,1)$ is the only adjunct pair
in $L$, then $diam (G_{\{0\}}(L))=1$ if and only if $L\cong
\mathcal{M}_n$. Further
\begin{enumerate} \item[$(a)$] $gr (G_{\{0\}}(L))=3$ if and only if
$L$ is an adjunct of at least three chains.\item[$(b)$] $gr
(G_{\{0\}}(L))=4$ if and only if $L=C_1 ] _0^aC_2$ with $|C_2|\geq
2$. \item[$(c)$]$gr (G_{\{0\}}(L))=\infty$ if and only if $L=C_1 ]
_0^aC_2$ with $|C_2|=1$.
\end{enumerate}\end{thm}
\begin{proof}Let $a,b\in V(G_{\{0\}}(L))$. If $a||b$, then by Lemma \ref{l1}, $a\wedge b=0$ and hence $a,b$ are adjacent. Now let $a$ and $b$ be comparable, say $a\leq
b$. Since $b\in V(G_{\{0\}}(L))$, there is an
element $c(\neq 0)$ such that $b\wedge c=0$. Hence $a\wedge c=0$.
Thus we get a path $a-c-b$ of length $2$. This shows that $d(a,b)\leq
2$ and in any case $diam (G_{\{0\}}(L))\leq 2$.
Let $(0,1)$ be the only adjunct pair in $L$. If $L\cong
\mathcal{M}_n$, then $G_{\{0\}}(L)\cong K_n$. Hence $diam
(G_{\{0\}}(L))=1$. Conversely, suppose $diam (G_{\{0\}}(L))=1$. If
$L=C_1 ] _0^1C_2$, then $G_{\{0\}}(L)\neq \emptyset $ if and
only if $|C_1|\geq 3 $ and $|C_2|\geq 1$. If $|C_1|\geq 3 $ and
$|C_2|\geq 2$, then for any $a,b\in C_2$ we have $d(a,b)=2$. Now, if
$|C_1|\geq 4$ and $|C_2|= 1$, then for any $a,b\in C_1\setminus
{\{0,1\}}$, again $d(a,b)=2$, a contradiction to the fact that
$diam (G_{\{0\}}(L))=1$. Therefore $|C_1|= 3 $ and $|C_2|= 1$ and
the result follows by induction.
By Theorem \ref{t8}, $gr(G_{\{0\}}(L))\in
\{3,4,\infty\}$.\\
$(a)$ If $L$ is an adjunct of at least three chains, then it
contains at least three atoms. Hence $G_{\{0\}}(L)$ contains a
triangle, and $gr (G_{\{0\}}(L))=3$. Conversely, let $gr
(G_{\{0\}}(L))=3$. If $L=C_1 ] _0^aC_2$, then by Corollary
\ref{c1}, $G_{\{0\}}(L)$ is complete bipartite. Therefore it does
not contain an odd cycle, a contradiction. Hence $L$ is
adjunct of at least three chains.\\
$(b)$ and $(c)$ If $L=C_1 ] _0^aC_2$, then by Corollary \ref{c1},
$G_{\{0\}}(L)$ is a complete bipartite graph. Then $|C_2|=1$ or
$|V(G_{\{0\}}(L))\cap C_1|=1$ if and only if $gr
(G_{\{0\}}(L))=\infty$. If $|C_2|\geq 2$ and $|V(G_{\{0\}}(L))\cap C_1|\geq 2$, then $gr
(G_{\{0\}}(L))=4$.
\end{proof}
\begin{rem}Note that if we drop the condition of lower dismantlability of $L$, then $diam(G_{\{0\}}(L))$ may exceed 2.
Consider a lattice $L=C_1]_0 ^e C_2]_b ^1C_3]_0 ^d C_4$, where $C_1=\{0,a,e,1\}$, $C_2=\{b\}$, $C_3=\{d\}$ and
$C_4=\{c\}$. Then $diam (G_{\{0\}}(L))=3$ as shown in Figure
\ref{f2}.
\end{rem}
Now, we give a realization of zero-divisor graphs of lower dismantlable lattices, \textit{i.e.},
we describe graphs that are the zero-divisor graphs of lower dismantlable lattices.
\begin{proof}[\bf{Proof of Theorem \ref{t1}}] $(a)\Rightarrow(b)$ Let $G\in \mathcal{G_T}$. Hence $G$ is the non-ancestor graph of some rooted tree $T$
with the root $R$. Let $L=V(G)\cup \{R\}\cup\{0\}$. Define a relation $\leq$
on $L$ by, $a\leq R$, $0\leq a$ and $a\leq a$, for every $ a\in
L$. If $a\neq b$, then $a<b$ if and only if $b$ is an ancestor of
$a$. Clearly, $(L, \leq)$ is a poset. If $a||b$, then neither is an
ancestor of the other, hence $0$ is the only element below $a$ and
$b$, \textit{i.e.}, $a\wedge b=0$. Let $A=\{c\in L~|~ c
\textnormal{ is a common ancestor of} ~a~ \textnormal{and }~ b
\}$. Then $A\neq \emptyset$, as $R\in A$. We claim that the set
$A$ forms a chain. Let $x,y\in A$ with $x||y$. Hence $x$ and $y$
are ancestors of $a$ and $b$ both. But then $a-x-b-y-a$ is a cycle in the underlying undirected graph of the rooted tree $T$, a contradiction.
Thus $A$ is a chain. Then the smallest element of $A$ (it exists due to finiteness of $L$) is nothing but $a\vee b$. Hence $L$ is a lattice with the
greatest element $R$, now denoted by 1.
Since meet of any two incomparable elements is zero, $L$ does not contain a crown. Hence by Theorem \ref{t3}, $L$ is a dismantlable lattice,
say $L= C_0]_{a_1}^{b_1} C_1]_{a_2}^{b_2}\cdots]_{a_n}^{b_n} C_n$. Since meet of any two incomparable elements is zero,
we get $a_i=0$, $\forall i$. Therefore $L$ is a lower dismantlable lattice having 1 as join-reducible element
(since the root of the tree has at least two branches, $1$ is join-reducible). Also, $a\wedge b=0$ if and only if neither
of $a$ and $b$ is an ancestor of the other. Therefore $G=G_{\{0\}}(L)$.\\
$(b)\Rightarrow(c)$ Follows by Theorem \ref{t4}.\\
$(c)\Rightarrow(a)$ Let $L$ be a lower dismantlable lattice. Let $G$ be an incomparability graph of $L\backslash \{0,1\}$,
{\it i.e.}, $G= C(L\backslash \{0,1\})^c$. Then, by Theorem \ref{t4}, the
cover graph of $L\backslash \{0\}$ is a rooted tree, say $T$, with root $1$. Let $H$ be the non-ancestor graph of $T$.
Then clearly, $V(H)=V(G)$, and $a$ and $b$ are adjacent in $G$ if and only if $a||b$,
if and only if neither is an ancestor of the other, if and only if $a$ and $b$ are adjacent in $H$. Hence $G=H$.\end{proof}
\textbf{Acknowledgements:} The authors are grateful to the referee for fruitful suggestions. The first author is financially supported by the University Grants Commission, New Delhi via minor research project File No. 47-884/14(WRO).
\end{document}
\begin{document}
\title{Asymptotic Learning on Bayesian Social Networks\thanks{The
authors would like to thank Shachar Kariv for an enthusiastic
introduction to his work with Douglas Gale, and for suggesting the
significance of asymptotic learning in this model.
Elchanan Mossel is supported by NSF award DMS 1106999, by ONR
award N000141110140 and by ISF grant 1300/08. Allan Sly is
supported in part by an Alfred Sloan Fellowship in
Mathematics. Omer Tamuz is supported by ISF grant 1300/08, and is
a recipient of the Google Europe Fellowship in Social Computing.
This research is supported in part by this Google Fellowship.}}
\author{Elchanan Mossel \and Allan Sly \and Omer Tamuz}
\maketitle
\begin{abstract}
Understanding information exchange and aggregation on networks is a
central problem in theoretical economics, probability and
statistics. We study a standard model of economic agents on the
nodes of a social network graph who learn a binary ``state of the
world'' $S$, from initial signals, by repeatedly observing each
other's best guesses.
Asymptotic learning is said to occur on a family of graphs $G_n =
(V_n,E_n)$ with $|V_n| \to \infty$ if with probability tending to
$1$ as $n \to \infty$ all agents in $G_n$ eventually estimate $S$
correctly. We identify sufficient conditions for asymptotic
learning and construct examples where learning does not occur when
the conditions do not hold.
\end{abstract}
\section{Introduction}
We consider a directed graph $G$ representing a social network. The
nodes of the graph are the set of agents $V$, and an edge from agent
$u$ to $w$ indicates that $u$ can observe the actions of $w$. The
agents try to estimate a binary {\em state of the world} $S \in
\{0,1\}$, where each of the two possible states occurs with
probability one half.
The agents are initially provided with private signals which are
informative with respect to $S$ and i.i.d., conditioned on $S$:
There are two distributions, $\mu_0 \neq \mu_1$, such that conditioned
on $S$, the private signals are independent and distributed $\mu_S$.
In each time period $t \in \N$, each agent $v$ chooses an ``action''
$A_v(t)$, which equals whichever of $\{0,1\}$ the state of the
world is more likely to equal, conditioned on the information
available to $v$ at time $t$. This information includes its private
signal, as well as the actions of its social network neighbors in the
previous periods.
A first natural question is whether the agents eventually reach
consensus, or whether it is possible that neighbors ``agree to
disagree'' and converge to different actions. Assuming that the
agents do reach consensus regarding their estimate of $S$, a second
natural question is whether this consensus estimator is equal to $S$.
Certainly, since private signals are independent conditioned on $S$, a
large enough group of agents has, in the aggregation of their private
signals, enough information to learn $S$ with high
probability. However, it may be the case that this information is not
disseminated by the above described process. These and related
questions have been studied extensively in economics, statistics and
operations research; see Section~\ref{sec:literature}.
We say that the agents learn on a social network graph $G$ when all
their actions converge to the state of the world $S$. For a sequence
of graphs $\{G_n\}_{n=1}^\infty$ such that $G_n$ has $n$ agents, we
say that {\em Asymptotic learning} occurs when the probability that
the agents learn on $G_n$ tends to one as $n$ tends to infinity, for a
fixed choice of private signal distributions $\mu_1$ and $\mu_0$.
An agent's initial {\em private belief} is the probability that $S=1$,
conditioned only on its private signal. When the distribution of
private beliefs is atomic, asymptotic learning does not necessarily
occur (see Example~\ref{example:non-learning}). This is also the case
when the social network graph is undirected (see
Example~\ref{example:directed}). Our main result
(Theorem~\ref{thm:non-atomic-learning}) is that asymptotic learning
occurs for non-atomic private beliefs and undirected graphs.
To prove this theorem we first prove that the condition of non-atomic
initial private beliefs implies that either the agents all converge to the
same action, or none of them converges at all
(Theorem~\ref{thm:unbounded-common-knowledge}). We then show that for
any model in which this holds, asymptotic learning occurs
(Theorem~\ref{thm:bounded-learning}). Note that it has been shown
that agents reach agreement under various other conditions (cf.
M{\'e}nager~\cite{menager2006consensus}). Hence, by
Theorem~\ref{thm:bounded-learning}, asymptotic learning also holds for
these models.
Our proof includes several novel insights into the dynamics of
interacting Bayesian agents. Broadly, we show that on undirected
social network graphs connecting a {\em countably infinite} number of
agents, if all agents converge to the same action then they converge
to the correct action. This follows from the observation that if
agents in distant parts of a large graph converge to the same action
then they do so {\em almost} independently. We then show that this
implies that for {\em finite} graphs of growing size the probability
of learning approaches one.
At the heart of this proof lies a topological lemma
(Lemma~\ref{lemma:local_property}) which may be of independent
interest; the topology here is one of rooted graphs (see, e.g.,
Benjamini and Schramm~\cite{benjamini2011recurrence}, Aldous and
Steele~\cite{aldous2003objective}). The fact that asymptotic learning
occurs for undirected graphs (as opposed to general strongly connected
graphs) is related to the fact that sets of bounded degree, {\it
undirected} graphs are compact in this topology. In fact, our proof
applies equally to any such compact sets. For example, one can replace
{\it undirected} with {\it $L$-locally strongly connected}: a directed
graph $G=(V,E)$ is $L$-locally strongly connected if, for each $(u,w)
\in E$, there exists a path in $G$ of length at most $L$ from $w$ to
$u$. Asymptotic learning also takes place on $L$-locally strongly
connected graphs, for fixed $L$, since sets of $L$-locally strongly
connected, uniformly bounded degree graphs are compact. See
Section~\ref{sec:locally-connected} for further discussion.
\subsection{Related literature}
\label{sec:literature}
\subsubsection{Agreement}
There is a vast economic literature studying the question of
convergence to consensus in dynamic processes and games. A founding
work is Aumann's seminal Agreement Theorem~\cite{aumann1976agreeing},
which states that Bayesian agents who observe beliefs (i.e., posterior
probabilities, as opposed to actions in our model) cannot ``agree to
disagree''. Subsequent work (notably Geanakoplos and
Polemarchakis~\cite{geanakoplos1982we}, Parikh and
Krasucki~\cite{parikh1990communication}, McKelvey and
Page~\cite{mckelvey1986common}, Gale and Kariv~\cite{GaleKariv:03},
M{\'e}nager~\cite{menager2006consensus} and Rosenberg, Solan and
Vieille~\cite{rosenberg2009informational}) expanded the range of
models that display convergence to consensus. One is, in fact, left
with the impression that it takes a pathological model to feature
interacting Bayesian agents who do ``agree to disagree''.
M{\'e}nager~\cite{menager2006consensus} in particular describes a
model similar to ours and proves that consensus is achieved in a
social network setting under the condition that the probability space
is finite and ties cannot occur (i.e., posterior beliefs are always
different from one half). Note that our asymptotic learning result
applies for any model where consensus is guaranteed, and hence in
particular applies to models satisfying M{\'e}nager's conditions.
\subsubsection{Agents on social networks}
Gale and Kariv~\cite{GaleKariv:03} also consider Bayesian agents who
observe each other's actions. They introduce a model in which, as in
ours, agents receive a single initial private signal, and the action
space is discrete. However, there is no ``state of the world'' or
conditionally i.i.d.\ private signals. Instead, the relative merit of
each possible action depends on all the private signals. Our model is
in fact a particular case of their model, where we restrict our
attention to the particular structure of the private signals described
above.
Gale and Kariv show (loosely speaking) that neighboring agents who
converge to two different actions must, at the limit, be indifferent
with respect to the choice between these two actions. Their result is
therefore also an agreement result, and makes no statement on the
optimality of the chosen actions, although they do profess interest in
the question of {\em ``... whether the common action chosen
asymptotically is optimal, in the sense that the same action would
be chosen if all the signals were public information... there is no
reason why this should be the case.''} This is precisely the
question we address.
A different line of work is the one explored by Ellison and
Fudenberg~\cite{EllFud:95}. They study agents on a social network that
use rules of thumb rather than full Bayesian updates. A similar
approach is taken by Bala and Goyal~\cite{BalaGoyal:96}, who also
study agents acting iteratively on a social network. They too are
interested in asymptotic learning (or ``complete learning'', in their
terms). They consider a model of bounded rationality which is not
completely Bayesian. One of their main reasons for doing so is the
mathematical complexity of the fully Bayesian model, or as they state,
{\em ``to keep the model mathematically tractable... this possibility
[fully Bayesian agents] is precluded in our model... simplifying the
belief revision process considerably.''} In this simpler,
non-Bayesian model, Bala and Goyal show both behaviors of asymptotic
learning and results of non-learning, depending on various parameters
of their model.
\subsubsection{Herd behavior}
The ``herd behavior'' literature (cf. Banerjee~\cite{Banerjee:92},
Bikhchandani, Hirshleifer and Welch~\cite{BichHirshWelch:92}, Smith
and S{\o}rensen~\cite{smith2000pathological}) considers related but
fundamentally simpler models. As in our model there is a ``state of
the world'' and conditionally independent private signals. A countably
infinite group of agents is exogenously ordered, and each picks an
action sequentially, after observing the actions of its predecessors
or some of its predecessors. Agents here act only {\em once}, as
opposed to our model in which they act {\em repeatedly}.
The main result for these models is that in some situations there may
arise an ``information cascade'', where, with positive probability,
almost all the agents take the wrong action. This is precisely the
opposite of {\em asymptotic learning}. The condition for information
cascades is ``bounded private beliefs''; herd behavior occurs when the
agents' beliefs, as inspired by their private signals, are bounded
away both from zero and from one~\cite{smith2000pathological}. In
contrast, we show that in our model asymptotic learning occurs even
for bounded beliefs.
In the herd behavior models information only flows in one direction:
If agent $u$ learns from $w$ then $w$ does not learn from $u$. This
significant difference, among others, makes the tools used for their
analysis irrelevant for our purposes.
\section{Formal definitions, results and examples}
\label{sec:formal}
\subsection{Main definitions}
The following definition of the agents, the state of the world and the
private signals is adapted from~\cite{MosselTamuz10B:arxiv}, where a
similar model is discussed.
\begin{definition}
\label{def:state-signals}
Let $(\Omega, \mathcal{O})$ be a measurable space. Let $\mu_0$ and
$\mu_1$ be different and mutually absolutely continuous probability
measures on $(\Omega, \mathcal{O})$.
Let $\delta_0$ and $\delta_1$ be the distributions on $\{0,1\}$ such
that $\delta_0(0)=\delta_1(1)=1$.
Let $V$ be a countable (finite or infinite) set of agents, and let
\begin{align*}
\mathbb{P} = {\textstyle \frac12}\delta_0\mu_0^V+{\textstyle \frac12}\delta_1\mu_1^V,
\end{align*}
be a distribution over $\{0,1\}\times\Omega^V$. We denote by $S
\in\{0,1\}$ the {\bf state of the world} and by $W_u$ the
{\bf private signal} of agent $u \in V$. Let
\begin{align*}
(S,W_{u_1},W_{u_2},\ldots) \sim \mathbb{P}.
\end{align*}
\end{definition}
Note that the private signals $W_u$ are i.i.d., conditioned on
$S$: if $S=0$ - which happens with probability half - the private
signals are distributed i.i.d.\ $\mu_0$, and if $S=1$ then they are
distributed i.i.d.\ $\mu_1$.
We now define the dynamics of the model.
\begin{definition}
\label{def:revealed-actions}
Consider a set of agents $V$, a state of the world $S$ and private
signals $\{W_u: u\in V\}$ such that
\begin{align*}
(S,W_{u_1},W_{u_2},\ldots) \sim \mathbb{P},
\end{align*}
as defined in Definition~\ref{def:state-signals}.
Let $G=(V,E)$ be a directed graph which we shall call the {\bf
social network}. We assume throughout that $G$ is simple (i.e., no
parallel edges or loops) and strongly connected. Let the set of
neighbors of $u$ be $N(u) = \{v :\: (u,v) \in E\}$. The {\bf
out-degree} of $u$ is equal to $|N(u)|$.
For each time period $t \in \{1, 2, \ldots\}$ and agent $u \in V$,
denote the {\bf action} of agent $u$ at time $t$ by $A_u(t)$,
and denote by ${\mathcal{F}}_u(t)$ the information available to agent $u$ at
time $t$. They are jointly defined by
\begin{align*}
{\mathcal{F}}_u(t) = \sigma(W_u,\{A_v(t'):\:v \in N(u), t' < t\}),
\end{align*}
and
\begin{align*}
A_u(t) =
\begin{cases}
0 & \CondP{S=1}{{\mathcal{F}}_u(t)} < 1/2 \\
1 & \CondP{S=1}{{\mathcal{F}}_u(t)} > 1/2 \\
\in \{0,1\} & \CondP{S=1}{{\mathcal{F}}_u(t)} = 1/2.
\end{cases}
\end{align*}
Let $X_u(t) = \CondP{S=1}{{\mathcal{F}}_u(t)}$ be agent $u$'s {\bf
belief} at time $t$.
\end{definition}
Informally stated, $A_u(t)$ is agent $u$'s best estimate of $S$
given the information ${\mathcal{F}}_u(t)$ available to it up to time $t$. The
information available to it is its private signal $W_u$ and the
actions of its neighbors in $G$ in the previous time periods.
\begin{remark}
\label{rem:map}
An alternative and equivalent definition of $A_u(t)$ is the
MAP estimator of $S$, as calculated by agent $u$ at time $t$:
\begin{align*}
A_u(t)=\operatornamewithlimits{argmax}_{s \in \{0,1\}}\CondP{S=s}{{\mathcal{F}}_u(t)}
=\operatornamewithlimits{argmax}_{A \in {\mathcal{F}}_u(t)}\P{A=S},
\end{align*}
with some tie-breaking rule.
\end{remark}
Note that we assume nothing about how agents break ties, i.e., how
they choose their action when, conditioned on their available
information, there is equal probability for $S$ to equal either 0 or 1.
Note also that the belief of agent $u$ at time $t=1$, $X_u(1)$,
depends only on $W_u$:
\begin{align*}
X_u(1) = \CondP{S=1}{W_u}.
\end{align*}
We call $X_u(1)$ the {\em initial belief} of agent $u$.
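As a small illustration (a sketch under the assumption that $\mu_0$ and $\mu_1$ have densities $f_0$ and $f_1$ with respect to a common reference measure), the initial belief and the time-one action can be computed from the private signal by Bayes' rule with the uniform prior on $S$:
\begin{verbatim}
# A minimal sketch: X_u(1) and A_u(1) from the private signal w, assuming
# mu_0 and mu_1 have densities f0 and f1 (and the uniform prior on S).
def initial_belief(w, f0, f1):
    return f1(w) / (f0(w) + f1(w))      # P(S = 1 | W_u = w)

def initial_action(w, f0, f1):
    x = initial_belief(w, f0, f1)
    return 1 if x > 0.5 else 0          # ties (x = 1/2) broken as 0 here;
                                        # the model allows any tie-breaking rule

# Example with Gaussian private signals N(S, 1):
# from math import exp
# f0 = lambda w: exp(-w ** 2 / 2)
# f1 = lambda w: exp(-(w - 1) ** 2 / 2)
# initial_belief(0.8, f0, f1)           # > 1/2, so the time-one action is 1
\end{verbatim}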
\begin{definition}
Let $\mu_0$ and $\mu_1$ be such that $X_u(1)$, the initial belief of
$u$, has a non-atomic distribution ($\Leftrightarrow$ the
distributions of the initial beliefs of all agents are
non-atomic). Then we say that the pair $(\mu_0,\mu_1)$ {\bf induce
non-atomic beliefs}.
\end{definition}
We next define some limiting random variables: ${\mathcal{F}}_u$ is the limiting
information available to $u$, and $X_u$ is its limiting belief.
\begin{definition}
Denote ${\mathcal{F}}_u = \cup_t{\mathcal{F}}_u(t)$, and let
\begin{align*}
X_u = \CondP{S=1}{{\mathcal{F}}_u}.
\end{align*}
\end{definition}
Note that the limit $\lim_{t \to \infty}X_u(t)$ almost surely
exists and equals $X_u$, since $X_u(t)$ is a bounded
martingale.
We would like to define the limiting {\em action} of agent
$u$. However, it might be the case that agent $u$ takes both actions
infinitely often, or that otherwise, at the limit, both actions are
equally desirable. We therefore define $A_u$ to be the limiting
{\em optimal action set}. It can take the values $\{0\}$, $\{1\}$ or
$\{0,1\}$.
\begin{definition}
Let $A_u$, the {\bf optimal action set} of agent $u$, be defined by
\begin{align*}
A_u =
\begin{cases}
\{0\} & X_u < 1/2 \\
\{1\} & X_u > 1/2 \\
\{0,1\} & X_u = 1/2.
\end{cases}
\end{align*}
\end{definition}
Note that if $a$ is an action that $u$ takes infinitely often then $a
\in A_u$, but that if $0$ (say) is the only action that $u$ takes
infinitely often then it still may be the case that
$A_u=\{0,1\}$. However, we show below that when $(\mu_0,\mu_1)$
induce non-atomic beliefs then $A_u$ is almost surely equal to
the set of actions that $u$ takes infinitely often.
\subsection{Main results}
In our first theorem we show that when initial private beliefs are
non-atomic, then at the limit $t \to \infty$ the optimal action sets
of the players are identical. As Example~\ref{example:non-learning}
indicates, this may not hold when private beliefs are atomic.
\begin{maintheorem}
\label{thm:unbounded-common-knowledge}
Let $(\mu_0,\mu_1)$ induce non-atomic beliefs. Then there exists
a random variable $A$ such that almost surely $A_u=A$
for all $u$.
\end{maintheorem}
That is, when initial private beliefs are non-atomic then agents, at the
limit, agree on the optimal action. The following theorem states that
when such agreement is guaranteed then the agents learn the state of
the world with high probability, when the number of agents is
large. This phenomenon is known as {\em asymptotic learning}. This
theorem is our main result.
\begin{maintheorem}
\label{thm:bounded-learning}
Let $\mu_0,\mu_1$ be such that for every connected, undirected graph
$G$ there exists a random variable $A$ such that almost surely
$A_u=A$ for all $u \in V$. Then there exists a sequence
$q(n)=q(n, \mu_0, \mu_1)$ such that $q(n) \to 1$ as $n \to \infty$,
and $\P{A = \{S\}} \geq q(n)$, for any choice of undirected,
connected graph $G$ with $n$ agents.
\end{maintheorem}
Informally, when agents agree on optimal action sets then they
necessarily learn the correct state of the world, with probability
that approaches one as the number of agents grows. This holds
uniformly over all possible connected and undirected social network
graphs.
The following theorem is a direct consequence of the two theorems
above, since the property proved by
Theorem~\ref{thm:unbounded-common-knowledge} is the condition required
by Theorem~\ref{thm:bounded-learning}.
\begin{maintheorem}
\label{thm:non-atomic-learning}
Let $\mu_0$ and $\mu_1$ induce non-atomic beliefs. Then there exists
a sequence $q(n)=q(n, \mu_0, \mu_1)$ such that $q(n) \to 1$ as $n
\to \infty$, and $\P{A_u = \{S\}} \geq q(n)$, for all agents
$u$ and for any choice of undirected, connected $G$ with $n$ agents.
\end{maintheorem}
\subsection{Note on directed vs. undirected graphs}
\begin{figure}
\caption{The ``royal family'' graph of Example~\ref{example:directed}.}
\label{fig:royal-family}
\end{figure}
Note that we require that the graph $G$ not only be strongly
connected, but also undirected (so that if $(u,v) \in E$ then $(v,u) \in
E$). The following example (depicted in Figure~\ref{fig:royal-family})
shows that when private beliefs are bounded then asymptotic learning
may not occur when the graph is strongly connected but not
undirected\footnote{We draw on Bala and Goyal's~\cite{BalaGoyal:96}
{\em royal family} graph.}.
\begin{example}
\label{example:directed}
Consider the following graph. The vertex set consists of two
groups of agents: a ``royal family'' clique of 5 agents who all
observe each other, and $n-5$ agents - the ``public'' - who are
connected in a chain, and who in addition all observe all the agents
in the royal family. Finally, a single member of the royal family
observes one of the public, so that the graph is strongly connected.
\end{example}
Now, with positive probability, which is independent of $n$, the
event occurs that all the members of the royal family initially
take the wrong action. If the private signals are sufficiently
weak, it is clear that all the agents of the public will adopt the
wrong opinion of the royal family and will henceforth choose the
wrong action.
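To make the first step of this argument concrete, the following short
Python sketch (an illustration only, and not part of the formal
argument) builds the observation structure of this graph, verifies
that it is strongly connected, and evaluates the probability that all
five royals initially err. Here the single-agent initial accuracy $q$
is an assumed stand-in for the effect of $(\mu_0,\mu_1)$; the
resulting probability $(1-q)^5$ visibly does not depend on $n$.
\begin{verbatim}
# Illustration of the "royal family" graph; an edge (u, w) means
# "u observes w".  The initial accuracy q of a single agent is an
# assumed parameter, not a quantity derived from (mu_0, mu_1).
from collections import deque

ROYALS = 5

def royal_family_graph(n):
    """Vertices 0..4 are the royal family, 5..n-1 are the public."""
    obs = {v: set() for v in range(n)}
    for u in range(ROYALS):              # royals observe each other
        obs[u] = {w for w in range(ROYALS) if w != u}
    for i in range(ROYALS, n):
        obs[i] |= set(range(ROYALS))     # the public observe all royals
        if i + 1 < n:
            obs[i].add(i + 1)            # ... and are chained
    obs[0].add(ROYALS)                   # one royal observes one of the public
    return obs

def strongly_connected(obs):
    def reaches_all(adj):                # BFS reachability from vertex 0
        seen, queue = {0}, deque([0])
        while queue:
            for w in adj[queue.popleft()]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return len(seen) == len(adj)
    rev = {v: set() for v in obs}        # reversed edges
    for v, ws in obs.items():
        for w in ws:
            rev[w].add(v)
    return reaches_all(obs) and reaches_all(rev)

q = 0.6   # assumed probability that a single initial action equals S
for n in (20, 200, 2000):
    assert strongly_connected(royal_family_graph(n))
    print(n, (1 - q) ** ROYALS)          # the same positive value for every n
\end{verbatim}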
Note that the removal of one edge - the one from the royal family back
to the public - results in this graph no longer being strongly
connected. However, the information added by this edge rarely has an
effect on the final outcome of the process. This indicates that strong
connectedness is too weak a notion of connectedness in this
context. We therefore seek stronger notions, such as connectedness
in undirected graphs.
A notion of connectedness weaker than undirectedness is that of
$L$-locally strongly connected graphs (see
Subsection~\ref{sec:locally-connected}). For any $L$, the graph from
Example~\ref{example:directed} is not $L$-locally strongly
connected for $n$ large enough.
\section{Proofs}
Before delving into the proofs of
Theorems~\ref{thm:unbounded-common-knowledge}
and~\ref{thm:bounded-learning} we introduce additional definitions in
subsection~\ref{sec:more-defs} and prove some general lemmas in
subsections~\ref{sec:graph-limits}, \ref{sec:isomorphics-balls}
and~\ref{sec:delta-ind}. Note that Lemma~\ref{lemma:local_property},
which is the main technical insight in the proof of
Theorem~\ref{thm:bounded-learning}, may be of independent interest. We
prove Theorem~\ref{thm:bounded-learning} in
subsection~\ref{sec:learning} and
Theorem~\ref{thm:unbounded-common-knowledge} in
subsection~\ref{sec:convergence}.
\subsection{Additional general notation}
\label{sec:more-defs}
\begin{definition}
\label{def:llr}
We denote the {\bf log-likelihood ratio} of agent $u$'s belief at
time $t$ by
\begin{align*}
Z_u(t) =\log\frac{X_u(t)}{1-X_u(t)},
\end{align*}
and let
\begin{align*}
Z_u = \lim_{t\to\infty}Z_u(t).
\end{align*}
\end{definition}
Note that
\begin{align*}
Z_u(t)= \log\frac{\CondP{S=1}{{\mathcal{F}}_u(t)}}{\CondP{S=0}{{\mathcal{F}}_u(t)}}
\end{align*}
and that
\begin{align*}
Z_u(1) = \log\frac{d\mu_1}{d\mu_0}(W_u).
\end{align*}
Note also that $Z_u(t)$ converges almost surely since
$X_u(t)$ does.
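As an illustration (not used in what follows), suppose the private
signals are Gaussian, with $\mu_0$ a standard normal centered at $-1$
and $\mu_1$ a standard normal centered at $+1$. Then
\begin{align*}
Z_u(1) = \log\frac{d\mu_1}{d\mu_0}(W_u)
= \log\frac{e^{-(W_u-1)^2/2}}{e^{-(W_u+1)^2/2}} = 2W_u,
\end{align*}
so that $X_u(1) = 1/(1+e^{-2W_u})$. Since $W_u$ has a density
under either state, $X_u(1)$ has a non-atomic distribution, and so
this pair $(\mu_0,\mu_1)$ induces non-atomic beliefs.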
\begin{definition}
\label{def:agg-actions}
We denote the sequence of actions of agent $u$ up to time $t$ by
\begin{align*}
\bar{A}_u(t) = (A_u(1),\ldots,A_u(t-1)).
\end{align*}
The sequence of all actions of $u$ is similarly denoted by
\begin{align*}
\bar{A}_u = (A_u(1),A_u(2),\ldots).
\end{align*}
We denote the actions of the neighbors of $u$ up to time $t$ by
\begin{align*}
I_u(t) = \{\bar{A}_w(t):\: w \in N(u)\} = \{A_w(t'):\:w
\in N(u),t'<t\},
\end{align*}
and let $I_u$ denote all the actions of $u$'s neighbors:
\begin{align*}
I_u = \{\bar{A}_w:\: w \in N(u)\} = \{A_w(t'):\:w
\in N(u),t' \geq 1\}.
\end{align*}
\end{definition}
Note that using this notation we have that ${\mathcal{F}}_u(t) =
\sigma(W_u,I_u(t))$ and ${\mathcal{F}}_u = \sigma(W_u,I_u)$.
\begin{definition}
\label{def:p-u}
We denote the probability that $u$ chooses the correct action at time
$t$ by
\begin{align*}
p_u(t)=\P{A_u(t)=S},
\end{align*}
and accordingly
\begin{align*}
p_u=\lim_{t \to \infty}p_u(t).
\end{align*}
\end{definition}
\begin{definition}
\label{def:group-signal}
For a set of vertices $U$ we denote by $W(U)$ the private
signals of the agents in $U$.
\end{definition}
\subsection{Sequences of rooted graphs and their limits}
\label{sec:graph-limits}
In this section we define a topology on {\em rooted graphs}. We call
convergence in this topology {\em convergence to local limits}, and
use it repeatedly in the proof of
Theorem~\ref{thm:bounded-learning}. The core of the proof of
Theorem~\ref{thm:bounded-learning} is the topological
Lemma~\ref{lemma:local_property}, which we prove here. This lemma is a
claim related to {\em local graph properties}, which we also introduce
here.
\begin{definition}
Let $G=(V,E)$ be a finite or countably infinite graph, and let $u
\in V$ be a vertex in $G$. We denote by $(G,u)$ the {\bf rooted
graph} $G$ with root $u$.
\end{definition}
\begin{definition}
Let $G=(V,E)$ and $G'=(V',E')$ be graphs. $h : V \to V'$ is a {\bf
graph isomorphism} between $G$ and $G'$ if $(u,v) \in E
\Leftrightarrow (h(u), h(v)) \in E'$.
Let $(G,u)$ and $(G',u')$ be rooted graphs. Then $h : V \to V'$ is a
{\bf rooted graph isomorphism} between $(G,u)$ and $(G',u')$ if $h$
is a graph isomorphism and $h(u) = u'$.
We write $(G,u) \cong (G',u')$ whenever there exists a rooted graph
isomorphism between the two rooted graphs.
\end{definition}
Given a (perhaps directed) graph $G=(V,E)$ and two vertices $u, w \in
V$, the graph distance $d(u,w)$ is equal to the length in edges of a
shortest (directed) path between $u$ and $w$.
\begin{definition}
\label{def:balls}
We denote by $B_r(G, u)$ the ball of radius $r$ around the vertex
$u$ in the graph $G=(V,E)$: Let $V'$ be the set of vertices $w$ such
that $d(u,w)$ is at most $r$. Let $E' = \{(v,w)\in E:\: v,w \in
V'\}$. Then $B_r(G, u)$ is the rooted graph with vertices $V'$,
edges $E'$ and root $u$.
\end{definition}
We next define a topology on strongly connected rooted graphs (or
rather on their isomorphism classes; we shall simply refer to these
classes as graphs). A natural metric between strongly connected
rooted graphs is the following (see Benjamini and
Schramm~\cite{benjamini2011recurrence}, Aldous and
Steele~\cite{aldous2003objective}). Given $(G,u)$ and $(G',u')$, let
\begin{align*}
D((G,u),(G',u')) = 2^{-R},
\end{align*}
where
\begin{align*}
R = \sup \{r : B_r(G,u) \cong B_r(G',u')\}.
\end{align*}
This is indeed a metric: the triangle inequality follows immediately,
and a standard diagonalization argument is needed to show that if
$D((G,u),(G',u'))=0$ then $(G,u) \cong (G',u')$.
This metric induces a topology that will be useful to us. As usual,
the basis of this topology is the set of balls of the metric; the ball
of radius $2^{-R}$ around the {\em graph} $(G,u)$ is the set of graphs
$(G',u')$ such that $B_R(G,u) \cong B_R(G',u')$. We refer to
convergence in this topology as convergence to a {\em local limit},
and provide the following equivalent definition for it:
\begin{definition}
Let $\{(G_r,u_r)\}_{r=1}^\infty$ be a sequence of strongly connected
rooted graphs. We say that the sequence converges if there exists a
strongly connected rooted graph $(G',u')$ such that
\begin{align*}
B_r(G',u') \cong B_r(G_r,u_r),
\end{align*}
for all $r \geq 1$. We then write
\begin{align*}
(G',u')=\lim_{r \to \infty}(G_r,u_r),
\end{align*}
and call $(G',u')$ the {\bf local limit} of the sequence
$\{(G_r,u_r)\}_{r=1}^\infty$.
\end{definition}
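As a simple illustration, let $C_n$ denote the undirected cycle on $n$
vertices, rooted at an arbitrary vertex $v_n$, and let $(P,0)$ denote
the two-sided infinite path (vertex set the integers, with edges
between consecutive integers), rooted at $0$. Whenever $2r+1 < n$, the
ball $B_r(C_n,v_n)$ is a path on $2r+1$ vertices and is therefore
isomorphic to $B_r(P,0)$. In particular $B_r(C_{2r+2},v_{2r+2}) \cong
B_r(P,0)$ for every $r$, so that
\begin{align*}
(P,0) = \lim_{r \to \infty}(C_{2r+2},v_{2r+2}),
\end{align*}
and, more generally, $D((C_n,v_n),(P,0)) \to 0$ as $n \to \infty$.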
Let ${\mathcal{G}}_d$ be the set of strongly connected rooted graphs with degree
at most $d$. Another standard diagonalization argument shows that
${\mathcal{G}}_d$ is {\em compact} (see
again~\cite{benjamini2011recurrence,aldous2003objective}). Then, since
the space is metric, every sequence in ${\mathcal{G}}_d$ has a converging
subsequence:
\begin{lemma}
\label{lemma:compactness}
Let $\{(G_r,u_r)\}_{r=1}^\infty$ be a sequence of rooted graphs in
${\mathcal{G}}_d$. Then there exists a subsequence
$\{(G_{r_i},u_{r_i})\}_{i=1}^\infty$ with $r_{i+1}>r_i$ for all $i$,
such that $\lim_{i \to \infty}(G_{r_i},u_{r_i})$ exists.
\end{lemma}
We next define {\em local properties} of rooted graphs.
\begin{definition}
\label{def:local-property}
Let $P$ be a property of rooted graphs or a Boolean predicate on
rooted graphs. We write $(G,u) \in P$ if $(G,u)$ has the property,
and $(G,u) \notin P$ otherwise.
We say that $P$ is a {\bf local property} if, for every $(G,u) \in
P$ there exists an $r>0$ such that if $B_r(G,u) \cong B_r(G',u')$,
then $(G',u') \in P$. Let $r$ be such that $B_r(G,u) \cong
B_r(G',u') \Rightarrow (G',u') \in P$. Then we say that {\bf $(G,u)$
has property $P$ with radius $r$}, and denote $(G,u) \in P^{(r)}$.
\end{definition}
That is, if $(G,u)$ has a local property $P$ then there is some $r$
such that knowing the ball of radius $r$ around $u$ in $G$ is
sufficient to decide that $(G,u)$ has the property $P$. An alternative
name for a local property would therefore be a {\em locally decidable}
property. In our topology, local properties are nothing but {\em open
sets}: the definition above states that if $(G,u) \in P$ then there
exists an element of the basis of the topology that includes $(G,u)$
and is also in $P$. This is a necessary and sufficient condition for
$P$ to be open.
We use this fact to prove the following lemma.
\begin{definition}
\label{def:infgraphs}
Let $\mathcal{B}_d$ be the set of infinite, connected, undirected
graphs of degree at most $d$, and let $\mathcal{B}_d^r$ be the set of
$\mathcal{B}_d$-rooted graphs
\begin{align*}
\mathcal{B}_d^r = \{(G,u) \,:\, G \in \mathcal{B}_d, u \in G\}.
\end{align*}
\end{definition}
\begin{lemma}
\label{lemma:inf-graphs-closed}
$\mathcal{B}_d^r$ is compact.
\end{lemma}
\begin{proof}
Lemma~\ref{lemma:compactness} states that ${\mathcal{G}}_d$, the set of
strongly connected rooted graphs of degree at most $d$, is
compact. Since $\mathcal{B}_d^r$ is a subset of ${\mathcal{G}}_d$, it remains
to show that $\mathcal{B}_d^r$ is closed in ${\mathcal{G}}_d$.
The complement of $\mathcal{B}_d^r$ in ${\mathcal{G}}_d$ is the set of graphs
in ${\mathcal{G}}_d$ that are either finite or directed. These are both local
properties: if $(G,u)$ is finite (or directed), then there exists a
radius $r$ such that examining $B_r(G,u)$ is enough to determine
that it is finite (or directed). Hence the sets of finite graphs and
directed graphs in ${\mathcal{G}}_d$ are open in ${\mathcal{G}}_d$, their union
is open in ${\mathcal{G}}_d$, and its complement, $\mathcal{B}_d^r$, is
closed in ${\mathcal{G}}_d$.
\end{proof}
We now state and prove the main lemma of this subsection. Note that
the set of graphs $\mathcal{B}_d$ satisfies the conditions of this lemma.
\begin{lemma}
\label{lemma:local_property}
Let $\mathcal{A}$ be a set of infinite, strongly connected graphs, let
$\mathcal{A}^r$ be the set of $\mathcal{A}$-rooted graphs
\begin{align*}
\mathcal{A}^r = \{(G,u) \,:\, G \in \mathcal{A}, u \in G\},
\end{align*}
and assume that $\mathcal{A}$ is such that $\mathcal{A}^r$ is compact.
Let $P$ be a local property such that for each $G \in \mathcal{A}$
there exists a vertex $w \in G$ such that $(G,w) \in P$. Then for
each $G \in \mathcal{A}$ there exist an $r_0$ and infinitely many
distinct vertices $\{w_n\}_{n = 1}^\infty$ such that $(G,w_n) \in
P^{(r_0)}$ for all $n$.
\end{lemma}
\begin{figure}
\caption{The construction in the proof of Lemma~\ref{lemma:local_property}: each of the disjoint balls $B_R(G,u_r)$ contains a vertex $w_r$ whose ball of radius $r_0$ is isomorphic to $B_{r_0}(G',w')$.}
\label{fig:local_property}
\end{figure}
\begin{proof}
Let $G$ be an arbitrary graph in $\mathcal{A}$. Consider a sequence
$\{v_r\}_{r=1}^\infty$ of vertices in $G$ such that for all distinct
$r,s \in \N$ the balls $B_r(G,v_r)$ and $B_s(G,v_s)$ are disjoint.
Since $\mathcal{A}^r$ is compact, the sequence $\{(G,
v_r)\}_{r=1}^\infty$ has a converging subsequence; relabel it
$\{(G,u_r)\}_{r=1}^\infty$, where $u_r = v_{s_r}$ for some strictly
increasing sequence $s_1 < s_2 < \cdots$, and let
\begin{align*}
(G',u') = \lim_{r \to \infty}(G, u_r).
\end{align*}
Note that since $\mathcal{A}^r$ is compact, $(G',u') \in \mathcal{A}^r$
and in particular $G' \in \mathcal{A}$ is an infinite, strongly
connected graph. Note also that since $s_r \geq r$ for all $r$, we have
$B_r(G,u_r) \subseteq B_{s_r}(G,v_{s_r})$, and hence the balls
$B_r(G,u_r)$ and $B_s(G,u_s)$ are disjoint for all distinct $r,s \in \N$.
Since $G' \in \mathcal{A}$, there exists a vertex $w' \in G'$ such that
$(G',w') \in P$. Since $P$ is a local property, $(G',w') \in
P^{(r_0)}$ for some $r_0$, so that if $B_{r_0}(G',w') \cong
B_{r_0}(G,w)$ then $(G,w) \in P$.
Let $R = d(u',w')+r_0$, so that $B_{r_0}(G',w') \subseteq
B_R(G',u')$. Then, since the sequence $(G,u_r)$ converges to
$(G',u')$, for all $r \geq R$ it holds that
$B_R(G,u_r)\cong~B_R(G',u')$. Therefore, for all $r>R$ there exists
a vertex $w_r \in B_R(G,u_r)$ such that $B_{r_0}(G,w_r) \cong
B_{r_0}(G',w')$. Hence $(G,w_r) \in P^{(r_0)}$ for all $r>R$ (see
Fig~\ref{fig:local_property}). Furthermore, for distinct $r,s > R$, the balls
$B_R(G,u_r)$ and $B_R(G,u_s)$ are disjoint, and so $w_r \neq w_s$.
We have therefore shown that the vertices $\{w_r\}_{r > R}$ are an
infinite set of distinct vertices such that $(G,w_r) \in P^{(r_0)}$,
as required.
\end{proof}
\subsection{Coupling isomorphic balls}
\label{sec:isomorphics-balls}
This section includes three claims that we will use repeatedly
later. Their spirit is that everything that happens to an agent up to
time $t$ depends only on the state of the world and a ball of radius
$t$ around it.
Recall that ${\mathcal{F}}_u(t)$, the information available to agent $u$ at time
$t$, is the $\sigma$-algebra generated by $W_u$ and $A_w(t')$ for all
$w$ neighbors of $u$ and $t' < t$. Recall that $I_u(t)$ denotes this
exact set of actions:
\begin{align*}
I_u(t) = \left\{ \bar{A}_w(t):\:w \in N(u)\right\} = \left\{
A_w(t'):\:w \in N(u),t'<t\right\}.
\end{align*}
\begin{claim}
\label{clm:a-from_cf}
For all agents $u$ and times $t$, $I_u(t)$ is a deterministic
function of $W(B_t(G,u))$.
\end{claim}
Recall (Definition~\ref{def:group-signal}) that $W(B_t(G,u))$
are the private signals of the agents in $B_t(G,u)$, the ball of
radius $t$ around $u$ (Definition~\ref{def:balls}).
\begin{proof}
We prove by induction on $t$. $I_u(1)$ is empty, and so the claim
holds for $t=1$.
Assume the claim holds up to time $t$. By definition,
$A_u(t+1)$ is a function of $W_u$ and of $I_u(t+1)$,
which includes $\{A_w(t'):w\in N(u), t' \leq
t\}$. $A_w(t')$ is a function of $W_w$ and $I_w(t')$,
and hence by the inductive assumption it is a function of
$W(B_{t'}(G,w))$. Since $t'<t+1$ and the distance between $u$
and $w$ is one, $W(B_{t'}(G,w)) \subseteq W(B_{t+1}(G,u))$, for all
$w \in N(u)$ and $t' \leq t$. Hence $I_u(t+1)$ is a function of
$W(B_{t+1}(G,u))$, the private signals in $B_{t+1}(G,u)$.
\end{proof}
The following lemma follows from Claim~\ref{clm:a-from_cf} above:
\begin{lemma}
\label{lemma:p-iso}
Consider two processes with identical private signal distributions
$(\mu_0,\mu_1)$, on different graphs $G = (V,E)$ and $G' = (V',E')$.
Let $t \geq 1$, $u \in V$ and $u' \in V'$ be such that there exists
a rooted graph isomorphism $h:B_t(G,u) \to B_t(G',u')$.
Let $M$ be a random variable that is measurable in ${\mathcal{F}}_u(t)$. Then
there exists an $M'$ that is measurable in ${\mathcal{F}}_{u'}(t)$ such that
the distribution of $(M,S)$ is identical to the
distribution of $(M',S')$, where $S'$ denotes the state of the world
in the second process.
\end{lemma}
Recall that a graph isomorphism between $G=(V,E)$ and
$G'=(V',E')$ is a bijective function $h :V \to V'$ such that $(u,v) \in
E$ iff $(h(u),h(v)) \in E'$.
\begin{proof}
Couple the two processes by setting $S=S'$, and letting
$W_w=W_{w'}$ when $h(w)=w'$. Note that it follows that
$W_u=W_{u'}$. By Claim~\ref{clm:a-from_cf} we have that
$I_u(t)=I_{u'}(t)$, when using $h$ to identify vertices in $V$ with
vertices in $V'$.
Since $M$ is measurable in ${\mathcal{F}}_u(t)$, it must, by the definition of
${\mathcal{F}}_u(t)$, be a function of $I_u(t)$ and $W_u$. Denote then
$M=f(I_u(t),W_u)$. Since we showed that $I_u(t)=I_{u'}(t)$,
if we let $M'=f(I_{u'}(t),W_{u'})$ then the distribution of
$(M,S)$ and $(M',S')$ will be identical.
\end{proof}
In particular, we use this lemma in the case where $M$ is an estimator
of $S$. Then this lemma implies that the probability that $M=S$ is
equal to the probability that $M'=S'$.
Recall that $p_u(t) = \P{A_u(t) = S} = \max_{A \in
{\mathcal{F}}_u(t)}\P{A=S}$. Hence we can apply this lemma (\ref{lemma:p-iso})
above to $A_u(t)$ and $A_{u'}(t)$:
\begin{corollary}
\label{cor:p-iso}
If $B_t(G,u)$ and $B_t(G',u')$ are isomorphic then $p_u(t) =
p_{u'}(t)$.
\end{corollary}
\subsection{$\delta$-independence}
\label{sec:delta-ind}
To prove that agents learn $S$ we will show that the agents must, over
the duration of this process, gain access to a large number of
measurements of $S$ that are {\em almost} independent. To formalize
the notion of almost-independence we define $\delta$-independence and
prove some easy results about it. The proofs in this subsection are
relatively straightforward.
Let $\mu$ and $\nu$ be two measures defined on the same space. We
denote the total variation distance between them by
$d_{TV}(\mu,\nu)$. Let $A$ and $B$ be two random variables with joint
distribution $\mu_{(A,B)}$. Then we denote by $\mu_A$ the marginal
distribution of $A$, $\mu_B$ the marginal distribution of $B$, and
$\mu_A\times\mu_B$ the product distribution of the marginal distributions.
\begin{definition}
Let $(X_1,X_2,\dots,X_k)$ be random variables. We refer to them as {\bf
$\delta$-independent} if their joint distribution
$\mu_{(X_1,\ldots,X_k)}$ has total
variation distance of at most $\delta$ from the product of their
marginal distributions $\mu_{X_1}\times\cdots\times\mu_{X_k}$:
\begin{align*}
d_{TV}(\mu_{(X_1,\ldots,X_k)}, \mu_{X_1}\times\cdots\times\mu_{X_k}) \leq \delta.
\end{align*}
\end{definition}
Likewise, $(X_1,\ldots,X_k)$ are {\bf $\delta$-dependent} if the
distance between the distributions is more than $\delta$.
We remind the reader that a coupling $\nu$, between two random
variables $A_1$ and $A_2$ distributed $\nu_1$ and $\nu_2$, is a
distribution on the product of the spaces on which $\nu_1$ and $\nu_2$ are defined, such that the
marginal of $A_i$ is $\nu_i$. The total variation distance between
$A_1$ and $A_2$ is equal to the minimum, over all such couplings
$\nu$, of $\nu(A_1 \neq A_2)$.
Hence to prove that $X,Y$ are
$\delta$-independent it is sufficient to show that there exists a
coupling $\nu$ between $\nu_1$, the joint distribution of $(X,Y)$, and
$\nu_2$, the product of the marginal distributions of $X$ and $Y$,
such that $\nu((X_1,Y_1) \neq (X_2,Y_2)) \leq \delta$, where
$(X_1,Y_1) \sim \nu_1$ and $(X_2,Y_2) \sim \nu_2$.
Alternatively, to prove that $(A,B)$ are $\delta$-independent, one
could directly bound the total variation distance between
$\mu_{(A,B)}$ and $\mu_A\times\mu_B$ by $\delta$. This is often done
below using the fact that the total variation distance satisfies the
triangle inequality $d_{TV}(\mu,\nu) \leq
d_{TV}(\mu,\gamma)+d_{TV}(\gamma,\nu)$.
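For random variables taking finitely many values this distance can be
computed directly. The following short Python sketch (an illustration
only; the joint law used is an arbitrary example) computes the total
variation distance between the joint distribution of a pair $(X,Y)$
and the product of its marginals, i.e., the smallest $\delta$ for
which $(X,Y)$ are $\delta$-independent.
\begin{verbatim}
# Illustration: the smallest delta for which a pair (X, Y) of
# finite-valued random variables is delta-independent, computed as
# the total variation distance between their joint law and the
# product of the marginals.  The joint table is an arbitrary example.
from itertools import product

def delta_independence(joint):
    """joint: dict mapping (x, y) to P(X = x, Y = y)."""
    xs = sorted({x for x, _ in joint})
    ys = sorted({y for _, y in joint})
    px = {x: sum(joint.get((x, y), 0.0) for y in ys) for x in xs}
    py = {y: sum(joint.get((x, y), 0.0) for x in xs) for y in ys}
    return 0.5 * sum(abs(joint.get((x, y), 0.0) - px[x] * py[y])
                     for x, y in product(xs, ys))

joint = {(0, 0): 0.30, (0, 1): 0.20, (1, 0): 0.20, (1, 1): 0.30}
print(delta_independence(joint))  # 0.1, so (X, Y) are 0.1-independent
\end{verbatim}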
We state and prove some straightforward claims regarding
$\delta$-independence.
\begin{claim}
\label{clm:delta-independent}
Let $A$, $B$ and $C$ be random variables such that $\P{A \neq B} \leq
\delta$ and $(B,C)$ are $\delta'$-independent. Then $(A,C)$ are
$2\delta+\delta'$-independent.
\end{claim}
\begin{proof}
Let $\mu_{(A,B,C)}$ be a joint distribution of $A$, $B$ and $C$ such
that $\P{A \neq B} \leq \delta$.
Since $\P{A \neq B} \leq \delta$, we have $\P{(A,C) \neq (B,C)} \leq
\delta$, whether $(A,B,C)$ is distributed according to
$\mu_{(A,B,C)}$ or according to $\mu_{(A,B)}\times\mu_C$. Hence
\begin{align*}
d_{TV}(\mu_{(A,C)},\mu_{(B,C)}) \leq \delta
\end{align*}
and
\begin{align*}
d_{TV}(\mu_A\times\mu_C,\mu_B\times\mu_C) \leq \delta.
\end{align*}
Since $(B,C)$ are $\delta'$-independent,
\begin{align*}
d_{TV}(\mu_B\times\mu_C,\mu_{(B,C)}) \leq \delta'.
\end{align*}
The claim follows from the triangle inequality
\begin{align*}
d_{TV}(\mu_{(A,C)},\mu_A\times\mu_C) &\leq
d_{TV}(\mu_{(A,C)},\mu_{(B,C)}) + d_{TV}(\mu_{(B,C)},\mu_B\times\mu_C)
+ d_{TV}(\mu_B\times\mu_C,\mu_A\times\mu_C)\\
&\leq 2\delta+\delta'.
\end{align*}
\end{proof}
\begin{claim}
\label{clm:function-independent}
Let $(X,Y)$ be $\delta$-independent, and let $Z = f(Y,B)$ for some
function $f$ and some $B$ that is independent of the pair $(X,Y)$. Then
$(X,Z)$ are also $\delta$-independent.
\end{claim}
\begin{proof}
Let $\mu_{(X,Y)}$ be a joint distribution of $X$ and $Y$ satisfying
the conditions of the claim. Then since $(X,Y)$ are
$\delta$-independent,
\begin{align*}
d_{TV}(\mu_{(X,Y)},\mu_X\times\mu_Y) \leq \delta.
\end{align*}
Since $B$ is independent of the pair $(X,Y)$,
\begin{align*}
d_{TV}(\mu_{(X,Y)}\times\mu_B,\mu_X\times\mu_Y\times\mu_B) \leq \delta
\end{align*}
and $(X,Y,B)$ are $\delta$-independent. Therefore there exists a
coupling between $(X_1,Y_1,B_1) \sim \mu_{(X,Y)}\times\mu_B$ and
$(X_2,Y_2,B_2) \sim \mu_X\times\mu_Y\times\mu_B$ such that
$\P{(X_1,Y_1,B_1) \neq (X_2,Y_2,B_2)} \leq \delta$. Then
\begin{align*}
\P{(X_1,f(Y_1,B_1)) \neq (X_2,f(Y_2,B_2))} \leq \delta
\end{align*}
and the proof follows.
\end{proof}
\begin{claim}
\label{clm:delta-ind-additive}
Let $A=(A_1,\ldots,A_k)$, and $X$ be random variables. Let
$(A_1,\ldots,A_k)$ be $\delta_1$-independent and let $(A,X)$ be
$\delta_2$-independent. Then $(A_1,\ldots,A_k,X)$ are
$(\delta_1+\delta_2)$-independent.
\end{claim}
\begin{proof}
Let $\mu_{(A_1,\ldots,A_k,X)}$ be the joint distribution of
$A=(A_1,\ldots,A_k)$ and $X$. Then since $(A_1,\ldots,A_k)$ are
$\delta_1$-independent,
\begin{align*}
d_{TV}(\mu_A,\mu_{A_1}\times\cdots\times\mu_{A_k})
\leq \delta_1.
\end{align*}
Hence
\begin{align*}
d_{TV}(\mu_A\times\mu_X,\mu_{A_1}\times\cdots\times\mu_{A_k}\times\mu_X)
\leq \delta_1.
\end{align*}
Since $(A,X)$ are $\delta_2$-independent,
\begin{align*}
d_{TV}(\mu_{(A,X)},\mu_A\times\mu_X) \leq \delta_2.
\end{align*}
The claim then follows from the triangle inequality
\begin{align*}
d_{TV}(\mu_{(A,X)},\mu_{A_1}\times\cdots\times\mu_{A_k}\times\mu_X)
\leq
d_{TV}(\mu_{(A,X)},\mu_A\times\mu_X)
+
d_{TV}(\mu_A\times\mu_X,\mu_{A_1}\times\cdots\times\mu_{A_k}\times\mu_X).
\end{align*}
\end{proof}
\begin{lemma}
\label{cor:majority}
For every $1/2 < p < 1$ there exist $\delta = \delta(p) >0$ and
$\eta = \eta(p) > 0$ such that if $S$ and
$(X_1,X_2,X_3)$ are binary random variables with $\P{S=1}=1/2$, $1/2
< p-\eta \leq \P{X_i=S} < 1$, and $(X_1,X_2,X_3)$ are
$\delta$-independent conditioned on $S$ then $\P{a(X_1,X_2,X_3) = S}
> p$, where $a$ is the MAP estimator of $S$ given $(X_1,X_2,X_3)$.
\end{lemma}
In other words, one's odds of guessing $S$ using three conditionally
almost-independent bits are greater than using a single bit.
\begin{proof}
We apply Lemma~\ref{clm:majority} below to three conditionally
independent bits which are each equal to $S$ w.p.\ at least
$p-\eta$. Then
\begin{align*}
\P{a(X_1,X_2,X_3) = S} \geq p-\eta+\eps_{p-\eta}
\end{align*}
where $\eps_q=\frac{1}{100}(2q-1)(3q^2-2q^3-q)$.
Since $\eps_q$ is continuous in $q$ and positive for $1/2<q<1$, it
follows that for $\eta$ small enough $p-\eta+\eps_{p-\eta} >
p$. Now, take $\delta < \eps_{p-\eta}-\eta$. Then, since we can
couple $\delta$-independent bits to independent bits so that they
differ with probability at most $\delta$, the claim follows.
\end{proof}
\begin{lemma}
\label{clm:majority}
Let $S$ and $(X_1,X_2,X_3)$ be binary random variables such that
$\P{S=1}=1/2$. Let $1/2 < p \leq \P{X_i=S} < 1$. Let
$a(X_1,X_2,X_3)$ be the MAP estimator of $S$ given
$(X_1,X_2,X_3)$. Then there exists an $\eps_p>0$ that depends only on
$p$ such that if $(X_1,X_2,X_3)$ are independent conditioned on $S$
then $\P{a(X_1,X_2,X_3) = S} \geq p + \eps_p$.
In particular the statement holds with
\begin{align*}
\eps_p=\frac{1}{100}(2p-1)(3p^2-2p^3-p).
\end{align*}
\end{lemma}
\begin{proof}
Denote $X=(X_1,X_2,X_3)$.
Assume first that $\P{X_i=S} = p$ for all $i$. Let $\delta_1,
\delta_2, \delta_3$ be such that $p+\delta_i = \CondP{X_i=1}{S=1}$
and $p-\delta_i=\CondP{X_i=0}{S=0}$.
To show that $\P{a(X) = S} \geq p + \eps_p$ it is enough
to show that $\P{b(X) = S} \geq p + \eps_p$ for some
estimator $b$, by the definition of a MAP estimator. We separate
into three cases.
\begin{enumerate}
\item If $\delta_1 = \delta_2 = \delta_3 = 0$ then the events
$X_i=S$ are independent and the majority of the $X_i$'s is equal
to $S$ with probability $p' = p^3+3p^2(1-p)$, which is greater than $p$
for ${\textstyle \frac12} < p < 1$. Denote $\eta_p = p' - p$. Then
$\P{a(X) = S} \geq p + \eta_p$.
\item Otherwise if $|\delta_i| \leq \eta_p / 6$ for all $i$ then we
can couple $X$ to three bits $Y=(Y_1,Y_2,Y_3)$ which satisfy the
conditions of case 1 above, and so that $\P{X \neq Y} \leq
\eta_p/2$. Then $\P{a(X) = S} \geq p + \eta_p/2$.
\item Otherwise we claim that there exist $i$ and $j$ such that
$|\delta_i+\delta_j| > \eta_p/12$.
Indeed assume w.l.o.g.\ that $\delta_1 \geq \eta_p/6$. Then if it
doesn't hold that $\delta_1+\delta_2 \geq \eta_p/12$ and it
doesn't hold that $\delta_1+\delta_3 \geq \eta_p/12$ then
$\delta_2 \leq -\eta_p/12$ and $\delta_3 \leq -\eta_p/12$ and
therefore $\delta_2+\delta_3 \leq -\eta_p/12$.
Now that this claim is proved, assume w.l.o.g.\ that
$\delta_1+\delta_2 \geq \eta_p/12$. Recall that $X_i \in \{0,1\}$,
and so the product $X_1X_2$ is also an element of $\{0,1\}$. Then
\begin{align*}
\P{X_1X_2 = S} &= {\textstyle \frac12}\CondP{X_1X_2 = 1}{S=1}+{\textstyle \frac12}\CondP{X_1X_2 =
0}{S=0}\\
&= {\textstyle \frac12}\left((p+\delta_1)(p+\delta_2)+(p-\delta_1)(p-\delta_2)+(p-\delta_1)(1-p+\delta_2)+(1-p+\delta_1)(p-\delta_2)\right)\\
&= p+{\textstyle \frac12}(2p-1)(\delta_1+\delta_2)\\
&\geq p + (2p-1)\eta_p/12,
\end{align*}
and so $\P{a(X) = S} \geq p + (2p-1)\eta_p/12$.
\end{enumerate}
Finally, we need to consider the case that $\P{X_i=S} = p_i > p$ for
some $i$. We again consider two cases. Denote $\eps_p =
(2p-1)\eta_p/100$. If there exists an $i$ such that $p_i \geq p+\eps_p$
then this bit is by itself an estimator that equals $S$ with
probability at least $p+\eps_p$, and therefore the MAP
estimator equals $S$ with probability at least $p+\eps_p$.
Otherwise $p \leq p_i < p+\eps_p$ for all $i$. We will
construct a coupling between the distributions of $X=(X_1,X_2,X_3)$
and $Y=(Y_1,Y_2,Y_3)$ such that the $Y_i$'s are conditionally
independent given $S$ and $\P{Y_i=S} = p$ for all $i$, and
furthermore $\P{Y \neq X} \leq 3\eps_p$. By what we've proved so far
the MAP estimator of $S$ given $Y$ equals $S$ with probability at
least $p+(2p-1)\eta_p/12 \geq p+8\eps_p$. Hence by the coupling, the
same estimator applied to $X$ is equal to $S$ with probability at
least $p+8\eps_p-3\eps_p>p+\eps_p$.
To couple $X$ and $Y$ let $Z_1,Z_2,Z_3$ be i.i.d.\ random variables,
uniform on $[0,1]$. When $S=1$ let $X_i=Y_i=S$ if $Z_i \leq
p+\delta_i$, let $X_i=S$ and $Y_i=1-S$ if $Z_i \in
(p+\delta_i,p_i+\delta_i]$, and otherwise $X_i=Y_i=1-S$. The
construction for $S=0$ is similar. It is clear that $X$ and $Y$ have the
required distribution, and that furthermore $\P{X_i \neq Y_i} =
p_i-p \leq \eps_p$. Hence $\P{X \neq Y} \leq 3\eps_p$, as needed.
\end{proof}
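The quantities appearing in the last two proofs are elementary to
evaluate numerically. The following short Python check (an
illustration only) confirms, for a few values of $p$, that the
majority of three independent $p$-accurate bits is correct with
probability $p' = p^3+3p^2(1-p) > p$, and that $\eta_p = p'-p$ and
$\eps_p = \frac{1}{100}(2p-1)\eta_p$ are positive.
\begin{verbatim}
# Numerical check of the quantities in the proof of the majority
# lemma: for 1/2 < p < 1 the majority of three independent bits, each
# equal to S with probability p, equals S with probability
# p' = p^3 + 3 p^2 (1 - p) > p, and eps_p = (2p - 1)(p' - p)/100 > 0.

def majority_accuracy(p):
    # P(at least two of three independent p-accurate bits are correct)
    return p ** 3 + 3 * p ** 2 * (1 - p)

for p in (0.51, 0.6, 0.75, 0.9, 0.99):
    p_prime = majority_accuracy(p)
    eta = p_prime - p
    eps = (2 * p - 1) * eta / 100
    assert p_prime > p and eps > 0
    print(f"p={p:.2f}  p'={p_prime:.5f}  eta_p={eta:.5f}  eps_p={eps:.7f}")
\end{verbatim}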
\subsection{Asymptotic learning}
\label{sec:learning}
In this section we prove Theorem~\ref{thm:bounded-learning}.
\begin{theorem*}[\ref{thm:bounded-learning}]
Let $\mu_0,\mu_1$ be such that for every connected, undirected graph
$G$ there exists a random variable $A$ such that almost surely
$A_u=A$ for all $u \in V$. Then there exists a sequence
$q(n)=q(n, \mu_0, \mu_1)$ such that $q(n) \to 1$ as $n \to \infty$,
and $\P{A = \{S\}} \geq q(n)$, for any choice of undirected,
connected graph $G$ with $n$ agents.
\end{theorem*}
To prove this theorem we will need a number of intermediate results,
which are given over the next few subsections.
\subsubsection{Estimating the limiting optimal action set $A$}
We would like to show that although the agents have a common optimal
action set $A$ only at the limit $t \to \infty$, they can estimate
this set well at a large enough time $t$.
The action $A_u(t)$ is agent $u$'s MAP estimator of $S$ at time
$t$ (see Remark~\ref{rem:map}). We likewise define $K_u(t)$ to be
agent $u$'s MAP estimator of $A$, at time $t$:
\begin{align}
\label{eq:l-i-t}
K_u(t) = \operatornamewithlimits{argmax}_{K \in \{\{0\},\{1\},\{0,1\}\}}\CondP{A = K}{{\mathcal{F}}_u(t)}.
\end{align}
We show that the sequence of random variables $K_u(t)$ converges
to $A$ for every $u$; equivalently, since $K_u(t)$ takes only
finitely many values, $K_u(t)=A$ for each agent $u$ and all $t$ large
enough:
\begin{lemma}
\label{lemma:L-estimate}
$\P{\lim_{t \to \infty}K_u(t) = A} = 1$ for all $u \in V$.
\end{lemma}
This lemma (\ref{lemma:L-estimate}) follows by direct application of
the more general Lemma~\ref{lemma:estimation-converges} which we prove
below. Note that a consequence is that $\lim_{t \to \infty}
\P{K_u(t) = A} = 1$.
\begin{lemma}
\label{lemma:estimation-converges}
Let ${\mathcal{K}}_1 \subseteq {\mathcal{K}}_2 \subseteq \cdots$ be a filtration of
$\sigma$-algebras, and let ${\mathcal{K}}_\infty = \cup_t{\mathcal{K}}_t$. Let $K$ be a
random variable that takes a finite number of values and is
measurable in ${\mathcal{K}}_\infty$. Let $M(t)=\operatornamewithlimits{argmax}_k\CondP{K =
k}{{\mathcal{K}}_t}$ be the MAP estimator of $K$ given ${\mathcal{K}}_t$. Then
\begin{align*}
\P{\lim_{t \to \infty} M(t)=K} = 1.
\end{align*}
\end{lemma}
\begin{proof}
For each $k$ in the support of $K$, $\CondP{K=k}{{\mathcal{K}}_t}$ is a
bounded martingale which converges almost surely to
$\CondP{K=k}{{\mathcal{K}}_\infty}$, which is equal to $\ind{K=k}$, since $K$
is measurable in ${\mathcal{K}}_\infty$. Therefore
$M(t)=\operatornamewithlimits{argmax}_k\CondP{K=k}{{\mathcal{K}}_t}$ converges almost surely to
$\operatornamewithlimits{argmax}_k\CondP{K=k}{{\mathcal{K}}_\infty}=K$.
\end{proof}
We would like at this point to provide the reader with some more
intuition on $A_u(t)$, $K_u(t)$ and the difference between
them. If $A = \{1\}$ then, by definition, from some
time $t_0$ on $A_u(t)=1$ and, by
Lemma~\ref{lemma:L-estimate}, $K_u(t)=\{1\}$. The same applies
when $A = \{0\}$. However, when $A = \{0,1\}$ then
$A_u(t)$ may take both values 0 and 1 infinitely often, but
$K_u(t)$ will eventually equal $\{0,1\}$. That is, agent $u$ will
realize at some point that, although it thinks at the moment that 1 is
preferable to 0 (for example), it is in fact the most likely outcome
that its belief will converge to $1/2$. In this case, although it is
not optimal, a {\em uniformly random} guess of which is the best
action may not be so bad. Our next definition is based on this
observation.
Based on $K_u(t)$, we define a second ``action'' $C_u(t)$.
\begin{definition}
Let $C_u(t)$ be picked uniformly from $K_u(t)$: if $K_u(t) =
\{1\}$ then $C_u(t) = 1$, if $K_u(t) = \{0\}$ then $C_u(t) = 0$,
and if $K_u(t) = \{0,1\}$ then $C_u(t)$ is picked independently
from the uniform distribution over $\{0,1\}$.
\end{definition}
Note that we here extend our probability space by including in
$I_u(t)$ (the observations of agent $u$ up to time $t$) an extra
uniform bit that is independent of everything else, and of $S$ in
particular. Hence this does not increase $u$'s ability to estimate
$S$, and if we can show that in this setting $u$ learns $S$ then $u$
can also learn $S$ without this bit. In fact, we show that
asymptotically it is as good an estimate for $S$ as the best estimate
$A_u(t)$:
\begin{claim}
\label{claim:b-a-equiv}
$\lim_{t \to \infty} \P{C_u(t)=S} = \lim_{t \to \infty} \P{A_u(t)=S}
= p$ for all $u$.
\end{claim}
\begin{proof}
We prove the claim by showing that it holds both when conditioning
on the event $A = \{0,1\}$ and when conditioning on its
complement.
When $A \neq \{0,1\}$ then for $t$ large enough
$A=\{A_u(t)\}$. Since (by Lemma~\ref{lemma:L-estimate})
$\lim K_u(t) = A$ with probability 1, in this case
$C_u(t)=A_u(t)$ for $t$ large enough, and
\begin{align*}
\lim_{t \to \infty}\CondP{C_u(t)=S}{A \neq \{0,1\}}\;=\;
\CondP{A=\{S\}}{A \neq \{0,1\}} \;=\; \lim_{t \to
\infty}\CondP{A_u(t)=S}{A \neq \{0,1\}}.
\end{align*}
When $A = \{0,1\}$ then $\lim X_u(t) = \lim
\CondP{A_u(t)=S}{{\mathcal{F}}_u(t)} = 1/2$ and so $\lim \P{A_u(t)=S} =
1/2$. This is again also true for $C_u(t)$, since in this case it is
picked at random for $t$ large enough, and so
\begin{align*}
\lim_{t \to \infty}\CondP{C_u(t)=S}{A = \{0,1\}}\;=\; \frac{1}{2} \;=\;
\lim_{t \to \infty}\CondP{A_u(t)=S}{A = \{0,1\}}.
\end{align*}
\end{proof}
\subsubsection{The probability of getting it right}
Recall Definition~\ref{def:p-u}: $p_u(t) = \P{A_u(t)=S}$ and
$p_u = \lim_{t \to \infty}p_u(t)$ (i.e., $p_u(t)$ is the probability
that agent $u$ takes the right action at time $t$). We prove here a
few easy related claims that will later be useful to us.
\begin{claim}
\label{clm:pMonotone}
$p_u(t+1) \geq p_u(t)$.
\end{claim}
\begin{proof}
Condition on ${\mathcal{F}}_u(t+1)$, the information available to agent $u$ at
time $t+1$. Then the conditional probability that $A_u(t+1) = S$ is at
least as high as the conditional probability that $A_u(t) = S$, since
\begin{align*}
A_u(t+1)=\operatornamewithlimits{argmax}_s\CondP{S=s}{{\mathcal{F}}_u(t+1)}
\end{align*}
and $A_u(t)$ is measurable in ${\mathcal{F}}_u(t+1)$. The claim is proved
by integrating over all possible values of ${\mathcal{F}}_u(t+1)$.
\end{proof}
Since $p_u(t)$ is bounded by one, Claim~\ref{clm:pMonotone} means that
the limit $p_u$ exists. We show that this value is the same for all
vertices.
\begin{claim}
\label{clm:pG}
There exists a $p \in [0,1]$ such that $p_u=p$ for all $u$.
\end{claim}
\begin{proof}
Let $u$ and $w$ be neighbors. As in the proof above, we can argue
that $\CondP{A_u(t+1) = S} {{\mathcal{F}}_u(t+1)} \geq
\CondP{A_w(t) = S} {{\mathcal{F}}_u(t+1)}$, since $A_w(t)$ is
measurable in ${\mathcal{F}}_u(t+1)$. Hence the same holds unconditioned, and
so we have that $p_u \geq p_w$, by taking the limit $t \to
\infty$. Since the same argument can be used with the roles of $u$
and $w$ reversed, we have that $p_u=p_w$, and the claim follows from
the connectedness of the graph, by induction.
\end{proof}
We make the following definition in the spirit of these claims:
\begin{definition}
$p = \lim_{t \to \infty} \P{A_u(t)=S}$.
\end{definition}
In the context of a specific social network graph $G$ we may denote
this quantity as $p(G)$.
For time $t=1$ the next standard claim follows from the fact that the
agents' signals are informative.
\begin{claim}
\label{clm:pGreaterThanHalf}
$p_u(t)>1/2$ for all $u$ and $t$.
\end{claim}
\begin{proof}
Note that
\begin{align*}
\CondP{A_u(1)=S}{W_u} =
\max\{X_u(1),1-X_u(1)\} =
\max\{\CondP{S=0}{W_u}, \CondP{S=1}{W_u}\}.
\end{align*}
Recall that $p_u(1) = \P{A_u(1) = S}$. Hence
\begin{align*}
p_u(1) &= \E{\CondP{A_u(1)=S}{W_u}}\\
&= \E{\max\{\CondP{S=0}{W_u}, \CondP{S=1}{W_u}\}}.
\end{align*}
Since $\max\{a,b\}={\textstyle \frac12}(a+b)+{\textstyle \frac12}|a-b|$, and since
$\CondP{S=0}{W_u}+ \CondP{S=1}{W_u}=1$, it follows that
\begin{align*}
p_u(1) &=
{\textstyle \frac12}+{\textstyle \frac12}\E{|\CondP{S=0}{W_u}-\CondP{S=1}{W_u}|}\\
&= {\textstyle \frac12}+{\textstyle \frac12} d_{TV}(\mu_0,\mu_1),
\end{align*}
where the last equality follows by Bayes' rule. Since $\mu_0 \neq
\mu_1$, the total variation distance $d_{TV}(\mu_0,\mu_1)>0$ and
$p_u(1) > {\textstyle \frac12}$. For $t>1$ the claim follows from
Claim~\ref{clm:pMonotone} above.
\end{proof}
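For example (purely as an illustration), if $\mu_0$ is the uniform
distribution on $[0,1]$ and $\mu_1$ has density $2w$ on $[0,1]$, then
\begin{align*}
d_{TV}(\mu_0,\mu_1) = {\textstyle \frac12}\int_0^1 |2w-1|\,dw = {\textstyle \frac14},
\end{align*}
so that every agent's first action equals $S$ with probability
$p_u(1) = {\textstyle \frac12}+{\textstyle \frac12}\cdot{\textstyle \frac14} = \frac{5}{8}$.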
Recall that $|N(u)|$ is the out-degree of $u$, or the number of
neighbors that $u$ observes. The next lemma states that an agent with
many neighbors will have a good estimate of $S$ already at the second
round, after observing the first action of its neighbors. This lemma
is adapted from Mossel and Tamuz~\cite{MosselTamuz10B:arxiv}, and
provided here for completeness.
\begin{lemma}
\label{thm:large-out-deg}
There exist constants $C_1=C_1(\mu_0,\mu_1)$ and
$C_2=C_2(\mu_0,\mu_1)$ such that for any agent $u$ it holds that
\begin{align*}
p_u(2) \geq 1-C_1 e^{-C_2 \cdot |N(u)|}.
\end{align*}
\end{lemma}
\begin{proof}
Conditioned on $S$, private signals are independent and identically
distributed. Since $A_w(1)$ is a deterministic function of
$W_w$, the initial actions $A_w(1)$ are also
identically distributed, conditioned on $S$. Hence there exists a
$q$ such that $p_w(1) = \P{A_w(1)=S} = q$ for all agents
$w$. By Claim~\ref{clm:pGreaterThanHalf} above, $q>1/2$. Therefore
\begin{align*}
\CondP{A_w(1)=1}{S=1} \neq \CondP{A_w(1)=1}{S=0},
\end{align*}
and the distribution of $A_w(1)$ is different when conditioned
on $S=0$ or $S=1$.
Fix an agent $u$, and let $n = |N(u)|$ be the out-degree of $u$, or
the number of neighbors that it observes. Let
$\{w_1,\ldots,w_{|N(u)|}\}$ be the set of $u$'s neighbors. Recall
that $A_u(2)$ is the MAP estimator of $S$ given
$(A_{w_1}(1),\ldots,A_{w_n}(1))$, and given $u$'s private signal.
By standard asymptotic statistics of hypothesis testing
(cf.~\cite{dasgupta2008asymptotic}), testing an hypothesis (in our
case, say, $S=1$ vs.\ $S=0$) given $n$ informative, conditionally
i.i.d.\ signals, succeeds except with probability that is
exponentially low in $n$. It follows that $\P{A_u(2) \neq S}$ is
exponentially small in $n$, so that there exist $C_1$ and $C_2$ such
that
\begin{align*}
p_u(2) = \P{A_u(2) = S} \geq 1-C_1 e^{-C_2 \cdot |N(u)|}.
\end{align*}
\end{proof}
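To illustrate the rate in this lemma (a numerical illustration only;
the constants in the lemma depend on $(\mu_0,\mu_1)$, whereas below a
single-agent accuracy $q$ is simply assumed), note that the MAP
estimator $A_u(2)$ is at least as accurate as a majority vote over
the neighbors' first actions, and the probability that such a
majority errs is a binomial tail that decays exponentially in the
number of neighbors.
\begin{verbatim}
# Illustration: a majority vote over n conditionally i.i.d. first
# actions, each correct with probability q > 1/2, errs with
# probability P(Binomial(n, q) <= n/2), which decays exponentially
# in n; the MAP estimator A_u(2) is at least this accurate.  The
# accuracy q is an assumed stand-in for the effect of (mu_0, mu_1).
from math import comb

def majority_error(n, q):
    # ties (possible for even n) are pessimistically counted as errors
    return sum(comb(n, k) * q ** k * (1 - q) ** (n - k)
               for k in range(n // 2 + 1))

q = 0.6
for n in (5, 10, 20, 40, 80):
    print(f"n = {n:3d}   P(majority errs) = {majority_error(n, q):.3e}")
\end{verbatim}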
The following claim is a direct consequence of the previous lemmas of
this section.
\begin{claim}
\label{thm:large-out-deg-graph}
Let $d(G) = \sup_u |N(u)|$ be the maximal out-degree of the graph $G$; note
that for infinite graphs it may be that $d=\infty$. Then there exist
constants $C_1=C_1(\mu_0,\mu_1)$ and $C_2=C_2(\mu_0,\mu_1)$ such
that
\begin{align*}
p(G) \geq 1-C_1 e^{-C_2 \cdot d(G)}.
\end{align*}
\end{claim}
\begin{proof}
Let $u$ be an arbitrary vertex in $G$. Then by
Lemma~\ref{thm:large-out-deg} it holds that
\begin{align*}
p_u(2) \geq 1-C_1 e^{-C_2 \cdot |N(u)|},
\end{align*}
for some constants $C_1$ and $C_2$. By Claim~\ref{clm:pMonotone} we
have that $p_u(t+1) \geq p_u(t)$, and therefore
\begin{align*}
p_u = \lim_{t \to \infty}p_u(t) \geq 1-C_1 e^{-C_2 \cdot |N(u)|}.
\end{align*}
Finally, $p(G) = p_u$ by Claim~\ref{clm:pG}, and so
\begin{align*}
p(G) \geq 1-C_1 e^{-C_2 \cdot |N(u)|}.
\end{align*}
Since this holds for an arbitrary vertex $u$, the claim follows.
\end{proof}
\subsubsection{Local limits and pessimal graphs}
We now turn to apply local limits to our process. We consider here and
henceforth the same model of Definitions~\ref{def:state-signals}
and~\ref{def:revealed-actions}, as applied, with the same private
signal distributions $(\mu_0,\mu_1)$, to different graphs. We write $p(G)$ for the value of $p$ on
the process on $G$, $A(G)$ for the value of $A$ on $G$, etc.
\begin{lemma}
\label{lemma:limit-leq-p}
Let $(G,u) = \lim_{r \to \infty}(G_r,u_r)$. Then $p(G) \leq
\liminf_r p(G_r)$.
\end{lemma}
\begin{proof}
Since $B_r(G_r,u_r) \cong B_r(G,u)$, by Lemma~\ref{cor:p-iso} we
have that $p_u(r) = p_{u_r}(r)$. By Claim~\ref{clm:pMonotone}
$p_{u_r}(r) \leq p(G_r)$, and therefore $p_u(r) \leq p(G_r)$. The
claim follows by taking the $\liminf$ of both sides.
\end{proof}
A particularly interesting case is the one where the different $G_r$'s
are all the same graph:
\begin{corollary}
\label{cor:limit-leq-p}
Let $G$ be a (perhaps infinite) graph, and let $\{u_r\}$ be a
sequence of vertices. Then if the local limit $(H,u) = \lim_{r \to
\infty}(G,u_r)$ exists then $p(H) \leq p(G)$.
\end{corollary}
Recall that $\mathcal{B}_d$ denotes the set of infinite, connected,
undirected graphs of degree at most $d$. Let
\begin{align*}
\mathcal{B} = \bigcup_d \mathcal{B}_d.
\end{align*}
\begin{definition}
Let
\begin{align*}
p^* = p^*(\mu_0,\mu_1) = \inf_{G \in \mathcal{B}}p(G)
\end{align*}
be the probability of learning in the pessimal graph.
\end{definition}
Note that by Claim~\ref{clm:pGreaterThanHalf} we have that $p^*>1/2$.
We show that this infimum is in fact attained by some graph:
\begin{lemma}
\label{lemma:h-exists}
There exists a graph $H \in \mathcal{B}$ such that $p(H)=p^*$.
\end{lemma}
\begin{proof}
Let $\{G_r = (V_r,E_r)\}_{r=1}^{\infty}$ be a sequence of graphs in
$\mathcal{B}$ such that $\lim_{r \to \infty} p(G_r) = p^*$. Note that
$\{G_r\}$ must all be in $\mathcal{B}_d$ for some $d$ (i.e., have
uniformly bounded degrees), since otherwise the sequence $p(G_r)$
would have values arbitrarily close to $1$ and its limit could not
be $p^*$ (unless indeed $p^* = 1$, in which case our main
Theorem~\ref{thm:bounded-learning} is proved). This follows from
Lemma~\ref{thm:large-out-deg}.
We now arbitrarily mark a vertex $u_r$ in each graph, so that $u_r
\in V_r$, and let $(H,u)$ be the limit of some
subsequence of $\{(G_r,u_r)\}_{r=1}^\infty$. Since $\mathcal{B}_d^r$ is
compact (Lemma~\ref{lemma:inf-graphs-closed}), $(H,u)$ is guaranteed
to exist, and $H \in \mathcal{B}_d$.
By Lemma~\ref{lemma:limit-leq-p} we have that $p(H) \leq \liminf_r
p(G_r) = p^*$. But since $H \in \mathcal{B}$, $p(H)$ cannot be less
than $p^*$, and the claim is proved.
\end{proof}
\subsubsection{Independent bits}
We now show that on infinite graphs, the private signals in the
neighborhood of agents that are ``far enough away'' are (conditioned
on $S$) almost independent of $A$ (the final consensus estimate
of $S$).
\begin{lemma}
\label{lemma:r-independent}
Let $G$ be an infinite graph. Fix a vertex $u_0$ in $G$. Then for
every $\delta>0$ there exists an $r_\delta$ such that for every
$r\geq r_\delta$ and every vertex $u$ with $d(u_0,u)>2r$ it holds
that $W(B_r(G,u))$, the private signals in $B_r(G,u)$, are
$\delta$-independent of $A$, conditioned on $S$.
\end{lemma}
Here we denote graph distance by $d(\cdot,\cdot)$.
\begin{proof}
Fix $u_0$, and let $u$ be such that $d(u_0,u) > 2r$. Then
$B_r(G,u_0)$ and $B_r(G,u)$ are disjoint, and hence independent
conditioned on $S$. Hence $K_{u_0}(r)$ is independent of
$W(B_r(G,u))$, conditioned on $S$.
Lemma~\ref{lemma:L-estimate} states that $\P{\lim_{r \to
\infty}K_{u_0}(r) = A} = 1$, and so there exists an
$r_\delta$ such that for every $r \geq r_\delta$ it holds that
$\P{K_{u_0}(r) = A} > 1-{\textstyle \frac12}\delta$.
Recall Claim~\ref{clm:delta-independent}: for any $A,B,C$, if
$\P{A \neq B} \leq {\textstyle \frac12}\delta$ and $B$ is independent of $C$, then
$(A,C)$ are $\delta$-independent.
Applying Claim~\ref{clm:delta-independent} to $A$,
$K_{u_0}(r)$ and $W(B_r(G,u))$ we get that for any $r$
greater than $r_\delta$ it holds that $W(B_r(G,u))$ is
$\delta$-independent of $A$, conditioned on $S$.
\end{proof}
We will now show, in the lemmas below, that in infinite graphs each
agent has access to any number of ``good estimators'':
$\delta$-independent measurements of $S$ that are each almost as
likely to equal $S$ as $p^*$, the minimal probability of estimating
$S$ on any infinite graph.
\begin{definition}
We say that agent $u \in G$ has {\bf $k$ $(\delta,\eps)$-good
estimators} if there exists a time $t$ and estimators
$M_1,\ldots,M_k$ such that $(M_1,\ldots,M_k) \in {\mathcal{F}}_u(t)$ and
\begin{enumerate}
\item $\P{M_i = S} > p^* - \eps$ for $1 \leq i \leq k$.
\item $(M_1, \ldots, M_k)$ are $\delta$-independent, conditioned on
$S$.
\end{enumerate}
\end{definition}
\begin{claim}
\label{clm:good-est-local}
Let $P$ denote the property of having $k$ $(\delta,\eps)$-good
estimators. Then $P$ is a {\em local property}
(Definition~\ref{def:local-property}) of the rooted graph
$(G,u)$. Furthermore, if $u \in G$ has $k$ $(\delta,\eps)$-good
estimators measurable in ${\mathcal{F}}_u(t)$ then $(G,u) \in P^{(t)}$, i.e.,
$(G,u)$ has property $P$ with radius $t$.
\end{claim}
\begin{proof}
If $(G,u) \in P$ then by definition there exists a time $t$ such
that $(M_1,\ldots,M_k) \in {\mathcal{F}}_u(t)$. Hence by
Lemma~\ref{lemma:p-iso}, if $B_t(G,u) \cong B_t(G',u')$ then $u' \in
G'$ also has $k$ $(\delta,\eps)$-good estimators $(M'_1,\ldots,M'_k)
\in {\mathcal{F}}_{u'}(t)$ and $(G',u') \in P$. In particular, $(G,u) \in
P^{(t)}$, i.e., $(G,u)$ has property $P$ with radius $t$.
\end{proof}
We are now ready to prove the main lemma of this subsection:
\begin{lemma}
\label{thm:independent-bits}
For every $d \geq 2$, $G \in \mathcal{B}_d$, $\eps, \delta > 0$ and
$k \geq 0$ there exists a vertex $u$, such that $u$ has $k$
$(\delta,\eps)$-good estimators.
\end{lemma}
Informally, this lemma states that if $G$ is an infinite graph with
bounded degrees, then there exists an agent that eventually has $k$
almost-independent estimates of $S$ with quality close to $p^*$, the
minimal probability of learning.
\begin{proof}
In this proof we use the term ``independent'' to mean ``independent
conditioned on $S$''.
We choose an arbitrary $d$ and prove by induction on $k$. The basis
$k = 0$ is trivial. Assume the claim holds for $k$, any $G \in
\mathcal{B}_d$ and all $\eps,\delta>0$. We shall show that it holds
for $k+1$, any $G \in \mathcal{B}_d$ and any $\delta,\eps>0$.
By the inductive hypothesis for every $G \in \mathcal{B}_d$ there
exists a vertex in $G$ that has $k$ $(\delta/100,\eps)$-good
estimators $(M_1, \ldots, M_k)$.
Now, having $k$ $(\delta/100,\eps)$-good estimators is a {\em local
property} (Claim~\ref{clm:good-est-local}). We now therefore
apply Lemma~\ref{lemma:local_property}: since every graph $G \in
\mathcal{B}_d$ has a vertex with $k$ $(\delta/100,\eps)$-good
estimators, any graph $G \in \mathcal{B}_d$ has a time $t_k$ for
which infinitely many distinct vertices $\{w_r\}$ have $k$
$(\delta/100,\eps)$-good estimators measurable at time $t_k$.
In particular, if we fix an arbitrary $u_0 \in G$ then for every $r$
there exists a vertex $w \in G$ that has $k$
$(\delta/100,\eps)$-good estimators and whose distance $d(u_0,w)$
from $u_0$ is larger than $r$.
We shall prove the lemma by showing that for a vertex $w$ that is
far enough from $u_0$ and has $k$ $(\delta/100,\eps)$-good estimators
$(M_1,\ldots,M_k)$, it holds that for a time $t_{k+1}$ large enough
$(M_1,\ldots,M_k,C_w(t_{k+1}))$ are $k+1$ $(\delta,\eps)$-good estimators.
By Lemma~\ref{lemma:r-independent} there exists an $r_\delta$ such
that if $r>r_\delta$ and $d(u_0,w) > 2r$ then $W(B_r(G,w))$
is $\delta/100$-independent of $A$. Let $r^* =
\max\{r_\delta,t_k\}$, where $t_k$ is such that there are infinitely
many vertices in $G$ with $k$ good estimators measurable at time
$t_k$.
Let $w$ be a vertex with $k$ $(\delta/100,\eps)$-good estimators
$(M_1,\ldots,M_k)$ at time $t_k$, such that $d(u_0,w) > 2r^*$.
Denote
\begin{align*}
\bar{M}=(M_1, \ldots, M_k).
\end{align*}
Since $d(u_0,w) > 2r_\delta$, $W(B_{r^*}(G,w))$ is
$\delta/100$-independent of $A$, and since $B_{t_k}(G,w)
\subseteq B_{r^*}(G,w)$, $W(B_{t_k}(G,w))$ is
$\delta/100$-independent of $A$. Finally, since $\bar{M} \in
{\mathcal{F}}_w(t_k)$, $\bar{M}$ is a function of $W(B_{t_k}(G,w))$,
and so by Claim~\ref{clm:function-independent} we have that
$\bar{M}$ is also $\delta/100$-independent of $A$.
For $t_{k+1}$ large enough it holds that
\begin{itemize}
\item $K_w(t_{k+1})$ is equal to $A$ with probability at least
$1-\delta/100$, since
\begin{align*}
\lim_{t \to \infty} \P{K_w(t)=A} = 1,
\end{align*}
by Lemma~\ref{lemma:L-estimate}.
\item Additionally, $\P{C_w(t_{k+1})=S} > p^*-\eps$, since
\begin{align*}
\lim_{t \to \infty}\P{C_w(t)=S} = p \geq p^*,
\end{align*}
by Claim~\ref{claim:b-a-equiv}.
\end{itemize}
We have then that $(\bar{M},A)$ are $\delta/100$-independent and
$\P{K_w(t_{k+1}) \neq A} \leq \delta/100$.
Claim~\ref{clm:delta-independent} states that if $(X,Y)$ are
$\delta$-independent and $\P{Y \neq Z} \leq \delta'$ then $(X,Z)$ are
$(\delta+2\delta')$-independent. Applying this here we get that
$(\bar{M},K_w(t_{k+1}))$ are $\delta/25$-independent.
It follows by application of Claim~\ref{clm:delta-ind-additive} that
$(M_1,\ldots,M_k,K_w(t_{k+1}))$ are $\delta$-independent. Since
$C_w(t_{k+1})$ is a function of $K_w(t_{k+1})$ and an
independent bit, it follows by another application of
Claim~\ref{clm:function-independent} that $(M_1, \ldots, M_k,
C_w(t_{k+1}))$ are also $\delta$-independent.
Finally, since $\P{C_w(t_{k+1})=S} > p^*-\eps$, $w$ has the $k+1$
$(\delta,\eps)$-good estimators $(M_1,\ldots,C_w(t_{k+1}))$ and the
proof is concluded.
\end{proof}
\subsubsection{Asymptotic learning}
As a tool in the analysis of finite graphs, we would like to prove
that in infinite graphs the agents learn the correct state of the
world almost surely.
\begin{theorem}
\label{thm:bounded}
Let $G=(V,E)$ be an infinite, connected undirected graph with
bounded degrees (i.e., $G$ is a general graph in $\mathcal{B}$). Then
$p(G)=1$.
\end{theorem}
Note that an alternative phrasing of this theorem is that $p^*=1$.
\begin{proof}
Assume the contrary, i.e. $p^*<1$. Let $H$ be an infinite, connected
graph with bounded degrees such that $p(H) = p^*$; such a graph
exists by Lemma~\ref{lemma:h-exists}.
By Lemma~\ref{thm:independent-bits} there exists for arbitrarily
small $\eps,\delta >0 $ a vertex $w \in H$ that has access at some
time $T$ to three $\delta$-independent estimators (conditioned on
$S$), each of which is equal to $S$ with probability at least
$p^*-\eps$. By Lemma~\ref{cor:majority}
and Claim~\ref{clm:pGreaterThanHalf}, the MAP estimator of $S$ using these
estimators equals $S$ with probability higher than $p^*$, for the
appropriate choice of low enough $\eps,\delta$. Therefore, since
$w$'s action $A_w(T)$ is the MAP estimator of $S$, its
probability of equaling $S$ is $\P{A_w(T)=S} > p^*$ as well,
and so $p(H) > p^*$ - contradiction.
\end{proof}
Using Theorem~\ref{thm:bounded} we prove
Theorem~\ref{thm:bounded-learning}, which is the corresponding
theorem for finite graphs:
\begin{theorem*}[\ref{thm:bounded-learning}]
Let $\mu_0,\mu_1$ be such that for every connected, undirected graph
$G$ there exists a random variable $A$ such that almost surely
$A_u=A$ for all $u \in V$. Then there exists a sequence
$q(n)=q(n, \mu_0, \mu_1)$ such that $q(n) \to 1$ as $n \to \infty$,
and $\P{A = \{S\}} \geq q(n)$, for any choice of undirected,
connected graph $G$ with $n$ agents.
\end{theorem*}
\begin{proof}
Assume the contrary. Then there exist $\eps>0$ and a sequence of
graphs $\{G_r\}$, where the number of agents in $G_r$ tends to
infinity with $r$, such that $\P{A(G_r)=\{S\}} \leq 1-\eps$ for all
$r$, and so also $p(G_r)$ is bounded away from $1$.
As in the proof of Lemma~\ref{lemma:h-exists}, these graphs must
all have degrees bounded by some $d$, since otherwise, by
Claim~\ref{thm:large-out-deg-graph}, there would exist a subsequence
of graphs $\{G_{r_d}\}$ with degree at least $d$ and $\lim_{d \to
\infty}p(G_{r_d}) = 1$. Mark an arbitrary vertex $u_r$ in each
$G_r$. Since ${\mathcal{G}}_d$ is compact (Lemma~\ref{lemma:compactness}), some
subsequence of $\{(G_r, u_r)\}_{r=1}^\infty$ converges to a local
limit $(G,u)$; relabel this subsequence as $\{(G_r,
u_r)\}_{r=1}^\infty$. Since the $G_r$ are undirected, of degree at
most $d$, and of size tending to infinity, the limit satisfies $G \in
\mathcal{B}_d$.
Since $G$ is infinite and of bounded degree, it follows by
Theorem~\ref{thm:bounded} that $p(G) = 1$, and in particular
$\lim_{r \to \infty}p_u(r) = 1$. As before, $p_{u_r}(r) = p_u(r)$,
and therefore $\lim_{r \to \infty}p_{u_r}(r) = 1$. Since $p(G_r)
\geq p_{u_r}(r)$, we get $\lim_{r \to \infty}p(G_r) = 1$,
contradicting the fact that $p(G_r)$ is bounded away from $1$.
\end{proof}
\subsection{Convergence to identical optimal action sets}
\label{sec:convergence}
In this section we prove Theorem~\ref{thm:unbounded-common-knowledge}.
\begin{theorem*}[\ref{thm:unbounded-common-knowledge}]
Let $(\mu_0,\mu_1)$ induce non-atomic beliefs. Then there exists
a random variable $A$ such that almost surely $A_u=A$
for all $u$.
\end{theorem*}
In this section we shall assume henceforth that the distribution of
initial private beliefs is non-atomic.
\subsubsection{Previous work}
The following theorem is due to Gale and Kariv~\cite{GaleKariv:03}.
Given two agents $u$ and $w$, let $E_u^0$ denote the event that
$A_u(t)$ equals $0$ infinitely often, and $E_w^1$ the event that
$A_w(t)$ equals $1$ infinitely often.
\begin{theorem}[Gale and Kariv]
\label{thm:imitation}
If agent $u$ observes agent $w$'s actions then
\begin{equation*}
\P{E_u^0, E_w^1} = \P{X_u = 1/2, E_u^0, E_w^1}.
\end{equation*}
\end{theorem}
I.e., if agent $u$ takes action 0 infinitely often, agent $w$ takes
action 1 infinitely often, and $u$ observes $w$, then $u$'s belief is
$1/2$ at the limit, almost surely.
\begin{corollary}
\label{cor:imitation}
If agent $u$ observes agent $w$'s actions, and $w$ takes both
actions infinitely often then $X_u=1/2$.
\end{corollary}
\begin{proof}
Assume by contradiction that $X_u < 1/2$. Then $u$ takes
action 0 infinitely often. Therefore Theorem~\ref{thm:imitation}
implies that $X_u=1/2$ - contradiction.
The case where $X_u > 1/2$ is treated similarly.
\end{proof}
\subsubsection{Limit log-likelihood ratios}
Denote
\begin{align*}
Y_u(t) =
\log\frac{\CondP{I_u(t)}{S=1,\bar{A}_u(t)}}{\CondP{I_u(t)}{S=0,\bar{A}_u(t)}}.
\end{align*}
In the next claim we show that $Z_u(t)$, the log-likelihood ratio
given $u$'s observations up to time $t$, can be written as the
sum of two terms: $Z_u(1)=\log\frac{d\mu_1}{d\mu_0}(W_u)$, which
is the log-likelihood ratio given $u$'s private signal
$W_u$, and $Y_u(t)$, which depends only on the actions of $u$
and its neighbors, and does not depend directly on $W_u$.
\begin{claim}
\label{clm:llr-decomp}
\begin{align*}
Z_u(t) = Z_u(1) + Y_u(t).
\end{align*}
\end{claim}
\begin{proof}
By definition we have that
\begin{align*}
Z_u(t) =
\log\frac{\CondP{S=1}{{\mathcal{F}}_u(t)}}{\CondP{S=0}{{\mathcal{F}}_u(t)}}
= \log\frac{\CondP{S=1}{I_u(t),W_u}}{\CondP{S=0}{I_u(t),W_u}}.
\end{align*}
and by Bayes' rule (using $\P{S=1}=\P{S=0}={\textstyle \frac12}$)
\begin{align*}
Z_u(t) &= \log\frac{\CondP{I_u(t)}{S=1,
W_u}\CondP{W_u}{S=1}}{\CondP{I_u(t)}{S=0,W_u}\CondP{W_u}{S=0}}\\
&=\log\frac{\CondP{I_u(t)}{S=1,
W_u}}{\CondP{I_u(t)}{S=0,W_u}}+Z_u(1).
\end{align*}
Now $I_u(t)$, the actions of the neighbors of $u$ up to time $t$,
are a deterministic function of $W(B_t(G,u))$, the private signals
in the ball of radius $t$ around $u$, by
Claim~\ref{clm:a-from_cf}. Conditioned on $S$ these are all
independent, and so, from the definition of actions, these actions
depend on $u$'s private signal $W_u$ only inasmuch as it
affects the actions of $u$. Hence
\begin{align*}
\CondP{I_u(t)}{S=s,W_u} = \CondP{I_u(t)}{S=s,\bar{A}_u(t)},
\end{align*}
and therefore
\begin{align*}
Z_u(t) &=\log\frac{\CondP{I_u(t)}{S=1,
\bar{A}_u(t)}}{\CondP{I_u(t)}{S=0,\bar{A}_u(t)}}+Z_u(1)\\
&= Z_u(1)+Y_u(t).
\end{align*}
\end{proof}
Note that $Y_u(t)$ is a deterministic function of $I_u(t)$ and
$\bar{A}_u(t)$.
Following our notation convention, we define $Y_u = \lim_{t \to
\infty} Y_u(t)$. Note that this limit exists almost surely since the
limit of $Z_u(t)$ exists almost surely. The following claim follows
directly from the definitions:
\begin{claim}
\label{clm:y-measurable}
$Y_u$ is measurable in $(\bar{A}_u,I_u)$, the actions of $u$
and its neighbors.
\end{claim}
\subsubsection{Convergence of actions}
The event that an agent takes both actions infinitely often is (almost
surely) a sufficient condition for convergence to belief $1/2$. This
follows from the fact that these actions imply that its belief takes
values both above and below $1/2$ infinitely many times. We show that
it is also (almost surely) a necessary condition. Denote by $E_u^a$
the event that $u$ takes action $a$ infinitely often.
\begin{theorem}
\label{thm:mu-half-indecisive}
\begin{equation*}
\P{E_u^0 \cap E_u^1, X_u = 1/2} = \P{X_u = 1/2}.
\end{equation*}
\end{theorem}
I.e., it a.s.\ holds that $X_u=1/2$ iff $u$ takes both actions
infinitely often.
\begin{proof}
We'll prove the claim by showing that $\P{\neg(E_u^0 \cap E_u^1),
X_u = 1/2} = 0$, or equivalently that $\P{\neg(E_u^0 \cap
E_u^1), Z_u = 0} = 0$ (recall that $Z_u = \log
X_u/(1-X_u)$ and so $X_u=1/2 \Leftrightarrow
Z_u=0$).
Let $\bar{a}=(a(1), a(2), \ldots)$ be a sequence of actions, and
denote by $W_{-u}$ the private signals of all agents except
$u$. Conditioning on $W_{-u}$ and $S$ we can write:
\begin{align*}
\P{\bar{A}_u=\bar{a}, Z_u=0} &=
\E{\CondP{\bar{A}_u=\bar{a},
Z_u=0}{W_{-u},S}} \\
&= \E{\CondP{\bar{A}_u=\bar{a},
Z_u(1)=-Y_u}{W_{-u},S}}
\end{align*}
where the second equality follows from Claim~\ref{clm:llr-decomp}.
Note that by Claim~\ref{clm:y-measurable} $Y_u$ is fully determined
by $\bar{A}_u$ and $W_{-u}$. We can therefore write
\begin{align*}
\P{\bar{A}_u=\bar{a}, Z_u=0} &=
\E{\CondP{\bar{A}_u=\bar{a},
Z_u(1)=-Y_u(W_{-u},\bar{a})}{W_{-u},S}} \\
&\leq
\E{\CondP{Z_u(1)=-Y_u(W_{-u},\bar{a})}{W_{-u},S}}
\end{align*}
Now, conditioned on $S$, the private signal $W_u$ is
distributed $\mu_S$ and is independent of $W_{-u}$. Hence its
distribution when further conditioned on $W_{-u}$ is still
$\mu_S$. Since $Z_u(1) = \log\frac{d\mu_1}{d\mu_0}(W_u)$,
its distribution is also unaffected, and in particular is still
non-atomic. It therefore equals $-Y_u(W_{-u},\bar{a})$ with
probability zero, and so
\begin{align*}
\P{\bar{A}_u=\bar{a}, Z_u=0} = 0.
\end{align*}
Since this holds for all sequences of actions $\bar{a}$, it holds in
particular for all sequences which converge. Since there are only
countably many such sequences, the probability that the action
sequence converges (i.e., that $\neg(E_u^0 \cap E_u^1)$ occurs) and $Z_u=0$ is zero,
or
\begin{align*}
\P{\neg(E_u^0 \cap E_u^1), Z_u = 0} = 0.
\end{align*}
\end{proof}
Hence it is impossible for an agent's belief to converge to $1/2$ while
the agent takes only one action infinitely often. A direct
consequence of this, together with Theorem~\ref{thm:imitation}, is the
following corollary:
\begin{corollary}
\label{cor:converge}
The union of the following three events occurs with probability one:
\begin{enumerate}
\item $\forall u\in V:\lim_{t \to \infty} A_u(t)=S$. Equivalently, all
agents converge to the correct action.
\item $\forall u\in V:\lim_{t \to \infty} A_u(t)=1-S$. Equivalently, all
agents converge to the wrong action.
\item $\forall u\in V:X_u=1/2$, and in this case all agents take
both actions infinitely often and hence do not converge at all.
\end{enumerate}
\end{corollary}
\begin{proof}
Consider first the case that there exists a vertex $u$ such that $u$
takes both actions infinitely often. Let $w$ be a vertex that
observes $u$. Then by Corollary~\ref{cor:imitation} we have that
$X_w=1/2$, and by Theorem~\ref{thm:mu-half-indecisive} $w$ also
takes both actions infinitely often. Continuing by induction and
using the fact that the graph is strongly connected we obtain the
third case that none of the agents converge and $X_u=1/2$ for all
$u$.
It remains to consider the case that all agents' actions converge to
either 0 or 1. By strong connectivity, to prove the corollary it
suffices to show that it cannot be the case that $w$ observes $u$
and they converge to different actions. Indeed, if this were the
case then, by Corollary~\ref{cor:imitation}, we would have $X_w=1/2$,
and then by Theorem~\ref{thm:mu-half-indecisive} agent $w$'s actions
would not converge, a contradiction.
\end{proof}
Theorem~\ref{thm:unbounded-common-knowledge} is an easy consequence of
this corollary. Recall that $A_u = \{1\}$ when $X_u > 1/2$,
$A_u = \{0\}$ when $X_u < 1/2$ and $A_u = \{0,1\}$
when $X_u=1/2$.
\begin{theorem*}[\ref{thm:unbounded-common-knowledge}]
Let $(\mu_0,\mu_1)$ induce non-atomic beliefs. Then there exists
a random variable $A$ such that almost surely $A_u=A$
for all $u$.
\end{theorem*}
\begin{proof}
Fix an agent $v$. When $X_v < 1/2$ (resp.\ $X_v > 1/2$), the third
case of Corollary~\ref{cor:converge} cannot occur, so one of the first
two cases occurs: all agents' actions converge to a common action,
which must be $0$ (resp.\ $1$), since $v$'s belief, and hence
eventually its action, lies below (resp.\ above) $1/2$. It follows
that $X_u \leq 1/2$ (resp.\ $X_u \geq 1/2$) for every $u$, and
$X_u \neq 1/2$ by Theorem~\ref{thm:mu-half-indecisive}, so
$A_u=\{0\}$ (resp.\ $A_u=\{1\}$) for all $u \in V$; we may thus take
$A=\{0\}$ (resp.\ $A=\{1\}$). Likewise, when
$X_v=1/2$ the third case occurs, $X_u = 1/2$ for
all $u \in V$ and $A_u = \{0,1\}$ for all $u \in V$.
\end{proof}
\subsection{Extension to $L$-locally connected graphs}
\label{sec:locally-connected}
The main result of this article, Theorem~\ref{thm:bounded-learning},
is a statement about undirected graphs. We can extend the proof to a
larger family of graphs, namely, $L$-locally connected graphs.
\begin{definition}
Let $G=(V,E)$ be a directed graph. $G$ is $L$-locally strongly
connected if, for each $(u,w) \in E$, there exists a path in $G$ of
length at most $L$ from $w$ to $u$.
\end{definition}
Theorem~\ref{thm:bounded-learning} can be extended as follows.
\begin{theorem}
\label{thm:l-locally}
Fix $L$, a positive integer. Let $\mu_0,\mu_1$ be such that for
every strongly connected, directed graph $G$ there exists a random
variable $A$ such that almost surely $A_u=A$ for all
$u \in V$. Then there exists a sequence $q(n)=q(n, \mu_0, \mu_1)$
such that $q(n) \to 1$ as $n \to \infty$, and $\P{A = \{S\}}
\geq q(n)$, for any choice of $L$-locally strongly connected graph
$G$ with $n$ agents.
\end{theorem}
The proof of Theorem~\ref{thm:l-locally} is essentially identical to
the proof of Theorem~\ref{thm:bounded-learning}. The latter is a
consequence of Theorem~\ref{thm:bounded}, which shows learning in
bounded degree infinite graphs, and of
Lemma~\ref{thm:large-out-deg-graph}, which implies asymptotic learning
for sequences of graphs with diverging maximal degree.
Note first that the set of $L$-locally strongly connected rooted
graphs with degrees bounded by $d$ is compact. Hence the proof of
Theorem~\ref{thm:bounded} can be used as is in the $L$-locally
strongly connected setup.
In order to apply Lemma~\ref{thm:large-out-deg-graph} in this setup,
we need to show that when in-degrees diverge then so do
out-degrees. For this note that if $(u,v)$ is a directed edge then $u$
is in the (directed) ball of radius $L$ around $v$. Hence, if there
exists a vertex $v$ with in-degree $D$ then in the ball of radius $L$
around it there are at least $D$ vertices. On the other hand, if all
out-degrees are bounded by $d$, then the number of vertices in this ball
is at most $(L+1)\cdot d^L$. Therefore, $d \to \infty$ as $D \to \infty$.
\appendix
\section{Example of atomic private beliefs leading to
non-learning}
\label{app:example-atomic}
We sketch an example in which private beliefs are atomic and
asymptotic learning does not occur.
\begin{example}
\label{example:non-learning}
Let the graph $G$ be the undirected chain of length $n$, so that
$V=\{1, \ldots, n\}$ and $(u,v)$ is an edge if $|u-v| = 1$. Let the
private signals be bits that are each independently equal to $S$
with probability $2/3$. We choose here the tie-breaking rule under
which agents defer to their original signals\footnote{We conjecture
that changing the tie-breaking rule does not produce asymptotic
learning, even for randomized tie-breaking.}.
\end{example}
We leave the following claim as an exercise for the reader.
\begin{claim}
If an agent $u$ has at least one neighbor with the same private
signal (i.e., $W_u=W_v$ for $v$ a neighbor of $u$)
then $u$ will always take the same action $A_u(t) =
W_u$.
\end{claim}
For any fixed pair of adjacent agents, both receive the wrong private
signal with probability $(1/3)^2 = 1/9$, independently of $n$, and by
the claim both are then locked on the wrong action forever. Hence,
with probability bounded away from zero, some agent will always take
the wrong action, and so asymptotic learning does not occur. It is
also clear that optimal action sets do not become common knowledge,
and these facts are indeed related.
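The following Monte Carlo sketch, written in Python, is purely
illustrative (the function names and parameters are ours): relying on
the claim above, it estimates the probability that some adjacent pair
of agents both receive a signal differing from $S$, in which case both
are locked on the wrong action forever. This probability is at least
$1/9$ for any fixed adjacent pair, uniformly in $n$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def locked_wrong_frequency(n, trials=20000):
    # Fraction of trials in which some adjacent pair on the chain
    # {1,...,n} both receive a signal differing from S; by the claim
    # above, each agent in such a pair takes the wrong action forever.
    count = 0
    for _ in range(trials):
        wrong = rng.random(n) < 1.0 / 3.0   # P(W_u != S) = 1/3
        if np.any(wrong[:-1] & wrong[1:]):
            count += 1
    return count / trials

for n in (10, 50, 200):
    print(n, locked_wrong_frequency(n))
\end{verbatim}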
\end{document}
\begin{document}
\title{Behavior near the extinction time in self-similar
fragmentations I: the stable case} \author{Christina Goldschmidt
\thanks{Department of Statistics, University of Oxford;
\texttt{[email protected]}} \and B\'en\'edicte Haas
\thanks{Universit\'e Paris-Dauphine, Ceremade, F-75016 Paris, France;
\texttt{[email protected]}}}
\maketitle
\begin{abstract}
\noindent The stable fragmentation with index of self-similarity
$\alpha \in [-1/2,0)$ is derived by looking at the masses of the
subtrees formed by discarding the parts of a $(1 +
\alpha)^{-1}$--stable continuum random tree below height $t$, for $t
\geq 0$. We give a detailed limiting description of the
distribution of such a fragmentation, $(F(t), t \geq 0)$, as it
approaches its time of extinction, $\zeta$. In particular, we show
that $t^{1/\alpha}F((\zeta - t)^+)$ converges in distribution as $t
\to 0$ to a non-trivial limit. In order to prove this, we go
further and describe the limiting behavior of (a) an excursion of
the stable height process (conditioned to have length 1) as it
approaches its maximum; (b) the collection of open intervals where
the excursion is above a certain level; and (c) the ranked sequence
of lengths of these intervals. Our principal tool is excursion
theory. We also consider the last fragment to disappear and show
that, with the same time and space scalings, it has a limiting
distribution given in terms of a certain size-biased version of the
law of $\zeta$.
In addition, we prove that the logarithms of the sizes of the
largest fragment and last fragment to disappear, at time $(\zeta -
t)^+$, rescaled by $\log(t)$, converge almost surely to the constant
$-1/\alpha$ as $t \to 0$.
\end{abstract}
\renewcommand{\abstractname}{R\'esum\'e}
\begin{abstract}
\noindent La fragmentation stable d'indice $\alpha \in [-1/2,0)$ est
construite \`a partir des masses des sous-arbres de l'arbre continu
al\'eatoire stable d'indice $(1+\alpha)^{-1}$ obtenus en ne gardant
que les feuilles situ\'ees \`a une hauteur sup\'erieure \`a $t$, pour
$t \geq 0$. Nous donnons une description d\'etaill\'ee du
comportement asymptotique d'une telle fragmentation, $(F(t), t \geq
0)$, au voisinage de son point d'extinction, $\zeta$. En
particulier, nous montrons que $t^{1/\alpha} F((\zeta-t)^+)$
converge en loi lorsque $t \rightarrow 0$ vers une limite non
triviale. Pour obtenir ce r\'esultat, nous allons plus loin et
d\'ecrivons le comportement asymptotique en loi, apr\`es
normalisation, (a) d'une excursion du processus de hauteur stable
(conditionn\'ee \`a avoir une longueur $1$) au voisinage de son
maximum; (b) des intervalles ouverts o\`u l'excursion est au-dessus
d'un certain niveau; et (c) de la suite d\'ecroissante des longueurs
de ces intervalles. Notre outil principal est la th\'eorie des
excursions. Nous nous int\'eressons \'egalement au dernier fragment
\`a dispara\^itre et montrons, qu'avec les m\^emes normalisations en
temps et espace, la masse de ce fragment a une distribution limite
construite \`a partir d'une certaine version biais\'ee de $\zeta$.
Enfin, nous montrons que les logarithmes des masses du plus gros
fragment et du dernier fragment \`a dispara\^itre, au temps
$(\zeta-t)^+$, divis\'es par $\log(t)$, convergent presque
s\^urement vers la constante $-1/\alpha$ lorsque $t \rightarrow 0$.
\end{abstract}
\noindent \emph{AMS subject classifications: 60G18, 60G52, 60J25
\newline Keywords: stable L\'evy processes, height processes,
self-similar fragmentations, extinction time, scaling limits.}
\section{Introduction}
The subject of this paper is a class of random fragmentation processes
which were introduced by Bertoin~\cite{BertoinSSF}, called the
self-similar fragmentations. In fact, we will find it convenient to
have two slightly different notions of a fragmentation process. By an
\emph{interval fragmentation}, we mean a process $(O(t), t \geq 0)$
taking values in the space of open subsets of $(0,1)$ such that $O(t)
\subseteq O(s)$ whenever $0 \leq s \leq t$. We refer to a connected
interval component of $O(t)$ as a \emph{block}. Let $F(t) = (F_1(t),
F_2(t), \ldots)$ be an ordered list of the lengths of the blocks of
$O(t)$. Then $F(t)$ takes values in the space
\[
\ensuremath{\mathcal{S}^{\downarrow}_1} = \left\{\mathbf{s} = (s_1, s_2, \ldots): s_1 \geq s_2 \geq \ldots
\geq 0, \sum_{i=1}^{\infty} s_i \leq 1\right\}.
\]
We call the process $(F(t), t \geq 0)$ a \emph{ranked fragmentation}.
A ranked fragmentation is called \emph{self-similar with index $\alpha
\in \ensuremath{\mathbb{R}}$} if it is a time-homogeneous Markov process which satisfies
certain \emph{branching} and \emph{self-similarity} properties.
Roughly speaking, these mean that every block should split into a
collection of sub-blocks whose relative lengths always have the same
distribution, but at a rate which is proportional to the length of the
original block raised to the power $\alpha$. (Rigorous definitions
will be given in Section~\ref{sec:preliminaries}.) Clearly the sign
of $\alpha$ has a significant effect on the behavior of the process.
If $\alpha > 0$ then larger blocks split faster than smaller ones,
which tends to act to balance out block sizes. On the other hand, if
$\alpha < 0$ then it is the smaller blocks which split faster.
Indeed, small blocks split faster and faster until they are reduced to
\emph{dust}, that is blocks of size 0.
The asymptotic behavior of self-similar fragmentations has been
studied quite extensively. In one sense, it is trivial, in that $F(t)
\to 0$ a.s.\ as $t \to \infty$, whatever the value of the index
$\alpha$ (provided the process is not trivially constant, i.e.\ equal
to its initial value for all times $t$). For $\alpha \geq 0$,
rescaled versions of the empirical measures of the lengths of the
blocks have law of large numbers-type behavior (see
Bertoin~\cite{BertoinAB} and Bertoin and
Rouault~\cite{Bertoin/Rouault}). For $\alpha < 0$, however, the
situation is completely different. Here, there exists an almost
surely finite random time $\zeta$, called the \emph{extinction time},
when the state is entirely reduced to dust. The manner in which mass
is lost has been studied in detail in \cite{HaasLossMass} and
\cite{HaasRegularity}.
The purpose of this article is to investigate the following more
detailed question when $\alpha$ is negative: \textbf{how does
the process $F((\zeta-t)^+)$ behave as $t \to 0$?} We provide a
detailed answer for a particularly nice one-parameter family of
self-similar fragmentations with negative index, called the
\emph{stable fragmentations}.
The simplest of the stable fragmentations is the \emph{Brownian
fragmentation}, which was first introduced and studied by
Bertoin~\cite{BertoinSSF}. Suppose that $(\mathbf{e}(x), 0 \leq x \leq 1)$ is
a standard Brownian excursion. Consider, for $t \geq 0$ the sets
\[
O(t) = \{x \in [0,1]: \mathbf{e}(x) > t\}
\]
and let $F(t) \in \ensuremath{\mathcal{S}^{\downarrow}_1}$ be the lengths of the interval components of
$O(t)$ in decreasing order. Then it can be shown that $(F(t), t \geq
0)$ is a self-similar fragmentation with index $-1/2$.
Miermont~\cite{Miermont} generalized this construction by replacing
the Brownian excursion with an excursion of the height process
associated with the stable tree of index $\beta \in (1,2)$, introduced
and studied by Duquesne, Le Gall and Le Jan
\cite{Duquesne/LeGall,LeGall/LeJan}. The corresponding process is a
self-similar fragmentation of index $\alpha = 1/\beta - 1$.
The behavior near the extinction time in the Brownian fragmentation
can be obtained via a decomposition of the excursion at its maximum.
We discuss this case in Section~\ref{sec:Brownian}. Abraham and
Delmas~\cite{Abraham/Delmas} have recently proved a generalized
Williams' decomposition for the excursions which code stable trees.
This provides us with the necessary tools to give a complete
description of the behavior of the stable fragmentations near their
extinction time, which is detailed in Section~\ref{sec:stable}. In
every case, we obtain that
\[
t^{1/\alpha}F((\zeta-t)^{+}) \convlaw F_{\infty} \text{ as }t \rightarrow 0,
\]
where $F_{\infty}$ is a random limit which takes values in the
set of non-increasing non-negative sequences with finite sum. The
limit $F_{\infty}$ is constructed from a self-similar function
$H_{\infty}$ on $\ensuremath{\mathbb{R}}$, which itself arises when looking at the scaling
behavior of the excursion in the neighborhood of its maximum. See
Theorems \ref{thm:stablefragfonc} and \ref{thm:stablefrag} and
Corollary \ref{corostablerank} for precise statements.
In Corollary \ref{lastfrag}, we also consider the process of the
\emph{last fragment}, that is, the size of the (as it turns out, unique)
block which is the last to disappear. We call this size $F_{\ast}(t)$ and prove that, scaled as
before, $F_{\ast}((\zeta-t)^+)$ also has a limit in distribution as $t \to 0$ which,
remarkably, can be expressed in terms of a certain size-biased version
of the distribution of $\zeta$.
Sections~\ref{sec:technicalbackground}, \ref{sec:convheightprocess},
\ref{sec:proof} and \ref{seclastfrag} are devoted to the proofs of
these results.
Finally, in Section~\ref{SectionLog}, we consider the logarithms of
the largest and last fragments and show that
\[
\frac{\log F_1((\zeta - t)^+)}{\log(t)} \to -1/\alpha \quad
\text{and} \quad \frac{\log F_{\ast}((\zeta - t)^+)}{\log(t)} \to -1/\alpha
\]
almost surely as $t \to 0$. In fact, these results hold for a more
general class of self-similar fragmentations with negative index.
We will investigate the limiting behavior of $F((\zeta -t)^{+})$ as $t
\to 0$ for general self-similar fragmentations with negative index
$\alpha$ in future work, starting in \cite{Goldschmidt/Haas2}. In
general, as indicated by the results for the logarithms of $F_1$ and
$F_{\ast}$ above, the natural conjecture is that $t^{1/\alpha}$ is the
correct re-scaling for non-trivial limiting behavior. However, since
the excursion theory tools we use here are not available in general,
we are led to develop other methods of analysis.
\section{Self-similar fragmentations}
\label{sec:preliminaries}
Define $\mathcal{O}_{(0,1)}$ to be the set of open subsets of $(0,1)$.
We begin with a rather intuitive notion of a fragmentation process.
\begin{definition}[Interval fragmentation]
An interval fragmentation is a process $(O(t), t \geq 0)$ taking
values in $\mathcal{O}_{(0,1)}$ such that $O(t) \subseteq O(s)$ whenever $0
\leq s \leq t$.
\end{definition}
In this paper we will be dealing with interval fragmentations which
derive from excursions. Here, an \emph{excursion} is a continuous
function $f:[0,1] \to \ensuremath{\mathbb{R}}^+$ such that $f(0) = f(1) = 0$ and $f(x) > 0$
for all $x \in (0,1)$. The associated interval fragmentation, $(O(t),
t \geq 0)$ is defined as follows: for each $t \geq 0$,
\[
O(t) = \{x \in [0,1]: f(x) > t\}.
\]
An example is given in Figure~\ref{fig:schematic}.
\begin{figure}
\caption{An interval fragmentation derived from a continuous
excursion: the set $O(t)$ is represented by the solid lines at level
$t$.}
\label{fig:schematic}
\end{figure}
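As a purely illustrative companion to this definition (the code and
the toy excursion below are ours and not part of the construction),
the following Python sketch computes, for a function $f$ sampled on a
grid and a level $t$, the ranked lengths of the interval components of
$\{x : f(x) > t\}$, i.e.\ an approximation of the state of the
associated ranked fragmentation at time $t$.

\begin{verbatim}
import numpy as np

def ranked_lengths(f_values, grid, level):
    # Approximate decreasing sequence of lengths of the interval
    # components of {x : f(x) > level}, from samples of f on the grid.
    above = f_values > level
    lengths, current = [], 0.0
    for i in range(1, len(grid)):
        if above[i - 1] and above[i]:
            current += grid[i] - grid[i - 1]
        elif current > 0:
            lengths.append(current)
            current = 0.0
    if current > 0:
        lengths.append(current)
    return sorted(lengths, reverse=True)

# A toy excursion with two humps on [0,1].
x = np.linspace(0.0, 1.0, 2001)
f = np.sin(np.pi * x) * (1.2 + np.cos(3 * np.pi * x))
print(ranked_lengths(f, x, level=0.5))
\end{verbatim}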
We need to introduce a second notion of a fragmentation process. We
endow the space $\ensuremath{\mathcal{S}^{\downarrow}_1}$ with the topology of pointwise convergence.
\begin{definition}[Ranked self-similar fragmentation]
A ranked self-similar fragmentation $(F(t), t \geq 0)$ with index
$\alpha \in \ensuremath{\mathbb{R}}$ is a c\`adl\`ag Markov process taking values in
$\ensuremath{\mathcal{S}^{\downarrow}_1}$ such that
\begin{itemize}
\item $(F(t), t \geq 0)$ is continuous in probability;
\item $F(0) = (1,0,0, \ldots)$;
\item Conditionally on $F(t) = (x_1, x_2, \ldots)$, $F(t+s)$ has the
law of the decreasing rearrangement of the sequences $x_i
F^{(i)}(x_i^{\alpha} s)$, $i \geq 1$, where $F^{(1)}, F^{(2)},
\ldots$ are independent copies of the original process $F$.
\end{itemize}
\end{definition}
Let $r: \mathcal{O}_{(0,1)} \to \ensuremath{\mathcal{S}^{\downarrow}_1}$ be the function which associates with an open
set $O \subseteq (0,1)$ the ranked sequence of the lengths of
its interval components. We say that an interval fragmentation is
\emph{self-similar} if it possesses branching and self-similarity
properties which entail that $(r(O(t)), t \geq 0)$ is a ranked
self-similar fragmentation. See \cite{Basdevant,BertoinSSF,BertoinBook} for
this and further background material.
Bertoin~\cite{BertoinSSF} has proved that a ranked self-similar
fragmentation can be characterized by three parameters, $(\alpha, \nu,
c)$. Here, $\alpha \in \ensuremath{\mathbb{R}}$; $\nu$ is a measure on $\ensuremath{\mathcal{S}^{\downarrow}_1}$ such that
$\nu((1,0,\ldots)) = 0$ and $\int_{\ensuremath{\mathcal{S}^{\downarrow}_1}} (1 - s_1) \nu(\mathrm{d}\mathbf{s}) <
\infty$; and $c \in \ensuremath{\mathbb{R}}^+$. The parameter $\alpha$ is the \emph{index
of self-similarity}. The measure $\nu$ is the \emph{dislocation
measure}, which describes the way in which fragments suddenly
dislocate; heuristically, a block of mass $m$ splits at rate
$m^{\alpha} \nu(\mathrm{d}\mathbf{s})$ into blocks of masses $(ms_1, ms_2,
\ldots)$. The real number $c$ is the \emph{erosion coefficient},
which describes the rate at which blocks continuously melt. Note that
$\nu$ may be an infinite measure, in which case the times at which
dislocations occur form a dense subset of $\ensuremath{\mathbb{R}}^+$. When $c=0$, the
fragmentation is a pure jump process.
In the context of an interval fragmentation derived from an excursion,
it is easy to see that the extinction time of the fragmentation is
just the maximum height of the excursion:
\[
\zeta = \max_{0 \leq x \leq 1} f(x).
\]
In the examples we treat, this maximum will be attained at a unique
point, $x_*$. In this case, let $O_*(t)$ be the interval component of
$O(t)$ containing $x_*$ at time $t$, and let $F_{\ast}(t)$ be its
length, i.e.\ $F_*(t) = |O_*(t)|$. We call both $O_*$ and $F_*$ the
\emph{last fragment process}.
\section{The Brownian fragmentation}
\label{sec:Brownian}
We begin by discussing the special case of the Brownian fragmentation.
The sketch proofs in this section are not rigorous, but can be made
so, as we will demonstrate later in the paper. Our intention is to
introduce the principal ideas in a framework which is familiar to the
reader.
Let $(\mathbf{e}(x), 0 \leq x \leq 1)$ be a normalized Brownian
excursion with length 1 and for each $t \geq 0$ define the associated
interval fragmentation by
\begin{equation*}
O(t):=\left \{x \in [0,1] : \mathbf{e}(x)>t \right\}.
\end{equation*}
See Figure~\ref{fig:Brownianfrag} for a picture.
\begin{figure}
\caption{The Brownian interval fragmentation with the open intervals
which constitute the state at times $t = 0, 0.15, 0.53$ and $0.92$
indicated.}
\label{fig:Brownianfrag}
\end{figure}
The associated ranked fragmentation process, $(F(t), t \geq 0)$, has
index of self-similarity $\alpha=-1/2$, binary dislocation measure
specified by $\nu(s_1 + s_2 < 1) = 0$ and
\[
\nu(s_1 \in \mathrm dx) = 2(2 \pi x^3 (1-x)^3)^{-1/2}
\mathbbm{1}_{[1/2,1)}(x) \mathrm dx,
\]
and erosion coefficient $0$. See Bertoin
\cite{BertoinSSF} for a proof. The extinction time of this
fragmentation process is the maximum of the Brownian excursion. In
particular, from Kennedy~\cite{Kennedy} we have
\[
\Prob{\zeta > t} = 2 \sum_{n=1}^{\infty} (4t^2n^2 - 1) \exp(-2t^2
n^2), \text{ } t\geq 0.
\]
To the best of our knowledge, this is the only fragmentation for which
the law of $\zeta$ is known. It is well known that the maximum of
$\mathbf{e}$ is reached at a unique point $x_{*} \in [0,1]$ a.s., and
so the mass, $F_{\ast}(t)$, of the last fragment to survive is
well-defined.
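As a quick numerical illustration of Kennedy's formula (the code below
is ours and assumes nothing beyond the formula itself and standard
numerics), one can evaluate the series and check, for instance, that
$\int_0^\infty \Prob{\zeta > t}\,\mathrm{d}t$ returns the classical
value $\E{\zeta} = \sqrt{\pi/2} \approx 1.2533$.

\begin{verbatim}
import numpy as np

def excursion_max_tail(t, n_terms=2000):
    # Kennedy's series for P(zeta > t).  For very small t the tail is
    # essentially 1, so we short-circuit rather than truncate the series.
    if t < 0.02:
        return 1.0
    n = np.arange(1, n_terms + 1)
    return float(2.0 * np.sum((4 * t**2 * n**2 - 1) * np.exp(-2 * t**2 * n**2)))

ts = np.linspace(0.0, 10.0, 5001)        # P(zeta > 10) is negligible
tail = np.array([excursion_max_tail(t) for t in ts])
mean_zeta = np.sum(0.5 * (tail[:-1] + tail[1:]) * np.diff(ts))  # trapezoid rule
print(mean_zeta, np.sqrt(np.pi / 2))     # both approximately 1.2533
\end{verbatim}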
There is a complete characterization of the limit in law of the
rescaled fragmentation near its extinction time.
\begin{theorem}[Brownian fragmentation] \label{thm:Brownianfrag}
If $O$ is the Brownian interval fragmentation then
\begin{equation*}
\frac{O \left((\zeta -t \right)^{+})-x_{*}}{t^{2}}
\convlaw O_{\infty} \text{ as
$t\rightarrow 0$},
\end{equation*}
where $O_{\infty}=\{x\in
\mathbb{R}:H_{\infty}(x)<1\}$,
\begin{equation*}
H_{\infty}(x)=R_+(x)\I{x\geq 0}+ R_{-}(-x)\I{x<0}
\end{equation*}
and $R_+$ and $R_{-}$ are two independent $\mathrm{Bes}(3)$
processes.
\end{theorem}
A full discussion of the topology in which the above convergence in
distribution occurs is deferred until Section
\ref{subsec:topology}. A proof of this theorem is given in
Uribe Bravo~\cite{Uribe}. We give here a sketch, since a rigorous proof
will be given in the wider setting of general stable fragmentations.
\begin{proof}[Sketch of proof.]
We decompose the excursion $\mathbf{e}$ at its maximum $\zeta$.
Define
\[
\tilde{\mathbf{e}}(x)
= \begin{cases}
\zeta - \mathbf{e}(x_* + x), & 0 \leq x \leq 1-x_* \\
\zeta - \mathbf{e}(x - 1 + x_*), & 1-x_* < x \leq 1.
\end{cases}
\]
Then by Williams' decomposition for the Brownian excursion
\cite[Section VI.55]{Rogers/Williams2}, we have that
$\tilde{\mathbf{e}}$ is again a standard Brownian excursion.
Moreover, if $t < \zeta$ then
\begin{align*}
& t^{-2}(O(\zeta-t) - x_*) \\
& \qquad = \{y \in [0,(1-x_*)t^{-2}]:
\tilde{\mathbf{e}}(yt^2) < t\} \cup \{y \in [-x_* t^{-2},0]:
\tilde{\mathbf{e}}(1 + yt^2) < t\}.
\end{align*}
Now by the scaling property of Brownian motion, $(t^{-1}
\tilde{\mathbf{e}}(xt^2), 0 \leq x \leq t^{-2})$ has the distribution of a
Brownian excursion of length $t^{-2}$, which we will denote $(b^{t}(x), 0
\leq x \leq t^{-2})$. So the above set has the same distribution as
\[
\{x \in [0, (1-x_*)t^{-2}]: b^{t}(x) < 1\} \cup \{x \in
[-x_*t^{-2},0]: b^{t}(t^{-2} + x) < 1\}.
\]
Fix $n \in \ensuremath{\mathbb{R}}^+$. As $t \to 0$, the length of the excursion goes to
$\infty$ and $(b^{t}(x), 0 \leq x \leq n) \convdist (R_+(x), 0 \leq x
\leq n)$, where $(R_+(x), x \geq 0)$ is a 3-dimensional Bessel process
started at 0 (see, for example, Theorem 0.1 of
Pitman~\cite{PitmanStFl}). Moreover, by symmetry, $(b^{t}(t^{-2} -
x), 0 \leq x \leq n) \convdist (R_{-}(x), 0 \leq x \leq
n)$, where $R_{-}$ is (in fact) an independent copy of $R_+$. Thus we
obtain
\[
t^{-2}(O((\zeta-t)^+) - x_*) \convdist O_{\infty},
\]
as claimed.
\end{proof}
Theorem~\ref{thm:Brownianfrag} enables us to deduce an explicit limit
law for the associated ranked fragmentation. See Corollary
\ref{corostablerank} for a precise statement and note that, as
detailed at the end of Section \ref{sec:stable}, the passage from the
convergence of open sets to that of these ranked sequences is not
immediate. We also have the following limit law for the last
fragment, $F_{\ast}$.
\begin{corollary}
If $F$ is the Brownian fragmentation then
\begin{equation*}
\frac{F_{\ast }((\zeta - t)^{+})}{t^{2}} \convlaw \left(\frac{2\zeta
}{\pi }\right) ^{2} \text{ as $t \to 0$,}
\end{equation*}
or, equivalently,
\begin{equation*}
t (F_{\ast} ( (\zeta - t)^{+} ) )^{-1/2}
\convlaw \zeta_{\ast},
\end{equation*}
where $\zeta _{\ast}$ is a size-biased version of $\zeta$, i.e.\
$\E{f(\zeta_{\ast})} = \E{\zeta f(\zeta)}/ \E{\zeta}$ for every
test function $f$.
\end{corollary}
\begin{proof}
Let $T_+ = \inf\{t \geq 0: R_+(t) = 1\}$ and $T_{-} = \inf\{t
\geq 0: R_{-}(t) = 1\}$, where $R_+$ and $R_{-}$ are
the independent $\mathrm{Bes(3)}$ processes from the statement of
Theorem~\ref{thm:Brownianfrag}. Then by
Theorem~\ref{thm:Brownianfrag},
\[
\frac{F_{\ast}((\zeta - t)^{+})}{t^{2}} \convlaw T_+ + T_{-}.
\]
By Proposition 2.1 of Biane, Pitman and Yor~\cite{Biane/Pitman/Yor},
\[
T_+ + T_{-} \equidist \left(\frac{2 \zeta}{\pi}\right)^2
\]
and, moreover, if we define $Y = \sqrt{\frac{2}{\pi}} \zeta$ then
$Y$ satisfies
\[
\E{f(1/Y)} = \E{Y f(Y)}
\]
for any test function $f$ (in particular, $Y$ has mean 1). Hence,
\[
\E{f\left(\frac{\pi}{2 \zeta}\right)} = \sqrt{\frac{2}{\pi}}
\E{\zeta f(\zeta)} = \E{\zeta f(\zeta)} / \E{\zeta},
\]
which completes the proof.
\end{proof}
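The identity $T_+ + T_- \equidist (2\zeta/\pi)^2$ can also be checked
numerically. Assuming only the classical formula
$\E{e^{-\lambda T_+}} = \sqrt{2\lambda}/\sinh\sqrt{2\lambda}$ for the
first hitting time of level $1$ by a $\mathrm{Bes}(3)$ process started
at $0$ (so that $T_+ + T_-$ has Laplace transform
$2\lambda(\sinh\sqrt{2\lambda})^{-2}$, in agreement with Remark 1
below), the following rough Python sketch compares this closed form
with a crude Euler simulation. The names, step sizes and sample sizes
are illustrative, and the time discretization biases the estimate
slightly.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def bes3_hit_one(n_paths, dt=1e-4, t_max=10.0):
    # First exit times of the unit ball for a 3-d Brownian motion started
    # at the origin; its Euclidean norm is a Bes(3) process started at 0.
    pos = np.zeros((n_paths, 3))
    hit = np.full(n_paths, t_max)         # unhit paths contribute ~0 below
    alive = np.arange(n_paths)
    for step in range(1, int(t_max / dt) + 1):
        pos[alive] += rng.normal(scale=np.sqrt(dt), size=(alive.size, 3))
        crossed = np.linalg.norm(pos[alive], axis=1) >= 1.0
        hit[alive[crossed]] = step * dt
        alive = alive[~crossed]
        if alive.size == 0:
            break
    return hit

lam = 1.5
T = bes3_hit_one(5000) + bes3_hit_one(5000)      # samples of T_+ + T_-
print(np.mean(np.exp(-lam * T)))                  # Monte Carlo estimate
print(2 * lam / np.sinh(np.sqrt(2 * lam)) ** 2)   # closed form, ~0.40
\end{verbatim}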
\textbf{Remark 1.} As noted by Uribe Bravo~\cite{Uribe}, the random
variable $(2 \zeta / \pi)^2$ has Laplace transform $2 \lambda(\sinh
\sqrt{2 \lambda})^{-2}$. He also uses another result in Biane, Pitman
and Yor~\cite{Biane/Pitman/Yor} to show that the Lebesgue measure of
the set $O_{\infty}$ has Laplace transform $(\cosh \sqrt{2
\lambda})^{-2}$. Let $M(t)$ be the total mass of the fragmentation
at time $t$, that is the Lebesgue measure of $O(t)$. Then this
entails that
\[
\frac{M((\zeta-t)^+)}{t^2} \convdist M_{\infty},
\]
where $M_{\infty}$ has Laplace transform $(\cosh \sqrt{2 \lambda})^{-2}$.
\textbf{Remark 2.} The Bes(3) process encodes a fragmentation process
with immigration which arises naturally when studying rescaled
versions of the Brownian fragmentation near $t=0$ (see Haas
\cite{HaasAsympFrag}). This is closely related to our approach: using
Williams' decomposition of the Brownian excursion, we obtain results
on the behavior of the fragmentation near its extinction time by
studying the sets of $\{x\in[0,1]: \mathbf{e}(x)<t\}$ for small $t$. This
duality between the behavior of the fragmentation near $0$ and near
its extinction time seems to be specific to the Brownian case.
\section{General stable fragmentations}
\label{sec:stable}
There is a natural family which generalizes the Brownian
fragmentation: the \emph{stable fragmentations}, constructed and
studied by Miermont in \cite{Miermont}. The starting point is the
family of stable height processes $H$ with index $1<\beta\leq2$, which were
introduced by Duquesne, Le Gall and Le Jan
\cite{Duquesne/LeGall,LeGall/LeJan} in order to code the genealogy of
continuous state branching processes with branching mechanism $\lambda^{\beta}$ via stable trees. We do
not give a definition of these processes here, since it is rather
involved; full definitions will be given in the course of the next
section. Here, we simply recall that it is possible to consider a
normalized excursion of $H$, say $\mathbf{e}$, which is almost surely
continuous on $[0,1]$. When $\beta=2$, this is the normalized Brownian
excursion (up to a scaling factor of $\sqrt{2}$).
Once again, let
\begin{equation*}
O(t):=\left \{x \in [0,1] : \mathbf{e}(x)>t \right\}.
\end{equation*}
For $1<\beta<2$, Miermont \cite{Miermont} proved that the corresponding ranked
fragmentation is a self-similar fragmentation of index
$\alpha=1/\beta-1$ and erosion coefficient $0$. The dislocation
measure is somewhat harder to express than that of the Brownian
fragmentation. Let $T$ be a stable subordinator of Laplace exponent
$\lambda^{1/\beta}$ and write $\Delta T_{[0,1]}$ for the sequence of
its jumps before time 1, ranked in decreasing order. Then for any
non-negative measurable function $f$,
\[
\int_{\ensuremath{\mathcal{S}^{\downarrow}_1}}f(\mathbf s)\nu(\mathrm d \mathbf s) =
\frac{\beta(\beta-1) \Gamma(1 - \frac{1}{\beta})}{\Gamma(2 -
\beta)} \E{T_1 f(T_1^{-1} \Delta T_{[0,1]})}.
\]
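To give a concrete feel for this description, the following Python
sketch (ours and purely illustrative) generates an approximate
realization of the ranked normalized jumps $T_1^{-1}\Delta T_{[0,1]}$,
using the fact that the jumps of a subordinator with Laplace exponent
$\lambda^{1/\beta}$ form a Poisson point process with intensity
$(\beta\Gamma(1-1/\beta))^{-1}x^{-1-1/\beta}\,\mathrm{d}x$ (the L\'evy
measure recalled in the proof of Proposition~\ref{prop:tails} below)
and truncating jumps below a small threshold. Note that this samples
the plain jump sequence and not a draw from $\nu$ itself, which would
also involve the size-biasing factor $T_1$.

\begin{verbatim}
import numpy as np
from math import gamma

rng = np.random.default_rng(3)

def ranked_normalized_jumps(beta, eps=1e-8, n_top=10):
    # Jumps before time 1 of a subordinator with Laplace exponent
    # lambda^(1/beta): a Poisson point process with intensity
    # (1/beta) / Gamma(1 - 1/beta) * x^(-1 - 1/beta) dx, truncated at eps.
    rho = 1.0 / beta
    rate = eps ** (-rho) / gamma(1.0 - rho)       # mean number of jumps > eps
    n = rng.poisson(rate)
    u = 1.0 - rng.random(n)                        # uniform on (0, 1]
    jumps = eps * u ** (-1.0 / rho)                # inverse-CDF sampling
    small = rho * eps ** (1.0 - rho) / ((1.0 - rho) * gamma(1.0 - rho))
    t1 = jumps.sum() + small                       # T_1, small jumps by their mean
    return np.sort(jumps)[::-1][:n_top] / t1

print(ranked_normalized_jumps(beta=1.5))
\end{verbatim}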
As we will discuss in Section~\ref{subsec:heightprocesses}, there
is a unique point $x_* \in [0,1]$ at which $\mathbf{e}$ attains its
maximum (this maximum is denoted $\zeta$ to be consistent with earlier
notation, so that $\zeta = \mathbf{e}(x_{\ast})$). So the size of the last fragment, $F_*$, is well-defined
for the stable fragmentations. We first state a result on the
behavior of the stable height processes near their maximum.
\begin{theorem}
\label{thm:stablefragfonc}
Let $\mathbf e$ be a normalized excursion of the stable height process
with parameter $\beta$ and extend its definition to $\mathbb R$ by setting
$\mathbf{e}(x)=0$ when $x \notin [0,1]$. Then there exists an almost surely
positive continuous self-similar function $H_{\infty}$ on
$\mathbb{R}$, which is symmetric in distribution (in the sense that
$(H_\infty(-x), x \geq 0) \equidist (H_\infty(x), x \geq 0)$) and
converges to $+\infty$ as $x \to +\infty$ or $x \to -\infty$, and
which is such that
\[
t^{-1}\left(\zeta-\mathbf e(x_*+t^{-1/\alpha} \ \cdot \ ) \right) \convlaw
H_{\infty} \text{ as $t \to 0$},
\]
where $\alpha=1/\beta-1$. The convergence holds
with respect to the topology of uniform convergence on compacts.
\end{theorem}
A precise definition of $H_{\infty}$ is given in Section~\ref{sec:Williams}. Intuitively, we can think of it as an excursion of the height process of infinite length, centered at its ``maximum'' and flipped upside down.
Theorem~\ref{thm:stablefragfonc} leads to the following generalization of
Theorem~\ref{thm:Brownianfrag}.
\begin{theorem}[Stable interval fragmentation] \label{thm:stablefrag}
Let $O$ be a stable interval fragmentation with parameter $\beta$ and
consider the corresponding self-similar function $H_{\infty}$
introduced in Theorem \ref{thm:stablefragfonc}. Then
\begin{equation*}
t^{1/\alpha}\left(O\left((\zeta - t)^{+}\right)-x_{*}\right)
\convlaw \{x\in \mathbb{R}:H_{\infty}(x)<1\} \text{ as $t\rightarrow 0$}.
\end{equation*}
\end{theorem}
The topology on the bounded open sets of $\mathbb{R}$ in which this
convergence occurs will be discussed in the next section.
Define
\[
\ensuremath{\mathcal{S}^{\downarrow}_{\mathrm{fin}}} := \left\{\mathbf{s} = (s_1,s_2, \ldots): s_1 \geq
s_2 \geq \ldots \geq 0, \sum_{i=1}^{\infty} s_i < \infty\right\}.
\]
We endow this space with the distance
\[
d_{\ensuremath{\mathcal{S}^{\downarrow}_{\mathrm{fin}}}}(\mathbf{s},\mathbf{s'})=\sum_{i=1}^{\infty}|s_i-s_i'|.
\]
We also have the following ranked counterpart of
Theorem~\ref{thm:stablefrag}: let $F(t)$ be the decreasing sequence of
lengths of the interval components of $(O(t),t\geq 0)$ and, similarly,
let $F_{\infty}$ be the decreasing sequence of lengths of the interval
components of $\{x\in \mathbb{R}:H_{\infty}(x)<1\}$. Then $F_{\infty}
\in \ensuremath{\mathcal{S}^{\downarrow}_{\mathrm{fin}}}$.
\begin{corollary}[Ranked stable fragmentation]
\label{corostablerank}
As $t \to 0$,
\[
t^{1/\alpha}F((\zeta-t)^+)\convlaw F_{\infty}.
\]
\end{corollary}
In particular, this gives the behavior of the total mass $M(t) :=
\sum_{i = 1}^{\infty} F_i(t)$ of the fragmentation near its extinction
time.
Finally, as in the Brownian case, the distribution of the limit of the
size of the last fragment can be expressed in terms of a size-biased
version of the height $\zeta$.
\begin{corollary}[Behavior of the last fragment]
\label{lastfrag}
As $t \rightarrow 0$,
\begin{equation}
\label{eqn:laststable}
t\left(F_{\ast}\left( (\zeta -t)^{+}\right)\right)^{\alpha}
\convlaw \zeta_{\ast^{\alpha}}
\end{equation}
where $\zeta_{\ast^{\alpha}}$ is a ``$(-1/\alpha-1)$-size-biased''
version of $\zeta$, which means that for every test-function $f$,
\begin{equation}
\label{eqn:biasedzeta} \E{f(\zeta_{\ast^{\alpha}})} =\frac{
\E{\zeta^{-1/\alpha-1} f(\zeta)}}{\E{\zeta^{-1/\alpha-1}}}.
\end{equation}
Moreover,
\begin{itemize}
\item[(i)] there exist positive constants $0<A<B$ such that
\[
\exp\left(-Bt^{1/(1+\alpha)}\right) \leq \Prob{\zeta_{*^{\alpha}} \geq t}
\leq \exp\left(-At^{1/(1+\alpha)}\right)
\]
for all $t$ sufficiently large; \\
\item[(ii)] for any $q <1-1/\alpha$,
\[
\Prob{\zeta_{*^{\alpha}} \leq t} \leq t^q,
\]
for all $t \geq 0$ sufficiently small.
\end{itemize}
\end{corollary}
The proof of Theorem \ref{thm:stablefragfonc} is based on the
``Williams' decomposition'' of the height function $H$ given by
Abraham and Delmas~\cite[Theorem 3.2]{Abraham/Delmas}, and can be
found in Section \ref{sec:convheightprocess}. We emphasize the fact
that uniform convergence on compacts of a sequence of continuous
functions $f_n:\mathbb{R} \rightarrow \mathbb{R}$ to $f:\mathbb{R}
\rightarrow \mathbb{R}$ does not imply in general that the sets $\{x
\in \mathbb{R} : f_n(x)<1 \}$ converge to $\{x \in \mathbb{R} : f(x)<1
\}$. Take, for example, $f$ constant and equal to $1$ and $f_n$
constant and equal to $1-1/n$ (see the next section for the topology
we consider on open sets of $\mathbb{R}$). Less trivial examples show
that there may exist another kind of problem when passing from the
convergence of functions to that of ranked sequences of lengths of
interval components: take, for example, a continuous function
$f:\mathbb{R} \rightarrow \mathbb{R}^+$ which is strictly larger than
$1$ on $\mathbb{R} \backslash [-1,1]$ and then consider continuous
functions $f_n:\mathbb{R} \rightarrow \mathbb{R}^+$ such that $f_n=f$ on
$[-n,n]$, $f_n=0$ on $[n+1,2n]$. Clearly, $f_n$ converges uniformly
on compacts to $f$, but the lengths of the longest interval components
of $\{x \in \mathbb{R} : f_n(x)<1 \}$ converge to $\infty$.
However, we will see that the random functions we work with do not
belong to the set of ``problematic'' counter-examples that can arise
and that it will be possible to use Theorem \ref{thm:stablefragfonc}
to get Theorem \ref{thm:stablefrag} and Corollary
\ref{corostablerank}. Preliminary work will be done in Section
\ref{sec:technicalbackground}, where an explicit construction of the
limit function $H_{\infty}$ via Poisson point measures is also
given. Theorem \ref{thm:stablefrag} and Corollary \ref{corostablerank}
are proved in Section \ref{sec:proof}. Then we will see in Section
\ref{seclastfrag} that the limit $\zeta_{*^{\alpha}}$ arising in
(\ref{eqn:laststable}) is actually distributed as the length to the
power $\alpha$ of an excursion of $H$, conditioned to have its maximum
equal to $1$. It will then be easy to check that this is a size-biased
version of $\zeta$ as defined in (\ref{eqn:biasedzeta}). The bounds
for the tails $\Prob{\zeta_{*^{\alpha}} \geq t}$ will also be
proved in Section \ref{seclastfrag}, as well as the following remark.
\textbf{Remark.} In the Brownian case ($\alpha=-1/2$), the distribution of $\zeta$ (and consequently that of $\zeta_{*^{\alpha}}$) is known; see Section \ref{sec:Brownian}. We do not know the distribution of $\zeta_{*^{\alpha}}$ explicitly when $\alpha \in (-1/2,0)$. However, in this case, if we set, for $\lambda \geq 0$,
\[
\Phi(\lambda):= \E{\exp{(-\lambda \zeta_{*^{\alpha}}^{1/\alpha})}}
=\frac{\E{\exp(-\lambda \zeta^{1/\alpha})\zeta^{{-1/\alpha}-1}}}
{\E{\zeta^{-1/\alpha-1}}},
\]
then it can be shown that $\Phi$ satisfies the following equation:
\begin{equation}
\label{eqPhi}
\begin{split}
\Phi(\lambda)=\exp\!\left(\!-\!\int_{\mathbb{R}^+\times [0,\lambda^{-\alpha}]} \!\!\!
\left(1-e^{-(-\frac{\alpha}{\alpha+1})^{-1/\alpha}r
\int_0^t (1-\Phi( v^{-1/\alpha}))
v^{1/\alpha} \mathrm d v}\right) \right.\\
\left. \times \frac{-\alpha}
{(1+\alpha)^2 \Gamma\left(\frac{1+2\alpha}{1+\alpha}\right)}
e^{-r(-\frac{\alpha t}{\alpha+1})^{1 + 1/\alpha}} r^{-1/(\alpha+1)}
\mathrm d r \mathrm dt
\right).
\end{split}
\end{equation}
Finally, we recall that the almost sure logarithmic results for $F_1$
and $F_*$ will be proved in Section~\ref{SectionLog}.
\section{Technical background}
\label{sec:technicalbackground}
We start by detailing the topology on open sets which will give a
proper meaning to the statement of Theorem~\ref{thm:stablefrag}. We
then recall some facts about height processes and prove various useful
lemmas. Finally, we introduce the decomposition result of Abraham and
Delmas \cite{Abraham/Delmas}, in a form suitable for our purposes.
\subsection{Topological details}
\label{subsec:topology}
When dealing with interval fragmentations, we will work in the set
$\mathcal{O}$ of bounded open subsets of $\ensuremath{\mathbb{R}}$. This is endowed with
the following distance:
\begin{equation*}
d_{\mathcal O}(A,B)=\sum_{k \in \ensuremath{\mathbb{N}}} 2^{-k}
d_{\mathcal{H}}\Big((A \cap (-k,k))^{\mathrm{c}} \cap [-k,k],(B
\cap (-k,k))^{\mathrm{c}} \cap [-k,k]\Big),
\end{equation*}
where $S^{\mathrm{c}}$ denotes the closed complement of $S \in
\mathcal{O}$ and $d_{\mathcal H}$ is the Hausdorff distance on the set
of compact sets of $\ensuremath{\mathbb{R}}$. For $A \neq \ensuremath{\mathbb{R}}$, let $\chi_A(x) = \inf_{y
\in A^{\mathrm{c}}} |x-y|$. If we define, for $x \in [-k,k]$,
$\chi^k_A(x) = \chi_{A \cap (-k,k)} (x) = \inf_{y \in (A \cap
(-k,k))^{\mathrm{c}}} |x-y|$ then we also have
\[
d_{\mathcal{O}}(A,B) = \sum_{k \in \ensuremath{\mathbb{N}}} 2^{-k} \sup_{x \in
[-k,k]} |\chi_A^k(x) - \chi_B^k(x)|
\]
(see p.69 of Bertoin~\cite{BertoinBook}). The open sets we will deal
with in this paper arise as excursion intervals of continuous
functions. In particular, we will need to know that if we have a
sequence of continuous functions converging (in an appropriate sense)
to a limit, then the corresponding open sets converge.
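As a concrete, entirely illustrative companion to this definition (the
function names, grid sizes and truncation level below are ours), the
following Python sketch approximates $d_{\mathcal O}(A,B)$ for two
bounded open sets given as finite unions of disjoint open intervals,
replacing the supremum in the $\chi^k$ representation by a maximum over
a grid and truncating the sum over $k$, which is harmless because of
the geometric weights.

\begin{verbatim}
import numpy as np

def chi_k(intervals, k, xs):
    # chi^k_A(x): distance from x to the complement of A intersected
    # with (-k,k), for x in [-k,k]; A is a union of disjoint open intervals.
    out = np.zeros_like(xs)
    for a, b in intervals:
        a, b = max(a, -k), min(b, k)
        if a >= b:
            continue
        inside = (xs > a) & (xs < b)
        out[inside] = np.minimum(xs[inside] - a, b - xs[inside])
    return out

def d_open(A, B, k_max=20, n_grid=4001):
    # Grid-based, truncated approximation of the distance d_O.
    total = 0.0
    for k in range(1, k_max + 1):
        xs = np.linspace(-k, k, n_grid)
        total += 2.0 ** (-k) * np.max(np.abs(chi_k(A, k, xs) - chi_k(B, k, xs)))
    return total

A = [(-1.0, 0.5), (1.0, 2.0)]
B = [(-1.0, 0.4), (1.1, 2.0)]
print(d_open(A, B))
\end{verbatim}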
Consider the space $C(\ensuremath{\mathbb{R}}, \ensuremath{\mathbb{R}}^+)$ of continuous functions from $\ensuremath{\mathbb{R}}$ to
$\ensuremath{\mathbb{R}}^+$. By \emph{uniform convergence on compacts}, we mean
convergence in the metric
\[
d(f,g) = \sum_{k \in \ensuremath{\mathbb{N}}} 2^{-k} \left( \sup_{t \in [-k,k]} |f(t) -
g(t)| \wedge 1 \right).
\]
The name is justified by the fact that convergence in $d$ is
equivalent to uniform convergence on all compact sets.
Suppose $f \in C(\ensuremath{\mathbb{R}},\ensuremath{\mathbb{R}}^+)$. We say that $a \in \ensuremath{\mathbb{R}}^+$ is a \emph{local
maximum} of $f$ if there exist $s \in \ensuremath{\mathbb{R}}$ and $\varepsilon > 0$ such
that $f(s) = a$ and $\max_{s-\varepsilon \leq t \leq s+\varepsilon} f(t) =
a$. Note that this includes the case where $f$ is constant and equal
to $a$ on some interval, even if $f$ never takes values smaller than
$a$. We define a \emph{local minimum} analogously.
\begin{proposition} \label{prop:fnconvimpliessetconv} Let $f_n: \ensuremath{\mathbb{R}} \to
\ensuremath{\mathbb{R}}^+$ be a sequence of continuous functions and let $f: \ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}}^+$
be a continuous function such that $f(0) = 0$, $f(x) \to \infty$ as
$x \to +\infty$ or $x \to -\infty$. Suppose also that $1$ is not a
local maximum of $f$ and that $\mathrm{Leb}\{x \in \ensuremath{\mathbb{R}}: f(x) = 1\} =
0$. Define $A = \{x \in \ensuremath{\mathbb{R}}: f(x) < 1\}$ and $A_n = \{x \in \ensuremath{\mathbb{R}}:
f_n(x) < 1\}$. Suppose now that $f_n$ converges to $f$ uniformly on
compact subsets of $\ensuremath{\mathbb{R}}$. Then $d_{\mathcal{O}}(A_n,A) \to 0$ as $n
\to \infty$.
\end{proposition}
Define
\[
g(x) = \sup\{y \leq x: f(y) \geq 1\}
\]
and
\[
d(x) = \inf\{y \geq x: f(y) \geq 1\}.
\]
Then
\begin{equation} \label{eqn:chiequiv}
\chi_A(x) = (x - g(x)) \wedge (d(x) - x).
\end{equation}
Define $g_n$ and $d_n$ to be the analogous quantities for $f_n$. The
proof of the following lemma is straightforward.
\begin{lemma} \label{lem:chi}
For $x, y \in \ensuremath{\mathbb{R}}$,
\[
|\chi_{A}(x) - \chi_{A}(y)| \leq |x-y|.
\]
Moreover,
\[
|\chi_{A_n}(x) - \chi_A(x)| \leq \max \{|g_n(x) - g(x)|,|d_n(x) - d(x)|\}.
\]
\end{lemma}
\begin{proof}[Proof of Proposition~\ref{prop:fnconvimpliessetconv}.]
It suffices to prove that $d_{\mathcal{H}}((A_n \cap
(-1,1))^{\mathrm{c}} \cap [-1,1],(A \cap (-1,1))^{\mathrm{c}} \cap
[-1,1]) \to 0$ as $n \to \infty$ for $f,f_n : [-1,1] \to \ensuremath{\mathbb{R}}^+$ such
that $f(0) = 0$, 1 is not a local maximum of $f$, $\mathrm{Leb}\{x
\in [-1,1]: f(x) = 1\} = 0$ and $f_n \to f$ uniformly. In other
words, we need to show that $\sup_{x \in [-1,1]}| \chi^1_{A_n}(x) -
\chi^1_{A}(x)| \to 0$ as $n \to \infty$. Note that the appropriate
definitions of $g(x)$ and $d(x)$ in order to make
(\ref{eqn:chiequiv}) true for $\chi^1_{A}$ are
\[
g(x) = \sup\{y \leq x: f(y) \geq 1\} \vee -1
\]
and
\[
d(x) = \inf\{y \geq x: f(y) \geq 1\} \wedge 1
\]
and we adopt these definitions for the rest of the proof.
Let $\varepsilon > 0$. For $r>0$ let
\begin{align*}
E^{\uparrow}_r & = \{x \in (-1,1): x \in (a,b) \text{ such that } f(y) > 1,
\forall\ y \in (a,b), |b-a| > r\}, \\
E^{\downarrow}_r & = \{x \in (-1,1): x \in (a,b) \text{ such
that } f(y) < 1, \forall\ y \in (a,b), |b-a| > r\}.
\end{align*}
These are the collections of excursion intervals of length exceeding
$r$ above and below 1. Take $0 < \delta < \varepsilon/2$ small enough that
$\mathrm{Leb}(E^{\uparrow}_{\delta} \cup E^{\downarrow}_{\delta}) > 2
- \varepsilon/2$ (we can do this since $\mathrm{Leb}\{x \in [-1,1]: f(x)
= 1\} = 0$). Set $R = [-1,1] \setminus (E^{\uparrow}_{\delta} \cup
E^{\downarrow}_{\delta})$. Each of $E^{\uparrow}_{\delta}$ and
$E^{\downarrow}_{\delta}$ is composed of a finite number of open
intervals.
$\bullet$ We will first deal with $E^{\uparrow}_{\delta}$. On this
set, $\chi_A^1(x) = 0$. Take hereafter $0 < \eta < \delta/6$ and let
\[
E^{\uparrow}_{\delta,\eta} = \{x \in E^{\uparrow}_{\delta}: (x-\eta,x+\eta)
\subseteq E^{\uparrow}_{\delta}\}.
\]
Then $\inf_{x \in E_{\delta,\eta}^{\uparrow}} f(x) > 1$. Since $f_n
\to f$ uniformly, it follows that there exists $n_1$ such that $f_n(x)
> 1$ for all $x \in E^{\uparrow}_{\delta,\eta}$ whenever $n \geq n_1$.
Then for $n \geq n_1$, we have $\chi_{A_n}^1(x) = 0$ for all $x \in
E^{\uparrow}_{\delta,\eta}$. Since $|\chi_{A_n}^1(x) - \chi_{A_n}^1(y)|
\leq |x-y|$, it follows that $\chi_{A_n}^1(x) < \eta$ for all $x \in
E^{\uparrow}_{\delta}$. So $\sup_{x \in E^{\uparrow}_{\delta}}
|\chi_A^1(x) - \chi_{A_n}^1(x)| < \eta$ whenever $n \geq n_1$.
$\bullet$ Now turn to $E^{\downarrow}_{\delta}$. As before, define
\[
E^{\downarrow}_{\delta,\eta} = \{x \in E^{\downarrow}_{\delta}:
(x-\eta,x+\eta) \subseteq E^{\downarrow}_{\delta}\}.
\]
Then $\sup_{x \in E^{\downarrow}_{\delta,\eta}} f(x) < 1$. Since $f_n
\to f$ uniformly, it follows that there exists $n_2$ such that $f_n(x)
< 1$ for all $x \in E^{\downarrow}_{\delta,\eta}$ whenever $n \geq
n_2$. Now, for each excursion below 1, there exists a left end-point
$g$ and a right end-point $d$. For all $x$ in the same excursion,
$g(x) = g$ and $d(x) = d$. Suppose first that we have $g \neq -1$, $d
\neq 1$ (in this case we say that the excursion \emph{does not touch
the boundary}). Since 1 is not a local maximum of $f$, there must
exist $z_g < g$ and $z_d > d$ such that $|z_g-g| < \eta$, $|z_d - d| <
\eta$, $f(z_g)> 1$ and $f(z_d) > 1$.
Suppose there are $N_{\delta}$ excursions below 1 of length greater
than $\delta$ which do not touch the boundary. To excursion $i$ there
corresponds a left end-point $g_i$, a right end-point $d_i$
and points $z_{g_i}$, $z_{d_i}$, $1 \leq i \leq N_{\delta}$.
Write
\[
\tilde{E}^{\downarrow}_{\delta} = \cup_{i=1}^{N_{\delta}}
(g_i,d_i) \text{ and }
\tilde{E}^{\downarrow}_{\delta,\eta} =
\tilde{E}^{\downarrow}_{\delta} \cap E^{\downarrow}_{\delta,\eta} =
\cup_{i=1}^{N_{\delta}} (g_i+\eta,d_i-\eta).
\]
Then $\min_{1 \leq i \leq N_{\delta}} (f(z_{g_i}) \wedge f(z_{d_i})) >
1$. Since $f_n \to f$ uniformly, there exists $n_3$ such that
$\min_{1 \leq i \leq N_{\delta}} (f_n(z_{g_i}) \wedge f_n(z_{d_i})) >
1$ for all $n \geq n_3$. So for $n \geq n_2 \vee n_3$ and any $x \in
\tilde{E}^{\downarrow}_{\delta,\eta}$, by the intermediate value
theorem, there exists at least one point $a_n(x) \in (g(x) - \eta,
g(x) + \eta)$ such that $f_n(a_n(x)) = 1$ and at least one point
$b_n(x) \in (d(x) - \eta, d(x) + \eta)$ such that $f_n(b_n(x)) = 1$.
Since $g(x)$ and $d(x)$ are constant on excursion intervals, it
follows that $\sup_{x \in \tilde{E}^{\downarrow}_{\delta,\eta}}
|g_n(x) - g(x)| < \eta$ and $\sup_{x \in
\tilde{E}^{\downarrow}_{\delta,\eta}} |d_n(x) - d(x)| < \eta$ for $n
\geq n_2 \vee n_3$. Hence, by Lemma~\ref{lem:chi},
\[
\sup_{x \in \tilde{E}^{\downarrow}_{\delta,\eta}} |\chi_A^1(x) -
\chi_{A_n}^1(x)| < \eta
\]
whenever $n \geq n_2 \vee n_3$. Since $|\chi_A^1(x) - \chi_A^1(y)|
\leq |x-y|$ and $|\chi_{A_n}^1(x) - \chi_{A_n}^1(y)| \leq |x-y|$, by
using the triangle inequality we obtain that
\[
\sup_{x \in \tilde{E}^{\downarrow}_{\delta}} |\chi_A^1(x) -
\chi_{A_n}^1(x)| < 3 \eta.
\]
It remains to deal with any excursions in $E^{\downarrow}_{\delta}$
which touch the boundary. Clearly, there is at most one excursion in
$E^{\downarrow}_{\delta}$ touching the left boundary and at most one
excursion touching the right boundary. For these excursions, we can
argue as before at the non-boundary end-points. At the boundary
end-points, the argument is, in fact, easier since we have (by
construction) $\chi_{A}^1(-1) = \chi_{A_n}^1(-1) = 0$ and
$\chi_{A}^1(1) = \chi_{A_n}^1(1) = 0$. So, there exists $n_4$ such
that for all $n \geq n_2 \vee n_3 \vee n_4$,
\[
\sup_{x \in E^{\downarrow}_{\delta}} |\chi_A^1(x) - \chi_{A_n}^1(x)| <
3 \eta.
\]
$\bullet$ For any $x \in R$, we have $\chi_A^1(x) \leq \delta/2$.
Moreover, since $\mathrm{Leb}(E^{\uparrow}_{\delta} \cup
E^{\downarrow}_{\delta}) > 2 - \varepsilon/2$, there must exist a point
$z(x) \in R$ such that $|z(x)- x| < \varepsilon/2$ which is the end-point
of an excursion interval (above or below 1) of length exceeding
$\delta$. So for all $x \in R$ and all $n$ we have
\begin{align*}
|\chi_A^1(x) - \chi_{A_n}^1(x)| & \leq |\chi_A^1(x)| + |\chi_{A_n}^1(x) -
\chi_{A_n}^1(z(x))| + |\chi_{A_n}^1(z(x)) - \chi_{A}^1(z(x))| \\
& \leq \delta/2 + |x-z(x)| + \sup_{y \in E^{\uparrow}_{\delta} \cup
E^{\downarrow}_{\delta}} | \chi_{A_n}^1(y) - \chi_A^1(y)| \\
& < \delta/2 + \varepsilon/2 + \sup_{y \in E^{\uparrow}_{\delta} \cup
E^{\downarrow}_{\delta}} | \chi_{A_n}^1(y) - \chi_A^1(y)|
\end{align*}
(note that at the second inequality we use the continuity of $\chi_A^1$
and $\chi_{A_n}^1$ and the fact that $\chi^1_A(z(x)) = 0$).
$\bullet$ Finally, let $n_0 = n_1 \vee n_2 \vee n_3 \vee n_4$. Then
since $\eta < \delta/6$ and $\delta < \varepsilon/2$, for $n \geq n_0$ we
have
\[
\sup_{x \in [-1,1]} |\chi_A^1(x) - \chi_{A_n}^1(x)| < \varepsilon.
\]
The result follows.
\end{proof}
The following lemma will be used implicitly in Section \ref{sec:convheightprocess}.
\begin{lemma} \label{lem:babyanalysis} Suppose that $a > 0$, $\alpha
\in \ensuremath{\mathbb{R}}$ and that $f: \ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}}^+$ is a continuous function. Let
$g(t) = a^{\alpha} f(at)$. Then the function $(a,f) \mapsto g$ is
continuous for the topology of uniform convergence on compacts.
\end{lemma}
\begin{proof}
Let $f_n: \ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}}^+$ be a
sequence of continuous functions with $f_n \to f$ uniformly on
compacts. Suppose that $a_n$ is a sequence of reals with $a_n \to a >
0$. Suppose $K \subseteq \ensuremath{\mathbb{R}}$ is a compact set. Then we have
\begin{align*}
& \sup_{t \in K} | a^{\alpha} f(at) - a_n^{\alpha} f_n(a_n t)| \\
& \leq \sup_{t \in K} | a^{\alpha} f(at) - a_n^{\alpha} f(at)|
+ \sup_{t \in K} |a_n^{\alpha} f(at) - a_n^{\alpha} f(a_n t)|
+ \sup_{t \in K} |a_n^{\alpha} f(a_n t) - a_n^{\alpha} f_n(a_n t)| \\
& \leq \sup_{t \in aK} |f(t)| |a^{\alpha} - a_n^{\alpha}|
+ a_n^{\alpha} \sup_{t \in K} |f(at) - f(a_n t)|
+ a_n^{\alpha} \sup_{t \in K} |f(a_n t) - f_n(a_n t)|.
\end{align*}
The set $aK$ is compact and so $f$ is bounded on it; it follows that
the first term converges to $0$. Take $0 < \varepsilon < a$. Since $a_n
\to a$, there exists $n$ sufficiently large that $|a_n - a| <
\varepsilon$. Define $\tilde{K} = \{bt: t \in K, b \in [a - \varepsilon,a +
\varepsilon]\}$. Then $\tilde{K}$ is also a compact set. The second
term converges to $0$ because $f$ is uniformly continuous on
$\tilde{K}$. The third term is bounded above by
$((a-\varepsilon)^{\alpha} \vee (a+\varepsilon)^{\alpha}) \sup_{t \in
\tilde{K}} | f(t) - f_n(t)|$ and so, since $f_n \to f$ uniformly on
compacts, this converges to $0$.
\end{proof}
Finally, we will need the following lemma in Section \ref{sec:proof}.
\begin{lemma} \label{lem:cvLeb}
Let $f: \mathbb{R} \rightarrow \mathbb{R}^+$ be a continuous
function such that
\[
\mathrm{Leb}\{x \in \ensuremath{\mathbb{R}} : f(x)=1\}=0.
\]
Suppose $f_n: \mathbb{R} \rightarrow \mathbb{R}^+$ is a sequence of
continuous functions that converges to $f$ uniformly on compacts.
Then, for all $K > 0$, as $n \rightarrow \infty$,
\[
\mathrm{Leb}\{x \in [-K,K] : f_n(x)<1\} \rightarrow \mathrm{Leb}\{x \in
[-K,K] : f(x)<1\}.
\]
\end{lemma}
\begin{proof}
Let $K>0$ and fix $\varepsilon>0$. For all $n$ sufficiently large
\[
f(x)-\varepsilon \leq f_n(x) \leq f(x) + \varepsilon, \text{ for all
$x \in [-K,K]$},
\]
hence
\begin{align*}
\mathrm{Leb}\{x \in [-K,K] : f(x)<1-\varepsilon\}
& \leq \mathrm{Leb}\{x \in [-K,K] : f_n(x)<1\} \\
&\leq \mathrm{Leb}\{x \in [-K,K] : f(x)<1+\varepsilon\}.
\end{align*}
When $\varepsilon \to 0$, the left-hand side of this inequality
converges to $\mathrm{Leb}\{x \in [-K,K] : f(x)<1\}$ and the right-hand side to
$\mathrm{Leb}\{x \in [-K,K] : f(x) \leq1\} $, which are equal by assumption.
\end{proof}
\subsection{Height processes} \label{subsec:heightprocesses}
Here, we define the stable height process and recall some of its
properties. We refer to Le Gall and Le Jan \cite{LeGall/LeJan} and
Duquesne and Le Gall \cite{Duquesne/LeGall} for background. (All of
the definitions and results stated without proof below may be found in
these references.)
Suppose that $X$ is a spectrally positive stable L\'evy process with
Laplace exponent $\lambda^{\beta}$, $\beta \in (1,2]$. That is,
$\E{\exp(-\lambda X_t)}=\exp(t\lambda^{\beta})$ for all $\lambda,t
\geq 0$ and, therefore, if $\beta \in (1,2)$, the L\'evy measure of
$X$ is $\beta(\beta-1)(\Gamma(2-\beta))^{-1} x^{-\beta-1}\mathrm d x$,
$x>0$. Let $I(t):=\inf_{0 \leq s \leq t} X(s)$ be the infimum process
of $X$. For each $t>0$, consider the time-reversed process defined
for $0 \leq s <t$ by
\[
\hat{X}^{(t)}(s):=X(t)-X((t-s)-),
\]
and let $(\hat{S}^{(t)}(s),0 \leq s \leq t)$ be its supremum, that is
$\hat{S}^{(t)}(s) = \sup_{u \leq s} \hat{X}^{(t)}(u)$. Then
the height process $H(t)$ is defined to be the local time at $0$ of
$\hat{S}^{(t)}-\hat{X}^{(t)}$.
It can be shown that the process $H$ possesses a continuous version
(Theorem 1.4.3 of \cite{Duquesne/LeGall}), which we will implicitly
consider in the following. The scaling property of $X$ is inherited
by $H$ (see Section 3.3 of \cite{Duquesne/LeGall}) as follows: for all
$a > 0$,
\begin{equation*}
(a^{1/{\beta}-1}H(ax),x\geq 0) \equidist (H(x), x \geq 0).
\end{equation*}
When $\beta=2$, the height process is, up to a scaling factor, a
reflected Brownian motion.
The excursion measure of $X-I$ away from $0$ is denoted by $\ensuremath{\mathbb{N}}$. \emph{In
the following, we work under this excursion measure}. Let
$\mathcal{E}$ be the space of excursions; that is, continuous
functions $f:\ensuremath{\mathbb{R}}^+ \to \ensuremath{\mathbb{R}}^+$ such that $f(0) = 0$, $\inf\{t > 0: f(t) =
0\} < \infty$ and if $f(s) = 0$ for some $s > 0$ then $f(t) = 0$ for
all $t > s$. The lifetime of $H \in \mathcal{E}$ is then denoted by
$\sigma$, that is
\[
\sigma := \inf\{x > 0: H(x) = 0\}.
\]
We define its maximum to be
\[
H_{\max}:=\max_{x \in [0,\sigma]} H(x).
\]
We will also deal with (regular versions of) the probability measures
$\ensuremath{\mathbb{N}}(\cdot| \sigma = v)$, $v > 0$ and $\ensuremath{\mathbb{N}}(\cdot | H_{\max} = m)$, $m >
0$. The following proposition is implicit in Section 3 of Abraham
and Delmas~\cite{Abraham/Delmas} and is also a consequence of
Theorem~\ref{thm:largestfrag} (ii) below.
\begin{proposition}
For any $v > 0$, under $\ensuremath{\mathbb{N}}(\cdot| \sigma = v)$ there exists an almost
surely unique point $x_{\max}$ at which $H$ attains its maximum,
that is
\[
x_{\max}:=\inf \{x \in [0,\sigma]:H(x)=H_{\max} \}.
\]
\end{proposition}
Note that $\mathbf{e}$, $\zeta$ and $x_*$ (see Section
\ref{sec:stable}) have the distributions of $H$, $H_{\max}$ and
$x_{\max}$ respectively under $\ensuremath{\mathbb{N}}( \cdot |\sigma=1)$.
First we record the tails of $H_{\max}$ and $\sigma$ under the excursion measure $\ensuremath{\mathbb{N}}$.
\begin{proposition} \label{prop:tails}
For all $m>0$,
\begin{gather}
\mathbb N\left( H_{\max}>m\right)=(\beta-1)^{1/(1-\beta)}m^{\frac{1}{1-\beta}},
\label{eqn:tail1} \\
\mathbb N\left(\sigma>m\right)=(\Gamma(1-1/\beta))^{-1}m^{-\frac{1}{\beta}}.
\label{eqn:tail2}
\end{gather}
\end{proposition}
\begin{proof}
For (\ref{eqn:tail1}) see, for example, Corollary 1.4.2 and Section
2.7 of Duquesne and Le Gall~\cite{Duquesne/LeGall}. It is well
known (Theorem 1, Section VII.1 of \cite{BertoinLevy}) that the
right inverse process $J = (J(t), t \geq 0)$ of $I$ defined by
$J(t):=\inf\{u \geq 0: -I(u) > t\}$ is a stable subordinator with a
L\'evy measure $(\beta\Gamma(1-1/\beta))^{-1}x^{-1-1/{\beta}}
\mathrm d x$, $x>0$. Since $\ensuremath{\mathbb{N}}(\sigma \in \mathrm{d}m)$ is equal to
this L\'evy measure, (\ref{eqn:tail2}) follows.
\end{proof}
Recall that $\alpha = 1/\beta - 1$. We will, henceforth, primarily
work in terms of $\alpha$ rather than $\beta$. We will make extensive
use of the scaling properties of the height function under the
excursion measure. For $m>0$, let $H^{[m]}(x) = m^{-\alpha}H(x/m)$.
Note that if $H$ has lifetime $\sigma$ then $H^{[m]}$ has lifetime $m
\sigma$ and maximum height $m^{-\alpha} H_{\max}$. Note also that
$(H^{[m]})^{[a]} = H^{[ma]}$, for all $m,a>0$. The following
proposition, which summarizes results from Section 3.3 of Duquesne and Le Gall~\cite{Duquesne/LeGall}, gives a version of the
scaling property for the height process conditioned on its lifetime.
\begin{proposition} \label{prop:Miermont}
For any test function $f: \mathcal{E} \to \ensuremath{\mathbb{R}}$ and any $x,m > 0$, we have
\[
\ensuremath{\mathbb{N}}[f(H^{[m]}) | \sigma = x/m] = \ensuremath{\mathbb{N}}[f(H)| \sigma = x].
\]
Moreover, for any $\eta > 0$,
\[
\ensuremath{\mathbb{N}}[f(H) | \sigma = x] = \ensuremath{\mathbb{N}}[f(H^{[x/\sigma]}) | \sigma > \eta].
\]
\end{proposition}
We now state two lemmas that will play an essential role in the proof
of Theorem \ref{thm:stablefragfonc}. The first lemma gives the scaling
property for $H$ conditioned on its maximum.
\begin{lemma} \label{lem:scaling} Let $f: \mathcal{E} \times \ensuremath{\mathbb{R}}^+
\times \ensuremath{\mathbb{R}}^+ \to \ensuremath{\mathbb{R}}$ be any test function. For all $m > 0$,
\[
\ensuremath{\mathbb{N}}[f(H^{[m]}, m \sigma, m^{-\alpha} H_{\max})] = m^{1 + \alpha}
\ensuremath{\mathbb{N}}[f(H, \sigma, H_{\max})].
\]
Moreover, for all $x, a > 0$,
\[
\ensuremath{\mathbb{N}}[f(H^{[x/\sigma]}, \sigma, H_{\max})] = a^{-1-\alpha}
\ensuremath{\mathbb{N}}[f(H^{[x/\sigma]}, a \sigma, a^{-\alpha} H_{\max})].
\]
In particular, for any test function $g: \mathcal{E} \times \ensuremath{\mathbb{R}}^+ \to
\ensuremath{\mathbb{R}}$,
\[
\ensuremath{\mathbb{N}}[g(H^{[x]}, \sigma) | H_{\max} = u]
= \ensuremath{\mathbb{N}}[g(H, x^{-1} \sigma) | H_{\max} = x^{-\alpha} u]
\]
and
\[
\ensuremath{\mathbb{N}}[g(H^{[x/\sigma]}, \sigma) | H_{\max} = u]
= \ensuremath{\mathbb{N}}[g(H^{[x/\sigma]}, x^{-1} \sigma) | H_{\max} = x^{-\alpha} u].
\]
\end{lemma}
\begin{proof}
By conditioning on the value of $\sigma$ and using the tail formula
(\ref{eqn:tail2}) from Proposition~\ref{prop:tails}, we have
\[
\ensuremath{\mathbb{N}}[f(H^{[m]}, m \sigma, m^{-\alpha} H_{\max})]
= c \int_{\ensuremath{\mathbb{R}}^+} \ensuremath{\mathbb{N}}[f(H^{[m]}, mb, m^{-\alpha} H_{\max}) | \sigma = b]
b^{-\alpha-2} \mathrm{d} b,
\]
for some constant $c$. By Proposition \ref{prop:Miermont},
\[
\int_{\ensuremath{\mathbb{R}}^+} \ensuremath{\mathbb{N}}[f(H^{[m]}, mb, m^{-\alpha} H_{\max}) | \sigma = b]
b^{-\alpha-2} \mathrm{d} b
= \int_{\ensuremath{\mathbb{R}}^+} \ensuremath{\mathbb{N}}[f(H,mb,H_{\max}) | \sigma = mb]
b^{-\alpha-2} \mathrm{d} b.
\]
Changing variable with $a = mb$ gives
\[
c m^{1 + \alpha} \int_{\ensuremath{\mathbb{R}}^+} \ensuremath{\mathbb{N}}[f(H,a,H_{\max}) | \sigma = a]
a^{-\alpha-2} \mathrm{d} a = m^{1+\alpha} \ensuremath{\mathbb{N}}[f(H, \sigma, H_{\max})],
\]
which yields the first statement. The second statement is a
consequence of the first and the conditioned statements follow easily.
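One way to see the second statement is to apply the first with $m=a$ to the
test function $(h,s,y) \mapsto f(h^{[x/\sigma(h)]},s,y)$, where $\sigma(h)$
denotes the lifetime of $h$; since $H^{[a]}$ has lifetime $a\sigma$ and
$(H^{[a]})^{[x/(a\sigma)]} = H^{[x/\sigma]}$, this gives
\[
\ensuremath{\mathbb{N}}[f(H^{[x/\sigma]}, a \sigma, a^{-\alpha} H_{\max})]
= a^{1+\alpha}\, \ensuremath{\mathbb{N}}[f(H^{[x/\sigma]}, \sigma, H_{\max})],
\]
which is the required identity.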
\end{proof}
Finally, we relate the law of $H$ conditioned on its lifetime and the
law of $H$ conditioned on its maximum. For the rest of the paper, $c$
denotes a positive finite constant that may vary from line to line.
\begin{lemma} \label{lem:sigmaH_max} For all test
functions $f: \mathcal{E} \to \ensuremath{\mathbb{R}}$ and all $x > 0$,
\[
\ensuremath{\mathbb{N}}[f(H) | \sigma = x] = \Gamma(-\alpha) \left(\frac{-\alpha}{ \alpha + 1}\right)^{1/\alpha} \int_{\ensuremath{\mathbb{R}}^{+}} \ensuremath{\mathbb{N}}[f(H^{[x/\sigma]}) \I{\sigma
> x} | H_{\max} = x^{-\alpha} u]
u^{1/\alpha} \mathrm{d}u.
\]
Moreover, for all non-negative test functions $g: \ensuremath{\mathbb{R}}^+ \to \ensuremath{\mathbb{R}}$,
\[
\ensuremath{\mathbb{N}}[g(\sigma^{\alpha}) | H_{\max} = 1]
= \frac{\ensuremath{\mathbb{N}}[g(H_{\max}) H_{\max}^{-1/\alpha - 1} | \sigma = 1]}
{\ensuremath{\mathbb{N}}[H_{\max}^{-1/\alpha - 1} | \sigma = 1]}.
\]
\end{lemma}
\begin{proof}
Taking $\eta = 1$ in the second statement of Proposition
\ref{prop:Miermont}, we see that $\ensuremath{\mathbb{N}}[f(H) | \sigma = x]$ is
equal to $\ensuremath{\mathbb{N}}[f(H^{[x/\sigma]}) \I{\sigma > 1}]/\mathbb N(\sigma>1)$. Then,
conditioning on the value of $H_{\max}$ and using (\ref{eqn:tail1}), we have
\[
\ensuremath{\mathbb{N}}[f(H^{[x/\sigma]}) \I{\sigma > 1}] = \left(\frac{-\alpha}{\alpha+1}\right)^{1/\alpha} \int_{\ensuremath{\mathbb{R}}^{+}}
\ensuremath{\mathbb{N}}[f(H^{[x/\sigma]}) \I{\sigma > 1} | H_{\max} = u]
u^{1/\alpha} \mathrm{d}u.
\]
By the final statement of Lemma~\ref{lem:scaling}, we see that this is
equal to
\[
\left(\frac{-\alpha}{\alpha+1}\right)^{1/\alpha} \int_{\ensuremath{\mathbb{R}}^{+}}
\ensuremath{\mathbb{N}}[f(H^{[x/\sigma]}) \I{\sigma > x} | H_{\max} = x^{-\alpha} u]
u^{1/\alpha} \mathrm{d}u.
\]
The first statement follows by noting from (\ref{eqn:tail2}) that $\ensuremath{\mathbb{N}}(\sigma > 1) = \Gamma(-\alpha)^{-1}$.
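(The constant $\left(\frac{-\alpha}{\alpha+1}\right)^{1/\alpha}$ appears above
because, rewritten in terms of $\alpha$, (\ref{eqn:tail1}) yields
\[
\ensuremath{\mathbb{N}}(H_{\max} \in \mathrm{d}u)
= \left(\frac{-\alpha}{\alpha+1}\right)^{1/\alpha} u^{1/\alpha} \, \mathrm{d}u,
\quad u > 0,
\]
which is the density used when conditioning on the value of $H_{\max}$.)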
In order to prove the second statement in the lemma, note that by the
first statement we have
\[
\ensuremath{\mathbb{N}}[g(H_{\max}) H_{\max}^{-1/\alpha - 1} | \sigma = 1]
= c \int_{\ensuremath{\mathbb{R}}^+} \ensuremath{\mathbb{N}}[g(\sigma^{\alpha} H_{\max}) \sigma^{-1-\alpha}
H_{\max}^{-1/\alpha - 1} \I{\sigma > 1} |
H_{\max} = u] u^{1/\alpha} \mathrm{d} u.
\]
By Lemma~\ref{lem:scaling}, we have that
\[
\ensuremath{\mathbb{N}}[g(\sigma^{\alpha} H_{\max}) \I{\sigma > 1} \sigma^{-1-\alpha}
H_{\max}^{-1/\alpha - 1} | H_{\max} = u]
= \ensuremath{\mathbb{N}}[g(\sigma^{\alpha}) \sigma^{-1-\alpha} \I{\sigma > u^{1/\alpha}} |
H_{\max} = 1],
\]
for all $u > 0$. Hence, by Fubini's theorem,
\begin{align*}
\ensuremath{\mathbb{N}}[g(H_{\max}) H_{\max}^{-1/\alpha - 1} | \sigma = 1]
& = c \ensuremath{\mathbb{N}}\left[g(\sigma^{\alpha}) \sigma^{-1-\alpha}
\int_{\sigma^{\alpha}}^{\infty} u^{1/\alpha} \mathrm{d}u \bigg|
H_{\max} = 1\right] \\
& = c \ensuremath{\mathbb{N}}[g(\sigma^{\alpha}) | H_{\max} = 1].
\end{align*}
The result follows.
\end{proof}
\subsection{Williams' decomposition}
\label{sec:Williams}
Except in the Brownian case, the height process is not Markov.
However, a version of it can be reconstructed from a measure-valued
Markov process $\rho$, called the \emph{exploration process} (see Le
Gall and Le Jan \cite{LeGall/LeJan} or Section 0.3 of Duquesne and Le
Gall \cite{Duquesne/LeGall} for a definition), by taking $H(t)$ to be
the supremum of the topological support of $\rho(t)$. Abraham and
Delmas \cite{Abraham/Delmas} give a decomposition of the height
process $H$ (more precisely, of the continuum random tree coded by this
height process) in terms of this exploration process. This
decomposition is the analogue of Williams' decomposition for the
Brownian excursion discussed earlier. We recall their result below
but we state it in terms of the height process. This is somewhat less
precise, but is sufficient for our purposes and easier to state.
\noindent \textbf{Notation.} Take an arbitrary
point measure $\mu=\sum_{i\geq 1}a_i\delta_{t_i}$ on
$(0,\infty)$. Now consider, for each $i \geq 1$, an independent
Poisson point process on $\ensuremath{\mathbb{R}}^+ \times \mathcal{E}$ of intensity
$\mathrm{d}u \ensuremath{\mathbb{N}}( \cdot ,H_{\max}\leq t_i )$, having points
$\{(u_j^{(i)},f_j^{(i)}),j \geq 1\}$. For each $i \geq 1$, define a
subordinator $\tau^{(i)}$ by
\[
\tau^{(i)}(u)=\sum_{j: u_j^{(i)} \leq u} \sigma(f_j^{(i)}), \quad u \geq 0,
\]
where for any excursion $f$, $\sigma(f)$ denotes its length. Note
that the L\'evy measure of this subordinator integrates the function
$1 \wedge x$ on $\mathbb R^+$ since $\mathbb N[1\wedge \sigma,
H_{\text{max}}\leq t_i] \leq \mathbb N[1\wedge \sigma]$, which is finite by
Proposition~\ref{prop:tails}. Hence $\tau^{(i)}(u)<\infty$ for all $u
\geq 0$ a.s.
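For instance, the finiteness of $\mathbb N[1\wedge \sigma]$ can be read off
from (\ref{eqn:tail2}):
\[
\mathbb N[1\wedge \sigma]
= \mathbb N(\sigma>1) + \mathbb N[\sigma \I{\sigma \leq 1}]
\leq \mathbb N(\sigma>1) + \int_0^1 \mathbb N(\sigma>s)\,\mathrm{d}s
= (\Gamma(1-1/\beta))^{-1}\Big(1+\frac{\beta}{\beta-1}\Big),
\]
since $\int_0^1 s^{-1/\beta}\,\mathrm{d}s = \beta/(\beta-1)$ for $\beta \in (1,2]$.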
We will need the function $H^{(i)}$, defined on $[0,\tau^{(i)}(a_i)]$
by
\begin{equation}
\label{eqn:Hdefinition}
H^{(i)}(x) := \sum_{j: u_j^{(i)} \leq a_i} f_j^{(i)}(x-\tau^{(i)}(u_j^{(i)}-))
\I{\tau^{(i)}(u_j^{(i)}-) < x \leq \tau^{(i)}(u_j^{(i)})}.
\end{equation}
The process $H^{(i)}$ can be viewed as a collection of excursions of
$H$ conditioned to have heights lower than $t_i$ and such that the local
time of $H^{(i)}$ at $0$ is $a_i$.
We will now use this set-up in the situation of interest. Let
$\rho:=\sum_{i \geq 1} \delta_{(v_i,r_i,t_i)}$ be a Poisson point
measure on $[0,1]\times \mathbb R^+ \backslash \{0\} \times \mathbb
R^+ \backslash \{0\}$ with intensity
\begin{equation*}
\frac{\beta(\beta-1)}{\Gamma(2-\beta)}
\exp(-rc_{\beta}t^{1/(1-\beta)})r^{-\beta}\mathrm d v\mathrm d r
\mathrm dt,
\end{equation*}
where $c_{\beta}=(\beta-1)^{1/(1-\beta)}$. Conditionally on the
point measures $\sum_{i \geq 1}(1-v_i)r_i \delta_{t_i}$ and $\sum_{i
\geq 1}v_ir_i \delta_{t_i} $, use them to define two independent collections of
independent processes $\{H_+^{(i)}, i \geq 1\}$ and $\{H_{-}^{(i)}, i
\geq 1\}$ respectively, as in (\ref{eqn:Hdefinition}). We now glue
these together in order to define a function $H_{\infty}$ on $\ensuremath{\mathbb{R}}$.
For $u \geq 0$, set
\[
\eta^{+}(u):=\sum_{i: t_i \leq u}\tau^{(i)}_+((1-v_i)r_i), \quad
\eta^{-}(u):=\sum_{i: t_i \leq u}\tau^{(i)}_-(v_ir_i).
\]
It is not obvious that $\eta^+(u)$ and $\eta^-(u)$ are almost surely
finite for all $u\geq0$. This is a consequence of the forthcoming
Theorem \ref{thm:CVHcenter}. It can also be proved via Campbell's
theorem for Poisson point processes. Now set
\begin{align*}
H_{\infty}(x) := & \left(\sum_{i \geq 1}
\left[t_i-H_{+}^{(i)}(x-\eta^{+}(t_i-))\right]
\I{\eta^{+}(t_i-) < x \leq \eta^{+}(t_i)} \right) \I{x \geq 0} \\
& + \left( \sum_{i \geq 1} \left[t_i-H_{-}^{(i)}(-x-\eta^{-}(t_i-))\right]
\I{\eta^{-}(t_i-) < -x \leq \eta^{-}(t_i)} \right) \I{x < 0}.
\end{align*}
\begin{figure}
\caption{Schematic drawing of $H_{\infty}$.}
\label{fig:Hinfty}
\end{figure}
Note that almost surely for all $u>0$,
\begin{equation*} \label{etaplus}
\eta^+(u)=\inf\{x>0: H_{\infty}(x)>u\}
\end{equation*}
or, equivalently, the right inverse of $\eta^+$ is equal to the
supremum process
\[
\left(\sup_{0 \leq y \leq x}H_{\infty}(y), x \geq 0\right).
\]
Symmetrically,
\begin{equation*}\label{etamoins}
\eta^-(u)=\sup\{x<0: H_{\infty}(x)>u\}
\end{equation*}
and the right inverse of $\eta^-$ is the supremum process
\[
\left(\sup_{-x \leq y \leq 0} H_{\infty}(y), x \geq 0\right).
\]
Roughly, the construction of $H_{\infty}$ works as follows:
conditional on the supremum (and for each value of the supremum), we
take a collection of independent excursions below that supremum which
are conditioned not to go below the $x$-axis, and which have total
local time $r_i$ for an appropriate value $r_i$. This local time is
split with a uniform random variable into a proportion $v_i$ which
goes to the left of the origin and a proportion $(1 - v_i)$ which goes
to the right of the origin. See Figure~\ref{fig:Hinfty} for an idea
of what $H_{\infty}$ looks like (note that the times $t_i$ should be
dense on the vertical axis). Note that we may always choose a
continuous version of $H_{\infty}$. Roughly, this is because the
processes $\eta^{+}$ and $\eta^{-}$ almost surely have no intervals where
they are constant. This implies that the one-sided suprema of
$H_{\infty}$ are continuous. Finally, the excursions that we glue
beneath these suprema can be assumed to be continuous.
The following theorem says that if we flip this picture over we obtain
an excursion of the height process which is conditioned on its maximum
height. The proof follows easily from Lemma 3.1 and Theorem 3.2 of Abraham
and Delmas~\cite{Abraham/Delmas}.
\begin{theorem} [Abraham and Delmas \cite{Abraham/Delmas} (Stable
case, $1<\beta<2$)]
\label{thm:CVHcenter}
For all $m>0$,
\begin{align*}
& \left(m-H_{\infty}(x-\eta^{-}(m)),0 \leq x \leq
\eta^-(m)+\eta^+(m) \right) \\
& \qquad \equidist \left(H(x),0
\leq x \leq \sigma \right) \text{ under } \mathbb N(\cdot |
H_{\max}=m).
\end{align*}
\end{theorem}
We note that, in particular, \[\eta(m):=\eta^-(m)+\eta^+(m)\] has the
distribution of $\sigma$ under $\ensuremath{\mathbb{N}} ( \cdot | H_{\max}=m)$ and that
$\eta^{-}(m)$ has the distribution of $x_{\max}$ under the same measure.
Theorem 3.2 of Abraham and Delmas~\cite{Abraham/Delmas} also entails a
Brownian counterpart of this result. Much of the complication in the
description of $H_{\infty}$ for $1 < \beta < 2$ came from the fact
that the stable height process has repeated local minima. Here the
construction of $H_{\infty}$ is simpler since local minima are unique
in the Brownian excursion. Let $\sum_{i \geq 1} \delta_{(t_i,h_i)}$
and $\sum_{i \geq 1} \delta_{(u_i,f_i)}$ be two independent Poisson
point measures on $\ensuremath{\mathbb{R}}_{+} \times \mathcal{E}$, both with intensity
$\mathrm d t \mathbb N(\cdot, H_{\max}\leq t )$. For $s \geq 0$, set
\[
\eta^{+}(s):= \sum_{i: t_i \leq s}\sigma(h_i),
\quad \eta^{-}(s):=\sum_{i: u_i \leq s}\sigma(f_i).
\]
Finally, set
\begin{align*}
H_{\infty}(x):= & \left( \sum_{i \geq 1} \left[t_i-h_i(x-\eta^{+}(t_i-))\right]
\I{\eta^+(t_i-) < x \leq \eta^+(t_i)} \right) \I{x \geq 0} \\
& + \left( \sum_{i \geq 1} \left[u_i - f_i(-x - \eta^{-}(u_i-))\right]
\I{\eta^-(u_i-) < -x \leq \eta^-(u_i)} \right) \I{x < 0}.
\end{align*}
\begin{theorem} [Abraham and Delmas~\cite{Abraham/Delmas},
Williams~\cite{Rogers/Williams2} (Brownian case, $\beta=2$)]
\label{thm:CVHcenterBr}
For all $m>0$,
\begin{align*}
& \left(m-H_{\infty}(x-\eta^{-}(m)),0 \leq x \leq
\eta^-(m)+\eta^+(m) \right)\\
& \qquad \equidist \left(H(x),0
\leq x \leq \sigma \right) \text{ under } \mathbb N(\cdot |
H_{\max}=m).
\end{align*}
\end{theorem}
In the sequel, we will need various properties of the processes
$(H_{\infty}(x), x \in \ensuremath{\mathbb{R}})$ and $(\eta(m), m \geq 0)$.
We start with some properties of $H_{\infty}$.
\begin{lemma} \label{lem:propofH_infty}
For all $\beta \in (1,2]$, the process $H_{\infty}$ is self-similar:
for all $m > 0$,
\[
(m^{\alpha}H_{\infty}(mx), x \in \ensuremath{\mathbb{R}}) \equidist
(H_{\infty}(x), x \in \ensuremath{\mathbb{R}}).
\]
Moreover, with probability 1, $H_{\infty}(x) \rightarrow +\infty$ as
$x \rightarrow +\infty$ or $x \rightarrow -\infty$.
\end{lemma}
\begin{proof}
The self-similarity property is an easy consequence of the identity
in law stated in Theorems \ref{thm:CVHcenter} and
\ref{thm:CVHcenterBr} and of the scaling property of $H$ conditioned
on its maximum (Lemma \ref{lem:scaling}).
Now, we will show that for each $A>0$, a.s. $H_{\infty}(t)>A$ for
$t$ sufficiently large. This will imply that $\liminf_{x\rightarrow
+\infty} H_{\infty}(x)$ is almost surely larger than $A$ for all
$A>0$, and hence is infinite. So consider $A>0$ and recall the
construction of $H_{\infty}$ from Poisson point measures in the
stable cases $1<\beta<2$ (the proof can be done in a similar way in
the Brownian case). By construction of $H_{\infty}$, we will have
that $H_{\infty}(t)>A$ for $t$ sufficiently large if and only if,
conditional on the Poisson point measure $\sum_{i \geq 1}
(1-v_i)r_i\delta_{t_i}$, the number of $i$ such that $t_i>A+1$
and \[A_i:=\max_{x\in [0,\tau_+^{(i)}((1-v_i)r_i)]}H^{(i)}_+(x) \geq
t_i-A\] is almost surely finite. By the Borel-Cantelli lemma, it is
therefore sufficient to check that the sum $\sum_{t_i >
A+1}\Prob{A_i \geq t_i-A | t_i,(1-v_i)r_i}$ is almost surely
finite. By standard rules of Poisson measures,
\begin{align*}
\Prob{A_i \geq t_i-A|t_i,(1-v_i)r_i}
& = 1-\exp{(-(1-v_i)r_i\mathbb N\left[t_i-A\leq H_{\max}\leq t_i\right])}\\
& = 1-\exp{(-(1-v_i)r_ic_{\beta}((t_i-A)^{1/(1-\beta)}-t_i^{1/(1-\beta)}))}\\
& \leq (1-v_i)r_ic_{\beta}
((t_i-A)^{1/(1-\beta)}-t_i^{1/(1-\beta)}).
\end{align*}
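The first equality here is an instance of the standard Poisson computation:
conditionally on $(t_i,(1-v_i)r_i)$, the number of excursions $f_j^{(i)}$
with $u_j^{(i)} \leq (1-v_i)r_i$ whose maximum is at least $t_i-A$ is Poisson
distributed with mean $(1-v_i)r_i\mathbb N\left[t_i-A\leq H_{\max}\leq t_i\right]$,
and $\{A_i \geq t_i-A\}$ is the event that this count is non-zero, so that
\[
\Prob{A_i < t_i-A | t_i,(1-v_i)r_i}
= \exp{\left(-(1-v_i)r_i\mathbb N\left[t_i-A\leq H_{\max}\leq t_i\right]\right)}.
\]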
Finally,
\begin{align*}
& \E{\sum_{t_i>A+1}(1-v_i)r_i ((t_i-A)^{1/(1-\beta)}-t_i^{1/(1-\beta)})} \\
&
=\int_0^1 \int_0^{\infty} \int_{A+1}^{\infty} (1-v)
r((t-A)^{1/(1-\beta)}-t^{1/(1-\beta)})
\tfrac{\beta(\beta-1)}{\Gamma(2-\beta)}
\exp{(-c_{\beta}rt^{1/(1-\beta)})}r^{-\beta}\mathrm d t\mathrm d r
\mathrm dv,
\end{align*}
which is clearly finite. This gives the desired result. The proof is
identical for the behavior as $x \rightarrow -\infty$.
\end{proof}
From their
construction from Poisson point measures, it is immediate that the
processes $\eta^+$ and $\eta^-$ both have independent (but not
stationary) increments. The following lemma is an obvious corollary of the self-similarity of $H_{\infty}$.
\begin{lemma} \label{lem:egalloieta} Let $1<\beta \leq 2$. Then
for all $x\geq0$ and $m>0$,
\[
\left(m^{1/\alpha}\left(\eta^+(u+x)-\eta^+(u)\right) , u \geq 0 \right)\equidist
\left(\eta^+\left(\frac{u+x}{m}\right)-\eta^+\left(\frac{u}{m}\right),
u \geq 0\right).
\]
In particular, for any $a > 0$,
\[
m^{1/\alpha}\left(\eta^+(ma+u)-\eta^+(u) \right) \convdist \eta^+(a)
\]
as $m \to \infty$. The same holds by replacing the process $\eta^+$ by $\eta^-$ and then by $\eta$.
\end{lemma}
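Indeed, combining the self-similarity of $H_{\infty}$ (Lemma
\ref{lem:propofH_infty}) with the identity $\eta^+(u)=\inf\{x>0:
H_{\infty}(x)>u\}$, one can check that, for each fixed $m>0$,
\[
\left(m^{1/\alpha}\eta^+(v), v \geq 0\right) \equidist
\left(\eta^+(v/m), v \geq 0\right),
\]
and the statement for the increments follows immediately.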
\section{Convergence of height processes}
\label{sec:convheightprocess}
Let $\mathcal{E}^*$ be the set of excursions $f: \ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}}^+$ such
that $f$ has a unique maximum. We write $f_{\max} = \max_{x \in \ensuremath{\mathbb{R}}}
f(x)$ and $x_{\max} = \ensuremath{\operatornamewithlimits{argmax}}_{x \in \ensuremath{\mathbb{R}}} f(x)$. Let $\phi:
\mathcal{E}^* \to C(\ensuremath{\mathbb{R}}, \ensuremath{\mathbb{R}}^+)$ be defined by
\[
\phi(f)(t) = f_{\max} - f(x_{\max} + t).
\]
Let $(H(x), 0 \leq x \leq \sigma)$ be an excursion with a unique
maximum. Extend this to a function in $\mathcal{E}^*$ by setting $H$
to be zero outside the interval $[0,\sigma]$. Now put
\[
\bar{H} = \phi(H).
\]
The aim of this section is to prove Theorem \ref{thm:stablefragfonc},
which, using the scaling property of the stable height process, can be
re-stated as
\begin{theorem} \label{thm:stablefragfonc2}
Let $H_x$ have the distribution of $\bar{H}$ under $\ensuremath{\mathbb{N}}(\cdot | \sigma
= x)$. Then
\[
H_x \convdist H_{\infty}
\]
as $x \to \infty$, in the sense of uniform convergence on compact intervals.
\end{theorem}
Write $H^{(m)}_{\infty}$ for the function which is
$H_{\infty}$ capped at level $m$:
\[
H^{(m)}_{\infty}(x)
= \begin{cases}
H_{\infty}(x) & \text{if $-\eta^{-}(m) \leq x \leq \eta^+(m)$} \\
m & \text{otherwise}.
\end{cases}
\]
Then we can re-state Theorem~\ref{thm:CVHcenter} as
\begin{theorem} \label{thm:CVHcenter2}
For all $m > 0$,
\[
H_{\infty}^{(m)} \equidist \bar{H} \text{ under $\ensuremath{\mathbb{N}}(\cdot |
H_{\max} = m)$}.
\]
\end{theorem}
We will need the following technical lemma.
\begin{lemma} \label{lem:etaHindep}
For all $a > 0$,
\[
(m^{1/\alpha} \eta(ma), H_{\infty}^{(ma)}) \convdist (\tilde{\eta}(a), H_{\infty}),
\]
as $m \to \infty$, where $\tilde{\eta}(a)$ has the same distribution as $\eta(a)$ and is independent of $H_{\infty}$. Here, the convergence is for the topology associated
with the metric
\[
d((a_1,f_1), (a_2,f_2)) = |a_1 - a_2| + \sum_{k \in \ensuremath{\mathbb{N}}} 2^{-k} \left(\sup_{t \in
[-k,k]} |f_1(t) - f_2(t)| \wedge 1 \right)
\]
on $\ensuremath{\mathbb{R}} \times C(\ensuremath{\mathbb{R}}, \ensuremath{\mathbb{R}}^+)$.
\end{lemma}
\begin{proof}
Let $f: \ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}}$ be a bounded continuous function and let
$g:C([-n,n], \ensuremath{\mathbb{R}}^+) \to \ensuremath{\mathbb{R}}$ be a bounded continuous function for some
$n \in \ensuremath{\mathbb{N}}$. To ease notation, whenever $h: \ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}}^+$ we will
write $g(h)$ for $g(h|_{[-n,n]})$.
It follows from Lemma \ref{lem:egalloieta} that $m^{1/\alpha} \eta(ma)
\equidist \eta(a)$ for all $m,a > 0$, but this is insufficient to give
the required asymptotic independence. Since the processes
$\eta^+,\eta^-$ have independent increments, by construction we have
that $m^{1/\alpha}(\eta(ma) - \eta(u))$ is independent of
$(H_{\infty}(y), y \in [-\eta^-(u), \eta^+(u)])$ and $\eta^+(u),
\eta^-(u)$, whenever $ma > u$. By Lemma~\ref{lem:egalloieta},
\[
m^{1/\alpha}(\eta(ma) - \eta(u)) \convdist \eta(a)
\]
as $m \to \infty$. So when $ma > u$ we certainly have
\begin{align*}
& \E{f(m^{1/\alpha}(\eta(ma) - \eta(u)))
g(H_{\infty}^{(ma)}) \I{\eta^-(u) > n, \eta^+(u) > n}} \\
& = \E{f(m^{1/\alpha}(\eta(ma) - \eta(u)))}
\E{g(H_{\infty}^{(ma)}) \I{\eta^-(u) > n, \eta^+(u) > n}} \\
& \to \E{f(\eta(a))}
\E{g(H_{\infty}) \I{\eta^-(u) > n, \eta^+(u) > n}}.
\end{align*}
Without loss of generality, the functions $|f|$ and $|g|$ are bounded
by 1. To ease notation, put $G_m = g(H_{\infty}^{(ma)}) \I{\eta^-(u) > n,
\eta^+(u) > n}$. We will first show that
\[
\left| \E{f(m^{1/\alpha}(\eta(ma) - \eta(u))) G_m} - \E{f(m^{1/\alpha}
\eta(ma)) G_m} \right| \to 0
\]
as $m \to \infty$. Clearly, this absolute value is smaller than
\begin{equation*}
\E{\left| f(m^{1/\alpha}(\eta(ma) - \eta(u)))-f(m^{1/\alpha}
\eta(ma)) \right| } = \E{\left| f((\eta(a) - \eta(u/m)))-f(
\eta(a)) \right| } ,
\end{equation*}
by the self-similarity property of $\eta$ (Lemma \ref{lem:egalloieta}).
The function $f$ is bounded and continuous and so, by dominated
convergence, this last quantity converges to $0$. So we obtain that
\[
\E{f(m^{1/\alpha}\eta(ma)) g(H_{\infty}^{(ma)})
\I{\eta^-(u) > n, \eta^+(u) > n}}
\to \E{f(\eta(a))} \E{g(H_{\infty})
\I{\eta^-(u) > n, \eta^+(u) > n}}.
\]
We must now remove the $\I{\eta^-(u) > n, \eta^+(u) > n}$. Again
using self-similarity, we have that $\eta^-(u) \to \infty$ and $\eta^+(u) \to \infty$
almost surely as $u \to \infty$. Take $\varepsilon > 0$. Then there
exists a $u$ such that
\[
\Prob{\eta^-(u) \leq n} < \frac{\varepsilon}{2},
\quad \Prob{\eta^+(u) \leq n} < \frac{\varepsilon}{2}.
\]
So for all $m > 0$,
\begin{align*}
& \left| \E{f(m^{1/\alpha}\eta(ma)) g(H_{\infty}^{(ma)})
\I{\eta^-(u) > n, \eta^+(u) > n}}
- \E{ f(m^{1/\alpha}\eta(ma)) g(H_{\infty}^{(ma)})} \right| \\
& \leq \Prob{\eta^-(u) \leq n} + \Prob{\eta^+(u) \leq n} < \varepsilon.
\end{align*}
It is now straightforward to conclude that
\begin{equation*}
\E{f(m^{1/\alpha} \eta(ma)) g(H_{\infty}^{(ma)})}
\to
\E{f(\eta(a))} \E{g(H_{\infty})}
\end{equation*}
as $m \to \infty$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:stablefragfonc2}.] Let $g:
C([-n,n],\ensuremath{\mathbb{R}}^+) \to \ensuremath{\mathbb{R}}^+$ be a bounded continuous function for some $n
\in \ensuremath{\mathbb{N}}$. As before, if $h: \ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}}^+$, we will write $g(h)$ for
$g(h|_{[-n,n]})$. Then it will be sufficient for us to show that
\[
\E{g(H_x)} \to \E{g(H_{\infty})}
\]
as $x \to \infty$.
Without loss of generality, take $|g| \leq 1$. Then
\begin{align*}
\E{g(H_x)} & = \ensuremath{\mathbb{N}}[g(\bar{H}) | \sigma = x] \\
& = \ensuremath{\mathbb{N}}[g(\phi(H)) | \sigma = x] \\
& = c \int_{\ensuremath{\mathbb{R}}^+} \ensuremath{\mathbb{N}}[g(\phi(H^{[x/\sigma]})) \I{\sigma \geq x} | H_{\max}
= x^{-\alpha} u] u^{1/\alpha} \mathrm{d}u,
\end{align*}
by Lemma~\ref{lem:sigmaH_max}. Suppose that we could show that
\begin{equation}
\ensuremath{\mathbb{N}}[g(\phi(H^{[x/\sigma]}))\I{\sigma \geq x} | H_{\max} = x^{-\alpha} u]
\to \E{g(H_{\infty})} \mathbb N[\I{\sigma \geq 1} | H_{\max} = u]
\label{eqn:conv}
\end{equation}
as $x \to \infty$ for all $u > 0$. Since $|g| \leq 1$,
\[
\ensuremath{\mathbb{N}}[g(\phi(H^{[x/\sigma]}))\I{\sigma \geq x} | H_{\max} = x^{-\alpha} u]
\leq \ensuremath{\mathbb{N}}[\I{\sigma \geq x} | H_{\max} = x^{-\alpha} u]
= \ensuremath{\mathbb{N}}[\I{\sigma \geq 1} | H_{\max} = u],
\]
by Lemma \ref{lem:scaling} and we also have that
\[
\int_{\ensuremath{\mathbb{R}}^+} \ensuremath{\mathbb{N}}[\I{\sigma \geq 1} | H_{\max} = u]
u^{1/\alpha} \mathrm{d} u = c \ensuremath{\mathbb{N}}[\sigma \geq 1] <
\infty.
\]
So then by the dominated convergence theorem, we would be able to
conclude that
\[
\ensuremath{\mathbb{N}}[g(\phi(H)) | \sigma = x] \to \E{g(H_{\infty})}.
\]
It remains to prove (\ref{eqn:conv}). By Theorem~\ref{thm:CVHcenter2},
\begin{align*}
& \ensuremath{\mathbb{N}}[g(\phi(H^{[x/\sigma]})) \I{\sigma \geq x} | H_{\max} = x^{-\alpha}
u] \\
& = \E{g((x^{-1} \eta(x^{-\alpha} u))^{\alpha} H_{\infty}^{(x^{-\alpha}
u)}(x^{-1} \eta(x^{-\alpha} u)\ \cdot\ )) \I{\eta(x^{-\alpha} u)
\geq x}}.
\end{align*}
Now, by Lemma~\ref{lem:etaHindep} we have that
\[
(x^{-1} \eta(x^{-\alpha} u), H_{\infty}^{(x^{-\alpha}
u)}) \convdist (\tilde{\eta}(u), H_{\infty})
\]
as $x \to \infty$, where $\tilde{\eta}(u)$ and $H_{\infty}$ are
independent. By the Skorokhod representation theorem, we may suppose that this convergence is almost sure. But then, by the bounded convergence theorem, since $\Prob{\tilde{\eta}(u) = 1} = \ensuremath{\mathbb{N}}(\sigma = 1 | H_{\text{max}} = u) = 0$, we have
\begin{align*}
& \E{g((x^{-1} \eta(x^{-\alpha} u))^{\alpha} H_{\infty}^{(x^{-\alpha}
u)}(x^{-1} \eta(x^{-\alpha} u)\ \cdot\ )) \I{\eta(x^{-\alpha} u)
\geq x}} \\
& \to \E{g(\tilde{\eta}(u)^{\alpha} H_{\infty}(\tilde{\eta}(u)\ \cdot\
)) \I{\tilde{\eta}(u) \geq 1}},
\end{align*}
as $x \to \infty$. By the scaling property of $H_{\infty}$ (Lemma
\ref{lem:propofH_infty}) and the independence of $\tilde{\eta}(u)$ and
$H_{\infty}$, we see that this last is equal to
\[
\E{g(H_{\infty})} \Prob{\tilde{\eta}(u) \geq 1}.
\]
Since $\Prob{\tilde{\eta}(u) \geq 1} = \ensuremath{\mathbb{N}}[\I{\sigma \geq 1} | H_{\max}
= u]$, the result follows.
\end{proof}
\section{Convergence of open sets and their sequences of ranked lengths}
\label{sec:proof}
In this section, we prove Theorem \ref{thm:stablefrag} and Corollary
\ref{corostablerank}.
We will need the concept of a \emph{tagged fragment}, that is, the
size $(\lambda(t))_{t \geq 0}$ of a block of the fragmentation which
is tagged uniformly at random. Then, according to \cite{BertoinSSF},
$(-\log(\lambda(t)), t \geq 0)$ is a time-change of a subordinator
$\xi$ with Laplace exponent
\begin{equation} \label{eqn:Laplacetagged}
\phi(t) = \int_{\ensuremath{\mathcal{S}^{\downarrow}_1}} \left(1 - \sum_{i=1}^{\infty} s_i^{1+t}\right) \nu(\mathrm
d \mathbf s), \quad t \geq 0.
\end{equation}
More specifically, $\lambda(t) = \exp(-\xi(\rho(t)))$, where
\[
\rho(t) := \inf\left\{u \geq 0 : \int_0^u \exp(\alpha \xi(r)) \mathrm{d}r \geq
t\right\}.
\]
\begin{lemma}
\label{lemmaconditions}
(i) For all $a>0$,
\[
\mathrm{Leb}\{x \in \mathbb R : H_{\infty}(x)=a\}=0 \text{ a.s. }
\]
(ii) For all $a>0$, almost surely, $a$ is not a local maximum of $H_{\infty}$.
\end{lemma}
\begin{proof} (i) By Proposition 1.3.3 of Duquesne and Le
Gall~\cite{Duquesne/LeGall}, the height process $(H(x), x \geq 0)$
possesses a collection of local times $(L_s^x, s \geq 0, x \geq 0)$,
which we can assume is jointly measurable, and continuous and
non-decreasing in $s$. Moreover, for any non-negative measurable
function $g: \ensuremath{\mathbb{R}}^+ \to \ensuremath{\mathbb{R}}^+$,
\[
\int_0^s g(H_r) \mathrm{d} r = \int_{\ensuremath{\mathbb{R}}^+} g(x) L_s^x \mathrm{d} x
\quad \text{a.s.}
\]
Taking $g(x) = \I{x=a}$, we see that for any $s \geq 0$,
\[
\mathrm{Leb}\{t \in [0,s]: H(t) = a\} = 0 \quad \text{a.s.}
\]
Since the height process is built out of excursions, the same is true
for $H$ under the excursion measure $\ensuremath{\mathbb{N}}$. In other words, for all $a
> 0$,
\begin{equation}
\label{leb0}
\ensuremath{\mathbb{N}}(\{\mathrm{Leb}\{x \in [0,\sigma]: H(x) = a\}\neq 0\}) = 0.
\end{equation}
Moreover, from the construction of the process $H_{\infty}$ in
Section~\ref{sec:Williams}, and using the same notation, we see
that
\[
\mathrm{Leb}\{x \in \ensuremath{\mathbb{R}}: H_{\infty}(x) = a \}
= \sum_{i \geq 1} \sum_{j: u_j^{(i)} \leq r_i} \mathrm{Leb}\left\{x
\in [0, \sigma(f_j^{(i)})]: f_j^{(i)}(x) = t_i - a\right\},
\]
where $\sum_{i \geq 1} \delta_{(r_i,t_i)}$ is a Poisson point measure
of intensity
\[
\beta (\beta - 1) (\Gamma(2-\beta))^{-1}
\exp(-rc_{\beta} t^{1/(1-\beta)}) r^{-\beta} \mathrm{d}r \mathrm{d}t
\]
on $\ensuremath{\mathbb{R}}^+\setminus\{0\} \times \ensuremath{\mathbb{R}}^+\setminus \{0\}$ and, for each $i
\geq 1$, $\{(u^{(i)}_j, f^{(i)}_j), j \geq 1\}$ are the points of a
Poisson point measure of intensity $\mathrm{d}u \ensuremath{\mathbb{N}}(\cdot,H_{\max} \leq
t_i)$ on $\ensuremath{\mathbb{R}}^+ \times \mathcal{E}$. From this representation and
(\ref{leb0}), it is clear that the Lebesgue measure of $\{x \in \ensuremath{\mathbb{R}}:
H_{\infty}(x) = a \}$ is equal to 0 almost surely.
(ii) By Lemma 2.5.3 of Duquesne and Le Gall~\cite{Duquesne/LeGall}, for all $a > 0$,
\[
\ensuremath{\mathbb{N}}(\{\text{$a$ is a local minimum of $H$}\}) = 0.
\]
With notation as in part (i), we have
\[
\{ \text{$a$ is a local maximum of $H_{\infty}$} \} = \bigcup_{i \geq
1} \left(\{t_i=a \} \cup \bigcup_{j: u_j^{(i)} \leq r_i} \left\{\text{$t_i - a$ is a local
minimum of $f_j^{(i)}$}\right\} \right)
\]
(recall that in the construction of $H_{\infty}$, the excursions
$f_j^{(i)}$ are glued upside down at levels $t_i$, so that local
minima of these excursions correspond to local maxima of
$H_{\infty}$). The conclusion is now obvious.
\end{proof}
The proof of Theorem \ref{thm:stablefrag} follows immediately from
Theorem \ref{thm:stablefragfonc2}, Proposition
\ref{prop:fnconvimpliessetconv} and Lemma \ref{lemmaconditions}.
It remains to prove Corollary \ref{corostablerank}. We will need the
following generalization of Theorem \ref{thm:stablefragfonc2} and the
forthcoming Lemma \ref{lemmarestriction}.
\begin{theorem} \label{thm:heightprocessconvdbis}
Let $H_x$ be as in Theorem \ref{thm:stablefragfonc2}, and denote by
$H_x|_K$ its restriction to the compact $[-K,K]$ (and similarly for
$H_{\infty})$. Then, for all $K>0$,
\[
(H_x|_K, \mathrm{Leb}\{ y \in [-K,K] : H_x(y)<1\}) \convdist
(H_{\infty}|_K, \mathrm{Leb}\{ y \in [-K,K] : H_{\infty}(y)<1\}),
\]
as $x \to \infty$. Here, the convergence is in the topology associated with the metric
\[
d((f_1, a_1), (f_2, a_2)) = \sup_{x \in [-K,K]} |f_1(x) - f_2(x)| + |a_1 - a_2|
\]
on $C([-K,K], \ensuremath{\mathbb{R}}^+) \times \ensuremath{\mathbb{R}}^+$.
\end{theorem}
\begin{proof} By the Skorokhod representation theorem, there exists a probability space such that the convergence in Theorem~\ref{thm:stablefragfonc2} is almost sure. Then the result follows from Lemma \ref{lem:cvLeb} and Lemma \ref{lemmaconditions} (i).
\end{proof}
\begin{lemma}
\label{lemmarestriction}
Given $\varepsilon>0$, there exists $K>0$ such that
\[
\Prob{\inf_{y \in (-\infty,-K] \cup [K,+\infty)} H_x(y)<1} \leq \varepsilon
\]
for all $x$ sufficiently large.
\end{lemma}
\begin{proof}
By symmetry, it is sufficient to show that there exists $K>0$
such that \linebreak $\Prob{\inf_{y \in[K,+\infty)} H_x(y)<1} \leq
\varepsilon$ for all $x$ sufficiently large. As in the proof of
Theorem \ref{thm:stablefragfonc2}, we combine Lemma \ref{lem:sigmaH_max}
and Theorem \ref{thm:CVHcenter} to get
\begin{align} \label{imp}
& \Prob{\inf_{y \in[K,+\infty)} H_x(y)<1} \\
& \qquad = c\int_{\mathbb R^+} \Prob{\inf_{y \in[K,+\infty)}
H_{\infty}^{(x^{-\alpha}u)}(x^{-1}\eta(x^{-\alpha}u)y)<x^{\alpha}
\eta^{-\alpha}(x^{-\alpha}u),\eta(x^{-\alpha}u) \geq x}
u^{1/\alpha}\mathrm d u. \nonumber
\end{align}
For all $L \geq 1$, the probability in the integral can be bounded
above by
\[
\Prob{\inf_{y \in[x^{-1}\eta(x^{-\alpha}u)K,+\infty)}
H_{\infty}^{(x^{-\alpha}u)}(y)<x^{\alpha}\eta^{-\alpha}(x^{-\alpha}u),
Lx > \eta(x^{-\alpha}u) \geq x}
+ \Prob{\eta(x^{-\alpha}u) \geq Lx}.
\]
The first probability in this sum is in turn smaller than
\[
\Prob{\inf_{y \in[K,+\infty)}
H_{\infty}^{(x^{-\alpha}u)}(y)<L^{-\alpha}, Lx > \eta(x^{-\alpha}u)
\geq x},
\]
and so is also smaller than
\begin{equation} \label{Proba1}
\Prob{\inf_{y \in[K,+\infty)}
H_{\infty}^{(x^{-\alpha}u)}(y)<L^{-\alpha}}
\wedge \Prob{Lx > \eta(x^{-\alpha}u) \geq x}.
\end{equation}
Recall that by self-similarity,
\[
\Prob{Lx > \eta(x^{-\alpha}u) \geq x}
=\Prob{L > \eta(u) \geq 1} \leq \Prob{\eta(u) \geq 1}.
\]
Hence, (\ref{Proba1}) can be bounded from above by $\Prob{\eta(u)
\geq 1}$ when $x^{-\alpha}u \leq L^{-\alpha}$, and by
\[
\Prob{\inf_{y \in[K,+\infty)} H_{\infty}(y)<L^{-\alpha}}
\wedge \Prob{\eta(u) \geq 1}
\]
when $x^{-\alpha}u > L^{-\alpha}$. From the identity (\ref{imp}) and
these observations, we have
\begin{align}
\Prob{\inf_{y \in[K,+\infty)} H_x(y)<1} \leq & \
c \int_0^{L^{-\alpha}x^{\alpha}} \Prob{\eta(u) \geq 1}
u^{1/\alpha}\mathrm d u \nonumber \\
& + c\int_0^{\infty} \Prob{\inf_{y \in[K,+\infty)}
H_{\infty}(y)<L^{-\alpha}} \wedge \Prob{\eta(u) \geq
1} u^{1/\alpha}\mathrm d u \label{imp2} \\
& + c\int_0^{\infty} \Prob{\eta(u) \geq L}
u^{1/\alpha}\mathrm d u. \nonumber
\end{align}
Recall that $c\int_0^{\infty} \Prob{\eta(u) \geq L}
u^{1/\alpha}\mathrm d u=\mathbb N[\sigma \geq
L]<\infty$. Then fix $L$ large enough that the third integral in the
right-hand side of (\ref{imp2}) is smaller than $\varepsilon/3$. This $L$
being fixed, note that the first integral on the right-hand side of
(\ref{imp2}) is smaller than $\varepsilon/3$ for all $x$ sufficiently
large. Since $H_{\infty}(y) \to \infty$ a.s.\ as $y \to +\infty,
-\infty$, by dominated convergence we have that the second integral
(which does not depend on $x$) is smaller than $\varepsilon/3$ for $K$
sufficiently large. Hence, there exists some $K>0$ such that
$\Prob{\inf_{y \in[K,+\infty)} H_x(y)<1} \leq \varepsilon$ for all $x$
sufficiently large.
\end{proof}
\begin{proof}[Proof of Corollary \ref{corostablerank}.] Let $\varepsilon>0$
and consider some real number $K$ such that, for $x$ large enough, the
events $E_x=\{\inf_{y \in (-\infty,-K] \cup [K,+\infty)} H_x(y) < 1
\}$ all have probability smaller than $\varepsilon/5$ (such a $K$ exists
by Lemma \ref{lemmarestriction}). Taking $K$ larger if necessary, we
also have that $E_{\infty}$ (defined in a similar way using
$H_{\infty}$) has probability smaller than $\varepsilon/5$.
Re-stated in terms of the functions $H_x$, our goal is to check that
the decreasing sequence of lengths of the interval components of
$O_x:=\{y\in \mathbb R: H_x(y)<1\}$, say $|O_x|^{\downarrow}$,
converges in distribution as $x \rightarrow \infty$ to the decreasing
sequence of lengths, $|O_{\infty}|^{\downarrow}$, of the interval
components of $O_{\infty}:=\{y\in \mathbb R : H_{\infty}(y)<1\}$. We
recall that the topology considered on $\ensuremath{\mathcal{S}^{\downarrow}_{\mathrm{fin}}}$ is given by the
distance $d_{\ensuremath{\mathcal{S}^{\downarrow}_{\mathrm{fin}}}}(\mathbf s,\mathbf
{s'})=\sum_{i=1}^{\infty}|s_i-s_i'|.$ Now, let $O_x^K$ be the
restriction of $O_x$ to the open set $(-K,K)$, for $x \in
(0,\infty]$. Let $|O^K_x|^{\downarrow}$ be the corresponding ranked
sequence of lengths of interval components. By the Skorokhod
representation theorem, we may suppose that the convergence stated in
Theorem \ref{thm:heightprocessconvdbis} holds almost surely. From
this, Lemma \ref{lemmaconditions} and Proposition
\ref{prop:fnconvimpliessetconv}, we have that $O_x^K$ converges to
$O_{\infty}^K$, in the sense that their complements in $[-K,K]$ converge in the Hausdorff distance.
Moreover,
the total length of $O_x^K$ converges to that of $O_{\infty}^K$. We
deduce that $|O^K_x|^{\downarrow}$ converges to
$|O_{\infty}^K|^{\downarrow}$ in the \emph{pointwise} distance on
$\ensuremath{\mathcal{S}^{\downarrow}_{\mathrm{fin}}}$ (see Proposition 2.2 of Bertoin~\cite{BertoinBook}). But
since we also have convergence of the total length of the open sets,
the convergence actually holds in the $d_{\ensuremath{\mathcal{S}^{\downarrow}_{\mathrm{fin}}}}$ distance.
Now, let $f: \ensuremath{\mathcal{S}^{\downarrow}_{\mathrm{fin}}} \rightarrow \mathbb R$ be any continuous function
such that $\sup_{\mathbf s \in \ensuremath{\mathcal{S}^{\downarrow}_{\mathrm{fin}}}}|f(\mathbf s)| \leq 1$. Since
$O_x=O^K_x$ on $E_x^{\mathrm{c}}$, we have
\begin{align*}
\Bigg|\E{f(|O_x|^{\downarrow})}-\E{f(|O_{\infty}|^{\downarrow})} \Bigg|
& \leq \Bigg|\E{f(|O^K_x|^{\downarrow})
\mathbbm{1}_{E_x^{\mathrm{c}}}}-\E{f(|O^K_{\infty}|^{\downarrow})
\mathbbm{1}_{E_{\infty}^{\mathrm{c}}}} \Bigg|
+ \Prob{E_x} + \Prob{E_{\infty}} \\
& \leq \Bigg|\E{f(|O^K_x|^{\downarrow})}-
\E{f(|O^K_{\infty}|^{\downarrow})} \Bigg| + 2\Prob{E_x} + 2
\Prob{E_{\infty}}.
\end{align*}
Using the fact that $|O^K_x|^{\downarrow}$ converges in distribution
to $|O^K_{\infty}|^{\downarrow}$, we get that for all $x$ sufficiently
large, $\left|\E{f(|O_x|^{\downarrow})}-
\E{f(|O_{\infty}|^{\downarrow})}\right| \leq \varepsilon$. The result
follows.
\end{proof}
\section{Behavior of the last fragment}
\label{seclastfrag}
In this section, we prove the results stated in Corollary
\ref{lastfrag} and the remark which follows it.
First, it is implicit in the proofs in the previous sections that the
distribution of the length of the interval component of $\{y\in
\mathbb R :H_{x}(y)<1\}$ that contains $0$ (i.e.\ the distribution of
$xF_{*}((\zeta-x^{\alpha})^+)$) converges as $x \rightarrow \infty$ to
the distribution of the length of the interval component of $\{y \in
\ensuremath{\mathbb{R}} :H_{\infty}(y)<1\}$ that contains $0$. By construction of the
function $H_{\infty}$ (see Section~\ref{sec:Williams}), this interval
component is distributed as $\eta(1)$. Indeed, almost surely $1$ is
not one of the $t_i$'s and therefore $H_{\infty}(y)<1$ for every $y
\in (-\eta^-(1), \eta^+(1))$. Moreover, it is easy to see that
$H_{\infty}(\eta^+(1))=H_{\infty}(\eta^-(1))=1$. In other words, we
have that
\[
t \left(F_{\ast}\left( (\zeta -t)^{+}\right)\right)^{\alpha}
\convlaw (\eta(1))^{\alpha} \text{ as $t \rightarrow 0$}.
\]
Recall then that $(\eta(1))^{\alpha}$ is distributed as
$\sigma^{\alpha}$ under $\mathbb N(\cdot | H_{\max}=1)$ which, by
Lemma~\ref{lem:sigmaH_max}, is easily seen to be distributed as the
$(-1/\alpha-1)$-size-biased version of $\zeta$ defined in
(\ref{eqn:biasedzeta}).
We now turn to the bounds given in (i) and (ii), Corollary
\ref{lastfrag}, for the tails of this size-biased version of $\zeta$.
(i) When $t \rightarrow \infty$, the given bounds are easy
consequences of Proposition 14 of \cite{HaasLossMass} on the
asymptotic behavior of $\Prob{\zeta>t}$ as $t \rightarrow
\infty$. Indeed, according to that result, there exist $A,B>0$
such that, for all $t$ large enough,
\[
\exp(-B\Psi(t)) \leq \Prob{\zeta \geq t} \leq \exp(-A\Psi(t)),
\]
where $\Psi$ is the inverse of the bijection $t\in[1,\infty)
\rightarrow t/\phi(t) \in [1/\phi(1),\infty)$ and $\phi$ is defined at
(\ref{eqn:Laplacetagged}). Miermont \cite[Section 3.2]{Miermont}
shows that in the case of the stable fragmentation,
\[
\phi(t)=(1+\alpha)^{-1}\frac{\Gamma(t-\alpha)}{\Gamma(t)}.
\]
Now let $c$ be a positive constant that may vary from line to
line. Using Stirling's formula, $\Gamma(z)\sim \sqrt{2\pi}\,
z^{z-1/2}\exp(-z)$ as $z \to \infty$, we get that $\phi(t)\sim
ct^{-\alpha}$ as $t \rightarrow \infty$. So $\Psi(t)\sim
ct^{1/(1+\alpha)}$ as $t \rightarrow \infty$. We just need to convert the
statements about the tails of $\zeta$ into statements about the tails
of $\zeta_{*^{\alpha}}$. On the one hand, note that
\[
\Prob{\zeta_{*^{\alpha}} \geq t}
= \frac{\E{\zeta^{-1/\alpha-1}\mathbbm{1}_{\{\zeta \geq t\}}}}{\E{\zeta^{-1/\alpha-1}}}
\geq \frac{t^{-1/\alpha-1} \Prob{\zeta \geq t} }{\E{\zeta^{-1/\alpha-1}}}
\geq \frac{\exp(-ct^{1/(1+\alpha)})}{\E{\zeta^{-1/\alpha-1}}} ,
\]
for $t$ sufficiently large. On the other hand, using the Cauchy-Schwarz
inequality and the fact that $\zeta$ has positive moments of all
orders, we easily get that
\[
\Prob{\zeta_{*^{\alpha}} \geq t}
= \frac{\E{\zeta^{-1/\alpha-1}\mathbbm{1}_{\{\zeta \geq t\}}}}{\E{\zeta^{-1/\alpha-1}}}
\leq \frac{c \ \Prob{\zeta \geq t}^{1/2}}{\E{\zeta^{-1/\alpha-1}}} ,
\]
for all $t \geq 0$. The result follows immediately.
(ii) The second assertion is just as straightforward to prove. First,
\begin{equation} \label{eqn:zetastar}
\Prob{\zeta_{*^{\alpha}} \leq t}
= \frac{\E{\zeta^{-1/\alpha-1}\mathbbm{1}_{\{\zeta \leq t\}}}}{\E{\zeta^{-1/\alpha-1}}}
\leq \frac{t^{-1/\alpha-1} \Prob{\zeta \leq t}}{\E{\zeta^{-1/\alpha-1}}} .
\end{equation}
In Section 4.2.1 of \cite{HaasLossMass} it is proved that for all
$q<1+\underline{p}/(-\alpha)$, $\Prob{\zeta \leq t} \leq t^{q}$ for
small $t$, where
\[
\underline{p}=\sup\left\{q \geq 0: \int_{(1,\infty)} \exp(qx)
L(\mathrm d x)<\infty \right\}.
\]
Here, $L$ is the L\'evy measure of the subordinator $\xi$ with Laplace
exponent $\phi$ (see the beginning of Section~\ref{sec:proof} for the
definition). Using Miermont's results \cite[Section 3.2]{Miermont}
again, the measure $L$ is proportional to
$\exp(x)(\exp(x)-1)^{\alpha-1}\mathbbm{1}_{\{ x>0\}}\mathrm d x$. It
follows that $\underline{p}=-\alpha$. Combining this with
(\ref{eqn:zetastar}) gives the desired result.
We finish this section with the proof of equation (\ref{eqPhi}), which
is based on the fact that $\Phi(\lambda):= \E{\exp{(-\lambda
\eta(1))}}=\E{\exp{(-
\eta(\lambda^{-\alpha}))}}$, $\lambda \geq 0$. We recall that it is assumed that $\alpha \in (-1/2,0)$, i.e. $\beta \in (1,2)$. Using the Poisson construction of
$\eta(1)$, the expression for the Laplace transform of a subordinator,
the self-similarity of the process $\eta$ and Campbell's formula for
Poisson point processes, we obtain
\begin{align*}
\Phi(\lambda)
& = \E{\E{\exp \left(-\sum_{i: t_i
\leq \lambda^{-\alpha}} \left(\tau_+^{(i)}((1-v_i)r_i)+\tau_-^{(i)}(v_ir_i)\right)
\right) \Bigg| v_i,r_i,t_i,i \geq 1}} \\
&= \E{\exp \left(-\sum_{i: t_i \leq \lambda^{-\alpha}} r_i
\int_0^{\infty}(1-e^{- u}) \mathbb N[\sigma \in \mathrm d
u, H_{\mathrm{max}}\leq t_i]\right)} \\
&= \E{\exp \left(-(\beta-1)^{\beta/(\beta-1)} \sum_{i: t_i \leq \lambda^{-\alpha}} r_i
\int_0^{t_i} \int_0^{\infty}(1-e^{- u}) \mathbb N[\sigma
\in \mathrm d u | H_{\mathrm{max}}=v] v^{\beta/(1-\beta)} \mathrm
d v \right) } \\
&= \E{\exp \left(-(\beta-1)^{\beta/(\beta-1)} \sum_{i: t_i \leq \lambda^{-\alpha}} r_i
\int_0^{t_i} \E{1-e^{- \eta(v)}}
v^{\beta/(1-\beta)} \mathrm d v \right)} \\
&= \E{\exp \left(-(\beta-1)^{\beta/(\beta-1)} \sum_{i: t_i \leq \lambda^{-\alpha}} r_i
\int_0^{t_i} (1-\Phi ( v^{-1/\alpha}))
v^{\beta/(1-\beta)} \mathrm d v \right)} \\
&= \exp \left(\!\! - \int_{\mathbb R^+ \times [0,\lambda^{-\alpha}]} \! \!
(1-e^{-(\beta-1)^{\beta/(\beta-1)}r
\int_0^t (1-\Phi( v^{-1/ \alpha})) v^{\beta/(1-\beta)}
\mathrm d v}) \tfrac{\beta(\beta-1)}{\Gamma(2-\beta)}
e^{-c_{\beta}rt^{1/(1-\beta)}} r^{-\beta} \mathrm d r \mathrm dt \right),
\end{align*}
where $c_{\beta}=(\beta-1)^{1/(1-\beta)}$. Substituting $\beta = (1+
\alpha)^{-1}$, we obtain the desired expression.
\section{Almost sure logarithmic asymptotics}
\label{SectionLog}
The following result describes the almost sure logarithmic behavior
near the extinction time of the largest fragment and the last fragment
processes. It is valid for general self-similar fragmentations with
parameters $\alpha<0$, $c=0$ and $\nu(\sum_{i=1}^{\infty} s_i<1)=0.$ We
recall that for such fragmentations, the extinction time $\zeta$
possesses exponential moments (see \cite[Prop. 14]{HaasLossMass}).
\begin{theorem}\label{thm:largestfrag}
(i) Suppose there exists $\eta>0$ such that
$\int_{\ensuremath{\mathcal{S}^{\downarrow}_1}}s_1^{-\eta}\mathbf1_{\{s_1< 1/2 \}}\nu(\mathrm d \mathbf
s)<\infty$. Then,
\begin{equation*}
\frac{\log (F_{1}((\zeta -t)^{+}))}{\log(t)}
\overset{\mathrm{a.s.}}{\rightarrow } - 1 / \alpha \text{ as $t
\rightarrow 0$.}
\end{equation*}
\noindent (ii) If, moreover, $\alpha \geq -\gamma_{\nu}$, where $\gamma_{\nu} := \inf
\{\gamma>0 :\lim_{\varepsilon \rightarrow 0} \varepsilon^{\gamma} \nu
(s_{1}<1-\varepsilon ) = 0\}$, then the last fragment process $F_*$ is
well-defined (i.e. the fragmentation $F$ can be encoded into an interval fragmentation for which there exists a unique point $x \in (0,1)$ which is
reduced to dust at time $\zeta$) and
\begin{equation*}
\frac{\log (F_{\ast}((\zeta -t)^{+}))}{\log
(t)}\overset{\mathrm{a.s.}}{\rightarrow }-1/\alpha \text{ as $t
\to 0$.}
\end{equation*}
\end{theorem}
\begin{corollary}
\label{corologstable}
For the stable fragmentation with index $-1/2 \leq \alpha < 0$ (or,
equivalently, $1<\beta \leq 2$), with probability $1$,
\begin{equation*}
\lim_{t \rightarrow 0}\frac{\log (F_{1}((\zeta -t)^{+}))}{\log(t)}
=\lim_{t \rightarrow 0} \frac{\log (F_{\ast}((\zeta -t)^{+}))}{\log
(t)}= -\frac{1}{\alpha} = \frac{\beta}{\beta-1}.
\end{equation*}
\end{corollary}
\begin{proof}[Proof of Theorem \ref{thm:largestfrag}] (i) We first
prove this result in the case where there exists some real number
$a>0$ such that $\nu(s_1<a)=0.$ We will then explain how to adapt it
to the more general assumption $\int_{\ensuremath{\mathcal{S}^{\downarrow}_1}}s_1^{-\eta}\mathbf
1_{\{s_1< 1/2\}}\nu(\mathrm d \mathbf s)<\infty$. By the first
Borel-Cantelli lemma, it is sufficient to show that, for any
$\varepsilon > 0$,
\begin{align}
\sum_{i=1}^{\infty} \Prob{ F_1((\zeta - e^{-i})^{+}) >
\exp((\alpha^{-1} + \varepsilon) i )}
& < \infty \label{eqn:BC1} \\
\sum_{i=1}^{\infty} \Prob{ F_1((\zeta - e^{-i})^{+}) \leq
\exp((\alpha^{-1} - \varepsilon) i) } & < \infty. \label{eqn:BC2}
\end{align}
(Note that if $i^{-1}\log (F_{1}((\zeta -e^{-i})^{+}))$ converges
almost surely to $1/\alpha$ as $i \to \infty$, then almost surely for all
sequences $(t_n,n\geq 0)$ converging to 0, $\log (F_{1}((\zeta
-t_n)^{+})) / \log(t_n)$ converges to $-1/\alpha$, since
\[
\frac{\log (F_1((\zeta - e^{-i_n})^{+}))}{-(i_{n}+1)} \leq \frac{\log (F_1((\zeta -
t_n)^{+}))}{\log (t_n)} \leq \frac{\log (F_1((\zeta - e^{-(i_n+1)})^{+}))}{-i_n},
\]
where $i_n = \fl{-\log(t_n)}$).
Let $\mathcal{F}_t = \sigma(F(s): s \leq t)$ and suppose that $T$ is a
$(\mathcal{F}_t)_{t \geq 0}$-stopping time such that $T < \zeta$
a.s. According to \cite{BertoinSSF}, the branching and self-similarity
properties of $F$ hold for $(\mathcal{F}_t)_{t \geq 0}$-stopping
times, hence
\[
\zeta - T = \sup_{j \geq 1} \left\{
F_j(T)^{-\alpha} \zeta_j \right\},
\]
where $(\zeta_j, j \geq 1)$ are independent copies of $\zeta$,
independent of $F(T)$. Let
\begin{align*}
T_1 & = \inf\{t \geq 0: F_1(t) \leq \exp((\alpha^{-1} + \varepsilon)i)\} \\
T_2 & = \inf\{t \geq 0: F_1(t) \leq \exp((\alpha^{-1} -
\varepsilon)i)\}.
\end{align*}
We start by proving (\ref{eqn:BC1}). We have
\begin{align*}
\Prob{F_1((\zeta - e^{-i})^+) > \exp((\alpha^{-1} + \varepsilon)i)}
& = \Prob{T_1 > \zeta - e^{-i}} \\
& = \Prob{\sup_{j \geq 1} F_j(T_1)^{-\alpha} \zeta_j < e^{-i}} \\
& \leq \Prob{F_1(T_1)^{-\alpha} \zeta_1 < e^{-i}}.\end{align*}
Since we have assumed that there exists $a>0$ such that
$\nu(s_1<a)=0$, we have $F_1(T_1) \geq aF_1(T_1-) \geq a
\exp((\alpha^{-1} + \varepsilon)i)$ a.s., and so
\begin{align}
\label{key}
\Prob{F_1(T_1)^{-\alpha} \zeta_1 < e^{-i}} & \leq
\Prob{a^{-\alpha}
\exp(-\alpha(\alpha^{-1} + \varepsilon)i) \zeta <e^{-i}} \\
\nonumber & = \Prob{\zeta < a^{\alpha} e^{\alpha \varepsilon i}}.
\end{align}
Let $(\lambda(t))_{t \geq 0}$ be the size of a tagged fragment as
defined at the beginning of Section \ref{sec:proof}, and let $\xi$ be
the related subordinator. Then consider the first time at which
$\lambda$ reaches $0$, i.e.
\[
\sigma = \inf\{t \geq 0 : \lambda(t) = 0\}=\int_0^{\infty}
\exp(\alpha\xi(r))\mathrm d r.
\]
Of course, $\sigma \leq \zeta$. Moreover, by Proposition 3.1 of
Carmona, Petit and Yor~\cite{Carmona/Petit/Yor}, $\sigma$ has moments
of all orders strictly greater than $-1$; this implies, in particular, that $\mathbb
E[\zeta^{-\gamma}]<\infty$ for $0<\gamma<1$. So, by Markov's inequality, we have that
\[
\Prob{F_1((\zeta - e^{-i})^+) > \exp((\alpha^{-1} + \varepsilon)i)} \leq
a^{\alpha / 2} e^{\alpha \varepsilon i / 2} \E{\zeta^{-1/2}},
\]
which is summable in $i$.
Now turn to (\ref{eqn:BC2}). Arguing as before, we have
\begin{align*}
\Prob{F_1((\zeta - e^{-i})^+) \leq \exp((\alpha^{-1} - \varepsilon)i)}
& = \Prob{T_2 \leq \zeta - e^{-i}} \\
& = \Prob{\sup_{j \geq 1} F_j(T_2)^{-\alpha} \zeta_j \geq e^{-i}}.
\end{align*}
Take $q > -1/\alpha$. Then, since $\zeta_1, \zeta_2, \ldots$ are
independent and identically distributed and independent of
$F(T_2)$,
\begin{align*}
\Prob{\sup_{j \geq 1} F_j(T_2)^{-\alpha} \zeta_j \geq e^{-i}}
& \leq \sum_{j \geq 1} \Prob{F_j(T_2)^{-\alpha} \zeta_j \geq e^{-i}/2} \\
& \leq 2^q\E{\zeta^q} e^{iq} \E{ \sum_{j \geq 1} F_j(T_2)^{-\alpha
q}}.
\end{align*}
The expectation $\E{\zeta^q}$ is finite and, since $-\alpha q - 1
> 0$, we have
\[
\sum_{j \geq 1} F_j(T_2)^{-\alpha q} \leq F_1(T_2)^{-\alpha q - 1}
\sum_{j \geq 1} F_j(T_2) \leq F_1(T_2)^{-\alpha q - 1}.
\]
But then
\[
\Prob{F_1((\zeta - e^{-i})^+) \leq \exp((\alpha^{-1} - \varepsilon)i)} \leq
2^q\E{\zeta^q} \exp(i(\varepsilon \alpha q - \alpha^{-1} + \varepsilon)),
\]
which is summable in $i$ for large enough $q$.
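Indeed, since $\varepsilon \alpha < 0$, the coefficient of $i$ in the exponent is
negative as soon as
\[
q > \frac{\alpha^{-1} - \varepsilon}{\varepsilon \alpha}
= \frac{1}{\varepsilon \alpha^{2}} - \frac{1}{\alpha},
\]
and any such $q$ also satisfies the requirement $q > -1/\alpha$.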
It remains to adapt this proof to the more general case where
$\int_{\ensuremath{\mathcal{S}^{\downarrow}_1}}s_1^{-\eta}\mathbf 1_{\{s_1<
1/2\}}\nu(\mathrm d \mathbf s)<\infty$ for some $\eta>0$. The key
inequality (\ref{key}) is then no longer valid and we have to check
that $\Prob{F_1(T_1)^{-\alpha} \zeta_1 < e^{-i}}$ is still summable in
$i$. The rest of the proof remains unchanged. So, denote by
$\Delta(T_1)$ the size of the ``multiplicative" jump of $F_1$ at time
$T_1$ (i.e.\ $\Delta(T_1) := F_1(T_1)/ F_1(T_1-)$) and recall that
$\zeta_1$ denotes a random variable with the same distribution as
$\zeta$ and independent of $F_1(T_1)$. Note that we may, and will,
suppose that $\zeta_1$ is independent of the whole fragmentation $F$.
Then,
\begin{align}
\nonumber
&\mathbb P\left(F_1^{-\alpha}(T_1)\zeta_1<e^{-i}\right)\\
\nonumber
& = \mathbb
P\left(F_1^{-\alpha}(T_1-)(\Delta(T_1))^{-\alpha}\zeta_1<e^{-i}\right)
\\
\nonumber
&\leq \mathbb P\left(e^{-\alpha(\alpha^{-1}+\varepsilon)i}
(\Delta(T_1))^{-\alpha}\zeta_1<e^{-i}\right) \\
\label{comp}
&=\mathbb P\left((\Delta(T_1))^{-\alpha}\zeta_1<e^{\alpha \varepsilon
i},\Delta(T_1)<1/2\right)
+\mathbb P\left(( \Delta(T_1))^{-\alpha}\zeta_1<e^{\alpha \varepsilon
i},\Delta(T_1)\geq1/2\right).
\end{align}
The second term in this last line is bounded from above, for all $\gamma>0$, by
\[
e^{\gamma \alpha \varepsilon i} \mathbb E[\zeta^{-\gamma}]2^{-\alpha\gamma},
\]
which is finite and summable in $i$ if we take $0<\gamma<1$. To bound
the first term in (\ref{comp}), introduce $\mathcal
D_1$, the set of jump times of $F_1$. For $t \in \mathcal D_1$, let
$\mathbf{s}(t)$ be the relative mass frequencies obtained by the
dislocation of $F_1(t-)$, and let $\Delta(t):=F_1(t)/F_1(t-)$. Since the largest fragment coming from $F_1$ at the time of a split may not be the largest block overall after the split,
$s_1(t)\leq \Delta(t)$. Then,
\begin{align*}
&\mathbb P\left((\Delta(T_1))^{-\alpha}\zeta_1<e^{\alpha \varepsilon i},\Delta(T_1)<1/2\right) \\
&=\mathbb E\left[\sum_{t \in \mathcal D_1}
\I{(\Delta(t))^{-\alpha}\zeta_1<e^{\alpha \varepsilon
i},\Delta(t)<1/2} \I{F_1(t-) \geq
e^{(\alpha^{-1}+\varepsilon)i}, F_1(t)\leq
e^{(\alpha^{-1}+\varepsilon)i} }\right] \\
&\leq e^{\gamma \alpha \varepsilon i}\mathbb E\left[\sum_{t \in \mathcal
D_1}(s_1(t))^{\alpha\gamma} \I{s_1(t)<1/2} \I{F_1(t-) \geq
e^{(\alpha^{-1}+\varepsilon)i} }\right] \mathbb E[\zeta_1^{-\gamma}],
\end{align*}
for all $\gamma>0$. The expectation $\mathbb E[\zeta_1^{-\gamma}] $
is finite when $\gamma<1$, which we assume for the rest of the
proof. Now, the process $(\Sigma(u),u \geq 0)$ defined
by
\[
\Sigma(u)=\sum_{t \in \mathcal D_1, t \leq u}(s_1(t))^{\alpha \gamma}
\I{s_1(t)<1/2} \I{F_1(t-) \geq e^{(\alpha^{-1}+\varepsilon)i}}
\]
is increasing, c\`adl\`ag, and adapted to the filtration $(\mathcal
F_t,t \geq 0)$ (and hence optional). So, according to the Doob-Meyer
decomposition \cite[Section VI]{Rogers/Williams2}, \cite[Theorem 1.8,
Chapter 2]{Jacod/Shiryaev}, it possesses an increasing predictable
compensator $(A(u),u \geq 0)$ such that $\mathbb
E[\Sigma(\infty)]=\mathbb E[A(\infty)]$. Moreover, since the
fragmentation process is a pure jump process where each block of size
$m$ splits at rate $m^{\alpha}\nu(\mathrm d \mathbf s)$ into blocks of
lengths $(ms_1,ms_2,...)$, this compensator is given by
\[
A(u) = \int_0^{u} F_1(t)^{\alpha}\I{F_1(t) \geq
e^{(\alpha^{-1}+\varepsilon)i} } \mathrm dt \int_{\ensuremath{\mathcal{S}^{\downarrow}_1}}s_1^{\alpha
\gamma} \I{s_1<1/2} \nu (\mathrm d \mathbf s), \quad u \geq 0.
\]
The second integral in this product is finite for small enough
$\gamma>0$, by assumption. The first is clearly smaller than
$e^{-\delta(\alpha^{-1}+\varepsilon)i} \int_0^{u} F_1(t)^{\alpha+\delta}
\mathbf 1_{\{F_1(t)>0\}}\mathrm dt $, for all $\delta \geq 0$. This
leads us to
\begin{equation}
\label{eqmaj}
\mathbb P\left((\Delta(T_1))^{-\alpha}\zeta_1<e^{\alpha \varepsilon
i},\Delta(T_1)<1/2\right) \leq C_{\gamma}e^{(\gamma \alpha \varepsilon
-\delta(\alpha^{-1}+\varepsilon)) i} \mathbb E\left [\int_0^{\zeta}
F_1(r)^{\alpha+\delta} \mathrm dr\right],
\end{equation}
where $C_{\gamma}$ is a constant depending only on $\gamma>0$ and
which is finite provided $\gamma$ is sufficiently small. To finish,
we claim that for all $0<\delta<-\alpha$, $\mathbb E[\int_0^{\zeta}
F_1(r)^{\alpha+\delta} \mathrm dr]<\infty$ (note this finiteness is
obvious for $\delta \geq -\alpha$, since $\mathbb
E[\zeta]<\infty$). Indeed, from Bertoin \cite{BertoinSSF}, we know
that the $\alpha$-self-similar fragmentation process $F$ (and its
interval counterpart) can be transformed through (somewhat
complicated) time-changes into a $(-\delta)$-self-similar
fragmentation process with same dislocation measure. We refer to
Bertoin's paper for details. In particular, if $|O_x(t)|$ denotes the
length of the fragment containing $x \in (0,1)$ at time $t$ in the
interval $\alpha$-self-similar fragmentation, and if $\zeta_x$ denotes
the first time at which this length reaches 0, we have
\[
\int_0^{\zeta_x} |O_x(r)|^{\alpha+\delta}\mathrm
dr=\zeta_x^{(\delta)} \leq \zeta^{(\delta)},
\]
where $\zeta_x^{(\delta)}$ is the time at which the point $x$ is
reduced to dust in the fragmentation with parameter $-\delta$ and
$\zeta^{(\delta)}:=\sup_x \zeta_x^{(\delta)}$ is the time at which
the whole fragmentation with parameter $-\delta$ is reduced to
dust. Now, since $\alpha+\delta<0$, we have $(F_1(r))^{\alpha+\delta}
\leq |O_x(r)|^{\alpha+\delta}$ for all $x \in (0,1)$, $r \geq 0$ and,
therefore,
\[
\mathbb E \left [\int_0^{\zeta} F_1(r)^{\alpha+\delta} \mathrm dr
\right]=\mathbb E \left [\sup_x \int_0^{\zeta_x}
F_1(r)^{\alpha+\delta} \mathrm dr \right] \leq \mathbb E \left
[\sup_x \int_0^{\zeta_x} |O_x(r)|^{\alpha+\delta}\mathrm dr \right]
\leq \mathbb E [ \zeta^{(\delta)}]<\infty.
\]
Hence, if we choose $\delta$ small enough that $\gamma \alpha
\varepsilon-\delta(\alpha^{-1}+\varepsilon)<0$, we get from (\ref{eqmaj})
that $\mathbb P((\Delta(T_1))^{-\alpha}\zeta_1<e^{\alpha \varepsilon
i},\Delta(T_1)<1/2)$ is summable in $i$.
\smallskip
(ii) The additional assumption on $\nu $ implies that it is infinite,
hence we know (see Haas and Miermont~\cite{Haas/Miermont}) that the
fragmentation can be encoded into a continuous function $G$ which is,
moreover, $\gamma$-H\"{o}lder, for all $\gamma <(-\alpha) \wedge
\gamma_{\nu}$ ($= -\alpha$ here). In particular, the maximum, $\zeta$,
of $G$ is attained for some $x \in (0,1)$. More precisely, we claim it
is attained at a \textit{unique} point, which is denoted by
$x_*$. See the end of the proof for an explanation of this
uniqueness. It implies, in particular, that the last fragment process
is well-defined: for each $t < \zeta$, we denote by $O_*(t)$ the
interval component of $\{x \in (0,1) : G(x)>t\}$ which contains
$x_*$, and by $F_*(t)$ the length of this interval. For $t<\zeta $,
let
\begin{align*}
x_{t}^{-} & = \sup\{x \leq x_*: G(x) \leq \zeta - t\} \\
x_{t}^{+} & = \inf\{x \geq x_*: G(x) \leq \zeta - t\},
\end{align*}
so that $O_{\ast }(\zeta -t)= (x_{t}^{-},x_{t}^{+})$. Then, for all
$0\leq \gamma <-\alpha $, there exists some constant $C$ such that
\begin{align*}
t & = G(x_*)-G(x_{t}^{-})\leq C(x_*-x_{t}^{-})^{\gamma} \\
t & = G(x_*)-G(x_{t}^{+})\leq C(x_{t}^{+}-x_*)^{\gamma},
\end{align*}
and, consequently, $F_{\ast}(\zeta -t)=x_{t}^{+}-x_{t}^{-}\geq
2(t/C)^{1/\gamma}$. This implies that
\begin{equation*}
\limsup_{t\rightarrow 0}\left( \frac{\log (F_{\ast}((\zeta
-t)^{+}))}{\log (t)} \right) \leq -1/\alpha .
\end{equation*}
For the liminf, use part (i) and the fact that $F_*(t) \leq F_1(t)$
for all $t \geq 0$.
Finally, we have to prove that there is a unique $x \in (0,1)$ such that
$G(x)=\zeta$. Note that
\begin{align*}
&\mathbb{P}\left( \exists t<\zeta :\text{at least two fragments present at
time }t\text{ die at }\zeta \right) \\
&=\mathbb{P}\left( \exists t\in \mathbb{Q}, t < \zeta :
\text{at least two fragments present at time }t\text{ die at }\zeta \right)
\end{align*}
and this latter probability is equal to $0$ if, for all $t\in
\mathbb{Q}$,
\begin{align*}
&\mathbb{P}\left( \text{at least two fragments present at time }t\text{ die
at }\zeta, t <\zeta\right) \\
&=\mathbb{P}\left( \exists i \neq j:F_{i}^{-\alpha }(t)\zeta_i
=F_{j}^{-\alpha }(t)\zeta_j \text{ and }F_{i}(t)\neq 0\right)=0,
\end{align*}
where $\left( \zeta_i,\zeta_j \right) $ are independent and
distributed as $\zeta$, independently of $F(t)$. Clearly, this is
satisfied if the distribution of $\zeta $ has no atoms. Now recall
that we are in the case where $\nu $ is infinite and suppose that
there exists $t>0$ such that $\mathbb{P}(\zeta=t)>0$. Recall
also that, conditional on $u<\zeta$, $\zeta=u+\sup_{i \geq
1}F_i(u)^{-\alpha} \zeta_i $ where $(\zeta_i, i \geq 1)$ are
independent copies of $\zeta$, independent of $F(u)$. Moreover, the
supremum is actually a maximum, since we know there exists $x \in
(0,1)$ such that $G(x)=\zeta$. Then for all $0 < u < t$,
\begin{align*}
&\mathbb{P}\left( \exists i:F_{i}^{-\alpha }(u)\zeta_i =t-u\right) >0 \\
&\Leftrightarrow \exists i:\mathbb{P}\left( F_{i}^{-\alpha }(u)\zeta
=t-u\right) >0\text{ (with }\zeta \text{ independent of }F(u)\text{)} \\
&\Leftrightarrow \mathbb{P}\left( \lambda ^{-\alpha }(u)\zeta =t-u\right) >0,
\end{align*}
where $\lambda $ denotes the tagged fragment process.
Recall that $\lambda(u) = \exp(-\xi(\rho(u)))$, where $\xi$ is a subordinator with Laplace exponent given by (\ref{eqn:Laplacetagged}). Now, for any $b > 0$,
\[
\Prob{\xi(\rho(u)) = b} \leq \Prob{\exists v \geq 0: \xi(v) = b}.
\]
But we know from Kesten's theorem (Proposition 1.9 in
Bertoin~\cite{BertoinStFl}) that the right-hand side is 0 because the
L\'evy measure of the subordinator $\xi$ is infinite and it has no
drift.
Hence, $\mathbb{P}\left( \lambda ^{-\alpha }(u)\zeta
=t-u\right) =0$ for all $0<u<t$, and we can deduce the claimed
uniqueness.
\end{proof}
\begin{proof}[Proof of Corollary \ref{corologstable}.] It has been
proved in Haas and Miermont \cite[Section 3.5]{Haas/Miermont} that the
dislocation measure $\nu$ of any stable fragmentation satisfies
\[
\int_{\ensuremath{\mathcal{S}^{\downarrow}_1}} s_1^{-1}\I{s_1<1/2} \nu (\mathrm d \mathbf s)<\infty.
\]
(Note that this is obvious in the Brownian case since the
fragmentation is binary and so $\nu(s_1<1/2)=0$.) From \cite[Section
4.4]{Haas/Miermont}, we know that the parameter $\gamma_\nu$ (defined
in Theorem \ref{thm:largestfrag} (ii)) associated with the
dislocation measure $\nu$ of the stable fragmentation with index $\alpha$ is given
by $\gamma_{\nu}=-\alpha$. Hence, both assumptions of Theorem
\ref{thm:largestfrag} (i) and (ii) are satisfied.
\end{proof}
\noindent \textbf{Acknowledgments.} We are grateful to Jean Bertoin
and Gr\'egory Miermont for helpful discussions.
C.~G.\ would like to
thank the SPINADA project for funding which facilitated this
collaboration, and Pembroke College, Cambridge for its support during
part of the project. C.~G.\ was funded by EPSRC Postdoctoral
Fellowship EP/D065755/1.
\end{document}
\begin{document}
\maketitle
\begin{abstract}
This is the third paper in a series \cite{DGHH2013,
Gell-Redman-Hassell-Zelditch} analyzing the asymptotic distribution
of the phase shifts in the semiclassical limit.
We analyze the distribution of phase shifts, or equivalently,
eigenvalues of the scattering matrix, $S_h(E)$, for semiclassical
Schr\"odinger operators on $\mathbb{R}^d$ which are perturbations of
the free Hamiltonian by a potential $V$ with polynomial decay.
Our assumption is that $V(x) \sim |x|^{-\alpha}
v(\hat x)$ as $x \to \infty$, for some $\alpha > d$, with corresponding derivative estimates.
In the semiclassical limit $h \to 0$, we show that the atomic measure on the
unit circle defined by these eigenvalues,
after suitable scaling in $h$, tends to a measure $\mu$ on $\mathbb{S}^1$.
Moreover, $\mu$ is the pushforward from
$\mathbb{R}$ to $\mathbb{R} / 2 \partiali
\mathbb{Z} = \mathbb{S}^1$ of a homogeneous distribution $\nu$ of
order $\beta$ depending on the dimension $d$ and the rate of decay $\alpha$ of the
potential function. As a corollary we obtain an asymptotic formula
for the accumulation of phase shifts in a sector of $\mathbb{S}^1$.
The proof relies on an extension of results in \cite{HW2008} on the
classical Hamiltonian dynamics and semiclassical Poisson operator
to the class of potentials under consideration here.
\end{abstract}
\section{Introduction}
Consider a semiclassical Schr\"odinger operator
$$
H_h := h^2 \Delta + V - E
$$
on $\mathbb{R}^d$, where $\Delta = - \sum_{i = 1}^d \partial_{x_i}^2$ is the
positive Laplacian, $E$ is a real constant and $V \colon \mathbb{R}^d \lra \mathbb{R}$ is a
smooth real-valued function satisfying
\begin{equation}
\label{eq:potential-asymptotics}
V(x) = \frac{v_0(\hat x)}{|x|^\alpha} + W(x), \quad x \in \RR^d, \quad \hat x = \frac{x}{|x|} , \quad \mbox{ where }
\big| W(x) \big| = O(|x|^{-(\alpha + \epsilon)}) ,
\end{equation}
for $|x|$ large, some $\alpha > 1$ and some $\epsilon > 0$. (For our
main theorem, we will require $\alpha > d$, and $W$ to satisfy `symbolic' derivative estimates as in \eqref{W-deriv-est}, but
for some of our intermediate results $\alpha > 1$ and \eqref{eq:potential-asymptotics} will be sufficient.)
Under these assumptions, the (relative) scattering matrix $S_h$
exists and is a unitary operator on $L^2(\mathbb{S}^{d - 1})$; $S_h$ is given on $\partialhi \in C^\infty(\mathbb{S}^{d
-1 })$ as $S_h \partialhi = e^{i\partiali(d -1)/2} \partialsi$ where $\partialsi$ is the
unique function such that there is a
solution $u_\partialhi$ to $H_h u_\partialhi = 0$ satisfying
\begin{equation}
u_\partialhi = r^{-(d - 1)/2} (e^{- i \sqrt{E} r/h} \partialhi(\omega) + e^{i
\sqrt{E} r/h}\partialsi(-\omega) ) + o(r^{-(d - 1)/2}), \quad r = |x|. \label{eq:generalized-eigenfunction}
\end{equation}
The difference $S_h - \Id$ is a compact operator on $L^2(\mathbb{S}^{d - 1})$, and thus
the spectrum of $S_h$ lies on the unit circle, is discrete, and accumulates
only at $1$. Setting $\gamma = (d - 1)/(\alpha - 1)$, we define the
(infinite) atomic measure $\mu_h$ on the circle
which acts on $f \in C^0_{comp}(\mathbb{S}^1 \setminus 1)$ by
\begin{equation}
\label{eq:measure-definition}
\la \mu_h, f \ra = h^{\gamma \alpha} \sum_{e^{2i \beta_{n, h} } \in \spec(S_h)}
f(e^{2i \beta_{n, h}})
\end{equation}
for some enumeration $e^{2i \beta_{n, h}}$ of the eigenvalues of $S_h$, repeated according to their multiplicity.
The $\beta_{n, h} \in [0, \partiali)$ are called the `phase shifts' of $H_h$.
\noindent \textbf{Main Theorem:}
\textit{Let $I$ be any open interval of $\mathbb{S}^1$ containing $1$.
Assume that $V$ satisfies \eqref{eq:potential-asymptotics} for some
$\alpha > d$, and that $W$ in \eqref{eq:potential-asymptotics}
satisfies the additional derivative estimates
\begin{equation}
\big| \partial_x^k W(x) \big| = O(|x|^{-(\alpha + |k| + \epsilon)}) \ \forall \ k \in \NN^d , \quad |x| \to \infty.
\label{W-deriv-est}\end{equation}
Then the measures $\mu_h$ converge in the weak-$*$ topology on $\mathbb{S}^1 \setminus I$
to a measure $\mu$. Moreover, $\mu$ is the pushforward via the map $\mathbb{R} \lra
\mathbb{R}/ 2\partiali \mathbb{Z} \simeq \mathbb{S}^1$ of a homogeneous
measure
$$
\nu = \left\{
\begin{array}{lr}
(1/(2\partiali))^{d-1}a_1 \theta^{- \gamma - 1} \, d\theta & \mbox{ for } \theta > 0\\
(1/(2\partiali))^{d-1}a_2 |\theta|^{- \gamma - 1} \, d\theta & \mbox{ for } \theta < 0.
\end{array}
\right.
$$
Concretely, for every $f \in C^0_{comp}(\mathbb{S}^1 \setminus I)$, we have
$$
\lim_{h \to 0} \la \mu_h, f \ra = \int_{\mathbb{S}^1} f(e^{i\theta}) \, d\mu.
$$
The constants $a_1, a_2$ are given in
\eqref{eq:constants-3} below.
}
The Main Theorem is proven at the end of Section
\ref{sec:main-theorem}, modulo the proofs of subsequent technical lemmas.
It follows from the Main Theorem that eigenvalues of $S_h$ accumulate
in sectors of the unit circle at a rate of $h^{- \alpha( d -1)/(\alpha
- 1)}$. Indeed, defining a sector on the circle by choosing angles
$0 < \partialhi_0 < \partialhi_1 < 2\partiali$, and letting
$$
N(\partialhi_0, \partialhi_1) = \# \{ n : \partialhi_0 \le \beta_{h, n} \le \partialhi_1
\mbox{ mod } 2 \partiali \},
$$
the main theorem implies the following.
\begin{corollary}\label{thm:counting}
Under the assumptions of the Main Theorem, the number of eigenvalues in a sector satisfies
$$
N(\partialhi_0, \partialhi_1) = h^{- \alpha (d - 1)/(\alpha - 1)}
(\int_{\partialhi_0}^{\partialhi_1} d\mu)(1 + o(1)).
$$
\end{corollary}
This result can be taken as an analogue of the Weyl asymptotic
formula, reviewed below in Section \ref{sec:eigenvalue-accumulation}.
It is also proven at the end of Section \ref{sec:main-theorem}.
The Main Theorem is proven, following \cite{Zelditch-Kuznecov}, via
analysis of the traces of the operators $S^k_h - \Id$. The fact that
these operators are trace class is shown for example in
\cite{Yafaev:Scattering-Theory:Some-Old}; in Section
\ref{sec:traces-and-compositions} below, we prove a precise
asymptotic formula for the trace which gives its leading order
behavior in $h$ as $h \to 0$. The trace of $S^k_h - \Id$ is equal to
$h^{-\alpha \gamma} \la \mu_h, p_k(z) \ra$ with $p_k(z) = z^k - 1$,
where $\mu_h$ is the measure defined in the Main Theorem, and our
asymptotic formula for the trace of $S^k_h - \Id$ shows that the Main
Theorem holds for these special values of $f$. Note that $p_k(z)$
does not, strictly speaking, satisfy the assumptions of the Main
Theorem, as its support contains $1$; in fact, we prove that the
conclusion of the theorem holds on the Banach space of continuous
functions vanishing to first order at $1$ --- see Sections
\ref{sec:main-theorem} and \ref{sec:eigenvalue-distribution}. To
conclude that the Main Theorem holds we show in Section
\ref{sec:eigenvalue-distribution} that the measures
$\mu_h$ are continuous on this space of continuous functions, which
contain the span of the $p_k$ as a dense subset, and use
an approximation argument to obtain the formula in the Main Theorem.
The trace of $S^k_h - \Id$ is obtained via analysis of the Schwartz
kernel of $S_h$ and its powers. By \cite{HW2008}, with previous
results for example in \cite{A2005,G1976,RT1989}, the operator $S_h$ is a semiclassical Fourier
Integral Operator whose canonical transformation is the
\textbf{total sojourn relation}, which is a map from incoming rays to
outgoing rays which are asymptotically tangent to the same flow line
of the Hamiltonian system induced by $h^2 \Delta + V - E$. The
precise relationship between the integral kernel of $S_h$ and the
total sojourn relation is discussed in
Section \ref{sec:the-scattering-matrix}, and in particular we see that
the canonical relation of $S_h$ is a perturbation of the identity operator of
order determined by $\alpha$, the rate of vanishing of $V$ at infinity.
We elaborate the latter remark in the special case that $E = 1$ and
$V$ is central ($V(x) = V(|x|)$) on $\mathbb{R}^2$. The bicharacteristics of the
Hamiltonian $p = |\xi|^2 + V - 1$ are paths $x(t)$ in $\mathbb{R}^2$
satisfying Newton's equation $\ddot x(t) = - 2\nabla V(x(t))$. The crucial
object related to the dynamical system in this context is the
`scattering angle' $\Sigma$, the angle by which an incoming ray is
deflected by the potential \cite{RSIII}. Indeed, in this case the canonical
relation of $S_h$
is the graph of the map
of $T^*\mathbb{S}^1 \lra T^* \mathbb{S}^1$ taking a point $(\omega,
\eta)$ to $(\omega + \Sigma(\eta), \eta)$, where
$(\omega, \eta)$ corresponds to a straight
ray $x_0(t) = \omega t + \eta$ and $\eta \partialerp \omega$. Here, the scattering
angle is given explicitly
by the formula \cite[Eqn.\ 2.6]{DGHH2013}
$$
\Sigma(\eta) = \partiali - 2 \int_{r_m}^\infty \frac{\eta}{r^2 \sqrt{1 -
\eta^2/ r^2 - V(r)}} \, dr,
$$
where $r_m$ is the
minimum distance to the origin of the bicharacteristic ray $x(t)$ of the
Hamiltonian $p$ that is asymptotic to $x_0$ for time near minus infinity. When $V
\sim c /r^\alpha$, it is straightforward to compute that $r_m =
\eta( 1+ O(\eta^{- \alpha}))$ and $\Sigma(\eta) = O(\eta^{-
\alpha})$.
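For instance, here is a sketch of that computation, under the simplifying assumption that $V(r) = c\, r^{-\alpha}$ exactly for $r$ large and with $\eta$ large: the turning point $r_m$ is the largest root of $1 - \eta^2/r_m^2 - V(r_m) = 0$, so
$$
r_m^2 = \frac{\eta^2}{1 - V(r_m)} = \eta^2\big(1 + O(r_m^{-\alpha})\big)
\quad \Longrightarrow \quad r_m = \eta\big(1 + O(\eta^{-\alpha})\big),
$$
while in the integral defining $\Sigma(\eta)$ the unperturbed integrand (with $V \equiv 0$ and $r_m = \eta$) integrates to exactly $\partiali/2$, and both the perturbation of the integrand and the shift of $r_m$ contribute $O(\eta^{-\alpha})$, so that $\Sigma(\eta) = O(\eta^{-\alpha})$.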
\
This is the third paper in a series analyzing the asymptotic distribution
of the phase shifts in the semiclassical limit using geometric
microlocal techniques, the first two works of which consider smooth compactly
supported potentials $V$ \cite{DGHH2013,
Gell-Redman-Hassell-Zelditch}. It is instructive to compare the Main Theorem with the
main result of
\cite{Gell-Redman-Hassell-Zelditch}, which is
\begin{theorem}\label{thm:comp} Let $V$ be a real, smooth, compactly supported potential, and $E \in \RR$
a nontrapping energy for the Schr\"odinger operator $H_h = h^2 \Delta + V - E$. Assume that
the set of periodic points of powers of the reduced scattering map associated to $H_h$ has measure zero in $T^* \mathbb{S}^{d-1}$. Define the sequence of measures $\nu_h$ on $\mathbb{S}^1$ by
\begin{equation}
\label{eq:measure-definition-comp}
\la \nu_h, f \ra = h^{d-1} \frac1{E^{(d-1)/2} \Vol \mathcal{I}} \sum_{e^{2i \beta_{n, h} } \in \spec(S_h)}
f(e^{2i \beta_{n, h}}),
\end{equation}
where $\mathcal{I}$ is the set of $(\omega, \eta) \in T^* \mathbb{S}^{d-1}$ associated to bicharacteristics that meet the support of $V$.
Then, for every $f \in C^0(\mathbb{S}^1)$ supported away from $1$, we have
\begin{equation}
\label{eq:equicompact}
\lim_{h \to 0} \la \nu_{h}, f \ra = \frac{1}{2\partiali} \int_{0}^{2\partiali}
f(e^{i\partialhi}) d\partialhi.
\end{equation}
In particular, the spectrum of $S_h$ is asymptotically equidistributed on the unit circle $\mathbb{S}^1$, away from the point $1$.
\end{theorem}
The main differences between the Main Theorem and Theorem~\ref{thm:comp} are
\begin{itemize}
\item The rate of accumulation is different. There are about $h^{-(d-1)}$ eigenvalues in a sector in the case of
compact support
\cite{Gell-Redman-Hassell-Zelditch} as opposed to $h^{-\alpha (d -
1)/(\alpha - 1)}$ for potentials decaying like $|x|^{-\alpha}$.
\item For compactly supported semiclassical
potentials, the phase shifts \textit{equidistribute} around the unit
circle as $h \to 0$, whereas for polynomial decay \textit{they do
not}; instead, we get the
homogeneous distributions in the Main Theorem.
\item The equidistribution result for compactly supported
potentials relies on two dynamical assumptions on the bicharacteristic
flow (the first being non-trapping).
For polynomial decay, neither of these assumptions are required.
\end{itemize}
These differences arise from the fact that $S_h$ for a compactly supported
potential is semiclassically equal to the identity operator outside a
compact set in phase space, so the difference $S_h^k - \Id$ is the
difference of two semiclassical FIO's of order $0$ with compact
microsupport. As such their traces grow like $h^{-(d - 1)}$ as $h \to
0$ (see e.g.\ \cite[Appendix]{Gell-Redman-Hassell-Zelditch}), and the
volume of phase space on which $S_h^k - \Id$ is microlocally nontrivial
enters into the leading asymptotics.
By contrast, in the present setting, compact
subsets of phase space are irrelevant since their contribution to the measure $\mu_h$ is
order $O(h^{-(d-1)} \times h^{\alpha (d-1)/(\alpha-1)} ) = O(h^{(d-1)/(\alpha -1)})$ which is a positive power of $h$, so only the asymptotic behaviour of the dynamics is important. This explains why the nontrapping assumption is not relevant in the present setting, as trapped rays only occur in a compact region of phase space.
The quicker rate of accumulation of eigenvalues, $\sim h^{-\alpha (d-1)/(\alpha-1)}$, as $h \to 0$ can be understood heuristically by observing that,
for each level of $h$, there is an `effective radius' $r(h) \sim
h^{-1/(\alpha - 1)}$ in the impact parameter $|\eta|$, outside of which the total phase a ray accumulates relative to free motion is $O(h)$,
and such rays are therefore semiclassically negligible.
This radius tends to infinity as a negative power of $h$,
leading to an effective `interacting' volume of phase space that grows
as $h^{-(d-1)/(\alpha - 1)}$. Since a unit of phase space volume contributes
roughly $h^{-(d-1)}$ eigenvalues, there are about
$h^{-(d-1)\alpha/(\alpha - 1)} = h^{-\gamma \alpha}$ phase shifts
deflected away from $1$. This
observation also explains why we fail to have equidistribution, as one
might naively guess based on Theorem~\ref{thm:comp}, in the
polynomially decaying case. Namely, there is no firm distinction
between interacting and noninteracting parts of phase space, so
correspondingly, there is no firm division between eigenvalues that
are `essentially $1$' and `essentially different from $1$'. So, unlike
the compactly supported case where the measure divides into a finite,
equidistributed part and an infinite atom at $1$, in the polynomially
decaying case, the point mass at $1$ is `smeared' into an absolutely
continuous measure with infinite mass near $1$. Thus equidistribution
is not possible as it would only account for a finite amount of mass
away from the point $1$.
One technical challenge of this work is that to treat potentials $V$
for which one has only the derivative estimates of the main theorem
requires an extension of the results in \cite{HW2008}. Indeed, the
structure of the integral kernel of the scattering matrix, the Poisson
operator, and indeed the outgoing and incoming resolvents are treated
in \cite{HW2008} in the case that
$$
V = r^{-2}\sum_{j = 0}^\infty a_j(\omega) r^{-j},
$$
for uniformly bounded $a_j \in C^\infty(\mathbb{S}^{d-1})$. In other
words, in the case that $V$ is a \textit{smooth} function of $\rho =
1/r$ and $\omega$ at $\rho = 0$. The potentials $V$ under
consideration here are merely \textit{conormal} (see Appendix
\ref{sec:soujourn}) and thus some care is required to show that the
scattering matrix has the FIO structure one would predict by analogy
with the smooth case. This extension is done in detail in Appendix
\ref{sec:soujourn}.
\
The introduction to \cite{Gell-Redman-Hassell-Zelditch} contains a
literature review on the topic, to which we refer the reader. In
particular, \cite{SY1985} contains an asymptotic formula for the phase
shifts for \textit{central} potentials of polynomial decay, in this
case with two asymptotic parameters. Namely, they analyze $\Delta +
gV + k^2$, for large $g$ or $k$, in particular obtaining, when $g =
k^2 = 1/h^2$ a formula for the phase shifts which implies our Main Theorem in the special case of central potentials. Other
related work includes \cite{BY1982, BY1984, BY1993, Sm1992}.
\section{The scattering matrix, $S_h$}\label{sec:the-scattering-matrix}
We now describe the FIO structure of the scattering matrix for
potentials with polynomial decay. Note that by setting $\wt{V} = V/E$
and $\wt{h} = h/\sqrt{E}$ we may reduce the general $E$ case to the $E
= 1$ case, and so we assume when convenient that
$$
E = 1.
$$
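Indeed (a one-line check of this reduction): with $\wt{V} = V/E$ and $\wt{h} = h/\sqrt{E}$,
$$
h^2 \Delta + V - E = E \big( \wt{h}^2 \Delta + \wt{V} - 1 \big),
$$
so the two operators have the same generalized eigenfunctions, and $\wt{V}$ satisfies the hypotheses \eqref{eq:potential-asymptotics} and \eqref{W-deriv-est} whenever $V$ does.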
\subsection{The canonical relation of the scattering matrix}
We now describe the canonical relation of the scattering
matrix. In fact, following \cite{Gell-Redman-Hassell-Zelditch}, we
define a Legendre submanifold of $T^* \mathbb{S}^{d-1} \times
T^*\mathbb{S}^{d-1} \times \mathbb{R}$ related to the scattering
matrix in a way we describe in detail in Section \ref{sec:Schwartz-kernel}.
Given $\omega' \in \mathbb{S}^{d-1}$ and $\eta' \in \RR^d$ orthogonal to $\omega'$,
there is a unique
bicharacteristic ray $\gamma_{\omega', \eta'}(t) = (x_{\omega', \eta'}(t) ,
\xi_{\omega', \eta'}(t))$ of the semiclassical
Hamiltonian flow associated with $H_h = h^2 \Delta + V - 1$ satisfying
\begin{equation}\label{eq:incoming-data}
x_{\omega', \eta'}(t) = \omega' t + \eta' + o(1) \mbox{ for } t \ll 0,
\end{equation}
and $|\xi_{\omega', \eta'}|^2 + V(x_{\omega', \eta'}) \equiv 1$. (Recall
that a (semiclassical) bicharacteristic ray $\gamma = (x, \xi)$ is a solution to Hamilton's equations
$\dot{x}(t) = 2 \xi$, $\dot{\xi}(t) = - \nabla V(x).$) By our
non-trapping assumption, as $t \to + \infty$, this ray escapes to infinity,
taking the form
\begin{equation}\label{eq:outgoing-data}
x_{\omega', \eta'}(t) = \omega (t - \tau) + \eta + o(1) \mbox{ for } t \gg 0.
\end{equation}
Here $\tau = \tau(\omega', \eta')$ is the `time delay'.
As explained in \cite{Gell-Redman-Hassell-Zelditch}, the pairs $(\omega', \eta')$
and $(\omega, \eta)$ can be interpreted as points in $T^* \mathbb{S}^{d-1}$.
The map $(\omega', \eta') \mapsto (\omega, \eta)$ is known as the reduced scattering map
$\mathcal{S} = \mathcal{S}_{E =1}$ at energy $E = 1$. (For the
reduced scattering map at arbitrary energy, see
\cite{Gell-Redman-Hassell-Zelditch} or Appendix \ref{sec:soujourn}.) The ray corresponding to
$(\omega', \eta')$ produces an additional piece
of data, a function $\varphi \colon T^*\mathbb{S}^{d -1} \lra
\mathbb{R}$ defined by\footnote{The function $\varphi$ is closely related to, but not the same as, the time delay function $\tau$. See \cite[Section 2]{Gell-Redman-Hassell-Zelditch} for further discussion.}
\begin{equation}
\label{eq:phi-definition}
\varphi(\omega', \eta') = \int^\infty_{- \infty} x_{\omega', \eta'}(s) \cdot
\nabla V(x_{\omega', \eta'}(s)) ds.
\end{equation}
The reduced scattering
map, together with the
map $\varphi$, determine a Legendre submanifold of $T^* \mathbb{S}^{d-1} \times T^* \mathbb{S}^{d-1} \times \RR_\partialhi$, endowed with the contact form $\eta' \cdot d\omega' + \eta \cdot d\omega - d\varphi$, called the `total sojourn relation' in \cite{HW2008}:
\begin{equation}
\label{eq:total-sojourn-relation}
L = \big\{ (\omega', \eta', \omega, -\eta, \partialhi) \mid (\omega, \eta) = \mathcal{S}(\omega', \eta'), \, \partialhi = \varphi(\omega', \eta') \big\}.
\end{equation}
That is, the contact form vanishes when restricted to $L$. This
implies immediately that $\mathcal{S}$ is a symplectic map. The
scattering matrix $S_h$ can be described either as a semiclassical
Lagrangian distribution with canonical relation given by the graph of
$\mathcal{S}$, or as a Lagrangian-Legendrian distribution associated
to $L$, as we discuss further in Section~\ref{subsec:oscint}.
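To spell out the last implication (a two-line computation): restricting the contact form to $L$, and recalling that the momentum in the second factor is $-\eta$ there, gives $\eta' \cdot d\omega' - \eta \cdot d\omega = d\varphi$ on $L$; applying $d$ yields
$$
d\eta' \wedge d\omega' - d\eta \wedge d\omega = 0 ,
$$
that is, $\mathcal{S}$ pulls back $d\eta \wedge d\omega$ to $d\eta' \wedge d\omega'$ and so preserves the symplectic form.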
We now find it convenient to switch to using local coordinates $y = (y_1, \dots, y_{d-1})$ on the sphere $\mathbb{S}^{d-1}$. We will use $\eta$ for the dual coordinates on the cotangent bundle. This is a slight abuse of notation (compared to the usage of $\eta$ above) but we find it convenient and should not cause confusion. For example, the contact form written in $(y, \eta)$ coordinates is $\eta \cdot dy = \sum_i \eta_i dy_i$.
Due to the decay of $V$ at spatial infinity, the reduced scattering map $\mathcal{S}$ tends to the identity as $|\eta'|\to \infty$. It will be important in our analysis to understand precisely how this happens.
In Appendix \ref{sec:soujourn}, we will prove the following
proposition which describes the behavior of the map $\mathcal{S}$ in
the large $\eta$ regime. Before we state the result, we recall
the following standard terminology: we say that a function $\sigma =
\sigma(\sphvar, \dspharv, h)$ is a symbol of order $m$ in $\dspharv$, i.e.\ is in
the space $S^m$, if for each multi-index $k \in \mathbb{N}_0^{d-1}, k' \in \mathbb{N}_0^{d-1}$, there is
$C_{k, k'} > 0$ such that
\begin{equation}
\label{eq:symbol-estimates}
| D^{k}_{\sphvar} D^{k'}_\dspharv \sigma | \le C_{k,k'} \la \dspharv \ra^{m - |k'|},
\end{equation}
where $|k| = k_1 + \cdots + k_{d-1}$ and $\la \dspharv \ra =
(|\dspharv|^2 + 1)^{1/2}$ and $C_{k, k'}$ is independent of $h$.
\begin{proposition}\label{thm:sojourn-near-infty}
Suppose that $\alpha > 1$, and that $V \in C^\infty$ can be expressed in the form
$$
V = \frac{v_0(\hat x)}{|x|^\alpha} + W,
$$
where $W$ satisfies \eqref{W-deriv-est}.
Then the map $(y', \eta') \mapsto (y, \eta, \partialhi) = (\mathcal{S}(y', \eta'), \varphi(y', \eta'))$ satisfies
\begin{equation}
\begin{split}
\sphvar_i &= \sphvar'_i + a_i(\sphvar', \hat{\dspharv'})
|\dspharv'|^{- \alpha} + e_i \\
\dspharv_i &= - \dspharv'_i + b_i(\sphvar', \hat{\dspharv'})
|\dspharv'|^{1 - \alpha} + \tilde e_i \\
\varphi &= c(\sphvar', \hat{\dspharv'}) |\dspharv'|^{1 - \alpha} + e',
\end{split}
\label{conreg}\end{equation}
for each $i = 1, \dots, d-1$ and $|\dspharv'|$ large, where $a_i, b_i$ and $c$ are smooth, $e_i \in S^{- \alpha - \epsilon}$, and $ \tilde e_i, e' \in S^{1 - \alpha - \epsilon}$ (see \eqref{eq:symbol-estimates}.)
\end{proposition}
\subsection{The Schwartz Kernel of $S_h$}\label{sec:Schwartz-kernel}\label{subsec:oscint}
Let us assume for the next few sections that $E$ is a nontrapping energy, and defer the trapping case to Section~\ref{sec:trapping}.
As shown in \cite{HW2008}, under this assumption, the scattering matrix is a `Legendrian-Lagrangian distribution'.
This means that for each fixed $h > 0$, $S_h$ is a (homogeneous) FIO; in our case\footnote{The results of \cite{HW2008} apply to asymptotically conic nontrapping manifolds. In general, the absolute scattering matrix is a FIO associated to the canonical relation of geodesic flow (at infinity) at time $\partiali$. In the case of $\RR^n$, one obtains the `relative scattering matrix', which is what we consider here, by composing the absolute scattering matrix with the antipodal map and multiplying by $i^{(d-1)/2}$. This reduces the canonical relation to the identity in this special case.}, it is a pseudodifferential operator, in fact equal to the identity up to a pseudodifferential operator of order $1-\alpha$.
As $h \to 0$, the scattering matrix is, in each bounded region of phase space $T^* \mathbb{S}^{d-1}$, a semiclassical
Lagrangian distribution, but it is not a semiclassical pseudodifferential operator. Instead, its canonical relation is the graph of the reduced scattering transformation, which is only \emph{asymptotically} equal to the identity; Proposition~\ref{thm:sojourn-near-infty} makes precise how this happens.
What is new about the result in \cite{HW2008} is that it
gives the precise oscillatory integral form of $S_h(E)$ in the transitional regime; that is, where the semiclassical frequency $\eta$ tends to infinity, uniformly as $h \to 0$.
However, more regularity was assumed on the potential $V$ in \cite{HW2008}; the assumption made there translates, in our context, to the potential having a Taylor series at infinity of the form $\sum_{j \geq 2} |x|^{-j} v_j(\hat x)$. We explain how the result extends to the potentials considered here in Appendix~\ref{sec:soujourn}.
\begin{remark}
The reason that the term Legendre distribution is used in \cite{HW2008} is because $S_h(E)$ is associated to the Legendre submanifold \eqref{eq:total-sojourn-relation}, which gives an extra piece of information, namely $\varphi$, in addition to the symplectic map $\mathcal{S}$. This eliminates an ambiguity (up to an additive constant) of the class of phase functions locally parametrizing the associated Lagrangian submanifold $\mathrm{graph }(\mathcal{S})$; see the appendix of \cite{Gell-Redman-Hassell-Zelditch} for more discussion on this point.
\end{remark}
We now express $S_h(E)$ as an oscillatory integral microlocally near infinity.
\begin{lemma}\label{lem:S-structure} Suppose $E$ is a nontrapping energy. Then the scattering matrix $S_h$ takes the form
\begin{equation}\label{eq:Sh-decomposition}
S_h = F_1 + F_2,
\end{equation}
where $F_1$ is a zeroth order FIO with compact microsupport, and $F_2$
has an oscillatory integral representation of the form
\begin{equation}\label{eq:Sh-at-fiber-infinity}
F_2(\sphvar, \sphvar') = (\frac{1}{2\partiali h})^{d - 1} \int e^{i((\sphvar -
\sphvar') \cdot \dspharv + G(\sphvar', \dspharv))/h} (1 + b(\sphvar,
\sphvar', \dspharv, h)) \, d\dspharv,
\end{equation}
where $G(\sphvar', \dspharv)$ is a symbol of order $1-\alpha$ in $\dspharv$, and
$b$ is a symbol in $\dspharv$ of order $-\alpha$ (see \eqref{eq:symbol-estimates}). Moreover,
\begin{equation}\begin{aligned}
G(\sphvar', \dspharv) &= g(\sphvar', \hat \dspharv) |\dspharv|^{1-\alpha} + \tilde g, \quad {\tilde g} \in S^{1-\alpha-\epsilon}
\end{aligned}\label{G} \end{equation}
\end{lemma}
\begin{proof}
We first address the question of finding a phase function parametrizing $L$ near infinity, that is, for large $(\eta, \eta')$.
By Lemma \ref{thm:sojourn-near-infty}, we may use $(\sphvar', \dspharv)$ as
coordinates on the Legendrian $L$ in the large $|\dspharv|$ region, and we write the remaining coordinates on $L$ in terms of these as
$$
\sphvar = W(\sphvar', \dspharv), \quad \dspharv' = N(\sphvar', \dspharv), \quad \varphi = T(\sphvar', \dspharv),
$$
where $(y', \eta', y, \eta, \varphi) \in L$.
The fact that $L$ is Legendrian implies the following identities
amongst these functions $W, N, T$, arising by expressing the vanishing
of $\eta' \cdot dy' + \eta \cdot dy - d\partialhi$ in these coordinates:
\begin{equation}
N_j = - \sum_{i = 1}^{d -1} \dspharv_i \frac{\partial W_i}{\partial \sphvar'_j} + \frac{\partial T}{\partial \sphvar'_j} , \qquad
\sum_{i = 1}^{d -1 } \dspharv_i \frac{\partial W_i}{\partial \dspharv_j}
- \frac{\partial T}{\partial \dspharv_j} = 0, \qquad j = 1, \dots, d-1.
\end{equation}
Using these identities, one can check that the Legendrian $L$ is parametrized by the function
\begin{equation}
\Phi(\sphvar', \sphvar, \dspharv) = (\sphvar - W(\sphvar', \dspharv)) \cdot \dspharv + T(\sphvar', \dspharv).
\label{Phi}\end{equation}
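To make `one can check' explicit (a sketch, using the identities above): on the set $\{\partial_{\dspharv}\Phi = 0\}$ one recovers precisely the coordinates of $L$. Indeed,
$$
\partial_{\dspharv_j}\Phi = \sphvar_j - W_j - \sum_{i = 1}^{d-1} \dspharv_i \frac{\partial W_i}{\partial \dspharv_j} + \frac{\partial T}{\partial \dspharv_j} = \sphvar_j - W_j
$$
by the second identity, so the critical set is $\{\sphvar = W(\sphvar', \dspharv)\}$; on it, the first identity gives $\partial_{\sphvar'_j}\Phi = N_j = \dspharv'_j$, while $\partial_{\sphvar_j}\Phi = \dspharv_j$ and $\Phi = T = \varphi$.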
Let us write $W(\sphvar', \dspharv) = \sphvar' + \tilde W(\sphvar', \dspharv)$.
Then we have, comparing \eqref{G} and \eqref{Phi},
\begin{equation}\label{eqn:G-specifically}
G(\sphvar', \dspharv) = - \tilde W(\sphvar', \dspharv) \cdot \dspharv + T(\sphvar', \dspharv).
\end{equation}
It thus suffices to note that by Lemma \ref{thm:sojourn-near-infty}, $\tilde W$ is a classical symbol of order $-\alpha$, and $T$ is a classical symbol of order $1-\alpha$. Compare with \cite[Section 7.2]{HW2008}.
That the scattering matrix has a local oscillatory
integral expression using the phase function $\Phi$, with
symbol $a = 1 + b$ where $b \in S^{-\alpha}$, is
shown in Appendix \ref{sec:scattering-matrix-deduction}.
\end{proof}
\begin{remark}\label{thm:switch-y-yprime}
The choice to parametrize $L$ using a function $G(y', \eta)$ was an arbitrary one. We can
just as well use $(y, \eta')$ to furnish coordinates on $L$, and then,
writing $$y' = W'(\sphvar, \dspharv') = y + \tilde W'(\sphvar, \dspharv') , \quad \dspharv =
N'(\sphvar, \dspharv'), \quad \varphi = T'(\sphvar, \dspharv'),$$
the function
$$\Phi'(y', y, \eta') = (y' - y) \cdot \eta' - \tilde W' \cdot \eta' + T'(y, \eta')$$ also
parametrizes $L$. (To be clear, $\varphi = T'(y, \eta')$ means that $T'(y,
\eta')$ is the value of $\varphi$ on $L$ at the point $(y', \eta', y,
\eta, \varphi)$.) Replacing the dummy variable $\eta'$ by $\overline{\eta} = -\eta'$, and
setting $$G'(y, \overline{\eta}) = \tilde W'(y,-\overline{\eta}) \cdot \overline{\eta} +
T'(y, -\overline{\eta})$$ gives $\Phi' = (y - y') \cdot \overline{\eta}+ G'(y, \overline{\eta})$, and it
follows as above that $G' = g'(\sphvar', \hat {\overline{\eta}}) |\overline{\eta}|^{1-\alpha} + \tilde
g', \quad {\tilde g'} \in S^{1-\alpha-\epsilon}$. Furthermore, we claim that
\begin{equation}
g' = g.
\label{gg'}\end{equation}
In fact, since $\tilde W(y', \eta) = - \tilde W'(y, -\overline{\eta})$ and $T(y', \eta) = T'(y, -\overline{\eta})$,
and writing $\overline{\eta} = \eta + N(y', \eta)$, $N \in S^{1-\alpha}$, we find that
$$
\tilde W(y', \eta) = - \tilde W'\big(y' + \tilde W(y', \eta), -\eta + N(y', \eta) \big),
$$
from which it follows that
$$
\tilde W(y', \eta) = - \tilde W'(y', -\eta) \text{ modulo } S^{1-2\alpha}.
$$
Similarly, $T'(y', -\eta) - T(y', \eta) \in S^{2(1-\alpha)}$. Thus we see that $G - G' \in S^{2(1-\alpha)}$, from
which \eqref{gg'} follows immediately.
Thus an oscillatory integral of the form in
\eqref{eq:Sh-at-fiber-infinity} with $G$ as in Lemma~\ref{lem:S-structure} can also be written
\begin{equation}\label{eq:Sh-at-fiber-infinity-y-dependent}
(\frac{1}{2\partiali h})^{d - 1} \int e^{i((\sphvar - \sphvar') \cdot \dspharv +
G'(\sphvar, \dspharv))/h} (1 + b(\sphvar, \sphvar', \dspharv, h)) \,
d\dspharv,
\end{equation}
where $G'$ has the same properties as $G$, in fact $G - G' \in S^{2(1-\alpha)}$, and $b$ has the same
symbolic properties as $a$ in \eqref{eq:Sh-at-fiber-infinity}.
\end{remark}
By adjusting the division between $F_1$ and $F_2$ suitably, we may
assume that $G$, as well as $\langle \dspharv \rangle^{|\gamma|}
D_\dspharv^\gamma G$, are sufficiently small, which we do without further
comment. Indeed, choosing a function $\chi = \chi(\dspharv)$ with $\chi
\equiv 1$ for $|\dspharv | < R/2 $ and $\chi \equiv 0$ for $|\dspharv| \ge R$
and writing
\begin{gather*}
\int e^{i((\sphvar - \sphvar') \cdot \dspharv + G(\sphvar',
\dspharv))/h} (1 + a) \, d\dspharv = \\
\qquad \int e^{i((\sphvar - \sphvar') \cdot
\dspharv + G)/h}\chi (1 + a) \, d\dspharv + \int e^{i((\sphvar - \sphvar')
\cdot \dspharv + G)/h} (1 - \chi) (1 + a) \, d\dspharv,
\end{gather*}
and taking $R$ large enough and including the first term on the right in
$F_1$ produces the desired effect. We drop the $\chi$ from the
notation in $F_2$ since e.g.\ we could take $a
\equiv - 1$ for $|\eta| \le R$.
\subsection{Powers of the scattering matrix}
To prove the Main Theorem, we will compute the trace of $S_h(E)^k - \Id$ for all integers $k$. Thus it is important to understand how to represent the powers $S_h^k$ as oscillatory integrals.
First assume that $k \geq 1$. The $k$th power of $S_h$ is $(F_1 +
F_2)^k$, and if we expand this product, every term has compact
microsupport except for $F_2^k$. (See Section
\ref{sec:traces-and-compositions} for a further discussion of the
other terms in the expansion and why their contribution to the eigenvalue
asymptotics is lower order.) For $k = -1$, recall that $S_h^{-1} =
S_h^*$, and thus the integral kernel of $S_h^{-1}$ is given by the
hermitian conjugates $F^*_1 + F^*_2$ in
\eqref{eq:Sh-decomposition}. Here $F^*_2(\sphvar, \sphvar') =
\overline{F}_2(y', y)$. By Remark \ref{thm:switch-y-yprime}, we may
take the phase function of $F_2$ to be of the form
in~\eqref{eq:Sh-at-fiber-infinity-y-dependent}, with $G'$ as described
there; taking the Hermitian conjugate, we have
\begin{equation}\label{eq:Sh-at-fiber-infinity-conjugate}
F^*_2(\sphvar, \sphvar') = \overline{F}_2(y', y) = (\frac{1}{2\partiali h})^{d - 1} \int e^{i((\sphvar -
\sphvar') \cdot \dspharv - G(\sphvar', \dspharv))/h} (1 + b(\sphvar,
\sphvar', -\dspharv, h)) \, d\dspharv,
\end{equation}
We will show that $F^k_2$ has the following oscillatory integral structure.
\begin{lemma}\label{thm:composition} Suppose $k \geq 1$. Then the FIO
$F_2^k$ has an oscillatory integral representation of the form
\begin{equation}
(\frac{1}{2\partiali h})^{d - 1} \int e^{i((\sphvar - \sphvar') \cdot \dspharv + kG(\sphvar',
\dspharv) + E_k(\sphvar', \dspharv))/h} (1 + b_k(\sphvar, \sphvar', \dspharv, h))
\, d\dspharv,
\end{equation}
where $E_k$ is a symbol of order $2(1-\alpha)$ in $\dspharv$, and $b_k$ is
a symbol of order $1-\alpha$. Similarly, $(F^*_2)^k$ has an oscillatory
integral representation of the form
\begin{equation}
(\frac{1}{2\partiali h})^{d - 1} \int e^{i((\sphvar - \sphvar') \cdot \dspharv - kG(\sphvar',
\dspharv) + E_k(\sphvar', \dspharv))/h} (1 + b_k(\sphvar, \sphvar', \dspharv, h))
\, d\dspharv,
\end{equation}
\end{lemma}
\begin{proof} See Appendix \ref{sec:composition}. \end{proof}
The key point here is that, up to a term vanishing faster as
$|\eta|\to \infty$, the effect on the phase function $\Phi$ of raising
the scattering matrix to the power $k$ is essentially to replace $G$ by
$kG$.
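As a consistency check in a model case (not the setting of the lemma, where $G$ does depend on the spatial variable): if we had $G = G(\dspharv)$ only, then on $\mathbb{R}^{d-1}$ the operator $F_2$ would be the semiclassical Fourier multiplier $e^{iG(hD)/h}$, whose powers compose exactly, with no error term:
$$
F_2^k(\sphvar, \sphvar') = (\frac{1}{2\partiali h})^{d - 1} \int e^{i((\sphvar - \sphvar') \cdot \dspharv + kG(\dspharv))/h} \, d\dspharv .
$$
The content of Lemma \ref{thm:composition} is that the spatial dependence of $G$ (and of the amplitude) only produces the lower order terms $E_k \in S^{2(1-\alpha)}$ and $b_k$.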
\section{Proof of the main theorem}\label{sec:main-theorem}
The main idea of the proof, motivated by \cite{Zelditch-Kuznecov, Z1997} and \cite{Gell-Redman-Hassell-Zelditch}, is the following observation: if $\{ \nu_h \} $ is a family of \textit{finite} measures on $\mathbb{S}^1$ parametrized by $h > 0$, and if
each Fourier coefficient of $\nu_h$ converges to that of a certain
finite measure $\nu$ as $h \to 0$, then $\nu_h$ converges to $\nu$ in
the weak-$*$ topology. In our case, however, we cannot apply this
directly, as the $\mu_h$ in \eqref{eq:measure-definition} are infinite measures. Instead we have the following variant. Consider the following weighted sup norm for functions on $\mathbb{S}^1$:
\begin{equation}
\label{eq:weighted-norm}
\norm[w]{f} = \sup_{z \in \mathbb{S}^1 \setminus \{ 1 \}} \absv{\frac{f(z)}{z - 1}},
\end{equation}
and the associated Banach space
\begin{equation}
\label{eq:weighted-space}
C^0_{w}(\mathbb{S}^1) = \{ f \in C^0(\mathbb{S}^1) : \exists \, g \in C^0(\mathbb{S}^1) \text{ such that } f(z) = (z-1) g(z) \}.
\end{equation}
Then we show
\begin{proposition}\label{prop:density}
Suppose that the (infinite) measures $\nu_h$ and $\nu$ act (by integration) as bounded linear functionals on $C^0_{w}(\mathbb{S}^1)$. Moreover, assume that the norms of $\nu_h$ in the dual space are uniformly bounded in $h$. Then if
\begin{equation}
\lim_{h \to 0} \int_{\mathbb{S}^1} p(e^{i\theta}) d\nu_h = \int_{\mathbb{S}^1} p(e^{i\theta}) d\nu
\label{p-convergence}\end{equation}
for all polynomials $p \in C^0_{w}(\mathbb{S}^1)$, then $\nu_h \to \nu$ in the weak-$*$ topology on every compact subset of $\mathbb{S}^1 \setminus \{ 1 \}$.
\end{proposition}
\begin{proof}
The proof of this proposition consists of two elementary steps. We first observe, as in \cite[Proof of Lemma 5.3] {Gell-Redman-Hassell-Zelditch}, that polynomials in $C^0_{w}(\mathbb{S}^1)$ are dense in $C^0_{w}(\mathbb{S}^1)$. The proof is so simple that we repeat it here: given $f \in C^0_{w}(\mathbb{S}^1)$, by definition $f = (z-1) g(z)$ for some $g \in C^0(\mathbb{S}^1)$. We approximate $g$ in $C^0(\mathbb{S}^1)$ by a sequence of polynomials $p_j$. Then $(z-1) p_j$ lie in $C^0_{w}(\mathbb{S}^1)$ and approximate $f$ in the $C^0_{w}(\mathbb{S}^1)$ norm.
Then, given $\epsilon > 0$, and a continuous function $f$ on the circle, supported away from $1$, we need to show that
$$
\Big| \int f d\nu - \int f d\nu_h \Big| < C\epsilon,
$$
provided $h$ is sufficiently small. We choose a polynomial $p$ in $C^0_{w}(\mathbb{S}^1)$ such that
$\| p - f \|_{[w]} < \epsilon$. Then we estimate
\begin{equation}\begin{gathered}
\Big| \int f d\nu - \int f d\nu_h \Big| \\
\leq \Big| \int f d\nu - \int p d\nu \Big| + \Big| \int p d\nu - \int p d\nu_h \Big| + \Big| \int p d\nu_h - \int f d\nu_h \Big|.
\end{gathered}\end{equation}
The first term is bounded by $C_1 \| p - f \|_{[w]}$ where $C_1 = \| \nu \|_{(C^0_{w})^*}$ is the dual norm of $\nu$. The second term is bounded by $\epsilon$ provided that $h$ is sufficiently small, using \eqref{p-convergence}. The third term is bounded by $C_2 \| p - f \|_{[w]}$ where $C_2$ is a uniform bound on $\| \nu_h \|_{(C^0_{w})^*}$. Taking $C = C_1 + C_2 + 1$, this completes the proof.
\end{proof}
In the case of interest,
$\nu_h$ will be the measure $\mu_h$ defined in \eqref{eq:measure-definition} and $\nu$ will
be the pushforward of a homogeneous measure. In view of
Proposition~\ref{prop:density} we need the following result.
\begin{proposition}\label{thm:trace-class}
There exists $c > 0$ such that for $h$ sufficiently
small
\begin{equation}
\label{eq:1}
| \la \mu_h , f \ra | \le c \norm[w]{f}.
\end{equation}
\end{proposition}
We also need
\begin{proposition}\label{cor:traceclass}
For every $k \in \ZZ$, $S_h^k - \Id$ is trace class.
\end{proposition}
Propositions~\ref{thm:trace-class} and \ref{cor:traceclass} will be proved in Section \ref{sec:eigenvalue-distribution}.
It is easy to see that the polynomials $z^k - 1$, for $k \neq 0 \in \ZZ$, form a basis of the polynomials that vanish at $1$.
We thus need to show that, for each $k$, the quantity
\begin{equation}
\int (e^{ik\theta} - 1) \, d\mu_h = h^{\alpha \gamma} \Trace (S_h^k - \Id)
\label{k-Four-coeff-h}\end{equation}
converges as $h \to 0$, and to find a limit measure $\mu$ such that the limit of \eqref{k-Four-coeff-h} is equal to
\begin{equation}
\int (e^{ik\theta} - 1) d\mu.
\label{k-Four-coeff}\end{equation}
It turns out that the limit of the quantity \eqref{k-Four-coeff-h} is of the form $c_{\partialm} |k|^\gamma$, where the coefficient $c_\partialm$ depends only on the sign of $k$. Such a homogeneous `Fourier series' comes from a `homogeneous measure'. We now describe precisely what this entails.
\begin{definition}\label{def:hommeas} Let $\beta < -1$. We say that a measure $\mu$ on $\mathbb{S}^1 \setminus \{ 1 \}$ is the pushforward of a homogeneous measure of degree $\beta$ if there is a measure $\nu$ on $\RR$ of the form
\begin{equation}
\nu = \begin{cases}
c_1 \theta^{\beta} d\theta, \quad \, \theta > 0 \\
c_2 |\theta|^{\beta} d\theta, \quad \theta < 0,
\end{cases}
\label{hom-measures}\end{equation}
such that $\mu$ is the pushforward of $\nu$ under the quotient map $F : \RR \mapsto \mathbb{S}^1 = \RR / 2\partiali \ZZ$:
$$
\mu = F_*(\nu) \text{ on } \mathbb{S}^1 \setminus \{ 1 \}.
$$
\end{definition}
To state the following lemma, we will need to define the constant
\begin{equation}\label{eq:Gamma}
\Gamma = \int_0^\infty (e^{i \theta} - 1) \theta^{- \gamma - 1} d\theta.
\end{equation}
for $0 < \gamma < 1$. Note that $\Gamma$ is finite for $\gamma$ in
this range.
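In more detail, finiteness follows from the elementary bound $|e^{i\theta} - 1| \leq \min(\theta, 2)$ for $\theta \geq 0$:
$$
|\Gamma| \leq \int_0^1 \theta \cdot \theta^{-\gamma - 1} \, d\theta + \int_1^\infty 2\, \theta^{-\gamma - 1} \, d\theta = \frac{1}{1 - \gamma} + \frac{2}{\gamma} < \infty ,
$$
using $\gamma < 1$ for the first integral and $\gamma > 0$ for the second.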
\begin{lemma}\label{lem:hom-measures} Suppose that $\mu$ is a measure on $\mathbb{S}^1$ that lies in the dual space of $C^0_w(\mathbb{S}^1)$, and is such that, for some
$\gamma > 0$,
\begin{equation}
\int_{\mathbb{S}^1} \big( e^{ik\theta} - 1 \big) d\mu =
\begin{cases} (\Gamma c_1 + \overline{\Gamma} c_2) k^{\gamma}, \quad \ \ k = 1, 2, 3, \dots \\
(\overline{\Gamma} c_1 + \Gamma c_2) |k|^{\gamma}, \quad k = -1, -2, -3, \dots
\end{cases}.
\label{k-hom}\end{equation}
Then on $\mathbb{S}^1 \setminus \{ 1 \}$, $\mu$ is the pushforward of a
homogeneous measure of degree $\beta = -1-\gamma$. Indeed, it is given
by $\nu$ in \eqref{hom-measures} (with the same constants $c_1$ and $c_2$).
\end{lemma}
\begin{remark} In this lemma, we are only making a statement about
$\mu$ away from the point $1$. Notice that there could be an atom at $1$ about which we can say nothing, as this would not affect the integrals in \eqref{k-hom}. \end{remark}
\begin{proof}
We first note that the integrals in \eqref{k-hom} determine $\mu$ uniquely as an element of the dual space of $C^0_w(\mathbb{S}^1)$, hence uniquely as a measure away from the point $1$. This is an immediate consequence of the density of polynomials in $C^0_w(\mathbb{S}^1)$. In view of this, it suffices to show that the measures in Definition~\ref{def:hommeas}, homogeneous of degree $\beta = -1-\gamma$, have `Fourier coefficients' of the form \eqref{k-hom}.
With $\beta = - \gamma - 1$, let $\nu = H(\theta) \theta^{\beta} d\theta$ where $H$ is the
Heaviside function, and let $\mu$ be the pushforward of $\nu$. We compute, for $k > 0$,
\begin{equation}\begin{gathered}
\int_{\mathbb{S}^1} \big( e^{ik\theta} - 1 \big) d\mu = \int_{\RR} \big( e^{ik\theta} - 1 \big) d\nu \\
= \int_0^\infty \big( e^{ik\theta} - 1 \big) \theta^{\beta} d\theta \\
= k^\gamma \int_0^\infty \big( e^{i\theta} - 1 \big)
\theta^{\beta} d\theta
= k^\gamma \Gamma.
\end{gathered}\end{equation}
Similarly, for $k < 0$ we find
$$
\int_{\mathbb{S}^1} \big( e^{ik\theta} - 1 \big) d\mu = |k|^\gamma \int_0^\infty \big( e^{-i\theta} - 1 \big)
\theta^{\beta} d\theta = |k|^\gamma \overline{\Gamma}.
$$
Similarly, if $\nu = (1 - H(\theta)) |\theta|^\beta$, then for $k > 0$
\begin{equation*}\begin{gathered}
\int_{\mathbb{S}^1} \big( e^{ik\theta} - 1 \big) d\mu = \int_{-\infty}^0 (e^{ik\theta} - 1) |\theta|^\beta d\theta = \int_0^\infty \big( e^{-ik\theta} - 1 \big) \theta^\beta d\theta
= k^\gamma \overline{\Gamma},
\end{gathered}\end{equation*}
and $\int_{\mathbb{S}^1} \big( e^{ik\theta} - 1 \big) d\mu = |k|^\gamma \Gamma$ for $k < 0$.
\end{proof}
We can now prove the Main Theorem.
\begin{proof}[Proof of the Main Theorem]
Proposition~\ref{thm:trace-class} shows that the measures $\mu_h$
are uniformly bounded in the dual space of $C^0_{w}(\mathbb{S}^1)$
(since $\alpha > d$). This means that we can
apply Proposition~\ref{prop:density}, showing that $\mu_h$
converges to a measure $\mu$ on $C^0_{w}$ provided that the
convergence on polynomials vanishing at $1$ in \eqref{p-convergence}
holds. Since the
polynomials $e^{ik\theta} - 1$ for $k \neq 0 \in \ZZ$ are a basis
for polynomials vanishing at $1$, it suffices to check
\eqref{p-convergence} for these polynomials. So this requires
computing the limit, as $h \to 0$, of $\Trace (S_h^k - \Id)$. This
we shall do in Section \ref{sec:traces-and-compositions}, and the result is \eqref{trace2}
and \eqref{trace3}. That is, the integrals are given by
$(\sqrt{E}/2 \partiali)^{d-1}ck^{\gamma}$ for $k >0$ and $(\sqrt{E}/2 \partiali)^{d-1}\overline{c}k^{\gamma}$ for $k < 0$,
where $c$ is the constant in \eqref{eq:constants-3}, in particular
$c = a_1 \Gamma + a_2 \overline{\Gamma}$ for $a_1, a_2$ in
\eqref{eq:constants-3}. Thus by Lemma~\ref{lem:hom-measures}, the
limit measure $\mu$ is the pushforward of the homogeneous measure $a_1(\sqrt{E}/2 \partiali)^{d-1} H \theta^\beta \, d\theta + a_2 (\sqrt{E}/2 \partiali)^{d-1} (1 - H)
|\theta|^\beta \, d\theta$, which pairs with $e^{ik \theta} - 1$ to give the
above values. \end{proof}
We now prove Corollary \ref{thm:counting}.
\begin{proof}[Proof of Corollary \ref{thm:counting}]
Given $0 < \partialhi_0 < \partialhi_1 < 2\partiali$, let $1_{[\partialhi_0, \partialhi_1]} \colon
\mathbb{S}^1 \lra \mathbb{R}$ be the indicator function of the
corresponding sector of the circle, $1_{[\partialhi_0, \partialhi_1]}(e^{i
\theta}) = 1$ if $\partialhi_0 \le \theta \le \partialhi_1$ modulo $2\partiali$ and
is zero otherwise. Then for $N(\partialhi_0, \partialhi_1)$ as defined in the corollary
$$
N(\partialhi_0, \partialhi_1) = \Tr(1_{[\partialhi_0, \partialhi_1]}(S_h))
$$
Let $f$ and $g$ be continuous, non-negative
functions on the circle, supported on $\mathbb{S}^1 \setminus \{1\}$,
such that $ f \le 1_{[\partialhi_0, \partialhi_1]} \le g$. Then
$$
\Tr f(S_h) \le \Tr(1_{[\partialhi_0, \partialhi_1]}(S_h)) \le \Tr g(S_h),
$$
and all three quantities are finite. Thus
$$
\int f \, d\mu + o(1) = \la \mu_h, f \ra \le h^{\alpha \gamma} N(\partialhi_0,
\partialhi_1) \le \la \mu_h, g \ra = \int g \, d\mu + o(1),
$$
where $o(1)$ denotes a quantity which goes to $0$ as $h \to 0$.
But for any $\epsilon > 0$, the functions $f$ and $g$ can be chosen so
that $\int f \, d\mu = \int 1_{[\partialhi_0, \partialhi_1]} \, d\mu - \epsilon$ and $\int g \, d\mu
= \int 1_{[\partialhi_0, \partialhi_1]} \, d\mu + \epsilon$, and thus for any $\epsilon$
we have
$$
\int 1_{[\partialhi_0, \partialhi_1]} \, d\mu - \epsilon + o(1) \le h^{\alpha \gamma}
N(\partialhi_0, \partialhi_1) \le \int 1_{[\partialhi_0, \partialhi_1]} \, d\mu + \epsilon+ o(1),
$$
so $\lim_{h \to 0} h^{\alpha \gamma}N(\partialhi_0, \partialhi_1) = \int 1_{[\partialhi_0,
\partialhi_1]} \, d\mu$, proving the Corollary.
\end{proof}
\section{Traces and compositions}\label{sec:traces-and-compositions}
We now compute the traces
\begin{equation}
\lim_{h \to 0} h^{\alpha \gamma} \tr (S_h^k - \Id),
\label{trace}\end{equation}
again with $\gamma = (d - 1)/(\alpha - 1)$, assuming still that
$\alpha > d$.
Indeed, we
will prove
\begin{lemma}\label{thm:trace-formula}
There is a constant $c$ such that for $k \in
\mathbb{Z}$,
$$
\lim_{h \to 0} \la \mu_h, z^k - 1 \ra = \left\{
\begin{array}{cc}
c (2\partiali)^{-(d - 1)} k^\gamma & \mbox{ if } k \ge 0 \\
\overline{c} (2\partiali)^{-(d - 1)} |k|^\gamma &\mbox{ if } k < 0
\end{array} \right.
$$
Indeed, $c = a_1 \Gamma + a_2 \overline{\Gamma}$ where $a_1$ and $a_2$
are defined in \eqref{eq:constants-3}
\end{lemma}
Note that any semiclassical, zeroth order FIO with compact
microsupport has trace bounded by $C h^{-(d-1)}$ (see e.g.\
\cite[Appendix]{Gell-Redman-Hassell-Zelditch}.) Therefore, in the
limit above we may replace $S_h^k$ by $F_2^k$. Indeed, we may write
$$
S_h^k - \Id = F_2^k - \Id + R_k,
$$
where $R_k$ is a finite sum of compositions of $F_1$'s and $F_2$'s, each
containing at least one factor of $F_1$. Every term in $R_k$ has compact
microsupport, and is thus trace class with trace bounded by $C h^{-(d -1)}$, while, as we will
see, the trace of $F_2^k - \Id$ grows at the rate $h^{-\alpha
\gamma}$. Since
$$
\alpha \gamma = \alpha (d - 1)/(\alpha - 1) > d - 1,
$$
the trace of $F_2^k - \Id$ contributes the leading order part of
$\Tr(S_h^k - \Id)$.
For the same reason, we
can replace the identity operator by another FIO that differs from it
by an operator with compact microsupport. So we can restrict to the
microlocal region $|\dspharv| \geq R$ for arbitrary $R$, and using $h^{\alpha \gamma}
h^{-(d - 1)} = h^\gamma$ and Lemma \ref{thm:composition}, we can write
the part of the Schwartz kernel of $S_h^k - \Id$ which contributes to the
trace to leading order in the form
\begin{equation}\label{eq:what-you-are-taking-trace-of}
\int\limits_{|\dspharv| \geq R} e^{\frac{i}{h} \big( (\sphvar - \sphvar') \cdot \dspharv + kG(\sphvar, \dspharv) \big)} \Big( 1 + b_k(\sphvar, \sphvar', \dspharv, h) \Big) \, d\dspharv
- \int\limits_{|\dspharv| \geq R} e^{\frac{i}{h} (\sphvar - \sphvar')
\cdot \dspharv } \, d\dspharv,
\end{equation}
We then compute \eqref{trace} by setting $y=y'$ and integrating over
$y$. To be more precise, the Schwartz kernel corresponding to the
oscillatory integral expression in
\eqref{eq:what-you-are-taking-trace-of} is actually a
\textit{half-density} on $\mathbb{S}^{d-1} \times \mathbb{S}^{d-1}$
acting on half-densities on $\mathbb{S}^{d-1}$, meaning that if, for
the moment, $H(y, y')$ denotes the distribution in
\eqref{eq:what-you-are-taking-trace-of}, then $H(y, y') |dy
dy'|^{1/2}$ is the Schwartz kernel -- more precisely the Schwartz
kernel is a finite sum of these -- and it acts on half densities
$\partialhi(y)|dy|^{1/2}$ by
\begin{equation}\label{eq:half-densities}
\partialhi(y)|dy|^{1/2} \mapsto (\int H(y, y') \partialhi(y') |dy'|) |dy|^{1/2}.
\end{equation}
It is standard that the trace of this operator is $\int H(y, y)
|dy|$, and thus we are tasked with computing
\begin{equation}
\lim_{h \to 0} \frac{h^{\gamma}}{ (2\partiali)^{d-1}} \Bigg( \int\limits_{|\dspharv| \geq R} e^{ikG(\sphvar, \dspharv)/h} b_k(\sphvar, \sphvar, \dspharv, h) \, dy \, d\dspharv
+ \int\limits_{|\dspharv| \geq R} \Big( e^{ikG(\sphvar, \dspharv)/h} - 1 \Big) \, dy \, d\dspharv \Bigg)
\end{equation}
Since $b_k = O(|\dspharv|^{1-\alpha})$ and $\alpha > d$, the first
integral is absolutely convergent. Due to the positive power of $h$
out the front, this term is zero in the limit $h \to 0$.
So consider the second term. We write
\begin{equation}\label{eq:G-decomposition}
G(\sphvar, \dspharv) = |\dspharv|^{1-\alpha} g(\sphvar, \hat \dspharv) + |\dspharv|^{1-\alpha-\epsilon} \tilde g(\sphvar, \dspharv),
\end{equation}
where $\wt{g}$ is a symbol of order $0$. We change variable to $ \dspharv'
= \dspharv (h/k)^{1/(\alpha - 1)}$ to obtain
\begin{equation*}
\begin{split}
&\frac{h^{\gamma}}{(2\partiali)^{d-1}} \int\limits_{|\dspharv| \geq R} \Big( e^{ikG(\sphvar, \dspharv)/h} - 1 \Big) \, dy \,
d\dspharv \\
& \qquad = \frac{k^{\gamma}}{(2\partiali)^{d-1}} \int \Big( e^{i \big( g(\sphvar,
\hat\dspharv')|\dspharv'|^{1-\alpha} + h^{\epsilon/(\alpha - 1)} k^{- \epsilon/(\alpha - 1)} |\dspharv'|^{1 - \alpha - \epsilon} \tilde
g(\sphvar, \dspharv' (k/h)^{1/(\alpha - 1)}) \big)} - 1 \Big) \, dy \, d\dspharv'.
\end{split}
\end{equation*}
The integrand in this integral is dominated for small $h$ by
$\min\big(2, C|\dspharv'|^{1-\alpha}\big)$ for $C > 0$ independent of $h$, which is integrable on $\mathbb{R}^{d-1}$ as $\alpha > d$. Thus, by the dominated convergence theorem, we can take the pointwise limit inside the integral, and obtain
\begin{equation}
\lim_{h \to 0} \la \mu_h, z^k - 1 \ra = \lim_{h \to 0} h^{(d-1)\alpha/(\alpha - 1)} \tr (S_h^k - \Id) =
c (1/(2\partiali))^{d-1} k^{\gamma}, \quad k \geq 1,
\label{trace2}\end{equation}
where
\begin{equation}
c = \int \Big( e^{i g(\sphvar, \hat\dspharv')|\dspharv'|^{1-\alpha} } - 1
\Big) \, dy \, d\dspharv'. \label{eq:constant}
\end{equation}
The fact that $S_h$ is unitary immediately implies
\begin{equation}
\lim_{h \to 0} h^{(d-1)\alpha/(\alpha - 1)} \tr (S_h^k - \Id) =
\overline{c} (1/(2\partiali))^{d-1} |k|^{\gamma} \quad k \leq -1.
\label{trace3}\end{equation}
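For completeness, the deduction of \eqref{trace3} from \eqref{trace2}: since $S_h$ is unitary, $S_h^{-k} = (S_h^{k})^{*}$, and the trace of the adjoint of a trace class operator is the complex conjugate of its trace, so
$$
\tr (S_h^{-k} - \Id) = \tr \big( (S_h^{k} - \Id)^{*} \big) = \overline{\tr (S_h^{k} - \Id)} , \qquad k \geq 1 .
$$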
It remains only to evaluate $c$. Write
$$
g(\sphvar, \hat\dspharv') = g_+(\sphvar, \hat\dspharv') - g_-(\sphvar,
\hat\dspharv'),
$$
where $g_+ = \max\{ g, 0\}, g_- = \max\{- g, 0\}$. Then
\begin{equation*}
\begin{split}
\int \Big( e^{i g(\sphvar, \hat\dspharv')|\dspharv'|^{1-\alpha} }
- 1 \Big) \, dy \, d\dspharv' &= \int \Big( e^{i g_+(\sphvar,
\hat\dspharv')|\dspharv'|^{1-\alpha} } - 1 \Big) \, dy \,
d\dspharv' \\
&\qquad + \int \Big( e^{ - i g_-(\sphvar,
\hat\dspharv')|\dspharv'|^{1-\alpha} } - 1 \Big) \, dy \,
d\dspharv'.
\end{split}
\end{equation*}
Considering the first term on the right hand side, we write $\dspharv' = r
\hat\dspharv'$ with $\hat\dspharv' = \dspharv'/ |\dspharv'|$, $r = |\dspharv'|$, and compute (the inner $r$-integral being taken for fixed $\sphvar, \hat\dspharv'$ with $g_+(\sphvar,
\hat\dspharv') \neq 0$; where $g_+ = 0$ the integrand vanishes)
\begin{equation}
\label{eq:constants-1}
\begin{split}
& \int \Big( e^{i g_+(\sphvar, \hat\dspharv')|\dspharv'|^{1-\alpha} } - 1
\Big) d\dspharv' \, d\sphvar \\
&\qquad = \int \Big( e^{i r^{1 - \alpha} } - 1
\Big) g_+^\gamma r^{d - 2} dr d\hat\dspharv' d\sphvar \\
&\qquad = \int g_+^\gamma \lp \int ( e^{i r^{1 - \alpha} } - 1)
r^{d - 2} dr \rp d\hat\dspharv' d\sphvar \\
&\qquad = \int g_+^\gamma \lp \int_0^\infty ( e^{i \rho } - 1)
\frac{1}{\alpha - 1} \rho^{- \gamma - 1} d\rho \rp d\hat\dspharv'
d\sphvar \\
&\qquad = \frac{\Gamma}{\alpha - 1} \int g_+^\gamma d\hat\dspharv' d\sphvar ,
\end{split}
\end{equation}
where in the first step we set $r = g_+^{1/(\alpha - 1)} \wt{r}$,
in the third step we set $\rho = r^{1 - \alpha}$ (the reversal of
orientation absorbing the sign), and where $\Gamma$
is the constant defined in \eqref{eq:Gamma}. A similar computation
shows that
\begin{equation}
\label{eq:constants-2}
\int \Big( e^{ - i g_-(\sphvar, \hat\dspharv')|\dspharv'|^{1-\alpha} } - 1
\Big) d\dspharv' \, d\sphvar = \frac{\overline{\Gamma}}{\alpha - 1} \int g_-^\gamma d\hat\dspharv' d\sphvar.
\end{equation}
and thus it follows that
\begin{equation}
c = a_1 \Gamma + a_2 \overline{\Gamma} , \mbox{ where } a_1 =
\frac{1}{\alpha - 1} \int g_+^\gamma d\hat\dspharv' d\sphvar, \mbox{ and } a_2 =
\frac{1}{\alpha - 1} \int g_-^\gamma d\hat\dspharv'
d\sphvar. \label{eq:constants-3}
\end{equation}
\section{The asymptotic distribution of phase shifts}\label{sec:eigenvalue-distribution}
The aim of this section is to prove Propositions \ref{thm:trace-class}
and \ref{cor:traceclass}, to which end we must first obtain an
estimate for the rate at which eigenvalues of $S_h$ accumulate at $1$.
\subsection{Eigenvalue accumulation at $1$}\label{sec:eigenvalue-accumulation}
\begin{proposition}\label{thm:rough-eigenvalue-asymptotics}
Let $\{ e^{2 i\beta_{h, n}} \}_{n = 1}^\infty$ be the eigenvalues of the scattering
matrix $S_h$. There exists a
constant $c > 0$ such that for each $\epsilon > 0$ sufficiently
small,
\begin{equation}
\label{eq:away-from-1}
\# \{ n : | e^{2 i\beta_{h, n}} - 1 | > \epsilon \} \le c
\epsilon^{-\gamma} h^{-\alpha \gamma},
\end{equation}
where as above $\gamma = (d - 1)/(\alpha - 1)$.
\end{proposition}
Before we prove the proposition, we remind the reader of the following
fact; let $K_h$ be a semiclassical pseudodifferential operator of
semiclassical order $0$ on a compact manifold $N$ without boundary,
with compact microsupport. Then there
exists a $c > 0$ such that off a subspace $W_h \subset L^2$ with $\dim
W_h \le c h^{- \dim N}$, we have
\begin{equation}
\label{eq:pseudo-bound}
\| K_h \|_{W_h^\partialerp \to L^2} = O(h^\infty).
\end{equation}
Indeed, this can be shown using properties of the semiclassical
Laplacian $h^2 \Delta_N$ corresponding to a Riemannian metric on $N$
as follows.
If one considers a smooth, compactly supported function $\chi
\colon T^* N \lra \mathbb{R}$, with $\WF_{h}(K_h) \subset \supp \chi$, then
writing
$$
K_h = K_h \chi(h^2 \Delta) + K_h (\Id - \chi(h^2 \Delta)),
$$
the first term on the right-hand side satisfies $\| K_h \chi(h^2 \Delta)
\| \le \| K_h \| \|\chi(h^2 \Delta)\|$, where the norms are operator
norms as maps on $L^2$. (See \cite{Zworski-book} for a definition of
the semiclassical wavefront set $\WF_h$ of a semiclassical FIO.)
By the Weyl
asymptotic formula for semiclassical pseudodifferential operators of
order $2$ \cite[Section 6.4]{Zworski-book}, which states that, if $\{\lambda^2_j \}_{j
= 1}^\infty$ are
the eigenvalues of $\Delta$, then
\begin{equation}\label{eq:Weyl-asymptotics}
\# \{ j : \lambda_j^2 < \lambda^2 \} = c_N \lambda^{\dim N} +
O(\lambda^{\dim N - 1})
\end{equation}
as $\lambda \to \infty$, we see that
$\chi(h^2 \Delta)$ is identically zero off a
subspace of dimension no bigger than $c \Vol(\supp (\chi)) h^{-\dim N}$.
For the second term, note that $\Id - \chi(h^2 \Delta)$ is a semiclassical
pseudodifferential operator of order $0$ with microsupport in $(\supp \chi)^{c}$, and thus
$$
\WF_{h} (K_h (\Id - \chi(h^2 \Delta))) \subset \WF_{h}(K_h) \cap
(\supp \chi)^c = \varnothing,
$$
and in particular $\| K_h(\Id - \chi(h^2 \Delta)) \| = O(h^\infty)$.
In the proof we will use a semiclassical version of the
Calder\'on-Vaillancourt theorem, which, in the non-semiclassical setting
\cite[Theorem
2.73]{Folland-Harm-Anal-in-Phase}, states that a
pseudodifferential
operator $Q$ on $\mathbb{R}^n$,
$$
Q = \int e^{i(z - z') \cdot \zeta} a(z, z', \zeta) d\zeta,
$$
satisfying
$$
\| a \|_{2n + 1} := \sup_{|\alpha| + |\beta| \le 2n + 1} \|
\partial^\alpha_{z, z'} \partial^\beta_\zeta a \|_{L^\infty} < \infty
$$
is bounded on $L^2$ with
$ \| Q \|_{L^2 \to L^2} \le c_n \| a \|_{2n + 1}$,
where $c_n$ is a constant depending only on the dimension $n$.
Setting $\zeta = \wt{\zeta} /h$ shows that for a semiclassical
pseudodifferential operator of order $0$,
$$
Q_h = h^{-n}\int e^{i(z - z') \cdot \zeta / h} b(z, z', \zeta, h) d\zeta,
$$
if
\begin{equation}\label{eq:CV-norm}
\| b \|_{2n + 1, h} := \sup_{|\alpha| + |\beta| \le 2n + 1} \|
\partial^\alpha_{z, z'} (h\partial_\zeta)^\beta b \|_{L^\infty} < \infty
\end{equation}
is bounded on $L^2$ with norm bounded by $c_n \| b \|_{2n + 1, h}$.
Indeed, this is just Calder\'on-Vaillancourt with symbol depending on a
smooth parameter, applied to the semiclassical
symbol $b(z, z', \zeta, h)$.
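In more detail (a short check of this rescaling): substituting $\zeta = h \wt{\zeta}$ in the kernel of $Q_h$ gives
$$
h^{-n}\int e^{i(z - z') \cdot \zeta / h} b(z, z', \zeta, h) \, d\zeta = \int e^{i(z - z') \cdot \wt{\zeta}} \, b(z, z', h\wt{\zeta}, h) \, d\wt{\zeta} ,
$$
so $Q_h$ is an ordinary pseudodifferential operator with symbol $a(z, z', \wt{\zeta}) = b(z, z', h\wt{\zeta}, h)$, and each $\wt{\zeta}$-derivative of $a$ corresponds to a factor of $h \partial_\zeta$ applied to $b$, whence $\| a \|_{2n + 1} = \| b \|_{2n + 1, h}$.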
\begin{proof}[Proof of Proposition \ref{thm:rough-eigenvalue-asymptotics}]
Consider the operator $S_h = F_1 + F_2$ decomposed as in
\eqref{eq:Sh-decomposition}, where $F_1$ has compact microsupport
and $F_2$ consists of a finite sum of terms of the form
\eqref{eq:Sh-at-fiber-infinity}. We begin by taking a cutoff
function $\chi_R \colon T^*\mathbb{S}^{d -1 } \lra \mathbb{R}$
with $\chi_R(\dspharv) \equiv 1 $ for $|\dspharv| \le R/2$ and $\supp \chi_R
\subset \{ |\dspharv | \le R \}$. Taking $R$ sufficiently large we have
$$
S_h - \Id = (S_h - \Id) \Op_h(\chi_R) + (S_h - \Id) (\Id - \Op_h(\chi_R)),
$$
where, for a symbol $q \in S^m(\mathbb{S}^{d -1})$, $\Op_h(q)$
denotes the right quantization of $q$ as a semiclassical
pseudodifferential operator of order $m$; again see
\cite{Zworski-book}. Here,
the operator $(S_h - \Id) \Op_h(\chi_R) \in \Pscl^0$, and thus the statements
preceding the proof apply. Since $\Op_h(\chi_R)$ is a semiclassical
pseudodifferential operator of order $0$ with compact microsupport
\cite[Appendix]{Gell-Redman-Hassell-Zelditch}, and since $d -1 <
\alpha \gamma$, we see by the discussion
preceding the proof that the behavior of $(S_h - \Id) \Op_h(\chi_R)$
has no bearing on \eqref{eq:away-from-1}.
On the other hand, we can take $R$ large
enough so that $\WF_{h} (F_1 \Op_h(\chi_R)) = \varnothing$ and thus
the Schwartz kernel of the operator
$$
A_h = (S_h
- \Id) (\Id - \Op_h(\chi_R))
$$
is a sum of terms of the form \eqref{eq:Sh-at-fiber-infinity} plus
terms of order $O(h^\infty)$, and we thus focus our attention on terms
as in \eqref{eq:Sh-at-fiber-infinity}.
By Remark \ref{thm:switch-y-yprime}, we may take $F_2$ to be as in
\eqref{eq:Sh-at-fiber-infinity-y-dependent} with phase function $\Phi = (\sphvar - \sphvar') \cdot \dspharv
+ G(\sphvar, \dspharv)$, where the functions $G$ which appear in $F_2$ have the asymptotics $G =
a(\sphvar, \wt{\dspharv})|\dspharv|^{1 - \alpha} + O(|\dspharv|^{-\alpha})$. For a constant $\delta
> 0$, we consider two
asymptotic regimes
\begin{equation}
\label{eq:regimes}
\begin{split}
\mbox{regime I:}\quad |\dspharv| h^{1/(\alpha - 1)} \ge \delta &\quad \mbox{ here }
e^{iG(\sphvar, \dspharv)/ h} \mbox{ is \textit{not} oscillatory} \\
\mbox{regime II:} \quad |\dspharv| h^{1/(\alpha - 1)} < \delta &\quad \mbox{ here }
e^{ i G(\sphvar, \dspharv)/ h} \mbox{ is oscillatory.}
\end{split}
\end{equation}
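Indeed (a quick check of the threshold, using only the leading behavior $G \approx a(\sphvar, \wt{\dspharv}) |\dspharv|^{1 - \alpha}$): at $|\dspharv| = \delta h^{1/(1 - \alpha)}$ one has
\begin{equation*}
\frac{G(\sphvar, \dspharv)}{h} \approx \frac{|\dspharv|^{1 - \alpha}}{h} = \frac{\delta^{1 - \alpha} \, h}{h} = \delta^{1 - \alpha},
\end{equation*}
which is bounded in $h$; for larger $|\dspharv|$ (regime I) the phase stays bounded, while for smaller $|\dspharv|$ (regime II) it may be large.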
As in the discussion preceding the proof, we use functions of the
semiclassical Laplacian
$$
P_h := h^2 \Delta_{\mathbb{S}^{d -1}}.
$$
Let $\chi\colon \mathbb{R}^+ \lra \mathbb{R}$ be a bump function with
$\chi(r) = 1$ for $r \le 1/2$, $\chi \ge 0$, and $\supp \chi \subset
[0, 1]$, and write
\begin{equation}
\label{eq:Ah-bustup}
A_h = A_{h, 1} + A_{h, 2},
\end{equation}
where
$$ A_{h, 1} := A_h \chi((\epsilon
h)^{2/(\alpha - 1)} P_h) , \qquad
A_{h, 2} := A_h (\Id - \chi((\epsilon
h)^{2/(\alpha - 1)} P_h)).
$$
We analyze these two operators
separately.
For $A_{h, 1}$, we begin by
pointing out that there exist subspaces $V_h\subset L^2$ with $\dim V_h
< \epsilon^{- \gamma} h^{-\alpha \gamma}$ such that
$\chi((\epsilon h)^{2/(\alpha - 1)} P_h) \rvert_{V_h^\perp} \equiv
0$. Indeed, by the Weyl asymptotic formula
\eqref{eq:Weyl-asymptotics}, the direct sum of the
eigenspaces of $(\epsilon h)^{2/(\alpha - 1)} P_h$ with eigenvalue
less than $1$, which we take to be $V_h$, satisfies
\begin{equation}\label{eq:count-of-big-phase-shifts}
\dim V_h \le c ((\epsilon h)^{- 1/(\alpha - 1)} h^{-1})^{d - 1} = c
\epsilon^{-\gamma} h^{-\alpha \gamma}.
\end{equation}
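(For the reader's convenience, the exponent bookkeeping, recalling that $\gamma = (d-1)/(\alpha - 1)$, is
\begin{equation*}
\big( (\epsilon h)^{-1/(\alpha - 1)} h^{-1} \big)^{d - 1}
= \epsilon^{-\frac{d-1}{\alpha - 1}} \, h^{-\frac{d-1}{\alpha - 1} - (d - 1)}
= \epsilon^{-\gamma} \, h^{-\alpha \gamma},
\end{equation*}
since $\frac{d-1}{\alpha - 1} + (d-1) = \frac{\alpha(d-1)}{\alpha - 1} = \alpha\gamma$.)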
Thus $A_{h, 1}$ is also identically
zero off $V_h$.
Now consider $A_{h, 2} := A_h (\Id - \chi((\epsilon h)^{2/(\alpha - 1)}
P_h))$. We claim that there exists a constant $C > 0$ independent of
$h$ and $\epsilon$ such that
\begin{equation}\label{eq:epsilon-norm-bound}
\| A_{h, 2} \| \le C \epsilon.
\end{equation}
The Schwartz kernel of the operator $\chi((\epsilon
h)^{2/(\alpha - 1)} P_h)$ is given by finite sums of terms
\begin{equation}\label{eq:microlocal-function-of-Lapl}
(2 \pi h)^{-(d -1)} \int e^{i (\sphvar - \sphvar')\cdot \dspharv/h }\wt{\chi}(\sphvar, \sphvar', \dspharv (\epsilon h)^{1/(\alpha - 1)}, h) d\dspharv,
\end{equation}
where $\wt{\chi}(\sphvar, \sphvar', \wt{\dspharv}, h)$ is a semiclassical symbol of
order zero with $\wt{\chi} \rvert_{|\wt{\dspharv}| \ge 1} = O(h^\infty)$.
The operator $A_{h, 2}$ is given by terms of the form
\begin{equation}
\label{eq:non-oscillatory}
\begin{split}
& (2\pi h)^{- (d - 1)}\int e^{i (\sphvar - \sphvar')\cdot \dspharv/h + iG(\sphvar, \dspharv)/h} a(\sphvar, \sphvar', \dspharv, h)
(1 - \wt{\chi}(\sphvar, \sphvar', \dspharv (\epsilon h)^{1/(\alpha - 1)}, h) )
d\dspharv \\
& \qquad - (2\pi h)^{- (d - 1)} \int e^{i (\sphvar - \sphvar')\cdot \dspharv/h }(1 - \wt{\chi}(\sphvar, \sphvar', \dspharv
(\epsilon h)^{1/(\alpha - 1)}, h)) d\dspharv.
\end{split}
\end{equation}
This requires some explanation. To compute the composition we must
compose an operator whose Schwartz kernel is an oscillatory integral
as in \eqref{eq:Sh-at-fiber-infinity-y-dependent}, call it $I_h(y, y')$, with an operator
whose Schwartz kernel is an oscillatory integral of the form
\eqref{eq:microlocal-function-of-Lapl}. This is done by arguing along
the lines in Appendix \ref{sec:composition}, where in particular we
see that the composition of two such oscillatory integrals is given by
$\int I_h(y, y'') \wt{I}_h(y'', y') |dy''|$. The situation here is substantially simpler
since the operator on the right is a semiclassical pseudo, and the
expression above is obtained easily from stationary phase. (Again, see Appendix
\ref{sec:composition}.)
Setting $\wt{\dspharv} = \dspharv (\epsilon h)^{1/(\alpha - 1)}$, $\wt{h} = (\epsilon h)^{1/(\alpha - 1)}
h$ and
factoring gives
\begin{equation}
\label{eq:change-of-variables}
(2\pi \wt{h})^{- (d - 1)} \int e^{i (\sphvar -
\sphvar')\cdot \wt{\dspharv} / \wt{h} } \wt{a}(\sphvar, \sphvar', \wt{\dspharv}, \wt{h}) d\wt{\dspharv},
\end{equation}
where, letting $h$ remain as a function of $\wt{h}$ and $\epsilon$ for the moment,
\begin{equation*}
\begin{split}
\wt{a}(\sphvar, \sphvar', \wt{\dspharv}, \wt{h}) &= \lp \exp( iG(\sphvar, \wt{\dspharv} (\epsilon
h)^{-1/(\alpha - 1)})/h)\, a(\sphvar, \sphvar', \wt{\dspharv} (\epsilon
h)^{-1/(\alpha - 1)}, h) - 1 \rp \\
& \qquad \times (1 - \wt{\chi}(\sphvar, \sphvar', \wt{\dspharv}, h)
).
\end{split}
\end{equation*}
The immediate effect of this change of variables is that
\begin{equation} \label{eq:F}
\begin{split}
F &= \exp( iG(\sphvar, \wt{\dspharv} (\epsilon h)^{-1/(\alpha - 1)})/h)
\times (1 - \wt{\chi}(\sphvar, \sphvar', \wt{\dspharv}, h))
\end{split}
\end{equation}
satisfies $F - (1 -
\wt{\chi}) \in \epsilon S^{1 -
\alpha}$, i.e.\ it is $\epsilon$ times a semiclassical symbol of order $1 - \alpha$
as a function of $\wt{\dspharv}$ and $\wt{h}$ (or of $\wt{\dspharv}$ and $h$, for
that matter).
Indeed, recalling that the support of $1 - \wt{\chi}$ is contained in $\{
|\wt{\dspharv}| \ge 1 \}$, and thus $|\wt{\dspharv}|^{-\delta}( 1 -\wt{\chi})$ is
bounded for any $\delta > 0$, we claim first that, with notation as
in \eqref{eq:G-decomposition},
$$
(1 - \wt{\chi})\wt{G}/ h = (1 - \wt{\chi}) (g(\sphvar, \wt{\dspharv})
\epsilon |\wt{\dspharv}|^{1 - \alpha} + \wt{g}(\sphvar, \sphvar', \wt{\dspharv}(\epsilon h)^{-1/(\alpha -
1)})/h)
$$
satisfies that $(1 - \wt{\chi})\wt{g} /h \in \epsilon S^{1 - \alpha -
\epsilon}$ as a function of $\wt{\dspharv}$. Indeed,
\begin{equation}
\label{eq:symbol-estimates-2}
\begin{split}
| \partial_{\sphvar, \sphvar'}^\alpha \partial_{\wt{\dspharv}}^\beta (1 - \wt{\chi})\wt{g} /h |
\le c h^{-1} \sum_{\alpha' \le \alpha, \beta' \le \beta} | \partial_{\sphvar,
\sphvar'}^{\alpha - \alpha'} \partial_{\wt{\dspharv}}^{\beta - \beta'}(1 -
\wt{\chi}) \, \partial_{\sphvar,
\sphvar'}^{\alpha' } \partial_{\wt{\dspharv}}^{\beta'}\wt{g} |,
\end{split}
\end{equation}
and while for $\beta' \neq \beta$, $\partial_{\wt{\dspharv}}^{\beta - \beta'} (1 -
\wt{\chi})$ is compactly supported in $\wt{\dspharv}$, the symbol
estimates for $\wt{g}$ give that for $\beta \neq 0$
\begin{equation}\label{eq:change-of-variables-symbol-estimate}
\begin{split}
|(1 - \wt{\chi}) \partial_{\wt{\dspharv}}^{\beta}\wt{g}| &\le (1 -
\wt{\chi}) (\epsilon
h)^{1/(1 - \alpha)} \la \wt{\dspharv} (\epsilon h)^{-1/(\alpha - 1)}
\ra^{1 - \alpha - \epsilon - |\beta|} \\
&\le (1 - \wt{\chi}) (\epsilon
h)^{1/(1 - \alpha) -(1 - \alpha - \epsilon - |\beta|) /(\alpha - 1)}\lp |\wt{\dspharv}
|^2 + (\epsilon h)^{2/(\alpha - 1)} \rp^{(1 - \alpha - \epsilon -
|\beta|) / 2}\\
&\le (\epsilon
h) \la \wt{\dspharv}
\ra^{1 - \alpha - \epsilon - |\beta|},
\end{split}
\end{equation}
where in the last line we used that $1 - \wt{\chi}$ is supported in
$|\wt{\dspharv}| \ge 1$, while for $\beta = 0$
we have
$$
|(1 - \wt{\chi})\, \wt{g}(\sphvar, \sphvar', \wt{\dspharv} (\epsilon h)^{-1/(\alpha -
1)})/h| \le C (1 - \wt{\chi})\, h^{-1} \la \wt{\dspharv} (\epsilon h)^{-1/(\alpha -
1)} \ra^{1 - \alpha - \epsilon} < C \epsilon.
$$
The estimates in \eqref{eq:symbol-estimates-2} and
\eqref{eq:change-of-variables-symbol-estimate} together show that $(1 -
\wt{\chi})\wt{G}/ h \in \epsilon S^{1 - \alpha}$, and thus $F - (1 - \wt{\chi})
\in \epsilon S^{1 - \alpha}$.
Moreover,
$$
a(\sphvar, \sphvar', \wt{\dspharv} (\epsilon h)^{-1/(\alpha -
1)}, h) = 1 + b(\sphvar,
\sphvar', \wt{\dspharv} (\epsilon h)^{-1/(\alpha - 1)}, h)
$$
remains a symbol of order $0$ and $b$ a symbol of order
$1-\alpha$, and as in the case of $F$, $b(\sphvar,
\sphvar', \wt{\dspharv} (\epsilon h)^{-1/(\alpha - 1)}, h) \in \epsilon S^{1 -
\alpha}$ in $\wt{\dspharv}$.
Thus
$$
\wt{a} = (F - (1 - \wt{\chi})) + F b
$$
is $\epsilon$ times a semiclassical symbol of order $1 - \alpha$ whose
derivatives in $\sphvar, \sphvar'$ and $\wt{\dspharv}$ are uniformly bounded. In
particular Calderon-Vaillancourt (see \eqref{eq:CV-norm} and below) gives
\eqref{eq:epsilon-norm-bound}.
To finish the proof, given $\epsilon$ we take $\epsilon' = \epsilon /
C$ with $C$ in \eqref{eq:epsilon-norm-bound} and use $\epsilon'$ in
the arguments above to see that in $A_h = A_{h, 1} + A_{h, 2}$ (see
\eqref{eq:Ah-bustup}), $A_{h, 1}$ has at most $c \epsilon^{- \gamma}
h^{- \alpha \gamma}$ eigenvalues at distance $\epsilon$ from $1$ while
$A_{h, 2}$ is norm bounded by $\epsilon$. This is exactly the desired result.
\end{proof}
\subsection{Proof of Propositions~\ref{thm:trace-class} and \ref{cor:traceclass}}
\begin{proof}[Proof of Proposition~\ref{thm:trace-class}] Let $p \in \mathbb{N}$. Let
$$
A_{h}(p) = \{ e^{2i \beta_{h, n}} \in \spec S_h : 2^{-(p -1)} \ge
|e^{2 i
\beta_{h, n}} - 1| > 2^{-p} \},
$$
where elements are included with multiplicity. Then taking $\epsilon =
2^{-p}$ in \eqref{eq:away-from-1} gives
\begin{equation}
\label{eq:4}
|A_{h}(p) | \le c 2^{p(d - 1)/(\alpha - 1)} h^{- \alpha (d - 1)/(\alpha - 1)}.
\end{equation}
The pairing of $f$ with $\mu_h$ is given by
\begin{equation}
\label{eq:2}
\begin{split}
\la \mu_h , f \ra &= h^{\alpha (d - 1)/(\alpha - 1)} \sum_{\spec S_h} f(e^{2i \beta_{h, n}})
= h^{\alpha (d - 1)/(\alpha - 1)} \sum_{p = 0}^{\infty} ( \sum_{A_{h}(p)} f(e^{2i \beta_{h, n}}) ) .
\end{split}
\end{equation}
But,
\begin{equation}
\label{eq:3}
\begin{split}
| \sum_{A_h(p)} f(e^{2i \beta_{h, n}}) | &\le
\norm[w]{f} \sum_{A_h(p)} | e^{2i \beta_{h, n}} - 1| \\
&\le \norm[w]{f} 2^{-p} |A_h(p) | \\
&\le c \norm[w]{f} 2^{-p} (2^{p(d - 1)/(\alpha - 1)} h^{- \alpha (d - 1)/(\alpha - 1)}) \\
&\le c \norm[w]{f} h^{- \alpha (d - 1)/(\alpha - 1)} 2^{-p + p (d - 1)/(\alpha - 1)}.
\end{split}
\end{equation}
Thus
\begin{equation}
\label{eq:5}
\la \mu_h , f \ra \le c h^{\alpha (d - 1)/(\alpha - 1)} \norm[w]{f} h^{- \alpha (d - 1)/(\alpha - 1)} \sum_{p =
0}^\infty 2^{p((d - 1)/(\alpha - 1) - 1) } \le c \norm[w]{f} \sum_{p =
0}^\infty 2^{p( (d - 1)/(\alpha - 1) - 1) },
\end{equation}
and the above is summable if and only if
$$
1 > \frac{d - 1}{\alpha - 1} \iff \alpha > d,
$$
which is exactly our assumption on $\alpha$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{cor:traceclass}] It is enough to prove for $k=1$, as for any other value of $k$, we can write $S_h^k - \Id$ as the product of $S_h - \Id$ with a bounded operator. For $k=1$, Proposition~\ref{thm:rough-eigenvalue-asymptotics} shows that the number of eigenvalues $z$ (counted with multiplicity) of $S_h$ such that $|z-1| \in [2^{-j}, 2^{-j+1}]$ is bounded by $C h^{-\alpha \gamma} 2^{j\gamma}$. Since $\gamma < 1$, we can sum $2^{-j+1} \times C h^{-\alpha \gamma} 2^{j\gamma}$ over $j \in \NN$, which is a bound for the sum of $|z-1|$ over all eigenvalues $z$. It follows that the trace norm of $S_h - \Id$ is finite.
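Explicitly, the summation is an elementary geometric series (recorded here for completeness):
\begin{equation*}
\sum_{j = 1}^{\infty} 2^{-j + 1} \, C h^{-\alpha\gamma} 2^{j \gamma}
= 2 C h^{-\alpha\gamma} \sum_{j = 1}^{\infty} 2^{-j(1 - \gamma)}
= \frac{2 C h^{-\alpha\gamma}}{2^{1 - \gamma} - 1} < \infty,
\end{equation*}
so the trace norm of $S_h - \Id$ is in fact $O(h^{-\alpha\gamma})$.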
\end{proof}
\section{Trapping energies}\label{sec:trapping}
Suppose now that $E$ is a trapping energy for the potential $V$. In this case, we write the scattering matrix $S_h(E)$ as the scattering matrix $\tilde S_h(E)$ for a different potential $\tV$, which is nontrapping at energy $E$, plus a small remainder. We can choose the potential $\tV$ to be equal to $V$ near infinity. To do this, we first choose a function $\phi \in C_c^\infty(\RR_+)$, equal to $1$ in a neighbourhood of $0$, and monotone nonincreasing. Then $\tV := V + 2E \phi(|x|/R)$ will be nontrapping at energy $E$, for sufficiently large $R$.
We then express the scattering matrix $S_h(E)$ in terms of $\tilde S_h(E)$.
To do this, we follow \cite[Section 8B]{GHS}. Let $R_h = (h^2\Delta +
V - (E+i0))^{-1}$ and $\tilde R_h = (h^2\Delta + \tV - (E+i0))^{-1}$
be the outgoing resolvents for the unperturbed and perturbed
potential, respectively. Also, let $\chi_i$, $i = 1, 2, 3$ be cutoff
functions supported near infinity in $\RR^n$, equal to $1$ for $|x|
\geq 2R$ and $0$ for $|x| \leq R$, such that $\chi_i \chi_j = \chi_j$
when $j < i$. Then, following the derivation of \cite[Equation
(8-7)]{GHS}, i.e.\ taking $\textsc{H} = \Delta + V$ and $\lambda =
E/h^2$ in that equation, we obtain
\begin{equation}
\chi_2 R_h \chi_1 = \chi_2 \tilde R_h \chi_1 + \chi_2 \tilde R_h [\chi_3, h^2 \Delta + V] R_h [h^2 \Delta + V, \chi_2] \tilde R_h \chi_1.
\label{res-identity}\end{equation}
These Schwartz kernels are defined on $\RR^d_x \times \RR^d_{x'} \times (0, h_0]_h$.
As discussed in \cite{HW2008}, if we use polar coordinates $x = (r,
\omega)$, $x' = (r', \omega')$, multiply the kernel of $R_h$ by
${r'}^{(d-1)/2}$ and take the limit $r' \to \infty$, we obtain the
Poisson kernel $P_h(E)$, which is a function of $(x, \omega', h)$. If
we then multiply the Poisson kernel $P_h(E)$ by $r^{(d-1)/2}$ and take
the (distributional) limit $r \to \infty$, we obtain the kernel of the
absolute scattering matrix; multiplying by $i^{(d-1)/2}$ and composing
with the antipodal map $A$, we obtain the scattering matrix $S_h(E)$
as we have normalized it. The same operations applied to $\tilde R_h$
produce $\tilde P_h(E)$ and $\tilde S_h(E)$. Applying these operations
to \eqref{res-identity}, we obtain
\begin{equation}
S_h(E) = \tilde S_h(E) + i^{(d-1)/2} A \tilde P^*_h(E) [\chi_3, h^2 \Delta + V] R_h [h^2 \Delta + V, \chi_2] \tilde P_h(E) .
\label{sm-identity}\end{equation}
For brevity, we write this in the form
\begin{equation}
S_h(E) = \tilde S_h(E) + B_h(E) ;
\end{equation}
clearly $B_h(E)$ is a uniformly bounded family of operators on $L^2(\mathbb{S}^{d-1})$.
Our previous arguments apply to $\tilde S_h(E)$, since $E$ is a nontrapping energy for $\tV$. So it suffices to show that the perturbation $B_h(E)$ has no effect on the weak-$*$ limit $\tilde \mu$ of the measures $\tilde \mu_h$ associated to $\tilde S_h(E)$, as $h \to 0$.
To show this, we now cut off to small and large frequencies using a cutoff $\chi(h^2 \Delta_{\mathbb{S}^{d-1}})$, where $\chi(t)$ is compactly supported, and identically $1$ near $t=0$. For simplicity we write this operator simply as $\chi$. Thus we write
\begin{equation}
S_h(E) = \chi \tilde S_h(E) + \chi B_h(E) + (\Id - \chi) \tilde S_h(E) + (\Id - \chi) B_h(E).
\label{S-decomp}\end{equation}
The first term is an FIO with compact microsupport, hence has trace norm bounded by $C h^{-(d-1)}$. The second term also has trace norm bounded by $C h^{-(d-1)}$, since this is true of $\chi$ which is also an FIO with compact microsupport. The third term is the principal term, and the fourth we bound using wavefront set results.
In fact, according to \cite{HW2008}, the semiclassical wavefront set of $\tilde P^*_h(E)$ is contained in
$$
\{ (\omega, \eta; x, \xi) \mid \text{ the bicharacteristic through } (x, \xi) \text{ has asymptotic } t \mapsto \eta + \omega (t - t_0), t \to \infty \}
$$
when the point $x$ is restricted to a fixed compact set. Now consider the composition $(\Id - \chi) A \tilde P^*_h(E) [\chi_3, h^2 \Delta + V]$. Composition on the right with $[\chi_3, h^2 \Delta + V]$ restricts the wavefront set to points $x \in \supp \nabla \chi_3$, that is, to $x$ lying in some fixed compact set in $\RR^d$. On the other hand, composition on the left with $(\Id - \chi)$ restricts the wavefront set to points $(\omega, \eta)$ in the support of the symbol of $\Id - \chi$.
By choosing $\chi$ suitably, we can arrange that this support is contained in $|\eta| \geq R'$ for $R'$ arbitrary. By choosing $R'$ sufficiently large, we arrange that the wavefront set of $(\Id - \chi) A \tilde P^*_h(E) [\chi_3, h^2 \Delta + V]$ vanishes. That implies that the Schwartz kernel of this operator is smooth and $O(h^\infty)$. The trace norm of the fourth term in \eqref{S-decomp} is therefore $O(h^\infty)$.
Now consider all the terms in $S_h(E)^k - \Id$, where $S_h(E)$ is decomposed according to \eqref{S-decomp}. The main term, $\big( (\Id - \chi) \tilde S_h(E) \big)^k - \Id$, is treated as in Sections~\ref{sec:the-scattering-matrix} -- \ref{sec:eigenvalue-distribution}. All other terms have trace norm bounded by $O(h^{-(d-1)})$, and therefore their contribution to
$h^{\gamma \alpha} \Trace (S_h(E)^k - \Id)$ vanishes in the limit $h \to 0$.
This completes the proof of the Main Theorem in the case of a trapping energy.
\begin{appendix}
\section{Regularity of the sojourn map}\label{sec:soujourn}
In this appendix we prove Proposition~\ref{thm:sojourn-near-infty} and
Lemma~\ref{lem:S-structure}. Our first task is to determine the
regularity of the Legendre submanifold $L$ \eqref{eq:total-sojourn-relation} as $|\eta| \to
\infty$. To do this, we use the fact that $L$ is the boundary value of
a Legendre submanifold $\SR$ over a space of dimension one greater which is a
bicharacteristic flowout, that is, the union of bicharacteristic
rays. We start by defining some spaces of conormal functions, and then
proceed to describe $\SR$ and its ambient contact manifold.
This process will use the language, developed by Melrose \cite{damwc, tapsit}, of analysis on manifolds with corners. Though some of
this is quite involved, we will provide some brief explanations and
definitions for the
convenience of the reader.
\subsection{Conormal regularity of solutions to ODEs}\label{subsec:conormal}
Let $M$ be a manifold with corners, with boundary hypersurfaces
$H_1,\dots, H_m$ and boundary defining functions $\rho_1, \dots,
\rho_m$ respectively \cite{damwc}. Thus the boundary $\partial M$ is equal
to the union of the $H_i$, and for each $i$, $\rho_i$ is a
non-negative, smooth function on $M$ with $H_i = \{ \rho_i = 0\}$ and $d \rho_i \neq 0$ on $H_i$. Let $\rho = \rho_1 \cdots \rho_m$
be the product
of boundary defining functions. We say that a vector field
$\mathcal{V}$ on $M$ is a b-vector field if it is smooth, and tangent
to each boundary hypersurface $H_i$, or equivalently if
$\mathcal{V}(\rho_i) = O(\rho_i)$ for each $i$.
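For example (a standard model case, included only for orientation): on the corner $M = [0, \infty)_{x_1} \times [0, \infty)_{x_2} \times \RR^k_y$, the b-vector fields are exactly the smooth combinations
\begin{equation*}
a_1(x, y)\, x_1 \partial_{x_1} + a_2(x, y)\, x_2 \partial_{x_2} + \sum_{j} b_j(x, y)\, \partial_{y_j},
\end{equation*}
since these, and only these, smooth vector fields are tangent to both boundary hypersurfaces $\{x_1 = 0\}$ and $\{x_2 = 0\}$.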
Let $\underline{\epsilon} = (\epsilon_1, \dots, \epsilon_m) \in \mathbb{R}^m$ be a multiweight, one for each boundary hypersurface of $M$. The ($L^\infty$-based) space of conormal functions with weight $\underline{\epsilon}$, $\mathcal{A}^{\underline{\epsilon}}(M)$, is defined as follows:
\begin{equation}
\mathcal{A}^{\underline{\epsilon}}(M) = \left\{ f \in
C^\infty(M^\circ) \mid
\begin{array}{c}
\rho^{-\underline{\epsilon}} f \in L^\infty(M) \mbox{ and } \rho^{-\underline{\epsilon}} \mathcal{V}_1
\dots \mathcal{V}_k f \in L^\infty(M) \\
\text{ for any $k$ b-vector
fields } \mathcal{V}_1, \dots, \mathcal{V}_k
\end{array}
\right\}.
\end{equation}
Here $\rho^{\underline{\epsilon}}$ is shorthand notation for the product $\rho_1^{\epsilon_1} \dots \rho_m^{\epsilon_m}$.
That is, $f \in \rho^{\underline{\epsilon}}L^\infty(M)$, and remains
in this space under repeated differentiations by b-vector fields on
$M$. The space $\mathcal{A}^{\underline{\epsilon}}$ is a Frechet
space whose metric we describe below for a simple example.
We also use the notation
$$C^{\infty, \underline{\epsilon}}(M) = C^\infty(M) +
\mathcal{A}^{\underline{\epsilon}}(M).
$$
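As a simple illustration (our own toy example, purely for orientation): take $M = [0, 1]_x$ with the single boundary defining function $\rho = x$, and let $0 < \epsilon < 1$. Then
\begin{equation*}
(x \partial_x)^k \, x^{\epsilon} = \epsilon^{k} x^{\epsilon} \in x^{\epsilon} L^\infty([0, 1]) \quad \text{for every } k,
\end{equation*}
so $x^{\epsilon} \in \mathcal{A}^{\epsilon}([0, 1])$, while $1 + x^{\epsilon} \in C^{\infty, \epsilon}([0, 1])$ fails to be smooth up to the boundary.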
The regularity condition of our potential $V$ in the Main
Theorem can be phrased in terms of the above spaces; the assumption on $V$ can be expressed in terms of the
radial compactification $\overline{\RR^d}$ of $\RR^d$, where $1/r$ is
taken as the boundary defining function at the `sphere at
infinity,' and is equivalent to assuming that for some $0 < \epsilon < 1$, $r^\alpha V \in C^{\infty,
\epsilon}(\overline{\RR^d})$. (Equivalently, $V \in r^{-\alpha}C^{\infty,
\epsilon}(\overline{\RR^d})$.) We abuse notation slightly by
defining a smooth map $u = (u_1, \dots,
u_n)$ on $M^\circ$ with values in $\mathbb{C}^n$ to lie in
$\mathcal{A}^{\underline{\epsilon}}(M)$ if and only if its components
do.
We note, for later use, the following result. The proof is straightforward and omitted.
\begin{lemma}\label{lem:inverses} \
(i) If $f \in \Cinfep(M)$ is bounded away from zero, then $1/f \in \Cinfep(M)$.
(ii) If $S: M \to N$ is a b-map\footnote{This means that the inverse
image of every boundary defining function on $N$ is a product of
boundary defining functions on $M$, times a smooth non-vanishing function. An invertible b-map induces, in particular, a bijection between the codimension $k$-faces of $M$ and the codimension $k$-faces of $N$.} between manifolds with corners $M$ and $N$ such that all components of $S$ have regularity $\Cinfep(M)$, and $S$ is invertible in the sense that it is invertible as a map and its Jacobian determinant is bounded away from zero, then the inverse map has regularity $\Cinfep(N)$.
(iii) Let $\gamma_1, \dots, \gamma_m$ be positive exponents, and suppose that $\epsilon > 0$ is sufficiently small (relative to the $\gamma_i$). Then the statements (i) and (ii) above also hold if the space $\Cinfep$ is replaced by $C^\infty + \Pi \rho_i^{\gamma_i} \Cinfep$.
\end{lemma}
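As a concrete instance of (i) (again a toy example of our own): on $[0, 1]_x$ the function $f = 1 + x^{\epsilon}$ is bounded away from zero, and
\begin{equation*}
\frac{1}{f} = 1 - \frac{x^{\epsilon}}{1 + x^{\epsilon}} \in C^{\infty}([0,1]) + \mathcal{A}^{\epsilon}([0, 1]) = C^{\infty, \epsilon}([0, 1]),
\end{equation*}
since $x^{\epsilon}/(1 + x^{\epsilon})$ is bounded by $x^{\epsilon}$ and remains so under repeated application of $x \partial_x$.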
It is well known that solutions of ODEs
$$
\frac{dy}{dt} = F(y,t), \quad y(0) = y_0
$$
are smooth if $F$ is smooth, and $y$ also depends smoothly on the
initial condition $y_0$. See e.g.\ Hartman \cite[Chapter 5]{Hartman}.
Here, we note the following variant of this standard result. We find it convenient
to write the ODE in terms of a b-derivative, $t \partialartial_t$.
\begin{proposition}\label{prop:conormal-regularity}
Consider the ODE
\begin{equation}\begin{aligned}
t \frac{dz}{dt} &= F(z, s, t) \qquad \ z(0) = z_0, \\
t \frac{ds}{dt} &= s G(z, s, t) \qquad s(0) = s_0 > 0.
\end{aligned}\label{ode}\end{equation}
for $z \in \RR^p$ and $s \in \RR_+$.
(i) Suppose that $F, G \in \mathcal{A}^{\beta_1, \beta_2}(\RR^{p}_{z} \times \RR^+_{s}
\times \RR^+_t)$, $\beta_i > 0$, where $\beta_1$ refers to the
$s$ variable and $\beta_2$ to $t$. Then the solution $z = z(z_0, s_0,
t), s = s(z_0, s_0, t)$ satisfies
$$
z(z_0, s_0, t) - z_0, \quad \frac1{s_0} (s(z_0, s_0, t) - s_0) \in
\mathcal{A}^{\beta_1, \beta_2}(\RR^{p}_{z_0}
\times \RR^+_{s_0} \times \RR^+_t)$$
locally near $t = 0$.
(ii) Suppose that $F, G \in t C^\infty(\RR^{p}_{z} \times \RR^+_{s}
\times \RR^+_t) + \mathcal{A}^{\beta_1, \beta_2}(\RR^{p}_{z} \times \RR^+_{s}
\times \RR^+_t)$, $\beta_i > 0$. Then the solution $z = z(z_0, s_0,
t), s = s(z_0, s_0, t)$ satisfies
$$
z - z_0, \frac1{s_0} (s- s_0) \in
t C^\infty(\RR^{p}_{z_0} \times \RR^+_{s_0}
\times \RR^+_t) + \mathcal{A}^{\beta_1, \beta_2}(\RR^{p}_{z_0} \times \RR^+_{s_0}
\times \RR^+_t)$$
locally near $t = 0$.
(iii) Let $\beta_i = \gamma_i + \epsilon$, where $\gamma_i > 0$ and $\epsilon$ is sufficiently small. Suppose that
$F, G \in t C^\infty(\RR^{p}_{z} \times \RR^+_{s}
\times \RR^+_t) + s^{\gamma_1} t^{\gamma_2} C^\infty(\RR^{p}_{z} \times \RR^+_{s}
\times \RR^+_t) + \mathcal{A}^{\beta_1, \beta_2}(\RR^{p}_{z} \times \RR^+_{s}
\times \RR^+_t)$. Then the solution $z = z(z_0, s_0,
t), s = s(z_0, s_0, t)$ satisfies
$$
z - z_0, \frac1{s_0} (s- s_0) \in
t C^\infty(\RR^{p}_{z_0} \times \RR^+_{s_0}
\times \RR^+_t) + s_0^{\gamma_1} t^{\gamma_2} C^\infty(\RR^{p}_{z_0} \times \RR^+_{s_0}
\times \RR^+_t) + \mathcal{A}^{\beta_1, \beta_2}(\RR^{p}_{z_0} \times \RR^+_{s_0}
\times \RR^+_t)
$$
locally near $t = 0$.
\end{proposition}
\begin{proof}
We start by making some reductions. We first let $\tilde z = z - z_0$ and $\tilde s = \log(s/s_0)$.
Then $\tilde z$ and $\tilde s$ solve the initial value problem
\begin{equation}\begin{aligned}
t\frac{d\tilde z}{dt} &= F(\tilde z + z_0, s_0 e^{\tilde s}, t) \qquad \ \tilde z(0) = 0 \\
t\frac{d\tilde s}{dt} &= G(\tilde z + z_0, s_0 e^{\tilde s}, t) \qquad \tilde s(0) = 0.\\
\end{aligned}\label{ode2}\end{equation}
Thus, we can combine $(\tilde z, \tilde s)$ into a new variable $Z$, satisfying an equation of the form
$$
t\frac{dZ}{dt} = \FFF(Z, z_0, s_0, t), \qquad Z(0) = 0,
$$
and show conormal regularity in the $(s_0, t)$ variables.
Let $S, T$ denote the differential operators $s_0 \partial_{s_0}$ and $t \partial_t$ respectively.
To prove (i), we need to show that $S^j T^k D_{z_0}^\alpha Z(z_0, s_0,
t)$ is bounded by $C s_0^{\beta_1} t^{\beta_2}$ for all
$(j,k,\alpha)$. This is clear when $j = k = |\alpha|= 0$, directly
from a pointwise estimate on $\FFF$. We proceed by induction on
$j+k+|\alpha|$. We find that $w := S^j T^k D_{z_0}^\alpha Z(z_0, s_0,
t)$ has $i^{\text{th}}$ component satisfying an ODE of the form
$$
t \frac{dw_i}{dt} = \sum_j \frac{\partial \FFF_i}{\partial z_j} w_j + B,
$$
where $B$ is a sum of products of factors, each of which is a b-derivative of the form $S^{j'} T^{k'} D_{z_0}^{\alpha'}$ applied to $\FFF$ or $Z$, and where the total number of derivatives applied to any factor of $Z$ is strictly less than $j+k+|\alpha|$. By using an integrating factor, the bound $C s_0^{\beta_1} t^{\beta_2}$ on any b-derivative of $\FFF$, and the inductive assumption for lower-order b-derivatives of $Z$, we deduce a similar bound on $S^j T^k D_{z_0}^\alpha Z$, completing the proof.
To prove (ii), we write $\FFF = \Fsm + \FFF_c$, where $\Fsm$ is $t$ times a smooth function, and $\FFF_c$ is conormal of order $(\beta_1, \beta_2)$. We write $\Zsm$ for the solution to the ODE
\begin{equation}
t \frac{d \Zsm}{dt} = \Fsm(\Zsm, z_0, s_0, t).
\label{Zsm}\end{equation}
Then $\Zsm \in t C^\infty$ using standard ODE theory. So consider $Z - \Zsm$. This satisfies the ODE
\begin{equation}
t \frac{d (Z - \Zsm)}{dt} = \Fsm(Z, z_0, s_0, t) - \Fsm(\Zsm, z_0, s_0, t) + \FFF_c(Z, z_0, s_0, t).
\label{ZZsm}\end{equation}
It suffices to show that $Z - \Zsm$ is conormal of order $(\beta_1, \beta_2)$. We prove, by induction on $j+k+|\alpha|$, that $S^j T^k D_{z_0}^\alpha (Z - \Zsm)$ is bounded by $C s_0^{\beta_1} t^{\beta_2}$. When $j+k+|\alpha|=0$, notice that the RHS of \eqref{ZZsm} is bounded by $C |Z - \Zsm| + C s_0^{\beta_1} t^{\beta_2}$. We conclude, using an integrating factor, that $|Z - \Zsm|$ is bounded by $Cs_0^{\beta_1} t^{\beta_2}$. Now consider the b-differential operator $S^j T^k D_{z_0}^\alpha$ applied to $Z - \Zsm$. The argument is similar to part (i).
Consider the ODE satisfied by $S^j T^k D_{z_0}^\alpha (Z - \Zsm)$. On the RHS there will be a sum of products of factors of various sorts. These terms must be of one of the following types. The first type is
\begin{equation*}\begin{gathered}
\sum_j \Big( \frac{\partial \Fsm_i(Z)}{\partial z_j} S^j T^k D_{z_0}^\alpha Z_j - \frac{\partial \Fsm_i(\Zsm)}{\partial z_j} S^j T^k D_{z_0}^\alpha \Zsm_j \Big) \\
= \sum_j \Big( \frac{\partial \Fsm_i(Z)}{\partial z_j} S^j T^k D_{z_0}^\alpha (Z_j - \Zsm_j) + \Big( \frac{\partial \Fsm_i(Z)}{\partial z_j} - \frac{\partial \Fsm_i(\Zsm)}{\partial z_j} \Big) S^j T^k D_{z_0}^\alpha \Zsm_j \Big).
\end{gathered}\end{equation*}
Notice that the first term is a bounded multiple of $S^j T^k D_{z_0}^\alpha (Z_j - \Zsm_j)$, while the second is bounded in magnitude by $C |Z - \Zsm|$, and hence by $Cs_0^{\beta_1} t^{\beta_2}$.
The next type are terms that involve lower-order b-derivatives of $Z$ and $\Zsm$. All such terms include a factor that is either of the form $S^{j'} T^{k'} D_{z_0}^{\alpha'} (Z_j - \Zsm_j)$ or $(S^{j'} T^{k'} D_{z_0}^{\alpha'} \Fsm)(Z) - (S^{j'} T^{k'} D_{z_0}^{\alpha'} \Fsm)(\Zsm)$, or else involve $\FFF_c$. Using the inductive assumption, this gives an ODE of the form
$$
t \frac{dw_i}{dt} = \sum_j \frac{\partial \FFF_i}{\partial z_j} w_j + B,
$$
for $S^j T^k D_{z_0}^\alpha (Z - \Zsm)$, where $B$ is bounded by $Cs_0^{\beta_1} t^{\beta_2}$. As in part (i), we conclude that $S^j T^k D_{z_0}^\alpha (Z - \Zsm)$ is bounded by $C s_0^{\beta_1} t^{\beta_2}$.
The proof of part (iii) is similar to part (ii). We write $\FFF = \Fsm + s_0^{\gamma_1} t^{\gamma_2} \FFF_\gamma + \FFF_\beta$, where $\Fsm$ and $\FFF_\gamma$ are smooth. We first find a function $\ZZZ(z_0, s_0, t)$ that solves
\begin{equation}
t \frac{d\ZZZ}{dt} = \Fsm(\ZZZ, z_0, s_0, t) + s_0^{\gamma_1} t^{\gamma_2} \FFF_\gamma(\ZZZ, z_0, s_0, t)
\label{conormalODE}\end{equation}
up to an error which is conormal of order $(\beta_1, \beta_2)$. To do this, we start from the solution $\Zsm$ of \eqref{Zsm}, and modify it in order to solve away the term $s_0^{\gamma_1} t^{\gamma_2} \FFF_\gamma(\ZZZ, z_0, s_0, t)$ to leading order, both at $s_0=0$ and at $t=0$. We propose an ansatz of the form $\ZZZ = \Zsm +s_0^{\gamma_1} t^{\gamma_2} Z_\gamma(z_0, s_0, t)$, where $Z_\gamma$ is $C^\infty$. Let $v(z_0, t)$ be the restriction of $Z_\gamma$ to $s_0=0$, and $w(z_0, s_0)$ be the restriction to $t=0$. To simplify notation, we shall suppress the dependence of all quantities on $z_0$ from now on.
To see what the functions $v$ and $w$ must be, we substitute $\Zsm +
s_0^{\gamma_1} t^{\gamma_2} Z_\gamma(s_0, t)$ into the ODE. This gives
a polyhomogeneous expansion both as $s_0 \to 0$ and $t \to 0$, with
the first possible non-integral power $s_0^{\gamma_1}$ as $s_0 \to 0$ and $t^{\gamma_2}$ as $t \to 0$. We seek to make these powers agree on the LHS and RHS of the ODE; this will determine $v$ and $w$ uniquely.
Computing the $s_0^{\gamma_1}$ terms of the RHS and LHS of \eqref{conormalODE} and setting them equal gives
\begin{equation}
t^{\gamma_2} \Big( t \frac{dv_i}{dt} + \gamma_2 v_i \Big) = t^{\gamma_2} \Big( \sum_j \frac{\partial {\Fsm}_i(\Zsm(0, t), 0, t)}{\partial z_j} v_j + {\FFF_\gamma}_i(\Zsm(0, t), 0, t) \Big).
\end{equation}
Dividing by $t^{\gamma_2}$ gives an ODE for $v_i$ which has a smooth solution. Moreover, since $\Fsm = O(t)$, the value of $v_i$ at $t=0$ is given by
\begin{equation}
v_i(0) = \gamma_2^{-1} {\FFF_\gamma}_i(z_0, 0, 0).
\end{equation}
Similarly, the coefficient of $t^{\gamma_2}$ of the expansion at $t=0$ of the ODE is given by
\begin{equation}
s_0^{\gamma_1} \gamma_2 w_i(s_0) = s_0^{\gamma_1} \Big( \sum_j \frac{\partial {\Fsm}_i(\Zsm(s_0,0), s_0, 0)}{\partial z_j} w_j + {\FFF_\gamma}_i(\Zsm(s_0,0), s_0, 0) \Big) .
\end{equation}
Clearly this has a smooth solution $w_i(s_0)$, with $w_i(0) = \gamma_2^{-1} {\FFF_\gamma}_i(z_0, 0, 0) = v_i(0)$. Since $v(0) = w(0)$, we can find a smooth $Z_\gamma(s_0, t)$ that agrees with $v$ at $s_0=0$ and with $w$ at $t=0$. Then it is easy to check that $\Zsm +s_0^{\gamma_1} t^{\gamma_2} Z_\gamma(z_0, s_0, t)$ solves the ODE \eqref{conormalODE} up to an error term that is conormal of order $(\beta_1, \beta_2)$, provided that $\epsilon$ is sufficiently small.
To complete the proof, we look for a solution $Z'(z_0, s_0, t)$ of the ODE
$$
t\frac{dZ'}{dt} = \Big( \Fsm + s_0^{\gamma_1} t^{\gamma_2} \FFF_\gamma + \FFF_\beta \Big)(Z', z_0, s_0, t)
$$
of the form $\Zsm +s_0^{\gamma_1} t^{\gamma_2} Z_\gamma(z_0, s_0, t) + Z_\beta$. It suffices to show that $Z_\beta$ is conormal of order $(\beta_1, \beta_2)$. This is proved using exactly the same argument as in (ii) above, so we omit the details.
\end{proof}
In the course of this proof, we have essentially proved the following perturbation result:
\begin{lemma}\label{lem:comparison} Suppose that $z, s$ solve the ODE \eqref{ode}, where $F, G \in tC^\infty$. Let $\tilde F, \tilde G$ be functions in $s^{\gamma_1} t^{\gamma_2} C^\infty + \mathcal{A}^{(\beta_1, \beta_2)}$, where $\gamma_i$ and $\beta_i$ are as in Proposition~\ref{prop:conormal-regularity}, part (iii), and let $F_* = F + \tilde F$ and $G_* = G + \tilde G$.
Let $z_*$, $s_*$ solve the ODE with $F, G$ replaced with $F_*, G_*$, and with the same initial conditions as in \eqref{ode}. Then
$$
z(t) - z_*(t), \frac1{s_0}(s - s_*) \in s_0^{\gamma_1} t^{\gamma_2} C^\infty + \mathcal{A}^{(\beta_1, \beta_2)}.
$$
\end{lemma}
The last result we shall need is closely related to Proposition~\ref{prop:conormal-regularity}, but where the initial conditions are specified at a positive value of $t$, say $t=\delta$, where we suppose that $\delta > 0$ is sufficiently small that the solution exists on the time interval $t \in [0, \delta]$, and we are interested in the value at $t=0$. To state these results we need to introduce spaces of functions with different sorts of regularity in the $s$ and the $t$ variable. We write $\mathcal{A}^\beta_s C^\infty_{t, z}$ for the space of functions with conormal regularity of order $\beta$ in the $s$ variable and $C^\infty$ regularity in $t$ and $z$.
\begin{proposition}\label{prop:cr2} Let $(z, s)$ solve the ODE
\begin{equation}\begin{aligned}
t \frac{dz}{dt} &= F(z, s, t) \qquad \ z(1) = z_0, \\
t \frac{ds}{dt} &= s G(z, s, t) \qquad s(1) = s_0 > 0
\end{aligned}\label{ode3}\end{equation}
with initial conditions now at $t=1$. Then
(i) Suppose that $F, G$ are as in (i) of Proposition~\ref{prop:conormal-regularity}. Then
$$
z(z_0, s_0, t) - z_0, \quad \frac1{s_0} (s(z_0, s_0, t) - s_0) \in \mathcal{A}^{\beta_1}_{s_0} C^\infty_{t,z}
+ \mathcal{A}^{\beta_1, \beta_2}(\RR^{p}_{z_0}
\times \RR^+_{s_0} \times \RR^+_t).$$
(ii) Suppose that $F, G$ are as in (ii) of Proposition~\ref{prop:conormal-regularity}. Then
$$
z - z_0, \frac1{s_0} (s- s_0) \in C^\infty + \mathcal{A}^{\beta_1}_{s_0} C^\infty_{t,z} +
\mathcal{A}^{\beta_1, \beta_2}(\RR^{p}_{z_0} \times \RR^+_{s_0}
\times \RR^+_t).$$
(iii) Suppose that
$F, G$ are as in (iii) of Proposition~\ref{prop:conormal-regularity}. Then
$$
z - z_0, \frac1{s_0} (s- s_0) \in C^\infty + s_0^{\gamma_1} (C^\infty + t^{\gamma_2} C^\infty) +
\mathcal{A}^{\beta_1}_{s_0} \Big( C^\infty_{t,z} + t^{\gamma_2} C^\infty_{t,z} + \mathcal{A}^{\beta_2}_t C^\infty_z \Big).
$$
\end{proposition}
The proof is essentially identical to that of Proposition~\ref{prop:conormal-regularity}, and so is omitted. The only difference is that, instead of integrating from $t=0$, we integrate from $t=1$, so that, for example, when we integrate a term of the form $t^{\gamma_2} C^\infty_t$ with respect to $dt/t$, we only get $C^\infty_t + t^{\gamma_2} C^\infty_t$ rather than just $t^{\gamma_2} C^\infty_t$, accounting for the extra terms in Proposition~\ref{prop:cr2} compared to Proposition~\ref{prop:conormal-regularity}.
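To illustrate the remark with an elementary computation (ours, not taken from the argument above): integrating a term $s^{\gamma_2}$ against $ds/s$ from $t$ to $1$ gives
\begin{equation*}
\int_t^1 s^{\gamma_2} \, \frac{ds}{s} = \frac{1 - t^{\gamma_2}}{\gamma_2} \in C^\infty_t + t^{\gamma_2} C^\infty_t,
\end{equation*}
whereas integrating from $0$ to $t$ gives $t^{\gamma_2}/\gamma_2 \in t^{\gamma_2} C^\infty_t$ alone; the former situation is the one arising in Proposition~\ref{prop:cr2} and accounts for the additional smooth terms.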
A simple consequence of Proposition~\ref{prop:cr2} is
\begin{corollary} \label{cor:solat0}
In case (iii) of Proposition~\ref{prop:cr2}, the functions $z(0), s(0)/s_0$ are $C^\infty + s_0^{\gamma_1} \Cinfe$ functions of the initial data $(z_0, s_0)$.
\end{corollary}
\subsection{The sojourn relation}\label{sec:sojourn-relation}
As we describe concretely in the following subsection, according to \cite{HW2008}, the Poisson operator is a microlocal
object associated to the `sojourn relation' $\SR$. We now proceed to
describe $\SR$ and determine its regularity properties.
In this paper, we shall take the viewpoint that $\SR$ is a Lagrangian
submanifold of $T^* \RR^d \times T^* \mathbb{S}^{d-1}$, that extends
nicely to a certain compactification of this space\footnote{In
\cite{HW2008}, the sojourn relation was viewed as a Legendre
submanifold of a space with one extra dimension, with the extra
coordinate being the variable denoted $\phi$ below. Here, we take
the view that $\phi$ is a function defined on $\SR$.}
First we describe this (partial) compactification\footnote{Our partial
compactification serves to make the energy surface $\{ |\xi|^2 + V =
E \}$ compact, which is all that matters.} of $T^* \RR^d \times T^* \mathbb{S}^{d-1}$. Let $r$ denote the radial variable $|x|$ and let $y$ be local coordinates on $\mathbb{S}^{d-1}$. We write $h^{ij}(y)$ for the (dual) metric on $\mathbb{S}^{d-1}$ with respect to these local coordinates. If we write $(\lambda, \eta)$ for cotangent variables dual to $(r, y)$ on $\RR^d$, then it is natural to use $(\lambda, \mu = \eta/r)$ near infinity, as these are variables that are homogeneous of degree zero under dilations, i.e.\ remain of fixed length as $r \to \infty$. We write $\eta'$ for a cotangent variable dual to $y'$ on $T^* \mathbb{S}^{d -1}$, and scale it in the same way as $\eta$; that is, let $\mu' = \eta'/r$.
Finally, we radially compactify Euclidean space by introducing $\rho = r^{-1}$ and adding a boundary at $\rho = 0$. (However, the space is still not compact as $\mu, \mu'$ vary in $\RR^{d-1}$ and $\phi$ varies in $\RR$.)
\begin{equation}
\mbox{We denote the space with coordinates } (\rho, y, y'; \lambda, \mu,
\mu', \phi)\label{coords2} \mbox{ by $\X$.}
\end{equation}
A more invariant description of this space is
given in \cite{HW2008}, but we wish to avoid the geometric intricacies
here. The space $\X$ is a manifold with boundary, with the boundary
defined by $\{ \rho = 0 \}$, and we ignore the apparent singularity in
$\rho$ at $r = 0$ as we work in a neighborhood of $\rho = 0$.
The space $\X$ (or at least its interior) is a symplectic manifold with symplectic form $d\xi_j \wedge dx_j + d\eta'_i \wedge dy'_i$.
Let $\Vec$ be the Hamilton vector field for the Hamiltonian $|\xi|^2 +
V(r, y) - E = \lambda^2 + |\mu|^2 + V - E$, and let $\Vec' = r
\Vec$.
In the coordinates $(r, y, \lambda, \eta, y', \eta')$, $\Vec$ is given by
\begin{equation}\begin{gathered}\label{eq:Vec}
\Vec = 2\lambda \frac{\partial}{\partial r} + \frac{2h^{ij} \eta_j}{r^{2}}
\frac{\partial}{\partial y_i} + \Big( \frac{2h^{ij} \eta_i
\eta_j}{r^3} - \frac{\partial V}{\partial r}
\Big)\frac{\partial}{\partial \lambda} \\ - \Big( \frac{\partial
h^{ij}}{\partial y_k}\frac{\eta_i \eta_j}{r^2} - \frac{\partial V}{\partial y_k} \Big)
\frac{\partial}{\partial \eta_k},
\end{gathered}\end{equation}
where we sum over repeated indices. In the coordinates $(\rho, y, \lambda, \mu, y', \mu')$, $\Vec'$ is given by
\begin{equation}\begin{gathered}
\Vec' = -2\lambda \Big( \rho \frac{\partial}{\partial \rho} + \mu \cdot \frac{\partial}{\partial \mu} + \mu' \cdot \frac{\partial}{\partial \mu'} \Big)
+ 2h^{ij} \mu_i \frac{\partial}{\partial y_j} \\\qquad \ + \Big(2 h^{ij} \mu_i
\mu_j + \rho \frac{\partial V}{\partial \rho} \Big)
\frac{\partial}{\partial \lambda} - \Big( \frac{\partial
h^{ij}}{\partial y_k} \mu_i \mu_j - \frac{\partial
V}{\partial y_k} \Big) \frac{\partial}{\partial \mu_k} .
\end{gathered}\label{Vec'}\end{equation}
We now perform the operation of `blowing up' $\X$ at the submanifold
$Z = \{ \rho = 0, \mu = 0, \mu' = 0 \}$.
This operation consists of
replacing $Z$ with its inward pointing spherical normal bundle, which
turns the space $\X$ into a manifold with codimension 2 corners that
we shall denote $[\X; Z]$. This means essentially that $[\X; Z]$ is
the `minimal' manifold with corners on which the polar coordinates
$$
\trho = (\rho^2 + |\mu|^2 + |\mu'|^2)^{1/2} , \qquad \rho_B = \frac{\rho}{\trho}, \qquad \theta = (
\frac{\mu}{\trho}, \frac{\mu'}{\trho}),
$$
together with $y, y'$, extend smoothly up to all boundary faces.
It can be viewed as the geometric realization of
polar coordinates at $Z$, that is, the space on which polar coordinates are
smooth. The space $[\X; Z]$ has two boundary
hypersurfaces. One boundary hypersurface is the original boundary
$\rho = 0$, or rather the lift of this to the blown up space; we shall
denote this $B$. The other is the boundary hypersurface $\tilde Z$
created by blowup. We let $\trho$ denote \textit{any} boundary defining
function for $\tilde Z$; the above formula for $\trho$ is just an
example, as any $\trho$ satisfying the properties for bdf's (see
Section \ref{subsec:conormal}) will work, and in fact when convenient
we will take $\trho = |\mu|$ near the intersection of $B$ with
$\wt{Z}$ (the `corner') and $\trho = \rho$ in the interior of $\tilde Z$. It follows that
$\rho_B := \rho / \trho$ is a boundary defining function of $B$.
Away from $B$, coordinates near $\tilde Z$ are
\begin{equation}
\rho, \ \eta, \ \eta', \ y, \ y', \ \lambda,
\label{coords1}\end{equation}
or equivalently one can take $(\trho, \ \eta, \ \eta', \ y, \ y', \
\lambda)$. Indeed, both $\mu / \rho =
\eta$ and $\mu' / \rho = \eta'$ are bounded maps on compact subsets of
the interior of $\tilde Z$, and thus, not only can we take $\trho =
\rho$ but the above functions can be checked to yield a coordinate
patch on a tubular neighborhood $\tilde Z^{\circ} \times
[0, \epsilon)_{\trho}$.
We now write $\Vec'$ on the space $[\X; Z]$. Notice that $\rho \partial_\rho + \mu \partial_\mu + \mu' \partial_{\mu'}$ is precisely $\trho \partial_{\trho}$.
Also, we can easily check that $\partial_{y_j}$, $\partial_\lambda$
and $\trho \partial_\mu$ lift to smooth vector fields on $[\X;
Z]$. Using the assumption that $V \in \rho^{\alpha} C^{\infty,
\underline{\epsilon}}(\overline{\mathbb{R}^d}) \subset (\trho
\rho_B)^{\alpha} C^{\infty, \underline{\epsilon}}([\X ; Z])$,
where we use the notation of Section~\ref{subsec:conormal},
we compute that $\Vec'$ takes the form
\begin{equation}\begin{gathered}
\Vec' = \Big( -2\lambda \trho + O(\rho_B^\alpha \trho^\alpha C^{\infty, \underline{\epsilon}}) \Big) \partial_{\trho} +
O(\rho_B^{\alpha+1} \trho^{\alpha-1} C^{\infty, \underline{\epsilon}} ) \partial_{\rho_B} \\
+ h^{ij}\mu_i \partial_{y_j} + \Big(2 h^{ij} \mu_i
\mu_j + \rho \frac{\partial V}{\partial \rho} \Big)
\partial_\lambda + O(\rho_z C^\infty + \rho_B^\alpha \trho^{\alpha-1} C^{\infty, \underline{\epsilon}}) \partial_{\theta},
\end{gathered}\end{equation}
where the conormal coefficients of $\partial_{\trho}$ and $\partial_{\rho_B}$ come
from the $\partial_{y_k} V$ coefficients of $\partial_{\mu_k}$ in \eqref{Vec'}.
Note that $\Vec'$ is a `conormal b-vector field,' meaning it has
conormal regularity and is
tangent to both boundary hypersurfaces $B$ and $\wt{Z}$; this can be
seen directly by noting that all the $\partial_{\rho_B}$, resp.\ $\partial_{\trho}$
terms vanish at $B$, resp.\ $\wt{Z}$.
We will multiply $\Vec'$ by a function so that near $\tilde Z$ we can
use $\trho$ as a parameter for the flow. Thus for $0 < c < 1 / 2$ to be
chosen below, denote by
$\kappa$ the function on $\SR$ equal to $1$ for $|\lambda| \leq c \sqrt{E}$, and
equal to $-2(\sgn \lambda)\sqrt{E} \trho / (\Vec' \trho)$ for $|\lambda| \geq (1-c) \sqrt{E}$.
Letting $\Vec'' = \kappa \trho^{-1} \Vec'$, we have that
$\Vec''(\trho) = -2(\sgn \lambda)\sqrt{E}$ near $\cup_{\pm} \partial_{\pm}
\SR$, and it follows that for small enough $c$,
$\Vec''$ is a smooth vector field on the interior of the blown up space taking the form
\begin{equation}\begin{gathered}
\Vec'' = -2(\sgn \lambda)\sqrt{E} \partial_{\trho} +
O(\rho_B^{\alpha} \trho^{\alpha-2} C^{\infty, \underline{\epsilon}} ) \rho_B \partial_{\rho_B} \\
+ O(C^\infty + \rho_B^\alpha \trho^{\alpha-2} C^{\infty, \underline{\epsilon}}) \partial_{y, \lambda, \theta},
\end{gathered}\label{eq:time-and-rho}\end{equation}
in the region $|\lambda| > (1 - c) \sqrt{E}$, while the coefficient of
$\partial_\trho$ lies in $C^\infty + \rho_B^\alpha C^{\infty, \epsilon}$
outside this region.
Notice that $\kappa =1$ at the boundary of $\SR$, and that
$\Vec''$ is tangent to $B$, but transverse to $\tilde Z$, pointing
`inward' for $\lambda < 0$ and `outward' for $\lambda > 0$. We also
note, for future reference, that if $(\Vec^0)''$ is the corresponding
vector field for the zero potential, then
\begin{equation}\begin{gathered}
\Vec'' - (\Vec^0)'' = O(\rho_B^\alpha \trho^{\alpha-1} C^{\infty, \underline{\epsilon}})\partialartial_{\trho} +
O(\rho_B^{\alpha} \trho^{\alpha-2} C^{\infty, \underline{\epsilon}} ) \rho_B \partialartial_{\rho_B} \\
+ O(\rho_B^\alpha \trho^{\alpha-2} C^{\infty, \underline{\epsilon}}) \partialartial_{y, \lambda, \theta}.
\end{gathered}\label{VecVec0}\end{equation}
\begin{definition} We define the Lagrangian submanifold $\SR$ as follows:
we start from the `initial condition'
\begin{equation}
\partial_- \SR := \{ \trho = 0, y = y', \eta = -\eta', \lambda = -\sqrt{E} \} \subset [\X; Z],
\label{SR-}\end{equation}
written using the coordinates \eqref{coords1}, which is a submanifold of $\tilde Z$.\footnote{Near the boundary of $\tilde Z$, we use the coordinates \eqref{coords2} and
write it in the form
\begin{equation}
\{ |\mu| = 0, \ - \hat \mu' = \hat \mu, \ |\mu'|/|\mu| = 1, \ \omega = \omega', \ \lambda = -\sqrt{E}, \ \phi = 0 \}.
\end{equation}
This is clearly a smooth submanifold of $\tilde Z$.} Then $\SR$ is defined as
the flow out from $\partial_- \SR$ using the vector field $\Vec''$,
that is, the union of all integral curves of $\Vec''$ starting at
points of $\partial_- \SR$.
\end{definition}
\begin{lemma}\label{lem:uft}
All integral curves of $\Vec''$ starting at $\partial_- \SR$ reach the set $\tilde Z \cap \{ \lambda = +\sqrt{E} \}$ in finite time.
\end{lemma}
\begin{definition}\label{thm:SRplus}
We define $\partial_+ \SR$ to be the intersection of $\SR$ with $\tilde Z \cap \{ \lambda = \sqrt{E} \}$.
\end{definition}
\begin{remark} In \cite{HW2008} the sojourn relation was described as having conic singularities at the outgoing radial set $G^\sharp$, which were resolved by blowing up the span of this set. This blowup corresponds to the blowup of $Z$ already performed here.
\end{remark}
\begin{proof}[Proof of Lemma~\ref{lem:uft}]
Notice that $\partial_- \SR$ is contained in the energy surface $\{ \lambda^2 + h^{ij} \mu_i \mu_j + V = E \}$. By conservation of energy, the integral curves
starting from $\partial_- \SR$ are completely contained in this energy surface.
Noting that $\mu = 0$ and $V = 0$ at $\tilde Z$, the integral curves
can only meet $\tilde Z$ at $\lambda = \pm \sqrt{E}$.
We first show that trajectories contained in the original
boundary hypersurface $B$ return to $\tilde Z \cap \{ \lambda = +\sqrt{E} \}$ in finite time.
In this region, since $\rho / |\mu|$ and $|\mu'|/|\mu|$ are bounded,
we can take the boundary
defining function for $\tilde {Z}$ to be $\trho = |\mu|$. Then consider the vector field $\trho^{-1} \Vec'$, which is the same as $\Vec''$ up to reparametrization, and hence has the same integral curves. We compute that inside the boundary hypersurface $B$, the variables $\lambda$ and $|\mu|$ satisfy
\begin{equation}\begin{gathered}
\dot \lambda = |\mu|, \quad \dot{| \mu|} = -\lambda, \quad \dot {|\mu'|} = - \lambda, \quad |\mu| = \sqrt{h^{ij} \mu_i \mu_j}.
\end{gathered}\end{equation}
This has an exact solution $|\mu| = |\mu'| = \sqrt{E}\sin s$, $\lambda = -
\sqrt{E}\cos s$ where $s \in [0, \pi]$ is the `time' parameter along this
reparametrized bicharacteristic and $\exp$ is the exponential map on
the sphere.\footnote{As this is happening, $y'$ traces out a geodesic on $\mathbb{S}^{d-1}$, of length $\pi$, i.e. half a great circle.} In particular, it returns to $\tilde
Z$ in finite time, at $\lambda = +\sqrt{E}$, as claimed. Then by continuity, nearby trajectories also reach $\tilde
Z$ in finite time. As observed above, this can only be at $\lambda = \pm\sqrt{E}$ and by continuity, it must be at $\lambda = +\sqrt{E}$.
Now consider integral curves starting at $\partial_- \SR$ and in the interior of $\tilde Z$. These integral curves immediately pass into the interior of $[ \X; Z]$, i.e. into $\{ \rho > 0 \}$. By the nontrapping hypothesis, they return to $\{ \rho = 0 \}$, and this can only be at $\tilde Z$, as the vector field $\Vec''$ is tangent to $B$. Since $\Vec''$ is inward pointing at $\tilde Z$ for $\lambda <0$ and outward pointing for $\lambda > 0$, according to \eqref{eq:time-and-rho}, this must occur at $\lambda > 0$, hence at $\lambda = +\sqrt{E}$.
\end{proof}
Note that the interior of $\tilde Z \cap \{ \lambda = \sqrt{E} \}$ can be identified with
$T^*\mathbb{S}^{d-1} \times T^*\mathbb{S}^{d-1}$; indeed, as
discussed above, $(y, \eta, y', \eta')$ give smooth functions on the
interior of $\trho = 0$, and thus $\tilde Z \cap \{ \lambda =
\sqrt{E} \}$ inherits a symplectic structure. The interior of the set $\partial_- \SR$ is
thus identified with $T^* \mathbb{S}^{d-1}$ as the diagonal, and
$\partial_- \SR$ itself is in fact the ball bundle obtained by
radially compactifying the fibers of $T^* \mathbb{S}^{d-1}$. The boundary $\partial_+ \SR$ of $\SR$
restricts to be a Lagrangian submanifold of this space,
and coincides with the graph of the reduced scattering map
$\mathcal{S}_E$ at energy $E$. Thus, the submanifold $\partial_+ \SR$
\textit{is precisely the
Lagrangian for the `absolute scattering matrix'.}
The relative
scattering matrix $S_h$, which is the object we are studying in this
paper, is the composition of the absolute scattering matrix with the
antipodal map multiplied by $i^{(d-1)/2}$; this normalization ensures
that the scattering matrix for the zero potential is the identity.
Note further that the integral curves of $\Vec''$ have initial
condition on the manifold with boundary $\partial_- \SR$, that the boundary
of $\partial_- \SR$ can be defined by the restriction of $\rho_B$ to $\partial_-
\SR$ (as its boundary is exactly its intersection with $B$), and that
one expects integral curves $\gamma_p(\tau)$, where $p \in \partial_- \SR$
is the initial value, to not be smooth in three places: 1) at $\tau =
0$, 2) at $p \in B$, and 3) when $\tau = T_p,$ the \textit{exit
time}, i.e.\ the time when $\gamma_p$ intersects $\partial_+ \SR$.
We can now prove Proposition~\ref{thm:sojourn-near-infty}.
\begin{proof}[Proof of Proposition~\ref{thm:sojourn-near-infty}]
Let $T(y', \eta')$ be the time (in terms of the vector field $\Vec''$) taken to reach $\partial_+ \SR$ starting at $(y', \eta') \in \partial_- \SR$. Also, let $Y(y', \eta', \tau)$ and $N(y', \eta', \tau)$ be the solutions of the ODE \eqref{eq:time-and-rho} for $y$, respectively $\eta$. For large $|\eta|$ we use inverse polar coordinates $\hat \eta, |\eta|^{-1}$ and write $\hat N$ and $|N|^{-1}$ for the corresponding ODE solutions.
Thus, the map $\mathcal{S}$ can be expressed in the form
\begin{equation}
\mathcal{S}(y', \eta') = \Big( Y(y', \eta', T(y', \eta')), N (y', \eta', T(y', \eta')) \Big).
\label{SYN}\end{equation}
We first prove the following claim:
\begin{multline}
\text{For $|\eta'| \leq R < \infty$, $Y(y', \eta', T(y', \eta'))$ and $N (y', \eta', T(y', \eta'))$ are $C^\infty$ functions of $(y', \eta')$.} \\ \text{ For large $|\eta'|$, $(Y, \hat N)$ are $C^\infty + |\eta'|^{-\alpha} \Cinfe$ functions of $(y', \hat \eta', |\eta'|^{-1})$,} \\ \text{ while $|N|^{-1}$ is $|\eta|^{-1}$ times a $C^\infty + |\eta'|^{-\alpha} \Cinfe$ function of $(y', \hat \eta', |\eta'|^{-1})$.}
\end{multline}
To prove this for $|\eta'| \leq R$, we choose a small $\delta > 0$ and write $T'(y', \eta') = T(y', \eta') - \delta$. Because of the form of $\Vec''$ near $\partial_+ \SR$, this is the time taken for the trajectory starting at $(y', \eta') \in \partial_- \SR$ to reach the set $\{ \trho = \delta, \lambda > 0 \}$. As a consequence of Proposition~\ref{prop:conormal-regularity}, we see that $Y(y', \eta', T'(y', \eta'))$ and $N (y', \eta', T'(y', \eta'))$ are $C^\infty$ functions of $(y', \eta')$. (Unfortunately, we cannot immediately make the same claim with $T$ replacing $T'$, because $Y(y', \eta', \tau)$ fails to be smooth in $\tau$ precisely at $\tau = T$.) Now define the map
$$
(y_0, \eta_0) \mapsto S_\delta(y_0, \eta_0),
$$
where $\gamma^{-1}_{y_0, \eta_0}$ is the trajectory that meets $\partial_+ \SR$ at $(y_0, \eta_0)$ and $S_\delta(y_0, \eta_0)$ are the $(y, \eta)$ coordinates of the intersection of $\gamma^{-1}_{y_0, \eta_0}$ with $\{ \trho = \delta, \lambda > 0 \}$. Again using Proposition~\ref{prop:conormal-regularity}, we see that $S_\delta$ is a smooth map. Moreover, since $\Vec''$ is Lipschitz, the map $S_\delta$ is invertible for $\delta$ sufficiently small.
From these observations, and \eqref{SYN}, we see that
$$
\mathcal{S}(y', \eta') = S_\delta^{-1}\Big( Y(y', \eta', T'(y', \eta')), N (y', \eta', T'(y', \eta')) \Big)
$$
is smooth.
To prove the claim for $|\eta'|$ large, we follow exactly the same steps, replacing $C^\infty$ regularity by $C^\infty + |\eta'|^{-\alpha} \Cinfe$ regularity in terms of the boundary defining function $|\eta'|^{-1}$ for $\partial_- \SR$, making use of Lemma~\ref{lem:inverses}.
Now we consider the effect of the potential $V$ (compared to the zero
potential) on these functions. Let $A$ be the antipodal map on the sphere, and $A^*$ the induced map on its cotangent bundle. In the case of zero potential, at
$\partial_+ \SR$, $A^*(y, \eta)$ is equal to $(y', \eta')$. Since the
potential $V$ has the effect of perturbing $\Vec''$ by an
$O(\rho_B^{\alpha})$ term, $\rho_B = |\eta'|^{-1}$, we see from Lemma~\ref{lem:comparison} that
$y$ is given by $A(y')$ plus a
$|\eta'|^{-\alpha} \Cinfe$ function
of the initial values $(y', \hat{\eta'}, 1/|\eta'|)$. Similarly, after
applying $A^*$, $1/|\eta|$ is equal to $1/|\eta'|$ plus a
$|\eta'|^{-\alpha-1} \Cinfe$ function of $(y', \hat{\eta'}, 1/|\eta'|)$.
If we write these statements in terms of the
Euclidean variables $\eta$ and $\eta'$, they translate precisely into
\eqref{conreg}.
The function $\varphi$ is discussed in
\eqref{eq:varphi}--~\eqref{varphireg} below.
\end{proof}
\subsection{Semiclassical parametrix for the Poisson operator}\label{sec:parametrix}
Heuristically speaking, e.g.\ from
\eqref{eq:generalized-eigenfunction}, the scattering matrix is the
limit of the \textit{incoming} Poisson operator to the sphere at infinity after suitably
rescaling, composing with the antipodal map, and localizing in
frequency so as to extract only the outgoing part. We make a more precise statement now, followed by a
characterization of the Schwartz kernels of both the Poisson operator
and scattering matrices.
The Schwartz kernel of the scattering matrix, as a half-density, is
the distributional limit of
$$
A e^{ \frac 14 \pi i (d -1) }r^{-1/2} e^{-ir\sqrt{E}/h} M_{out} P_h(r,
\omega, \omega') \rvert_{r = \infty},
$$
where $P_h = P_h(E)$ is
the incoming Poisson operator, $A$ is the antipodal map, and
$M_{out}$ is a cutoff to semiclassically outgoing frequencies. This requires some
explanation, which we give a rough version of now with details to follow.
First of all, here we are regarding $P_h$ as a half-density, by multiplying by
$|dx d\omega'|^{1/2} = |r^{d-1} dr d\omega d\omega'|^{1/2}$ on
$\mathbb{R}^d_x \times \mathbb{S}_{\omega'}^{d-1}$. For example,
$P_{h, 0}(E)$, the incoming Poisson operator for the zero potential is
$$
P_{h, 0}(E) = (\sqrt{E}/2\pi h)^{(d-1)/2} e^{- ix \cdot \omega' \sqrt{E}/h} |dx d\omega'|^{1/2}.
$$
By \cite[Eqn 1.13]{GST}, for $\psi \in C^\infty(\mathbb{S}^{d-1})$,
letting $A$ denote the antipodal map of $\mathbb{S}^{d-1}$, we
have
$$
P_{h, 0}(E) (\psi |d\omega|^{1/2}) \sim r^{-(d-1)/2} ( e^{-i r
\sqrt{E}/h} e^{\frac 14 \pi i (d -1)} \psi(\omega) + e^{i r
\sqrt{E}/h} e^{- \frac 14 \pi i (d -1)} A^*\psi) |dx|^{1/2},
$$
and for general (decaying, smooth) potentials, $P_h(E) (\psi
|d\omega|^{1/2})$ satisfies the same expression with $A^* \psi$
replaced by $A^* S_h \psi$, where $S_h = S_h(E)$ is the scattering matrix. Taking into account the half-density
factor on $\mathbb{R}^d$, one then has
\begin{equation*}
\begin{split}
&r^{-1/2} e^{-ir \sqrt{E}/h} P_h(E) (\psi |d\omega|^{1/2}) \\
&\qquad \qquad \sim ( e^{- 2 i r \sqrt{E}/h} e^{\frac 14 \pi i (d -1)}
\psi(\omega) + e^{- \frac 14 \pi i (d -1)} A^* S_h(\psi)) |
\frac{dr}{r} d\omega |^{1/2}.
\end{split}
\end{equation*}
The half-density $|dr / r|^{1/2}$ is special; it is exactly the radial
half-density which makes sense to leading order invariantly at the
sphere at infinity of the radially compactified Euclidean space
$\overline{\mathbb{R}^d}$. One thus wishes to cancel off the $|dr /
r|^{1/2}$ factor, to microlocalize away from the $e^{-2ir \sqrt{E}/h}$
frequency, and then take the limit $r \to \infty$. Composing with $A e^{\frac 14
\pi i (d -1)}$ will then give the scattering matrix.
In the case that $V$ is a smooth function viewed on $\overline{\RR^d}$, which requires in particular that $\alpha$ is an integer, the semiclassical Poisson operator was constructed in \cite{HW2008} as a sort of `boundary value' of the resolvent kernel. However, the Poisson operator can also be constructed directly. Here we make some remarks on this construction in the case that $V$ has regularity $\rho^\alpha \Cinfe$.
We wish to construct a Fourier integral operator $F_h$ which is a parametrix for the Poisson operator. That is, it should have the property that $(h^2 \Delta + V - E) F_h(\phi) \in h^\infty \rho^\infty C^\infty(\overline{\RR^d})$, and also that
$$
F_h(\phi) \sim
r^{-(d - 1)/2} (e^{- i \sqrt{E} r/h} \phi(\omega) + e^{i
\sqrt{E} r/h}\psi(-\omega) ) + o(r^{-(d - 1)/2}), \quad r = |x| \to \infty.
$$
That is, up to $O(h^\infty \rho^\infty C^\infty)$ errors, $F_h \partialhi$ is a distorted plane wave for $\Delta + V $ of energy $E$, and has incoming boundary data $\partialhi$.
Then we will have $e^{i\partiali(d-1)/2}\partialsi = S_h(\partialhi)$ up to an $O(h^\infty C^\infty)$
error.
Based on \cite{HW2008}, our ansatz is that $F_h$ is a Fourier integral operator associated to the Lagrangian submanifold $\SR$. The principal symbol $a_0$ should satisfy the transport equation
$$
\mathcal{L}_{\Vec} a_0 = 0,
$$
where $a_0$ is a half-density on $\SR$. Moreover, at $\partialartial_- \SR$, we have an initial condition for $a_0$. This arises from the microlocally incoming condition on the plane wave $F_h \partialhi$, that is, the condition that the incoming boundary data be $\partialhi$. Microlocally this translates to the condition that $\rho^{1/2} a_0$ restricts to $\partialartial_- \SR$ to be the canonical half-density $|dy' d\eta'|^{1/2}$ there. It is not hard to see that this implies that the half-density $a_0$ is equal to $|dy' d\eta' dt|^{1/2}$, where $t$ is the time parameter along the $\Vec$-trajectories.
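To spell out the last step (a routine verification): in the coordinates $(y', \eta', t)$ on $\SR$, in which $(y', \eta')$ label the trajectories and $t$ is the flow parameter, the vector field $\Vec$ is just $\partial_t$, so for any function $f$ on $\SR$
\begin{equation*}
\mathcal{L}_{\Vec}\big( f\, |dy'\, d\eta'\, dt|^{1/2} \big) = (\partial_t f)\, |dy'\, d\eta'\, dt|^{1/2}.
\end{equation*}
In particular, the coordinate half-density $|dy'\, d\eta'\, dt|^{1/2}$ is annihilated by $\mathcal{L}_{\Vec}$, and matching the initial condition at $\partial_- \SR$ then singles out $a_0 = |dy'\, d\eta'\, dt|^{1/2}$, as stated.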
Now consider the higher order symbols in any local parametrization of the FIO. A local parametrization
takes the form
\begin{equation}
h^{-\dim v/2} \int e^{i\Psi(y, y', \rho, v)/h} \sum_{j=0}^\infty h^j
a_j(y, y', \rho, v) \, dv \times \big| \frac{dy dy' d\rho}{\rho^{d+1}}
\big|^{1/2}. \label{eq:6}
\end{equation}
Without loss of generality, we can assume that $a_j$ depends on a minimal number of variables,
that is, $\dim \SR = 2d-1$ of the variables $(y, y', \rho, v)$. We
denote these variables collectively by $\blambda$. In that case, the symbols $\sigma_j$, $j \geq 0$, given by
$$
\sigma_j = a_j(\blambda) \Big| \frac{\partialartial(\blambda, d_v \Psi)}{\partialartial (y, y', \rho, v)} \Big|^{-1/2} |d\blambda|^{1/2},
$$
are formally determined by $\sigma_0$ and are solutions of an equation of the form
$$
\mathcal{L}_{\Vec} \sigma_j = Q \sigma_{j-1},
$$
where $Q$ is a second order operator on half-densities on $\SR$, depending on the particular variables on which $a_j$ depends. The operator $Q$ is induced by the Laplacian. Because of this it is $\rho^2$ times a b-differential operator of order 2. The regularity of the coefficients of $Q$ on $\SR$ is determined by the regularity of $\SR$, i.e. they take the form $C^\infty + \trho^{\alpha-1} \rho_B^\alpha C^\infty$.
We can write this transport equation on $\SR$ in the coordinates $(y', \eta', t)$. Writing $\sigma_j = s_j |dy'd\eta' dt|^{1/2}$, it implies the ODE
$$
\frac{\partialartial}{\partialartial t} s_j + k s_j = \rho^2 \tilde Q s_{j-1},
$$
where $\tilde Q$ is a scalar second order b-differential operator and, near $\partialartial_{\partialm} \SR$, we have $k = 2\rho^2 \partialartial_\rho \Lambda(y', \eta', \rho)$. Now changing variable from $t$ to $\trho$, and dividing by a factor of $\rho$, we have near $\partialartial_{\partialm} \SR$,
\begin{equation}
2 \Lambda \trho \partialartial_{\trho} s_j = \big( 2 \rho \partialartial_\rho \Lambda(y', \eta', \rho) \big) s_j + \rho \tilde Q s_{j-1}.
\label{transeqn}\end{equation}
Moreover, we have an initial condition $s_j = 0$ at $\partialartial_- \SR$.
Propositions~\ref{prop:conormal-regularity} and \ref{prop:cr2} apply
to this ODE and show that $s_j$ has the regularity $C^\infty +
\trho^{\alpha-1} \rho_B^\alpha C^\infty$ away from $\partialartial_+ \SR$,
and has the regularity given by part (iii) of
Proposition~\ref{prop:cr2} near $\partialartial_+ \SR$. In fact we can
conclude more regarding the vanishing of the $s_j$ using the ODE
comparison lemma, Lemma \ref{lem:comparison} above. Indeed, as the
coordinates $(y', \eta', t)$ provide coordinates on both $\SR$ and
$\SR^0$, the free scattering relation, we can compare the $s_j$ with
the $s_j^0$, the functions arising analogously in the free case, which
satisfy that $s_j^0 \equiv 0$ for $j \ge 1$ and $s^0_0 \equiv 1$. It is
straightforward to check that the ODEs for the $s_j^0$ differ in an
$\trho^{\alpha - 1}\rho_B^\alpha C^{\infty, \epsilon}$ manner, and
thus the lemma implies that the $s_j$ for $j \ge 1 $ and $1 - s_0$ lie
in $\trho^{\alpha-1} \rho_B^\alpha C^\infty$.\footnote{Further analysis shows that in fact $s_j$ vanishes to
order $\alpha + j$ for $j \geq 1$, but we do not need this fact
here.}
Using these symbols we can build an FIO parametrix for the Poisson
kernel that solves the equation $(h^2 \Delta + V - E) F_h \partialhi = 0$ up to
an error in $h^\infty e^{i\sqrt{E} r/h} r^{-(d+1)/2}
C^\infty(\overline{\RR^d})$; an additional step reduces the error to
$h^\infty \rho^\infty C^\infty(\overline{\RR^d})$\footnote{This is
just as in the parametrix construction of Melrose-Zworski}. The
error term can be solved away by applying the outgoing resolvent, and
this contributes a correction term to the parametrix of the form
$h^\infty e^{i\sqrt{E} r/h} r^{-(d-1)/2} C^\infty(\omega, \omega')$. This follows from
\cite[Section 12]{M1994} (showing smoothness in the $y, y'$ variables)
and \cite{VZ2000} (showing that the correction is $O(h^\infty)$).
Therefore
this contributes a smooth term that is $O(h^\infty)$ to the scattering
matrix, and this has no effect on the conclusion of
Lemma~\ref{lem:S-structure}. So, in the proof in the next subsection,
it suffices to analyze the parametrix for the Poisson operator.
\subsection{The scattering matrix: proof of Lemma~\ref{lem:S-structure}}\label{sec:scattering-matrix-deduction}
The scattering matrix is obtained by taking a distributional limit of
the Poisson kernel as $r \to \infty$. We can break up the Poisson
operator microlocally into a piece microsupported away from
$\partialartial_+ \SR$, and a piece microsupported away from $\partialartial_-
\SR$. For the first piece, localized away from $\partialartial_+ \SR$, if we
multiply the kernel by $\rho^{1/2} e^{i\sqrt{E}/(h\rho)}$ and then take
the canonical restriction to $\rho = 0$, we obtain the identity
operator; that is, if we let this piece of $P_h(E)$ operate on a
smooth function $f(\omega')$, then multiply the result by $\rho^{1/2}
e^{i\sqrt{E}/(h\rho)}$ and then take the canonical restriction to $\rho =
0$, we obtain $f$.
For the second piece, localized away from $\partialartial_- \SR$, if we
multiply the kernel by $\rho^{1/2} e^{-i\sqrt{E}/(h\rho)}$ and then take
the canonical restriction to $\rho = 0$, we obtain the scattering
matrix.
We now see how this happens at the level of kernels. We need to analyze a microlocal representation of $F_h$ more carefully near the corner of $\SR$ at the intersection of $\partialartial_+ \SR$ and $\rho_B = 0$.
In this region, the Poisson operator can be expressed as an oscillatory integral using the results of \cite{HW2008}. After a rotation of coordinates, we can assume that $\mu_1$ and $1/\eta_1$ furnish local boundary defining functions $\trho$ and $\rho_B$, respectively. We can then use coordinates
\begin{equation}
\mathcal{Z} = (y', \mu_1, 1/\eta_1, \eta_j/\eta_1), \quad 2 \leq j \leq d-1,
\end{equation}
on $\SR$ near a point at $B \cap \tilde Z$, that is at $\rho_B = \trho = 0$. In terms of these we can write the other coordinates as smooth functions:
\begin{equation}
y_j = Y_j(\Z), \quad \Phi = \Phi(\Z).
\end{equation}
This $\Phi$ is the same as that in the previous paragraph thought of
as a function on $\SR$.
In the expressions below we will replace $\mu_1$ by $\sigma$ and $\eta_j/\eta_1$ by $v = (v_2, \dots, v_{d-1})$.
Then, according to \cite[Section 6.3]{HW2008}, $\SR$ has a local parametrization of the form
\begin{equation}
\Psi(r, y, y', \sigma, v) = \Phi(y', \sigma, \frac{\rho}{\sigma}, v) + \frac{\sigma}{\rho} \big(y_1 - Y_1(y', \sigma, \frac{\rho}{\sigma}, v)\big) + \sum_{j=2}^{d-1} \frac{\sigma}{\rho} v_j \big(y_j - Y_j(y', \sigma, \frac{\rho}{\sigma}, v)\big),
\label{Poisson-param}\end{equation}
Then a microlocal parametrix for the
Poisson operator $P_h(E)$ takes the form
\begin{equation}
(\sqrt{E}/(2\partiali h))^{-(d-1)/2} \int e^{i\Psi/h} \rho^{-(d-1)/2} \sigma^{d-2} a(\sigma, \frac{\rho}{\sigma}, y', v, h) \, d\sigma \, dv |\frac{d\rho dy}{\rho^{d+1}} dy'|^{1/2},
\label{Poisson-local}
\end{equation}
where $\Psi$ is as in \eqref{Poisson-param}\footnote{The powers of
$\rho$ and $\sigma$ are as given by \cite[Equation
(6.21)]{HW2008}, after taking into account that the half-density
used there is $\rho^{-(d-1)/2} h^{-(2d-1)/2}$ times the
half-density $|dx d\omega'|^{1/2}$ used here, times
$|dh/h^2|^{1/2}$.}
and the amplitude $a$ has regularity as determined by the regularity of the functions $s_j$ as described above.
We wish to multiply \eqref{Poisson-local} by $\rho^{1/2} e^{-i\sqrt{E}/(\rho
h)}$ and restrict to $\rho = 0$. To prepare for this, we first rewrite
the integral using the substitution $d\sigma \, dv =
\rho^{d-1}\sigma^{-(d-2)} d\eta$, which gives
\begin{equation}
\label{eq:Smatrix-f}
\begin{split}
& \int e^{i\Psi/h} \rho^{-(d-1)/2}
\sigma^{d-2} a(\sigma, y', \frac{\rho}{\sigma}, v, h) \, d\sigma
\, dv |\frac{d\rho dy}{\rho^{d+1}} dy'|^{1/2} \\
&\qquad = (\int e^{i\Psi/h} \rho^{(d-1)/2} a(\sigma, y', \eta, h) \, d\eta)
|\frac{d\rho dy}{\rho^{d + 1}} dy'|^{1/2}.
\end{split}
\end{equation}
Regarding $\Phi$ as a function on $\SR$, its behaviour as we approach
$\partialartial_+ \SR$ was worked out in \cite[Section
2]{Gell-Redman-Hassell-Zelditch} (where the function is called
$\partialhi_0$). It is shown that
$$
\Phi = \sqrt{E}\, r + \varphi + o(1) \quad \text{as } \trho \to 0,
$$
where $\varphi$ is a function on $\SR$ constant on trajectories, given by
\begin{equation}
\label{eq:varphi}
\varphi(y', \hat \eta', \rho_B) = \int_{\gamma} x \cdot \nabla V,
\ \mbox{ for $\gamma$ the trajectory with initial
condition $(y', \hat \eta', \rho_B)$}.
\end{equation}
Thus $\varphi$ is the boundary value of a function $\tilde{\varphi}$ on $\SR$ satisfying the ODE
$
\Vec(\tilde\varphi) = x \cdot \nabla V
$
with a zero initial condition at $\partialartial_- \SR$. Using Propositions~\ref{prop:conormal-regularity}, \ref{prop:cr2} and Corollary~\ref{cor:solat0}, we see that
\begin{equation}
\varphi \in \rho_B^{\alpha-1} \Cinfe(\partialartial_+ \SR).
\label{varphireg}\end{equation}
(This is the final part of the proof of Lemma \ref{thm:sojourn-near-infty}.)
Multiplying by $e^{ \frac 14 \partiali i (d -1) }\rho^{1/2}
e^{-i\sqrt{E}/(\rho h)}$
and taking the distributional limit at $\rho = 0$ then gives
\begin{equation}
(\sqrt{E}/(2\partiali h))^{-(d-1)/2} \int e^{i \big( \varphi(y', \eta) + \sum_j(y_j - Y_j(y', \eta)) \eta_j \big) /h} a(0, y', \eta, h) \, d\eta |dy dy'|^{1/2}.
\label{Smatrix}
\end{equation}
We write the phase in this equation as $(y-A(y')) \cdot \eta + G(y', \eta)$.
From \eqref{varphireg}, as well as $Y = A(y') + O(\rho^\alpha \Cinfe)$ from \eqref{conreg}, we see that
$G \in \rho^{\alpha-1} \Cinfe$.
Then, from the fact that the principal symbol of the scattering matrix
is $|dy d\eta|^{1/2}$ (see \cite[Lemma 3.1]{DGHH2013}), we find that
$$
a(0, y', \eta, 0) = \Big| \det \frac{\partialartial(y, \eta, y-A(y') + d_\eta G)}{\partialartial(y', y, \eta)} \Big|^{1/2} = \big| \det (\Id + d^2_{y' \eta} G) \big|^{1/2} .
$$
This shows that
$$
a(0, y', \eta, 0) = 1 + O(\rho^\alpha \Cinfe).
$$
Moreover, it follows from the construction of the functions $a_j$ in
\eqref{eq:6} that each term in the expansion of $a$ in $h$ as $h \to
0$ is bounded by $O(\rho^\alpha \Cinfe)$, and the error term
contributes only at lower order, so that $a - 1 \in S^{-\alpha}$. Composing with
the antipodal map gives \eqref{eq:Sh-at-fiber-infinity} and thus Lemma \ref{lem:S-structure} follows.
\section{Powers of the scattering matrix}\label{sec:composition}
In this section we will prove Lemma \ref{thm:composition}, which gives
an expression for powers of the scattering matrix $S_h^k$ near `fiber
infinity'. Indeed, recall that, notation as in Lemma \ref{thm:composition},
$S_h = F_1 + F_2$ where $F_1$ is a semiclassical FIO with compact
microsupport and $F_2$ (or rather its Schwartz kernel) is given by the
oscillatory kernel expression in \eqref{eq:Sh-at-fiber-infinity}, and as
discussed in the proof of Lemma
\ref{thm:trace-formula}, the only non-compactly microlocally supported
term in $S_h^k$ is $F_2^k$.
Recall the discussion in Section \ref{sec:traces-and-compositions}
(see near \eqref{eq:half-densities})
explaining that the oscillatory integral giving $F_2$ is to be thought
of as a half-density on $\mathbb{S}^{d-1} \times \mathbb{S}^{d-1}$.
Thus two oscillatory integrals
\begin{equation}\label{eq:oscillatory-integral}
I_i(y, y') = (2 \partiali h)^{-(d -1)}\int e^{i\Phi_i(\sphvar, \sphvar', \dspharv)/h} a_i(\sphvar, \sphvar',
\dspharv, h) \, d\dspharv
\end{equation}
with $i = 1, 2$ define Schwartz kernels $I_i(y, y') |dy dy'|^{1/2}$
whose composition as operators is given by
$$
I_1 \circ I_2 = (\int I_1(y, y'') I_2(y'', y') |dy''|) |dy dy'|^{1/2}.
$$
Since this just amounts to integration in the $y''$ variable in all
the expressions below we will drop the half density factors.
\begin{proof}[Proof of Lemma \ref{thm:composition}]
Suppose that $G_1(\sphvar, \dspharv)$ and $G_2(\sphvar, \dspharv)$ are symbols
of order $-\beta$ for $\beta > 0$. We will show that the
composition of two FIOs, with phase function
\begin{equation}\label{eq:Phi-appendix}
\Phi_i(\sphvar, \sphvar', \dspharv) := (\sphvar - \sphvar') \cdot
\dspharv + G_i(\sphvar', \dspharv)
\end{equation}
is an FIO with phase function $(\sphvar -
\sphvar') \cdot \dspharv + G_1(\sphvar', \dspharv) + G_2(\sphvar', \dspharv) +
E(\sphvar', \dspharv)$, where $E$ is a symbol of order $-2\beta$.
Thus let $I_1, I_2$ be oscillatory integrals as in
\eqref{eq:oscillatory-integral}, with $\Phi_i$ as above and
with amplitudes $a_i \in S^{m_i}$.
The composition has a representation of the form
\begin{equation}\label{eq:initial-composition}
I_1 \circ I_2 : = (2\partiali h)^{-2(d -1)} \int e^{\frac{i}{h}\Phi(\sphvar, \sphvar'', \sphvar', \dspharv, \dspharv')} a_1(\sphvar, \sphvar'', \dspharv, h) a_2(\sphvar'', \sphvar', \dspharv', h) \, d\dspharv \, d\dspharv' \, d\sphvar'',
\end{equation}
where
$$
\Phi(\sphvar, \sphvar'', \sphvar', \dspharv, \dspharv') = (\sphvar - \sphvar'') \cdot \dspharv + G_1(\sphvar'', \dspharv) +(\sphvar'' - \sphvar') \cdot \dspharv' + G_2(\sphvar'', \dspharv').
$$
We eliminate the variables $(\sphvar'', \dspharv')$ up to an $O(h^\infty)$
error, by replacing them with their stationary values and applying the
stationary phase lemma \cite{Hvol1}. This works as the Hessian of $\Phi$ with respect to $(\sphvar'', \dspharv')$ is
$$
\begin{pmatrix}
0 & \Id \\
\Id & 0
\end{pmatrix}
+ O(|\dspharv'|^{-1-\alpha}),
$$
and is therefore invertible, with uniformly bounded inverse, for large $|\dspharv'|$.
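To make the last claim explicit (a standard Neumann series argument, included for completeness), write the Hessian as $J + R$ with
\begin{equation*}
J = \begin{pmatrix} 0 & \Id \\ \Id & 0 \end{pmatrix}, \qquad \| R \| = O(|\dspharv'|^{-1-\alpha}).
\end{equation*}
Since $J^{-1} = J$ and $\|J\| = 1$, we have $(J + R)^{-1} = (\Id + J R)^{-1} J = \sum_{k \ge 0} (-J R)^k\, J$ once $\|R\| < 1$, which holds for $|\dspharv'|$ large; the inverse is then uniformly bounded by $(1 - \|R\|)^{-1}$.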
The stationary points in $y'', \dspharv'$, i.e.\ the points where $D_{\sphvar'', \dspharv'}
\Phi = 0$, occur at
\begin{equation}\begin{aligned}
\sphvar'' &= \sphvar' - d_{\dspharv'} H(y'', \eta', \eta), \\
\dspharv' &= \dspharv + d_{\sphvar''} H(y'', \eta', \eta),
\end{aligned}\end{equation}
where
$$
H(y'', \eta', \eta) = G_1(y'', \eta) + G_2(y'', \eta').
$$
The second line in the above equation array shows that on the critical
set we can write $\eta = \eta(\eta', y'')$ with $\eta - \eta' \in S^{1
- \beta}$. Thus, we want to invert the transformation
\begin{equation}
\begin{pmatrix} \sphvar' \\ \dspharv \end{pmatrix}
= \begin{pmatrix} \sphvar'' \\ \dspharv' \end{pmatrix} +
\begin{pmatrix} - d_{\dspharv'} H(y'', \eta') \\ d_{\sphvar''} H(y'', \eta') \end{pmatrix}
\label{invert-trans}\end{equation}
when $|\dspharv|$ is large. It is easy to see, by the method of successive approximations for example, that the inverse exists for large $\dspharv$. We claim that the inverse map,
which we write in the form $\sphvar''(\sphvar', \dspharv), \dspharv'(\sphvar', \dspharv)$, is the identity plus a symbol of order $-\beta$. To see this, we differentiate \eqref{invert-trans} with respect to $\sphvar'$ and $\dspharv$ to obtain
\begin{equation}
\begin{pmatrix} \Id & 0 \\ 0 & \Id \end{pmatrix}
=
\Bigg(
\begin{pmatrix} \Id & 0 \\ 0 & \Id \end{pmatrix} +
\begin{pmatrix}
- d^2_{\dspharv'\sphvar''} H(\sphvar'', \dspharv') & d^2_{\dspharv'\dspharv'} H(\sphvar'', \dspharv') \\ d^2_{\sphvar''\sphvar''} H(\sphvar'', \dspharv') &
d^2_{\sphvar''\dspharv'} H(\sphvar'', \dspharv')
\end{pmatrix} \Bigg)
\begin{pmatrix} \frac{\partialartial \sphvar''}{\partialartial \sphvar'} & \frac{\partialartial \sphvar''}{\partialartial \dspharv} \\
\frac{\partialartial \dspharv'}{\partialartial \sphvar'} & \frac{\partialartial \dspharv'}{\partialartial \dspharv} \end{pmatrix} .
\label{matrix-relation}\end{equation}
This shows that
$$
\begin{pmatrix} \frac{\partialartial \sphvar''}{\partialartial \sphvar'} & \frac{\partialartial \sphvar''}{\partialartial \dspharv} \\
\frac{\partialartial \dspharv'}{\partialartial \sphvar'} & \frac{\partialartial \dspharv'}{\partialartial \dspharv} \end{pmatrix} =
\begin{pmatrix} \Id & 0 \\ 0 & \Id \end{pmatrix} +
\begin{pmatrix}
e_{11}(\sphvar', \dspharv) & e_{12}(\sphvar', \dspharv) \\
e_{21}(\sphvar', \dspharv) & e_{22}(\sphvar', \dspharv)
\end{pmatrix}
$$
where repeated differentiation of \eqref{matrix-relation} shows that
$$
\begin{pmatrix}
e_{11}(\sphvar', \dspharv) & e_{12}(\sphvar', \dspharv) \\
e_{21}(\sphvar', \dspharv) & e_{22}(\sphvar', \dspharv)
\end{pmatrix} \in \begin{pmatrix}
S^{-1 -\beta} & S^{-2 -\beta} \\
S^{ -\beta} & S^{-1 -\beta}
\end{pmatrix}.
$$
This proves that we can write
\begin{equation}
\begin{pmatrix} \sphvar'' \\ \dspharv' \end{pmatrix}
= \begin{pmatrix} \sphvar' \\ \dspharv \end{pmatrix} +
\begin{pmatrix} f_1(\sphvar', \dspharv) \\ f_2(\sphvar', \dspharv) \end{pmatrix}, \quad
f_1 \in S^{-1 -\beta}, f_2 \in S^{-\beta}.
\label{inverse-trans}\end{equation}
We now write the function $\wt{\Phi}$, the restriction of $\Phi$ to
the critical set $\{ D_{\sphvar'', \dspharv'} \Phi = 0 \}$,
\begin{equation}\begin{gathered}
\wt{\Phi} = (\sphvar - \sphvar') \cdot \dspharv + G_1(\sphvar', \dspharv) + G_2(\sphvar', \dspharv) + E'(\sphvar', \dspharv), \\
E'(\sphvar', \dspharv) = (\sphvar' - \sphvar'')(\dspharv -
\dspharv') + G_2(\sphvar'', \dspharv') - G_2(\sphvar', \dspharv) +
G_1(y'', \eta) - G_1(y', \eta).
\end{gathered}\label{E}\end{equation}
Then writing $G_2(\sphvar'', \dspharv') - G_2(\sphvar', \dspharv) =
G_2(\sphvar'', \dspharv') - G_2(\sphvar', \dspharv') + G_2(\sphvar',
\dspharv') - G_2(\sphvar', \dspharv) $ as
$$
(\sphvar'' - \sphvar') \cdot \int_0^1 (d_{\sphvar}G_2)(\sphvar'' + t(\sphvar' - \sphvar''), \dspharv') \, dt + (\dspharv' - \dspharv) \cdot \int_0^1 (d_{\dspharv} G_2)(\sphvar', \dspharv + t(\dspharv' - \dspharv)) \, dt,
$$
and
$$
G_1(y'', \eta) - G_1(y', \eta) = (\sphvar'' - \sphvar') \cdot \int_0^1 (d_{\sphvar}G_1)(\sphvar'' + t(\sphvar' - \sphvar''), \dspharv) \, dt, $$
we see from \eqref{inverse-trans} that $E'$ is a symbol of order
$-2\beta$, so by stationary phase applied to~\eqref{eq:initial-composition},
$$
I_1 \circ I_2 = (2 \partiali h)^{-(d -1)} \int e^{i \wt{\Phi}(y, y',
\eta)/h} b(y, y', \eta) \, d\dspharv,
$$
where $b(y, y', \eta) = a_1(y, y'', \eta) a_2(y'', y', \eta')$
restricted to the $\sphvar'', \dspharv'$ critical
set of $\Phi$, and, as is standard, $b \in S^{m_1 + m_2}$ with
principal symbol given by the product of the principal symbols of
$a_1$ and $a_2$.
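For the reader's convenience, here is the order count behind the claim that $E'$ has order $-2\beta$ (this is only a restatement of the computation above). Since $G_i \in S^{-\beta}$, differentiation in $\sphvar$ preserves the order while differentiation in $\dspharv$ lowers it by one, so $d_{\sphvar} G_i \in S^{-\beta}$ and $d_{\dspharv} G_i \in S^{-1-\beta}$; combined with \eqref{inverse-trans}, which gives $\sphvar'' - \sphvar' = f_1 \in S^{-1-\beta}$ and $\dspharv' - \dspharv = f_2 \in S^{-\beta}$, each group of terms in $E'$ is a product of symbols of total order at most $-2\beta$:
\begin{equation*}
(\sphvar' - \sphvar'')(\dspharv - \dspharv') \in S^{-1-2\beta}, \qquad
(\sphvar'' - \sphvar') \cdot d_{\sphvar} G_i \in S^{-1-2\beta}, \qquad
(\dspharv' - \dspharv) \cdot d_{\dspharv} G_2 \in S^{-1-2\beta},
\end{equation*}
modulo the standard observation that composing a symbol with the change of variables \eqref{inverse-trans} does not change its order.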
Lemma \ref{thm:composition} now follows by applying the above
results to repeated compositions of $F_2$ (for $k \ge 1$) or $F_2^*$
(for $k \leq -1$). Indeed, note that the lemma is already
proven for $k = 1$ by \eqref{eq:Sh-at-fiber-infinity} and for $k = -1$
by \eqref{eq:Sh-at-fiber-infinity-conjugate}. We focus on the $k \ge
1$ case as the $k \leq -1$ case is completely analogous. Assuming by
induction that Lemma \ref{thm:composition} holds for $F_2^k$, consider
$F_2^{k + 1} = F_2^k \circ F_2$. Thus the Schwartz kernels of these operators are
given by oscillatory integrals $I_1, I_2$ corresponding to $F_2^k$ and
$F_2$, respectively, as in~\eqref{eq:oscillatory-integral}
with $\Phi_1$ and $G_1$ corresponding to $F_2^k$ and $\Phi_2, G_2$ corresponding
to $F_2$. Thus $G_1(y',
\eta) = kG(y', \eta) + E_k(y', \eta)$ and $G_2 = G(y, \eta) + E'(y,
\eta)$, where $G$ comes from
the original phase function of $S_h$, i.e.\ it is as in
\eqref{eq:Sh-at-fiber-infinity} and $E_k, E' \in S^{1 - \alpha -
\epsilon}$ for some $\epsilon > 0$. Here as in the arguments
above we have used Remark \ref{thm:switch-y-yprime} to switch the
roles of $y'$ and $y$ in the perturbation term of the phase function.
Thus the above arguments imply that the composition has phase function
$\Phi = (k + 1)G(y', \eta) + E_k(y', \eta) + E'(y', \eta) + E''(y',
\eta) $ where $E''$ is a symbol of order $2(1 - \alpha)$.
Also, the amplitudes satisfy $a_1 - 1 \in S^{1 - \alpha}$ and $a_2 - 1
\in S^{1 - \alpha}$, and hence $b - 1 \in S^{1 - \alpha}$ as well. Indeed,
this follows immediately from writing $y'', \eta'$ in terms of $y', \eta$ using
\eqref{inverse-trans}.\end{proof}
\end{appendix}
\end{document}
\begin{document}
\title{\textbf{H\"older regularity for the linearized porous medium equation in bounded domains}
}
\author{
Tianling Jin\footnote{T. Jin was partially supported by Hong Kong RGC grant GRF 16303822 and NSFC grant 12122120.}\quad and \quad
Jingang Xiong\footnote{J. Xiong was partially supported by the National Key R\&D Program of China No. 2020YFA0712900, and NSFC grants 11922104 and 12271028.}}
\date{\today}
\maketitle
\begin{abstract}
In this paper, we systematically study weak solutions of a linear singular or degenerate parabolic equation in a mixed divergence form and nondivergence form, which arises from the linearized fast diffusion equation and the linearized porous medium equation with the homogeneous Dirichlet boundary condition. We prove the H\"older regularity of their weak solutions.
\end{abstract}
\section{Introduction}
Let $\Omega\subset\R^n$, $n\ge 1$, be a smooth bounded open set, and $\omega$ be a smooth function in $\overline\Omega$ comparable to the distance function $d(x):=\dist(x,\partial\Omega)$, that is, $0<\inf_{\Omega}\frac{\omega}{d}\le \sup_{\Omega}\frac{\omega}{d}<\infty$. For example, $\omega$ can be taken as the positive normalized first eigenfunction of $-\Delta$ in $\Omega$ with Dirichlet zero boundary condition. Let
\begin{equation}\label{eq:rangep}
p>-1
\end{equation}
be a fixed constant throughout the paper unless otherwise stated.
In this paper, we would like to study regularity of weak solutions to
\begin{equation} \label{eq:general}
\begin{split}
a\omega^{p} \pa_t u-D_j(a_{ij} D_i u+d_j u)+b_iD_i u+\omega^pcu+c_0u&=\omega^pf+f_0 -D_if_i\quad \mbox{in }\Omega \times(-1,0],\\
u&=0\quad \mbox{on }\partial\Omega \times(-1,0],
\end{split}
\end{equation}
where all $a,a_{ij}, d_j,b_i,c, c_0,f, f_0,f_i$ are functions of $(x,t)$, $D_i=\partial_{x_i}$, and the summation convention is used. Throughout this paper, we always assume the ellipticity condition, that is, $(a_{ij})$ is a symmetric matrix, and
\be \label{eq:ellip}
\forall\ (x,t)\in \Omega\times[-1,0],\ \lda\le a(x,t)\le \Lda, \quad \lda |\xi|^2 \le \sum_{i,j=1}^na_{ij}(x,t)\xi_i\xi_j\le \Lda |\xi|^2 \quad\forall\ \xi\in\R^n,
\ee
where $0<\lda\le \Lda<\infty$.
The study of the equation \eqref{eq:general} is motivated by the linearized equation of the fast diffusion equations (corresponding to $p>0$ in \eqref{eq:fde}) or slow diffusion equations (corresponding to $-1<p<0$ in \eqref{eq:fde}, which are also called porous medium equations)
\begin{equation} \label{eq:fde}
\begin{split}
\pa_t v^{p+1}&=\Delta v\quad \mbox{in }\Omega \times(0,\infty),\\
v&=0\quad \mbox{on }\partial\Omega \times(0,\infty).
\end{split}
\end{equation}
From DiBenedetto-Kwong-Vespri \cite{DKV}, we know that the solution $v$ of \eqref{eq:fde} with $p>0$ satisfies the global Harnack inequality
\begin{equation}\label{eq:boundarybehavior}
0<\inf_{\Omega}\frac{v(t,x)}{d(x)}\le \sup_{\Omega}\frac{v(t,x)}{d(x)}<\infty
\end{equation}
before its extinction time. See Bonforte-Figalli \cite{BFi} for a survey. From Aronson-Peletier \cite{AP}, we also know that the solution $v$ of \eqref{eq:fde} with $-1<p<0$ satisfies \eqref{eq:boundarybehavior} as well after certain waiting time. Therefore, the linearized equation of \eqref{eq:fde}, which plays an important role in proving optimal regularity of solutions to \eqref{eq:fde} in \cite{JX19, JX22, JRX}, falls into a form of the equation \eqref{eq:general}. In our earlier work \cite{JX19}, we have obtained many properties for equations like \eqref{eq:general} with $p>0$, such as well-posedness, local boundedness and Schauder estimates. In this paper, we study the equation \eqref{eq:general} in a more general and systematic way. The main goal of this paper is the H\"older regularity of its weak solutions to \eqref{eq:general} up to the boundary $\{x_n=0\}$.
After the De Giorgi-Nash-Moser theory on the H\"older regularity for uniformly elliptic and uniformly parabolic equations, there have been many investigations on regularity for degenerate or singular elliptic and parabolic equations. By the work of Fabes-Kenig-Serapioni \cite{FKS}, we still have H\"older regularity for elliptic equations whose coefficients are of $A_2$ weight. See also earlier work of Kruzkov \cite{Kruzkov}, Murthy-Stampacchia \cite{MS}, Trudinger \cite{Trudinger1, Trudinger2}, as well as recent work Sire-Terracini-Vita \cite{STV1,STV2} and Wang-Wang-Yin-Zhou \cite{WWYZ}, on degenerate elliptic equations. However, Chiarenza-Serapioni \cite{CS} provided several counterexamples showing that the aforementioned elliptic results do not carry over directly to the parabolic case. Nevertheless, H\"older regularity and Harnack inequality for degenerate or singular parabolic equations with various conditions and structures have been obtained in, e.g., Chiarenza-Serapioni \cite{CS2,CS3} and Guti\'errez-Wheeden \cite{GW0,GW}, with either the same weight or different weights of singular/degenerate coefficients of $u_t$ and $D^2u$. Recently, in a series of papers \cite{DP21-1,DP21-2,DP21-3,DP21-4}, Dong-Phan obtained results on the wellposedness and regularity estimates in weighted Sobolev spaces for parabolic equations with singular-degenerate coefficients, where the weights of singular/degenerate coefficients of $u_t$ and $D^2u$ appeared in a balanced way. Such Sobolev regularity was obtained later in Dong-Phan-Tran \cite{DPT} for equations similar to our equation \eqref{eq:general} for $-2<p<0$, where H\"older estimates and Schauder estimates with $p=-1$ have been studied in Daskalopoulos-Hamilton \cite{DH}, Koch \cite{Koch} and Feehan-Pop \cite{FP}. The literature on regularity theory for degenerate elliptic and parabolic equations is vast, and one can refer to the above papers for more references.
Under the condition \eqref{eq:ellip}, the equation \eqref{eq:general} is uniformly parabolic (in a mixed divergence and nondivergence form) when $x$ stays away from the boundary $\partial\Omega$. Therefore, to obtain global estimates for \eqref{eq:general}, we need to establish estimates near $\partial\Omega$, that is, in $(B_r(x_0)\cap\Omega)\times(-1,0]$, where $x_0\in\partial\Omega$, $r>0$, and $B_r(x_0)$ is the open ball in $\R^n$ centered at $x_0$ with radius $r$. By the standard technique of flattening the boundary, it suffices to consider the equation in the half-ball case.
Now we suppose $\Omega$ is a half ball. For $\bar x=(\bar x', 0)$, denote $B_R^+(\bar x) =B_R(\bar x)\cap \{(x',x_n):x_n>0\}$,
\[
Q_R^+(\bar x, \bar t)= B_R^+(\bar x) \times [\bar t-R^2, \bar t], \quad \mathcal{Q}_R^+(\bar x,\bar t)= B_R^+(\bar x) \times [\bar t-R^{p+2}, \bar t].
\]
For brevity, we drop $(\bar x)$ and $(\bar x,\bar t)$ in the above notations if $\bar x=0$ or $(\bar x,\bar t)=(0,0)$.
Consider the equation
\be \label{eq:linear-eq}
ax_n^{p} \pa_t u-D_j(a_{ij} D_i u+d_j u)+b_iD_i u+cx_n^pu+c_0 u=x_n^pf+f_0 -D_if_i \quad \mbox{in }Q_1^+
\ee
with partial Dirichlet condition
\be \label{eq:linear-eq-D}
u=0 \quad \mbox{on }\pa' B_1^+\times[-1,0],
\ee
where $$\pa' B_R^+= B_R \cap \{x_n=0\}.$$
We also denote $$\pa'' B_R^+=\pa B_R^+\setminus \pa' B_R^+,$$ and
\[
\partial_{pa} Q_R^+ \mbox{ as the standard parabolic boundary of } Q_R^+.
\]
We establish H\"older regularity estimates for solutions of \eqref{eq:linear-eq} and \eqref{eq:linear-eq-D} up to the boundary $\{x_n=0\}$, that is, in $\overline B_{1/2}^+\times[-1/2,0]$. If it additionally satisfies $u(\cdot,-1)=0$, then we also establish H\"older regularity up to the initial time, that is, in $\overline B_{1/2}^+\times[-1,0]$.
Our results are scattered in the following four sections.
\begin{itemize}
\item In Section \ref{sec:sobolev}, we introduce a corresponding weighted Sobolev space. We prove a weighted parabolic Sobolev inequality in Theorem \ref{thm:weightedsobolev} and Theorem \ref{thm:weightedsobolev2}, and a De Giorgi type isoperimetric inequality in Theorem \ref{thm:degiorgiisoperimetricelliptic}.
\item In Section \ref{sec:weaksolution}, we introduce the definition of weak solutions in Definition \ref{defn:weaksolutionp}, and establish the wellposedness in Theorem \ref{thm:existenceofweaksolution}.
\item In Section \ref{sec:bound}, we prove the local-in-time boundedness up to $\{x_n=0\}$ of weak solutions in Theorem \ref{thm:localboundedness}, and the space-time global boundedness in Theorem \ref{thm:localboundednessglobal}.
\item In Section \ref{sec:holderregularity}, we prove local-in-time H\"older estimates up to $\{x_n=0\}$ of weak solutions in Theorem \ref{thm:holdernearboundary}, and space-time global H\"older estimates in Theorem \ref{thm:uniformholderglobal}. At the end of the paper, we show the well-posedness of the Cauchy-Dirichlet problem \eqref{eq:general}.
\end{itemize}
Our proof of the boundedness and H\"older estimates of weak solutions uses the De Giorgi iteration. The local-in-time boundedness and H\"older estimates for \eqref{eq:linear-eq} with $-1<p<1$ and $a\equiv 1$ but without lower order terms follow from Guti\'errez-Wheeden \cite{GW0,GW}.
\section{Sobolev spaces and inequalities}\label{sec:sobolev}
\subsection{Some weighted Sobolev spaces}
In this section, we will introduce several Sobolev spaces that will be needed to define and study weak solutions of \eqref{eq:linear-eq}. Denote
\[
Q^+_{R,T}=B_R^+\times (-T,0].
\]
Let
\begin{align}
W^{1,1}_2(Q^+_{R,T})&:=\{g\in L^2(Q^+_{R,T}): \pa_tg\in L^2(Q^+_{R,T}), D_i g \in L^2(Q^+_{R,T}),\ i=1,\cdots,n\}\label{eq:standardspace},\\
\|g\|_{W^{1,1}_2(Q^+_{R,T})}&:=\|g\|_{L^2(Q^+_{R,T})}+\|\pa_t g\|_{L^2(Q^+_{R,T})}+ \sum_{i=1}^n\|D_i g\|_{L^2(Q^+_{R,T})}\nonumber
\end{align}
be the standard Sobolev space with the standard Sobolev norm.
Let $p>-1$. Let
\begin{align}
&V^{1,1}_2(Q^+_{R,T}):=\{g\in L^2(Q^+_{R,T}): \partial_tg\in L^2(Q^+_{R,T},x_n^{p}\ud x\ud t), D_i g \in L^2(Q^+_{R,T}), i=1,\cdots,n\}\label{eq:standardspaceweighted},\\
&\|g\|_{V^{1,1}_2(Q^+_{R,T})}:=\|g\|_{L^2(Q^+_{R,T})}+\|\pa_t g\|_{L^2(Q^+_{R,T},x_n^{p}\ud x\ud t)}+ \sum_{i=1}^n\|D_i g\|_{L^2(Q^+_{R,T})}\nonumber
\end{align}
be a weighted Sobolev space, with the weight $x_n^p$ only applied on $\pa_t g$. Let
\begin{align}
V_2(Q^+_{R,T}) &:=L^\infty ((-T,0]; L^2(B_R^+,x_n^{p}\ud x)) \cap L^2((-T,0];H^1(B_R^+)), \label{eq:weightedspaceinfinity}\\
\|u\|_{V_2(Q^+_{R,T})}&:= \left(\sup_{-T<t<0} \int_{B_R^+ }u^2 x_n^{p}\,\ud x + \|\nabla u\|_{L^2(B_R^+ \times(-T,0])} ^2\right)^{1/2}\label{eq:V2norm},
\end{align}
and
\begin{equation}\label{eq:weightedspaceC}
V_2^{1,0}(Q^+_{R,T}) =C ([-T,0]; L^2(B_R^+,x_n^{p}\ud x)) \cap L^2((-T,0];H^1(B_R^+))
\end{equation}
be a subspace of $V_2(Q^+_{R,T})$ endowed with the norm \eqref{eq:V2norm}.
Then all of $W^{1,1}_2(Q^+_{R,T})$, $V^{1,1}_2(Q^+_{R,T})$, $V_2^{1,0}(Q^+_{R,T})$ and $V_2(Q^+_{R,T}) $ are Banach spaces. If $p\ge 0$, then
\[
W^{1,1}_2(Q^+_{R,T})\subset V^{1,1}_2(Q^+_{R,T})\subset V_2^{1,0}(Q^+_{R,T})\subset V_2(Q^+_{R,T}).
\]
If $-1<p<0$, then
\[
V^{1,1}_2(Q^+_{R,T})\subset W^{1,1}_2(Q^+_{R,T}),\quad V^{1,1}_2(Q^+_{R,T})\subset V_2^{1,0}(Q^+_{R,T})\subset V_2(Q^+_{R,T}).
\]
In fact, $V_2^{1,0}(Q^+_{R,T})$ is the closure of $V^{1,1}_2(Q^+_{R,T})$ under the norm $ \|\cdot \|_{V_2(Q^+_{R,T})}$.
We also denote $$\mathring W^{1,1}_2(Q^+_{R,T}),\ \mathring V^{1,1}_2(Q^+_{R,T}),\ \mathring V_2^{1,0}(Q^+_{R,T}),\ \mathring V_2(Q^+_{R,T})$$ as the set of functions in
\[
W^{1,1}_2(Q^+_{R,T}), \ V^{1,1}_2(Q^+_{R,T}),\ V_2^{1,0}(Q^+_{R,T}),\ V_2(Q^+_{R,T}) \mbox{ vanishing a.e. on } \partial B_R^+\times[-T,0]
\]
in the trace sense, respectively.
\begin{lem}\label{lem:sobolevdense}
For $p>0$, $\mathring W^{1,1}_2(Q^+_{R,T})$ is dense in $\mathring V^{1,1}_2(Q^+_{R,T})$.
\end{lem}
\begin{proof}
For $\varphi\in \mathring V^{1,1}_2(Q^+_{R,T})$ and $\va>0$, let
\[
\varphi_\va(x,t):=e^{-\va /x_n}\varphi(x,t).
\]
Then
\[
\partial_t \varphi_\va=e^{-\va /x_n}\partial_t\varphi, \quad D_i \varphi_\va=e^{-\va /x_n}D_i\varphi, \ i=1,\cdots,n-1;
\]
and
\[
D_{n} \varphi_\va=e^{-\va /x_n}D_n\varphi+\frac{\va e^{-\va /x_n}}{x_n}\frac{\varphi}{x_n}.
\]
Hence, $\varphi_\va \in \mathring W^{1,1}_2(Q^+_{R,T})$. By Hardy's inequality, we have
\[
\int_{Q_{R,T}^+} \frac{\varphi^2}{x_n^2}\,\ud x\ud t\le C \int_{Q_{R,T}^+} |\nabla\varphi|^2\,\ud x\ud t.
\]
Therefore, it follows from Lebesgue's dominated convergence theorem that $\|\varphi_\va-\varphi\|_{V^{1,1}_2(Q^+_{R,T})}\to 0$ as $\va\to 0^+$.
\end{proof}
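To justify the application of the dominated convergence theorem (a routine check, recorded for completeness): since $0 \le e^{-\va/x_n} \le 1$ and $\sup_{s>0} s e^{-s} \le 1$, we have pointwise in $Q^+_{R,T}$
\begin{equation*}
|\partial_t \varphi_\va - \partial_t \varphi| \le |\partial_t \varphi|, \qquad
|D_i \varphi_\va - D_i \varphi| \le |D_i \varphi| \ (1\le i\le n-1), \qquad
|D_n \varphi_\va - D_n \varphi| \le |D_n \varphi| + \frac{|\varphi|}{x_n},
\end{equation*}
and the right-hand sides are square integrable (for the time derivative with the weight $x_n^p$, and for the last term by Hardy's inequality above), while each difference tends to $0$ pointwise as $\va \to 0^+$.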
This density fact will be used for the existence of weak solutions to \eqref{eq:linear-eq} (see Theorem \ref{thm:existenceofweaksolution}).
\begin{lem}\label{lem:cutoffconstantspace}
Let $u\in V_2^{1,0}(Q^+_{R,T})$. Then for every $k\in\R$,
\[
(u-k)^+:=\max(u-k,0)\in V_2^{1,0}(Q^+_{R,T}).
\]
\end{lem}
\begin{proof}
It is clear that $(u-k)^+\in V_2(Q^+_{R,T})$. For two real numbers $r_1$ and $r_2$, we have the pointwise estimate
\begin{equation}\label{eq:cutoffconstant}
|(r_1-k)^+-(r_2-k)^+|\le |r_1-r_2|.
\end{equation}
Hence
\[
\|(u-k)^+(\cdot,t+h)-(u-k)^+(\cdot,t)\|_{L^2(B_R^+,x_n^p\ud x)}\le \|u(\cdot,t+h)-u(\cdot,t)\|_{L^2(B_R^+,x_n^p\ud x)}.
\]
Since $u\in C ([-T,0]; L^2(B_R^+,x_n^{p}\ud x))$, then $(u-k)^+\in C ([-T,0]; L^2(B_R^+,x_n^{p}\ud x))$ as well.
\end{proof}
\begin{lem}\label{lem:cutoffconstantconvergence}
Suppose $\{u_j\}\subset V_2^{1,0}(Q^+_{R,T})$ converges to $u$ in $V_2^{1,0}(Q^+_{R,T})$. Then for every $k\in\R$,
\[
(u_j-k)^+\to (u-k)^+\quad\mbox{in }V_2^{1,0}(Q^+_{R,T})\quad\mbox{as }j\to\infty.
\]
\end{lem}
\begin{proof}
It follows from \eqref{eq:cutoffconstant}.
\end{proof}
Denote
\[
u_h(x,t)=\frac{1}{h}\int_{t-h}^{t}u(x,s)\,\ud s
\]
as the Steklov average of $u$.
\begin{lem}\label{lem:steklovaverageconvergence}
Let $u\in V_2^{1,0}(Q^+_{R,T})$, and $\delta\in(0,T)$. Then for every $h\in (0,\delta)$, $u_h\in V^{1,1}_2(Q^+_{R,T-\delta})$, and
\[
u_h\to u\quad\mbox{in }V_2(Q^+_{R,T-\delta})\mbox{ as }h\to 0.
\]
\end{lem}
\begin{proof}
It is straightforward to verify that $u_h\in V^{1,1}_2(Q^+_{R,T-\delta})$. Also, by the Minkowski inequality, we have
\begin{align*}
\|(u_h-u)(\cdot,t)\|_{L^2(B_R^+,x_n^p\ud x)}&\le \frac 1h \int_{t-h}^t\|u(\cdot,s)-u(\cdot,t)\|_{L^2(B_R^+,x_n^p\ud x)}\,\ud s\\
&\le \sup_{t-h\le s\le t}\|u(\cdot,s)-u(\cdot,t)\|_{L^2(B_R^+,x_n^p\ud x)}\\
&\to 0\mbox{ as }h\to 0,
\end{align*}
where the continuity in time of $u\in V_2^{1,0}(Q^+_{R,T})$ is used in the last step. Similarly,
\begin{align*}
\|D_x u_h-D_x u\|_{L^2(Q^+_{R,T-\delta})}&\le \frac 1h \int_{-h}^0\|D_x u(x,t+s)-D_x u(x,t)\|_{L^2(Q^+_{R,T-\delta})}\,\ud s\\
&\le \sup_{-h\le s\le 0}\|D_x u(x,t+s)-D_xu(x,t)\|_{L^2(Q^+_{R,T-\delta})}\\
&\to 0\mbox{ as }h\to 0,
\end{align*}
where we used the continuity of translations in $L^2$ in the last step.
\end{proof}
\subsection{Sobolev inequalities}
Next, we will prove a Sobolev inequality for functions in $\mathring V_2(Q^+_{R,T})$ (in fact, in a slightly larger space). To accommodate the partial boundary condition \eqref{eq:linear-eq-D}, we define the following space:
\[
H^1_{0,L}(B_R^+)=\{u\in H^1(B_R^+): u\equiv 0\ \mbox{on}\ \partial'B_R^+\}.
\]
Then we have the well-known Hardy inequality.
\begin{lem}[Hardy's inequality]\label{lem:hardyinequality}
For every $u\in H^1_{0,L}(B_R^+)$, there holds
\[
\int_{B_R^+}\frac{u(x)^2}{x_n^2}\,\ud x\le 4 \int_{B_R^+}|\nabla u(x)|^2\,\ud x.
\]
\end{lem}
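This is the classical one-dimensional Hardy inequality applied on vertical segments; we sketch the argument for completeness, assuming first that $u$ is smooth and vanishes near $\{x_n=0\}$ (the general case follows by density). For fixed $x'$, let $I_{x'}=\{x_n>0:(x',x_n)\in B_R^+\}$. Integrating by parts in $x_n$ and noting that the boundary term at the upper endpoint of $I_{x'}$ is nonpositive, we get
\begin{equation*}
\int_{I_{x'}}\frac{u^2}{x_n^2}\,\ud x_n\le 2\int_{I_{x'}}\frac{|u|\,|D_n u|}{x_n}\,\ud x_n
\le 2\Big(\int_{I_{x'}}\frac{u^2}{x_n^2}\,\ud x_n\Big)^{1/2}\Big(\int_{I_{x'}}|D_n u|^2\,\ud x_n\Big)^{1/2},
\end{equation*}
so that $\int_{I_{x'}} x_n^{-2}u^2\,\ud x_n\le 4\int_{I_{x'}}|D_n u|^2\,\ud x_n$; integrating in $x'$ gives the lemma.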
Consequently, we have
\begin{lem}\label{lem:interpolation}
Let $p>0$. For every $u\in H^1_{0,L}(B_R^+)$ and every $\va>0$, there holds
\[
\int_{B_R^+}u^2\,\ud x\le 4 \va \int_{B_R^+}|\nabla u|^2\,\ud x+\va ^{-\frac{p}{2}} \int_{B_R^+}x_n^pu^2\,\ud x.
\]
\end{lem}
\begin{proof}
We have
\begin{align*}
\int_{B_R^+}u^2\,\ud x&= \int_{B_R^+}x_n^{\frac{2p}{p+2}}u^{\frac{4}{p+2}} x_n^{-\frac{2p}{p+2}}u^{\frac{2p}{p+2}}\,\ud x\\
&\le \left(\int_{B_R^+}x_n^{p}u^{2}\,\ud x \right)^\frac{2}{p+2}
\left(\int_{B_R^+}x_n^{-2}u^{2} \,\ud x \right)^\frac{p}{p+2}\\
&\le \va \int_{B_R^+}x_n^{-2}u^{2}\,\ud x +\va ^{-\frac p2}\int_{B_R^+}x_n^{p}u^{2} \,\ud x \\
&\le 4\va \int_{B_R^+}|\nabla u(x)|^2\,\ud x +\va ^{-\frac p2}\int_{B_R^+}x_n^{p}u^{2}\,\ud x,
\end{align*}
where we used H\"older's inequality, Young's inequality and Lemma \ref{lem:hardyinequality}.
\end{proof}
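The Young inequality step can be made explicit (included only for convenience): with $A=\int_{B_R^+}x_n^{p}u^{2}\,\ud x$ and $B=\int_{B_R^+}x_n^{-2}u^{2}\,\ud x$, we have
\begin{equation*}
A^{\frac{2}{p+2}}B^{\frac{p}{p+2}}
=\big(\va^{-\frac p2}A\big)^{\frac{2}{p+2}}\big(\va B\big)^{\frac{p}{p+2}}
\le \frac{2}{p+2}\,\va^{-\frac p2}A+\frac{p}{p+2}\,\va B
\le \va^{-\frac p2}A+\va B,
\end{equation*}
where Young's inequality is used with the conjugate exponents $\frac{p+2}{2}$ and $\frac{p+2}{p}$; this is exactly the bound used in the third line of the proof above.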
By the usual Sobolev inequality, Hardy's inequality, H\"older's inequality, and a scaling argument, we have the following Sobolev inequality for functions in $H^1_{0,L}(B_R^+)$.
\begin{lem}[Sobolev's inequality]\label{lem:Sobolevinequalitynot}
There exists $C>0$ depending only on $n$ such that for every $u\in H^1_{0,L}(B_R^+)$, there holds
\begin{align*}
\|u\|_{L^\frac{2n}{n-2}(B_R^+)}&\le C \|\nabla u\|_{L^2(B_R^+)}\quad\mbox{if }n\ge 3,\\
\|u\|_{L^q(B_R^+)}&\le CR^{\frac{n}{q}+\frac{2-n}{2}} \|\nabla u\|_{L^2(B_R^+)}\ \forall\,q>0\quad\mbox{if }n=1, 2.
\end{align*}
\end{lem}
Combining Hardy's inequality and Sobolev's inequality, we have the following Hardy-Sobolev inequality for functions in $H^1_{0,L}(B_R^+)$.
\begin{lem}[Hardy-Sobolev inequality]\label{lem:Hardy-Sobolev} Let $s\in (0,2)$. Then
\begin{equation}\label{eq:hardysobolev}
\Big(\int_{B_R^+}\frac{|u(x)|^{\frac{2(n-s)}{n-2}}}{x_n^s}\Big)^{\frac{n-2}{n-s}}\le C(n)^{\frac{n-2}{n-s}}\int_{B_R^+}|\nabla u|^2\quad\forall\ u\in H_{0,L}^1(B_R^+),
\end{equation}
when $n\ge 3$, and for $s\le r<\infty$,
\begin{equation}\label{eq:hardysobolev-2d}
\Big(\int_{B_R^+}\frac{|u(x)|^{r}}{x_n^s}\Big)^{\frac{2}{r}}\le C(r,s)R^{\frac{2(n-s)}{r}+2-n}\int_{B_R^+}|\nabla u|^2\quad\forall\ u\in H_{0,L}^1(B_R^+),
\end{equation}
when $n=1,2$.
\end{lem}
\begin{proof} By scaling, we only need to prove for $R=1$. If $n\ge 3$, using the H\"older inequality, Hardy inequality and Sobolev inequality, we have
\begin{align*}
\int_{B_1^+}\frac{|u(x)|^{\frac{2(n-s)}{n-2}}}{x_n^s}& = \int_{B_1^+}\frac{|u(x)|^{s}}{x_n^s} |u(x)|^{\frac{2n-sn}{n-2}}
\\& \le \Big( \int_{B_1^+}\frac{|u(x)|^{2}}{x_n^2} \Big)^{\frac{s}{2}} \Big( \int_{B_1^+}|u|^{\frac{2n}{n-2}} \Big)^{\frac{2-s}{2}} \\&
\le C(n)\Big( \int_{B_1^+} |\nabla u|^2 \Big)^{\frac{s}{2}} \Big( \int_{B_1^+} |\nabla u|^2 \Big)^{\frac{n}{n-2}\frac{2-s}{2}} \\&= C(n)\Big( \int_{B_1^+} |\nabla u|^2 \Big)^{\frac{n-s}{n-2}} .
\end{align*}
If $n=1,2$, we have
\begin{align*}
\int_{B_1^+}\frac{|u(x)|^{r}}{x_n^s} &= \int_{B_1^+}\frac{|u(x)|^{s}}{x_n^s} |u|^{r-s} \\&
\le \Big( \int_{B_1^+}\frac{|u(x)|^{2}}{x_n^2} \Big)^{\frac{s}{2}} \Big( \int_{B_1^+}|u|^{\frac{2(r-s)}{2-s}} \Big)^{\frac{2-s}{2}} \\&
\le C(r,s) \Big( \int_{B_1^+} |\nabla u|^2 \Big)^{\frac{r}{2}} .
\end{align*}
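Here, in the last step, the first factor is estimated by Hardy's inequality (Lemma \ref{lem:hardyinequality}) and the second by Lemma \ref{lem:Sobolevinequalitynot} with $R=1$ and $q=\frac{2(r-s)}{2-s}$, which gives
\begin{equation*}
\Big(\int_{B_1^+}|u|^{\frac{2(r-s)}{2-s}}\Big)^{\frac{2-s}{2}}\le C(r,s)\Big(\int_{B_1^+}|\nabla u|^2\Big)^{\frac{r-s}{2}},
\end{equation*}
so that the two factors together are bounded by $C(r,s)\big(\int_{B_1^+}|\nabla u|^2\big)^{\frac{s}{2}+\frac{r-s}{2}}=C(r,s)\big(\int_{B_1^+}|\nabla u|^2\big)^{\frac r2}$.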
Therefore, we complete the proof.
\end{proof}
The next theorem is a mild generalization of Lemma 2.2 in \cite{JX19}.
\begin{thm}\label{thm:weightedsobolev}
Let $p\ge 0$. For every $u\in L^\infty ((-T,0]; L^2(B_R^+,x_n^{p}\ud x)) \cap L^2((-T,0];H^1_{0,L}(B_R^+)) $ (in particular, $u\in \mathring V_2(Q^+_{R,T})$), we have
\[
\left(\int_{Q^+_{R,T}} |u|^{2\chi}\ud x \ud t \right)^{\frac{1}{\chi}} \le C \|u\|_{V_2(Q^+_{R,T})}^2,
\]
where $\chi =\frac{n+p+2}{n+p}$ and $C$ depends only on $n$ and $p$ if $n\ge 3$; while $\chi=\frac{p+2}{p+1}$ and $C=C(p)R^{\frac{p+2-n}{p+2}}$ with the constant $C(p)$ depending only on $p$ if $n=1,2$.
\end{thm}
\begin{proof} We prove the case $n\ge 3$ first.
Let $s\in(0,2)$ be such that $\frac{s(n-2)}{2-s}=p$. By \eqref{eq:hardysobolev} and the H\"older inequality, we have
\begin{align*}
\int_{B_R^+ }|u|^{\frac{2(n+2-2s)}{n-s} } \,\ud x &=\int_{B_R^+ }|u|^{2} x_n^{-\frac{s(n-2)}{n-s}}|u|^{\frac{2(2-s)}{n-s} } x_n^{\frac{s(n-2)}{n-s}}\,\ud x \\
&\le \Big(\int_{B_R^+}\frac{|u|^{\frac{2(n-s)}{n-2}}}{x_n^s}\,\ud x\Big)^{\frac{n-2}{n-s}} \Big( \int_{B_R^+ } u^2 x_n^{\frac{s(n-2)}{2-s}}\,\ud x\Big)^{\frac{2-s}{n-s}} \\&
\le C(n,p) \Big(\int_{B_R^+ } |\nabla u|^2\,\ud x\Big) \Big( \int_{B_R^+ } u^2 x_n^{p}\,\ud x\Big)^{\frac{2-s}{n-s}}.
\end{align*}
Integrating the above inequality in $t$, we have
\begin{align*}
&\Big(\int_{-T}^0\int_{B_R^+ }|u(x,t)|^{\frac{2(n+2-2s)}{n-s}} \,\ud x \ud t\Big)^{\frac{n-s}{n+2-2s}}\\
& \le C(n,p) \sup_{-T<t<0}\Big( \int_{B_R^+ }u^2 x_n^{p}\,\ud x\Big)^{\frac{2-s}{n+2-2s}} \Big(\int_{B_R^+ \times[-T,0]} |\nabla u|^2 \,\ud x \ud t \Big)^{\frac{n-s}{n+2-2s}}\\&
\le C(n,p) \Big( \|\nabla u\|_{L^2(B_R^+ \times(-T,0])} ^2 + \sup_{-T<t<0} \int_{B_R^+ }u^2 x_n^{p}\,\ud x \Big),
\end{align*}
where we have used the Young inequality in the last inequality.
If $n=1,2$, using \eqref{eq:hardysobolev-2d} and the H\"older inequality, we have
\begin{align*}
\int_{B_R^+ }|u|^{2+\frac{2}{p+1}} \,\ud x&= \int_{B_R^+ }|u|^{2} x_n^{-\frac{p}{p+1}} |u|^{\frac{2}{p+1}} x_n^{\frac{p}{p+1}} \,\ud x \\&
\le \Big( \int_{B_R^+ } \frac{|u|^{\frac{2(p+1)}{p}}}{x_n}\,\ud x \Big)^{\frac{p}{p+1}} \Big( \int_{B_R^+ } u^2 x_n^{p}\,\ud x\Big)^{\frac{1}{p+1}} \\&
\le C R^{\frac{p+2-n}{p+1}}\Big( \int_{B_R^+ } |\nabla u|^2\,\ud x \Big) \Big( \int_{B_R^+ } u^2 x_n^{p}\,\ud x\Big)^{\frac{1}{p+1}} .
\end{align*}
Integrating the above inequality in $t$, we have
\begin{align*}
&\Big(\int_{-T}^0\int_{B_R^+ }|u(x,t)|^{\frac{2(p+2)}{p+1}} \,\ud x \ud t\Big)^{\frac{p+1}{p+2}}\\
& \le C(p)R^{\frac{p+2-n}{p+2}} \sup_{-T<t<0}\Big( \int_{B_R^+ }u^2 x_n^{p}\,\ud x\Big)^{\frac{1}{p+2}} \Big(\int_{B_R^+ \times[-T,0]} |\nabla u|^2 \,\ud x \ud t \Big)^{\frac{p+1}{p+2}}\\&
\le C(p) R^{\frac{p+2-n}{p+2}}\Big( \|\nabla u\|_{L^2(B_R^+ \times(-T,0])} ^2 + \sup_{-T<t<0} \int_{B_R^+ }u^2 x_n^{p}\,\ud x \Big),
\end{align*}
where we have used the Young inequality in the last inequality.
\end{proof}
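We record, for the reader's convenience, why the exponent obtained in the case $n\ge 3$ is exactly $2\chi$. The choice $\frac{s(n-2)}{2-s}=p$ gives $s=\frac{2p}{n+p-2}$, and then
\begin{equation*}
n-s=\frac{(n-2)(n+p)}{n+p-2},\qquad n+2-2s=\frac{(n-2)(n+p+2)}{n+p-2},\qquad
\frac{2(n+2-2s)}{n-s}=\frac{2(n+p+2)}{n+p}=2\chi.
\end{equation*}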
For $-2<p<0$, we have another parabolic Sobolev inequality.
\begin{thm}\label{thm:weightedsobolev2}
For every $u\in L^\infty ((-T,0]; L^2(B_R^+,x_n^{p}\ud x)) \cap L^2((-T,0];H^1_{0,L}(B_R^+)) $ (in particular, $u\in \mathring V_2(Q^+_{R,T})$), where $-2<p<0$, we have
\[
\left(\int_{Q^+_{R,T}} |u|^{2\chi}x_n^p\ud x \ud t \right)^{\frac{1}{\chi}} \le C \|u\|_{V_2(Q^+_{R,T})}^2,
\]
where $\chi =\frac{n+2p+2}{n+p}$ and $C$ depends only on $n$ and $p$ if $n\ge 3$; while $\chi=\frac{3}{2}$ and $C=C(p)R^{\frac{p+4-n}{3}}$ with the constant $C(p)$ depending only on $p$ if $n=1,2$.
\end{thm}
\begin{proof} We prove the case $n\ge 3$ first. By \eqref{eq:hardysobolev} and the H\"older inequality, we have
\begin{align*}
\int_{B_R^+ }|u|^{\frac{2(n+2+2p)}{n+p} } x_n^p \,\ud x &=\int_{B_R^+ }|u|^{2} x_n^{\frac{p(n-2)}{n+p}}|u|^{\frac{2(2+p)}{n+p} } x_n^{\frac{p(p+2)}{n+p}}\,\ud x \\
&\le \Big(\int_{B_R^+}\frac{|u|^{\frac{2(n+p)}{n-2}}}{x_n^{-p}}\,\ud x\Big)^{\frac{n-2}{n+p}} \Big( \int_{B_R^+ } u^2 x_n^{p}\,\ud x\Big)^{\frac{2+p}{n+p}} \\&
\le C(n,p) \Big(\int_{B_R^+ } |\nabla u|^2\,\ud x\Big) \Big( \int_{B_R^+ } u^2 x_n^{p}\,\ud x\Big)^{\frac{2+p}{n+p}}.
\end{align*}
Integrating the above inequality in $t$, we have
\begin{align*}
&\Big(\int_{-T}^0\int_{B_R^+ }|u(x,t)|^{\frac{2(n+2+2p)}{n+p}} x_n^p\,\ud x \ud t\Big)^{\frac{n+p}{n+2+2p}}\\
& \le C(n,p) \sup_{-T<t<0}\Big( \int_{B_R^+ }u^2 x_n^{p}\,\ud x\Big)^{\frac{2+p}{n+2+2p}} \Big(\int_{B_R^+ \times[-T,0]} |\nabla u|^2 \,\ud x \ud t \Big)^{\frac{n+p}{n+2+2p}}\\&
\le C(n,p) \Big( \|\nabla u\|_{L^2(B_R^+ \times(-T,0])} ^2 + \sup_{-T<t<0} \int_{B_R^+ }u^2 x_n^{p}\,\ud x \Big),
\end{align*}
where we have used the Young inequality in the last inequality.
If $n=1,2$, using \eqref{eq:hardysobolev-2d} and the H\"older inequality, we have
\begin{align*}
\int_{B_R^+ }|u|^{3} x_n^p\,\ud x&= \int_{B_R^+ }|u|^{2} x_n^{\frac{p}{2}} |u| x_n^{\frac{p}{2}} \,\ud x \\&
\le \Big( \int_{B_R^+ } \frac{|u|^{4}}{x_n^{-p}}\,\ud x \Big)^{\frac{1}{2}} \Big( \int_{B_R^+ } u^2 x_n^{p}\,\ud x\Big)^{\frac{1}{2}} \\&
\le C R^{\frac{p+4-n}{2}}\Big( \int_{B_R^+ } |\nabla u|^2\,\ud x \Big) \Big( \int_{B_R^+ } u^2 x_n^{p}\,\ud x\Big)^{\frac{1}{2}} .
\end{align*}
Integrating the above inequality in $t$, we have
\begin{align*}
&\Big(\int_{-T}^0\int_{B_R^+ }|u(x,t)|^{3} x_n^p\,\ud x \ud t\Big)^{\frac{2}{3}}\\
& \le C(p)R^{\frac{p+4-n}{3}} \sup_{-T<t<0}\Big( \int_{B_R^+ }u^2 x_n^{p}\,\ud x\Big)^{\frac{1}{3}} \Big(\int_{B_R^+ \times[-T,0]} |\nabla u|^2 \,\ud x \ud t \Big)^{\frac{2}{3}}\\&
\le C(p) R^{\frac{p+4-n}{3}} \Big( \|\nabla u\|_{L^2(B_R^+ \times(-T,0])} ^2 + \sup_{-T<t<0} \int_{B_R^+ }u^2 x_n^{p}\,\ud x \Big),
\end{align*}
where we have used the Young inequality in the last inequality. Note that
\[
\frac{p+4-n}{3}>0
\]
if $-2<p<0$ and $n=1,2$.
\end{proof}
Using the idea of Fabes-Kenig-Serapioni \cite{FKS}, we have the following Poincar\'e inequality.
\begin{prop}\label{prop:weightedpoincare2}
Let $n\ge 1$, $p>-1$ and $r>0$. Then there exists a constant $C>0$ depending only on $n$ and $p$ such that
\[
\int_{B_r}|u(x)-(u)_{p,r}| |x_n|^{p}\,\ud x\le C r^{1+p} \int_{B_r}|\nabla u(x)|\,\ud x
\]
for all $u\in H^1(B_r)$, where
\[
(u)_{p,r}=\frac{\int_{B_r}u(x) |x_n|^{p}\,\ud x}{\int_{B_r} |x_n|^{p}\,\ud x}.
\]
\end{prop}
\begin{proof}
By scaling, we only need to prove it for $r=1$. By a density argument, we only need to show it for Lipschitz continuous (in $\overline B_1$) functions.
Using the triangle inequality and Lemma 1.4 of Fabes-Kenig-Serapioni \cite{FKS}, we have for all $x\in B_1$ that
\[
\left|u(x)-(u)_{0}\right|\le \frac{1}{|B_1|}\int_{B_1}|u(x)-u(y)|\,\ud y\le C \int_{B_1}\frac{|\nabla u(z)|}{|x-z|^{n-1}}\,\ud z,
\]
where
\[
(u)_{0}= \frac{1}{|B_1|}\int_{B_1}u(y)\,\ud y.
\]
Then
\[
\int_{B_1} \left|u(x)-(u)_{0}\right| |x_n|^p\,\ud x\le C \int_{B_1}\left(\int_{B_1}\frac{|x_n|^p}{|x-z|^{n-1}} \,\ud x\right) |\nabla u(z)|\,\ud z.
\]
Since
\begin{align*}
\int_{B_1}\frac{|x_n|^p}{|x-z|^{n-1}} \,\ud x &\le \int_{\{|x_n|\le 1, |x'|\le 1\}}\frac{|x_n|^p}{|x-z|^{n-1}} \,\ud x\\
&\le C\int_{\{|x_n|\le 1, |x'|\le 1\}}\frac{|x_n|^p}{|x|^{n-1}} \,\ud x\\
&=C\int_{-1}^1 \left(\int_{|x'|\le \frac{1}{|x_n|}}\frac{1}{(1+|x'|^2)^{\frac{n-1}{2}}}\,\ud x'\right) |x_n|^p\,\ud x_n\\
&\le C\int_{-1}^1 (1+|\log |x_n||)\, |x_n|^p\,\ud x_n\\
&\le C,
\end{align*}
where we used $p>-1$ in the last inequality, we have
\[
\int_{B_1} \left|u(x)-(u)_{0}\right| |x_n|^p\,\ud x\le C \int_{B_1}|\nabla u(z)|\,\ud z.
\]
Then the conclusion follows from the fact that
\begin{align*}
|(u)_{p,1}-(u)_{0}|=\left|\frac{\int_{B_1} (u-(u)_{0}) |x_n|^{p}\,\ud x}{\int_{B_1} |x_n|^{p}\,\ud x} \right|\le C \int_{B_1} |u-(u)_{0}| |x_n|^{p}\,\ud x\le C \int_{B_1}|\nabla u(z)|\,\ud z.
\end{align*}
\end{proof}
The last inequality is a De Giorgi type isoperimetric inequality.
\begin{thm}\label{thm:degiorgiisoperimetricelliptic}
Let $p>-1$, $k<\ell, r>0$ and $u\in H^1(B_r)$. For every
$
0<\va<\min\left(\frac{1}{2},\frac{1}{2(p+1)}\right),
$
there exists a positive constant $C$ depending only on $n,p$ and $\va$ such that
\begin{align*}
&(\ell-k)\int_{\{u\ge \ell\}\cap B_r} |x_n|^{p}\,\ud x\int_{\{u\le k\}\cap B_r} |x_n|^{p}\,\ud x\\
& \le C r^{n+2p+1+\frac{n(1-2\va)}{2}-\va p} \left(\int_{\{k<u<\ell\}\cap B_r} |\nabla u|^2\,\ud x\right)^{1/2} \left(\int_{\{k<u<\ell\}\cap B_r}|x_n|^{p}\,\ud x\right)^{\va}.
\end{align*}
\end{thm}
\begin{proof}
Let
\[
v=\sup(k, \inf(u,\ell))-k,\quad (v)_{p,r}=\frac{\int_{B_r} v(x) |x_n|^{p}\,\ud x}{\int_{B_r} |x_n|^{p}\,\ud x}.
\]
Then by Proposition \ref{prop:weightedpoincare2},
\begin{align*}
\int_{\{v=0\}\cap B_r} (v)_{p,r} |x_n|^{p}\,\ud x&\le \int_{B_r}|v(x)-(v)_{p,r}| |x_n|^{p}\,\ud x\\
&\le Cr^{1+p} \int_{B_r}|\nabla v(x)| \,\ud x\\
&=Cr^{1+p} \int_{\{k<u<\ell\}\cap B_r}|\nabla u(x)| \,\ud x.
\end{align*}
Using H\"older's inequality, we have
\begin{align*}
&\int_{\{k<u<\ell\}\cap B_r}|\nabla u(x)| \,\ud x \\
& \le C \left(\int_{\{k<u<\ell\}\cap B_r} |\nabla u|^2 \,\ud x\right)^{1/2} \left(\int_{\{k<u<\ell\}\cap B_r}|x_n|^{p}\,\ud x\right)^{\va} \left(\int_{\{k<u<\ell\}\cap B_r}|x_n|^{-\frac{2p\va}{1-2\va}}\,\ud x\right)^{\frac{1-2\va}{2}}\\
& \le C r^{\frac{n(1-2\va)}{2}-\va p} \left(\int_{\{k<u<\ell\}\cap B_r} |\nabla u|^2\,\ud x\right)^{1/2} \left(\int_{\{k<u<\ell\}\cap B_r}|x_n|^{p}\,\ud x\right)^{\va},
\end{align*}
where H\"older's inequality is applied with the three exponents $2$, $\frac{1}{\va}$ and $\frac{2}{1-2\va}$ after writing $1=|x_n|^{p\va}\cdot|x_n|^{-p\va}$, and we used
$
0<\va<\min\left(\frac{1}{2},\frac{1}{2(p+1)}\right)
$
to ensure that these exponents are admissible and that $|x_n|^{-\frac{2p\va}{1-2\va}}$ is integrable.
On the other hand, we have
\begin{align*}
\int_{\{v=0\}\cap B_r} (v)_{p,r} |x_n|^{p}\,\ud x&=\frac{\int_{B_r} v(x) |x_n|^{p}\,\ud x}{\int_{B_r} |x_n|^{p} \,\ud x}\cdot \int_{\{u\le k\}\cap B_r} |x_n|^{p} \,\ud x\\
& \ge \frac{(\ell-k)\int_{\{u\ge \ell\}\cap B_r} |x_n|^{p} \,\ud x}{\int_{B_r} |x_n|^{p} \,\ud x}\cdot \int_{\{u\le k\}\cap B_r} |x_n|^{p} \,\ud x\\
&\ge C r^{-n-p} (\ell-k)\int_{\{u\ge \ell\}\cap B_r} |x_n|^{p}\,\ud x\int_{\{u\le k\}\cap B_r} |x_n|^{p}\,\ud x.
\end{align*}
Hence, the conclusion follows.
\end{proof}
\section{Weak solutions}\label{sec:weaksolution}
\subsection{Definitions}
Regarding the coefficients of the equation \eqref{eq:linear-eq}, besides \eqref{eq:rangep}, we assume that
\begin{itemize}
\item there exist $0<\lambda\le\Lambda<\infty$ such that
\be \label{eq:ellip2}
\lda\le a(x,t)\le \Lda, \quad \lda |\xi|^2 \le \sum_{i,j=1}^na_{ij}(x,t)\xi_i\xi_j\le \Lda |\xi|^2, \quad\forall\ (x,t)\in Q_1^+,\ \forall\ \xi\in\R^n;
\ee
\item
\begin{equation}\label{eq:assumptioncoefficient}
\Big\||\pa_t a| + |c|\Big\|_{L^q(Q_1^+,x_n^p\ud x\ud t)} +\left\|\sum_{j=1}^n(b_j^2+d_j^2)+|c_0|\right\|_{L^q(Q_1^+)}\le\Lambda
\end{equation}
for some $q>\frac{\chi}{\chi-1}$;
\item
\begin{equation}\label{eq:assumptionf}
F_0:=\|f\|_{L^\frac{2\chi}{2\chi-1}(Q_1^+,x_n^p\ud x\ud t)}+\|f_0\|_{L^\frac{2\chi}{2\chi-1}(Q_1^+)}+ \sum_{j=1}^n\|f_j\|_{L^2(Q_1^+)}<\infty,
\end{equation}
\end{itemize}
where $\chi>1$ is the constant in Theorem \ref{thm:weightedsobolev} or Theorem \ref{thm:weightedsobolev2} depending on the value of $p$.
\begin{defn}\label{defn:weaksolutionp}
We say $u$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D} if $u\in C ((-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+)) $ and satisfies
\begin{equation}\label{eq:definitionweaksolution}
\begin{split}
&\int_{B^+_1}a(x,s) x_n^{p} u(x,s) \varphi(x,s)\,\ud x-\int_{-1}^s\int_{B_1^+} x_n^{p}(\varphi\partial_t a+a\partial_t \varphi)u\,\ud x\ud t\\
&\quad+ \int_{-1}^s\int_{B_1^+} \big(a_{ij}D_iuD_j\varphi+d_juD_j\varphi+b_jD_ju\varphi+c x_n^p u \varphi+c_0u\varphi\big)\,\ud x\ud t\\
&=\int_{-1}^s\int_{B_1^+} (x_n^p f\varphi+f_0\varphi+f_jD_j\varphi)\,\ud x\ud t\quad\mbox{a.e. }s\in (-1,0]
\end{split}
\end{equation}
for every $\varphi\in \mathring V^{1,1}_2(Q^+_1)$ satisfying $\varphi (\cdot,-1)\equiv 0$ in $B_1^+$ (in the trace sense).
\end{defn}
Using Theorem \ref{thm:weightedsobolev} and Theorem \ref{thm:weightedsobolev2}, one can verify that under the assumptions \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:assumptioncoefficient} and \eqref{eq:assumptionf}, each integral in \eqref{eq:definitionweaksolution} is finite.
\begin{defn}\label{defn:weaksolutionwithinitialtime}
We say that $u$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D} and the initial condition $u(\cdot,-1)\equiv 0$, if $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+)) $, $u(\cdot,-1)\equiv 0$, and satisfies \eqref{eq:definitionweaksolution} for all $\varphi\in \mathring V^{1,1}_2(Q^+_1)$.
\end{defn}
\begin{defn}\label{defn:weaksolutionglobal}
We say that $u$ is a weak solution of \eqref{eq:linear-eq} with the full boundary condition $u\equiv 0$ on $\pa_{pa} Q_1^+$, if $u\in \mathring V_2^{1,0}(Q^+_{1})$, $u(\cdot,-1)\equiv 0$, and satisfies \eqref{eq:definitionweaksolution} for all $\varphi\in \mathring V^{1,1}_2(Q^+_1)$.
\end{defn}
\begin{defn}\label{defn:weaksolutionglobalin}
Let $g\in V^{1,1}_2(Q_1^+)$. We say that $u$ is a weak solution of \eqref{eq:linear-eq} with the inhomogeneous boundary condition $u\equiv g$ on $\pa_{pa} Q_1^+$, if $u\in V_2^{1,0}(Q^+_{1})$, $u= g$ on $\pa_{pa} Q_1^+$, and $v:=u-g$ is a weak solution of
\begin{align*}
&a x_n^{p} \pa_t v -D_j(a_{ij} D_i v+d_j v)+b_iD_i v+cx_n^pv+c_0 v\\
&=x_n^{p} (f-a \pa_t g -cg) + (f_0-b_iD_i g -c_0 g)- D_j(f_j-a_{ij} D_i g-d_j g)
\end{align*}
with homogeneous boundary condition $v\equiv 0$ on $\pa_{pa} Q_1^+$.
\end{defn}
\subsection{Energy estimates, uniqueness and existence}
We start with energy estimates.
\begin{lem}\label{lem:Steklovapproximation}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D}, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:assumptioncoefficient} and \eqref{eq:assumptionf}.
Let $k\ge \sup_{\pa_{pa}Q_1^+}|u|$ and $\varphi=(u-k)^+$. Then there exists $C>0$ depending only on $n,p,\lambda,\Lambda$ such that
\begin{equation}\label{eq:uastestfunction}
\begin{split}
&\int_{B^+_1}x_n^{p} \varphi(x,s)^2\,\ud x+ \int_{-1}^s\int_{B_1^+} |\nabla\varphi|^2\,\ud x\ud t \\
&\le C\int_{-1}^s\int_{B_1^+} \varphi^2 \Big[(|\partial_t a|+|c|)x_n^p+\sum_{j=1} ^n(d_j^2+b_j^2)+|c_0|\Big]\,\ud x\ud t\\
&\quad+C\int_{-1}^s\int_{B_1^+\cap\{u>k\}} \Big(x_n^p f\varphi + f_0\varphi+\sum_{j=1}^n f_j^2\Big)\,\ud x\ud t\quad\mbox{a.e. }s\in (-1,0].
\end{split}
\end{equation}
\end{lem}
\begin{proof}
If $u\in V^{1,1}_2(Q^+_1)$ (cf. \eqref{eq:standardspaceweighted}), then $\varphi\in \mathring V^{1,1}_2(Q^+_1)$ and $\varphi (\cdot,-1)\equiv 0$ in $B_1^+$. Then \eqref{eq:uastestfunction} follows from \eqref{eq:definitionweaksolution}, by using \eqref{eq:ellip2} and H\"older's inequality.
In the following, we will show that we do not need to assume $u\in V^{1,1}_2(Q^+_1)$, and that $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ would be sufficient.
Denote
\[
u_h(x,t)=\frac{1}{h}\int_{t-h}^{t}u(x,s)\,\ud s
\]
as the Steklov average of $u$. Then for every $v\in V_2^{1,1}(B^+_1\times(-1+h,0))$ such that $v=0$ on $\pa (B^+_1\times(-1+h,0))$, by taking $v_{-h}$ as the test function in \eqref{eq:definitionweaksolution}, we have
\[
\begin{split}
&-\iint_{Q_1^+} x_n^{p}(v_{-h}\partial_t a+a\partial_t v_{-h})u\,\ud x\ud t\\
&+ \iint_{Q_1^+} \big(a_{ij}D_iuD_jv_{-h}+d_j u D_j v_{-h}+b_jD_juv_{-h}+cx_n^puv_{-h}+c_0uv_{-h}\big)\,\ud x\ud t\\
&=\iint_{Q_1^+} (x_n^pfv_{-h}+f_0v_{-h}+f_jD_jv_{-h})\,\ud x\ud t.
\end{split}
\]
By changing the order of the integration, we have
\begin{align}
&\iint_{Q_1^+} \big(a_{ij}D_iuD_jv_{-h}+d_j u D_j v_{-h}+b_jD_juv_{-h}+cx_n^puv_{-h}+c_0uv_{-h}\big)\,\ud x\ud t\nonumber\\
&\quad= \iint_{B_1^+\times(-1+h,0)} \big((a_{ij}D_iu+d_j u)_h D_jv+(b_jD_ju+cx_n^p u+c_0u)_hv\big)\,\ud x\ud t,\label{eq:localmaxapp1}\\
&\iint_{Q_1^+} (x_n^pfv_{-h}+f_0v_{-h}+f_jD_jv_{-h})\,\ud x\ud t\nonumber\\
&\quad=\iint_{B_1^+\times(-1+h,0)} ((x_n^pf+f_0)_hv+(f_j)_hD_jv)\,\ud x\ud t,\label{eq:localmaxapp2}
\end{align}
and
\begin{align}
&-\iint_{Q_1^+} x_n^{p}(v_{-h}\partial_t a+a\partial_t v_{-h})u\,\ud x\ud t\nonumber\\
&=\iint_{B_1^+\times(-1+h,0)} x_n^{p}v\{\partial_t [(au)_{h}]-(u\partial_t a)_h\}\,\ud x\ud t\nonumber\\
&= \iint_{B_1^+\times(-1+h,0)} x_n^{p}v\{a\partial_t u_h+u(\cdot,t-h)\partial_t a_h-(u\partial_t a)_h\}\,\ud x\ud t.\label{eq:localmaxapp3}
\end{align}
Furthermore,
\begin{align}
&\iint_{B_1^+\times(-1+h,0)} x_n^{p}v\{u(\cdot,t-h)\partial_t a_h-(u\partial_t a)_h\}\,\ud x\ud t\nonumber\\
&=\frac{1}{h}\iint_{B_1^+\times(-1+h,0)} x_n^{p}v \int_{t-h}^{t}[u(x,t-h)-u(x,s)]\partial_s a(x,s)\,\ud s\ud x\ud t\nonumber\\
&\to 0\quad\mbox{as }h\to 0.\label{eq:localmaxapp4}
\end{align}
The proof of \eqref{eq:localmaxapp4} is as follows. By Theorem \ref{thm:weightedsobolev} and Theorem \ref{thm:weightedsobolev2}, $u\in L^{2\chi}(Q_1^+,x_n^p\ud x\ud t)$. For every $\va>0$, there exists $\phi\in C^1(\overline Q_1^+)$ such that
\[
\|u-\phi\|_{L^{2\chi}(Q_1^+,x_n^p\ud x\ud t)}\le\va.
\]
Using $\phi\in C^1(\overline Q_1^+)$ and the dominated convergence theorem,
\[
\lim_{h\to 0}\frac{1}{h}\iint_{B_1^+\times(-1+h,0)} x_n^{p}v \int_{t-h}^{t}[\phi(x,t-h)-\phi(x,s)]\partial_s a(x,s)\,\ud s\ud x\ud t=0.
\]
Then
\begin{align*}
&\lim_{h\to 0^+}\left|\frac{1}{h}\iint_{B_1^+\times(-1+h,0)} x_n^{p}v \int_{t-h}^{t}[u(x,t-h)-u(x,s)]\partial_s a(x,s)\,\ud s\ud x\ud t\right|\\
&\le \|u-\phi\|_{L^{2\chi}(Q_1^+,x_n^p\ud x\ud t)}\|v\|_{L^{2\chi}(Q_1^+,x_n^p\ud x\ud t)}\|\pa_t a\|_{L^{\frac{\chi}{\chi-1}}(Q_1^+,x_n^p\ud x\ud t)}\\
&\le C(n,p,\lambda,\Lambda)\va.
\end{align*}
Since $\va$ is arbitrary, the conclusion \eqref{eq:localmaxapp4} follows.
For $0<\delta<1/4$, $3\delta-1<\tau<0$, define
\[
\xi_\delta(t)=
\begin{cases}
0, \mbox{ when } t<\delta-1,\\
\frac{t+1-\delta}{\delta}, \mbox{ when } \delta-1 \le t<2\delta-1,\\
1, \mbox{ when } 2\delta-1\le t<\tau-\delta, \\
\frac{\tau-t}{\delta}, \mbox{ when } \tau-\delta\le t<\tau,\\
0, \mbox{ when } t\ge\tau.
\end{cases}
\]
Take $v=\xi_\delta(t)(u_h-k)^+$. Combining \eqref{eq:localmaxapp3} and \eqref{eq:localmaxapp4}, and using $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x))$, Lemma \ref{lem:steklovaverageconvergence}, Theorem \ref{thm:weightedsobolev} and Theorem \ref{thm:weightedsobolev2}, we have
\begingroup
\allowdisplaybreaks
\begin{align}
&-\lim_{h\to 0}\iint_{Q_1^+} x_n^{p}(v_{-h}\partial_t a+a\partial_t v_{-h})u\,\ud x\ud t\nonumber\\
&= \lim_{h\to 0}\iint_{B_1^+\times(-1+h,0)} x_n^{p}v a\partial_t u_h\,\ud x\ud t\nonumber\\
&=\lim_{h\to 0}\frac12 \iint_{B_1^+\times(-1+h,0)} x_n^{p}a(x,t)\xi_\delta(t)\partial_t [(u_h-k)^+]^2\,\ud x\ud t\nonumber\\
&=-\lim_{h\to 0}\frac12 \iint_{B_1^+\times(-1+h,0)} x_n^{p}\big\{[(u_h-k)^+]^2 a\pa_t\xi_\delta(t) +[(u_h-k)^+]^2\xi_\delta(t) \pa_ta\big\} \,\ud x\ud t\nonumber\\
&\ge -\frac{\Lambda}{2\delta} \iint_{B_1^+\times(-1+\delta,-1+2\delta)} x_n^{p}[(u-k)^+]^2\,\ud x\ud t +\frac{\lambda}{2\delta} \iint_{B_1^+\times(\tau-\delta,\tau)} x_n^{p}[(u-k)^+]^2\,\ud x\ud t \nonumber\\
&\quad-\frac12 \iint_{B_1^+\times(-1,0)} [(u-k)^+]^2 |\pa_ta|\,\ud x\ud t\nonumber\\
&\to \left.\frac{\lambda}{2} \int_{B_1^+} x_n^{p}[(u-k)^+]^2 \,\ud x\right\vert_{\tau}-\frac12 \iint_{B_1^+\times(-1,0)} [(u-k)^+]^2 |\pa_ta|\,\ud x\ud t \quad\mbox{as }\delta\to 0.\label{eq:localmaxapp22}
\end{align}
\endgroup
Also, by the proof of Lemma \ref{lem:steklovaverageconvergence}, we have
\begin{align*}
&\lim_{\delta\to 0} \lim_{h\to 0}\iint_{B_1^+\times(-1+h,0)} \big((a_{ij}D_iu)_h D_jv+(d_j u)_h D_j v+(b_jD_ju+cx_n^p u+c_0u)_hv\big)\,\ud x\ud t\\
&\quad =\iint_{B_1^+\times(-1,\tau)} \big[(a_{ij}D_iu+ d_j u) D_j(u-k)^+ +(b_jD_ju+cx_n^p u+c_0u)(u-k)^+\big]\,\ud x\ud t,\\
& \lim_{\delta\to 0} \lim_{h\to 0} \iint_{B_1^+\times(-1+h,0)} ((x_n^pf+f_0)_hv+(f_j)_hD_jv)\,\ud x\ud t\\
&\quad = \iint_{B_1^+\times(-1,\tau)} ((x_n^pf+f_0)(u-k)^++f_jD_j(u-k)^+)\,\ud x\ud t.
\end{align*}
Therefore, \eqref{eq:uastestfunction} follows from \eqref{eq:localmaxapp1}, \eqref{eq:localmaxapp2}, \eqref{eq:localmaxapp22}, and the Cauchy-Schwarz inequality.
\end{proof}
We have the following uniqueness of weak solutions.
\begin{thm}\label{thm:uniquenessofweaksolution}
Suppose $u$ is a weak solution of \eqref{eq:linear-eq} with the full boundary condition $u\equiv 0$ on $\pa_{pa} Q_1^+$, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:assumptioncoefficient} and \eqref{eq:assumptionf}. Then there exists $C>0$ depending only on $n$, $\lda, \Lda$ and $p$ such that
\begin{equation}\label{eq:energyestimateu}
\|u\|_{V_2(Q_1^+)}\le C \|f\|_{L^\frac{2\chi}{2\chi-1}(Q_1^+,x_n^p\ud x\ud t)}+C\|f_0\|_{L^\frac{2\chi}{2\chi-1}(Q_1^+)}+ C\sum_{j=1}^n\|f_j\|_{L^2(Q_1^+)}.
\end{equation}
Consequently,
there exists at most one weak solution of \eqref{eq:linear-eq} with the full boundary condition $u\equiv 0$ on $\pa_{pa} Q_1^+$.
\end{thm}
\begin{proof}
By letting $k=0$ in Lemma \ref{lem:Steklovapproximation}, we have
\begin{align}
&\int_{B^+_1}x_n^{p} u(x,s)^2\,\ud x+\int_{-1}^s\int_{B_1^+} |\nabla u|^2\,\ud x\ud t\nonumber\\
&\le C \int_{-1}^s\int_{B_1^+} \Big[\Big(\sum_{j=1}^n(d_j^2+b_j^2)+|c_0|+(|\pa_t a|+|c|)x_n^p\Big)u^2 +\sum_{j=1}^n f_j^2 +|f_0u|+|x_n^pfu|\Big]\,\ud x\ud t\nonumber\\
& \le C\|u\|^2_{L^{\frac{2q}{q-1}}(B_1^+\times[-1,s])}+C\|u\|^2_{L^{\frac{2q}{q-1}}(B_1^+\times[-1,s],x_n^p\ud x\ud t)}\nonumber\\
&\quad+ C \int_{-1}^s\int_{B_1^+} (\sum_{j=1}^n f_j^2+|f_0u|+|x_n^pfu|)\,\ud x\ud t.\label{eq:auxenergyestimate}
\end{align}
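Here the second inequality follows from \eqref{eq:assumptioncoefficient} and H\"older's inequality with exponents $q$ and $\frac{q}{q-1}$; for instance,
\[
\int_{-1}^s\int_{B_1^+}\Big(\sum_{j=1}^n(d_j^2+b_j^2)+|c_0|\Big)u^2\,\ud x\ud t\le \Big\|\sum_{j=1}^n(d_j^2+b_j^2)+|c_0|\Big\|_{L^q(Q_1^+)}\,\|u\|^2_{L^{\frac{2q}{q-1}}(B_1^+\times[-1,s])},
\]
and the term involving $(|\pa_t a|+|c|)x_n^p$ is estimated in the same way with respect to the measure $x_n^p\,\ud x\ud t$.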
Since $q>\frac{\chi}{\chi-1}$, it follows from Theorem \ref{thm:weightedsobolev}, Theorem \ref{thm:weightedsobolev2} and Young's inequality that
\begin{align*}
\|u\|^2_{L^{\frac{2q}{q-1}}(B_1^+\times[-1,s])}&\le \delta \|u\|^2_{V_2(B_1^+\times[-1,s])} + C(\delta) \|u\|^2_{L^{2}(B_1^+\times[-1,s])},\\
\|u\|^2_{L^{\frac{2q}{q-1}}(B_1^+\times[-1,s],x_n^p\ud x)}&\le \delta \|u\|^2_{V_2(B_1^+\times[-1,s])} + C(\delta) \|u\|^2_{L^{2}(B_1^+\times[-1,s],x_n^p\ud x)},\\
\int_{-1}^s\int_{B_1^+}|f_0u|\,\ud x\ud t&\le \delta \|u\|^2_{V_2(B_1^+\times[-1,s])} + C(\delta)\|f_0\|^2_{L^{\frac{2\chi}{2\chi-1}}(B_1^+\times[-1,s])},\\
\int_{-1}^s\int_{B_1^+}|x_n^p f u|\,\ud x\ud t&\le \delta \|u\|^2_{V_2(B_1^+\times[-1,s])} + C(\delta)\|f\|^2_{L^{\frac{2\chi}{2\chi-1}}(B_1^+\times[-1,s],x_n^p\ud x\ud t)}.
\end{align*}
Plugging these into \eqref{eq:auxenergyestimate} and using Lemma \ref{lem:interpolation}, we obtain
\begin{align}\label{eq:beforegronwall}
\|u\|^2_{V_2(B_1^+\times[-1,s])} \le C \int_{-1}^s\int_{B_1^+} x_n^pu^2\,\ud x\ud t +CF_0^2,
\end{align}
where $F_0$ is defined in \eqref{eq:assumptionf}. In particular,
\[
\|u(\cdot,s)\|_{L^2(B_1^+,x_n^p\ud x)}^2\le C\|u\|_{L^2(B_1^+\times(-1,s],x_n^p\ud x\ud t)}^2+CF_0^2.
\]
By Gronwall's inequality, we have
\[
\|u\|_{L^2(B_1^+\times(-1,s],x_n^p\ud x\ud t)}^2\le CF_0^2.
\]
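Here the application of Gronwall's inequality can be spelled out as follows: with $g(s):=\|u\|_{L^2(B_1^+\times(-1,s],x_n^p\ud x\ud t)}^2$ (an auxiliary notation used only here), the previous estimate reads
\[
g'(s)=\|u(\cdot,s)\|^2_{L^2(B_1^+,x_n^p\ud x)}\le Cg(s)+CF_0^2\quad\mbox{for a.e. }s\in(-1,0],\qquad g(-1)=0,
\]
so that $g(s)\le F_0^2\big(e^{C(s+1)}-1\big)\le CF_0^2$.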
Plugging this back into \eqref{eq:beforegronwall}, the estimate \eqref{eq:energyestimateu} follows. Therefore, the uniqueness holds.
\end{proof}
\begin{thm}\label{thm:existenceofweaksolution}
Suppose $a$ is continuous in $\overline Q_1^+$, and the conditions \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:assumptioncoefficient} and \eqref{eq:assumptionf} hold. Then there exists a unique weak solution of \eqref{eq:linear-eq} with the full boundary condition $u\equiv 0$ on $\pa_{pa} Q_1^+$.
\end{thm}
\begin{proof}
For two real numbers $r_1$ and $r_2$, we denote $r_1\vee r_2=\max(r_1,r_2)$. We first consider the case with the additional assumption that $\pa_t a, c \in L^q(Q_1^+,x_n^p\vee 1\,\ud x\ud t)$ and $f\in L^\frac{2\chi}{\chi-1}(Q_1^+,x_n^p\vee 1\,\ud x\ud t)$, where $\chi$ is the constant in Theorem \ref{thm:weightedsobolev} or Theorem \ref{thm:weightedsobolev2}. An approximation argument at the end will remove this assumption.
For all $\va\in(0,1)$, let $a^\va\in C^2(\overline Q_1^+)$ be such that $a^\va\to a$ uniformly on $Q_1^+$, and $\pa_t a^\va\to \pa_t a$ in $L^q(Q_1^+,x_n^p\vee 1\,\ud x\ud t)$. Then there exists a unique energy weak solution $u_\va\in C ([-1,0]; L^2(B_1^+)) \cap L^2((-1,0];H_{0}^1(B_1^+))$ to the uniformly parabolic equation
\begin{align}\label{eq:degiorgilinearappappendix}
&a^\va \cdot (x_n+\va)^{p} \pa_t u_\va -D_j(a_{ij} D_i u_\va+d_j u_\va)+b_iD_i u_\va+c(x_n+\va)^p u_\va+c_0 u_\va\nonumber \\
&\quad=(x_n+\va)^pf+f_0 -D_if_i \quad \mbox{in }Q_1^+
\end{align}
with $u_\va\equiv 0$ on $\pa_{pa} Q_1^+$. That is,
\begin{equation}\label{eq:definitionweaksolution22}
\begin{split}
&\int_{B^+_1}a^\va(x,s) (x_n+\va)^{p} u_\va(x,s) \varphi(x,s)\,\ud x-\int_{-1}^s\int_{B_1^+} (x_n+\va)^{p}(\varphi\partial_t a^\va+a^\va\partial_t \varphi)u_\va\,\ud x\ud t\\
&=- \int_{-1}^s\int_{B_1^+} \big(a_{ij}D_iu_\va D_j\varphi+d_ju_\va D_j\varphi+b_jD_ju_\va \varphi+c(x_n+\va)^p u_\va \varphi+c_0u_\va \varphi\big)\,\ud x\ud t\\
&\quad +\int_{-1}^s\int_{B_1^+} ((x_n+\va)^pf\varphi+f_0\varphi+f_jD_j\varphi)\,\ud x\ud t
\end{split}
\end{equation}
for every $\varphi\in \mathring W^{1,1}_2(Q^+_1)$ satisfying $\varphi (\cdot,-1)\equiv 0$ in $B_1^+$ (in the trace sense). By the same proof of \eqref{eq:energyestimateu}, we have
\begin{align}\label{eq:energysmoothcase}
&\sup_{t\in[-1,0]}\int_{B_1^+}(x_n+\va)^{p} u_\va^2\,\ud x+\|\nabla u_\va\|^2_{L^2(Q_1^+)}\le CF_\va^2,
\end{align}
where
\[
F_\va=\|f\|_{L^\frac{2\chi}{2\chi-1}(Q_1^+,(x_n+\va)^p\ud x\ud t)}+\|f_0\|_{L^\frac{2\chi}{2\chi-1}(Q_1^+)}+ \sum_{j=1}^n\|f_j\|_{L^2(Q_1^+)}.
\]
Hence, if $p\ge 0$, then
\[
\|u_\va\|^2_{V_2(Q_1^+)}\le CF_\va^2.
\]
If $-1<p<0$, then by \eqref{eq:energysmoothcase} and the proof of Theorem \ref{thm:weightedsobolev2}, we have
\begin{align}\label{eq:weightedsobolev22}
\left(\int_{Q^+_{R,T}} (x_n+\va)^p |u_\va|^{2\chi}\ud x \ud t \right)^{\frac{1}{\chi}} \le CF_\va^2.
\end{align}
Therefore, by Theorem \ref{thm:weightedsobolev} and Theorem \ref{thm:weightedsobolev2}, for all $p>-1$, there exist $u\in L^{2\chi}(Q_1^+)\cap L^2((-1,0];H_{0}^1(B_1^+))$ and a subsequence $\{u_{\va_j}\}$, such that $u_{\va_j} \rightharpoonup u$ weakly in $L^{2\chi}(Q_1^+)$ and $Du_{\va_j} \rightharpoonup Du$ weakly in $L^{2}(Q_1^+)$. Let $\varphi\in C^\infty(Q^+_1)$ be such that $\varphi \equiv 0$ near the parabolic boundary $\pa_{pa}Q^+_1$ and let
\[
h_j(s):= \int_{B^+_1}a^{\va_j}(x,s) (x_n+\va_j)^{p} u_{\va_j}(x,s) \varphi(x,s)\,\ud x.
\]
By \eqref{eq:definitionweaksolution22}, \eqref{eq:energysmoothcase}, \eqref{eq:weightedsobolev22}, Theorem \ref{thm:weightedsobolev}, and the absolute continuity of Lebesgue integrals (applied to the right hand side of \eqref{eq:definitionweaksolution22}), we know that $h_j$ is uniformly bounded and equicontinuous on $[-1,0]$. By the Arzel\`a-Ascoli theorem, there is a subsequence of $\{h_j\}$, which is still denoted by $\{h_j\}$, such that $h_j$ converges uniformly to a function $h\in C([-1,0])$. On the other hand, since $u_{\va_j} \rightharpoonup u$ weakly in $L^{2\chi}(Q_1^+)$, we have that for every interval $I\subset[-1,0]$,
\[
\int_I h_j(s)\,\ud s \to \int_I \int_{B^+_1}a(x,s) x_n^{p} u(x,s) \varphi(x,s)\,\ud x\ud s.
\]
Hence,
\[
h(s)=\int_{B^+_1}a(x,s) x_n^{p} u(x,s) \varphi(x,s)\,\ud x\quad\mbox{a.e. in }[-1,0].
\]
Therefore, considering such $\varphi$ that are independent of the time variable, we know from \eqref{eq:energysmoothcase} that $u\in L^\infty([-1,0]; L^2(B_1^+,x_n^{p}\ud x))$, and it is straightforward to verify, by sending $\va_j\to 0$ in \eqref{eq:definitionweaksolution22}, that $u$ satisfies \eqref{eq:definitionweaksolution} for every $\varphi\in C^\infty(Q^+_1)$ such that $\varphi \equiv 0$ near the parabolic boundary $\pa_{pa}Q^+_1$.
When $p\ge 0$, by a standard density argument, it is straightforward to verify that $u$ satisfies \eqref{eq:definitionweaksolution} for every $\varphi\in \mathring W^{1,1}_2(Q^+_1)$ satisfying $\varphi (\cdot,-1)\equiv 0$ in $B_1^+$ (in the trace sense). By Lemma \ref{lem:sobolevdense}, this $u$ satisfies \eqref{eq:definitionweaksolution} for every $\varphi\in \mathring V^{1,1}_2(Q^+_1)$ satisfying $\varphi (\cdot,-1)\equiv 0$ in $B_1^+$.
When $-1<p<0$, we also use approximation arguments. Let $\varphi\in \mathring V^{1,1}_2(Q^+_1)$ satisfy $\varphi (\cdot,-1)\equiv 0$ in $B_1^+$. Using Minkowski's integral inequality, for every $\delta>0$, there exists $\mu>0$ such that
\[
\|\varphi\|_{V^{1,1}_2(Q^+_1\cap\{x_n<\mu\})} +\sup_{s\in (-1,0]}\|\varphi(\cdot,s)\|_{L^2(B^+_1\cap\{x_n<\mu\}, x_n^p\ud x)}+\|\varphi\|_{L^{2\chi}(Q^+_1\cap\{x_n<\mu\})}<\delta,
\]
where $\chi$ is the one in Theorem \ref{thm:weightedsobolev2}. Let $\eta$ be a smooth cut-off function such that $\eta\equiv 1$ on $[\mu,+\infty)$ and $\eta\equiv 0$ on $[0,\mu/2]$. Let $\varphi_1(x,t)=\eta(x_n)\varphi(x,t)$ and $\varphi_2(x,t)=(1-\eta(x_n))\varphi(x,t)$. Using the fact that $V^{1,1}_2(Q^+_{1})\subset W^{1,1}_2(Q^+_{1})$ when $-1<p<0$, we have \eqref{eq:definitionweaksolution22}. Similarly to the above, by using the weak convergence of $u_\va$, it is straightforward to verify that
\begin{align*}
&\lim_{\va\to 0}\int_{B^+_1}a^\va(x,s) (x_n+\va)^{p} u_\va(x,s) \varphi_1(x,s)\,\ud x=\int_{B^+_1}a(x,s) x_n^{p} u(x,s) \varphi_1(x,s)\,\ud x\ \ \mbox{a.e. }s\in[-1,0]
\end{align*}
and
\begin{align*}
&\lim_{\va\to 0}\int_{-1}^s\int_{B_1^+} (x_n+\va)^{p}(\varphi_1\partial_t a^\va+a^\va\partial_t \varphi_1)u_\va\,\ud x\ud t=\int_{-1}^s\int_{B_1^+} x_n^{p}(\varphi_1\partial_t a+a\partial_t \varphi_1)u\,\ud x\ud t.
\end{align*}
By using Theorem \ref{thm:weightedsobolev2}, H\"older's inequality, \eqref{eq:energysmoothcase} and \eqref{eq:weightedsobolev22}, we can verify that
\begin{align*}
\left|\int_{B^+_1}a^\va(x,s) (x_n+\va)^{p} u_\va(x,s) \varphi_2(x,s)\,\ud x-\int_{-1}^s\int_{B_1^+} (x_n+\va)^{p}(\varphi_2\partial_t a^\va+a^\va\partial_t \varphi_2)u_\va\,\ud x\ud t\right|&\le C\delta,\\
\left|\int_{B^+_1}a(x,s) x_n^{p} u(x,s) \varphi_2(x,s)\,\ud x-\int_{-1}^s\int_{B_1^+} x_n^{p}(\varphi_2\partial_t a+a\partial_t \varphi_2)u\,\ud x\ud t\right|&\le C\delta.
\end{align*}
Then by sending $\va\to 0$ and then $\delta\to 0$ in \eqref{eq:definitionweaksolution22}, it follows that \eqref{eq:definitionweaksolution} holds for every $\varphi\in \mathring V^{1,1}_2(Q^+_1)$ satisfying $\varphi (\cdot,-1)\equiv 0$ in $B_1^+$.
Next, we want to verify that $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x))$. Note that we have
\begin{equation}\label{eq:continuityV2-1}
\begin{split}
&\int_{B^+_1}a(x,s) x_n^{p} u(x,s) \varphi(x,s)\,\ud x-\int_{-1}^s\int_{B_1^+} x_n^{p}a u\partial_t \varphi \,\ud x\ud t\\
&=\int_{-1}^s\int_{B_1^+} (g_0\varphi+g_jD_j\varphi)\,\ud x\ud t\quad\mbox{a.e. }s\in (-1,0],
\end{split}
\end{equation}
where
\begin{align*}
g_j&= f_j-a_{ij}D_iu-d_ju,\\
g_0&=x_n^pf+f_0-b_jD_ju-c_0u+x_n^p(\pa_t a+c) u.
\end{align*}
Hence, we know that $g_j\in L^2(Q_1^+), j=1,\cdots,n$, and $g_0\in L^{\frac{2\chi}{2\chi-1}}(Q_1^+)$. Moreover, we clearly have
\begin{equation}\label{eq:continuityV2-2}
\|u(\cdot,s)\|_{L^2(B_1^+,x_n^p\ud x)}\le \|u\|_{V_2(Q_1^+)},\quad\mbox{a.e. }s\in(-1,0].
\end{equation}
Denote
\[
I:=\{s\in[-1,0]: \eqref{eq:continuityV2-1} \mbox{ and } \eqref{eq:continuityV2-2} \mbox{ hold for }s\}.
\]
Then we know that $[-1,0]\setminus I$ is of measure zero. We can redefine $u(x,s)$ so that
\eqref{eq:continuityV2-1} and \eqref{eq:continuityV2-2} hold also for $s\in[-1,0]\setminus I$. Indeed, because of $\eqref{eq:continuityV2-2}$, for every $s_0\in [-1,0]\setminus I$, there exists $\{s_k\}\subset I$ such that $s_k\to s_0$ and $u(\cdot,s_k)\rightharpoonup v(\cdot)$ in $L^2(B_1^+,x_n^p\ud x)$. We redefine $u(\cdot,s_0)=v(\cdot)$. Then \eqref{eq:continuityV2-1} and \eqref{eq:continuityV2-2} hold for $s_0$, and moreover, by \eqref{eq:continuityV2-1}, this $v(\cdot)$ is independent of the choice of the sequence $\{s_k\}$. Thus, we can assume that \eqref{eq:continuityV2-1} and \eqref{eq:continuityV2-2} hold for all $s\in[-1,0]$.
Let $Q_{1,s,h}^+=B_1^+\times(s,s+h)$ when $s,s+h\in(-1,0)$ (here, we assume $h>0$, and the argument for the case $h<0$ can be modified correspondingly). From \eqref{eq:continuityV2-1}, we obtain
\begin{equation}\label{eq:continuityV2-3}
\begin{split}
&\int_{B^+_1}a(x,s+h) x_n^{p} u(x,s+h) \varphi(x,s+h)\,\ud x-\int_{B^+_1}a(x,s) x_n^{p} u(x,s) \varphi(x,s)\,\ud x\\
&=\int_{Q_{1,s,h}^+} x_n^{p}a u\partial_t \varphi \,\ud x\ud t+\int_{Q_{1,s,h}^+} (g_0\varphi+g_jD_j\varphi)\,\ud x\ud t.
\end{split}
\end{equation}
By choosing $\varphi$ as a function in $C^\infty_c(B_1^+)$, and since $a$ is continuous in $\overline Q_1^+$, we have
\begin{align*}
\left|\int_{B^+_1}[a(x,s+h)-a(x,s)] x_n^{p} u(x,s+h) \varphi(x)\,\ud x\right|\to 0 \quad\mbox{as }h\to 0.
\end{align*}
Hence,
\begin{equation}\label{eq:weakconvergenceweight}
\lim_{h\to 0}\int_{B^+_1} x_n^{p} a(x,s) (u(x,s+h) -u(x,s)) \varphi(x)\,\ud x=0\quad\mbox{uniformly in }s.
\end{equation}
By a density argument, \eqref{eq:weakconvergenceweight} holds for all $\varphi\in L^2(B_1^+,x_n^p\ud x)$.
Choose $\va>0$ small such that $s\pm\va, s+h\pm\va \in (-1,0)$. Let
\[
\tilde u(x,t)=
\begin{cases}
u(x,s+h),\quad (x,t)\in B_1^+\times(s+h,\infty),\\
u(x,t), \quad (x,t)\in B_1^+\times(s,s+h],\\
u(x,s), \quad (x,t)\in B_1^+\times(-1,s].
\end{cases}
\]
and
\[
\varphi_\va(x,t)=\frac{1}{2\va}\int_{t-\va}^{t+\va}\tilde u(x,\tau)\,\ud \tau.
\]
Then $\varphi_\va\in \mathring V^{1,1}_2(Q^+_1)$ and \eqref{eq:continuityV2-3} holds for $\varphi_\va$. Note that
\begin{align*}
&\int_{s}^{s+h}\int_{B_1^+} x_n^{p}au\partial_t \varphi_\va\,\ud x\ud t\\
&=\frac{1}{2\va} \int_{s}^{s+h}\int_{B_1^+} x_n^{p}\big[a(x,t)u(x,t)\tilde u(x,t+\va)-a(x,t)u(x,t)\tilde u(x,t-\va)\big]\,\ud x\ud t\\
&=\frac{1}{2\va}\int_{B_1^+} x_n^{p}\int_{s+h-\va}^{s+h} a(x,t)u(x,t)u(x,s+h)\,\ud x\ud t\\
&\quad-\frac{1}{2\va}\int_{B_1^+} x_n^{p}\int_{s}^{s+\va} a(x,t)u(x,t)u(x,s)\,\ud x\ud t\\
&\quad + \frac{1}{2\va}\int_{B_1^+} x_n^{p}\int_{s+\va}^{s+h} u(x,t-\va)u(x,t)[a(x,t-\va)-a(x,t)]\,\ud x \ud t.
\end{align*}
Using the continuity of $a$ and \eqref{eq:weakconvergenceweight}, we have
\begin{align*}
&\lim_{\va \to 0}\int_{s}^{s+h}\int_{B_1^+} x_n^{p}au\partial_t \varphi_\va\,\ud x\ud t\\
&=\frac{1}{2}\int_{B_1^+} x_n^{p}a(x,s+h)u(x,s+h)^2\,\ud x-\frac{1}{2}\int_{B_1^+} x_n^{p}a(x,s)u(x,s)^2\,\ud x.
\end{align*}
Setting $\varphi=\varphi_\va$ in \eqref{eq:continuityV2-3} and then letting $\va\to 0$, we have by using \eqref{eq:weakconvergenceweight} that
\begin{align*}
&\frac{1}{2}\int_{B_1^+} x_n^{p} a(x,s+h)u(x,s+h)^2\,\ud x-\frac{1}{2}\int_{B_1^+} x_n^{p} a(x,s)u(x,s)^2\,\ud x\\
&=-\frac{1}{2}\int_{Q_{1,s,h}^+} x_n^{p}u^2\partial_t a\,\ud x\ud t+\int_{Q_{1,s,h}^+} (g_0u+g_jD_ju)\,\ud x\ud t.
\end{align*}
Hence,
\[
\lim_{h\to 0}\int_{B_1^+} x_n^{p} a(x,s+h)u(x,s+h)^2\,\ud x=\int_{B_1^+} x_n^{p} a(x,s)u(x,s)^2\,\ud x.
\]
Since $a\in C(\overline Q_1^+)$, we have
\begin{equation}\label{eq:normconvergenceweight00}
\lim_{h\to 0}\int_{B_1^+} x_n^{p} [a(x,s+h)-a(x,s)]u(x,s+h)^2\,\ud x=0,
\end{equation}
and thus,
\begin{equation}\label{eq:normconvergenceweight}
\lim_{h\to 0}\int_{B_1^+} x_n^{p} a(x,s)u(x,s+h)^2\,\ud x=\int_{B_1^+} x_n^{p} a(x,s)u(x,s)^2\,\ud x.
\end{equation}
It follows from \eqref{eq:weakconvergenceweight} and \eqref{eq:normconvergenceweight} that
\[
\lim_{h\to 0}\int_{B^+_1} x_n^{p} a(x,s) |u(x,s+h) -u(x,s)|^2 \,\ud x=0.
\]
Since $a\ge\lambda>0$, we obtain
\[
\lim_{h\to 0}\int_{B^+_1} x_n^{p} |u(x,s+h) -u(x,s)|^2 \,\ud x=0.
\]
Hence, $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) $, and thus, $u\in \mathring V_2^{1,0}(Q^+_{1})$.
Now let us use another approximation to remove the assumption that $\pa_t a, c \in L^q(Q_1^+,x_n^p\vee 1\,\ud x\ud t)$ and $f\in L^\frac{2\chi}{\chi-1}(Q_1^+,x_n^p\vee 1\,\ud x\ud t)$. Suppose \eqref{eq:assumptioncoefficient} and \eqref{eq:assumptionf} hold. Let $a^\va\in C^2(\overline Q_1^+)$ be such that $a^\va\to a$ uniformly on $Q_1^+$ and $\pa_t a^\va\to \pa_t a$ in $L^q(Q_1^+,x_n^p\,\ud x\ud t)$, $c^\va\in C^2(\overline Q_1^+)$ be such that $c^\va\to c$ in $L^q(Q_1^+,x_n^p\,\ud x\ud t)$, and $f^\va\in C^2(\overline Q_1^+)$ be such that $f^\va\to f$ in $L^\frac{2\chi}{\chi-1}(Q_1^+,x_n^p\,\ud x\ud t)$. Then, as proved above, there exists a weak solution $u_\va\in \mathring V_2^{1,0}(Q^+_{1})$ to the parabolic equation
\[
a^\va \cdot x_n^{p} \pa_t u_\va -D_j(a_{ij} D_i u_\va+d_j u_\va)+b_iD_i u_\va+c^\va x_n^p u_\va+c_0 u_\va=x_n^pf^\va+f_0 -D_if_i \quad \mbox{in }Q_1^+
\]
with $u_\va\equiv 0$ on $\pa_{pa} Q_1^+$. Then by the energy estimate in Theorem \ref{thm:uniquenessofweaksolution} and the same argument as above, one can show that $u_\va$ will converge to a weak solution of \eqref{eq:linear-eq} with the full boundary condition $u\equiv 0$ on $\pa_{pa} Q_1^+$.
Finally, the uniqueness follows from Theorem \ref{thm:uniquenessofweaksolution}.
\end{proof}
\subsection{$W^{1,1}_2$ regularity}
Next, we want to study the $W^{1,1}_2$ regularity of weak solutions to the equation \eqref{eq:linear-eq} with slightly stronger assumptions on the coefficients. Consider the following equation
\begin{equation}\label{eq:linear-eq-app}
a x_n^{p} \pa_t u -D_j[a_{ij} D_i u+(x_n^{p/2}\wedge 1)d_j u]+(x_n^{p/2}\wedge 1)b_iD_i u+cx_n^{p}u+c_0 u=x_n^{p}f\quad \mbox{in }Q_1^+,
\end{equation}
where $x_n^{p/2}\wedge 1=\min(x_n^{p/2},1)$. For the coefficients, besides \eqref{eq:ellip2}, we suppose that
\begin{equation}\label{eq:assumptioncoefficient2}
\left\|\pa_t a\right\|_{L^q(Q_1^+,x_n^p\ud x\ud t)}+ \|a_{ij}\|_{Lip(\overline Q_1^+)}+\|d_{j}\|_{Lip(\overline Q_1^+)}+\|c_0\|_{Lip(\overline Q_1^+)}+\||b_j|+|c|\|_{L^\infty(Q_1^+)}\le\Lambda,
\end{equation}
for some $q>\frac{\chi}{\chi-1}$. We also suppose that $-\mathrm{div} (A \nabla ) +c_0$ is coercive, where $A=(a_{ij})$, i.e., there exists a constant $\bar \lda>0$ such that
\be \label{eq:coer-app}
\int_{B_1^+} \big(A\nabla \phi \cdot\nabla \phi +c_0 \phi^2\big)\,\ud x \ge \bar \lda \int_{B_1^+} \phi^2\,\ud x \quad \quad \forall \phi\in H^1_0(B_1^+),\ \mbox{a.e. } t\in[-1,0].
\ee
Note that \eqref{eq:coer-app} implies that there exists $\tilde\lambda>0$ depending only on $\bar\lambda,\lambda,\Lambda$ and $n$ such that
\be \label{eq:coer-app2}
\int_{B_1^+} \big(A\nabla \phi \cdot\nabla \phi +c_0 \phi^2\big)\,\ud x \ge \tilde\lda \int_{B_1^+} |\nabla\phi|^2\,\ud x \quad \quad \forall \phi\in H^1_0(B_1^+),\ \mbox{a.e. } t\in[-1,0].
\ee
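Indeed, using \eqref{eq:ellip2}, the bound $|c_0|\le\Lambda$ implied by \eqref{eq:assumptioncoefficient2}, and \eqref{eq:coer-app}, we have for every $\sigma\in(0,1)$ and $\phi\in H^1_0(B_1^+)$ that
\[
\int_{B_1^+} \big(A\nabla \phi \cdot\nabla \phi +c_0 \phi^2\big)\,\ud x\ge \sigma\lambda\int_{B_1^+}|\nabla\phi|^2\,\ud x+\big[(1-\sigma)\bar\lda-\sigma\Lambda\big]\int_{B_1^+}\phi^2\,\ud x,
\]
so that choosing $\sigma=\frac{\bar\lda}{\bar\lda+\Lambda}$ gives \eqref{eq:coer-app2} with $\tilde\lda=\frac{\lambda\bar\lda}{\bar\lda+\Lambda}$.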
\begin{thm}\label{thm:energyestimateut}
Suppose $a$ is continuous in $\overline Q_1^+$, and the conditions \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:assumptioncoefficient2} and \eqref{eq:coer-app} hold. Suppose that $f\in L^2(Q_1^+,x_n^p\ud x\ud t)$. Let $u$ be the weak solution of \eqref{eq:linear-eq-app} with the full boundary condition $u\equiv 0$ on $\pa_{pa} Q_1^+$. Then
\begin{equation}\label{eq:energyestimateut}
\sup_{t\in(-1,0)}\int_{B_1^+} |\nabla u(x,t)|^2\,\ud x+\int_{Q_1^+} x_n^{p}|\pa_t u|^2\,\ud x\ud t\le C \int_{Q_1^+} x_n^pf^2\,\ud x\ud t,
\end{equation}
where $C>0$ depends only on $\lda, \bar\lambda,\Lambda,n$ and $p$.
\end{thm}
\begin{proof}
We first assume that $f\in L^2(Q_1^+,x_n^p\vee 1\,\ud x\ud t)$ and $\pa_t a\in L^q(Q_1^+,x_n^p\vee1\,\ud x\ud t)$.
For $\va>0$, let $a^\va, a_{ij}^\va, d_j^\va, c_0^\va\in C^\infty(\R^n)$ be such that $a^\va\to a$, $a_{ij}^\va\to a_{ij}$, $d_j^\va\to d_j$, $c_0^\va\to c_0$ uniformly on $\overline Q_1^+$, $\pa_t a^\va\to \pa_t a$ in $L^q(Q_1^+,x_n^p\vee 1\,\ud x\ud t)$, and
\[
\|a_{ij}^\va\|_{Lip(\overline Q_1^+)}+\|d_{j}^\va\|_{Lip(\overline Q_1^+)}\le C\Lambda.
\]
Let $b_{i}^\va, c^\va\in C^\infty(\R^n)$ be such that $b_{i}^\va\to b_{i}$, $c^\va\to c$ in $L^q(\overline Q_1^+)$ for some $q>\frac{\chi}{\chi-1}$, and
\[
\|b_{i}^\va\|_{L^\infty(Q_1^+)}+\|c^\va\|_{L^\infty(Q_1^+)}\le C\Lambda.
\]
Let $f_\va\in C_c^\infty(Q_1^+)$ be such that $f_\va\to f$ in $L^2(Q_1^+,x_n^p\vee 1\,\ud x\ud t)$ as $\va\to 0$.
Let $u_\va\in C ([-1,0]; L^2(B_1^+)) \cap L^2((-1,0];H_{0}^1(B_1^+))$ be the unique weak solution of
\be\label{eq:degiorgilinearappappendix2}
\begin{split}
&a^\va \cdot (x_n+\va)^{p} \pa_t u_\va -D_j[a_{ij}^\va D_i u_\va+((x_n+\va)^{p/2}\wedge (1+\va)^{p/2})d_j^\va u_\va]\\
&\quad +((x_n+\va)^{p/2}\wedge (1+\va)^{p/2})b_i^\va D_i u_\va+c_0^\va u_\va+c^\va(x_n+\va)^{p}u_\va=(x_n+\va)^{p} f_\va \quad \mbox{in }Q_1^+
\end{split}
\ee
with $u_\va\equiv 0$ on $\pa_{pa} Q_1^+$. By the Schauder regularity theory, we know that $D_{x} u_\va, \pa_t u_\va\in C(\overline Q_1^+)$.
For small $h>0$, denote $$u_\va^h(x,t)=\frac{u_\va(x,t+h)-u_\va(x,t)}{h}$$ for all $-1\le t\le -h$, and denote the left hand side of \eqref{eq:degiorgilinearappappendix2} as $I(x,t)$. Then we have for all $-1<t<-h$,
\begin{equation}\label{eq:partialttestfunction}
\begin{split}
&\int_{B_1^+\times(-1,t]} (I(x,s+h)+I(x,s))u_\va^h(x,s)\,\ud x\ud s\\
&=\int_{B_1^+\times(-1,t]}(x_n+\va)^{p}(f_\va (x,s+h)+f_\va (x,s))u_\va^h(x,s)\,\ud x\ud s.
\end{split}
\end{equation}
Using the symmetry of $A$, we have
\begingroup
\allowdisplaybreaks
\begin{align*}
&\int_{B_1^+}\int_{-1}^t [a_{ij}^\va(x,s+h)D_iu_\va(x,s+h)+ a_{ij}^\va(x,s)D_iu_\va(x,s)]D_ju_\va^h(x,s)\,\ud s\ud x\\
&=\frac{1}{h}\int_{B_1^+}\int_{t}^{t+h} a_{ij}^\va D_iu_\va D_ju_\va \,\ud s\ud x - \frac{1}{h}\int_{B_1^+}\int_{-1}^{-1+h} a_{ij}^\va D_iu_\va D_ju_\va\,\ud s\ud x\\
&\quad+ \int_{B_1^+}\int_{-1}^{t} \frac{a_{ij}^\va(x,s)-a_{ij}^\va(x,s+h)}{h}D_iu_\va(x,s)D_ju_\va(x,s+h) \,\ud s\ud x \\
& \to \int_{B_1^+} a_{ij}^\va(x,t)D_i u_\va(x,t)D_j u_\va(x,t)\,\ud x -\int_{B_1^+}\int_{-1}^{t} \pa_s a_{ij}^\va D_iu_\va D_ju_\va\,\ud s\ud x\quad\mbox{as }h\to 0,
\end{align*}
\endgroup
where we used that $D_xu_\va\in C^{0}(\overline B_1^+\times[-1,0])$ and $u_\va(x,-1)\equiv 0$.
Here, we used $u_\va^h(x,t)$ instead of $\pa_t u_\va$ to avoid involving $D_x\pa_t u_\va$ in the calculation. Also,
\begingroup
\allowdisplaybreaks
\begin{align*}
&\int_{B_1^+}\int_{-1}^t ((x_n+\va)^{p/2}\wedge (1+\va)^{p/2})[d_{j}^\va(x,s+h)u_\va(x,s+h)+ d_{j}^\va(x,s)u_\va(x,s)]\pa_ju_\va^h(x,s)\,\ud s\ud x\\
& \to 2\int_{B_1^+} ((x_n+\va)^{p/2}\wedge (1+\va)^{p/2})d_{j}^\va(x,t)u_\va(x,t)\pa_j u_\va(x,t)\,\ud x \\
&\quad-2\int_{B_1^+}\int_{-1}^{t} ((x_n+\va)^{p/2}\wedge (1+\va)^{p/2})u_\va \pa_s d_{j}^\va D_j u_\va\,\ud s\ud x\\
&\quad -2 \int_{B_1^+}\int_{-1}^{t} ((x_n+\va)^{p/2}\wedge (1+\va)^{p/2})d_{j}^\va\pa_s u_\va D_j u_\va\,\ud s\ud x\quad\mbox{as }h\to 0.
\end{align*}
\endgroup
Using similar arguments, by sending $h\to 0$ in \eqref{eq:partialttestfunction}, and using \eqref{eq:ellip2}, \eqref{eq:assumptioncoefficient2}, \eqref{eq:coer-app} (or \eqref{eq:coer-app2}) and H\"older's inequality, we have
\begingroup
\allowdisplaybreaks
\begin{align*}
&\int_{B_1^+\times(-1,t]} (x_n+\va)^{p} |\pa_s u_\va|^2 \,\ud x\ud s +\int_{B_1^+} |\nabla u_\va(x,t)|^2\,\ud x \\
&\le C \int_{B_1^+} (x_n+\va)^pu_\va(x,t)^2\,\ud x+ C \int_{B_1^+\times(-1,t]} [ |\nabla u_\va|^2 + (x_n+\va)^p(f_\va^2+u_\va^2)] \,\ud x\ud s.
\end{align*}
\endgroup
Then it follows from \eqref{eq:energysmoothcase} that
\begin{align}\label{eq:weakderivativeint}
\sup_{t\in[-1,0]}\int_{B_1^+} |\nabla u_\va(x,t)|^2\,\ud x+ \int_{Q_1^+} (x_n+\va)^{p} |\pa_t u_\va|^2 \,\ud x\ud t \le C\int_{Q_1^+} (x_n+\va)^p f_\va^2\,\ud x\ud t.
\end{align}
Therefore, $\int_{(B_1^+\cap\{x_n>\delta\})\times(-1,0]} |\pa_t u_\va|^2 \le C(\delta)$ for every $\delta>0$. This implies the existence of the weak derivative $\pa_t u$, and that $\pa_t u_\va$ converges weakly (along a subsequence) to $\pa_t u$ in $L^2((B_1^+\cap\{x_n>\delta\})\times(-1,0])$ for every $\delta>0$. Since
\begin{align*}
&\int_{Q_1^+\cap\{x_n>\delta\}} [(x_n+\va)^{p}-x_n^p] |\pa_t u_\va|^2 \,\ud x\ud t \to 0\mbox{ as }\va\to 0,\\
& \int_{Q_1^+\cap\{x_n>\delta\}} x_n^p |\pa_t u|^2 \,\ud x\ud t \le\liminf_{\va\to 0} \int_{Q_1^+\cap\{x_n>\delta\}} x_n^p |\pa_t u_\va|^2 \,\ud x\ud t,
\end{align*}
we have from \eqref{eq:weakderivativeint} by sending $\va\to 0$ that
\[
\sup_{t\in[-1,0]}\int_{B_1^+} |\nabla u(x,t)|^2\,\ud x+\int_{Q_1^+\cap\{x_n>\delta\}} x_n^p |\pa_t u|^2 \,\ud x\ud t \le C\int_{Q_1^+} x_n^p f^2\,\ud x\ud t.
\]
Then, \eqref{eq:energyestimateut} follows by sending $\delta \to 0$ and using the monotone convergence theorem.
Now let us use another approximation to remove the assumption that $f\in L^2(Q_1^+,x_n^p\vee 1\,\ud x\ud t)$ and $\pa_t a\in L^q(Q_1^+,x_n^p\vee1\,\ud x\ud t)$. Let $a^\va\in C^2(\overline Q_1^+)$ be such that $a^\va\to a$ uniformly on $Q_1^+$ and $\pa_t a^\va\to \pa_t a$ in $L^q(Q_1^+,x_n^p\,\ud x\ud t)$, and $f_\va\in C^2_c(\overline Q_1^+)$ be such that $f_\va\to f$ in $L^2(Q_1^+,x_n^p\,\ud x\ud t)$. Then there exists a weak solution $u_\va\in \mathring V_2^{1,0}(Q^+_{1})$ to the parabolic equation
\[
a^\va \cdot x_n^{p} \pa_t u_\va -D_j[a_{ij} D_i u_\va+(x_n^{p/2}\wedge 1)d_j u_\va]+(x_n^{p/2}\wedge 1)b_iD_i u_\va+c x_n^p u_\va+c_0 u_\va=x_n^pf_\va \quad \mbox{in }Q_1^+
\]
with $u_\va\equiv 0$ on $\pa_{pa} Q_1^+$. By the argument of Theorem \ref{thm:existenceofweaksolution}, $u_\va$ will converge to a weak solution of \eqref{eq:linear-eq-app} with the full boundary condition $u\equiv 0$ on $\pa_{pa} Q_1^+$. By the same argument as above, one can show that
\[
\sup_{t\in(-1,0)}\int_{B_1^+} |\nabla u_\va(x,t)|^2\,\ud x+\int_{Q_1^+} x_n^{p}|\pa_t u_\va|^2\,\ud x\ud t\le C \int_{Q_1^+} x_n^pf_\va^2\,\ud x\ud t.
\]
Then the conclusion follows by sending $\va\to 0$.
\end{proof}
\section{Boundedness of weak solutions}\label{sec:bound}
\subsection{A maximum principle}
Suppose
\begin{equation}\label{eq:assumptionf4}
F_1:=\|f\|_{L^{\frac{2q\chi}{q\chi+\chi-q}}(Q_1^+,x_n^p\ud x\ud t)}+\|f_0\|_{L^{\frac{2q\chi}{q\chi+\chi-q}}(Q_1^+)}+ \sum_{j=1}^n\|f_j\|_{L^{2q}(Q_1^+)}<\infty,
\end{equation}
where $q$ is the one in \eqref{eq:assumptioncoefficient}.
\begin{thm}\label{thm:aweakmaxprinciple}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D}, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:assumptioncoefficient} and \eqref{eq:assumptionf4}. Suppose that all $d_j=0$ and $c\ge 0$. Then
\begin{equation*}
\|u\|_{L^\infty(Q_1^+)}\le \left\{
\begin{aligned}
& \sup_{\pa_{pa}Q_1^+}|u|+ CF_1 |Q_1^+|^{\frac{1}{2}(1-\frac{1}{q}-\frac{1}{\chi})}, \quad &\mbox{for } p\ge 0, \\
& \sup_{\pa_{pa}Q_1^+}|u|+ CF_1 |Q_1^+|_p^{\frac{1}{2}(1-\frac{1}{q}-\frac{1}{\chi})}, \quad &\mbox{for } -1<p<0,
\end{aligned}
\right.
\end{equation*}
where $|Q_1^+|$ denotes the Lebesgue measure of $Q_1^+$, $|Q_1^+|_p$ is defined in \eqref{eq:pmeasure} below, and $C>0$ depends only on $\lambda,\Lambda,n,p$ and $q$.
\end{thm}
\begin{proof}
It follows from Lemma \ref{lem:Steklovapproximation} that \eqref{eq:uastestfunction} holds. Let $\varphi$ be as in that lemma. Using \eqref{eq:assumptioncoefficient}, Theorem \ref{thm:weightedsobolev}, Theorem \ref{thm:weightedsobolev2} and H\"older's inequality, one obtains for $p\ge 0$ that
\begin{align*}
\int_{-1}^s\int_{B_1^+} \varphi^2 ((|\partial_t a|+|c|)x_n^p+\sum_j b_j^2+|c_0|)\,\ud x\ud t&\le \va \|\varphi\|_{V_2(B_1^+\times(-1,s))}^2+C_\va \|\varphi\|_{L^2(B_1^+\times(-1,s))}^2,
\end{align*}
and for $-1<p<0$ that
\begin{align*}
&\int_{-1}^s\int_{B_1^+} \varphi^2 ((|\partial_t a|+|c|)x_n^p+\sum_j b_j^2+|c_0|)\,\ud x\ud t\\
&\le \va \|\varphi\|_{V_2(B_1^+\times(-1,s))}^2+C_\va \|\varphi\|_{L^2(B_1^+\times(-1,s),x_n^p\ud x\ud t)}^2.
\end{align*}
Similarly, we have
\begin{align*}
\int_{-1}^s\int_{B_1^+\cap\{u>k\}} \sum_j f_j^2\,\ud x\ud t&\le CF_1^2 |\{u>k\}\cap(B_1^+\times(-1,s))|^{1-\frac{1}{q}},\\
\int_{-1}^s\int_{B_1^+\cap\{u>k\}} f_0\varphi\,\ud x\ud t&\le \va \|\varphi\|_{V_2(B_1^+\times(-1,s))}^2+C_\va F_1^2|\{u>k\}\cap(B_1^+\times(-1,s))|^{1-\frac{1}{q}},\\
\int_{-1}^s\int_{B_1^+\cap\{u>k\}} x_n^pf\varphi\,\ud x\ud t&\le \va \|\varphi\|_{V_2(B_1^+\times(-1,s))}^2+C_\va F_1^2|\{u>k\}\cap(B_1^+\times(-1,s))|_p^{1-\frac{1}{q}},
\end{align*}
where
\begin{equation}\label{eq:pmeasure}
|E|_{p}=\int_{E} x_n^p\,\ud x\ud t\quad\mbox{for every measurable set }E\subset Q_1^+.
\end{equation}
By choosing $\va>0$ small, and using Theorem \ref{thm:weightedsobolev} and Theorem \ref{thm:weightedsobolev2}, we have for $p\ge 0$ that
\[
\|\varphi\|_{L^{2\chi}(B_1^+\times(-1,s))}^2\le C \|\varphi\|_{L^2(B_1^+\times(-1,s))}^2+CF_1^2|\{u>k\}\cap(B_1^+\times(-1,s))|^{1-\frac{1}{q}},
\]
and for $-1<p<0$ that
\[
\|\varphi\|_{L^{2\chi}(B_1^+\times(-1,s),x_n^p\ud x\ud t)}^2\le C \|\varphi\|_{L^2(B_1^+\times(-1,s),x_n^p\ud x\ud t)}^2+CF_1^2|\{u>k\}\cap(B_1^+\times(-1,s))|_{p}^{1-\frac{1}{q}}.
\]
When $s+1$ is sufficiently small, we have for $p\ge 0$ that
\[
C \|\varphi\|_{L^2(B_1^+\times(-1,s))}^2\le \frac12\|\varphi\|_{L^{2\chi}(B_1^+\times(-1,s))}^2,
\]
and for $-1<p<0$ that
\[
C \|\varphi\|_{L^2(B_1^+\times(-1,s),x_n^p\ud x\ud t)}^2\le \frac12\|\varphi\|_{L^{2\chi}(B_1^+\times(-1,s),x_n^p\ud x\ud t)}^2.
\]
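This smallness can be seen from H\"older's inequality: for $p\ge 0$,
\[
\|\varphi\|_{L^2(B_1^+\times(-1,s))}^2\le |B_1^+\times(-1,s)|^{1-\frac{1}{\chi}}\,\|\varphi\|_{L^{2\chi}(B_1^+\times(-1,s))}^2,
\]
so it suffices that $C|B_1^+\times(-1,s)|^{1-\frac{1}{\chi}}\le\frac12$; the case $-1<p<0$ is analogous with $|\cdot|$ replaced by $|\cdot|_p$.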
Hence, we have
\begin{align*}
\|(u-k)^+\|_{L^{2\chi}(B_1^+\times(-1,s))}^2&\le CF_1^2|\{u>k\}\cap(B_1^+\times(-1,s))|^{1-\frac{1}{q}}\quad\mbox{if }p\ge 0\\
\|(u-k)^+\|_{L^{2\chi}(B_1^+\times(-1,s),x_n^p\ud x\ud t)}^2&\le CF_1^2|\{u>k\}\cap(B_1^+\times(-1,s))|_{p}^{1-\frac{1}{q}}\quad\mbox{if }-1<p<0.
\end{align*}
For $h>k$, we have
\begin{align*}
\|(u-k)^+\|_{L^{2\chi}(B_1^+\times(-1,s))}^2\ge (h-k)^2|\{u>h\}\cap(B_1^+\times(-1,s))|^{\frac{1}{\chi}},\\
\|(u-k)^+\|_{L^{2\chi}(B_1^+\times(-1,s),x_n^p\ud x\ud t)}^2\ge (h-k)^2|\{u>h\}\cap(B_1^+\times(-1,s))|_{p}^{\frac{1}{\chi}}.
\end{align*}
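Combining these lower bounds with the estimates above gives, for $h>k$ and $p\ge 0$ (the case $-1<p<0$ is identical with $|\cdot|_p$ in place of $|\cdot|$),
\[
(h-k)^2|\{u>h\}\cap(B_1^+\times(-1,s))|^{\frac{1}{\chi}}\le \|(u-k)^+\|_{L^{2\chi}(B_1^+\times(-1,s))}^2\le CF_1^2|\{u>k\}\cap(B_1^+\times(-1,s))|^{1-\frac{1}{q}}.
\]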
Hence, if we denote
\begin{equation*}
\psi(k)=\left\{
\begin{aligned}
& |\{u>k\}\cap(B_1^+\times(-1,s))|, \quad &\mbox{for } p\ge 0, \\
& |\{u>k\}\cap(B_1^+\times(-1,s))|_p, \quad &\mbox{for } -1<p<0,
\end{aligned}
\right.
\end{equation*}
then
\[
\psi(h)\le \frac{CF_1^{2\chi}}{(h-k)^{2\chi}}\psi(k)^{\beta},
\]
where $\beta=(1-\frac1q)\chi>1$ by the assumption on $q$. Define
\[
k_m=\sup_{\pa_{pa}Q_1^+}|u|+d-\frac{d}{2^m},\quad m=0,1,2,\cdots.
\]
Then
\[
\psi(k_{m+1})\le \frac{CF_1^{2\chi}2^{2\chi}}{d^{2\chi}} (4^\chi)^m \psi(k_m)^\beta.
\]
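The iteration relies on the following elementary fact, which can be verified by induction: if nonnegative numbers $\{Y_m\}_{m\ge0}$ satisfy $Y_{m+1}\le A b^m Y_m^{\beta}$ with $A>0$, $b>1$, $\beta>1$, and $Y_0\le A^{-\frac{1}{\beta-1}}b^{-\frac{1}{(\beta-1)^2}}$, then
\[
Y_m\le Y_0\, b^{-\frac{m}{\beta-1}}\to 0\quad\mbox{as }m\to\infty.
\]
We apply it with $Y_m=\psi(k_m)$, $A=CF_1^{2\chi}2^{2\chi}d^{-2\chi}$ and $b=4^\chi$; since $\frac{\beta-1}{2\chi}=\frac12\big(1-\frac1q-\frac1\chi\big)$, the smallness condition on $Y_0$ holds for the choice of $d$ below.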
Hence (see also \eqref{eq:nonlineariteration} and \eqref{eq:nonlineariteration2} below for a similar argument), we can choose $C>0$ such that for $d\ge CF_1 |B_1^+\times(-1,s)|^{\frac{1}{2}(1-\frac{1}{q}-\frac{1}{\chi})}$ when $p\ge 0$, and for $d\ge CF_1 |B_1^+\times(-1,s)|_p^{\frac{1}{2}(1-\frac{1}{q}-\frac{1}{\chi})}$ when $-1<p<0$, we have
\[
\psi\left(\sup_{\pa_{pa}Q_1^+}|u|+d\right)=0.
\]
That is,
\begin{equation*}
\sup_{B_1^+\times(-1,s)} u\le \left\{
\begin{aligned}
& \sup_{\pa_{pa}Q_1^+}|u|+ CF_1 |B_1^+\times(-1,s)|^{\frac{1}{2}(1-\frac{1}{q}-\frac{1}{\chi})}, \quad &\mbox{for } p\ge 0, \\
& \sup_{\pa_{pa}Q_1^+}|u|+ CF_1 |B_1^+\times(-1,s)|_p^{\frac{1}{2}(1-\frac{1}{q}-\frac{1}{\chi})}, \quad &\mbox{for } -1<p<0.
\end{aligned}
\right.
\end{equation*}
Iterating in $s$ with a uniform step size, we obtain
\begin{equation*}
\sup_{Q_1^+} u\le \left\{
\begin{aligned}
& \sup_{\pa_{pa}Q_1^+}|u|+ CF_1 |Q_1^+|^{\frac{1}{2}(1-\frac{1}{q}-\frac{1}{\chi})}, \quad &\mbox{for } p\ge 0, \\
& \sup_{\pa_{pa}Q_1^+}|u|+ CF_1 |Q_1^+|_p^{\frac{1}{2}(1-\frac{1}{q}-\frac{1}{\chi})}, \quad &\mbox{for } -1<p<0.
\end{aligned}
\right.
\end{equation*}
Applying the same result to the equation of $-u$, the conclusion follows.
\end{proof}
\subsection{A local maximum principle}
The following is the Caccioppoli inequality of weak solutions to \eqref{eq:linear-eq} and \eqref{eq:linear-eq-D}, which is the starting point of the De Giorgi iteration.
\begin{thm}\label{thm:caccipolli}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D}, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:assumptioncoefficient} and \eqref{eq:assumptionf4}. Let $x_0\in\pa' B_1^+$, $Q_{\rho,\tau}^+=B^+_\rho(x_0)\times(t_0,t_0+\tau]\subset Q_1^+$, $k\ge 0$, $\va\in(0,1]$, and $\xi\in V_2^{1,1}(Q_{\rho,\tau}^+)$ such that $\xi=0$ on $\pa''B_\rho(x_0)\times (t_0,t_0+\tau]$ and $0\le\xi\le 1$. Then
\begin{align*}
&\max\left(\sup_{t\in(t_0,t_0+\tau)}\int_{B^+_\rho(x_0)}x_n^{p} a [\xi(u-k)^+]^2(x,t)\,\ud x, \lambda\iint_{Q_{\rho,\tau}^+} |D[\xi(u-k)^+]|^2\,\ud x\ud s\right)\\
&\le (1+\va)\int_{B^+_\rho(x_0)}x_n^{p} a [\xi(u-k)^+]^2(x,t_0)\,\ud x +C\iint_{Q_{\rho,\tau}^+} (|D\xi|^2+|\xi\pa_t\xi|x_n^p) [(u-k)^+]^2 \,\ud x\ud t \\
&\quad +\frac{C}{\va^\kappa}\left( \|[(u-k)^+] \xi\|_{L^2(Q_{\rho,\tau}^+)}^2 + \|[(u-k)^+] \xi\|_{L^2(Q_{\rho,\tau}^+, x_n^p\ud x\ud t)}^2\right)\\
&\quad +\frac{C}{\va^\kappa}(k^2+F_1^2)\left( |\{u>k\}\cap Q_{\rho,\tau}^+|^{1-\frac1q}+|\{u>k\}\cap Q_{\rho,\tau}^+|_p^{1-\frac1q} \right),
\end{align*}
where $|\cdot|_p$ is defined in \eqref{eq:pmeasure}, $C>0$ depends only on $\lambda,\Lambda,n,p$ and $q$, $\kappa>0$ depends only on $q$ and $\chi$, and $F_1$ is given in \eqref{eq:assumptionf4}.
\end{thm}
\begin{proof}
Similar to the proof of Lemma \ref{lem:Steklovapproximation}, we can assume $u\in V^{1,1}_2(Q^+_1)$, since otherwise we can use its Steklov average to remove this assumption.
Taking $\varphi=\xi^2(u-k)^+$ in \eqref{eq:definitionweaksolution}, and using \eqref{eq:ellip2} and H\"older's inequality, one obtains that
\begingroup
\allowdisplaybreaks
\begin{align}
&\frac{1}{2}\int_{B^+_\rho(x_0)}x_n^{p} a [\xi(u-k)^+]^2(x,t)\,\ud x-\frac{1}{2}\int_{B^+_\rho(x_0)}x_n^{p} a [\xi(u-k)^+]^2(x,t_0)\,\ud x\nonumber\\
&+\frac{3\lambda}{4} \int_{t_0}^t\int_{B_\rho^+(x_0)} |D(\xi(u-k)^+)|^2\,\ud x\ud s\nonumber\\
&\le C\int_{t_0}^t\int_{B_\rho^+(x_0)} (|D\xi|^2+|\xi\pa_t\xi|x_n^p) [(u-k)^+]^2 \,\ud x\ud s \nonumber\\
&\quad +C\int_{t_0}^t\int_{B_\rho^+(x_0)} [(u-k)^+]^2 \xi^2 \left( (|\pa_t a|+|c|) x_n^p +\sum_j d_j^2+\sum_j b_j^2+ |c_0| \right)\,\ud x\ud s \nonumber\\
&\quad + C\int_{t_0}^t\int_{B_\rho^+(x_0)} \xi^2k^2\chi_{\{u>k\}} \left(|c|x_n^p+|c_0|+\sum_j d_j^2\right)\,\ud x\ud s\nonumber\\
&\quad+C\int_{t_0}^t\int_{B_\rho^+(x_0)\cap\{u>k\}} \left(|f|x_n^p\xi^2(u-k)^++ |f_0|\xi^2(u-k)^++\xi^2\sum_j f_j^2\right)\,\ud x\ud s. \label{eq:uastestfunction2}
\end{align}
\endgroup
Using \eqref{eq:assumptioncoefficient}, Theorem \ref{thm:weightedsobolev}, Theorem \ref{thm:weightedsobolev2} and H\"older's inequality one obtains
\begin{align*}
&\int_{t_0}^t\int_{B_\rho^+(x_0)} [(u-k)^+]^2 \xi^2 ((|\partial_t a|+|c|)x_n^p+\sum_j b_j^2+|c_0|+\sum_j d_j^2)\,\ud x\ud s\\
&\quad \le \va \|[(u-k)^+] \xi\|_{V_2(Q_{\rho,\tau}^+)}^2+\frac{C}{\va^\kappa} \|[(u-k)^+] \xi\|_{L^2(Q_{\rho,\tau}^+)}^2+\frac{C}{\va^\kappa} \|[(u-k)^+] \xi\|_{L^2(Q_{\rho,\tau}^+, x_n^p\ud x\ud t)}^2,\\
&k^2\iint_{Q_{\rho,\tau}^+} \chi_{\{u>k\}}\Big[|c|x_n^p+|c_0|+\sum_j d_j^2\Big]\,\ud x\ud t\\
&\le C k^2 |\{u>k\}\cap Q_{\rho,\tau}^+|^{1-\frac1q}+C k^2 |\{u>k\}\cap Q_{\rho,\tau}^+|_p^{1-\frac1q},
\end{align*}
and
\begin{align*}
\iint_{Q_{\rho,\tau}^+\cap\{u>k\}} \sum_j f_j^2\,\ud x\ud t&\le F_1^2 |\{u>k\}\cap Q_{\rho,\tau}^+|^{1-\frac{1}{q}},\\
\iint_{Q_{\rho,\tau}^+\cap\{u>k\}} |f_0|\xi^2(u-k)^+\,\ud x\ud t&\le \va \|\varphi\|_{V_2(Q_{\rho,\tau}^+)}^2+\frac{C}{\va^\kappa} F_1^2|\{u>k\}\cap Q_{\rho,\tau}^+|^{1-\frac{1}{q}},\\
\iint_{Q_{\rho,\tau}^+\cap\{u>k\}} |f|x_n^p\xi^2(u-k)^+\,\ud x\ud t&\le \va \|\varphi\|_{V_2(Q_{\rho,\tau}^+)}^2+\frac{C}{\va^\kappa} F_1^2|\{u>k\}\cap Q_{\rho,\tau}^+|_p^{1-\frac{1}{q}}.
\end{align*}
By choosing $\va>0$ small, and using Theorem \ref{thm:weightedsobolev} and Theorem \ref{thm:weightedsobolev2}, the conclusion follows from \eqref{eq:uastestfunction2}.
\end{proof}
Now we can prove the local-in-time boundedness of weak solutions up to $\{x_n=0\}$.
\begin{thm}\label{thm:localboundedness}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D}, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, and
\begin{align}
\Big\||\pa_t a|+|c|\Big\|_{L^q(Q_1^+,x_n^p\ud x\ud t)}+\Big\|\sum_{j=1}^n(b_j^2+d_j^2)+|c_0|\Big\|_{L^q(Q_1^+)}&\le\Lambda \label{eq:localbddnesscoeffcient}\\
F_1:=\|f\|_{L^{\frac{2q\chi}{q\chi+\chi-q}}(Q_1^+,x_n^p\ud x\ud t)}+\|f_0\|_{L^{\frac{2q\chi}{q\chi+\chi-q}}(Q_1^+)}+ \sum_{j=1}^n\|f_j\|_{L^{2q}(Q_1^+)}&<\infty\label{eq:localbddnessf}
\end{align}
for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2}, \frac{n+2p+2}{p+2})$. Denote $\mathcal{Q}_R^+=B_R^+(x_0)\times(t_0-R^{p+2},t_0]\subset Q_1^+$, where $x_0\in\pa' B_1^+$. Then we have, for any $\gamma>0$,
\[
\|u\|_{L^\infty(\mathcal Q_{R/2}^+)}\le C\left(R^{-\frac{n+p+2}{\gamma}}\|u\|_{L^\gamma(\mathcal Q_R^+)}+F_1 R^{1-\frac{n+p+2}{2q}} \right),
\]
where $C>0$ depends only on $\lambda,\Lambda,n,p$, $q$ and $\gamma$.
\end{thm}
\begin{proof}
Let $\theta\in(0,1)$. We will consider $R=1$ first, and then scale it back.
We would like to first show that
\begin{align}\label{eq:boundednessbyL2}
\|u\|_{L^\infty(\mathcal{Q}_{\theta}^+)}\le C\left[(1-\theta)^{-1/\beta}\Big(\|u\|_{L^2(Q_1^+)}+ \|u\|_{L^2(Q_1^+,\ x_n^p\,\ud x\ud t)}\Big)+F_1\right],
\end{align}
where $\beta=1-\frac{1}{q}-\frac{1}{\chi}$. We only need to prove \eqref{eq:boundednessbyL2} for $\theta\in(1/2,1)$.
Let
$$
\rho_m=\theta +2^{-m}(1-\theta), \quad k_m=k(2-2^{-m}),\quad m=0,1,2,\cdots,
$$
where $k>0$ is to be fixed later. For brevity, we denote $Q_m^+=Q^+_{\rho_m}=B_{\rho_m}^+(x_0)\times(t_0-\rho_m^{p+2},t_0]$, and we take $\xi_m$ to be a cut-off function such that $\xi_m\in V_2^{1,1}(Q_{m}^+)$, $\xi_m\equiv 1$ on $Q_{m+1}^+$, $\xi_m=0$ on $Q_{m}^+\setminus Q^+_{(\rho_m+\rho_{m+1})/2}$, and $|D\xi_m|^2+|\pa_t\xi_m|\le C(n) (\rho_m-\rho_{m+1})^{-2}.$
Let
\begin{equation}\label{2}
\varphi_m=\left\{
\begin{aligned}
& \|(u-k_m)^+\|_{L^2(Q_m^+)}^2 \quad &\mbox{if } p\ge 0, \\
& \|(u-k_m)^+\|_{L^2(Q_m^+,x_n^p\ud x\ud t)}^2 \quad &\mbox{if } -1<p<0.
\end{aligned}
\right.
\end{equation}
Case 1: Suppose $p\ge 0$. By Theorem \ref{thm:caccipolli} and Theorem \ref{thm:weightedsobolev}, we have
\begin{align*}
& \|\xi_m(u-k_{m+1})^+\|^2_{L^{2\chi}(Q_m^+)}\\
&\le C \|\xi_m(u-k_{m+1})^+\|^2_{V_2(Q_m^+)}\\
&\le \frac{C2^{2m}}{(1-\theta)^2}\|(u-k_{m+1})^+\|_{L^2(Q_{m}^+)}^2 +C(k^2+F_1^2)|A_m(k_{m+1})|^{1-\frac1q},
\end{align*}
where
\[
A_m(k)=\{u>k\}\cap Q_{m}^+,\ \mbox{ and }|A_m(k)| \mbox{ is the Lebesgue measure of }A_m(k).
\]
Take $k\ge F_1 $. Then,
\begin{align*}
\varphi_{m+1}&\le \|\xi_m(u-k_{m+1})^+\|^2_{L^{2}(Q_m^+)}\\
& \le \|\xi_m(u-k_{m+1})^+\|^2_{L^{2\chi}(Q_m^+)} |A_m(k_{m+1})|^{1-\frac{1}{\chi}}\\
&\le \frac{C2^{2m}}{(1-\theta)^2} \varphi_m |A_m(k_{m+1})|^{1-\frac{1}{\chi}}+ Ck^2|A_m(k_{m+1})|^{2-\frac{1}{\chi}-\frac{1}{q}}.
\end{align*}
Notice that
\[
\varphi_m=\|(u-k_m)^+\|_{L^2(Q_m^+)}^2 \ge (k_{m+1}-k_m)^2|A_m(k_{m+1})|= \frac{k^2}{2^{2m+2}} |A_m(k_{m+1})|.
\]
Hence,
\begin{equation}\label{eq:iterationstep0}
\varphi_{m+1}\le \frac{C2^{2m}}{(1-\theta)^2} \left(\frac{2^{2m+2}}{k^2}\right)^{1-\frac{1}{\chi}}\varphi_m^{2-\frac{1}{\chi}}+ Ck^2 \left(\frac{2^{2m+2}}{k^2}\right)^{2-\frac{1}{\chi}-\frac1q}\varphi_m^{2-\frac{1}{\chi}-\frac1q}.
\end{equation}
Case 2: Suppose $-1<p<0$. By Theorem \ref{thm:caccipolli} and Theorem \ref{thm:weightedsobolev2}, we have
\begin{align*}
& \|\xi_m(u-k_{m+1})^+\|^2_{L^{2\chi}(Q_m^+,\ x_n^p\ud x\ud t)}\\
&\le C \|\xi_m(u-k_{m+1})^+\|^2_{V_2(Q_m^+)}\\
&\le \frac{C2^{2m}}{(1-\theta)^2}\|(u-k_{m+1})^+\|_{L^2(Q_{m}^+,\ x_n^p\ud x\ud t)}^2 +C(k^2+F_1^2)|A_m(k_{m+1})|_{p}^{1-\frac1q},
\end{align*}
where
\[
A_m(k)=\{u>k\}\cap Q_{m}^+,\ \mbox{ and }|\cdot|_{p} \mbox{ is defined in }\eqref{eq:pmeasure}.
\]
Take $k\ge F_1 $. Then,
\begin{align*}
\varphi_{m+1}&\le \|\xi_m(u-k_{m+1})^+\|^2_{L^{2}(Q_m^+,\ x_n^p\ud x\ud t)}\\
& \le \|\xi_m(u-k_{m+1})^+\|^2_{L^{2\chi}(Q_m^+,\ x_n^p\ud x\ud t)} |A_m(k_{m+1})|_{p}^{1-\frac{1}{\chi}}\\
&\le \frac{C2^{2m}}{(1-\theta)^2} \varphi_m |A_m(k_{m+1})|_{p}^{1-\frac{1}{\chi}}+ Ck^2|A_m(k_{m+1})|_{p}^{2-\frac{1}{\chi}-\frac{1}{q}}.
\end{align*}
Notice that
\[
\varphi_m=\|(u-k_m)^+\|_{L^2(Q_m^+,\ x_n^p\ud x\ud t)}^2 \ge (k_{m+1}-k_m)^2|A_m(k_{m+1})|_{p}= \frac{k^2}{2^{2m+2}} |A_m(k_{m+1})|_{p}.
\]
Hence, \eqref{eq:iterationstep0} also holds.
Now let us start from \eqref{eq:iterationstep0} which holds for all $p>-1$. If we further take $k\ge \|u\|_{L^2(Q_1^+)} + \|u\|_{L^2(Q_1^+,\ x_n^p\,\ud x\ud t)} $, then
\[
y_m:=\frac{\varphi_m}{k^2 }\le 1.
\]
Thus,
\[
y_{m+1}\le \frac{C 2^{2m(2-\frac{1}{\chi})}y_m^{1+\beta}}{(1-\theta)^2}.
\]
If
\begin{equation}\label{eq:nonlineariteration}
y_0=\frac{\|(u-k)^+\|_{L^2(Q_1^+)}^2}{k^2}\le \frac{\|u\|_{L^2(Q_1^+)}^2}{k^2}\le \bar y= \frac{(1-\theta)^{2/\beta}}{C^{1/\beta}} 4^{(\frac{1}{\chi}-2)\frac{1}{\beta^2}},
\end{equation}
then one can show by induction that
\[
y_m\le \frac{\bar y}{(4^{2-\frac{1}{\chi}})^{\frac{m}{\beta}}},
\]
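Indeed, if $y_m\le \bar y\,\big(4^{2-\frac{1}{\chi}}\big)^{-\frac{m}{\beta}}$, then, by the recursion above,
\[
y_{m+1}\le \frac{C}{(1-\theta)^2}\,\big(4^{2-\frac{1}{\chi}}\big)^{m}\,y_m^{1+\beta}\le \Big[\frac{C\bar y^{\beta}}{(1-\theta)^2}\Big]\,\bar y\,\big(4^{2-\frac{1}{\chi}}\big)^{-\frac{m}{\beta}}=\bar y\,\big(4^{2-\frac{1}{\chi}}\big)^{-\frac{m+1}{\beta}},
\]
where the last equality uses that $C(1-\theta)^{-2}\bar y^{\beta}=\big(4^{2-\frac{1}{\chi}}\big)^{-\frac{1}{\beta}}$ by the choice of $\bar y$ in \eqref{eq:nonlineariteration}.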
and thus,
\begin{equation}\label{eq:nonlineariteration2}
\lim_{m\to\infty}y_m=0.
\end{equation}
That is,
\[
\sup_{\mathcal{Q}_{\theta}^+}u \le 2k.
\]
Therefore, we only need to choose
\[
k=F_1+\frac{C}{(1-\theta)^{1/\beta}}(\|u\|_{L^2(Q_1^+)}+ \|u\|_{L^2(Q_1^+,\ x_n^p\,\ud x\ud t)} ).
\]
This proves \eqref{eq:boundednessbyL2}.
Now we will use a scaling argument. For any $R\in(0,1]$, define
\begin{equation}\label{eq:rescaledequationcoefficients}
\begin{split}
\tilde u(x,t)&=u(x_0+Rx,t_0+R^{p+2}t),\quad \tilde a(x,t)=a(x_0+Rx,t_0+R^{p+2}t),\\
\tilde a_{ij}(x,t)&=a_{ij}(x_0+Rx,t_0+R^{p+2}t), \quad \tilde d_j(x,t)=d_j(x_0+Rx,t_0+R^{p+2}t), \\
\tilde b_j(x,t)&=b_j(x_0+Rx,t_0+R^{p+2}t), \quad \tilde c_0(x,t)=c_0(x_0+Rx,t_0+R^{p+2}t),\\
\tilde c(x,t)&=c(x_0+Rx,t_0+R^{p+2}t), \quad \tilde f(x,t)=f(x_0+Rx,t_0+R^{p+2}t),\\
\tilde f_j(x,t)&=f_j(x_0+Rx,t_0+R^{p+2}t), \quad \tilde f_0(x,t)=f_0(x_0+Rx,t_0+R^{p+2}t).
\end{split}
\end{equation}
Then
\begin{equation}\label{eq:rescaledequation}
\begin{split}
& \tilde ax_n^{p} \pa_t \tilde u-D_j( \tilde a_{ij} D_i \tilde u+ R\tilde d_j \tilde u)+ R\tilde b_iD_i \tilde u+ R^{p+2}\tilde c x_n^p \tilde u + R^2\tilde c_0 \tilde u\\
&\quad= R^{p+2}x_n^p\tilde f+R^2\tilde f_0 -RD_i \tilde f_i \quad \mbox{in }Q_1^+.
\end{split}
\end{equation}
Note that since $q>\max(\frac{n+p+2}{2}, \frac{n+2p+2}{p+2})$, we have
\begin{equation}\label{eq:rescaledequationcoefficients1}
\begin{split}
&\Big\||\pa_t \tilde a|+R^{p+2}|\tilde c|\Big\|_{L^q(Q_1^+,x_n^p\ud x\ud t)}+ \Big\|\sum_{j=1}^n(R^2 \tilde b_j^2+R^2 \tilde d_j^2)+R^2| \tilde c_0|\Big\|_{L^q(Q_1^+)}\\
&\le [R^{p+2-\frac{n+2p+2}{q}}+R^{2-\frac{n+p+2}{q}}] \Lambda\le\Lambda,
\end{split}
\end{equation}
and
\begin{equation}\label{eq:rescaledequationcoefficients2}
\begin{split}
&R^{p+2}\|\tilde f\|_{L^{\frac{2q\chi}{q\chi+\chi-q}}(Q_1^+,x_n^p\ud x\ud t)}+R^2\|\tilde f_0\|_{L^{\frac{2q\chi}{q\chi+\chi-q}}(Q_1^+)}+ R\sum_{j=1}^n\|\tilde f_j\|_{L^{2q}(Q_1^+)}\\
&\le CR^{1-\frac{n+p+2}{2q}} F_1 \le CF_1.
\end{split}
\end{equation}
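For instance, the first term in \eqref{eq:rescaledequationcoefficients1} is computed by the change of variables $y=x_0+Rx$, $s=t_0+R^{p+2}t$, under which $\ud y\ud s=R^{n+p+2}\,\ud x\ud t$ and $x_n^p=R^{-p}y_n^p$ (recall $x_0\in\pa'B_1^+$): since $\pa_t\tilde a(x,t)=R^{p+2}(\pa_ta)(x_0+Rx,t_0+R^{p+2}t)$,
\[
\|\pa_t \tilde a\|_{L^q(Q_1^+,x_n^p\ud x\ud t)}=R^{p+2-\frac{n+2p+2}{q}}\,\|\pa_t a\|_{L^q(\mathcal{Q}_R^+,\,y_n^p\ud y\ud s)}\le R^{p+2-\frac{n+2p+2}{q}}\,\Lambda
\]
by \eqref{eq:localbddnesscoeffcient}; the remaining terms in \eqref{eq:rescaledequationcoefficients1} and \eqref{eq:rescaledequationcoefficients2} are handled in the same way.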
Hence, it follows from \eqref{eq:boundednessbyL2} that
\begin{align*}
&\|\tilde u\|_{L^\infty(\mathcal{Q}_{\theta}^+)}\le C\left((1-\theta)^{-1/\beta}(\|\tilde u\|_{L^2(Q_1^+)}+\|\tilde u\|_{L^2(Q_1^+,\ x_n^p\ud x\ud t)})+F_1\right),
\end{align*}
where we used \eqref{eq:rescaledequationcoefficients1}, \eqref{eq:rescaledequationcoefficients2}, $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2})$ and $R\le 1$.
Scaling the estimate of $\tilde u$ back to $u$, we then obtain for $p\ge 0$ that
\begin{align*}
\|u\|_{L^\infty(\mathcal{Q}_{\theta R}^+)} &\le \|\tilde u\|_{L^\infty(\mathcal{Q}_\theta^+)}\\
&\le C\left(\frac{1}{R^{\frac{n+p+2}{2}}(1-\theta)^{1/\beta}}\| u\|_{L^2(\mathcal{Q}_R^+)}+ F_1\right)\\
&\le C\left(\frac{1}{R^{\frac{n+p+2}{2}}(1-\theta)^{1/\beta}}\| u\|_{L^\infty(\mathcal{Q}_R^+)}^{\frac{2-\gamma}{2}} \| u\|^\frac{\gamma}{2}_{L^\gamma(\mathcal{Q}_R^+)}+F_1\right)\\
&\le \frac{1}{2} \|u\|_{L^\infty(\mathcal{Q}_{ R}^+)} + \frac{C}{R^{\frac{n+p+2}{\gamma}}(1-\theta)^{\frac{2}{\beta\gamma}}} \| u\|_{L^\gamma(\mathcal{Q}_R^+)}+C F_1\\
&\le \frac{1}{2} \|u\|_{L^\infty(\mathcal{Q}_{ R}^+)} + \frac{C}{(R-\theta R)^{\alpha}} \| u\|_{L^\gamma(\mathcal{Q}_R^+)}+CF_1,
\end{align*}
where $\alpha= \max(\frac{n+p+2}{\gamma},\frac{2}{\beta\gamma})$.
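Here the passage from the $L^2$ norm to the $L^\gamma$ norm combines the interpolation $\|u\|_{L^2(\mathcal{Q}_R^+)}^2\le \|u\|^{2-\gamma}_{L^\infty(\mathcal{Q}_R^+)}\|u\|^{\gamma}_{L^\gamma(\mathcal{Q}_R^+)}$, valid for $\gamma\in(0,2)$, with Young's inequality
\[
A\,B^{\frac{2-\gamma}{2}}\le \frac12\, B+C(\gamma)\,A^{\frac{2}{\gamma}},\qquad A,B\ge0,
\]
applied with $B=\|u\|_{L^\infty(\mathcal{Q}_R^+)}$ and $A=CR^{-\frac{n+p+2}{2}}(1-\theta)^{-\frac{1}{\beta}}\|u\|^{\frac{\gamma}{2}}_{L^\gamma(\mathcal{Q}_R^+)}$; for $\gamma\ge2$ one may instead bound $\|u\|_{L^2(\mathcal{Q}_R^+)}$ directly by $\|u\|_{L^\gamma(\mathcal{Q}_R^+)}$ via H\"older's inequality.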
By an iterative lemma, Lemma 1.1 in Giaquinta-Giusti \cite{GG}, we have
\[
\|u\|_{L^\infty(\mathcal{Q}_{\theta }^+)} \le \frac{C}{(1-\theta )^{\alpha}} \| u\|_{L^\gamma(\mathcal{Q}_1^+)}+CF_1.
\]
Applying this estimate to $\tilde u$ again, and scaling it back to $u$, we obtain the desired estimate.
Similarly, for $-1<p<0$ and $\gamma\in (0,2)$, we let $\tilde\gamma=\frac{(1+p)\gamma}{1-p}<\gamma$. Then we have
\begingroup
\allowdisplaybreaks
\begin{align*}
\|u\|_{L^\infty(\mathcal{Q}_{\theta R}^+)} &\le \|\tilde u\|_{L^\infty(\mathcal{Q}_\theta^+)}\\
&\le C\left(\frac{1}{R^{\frac{n+2p+2}{2}}(1-\theta)^{1/\beta}}\| u\|_{L^2(\mathcal{Q}_R^+,\ x_n^p\ud x\ud t)}+ F_1\right)\\
&\le C\left(\frac{1}{R^{\frac{n+2p+2}{2}}(1-\theta)^{1/\beta}}\| u\|_{L^\infty(\mathcal{Q}_R^+)}^{\frac{2-\tilde\gamma}{2}} \| u\|^\frac{\tilde\gamma}{2}_{L^{\tilde\gamma}(\mathcal{Q}_R^+,\ x_n^p\ud x\ud t)}+ F_1\right)\\
&\le \frac{1}{2} \|u\|_{L^\infty(\mathcal{Q}_{ R}^+)} + \frac{C}{R^{\frac{n+2p+2}{\tilde\gamma}}(1-\theta)^{\frac{2}{\beta\tilde\gamma}}} \| u\|_{L^{\tilde\gamma}(\mathcal{Q}_R^+,\ x_n^p\ud x\ud t)}+C F_1\\
&\le \frac{1}{2} \|u\|_{L^\infty(\mathcal{Q}_{ R}^+)} + \frac{C}{(R-\theta R)^{\tilde \alpha}} \| u\|_{L^{\tilde\gamma}(\mathcal{Q}_R^+,\ x_n^p\ud x\ud t)}+CF_1,
\end{align*}
\endgroup
where $\tilde \alpha= \max(\frac{n+2p+2}{\tilde\gamma},\frac{2}{\beta\tilde\gamma})$.
By an iterative lemma, Lemma 1.1 in Giaquinta-Giusti \cite{GG}, we have
\[
\|u\|_{L^\infty(\mathcal{Q}_{\theta }^+)} \le \frac{C}{(1-\theta )^{\tilde \alpha}} \| u\|_{L^{\tilde\gamma}(\mathcal{Q}_1^+,\ x_n^p\ud x\ud t)}+CF_1.
\]
Since
\[
\| u\|_{L^{\tilde\gamma}(\mathcal{Q}_1^+,\ x_n^p\ud x\ud t)} \le C \| u\|_{L^\gamma(\mathcal{Q}_1^+)},
\]
which follows from H\"older's inequality, then
\[
\|u\|_{L^\infty(\mathcal{Q}_{\theta }^+)} \le \frac{C}{(1-\theta )^{\tilde \alpha}} \| u\|_{L^\gamma(\mathcal{Q}_1^+)}+CF_1.
\]
Applying this estimate to $\tilde u$ again, and scaling it back to $u$, we obtain the desired estimate.
\end{proof}
If, in addition, $u(\cdot,-1)=0$, then we can show boundedness up to the initial time.
\begin{thm}\label{thm:localboundednesstobottom}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D} and the initial condition $u(\cdot,-1)=0$, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:localbddnesscoeffcient} and \eqref{eq:localbddnessf} for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2},\frac{n+2p+2}{p+2})$. Denote $\widetilde{\mathcal{Q}}_R^+=B_R^+(x_0)\times(-1,-1+R^{p+2}]\subset Q_1^+$, where $x_0\in\pa' B_1^+$. Then we have, for any $\gamma>0$,
\[
\|u\|_{L^\infty(\widetilde{\mathcal{Q}}_{R/2}^+)}\le C\left(R^{-\frac{n+p+2}{\gamma}}\|u\|_{L^\gamma(\widetilde{\mathcal{Q}}_R^+)}+F_1 R^{1-\frac{n+p+2}{2q}} \right),
\]
where $C>0$ depends only on $\lambda,\Lambda,n,p$, $q$ and $\gamma$.
\end{thm}
The proof of this theorem is almost identical to that of Theorem \ref{thm:localboundedness} (it is in fact simpler, since we do not need to cut off in the time variable). We omit the details.
Combining Theorems \ref{thm:localboundedness} and \ref{thm:localboundednesstobottom}, we have
\begin{thm}\label{thm:localboundednessglobal}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D} and the initial condition $u(\cdot,-1)=0$, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:localbddnesscoeffcient} and \eqref{eq:localbddnessf} for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2},\frac{n+2p+2}{p+2})$. Then we have, for any $\gamma>0$,
\[
\|u\|_{L^\infty(B_{1/2}^+\times(-1,0])}\le C\left(\|u\|_{L^\gamma(\mathcal{Q}_1^+)}+F_1 \right),
\]
where $C>0$ depends only on $\lambda,\Lambda,n,p$, $q$ and $\gamma$.
\end{thm}
\begin{proof}
It follows from Theorems \ref{thm:localboundedness} and \ref{thm:localboundednesstobottom}.\end{proof}
\section{H\"older regularity}\label{sec:holderregularity}
\subsection{Improvement of oscillations centered at the boundary}
Throughout this subsection, we assume all the assumptions in Theorem \ref{thm:localboundedness} and let $u$ be as in Theorem \ref{thm:localboundedness}. Suppose
\[
M:=\|u\|_{L^\infty(B_{3/4}^+\times(-3/4,0])}.
\]
Let
\[
\ud \mu_p(t)=a(x,t)x_n^p\,\ud x,\quad \ud \nu_p(t)=a(x,t)x_n^p\,\ud x\ud t
\]
and
\[
|A|_{\mu_p(t)}= \int_{A}a(x,t)x_n^p\,\ud x\quad\mbox{for } A\subset B_1^+,\quad |\widetilde A|_{\nu_p}= \int_{\widetilde A}a(x,t)x_n^p\,\ud x\ud t\quad\mbox{for } \widetilde A\subset Q_1^+.
\]
Recall that for $x_0\in\pa \R^n_+$,
$$\mathcal{Q}_R(x_0,t_0):=B_R(x_0) \times (t_0-R^{p+2}, t_0], \quad \mathcal{Q}_R^+(x_0,t_0):=B_R^+(x_0) \times (t_0-R^{p+2}, t_0].$$ We simply write $\mathcal{Q}_R$ and $\mathcal{Q}_R^+$ when $(x_0,t_0)=(0,0)$.
\begin{lem}\label{lem:measurechange}
There exists $C>0$ depending only on $n$ and $p$ such that for every $\va\in(0,1)$, every $R\in(0,1]$, every $\delta>0$, every $\widetilde A\subset B_R^+\times [0,\delta R^{p+2}]$, if
\[
\frac{|\widetilde A|_{\nu_p}}{|B_R^+\times [0,\delta R^{p+2}]|_{\nu_p}} \le\va^{p+1},
\]
then
\[
\frac{|\widetilde A|}{|B_R^+\times [0,\delta R^{p+2}]|}\le C\max(\va^{p+1}, \va).
\]
\end{lem}
\begin{proof}
If $-1<p<0$, then
\[
\va^{p+1}\ge \frac{|\widetilde A|_{\nu_p}}{|B_R^+\times [0,\delta R^{p+2}]|_{\nu_p}} \ge \frac{\lambda R^p |\widetilde A|}{C\delta R^{n+2p+2}} \ge \frac{1}{C} \frac{|\widetilde A|}{|B_R^+\times [0,\delta R^{p+2}]|}.
\]
If $p\ge 0$, then we have
\begin{align*}
\frac{|\widetilde A|}{|B_R^+\times [0,\delta R^{p+2}]|}&\le \frac{C}{\delta R^{n+2+p}}\left(\int_{\widetilde A\cap\{x_n\le \va R\}}\,\ud x\ud t+\int_{\widetilde A\cap\{x_n> \va R\}}\,\ud x\ud t\right)\\
&\le \frac{C}{\delta R^{n+2+p}}\left(\int_{\widetilde A\cap\{x_n\le \va R\}}\,\ud x\ud t+\frac{1}{\va^pR^p}\int_{\widetilde A\cap\{x_n> \va R\}}x_n^p\,\ud x\ud t\right)\\
&\le \frac{C}{\delta R^{n+2+p}}\left(\va \delta R^{n+2+p}+\frac{C \va^{p+1}\delta R^{n+2+2p}}{\va^pR^p}\right)\\
&=C\va.
\end{align*}
\end{proof}
We have the following De Giorgi lemmas.
\begin{lem}\label{lem:smallonlargeset}
Let $0<R\le 1$ and
\[
0\le \sup_{\mathcal{Q}_R^+} u\le \mu\le M.
\]
Then there exists $0<\gamma_0<1$ depending only on $\lambda,\Lambda,n,p$ and $q$ such that for $0\le k<\mu$, if
\begin{equation*}
H:=\mu-k>
\left\{
\begin{aligned}
& (M+F_1)R^{1-\frac{n+p+2}{2q}}, \quad &\mbox{for } p\ge 0, \\
& (M+F_1)R^{\frac{p+2}{2}-\frac{n+2p+2}{2q}}, \quad &\mbox{for } -1<p<0,
\end{aligned}
\right.
\end{equation*}
and
\[
\frac{|\{(x,t)\in \mathcal{Q}_R^+: u(x,t)>k\} |_{\nu_{p}}}{|\mathcal{Q}_R^+|_{\nu_{p}}} \le \gamma_0,
\]
then
\[
u\le \mu-\frac{H}{2}\quad\mbox{in }\mathcal{Q}_{R/2}^+.
\]
\end{lem}
\begin{proof}
Let
\[
r_j=\frac {R}2+\frac{R}{2^{j+1}},\quad k_j=\mu-\frac{H}{2}- \frac{H}{2^{j+1}},\quad j=0,1,2,\cdots.
\]
Let $\eta_j $ be a smooth cut-off function satisfying
\[
\mbox{supp}(\eta_j) \subset \mathcal{Q}_{r_j}, \quad 0\le \eta_j \le 1, \quad \eta_j=1 \mbox{ in }\mathcal{Q}_{r_{j+1}}^+,
\]
\[
|D \eta_j(x,t)|^2+|\pa_t \eta_j(x,t)| R^{p} \le \frac{C(n)}{(r_j-r_{j+1})^2} \quad \mbox{in }\mathcal{Q}_R^+.
\]
Case 1: $p\ge 0$. Let us consider $n\ge 3$ first. Since $k_j\ge k\ge 0$, by Theorem \ref{thm:caccipolli} and Theorem \ref{thm:weightedsobolev}, we have
\begin{equation}\label{eq:auxgiorgi1}
\Big(\int_{\mathcal{Q}_R^+} |\eta_j v|^{2\chi}\,\ud x \ud t \Big)^{\frac{1}{\chi}} \le C\left[ \frac{2^{2j}}{R^2} \|v\|^2_{L^2(\mathcal{Q}_{r_{j}}^+)} + (M+F_1)^2 |\mathcal{Q}_{r_{j}}^+\cap \{u>k_j\}|^{1-\frac1q}\right],
\end{equation}
where $v=(u-k_j)^+$. Let $A(k_j,\rho)= \{(x,t)\in \mathcal{Q}_{\rho}^+: u> k_j\}$ for $0<\rho\le R$. Then
\[
\Big(\int_{\mathcal{Q}_R^+} |\eta_j v|^{2\chi}\,\ud x \ud t \Big)^{\frac{1}{\chi}} \ge (k_{j+1}- k_j)^2 |A(k_{j+1}, r_{j+1})|^{\frac{1}{\chi}},
\]
and
\[
\int_{\mathcal{Q}_{r_j}^+} v^{2} \,\ud x\ud t \le H^2 |A(k_{j}, r_{j})|.
\]
It follows that
\begin{align*}
|A(k_{j+1}, r_{j+1})| & \le C\left[ \frac{2^{4j}}{R^2}|A(k_{j}, r_{j})| + \frac{2^{2j}(M+F_1)^2}{H^2} |A(k_{j}, r_{j})|^{1-\frac1q}\right]^\chi\\
&\le C\left[ \frac{2^{4j}}{R^2}|A(k_{j}, r_{j})| + \frac{2^{2j}}{R^{2-\frac{n+p+2}{q}}} |A(k_{j}, r_{j})|^{1-\frac1q}\right]^\chi\\
&\le C\left[ \frac{16^{j}}{R^{2-\frac{n+p+2}{q}}} |A(k_{j}, r_{j})|^{1-\frac1q}\right]^\chi,
\end{align*}
where we used the assumption on $H$, and $|A(k_{j}, r_{j})| \le |\mathcal{Q}_R^+|\le CR^{n+p+2}$. Hence
\begin{equation}\label{eq:auxgiorgi2}
\frac{|A(k_{j+1}, r_{j+1})| }{ |\mathcal{Q}_R^+|}\le C 16^{j\chi}\left( \frac{|A(k_{j}, r_{j})|}{|\mathcal{Q}_R^+|}\right)^{(1-\frac1q)\chi},
\end{equation}
where we used that $\chi=\frac{n+p+2}{n+p}$. Therefore, similarly to \eqref{eq:nonlineariteration} and \eqref{eq:nonlineariteration2}, there exists $\theta\in(0,1)$ such that if $\frac{|A(k_{0}, r_{0})|}{|\mathcal{Q}_R^+|}\le\theta$, then
\[
\lim_{j\to \infty}\frac{|A(k_{j+1}, r_{j+1})|}{|\mathcal{Q}_R^+|}=0.
\]
By Lemma \ref{lem:measurechange}, we only need to choose $\gamma_0=(\theta/C)^{p+1}$.
Now, let us consider $n=1, 2$. By Theorem \ref{thm:caccipolli} and Theorem \ref{thm:weightedsobolev}, \eqref{eq:auxgiorgi1} would become
\[
\Big(\int_{\mathcal{Q}_R^+} |\eta_j v|^{2\chi}\,\ud x \ud t \Big)^{\frac{1}{\chi}} \le C R^{\frac{p+2-n}{p+2}}\left[ \frac{2^{2j}}{R^2} \|v\|^2_{L^2(\mathcal{Q}_{r_{j}}^+)} + (M+F_1)^2 |\mathcal{Q}_{r_{j}}^+\cap \{u>k_j\}|^{1-\frac1q}\right].
\]
By using $\chi=\frac{p+2}{p+1}$, one will still obtain \eqref{eq:auxgiorgi2}. The rest of the proof is the same as above.
Case 2: $-1<p<0$. We still consider $n\ge 3$ first. By Theorem \ref{thm:caccipolli} and Theorem \ref{thm:weightedsobolev2}, we have
\begin{equation}\label{eq:auxgiorgi1-2}
\Big(\int_{\mathcal{Q}_R^+} |\eta_j v|^{2\chi}x_n^p\,\ud x \ud t \Big)^{\frac{1}{\chi}} \le C\left[ \frac{2^{2j}}{R^2} \|v\|^2_{L^2(\mathcal{Q}_{r_{j}}^+,x_n^p\ud x\ud t)} + (M+F_1)^2 |\mathcal{Q}_{r_{j}}^+\cap \{u>k_j\}|_{\nu_p}^{1-\frac1q}\right],
\end{equation}
where $v=(u-k_j)^+$. Since
\[
\Big(\int_{\mathcal{Q}_R^+} |\eta_j v|^{2\chi}x_n^p\,\ud x \ud t \Big)^{\frac{1}{\chi}} \ge (k_{j+1}- k_j)^2 |A(k_{j+1}, r_{j+1})|_{\nu_p}^{\frac{1}{\chi}},
\]
and
\[
\int_{\mathcal{Q}_{r_j}^+} v^{2}x_n^p \,\ud x\ud t \le CH^2 |A(k_{j}, r_{j})|_{\nu_p},
\]
it follows that
\begin{align*}
|A(k_{j+1}, r_{j+1})|_{\nu_p} & \le C\left[ \frac{2^{4j}}{R^2}|A(k_{j}, r_{j})|_{\nu_p} + \frac{2^{2j}(M+F_1)^2}{H^2} |A(k_{j}, r_{j})|_{\nu_p}^{1-\frac1q}\right]^\chi\\
&\le C\left[ \frac{2^{4j}}{R^2}|A(k_{j}, r_{j})|_{\nu_p} + \frac{2^{2j}}{R^{p+2-\frac{n+2p+2}{q}}} |A(k_{j}, r_{j})|_{\nu_p}^{1-\frac1q}\right]^\chi\\
&\le C\left[ \frac{16^{j}}{R^{p+2-\frac{n+2p+2}{q}}} |A(k_{j}, r_{j})|_{\nu_p}^{1-\frac1q}\right]^\chi,
\end{align*}
where we used the assumption on $H$, and $|A(k_{j}, r_{j})|_{\nu_p} \le |\mathcal{Q}_R^+|_{\nu_p}\le CR^{n+2p+2}$. Hence
\begin{equation}\label{eq:auxgiorgi2-2}
\frac{|A(k_{j+1}, r_{j+1})|_{\nu_p} }{ |\mathcal{Q}_R^+|_{\nu_p}}\le C 16^{j\chi}\left( \frac{|A(k_{j}, r_{j})|_{\nu_p}}{|\mathcal{Q}_R^+|_{\nu_p}}\right)^{(1-\frac1q)\chi},
\end{equation}
where we used that $\chi=\frac{n+2p+2}{n+p}$. Hence, there exists $\theta\in(0,1)$ such that if $\frac{|A(k_{0}, r_{0})|_{\nu_p}}{|\mathcal{Q}_R^+|_{\nu_p}}\le\theta$, then
\[
\lim_{j\to \infty}\frac{|A(k_{j+1}, r_{j+1})|_{\nu_p}}{|\mathcal{Q}_R^+|_{\nu_p}}=0.
\]
If $n=1, 2$, then by Theorem \ref{thm:caccipolli} and Theorem \ref{thm:weightedsobolev2}, \eqref{eq:auxgiorgi1-2} would become
\begin{align*}
& \Big(\int_{\mathcal{Q}_R^+} |\eta_j v|^{2\chi}x_n^p\,\ud x \ud t \Big)^{\frac{1}{\chi}} \\
&\le C R^{\frac{p+4-n}{3}}\left[ \frac{2^{2j}}{R^2} \|v\|^2_{L^2(\mathcal{Q}_{r_{j}}^+,x_n^p\ud x\ud t)} + (M+F_1)^2 |\mathcal{Q}_{r_{j}}^+\cap \{u>k_j\}|_{\nu_p}^{1-\frac1q}\right].
\end{align*}
Using $\chi=\frac{3}{2}$, one still obtains \eqref{eq:auxgiorgi2-2}. The rest of the proof is the same as above.
\end{proof}
\begin{lem}\label{lem:decay-1}
Let $0<R\le \frac12$. Suppose
\[
0\le \sup_{B_{2R}^+ \times [t_0, t_0+ R^{p+2}] } u\le\mu\le M.
\]
Then there exists $C>1$ depending only on $\lambda,\Lambda,n,p$ and $q$ such that for every $\ell\in\mathbb{Z}^+$, there holds either
\begin{equation}\label{eq:dichonomyH}
\mu\le
\left\{
\begin{aligned}
& 2^{\ell} (M+F_1)R^{1-\frac{n+p+2}{2q}}, \quad &\mbox{for } p\ge 0, \\
& 2^{\ell} (M+F_1)R^{\frac{p+2}{2}-\frac{n+2p+2}{2q}}, \quad &\mbox{for } -1<p<0,
\end{aligned}
\right.
\end{equation}
or
\begin{equation}\label{eq:thechoiceofell}
\frac{|\{(x,t)\in B_{R}^+ \times [t_0, t_0+ R^{p+2}] : u(x,t)>\mu-\frac{\mu}{2^\ell}\}|_{\nu_{p}}}{|B_{R}^+ \times [t_0, t_0+ R^{p+2}]|_{\nu_{p}}}\le C\ell^{-\min\left(\frac{1}{4},\frac{1}{4(p+1)}\right)}.
\end{equation}
\end{lem}
\begin{proof}
We extend $u$ to be identically zero in $(B_1\setminus B_1^+)\times[-1,0]$, which will still be denoted as $u$. Let
\[
k_j= \mu- \frac{\mu}{2^j},
\]
and
\[
A(k_j,R;t)= B_R \cap \{u(\cdot, t)>k_j\}, \quad A(k_j,R)= B_{R} \times [t_0, t_0+ R^{p+2}] \cap \{u>k_j\}.
\]
Since $k_j\ge 0$, we have $A(k_j,R;t)= B_R^+ \cap \{u(\cdot, t)>k_j\}$ and
\begin{equation}\label{eq:measureauto}
\int_{B_R\setminus A(k_{j},R;t)}|x_n|^p\ud x\ge \int_{B_R\setminus B_R^+}|x_n|^p\ud x\ge C R^{n+p}.
\end{equation}
Then by Theorem \ref{thm:degiorgiisoperimetricelliptic}, we have
\begin{align*}
&(k_{j+1}-k_j) |A(k_{j+1},R;t)|_{\mu_{p}(t)} R^{n+p}\\
& \le C R^{n+2p+1+\frac{n(1-2\va)}{2}-\va p} \left(\int_{B_R^+} |\nabla (u-k_j)^+|^2\,\ud x\right)^{1/2} |A(k_{j},R;t) \setminus A(k_{j+1},R;t)|_{\mu_{p}(t)}^{\va},
\end{align*}
where we choose $\va=\min\left(\frac{1}{4},\frac{1}{4(p+1)}\right)$. Integrating in the time variable, and using H\"older's inequality again, we have
\begin{align*}
&\int_{t_0}^{t_0+ R^{p+2}} |A(k_{j+1},R;t)|_{\mu_{p}(t)} \,\ud t\\&
\le \frac{C2^{j+1}}{\mu} R^{p+1+\frac{n(1-2\va)}{2}-\va p+\frac{(p+2)(1-2\va)}{2}} |A(k_{j},R) \setminus A(k_{j+1},R)|_{\nu_{p}}^{\va}\\
&\quad\cdot \left(\int_{B_R^+\times [t_0, t_0+ R^{p+2}] } |\nabla (u-k_j)^+|^2\,\ud x \ud t\right)^{1/2} .
\end{align*}
It follows from Theorem \ref{thm:caccipolli} (with $\xi$ independent of $t$) that
\begin{align*}
&\int_{t_0}^{t_0+ R^{p+2}}\int_{B_R^+ } |\nabla (u-k_j)^+|^2 \,\ud x \ud t \\&
\le C \Big(\int_{B_{2R}^+} |(u-k_j)^+(t_0)|^2 x_n^{p} \,\ud x+\frac{1}{R^2}\int_{t_0}^{t_0+ R^{p+2}}\int_{B_{2R}^+ } |(u-k_j)^+|^{2} \,\ud x\ud t \\
& \quad+\int_{t_0}^{t_0+ R^{p+2}}\int_{B_{2R}^+ } |(u-k_j)^+|^{2} (1\vee x_n^p)\,\ud x\ud t + (M+F_1)^2|B_{2R}^+ \times [t_0, t_0+ R^{p+2}]|^{1-\frac{1}{q}}\Big )\\
&\le C\left( \frac{\mu^2}{ 4^j} R^{n+p}+\frac{\mu^2}{ 4^j} R^{n+p} +\frac{\mu^2}{ 4^j} R^{n+2p+2} + (M+F_1)^2 R^{(n+2+p)(1-1/q)}\right).
\end{align*}
If \eqref{eq:dichonomyH} fails for some $\ell$, then we have
\[
\int_{t_0}^{t_0+ R^{p+2}}\int_{B_R^+ } |\nabla (u-k_j)^+|^2 \,\ud x \ud t \le C \frac{\mu^2}{ 4^j} R^{n+p}
\]
for $j\le \ell$, where we used that $R^{\frac{p+2}{2}-\frac{n+2p+2}{2q}}\ge R^{1-\frac{n+p+2}{2q}}$ for $p<0$. Hence,
\[
|A(k_{j+1},R)|_{\nu_{p}} \le C R^{p+1+\frac{n(1-2\va)}{2}-\va p+\frac{(p+2)(1-2\va)}{2}+\frac{n+p}{2}} |A(k_{j},R) \setminus A(k_{j+1},R)|_{\nu_{p}}^{\va}
\]
or
\[
( |A(k_{j+1},R)|_{\nu_{p}})^{\frac{1}{\va}} \le C R^{\left(\frac{1}{\va}-1\right)(n+2p+2)} |A(k_{j},R) \setminus A(k_{j+1},R)|_{\nu_{p}}.
\]
Taking a summation, we have
\begin{align*}
\ell (|A(k_{\ell},R)|_{\nu_{p}})^{\frac{1}{\va}} \le \sum_{j=0}^{\ell-1} (|A(k_{j+1},R)|_{\nu_{p}})^{\frac{1}{\va}} &\le C R^{\left(\frac{1}{\va}-1\right)(n+2p+2)} |B_{R}^+ \times [t_0, t_0+ R^{p+2}]|_{\nu_{p}} \\& \le C (|B_{R}^+ \times [t_0, t_0+R^{p+2}]|_{\nu_{p}})^{\frac{1}{\va}}.
\end{align*}
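For clarity, we spell out how \eqref{eq:thechoiceofell} follows from the last display: dividing by $\ell$ and raising both sides to the power $\va$ gives
\[
|A(k_{\ell},R)|_{\nu_{p}}\le \Big(\frac{C}{\ell}\Big)^{\va}\,|B_{R}^+ \times [t_0, t_0+ R^{p+2}]|_{\nu_{p}}
\le C\,\ell^{-\min\left(\frac{1}{4},\frac{1}{4(p+1)}\right)}\,|B_{R}^+ \times [t_0, t_0+ R^{p+2}]|_{\nu_{p}},
\]
since $\va=\min\left(\frac{1}{4},\frac{1}{4(p+1)}\right)$ and $k_\ell=\mu-\frac{\mu}{2^\ell}$.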
The lemma follows.
\end{proof}
Now we can prove the H\"older continuity on the boundary.
\begin{thm}\label{thm:holderontheboundary}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D}, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:localbddnesscoeffcient} and \eqref{eq:localbddnessf} for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2}, \frac{n+2p+2}{p+2})$. Let $\bar x\in\pa' B_{1/2}$ and $\bar t\in (-1/4,0]$. Then there exist $\alpha>0$ and $C>0$, both of which depend only on $\lambda,\Lambda,n,p$ and $q$, such that
\[
|u(x,t)-u(\bar x, \bar t)|\le C(M+F_1) (|x-\bar x|+|t-\bar t|^{\frac{1}{p+2}})^\alpha
\]
for every $(x,t)\in B_{1/2}^+\times(-1/4,0]$.
\end{thm}
\begin{proof}
Without loss of generality, we assume $(\bar x,\bar t)=(0,0)$. For $R\in(0,1/2]$, denote
\[
\mu(R)=\sup_{\mathcal{Q}_R^+}u, \quad \widetilde\mu(R)=\inf_{\mathcal{Q}_R^+}u, \quad\omega(R)=\mu(R)-\widetilde\mu(R).
\]
Let $\gamma_0$ be the one in Lemma \ref{lem:smallonlargeset}. We can choose $\ell$ sufficiently large so that
\[
C\ell^{-\min\left(\frac{1}{4},\frac{1}{4(p+1)}\right)}\le\gamma_0,
\]
where $C$ is the one in \eqref{eq:thechoiceofell}. Then it follows from Lemma \ref{lem:smallonlargeset} and Lemma \ref{lem:decay-1} that either
\begin{equation}\label{eq:dichonomyH-11}
\mu(R)\le
\left\{
\begin{aligned}
& 2^{\ell} (M+F_1)R^{1-\frac{n+p+2}{2q}}, \quad &\mbox{for } p\ge 0, \\
& 2^{\ell} (M+F_1)R^{\frac{p+2}{2}-\frac{n+2p+2}{2q}}, \quad &\mbox{for } -1<p<0,
\end{aligned}
\right.
\end{equation}
or
\begin{equation}\label{eq:oscalter2-0}
\mu(R/4)\le \mu(R)-\frac{\mu(R)}{2^{\ell+1}}.
\end{equation}
Applying these estimates to $-u$, we have
either
\begin{equation}\label{eq:dichonomyH-12}
\tilde \mu(R)\ge
\left\{
\begin{aligned}
& - 2^{\ell} (M+F_1)R^{1-\frac{n+p+2}{2q}}, \quad &\mbox{for } p\ge 0, \\
& - 2^{\ell} (M+F_1)R^{\frac{p+2}{2}-\frac{n+2p+2}{2q}}, \quad &\mbox{for } -1<p<0,
\end{aligned}
\right.
\end{equation}
or
\begin{equation}\label{eq:oscalter2-1}
\tilde\mu(R/4)\ge \tilde\mu(R)-\frac{\tilde\mu(R)}{2^{\ell+1}}.
\end{equation}
In any case, we obtain
\begin{equation*}
\omega(R/4)\le
\left\{
\begin{aligned}
& (1-2^{-\ell-1})\omega(R)+2^{\ell+1} (M+F_1)R^{1-\frac{n+p+2}{2q}}, \quad &\mbox{for } p\ge 0, \\
& (1-2^{-\ell-1})\omega(R)+2^{\ell+1} (M+F_1)R^{\frac{p+2}{2}-\frac{n+2p+2}{2q}}, \quad &\mbox{for } -1<p<0.
\end{aligned}
\right.
\end{equation*}
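Before proceeding, let us record the form of the iteration lemma that we use (a standard formulation, sketched here; see the references cited in the next sentence for precise statements): if $\omega$ is nonnegative and nondecreasing on $(0,R_0]$ and satisfies
\[
\omega(R/4)\le \gamma\,\omega(R)+K R^{\beta}\quad \mbox{for all } R\in(0,R_0],
\]
with constants $\gamma\in(0,1)$, $K\ge 0$ and $\beta>0$, then there exist $\alpha\in(0,\beta]$ and $C>0$, depending only on $\gamma$ and $\beta$, such that
\[
\omega(R)\le C\big(\omega(R_0)+K R_0^{\beta}\big)\Big(\frac{R}{R_0}\Big)^{\alpha}\quad\mbox{for all } R\in(0,R_0].
\]
In our situation $\gamma=1-2^{-\ell-1}$, $K=2^{\ell+1}(M+F_1)$ and $R_0=1/2$, so that, with $\ell$ fixed as above and $\omega(R_0)$ controlled by $C(M+F_1)$, the right-hand side is bounded by $C(M+F_1)R^{\alpha}$.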
By an iterative lemma, e.g. Lemma 3.4 in Han-Lin \cite{HL} (or Lemma B.2 in \cite{JX19}), there exist $\alpha$ and $C$, both of which depend only on $\lambda,\Lambda,n,p$ and $q$, such that
\[
\omega(R)\le C(M+F_1)R^{\alpha}\quad\,\forall R\in(0,1/2],
\]
from which the conclusion follows.
\end{proof}
\subsection{Interior H\"older estimates}
When $x$ is away from the boundary $\partial'B_1^+$, the equation \eqref{eq:linear-eq} is uniformly parabolic. We observe that all the assumptions in Theorem \ref{thm:localboundedness} are stronger than those in the uniformly parabolic case (which corresponds to $p=0$ and $\chi=\frac{n+2}{n}$). Therefore, using the same proof as for uniformly parabolic equations, with a small adaptation to account for the coefficient $a$ in front of $\partial_t u$, one can show the following interior H\"older estimate.
\begin{thm}\label{thm:uniformholdernearboundary}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D}, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:localbddnesscoeffcient} and \eqref{eq:localbddnessf} for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2}, \frac{n+2p+2}{p+2})$. Then there exist $\alpha>0$ and $C>0$, both of which depend only on $\lambda,\Lambda,n,p$ and $q$, such that for every $(x,t), (y,s)\in B_{1/4}(e_n/2)\times(-1/4,0]$, there holds
\[
|u(x,t)-u(y,s)|\le C(M+F_1) (|x-y|+|t-s|)^\alpha,
\]
where $e_n=(0,\cdots,0,1)$.
\end{thm}
Theorem \ref{thm:uniformholdernearboundary} will be proved as follows; we only need to establish the H\"older continuity at the point $(e_n/2,0)$.
Similarly to Theorem \ref{thm:caccipolli}, we have the Caccioppoli inequality around the point $(e_n/2,0)$. Let $Q_{\rho,\tau}=B_\rho(e_n/2)\times(t_0,t_0+\tau]\subset Q_{1/2}(e_n/2,0)$, $k\in\R$, $\va\in(0,1]$, and $\xi\in V_2^{1,1}(Q_{\rho,\tau})$ such that $\xi=0$ on $\pa B_\rho(e_n/2)\times (t_0,t_0+\tau]$ and $0\le\xi\le 1$. Then
\begin{align}
&\max\left(\sup_{t\in(t_0,t_0+\tau)}\int_{B_\rho(e_n/2)}x_n^{p} a [\xi(u-k)^+]^2(x,t)\,\ud x, \lambda\iint_{Q_{\rho,\tau}} |D[\xi(u-k)^+]|^2\,\ud x\ud s\right)\nonumber\\
&\le (1+\va)\int_{B_\rho(e_n/2)}x_n^{p} a [\xi(u-k)^+]^2(x,t_0)\,\ud x +C\iint_{Q_{\rho,\tau}} (|D\xi|^2+|\xi\pa_t\xi|x_n^p) [(u-k)^+]^2 \,\ud x\ud t \nonumber\\
&\quad +\frac{C}{\va^\kappa}\left( \|[(u-k)^+] \xi\|_{L^2(Q_{\rho,\tau})}^2 + (k^2+F_1^2)|\{u>k\}\cap Q_{\rho,\tau}|^{1-\frac1q} \right). \label{eq:caccipolliu}
\end{align}
\begin{lem}\label{lem:smallonlargesetinterior}
Let $0<R\le 1/2$ and
\[
\sup_{Q_R(e_n/2,0)} u\le \mu\le M.
\]
Then there exists $0<\gamma_0<1$ depending only on $\lambda,\Lambda,n,p$ and $q$ such that for $k<\mu$, if
\begin{equation*}
H:=\mu-k>(M+F_1)R^{1-\frac{n+2}{2q}},
\end{equation*}
and
\[
\frac{|\{(x,t)\in Q_R(e_n/2,0): u(x,t)>k\} |_{\nu_{p}}}{|Q_R(e_n/2,0)|_{\nu_{p}}} \le \gamma_0,
\]
then
\[
u\le \mu-\frac{H}{2}\quad\mbox{in }Q_{R/2}(e_n/2,0).
\]
\end{lem}
The proof of Lemma \ref{lem:smallonlargesetinterior} is almost identical to that of Lemma \ref{lem:smallonlargeset}, and thus, we omit it.
\begin{lem}\label{lem:interiordecay-1}
Let $0<\delta\le 1$, $0<R\le \frac14$, $0<\sigma<1$ and
\[
\sup_{B_{2R}(e_n/2) \times [t_0, t_0+\delta R^{2}] } u\le\mu\le M.
\]
Suppose that $k<\mu$ and
\begin{equation}\label{eq:assumptionofmeasure}
|\{x\in B_R(e_n/2): u(x,t)>k\}|_{\mu_{p}(t)}\le (1-\sigma) |B_R(e_n/2)|_{\mu_{p}(t)} \quad \mbox{for any } t_0\le t\le t_0+\delta R^{2}.
\end{equation}
Then there exists $C>1$ depending only on $\lambda,\Lambda,n,p$ and $q$ such that for every $\ell\in\mathbb{Z}^+$, there holds either
\begin{equation}\label{eq:dichonomyHinterior}
H:=\mu-k\le 2^{\ell} (M+F_1)R^{1-\frac{n+2}{2q}},
\end{equation}
or
\[
\frac{|\{(x,t)\in B_{R}(e_n/2) \times [t_0, t_0+\delta R^{2}] : u(x,t)>\mu-\frac{H}{2^\ell}\}|_{\nu_{p}}}{|B_{R}(e_n/2) \times [t_0, t_0+\delta R^{2}]|_{\nu_{p}}}\le \frac{C}{\sigma \sqrt{\delta \ell}}.
\]
\end{lem}
\begin{proof}
The proof is very similar to that of Lemma \ref{lem:decay-1} with the following two changes. The first is that $k_j$ should be defined as $k_j=\mu-\frac{H}{2^j}$ instead. The second is that the estimate \eqref{eq:measureauto} should be replaced by the assumption \eqref{eq:assumptionofmeasure}. The rest of the proof is identical, so we omit it.
\end{proof}
The next lemma was not needed in the proof of Theorem \ref{thm:holderontheboundary}, and its proof differs slightly from that for uniformly parabolic equations with $a\equiv 1$; thus, we provide a proof.
\begin{lem}\label{lem:decay-2}
Let $0<\sigma<1$. There exist $R_0\in(0,\frac14)$ and $s_0>1$ depending only on $\lambda,\Lambda,n,p, q$ and $\sigma$ such that the following holds. Let $R\in(0,R_0]$ and
\[
\sup_{B_{2R}(e_n/2) \times [t_0, t_0+R^{2}] } u\le\mu\le M.
\]
Suppose that $k<\mu$ and
\[
|\{x\in B_R(e_n/2): u(x,t_0)>k\}|_{\mu_{p}(t_0)}\le (1-\sigma) |B_R(e_n/2)|_{\mu_{p}(t_0)}.
\]
Then either
\begin{equation}\label{eq:dichonomyH2}
H:=\mu-k\le 2^{s_0} (M+F_1)R^{1-\frac{n+2}{2q}}
\end{equation}
or
\[
|\{x\in B_R(e_n/2): u(x,t)>\mu-\frac{H}{2^{s_0}}\}|_{\mu_{p}(t)}\le (1-\frac{\sigma}{2}) |B_R(e_n/2)|_{\mu_{p}(t)} \quad \mbox{for all } t_0\le t\le t_0+R^{2} .
\]
\end{lem}
\begin{proof} Let $\eta$ be a cut-off function supported in $B_R(e_n/2)$ and $\eta=1$ in $B_{\beta R}(e_n/2)$, where $0<\beta<1$ will be fixed later. Let $0<\delta \le 1$ and
\[
A^\delta (k,R)= \{B_R(e_n/2) \times[t_0, t_0+\delta R^{2}] \}\cap \{u>k\}.
\]
Let $k_1>1$. By \eqref{eq:caccipolliu}, we have
\begin{align*}
&\sup_{t_0<t< t_0+\delta R^{2}} \int_{B_R(e_n/2)} x_n^p a v^{2}\eta^2 \,\ud x \\&
\quad \le (1+\va) \int_{B_R(e_n/2)} x_n^p a v^{2}\eta^2 \,\ud x\Big|_{t_0} + \frac{C}{\va^\kappa } \left(\frac{H^2 |A^\delta (k ,R)| }{(1-\beta)^2R^2}+ (M+F_1)^2 |A^\delta (k ,R)|^{1-\frac{1}{q}} \right),
\end{align*}
where $v=(u-k)^+$. Note that
\begin{align*}
\int_{B_R(e_n/2)} x_n^p a v^{2}\eta^2 \,\ud x \Big|_t &\ge (1-2^{-k_1})^2 H^2 |B_{\beta R}(e_n/2) \cap \{u(x,t)> \mu- H 2^{-k_1}\}|_{\mu_{p}(t)},\\
\int_{B_R(e_n/2)} x_n^p a v^{2}\eta^2 \,\ud x\Big|_{t_0} &\le H^2|\{x\in B_R(e_n/2): u(x,t_0)>k\}|_{\mu_{p}(t_0)} \\&\le (1-\sigma) H^2 |B_R(e_n/2)|_{\mu_{p}(t_0)}.
\end{align*}
It follows that if \eqref{eq:dichonomyH2} fails, then for all $t\in [t_0, t_0+\delta R^{2}]$,
\begin{align*}
& |B_{\beta R}(e_n/2) \cap \{u(x,t)> \mu- H 2^{-k_1}\}|_{\mu_{p}(t)} \\&\le |B_R(e_n/2)|_{\mu_{p}(t_0)} \frac{(1+\va) (1-\sigma)}{(1-2^{-k_1})^2 }+\frac{CR^{n}}{\va^\kappa }\left[ \frac{C }{(1-\beta)^2} \mathscr{A}^\delta(k ,R)+\left(\mathscr{A}^\delta(k ,R)\right)^{1-\frac1q} \right],
\end{align*}
where
\begin{equation*}
\mathscr{A}^\delta(k ,R):= \frac{|A^\delta(k ,R)| }{R^{n+2}}.
\end{equation*}
Hence,
\begin{align*}
& |B_{ R}(e_n/2) \cap \{u(x,t)> \mu- H 2^{-k_1}\}|_{\mu_{p}(t)} \\&\le |B_R(e_n/2)|_{\mu_{p}(t_0)} \left( C(1-\beta)+ \frac{(1-\sigma)}{(1-2^{-k_1})^2}+4\va+\frac{C }{(1-\beta)^2\va^\kappa } \left(\mathscr{A}^\delta(k ,R)\right)^{1-\frac1q} \right).
\end{align*}
By choosing $\beta$ such that
\[
(1-\beta)^3=\left(\mathscr{A}^\delta(k ,R)\right)^{1-\frac1q},
\]
we have
\begin{align}\label{eq:smallinitiallater0}
& |B_{ R}(e_n/2) \cap \{u(x,t)> \mu- H 2^{-k_1}\}|_{\mu_{p}(t)} \nonumber\\
&\le |B_R(e_n/2)|_{\mu_{p}(t_0)} \left( \frac{(1-\sigma)}{(1-2^{-k_1})^2}+4\va+\frac{C }{\va^\kappa } \left(\mathscr{A}^\delta(k ,R)\right)^{\frac13(1-\frac1q)} \right).
\end{align}
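To make the effect of this choice explicit (a direct computation, recorded here for the reader): since $1-\beta=\big(\mathscr{A}^\delta(k ,R)\big)^{\frac13(1-\frac1q)}$ and $\va\le 1$,
\[
C(1-\beta)\le \frac{C}{\va^{\kappa}}\big(\mathscr{A}^\delta(k ,R)\big)^{\frac13(1-\frac1q)}
\quad\mbox{and}\quad
\frac{C}{(1-\beta)^2\va^{\kappa}}\big(\mathscr{A}^\delta(k ,R)\big)^{1-\frac1q}= \frac{C}{\va^{\kappa}}\big(\mathscr{A}^\delta(k ,R)\big)^{\frac13(1-\frac1q)},
\]
so both terms are absorbed into the last summand of \eqref{eq:smallinitiallater0}.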
For every $t_0\le\tau_1\le\tau_2<t_0+R^{2}$, we have
\begin{align*}
||B_R(e_n/2)|_{\mu_{p}(\tau_1)}-|B_R(e_n/2)|_{\mu_{p}(\tau_2)}|&\le \int_{B_R(e_n/2)}|a(x,\tau_1)-a(x,\tau_2)|x_n^p\,\ud x\\
&\le \int_{t_0}^{t_0+R^{2}}\int_{B_R(e_n/2)}|\pa_\tau a(x,\tau)|x_n^p\,\ud x\ud \tau\\
&\le \Lambda \left(\int_{t_0}^{t_0+R^{2}}\int_{B_R(e_n/2)}x_n^{p}\,\ud x\ud \tau\right)^\frac{q-1}{q}\\
&\le C R^{\frac{(n+2)(q-1)}{q}}\\
&= C\theta R^{n},
\end{align*}
where we used \eqref{eq:localbddnesscoeffcient} in the third inequality, and
\[
\theta= R^{2-\frac{n+2}{q}}.
\]
Then \eqref{eq:smallinitiallater0} becomes
\begin{align*}
& |B_{ R}(e_n/2) \cap \{u(x,t)> \mu- H 2^{-k_1}\}|_{\mu_{p}(t)} \nonumber\\
&\le |B_R(e_n/2)|_{\mu_{p}(t)} \left( \frac{(1+C\theta)(1-\sigma)}{(1-2^{-k_1})^2}+C\va+\frac{C }{\va^\kappa } \left(\mathscr{A}^\delta(k ,R)\right)^{\frac13(1-\frac1q)} \right).
\end{align*}
If we let
\[
\va=\left(\mathscr{A}^\delta(k ,R)\right)^{\frac{1}{3(1+\kappa )}(1-\frac1q)},
\]
then
\begin{align}\label{eq:smallinitiallater}
& |B_{ R}(e_n/2) \cap \{u(x,t)> \mu- H 2^{-k_1}\}|_{\mu_{p}(t)} \nonumber\\
&\le |B_R(e_n/2)|_{\mu_{p}(t)} \left( \frac{(1+C\theta)(1-\sigma)}{(1-2^{-k_1})^2}+C\left(\mathscr{A}^\delta(k ,R)\right)^{\frac{1}{3(1+\kappa )}(1-\frac1q)}\right).
\end{align}
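The exponent arithmetic behind this choice of $\va$ is elementary: with $\va=\big(\mathscr{A}^\delta(k ,R)\big)^{\frac{1}{3(1+\kappa )}(1-\frac1q)}$,
\[
\frac{1}{\va^{\kappa}}\big(\mathscr{A}^\delta(k ,R)\big)^{\frac13(1-\frac1q)}
=\big(\mathscr{A}^\delta(k ,R)\big)^{\left(\frac13-\frac{\kappa}{3(1+\kappa)}\right)(1-\frac1q)}
=\big(\mathscr{A}^\delta(k ,R)\big)^{\frac{1}{3(1+\kappa)}(1-\frac1q)},
\]
so both $C\va$ and the last term of the previous estimate are bounded by $C\big(\mathscr{A}^\delta(k ,R)\big)^{\frac{1}{3(1+\kappa)}(1-\frac1q)}$, which gives \eqref{eq:smallinitiallater}.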
Since
\[
\mathscr{A}^\delta(k ,R)\le C\delta,
\]
we fix a $\delta$ such that
\[
C\delta^{\frac{1}{3(1+\kappa )}(1-\frac1q)} <\frac{1}{8}\min(1-\sigma,\sigma).
\]
We choose $\delta$ slightly smaller if necessary to make $\delta^{-1}$ to be an integer. Let $N=\delta^{-1}$ and denote
\[
t_j=t_0+j\delta R^{2}\quad j=1,2,\cdots,N.
\]
We will inductively prove that there exist $s_1<s_2<\cdots<s_N$ such that
\begin{equation}\label{eq:densitypropa}
|B_{ R}(e_n/2) \cap \{u(x,t)> \mu- H 2^{-s_j}\}|_{\mu_{p}(t)} \le \left(1-\sigma+\frac{j}{4N} \sigma\right) |B_R(e_n/2)|_{\mu_{p}(t)}
\end{equation}
for all $t_{j-1}\le t\le t_j$, where all the $s_j$ depend only on $\lambda,\Lambda,n,p, q$ and $\sigma$, from which the conclusion of this lemma follows.
Let us consider $j=1$ first.
Since $2-\frac{n+2}{q}>0$, there exist $R_0$ small and $k_0$ large, depending on $\sigma$, such that for all $k_1\ge k_0$ and $R\le R_0$, we have
\[
\frac{(1+C\theta)(1-\sigma)}{(1-2^{-k_1})^2 }\le 1-\sigma+\frac{\sigma}{8N}.
\]
Then,
\[
|B_{ R}(e_n/2) \cap \{u(x,t)> \mu- H 2^{-k_1}\}|_{\mu_{p}(t)} \le \left(1-\frac{3}{4} \sigma\right) |B_R(e_n/2)|_{\mu_{p}(t)}
\]
for all $t\in [t_0, t_1]$. Applying Lemma \ref{lem:interiordecay-1}, for every $k_2>k_1$, we have
\[
|A^\delta(\mu- H 2^{-k_2},R)|_{\nu_p} \le \frac{C}{\sigma\sqrt{\delta(k_2-k_1)}} |\{B_R(e_n/2) \times[t_0, t_0+\delta R^{2}] \}|_{\nu_p}\le \frac{C\sqrt{\delta}\,R^{n+2}}{\sigma\sqrt{k_2-k_1}}.
\]
Hence,
\[
\frac{|A^\delta(\mu- H 2^{-k_2},R)|}{R^{n+2}}\le C \left(\frac{\sqrt{\delta}}{\sigma\sqrt{k_2-k_1}}\right)^{\frac{1}{p+1}}.
\]
Hence, we can choose $k_2$ large enough such that
\[
C\left( \mathscr{A}^\delta(\mu- H 2^{-k_2} ,R) \right)^{\frac{1}{3(1+\kappa )}(1-\frac1q)}\le \frac{\sigma}{8N}.
\]
Let $k_1=k_0$ and $s_1=k_1+k_2$. By replacing $H$ by $H 2^{-k_2}$ in \eqref{eq:smallinitiallater}, it follows that
\[
|B_{ R}(e_n/2) \cap \{u(x,t)> \mu- H 2^{-s_1}\}|_{\mu_{p}(t)} \le \left(1-\sigma+\frac{1}{4N} \sigma\right) |B_R(e_n/2)|_{\mu_{p}(t)}\quad\mbox{for all } t_{0}\le t\le t_1.
\]
This proves \eqref{eq:densitypropa} for $j=1$. The proof for $j=2,3,\cdots,N$ is similar, and we omit it.
\end{proof}
Combining the above three lemmas, we obtain the following improvement of oscillation.
\begin{lem}\label{lem:decay-3}
Let $0<\sigma<1$. There exist $R_0\in(0,\frac12)$ and $s>1$ depending only on $\lambda,\Lambda,n,p, q$ and $\sigma$ such that the following holds. Let $R\in(0,R_0]$ and
\[
\sup_{B_{2R}(e_n/2) \times [-R^{2},0] } u\le\mu\le M.
\]
Suppose that $k<\mu$ and
\[
|\{x\in B_R(e_n/2): u(x,-R^{2})>k\}|_{\mu_{p}(-R^{2})}\le (1-\sigma) |B_R(e_n/2)|_{\mu_{p}(-R^{2})}.
\]
Then either
\begin{equation}\label{eq:dichonomyH3}
H:=\mu-k\le 2^{s} (M+F_1)R^{1-\frac{n+2}{2q}}
\end{equation}
or
\[
\sup_{Q_{R/2}(e_n/2,0)} u\le \mu-\frac{H}{2^{s}}.
\]
\end{lem}
\begin{proof} Let $R_0$ and $s_0$ be those from Lemma \ref{lem:decay-2}. Suppose \eqref{eq:dichonomyH3} fails for some $s>s_0$, whose value will be fixed at the end of the proof.
Then it follows from Lemma \ref{lem:decay-2} that
\[
|\{x\in B_R(e_n/2): u(x,t)>\mu-\frac{H}{2^{s_0}}\}|_{\mu_{p}(t)}\le (1-\frac{\sigma}{2}) |B_R(e_n/2)|_{\mu_{p}(t)} \quad \mbox{for every } -R^{2}\le t\le 0 .
\]
Then using Lemma \ref{lem:interiordecay-1}, we have
\[
\frac{|\{(x,t)\in B_{R}(e_n/2) \times [-R^{2},0] : u(x,t)>\mu-\frac{H}{2^{s-1}}\}|_{\nu_{p}}}{|B_{R}(e_n/2) \times [-R^{2},0]|_{\nu_{p}}}\le \frac{C}{\sigma \sqrt{s-s_0-1}}.
\]
Let $\gamma_0$ be the one in Lemma \ref{lem:smallonlargesetinterior}. We can choose $s$ sufficiently large so that
\[
\frac{C}{\sigma \sqrt{s-s_0-1}}\le\gamma_0.
\]
Then it follows from Lemma \ref{lem:smallonlargesetinterior} that
\[
\sup_{Q_{R/2}(e_n/2,0)} u\le \mu-\frac{H}{2^{s}}.
\]
\end{proof}
\begin{rem}\label{rem:decay-3}
From the above proof, for $\delta_0\le\delta\le\delta_0^{-1}$, if we consider the problem in $B_{2R}(e_n/2)\times[-\delta R^{2},0]$ instead of $B_{2R}(e_n/2)\times[- R^{2},0]$, then the conclusion in Lemma \ref{lem:decay-3} still holds, where the constant $s$ would additionally depend on $\delta_0$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{thm:uniformholdernearboundary}]
We only need to prove the H\"older continuity at the point $(e_n/2,0)$.
Let $R_0$ be the one in Lemma \ref{lem:decay-3} with $\sigma=1/2$. For $R\in(0,R_0]$, denote
\[
\mu(R)=\sup_{Q_R(e_n/2,0)}u, \quad \widetilde\mu(R)=\inf_{Q_R(e_n/2,0)}u, \quad\omega(R)=\mu(R)-\widetilde\mu(R).
\]
Then one of the following two inequalities must hold:
\begin{align}
\left|\left\{x\in B_{\frac{R}{2}}(e_n/2): u\Big(x,-(\frac R2)^{2}\Big)>\mu(R)- \frac12\omega(R)\right\}\right|_{\mu_{p}(-(\frac R2)^{2})}\le \frac12 |B_{\frac R2}(e_n/2)|_{\mu_{p}(-(\frac R2)^{2})}, \label{eq:dichonomy1}\\
\left|\left\{x\in B_{\frac R2}(e_n/2): u\Big(x,-(\frac R2)^{2}\Big)<\widetilde\mu(R)+ \frac12\omega(R)\right\}\right|_{\mu_{p}(-(\frac R2)^{2})}\le \frac12 |B_{\frac R2}(e_n/2)|_{\mu_{p}(-(\frac R2)^{2})}.\label{eq:dichonomy2}
\end{align}
If \eqref{eq:dichonomy1} holds, then by Lemma \ref{lem:decay-3}, there exists $s>1$ such that
either
\begin{equation}\label{eq:oscalter1}
\frac{\omega(R)}{2}\le 2^{s} (M+F_1)R^{1-\frac{n+2}{2q}}
\end{equation}
or
\begin{equation}\label{eq:oscalter2}
\mu(R/4)\le \mu(R)-\frac{\omega(R)}{2^{s+2}}.
\end{equation}
If \eqref{eq:dichonomy2} holds, then by applying the above estimates to $-u$, one has either \eqref{eq:oscalter1} or
\begin{equation}\label{eq:oscalter3}
\widetilde\mu(R/4)\ge \widetilde\mu(R)+\frac{\omega(R)}{2^{s+2}}.
\end{equation}
In any case, we obtain
\begin{equation*}
\omega(R/4)\le (1-2^{-s-2})\omega(R)+2^{s+1} (M+F_1)R^{1-\frac{n+2}{2q}}.
\end{equation*}
By an iterative lemma, e.g. Lemma 3.4 in Han-Lin \cite{HL} (or Lemma B.2 in \cite{JX19}), there exist $\alpha$ and $C$, both of which depend only on $\lambda,\Lambda,n,p$ and $q$, such that
\[
\omega(R)\le C(M+F_1)R^{\alpha}\quad\,\forall R\in(0,R_0],
\]
from which the conclusion follows.
\end{proof}
\subsection{H\"older estimates near the boundary}
Together with the H\"older regularity at the boundary in Theorem \ref{thm:holderontheboundary} and the interior H\"older regularity in Theorem \ref{thm:uniformholdernearboundary}, one can obtain the H\"older regularity up to the boundary.
\begin{thm}\label{thm:holdernearboundary}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D}, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:localbddnesscoeffcient} and \eqref{eq:localbddnessf} for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2}, \frac{n+2p+2}{p+2})$. Then for every $\gamma>0$, there exist $\theta>0$ and $C>0$, both of which depend only on $\lambda,\Lambda,n,p,\gamma$ and $q$, such that for every $(x,t), (y,s)\in B_{1/2}^+\times(-1/4,0]$, there holds
\[
|u(x,t)-u(y,s)|\le C(\|u\|_{L^\gamma({\mathcal{Q}}_1^+)}+F_1) (|x-y|+|t-s|)^\theta.
\]
\end{thm}
\begin{proof}
By normalization, we assume $\sup_{B_{3/4}^+\times[-3/4,0]}|u|+F_1=1$. For any $\bar x=(0,\bar x_n )\in B_{1/2}^+$, we let $R:=\bar x_n>0$, and rescale the solution and the coefficients as in \eqref{eq:rescaledequationcoefficients} with $x_0=0$. Then \eqref{eq:rescaledequation}, \eqref{eq:rescaledequationcoefficients1} and \eqref{eq:rescaledequationcoefficients2} hold. By Theorem \ref{thm:uniformholdernearboundary}, there exist $C>1$ and $0<\beta<1$, both of which depend only on $\lambda,\Lambda,n,p$ and $q$, such that
\begin{equation}\label{eq:holderafterscaling}
|\tilde u(e_n,0)-\tilde u(y,s)|\le C \big(|y-e_n|+\sqrt{|s|}\big)^\beta \quad \mbox{for all }(y,s) \mbox{ such that }|y-e_n|+\sqrt{|s|}<1/2.
\end{equation}
Consider $t\in (-1/2,0]$. If $|t|\le R^{2p+4}$, then we have
\[
|u(\bar x,t)-u(\bar x, 0)|=|\tilde u(e_n,t/R^{p+2})-\tilde u(e_n,0)|\le C |t/R^{p+2}|^{\beta/2}\le C |t|^{\beta/4},
\]
where we used \eqref{eq:holderafterscaling} in the first inequality. If $|t|\ge R^{2p+4}$, then we have
\begin{align*}
|u(\bar x,t)-u(\bar x, 0)|& \le |u(\bar x,t)-u(0,t)|+|u(0,t)-u(0,0)|+|u(0,0)-u(\bar x, 0)|\\
& \le C(R^{\alpha}+|t|^{\frac{\alpha}{p+2}})\\
&\le C |t|^{\frac{\alpha}{2(p+2)}},
\end{align*}
where we used Theorem \ref{thm:holderontheboundary} in the second inequality. This shows that $u$ is H\"older continuous in the time variable.
Consider $\tilde x=(\tilde x',\tilde x_n)\in B_{1/2}^+$ such that $\tilde x_n\le \bar x_n$. If $\tilde x\in B_{R^2}(\bar x)$, then we have
\[
|u(\bar x,0)-u(\tilde x, 0)|=|\tilde u(e_n,0)-\tilde u(\tilde x/R,0)|\le C \big(|\tilde x-\bar x|/R\big)^{\beta}\le C |\tilde x-\bar x|^{\beta/2},
\]
where we used \eqref{eq:holderafterscaling} in the first inequality. If $\tilde x\not\in B_{R^2}(\bar x)$, then we have
\begin{align*}
|u(\bar x,0)-u(\tilde x, 0)|& \le |u(\bar x,0)-u(0,0,0)|+|u(0,0,0)-u(\tilde x',0,0)|+|u(\tilde x',0,0)-u(\tilde x, 0)|\\
& \le C(R^{\alpha}+|\tilde x_n|^\alpha)\\
&\le C |\bar x-\tilde x|^{\frac{\alpha}{2}},
\end{align*}
where we used Theorem \ref{thm:holderontheboundary} in the second inequality. This shows that $u$ is H\"older continuous in the spatial variables.
Together with Theorem \ref{thm:localboundedness}, we finish the proof of this theorem.
\end{proof}
\subsection{H\"older estimates up to the initial time}
We can also show H\"older estimates up to the initial time.
\begin{thm}\label{thm:holderboundarybottom}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D} and the initial condition $u(\cdot,-1)=0$, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:localbddnesscoeffcient} and \eqref{eq:localbddnessf} for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2}, \frac{n+2p+2}{p+2})$. Let $\bar x\in\pa' B_{1/4}$. Then for every $\gamma>0$, there exist $\alpha>0$ and $C>0$, both of which depend only on $\lambda,\Lambda,n,p,\gamma$ and $q$, such that
\[
|u(x,t)-u(\bar x, -1)|\le C(\|u\|_{L^\gamma({\mathcal{Q}}_1^+)}+F_1) (|x-\bar x|+|t+1|^{\frac{1}{p+2}})^\alpha
\]
for every $(x,t)\in B_{1/4}^+\times[-1,-\frac34]$.
\end{thm}
\begin{proof}
Let $M=\|u\|_{L^\infty(B_{3/4}^+\times(-1,-1/4))}$,
\[
\mu(R)=\sup_{\mathcal{Q}_R^+(\bar x,-1)}u, \quad \widetilde\mu(R)=\inf_{\mathcal{Q}_R^+(\bar x,-1)}u, \quad\omega(R)=\mu(R)-\widetilde\mu(R),
\]
\[
r_j=\frac {R}2+\frac{R}{2^{j+1}},\quad k_j=\frac{\mu(R)}{2}- \frac{\mu(R)}{2^{j+1}},\quad j=0,1,2,\cdots.
\]
For brevity, we denote
\[
\mathcal{Q}_{j,\delta}^+=B_{r_j}^+(\bar x)\times(-1,-1+\delta r_j^{p+2}).
\]
Let $\eta_j(x) $ be a smooth cut-off function satisfying
\[
\mbox{supp}(\eta_j) \subset B_{r_j}(\bar x), \quad 0\le \eta_j \le 1, \quad \eta_j=1 \mbox{ in }B_{r_{j+1}}(\bar x),
\]
\[
|D \eta_j(x,t)|^2 \le \frac{C(n)}{(r_j-r_{j+1})^2} \quad \mbox{in }B_R(\bar x).
\]
Case 1: $p\ge 0$. Let us consider $n\ge 3$ first. By Theorem \ref{thm:caccipolli} and Theorem \ref{thm:weightedsobolev}, we have
\begin{equation}\label{eq:auxgiorgiinitial}
\Big(\int_{\mathcal{Q}_{j,\delta}^+} |\eta_j v|^{2\chi}\,\ud x \ud t \Big)^{\frac{1}{\chi}} \le C\left[ \frac{2^{2j}}{R^2} \|v\|^2_{L^2(\mathcal{Q}_{j,\delta}^+)} + (M+F_1)^2 |\mathcal{Q}_{j,\delta}^+\cap \{u>k_j\}|^{1-\frac1q}\right],
\end{equation}
where $v=(u-k_j)^+$. Let $A(k,r_j)= \{(x,t)\in \mathcal{Q}_{j,\delta}^+: u> k\}$. Then
\[
\Big(\int_{\mathcal{Q}_{j,\delta}^+} |\eta_j v|^{2\chi}\,\ud x \ud t \Big)^{\frac{1}{\chi}} \ge (k_{j+1}- k_j)^2 |A(k_{j+1}, r_{j+1})|^{\frac{1}{\chi}},
\]
and
\[
\int_{\mathcal{Q}_{j,\delta}^+} v^{2} \,\ud x\ud t \le \mu^2 |A(k_{j}, r_{j})|.
\]
If
\[
\mu\ge (M+F_1)R^{1-\frac{n+p+2}{2q}},
\]
then
\begin{align*}
|A(k_{j+1}, r_{j+1})| & \le C\left[ \frac{2^{4j}}{R^2}|A(k_{j}, r_{j})| + \frac{2^{2j}(M+F_1)^2}{\mu^2} |A(k_{j}, r_{j})|^{1-\frac1q}\right]^\chi\\
&\le C\left[ \frac{2^{4j}}{R^2}|A(k_{j}, r_{j})| + \frac{2^{2j}}{R^{2-\frac{n+p+2}{q}}} |A(k_{j}, r_{j})|^{1-\frac1q}\right]^\chi\\
&\le C\left[ \frac{16^{j}\delta^{1-\frac1q}}{R^{2-\frac{n+p+2}{q}}} |A(k_{j}, r_{j})|^{1-\frac1q}\right]^\chi,
\end{align*}
where we used $|A(k_{j}, r_{j})| \le |\mathcal{Q}_{j,\delta}^+|\le C\delta R^{n+p+2}$. Hence
\begin{equation}\label{eq:auxgiorgiinitial2}
\frac{|A(k_{j+1}, r_{j+1})| }{ |\mathcal{Q}_R^+|}\le C \delta^{(1-\frac1q)\chi}16^{j\chi}\left( \frac{|A(k_{j}, r_{j})|}{|\mathcal{Q}_R^+|}\right)^{(1-\frac1q)\chi},
\end{equation}
where we used that $\chi=\frac{n+p+2}{n+p}$. Therefore, similarly to \eqref{eq:nonlineariteration} and \eqref{eq:nonlineariteration2}, there exists $\delta_0\in(0,1)$ such that if $\delta\le\delta_0$, then
\begin{equation}\label{eq:auxgiorgiinitial3}
\lim_{j\to \infty}\frac{|A(k_{j+1}, r_{j+1})|}{|\mathcal{Q}_R^+|}=0.
\end{equation}
Now, let us consider $n=1, 2$. By Theorem \ref{thm:caccipolli} and Theorem \ref{thm:weightedsobolev}, \eqref{eq:auxgiorgiinitial} becomes
\[
\Big(\int_{\mathcal{Q}_{j,\delta}^+} |\eta_j v|^{2\chi}\,\ud x \ud t \Big)^{\frac{1}{\chi}} \le C R^{\frac{p+2-n}{p+2}}\left[ \frac{2^{2j}}{R^2} \|v\|^2_{L^2(\mathcal{Q}_{j,\delta}^+)} + (M+F_1)^2 |\mathcal{Q}_{j,\delta}^+\cap \{u>k_j\}|^{1-\frac1q}\right].
\]
Using $\chi=\frac{p+2}{p+1}$, one still obtains \eqref{eq:auxgiorgiinitial2} and \eqref{eq:auxgiorgiinitial3}. The rest of the proof is the same as above.
Case 2: $-1<p<0$. Again, we consider $n\ge 3$ first. By Theorem \ref{thm:caccipolli} and Theorem \ref{thm:weightedsobolev2}, we have
\begin{equation}\label{eq:auxgiorgiinitial-2}
\begin{split}
& \Big(\int_{\mathcal{Q}_{j,\delta}^+} |\eta_j v|^{2\chi}x_n^p\,\ud x \ud t \Big)^{\frac{1}{\chi}} \\
&\le C\left[ \frac{2^{2j}}{R^2} \|v\|^2_{L^2(\mathcal{Q}_{j,\delta}^+,x_n^p\ud x\ud t)} + (M+F_1)^2 |\mathcal{Q}_{j,\delta}^+\cap \{u>k_j\}|_{\nu_p}^{1-\frac1q}\right],
\end{split}
\end{equation}
where $v=(u-k_j)^+$. Then
\[
\Big(\int_{\mathcal{Q}_{j,\delta}^+} |\eta_j v|^{2\chi}x_n^p\,\ud x \ud t \Big)^{\frac{1}{\chi}} \ge (k_{j+1}- k_j)^2 |A(k_{j+1}, r_{j+1})|_{\nu_p}^{\frac{1}{\chi}},
\]
and
\[
\int_{\mathcal{Q}_{j,\delta}^+} v^{2} x_n^p\,\ud x\ud t \le \mu^2 |A(k_{j}, r_{j})|_{\nu_p}.
\]
If
\[
\mu\ge (M+F_1)R^{\frac{p+2}{2}-\frac{n+2p+2}{2q}},
\]
then
\begin{align*}
|A(k_{j+1}, r_{j+1})|_{\nu_p} & \le C\left[ \frac{2^{4j}}{R^2}|A(k_{j}, r_{j})|_{\nu_p} + \frac{2^{2j}(M+F_1)^2}{\mu^2} |A(k_{j}, r_{j})|_{\nu_p}^{1-\frac1q}\right]^\chi\\
&\le C\left[ \frac{2^{4j}}{R^2}|A(k_{j}, r_{j})|_{\nu_p} + \frac{2^{2j}}{R^{p+2-\frac{n+2p+2}{q}}} |A(k_{j}, r_{j})|_{\nu_p}^{1-\frac1q}\right]^\chi\\
&\le C\left[ \frac{16^{j}\delta^{1-\frac1q}}{R^{p+2-\frac{n+2p+2}{q}}} |A(k_{j}, r_{j})|_{\nu_p}^{1-\frac1q}\right]^\chi,
\end{align*}
where we used $|A(k_{j}, r_{j})|_{\nu_p} \le |\mathcal{Q}_{j,\delta}^+|_{\nu_p}\le C\delta R^{n+2p+2}$. Hence
\begin{equation}\label{eq:auxgiorgiinitial2-2}
\frac{|A(k_{j+1}, r_{j+1})|_{\nu_p} }{ |\mathcal{Q}_R^+|_{\nu_p}}\le C \delta^{(1-\frac1q)\chi}16^{j\chi}\left( \frac{|A(k_{j}, r_{j})|_{\nu_p}}{|\mathcal{Q}_R^+|_{\nu_p}}\right)^{(1-\frac1q)\chi},
\end{equation}
where we used that $\chi=\frac{n+2p+2}{n+p}$. Therefore, there exists $\delta_0\in(0,1)$ such that if $\delta\le\delta_0$, then
\begin{equation}\label{eq:auxgiorgiinitial3-2}
\lim_{j\to \infty}\frac{|A(k_{j+1}, r_{j+1})|_{\nu_p}}{|\mathcal{Q}_R^+|_{\nu_p}}=0.
\end{equation}
Now, let us consider $n=1, 2$. By Theorem \ref{thm:caccipolli} and Theorem \ref{thm:weightedsobolev2}, \eqref{eq:auxgiorgiinitial-2} becomes
\begin{align*}
& \Big(\int_{\mathcal{Q}_{j,\delta}^+} |\eta_j v|^{2\chi}x_n^p\,\ud x \ud t \Big)^{\frac{1}{\chi}} \\
&\le C R^{\frac{p+4-n}{3}}\left[ \frac{2^{2j}}{R^2} \|v\|^2_{L^2(\mathcal{Q}_{j,\delta}^+,x_n^p\ud x\ud t)} + (M+F_1)^2 |\mathcal{Q}_{j,\delta}^+\cap \{u>k_j\}|_{\nu_p}^{1-\frac1q}\right].
\end{align*}
Using $\chi=\frac{3}{2}$, one still obtains \eqref{eq:auxgiorgiinitial2-2} and \eqref{eq:auxgiorgiinitial3-2}. The rest of the proof is the same as above.
In each case, we have that if $0<\delta\le\delta_0$, then
\[
\sup_{B_{R/2}(\bar x)\times(-1,-1+\delta(R/2)^{p+2})} u\le \frac{\mu(R)}{2}.
\]
Applying this estimate to $-u$, one obtains
\[
\inf_{B_{R/2}(\bar x)\times(-1,-1+\delta(R/2)^{p+2})} u\ge \frac{\widetilde \mu(R)}{2}.
\]
Meanwhile, it follows from Lemma \ref{lem:smallonlargeset} and Lemma \ref{lem:decay-1} that there exists $\ell>0$ such that
either
\begin{equation*}
\mu\le
\left\{
\begin{aligned}
& 2^{\ell} (M+F_1)R^{1-\frac{n+p+2}{2q}}, \quad &\mbox{for } p\ge 0, \\
& 2^{\ell} (M+F_1)R^{\frac{p+2}{2}-\frac{n+2p+2}{2q}}, \quad &\mbox{for } -1<p<0,
\end{aligned}
\right.
\end{equation*}
or
\[
\sup_{B_{R/4}(\bar x)\times(-1+\delta(R/2)^{p+2},-1+(R/2)^{p+2}]} u\le \mu(R)-\frac{\mu(R)}{2^{\ell}},
\]
and either
\begin{equation*}
-\widetilde \mu\le
\left\{
\begin{aligned}
& 2^{\ell} (M+F_1)R^{1-\frac{n+p+2}{2q}}, \quad &\mbox{for } p\ge 0, \\
& 2^{\ell} (M+F_1)R^{\frac{p+2}{2}-\frac{n+2p+2}{2q}}, \quad &\mbox{for } -1<p<0,
\end{aligned}
\right.
\end{equation*}
or
\[
\inf_{B_{R/4}(\bar x)\times(-1+\delta(R/2)^{p+2},-1+(R/2)^{p+2}]} u\ge \widetilde\mu(R)-\frac{\widetilde\mu(R)}{2^{\ell}}.
\]
In any case, we obtain
\begin{equation*}
\omega(R/4)\le
\left\{
\begin{aligned}
& (1-2^{-\ell})\omega(R)+ 2^{\ell+1} (M+F_1)R^{1-\frac{n+p+2}{2q}}, \quad &\mbox{for } p\ge 0, \\
& (1-2^{-\ell})\omega(R)+ 2^{\ell+1} (M+F_1)R^{\frac{p+2}{2}-\frac{n+2p+2}{2q}}, \quad &\mbox{for } -1<p<0.
\end{aligned}
\right.
\end{equation*}
By an iterative lemma, e.g. Lemma 3.4 in Han-Lin \cite{HL} (or Lemma B.2 in \cite{JX19}), there exist $\alpha$ and $C$, both of which depend only on $\lambda,\Lambda,n,p$ and $q$, such that
\[
\omega(R)\le C(M+F_1)R^{\alpha}\quad\,\forall R\in(0,1/4].
\]
The conclusion follows from the above and Theorem \ref{thm:localboundednessglobal}.
\end{proof}
Arguing similarly to the proofs of Theorem \ref{thm:uniformholdernearboundary} and Theorem \ref{thm:holderboundarybottom}, we also have
\begin{thm}\label{thm:uniformholderinitial}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D} and the initial condition $u(\cdot,-1)=0$, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:localbddnesscoeffcient} and \eqref{eq:localbddnessf} for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2}, \frac{n+2p+2}{p+2})$. Then for every $\gamma>0$, there exist $\alpha>0$ and $C>0$, both of which depend only on $\lambda,\Lambda,n,p,\gamma$ and $q$, such that for every $(x,-1), (y,s)\in B_{1/4}(e_n/2)\times[-1,-\frac34]$, there holds
\[
|u(x,-1)-u(y,s)|\le C(\|u\|_{L^\gamma({\mathcal{Q}}_1^+)}+F_1) (|x-y|+|s+1|)^\alpha,
\]
where $e_n=(0,\cdots,0,1)$.
\end{thm}
Combining Theorem \ref{thm:uniformholdernearboundary} and Theorem \ref{thm:uniformholderinitial} with scaling arguments similar to those in the proof of Theorem \ref{thm:holdernearboundary}, we have
\begin{thm}\label{thm:uniformholdernearinitial}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D} and the initial condition $u(\cdot,-1)=0$, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:localbddnesscoeffcient} and \eqref{eq:localbddnessf} for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2}, \frac{n+2p+2}{p+2})$. Then for every $\gamma>0$, there exist $\alpha>0$ and $C>0$, both of which depend only on $\lambda,\Lambda,n,p,\gamma$ and $q$, such that for every $(x,t), (y,s)\in B_{1/4}(e_n/2)\times[-1,0]$, there holds
\[
|u(x,t)-u(y,s)|\le C(\|u\|_{L^\gamma({\mathcal{Q}}_1^+)}+F_1) (|x-y|+|t-s|)^\alpha,
\]
where $e_n=(0,\cdots,0,1)$.
\end{thm}
\begin{proof}
The proof is in the same spirit as that of Theorem \ref{thm:holdernearboundary}. We omit the details; one can also refer to the proof of Theorem \ref{thm:uniformholderglobal} below.
\end{proof}
Finally, we have the space-time global H\"older estimate:
\begin{thm}\label{thm:uniformholderglobal}
Suppose $u\in C ([-1,0]; L^2(B_1^+,x_n^{p}\ud x)) \cap L^2((-1,0];H_{0,L}^1(B_1^+))$ is a weak solution of \eqref{eq:linear-eq} with the partial boundary condition \eqref{eq:linear-eq-D} and the initial condition $u(\cdot,-1)=0$, where the coefficients of the equation satisfy \eqref{eq:rangep}, \eqref{eq:ellip2}, \eqref{eq:localbddnesscoeffcient} and \eqref{eq:localbddnessf} for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2}, \frac{n+2p+2}{p+2})$. Then for every $\gamma>0$, there exist $\alpha>0$ and $C>0$, both of which depend only on $\lambda,\Lambda,n,p,\gamma$ and $q$, such that for every $(x,t), (y,s)\in B_{1/2}^+\times[-1,0]$, there holds
\[
|u(x,t)-u(y,s)|\le C(\|u\|_{L^\gamma({\mathcal{Q}}_1^+)}+F_1) (|x-y|+|t-s|)^\alpha.
\]
\end{thm}
\begin{proof}
By Theorem \ref{thm:localboundednessglobal} and normalization, we assume $\sup_{B_{3/4}^+\times[-1,0]}|u|+F_1=1$. For any $\bar x=(0,\bar x_n )\in B_{1/4}^+$ and $\bar t\in (-1,0]$, we let $R:=\max(\bar x_n, (\bar t+1)^{\frac{1}{p+2}})>0$, and rescale the solution and the coefficients as in \eqref{eq:rescaledequationcoefficients} with $x_0=0$. Then we have \eqref{eq:rescaledequation} in $\mathcal{Q}^+_{1/R}$. Also, \eqref{eq:rescaledequationcoefficients1} and \eqref{eq:rescaledequationcoefficients2} hold with $Q_1^+$ replaced by $\widetilde Q^+=B_2^+\times(-R^{-p-2},-R^{-p-2}+1]$.
Case 1: $R=\bar x_n$.
Consider $s\in (-1,\bar t]$. If $|\bar t-s|\le R^{2p+4}$, then by Theorem \ref{thm:uniformholdernearinitial}, we have
\[
|u(\bar x,\bar t)-u(\bar x, s)|=|\tilde u(e_n,\bar t/R^{p+2})-\tilde u(e_n,s/R^{p+2})|\le C |(\bar t-s)/R^{p+2}|^{\alpha/2}\le C |\bar t-s|^{\alpha/4}.
\]
If $|\bar t-s|\ge R^{2p+4}$, then we have
\begin{align*}
|u(\bar x,\bar t)-u(\bar x, s)|& \le |u(\bar x,\bar t)-u(\bar x,-1)|+|u(\bar x,s)-u(\bar x, -1)|\\
& = |\tilde u(e_n,\bar t/R^{p+2})-\tilde u(e_n,-1/R^{p+2})|+|\tilde u(e_n,s/R^{p+2})-\tilde u(e_n, -1/R^{p+2})|\\
& \le C|\bar t+1|^{\alpha}\\
&\le CR^{(p+2)\alpha}\\
&\le C |\bar t-s|^{\frac{\alpha}{2}}.
\end{align*}
This shows that $u$ is H\"older continuous in the time variable.
Consider $\tilde x=(\tilde x',\tilde x_n)\in B_{1/2}^+$ such that $\tilde x_n\le \bar x_n$. If $\tilde x\in B_{R^2}(\bar x)$, then by Theorem \ref{thm:uniformholdernearinitial}, we have
\[
|u(\bar x,\bar t)-u(\tilde x, \bar t)|=|\tilde u(e_n,\bar t/R^{p+2})-\tilde u(\tilde x/R,\bar t/R^{p+2})|\le C \big(|\tilde x-\bar x|/R\big)^{\alpha}\le C |\tilde x-\bar x|^{\alpha/2}.
\]
If $\tilde x\not\in B_{R^2}(\bar x)$, then we have
\begin{align*}
|u(\bar x,\bar t)-u(\tilde x, \bar t)|& \le |u(\bar x,\bar t)-u(0,-1)|+|u(0,-1)-u(\tilde x',0,-1)|+|u(\tilde x',0,-1)-u(\tilde x, \bar t)|\\
& \le C(R^{\alpha}+|\bar t+1|^\alpha)\\
&\le C |\bar x-\tilde x|^{\frac{\alpha}{2}},
\end{align*}
where we used Theorem \ref{thm:holderboundarybottom} in the second inequality. This shows that $u$ is H\"older continuous in the spatial variables.
Case 2: $R= (\bar t+1)^{\frac{1}{p+2}}$.
Consider $s\in (-1,\bar t]$. If $|\bar t-s|\le R^{2p+4}$, then by Theorem \ref{thm:holdernearboundary}, we have
\[
|u(\bar x,\bar t)-u(\bar x, s)|=|\tilde u(\bar x/R,\bar t/R^{p+2})-\tilde u(\bar x/R,s/R^{p+2})|\le C |(\bar t-s)/R^{p+2}|^{\alpha/2}\le C |\bar t-s|^{\alpha/4}.
\]
If $|\bar t-s|\ge R^{2p+4}$, then by Theorem \ref{thm:holderboundarybottom}, we have
\begin{align*}
|u(\bar x,\bar t)-u(\bar x, s)|& \le |u(\bar x,\bar t)-u(0,-1)|+|u(0,-1)-u(\bar x, s)|\\
& \le CR^{\alpha}\\
&\le C |\bar t-s|^{\frac{\alpha}{2(p+2)}}.
\end{align*}
This shows that $u$ is H\"older continuous in the time variable.
Consider $\tilde x=(\tilde x',\tilde x_n)\in B_{1/2}^+$ such that $\tilde x_n\le \bar x_n$. If $\tilde x\in B_{R^2}(\bar x)$, then by Theorem \ref{thm:uniformholdernearboundary}, we have
\[
|u(\bar x,\bar t)-u(\tilde x, \bar t)|=|\tilde u(e_n,\bar t/R^{p+2})-\tilde u(\tilde x/R,\bar t/R^{p+2})|\le C \big(|\tilde x-\bar x|/R\big)^{\alpha}\le C |\tilde x-\bar x|^{\alpha/2}.
\]
If $\tilde x\not\in B_{R^2}(\bar x)$, then we have
\begin{align*}
|u(\bar x,\bar t)-u(\tilde x, \bar t)|& \le |u(\bar x,\bar t)-u(0,-1)|+|u(0,-1)-u(\tilde x',0,-1)|+|u(\tilde x',0,-1)-u(\tilde x, \bar t)|\\
& \le C(R^{\alpha}+|\bar t+1|^\alpha)\\
&\le C |\bar x-\tilde x|^{\frac{\alpha}{2}},
\end{align*}
where we used Theorem \ref{thm:holderboundarybottom} in the second inequality. This shows that $u$ is H\"older continuous in the spatial variables.
Together with Theorem \ref{thm:localboundednessglobal}, we finish the proof of this theorem.
\end{proof}
\subsection{The Cauchy-Dirichlet problem}
Finally, let us return to the Cauchy-Dirichlet problem in general domains mentioned at the beginning:
\begin{equation} \label{eq:finalgeneral}
\begin{split}
a\omega^{p} \pa_t u-D_j(a_{ij} D_i u+d_j u)+b_iD_i u+\omega^pcu+c_0u&=\omega^pf+f_0 -D_if_i\quad \mbox{in }\Omega \times(-1,0],\\
u&=0\quad \mbox{on }\partial_{pa}(\Omega \times(-1,0]),
\end{split}
\end{equation}
where $\Omega\subset\R^n$, $n\ge 1$, is a smooth bounded open set, and $\omega$ is a smooth function in $\overline\Omega$ comparable to the distance function $d(x):=\dist(x,\partial\Omega)$, that is, $0<\inf_{\Omega}\frac{\omega}{d}\le \sup_{\Omega}\frac{\omega}{d}<\infty$, and $p>-1$ is a constant.
Suppose there exist $0<\lambda\le\Lambda<\infty$ such that
\be \label{eq:ellip2final}
\lda\le a(x,t)\le \Lda, \quad \lda |\xi|^2 \le \sum_{i,j=1}^na_{ij}(x,t)\xi_i\xi_j\le \Lda |\xi|^2, \quad\forall\ (x,t)\in \Omega \times(-1,0],\ \forall\ \xi\in\R^n,
\ee
and
\begin{align}
\Big\||\pa_t a|+|c|\Big\|_{L^q(\Omega \times(-1,0],\omega^p\ud x\ud t)}+\Big\|\sum_{j=1}^n(b_j^2+d_j^2)+|c_0|\Big\|_{L^q(\Omega \times(-1,0])}&\le\Lambda, \label{eq:localbddnesscoeffcientfinal}\\
F_2:=\|f\|_{L^{\frac{2q\chi}{q\chi+\chi-q}}(\Omega \times(-1,0],\omega^p\ud x\ud t)}+\|f_0\|_{L^{\frac{2q\chi}{q\chi+\chi-q}}(\Omega \times(-1,0])}+ \sum_{j=1}^n\|f_j\|_{L^{2q}(\Omega \times(-1,0])}&<\infty\label{eq:localbddnessffinal}
\end{align}
for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2}, \frac{n+2p+2}{p+2})$, where $\chi>1$ is the constant in Theorem \ref{thm:weightedsobolev} or Theorem \ref{thm:weightedsobolev2} depending on the value of $p$.
We say that $u$ is a weak solution of \eqref{eq:finalgeneral} if $u\in C ((-1,0]; L^2(\Omega,\omega^{p}\ud x)) \cap L^2((-1,0];H_{0}^1(\Omega)) $, $u(\cdot,-1)\equiv 0$, and satisfies
\begin{equation}\label{eq:definitionweaksolutionfinal}
\begin{split}
&\int_{\Omega}a(x,s) \omega(x)^{p} u(x,s) \varphi(x,s)\,\ud x-\int_{-1}^s\int_{\Omega} \omega^{p}(\varphi\partial_t a+a\partial_t \varphi)u\,\ud x\ud t\\
&\quad+ \int_{-1}^s\int_{\Omega} \big(a_{ij}D_iuD_j\varphi+d_juD_j\varphi+b_jD_ju\varphi+c \omega^p u \varphi+c_0u\varphi\big)\,\ud x\ud t\\
&=\int_{-1}^s\int_{\Omega} (\omega^p f\varphi+f_0\varphi+f_jD_j\varphi)\,\ud x\ud t\quad\mbox{a.e. }s\in (-1,0]
\end{split}
\end{equation}
for every $\varphi\in \{g\in L^2(\Omega \times(-1,0]): \partial_tg\in L^2(\Omega \times(-1,0],\omega^{p}\ud x\ud t), D_i g \in L^2(\Omega \times(-1,0]), i=1,\cdots,n, g=0\mbox{ on } \partial\Omega\times (-1,0]\}.$
\begin{thm}\label{thm:uniformholderglobalfinal}
Suppose $p>-1$, \eqref{eq:ellip2final}, \eqref{eq:localbddnesscoeffcientfinal} and \eqref{eq:localbddnessffinal} hold for some $q>\max(\frac{\chi}{\chi-1},\frac{n+p+2}{2}, \frac{n+2p+2}{p+2})$. Then there exists a unique weak solution $u\in C ((-1,0]; L^2(\Omega,\omega^{p}\ud x)) \cap L^2((-1,0];H_{0}^1(\Omega)) $ of \eqref{eq:finalgeneral}. Furthermore, for every $\gamma>0$, there exist $\alpha>0$ and $C>0$, both of which depend only on $\lambda,\Lambda,n,\Omega, p,\gamma$ and $q$, such that for every $(x,t), (y,s)\in \Omega\times[-1,0]$, there holds
\[
|u(x,t)-u(y,s)|\le C(\|u\|_{L^\gamma(\Omega\times[-1,0])}+F_2) (|x-y|+|t-s|)^\alpha.
\]
\end{thm}
\begin{proof}
The H\"older estimate of the weak solution follows from Theorem \ref{thm:uniformholderglobal}, Theorem \ref{thm:uniformholdernearinitial}, the flattening boundary technique and a covering argument.
The uniqueness of the weak solution follows from a similar energy estimate to that in Theorem \ref{thm:uniquenessofweaksolution}.
The existence of weak solutions follows by a similar argument to the proof of Theorem \ref{thm:existenceofweaksolution}. Here, we do not need to assume $a$ to be continuous, since the approximating solutions in the proof of Theorem \ref{thm:existenceofweaksolution} under the assumption of this theorem will be uniformly H\"older continuous up to the boundary. The argument there will go through without the assumption of the continuity of $a$. We leave the details to the readers.
\end{proof}
\small
\noindent T. Jin
\noindent Department of Mathematics, The Hong Kong University of Science and Technology\\
Clear Water Bay, Kowloon, Hong Kong\\[1mm]
Email: \textsf{[email protected]}
\noindent J. Xiong
\noindent School of Mathematical Sciences, Laboratory of Mathematics and Complex Systems, MOE\\ Beijing Normal University,
Beijing 100875, China\\[1mm]
Email: \textsf{[email protected]}
\end{document}
\begin{document}
\title[Fractional Hankel wavelet transform]{Continuity of the fractional Hankel wavelet transform on the spaces of type S}
\author{Kanailal Mahato}
\address{Department of Mathematics, Institute of Science, Banaras Hindu University, Varanasi-221005, India}
\email{kanailalmahato$@$gmail.com, kanailalmahato$@$bhu.ac.in}
\subjclass[2010]{46F05, 46F12, 42C40, 65T60.}
\date{\today}
\keywords{Bessel Operator, Fractional Hankel transform, Fractional Hankel translation, Wavelet transform, Gelfand-Shilov spaces, Ultradifferentiable function space.}
\begin{abstract}
In this article we study the fractional Hankel transform and its inverse on certain Gel'fand-Shilov spaces of type S. The continuous fractional wavelet transform is defined in terms of the fractional Hankel transform. The continuity of the fractional Hankel wavelet transform is discussed on Gel'fand-Shilov spaces of type S. The article further discusses the continuity of the fractional Hankel transform and the fractional Hankel wavelet transform on ultradifferentiable function spaces.
\end{abstract}
\maketitle
\section{Introduction}
In recent years, the continuous wavelet transform has been successfully applied in the fields of signal processing and image encryption. The continuous wavelet transform of a function $f$ associated with the wavelet $\psi$ is defined by
\begin{eqnarray}
(W_{\psi}f)(b, a)=\int_{-\infty}^{\infty}f(t) \overline{\psi_{b, a}}(t)\frac{dt}{a},\nonumber
\end{eqnarray}
where $a\in \mathbb{R}^+$, $b\in \mathbb{R}$, and $\psi_{b, a}(t)= \psi\Big(\frac{t-b}{a} \Big)$, provided the integral exists. If $f, \psi \in L^2(\mathbb{R})$, then, exploiting the Parseval relation for the Fourier transform, the above expression can be rewritten as (see \cite{Chui, Deb}):
\begin{eqnarray}
(W_{\psi}f)(b, a)= \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ib\omega} \hat{f}(\omega) \overline{\hat{\psi}(a\omega)}d\omega,\nonumber
\end{eqnarray}
where $\hat{f}$ and $\hat{\psi}$ denote the Fourier transforms of $f$ and $\psi$, respectively. The Gel'fand-Shilov spaces were introduced in \cite{gelfand}, where the characterization of the Fourier transform on these spaces was studied. Pathak \cite{pathak0} and Holschneider \cite{hols} studied the wavelet transform involving the Fourier transform on the Schwartz space $S(\mathbb{R})$. Zemanian \cite{ze}, Lee \cite{lee} and Pathak \cite{rs} described the basic properties of the classical Hankel transform on certain Gel'fand-Shilov spaces of type S. In the theory of partial differential equations and in mathematical analysis, the spaces of type S play an important role as intermediate spaces between the space of $C^{\infty}$ functions and that of analytic functions. The main purpose of this article is to study the fractional Hankel transform and the continuous wavelet transform associated with the fractional Hankel transform on Gel'fand-Shilov spaces of type S.
The fractional Hankel transform (FrHT), which is a generalization of the usual Hankel transform and depends on a parameter $\theta$, has been the focus of many researchers, as it has a wide range of applications in seismology, optics, signal processing, and problems involving cylindrical boundaries. The fractional Hankel transform $\mathcal{H}_{ \nu, \mu}^{\theta}$ of a function $f$, of order $\nu \geq -\frac{1}{2}$ and depending on an arbitrary real parameter $\mu$ and on $\theta$ $(0< \theta < \pi)$, is defined by (see \cite{kerr, prasad2, prasad4, torre}):
\begin{eqnarray}
(\mathcal H_{ \nu,\mu}^{\theta} f )(\omega)= \tilde{f}^{\theta}(\omega) =\displaystyle \int _{0}^\infty K^{\theta}(t,\omega) f(t) dt, \label{eq:1.1}
\end{eqnarray}
where,
\begin{eqnarray}
&&\quad K^{\theta}(t,\omega) =
\begin{cases}
C_{\nu, \mu, \theta}e^{\frac{i}{2}(t^2+\omega^2)\cot \theta}(t\omega \csc\theta)^{-\mu}J_{\nu}(t \omega \csc\theta) t^{1+2\mu}, ~\theta \neq n\pi\\
(t \omega)^{-\mu}J_{\nu}(t \omega ) t^{1+2\mu},\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \theta = \frac{\pi}{2}\\
\delta(t-\omega),\quad \quad \quad \quad \quad\quad \quad \quad \quad \quad \quad \quad\quad\quad \theta= n \pi, ~n \in \mathbb{Z}
\end{cases}
\label{eq:1.2}
\end{eqnarray}
where $C_{\nu, \mu, \theta}= \frac{e^{i(1+\nu)(\theta-\frac{\pi}{2})}}{(\sin \theta)^{1+\mu}}$.\\
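For readers who wish to experiment numerically with \eqref{eq:1.1} and \eqref{eq:1.2}, the following short Python sketch (not part of the original development; the truncation radius \texttt{T}, the grid size \texttt{N}, the parameter values and the Gaussian test input are illustrative assumptions) evaluates the case $\theta\neq n\pi$ by a simple Riemann sum.
\begin{verbatim}
import numpy as np
from scipy.special import jv

# Crude numerical sketch of the fractional Hankel transform above, for
# theta != n*pi.  T, N and the Gaussian test input are illustrative
# choices (the integrand is oscillatory, so this is only approximate).
def frht(f, omega, nu=0.0, mu=0.0, theta=np.pi/3, T=40.0, N=400000):
    t = np.linspace(1e-8, T, N)          # avoid the endpoint t = 0
    dt = t[1] - t[0]
    csc = 1.0 / np.sin(theta)
    cot = np.cos(theta) / np.sin(theta)
    C = np.exp(1j * (1 + nu) * (theta - np.pi / 2)) / np.sin(theta) ** (1 + mu)
    kernel = (C * np.exp(0.5j * (t**2 + omega**2) * cot)
              * (t * omega * csc) ** (-mu)
              * jv(nu, t * omega * csc) * t ** (1 + 2 * mu))
    return np.sum(kernel * f(t)) * dt    # left Riemann sum

# Example: value of the transform of a Gaussian at omega = 2.
print(frht(lambda t: np.exp(-t**2), omega=2.0))
\end{verbatim}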
The inverse of \eqref{eq:1.1} is given as follows:
\begin{eqnarray}
f(t)= ((\mathcal H_{\nu,\mu}^{-\theta} )\tilde{f}^{\theta} )(t)= \displaystyle \int _{0}^\infty K^{-\theta}(\omega, t) \tilde{f}^{\theta}(\omega) d\omega, \label{eq:1.3}
\end{eqnarray}
where $K^{-\theta}(\omega, t)$ is the same as $\overline{{K^{\theta}}}(\omega, t) $.
\par Let the space $L_{\nu,\mu}^{p}(I)$ consist of all measurable functions $f$ on $I = (0, \infty)$ such that the integral $\displaystyle \int_{0}^{\infty} \vert f(t)\vert^{p} t^{\mu+\nu+1} dt $ exists and is finite. Also let $L^{\infty}(I)$ be the collection of essentially bounded measurable functions. These spaces are endowed with the norm
\begin{eqnarray}
\vert \vert f \vert \vert _{L_{\nu,\mu}^{p}}=
\left\{
\begin{array}{l}
{\left(\displaystyle \int_{0}^{\infty} \vert f(t)\vert^{p} t^{\mu+\nu+1} dt\right)}^{\frac{1}{p}}, 1\leq p < \infty, \mu, \nu \in \mathbb{R}\\
\displaystyle ess \sup_{t \in I}\vert f(t) \vert, ~~~ \quad \quad p = \infty.
\end{array}
\right. \label{eq:1.4}
\end{eqnarray}
\textbf{Parseval's relation:} It is easy to see that, under suitable conditions on $f$ and $g$, the operator $\mathcal{H}_{\nu, \mu}^{\theta} $ satisfies
\begin{eqnarray*}
\int_{0}^{\infty} f(t)\overline{g(t)}t^{1+2\mu}dt = \int_{0}^{\infty}\tilde{f}^{\theta}(\omega) \overline{\tilde{g}^{\theta}(\omega)}\omega^{1+2\mu}d\omega.
\end{eqnarray*}
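As an immediate worked instance of this relation (added here for illustration), taking $g=f$ yields the Plancherel-type identity
\begin{eqnarray*}
\int_{0}^{\infty} \vert f(t)\vert^{2}\, t^{1+2\mu}\,dt = \int_{0}^{\infty}\vert \tilde{f}^{\theta}(\omega)\vert^{2}\,\omega^{1+2\mu}\,d\omega,
\end{eqnarray*}
under the same conditions on $f$.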
To define the fractional Hankel translation \cite{prasad2, prasad4, hamio, mahato1} $\tau_{t}^{\theta} $ of a function $\psi \in L_{\nu, \mu}^1(I)$, we need to introduce $D_{\nu,\mu}^{\theta}$, which is defined by:
\begin{eqnarray}
D_{\nu,\mu}^{\theta}(t,\omega,z)&=&C_{\nu,\mu, -\theta} \displaystyle \int_{0}^{\infty} {(zs \csc \theta)}^{-\mu}J_{\nu}(zs \csc \theta)e^{-\frac{i}{2}(z^2+t^2+\omega^2) \cot \theta}\label{eq:1.5}\\
&& \times {(ts \csc \theta)}^{-\mu} J_{\nu}(ts \csc \theta){(\omega s \csc \theta)}^{-\mu}J_{\nu}(\omega s \csc \theta)\nonumber \\
&& \times s^{1+3\mu-\nu}ds,\nonumber
\end{eqnarray}
provided the integral exists.
The fractional Hankel translation \cite{hirs} $\tau_{t}^{\theta}$ of $\psi$ is given by
\begin{eqnarray}
(\tau_{t}^{\theta}\psi)(\omega)& =& \psi^{\theta}(t,\omega)\nonumber\\
&=& C_{\nu, \mu, \theta} \displaystyle \int_{0}^{\infty} \psi(z)D_{\nu,\mu}^{\theta}(t,\omega, z)e^{\frac{i}{2}z^2 \cot \theta} z^{\mu+\nu+1} dz. \label{eq:1.6}
\end{eqnarray}
Wavelets are the elements constructed from translations and dilations of a single function $\psi \in L^2(\mathbb{R})$ \cite{pathak0, Chui, Deb}. In a similar way, the Bessel wavelet was introduced in \cite{upadhyay, pathak}, and the fractional Bessel wavelet $\psi_{b, a, \theta}$ was introduced in \cite{mahato1, prasad4, pathak, prasad1}; it is defined as follows:
\begin{eqnarray}
\psi_{b, a, \theta}(t)&=&\mathcal{D}_a(\tau_{b}^{\theta}\psi)(t)=\mathcal{D}_{a}\psi^{\theta}(b, t)=a^{-2\mu-2} e^{\frac{i}{2}(\frac{1}{a^2}-1)t^2 \cot \theta} e^{\frac{i}{2}(\frac{1}{a^2}+1)b^2 \cot \theta}\nonumber\\
&& \times \psi^{\theta}(b/a, t/a), ~~b\geq 0, a>0,
\end{eqnarray}
where $\mathcal{D}_a$ represents the dilation of a function.
Following \cite{mahato1, pathak, prasad4, Chui, Deb, hols}, the fractional wavelet transform $W_{\psi}^{\theta}$ of $f\in L^2_{\nu, \mu}(I)$ associated with the wavelet $\psi \in L^2_{\nu, \mu}(I)$ is defined by means of the integral transform
\begin{eqnarray}
(W_{\psi}^{\theta}f)(b, a)= \displaystyle \int_{0}^{\infty}f(t) \overline{\psi_{b,a,\theta}(t)} t^{1+2\mu}dt.\label{eq:1.8}
\end{eqnarray}
Now exploiting Parseval's relation and following \cite{mahato1, prasad4, pathak}, the above expression can be rewritten as
\begin{eqnarray}
(W_{\psi}^{\theta}f)(b, a)&=& \frac{1}{C_{\nu, \mu, -\theta}}\displaystyle \int_{0}^{\infty} K^{-\theta}(\omega, b) {(a\omega)}^{\mu-\nu} e^{\frac{i}{2}a^2\omega^2 \cot \theta} \tilde{f}^{\theta}(\omega) \nonumber \\
&& \times \overline{\mathcal{H}_{\nu, \mu}^{\theta}(z^{\nu-\mu}e^{-\frac{i}{2}z^2 \cot \theta} \psi(z))(a\omega)} d\omega\nonumber\\
&=& \frac{1}{C_{\nu, \mu, -\theta}} \mathcal{H}_{\nu, \mu}^{-\theta} \Big[ {(a\omega)}^{\mu-\nu} e^{-\frac{i}{2}a^2\omega^2 \cot \theta}\tilde{f}^{\theta}(\omega)\nonumber\\
&& \times \overline{\mathcal{H}_{\nu, \mu}^{\theta} (z^{\nu-\mu}e^{-\frac{i}{2}z^2 \cot \theta } \psi(z))}(a\omega)\Big](b).\label{eq:1.9}
\end{eqnarray}
Following \cite{lee, rs}, we now introduce certain Gel'fand-Shilov spaces of type S on which the fractional Hankel transform \eqref{eq:1.1} and the fractional Hankel wavelet transform \eqref{eq:1.9} can be studied. Let us recall the definitions of these spaces.
\begin{definition}
The space $\mathbb{H}_{1, \alpha, A}(I)$ consists of infinitely differentiable functions $f$ on $I=(0, \infty)$ satisfying the inequality
\begin{eqnarray}
\left \vert x^k{(x^{-1}D_x)}^q e^{\pm \frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}f(x) \right \vert \leq C_{q}^{\nu, \mu} {(A+\delta)}^k k^{k\alpha}, ~~\forall k , q \in \mathbb{N}_{0},\label{eq:1.10}
\end{eqnarray}
where the constants $A$ and $ C_{q}^{\nu, \mu}$ depend on $f$, $\alpha, \delta\geq 0$ are arbitrary constants, and the norms are given by
\begin{eqnarray}\label{eq:1.11}
\vert \vert f\vert \vert_{q}^{\nu, \mu, \theta}= \sup_{0<x<\infty} \frac{\left \vert x^k{(x^{-1}D_x)}^q e^{\pm \frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}f(x)\right \vert}{{(A+\delta)}^k k^{k\alpha}} < \infty.
\end{eqnarray}
\end{definition}
\begin{definition}\label{defi:1.2}
A function $f$ belongs to $\mathbb{H}^{2, \beta, B}(I)$ if and only if
\begin{eqnarray}
\left \vert x^k{(x^{-1}D_x)}^q e^{\pm \frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}f(x) \right \vert \leq C_{k}^{\nu, \mu} {(B+\sigma)}^q q^{q\beta}, ~~\forall k , q \in \mathbb{N}_{0},
\end{eqnarray}
where the constants $B, C_{k}^{\nu, \mu}$ depend on $f$ and $\sigma, \beta\geq 0$ are arbitrary constants. In this space the norms are given by
\begin{eqnarray}
\vert \vert f\vert \vert_{k}^{\nu, \mu, \theta}= \sup_{0<x<\infty} \frac{\left \vert x^k{(x^{-1}D_x)}^q e^{\pm \frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}f(x)\right \vert}{{(B+\sigma)}^q q^{q\beta}} < \infty.
\end{eqnarray}
\end{definition}
\begin{definition}
The space $\mathbb{H}^{\beta, B}_{\alpha, A}(I)$ is defined as follows: $f \in \mathbb{H}^{\beta, B}_{\alpha, A}(I)$ if and only if
\begin{eqnarray}\label{eq:1.14}
\left \vert x^k{(x^{-1}D_x)}^q e^{\pm \frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}f(x) \right \vert \leq C^{\nu, \mu} {(A+\delta)}^k k^{k \alpha}{(B+\sigma)}^q q^{q\beta},
\end{eqnarray}
$\forall k , q \in \mathbb{N}_{0},$ where the constants $A, B, C^{\nu, \mu}$ depend on $f$ and $\alpha, \beta, \delta, \sigma\geq 0$ are arbitrary constants. We introduce the norms in the space $\mathbb{H}^{\beta, B}_{\alpha, A}(I)$ as follows:
\begin{eqnarray}
\vert \vert f\vert \vert^{\nu, \mu, \theta}= \sup_{0<x<\infty} \frac{\left \vert x^k{(x^{-1}D_x)}^q e^{\pm \frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}f(x)\right \vert}{{(A+\delta)}^k k^{k \alpha}{(B+\sigma)}^q q^{q\beta}} < \infty.
\end{eqnarray}
\end{definition}
We also need to introduce the following types of test function spaces \cite{pathak0}.
\begin{definition}
The space $\mathbb{H}_{1, \tilde{\alpha}, \tilde{A}}(I\times I)$, $\tilde{\alpha}=(\alpha_1, \alpha_2)$, $\alpha_1, \alpha_2 \geq 0$ and $\tilde{A}=(A_1, A_2)$, is defined as the collection of all smooth functions $f(b, a)$ on $I\times I$ such that for all $l, k, p, q\in \mathbb{N}_0,$
\begin{eqnarray}
&& \sup_{a, b} \vert a^l b^k {(a^{-1}D_a)}^p {(b^{-1}D_b)}^q e^{\pm \frac{i}{2}b^2 \cot \theta}b^{\mu-\nu}f(b, a) \vert \nonumber\\
&\leq &C_{p, q}^{\nu, \mu} {(A_1+\delta_1)}^{l} l^{l\alpha_1} {(A_2+\delta_2)}^{k} k^{k\alpha_2},
\end{eqnarray}
where the constants $A_1, A_2$ and $C_{p, q}^{\nu, \mu}$ depend on $f$, and $\delta_1, \delta_2 \geq 0$ are arbitrary constants.
\end{definition}
\begin{definition}
The space $\mathbb{H}^{2, \tilde{\beta}, \tilde{B}}(I\times I)$, $\tilde{\beta}=(\beta_1, \beta_2)$, $\beta_1, \beta_2 \geq 0$ and $\tilde{B}=(B_1, B_2)$, is defined as the space of all smooth functions $f(b, a)$ on $I\times I$ such that for all $l, k, p, q\in \mathbb{N}_0,$
\begin{eqnarray}
&& \sup_{a, b} \vert a^l b^k {(a^{-1}D_a)}^p {(b^{-1}D_b)}^q e^{\pm \frac{i}{2}b^2 \cot \theta}b^{\mu-\nu}f(b, a) \vert \nonumber\\
&\leq &C_{l, k}^{\nu, \mu} {(B_1+\sigma_1)}^{p} p^{p\beta_1} {(B_2+\sigma_2)}^{q} q^{q\beta_2},
\end{eqnarray}
where $\sigma_1, \sigma_2 \geq 0$ are arbitrary constants and $B_1, B_2, C_{l, k}^{\nu, \mu}$ are constants depending on $f$.
\end{definition}
\begin{definition}
The space $\mathbb{H}_{\tilde{\alpha}, \tilde{A}}^{\tilde{\beta}, \tilde{B}}(I\times I)$, $\tilde{\alpha}=(\alpha_1, \alpha_2), \tilde{\beta}=(\beta_1, \beta_2)$, $\alpha_1, \alpha_2, \beta_1, \beta_2 \geq 0$ and $\tilde{A}=(A_1, A_2), \tilde{B}=(B_1, B_2)$, is defined as the space of all infinitely differentiable functions $f(b, a)$ on $I\times I$ such that for all $l, k, p, q\in \mathbb{N}_0,$
\begin{eqnarray}
&& \sup_{a, b} \vert a^l b^k {(a^{-1}D_a)}^p {(b^{-1}D_b)}^q e^{\pm \frac{i}{2}b^2 \cot \theta}b^{\mu-\nu}f(b, a) \vert \nonumber\\
&\leq &C^{\nu, \mu} {(A_1+\delta_1)}^{l} {(A_2+\delta_2)}^{k} {(B_1+\sigma_1)}^{p} {(B_2+\sigma_2)}^{q} l^{l\alpha_1}k^{k\alpha_2} p^{p\beta_1}q^{q\beta_2},
\end{eqnarray}
where the constants $A_1, A_2, B_1, B_2, C^{\nu, \mu}$ depend on $f$ and $\delta_1, \delta_2, \sigma_1, \sigma_2 \geq 0$ are arbitrary constants.
\end{definition}
Following \cite{prasad2, torre}, we recall the differential operator $M_{\nu, \mu, \theta}$ defined by
\begin{eqnarray}
M_{\nu, \mu, \theta}=-e^{-\frac{i}{2}x^2 \cot \theta} x^{\nu-\mu}D_{x}\,e^{\frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}.\nonumber
\end{eqnarray}
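For orientation (this explicit form is not used elsewhere), expanding the derivative in the definition above shows that $M_{\nu, \mu, \theta}$ acts on smooth functions as the first-order operator
\begin{eqnarray}
M_{\nu, \mu, \theta}f(x)=-\Big(D_x+\frac{\mu-\nu}{x}+i x\cot \theta\Big)f(x).\nonumber
\end{eqnarray}
We shall need the following lemma in the proof of Theorem \ref{th2.1}.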
\begin{lemma}\label{lemma:1.7}
Let $\nu \geq -\frac{1}{2}$, let $\mu, \theta$ be as above, and let $q, k \in \mathbb{N}_{0}$. Then for $\psi \in \mathbb{W}_{\nu, \mu}^{\theta}$ we have
$$(i) ~M_{\nu+k-1, \mu, \theta}...M_{\nu, \mu, \theta}\psi(x)={(-1)}^{k} x^{\nu-\mu+k}e^{-\frac{i}{2}x^2 \cot \theta}{(x^{-1}D_x)}^{k}e^{\frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}\psi(x),$$
$(ii) ~M_{\nu+q-1, \mu, \theta}...M_{\nu, \mu, \theta}( \mathcal{H}_{\nu, \mu}^{-\theta} \psi)(y)={(\csc \theta e^{i(\theta-\pi/2)})}^q (\mathcal{H}_{\nu+q, \mu}^{-\theta} x^q \psi)(y)$,\\
$(iii) ~\mathcal{H}_{\nu+q+k, \mu}^{-\theta}(x^q M_{\nu+k-1, \mu, -\theta} ...M_{\nu, \mu, -\theta}\psi)(y)={\Big(y\csc \theta e^{-i(\theta-\pi/2)}\Big)}^k (\mathcal{H}_{\nu+q, \mu}^{-\theta}x^q \psi)(y).$
\end{lemma}
\begin{proof}
Since,
\begin{eqnarray}
M_{\nu, \mu, \theta}&=&-e^{-\frac{i}{2}x^2 \cot \theta} x^{\nu-\mu}D_{x}e^{\frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}\nonumber\\
M_{\nu+1, \mu, \theta}M_{\nu, \mu, \theta}\psi(x)&=& x^{\nu-\mu+2}e^{-\frac{i}{2}x^2 \cot \theta} {(x^{-1}D_x)}^2 x^{\mu-\nu}e^{\frac{i}{2}x^2 \cot \theta}\psi(x).\nonumber
\end{eqnarray}
Proceeding in this way $k$ times, we get the required result $(i)$.\\
Now to prove $(ii),$ we have
\begin{eqnarray}
&&M_{\nu+q-1, \mu, \theta}...M_{\nu, \mu, \theta}( \mathcal{H}_{\nu, \mu}^{-\theta} \psi)(y)\nonumber\\
&=& {(-1)}^q y^{\nu-\mu+q} e^{-\frac{i}{2}y^2 \cot \theta} {(y^{-1}D_y)}^q y^{\mu-\nu}e^{\frac{i}{2}y^2 \cot \theta} C_{\nu, \mu, -\theta}\int_{0}^{\infty}{(xy\csc \theta)}^{-\mu}\nonumber\\
&& \times J_{\nu}(xy \csc \theta) e^{-\frac{i}{2}(x^2+y^2)\cot \theta}x^{1+2\mu} \psi (x) dx.\nonumber
\end{eqnarray}
Now exploiting the formula ${(x^{-1}D_x)}^m[x^{-n}J_{n}(x)]={(-1)}^m x^{-n-m}J_{n+m}(x),$ where $m$ and $n$ are positive integers, the above expression becomes
\begin{eqnarray}
&&C_{\nu, \mu, -\theta} \int_{0}^{\infty} {(xy\csc \theta)}^{-\mu}J_{\nu}(xy \csc \theta)e^{-\frac{i}{2}(x^2+y^2)\cot \theta}x^{1+2\mu} {(x\csc \theta)}^q \psi(x)dx\nonumber\\
&=& {(\csc \theta e^{i(\theta-\pi/2)})}^q (\mathcal{H}_{\nu+q, \mu}^{-\theta}x^q\psi)(y) \nonumber.
\end{eqnarray}
This completes the proof of $(ii)$.\\
To prove $(iii)$, we use integration by parts to get
\begin{eqnarray}
&& (\mathcal{H}_{\nu+q+1, \mu}^{-\theta}x^qM_{\nu, \mu, -\theta}\psi)(y) \nonumber\\
&=& - C_{\nu+q+1, \mu, -\theta} \int_{0}^{\infty} e^{-\frac{i}{2}y^2 \cot \theta} {(y\csc \theta)}^{-\mu} x^{\nu+q+1}J_{\nu+q+1}(xy \csc \theta) D_{x}x^{\mu-\nu}\nonumber\\
&& \times e^{-\frac{i}{2}x^2 \cot \theta} \psi(x)dx\nonumber
\end{eqnarray}
Using the formula $D_{x}(x^nJ_n(x))=x^n J_{n-1}(x)$ in the above equation, the expression can be rewritten as
\begin{eqnarray}
&& C_{\nu+q+1, \mu, -\theta} \int_{0}^{\infty} e^{-\frac{i}{2}y^2 \cot \theta} {(y\csc \theta)}^{-\mu} D_x[x^{\nu+q+1}J_{\nu+q+1}(xy \csc \theta)]\nonumber\\
&& \times x^{\mu-\nu} e^{-\frac{i}{2}x^2 \cot \theta} \psi(x)dx\nonumber\\
&=& y\csc \theta e^{-i(\theta-\pi/2)} (\mathcal{H}_{\nu+q, \mu}^{-\theta}x^q \psi)(y).\nonumber
\end{eqnarray}
Continuing $k$ times in a similar manner, we get the required result $(iii)$.
\end{proof}
We shall make use of the following inequality in our present study (see \cite[p.~265]{fried}):
\begin{eqnarray}
{(m+n)}^{q(m+n)}\leq m^{m q} n^{n q}e^{m q}e^{n q}.\label{eq:1.19}
\end{eqnarray}
We shall need the following Leibniz formula from \cite[p.~134]{ze},
\begin{eqnarray}
&&{(t^{-1}D_t)}^{n}[e^{-\frac{i}{2}t^2 \cot \theta} t^{\mu-\nu}f(t)g(t)] \nonumber\\
&=&\displaystyle \sum_{r=0}^{n}\binom{n}{r}{(t^{-1}D_t)}^{r}[e^{-\frac{i}{2}t^2 \cot \theta} t^{\mu-\nu}f(t)] {(t^{-1}D_t)}^{n-r}[g(t)].\label{eq:1.20}
\end{eqnarray}
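For instance, for $n=1$ the formula \eqref{eq:1.20} is simply the product rule for the operator $t^{-1}D_t$:
\begin{eqnarray}
{(t^{-1}D_t)}[e^{-\frac{i}{2}t^2 \cot \theta} t^{\mu-\nu}f(t)g(t)]&=&{(t^{-1}D_t)}[e^{-\frac{i}{2}t^2 \cot \theta} t^{\mu-\nu}f(t)]\, g(t)\nonumber\\
&&+\, e^{-\frac{i}{2}t^2 \cot \theta} t^{\mu-\nu}f(t)\,{(t^{-1}D_t)}[g(t)],\nonumber
\end{eqnarray}
and the general case follows by induction on $n$.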
This article consists of four sections. Section 1 is the introductory part, in which several properties and fundamental definitions are given. In Section 2, the continuous fractional Hankel transform $(\mathcal{H}_{\nu, \mu}^{\theta})$ and its inverse $(\mathcal{H}_{\nu, \mu}^{-\theta})$ are studied on certain Gel'fand-Shilov spaces of type S. Section 3 is devoted to the study of the continuous fractional Hankel wavelet transform on certain Gel'fand-Shilov spaces of type S. In Section 4, the fractional Hankel transform and the wavelet transform associated with the fractional Hankel transform are investigated on ultradifferentiable function spaces.
\section{The fractional Hankel transform $\mathcal{H}_{\nu, \mu}^{\theta}$ on the spaces of type S}
In this section we consider the mapping properties of the fractional Hankel transform $(\mathcal{H}_{\nu, \mu}^{\theta})$ and inverse fractional Hankel transform $(\mathcal{H}_{\nu, \mu}^{-\theta})$ on the spaces $\mathbb{H}_{1, \alpha, A}(I), \mathbb{H}^{2, \beta, B}(I)$ and $\mathbb{H}_{\alpha, A}^{\beta, B}(I)$.
\begin{theorem}
The inverse fractional Hankel transform $(\mathcal{H}_{\nu, \mu}^{-\theta})$ is a continuous linear mapping from $\mathbb{H}_{1, \alpha, A}(I)$ into $\mathbb{H}^{2, 2\alpha, A^2 {(2 e)}^{2\alpha}}(I),$ for $\nu \geq -\frac{1}{2}.$ \label{th2.1}
\end{theorem}
\begin{proof}
Exploiting Lemma \ref{lemma:1.7} $(ii)$ and $(iii)$ we obtain
\begin{eqnarray}
&&M_{\nu+q-1, \mu, \theta}...M_{\nu, \mu, \theta} (\mathcal{H}_{\nu, \mu}^{-\theta}\psi)(y)\nonumber\\
&=& {(\csc \theta e^{i(\theta-\pi/2)})}^q (\mathcal{H}_{\nu+q, \mu}^{-\theta} x^q \psi)(y)\nonumber\\
&=& {(\csc \theta )}^{q-k}y^{-k}{( e^{i(\theta-\pi/2)})}^{q+k} \mathcal{H}_{\nu+q+k, \mu}^{-\theta}(x^q M_{\nu+k-1, \mu, -\theta}...M_{\nu, \mu, -\theta})(y)\nonumber\\
&=& {(\csc \theta )}^{q-k}y^{-k}{( e^{i(\theta-\pi/2)})}^{q+k} C_{\nu+q+k, \mu, -\theta} \int_{0}^{\infty} {(xy \csc \theta)}^{-\mu}J_{\nu+q+k}(xy\csc \theta)\nonumber\\
&& \times {(-1)}^k e^{-\frac{i}{2}y^2 \cot \theta} x^{1+q+\mu+\nu+k} {(x^{-1}D_x)}^k [e^{-\frac{i}{2}x^2 \cot \theta} x^{\mu-\nu} \psi(x)] dx. \nonumber
\end{eqnarray}
Thus,
\begin{eqnarray}
&& {(-1)}^q y^k {(y^{-1}D_y)}^q e^{\frac{i}{2}y^2\cot \theta} y^{\mu-\nu}(\mathcal{H}_{\nu, \mu}^{-\theta}\psi)(y)\nonumber\\
&=& {(-1)}^k{(\csc \theta)}^{\nu+2q-k-\mu} C_{\nu, \mu, -\theta} \int_{0}^{\infty} {(xy \csc \theta)}^{-\nu-q} J_{\nu+q+k}(xy\csc \theta) x^{1+k+2\nu+2q}\nonumber\\
&& \times {(x^{-1}D_x)}^{k} [e^{-\frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}\psi(x)] dx. \label{eq:2.1}
\end{eqnarray}
Now, we choose $m$ to be any natural number such that $m\geq 1+2\nu$, take $n=m+2q+k$, and use the fact that $\vert x^{-\nu-q}J_{\nu+q+k}(x) \vert\leq C$. Writing the integral on the right-hand side of \eqref{eq:2.1} as a sum of two integrals, from $0$ to $1$ and from $1$ to $\infty$, and using \eqref{eq:1.10} and \eqref{eq:1.19}, we have
\begin{eqnarray}
&&\vert y^k {(y^{-1}D_y)}^q [e^{\frac{i}{2}y^2\cot \theta} y^{\mu-\nu}(\mathcal{H}_{\nu, \mu}^{-\theta}\psi)(y)]\vert \nonumber\\
&\leq & C\Big[ \sup \vert y^{2q+k}{(y^{-1}D_y)}^k [e^{-\frac{i}{2}y^2\cot \theta} y^{\mu-\nu} \psi(y) ]\vert \nonumber\\
&& + \sup \vert y^{n+2}{(y^{-1}D_y)}^k [e^{-\frac{i}{2}y^2\cot \theta} y^{\mu-\nu} \psi(y)] \vert \Big]\nonumber\\
&\leq & C\Big[ C_{k}^{'} {(A+\delta)}^{2q+k} {(2q+k)}^{\alpha(2q+k)} + C_{k}^{''} {(A+\delta)}^{m+2q+k+2} \nonumber\\
&& \times {(m+2q+k+2)}^{\alpha(m+2q+k+2)} \Big]\nonumber\\
&\leq & C_1\Big[ 1+ {(A+\delta)}^{m+2q+k+2}{(m+2q+k+2)}^{\alpha(m+2q+k+2)} \Big]\nonumber\\
&\leq & C_{1}^{'} {(A^2+\delta)}^q {(m+k+2)}^{\alpha(m+k+2)}e^{\alpha(m+k+2)} {(2q)}^{\alpha 2q} e^{\alpha 2q}\nonumber\\
&\leq & C_{2}^{'} {(A^2{(2e)}^{2\alpha}+\delta^{'})}^{q} q^{\alpha 2q}.\nonumber
\end{eqnarray}
This completes the proof.
\end{proof}
\begin{remark}\label{remark2.2}
Let $\nu \geq-1/2,$ then the fractional Hankel transform $(\mathcal{H}_{\nu, \mu}^{\theta})$ is a continuous linear mapping from $\mathbb{H}_{1, \alpha, A}(I)$ into $\mathbb{H}^{2, 2\alpha, A^2 {(2 e)}^{2\alpha}}(I)$.
\end{remark}
\begin{definition}
Let $\hat{\mathbb{H}}^{2, \beta, B}(I)$ be the space of all functions $f\in \mathbb{H}^{2, \beta, B}(I)$ satisfying the condition
\begin{eqnarray}
\sup_{0\leq r\leq q} C_{k+r}^{\nu, \mu}= C_{k}^{* \nu, \mu},
\end{eqnarray}
where the $C_{k}^{* \nu, \mu}$ are constants bounding the functions $f$ in $\mathbb{H}^{2, \beta, B}(I)$.
\end{definition}
\begin{theorem}\label{th2.4}
The inverse fractional Hankel transform ($\mathcal{H}_{\nu, \mu}^{-\theta}$) defined by \eqref{eq:1.3} is a continuous linear mapping from $\hat{\mathbb{H}}^{2, \beta, B}(I)$ into $\mathbb{H}_{1, \beta, B}(I)$, for $\nu\geq -1/2$.
\end{theorem}
\begin{proof}
Proceeding as in the proof of Theorem \ref{th2.1} and using \eqref{eq:1.19} and Definition \ref{defi:1.2}, we have
\begin{eqnarray}
&& \vert y^k {(y^{-1}D_y)}^q[ e^{\frac{i}{2}y^2\cot \theta} y^{\mu-\nu}(\mathcal{H}_{\nu, \mu}^{-\theta}\psi)(y)]\vert \nonumber\\
&\leq & D \Big[ \int_{0}^{1} x^{1+k+2\nu+2q} \vert {(xy \csc \theta)}^{-\nu-q} J_{\nu+q+k}(xy\csc \theta) \vert \nonumber\\
&& \times \vert {(x^{-1}D_x)}^{k} [e^{-\frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}\psi(x)] \vert dx\nonumber\\
&& + \int_{1}^{\infty} x^{1+k+2\nu+2q+2} \vert {(xy \csc \theta)}^{-\nu-q} J_{\nu+q+k}(xy\csc \theta) \vert \nonumber\\
&& \times \vert {(x^{-1}D_x)}^{k}[ e^{-\frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}\psi(x)] x^{-2} \vert dx\Big]\nonumber\\
&\leq & D[C_{1+k+2\nu+2q}^{\nu, \mu}+C_{1+k+2\nu+2q+2}^{\nu, \mu}]{(B+\sigma)}^{k}k^{k\beta}\nonumber\\
&\leq & C_{q}^{*~\nu, \mu}{(B+\sigma)}^{k}k^{k\beta}. \nonumber
\end{eqnarray}
This completes the proof.
\end{proof}
\begin{remark}\label{remark2.4}
The fractional Hankel transform ($\mathcal{H}_{\nu, \mu}^{\theta}$) is a continuous linear mapping from $\hat{\mathbb{H}}^{2, \beta, B}(I)$ into $\mathbb{H}_{1, \beta, B}(I)$, for $\nu\geq -1/2$.
\end{remark}
\begin{theorem}
For $\nu \geq -1/2,$ the inverse fractional Hankel transform $(\mathcal{H}_{\nu, \mu}^{-\theta})$ is a continuous linear mapping from $\mathbb{H}_{\alpha, A}^{\beta, B}(I)$ into $\mathbb{H}_{\alpha+\beta, ABe^{\alpha}}^{2\alpha, A^2 {(2e)}^{2\alpha}}(I)$.
\end{theorem}
\begin{proof}
In this case we obtain from \eqref{eq:2.1} and \eqref{eq:1.14},
\begin{eqnarray}
&&\vert y^k {(y^{-1}D_y)}^q [e^{\frac{i}{2}y^2\cot \theta} y^{\mu-\nu}(\mathcal{H}_{\nu, \mu}^{-\theta}\psi)(y)]\vert \nonumber\\
&\leq & C\Big[ \sup \vert y^{2q+k}{(y^{-1}D_y)}^k [e^{-\frac{i}{2}y^2\cot \theta} y^{\mu-\nu} \psi(y) ]\vert \nonumber\\
&& + \sup \vert y^{n+2}{(y^{-1}D_y)}^k [e^{-\frac{i}{2}y^2\cot \theta} y^{\mu-\nu} \psi(y)] \vert \Big]\nonumber\\
&\leq & C\Big[ C_{1}^{\nu, \mu} {(A+\delta)}^{2q+k} {(2q+k)}^{\alpha(2q+k)} {(B+\sigma)}^{k} k^{k\beta} + C_{2}^{\nu, \mu} {(A+\delta)}^{m+2q+k+2} \nonumber\\
&& \times {(m+2q+k+2)}^{\alpha(m+2q+k+2)} {(B+\sigma)}^{k} k^{k\beta}\Big]\nonumber\\
&\leq & C {(A+\delta)}^{m+2q+k+2} {(m+2q+k+2)}^{\alpha(m+2q+k+2)} {(B+\sigma)}^{k} k^{k\beta}.\nonumber
\end{eqnarray}
Now using \eqref{eq:1.19}, the above estimate can be rewritten as
\begin{eqnarray}
&&\vert y^k {(y^{-1}D_y)}^q [e^{\frac{i}{2}y^2\cot \theta} y^{\mu-\nu}(\mathcal{H}_{\nu, \mu}^{-\theta}\psi)(y)]\vert \nonumber\\
&\leq & C {(B+\sigma)}^{k} {(A+\delta)}^k k^{k\beta} {(A+\delta)}^{2q+m+2} {(2 q)}^{2\alpha q} {(k+m+2)}^{\alpha (k+m+2)} \nonumber\\
&& \times e^{2\alpha q} e^{\alpha(k+m+2)}\nonumber\\
&\leq & C^{'} {(AB+\delta_1)}^{k} k^{k(\alpha+\beta)} {(A^2+\delta_2)}^{q} 2^{2\alpha q} q^{2\alpha q} e^{2\alpha q} e^{\alpha k}\nonumber\\
&\leq & C^{''} {(ABe^{\alpha}+\delta_1^{'})}^{k} k^{k(\alpha+\beta)} {\left(A^2 {(2e)}^{2\alpha}+\delta_2^{''}\right)}^{q} q^{2\alpha q}.\nonumber
\end{eqnarray}
Hence the theorem is proved.
\end{proof}
\begin{remark}\label{remark2.6}
For $\nu \geq -1/2,$ the fractional Hankel transform $(\mathcal{H}_{\nu, \mu}^{\theta})$ is a continuous linear mapping from $\mathbb{H}_{\alpha, A}^{\beta, B}(I)$ into $\mathbb{H}_{\alpha+\beta, ABe^{\alpha}}^{2\alpha, A^2 {(2e)}^{2\alpha}}(I)$.
\end{remark}
\section{The fractional wavelet transform on the spaces of type S}
In this section we study the wavelet transform on the spaces of type $S$. In order to discuss the continuity of the fractional wavelet transform $W_{\psi}^{\theta}$ on the aforesaid function spaces, we need to introduce the following function space.
\begin{definition}\label{def:3.1}
The space $\mathbb{W}_{\nu, \mu, \theta}(I)$ consists of all those wavelets $\psi$ which, for all $n \in \mathbb{N}_0$ and $\rho \in \mathbb{R}$, satisfy
\begin{eqnarray}
\left\vert {(t^{-1}D_t)}^{n}[t^{\mu-\nu}e^{\frac{i}{2}t^2 \cot \theta}\overline{\mathcal{H}_{\nu, \mu}^{\theta}(z^{\nu-\mu}e^{-\frac{i}{2}z^2 \cot \theta}\psi)(t)}]\right\vert \leq D^{n, \rho}{(1+t)}^{\rho-n},
\end{eqnarray}
where $D^{n, \rho}$ is a constant.
\end{definition}
\begin{theorem}
Let $\psi$ be a wavelet in $\mathbb{W}_{\nu, \mu, \theta}(I)$. The continuous fractional wavelet transform $W_{\psi}^{\theta}$ is a continuous linear mapping from $\mathbb{H}_{1, \alpha, A}(I)$ into $\mathbb{H}_{1, \tilde{\alpha}, \tilde{A}}(I\times I)$, where $\tilde{\alpha}=(0, 2\alpha)$ and $\tilde{A}=\left(a, A^2{(2e)}^{2\alpha}+a^2\right).$
\end{theorem}
\begin{proof}
From the definition of $W_{\psi}^{\theta}$ in \eqref{eq:1.9} and using \eqref{eq:2.1}, we obtain
\begin{eqnarray}
&&\vert b^k {(b^{-1}D_b)}^q e^{\frac{i}{2}b^2 \cot \theta} b^{\mu-\nu} (W_{\psi}^{\theta}f)(b, a)\vert \nonumber\\
&= & \Big\vert {(\csc \theta)}^{2q+\nu-\mu-k} \int_{0}^{\infty} \omega^{1+2\nu+2q+k} {(b\omega \csc \theta)}^{-\nu-q}J_{\nu+q+k}(b\omega \csc \theta)\nonumber\\
&& \times {(\omega^{-1}D_{\omega})}^k\Big[e^{-\frac{i}{2}\omega^2 \cot \theta} {(a\omega)}^{\mu-\nu}e^{\frac{i}{2}{a^2 \omega^2} \cot \theta} \overline{\mathcal{H}_{\nu, \mu}^{\theta}(z^{\nu-\mu}e^{-\frac{i}{2}z^2 \cot \theta}\psi)(a\omega)}\nonumber\\
&& \times \omega^{\mu-\nu}\tilde{f}^{\theta}(\omega)\Big]d\omega\Big\vert.\nonumber
\end{eqnarray}
Using the fact that $\vert x^{-\nu-q}J_{\nu+q+k}(x)\vert\leq C$ and in view of \eqref{eq:1.20}, the above relation becomes
\begin{eqnarray}
&&\vert b^k {(b^{-1}D_b)}^q e^{\frac{i}{2}b^2 \cot \theta} b^{\mu-\nu} (W_{\psi}^{\theta}f)(b, a)\vert \nonumber\\
&\leq & C \vert{(\csc \theta)}^{2q+\nu-\mu-k}\vert \int_{0}^{\infty} \omega^{1+2\nu+2q+k} \sum_{r=0}^{k}\binom{k}{r}{(\omega^{-1}D_{\omega})}^r\Big[{(a\omega)}^{\mu-\nu}e^{\frac{i}{2}{a^2 \omega^2} \cot \theta}\nonumber\\
&& \times \overline{\mathcal{H}_{\nu, \mu}^{\theta}(z^{\nu-\mu}e^{-\frac{i}{2}z^2 \cot \theta}\psi)(a\omega)}~\Big] {(\omega^{-1}D_{\omega})}^{k-r}\Big[ e^{-\frac{i}{2}\omega^2 \cot \theta}\omega^{\mu-\nu}\tilde{f}^{\theta}(\omega)\Big] d\omega.
\end{eqnarray}
Therefore,
\begin{eqnarray}
&&\vert b^k {(a^{-1}D_a)}^p{(b^{-1}D_b)}^q e^{\frac{i}{2}b^2 \cot \theta} b^{\mu-\nu} (W_{\psi}^{\theta}f)(b, a)\vert \nonumber\\
&\leq & C\sum_{r=0}^{k}\binom{k}{r} \int_{0}^{\infty} \omega^{1+2\nu+2q+k}{(a^{-1}D_a)}^p{(\omega^{-1}D_{\omega})}^r\Big[{(a\omega)}^{\mu-\nu}e^{\frac{i}{2}{a^2 \omega^2} \cot \theta}\nonumber\\
&& \times \overline{\mathcal{H}_{\nu, \mu}^{\theta}(z^{\nu-\mu}e^{-\frac{i}{2}z^2 \cot \theta}\psi)(a\omega)}~\Big] {(\omega^{-1}D_{\omega})}^{k-r}\Big[ e^{-\frac{i}{2}\omega^2 \cot \theta}\omega^{\mu-\nu}\tilde{f}^{\theta}(\omega)\Big] d\omega. \label{eq:3.3}
\end{eqnarray}
Exploiting Definition \ref{def:3.1} with $t=a\omega$, we obtain
\begin{eqnarray}
&& \left\vert {(a^{-1}D_{a})}^{p}{(\omega^{-1}D_{\omega})}^{r} \Big[ {(a\omega)}^{\mu-\nu}e^{\frac{i}{2}a^2\omega^2 \cot \theta} \overline{\mathcal{H}_{\nu, \mu}^{\theta}(z^{\nu-\mu} e^{-\frac{i}{2}z^2 \cot \theta}\psi)}(a\omega)\Big]\right \vert \nonumber\\
&= & \left\vert a^{2r} \omega^{2p}{(t^{-1}D_{t})}^{p+r} \Big[ {t}^{\mu-\nu}e^{\frac{i}{2}t^2 \cot \theta} \overline{\mathcal{H}_{\nu, \mu}^{\theta}(z^{\nu-\mu} e^{-\frac{i}{2}z^2 \cot \theta}\psi)}(t)\Big]\right \vert \nonumber\\
&\leq & \left\vert a^{2r} \omega^{2p} D^{p+r, \rho_1}{(1+t)}^{\rho_1-p-r}\right \vert \nonumber\\
&\leq & \left\vert a^{2r} \omega^{2p} D^{p+r, \rho_1} {(1+a\omega)}^{\rho_1-p-r} \right \vert \nonumber\\
& \leq & \vert a^{2r} \omega^{2p} D^{p+r, \rho_1} {(1+a)}^{\rho_1-p-r}{(1+\omega)}^{\rho_1-p-r}\vert. \label{eq:3.4}
\end{eqnarray}
Using \eqref{eq:3.4} in \eqref{eq:3.3} and taking $\nu_1$ to be any positive integer such that $\nu_1\geq 1+2\nu$, we have
\begin{eqnarray}
&&\vert a^l b^k {(a^{-1}D_a)}^p{(b^{-1}D_b)}^q e^{\frac{i}{2}b^2 \cot \theta} b^{\mu-\nu} (W_{\psi}^{\theta}f)(b, a)\vert \nonumber\\
&\leq & C\sum_{r=0}^{k}\binom{k}{r} a^{2r} \int_{0}^{\infty} \omega^{\nu_1+2q+2p+k} {(1+a)}^{\rho_1-r-p}{(1+\omega)}^{\rho_1-r-p+s} \nonumber\\
&& \times {(\omega^{-1}D_{\omega})}^{k-r}\Big[ e^{-\frac{i}{2}\omega^2 \cot \theta}\omega^{\mu-\nu}\tilde{f}^{\theta}(\omega)\Big] \frac{1}{{(1+\omega)}^s}d\omega \nonumber\\
&\leq & C\sum_{r=0}^{k} \sum_{n=0}^{\rho_1-r-p+s}\binom{k}{r}\binom{\rho_1-r-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-r-p}\nonumber\\
&& \times \sup \left \vert \omega^{\nu_1+2q+2p+k+n}{(\omega^{-1}D_{\omega})}^{k-r}\Big[ e^{-\frac{i}{2}\omega^2 \cot \theta}\omega^{\mu-\nu}\tilde{f}^{\theta}(\omega)\Big] \right\vert \int_{0}^{\infty}\frac{1}{{(1+\omega)}^s}d\omega. \label{eq:3.5}
\end{eqnarray}
Exploiting Remark \ref{remark2.2} and \eqref{eq:1.11}, the right-hand side of the above estimate becomes
\begin{eqnarray}
&\leq & C\sum_{r=0}^{k} \sum_{n=0}^{\rho_1-r-p+s}\binom{k}{r}\binom{\rho_1-r-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-r-p} {\Big(A^2{(2e)}^{2\alpha}+\delta_2\Big)}^ {k-r} \nonumber\\
&& \times {(k-r)} ^{2\alpha(k-r)} \max_{0\leq r\leq k} \vert \vert \tilde{f}^{\theta}\vert \vert^{\nu, \mu, \theta}_{k-r}\nonumber\\
&\leq & C \sum_{n=0}^{\rho_1-p+s}\binom{\rho_1-p+s}{n}a^l {(1+a)}^{\rho_1-p}\sum_{r=0}^{k}\binom{k}{r} {(a^2)^r}{\Big(A^2{(2e)}^{2\alpha}+\delta_2\Big)}^{k-r}k^{k 2\alpha}\nonumber\\
&& \times \max_{0\leq r\leq k} \vert \vert \tilde{f}^{\theta}\vert \vert^{\nu, \mu, \theta}_{k-r}\nonumber\\
&\leq & C^{*} {(1+a)}^{\rho_1-p}a^l {\Big(A^2{(2e)}^{2\alpha}+a^2+\delta_2^{'}\Big)}^k k^{k 2\alpha} \max_{0\leq r\leq k} \vert \vert \tilde{f}^{\theta}\vert \vert^{\nu, \mu, \theta}_{k-r}\nonumber.
\end{eqnarray}
This completes the proof.
\end{proof}
\begin{theorem}
Let $\psi \in \mathbb{W}_{\nu, \mu, \theta}(I)$. The continuous fractional wavelet transform $W_{\psi}^{\theta}$ is a continuous linear mapping from $\hat{\mathbb{H}}^{2, \beta, B}(I)$ into $\hat{\mathbb{H}}^{2, \tilde{\beta}, \tilde{B}}(I\times I)$, where $\tilde{\beta}=(2\beta, 2\beta)$ and $\tilde{B}=\left(\frac{B^2}{a}{(2e)}^{2\beta}, B^2{e}^{2\beta}\right).$
\end{theorem}
\begin{proof}
From the estimate \eqref{eq:3.5} and using \eqref{eq:1.19}, we obtain
\begin{eqnarray}
&&\vert a^l b^k {(a^{-1}D_a)}^p{(b^{-1}D_b)}^q e^{\frac{i}{2}b^2 \cot \theta} b^{\mu-\nu} (W_{\psi}^{\theta}f)(b, a)\vert \nonumber\\
&\leq & C\sum_{r=0}^{k} \sum_{n=0}^{\rho_1-r-p+s}\binom{k}{r}\binom{\rho_1-r-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-r-p}\nonumber\\
&& \times \sup \left \vert \omega^{\nu_1+2q+2p+k+n}{(\omega^{-1}D_{\omega})}^{k-r}\Big[ e^{-\frac{i}{2}\omega^2 \cot \theta}\omega^{\mu-\nu}\tilde{f}^{\theta}(\omega)\Big] \right\vert \int_{0}^{\infty}\frac{1}{{(1+\omega)}^s}d\omega\nonumber\\
&\leq & C\sum_{r=0}^{k} \sum_{n=0}^{\rho_1-p+s}\binom{k}{r}\binom{\rho_1-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-p} {(B+\sigma)}^{\nu_1+2q+2p+k+n}\nonumber\\
&&\times {(\nu_1+2q+2p+k+n)}^{\beta(\nu_1+2q+2p+k+n)} \max \vert \vert \tilde{f}^{\theta}\vert \vert _{(\nu_1+2q+2p+k+n)}^{\nu, \mu, \theta}\nonumber\\
&\leq & C^{'} \sum_{r=0}^{k} \sum_{n=0}^{\rho_1-p+s}\binom{k}{r}\binom{\rho_1-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-p}{(B+\sigma)}^{2p} {(B+\sigma)}^{2q} {(2p)}^{\beta 2p} \nonumber\\
&& \times {(\nu_1+2q+k+n)}^{\beta(\nu_1+2q+k+n)} e^{\beta 2p} e^{\beta(\nu_1+2q+k+n)}\max \vert \vert \tilde{f}^{\theta}\vert \vert _{(\nu_1+2q+2p+k+n)}^{\nu, \mu, \theta}\nonumber\\
&\leq & C^{'} \sum_{r=0}^{k} \sum_{n=0}^{\rho_1-p+s}\binom{k}{r}\binom{\rho_1-p+s}{n} a^{2r+l} {(B^2/a+\sigma_1)}^p {(B+\sigma_2)}^{2q} p^{p 2\beta} q^{q2\beta}2^{\beta 2p} \nonumber\\
&& \times e^{2p\beta} e^{2q\beta}\max \vert \vert \tilde{f}^{\theta}\vert \vert _{(\nu_1+2q+2p+k+n)}^{\nu, \mu, \theta}\nonumber\\
&\leq & C^{'} \sum_{r=0}^{k} \sum_{n=0}^{\rho_1-p+s}\binom{k}{r}\binom{\rho_1-p+s}{n} a^{2r+l} {\left(\frac{B^2}{a}{(2e)}^{2\beta} +\sigma_1^{'}\right) }^p {(B^2e^{2\beta}+\sigma_2^{'})}^q\nonumber\\
&& \times p^{p 2\beta} q^{q2\beta}\max \vert \vert \tilde{f}^{\theta}\vert \vert _{(\nu_1+2q+2p+k+n)}^{\nu, \mu, \theta}.\nonumber
\end{eqnarray}
Hence the theorem is proved.
\end{proof}
\begin{theorem}
Let $\psi \in \mathbb{W}_{\nu, \mu, \theta}(I)$. The continuous fractional wavelet transform $W_{\psi}^{\theta}$ is a continuous linear mapping from $\mathbb{H}^{ \beta, B}_{\alpha, A}(I)$ into $\mathbb{H}^{ \tilde{\beta}, \tilde{B}}_{\tilde{\alpha}, \tilde{A}}(I\times I)$, where $\tilde{\alpha}=(0, 3\alpha+\beta), \tilde{\beta}=\Big(2(\alpha+\beta), 2(\alpha+\beta)\Big)$ and $\tilde{A}=\left(a, (A^2 {(2e)}^{2\alpha}+a^2)ABe^{3\alpha+2\beta}\right)$ and $\tilde{B}=\Big(\frac{1}{a}A^2B^22^{2(\alpha+\beta)}e^{4\alpha+2\beta}, A^2B^2e^{6\alpha+4\beta}2^{2(\alpha+\beta)}\Big).$
\end{theorem}
\begin{proof}
Proceeding as in the proof of the above theorem and in view of Remark \ref{remark2.6}, we have
\begin{eqnarray}
&&\vert a^l b^k {(a^{-1}D_a)}^p{(b^{-1}D_b)}^q e^{\frac{i}{2}b^2 \cot \theta} b^{\mu-\nu} (W_{\psi}^{\theta}f)(b, a)\vert \nonumber\\
&\leq & C\sum_{r=0}^{k} \sum_{n=0}^{\rho_1-r-p+s}\binom{k}{r}\binom{\rho_1-r-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-r-p}\nonumber\\
&& \times \sup \left \vert \omega^{\nu_1+2q+2p+k+n}{(\omega^{-1}D_{\omega})}^{k-r}\Big[ e^{-\frac{i}{2}\omega^2 \cot \theta}\omega^{\mu-\nu}\tilde{f}^{\theta}(\omega)\Big] \right\vert \int_{0}^{\infty}\frac{1}{{(1+\omega)}^s}d\omega\nonumber\\
&\leq & C\sum_{r=0}^{k} \sum_{n=0}^{\rho_1-p+s}\binom{k}{r}\binom{\rho_1-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-p} {(ABe^{\alpha}+\delta)}^{(\nu_1+2q+2p+k+n)} \nonumber\\
&&\times {(\nu_1+2q+2p+k+n)}^{(\alpha+\beta)(\nu_1+2q+2p+k+n)} {\Big(A^2{(2e)}^{2\alpha}+\delta_2\Big)}^{k-r} {(k-r)}^{2\alpha(k-r)}\nonumber\\
&& \times \max \vert \vert f\vert \vert^{\nu, \mu, \theta}.
\end{eqnarray}
Exploiting the relation \eqref{eq:1.19}, the above estimate can be rewritten as
\begin{eqnarray}
&\leq & C^{*} \sum_{n=0}^{\rho_1-p+s} \binom{\rho_1-p+s}{n} a^l {(1+a)}^{\rho_1-p} \sum_{r=0}^{k}\binom{k}{r} {(a^2)}^r {\Big(A^2{(2e)}^{2\alpha}+\delta_2\Big)}^{k-r}\nonumber\\
&& \times {(ABe^{\alpha}+\delta)}^{(\nu_1+2q+2p+k+n)}{(2p)}^{2p(\alpha+\beta)} {(\nu_1+2q+k+n)}^{(\alpha+\beta){(\nu_1+2q+k+n})}\nonumber
\end{eqnarray}
\begin{eqnarray}
&&\times e^{(\alpha+\beta)2p} e^{(\alpha+\beta)(\nu_1+2q+k+n)}k^{2k\alpha}\max \vert \vert f\vert \vert^{\nu, \mu, \theta}\nonumber\\
&\leq &C^{*} \sum_{n=0}^{\rho_1-p+s} \binom{\rho_1-p+s}{n} a^l {(1+a)}^{\rho_1-p} {\Big(A^2{(2e)}^{2\alpha}+a^2+\delta_2\Big)}^k {(2p)}^{2p(\alpha+\beta)} {e}^{2p(\alpha+\beta)}\nonumber\\
&& \times {(ABe^{\alpha}+\delta)}^{(\nu_1+2q+2p+k+n)} e^{(\alpha+\beta)(\nu_1+2q+k+n)} {e}^{2q(\alpha+\beta)} {e}^{(\alpha+\beta)(\nu_1+k+n)}\nonumber\\
&&\times {(2q)}^{2q(\alpha+\beta)} {(\nu_1+k+n)}^{(\alpha+\beta)(\nu_1+k+n)} k^{2k\alpha}\max \vert \vert f\vert \vert^{\nu, \mu, \theta}\nonumber\\
&\leq & C_1\sum_{n=0}^{\rho_1-p+s} \binom{\rho_1-p+s}{n} a^l {(1+a)}^{\rho_1-p} {\Big(A^2{(2e)}^{2\alpha}+a^2+\delta_2\Big)}^k {(2p)}^{2p(\alpha+\beta)}\nonumber\\
&& \times {(ABe^{\alpha}+\delta)}^{(\nu_1+2q+2p+k+n)} e^{2(\alpha+\beta)k} k^{k(3\alpha+\beta)} e^{2p(\alpha+\beta)} e^{4q(\alpha+\beta)} {(2q)}^{2q(\alpha+\beta)}\nonumber\\
&\leq & C_2 \sum_{n=0}^{\rho_1-p+s}\binom{\rho_1-p+s}{n} a^l {\left[ \Big(A^2{(2e)}^{2\alpha}+a^2\Big)ABe^{3\alpha+2\beta} +\delta_3\right]}^k k^{k(3\alpha+\beta)}\nonumber\\
&& \times {\left(\frac{1}{a}A^2B^22^{2(\alpha+\beta)}e^{4\alpha+2\beta} +\delta_4\right)}^p p^{p2(\alpha+\beta)} {\left( A^2B^2e^{6\alpha+4\beta} 2^{2(\alpha+\beta)}+\delta_5 \right)}^q q^{q2(\alpha+\beta)}.\nonumber
\end{eqnarray}
This completes the proof of the theorem.
\end{proof}
\section{Fractional Hankel transform on ultradifferentiable function spaces}
In this section we discuss the fractional Hankel transform on spaces more general than those considered in the previous sections \cite{rs, duran, pandey, marrero}. Assume that $\{ \xi_k\}_{k=0}^{\infty}$ and $\{ \eta_q\}_{q=0} ^ {\infty}$ are two arbitrary sequences of positive numbers possessing the following properties:
\begin{property}\label{property4.1}
\hspace*{.1cm}
\begin{itemize}
\item[[1]] $\xi_k^2 \leq \xi_{k-1}\xi_{k+1}, \forall k \in \mathbb{N}_0,$
\item[[2]] $\xi_k \xi_l \leq \xi_0 \xi_{k+l}, \forall k, l \in \mathbb{N}_0,$
\item[[3]] $\xi_k \leq R H^k \displaystyle \min_{0\leq l\leq k} \xi_l \xi_{k-l}, \forall k, l \in \mathbb{N}_0, R>0, H>0,$
\item[[4]] $\xi_{k+1}\leq RH^k \xi_k, \forall k \in \mathbb{N}_0, R>0, H>0,$
\item[[5]] $\displaystyle \sum_{j=0}^{\infty}\frac{\xi_j}{\xi_{j+1}}<\infty.$
\end{itemize}
\end{property}
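As an illustrative example (not required in what follows), the sequence $\xi_k={(k!)}^2$ satisfies [1]--[5]: [1] holds since $k!\,k!\leq (k-1)!\,(k+1)!$, [2] holds since $k!\,l!\leq (k+l)!$ and $\xi_0=1$, [3] and [4] hold with $R=1$ and $H=4$ because $\binom{k}{l}^2\leq 4^k$ and ${(k+1)}^2\leq 4^k$, and [5] holds because $\sum_{j} {(j+1)}^{-2}<\infty$.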
From the above property [1], we have
\begin{eqnarray}
\frac{\xi_{k}}{\xi_{k+1}}\leq \frac{\xi_{k-1}}{\xi_{k}}\leq \frac{\xi_{k-2}}{\xi_{k-1}}...\leq \frac{\xi_{0}}{\xi_{1}},\nonumber
\end{eqnarray}
and
\begin{eqnarray}
\xi_{k-r}&=&\frac{\xi_{k-r}}{\xi_{k-r+1}} \frac{\xi_{k-r+1}}{\xi_{k-r+2}}...\frac{\xi_{k-1}}{\xi_{k}}\xi_k \nonumber\\
&\leq & {\left(\frac{\xi_{0}}{\xi_{1}}\right)}^r \xi_k. \label{eq:4.1}
\end{eqnarray}
In a very similar way we obtain
\begin{eqnarray}
\eta_{q-r}&\leq & {\left(\frac{\eta_{0}}{\eta_{1}}\right)}^r \eta_q.
\end{eqnarray}
We now introduce the following types of function spaces \cite{rs}.
\begin{definition}
Let $\{ \xi_k\}_{k=0}^{\infty}$ and $\{ \eta_q\}_{q=0} ^ {\infty}$ be any two sequences of positive numbers. An infinitely differentiable complex-valued function $f$ belongs to $\mathbb{H}_{1, \xi_k, A}(I) $ if and only if
\begin{eqnarray}
\left \vert x^k{(x^{-1}D_x)}^q e^{\pm \frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}f(x) \right \vert \leq C_{q}^{\nu, \mu} {(A+\delta)}^k \xi_k, ~~\forall k , q \in \mathbb{N}_{0},
\end{eqnarray}
for some positive constants $A, C_{q}^{\nu, \mu}$ depending on $f$; and $f$ belongs to the space $\mathbb{H}^{2, \eta_q, B}(I)$ if and only if
\begin{eqnarray}
\left \vert x^k{(x^{-1}D_x)}^q e^{\pm \frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}f(x) \right \vert \leq C_{k}^{\nu, \mu} {(B+\sigma)}^q \eta_q, ~~\forall k , q \in \mathbb{N}_{0},
\end{eqnarray}
for some positive constants $B$ and $C_{k}^{\nu, \mu}$ depending on $f$; and the function $f$ is said to be in the space $\mathbb{H}_{\xi_k, A}^{\eta_q, B}(I)$ if and only if
\begin{eqnarray}\label{eq:4.5}
\left \vert x^k{(x^{-1}D_x)}^q e^{\pm \frac{i}{2}x^2 \cot \theta} x^{\mu-\nu}f(x) \right \vert \leq C^{\nu, \mu} {(A+\delta)}^k \xi_k {(B+\sigma)}^q \eta_q,
\end{eqnarray}
$\forall k , q \in \mathbb{N}_{0},$ where $ A, B, C^{\nu, \mu} $ are certain positive constants dependent on $f$.
\end{definition}
The elements of the spaces $\mathbb{H}_{1, \xi_k, A}(I), \mathbb{H}^{2, \eta_q, B}(I)$ and $\mathbb{H}_{\xi_k, A}^{ \eta_q, B}(I)$ are known as ultradifferentiable functions \cite{koma, rs, matsu, rodino}. \\
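We note that, at least formally, the choice $\xi_k=k^{k\alpha}$ and $\eta_q=q^{q\beta}$ recovers the spaces $\mathbb{H}_{1, \alpha, A}(I)$, $\mathbb{H}^{2, \beta, B}(I)$ and $\mathbb{H}^{\beta, B}_{\alpha, A}(I)$ introduced in Section 1, so the present setting can be regarded as a generalization of the earlier one.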
We shall need similar types of function spaces of two variables.
\begin{definition}
The space $\mathbb{H}_{1, \xi_{ml+nk}, \tilde{A}}(I\times I), \tilde{A}=(A_1, A_2)$, is defined to be the collection of all infinitely differentiable functions $f(b, a)$ satisfying, for all $l, k, m, n, p, q \in \mathbb{N}_0$,
\begin{eqnarray}
&& \sup_{a, b} \vert a^l b^k {(a^{-1}D_a)}^p {(b^{-1}D_b)}^q e^{\pm \frac{i}{2}b^2 \cot \theta}b^{\mu-\nu}f(b, a) \vert \nonumber\\
&\leq &C_{p, q}^{\nu, \mu} {(A_1+\delta_1)}^{l} {(A_2+\delta_2)}^{k} \xi_{ml+nk},
\end{eqnarray}
where the constants $A_1, A_2, C_{p, q}^{\nu, \mu}$ depend on $f$.
\end{definition}
\begin{definition}
The space $\mathbb{H}^{2, \eta_{sp+tq}, \tilde{B}}(I\times I), \tilde{B}=(B_1, B_2)$, is defined to be the collection of all functions $f(b, a)\in C^{\infty}(I\times I)$ satisfying, for all $l, k, s, t, p, q \in \mathbb{N}_0$,
\begin{eqnarray}
&& \sup_{a, b} \vert a^l b^k {(a^{-1}D_a)}^p {(b^{-1}D_b)}^q e^{\pm \frac{i}{2}b^2 \cot \theta}b^{\mu-\nu}f(b, a) \vert \nonumber\\
&\leq & C_{l, k}^{\nu, \mu} {(B_1+\sigma_1)}^{p} {(B_2+\sigma_2)}^{q} \eta_{sp+tq},
\end{eqnarray}
where the constants $B_1, B_2, C_{l, k}^{\nu, \mu}$ depend on $f$.
\end{definition}
\begin{definition}
The space $\mathbb{H}_{\xi_{ml+nk}, \eta_{gl+hk}, \tilde{A} }^{\xi_{cp+ dq}, \eta_{sp+tq}, \tilde{B}}(I\times I), \tilde{A}=(A_1, A_2), \tilde{B}=(B_1, B_2)$, is defined to be the collection of all infinitely differentiable functions $f(b, a)$ satisfying, for all $l, k, g, h, s, t, p, q, c, d \in \mathbb{N}_0$,
\begin{eqnarray}
&& \sup_{a, b} \vert a^l b^k {(a^{-1}D_a)}^p {(b^{-1}D_b)}^q e^{\pm \frac{i}{2}b^2 \cot \theta}b^{\mu-\nu}f(b, a) \vert \nonumber\\
&\leq & C^{\nu, \mu} {(A_1+\delta_1)}^{l}{(A_2+\delta_2)}^{k} {(B_1+\sigma_1)}^{p} {(B_2+\sigma_2)}^{q} \xi_{ml+nk} \eta_{gl+hk} \eta_{sp+tq} \xi_{cp+ dq},\nonumber
\end{eqnarray}
where the constants $A_1, A_2, B_1, B_2$ and $C^{\nu, \mu}$ depend on $f$.
\end{definition}
\begin{theorem}\label{th4.6}
If $\{\xi_k\}$ and $\{\eta_q\}$ are sequences satisfying Property \ref{property4.1}, then the inverse fractional Hankel transform $(\mathcal{H}_{\nu, \mu}^{-\theta})$ is a continuous linear mapping from the space $\mathbb{H}_{ \xi_k, A}^{\eta_q, B}(I)$ into $\mathbb{H}^{\xi^2_{q}, B_1}_{\xi_k \eta_k, A_1}(I),$ where $A_1=ABH^2$ and $B_1= A^2H^6.$
\end{theorem}
\begin{proof}
Following the procedure of the proof of Theorem \ref{th2.1}, using Property \ref{property4.1}~[3], and in view of \eqref{eq:4.5}, we have
\begin{eqnarray}
&&\vert y^k {(y^{-1}D_y)}^q [e^{\frac{i}{2}y^2\cot \theta} y^{\mu-\nu}(\mathcal{H}_{\nu, \mu}^{-\theta}\psi)(y)]\vert \nonumber\\
&\leq & C\Big[ \sup \vert y^{2q+k}{(y^{-1}D_y)}^k [e^{-\frac{i}{2}y^2\cot \theta} y^{\mu-\nu} \psi(y) ]\vert \nonumber\\
&& + \sup \vert y^{m+2q+k+2}{(y^{-1}D_y)}^k [e^{-\frac{i}{2}y^2\cot \theta} y^{\mu-\nu} \psi(y)] \vert \Big]\nonumber\\
&\leq & C\Big[ C_1^{\nu, \mu}{(A+\delta)}^{2q+k}\xi_{2q+k} +C_2^{\nu, \mu} {(A+\delta)}^{m+2q+k+2}\xi_{m+2q+k+2}\Big]{(B+\sigma)}^{k}\eta_k\nonumber\\
&\leq & C^{'} {(B+\sigma)}^{k}\eta_k {(H(A+\delta))}^{2q+k}\xi_{2q+k} [1+{(A+\delta)}^{m+2}RH^{m+2}\xi_{m+2}]\nonumber\\
&\leq & C^{''} {(ABH^2+\delta_2)}^k \xi_k \eta_k {(A^2H^6+\delta_3)}^q \xi^2_{q}.\nonumber
\end{eqnarray}
This completes the proof.
\end{proof}
\begin{remark}
Let $\{\xi_k\}$ and $\{\eta_q\}$ be sequences satisfying Property \ref{property4.1}. Then the fractional Hankel transform $(\mathcal{H}_{\nu, \mu}^{\theta})$ is a continuous linear mapping from the space $\mathbb{H}_{ \xi_k, A}^{\eta_q, B}(I)$ into $\mathbb{H}^{\xi^2_{q}, B_1}_{\xi_k \eta_k, A_1}(I),$ where $A_1=ABH^2$ and $B_1= A^2H^6.$
\end{remark}
\begin{theorem}\label{th4.8}
If $\{\xi_k\}$ is a sequence satisfying Property \ref{property4.1}, then for $\nu\geq -1/2$, $\mathcal{H}_{\nu, \mu}^{-\theta}$ is a continuous linear mapping from $\mathbb{H}_{1, \xi_k, A}(I)$ into $\mathbb{H}^{2, \xi_q^2, A_1}(I)$, where $A_1=A^2H^6$.
\end{theorem}
\begin{proof}
Proceeding as in the proof of the above theorem, we have
\begin{eqnarray}
&&\vert y^k {(y^{-1}D_y)}^q [e^{\frac{i}{2}y^2\cot \theta} y^{\mu-\nu}(\mathcal{H}_{\nu, \mu}^{-\theta}\psi)(y)]\vert \nonumber\\
&\leq & C\Big[ \sup \vert y^{2q+k}{(y^{-1}D_y)}^k [e^{-\frac{i}{2}y^2\cot \theta} y^{\mu-\nu} \psi(y) ]\vert \nonumber\\
&& + \sup \vert y^{m+2q+k+2}{(y^{-1}D_y)}^k [e^{-\frac{i}{2}y^2\cot \theta} y^{\mu-\nu} \psi(y)] \vert \Big]\nonumber\\
&\leq & C\Big[ C_{k}^{\nu, \mu} {(A+\delta)}^{2q+k}\xi_{2q+k} + D_k^{\nu, \mu} {(A+\delta)}^{m+2q+k+2}\xi_{m+2q+k+2}\Big]\nonumber\\
&\leq & C^{'} {(A^2H^6+\delta_2)}^q \xi^2_q.\nonumber
\end{eqnarray}
Hence the theorem is proved.
\end{proof}
\begin{remark}
Let $\{\xi_k\}$ be a sequence satisfying Property \ref{property4.1}. Then for $\nu\geq -1/2$, $\mathcal{H}_{\nu, \mu}^{\theta}$ is a continuous linear mapping from $\mathbb{H}_{1, \xi_k, A}(I)$ into $\mathbb{H}^{2, \xi_q^2, A_1}(I)$, where $A_1=A^2H^6$.
\end{remark}
\begin{definition}
Let $\hat{\mathbb{H}}^{2, \eta_{q}, B}(I)$ be the collection of all functions $f \in \mathbb{H}^{2, \eta_{q}, B}(I)$ satisfying the condition
\begin{equation}
\sup_{0\leq r \leq k}C_{k+r}^{\nu, \mu} =C_{k}^{'\nu, \mu},
\end{equation}
where the $C_{k}^{'\nu, \mu}$ are constants bounding the functions $f$ in $\mathbb{H}^{2, \eta_{q}, B}(I)$.
\end{definition}
\begin{theorem} \label{th4.11}
Let $\nu\geq -1/2$ and suppose $\{\eta_q\}$ is a sequence satisfying Property \ref{property4.1}. Then the inverse fractional Hankel transform $\mathcal{H}_{\nu, \mu}^{-\theta}$ is a continuous linear mapping from $\hat{\mathbb{H}}^{2, \eta_{q}, B}(I)$ into $\mathbb{H}_{1, \eta_k, B}(I)$.
\end{theorem}
\begin{proof}
Exploiting \eqref{eq:2.1} and Theorem \ref{th2.4}, we have
\begin{eqnarray}
&&\vert y^k {(y^{-1}D_y)}^q [e^{\frac{i}{2}y^2\cot \theta} y^{\mu-\nu}(\mathcal{H}_{\nu, \mu}^{-\theta}\psi)(y)]\vert \nonumber\\
&\leq & C\Big[ C_{1+2\nu+2q+k}^{\nu, \mu}+ C_{1+2\nu+2q+k+2}\Big]{(B+\sigma)}^k \eta_k\nonumber\\
&\leq & C_q^{*} {(B+\sigma)}^k \eta_k\nonumber.
\end{eqnarray}
This completes the proof.
\end{proof}
\begin{remark}
If $\{\eta_q\}$ is a sequence satisfying Property \ref{property4.1} and $\nu\geq -1/2$, the fractional Hankel transform $\mathcal{H}_{\nu, \mu}^{\theta}$ is a continuous linear mapping from $\hat{\mathbb{H}}^{2, \eta_{q}, B}(I)$ into $\mathbb{H}_{1, \eta_k, B}(I)$.
\end{remark}
\begin{theorem}
Let $\psi$ be a wavelet belonging to the space $\mathbb{W}_{\nu, \mu, \theta}(I)$. If $\{\xi_k\}$ is a sequence satisfying Property \ref{property4.1}, then the fractional Hankel wavelet transform is a continuous linear mapping from $\mathbb{H}_{1, \xi_k, A}(I)$ into $\mathbb{H}_{1, \xi_k^2, \tilde{A}}(I\times I),$ where $\tilde{A}= \left(a, A^2H^6+ \frac{a^2\xi_0^2}{\xi_1^2} \right),$ for $\nu \geq -1/2.$
\end{theorem}
\begin{proof}
From \eqref{eq:3.5} and in view of Theorem \ref{th4.8}, we have
\begin{eqnarray}
&&\vert a^l b^k {(a^{-1}D_a)}^p{(b^{-1}D_b)}^q e^{\frac{i}{2}b^2 \cot \theta} b^{\mu-\nu} (W_{\psi}^{\theta}f)(b, a)\vert \nonumber\\
&\leq & C\sum_{r=0}^{k} \sum_{n=0}^{\rho_1-r-p+s}\binom{k}{r}\binom{\rho_1-r-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-r-p}\nonumber\\
&& \times \sup \left \vert \omega^{\nu_1+2q+2p+k+n}{(\omega^{-1}D_{\omega})}^{k-r}\Big[ e^{-\frac{i}{2}\omega^2 \cot \theta}\omega^{\mu-\nu}\tilde{f}^{\theta}(\omega)\Big] \right\vert \int_{0}^{\infty}\frac{1}{{(1+\omega)}^s}d\omega\nonumber\\
&\leq & C\sum_{r=0}^{k} \sum_{n=0}^{\rho_1-p+s}\binom{k}{r}\binom{\rho_1-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-p} {(A^2H^6+\delta_1)}^{k-r} \nonumber\\
&& \times \xi^2_{k-r} \max_{0\leq r\leq k} \vert \vert \tilde{f}^{\theta}\vert \vert_{k-r}^{\nu, \mu}\nonumber\\
&\leq & C\sum_{r=0}^{k} \sum_{n=0}^{\rho_1-p+s}\binom{k}{r}\binom{\rho_1-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-p} {(A^2H^6+\delta_1)}^{k-r}\nonumber\\
&& \times \xi^2_{k} {(\frac{\xi_0}{\xi_1})}^{2r} \max_{0\leq r\leq k} \vert \vert \tilde{f}^{\theta}\vert \vert_{k-r}^{\nu, \mu}\nonumber\\
&\leq & C \sum_{n=0}^{\rho_1-p+s} \binom{\rho_1-p+s}{n}a^l {(1+a)}^{\rho_1-p} \sum_{r=0}^{k} \binom{k}{r} {(A^2H^6+\delta_1)}^{k-r} {\left(a^2\frac{\xi_0^2}{\xi_1^2}\right)}^{r} \nonumber\\
&&\times \xi^2_{k}\max_{0\leq r\leq k} \vert \vert \tilde{f}^{\theta}\vert \vert_{k-r}^{\nu, \mu}\nonumber\\
&\leq & C^{*} a^l {\left(A^2H^6+ \frac{a^2 \xi^2_0}{\xi^2_1}+\delta_2\right)}^{k} \xi^2_{k}\max_{0\leq r\leq k} \vert \vert \tilde{f}^{\theta}\vert \vert_{k-r}^{\nu, \mu}\nonumber.
\end{eqnarray}
This completes the proof.
\end{proof}
\begin{theorem}
Let $\psi$ be a wavelet taken from $\mathbb{W}_{\nu, \mu, \theta}(I)$. If $\{\eta_q\}$ is a sequence satisfying Property \ref{property4.1}, then the fractional Hankel wavelet transform is a continuous linear mapping from $\hat{\mathbb{H}}^{2, \eta_q, B}(I)$ into $\hat{\mathbb{H}}^{2, \eta_{2p+2q}, \tilde{B}}(I\times I),$ where $\tilde{B}= \left(B^2/a, B^2\right),$ for $\nu \geq -1/2.$
\end{theorem}
\begin{proof}
Proceeding as in the proof of the earlier theorem and exploiting Theorem \ref{th4.11}, we obtain
\begin{eqnarray}
&&\vert a^l b^k {(a^{-1}D_a)}^p{(b^{-1}D_b)}^q e^{\frac{i}{2}b^2 \cot \theta} b^{\mu-\nu} (W_{\psi}^{\theta}f)(b, a)\vert \nonumber\\
&\leq & C\sum_{r=0}^{k} \sum_{n=0}^{\rho_1-r-p+s}\binom{k}{r}\binom{\rho_1-r-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-r-p}\nonumber\\
&& \times \sup \left \vert \omega^{\nu_1+2q+2p+k+n}{(\omega^{-1}D_{\omega})}^{k-r}\Big[ e^{-\frac{i}{2}\omega^2 \cot \theta}\omega^{\mu-\nu}\tilde{f}^{\theta}(\omega)\Big] \right\vert \int_{0}^{\infty}\frac{1}{{(1+\omega)}^s}d\omega\nonumber\\
& \leq & C_1 \sum_{r=0}^{k} \sum_{n=0}^{\rho_1-p+s} \binom{k}{r} \binom{\rho_1-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-p} \max \vert \vert \tilde{f}^{\theta} \vert \vert C_{k-r}^{\nu, \mu} \nonumber\\
&&\times {(B+\sigma)}^{\nu_1+2p+2q+k+n} \eta_{\nu_1+2p+2q+k+n}\nonumber\\
&\leq & C_2 \max \vert \vert \tilde{f}^{\theta} \vert \vert {\left(\frac{B^2}{a}+\sigma_1\right)}^{p} {(B^2+\sigma_2)}^{q} \eta_{\nu_1+2p+2q+k+n} \nonumber.
\end{eqnarray}
Hence the theorem is proved.
\end{proof}
\begin{theorem}
Let $\psi \in \mathbb{W}_{\nu, \mu, \theta}(I)$. If $\{\xi_k\}$ and $\{\eta_q\}$ are sequences satisfying Property \ref{property4.1}, then the fractional Hankel wavelet transform is a continuous linear mapping from $\mathbb{H}_{\xi_k, A}^{ \eta_q, B}(I)$ into $\mathbb{H}_{{\xi^2_{2k}}, \eta_k, \tilde{A}}^{\xi_{2p+2q}, \eta_{2p+2q}, \tilde{B}}(I\times I),$ where $\tilde{A}=\Big(a, ABH^2(A^2H^6+a^2\xi_0^2/\xi_1^2)\Big)$ and $\tilde{B}= ( \frac{1}{a}A^2B^2H^6, A^2B^2H^4),$ for $\nu \geq -1/2.$
\end{theorem}
\begin{proof}
Using Theorem \ref{th4.6} and in view of \eqref{eq:3.5}, we see that
\begin{eqnarray}
&&\vert a^l b^k {(a^{-1}D_a)}^p{(b^{-1}D_b)}^q e^{\frac{i}{2}b^2 \cot \theta} b^{\mu-\nu} (W_{\psi}^{\theta}f)(b, a)\vert \nonumber\\
&\leq & C\sum_{r=0}^{k} \sum_{n=0}^{\rho_1-p+s}\binom{k}{r}\binom{\rho_1-p+s}{n} a^{2r+l} {(1+a)}^{\rho_1-p}\vert \vert \tilde{f}^{\theta}\vert \vert^{\nu, \mu} {(A^2H^6+\delta_2)}^{k-r} \nonumber\\
&& \times \xi_{k-r}^2 {(ABH^2 +\delta_1)}^ {\nu_1+2p+2q+k+n} \xi_{\nu_1+2p+2q+k+n}\eta_{\nu_1+2p+2q+k+n}\nonumber\\
&\leq &C^{'} \sum_{n=0}^{\rho_1-p+s} \binom{\rho_1-p+s}{n} a^l {(1+a)}^{\rho_1-p} \sum_{r=0}^{k} \binom{k}{r} {\Big(\frac{a^2\xi_0^2}{\xi_1^2}\Big)}^r {(A^2H^6+\delta_2)}^{k-r}\nonumber\\
&& \times \vert \vert \tilde{f}^{\theta}\vert \vert^{\nu, \mu} \xi_{k}^2 {(ABH^2 +\delta_1)}^ {\nu_1+2p+2q+k+n} \xi_{\nu_1+2p+2q+k+n}\eta_{\nu_1+2p+2q+k+n}\nonumber\\
&\leq & C_1 a^l {\Big(ABH^2(A^2H^6+a^2\xi_0^2/\xi_1^2)+\delta_3\Big)}^k {(\frac{1}{a}A^2B^2H^6+\delta_4)}^p {(A^2B^2H^4+\delta_5)}^q\nonumber\\
&&\times\vert \vert \tilde{f}^{\theta}\vert \vert^{\nu, \mu} \xi_{k}^2 \xi_{\nu_1+2p+2q+k+n}\eta_{\nu_1+2p+2q+k+n}\nonumber.
\end{eqnarray}
Now using inequalities [2] and [3] from Property \ref{property4.1}, the last expression can be rewritten as
\begin{eqnarray}
&\leq&C_1 a^l {\Big(ABH^2(A^2H^6+a^2\xi_0^2/\xi_1^2)+\delta_3\Big)}^k {\left(\frac{1}{a}A^2B^2H^6+\delta_4\right)}^p {(A^2B^2H^4+\delta_5)}^q\nonumber\\
&&\times \vert \vert \tilde{f}^{\theta}\vert \vert^{\nu, \mu}\xi_{2k}^2 \eta_k \xi_{2p+2q}\eta_{2p+2q}\nonumber.
\end{eqnarray}
This completes the proof.
\end{proof}
\end{document}
\begin{document}
\makeatletter
\def\@ifnextchar[{\fmsl@sh}{\fmsl@sh[0mu]}{\@ifnextchar[{\fmsl@sh}{\fmsl@sh[0mu]}}
\def\fmsl@sh[#1]#2{
\mathchoice
{\@fmsl@sh\displaystyle{#1}{#2}}
{\@fmsl@sh\textstyle{#1}{#2}}
{\@fmsl@sh\scriptstyle{#1}{#2}}
{\@fmsl@sh\scriptscriptstyle{#1}{#2}}}
\def\@fmsl@sh#1#2#3{\m@th\ooalign{$\hfil#1\mkern#2/\hfil$\crcr$#1#3$}}
\makeatother
\thispagestyle{empty}
\begin{titlepage}
\boldmath
\begin{center}
\Large {\bf Transformation properties and entanglement of relativistic qubits under space-time and gauge transformations}
\end{center}
\unboldmath
\begin{center}
{ {\large Xavier Calmet$^*$}\footnote{[email protected]} {\large and} {\large Jacob Dunningham\footnote{[email protected]}} }
\end{center}
\begin{center}
{\sl Department of Physics $\&$ Astronomy,
University of Sussex, Brighton, BN1 9QH, United Kingdom
}
\end{center}
\begin{abstract}
\noindent
We revisit the properties of qubits under Lorentz transformations and, by considering Lorentz invariant quantum states in the Heisenberg formulation, clarify some misleading notation that has appeared in the literature on relativistic quantum information theory. We then use this formulation to consider the transformation properties of qubits and density matrices under space-time and gauge transformations. Finally we use our results to understand the behaviour of entanglement between different partitions of quantum systems. Our approach not only clarifies the notation, but provides a more intuitive and simple way of gaining insight into the behaviour of relativistic qubits. In particular, it allows us to greatly generalize the results in the current literature as well as substantially simplifying the calculations that are needed.
\end{abstract}
\end{titlepage}
\section{Introduction}
Entanglement is one of the most fundamental phenomena in quantum physics and underpins the rapidly growing research areas of quantum information and quantum technology \cite{nielsen_chuang, Dowling}. Motivated by its fundamental importance as well as promises of new applications, the theory of entanglement has received a lot of attention over the past three decades. To date, these studies have focused almost exclusively on the realm of nonrelativistic quantum mechanics. However, more recently attention has turned to relativistic treatments, both as a more complete description of the underlying physics and to understand any new features or behaviours that this more general framework reveals.
A growing body of research on different quantum systems has uncovered a wealth of results about how relativity affects entanglement \cite{czachor_einstein-podolsky-rosen-bohm_1997, gingrich, Peres:2002wx, dunningham2009,peres_quantum_2002, alsing_lorentz_2002, ahn2003a, ahn2003b, czachor_relativistic_2003, moon_relativistic_2004, lee_quantum_2004, czachor_comment_2005, lamata_relativity_2006, alsing_entanglement_2006, chakrabarti_entangled_2009, fuentes_entanglement_2010, Friis:2009va}. A clearer picture is emerging of relativistic quantum information theory and how qubits are transformed and entanglement is affected in different frames.
However, some misleading notation has propagated in the literature, which we want to address. Clarifying the notation helps us understand the physical situation and gives much better intuition into the behaviour of these systems.
In this paper, we begin by reviewing the transformations of qubits under Lorentz and gauge transformations. We then extend this to density matrices and finally consider the transformation of entanglement in different rest frames. Our approach reveals a more intuitive and simple way of understanding the behaviour of different entanglement partitions under Lorentz boosts. This new approach is checked against existing work \cite{Friis:2009va} and shown to be consistent with it. However, significantly, we show that it extends far beyond it. Whereas previous work considered limited and somewhat artificial quantum states with a particular geometry, we are able to consider very general systems with arbitrary boost geometries, general momenta, multiple qubits and multi-level quantum systems (qudits). The added advantage of our approach is that, despite being more general, the calculations are very much simpler. This formalism could also be extended to different measures of entanglement or other interesting observables that could give further insight into relativistic quantum information theory.
\section{Lorentz and gauge transformations}
In the relativistic quantum information theory literature, a relativistic ket for a single particle is typically defined by \cite{gingrich,Peres:2002wx}
\begin{eqnarray} \label{defKet}
|\Psi \rangle =\sum_\sigma \int_{-\infty}^\infty d\mu(p) \psi_\sigma |\sigma,p \rangle,
\end{eqnarray}
where $\sigma$ and $p$ are respectively the spin and momentum, $\psi_\sigma=\langle \sigma,p|\Psi \rangle$ is the wave function, and $|\sigma,p \rangle$ is a one-particle state.
The Lorentz invariant measure is given by \cite{Weinberg:1995mt}
\begin{eqnarray}
d\mu(p) =\frac{d^4 p}{(2\pi)^3} \delta(p^2-m^2) \theta(p^0) \rightarrow \frac{d^3 p}{(2\pi)^3 2 p^0},
\end{eqnarray}
where $\theta(p^0)$ is the Heaviside function and the last step holds because, as we shall see shortly, $\psi_\sigma |\sigma,p \rangle$ is a Lorentz invariant function $f (p)$ of the four-momentum $p_\mu$ of a particle of mass $m$ with positive energy $p^0>0$. Let us first observe that
$|\Psi \rangle$ does not carry any space-time index: on the right-hand side of (\ref{defKet}) one sums over the spin and integrates over the four-momentum. It is thus a Lorentz scalar, as expected when using the Heisenberg picture. It is worth emphasizing that the definition given by (\ref{defKet}) only makes sense if one uses the Heisenberg picture. The ket $|\Psi \rangle$ is a Lorentz scalar: it does not carry any Lorentz or space-time index and is thus invariant under Lorentz transformations.
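For completeness, the reduction of the measure quoted above is the standard mass-shell identity
\begin{eqnarray}
\delta(p^2-m^2)\,\theta(p^0)=\frac{1}{2p^0}\,\delta\!\left(p^0-\sqrt{\vec{p}^{\,2}+m^2}\right),
\end{eqnarray}
so that carrying out the $p^0$ integration in $d^4 p$ leaves $d^3 p/(2p^0)$ evaluated on the mass shell $p^0=\sqrt{\vec{p}^{\,2}+m^2}$.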
In the literature (see e.g. \cite{gingrich,Peres:2002wx, dunningham2009}) one often finds the misleading notation $|\Psi \rangle^\prime=U(\Lambda) |\Psi \rangle$ where $U(\Lambda)$ is a Lorentz transformation. This ket cannot be transformed in this way because it is a Lorentz scalar and thus invariant. On the other hand the wave-function $\psi_\sigma$ and the one-particle state $|\sigma,p \rangle$ are Lorentz transforming quantities. For a spin $1/2$ particle, one has
\begin{eqnarray}
U(\Lambda) |\sigma,p \rangle = \sum_\lambda D_{\lambda,\sigma}[W(\Lambda,p)]|\lambda, \Lambda p \rangle
\end{eqnarray}
under a Lorentz transformation. The transformation of the wave function can be found from the invariance of the ket $|\Psi \rangle$ as follows
\begin{eqnarray}
\psi_\sigma^\prime(p^\prime)=(\langle \sigma,p|)^\prime |\Psi \rangle
&=&\sum_\lambda D^{\dagger}_{\lambda,\sigma} [W(\Lambda,p)]
\langle \lambda, \Lambda p|\Psi \rangle \\ \nonumber
&=&\sum_\lambda D^{-1}_{\lambda,\sigma} [W(\Lambda,p)]
\langle \lambda, \Lambda p|\Psi \rangle \\ \nonumber
&=&
\sum_\lambda D_{\lambda,\sigma} [W(\Lambda^{-1},p)]
\langle\lambda, \Lambda p|\Psi \rangle \\ \nonumber
&=&
\sum_\lambda D_{\lambda,\sigma} [W(\Lambda^{-1},p)]
\psi_\lambda(\Lambda p).
\end{eqnarray}
It is straightforward to generalize this to a two-particle system:
\begin{eqnarray}
\psi_{\sigma_1^\prime,\sigma_2^\prime}(p^\prime,q^\prime)= \sum_{\lambda_1,\lambda_2} D_{\lambda_1,\sigma_1}[W(\Lambda^{-1},p)] D_{\lambda_2,\sigma_2} [W(\Lambda^{-1},q)]
\psi_{\lambda_1,\lambda_2}(\Lambda p,\Lambda q).
\end{eqnarray}
We can see that the bras and the kets, besides being Lorentz invariant, are also gauge invariant \cite{Calmet:2012pn}. If we consider quantum electrodynamics, i.e. a local gauge transformation, the wave-function of the spinor field transforms as $\psi^\prime(x)=\exp ( i \alpha(x)) \psi(x)$ and the gauge potential as $A_\mu^\prime(x)= A_\mu(x) -1/e \partial_\mu \alpha(x)$ where $e$ is the electric charge. The one-particle state $|\sigma,p\rangle$ transforms as $|\sigma,p\rangle^\prime = \exp ( -i \alpha(x)) |\sigma,p\rangle$ and we thus see that the ket $|\Psi \rangle$ is invariant under gauge transformations.
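Explicitly, combining the two transformation rules just quoted, each term in (\ref{defKet}) transforms as $\psi^\prime_\sigma |\sigma,p\rangle^\prime = e^{i\alpha(x)}\psi_\sigma\, e^{-i\alpha(x)}|\sigma,p\rangle=\psi_\sigma|\sigma,p\rangle$, so the local phases cancel term by term.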
We now turn our attention to the density matrix. We begin by defining this object and then consider its transformation properties. We follow the presentation in Feynman's book on statistical mechanics \cite{Feynman}. In particular, it is important to realize in which space it is defined. Let the variable $x$ describe the coordinates of the system and $y$ the rest of the universe. The most general wave function can then be written as $\Psi(x,y)=\sum_i C_i(y) \phi_i(x)$.
Using Dirac notation we introduce $\{|\phi_i\rangle\}$, a complete set of vectors in the vector space describing the system (the complete set is Lorentz invariant in $x$-space), and $\{|\theta_i\rangle\}$, a complete set for the rest of the universe (which is Lorentz invariant in $y$-space): $\phi_i(x)=\langle x |\phi_i\rangle$ and $\theta_i(y)=\langle y|\theta_i\rangle$. The most general wave-function is thus
\begin{eqnarray}
\psi(x,y)=\langle y |\langle x |\psi\rangle= \sum_{i,j} C_{i,j}\langle x |\phi_i\rangle \langle y|\theta_j\rangle
\end{eqnarray}
where $C_{i,j}$ are constant complex numbers. Under Lorentz transformations, $\psi(x,y)$ transforms as $\psi(x^\prime,y^\prime)=U(\Lambda_x)U(\Lambda_y) \psi(x,y)$. The density matrix is defined by
\begin{eqnarray}
\rho_{i^\prime i} =\sum_j C_{i,j}^\star C_{i^\prime,j}
\end{eqnarray}
$\rho_{i^\prime i}$ are in $\mathbb{C}$ and are thus scalars under Lorentz transformations -- they do not transform. The density matrix operator $\rho$ is defined such that
\begin{eqnarray} \label{rho}
\rho_{i^\prime i} = \langle\phi_i^\prime|\rho|\phi_i\rangle
\end{eqnarray}
and since $\rho$ only operates on the system described by $x$, it is also a Lorentz invariant quantity.
One can also introduce an $x$-representation for the density matrix
\begin{eqnarray}
\rho(x^\prime,x) \equiv \langle x^\prime |\rho |x \rangle = \int \psi(x^\prime,y) \psi^\star(x,y) d\mu(y)
\end{eqnarray}
and in momentum space:
\begin{eqnarray}
\rho_{\sigma^\prime(p^\prime),\sigma(p)}(p^\prime,p) \equiv \langle \sigma^\prime(p^\prime), p^\prime |\rho | \sigma(p), p \rangle = \int \psi(p^\prime,q) \psi^\star(p,q) d\mu(q).
\end{eqnarray}
It is straightforward to check that under a Lorentz transformation one has $\rho(x^\prime,y^\prime)=U(\Lambda_x)\rho(x,y)U(\Lambda_y)^\dagger$ and $\rho(q^\prime,p^\prime)=U(\Lambda_p) \rho(q,p)U(\Lambda_q)^\dagger$.
For fermions, the density matrix considered in Eq. (\ref{rho}) can be expressed as
\begin{eqnarray} \label{DM}
\rho=\sum_{\sigma_1(p),\sigma_2(q)} \int \int d\mu(p) d\mu(q)
\rho_{\sigma_1(p),\sigma_2(q)}(p,q) |p,\sigma_1(p) \rangle \langle q,\sigma_2(q)|
\end{eqnarray}
which is a Lorentz scalar. One can obtain a reduced density matrix by considering
\begin{eqnarray}
\int d\mu(r) \langle r| \rho|r \rangle=\sum_{\sigma_1(p),\sigma_2(q)} \int d\mu(r)
\rho_{\sigma_1(p),\sigma_2(q)}(r,r) |\sigma_1 (p)\rangle \langle \sigma_2(q)| \label{oneparticlered}
\end{eqnarray}
where $r$ is a momentum. This reduced density matrix is, however, not Lorentz invariant as $|r \rangle$ does not have well-defined Lorentz transformation properties on its own. This observation agrees with a body of literature that has considered single particles with spin and shown that under Lorentz transformations a partial trace over momentum or spin (and hence the entanglement between momentum and spin) is not invariant\cite{peres_quantum_2002, gingrich, Palge2011a, Palge2012a}. To get a Lorentz invariant, one would have to trace over a complete spinor such as $|\sigma,r \rangle$ which as explained above transforms as $ U(\Lambda) |\sigma,r \rangle = \sum_\lambda D_{\lambda,\sigma}[W(\Lambda,r)]|\lambda, \Lambda r \rangle$, i.e. as a spinor under Lorentz transformations. Note that the transformation involves a sum over the spin index and it is not possible to factorize the spin and momentum transformations since the rotation matrix $D_{\lambda,\sigma}[W(\Lambda,r)]$ depends both on the spin and the momentum. A Lorentz invariant reduced density matrix can be obtained by calculating
\begin{eqnarray}
\sum_\lambda \int d\mu(r) \langle r, \lambda(r)| \rho|r, \lambda(r)\rangle=\sum_\lambda \int d\mu(r) \rho_{\lambda,\lambda}(r,r)=1
\end{eqnarray}
which is however trivial.
\section{Entanglement under Lorentz transformations}
In relativity, fundamental quantities are observer independent; however, there are many things that may be of interest that are not Lorentz invariant. Entanglement is one such example of particular note for quantum information theory. Entanglement is not a fundamental property of a system but depends on the choice of partition into subsystems, and studies have shown that entanglement can be Lorentz invariant for some partitions and not others \cite{Friis:2009va}. We now consider this in the context of the above discussion.
There are different ways of quantifying entanglement \cite{vedral1998}, but for our purposes we define the entanglement between the $i$-th part of the system and the rest of the system to be
\begin{eqnarray}
E_i(\rho) = 1- \mbox{Tr} \rho_i^2,
\end{eqnarray}
where $\rho_i$ is obtained by tracing over all subsystems except the $i$-th. If the $i$-th subsystem is a pure state and not entangled with the rest of the system then the trace over the rest of the system will not affect it, i.e. it will remain in a pure state and hence $E_i(\rho)=0$ as expected. $E_i(\rho)$ will increase as $i$ is increasingly entangled with the rest of the system. We can also sum up the entropies found for different partitions as in \cite{Friis:2009va}
\begin{eqnarray}
E(\rho) = \sum_i (1- \mbox{Tr} \rho_i^2) \label{entropysum}.
\end{eqnarray}
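As a simple nonrelativistic illustration of this measure, consider two spin qubits in the Bell state $(|\uparrow \downarrow\rangle+|\downarrow \uparrow\rangle)/\sqrt{2}$: tracing over either qubit gives $\rho_i=\frac{1}{2}\mathbb{I}$, so $\mbox{Tr} \rho_i^2=1/2$ and $E_i(\rho)=1/2$, the maximal value for a qubit, whereas for a product state $E_i(\rho)=0$.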
The sum $E(\rho)$ is not necessarily Lorentz invariant since, as discussed above, the reduced density matrix $\rho_i$ is not necessarily a Lorentz scalar, but will depend on the choice of spins or momenta that have been traced or summed over. The amount of entanglement between the different states is thus frame dependent.
A similar observation was made in, e.g. \cite{Friis:2009va}, however as we shall see our formulation gives a simpler and more intuitive way of seeing this. In particular, our approach does not require lengthy calculations.
We begin by extending our considerations to many particle states, which can be done straightforwardly. For example for a two-particle state, the density matrix becomes
\begin{eqnarray} \label{DM2}
\rho&=&\sum_{\sigma_1(p_1),\sigma_2(p_2),\sigma_3(p_3),\sigma_4(p_4)} \int\int\int\int d\mu(p_1) d\mu(p_2)d\mu(p_3) d\mu(p_4)
\nonumber \\ && \rho_{\sigma_1(p_1),\sigma_2(p_2),\sigma_3(p_3),\sigma_4(p_4)}(p_1,p_2,p_3,p_4) |p_1,\sigma_1(p_1), p_2,\sigma_2(p_2) \rangle \langle p_3,\sigma_3(p_3), p_4,\sigma_4(p_4)| \label{twoparticlerho}
\end{eqnarray}
which is a Lorentz scalar. We are now free to perform different contractions of the two-particle density matrix -- these correspond to finding the entanglement entropies for different partitions according to Eq.~(\ref{entropysum}). It is now clear what partitions will lead to Lorentz invariant entanglements: since the total density matrix $\rho$ is invariant, overall Lorentz invariance will be preserved so long as $\rho$ is sandwiched with a Lorentz invariant term. For example, the reduced density matrix
\begin{eqnarray}
\rho_{red}&=& \int d\mu(p_5) \sum_{\sigma_5(p_5)}\langle p_5,\sigma_5(p_5)| \rho | p_5,\sigma_5(p_5)\rangle \label{particle_partition}
\end{eqnarray}
is Lorentz invariant independently of the choice made for the contraction of the momenta or spin, since the trace is taken over complete spinors $|p_5,\sigma_5(p_5)\rangle$ in a covariant way, as discussed in Section~2 above.
On the other hand, if we decide to sandwich $\rho$ with a vector momentum $a$ (as in Eq.~(\ref{oneparticlered})) or spin $\sigma$, we would obtain a reduced density matrix which is frame dependent, e.g.
\begin{eqnarray}
\rho_{red}&=& \langle a| \rho |a\rangle \label{atrace}
\end{eqnarray}
is not Lorentz invariant. The same would apply if we integrated over $a$, but did not sum over $\sigma$. We thus have a very simple criterion to check whether a given reduced density matrix is Lorentz invariant or not: the reduction of the density matrix must be done in a covariant way.
We can compare this general result directly with the specific two-particle case considered in \cite{Friis:2009va}. In that work the authors considered the case of two spin-half particles each with one of two possible delta-function values of momentum $\{p_+,p_-\}$ that are equal and opposite in the $z$-direction. Their initial state was
\begin{eqnarray}
|\psi_{\rm total}\rangle = (\cos\alpha |p_+,p_-\rangle +\sin\alpha |p_-,p_+\rangle)(\cos\beta|\uparrow \downarrow\rangle + \sin\beta|\downarrow \uparrow\rangle), \label{friis_state}
\end{eqnarray}
where $\alpha$ and $\beta$ are real numbers and the kets respectively represent the momentum of particles 1 and 2 and the spins of particles 1 and 2. This can be considered as a four qubit state with two spin and two momentum qubits. We note that this is a very specific (and somewhat artificial) state with spin-1/2 particles in a particular initial state and momentum delta functions in the $z$-direction; we shall see shortly how our formulation can handle general situations.
They then subjected the overall state (\ref{friis_state}) to a specific Lorentz transformation in the $x$ direction and performed detailed calculations of the entanglement of this transformed state using (\ref{entropysum}) for different partitions to determine, among other things, which partitions correspond to Lorentz invariant entanglements. The partitions they considered were: 1) any one of the qubits with the other three; 2) the two spin qubits and the two momentum qubits; 3) the particle-particle partition, i.e. the momentum and spin of one particle and the momentum and spin of the other.
We can easily analyse these cases without any need for a calculation. The density matrix is
\begin{eqnarray}
\rho = |\psi_{\rm total}\rangle \langle \psi_{\rm total} |,
\end{eqnarray}
and we can use this as the density matrix in our formulation above. For partition 1, we need to trace over any one of the spin or momentum qubits. As argued in Eq.~(\ref{atrace}) and the text below it, this leads to a reduced density matrix that is not Lorentz invariant, and hence neither is the entanglement. Similarly for partition 2, we need to trace over both spins or both momenta. Tracing over the spins, the reduced (momentum) density matrix is
\begin{eqnarray}
\rho_{\rm mom}&=& \sum_{\sigma_5(p_5), \sigma_6(p_6)}\langle \sigma_5(p_5), \sigma_6(p_6)| \rho |\sigma_5(p_5), \sigma_6(p_6)\rangle, \label{spin_momentum}
\end{eqnarray}
which is not Lorentz invariant. The final partition considered in \cite{Friis:2009va} is partition 3, which is the particle-particle partition and the reduced density matrix for this is given by Eq.~(\ref{particle_partition}) above, which we have shown to be Lorentz invariant. In fact, we can see that this is the only Lorentz invariant partition.
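The following sketch, again purely illustrative, builds the four-qubit state of Eq.~(\ref{friis_state}) in its rest frame and evaluates the linear entropy of the three partitions discussed above. The Lorentz boost (Wigner rotation) itself is not implemented, and the qubit ordering and angle values are assumptions made for the example.
\begin{verbatim}
import numpy as np

a, b = 0.3, 0.7                                   # the angles alpha, beta (illustrative values)
c0, c1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

mom  = np.cos(a) * np.kron(c0, c1) + np.sin(a) * np.kron(c1, c0)   # cos a |p+ p-> + sin a |p- p+>
spin = np.cos(b) * np.kron(c0, c1) + np.sin(b) * np.kron(c1, c0)   # cos b |up dn> + sin b |dn up>
psi  = np.kron(mom, spin)                         # qubit order: m1, m2, s1, s2
rho  = np.outer(psi, psi.conj())

def reduced(rho, keep, n=4):
    # Partial trace keeping only the qubits listed in `keep`.
    rho = rho.reshape([2] * (2 * n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=q, axis2=q + rho.ndim // 2)
    d = 2 ** len(keep)
    return rho.reshape(d, d)

E = lambda r: 1 - np.real(np.trace(r @ r))        # linear entropy of a reduced state

print(E(reduced(rho, [0])))        # partition 1: one momentum qubit vs the rest
print(E(reduced(rho, [0, 1])))     # partition 2: the two momenta vs the two spins
print(E(reduced(rho, [0, 2])))     # partition 3: particle 1 vs particle 2
\end{verbatim}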
All of these results are consistent with the findings in \cite{Friis:2009va}. However, there they considered a specific state and a specific boost and carried out detailed calculations of the entanglement for each different partition. Only by looking at the results of these calculations were they able to comment on whether different partitions had Lorentz invariant entanglements. By contrast, our formulation is much simpler and can be applied to general states and boosts since it does not rely on the details of the state or geometry. It gives a clear intuition for which partitions are invariant and why, without performing any calculation. It can also easily be extended to other partitions not considered in \cite{Friis:2009va}, such as a trace over the momentum of one particle and the spin of the other. The reduced density matrix in this case is
\begin{eqnarray}
\rho_{red}&=& \int d\mu(p_6) \sum_{\sigma_5(p_5)}\langle p_6,\sigma_5(p_5)| \rho | p_6,\sigma_5(p_5)\rangle,
\end{eqnarray}
which can be seen to be not Lorentz invariant, without any direct calculation. If we were to follow \cite{Friis:2009va}, this conclusion would require a lengthy calculation and, even then, would only be true for the particular initial state and particular boost geometry chosen. Similarly, we could easily extend our method to consider more than two qubits, states with $d$ spin levels (qudits), and general momentum wave packets (i.e. not delta functions). Analyzing these cases using previous methods would be very cumbersome, but we are able to immediately see that it is again just the particle partition that is Lorentz invariant. This illustrates the power and versatility of this formalism.
By considering quantum states in the Heisenberg formulation, we have not only been able to clarify some misleading notation and inaccurate statements, but also to provide a simple and useful method for understanding the behaviour of relativistic qubits. This treatment is a good example of how the right notation can be more than simply a convenient way of keeping track of a calculation: it can aid physical insight and understanding. Our formulation enables a much simpler way of drawing general conclusions about the entanglement of quantum systems and could be a useful tool in furthering our understanding of relativistic quantum information theory.
\end{document}
|
\begin{document}
\title{Classical Fisher Information in Quantum Metrology -- Interplay of Probe, Dynamics and Measurement}
\author{Gabriel A.\ Durkin} \email{[email protected]}
\affiliation{Quantum Laboratory, NASA Ames Research Center, Moffett Field, California 94035, USA}
\date{\today}
\pacs{42.50.St,42.50.Dv,03.65.Ud,06.20.Dk}
\begin{abstract} We introduce a positive Hermitian operator, the Fisher operator, and use it to examine a measurement process incorporating unitary dynamics and complete measurements. We develop the idea of information complement, the minimization of which establishes the optimal precision for a fixed input. The formalism demonstrates that, in general, the classical Fisher Information has the Hamiltonian semi-norm as an upper bound. This is achievable with a qubit probe and only projective measurements, and is independent of the true value of the estimated parameter. In an interferometry context, we show that an optimal measurement scheme can be constructed from linear optics and photon counting, without recourse to generalised measurements or exotic unitaries outside of $SU(2)$. \end{abstract}
\maketitle
Scientists strive for an understanding of Nature by a physical interaction introducing correlations between observer and observed, and the process is called measurement. Quantum mechanics exerts fundamental limitations on the precision of any measurement \cite{Luis}, yet for a quantum observable there is still a classical probability distribution over measurement outcomes. This distribution may depend on some real-valued system parameter $\theta$ such as an interaction time or phase angle. It may have no associated Hermitian observable, nor be measurable directly -- but its estimation may be the true goal of the measurement. Inferring $\theta$ from frequencies of measurement outcomes is a conventional challenge in classical information theory with a well-established methodology \cite{Cover and Thomas}. From this classical underpinning we will show a quantum formalism emerges directly and without recourse to the quantum Fisher Information \cite{Metrology,MetrologyII} or a linear error propagation model \cite{HLimited-Interfer-Decorr,Yurke-SU2-Interfer}. Those approaches focus on the geometry of the state in Hilbert space and have a long distinguished history stretching at least as far back as 1976 \cite{Helstrom}. In contrast, the formulation in this paper is self-contained and incorporates all three instrument components -- input, dynamics and measurement choice -- on an equal footing. Our conclusions rely on no other results or assumptions than those presented herein.
Consider a maximal test \cite{Peres} having outcomes labelled $k$ and an associated probability distribution $P(\theta)=\{p_k (\theta)\}$ that depends on a continuous real parameter $\theta$. Classical Fisher information \cite{Cramer-Fisher} is defined as
\begin{equation}
\mathcal{J}(\theta) = \sum_{k} p_k (\theta)\left( \frac{\partial }{\partial \theta} \ln p_k (\theta) \right)^2 \; . \label{F1}
\end{equation}
This functional is a measure of the information contained in the distribution $P(\theta)$ about the parameter $\theta$ \cite{Cover and Thomas}. It provides the sole distance metric in probability space \cite{Fisher-unique-metric} and the scaling factor in local distinguishability \cite{Durkin-Dowling}. An explicit lower bound on the standard error of an unbiased estimate $\tilde{\theta}$ on the true phase $\theta$ is given by the reciprocal of the Fisher information, $(\delta \tilde{\theta})^2 \geq 1 / \mathcal{J}(\theta) $, called the Cram\'{e}r-Rao bound \cite{Cramer-Fisher}. Therefore, for optimal precision one maximizes the Fisher information \cite{Braunstein-knee}.
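As a concrete, self-contained illustration of Eq.~\eqref{F1} and the Cram\'{e}r-Rao bound, the sketch below evaluates $\mathcal{J}(\theta)$ by a central finite difference for a toy outcome distribution (a qubit rotated by $\theta$ and measured in a fixed basis); the distribution chosen is an assumption made only for this example.
\begin{verbatim}
import numpy as np

def fisher_information(p, theta, eps=1e-6):
    # Classical Fisher information of the outcome distribution p(theta),
    # with the theta-derivative taken by central differences.
    pk  = np.asarray(p(theta))
    dpk = (np.asarray(p(theta + eps)) - np.asarray(p(theta - eps))) / (2 * eps)
    return np.sum(dpk ** 2 / pk)

# Toy distribution (an assumption for illustration): a qubit rotated by theta
# and measured in a fixed basis, p = (cos^2(theta/2), sin^2(theta/2)).
p = lambda th: (np.cos(th / 2) ** 2, np.sin(th / 2) ** 2)
print(fisher_information(p, 0.8))   # -> 1.0 whenever both outcomes have nonzero probability
\end{verbatim}
For this distribution $\mathcal{J}=1$ at every such $\theta$, so the Cram\'{e}r-Rao bound reads $(\delta\tilde{\theta})^2 \geq 1$.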
Let's take the case of the measurement observable $\hat{M}$ and outcomes $\{m_k\}$ associated with distribution $\{p_k\}$ applied to a quantum system that previously evolved from a known initial state $|\psi_{0} \rangle$ under the dynamics of some Hamiltonian $\hat{H}$ for time $\theta$. The Schr\"{o}dinger equation governs the dynamics $ i \partial_{\theta} | \psi_{\theta} \rangle = \hat{H} | \psi_{\theta} \rangle $,
(where $\hbar=1$) and the time evolution is explicitly $ | \psi_{\theta} \rangle = \exp\{- i \hat{H} \theta\} | \psi_{0} \rangle \; .$
The parameter estimation task involves an inference of the time-like variable $\theta$ from the measurement distribution $\{ p_k (\theta)\}$. Writing the spectral decomposition of the maximal measurement as $ \hat{M} = \sum_{k} m_k | k \rangle \langle k | $
then complex amplitudes $\langle k | \psi_{\theta} \rangle = r_k \exp\{i \phi_k\}$
lead to probabilities $ p_k = \langle k | \psi_{\theta} \rangle \langle \psi_{\theta} | k \rangle = r_{k}^{2} \; , \label{probs}$ where it is understood that $\{p_k, r_k \} \in [0,1]$ and $\phi_k \in [0,2 \pi )$ are all real-valued functions of parameter $\theta$. Replacing $p_k$ with $r_k^2$ in Eq.\eqref{F1} gives:
\begin{equation}
\mathcal{J}(\theta)= 4\sum_k \dot{r}_{k}^{2} \; ,\label{FI2}
\end{equation}
which we can now use to find an operator expression for the Fisher information. Differentiating $r_k^2$ gives
\begin{equation}
2 \dot{r}_k r_{k}= \langle k | \psi_{\theta} \rangle \langle \dot{\psi}_{\theta} | k \rangle + \langle k | \dot{\psi}_{\theta} \rangle \langle \psi_{\theta} | k \rangle \; . \label{diff}
\end{equation}
Now, $ \langle k | \dot{\psi}_{\theta} \rangle = \partial_{\theta} (r_k e^{i \phi_k})= e^{i \phi_k} (\dot{r}_k+i r_k \dot{\phi}_k) = | \langle k | \dot{\psi}_{\theta} \rangle| e^{i ( \phi_k + \tau_k)} $ where we define the inclination of the velocity vector:
\begin{equation}
\tau_k = \arg \langle k | \dot{\psi}_{\theta} \rangle - \arg \langle k | \psi_{\theta} \rangle = \tan^{-1}( r_k \dot{\phi}_k / \dot{r}_k) \; .
\end{equation}
Eq. \eqref{diff} yields $\dot{r}_k = \cos \tau_k | \langle k | \dot{\psi}_{\theta} \rangle| $. From the Schr\"{o}dinger equation we also have $ i \langle k | \dot{\psi}_{\theta} \rangle = \langle k | \hat{H} | \psi_{\theta} \rangle $. Substituting this expression and squaring gives
\begin{equation}
\dot{r}_{k}^{2} = \cos^2 \tau_k \langle \psi_{\theta} | \hat{H} | k \rangle \langle k | \hat{H} | \psi_{\theta} \rangle .
\end{equation}
Summing over all the measurement outcomes `$k$' gives the Fisher information using Eq.\eqref{FI2}: $\mathcal{J}(\theta) = 4 \sum_k \cos^2 \tau_k \langle \psi_{\theta} | \hat{H} | k \rangle
\langle k | \hat{H} | \psi_{\theta} \rangle $. Therefore, we can define a non-linear, positive and hermitian Fisher operator $\hat{F}_{\theta}$ that is diagonal in the measurement basis:
\begin{equation}
\hat{F}_{\theta} = 4 \sum_k \cos^2 \tau_k | k \rangle \langle k | = 4 \sum_k c_{\psi,k} | k \rangle \langle k | \: , \label{FOp}
\end{equation}
such that the Fisher information is then
\begin{equation}
\mathcal{J}(\theta) = \langle \psi_{\theta} | \hat{H} \hat{F}_{\theta} \hat{H} | \psi_{\theta} \rangle = \text{Tr} \; \left[ | \psi_{\theta} \rangle \langle \psi_{\theta} | \: \hat{H} \hat{F}_{\theta} \hat{H} \right ] . \label{FisherInfo}
\end{equation}
This can be re-expressed as an expectation value with respect to the probe state, the input $| \psi_0 \rangle$, by unitarily transforming the Fisher operator:
$
\mathcal{J}(\theta) = \langle \psi_{0} | \hat{H} \hat{\Phi}_{\theta} \hat{H} | \psi_{0} \rangle = \text{Tr} \; \left [ | \psi_{0} \rangle \langle \psi_{0} | \: \hat{H} \hat{\Phi}_{\theta} \hat{H} \right ] ,
$
where
\begin{equation}
\hat{\Phi}_{\theta} = e^{i \hat{H} \theta} \hat{F}_{\theta} e^{- i \hat{H} \theta} = 4 \sum_k c_{\psi,k} | k' \rangle \langle k' |
\end{equation}
is the unitarily transformed Fisher operator, cf Eq. \eqref{FOp}. Note that, due to the non-linear nature of $\hat{F}_\theta$, a basis transformation $| k \rangle \mapsto | k' \rangle$ would give a different result:
\begin{equation}
\hat{F}_\theta \mapsto \hat{F}_\theta' = 4 \sum_k c_{\psi,k'} | k' \rangle \langle k' | \; ,
\end{equation}
not equivalent to $\Phi_\theta$, since $c_{\psi,k'} \neq c_{\psi,k} $ generally.
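A minimal numerical sketch of the construction so far (the Hamiltonian, probe, and measurement basis below are arbitrary choices made only for illustration) builds the Fisher operator from the angles $\tau_k$ and confirms that $\langle \psi_{\theta} | \hat{H} \hat{F}_{\theta} \hat{H} | \psi_{\theta} \rangle$ reproduces the classical Fisher information of the outcome distribution.
\begin{verbatim}
import numpy as np

# Illustrative inputs (assumptions, not taken from the text): a random 3-level
# Hamiltonian, a fixed probe, and the computational basis as the measurement {|k>}.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2
evals, evecs = np.linalg.eigh(H)
U = lambda t: evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T  # exp(-iHt)

psi0  = np.ones(3, dtype=complex) / np.sqrt(3)
theta = 0.4
ks    = np.eye(3, dtype=complex)               # columns are the measurement vectors |k>

psi  = U(theta) @ psi0                         # |psi_theta>
dpsi = -1j * H @ psi                           # Schroedinger equation

tau  = np.angle(ks.conj().T @ dpsi) - np.angle(ks.conj().T @ psi)
F    = ks @ np.diag(4 * np.cos(tau) ** 2) @ ks.conj().T       # Fisher operator
J_op = np.real(psi.conj() @ H @ F @ H @ psi)                  # <psi|H F H|psi>

# Cross-check against the classical definition J = sum_k p_k (d ln p_k / d theta)^2.
eps = 1e-6
p   = lambda t: np.abs(ks.conj().T @ (U(t) @ psi0)) ** 2
dp  = (p(theta + eps) - p(theta - eps)) / (2 * eps)
print(J_op, np.sum(dp ** 2 / p(theta)))        # the two numbers coincide
\end{verbatim}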
\emph{Fixed Probe Optimization}: The completeness of the measurement basis provides a resolution of the identity $ \sum_k | k \rangle \langle k | = \mathbbm{1} $, and therefore Eq.\eqref{FisherInfo} becomes:
\begin{align}
\frac{\mathcal{J}(\theta) }{4} & = \langle \hat{H}^2 \rangle - \sum_k \sin^2 \tau_k \: \langle \psi_{\theta} | \hat{H} | k \rangle
\langle k | \hat{H} | \psi_{\theta} \rangle \nonumber \\
& = \langle \hat{H}^2 \rangle - \sum_k ( r_k \dot{\phi}_k )^2 = \langle \hat{H}^2 \rangle - \mathcal{K}(\theta) \; , \label{Fisher1}
\end{align}
where the form in the second line was previously presented in \cite{Durkin-Dowling}. With reference to Eq.\eqref{FI2} it is interesting to note that $\langle \hat{H}^2 \rangle$ is a sum of translational and rotational kinetic energy terms. Here it may be taken with respect to $| \psi_0 \rangle$, since $[\hat{H}^2, \exp\{i \hat{H} \theta \}] = 0$. In Eq.\eqref{Fisher1} the subtracted term, we will call it the `\emph{information complement}' $\mathcal{K}$, is positive and can only reduce $\mathcal{J}(\theta)$. We look for a measurement basis $\{ | k \rangle \}$ and associated set $\{ r_k, \phi_k\}$ that minimizes this information complement for a fixed input $|\psi_0\rangle$. Finding derivatives of $\phi_k$ by writing $\delta \phi_k = \arg \langle k | \psi_{\theta + \delta \theta} \rangle - \arg \langle k | \psi_{\theta} \rangle$ produces
\begin{equation}
r_k \dot{\phi}_k = \sum_l | H_{k , l} | r_l \cos (\phi_l - \phi_k + \Omega_{k,l}) = \mathcal{A}_k \; ,
\end{equation}
where $\langle k| \hat{H} | l \rangle = H_{k , l} = | H_{k , l} | \exp i \Omega_{k,l}$, and therefore
\begin{equation}
\mathcal{K} = \sum_k ( r_k \dot{\phi}_k )^2 = \sum_k \mathcal{A}_k^2 \; .
\end{equation}
By comparison, expanding $\langle \hat{H} \rangle$ in the $| k \rangle$ basis gives
\begin{equation}
\langle \hat{H} \rangle = \sum_{k,l} r_k r_l | H_{k , l} | \cos (\phi_l - \phi_k + \Omega_{k,l}) = \sum_k r_k \mathcal{A}_k \; .
\end{equation}
At the minimum
\begin{equation}
\frac{\partial \mathcal{K} }{\partial \phi_p}= \sum_k \frac{\partial}{\partial \phi_p}( \mathcal{A}_k^2) \; = \; 0
\end{equation}
and this calculation leads to
\begin{equation}
\mathcal{A}_p = - r_p \frac{\sum_k \mathcal{A}_k | H_{k , p} | \sin (\phi_p - \phi_k + \Omega_{k,p}) }{\sum_k r_k | H_{k , p} | \sin (\phi_p - \phi_k + \Omega_{k,p}) } = r_p \mathcal{B}_p \; ,
\end{equation}
at the minimum of $\mathcal{K}$. Remember, this is equivalent to maximizing $\mathcal{J}$ over $\{ | k \rangle \}$ for a fixed $| \psi_0 \rangle$. At this minimum we can prove the relation that
\begin{align}
&\mathcal{K} - \langle \hat{H} \rangle^2 \nonumber \\
= & \sum_k \mathcal{A}_k^2 - ( \sum_k r_k \mathcal{A}_k )^2 \nonumber \\
= & \sum_k r_k^2 \mathcal{B}_k^2 - ( \sum_k r_k^2 \mathcal{B}_k )^2 \nonumber \\
= & \: \Delta^2 \mathcal{B} \geq 0 \; ,
\end{align} since $\{r_k^2\} =\{ p_k\}$ is a probability distribution. Comparing again with Eq.\eqref{Fisher1} it follows directly that
\begin{equation}
\mathcal{J}(\theta) /4 \leq \Delta^2 \hat{H}
\end{equation}
for a fixed input $| \psi_0 \rangle$. Now we will show that this bound is saturated by a particular qubit input state.
\begin{figure}
\caption{\small{Bloch-sphere depiction of the qubit probe states and measurement bases discussed in the text.}}
\label{sphere}
\end{figure}
\emph{Optimizing for a Qubit:} For probe states that are eigenstates of the Hamiltonian, all $\dot{r}_k \mapsto 0 $ and $\mathcal{J} = 0 $ from Eq.\eqref{FI2}, because any eigenstate $| \lambda \rangle$ of $\hat{H}$ only gains a phase during its evolution. Thus $r_k (\theta)= |\langle k | e^{i \hat{H} \theta}| \lambda \rangle | = |\langle k | e^{i \lambda \theta}| \lambda \rangle | = |\langle k | \lambda \rangle | = r_k(0)$, and $\dot{r}_k = 0$.
For optimality over all $| \psi_0 \rangle$ and $\{ | k \rangle \}$ the input state must thus be a \emph{superposition} of at least two Hamiltonian eigenvectors,
\begin{equation}
| \psi_0 \rangle \mapsto \cos \gamma | \lambda_{1} \rangle + e^{i \chi} \sin \gamma | \lambda_2 \rangle \; . \label{general-probe}
\end{equation}
Starting with a qubit probe state of the form of Eq. \eqref{general-probe} and some measurement basis pair $\{ | k_1 \rangle, | k_2 \rangle\}$ spanning the same $\mathbbm{C}^2$ as $\{ | \lambda_1 \rangle, | \lambda_2 \rangle\}$:
\begin{align}
| k_1 \rangle = & \: \; \; \; \cos \alpha | \lambda_1 \rangle + \sin \alpha | \lambda_2 \rangle \nonumber \\
| k_2 \rangle = & - \sin \alpha | \lambda_1 \rangle + \cos \alpha | \lambda_2 \rangle
\end{align}
Here it has been chosen that the measurement basis defines the $x$ axis on the Bloch sphere of FIG.\ref{sphere}, hence the real coefficients for $| k_{1,2}\rangle$. By confining the measurement basis vectors $\{ | k_1 \rangle, | k_2 \rangle \}$ to the same space as $\{ | \lambda_1 \rangle, | \lambda_2 \rangle\}$, one can restrict interest to the component of $\hat{\Phi}_\theta$ within this qubit space:
\begin{align}
\hat{\Phi}_\theta = & \; e^{i \hat{H} \theta} \{ c_{\psi,k1} | k_1 \rangle \langle k_1 | + c_{\psi,k2} | k_2 \rangle \langle k_2 | + \dots \nonumber \} e^{- i \hat{H} \theta} \nonumber \\
= & \; \; c_{\psi,k1} \left(
\begin{array}{ll}
c^2 & e^{-i ( \lambda_2-\lambda_1) \theta } s c \\
e^{+i (\lambda_2-\lambda_1) \theta } s c & s^2
\end{array}
\right) \nonumber \\ + & \; c_{\psi,k2} \left(
\begin{array}{ll}
s^2 & - e^{-i (\lambda_2-\lambda_1) \theta } s c \\
- e^{+ i (\lambda_2-\lambda_1) \theta} s c & c^2
\end{array}
\right) + \dots \nonumber
\end{align}
where $ c $ ($s$) is $ \cos \alpha$ ($ \sin \alpha$). We ignore elements of $\hat{\Phi}_\theta$ that project onto the remaining Hilbert space, orthogonal to $\{ | \lambda_1 \rangle, | \lambda_2 \rangle\}$.
The coefficients $\cos^2 \tau_k$ have a geometric interpretation by mapping $\mathbbm{C} \mapsto \mathbbm{R}^2$:
\begin{align}
\tau_k &= \cos^{-1}\frac{\langle k | \dot{\psi}_\theta \rangle . \langle k | \psi_\theta \rangle}{|\langle k | \dot{\psi}_\theta \rangle | | \langle k | \psi_\theta \rangle|} = \sin^{-1} \frac{\langle k | \hat{H} | \psi_\theta \rangle . \langle k | \psi_\theta \rangle}{|\langle k | \hat{H} | \psi_\theta \rangle | | \langle k | \psi_\theta \rangle|} \nonumber \\ &= \sin^{-1}\frac{(\vec{v} + A \vec{w}) . (\vec{v} + \vec{w}) }{ | \vec{v} + A \vec{w} | |\vec{v} + \vec{w} |} \: , \end{align}
with $A = \lambda_2 / \lambda_1$ and
\begin{align}
\vec{v}_1 & = \cos( \alpha) \cos ( \gamma) , \; \; \vec{w}_1 = \sin (\alpha) \sin (\gamma) e^{ i (\chi - (\lambda_2 - \lambda_1) \theta ) } \nonumber \\
\vec{v}_2 & = - \sin ( \alpha) \cos ( \gamma), \; \vec{w}_2 = \cos (\alpha) \sin (\gamma) e^{ i (\chi - (\lambda_2 - \lambda_1) \theta ) } \ \nonumber .
\end{align}
The components in the measurement basis are:
\begin{align}
\langle k_{1,2}| e^{-i \hat{H} \theta} | \psi_{0} \rangle = e^{-i \lambda_1 \theta} (\vec{v}_{1,2} + \vec{w}_{1,2} ) \nonumber \\
\langle k_{1,2} | \hat{H} e^{-i \hat{H} \theta} | \psi_{0} \rangle = e^{-i \lambda_1 \theta} \lambda_1 (\vec{v}_{1,2} + A \vec{w}_{1,2} )
\end{align}
and the included angle between vectors $ (\vec{v} + \vec{w} )$ and $ (\vec{v} + A \vec{w} )$ is $\tau_k - \pi /2+ \arccos \{ \lambda_1/ |\lambda_1| \}$, because $\lambda_1$ may be negative. Now we have explicit expressions for
\begin{align}& c_{\psi,k} = \cos^2 \tau_k = (1-\sin^2 \tau_k) = 1- \nonumber \\
& \! \! \frac{\left(A R^2+(-1)^{k \! - \! 1}(A+1) \cos (\beta) R+1\right)^2}{\! \left(R^2 \! + \! 2 (-1)^{k \! - \! 1} \cos (\beta ) R \! + \! 1\right) \! \left(A^2 R^2 \! + \! 2 (-1)^{k \! - \! 1} A \cos (\beta) R \! + \! 1\right)} , \label{coeffs} \end{align}
with \begin{equation} \! \! \{ \! R_{1}, \! R_{2} \} \! \mapsto \! \left\{ \frac{ | \vec{w}_1|}{|\vec{v}_1|} , \!\frac{ | \vec{w}_2|}{|\vec{v}_2|} \right\} \! = \! \left\{ \tan(\alpha) \! \tan(\gamma), \frac{\tan(\gamma)}{ \tan(\alpha)} \! \right\} , \! \end{equation} and
$ \beta = \arccos \{ \vec{v_1}.\vec{w_1}/|\vec{v_1}||\vec{w_1}| \} =\chi - (\lambda_2 - \lambda_1) \theta $. For angles $\{ \alpha, \beta, \gamma \}$ expectation value $\langle \psi_0 | \hat{H} \hat{\Phi}_{\theta} \hat{H} | \psi_0 \rangle$ gives
\begin{widetext}
\begin{equation} \! \mathcal{J}(\alpha, \beta, \gamma)\! = \! \frac{- 4 \left(\lambda _1-\lambda
_2\right){}^2 s^2[2 \alpha ] s^2[2 \gamma ] s^2\left[ \beta \right]}{\left(c[2 (\alpha -\gamma )]+c [2 (\alpha +\gamma )]+2 c \left[ \beta \right] s [2 \alpha ] s [2 \gamma ]-2\right) \! \left( c [2 (\alpha -\gamma )]+c [2 (\alpha +\gamma )]+2 c \left[ \beta \right] s [2 \alpha ] s [2 \gamma ]+2 \right)}
\end{equation}
\end{widetext}
writing sin as `$s$' and cos as `$c$'. This function is optimized by angles $\{ \alpha, \gamma \} \mapsto \pi / 4$, independent of the value of $\beta$ and giving a saturable bound: $ \langle \psi_0 | \hat{H} \hat{\Phi}_{\theta} \hat{H} | \psi_0 \rangle \leq \left(\lambda _1-\lambda_2\right){}^2 $. This is the upper bound on the Fisher information for any superposition of two eigenstates of the Hamiltonian. It is saturated by a probe state
\begin{equation}
| \psi_{\text{opt}} \rangle = (| \lambda_1 \rangle + e^{ i \chi } | \lambda_2 \rangle) / \sqrt{2} \; ,
\end{equation}
where $\chi \in [0,2 \pi)$, see FIG.\ref{sphere}. The result $\alpha \mapsto \pi/4 $ dictates an optimal measurement scheme with components:
\begin{equation}
| k_{\pm} \rangle = (| \lambda_1 \rangle \pm | \lambda_2 \rangle) / \sqrt{2} \; , \label{best-measurement}
\end{equation}
also depicted in FIG.\ref{sphere}. All other basis elements $\in \{ | k \rangle \}$ span an orthogonal subspace.
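The optimal qubit probe and the measurement basis of Eq.~\eqref{best-measurement} can be checked numerically. In the sketch below the eigenvalues, phase, and value of $\theta$ are arbitrary illustrative choices; the classical Fisher information of the resulting two-outcome distribution equals $(\lambda_1-\lambda_2)^2$ independently of $\theta$.
\begin{verbatim}
import numpy as np

lam1, lam2, chi, theta = 0.5, 2.0, 0.3, 0.9           # illustrative values (assumptions)

psi0   = np.array([1.0, np.exp(1j * chi)]) / np.sqrt(2)   # optimal probe
kplus  = np.array([1.0,  1.0]) / np.sqrt(2)               # optimal measurement basis
kminus = np.array([1.0, -1.0]) / np.sqrt(2)

def probs(t):
    # Evolve under the diagonal Hamiltonian diag(lam1, lam2) and project onto |k+->.
    psi = np.exp(-1j * np.array([lam1, lam2]) * t) * psi0
    return np.array([abs(k.conj() @ psi) ** 2 for k in (kplus, kminus)])

eps = 1e-6
p, dp = probs(theta), (probs(theta + eps) - probs(theta - eps)) / (2 * eps)
print(np.sum(dp ** 2 / p), (lam1 - lam2) ** 2)         # both equal (lam1 - lam2)^2
\end{verbatim}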
\emph{Generalization to Higher Dimensions:} It is clear that for a given Hamiltonian $\hat{H}$, the maximal Fisher information is then bounded from below by $\left(\lambda _\uparrow - \lambda
_\downarrow \right)^2 = || \hat{H} ||^2$ where $\lambda_\uparrow$ ($\lambda_\downarrow$) is the max (min) eigenvalue of $\hat{H}$, and $|| \hat{H}||= \left(\lambda _\uparrow - \lambda_\downarrow \right)$ is the operator seminorm of the Hamiltonian \cite{MetrologyII}. A key property of the seminorm is that it gives an achievable upper bound to the variance: \begin{equation} || \hat{H} ||^2 \geq 4 \Delta^2 \hat{H}. \end{equation} This allows a connection to be made between the qubit result with that for a fixed $| \psi_0 \rangle$ -- we saw earlier for a fixed input that $\mathcal{J} \leq 4 \Delta^2 \hat{H}$. Therefore the qubit maximum variance state must be the universally optimal state over the full Hilbert space, as it saturates its variance bound $\mathcal{J} = || \hat{H} ||^2 $. The general result is
\begin{equation} \label{seminorm}
\begin{array}{c}
\text{max} \\
| \psi_0 \rangle, \{ | k \rangle \} \\
\end{array} \mathcal{J} = || \hat{H} ||^2 \; .
\end{equation}
A corollary of Eq.\eqref{seminorm} is that no greater number of superposed energy eigenstates can improve on the Fisher information provided by the maximum variance state $(|\lambda_{\uparrow} \rangle + e^{i \chi} | \lambda_{\downarrow} \rangle )/ \sqrt{2}$.
\emph{Photons:} In a Mach-Zehnder interferometer (MZ) the estimated parameter $\theta$ is simply the phase difference between the two interferometer paths. Spin $\hat{J}_y$ plays the role of Hamiltonian, and the Casimir operator $\hat{J}^2$ is equivalently $\hat{n}/2(\hat{n}/2+1)$ where $\hat{n}$ is the total photon number operator. (So $j = n/2$ for states of fixed $n$.) Eigen-equations are $\hat{J}^2 | j,m \rangle_i = j(j+1)| j,m \rangle_i$ and $\hat{J}_i | j,m \rangle_i = m | j,m \rangle_i$ for $i \in \{x,y,z \}$. The maximum variance state for the MZ is
\begin{equation}
| \psi_{opt} \rangle = \frac{1}{\sqrt{2}} ( |j, +j \rangle_y + e^{i \chi} | j, -j \rangle_y ) \; , \label{NOON}
\end{equation}
i.e. a NOON state \cite{Durkin-Dowling,NOON-both}, rotated by $\pi/2$ around the $x$ axis, with arbitrary phase $\chi$. Eq.\eqref{best-measurement} indicates that the optimal measurement basis has two elements $| k_{\pm} \rangle = ( | j, + j \rangle_y \pm \exp(i \xi) |j , -j \rangle_y ) / \sqrt{2}$. Interestingly, the optimal measurement basis is not unique (in this case) and maximal precision is also recovered by $\{ | k \rangle \}$ corresponding to the eigenbasis of $\hat{J}_z$, a measurement of the photon number difference between the two interferometer modes \cite{Durkin-Dowling}. Therefore, for a lossless MZ the precision limit is saturated only by NOON probes of Eq.\eqref{NOON}, and this is achievable with simple projection measurements, i.e. without generalised measurements -- compare \cite{Fisher-Optimal-measurements,Phase-POVM}. These optimal projection measurements may be performed by linear optics and photon counting alone. (For any photon number space of $n$ photons, of the full group of unitary operations, SU$(n+1)$, only the SU$(2)$ subset is needed.) The Cram\'{e}r-Rao inequality in the context of interferometry thus gives a precision known as the `Heisenberg' limit \cite{Heisenberg-Limit}:
$(\delta \theta_\text{H})^2 \geq1/ \mathcal{J}_{\text{max}} = 1/(\lambda_{\uparrow} - \lambda_{\downarrow})^2 = 1/ 4j^2 = 1 / n^2 $.
\emph{Phase States}:
As an example of a fixed probe, take the phase state \cite{phase-state,Phase-POVM}
\begin{equation}\label{phase-state}
|\psi_{0} \rangle \mapsto |j, \zeta \rangle = \frac{1}{\sqrt{2j+1}} \sum_{m=-j}^{j}e^{i m \zeta } |j,m \rangle_y \; ,
\end{equation}
(parameterized by $\zeta \in \mathbbm{R}$). In a MZ the evolution is a translation of the parameter, $e^{-i \theta \hat{J}_y } |j, \zeta \rangle = |j, \zeta - \theta \rangle$. These states are of interest because it is expected that their precision scales $ \mathcal{J}\propto j^2$, close to the Heisenberg limit. They have the peculiar feature that, for any $m$,
\begin{equation}
\arg \: _z \! \langle j, m | \hat{O} | j, \theta \rangle = - j \pi/2 \; , \; \forall \; \hat{O} \in SO(2j+1) \; ,
\end{equation}
due to the properties \cite{Sakurai-Wigner} of the Wigner rotation elements $_z \langle j, m_1 | e^{-i \hat{J}_y \theta} | j, m_2 \rangle_{z} \in \mathbbm{R}$. It follows that choosing $| k \rangle = \hat{O} |j,m \rangle_z$ makes $ \phi_k$ independent of $\theta$, i.e. $\tau_k = \dot{\phi}_k= \mathcal{K} = 0$. Therefore Eq.\eqref{Fisher1} gives $\mathcal{J}_{\text{max}}(\theta) \mapsto 4 \langle \hat{J}^2_y \rangle = 4j(j+1)/3$, with the desired scaling.
\emph{Summary and Outlook}: The formalism developed here incorporates all aspects of quantum parameter estimation explicitly; probe $| \psi_{0} \rangle $, dynamics $\hat{H}$, and measurement, $\{ | k \rangle \}$, clarifying how precision is determined by the interplay of all three. We introduced the information complement $\mathcal{K}$, the minimization of which allowed the optimal measurement to be found for a fixed probe. The greatest precision was found in the qubit subspace of the maximal variance input state. This ultimate precision is completely defined by the difference of the extremal energy eigenvalues -- no deeper dynamical structure is relevant, nor is the dimension of the Hilbert space.
Despite its many advantages (such as its easy extension to mixed states), the quantum Fisher Information approach \cite{Metrology, MetrologyII} reveals nothing about the relative performance of various measurement schemes, as it assumes an unknown optimal measurement. But for specific dynamical processes (e.g. photon interferometry) there exist real physical restrictions on the types of probe and measurement that may be employed. For a restricted set that excludes an optimal measurement, maximizing the classical Fisher information over the available measurements will allow the locally optimal probe to be recovered.
In future, it may prove fruitful to develop the Fisher Operator approach to incorporate evolution of mixed states, governed by completely positive maps and generalised measurements.
This work was carried out under a contract with Mission Critical Technologies at NASA Ames Research Center. The author would like to thank Vadim Smelyanskiy and Gen Kimura for useful discussions.
\end{document}
|
\begin{document}
\newtheorem{Theo}{Theorem}
\newtheorem{Ques}{Question}
\newtheorem{Prop}[Theo]{Proposition}
\newtheorem{Exam}[Theo]{Example}
\newtheorem{Lemma}[Theo]{Lemma}
\newtheorem{Claim}{Claim}
\newtheorem{Cor}[Theo]{Corollary}
\newtheorem{Conj}[Theo]{Conjecture}
\theoremstyle{definition}
\newtheorem{Defn}[Theo]{Definition}
\newtheorem{Remark}[Theo]{Remark}
\title{Spectra of Coronae}
\ellet\thefootnote\relax\footnotetext{\noindent The published version of this
paper appears in \emph{Linear Algebra and its Applications}, Volume
435, no.~5, (2011), and contains occasional simplifications and additions
beyond the current version.}
\author{Cam McLeman and Erin McNicholas}
\begin{abstract}
We introduce a new invariant, the \emph{coronal} of a graph, and use
it to compute the spectrum of the corona $G\circ H$ of two graphs $G$
and $H$. In particular, we show that this spectrum is completely
determined by the spectra of $G$ and $H$ and the coronal of $H$.
Previous work has computed the spectrum of a corona only in the case
that $H$ is regular. We then explicitly compute the coronals for
several families of graphs, including regular graphs, complete
$n$-partite graphs, and paths. Finally, we use the corona
construction to generate many infinite families of pairs of cospectral
graphs.
\end{abstract}
\maketitle
\section{Introduction}
Let $G$ and $H$ be (finite, simple, non-empty) graphs. The
\emph{corona} $G\circ H$ of $G$ and $H$ is constructed as follows:
Choose a labeling of the vertices of $G$ with labels $1,2,\elldots,m$.
Take one copy of $G$ and $m$ disjoint copies of $H$, labeled
$H_1,\elldots,H_m$, and connect each vertex of $H_i$ to vertex $i$ of
$G$. This construction was introduced by Frucht and Harary
\cite{FH70} with the (achieved) goal of constructing a graph whose
automorphism group is the wreath product of the two component
automorphism groups. Since then, a variety of papers have appeared
investigating a wide range of graph-theoretic properties of coronas,
such as the bandwidth \cite{CLY92}, the minimum sum \cite{Wi93},
applications to Ramsey theory \cite{Ne85}, etc. Further, the spectral
properties of coronas are significant in the study of invertible
graphs. Briefly, a graph $G$ is invertible if the inverse of the
graph's adjacency matrix is diagonally similar to the adjacency matrix
of another graph $G^+$. Motivated by applications to quantum
chemistry, Godsil \cite{Go85} studies invertible bipartite graphs with
a unique perfect matching. In response to his question asking for a
characterization of such graphs with the additional property that
$G\cong G^+$, Simion and Cao \cite{SC89} determine the answer to be
exactly the coronas of bipartite graphs with the single-vertex graph
$K_1$.
\vspace*{.1in}
The study of spectral properties of coronas was continued by Barik,
et. al. in \cite{BPS07}, who found the spectrum of the corona $G\circ
H$ in the special case that $H$ is regular. In Section 2, we drop the
regularity condition on $H$ and compute the spectrum of the corona of
any pair of graphs using a new graph invariant called the coronal. In
Section 3, we compute the coronal for several families of graphs,
including regular graphs examined in \cite{BPS07}, complete
$n$-partite graphs, and path graphs. Finally, in Section 4, we see
that properties of the spectrum of coronas lend themselves to
finding many large families of cospectral graph pairs.
\subsection*{Notation}
The symbols ${\bf 0}_n$ and ${\bf 1}_n$ (resp., ${\bf 0}_{mn}$ and
${\bf 1}_{mn}$) will stand for length-$n$ column vectors (resp.
$m\times n$ matrices) consisting entirely of 0's and 1's. For two
matrices $A$ and $B$, the matrix $A\otimes B$ is the Kronecker (or
tensor) product of $A$ and $B$. For a graph $G$ with adjacency matrix
$A$, the characteristic polynomial of $G$ is
$f_G(\elleftarrowmbda):=\det(\elleftarrowmbda I-A)$. We use the standard notations
$P_n$, $C_n$, $S_n$, and $K_n$ respectively for the path, cycle, star,
and complete graphs on $n$ vertices.
\section{The Main Theorem}
Let $G$ and $H$ be finite simple graphs on $m$ and $n$ vertices,
respectively, and let $A$ and $B$ denote their respective adjacency
matrices. We begin by choosing a convenient labeling of the vertices
of $G\circ H$. Recall that $G\circ H$ is comprised of the $m$
vertices of $G$, which we label arbitrarily using the symbols
$\{1,2,\elldots,m\}$, and $m$ copies $H_1,H_2,\elldots,H_m$ of $H$.
Choose an arbitrary ordering $h_1,h_2,\elldots,h_n$ of the vertices of
$H$, and label the vertex in $H_i$ corresponding to $h_k$ by the label
$i+mk$. Below is a sample corona with the above labeling procedure:
\begin{center}
\begin{tabular}{ccc}
\quad\quad\begin{tikzpicture}[scale=0.8,transform shape]
\Vertex[x=0,y=1]{1}
\Vertex[x=1,y=1]{2}
\Vertex[x=1,y=0]{3}
\Vertex[x=0,y=0]{4}
\Edge(1)(2)
\Edge(2)(3)
\Edge(3)(4)
\Edge(4)(1)
\draw(.5,-2) node[scale=1.5625] {$G$};
\end{tikzpicture}\quad\quad
&
\quad\quad\begin{tikzpicture}[scale=0.8,transform shape]
\Vertex[x=0,y=0]{1}
\Vertex[x=1,y=0]{2}
\Vertex[x=2,y=0]{3}
\Edge(1)(2)
\Edge(2)(3)
\draw(1,-2) node[scale=1.5625] {$H$};
\end{tikzpicture}\quad\quad
&
\quad\quad\begin{tikzpicture}[scale=0.5,transform shape]
\Vertex[x=0,y=3]{1}
\Vertex[x=-2,y=3]{5}
\Vertex[x=-1,y=4]{9}
\Vertex[x=0,y=5]{13}
\Vertex[x=3,y=3]{2}
\Vertex[x=3,y=5]{6}
\Vertex[x=4,y=4]{10}
\Vertex[x=5,y=3]{14}
\Vertex[x=3,y=0]{3}
\Vertex[x=5,y=0]{7}
\Vertex[x=4,y=-1]{11}
\Vertex[x=3,y=-2]{15}
\Vertex[x=0,y=0]{4}
\Vertex[x=0,y=-2]{8}
\Vertex[x=-1,y=-1]{12}
\Vertex[x=-2,y=0]{16}
\tikzstyle{LabelStyle}=[fill=white,sloped]
\foreach \x/\y in {1/2,2/3,3/4,4/1}{\Edge(\x)(\y)}
\foreach \x/\y in {1/5,1/9,1/13,5/9,9/13}{\Edge(\x)(\y)}
\foreach \x/\y in {2/6,2/10,2/14,6/10,10/14}{\Edge(\x)(\y)}
\foreach \x/\y in {3/7,3/11,3/15,7/11,11/15}{\Edge(\x)(\y)}
\foreach \x/\y in {4/8,4/12,4/16,8/12,12/16}{\Edge(\x)(\y)}
\draw(1.5,-4) node [scale=2.5]{$G\circ H$};
\end{tikzpicture}
\end{tabular}
\end{center}
\noindent Under this labelling the
adjacency matrix of $G\circ H$ is given by
$$
A\circ B:=\begin{bmatrix}
A&{\bf 1}_n^T\otimes I_m\\
{\bf 1}_n\otimes I_m&B\otimes I_m
\end{bmatrix}.
$$ The goal now is to compute the eigenvalues of this corona matrix in
terms of the spectra of $A$ and $B$. We introduce one new invariant
for this purpose.
\begin{Defn}
Let $H$ be a graph on $n$ vertices, with adjacency matrix $B$. Note
that, viewed as a matrix over the field of rational functions
$\mb{C}(\elleftarrowmbda)$, the characteristic matrix $\elleftarrowmbda I-B$ has determinant
$\det(\elleftarrowmbda I-B)=f_H(\elleftarrowmbda)\neq 0$, so is invertible. The
\emph{coronal} $\chi_H(\elleftarrowmbda)\in \mb{C}(\elleftarrowmbda)$ of $H$ is defined to
be the sum of the entries of the matrix $(\elleftarrowmbda I-B)^{-1}$. Note
this can be calculated as
$$
\chi_H(\elleftarrowmbda)={\bf 1}_n^T(\elleftarrowmbda I_n-B)^{-1}{\bf 1}_n.
$$
\end{Defn}
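As an illustration of the definition (and not part of the development below), the coronal can be computed symbolically by inverting the characteristic matrix and summing the entries. The short script below, written with the SymPy computer algebra system, does exactly this for the path $P_3$, whose adjacency matrix is assumed as shown.
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lambda')

def coronal(B):
    # Sum of the entries of (lambda*I - B)^(-1), returned as a reduced fraction.
    n = B.rows
    ones = sp.ones(n, 1)
    return sp.cancel((ones.T * (lam * sp.eye(n) - B).inv() * ones)[0, 0])

# Example: the path P_3.
B_P3 = sp.Matrix([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(coronal(B_P3))       # (3*lambda + 4)/(lambda**2 - 2)
\end{verbatim}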
Our main theorem is that, beyond the spectra of $G$ and $H$, only the
coronal of $H$ is needed to compute the spectrum of $G\circ H$.
\begin{Theo}\elleftarrowbel{BigThm}
Let $G$ and $H$ be graphs with $m$ and $n$ vertices. Let
$\chi_H(\elleftarrowmbda)$ be the coronal of $H$. Then the characteristic
polynomial of $G\circ H$ is
$$f_{G\circ H}(\elleftarrowmbda)=f_H(\elleftarrowmbda)^mf_G(\elleftarrowmbda-\chi_H(\elleftarrowmbda)).$$
In particular, the spectrum of $G\circ H$ is completely determined by
the characteristic polynomials $f_G$ and $f_H$, and the coronal
$\chi_H$ of $H$.
\end{Theo}
\begin{proof}
\noindent Let $A$ and $B$ denote the respective adjacency matrices
of $G$ and $H$. We compute the characteristic polynomial of the
matrix $A\circ B$. For this, we recall two elementary results from
linear algebra on the multiplication of Kronecker products and
determinants of block matrices:
\begin{itemize}
\item In cases where each multiplication makes sense, we have
$$M_1M_2\otimes M_3M_4=(M_1\otimes M_3)(M_2\otimes M_4).$$
\item If $M_4$ is invertible, then
$$
\det\begin{pmatrix}M_1&M_2\\M_3&M_4\end{pmatrix}=\det(M_4)\det(M_1-M_2M_4^{-1}M_3).
$$
\end{itemize}
Combining these two facts, we have (as an equality of rational functions)
\begin{align*}
f_{G\circ H}(\elleftarrowmbda)&=\det(\elleftarrowmbda I_{m(n+1)}-A\circ B)\\
&=\det\begin{pmatrix}\elleftarrowmbda I_m-A&-{\bf 1}_n^T\otimes I_m\\-{\bf 1}_n\otimes I_m&\elleftarrowmbda I_{mn}-B\otimes I_m\end{pmatrix}\\
&=\det\begin{pmatrix}\elleftarrowmbda I_m-A&-{\bf 1}_n^T\otimes I_m\\-{\bf 1}_n\otimes I_m&(\elleftarrowmbda I_n-B)\otimes I_m\end{pmatrix}\\
&=\det((\elleftarrowmbda I_n-B)\otimes I_m)\det\bigg[(\elleftarrowmbda I_m-A)-({\bf 1}_n^T\otimes I_m)((\elleftarrowmbda I_n-B)\otimes I_m)^{-1}({\bf 1}_n\otimes I_m)\bigg]\\
&=\det(\elleftarrowmbda I_n-B)^m\det(\elleftarrowmbda I_m-A-({\bf 1}_n^T(\elleftarrowmbda I_n-B)^{-1}{\bf 1}_n)\otimes I_m)\\
&=\det(\elleftarrowmbda I_n-B)^m\det((\elleftarrowmbda-\chi_H(\elleftarrowmbda))I_m-A)\\
&=f_H(\elleftarrowmbda)^mf_G(\elleftarrowmbda-\chi_H(\elleftarrowmbda)).
\end{align*}
\end{proof}
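The theorem can also be verified symbolically on the example drawn above ($G=C_4$, $H=P_3$). The short SymPy script below builds the adjacency matrix of $G\circ H$ with the labeling described earlier and checks that the two sides of the identity agree; it is included only as a sanity check, not as part of the proof.
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])   # G = C_4
B = sp.Matrix([[0,1,0],[1,0,1],[0,1,0]])                   # H = P_3
m, n = A.rows, B.rows
N = m * (n + 1)

# Adjacency matrix of G o H, using the labeling of this section (0-indexed):
# vertex i of G keeps label i, and vertex h_k of the copy H_i gets label i + m*(k+1).
C = sp.zeros(N, N)
for i in range(m):
    for j in range(m):
        C[i, j] = A[i, j]
for i in range(m):
    for k in range(n):
        C[i, i + m * (k + 1)] = C[i + m * (k + 1), i] = 1      # join vertex i to H_i
        for l in range(n):
            C[i + m * (k + 1), i + m * (l + 1)] = B[k, l]      # edges inside H_i

ones  = sp.ones(n, 1)
f_G   = A.charpoly(lam).as_expr()
f_H   = B.charpoly(lam).as_expr()
chi_H = sp.cancel((ones.T * (lam * sp.eye(n) - B).inv() * ones)[0, 0])

lhs = C.charpoly(lam).as_expr()
rhs = sp.cancel(f_H ** m * f_G.subs(lam, lam - chi_H))
print(sp.simplify(sp.expand(lhs - rhs)))    # 0, as the theorem asserts
\end{verbatim}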
\begin{Remark}\elleftarrowbel{cospec}
A natural question is whether or not the spectrum of $G\circ H$ is
determined by the spectra of $G$ and $H$, i.e., whether knowledge of
the coronal is strictly necessary. Indeed it is: Computing the
coronals of the cospectral graphs $S_5$ and $C_4\cup K_1$, we have
\[
\chi_{S_5}(\elleftarrowmbda)=\frac{5\elleftarrowmbda+8}{\elleftarrowmbda^2-4}\quad\quad\text{ and }\quad\quad \chi_{C_4\cup K_1}(\elleftarrowmbda)=\frac{5\elleftarrowmbda-2}{\elleftarrowmbda^2-2\elleftarrowmbda}.
\] Thus cospectral graphs need not have the same coronal, and hence
for a given graph $G$, the spectra of $G\circ S_5$ and $G\circ(C_4\cup
K_1)$ are almost always distinct. Note that this stands in
stark contrast to the situation for the Cartesian and tensor products
of graphs. In both of these cases, the spectrum of the product is
determined by the spectra of the components.
\end{Remark}
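The two coronals quoted in the remark can be reproduced with the same symbolic computation; the adjacency matrices below encode $S_5$ (a center joined to four leaves) and $C_4\cup K_1$, and are assumptions made only for this check.
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lambda')

def coronal(B):
    n = B.rows
    ones = sp.ones(n, 1)
    return sp.cancel((ones.T * (lam * sp.eye(n) - B).inv() * ones)[0, 0])

# S_5: vertex 0 joined to vertices 1..4.   C_4 u K_1: a 4-cycle plus an isolated vertex.
S5   = sp.Matrix(5, 5, lambda i, j: 1 if i != j and 0 in (i, j) else 0)
C4K1 = sp.Matrix([[0,1,0,1,0],[1,0,1,0,0],[0,1,0,1,0],[1,0,1,0,0],[0,0,0,0,0]])

print(coronal(S5))      # (5*lambda + 8)/(lambda**2 - 4)
print(coronal(C4K1))    # (5*lambda - 2)/(lambda**2 - 2*lambda)
\end{verbatim}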
The unexpected simplicity of the examples in the above remark is
representative of a fairly common phenomenon: since the coronal
$\chi_H(\elleftarrowmbda)=\frac{\widetilde{\chi}_H(\elleftarrowmbda)}{f_H(\elleftarrowmbda)}$ can
be computed as the quotient of the sum $\widetilde{\chi}_H(\elleftarrowmbda)$
of the cofactors of $\elleftarrowmbda I-B$ by the characteristic polynomial
$f_H(\elleftarrowmbda)$, it is \emph{a priori} the quotient of a degree $n-1$
polynomial by a degree $n$ polynomial. In practice, however, as in
the examples in the remark, these two polynomials typically have roots
in common, providing for a reduced expression for the coronal. Let us
suppose that
$g(\elleftarrowmbda):=\gcd(\widetilde{\chi}_H(\elleftarrowmbda),f_H(\elleftarrowmbda))$ has
degree $n-d$ (the $\gcd$ being taken in $\mb{C}[\elleftarrowmbda]$), so that
$\chi_H(\elleftarrowmbda)$ in its reduced form is a quotient of a degree $d-1$
polynomial by a degree $d$ polynomial. Since the denominator of this
reduced fraction is a factor of $f_H(\elleftarrowmbda)$, and since $f_G$ is of
degree $m$, each pole of $\chi_H(\elleftarrowmbda)$ is simultaneously a
multiplicity-$m$ pole of $f_G(\elleftarrowmbda-\chi_H(\elleftarrowmbda))$ and a
multiplicity-$m$ root of $f_H(\elleftarrowmbda)^m$. Since these contributions
cancel in the overall determination of the roots of $f_{G\circ
H}(\elleftarrowmbda)$ in the expression
$$f_{G\circ H}(\elleftarrowmbda)=f_H(\elleftarrowmbda)^mf_G(\elleftarrowmbda-\chi_H(\elleftarrowmbda))$$
\noindent from Theorem \ref{BigThm}, we can now more explicitly
describe the spectrum of the corona. Namely, let $d$ be the degree of
the denominator of $\chi_H(\elleftarrowmbda)$ as a reduced fraction. Then the
spectrum of $G\circ H$ consists of:
\begin{itemize}
\item Some ``old'' eigenvalues, i.e., the roots of $f_H(\elleftarrowmbda)$
which are not poles of $\chi_H(\elleftarrowmbda)$ (or equivalently, the roots
of $g(\elleftarrowmbda)$), each with multiplicity $|G|$; and
\item Some ``new'' eigenvalues, i.e., the values $\elleftarrowmbda$ such that
$\elleftarrowmbda-\chi_H(\elleftarrowmbda)$ is an eigenvalue $\mu$ of $G$ (with the
multiplicity of $\elleftarrowmbda$ equal to the multiplicity of $\mu$ as an
eigenvalue of $G$.)
\end{itemize}
Since for a given $\mu$, solving $\elleftarrowmbda-\chi_H(\elleftarrowmbda)=\mu$ by
clearing denominators amounts to finding the roots of a degree $d+1$
polynomial in $\elleftarrowmbda$, the above two bullets combine to respectively
provide all $(n-d)m+m(d+1)=m(n+1)$ eigenvalues of $G\circ H$.
The following table, computed using SAGE (\cite{sage}), gives the
number of graphs on $n$ vertices whose coronal has a denominator of
degree $d$ (as a reduced fraction), as well as the average degree of
this denominator, for $1\elleq n\elleq 7$.
\begin{table}[!ht]
\centering
\parbox{195pt}{Table 1: Number of graphs on $n$ vertices whose coronal has denominator of degree $d$}
\begin{tabular}{|c|c|c|c|c|c|c|c|}\hline
$d\backslash n$&1&2&3&4&5&6&7\\\hline
1&1&2&2&4&3&8&6\\
2&&0&2&5&12&28&44\\
3&&&0&2&13&50&138\\
4&&&&0&6&40&304\\
5&&&&&0&22&246\\
6&&&&&&8&214\\
7&&&&&&&92\\\hline
Total&1&2&4&11&34&156&1044\\\hline
Average $d$&1&1&1.5&1.82&2.65&3.41&4.68\\\hline
(Average $d$)/$n$&1&0.5&0.5&0.45&0.53&0.57&0.66\\\hline
\end{tabular}
\end{table}
\vspace*{.1in}
Since determining the characteristic polynomial of $G\circ H$ from the
spectra of $G$ and $H$ requires only the extra knowledge of the
coronal of $H$, it remains to develop techniques for computing these
coronals. In Section 3, we will develop shortcuts for these
computations, but we briefly conclude this section with some more
computationally-oriented approaches. A first such option is to have a
software package with linear algebra capabilities directly compute the
inverse of $\elleftarrowmbda I-B$ and sum its entries, as done in the
computations for Table 1. This seems to be computationally feasible
only for rather small graphs (e.g., $n\elleq 12$). A second, more
graph-theoretic, option relies on a combinatorial result of Schwenk
\cite{Sc91} to compute each cofactor of $\elleftarrowmbda I-B$ individually,
before summing them to compute the coronal:
\begin{Theo}[Schwenk, \cite{Sc91}]\elleftarrowbel{Schwenk}
For vertices $i$ and $j$ of a graph $H$ with adjacency matrix $B$, let
$\mathcal{P}_{i,j}$ denote the set of paths from $i$ to $j$. Then
$$
\operatorname{adj}(\elleftarrowmbda I-B)_{i,j}=\sum_{P\in \mathcal{P}_{i,j}}f_{H-P}(\elleftarrowmbda).
$$
\end{Theo}
\noindent Again, this approach becomes computationally infeasible
fairly quickly without a method for pruning the number of cofactors to
calculate. We explore this idea in the next section. Regardless,
from Theorem \ref{Schwenk}, we obtain:
\begin{Cor}
The spectrum of the corona $G\circ H$ is determined by the spectrum of
$G$ and the spectra of the proper subgraphs of $H$ (or more
economically, only from the spectra of those subgraphs obtained by
deleting paths from $H$).
\end{Cor}
\section{Computing Coronals}\elleftarrowbel{exs}
\noindent In this section, we will compute the coronals
$\chi_H(\elleftarrowmbda)$ for several families of graphs, and hence for such
$H$ obtain the full spectrum of the corona $G\circ H$ for any $G$.
The principal technique exploits the regularity or near-regularity of
a graph in order to greatly reduce the number of cofactor calculations
(relative to those required by Theorem \ref{Schwenk}) needed to
compute the coronal. In particular, we use these ideas to compute the
coronals of regular graphs, complete bipartite graphs, and paths.
For graphs that are ``nearly regular'' in the sense that their degree
sequences are almost constant, we can take advantage of
linear-algebraic symmetries to compute the coronals. We begin with
two concrete computations, those corresponding to regular and complete
bipartite graphs, before extracting the underlying heuristic and
applying it to the coronal of path graphs. The case of regular
graphs, first addressed in \cite{BPS07}, is particularly
straight-forward from this viewpoint.
\begin{Prop}[Regular Graphs]\elleftarrowbel{regular}
Let $H$ be $r$-regular on $n$ vertices. Then
$$\chi_H(\elleftarrowmbda)=\frac{n}{\elleftarrowmbda-r}.$$ Thus for any graph $G$, the
spectrum of $G\circ H$ is precisely:
\begin{itemize}
\item Every non-maximal eigenvalue of $H$, each with multiplicity $|G|$.
\item The two eigenvalues
$$
\frac{\mu+r\pm\sqrt{(r-\mu)^2+4n}}{2}
$$
for each eigenvalue $\mu$ of $G$.
\end{itemize}
\end{Prop}
\begin{proof}
Let $B$ be the adjacency matrix of $H$. By regularity, we have $B{\bf
1}_n=r{\bf 1}_n$, and hence $(\elleftarrowmbda I-B){\bf 1}_n=(\elleftarrowmbda-r){\bf 1}_n$.
Cross-dividing and multiplying by ${\bf 1}_n^T$,
$$\chi_H(\elleftarrowmbda)={\bf 1}_n^T(\elleftarrowmbda I-B)^{-1}{\bf 1}_n=\frac{{\bf
1}_n^T{\bf 1}_n}{\elleftarrowmbda-r}=\frac{n}{\elleftarrowmbda-r}.$$ The only pole
of $\chi_H(\elleftarrowmbda)$ is the maximal eigenvalue $\elleftarrowmbda=r$ of $H$, and
the ``new'' eigenvalues are obtained by solving
$\elleftarrowmbda-\frac{n}{\elleftarrowmbda-r}=\mu$ for each eigenvalue $\mu$ of $G$.
\end{proof}
It is noteworthy that all $r$-regular graphs on $n$ vertices have the
same coronal, especially given that the entries of the corresponding
matrices $(\elleftarrowmbda I-B)^{-1}$ appear to be markedly dissimilar. The
simplicity of this scenario, and the easily checked observation that
cospectral regular graphs must have the same regularity, lead to the
following corollary. We will make use of this corollary in the final
section.
\begin{Cor}\elleftarrowbel{reg}
Cospectral regular graphs have the same coronal.
\end{Cor}
\noindent As a second class of examples, we compute the coronals of
complete bipartite and $n$-partite graphs.
\begin{Prop}[Complete Bipartite Graphs]\elleftarrowbel{cbg}
Let $H=K_{p,q}$ be a complete bipartite graph on $p+q=n$ vertices.
Then
$$\chi_H(\elleftarrowmbda)=\frac{n\elleftarrowmbda+2pq}{\elleftarrowmbda^2-pq}.$$
For any graph $G$, the spectrum of $G\circ H$ is given by:
\begin{itemize}
\item The eigenvalue $0$ with multiplicity $m(n-2)$; and
\item For each eigenvalue $\mu$ of $G$, the roots of the polynomial
$$x^3-\mu x^2-(p+q+pq)x+pq(\mu-2).$$
\end{itemize}
\end{Prop}
\begin{proof}
Let $B=\begin{bmatrix}{\bf 0}_{p,p}&{\bf 1}_{p,q}\\{\bf 1}_{q,p}&{\bf
0}_{q,q}\end{bmatrix}$ be the adjacency matrix of $K_{p,q}$ and let
$X=\text{diag}((q+\elleftarrowmbda)I_p,(p+\elleftarrowmbda)I_q)$ be the diagonal matrix
with the first $p$ diagonal entries being $(q+\elleftarrowmbda)$ and the last
$q$ entries being $(p+\elleftarrowmbda)$. Then $(\elleftarrowmbda I-B)X{\bf
1}_n=(\elleftarrowmbda^2-pq){\bf 1}_n$, and so
$$
\chi_H(\elleftarrowmbda)={\bf 1}_n^T(\elleftarrowmbda I-B)^{-1}{\bf 1}_n=\frac{{\bf 1}_n^TX{\bf 1}_n}{\elleftarrowmbda^2-pq}=\frac{(p+q)\elleftarrowmbda+2pq}{\elleftarrowmbda^2-pq}.
$$
\noindent
Thus the coronal has poles at both of the non-zero eigenvalues
$\pm\sqrt{pq}$ of $K_{p,q}$, leaving only the eigenvalue 0 with
multiplicity $p+q-2$. Finally, solving $\elleftarrowmbda-\chi_H(\elleftarrowmbda)=\mu$
gives the new eigenvalues in the spectrum as stated in the
proposition.
\end{proof}
\begin{Remark}
It might be tempting in light of Propositions \ref{regular} and
\ref{cbg} to hope that the degree sequence of a graph determines its
coronal. This too, like the analogous conjecture stemming from
cospectrality (Remark \ref{cospec}), turns out to be false: The graphs
$P_5$ and $K_2\cup K_3$ have the same degree sequence, but we find by
direct computation that
$$
\chi_{P_5}(\elleftarrowmbda)=\frac{5\elleftarrowmbda^2+8\elleftarrowmbda-1}{\elleftarrowmbda^3-3\elleftarrowmbda}\quad\quad\quad\quad\chi_{K_2\cup K_3}(\elleftarrowmbda)=\frac{5\elleftarrowmbda-7}{\elleftarrowmbda^2-3\elleftarrowmbda+2}.
$$
\end{Remark}
\noindent Similar to the complete bipartite computation, we have the
following somewhat technical generalization to complete $k$-partite
graphs.
\begin{Prop}
Let $H$ be the complete $k$-partite graph $K_{n_1,n_2,\elldots,n_k}$.
Then
$$\chi_H(\elleftarrowmbda)=\elleft(\frac{\prod_{j=1}^k(n_j+\elleftarrowmbda)}{\sum_{j=1}^kn_j\prod\ellimits^k_{\substack{{i=1}\\i\neq
j}}(n_i+\elleftarrowmbda)}-1\right)^{-1}=\frac{\sum_{j=1}^kjC_j\elleftarrowmbda^{k-j}}{\elleftarrowmbda^k-\sum_{j=2}^k(j-1)C_j\elleftarrowmbda^{k-j}}$$
where $C_j$ is the sum of the ${k \choose j}$ products of the form
$n_{i_1}n_{i_2}\cdots n_{i_j}$ with distinct indices.
\end{Prop}
\begin{proof}
Let $B$ be the adjacency matrix of $H$, let
$g_j(\elleftarrowmbda)=\prod\ellimits^k_{\substack{{i=1}\\i\neq
j}}(n_i+\elleftarrowmbda)$, and let $X$ be the block diagonal matrix
whose $j$-th block is $g_jI_{n_j}$. Then
$$(\elleftarrowmbda I_n-B)X{\bf 1}_n=\elleft[\prod_{i=1}^k(n_i+\elleftarrowmbda)-\sum_{j=1}^kn_jg_j\right]{\bf 1}_n.$$
Solving as in the bipartite case, we find
\begin{align*}
\chi_H(\elleftarrowmbda)={\bf 1}_n^T(\elleftarrowmbda I-B)^{-1}{\bf 1}_n&=\frac{{\bf
1}_n^TX{\bf 1}_n}{\prod_{i=1}^k(n_i+\elleftarrowmbda)-\sum_{j=1}^kn_jg_j}\\
&=\frac{\sum_{j=1}^kn_jg_j}{\prod_{i=1}^k(n_i+\elleftarrowmbda)-\sum_{j=1}^kn_jg_j},
\end{align*}
which gives the result.
\end{proof}
The proof technique for the last two propositions generalizes to
``nearly regular'' graphs, by which we mean graphs $H$ for which all but a small
number of vertices have the same degree $r$. In this case, we can write
$$(\elleftarrowmbda I-B){\bf 1}_n=(\elleftarrowmbda-r){\bf 1}_n+{\bf v},$$ where ${\bf
v}=(v_i)$ is a vector consisting mostly of 0's. This gives
$$(\elleftarrowmbda I-B)^{-1}{\bf 1}_n=\frac{1}{\elleftarrowmbda-r}\elleft[{\bf 1}_n-(\elleftarrowmbda I-B)^{-1}{\bf v}\right],$$
and thus, using the adjugate formula for the inverse,
$$\chi_H(\elleftarrowmbda)={\bf 1}_n^T(\elleftarrowmbda I-B)^{-1}{\bf
1}_n=\frac{1}{\elleftarrowmbda-r}\elleft[n-\frac{1}{f_H(\elleftarrowmbda)}\sum_{1\elleq
i,j\elleq n}v_iC_{i,j}\right],$$ where $C_{i,j}$ denotes the
$(i,j)$-cofactor of $\elleftarrowmbda I-B$. Since $v_i$ is zero for most
values of $i$, we have an effective technique for computing coronals
if we can compute a small number of cofactors (as opposed to, in
particular, computing \emph{all} of the cofactors and using Theorem
\ref{Schwenk}). For example, if we let $f_n=f_{P_n}(\elleftarrowmbda)$ be the
characteristic polynomial of the path graph $P_n$ on $n$ vertices (by
convention, set $f_0=1$), we can compute coronals of paths as
follows:
\begin{Prop}[Path Graphs]\elleftarrowbel{path}
Let $H=P_n$. Then
$$\chi_H=\frac{nf_n-2\sum_{j=0}^{n-1}f_j}{(\elleftarrowmbda-2)f_n}.$$
\end{Prop}
\begin{proof}
In the notation of the discussion preceding the proposition, we take
$r=2$ and ${\bf v}=[1\, 0\, 0\, \cdots\, 0\, 0\, 1]^T$. Further, we
note that an easy induction argument using cofactor expansion gives
$C_{j,1}=C_{j,n}=f_{j-1}$. Thus we obtain
$$\chi_{P_n}(\elleftarrowmbda)=\frac{1}{\elleftarrowmbda-2}\elleft[n-\frac{1}{f_n}\sum_{j=1}^n
(C_{j,1}+C_{j,n})\right]=\frac{1}{\elleftarrowmbda-2}\elleft[n-\frac{2}{f_n}\sum_{j=0}^{n-1}
f_j\right],$$
from which the result follows.
\end{proof}
\noindent From this, we easily calculate the coronals for the first few path graphs:
\begin{table}[!ht]
\centering
\parbox{195pt}{Table 2: The coronals $\chi_{P_n}(\elleftarrowmbda)$ for $1\elleq n\elleq 7$}
\vspace*{.1in}
\begin{tabular}{c|ccccccc}
$n$&1&2&3&4&5&6&7\\\hline
$\chi_{P_n}(\elleftarrowmbda)$&$\frac{1}{\elleftarrowmbda}$&$\frac{2}{\elleftarrowmbda-1}$&$\frac{3\elleftarrowmbda+4}{\elleftarrowmbda^2-2}$&$\frac{4\elleftarrowmbda+2}{\elleftarrowmbda^2-\elleftarrowmbda-1}$&$\frac{5\elleftarrowmbda^2+8\elleftarrowmbda-1}{\elleftarrowmbda^3-3\elleftarrowmbda}$&$\frac{6\elleftarrowmbda^2
+ 4\elleftarrowmbda - 4}{\elleftarrowmbda^3 - \elleftarrowmbda^2 - 2\elleftarrowmbda +
1}$&$\frac{7\elleftarrowmbda^3 + 12\elleftarrowmbda^2 - 6\elleftarrowmbda - 8}{\elleftarrowmbda^4 -
4\elleftarrowmbda^2 + 2}$
\end{tabular}
\end{table}
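The table entries can be reproduced from the proposition above. The sketch below, again using SymPy, generates the characteristic polynomials $f_n$ by the standard three-term recursion obtained from cofactor expansion along an end vertex (stated in the code comment, and assumed here rather than proved), and compares the closed form with a direct matrix inversion.
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lambda')

# Characteristic polynomials of paths via the three-term recursion
# f_n = lambda*f_{n-1} - f_{n-2}, with f_0 = 1 and f_1 = lambda.
f = [sp.Integer(1), lam]
for n in range(2, 8):
    f.append(sp.expand(lam * f[n - 1] - f[n - 2]))

def path_adjacency(n):
    return sp.Matrix(n, n, lambda i, j: 1 if abs(i - j) == 1 else 0)

for n in range(1, 8):
    closed = sp.cancel((n * f[n] - 2 * sum(f[:n])) / ((lam - 2) * f[n]))   # proposition formula
    B      = path_adjacency(n)
    direct = sp.cancel((sp.ones(n, 1).T * (lam * sp.eye(n) - B).inv() * sp.ones(n, 1))[0, 0])
    print(n, sp.simplify(closed - direct) == 0, direct)    # True, and the entries of Table 2
\end{verbatim}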
\begin{Remark}
This particular example can also be computed using Theorem \ref{Schwenk}: For $i$ and $j$
distinct, there is a unique path $[i,j]$ from vertex $i$ to vertex $j$, so the
sum in the theorem reduces to a single term:
$$
\operatorname{adj}(\elleftarrowmbda I-B)_{ij}=f_{P_n-[i,j]}=f_{i-1}f_{n-j}.
$$
Similarly, we find $\operatorname{adj}(\elleftarrowmbda I-B)_{ii}=f_{i-1}f_{n-i}$.
Summing over all the cofactors gives
$$
\chi_{P_n}(\elleftarrowmbda)=\frac{1}{f_n}\elleft(\sum_{i=1}^nf_{i-1}f_{n-i}+2\sum_{1\elleq i<j\elleq n}f_{i-1}f_{n-j}\right),
$$ which reduces to the result computed in Proposition \ref{path}
after some arithmetic.
\end{Remark}
\section{Cospectrality}
At the end of \cite{BPS07}, the authors note that if $G_1$ and $G_2$
are cospectral graphs, then $G_1\circ K_1$ and $G_2\circ K_1$ are also
cospectral, and that (by repeated coronation with $K_1$) this leads to
an infinite collection of cospectral pairs. Armed with the
characteristic polynomial
$$f_{G\circ H}(\elleftarrowmbda)=f_H(\elleftarrowmbda)^mf_G(\elleftarrowmbda-\chi_H(\elleftarrowmbda))$$
of the corona (Theorem \ref{BigThm}), we can greatly generalize this
observation on two fronts.
\begin{Cor}\elleftarrowbel{cospec2}
If $G_1$ and $G_2$ are cospectral, and $H$ is any graph, then
$G_1\circ H$ and $G_2\circ H$ are cospectral. Further, if $H_1$ and
$H_2$ are cospectral and $\chi_{H_1}=\chi_{H_2}$, and $G$ is any
graph, then $G\circ H_1$ and $G\circ H_2$ are cospectral.
\end{Cor}
We remark that examples of this second type do indeed exist.
Define the \emph{switching graph} $\text{Sw}(T)$ of a tree $T$ with adjacency matrix
$A_T$ to be the graph with adjacency matrix
\[
A_{\text{Sw}(T)}:=\begin{bmatrix}1&0\\0&1\end{bmatrix}\otimes A_T+\begin{bmatrix}0&1\\1&0\end{bmatrix}\otimes A_{\overline{T}},
\]
and let $T_1$ and $T_2$ be non-isomorphic cospectral trees with
cospectral complements (note that by \cite{GM76}, generalizing
\cite{Sc73}, ``almost all'' trees admit a cospectral pair with
cospectral complement). Then the switching graphs $\text{Sw}(T_1)$
and $\text{Sw}(T_2)$ are non-isomorphic cospectral regular graphs (see
\cite{GM82}, Construction 3.7), and also have the same coronal by
Corollary \ref{reg}. Corollary \ref{cospec2} now implies that for
\emph{any} graphs $G$ and $H$, we have the cospectral pair $G\,\circ
\,\text{Sw}(T_1)$ and $G\,\circ \,\text{Sw}(T_2)$ and the cospectral
pair $\text{Sw}(T_1)\circ H$ and $\text{Sw}(T_2)\circ H$. This gives,
for example, infinitely many cospectral pairs of graphs with any given
graph $G$ as an induced subgraph.
\end{document}
|
\begin{document}
\title{Saturation in Random Graphs}
\begin{abstract}
A graph $H$ is $K_s$-saturated if it is a maximal $K_s$-free graph, i.e., $H$ contains no clique on $s$ vertices, but the addition of any missing edge creates one. The minimum number of edges in a $K_s$-saturated graph was determined over 50 years ago by Zykov and independently by Erd\H{o}s, Hajnal and Moon. In this paper, we study the random analog of this problem: minimizing the number of edges in a maximal $K_s$-free subgraph of the Erd\H{o}s-R\'enyi random graph $G(n,p)$. We give asymptotically tight estimates on this minimum, and also provide exact bounds for the related notion of weak saturation in random graphs. Our results reveal some surprising behavior of these parameters.
\end{abstract}
\section{Introduction}
For some fixed graph $F$, a graph $H$ is said to be $F$-saturated if it is a maximal $F$-free graph, i.e., $H$ does not contain any copies of $F$ as a subgraph, but adding any missing edge to $H$ creates one. The saturation number $sat(n,F)$ is defined to be the minimum number of edges in an $F$-saturated graph on $n$ vertices. Note that the maximum number of edges in an $F$-saturated graph is exactly the extremal number $ex(n,F)$, so the saturation problem of finding $sat(n,F)$ is in some sense the opposite of the Tur\'an problem.
The first results on saturation were published by Zykov \cite{Z49} in 1949 and independently by Erd\H{o}s, Hajnal and Moon \cite{EHM64} in 1964. They considered the problem for cliques and showed that $sat(n,K_s)=(s-2)n-\binom{s-1}{2}$. Here the upper bound comes from the graph consisting of $s-2$ vertices connected to all other vertices. Since the 1960s, the saturation number $sat(n,F)$ has been extensively studied for various different choices of $F$. For results in this direction, we refer the interested reader to the survey \cite{FFS}.
In the present paper we are interested in a different direction of extending the original problem, where saturation is restricted to some host graph other than $K_n$. For fixed graphs $F$ and $G$, we say that a subgraph $H\subseteq G$ is $F$-saturated in $G$ if $H$ is a maximal $F$-free subgraph of $G$. The minimum number of edges in an $F$-saturated graph in $G$ is denoted by $sat(G,F)$. Note that with this new notation, $sat(n,F)=sat(K_n,F)$.
A question of this type already appeared in the above mentioned paper of Erd\H{o}s, Hajnal and Moon. They proposed the bipartite analog of the saturation problem, and even formulated a conjecture on the value of $sat(K_{n,m},K_{s,s})$. This conjecture was independently verified by Wessel \cite{W66} and Bollob\'as \cite{B67}, while general $K_{s,t}$-saturation in bipartite graphs was later studied in \cite{GKS15}. Several other host graphs have also been considered, including complete multipartite graphs \cite{FJPV, R} and hypercubes \cite{CG08,JP,MNS}.
In recent decades, classic extremal questions of all kinds are being extended to random settings. Hence it is only natural to ask what happens with the saturation problem in random graphs. As usual, we let $G(n,p)$ denote the Erd\H{o}s-R\'enyi random graph on vertex set $[n]=\{1,\ldots,n\}$, where two vertices $i,j\in [n]$ are connected by an edge with probability $p$, independently of the other pairs. In this paper we study $K_s$-saturation in $G(n,p)$.
The corresponding Tur\'an problem of determining $ex(G(n,p),K_s)$, the maximum number of edges in a $K_s$-saturated graph in $G(n,p)$, has attracted a considerable amount of attention in recent years. The first general results in this direction were given by Kohayakawa, R\"odl and Schacht \cite{KRS04}, and independently Szab\'o and Vu \cite{SV03} who proved a random analog of Tur\'an's theorem for large enough $p$. This
problem was resolved by Conlon and Gowers \cite{CG}, and independently by Schacht \cite{S}, who determined the correct range of edge probabilities where the Tur\'an-type theorem holds. The powerful method of hypergraph containers, developed by Balogh, Morris and Samotij \cite{BMS15} and by Saxton and Thomason \cite{ST15}, provides an alternative proof. Roughly speaking, these results establish that for most values of $p$, the random graph $G(n,p)$ behaves much like the complete graph as a host graph, in the sense that $K_s$-free subgraphs of maximum size are essentially $s-1$-partite.
Now let us turn our attention to the saturation problem. When $s=3$, the minimum saturated graph in $K_n$ is the star. Of course we cannot exactly adapt this structure to the random graph, because the degrees in $G(n,p)$ are all close to $np$ with high probability, but we can do something very similar. Pick a vertex $v_1$ and include all its incident edges in $H$. This way, adding any edge of $G(n,p)$ induced by the neighborhood of $v_1$ creates a triangle, so we have immediately taken care of a $p^2$ fraction of the edges (which is the best we can hope to achieve with one vertex). Then the edges incident to some other vertex are expected to take care of a $p^2$ fraction of the remaining edges, and so on: we expect every new vertex to reduce the number of remaining edges by a factor of $(1-p^2)$.
Repeating this about $\log_{1/(1-p^2)} \binom{n}{2}$ times, we obtain a $K_3$-saturated bipartite subgraph containing approximately $pn\log_{1/(1-p^2)} \binom{n}{2}$ edges. It feels natural to think that this construction is more or less optimal. Surprisingly, this intuition turns out to be incorrect. Indeed, we will present an asymptotically tight result which is better by a factor of $\frac{p\log(1-p)}{\log(1-p^2)}>1$.
Moreover, rather unexpectedly, the asymptotics of the saturation numbers do not depend on $s$, the size of the clique.
\begin{THM} \label{thm:main}
Let $0<p<1$ be some constant probability and $s\ge 3$ be an integer. Then
\[ sat(G(n,p),K_s)=(1+o(1))n\log_{\frac{1}{1-p}} n \]
with high probability.
\end{THM}
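To get a feel for the gap between the naive construction sketched above and
Theorem~\ref{thm:main}, one can simulate the naive construction directly. The
following sketch (an illustration only; the parameter values are arbitrary)
picks centers one at a time, places all edges between the chosen centers and
the remaining vertices into $H$, and stops once every remaining edge of
$G(n,p)$ has both endpoints adjacent to some center; it then compares the size
of $H$ with the two asymptotic formulas.
\begin{verbatim}
# Small simulation of the naive K_3-saturation construction in G(n,p).
# Parameters are arbitrary illustrative values.
import math
import random

def simulate(n=400, p=0.5, seed=0):
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    centers = []
    # G-edges not yet completed (no common neighbour among the centers).
    incomplete = {(u, v) for u in range(n) for v in adj[u] if u < v}
    order = list(range(n))
    rng.shuffle(order)
    for c in order:
        if not incomplete:
            break
        centers.append(c)
        # Drop pairs touching c (they join H or lie inside the centre set,
        # as in the text) and pairs completed by c.
        incomplete = {(u, v) for (u, v) in incomplete
                      if c != u and c != v and not (c in adj[u] and c in adj[v])}
    cset = set(centers)
    h_edges = sum(len(adj[c] - cset) for c in centers)  # centre-to-rest edges
    return len(centers), h_edges

n, p = 400, 0.5
k, m = simulate(n, p)
print("centers:", k, "  edges in H:", m)
print("n log_{1/(1-p)} n          ~", round(n * math.log(n) / math.log(1 / (1 - p))))
print("p n log_{1/(1-p^2)} C(n,2) ~",
      round(p * n * math.log(n * (n - 1) / 2) / math.log(1 / (1 - p * p))))
\end{verbatim}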
Our next result is about the closely related notion of weak saturation (also known as graph bootstrap percolation), introduced by Bollob\'as \cite{B68} in 1968. A graph $H\subseteq G$ is weakly $F$-saturated in $G$ if $H$ does not contain any copies of $F$, but the missing edges of $H$ in $G$ can be added back one-by-one in some order, such that every edge creates a new copy of $F$. The smallest number of edges in a weakly $F$-saturated graph in $G$ is denoted by $w\textrm{-}sat(G,F)$.
Clearly $w\textrm{-}sat(G,F)\le sat(G,F)$, but Bollob\'as conjectured that when both $G$ and $F$ are complete, then in fact equality holds. This somewhat surprising fact was proved by Lov\'asz \cite{L77}, Frankl \cite{F82} and Kalai \cite{K84,K85} using linear algebra:
\begin{THM}[\cite{K84,K85}] \label{thm:weak_clique}
\[ w\textrm{-}sat(K_n, K_s) = sat(K_n,K_s) = (s-2)n- \binom{s-1}{2} \]
\end{THM}
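The process in the definition of weak saturation is easy to run directly on
small examples. The sketch below is an illustration only (feasible for very
small $n$ and $s$): it repeatedly adds any missing host edge whose insertion
creates a new copy of $K_s$, and, applied to the graph of $s-2$ vertices joined
to everything inside $K_n$, it confirms weak $K_s$-saturation from
$(s-2)n-\binom{s-1}{2}$ starting edges.
\begin{verbatim}
# Run the K_s-bootstrap process: repeatedly add a missing host edge whose
# insertion creates a new copy of K_s, and report whether all of the host is
# reached.  (K_s-freeness of the starting graph is assumed, not checked.)
from itertools import combinations

def creates_Ks(adj, u, v, s):
    # Adding uv creates a K_s iff u,v have s-2 common neighbours spanning a clique.
    common = adj[u] & adj[v]
    return any(all(y in adj[x] for x, y in combinations(C, 2))
               for C in combinations(common, s - 2))

def weakly_saturated(n, host_edges, start_edges, s):
    adj = {x: set() for x in range(n)}
    for u, v in start_edges:
        adj[u].add(v); adj[v].add(u)
    missing = set(host_edges) - set(start_edges)
    changed = True
    while changed and missing:
        changed = False
        for (u, v) in sorted(missing):
            if creates_Ks(adj, u, v, s):
                adj[u].add(v); adj[v].add(u)
                missing.discard((u, v))
                changed = True
    return not missing

n, s = 8, 4
host = list(combinations(range(n), 2))              # host graph K_n
start = {e for e in host if min(e) < s - 2}         # s-2 vertices joined to everything
print(len(start), (s - 2) * n - (s - 1) * (s - 2) // 2)   # 13 13
print(weakly_saturated(n, host, start, s))          # True
\end{verbatim}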
Again, many variants of this problem with different host graphs have been studied \cite{A85,MS,MNS}, and it turns out to be quite interesting for random graphs, as well. In this case we are able to determine the weak saturation number exactly. It is worth pointing out that this number is linear in $n$, as opposed to the saturation number, which is of the order $n\log n$.
\begin{THM} \label{thm:weaksat}
Let $0<p<1$ be some constant probability and $s\ge 3$ be an integer. Then
\[ w\textrm{-}sat(G(n,p),K_s) = (s-2)n- \binom{s-1}{2} \]
with high probability.
\end{THM}
We will prove Theorem~\ref{thm:main} in Section~\ref{sec:strongsat} and Theorem~\ref{thm:weaksat} in Section~\ref{sec:weaksat}. We finish the paper with a discussion of open problems in Section~\ref{sec:last}.
\textbf{Notation.} All our results are about $n$ tending to infinity, so we often tacitly assume that $n$ is large enough. We say that some property holds {\em with high probability}, or {\em whp}, if the probability tends to 1 as $n$ tends to infinity. In this paper $\log$ stands for the natural logarithm unless specified otherwise in the subscript. For clarity of presentation, we omit floor and ceiling signs whenever they are not essential. We use the standard notations of $G[S]$ for the subgraph of $G$ induced by the vertex set $S$, and $G[S,T]$ for the (bipartite) subgraph of $G[S\cup T]$ containing the $S$-$T$ edges of $G$. For sets $A,B$ and element $x$, we will sometimes write $A+x$ for $A\cup \{x\}$ and $A-B$ for $A\setminus B$.
\section{Strong saturation} \label{sec:strongsat}
In this section we prove Theorem~\ref{thm:main} about $K_s$-saturation in random graphs.
Let us say that a graph $H$ completes a vertex pair $\{u,v\}$ if adding the edge $uv$ to $H$ creates a new copy of $K_s$. Using this terminology, a $K_s$-free subgraph $H\subseteq G$ is $K_s$-saturated in $G$, if and only if $H$ completes all edges of $G$ missing from $H$.
We will make use of the following bounds on the tail of the binomial distribution.
\begin{CLAIM} \label{lem:chern}
Let $0<p<1$ be a constant and $X\sim Bin (n,p)$ be a binomial random variable. Then, for sufficiently large $n$,
\begin{enumerate}
\item $\mathbf{P}[X \ge np+a] \le e^{-\frac{a^2}{2(np+a/3)}}$,
\item $\mathbf{P}[X \le np-a] \le e^{-\frac{a^2}{2np}}$ and
\item $\mathbf{P}\left[X \le \frac{n}{\log^2 n}\right] \le (1-p)^{n-\frac{n}{\log n}}$.
\end{enumerate}
\end{CLAIM}
\begin{proof}
The first two statements are standard Chernoff-type bounds (see e.g. \cite{JLRBOOK}). The third can be proved using a straightforward union-bound argument as follows.
Let us think about $X$ as the cardinality of a random subset $A\subseteq [n]$, where every element in $[n]$ is included in $A$ with probability $p$, independently of the others. $X\le \frac{n}{\log^2 n}$ means that $A$ is a subset of some set $I\subseteq[n]$ of size $\frac{n}{\log^2 n}$. For a fixed $I$, the probability that $A\subseteq I$ is $(1-p)^{n-|I|}$. We can choose an $I$ of size $\frac{n}{\log^2 n}$ in
\[ \binom{n}{n/\log^2 n} \le \left(\frac{en}{n/\log^2 n}\right)^{\frac{n}{\log^2 n}} \le (e\log^2 n)^{\frac{n}{\log^2 n}} \le e^{\frac{3n\log\log n}{\log^2 n}}\]
different ways, so
\[ \mathbf{P}\left[X \le \frac{n}{\log^2 n}\right] \le e^{\frac{3n\log\log n}{\log^2 n}} \cdot (1-p)^{n-\frac{n}{\log^2 n}} \le (1-p)^{n-\frac{n}{\log n}}. \]
\end{proof}
\subsection{Lower bound}
First, we prove that any $K_s$-saturated graph in $G(n,p)$ contains at least $(1+o(1))n\log_{1/(1-p)} n$ edges. In fact, our proof does not use the property that $K_s$-saturated graphs are $K_s$-free, only that adding any missing edge from the host graph creates a new copy of $K_s$. Now if such an edge creates a new $K_s$ then it will of course create a new $K_3$, as well, so it is enough to show that our lower bound holds for triangle-saturation.
\begin{THM} \label{thm:main_lower}
Let $0<p<1$ be a constant. Then with high probability, $G=G(n,p)$ satisfies the following. If $H$ is a subgraph of $G$ such that for any edge $e\in G$ missing from $H$, adding $e$ to $H$ creates a new triangle, then $H$ contains at least $n\log_{1/(1-p)} n- 6n\log_{1/(1-p)} \log_{1/(1-p)} n$ edges.
\end{THM}
\begin{proof}
Let $H$ be such a subgraph and set $\alpha=\frac{1}{1-p}$. Let $A$ be the set of vertices that are incident to at least $\log_{\alpha}^2 n$ edges in $H$, and let $B=[n]-A$ be the rest. If $|A|\ge \frac{2n}{\log_{\alpha} n}$ then $H$ contains at least $\frac{1}{2}|A|\log_{\alpha}^2n\ge n\log_{\alpha} n$ edges and we are done. So we may assume $|A|\le \frac{2n}{\log_{\alpha} n}$ and hence $|B|\ge n(1-\frac{2}{\log_{\alpha} n})$. Our aim is to show that whp every vertex in $B$ is adjacent to at least $\log_{\alpha} n- 5\log_{\alpha} \log_{\alpha} n$ vertices of $A$ in $H$. This would imply that $H$ contains at least $|B|(\log_{\alpha} n- 5\log_{\alpha} \log_{\alpha} n)\ge n(\log_{\alpha} n- 6\log_{\alpha} \log_{\alpha} n)$ edges, as needed.
So pick a vertex $v\in B$ and let $N$ be its neighborhood in $A$ in the graph $H$. Let $uv$ be an edge of the random graph $G$ missing from $H$. Since $H$ is $K_3$-saturated, we know that $u$ and $v$ must have a common neighbor $w$ in $H$. Notice that for all but at most $\log_{\alpha}^4 n$ choices of $u$, this $w$ must lie in $A$. Indeed, $v$ is in $B$, so there are at most $\log_{\alpha}^2 n$ choices for $w$ to be a neighbor of $v$, and if this $w$ is also in $B$, then there are only $\log_{\alpha}^2 n$ options for $u$, as well. So the neighbors of $N$ in $H\subseteq G$ must contain all but $\log_{\alpha}^4 n$ of the vertices (outside $N$) that are adjacent to $v$ in $G$. The following claim shows that this is only possible if $|N|\ge \log_{\alpha} n- 5\log_{\alpha} \log_{\alpha} n$, thus finishing our proof.
\end{proof}
\begin{CLAIM} \label{lem:lowerlem}
Let $0<p<1$ be a constant, $\alpha=\frac{1}{1-p}$, and $G=G(n,p)$. Then whp for any vertex $x$ and any set $Q$ of size at most $\log_{\alpha} n - 5\log_{\alpha} \log_{\alpha} n$, there are at least $2\log_{\alpha}^4 n$ vertices in $G$ adjacent to $x$ but not adjacent to any of the vertices in $Q$.
\end{CLAIM}
\begin{proof}
Fix $x$ and $Q$. Then the probability that some other vertex $y$ is adjacent to $x$ but not to any of $Q$ is $p'=p(1-p)^{|Q|}\ge p\frac{\log_{\alpha}^5 n}{n}$. These events are independent for the different $y$'s, so the number of vertices satisfying this property is distributed as $Bin(n-1-|Q|,p')$. Its expectation, $p'(n-1-|Q|)$ is at least $\frac{p}{2}\log_{\alpha}^5 n$, so the probability that there are fewer than $2\log_{\alpha}^4 n$ such vertices is, by Claim~\ref{lem:chern}, at most $e^{-\Omega(\log^5 n)}$. But there are only $n$ ways to choose $x$ and $\sum_{i=1}^{\log_{\alpha} n} \binom{n}{i}\le n\cdot n^{\log_{\alpha} n}$ ways to choose $Q$, so the probability that for some $x$ and $Q$ the claim fails is
\[ n^2\cdot n^{\log_{\alpha} n} \cdot e^{-\Omega(\log^5 n)} \le \exp\left( O(\log^2 n)- \Omega(\log^5 n)\right) =o(1). \]
\end{proof}
\subsection{Upper bound}
Next, we construct a saturated subgraph of the random graph that contains $(1+o(1))n\log_{1/(1-p)} n$ edges. The following observation says that it is enough to find a graph that is saturated at almost all the edges.
\begin{OBS} \label{obs:almost}
It is enough to find a $K_s$-free subgraph $G_0\subseteq G(n,p)$ that completes all but at most $o(n\log n)$ of the missing edges. Extending $G_0$ to a maximal $K_s$-free subgraph of $G(n,p)$ then yields a $K_s$-saturated graph with an asymptotically equal number of edges, since only the edges that $G_0$ leaves incomplete can be added.
\end{OBS}
Before we give a detailed proof of the upper bound, let us sketch the main ideas for the case $s=3$. For simplicity, we will also assume $p=\frac{1}{2}$.
As we mentioned in the introduction, if we fix a set $A_1$ of $\log_{4/3}\binom{n}{2}\approx 2\log_{4/3} n$ vertices with $B_1=[n]-A_1$, then $G[A_1,B_1]$ is a $K_3$-saturated subgraph with about $n\log_{4/3} n$ edges. But we can do better than that (see Figure~\ref{fig:strongsat}):
So instead, we fix $A_1$ to be a set of $2\log_2 n$ vertices and add all edges in $G[A_1,B_1]$ to our construction. This way we complete most of the edges in $B_1$ using about $n\log_2 n$ edges. Of course, we still have plenty of edges in $G[B_1]$ left incomplete, however, as we shall see, almost all of them are induced by a small set $B_2\subseteq B_1$ of size $o(n)$. But then we can complete all the edges in $B_2$ using only $o(n\log n)$ extra edges: just take an additional set $A_2$ of $\log_{4/3} \binom{n}{2}$ vertices, and add all edges in $G[A_2,B_2]$ to our construction.
This way, however, we still need to take care of the $\Theta(n\log n)$ incomplete edges between $A_2$ and $B_3=B_1-B_2$. As a side remark, let us point out that dropping the $K_3$-freeness condition from $K_3$-saturation would make our life easier here. Indeed, then we could have just chosen $A_2$ to be a subset of $B_2$.
The trick is to take yet another set $A_3$ of $o(\log n)$ vertices, and add all the $o(n\log n)$ edges of $G$ between $A_3$ and $A_2\cup B_3$ to our construction. Now the $o(\log n)$ vertices in $A_3$ are not enough to complete all the edges between $A_2$ and $B_3$, but if $|A_3|=\omega(1)$, then it will complete {\em most} of them. This gives us a triangle-free construction on $(1+o(1))n\log_2 n$ edges completing all but $o(n\log n)$ edges, and by the observation above this is enough.
\begin{figure}
\caption{strong $K_3$-saturation}
\label{fig:strongsat}
\end{figure}
Let us now collect the ingredients of the proof. The next lemma says that the graph comprised of the edges incident to a fixed set of $a$ vertices will complete all but an approximately $(1-p^2)^a$ fraction of the edges in any prescribed set $E$.
\begin{LEMMA} \label{lem:edgecover}
Let $s\ge 3$ be a fixed integer, and suppose $a=a(n)$ grows to infinity as $n\rightarrow\infty$. Let $A\subseteq [n]$ be a set of size $a$ and $E$ be a collection of vertex pairs from $B=[n]-A$. Now consider the random graph $G_A$ defined on $[n]$ as follows:
\begin{enumerate}
\item $G_A[B]$ is empty,
\item $G_A[A]$ is a fixed graph such that any induced subgraph on $\frac{a}{\log^2 a}$ vertices contains a copy of $K_{s-2}$,
\item the edges between $A$ and $B$ are in $G_A$ independently with probability $p$.
\end{enumerate}
Then the expected number of pairs in $E$ that $G_A$ does not complete is at most $(1-p^2)^{a-\frac{a}{\log a}}\cdot |E|$, provided $n$ is sufficiently large.
\end{LEMMA}
\begin{proof}
Suppose a pair $\{u,v\}\in E$ of vertices is incomplete, i.e., adding the edge $uv$ to $G_A$ does not create a $K_s$. Because of the second condition, the probability of this event can be bounded from above by the probability that $u$ and $v$ have fewer than $\frac{a}{\log^2 a}$ common neighbors in $A$. As the size of the common neighborhood of $u$ and $v$ is distributed as $Bin(a,p^2)$, Claim~\ref{lem:chern} implies that this probability is at most $(1-p^2)^{a-\frac{a}{\log a}}$. This bound holds for any pair in $E$, so the expected number of bad pairs is indeed no greater than $(1-p^2)^{a-\frac{a}{\log a}}\cdot |E|$.
\end{proof}
A $K_s$-saturated graph cannot contain any cliques of size $s$. So if we want to construct such a graph using Lemma~\ref{lem:edgecover}, we need to make sure that $G_A$ itself is $K_s$-free. The easiest way to do this is by requiring that $G_A[A]$ not contain any $K_{s-1}$. In our application, $G_A$ will be a subgraph of $G(n,p)$, so in particular, we need $G_A[A]$ to be a subgraph of an Erd\H{o}s-R\'enyi random graph that is $K_{s-1}$-free but induces no large $K_{s-2}$-free subgraph. Krivelevich \cite{K95} showed that whp we can find such a subgraph.
\begin{THM}[Krivelevich] \label{thm:kriv}
Let $s\ge 3$ be an integer, $p\ge cn^{-2/s}$ (for some small constant $c$ depending on $s$) and $G=G(n,p)$. Then whp $G$ contains a $K_{s-1}$-free subgraph $H$ that has no $K_{s-2}$-free induced subgraph on $n^{\frac{s-2}{s}}\textrm{polylog } n\le \frac{n}{\log^3 n}$ vertices.
\end{THM}
We will also use the fact that random graphs typically have relatively small chromatic numbers (see e.g. \cite{JLRBOOK}):
\begin{CLAIM} \label{lem:chrom}
Let $0<p<1$ be a constant and $G=G(n,p)$. Then $\chi(G)=(1+o(1))\frac{n}{2\log_{1/(1-p)}n}$ whp.
\end{CLAIM}
We are now ready to prove our main theorem.
\begin{THM}
Let $0<p<1$ be a constant, $s\ge 3$ be a fixed integer and $G=G(n,p)$. Then whp $G$ contains a $K_s$-saturated subgraph on $(1+o(1))n\log_{1/(1-p)}n$ edges.
\end{THM}
\begin{proof}
Define $\alpha=\frac{1}{1-p}$, $\beta=\frac{1}{1-p^2}$, and set $a_1=\frac{1}{p}(1+ \frac{3}{\log\log_{\alpha} n})\log_{\alpha} n $, $a_2=(1 + \frac{2}{\log\log_{\beta} n})\log_{\beta} \binom{n}{2}$ and $a_3=\frac{a_2}{\sqrt{\log a_2}}=o(\log n)$. Let us also define $A_1,A_2,A_3$ to be some disjoint subsets of $[n]$ such that $|A_i|=a_i$, and let $B_1=[n]-(A_1\cup A_2\cup A_3)$.
Expose the edges of $G[A_1]$. By Theorem~\ref{thm:kriv} it contains a $K_{s-1}$-free subgraph $G_1$ with no $K_{s-2}$-free subset of size $\frac{a_1}{\log^3 a_1}$ whp. Let $H_1$ be the graph on vertex set $A_1\cup B_1$ such that $H_1[A_1]=G_1$, $H_1[B_1]$ is empty, and $H_1$ is identical to $G$ between $A_1$ and $B_1$. Now define $B_2$ to be the set of vertices in $B_1$ that are adjacent to fewer than
$(1+\frac{2}{\log\log_{\alpha} n})\log_{\alpha} n$ vertices of $A_1$ in the graph $H_1$. We claim that $H_1$ completes all but $O(n\log\log n)$ of the vertex pairs in $B_1$ {\em not} induced by $B_2$.
To see this, let $F$ be the set of incomplete pairs in $B_1$ not induced by $B_2$. Now fix a vertex $v\in B_1-B_2$ and denote its neighborhood in $A_1$ by $N_v$. By definition, $|N_v|\ge (1+\frac{2}{\log\log_{\alpha} n})\log_{\alpha} n$. Let $d_F(v)$ be the degree of $v$ in $F$, i.e., the number of pairs $\{u,v\}\subseteq B_1$ that $H_1$ does not complete. Conditioned on $N_v$, the size of the common neighborhood of $v$ and some $u\in B_1$ is distributed as $Bin(|N_v|,p)$, so by Claim~\ref{lem:chern} the probability that this is smaller than $\frac{a_1}{\log^3 a_1}\le \frac{|N_v|}{\log^2|N_v|}$ is at most $(1-p)^{|N_v|-|N_v|/\log|N_v|} \le (1-p)^{\log_{\alpha} n}= \frac{1}{n}$. On the other hand, if the common neighborhood contains at least $\frac{a_1}{\log^3 a_1}$ vertices in $A_1$, then it also induces a $K_{s-2}$, thus the pair is complete. Therefore $\mathbf{E}[d_F(v)]\le 1$ and $\mathbf{E}[\sum_{v\in B_1-B_2} d_F(v)]\le n$.
Here the quantity $\sum_{v\in B_1-B_2} d_F(v)$ bounds $|F|$, the number of incomplete edges in $B_1$ that are not induced by $B_2$, so by Markov's inequality this number is indeed $O(n\log\log n)$ whp. By the Observation above, we can temporarily ignore these edges, and concentrate instead on completing those induced by $B_2$. To complete them, we will use the (yet unexposed) edges touching $A_2$.
The good thing about $B_2$ is that it is quite small: For a fixed $v\in B_1$, the size of its neighborhood in $A_1$ is distributed as $Bin(a_1,p)$, so the probability that it is smaller than $(1+\frac{2}{\log\log_{\alpha} n})\log_{\alpha} n =pa_1-\frac{\log_{\alpha} n}{\log\log_{\alpha} n}$ is, by Claim~\ref{lem:chern}, at most $e^{-\log_{\alpha} n/ 4 (\log\log_{\alpha} n)^2} \ll \frac{1}{\log n}$. So by Markov, $|B_2|$, the number of such vertices, is smaller than $\frac{n}{\log n}$ whp.
Now expose the edges of $G[A_2]$. Once again, whp we can apply Theorem~\ref{thm:kriv} to find a $K_{s-1}$-free subgraph $G_2$ that has no large $K_{s-2}$-free induced subgraph. Let $H_2$ be the graph on $A_2\cup B_2$ such that $H_2[A_2]=G_2$, $H_2[B_2]$ is empty, and $H_2=G$ on the edges between $A_2$ and $B_2$. Let us apply Lemma~\ref{lem:edgecover} to $G_{A_2}=H_2$ with $E$ containing all vertex pairs in $B_2$. The lemma says that the expected number of incomplete pairs in $E$ is at most
\[ (1-p^2)^{a_2-\frac{a_2}{\log a_2}} \cdot \binom{|B_2|}{2}\le (1-p^2)^{\log_{\beta} \binom{n}{2}}\cdot \binom{n/\log n}{2}=o(1), \]
so by Markov, $H_2$ completes all of $E$ whp, in particular it completes all the edges in $G[B_2]$. Note that $|B_2| \le \frac{n}{\log n}$ also implies that $H_2$ only contains $O(n)$ edges, so adding them to $H_1$ does not affect the asymptotic number of edges in our construction.
However, the edges connecting $A_2$ to $B_3=B_1-B_2$ are still incomplete, and there are $\Theta(n\log n)$ of them, too many to ignore using the Observation. The idea is to complete most of these using the edges between $A_3$ and $A_2\cup B_3$, but we need to be a little bit careful not to create copies of $K_s$ with the edges in $H_2[A_2]=G_2$. We achieve this by splitting up $A_2$ into independent sets.
By Claim~\ref{lem:chrom}, we know that $k=\chi(G[A_2])=O\left(\frac{a_2}{\log a_2}\right)$, so $G_2\subseteq G[A_2]$ can also be $k$-colored whp. Let $A_2=\cup_{i=1}^k A_{2,i}$ be a partitioning into color classes, so here $G_2[A_{2,i}]$ is an empty graph for every $i$. Let us also split $A_3$ into $2k$ parts of size $a_4=a_3/2k$ and expose the edges in $G[A_3]$. Each of the $2k$ parts contains a $K_{s-1}$-free subgraph with no $K_{s-2}$-free subset of size $\frac{a_4}{\log^3 a_4}$ with the same probability $p_0$, and by Theorem~\ref{thm:kriv}, $p_0=1-o(1)$ (note that $a_4=\Omega(\sqrt{\log a_2})$ grows to infinity). This means that the expected number of parts not having such a subgraph is $2k(1-p_0)=o(k)$, so by Markov's inequality, whp $k$ of the parts, $A_{3,1},\ldots A_{3,k}$, do contain such subgraphs $G_{3,1},\ldots,G_{3,k}$.
Now define $H_{3,i}$ to be the graph with vertex set $A_{3,i}\cup A_{2,i}\cup B_3$ such that $H_{3,i}[A_{3,i}]=G_{3,i}$, $H_{3,i}[A_{2,i}\cup B_3]$ is empty, and the edges between $A_{3,i}$ and $A_{2,i}\cup B_3$ are the same as in $G$. Then we can apply Lemma~\ref{lem:edgecover} to show that $H_{3,i}$ is expected to complete all but a $(1-p^2)^{a_4-a_4/\log a_4}$ fraction of the $A_{2,i}$-$B_3$ pairs. This means that the expected number of edges between $A_2$ and $B_3$ that $H_3=\cup_{i=1}^k H_{3,i}$ does not complete is bounded by
\[ (1-p^2)^{\Omega(\sqrt{\log a_2})} \cdot |A_2||B_3| \le (1-p^2)^{\Omega(\sqrt{\log\log n})}\cdot O(n\log n).\]
So if $F'$ is the set of incomplete $A_2$-$B_3$ edges, then another application of Markov gives $|F'|=o(n\log n)$ whp.
It is easy to check that $H=H_1\cup H_2\cup H_3$ is $K_s$-free, since it can be obtained by repeatedly ``gluing'' together $K_s$-free graphs along independent sets, and such a process can never create an $s$-clique. To estimate the size of $H$, note that Claim~\ref{lem:chern} implies that whp all the vertices in $A_1$ have $(1+o(1))p|B_1|$ neighbors in $B_1$, so $H_1$ contains $(1+o(1))n\log_{\alpha} n$ edges. We have noted above that $H_2$ contains $O(n)$ edges and $H_3$ clearly contains at most $|A_2\cup B_3||A_3|=O\left(n\frac{\log n}{\sqrt{\log\log n}}\right)$ edges. So in total, $H$ contains $(1+o(1))n\log_{\alpha} n$ edges.
Finally, the only edges $H$ is not saturated at are either in $F$, in $F'$, touching $A_3$, or induced by $A_1\cup A_2$. There are $o(n\log n)$ edges of each of these four kinds, so any maximal $K_s$-free supergraph $H'$ of $H$ has $(1+o(1))n\log_{\alpha} n$ edges. This $H'$ is $K_s$-saturated, hence the proof is complete.
\end{proof}
\section{Weak saturation} \label{sec:weaksat}
In this section we prove Theorem~\ref{thm:weaksat} about the weak saturation number of random graphs of constant density. In fact, we prove the statement for a slightly more general class of graphs, satisfying certain pseudorandomness conditions. We will need some definitions to formulate this result.
Given a graph $G$ and a vertex set $X$, a {\em clique extension} of $X$ is a vertex set $Y$ disjoint from $X$ such that $G[Y]$ is a clique and $G[X,Y]$ is a complete bipartite graph. We define the size of this extension to be $|Y|$.
The following definition of goodness captures the important properties needed in our proof.
\begin{DEF}
A graph $G$ is $(t,\gamma)$-good if $G$ satisfies the following properties:
\begin{enumerate}
\item[P1.] For any vertex set $X$ of size $x$ and any integer $y$ such that $x,y\le t$, $G$ contains at least $\gamma n$ disjoint clique extensions of $X$ of size $y$.
\item[P2.] For any two disjoint sets $S$ and $T$ of size at least $\gamma n/2$, there is a vertex $v\in T$ and $X\subseteq S$ of size $t-1$ such that $X\cup v$ induces a clique in $G$.
\end{enumerate}
\end{DEF}
It is not hard to see that the Erd\H{o}s-R\'enyi random graphs satisfy the properties:
\begin{CLAIM} \label{lem:weak_conc}
Let $0<p<1$ be a constant and let $t$ be a fixed integer. Then there is a constant $\gamma=\gamma(p,t)>0$ such that whp $G=G(n,p)$ is $(t,\gamma)$-good.
\end{CLAIM}
\begin{proof}
To prove property P1, fix $x,y\le t$ and a set $X$ of size $x$, and split $V-X$ into groups of $y$ elements (with some leftover): $V_1,\ldots,V_m$ where $m=\floor{\frac{n-x}{y}}$. For each fixed $i\in [m]$, the probability that all the pairs induced by $V_i$ or connecting $X$ and $V_i$ are edges in $G$ is $\tilde{p}=p^{\binom{y+x}{2}-\binom{x}{2}}$, which is a constant. Let $\mathcal{B}_{X,y}$ be the event that fewer than $m\tilde{p}/2$ of the $V_i$ satisfy this. By Claim~\ref{lem:chern}, $\mathbf{P}(\mathcal{B}_{X,y})\le e^{-m\tilde{p}/8}$.
Now set $\gamma=p^{\binom{2t}{2}}/4t$, then $\gamma n\le m\tilde{p}/2$, so if $\mathcal{B}_{X,y}$ does not hold then there are at least $\gamma n$ different $V_i$'s that we can choose to be the sets $Y_i$ we are looking for. On the other hand, there are only $\sum_{x=0}^t\binom{n}{x}\le t\binom{n}{t} \le t\cdot e^{t\log n}$ choices for $X$ and $t$ choices for $y$, so
\[ \mathbf{P}(\cup_{X,y} \mathcal{B}_{X,y}) \le t^2 \cdot e^{t\log n}\cdot e^{-\gamma n/4} = o(1),\]
hence P1 is satisfied whp.
To prove property P2, notice that Theorem~\ref{thm:kriv} implies that whp any induced subgraph of $G$ on at least $\frac{n}{\log^3 n}$ vertices contains a clique of size $t$.\footnote{We should point out that this statement is much weaker than Theorem~\ref{thm:kriv} and can be easily proved directly using a simple union bound argument.} Let us assume this is the case and fix $S$ and $T$. Then if a vertex $v\in T$ has at least $\frac{n}{\log^3 n}$ neighbors in $S$, then the neighborhood contains a $t-1$-clique in $G$ that we can choose to be $X$. So if property P2 fails, then no such $v$ can have $\frac{n}{\log^3 n}$ neighbors in $S$.
The neighborhood of $v$ in $S$ is distributed as $Bin(|S|,p)$, so the probability that $v$ has fewer than $\frac{n}{\log^3 n}\le \frac{|S|}{\log^2 |S|}$ neighbors in $S$ is at most $(1-p)^{|S|-|S|/\log|S|}\le e^{-p|S|/2}\le e^{-p\gamma n/4}$ by Claim~\ref{lem:chern}. These events are independent for the different vertices in $T$, so the probability that P2 fails for this particular choice of $S$ and $T$ is $e^{-\Omega(n^2)}$. But we can only fix $S$ and $T$ in $2^{2n}$ different ways, so whp P2 holds for $G$.
\end{proof}
Now we are ready to prove the following result, which immediately implies Theorem~\ref{thm:weaksat}.
\begin{THM} \label{thm:weak_gen}
Let $G$ be $(2s,\gamma)$-good. Then
\[ w\textrm{-}sat(G,K_s)= (s-2)n- \binom{s-1}{2}. \]
\end{THM}
\begin{proof}
To prove the lower bound on $w\textrm{-}sat(G,K_s)$, it is enough to show that $G$ is weakly saturated in $K_n$. Indeed, if $H$ is weakly saturated in $G$ and $G$ is weakly saturated in $K_n$, then $H$ is weakly saturated in $K_n$: We can just add edges one-by-one to $H$, obtaining $G$, and then keep adding edges until we reach $K_n$ in such a way that every added edge creates a new copy of $K_s$. But then Theorem~\ref{thm:weak_clique} implies that $H$ contains at least $(s-2)n-\binom{s-1}{2}$ edges, which is what we want.
Actually, $G$ is not only weakly, but strongly saturated in $K_n$: property P1 implies that for any vertex pair $X=\{u,v\}$, $G$ contains a clique extension of $X$ of size $s-2$. But then adding the edge $uv$ to this subgraph creates the copy of $K_s$ that we were looking for.
Let us now look at the upper bound on $w\textrm{-}sat(G,K_s)$.
Fix a set $C$ of $s-2$ vertices that induces a complete graph (such a $C$ exists by property P1, as it is merely a clique extension of $\emptyset$ of size $s-2$). Our saturated graph $H$ will consist of $G[C]$, plus $s-2$ edges for each vertex $v\in V-C$, giving a total of $\binom{s-2}{2}+(s-2)(n-s+2) =(s-2)n- \binom{s-1}{2}$ edges. We build $H$ in steps.
Let $V'$ be the set of vertices $v\in V-C$ adjacent to all of $C$.
We start our construction with the graph $H_0\subseteq G$ on vertex set $V_0=C\cup V'$ that contains all the edges touching $C$.
Once we have defined $H_{i-1}$ on $V_{i-1}$, we pick an arbitrary vertex $v_i\in V-V_{i-1}$, and choose a set $C_i\subseteq V_{i-1}$ of $s-2$ vertices that induces a clique in $G$ with $v_i$. Again, we can find such a $C_i$ as a clique extension of $v_i\cup C$. By the definition of $V'$, this $C_i$ will lie in $V'$. Then we set $V_i=V_{i-1}\cup v_i$ and define $H_i$ to be the graph on $V_i$ that is the union of $H_{i-1}$ and the $s-2$ edges connecting $v_i$ to $C_i$. Repeating this, we eventually end up with some graph $H=H_l$ on $V=V_l$ that has $\binom{n}{2}-\binom{n-s+2}{2}$ edges. We claim it is saturated.
\begin{figure}
\caption{Weak $K_4$-saturation. Black edges are in $H$, gray edges are in $G$ but not in $H$.}
\label{fig:weaksat}
\end{figure}
We really prove a bit more: we show, by induction, that $H_i$ is weakly $K_s$-saturated in $G[V_i]$ for every $i$.
This is clearly true for $i=0$: any edge of $G[V_0]$ not contained in $H_0$ is induced by $V'$, and forms an $s$-clique with $C$.
Now assume the statement holds for $i-1$. We want to show that we can add all the remaining edges in $G_i=G[V_i]$ to $H_i$ one-by-one, each time creating a $K_s$. By induction, we can add all the edges in $G_{i-1}$, so we may assume they are already there. Then the only missing edges are the ones touching $v_i$.
If $v\in V_i$ is a clique extension of $C_i\cup v_i$ then adding $vv_i$ creates a $K_s$, so we can add this edge to the graph. Property P1 applied to $C\cup C_i\cup v_i$ to find clique extensions of size 1 shows that there are at least $\gamma n$ such vertices in $V'\subseteq V_i$. Let $N$ be the new neighborhood of $v_i$ after these additions, then $|N|\ge \gamma n$.
Now an edge $vv_i$ can also be added if $v \cup C'$ induces a complete subgraph in $G$, where $C'$ is any $s-2$-clique in $G_i[N]$. Let us repeatedly add the available edges (updating $N$ with every addition). We claim that all the missing edges will be added eventually.
Suppose not, i.e., some of the edges in $G_i$ touching $v_i$ cannot be added using this procedure. There cannot be more than $\gamma n/2$ of them, as that would contradict property P2 with $S=N$ and $T$ being the remaining neighbors of $v_i$ in $V_i$. But then take one such edge, $vv_i$, and apply property P1 to $C\cup \{v, v_i\}$. It shows that there are $\gamma n$ disjoint clique extensions of size $s-2$, that is, $\gamma n$ disjoint $s-2$-sets in $V'$ that form $s$-cliques with $\{v,v_i\}$.
But at most $\gamma n/2$ of these cliques touch a missing edge other than $vv_i$, so there is a $C'$ among them that would have been a good choice for $v$. This contradiction establishes our claim and finishes the proof of the theorem.
\end{proof}
\section{Concluding remarks} \label{sec:last}
Many of the saturation results also generalize to hypergraphs. For example, Bollob\'as \cite{B65} and Alon \cite{A85} proved that $sat(K^{r}_n,K^{r}_s)=w\textrm{-}sat(K^{r}_n,K^{r}_s)=\binom{n}{r}-\binom{n-s+r}{r}$, where $K^{r}_t$ denotes the complete $r$-uniform hypergraph on $t$ vertices. It would therefore be very interesting to see how saturation behaves in $G^{r}(n,p)$, the random $r$-uniform hypergraph.
With some extra ideas, our proofs can be adapted to give tight results about $K^{r}_{r+1}$-saturated hypergraphs, but the general $K^{r}_s$-saturated case appears to be more difficult. It is possible, however, that our methods can be pushed further to solve these questions, and we plan to return to the problem at a later occasion.
Another interesting direction is to study the saturation problem for non-constant probability ranges. For example, Theorem~\ref{thm:main} about strong saturation can be extended to the range $\frac{1}{\log^{\varepsilon(s)} n}\le p\le 1-\frac{1}{o(n)}$ in a fairly straightforward manner.
With a bit of extra work, the case of $K_3$-saturation can be further extended to $n^{-1/2}\ll p\ll 1$, where one can prove $sat(G(n,p),K_3)=(1+o(1))\frac{n}{p}\log np^2$ as sketched below.
With $p=o(1)$, the upper bound construction can be simplified: Let $G_0\subseteq G(n,p)$ contain all edges between a set $A$ of $(1+o(1))\frac{1}{p^2}\log np^2$ vertices and $V-A$. Then $G_0$ will whp complete all but $O(n/p)$ edges of $G(n,p)$, so it can be extended to a $K_3$-saturated subgraph of the desired size. The matching lower bound can be proved along the lines of Theorem~\ref{thm:main_lower}, with a more careful analysis and an extra argument showing that the edges induced by $B$ cannot complete many more than $\frac{n}{p}\log^2 n$ edges.
On the other hand, note that for $p\ll n^{-1/2}$ there are far fewer triangles in $G(n,p)$ than edges, so any $K_3$-saturated subgraph must contain almost all the edges, i.e., $sat(G(n,p),K_3)=(1+o(1))\binom{n}{2}p$. This gives us a fairly good understanding of $K_3$-saturation in the random setting.
However, $K_s$-saturation in general appears to be more difficult in sparse random graphs. In particular, when $p$ is much smaller than $\frac{1}{\log n}$, the set $A$ in our construction becomes too sparse to apply Theorem~\ref{thm:kriv} on it. This suggests that here the saturation numbers might depend more on $s$.
It is also not difficult to extend our weak saturation result, Theorem~\ref{thm:weaksat}, to the range $n^{-\varepsilon(s)}\le p\le 1$.
A next step could be to obtain results for smaller values of $p$, in particular, it would be nice to determine the exact probability range
where the weak saturation number is $(s-2)n-\binom{s-1}{2}$. Note that our lower bound holds as long as $G(n,p)$ is weakly $K_s$-saturated in $K_n$.
This problem was studied by Balogh, Bollob\'as and Morris \cite{BBM12}, who showed that the probability threshold for $G(n,p)$ to be
weakly $K_s$-saturated is around $p\approx n^{-1/\lambda(s)}\textrm{polylog } n$, where $\lambda(s)=\frac{\binom{s}{2}-2}{s-2}$. It might be possible
that $w\textrm{-}sat(G(n,p),K_s)$ changes its behavior at the same threshold.
$F$-saturation in the complete graph has been studied for various different graphs $F$ (see \cite{FFS} for references). For example, K\'aszonyi and Tuza \cite{KT86} showed that for any fixed (connected) graph $F$ on at least 3 vertices, $sat(n,F)$ is linear in $n$. As we have seen, this is not true in $G(n,p)$. However, analogous results in random host graphs could be of some interest.
\noindent{\bf Acknowledgements.}\,
We would like to thank Choongbum Lee and Matthew Kwan for stimulating discussions on the topic of this paper.
\end{document}
\begin{document}
\title[ ]{Homeomorphisms generated from overlapping affine iterated function systems}
\author{Michael F. Barnsley}
\address{Department of Mathematics\\
Australian National University\\
Canberra, ACT, Australia\\
}
\author{Brendan Harding}
\address{Department of Mathematics\\
Australian National University\\
Canberra, ACT, Australia\\
}
\author{Andrew Vince}
\address{Department of Mathematics\\
University of Florida\\
Gainesville, FL 32611-8105, USA\\
}
\email{[email protected]}
\urladdr{http://www.superfractals.com}
\begin{abstract}
We develop the theory of fractal homeomorphisms generated from pairs of
overlapping affine iterated function systems.
\end{abstract}
\maketitle
\section{\label{introsec}Introduction}
We consider a pair of dynamical systems, $W:[0,1]\rightarrow\lbrack0,1]$ and
$L:[0,1]\rightarrow\lbrack0,1],$ as illustrated in Figure \ref{figaug17}. $W$
is differentiable on both $[0,\rho]$ and $(\rho,1]$, with slope greater than
$1/\lambda>1,$ and $L$ is piecewise linear with slope $1/\gamma>1$. If
$h(W)=-\ln\gamma$ is the topological entropy of $W$ then there is
$p\in(0,1)$ such that the two systems are topologically conjugate, i.e. there
is a homeomorphism $H:[0,1]\rightarrow\lbrack0,1]$ such that $W=HLH^{-1}$.
This follows from \cite[Theorem 1]{denker}. It can also be deduced from
\cite{parry}. What is not known, prior to this work, is the explicit
relationship between $W$, on the left in Figure \ref{figaug17}$,$ and the
parameters $p$ and $\gamma$ that uniquely define $L$, on the right in Figure
\ref{figaug17}.
\begin{figure}
\caption{Theorem \ref{theorem1}.}
\label{figaug17}
\end{figure}
In this paper we prove constructively the existence of $L$ and establish
analytic expressions, that use only two itineraries of $W$, from which both
topological invariants $\gamma$ and $p$ can be deduced. We also provide a
direct construction for the graph of $H$. While $\gamma$ has been much
studied, the parameter $p$ is also of interest because it measures the
asymmetry of the set of itineraries of $W$. Indeed, one motivation is our
desire to establish, and to be able to compute, fractal homeomorphisms between
attractors of various overlapping iterated function systems, as explained in
Section \ref{masksec}, for applications such as those in \cite{BHI}. Our
approach is of a constructive character, similar to that in \cite{milnor}, but
founded in the theory of overlapping iterated function systems and associated
families of discontinuous dynamical systems. We make use of an analogue of the
kneading determinant of \cite{milnor}, appropriate for discontinuous interval
maps, and thereby avoid measure-theoretic existential demonstrations such as
those in \cite{parry, hoffbauer}.
Let $I=\{0,1\}$ and $I^{\infty}=\{0,1\}\times\{0,1\}\times...$ with the
product topology. Each point $x\in\lbrack0,1]$ has a unique itinerary
$\tau(x)\in I^{\infty},$ where the $k^{th}$ component of $\tau(x)$, denoted by
$\tau(x)_{k}$, equals $0$ or $1$ according as $W^{k}(x)\in\lbrack0,\rho],$ or
$(\rho,1],$ respectively, for all $k\in\mathbb{N}$. Corresponding to each
$x\in\lbrack0,1)$ we associate an analytic function
\[
\tau(x)(\zeta):=(1-\zeta)\sum\limits_{k=0}^{\infty}\tau(x)_{k}\zeta^{k}\text{,
}\zeta\in\mathbb{C}\text{, }\left\vert \zeta\right\vert <1.
\]
Our first main result specifies the invariants $p$ and $\gamma$ in terms of
two of these functions, and describes the homeomorphism $H$.
\begin{theorem}
\label{theorem1} The topological entropy of $W$ is $-\ln\gamma$ where $\gamma$
is the unique solution of
\begin{equation}
\tau(\rho)(\gamma)=\tau(\rho+)(\gamma),\text{ with }\tau(\rho)(\varsigma
)<\tau(\rho+)(\varsigma)\text{ for }\varsigma\in\lbrack0.5,\gamma)\text{,}
\label{innovationeq}
\end{equation}
and
\[
p=\tau(\rho)(\gamma).
\]
Moreover,
\[
H(x)=\tau(x)(\gamma)\text{, for all }x\in\lbrack0,1),\text{ and }
H(1)=1\text{.}
\]
\end{theorem}
Here $\tau(\rho+)=\underset{\varepsilon\rightarrow0}{\lim}$ $\tau
(\rho+\left\vert \varepsilon\right\vert )$. Our second main result provides a
geometrical construction of the homeomorphism $H.$ Let
\[
\square=\{(x,y)\in\mathbb{R}^{2}:0\leq x\leq1,0\leq y\leq1\},
\]
and let $\mathbb{H}$ denote the nonempty compact subsets of $\square$ with the
Hausdorff topology.
\begin{theorem}
\label{theorem2ndpart} If $gr(H)$ is the graph of $H$ then
\[
gr(H)=\bigcap\limits_{k\in\mathbb{N}}r^{k}(\square)=\lim_{k\rightarrow\infty
}r^{k}(\square)
\]
where
\[
r:\mathbb{H\rightarrow H\ni}S\mapsto F_{0}(S\cap P)\cup F_{1}(S\cap Q),
\]
\[
F_{0}:\square\rightarrow\square\mathbb{\ni}(x,y)\mapsto(\gamma x,W_{0}
^{-1}(y)),F_{1}:\square\rightarrow\square\mathbb{\ni}(x,y)\mapsto(\gamma
x+1-\gamma,W_{1}^{-1}(y)),
\]
\[
P=\{(x,y):x\leq p/\gamma,y\leq W_{0}(\rho)\}\text{, }Q=\{(x,y):x\geq
p/\gamma+1-1/\gamma,y\geq W_{1}(\rho)\}.
\]
\end{theorem}
The expression $gr(H)=\bigcap\limits_{k\in\mathbb{N}}r^{k}(\square
)=\lim_{k\rightarrow\infty}r^{k}(\square)$ is a localized version of the
expression $A=\lim_{k\rightarrow\infty}\mathcal{F}^{k}(\square)$ for the
attractor $A$ of a hyperbolic iterated function system $\mathcal{F}$ on
$\square$.
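To illustrate how Theorem~\ref{theorem1} can be used in computations, the
following numerical sketch treats the affine special case (cf.\
Section~\ref{overlapifssec}), in which $W(x)=x/a$ on $[0,\rho]$ and
$W(x)=(x-1+b)/b$ on $(\rho,1]$. It computes truncated itineraries of $\rho$ and
of $\rho+\varepsilon$ (the latter approximating $\tau(\rho+)$), locates
$\gamma$ as the first point of $[0.5,1)$ at which the two series agree, and
then evaluates $p=\tau(\rho)(\gamma)$ and $H(x)=\tau(x)(\gamma)$. The parameter
values, the truncation length and the offset $\varepsilon$ are illustrative
choices only.
\begin{verbatim}
# Numerical sketch of Theorem 1 in the affine special case
#   f_0(x) = a*x,  f_1(x) = b*x + 1 - b,
# so W(x) = x/a on [0,rho] and W(x) = (x-1+b)/b on (rho,1].
import math

a, b, rho = 0.7, 0.6, 0.55           # overlap requires 1-b <= rho <= a
K, eps = 200, 1e-12                  # truncation length, offset for tau(rho+)

def itinerary(x, K=K):
    sigma = []
    for _ in range(K):
        if x <= rho:
            sigma.append(0); x = x / a
        else:
            sigma.append(1); x = (x - (1.0 - b)) / b
        x = min(max(x, 0.0), 1.0)    # guard against round-off drift
    return sigma

def series(sigma, zeta):             # tau(x)(zeta) = (1-zeta) * sum_k sigma_k zeta^k
    return (1.0 - zeta) * sum(s * zeta ** k for k, s in enumerate(sigma))

sig_lo = itinerary(rho)              # tau(rho)
sig_hi = itinerary(rho + eps)        # approximates tau(rho+)
diff = lambda z: series(sig_hi, z) - series(sig_lo, z)

gamma, z_prev = None, 0.5            # first zero of diff on [0.5, 1)
for i in range(1, 500):
    z = 0.5 + 0.001 * i
    if diff(z) <= 0.0:
        lo, hi = z_prev, z
        for _ in range(60):          # bisection refinement
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if diff(mid) > 0.0 else (lo, mid)
        gamma = 0.5 * (lo + hi)
        break
    z_prev = z

if gamma is not None:
    p = series(sig_lo, gamma)
    H = lambda x: series(itinerary(x), gamma)
    print("gamma ~", round(gamma, 6), "  entropy ~", round(-math.log(gamma), 6))
    print("p ~", round(p, 6), "  H(0.3) ~", round(H(0.3), 6))
\end{verbatim}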
This paper is, in part, a continuation of \cite{mihalache}. In
\cite{mihalache} we analyse in some detail the topology and structure of the
address space/set of itineraries associated with $W$ and $W_{+}$. In this
paper we recall and use key results from \cite{mihalache}; but here the point
of view is that of masked iterated function systems, whereas in
\cite{mihalache} the point of view is classical symbolic dynamics. The main
innovation in this paper is the introduction and exploitation of the family of
analytic functions in equation (\ref{innovationeq}), yielding Theorem
\ref{theorem1}.
\begin{itemize}
\item Special case: the affine case; the implicit function theorem gives the
dependence of $p$ on $a$, $b$, and $\rho$.
\item Outline of sections and their contents, with focus on how the proofs work.
\item Comments on related work by Konstantin Igudesman \cite{igudesman1,
igudesman2}.
\end{itemize}
\section{\label{backgroundsec}Background and Notation}
\subsection{\label{IFSsec}Iterated function systems, their attractors, coding
maps, sections and address spaces}
Let $\mathbb{X}$ be a complete metric space. Let $f_{i}:\mathbb{X\rightarrow
}\mathbb{X}$ ($i=0,1$) be contraction mappings. Let $\mathbb{H=H(X)}$ be the
nonempty compact subsets of $\mathbb{X}$. Endow $\mathbb{H}$ with the
Hausdorff metric. We use the same symbol $\mathcal{F}$ for the hyperbolic
iterated function system $\mathcal{(}\mathbb{X};f_{0},f_{1}),$ for the set of
maps $\{f_{0},f_{1}\}$, and for the contraction mapping
\[
\mathcal{F}:\mathbb{H\rightarrow H}\text{, }S\mapsto f_{0}(S)\cup
f_{1}(S)\text{.}
\]
Let $A\in\mathbb{H}$ be the fixed point of $\mathcal{F}$. We refer to $A$ as
the \textit{attractor} of $\mathcal{F}$.
Let $I=\{0,1\}$ and let $I^{\infty}=\{0,1\}\times\{0,1\}\times...$ have the
product topology induced from the discrete topology on $I$. For $\sigma\in
I^{\infty}$ we write $\sigma=\sigma_{0}\sigma_{1}\sigma_{2}\ldots,$ where
$\sigma_{k}\in I$ for all $k\in\mathbb{N}$. The product topology on
$I^{\infty}$ is the same as the topology induced by the metric $d(\omega
,\sigma)=2^{-k}$ where $k$ is the least index such that $\omega_{k}\neq
\sigma_{k}$. It is well known that $(I^{\infty},d)$ is a compact metric space.
For $\sigma\in I^{\infty}$ and $n\in\mathbb{N}$ we write $\sigma|_{n}
=\sigma_{0}\sigma_{1}\sigma_{2}...\sigma_{n}$. The \textit{coding map} for
$\mathcal{F}$ is
\[
\pi:I^{\infty}\rightarrow A\text{, }\sigma\mapsto\lim_{k\rightarrow\infty
}f_{\sigma|_{k}}(x),
\]
where $x\in\mathbb{X}$ is fixed and $f_{\sigma|_{k}}(x)=f_{\sigma_{0}}\circ
f_{\sigma_{1}}\circ...\circ f_{\sigma_{k}}(x).$ The map $\pi:I^{\infty
}\rightarrow A$ is a continuous surjection, independent of $x$. We refer to an
element of $\pi^{-1}(x)$ as an \textit{address} of $x\in A$. A
\textit{section} for $\mathcal{F}$ is a map $\tau:A\rightarrow I^{\infty}$
such that $\pi\circ\tau=i_{A}$, the identity map on $A$. We also say that
$\tau$ is a section \textit{of} $\pi$. We refer to $\Omega=\tau(A)$ as an
\textit{address space} for $A$ (associated with $\mathcal{F}$) because
$\Omega$ is a subset of $I^{\infty}$ and it is in bijective correspondence
with $A$.
We write $\overline{E}$ to denote the closure of a set $E$. But we write
$\overline{0}=000...,\overline{1}=111...\in I^{\infty}$. For $\sigma
=\sigma_{0}\sigma_{1}\sigma_{2}\ldots\in I^{\infty}$ we write $0\sigma$ to
mean $0\sigma_{0}\sigma_{1}\sigma_{2}\ldots\in I^{\infty}$ and $1\sigma$ to
mean $1\sigma_{0}\sigma_{1}\sigma_{2}\ldots\in I^{\infty}$.
\subsection{\label{ordersec}Order relation on code space, top sections and
shift invariance}
We define a total order relation $\preceq$ on $I^{\infty},$ and on $I^{n}$ for
any $n\in\mathbb{N}$, by $\sigma\prec\omega$ if $\sigma\neq\omega$ and
$\sigma_{k}<\omega_{k}$ where $k$ is the least index such that $\sigma_{k}
\neq\omega_{k}$. For $\sigma,\omega\in I^{\infty}$ with $\sigma\preceq\omega$
we define
\begin{align*}
\lbrack\sigma,\omega] & :=\{\zeta\in I^{\infty}:\sigma\preceq\zeta
\preceq\omega\},(\sigma,\omega):=\{\zeta\in I^{\infty}:\sigma\prec\zeta
\prec\omega\},\\
(\sigma,\omega] & :=\{\zeta\in I^{\infty}:\sigma\prec\zeta\preceq
\omega\},[\sigma,\omega):=\{\zeta\in I^{\infty}:\sigma\preceq\zeta\prec
\omega\}.
\end{align*}
It is helpful to note the following alternative characterization of the order
relation $\preceq$ on $I^{\infty}.$ Since the standard Cantor set
$C\subset\lbrack0,1]\subset\mathbb{R}$ is totally disconnected, and is the
attractor of the iterated function system $([0,1];f_{0}(x)=x/3,f_{1}
(x)=x/3+2/3)$, the coding map $\pi_{C}:I^{\infty}\rightarrow C,\sigma
\mapsto\sum\limits_{k=0}^{\infty}2\sigma_{k}/3^{k+1},$ is a homeomorphism. The
order relation $\preceq$ on $I^{\infty}$ can equivalently be defined by
$\sigma\prec\omega$ if and only if $\pi_{C}(\sigma)<\pi_{C}(\omega)$.
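Computationally this characterization is convenient: to compare two
(equal-length truncations of) itineraries one simply compares their images
under $\pi_{C}$, as in the following small sketch, included for illustration
only.
\begin{verbatim}
# Compare finite truncations of two codes in I^infty via the Cantor coding map
#   pi_C(sigma) = sum_k 2*sigma_k / 3^(k+1),
# which realizes the order:  sigma < omega  iff  pi_C(sigma) < pi_C(omega).
def pi_C(sigma):
    return sum(2 * s / 3 ** (k + 1) for k, s in enumerate(sigma))

def precedes(sigma, omega):          # lexicographic order on equal-length truncations
    return pi_C(sigma) < pi_C(omega)

print(precedes([0, 1, 1, 0], [1, 0, 0, 0]))   # True:  0110... < 1000...
print(precedes([1, 0, 1], [1, 0, 0]))         # False: 101 > 100
\end{verbatim}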
The order relation $\prec$ on $I^{\infty}$ can be used to define the
corresponding \textit{top section }$\mathcal{\tau}_{top}:A\rightarrow
I^{\infty}$ for $\mathcal{F}$, according to
\[
\mathcal{\tau}_{top}(x)=\max\pi^{-1}(x)\text{.}
\]
Top sections are discussed in \cite{monthly}. Let $\Omega_{top}=\mathcal{\tau
}_{top}(A)$ and let $S:I^{\infty}\rightarrow I^{\infty}$ denote the left-shift
map $\sigma_{0}\sigma_{1}\sigma_{2}...\mapsto\sigma_{1}\sigma_{2}\sigma
_{3}...$. We have
\[
S(\Omega_{top})\subseteq\Omega_{top}
\]
with equality when $f_{1}$ is injective, \cite[Theorem 2]{monthly}.
We say that a \textit{section} $\tau$ \textit{is shift invariant} when
$S(\Omega)=\Omega$, and \textit{shift-forward invariant when }$S(\Omega
)\subset\Omega.$ The examples considered later in this paper involve shift
invariant sections.
The branches of $S^{-1}$ are $s_{i}:I^{\infty}\rightarrow I^{\infty}$ with
$s_{i}(\sigma)=i\sigma$ $(i=0,1).$ Both $s_{0}$ and $s_{1}$ are contractions
with contractivity $1/2$. $I^{\infty}$ is the attractor of the iterated
function system $(I^{\infty};s_{0},s_{1})$. We write $2^{I^{\infty}}$ to
denote the set of all subsets of $I^{\infty}.$
\subsection{\label{masksec}Masks, masked dynamical systems and masked
sections}
Sections are related to masks. A \textit{mask} $\mathcal{M}$ \textit{for}
$\mathcal{F}$ is a pair of sets, $M_{i}\subset f_{i}(A)$ $(i=0,1)$, such
that $M_{0}\cup M_{1}=A$ and $M_{0}\cap M_{1}=\emptyset.$ If the maps
$f_{i}|_{A}:A\rightarrow A$ $(i=0,1)$ are invertible, then we define a
\textit{masked dynamical system} \textit{for} $\mathcal{F}$ to be
\[
W_{\mathcal{M}}:A\rightarrow A,\text{ }M_{i}\ni x\mapsto f_{i}^{-1}(x),\text{
}(i=0,1).
\]
It is proved in \cite[Theorem 4.3]{BHI} that, given a mask $\mathcal{M},$ if
the maps $f_{i}|_{A}:A\rightarrow A$ $(i=0,1)$ are invertible, we can define a
section for $\mathcal{F}$, that we call a \textit{masked section}
$\mathcal{\tau}_{\mathcal{M}}$ for $\mathcal{F}$, by using itineraries of
$W_{\mathcal{M}}$, as follows. Let $x\in A$ and let $\left\{ x_{n}\right\}
_{n=0}^{\infty}$ be the orbit of $x$ under $W_{\mathcal{M}}$; that is,
$x_{0}=x$ and $x_{n}=W_{\mathcal{M}}^{n}(x_{0})$ for $n=1,2,...$. Define
\begin{equation}
\mathcal{\tau}_{\mathcal{M}}(x)=\sigma_{0}\sigma_{1}\sigma_{2}...
\label{itineraryeq}
\end{equation}
where $\sigma_{n}\in I$ is the unique symbol such that $x_{n}\in M_{\sigma
_{n}}$ for all $n\in\mathbb{N}$.
Sections defined using itineraries of masked dynamical systems are shift invariant.
\begin{proposition}
\label{maskprop} Let the maps $f_{i}|_{A}:A\rightarrow A$ $(i=0,1)$ be invertible.
(i) Any mask $\mathcal{M}$ for $\mathcal{F}$ defines a shift-forward invariant
section, $\tau_{\mathcal{M}}:A\rightarrow I^{\infty}$, for $\mathcal{F}$.
(ii) Let $\Omega_{\mathcal{M}}=\tau_{\mathcal{M}}(A)$. The following diagram
commutes:
\[
\begin{array}
[c]{ccc}
\Omega_{\mathcal{M}} & \overset{S|_{\Omega_{\mathcal{M}}}}{\rightarrow} &
\Omega_{\mathcal{M}}\\
\pi\downarrow\uparrow\tau_{\mathcal{M}} & & \pi\downarrow\uparrow
\tau_{\mathcal{M}}\\
A & \underset{W_{\mathcal{M}}}{\rightarrow} & A
\end{array}
.
\]
(iii)\ Any section $\tau:A\rightarrow I^{\infty}$ for $\mathcal{F}$ defines a
mask $\mathcal{M}_{\tau}$ for $\mathcal{F}.$
(iv) If the section $\tau$ in (iii) is shift-forward invariant then $\tau
=\tau_{\mathcal{M}_{\tau}}.$
\end{proposition}
\begin{proof}
(i) Compare with \cite[Theorem 4.3]{BHI}. If the maps are invertible, we can
use $\mathcal{M}$ to define an itinerary for each $x\in$ $A$, as in
(\ref{itineraryeq}), yielding a section $\tau_{\mathcal{M}}$ for $\mathcal{F}
$. By construction, $\tau_{\mathcal{M}}$ is shift-forward invariant. (ii) We
show that $\tau_{\mathcal{M}}W_{\mathcal{M}}\pi\sigma=S\sigma$ for all
$\sigma\in\Omega_{\mathcal{M}}$. Here $\pi\sigma$ is a point $x\in A$ that
possesses address $\sigma\in\Omega_{\mathcal{M}}$. But $W_{\mathcal{M}}$ acts
by applying $f_{\sigma_{0}}^{-1}$ to $x=f_{\sigma_{0}}\circ f_{\sigma_{1}
}\circ f_{\sigma_{2}}...$ yielding the point $W_{\mathcal{M}}\pi
\sigma=f_{\sigma_{1}}\circ f_{\sigma_{2}}...$ which tells us that $\sigma
_{1}\sigma_{2}\sigma_{3}..$ is \textit{an} address of $W_{\mathcal{M}}
\pi\sigma$. But since $S\sigma\in\Omega_{\mathcal{M}}$ this address must be
the unique address in $\Omega_{\mathcal{M}}$ of $W_{\mathcal{M}}\pi\sigma.$ It
follows that $\tau_{\mathcal{M}}W_{\mathcal{M}}\pi\sigma=\sigma_{1}\sigma_{2}\sigma
_{3}...=S\sigma$. (iii) Given a section $\tau:A\rightarrow I^{\infty},$ we
define a mask $\mathcal{M}_{\tau}$ by $M_{i}=\{x\in A:\tau(x)_{0}=i\}$ $(i=0,1).$
(iv) This is essentially the same as the proof of (ii).
\end{proof}
\subsection{\label{transformsec}Fractal transformations}
Let $\mathcal{G}=(\mathbb{Y};g_{1},g_{2})$ be a hyperbolic iterated function
system, with attractor $A_{\mathcal{G}}$ and coding map $\pi_{\mathcal{G}}$.
We refer to any mapping of the form
\[
\mathcal{T}_{\mathcal{FG}}:A\rightarrow A_{\mathcal{G}},x\mapsto
\pi_{\mathcal{G}}\circ\tau(x)\text{,}
\]
where $\tau$ is a section of $\mathcal{F}$, as a \textit{fractal
transformation. }Later in this paper we construct and study fractal
transformations associated with certain overlapping iterated function systems,
such as those suggested by the left-hand panel in Figure \ref{fig-trans}. We
will use part (iv) of the following result to establish Theorem \ref{theorem1}.
\begin{proposition}
[ \cite{BHI}]\label{basicconjugacythm} Let $\tau:A\rightarrow I^{\infty}$ be a
section for $\mathcal{F}$ and let $\Omega=\tau(A)$ be an address space for the
attractor $A$ of $\mathcal{F}$.
(i) If $\Omega$ is an address space for $\mathcal{G}$ then $\mathcal{T}
_{\mathcal{FG}}:A\rightarrow A_{\mathcal{G}}$ is a bijection.
(ii) If, whenever $\sigma,\omega\in\overline{\Omega}$, $\pi(\sigma)=\pi
(\omega)\Rightarrow$ $\pi_{\mathcal{G}}(\sigma)=\pi_{\mathcal{G}}(\omega)$,
then $\mathcal{T}_{\mathcal{FG}}:A\rightarrow A_{\mathcal{G}}$ is continuous.
(iii) If, whenever $\sigma,\omega\in\overline{\Omega}$, $\pi(\sigma
)=\pi(\omega)\Leftrightarrow$ $\pi_{\mathcal{G}}(\sigma)=\pi_{\mathcal{G}
}(\omega)$, then $\mathcal{T}_{\mathcal{FG}}:A\rightarrow A_{\mathcal{G}}$ is
a homeomorphism.
(iv) If $\tau$ is a masked section of $\mathcal{F}$ such that the condition in
(iii) holds then the corresponding pair of masked dynamical systems,
$W_{\mathcal{M}}:A\rightarrow A$ and, say, $W_{\mathcal{M}_{\mathcal{G}}
}:A_{\mathcal{G}}\rightarrow A_{\mathcal{G}}$ are topologically conjugate.
\end{proposition}
Here $W_{\mathcal{M}_{\mathcal{G}}}$ is defined in the obvious way, as
follows. Since $\Omega$ is an address space for the attractor $A_{\mathcal{G}
}$ of $\mathcal{G}$ and it is also shift-forward invariant (because it is a
masked address space for $\mathcal{F}$), it defines a shift-forward invariant
section $\mathcal{\tau}_{\mathcal{G}}$ for $\mathcal{G}$, so by Proposition
\ref{maskprop}(iii), it defines a mask $\mathcal{M}_{\mathcal{G}}$ for
$\mathcal{G}$ such that $\mathcal{\tau}_{\mathcal{G}}=\mathcal{\tau
}_{\mathcal{M}_{\mathcal{G}}}$; we use this latter mask to define the masked
dynamical system $W_{\mathcal{M}_{\mathcal{G}}}:A_{\mathcal{G}}\rightarrow
A_{\mathcal{G}}$.
\begin{proof}
(i) follows at once from the fact that $\Omega$ is an address space for both
$\mathcal{F}$ and $\mathcal{G}$. (ii) and (iii) are proved in \cite[Theorem
3.4]{BHI}. (iv) is immediate, based on the definition of $W_{\mathcal{M}
_{\mathcal{G}}}:A_{\mathcal{G}}\rightarrow A_{\mathcal{G}}$.
\end{proof}
\begin{remark}
All of the results in Section \ref{backgroundsec} apply to any hyperbolic
iterated function systems of the form $\mathcal{F}=(\mathbb{X};f_{1}
,f_{2},...,f_{N})$ where $N$ is an arbitrary finite positive integer.
\end{remark}
\section{\label{overlapifssec}Overlapping iterated function systems of two
monotone increasing interval maps}
\subsection{\label{generalstructuresec}General structure}
Here we consider iterated function systems, related to $W$ as introduced at
the start of Section \ref{introsec}, that involve overlapping monotone
increasing interval maps. We introduce two families of masks and characterize
the associated sections and address spaces.
Let $\mathbb{X}$ be $[0,1]\subset\mathbb{R}$ with the Euclidean metric. Let
$0<\lambda<1.$ Let
\begin{equation}
\mathcal{F}=([0,1]\subset\mathbb{R};f_{0}(x),f_{1}(x))\label{ifsequation}
\end{equation}
where $f_{i}(i)=i,0<f_{i}(y)-f_{i}(x)<\lambda(y-x)$ for all $x<y$, $(i=0,1).$
Both maps are monotone strictly increasing contractions. We also require
$f_{0}(1)=a\geq1-b=f_{1}(0)$ with $0<a,b<1$. See Figure \ref{fig-trans}. The
attractor of $\mathcal{F}$ is $A=[0,1]=f_{0}([0,1])\cup f_{1}([0,1])$ and the
coding map is $\pi:I^{\infty}\rightarrow\lbrack0,1]$.
\begin{figure}
\caption{Left: the graphs of two functions that comprise an iterated function
system $\mathcal{F}$.}
\label{fig-trans}
\end{figure}
We define a one parameter family of masks for $\mathcal{F}$,
\[
\{\mathcal{M}_{\rho}:0<1-b\leq\rho\leq a<1\},
\]
by $M_{0}=[0,\rho],$ $M_{1}=(\rho,1]$. The corresponding masked dynamical
system is
\begin{equation}
W:[0,1]\rightarrow\lbrack0,1]\ni x\mapsto\left\{
\begin{array}
[c]{c}
f_{0}^{-1}(x)\text{ if }x\in\lbrack0,\rho],\\
f_{1}^{-1}(x)\text{ otherwise},
\end{array}
\right. \label{Wequation}
\end{equation}
as graphed in Figure \ref{fig-trans}.
The masked section for $\mathcal{F}$ is $\tau=\tau(\rho),$ and the masked code
space is $\Omega=\Omega(\rho)$. The dependence on $\rho$ is implicit except
where we need to draw attention to it. For convenient reference we note that,
for all $x\in\lbrack0,1]$,
\[
\tau(x)=\sigma_{0}\sigma_{1}\sigma_{2}...\in I^{\infty}\text{ \textit{where}
}\sigma_{k}=\left\{
\begin{array}
[c]{c}
0\text{ \textit{if} }W^{k}(x)\in\lbrack0,\rho],\\
1\text{ \textit{otherwise}.}
\end{array}
\right.
\]
For example $\tau(0)=\overline{0}$ and $\tau(1)=\overline{1}$.
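For readers who wish to experiment numerically, the following short Python
sketch (illustrative only; the particular affine choices of $f_{0},f_{1}$ and
the value of $\rho$ are placeholder assumptions satisfying the overlap
conditions above) computes a truncated itinerary $\tau(x)$ by iterating the
masked dynamical system $W$ of (\ref{Wequation}).
\begin{verbatim}
# Minimal sketch: truncated itinerary tau(x) under the masked system W.
a, b = 0.7, 0.6                  # overlap condition: a >= 1 - b
rho = 0.65                       # mask parameter, 1 - b <= rho <= a
f0_inv = lambda x: x / a         # inverse of f0(x) = a*x (fixes 0)
f1_inv = lambda x: (x - (1 - b)) / b   # inverse of f1(x) = b*x + (1-b)

def W(x):
    # apply f0^{-1} on [0, rho], f1^{-1} otherwise
    return f0_inv(x) if x <= rho else f1_inv(x)

def itinerary(x, k=20):
    # first k symbols of tau(x): sigma_j = 0 iff W^j(x) lies in [0, rho]
    sigma = []
    for _ in range(k):
        sigma.append(0 if x <= rho else 1)
        x = W(x)
    return sigma

print(itinerary(0.0))   # all zeros: tau(0) = 0bar
print(itinerary(1.0))   # all ones:  tau(1) = 1bar
print(itinerary(rho))   # truncation of alpha = tau(rho)
\end{verbatim}
With these particular parameters the last line begins $01$, consistent with
the description of $\alpha$ given in Proposition \ref{trapprop} below.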
We need to understand the structure of $\overline{\Omega}$ because we will use
Proposition \ref{basicconjugacythm} to prove Theorem \ref{theorem1}. Since
$\Omega\subset I^{\infty}$ is totally disconnected and $A=[0,1]$ is connected,
it follows that $\tau:A\rightarrow\Omega$ is not a homeomorphism and hence
that $\overline{\Omega}\neq\Omega$. (Note that if $\overline{\Omega}=\Omega$
then $\tau:A\rightarrow\Omega$ is a homeomorphism, \cite[Theorem 3.2 (v)]{BHI}.)
To help to describe $\overline{\Omega}$ we introduce the mask
$\mathcal{M}_{\rho}^{+}=\{M_{0}^{+}=[0,\rho),M_{1}^{+}=[\rho,1]\}$ for $\mathcal{F}$.
Let $\Omega_{+}$ be the address space associated with the mask $\mathcal{M}
_{\rho}^{+}$, and let $\tau^{+}:[0,1]\rightarrow\Omega_{+}$ be the
corresponding section. The corresponding masked dynamical system $W_{+}$ is
obtained by replacing $[0,\rho]$ by $[0,\rho)\ $in (\ref{Wequation}).
Proposition \ref{mihalachethm} is a summary of some results in
\cite{mihalache}, that concern the spaces $\Omega,\Omega_{+},\overline{\Omega
}$ and the sections $\tau$ and $\tau^{+}$. In particular, it describes the
monotonicity of $\tau$ and the subtle relationship between $\tau$ and
$\tau^{+}$, and it provides a characterization of $\overline{\Omega}$ in terms
of two itineraries.
\begin{proposition}
[\cite{mihalache}]\label{mihalachethm} For all $\rho\in\lbrack1-b,a],$
(i) $\Omega$ is closed from the left, $\Omega_{+}$ is closed from the right,
and
\[
\overline{\Omega}=\overline{\Omega_{+}}=\Omega\cup\Omega_{+}=\overline
{\Omega\cap\Omega_{+}};
\]
(ii) $\pi^{-1}(x)\cap\overline{\Omega}=\{\tau(x),\tau^{+}(x)\}$ for
all $x\in[0,1]$;
(iii) for all $x,y\in\lbrack0,1]$ with $x<y$,
\[
\tau(x)\preceq\tau^{+}(x)\prec\tau(y)\preceq\tau^{+}(y);
\]
(iv) for all $x\in\lbrack0,1]$, $\tau(x)\neq\tau^{+}(x)$ if and only if $x\in
W^{-k}(\rho)$ for some $k\in\mathbb{N}$;
(v) $\tau(W(x))=S(\tau(x))$ and $\tau^{+}(W_{+}(x))=S(\tau^{+}(x))$ for all
$x\in\lbrack0,1]$;
(vi) $S(\Omega)=\Omega;$ $S(\Omega_{+})=\Omega_{+};$ $S(\overline{\Omega
})=\overline{\Omega};$
(vii) let $\tau(\rho)=\alpha$ and $\tau^{+}(\rho)=\beta$,
\begin{align*}
\Omega & =\{\sigma\in I^{\infty}:S^{k}(\sigma)\in\lbrack\overline{0},\alpha]
\cup(\beta,\overline{1}]\text{ for all }k\in\mathbb{N}\}\text{;}\\
\Omega_{+} & =\{\sigma\in I^{\infty}:S^{k}(\sigma)\in\lbrack\overline{0},\alpha)
\cup\lbrack\beta,\overline{1}]\text{ for all }k\in\mathbb{N}\}\text{;}\\
\overline{\Omega} & =\{\sigma\in I^{\infty}:S^{k}(\sigma)\in\lbrack\overline{0},\alpha]
\cup\lbrack\beta,\overline{1}]\text{ for all }k\in\mathbb{N}\}.
\end{align*}
\end{proposition}
\begin{proof}
Proof of (i): This is \cite[Proposition 2]{mihalache}.
Proof of (ii): From the definition of the section $\tau$ we have
$\tau(x)=\pi^{-1}(x)\cap\Omega$. Similarly $\tau^{+}(x)=\pi^{-1}(x)\cap\Omega_{+}.$
The result now follows at once from (i).
Proof of (iii) and (iv): These are equivalent to \cite[section 2, (5) and
(6)]{mihalache}.
Proof of (v) and (vi): These are the content of \cite[Proposition
1]{mihalache}$.$
Proof of (vii): This follows from \cite[Proposition 3]{mihalache}.
\end{proof}
Let $\mathfrak{F}$ denote the set of all iterated function systems of the form
of $\mathcal{F}$ described above, at the start of Section
\ref{generalstructuresec}. Let $\widetilde{\mathcal{F}}\in\mathfrak{F}$, and
let corresponding quantities be denoted by tildes. That is, let
\[
\widetilde{\mathcal{F}}=([0,1]\subset\mathbb{R};\widetilde{f}_{0}
(x),\widetilde{f}_{1}(x))
\]
where $\widetilde{f}_{i}(i)=i,0<\widetilde{f}_{i}(y)-\widetilde{f}
_{i}(x)<\widetilde{\lambda}(y-x)$ for all $x<y$, $(i=0,1)\ $where
$\widetilde{f}_{0}(1)=\widetilde{a}\geq1-\widetilde{b}=\widetilde{f}_{1}(0)$
with $0<\widetilde{a},\widetilde{b}<1$. Let $\mathcal{M}_{\widetilde{\rho}
}=\{[0,\widetilde{\rho}],(\widetilde{\rho},1]\}$ for $\widetilde{\rho}
\in\lbrack1-\widetilde{b},\widetilde{a}]$ be a family of masks for
$\widetilde{\mathcal{F}},$ analogous to the masks $\mathcal{M}_{\rho}$ for
$\mathcal{F}$, and let $\widetilde{\tau}$ and $\widetilde{\tau}^{+}$ be the
corresponding sections for $\widetilde{\mathcal{F}}$, analogous to the
sections $\tau$ and $\tau^{+}$ for $\mathcal{F}$. Let $\widetilde{W}
:[0,1]\rightarrow\lbrack0,1]$ denote the masked dynamical system for
$\widetilde{\mathcal{F}}$ corresponding to the mask $\mathcal{M}
_{\widetilde{\rho}}$.
\begin{corollary}
\label{corollary1}The following statements are equivalent:
(i) the fractal transformation $T_{\mathcal{F}\widetilde{\mathcal{F}}}
=\widetilde{\pi}\circ\tau:[0,1]\rightarrow\lbrack0,1]$ is an orientation
preserving homeomorphism;
(ii) $\widetilde{\tau}(\widetilde{\rho})=\tau(\rho)$ and $\widetilde{\tau}
^{+}(\widetilde{\rho})=\tau^{+}(\rho);$
(iii) the masked dynamical systems $W:[0,1]\rightarrow\lbrack0,1]$ and
$\widetilde{W}:[0,1]\rightarrow\lbrack0,1]$ are topologically conjugate under
an orientation preserving homeomorphism.
\begin{proof}
Proof that (iii)$\Rightarrow$(ii): Let the homeomorphism be $H:[0,1]\rightarrow
\lbrack0,1]$, with $H(1)=1,$ so that $W=H^{-1}\widetilde{W}H$. Both
systems have the same set of itineraries and $x=H(\rho)=\widetilde{\rho}$ is
the location of the discontinuity of $\widetilde{W}$. The two itineraries
associated with the discontinuity must be the same for both systems, and in
the same order, because the homeomorphism is order preserving.

Proof that (ii)$\Rightarrow$(iii): By Proposition \ref{mihalachethm} (vii) the
closure of the address spaces for the two systems is the same, namely
$\overline{\Omega}$. We are going to use Proposition \ref{basicconjugacythm}
(iv), so we need to check the condition in Proposition \ref{basicconjugacythm}
(iii). Suppose that $\sigma,\omega\in\overline{\Omega}$ and suppose that
$\pi(\sigma)=\pi(\omega)$. We need to show that $\widetilde{\pi}(\sigma
)=\widetilde{\pi}(\omega)$. Since $\overline{\Omega}=\Omega\cup\Omega_{+}$ by
Proposition \ref{mihalachethm} (i), we must have either $\sigma,\omega
\in\Omega$ or $\sigma,\omega\in\Omega_{+}$ or, without loss of generality,
$\sigma\in\Omega$ and $\omega\in\Omega_{+}$. If $\sigma,\omega\in\Omega$ then
$\pi(\sigma)=\pi(\omega)$ implies $\sigma=\tau\circ\pi(\sigma)=\tau\circ
\pi(\omega)=\omega,$ so $\sigma=\omega$, whence $\widetilde{\pi}(\sigma
)=\widetilde{\pi}(\omega).$ Similarly, if $\sigma,\omega\in\Omega_{+}$ then
also $\widetilde{\pi}(\sigma)=\widetilde{\pi}(\omega)$, but this time use
$\tau^{+}$.

Now suppose $\sigma\in\Omega$, $\omega\in\Omega_{+},$ and $\pi(\sigma
)=\pi(\omega)$. Then we have $\widetilde{\pi}(\sigma)=\widetilde{\pi}
\circ\tau\circ\pi(\sigma)$ and $\widetilde{\pi}(\omega)=\widetilde{\pi}
\circ\tau^{+}\circ\pi(\omega)=\widetilde{\pi}\circ\tau^{+}\circ\pi(\sigma)$.
Again, if $\tau\circ\pi(\sigma)=\tau^{+}\circ\pi(\omega)(=\tau^{+}\circ
\pi(\sigma))$ we have $\widetilde{\pi}(\sigma)=\widetilde{\pi}(\omega)$. So
we suppose $\tau\circ\pi(\sigma)\neq\tau^{+}\circ\pi(\omega)(=\tau^{+}\circ
\pi(\sigma)).$ But by Proposition \ref{mihalachethm} (iv), $\tau\circ\pi
(\sigma)\neq\tau^{+}\circ\pi(\sigma)$ iff $\pi(\sigma)(=\pi(\omega))\in
W^{-k}(\rho)$ for some $k\in\mathbb{N}$. But $\pi(\sigma)\in W^{-k}(\rho)$
implies $\tau\circ\pi(\sigma)=\sigma_{0}\sigma_{1}...\sigma_{k-1}\tau(\rho)$
for some symbols $\sigma_{0},...,\sigma_{k-1}$, and $\tau^{+}\circ\pi
(\sigma)=\tau^{+}\circ\pi(\omega)=\sigma_{0}\sigma_{1}...\sigma_{k-1}\tau
^{+}(\rho),$ whence $\widetilde{\pi}(\omega)=\widetilde{\pi}\circ\tau^{+}
\circ\pi(\omega)=\widetilde{f}_{\sigma_{0}\sigma_{1}...\sigma_{k-1}}
\circ\widetilde{\pi}\circ\tau^{+}(\rho)$ and $\widetilde{\pi}(\sigma
)=\widetilde{\pi}\circ\tau\circ\pi(\sigma)=\widetilde{f}_{\sigma_{0}\sigma
_{1}...\sigma_{k-1}}\circ\widetilde{\pi}\circ\tau(\rho);$ but by (ii) these
latter two quantities are the same. Hence the condition in Proposition
\ref{basicconjugacythm} (iii) is satisfied, and (iv) applies.

The other direction of the last part here is essentially the same, but with
the roles of the tilde and non-tilde quantities swapped.

Proof that (i) and (iii) are equivalent is similar to the above, again using
Proposition \ref{basicconjugacythm}.
\end{proof}
\end{corollary}
We conclude this section by outlining a direct proof of Theorem
\ref{directtheorem}. Let $\mathcal{F}$ and $\mathcal{G}$ be two overlapping
IFSs, as discussed in this paper, and let the corresponding masked dynamical
systems (see Figure \ref{figaug17}) have address spaces $\Omega_{\mathcal{F}}$
and $\Omega_{\mathcal{G}},$ respectively. Here $\tau_{\mathcal{F}}
:[0,1]\rightarrow\Omega_{\mathcal{F}}$ is the masked section for
$\mathcal{F}$, equal to the mapping from $x$ to the itinerary of $x,$ and
$\pi_{\mathcal{F}}:\Omega_{\mathcal{F}}\rightarrow\lbrack0,1]$ is the
(continuous, onto, restriction to $\Omega_{\mathcal{F}}$ of the) coding map
for $\mathcal{F}.$ Here too, $\tau_{\mathcal{G}}:[0,1]\rightarrow
\Omega_{\mathcal{G}}$ is the masked section for $\mathcal{G}$, equal to the
mapping from $x$ to the itinerary of $x,$ and $\pi_{\mathcal{G}}
:\Omega_{\mathcal{G}}\rightarrow\lbrack0,1]$ is the (continuous, onto,
restriction to $\Omega_{\mathcal{G}}$ of the) coding map for $\mathcal{G}.$
Note that $\pi_{\mathcal{F}}=\tau_{\mathcal{F}}^{-1}$ and $\pi_{\mathcal{G}}
=\tau_{\mathcal{G}}^{-1}$.
Let $\overline{\pi}_{\mathcal{F}}:\overline{\Omega_{\mathcal{F}}}
\rightarrow\lbrack0,1]$ and $\overline{\pi}_{\mathcal{G}}:\overline
{\Omega_{\mathcal{G}}}\rightarrow\lbrack0,1]$ be the unique continuous
extensions of $\pi_{\mathcal{F}}$ and $\pi_{\mathcal{G}}$ respectively to the
closures of their domains. (They are the same as the restrictions of the
original coding maps (that act on $I^{\infty}$) to the closures of the
respective masked address spaces.)
\begin{theorem}
\label{directtheorem}Let $\Omega_{\mathcal{F}}=\Omega_{\mathcal{G}}$. Then
$\pi_{\mathcal{G}}\tau_{\mathcal{F}}:[0,1]\rightarrow\lbrack0,1]$ is a
homeomorphism. (Its inverse is $\pi_{\mathcal{F}}\tau_{\mathcal{G}}$.)
\end{theorem}
\begin{proof}
Let $\Omega=\Omega_{\mathcal{F}}=\Omega_{\mathcal{G}}$. Consider the mapping
$\pi_{\mathcal{G}}\tau_{\mathcal{F}}\overline{\pi}_{\mathcal{F}}
:\overline{\Omega}\rightarrow\lbrack0,1]$. It is observed that $\tau
_{\mathcal{F}}\overline{\pi}_{\mathcal{F}}\sigma=\sigma$ for all $\sigma
\in\Omega$, so $\pi_{\mathcal{G}}\tau_{\mathcal{F}}\overline{\pi
}_{\mathcal{F}}\sigma=\pi_{\mathcal{G}}\sigma$ is continuous at all points
$\sigma\in\Omega.$ In fact, since $\pi_{\mathcal{G}}$ is uniformly continuous
it follows that $(\pi_{\mathcal{G}}\tau_{\mathcal{F}}\overline{\pi
}_{\mathcal{F}})|_{\Omega}:\Omega\rightarrow\lbrack0,1]$ is uniformly
continuous. Hence it has a unique continuous extension to $\overline{\Omega
},$ namely $\overline{\pi}_{\mathcal{G}}.$

We want to show that this continuous extension is the same as $\pi
_{\mathcal{G}}\tau_{\mathcal{F}}\overline{\pi}_{\mathcal{F}}$. This will show
that $\pi_{\mathcal{G}}\tau_{\mathcal{F}}\overline{\pi}_{\mathcal{F}
}:\overline{\Omega}\rightarrow\lbrack0,1]$ is continuous.

Suppose $\overline{\sigma}\in\overline{\Omega}\backslash\Omega$. Then, by what
we know about the masked code spaces in question, there is a second point
$\sigma\in\Omega$ such that $\pi_{\mathcal{G}}\sigma=\overline{\pi
}_{\mathcal{G}}\overline{\sigma}$ and $\pi_{\mathcal{F}}\sigma=\overline{\pi
}_{\mathcal{F}}\overline{\sigma}$. Hence we have $\pi_{\mathcal{G}}
\tau_{\mathcal{F}}\overline{\pi}_{\mathcal{F}}\overline{\sigma}
=\pi_{\mathcal{G}}\tau_{\mathcal{F}}\pi_{\mathcal{F}}\sigma=\pi_{\mathcal{G}
}\sigma=\overline{\pi}_{\mathcal{G}}\overline{\sigma},$ which is what we
wanted to show. This establishes the first part of the proof: $\pi
_{\mathcal{G}}\tau_{\mathcal{F}}\overline{\pi}_{\mathcal{F}}:\overline{\Omega
}\rightarrow\lbrack0,1]$ is continuous.

Now use the facts that (1) $\pi_{\mathcal{G}}\tau_{\mathcal{F}}\overline{\pi
}_{\mathcal{F}}:\overline{\Omega}\rightarrow\lbrack0,1]$ is continuous and (2)
$\overline{\pi}_{\mathcal{F}}:\overline{\Omega}\rightarrow\lbrack0,1]$ is
continuous to conclude, by a well-known theorem (i.e. the one cited in the
Monthly), that $\pi_{\mathcal{G}}\tau_{\mathcal{F}}:[0,1]\rightarrow
\lbrack0,1]$ is continuous.

Similarly one proves that $\pi_{\mathcal{F}}\tau_{\mathcal{G}}
:[0,1]\rightarrow\lbrack0,1]$ is continuous. Finally check that $\pi
_{\mathcal{F}}\tau_{\mathcal{G}}\pi_{\mathcal{G}}\tau_{\mathcal{F}
}=i_{[0,1]}$.
\end{proof}
\subsection{\label{trapsec}The special structure of the trapping region}
The material in this section is not needed for the proofs of Theorems
\ref{theorem1} and \ref{theorem2ndpart}, but it is of independent interest
because it concludes with Theorem \ref{shifttheorem}, which connects the
present work to general binary symbolic dynamical systems.
Consider $W:[0,1]\rightarrow\lbrack0,1]$ as at the start of Section
\ref{introsec}. It is readily established that $W(0)=0$, $W(1)=1,$ and, for
any given $x\in(0,1),$ there exists an integer $K$ so that
\[
W^{k}(x)\in(W_{1}(\rho),W_{0}(\rho)]\text{ for all }k>K.
\]
Similarly for $W^{+}:[0,1]\rightarrow\lbrack0,1]$ we have $W^{+}(0)=0$,
$W^{+}(1)=1,$ and, for any given $x\in(0,1),$ there exists an integer $K$ so
that
\[
(W^{+})^{k}(x)\in\lbrack W_{1}(\rho),W_{0}(\rho))\text{ for all }k>K.
\]
We call $D=$ $[W_{1}(\rho),W_{0}(\rho)]$ the \textit{trapping region}. It is
readily checked that both $W|_{D}:D\rightarrow D$ and $W^{+}|_{D}:D\rightarrow
D$ are topologically transitive, and both are sensitively dependent on initial conditions.
The sets $D$ and $\Omega$ can be described in terms of $\tau(D)$. There is
some redundancy among the statements in Proposition \ref{trapprop}, but all
are informative. We define $s_{i}:I^{\infty}\rightarrow I^{\infty}$ by
$s_{i}(\sigma_{0}\sigma_{1}\sigma_{2}...)=i\sigma_{0}\sigma_{1}\sigma
_{2}...(i=0,1).$
\begin{proposition}
[\cite{mihalache}]\label{trapprop} Let $\tau(\rho)=\alpha$ and $\tau^{+}
(\rho)=\beta$.
(i) $\overline{\tau(D)}=\tau(D)\cup\tau^{+}(D);$
(ii) $S(\tau(D))=\tau(D),S(\tau^{+}(D))=\tau^{+}(D),$ and $S(\overline
{\tau(D)})=\overline{\tau(D)}$;
(iii) $\alpha=01\alpha_{2}\alpha_{3}...;$ $\beta=10\beta_{2}\beta_{3}...$;
(iv) both $S^{k}(\alpha)\in\lbrack\overline{0},\alpha]\cup\lbrack
\beta,\overline{1}]$ and $S^{k}(\beta)\in\lbrack\overline{0},\alpha
]\cup\lbrack\beta,\overline{1}]$ for all $k\in\mathbb{N};$
(v) $\overline{\tau(D)}=\bigcap\limits_{k=0}^{\infty}F^{k}([S\left(
\beta\right) ,S\left( \alpha\right) ])$ where $F:\mathbb{H}(I^{\infty
})\rightarrow\mathbb{H}(I^{\infty})$ is defined by
\[
F(\Lambda)=s_{0}([S^{2}(\beta),S(\alpha)]\cap\Lambda)\cup s_{1}([S(\beta
),S^{2}(\alpha)]\cap\Lambda);
\]
(vi) $\overline{\tau(D)}=\bigcap\limits_{k=0}^{\infty}F^{k}([S(\beta
),\alpha]\cup\lbrack\beta,S(\alpha)])$;
(vii) $\overline{\Omega}=\{\sigma\omega:\sigma\in\{0\}^{k}\cup\{1\}^{k},
\ k\in\mathbb{N},\ \omega\in\overline{\tau(D)}\}$.
\end{proposition}
\begin{proof}
These statements are all direct consequences of \cite[Proposition
3]{mihalache} and the fact that the orbits of $\alpha$ and $\beta$ under
$S|_{\overline{\Omega}}:\overline{\Omega}\rightarrow\overline{\Omega}$ must
actually remain in $\overline{\tau(D)}$, the closure of the set of addresses
of the points in the trapping region. Of particular importance are (v), (vi)
and (vii) which taken together provide a detailed description of
$\overline{\Omega}$.
\end{proof}
The following theorem provides characterizing information about \textit{all}
shift invariant subspaces of $I^{\infty}$. This is remarkable: it implies, for
example, that if $S\alpha\succ S^{2}\alpha\succ\beta\succ\alpha\succ
S^{2}\beta\succ S\beta$ then the overarching set contains a trapping region
and submits to the description in (v), (vi) and (vii).
\begin{theorem}
\label{shifttheorem} Let $\Xi\subset I^{\infty}$ be shift forward invariant,
and let the following quantities be well defined:
\begin{align*}
\beta & =\inf\{\sigma\in\Xi:\sigma_{0}=1\},\\
\alpha & =\sup\{\sigma\in\Xi:\sigma_{0}=0\}.
\end{align*}
If $S\alpha\succ\beta$ and $\alpha\succ S\beta$ (the cases where one or other
of these two conditions does not hold are very simple and readily analyzed),
then $\overline{\Xi}\subset\overline{\Omega}$ where $\overline{\Omega}$ is
defined by Proposition \ref{trapprop} (v) and (vii). In particular, if
$S\alpha\succ S^{2}\alpha\succ\beta\succ\alpha\succ$ $S^{2}\beta\succ S\beta$
then the orbits of all points, except $\overline{0}$ and $\overline{1}$ end
up, after finitely many steps, in the associated trapping region defined in
Proposition \ref{trapprop} (vi).
\end{theorem}
\begin{proof}
Begin by noting that, by definition of $\alpha$ and $\beta$ we must have
$\overline{\Xi}\subset\lbrack\overline{0},\alpha]\cup\lbrack\beta,\overline
{1}].$ Since $S\overline{\Xi}=\overline{\Xi}$ we must have $S^{k}\sigma
\in\lbrack\overline{0},\alpha]\cup\lbrack\beta,\overline{1}]$ for all
$k\in\mathbb{N},$ for all $\sigma\in\Xi$. In particular, both $S^{k}
(\alpha)\in\lbrack\overline{0},\alpha]\cup\lbrack\beta,\overline{1}]$ and
$S^{k}(\beta)\in\lbrack\overline{0},\alpha]\cup\lbrack\beta,\overline{1}]$ for
all $k\in\mathbb{N}.$ The result now follows by taking inverse limits as in
the first part of the proof of Proposition 3 in \cite{mihalache}.
\end{proof}
\begin{example}
It is well known that any dynamical system $f:X\rightarrow X$ can be
represented coarsely, by choosing a subset $O\subset X$ and defining
itineraries $\sigma_{k}=0$ if $f^{k}(x)\in O$, $=1$ otherwise. The resulting
set of itineraries is forward shift invariant, so Theorem \ref{shifttheorem}
applies. In many cases one can impose conditions on the original dynamical
system, such as a topology on $X$ and continuity, to cause Theorem
\ref{shifttheorem} to yield much stronger conclusions, such as
Sharkovskii's Theorem and results of Milnor-Thurston. But even as it stands,
the theorem is very powerful, as the following simple example, which I have
not been able to find in the literature, demonstrates. Note that it is a kind
of min/max theorem.
\end{example}
\begin{example}
Let $f:\{1,2,3,...\}\rightarrow\{1,2,3,...\}\ni x\mapsto x+1$ be a dynamical
system. Let $O\subset\{1,2,3,...\}$ be the set of integers that are not
primes, $O=\{1,4,6,8,9,10,....\}$. Let $\{f^{n}(x)\}_{n=0}^{\infty}$ denote
the orbit of $x$ and let $\tau(x)=\sigma\in I^{\infty}$ denote the
corresponding symbolic orbit defined by $\sigma_{k}=0$ if $f^{k}(x)\in O$,
$=1$ otherwise. Let $\Omega=\tau(\{1,2,3,...\})$. Then $S\Omega\subset\Omega$
so we can apply Theorem \ref{shifttheorem}. We readily find that
$\alpha=011010100...$ and $\beta=1000...$. It follows that
\[
\pi_{z}(\alpha)=(1-z)\sum\limits_{P\text{=Prime}}z^{P-1}\text{ and }\pi
_{z}(\beta)=(1-z)
\]
whence the corresponding piecewise linear dynamical system is $L$ with slope
$1/z$ and $p=(1-z)$, where $z$ is the positive solution of
\[
z=\sum\limits_{P\text{=Prime}}z^{P}\text{.}
\]
It is readily verified, with the help of Theorem 1, that the dynamical system
$L=L_{\gamma=1/z,p=(1-z)}$ has this property: for all $n=0,1,2,...$ we have
$L^{n}(1-z)>(1-z)$ iff $n+1$ is prime.
\end{example}
\begin{example}
This result also holds numerically, approximately: using Maple we find that
the solution to $\sum\limits_{P\text{=Prime,}P\leq29}z^{P}=z$ is approximately
$\tilde{z}=0.55785866$. Then, setting $\tilde{L}=L_{\gamma=1/\tilde
{z},p=(1-\tilde{z})}$, we find numerically that for $n\leq19,$ $\tilde{L}
^{n}(1-\tilde{z})\in(1-\tilde{z},1]$ iff $n+1$ is prime.
\end{example}
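The following Python sketch (a numerical illustration only) reproduces this
computation without Maple. It finds the positive root of
$\sum_{P\leq29}z^{P}=z$ by bisection and then iterates the piecewise linear
map; here we take $L_{\gamma,p}$, with $\gamma=1/z$ and $p=1-z$, to be the
map whose two increasing branches are $x\mapsto x/z$ on $[0,p]$ and
$x\mapsto(x-(1-z))/z$ otherwise, which matches the affine masked dynamical
system of the next subsection with $a=b=z$ and $\rho=1-z$; this
identification is an assumption made for the purposes of the illustration.
\begin{verbatim}
# Numerical check of the prime example (illustrative sketch only).
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def g(z):
    # g(z) = sum_{P prime, P<=29} z^P - z ; we seek its positive root
    return sum(z**P for P in primes) - z

lo, hi = 0.4, 0.7            # g(0.4) < 0 < g(0.7), so bisection applies
for _ in range(80):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
z = (lo + hi) / 2
p = 1 - z
print(z)                      # approx 0.55785866, as reported above

def L(x):
    # expanding piecewise linear map with slope 1/z and critical point p
    return x / z if x <= p else (x - (1 - z)) / z

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

x = p                         # start the orbit at 1 - z
for n in range(20):
    print(n + 1, "L^n(1-z) > 1-z:", x > p, " prime:", is_prime(n + 1))
    x = L(x)
\end{verbatim}
For $n\leq19$ the two printed columns agree, in accordance with the claim of
the preceding example.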
\subsection{Affine systems}
Here we consider the case where $W$ is piecewise affine and, in particular, of
the form of $L$, on the right in Figure \ref{figaug17}. Let
\[
0<a,b<1,1\leq a+b,
\]
\[
\mathcal{F}=\mathcal{F}(a,b)=([0,1]\subset\mathbb{R};f_{0}(x)=ax,f_{1}(x)=bx+(1-b)).
\]
The attractor of $\mathcal{F}$ is $A=[0,1]\subset\mathbb{R}$, and the coding
map is $\pi:I^{\infty}\rightarrow\lbrack0,1].$ Define a one parameter family
of masks
\[
\{\mathcal{M}_{\rho}:0<1-b\leq\rho\leq a<1\}
\]
for $\mathcal{F}$ by $M_{0}=[0,\rho],$ $M_{1}=(\rho,1]$. The corresponding
masked dynamical system is
\[
W:[0,1]\rightarrow\lbrack0,1],x\mapsto\left\{
\begin{array}
[c]{c}
x/a\text{ if }x\in\lbrack0,\rho],\\
x/b+(1-1/b)\text{ otherwise}.
\end{array}
\right.
\]
The set of allowed parameters $\mathcal{(}a,b,\rho)$ is defined by the
simplex
\[
\mathcal{P=}\{\mathcal{(}a,b,\rho)\in\mathbb{R}^{3}:0<a,b<1,1\leq
a+b,1-b\leq\rho<a\}.
\]
We either suppress or reference $\mathcal{(}a,b,\rho)\in\mathcal{P}$ depending
on the context. The section is $\tau=\tau(a,b,\rho)$, the address space is
$\Omega=\Omega(a,b,\rho)=\tau(a,b,\rho)([0,1])=\tau([0,1]),$ and the masked
dynamical system is $W=W(a,b,\rho)$. We will denote the coding map for
$\mathcal{F}(a,b)$ by $\pi_{a,b}:I^{\infty}\rightarrow\lbrack0,1]$. In the
case where $a=b=\varsigma$ we will write this coding map as $\pi_{\varsigma
}:I^{\infty}\rightarrow\lbrack0,1]$.
The affine case is the key to Theorem 1, because we can write down explicitly
the mapping $\pi_{a,b}$ using the following explicit formulas:
\begin{align*}
f_{i}(x) & =a^{1-i}b^{i}x+i(1-b)\text{ }(i=0,1)\\
f_{\sigma_{i}}(x) & =a^{1-\sigma_{i}}b^{\sigma_{i}}x+\sigma_{i}(1-b)\\
f_{\sigma_{0}}(f_{\sigma_{1}}(x)) & =a^{1-\sigma_{0}}b^{\sigma_{0}
}a^{1-\sigma_{1}}b^{\sigma_{1}}x+(\sigma_{1}a^{1-\sigma_{0}}b^{\sigma_{0}
}+\sigma_{0})(1-b)\\
& =a^{2-\sigma_{0}-\sigma_{1}}b^{\sigma_{0}+\sigma_{1}}x+(a^{1-\sigma_{0}
}b^{\sigma_{0}}\sigma_{1}+\sigma_{0})(1-b)\\
f_{\sigma_{0}}(f_{\sigma_{1}}(f_{\sigma_{2}}(x))) & =a^{3-\sigma_{0}
-\sigma_{1}-\sigma_{2}}b^{\sigma_{0}+\sigma_{1}+\sigma_{2}}x+(a^{2-\sigma
_{0}-\sigma_{1}}b^{\sigma_{0}+\sigma_{1}}\sigma_{2}+a^{1-\sigma_{0}}
b^{\sigma_{0}}\sigma_{1}+\sigma_{0})(1-b),\\
& \text{\textit{and so on....}}
\end{align*}
We deduce
\[
\pi_{a,b}(\sigma)=(1-b)\sum\limits_{k=0}^{\infty}a^{k-\sigma_{0}-\sigma
_{1}-...-\sigma_{k-1}}b^{\sigma_{0}+\sigma_{1}+...+\sigma_{k-1}}\sigma
_{k}\text{.}
\]
Clearly, for each fixed $\sigma\in I^{\infty},$ $\pi_{a,b}(\sigma)$ is
analytic in $(a,b)\in\mathbb{C}^{2}$ with $|a|<1$ and $|b|<1.$ Also, since
\[
\left\vert \sum\limits_{k=0}^{\infty}a^{k-\sigma_{0}-\sigma_{1}-...-\sigma
_{k-1}}b^{\sigma_{0}+\sigma_{1}+...+\sigma_{k-1}}\sigma_{k}\right\vert \leq
\sum\limits_{k=0}^{\infty}\left( \max\{|a|,|b|\}\right) ^{k}\sigma_{k},
\]
we have that
\[
\left\vert \pi_{a,b}(\sigma)\right\vert \leq(1-\max\{|a|,|b|\})^{-1}|1-b|.
\]
In particular, setting $a=b=\varsigma\in(0,1)$ in the series gives
\[
\pi_{\varsigma}(\sigma)=(1-\varsigma)\sum\limits_{k=0}^{\infty}\sigma
_{k}\varsigma^{k}\text{.}
\]
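As a quick numerical sanity check of these series, the following Python
sketch (illustrative only; the address $\sigma$ and the parameter values are
arbitrary choices) evaluates truncations of $\pi_{a,b}(\sigma)$ and
$\pi_{\varsigma}(\sigma)$. For $\sigma=\overline{01}$ and $a=b=\varsigma$ the
value is $\varsigma/(1+\varsigma)$, which the truncation reproduces.
\begin{verbatim}
# Truncated evaluation of Pi_{a,b}(sigma) from the series above (sketch).
def Pi_ab(sigma, a, b):
    # Pi_{a,b}(sigma) = (1-b) * sum_k a^{k-s_k} b^{s_k} sigma_k,
    # where s_k = sigma_0 + ... + sigma_{k-1} is a partial sum of the address
    total, s = 0.0, 0
    for k, sig in enumerate(sigma):
        total += a**(k - s) * b**s * sig
        s += sig
    return (1 - b) * total

def Pi_c(sigma, c):
    # special case a = b = c: Pi_c(sigma) = (1-c) * sum_k sigma_k c^k
    return (1 - c) * sum(sig * c**k for k, sig in enumerate(sigma))

sigma = [0, 1] * 50               # truncation of the address 010101...
print(Pi_ab(sigma, 0.7, 0.6))
print(Pi_c(sigma, 0.5))           # approx 1/3 = c/(1+c) for this sigma
\end{verbatim}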
Let $\sigma\in I^{\infty}$ be fixed for now, and let $\varsigma\in\mathbb{C}$
with $\left\vert \varsigma\right\vert <1$. We have
\[
\left\vert \sum\limits_{k=0}^{\infty}\sigma_{k}\varsigma^{k}\right\vert
\leq\sum\limits_{k=0}^{\infty}\left\vert \varsigma\right\vert ^{k}
\leq(1-\left\vert \varsigma\right\vert )^{-1}\text{.}
\]
Hence $\pi_{\varsigma}(\sigma)$ is analytic for $\left\vert \varsigma
\right\vert <1$ and can be continued to a continuous function on $\left\vert
\varsigma\right\vert \leq1.$ In particular $\pi_{1}(\sigma)$ is well defined,
finite and real.
\section{Proof of\ Theorem \ref{theorem1}}
\begin{lemma}
\label{lemmaA0} If $\sigma,\omega\in\overline{\Omega}$ with $\sigma\prec
\omega$ then there is $\theta\in I^{k}$ for some $k\in\mathbb{N}$ such that
both $\theta\alpha$ and $\theta\beta$ belong to $\overline{\Omega}$ and
\[
\sigma\preceq\theta\alpha\prec\theta\beta\preceq\omega.
\]
\end{lemma}
\begin{proof}
Let $\sigma,\omega\in\overline{\Omega}$ with $\sigma\prec\omega$. Then there
is $n\in\mathbb{N}$ such that $\sigma|_{n}=\omega|_{n}$, $\sigma_{n}
=0,\omega_{n}=1$. It follows that $S^{n}\sigma\preceq\alpha\prec\beta\preceq
S^{n}\omega$ whence, setting $\theta=\sigma|_{n},$ we have the stated result.
Since $S\beta\prec\alpha\prec\beta\prec S\alpha$, it follows from
\cite[Proposition 3]{mihalache} that $\theta\alpha\in\Omega,\theta\beta
\in\Omega_{+},$ and both $\theta\alpha$ and $\theta\beta\in\overline{\Omega}.$
\end{proof}
\begin{lemma}
\label{lemmaA} If there is $\gamma\in(1/3,1)$ such that $\gamma=\inf\{\zeta
\in(0,1):\pi_{\zeta}\alpha=\pi_{\zeta}\beta\},$ and if
$\varsigma<\gamma$, $\sigma,\omega\in\overline{\Omega}$ and $\sigma\prec
\omega,$ then $\pi_{\varsigma}(\sigma)<\pi_{\varsigma}(\omega)$ and
$\pi_{\gamma}(\sigma)\leq\pi_{\gamma}(\omega)$. If there is no such $\gamma
\in(1/3,1)$, then $\sigma,\omega\in\overline{\Omega}$ and $\sigma\prec\omega,$
imply $\pi_{\varsigma}(\sigma)<\pi_{\varsigma}(\omega)$ for all $\varsigma<1$.
\end{lemma}
\begin{proof}
Suppose that there is $\gamma\in(1/3,1)$ such that $\gamma=\inf\{\zeta\in
(0,1):\pi_{\zeta}\alpha=\pi_{\zeta}\beta\}.$ Let
\[
\hat{\varsigma}=\inf\{\varsigma\in(0,1):\exists\sigma,\omega\in\overline
{\Omega},\text{ }\sigma_{0}=0,\omega_{0}=1,\text{ \textit{s.t.} }\pi
_{\varsigma}(\sigma)=\pi_{\varsigma}(\omega)\}.
\]
Using continuity of ${\mathbb P}i_{\varsigma}\sigma$ in $\varsigma$ and $\sigma,$ and
compactness of $\overline{\Omega}$ it follows that there is $\sigma,\omega
\in\overline{\Omega},$ with $\sigma_{0}=0,\omega_{0}=1,$ such that
\[
{\mathbb P}i_{{\mathbb H}at{\varsigma}}(\sigma)={\mathbb P}i_{{\mathbb H}at{\varsigma}}(\omega).
\]
Suppose ${\mathbb H}at{\varsigma}<\gamma$. So, for brevity we will assume that both
$\sigma\neq\alpha$ and $\beta\neq\omega$; it will be seen that the other two
possibilities can be handled similarly. We have that
\[
{\mathbb P}i_{1/3}(\sigma)\leq{\mathbb P}i_{1/3}(\alpha)<{\mathbb P}i_{1/3}(\beta){\mathbb P}receq{\mathbb P}i_{1/3}
(\omega)
\]
because ${\mathbb P}i_{1/3}$ is order preserving. We also have,
\[
{\mathbb P}i_{\varsigma}(\alpha)<{\mathbb P}i_{\varsigma}(\beta)\text{ for all }\varsigma
\leq{\mathbb H}at{\varsigma},
\]
Hence, by the intermediate value theorem, since ${\mathbb P}i_{{\mathbb H}at{\varsigma}}
(\sigma)={\mathbb P}i_{{\mathbb H}at{\varsigma}}(\omega)$, there is $\xi{\mathbb P}rec{\mathbb H}at{\varsigma}$
such that either ${\mathbb P}i_{\xi}(\sigma)={\mathbb P}i_{\xi}(\alpha)$ with $\sigma\neq\alpha$
or ${\mathbb P}i_{\xi}(\beta)={\mathbb P}i_{\xi}(\omega)$ with $\beta\neq\omega.$ That is,
either the graph of ${\mathbb P}i_{\xi}(\sigma)$, as a function of $\xi$, intersects
the graph of ${\mathbb P}i_{\xi}(\alpha)$ at some point $\xi<{\mathbb H}at{\varsigma}$ or the
graph of ${\mathbb P}i_{\xi}(\beta)$ intersects the graph of ${\mathbb P}i_{\xi}(\omega)$ at
some point $\xi<{\mathbb H}at{\varsigma}$. Suppose that ${\mathbb P}i_{\xi}(\sigma)={\mathbb P}i_{\xi
}(\alpha)$ with $\sigma\neq\alpha.$ Let $k$ be the least integer such that
${\mathbb P}i_{\xi}(\sigma)={\mathbb P}i_{\xi}(\alpha)$ with $(S^{k}\sigma)_{0}\neq(S^{k}
\alpha)_{0}$ and let $\{\tilde{\sigma},\tilde{\omega}\}=\{S^{k}\sigma
,S^{k}\alpha\}$ where $\tilde{\sigma}_{0}=0$ and $\tilde{\omega}_{0}=1.$ Then
we note that ${\mathbb P}i_{\xi}(\sigma)={\mathbb P}i_{\xi}(\alpha)$ implies ${\mathbb P}i_{\xi}
(\tilde{\sigma})={\mathbb P}i_{\xi}(\tilde{\omega}),$with $\tilde{\sigma}_{0}=0$ and
$\tilde{\omega}_{0}=1$. Hence $\xi\geq{\mathbb H}at{\varsigma}$ which is a
contradiction. Similarly we show that it cannot occur that ${\mathbb P}i_{\xi}
(\beta)={\mathbb P}i_{\xi}(\omega)$ with $\beta\neq\omega.$ We conclude that
${\mathbb H}at{\varsigma}=\gamma.$
Hence if $\varsigma<\gamma$, $\sigma,\omega\in\overline{\Omega}$ and
$\sigma{\mathbb P}rec\omega,$ then we cannot have ${\mathbb P}i_{\varsigma}(\sigma
)={\mathbb P}i_{\varsigma}(\omega).$ (For if so, then, similarly to above, let $k$ be
the least integer such that ${\mathbb P}i_{\xi}(\sigma)={\mathbb P}i_{\xi}(\omega)$ with
$(S^{k}\sigma)_{0}\neq(S^{k}\omega)_{0}$ and set $\{\tilde{\sigma}
,\tilde{\omega}\}=\{S^{k}\sigma,S^{k}\omega\}$ where $\tilde{\sigma}_{0}=0$
and $\tilde{\omega}_{0}=1;$ then ${\mathbb P}i_{\varsigma}(\sigma)={\mathbb P}i_{\varsigma
}(\omega)$ would imply ${\mathbb P}i_{\xi}(\tilde{\sigma})={\mathbb P}i_{\xi}(\tilde{\omega}
),$with $\tilde{\sigma}_{0}=0$ and $\tilde{\omega}_{0}=1$.) Since ${\mathbb P}i
_{1/3}(\sigma)<{\mathbb P}i_{1/3}(\omega)$ we conclude (again appealing to the
intermediate value theorem) that ${\mathbb P}i_{\varsigma}(\sigma)<{\mathbb P}i_{\varsigma
}(\omega)$ for all $\varsigma<\gamma$ and ${\mathbb P}i_{\gamma}(\sigma)\leq{\mathbb P}i
_{\gamma}(\omega)$. This completes the proof of the first part of the lemma.
Now suppose that there is no $\gamma\in(1/3,1)$ such that $\gamma=\inf
\{\zeta\in(0,1):{\mathbb P}i_{\varsigma}\alpha=$ ${\mathbb P}i_{\varsigma}\beta\}$. Then, if
$\sigma,\omega\in\overline{\Omega}$ and $\sigma{\mathbb P}rec\omega,$ the fact that
${\mathbb P}i_{1/3}(\sigma)<{\mathbb P}i_{1/3}(\omega)$ and the continuity of both
${\mathbb P}i_{\varsigma}(\sigma)$ and ${\mathbb P}i_{\varsigma}(\omega)$ in $\varsigma,$ for
all $\varsigma<1,$ imply ${\mathbb P}i_{\varsigma}(\sigma)<{\mathbb P}i_{\varsigma}(\omega)$ for
all $\varsigma<1$.
\end{proof}
The following lemma sharpens the first sentence in Lemma \ref{lemmaA}. (Recall
our notation $\theta|_{k}=\theta_{0}\theta_{1}...\theta_{k-1}$.)
\begin{lemma}
\label{lemmastrict} If there is $\gamma\in(1/3,1)$ such that $\gamma
=\inf\{\zeta\in(0,1):\pi_{\zeta}\alpha=\pi_{\zeta}\beta\},$
$\sigma,\omega\in\Omega$ and $\sigma\prec\omega,$ then $\pi_{\gamma}
(\sigma)<\pi_{\gamma}(\omega)$.
\end{lemma}
\begin{proof}
Let $\sigma,\omega\in\Omega$ and $\sigma\prec\omega.$ Then we can find
$\theta\in\Omega$ such that $\sigma\prec\theta\prec\omega$. It follows from
\cite[(3)]{mihalache} and Proposition \ref{trapprop} that we can find $k$ such
that $\sigma|_{k}\neq\theta|_{k}\neq\omega|_{k}$ and either $\{\theta|_{k}
0\beta,\theta|_{k}\alpha\}\subset\overline{\Omega}$ or $\{\theta|_{k}
\beta,\theta|_{k}1\alpha\}\subset\overline{\Omega}$. (This expresses the fact
that the $\sup$ and $\inf$ of each of the cylinder sets
\[
\left( \theta_{0}\theta_{1}...\theta_{k-1}\right) :=\{\eta\in\Omega
:\eta|_{k}=\theta|_{k}\}
\]
must belong to the inverse images of the critical point, together with
$\overline{0}$ and $\overline{1}$, see \cite[(3)]{mihalache}. By taking $k$
sufficiently large the possibilities $\overline{0}$ or $\overline{1}$ are
ruled out by density of the set of all inverse images of the critical point,
see \cite[(4)]{mihalache}.)

If $\theta_{k}\theta_{k+1}=01$ then $\theta|_{k}0\beta\in\overline{\Omega}$,
$\theta|_{k}\alpha\in\Omega,$ and
\[
\sigma\prec\theta|_{k}0\beta\prec\theta|_{k}\alpha\prec\omega;
\]
and if $\theta_{k}\theta_{k+1}=10$ then $\theta|_{k}\beta\in\overline{\Omega}
$, $\theta|_{k}1\alpha\in\Omega,$ and
\[
\sigma\prec\theta|_{k}\beta\prec\theta|_{k}1\alpha\prec\omega.
\]
Suppose $\theta_{k}\theta_{k+1}=01.$ It follows using Lemma \ref{lemmaA} that
\[
\pi_{\gamma}\sigma\leq\pi_{\gamma}\theta|_{k}0\beta\text{ and }\pi_{\gamma
}\theta|_{k}\alpha\leq\pi_{\gamma}\omega\text{.}
\]
Now observe that
\begin{align*}
\pi_{\gamma}\theta|_{k}\alpha-\pi_{\gamma}\theta|_{k}0\beta & =\gamma^{k}
(\pi_{\gamma}\alpha-\pi_{\gamma}0\beta)\\
& =\gamma^{k}(\pi_{\gamma}\alpha-\gamma\pi_{\gamma}\beta)\\
& =\gamma^{k}(1-\gamma)\pi_{\gamma}\alpha>0.
\end{align*}
It follows that
\[
\pi_{\gamma}\sigma<\pi_{\gamma}\omega.
\]
A similar calculation deals with the case $\theta_{k}\theta_{k+1}=10$. The
possibility that there is no $k$ such that $\theta_{k}\theta_{k+1}
\in\{01,10\}$ can be dealt with by related arguments.
\end{proof}
\begin{lemma}
\label{lemmaC} Let $\hat{\varsigma}\in(0,1]$ be maximal such that
$\pi_{\varsigma}\alpha<\pi_{\varsigma}\beta$ for all $\varsigma\in
(0,\hat{\varsigma}).$ Then
\[
(\pi_{\varsigma}\tau^{+}\rho-\pi_{\varsigma}\tau\rho)<(1-\varsigma
)/(1+\varsigma)
\]
for all $\varsigma<\hat{\varsigma}$.
\end{lemma}
\begin{proof}
Note the identity, for any $\sigma\in I^{\infty}$ and any $\varsigma\in(0,1),$
\[
{\mathbb P}i_{\varsigma}\sigma=(1-\varsigma)\sigma_{0}+\varsigma{\mathbb P}i_{\varsigma}
S\sigma\text{.}
\]
Note too, by a simple calculation, $\tau^{+}\rho{\mathbb P}rec S\tau\rho$ (i.e.
$\beta{\mathbb P}rec S\alpha$) and $S\tau^{+}\rho{\mathbb P}rec\tau\rho$ (i.e. $S\beta
{\mathbb P}rec\alpha$)$.$ Hence, if $\varsigma\in(0,{\mathbb H}at{\varsigma})$, then by Lemma
\ref{lemmaA},
\begin{align*}
{\mathbb P}i_{\varsigma}\tau\rho & =(1-\varsigma)(\tau\rho)_{0}+\varsigma
{\mathbb P}i_{\varsigma}S\tau\rho>(1-\varsigma)(\tau\rho)_{0}+\varsigma{\mathbb P}i_{\varsigma
}\tau^{+}\rho,\\
{\mathbb P}i_{\varsigma}\tau^{+}\rho & =(1-\varsigma)(\tau^{+}\rho)_{0}+\varsigma
{\mathbb P}i_{\varsigma}S\tau^{+}\rho<(1-\varsigma)(\tau^{+}\rho)_{0}+\varsigma
{\mathbb P}i_{\varsigma}\tau\rho,
\end{align*}
whence, subtracting the first equation from the second, we get
\begin{align*}
{\mathbb P}i_{\varsigma}\tau^{+}\rho-{\mathbb P}i_{\varsigma}\tau\rho & <(1-\varsigma)(\tau
^{+}\rho)_{0}+\varsigma{\mathbb P}i_{\varsigma}\tau\rho-(1-\varsigma)(\tau\rho
)_{0}-\varsigma{\mathbb P}i_{\varsigma}\tau^{+}\rho\\
& =(1-\varsigma)((\tau^{+}\rho)_{0}-(\tau\rho)_{0})-\varsigma({\mathbb P}i_{\varsigma
}\tau^{+}\rho-{\mathbb P}i_{\varsigma}\tau\rho)
\end{align*}
which implies
\[
({\mathbb P}i_{\varsigma}\tau^{+}\rho-{\mathbb P}i_{\varsigma}\tau\rho)(1+\varsigma
)<(1-\varsigma)((\tau^{+}\rho)_{0}-(\tau\rho)_{0})
\]
whence
\[
({\mathbb P}i_{\varsigma}\tau^{+}\rho-{\mathbb P}i_{\varsigma}\tau\rho)<(1-\varsigma
)/(1+\varsigma).
\]
\end{proof}
It follows that either there exists $\gamma$ as described in Theorem
\ref{theorem1} with $\gamma<1$, or else $\pi_{\varsigma}\tau\rho
<\pi_{\varsigma}\tau^{+}\rho$ for all $\varsigma<1$ and $\lim_{\varsigma
\rightarrow1}(\pi_{\varsigma}\tau^{+}\rho-\pi_{\varsigma}\tau\rho)=0$.
Later we will show that the second option implies that the system must have
zero entropy, which is impossible, so the second option here cannot occur. But
first we explain why, in the first case, Theorem \ref{theorem1} is implied.
\begin{lemma}
\label{lemmaB} Let there exist $\gamma$ as described in Theorem \ref{theorem1}
and let $\gamma<1$. Let $p=\tau(\rho)(\gamma)\ (=\pi_{\gamma}\alpha)$ as in the
statement of Theorem \ref{theorem1}. Then the address space of $L_{\gamma
,p}:[0,1]\rightarrow\lbrack0,1]$ equals $\Omega.$
\end{lemma}
\begin{proof}
The address space of $L_{\gamma,p}$ is uniquely determined by the two
itineraries of $p=\pi_{\gamma}\tau\rho$. By direct calculation it is verified
that these addresses must be $\alpha=\tau\rho$ and $\beta=\tau^{+}\rho$.
Simply apply $L_{\gamma,p}$ to $\pi_{\gamma}\tau\rho$ and $L_{\gamma,p}^{+}$
to $\pi_{\gamma}\tau^{+}\rho$, and with the aid of Lemma \ref{lemmaA},
validate that the addresses of these two points, $\pi_{\gamma}\tau\rho$ and
$\pi_{\gamma}\tau^{+}\rho$, are indeed $\tau\rho$ and $\tau^{+}\rho$.
(\textbf{The assertion }$\sigma\prec\omega\Rightarrow$\textbf{ }$\pi_{\gamma
}(\sigma)<\pi_{\gamma}(\omega),$\textbf{ assured by Lemma \ref{lemmastrict},
implies that at each iterative step, in the computation of }$\pi_{\gamma}
\tau\rho$\textbf{, the current point lies in the domain of the appropriate
branch of }$L_{\gamma,p}$\textbf{.})
\end{proof}
\begin{lemma}
\label{lemmaE} Let there exist $\gamma$ as described in Theorem \ref{theorem1}
and let $\gamma<1$. Then Theorem \ref{theorem1} holds.
\end{lemma}
\begin{proof}
This follows from Lemma \ref{lemmaB} together with Proposition
\ref{mihalachethm} parts (iii) and (iv). (Equivalently it follows from
Corollary \ref{corollary1}: I think I have in effect duplicated the intended
proof of Corollary \ref{corollary1}.)
\end{proof}
\begin{lemma}
\label{lemmaD} The equations
\[
\tau(\rho)(\gamma)=\tau(\rho+)(\gamma),\text{ with }\tau(\rho)(\varsigma
)<\tau(\rho+)(\varsigma)\text{ for }\varsigma\in\lbrack0.5,\gamma)\text{,}
\]
possess a unique solution $\gamma<1$.
\end{lemma}
\begin{proof}
If $\pi_{\varsigma}\tau\rho<\pi_{\varsigma}\tau^{+}\rho$ for all $\varsigma<1$
and $\lim_{\varsigma\rightarrow1}(\pi_{\varsigma}\tau^{+}\rho-\pi_{\varsigma
}\tau\rho)=0$ then, by a similar argument to the one in the proof of Lemma
\ref{lemmaC}, $x<y$ implies $\pi_{\varsigma}\tau x<\pi_{\varsigma}\tau y$ for
all $\varsigma<1$. It follows that $\pi_{\varsigma}\tau([0,1])$ is an
invariant subset of the dynamical system $L_{\varsigma,p}$ for all
$\varsigma<1,$ whence the topological entropy of $W$ is at most
$\log(1/\varsigma)$ for every $\varsigma<1$, hence zero, which is impossible
because $W$ has slope larger than or equal to $1/\lambda>1$.
\end{proof}
Theorem \ref{theorem1} follows from Lemmas \ref{lemmaE} and \ref{lemmaD}.
\section{Proof of Theorem \ref{theorem2ndpart}}
This is straightforward.
\end{document}
\begin{document}
\title[Stability and convergence in convex $A$-metric spaces]{On the
stability and convergence of Mann iteration process in convex $A$-metric
spaces}
\dedicatory{{\footnotesize (Dedicated to Assoc. Prof. Dr. Birol Gunduz who
passed away on the 3rd of April, 2019.)}}
\author{Isa Yildirim}
\address{Department of Mathematics, Faculty of Science, Ataturk University,
Erzurum, 25240, Turkey.}
\email{[email protected]}
\subjclass[2000]{Primary 47H09 ; Secondary 47H10}
\keywords{Convex structure, Convex $A$-metric space, Mann iteration process,
Stability}
\begin{abstract}
In this paper, firstly, we introduce the concept of convexity in $A$-metric
spaces and show that Mann iteration process converges to the unique fixed
point of Zamfirescu type contractions in this newly defined convex $A$
-metric space. Secondly, we define the concept of stability in convex $A$
-metric spaces and establish stability result for the Mann iteration process
considered in such spaces. Our results carry some well-known results from
the literature to convex $A$-metric spaces.
\end{abstract}
\maketitle
\section{\textbf{Introduction and preliminaries}}
The Banach Fixed Point Theorem is one of the most important theorems in all
of analysis. It plays a key role in many applications in nonlinear analysis,
for example in areas such as optimization, mathematical modeling, and
economic theory. Due to this, the result has been generalized in various
directions. Mustafa and Sims introduced a new class of generalized metric
spaces, called $G$-metric spaces (see \cite{z12}, \cite{z13}), as a
generalization of metric spaces $(X,d).$ This was done to introduce and
develop a new fixed point theory for a variety of mappings in this new
setting, and it helped to extend some known metric space results to this more
general setting. The $G$-metric space is defined as follows:
\begin{definition}
\cite{z13} Let $X$ be a nonempty set and let $G:X\times X\times X\rightarrow
\mathbb{R}
^{+}$ be a function satisfying the following properties:
(i) $G(x,y,z)=0$ if $x=y=z$
(ii) $0<G(x,x,y)$ for all $x,y\in X,$ with $x\neq y$
(iii) $G(x,x,y)\leq G(x,y,z)$ for all $x,y,z\in X,$ with $z\neq y$
(iv) $G(x,y,z)=G(x,z,y)=G(y,z,x)=...,$ (symmetry in all three variables); and
(v) $G(x,y,z)\leq G(x,a,a)+G(a,y,z)$ for all $x,y,z,a\in X$ (rectangle
inequality )$.$
Then the function $G$ is called a generalized metric or more specifically, a
$G$-metric on $X$, and the pair $(X,G)$ is called a $G$-metric space.
\end{definition}
Mustafa et al. studied many fixed point results for a self-mapping in $G$
-metric space. \cite{z3}-\cite{z1} can be cited for reference.
On the other hand, Abbas et al. \cite{a} introduced the concept of an $A$
-metric space as follows:
\begin{definition}
Let $X$ be nonempty set. Suppose a mapping $A:X^{t}\rightarrow
\mathbb{R}
$ satisfy the following conditions:
$\left( A_{1}\right) $ $A(x_{1},x_{2},...,x_{t-1},x_{t})\geq 0,$
$\left( A_{2}\right) $ $A(x_{1},x_{2},...,x_{t-1},x_{t})=0$ if and only if $
x_{1}=x_{2}=...=x_{t-1}=x_{t},$
$\left( A_{3}\right) $
\begin{eqnarray*}
A(x_{1},x_{2},...,x_{t-1},x_{t}) &\leq
&A(x_{1},x_{1},...,(x_{1})_{t-1},y)+A(x_{2},x_{2},...,(x_{2})_{t-1},y) \\
&&+...+A(x_{t-1},x_{t-1},...,(x_{t-1})_{t-1},y)+A(x_{t},x_{t},...,(x_{t})_{t-1},y)
\end{eqnarray*}
for any $x_{i},y\in X,$ $(i=1,2,...,t).$ Then, $(X,A)$ is said to be an $A$
-metric space.
\end{definition}
It is clear that an $A$-metric space with $t=2$ reduces to an ordinary
metric space $(X,d)$. Also, an $A$-metric space is a generalization of the
$G$-metric space.
\begin{example}
\cite{a} Let $X=
\mathbb{R}
$. Define a function $A:X^{t}\rightarrow
\mathbb{R}
$ by
\begin{eqnarray*}
A(x_{1},x_{2},...,x_{t-1},x_{t}) &=&\left\vert x_{1}-x_{2}\right\vert
+\left\vert x_{1}-x_{3}\right\vert +...+\left\vert x_{1}-x_{t}\right\vert \\
&&+\left\vert x_{2}-x_{3}\right\vert +\left\vert x_{2}-x_{4}\right\vert
+...+\left\vert x_{2}-x_{t}\right\vert \\
&&\vdots \\
&&+\left\vert x_{t-2}-x_{t-1}\right\vert +\left\vert
x_{t-2}-x_{t}\right\vert +\left\vert x_{t-1}-x_{t}\right\vert \\
&=&\sum_{i=1}^{t}\sum_{i<j}\left\vert x_{i}-x_{j}\right\vert \text{.}
\end{eqnarray*}
Then $(X,A)$ is an $A$-metric space.
\end{example}
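For concreteness, the following Python sketch (illustrative only; the value
of $t$ and the chosen points are arbitrary) evaluates this $A$-metric and
checks the identity $A(x,\dots,x,y)=A(y,\dots,y,x)$ of Lemma \ref{l2} below
on a pair of points.
\begin{verbatim}
# Sketch: the A-metric on R from the example, A = sum of |x_i - x_j|, i < j.
from itertools import combinations

def A(*xs):
    return sum(abs(xi - xj) for xi, xj in combinations(xs, 2))

print(A(1.0, 2.0, 5.0, 7.0))      # t = 4 points: 1+4+6+3+5+2 = 21

# Symmetry check: A(x,...,x,y) = A(y,...,y,x) = (t-1)*|x-y|
x, y, t = 0.3, 1.2, 4
print(A(*([x] * (t - 1) + [y])), A(*([y] * (t - 1) + [x])))
\end{verbatim}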
\begin{lemma}
\label{l2} \cite{a} Let $\left( X,A\right) $ be $A$-metric space. Then $
A\left( x,x,\dots ,x,y\right) =A\left( y,y,\dots ,y,x\right) $ for all $
x,y\in X$.
\end{lemma}
\begin{lemma}
\label{l3} \cite{a} Let $\left( X,A\right) $ be an $A$-metric space. Then for
all $x,y,z\in X$ we have $A\left( x,x,\dots ,x,z\right) \leq \left(
t-1\right) A\left( x,x,\dots ,x,y\right) +A\left( z,z,\dots ,z,y\right) $
and $A\left( x,x,\dots ,x,z\right) \leq \left( t-1\right) A\left( x,x,\dots
,x,y\right) +A\left( y,y,\dots ,y,z\right) $.
\end{lemma}
\begin{definition}
\cite{a} Let $\left( X,A\right) $ be $A$-metric space.
(i) A sequence $\left\{ x_{n}\right\} $ in $X$ is said to converge to a
point $u\in X$ if $A\left( x_{n},x_{n},\dots ,x_{n},u\right) \rightarrow 0$
as $n\rightarrow \infty $.
(ii) A sequence $\left\{ x_{n}\right\} $ in $X$ is called a Cauchy sequence
if $A\left( x_{n},x_{n},\dots ,x_{n},x_{m}\right) \rightarrow 0$ as $
n,m\rightarrow \infty $.
(iii) The $A$-metric space $\left( X,A\right) $ is said to be complete if
every Cauchy sequence in $X\ $is convergent.
\end{definition}
Recently, Yildirim \cite{i} introduced the notion of Zamfirescu mappings in $
A$-metric space as follows:
\begin{definition}
\label{d1} Let $\left( X,A\right) $ be $A$-metric space and $f:X\rightarrow
X $ be a mapping. $f$ is called an $A$-Zamfirescu mapping ($AZ$ mapping) if
and only if there are real numbers $0\leq a<1$, $0\leq b,c<\frac{1}{t}$
such that for all $x,y\in X$, at least one of the following conditions is true:
\begin{equation*}
\left( AZ_{1}\right) A(fx,fx,\dots ,fx,fy)\leq aA(x,x,\dots ,x,y)
\end{equation*}
\begin{equation*}
\left( AZ_{2}\right) A(fx,fx,\dots ,fx,fy)\leq b\left[ A(fx,fx,\dots
,fx,x)+A(fy,fy,\dots ,fy,y)\right]
\end{equation*}
\begin{equation*}
\left( AZ_{3}\right) A(fx,fx,\dots ,fx,fy)\leq c\left[ A(fx,fx,\dots
,fx,y)+A(fy,fy,\dots ,fy,x)\right]
\end{equation*}
\end{definition}
Yildirim \cite{i} also extended the Zamfirescu results \cite{16} to $A$
-metric spaces and he obtained the following results on fixed point theorems
for such mappings.
\begin{lemma}
\label{l1} \cite{i} Let $\left( X,A\right) $ be an $A$-metric space and $
f:X\rightarrow X$ be a mapping. If $f$ is an $AZ$ mapping, then there is $
0\leq \delta <1$ such that
\begin{equation}
A\left( fx,fx,\dots ,fx,fy\right) \leq \delta A\left( x,x,\dots ,x,y\right)
+t\delta A(fx,fx,\dots ,fx,x) \label{1}
\end{equation}
and
\begin{equation}
A\left( fx,fx,\dots ,fx,fy\right) \leq \delta A\left( x,x,\dots ,x,y\right)
+t\delta A(fy,fy,\dots ,fy,x) \label{2}
\end{equation}
for all $x,y\in X$.
\end{lemma}
\begin{theorem}
\label{A} \cite{i} \noindent Let $\left( X,A\right) $ be a complete $A$-metric
space and $f:X\rightarrow X$ be an $AZ$ mapping. Then $f$ has a unique
fixed point and Picard iteration process $\left\{ x_{n}\right\} $ defined by
$x_{n+1}=fx_{n}$ converges to a fixed point of $f$.
\end{theorem}
Studies in metric spaces are often concerned with the existence of fixed
points without approximating them. The reason behind this is the
unavailability of a convex structure in metric spaces. To solve this problem,
Takahashi \cite{Ta} introduced the notion of convex metric spaces and studied
the approximation of fixed points for nonexpansive mappings in this setting.
Inspired by this, Yildirim and Khan \cite{yk} defined a convex structure in
$G$-metric spaces and transformed the Mann iterative process to a convex
$G$-metric space as follows; they also proved some fixed point theorems
dealing with the convergence of the Mann iteration process for some classes
of mappings.
\begin{definition}
\label{yk1} \cite{yk} Let $(X,G)$ be a $G$-metric space. A mapping $
W:X^{2}\times I^{2}\rightarrow X$ is termed as a convex structure on $X$ if $
G(W(x,y;\lambda ,\beta ),u,v)\leq \lambda G(x,u,v)+\beta G(y,u,v)$ for real
numbers $\lambda $ and $\beta $ in $I=[0,1]$ satisfying $\lambda +\beta =1$
and $x,y,u$ and $v\in X.$
A $G$-metric space $(X,G)$ with a convex structure $W$ is called a convex $G$
-metric space and denoted as $(X,G,W).$
A nonempty subset $C$ of a convex $G$-metric space $(X,G,W)$ is said to be
convex if $W(x,y;a,b)\in C$ for all $x,y\in C$ and $a,b\in I.$\
\end{definition}
\begin{definition}
\label{ykk2} \cite{yk} Let $(X,G,W)$ be convex $G$-metric space with convex
structure $W$ and $f:X\rightarrow X$ be a mapping. Let $\left\{ {\alpha }
_{n}\right\} $ be a sequence in $[0,1]$ for $n\in
\mathbb{N}
.$ Then for any given $x_{0}\in X,$ the iterative process defined by the
sequence $\left\{ x_{n}\right\} $ as
\begin{equation}
x_{n+1}=W\left( x_{n},fx_{n};1-{\alpha }_{n},{\alpha }_{n}\right) ,\ \text{\
\ \ }n\in
\mathbb{N}
, \label{m}
\end{equation}
is called Mann iterative process in the convex metric space $(X,G,W).$
\end{definition}
The iterative approximation of a fixed point for certain classes of mappings
is one of the main tools in the fixed point theory. Many authors (\cite{kd,
gund, AR, Kim, Liu, Modi, 10, 29, yil, yko}) discussed the existence of
fixed points and convergence of different iterative processes for various
mappings in convex metric spaces.
Keeping the above in mind, in this paper we first define the concept of
convexity in $A$-metric spaces. Then, we use the Mann iteration in this newly
defined convex $A$-metric space to prove some convergence results for
approximating fixed points of some classes of mappings. We also discuss a
stability result for the Mann iteration process. The results in this paper
show that different iteration methods can be used to approximate fixed points
of different classes of mappings in $A$-metric spaces. Our results are new in
this setting.
Now, we define convex structure in $A$-metric spaces as follows.
\begin{definition}
\label{yy} Let $(X,A)$ be an $A$-metric space and $I=\left[ 0,1\right] $. A
mapping $W:X^{t}\times I^{t}\rightarrow X$ is termed as a convex structure
on $X$ if
\begin{eqnarray}
&&A(u_{1},u_{2},...,u_{t-1},W(x_{1},x_{2},...,x_{t-1},x_{t};a_{1},a_{2},...,a_{t}))
\label{m9} \\
&\leq
&a_{1}A(u_{1},u_{2},...,u_{t-1},x_{1})+a_{2}A(u_{1},u_{2},...,u_{t-1},x_{2})
\notag \\
&&+...+a_{t}A(u_{1},u_{2},...,u_{t-1},x_{t}) \notag \\
&=&\sum_{i=1}^{t}a_{i}A(u_{1},u_{2},...,u_{t-1},x_{i}) \notag
\end{eqnarray}
for real numbers $a_{1},a_{2},...,a_{t}$ in $I=[0,1]$ satisfying $
\sum_{i=1}^{t}a_{i}=1$ and $u_{i}$, $x_{i}\in X$ for all $i=1,2,...,t$.
An $A$-metric space $(X,A)$ with a convex structure $W$ is called a convex $
A $-metric space and denoted as $(X,A,W).$
A nonempty subset $C$ of a convex $A$-metric space $(X,A,W)$ is said to be
convex if
$W(x_{1},x_{2},...,x_{t-1},x_{t};a_{1},a_{2},...,a_{t})\in C$ for all $
x_{i}\in C$ and $a_{i}\in I$, $i=1,2,...,t$.
\end{definition}
Next, we transform the Mann iteration process to a convex $A$-metric space
as follows.
\begin{definition}
\label{yk2} Let $(X,A,W)$ be a convex $A$-metric space with convex structure $
W $ and $f:X\rightarrow X$ be a mapping. Let $\left\{ {\alpha }
_{i}^{n}\right\} $ be sequences in $[0,1]$ for all $i=1,2,...,t$ and $n\in
\mathbb{N}
.$ Then for any given $x_{0}\in X,$ the iteration process defined by the
sequence $\left\{ x_{n}\right\} $ as
\begin{equation}
x_{n+1}=W\left( x_{n},x_{n},...,x_{n},fx_{n};{\alpha }_{1}^{n},{\alpha }
_{2}^{n},...,{\alpha }_{t}^{n}\right) , \label{m7}
\end{equation}
is called the Mann iteration process in the convex $A$-metric space $(X,A,W).$
\end{definition}
It follows from the structure of convex $A$-metric space that
\begin{eqnarray}
A(x_{n+1},u_{1},u_{2},...,u_{t-1}) &=&A(W\left( x_{n},x_{n},...,x_{n},fx_{n};
{\alpha }_{1}^{n},{\alpha }_{2}^{n},...,{\alpha }_{t}^{n}\right)
,u_{1},u_{2},...,u_{t-1}) \label{m8} \\
&\leq &{\alpha }_{1}^{n}A(x_{n},u_{1},u_{2},...,u_{t-1})+{\alpha }
_{2}^{n}A(x_{n},u_{1},u_{2},...,u_{t-1}) \notag \\
&&+...+{\alpha }_{t}^{n}A(fx_{n},u_{1},u_{2},...,u_{t-1}). \notag
\end{eqnarray}
If we take $t=2$ in $(\ref{m7})$ and $(\ref{m8})$, these structures reduce to
$(\ref{m})$ and $(\ref{m9})$.
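To illustrate the process, the following Python sketch (purely illustrative;
the choices of $X=\mathbb{R}$, the convex structure, the mapping $f$, and the
coefficients $\alpha_{i}^{n}$ are assumptions made only for this example)
runs iteration (\ref{m7}) on the real line with the $A$-metric of the example
above, taking $W(x_{1},...,x_{t};a_{1},...,a_{t})=\sum_{i}a_{i}x_{i}$; a
routine calculation using the convexity of $\left\vert \cdot\right\vert $ and
$\sum_{i}a_{i}=1$ shows that this $W$ satisfies (\ref{m9}).
\begin{verbatim}
# Sketch of the Mann process (m7) on X = R with W = convex combination.
# f is a sample contraction, hence an AZ mapping, with fixed point u = 2.
f = lambda x: 0.5 * x + 1.0     # (AZ_1) holds with a = 1/2
t = 4
alphas = [0.2, 0.2, 0.2, 0.4]   # alpha_1,...,alpha_t, summing to 1

x = 10.0                        # x_0
for n in range(60):
    # x_{n+1} = W(x_n, ..., x_n, f(x_n); alpha_1, ..., alpha_t)
    x = sum(alphas[:-1]) * x + alphas[-1] * f(x)
print(x)                        # converges to the fixed point u = 2
\end{verbatim}
Since the constant choice $\alpha_{t}^{n}=0.4$ satisfies
$\sum_{n}\alpha_{t}^{n}=\infty$, this run illustrates the hypotheses of
Theorem \ref{main1} below.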
The following Lemma shall be used in the proof of the stability result.
\begin{lemma}
\label{mm} \cite{ber} If $\delta $ is a real number such that $0\leq \delta
<1$ and $\left\{ {\varepsilon }_{n}\right\} $ is a sequence of positive
numbers such that ${\mathop{\mathrm{lim}}_{n\rightarrow \infty }{\varepsilon
}_{n}=0\ }$, then for any sequence of positive numbers $\left\{
u_{n}\right\} $ satisfying
\begin{equation*}
u_{n+1}\leq \delta u_{n}+{\varepsilon }_{n}\ ,\ n=0,1,\dots
\end{equation*}
we have
\begin{equation*}
{\mathop{\mathrm{lim}}_{n\rightarrow \infty }u_{n}=0\ }.
\end{equation*}
\end{lemma}
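A tiny numerical illustration of the lemma, with arbitrarily chosen $\delta$
and $\varepsilon_{n}$ and taking the extreme case of equality in the
recursion, is the following Python sketch.
\begin{verbatim}
# Illustration: u_{n+1} <= delta*u_n + eps_n with eps_n -> 0 forces u_n -> 0.
delta = 0.9
u = 5.0
for n in range(200):
    u = delta * u + 0.5 ** n    # worst case: equality, with eps_n = 0.5**n
print(u)                        # very small (about 5e-9): u_n tends to 0
\end{verbatim}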
\section{\textbf{Main Results}}
\textbf{2.1 Convergence Result: }In this section, we prove that the Mann
iteration process converges to the fixed point of Zamfirescu type mappings in
a complete convex $A$-metric space $(X,A,W)$.
\begin{theorem}
\label{main1} Let $(X,A,W)$ be a complete convex $A$-metric space with a
convex structure $W$ and $f:X\rightarrow X$ be an $AZ$ mapping$.$ Let $
\left\{ x_{n}\right\} $ be defined iteratively by $(\ref{m7})$ and $x_{0}\in
X,$ with $\left\{ {\alpha }_{i}^{n}\right\} \subset \lbrack
0,1],\tsum\limits_{i=1}^{t}{\alpha }_{i}^{n}=1$ for all $n\in
\mathbb{N}
$ and $i=1,2,...,t,$ satisfying $\tsum\limits_{n=0}^{\infty }{\alpha }
_{t}^{n}=\infty .$ Then $\left\{ x_{n}\right\} $ converges to the unique
fixed point of $f.$
\begin{proof}
From Theorem \ref{A}, we know that an$\ AZ$ mapping has a unique fixed point
in $X$. Call it $u$ and consider $x_{i}\in X$, $i=1,2,...,t$.
At least one of $\left( AZ_{1}\right) $, $\left( AZ_{2}\right) $ and $\left(
AZ_{3}\right) $ is satisfied. Whichever of $\left( AZ_{1}\right) ,\left( AZ_{2}\right)
$ or $\left( AZ_{3}\right) $ holds, we know from Lemma \ref{l1} that the
following inequality holds:
\begin{equation}
A\left( fx,fx,\dots ,fx,fy\right) \leq \delta A\left( x,x,\dots ,x,y\right)
+t\delta A(fx,fx,\dots ,fx,x) \label{25}
\end{equation}
for all $x,y\in X$.
Let $\left\{ x_{n}\right\} $ be the Mann iteration process $(\ref{m7})$,
with $x_{0}\in X$ arbitrary. Then
\begin{eqnarray*}
A(u,u,...,u,x_{n+1}) &=&A(u,u,...,u,W\left( x_{n},x_{n},...,x_{n},fx_{n};
{\alpha }_{1}^{n},{\alpha }_{2}^{n},...,{\alpha }_{t}^{n}\right) ) \\
&\leq &{\alpha }_{1}^{n}A(u,u,...,u,x_{n})+{\alpha }
_{2}^{n}A(u,u,...,u,x_{n}) \\
&&+...+{\alpha }_{t}^{n}A(u,u,...,u,fx_{n}) \\
&=&\left( 1-{\alpha }_{t}^{n}\right) A(u,u,...,u,x_{n})+{\alpha }
_{t}^{n}A(u,u,...,u,fx_{n}).
\end{eqnarray*}
Take $x=u$ and $y=x_{n}$ in (\ref{25}) to obtain
\begin{eqnarray}
A(u,u,...,u,fx_{n}) &=&A(fu,fu,...,fu,fx_{n}) \label{5} \\
&\leq &\delta A(u,u,...,u,x_{n})+t\delta A(fu,fu,\dots ,fu,u) \notag \\
&=&\delta A(u,u,...,u,x_{n}) \notag
\end{eqnarray}
which together with the preceding inequality yields
\begin{eqnarray}
A(u,u,...,u,x_{n+1}) &\leq &\left( 1-{\alpha }_{t}^{n}\right)
A(u,u,...,u,x_{n})+{\alpha }_{t}^{n}\delta A(u,u,...,u,x_{n}) \label{6} \\
&=&\left[ 1-\left( 1-\delta \right) {\alpha }_{t}^{n}\right]
A(u,u,...,u,x_{n}). \notag
\end{eqnarray}
Inductively we get
\begin{eqnarray}
A(u,u,...,u,x_{n+1}) &\leq &\left[ 1-\left( 1-\delta \right) {\alpha }
_{t}^{n}\right] A(u,u,...,u,x_{n}) \label{7} \\
&\leq &\left[ 1-\left( 1-\delta \right) {\alpha }_{t}^{n}\right] \left[
1-\left( 1-\delta \right) {\alpha }_{t}^{n-1}\right] A(u,u,...,u,x_{n-1})
\notag \\
&&\vdots \notag \\
&\leq &\tprod\limits_{k=0}^{n}\left[ 1-\left( 1-\delta \right) {\alpha }
_{t}^{k}\right] A(u,u,...,u,x_{0}) \notag
\end{eqnarray}
As $0\leq \delta <1,\left\{ {\alpha }_{t}^{k}\right\} \subset \lbrack 0,1]$
and $\tsum\limits_{k=0}^{\infty }{\alpha }_{t}^{k}=\infty $, we have
\begin{equation*}
\lim_{n\rightarrow \infty }\tprod\limits_{k=0}^{n}\left[ 1-\left( 1-\delta
\right) {\alpha }_{t}^{k}\right] =0,
\end{equation*}
which by (\ref{7}) implies
\begin{equation*}
\lim_{n\rightarrow \infty }A(u,u,...,u,x_{n+1})=\lim_{n\rightarrow \infty
}A(x_{n+1},x_{n+1},...,x_{n+1},u)=0.
\end{equation*}
Hence the sequence $\left\{ x_{n}\right\} $ defined iteratively by $(\ref{m7}
)$ converges to the fixed point of $f$.
\end{proof}
\end{theorem}
\textbf{2.2 Stability Result: }Now, we will give a stability result for the
Mann iteration $(\ref{m7})$ in a complete convex $A$-metric space.
\begin{definition}
Let $(X,A,W)$ be a convex $A$-metric space with a convex structure $W$ and, $
f:X\rightarrow X$ be a mapping, $x_{0}\in X$ and let us assume that the
iteration process\ $(\ref{m7})$, that is, the sequence $\left\{
x_{n}\right\} $ defined by $(\ref{m7})$, converges to a fixed point $u$ of $
f $.
Let $\left\{ y_{n}\right\} $ be an arbitrary sequence in $X$ and set
\begin{equation*}
\epsilon _{n}=A\left( y_{n+1},y_{n+1},...,y_{n+1},g\left( f,y_{n}\right)
\right) \text{ for }n=0,1,2,...
\end{equation*}
where $g\left( f,y_{n}\right) =W\left( y_{n},y_{n},...,y_{n},fy_{n};{\alpha }
_{1}^{n},{\alpha }_{2}^{n},...,{\alpha }_{t}^{n}\right) $ and $\left\{ {
\alpha }_{i}^{n}\right\} $ are real sequences in $[0,1]$ for $i=1,2,...,t$.
We say that the Mann iteration process $(\ref{m7})$ is $f$-stable or stable
with respect to $f$ if and only if
\begin{equation*}
\lim_{n\rightarrow \infty }\epsilon _{n}=0\Longleftrightarrow
\lim_{n\rightarrow \infty }y_{n}=u.
\end{equation*}
\end{definition}
\begin{theorem}
Let $(X,A,W)$ be a complete convex $A$-metric space with a convex structure $
W$ and $f:X\rightarrow X$ be an $AZ$ mapping$.$ Let $\left\{ x_{n}\right\} $
be defined iteratively by $(\ref{m7})$ and $x_{0}\in X,$ with $\left\{ {
\alpha }_{i}^{n}\right\} \subset \lbrack 0,1],\tsum\limits_{i=1}^{t}{\alpha }_{i}^{n}=1$
satisfying $0<\alpha \leq {\alpha }_{t}^{n}$ and $\tsum\limits_{n=0}^{\infty }{
\alpha }_{t}^{n}=\infty $ for all $n\in
\mathbb{N}
$ and $i=1,2,...,t.$ Then the Mann iteration process $(\ref{m7})$ is $f$
-stable.
\end{theorem}
\begin{proof}
From Theorem \ref{A}, we know that $f$ has a unique fixed point; call it
$u\in X$. From Lemma \ref{l1}, we also know that
\begin{equation}
A\left( fx,fx,\dots ,fx,fy\right) \leq \delta A\left( x,x,\dots ,x,y\right)
+t\delta A(fx,fx,\dots ,fx,x). \label{26}
\end{equation}
Let $\left\{ y_{n}\right\} \subset X$ and $\epsilon _{n}=A\left(
y_{n+1},y_{n+1},...,y_{n+1},g\left( f,y_{n}\right) \right) $. Assume first that $
\lim_{n\rightarrow \infty }\epsilon _{n}=0$; we will show that $
\lim_{n\rightarrow \infty }y_{n}=u$. From (\ref{26}) and the triangle
inequality, we get
\begin{eqnarray}
&&A\left( y_{n+1},y_{n+1},\dots ,y_{n+1},u\right) \label{29} \\
&\leq &\left( t-1\right) A\left( y_{n+1},y_{n+1},\dots ,y_{n+1},g\left(
f,y_{n}\right) \right) \notag \\
&&+A\left( g\left( f,y_{n}\right) ,g\left( f,y_{n}\right) ,\dots ,g\left(
f,y_{n}\right) ,u\right) \notag \\
&=&\left( t-1\right) A\left( y_{n+1},y_{n+1},\dots ,y_{n+1},g\left(
f,y_{n}\right) \right) \notag \\
&&+A\left( u,u,\dots ,u,g\left( f,y_{n}\right) \right) \notag \\
&\leq &\left( t-1\right) {\varepsilon }_{n}+A\left( u,u,\dots ,u,g\left(
f,y_{n}\right) \right) \notag \\
&=&\left( t-1\right) {\varepsilon }_{n}+A\left( u,u,\dots ,u,W\left(
y_{n},y_{n},\dots ,y_{n},fy_{n};{\alpha }_{1}^{n},{\alpha }_{2}^{n},\dots ,{
\alpha }_{t}^{n}\right) \right) \notag \\
&\leq &\left( t-1\right) {\varepsilon }_{n}+{\alpha }_{1}^{n}A\left(
u,u,\dots ,u,y_{n}\right) +{\alpha }_{2}^{n}A\left( u,u,\dots ,u,y_{n}\right)
\notag \\
&&+\dots +{\alpha }_{t}^{n}A\left( u,u,\dots ,u,{fy}_{n}\right) \notag \\
&=&\left( t-1\right) {\varepsilon }_{n}+\left( 1-{\alpha }_{t}^{n}\right)
A\left( u,u,\dots ,u,y_{n}\right) +{\alpha }_{t}^{n}A\left( u,u,\dots ,u,{fy}
_{n}\right) \notag \\
&\leq &\left( t-1\right) {\varepsilon }_{n}+\left( 1-{\alpha }
_{t}^{n}\right) A\left( u,u,\dots ,u,y_{n}\right) \notag \\
&&+{\alpha }_{t}^{n}\left[ \delta A\left( u,u,\dots ,u,y_{n}\right) +t\delta
A\left( fu,fu,\dots ,fu,u\right) \right] \notag \\
&=&\left[ 1-\left( 1-\delta \right) {\alpha }_{t}^{n}\right] A\left(
y_{n},y_{n},\dots ,y_{n},u\right) +\left( t-1\right) {\varepsilon }_{n}.
\notag
\end{eqnarray}
Since $0\leq 1-\left( 1-\delta \right) {\alpha }_{t}^{n}\leq 1-\left( 1-\delta
\right) \alpha <1$, using Lemma \ref{mm} in (\ref{29}) yields
\begin{equation*}
{\mathop{\mathrm{lim}}_{n\rightarrow \infty }A\left( y_{n},y_{n},\dots
,y_{n},u\right) =0,\ }
\end{equation*}
that is,
\begin{equation*}
{\mathop{\mathrm{lim}}_{n\rightarrow \infty }y_{n}=u\ }.
\end{equation*}
Conversely, let ${\mathop{\mathrm{lim}}_{n\rightarrow \infty }y_{n}=u\ }$.
Then,
\begin{eqnarray*}
{\varepsilon }_{n} &=&A\left( y_{n+1},y_{n+1},\dots ,y_{n+1},g\left(
f,y_{n}\right) \right) \\
&\leq &\left( t-1\right) A\left( y_{n+1},y_{n+1},\dots ,y_{n+1},u\right)
+A\left( u,u,\dots ,u,g\left( f,y_{n}\right) \right) \\
&=&\left( t-1\right) A\left( y_{n+1},y_{n+1},\dots ,y_{n+1},u\right) \\
&&+A\left( u,u,\dots ,u,W\left( y_{n},y_{n},\dots ,y_{n},fy_{n};{\alpha }
_{1}^{n},{\alpha }_{2}^{n},\dots ,{\alpha }_{t}^{n}\right) \right) \\
&\leq &\left( t-1\right) A\left( y_{n+1},y_{n+1},\dots ,y_{n+1},u\right) +{
\alpha }_{1}^{n}A\left( u,u,\dots ,u,y_{n}\right) \\
&&+{\alpha }_{2}^{n}A\left( u,u,\dots ,u,y_{n}\right) +\dots +{\alpha }
_{t}^{n}A\left( u,u,\dots ,u,{fy}_{n}\right) \\
&=&\left( t-1\right) A\left( y_{n+1},y_{n+1},\dots ,y_{n+1},u\right)
+\left( 1-{\alpha }_{t}^{n}\right) A\left( u,u,\dots ,u,y_{n}\right) \\
&&+{\alpha }_{t}^{n}A\left( u,u,\dots ,u,{fy}_{n}\right) \\
&\leq &\left( t-1\right) A\left( y_{n+1},y_{n+1},\dots ,y_{n+1},u\right)
+\left( 1-{\alpha }_{t}^{n}\right) A\left( u,u,\dots ,u,y_{n}\right) \\
&&+{\alpha }_{t}^{n}\left[ \delta A\left( u,u,\dots ,u,y_{n}\right) +t\delta
A\left( fu,fu,\dots ,fu,u\right) \right] \\
&\leq &\left( t-1\right) A\left( y_{n+1},y_{n+1},\dots ,y_{n+1},u\right)
+\left( 1-{\alpha }_{t}^{n}\right) A\left( u,u,\dots ,u,y_{n}\right) \\
&&+{\alpha }_{t}^{n}\delta A\left( u,u,\dots ,u,y_{n}\right) .
\end{eqnarray*}
Letting $n\rightarrow \infty $ in the above inequality and using $
\lim_{n\rightarrow \infty }y_{n}=u$, we obtain $\lim_{n\rightarrow \infty
}\epsilon _{n}=0$.
\end{proof}
\end{document}
|
\begin{document}
\title{\XXXTitle{}}
\section{Introduction}
A very remarkable property of atoms is that in the limit
$Z\to\infty$, the Thomas--Fermi energy\cite{Thomas_1927,
Fermi_1927, Fermi_1928} becomes exact.\cite{Lieb_1976,
Lieb_1981, Baumgartner_1976} \ Unfortunately, the very ingenious
proofs of this beautiful result are somewhat complex. We
have strived to develop a relatively simple, but rather formal,
derivation of this fundamental result for neutral atoms by using,
in the process, the Green’s function corresponding to the
Thomas--Fermi potential. The derivation rests on the fact that
elementary scaling properties of integrals of the Green’s function
allow one readily to consider the $Z\to\infty$ limit with no
difficulty. The basic idea is that integrals of the Green’s
function for coincident space points involved in the analysis have
particularly simple power law behaviour for large $Z$. This
is spelled out in the text.
For the Hamiltonian of neutral atoms we choose
\begin{equation}\label{Eqn01}
H = \sum\limits_{\alpha=1}^{Z}\left(\frac{\vec{p}_{\alpha}^{2}}{2m}
-\frac{Ze^{2}}{r_{\alpha}}\right)+\sum\limits_{\alpha<\beta}^{Z}
\frac{e^{2}}{\left|\vec{r}_{\alpha}-\vec{r}_{\beta}\big.\right|}.
\end{equation}
We derive upper and lower bounds on the exact ground-state energy
of (\ref{Eqn01}), which for $Z\to\infty$ are the limits of
expressions involving integrals of the exact Green’s functions
with one-body potentials. The limits of both bounds are
shown to coincide with the ground-state Thomas--Fermi energy, thus
establishing the result.
\section{The upper bound}
We consider first the seemingly unrelated problem of a one-body
potential with Hamiltonian
\begin{equation}\label{Eqn02}
h = \frac{\vec{p}^{2}}{2m}+V(\vec{r})
\end{equation}
where $V(\vec{r})$ is the Thomas--Fermi potential
\begin{align}
V(\vec{r}) &= -\frac{Ze^{2}}{r}+e^{2}\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}'\:
\frac{n(\vec{r}')}{\left|\vec{r}-\vec{r}'\big.\right|} \nonumber \\
&= Z^{4/3}v(\vec{R}) \nonumber \\
&\equiv{} -\frac{\hbar^{2}}{2m}\left(3\pi^{2}\big.\right)^{2/3}
Z^{4/3}\left(\rho_{\mathrm{TF}}(\vec{R})\big.\right)^{2/3},
\qquad{} \vec{r}=\frac{\vec{R}}{Z^{1/3}},
\label{Eqn03}
\end{align}
and $n(\vec{r})=Z^{2}\rho_{\mathrm{TF}}(\vec{R})$ is the
Thomas--Fermi density normalized as
\begin{equation}\label{Eqn04}
\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\:n(\vec{r}) = Z.
\end{equation}
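For orientation we note that, with $\vec{r}=\vec{R}/Z^{1/3}$ and hence
$\ensuremath{\mathrm{d}}^{3}\vec{r}=Z^{-1}\ensuremath{\mathrm{d}}^{3}\vec{R}$, the normalization
(\ref{Eqn04}) reads
\begin{equation*}
Z = \int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\:n(\vec{r})
= Z\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}\:\rho_{\mathrm{TF}}(\vec{R}),
\qquad\text{i.e.,}\qquad
\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}\:\rho_{\mathrm{TF}}(\vec{R}) = 1.
\end{equation*}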
The Green’s function corresponding to (\ref{Eqn02}) satisfies the
equation
\begin{equation}\label{Eqn05}
\left[-\ensuremath{\mathrm{i}}\frac{\partial}{\partial\tau}-\frac{\hbar^{2}}{2m}\nabla^{2}
+V(\vec{r})\right]G_{\pm}(\vec{r}t,\vec{r}'0) = \delta^{3}(\vec{r}-\vec{r}')
\delta(t),
\end{equation}
where, with appropriate boundary conditions,
\begin{equation}\label{Eqn06}
G_{\pm}(\vec{r}t,\vec{r}'0) = \mp\left(\frac{\ensuremath{\mathrm{i}}}{\hbar}\right)
\mathrm{\Theta}(\mp{}t)\:G_{0}(\vec{r}\tau,\vec{r}'0;V), \qquad{}
\tau=\frac{t}{\hbar}.
\end{equation}
We write
\begin{equation}\label{Eqn07}
G_{0}(\vec{r}\tau,\vec{r}'0;V) = \int\!\!\frac{\ensuremath{\mathrm{d}}^{3}\vec{k}}{(2\pi)^{3}}\;
\ensuremath{\mathrm{e}}^{\ensuremath{\mathrm{i}}\vec{k}\ensuremath{\boldsymbol{\cdot}}(\vec{r}-\vec{r}')}\exp\left[-\ensuremath{\mathrm{i}}\left(
\frac{\hbar^{2}\vec{k}^{2}}{2m}\tau+U(\vec{r},\tau,\vec{k})\right)\right].
\end{equation}
We readily see that $U$ satisfies the equation
\begin{equation}\label{Eqn08}
-\frac{\partial{}U}{\partial\tau}+V-\frac{\hbar^{2}}{m}\vec{k}\ensuremath{\boldsymbol{\cdot}}\vec{\nabla}U
+\frac{\hbar^{2}}{2m}\left(\vec{\nabla}U\big.\right)^{2}+\ensuremath{\mathrm{i}}\frac{\hbar^{2}}{2m}
\nabla^{2}U = 0,
\end{equation}
with the boundary condition $U\big|_{\tau=0}=0$. We are
particularly interested in the integral
\begin{equation}\label{Eqn09}
\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\;G_{0}(\vec{r}\tau,\vec{r}0;V),
\end{equation}
where, since $\vec{r}'=\vec{r}$, the factor
$\exp\left[\ensuremath{\mathrm{i}}\vec{k}\ensuremath{\boldsymbol{\cdot}}(\vec{r}-\vec{r}')\big.\right]$
in (\ref{Eqn07}) is simply replaced by 1. Under the scaling
$\vec{r}=\vec{R}/Z^{1/3}$ we have $V(\vec{r})=Z^{4/3}v(\vec{R})$,
where $v(\vec{R})$ is independent of $Z$. Accordingly, to
study the large $Z$ behaviour, we carry out the change of
variables $\vec{r}\to\vec{R}$ and simultaneously substitute
$\tau=T/Z^{4/3}$. Also, with the change of variables
$\vec{k}\to\vec{K}$, $\vec{k}=Z^{2/3}\vec{K}$, the product
$\vec{k}^{2}\tau=\vec{K}^{2}T$ in (\ref{Eqn07}) remains
invariant. With these new variables, (\ref{Eqn08}) becomes
\begin{equation}\label{Eqn10}
-\frac{\partial{}U}{\partial{}T}+v-\frac{\hbar^{2}}{mZ^{1/3}}
\vec{K}\ensuremath{\boldsymbol{\cdot}}\vec{\nabla}_{\!\!R}U+\frac{\hbar^{2}}{2mZ^{2/3}}
\left(\vec{\nabla}_{\!\!R}U\big.\right)^{2}+\ensuremath{\mathrm{i}}\frac{\hbar^{2}}{2mZ^{2/3}}
\nabla_{\!\!R}^{2}U = 0.
\end{equation}
Let $\lim_{Z\to\infty}U=U_{\infty}$. Then (\ref{Eqn10})
collapses to $-\partial{}U_{\infty}/\partial{}T+v=0$, whose
solution is $U_{\infty}=vT$. Hence for $Z\to\infty$, the
expression in (\ref{Eqn09}) becomes simply scaled by
$Z^{-1}Z^{2}=Z$. [On the other hand, if we carry out the
unitary scale transformation $\vec{k}\to\vec{K}$,
$\vec{k}=Z^{1/3}\vec{K}$, (\ref{Eqn08}) leads to
$U=vT+\mathcal{O}\!\left(Z^{-2/3}\big.\right)$, and
$\left[\hbar^{2}\vec{k}^{2}\tau/2m+U\big.\right]\to
\left[\hbar^{2}\vec{K}^{2}T/2mZ^{2/3}+vT+\mathcal{O}\!\left(Z^{-2/3}\big.\right)
\right]$. The latter, under the subsequent change of
variables $\vec{K}\to{}Z^{1/3}\vec{K}$, leads to
$\left[\hbar^{2}\vec{K}^{2}T/2m+vT+\mathcal{O}\!\left(Z^{-1/3}\big.\right)
\right]$, giving the same expression for (\ref{Eqn09}) as before,
with an overall scaling by $Z$.]
Accordingly, we have the following limits for large $Z$, as
readily verified upon substituting $vT$ for $U$ in the limit $Z\to\infty$~:
\begin{subequations}
\begin{equation}\label{Eqn11a}
\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\:\frac{2}{2\pi\ensuremath{\mathrm{i}}}\int_{-\infty}^{\infty}\!
\frac{\ensuremath{\mathrm{d}}\tau}{\tau-\ensuremath{\mathrm{i}}\varepsilon}\:G_{0}(\vec{r}\tau,\vec{r}0;V)
\longrightarrow{}
Z\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}\:\rho_{\mathrm{TF}}(\vec{R})\equiv{}Z,
\end{equation}
\begin{align}
Z^{-7/3} & \int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\:\frac{2}{2\pi\ensuremath{\mathrm{i}}}\int_{-\infty}^{\infty}\!
\frac{\ensuremath{\mathrm{d}}\tau}{\tau-\ensuremath{\mathrm{i}}\varepsilon}\:\ensuremath{\mathrm{i}}\frac{\partial}{\partial\tau}
G_{0}(\vec{r}\tau,\vec{r}0;V) \nonumber \\
\longrightarrow{}&
2\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}\int\!\!\frac{\ensuremath{\mathrm{d}}^{3}\vec{K}}{(2\pi)^{3}}\left[
\frac{\hbar^{2}\vec{K}^{2}}{2m}+v(\vec{R})\right]\mathrm{\Theta}\!\left(
\sqrt{-\frac{2mv(\vec{R})}{\hbar^{2}}}-|\vec{K}|\right) \nonumber \\
&\quad{}=
\left(3\pi^{2}\big.\right)^{5/3}\frac{\hbar^{2}}{10\pi^{2}m}\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}
\:\left(\rho_{\mathrm{TF}}(\vec{R})\big.\right)^{5/3}-e^{2}\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}\:
\frac{\rho_{\mathrm{TF}}(\vec{R})}{R} \nonumber \\
&\quad\quad{}
+e^{2}\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}'\:\rho_{\mathrm{TF}}(\vec{R})
\frac{1}{\left|\vec{R}-\vec{R}'\big.\right|}\rho_{\mathrm{TF}}(\vec{R}').
\label{Eqn11b}
\end{align}
\end{subequations}
Here, the factor 2 multiplying the $\tau$-integrals is to account
for spin. The $\tau$-integrals project out the negative
spectrum of $h$.
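To see this, insert the (formal) spectral representation
$G_{0}(\vec{r}\tau,\vec{r}'0;V)=\sum_{\lambda}\psi_{\lambda}(\vec{r})\,
\psi_{\lambda}^{*}(\vec{r}')\:\ensuremath{\mathrm{e}}^{-\ensuremath{\mathrm{i}}\lambda\tau}$, the sum running over
the spectrum of $h$; each mode then enters through the elementary contour
integral
\begin{equation*}
\frac{2}{2\pi\ensuremath{\mathrm{i}}}\int_{-\infty}^{\infty}\!
\frac{\ensuremath{\mathrm{d}}\tau}{\tau-\ensuremath{\mathrm{i}}\varepsilon}\;\ensuremath{\mathrm{e}}^{-\ensuremath{\mathrm{i}}\lambda\tau}
= 2\,\mathrm{\Theta}(-\lambda),
\end{equation*}
obtained by closing the contour in the upper (lower) half $\tau$-plane for
$\lambda<0$ ($\lambda>0$). Only the negative part of the spectrum thus
survives, and the factor $\ensuremath{\mathrm{i}}\,\partial/\partial\tau$ in the second
integral supplies the eigenvalue $\lambda$.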
Equation (\ref{Eqn11a}) in particular is of fundamental
importance. It states that for large $Z$, the Hamiltonian
$h$, allowing for spin, has $Z$ (orthonormal) eigenvectors
corresponding to its negative spectrum. Let
$g_{1}(\vec{r},\sigma),\ldots,g_{Z}(\vec{r},\sigma)$ denote these
eigenvectors for large $Z$. Define the determinantal
(anti-symmetric) function
\begin{equation}\label{Eqn12}
\phi_{Z}(\vec{r}_{1}\sigma_{1},\ldots,\vec{r}_{Z}\sigma_{Z}) =
\frac{1}{\sqrt{Z!}}\:\det\left[g_{\alpha}(\vec{r}_{\beta},\sigma_{\beta})
\big.\right].
\end{equation}
Since such an anti-symmetric function does not necessarily
coincide with the ground-state function of the Hamiltonian $H$ in
(\ref{Eqn01}), the expectation value
$\BK{\phi_{Z}}{H}{\phi_{Z}\big.}$ with respect to $\phi_{Z}$ in
(\ref{Eqn12}) can only \emph{overestimate} the exact ground-state
energy $E_{Z}$ of $H$, or at best be equal to it.
We rewrite the Hamiltonian in (\ref{Eqn01}) equivalently as
\begin{equation}\label{Eqn13}
H = \sum\limits_{\alpha=1}^{Z}h_{\alpha}+\left(
\sum\limits_{\alpha<\beta}^{Z}\frac{e^{2}}{\left|\vec{r}_{\alpha}
-\vec{r}_{\beta}\big.\right|}-e^{2}\sum\limits_{\alpha=1}^{Z}
\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}'\:\frac{n(\vec{r}')}{\left|\vec{r}_{\alpha}
-\vec{r}'\big.\right|}\right),
\end{equation}
where $h_{\alpha}$ is defined in (\ref{Eqn02}) with variables
$\vec{r}_{\alpha}$, $\vec{p}_{\alpha}$.
Accordingly,
\begin{align}
\lim\limits_{Z\to\infty}Z^{-7/3}E_{Z} &\leqslant{} \lim\limits_{Z\to\infty}Z^{-7/3}
\BK{\phi_{Z}}{H}{\phi_{Z}\big.} \nonumber \\
&= \lim\limits_{Z\to\infty}Z^{-7/3}\sum\limits_{\alpha=1}^{Z}
\BK{g_{\alpha}}{h_{\alpha}}{g_{\alpha}\big.}
+\lim\limits_{Z\to\infty}Z^{-7/3}F_{Z},
\label{Eqn14}
\end{align}
where
\begin{align}
F_{Z} =& -e^{2}\sum\limits_{\sigma}\int\!
\frac{\ensuremath{\mathrm{d}}^{3}\vec{r}\,\ensuremath{\mathrm{d}}^{3}\vec{r}'}{\left|\vec{r}-\vec{r}'\big.\right|}\:
n_{Z}(\vec{r}\sigma,\vec{r}\sigma)\,n(\vec{r}') \nonumber \\
&\quad{}
+\frac{e^{2}}{2}\sum\limits_{\sigma,\sigma'}\int\!
\frac{\ensuremath{\mathrm{d}}^{3}\vec{r}\,\ensuremath{\mathrm{d}}^{3}\vec{r}'}{\left|\vec{r}-\vec{r}'\big.\right|}
\left[n_{Z}(\vec{r}\sigma,\vec{r}\sigma)\,n_{Z}(\vec{r}'\sigma',\vec{r}'\sigma')
-\left|n_{Z}(\vec{r}\sigma,\vec{r}'\sigma')\big.\right|^{2}\Big.\right],
\label{Eqn15}
\end{align}
\begin{equation}\label{Eqn16}
n_{Z}(\vec{r}\sigma,\vec{r}'\sigma') = \sum\limits_{\alpha=1}^{Z}
g_{\alpha}(\vec{r},\sigma)\,g_{\alpha}^{*}(\vec{r}',\sigma'),
\end{equation}
or
\begin{align}
F_{Z} \leqslant{} -e^{2}\int\!
\frac{\ensuremath{\mathrm{d}}^{3}\vec{r}\,\ensuremath{\mathrm{d}}^{3}\vec{r}'}{\left|\vec{r}-\vec{r}'\big.\right|}
&{}
\left[n(\vec{r}')\left(\sum\limits_{\sigma}n_{Z}(\vec{r}\sigma,\vec{r}\sigma)\right)\right.
\nonumber \\
&\quad{}
-\frac{1}{2}\left.\left(\sum\limits_{\sigma}n_{Z}(\vec{r}\sigma,\vec{r}\sigma)\right)
\left(\sum\limits_{\sigma'}n_{Z}(\vec{r}'\sigma',\vec{r}'\sigma')\right)\right].
\label{Eqn17}
\end{align}
However, we also have
\begin{align}
\lim\limits_{Z\to\infty}Z^{-2}\sum\limits_{\sigma}n_{Z}(\vec{r}\sigma,\vec{r}\sigma)
&=
\lim\limits_{Z\to\infty}Z^{-2}\:\frac{2}{2\pi\ensuremath{\mathrm{i}}}\int_{-\infty}^{\infty}\!
\frac{\ensuremath{\mathrm{d}}\tau}{\tau-\ensuremath{\mathrm{i}}\varepsilon}\;G_{0}(\vec{r}\tau,\vec{r}0;V)
\nonumber \\
&\equiv{} \rho_{\mathrm{TF}}(\vec{R}),
\label{Eqn18}
\end{align}
\begin{align}
\lim\limits_{Z\to\infty}Z^{-7/3}\sum\limits_{\alpha=1}^{Z}
\BK{g_{\alpha}}{h_{\alpha}}{g_{\alpha}\big.}
&= \lim\limits_{Z\to\infty}\left(Z^{-7/3}\:2\sum\limits_{\lambda<0}\lambda\right)
\nonumber \\
&= \lim\limits_{Z\to\infty}Z^{-7/3}\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\:\frac{2}{2\pi\ensuremath{\mathrm{i}}}
\int_{-\infty}^{\infty}\!\frac{\ensuremath{\mathrm{d}}\tau}{\tau-\ensuremath{\mathrm{i}}\varepsilon}
\nonumber \\
&\qquad\qquad\qquad\qquad\qquad{}\times
\ensuremath{\mathrm{i}}\frac{\partial}{\partial\tau}G_{0}(\vec{r}\tau,\vec{r}0;V),
\label{Eqn19}
\end{align}
where $\sum_{\lambda<0}\lambda$ in $2\sum_{\lambda<0}\lambda$ is a
sum over all the negative eigenvalues of $h$ in (\ref{Eqn02}),
allowing for multiplicity but not spin degeneracy. The
factor 2 takes the latter into account.
From (\ref{Eqn14})--(\ref{Eqn19}) and (\ref{Eqn11b}), we finally
have
\begin{align}
\lim\limits_{Z\to\infty}Z^{-7/3}E_{Z} &\leqslant{}
\frac{\left(3\pi^{2}\big.\right)^{5/3}\hbar^{2}}{10\pi^{2}m}\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}\:
\left(\rho_{\mathrm{TF}}(\vec{R})\big.\right)^{5/3}-e^{2}\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}\:
\frac{\rho_{\mathrm{TF}}(\vec{R})}{R} \nonumber \\
&\qquad{}
+\frac{e^{2}}{2}\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}\,\ensuremath{\mathrm{d}}^{3}\vec{R}'\:\rho_{\mathrm{TF}}(\vec{R})
\frac{1}{\left|\vec{R}-\vec{R}'\big.\right|}\rho_{\mathrm{TF}}(\vec{R}'),
\label{Eqn20}
\end{align}
and the right-hand side is the coefficient of $Z^{7/3}$ of the
ground-state Thomas--Fermi energy.
\section{The lower bound}
Given any arbitrary real and positive function
$\rho_{Z}(\vec{r})$, we use the following elementary text-book
bound\cite{Thirring_1981}~:
\begin{align}
\sum\limits_{\alpha<\beta}^{Z}
\frac{1}{\left|\vec{r}_{\alpha}-\vec{r}_{\beta}\big.\right|}
\geqslant{}&
\sum\limits_{\alpha=1}^{Z}\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\:
\frac{\rho_{Z}(\vec{r})}{\left|\vec{r}-\vec{r}_{\alpha}\big.\right|}
-\frac{1}{2}\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\,\ensuremath{\mathrm{d}}^{3}\vec{r}'\:\rho_{Z}(\vec{r})
\frac{1}{\left|\vec{r}-\vec{r}'\big.\right|}\rho_{Z}(\vec{r}')
\nonumber \\
&\quad{}
-\frac{3}{2}\pi^{1/3}Z^{2/3}\left[\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\;
\left(\rho_{Z}(\vec{r})\big.\right)^{2}\right]^{1/3}.
\label{Eqn21}
\end{align}
Here the real function $\rho_{Z}(\vec{r})$ must be positive but is
otherwise \emph{arbitrary}, to the extent that the integrals on the
right-hand side of
(\ref{Eqn21}) exist. We conveniently choose it in such a
way that $\rho_{Z}(\vec{r})\to{}Z^{2}\rho_{\mathrm{TF}}(\vec{R})$
for $Z\to\infty$, which will then coincide with $n(\vec{r})$ used
above in (\ref{Eqn03}). Consider the Hamiltonian
$h'=\vec{p}^{2}/2m+V'$, where
\begin{equation}\label{Eqn22}
V'(\vec{r}) = -\frac{Ze^{2}}{r}+e^{2}\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}'\:
\frac{\rho_{Z}(\vec{r}')}{\left|\vec{r}-\vec{r}'\big.\right|}.
\end{equation}
With $\rho_{Z}(\vec{r})$ conveniently chosen, $V'(\vec{r})$ is then a
locally square integrable function satisfying
$V'(\vec{r})\to{}0$ for $r\to\infty$. Let $\psi$ be a
normalized antisymmetric function in
$(\vec{r}_{1}\sigma_{1},\ldots,\vec{r}_{Z}\sigma_{Z})$. Then
(\ref{Eqn21}) implies that
\begin{align}
\BK{\psi}{H}{\psi\big.}
\geqslant{}&
\BK{\psi}{\sum\limits_{\alpha}h'_{\alpha}}{\psi}
-\frac{e^{2}}{2}\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\,\ensuremath{\mathrm{d}}^{3}\vec{r}'\:\rho_{Z}(\vec{r})
\frac{1}{\left|\vec{r}-\vec{r}'\big.\right|}\rho_{Z}(\vec{r}')
\nonumber \\
&\quad{}
-\frac{3}{2}\pi^{1/3}Z^{2/3}e^{2}\left[\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\;
\left(\rho_{Z}(\vec{r})\big.\right)^{2}\right]^{1/3}.
\label{Eqn23}
\end{align}
Consider the lowest energy $E$ of the Hamiltonian
$\sum_{\alpha}h'_{\alpha}$. The Pauli exclusion principle
comes to the rescue here.\cite{Lieb_1976} \ The $Z$
``non-interacting'' electrons (although each interacts with the
external potential $V'$) can be put, according to the Pauli
exclusion principle, in the lowest energy levels of
$\sum_{\alpha}h'_{\alpha}$ (allowing for spin degeneracy), provided $Z$
is less than the number of such available levels. If $Z$ is
larger, the remaining free electrons should have arbitrarily
small kinetic energies to define the lowest energy. In either case,
$E\geqslant{}2\sum_{\lambda<0}\lambda$, where
$\sum_{\lambda<0}\lambda$, defined as above, is now applied to
$h'$. Accordingly,
\begin{equation}\label{Eqn24}
\lim\limits_{Z\to\infty}Z^{-7/3}\BK{\psi}{H}{\psi\big.}
\geqslant{} \lim\limits_{Z\to\infty}K_{Z},
\end{equation}
where
\begin{align}
K_{Z} &= Z^{-7/3}\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\:\frac{2}{2\pi\ensuremath{\mathrm{i}}}\int_{-\infty}^{\infty}\!
\frac{\ensuremath{\mathrm{d}}\tau}{\tau-\ensuremath{\mathrm{i}}\varepsilon}\:\ensuremath{\mathrm{i}}\frac{\partial}{\partial\tau}
G_{0}(\vec{r}\tau,\vec{r}0;V') \nonumber \\
&\qquad{}
-Z^{-7/3}\:\frac{e^{2}}{2}\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\,\ensuremath{\mathrm{d}}^{3}\vec{r}'\:\rho_{Z}(\vec{r})
\frac{1}{\left|\vec{r}-\vec{r}'\big.\right|}\rho_{Z}(\vec{r}')
\nonumber \\
&\qquad{}
-\frac{3}{2}\pi^{1/3}Z^{-5/3}e^{2}\left[\int\!\ensuremath{\mathrm{d}}^{3}\vec{r}\;
\left(\rho_{Z}(\vec{r})\big.\right)^{2}\right]^{1/3},
\label{Eqn25}
\end{align}
and $G_{0}(\vec{r}\tau,\vec{r}0;V')$ is defined as above. Also
here we have used the equality on the extreme right-hand side of
(\ref{Eqn19}). Since the right-hand side of the inequality
(\ref{Eqn24}) is independent of $\psi$, this inequality holds with
$\psi$ corresponding to the ground-state function of $H$ as well,
i.e., with $\BK{\psi}{H}{\psi\big.}$ corresponding to
\begin{equation}\label{Eqn26}
\min\limits_{\psi}\BK{\psi}{H}{\psi\big.} = E_{Z}.
\end{equation}
To the extent that $\rho_{Z}(\vec{r})>0$ is arbitrary, we choose
it conveniently as
\begin{equation}\label{Eqn27}
\rho_{Z}(\vec{r}) =
Z^{2}\rho_{\mathrm{TF}}(\vec{R})\sqrt{1-\ensuremath{\mathrm{e}}^{-Z\alpha{}R}},
\end{equation}
where $\alpha>0$ is an arbitrary scale parameter. We note
that $\rho_{\mathrm{TF}}(\vec{R})\sim{}R^{-3/2}$ for $R\to{}0$,
and that $\rho_{\mathrm{TF}}(\vec{R})\sim{}R^{-6}$ for
$R\to\infty$. The factor $\sqrt{1-\exp(-Z\alpha{}R)}$
ensures the integrability of the last integral on the right-hand
side of (\ref{Eqn25}). We estimate the latter for
$Z\to\infty$ as
\begin{align}
\frac{1}{Z^{2/3}} & \left[\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}\;\rho_{\mathrm{TF}}^{2}(\vec{R})
\left(1-\ensuremath{\mathrm{e}}^{-Z\alpha{}R}\Big.\right)\right]^{1/3} \nonumber \\
&\qquad\qquad\leqslant{}
\left[\int_{\alpha{}R\leqslant{}1/Z}\!\ensuremath{\mathrm{d}}^{3}\vec{R}\;\frac{\alpha}{Z}\:R\,
\rho_{\mathrm{TF}}^{2}(\vec{R})\right. \nonumber \\
&\qquad\qquad\qquad\quad{}
+\frac{\left(1-\ensuremath{\mathrm{e}}^{-Z}\Big.\right)}{Z^{2}}\int_{1>\alpha{}R>1/Z}\!\ensuremath{\mathrm{d}}^{3}\vec{R}\;
\rho_{\mathrm{TF}}^{2}(\vec{R}) \nonumber \\
&\qquad\qquad\qquad\quad{}
+\left.\frac{1}{Z^{2}}\int_{\alpha{}R\geqslant{}1}\!\ensuremath{\mathrm{d}}^{3}\vec{R}\;
\rho_{\mathrm{TF}}^{2}(\vec{R})\right]^{1/3}.
\label{Eqn28}
\end{align}
The second integral on the right-hand side is at worst logarithmic
in $Z$. Hence the last term on the right-hand side of
(\ref{Eqn25}) vanishes for $Z\to\infty$. Since
${-\left(1-\ensuremath{\mathrm{e}}^{-Z\alpha{}R}\big.\right)}\geqslant{}{-1}$, the
second term (with the minus sign) on the right-hand side of
(\ref{Eqn25}) is bounded below by
\begin{equation*}
-\frac{e^{2}}{2}\int\!\ensuremath{\mathrm{d}}^{3}\vec{R}\,\ensuremath{\mathrm{d}}^{3}\vec{R}'\:\rho_{\mathrm{TF}}(\vec{R})
\frac{1}{\left|\vec{R}-\vec{R}'\big.\right|}\rho_{\mathrm{TF}}(\vec{R}').
\end{equation*}
Finally, we note that since $1-\ensuremath{\mathrm{e}}^{-Z\alpha{}R}\to{}1$ for
$Z\to\infty$, and with $V'\equiv{}Z^{4/3}v'_{Z}$ and
$\lim_{Z\to\infty}v'_{Z}\equiv{}v(\vec{R})$, the limit of the
first expression on the right-hand side of (\ref{Eqn25}) coincides
with that in (\ref{Eqn11b}) for $Z\to\infty$. All told, we
see that the lower bound in (\ref{Eqn24}) coincides with the upper
bound in (\ref{Eqn20}). This completes our
demonstration.
In a future report, we will investigate to what extent this
analysis may be extended to other interactions.
\end{document}
|
\begin{document}
\title[Unfriendly colorings]{Unfriendly colorings of graphs with finite average degree}
\author{Clinton T.~Conley}
\address[C.T.~Conley]{Carnegie Mellon University.}
\author{Omer Tamuz}
\address[O.~Tamuz]{California Institute of Technology.}
\thanks{Clinton T.~Conley was supported by NSF grant DMS-1500906. Omer Tamuz was supported by a grant from the Simons Foundation (\#419427).}
\maketitle
\begin{abstract}
In an unfriendly coloring of a graph the color of every node mismatches that of the majority of its neighbors. We show that every probability measure preserving Borel graph with finite average degree admits a Borel unfriendly coloring almost everywhere. We also show that every bounded degree Borel graph of subexponential growth admits a Borel unfriendly coloring.
\end{abstract}
\section{Introduction}
Suppose that $G$ is a locally finite graph on the vertex set $X$. We say that $c \colon X \to 2$ is an \emph{unfriendly coloring} of $G$ if for all $x \in X$ at least half of $x$'s neighbors receive a different color than $x$ does. More formally, letting $G_x$ denote the set of $G$-neighbors of $x$, such a function $c$ is an unfriendly coloring if $|\{y \in G_x : c(x) \neq c(y)\}| \geq |\{y \in G_x : c(x) = c(y)\}|$. By a compactness argument unfriendly colorings exist for all locally finite graphs (see, e.g.,~\cite{aharoni1990unfriendly}). There exist graphs with uncountable vertex sets that have no unfriendly colorings \cite{shelah1990graphs}; it is not known if this is possible for graphs with countably many vertices.
A large and growing literature considers measure-theoretical analogues of classical combinatorial questions (see, e.g., a survey by Kechris and Marks \cite{kechris2015descriptive}).
Following~\cite{conley2014measure}, we consider a measure-theoretical analogue of the question of unfriendly colorings. Suppose that $G$ is a locally finite Borel graph on the standard Borel space $X$, and that $\mu$ is a Borel probability measure on $X$. We say that $G$ is \emph{$\mu$-preserving} if there are countably many $\mu$-preserving Borel involutions whose graphs cover the edges of $G$. Equivalently, $G$ is $\mu$-preserving if its connectedness relation $E_G$ is a $\mu$-preserving equivalence relation.
An important example of such graphs comes from probability measure preserving actions of finitely generated groups. Indeed let a group, generated by the finite symmetric set $S$, act by measure preserving transformations on a standard Borel probability space $(X,\mu)$. Then the associated graph $G=(X,E)$ whose edges are
$$
E = \{(x,y) \,:\, y=s x\text{ for some } s \in S\}
$$
is a $\mu$-preserving graph.
In \cite{conley2014measure} it is shown that every free probability measure preserving action of a finitely generated group is weakly equivalent to another such action whose associated graph admits an unfriendly coloring. Note that such graphs are regular: (almost) every node has degree $|G_x|=|S|$. Recall that the ($\mu$-)\emph{cost} of a $\mu$-preserving locally finite Borel graph $G$ is simply half its average degree: $\cost(G) = \frac{1}{2} \int_X |G_x| \ d\mu$. Equivalently, using the Lusin-Novikov uniformization theorem (see, e.g., \cite[Lemma 18.12]{kechris2012classical}) one may circumvent this factor of $\frac{1}{2}$ by instead computing $\int_X |\vec{G}_x|d\mu$, where $\vec{G}$ is an arbitrary measurable orientation of $G$.
Our first result shows that every measure preserving graph with finite cost admits an (almost everywhere) unfriendly coloring.
\begin{theorem}\label{thm:invariant}
Suppose that $(X,\mu)$ is a standard probability space and that $G$ is a $\mu$-preserving locally finite Borel graph on $X$ with finite cost. Then there is a $\mu$-conull $G$-invariant Borel set $A$ such that $G\restriction A$ admits a Borel unfriendly coloring.
\end{theorem}
We next explore how the invariance assumption can be weakened. Recall that a Borel probability measure is $G$-\emph{quasi-invariant} if the $G$-saturation of every $\mu$-null set remains $\mu$-null. Such measures admit a \emph{Radon-Nikodym} cocycle $\rho \colon G \to \mathbb{R}^+$ so that whenever $A \subseteq X$ is Borel and $f \colon A \to X$ a Borel partial injection whose graph is contained in $G$, then $\mu(f[A]) = \int_A \rho(x,f(x))\ d\mu$.
\begin{theorem}\label{thm:quasiinvariant}
Suppose that $(X,\mu)$ is a standard probability space, that $G$ is a Borel graph on $X$ with bounded degree $d$, and that $\mu$ is $G$-quasi-invariant, with corresponding Radon-Nikodym cocycle $\rho$. Suppose also that for all $(x,y) \in G$, $1-\frac{1}{d} \leq \rho(x,y) \leq 1+\frac{1}{d}$. Then there is a $\mu$-conull $G$-invariant Borel set $A$ such that $G\restriction A$ admits a Borel unfriendly coloring.
\end{theorem}
The proofs of Theorems~\ref{thm:invariant} and~\ref{thm:quasiinvariant} build on a potential function technique used in~\cite{tamuz2015majority} (see also \cite{benjamini2016convergence}) to study majority dynamics on infinite graphs; in the context of finite graphs, these techniques go back to Goles and Olivos \cite{goles1980periodic}. Indeed, we show that in our settings (anti)-majority dynamics converge to an unfriendly coloring. The combinatorial nature of this technique allows us to extend our results to the Borel setting.
\begin{theorem}
\label{thm:subexp}
Suppose that $G$ is a bounded-degree Borel graph of subexponential growth. Then $G$ admits a Borel unfriendly coloring.
\end{theorem}
A natural question remains open: is there a locally finite Borel graph that does not admit a Borel unfriendly coloring? To the best of our knowledge this is not known, even with regards to the restricted class of bounded degree graphs. In contrast, Theorem~\ref{thm:invariant} shows that for this class unfriendly colorings exist in the measure preserving case. Still, we do not know if the finite cost assumption in Theorem~\ref{thm:invariant} is necessary, or whether every locally finite measure preserving graph admits an almost everywhere unfriendly coloring.
\section{Proofs}
\begin{proof}[Proof of Theorem~\ref{thm:invariant}]
By Kechris-Solecki-Todorcevic \cite[Proposition 4.5]{kechris1999borel}, there exists a \emph{repetitive sequence of independent sets}: a sequence $(X_n)_{n \in \mathbb{N}}$ of $G$-independent Borel sets so that each $x \in X$ is in infinitely many $X_n$. We will recursively build for each $n \in \mathbb{N}$ a Borel function $c_n \colon X \to 2$; these functions converge $\mu$-almost everywhere to an unfriendly coloring of $G$.
The choice of $c_0$ is arbitrary, but we may as well declare it to be the constant $0$ function.
Suppose now that $c_n$ has been defined. We build $c_{n+1}$ by ``flipping'' the color of vertices in $X_n$ with too many neighbors of the same color, and leaving everything else unchanged. More precisely, $c_{n+1}(x) = 1-c_n(x)$ if $x \in X_n$ and $|\{y \in G_x : c_n(x) \neq c_n(y)\}| < |\{y \in G_x : c_n(x) = c_n(y)\}|$; otherwise, $c_{n+1}(x) = c_n(x)$.
To show that this sequence $c_n$ converges $\mu$-a.e.~to an unfriendly coloring, we introduce some auxiliary graphs. Let $G_n$ be the subgraph of $G$ containing exactly those edges between vertices of the same $c_n$-color, so $x \mathrel{G_n} y$ iff $x \mathrel{G} y$ and $c_n(x) = c_n(y)$. Certainly for all $n \in \mathbb{N}$, $\cost(G_n) \leq \cost(G)$.
For $n \in \mathbb{N}$, let $B_n = \{x \in X : c_n(x) \neq c_{n+1}(x)\}$.
\begin{claim}
$\cost(G_n) - \cost(G_{n+1}) \geq \mu(B_n)$.
\end{claim}
\begin{proof}[Proof of the claim]
Recall that, by the definition of $c_{n+1}$, $x \in B_n$ iff $x \in X_n$ and $|\{y \in G_x : c_n(x) \neq c_n(y)\}| < |\{y \in G_x : c_n(x) = c_n(y)\}|$. In particular, $B_n \subseteq X_n$ and hence is $G$-independent. Thus $G_{n+1} = G_n \mathrel{\triangle} \{(x,y) : x \mathrel G y$ and $\{x,y\} \cap B_n \neq \emptyset\}$. But for each $x \in B_n$, the above characterization of membership in $B_n$ ensures that its $G_{n+1}$-degree is strictly smaller than its $G_n$-degree. The claim follows.
\end{proof}
In particular, since the sum telescopes we see $\sum_{n \in \mathbb{N}} \mu(B_n) \leq \cost(G) < \infty$. Hence the set $C = \{x \in X : x \in B_n \mbox{ for infinitely many } n\}$ is $\mu$-null by the Borel-Cantelli lemma. Let $A = X \setminus [C]_G$, so $A$ is $\mu$-conull. For each $x \in A$ the value $c_n(x)$ changes for only finitely many $n$, so the pointwise limit $c = \lim_n c_n$ exists on $A$.
\begin{claim}
$c$ is an unfriendly coloring of $G \restriction A$.
\end{claim}
\begin{proof}[Proof of the claim]
Fix $x \in A$ and fix $k\in \mathbb{N}$ sufficiently large so that $c_n(y) = c_k(y)$ for all $n \geq k$ and all $y \in G_x \cup \{x\}$. Fix $n > k$ so that $x \in X_n$. Since $c_n(x) = c_{n+1}(x)$, the definition of $c_{n+1}$ implies that $|\{y \in G_x : c_n(x) \neq c_n(y)\}| \geq |\{y \in G_x : c_n(x) = c_n(y)\}|$. But $c_n = c$ on $G_x \cup \{x\}$, and hence $c$ is unfriendly at $x$, as desired.
\end{proof}
This completes the proof of the theorem.
\end{proof}
We next analyze the extent to which the measure-theoretic hypotheses may be weakened in this argument. Note that the sequence $c_n$ of colorings is defined without using the measure at all (in fact it is determined by the graph $G$ and the sequence $(X_n)$ of independent sets); the measure only appears in the argument that the sequence converges to a limit coloring. And even in this convergence argument, invariance only shows up in the critical estimate $\cost(G_n) - \cost(G_{n+1}) \geq \mu(B_n)$.
\begin{definition}
Suppose that $G$ is a locally finite Borel graph on standard Borel $X$, that $(X_n)_{n \in \mathbb{N}}$ is a sequence of $G$-independent Borel sets so that each $x\in X$ is in infinitely many $X_n$. We define the \emph{flip sequence} $(c_n)_{n \in \mathbb{N}}$ of Borel functions from $X$ to $2$ as follows:
\begin{itemize}
\item
$c_0$ is the constant $0$ function,
\item
$c_{n+1}(x) = 1-c_n(x)$ if $x \in X_n$ and $|\{y \in G_x : c_n(x) \neq c_n(y)\}| < |\{y \in G_x : c_n(x) = c_n(y)\}|$; otherwise, $c_{n+1}(x) = c_n(x)$.
\end{itemize}
\end{definition}
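On a finite graph the flip sequence is easy to simulate; the following Python sketch (an illustration only, flipping one vertex at a time, which is trivially a $G$-independent set) implements the same dynamics. Each flip strictly decreases the number of monochromatic edges, so the process terminates in an unfriendly coloring.
\begin{verbatim}
import random

def unfriendly_coloring(adj, seed=0):
    """Flip dynamics on a finite graph given as {vertex: list of neighbors}."""
    random.seed(seed)
    c = {v: 0 for v in adj}                  # c_0 is the constant 0 coloring
    while True:
        unhappy = [v for v in adj
                   if sum(c[w] == c[v] for w in adj[v])
                      > sum(c[w] != c[v] for w in adj[v])]
        if not unhappy:                      # no vertex wants to flip: c is unfriendly
            return c
        c[random.choice(unhappy)] ^= 1       # flip one vertex (a singleton independent set)

# example: the 5-cycle
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(unfriendly_coloring(adj))
\end{verbatim}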
\begin{definition}
Given a locally finite Borel graph $G$ on $X$ and a sequence $(X_n)_{n \in \mathbb{N}}$ of repetitive independent sets as above, we say that a Borel measure $\mu$ on $X$ is \emph{compatible} with $G$ and $(X_n)$ if the corresponding flip sequence $c_n$ converges on a $\mu$-conull set.
\end{definition}
The proof of Theorem \ref{thm:invariant} shows that whenever $\mu$ is a $G$-invariant Borel probability measure with respect to which the average degree of $G$ is finite, then $\mu$ is compatible with every sequence of independent sets. We seek to weaken the invariance assumption when $G$ has bounded degree.
\begin{proposition}\label{prop:quasiinv}
Suppose that $G$ is a Borel graph on $X$ with bounded degree $d$, and that $\mu$ is a $G$-quasi-invariant Borel probability measure with corresponding Radon-Nikodym cocycle $\rho$. Suppose further that for all $(x,y) \in G$, $1-\frac{1}{d} \leq \rho(x,y) \leq 1+\frac{1}{d}$. Then $\mu$ is compatible with every repetitive sequence of independent sets.
\end{proposition}
Theorem~\ref{thm:quasiinvariant} is an immediate consequence of this proposition.
\begin{proof}[Proof of Proposition~\ref{prop:quasiinv}]
Put $\varepsilon = \frac{1}{d}$. Define a measure $M$ on $G$ by putting for all Borel $H \subseteq G$,
$$
M(H) = \int_X |H_x|\ d\mu
$$
This new measure $M$ will replace the occurrences of cost in the proof of Theorem \ref{thm:invariant}.
Consider the flip sequence $c_n$, and define corresponding graphs $G_n \subseteq G$ by $x \mathrel{G_n} y$ iff $x \mathrel{G} y$ and $c_n(x) = c_n(y)$. As before, let $B_n$ denote those $x \in X_n$ for which $c_{n+1}(x) \neq c_n(x)$. Note that the ``double counting'' that occurred in the proof of Theorem \ref{thm:invariant} may no longer be true double counting, but the bound on $\rho$ ensures that each edge is counted at most $(2+\varepsilon)$ times and at least $(2-\varepsilon)$ times.
\begin{claim}
$M(G_n) - M(G_{n+1}) \geq \mu(B_n)$.
\end{claim}
\begin{proof}[Proof of the claim]
Partition $B_n$ into finitely many Borel sets $A_{r,s}$ where $x \in A_{r,s}$ iff $x$ has $r$-many $G_n$ neighbors and $s$-many $G_{n+1}$ neighbors (so $r > s$ and $r+s \leq d$). We compute
\begin{align*}
M(G_n)-M(G_{n+1}) &= \int_X |(G_n)_x| - |(G_{n+1})_x|\ d\mu\\
&\geq \int_{B_n} (2-\varepsilon)|(G_n)_x| - (2+\varepsilon)|(G_{n+1})_x|\ d\mu\\
&=\sum_{r,s} \int_{A_{r,s}} (2-\varepsilon)r - (2+\varepsilon)s\ d\mu\\
&=\sum_{r,s} \int_{A_{r,s}} 2(r-s) -\varepsilon(r+s)\ d\mu\\
&\geq \sum_{r,s} \int_{A_{r,s}} 2 - d\varepsilon\ d\mu\\
&= \mu(B_n)
\end{align*}
as required.
\end{proof}
The remainder of the argument is as in the proof of Theorem \ref{thm:invariant}.
\end{proof}
Given Proposition~\ref{prop:quasiinv}, the proof of Theorem~\ref{thm:subexp} is straightforward.
\begin{proof}[Proof of Theorem~\ref{thm:subexp}]
Fix a degree bound $d$ for $G$ and put $\varepsilon = \frac{1}{d}$. It suffices to construct for each $x \in X$ a $G$-quasi-invariant Borel probability measure $\mu_x$ whose Radon-Nikodym cocycle is $\varepsilon$-bounded on $G$ such that $\mu_x(\{x\})>0$. If we do so, Proposition \ref{prop:quasiinv} ensures that for each $x$ the flip sequence $c_n$ converges on a $\mu_x$-conull set; since $\mu_x(\{x\})>0$, it converges at $x$, and thus it converges everywhere. The limit is then an unfriendly coloring by the same argument as in the final claim in the proof of Theorem \ref{thm:invariant}.
To construct $\mu_x$, first define a purely atomic measure $\nu_x$ supported on the $G$-component of $x$ by declaring $\nu_x(\{y\}) = (1+\varepsilon)^{-\delta(x,y)}$, where $\delta$ denotes the graph metric. Subexponential growth of $G$ ensures that $K = \sum_{y \in [x]_G} \nu_x(\{y\}) < \infty$. Finally, put $\mu_x = \frac{1}{K} \nu_x$.
\end{proof}
\section*{Acknowledgments}
We thank the anonymous referee for insightful comments. We also thank Alekos Kechris for organizing the seminar that inspired this paper.
\end{document}
|
\begin{document}
\mainmatter
\pagestyle{myheadings}
\title{Setup of
Order Conditions for Splitting Methods}
\titlerunning{Order Conditions for Splitting Methods}
\author{Winfried Auzinger \inst{1} \and Wolfgang Herfort \inst{1}
\and Harald Hofst{\"a}tter \inst{1} \and Othmar~Koch \inst{2}}
\authorrunning{Winfried Auzinger et al.}
\institute{Technische Universit{\"a}t Wien, Austria \\
\email{[email protected],
[email protected],
[email protected]}, \\
\url{www.asc.tuwien.ac.at/~winfried},
\url{www.asc.tuwien.ac.at/~herfort},
\url{www.harald-hofstaetter.at}
\and
Universit{\"a}t Wien, Austria \\
\email{[email protected]}, \\
\url{www.othmar-koch.org}
}
\maketitle
\begin{abstract}
This article is based on~\cite{auzingeretal13c} and~\cite{auzingeretal15K1},
where an approach based on Taylor expansion and the structure of
its leading term as an element of a free Lie algebra was described
for the setup of a system of order conditions for operator splitting methods.
Along with a brief review of these materials and some theoretical background,
we discuss the implementation of the ideas from these papers
in computer algebra, in particular
using\footnote{Maple is a trademark of $ \text{MapleSoft}^{\text{TM}} $.}
Maple~18.
A parallel version of such a code is described.
\keywords{Evolution equations,
splitting methods,
order conditions,
local error,
Taylor expansion,
parallel processing}
\end{abstract}
\section{Introduction} \label{sec:intro}
The construction of higher order discretization schemes of one-step type
for the numerical solution of evolution equations is typically based
on the setup and solution of a large system of polynomial equations
for a number of unknown coefficients. Classical examples are Runge-Kutta
methods, and their various modifications, see e.g.~\cite{ketchetal13}.
To design particular schemes, we need to understand
\begin{itemize}
\item[(i)] how to generate a system of algebraic equations for the coefficients sought,
\item[(ii)] how to solve the resulting system of polynomial equations.
\end{itemize}
Here, we focus on (i) which depends on the particular class of
methods one is interested in. We consider {\em splitting methods,}
which are based on the
idea of approximating the exact flow of an evolution equation by
compositions based on (usually two) separate subflows
which are easier to evaluate.
Computer algebra is an indispensable tool for solving such a problem,
and there are different algorithmic approaches.
In general there is a tradeoff between `manual' a priori analysis
and machine driven automatization.
For splitting methods,
a well-known approach is based on recursive application of the
Baker-Campbell-Hausdorff (BCH) formula, see~\cite{haireretal06}.
Instead, we follow another approach based on Taylor expansion and
a theoretical result concerning the structure of the leading term
in this expansion.
This has the advantage that explicit knowledge of the BCH-coefficients
is not required. Moreover, our approach adapts easily to splitting
into more than two parts, and even to pairs of splitting schemes akin
to Runge-Kutta methods.
Topic (ii) is not discussed in this paper.
Details concerning the theoretical background and a discussion concerning
concrete results and optimized schemes obtained
are given in~\cite{auzingeretal15K1},
and a collection of optimized schemes can be found at~\cite{splithp}.
We note that a related approach has recently
also been considered in~\cite{blanesetal13a}.
\subsection{Splitting methods for the integration of evolution equations}
\label{subsec:splitting}
In many applications, the right hand side $ F(u) $ of an evolution equation
\begin{equation} \label{AB-problem}
\partial_t u(t) = F(u(t)) = A(u(t)) + B(u(t)), \quad t \geq 0,
\quad u(0)~\text{given,}
\end{equation}
splits up in a natural way into two terms $ A(u) $ and $ B(u) $,
where the separate integration of the subproblems
\begin{equation*}
\partial_t u(t) = A(u(t)), \qquad \partial_t u(t) = B(u(t))
\end{equation*}
is much easier to accomplish than for the original problem.
\begin{example} \label{exa:lie-trotter}
The solution of a linear ODE system with constant coefficients,
\begin{equation*}
\partial_t u(t) = (A+B)\,u(t),
\end{equation*}
is given by
\begin{equation*}
u(t) = e^{t(A+B)}\,u(0).
\end{equation*}
The simplest splitting approximation (`Lie-Trotter'),
starting at some initial value $ u $ and applied with a time step of length $ t=h $,
is given by
\begin{equation*}
\nS(h,u) = e^{h B}\,e^{h A}\,u \approx e^{h(A+B)}u.
\end{equation*}
This is not exact (unless $ AB=BA $), but it satisfies
\begin{equation*}
\|(e^{h B}\,e^{h A} - e^{h(A+B)})u\| = \Order(h^2)
\quad \text{for}~~ h \to 0,
\end{equation*}
and the error of this approximation depends on the behavior of the commutator
$ [A,B] = AB-BA $.
\qed
\end{example}
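The $\Order(h^2)$ behaviour is easily observed numerically. The following Python
sketch (an illustration only, with random matrices; it uses NumPy and SciPy)
halves $h$ and watches the error decrease by a factor of about four.
\begin{small}
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
u = rng.standard_normal(4)

for h in (0.1, 0.05, 0.025):
    err = np.linalg.norm((expm(h * B) @ expm(h * A) - expm(h * (A + B))) @ u)
    print(h, err)    # halving h reduces the error by a factor of ~4, i.e. O(h^2)
\end{verbatim}
\end{small}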
A general splitting method takes steps of the
form\footnote{By $ \phi_F $ we denote the flow
associated with the equation $ \partial_t u = F(u) $,
and $ \phi_A,\,\phi_B $ are defined analogously.}
\begin{subequations} \label{AB-scheme}
\begin{equation} \label{AB-scheme-1}
\nS(h,u) = \nS_s(h,\nS_{s-1}(h,\ldots,\nS_1(h,u))) \approx \phi_F(h,u),
\end{equation}
with
\begin{equation} \label{AB-scheme-2}
\nS_j(h,v) = \phi_B(b_j\,h,\phi_A(a_j\,h,v)),
\end{equation}
\end{subequations}
where the (real or complex) coefficients $ a_j,b_j $ have to be found
such that a certain desired order of approximation for $ h \to 0 $ is obtained.
The local error of a splitting step is denoted by
\begin{equation} \label{local_error_notation}
\nS(h,u) - \phi_F(h,u) =: \nL(h,u).
\end{equation}
For our present purpose of finding asymptotic order conditions it is sufficient
to consider the case of a linear system, $ F(u) = F\,u = A\,u + B\,u $
with linear operators $ A $ and~$ B $.
We denote
\begin{equation*}
A_j = a_j\,A,~ B_j = b_j\,B, \quad j=1 \ldots s.
\end{equation*}
Then,
\begin{subequations} \label{AB-scheme-linear}
\begin{equation} \label{AB-scheme-linear-1}
\nS(h,u) = \nS(h) u, \quad
\nS(h) = \nS_s(h)\,\nS_{s-1}(h)\,\cdots\,\nS_1(h) \approx e^{h F},
\end{equation}
with
\begin{equation} \label{AB-scheme-linear-2}
\nS_j(h) = e^{h B_j}\,e^{h A_j}, \quad j=1 \ldots s.
\end{equation}
\end{subequations}
For the linear case the local error~\eqref{local_error_notation}
is of the form $ \nL(h) u $ with the linear operator
$ \nL(h) = \nS(h) - e^{hF} $.
\subsection{Commutators} \label{subsec:lie}
Commutators of the involved operators play a central role.
For formal consistency, we call $ A $ and $ B $ the `commutators of degree 1'.
There is (up to sign)
one non-vanishing\footnote{`Non-vanishing' means non-vanishing in general
(generic case, with no special assumptions on
$ A $ and $ B $).}
commutator of degree~2,
\begin{equation*}
[A,B] := A\,B - B\,A,
\end{equation*}
and there are two non-vanishing commutators of degree~3,
\begin{equation*}
[A,[A,B]] = A\,[A,B] - [A,B]\,A,
\quad
[[A,B],B] = [A,B]\,B - B\,[A,B],
\end{equation*}
and so on; see Section~\ref{subsec:locerr} for commutators of higher degrees.
\section{Taylor expansion of the local error} \label{sec:taylor}
\subsection{Representation of Taylor coefficients} \label{subsec:taycoe}
Consider the Taylor expansion, about $ h=0 $, of the local error operator $ \nL(h) $
of a consistent one-step method (satisfying the basic consistency
condition $ \nL(0)=0 $),
\begin{equation} \label{lerr-lead}
\nL(h) =
\sum_{q=1}^{p} \frac{h^q}{q!}\,\frac{{\rm d}^{q}}{{\rm d}h^{q}}\,\nL(h)\,\Big|_{h=0}
+\, \Order(h^{p+1}).
\end{equation}
The method is of asymptotic order $ p $ iff $ \nL(h) = \Order(h^{p+1}) $
for $ h \to 0 $; thus the conditions for order $ \geq p $ are given by
\begin{equation} \label{OC-Tay}
\frac{{\rm d}}{{\rm d}h}\,\nL(h)\,\Big|_{h=0} = \;\ldots\; = \frac{{\rm d}^{p}}{{\rm d}h^{p}}\,\nL(h)\,\Big|_{h=0} =\, 0.
\end{equation}
The formulas in~\eqref{OC-Tay} need to be presented in a more
explicit form, involving the operators $A$ and $B$.
For a splitting method~\eqref{AB-scheme-linear},
a calculation based on the Leibniz formula for higher derivatives
shows\footnote{If $ A $ and $ B $ commute, i.e., $ AB=BA $, then all
these expressions vanish.}
(see~\cite{auzingeretal15K1})
\begin{equation} \label{OC-Tay-1-AB}
\frac{{\rm d}^{q}}{{\rm d}h^{q}}\,\nL(h)\,\Big|_{h=0}
= \sum_{|{\bm k}|=q} \dbinom{q}{{\bm k}}
\prod\limits_{j=s \ldots 1}\;
\sum_{\ell=0}^{k_j} \dbinom{k_j}{\ell}\,B_j^{\ell}\,A_j^{k_j-\ell}
\;-\; (A+B)^q,
\end{equation}
with $ {\bm k}=(k_1,\ldots,k_s) \in \NN_0^s $.
\paragraph{Representation of~\eqref{OC-Tay-1-AB} in Maple.}
The non-commuting operators $ A $ and $ B $ are represented by
symbolic variables {\tt A} and {\tt B}, which can be declared to
be non-commutative making use of the corresponding feature
implemented in the package {\tt Physics}.
Now it is straightforward to generate the sum~\eqref{OC-Tay-1-AB},
with unspecified coefficients $ a_j,b_j $,
using standard combinatorial tools;
for details see Section~\ref{sec:impl}.
\subsection{The leading term of the local error expansion} \label{subsec:locerr}
Formally, the multinomial sums in the expressions~\eqref{OC-Tay-1-AB}
are multivariate homogeneous polynomials of total degree $ q $ in the variables
$ a_j,b_j,\,j=1 \ldots s $, and the coefficients of these polynomials are
power products of total degree $ q $ composed of powers of the
non-commutative symbols~$ A $ and~$ B $.
\begin{example}[\,{\rm\cite{auzingeretal15K1}}] \label{exa:s=p=2}
For $ s=2 $ we obtain
\begin{align*}
\frac{{\rm d}}{{\rm d}h}\,\nL(h)\,\Big|_{h=0} &= \noul{(a_1+a_2)}\,A + \noul{(b_1+b_2)}\,B
\;-\;(A+B), \\[2\jot]
\frac{{\rm d}^{2}}{{\rm d}h^{2}}\,\nL(h)\,\Big|_{h=0} &= ((a_1+a_2)^2)\,A^2 \label{OC-s=2-2}
+ \noul{(2\,a_2\,b_1)}\,A\,B \\
& \quad {}+ (2\,a_1\,b_1 + 2\,a_1\,b_2 + 2\,a_2\,b_2)\,B\,A
+ ((b_1+b_2)^2)\,B^2 \\[\jot]
& \quad \,-\,(A^2 + A\,B + B\,A +B^2).
\end{align*}
The consistency condition for order $ p \geq 1 $ reads
$ \frac{{\rm d}}{{\rm d}h}\,\nL(h)\,\big|_{h=0} = 0 $, which is
equivalent to $ a_1+a_2=1 $ and $ b_1+b_2=1 $.
At first sight, for order $ p \geq 2 $ we need 4, or (at second sight)
2~additional equations to be satisfied, such that
$ \frac{{\rm d}^{2}}{{\rm d}h^{2}}\,\nL(h)\,\big|_{h=0}=0 $.
However, assuming that the conditions for order $ p \geq 1 $ are satisfied,
the second derivative $ \frac{{\rm d}^{2}}{{\rm d}h^{2}}\,\nL(h)\,\big|_{h=0} $
simplifies to the commutator expression
\begin{equation*}
\frac{{\rm d}^{2}}{{\rm d}h^{2}}\,\nL(h)\,\Big|_{h=0} = \noul{(2\,a_2\,b_1 - 1)}\,[A,B],
\end{equation*}
giving the single additional condition $ 2\,a_2\,b_1 = 1$ for order $ p \geq 2 $.
Assuming now that $ a_1,a_2 $ and $ b_1,b_2 $ are chosen such
that all conditions for $ p \geq 2 $ are satisfied,
the third derivative $ \frac{{\rm d}^{3}}{{\rm d}h^{3}}\,\nL(h)\,\big|_{h=0} $,
which now represents the leading term of the local error,
simplifies to a linear combination of the commutators
$ [A,[A,B]] $ and $ [[A,B],B] $, of degree~3, namely
\begin{equation*}
\qquad\quad
\frac{{\rm d}^{3}}{{\rm d}h^{3}}\,\nL(h)\,\Big|_{h=0} = \noul{(3\,a_2^2\,b_1 - 1)}\,[A,[A,B]]
+ \noul{(3\,a_2\,b_1^2 - 1)}\,[[A,B],B].
\qquad\qed
\end{equation*}
\end{example}
\begin{remark}
The classical second-order Strang splitting method corresponds to the choice
$ a_1=\frac{1}{2},\,b_1=1,\,a_2=\frac{1}{2},\,b_2=0 $, or
$ a_1=0,\,b_1=\frac{1}{2},\,a_2=1,\,b_2=\frac{1}{2} $.
\end{remark}
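Both choices satisfy the conditions $ a_1+a_2=1 $, $ b_1+b_2=1 $, $ 2\,a_2\,b_1=1 $
derived above. A quick numerical confirmation of order $ p=2 $ (again only a
sketch, with random matrices, using the first choice of coefficients) reads:
\begin{small}
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def S(h, a=(0.5, 0.5), b=(1.0, 0.0)):      # Strang coefficients, first variant
    # S(h) = S_s(h)...S_1(h) with S_j(h) = exp(h b_j B) exp(h a_j A)
    M = np.eye(4)
    for aj, bj in zip(a, b):
        M = expm(h * bj * B) @ expm(h * aj * A) @ M
    return M

for h in (0.1, 0.05, 0.025):
    print(h, np.linalg.norm(S(h) - expm(h * (A + B))))
    # the error drops by a factor of ~8 when h is halved: L(h) = O(h^3), order p = 2
\end{verbatim}
\end{small}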
The observation from this simple example generalizes as follows:
\begin{proposition} \label{pro:leading-lie}
The leading term $ \frac{{\rm d}^{p+1}}{{\rm d}h^{p+1}}\,\nL(h)\,\big|_{h=0} $
of the Taylor expansion of the local error $ \nL(h) $
of a splitting method of order $ p $ is a Lie element, i.e.,
it is a linear combination of commutators of degree $ p+1 $.
\end{proposition}
\begin{proof}
See~\cite{auzingeretal13c,haireretal06}.
\qed
\end{proof}
\begin{example} \label{exa:higher-comm}
Assume that the coefficients $ a_j,b_j,\,j=1 \ldots s $ have been found
such that the associated splitting scheme is of order $ p \geq 3 $
(this necessitates $ s \geq 3 $). This means that
\begin{equation*}
\frac{{\rm d}}{{\rm d}h}\,\nL(h)\,\Big|_{h=0} =\, \frac{{\rm d}^{2}}{{\rm d}h^{2}}\,\nL(h)\,\Big|_{h=0}
=\, \frac{{\rm d}^{3}}{{\rm d}h^{3}}\,\nL(h)\,\Big|_{h=0} =\, 0,
\end{equation*}
and from Proposition~\ref{pro:leading-lie} we know that
\begin{equation*}
\frac{{\rm d}^{4}}{{\rm d}h^{4}}\,\nL(h)\,\Big|_{h=0} =
\gamma_1\,[A,[A,[A,B]]] + \gamma_2\,[A,[[A,B],B]] + \gamma_3\,[[[A,B],B],B]
\end{equation*}
holds, with certain coefficients $ \gamma_k $
depending on the $ a_j $ and $ b_j $.
Here we have made use of the fact that there are three independent commutators
of degree $4$ in $A$ and $B$.
\qed
\end{example}
Targeting for higher-order methods one needs to know a {\em basis of commutators}\,
up to a certain degree. The answer to this question is
known, and a full set of independent commutators of degree $ q $ can be represented
by a set of words of length $ q $ over the alphabet $ \{ A,B \} $.
A prominent example is the set of {\em Lyndon-Shirshov words}\,
(see~\cite{bokutetal2006}) displayed in Table~\ref{tab:Lyndon-AB}.
A combinatorial algorithm due to Duval~\cite{duval88} can be used to
generate this table.
Here, for instance, the word {\tt AABBB} represents the commutator
\begin{align*}
& [A,[[[A,B],B],B]] = \\
& \quad A^2 B^3 - 3ABAB^2 + 3AB^2AB - 2AB^3A + 3BAB^2A - 3B^2ABA + B^3A^2,
\end{align*}
with leading power product $ A^2 B^3 = AABBB $ (w.r.t.\ lexicographical order).
\begin{table}[!ht]
\begin{center}
\begin{small}
\begin{tabular}{|r|r|l|}
\hline
\,$q$\,&$ L_q $ &\;Lyndon-Shirshov words over the alphabet $ \{\mathtt{A,B}\} $ $\vphantom{\sum_A^A}$ \\ \hline
1\, & 2\, & \;$ {\mathtt{A}},\,{\mathtt{B}} $ $\vphantom{\sum_A^{A^A}}$ \\
2\, & 1\, & \;$ {\mathtt{AB}} $ $\vphantom{\sum_A^A}$ \\
3\, & 2\, & \;$ {\mathtt{AAB}},\,{\mathtt{ABB}} $ $\vphantom{\sum_A^A}$ \\
4\, & 3\, & \;$ {\mathtt{AAAB}},\,{\mathtt{AABB}},\,{\mathtt{ABBB}} $ $\vphantom{\sum_A^A}$ \\
5\, & 6\, & \;$ {\mathtt{AAAAB}},\,{\mathtt{AAABB}},\,{\mathtt{AABAB}},\,
{\mathtt{AABBB}},\,{\mathtt{ABABB}},\,{\mathtt{ABBBB}} $ $\vphantom{\sum_A^A}$ \\
6\, & 9\, & \;$ {\mathtt{AAAAAB}},\,{\mathtt{AAAABB}},\,{\mathtt{AAABAB}},\,
{\mathtt{AAABBB}},\,{\mathtt{AABABB}},\,
{\mathtt{AABBAB}},\,{\mathtt{AABBBB}},\,{\mathtt{ABABBB}},\,{\mathtt{ABBBBB}} $ $\vphantom{\sum_A^A}$ \\
7\, & 18\,& ~\,\ldots $\vphantom{\sum_A^A}$ \\
8\, & 30\,& ~\,\ldots $\vphantom{\sum_A^A}$ \\
9\, & 56\,& ~\,\ldots $\vphantom{\sum_A^A}$ \\
10\, & \,99\,& ~\,\ldots $\vphantom{\sum^A}$ \\[-\jot]
\vdots~ & \,\vdots~ & ~\,$\dots$ $\vphantom{\sum_A}$ \\[\jot] \hline
\end{tabular}
\newline
\end{small}
\caption{$ L_q $ is the number of Lyndon-Shirshov words of length~$ q $ over $ \{\mathtt{A,B}\} $. \label{tab:Lyndon-AB}}
\end{center}
\end{table}
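For illustration, a compact Python sketch of Duval's algorithm (our actual code
keeps this table as static data, see Section~\ref{sec:impl}) reproduces the
entries of Table~\ref{tab:Lyndon-AB}:
\begin{small}
\begin{verbatim}
def lyndon_words(alphabet, nmax):
    # Duval's algorithm: all Lyndon words of length <= nmax over a sorted alphabet
    k, w, words = len(alphabet), [0], []
    while w:
        words.append("".join(alphabet[i] for i in w))
        m = len(w)
        while len(w) < nmax:             # extend the current word periodically
            w.append(w[len(w) - m])
        while w and w[-1] == k - 1:      # strip trailing maximal letters ...
            w.pop()
        if w:
            w[-1] += 1                   # ... and increment the last remaining one
    return words

words = lyndon_words("AB", 6)
for q in range(1, 7):                    # prints the counts 2, 1, 2, 3, 6, 9 as in the table above
    print(q, sorted(wd for wd in words if len(wd) == q))
\end{verbatim}
\end{small}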
\subsection{The algorithm: implicit recursive elimination} \label{subsec:ire}
On the basis of Proposition~\ref{pro:leading-lie},
and with a table of Lyndon-Shirshov words available,
we can build up a set of conditions for order $ \geq p $
for a splitting method with $ s $ stages in the following way
(recall the notation $ A_j := a_j\,A $, $ B_j = b_j\,B $):
{\em
\noindent
For $ q=1 \ldots p $\,:
\begin{itemize}
\item
Generate the symbolic expressions~\eqref{OC-Tay-1-AB}
in the indeterminate coefficients $ a_j,b_j $ and
the non-commutative variables {\tt A} and {\tt B}.
\item
Extract the coefficients of the power products (of degree $ q $)
represented by all Lyndon-Shirshov words of length $ q $,
resulting in a set of $ L_q $ polynomials
$ P_{q,k}(a_j,b_j) $ of degree $ q $
in the coefficients $ a_j $ and $ b_j $.
\end{itemize}
}
\noindent
The resulting set of $ \sum_{q=1}^{p} L_q $ multivariate polynomial equations
\begin{equation} \label{OCSYS}
P_{q,k}(a_j,b_j) = 0, \quad k=1 \ldots L_q, ~~ q=1 \ldots p
\end{equation}
represents the desired conditions for order $ p $.
We call this procedure {\em implicit recursive elimination,} because the
equations generated in this way are correct in an `a~posteriori' sense
(cf.\ Example~\ref{exa:s=p=2}):
\begin{subequations}
\begin{enumerate}[--~]
\item For $ q=1 $, the basic consistency equations
\begin{equation} \label{OC1}
\begin{aligned}
P_{1,1}(a_j,b_j) &= a_1 + \ldots + a_s - 1 = 0, \\
P_{1,2}(a_j,b_j) &= b_1 + \ldots + b_s - 1 = 0,
\end{aligned}
\end{equation}
are obtained.
\item {\em Assume}\, that~\eqref{OC1} is satisfied. Then, due to
Proposition~\ref{pro:leading-lie}, the additional (quadratic) equation
(note that $ L_2=1 $)
\begin{equation} \label{OC2}
P_{2,1}(a_j,b_j) = 0,
\end{equation}
represents one additional condition for a scheme of order $ p=2 $.
\item {\em Assume}\, that~\eqref{OC1} and~\eqref{OC2} are satisfied.
Then, due to Proposition~\ref{pro:leading-lie},
the additional (cubic) equations (note that $ L_3=2 $)
\begin{equation} \label{OC3}
P_{3,1}(a_j,b_j) = P_{3,2}(a_j,b_j) = 0,
\end{equation}
represent two additional conditions for a scheme of order $ p=3 $.
\item The process is continued in the same manner.
\end{enumerate}
\end{subequations}
If we (later) find a solution $ \{ a_j,b_j,\, j=1 \ldots s \} $
of the resulting system
\begin{equation*}
\eqref{OCSYS}=\{ \eqref{OC1},\eqref{OC2},\eqref{OC3},\ldots \}
\end{equation*}
of multivariate polynomial equations, this means that
\begin{align*}
\text{\eqref{OC1} is satisfied}
&~\Rightarrow~ \text{condition \eqref{OC2} is correct,} \\
\text{\eqref{OC2} is also satisfied}
&~\Rightarrow~ \text{condition \eqref{OC3} is correct,}
\end{align*}
and so on. By induction we conclude that the whole procedure is correct.
See~\cite{auzingeretal15K1}
for a more detailed exposition of this argument.
\begin{remark}
In addition, it makes sense to generate the additional conditions for
order $ p+1 $. Even if we do not solve for these conditions,
they represent the leading term of the local error, and this can be used
to search for optimized solutions for order $ p $, where the coefficients in
$ \frac{{\rm d}^{p+1}}{{\rm d}h^{p+1}}\,\nL(h)\,\big|_{h=0} $ become minimal in size.
\end{remark}
\section{A parallel implementation} \label{sec:impl}
In our Maple code, a table of Lyndon-Shirshov words up to a fixed length
(corresponding to the maximal order aimed for; see Table~\ref{tab:Lyndon-AB})
is included as static data. The procedure {\tt Order\_conditions} displayed
below generates a set of order conditions using the algorithm described
in Section~\ref{subsec:ire}.
\begin{itemize}
\item First of all, we activate the package {\tt Physics}
and declare the symbols {\tt A} and {\tt B} as non-commutative.
\item For organizing the multinomial expansion according to~\eqref{OC-Tay-1-AB}
we use standard functions from the packages {\tt combinat}
and {\tt combstruct}.
\item The number of terms during each stage rapidly increases as
more stages are to be computed. Therefore we have implemented a parallel
version based on the package {\tt Grid}. Parallelization is taken into account as follows:
\begin{itemize}
\item On a multi-core processor\footnote{E.g., on an Intel i7 processor,
6~cores are available.
The hyper-threading feature
enables the use of 12 parallel threads.},
all threads execute the same code.
Each thread identifies itself via a call to {\tt MyNode()},
and this is used to control execution.
Communication between the threads is realized via message passing.
\item Thread~0 is the master thread controlling the overall execution.
\item For $ q=1 \ldots p $\,:
\begin{itemize}
\item
Each of the working threads generates
symbolic expressions of the form (recall $ A_j=a_j\,A $,
$ B_j=b_j\,B $)
\begin{equation*}
\Pi_{\bm k} := \dbinom{q}{{\bm k}}
\prod\limits_{j=s \ldots 1}\;
\sum_{\ell=0}^{k_j} \dbinom{k_j}{\ell}\,B_j^{\ell}\,A_j^{k_j-\ell},
\quad {\bm k} \in \NN_0^s,
\end{equation*}
appearing in the sum~\eqref{OC-Tay-1-AB}. Here the work is
equidistributed over the threads, i.e., each of them generates
a subset of $ \{ \Pi_{\bm k},~ {\bm k} \in \NN_0^s \} $ in parallel.
\item
For each of these expressions $ \Pi_{\bm k} $,
the coefficients of all Lyndon-Shirshov monomials of degree $ q $
are computed, and the according subsets of coefficients
are summed up in parallel.
\item
Finally, the master thread 0 sums up the results
received from all the working threads. This results in the set
of multivariate polynomials representing the order conditions
at level~$ q $.
\end{itemize}
\end{itemize}
\item The Maple code displayed below is, to some extent,
to be read as pseudo-code.
For simplicity of presentation we have ignored some technicalities,
e.g., concerning the proper indexing of combinatorial tuples, etc.
The original, working code is available from the authors.
\end{itemize}
\begin{small}
\begin{verbatim}
> with(combinat)
> with(combstruct)
> with(Grid)
> with(Physics)
> Setup(noncommutativeprefix={A,B})
> Order_conditions := proc()
global p,s,OC, # I/O parameters via global variables
Lyndon # assume that table of Lyndon monomials is available
this_thread := MyNode() # each thread identifies itself
max_threads := NumNodes() # number of available threads
for j from 1 to s do
A_j[j] := a[j]*A
B_j[j] := b[j]*B
term[-1][j] := 1
end do
OC := [0$p]
for q from 1 to p do
if this_thread>0 then # working threads start computing
# master thread 0 is waiting
Mn := allstructs(Composition(q+2),size=2)
for j from 1 to s do
term[q-1][j] := 0
for mn from 1 to nops(Mn) do
term[q-1][j] :=
term[q-1][j] +
multinomial(q,Mn[mn])*B_j[j]^Mn[mn][2]*A_j[j]^Mn[mn][1]
end do
end do
k := iterstructs(Composition(q+s),size=s)
OC_q_this_thread := [0$nops(Lyndon[q])]
while not finished(k) do # generate expansion (7) term by term
Ms := nextstruct(k)
if get_active(this_thread) then # get_active:
# auxiliary Boolean function
# for equidistributing workload
Pi_k := 1
for j from s to 1 by -1 do
Pi_k := Pi_k*term[Ms[j]-1][j]
end do
Pi_k := multinomial(q,Ms)*expand(Pi_k)
OC_q_this_thread := # compare coefficients of Lyndon monomials
OC_q_this_thread +
[seq(coeff(Pi_k,Lyndon[q][l]),l=1..nops(Lyndon[q]))]
end if
end do
Send(0,OC_q_this_thread) # send partial sum to master thread
else # master thread 0 receives and sums up
# partial results from working threads
OC[q] := [(-1)$nops(Lyndon[q])] # initialize sum
for i_thread from 1 to max_threads-1 do
OC[q] := OC[q] + Receive(i_thread)
end do
end if
end do
end proc
> # Example:
> p := 4
> s := 4
> Launch(Order_conditions,imports=["p","s"],exports=["OC"]) # run
> OC[1]
[a[1]+a[2]+a[3]+a[4]-1,
b[1]+b[2]+b[3]+b[4]-1]
> OC[2]
[2*a[2]*b[1]+2*a[3]*b[1]+2*a[3]*b[2]
+2*a[4]*b[1]+2*a[4]*b[2]+2*a[4]*b[3]-1]
> OC[3]
[3*a[2]^2*b[1]+6*a[2]*a[3]*b[1]+6*a[2]*a[4]*b[1]
+3*a[3]^2*b[1]+3*a[3]^2*b[2]+6*a[3]*a[4]*b[1]+6*a[3]*a[4]*b[2]
+3*a[4]^2*b[1]+3*a[4]^2*b[2]+3*a[4]^2*b[3]-1,
3*a[2]*b[1]^2+3*a[3]*b[1]^2+6*a[3]*b[1]*b[2]
+3*a[3]*b[2]^2+3*a[4]*b[1]^2+6*a[4]*b[1]*b[2]+6*a[4]*b[1]*b[3]
+3*a[4]*b[2]^2+6*a[4]*b[2]*b[3]+3*a[4]*b[3]^2-1]
> OC[4]
[4*a[2]^3*b[1]+12*a[2]^2*a[3]*b[1]+12*a[2]^2*a[4]*b[1]
+12*a[2]*a[3]^2*b[1]+24*a[2]*a[3]*a[4]*b[1]+12*a[2]*a[4]^2*b[1]
+4*a[3]^3*b[1]+4*a[3]^3*b[2]+12*a[3]^2*a[4]*b[1]
+12*a[3]^2*a[4]*b[2]+12*a[3]*a[4]^2*b[1]+12*a[3]*a[4]^2*b[2]
+4*a[4]^3*b[1]+4*a[4]^3*b[2]+4*a[4]^3*b[3]-1,
6*a[2]^2*b[1]^2+12*a[2]*a[3]*b[1]^2+12*a[2]*a[4]*b[1]^2
+6*a[3]^2*b[1]^2+12*a[3]^2*b[1]*b[2]+6*a[3]^2*b[2]^2
+12*a[3]*a[4]*b[1]^2+24*a[3]*a[4]*b[1]*b[2]+12*a[3]*a[4]*b[2]^2
+6*a[4]^2*b[1]^2+12*a[4]^2*b[1]*b[2]+12*a[4]^2*b[1]*b[3]
+6*a[4]^2*b[2]^2+12*a[4]^2*b[2]*b[3]+6*a[4]^2*b[3]^2-1,
4*a[2]*b[1]^3+4*a[3]*b[1]^3+12*a[3]*b[1]^2*b[2]
+12*a[3]*b[1]*b[2]^2+4*a[3]*b[2]^3+4*a[4]*b[1]^3
+12*a[4]*b[1]^2*b[2]+12*a[4]*b[1]^2*b[3]+12*a[4]*b[1]*b[2]^2
+24*a[4]*b[1]*b[2]*b[3]+12*a[4]*b[1]*b[3]^2+4*a[4]*b[2]^3
+12*a[4]*b[2]^2*b[3]+12*a[4]*b[2]*b[3]^2+4*a[4]*b[3]^3-1]
\end{verbatim}
\end{small}
For practical use some further tools have been developed, e.g., for generating
tables of polynomial coefficients for subsequent use by numerical software
other than Maple. This latter task can also be parallelized.
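For cross-checking the generated conditions outside of Maple, the following short Python/SymPy sketch (ours, and not part of the parallel implementation; the symbol names and the stage ordering, with later stages multiplying from the left, are conventions we assume) regenerates the order conditions by brute force. It compares the coefficients of all words in $A$ and $B$ rather than only the Lyndon-Shirshov monomials, so the resulting system is redundant but equivalent.
\begin{small}
\begin{verbatim}
# Brute-force regeneration of splitting order conditions (illustrative sketch).
# A "series" maps each word over {'A','B'} to its polynomial coefficient,
# i.e. it represents sum_w c_w h^|w| w, truncated at total degree p.
import sympy as sp
from itertools import product

def order_conditions(s, p):
    a = sp.symbols('a1:%d' % (s + 1))
    b = sp.symbols('b1:%d' % (s + 1))

    def exp_series(letter, coeff):        # truncated exp(h*coeff*letter)
        return {letter * k: coeff**k / sp.factorial(k) for k in range(p + 1)}

    def multiply(S1, S2):                 # S1 acts to the left of S2
        out = {}
        for w1, c1 in S1.items():
            for w2, c2 in S2.items():
                if len(w1) + len(w2) <= p:
                    out[w1 + w2] = out.get(w1 + w2, 0) + c1 * c2
        return out

    # scheme operator: product over the stages of exp(h b_j B) exp(h a_j A),
    # with later stages multiplying from the left
    S = {'': sp.Integer(1)}
    for j in range(s):
        S = multiply(multiply(exp_series('B', b[j]),
                              exp_series('A', a[j])), S)

    # exact flow exp(h(A+B)): every word of length q has coefficient 1/q!
    conditions = {}
    for q in range(1, p + 1):
        conditions[q] = [sp.expand(sp.factorial(q) *
                                   (S.get(''.join(w), sp.Integer(0))
                                    - 1 / sp.factorial(q)))
                         for w in product('AB', repeat=q)]
    return conditions

print(order_conditions(4, 2)[1])  # e.g. [a1+a2+a3+a4-1, b1+b2+b3+b4-1]
\end{verbatim}
\end{small}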
\subsection{Special cases} \label{subsec:special}
Some special cases are of interest:
\begin{itemize}
\item {\em Symmetric schemes}\, are characterized by the property
$ \nS(-h,\nS(h,u)) = u $. Here, either $ a_1=0 $ or $ b_s=0 $,
and the remaining coefficient sets $ (a_j) $ and $ (b_j) $ are palindromic.
Symmetric schemes have an even order $ p $, and the order conditions
for even orders need not be included; see~\cite{haireretal06}.
Thus, we use a special ansatz and generate a reduced set of
equations.
\item {\em Palindromic schemes}\, were introduced in~\cite{auzingeretal15K1}
and are characterized by the property $ \nS(-h,{\check\nS}(h,u)) = u $,
where $ \check\nS $ denotes the scheme $ \nS $ with the role of $ A $
and $ B $ interchanged.
In this case, the full coefficient set
\begin{equation*}
(a_1,b_1,\ldots,a_s,b_s)
\end{equation*}
is palindromic. As for symmetric schemes, this means that a special
ansatz is used, and again it is sufficient to generate a reduced
set of equations, see~\cite{auzingeretal15K1}.
\end{itemize}
Apart from these modifications, the basic algorithm remains unchanged.
\section{Modifications and extensions} \label{sec:ext}
\subsection{Splitting into more than two operators} \label{sec:OC-ABC}
Our algorithm directly generalizes to the case of
splitting into more than two operators.
Consider evolution equations where the right-hand side
splits into three parts,
\begin{equation} \label{ABC-problem}
\partial_t u(t) = F(u(t)) = A(u(t)) + B(u(t)) + C(u(t)),
\end{equation}
and associated splitting schemes,
\begin{subequations} \label{ABC-scheme}
\begin{equation} \label{ABC-scheme-1}
\nS(h,u) = \nS_s(h,\nS_{s-1}(h,\ldots,\nS_1(h,u))) \approx \phi_F(h,u),
\end{equation}
with
\begin{equation} \label{ABC-scheme-2}
\nS_j(h,v) = \phi_C(c_j\,h,\phi_B(b_j\,h,\phi_A(a_j\,h,v))),
\end{equation}
\end{subequations}
see~\cite{auzingeretal14a}.
Here the linear representation~\eqref{OC-Tay-1-AB} generalizes as follows,
with $A_j = a_j\,A,\, B_j=b_j\,B,\, C_j=c_j\,C $, and
$ {\bm k}=(k_1,\ldots,k_s) \in \NN_0^s $,\,
$ {\bm\ell} = (\ell_A,\ell_B,\ell_C) \in \NN_0^3 $:
\begin{equation} \label{OC-Tay-1-ABC}
{\rm d}h{q}\,\nL(h)\,\Big|_{h=0}
= \sum_{|{\bm k}|=q} \dbinom{q}{{\bm k}}
\prod\limits_{j=s \ldots 1}\;
\sum_{|{\bm\ell}|=k_j} \dbinom{k_j}{{\bm\ell}}\,
C_j^{\ell_C}\,B_j^{\ell_B}\,A_j^{\ell_A}
\;-\; (A+B+C)^q.
\end{equation}
On the basis of these identities, the algorithm from Section~\ref{subsec:ire}
generalizes in a straightforward way.
The Lyndon basis representing independent commutators now corresponds to Lyndon words
over the alphabet $ \{\mathtt{A,B,C}\} $, see~\cite{auzingeretal15K1}.
Concerning special cases (symmetries etc.) and parallelization,
similar considerations as before apply.
\subsection{Pairs of splitting schemes} \label{subsec:pairs}
For the purpose of adaptive time-splitting algorithms,
the construction of (optimized) pairs of schemes of orders
$ (p,p+1) $ is favorable.
Generating a respective set of order conditions can also be
accomplished by a modification of our code; the difference lies in
the fact that some coefficients are chosen a priori
(corresponding to a given method of order $ p+1 $),
but apart from this the generation of order conditions for an associated
scheme of order $ p $ works analogously as before.
Finding optimal schemes is then accomplished by tracing a large set
of possible solutions; see~\cite{auzingeretal15K1}.
\paragraph{Acknowledgements.}
This work was supported by the Austrian Science Fund (FWF) under grant P24157-N13,
and by the Vienna Science and Technology Fund (WWTF) under grant MA-14-002.
Computational results based on the ideas in this work have been achieved in part using the
Vienna Scientific Cluster (VSC).
\end{document}
\begin{document}
\title{Quantum heat engines and information}
\author{Ye Yeo}
\affiliation{Department of Physics, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore}
\author{Chang Chi Kwong}
\affiliation{Department of Physics, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore}
\begin{abstract}
Recently, Zhang {\em et al.} [PRA, {\bf 75}, 062102 (2007)] extended Kieu's interesting work on the quantum Otto engine [PRL, {\bf 93}, 140403 (2004)] by considering as working substance a bipartite quantum system $AB$ composed of subsystems $A$ and $B$. In this paper, we express the net work done $W_{AB}$ by such an engine explicitly in terms of the macroscopic bath temperatures and information theoretic quantities associated with the microscopic quantum states of the working substance. This allows us to gain insights into the dependence of positive $W_{AB}$ on the quantum properties of the states. We illustrate with a two-qubit XY chain as the working substance. Inspired by the expression, we propose a plausible formula for the work derivable from the subsystems. We show that there is a critical entanglement beyond which it is impossible to draw positive work locally from the individual subsystems while $W_{AB}$ is positive. This could be another interesting manifestation of quantum nonlocality.
\end{abstract}
\pacs{03.65.Ud, 07.20.Pe}
\maketitle
Heat engines are devices that extract energy from their environment in the form of heat and do useful work. At the heart of every heat engine is a working substance, such as a gas-air mixture in an automobile engine. The operation of the heat engine is achieved by subjecting the working substance of the engine to a sequence of thermodynamic processes that form a cycle, returning it to its initial, arbitrarily selected state. Quantum heat engines, in contrast, operate by passing quantum matter through a closed series of quantum thermodynamic processes \cite{Quan}. For instance, Kieu \cite{Kieu1, Kieu2} introduced a class of quantum heat engines which consists of two-energy-eigenstate systems (qubits) undergoing, respectively, quantum adiabatic processes and energy exchanges with heat baths at different stages of a cycle - the quantum version of the Otto cycle. Recently, Zhang {\em et al.} \cite{Zhang} extended Kieu's work by considering the quantum Otto engine with a two-qubit (isotropic) Heisenberg XXX chain as the working substance. The chain is subject to a constant external magnetic field. The purpose of their paper is to analyze the effect of quantum entanglement on the efficiency of the quantum Otto engine. This is an important and intriguing development since it brings together concepts from quantum mechanics and thermodynamics - two seemingly different fundamental areas of physics.
Entanglement is a property associated with the state of a composite quantum system made up of at least two subsystems. It is a nonlocal correlation between quantum systems that does not exist classically. Therefore, it is imperative to raise the following questions. What is the relationship between the net positive work derivable from the subsystems and that from the total system? What is the role of entanglement in this relationship? In order to obtain plausible answers to these questions, we need a means to calculate the work derivable from a subsystem. In this paper, we propose information theoretic answers \cite{Nielsen}. As a concrete example, we consider the two-qubit XY model \cite{Kamta}. The Hamiltonian for the two-qubit XY chain in an external magnetic field $B_m$ along the $z$ axis is given by
\begin{equation}
H = \frac{1}{2}(1 + \gamma)J\sigma^1_A \otimes \sigma^1_B + \frac{1}{2}(1 - \gamma)J\sigma^2_A \otimes \sigma^2_B + \frac{1}{2}B_m(\sigma^3_A \otimes \sigma^0_B + \sigma^0_A \otimes \sigma^3_B),
\end{equation}
where $\sigma^0_{\alpha}$ is the identity matrix and $\sigma^i_{\alpha}$ $(i = 1, 2, 3)$ are the Pauli matrices at site $\alpha = A, B$. The parameter $-1 \leq \gamma \leq 1$ measures the anisotropy of the system. $(1 + \gamma)J$ and $(1 - \gamma)J$ are real coupling constants for the spin interaction. The chain is said to be antiferromagnetic for $J > 0$ and ferromagnetic for $J < 0$. Here, we consider $J > 0$.
To set the stage, we describe the four quantum thermodynamic processes of the quantum Otto cycle. In the following, we consider as working substance a bipartite quantum system consisting of two subsystems, $A$ and $B$, with Hamiltonian $H = \sum_iE_i|\Psi^i\rangle_{AB}\langle\Psi^i|$. Here, $E_i$ and $|\Psi^i\rangle_{AB}$ are respectively the eigenvalues and eigenvectors of $H$. For the two-qubit XY model, we have
\begin{eqnarray}
|\Psi^1\rangle_{AB} & = & \frac{1}{\sqrt{({\cal B} + B_m)^2 + \gamma^2J^2}}
[({\cal B} + B_m)|00\rangle_{AB} + \gamma J|11\rangle_{AB}], \nonumber \\
|\Psi^2\rangle_{AB} & = & \frac{1}{\sqrt{2}}[|01\rangle_{AB} + |10\rangle_{AB}], \nonumber \\
|\Psi^3\rangle_{AB} & = & \frac{1}{\sqrt{2}}[|01\rangle_{AB} - |10\rangle_{AB}], \nonumber \\
|\Psi^4\rangle_{AB} & = & \frac{1}{\sqrt{({\cal B} - B_m)^2 + \gamma^2J^2}}
[({\cal B} - B_m)|00\rangle_{AB} - \gamma J|11\rangle_{AB}],
\end{eqnarray}
with $E_1 = -E_4 = {\cal B}$ and $E_2 = -E_3 = J$, where ${\cal B} \equiv \sqrt{B^2_m + \gamma^2J^2}$. Furthermore, we assume that the system is allowed to thermalize with the heat baths in processes 2 and 4. Specifically, process 4 brings the system to its initial quantum state given by the density operator
\begin{equation}\label{state1}
\rho^{(1)}_{AB} = \sum_ip_{i1}|\Psi^{i1}\rangle_{AB}\langle\Psi^{i1}|,
\end{equation}
with $p_{i1} \equiv \exp(-E_{i1}/kT_1)/Z_1$ and $Z_1 \equiv \sum_i \exp(-E_{i1}/kT_1)$. That is, the system is initially in thermal equilibrium with a heat bath at temperature $T_1$. For the two-qubit XY model, we have $E_{i1} = E_i$ and $|\Psi^{i1}\rangle_{AB} = |\Psi^i\rangle_{AB}$ with $J = J_1$.
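As a quick numerical sanity check (a short sketch of ours, not part of the original analysis), the spectrum $\{\pm{\cal B},\pm J\}$ quoted above follows directly from the Hamiltonian:
\begin{verbatim}
# Build the two-qubit XY Hamiltonian and confirm its spectrum
# {+calB, +J, -J, -calB}, with calB = sqrt(B_m^2 + gamma^2 J^2).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def xy_hamiltonian(J, gamma, Bm):
    H  = 0.5 * (1 + gamma) * J * np.kron(sx, sx)
    H += 0.5 * (1 - gamma) * J * np.kron(sy, sy)
    H += 0.5 * Bm * (np.kron(sz, I2) + np.kron(I2, sz))
    return H

J, gamma, Bm = 8.0, 0.4, 2.4                 # illustrative values
calB = np.sqrt(Bm**2 + (gamma * J)**2)
eigs = np.linalg.eigvalsh(xy_hamiltonian(J, gamma, Bm))
print(np.allclose(np.sort(eigs), np.sort([calB, J, -J, -calB])))   # True
\end{verbatim}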
\begin{enumerate}
\item The system is isolated from the heat bath and undergoes a quantum adiabatic process, with for instance $J$ changing from $J_1$ to $J_2$. Provided the rate of change is sufficiently slow, $p_{i1}$'s are maintained throughout according to the quantum adiabatic theorem \cite{Messiah}. At the end of process 1, the system has the probability $p_{i1}$ in the eigenstate $|\Psi^{i2}\rangle_{AB}$. According to Kieu \cite{Kieu1, Kieu2}, an amount of work is performed by the system, but no heat is transferred during this process.
\item The system is brought into some kind of contact with a heat bath at temperature $T_2 < T_1$. After the irreversible thermalization process, the quantum state of the system is given by the density operator
\begin{equation}\label{state2}
\rho^{(2)}_{AB} = \sum_ip_{i2}|\Psi^{i2}\rangle_{AB}\langle\Psi^{i2}|,
\end{equation}
where $p_{i2} \equiv \exp(-E_{i2}/kT_2)/Z_2$ and $Z_2 \equiv \sum_i \exp(-E_{i2}/kT_2)$. Here, for the two-qubit XY model, we have $E_{i2} = E_i$ and $|\Psi^{i2}\rangle_{AB} = |\Psi^i\rangle_{AB}$ with $J = J_2$. It follows from \cite{Kieu1, Kieu2} that only heat is transferred in this process to yield a change in the occupation probabilities, and the heat transferred is given by
\begin{equation}
Q_2 = \sum_iE_{i2}(p_{i2} - p_{i1}).
\end{equation}
\item The system is removed from the heat bath and undergoes a quantum adiabatic process, with for instance $J$ changing from $J_2$ to $J_1$. At the end of process 3, the system has the probability $p_{i2}$ in the eigenstate $|\Psi^{i1}\rangle_{AB}$. An amount of work is performed on the system, but no heat is transferred during process 3.
\item The system is brought into some kind of contact with a heat bath at temperature $T_1$. After the irreversible thermalization process, the quantum state of the system is returned to the initial one in Eq.(\ref{state1}). The heat transferred in process 4 is given by
\begin{equation}
Q_4 = \sum_iE_{i1}(p_{i1} - p_{i2}).
\end{equation}
\end{enumerate}
From the first law of thermodynamics, the net work done by the quantum Otto engine is \cite{Kieu1, Kieu2}
\begin{eqnarray}\label{WABo}
W_{AB} & = & Q_2 + Q_4 \nonumber \\
& = & \sum_i(E_{i1} - E_{i2})(p_{i1} - p_{i2}).
\end{eqnarray}
Substituting $E_{ij} = -kT_j\log(p_{ij}Z_j)$ into Eq.(\ref{WABo}) and setting the Boltzmann constant $k \equiv \log_2e$, we obtain
\begin{equation}\label{WAB}
W_{AB} = (T_1 - T_2)\{S[\rho^{(1)}_{AB}] - S[\rho^{(2)}_{AB}]\} - T_1H[p_{i2}||p_{i1}] - T_2H[p_{i1}||p_{i2}].
\end{equation}
Here, $S[\rho^{(j)}_{AB}]$ is the von Neumann entropy of the quantum state $\rho^{(j)}_{AB}$, and $H[p_{i2}||p_{i1}]$ and $H[p_{i1}||p_{i2}]$ are the relative entropies of $p_{i2}$ to $p_{i1}$ and $p_{i1}$ to $p_{i2}$ respectively. It follows from the non-negativity of the relative entropy that, to derive positive work, we not only have to demand $T_1 > T_2$ and $S[\rho^{(1)}_{AB}] > S[\rho^{(2)}_{AB}]$, but also that the relative entropies $H[p_{i2}||p_{i1}]$ and $H[p_{i1}||p_{i2}]$ are not too large. This is true regardless of whether $AB$ is a single quantum system or one composed of two or more subsystems. We shall illustrate this after the following remark.
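The agreement between Eq.(\ref{WABo}) and Eq.(\ref{WAB}) is also easy to confirm numerically. The following sketch (ours; it anticipates the choice $B_m = \eta J$ and the example parameters used below, and the helper names are purely illustrative) evaluates the net work in both forms:
\begin{verbatim}
# Check that the energy form and the information form of the net work agree.
import numpy as np

k = 1 / np.log(2.0)                          # Boltzmann constant k = log2(e)

def spectrum(J, gamma, eta):                 # eigenvalues of the XY chain
    calB = np.sqrt((eta * J)**2 + (gamma * J)**2)     # with B_m = eta*J
    return np.array([calB, J, -J, -calB])

def thermal_probs(E, T):
    w = np.exp(-E / (k * T))
    return w / w.sum()

def S(p):  return -np.sum(p * np.log2(p))             # entropy (bits)
def H(p, q): return np.sum(p * np.log2(p / q))        # relative entropy (bits)

gamma, eta, T1, T2, J1 = 0.4, 0.3, 1000.0, 0.1, 8.0   # values used in the text
E1 = spectrum(J1, gamma, eta); p1 = thermal_probs(E1, T1)
for J2 in (0.1, 0.575, 4.0, 8.0):
    E2 = spectrum(J2, gamma, eta); p2 = thermal_probs(E2, T2)
    W_energy = np.sum((E1 - E2) * (p1 - p2))          # energy form
    W_info = ((T1 - T2) * (S(p1) - S(p2))
              - T1 * H(p2, p1) - T2 * H(p1, p2))      # information form
    print(J2, W_energy, np.isclose(W_energy, W_info)) # the two forms agree
\end{verbatim}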
We note that $H[p_{i2}||p_{i1}] = S[\rho^{(2)}_{AB}||\rho^{(1)}_{AB}]$, the quantum relative entropy of $\rho^{(2)}_{AB}$ to $\rho^{(1)}_{AB}$ if they share the same eigenstates. Similarly, we have $H[p_{i1}||p_{i2}] = S[\rho^{(1)}_{AB}||\rho^{(2)}_{AB}]$. This is the case for the two-qubit XY model when we let $B_m = \eta J$, with $\eta$ some fixed constant. We assume this to hold from here on without loss of generality. It follows immediately from Eq.(\ref{WABo}) that $W_{AB}$ is directly proportional to $(J_1 - J_2)$ for any given $\gamma$ and $\eta$. Another consequence is that $\rho^{(1)}_{AB}$'s and $\rho^{(2)}_{AB}$'s depend solely on $J_1/T_1$ and $J_2/T_2$ respectively. It then follows from Eq.(\ref{WAB}) that $W_{AB} = 0$ when $J_2/T_2 = J_1/T_1$. So, in order to have positive $W_{AB}$, we require $(T_2/T_1)J_1 \equiv J_{\min} < J_2 < J_1$.
It is clear, from Eq.(\ref{WAB}), that the condition $J_{\min} < J_2$ is one on the quantum states $\rho^{(1)}_{AB}$ and $\rho^{(2)}_{AB}$. It is thus natural to express this condition in terms of quantities that describe the two-qubit states. Here, we recall the quantum mutual information between the two subsystems $A$ and $B$, ${\cal I}^{(j)}(A:B) \equiv S[\rho^{(j)}_A] + S[\rho^{(j)}_B] - S[\rho^{(j)}_{AB}]$ \cite{Nielsen}. This is usually used to measure the total correlations between $A$ and $B$ \cite{Henderson}. Hence, in order to derive positive $W_{AB}$ we demand that ${\cal I}^{(2)}(A:B) > {\cal I}^{(1)}(A:B) \equiv {\cal I}^{(2)}_{\min}$. If the states are not separable, we may require that ${\cal C}[\rho^{(2)}_{AB}] > {\cal C}[\rho^{(1)}_{AB}] \equiv {\cal C}^{(2)}_{\min}$, where ${\cal C}[\rho^{(j)}_{AB}]$ is the Wootters concurrence \cite{Wootters} associated with $\rho^{(j)}_{AB}$. This measures the amount of entanglement or nonlocal quantum correlation between $A$ and $B$. Given $T_1$ and $J_1$, there is therefore a minimum ${\cal I}^{(2)}_{\min}$ or ${\cal C}^{(2)}_{\min}$ below which the quantum Otto engine does not yield any positive work.
Equation (\ref{WAB}) gives explicitly the dependence of $W_{AB}$ on the bath temperatures $T_1$ and $T_2$, and on the quantum states $\rho^{(1)}_{AB}$ and $\rho^{(2)}_{AB}$. Consider some $T_2$, $J_{\min} < J_2 < J_1$ and $\kappa T_2$, $\kappa J_2$, which yield identical $\rho^{(2)}_{AB}$. Here, $\kappa$ is a positive constant such that $\kappa J_2 > J_1$. Therefore, $W_{AB}$ is positive in the first case but negative in the latter since $\kappa J_2 > J_1$. This difference is obviously due to the bath temperatures. Our focus here is on analyzing the dependence of positive $W_{AB}$ on the quantum properties of the states. Therefore, for given $T_1$ and $J_1$, it suffices to choose an appropriately small $T_2$ such that as $J_2$ is increased from $J_{\min}$ to $J_1$, we have states $\rho^{(2)}_{AB}$, with quantum mutual information going from ${\cal I}^{(2)}_{\min}$ to sufficiently close to the maximum possible of $2$ (or with concurrence going from ${\cal C}^{(2)}_{\min}$ to sufficiently close to the maximum possible of $1$).
\begin{figure}
\caption{Net work done $W_{AB}$ by the quantum Otto engine with a two-qubit XY chain as the working substance, plotted vs $J_2$, for $\gamma = 0.4$, $\eta = 0.3$, $T_1 = 1000$, $J_1 = 8$, and $T_2 = 0.1$.}
\end{figure}
\begin{figure}
\caption{The behaviour of $(T_1-T_2)\{S[\rho^{(1)}_{AB}] - S[\rho^{(2)}_{AB}]\}$, $T_1H[p_{i2}||p_{i1}]$, and $T_2H[p_{i1}||p_{i2}]$ as functions of $J_2$, for the same parameters as in Fig.~1.}
\end{figure}
Equation (\ref{WAB}) also provides quantum information theoretic insights into the condition $J_2 < J_1$. Suppose $\gamma = 0.4$, $\eta = 0.3$, $T_1 = 1000$, $J_1 = 8$, and $T_2 = 0.1$. Then, $p_{11} \approx p_{21} \approx p_{31} \approx p_{41} \approx 0.25$ and $\rho^{(1)}_{AB}$ approximates the density operator of a maximally mixed state with neither classical nor quantum correlations. We also note that ${\cal I}^{(2)}(A:B) \approx 2$ and ${\cal C}[\rho^{(2)}_{AB}] \approx 1$ when $J_2 = 8$, satisfying the above sufficient condition. Figure 1 shows the dependence of $W_{AB}$ on $J_2$. $W_{AB}$ increases from zero at $J_2 = J_{\min} = 8 \times 10^{-4}$ to a maximum $W_{\max} \approx 10.3695$ at $J_2 = J_{\max} \approx 0.575065$, after which it decreases from $W_{\max}$ to zero at $J_2 = J_1$. Increase in $J_2$ yields $\rho^{(2)}_{AB}$ with $(T_1 - T_2)\{S[\rho^{(1)}_{AB}] - S[\rho^{(2)}_{AB}]\}$ and $T_1H[p_{i2}||p_{i1}]$ approaching the approximate maxima $1999.81$ and $1988.48$ respectively, but with $T_2H[p_{i1}||p_{i2}]$ that increases monotonically with $J_2$ (see Fig. 2). At $J_2 = 8$, $T_2H[p_{i1}||p_{i2}]$ exactly equals the difference between the maxima. And, $W_{AB}$ becomes negative if $J_2$ is increased beyond $J_2 = J_1$. Intuitively, one would expect to draw more work as the von Neumann entropy of $\rho^{(2)}_{AB}$ differs more from that of $\rho^{(1)}_{AB}$; for instance, when $\rho^{(2)}_{AB}$ becomes increasingly correlated. This is indeed the case before this difference results in significant increases in the relative entropies enough to cause $W_{AB}$ to decrease to zero. For $J_{\min} < J_2 < J_{\max}$, $W_{AB}$ increases with increasing quantum mutual information (compare with Fig. 3). During this increase the correlation changes from purely classical to a mixture of classical and increasingly quantum ones (see Fig. 4). Beyond $J_2 = J_{\max}$ when ${\cal C}[\rho^{(2)}_{AB}] \approx 0.903444$, $W_{AB}$ begins to decrease. It is not obvious at this stage what precisely is the role of entanglement. We shall further elaborate on this after we propose the following definition for the work derivable from the individual subsystem.
\begin{figure}
\caption{Quantum mutual information ${\cal I}^{(2)}(A:B)$ of the state $\rho^{(2)}_{AB}$ plotted vs $J_2$, for the same parameters as in Fig.~1.}
\end{figure}
\begin{figure}
\caption{The state $\rho^{(2)}_{AB}$: its quantum correlation, as measured by the concurrence ${\cal C}[\rho^{(2)}_{AB}]$, plotted vs $J_2$, for the same parameters as in Fig.~1.}
\end{figure}
The subsystems $A$ and $B$ are clearly being subjected to exactly the same quantum Otto cycle that the composite system $AB$ has undergone. The quantum states of $A$ and $B$ that correspond to Eqs.(\ref{state1}) and (\ref{state2}) are also well-defined. Namely, $\rho^{(j)}_A = {\rm tr}_B\rho^{(j)}_{AB}$ and $\rho^{(j)}_B = {\rm tr}_A\rho^{(j)}_{AB}$. Hence, inspired by Eq.(\ref{WAB}), we define
\begin{equation}\label{walpha}
w_{\alpha} \equiv (T_1 - T_2)\{S[\rho^{(1)}_{\alpha}] - S[\rho^{(2)}_{\alpha}]\}
- T_1H[q^{(\alpha)}_{i2}||q^{(\alpha)}_{i1}] - T_2H[q^{(\alpha)}_{i1}||q^{(\alpha)}_{i2}],
\end{equation}
where $\alpha = A$ or $B$, $q^{(A)}_{ij}$'s and $q^{(B)}_{ij}$'s are the eigenvalues of $\rho^{(j)}_A$ and $\rho^{(j)}_B$ respectively. Now, consider $\rho^{(1)}_{AB}$ approximately maximally mixed and $\rho^{(2)}_{AB}$ sufficiently close to a maximally entangled state, then both $\rho^{(1)}_{\alpha}$ and $\rho^{(2)}_{\alpha}$ will be almost identical to the maximally mixed state. According to Eq.(\ref{WAB}), the work derivable from a subsystem undergoing the quantum Otto cycle in this case will be extremely close to zero. For the two-qubit XY model, $\rho^{(j)}_A$ and $\rho^{(j)}_B$ are identical. Figure 5 shows $w_A$ as a function of $J_2$. $w_A$ increases from zero at $J_2 = J_{\min}$ to a maximum $w_{\max} \approx 0.231128$ at $J_2 = j_{\max} \approx 0.166289$, when ${\cal C}[\rho^{(2)}_{AB}] \approx 0.316188$. After $J_2 = j_{\max}$, $w_A$ decreases from $w_{\max}$ to zero at $J_2 = j_{\rm crit} \approx 1.24252$ when ${\cal C}[\rho^{(2)}_{AB}] = {\cal C}^{(2)}_{\rm crit} \approx 0.9964$. Therefore, Eq.(\ref{walpha}) quantitatively describes the above consideration. We propose here that $w_A$ ($w_B$) is the work derivable from the subsystem $A$ ($B$). Therefore, given $T_1$ and $J_1$, there is a critical concurrence ${\cal C}^{(2)}_{\rm crit}$ above which no positive work can be drawn locally from each subsystem. Increasing $J_2$ beyond $j_{\rm crit}$, $w_A$ becomes negative. This is because $\rho^{(2)}_A$ becomes more random than $\rho^{(1)}_A$.
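Equation (\ref{walpha}) is equally simple to evaluate for the two-qubit XY chain, because the reduced states $\rho^{(j)}_A$ are diagonal in the computational basis, so that their eigenvalues are weighted averages of the single-qubit populations of the eigenstates $|\Psi^i\rangle_{AB}$. A minimal sketch of ours (helper names are illustrative):
\begin{verbatim}
# Evaluate the subsystem work w_A of Eq. (walpha) for the two-qubit XY chain.
import numpy as np

k = 1 / np.log(2.0)

def reduced_probs(J, gamma, eta, T):
    # eigenvalues (q_0, q_1) of rho_A = tr_B rho_AB for the thermal state
    Bm = eta * J
    calB = np.sqrt(Bm**2 + (gamma * J)**2)
    E = np.array([calB, J, -J, -calB])
    p = np.exp(-E / (k * T)); p /= p.sum()
    u1 = (calB + Bm)**2 / ((calB + Bm)**2 + (gamma * J)**2)  # |0> weight, Psi^1
    u4 = (calB - Bm)**2 / ((calB - Bm)**2 + (gamma * J)**2)  # |0> weight, Psi^4
    pops = np.array([[u1, 1 - u1], [0.5, 0.5], [0.5, 0.5], [u4, 1 - u4]])
    return p @ pops

def S(q):  return -np.sum(q * np.log2(q))
def H(a, b): return np.sum(a * np.log2(a / b))

gamma, eta, T1, T2, J1 = 0.4, 0.3, 1000.0, 0.1, 8.0
q1 = reduced_probs(J1, gamma, eta, T1)
for J2 in (0.17, 1.24, 4.0):          # near j_max, near j_crit, and beyond
    q2 = reduced_probs(J2, gamma, eta, T2)
    w_A = (T1 - T2) * (S(q1) - S(q2)) - T1 * H(q2, q1) - T2 * H(q1, q2)
    print(J2, w_A)
\end{verbatim}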
\begin{figure}
\caption{Net work done $w_A$ by the quantum Otto engine with one qubit of a two-qubit XY chain as the working substance plotted vs $J_2$, for $\gamma = 0.4$, $\eta = 0.3$, $T_1 = 1000$, $J_1 = 8$, and $T_2 = 0.1$.}
\end{figure}
In conclusion, we have expressed, in Eq.(\ref{WAB}), the net work done $W_{AB}$ by a quantum Otto engine explicitly in terms of the macroscopic bath temperatures ($T_1$ and $T_2$) and information theoretic quantities associated with the microscopic quantum states ($\rho^{(1)}_{AB}$ and $\rho^{(2)}_{AB}$) of the working substance - a bipartite quantum system. Equation (\ref{WAB}) inspires our proposal of a plausible formula, Eq.(\ref{walpha}), for the work derivable from the subsystems $A$ and $B$. For the two-qubit XY chain, we show that $\rho^{(2)}_{AB}$ must have quantum mutual information ${\cal I}^{(2)}(A:B)$ or concurrence ${\cal C}[\rho^{(2)}_{AB}]$ greater than that of a given $\rho^{(1)}_{AB}$ to yield positive $W_{AB}$. Equation (\ref{WAB}) also provides information theoretic insights into how $W_{AB}$ increases and then decreases with increasing ${\cal I}^{(2)}(A:B)$ or ${\cal C}[\rho^{(2)}_{AB}]$. We show, using Eqs.(\ref{WAB}) and (\ref{walpha}), that there exists a critical concurrence above which no positive work can be derived locally from each subsystem while $W_{AB} > 0$. This could be another interesting manifestation of quantum nonlocality.
\end{document}
\begin{document}
\title{Experimental quantum computing to solve systems of linear equations}
\author{X.-D. Cai$^1$, C. Weedbrook$^2$, Z.-E. Su$^{1}$, M.-C. Chen$^1$, Mile Gu$^{3,4}$, M.-J. Zhu$^1$, Li Li$^{1}$, Nai-Le Liu$^{1}$, \\ Chao-Yang Lu$^{1}$, Jian-Wei Pan$^1$
}
\affiliation{$^1$ Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\affiliation{$^2$ Center for Quantum Information and Quantum Control,
Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, Toronto, M5S 3G4, Canada}
\affiliation{$^3$ Centre for Quantum Technologies, National University of Singapore, Singapore}
\affiliation{$^4$ Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China}
\date{\today}
\begin{abstract}
Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables $N$. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order $\log(N)$, giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving $2\times2$ linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
\end{abstract}
\pacs{}
\maketitle
The problem of solving a system of linear equations plays a central role in diverse fields such as signal processing, economics, computer science, and physics. Such systems often involve terabytes or even petabytes of data, and thus the number of variables, $N$, is exceedingly large. However, the best known algorithms for solving a system of $N$ linear equations on classical computers require a time complexity on the order of $N$, posing a formidable challenge.
Harnessing the superposition principle of quantum mechanics, quantum computers \cite{Nielsen2000, ObrienNatureReview} promise to provide exponential speedup over their classical counterparts for certain tasks. Notable examples include quantum simulation \cite{Feynman1982,simulationLloyd} and Shor's quantum factoring algorithm \cite{Shor1997}, which have driven the field of quantum information over the past two decades as well as generating significant interest in quantum technologies that have enabled experimental demonstrations of the quantum algorithms in different physical systems \cite{shorNMR, shorPhoton, photonSimulation, trappedionSimulation, shorSuperconducting}.
Recently, Harrow, Hassidim and Lloyd~\cite{Harrow2009} proposed another powerful application of quantum computing for the very practical problem of solving systems of linear equations. They showed that a quantum computer can solve a system of linear equations exponentially faster than a classical computer in situations where we are only interested in expectation values of an operator associated with the solution rather than the full solution. A quantum algorithm has been designed such that the value of this property may be estimated to any fixed desired accuracy within $O(\log(N))$ time, making it one of the most promising applications of quantum computers.
In this article, we report an experimental demonstration of the simplest meaningful instance of this algorithm, that is, solving $2\times2$ linear equations for various input vectors. The quantum circuit is optimized and compiled into a linear optical network with four photonic quantum bits (qubits) and four controlled logic gates, which is used to coherently implement every subroutine for this algorithm. For various input vectors, the quantum computer gives solutions for the linear equations with reasonably high precision, ranging from fidelities of $0.825$ to $0.993$.
\begin{figure*}
\caption{Quantum circuits for solving systems of linear equations. \textbf{a} Outline of the quantum algorithm of Ref.~\cite{Harrow2009}, comprising phase estimation, the controlled $R({\lambda}^{-1})$ rotation, and the inverse phase estimation. \textbf{b} The optimized circuit implemented in this experiment, using four qubits and four controlled logic gates.}
\label{fig1}
\end{figure*}
The problem of solving linear equations can be summarized as follows: We aim to solve $A \vec{x} = \vec{b}$ for $\vec{x}$, when given an $N \times N$ Hermitian matrix $A$ and a vector $\vec{b}$. To adapt this problem to quantum processing, $\vec{x}$ and $\vec{b}$ are scaled to unit length (i.e., $||\vec{x}|| = ||\vec{b}|| = 1$). Thus, a vector $\vec{b}$ can be represented by a quantum state $\ket{b} = \sum_i b_i \ket{i}$ on $O(\log (N))$ qubits where $\ket{i}$ denotes the computational basis. The desired solution $\vec{x}$ can then be encoded within the quantum state as
\begin{equation}
\ket{x} = c A^{-1} \ket{b} ,\qquad c^{-1} = ||A^{-1} \ket{b}||.
\end{equation}
The quantum algorithm devised in Ref.~\cite{Harrow2009} was designed to synthesize $\ket{x}$ (see Fig.~\ref{fig1}a). The quantum algorithm involves three subsystems: a single ancilla qubit initialized in $\ket{0}$, a register of $n$ qubits of working memory initialized in $|0\rangle^{\otimes n}$ and an input state initialized in $\ket{b}$. The input state $\ket{b}$ can be expanded in the basis of $\ket{u_j}$ as $\ket{b} = \sum_{j=1}^N \beta_j \ket{u_j}$, where $\ket{u_j}$ is an eigenstate of $A$, and $\beta_j = \langle {u_j}|b\rangle$. Execution of the algorithm can be decomposed into three subroutines: (1) phase estimation, (2) controlled rotation and (3) inverse phase estimation.
Step (1) is used to determine the eigenvalues of $A$, which we denote by $\lambda_j$. Phase estimation is essentially a controlled unitary with a change of basis that maps the eigenvalues onto the working memory \cite{Nielsen2000,phaseEstimation}. The phase estimation protocol is applied to the input, using the working memory as control, to give
\begin{align}
\sum_{j=1}^{N}\beta_{j}|u_{j}\rangle|{\lambda}_{j}\rangle,
\end{align}
where $\ket{\lambda_j}$ represents the binary representation of $\lambda_j$, stored to a precision of $n$ bits.
In step (2), one needs to extract the eigenvalues of $A^{-1}$, i.e. $\lambda_j^{-1}$ from $\ket{\lambda_j}$. This is realized through an additional ancillary qubit initialized in the state $\ket{0}$. Application of an appropriate controlled rotation $R({\lambda}^{-1})$ on this qubit (see Fig.~\ref{fig1}a) transforms the system to
\begin{align}
\sum_{j=1}^N \beta_{j}|u_{j}\rangle|\lambda_{j}\rangle \big(\sqrt{1-\frac{C^2}{{\lambda_{j}}^2}}|0\rangle
+\frac{C}{\lambda_{j}}|1\rangle \big).
\end{align}
The final step involves applying the gate sequence of step (1) in reverse. This disentangles the register, which is reset to $|0\rangle^{\otimes n}$. Therefore we end up with
\begin{align}
\sum_{j=1}^N \beta_{j}|u_{j}\rangle \big(\sqrt{1-\frac{C^2}{{\lambda_{j}}^2}}|0\rangle
+\frac{C}{\lambda_{j}}|1\rangle\big).
\end{align}
Measurement of the ancillary qubit and post-selection (with a certain success probability) of an outcome of $|1\rangle$ will result in an output state $\sum_{j=1}^N C({\beta_{j}}/{\lambda_{j}})|u_{j}\rangle$, which is proportional to our expected result state $\ket{x}$.
The resources needed for the algorithm applied to a general $s$-sparse $N \times N$ matrix $A$ are estimated to be $O(\log(N)s^2\kappa/\varepsilon)$, where $\kappa$ is the condition number (the
ratio between $A$'s largest and smallest eigenvalues) and $\varepsilon$ is the acceptable error of the output vector (see \cite{Harrow2009} for more details). Taking into account the success probability of the post-selection measurement in step (3), the total runtime of the quantum algorithm is $O(\log(N)s^2{\kappa}^2/\varepsilon)$, which outperforms the best classical algorithm and generically reaches an exponential speedup \cite{Harrow2009}.
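Before turning to the experiment, the amplitude bookkeeping of the algorithm can be illustrated with a purely classical calculation. The sketch below (ours; the Hermitian test matrix is arbitrary and serves only as an illustration) builds the post-selected output amplitudes $C\beta_j/\lambda_j$ and checks that they reproduce the normalized $A^{-1}\vec{b}$:
\begin{verbatim}
# Classical illustration of the post-selected output state:
# |x> ~ sum_j C (beta_j/lambda_j) |u_j>, compared with A^{-1} b (normalized).
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = (M + M.T) / 2                        # an arbitrary Hermitian test matrix
b = rng.normal(size=4); b /= np.linalg.norm(b)

lam, U = np.linalg.eigh(A)               # eigenvalues lambda_j, eigenvectors u_j
beta = U.T @ b                           # beta_j = <u_j|b>
C = np.min(np.abs(lam))                  # any C <= min_j |lambda_j| is admissible
x_alg = U @ (C * beta / lam)             # post-selected (unnormalized) amplitudes
x_alg /= np.linalg.norm(x_alg)

x_ref = np.linalg.solve(A, b)
x_ref /= np.linalg.norm(x_ref)
print(np.isclose(abs(x_alg @ x_ref), 1.0))   # True: the same state up to a sign
\end{verbatim}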
\begin{figure*}
\caption{Experimental setup. There are four key modules in the optical setup. (1) Qubit initialization: Ultraviolet laser pulses with a central wavelength of $394$~nm, pulse duration of $120$~fs and a repetition rate of $76$~MHz pass through two $\beta$-barium borate (BBO) crystals to produce two photon pairs. The four single photons are spatially separated by PBS$_1$ and PBS$_2$ and initialized using HWPs, with three of them prepared in the state $|H\rangle_i$, where $i$ denotes their spatial modes, and one prepared in the state $|b\rangle$. Photon $1$ is used as the input vector qubit and photon $4$ is used as the ancilla. Photons $2$ and $3$ are used as the register qubits $R1$ and $R2$, respectively. (2) Phase estimation: The input qubit $|b\rangle$ is mixed with the two register qubits on PBS$_3$ and PBS$_4$ to simulate the CNOT gates in Fig.~\ref{fig1}b.}
\label{fig2}
\end{figure*}
\begin{figure*}
\caption{Experimental results. Three different input vectors are chosen: \textbf{a} $|b_1\rangle$, \textbf{b} $|b_2\rangle$, and \textbf{c} $|b_3\rangle$. Shown are the ideal (gray bars) and experimentally obtained (red bars) expectation values of the Pauli observables for the output state.}
\label{fig3}
\end{figure*}
Here we demonstrate a proof-of-principle experiment of this algorithm: solving systems of $2 \times 2$ linear equations. We choose the matrix $A$ to be
\begin{align} A =
\left(\begin{array}{cc}
1.5 & 0.5 \\
0.5 & 1.5
\end{array}\right),
\end{align}
and we choose the following values for input vector $|b\rangle$
\begin{align}\ket{b_1} = \frac{1}{\sqrt2}
\left(
\begin{array}{c}
1 \\
1 \\
\end{array}
\right),
\ket{b_2} = \frac{1}{\sqrt2}
\left(
\begin{array}{c}
1 \\
-1 \\
\end{array}
\right),
\ket{b_3} =
\left(
\begin{array}{c}
1 \\
0 \\
\end{array}
\right).
\end{align}
The matrix $A$ is chosen such that its eigenvalues are $1$ and $2$, which can be encoded with two qubits in the registers \cite{footnote}.
This allows us to optimize the circuit so that it requires only four qubits and four entangling gates, as shown in Fig.~\ref{fig1}b. The phase estimation subroutine of the circuit can be compiled into two controlled-NOT (CNOT) gates, a swap gate, and three single-qubit rotation gates. Following the circuit design of Ref.~\cite{Cao2012}, the R(${\lambda}^{-1}$) rotation subroutine is implemented in two steps: finding the reciprocal $|1/\lambda_j\rangle$ from the eigenvalue $|\lambda_j\rangle$ stored in the registers, which in our case can be realized by a swap gate, and applying controlled unitary gates $H(\theta)$, where
\begin{equation}
\label{Hgate}
H(\theta)=
\left(\begin{array}{cc}
\cos(2\theta) & \sin(2\theta) \\
\sin(2\theta) & -\cos(2\theta)
\end{array}\right).
\end{equation}
Finally, the subroutine of the inverse phase estimation is realized using a semiclassical version that employs single-qubit rotations conditioned on measurement outcomes \cite{semiclassical}.
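For reference, the ideal outcomes against which the experimental data are later compared can be obtained classically in a few lines (a sketch of ours; it simply evaluates the normalized $A^{-1}|b_i\rangle$ and the corresponding Pauli expectation values):
\begin{verbatim}
# Ideal output states |x_i> ~ A^{-1}|b_i> and their Pauli expectation values.
import numpy as np

A = np.array([[1.5, 0.5], [0.5, 1.5]])
bs = {'b1': np.array([1, 1]) / np.sqrt(2),
      'b2': np.array([1, -1]) / np.sqrt(2),
      'b3': np.array([1.0, 0.0])}

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

for name, b in bs.items():
    x = np.linalg.solve(A, b)
    x = x / np.linalg.norm(x)            # normalized solution state
    expvals = [np.real(x.conj() @ P @ x) for P in (X, Y, Z)]
    print(name, x, expvals)
# e.g. |b_2> is an eigenvector of A, so |x_2> = |b_2> with <X> = -1, <Z> = 0.
\end{verbatim}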
To implement the quantum circuit shown in Fig.~\ref{fig1}b, we prepare four single photons from spontaneous parametric down-conversion \cite{SPDC} as the input qubits (Fig.~\ref{fig2}). The horizontal (H) and vertical (V) polarizations of the single photons are used to encode the logic qubits $|0\rangle$ and $|1\rangle$, respectively. The experimental challenge of implementing the circuit in Fig.~\ref{fig1}b lies in the four entangling gates between the single photonic qubits.
In the phase estimation subroutine, noting that the target qubits of the CNOT gates are fixed, their implementations can be simplified using combinations of a polarization beam splitter (PBS) and a half-wave plate (HWP), through which an arbitrary control qubit $\alpha|H\rangle+\beta|V\rangle$ and the target qubit $|H\rangle$ evolve into $\alpha|H\rangle|H\rangle+\beta|V\rangle|V\rangle$, which is the desired output of CNOT operations \cite{photonCNOT1}. The R(${\lambda}^{-1}$) rotation subroutine involves two consecutive controlled unitary gates, H($\pi$/8) and H($\pi$/16). Instead of decomposing it into multiple CNOT gates \cite{Nielsen2000,photonCNOT1}, we adopt a more efficient, entanglement-based construction method \cite{Zhou}. The ancilla qubit is first entangled with the register qubits by mixing on PBS$_5$, and then passed through a polarization-dependent Sagnac-like interferometer where the desired controlled unitary operations are applied (see \cite{supplemental} for more details and photon loss analysis). Finally, the ancillary qubit is measured, and when an outcome state $|1\rangle$ is obtained, the algorithm is announced successful.
Before running the algorithm, we first characterized the performance of the optical quantum circuit. The two registers, ancilla and input qubits ($|b_3\rangle$) are initialized in the $|H\rangle_A\otimes|H\rangle_{R1}\otimes|H\rangle_{R2}\otimes|H\rangle_b$ state. Theoretically these four qubits will evolve into a maximally-entangled four-qubit Greenberger-Horne-Zeilinger~(GHZ) state during the subroutine of phase estimation and $R(\lambda^{-1})$ rotation. After the four photons pass through PBS$_3$, PBS$_4$ and PBS$_5$, we observed the Hong-Ou-Mandel type interference among the four photons \cite{HOM,Pan-RMP}. We measured the fidelity -- defined as the overlap of the experimentally produced state with the ideal one -- of the generated four-photon GHZ state \cite{otfried-PRA}. The measurements (see Fig.~S1) yield a state fidelity of 0.65(1), which exceeds the threshold of 50$\%$~\cite{Fidelity} by 15 standard deviations. This confirms the presence of genuine entanglement \cite{genuine} created during the quantum computation.
We have implemented the algorithm for various input vectors $|b\rangle$ which are varied by tuning the HWP in front of PBS$_3$.
In accordance with Fig.~\ref{fig1}b, the two registers should be projected onto the state $|0\rangle$ and the ancilla qubit onto the state $|1\rangle$. The output $|x\rangle$ is measured in some desired observable. In the experiment, each run of the algorithm is finished by a fourfold coincidence measurement in which all four detectors fire simultaneously.
We characterize the output by measuring the expectation values of the Pauli observables $Z$, $X$, and $Y$ for each input state $|b\rangle$. Fig.~\ref{fig3} shows both the ideal (gray bar) and experimentally obtained (red bar) expectation values for each observable. To quantify the algorithmic performance, we compute the output state fidelity $F=\langle{x}|\rho_{x}|x\rangle$, where $|x\rangle$ is the ideal state and $\rho_{x}$ is the experimentally reconstructed density matrix of the output state from the expectation values of the Pauli matrices (see Fig.~S2). Compared with ideal outcomes, the output states have fidelities of $0.993(3)$ for $|b_{1}\rangle$, $0.825(13)$ for $|b_{2}\rangle$, and $0.836(16)$ for $|b_{3}\rangle$, respectively.
The difference in the performance for the three inputs is linked to the specific optical setup used in the experiment. The fidelity imperfections for $|b_{2}\rangle$ and $|b_{3}\rangle$ are caused by high-order photon emission events and post-selection in CNOT gates. However, in the case for $|b_{1}\rangle$, high-order photon emissions and post-selection do not give a negative contribution, giving rise to a near-ideal algorithm performance.
In summary, we have presented a proof-of-principle demonstration of the quantum algorithm for solving systems of linear equations
in a small-scale quantum computer involving four qubits and four entangling gates. We have implemented every subroutine at the heart
of the algorithm and characterized the circuit and algorithmic performances by the quantum state fidelities. The technique of coherently
controlling multiple qubits and executing complex, multiple-gate quantum circuits presents an advance on linear optics quantum
computation \cite{Kok2007,Pan-RMP} and allows one to test other similar quantum algorithms such as solving
differential equations~\cite{Leyton2009,Berry2010} and data fitting~\cite{Wiebe2012}.
In principle, efficient quantum computation can be achieved using single-photon sources, linear optics, and single-photon detectors~\cite{KLM, Kok2007, 66efficiency}.
The current experiment, however, is still limited by a probabilistic single-photon source and inefficient detectors.
It can be expected that with ongoing progress on deterministic single-photon sources~\cite{yuming}, high-efficiency ($>$$93\%$) single-photon
detectors \cite{nistDetector}, and on-chip integration \cite{shorPhoton}, a larger-scale quantum circuit for solving more complex linear equations
can be implemented in the future.
\noindent \textit{note}: During the stage of manuscript preparation, we became aware of a related work~\cite{walther}.
\noindent \textit{Acknowledgement}: We thank Xi-Lin Wang, and Daniel James for helpful discussions. This work was supported by the National Natural Science Foundation of China, the Chinese Academy of Sciences and the National Fundamental Research Program (under Grant No: 2011CB921300). C.-Y.L. acknowledges Churchill College Cambridge and the Youth Qianren Program. N.-L.L acknowledges Anhui Natural Science Foundation. C.W. is supported by NSERC. M.G. is supported by the National Research Foundation and Ministry of Education in Singapore.
\end{document}
\begin{document}
\title{Generalized entanglement constraints in multi-qubit systems in terms of Tsallis entropy}
\author{Jeong San Kim}
\email{[email protected]} \affiliation{
Department of Applied Mathematics and Institute of Natural Sciences, Kyung Hee University, Yongin-si, Gyeonggi-do 446-701, Korea
}
\date{\today}
\begin{abstract}
We provide generalized entanglement constraints in multi-qubit systems in terms of Tsallis entropy.
Using quantum Tsallis entropy of order $q$, we first provide a generalized monogamy inequality of multi-qubit entanglement
for $q=2$ or $3$. This generalization encapsulates the multi-qubit CKW-type inequality as a special case. We further provide a generalized polygamy
inequality of multi-qubit entanglement in terms of Tsallis-$q$ entropy for $1 \leq q \leq 2$ or $3 \leq q \leq 4$, which also
contains the multi-qubit polygamy inequality as a special case.
\end{abstract}
\pacs{
03.67.Mn,
03.65.Ud
}
\maketitle
\section{Introduction}
Quantum Tsallis entropy is a one-parameter generalization of
von Neumann entropy with respect to a nonnegative real parameter $q$~\cite{tsallis, lv}.
Tsallis entropy is used in many areas of quantum
information theory; Tsallis entropy can be used to characterize
classical statistical correlations inherent in quantum states~\cite{rr},
and it provides some conditions for separability of quantum
states~\cite{ar,tlb,rc}. There are also discussions about using non-extensive statistical
mechanics to describe quantum entanglement in terms of Tsallis entropy~\cite{bpcp}.
As a function defined on the set of density matrices, Tsallis entropy is concave for all $q > 0$, which
plays an important role in quantum entanglement theory.
Because the concavity of Tsallis entropy assures the property of {\em entanglement monotone}~\cite{vidal},
it can be used to construct a faithful entanglement measure, which does not increase under
{\em local quantum operations and classical communication} (LOCC).
One distinct property of quantum entanglement from other classical correlations
is that multi-party entanglement cannot be freely shared among the parties.
This restricted shareability of entanglement in multi-party quantum systems is known as
{\em monogamy of entanglement}(MoE)~\cite{T04, KGS}.
MoE is a key ingredient for secure quantum cryptography~\cite{rg,m},
and it also plays an important role in condensed-matter physics such as the $N$-representability problem for
fermions~\cite{anti}.
Using {\em concurrence}~\cite{ww} as a bipartite entanglement measure,
Coffman-Kundu-Wootters(CKW) provided a mathematical characterization of MoE in three-qubit systems as an inequality~\cite{ckw},
which was generalized for arbitrary multi-qubit systems~\cite{ov}.
As a dual concept of MoE, a {\em polygamy} inequality of multi-qubit entanglement
was established in terms of {\em Concurrence of Assistance}(CoA).
Later, it was shown that the monogamy and polygamy inequalities of multi-qubit entanglement can also be established
by using other entropy-based entanglement measures such as R\'enyi, Tsallis and unified entropies~\cite{KSRenyi, KimT, KSU}.
Recently, a different kind of monogamous relation in multi-qubit entanglement was proposed by using
concurrence and CoA~\cite{ZF15}. Whereas the CKW-type monogamy inequalities of multi-qubit entanglement provide a lower bound
of bipartite entanglement between a single-qubit subsystem and the remaining qubits in terms of two-qubit entanglement, the new kind of
monogamy relations in~\cite{ZF15} provide bounds of bipartite entanglement between a two-qubit subsystem and the rest in multi-qubit systems
in terms of two-qubit concurrence and CoA.
Here, we provide generalized entanglement constraints in multi-qubit systems in terms of
Tsallis entropy for a selective choice of the real parameter $q$.
Using quantum Tsallis entropy of order $q$, namely {\em Tsallis-$q$ entropy},
we first show that the CKW-type monogamy inequality of multi-qubit entanglement
can have a generalized form for $q=2$ or $3$. This generalized monogamy inequality encapsulates
multi-qubit CKW-type monogamy inequality as a special case. We further provide a generalized polygamy inequality
of multi-qubit entanglement in terms of Tsallis-$q$ entropy for $1 \leq q \leq2$ or $3 \leq q \leq 4$, which also
contains multi-qubit polygamy inequality as a special case.
This paper is organized as follows. In Sec.~\ref{Subsec:
definition}, we recall the definition of Tsallis-$q$ entropy, and the bipartite entanglement measure based on
Tsallis entropy, namely Tsallis-$q$ entanglement as well as its dual quantity, Tsallis-$q$ entanglement of assistance(TEoA).
In Sec.~\ref{Subsec: 2formula}, we review the analytic evaluations of Tsallis-$q$ entanglement and TEoA in two-qubit systems based on their functional
relations with concurrence, and we further review the monogamy and polygamy inequalities of multi-qubit entanglement in terms of Tsallis-$q$ entanglement and TEoA
in Sec.~\ref{Sec: monopoly}.
In Sec.~\ref{Sec: gmonoT}, we provide generalized monogamy and polygamy inequalities of multi-qubit entanglement in terms of Tsallis-$q$ entanglement and TEoA,
and we summarize our results in Sec.~\ref{Conclusion}.
\section{Tsallis-$q$ Entanglement}
\label{Sec: Tqentanglement}
\subsection{Definition}
\label{Subsec: definition}
Using a generalized logarithmic function with respect to the parameter $q$,
\begin{eqnarray}
\ln _{q} x &=& \frac {x^{1-q}-1} {1-q},
\label{qlog}
\end{eqnarray}
quantum Tsallis-$q$ entropy for a quantum state $\rho$ is defined as
\begin{align}
S_{q}\left(\rho\right)=-\mbox{$\mathrm{tr}$} \rho ^{q} \ln_{q} \rho = \frac {1-\mbox{$\mathrm{tr}$}\left(\rho ^q\right)}{q-1}
\label{Tqent}
\end{align}
for $q > 0,~q \ne 1$~\cite{lv}.
Although the quantum Tsallis-$q$ entropy has a singularity at $q=1$,
it converges to von Neumann entropy when $q$ tends to $1$~\cite{S_1},
\begin{equation}
\lim_{q\rightarrow 1}S_{q}\left(\rho\right)=-\mbox{$\mathrm{tr}$}\rho \ln \rho=S\left(\rho\right).
\end{equation}
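As a concrete illustration (a small sketch of ours), both the definition in Eq.~(\ref{Tqent}) and the $q\rightarrow1$ limit can be checked numerically for a randomly generated density matrix:
\begin{verbatim}
# Tsallis-q entropy S_q = (1 - sum_i lambda_i^q)/(q - 1) of a random qubit
# state, and its convergence to the von Neumann entropy as q -> 1.
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = G @ G.conj().T
rho /= np.trace(rho).real                 # a random density matrix
ev = np.linalg.eigvalsh(rho)              # its eigenvalues

def tsallis(ev, q):
    return (1 - np.sum(ev**q)) / (q - 1)

von_neumann = -np.sum(ev * np.log(ev))
for q in (2.0, 1.1, 1.01, 1.001):
    print(q, tsallis(ev, q))              # approaches von_neumann as q -> 1
print(von_neumann)
\end{verbatim}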
Based on Tsallis-$q$ entropy, a class of bipartite entanglement measures was introduced;
for a bipartite pure state $\ket{\psi}_{AB}$ and each $q>0$,
its {\em Tsallis-$q$ entanglement}~\cite{KimT} is
\begin{equation}
{\mathcal T}_{q}\left(\ket{\psi}_{A|B} \right)=S_{q}(\rho_A),
\label{TEpure}
\end{equation}
where $\rho_A=\mbox{$\mathrm{tr}$} _{B} \ket{\psi}_{AB}\bra{\psi}$ is the reduced
density matrix of $\ket{\psi}_{AB}$ onto subsystem $A$. For a bipartite mixed state $\rho_{AB}$,
its Tsallis-$q$ entanglement is defined via convex-roof extension,
\begin{equation}
{\mathcal T}_{q}\left(\rho_{A|B} \right)=\min \sum_i p_i {\mathcal T}_{q}(\ket{\psi_i}_{A|B}),
\label{TEmixed}
\end{equation}
where the minimum is taken over all possible pure state
decompositions of $\rho_{AB}=\sum_{i}p_i
\ket{\psi_i}_{AB}\bra{\psi_i}$.
Because Tsallis-$q$ entropy converges to von Neumann entropy when $q$ tends to 1,
we have
\begin{align}
\lim_{q\rightarrow1}{\mathcal T}_{q}\left(\rho_{A|B} \right)=E_{\rm f}\left(\rho_{A|B} \right),
\end{align}
where $E_{\rm f}(\rho_{AB})$ is the EoF~\cite{bdsw} of $\rho_{AB}$ defined as
\begin{equation}
E_{\rm f}(\rho_{A|B})=\min \sum_{i}p_i S(\rho^{i}_{A}),\label{eof}
\end{equation}
with the minimization over all possible pure state
decompositions of $\rho_{AB}$,
\begin{equation}
\rho_{AB}=\sum_{i} p_i |\psi^i\rangle_{AB}\langle\psi^i|,
\label{decomp}
\end{equation}
and $\mbox{$\mathrm{tr}$}_{B}|\psi^i\rangle_{AB}\langle\psi^i|=\rho^{i}_{A}$.
In other words, Tsallis-$q$ entanglement is a one-parameter generalization of EoF, and
the singularity of ${\mathcal T}_{q}\left(\rho_{AB}\right)$ at $q=1$ can be replaced by $E_{\rm f}(\rho_{AB})$.
As a dual quantity to Tsallis-$q$ entanglement,
{\em Tsallis-$q$ entanglement of Assistance}(TEoA) was defined as~\cite{KimT}
\begin{equation}
{\mathcal T}^a_{q}\left(\rho_{A|B} \right):=\max \sum_i p_i {\mathcal T}_{q}(\ket{\psi_i}_{A|B}),
\label{TEoA}
\end{equation}
where the maximum is taken over all possible pure state
decompositions of $\rho_{AB}$.
Similarly, we have
\begin{align}
\lim_{q\rightarrow1}{\mathcal T}^a_{q}\left(\rho_{A|B}
\right)=E^a\left(\rho_{A|B} \right),
\label{TsallistoEoA}
\end{align}
where $E^a(\rho_{A|B})$ is the {\em Entanglement of Assistance}~(EoA)
of $\rho_{AB}$ defined as~\cite{cohen}
\begin{equation}
E^a(\rho_{A|B})=\max \sum_{i}p_i S(\rho^{i}_{A}). \label{eoa}
\end{equation}
with the maximization over all possible pure state
decompositions of $\rho_{AB}$.
\subsection{Functional relation with concurrence in two-qubit systems}
\label{Subsec: 2formula}
For any bipartite pure state $\ket \psi_{AB}$, its concurrence is defined as~\cite{ww}
\begin{equation}
\mathcal{C}(\ket \psi_{A|B})=\sqrt{2(1-\mbox{$\mathrm{tr}$}\rho^2_A)}, \label{pure
state concurrence}
\end{equation}
where $\rho_A=\mbox{$\mathrm{tr}$}_B(\ket \psi_{AB}\bra \psi)$. For a mixed state
$\rho_{AB}$, its concurrence and concurrence of assistance(CoA) are defined as
\begin{align}
\mathcal{C}(\rho_{A|B})=\min \sum_k p_k \mathcal{C}({\ket
{\psi_k}}_{A|B}), \label{mixed state concurrence}
\end{align}
and
\begin{align}
\mathcal{C}^a(\rho_{A|B})=\max \sum_k p_k \mathcal{C}({\ket
{\psi_k}}_{A|B}),
\label{CoA}
\end{align}
respectively, where the minimum and maximum are taken over all possible pure state
decompositions, $\rho_{AB}=\sum_kp_k{\ket {\psi_k}}_{AB}\bra
{\psi_k}$.
For two-qubit systems, concurrence and CoA are known to have analytic
formulae~\cite{ww}; for any two-qubit state $\rho_{AB}$,
\begin{align}
\mathcal{C}(\rho_{A|B})=&\max\{0, \lambda_1-\lambda_2-\lambda_3-\lambda_4\},
\label{C_formula}
\end{align}
\begin{align}
\mathcal{C}^a(\rho_{A|B})=&\sum_{i=1}^{4}\lambda_i,
\label{Coa_formula}
\end{align}
where $\lambda_i$'s are the eigenvalues, in decreasing order, of
$\sqrt{\sqrt{\rho_{AB}}\tilde{\rho}_{AB}\sqrt{\rho_{AB}}}$ and
$\tilde{\rho}_{AB}=\sigma_y \otimes\sigma_y
\rho^*_{AB}\sigma_y\otimes\sigma_y$ with the Pauli operator
$\sigma_y$.
Later, it was shown that there is a functional relation between concurrence and Tsallis-$q$ entanglement in two-qubit systems~\cite{KimT}.
For any two-qubit state $\rho_{AB}$ (or bipartite pure state with Schmidt-rank 2), we have
\begin{equation}
{\mathcal T}_{q}\left(\rho_{A|B} \right)=f_{q}\left(\mathcal{C}(\rho_{A|B}) \right),
\label{relationmixed}
\end{equation}
for $1 \leq q \leq4$ where $f_{q}(x)$ is a monotonically increasing convex function defined as
\begin{align}
f_{q}(x)=&\frac{1}{q-1}\left[1-\left(\frac{1+\sqrt{1-x^2}}{2}\right)^{q}
-\left(\frac{1-\sqrt{1-x^2}}{2}\right)^{q}\right]
\label{f_q}
\end{align}
on $0 \leq x \leq 1$~\cite{lim}.
Here we note that the analytic evaluation of concurrence in Eq.~(\ref{C_formula}) together with the functional relations in Eq.~(\ref{relationmixed})
provides us with an analytic formula of Tsallis entanglement in two-qubit systems.
Moreover, the monotonicity and convexity of $f_{q}(x)$ for $1 \leq q \leq 4$ also provide an analytic lower bound of
TEoA,
\begin{equation}
{\mathcal T}^a_{q}\left(\rho_{A|B} \right)\geq f_{q}\left(\mathcal{C}^a(\rho_{A|B}) \right),
\label{relationassis}
\end{equation}
where the equality holds for $q=2$ or $3$~\cite{KimT}.
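The functional relation in Eq.~(\ref{relationmixed}) is simple to verify numerically for pure states, where ${\mathcal T}_{q}$ reduces to $S_q(\rho_A)$ and the concurrence is given by the pure-state formula above. A short sketch of ours:
\begin{verbatim}
# Check T_q(|psi>) = f_q(C(|psi>)) for a random two-qubit pure state.
import numpy as np

def f_q(x, q):
    s = np.sqrt(max(0.0, 1.0 - x * x))
    return (1 - ((1 + s) / 2)**q - ((1 - s) / 2)**q) / (q - 1)

rng = np.random.default_rng(2)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
M = psi.reshape(2, 2)                          # psi_{ik}: A index i, B index k
rho_A = M @ M.conj().T                         # reduced state of subsystem A
ev = np.clip(np.linalg.eigvalsh(rho_A), 0.0, 1.0)
C = np.sqrt(2 * (1 - np.sum(ev**2)))           # pure-state concurrence
for q in (1.5, 2.0, 3.0, 4.0):
    T_q = (1 - np.sum(ev**q)) / (q - 1)        # Tsallis-q entanglement
    print(q, np.isclose(T_q, f_q(C, q)))       # True
\end{verbatim}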
\section{Multi-qubit entanglement constraints in terms of Tsallis entropy}
\label{Sec: monopoly}
The monogamy of multi-qubit entanglement was shown to have a mathematical
characterization as an inequality; for a multi-qubit state $\rho_{A_1A_2\cdots A_n}$,
\begin{equation}
\mathcal{C}\left(\rho_{A_1|A_2 \cdots A_n}\right)^2 \geq \mathcal{C}\left(\rho_{A_1|A_2}\right)^2
+\cdots+\mathcal{C}\left(\rho_{A_1|A_n}\right)^2,
\label{nCmono}
\end{equation}
where $\mathcal{C}(\rho_{A_1|A_2\cdots A_n})$ is the
concurrence of $\rho_{A_1A_2\cdots A_n}$ with respect to the
bipartition between $A_1$ and the other qubits, and
$\mathcal{C}(\rho_{A_1|A_i})$ is the concurrence
of the two-qubit reduced density matrix $\rho_{A_1A_i}$ for $i=2,\ldots,
n$~\cite{ckw,ov}. Moreover, the {\em polygamy} (or dual monogamy) inequality of
multi-qubit entanglement was also established using CoA~\cite{gbs} as
\begin{equation}
\left(\mathcal{C}^a\left(\rho_{A_1|A_2 \cdots A_n}\right)\right)^2 \leq (\mathcal{C}^a\left(\rho_{A_1|A_2}\right))^2
+\cdots+(\mathcal{C}^a\left(\rho_{A_1|A_n}\right))^2,
\label{nCdual}
\end{equation}
where $\mathcal{C}^a(\rho_{A_1|A_2\cdots A_n})$ is the
CoA of $\rho_{A_1A_2\cdots A_n}$ with respect to the
bipartition between $A_1$ and the other qubits, and
$\mathcal{C}^a\left(\rho_{A_1|A_i}\right)$ is the CoA of the two-qubit reduced density
matrix $\rho_{A_1A_i}$ for $i=2,\ldots, n$.
Later, this mathematical characterization of monogamy and polygamy of multi-qubit entanglement
was also proposed in terms of Tsallis entropy, which encapsulates the inequalities
(\ref{nCmono}) and (\ref{nCdual}) as special cases~\cite{KimT}. Based on the following property of the function
$f_q(x)$ in Eq.~(\ref{f_q}) for $2 \leq q \leq3$,
\begin{equation}
f_q\left(\sqrt{x^2+y^2}\right)\geq f_q(x)+f_q(y), \label{gqmono}
\end{equation}
the Tsallis monogamy inequality of multi-qubit entanglement was proposed as
\begin{equation}
{\mathcal T}_{q}\left( \rho_{A_1|A_2 \cdots A_n}\right)\geq
{\mathcal T}_{q}(\rho_{A_1|A_2}) +\cdots+
{\mathcal T}_{q}(\rho_{A_1|A_n}),
\label{Tmono}
\end{equation}
for $2 \leq q \leq3$.
For the case when $1 \leq q \leq 2$ or $3 \leq q \leq 4$, the function
$f_q(x)$ in Eq.~(\ref{f_q}) also satisfies
\begin{equation}
f_q\left(\sqrt{x^2+y^2}\right)\leq f_q(x)+f_q(y),
\label{gqpoly}
\end{equation}
which leads to the Tsallis polygamy inequality
\begin{equation}
{\mathcal T}^a_{q}\left( \rho_{A_1|A_2 \cdots A_n}\right)\leq
{\mathcal T}^a_{q}(\rho_{A_1|A_2}) +\cdots+{\mathcal
T}^a_{q}(\rho_{A_1|A_n})
\label{Tpoly}
\end{equation}
for any multi-qubit state $\rho_{A_1A_2\cdots A_n}$.
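Both inequalities are easy to test numerically. The sketch below (ours) does so for the three-qubit W state and for a random pure state at $q=2$ and $3$, using the two-qubit formulae (\ref{C_formula}) and (\ref{Coa_formula}) together with the relation (\ref{relationmixed}) and the equality case of (\ref{relationassis}); for a pure state $\ket{\psi}_{A|BC}$ both ${\mathcal T}_{q}$ and ${\mathcal T}^a_{q}$ reduce to $S_q(\rho_A)$.
\begin{verbatim}
# Test the Tsallis monogamy (Tmono) and polygamy (Tpoly) inequalities at
# q = 2 and 3 for the three-qubit W state and a random pure state.
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)

def conc_and_coa(rho):     # Wootters concurrence and CoA of a two-qubit state
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ YY @ rho.conj() @ YY)))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3]), lam.sum()

def f_q(x, q):
    s = np.sqrt(max(0.0, 1.0 - x * x))
    return (1 - ((1 + s) / 2)**q - ((1 - s) / 2)**q) / (q - 1)

def check(psi, q):
    R = np.outer(psi, psi.conj()).reshape((2,) * 6)   # indices A,B,C,A',B',C'
    rho_A  = np.einsum('abcdbc->ad', R)
    rho_AB = np.einsum('abcdec->abde', R).reshape(4, 4)
    rho_AC = np.einsum('abcdbe->acde', R).reshape(4, 4)
    S_q = (1 - np.sum(np.linalg.eigvalsh(rho_A)**q)) / (q - 1)
    cAB, aAB = conc_and_coa(rho_AB)
    cAC, aAC = conc_and_coa(rho_AC)
    mono = S_q >= f_q(cAB, q) + f_q(cAC, q) - 1e-12   # Eq. (Tmono)
    poly = S_q <= f_q(aAB, q) + f_q(aAC, q) + 1e-12   # Eq. (Tpoly)
    return mono, poly

w = np.zeros(8); w[[1, 2, 4]] = 1 / np.sqrt(3)        # W state (A,B,C ordering)
rng = np.random.default_rng(3)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
for q in (2, 3):
    print(q, check(w, q), check(psi, q))              # all True
\end{verbatim}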
\section{Generalized multi-qubit entanglement constraints in terms of Tsallis entropy}
\label{Sec: gmonoT}
In this section, we provide generalized monogamy and polygamy inequalities of multi-qubit entanglement in terms of Tsallis entanglement and
TEoA. We first recall some properties of Tsallis entropy.
\begin{Prop}(Subadditivity of Tsallis entropy)
For any bipartite quantum state $\rho_{AB}$ with $\rho_A=\mbox{$\mathrm{tr}$}_B \rho_{AB}$, $\rho_B=\mbox{$\mathrm{tr}$}_A \rho_{AB}$, and $q \geq 1 $,
we have
\begin{align}
S_q\left(\rho_{AB}\right)\leq S_q\left(\rho_A\right)+S_q\left(\rho_B\right).
\label{eq: subadd}
\end{align}
\label{subadd}
\end{Prop}
Let us consider a three-party pure state $\ket{\psi}_{ABC}$ and its reduced density matrices $\rho_{BC}=\mbox{$\mathrm{tr}$}_{A}\ket{\psi}_{ABC}\bra{\psi}$,
$\rho_{B}=\mbox{$\mathrm{tr}$}_{AC}\ket{\psi}_{ABC}\bra{\psi}$ and $\rho_{C}=\mbox{$\mathrm{tr}$}_{AB}\ket{\psi}_{ABC}\bra{\psi}$.
For $q \geq 1$, Proposition~\ref{subadd} implies
\begin{align}
S_q\left(\rho_{BC}\right)\leq S_q\left(\rho_B\right)+S_q\left(\rho_C\right).
\label{subadd2}
\end{align}
Because $S_q\left(\rho_{BC}\right)=S_q\left(\rho_A\right)$ and $S_q\left(\rho_{C}\right)=S_q\left(\rho_{AB}\right)$,
Eq.~(\ref{subadd2}) can be rewritten as
\begin{align}
S_q\left(\rho_{A}\right)- S_q\left(\rho_B\right) \leq S_q\left(\rho_{AB}\right),
\label{subadd3}
\end{align}
and similarly, we also have
\begin{align}
S_q\left(\rho_{B}\right)- S_q\left(\rho_A\right) \leq S_q\left(\rho_{AB}\right).
\label{subadd4}
\end{align}
Thus we have the following triangle inequality of Tsallis entropy
\begin{align}
|S_q\left(\rho_{A}\right)- S_q\left(\rho_B\right)| \leq S_q\left(\rho_{AB}\right)\leq S_q\left(\rho_A\right)+S_q\left(\rho_B\right),
\label{triin}
\end{align}
for any bipartite quantum state $\rho_{AB}$ and $q \geq 1$.
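For example, if $\rho_{AB}=\ket{\psi}_{AB}\bra{\psi}$ is pure, then $S_q\left(\rho_{AB}\right)=0$ while the Schmidt decomposition of $\ket{\psi}_{AB}$ gives $S_q\left(\rho_{A}\right)=S_q\left(\rho_{B}\right)$, so the left-hand inequality in (\ref{triin}) is saturated and the right-hand inequality holds trivially.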
\begin{Thm}
For $q=2$ or $3$ and any multi-qubit pure state $\ket{\psi}_{ABC_1C_2\cdots C_n}$, we have
\begin{align}
{\mathcal T}_{q}\left(\ket{\psi}_{AB|C_1C_2\cdots C_n}\right)
\geq \sum_{i=1}^{n}\left[{\mathcal T}_{q}(\rho_{A|C_i})-{\mathcal T}^a_{q}(\rho_{B|C_i})\right],
\label{eq:monothe1}
\end{align}
where $\rho_{AB}=\mbox{$\mathrm{tr}$}_{C_1...C_{n}}(|\psi\rangle\langle\psi|)$,
$\rho_{AC_i}=\mbox{$\mathrm{tr}$}_{BC_1...C_{i-1}C_{i+1}...C_{n}}(|\psi\rangle\langle\psi|)$
and $\rho_{BC_i}=\mbox{$\mathrm{tr}$}_{AC_1...C_{i-1}C_{i+1}...C_{n}}(|\psi\rangle\langle\psi|)$.
\label{monothm1}
\end{Thm}
\begin{proof}
For simplicity, we sometimes denote ${\bf C} =\{C_1, C_2, \cdots, C_n \}$.
From the definition of Tsallis entanglement of $\ket{\psi}_{ABC_1C_2\cdots C_n}$ with respect to the bipartition between
$AB$ and ${\bf C}$, we have
\begin{align}
{\mathcal T}_{q}\left(\ket{\psi}_{AB|{\bf C}}\right)=&S_q\left(\rho_{AB}\right)\nonumber\\
\geq& S_q\left(\rho_{A}\right)- S_q\left(\rho_{B}\right)\nonumber\\
=&{\mathcal T}_{q}\left(\ket{\psi}_{A|B{\bf C}}\right)-{\mathcal T}_{q}\left(\ket{\psi}_{B|A{\bf C}}\right),
\label{triin2}
\end{align}
where the inequality is due to Inequality~(\ref{triin}).
We note that for any pure state $\ket{\psi}_{ABC}$ in a $2\otimes2\otimes d$ quantum system with reduced density matrices
$\rho_{AB}=\mbox{$\mathrm{tr}$}_{C}\ket{\psi}_{ABC}\bra{\psi}$ and $\rho_{AC}=\mbox{$\mathrm{tr}$}_{B}\ket{\psi}_{ABC}\bra{\psi}$, we have~\cite{22d}
\begin{align}
\mathcal{C}\left(\ket{\psi}_{A|BC}\right)^2=\mathcal{C}^a\left(\rho_{A|B}\right)^2+\mathcal{C}\left(\rho_{A|C}\right)^2.
\label{22d}
\end{align}
For $q=2$ or $3$, Inequalities~(\ref{gqmono}) and (\ref{gqpoly}) imply that
\begin{equation}
f_q\left(\sqrt{x^2+y^2}\right)= f_q(x)+f_q(y),
\label{gqeq}
\end{equation}
therefore
\begin{align}
{\mathcal T}_{q}\left(\ket{\psi}_{A|B{\bf C}}\right)=&f_q\left(\mathcal{C}\left(\ket{\psi}_{A|B{\bf C}}\right)\right)\nonumber\\
=&f_q\left(\sqrt{\mathcal{C}^a\left(\rho_{A|B}\right)^2+\mathcal{C}\left(\rho_{A|{\bf C}}\right)^2}\right)\nonumber\\
=&f_q\left(\mathcal{C}^a\left(\rho_{A|B}\right)\right)+f_q\left(\mathcal{C}\left(\rho_{A|{\bf C}}\right)\right),
\label{ABC1}
\end{align}
where the last equality is due to Eq.~(\ref{gqeq}).
Moreover, we also have
\begin{align}
{\mathcal T}_{q}\left(\ket{\psi}_{B|A{\bf C}}\right)=&f_q\left(\mathcal{C}\left(\ket{\psi}_{B|A{\bf C}}\right)\right)\nonumber\\
\leq&f_q\left(\sqrt{\mathcal{C}^a\left(\rho_{A|B}\right)^2+\sum_{i=1}^{n}\mathcal{C}^a\left(\rho_{B|C_i}\right)^2}\right)\nonumber\\
=&f_q\left(\mathcal{C}^a\left(\rho_{A|B}\right)\right)+f_q\left(\sqrt{\sum_{i=1}^{n}\mathcal{C}^a\left(\rho_{B|C_i}\right)^2}\right),
\label{BAC1}
\end{align}
where the inequality is due to Inequality~(\ref{nCdual}) and the monotonicity of $f_q(x)$, and the last equality is from Eq.~(\ref{gqeq}).
Eq.~(\ref{ABC1}) and Inequality~(\ref{BAC1}) imply that
\begin{align}
{\mathcal T}_{q}\left(\ket{\psi}_{A|B{\bf C}}\right)&-{\mathcal T}_{q}\left(\ket{\psi}_{B|A{\bf C}}\right)\nonumber\\
&\geq f_q\left(\mathcal{C}\left(\rho_{A|{\bf C}}\right)\right)-f_q\left(\sqrt{\sum_{i=1}^{n}\mathcal{C}^a\left(\rho_{B|C_i}\right)^2}\right).
\label{abcacba}
\end{align}
Here we note that
\begin{align}
f_q\left(\mathcal{C}\left(\rho_{A|{\bf C}}\right)\right)\geq& f_q\left(\sqrt{\sum_{i=1}^{n}\mathcal{C}\left(\rho_{A|C_i}\right)^2}\right)\nonumber\\
=&\sum_{i=1}^{n}f_q\left(\mathcal{C}\left(\rho_{A|C_i}\right)\right)\nonumber\\
=&\sum_{i=1}^{n}\mathcal{T}_{q}\left(\rho_{A|C_i}\right),
\label{AC1}
\end{align}
where the first inequality is due to Inequality~(\ref{nCmono}) and the monotonicity of $f_q(x)$,
the first equality is from the iterative use of Eq.~(\ref{gqeq}), and the last equality is from the functional relation of two-qubit
concurrence and Tsallis entanglement in Eq.~(\ref{relationmixed}).
Moreover, we also have
\begin{align}
f_q\left(\sqrt{\sum_{i=1}^{n}\mathcal{C}^a\left(\rho_{B|C_i}\right)^2}\right)
=&\sum_{i=1}^{n}f_q\left(\mathcal{C}^a\left(\rho_{B|C_i}\right)\right)\nonumber\\
\leq&\sum_{i=1}^{n}\mathcal{T}^a_{q}\left(\rho_{B|C_i}\right),
\label{BCi}
\end{align}
where the first equality is from the iterative use of Eq.~(\ref{gqeq}), and the last inequality is from Inequality~(\ref{relationassis}).
From Inequalities~(\ref{abcacba}), (\ref{AC1}) and (\ref{BCi}), we have
\begin{align}
{\mathcal T}_{q}\left(\ket{\psi}_{A|B{\bf C}}\right)&-{\mathcal T}_{q}\left(\ket{\psi}_{B|A{\bf C}}\right)\nonumber\\
&\geq \sum_{i=1}^{n}\mathcal{T}_{q}\left(\rho_{A|C_i}\right)-\sum_{i=1}^{n}\mathcal{T}^a_{q}\left(\rho_{B|C_i}\right),
\label{abcacba2}
\end{align}
which, together with Inequality~(\ref{triin2}), completes the proof.
\end{proof}
Theorem~\ref{monothm1} provides a monogamy-type lower bound on the multi-qubit entanglement between the two-qubit subsystem $AB$ and
the remaining $n$-qubit subsystem $C_1C_2\cdots C_n$
in terms of the two-qubit entanglements distributed among the subsystems.
For the case when the one-qubit subsystem $B$ is separable from the other qubits, Inequality~(\ref{eq:monothe1})
reduces to the CKW-type monogamy inequality in~(\ref{Tmono});
thus Theorem~\ref{monothm1} provides a generalized monogamy relation of multi-qubit entanglement in terms of Tsallis entropy.
The lower bound provided in Theorem~\ref{monothm1} can be evaluated analytically, because the two-qubit concurrence and CoA
admit analytic evaluations and are related to Tsallis entanglement through Eq.~(\ref{relationmixed})
and Inequality~(\ref{relationassis}).
Now, we present a generalized polygamy relation of multi-qubit entanglement in terms of TEoA. We first
provide the following theorem, which shows a reciprocal relation of TEoA in three-party quantum systems.
\begin{Thm}
For $q\geq1$ and any three-party quantum state $\rho_{ABC}$, we have
\begin{align}
{\mathcal T}_{q}^a\left(\rho_{A|BC}\right)
\leq& {\mathcal T}^a_{q}\left(\rho_{B|AC}\right)+{\mathcal T}^a_{q}\left(\rho_{C|AB}\right).
\label{eq: genpoly1}
\end{align}
\label{thm: genpoly1}
\end{Thm}
\begin{proof}
Let
\begin{align}
\rho_{ABC}=\sum_{j}p_j\ket{\psi_j}_{ABC}\bra{\psi_j}
\label{opt}
\end{align}
be an optimal decomposition realizing ${\mathcal T}_{q}^a\left(\rho_{A|BC}\right)$, that is,
\begin{align}
{\mathcal T}_{q}^a\left(\rho_{A|BC}\right)=\sum_{j}p_j{\mathcal T}_{q}\left(\ket{\psi_j}_{A|BC}\right).
\label{optdecT}
\end{align}
For each pure state $\ket{\psi_j}_{ABC}$ in the decomposition~(\ref{opt})
with $\rho^j_{BC}=\mbox{$\mathrm{tr}$}_{A}\ket{\psi_j}_{ABC}\bra{\psi_j}$, $\rho^j_{B}=\mbox{$\mathrm{tr}$}_{AC}\ket{\psi_j}_{ABC}\bra{\psi_j}$ and
$\rho^j_{C}=\mbox{$\mathrm{tr}$}_{AB}\ket{\psi_j}_{ABC}\bra{\psi_j}$, we have
\begin{align}
{\mathcal T}_{q}\left(\ket{\psi_j}_{A|BC}\right)=&S_q\left(\rho^j_{BC}\right)\nonumber\\
\leq & S_q\left(\rho^j_{B}\right)+S_q\left(\rho^j_{C}\right)\nonumber\\
=&{\mathcal T}_{q}\left(\ket{\psi_j}_{B|AC}\right)+{\mathcal T}_{q}\left(\ket{\psi_j}_{C|AB}\right),
\label{subj}
\end{align}
where the inequality is due to the subadditivity of Tsallis entropy in Proposition~\ref{subadd}.
Now we have
\begin{align}
{\mathcal T}_{q}^a\left(\rho_{A|BC}\right)=&\sum_{j}p_j{\mathcal T}_{q}\left(\ket{\psi_j}_{A|BC}\right)\nonumber\\
\leq & \sum_{j}p_j{\mathcal T}_{q}\left(\ket{\psi_j}_{B|AC}\right)+\sum_{j}p_j{\mathcal T}_{q}\left(\ket{\psi_j}_{C|AB}\right)\nonumber\\
\leq& {\mathcal T}^a_{q}\left(\rho_{B|AC}\right)+{\mathcal T}^a_{q}\left(\rho_{C|AB}\right),
\label{genpoly1}
\end{align}
where the first inequality is from Inequality~(\ref{subj}), and the second inequality is due to the definition of TEoA.
\end{proof}
Theorem~\ref{thm: genpoly1} shows a reciprocal relation of TEoA in three-party quantum systems;
the sum of the two TEoA's with respect to two of the possible bipartitions ($B|AC$ and $C|AB$) always bounds from above the TEoA with respect to the remaining bipartition ($A|BC$).
Moreover, the iterative use of Inequality~(\ref{eq: genpoly1}) naturally leads us to the generalization of Theorem~\ref{thm: genpoly1} to
multi-party quantum systems.
\begin{Cor}
For $q\geq1$ and any multi-party quantum state $\rho_{A_1A_2\cdots A_n}$,
\begin{align}
{\mathcal T}_{q}^a\left(\rho_{A_1|A_2\cdots A_n}\right)
\leq& \sum_{i=2}^{n}{\mathcal T}^a_{q}\left(\rho_{A_i|A_1\cdots\mbox{$\omega$}idehat{A_i}\cdots A_n}\right),
\label{eq: genpoly3}
\end{align}
where
\begin{align}
{\mathcal T}^a_{q}\left(\rho_{A_i|A_1\cdots\mbox{$\omega$}idehat{A_i}\cdots A_n}\right)={\mathcal T}^a_{q}\left(\rho_{A_i|A_1\cdots A_{i-1}A_{i+1}\cdots A_n}\right)
\end{align}
for each $i=1, \cdots, n$.
\label{racmulti}
\end{Cor}
The following corollary presents a generalized polygamy relation of multi-qubit entanglement in terms of TEoA.
\begin{Cor}
For $1 \leq q \leq 2$ or $3 \leq q \leq 4$ and any multi-qubit state $\rho_{ABC_1C_2\cdots C_n}$,
we have
\begin{align}
{\mathcal T}_{q}^a\left(\rho_{AB|C_1C_2\cdots C_n}\right)
\leq &2 {\mathcal T}_{q}^a\left(\rho_{A|B}\right)\nonumber\\
&+\sum_{i=1}^{n}\left[\mathcal{T}^a_{q}\left(\rho_{A|C_i}\right)
+\mathcal{T}^a_{q}\left(\rho_{B|C_i}\right)\right].
\label{eq: genpoly2}
\end{align}
\label{Cor: genpoly2}
\end{Cor}
\begin{proof}
By considering $\rho_{ABC_1C_2\cdots C_n}$ as a three-party quantum state $\rho_{AB{\bf C}}$ with ${\bf C}=C_1C_2\cdots C_n$,
Theorem~\ref{thm: genpoly1} leads us to
\begin{align}
{\mathcal T}_{q}^a\left(\rho_{AB|{\bf C}}\right)
\leq& {\mathcal T}^a_{q}\left(\rho_{A|B{\bf C}}\right)+{\mathcal T}^a_{q}\left(\rho_{B|A{\bf C}}\right).
\label{genpoly2}
\end{align}
From the multi-qubit Tsallis polygamy inequality in (\ref{Tpoly}), we have
\begin{align}
{\mathcal T}^a_{q}\left(\rho_{A|B{\bf C}}\right)&\leq {\mathcal T}_{q}^a\left(\rho_{A|B}\right)+\sum_{i=1}^{n}\mathcal{T}^a_{q}\left(\rho_{A|C_i}\right)\nonumber\\
{\mathcal T}^a_{q}\left(\rho_{B|A{\bf C}}\right)&\leq {\mathcal T}_{q}^a\left(\rho_{B|A}\right)+\sum_{i=1}^{n}\mathcal{T}^a_{q}\left(\rho_{B|C_i}\right).
\label{genpoly3}
\end{align}
Inequality~(\ref{genpoly2}) together with Inequalities~(\ref{genpoly3}) leads us to Inequality~(\ref{eq: genpoly2}).
\end{proof}
Corollary~\ref{Cor: genpoly2} provides a polygamy-type upper bound on the multi-qubit entanglement between the two-qubit subsystem $AB$ and the remaining $n$-qubit
subsystem $C_1C_2\cdots C_n$ in terms of the two-qubit TEoA distributed among the subsystems. For the case when the one-qubit subsystem $B$ is
independent of the other qubits (that is, $\rho_{AB{\bf C}}=\rho_{A{\bf C}}\otimes \rho_B$),
Inequality~(\ref{eq: genpoly2}) reduces to the Tsallis polygamy inequality in (\ref{Tpoly}). In other words, Corollary~\ref{Cor: genpoly2} provides a generalized polygamy
relation of multi-qubit entanglement in terms of TEoA.
\section{Conclusion}
\label{Conclusion}
We have provided generalized entanglement constraints in multi-qubit systems in terms of Tsallis-$q$ entanglement and TEoA.
We have shown that the CKW-type monogamy inequality of multi-qubit entanglement can have a generalized form
in terms of Tsallis-$q$ entanglement and TEoA for $q=2$ or $3$. This generalized monogamy inequality encapsulates multi-qubit CKW-type
inequality as a special case. We have further shown a generalized polygamy inequality of multi-qubit entanglement in terms of TEoA for
$1 \leq q \leq 2$ or $3 \leq q \leq 4$, which also contains multi-qubit polygamy inequality as a special case.
Whereas entanglement in bipartite quantum systems has been intensively studied and is well understood, the situation is
far more difficult for multi-party quantum systems, and comparatively little is known about its characterization and quantification.
MoE is a fundamental property of multi-party quantum entanglement, which also provides various applications in quantum information theory.
Thus, characterizing MoE is an important and even necessary task for understanding the whole picture of multi-party quantum entanglement.
Although MoE is a property characteristic of multipartite quantum entanglement,
it concerns the relation among the bipartite entanglements between the parties of a multipartite system.
Thus, it is inevitable and crucial to have a proper way of quantifying bipartite
entanglement in order to describe the monogamy nature of multi-party quantum entanglement well.
Our result presented here deals with Tsallis-$q$ entropy, a one-parameter class of entropy functions, and provides sufficient
conditions on the choice of the parameter $q$ for generalized monogamy and polygamy relations of multi-qubit entanglement.
Noting the importance of the study of multi-party quantum entanglement,
our result provides a useful methodology for understanding the monogamy and polygamy nature of multi-party entanglement.
\section*{Acknowledgments}
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF)
funded by the Ministry of Education, Science and Technology (NRF-2014R1A1A2056678).
\end{document}
\begin{document}
\title{Strong boundedness of simply connected split Chevalley groups defined over rings}
\author{Alexander A. Trost}
\address{University of Aberdeen}
\email{[email protected]}
\begin{abstract}
This paper is concerned with the diameter of certain word norms on S-arithmetic split Chevalley groups. Such groups are well known to be boundedly generated by root elements. We prove that the diameters of word metrics given by conjugacy classes on S-arithmetic split Chevalley groups admit an upper bound depending only on the number of conjugacy classes. This property, called \textit{strong boundedness}, was introduced by K\k{e}dra, Libman and Martin in \cite{KLM} and proven for ${\rm SL}_n(R)$, assuming
$R$ is a principal ideal domain and $n\geq 3.$ We also provide examples of normal generating sets for S-arithmetic split Chevalley groups proving our bounds are sharp in an appropriate sense and give a complete account of the existence of small normally generating sets of ${\rm Sp}_4(R)$ and $G_2(R)$.
For instance, we prove that ${\rm Sp}_4(\mathbb{Z}[\frac{1+\sqrt{-7}}{2}])$ cannot be generated by a single conjugacy class.
\end{abstract}
\maketitle
\section{Introduction}
The main concept we study in this paper is strong boundedness of groups. For split Chevalley groups it arises from a couple of different sources. Most importantly, it is related to bounded generation of groups and to the diameter of the Cayley graph of the group with respect to certain infinite sets of generators of said group.
Firstly, a group $G$ is called \textit{boundedly generated} by a set $S\subset G$, if there is a natural number $N:=N(S)$ such that $G=(SS^{-1})^N.$
Bounded generation of split Chevalley groups has been widely studied. For example, in the S-arithmetic setting, Tavgen proved in \cite{MR1044049} that all split Chevalley groups of rank at least $2$ defined over rings of S-algebraic integers have bounded generation with respect to root elements.
We define precisely both split Chevalley groups $G(\Phi,R)$ and their root elements in Section~\ref{sec_basic_notions}, but for the purpose of this introduction, the reader can think about classical matrix groups like ${\rm SL}_n$ and ${\rm Sp}_{2n}$.
Furthermore, Morris \cite{MR2357719} has extended Tavgen's result to localizations of orders in rings of algebraic integers in the case of the elementary subgroup
of ${\rm SL}_n$ and Morgan, Rapinchuk, Sury \cite{MR3892969} established bounded generation by root elements, even in the case of ${\rm SL}_2,$ if the underlying ring of S-algebraic integers has infinitely many units.
Secondly, K\k{e}dra, Libman, Martin \cite{KLM} considered word norms for generating sets consisting of finitely many conjugacy classes. Namely, for a subset $S$ normally generating $G$, the word norm $\|g\|_S$ for $g\in G$ is the smallest number of conjugates of elements of $S\cup S^{-1}$ needed to write $g.$ The diameter $\|G\|_S$, if it is finite, depends on the normally generating set $S$. However, the notion of \textit{strong boundedness} states that
$\|G\|_S$ has at least an upper bound only depending on the cardinality $|S|.$
The first example of this behaviour is presented in the next theorem. In it, $\|\cdot\|_{EL(n)}$ denotes the word metric of ${\rm SL}_n$ with respect to the generating set of elementary matrices, that is, unipotent matrices with at most one non-zero off-diagonal entry.
\begin{Theorem}\cite[Theorem~6.1]{KLM}
\label{KLM_thm}
Let $R$ be a principal ideal domain and let ${\rm SL}_n(R)$ be boundedly generated by elementary matrices for $n\geq 3$ with the diameter
$\|{\rm SL}_n(R)\|_{EL(n)}$ satisfying $\|{\rm SL}_n(R)\|_{EL(n)}\leq C_n$ for some $C_n\in\mathbb{N}.$ Then ${\rm SL}_n(R)$ is normally generated by the single element $E_{1,n}(1)$ and
\begin{enumerate}
\item{for all finite, normally generating subsets $S$ of ${\rm SL}_n(R)$, one has $\|{\rm SL}_n(R)\|_S\leq C_n(4n+4)|S|.$
}
\item{if $R$ has infinitely many maximal ideals, then for each $k\in\mathbb{N}$ there is a finite, normally generating subset $S_k$ of ${\rm SL}_n(R)$ with $|S_k|=k$ and
$\|{\rm SL}_n(R)\|_{S_k}\geq k$.}
\end{enumerate}
\end{Theorem}
The proof of this theorem uses extensive matrix calculations and relies heavily on the underlying ring being a principal ideal domain as well as bounded generation by elementary matrices. Bounded generation can be obtained from Tavgen \cite{MR1044049} and so one of the possible applications would be rings of algebraic integers with class number $1.$ However, it is well known that not all rings of algebraic integers are principal ideal domains and the paper \cite{MR1044049} speaks about more general matrix groups aside from ${\rm SL}_n$ and about arbitrary rings of algebraic integers.
In this paper we prove the following generalization of part (1):
\begin{repTheorem}{exceptional Chevalley}
Let $\Phi$ be an irreducible root system of rank at least $2$ and let $R$ be a commutative ring with $1$. Additionally, let $G(\Phi,R)$ be boundedly generated by root elements; if $\Phi=B_2$ or $G_2$, we further assume $(R:2R)<\infty.$
Then there is a constant $C(\Phi,R)\in\mathbb{N}$ such that for all finite, normally generating subsets $S$ of $G(\Phi,R)$, one has
\begin{equation*}
\|G(\Phi,R)\|_S\leq C(\Phi,R)|S|.
\end{equation*}
\end{repTheorem}
\begin{remark}
Root elements are natural generalizations of the elementary matrices in ${\rm SL}_n$. Such root elements are usually denoted by $\varepsilon_{\chi}(x)$ with varying $\chi\in\Phi$ and $x\in R$. Most notably
\begin{equation*}
\varepsilon_{\chi}(x_1+x_2)=\varepsilon_{\chi}(x_1)\varepsilon_{\chi}(x_2)
\end{equation*}
holds for all $x_1,x_2\in R.$
\end{remark}
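As an illustration in the most familiar case (included only for orientation and not used in any proof): for $\Phi=A_{n-1}$ one has $G(\Phi,R)={\rm SL}_n(R)$, the root elements are the elementary matrices $\varepsilon_{\chi}(x)=I_n+xE_{ij}$ with $i\neq j$ and $E_{ij}$ a matrix unit, and the additivity above is immediate from $E_{ij}^2=0$:
\begin{equation*}
\left(I_n+x_1E_{ij}\right)\left(I_n+x_2E_{ij}\right)=I_n+(x_1+x_2)E_{ij}.
\end{equation*}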
The proofs of both Theorem~\ref{exceptional Chevalley} and part (1) of Theorem~\ref{KLM_thm} follow the same two-step strategy. First, one obtains
arbitrary root elements as bounded products of conjugates of the finite normally generating set in question; second, one uses bounded generation of the group by root elements to finish. The second step is virtually the same in both cases. However, in the first step, instead of using explicit matrix calculations, we use results about the structure of normal subgroups of matrix groups and G\"odel's Compactness Theorem, which enables us to treat more general rings.
Beyond that, there are some features in the rank-$2$ cases (more precisely ${\rm Sp}_4$ and $G_2$) which do not occur in the higher rank cases.
Theorem~\ref{exceptional Chevalley} is fairly abstract and can in principle be applied to a lot of different rings. In consequence, we get a couple of corollaries.
First, for rings of S-algebraic integers we obtain Theorem~\ref{alg_numbers_strong_bound}. Second, there is a result for rings of stable range $1$ (Theorem~\ref{stable_range1_strong_boundedness}) and more specifically for semilocal rings (Theorem~\ref{semilocal_uniform}).
For rings of S-algebraic integers, we also construct finite, normally generating subsets of $G(\Phi,R)$ in Section~\ref{section_lower_bounds} that give generalizations of Theorem~\ref{KLM_thm}(2). For $\Phi=B_2$ or $G_2$ the situation is more complex: namely, there is an obstruction to the existence of small normally generating sets, and we will give a complete account of this. One possible example of this issue is the following:
\begin{repCorollary}{small_generating_sets_Sp4}
Let $R$ be the ring of algebraic integers in the number field $\mathbb{Q}[\sqrt{-7}].$ Then ${\rm Sp}_4(R)$ and $G_2(R)$ are not generated by a single conjugacy class.
\end{repCorollary}
The paper is structured as follows: In Section~\ref{sec_basic_notions}, we define all needed notions such as split Chevalley groups, their congruence subgroups and root elements, level ideals and the word norms which we study. In Section~\ref{proof_main}, we state the main technical result and explain how to obtain the main theorem from it. In Section~\ref{proof_fundamental_prop}, we prove this technical result. Both of these sections are split up according to the particular root system $\Phi$ in question, as the arguments are quite different for different $\Phi.$ Section~\ref{Corollaries} discusses various classes of rings that fulfill the assumptions of Theorem~\ref{exceptional Chevalley} and gives different versions of it. Lastly, in Section~\ref{section_lower_bounds}, we explicitly construct various finite, normally generating sets for the $G(\Phi,R)$ in case $R$ is a ring of S-algebraic integers. In the same section, we also give a complete description of when $G_2(R)$ and ${\rm Sp}_4(R)$ fail to have small normally generating sets for such rings, and we make precise what we mean by \textit{small}.
\section*{Acknowledgments}
I want to thank Bastien Karlhofer for pointing out Proposition~\ref{Vaserstein_decomposition} to me and for always being willing to listen and talk about mathematics. Further, I want to thank Ehud Meir for helpful comments regarding how to write a paper, Ben Martin for being available if I had questions and him and Jarek K\k{e}dra for tirelessly reading several iterations of this paper. This work was funded by Leverhulme Trust Research Project Grant RPG-2017-159.
\section{Basic definitions and notions}
\label{sec_basic_notions}
First, we introduce the basic notions of boundedness and word metrics we study in this paper:
\begin{mydef}
Let $G$ be a group.
\begin{enumerate}
\item{The notation $A\sim B$ for $A,B\in G$ denotes that $A,B$ are conjugate in $G$. Secondly we define $A^B:=BAB^{-1}$ for $A,B\in G$.}
\item{For $S\subset G$, we define $\langle\langle S\rangle\rangle$ as the smallest normal subgroup of $G$ containing $S.$}
\item{A subset $S\subset G$ is called a \textit{normally generating set} of $G$, if $\langle\langle S\rangle\rangle=G$.}
\item{
The group $G$ is called \textit{finitely normally generated}, if a finite normally generating set $S$ exists.}
\item{For $k\in\mathbb{N}$ and $S\subset G$ denote by
\begin{equation*}
B_S(k):=\bigcup_{1\leq i\leq k}\{x_1\cdots x_i|\ \text{for each }j\leq i\text{ there is }A\in S\text{ with }x_j\sim A\text{ or }x_j\sim A^{-1}\}\cup\{1\}.
\end{equation*}
Further set $B_S(0):=\{1\}.$ If S only contains the single element $A$, then we write $B_A(k)$ instead of $B_{\{A\}}(k)$.}
\item{Define for a set $S\subset G$ the conjugation invariant word norm $\|\cdot\|_S:G\to\mathbb{N}_0\cup\{+\infty\}$ by
$\|A\|_S:=\min\{k\in\mathbb{N}_0|A\in B_S(k)\}$ for $A\in\langle\langle S\rangle\rangle$ and by $\|A\|_S:=+\infty$ for $A\notin\langle\langle S\rangle\rangle.$ The diameter
$\|G\|_S={\rm diam}(\|\cdot\|_S)$ of $G$ is defined as the minimal $N\in\mathbb{N}$ such that $\|A\|_S\leq N$ for all $A\in G$ or as $\infty$ if there is no such $N$.}
\item{Define for $k\in\mathbb{N}$ the invariant
\begin{equation*}
\Delta_k(G):=\sup\{{\rm diam}(\|\cdot\|_S)|\ S\subset G\text{ with }\card{S}\leq k,\langle\langle S\rangle\rangle=G\}\in\mathbb{N}_0\cup\{\rm\infty\}
\end{equation*}
with $\Delta_k(G)$ defined as $-\infty$, if there is no normally generating set $S\subset G$ with $\card{S}\leq k.$
}
\item{The group $G$ is called \textit{strongly bounded}, if $\Delta_k(G)$ is finite for all $k\in\mathbb{N}$.
It is called \textit{uniformly bounded}, if there is a single global bound $L(G)\in\mathbb{N}$ with
$\Delta_k(G)\leq L(G)$ for all $k\in\mathbb{N}.$}
\end{enumerate}
\end{mydef}
\begin{remark}
\begin{enumerate}
\item{Note $\Delta_k(G)\leq\Delta_{k+1}(G)$ for all $k\in\mathbb{N}$.}
\item{
A group $G$ is called \textit{bounded} if $\nu(G)<+\infty$ holds for every conjugation-invariant norm $\nu:G\to\mathbb{R}_{\geq 0}$. For finitely normally generated groups this is equivalent to the existence of a finite normally generating set $S$ such that
\begin{equation*}
{\rm diam}(\|\cdot\|_S)<\infty.
\end{equation*}
Boundedness properties are not well behaved under passage to finite index subgroups. For example the infinite dihedral group $D_{\infty}$ is bounded, but its finite index subgroup $\mathbb{Z}$ is not.
}
\end{enumerate}
\end{remark}
\subsection{Simply connected split Chevalley groups}\label{naturality_Chevalley}
To define split Chevalley groups, we will first define the Chevalley-Demazure group scheme. We do not prove various statements made in the course of this definition. For a more complete description, in which the implicit claims are proven, we refer to \cite{MR1611814} and \cite[Theorem~1, Chapter~1, p.7; Theorem~6(e), Chapter~5, p.38; Lemma~27, Chapter~3, p.~29]{MR3616493}.
Let $G$ be a simply-connected, semi-simple complex Lie group and $T$ a maximal torus in $G$ with associated irreducible root system $\Phi.$ Further, denote by
$\Pi$ a system of positive, simple roots of $\Phi,$
and by $\mathfrak{g}$ the corresponding complex semi-simple Lie algebra of $G$. The Cartan subalgebra corresponding to $T$ will be denoted by $\mathfrak{h}$ and the corresponding root spaces in $\mathfrak{g}$ by $E_{\phi}$ for $\phi\in\Phi.$ These choices of Cartan subalgebra and (simple, positive) roots will be fixed throughout the paper.
The Lie-algebra $\mathfrak{g}$ has a so-called \textit{Chevalley basis}
\begin{equation*}
\{X_{\phi}\in E_{\phi}\}_{\{\phi\in\Phi\}}\cup\{H_{\phi}\}_{\{\phi\in\Pi\}}
\end{equation*}
such that the structure constants of the Lie algebra $\mathfrak{g}$ with respect to this basis are all integral. Chevalley bases are unique up to signs and automorphisms of $\mathfrak{g}$.
For each faithful, continuous representation $\rho:G\to GL(V)$ for a complex vector space $V$, there is a lattice $V_{\mathbb{Z}}$ in $V$ with the property:
\begin{equation*}
\frac{d\rho(X_{\phi})^k}{k!}\left(V_{\mathbb{Z}}\right)\subset V_{\mathbb{Z}}\text{ for all }\phi\in\Phi\text{ and }k\geq 0.
\end{equation*}
Fixing a minimal generating set $\{v_1,\dots,v_n\}$ of $V_{\mathbb{Z}}$ then defines functions $t_{ij}:G\to\mathbb{C}$ for all $1\leq i,j\leq n$ by:
\begin{equation*}
\rho(g)(v_i)=\sum_{j=1}^n t_{ij}(g)v_j,
\end{equation*}
because the set $\{v_1,\dots,v_n\}$ also defines a $\mathbb{C}$-basis of $V.$
The functions $t_{ij}$ generate a Hopf algebra called $\mathbb{Z}[G]$ and this defines the Chevalley-Demazure group scheme by
\begin{equation*}
G(\Phi,\cdot): R\mapsto G(\Phi,R):={\rm Hom}_{\mathbb{Z}}(\mathbb{Z}[G],R)
\end{equation*}
with the group structure on $G(\Phi,R)$ given by the Hopf algebra structure on $\mathbb{Z}[G]$ and the induced group homomorphisms
$G(\Phi,R)\to G(\Phi,S)$ obtained by postcomposing with the ring homomorphism $R\to S$. This group scheme $G(\Phi,\cdot)$ does not depend up to isomorphism on the choices of Chevalley basis, faithful representation $\rho$ and lattice $V_{\mathbb{Z}}.$
Further note that the ring $\mathbb{Z}[y_{ij}]$ is a finitely generated $\mathbb{Z}$-algebra and $\mathbb{Z}$ is noetherian. Hence the polynomial ring $\mathbb{Z}[y_{ij}]$ in several unknowns is noetherian, and so there is a finite collection of polynomial functions $P$ in $\mathbb{Z}[y_{ij}]$ such that
$\mathbb{Z}[y_{ij}]/(P(y_{ij}))\cong\mathbb{Z}[G]$ with the isomorphism given by $y_{ij}\mapsto t_{ij}.$
Using this, one can equivalently define $G(\Phi,R)$ as a subgroup of $GL_n(R)$ by setting:
\begin{equation*}
G(\Phi,R):=\{A\in R^{n\times n}|P(A)=0\}.
\end{equation*}
In this notation, the induced maps $G(\Phi,R)\to G(\Phi,S)$ are obtained by entry-wise application of the ring homomorphism $R\to S.$
We will use mostly this interpretation of $G(\Phi,R)$ in the course of this paper.
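As a concrete instance of this description (recorded only for orientation): for $\Phi=A_{n-1}$ the simply connected split Chevalley group is $G(A_{n-1},R)={\rm SL}_n(R)$, and the finite collection of polynomials $P$ may be taken to consist of the single polynomial $\det\left((y_{ij})\right)-1$, so that
\begin{equation*}
G(A_{n-1},R)=\{A\in R^{n\times n}|\det(A)=1\}.
\end{equation*}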
\begin{remark}
In terms of algebraic groups, the group $G(\Phi,R)$ is the group of $R$-points of the $\mathbb{Z}$-defined group scheme $G(\Phi,\cdot).$
\end{remark}
\subsection{Root elements}
Next, we will define the previously mentioned root elements of Chevalley groups. To this end, fix a root $\alpha\in\Phi$ and observe that for
$Z\in\mathbb{C}$ arbitrary the following function is an element of $\rho(G)\subset GL(V):$
\begin{equation*}
\varepsilon_{\alpha}(Z):=\sum_{k=0}^{\infty}\frac{(Zd\rho(X_{\alpha}))^k}{k!}
\end{equation*}
Further, $\varepsilon_{\alpha}(Z)\in GL(V)$ has coordinates with respect to the basis $\{v_1,\dots,v_n\}$ that are polynomial functions in $Z$ with coefficients in $\mathbb{Z}.$ This yields a ring homomorphism
\begin{equation*}
\varepsilon_{\alpha}:\mathbb{Z}[G]\to\mathbb{Z}[Z].
\end{equation*}
By precomposing, this defines another map as follows:
\begin{equation*}
\varepsilon_{\alpha}:\varepsilon_{\alpha}(R):={\rm Hom}_{\mathbb{Z}}(\mathbb{Z}[Z],R)\to {\rm Hom}_{\mathbb{Z}}(\mathbb{Z}[G],R)=G(\Phi,R)
\end{equation*}
Lastly, the root elements $\varepsilon_{\alpha}(x)\in G(\Phi,R)$ for $x\in R$ are defined as the image of the map
$x:\mathbb{Z}[Z]\to R,Z\mapsto x$ under the map $\varepsilon_{\alpha}.$
The \textit{elementary subgroup} $E(\Phi,R)$ (or $E(R)$ if $\Phi$ is clear from the context) is defined as the subgroup of $G(\Phi,R)$ generated by the elements
$\varepsilon_{\alpha}(x)$ for $\alpha\in\Phi$ and $x\in R.$ We refer the reader to \cite{MR3616493} for further details regarding root elements.
Also note the following property:
\begin{mydef}
Let $R$ be a commutative ring with $1$. Then $G(\Phi,R)$ is \textit{boundedly generated by root elements}, if there is a natural number $N\in\mathbb{N}$ and
roots $\alpha_1,\dots,\alpha_N\in\Phi$ such that for all $A\in G(\Phi,R)$, there are $a_1,\dots,a_N\in R$ (depending on $A$) such that:
\begin{equation*}
A=\prod_{i=1}^N\varepsilon_{\alpha_i}(a_i).
\end{equation*}
\end{mydef}
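To illustrate this notion in the simplest possible setting (this example is not used later): if $R=k$ is a field, then any $A=\left(\begin{smallmatrix} a & b\\ c & d\end{smallmatrix}\right)\in{\rm SL}_2(k)$ with $c\neq 0$ factors as
\begin{equation*}
\begin{pmatrix} a & b\\ c & d\end{pmatrix}=\begin{pmatrix} 1 & (a-1)c^{-1}\\ 0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0\\ c & 1\end{pmatrix}\begin{pmatrix} 1 & (d-1)c^{-1}\\ 0 & 1\end{pmatrix},
\end{equation*}
as one checks using $ad-bc=1$; the case $c=0$ reduces to this one after multiplying by one further root element. Hence ${\rm SL}_2(k)$ is boundedly generated by root elements with $N=4$ and the fixed sequence of roots $(\alpha,-\alpha,\alpha,-\alpha)$.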
The symbols $\varepsilon_{\alpha}(t)$ are additive in $t\in R$, that is, $\varepsilon_{\alpha}(t+s)=\varepsilon_{\alpha}(t)\varepsilon_{\alpha}(s)$ holds for all $t,s\in R$, and they satisfy the commutator formulas expressed in the next lemma. We will use the additivity and the commutator formulas implicitly throughout the paper, usually without reference.
\begin{Lemma}\cite[Proposition~33.2-33.5]{MR0396773}
\label{commutator_relations}
Let $\alpha,\beta\in\Phi$ be roots with $\alpha+\beta\neq 0$ and $a,b\in R$ be given.
\begin{enumerate}
\item{If $\alpha+\beta\notin\Phi$, then $(\varepsilon_{\alpha}(a),\varepsilon_{\beta}(b))=1.$}
\item{If $\alpha,\beta$ are positive, simple roots in a root subsystem of $\Phi$ isomorphic to $A_2$, then\\
$(\varepsilon_{\beta}(b),\varepsilon_{\alpha}(a))=\varepsilon_{\alpha+\beta}(\pm ab).$}
\item{If $\alpha,\beta$ are positive, simple roots in a root subsystem of $\Phi$ isomorphic to $B_2$ with $\alpha$ short and $\beta$ long, then
\begin{align*}
&(\varepsilon_{\alpha+\beta}(b),\varepsilon_{\alpha}(a))=\varepsilon_{2\alpha+\beta}(\pm 2ab)\text{ and}\\
&(\varepsilon_{\beta}(b),\varepsilon_{\alpha}(a))=\varepsilon_{\alpha+\beta}(\pm ab)\varepsilon_{2\alpha+\beta}(\pm a^2b).
\end{align*}
}
\item{If $\alpha,\beta$ are positive, simple roots in a root subsystem of $\Phi$ isomorphic to $G_2$ with $\alpha$ short and $\beta$ long, then
\begin{align*}
&(\varepsilon_{\beta}(b),\varepsilon_{\alpha}(a))=\varepsilon_{\alpha+\beta}(\pm ab)\varepsilon_{2\alpha+\beta}(\pm a^2b)\varepsilon_{3\alpha+\beta}(\pm a^3b)
\varepsilon_{3\alpha+2\beta}(\pm a^3b^2),\\
&(\varepsilon_{\alpha+\beta}(b),\varepsilon_{\alpha}(a))=
\varepsilon_{2\alpha+\beta}(\pm 2ab)\varepsilon_{3\alpha+\beta}(\pm 3a^2b)\varepsilon_{3\alpha+2\beta}(\pm 3ab^2),\\
&(\varepsilon_{2\alpha+\beta}(b),\varepsilon_{\alpha}(a))=\varepsilon_{3\alpha+\beta}(\pm 3ab),\\
&(\varepsilon_{3\alpha+\beta}(b),\varepsilon_{\beta}(a))=\varepsilon_{3\alpha+2\beta}(\pm ab)\text{ and}\\
&(\varepsilon_{2\alpha+\beta}(b),\varepsilon_{\alpha+\beta}(a))=\varepsilon_{3\alpha+2\beta}(\pm 3ab).
\end{align*}
}
\end{enumerate}
\end{Lemma}
\begin{remark}
Depending on the choice of the Chevalley basis, the signs of the arguments on the right hand side of the above commutator formulas might vary. Further, if the chosen basis is not a Chevalley basis, the arguments on the right hand side might even contain additional coefficients that are not $1$ or $-1.$ These issues are commonly referred to as \textit{pinning}. The sign ambiguity will not be resolved in this paper, as our norms are invariant under taking inverses anyway.
\end{remark}
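As a quick sanity check of part (2) in the familiar matrix picture (a mere illustration, with $\Phi=A_2$ realized inside ${\rm SL}_3(R)$ and the simple root elements taken to be $\varepsilon_{\alpha}(a)=I_3+aE_{12}$ and $\varepsilon_{\beta}(b)=I_3+bE_{23}$): using $E_{12}E_{23}=E_{13}$ and $E_{23}E_{12}=0$, one computes
\begin{equation*}
\left(I_3+aE_{12}\right)\left(I_3+bE_{23}\right)\left(I_3-aE_{12}\right)\left(I_3-bE_{23}\right)=I_3+abE_{13},
\end{equation*}
which is the commutator formula of part (2) with one particular choice of the sign.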
Before continuing, we will define the Weyl group and diagonal elements in $G(\Phi,R)$:
\begin{mydef}
Let $R$ be a commutative ring with $1$ and let $\Phi$ be a root system. Define for $t\in R^*$ and $\phi\in\Phi$ the elements:
\begin{equation*}
w_{\phi}(t):=\varepsilon_{\phi}(t)\varepsilon_{-\phi}(-t^{-1})\varepsilon_{\phi}(t).
\end{equation*}
We will often write $w_{\phi}:=w_{\phi}(1).$ We also define $h_{\phi}(t):=w_{\phi}(t)w_{\phi}(1)^{-1}$ for $t\in R^*$ and $\phi\in\Phi.$
\end{mydef}
\begin{remark}
The Weyl group of $G(\Phi,R)$ is a quotient of the group generated by the $w_{\phi}$, but we do not need it for our study.
\end{remark}
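For orientation (again only an illustration, identifying $\varepsilon_{\phi}(t)$ and $\varepsilon_{-\phi}(t)$ with the elementary $2\times 2$ matrices $\left(\begin{smallmatrix}1 & t\\ 0 & 1\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}1 & 0\\ t & 1\end{smallmatrix}\right)$ in the rank-one subgroup they generate), a direct computation gives
\begin{equation*}
w_{\phi}(t)=\begin{pmatrix} 0 & t\\ -t^{-1} & 0\end{pmatrix}
\quad\text{and}\quad
h_{\phi}(t)=w_{\phi}(t)w_{\phi}(1)^{-1}=\begin{pmatrix} t & 0\\ 0 & t^{-1}\end{pmatrix},
\end{equation*}
so the elements $w_{\phi}(t)$ behave like monomial matrices and the elements $h_{\phi}(t)$ like diagonal ones.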
Using these Weyl group elements, we can obtain the following lemma:
\begin{Lemma}
Let $R$ be a commutative ring with $1$ and $\Phi$ an irreducible root system. Let $\phi,\alpha \in\Phi$ and $x\in R$ be given.
Then for each normally generating set $S$ of $G(\Phi,R)$ one has
\begin{equation*}
\|\varepsilon_{\phi}(x)\|_S=\|\varepsilon_{w_{\alpha}(\phi)}(x)\|_S.
\end{equation*}
Here the element $w_{\alpha}(\phi)$ is defined as $\phi-\langle\phi,\alpha\rangle\alpha.$
\end{Lemma}
\begin{proof}
This is a direct consequence of \cite[Lemma~20(b), Chapter~3, p.~23]{MR3616493}.
\end{proof}
Next, we will define certain congruence subgroups and some other notions that we need later on.
\begin{mydef}
Let $\Phi$ be an irreducible root system and let $R$ be a commutative ring with $1$ in the following.
\begin{enumerate}
\item{For each pair $(J,L)$, where $J$ is an ideal in $R$ and $L$ an additive subgroup of $J$, we define the subgroup $E(J,L)$ of $G(\Phi,R)$ as the group generated by all elements of the form $\varepsilon_{\alpha}(x)$ for $\alpha\in\Phi$ short, $x\in J$ and $\varepsilon_{\beta}(y)$ for $\beta\in\Phi$ long, $y\in L$.}
\item{For each such pair $(J,L)$, we define the subgroup $\bar{E}(J,L)$ as the normal closure of $E(J,L)$ in $E(R)$.}
\item{For each such pair $(J,L)$, we define the subgroup $E^*(J,L)$ as follows:
\begin{equation*}
E^*(J,L):=\{A\in G(\Phi,R)|(A,E(R))\subset\bar{E}(J,L)\}.
\end{equation*}
}
\item{For an ideal $J$ in $R$ the map $\pi_J:G(\Phi,R)\to G(\Phi,R/J)$ is the group homomorphism induced by the quotient map $R\to R/J.$}
\item{For $k\in\mathbb{N}_0,S\subset G(\Phi,R)$ and $\chi\in\Phi$ set $\varepsilon(S,\chi,k):=\{r\in R|\varepsilon_{\chi}(r)\in B_S(k)\}$.}
\end{enumerate}
\end{mydef}
\subsection{Central elements of Chevalley groups and level ideals}
Let $G$ be a complex, simply-connected, semi-simple Lie group with irreducible root system $\Phi$, which is not $B_2$ or $G_2$, and positive, simple roots
$\Pi.$ Then there are representations $\rho_i:G\to GL(V_i)$ for $1\leq i\leq u$ such that for $V:=V_1\oplus\cdots\oplus V_u$ the induced direct sum representation
$\rho:G\to GL(V)$ is faithful. The precise construction is explained in \cite[Chapter~3, p.~29]{MR3616493}. In case $\Phi\neq B_2$ or $G_2$, the group obtained from this representation $\rho$ via the construction in Subsection~\ref{naturality_Chevalley} is what we refer to as the split Chevalley group $G(\Phi,R).$ Setting further $n_i:=\dim_{\mathbb{C}}(V_i)$ for $1\leq i\leq u,$ there is the following description of central elements in $G(\Phi,R)$:
\begin{Lemma}
\label{central_elements}
Let $R$ be a reduced, commutative ring with $1$ and $\Phi$ an irreducible root system, which is not $B_2$ or $G_2$. Further, let $A\in G(\Phi,R)$ commute with the elements of $E(\Phi,R)$. Then there are $t_1,\dots,t_u\in R^*$ such that $A=(t_1 I_{n_1})\oplus\cdots\oplus(t_u I_{n_u})\in GL(R^{n_1+n_2+\dots+n_u}).$
Furthermore, elements of this form are central in $G(\Phi,R)$.
\end{Lemma}
The proof for this lemma is in the Appendix. Presumably this statement holds for general rings $R$, but we were not able to find a reference.
Next, we give the definitions of $G(B_2,R)={\rm Sp}_4(R)$ and $G_2(R)$. While we do not specify the representations $\rho$ used, both
are still instances of our general definition of $G(\Phi,R)$ in Subsection~\ref{naturality_Chevalley}.
\begin{mydef}
Let $R$ be a commutative ring with $1$ and let
\begin{equation*}
Sp_4(R):=\{A\in R^{4\times 4}|A^TJA=J\}
\end{equation*}
be given with
\begin{equation*}
J=
\begin{pmatrix}
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
-1 & 0 & 0 & 0\\
0 & -1 & 0 & 0\\
\end{pmatrix}
\end{equation*}
The root system $B_2$ has four different positive roots, namely $B_2^+=\{\alpha,\beta,\alpha+\beta,2\alpha+\beta\}$ with
$\alpha$ short, $\beta$ long and both simple. The corresponding root elements have (subject to the choice of maximal torus) the following form for $t\in R$:
\begin{align*}
&\varepsilon_{\alpha}(t)=I_4+t(e_{12}-e_{43}),\varepsilon_{\alpha+\beta}(t)=I_4+t(e_{14}+e_{23})\\
&\varepsilon_{\beta}(t)=I_4+te_{24},\varepsilon_{2\alpha+\beta}(t)=I_4+te_{13}
\end{align*}
and $\varepsilon_{\phi}(t)=(\varepsilon_{-\phi}(t))^T$ for negative roots $\phi\in B_2.$
\end{mydef}
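As a quick check that these formulas are consistent with the defining equation $A^TJA=J$ (a routine verification recorded for the reader's convenience), take $A=\varepsilon_{\beta}(t)=I_4+te_{24}$. Using $e_{42}J=e_{44}$, $Je_{24}=-e_{44}$ and $e_{44}e_{24}=0$, we get
\begin{equation*}
A^TJA=\left(I_4+te_{42}\right)J\left(I_4+te_{24}\right)=\left(J+te_{44}\right)\left(I_4+te_{24}\right)=J-te_{44}+te_{44}=J.
\end{equation*}
The remaining root elements are checked in the same way.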
We could specify an explicit matrix description for $G_2$ as well, but this would be rather lengthy; instead we refer to the description in the appendix of
\cite{MR1487611}, which gives $G_2$ as a subgroup scheme of ${\rm GL}_8$.
We will not specify which elements of $G_2\subset{\rm GL}_8$ correspond to which root elements, but we record the positive roots of $G_2.$ They are
\begin{equation*}
G_2^+=\{\alpha,\beta,\alpha+\beta,2\alpha+\beta,3\alpha+\beta,3\alpha+2\beta\}
\end{equation*}
with $\alpha$ short and $\beta$ long and both simple.
Next, we will define various variants of level ideals:
\begin{mydef}
\label{central_elements_def}
Let $R$ be a commutative ring with $1$ and let $A\in G(\Phi,R)$ be given. The \textit{level ideal $l(A)$} is defined as
\begin{enumerate}
\item{in case $\Phi\neq B_2$ or $G_2$ as the ideal in $R$ generated by the elements $a_{i,j}$ for all $1\leq i\neq j\leq n_1+\cdots+n_u$ and the elements
$a_{i,i}-a_{n_1+\cdots+n_w,n_1+\cdots+n_w}$ for all $1\leq i<n_1+\cdots+n_u$ and the smallest $w\in\{1,\dots,u\}$ with $i<n_1+\cdots+n_w.$
}
\item{in case $\Phi=B_2$ as $l(A):=\pre{a_{i,j},(a_{i,i}-a_{j,j})}{1\leq i\neq j\leq 4}.$}
\item{in case $\Phi=G_2$ as $l(A):=\pre{a_{i,j},(a_{i,i}-a_{j,j})}{1\leq i\neq j\leq 8}.$}
\end{enumerate}
Furthermore, define the following ideals: If $\Phi=B_2$ define
\begin{equation*}
l(A)_2:=\pre{a_{i,j}^2,(a_{i,i}-a_{j,j})^2}{1\leq i\neq j\leq 4}
\end{equation*}
and if $\Phi=G_2$ define
\begin{equation*}
l(A)_3:=\pre{a_{i,j}^3,(a_{i,i}-a_{j,j})^3}{1\leq i\neq j\leq 8}.
\end{equation*}
\end{mydef}
\begin{remark}
\begin{enumerate}
\item{In case $\Phi=B_2$ or $G_2$, note $l(A)\subset\sqrt{l(A)_2}$ or $l(A)\subset\sqrt{l(A)_3}$, respectively.}
\item{The important point in the following discussion is that all of these ideals are finitely generated.}
\end{enumerate}
\end{remark}
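To illustrate these definitions in a simple case (not needed later): for $\Phi=B_2$ and the root element $A=\varepsilon_{\beta}(t)=I_4+te_{24}$, the only non-zero off-diagonal entry is $a_{2,4}=t$ and all diagonal entries equal $1$, so
\begin{equation*}
l(A)=(t)\qquad\text{and}\qquad l(A)_2=(t^2),
\end{equation*}
and in particular $l(A)\subset\sqrt{l(A)_2}$, as noted in the remark above.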
\section{Fundamental propositions and the proof of Theorem~\ref{exceptional Chevalley}}
\label{proof_main}
Recall the (following equivalent version of the) main theorem:
\begin{Theorem}
\label{exceptional Chevalley}
Let $\Phi$ be an irreducible root system of rank at least $2$ and let $R$ be a commutative ring with $1$. Additionally, let $G(\Phi,R)$ be boundedly generated by root elements; if $\Phi=B_2$ or $G_2$, we further assume $(R:2R)<\infty.$
Then there is a constant $C(\Phi,R)\in\mathbb{N}$ such that
\begin{equation*}
\Delta_k(G(\Phi,R))\leq C(\Phi,R)k
\end{equation*}
for all $k\in\mathbb{N}.$
\end{Theorem}
The main technical tool to prove the theorem is the following:
\begin{Theorem}
\label{fundamental_reduction}
Let $\Phi$ be an irreducible root system of rank at least $2$ and let $R$ be a commutative ring with $1$.
Then there is a constant $L(\Phi)\in\mathbb{N}$ (depending only on $\Phi$) such that for all $A\in G(\Phi,R)$ the following hold:
\begin{enumerate}
\item{for $\Phi\neq B_2,G_2$, there is an ideal $I(A)\subset\varepsilon(A,\chi,L(\Phi))$ for $\chi$ a short root.
This ideal has the property $l(A)\subset\sqrt{I(A)}.$}
\item{for $\Phi=B_2$ one has $2l(A)_2\subset\varepsilon(A,\phi,L(\Phi))$ for $\phi\in B_2$ arbitrary.}
\item{for $\Phi=G_2$ one has $l(A)_3\subset\varepsilon(A,\chi,L(\Phi))$ for $\chi=3\alpha+\beta.$}
\end{enumerate}
\end{Theorem}
We further need the two following technical observations. First:
\begin{Lemma}
\label{necessary_cond_conj_gen}
Let $\Phi$ be an irreducible root system of rank at least $2$ and $R$ a commutative ring with $1$ and $G:=G(\Phi,R)$ the corresponding split Chevalley group.
Further let $S$ be a normally generating set of $G.$ Then $\sum_{A\in S}l(A)=R.$ Also if we define for $T\subset G$ the set
\begin{equation*}
\Pi(T):=\{ m\text{ proper maximal ideal of $R$}|\ \forall A\in T:l(A)\subset m\}
\end{equation*}
then $\Pi(S)=\emptyset$ is equivalent to $\sum_{A\in S}l(A)=R.$
\end{Lemma}
\begin{proof}
Observe that for $I:=\sum_{A\in S}l(A)$, we have that $\pi_I(A)$ is scalar for all $A\in S$ if $\Phi=B_2$ or $G_2$ and has the form described in
Lemma~\ref{central_elements} if $\Phi\neq B_2$ or $G_2$. Next, assume there is a proper maximal ideal
$m$ containing $I$. As $S$ normally generates $G$, this implies that $\pi_{m}$ maps $G$ only to diagonal matrices. But $m\neq R$ holds, so we can pick an element $\lambda\notin m$ and then $\varepsilon_{\phi}(\lambda+ m)$ would be diagonal for all $\phi\in\Phi$ and so $\lambda\in m$. This contradiction proves $I=R.$ Lastly the equivalence of $\Pi(S)=\emptyset$ and $\sum_{A\in S}l(A)=R$ is clear.
\end{proof}
And second:
\begin{Lemma}
\label{congruence_fin}
Let $R$ be a commutative ring with $1$ such that $(R:2R)<\infty$ and such that $G:=G(\Phi,R)$ is boundedly generated by root elements for $\Phi=B_2$ or $G_2$. Further define
\begin{equation*}
N:=\langle\langle\varepsilon_{\phi}(a)|a\in 2R,\phi\in\Phi\rangle\rangle.
\end{equation*}
Then the group $G/N$ is finite.
\end{Lemma}
\begin{proof}
We are done if $N$ has finite index in $G.$ The ideal $2R$ has finite index in $R$,
so let $X\subset R$ be a finite set of representatives of $2R$ in $R$. The group $G$ is boundedly generated by root elements and so there is an $n:=n(R)$ and roots
$\alpha_1,\dots,\alpha_n\in\Phi$ such that for all $A\in G$ there are $r_1,\dots,r_n$ with
\begin{equation}
A=\prod_{i=1}^n\varepsilon_{\alpha_i}(r_i).
\end{equation}
Next, choose for each $i$ an element $a_i\in R$ and an $x_i\in X$ such that $r_i=2a_i+x_i.$ Note:
\begin{equation*}
A=\prod_{i=1}^n\varepsilon_{\alpha_i}(r_i)=
\varepsilon_{\alpha_1}(2 a_1)\left[\prod_{i=2}^n\varepsilon_{\alpha_i}(2 a_i)^{(\varepsilon_{\alpha_1}(x_1)\cdots\varepsilon_{\alpha_{i-1}}(x_{i-1}))}\right]\cdot
\left[\prod_{i=1}^n\varepsilon_{\alpha_i}(x_i)\right]
\end{equation*}
Yet the first two factors of $A$ are elements of $N$ and there are only finitely many possibilities for the third factor, so the statement of the lemma follows.
\end{proof}
We deal with the three different possibilities for $\Phi$ separately.
\subsection{The higher-rank case and $A_2$}
\begin{Proposition}
\label{mult_bound}
Let $\Phi$ be any irreducible root system that is not $G_2, B_2$ or $A_1$, $R$ a commutative ring with $1$ and let $S$ be a finite subset of $G:=G(\Phi,R)$ with
$\Pi(S)=\emptyset$, and let $L(\Phi)$ be three times the constant $L(\Phi)$ from Theorem~\ref{fundamental_reduction}.
Then we have for all $a\in R$ that $\|\varepsilon_{\phi}(a)\|_S\leq\card{S}L(\Phi),$ where $\phi$ is any root in $\Phi$.
\end{Proposition}
\begin{proof}
Let $S=\{A_1,\dots,A_n\}$ be given and let $I(A_l)$ be the ideal from Theorem~\ref{fundamental_reduction} for all $l=1,\dots,n.$ Next, consider
the ideal $I:=I(A_1)+\cdots+I(A_n).$ As $I(A_l)\subset~\varepsilon(A_l,\phi,L(\Phi))$ holds for all $l$ and all short roots $\phi$ it is immediately clear that
$\|\varepsilon_{\phi}(a)\|_S\leq\card{S}L(\Phi)$ holds for all $a\in I.$ Thus it suffices to show that $I=R.$ The radical $\sqrt{I}$ contains the ideal
$l(A_1)+\cdots+l(A_n)$, which is $R$ by assumption. Hence $I=R$ holds.
This proves the claim of the proposition for short roots. If there are long roots in $\Phi$, then each long root $\phi$ is conjugate to a positive, simple long root in a root subsystem of $\Phi$ isomorphic to $B_2.$ Let $\psi$ be the corresponding short, positive, simple root in this root subsystem. Further according to the short root case, we know
$\|\varepsilon_{\psi}(a)\|_S\leq\card{S}L(\Phi)$ for all $a\in R$ already. So we obtain
$\|\varepsilon_{\psi}(1)\|_S,\|\varepsilon_{\psi+\phi}(a)\|_S\leq\card{S}L(\Phi)$ for all $a\in R$ and hence as
\begin{equation*}
(\varepsilon_{\psi}(1),\varepsilon_{\phi}(a))=\varepsilon_{\psi+\phi}(\pm a)\varepsilon_{2\psi+\phi}(\pm a),
\end{equation*}
we obtain $\|\varepsilon_{2\psi+\phi}(a)\|_S\leq 3|S|L(\Phi)$ for all $a\in R$. The root $2\psi+\phi$ is long and so we are done.
\end{proof}
We finish this case of Theorem~\ref{exceptional Chevalley}: Lemma~\ref{necessary_cond_conj_gen} implies $\Pi(S)=\emptyset$, so by Proposition~\ref{mult_bound} all root groups in $G(\Phi,R)$ are bounded with respect to $\|\cdot\|_S$ with a bound linear in $|S|.$ Moreover, $G(\Phi,R)$ is boundedly generated by root elements and hence we are done.
\subsection{The case of ${\rm Sp}_4$}
\begin{Proposition}
\label{mult_bound_B2}
Let $R$ be a commutative ring with $1$ and let $S\subset {\rm Sp}_4(R)$ be a finite set with $\Pi(S)=\emptyset.$ Let $L(B_2)$ be as given in
Theorem~\ref{fundamental_reduction}. Then we have for all $a\in 2R$ and for all $\phi\in B_2$ that $\|\varepsilon_{\phi}(a)\|_S\leq\card{S}L(B_2)$.
\end{Proposition}
\begin{proof}
Let $S=\{A_1,\dots,A_k\}$ be given and let $2l(A_l)_2$ be the ideal constructed in Theorem~\ref{fundamental_reduction} for all $l=1,\dots,k.$ Consider
the ideal $I:=l(A_1)_2+\cdots+l(A_k)_2.$ As $2l(A_l)_2\subset\varepsilon(A_l,\phi,L(B_2))$ holds for all $l$ and all $\phi\in B_2$, it is immediately clear that
$\|\varepsilon_{\phi}(2a)\|_S\leq\card{S}L(B_2)$ holds for all $a\in I.$ Thus it suffices to show that $I=R,$ which is clear because $R=\sum_{A\in S}l(A)$ holds by assumption and by construction of $I$ we have $\sum_{A\in S}l(A)\subset\sqrt{I}$.
\end{proof}
To finish the proof of the theorem, we prove next:
\begin{Proposition}
\label{non-simply-laced-reduction}
Let $R$ be a commutative ring with $1$ such that $(R:2R)<\infty$ and let ${\rm Sp}_4(R)$ be boundedly generated by root elements.
Also let $S$ be a finite subset of ${\rm Sp}_4(R)$ with $\Pi(S)=\emptyset$ and the property that $S$ maps to a normally generating subset of ${\rm Sp}_4(R)/N$ for $N$ as in Lemma~\ref{congruence_fin}, and let $F\subset R$ be finite. Then there is a constant $M(B_2,F,R)$ such that $\|\varepsilon_{\phi}(f)\|_S\leq M(B_2,F,R)\card{S}$ for all $f\in F$ and all $\phi\in B_2.$ In particular, this holds if $F$ is a finite set of representatives of $2R$ in $R$.
\end{Proposition}
\begin{proof}
Without loss of generality $F$ only contains a single element $f$. Let $\phi\in B_2$ be arbitrary and note that the group $G/N$ is finite.
Hence there are only finitely many possible normally generating sets of $G/N.$ Call this set of normally generating sets $E(G/N)$. Next, we define a finite set of subsets of $G$ that map to elements of $E(G/N).$
By bounded generation there are roots $\alpha_1,\dots,\alpha_n$ such that each element $A$ of $G$ can be written as
\begin{equation*}
A=\prod_{i=1}^n\varepsilon_{\alpha_i}(r_i)
\end{equation*}
for particular elements $r_1,\dots,r_n\in R$ depending on $A$. The ring $R/2R$ is finite by assumption; let $X$ be a finite set of representatives of $2R$ in $R$. Then consider the set $X'$ of elements of the form
\begin{equation*}
A=\prod_{i=1}^n\varepsilon_{\alpha_i}(x_i)
\end{equation*}
with all $x_i\in X$. Note that $X'$ is finite and hence the set $E(G):=\{T\subset X'|\pi(T)\in E(G/N)\}$, where $\pi:G\to G/N$ is the canonical map, is also finite.
The group $G/N$ is finite and so there is an $M:=M(B_2,R)\in\mathbb{N}$ such that for all $T\in E(G)$ we can find elements
$t_1,\dots,t_M\in T\cup T^{-1}\cup\{1\},g_1,\dots,g_M\in G$ (all of them depending on $T$) with
\begin{equation}
\label{quotient_eq}
\pi(\varepsilon_{\phi}(f))=\pi\left(\prod_{i=1}^M g_it_ig_i^{-1}\right).
\end{equation}
Fix such a choice of elements $t_i,g_i$ for each one of the finitely many elements $T\in E(G)$ and call
the corresponding element $\prod_{i=1}^M g_it_ig_i^{-1}=:e(T).$ The set $E(G)$ is finite and hence the set
\begin{equation*}
\{\varepsilon_{\phi}(f)e(T)^{-1}|T\in E(G)\}\subset N
\end{equation*}
is finite as well. Next, we prove two claims: First, we show that $S$ only differs by some small terms (with respect to $\|\cdot\|_S$) from an element in $E(G).$
Secondly, we demonstrate how to obtain the proposition by using the fact that there is a finite number of possible error terms
$\{\varepsilon_{\phi}(f)e(T)^{-1}|T\in E(G)\}$.
\begin{Claim}
Every $A\in S$ differs from an element $A'\in X'$ by an element of $\|\cdot\|_S$-norm at most $\card{S}nL(B_2)$, and the resulting set $S':=\{A'|A\in S\}$ lies in $E(G)$.
\end{Claim}
Let $A$ be an element of $S.$ Then, as in the proof of Lemma~\ref{congruence_fin}, we can pick elements $x_i\in X$ and $a_i\in R$ such that
\begin{equation*}
A=\varepsilon_{\alpha_1}(2 a_1)\left[\prod_{i=2}^n\varepsilon_{\alpha_i}(2 a_i)^{(\varepsilon_{\alpha_1}(x_1)\cdots\varepsilon_{\alpha_{i-1}}(x_{i-1}))}\right]\cdot
\left[\prod_{i=1}^n\varepsilon_{\alpha_i}(x_i)\right].
\end{equation*}
Set $A':=\prod_{i=1}^n\varepsilon_{\alpha_i}(x_i)$ and observe
\begin{align*}
\|AA'^{-1}\|_S&\leq\|\varepsilon_{\alpha_1}(2a_1)\|_S+\sum_{i=2}^n\|(\varepsilon_{\alpha_1}(x_1)\cdots\varepsilon_{\alpha_{i-1}}(x_{i-1}))\varepsilon_{\alpha_i}(2a_i)(\varepsilon_{\alpha_1}(x_1)\cdots\varepsilon_{\alpha_{i-1}}(x_{i-1}))^{-1}\|_S\\
&=\sum_{i=1}^n\|\varepsilon_{\alpha_i}(2a_i)\|_S.
\end{align*}
Yet Proposition~\ref{mult_bound_B2} implies $\|\varepsilon_{\alpha_i}(2a_i)\|_S\leq\card{S}L(B_2)$ for all $i$ and hence we can conclude that
$\|AA'^{-1}\|_S\leq\card{S}nL(B_2)$ and so
\begin{equation}
\label{gen_ineq2}
\|A'\|_S\leq 1+\card{S}nL(B_2).
\end{equation}
Next, $\pi(S)$ normally generates $G/N$ by assumption and $\pi(A)=\pi(A')$ for each $A\in S$; hence $S':=\{A'|A\in S\}$ is an element of $E(G)$.
In the following, we use the abbreviation $L:=L(B_2).$
\begin{Claim}
The proposition follows from the finiteness of the set of error terms $\{\varepsilon_{\phi}(f)e(T)^{-1}|T\in E(G)\}$.
\end{Claim}
Each element of $N$ is a product of conjugates of root elements of the form $\varepsilon_{\phi}(2a)$ for $a\in R$ and $\phi\in B_2$.
Thus there is a maximal number of such factors needed for the elements of the finite subset $\{\varepsilon_{\phi}(f)e(T)^{-1}|T\in E(G)\}$ of $N$. If we call this maximal number of factors $V:=V(B_2,R)$, then we obtain by applying Proposition~\ref{mult_bound_B2} that $\|\varepsilon_{\phi}(f)e(T)^{-1}\|_S\leq VL\card{S}$
holds for all $T\in E(G)$. This implies further
\begin{equation}
\label{gen_ineq}
\|\varepsilon_{\phi}(f)\|_S\leq VL\card{S}+\|e(T)\|_S=VL\card{S}+\|\prod_{i=1}^M g_it_ig_i^{-1}\|_S\leq VL\card{S}+M\max\{\|t\|_S|\ t\in T\}.
\end{equation}
Evaluating (\ref{gen_ineq}) for the particular element $S'\in E(G)$ and applying (\ref{gen_ineq2}) yields
\begin{equation*}
\|\varepsilon_{\phi}(f)\|_S\leq VL\card{S}+M\max\{\|A'\|_S|\ A'\in S'\}\leq VL\card{S}+M(1+\card{S}nL)=(VL+nLM)\card{S}+M.
\end{equation*}
This finishes the proof.
\end{proof}
\begin{remark}
\begin{enumerate}
\item{
There is a second possible proof in a special case using \cite[Theorem]{Gal-Kedra-Trost}. This theorem states that if $R$ is a ring of S-algebraic integers (see Definition~\ref{S-algebraic_numbers_def}) and $\Phi$ is an irreducible root system that is not $A_1$, then every finite index subgroup of $G(\Phi,R)$ is bounded.
The group $N$ has finite index in ${\rm Sp}_4(R)$ and hence by \cite[Theorem]{Gal-Kedra-Trost} it is bounded and normally generated by elements of the form $\varepsilon_{\phi}(2a)$ for $\phi\in B_2$ and $a\in R.$ Using this, one can give a different albeit still very similar proof of the proposition.
A similar argument would yield a generalization of Theorem~\ref{exceptional Chevalley} for finite index subgroups of certain split Chevalley groups, but this is work in progress.}
\item{Using Milnor's, Serre's and Bass' solution for the Congruence subgroup problem \cite[Theorem~3.6, Corollary~12.5]{MR244257} in the case of $R$ a ring of
S-algebraic integers, the normal subgroup $N$ can be identified as the kernel of the reduction homomorphism $\pi_{2R}:{\rm Sp}_4(R)\to{\rm Sp}_4(R/2R)$
and hence $G/N={\rm Sp}_4(R/2R).$}
\end{enumerate}
\end{remark}
Let us finish the proof of Theorem~\ref{exceptional Chevalley} in the case of ${\rm Sp}_4(R).$
First, note that $S\subset{\rm Sp}_4(R)$ being a normally generating set implies both $\Pi(S)=\emptyset$ and that $S$ maps to a normally generating set in ${\rm Sp}_4(R)/N.$
Remember now that ${\rm Sp}_4(R)$ is assumed to be boundedly generated by root elements. Hence to finish the proof of Theorem~\ref{exceptional Chevalley} for
$\Phi=B_2$, we only have to prove that all root groups in ${\rm Sp}_4(R)$ are bounded with respect to $\|\cdot\|_S$ with a bound linear in $|S|.$ Let $\phi\in B_2$ be arbitrary. We know already by Proposition~\ref{mult_bound_B2} that the group $\{\varepsilon_{\phi}(2a)|a\in R\}$ is bounded (with respect to $\|\cdot\|_S$) with a bound
linear in $\card{S}$. Furthermore, by Proposition~\ref{non-simply-laced-reduction}, we also know that for a finite set of representatives
$X$ of $2R$ in $R$ the set $\{\varepsilon_{\phi}(x)|x\in X\}$ is bounded with a bound that is linear in $|S|$.
Next, for each $a\in R$ there is an $x\in X$ and $b\in R$ such that $a=2b+x$, and hence the entire root group $\{\varepsilon_{\phi}(a)|a\in R\}$ is bounded with a bound that is linear in $\card{S}.$
\subsection{The case of $G_2$}
First, we give the version of Proposition~\ref{mult_bound} for $G_2.$
\begin{Proposition}
\label{G2_mult_bound}
Let $R$ be a commutative ring with $1$ and let $S$ be a finite subset of $G_2(R)$ with $\Pi(S)=\emptyset$ and let $L(G_2)$ be $16$ times the constant $L(G_2)$ from Theorem~\ref{fundamental_reduction}. Then for all $a\in R:$
\begin{enumerate}
\item{$\|\varepsilon_{\phi}(2a)\|_S\leq L(G_2)\card{S}$ holds for all $\phi\in G_2$ short.}
\item{$\|\varepsilon_{\phi}(a)\|_S\leq L(G_2)\card{S}$ holds for all $\phi\in G_2$ long.}
\end{enumerate}
\end{Proposition}
\begin{proof}
Note that by Theorem~\ref{fundamental_reduction} there is a constant $L(G_2)$ such that for the ideal $I:=\sum_{A\in S}l(A)_3$, one has
$I\subset\varepsilon(S,\chi,L(G_2)\card{S}).$ As before $\Pi(S)=\emptyset$ implies $I=R.$ This yields the claim of the proposition for long roots. To get the claim for short roots use part (1b) of Proposition~\ref{G2_ideals} and replace $L(G_2)$ by $16L(G_2).$
\end{proof}
Next, the analogue of Proposition~\ref{non-simply-laced-reduction}:
\begin{Proposition}
\label{non-simply-laced-reduction_G2}
Let $R$ be a commutative ring with $1$ such that $(R:2R)<\infty$ and let $G_2(R)$ be boundedly generated by root elements.
Also let $S$ be a finite subset of $G_2(R)$ with $\Pi(S)=\emptyset$ and the property that $S$ maps to a normally generating subset of $G_2(R)/N$ for $N$ as in
Lemma~\ref{congruence_fin}, and let $F\subset R$ be finite. Then there is a constant $M(G_2,F,R)$ such that $\|\varepsilon_{\phi}(f)\|_S\leq M(G_2,F,R)\card{S}$ for all
$f\in F$ and all $\phi\in G_2.$ In particular, this holds if $F$ is a finite set of representatives of $2R$ in $R$.
\end{Proposition}
The proof is essentially the same as that of Proposition~\ref{non-simply-laced-reduction}, so we omit it. Completing the proof of
Theorem~\ref{exceptional Chevalley} is also very similar to the case of ${\rm Sp}_4(R)$. The only difference is that we only have to show the boundedness of the root groups for the short roots, because for long roots it already follows from Proposition~\ref{G2_mult_bound}.
Thus, save for the proof of Theorem~\ref{fundamental_reduction}, we have proven Theorem~\ref{exceptional Chevalley}.
We also want to note the following corollary of the proof:
\begin{Corollary}
\label{sufficient_cond_gen_set}
Let $R$ be a commutative ring with $1$, $\Phi$ irreducible and of rank at least $2$ and assume $G(\Phi,R)=E(\Phi,R)$. Then a subset $S$ of $G$ normally generates $G$ precisely if
\begin{enumerate}
\item{one has $\Pi(S)=\emptyset$ in case $\Phi\neq B_2,G_2$}
\item{one has $\Pi(S)=\emptyset$ and $S$ maps to a normally generating set of $G/N$ for $N$ as in Lemma~\ref{congruence_fin} in case $\Phi=B_2$ or $G_2.$}
\end{enumerate}
\end{Corollary}
\begin{remark}
\begin{enumerate}
\item{
The case $\Phi\neq B_2, G_2$ is a consequence of a result by Abe \cite[Theorem~1,2,3,4]{MR991973}.}
\item{In case $\Phi=B_2$ or $G_2,$ the crucial point is that $\Pi(S)=\emptyset$ implies $N\subset\langle\langle S\rangle\rangle$. Hence it is obvious that if $S$ maps to a normally generating set of $G/N$ for $G=G_2(R)$ or ${\rm Sp}_4(R)$, then $S$ must normally generate $G.$ This is why we do not need the assumption $|R/2R|<+\infty$ here.}
\end{enumerate}
\end{remark}
The difference between ${\rm Sp}_4, G_2$ and the other cases is not merely an artifact of our proof strategy; this can be seen by studying the differences regarding normal generation between these cases in more depth in Section~\ref{section_lower_bounds}.
\section{Proof of Theorem~\ref{fundamental_reduction}}
\label{proof_fundamental_prop}
The main idea is that the claims of Theorem~\ref{fundamental_reduction} are first order statements, which allows us to combine results about normal subgroups of split Chevalley groups with G\"odel's compactness theorem. We again distinguish the three different cases of possible root systems $\Phi$.
\subsection{Level ideals for higher rank split Chevalley groups and ${\rm SL}_3$}\label{Centralization_higher}
This is the largest case. The main tool in this case is the following theorem by Abe.
\begin{Theorem}\cite[Theorem~1,2,3,4]{MR991973}
\label{Abe}
Let $\Phi$ be an irreducible root system that is not $A_1,B_2,G_2$ and let $R$ be a commutative ring with $1$. Then for each subgroup
$H\subset G(\Phi,R)$ normalized by the group $E(\Phi,R)$, there is an ideal $J\subset R$ and an additive subgroup $L$ of $J$
such that $\bar{E}(J,L)\subset H\subset E^*(J,L).$
\end{Theorem}
\begin{remark}
\begin{enumerate}
\item{
The paper \cite{MR843808} by Vaserstein deals with the simply laced case and with the multiply laced case under some assumptions. The papers by Abe and Suzuki \cite{MR439947} and by Abe \cite{MR258837} deal with local rings.}
\item{Theorem~\ref{Abe} is enough to prove strong boundedness of $G(\Phi,R)$ for commutative rings with $1$ and $\Phi\neq A_1,B_2,G_2$ with $G(\Phi,R)$ boundedly generated by root elements. However, this would not yield any linear bounds on $\Delta_k$ and is very similar to our argument, so we do not give more details.}
\end{enumerate}
\end{remark}
Next, we need the following lemma about root elements:
\begin{Lemma}
\label{A2_parts}
Let $\Phi$ be an irreducible root system that is not $A_1,B_2$ or $G_2$, let $R$ be a commutative ring with $1$ and let $A\in G(\Phi,R)$ be given, and assume that $\lambda\in\varepsilon(A,\chi,N)$ for some $N\in\mathbb{N}$ and $\chi$ a short root. Then
\begin{equation*}
\lambda R\subset\varepsilon(A,\chi,8N)
\end{equation*}
holds.
\end{Lemma}
\begin{proof}
First note that $\lambda\in\varepsilon(A,\chi,N)$ is equivalent to $\varepsilon_{\chi}(\lambda)\in B_A(N)$. We distinguish two cases:
\begin{enumerate}
\item
{$\Phi\neq B_n$ for $n\geq 3.$
The important fact is that $\chi$ is a short root in $\Phi$ and that all of these root systems contain a root subsystem isomorphic to $A_2$ consisting of short roots. Hence after conjugating with a suitable Weyl group element, we can assume that $\Phi=A_2$ with simple positive roots $\alpha,\beta$ and
$\chi=\alpha+\beta.$ But observe that $w_{\beta}(\alpha)=\chi$ and hence $\varepsilon_{\alpha}(\lambda)\in B_A(N).$
For $x\in R$ arbitrary, we obtain further
\begin{equation*}
\varepsilon_{\chi}(\pm x\lambda)=\left(\varepsilon_{\alpha}(\pm\lambda),\varepsilon_{\beta}(\pm x)\right)\in B_A(2N).
\end{equation*}
}
\item{$\Phi=B_n$ for $n\geq 3.$ After conjugation with Weyl group elements, we may assume that $n=3$ and that we have positive, simple roots $\alpha,\beta,\chi$ with $\alpha,\beta$ long and $\chi$ short, where $\beta$ is double-bonded to $\chi$ in the Dynkin diagram corresponding to the simple roots $\alpha,\beta$ and $\chi.$ However, for $x\in R$ arbitrary
\begin{equation}
\label{Bn_equation}
B_A(2N)\ni(\varepsilon_{\chi}(\lambda),\varepsilon_{\beta}(x))=\varepsilon_{\beta+\chi}(x\lambda)\varepsilon_{\beta+2\chi}(x\lambda^2).
\end{equation}
The root $\beta+\chi$ is short however and so conjugate to $\chi$ under the Weyl group action
and hence we have $\varepsilon_{\beta+\chi}(\lambda)\in B_A(N).$ Thus for $x=1$ we obtain
$\varepsilon_{\beta+2\chi}(\lambda^2)=\varepsilon_{\beta+\chi}(-\lambda)(\varepsilon_{\beta+\chi}(\lambda)\varepsilon_{\beta+2\chi}(\lambda^2))\in B_A(3N).$
The root $\beta+2\chi$ is long and hence $\varepsilon_{\beta+2\chi}(\lambda^2)$ is (up to sign) conjugate to $\varepsilon_{\beta}(\lambda^2)$ and so
$\varepsilon_{\beta}(\lambda^2)\in B_A(3N)$.
Yet $\alpha,\beta$ are simple roots in a root subsystem of $B_3$ isomorphic to $A_2$ and hence we obtain as in the first item that
$\varepsilon_{\beta}(x\lambda^2)\in B_A(6N)$ for all $x\in R.$ Summarizing this with equation (\ref{Bn_equation}) we get $\varepsilon_{\beta+\chi}(x\lambda)\in B_A(8N)$
for all $x\in R.$ Hence after conjugation we are done.
}
\end{enumerate}
\end{proof}
\begin{remark}
This lemma is a more quantitative version of Vaserstein's \cite[Theorem~4(a)]{MR843808}.
\end{remark}
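For the reader's convenience, the $A_2$-commutator identity used in the first case of the proof of Lemma~\ref{A2_parts} can be checked directly in the standard representation of ${\rm SL}_3$; the following computation, with the convention $(g,h)=ghg^{-1}h^{-1}$ and $E_{ij}$ the matrix unit with entry $1$ in position $(i,j)$, is included only as an illustration:
\begin{equation*}
\left(\varepsilon_{\alpha}(\lambda),\varepsilon_{\beta}(x)\right)
=(I_3+\lambda E_{12})(I_3+xE_{23})(I_3-\lambda E_{12})(I_3-xE_{23})
=I_3+\lambda x E_{13}
=\varepsilon_{\alpha+\beta}(\lambda x).
\end{equation*}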
Next, we want to prove the following technical proposition yielding the first part of Theorem~\ref{fundamental_reduction}:
\begin{Proposition}
\label{higher_rank_centralization}
Let $R$ be a commutative ring with $1,\Phi$ an irreducible root system that is not $A_1, B_2$ or $G_2$ and $\chi$ an arbitrary short root in $\Phi$. Then there is a constant $L(\Phi)\in~\mathbb{N}$ (not depending on $R$ or $A$ or $\chi$) such that for all $A\in G(\Phi,R)$ there is an ideal $I(A)$ with $I(A)\subset\varepsilon(A,\chi,L(\Phi))$ and $l(A)\subset\sqrt{I(A)}.$
\end{Proposition}
\begin{proof}
First, choose polynomials $P$ in $\mathbb{Z}[y_{ij}]$ characterizing elements of $G(\Phi,\cdot)$ and $1\leq k,l\leq n_1+\cdots+n_u:=n$ with not both $k,l$ equal to $n$.
Next, let a language $\C L$ with the relation symbols, constants and function symbols
\begin{equation*}
(\C R,0,1,+,\times,(a_{i,j})_{1\leq i,j\leq n},(e(k,l,v))_{v\in\mathbb{N}})
\end{equation*}
and a further function symbol $\cdot^{-1}:\C R^{n\times n}\to\C R^{n\times n}$ be given. Note that in the following we use capital letters to denote matrices of variables (or constants) in the language. For example, the symbol $\C A$ denotes the $n\times n$-matrix of constants $(a_{i,j})$ and $X$ commonly refers to an $n\times n$-matrix of variables in $\C L$. We also use the notation $X^{-1}:={}^{-1}(X)$.
This is only a way to simplify notation, because first order sentences about matrices can always be reduced to first order sentences about their entries.
Let the first order theory $\C T_{kl}$ contain:
\begin{enumerate}
\item{Sentences forcing that the universe $R:=\C R^{\C M}$ of each model $\C M$ of $\C T_{kl}$ is a commutative ring with respect to the functions
$+^{\C M},\times^{\C M}$ and with $0^{\C M},1^{\C M}$ being $0$ and $1$.}
\item{For all $v\in\mathbb{N}$:
If $k\neq l$ the sentence $e(k,l,v)=a_{k,l}^v$ should be included in $\C T_{kl}$. If on the other hand $k=l$, then choose the smallest $w\in\{1,\dots,u\}$ with
$k<n_1+\cdots+n_w$ and include the sentence $e(k,l,v)=(a_{k,k}-a_{n_w,n_w})^v$.}
\item{The sentence $P(\C A)=0$.}
\item{The sentence $\forall X:(P(X)=0)\rightarrow (XX^{-1}=I_n),$ where $I_n$ denotes the unit matrix in $\C R^{n\times n}$ with entries the constant symbols $0,1$ as appropriate.}
\item{A family of sentences $(\theta_r)_{r\in\mathbb{N}}$ as follows:
\begin{align*}
\theta_r:&\bigwedge_{1\leq v\leq r}\forall X_1^{(v)},\dots,X_r^{(v)},\forall e_1^{(v)},\dots,e_r^{(v)}\in\{0,1,-1\}:\\
&((P(X_1^{(v)})=\cdots=P(X_r^{(v)})=0)\rightarrow
(\varepsilon_{\chi}(e(k,l,v))\neq (\C A^{e_1^{(v)}})^{X_1^{(v)}}\cdots(\C A^{e_r^{(v)}})^{X_r^{(v)}}))
\end{align*}
Here $\C A^{1}:=\C A,\C A^{-1}:=\C A^{-1}$ and $\C A^{0}:=I_n.$
}
\end{enumerate}
We first show that the theory $\C T_{kl}$ is inconsistent. To this end, let $\C M$ be a model for the sentences in (1) through (4)
and let $R:=\C R^{\C M}$ be the universe of $\C M.$ The sentences in (1) enforce that $R$ is a commutative ring with $1=1^{\C M}$ and $0=0^{\C M}$ and (3) enforces
that the matrix $A:=(a_{i,j}^{\C M})\in R^{n\times n}$ is an element of the split Chevalley group $G(\Phi,R).$
Let $H$ be the subgroup of $G(\Phi,R)$ normally generated by $A$.
According to Theorem~\ref{Abe} there is a pair $(J,L)$ such that
\begin{equation*}
\bar{E}(J,L)\subset H\subset E^*(J,L).
\end{equation*}
As $L\subset J$ holds, $A\in E^*(J,L)$ implies that $\pi_J(A)$ commutes with $E(R/J)$ and consequently that $\pi_{\sqrt{J}}(A)$ commutes with
$E(R/\sqrt{J}).$ The ring $R/\sqrt{J}$ is reduced and so $\pi_{\sqrt{J}}(A)$ has the form described in Lemma~\ref{central_elements}.
This implies that $l(A)\subset\sqrt{J}.$ Hence as $\bar{E}(J,L)\subset H$, there is a constant $r'\in\mathbb{N}$ such that
$\varepsilon_{\chi}(e(k,l,r')^{\C M})\in B_A(r').$ But this contradicts the statement $\theta_{r'}^{\C M}.$
So summarizing: a model of the sentences in (1) through (4) cannot be a model of all of the sentences $\theta_r$. Hence there is in fact no model of all of the above sentences and so $\C T_{kl}$ is inconsistent. G\"odel's Compactness Theorem \cite{MR2596772} then implies that a certain finite subset
$\C T_{kl}^0\subset\C T_{kl}$ is already inconsistent. Hence there is only a finite collection of the $\theta_r$ contained in
$\C T_{kl}^0.$ So let $L_{kl}(\Phi)\in\mathbb{N}$ be the largest $r\in\mathbb{N}$ with $\theta_r\in\C T_{kl}^0.$
For all $r\in\mathbb{N}$, we have $\{(1)-(4),\theta_{r+1}\}\vdash\theta_r.$ Hence the subset
$\C T_{kl}^1\subset\C T_{kl}$ that contains all sentences in (1) through (4) and the \textit{single} sentence $\theta_{L_{kl}(\Phi)}$, must be inconsistent as well.
Let $R$ be an arbitrary commutative ring with $1$ and let $A\in G(\Phi,R)$ be given. This gives us a model $\C M$ of (1) through (4) and hence as
$\C T_{kl}^1$ is inconsistent, this model must violate the sentence $\theta_{L_{kl}(\Phi)}.$ Thus there are elements
$g_1,\dots,g_{L_{kl}(\Phi)}\in G(\Phi,R)$ and $e_1,\dots,e_{L_{kl}(\Phi)}\in\{0,1,-1\}$ as well as a natural number $v\leq L_{kl}(\Phi)$
such that
\begin{equation*}
\varepsilon_{\chi}(e(k,l,v)^{\C M})=(A^{e_1})^{g_1}\cdots (A^{e_{L_{kl}(\Phi)}})^{g_{L_{kl}(\Phi)}}.
\end{equation*}
Hence we obtain that either a power of $a_{kl}$ (in case $k\neq l$) or a power of $a_{kk}-a_{n_1+\cdots+n_w,n_1+\cdots+n_w}$ (in case $k=l$) is an element of
$\varepsilon(A,\chi,L_{kl}(\Phi)).$ So setting
\begin{equation*}
L(\Phi):=\sum_{\substack{1\leq k,l\leq n\\ \text{not both }k,l=n}} 8L_{kl}(\Phi),
\end{equation*}
we get together with Lemma~\ref{A2_parts} an ideal $I(A)$ in $R$ such that $I(A)\subset\varepsilon(A,\chi,L(\Phi))$ and $l(A)\subset\sqrt{I(A)}$ holds. This ideal $I(A)$ has the desired properties as stated in the proposition for the single root $\chi.$ Note that all short roots are conjugate under elements of the Weyl group and hence we have
$\varepsilon(A,\chi_1,L(\Phi))=\varepsilon(A,\chi_2,L(\Phi))$ for any two short roots $\chi_1,\chi_2$ in $\Phi$, so the conclusion does not depend on the specific short root
$\chi.$
\end{proof}
\begin{remark}
Compare this result with Morris' result \cite[Theorem~6.1(1)]{MR2357719}. In case of $\Phi=A_n$ and $R$ an order in a ring of algebraic integers,
Proposition~\ref{higher_rank_centralization} is a consequence of \cite[Theorem~6.1(1)]{MR2357719} and \cite[Theorem~6.4]{MR2357719} by way of considering the normal subgroup $N:=\langle\langle A\rangle\rangle.$
\end{remark}
\subsection{Level ideals for ${\rm Sp}_4$}
Remember that $B_2$ has the positive roots $\alpha,\beta,\alpha+\beta$ and $\chi=2\alpha+\beta$ with $\alpha$ short and
$\beta$ long and both simple. Again, we invoke a compactness argument. The main ingredient, replacing Theorem~\ref{Abe}, is the following observation due to Costa and Keller:
\begin{Theorem}\cite[Theorem~2.6, 4.2, 5.1, 5.2]{MR1162432}
\label{Keller_B2}
Let $R$ be a commutative ring with $1$. Let $A\in{\rm Sp}_4(R)$ be given. Then for all $x\in l(A)$ one has
$\varepsilon_{\chi}(2x+x^2)\varepsilon_{\alpha+\beta}(x^2)\in\langle\langle A\rangle\rangle_{E(B_2,R)}$, where $\langle\langle A\rangle\rangle_{E(B_2,R)}$ denotes the subgroup of ${\rm Sp}_4(R)$ generated by the
$E(B_2,R)$-conjugates of $A.$
\end{Theorem}
Root elements in ${\rm Sp}_4$ are more complicated than in higher rank groups:
\begin{Lemma}
\label{B2_ideals}
Let $R$ be a commutative ring with $1$ and $S\subset{\rm Sp}_4(R).$ Let $\lambda\in R$ and
$N\in\mathbb{N}$ be given. Then
\begin{enumerate}
\item{
$\varepsilon_{\chi}(2\lambda+\lambda^2)\varepsilon_{\alpha+\beta}(\lambda^2)\in B_S(N)$ implies
$\{\varepsilon_{\chi}(2x\lambda^2)|x\in R\}\subset B_S(2N).$}
\item{$\varepsilon_{\chi}(\lambda)\in B_S(N)$ implies $\varepsilon_{\phi}(\lambda)\in B_S(3N)$ for all $\phi$ short.}
\item{$\varepsilon_{\alpha}(x\lambda)\in B_S(N)$ for all $x\in R$ implies $\varepsilon_{\phi}(x\lambda^2)\in B_S(3N)$ for all $\phi$ long and all $x\in R$.}
\item{$\varepsilon_{\chi}(\lambda)\in B_S(N)$ implies $\{\varepsilon_{\chi}(2x\lambda)|x\in R\}\subset B_S(6N).$}
\item{$\varepsilon_{\chi}(2\lambda+\lambda^2)\varepsilon_{\alpha+\beta}(\lambda^2)\in B_S(N)$ implies
$\{\varepsilon_{\phi}(2x\lambda^2)|x\in R,\phi\in B_2\}\subset B_S(6N)$.}
\end{enumerate}
All of the above implications stay true, if the balls $B_S$ are replaced by a normal subgroup of ${\rm Sp}_4(R).$
\end{Lemma}
\begin{proof}
For the first part inspect the commutator $(\varepsilon_{\alpha}(x),\varepsilon_{\chi}(2\lambda+\lambda^2)\varepsilon_{\alpha+\beta}(\lambda^2))$
for $x\in R$ arbitrary.
For the second part, note that $\varepsilon_{\chi}(\lambda)$ is conjugate to $\varepsilon_{\beta}(\lambda)$ and so
$\varepsilon_{\beta}(\lambda)\in~B_S(N).$ Note further
\begin{equation*}
B_S(2N)\ni(\varepsilon_{\beta}(\lambda),\varepsilon_{\alpha}(1))=\varepsilon_{\alpha+\beta}(\pm\lambda)\varepsilon_{\chi}(\pm\lambda).
\end{equation*}
These two facts imply $\varepsilon_{\alpha+\beta}(\lambda)\in B_S(3N).$ The element $\varepsilon_{\alpha+\beta}(\lambda)$ is conjugate to $\varepsilon_{\phi}(\lambda)$
for every short root $\phi\in B_2$. This proves the second part and the third part follows by considering for $x\in R$ the commutator
\begin{equation*}
B_S(2N)\ni(\varepsilon_{\beta}(x),\varepsilon_{\alpha}(\lambda))=\varepsilon_{\alpha+\beta}(\pm x\lambda)\varepsilon_{\chi}(\pm x\lambda^2).
\end{equation*}
and noting $\varepsilon_{\alpha+\beta}(\pm x\lambda)\in B_S(N).$
For the fourth part, note that we have by the second part, that $\varepsilon_{\alpha}(\lambda)\in B_S(3N).$ Next inspect for $x\in R$ the commutator:
\begin{equation*}
B_S(6N)\ni(\varepsilon_{\alpha}(\lambda),\varepsilon_{\alpha+\beta}(x))=\varepsilon_{\chi}(2x\lambda).
\end{equation*}
This proves the fourth part. The last part follows from part (1) and (2).
\end{proof}
With this lemma, the second case of Theorem~\ref{fundamental_reduction} follows:
\begin{Proposition}
\label{B2_centralization}
Let $R$ be a commutative ring with $1$ and let $A\in{\rm Sp}_4(R)$ be given. Then there is a constant $L(B_2)$ (not depending on $A$ or $R$) such that
$2l(A)_2\subset\varepsilon(A,\phi,L(B_2))$ for $\phi\in B_2$ arbitrary.
\end{Proposition}
\begin{proof}
The proof is very similar to the one of Proposition~\ref{higher_rank_centralization}.
First, let natural numbers $k,l$ be given with $1\leq k,l\leq 4$. Also if $k=l,$ then we assume that $k=l<4.$
The language $\C L$ and the theory $\C T_{kl}$ are defined the same way as in Proposition~\ref{higher_rank_centralization} except for three differences:
First, we include a single constant symbol $e(k,l)$ instead of the constant symbols $e(k,l,v)$. Secondly, (2) has the form
\begin{equation*}
e(k,l)=\left\{\begin{array}{lr}
a_{kl}, & \text{if } k\neq l \\
a_{kk}-a_{k+1,k+1}, & \text{if not}
\end{array}\right.
\end{equation*}
Most importantly, (5) is a family of sentences $(\theta_r)_{r\in\mathbb{N}}$ such that
\begin{align*}
\theta_r:&\forall X_1,\dots,X_r,\forall e_1,\dots, e_r\in\{0,1,-1\}:((P(X_1)=\cdots=P(X_r)=0)\rightarrow\\
&(\varepsilon_{\chi}(2e(k,l)+e(k,l)^2)\varepsilon_{\alpha+\beta}(e(k,l)^2)\neq
(\C A^{e_1})^{X_1}\cdots(\C A^{e_r})^{X_r}))
\end{align*}
Invoking Theorem~\ref{Keller_B2} instead of Theorem~\ref{Abe} yields that a model of (1) through (4) cannot be a model of all sentences in (5).
Hence $\C T_{kl}$ is inconsistent. Using G\"odel's compactness, we obtain, as in the proof of Proposition~\ref{higher_rank_centralization}, that
there is an $L_{k,l}(B_2)$ such that the subset $\C T_{kl}^1\subset\C T_{kl}$ that contains all sentences in (1) through (4) and the \textit{single} sentence
$\theta_{L_{k,l}(B_2)}$ is already inconsistent.
Let $R$ be an arbitrary commutative ring with $1$ and let $A\in {\rm Sp}_4(R)$ be given. This gives us a model $\C M$ of the sentences in (1) through (4) and hence, as $\C T_{kl}^1$ is inconsistent, this model must violate the statement $\theta_{L_{k,l}(B_2)}^{\C M}.$ Thus there are elements
$g_1,\dots,g_{L_{k,l}(B_2)}\in{\rm Sp}_4(R)$ and $e_1,\dots e_{L_{k,l}(B_2)}\in\{0,1,-1\}$ such that (abusing the notation slightly)
\begin{equation*}
\varepsilon_{\chi}(2e(k,l)+e(k,l)^2)\varepsilon_{\alpha+\beta}(e(k,l)^2)=(A^{e_1})^{g_1}\cdots (A^{e_{L_{k,l}(B_2)}})^{g_{L_{k,l}(B_2)}}
\end{equation*}
Next, Lemma~\ref{B2_ideals}(5) implies $2(e(k,l)^2)\subset \varepsilon(A,\phi,6L_{k,l}(B_2))$ for all $\phi\in B_2.$
If we sum over all admissible $k,l$, this implies for all $\phi\in B_2$ that
\begin{equation*}
2l(A)_2=\sum_{k,l}(2e(k,l)^2)\subset\varepsilon(A,\phi,\sum_{k,l}6L_{k,l}(B_2)).
\end{equation*}
So defining $L(B_2):=\sum_{k,l}6L_{k,l}(B_2)$, we get the statement.
\end{proof}
\subsection{Level ideals for $G_2$}
Remember that the positive roots in $G_2$ are
$\alpha,\beta,\alpha+\beta,2\alpha+\beta,3\alpha+\beta$ and $3\alpha+2\beta=\chi$ for $\alpha,\beta$ simple, positive roots in $G_2$ with $\alpha$ short, $\beta$ long. Also note that the roots $3\alpha+\beta$ and $\beta$ span a root subsystem of $G_2$ isomorphic to $A_2.$
Next, we are using the following result by Costa and Keller:
\begin{Theorem}\cite[(3.6)~Main Theorem]{MR1487611}
\label{Keller_G2}
Let $R$ be a commutative ring with $1$ and let $H$ be an $E(G_2,R)$-normalized subgroup of $G_2(R).$ Then there is a pair of ideals $J,J'$ in $R$ with
\begin{equation*}
(x^3,3x|x\in J)\subset J'\subset J
\end{equation*}
such that
\begin{equation*}
[E(R),E(J,J')]\subset H\subset G(J,J').
\end{equation*}
We are not defining $G(J,J')$, but note that $H\subset G(J,J')$ implies that $H$ becomes trivial after reducing mod $J$.
\end{Theorem}
This implies:
\begin{Corollary}
\label{simplified_Keller_G2}
Let $R$ be a commutative ring with $1$, $A\in G_2(R)$ and $H$ the smallest subgroup of $G_2(R)$ normalized by $E(G_2,R)$ and containing $A$. Then we have
$\varepsilon_{3\alpha+2\beta}(a^3),\varepsilon_{3\alpha+2\beta}(3a)\in H$ for all $a\in l(A).$
\end{Corollary}
\begin{proof}
This follows directly from Theorem~\ref{Keller_G2}. Note first that $J$ must contain $l(A)$, because $A\in H$ becomes scalar after reducing modulo $J.$ Hence we get $3a,a^3\in J'$ for all $a\in l(A).$
Lastly, $\{\varepsilon_{\beta}(b)|\ b\in J'\}\subset H$ holds, because $\beta$ is a root in the long $A_2$ in $G_2$, and $\beta$ is conjugate to $3\alpha+2\beta$ under the Weyl group, whose elements lie in $E(G_2,R).$
\end{proof}
Next, note the following:
\begin{Proposition}
\label{G2_ideals}
Let $R$ be a commutative ring with $1$ and let $S\subset G_2(R)$ be given. Then
\begin{enumerate}
\item{
if for $N\in\mathbb{N},\lambda\in R$ one has $\varepsilon_{\chi}(\lambda)\in B_S(N)$, then
\begin{enumerate}
\item{$\{\varepsilon_{\phi}(x\lambda)|x\in R\}\subset B_S(2N)$ for $\phi$ long and}
\item{$\{\varepsilon_{\phi}(2x\lambda)|x\in R\}\subset B_S(16N)$ for $\phi$ short holds.}
\end{enumerate}}
\item{if $\varepsilon_{\alpha}(\lambda)\in B_S(N)$, then $\{\varepsilon_{\chi}(x\lambda^3)|x\in R\}\subset B_S(4N)$.}
\end{enumerate}
The implications are still true, if the balls $B_S$ are replaced by a normal subgroup of $G_2(R).$
\end{Proposition}
\begin{proof}
Part (1a) can be obtained by arguing as in the $A_2$-case. For part (1b) inspect the following commutator formula for all $x\in R:$
\begin{equation*}
\varepsilon_{\alpha+\beta}(\pm x\lambda)\varepsilon_{2\alpha+\beta}(\pm x^2\lambda)\varepsilon_{3\alpha+\beta}(\pm x^3\lambda)
\varepsilon_{3\alpha+2\beta}(\pm x^3\lambda^2)
=(\varepsilon_{\beta}(\lambda),\varepsilon_{\alpha}(x))\in B_S(2N)
\end{equation*}
Note that $\varepsilon_{3\alpha+\beta}(x^3\lambda),\varepsilon_{3\alpha+2\beta}(x^3\lambda^2)\in B_S(2N)$ by part (1a) and hence
\begin{equation*}
\varepsilon_{\alpha+\beta}(x\lambda)\varepsilon_{2\alpha+\beta}(x^2\lambda)\in B_S(3\cdot 2N)=B_S(6N).
\end{equation*}
Taking the commutator of this product with $\varepsilon_{\alpha}(1)$ yields (up to conjugation)
\begin{equation*}
\varepsilon_{2\alpha+\beta}(2x\lambda)\varepsilon_{3\alpha+\beta}(3x^2\lambda+6x\lambda)\varepsilon_{3\alpha+2\beta}(3x^2\lambda^2)\in B_S(12N).
\end{equation*}
Yet we have again $\varepsilon_{3\alpha+\beta}(3x^2\lambda+6x\lambda),\varepsilon_{3\alpha+2\beta}(3x^2\lambda^2)\in B_S(2N)$ and hence
$\varepsilon_{2\alpha+\beta}(2x\lambda)\in B_S(16N).$
Lastly, for part (2) inspect first the commutator
\begin{equation*}
B_S(2N)\ni(\varepsilon_{\alpha}(\lambda),\varepsilon_{\beta}(x))=
\varepsilon_{\alpha+\beta}(\pm x\lambda)\varepsilon_{2\alpha+\beta}(\pm x\lambda^2)\varepsilon_{3\alpha+\beta}(\pm x\lambda^3)
\varepsilon_{3\alpha+2\beta}(\pm x^2\lambda^2).
\end{equation*}
However, all of the factors besides $\varepsilon_{3\alpha+\beta}(x\lambda^3)$ in this product commute with $\varepsilon_{\beta}(1).$
Thus taking the commutator with $\varepsilon_{\beta}(1)$, we obtain the claim after conjugation.
\end{proof}
With this in hand, the last part of Theorem~\ref{fundamental_reduction} follows:
\begin{Proposition}
\label{G2_centralization}
Let $R$ be a commutative ring with $1$ and let $A\in G_2(R)$ be given. Then there is a constant $L(G_2)$ (not depending on $A$ or $R$) such that
$l(A)_3\subset\varepsilon(A,\chi,L(G_2)).$
\end{Proposition}
\begin{proof}
The proof is very similar to the ones in the previous subsections. Let natural numbers $k,l$ be given with
$1\leq k,l\leq 8$. Also if $k=l,$ we further assume that $k=l<8.$
Aside from the places where $k,l$ have to range between $1$ and $8$, the language $\C L$ and the theory $\C T_{kl}$ are defined the same way as in Proposition~\ref{B2_centralization}, except that (5) has the form:
A family of sentences $(\theta_r)_{r\in\mathbb{N}}$ such that
\begin{equation*}
\theta_r:\forall X_1,\dots,X_r,\forall e_1,\dots, e_r\in\{0,1,-1\}:((P(X_1)=\cdots=P(X_r)=0)\rightarrow
(\varepsilon_{\beta}(e(k,l)^3)\neq
(\C A^{e_1})^{X_1}\cdots(\C A^{e_r})^{X_r}))
\end{equation*}
Invoking Corollary~\ref{simplified_Keller_G2}, one obtains that a model of (1) through (4) cannot be a model of all sentences in (5).
Hence $\C T_{kl}$ is inconsistent. As before, we can by invoking compactness find an $L_{k,l}(G_2)\in\mathbb{N}$ such that the subset
$\C T_{kl}^{1}\subset\C T_{kl}$ that contains all sentences in (1) through (4) and the \textit{single} sentence $\theta_{L_{k,l}(G_2)}$, is already inconsistent.
Next, let $R$ be an arbitrary commutative ring with $1$ and let $A\in G_2(R)$ be given. This gives us a model $\C M$ of the sentences in (1) through (4) and hence, as $\C T_{kl}^1$ is inconsistent, this model must violate the statement $\theta_{L_{k,l}(G_2)}^{\C M}.$ Thus there are elements
$g_1,\dots,g_{L_{k,l}(G_2)}\in G_2(R)$ and $e_1,\dots e_{L_{k,l}(G_2)}\in\{0,1,-1\}$ such that (abusing the notation slightly)
\begin{equation*}
\varepsilon_{\beta}(e(k,l)^3)=(A^{e_1})^{g_1}\cdots (A^{e_{L_{k,l}(G_2)}})^{g_{L_{k,l}(G_2)}}
\end{equation*}
Proposition~\ref{G2_ideals}(1a) implies $(e(k,l)^3)\subset\varepsilon(A,\chi,2L_{k,l}(G_2)).$
Summing further over all admissible $k,l$ implies
\begin{equation*}
\sum_{k,l}(e(k,l)^3)\subset\varepsilon(A,\chi,\sum_{k,l}2L_{k,l}(G_2)).
\end{equation*}
Define next $L(G_2):=\sum_{k,l}2L_{k,l}(G_2)$ and we are done.
\end{proof}
\begin{remark}
In this paper, we restrict ourselves to the simply-connected type of split Chevalley groups. However, a careful rereading of the proofs reveals that we only used the fact that we have a description of $G(\Phi,R)$ as a matrix group and consequently an explicit description of the level ideal. Similar descriptions exist for a lot of other types of split Chevalley groups and consequently
statements similar to Theorem~\ref{exceptional Chevalley} can be obtained for them.
\end{remark}
\section{Applications, corollaries and variants of Theorem~\ref{exceptional Chevalley}}
\label{Corollaries}
Theorem~\ref{exceptional Chevalley} naturally raises the question of which rings $R$ fulfill its main assumption,
that is, bounded generation of $G(\Phi,R)$ by root elements. We will mostly deal with rings of stable range $1$ and rings of S-algebraic integers for this purpose.
\subsection{Stable range $1$, semilocal rings and uniform boundedness}
A useful tool in this context is the following (slightly reformulated) observation due to Tavgen:
\begin{Proposition}\cite[Proposition~1]{MR1044049}
\label{bootstrapping}
Let $\Phi$ be a root system and $R$ a commutative ring with $1$ such that there are $m:=m(R)$ and $N(R)\in\mathbb{N}$ with the property that
each irreducible root subsystem $\Phi_0$ of $\Phi$ generated by simple roots of $\Phi$ with rank $m$ satisfies
\begin{equation*}
\|E(\Phi_0,R)\|_{EL}\leq N(R){\rm rank}(\Phi_0).
\end{equation*}
Then $\|E(\Phi,R)\|_{EL}\leq N(R){\rm rank}(\Phi).$ Here $\|\cdot\|_{EL}$ denotes the word norm on $E(\Phi,R)$ with respect to the generating set given by root elements.
\end{Proposition}
This is a useful proposition, because it allows one to obtain bounded generation results for higher ranks from such results for low-rank root systems,
at least if $G(\Phi,R)=E(\Phi,R)$ holds. In particular, we have the following:
\begin{Corollary}
\label{rank_1_boundedness}
Let $R$ be a commutative ring with $1$ such that ${\rm SL}_2(R)=G(A_1,R)$ is boundedly generated by root elements. Then for all irreducible root systems $\Phi$ the elementary Chevalley group $E(\Phi,R)$ is boundedly generated by root elements.
\end{Corollary}
\begin{proof}
This follows from Proposition~\ref{bootstrapping} in the case $m(R)=1$ for $N(R)$ determined by the bounded generation of ${\rm SL}_2(R),$
because for each simple root $\alpha\in\Phi$ the subgroup $E(\{\alpha,-\alpha\},R)$ of $E(\Phi,R)$ is either isomorphic to ${\rm SL}_2(R)$ or a quotient of it.
\end{proof}
Next, we define stable range:
\begin{mydef}
The \textit{stable range} of a commutative ring $R$ with $1$ is the smallest $n\in\mathbb{N}$ with the following property:
If any $v_0,\dots,v_m\in R$ generate the unit ideal $R$ for $m\geq n$, then there are $t_1,\dots,t_m$ such that the elements
$v_1':=v_1+t_1v_0,\dots,v_m':=v_m+t_mv_0$ also generate the unit ideal. If no such $n$ exists, $R$ has stable range $+\infty.$
If the ring $R$ does not have stable range $1$, but for each $a\in R-\{0\}$
the ring $R/aR$ does, then $R$ is said to have stable range $3/2.$
\end{mydef}
\begin{remark}
Having stable range at most $m$ for a fixed $m\in\mathbb{N}$, or having stable range at most $3/2$, is a first order property.
\end{remark}
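For orientation, we record a standard example, included only as an illustration: the ring $\mathbb{Z}$ does not have stable range $1$, since $v_0=5$ and $v_1=3$ generate the unit ideal, but
\begin{equation*}
3+5t\notin\{1,-1\}\quad\text{for all }t\in\mathbb{Z},
\end{equation*}
so this pair cannot be shortened to a single generator of the unit ideal. On the other hand, $\mathbb{Z}/a\mathbb{Z}$ is finite and hence semilocal for every $a\neq 0$, so $\mathbb{Z}$ has stable range $3/2$ by the lemma on semilocal rings below.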
Note the following result:
\begin{Proposition}\cite[Lemma~9]{MR961333}
\label{Vaserstein_decomposition}
Let $R$ be a commutative ring with $1$ of stable range $1$ and $\Phi$ a root system. Then ${\rm SL}_2(R)$ is boundedly generated by root elements and
$\|{\rm SL}_2(R)\|_{EL}\leq 4.$
\end{Proposition}
This proposition together with Corollary~\ref{rank_1_boundedness} yields that $E(\Phi,R)$ is boundedly generated by root elements for all irreducible root
systems $\Phi$ and, using Tavgen's original version of Proposition~\ref{bootstrapping}, that each element of $E(\Phi,R)$ can be written as a product of at most four upper and lower unitriangular elements. This was first observed by Vavilov, Smolenski and Sury in \cite[Theorem~1]{MR2822515}.
\begin{Proposition}\label{root_generation_semilocal_rings}
\cite[Corollary~2.4]{MR439947}
Let $R$ be a semi-local ring. Then for all irreducible root systems $\Phi$ of rank greater than one, the group $G(\Phi,R)$ is generated by root elements.
\end{Proposition}
Also each semilocal ring has stable range $1$:
\begin{Lemma}\cite[Lemma~6.4, Corollary~6.5]{MR0174604}
Every semilocal ring, that is, each ring with only finitely many maximal ideals, has stable range $1.$ In particular, each field has stable range $1$.
\end{Lemma}
So for $R$ a semilocal ring the group $G(\Phi,R)$ is boundedly generated by root elements and hence Theorem~\ref{exceptional Chevalley} can be applied to $G(\Phi,R)$. In fact, in this case more is true than Theorem~\ref{exceptional Chevalley} provides:
\begin{Theorem}
\label{semilocal_uniform}
Let $R$ be a commutative, semilocal ring with $1$ and let $\Phi$ be an irreducible root system. Furthermore, if $\Phi=B_2$ or $G_2$, assume that
$(R:2R)<\infty$ holds. Then $G(\Phi,R)$ is uniformly bounded.
\end{Theorem}
\begin{proof}
The strategy is to find a constant $K\in\mathbb{N}$ such that each finite normally generating subset $S$ of $G:=G(\Phi,R)$ has a subset
$\bar{S}$ with $|\bar{S}|\leq K$ such that $\bar{S}$ is also a normally generating subset of $G(\Phi,R).$ Then
Theorem~\ref{exceptional Chevalley} yields:
\begin{equation*}
\|G(\Phi,R)\|_S\leq\|G(\Phi,R)\|_{\bar{S}}\leq C(\Phi,R)|\bar{S}|\leq C(\Phi,R)K.
\end{equation*}
and so uniform boundedness for $G(\Phi,R)$ holds.
Assume $R$ has precisely $m$ maximal ideals. Let $S$ be a finite set of normal generators of $G(\Phi,R).$ Lemma~\ref{necessary_cond_conj_gen} implies
$\Pi(S)=\emptyset.$ Observe that for all $T_1,T_2\subset G(\Phi,R)$, we have $\Pi(T_1\cup T_2)=\Pi(T_1)\cap\Pi(T_2).$
This implies that if there are only $m$ maximal ideals in $R,$ then already some subset $S'$ of $S$
with $\card{S'}\leq m+1$ has the property $\bigcap_{A\in S'}\Pi(A)=\emptyset.$ Hence in case $\Phi\neq B_2$ or $G_2$, Corollary~\ref{sufficient_cond_gen_set}(1)
tells us that $S'$ is already a normally generating subset of $G(\Phi,R)$. This finishes the case $\Phi\neq B_2, G_2.$
Next, we do the case $\Phi=B_2$ or $G_2.$ We have $(R:2R)<\infty$ by assumption and hence Lemma~\ref{congruence_fin} implies for
$N:=\langle\langle\varepsilon_{\phi}(2a)|a\in R,\phi\in \Phi\rangle\rangle$, that the group $G/N$ is finite. The set $S$ normally generates the group $G$ and hence the image of $S$ in
$G/N$ normally generates $G/N$ and so we can pick a subset $S''\subset S$ with at most $M:=\card{G/N}$ elements such that the image of $S''$ in $G/N$ normally generates $G/N.$ Hence considering the set $\bar{S}:=S'\cup S''$ we have
\begin{equation*}
|\bar{S}|\leq|S'|+|S''|\leq m+1+M
\end{equation*}
and the upper bound $m+1+M$ clearly does not depend on $S.$ Corollary~\ref{sufficient_cond_gen_set}(2) implies that $\bar{S}$ is a normally generating set of $G(\Phi,R)$. Thus we are done.
\end{proof}
\begin{remark}
The above theorem applies in various cases, for example to local rings, the $p$-adic integers or other discrete valuation domains.
\end{remark}
We also obtain the following:
\begin{Theorem}
\label{stable_range1_strong_boundedness}
Let $R$ be a commutative ring with $1$ of stable range $1$ and $\Phi$ an irreducible root system of rank at least $2$; if $\Phi=B_2$ or $G_2$, assume further that $(R:2R)<\infty.$
Then for the elementary subgroup $E(\Phi,R)$ of $G(\Phi,R)$, there is a constant $C(\Phi,R)$ such that
\begin{equation*}
\Delta_k(E(\Phi,R))\leq C(\Phi,R)k
\end{equation*}
for all $k\in\mathbb{N}$.
\end{Theorem}
\begin{proof}
We want to show a version of Theorem~\ref{exceptional Chevalley} that speaks about $E(\Phi,R)$ instead of $G(\Phi,R).$ This can be done by following the same arguments. The only difference in the proofs of Proposition~\ref{higher_rank_centralization}, Proposition~\ref{B2_centralization} and Proposition~\ref{G2_centralization} is that in the sentences of the theories $\C T_{kl}$, one can no longer quantify over the full Chevalley group $G(\Phi,R)$, but must quantify over all elements of $E(\Phi,R)$, despite the fact that this group cannot be defined in first order terms. However, stable range $1$ is a first order property of a ring, and we know that $E(\Phi,R)$ is boundedly generated by root elements for such rings. Thus, by including a collection of sentences describing that
$R$ has stable range $1$, we can modify the $\C T_{kl}$ so that they quantify over all elements of $E(\Phi,R)$ by quantifying over appropriate finite products of root elements, and then the rest of the argument goes through.
\end{proof}
\subsection{Rings of S-algebraic integers}
First, we are going to define S-algebraic integers.
\begin{mydef}\cite[Chapter~I, ยง11]{MR1697859}\label{S-algebraic_numbers_def}
Let $K$ be a finite field extension of $\mathbb{Q}$. Then let $S$ be a finite subset of the set $V$ of all valuations of $K$ such that $S$ contains all archimedean valuations. Then the ring $\C O_S$ is defined as
\begin{equation*}
\C O_S:=\{a\in K|\ \forall v\in V-S: v(a)\geq 0\}.
\end{equation*}
\end{mydef}
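For instance, for $K=\mathbb{Q}$ and $S$ consisting only of the archimedean valuation one recovers $\C O_S=\mathbb{Z}$, while additionally including the $p$-adic valuation for a single prime $p$ gives
\begin{equation*}
S=\{v_{\infty},v_p\}\quad\Longrightarrow\quad\C O_S=\mathbb{Z}[1/p].
\end{equation*}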
Rings of S-algebraic integers do not have stable range $1$. For a remarkably large class of them (the ones with infinitely many units) the corresponding ${\rm SL}_2$ is still boundedly generated by root elements \cite[Theorem~1.1]{MR3892969}. This will be more relevant in our upcoming paper
\cite{Hessenbergform_explicit_strong_bound}, when we talk about explicit bounds. More important for us is the following classical result:
\begin{Theorem}\cite[Theorem~A]{MR1044049}
\label{Tavgen}
Let $\Phi$ be an irreducible root system of rank at least $2$ and $R$ a ring of S-algebraic integers in a number field. Then $G(\Phi,R)$ has bounded generation with respect to root elements.
\end{Theorem}
Furthermore, all non-zero ideals $I$ in a ring $R$ of S-algebraic integers have finite index. So, rings $R$ of S-algebraic integers in number fields have the property that $G(\Phi,R)$ is boundedly generated by root elements for all irreducible $\Phi$ of rank at least $2$, and the ideal $2R$ (like all other non-zero ideals) has finite index in $R.$ Hence Theorem~\ref{exceptional Chevalley} can be applied to the groups $G(\Phi,R)$. This gives us the following theorem:
\begin{Theorem}
\label{alg_numbers_strong_bound}
Let $R$ be a ring of S-algebraic integers in a number field and $\Phi$ an irreducible root system of rank at least $2$.
Then there is a constant $C(\Phi,R)\geq 1$ such that for $G(\Phi,R)$ one has
\begin{equation*}
\Delta_k(G(\Phi,R))\leq C(\Phi,R)k
\end{equation*}
for all $k\in\mathbb{N}.$
\end{Theorem}
\begin{remark}
Note that in contrast to the result by K\k{e}dra, Libman and Martin \cite[Theorem~6.1]{KLM}
we do not have any control on the behaviour of $C(\Phi,R)$, beyond the fact that it does not depend on $k.$ However, in the paper \cite{Hessenbergform_explicit_strong_bound} we will remedy this by providing explicit values for $C(\Phi,R)$ in case the ring is a principal ideal domain.
\end{remark}
We provide lower bounds on $\Delta_k(G(\Phi,R))$ in Section~\ref{section_lower_bounds}.
Next, we are going to talk about orders in rings of algebraic integers and Morris' results in \cite{MR2357719}, and how to use them to get strong boundedness results. We do not define orders precisely, but they are subrings of rings of algebraic integers that are also full-rank sublattices of the same ring of algebraic integers.
First, there are the following results by Morris that are very similar to our results.
\begin{Theorem}\cite[Theorem~6.1, Remark~6.2, Corollary~6.13]{MR2357719}
Let $B$ be an order in a ring of algebraic integers and $S$ a multiplicative set in $B-\{0\}$. Further assume either that $n\geq 3$
or that $S^{-1}B$ has infinitely many units. Also let $X$ be a subset of $G:={\rm SL}_n(S^{-1}B)$ that is normalized by root elements and that does not consist entirely of scalar matrices. Then $X$ boundedly generates a finite index subgroup $N$ of ${\rm SL}_n(S^{-1}B)$ with a bound on the maximal length of a word in elements of $X$ that depends on $n,$ the degree $[K:\mathbb{Q}],$ the minimal number of generators of the level ideal $l(N)$ and the cardinality of $S^{-1}B/l(N).$ If
$X:=\{gsg^{-1}|s\in S,g\in {\rm SL}_n(S^{-1}B)\}$ for a finite set $S$, the minimal number of generators of $l(N)$ is smaller than $n^2\card{S}.$
Similarly if $\Gamma$ is a finite index subgroup of ${\rm SL}_n(S^{-1}B)$ and $X\subset\Gamma$ a set that is normalized by $\Gamma$ and does not consist entirely of scalar matrices, then $X$ boundedly generates a finite index subgroup $N$ of $\Gamma$ with a bound that depends on the same numbers as above.
\end{Theorem}
\begin{remark}
In the terminology of our paper this establishes that finite index, finitely normally generated, non-central subgroups $N$ of ${\rm SL}_n(S^{-1}B)$ are strongly bounded
(or $\Delta_k(N)<\infty$ for all $k$). The main difference (in the case that a finite $S$ normally generates ${\rm SL}_n$) to our result is that Morris has no control on the actual value of $\Delta_k({\rm SL}_n)$, whereas we can establish that the dependence on $k$ is linear.
Structurally, the main reason for this difference seems to be that Morris applies a first order compactness result to the entirety of a \textit{set of generators} to establish bounded generation, but does not have any control on any particular one of the generators individually. We, on the other hand, apply a compactness result
to a particular given element of the group $G(\Phi,R)$ to obtain root elements with arguments lying in its level ideal, and only later consider the full generating set to obtain the missing elements in case of $B_2$ and $G_2$. Our methods are able to prove a stronger version of \cite[Theorem~6.1, Remark~6.2, Corollary~6.13]{MR2357719} as well, but this is work in progress.
\end{remark}
Morris \cite[Theorem~5.26]{MR2357719} proves bounded generation by root elements for the subgroup $E(A_1,R)$ of ${\rm SL}_2(R)$, even in the case that $R$ is only a localization of an order, if said localization has infinitely many units. He demonstrates further that the elementary subgroup $E(A_2,R)$ of ${\rm SL}_3(R)$ is boundedly generated by root elements for $R$ a localization of an order
\cite[Corollary~3.13]{MR2357719}. Most importantly, however, in both cases the bounded generation result follows by proving that localizations of orders satisfy certain
first order properties, which Morris calls ${\rm Gen}(t,r)$ and ${\rm Exp}(t,l)$ in case of $E(A_2,R)$ and additionally ${\rm Unit}(1,x)$ in case of $E(A_1,R)$, as well as, in both cases, stable range $3/2.$ Morris then shows that these properties imply bounded generation by root elements.
Hence adding these first order properties in the proofs of Proposition~\ref{higher_rank_centralization}, Proposition~\ref{B2_centralization} and Proposition~\ref{G2_centralization} and applying Corollary~\ref{rank_1_boundedness} in case $\Phi$ is not simply-laced, one can prove the following:
\begin{Proposition}
Let $R$ be a localization of an order in a ring of algebraic integers and $\Phi$ an irreducible root system of rank at least $2$.
Assume further that $R$ has infinitely many units in case $\Phi$ is not simply-laced. Then there is a constant $C(\Phi,R)$ such that
\begin{equation*}
\Delta_k(E(\Phi,R))\leq C(\Phi,R)k
\end{equation*}
holds for all $k\in\mathbb{N}$.
\end{Proposition}
Lastly, it should be possible to prove bounded generation results and hence strong boundedness also in the case of rings of functions of
algebraic curves over finite fields. The picture seems to be less clear in this area, however, as the only result about this we could find was \cite{Nica},
stating bounded generation of ${\rm SL}_n(\mathbb{F}[T])$ for $\mathbb{F}$ a finite field and $n\geq 3.$
\section{Lower bounds on $\Delta_k$}
\label{section_lower_bounds}
In this section, we talk about lower bounds on $\Delta_k$. The dichotomy between $G_2,B_2$ and the other $\Phi$ persists here. Namely, for $\Phi=B_2$ or $G_2$ the lower bounds depend strongly on the ring $R.$ First the higher rank cases:
\begin{Proposition}\cite[Theorem~6.1]{KLM}
\label{Lower_bounds_higher_rank}
Let $\Phi$ be an irreducible root system of rank at least $2$ and let $R$ be a Dedekind domain with finite class number and infinitely many maximal ideals such that
$G(\Phi,R)$ is boundedly generated by root elements. Further assume that $2$ is a unit in $R$ if $\Phi=B_2$ or $G_2.$
Then $\Delta_k(G(\Phi,R))\geq k$ for all $k\in\mathbb{N}.$
\end{Proposition}
\begin{proof}
Let $k$ distinct maximal ideals $\C P_1,\dots,\C P_k$ be given and let $c$ be the class number of $R.$ All the ideals $\C P_i^c$ are principal, so choose a generator $t_i$
of $\C P_i^c$ and set for all $i$
\begin{equation*}
r_i:=\prod_{1\leq j\neq i\leq k}t_j.
\end{equation*}
Fix a short root $\phi\in\Phi$ and consider the elements $A_i:=\varepsilon_{\phi}(r_i)$ and the set $S:=\{A_1,\dots,A_k\}.$ Then
$\Pi(A_i)=\bigcup_{j\neq i}\{\C P_j\}$ holds and thus $\Pi(S)=\emptyset.$ Hence if $\Phi\neq B_2, G_2$, then Corollary~\ref{sufficient_cond_gen_set}(1) implies that $S$ is a normally generating set of $G(\Phi,R).$ If on the other hand $\Phi=B_2$ or $G_2$ holds, then the assumption on $2$ implies
$R=2R$ and so the condition in Corollary~\ref{sufficient_cond_gen_set}(2) reduces to $\Pi(S)=\emptyset$ as well.
To finish the proof, assume for contradiction that $\|\varepsilon_{\phi}(1)\|_S\leq k-1.$ Then there are elements $g_1,\dots,g_{k-1}\in G(\Phi,R)$
and $s_1,\dots,s_{k-1}\in S\cup S^{-1}\cup\{1\}$ such that
\begin{equation*}
\varepsilon_{\phi}(1)=\prod_{1\leq i\leq k-1}s_i^{g_i}.
\end{equation*}
However, $\Pi(s_i^{g_i})=\Pi(s_i)$ contains at least $k-1$ elements of $\{\C P_1,\dots,\C P_k\}$ and hence $\bigcap_{1\leq i\leq k-1}\Pi(s_i^{g_i})$ cannot possibly
be empty. This implies
\begin{equation*}
\emptyset\neq\bigcap_{1\leq i\leq k-1}\Pi(s_i^{g_i})\subset\Pi(\varepsilon_{\phi}(1))=\emptyset.
\end{equation*}
This contradiction yields $\|\varepsilon_{\phi}(1)\|_S\geq k$. Since $\card{S}=k$, this proves that
\begin{equation*}
\Delta_k(G(\Phi,R))\geq\|G(\Phi,R)\|_S\geq~\|\varepsilon_{\phi}(1)\|_S\geq k.
\end{equation*}
\end{proof}
\begin{remark}
This is a generalization of \cite[Theorem~6.1]{KLM}, yet the proof is essentially the same.
\end{remark}
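To illustrate the construction in the proof in the smallest possible case (this instance is recorded only as an illustration), take $R=\mathbb{Z}$, $\Phi=A_2$ and $k=2$; the hypotheses hold for $\mathbb{Z}$, for example by Theorem~\ref{Tavgen} below. With the maximal ideals $\C P_1=(2)$, $\C P_2=(3)$ and class number $c=1$ we may take $t_1=2$, $t_2=3$, hence $r_1=3$ and $r_2=2$, and for a fixed root $\phi$ the construction yields the set
\begin{equation*}
S=\{\varepsilon_{\phi}(3),\varepsilon_{\phi}(2)\}.
\end{equation*}
Exactly as in the proof, $\Pi(S)=\emptyset$, so $S$ normally generates ${\rm SL}_3(\mathbb{Z})=G(A_2,\mathbb{Z})$ (note that ${\rm SL}_3(\mathbb{Z})=E(A_2,\mathbb{Z})$), while $\|\varepsilon_{\phi}(1)\|_S\geq 2$, and therefore $\Delta_2({\rm SL}_3(\mathbb{Z}))\geq 2.$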
Next, we are going to describe lower bounds on $\Delta_k({\rm Sp}_4(R))$ and $\Delta_k(G_2(R))$ in the general case. It turns out that in this case the (existence of) lower bounds depends strongly on the way $2$ splits into primes in the ring $R.$
\begin{Theorem}
\label{lower_bounds_rank2}
Let $\Phi$ be $B_2$ or $G_2$ and let $R$ be a ring of S-algebraic integers in a number field with $R\neq 2R$.
Further let
\begin{equation*}
r:=r(R):=|\{\C P|\ \C P\text{ is a prime ideal dividing }2R\text{ and }R/\C P=\mathbb{F}_2\}|
\end{equation*}
be given. Then for $G(\Phi,R)$
\begin{enumerate}
\item{the inequality $\Delta_k(G(\Phi,R))\geq k$ holds for all $k\in\mathbb{N}$ with $k\geq r(R)$ and}
\item{the equality $\Delta_k(G(\Phi,R))=-\infty$ holds for $k<r(R).$}
\end{enumerate}
\end{Theorem}
We show both parts of the theorem separately. For the first part, the main difficulty, compared to Proposition~\ref{Lower_bounds_higher_rank},
comes from the more complex conditions a set $S$ has to fulfill to be a normal generating set. To address this, we need the following technical proposition describing algebraic properties of finite quotients of rings of S-algebraic integers.
\begin{Proposition}
\label{quotient_normal_generation}
Let $R$ be a ring of S-algebraic integers and $\C P_1,\dots,\C P_s$ be non-zero prime ideals and $l_1,\dots,l_s\in\mathbb{N}.$
Assume further that at most one of the $\C P_i$ has the property
\begin{equation*}
[R/\C P_i:\mathbb{F}_2]=1
\end{equation*}
and let $\bar{x}\in R/(\C P_1^{l_1}\cdots\C P_s^{l_s})=:\bar{R}$
be a unit. Then $\varepsilon_{\alpha}(\bar{x})$ normally generates ${\rm Sp}_4(\bar{R})$ or $G_2(\bar{R})$ respectively.
\end{Proposition}
\begin{proof}
First, we do the case ${\rm Sp}_4(\bar{R}).$ Let $N$ be the subgroup of ${\rm Sp}_4(\bar{R})$ normally generated by $\varepsilon_{\alpha}(\bar{x}).$
We first prove for $\bar{R}_0:=\{\bar{y}\in\bar{R}|\varepsilon_{\alpha}(\bar{y})\in N\}$ that $\bar{R}_0=\bar{R}.$ This is done in two steps.
First, we prove that $\bar{R}_0$ contains all units of $\bar{R}$ and is closed under addition and then second, that $\bar{R}$ is generated as an additive group by its
units. But this yields the proposition, because $\varepsilon_{\alpha}(\bar{a})\in N$ for all $\bar{a}\in\bar{R}$ implies together with Lemma~\ref{B2_ideals}(3),
that $N$ contains all root elements and by Proposition~\ref{root_generation_semilocal_rings} the group ${\rm Sp}_4(\bar{R})$ is generated by its root elements as
$\bar{R}$ is finite and hence semi-local.
For the first step, according to \cite[Lemma~20(c), Chapter~3, p.~23]{MR3616493}, we have for any unit $\bar{u}\in\bar{R}$ that
\begin{equation*}
N\ni h_{\beta}(\bar{x}\bar{u}^{-1})\varepsilon_{\alpha}(\bar{x})h_{\beta}(\bar{x}\bar{u}^{-1})^{-1}
=\varepsilon_{\alpha}((\bar{x}\bar{u}^{-1})^{\langle\alpha,\beta\rangle}\bar{x})
=\varepsilon_{\alpha}((\bar{x}\bar{u}^{-1})^{-1}\bar{x})=\varepsilon_{\alpha}(\bar{u}).
\end{equation*}
For the second step, observe first that $\bar{R}$ does not have $\mathbb{F}_2\times\mathbb{F}_2$ as a quotient ring. This is the case, because otherwise
the ring $R$ would have two distinct non-zero prime ideals $\C Q_1,\C Q_2$ with $R/\C Q_1=R/\C Q_2=\mathbb{F}_2$ and $\C P_1^{l_1}\cdots\C P_s^{l_s}\subset\C Q_1,\C Q_2.$
But then $\C Q_1$ and $\C Q_2$ are among the $\C P_1,\dots,\C P_s$, which is impossible, because there is at most one $\C P_i$ with $R/\C P_i=\mathbb{F}_2$.
Yet semi-local rings without $\mathbb{F}_2\times\mathbb{F}_2$ as a quotient ring are generated by their units according to \cite[Lemma~2(d)]{MR2161255}
and hence $\bar{R}_0=\bar{R}.$
For the case $G_2(\bar{R})$, note that we obtain $\{\bar{x}|\varepsilon_{\alpha}(\bar{x})\in N\}=\bar{R}$ the same way as in the case of ${\rm Sp}_4(\bar{R}).$
So $N$ contains all root elements for short roots. Lemma~\ref{G2_ideals}(2) yields now that $N$ also contains all of the root elements for long roots.
Hence as $G_2(\bar{R})$ is generated by root elements we are done.
\end{proof}
We can show the first part of the theorem now.
\begin{proof}
Let the ideal $2R$ in $R$ split into primes as follows:
\begin{equation*}
2R=\left(\prod_{i=1}^r\C P_i^{l_i}\right)\cdot\left(\prod_{j=1}^s\C Q_j^{k_j}\right)
\end{equation*}
with $[R/\C P_i:\mathbb{F}_2]=1$ for $1\leq i\leq r$ and $[R/\C Q_j:\mathbb{F}_2]>1$ for $1\leq j\leq s.$ Next, let $c$ be the class number of $R.$
Pick elements $x_1,\dots,x_r\in R$ such that $\C P_i^c=(x_i)$ for all $i.$
Also choose $k-r$ further distinct primes $V_{r+1},\dots,V_k$ in $R$ which do not agree with any of the $\C P_1,\dots,\C P_r,\C Q_1,\dots,\C Q_s.$
Passing to the powers $V_{r+1}^c,\dots,V_k^c$ we can find elements $v_{r+1},\dots,v_k\in R$ with $V_{r+1}^c=(v_{r+1}),\dots,V_k^c=(v_k).$
Further, define the following elements for $1\leq u\leq r(R)=r$
\begin{equation*}
r_u:=\left(\prod_{1\leq i\neq u\leq r}x_i\right)\cdot v_{r+1}\cdots v_k.
\end{equation*}
For $k\geq u\geq r+1$ set
\begin{equation*}
r_u:=x_1\cdots x_r\cdot\left(\prod_{r+1\leq u\neq q\leq k} v_q\right).
\end{equation*}
We consider the set $S:=\{\varepsilon_{\alpha}(r_1),\dots,\varepsilon_{\alpha}(r_k)\}$ in ${\rm Sp}_4(R)$ or $G_2(R).$ Note that $\alpha$ is the short, positive simple root in both cases. For the sake of brevity, we will only write down the case of ${\rm Sp}_4(R).$
\begin{Claim}$S$ is a normal generating set of ${\rm Sp}_4(R).$
\end{Claim}
According to Corollary~\ref{sufficient_cond_gen_set}(2), we have to fulfill two conditions for this claim to hold, first $\Pi(S)=\emptyset$ and second that
$S$ maps to a normally generating subset of ${\rm Sp}_4(R)/N$ for $N:=\langle\langle\varepsilon_{\phi}(2x)|x\in R,\phi\in B_2\rangle\rangle$.
First, note that
\begin{align*}
\Pi(\varepsilon_{\alpha}(r_u))=
\begin{cases}
\{\C P_1,\dots,\hat{\C P_u},\dots,\C P_r,V_{r+1},\dots,V_k\} &\text{ , if }1\leq u\leq r\\
\{\C P_1,\dots,\C P_r, V_{r+1},\dots,\hat{V_u},\dots,V_k\} & \text{ , if } r+1\leq u\leq k,
\end{cases}
\end{align*}
where the hat denotes the omission of the corresponding prime. This implies $\Pi(S)=\emptyset.$
For the second condition note that Milnor's, Serre's and Bass' solution for the Congruence subgroup problem \cite[Theorem~3.6, Corollary~12.5]{MR244257} implies that
\begin{equation*}
N=\ker(\pi_{2R}:{\rm Sp}_4(R)\to {\rm Sp}_4(R/2R)).
\end{equation*}
Hence it suffices to show that under the reduction homomorphism $\pi_{2R}:{\rm Sp}_4(R)\to{\rm Sp}_4(R/2R)$ the set $S$ maps to a normally generating set of
${\rm Sp}_4(R/2R).$ Using the Chinese Remainder Theorem yields:
\begin{equation*}
{\rm Sp}_4(R/2R)=\prod_{i=1}^r {\rm Sp}_4\left(R/(\C P_i^{l_i})\right)\times{\rm Sp}_4\left(R/(\prod_{j=1}^s\C Q_j^{k_j})\right).
\end{equation*}
For $1\leq u\leq k$, it follows further:
\begin{align*}
\pi_{2R}(\varepsilon_{\alpha}(r_u))=\left(\bigtimes_{i=1}^r(\varepsilon_{\alpha}(r_u+\C P_i^{l_i})),\varepsilon_{\alpha}(r_u+\prod_{j=1}^s\C Q_j^{k_j})\right)
\end{align*}
Depending on $u$ these elements look quite different.
First, for $u=1$, the element $r_1$ is divisible by $\C P_i^{l_i}$ for all $i$ except $i=1$. This implies
\begin{equation*}
\pi_{2R}(\varepsilon_{\alpha}(r_1))=
\left(\varepsilon_{\alpha}(r_1+\C P_1^{l_1}),\bigtimes_{i=2}^r(\varepsilon_{\alpha}(0)),\varepsilon_{\alpha}(r_1+\prod_{j=1}^s\C Q_j^{k_j})\right)
=\left(\varepsilon_{\alpha}(r_1+\C P_1^{l_1}),1,\varepsilon_{\alpha}(r_1+\prod_{j=1}^s\C Q_j^{k_j})\right)
\end{equation*}
Phrased differently, it is the element $\varepsilon_{\alpha}(r_1+\C P_1^{l_1}\prod_{j=1}^s\C Q_j^{k_j})$ in the subgroup
${\rm Sp}_4(R/(\C P_1^{l_1}\prod_{j=1}^s\C Q_j^{k_j})).$
Note that $r_1$ is not divisible by any of the $\C Q_j$ nor $\C P_1$ and hence $r_1+\C P_1^{l_1}\prod_{j=1}^s\C Q_j^{k_j}$ is a unit in
$R/(\C P_1^{l_1}\prod_{j=1}^s\C Q_j^{k_j}).$ Thus by Proposition~\ref{quotient_normal_generation} the element
$\varepsilon_{\alpha}(r_1+2R)$ normally generates the subgroup
\begin{equation*}
{\rm Sp}_4\left(R/(\C P_1^{l_1}\prod_{j=1}^s\C Q_j^{k_j})\right)={\rm Sp}_4(R/\C P_1^{l_1})\times {\rm Sp}_4\left(R/(\prod_{j=1}^s\C Q_j^{k_j})\right)
\end{equation*}
of ${\rm Sp}_4(R/2R).$
In the same way, for $2\leq u\leq r$ it follows that the element $\varepsilon_{\alpha}(r_u+2R)$ normally generates the subgroup
${\rm Sp}_4(R/\C P_u^{l_u})\times{\rm Sp}_4(R/(\prod_{j=1}^s\C Q_j^{k_j}))$ of ${\rm Sp}_4(R/2R).$
So already the subset $\{\varepsilon_{\alpha}(r_1),\dots,\varepsilon_{\alpha}(r_r)\}$ of $S$ maps to a normally generating subset of ${\rm Sp}_4(R/2R)$ under $\pi_{2R}.$ This proves the claim.
\begin{Claim}The diameter of $\|\cdot\|_S$ is at least $k.$ As $|S|=k$ this proves the first part of the theorem for ${\rm Sp}_4(R).$
\end{Claim}
Assume for contradiction that ${\rm diam}(\|\cdot\|_S)\leq k-1.$ Then there are elements $s_1,\dots,s_{k-1}\in S\cup S^{-1}\cup\{1\}$ and
$g_1,\dots,g_{k-1}\in{\rm Sp}_4(R)$ with
\begin{equation*}
\varepsilon_{\alpha}(1)=\prod_{1\leq i\leq k-1}s_i^{g_i}.
\end{equation*}
But the set $\Pi(s_i^{g_i})=\Pi(s_i)$ contains at least $k-1$ elements of the set $\{\C P_1,\dots,\C P_r, V_{r+1},\dots,V_k\}$ and hence
$\bigcap_{1\leq i\leq k-1}\Pi(s_i^{g_i})$ is not empty and so $\emptyset\neq\bigcap_{1\leq i\leq k-1}\Pi(s_i^{g_i})\subset\Pi(\varepsilon_{\alpha}(1))=\emptyset.$
This contradiction proves $\|\varepsilon_{\alpha}(1)\|_S\geq k$.
\end{proof}
For the second part of Theorem~\ref{lower_bounds_rank2}, note the following:
\begin{Lemma}
There is an epimorphism ${\rm Sp}_4(\mathbb{F}_2)\to \mathbb{F}_2$ with $\varepsilon_{\phi}(a)\mapsto a$ for all $a\in\mathbb{F}_2$ and $\phi\in B_2.$
Similarly there is an epimorphism $G_2(\mathbb{F}_2)\to\mathbb{F}_2$ with
\begin{align*}
\varepsilon_{\phi}(a)\mapsto
\begin{cases}
a &\text{, if }\phi\in G_2\text{ short}\\
0 &\text{, if }\phi\in G_2\text{ long}
\end{cases}
\end{align*}
\end{Lemma}
\begin{proof}
We only do the case ${\rm Sp}_4(\mathbb{F}_2)$ again. According to \cite[Theorem~8;Chapter~6,p.~43]{MR3616493}, the group ${\rm Sp}_4(\mathbb{F}_2)$ is isomorphic to the group $G$ generated by elements of order $2$ named $\varepsilon_{\alpha}(1)$,$\varepsilon_{\beta}(1)$,
$\varepsilon_{\alpha+\beta}(1)$, $\varepsilon_{2\alpha+\beta}(1)$, $\varepsilon_{-\alpha}(1)$,$\varepsilon_{-\beta}(1)$ and $\varepsilon_{-\alpha-\beta}(1),\varepsilon_{-2\alpha-\beta}(1)$ subject to relations of the form
\begin{align*}
&(\varepsilon_{\phi}(1),\varepsilon_{\psi}(1))=1\text{, if }\phi+\psi\in B_2\text{ and no other sum of positive multiples of }\psi\text{ and }\phi\text{ is a root}\\
&(\varepsilon_{\phi}(1),\varepsilon_{\psi}(1))=1\text{, if }\phi+\psi\notin B_2\text{ and }\phi+\psi\neq 0\\
&(\varepsilon_{\phi}(1),\varepsilon_{\psi}(1))=\varepsilon_{\phi+\psi}(1)\varepsilon_{\tau}(1)\text{, if }\phi+\psi\in B_2
\text{ and }\tau\in\{\phi+2\psi,2\phi+\psi\}\text{ is a root in }B_2.
\end{align*}
The map $\{\varepsilon_{\phi}(a)\mapsto a\}$ sends both sides of these relations to the same element, namely $0$,
and hence we get an epimorphism as required.
\end{proof}
\begin{remark}
\begin{enumerate}
\item{The group ${\rm Sp}_4(\mathbb{F}_2)$ is isomorphic to the permutation group $S_6$ and the epimorphism in the lemma is the
sign homomorphism $S_6\to\mathbb{F}_2.$}
\item{The group $G_2(\mathbb{F}_2)$ has a simple subgroup $U$ with $[G_2(\mathbb{F}_2):U]=2$. The homomorphism is the map $G_2(\mathbb{F}_2)\to G_2(\mathbb{F}_2)/U=\mathbb{F}_2.$
The group $U$ is isomorphic to the twisted group $^2A_2(\mathbb{F}_9).$
}
\end{enumerate}
\end{remark}
Using this, the second part of the theorem follows:
\begin{proof}
We restrict ourselves to the case ${\rm Sp}_4(R)$ again. Let $2R=(\prod_{i=1}^r\C P_i^{l_i})(\prod_{j=1}^s\C Q_j^{k_j})$ be given as in the proof of the first part of the theorem. Using the Chinese Remainder Theorem, we know that the map
\begin{equation*}
{\rm Sp}_4(R)\twoheadrightarrow{\rm Sp}_4(R/2R)=\prod_{i=1}^r{\rm Sp}_4(R/(\C P_i^{l_i}))\times\prod_{j=1}^s{\rm Sp}_4(R/(\C Q_j^{k_j}))
\twoheadrightarrow\prod_{i=1}^r {\rm Sp}_4(R/\C P_i)={\rm Sp}_4(\mathbb{F}_2)^r
\end{equation*}
is an epimorphism. So composing with the epimorphism ${\rm Sp}_4(\mathbb{F}_2)\to\mathbb{F}_2$, we obtain an epimorphism $g:{\rm Sp}_4(R)\to\mathbb{F}_2^r.$
This suffices to prove the second part of the theorem, because a given normally generating set $S$ of ${\rm Sp}_4(R)$ with $|S|\leq r-1$ would map to a generating set of $\mathbb{F}_2^r$ with fewer than $r$ elements. The group $\mathbb{F}_2^r$, however, cannot be generated by fewer than $r$ elements.
\end{proof}
This finishes the proof of Theorem~\ref{lower_bounds_rank2}. We note the following corollary:
\begin{Corollary}
Let $R$ be a ring of S-algebraic integers and $r(R)$ defined as in Theorem~\ref{lower_bounds_rank2}. Then both ${\rm Sp}_4(R)$ and $G_2(R)$ have
abelianization $\mathbb{F}_2^{r(R)}.$
\end{Corollary}
\begin{proof}
We only do the case ${\rm Sp}_4(R).$ Note that $\langle\langle\varepsilon_{\phi}(2x)|x\in R,\phi\in B_2\rangle\rangle\subset ({\rm Sp}_4(R),{\rm Sp}_4(R))$
by Lemma~\ref{B2_ideals}(4) and (2) and further that ${\rm Sp}_4(R)$ is boundedly generated by root elements by Theorem~\ref{Tavgen}. Thus the abelianization $A(R)$ of ${\rm Sp}_4(R)$
is a finitely generated, $2$-torsion group. Let $r':=\dim_{\mathbb{F}_2}(A(R)).$ The proof of Theorem~\ref{lower_bounds_rank2} implies
that $A(R)$ has the quotient $\mathbb{F}_2^r$ and hence $r'\geq r.$ Now on the other hand $r'>r$ is impossible, because it would imply as in the proof of the second part of Theorem~\ref{lower_bounds_rank2} that there are no normal generating sets of ${\rm Sp}_4(R)$ with precisely $r$ elements, which is wrong.
\end{proof}
For rings of quadratic integers it is known how $2$ splits into primes and hence we can give the following complete description of $r(R)$:
\begin{Corollary}
\label{other_quadratic_roots}
Let $D$ be a square-free integer and $R$ the ring of algebraic integers in $\mathbb{Q}[\sqrt{D}].$ Then the value of $r(R)$ is
\begin{enumerate}
\item{$r(R)=1$ precisely if $D\equiv 2,3,5,6,7\text{ mod }8$, so $\Delta_1({\rm Sp}_4(R)),\Delta_1(G_2(R))\neq -\infty.$}
\item{$r(R)=2$ precisely if $D\equiv 1\text{ mod }8$, so $\Delta_1({\rm Sp}_4(R))=\Delta_1(G_2(R))=-\infty$ and $\Delta_2({\rm Sp}_4(R))=\Delta_2(G_2(R))>-\infty.$}
\end{enumerate}
\end{Corollary}
\begin{proof}
We obtain from \cite[Theorem~25]{MR3822326} that the ideal $2R$ decomposes in $R$ as follows:
\begin{enumerate}
\item{$2R$ is inert precisely if $D\equiv 5\text{ mod }8.$}
\item{$2R$ ramifies precisely if $D\equiv 2,3,6,7\text{ mod }8.$}
\item{$2R$ splits precisely if $D\equiv 1\text{ mod }8.$}
\end{enumerate}
In the first two cases, this implies $r(R)=1$ and in the third case $r(R)=2.$
\end{proof}
We finish this section with the following explicit example:
\begin{Corollary}\label{small_generating_sets_Sp4}
Let $R=\mathbb{Z}[\frac{1+\sqrt{-7}}{2}]$ be the ring of algebraic integers in the number field $\mathbb{Q}[\sqrt{-7}].$ Then ${\rm Sp}_4(R)$ and $G_2(R)$ are not generated by a single conjugacy class and so $\Delta_{1}({\rm Sp}_4(R))=\Delta_{1}(G_2(R))=-\infty$.
\end{Corollary}
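This is a direct consequence of the results above, which we spell out for convenience: since $-7\equiv 1\text{ mod }8$, Corollary~\ref{other_quadratic_roots} gives $r(R)=2$, and the second part of Theorem~\ref{lower_bounds_rank2} then shows that ${\rm Sp}_4(R)$ and $G_2(R)$ admit no normally generating set with fewer than two elements. In particular, neither group is the normal closure of a single element, that is, of a single conjugacy class.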
\section*{Closing remarks}
This paper gives rise to a question regarding generalizations of the stated results. It is natural to ask how the results generalize to rings $R$ of integers of global fields of positive characteristic instead of rings of algebraic integers. This poses two issues:
\begin{enumerate}
\item{First, bounded generation of the corresponding Chevalley group $G(\Phi,R).$ There is the result of Nica \cite{Nica} regarding the basic case of $R=\mathbb{F}[T]$ for $\mathbb{F}$ finite, but no general result in this direction is known to us.
However, it should be possible to obtain one by replacing the number theoretic results used in the proofs of \cite{MR2357719} or \cite{MR1044049} by corresponding results in the `number theory of $R$'. It seems likely to us that there might be some issues in case of the critical characteristics $2$ and $3$ for some root systems
$\Phi.$}
\item{Secondly, our argument in the cases $\Phi=B_2, G_2$ relied on $R/2R$ being finite. This is clearly true if $\mathrm{char}(R)\geq 3$, as
$R/2R$ is trivial in this case. If $\mathrm{char}(R)=2$, then this fails.}
\end{enumerate}
Next, one can ask about more general S-arithmetic lattices than the ones we dealt with. Here S-arithmetic lattices mean groups commensurable with $G(\Phi,R).$ There is one straightforward way of generalizing our results to finite index subgroups, namely by analyzing the subnormal structure of the Chevalley groups instead of the normal structure as we did, but this strategy will break down quickly after only a couple of easy examples.
The main issue is that the subnormal group structures are, to the best of our knowledge, badly understood for arbitrary commutative rings at the moment. There seem to be some results in this direction, notably \cite{Hong} and \cite{Vavilov_Subnormal}.
It might also be possible to imitate the strategy of Morris and try to isolate certain first order properties of rings of algebraic integers that facilitate such an approach.
Beyond this, there is also the issue that boundedness properties are badly behaved under passage to finite index supergroups and subgroups. Further, it is not clear to us what algebraic structure plays the role of the level ideals of the corresponding subgroups in this case.
Considering the fact that we are mainly interested in lattices, it seems likely that a more geometric interpretation of our results is the most straightforward path to a generalization.
\section*{Appendix}
\begin{proof}[Proof of Lemma~\ref{central_elements}]
We split the proof into three parts. First we are going to show the statement for fields, then for integral domains and finally for general reduced rings.
So let $K$ be a field and $A=(a_{kl})\in G(\Phi,K)$ be given. For fields one has $G(\Phi,K)=E(\Phi,K)$ by \cite[Corollary~2.4]{MR439947} and hence $A$ is central in
$G(\Phi,K).$ Then by \cite[Lemma~28, Chapter~3,p.~29]{MR3616493} there are $t_1,\dots,t_u\in K-\{0\}$ such that $A=\prod_{i=1}^u h_{\alpha_i}(t_i)$, where
$\{\alpha_1,\dots,\alpha_u\}=\Pi$ are the simple, positive roots in $\Phi.$ Further
\begin{equation}\label{central_elements_eq1}
1=\prod_{i=1}^u t_i^{\langle\phi,\alpha_i\rangle}\text{ for all }\phi\in\Phi.
\end{equation}
Furthermore, for $1\leq j\leq u$ the element $A$ acts on the component $K^{n_j}\subset K^{n_1+n_2+\dots+n_u}$
associated to the highest weight $\lambda_j$ of $V_j$ by multiplication with
\begin{equation}\label{central_elements_eq2}
\prod_{i=1}^u t_i^{\langle\lambda_j,\alpha_i\rangle}=\prod_{i=1}^u t_i^{\delta_{ij}}=t_j.
\end{equation}
Here we use that $\lambda_j$ is chosen as the fundamental weight corresponding to $\alpha_j$, that is $\langle\lambda_j,\alpha_i\rangle=\delta_{ij}$ holds for all $1\leq i,j\leq u.$
Each other weight in $V_j$ has the form $\lambda_j-\sum\phi$, where the $\phi$ are positive roots in $\Phi.$ Then (\ref{central_elements_eq1})
and (\ref{central_elements_eq2}) imply that $A$ acts on $K^{n_j}$ by $t_j I_{n_j}.$ So this yields the claim for fields.
For integral domains $R$, we distinguish two cases:
\begin{case} $R$ is finite.
\end{case}
But finite integral domains are fields and hence we are done.
\begin{case} $R$ is infinite.
\end{case}
Let $\alpha\in\Phi$ be given and observe that for $K$ the algebraic closure of the quotient field of $R$ we have the map
\begin{equation*}
\phi_{\alpha}:\mathbb{G}_a(K)=K\to G(\Phi,K),\lambda\mapsto (A,\varepsilon_{\alpha}(\lambda)).
\end{equation*}
This is a morphism of algebraic varieties, and since $A$ commutes with the elements of $\varepsilon_{\alpha}(R)$ by assumption,
the restriction $\phi_{\alpha}|_R$ is constantly equal to the identity element. But $R$ is Zariski-dense in $\mathbb{G}_a(K)$,
so $\phi_{\alpha}$ is constantly equal to the identity element on all of $\mathbb{G}_a(K)$. Hence $A$ commutes with the entire root subgroup $\{\varepsilon_{\alpha}(\lambda)\mid\lambda\in K\}$ in
$G(\Phi,K).$ However, $G(\Phi,K)$ is generated by the elements $\{\varepsilon_{\alpha}(\lambda)\mid\lambda\in K,\alpha\in\Phi\}$. Hence
$A$ is central in $G(\Phi,K)$, so we are done again.
Lastly, let $R$ be a reduced ring. Further let $\C P$ be a prime ideal in $R.$ So $\pi_{\C P}(A)\in G(\Phi,R/\C P)$ commutes with $E(\Phi,R/\C P)$ and
$R/\C P$ is an integral domain. Thus we obtain
\begin{equation*}
A\equiv(a_{11}I_{n_1})\oplus\cdots\oplus(a_{n_1+\cdots+n_{u-1}+1,n_1+\cdots+n_{u-1}+1}I_{n_u})\text{ mod }\C P
\end{equation*}
for all prime ideals $\C P.$ This implies
\begin{equation*}
A\equiv(a_{11}I_{n_1})\oplus\cdots\oplus(a_{n_1+\cdots+n_{u-1}+1,n_1+\cdots+n_{u-1}+1}I_{n_u})\text{ mod }\bigcap_{\C P\text{ prime in }R}\C P=\sqrt{(0)}.
\end{equation*}
However $R$ is reduced and so $\sqrt{(0)}=(0)$ holds and we are done.
\end{proof}
\end{document}
\begin{document}
\title{A solution to the Al-Salam--Chihara moment problem.}
\author{Wolter Groenevelt}
\address{Delft University of Technology\\
Delft Institute of Applied Mathematics\\
P.O.Box 5031\\
2600 GA Delft\\
The Netherlands}
\email{[email protected]}
\dedicatory{This paper is dedicated to Ben de Pagter on the occasion of his 65th birthday.}
\begin{abstract}
We study the $q$-hypergeometric difference operator $L$ on a particular Hilbert space. In this setting $L$ can be considered as an extension of the Jacobi operator for $q^{-1}$-Al-Salam--Chihara polynomials. Spectral analysis leads to unitarity and an explicit inverse of a $q$-analog of the Jacobi function transform. As a consequence a solution of the Al-Salam--Chihara indeterminate moment problem is obtained.
\end{abstract}
\subjclass{Primary 33D45; Secondary 44A15, 44A60}
\keywords{$q^{-1}$-Al-Salam--Chihara polynomials, little $q$-Jacobi function transform, $q$-hypergeometric difference operator, indeterminate moment problem}
\maketitle
\section{Introduction}
For every moment problem there is a corresponding set of orthogonal polynomials. Through their three-term recurrence relation orthogonal polynomials correspond to a Jacobi operator. In particular, spectral analysis of a self-adjoint Jacobi operator leads to an orthogonality measure for the corresponding orthogonal polynomials, i.e., a solution to the moment problem. Indeterminate moment problems correspond to Jacobi operators that are not essentially self-adjoint, and in this case a self-adjoint extension of the Jacobi operator corresponds to a solution of the moment problem, see e.g.~\cite{S}, \cite{Schm}. Instead of looking for self-adjoint extensions of the Jacobi operator on (weighted) $\ell^2(\mathbb N)$, it is also useful to look for extensions of the operator to a larger Hilbert space. This method is used in e.g.~\cite{KS03}, \cite{GrK}. A choice of the larger Hilbert space often comes from the interpretation of the operator e.g.~in representation theory.
In this paper we consider an extension of the Jacobi operator for the Al-Salam--Chihara polynomials, and we obtain the spectral decomposition in a similar way as in Koelink and Stokman \cite{KS03}. The Al-Salam--Chihara polynomials are a family of orthogonal polynomials introduced by Al-Salam and Chihara \cite{ASC} that can be expressed as $q$-hypergeometric polynomials. If $q>1$ and under particular conditions on the other parameters, see Askey and Ismail \cite{AI}, they are related to an indeterminate moment problem. In case the moment problem is determinate, the polynomials are orthogonal with respect to a discrete measure. Chihara and Ismail \cite{CI} studied the indeterminate moment problem under certain conditions on the parameters. They obtained the explicit Nevanlinna parametrization, but did not derive explicit solutions. Christiansen and Ismail \cite{ChrI} used the Nevanlinna parametrization to obtain explicit solutions, discrete ones and absolutely continuous ones, of the indeterminate moment problem corresponding to the subfamily of symmetric Al-Salam--Chihara polynomials. Christiansen and Koelink \cite{CK} found explicit discrete solutions of the symmetric Al-Salam--Chihara moment problem exploiting the fact that the polynomials are eigenfunctions of a second-order $q$-difference operator acting on the variable of the polynomial (whereas the Jacobi operator acts on the degree). The solution we obtain in this paper has an absolutely continuous part and an infinite discrete part. As special cases we also obtain a solution for the symmetric Al-Salam--Chihara moment problem, and for the continuous $q^{-1}$-Laguerre moment problem.
The extension of the Jacobi operator we study in this paper is essentially the $q$-hypergeometric difference operator. The latter is a $q$-analog of the hypergeometric differential operator, whose spectral analysis (on the appropriate Hilbert space) leads to the Jacobi function transform, see e.g.~\cite{Koo84}. In a similar way the $q$-hypergeometric difference operator corresponds to the little $q$-Jacobi function transform, see Kakehi, Masuda and Ueno \cite{K95}, \cite{KMU95} and also \cite[Appendix A.2]{KS01}. In this light the integral transform $\mathcal F$ we obtain can be considered as a second version of the little $q$-Jacobi function transform. The little $q$-Jacobi function transform has an interpretation as a spherical transform on the quantum $SU(1,1)$ group. We expect a similar interpretation for the integral transform obtained in this paper.
The organisation of this paper is as follows. In Section \ref{sec:preliminaries} some notations for $q$-hypergeometric functions are introduced and the definition of the Al-Salam--Chihara polynomials is given. In Section \ref{sec:difference operator} the $q$-difference operator $L$ is defined on a family of Hilbert spaces. Using Casorati determinants, which are difference analogs of the Wronskian, it is shown that $L$ with an appropriate domain is self-adjoint. In Section \ref{sec:eigenfunctions} eigenfunctions of $L$ are given in terms of $q$-hypergeometric functions. These eigenfunctions and their Casorati determinants are used to define the Green kernel in Section \ref{sec:spectral decomposition}, which is then used to determine the spectral decomposition of $L$. The discrete spectrum is only determined implicitly, except for one particular choice from the family of Hilbert spaces, where we can explicitly describe the spectrum and the spectral projections. Corresponding to this choice an integral transform $\mathcal F$ is defined in Section \ref{sec:integral transform} that diagonalizes the difference operator $L$, and an explicit inverse transform is obtained. The inverse transform gives rise to orthogonality relations for Al-Salam--Chihara polynomials.
\subsection{Notations and conventions}
Throughout the paper $q \in (0,1)$ is fixed. $\mathbb N$ is the set of natural numbers including $0$, and $\mathbb T$ is the unit circle in the complex plane. For a set $E \subseteq \mathbb R$ we denote by $F(E)$ the vector space of complex-valued functions on $E$.
\section{Preliminaries} \lambdabel{sec:preliminaries}
In this section we first introduce standard notations for $q$-hypergeometric functions from \cite{GR04}, state a few useful identities, and then define Al-Salam--Chihara polynomials.
\subsection{$q$-Hypergeometric functions}
The $q$-shifted factorial is defined by
\[
(x;q)_n = \prod_{j=0}^{n-1} (1-xq^{j}), \qquad n \in \mathbb N \cup \{\infty\},\quad x \in \mathbb C,
\]
where we use the convention that the empty product is equal to 1. Note that $(x;q)_n=0$ for $x \in q^{-\mathbb N_{< n}}$.
The following identities for $q$-shifted factorials will be useful later on:
\[
(xq^n;q)_\infty = \frac{ (x;q)_\infty }{ (x;q)_n }, \qquad (xq^{-n};q)_n = (-x)^n q^{-\frac12n(n+1)} (q/x;q)_n.
\]
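As a small illustration (a sanity check only), the second identity at $n=1$ reads
\[
(xq^{-1};q)_1 = 1-\frac{x}{q} = -\frac{x}{q}\Big(1-\frac{q}{x}\Big) = (-x)\, q^{-1}\, (q/x;q)_1 .
\]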
We also define
\[
(x;q)_{-n} = \frac{ (x;q)_\infty}{(xq^{-n};q)_\infty}, \qquad n \in \mathbb N.
\]
The $\theta$-function is defined by
\[
\theta(x;q)= (x;q)_\infty(q/x;q)_\infty, \quad x \in \mathbb C^*.
\]
Note that $\theta(x;q)=0$ for $x \in q^\mathbb Z$.
We use the following notation for products of $q$-shifted factorials or $\theta$-functions
\[
\betagin{split}
(x_1,x_2,\ldots,x_k;q)_n &= \prod_{j=1}^k (x_j;q)_n,\\
\theta(x_1,x_2,\ldots,x_k;q) &= \prod_{j=1}^k \theta(x_j;q).
\end{split}
\]
$\theta$ satisfies the identities
\betagin{equation} \lambdabel{eq:simple theta identities}
\betagin{split}
\theta(xq^k;q) &= (-x)^{-k} q^{-\frac12k(k-1)} \theta(x;q), \\
\theta(-x,x;q) &= \theta(x^2;q^2),\\
\theta(x;q) &= \theta(x,qx;q^2).
\end{split}
\end{equation}
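For instance, the first of these identities with $k=1$ follows directly from the definition of $\theta$:
\[
\theta(qx;q)= (qx;q)_\infty (1/x;q)_\infty = \Big(1-\tfrac1x\Big)(qx;q)_\infty(q/x;q)_\infty = -\tfrac{1}{x}\,(x;q)_\infty (q/x;q)_\infty = -\tfrac1x\,\theta(x;q),
\]
and the general case follows by iteration.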
We also need the following fundamental $\theta$-function identity:
\betagin{equation} \lambdabel{eq:theta identity}
\theta(xv,x/v,yw,y/w)-\theta(xw,x/w,yv,y/v) = \frac{y}{v} \theta(xy,x/y,vw,v/w).
\end{equation}
The $q$-hypergeometric series $_r\varphi_s$ is defined by
\[
\rphis{r}{s}{a_1,\ldots,a_r}{b_1,\ldots,b_s}{q,x} = \sum_{n=0}^\infty \frac{ (a_1,\ldots,a_r;q)_n }{(q,b_1,\ldots,b_s;q)_n} \left[(-1)^n q^{\hf n(n-1)}\right]^{s-r+1} x^n.
\]
We refer to \cite{GR04} for convergence properties of the series. The $q$-hypergeometric difference equation is given by
\betagin{equation} \lambdabel{eq:q-hypergeometric difference eq}
(ABx-C)\varphi(qx) + [C+q-(A+B)x] \varphi(x) + (x-q) \varphi(x/q) = 0,
\end{equation}
and has $_2\varphi_1(A,B;C;q,x)$ as a solution. For $q \uparrow 1$ this $q$-difference equation becomes the hypergeometric differential equation.
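As a quick consistency check (included here for the reader's convenience), write $_2\varphi_1(A,B;C;q,x)=\sum_{n\geq 0}c_nx^n$ with $c_0=1$ and $c_1=\frac{(1-A)(1-B)}{(1-q)(1-C)}$. The coefficient of $x^0$ in the left-hand side of \eqref{eq:q-hypergeometric difference eq} is $(-C)+(C+q)+(-q)=0$, and the coefficient of $x^1$ is
\[
\big(AB-(A+B)+1\big)c_0 + \big(-Cq+C+q-1\big)c_1 = (1-A)(1-B)c_0-(1-q)(1-C)c_1 = 0 .
\]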
\subsection{Al-Salam--Chihara polynomials}
The Al-Salam--Chihara polynomials in base $q^{-1}$ are defined by
\betagin{equation} \lambdabel{def:ASCpol}
P_n(\lambda;b,c;q^{-1}) = b^{-n}\rphis{3}{2}{q^{n},b\lambda,b/\lambda}{bc,0}{q^{-1},q^{-1}},
\end{equation}
which are polynomials of degree $n$ in $\lambda+\lambda^{-1}$. They are symmetric in the parameters $b$ and $c$, which follows from transformation formula \cite[(III.11)]{GR04}. The three-term recurrence relation is
\betagin{equation} \lambdabel{eq:3-term recurrence}
\betagin{split}
(\lambda+\lambda^{-1}) P_n(\lambda) = (1-bcq^{-n}) P_{n+1}(\lambda) + (b+c)q^{-n} P_n(\lambda) + (1-q^{-n})P_{n-1}(\lambda),
\end{split}
\end{equation}
where we use the convention $P_{-1} \equiv 0$. For $b+c\in \mathbb R$ and $bc\geq0$ the polynomials are orthogonal with respect to a positive measure on $\mathbb R$. By \cite[Theorem 3.2]{AI} the moment problem for these polynomials is indeterminate if and only if $b,c \in \mathbb R$ with $q < |b/c|<q^{-1}$, or $b=\overline{c}$.
The polynomials $P_n(\lambda;b,-b;q^{-1})$ are called symmetric Al-Salam--Chihara polynomials, and the polynomials $P_n(\lambda;q^{\hf(\alpha+1)},q^{\hf \alpha};q^{-1})$ with $\alpha \in \mathbb R$, are called continuous $q^{-1}$-Laguerre polynomials. The corresponding moment problems are indeterminate.
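To illustrate the definition, we note that \eqref{def:ASCpol} gives $P_0(\lambda;b,c;q^{-1})=1$ and
\[
P_1(\lambda;b,c;q^{-1}) = \frac{\lambda+\lambda^{-1}-b-c}{1-bc},
\]
in accordance with the recurrence relation \eqref{eq:3-term recurrence} at $n=0$.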
\section{The difference operator $L$} \lambdabel{sec:difference operator}
In this section we define the unbounded second-order $q$-difference operator $L$, and the Hilbert space the operator $L$ acts on. The $q$-difference operator is essentially the $q$-hypergeometric difference operator, and, on the given Hilbert space, $L$ can be considered as an extension of the Jacobi operator for the Al-Salam--Chihara polynomials from the previous section. \\
Let $z \in (q,1]$ and define
\[
I^- = -q^\mathbb N, \qquad I^+= zq^\mathbb Z, \qquad I= I^- \cup I^+.
\]
We also set $I_* = I\setminus\{-1\}$. Let $a$ and $s$ be parameters satisfying
$a \in \mathbb R$ and $0<a^2<1$, and $s \in \mathbb T\setminus\{-1,1\}$ or $s \in \mathbb R$ and $q < s^2 < 1$. The condition $s \not\in \{-1,1\}$ is only needed for technical purposes, and can be removed afterwards by continuity in $s$ of the functions involved.
\betagin{Def} \lambdabel{def:L}
The second order $q$-difference operator $L=L_{a,s}$ on $F(I)$ is given by
\[
\betagin{split}
(Lf)(x) &= \frac{1}{a}\left(1+\frac{1}{x}\right)f(x/q) - \frac{ s + s^{-1} }{ax} f(x) + a\left(1+\frac{ 1}{a^2x} \right)f(qx), \qquad x \in I_*,\\
(Lf)(-1) & = \frac{ s+ s^{-1} }{a} f(-1) + a\left(1-\frac{1}{a^2} \right) f(-q).
\end{split}
\]
\end{Def}
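Note that the expression for $(Lf)(-1)$ is simply the first line evaluated at $x=-1$: there the coefficient $1+\frac{1}{x}$ of $f(x/q)$ vanishes, so that no value of $f$ outside of $I$ is required, and the remaining two terms give precisely the stated formula.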
\betagin{rem}
Define for $x=-q^n \in I^-$
\[
\phi_\lambda(x) = a^{-n} P_n(\lambda;s/a,1/sa;q^{-1}).
\]
By \eqref{eq:3-term recurrence} $\phi_\lambda(x)$ satisfies
\[
\betagin{split}
(\lambda+&\lambda^{-1}) \phi_\lambda(-q^n) =\\
&\frac{1}{a}\left(1-q^{-n}\right)\phi_{\lambda}(-q^{n-1}) + \frac{ s+s^{-1} }{a}q^{-n} \phi_\lambda(-q^n) + a\left(1-\frac{q^{-n}}{a^2} \right) \phi_{\lambda}(-q^{n+1}),
\end{split}
\]
so that $\phi_\lambda$ is an eigenfunction of $L|_{I^-}$. We see that $L$ can be considered as an extension of the Jacobi operator for the Al-Salam--Chihara polynomials. Note also that the parameters $b=s/a$ and $c=1/sa$ satisfy $b=\overline{c}$ in case $s \in \mathbb T$, and $q<b/c \leq1$ in case $s \in \mathbb R$ with $q<s^2 < 1$, which (by symmetry in $b$ and $c$) corresponds exactly to the conditions under which the moment problem is indeterminate. Note that the families of symmetric Al-Salam--Chihara polynomials and continuous \mbox{$q^{-1}$-Laguerre} polynomials are included (for $s=\pm i$, and $(a,s)=(\pm q^{-\hf\alpha-\frac14},\pm q^{\frac14} )$, respectively).
\end{rem}
Let $\mathcal L^2_z=\mathcal L^2_z(I, w(x)d_qx)$ be the Hilbert space with inner product
\[
\lambdangle f, g \rangle_z = \int_{-1}^{\infty(z)} f(x) \overline{g(x)} w(x)\, d_qx,
\]
where $w$ is the positive weight function on $I$ given by
\betagin{equation} \lambdabel{def:weight function}
w(x)= w(x;a;q)= \frac{ (-qx;q)_\infty}{(-a^2x;q)_\infty}.
\end{equation}
Here the $q$-integral is defined by
\[
\int_{-1}^{\infty(z)} f(x)\, d_qx = (1-q)\sum_{n=0}^\infty f(-q^n)q^n + (1-q)\sum_{n=-\infty}^\infty f(zq^n)zq^n.
\]
In order to show that $(L,\mathcal D)$ is self-adjoint, with $\mathcal D \subseteq \mathcal L^2_z$ an appropriate dense domain, we need the following truncated inner product. For $k,l,m\in \mathbb N$ and $f,g \in F(I)$ we set
\betagin{equation} \lambdabel{eq:truncated inner product}
\betagin{split}
\lambdangle f, g \rangle_{k,l,m} &= \left( \int_{-1}^{-q^{k+1}} + \int_{zq^{m+1}}^{zq^{-l}} \right) f(x)\overline{g(x)} w(x) d_qx\\
&= (1-q)\sum_{n=0}^k f(-q^n)\overline{g(-q^n)} w(-q^n)q^n \\
& \quad + (1-q)\sum_{n=-l}^m f(zq^n) \overline{g(zq^n)} w(zq^n) zq^n.
\end{split}
\end{equation}
Note that for $f,g \in \mathcal L^2_z$ we have
\betagin{equation} \lambdabel{eq:limit truncated}
\lim_{k,l,m \rightarrow \infty}\lambdangle f,g \rangle_{k,l,m} = \lambdangle f, g \rangle_z.
\end{equation}
The following result will be useful to establish self-adjointness of $L$.
\betagin{lemma} \lambdabel{lem:CDet1}
For $k,l,m \in \mathbb N$ and $f,g \in F(I)$
\[
\lambdangle Lf, g \rangle_{k,l,m}-\lambdangle f, Lg \rangle_{k,l,m} = D(f,\overline{g})(-q^k) - D(f,\overline{g})(zq^m)+D(f,\overline{g})(zq^{-l-1}),
\]
where $D(f,g) \in F(I)$ is the Casorati determinant given by
\betagin{equation} \lambdabel{eq:Casorati}
\betagin{split}
D(f,g)(x) &= \Big(f(x)g(qx)-f(qx)g(x)\Big) a^{-1}(1-q)(1+a^2x)w(x),
\end{split}
\end{equation}
for $x \in I$.
\end{lemma}
\betagin{proof}
We first consider the sum $\sum_{n=0}^k$ of the truncated inner product. We write $f(-q^n)=f_n$ for functions $f$ on $I^-$, then
\[
(Lf)_n = a^{-1}(1-q^{-n}) f_{n-1} + \frac{s+s^{-1}}{aq^n}f_n + a(1-a^{-2}q^{-n})f_{n+1}.
\]
Now we have
\[
\betagin{split}
\sum_{n=0}^k \Big((Lf)_n \overline{g_n} w_n q^n -& f_n \overline{(Lg)_n} w_n q^n \Big) = \\
& a^{-1}\sum_{n=1}^k f_n \overline{g_{n-1}} (1-q^n)w_n -a^{-1}\sum_{n=1}^k f_{n-1}\overline{g_n}(1-q^n)w_n\\
+& a^{-1}\sum_{n=0}^k f_{n}\overline{g_{n+1}} (1-a^2q^n)w_n- a^{-1}\sum_{n=0}^k f_{n+1}\overline{g_n} (1-a^2q^n)w_n.
\end{split}
\]
We use
\[
(1-q^{n+1}) w_{n+1} = (1-a^2q^{n}) w_{n}, \qquad n \in \mathbb N,
\]
then we see that, after shifting summation indices, the expression above is equal to
\[
\Big(f_k\overline{g_{k+1}}-f_{k+1}\overline{g_{k}}\Big)(1-a^2q^k)a^{-1}w_k,
\]
so we have obtained
\[
\int_{-1}^{-q^{k+1}}\Big((Lf)(x)\overline{g(x)}-f(x)\overline{Lg(x)}\Big)w(x)d_qx = D(f,\overline g)(-q^k).
\]
The sum $\sum_{n=-l}^m$ is treated in the same way, using the identity
\[
(1+zq^{n+1})w(zq^{n+1}) = (1+za^2q^n) w(zq^n), \qquad n \in \mathbb Z. \qedhere
\]
\end{proof}
Using the identity
\[
\frac{ (aq^{-n};q)_n }{ (bq^{-n};q)_n} = \frac{ (q/a;q)_n }{ (q/b;q)_n } \left( \frac{a}{b} \right)^n
\]
in the definition \eqref{def:weight function} of the weight function $w$, we see that
\[
w(zq^{-n}) = \frac{ (-zq;q)_\infty }{ (-za^2;q)_\infty } \frac{ (-1/z;q)_n }{ (-q/za^2;q)_n } a^{-2n}q^n,
\]
so that
\betagin{equation} \lambdabel{eq:asymptotics w}
w(zq^{-n}) = \frac{ \theta(-qz;q)}{\theta(-za^2;q)} a^{-2n} q^n \Big( 1+\mathcal O(q^n)\Big), \qquad n \rightarrow \infty.
\end{equation}
From this we see that if $f \in \mathcal L^2_z$, then
\betagin{equation} \lambdabel{eq:lim f}
\lim_{n \rightarrow \infty} a^{-n}f(zq^{-n}) =0.
\end{equation}
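Indeed, for $f \in \mathcal L^2_z$ the terms of the convergent series defining $\|f\|_z^2$ coming from $I^+$ tend to zero, and by \eqref{eq:asymptotics w}
\[
|f(zq^{-n})|^2\, w(zq^{-n})\, zq^{-n} = \frac{ \theta(-qz;q)}{\theta(-za^2;q)}\, z\, \big|a^{-n} f(zq^{-n})\big|^2 \Big( 1+\mathcal O(q^n)\Big), \qquad n \rightarrow \infty,
\]
which forces $a^{-n}f(zq^{-n})$ to tend to zero.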
This enables us to prove the following lemma.
\betagin{lemma} \lambdabel{lem:CDet2}
For $f,g \in \mathcal L^2_z$, we have
\[
\lim_{n \rightarrow \infty}D(f, \overline{g})(zq^{-n}) = 0.
\]
\end{lemma}
\betagin{proof}
From the definition of $w$ we obtain
\[
(1+za^2q^{-n}) w(zq^{-n}) = \frac{ (-zq^{1-n};q)_\infty }{ (-za^2q^{1-n};q)_\infty } = \frac{ (-zq;q)_\infty }{ (-za^2q;q)_\infty } \frac{ (-1/z;q)_n }{ (-1/a^2z;q)_n }a^{-2n},
\]
so that
\[
(1+za^2q^{-n}) w(zq^{-n}) = \frac{ \theta(-zq;q) }{ \theta(-za^2q;q) } a^{-2n}\Big(1+\mathcal O(q^n)\Big), \qquad n \rightarrow \infty.
\]
Now the lemma follows from the definition of the Casorati determinant and the asymptotic behaviour \eqref{eq:lim f} of $f,g \in \mathcal L^2_z$.
\end{proof}
Now we are ready to introduce an appropriate dense domain for the unbounded operator $L$.
Let us define $\mathcal D \subseteq \mathcal L^2_z$ to be the subspace consisting of functions $f \in\mathcal L^2_z$ satisfying the following conditions:
\betagin{itemize}
\item $Lf \in \mathcal L^2_z$
\item $f$ is bounded on $-q^{\mathbb N+1} \cup z q^\mathbb N$
\item $\displaystyle \lim_{n \to \infty} f(zq^n) - f(-q^n)=0$
\end{itemize}
Note that $\mathcal D$ is dense in $\mathcal L^2_z$, since it contains the finitely supported functions on $I$.
\betagin{theorem} \lambdabel{thm:L self-adjoint}
The operator $(L,\mathcal D)$ is self-adjoint.
\end{theorem}
\betagin{proof}
First we show that $(L,\mathcal D)$ is symmetric.
Let $f,g \in \mathcal D$, and define
\[
u(x) = \frac{a}{(1-q)(1+a^2x)w(x)}, \qquad x \in I,
\]
then from \eqref{eq:Casorati} we obtain
\betagin{equation} \lambdabel{eq:D-D=}
\betagin{split}
u&(zq^n)D(f,g)(zq^{n}) - u(-q^n) D(f,g)(-q^n) =\\
& \left[f(zq^n )-f(-q^n)\right] g(zq^{n+1}) - f(zq^{n+1}) \left[ g(zq^n) - g(-q^n) \right] \\
& + f(-q^n) \left[ g(zq^{n+1}) - g(-q^{n+1})\right] - \left[ f(zq^{n+1}) - f(-q^{n+1}) \right] g(-q^n).
\end{split}
\end{equation}
From the conditions on $f$ and $g$ it follows that this tends to $0$ as $n \to \infty$. Then using
\[
\lim_{n \to \infty} u(zq^n) = \lim_{n \to \infty} u(-q^n) = \frac{a}{1-q},
\]
as well as \eqref{eq:limit truncated} and Lemmas \ref{lem:CDet1}, \ref{lem:CDet2} we see that
\[
\lambdangle Lf, g \rangle_z - \lambdangle f, Lg \rangle_z = \lim_{n \to \infty} D(f,g)(-q^n) - D(f,g)(zq^n)=0,
\]
so that $(L, \mathcal D)$ is symmetric.
Since $(L,\mathcal D)$ is symmetric, we have $(L,\mathcal D) \subseteq (L^*, \mathcal D^*)$, where $(L^*, \mathcal D^*)$ is the adjoint of the operator $(L,\mathcal D)$. Here, by definition, $\mathcal D^*$ is the subspace
\[
\mathcal D^* = \{g \in \mathcal L^2_z\ | \ f \mapsto \lambdangle Lf, g \rangle_z \thetaxt{ is continuous on } \mathcal D \}.
\]
We show that $L^* = L|_{\mathcal D^*}$.
Let $f$ be a non-zero function with support at only one point $x \in I$ and let $g \in F(I)$, then we obtain in the same way as above that $\lambdangle Lf,g \rangle_z=\lambdangle f, Lg \rangle_z$. In particular, for $g \in \mathcal D^*$ we then have $\lambdangle f, Lg \rangle_z = \lambdangle f, L^*g \rangle_z$, so $(Lg)(x)=(L^*g)(x)$. This holds for all $x \in I$, hence $L^*=L|_{\mathcal D^*}$.
Finally we need to show that $\mathcal D^* \subseteq \mathcal D$. Let $f \in \mathcal D$ and let $g \in \mathcal D^*$. Using Lemmas \ref{lem:CDet1}, \ref{lem:CDet2} and $L^*=L|_{\mathcal D^*}$ we obtain
\[
\lim_{n \rightarrow \infty} D(f,\overline{g})(-q^n)- D(f,\overline{g})(zq^n) = \lambdangle Lf,g \rangle_z - \lambdangle f,L^*g \rangle_z=0.
\]
Since this holds for all $f \in \mathcal D$, we find using \eqref{eq:D-D=} that $g$ is bounded near $0$, and
\[
\lim_{n \rightarrow \infty} g(zq^n)-g(-q^n)=0,
\]
hence $g \in \mathcal D$.
\end{proof}
\section{Eigenfunctions of $L$} \lambdabel{sec:eigenfunctions}
In this section we consider eigenspaces and eigenfunctions of $L$. The eigenfunctions are initially only defined on a part of $I_*$, and we show how to extend them to functions on $I_*$. \\
We start with some useful properties of eigenspaces of $L$. For $\mu \in \mathbb C$ we introduce the spaces
\betagin{equation} \lambdabel{eq:eigenspaces}
\betagin{split}
V_\mu^-&=\{ f \in F(I^-_*) \mid Lf=\mu f \},\\
V_\mu^+&=\{ f\in F(I^+)\mid Lf =\mu f \},\\
V_\mu &= \Big\{ f \in F(I_*) \ | \ Lf=\mu f, \quad f(zq^n)=o(q^{-n}), \quad f(-q^n) = o(q^{-n})\\
& \qquad \qquad \qquad \thetaxt{ and } f(zq^n)-f(-q^n) = o(1)\,\thetaxt{ for } n \to \infty \Big\}
\end{split}
\end{equation}
\betagin{lemma} \lambdabel{lem:eigenspaces}
For $\mu \in \mathbb C$
\betagin{enumerate}[(i)]
\item $\dim V_\mu^\pm = 2$.
\item For $f,g \in V_\mu^-$ the Casorati determinant $D(f,g)$ is constant on $I^-_*$.
\item For $f,g \in V_\mu^+$ the Casorati determinant $D(f,g)$ is constant on $I^+$.
\item For $f,g \in V_\mu$ the Casorati determinant $D(f,g)$ is constant on $I_*$.
\end{enumerate}
\end{lemma}
\betagin{proof}
For (i) we write $f(tq^n)= f_n$ (with $t=-1$ or $t=z$), then we see that $Lf =\mu f$ gives a recurrence relation of the form $\alpha_n f_{n+1} +\beta_n f_n + \gamma_n f_{n-1} = \mu f_n$, with $\alpha_n,\gamma_n \neq 0$ for all $n$. Solutions to such a recurrence relation are uniquely determined by specifying $f_n$ at two consecutive points $n=l$ and $n=l+1$. So there are two independent solutions.
The proofs of (ii) and (iii) are the same. We prove (iii). Let $f,g \in F(I^+)$. Using the explicit expressions for the Casorati determinant and the weight function $w$, we find for $x \in I^+$
\[
\betagin{split}
(Lf)(x) g(x) &- f(x)(Lg)(x) \\
&= a^{-1}(1+1/x)\Big( f(x/q) g(x)- f(x) g(x/q) \Big) \\
& \quad - a(1+1/a^2x) \Big(f(x)g(qx)- f(qx)g(x) \Big) \\
&= \frac{ (1+1/x) D(f,g)(x/q) }{(1-q)(1+a^2x/q) w(x/q) } - \frac{ a^2(1+1/a^2x) D(f,g)(x) }{ (1-q)(1+a^2x) w(x)}\\
&= \frac{1}{(1-q)x w(x)} \Big( D(f,g)(x/q) - D(f,g)(x) \Big).
\end{split}
\]
Now if $f$ and $g$ satisfy $(Lf)(x)=\mu f(x)$ and $(Lg)(x)=\mu g(x)$, we have $D(f,g)(x/q)= D(f,g)(x)$, hence $D(f,g)$ is constant on $I^+$.
Statement (iv) follows from (ii) and (iii) and the fact that
\[
\lim_{n \to \infty} \Big(D(f,g)(zq^n) - D(f,g)(-q^n)\Big)=0,
\]
which can be obtained by rewriting the difference of the Casorati determinants similar as in the proof of Theorem \ref{thm:L self-adjoint}.
\end{proof}
We can now introduce explicit eigenfunctions of $L$ (in the algebraic sense). Comparing $L$ given in Definition \ref{def:L} to the $q$-hypergeometric difference equation \eqref{eq:q-hypergeometric difference eq} we find that
\betagin{equation} \lambdabel{def:psi}
\psi_\lambdambda(x;s) = \psi_\lambdambda(x;a,s\mkern 2mu | \mkern 2mu q) = s^{-n}\rphis{2}{1}{a \lambdambda/s, a/\lambdambda s}{q/s^2}{q,-qx},\qquad |x|<q^{-1},
\end{equation}
with $x=tq^n$, $t \in \{-1,z\}$, is a solution of the eigenvalue equation
\betagin{equation} \lambdabel{eq:eigenv eq}
Lf=\mu(\lambdambda)f, \qquad \mu(\lambdambda)= \lambdambda+ \lambdambda^{-1},
\end{equation}
on $I_{*,\leq 1}$.
From $L_{a,s}=L_{a,s^{-1}}$ it follows immediately that $\psi_\lambdambda(x;a,s^{-1} \mkern 2mu | \mkern 2mu q)$ is also a solution of \eqref{eq:eigenv eq} on $I_{*,\leq 1}$. Note that it does not follow from the $q$-hypergeometric difference equation that $\psi_\lambdambda$ is a solution to \eqref{eq:eigenv eq} for $x=-1$. In fact, we show later on that for generic $\lambda$ it is not a solution for $x=-1$. Using \eqref{eq:eigenv eq} the solutions $\psi_\lambdambda(x;s^{\pm 1})$ can be extended to functions on $I_*$ still satisfying \eqref{eq:eigenv eq}. We denote these extensions still by $\psi_\lambdambda(x;s^{\pm 1})$. Later on we give explicit expressions for the extensions. Note that both functions $\psi_\lambda(x;s^{\pm 1})$ can be considered as little $q$-Jacobi functions \cite{KMU95}.
\betagin{lemma} \lambdabel{lem:psi in Vmu}
For $\lambda \in \mathbb C^*$, $\psi_\lambda(\cdot;s)$ and $\psi_\lambda(\cdot;s^{-1})$ are in $V_{\mu(\lambda)}$.
\end{lemma}
\betagin{proof}
We already know that $\psi_\lambda$ is an eigenfunction of $L$, so we only need to check that $\psi_\lambda$ has the behavior near $0$ as stated in the definition of $V_{\mu(\lambda)}$.
From \eqref{def:psi} we find for $t \in \{-1,z\}$
\[
\psi_\lambda(tq^{n};s) = s^{-n}\Big(1+\mathcal O(q^n)\Big), \qquad n \to \infty,
\]
so that $\psi_\lambda(tq^n;s^{\pm 1}) = o(q^{-n})$ by the conditions on $s$, and
\[
\psi_\lambda(zq^n;s^{\pm 1}) - \psi_\lambda(-q^n;s^{\pm 1}) =o(1). \qedhere
\]
\end{proof}
Another solution of \eqref{eq:eigenv eq} is
\[
\Psi_\lambdambda(x) = \Psi_\lambdambda(x;a,s\mkern 2mu | \mkern 2mu q) = (a\lambdambda)^{-n} \rphis{2}{1}{a\lambdambda/s, as\lambdambda}{q\lambdambda^2}{q, - \frac{q}{a^2 x} }, \qquad x=zq^n>q/a^2,
\]
and clearly $\Psi_{1/\lambdambda}(x)$ also satisfies \eqref{eq:eigenv eq}. Both are solutions on $zq^{-\mathbb N_{\geq k}}$ for $k$ large enough. For $n \rightarrow \infty$ we have
\betagin{equation} \lambdabel{eq:asymptotics Psi}
\Psi_\lambdambda(zq^{-n})= (a\lambdambda)^n\Big(1+\mathcal O(q^n) \Big),
\end{equation}
so by the asymptotic behavior \eqref{eq:asymptotics w} of the weight function w, for $|\lambda|<1$ the function $\Psi_\lambda$ is an $L^2$-function at $\infty$. The solutions $\Psi_{\lambda^{\pm 1}}$ can be used to give the explicit extension of $\psi_\lambdambda$ on $I^+$;
\betagin{equation} \lambdabel{eq:psi=b Psi}
\psi_\lambdambda(x;s) = b_z(\lambdambda;s) \Psi_\lambdambda(x) + b_z(1/\lambdambda;s)\Psi_{1/\lambdambda}(x),
\end{equation}
with
\[
b_z(\lambdambda;s)=b_z(\lambdambda;a,s \mkern 2mu | \mkern 2mu q) = \frac{ (q/as\lambdambda, a/s\lambdambda;q)_\infty }{ (1/\lambdambda^2, q/s^2;q)_\infty } \frac{ \theta(-qaz\lambdambda/s;q)} {\theta(-qz;q)},
\]
which follows from the three-term transformation formula \cite[(III.32)]{GR04}.
Next we give a solution $\phi_\lambdambda$ of the eigenvalue equation \eqref{eq:eigenv eq} for all $x \in I$.
\betagin{Def} \lambdabel{def:phi}
Define
\[
\phi_\lambda(x) = \phi_\lambda(x;a,s\mkern 2mu | \mkern 2mu q) = d(\lambda;s) \psi_\lambda(x;a,s\mkern 2mu | \mkern 2mu q) + d(\lambda;1/s) \psi_{\lambda}(x;a,1/s \mkern 2mu | \mkern 2mu q),
\]
with
\[
d(\lambdambda;s)=d(\lambdambda;a,s \mkern 2mu | \mkern 2mu q)= \frac{ (as\lambdambda, as/\lambdambda;q)_\infty }{ (a^2, s^2;q)_\infty }.
\]
\end{Def}
Note that $\phi_\lambdambda$ is invariant under $\lambdambda \leftrightarrow 1/\lambdambda$ and $s \leftrightarrow 1/s$. Since $\psi_\lambdambda$ is a solution of \eqref{eq:eigenv eq} for $x \in I_*$, it is clear that $\phi_\lambdambda$ is also a solution of \eqref{eq:eigenv eq} for $x \in I_*$. We will show that $\phi_\lambdambda$ is an eigenfunction of $L$ on the whole of $I$, i.e.~including the point $-1$, by identifying $\phi_\lambda(-q^n)$ with an Al-Salam--Chihara polynomial.
\betagin{lemma} \lambdabel{lem:phi=ASCpol}
For $n \in \mathbb N$ we have
\[
\phi_\lambda(-q^n) = a^{-n} P_n(\lambda;s/a,1/sa;q^{-1}).
\]
As a consequence, $L\phi_\lambda(x)=\mu(\lambda)\phi_\lambda(x)$ for all $x \in I$.
\end{lemma}
\betagin{proof}
We use the three-term transformation formula for $_3\varphi_2$-functions \cite[(III.34)]{GR04} with $A=q^{-n}$, $n \in \mathbb N$. Letting $D \rightarrow \infty$ gives
\[
\betagin{split}
\rphis{3}{1}{q^{-n},B,C}{E}{q,\frac{Eq^n}{BC}} &= \frac{ (E/B,E/C;q)_\infty}{(E,E/BC;q)_\infty }\rphis{2}{1}{B,C}{BCq/E}{q,q^{n+1}}\\
& + \frac{ (B,C;q)_\infty }{ (E, BC/E;q)_\infty } \left( \frac{E}{BC} \right)^n \rphis{2}{1}{E/B, E/C}{Eq/BC}{q,q^{n+1}}.
\end{split}
\]
Using the identity $(A;q^{-1})_n=(1/A;q)_n (-A)^n q^{-\frac12 n(n-1)}$, we find the transformation
\[
\rphis{3}{1}{q^{-n},B,C}{E}{q,\frac{Eq^n}{BC}} = \rphis{3}{2}{q^n,1/B,1/C}{1/E,0}{q^{-1}, q^{-1}}.
\]
From these formulas with $B= a\lambda/s, C= a/\lambda s, E=a^2$, we find
\[
\phi_\lambda(-q^n;a,s \mkern 2mu | \mkern 2mu q) = s^{-n} \rphis{3}{2}{q^n, s\lambda/a, s/\lambda a }{a^{-2},0}{q^{-1},q^{-1}},
\]
which we recognize as the Al-Salam--Chihara polynomial $a^{-n} P_n(\lambda;s/a,1/sa;q^{-1})$, see \eqref{def:ASCpol}. From the recurrence relation for these polynomials it follows that $\phi_\lambda$ satisfies $(L\phi_\lambda)(x)= \mu(\lambda)\phi_\lambda(x)$ for $x \in I^-$ (including $-1$), so $\phi_\lambda$ is a solution of \eqref{eq:eigenv eq} on $I$.
\end{proof}
Note that the eigenspace
\[
W_\mu^{-}=\{f\in F(I^-) \mid \ Lf=\mu f\}
\]
is $1$-dimensional, since the value of $f \in W_\mu^-$ at the point $-1$ completely determines $f$ on $I^-$. So $\phi_\lambdambda$ is the only solution (up to a multiplicative constant) to the eigenvalue equation \eqref{eq:eigenv eq} on $I^-$, hence also the only function in $V_{\mu(\lambda)}$ which is a solution to \eqref{eq:eigenv eq} on $I$.
\\
We also need the expansion of $\phi_\lambda$ into $\Psi_{\lambda^{\pm 1}}$ and we need to determine an explicit expression for the Casorati determinant $D(\phi_\lambda,\Psi_\lambda)$, which we need later on to describe the resolvent for $L$. Determining the expansion is a straightforward calculation.
\betagin{lemma} \lambdabel{lem:c-function expansion}
For $x \in I^+$,
\[
\phi_\lambda(x) = c_z(\lambda) \Psi_\lambda(x) + c_z(\lambda^{-1}) \Psi_{\lambda^{-1}}(x),
\]
with
\[
\betagin{split}
c_z(\lambda) &= c_z(\lambda;a,s \mkern 2mu | \mkern 2mu q) \\
&= \frac{(as/\lambda, a/s \lambda;q)_\infty}{(a^2,\lambda^{-2};q)_\infty \theta(-qz;q)} \Big( \frac{\theta(as\lambda, -qaz\lambda/s;q)}{\theta(s^2;q)} + \frac{\theta(a\lambda/s, -qasz\lambda;q)}{\theta(s^{-2};q)} \Big).
\end{split}
\]
\end{lemma}
\betagin{proof}
From \eqref{eq:psi=b Psi} and Definition \ref{def:phi} we find
\[
\phi_\lambda = c_z(\lambda) \Psi_\lambda + c_z(\lambda^{-1}) \Psi_{\lambda^{-1}}, \qquad \thetaxt{on } I^+,
\]
with
\[
c_z(\lambda) = d(\lambda;a,s \mkern 2mu | \mkern 2mu q) b_z(\lambda;a,s \mkern 2mu | \mkern 2mu q) + d(\lambda;a,1/s \mkern 2mu | \mkern 2mu q) b_z(\lambda;a,1/s \mkern 2mu | \mkern 2mu q).
\]
From the explicit expression for $d(\lambda)$ and $b_z(\lambda)$ we obtain the expression for $c_z(\lambda)$.
\end{proof}
\betagin{lemma} \lambdabel{lem:Casorati determinants}
For $x \in I^+$,
\[
D\big(\psi_\lambdambda(\cdot;s), \psi_\lambdambda(\cdot;1/s)\big)(x) = a(1-q)(s^{-1}-s)
\]
and
\[
D(\phi_\lambdambda, \Psi_\lambdambda)(x) =K_z\, c_z(\lambda^{-1}) (\lambdambda^{-1}-\lambdambda),
\]
with
\[
K_z= (1-q)\frac{ \theta(-z;q) }{\theta(-za^2;q)}.
\]
\end{lemma}
\betagin{proof}
The Casorati determinant $D(\Psi_{\lambdambda^{-1}},\Psi_\lambdambda)$ is constant on $I^+$, so we may determine it by letting $x \rightarrow \infty$.
Now from the asymptotic behavior \eqref{eq:asymptotics Psi} of $\Psi_\lambdambda(zq^{-k})$ when $k \rightarrow \infty$, the definition \eqref{eq:Casorati} of the Casorati determinant, and the asymptotic behavior of $(1+za^2q^{-k})w(zq^{-k})$ for $k \rightarrow \infty$, see Lemma \ref{lem:CDet2}, we obtain
\[
D(\Psi_{\lambdambda^{-1}},\Psi_\lambdambda)(x) = K_z (\lambdambda^{-1} - \lambdambda).
\]
With this result we can compute $D\big(\psi_{\lambdambda}(\cdot;s), \Psi_\lambdambda \big)$.
First expand $\psi_\lambdambda(x;s)$ in terms of $\Psi_{\lambdambda^{\pm 1}}(x)$ using \eqref{eq:psi=b Psi}, then we find
\[
D(\psi_{\lambdambda}(\cdot;s), \Psi_\lambdambda)(x) = b_z(1/\lambdambda;s) D(\Psi_{1/\lambdambda},\Psi_\lambdambda)(x) = K_z b_z(1/\lambdambda;s) (\lambdambda-\lambdambda^{-1}).
\]
This leads to
\[
\betagin{split}
D\big(\psi_\lambdambda(\cdot;s), &\psi_\lambdambda(\cdot;1/s)\big)(x)\\
&= b_z(\lambdambda;1/s) D( \psi_\lambdambda(\cdot;s), \Psi_\lambdambda)(x) + b_z(1/\lambdambda;1/s) D( \psi_\lambdambda(\cdot;s), \Psi_{1/\lambdambda})(x) \\
& = K_z \Big( b_z(1/\lambdambda;s)b_z(\lambdambda;1/s) - b_z(\lambdambda;s)b_z(1/\lambdambda;1/s)\Big) (\lambdambda - \lambdambda^{-1}).
\end{split}
\]
Writing this out and using the fundamental $\theta$-function identity \eqref{eq:theta identity}
with
\[
(x,y,v,w)= (a, -az, \lambdambda/s, \lambdambda s)
\]
we find
\[
D\big(\psi_\lambdambda(\cdot;s), \psi_\lambdambda(\cdot;1/s)\big)(x) = \frac{s K_z(\lambdambda - \lambdambda^{-1})}{az\lambda} \frac{ \theta(-a^2z, -1/z, \lambdambda^2, 1/s^2;q) }{(\lambdambda^{\pm 2}, qs^{\pm 2};q)_\infty \theta(-qz;q)^2}.
\]
This expression simplifies using identities for $\theta$-functions.
Finally, using the $c$-function expansion from Lemma \ref{lem:c-function expansion} we have
\[
D(\phi_\lambda,\Psi_\lambda) = c_z(\lambda^{-1}) D(\Psi_{\lambda^{-1}},\Psi_\lambda) = K_z\, c_z(\lambda^{-1})(\lambda^{-1} - \lambda). \qedhere
\]
\end{proof}
Note that so far we only have expressions for the Casorati determinants on $I^+$. These expressions are actually valid on $I_*$, which follows from the following result.
\betagin{prop} \lambdabel{prop:basis}
The set $\{\psi_\lambdambda(\cdot;s), \psi_\lambdambda(\cdot;1/s)\}$ is a basis for $V_{\mu(\lambdambda)}$.
\end{prop}
\betagin{proof}
From Lemma \ref{lem:psi in Vmu} we know that $\psi_\lambda(\cdot;s^{\pm 1}) \in V_{\mu(\lambda)}$. Then by Lemma \ref{lem:eigenspaces} the Casorati determinant $D\big(\psi_\lambdambda(\cdot;s), \psi_\lambdambda(\cdot;1/s)\big)$ is constant on $I_*$, so the value from Lemma \ref{lem:Casorati determinants} is valid on $I_*$. It is nonzero since we assumed $s \neq \pm 1$, so $\psi_\lambda(\cdot;s)$ and $\psi_\lambda(\cdot;1/s)$ are linearly independent, and since $\dim V_{\mu(\lambda)}=2$ they form a basis.
\end{proof}
Now $\Psi_\lambdambda$ extends to a function on $I_*$ by expanding it in terms of $\psi_{\lambdambda}(x;s^{\pm 1})$. This can be done explicitly, but we do not need the explicit expansion. Then $\Psi_\lambda, \phi_\lambda \in V_{\mu(\lambda)}$, so as a consequence we obtain the following result.
\betagin{cor} \lambdabel{cor:Dlambda}
On $I_*$,
\[
D(\phi_\lambda,\Psi_\lambda) = K_z\, c_z(\lambda^{-1})(\lambda^{-1}-\lambda), \qquad \lambda \in \mathbb C^*.
\]
.
\end{cor}
\section{The spectral decomposition of $L$} \lambdabel{sec:spectral decomposition}
In this section we determine the spectral decomposition for $L$ as a self-adjoint operator on $\mathcal L^2_z$. For $z=1$ the spectral projections are completely explicit.\\
To determine the spectral measure $E$ for the self-adjoint operator $(L,\mathcal D)$ we use the following formula, see \cite[Theorem XII.2.10]{DS63},
\betagin{equation} \lambdabel{eq:Stieltjes-Perron}
\lambdangle E(a,b)f, g \rangle_z = \lim_{\delta \downarrow 0} \lim_{\varepsilon \downarrow 0} \frac{1}{2\pi i} \int_{a + \delta}^{b-\delta} \Big( \lambdangle R_{\gamma+i\varepsilon} f,g \rangle_{z} - \lambdangle R_{\gamma-i\varepsilon} f,g \rangle_z \Big) d\gamma,
\end{equation}
for $a<b$ and $f,g \in \mathcal L^2_z$. Here $R_\gamma=(L-\gamma)^{-1}$ is the resolvent for $L$. $R_{\mu(\lambda)}$ can be expressed using the Green kernel $G_\lambda \in F(I\times I)$ given by
\[
G_\lambda(x,y) =
\betagin{cases}
\dfrac{\phi_\lambda(x) \Psi_\lambda(y)}{D_\lambda}, & x \leq y, \\ \\
\dfrac{\phi_\lambda(y) \Psi_\lambda(x)}{D_\lambda}, & x > y,
\end{cases}
\]
where $D_\lambda=D(\phi_\lambda,\Psi_\lambda)$ (see Corollary \ref{cor:Dlambda}) and we assume $\lambda \in \mathbb D \setminus (-1,1)$, where $\mathbb D$ is the open unit disc. Note that $G_\lambda(\cdot,y) \in \mathcal L^2_z$ for $y \in I$. Now the resolvent is given by
\[
(R_{\mu(\lambda)} f)(y) = \lambdangle f, G_\lambda(\cdot,y) \rangle_z,
\]
so that for $f,g \in \mathcal L^2_z$
\[
\betagin{split}
\lambdangle &R_{\mu(\lambda)} f, g \rangle_z \\
&= \iint_{I\times I} f(x) \overline{g(y)} G_\lambda(x,y) \, w(x) w(y) \, d_qx d_qy \\
& = \iint\limits_{\substack{(x,y) \in I \times I \\ x \leq y}} \frac{\phi_\lambda(x) \Psi_\lambda(y) }{D_\lambda}\Big(f(x) \overline{g(y)}+ f(y) \overline{g(x)}\Big) \big(1-\tfrac12 \delta_{x,y}\big) w(x)w(y) \, d_qx \, d_qy.
\end{split}
\]
The proof that $R_{\mu(\lambda)}$ is indeed the resolvent operator is identical to the proof of \cite[Proposition 6.1]{KS03}.
Let $\mu^{-1}:\mathbb C\setminus \mathbb R \to \mathbb D\setminus (-1,1)$ be the inverse of $\mu|_{\mathbb D \setminus (-1,1)}$. Let $\gamma \in (-2,2)$, then $\gamma= \mu(e^{i\psi})$ for a unique $\psi \in (0,\pi)$, and in this case
\[
\lim_{\varepsilon \downarrow 0}\mu^{-1}(\gamma\pm i\varepsilon) = e^{\mp i\psi}.
\]
Furthermore, for $\gamma \in \mathbb R\setminus[-2,2]$ we have $\gamma =\mu(\lambda)$ for a unique $\lambda \in (-1,1)$, and in this case
\[
\lim_{\varepsilon \downarrow 0} \mu^{-1}( \gamma \pm i\varepsilon ) = \lambda.
\]
We see that we need to distinguish between spectrum in $(-2,2)$ and in $\mathbb R\setminus[-2,2]$.
\betagin{prop} \lambdabel{prop:E continuous}
Let $0< \psi_2 < \psi_1 < \pi$, and let $\mu_1=\mu(e^{i\psi_1})$ and $\mu_2 = \mu(e^{i\psi_2})$. Then, for $f,g \in \mathcal L^2_z$,
\[
\lambdangle E(\mu_1,\mu_2)f,g \rangle_z = \frac{1}{2\pi K_z} \int_{\psi_2}^{\psi_1} \lambdangle f, \phi_{e^{i\psi}} \rangle_z \lambdangle \phi_{e^{i\psi}},g \rangle_z \frac{d\psi}{|c_z(e^{i\psi})|^2}.
\]
\end{prop}
\betagin{proof}
For $\gamma = \mu(e^{i\psi})$ with $0 < \psi < \pi$, we have
\[
\betagin{split}
\lim_{\varepsilon \downarrow 0} & \left(\frac{\phi_{\mu^{-1}(\gamma+i\varepsilon)}(x) \Psi_{\mu^{-1}(\gamma+i\varepsilon)}(y) }{D_{\mu^{-1}(\gamma+i\varepsilon)}} - \frac{\phi_{\mu^{-1}(\gamma-i\varepsilon)}(x) \Psi_{\mu^{-1}(\gamma-i\varepsilon)}(y) }{D_{\mu^{-1}(\gamma-i\varepsilon)}} \right)\\
&=\frac{\phi_{e^{-i\psi}}(x) \Psi_{e^{-i\psi}}(y) }{D_{e^{-i\psi}}} - \frac{\phi_{e^{i\psi}}(x) \Psi_{e^{i\psi}}(y) }{D_{e^{i\psi}}} \\
&= \frac{\phi_{e^{i\psi}}(x) \left(c_z(e^{-i\psi}) \Psi_{e^{-i\psi}}(y) + c_z(e^{i\psi}) \Psi_{e^{i\psi}}(y) \right)}{2iK_z\sin(\psi) \, c_z(e^{i\psi})c_z(e^{-i\psi})}\\
& = \frac{\phi_{e^{i\psi}}(x)\phi_{e^{i\psi}}(y) }{2iK_z\sin(\psi) \, c_z(e^{i\psi})c_z(e^{-i\psi})},
\end{split}
\]
where we use $\phi_\lambda=\phi_{\lambda^{-1}}$ and the explicit expression for $D_\lambda$. Then the result follows from the inversion formula \eqref{eq:Stieltjes-Perron} and $d\mu(e^{i\psi}) = -2 \sin(\psi) d\psi$.
\end{proof}
From the $c$-function expansion in Lemma \ref{lem:c-function expansion}, the asymptotic behavior of $\Psi_\lambda$ \eqref{eq:asymptotics Psi} and the weight $w$ \eqref{eq:asymptotics w}, it follows that $\phi_\lambda \not\in \mathcal L^2_z$ for $\lambda \in \mathbb T$, so $(-2,2)$ is contained in the continuous spectrum of $L$. The points $-2$ and $2$ are also in the continuous spectrum, which follows in the same way as in \cite[Lemma 7.1]{KS03}.
\\
Next we consider the spectrum in $\mathbb R\setminus[-2,2]$. Assume $\gamma \in \mathbb R\setminus[-2,2]$, then $\gamma$ only contributes to the spectral measure if $\gamma=\mu(\lambda)$ with $\lambda$ a simple pole of \mbox{$\lambda \mapsto G_\lambda$}, i.e., $c_z(\lambda^{-1})=0$. Let us assume that all real zeros of $\lambda \mapsto c_z(\lambda^{-1})$ are simple, and let $S_z$ be the set of all those zeros. For $\gamma \in S_z$ we write \mbox{$E(\{\gamma\}) = E(a,b)$} where $(a,b)$ is an interval disjoint from $[-2,2]$ such that $(a,b) \cap S_z = \{\gamma\}$. Note that $E(a,b)=0$ if $(a,b)\cap [-2,2]=\varnothing$ and $(a,b)\cap S_z = \varnothing$.
\betagin{prop} \lambdabel{prop:E discrete}
Let $f,g \in \mathcal L^2_z$, and let $\gamma=\mu(\lambda)$ with $\lambda \in S_z$, then
\[
\langle E(\{\gamma\}) f, g \rangle_z = \frac{1}{K_z} \langle f,\phi_\lambda \rangle_z \langle \phi_\lambda, g \rangle_z \operatorname*{Res}_{\lambda'=\lambda} \left( \frac{1}{\lambda'c_z(\lambda') c_z(\lambda'^{-1})} \right).
\]
\end{prop}
\betagin{proof}
Let $\mathcal C$ be a clockwise oriented contour encircling $\gamma$ once, such that no other points in $S_z$ are enclosed by $\mathcal C$. Then by the residue theorem
\[
\betagin{split}
\lambdangle E(\{\gamma\}) f, g \rangle_z
& = \frac{1}{2\pi i} \int_{\mathcal C} \lambdangle R_\gamma f, g \rangle \, d\gamma \\
& = -\iint\limits_{\substack{(x,y) \in I \times I \\ x \leq y}} \phi_\lambda(x) \Psi_\lambda(y) \operatorname*{Res}_{\lambda'=\lambda} \left( \frac{1-1/\lambda'^2}{D_{\lambda'}} \right) \\
& \qquad \qquad \times \Big(f(x) \overline{g(y)}+ f(y) \overline{g(x)}\Big) \big(1-\tfrac12 \delta_{x,y}\big) w(x)w(y) \, d_qx \, d_qy \\
& = \frac{1}{K_z}\iint\limits_{\substack{(x,y) \in I \times I \\ x \leq y}} \phi_\lambda(x) \phi_\lambda(y) \operatorname*{Res}_{\lambda'=\lambda} \left( \frac{1}{\lambda'c_z(\lambda') c_z(\lambda'^{-1})} \right) \\
& \qquad \qquad \times \Big(f(x) \overline{g(y)}+ f(y) \overline{g(x)}\Big) \big(1-\tfrac12 \delta_{x,y}\big) w(x)w(y) \, d_qx \, d_qy.
\end{split}
\]
Here we used $\phi_\lambda = c_z(\lambda) \Psi_\lambda$, since $c_z(\lambda^{-1})=0$. Symmetrizing the double $q$-integral gives the result.
\end{proof}
For $\lambda \in S_z$ we have $\phi_\lambda \in \mathcal L^2_z$, so $\mu(S_z)$ is the discrete spectrum of $L$. Eigenfunctions (in the analytic sense) of self-adjoint operators are mutually orthogonal, so we have the following orthogonality relations.
\betagin{cor} \lambdabel{cor:orthogonality relations}
For $\lambda,\lambda' \in S_z$,
\[
\langle \phi_\lambda,\phi_{\lambda'} \rangle_z = \delta_{\lambda,\lambda'}\frac{K_z}{ \operatorname*{Res}_{\lambda'=\lambda} \left( \frac{1}{\lambda'c_z(\lambda') c_z(\lambda'^{-1})} \right)}.
\]
\end{cor}
\betagin{proof}
Let $\lambda \in S_z$. We only have to compute the norm of $\phi_\lambda$. Take $f=g=\phi_\lambda$ and $\gamma=\mu(\lambda)$ in Proposition \ref{prop:E discrete}, then
\[
\langle \phi_\lambda,\phi_{\lambda} \rangle_z = \langle E(\{\mu(\lambda)\}) \phi_\lambda,\phi_\lambda\rangle_z = \frac{1}{K_z} \operatorname*{Res}_{\lambda'=\lambda} \left( \frac{1}{\lambda'c_z(\lambda') c_z(\lambda'^{-1})} \right) \langle \phi_\lambda,\phi_\lambda \rangle_z^2,
\]
which gives the result.
\end{proof}
So far the discrete spectrum $\mu(S_z)$ is only defined implicitly, as $S_z$ is the set of zeros of $c_z(\lambda^{-1})$ inside $(-1,1)$. To make this explicit we need to solve the equation
\[
(as\lambda, a\lambda/s ;q)_\infty \Big( \frac{\theta(as/\lambda, -qaz/\lambda s;q)}{\theta(s^2;q)} + \frac{\theta(a/\lambda s, -qasz/\lambda;q)}{\theta(s^{-2};q)}\Big)=0,
\]
see Lemma \ref{lem:c-function expansion}. The factor $(as\lambda, a\lambda/s;q)_\infty$ has $\frac{1}{as}q^{-\mathbb N} \cup \frac{s}{a} q^{-\mathbb N}$ as the set of zeros. But it does not seem possible to give explicit expressions for the zeros of the second factor, the sum of $\theta$-functions, except in case $z=1$.
\betagin{lemma} \lambdabel{lem:c-function}
For $z=1$,
\[
c_1(\lambda) = \frac{(as/\lambda, a/s \lambda;q)_\infty \theta(a^2\lambda^2q;q^2)}{ (a^2,\lambda^{-2};q)_\infty \theta(qs^2;q^2)}.
\]
As a consequence, $c_1(\lambda^{-1})=0$ if and only if $\lambda\in \Gamma$ with
\[
\Gamma = \frac{1}{as}q^{-\mathbb N} \cup \frac{s}{a} q^{-\mathbb N} \cup a q^{\hf+\mathbb Z} \cup (-a)q^{\hf + \mathbb Z}.
\]
\end{lemma}
\betagin{proof}
For $z=1$ we can simplify the expression for $c_z(\lambda)$ using identities for $\theta$-functions from Section \ref{sec:preliminaries}. We have
\[
\theta(-qaz \lambda s^{\pm 1};q)= \theta(-qa\lambda s^{\pm 1};q ) = \frac{1}{a\lambda s^{\pm 1}} \theta(-a\lambda s^{\pm 1};q),
\]
so that
\[
c_1(\lambda) = \frac{s}{a\lambda} \frac{(as/\lambda, a/s \lambda;q)_\infty}{ (a^2,\lambda^{-2};q)_\infty \theta(-q,s^2;q)} \Big( \theta(as\lambda, -a\lambda/s;q) - \theta(a\lambda/s, -as\lambda;q) \Big).
\]
Use the fundamental $\theta$-function identity \eqref{eq:theta identity} with
\[
(x,y,v,w) = (a\lambda s^\frac12, a \lambda s^{-\frac12}, s^\frac12, -s^{-\frac12})
\]
for a fixed root $s^\frac12$ of $s$, to obtain
\[
\betagin{split}
\theta(as\lambda, -a\lambda/s;q) - \theta(a\lambda/s, -as\lambda;q) &= \frac{ a\lambda}{s} \frac{ \theta(a^2 \lambda^2, s, -1, -s;q) }{\theta(a\lambda,-a\lambda;q)}.
\end{split}
\]
Then simplifying the remaining expressions using \eqref{eq:simple theta identities} gives the result.
\end{proof}
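The zero set $\Gamma$ can be read off from this expression as follows: replacing $\lambda$ by $\lambda^{-1}$ gives
\[
c_1(\lambda^{-1}) = \frac{(as\lambda, a\lambda/s;q)_\infty\, \theta(a^2\lambda^{-2}q;q^2)}{ (a^2,\lambda^{2};q)_\infty\, \theta(qs^2;q^2)},
\]
and $(as\lambda;q)_\infty$ vanishes precisely for $\lambda\in\frac{1}{as}q^{-\mathbb N}$, $(a\lambda/s;q)_\infty$ vanishes precisely for $\lambda\in\frac{s}{a}q^{-\mathbb N}$, while $\theta(a^2\lambda^{-2}q;q^2)=0$ precisely if $a^2\lambda^{-2}q\in q^{2\mathbb Z}$, i.e.~$\lambda\in aq^{\hf+\mathbb Z}\cup(-a)q^{\hf+\mathbb Z}$.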
Recall that we assumed $0 < a^2 <1$, and $s \in \mathbb T$ or $s \in \mathbb R$ with $q<s^2<1$. Using Lemma~\ref{lem:c-function} we can read off the zeros of $c_1(\lambda^{-1})$ that are inside $(-1,1)$. This gives the following result for the spectrum.
\betagin{prop} \lambdabel{prop:spectrum}
The spectrum of the self-adjoint $q$-difference operator $L$ on $\mathcal L^2_1$ is
\[
[-2,2]\cup \mu(S_1) ,
\]
where for $s \in \mathbb T\setminus\{-1,1\}$ or $s \in \mathbb R$ such that $|s/a| \geq 1$
\[
S_1 = \left\{\pm aq^{m+\frac12} \mid m \in \mathbb Z \thetaxt{ such that } -1<aq^{m+\frac12} < 1 \right\},
\]
and for $s \in \mathbb R$ such that $|s/a|<1$
\[
S_1 = \left\{ \frac{s}{a} \right\} \cup \left\{\pm aq^{m+\frac12} \mid m \in \mathbb Z \thetaxt{ such that } -1<aq^{m+\frac12} < 1 \right\}.
\]
\end{prop}
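For the reader's convenience we indicate how $S_1$ is obtained from Lemma~\ref{lem:c-function}: the zeros in $\frac{1}{as}q^{-\mathbb N}$ never lie in $(-1,1)$, since $|as|<1$; the zeros $\frac{s}{a}q^{-k}$ with $k\geq 1$ have modulus greater than $1$, since $|s|>q^{\hf}$; and for $s\in\mathbb T\setminus\{-1,1\}$ the point $s/a$ is not real. What remains are the points $\pm aq^{m+\hf}$ lying in $(-1,1)$, together with the point $s/a$ when $s$ is real with $|s/a|<1$.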
\section{The integral transform} \lambdabel{sec:integral transform}
In this section we arrive at the main result. We define an integral transform $\mathcal F$ that can be considered as a $q$-analog of the Jacobi function transform. We show that $\mathcal F$ is unitary and we determine the inverse of $\mathcal F$. As a consequence we find orthogonality relations for $q^{-1}$-Al-Salam--Chihara polynomials, and hence we have a solution of the corresponding moment problem. Throughout this section we assume $z=1$ and omit all the subscripts $z$; in particular, $\mathcal L^2=\mathcal L^2_1$ and $c(z)=c_1(z)$.\\
We define the integral transform $\mathcal F$ as follows.
\betagin{Def}
Let $\mathcal D_0 \subseteq \mathcal L^2$ be the subset consisting of finitely supported functions. $\mathcal F:\mathcal D_0 \to F(\mathbb T \cup S)$ is given by \[
(\mathcal F f)(\lambda) = \int_{-1}^{\infty(1)} f(x) \phi_\lambda(x) w(x) \, d_qx, \qquad \lambda \in \mathbb T \cup S.
\]
\end{Def}
Let $\nu$ be the measure defined by
\[
\int f(\lambda) \, d\nu(\lambda) = \frac{1}{4 K\pi i} \int_\mathbb T f(\lambda) W(\lambda) \frac{ d\lambda}{\lambda} + \frac{1}{K}\sum_{\lambda \in S} f(\lambda) \hat W(\lambda),
\]
where $\mathbb T$ is oriented in the counter-clockwise direction,
\[
K=(1-q)\frac{ \theta(-1;q) }{\theta(-a^2;q)},
\]
\[
W(\lambda) = \frac{1}{|c(\lambda)|^2} =
\left|\frac{ (a^2,\lambda^{-2};q)_\infty \theta(qs^2;q^2)}{(as/\lambda, a/s \lambda;q)_\infty \theta(a^2\lambda^2q;q^2)} \right|^2,\qquad \lambda \in \mathbb T,
\]
and
\[
\hat W(\lambda) = \operatorname*{Res}_{\lambda'=\lambda}\left( \frac{1}{\lambda' c(\lambda') c(\lambda'^{-1}) } \right), \qquad \lambda \in S,
\]
where $S=S_1$ is given in Proposition \ref{prop:spectrum}. The residues can be computed explicitly, which gives the following expressions: for $\pm aq^{m+\hf} \in S$,
\[
\betagin{split}
\hat W(\pm aq^{m+\hf}) & =
\frac{(a^2,a^2, a^2q, a^{-2}q^{-1};q)_\infty \theta(qs^2,q/s^2;q^2)}{2(q^2;q^2)_\infty^2 (\pm a^2 q^\hf s, \pm a^2 q^\hf / s, \pm q^{-\hf}s, \pm q^{-\hf}/s ;q)_\infty \theta(a^4 q^2;q^2) } \\
& \quad \times \frac{ 1- a^2 q^{2m+1} }{1-a^2 q} \frac{(\pm a^2 q^\hf s, \pm a^2 q^\hf /s;q)_m }{ (\pm q^{\frac32} s, \pm q^{\frac32}/s;q)_m } q^{m(m+1)},
\end{split}
\]
and if $s/a \in S$,
\[
\betagin{split}
\hat W(s/a) = \frac{ (a^2, s^2/a^2;q)_\infty \theta(s^2q;q^2) }{(q,s^2;q)_\infty \theta(a^4q/s^2;q^2) }.
\end{split}
\]
Let $\mathcal H$ be the Hilbert space consisting of the $\nu$-square-integrable functions $f$ satisfying $f(\lambda)=f(\lambda^{-1})$ $\nu$-almost everywhere, with inner product
\[
\lambdangle f,g \rangle_{\mathcal H} = \int f(\lambda) \overline{g(\lambda)}\, d\nu(\lambda).
\]
Define $\mathcal G:\mathcal H \to F(I)$ by
\[
(\mathcal G g)(x) = \lambdangle g, \phi_{\bullet}(x) \rangle_{\mathcal H}, \qquad x \in I.
\]
We can now formulate the main result.
\betagin{theorem} \lambdabel{thm:main}
$\mathcal F$ extends uniquely to a unitary operator $\mathcal F:\mathcal L^2 \to \mathcal H$, with inverse $\mathcal G$.
\end{theorem}
As a consequence we have orthogonality relations for the functions $\phi_\bullet(x)$ with respect to the measure $\nu$. Since $\phi_\lambda(-q^{n})$ is an Al-Salam--Chihara polynomial by Lemma \ref{lem:phi=ASCpol}, $\nu$ is a solution of the corresponding indeterminate moment problem. Note that the polynomials are not dense in $\mathcal H$, so the measure $\nu$ is not an $N$-extremal solution.
\betagin{cor} \lambdabel{cor:solution moment problem}
The set $\{\phi_\bullet(x) \mid x \in I\}$ is an orthogonal basis for $\mathcal H$, with
\[
\lambdangle \phi_\bullet(x), \phi_\bullet(y) \rangle_{\mathcal H} = \delta_{x,y} \frac{1}{|x| w(x) }.
\]
In particular, the $q^{-1}$-Al-Salam--Chihara polynomials $P_n(\lambda) = P_n(\lambda;s/a,1/sa;q^{-1})$ satisfy
\[
\int P_n(\lambda) P_{n'}(\lambda) \, d\nu(\lambda) = \delta_{n,n'}\frac{a^{2n}}{q^n w(-q^n)}.
\]
\end{cor}
\betagin{rem}
The kernel $\phi_\lambda$ defined in Definition \ref{def:phi} is a linear combination of the functions $\psi_\lambda(\cdot;s^{\pm 1})$, which are essentially little $q$-Jacobi functions. Furthermore, $\mathcal F$ diagonalizes the $q$-hypergeometric difference operator $L$ considered as an unbounded operator on $\mathcal L^2$, i.e.
\[
\mathcal F \, L \, \mathcal F^{-1} = M_\mu,
\]
where $M_\mu$ is multiplication by $\mu(\,\cdot\,)$ on $\mathcal H$. So we may consider $\mathcal F$ as another little $q$-Jacobi function transform.
\end{rem}
\betagin{rem}
As already mentioned in Section 3, the symmetric Al-Salam--Chihara polynomials and the continuous $q^{-1}$-Laguerre polynomials are special cases of the Al-Salam--Chihara polynomials we consider, so we also obtained a solution for the corresponding moment problems. These solutions seem to be new. We can also (formally) obtain a solution of the Al-Salam--Carlitz II moment problem.
Assume $|s/a|\geq 1$, then from Proposition \ref{prop:spectrum} we see that the operator $aL_{a,s}$ has spectrum
\[
[-2a,2a] \cup \big\{ \pm (a^2 q^{m+\hf} + q^{-m-\hf}) \mid m \in \mathbb Z \thetaxt{ such that } -1 < aq^{m+\hf} < 1\big\}.
\]
For $a \to 0$ the continuous spectrum shrinks to $\{0\}$ and the discrete spectrum becomes $\{ \pm q^{-m-\hf}\mid m \in \mathbb Z\}$. From the recurrence relation for $a^{-n} P_n(\pm a q^{m+\hf})$ we find that the polynomials $p_n(\pm q^{-m-\hf} ) = \lim_{a \to 0} a^{-n}P_n(\pm a q^{m+\hf};s/a,1/as;q^{-1})$ satisfy
\[
\pm q^{-m-\hf} p_n = - q^{-n} p_{n+1} + (s+s^{-1}) q^{-n} p_n+(1-q^{-n}) p_{n-1}.
\]
Comparing this to the recurrence relation for the Al-Salam--Carlitz II polynomials $V_n^{(a)}(x;q)$ \cite[\S14.25]{KLS}, we find that $p_n(\lambda) = (-s)^n q^{\hf n(n-1)}V_n^{(s^{-2})}(\lambda;q)$. Letting $a \to 0$ in the orthogonality relations for $a^{-n}P_n(\pm a q^{m+\hf};s/a,1/as;q^{-1})$ we find that the polynomials $p_n$ satisfy
\[
\betagin{split}
\sum_{m \in \mathbb Z} & p_{n}(q^{-m-\hf}) p_{n'}(q^{-m-\hf}) \frac{q^{m(m+1)}}{(q^{\frac32} s,q^{\frac32}/s;q)_m } \\
& \quad + p_{n}(-q^{-m-\hf}) p_{n'}(-q^{-m-\hf}) \frac{q^{m(m+1)}}{(-q^{\frac32} s,-q^{\frac32}/s;q)_m } = \delta_{n,n'} N_n,
\end{split}
\]
where $N_n$ can be determined explicitly. This is a special case of the solutions found in \cite{Gr}.
\end{rem}
Let us now turn to the proof of Theorem \ref{thm:main}, which takes several steps. From the results of the previous section we obtain the following result.
\betagin{prop} \lambdabel{prop:isometry}
The map $\mathcal F$ extends uniquely to an isometry $\mathcal F:\mathcal L^2\to \mathcal H$.
\end{prop}
\betagin{proof}
Let $f_1,f_2 \in \mathcal D_0$, then by Propositions \ref{prop:E continuous}, \ref{prop:E discrete}
and the definition of the inner product $\lambdangle \cdot, \cdot \rangle_{\mathcal H}$,
\[
\lambdangle f_1,f_2 \rangle = \lambdangle E(\mathbb R) f_1,f_2 \rangle = \lambdangle \mathcal Ff_1,\mathcal Ff_2 \rangle_{\mathcal H},
\]
from which it follows that $\mathcal F$ extends to an isometry $\mathcal L^2\to \mathcal H$.
\end{proof}
We show that $\mathcal F$ is unitary by showing that $\mathcal G$ is indeed the inverse of $\mathcal F$. Note that $\phi_\lambda(x) = (\mathcal F d_x)(\lambda)$, where $d_x \in \mathcal L^2$ is given by $d_x(y) = \frac{\delta_{x,y}}{w(x)|x|}$. So $\phi_{\bullet}(x) \in \mathcal H$ by Proposition \ref{prop:isometry}, and we see that $\mathcal Gg$ exists for all $g \in \mathcal H$. Using the functions $d_x$ it is easy to show that $\mathcal G$ is a left-inverse of $\mathcal F$.
\betagin{lemma} \lambdabel{lem:GF=id}
$\mathcal G \mathcal F = \mathrm{id}_{\mathcal L^2}$.
\end{lemma}
\betagin{proof}
Let $f \in \mathcal L^2$ and $x \in I$. Using $(\mathcal F d_x)(\lambda) = \phi_\lambda(x)$ and Proposition \ref{prop:isometry} we have
\[
(\mathcal G \mathcal F f)(x) = \lambdangle \mathcal Ff , \mathcal Fd_x \rangle_{\mathcal H} = \lambdangle f,d_x \rangle = f(x). \qedhere
\]
\end{proof}
It requires more work to show that $\mathcal G$ is also a right inverse of $\mathcal F$. We use a classical method \cite{BM67}, \cite{Go65}, which essentially amounts to approximating with the Fourier transform. We need the truncated inner product, see \eqref{eq:truncated inner product}, of $\phi_\lambda$ and $\phi_{\lambda'}$ with $\lambda \neq \lambda'$. We first derive a result about these inner products that will be useful later on.
\betagin{lemma} \lambdabel{lem:properties truncated inner product}
For $l \in \mathbb N$ and $\lambda, \lambda' \in \mathbb C^*$ with $\mu(\lambda) \neq \mu(\lambda')$, the limit
\[
\lambdangle \phi_\lambda,\phi_{\lambda'} \rangle_l = \lim_{k \to \infty} \lambdangle \phi_\lambda, \phi_{\lambda'} \rangle_{k,l,k}
\]
exists and for $l \to \infty$,
\[
\lambdangle \phi_\lambda,\phi_{\lambda'} \rangle_l = K\sum_{\varepsilon,\eta \in \{-1,1\}} \frac{(\lambda^{\varepsilon} - \lambda'^\eta)(\lambda^{\varepsilon} \lambda'^{\eta} )^l c(\lambda^{\varepsilon}) c(\lambda'^{\eta})}{\mu(\lambda)-\mu(\lambda')}\Big(1+\mathcal O(q^l)\Big).
\]
\end{lemma}
\betagin{proof}
From Lemma \ref{lem:CDet1} we obtain
\[
\betagin{split}
\big(\mu(\lambda)-\mu(\lambda') \big) & \lambdangle \phi_\lambda,\phi_{\lambda'} \rangle_{k,l,m} = \\
&D(\phi_\lambda,\phi_{\lambda'})(-q^k) - D(\phi_\lambda,\phi_{\lambda'})(q^m) + D(\phi_\lambda,\phi_{\lambda'})(q^{-l-1}).
\end{split}
\]
Using $\phi_\lambda \in V_{\mu(\lambda)}$, see \eqref{eq:eigenspaces}, we obtain from the expression \eqref{eq:Casorati} for the Casorati determinant that
\[
\lim_{k \to \infty} D(\phi_\lambda,\phi_{\lambda'})(-q^k) - D(\phi_\lambda,\phi_{\lambda'})(q^k) = 0,
\]
which, upon setting $m=k$ in the identity above and letting $k \to \infty$, shows that
\[
\lambdangle \phi_\lambda,\phi_{\lambda'} \rangle_l = \frac{ D(\phi_\lambda,\phi_{\lambda'})(q^{-l-1})}{\mu(\lambda)-\mu(\lambda')}.
\]
The asymptotic behavior of this Casorati determinant can be obtained, using the $c$-function expansion in Lemma \ref{lem:c-function expansion}, from the asymptotic behavior of the Casorati determinant
$D(\Psi_\lambda,\Psi_{\lambda'})(q^{-l-1})$. From \eqref{eq:Casorati} and the asymptotic behavior \eqref{eq:asymptotics Psi} and \eqref{eq:asymptotics w} of $\Psi_\lambda$ and $w$, we obtain
\[
D(\Psi_\lambda,\Psi_{\lambda'})(q^{-l-1}) = K(\lambda-\lambda')(\lambda \lambda')^l \big( 1 + \mathcal O(q^l) \big), \qquad l \to \infty,
\]
from which the result follows.
\end{proof}
Next we show that $\lambdangle \phi_\lambda,\phi_{\lambda'} \rangle_l$ has a reproducing property similar to the Dirichlet kernel.
Let
\[
C_0(\mathbb T) = \{ g \in C(\mathbb T) \mid g(-1)=g(1)=0 \}.
\]
\betagin{prop} \lambdabel{prop:reproducing property}
For $g \in C_0(\mathbb T)$
\[
\lim_{l \to \infty} \frac{1}{4\pi i} \int_\mathbb T g(\lambda) \lambdangle \phi_\lambda,\phi_{\lambda'} \rangle_l \frac{d\lambda}{\lambda} =
\betagin{cases}
K\, g(\lambda') |c(\lambda')|^2,& \lambda'\in \mathbb T\setminus\{-1,1\},\\
0, & \lambda' \in S.
\end{cases}
\]
\end{prop}
\betagin{proof}
First consider $\lambda'= e^{i\theta'}$ with $\theta'\in(0,\pi)$. Using Lemma \ref{lem:properties truncated inner product} and substitution we have, for $l \to \infty$,
\[
\betagin{split}
I_l[g](\lambda') &= \frac{1}{4\pi i} \int_\mathbb T g(\lambda) \lambdangle \phi_\lambda,\phi_{\lambda'} \rangle_l \frac{d\lambda}{\lambda}\\
&= \frac{K}{2\pi} \sum_{\varepsilon,\eta \in \{-1,1\}} \int_0^\pi g(e^{i\theta}) \Big(F_l^{\varepsilon,\eta}(\theta,\theta')+\mathcal O(q^l)\Big) \, d\theta,
\end{split}
\]
where
\[
F_l^{\varepsilon,\eta}(\theta,\theta') =\frac{(e^{i\varepsilon\theta} - e^{i \eta \theta'})e^{il(\varepsilon\theta+\eta\theta')} c(e^{i\varepsilon \theta}) c(e^{i\eta \theta'})}{2\cos(\theta)-2\cos(\theta')}.
\]
Since $\lambda \mapsto c(\lambda)$ is continuous on $\mathbb T\setminus\{-1,1\}$, the function $\theta \mapsto F_l^{\varepsilon,\eta}(\theta,\theta')$ is continuous on $(0,\pi) \setminus\{\theta'\}$. If $\varepsilon = \eta$ the singularity at $\theta'$ is removable, so these terms vanish in the limit by the Riemann--Lebesgue lemma. This gives
\[
\lim_{l \to \infty} I_l[g](e^{i\theta'}) = \lim_{l \to \infty} \frac{K}{2\pi} \int_0^\pi g(e^{i\theta}) \Big(F_l^{1,-1}(\theta,\theta')+F_l^{-1,1}(\theta,\theta')\Big) \, d\theta,
\]
where dominated convergence is used to get rid of the $\mathcal O(q^l)$-terms. Furthermore, using trigonometric identities we obtain
\betagin{equation} \lambdabel{eq:F+F}
\betagin{split}
F_l^{1,-1}(\theta,\theta')+F_l^{-1,1}(\theta,\theta') &= \frac{[c(e^{-i\theta})c(e^{i\theta'}) - c(e^{i\theta})c(e^{-i\theta'})](e^{-i\theta}-e^{i\theta'})}{4 \sigman(\hf(\theta+\theta')) \sigman(\hf(\theta-\theta'))} e^{il (\theta'-\theta)} \\
& \quad + c(e^{i\theta})c(e^{i\theta'}) D_l(\theta-\theta'),
\end{split}
\end{equation}
where
\[
D_l(t)= \frac{ \sigman((2l+1)t) }{ \sigman(\hf t)}
\]
is the Dirichlet kernel. Note that the first term in \eqref{eq:F+F} has a removable singularity at $\theta = \theta'$, so using the Riemann-Lebesgue lemma again this term vanishes in the limit $l \to \infty$. Then from the well-known limit property of the Dirichlet kernel we obtain
\[
\lim_{l \to \infty} I_l[g](e^{i\theta'}) = K g(e^{i\theta'}) c(e^{i\theta'}) c(e^{-i\theta'}),
\]
which gives the result for $\lambda' \in \mathbb T\setminus\{-1,1\}$.
Next let $\lambda' \in S$. In this case $c(1/\lambda')=0$ and then since $|\lambda'|<1$, we obtain from Lemma \ref{lem:properties truncated inner product}
\[
\lim_{l \to \infty}\lambdangle \phi_\lambda,\phi_{\lambda'} \rangle_l = K\lim_{l \to \infty}\sum_{\varepsilon \in \{-1,1\}} \frac{(\lambda^{\varepsilon} - \lambda')(\lambda^{\varepsilon} \lambda' )^l c(\lambda^{\varepsilon}) c(\lambda')}{\mu(\lambda)-\mu(\lambda')}\Big(1+\mathcal O(q^l)\Big)=0,
\]
from which the result follows.
\end{proof}
Now we are ready to prove Theorem \ref{thm:main}.
\betagin{proof}[Proof of Theorem \ref{thm:main}]
From Proposition \ref{prop:isometry} and Lemma \ref{lem:GF=id} we already know that $\mathcal F$ is an isometry with left-inverse $\mathcal G$, so
we have to show that $\mathcal G$ is a right-inverse of $\mathcal F$.
Let $\mathcal H_0 \subseteq \mathcal H$ be the dense subspace consisting of functions $g \in \mathcal H$ such that $g|_\mathbb T \in C_0(\mathbb T)$ and $g|_S$ has finite support. Then for $\lambda'\in \mathbb T \cup S$,
\[
\betagin{split}
(\mathcal F \mathcal G g)(\lambda') &= \int_{-1}^{\infty(1)} \phi_{\lambda'}(x)\int g(\lambda) \phi_\lambda(x) \, d\nu(\lambda) \,w(x)\, d_qx\\
& = \lim_{l \to \infty} \frac{1}{4K \pi i} \int_\mathbb T g(\lambda) \lambdangle \phi_{\lambda},\phi_{\lambda'} \rangle_l \frac{d\lambda}{\lambda|c(\lambda)|^2} \\
& \quad + \frac{1}{K}\sum_{\lambda \in S} g(\lambda) \lambdangle \phi_{\lambda}, \phi_{\lambda'} \rangle\, \mathrm{Res}_{\hat \lambda = \lambda}\left( \frac{1}{\hat \lambda c(\hat \lambda) c(\hat\lambda^{-1})} \right).
\end{split}
\]
Using Proposition \ref{prop:reproducing property} for the integral part and Corollary \ref{cor:orthogonality relations} for the sum part, we find that this equals $g(\lambda')$. Since $\mathcal H_0$ is dense in $\mathcal H$ we find $\mathcal F \mathcal G = \mathrm{Id}_{\mathcal H}$.
\end{proof}
\betagin{thebibliography}{99}
\bibitem{ASC} W.A.~Al-Salam, T.S.~Chihara, \thetaxtit{Convolutions of orthonormal polynomials}, SIAM J.~Math.~Anal.~\thetaxtbf{7} (1976), no.~1, 16--28.
\bibitem{AI} R.~Askey, M.E.H.~Ismail, \thetaxtit{Recurrence relations, continued fractions, and orthogonal polynomials}, Mem.~Amer.~Math.~Soc.~\thetaxtbf{49} (1984), no.~300.
\bibitem{BM67} B.L.J. Braaksma, B. Meulenbeld, \thetaxtit{Integral transforms with generalized Legendre functions as kernels}, Compositio Math. \thetaxtbf{18} (1967), 235--287.
\bibitem{ChrI} J.S.~Christiansen, M.E.H.~Ismail, \thetaxtit{A moment problem and a family of integral evaluations}, Trans.~Amer.~Math.~Soc.~\thetaxtbf{358} (2006), no.~9, 4071--4097.
\bibitem{CI} T.S.~Chihara, M.E.H.~Ismail, \thetaxtit{Extremal measures for a system of orthogonal polynomials}, Constr.~Approx.~\thetaxtbf{9} (1993), no.~1, 111--119.
\bibitem{CK} J.S.~Christiansen, E.~Koelink, \thetaxtit{Self-adjoint difference operators and symmetric Al-Salam-Chihara polynomials}, Constr.~Approx.~\thetaxtbf{28} (2008), no.~2, 199--218.
\bibitem{DS63} N. Dunford, J.T. Schwartz, \thetaxtit{Linear Operators Part II}, Interscience, New York, 1963.
\bibitem{GR04} G. Gasper, M. Rahman, \thetaxtit{Basic Hypergeometric Series}, 2nd ed., Cambridge University Press, Cambridge, 2004.
\bibitem{Go65} F. G\"otze, \thetaxtit{Verallgemeinerung einer Integraltransformation von Mehler-Fock durch den von Kuipers und Meulenbeld eingef\"urten Kern $P_k^{m,n}(z)$}, Indag. Math. \thetaxtbf{27} (1965), 396-404.
\bibitem{Gr} W.~Groenevelt, \thetaxtit{Orthogonality relations for Al-Salam-Carlitz polynomials of type II}, J.~Approx.~Theory \thetaxtbf{195} (2015), 89--108.
\bibitem{GrK} W.~Groenevelt, E.~Koelink, \thetaxtit{The indeterminate moment problem for the q-Meixner polynomials}, J.~Approx.~Theory \thetaxtbf{163} (2011), no.~7, 838--863.
\bibitem{K95} T. Kakehi, \thetaxtit{Eigenfunction expansion associated with the Casimir operator on the quantum group ${\rm SU}_q(1,1)$}, Duke Math. J. \thetaxtbf{80} (1995), 535--573.
\bibitem{KMU95} T. Kakehi, T. Masuda, K. Ueno, \thetaxtit{Spectral analysis of a $q$-difference operator which arises from the quantum ${\rm SU}(1,1)$ group}, J. Operator Theory \thetaxtbf{33} (1995), 159--196.
\bibitem{KLS} R.~Koekoek, P.A.~Lesky, R.~Swarttouw, \thetaxtit{Hypergeometric orthogonal polynomials and their q-analogues}, Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2010.
\bibitem{KS01} E.~Koelink, J.V.~Stokman, \thetaxtit{Fourier transforms on the quantum SU(1,1) group}, With an appendix by Mizan Rahman. Publ.~Res.~Inst.~Math.~Sci.~\thetaxtbf{37} (2001), no.~4, 621--715.
\bibitem{KS03} E. Koelink, J.V. Stokman, \thetaxtit{The big $q$-Jacobi function transform}, Constr. Approx. \thetaxtbf{19} (2003), 191--235.
\bibitem{Koo84} T.H. Koornwinder, \thetaxtit{Jacobi functions and analysis on noncompact semisimple Lie groups}, in: Special Functions: Group Theoretical Aspects and Applications, R.A. Askey, T.H. Koornwinder, W. Schempp (Eds.), D. Reidel Publ. Comp., Dordrecht, 1984, 1--85.
\bibitem{S} B.~Simon, \thetaxtit{The classical moment problem as a self-adjoint finite difference operator}, Adv.~Math.~\thetaxtbf{137} (1998), no.~1, 82--203.
\bibitem{Schm} K.~Schm\"udgen, \thetaxtit{The moment problem}, Graduate Texts in Mathematics, \thetaxtbf{277}, Springer, Cham, 2017.
\end{thebibliography}
\end{document}
\begin{document}
\title{From the totally asymmetric simple exclusion process to the KPZ fixed point}
\author{Jeremy Quastel \and Konstantin Matetski}
\address{University of Toronto, 40 St. George Street, Toronto, Ontario, Canada M5S 2E4}
\email{[email protected], [email protected]}
\subjclass[2010]{Primary 60K35; Secondary 82C27}
\keywords{TASEP, growth process, biorthogonal ensemble, determinantal point process, KPZ fixed point}
\begin{abstract}
These notes are based on the article [Matetski, Quastel, Remenik, \emph{The KPZ fixed point}, 2016] and give a self-contained exposition of the construction of the KPZ fixed point, a Markov process at the centre of the KPZ universality class. Starting from Sch\"{u}tz's formula for the transition probabilities of the totally asymmetric simple exclusion process, the method proceeds by rewriting them in the biorthogonal ensemble/non-intersecting path representation found by Borodin, Ferrari, Pr\"{a}hofer and Sasamoto. We derive an explicit formula for the correlation kernel which involves transition probabilities of a random walk forced to hit a curve defined by the initial data. This in particular yields a Fredholm determinant formula for the multipoint distribution of the height function of the totally asymmetric simple exclusion process with arbitrary initial condition. In the 1:2:3 scaling limit the formula leads in a transparent way to a Fredholm determinant formula for the KPZ fixed point, in terms of an analogous kernel based on Brownian motion. The formula readily reproduces known special self-similar solutions such as the Airy$_1$ and Airy$_2$ processes.
\end{abstract}
\maketitle
\thispagestyle{empty}
\section{The totally asymmetric simple exclusion process}
The \emph{totally asymmetric simple exclusion process} (TASEP) is a basic interacting particle system studied in non-equilibrium statistical mechanics. The system consists of particles performing totally asymmetric nearest neighbour random walks on the one-dimensional integer lattice with the exclusion rule. Each particle independently attempts to jump to the neighbouring site to the right at rate $1$, the jump being allowed only if that site is unoccupied. More precisely, if we denote by $\eta \in \{0,1\}^\Z$ a particle configuration (where $\eta_x = 1$ if there is a particle at the site $x$, and $\eta_x = 0$ if the site is empty), then TASEP is a Markov process with infinitesimal generator acting on cylinder functions $f : \{0,1\}^\Z \to \R$ by
\[
(L f)(\eta) = \sum_{x \in \Z} \eta_x (1 - \eta_{x+1}) \big(f(\eta^{x, x+1}) - f(\eta)\big),
\]
where $\eta^{x, x+1}$ denotes the configuration $\eta$ with interchanged values at $x$ and $x+1$:
\[
\eta^{x, x+1}_y =
\begin{cases}
\eta_{x+1}, &~\text{if}~ y =x,\\
\eta_{x}, &~\text{if}~ y =x + 1,\\
\eta_{y}, &~\text{if}~ y \notin \{x, x+1\}.\\
\end{cases}
\]
See~\cite{ligg1} for the proof of the non-trivial fact that this process is well-defined.
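To make the dynamics concrete, here is a minimal simulation sketch (in Python; an illustration added to these notes, not part of the original exposition) of TASEP on a ring of $L$ sites, using the Gillespie algorithm: the total jump rate equals the number of particles with an empty right neighbour, and after an exponential waiting time one of the allowed jumps is performed uniformly at random.
\begin{verbatim}
import numpy as np

def simulate_tasep_ring(eta, T, rng=np.random.default_rng(0)):
    """Continuous-time TASEP on a ring; eta is a 0/1 occupation array."""
    eta = np.array(eta, dtype=int)
    L, t = len(eta), 0.0
    while True:
        # particles that may jump: occupied sites with an empty right neighbour
        allowed = np.where((eta == 1) & (np.roll(eta, -1) == 0))[0]
        if len(allowed) == 0:
            return eta                             # fully jammed configuration
        t += rng.exponential(1.0 / len(allowed))   # total rate = number of allowed jumps
        if t > T:
            return eta
        x = rng.choice(allowed)                    # each allowed jump has rate 1
        eta[x], eta[(x + 1) % L] = 0, 1            # perform the jump to the right

print(simulate_tasep_ring([1, 0] * 10, T=5.0))
\end{verbatim}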
\begin{exercise} Prove that the following measures $\mu$ are invariant for TASEP, i.e. $\int (Lf) d\mu=0$:
\begin{enumerate}[topsep=0pt]
\item the Bernoulli product measures with any density $\rho\in [0,1]$,
\item the Dirac measure on any configuration with $\eta_x =1$ for $x\ge x_0$, $\eta_x=0$ for $x<x_0$.
\end{enumerate}
It is known~\cite{ligg1} that these are the only invariant measures.
\end{exercise}
\noindent The TASEP dynamics preserves the order of particles. Let us denote positions of particles at time $t \geq 0$ by
\[
\cdots <X_t(2)<X_t(1)< X_t(0)< X_t(-1)<X_t(-2)< \cdots,
\]
where $X_t(i) \in \Z$ is the position of the $i$-{th} particle. Adding $\pm\infty$ into the state space and placing a necessarily infinite number of particles at infinity allows for left- or right-finite data with no change of notation (the particles at $\pm\infty$ play no role in the dynamics). We follow the standard
practice of ordering particles from the right; for right-finite data the rightmost particle is labelled $1$.
TASEP is a particular case of the \emph{asymmetric simple exclusion process} (ASEP) introduced by Spitzer in~\cite{Spitzer}. Particles in this model jump to the right with rate $p$ and to the left with rate $q$ such that $p + q = 1$, following the exclusion rule. Obviously, in the case $p = 1$ we get TASEP. In the case $p \in (0,1)$ the model becomes significantly more complicated compared to TASEP; for example, Sch\"{u}tz's formula described in Section~\ref{sec:distr} below cannot be written as a determinant, which prevents the analysis below from being carried out in the general case.
ASEP is important because of its weakly asymmetric limit: diffusively rescaling the growth process introduced below as
$\ep^{1/2} h_{\ep^{-2} t}(\ep^{-1} z)$ while at the same time taking $q-p=\mathcal{O}(\ep^{1/2})$ yields the KPZ equation~\cite{berGiaco}.
\subsection{The growth process.}
\label{sec:growth}
Of special interest in non-equilibrium physics is the \emph{growth process} associated to TASEP. More precisely, let
\[
X^{-1}_t(u) = \min \bigl\{k \in \Z : X_t(k) \leq u\bigr\}
\]
denote the label of the rightmost particle which sits to the left of, or at, $u$ at time $t$.
The \emph{TASEP height function} associated to $X_t$ is given for $z\in\Z$ by
\begin{equation}\label{defofh}
h_t(z) = -2 \left(X_t^{-1}(z-1) - X_0^{-1}(-1) \right) - z,
\end{equation}
which fixes $h_0(0)=0$. The height function is a random walk path $h_t(z+1) = h_t(z) +\hat{\eta}_t(z)$ with $\hat{\eta}_t(z)=1$ if there is a particle at $z$ at time $t$ and $-1$ if there is no particle at $z$ at time $t$. We can also easily extend the height function to a continuous function of $x\in \R$ by linearly interpolating between the integer points.
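For illustration (again an addition to these notes, not from the original), the height profile over a window $\{0,\dots,L\}$ can be computed directly from the occupation variables via these $\pm1$ increments, given its value at $z=0$:
\begin{verbatim}
import numpy as np

def height_profile(eta, h0=0):
    """h(z) for z = 0,...,len(eta), from the increments h(z+1) - h(z) = 2*eta[z] - 1."""
    steps = 2 * np.asarray(eta, dtype=int) - 1    # +1 at occupied sites, -1 at empty ones
    return np.concatenate(([h0], h0 + np.cumsum(steps)))

print(height_profile([1, 0, 1, 1, 0, 0]))         # [0, 1, 0, 1, 2, 1, 0]
\end{verbatim}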
\begin{exercise} Show that the dynamics of $h_t$ is that local max's become local min's at rate $1$; i.e. if \break$h_t(z) = h_t(z\pm 1) +1$ then $h_t(z)\mapsto h_t(z)-2$ at rate $1$, the rest of the height function remaining unchanged (see the figure below). What happens if we consider ASEP?
\end{exercise}
\begin{figure}
\caption{Evolution of TASEP and its height function.}
\end{figure}
Two standard examples of initial data for TASEP are the \emph{step initial data} (when $X_0(k) = -k$ for $k \geq 1$) and \emph{$d$-periodic initial data} (when $X_0(k) = -d(k-1)$ for $k \in \Z$) with $d \geq 2$. Analysis of TASEP with one of these initial conditions is much easier than in the general case. In particular the results presented in Sections~\ref{sec:exact} and \ref{sec:123} below were known from~\cite{borFerPrahSasam,bfp,ferrariMatr} and served as a starting point for our work.
\section{Distribution function of TASEP}
\label{sec:distr}
If there are a finite number of particles, we can alternatively denote their positions
\[
\vec x \in \Omega_N = \bigl\{x_N < \cdots < x_1 \bigr\} \subset \Z^N,
\]
where $\Omega_N$ is called the \emph{Weyl chamber}.
The transition probabilities for TASEP with a finite number of particles were first obtained in~\cite{MR1468391} using the \emph{(coordinate) Bethe ansatz}.
\begin{proposition}[Sch\"{u}tz's formula]\label{prop:Schuetz}
The transition probability for $2 \le N <\infty$ TASEP particles has a determinantal form
\begin{equation}\label{eq:Green}
\pp \bigl(X_t = \vec x\, |\, X_0 = \vec y\bigr)=\det\big[F_{i - j}(x_{N+1 - i}-y_{N+1 - j},t)\big]_{1\leq i,j\leq N}
\end{equation}
with $\vec x, \vec y \in \Omega_N$, and
\begin{equation}\label{eq:defF}
F_{n}(x, t)=\frac{(-1)^n}{2\pi \I} \oint_{\Gamma_{0,1}} dw\,\frac{(1-w)^{-n}}{w^{x-n+1}}e^{t(w-1)},
\end{equation}
where $\Gamma_{0,1}$ is any simple loop oriented anticlockwise which includes $w=0$ and $w=1$.
\end{proposition}
In the rest of this section we provide a proof of this result using Bethe ansatz and in Section~\ref{ss:schutz_check} we show that Sch\"{u}tz's formula can alternatively be easily checked to satisfy the Kolmogorov forward equation.
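Before turning to the proofs, Proposition~\ref{prop:Schuetz} can be tested numerically. The following sketch (Python; a hypothetical illustration added to these notes) evaluates $F_n$ from \eqref{eq:defF} by trapezoidal quadrature on the circle $|w|=2$, which encloses both $0$ and $1$, and checks for $N=2$ that the determinant in \eqref{eq:Green} sums to $1$ over a truncated range of final configurations.
\begin{verbatim}
import numpy as np

M, r = 400, 2.0                              # quadrature points, contour radius
w = r * np.exp(2j * np.pi * np.arange(M) / M)

def F(n, x, t):
    """F_n(x,t), by quadrature over the circle |w| = r (which encircles 0 and 1)."""
    vals = (1 - w) ** (-n) * w ** (n - x) * np.exp(t * (w - 1.0))
    return ((-1) ** n * vals.mean()).real

def schuetz_prob(x, y, t, N=2):
    """P(X_t = x | X_0 = y) from Schuetz's formula, for N = 2 particles."""
    A = [[F(i - j, x[N - i] - y[N - j], t) for j in range(1, N + 1)]
         for i in range(1, N + 1)]
    return np.linalg.det(np.array(A))

y, t, K = (0, -2), 1.0, 20
total = sum(schuetz_prob((x1, x2), y, t)
            for x1 in range(y[0], y[0] + K)
            for x2 in range(y[1], min(x1, y[1] + K)))
print(total)                                 # should be very close to 1
\end{verbatim}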
\subsection{Proof of Sch\"{u}tz's formula using Bethe ansatz.}
\label{schutzba}
In this section we will prove Proposition~\ref{prop:Schuetz} following the argument of~\cite{MR2824604}. We will consider $N \geq 2$ particles in TASEP and derive the master (Kolmogorov forward) equation for the process $X_t = \bigl(X_t(1), \cdots, X_t(N)\bigr) \in \Omega_N$, where $\Omega_N$ is the Weyl chamber defined above. For a function $F : \Omega_N \to \R$ we introduce the operator
\[
\big(\CL^{(N)} F\big)(\vec x) = -\sum_{k=1}^N \1{x_k - x_{k+1} > 1} \bigl(\nablam_{\!k} F\bigr)(\vec x),
\]
where $x_{N+1} = -\infty$ and $\nablam_{\!k}$ is the discrete derivative
\begin{equation}\label{eq:L}
\nablam f(z) = f(z) - f(z-1), \qquad f : \Z \to \R,
\end{equation}
acting on the $k$-th argument of $F$. One can see that this is the infinitesimal generator of TASEP in the variables $X_t$. Thus, if
\[
P^{(N)}_t(\vec y, \vec x) = \pp \bigl(X_t = \vec x\, |\, X_0 = \vec y\bigr)
\]
is the transition probability of $N$ particles of TASEP from $\vec y \in \Omega_N$ to $\vec x \in \Omega_N$, then \emph{the master equation} (=\emph{Kolmogorov forward equation}) is
\begin{equation}\label{eq:forward}
\frac{d}{dt} P^{(N)}_t(\vec y, \cdot) = \CL^{(N)} P^{(N)}_t(\vec y, \cdot), \qquad P^{(N)}_0(\vec y, \cdot) = \delta_{\vec y, \cdot}.
\end{equation}
The idea of~\cite{Bethe} was to rewrite \eqref{eq:forward} as a differential equation with constant coefficients and boundary conditions, i.e. if $u^{(N)}_t : \Z^N \to \R$ solves
\begin{equation}\label{eq:master}
\frac{d}{dt} u^{(N)}_t = -\sum_{k=1}^N \nablam_{\!k} u^{(N)}_t, \qquad u^{(N)}_0(\vec x) = \delta_{\vec y, \vec x},
\end{equation}
with the \emph{boundary conditions}
\begin{equation}\label{eq:boundary}
\nablam_{\!k} u^{(N)}_t (\vec x) = 0, \quad \text{when} \quad x_{k} = x_{k+1} + 1,
\end{equation}
then for $\vec x, \vec y \in \Omega_N$ one has
\begin{equation}\label{eq:P_u}
P^{(N)}_t(\vec y, \vec x) = u^{(N)}_t(\vec x).
\end{equation}
\begin{exercise}
Prove this by induction on $N \geq 1$.
\end{exercise}
\noindent The strategy is now to find a general solution to the master equation \eqref{eq:master} and then a particular one which satisfies the boundary and initial conditions. The method is known as \emph{(coordinate) Bethe ansatz}.
\subsubsection*{Solution to the master equation.}
For a fixed $\vec y \in \Z^N$, we are going to find a solution to the equation \eqref{eq:master}. For this we will consider indistinguishable particles, so that the quantity attached to the state $\{x_1, \cdots, x_N\} \subset \Z$ of the system is the symmetrized sum
\[
\sum_{\sigma \in \S_N} u^{(N)}_t(\vec x_\sigma),
\]
where $\S_N$ is the symmetric group and $\vec x_\sigma = \bigl(x_{\sigma(1)}, \cdots, x_{\sigma(N)}\bigr)$. With this in mind we define the generating function
\[
\phi^{(N)}_t(\vec w) = \frac{1}{|\S_N|}\sum_{\vec x \in \Z^N} \sum_{\sigma \in \S_N} \vec w^{\vec x_\sigma} u^{(N)}_t(\vec x_\sigma),
\]
where $\vec w \in \C^{N}$, $\vec w^{\vec x} = w_1^{x_1} \cdots w_N^{x_N}$ and $|\S_N| = N!$. Since we would like the identity \eqref{eq:P_u} to hold, it is reasonable to assume that $\bigl|u^{(N)}_t(\vec x)\bigr| \leq \min_{i} \frac{t^{x_i - y_i}}{(x_i - y_i)!}$, which guarantees absolute convergence of the sum above and justifies all the following computations. Then \eqref{eq:master} yields
\begin{align*}
\frac{d}{dt} \phi^{(N)}_t(\vec w) &= \frac{1}{|\S_N|} \sum_{\vec x \in \Z^N} \sum_{\sigma \in \S_N} \vec w^{\vec x_\sigma} \frac{d}{dt} u^{(N)}_t(\vec x_\sigma) \\
&=- \frac{1}{|\S_N|} \sum_{\vec x \in \Z^N} \sum_{\sigma \in \S_N} \vec w^{\vec x_\sigma} \sum_{k=1}^N \nablam_{\!k} u^{(N)}_t(\vec x_\sigma) \\
&= -\frac{1}{|\S_N|} \sum_{k=1}^N \sum_{\vec x \in \Z^N} \sum_{\sigma \in \S_N} \vec w^{\vec x_\sigma} \nablam_{\!k} u^{(N)}_t(\vec x_\sigma) \\
&= \frac{1}{|\S_N|} \sum_{\vec x \in \Z^N} \sum_{\sigma \in \S_N} \vec w^{\vec x_\sigma} u^{(N)}_t(\vec x_\sigma) \sum_{k=1}^N \bigl(w_{\sigma(k)} - 1\bigr) \\
&= \phi^{(N)}_t(\vec w) \sum_{k=1}^N \eps(w_k),
\end{align*}
where $\eps(w) = w-1$ for $w \in \C$. From the last identity we conclude that
\[
\phi^{(N)}_t(\vec w) = C(\vec w) \prod_{k=1}^N e^{\eps(w_k) t},
\]
for a function $C : \C^N \to \C$ which is independent of $t$, but can depend on $\vec y$. Then Cauchy's integral theorem gives a solution to the master equation
\begin{align}\label{eq:gen_sol}
u^{(N)}_t(\vec x) = \frac{1}{(2\pi \I)^N} \sum_{\sigma \in \S_N} \oint_{\Gamma_0} d \vec w\, \frac{\phi^{(N)}_t(\vec w)}{\vec w_\sigma^{\vec x+1}} = \frac{1}{(2\pi \I)^N} \sum_{\sigma \in \S_N} \oint_{\Gamma_0} d\vec w\, C(\vec w) \prod_{k=1}^N \frac{e^{\eps(w_k) t}}{w_{\sigma(k)}^{x_k +1}},
\end{align}
where $\vec x + 1 = \bigl(x_1 + 1, \cdots, x_N+1\bigr)$ and $\Gamma_0$ is a contour in $\C^N$ around the origin. Our next goal is to find $C$ and $\Gamma_0$ such that this solution satisfies the initial and boundary conditions for \eqref{eq:master}.
\subsubsection*{Satisfying the boundary conditions.}
We are going to find functions $C$ and a contour $\Gamma_0$ such that the solution \eqref{eq:gen_sol} satisfies the boundary conditions \eqref{eq:boundary}. We will look for a solution in a more general form than \eqref{eq:gen_sol}. More precisely, we will consider functions $C_\sigma(\vec w)$ depending on $\sigma \in \S_N$, which gives us the \emph{Bethe ansatz solution}
\begin{equation}\label{eq:Bethe_u}
u^{(N)}_t(\vec x) = \frac{1}{(2\pi \I)^N} \sum_{\sigma \in \S_N} \oint_{\Gamma_0} d\vec w\, C_\sigma(\vec w) \prod_{k=1}^N \frac{e^{\eps(w_k) t}}{w_{\sigma(k)}^{x_k +1}}.
\end{equation}
In the case $x_{k} = x_{k+1} + 1$, the boundary condition \eqref{eq:boundary} yields
\begin{align*}
\nablam_{\!k} u^{(N)}_t (\vec x) &=- \frac{1}{(2\pi \I)^N} \sum_{\sigma \in \S_N} \oint_{\Gamma_0} d \vec w \prod_{i \neq k, k+1} \frac{C_\sigma(\vec w)}{w_{\sigma(i)}^{x_i+1}} \frac{1 - w_{\sigma(k)}^{-1}}{w_{\sigma(k)}^{x_k} w_{\sigma(k+1)}^{x_{k+1}+1}} \prod_{i = 1}^N e^{\eps(w_i) t}\\
&= -\frac{1}{(2\pi \I)^N} \sum_{\sigma \in \S_N} \oint_{\Gamma_0} d \vec w \prod_{i \neq k, k+1} \frac{C_\sigma(\vec w)}{w_{\sigma(i)}^{x_i+1}} \frac{f(w_{\sigma(k)})}{(w_{\sigma(k)} w_{\sigma(k+1)})^{x_k}} \prod_{i = 1}^N e^{\eps(w_i) t} = 0.
\end{align*}
In particular, this identity holds if for $f(w) = 1 - w^{-1}$ we have
\[
\sum_{\sigma \in \S_N} \frac{C_\sigma(\vec w) f(w_{\sigma(k)})}{(w_{\sigma(k)} w_{\sigma(k+1)})^{x_k}} = 0,
\]
for all $\vec w \in \C^N$. Let $T_k \in \S_N$ be the transposition $(k, k+1)$, i.e. it interchanges the elements $k$ and $k+1$. Pairing the permutations $\sigma$ and $\sigma T_k$, the last identity holds if we have
\[
C_\sigma(\vec w) f(w_{\sigma(k)}) + C_{\sigma T_k}(\vec w) f(w_{\sigma(k + 1)}) = 0.
\]
In particular, one can see that the following functions satisfy this identity
\begin{equation}\label{eq:C_sigma}
C_\sigma(\vec w) = \sgn(\sigma) \prod_{i=1}^N f(w_{\sigma(i)})^i\, \psi(\vec w),
\end{equation}
for any function $\psi : \C^N \to \R$. Thus we need to find a specific function $\psi$ so that the initial condition in \eqref{eq:master} is satisfied.
\subsubsection*{Satisfying the initial condition.}
Since the equation \eqref{eq:master} preserves the Weyl chamber, it is sufficient to check the initial condition only for $\vec x, \vec y \in \Omega_N$. Combining \eqref{eq:Bethe_u} with \eqref{eq:C_sigma}, the initial condition at $t = 0$ is given by
\begin{equation}\label{eq:u_init}
\frac{1}{(2\pi \I)^N} \sum_{\sigma \in \S_N} \oint_{\Gamma_0} d\vec w \frac{C_\sigma(\vec w)}{\vec w_{\sigma}^{\vec x +1}} = \delta_{\vec y, \vec x}.
\end{equation}
If $\mathrm{id} \in \S_N$ is the identity permutation and $C_{\mathrm{id}}(\vec w) = \vec w^{\vec y}$ then obviously
\[
\frac{1}{(2\pi \I)^N} \oint_{\Gamma_0} d \vec w \frac{C_{\mathrm{id}}(\vec w)}{\vec w^{\vec x +1}} = \delta_{\vec y, \vec x}.
\]
For this to hold we need to choose the function $\psi$ in \eqref{eq:C_sigma} to be
\[
\psi(\vec w) = \prod_{i=1}^N f(w_{i})^{-i} w_i^{y_i}.
\]
Thus, a candidate for the solution is given by
\[
u^{(N)}_t(\vec x) = \frac{1}{(2\pi \I)^N} \sum_{\sigma \in \S_N} \sgn(\sigma) \oint_{\Gamma_0} d \vec w\, \prod_{k=1}^N \frac{f(w_{k})^{k - \sigma(k)} e^{\eps(w_k) t}}{w_{\sigma(k)}^{x_k - y_{\sigma(k)} +1}},
\]
which can be written as Sch\"{u}tz's formula \eqref{eq:Green}. Note that the contour should in fact be taken to go around both $0$ and $1$, i.e. to be the contour $\Gamma_{0,1}$ from \eqref{eq:defF}, since otherwise the determinant in \eqref{eq:Green} would vanish when $\vec x$ and $\vec y$ are far enough apart.
In order to complete the proof we still need to show that this solution satisfies the initial condition. To this end we notice that for $n \geq 0$ we have
\[
F_{-n}(x, 0) =\frac{(-1)^n}{2\pi \I} \oint_{\Gamma_{0}} dw\,\frac{(1-w)^{n}}{w^{x+n+1}},
\]
which in particular implies that $F_{-n}(x, 0) = 0$ for $x < -n$ and $x > 0$, and $F_{0}(x, 0) = \delta_{x, 0}$. In the case $x_N < y_N$, we have $x_N < y_k$ for all $k = 1, \ldots, N-1$, and $x_N - y_{N+1 - j} < 1-j$ since $y \in \Omega_N$. This yields $F_{1 - j}(x_{N}-y_{N+1 - j},0) = 0$ and the determinant in \eqref{eq:Green} vanishes, because the matrix contains a row of zeros. If $x_N \geq y_N$, then we have $x_k > y_N$ for all $k = 1, \ldots, N-1$, and all entries of the first column in the matrix from \eqref{eq:Green} vanish, except the first entry which equals $\delta_{x_N, y_N}$. Repeating this argument for $x_{N-1}$, $x_{N-2}$ and so on, we obtain that the matrix is upper-triangular with delta-functions at the diagonal, which gives us the claim.
\begin{remark}
Similar computations lead to the distribution function of ASEP~\cite{MR2824604}. Unfortunately, this distribution function does not have a determinantal form like \eqref{eq:Green}, which makes its analysis significantly more complicated.
\end{remark}
\subsection{Direct check of Sch\"{u}tz's formula.}
\label{ss:schutz_check}
We will show that the determinant in \eqref{eq:Green} satisfies the master equation \eqref{eq:master} with the boundary conditions \eqref{eq:boundary}, providing an alternate proof to the one in Section \ref{schutzba}. To this end we will use only the following properties of the functions $F_{n}$, which can be easily proved,
\begin{equation}\label{eq:F_props}
\partial_t F_{n}(x,t) = -\nablam F_{n}(x,t), \qquad F_{n}(x,t) = -\nablap F_{n+1}(x,t),
\end{equation}
where $\nablam$ has been defined in \eqref{eq:L} and $\nablap f(x) = f(x+1) - f(x)$. Furthermore, it will be convenient to define the vectors
\begin{equation}\label{eq:propsF}
H_i(x, t) = \bigl[F_{i - 1}(x-y_{N},t), \cdots, F_{i - N}(x-y_{1},t) \bigr]'.
\end{equation}
Then, denoting by $u_t^{(N)}(\vec x)$ the right-hand side of \eqref{eq:Green}, we can write
\begin{align*}
\partial_t u_t^{(N)}(\vec x) &= \sum_{k = 1}^N \det \bigl[\cdots, \partial_t H_k(x_{N+1 - k}, t), \cdots\bigr]\\
&= -\sum_{k = 1}^N \det \bigl[\cdots, \nablam H_k(x_{N+1 - k}, t), \cdots\bigr]\\
&= -\sum_{k=1}^N \nablam_{\!k} \det \bigl[F_{i - j}(x_{N+1 - i}-y_{N+1 - j},t)\bigr]_{1\leq i,j\leq N},
\end{align*}
where the operators in the first and second sums are applied only to the $k$-th column, and where we made use of the first identity in \eqref{eq:F_props} and multi-linearity of determinants. Here, $\nablam_{\!k}$ is as before the operator $\nablam$ acting on $x_k$.
Now, we will check the boundary conditions \eqref{eq:boundary}. If $x_{k} = x_{k+1} + 1$, then using again multi-linearity of determinants and the second identity in \eqref{eq:F_props} we obtain
\begin{align*}
\nablam_{\!k} \det \bigl[F_{i - j}(x_{N+1 - i}\; -\; &y_{N+1 - j},t)\bigr] \\
&= \det \bigl[\cdots, \nablam H_{N+1 - k}(x_{k}, t), H_{N - k}(x_{k+1}), \cdots\bigr]\\
&= \det \bigl[\cdots, \nablap H_{N+1 - k}(x_{k} - 1, t), H_{N - k}(x_{k+1}), \cdots\bigr]\\
&= \det \bigl[\cdots, \nablap H_{N+1 - k}(x_{k + 1}, t), H_{N - k}(x_{k+1}), \cdots\bigr]\\
&= \det \bigl[\cdots, -H_{N - k}(x_{k + 1}, t), H_{N - k}(x_{k+1}), \cdots\bigr].
\end{align*}
The latter determinant vanishes, because the matrix has two equal columns. A proof of the initial condition was provided at the end of the previous section.
\section{Determinantal point processes}
In this section we provide some results on determinantal point processes, which can be found e.g. in~\cite{Bor11,borodinRains,johansson}. These processes were first studied in~\cite{Macchi1975} as `fermion' processes and the name `determinantal' was introduced in~\cite{borOlsh00}.
\begin{definition}\label{def:DPP}
Let $\Xf$ be a discrete space and let $\mu$ be a measure on $\Xf$. A \emph{determinantal point process} on the space $\Xf$ with {\rm correlation kernel} $\CK:\Xf\times\Xf\to \C$ is a signed\footnote{ In our analysis of TASEP we will be using only a counting measure $\mu$ assigning a unit mass to each element of $\Xf$. However, a determinantal point process can be defined in full generality on a locally compact Polish space with a Radon measure (see~\cite{BHKPV}). Moreover, in contrast to the usual definition we define the measure $\mathcal{W}$ to be signed rather than a probability measure. This fact will be crucial in Section~\ref{eq:random_walks} below, and we should also note that all the properties of determinantal point processes which we will use don't require $\mathcal{W}$ to be positive.
} measure $\mathcal{W}$ on $2^\Xf$ (the power set of $\Xf$), integrating to $1$ and such that for any points $x_1, \cdots, x_n \in \Xf$ one has the identity
\begin{equation}\label{eq:DPP}
\sum_{\substack{Y \subset \Xf: \\ \{x_1,\ldots,x_n\}\subset Y}} \hspace{-0.5cm}\mathcal{W}(Y) = \det \bigl[\CK(x_i,x_j)\bigr]_{1 \leq i,j \leq n}\, \prod_{k=1}^n \mu(x_k),
\end{equation}
where the sum runs over finite subsets of $\Xf$.
\end{definition}
\noindent The determinants on the right-hand side are called \emph{$n$-point correlation functions} or \emph{joint intensities} and denoted by
\begin{equation}\label{eq:corr}
\varrho^{(n)}(x_1,\ldots,x_n) = \det \bigl[\CK(x_i,x_j)\bigr]_{1 \leq i,j \leq n}.
\end{equation}
One can easily see that these functions have the following properties: they are symmetric under permutations of arguments and vanish if $x_i = x_j$ for $i \neq j$.
\begin{exercise}
In the case that $\mathcal{W}$ is a positive measure, show that if $K$ is the kernel of the orthogonal projection onto a subspace of dimension $n$, then the number of points of the process is almost surely equal to $n$.
\end{exercise}
Usually it is non-trivial to show that a process is determinantal. Below we provide several examples of determinantal point processes
(in these examples the measures are genuine, not signed, probability measures).
\begin{example}[Non-intersecting random walks]\label{ex:RWs}
Let $X_i(t)$, $1 \leq i \leq n$, be independent time-homogeneous Markov chains on $\Z$ with $t$-step transition probabilities $p_t(x,y)$, whose one-step probabilities satisfy \break$p_1(x,x-1) + p_1(x,x+1) = 1$ (i.e. at every step each random walk moves one unit either to the left or to the right). Let furthermore each $X_i$ be reversible with respect to a probability measure $\pi$ on $\Z$, i.e. $\pi(x) p_t(x,y) = \pi(y) p_t(y, x)$ for all $x, y \in \Z$ and $t \in \N$. Then, conditioned on the event that the values of the random walks at times $0$ and $2t$ are fixed, i.e. $X_i(0) = X_i(2t) = x_i$ for all $1 \leq i \leq n$ where each $x_i$ is even, and that no two of them intersect on the time interval $[0, 2t]$, the configuration of mid-positions $\{X_i(t) : 1 \leq i \leq n\}$ is a determinantal point process on $\Z$ with respect to the measure $\pi$, i.e.
\begin{equation}\label{eq:RWs}
\pp\bigl[ X_i(t) = z_i, 1 \leq i \leq n \bigr] = \det \bigl[\CK(z_i,z_j)\bigr]_{1 \leq i,j \leq n} \prod_{k=1}^n \pi(z_k),
\end{equation}
where the probability is conditioned by the described event (assuming of course that its probability is non-zero). Here, the correlation kernel $\CK$ is given by
\begin{equation}\label{eq:RWsKernel}
\CK(u, v) = \sum_{i=1}^n \psi_i(u) \phi_i(v),
\end{equation}
where the functions $\psi_i$ and $\phi_i$ are defined by
\[
\psi_i(u) = \sum_{k=1}^n \left(A^{-\frac{1}{2}}\right)_{i, k} \frac{p_t(x_k, u)}{\pi(u)}, \qquad \phi_i(v) = \sum_{k=1}^n \left(A^{-\frac{1}{2}}\right)_{i, k} \frac{p_t(x_k, v)}{\pi(v)},
\]
with the matrix $A$ having the entries $A_{i, k} = \frac{p_{2t}(x_i, x_k)}{\pi(x_k)}$. Invertibility of the matrix $A$ follows from the Karlin-McGregor formula (see Exercise~\ref{ex:KMcG} below) and the fact that the probability of the conditioning event is non-zero. This result is a particular case of a more general result of~\cite{johansson} and it can be obtained from the Karlin-McGregor formula similarly to~\cite[Cor.~4.3.3]{BHKPV}.
\end{example}
\begin{exercise}
Prove that the mid-positions $\{X_i(t) : 1 \leq i \leq n\}$ of the random walks defined in the previous example form a determinantal process with the correlation kernel \eqref{eq:RWsKernel}.
\end{exercise}
\begin{exercise}[Karlin-McGregor formula \cite{karlinMcGregor}]\label{ex:KMcG}
Let $X_i$, $1 \leq i \leq n$, be i.i.d. (time-inhomogeneous) Markov chains on $\Z$ with transition probabilities $p_{k, \ell}(s,t)$ satisfying $p_{k, k+1}(t, t+1) + p_{k, k-1}(t, t+1) = 1$ for all $k$ and $t > 0$. Let us fix initial states $X_i(0) = k_i$ with $k_1 < k_2 < \cdots < k_n$ such that each $k_i$ is even. Then the probability that at time $t$ the Markov chain $X_i$ is at the state $\ell_i$ for each $i$, where $\ell_1 < \ell_2 < \cdots < \ell_n$, and that no two of the chains intersect up to time $t$, equals $\det \bigl[p_{k_i,\ell_j}(0, t)\bigr]_{1 \leq i, j \leq n}$.
Hint (this idea is due to S.R.S. Varadhan): for a permutation $\sigma \in \S_n$ and $0 \leq s \leq t$, define the process
\begin{equation}
M_\sigma(s) = \prod_{i=1}^n \pp\bigl( X_{i}(t) = \ell_{\sigma(i)} \big| X_i(s)\bigr),
\end{equation}
which is a martingale with respect to the filtration generated by the Markov chains $X_i$. This implies that the process $M = \sum_{\sigma \in \S_n} \sgn(\sigma) M_\sigma$ is also a martingale. Obtain the Karlin-McGregor formula by applying the optional stopping theorem to $M$ for a suitable stopping time.
\end{exercise}
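As a quick sanity check (an added illustration, not part of the original notes), the Karlin-McGregor formula from Exercise~\ref{ex:KMcG} can be verified by brute-force enumeration for two walks taking $\pm1$ steps with probability $1/2$ each:
\begin{verbatim}
import itertools, math
import numpy as np

t, k, l = 4, (0, 2), (0, 2)              # number of steps, starting points, end points

def p(a, b):                             # t-step transition probability of a single walk
    u = t + b - a
    return math.comb(t, u // 2) / 2 ** t if 0 <= u <= 2 * t and u % 2 == 0 else 0.0

km = np.linalg.det(np.array([[p(k[i], l[j]) for j in range(2)] for i in range(2)]))

count = 0
for s1 in itertools.product((-1, 1), repeat=t):
    for s2 in itertools.product((-1, 1), repeat=t):
        x1 = np.concatenate(([k[0]], k[0] + np.cumsum(s1)))
        x2 = np.concatenate(([k[1]], k[1] + np.cumsum(s2)))
        if x1[-1] == l[0] and x2[-1] == l[1] and np.all(x1 != x2):
            count += 1
print(km, count / 4 ** t)                # the two numbers agree
\end{verbatim}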
\begin{example}[Gaussian unitary ensemble]
The most famous example of determinantal point processes is the \emph{Gaussian unitary ensemble} (GUE) introduced by Wigner. Let us define the $n\times n$ matrix $A$ to have i.i.d. standard complex Gaussian entries and let $H = \frac{1}{\sqrt 2} (A + A^*)$. Then the eigenvalues $\lambda_1 > \lambda_2 > \cdots > \lambda_n$ of $H$ form a determinantal point process on $\R$ with the correlation kernel
\[
\CK(x,y) = \sum_{k = 0}^{n-1} H_k(x) H_k(y),
\]
with respect to the Gaussian measure $d \mu(x) = \frac{1}{\sqrt{2 \pi}} e^{-x^2 / 2} dx$, where $H_k$ are Hermite polynomials which are orthonormal on $L^2(\R, \mu)$. A proof of this result can be found in~\cite[Ch. 3]{mehta}.
\end{example}
\begin{example}[Aztec diamond tilings]
The Aztec diamond is a diamond-shaped union of lattice squares (see Figure~\ref{fig:aztec}). Let us color some of the squares gray, following the pattern of a chess board and in such a way that all the bottom-left squares are gray. It is easy to see that the Aztec diamond can be perfectly covered by dominoes, which are $2 \times 1$ or $1 \times 2$ rectangles, and that the number of tilings grows exponentially in the width of the diamond. Draw a tiling uniformly at random from all possible tilings and mark the gray left squares of horizontal dominoes and the gray bottom squares of vertical dominoes. This random set is a determinantal point process on the lattice $\Z^2$~\cite{MR2118857}.
\end{example}
\begin{figure}
\caption{Aztec diamond tiling.}
\label{fig:aztec}
\end{figure}
\subsection{Probability of an empty region.}
A useful property of determinantal point processes is that the `probability' (recall that the measure in Definition~\ref{def:DPP} is signed) of having an empty region is given by a Fredholm determinant.
\begin{lemma}\label{lem:GapProbability}
Let $\mathcal{W}$ be a determinantal point process on a discrete set $\Xf$ with a measure $\mu$ and with a correlation kernel $\CK$. Then for a Borel set $B \subset \Xf$ one has
\[
\sum_{X \subset \Xf \setminus B} \hspace{-0.2cm} \mathcal{W}(X) = \det(I - \CK)_{\ell^2(B, \mu)},
\]
where the latter is the Fredholm determinant defined by
\begin{equation}\label{eq:Fredholm}
\det(I - \CK)_{\ell^2(B, \mu)} = \sum_{n\geq 0}\frac{(-1)^n}{n!} \int_{B^n} \det \bigl[\CK(y_i,y_j)\bigr]_{1 \leq i,j \leq n}\, d\mu(y_1) \cdots d\mu(y_n).
\end{equation}
\end{lemma}
\begin{proof}
Using Definition~\ref{def:DPP} and the correlation functions \eqref{eq:corr} we can write
\begin{align*}
\sum_{X \subset \Xf \setminus B} \hspace{-0.2cm} \mathcal{W}(X) &= \sum_{X \subset \Xf} \mathcal{W}(X)\, \prod_{x \in X}\left(1-\1{B}(x)\right) \\
&= \sum_{n\geq 0} \frac{(-1)^n}{n!} \sum_{X \subset \Xf} \mathcal{W}(X)\, \sum_{\substack{x_1,\ldots,x_n \in X \\ x_i \neq x_j}}\prod_{k=1}^n\1{B}(x_{k})\\
&= \sum_{n\geq 0} \frac{(-1)^n}{n!} \sum_{\substack{y_1,\ldots,y_n \in B \\ y_i \neq y_j}} \sum_{X \subset \Xf} \mathcal{W}(X)\, \sum_{x_1,\ldots,x_n \in X}\prod_{k=1}^n\1{x_{k} = y_k}\\
&= \sum_{n\geq 0} \frac{(-1)^n}{n!} \sum_{\substack{y_1,\ldots,y_n \in B \\ y_i \neq y_j}} \sum_{\substack{X \subset \Xf \\ \{y_1,\ldots,y_n\} \subset X}} \hspace{-0.5cm} \mathcal{W}(X)\\
&=\sum_{n\geq 0}\frac{(-1)^n}{n!}\sum_{\substack{y_1,\ldots,y_n \in B \\ y_i \neq y_j}} \hspace{-0.3cm} \varrho^{(n)}(y_1,\ldots,y_n) \prod_{k=1}^n \mu(y_k)\\
&= \sum_{n\geq 0}\frac{(-1)^n}{n!}\int_{B^n} \det \bigl[\CK(y_i,y_j)\bigr]_{1 \leq i,j \leq n}\, d\mu(y_1) \cdots d\mu(y_n)\\
&= \det(I - \CK)_{\ell^2(B, \mu)},
\end{align*}
which is exactly our claim. Note that the condition $y_i \neq y_j$ can be omitted, because $\varrho^{(n)}$ vanishes on the diagonals.
\end{proof}
\begin{exercise}
Prove that if $\Xf$ is finite and $\mu$ is the counting measure, then the Fredholm determinant \eqref{eq:Fredholm} coincides with the usual determinant.
\end{exercise}
\subsection{$\mathbf L$-ensembles of signed measures.}
A more restrictive definition of a determinantal process was introduced in~\cite{borodinRains}. In order to simplify our notation, we take the measure $\mu$ in this section to be the counting measure and we will omit it from the notation below.
With the notation of Definition~\ref{def:DPP}, let us be given a function $L:\Xf\times\Xf\to\C$. For any finite subset $X = \{x_1, \cdots, x_n\} \subset \Xf$ we define the principal submatrix $L_X = \bigl[L(x_i,x_j)\bigr]_{x_i,x_j \in X}$. Then one can define a (signed) measure on $2^\Xf$, called the \emph{$L$-ensemble}, by
\begin{equation}\label{eq:LEnsemble}
\mathcal{W}(X)=\frac{\det(L_X)}{\det(1+L)_{\ell^2(\Xf)}},
\end{equation}
for $X \subset \Xf$, if the Fredholm determinant $\det(1+L)_{\ell^2(\Xf)}$ is non-zero (recall the definition \eqref{eq:Fredholm}).
\begin{exercise}
Check that the measure $\mathcal{W}$ defined in \eqref{eq:LEnsemble} integrates to $1$.
\end{exercise}
The requirement $\det(1+L)_{\ell^2(\Xf)} \neq 0$ guarantees that there exists a unique function\break$(1+L)^{-1} : \Xf \times \Xf \to \C$ such that $(1+L)^{-1} * (1 + L) = 1$, where $*$ is the convolution on $\Xf$ and $1 : \Xf \times \Xf \to \{0,1\}$ is the identity function non-vanishing only on the diagonal. Furthermore, it was proved in~\cite{Macchi1975} that the $L$-ensemble is a determinantal point process:
\begin{proposition}\label{prop:Macchi}
The measure $\mathcal{W}$ defined in \eqref{eq:LEnsemble} is a determinantal point process with correlation kernel $\CK=L(1+L)^{-1} = 1 - (1+L)^{-1}$.
\end{proposition}
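On a finite set both Lemma~\ref{lem:GapProbability} and Proposition~\ref{prop:Macchi} reduce to finite linear algebra and can be checked by brute force. The following Python sketch (an added illustration, using a generic randomly generated $L$, so that the resulting measure is signed, and the counting measure $\mu$) does exactly that.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5
L = 0.2 * rng.normal(size=(n, n))                  # a generic L-ensemble on {0,...,4}
Z = np.linalg.det(np.eye(n) + L)                   # det(1 + L)
K = L @ np.linalg.inv(np.eye(n) + L)               # kernel from the Proposition

def W(X):                                           # weight of the subset X
    X = list(X)
    return np.linalg.det(L[np.ix_(X, X)]) / Z if X else 1.0 / Z

subsets = [X for r in range(n + 1) for X in itertools.combinations(range(n), r)]

for pts in itertools.combinations(range(n), 2):     # two-point correlation functions
    lhs = sum(W(X) for X in subsets if set(pts) <= set(X))
    assert abs(lhs - np.linalg.det(K[np.ix_(pts, pts)])) < 1e-8

B = [1, 3]                                          # gap `probability' on B
lhs = sum(W(X) for X in subsets if not set(B) & set(X))
assert abs(lhs - np.linalg.det(np.eye(len(B)) - K[np.ix_(B, B)])) < 1e-8
\end{verbatim}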
\begin{example}[Non-intersecting random walks]
It is not difficult to see that the distribution of the mid-positions $\{X_i(t) : 1 \leq i \leq n\}$ of the random walks from Example~\ref{ex:RWs} is the $L$-ensemble with the function
\[
L(u, v) = \sum_{i=1}^n p_t(u, x_i) p_t(x_i, v).
\]
The correlation kernel $\CK$ can be computed from Proposition~\ref{prop:Macchi} and it coincides with \eqref{eq:RWsKernel}.
\end{example}
\begin{exercise}
Perform the computations from the previous example.
\end{exercise}
\subsection{Conditional $\mathbf L$-ensembles.}
An $L$-ensemble can be conditioned by fixing certain values of the determinantal process. More precisely, consider a nonempty subset $\Zf\subset\Xf$ and a given $L$-ensemble on $\Xf$. We define a measure on $2^\Zf$, called \emph{conditional $L$-ensemble}, in the following way:
\begin{equation}\label{eq:LCond}
\mathcal{W}(Y)=\frac{\det(L_{Y\cup\Zf^c})}{\det(1_\Zf+L)},
\end{equation}
for any $Y \subset \Zf$, where $1_\Zf(x,y) = 1$ if and only if $x = y \in \Zf$, and $1_\Zf(x,y) = 0$ otherwise.
\begin{exercise}
Prove that the measure $\mathcal{W}$ defined in \eqref{eq:LCond} integrates to $1$.
\end{exercise}
\noindent Roughly speaking, the definition \eqref{eq:LCond} means that we restrict the $L$-ensemble by the condition that the values in $\Zf^c$ are fixed. The following result is a generalisation of Proposition~\ref{prop:Macchi} and its proof can be found in~\cite[Prop.~1.2]{borodinRains}:
\begin{proposition}\label{prop:ConditionalLensembles}
The conditional $L$-ensemble is a determinantal point process on $\Zf$ with correlation kernel
\begin{equation}\label{eq:K_LCond}
\CK=1_\Zf-(1_\Zf+L)^{-1}\big|_{\Zf\times\Zf},
\end{equation}
where $F \big|_{\Zf\times\Zf}$ means restriction of the function $F$ to the set $\Zf\times\Zf$.
\end{proposition}
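Formula \eqref{eq:K_LCond} can be checked in the same spirit on a small example (again a hypothetical brute-force illustration, not part of the original notes):
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(1)
Zc, Zs = [0, 1], [2, 3, 4, 5]                        # Xf = {0,...,5}, conditioned on Zc
L = 0.2 * rng.normal(size=(6, 6))
D = np.diag([0.0, 0.0, 1.0, 1.0, 1.0, 1.0])          # the operator 1_Z
norm = np.linalg.det(D + L)
K = np.eye(len(Zs)) - np.linalg.inv(D + L)[np.ix_(Zs, Zs)]   # the kernel (eq:K_LCond)

def W(Y):                                             # conditional L-ensemble weight
    S = sorted(set(Y) | set(Zc))
    return np.linalg.det(L[np.ix_(S, S)]) / norm

subsets = [Y for r in range(len(Zs) + 1) for Y in itertools.combinations(Zs, r)]
for pts in itertools.combinations(range(len(Zs)), 2):
    sites = [Zs[i] for i in pts]
    lhs = sum(W(Y) for Y in subsets if set(sites) <= set(Y))
    assert abs(lhs - np.linalg.det(K[np.ix_(pts, pts)])) < 1e-8
\end{verbatim}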
\section{Biorthogonal representation of the correlation kernel}
The formula \eqref{eq:Green} is not suitable for asymptotic analysis of TASEP, because the size of the matrix goes to $\infty$ as the number of particles $N$ increases. To overcome this problem, the authors of~\cite{borFerPrahSasam} (and of its preliminary version~\cite{sasamoto}) rewrote it as a Fredholm determinant, which can then be subjected to asymptotic analysis.
In order to state this result, we need to make some definitions. For an integer $M \geq 1$, a fixed vector $\vec a \in\R^M$ and indices $n_1<\ldots<n_M$ we introduce the projections
\begin{equation}\label{eq:defChis}
\chi_{\vec a}(n_j,x)=\1{x> a_j}, \hspace{1cm} \bar\chi_{\vec a}(n_j,x)=\1{x\leq a_j},
\end{equation}
acting on $x \in \Z$, which we also regard as multiplication operators acting on $\ell^2 \bigl(\{n_1,\ldots,n_M\}\times\Z\bigr)$.
We will use the same notation if $a$ is a scalar, writing
\begin{equation}\label{eq:defChisScalar}
\chi_a(x)=1-\bar\chi_a(x)=\1{x>a}.
\end{equation}
Then from~\cite{borFerPrahSasam} we have the following result:
\begin{theorem}\label{thm:BFPS}
Suppose that TASEP starts with $N$ particles at positions $X_0(1)>X_0(2)>\ldots >X_0(N)$ and let $1\leq n_1<n_2<\dotsm<n_M\leq N$ and $\vec a \in\R^M$ for some $1\leq M \leq N$. Then for $t>0$ we have
\begin{equation}\label{eq:extKernelProbBFPS}
\pp \bigl(X_t(n_j) > a_j,~j=1,\ldots,M\bigr)=\det \bigl(I-\bar\chi_{\vec a}K_t\bar\chi_{\vec a}\bigr)_{\ell^2(\{n_1,\ldots,n_M\}\times\Z)}
\end{equation}
where the kernel $K_t$ is given by
\begin{equation}\label{eq:Kt}
K_t(n_i,x_i;n_j,x_j)=-\phi^{n_j - n_i}(x_i,x_j)\1{n_i<n_j}+\sum_{k=1}^{n_j}\Psi^{n_i}_{n_i-k}(x_i)\Phi^{n_j}_{n_j-k}(x_j),
\end{equation}
and where $\phi(x,y)=\1{x>y}$ and
\begin{equation}\label{eq:defPsi}
\Psi^n_k(x)=\frac1{2\pi\I}\oint_{\Gamma_0}dw\,\left(\frac{1-w}{w}\right)^{k} \frac{e^{t(w-1)}}{w^{x - X_0(n-k) + 1}}.
\end{equation}
Here, $\Gamma_0$ is any simple loop, oriented counterclockwise, which includes the pole at $w=0$ but does not include $w=1$.
The functions $\Phi_k^{n}$, $0 \leq k < n$, are defined implicitly by the following two properties:
\begin{enumerate}[label={\normalfont (\arabic{*})}]
\item the biorthogonality relation, for $0 \leq k, \ell < n$,
\[
\sum_{x\in\Z}\Psi_k^{n}(x)\Phi_\ell^{n}(x)=\1{k=\ell},
\]
\label{ortho}
\item the spanning property
\[
{\mathrm{span}} \bigl\{\Phi^n_k : 0 \leq k < n\bigr\} = {\mathrm{span}}\bigl\{x^k : 0 \leq k < n\bigr\},
\]
which in particular implies that the function $\Phi^n_k$ is a polynomial of degree at most $n-1$.
\end{enumerate}
\end{theorem}
\begin{remark}
The problem with this result is that the functions $\Phi^n_k$ are not given explicitly. In the special cases of periodic and step initial data, exact integral expressions for these functions were found in \cite{borFerPrahSasam}, \cite{ferrariMatr} and \cite{bfp} (the latter is for discrete time TASEP, and can be easily adapted to continuous time). More precisely, for the step initial data $X_0(i)=-i$, $i\geq1$, we have
\[
\Phi^n_k(x) = \frac{1}{2 \pi \I} \oint_{\Gamma_0} dv\, \frac{(1-v)^{x+n}}{v^{k+1}} e^{t v},
\]
and in the case of periodic initial data $X_0(i)=-d i$, $i\geq1$, with $d \geq 2$,
\[
\Phi^n_k(x) = \frac{1}{2 \pi \I} \oint_{\Gamma_{0}} dv\, \frac{(1 - d v) (2(1-v))^{x + dn - 1}}{v (2^d (1-v)^{d-1} v)^{k}} e^{t v}.
\]
The key new result in~\cite{KPZ} is an expression for the functions $\Phi^n_k$, and therefore the kernel $K_t$, for \emph{arbitrary} initial data.
\end{remark}
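The biorthogonality relation in Theorem~\ref{thm:BFPS} can be verified numerically for step initial data: expanding the contour integrals for $\Psi^n_k$ and $\Phi^n_k$ as finite sums of Poisson-type terms (a hypothetical check added for illustration), one finds, e.g. for $n=4$:
\begin{verbatim}
import math

t, n, Xmax = 1.0, 4, 80                   # time, number of functions, truncation of the x-sum

def Psi(k, x):                            # Psi^n_k(x) for step initial data X_0(i) = -i
    m = x + n
    if m < 0:
        return 0.0
    return math.exp(-t) * sum(math.comb(k, j) * (-1) ** j
                              * t ** (m - j) / math.factorial(m - j)
                              for j in range(min(k, m) + 1))

def Phi(k, x):                            # Phi^n_k(x) for step initial data
    m = x + n
    return sum(math.comb(m, j) * (-1) ** j * t ** (k - j) / math.factorial(k - j)
               for j in range(min(k, m) + 1))

for k in range(n):
    for l in range(n):
        s = sum(Psi(k, x) * Phi(l, x) for x in range(-n, Xmax))
        assert abs(s - (1.0 if k == l else 0.0)) < 1e-8
\end{verbatim}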
\begin{remark}
The functions $F$ from \eqref{eq:defF} and $\Psi$ from \eqref{eq:defPsi} are obviously related by the identity
\begin{equation}\label{eq:FandPhi}
\Psi^N_{k}(x) = (-1)^k F_{-k} (x - y_{N - k }, t),
\end{equation}
for $0 \leq k \leq N$, so that all properties of $F$ can be translated to $\Psi$. Moreover, one can see that if $n < 0$, then the integrands in \eqref{eq:defF} defining $F_n$ and $F_{n+1}$ have their only pole at $w=0$, so that $F_{n+1}(y,t)$ vanishes for all sufficiently small $y$; summing the second identity in \eqref{eq:F_props} over $y < x$ then yields
\[
F_{n+1}(x,t)=-\sum_{y < x} F_n(y,t).
\]
Writing this relation in terms of the functions $\Psi^N_{k}$, we get
\begin{equation}\label{eq:PsiRecursion}
\Psi^{N}_{N - k}(x) = \sum_{y < x} \Psi^{N+1}_{N +1 - k}(y).
\end{equation}
\end{remark}
In the next section we provide a proof of this result following~\cite{borFerPrahSasam}. The main idea is to rewrite the problem in terms of non-intersecting random walks (or \emph{vicious random walks}) whose configurations form a \emph{Gelfand-Tsetlin pattern}\footnote{This property relates TASEP to random matrices, see~\cite{ferrariMatr}.}. The distribution of these random walks forms a determinantal point process whose correlation kernel is \eqref{eq:Kt}.
\subsection{Non-intersecting random walks.}
\label{eq:random_walks}
Our aim in this section is to rewrite Sch\"{u}tz's formula \eqref{eq:Green} in a form involving transition probabilities of non-intersecting random walks.
We start with rewriting the transition probabilities \eqref{eq:Green} in the following way:
\begin{proposition}\label{p:transitions}
For $\vec x, \vec y \in \Omega_N$, one has the following identity:
\begin{equation}\label{eq:transitions}
\pp \big(X_t = \vec x\, |\, X_0 = \vec y\big) = \sum_{\substack{\bz \in \GT_N: \\ z_1^{n} = x_n}} \det \Big[\Psi^N_{N-j}\big(z_{i}^N\big)\Big]_{1 \leq i,j \leq N},
\end{equation}
where the functions $\Psi$ are defined in \eqref{eq:defPsi}, where the sum runs over the domain $\GT_N$ of triangular arrays given by a Gelfand-Tsetlin pattern
\[
\GT_N = \Big\{\bz = (z_i^n)_{1 \leq i \leq n \leq N} \colon z_i^n \in \Z,\; z_i^{n+1} < z_i^{n} \leq z_{i+1}^{n+1} \text{ for } 1 \leq i \leq n < N \Big\},
\]
with fixed values $z_1^{n} = x_n$ for all $n=1, \cdots, N$ (see Figure~\ref{fig:GT} for a graphical representation of $\GT_4$).
\end{proposition}
\begin{figure}
\caption{The Gelfand-Tsetlin pattern $\GT_4$ with fixed values $z^n_1 = x_n$.}
\label{fig:GT}
\end{figure}
\begin{proof}
This decomposition is obtained using only the identity
\begin{equation}\label{eq:F_sum}
F_{n+1}(x,t)=\sum_{y \geq x} F_n(y,t),
\end{equation}
which is the integrated form of the second equality in \eqref{eq:F_props} combined with the fact that the convergence $\lim_{y\to+\infty}F_n(y,t)=0$ holds fast enough. From Sch\"{u}tz's formula \eqref{eq:Green} we have
\begin{equation}\label{eq:P_det}\arraycolsep=5pt\renewcommand\arraystretch{1.2}
\pp \bigl(X_t = \vec x \,|\, X_0 = \vec y\bigr) =
\det\left[\begin{array}{ccc} F_0(z_1^N-y_N,t)&
\cdots & F_{-N+1}(z_1^N-y_1,t) \\ \vdots & \ddots & \vdots \\
F_{N-1}(z_1^1-y_N,t)&\cdots & F_0(z_1^1-y_1,t)\end{array}\right],
\end{equation}
where we renamed the variables $z^n_1 = x_n$. Applying the property \eqref{eq:F_sum} twice to each entry of the last row we can rewrite it as
\begin{align}\arraycolsep=5pt\renewcommand\arraystretch{1.2}
\Big[\begin{array}{ccc} F_{N-1}(z_1^1-y_N,t)&\;\cdots\; & F_0(z_1^1-y_1,t)\end{array}\Big] &= \sum_{z_2^2 \geq z_1^1} \Big[\begin{array}{ccc} F_{N-2}(z_2^2-y_N,t)&\;\cdots\; & F_{-1}(z_2^2-y_1,t)\end{array}\Big] \nonumber\\
&= \sum_{z_2^2 \geq z_1^1}\sum_{z_3^3\geq z_2^2} \Big[\begin{array}{ccc} F_{N-3}(z_3^3-y_N,t)&\;\cdots\; & F_{-2}(z_3^3-y_1,t)\end{array}\Big].\label{eq:last_row}
\end{align}
Applying furthermore the identity \eqref{eq:F_sum} to the penultimate row in \eqref{eq:P_det} we obtain
\[
\sum_{z_2^3 \geq z_1^2} \Big[\begin{array}{ccc} F_{N-3}(z_2^3-y_N,t)&\;\cdots\; & F_{-2}(z_2^3-y_1,t)\end{array}\Big].
\]
Combining these two identities with multilinearity of determinant, the right-hand side of \eqref{eq:P_det} equals
\[\arraycolsep=5pt\renewcommand\arraystretch{1.2}
\sum_{z_2^2 \geq z_1^1}\sum_{z_3^3\geq z_2^2}\sum_{z_2^3
\geq z_1^2} \det\left[\begin{array}{ccc} F_0(z_1^N-y_N,t)& \cdots &
F_{-N+1}(z_1^N-y_1,t) \\ \vdots & \ddots & \vdots \\
F_{N-3}(z_1^3-y_N,t)&\cdots & F_{-2}(z_1^3-y_1,t) \\
F_{N-3}(z_2^3-y_N,t)&\cdots & F_{-2}(z_2^3-y_1,t) \\
F_{N-3}(z_3^3-y_N,t)&\cdots & F_{-2}(z_3^3-y_1,t) \end{array}\right].
\]
The determinant is antisymmetric in the variables $z_2^3$ and $z_3^3$ (i.e. it changes sign if we swap $z_2^3$ and $z_3^3$), therefore the contribution of the symmetric part of the summation domain $\bigl\{z_3^3 \geq z_2^2\bigr\} \cap \bigl\{z_2^3 \geq z_1^2\bigr\}$ is zero. Since the symmetric part of this domain is $\big\{z_3^3 \geq z_2^2\big\} \cap \big\{z_2^3 \geq z_2^2\big\}$, we are left with the sum over $\big\{z_3^3 \geq z_2^2\big\} \cap \big\{z_2^3 \in[z_1^2,z_2^2)\big\}$. We iterate the same procedure for $k=3,\ldots,N-1$, applying \eqref{eq:F_sum} to the last $k$ rows and removing the sums over symmetric domains, and we get the formula
\[
\pp \bigl(X_t = \vec x \,|\, X_0 = \vec y\bigr) = \sum_{\substack{\bz \in \GT_N: \\ z_1^{n} = x_n}} \det \Big[F_{1-j} \big(z_{i}^N - y_{N + 1 -j},t\big)\Big]_{1 \leq i,j \leq N}.
\]
Now, we can use the identity \eqref{eq:FandPhi} to get
\begin{align*}
\det \Big[F_{1-j} \big(z_{i}^N - y_{N + 1 -j},t\big)\Big]_{1 \leq i,j \leq N} &= \det \Big[(-1)^{j-1} \Psi^N_{j-1} \big(z_{i}^N\big)\Big]_{1 \leq i,j \leq N}\\
&= (-1)^{(1 + 2 + \cdots + N) - N} \det \Big[\Psi^N_{j-1}\big(z_{i}^N\big)\Big]_{1 \leq i,j \leq N},
\end{align*}
and we change the order of the columns of the matrix inside the determinant
\[
\det \Big[\Psi^N_{j-1}\big(z_{i}^N\big)\Big]_{1 \leq i,j \leq N} = (-1)^{\lfloor N/2 \rfloor} \det \Big[\Psi^N_{N-j}\big(z_{i}^N\big)\Big]_{1 \leq i,j \leq N}.
\]
It is not difficult to see that $(1 + 2 + \cdots + N) - N + \lfloor N/2 \rfloor$ is an even integer so that the power of $-1$ equals $1$. Hence, combining these identities we get
\[
\det \Big[F_{1-j}\big(z_{i}^N - y_{N + 1 -j},t\big)\Big]_{1 \leq i,j \leq N} = \det \Big[\Psi^N_{N-j}\big(z_{i}^N\big)\Big]_{1 \leq i,j \leq N},
\]
which gives exactly our claim \eqref{eq:transitions}.
\end{proof}
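Proposition~\ref{p:transitions} can also be tested numerically for $N=2$: the following sketch (a hypothetical illustration) evaluates $F_n$ by quadrature as in the sketch of Section~\ref{sec:distr}, uses \eqref{eq:FandPhi} for $\Psi^2_k$, and compares both sides of \eqref{eq:transitions}.
\begin{verbatim}
import numpy as np

M, r, t = 400, 2.0, 1.0
wg = r * np.exp(2j * np.pi * np.arange(M) / M)       # quadrature nodes on |w| = r

def F(n, x):                                          # F_n(x,t) by quadrature
    vals = (1 - wg) ** (-n) * wg ** (n - x) * np.exp(t * (wg - 1.0))
    return ((-1) ** n * vals.mean()).real

y = (0, -2)                                           # initial positions (y_1, y_2)
Psi = lambda k, z: (-1) ** k * F(-k, z - y[1 - k])    # Psi^2_k(z) via (eq:FandPhi)

def lhs(x):                                           # Schuetz's formula for N = 2
    return np.linalg.det(np.array([[F(i - j, x[2 - i] - y[2 - j])
                                    for j in (1, 2)] for i in (1, 2)]))

def rhs(x, K=40):                                     # sum over GT_2 with z_1^1=x_1, z_1^2=x_2
    return sum(np.linalg.det(np.array([[Psi(1, x[1]), Psi(0, x[1])],
                                       [Psi(1, z), Psi(0, z)]]))
               for z in range(x[0], x[0] + K))

for x in [(3, -1), (1, 0), (5, 2)]:
    assert abs(lhs(x) - rhs(x)) < 1e-8
\end{verbatim}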
The weight of a configuration $\bz \in \GT_N$ in \eqref{eq:transitions} can be written as
\begin{equation}\label{eq:pointMeasure}
\mathcal{W}_N(\bz) = \left(\prod_{n=1}^N \det \Big[\phi \big(z_i^{n-1}, z_j^{n}\big)\Big]_{1\leq i,j \leq n}\right)\, \det \Big[\Psi^N_{N-j}\big(z_{i}^N\big)\Big]_{1 \leq i,j \leq N},
\end{equation}
where $\phi(x,y)=\1{x>y}$ and where we have introduced new values $z_n^{n-1}= +\infty$ (so that $\phi(z_n^{n-1},y)=\nobreak1$ for all $y \in \Z$). The determinant $\det \big[\phi(z_i^{n-1}, z_j^{n})\big]_{1\leq i,j \leq n}$ is the indicator function of the event that the inequalities of $\GT_N$ between levels $n-1$ and $n$ hold. More precisely, if we define the space of integer-valued triangular arrays
\begin{equation}\label{eq:LambdaDef}
\Lambda_N = \bigl\{\bz = (z_i^n)_{n, i} \colon z_i^n \in \Z,\; 1 \leq i \leq n \leq N \bigr\},
\end{equation}
then for $\bz = (z_i^n)_{n, i} \in \Lambda_N$ we have
\begin{equation}\label{eq:detIndicator}
\prod_{n=1}^N \det \big[\phi(z_i^{n-1}, z_j^{n})\big]_{1\leq i,j \leq n} = \1{\bz \in \GT_N}.
\end{equation}
\begin{exercise}
Prove that the identity \eqref{eq:detIndicator} holds.
\end{exercise}
\noindent Hence, the identities \eqref{eq:transitions} and \eqref{eq:pointMeasure} yield
\begin{equation}\label{eq:prob_as_sum}
\pp \bigl(X_t = \vec x \,|\, X_0 = \vec y\bigr) = \sum_{\substack{\bz \in \Lambda_N: \\ z_1^{n} = x_n}} \mathcal{W}_N(\bz),
\end{equation}
where the sum runs over the set $\Lambda_N$ of integer-valued triangular arrays with fixed boundary values $z_1^{n}=x_n$ for all $n = 1, \ldots, N$.
The variables \mbox{$\{z_i^n : i=1,\ldots,n\}$} in \eqref{eq:pointMeasure} can be interpreted as the positions of particles labelled by $i=1,\ldots,n$ at time $n$, so that $z_k^k,\ldots,z_k^n$ is the trajectory of particle $k$, evolving with the transition kernel $\phi$ (which can be turned into a probability kernel by multiplying by a suitable power of $2$). At time $n$ there are $n$ particles at positions $z_1^n,\ldots,z_n^n$, which make geometric jumps to the left at time $n+1$, conditioned on non-intersection (such walks are called \emph{vicious random walks}). Furthermore, a new $(n+1)$-st particle is added at position $z_{n+1}^{n+1} \geq z_n^n$ at time $n+1$.
The measure $\mathcal{W}_N$ on $\Lambda_N$ is not a probability measure, because the contribution coming from the functions $\Psi$ can give a negative value. We will show that this is a determinantal measure, which in particular means that the probability that the sites $z_{k_i}^{n_i}$ for $i=1,\ldots,M$ are occupied by the respective walks is proportional to
\begin{equation}\label{eq:vicious_det}
\det \Bigl[\CK_t \big( (n_i, k_i, z_{k_i}^{n_i}), (n_j, k_j, z_{k_j}^{n_j}) \big)\Bigr]_{1 \leq i, j \leq M},
\end{equation}
for a correlation kernel $\CK_t$ which will be obtained below.
\subsection{The correlation kernel of the signed measure.}
We will prove in this section that the measure $\mathcal{W}_N$ defined in \eqref{eq:pointMeasure} on triangular arrays $\Lambda_N$ is a determinantal point process and will find its correlation kernel, so that in particular the property \eqref{eq:vicious_det} holds. Note that we will consider the measure $\mathcal{W}_N$ on the whole space $\Lambda_N$, and only after having found the correlation kernel will we fix the boundary values as in \eqref{eq:prob_as_sum}.
\subsubsection*{The measure $\mathcal{W}_N$ is a conditional $\mathbf L$-ensemble.} It will be more convenient to write the values of a triangular array as a one-dimensional array. More precisely, if we fix some point configuration $(z^n_i)_{n, i} \in \Lambda_N$, then every value $z^n_i = z$ can be identified with the triplet $(n, i, z)$, where $n \in \{1, \cdots, N\}$ and $i \in \{1, \cdots, n\}$. Thus the values $(z^n_i)_{n, i}$ can be written as a one-dimensional array parametrized by $(n, i)$; e.g.\ in the case $N = 3$ we have
\begin{center}
\begin{tikzpicture}[
array/.style={matrix of nodes,nodes={draw, minimum width=10mm, minimum height=5mm},column sep=-\pgflinewidth, row sep=0.5mm, nodes in empty cells,
row 1/.style={nodes={draw=none, fill=none, minimum size=5mm}, font=\scriptsize, color=blue}}]
\centering
\matrix[array] (array) {
$(1,1)$ & $(2,1)$ & $(2,2)$ & $(3,1)$ & $(3,2)$ & $(3,3)$\\
$z^1_1$ & $z^2_1$ & $z^2_2$ & $z^3_1$ & $z^3_2$ & $z^3_3$\\};
\end{tikzpicture}
\end{center}
\noindent With this idea in mind, the point process which we are going to define has the domain $\Zf$ of all triplets $(n, i, z)$, where $n \in \{1, \cdots, N\}$, $i \in \{1, \cdots, n\}$ and $z \in \Z$. In fact, we need to define a slightly larger domain $\Xf = \{1,2,\ldots,N\} \cup \Zf$, so that the numbers $\{1,2,\ldots,N\}$ will refer to either the values $z^{n-1}_{n}$ or the initial values $y_{n}$ of TASEP, and the determinantal point process will be conditioned by $\Zf^c = \Xf \setminus \Zf = \{1,2,\ldots,N\}$. Our aim is to define a function $L : \Xf \times \Xf \to \R$ such that for every set $Y \subset \Zf$ one has
\[
\mathcal{W}_N(Y) = \det \big(L_{Y\cup\Zf^c}\big),
\]
(where we use the notation from \eqref{eq:LEnsemble}) which means that $\mathcal{W}_N$ is a conditional $L$-ensemble. As we will see below, this corresponds to fixing the initial values $y_i$ of TASEP and the `infinities' $z^{n-1}_n$.
Using the equivalence between point configurations $(z^n_i)_{n, i} \in \Lambda_N$ and one-dimensional arrays described in the previous paragraph, every point configuration from $\Xf$ can obviously be identified with an array as well (by adding new boxes indexed by $1$, $2$, $\ldots$, $N$). In the previous example we obtain the array
\begin{center}
\begin{tikzpicture}[
array/.style={matrix of nodes,nodes={draw, minimum width=10mm, minimum height=7mm},column sep=-\pgflinewidth, row sep=0.5mm, nodes in empty cells,
row 1/.style={nodes={draw=none, fill=none}, font=\scriptsize, color=blue}}]
\matrix[array] (array) {
$1$ & $2$ & $3$ & $(1,1)$ & $(2,1)$ & $(2,2)$ & $(3,1)$ & $(3,2)$ & $(3,3)$\\
$*^{\vphantom{1}}_{\vphantom{1}}$ & $*^{\vphantom{1}}_{\vphantom{1}}$ & $*^{\vphantom{1}}_{\vphantom{1}}$ & $z^1_1$ & $z^2_1$ & $z^2_2$ & $z^3_1$ & $z^3_2$ & $z^3_3$\\};
\end{tikzpicture}
\end{center}
where by $*$ we mean the values indexed by $1$, $2$, $\cdots$, $N$. This means that $L_{\{\bz\}\cup\Zf^c}$ (recall the notation from \eqref{eq:LEnsemble}) is in fact a function of two arguments each of which is either $(n, i)$, such that $n \in \{1, \cdots, N\}$ and $i \in \{1, \cdots, n\}$, or $k \in \{1, \cdots, N\}$. So we can identify $L_{\{\bz\}\cup\Zf^c}$ with a square matrix of size $N(N+3) / 2$. Now, we are going to define this matrix.
\subsubsection*{Writing the $\mathbf L$-function in a matrix form.} Let us denote by $\CM_{n, m}$ the set of all $n \times m$ matrices with real entries. Then for $1 \leq n < m \leq N$ we define the function $W_{[n,m)} : \Lambda_N \to \CM_{n, m}$ such that for a fixed particles configuration $\bz = (z_i^n) \in \Lambda_N$ the matrix $W_{[n,m)}(\bz)$ is given by
\[
\Big[W_{[n,m)} \big(\bz\big)\Big]_{i,j} = \phi^{m - n}(z_i^n,z_j^{m}) \1{n < m},\qquad 1\leq i\leq n,\,1\le j \leq m.
\]
Similarly we define the function $\Psi^{(N)} : \Lambda_N \to \CM_{N, N}$ to have the entries
\[
\Big[\Psi^{(N)}(\bz)\Big]_{i,j} = \Psi^N_{N-j}(z_i^N),\qquad 1\leq i,j \leq N,
\]
where the functions $\phi$ and $\Psi^N_{N-j}$ are as in the statement of Theorem~\ref{thm:BFPS}. Finally, for $0 \leq m < N$ we define the function $E_m : \Lambda_N \to \CM_{N, m+1}$ by the entries
\[
\Big[E_{m}(\bz)\Big]_{i,j}=
\left\{\begin{array}{ll}
\phi(z_{m+1}^m,z_j^{m+1}),\quad& \textrm{if}~ i=m+1,\;1\leq j \leq m+1,\\
0,&\textrm{otherwise}.
\end{array}\right.
\]
In fact, the function $E_{m}$ should also have the values of $z_{m+1}^m$ as arguments, but we prefer not to indicate this, since we will always fix these values to be infinities. With these objects at hand we identify the function $L_{\{\bz\} \cup \Zf^c}$ (recall that the configuration $\bz \in \Lambda_N$ has been fixed) with the following square matrix in block form:
\begin{equation}\label{eq:MatrixL}
\arraycolsep=5pt\renewcommand\arraystretch{1.2}
L_{\{\bz\} \cup \Zf^c}=\left[\begin{array}{cccccc}
0 & E_0 & E_1 & E_2 &\ldots & E_{N-1} \\
0 & 0 & -W_{[1,2)} & 0 & \cdots & 0 \\
0 & 0 & 0 & -W_{[2,3)} & \ddots & \vdots \\
\vdots & \vdots & \vdots & \ddots & \ddots &0 \\
0 & 0 & 0 & 0 &\cdots & -W_{[N-1,N)} \\
\Psi^{(N)} & 0 & 0 & 0 & \cdots & 0
\end{array}
\right] (\bz),
\end{equation}
where each block takes $\bz$ as an argument and gives a usual matrix. The first $N$ columns (resp. rows) of the matrix \eqref{eq:MatrixL} are parametrized by the values $\{1,2,\ldots,N\}$ and the succeeding columns (resp. rows) are parametrized by the pairs $\{(n, i) : 1 \leq n \leq N,\, 1 \leq i \leq n\}$ which have the lexicographic order.
\begin{example}
In the case $N = 2$, a configuration $\bz \in \Lambda_2$ contains $3$ values $\{z_1^1\} \cup \{z_2^1, z_2^2\}$, and the matrix $L_{\{\bz\} \cup \Zf^c}$ is given by
\[
L_{\{\bz\} \cup \Zf^c}=
\begin{tikzpicture}[baseline=0pt,
array/.style={matrix of nodes,nodes={draw=none, minimum width=17mm, minimum height=7mm}, row sep=0.5mm, nodes in empty cells}]
\matrix[array, left delimiter={[},right delimiter={]}, decoration=brace] (array) {
$0$ & $0$ & $\phi(z_{1}^0,z_1^{1})$ & $0$ & $0$ \\
$0$ & $0$ & $0$ & $\phi(z_{2}^1,z_1^{2})$ & $\phi(z_{2}^1,z_2^{2})$ \\
$0$ & $0$ & $0$ & $-\phi(z_1^1,z_1^{2})$ & $-\phi(z_1^1,z_2^{2})$ \\
$\Psi^2_{1}(z_1^2)$ & $\Psi^2_{0}(z_1^2)$ & $0$ & $0$ & $0$ \\
$\Psi^2_{1}(z_2^2)$ & $\Psi^2_{0}(z_2^2)$ & $0$ & $0$ & $0$ \\};
\node[xshift=-2em] at (array-1-1.west) {\scriptsize\color{blue}$1$};
\node[xshift=-2em] at (array-2-1.west) {\scriptsize\color{blue}$2$};
\node[xshift=-2em] at (array-3-1.west) {\scriptsize\color{blue}$(1,1)$};
\node[xshift=-2em] at (array-4-1.west) {\scriptsize\color{blue}$(2,1)$};
\node[xshift=-2em] at (array-5-1.west) {\scriptsize\color{blue}$(2,2)$};
\node[yshift=1em] at (array-1-1.north) {\scriptsize\color{blue}$1$};
\node[yshift=1em] at (array-1-2.north) {\scriptsize\color{blue}$2$};
\node[yshift=1em] at (array-1-3.north) {\scriptsize\color{blue}$(1,1)$};
\node[yshift=1em] at (array-1-4.north) {\scriptsize\color{blue}$(2,1)$};
\node[yshift=1em] at (array-1-5.north) {\scriptsize\color{blue}$(2,2)$};
\begin{scope}[on background layer]
\fill[red!20] (array-4-1.north west) rectangle (array-5-2.south east);
\fill[green!20] (array-1-3.north west) rectangle (array-2-3.south east);
\fill[yellow!20] (array-1-4.north west) rectangle (array-2-5.south east);
\fill[blue!20] (array-3-4.north west) rectangle (array-3-5.south east);
\end{scope}
\draw[<-,shorten <=1pt, blue,rounded corners] (array-5-1.south east)
|-+(0,-0.4)
node[below] {\scriptsize\color{blue}$\Psi^N$};
\draw[<-,shorten <=1pt, blue,rounded corners] (array-2-3.south)
|-+(0,-0.25)
--+(0.3,-0.25)
|-+(0.3,-2.8)
node[below] {\scriptsize\color{blue}$E_0$};
\draw[<-,shorten <=1pt, blue,rounded corners] (array-3-4.south east)
|-+(0,-2)
node[below] {\scriptsize\color{blue}$-W_{[1,2)}$};
\draw[<-,shorten <=1pt, blue,rounded corners] (array-1-5.south east)
--+(0.23,0)
|-+(0.23,-1.8)
--+(-0.3,-1.8)
|-+(-0.3,-3.55)
node[below] {\scriptsize\color{blue}$E_1$};
\end{tikzpicture},
\]
where the `infinities' $z^0_1$ and $z^1_2$ are fixed. In particular, it follows from the definition of the function $\phi$ in Theorem~\ref{thm:BFPS} that $\phi(z_{1}^0,z) = \phi(z_{2}^1,z) = 1$ for any $z \in \Z$.
\end{example}
\subsubsection*{The $\mathbf L$-function defines the measure $\mathcal{W}_N$.} It is not difficult to see (recall the definition \eqref{eq:pointMeasure}) that one has the identity $\mathcal{W}_N(\bz) = C \det (L_{\{\bz\} \cup \Zf^c})$, where $C \in \{\pm 1\}$. To see this, we define the square matrices
\[
\Big[T_{m}\Big]_{i, j} = \phi(z_i^m,z_j^{m+1}),\qquad 1\le i, j \leq m+1.
\]
One can see that the row of $T_{m}$ with index $i=m+1$ coincides with the only non-zero row of $E_{m}(\bz)$, while the remaining rows of $T_{m}$, for $m \geq 1$, coincide up to sign with the rows of $-W_{[m,m+1)} \big(\bz\big)$. Then the matrix \eqref{eq:MatrixL} is obtained from the following one by permuting rows and changing the signs of some of them:
\[
\arraycolsep=5pt\renewcommand\arraystretch{1.2}
\left[\begin{array}{cccccc}
0 & T_0 & 0 & 0 &\ldots & 0 \\
0 & 0 & T_1 & 0 & \cdots & 0 \\
0 & 0 & 0 & T_2 & \ddots & \vdots \\
\vdots & \vdots & \vdots & \ddots & \ddots &0 \\
0 & 0 & 0 & 0 &\cdots & T_{N-1} \\
\Psi^{(N)}(\bz) & 0 & 0 & 0 & \cdots & 0
\end{array}
\right],
\]
(for each $m \geq 1$ one moves the row of $T_m$ corresponding to the non-zero row of $E_m$ up into the top block and flips the signs of the remaining rows of $T_m$). The determinant of the latter matrix equals, up to a sign depending only on $N$, the product $\prod_{m=0}^{N-1}\det T_{m}\,\det\Psi^{(N)}(\bz)$, and keeping track of the signs one checks that $\det \big(L_{\{\bz\} \cup \Zf^c}\big)$ equals exactly $\mathcal{W}_N(\bz)$ defined in \eqref{eq:pointMeasure}.
\subsubsection*{The correlation kernel.} One can see that the matrices \eqref{eq:MatrixL}, taken over all configurations $\bz$, uniquely determine the function $L : \Xf \times \Xf \to \R$. For example, in the case $N=2$ above, one has
\begin{align*}
L\big((1), (1,1, z)\big) = \phi(z_{1}^0, z), &\qquad L\big((1,1,y), (2,2, z)\big) = -\phi(y,z),\\
L\big((2,1, z), (2)\big) = \Psi^2_{0}(z), &\qquad L\big((2,1, y), (1,1, z)\big) = 0.
\end{align*}
Hence, the point measure $\mathcal{W}_N$ is a conditional $L$-ensemble with this function $L$. By Proposition~\ref{prop:ConditionalLensembles}, the point-measure $\mathcal{W}_N$ on $\Zf$ is determinantal with correlation kernel $\CK : \Zf \times \Zf \to \R$ given by
\begin{equation}\label{eq:KDef}
\CK=1_\Zf-(1_\Zf+L)^{-1}\big|_{\Zf\times\Zf}.
\end{equation}
In fact, we can compute the inverse of the operator above. To this end, we identify the function $L$ with a function on $\Lambda_N$ with values in $\CM_{N(N+3)/2, N(N+3)/2}$ so that the identities \eqref{eq:MatrixL} hold. Then $L$ can be written in the block form
\[
L=\left[
\begin{array}{cc}
0 & B \\
C & D_0 \\
\end{array}
\right]
\]
with the blocks
\[
B=[E_0,\ldots,E_{N-1}] \; : \; \Lambda_N \to \CM_{N, N(N+1)/ 2},\qquad C=[0,\ldots,0, (\Psi^{(N)})']' \;:\; \Lambda_N \to \CM_{N(N+1)/ 2, N},
\]
and $D_0 : \Lambda_N \to \CM_{N(N+1)/ 2, N(N+1)/ 2}$ given by
\[
D_0 = \arraycolsep=5pt\renewcommand\arraystretch{1.2}
\left[\begin{array}{ccccc}
0 & -W_{[1,2)} & 0 & \cdots & 0 \\
0 & 0 & -W_{[2,3)} & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots &0 \\
0 & 0 & 0 &\cdots & -W_{[N-1,N)} \\
0 & 0 & 0 & \cdots & 0
\end{array}
\right].
\]
Defining furthermore the function $D=1 + D_0$ which can be written as
\[
D = \arraycolsep=5pt\renewcommand\arraystretch{1.2}
\left[\begin{array}{ccccc}
1 & -W_{[1,2)} & 0 & \cdots & 0 \\
0 & 1 & -W_{[2,3)} & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots &0 \\
0 & 0 & 0 &\cdots & -W_{[N-1,N)} \\
0 & 0 & 0 & \cdots & 1
\end{array}
\right],
\]
the result of~\cite[Lem.~1.5]{borodinRains} yields an expression for the correlation kernel $\CK$. For the sake of completeness we provide a proof here. As with the function $L$, we identify the functions $B$, $C$ and $D$ with functions on $\Xf \times \Xf$. Moreover, for clarity we denote convolution over the values in $\Zf$ by $\star$, and write convolution over $\{1, \cdots, N\}$ as a usual product.
\begin{lemma}
The operators $D$ and $M=B \star D^{-1} \star C$ are invertible, and the correlation kernel $\CK$ defined in \eqref{eq:KDef} can be written as
\begin{equation}\label{eq:KAfterInversion}
\CK= 1_\Zf - D^{-1} + D^{-1} \star C M^{-1} B \star D^{-1}.
\end{equation}
\end{lemma}
\begin{proof}
The claim will follow if we show that one has
\begin{equation}\label{eq:LInverse}
(1_\Zf+L)^{-1} = \arraycolsep=5pt\renewcommand\arraystretch{1.2}
\left[
\begin{array}{cc}
- M^{-1} & M^{-1} B \star D^{-1} \\
D^{-1} \star C M^{-1} & \quad D^{-1} - D^{-1} \star C M^{-1} B \star D^{-1} \\
\end{array}
\right],
\end{equation}
where $M$ is as in the statement of this lemma. The easiest way is to check that the matrix on the right-hand side multiplied by $1_\Zf+L$ is the identity matrix, i.e.
\begin{align*}
(1_\Zf+L)&\arraycolsep=5pt\renewcommand\arraystretch{1.2}
\left[
\begin{array}{cc}
- M^{-1} & M^{-1} B \star D^{-1} \\
D^{-1} \star C M^{-1} & \quad D^{-1} - D^{-1} \star C M^{-1} B \star D^{-1} \\
\end{array}
\right]\\
&= \left[
\begin{array}{cc}
0 & B \\
C & D \\
\end{array}
\right] \left[
\begin{array}{cc}
- M^{-1} & M^{-1} B \star D^{-1} \\
D^{-1} \star C M^{-1} & \quad D^{-1} - D^{-1} \star C M^{-1} B \star D^{-1} \\
\end{array}
\right]\\
&= \left[
\begin{array}{cc}
B \star D^{-1} \star C M^{-1} & \quad B \star D^{-1} - B \star D^{-1} \star C M^{-1} B \star D^{-1} \\
- C M^{-1} + C M^{-1} & \quad CM^{-1} B \star D^{-1} + 1 - C M^{-1} B \star D^{-1} \\
\end{array}
\right]\\
&= \left[
\begin{array}{cc}
M M^{-1} & \quad B \star D^{-1} - M M^{-1} B \star D^{-1} \\
- C M^{-1} + C M^{-1} & \quad CM^{-1} B \star D^{-1} + 1 - C M^{-1} B \star D^{-1} \\
\end{array}
\right] = \left[
\begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array}
\right],
\end{align*}
which means that \eqref{eq:LInverse} indeed holds. Taking the restriction $(1_\Zf+L)^{-1}\big|_{\Zf\times\Zf}$ we get the right-bottom block of the matrix in \eqref{eq:LInverse} which combined with the definition \eqref{eq:KDef} gives exactly the expression on the right-hand side of \eqref{eq:KAfterInversion}.
\end{proof}
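The block computation above is elementary linear algebra and can be sanity-checked numerically. The following sketch (assuming Python with \texttt{numpy}; the random blocks merely stand in for $B$, $C$ and $D_0$) verifies \eqref{eq:LInverse} and \eqref{eq:KAfterInversion} on generic finite-dimensional data.
\begin{verbatim}
# Check of (eq:LInverse) and (eq:KAfterInversion) on random finite-dimensional
# data -- a linear-algebra sketch; the blocks B, C, D_0 are generic matrices,
# not the operators of the text.
import numpy as np
rng = np.random.default_rng(0)

N, m = 3, 7                              # N boundary values, m = |Zf| sites
B = rng.normal(size=(N, m))
C = rng.normal(size=(m, N))
D0 = 0.1 * rng.normal(size=(m, m))       # small, so D = 1 + D_0 is invertible
D = np.eye(m) + D0
Di = np.linalg.inv(D)
M = B @ Di @ C                           # M = B * D^{-1} * C
Mi = np.linalg.inv(M)

one_plus_L = np.block([[np.zeros((N, N)), B], [C, D]])
claimed = np.block([[-Mi,          Mi @ B @ Di],
                    [Di @ C @ Mi,  Di - Di @ C @ Mi @ B @ Di]])
assert np.allclose(np.linalg.inv(one_plus_L), claimed)

# K = 1_Zf - (1_Zf + L)^{-1} restricted to Zf x Zf, as in (eq:KDef)
K = np.eye(m) - np.linalg.inv(one_plus_L)[N:, N:]
assert np.allclose(K, np.eye(m) - Di + Di @ C @ Mi @ B @ Di)
print("block inversion formula verified on random data")
\end{verbatim}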
The inverse of the matrix $D$ can be computed very easily:
\[
D^{-1} = (1 + D_0)^{-1} = \sum_{k \geq 0} D_0^{\star k} =\arraycolsep=5pt\renewcommand\arraystretch{1.2}
\left[\begin{array}{cccc}
1 & W_{[1,2)} & \cdots & W_{[1,N)} \\
0& 1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & W_{[N-1,N)} \\
0& 0 & 0 &1
\end{array}
\right],
\]
so that the submatrix of $1 - D^{-1}$ with rows $(n, \cdot)$ and columns $(m, \cdot)$ is $-W_{[n, m)} \1{n < m}$.
\begin{exercise}
Prove that the inverse of $D$ is indeed given by the matrix above.
\end{exercise}
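\noindent As a quick illustration of the exercise, here is a numerical sketch (assuming Python with \texttt{numpy}) of the Neumann-series computation of $D^{-1}$ for $N=3$, with square random blocks standing in for the kernels $W_{[1,2)}$ and $W_{[2,3)}$.
\begin{verbatim}
# Check of the Neumann series D^{-1} = sum_k D_0^k for N = 3 -- a sketch with
# square random blocks W1, W2 standing in for the kernels W_{[1,2)}, W_{[2,3)}.
import numpy as np
rng = np.random.default_rng(1)

d = 4
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
I, O = np.eye(d), np.zeros((d, d))

D = np.block([[I, -W1, O], [O, I, -W2], [O, O, I]])
Dinv = np.block([[I, W1, W1 @ W2], [O, I, W2], [O, O, I]])  # claimed inverse
assert np.allclose(np.linalg.inv(D), Dinv)
print("D^{-1} has the claimed block structure")
\end{verbatim}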
\noindent Moreover, we can easily compute
\[\arraycolsep=5pt\renewcommand\arraystretch{1.2}
D^{-1} \star C = \left[\begin{array}{c}
W_{[1,N)} \star \Psi^{(N)} \\
\vdots \\
W_{[N-1,N)} \star \Psi^{(N)} \\
\Psi^{N}
\end{array}\right],
\]
as well as
\[
B \star D^{-1} = \arraycolsep=5pt\renewcommand\arraystretch{1.2} \left[\begin{array}{cccc}
E_0 & E_0 \star W_{[1,2)}+E_1 & \cdots & \sum_{k=1}^{N-1} E_{k-1} \star W_{[k,N)}+ E_{N-1}
\end{array}\right].
\]
Therefore the $(n,m)$-block of the correlation kernel $\CK$ is given by
\begin{equation}\label{eq:K_interm}
\bigl[\CK \bigr]_{(n, \cdot), (m, \cdot)} = -W_{[n,m)} \1{n < m}+W_{[n,N)} \star \Psi^{(N)} M^{-1} \left(\sum_{k=1}^{m-1} E_{k-1} \star W_{[k,m)}+ E_{m-1}\right).
\end{equation}
It follows from the property \eqref{eq:PsiRecursion} that for $\bz \in \Lambda_N$ one has
\[
\left[W_{[n,N)} \star \Psi^{(N)}(\bz)\right]_{i,j} = \left(\phi^{N - n} * \Psi^N_{N-j}\right)(z_i^n)=\Psi^{n}_{n-j}(z_i^n),
\]
and it remains to evaluate the last part of \eqref{eq:K_interm}. For the $N\times m$ matrix in the bracket in \eqref{eq:K_interm} we have
\[
\left[ \left(\sum_{k=1}^{m-1} E_{k-1} \star W_{[k,m)}+ E_{m-1} \right)(\bz)\right]_{i,j} =
\left\{\begin{array}{ll}
\phi^{m + 1 - i}(z_i^{i-1},z_j^m),\quad & 1\leq i \leq m,\\
0, & m < i \leq N,
\end{array}\right.
\]
and we arrive at the expression
\[
\bigl[\CK(\bz)\bigr]_{(n, i), (m, j)} = - \phi^{m - n}(z_i^n,z_j^{m}) \1{n < m} + \sum_{\ell, k = 1}^m \Psi^{n}_{n-\ell}(z_i^n)\, \bigl[M^{-1}\bigr]_{\ell, k}\, \phi^{m + 1 - k}(z_k^{k-1},z_j^m),
\]
where the matrix $M$ is given by
\begin{equation}\label{eq:MatrixM}
\bigl[M\bigr]_{i, j} =
\begin{cases}
\left(\phi^{N + 1 - i} * \Psi^N_{N- j}\right)(z_i^{i-1}),& i < j,\\
1,& i = j,\\
0,& i > j.
\end{cases}
\end{equation}
\subsubsection*{Biorthogonalization of the correlation kernel.} The functions $\phi^{m + 1 - i}(z_i^{i-1},x)$ with $i =1, \ldots, m$ form a basis of ${\rm span}\bigl\{1,x,\ldots,x^{m-1}\bigr\}$ (by considering $z_i^{i-1}$ to be a fixed value). Since by assumption the functions $\bigl\{\Phi^m_{m-1}(x),\ldots,\Phi^m_0(x)\bigr\}$ form a basis of this space as well, we can define a matrix $A_m \in \CM_{m, m}$ which does a change of basis to $\bigl\{\Phi^m_{m-1}(x),\ldots,\Phi^m_0(x)\bigr\}$, namely
\[
\phi^{m + 1 - i}(z_i^{i-1},x)=\sum_{\ell=1}^m \bigl[A_m\bigr]_{i,\ell}\, \Phi^m_{m-\ell}(x).
\]
We convolve this equation with $\Psi^m_{m-j}(x)$ and obtain, using the biorthogonality assumption,
\[
\bigl[A_m\bigr]_{i,j}= \left (\phi^{m - i + 1}*\Psi^m_{m-j}\right)(z_i^{i-1}).
\]
In particular, we have $A_N=M$, and since $M$ is invertible, the two properties from Theorem~\ref{thm:BFPS} indeed define the functions $\Phi^n_k$ uniquely. Thus we obtain
\begin{equation}\label{eq:some_expression}
\sum_{k=1}^m \bigl[M^{-1}\bigr]_{\ell, k}\, \phi^{m + 1 - k}(z_k^{k-1},z_j^m) = \sum_{k=1}^m \bigl[A_N^{-1}\bigr]_{\ell, k}\, \sum_{i=1}^m \bigl[A_m\bigr]_{k,i}\, \Phi^m_{m-i}(z_j^m),
\end{equation}
and our claim is now to prove that
\begin{equation}\label{eq:AOrth}
\sum_{k=1}^m \bigl[A_N^{-1}\bigr]_{\ell, k} \bigl[A_m\bigr]_{k,i} = \delta_{\ell, i},
\end{equation}
for all $1 \leq \ell \leq N$ and $1 \leq i \leq m$. We notice that
\[
\bigl[A_m\bigr]_{k, i}= \left (\phi^{m - k + 1}*\Psi^m_{m-i}\right)(z_k^{k-1}) = \left (\phi^{N - k + 1}*\Psi^N_{N-i}\right)(z_k^{k-1}) = \bigl[A_N\bigr]_{k, i},
\]
for $1 \leq k, i \leq m$. Thus, using the fact that $A_N$ is upper-triangular (which follows from \eqref{eq:MatrixM}), we obtain for $1 \leq i \leq m$:
\[
\sum_{k=1}^m \bigl[A_N^{-1}\bigr]_{\ell, k} \bigl[A_m\bigr]_{k,i} = \sum_{k=1}^m \bigl[A_N^{-1}\bigr]_{\ell, k} \bigl[A_N\bigr]_{k,i} = \sum_{k=1}^N \bigl[A_N^{-1}\bigr]_{\ell, k} \bigl[A_N\bigr]_{k,i} = \delta_{\ell, i},
\]
which is exactly our claim. This gives that the right-hand side of \eqref{eq:some_expression} is equal to $\Phi^m_{m-\ell}(z_j^m)$, and the $(n,m)$-th block of the correlation kernel $\CK$ is given by
\[
\bigl[\CK(\bz)\bigr]_{(n, i), (m, j)} = - \phi^{m - n}(z_i^n,z_j^{m}) \1{n < m} + \sum_{\ell = 1}^m \Psi^{n}_{n-\ell}(z_i^n)\, \Phi^m_{m-\ell}(z_j^m),
\]
which, if we go to the function $\CK : \Zf \times \Zf \to \R$, is equivalent to
\[
\CK \bigl((n, i, x), (m, j, y)\bigr)=-\phi^{m - n}(x,y) \1{m > n} + \sum_{\ell = 1}^m\Psi^{n}_{n-\ell}(x)\, \Phi^{m}_{m-\ell}(y).
\]
Since we are interested in the distribution of the particles $z^n_1$ (see Figure~\ref{fig:GT}), the kernel \eqref{eq:Kt} is obtained by fixing the values $i = j = 1$ in the above expression.
\begin{exercise} Follow the proof of Lemma~\ref{lem:GapProbability} to show that the identity \eqref{eq:extKernelProbBFPS} holds.
\end{exercise}
\section{Explicit formulas for the correlation kernel}
\label{sec:exact}
Only for a few special cases of initial data (step, see e.g.~\cite{dimers}, and periodic~\cite{borFerPrahSasam,bfp,bfs}) were the correlation kernels \eqref{eq:Kt} known, and hence only for those choices could asymptotics be performed for TASEP and related models, leading to the Tracy-Widom $F_{\text{GUE}}$ and $F_{\text{GOE}}$ one-point distributions and, later, to the Airy processes for multipoint distributions. Below we provide formulas for arbitrary initial data and their extensions as $N \to \infty$.
\subsection{Finite initial data.}
We conjugate the kernels from Theorem~\ref{thm:BFPS} by powers of $2$:
\begin{equation}\label{eq:defQ}
Q(x,y)=\frac{1}{2^{x-y}}\1{x>y},
\end{equation}
as well as
\begin{equation}\label{eq:defPsi}
\Psi^n_k(x)=\frac1{2\pi\I}\oint_{\Gamma_0}dw\,\frac{(1-w)^k}{2^{x-X_0(n-k)}w^{x+k+1-X_0(n-k)}} e^{t(w-1)}.
\end{equation}
Then the functions $\Phi_k^{n}(x)$, $k=0,\ldots,n-1$, are defined implicitly by
\begin{enumerate}[label={\normalfont (\arabic{*})}]
\item the biorthogonality relation $\sum_{x\in\Z}\Psi_k^{n}(x)\Phi_\ell^{n}(x)=\1{k=\ell}$;
\label{ortho}
\item $2^{-x}\Phi^n_k(x)$ with $0 \leq k < n$ form a basis of ${\mathrm{span}}\bigl\{x^k : 0 \leq k < n\bigr\}$.
\end{enumerate}
We are interested in computing the kernel,
\begin{equation}\label{eq:Kt_new}
K_t(n_1,x_1;n_2,x_2)=-Q^{n_2-n_1}(x_1,x_2)\1{n_1<n_2}+\sum_{k=1}^{n_2}\Psi^{n_1}_{n_1-k}(x_1)\Phi^{n_2}_{n_2-k}(x_2),
\end{equation}
for $n_1,n_2\in\Z_{\geq1}$ and $x_1,x_2\in\Z$. The initial data $X_0$ appears in a simple way in the functions $\Psi_k^n$, which can be computed explicitly. We note that $Q$ is the transition matrix of a geometric random walk on $\Z$ and has a left inverse
\begin{equation}\label{eq:Qinv}
Q^{-1}=I+2\nablap,
\end{equation}
where we recall that $\nablap f(x) = f(x+1) - f(x)$. Moreover, for all $m,n\in \Z_{\geq 0}$ we have the identities
\[ Q^{n-m}\Psi^n_{n-k}=\Psi^m_{m-k}, \hspace{1cm} \Psi^{n}_k=e^{-\frac{t}{2} (I+\nablam)}Q^{-k}\delta_{X_0(n-k)},\]
where $\delta_x(y) = \uno{x = y}$ is the Kronecker delta and $\nablam f(x) = f(x) - f(x-1)$.
\begin{exercise}
Prove that these identities hold.
\end{exercise}
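\noindent The following small sketch (assuming Python with \texttt{numpy}; it does not address the exercise itself) checks the left-inverse relation \eqref{eq:Qinv} on a finite window of $\Z$.
\begin{verbatim}
# Check of (eq:Qinv) on the window {0,...,L} -- a sketch; the relation
# (I + 2 nabla^+) Q = I holds exactly on every row except the boundary row L.
import numpy as np

L = 30
x = np.arange(L + 1)
Q = np.where(x[:, None] > x[None, :], 2.0 ** (x[None, :] - x[:, None]), 0.0)
nabla_plus = -np.eye(L + 1) + np.eye(L + 1, k=1)        # f(x+1) - f(x)
Qinv = np.eye(L + 1) + 2 * nabla_plus

prod = Qinv @ Q
assert np.allclose(prod[:L, :], np.eye(L + 1)[:L, :])
print("(I + 2 nabla^+) Q = I verified on the window")
\end{verbatim}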
\noindent In order to give exact formulas for the functions $\Phi^n_k$ from Theorem~\ref{thm:BFPS} we need to define the functions $h^n_k : \Z_{\geq 0} \times \Z \to \R$ as solutions to the initial-boundary value problem for the backwards heat equation
\begin{subnumcases}{\label{bhe}}
(\Qt)^{-1}h^n_k(\ell,z)=h^n_k(\ell+1,z), & $0 \leq \ell<k,\,z \in \Z$;\label{bhe1}\\
h^n_k(k,z)=2^{z-X_0(n-k)}, & $z \in \Z$;\label{bhe2}\\
h^n_k(\ell,X_0(n-\ell))= 0, & $0 \leq \ell<k$;\label{bhe3}
\end{subnumcases}
for $n\geq1$ and $0\le k <n$ and where $Q^*$ is the adjoint of $Q$.
\begin{exercise}
Show that the dimension of $\ker (Q^*)^{-1}$ is $1$ and use this to show existence and uniqueness of the solution to the equation \eqref{bhe}.
\end{exercise}
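\noindent The kernel in the exercise is spanned by the geometric function $2^z$; the following two-line check (a sketch in Python) verifies that $2^z$ is indeed annihilated by $(\Qt)^{-1}$, whose action, obtained from \eqref{eq:Qinv} by taking adjoints, is $(\Qt)^{-1}f(x)=2f(x-1)-f(x)$.
\begin{verbatim}
# Check (sketch) that 2^z lies in the kernel of (Q*)^{-1}, whose action is
# (Q*)^{-1} f(x) = 2 f(x-1) - f(x).
f = lambda z: 2.0 ** z
assert all(2 * f(x - 1) - f(x) == 0.0 for x in range(-20, 21))
print("2^z is annihilated by (Q*)^{-1}")
\end{verbatim}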
\begin{remark}
The expression $\Qt h^{n}_{k}(k,z)$ is divergent, so one cannot simply write $\Qt h^{n}_k(\ell+1,z)=h^{n}_k(\ell,z)$.
\end{remark}
\noindent With these functions at hand we are ready to give exact formulas for $\Phi^n_k$.
\begin{theorem}\label{thm:h_heat}
The functions $\Phi^n_k$ from Theorem~\ref{thm:BFPS} are given by
\begin{equation}\label{eq:defPhink}
\Phi^n_k(z) =\sum_{y\in \Z} h^{n}_k(0,y)\,e^{\frac{t}{2} (I+\nabla^-)}(y,z).
\end{equation}
\end{theorem}
\begin{proof}
Before proving \eqref{eq:defPhink} we need to prove that $2^{-x}h^{n}_{k}(0,x)$ is a polynomial of degree $k$. We proceed by induction.
Note first that, by \eqref{bhe2}, $2^{-x}h^{n}_{k}(k,x)$ is a polynomial of degree 0.
Assume now that $\tilde h^n_k(\ell,x)=2^{-x}h^{n}_{k}(\ell,x)$ is a polynomial of degree $k-\ell$ for some $0<\ell\leq k$.
By \eqref{bhe1} and \eqref{eq:Qinv} we have
\begin{equation}\label{eq:tildeh-eq}
\tilde h^{n}_{k}(\ell,y)=2^{-y}(\Qt)^{-1}h^{n}_{k}(\ell-1,y)=\tilde h^{n}_{k}(\ell-1,y-1)-\tilde h^{n}_{k}(\ell-1,y)
\end{equation}
Taking $x\geq X_0(n-\ell+1)$ and summing gives $\tilde h^{n}_{k}(\ell-1,x)=-\sum_{y=X_0(n-\ell+1)+1}^{x}2^{-y}h^{n}_{k}(\ell,y)$ thanks to \eqref{bhe3}, which by the inductive hypothesis is a polynomial of degree $k-\ell+1$.
The function\break$\tilde h^n_k(\ell-1,x)\big|_{x\geq X_0(n-\ell+1)}$ has a unique polynomial extension to all $\Z$, which by uniqueness of solutions of \eqref{bhe} and \eqref{eq:tildeh-eq} shows that $\tilde h^{n}_{k}(\ell-1,\cdot)$ is a polynomial of degree $k-\ell+1$ as needed.
Now we check the biorthogonality relation of \eqref{eq:defPhink} with the functions $\Psi^n_k$. We have
\begin{align*}
\sum_{z\in\Z}\Psi^n_\ell(z)\Phi^n_k(z) &=\sum_{z_1,z_2\in\Z}\sum_{z\in\Z}e^{-\frac{t}{2} (I+\nabla^-)}(z,z_1)Q^{-\ell}(z_1,X_0(n-\ell))h^{n}_{k}(0,z_2)e^{\frac{t}{2} (I+\nabla^-)}(z_2,z)\\
&=\sum_{z\in\Z}Q^{-\ell}(z,X_0(n-\ell))h^{n}_{k}(0,z)
=(\Qt)^{-\ell}h^{n}_{k}(0,X_0(n-\ell)),
\end{align*}
where in the first equality we have used the fact that $2^{-x}h^n_k(0,x)$ is a polynomial together with the fact that the $z_1$ sum is finite to apply Fubini. For $0 \leq \ell\leq k$, we use all equations in \eqref{bhe} to get
\[
(\Qt)^{-\ell}h^{n}_{k}(0,X_0(n-\ell))=h^{n}_{k}(\ell,X_0(n-\ell))=\1{k=\ell}.
\]
For $\ell>k$, we use \eqref{bhe1} and $2^z\in\ker{(\Qt)^{-1}}$ to obtain
\[
(\Qt)^{-\ell}h^{n}_{k}(0,X_0(n-\ell))=(\Qt)^{-(\ell-k-1)}(\Qt)^{-1}h^{n}_{k}(k,X_0(n-\ell))=0.
\]
Finally, since $2^{-x}h^{n}_{k}(0,x)$ is a polynomial of degree $k$ in $x$, so is $2^{-x}\Phi^n_k(x)$, which is the remaining defining property of $\Phi^n_k$ and completes the proof.
\end{proof}
\subsection{Correlation kernel as a transition probability.}
\label{subsec:hitting}
In this section we will perform the summation in \eqref{eq:Kt_new} and obtain a formula for the correlation kernel involving hitting probabilities of a random walk. We start by noting that it is sufficient to find the kernel \eqref{eq:Kt_new} with $n_1 = n_2$.
\begin{exercise} Show that the kernel \eqref{eq:Kt_new} can be recovered from the kernel $K^{(n)}_t(x_1,x_2)=K_t(n,x_1;n,x_2)$ by
\begin{equation}
K_t(n_i,\cdot;n_j,\cdot)=Q^{n_j-n_i} \bigl(-\1{n_i<n_j}+K^{(n_j)}_t\bigr).
\end{equation}
\end{exercise}
\noindent From the exercise, we can restrict our discussion to the kernel $K^{(n)}_t$. Using the functions $h^n_k$ let us define
\begin{equation}
G_{0,n}(z_1,z_2)=\sum_{k=0}^{n-1}Q^{n-k}(z_1,X_0(n-k))h^{n}_{k}(0,z_2),
\end{equation}
so that the correlation kernel $K_t^{(n)}$ equals
\begin{equation}\label{eq:Kt-decomp}
K_t^{(n)}=e^{-\frac{t}{2} (I+\nabla^-)} Q^{-n}G_{0,n}e^{\frac{t}{2} (I+\nabla^-)}.
\end{equation}
Next, we note that the functions $h^n_k$ can be written as hitting probabilities of a random walk. More precisely, let $\Qt$ (the adjoint of $Q$) be the transition kernel of the random walk $B_m^*$ with Geom$\bigl[\frac12\bigr]$ jumps (strictly) to the right. Then for $0\leq\ell\leq k\leq n-1$ we define stopping times
\[
\tau^{\ell,n}=\min \bigl\{m\in\{\ell,\ldots,n-1\}\!:\,B^*_m> X_0({n-m})\bigr\},
\]
with the convention that $\min\emptyset=\infty$. For $z\leq X_0(n-\ell)$ the function $h^n_k$ can be written as
\begin{equation}
h^n_k(\ell,z)=\pp_{B^*_{\ell-1}=z}\big(\tau^{\ell,n}=k\big).
\end{equation}
\begin{exercise}
Prove this identity.
\end{exercise}
\begin{exercise} Suppose $X$
is a random variable taking values in $\mathbb{N}$
with the \emph{memoryless property}, i.e.\ for each pair of numbers $m,n\in\mathbb{N}$ one has
$\pp(X\ge m+n\mid X>n) = \pp(X\ge m)$.
Show that $X$ has a geometric distribution.
\end{exercise}
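\noindent For the geometric law $\pp(X=k)=2^{-k}$, $k\in\mathbb{N}$, which is the jump law used for $B^*_m$ above, the memoryless property can also be verified directly; the following sketch (plain Python, not part of the notes) does so numerically.
\begin{verbatim}
# Check (sketch) of the memoryless property for P(X = k) = 2^{-k}, k >= 1,
# using P(X >= m) = 2^{-(m-1)}.
P_ge = lambda m: 2.0 ** (1 - m) if m >= 1 else 1.0
for n in range(8):
    for m in range(1, 8):
        cond = P_ge(m + n) / P_ge(n + 1)     # P(X >= m + n | X > n)
        assert abs(cond - P_ge(m)) < 1e-12
print("memoryless property verified")
\end{verbatim}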
From the memoryless property of the geometric distribution we get for all $y > X_0(n-k)$,
\begin{equation}\label{eq:memoryless}
\pp_{B^*_{-1}=z}\big(\tau^{0,n}=k,\,B^*_k=y\big)=2^{X_0(n-k)-y}\pp_{B^*_{-1}=z}\big(\tau^{0,n}=k\big),
\end{equation}
and as a consequence, for $z_2\leq X_0(n)$, we have
\begin{equation}\label{eq:G-formula}
G_{0,n}(z_1,z_2)=\pp_{B^*_{-1}=z_2}\big(\tau^{0,n}<n,\,B^*_{n-1}=z_1\big),
\end{equation}
which is the probability for the walk starting at $z_2$ at time $-1$ to end up at $z_1$ after $n$ steps, having hit the curve $\big(X_0(n-m)\big)_{m=0,\ldots,n-1}$ in between.
\begin{exercise}
Show that the identities \eqref{eq:memoryless} and \eqref{eq:G-formula} indeed hold.
\end{exercise}
\noindent The next step is to obtain an expression along the lines of \eqref{eq:G-formula} which holds for all $z_2$, and not just for $z_2\leq X_0(n)$. To this end, we need to define analytic extensions of the kernels $Q^n$. More precisely, for each fixed $y_1$, $2^{-y_2}Q^n(y_1,y_2)$ extends in $y_2$ to a polynomial $2^{-y_2}\bar{Q}^{(n)}(y_1,y_2)$ of degree $n-1$ with
\begin{equation}
\bar{Q}^{(n)}(y_1,y_2)= \frac{1}{2\pi \I} \oint_{\Gamma_0} dw\,\frac{(1+w)^{y_1 - y_2 -1}}{2^{y_1-y_2} w^n},
\end{equation}
so that for $y_1-y_2\geq n$ we have $\bar{Q}^{(n)}(y_1,y_2)=Q^n(y_1,y_2)$. Furthermore, it is easy to prove that
$Q^{-1}\bar{Q}^{(n)}=\bar{Q}^{(n)}Q^{-1}=\bar{Q}^{(n-1)}$ for $n>1$, but $Q^{-1}\bar{Q}^{(1)}=\bar{Q}^{(1)}Q^{-1}=0$, and
$\bar{Q}^{(n)}\bar{Q}^{(m)}$ is divergent (so the $\bar{Q}^{(n)}$ are no longer a group like $Q^n$).
\begin{exercise}
Prove that these properties of $\bar{Q}^{(n)}$ indeed hold.
\end{exercise}
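\noindent These properties of $\bar{Q}^{(n)}$ can be checked numerically as well. The sketch below (assuming Python with \texttt{numpy}) uses the closed form $\bar{Q}^{(n)}(y_1,y_2)=2^{y_2-y_1}\binom{y_1-y_2-1}{n-1}$, with the generalized binomial coefficient, which is what the residue at $w=0$ in the contour integral evaluates to.
\begin{verbatim}
# Checks (sketch) of the properties of Qbar^{(n)}, using the closed form
# Qbar^{(n)}(y1,y2) = 2^(y2-y1) * binom(y1-y2-1, n-1) with the generalized
# binomial coefficient (the residue at w = 0 of the contour integral).
import numpy as np
from math import factorial

def genbinom(a, k):                      # binom(a, k) for any integer a
    num = 1
    for i in range(k):
        num *= a - i
    return num / factorial(k)

L, n = 20, 4
x = np.arange(L + 1)
Q = np.where(x[:, None] > x[None, :], 2.0 ** (x[None, :] - x[:, None]), 0.0)
Qinv = np.eye(L + 1) + 2 * (-np.eye(L + 1) + np.eye(L + 1, k=1))

def Qbar(k):
    return np.array([[2.0 ** (b - a) * genbinom(a - b - 1, k - 1)
                      for b in range(L + 1)] for a in range(L + 1)])

# (i) agreement with Q^n where y1 - y2 >= n
mask = (x[:, None] - x[None, :]) >= n
assert np.allclose(Qbar(n)[mask], np.linalg.matrix_power(Q, n)[mask])
# (ii) Q^{-1} Qbar^{(n)} = Qbar^{(n-1)}  (away from the boundary row)
assert np.allclose((Qinv @ Qbar(n))[:L], Qbar(n - 1)[:L])
# (iii) Q^{-1} Qbar^{(1)} = 0
assert np.allclose((Qinv @ Qbar(1))[:L], 0.0)
print("properties of Qbar^{(n)} verified on the window")
\end{verbatim}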
\noindent Let $B_m$ now be a random walk with transition matrix $Q$ (that is, $B_m$ has Geom$\bigl[\tfrac{1}{2}\bigr]$ jumps strictly to the left), for which we define the stopping time
\begin{equation}\label{eq:deftau}
\tau= \min\bigl\{ m\ge 0: B_m> X_0(m+1)\bigr\}.
\end{equation}
Using this stopping time and the extension of $Q^m$ we obtain:
\begin{lemma}\label{lem:G0n-formula}
For all $z_1,z_2\in\Z$ we have the identity
\begin{equation}
G_{0,n}(z_1,z_2) = \1{z_1>X_0(1)} \bar{Q}^{(n)}(z_1,z_2) + \1{z_1\leq X_0(1)}\ee_{B_0=z_1}\!\left[ \bar{Q}^{(n - \tau)}(B_{\tau}, z_2)\1{\tau<n}\right].
\end{equation}
\end{lemma}
\begin{proof}
For $z_2\leq X_0(n)$, the expression in \eqref{eq:G-formula} can be written as
\begin{multline}\label{eq:G0napp1}
G_{0,n}(z_1,z_2)=\pp_{B^*_{-1}=z_2}\big(\tau^{0,n}\le n-1,\,B^*_{n-1}=z_1\big)=\pp_{B_{0}=z_1}\big(\tau\leq n-1,B_{n}=z_2\big)\\
=\sum_{k=0}^{n-1}\sum_{z>X_0(k+1)}\pp_{B_{0}=z_1}\big(\tau=k,\,B_{k}=z\big)Q^{n-k}(z,z_2)
=\ee_{B_0=z_1}\!\left[Q^{n-\tau}\big(B_{\tau},z_2\big)\1{\tau<n}\right].
\end{multline}
The last expectation is straightforward to compute if $z_1>X_0(1)$, and we get
\begin{equation}
G_{0,n}(z_1,z_2)=\1{z_1>X_0(1)}Q^n(z_1,z_2)+\1{z_1\leq X_0(1)}\ee_{B_0=z_1}\!\left[Q^{n-\tau}\big(B_{\tau},z_2\big)\1{\tau<n}\right]
\end{equation}
for all $z_2\leq X_0(n)$.
Let us now denote
\[
\wt G_{0,n}(z_1,z_2)=\1{z_1>X_0(1)}\bar{Q}^{(n)}(z_1,z_2)+\1{z_1\leq X_0(1)}\ee_{B_0=z_1}\!\left[\bar{Q}^{(n-\tau)}\big(B_{\tau},z_2\big)\1{\tau<n}\right].
\]
We claim that $\wt G_{0,n}(z_1,z_2)=G_{0,n}(z_1,z_2)$ for all $z_2\leq X_0(n)$.
To see this, note that
\[
\P_{X_0(1)}\bar{Q}^{(n)}\bar\P_{X_0(n)}=\P_{X_0(1)}{Q^n}\bar\P_{X_0(n)},
\]
thanks to the properties proved in the exercise above. For the other term, the last equality in \eqref{eq:G0napp1} shows that we only need to check that $\P_{X_0(k+1)}\bar{Q}^{(k+1)}\bar\P_{X_0(n)}=\P_{X_0(k+1)}{Q^{k+1}}\bar\P_{X_0(n)}$ for $k=0,\ldots,n-1$, which follows again from the same fact proved in the exercise.
To complete the proof, recall that, by property (2) of the functions $\Phi^n_k$ above, $K^{(n)}_t$ satisfies the following: for every fixed $z_1$, $2^{-z_2}K^{(n)}_t(z_1,z_2)$ is a polynomial of degree $n-1$ in $z_2$.
It is easy to check that this implies that $G_{0,n}=Q^{n}e^{\frac{t}{2}(I+\nabla^-)}K^{(n)}_t e^{-\frac{t}{2}(I+\nabla^-)}$ satisfies the same.
Since $\bar{Q}^{(k)}$ also satisfies this property for each $k=0,\ldots,n$, we deduce that $2^{-z_2}\wt G_{0,n}(z_1,z_2)$ is a polynomial in $z_2$.
Since it coincides with $2^{-z_2}G_{0,n}(z_1,z_2)$ at infinitely many $z_2$'s, we deduce that $\wt G_{0,n}=G_{0,n}$.
\end{proof}
In order to have a lighter notation, we define the following kernels
\begin{align}
\mathcal{S}_{-t,-n}(z_1,z_2) &:= (e^{-\frac{t}2 \nabla^-} Q^{-n})^*(z_1,z_2) = \frac{1}{2\pi\I} \oint_{\Gamma_0}dw\, \frac{(1-w)^{n}}{2^{z_2-z_1} w^{n +1 + z_2 - z_1}}e^{t(w-1 / 2)},\label{def:sm}\\
\bar{\mathcal{S}}_{-t,n} (z_1,z_2) &:= \bar{Q}^{(n)}e^{\frac{t}2 \nabla^-} (z_1,z_2)= \frac{1}{2 \pi \I} \oint_{\Gamma_{0}} dw\,\frac{(1-w)^{z_2-z_1 + n - 1}}{2^{z_1-z_2} w^{n}} e^{t(w - 1 / 2)}.\label{def:sn}
\end{align}
\begin{exercise}
Show that these operators indeed are given by the contour integrals.
\end{exercise}
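\noindent As a sanity check for \eqref{def:sm}, the following sketch (assuming Python with \texttt{numpy} and \texttt{scipy}) builds the operator $e^{-\frac t2\nabla^-}Q^{-n}$ on a finite window of $\Z$, takes its adjoint, and compares it with the residue at $w=0$ of the contour integral; the analogous check for \eqref{def:sn} is similar.
\begin{verbatim}
# Check (sketch) of the contour formula (def:sm) on the window {0,...,L}.
# The residue at w = 0 is read off from the series of (1-w)^n e^{t(w-1/2)}.
import numpy as np
from scipy.linalg import expm
from math import comb, factorial

L, n, t = 25, 3, 1.7
I = np.eye(L + 1)
nabla_minus = I - np.eye(L + 1, k=-1)                  # f(x) - f(x-1)
Qinv = I + 2 * (-I + np.eye(L + 1, k=1))               # Q^{-1} = I + 2 nabla^+

A = expm(-0.5 * t * nabla_minus) @ np.linalg.matrix_power(Qinv, n)
S_op = A.T                         # S_{-t,-n} = (e^{-t/2 nabla^-} Q^{-n})^*

def S_contour(z1, z2):
    p = n + z2 - z1                # order of the pole at w = 0
    if p < 0:
        return 0.0
    c = sum(comb(n, k) * (-1) ** k * t ** (p - k) / factorial(p - k)
            for k in range(min(n, p) + 1))
    return 2.0 ** (z1 - z2) * np.exp(-t / 2) * c

for z1 in range(n, L + 1):         # keeps all lattice sums inside the window
    for z2 in range(L + 1):
        assert abs(S_op[z1, z2] - S_contour(z1, z2)) < 1e-9
print("(def:sm) verified on the window")
\end{verbatim}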
\noindent Furthermore, we define the following function
\begin{equation}
\bar{\bar{\mathcal{S}}}_{-t,n}^{\epi(X_0)}(z_1,z_2) = \ee_{B_0=z_1}\!\left[ \bar{\mathcal{S}}_{-t,n - \tau}(B_{\tau}, z_2)\1{\tau<n}\right],
\end{equation}
where the superscript epi refers to the fact that $\tau$ (defined in \eqref{eq:deftau}) is the hitting time of the epigraph of the curve $\big(X_0(k+1)+1\big)_{k=0,\ldots,n-1}$ by the random walk $B_k$. With these operators at hand we have the following formula for TASEP with general right-finite initial data:
\begin{theorem}[TASEP formula for right-finite initial data]\label{thm:tasepformulas}
Assume that initial values satisfy $X_0(j)=\infty$ for $j\le 0$. Then for $1\leq n_1<n_2<\dotsm<n_M$ and $t>0$ we have
\begin{equation}\label{eq:extKernelProb}
\pp\!\left(X_t(n_j)>a_j,~j=1,\ldots,M\right)=\det \bigl(I-\bar\chi_a K_t\bar\chi_a\bigr)_{\ell^2(\{n_1,\ldots,n_M\}\times\Z)},
\end{equation}
where the kernel $K_t$ is given by
\begin{equation}\label{eq:Kt-2}
K_t(n_i,\cdot;n_j,\cdot)=-Q^{n_j-n_i}\1{n_i<n_j}+(\mathcal{S}_{-t,-{n}_i})^*\bar{\bar{\mathcal{S}}}_{-t,n_j}^{\epi(X_0)}.
\end{equation}
\end{theorem}
\begin{proof}
If $X_0(1)<\infty$ then we are in the setting of the above sections.
Formulas \eqref{eq:extKernelProb} and \eqref{eq:Kt-2} follow directly from the above definitions together with \eqref{eq:Kt-decomp} and Lemma~\ref{lem:G0n-formula}.
If $X_0(j)=\infty$ for $j=1,\ldots,\ell$ and $X_0(\ell+1)<\infty$ then
\[
\pp_{X_0}\big(X_t(n_j)>a_j,~j=1,\ldots,M\big)=\det\!\big(I-\bar\chi_aK^{(\ell)}_t\bar\chi_a\big)_{\ell^2(\{n_1,\ldots,n_M\}\times\Z)}
\]
with the correlation kernel
\[
K^{(\ell)}_t(n_i,\cdot;n_j,\cdot)=-Q^{n_j-n_i}\1{n_i<n_j}+(\mathcal{S}_{-t,-{n}_i+\ell})^*\bar{\bar{\mathcal{S}}}_{-t,n_j-\ell}^{\epi(\theta_\ell X_0)},
\]
where $\theta_\ell X_0(j) = X_0(\ell + j)$. Using now the fact that $Q^{\ell}\bar{\bar{\mathcal{S}}}_{-t,n_j-\ell}^{\epi(\theta_\ell X_0)}=\bar{\bar{\mathcal{S}}}_{-t,n_j}^{\epi(X_0)}$ and \eqref{def:sm} we conclude that \eqref{eq:Kt-2} still holds in this case.
\end{proof}
\begin{remark} One can write formulas for initial data which are not right-finite~\cite{KPZ}, but they are a bit more cumbersome. In practice,
one cuts off the data very far to the right and uses the formula above. Since one has exact formulas, one can check that cutting off has a small effect.
\end{remark}
For some special initial data one can get simpler expressions for the correlation kernel~\cite{KPZ} and we can recover the formulas from \cite{borFerPrahSasam}, \cite{ferrariMatr} and \cite{bfp}. We leave these computations as exercises below.
\begin{exercise}
Consider TASEP with step initial data, $X_0(i) = -i$ for $i \geq 1$ and show that
\[
K_t(n_i,z_1;n_j,z_2)=-Q^{n_j-n_i}(z_1, z_2)\uno{n_i<n_j} + \frac{1}{(2\pi\I)^2} \oint_{\Gamma_0}dw \oint_{\Gamma_0}dv\, \frac{(1-w)^{n_i} (1-v)^{n_j + z_2}}{2^{z_1-z_2} w^{n_i + z_1 +1} v^{n_j}} \frac{e^{t(w + v-1)}}{1 - v - w}.
\]
\end{exercise}
\begin{exercise}$\!\!\!\!{}^*$
Consider TASEP with periodic initial data $X_0(i)=2i$, $i\in\Z$ and show that
\[K^{(n)}_t(z_1,z_2)=-\frac1{2\pi\I}\oint_{1+\Gamma_{0}}dv\, \frac{v^{z_2+2n}}{2^{z_1-z_2}(1-v)^{z_1+2n+1}}\, e^{t(1-2v)}.\]
(Hint: approximate by finite periodic initial data $X_0(i) = 2(N-i)$ for $i=1,\ldots,2N$.)
\end{exercise}
\subsection{Path integral formulas.}
In addition to the extended kernel formula \eqref{eq:extKernelProbBFPS}, one has a \emph{path integral formula}
\begin{equation}
\det \left(I-K^{(n_m)}_{t} \bigl(I-Q^{n_1-n_m}\P_{a_1}Q^{n_2-n_1}\P_{a_2}\dotsm Q^{n_m-n_{m-1}}\P_{a_m}\bigr)\right)_{L^2(\Z)},\label{eq:path-int-kernel-TASEPgem}
\end{equation}
where as before $K^{(n)}_t(z_1,z_2)=K_t(n,z_1;n,z_2)$. Such formulas were first obtained in \cite{prahoferSpohn} for the Airy$_2$ process (see \cite{prolhacSpohn} for the proof), were later extended to the Airy$_1$ process in \cite{quastelRemAiry1}, and then to a very wide class of processes in \cite{bcr}. We provide this result below in full generality.
For $t_1<t_2<\dotsm<t_n$ we consider an extended kernel $K^\uptext{ext}$ given as follows: for $1\leq i,j\leq n$ and $x,y\in X$ (here $(X,\mu)$ is a given measure space),
\begin{equation}\label{eq:generalExt}
K^\uptext{ext}(t_i,x;t_j,y)=
\begin{dcases*}
\mathcal{W}_{t_i,t_j}K_{t_j}(x,y), & if $i\geq j$,\\
-\mathcal{W}_{t_i,t_j}(I-K_{t_j})(x,y), & if $i<j$.
\end{dcases*}
\end{equation}
Additionally, we are considering multiplication operators $\Ml_{t_i}$ acting on a measurable function $f$ on $X$ as $\Ml_{t_i}f(x)=\varphi_{t_i}(x)f(x)$ for some measurable function $\varphi_{t_i}$ defined on $X$.
$\Ml$ will denote the diagonal operator acting on functions $f$ defined on $\{t_1,\ldots,t_n\}\!\times\!X$ as $\Ml f(t_i,\cdot)=\Ml_{t_i}f(t_i,\cdot)$.
We provide below all the assumptions from \cite{bcr} except their Assumption 2(iii) which has to be changed in our case.
\begin{assumption}\label{assum:1}
There are integral operators $Q_t$ on $L^2(X)$ such that the following hold:
\begin{enumerate}[label=(\roman*)]
\item The integral operators $Q_{t_i}\mathcal{W}_{t_i,t_j}$, $Q_{t_i}K_{t_i}$,
$Q_{t_i}\mathcal{W}_{t_i,t_j}K_{t_j}$ and $Q_{t_j}\mathcal{W}_{t_j,t_i}K_{t_i}$ for $1\leq i<j\leq n$
are all bounded operators mapping $L^2(X)$ to itself.
\item The operator $K_{t_1}-\bar Q_{t_1}\mathcal{W}_{t_1,t_2}\bar Q_{t_2}\cdots\mathcal{W}_{t_{n-1},t_n}\bar Q_{t_n}
\mathcal{W}_{t_n,t_1}K_{t_1}$, where $\bar Q_{t_i}=I-Q_{t_i}$, is a bounded operator mapping
$L^2(X)$ to itself.
\end{enumerate}
\end{assumption}
\begin{assumption}\label{assum:2}
For each $i\leq j\leq k$ the following hold:
\begin{enumerate}[label=(\roman*)]
\item \emph{Right-invertibility}: $\mathcal{W}_{t_i,t_j}\mathcal{W}_{t_j,t_i}K_{t_i}=K_{t_i}$;
\item \emph{Semigroup property}: $\mathcal{W}_{t_i,t_j}\mathcal{W}_{t_j,t_k}=\mathcal{W}_{t_i,t_k}$;
\item \emph{Reversibility relation}: $\mathcal{W}_{t_i,t_j}K_{t_{j}}\mathcal{W}_{t_j,t_{i}}=K_{t_{i}}$, for all $t_i<t_j$.
\end{enumerate}
\end{assumption}
\begin{assumption}\label{assum:3}
One can choose multiplication operators $V_{t_i}$, $V'_{t_i}$, $U_{t_i}$ and $U'_{t_i}$
acting on $\CM(X)$, for $1\leq i\leq n$, in such a way that:
\begin{enumerate}[label=(\roman*)]
\item $V_{t_i}'V_{t_i}Q_{t_i}=Q_{t_i}$ and $K_{t_i}U_{t_i}'U_{t_i}=K_{t_i}$, for all $1\leq i\leq n$.
\item The operators $V_{t_i}Q_{t_i}K_{t_i}V_{t_i}'$, $V_{t_i}Q_{t_i}\mathcal{W}_{t_i,t_j}V_{t_j}'$,
$V_{t_i}Q_{t_i}\mathcal{W}_{t_i,t_j}K_{t_j}V_{t_j}'$ and
$V_{t_j}Q_{t_j}\mathcal{W}_{t_j,t_i}K_{t_i}V_{t_i}'$ preserve $L^2(X)$ and are trace class in $L^2(X)$, for all $1\leq i<j\leq n$.
\item The operator
$U_{t_{i}}\!\left[\mathcal{W}_{t_{i},t_1}K_{t_1}-\bar{Q}_{t_{i}}\mathcal{W}_{t_{i},t_{i+1}}\dotsm
\bar{Q}_{t_{n-1}}\mathcal{W}_{t_{{n-1}},t_{n}}\bar{Q}_{t_{n}}\mathcal{W}_{t_{n},t_1}K_{t_1}\right]U_{t_1}'$ preserves
$L^2(X)$ and is trace class in $L^2(X)$, for all $1\leq i\leq n$, where
$\bar{Q}_{t_i}=I-Q_{t_i}$.
\end{enumerate}
\end{assumption}
We are assuming here that $\mathcal{W}_{t_i,t_j}$ is invertible for all $t_i\leq t_j$, so that $\mathcal{W}_{t_j,t_i}$ is defined as a proper operator\footnote{This is just for simplicity; it is possible to state a version of Theorem~\ref{thm:alt-extendedToBVP} asking instead that the product $K_{t_j}\mathcal{W}_{t_j,t_i}$ be well defined.}.
Moreover, we assume that it satisfies
\begin{equation}
\mathcal{W}_{t_j,t_i}K_{t_{i}}=K^\uptext{ext}(t_j,\cdot;t_i,\cdot)
\end{equation}
for all $t_i\geq t_j$, and that the multiplication operators $U_{t_i},U_{t_i}'$ introduced in Assumption~\ref{assum:3} satisfy Assumption~\ref{assum:3}(iii) with the operator in that assumption replaced by
\[U_{t_i}\left[\mathcal{W}_{t_{i},t_{i+1}}\oM_{t_{i+1}}
\dotsm\mathcal{W}_{{t_{n-1}},{t_{n}}}\oM_{{t_{n}}}K_{t_n}-\mathcal{W}_{t_{i},{t_1}}\oM_{{t_1}}\mathcal{W}_{{t_1},{t_2}}\oM_{{t_2}}\dotsm\mathcal{W}_{{t_{n-1}},{t_{n}}}\oM_{{t_{n}}}K_{t_n}\right]U_{t_i}'.\]
\begin{theorem}\label{thm:alt-extendedToBVP}
Under the assumptions above, we have the identity
\begin{equation}
\det\!\big(I-\Ml K^\uptext{ext}\big)_{L^2(\{t_1,\dots,t_n\}\times X)}
=\det\!\big(I-K_{t_n}+K_{t_n}\mathcal{W}_{t_n,t_1}\oM_{t_1}\mathcal{W}_{t_1,t_2}\oM_{t_2}\dotsm\mathcal{W}_{t_{n-1},t_n}\oM_{t_n}\big)_{L^2(X)},
\end{equation}
where $\oM_{t_i}=I-\Ml_{t_i}$.
\end{theorem}
\begin{proof}
The proof is a minor adaptation of the arguments in \cite[Thm. 3.3]{bcr}, and we will use throughout it all the notation and conventions of that proof.
We will just sketch the proof, skipping several technical details (in particular, we will completely omit the need to conjugate by the operators $U_{t_i}$ and $V_{t_i}$, since this aspect of the proof can be adapted straightforwardly from \cite{bcr}).
In order to simplify notation throughout the proof we will replace subscripts of the form $t_i$ by $i$, so for example $\mathcal{W}_{i,j}=\mathcal{W}_{t_i,t_j}$.
Let $\sK=\Ml K^\uptext{ext}$. Then $\sK$ can
be written as
\begin{equation}
\sK=\sQ \bigl(\sW^{-}\sK^{\rm d}+\sW^{+}(\sK^{\rm d}-I)\bigr)\qquad\text{with}\qquad
\sK^{\rm d}_{ij}=K_{i}\uno{i=j},\quad \sQ_{i,j}=\Ml_{{i}}\uno{i=j},
\end{equation}
where $\sW^{-}$, $\sW^{+}$ are lower triangular, respectively strictly upper triangular, and defined by
\begin{equation*}
\sW^{-}_{ij} = \mathcal{W}_{i,j}\uno{i\geq j},\qquad
\sW^{+}_{ij}=\mathcal{W}_{{i},{j}}\uno{i < j}.
\end{equation*}
The key to the proof in \cite{bcr} was to observe that $\big[(I+\sW^{+})^{-1}\big]_{i,j}=\uno{j=i}-\mathcal{W}_{{i},{i+1}}\uno{j=i+1}$, which then implies that $\big[(\sW^{-}+\sW^{+})\sK^{\rm d}(I+\sW^{+})^{-1}\big]_{i,j}=\mathcal{W}_{{i},{1}}K_{1}\uno{j=1}$.
The fact that only the first column of this matrix has non-zero entries is what ultimately allows one to turn the Fredholm determinant of an extended kernel into one of a kernel acting on $L^2(X)$.
However, the derivation of this last identity uses $\mathcal{W}_{i,{j-1}}K_{j-1}\mathcal{W}_{{j-1},j}=\mathcal{W}_{i,j}K_{j}$, which is a consequence of Assumptions~\ref{assum:2}(ii) and (iii), and thus is not available to us. In our case we may proceed similarly by observing that
\begin{equation}
\big[(\sW^{-})^{-1}\big]_{i,j}=\uno{j=i}-\mathcal{W}_{{i},{i-1}}\uno{j=i-1},
\end{equation}
as can be checked directly using Assumption~\ref{assum:2}(ii).
Now using the identity \[\mathcal{W}_{i,{j+1}}K_{j+1}\mathcal{W}_{{j+1},j}=\mathcal{W}_{i,j}K_{j},\] which follows from Assumption~\ref{assum:2}(ii) and (iii), we get
\begin{equation}
\big[(\sW^{-}+\sW^{+})\sK^{\rm d}(\sW^{-})^{-1}\big]_{i,j}
=\mathcal{W}_{i,j}K_{j}-\mathcal{W}_{i,{j+1}}K_{j+1}\mathcal{W}_{{j+1},j}\uno{j<n}
=\mathcal{W}_{{i},{n}}K_{n}\uno{j=n}.\label{eq:T-T+}
\end{equation}
Note that now only the last column of this matrix has non-zero entries, which accounts for the difference between our result and that of \cite{bcr}.
To take advantage of \eqref{eq:T-T+} we write
\[
I-\sK=(I+\sQ\sW^+)\big[I-(I+\sQ\sW^+)^{-1}\sQ(\sW^-+\sW^+)\sK^{\rm d}(\sW^-)^{-1}\sW^-\big].
\]
Since $\sQ\sW^+$ is strictly upper triangular, $\,\det(I+\sQ\sW^+)=1$, which in particular shows that $I+\sQ\sW^+$ is invertible. Thus by the cyclic property of the Fredholm determinant, $\,\det(I-\sK)=\det(I-\wt\sK)$ with
\[\wt\sK=\sW^-(I+\sQ\sW^+)^{-1}\sQ(\sW^-+\sW^+)\sK^{\rm d}(\sW^-)^{-1}.\]
Since only the last column of $(\sW^-+\sW^+)\sK^{\rm d}(\sW^-)^{-1}$ is non-zero, the same holds for $\wt\sK$, and thus $\,\det(I-\sK)=\det(I-\wt\sK_{n,n})_{L^2(X)}$.
Our goal now is to compute $\wt\sK_{n,n}$.
From \eqref{eq:T-T+} and Assumption~\ref{assum:2}(ii) we get, for $0\leq k\leq n-i$,
\begin{multline*}
\left[(\sQ\sW^+)^k\sQ(\sW^-+\sW^+)\sK^{\rm d}(\sW^-)^{-1}\right]_{i,n}\\
=\qquad\smashoperator{\sum_{i<\ell_1<\dots<\ell_k\leq n}}\qquad
\Ml_{{i}}\mathcal{W}_{i,{\ell_1}}\Ml_{{\ell_1}}\mathcal{W}_{{\ell_{1}},{\ell_2}} \dotsm
\Ml_{{\ell_{k-1}}}\mathcal{W}_{{\ell_{k-1}},{\ell_k}}\Ml_{{\ell_k}}\mathcal{W}_{{\ell_k},{n}}K_{n},
\end{multline*}
while for $k>n-i$ the left-hand side above equals 0 (the case $k=0$ is interpreted as
$\Ml_{i}\mathcal{W}_{i,n}K_{n}$).
As in \cite{bcr} this leads to
\begin{equation}
\wt\sK_{i,n}
=\sum_{j=1}^i\sum_{k=0}^{n-j}(-1)^{k}\quad\smashoperator{\sum_{j=\ell_0<\ell_1<\dots<\ell_k\leq
n}}\quad\mathcal{W}_{i,j}\Ml_{j}\mathcal{W}_{{j},{{\ell_1}}}\Ml_{{\ell_1}}\mathcal{W}_{{\ell_1},{\ell_2}}
\Ml_{{\ell_{k-1}}}\mathcal{W}_{{\ell_{k-1}},{\ell_k}}\Ml_{{\ell_k}}\mathcal{W}_{{\ell_k},{n}}K_{n}.
\end{equation}
Replacing each $\Ml_{\ell}$ by $I-\oM_{\ell}$ except for the first one and simplifying as in \cite{bcr} leads to
\[\wt\sK_{i,n}=\mathcal{W}_{{i},{i+1}}\oM_{{i+1}}\mathcal{W}_{{i+1},{i+2}}\oM_{{i+2}}
\dotsm\mathcal{W}_{{{n-1}},{{n}}}\oM_{{{n}}}K_{n}-\mathcal{W}_{{i},{1}}\oM_{{1}}\mathcal{W}_{{1},{2}}\oM_{{2}}\dotsm\mathcal{W}_{{{n-1}},{{n}}}\oM_{{{n}}}K_{n}.\]
Setting $i=n$ yields $\wt\sK_{n,n}=K_{n}-\mathcal{W}_{{n},{1}}\oM_{{1}}\mathcal{W}_{{1},{2}}\oM_{{2}}\dotsm\mathcal{W}_{{{n-1}},{{n}}}\oM_{{{n}}}K_{n}$
and then an application of the cyclic property of the determinant gives the result.
\end{proof}
\subsection{Proof of the TASEP path integral formula.}\label{app:proofPathIntTASEP}
To obtain the path integral version \eqref{eq:path-int-kernel-TASEPgem} of the TASEP formula we use Theorem~\ref{thm:alt-extendedToBVP}.
Recall from \eqref{eq:PsiRecursion} that $Q^{n-m}\Psi^{n}_{n-k}=\Psi^{m}_{m-k}$.
Then we can write
\begin{equation}\label{eq:checkQKK}
Q^{n_j-n_i}K^{(n_j)}_t=\sum_{k=0}^{n_j-1}Q^{n_j-n_i}\Psi^{n_j}_{k}\otimes \Phi^{n_j}_{k}=\sum_{k=0}^{n_j-1}\Psi^{n_i}_{n_i-n_j+k}\otimes \Phi^{n_j}_{k}=K_t(n_i,\cdot;n_j,\cdot)+Q^{n_j-n_i}\uno{n_i<n_j}.
\end{equation}
This means that the extended kernel $K_t$ has exactly the structure specified in \eqref{eq:generalExt}, taking\break$t_i=n_i$, $K_{t_i}=K^{(n_i)}_t$, $\mathcal{W}_{t_i,t_j}=Q^{n_j-n_i}$ and $\mathcal{W}_{t_i,t_j}K_{t_j}=K_t(n_i,\cdot;n_j,\cdot)$.
It is not hard to check that Assumptions 1 and 3 of \cite[Thm. 3.3]{bcr} hold in our setting.
The semigroup property (Assumption~2(ii)) is trivial in this case, while the right-invertibility condition (Assumption~2(i))
\[
Q^{n_j-n_i}K_t(n_j,\cdot;n_i,\cdot)=K_t^{(n_i)}
\]
for $n_i\leq n_j$ follows similarly to \eqref{eq:checkQKK}.
However, Assumption 2(iii) of \cite{bcr}, which translates into $Q^{n_j-n_i}K_t^{(n_j)}=K_t^{(n_i)}Q^{n_j-n_i}$ for $n_i\leq n_j$, does not hold in our case (in fact, the right hand side does not even make sense as the product is divergent, as can be seen by noting that $\Phi^{n}_0(x)=2^{x-X_0(n)}$; alternatively, note that the left hand side depends on the values of $X_0(n_{i+1}),\ldots,X_0(n_j)$ but the right hand side does not), which is why we need Theorem~\ref{thm:alt-extendedToBVP}.
To use it, we need to check that
\begin{equation}
Q^{n_j-n_i}K_t^{(n_j)}Q^{n_i-n_j}=K_t^{(n_i)}.\label{eq:secondExtKernAssmp}
\end{equation}
In fact, if $k\geq0$ then \eqref{bhe1} together with the easy fact that $h^n_k(\ell,z)=h^{n-1}_{k-1}(\ell-1,z)$ imply that
\[
(\Qt)^{n_i-n_j}h^{n_j}_{k+n_j-n_i}(0,z)=h^{n_j}_{k+n_j-n_i}(n_j-n_i,z)=h^{n_i}_{k}(0,z),
\]
so that
$(\Qt)^{n_i-n_j}\Phi^{n_j}_{k+n_j-n_i}=\Phi^{n_i}_k$.
On the other hand, if $n_i-n_j\leq k<0$ then we have
\[
(\Qt)^{n_i-n_j}h^{n_j}_{k+n_j-n_i}(0,z)=(\Qt)^kh^{n_j}_{k+n_j-n_i}(k+n_j-n_i,z)=0
\]
thanks to \eqref{bhe2} and the fact that $2^z\in\ker(\Qt)^{-1}$, which gives $(\Qt)^{n_i-n_j}\Phi^{n_j}_{k+n_j-n_i}=0$.
Therefore, proceeding as in \eqref{eq:checkQKK}, the left hand side of \eqref{eq:secondExtKernAssmp} equals
\begin{equation}
\sum_{k=0}^{n_j-1}Q^{n_j-n_i}\Psi^{n_j}_{k}\otimes(\Qt)^{n_i-n_j}\Phi^{n_j}_{k}=\sum_{k=0}^{n_j-1}\Psi^{n_i}_{n_i-n_j+k}\otimes(\Qt)^{n_i-n_j}\Phi^{n_j}_{k}
=\sum_{k=0}^{n_i-1}\Psi^{n_i}_k\otimes\Phi^{n_i}_{k}
\end{equation}
as desired.
\section{The KPZ fixed point}\label{sec:123}
In this section we will take the KPZ scaling limit of the TASEP growth process, by using the formula from Theorem~\ref{thm:tasepformulas}, and will get a complete characterisation of the limiting Markov process, called the \emph{KPZ fixed point}. We start with introducing the topology and some operators.
\subsection{State space and topology.}
\label{UC}
The state space on which we define the KPZ fixed point is the following:
\begin{definition}[$\UC$ functions]
We define $\UC$ as the space of upper semicontinuous functions\break$\fh\!:\R \to [-\infty,\infty)$ with $\fh(\fx)\le C(1+|\fx|)$ for some $C<\infty$.
\end{definition}
\begin{example}
The $\UC$ function $\mathfrak{d}_\fu(\fu) = 0$, $\mathfrak{d}_\fu(\fx) = -\infty$ for $\fx\neq \fu$, is known as a \emph{narrow wedge at $\fu$}.
These arise naturally: $\mathfrak{d}_0$ is the limit of the TASEP height function $h(x)=-|x|$ under the rescaling $\ep^{1/2}h(\ep^{-1}x)$.
\end{example}
We will endow this space with the topology of local $\UC$ convergence.
This is the natural topology for lateral growth, and will allow us to compute the KPZ limit in all cases of interest\footnote{Actually the bound $\fh(\fx)\le C(1+|\fx|)$ which we are imposing here and in~\cite{fixedpt} on $\UC$ functions is not as general as possible, but makes the arguments a bit simpler and it suffices for most cases of interest (see also~\cite[Foot. 9]{fixedpt}).}.
In order to define this topology, recall that $\fh$ is upper semicontinuous ($\UC$) if and only if its \emph{hypograph}
\[\hypo(\fh) = \{(\fx,\fy): \fy\le \fh(\fx)\}\]
is closed in $\R\times[-\infty,\infty)$.
Slightly informally, local $\UC$ convergence can be defined as follows:
\begin{definition}[Local $\UC$ convergence]
We say that $(\fh_\ep)_\ep\subseteq\UC$ \emph{converges locally in $\UC$} to $\fh\in\UC$ if there is a $C>0$ such that $\fh_\ep(\fx) \le C(1+|\fx|)$ for all $\ep>0$ and for every $M\geq1$ there is a $\delta=\delta(\ep,M)>0$ going to 0 as $\ep\to0$ such that the hypographs $\mathfrak{H}_{\ep,M}$ and $\mathfrak{H}_{M}$ of $\fh_\ep$ and $\fh$ restricted to $[-M,M]$ are $\delta$-close in the sense that
\[\mathfrak{H}_{\ep,M}\subseteq\cup_{(\fx,\fy)\in\mathfrak{H}_{M}}B_{\delta}((\fx,\fy))\qqand\mathfrak{H}_{M}\subseteq\cup_{(\fx,\fy)\in\mathfrak{H}_{\ep,M}}B_{\delta}((\fx,\fy)).\]
\end{definition}
We will also use an analogous space $\LC$, made of lower semicontinuous functions:
\begin{definition}[$\LC$ functions and local convergence]
We define $\LC=\big\{\fg\!: -\fg\in\UC\!\big\}$ and endow this space with the topology of \emph{local $\LC$ convergence} which is defined analogously to local $\UC$ convergence, now in terms of \emph{epigraphs},
\[\epi(\fg) = \{(\fx,\fy): \fy\ge \fg(\fx)\}.\]
Explicitly, $(\fg_\ep)_{\ep} \subset \LC$ converges locally in $\LC$ to $\fg \in \LC$ if and only if $-\fg_\ep\rightarrow-\fg$ locally in $\UC$.
\end{definition}
\begin{exercise}
Show that if the limit $\fh$ is locally H\"older continuous with exponent $\beta\in (0,1)$, then convergence in $\UC$ or $\LC$ implies uniform convergence on compact sets.
\end{exercise}
\begin{exercise}$\!\!\!\!{}^*$
Show that if $\fh_0\in \LC$ then the inviscid solution given by the Hopf-Lax formula
\[ h(t,x)=\sup_y\left\{ h_0(y)- t^{-1}(x-y)^2\right\}\]
is continuous in the $\UC$ topology.
\end{exercise}
\subsection{Auxiliary operators.}
In order to state our main result we need to introduce several operators, which will appear in the explicit Fredholm determinant formula for the fixed point. Our basic building block is the following (almost) group of operators:
\begin{definition}\label{def:groupS}
For $(\fx,\ft)\in\R^2\setminus \{\fx<0,\, \ft= 0\}$ let us define the operator
\begin{equation}\label{eq:groupS}
\fT_{\ft,\fx}=\exp \bigl\{ \fx\partial^2+\tfrac{\ft}3\tts\partial^3 \bigr\},
\end{equation}
which satisfies the identity
\begin{equation}\label{eq:groupS2}
\fT_{\fs,\fx}\fT_{\ft,\fy}=\fT_{\fs+\ft,\fx+\fy}
\end{equation}
as long as all subscripts avoid the region $\{\fx<0, \ft= 0\}$.
\end{definition}
For $\ft>0$ the operator $\fT_{\ft,\fx}$ acts on nice functions by convolution with the kernel
\begin{equation}\label{eq:fTdef}
\fT_{\ft,\fx}(z)=\frac1{2\pi\I} \int_{\langle}\, dw\,e^{\frac{\ft}3 w^3+\fx w^2-z w} = \ft^{-1/3} e^{\frac{2 \fx^3}{3\ft^2} -\frac{z\fx}{\ft} }\,\Ai \bigl(-\ft^{-1/3} z+\ft^{-4/3}\fx^2\bigr),
\end{equation}
where ${\langle}~$ is the positively oriented contour going in straight lines from $e^{-\I\pi/3}\infty$ to $e^{\I\pi/3}\infty$ through $0$ and $\Ai$ is the Airy function
\[
\Ai(z)= \frac1{2\pi\I} \int_{\langle}dw\, e^{\frac{1}3 w^3-z w}.
\]
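The contour representation of the Airy function can be tested numerically; the sketch below (assuming Python with \texttt{numpy} and \texttt{scipy}; not part of the notes) integrates along the two rays of ${\langle}~$ and compares with \texttt{scipy.special.airy}.
\begin{verbatim}
# Check (sketch) that the contour integral over the two rays of the contour <
# reproduces the Airy function.
import numpy as np
from scipy.special import airy

def ai_contour(z, R=8.0, m=40001):
    r = np.linspace(0.0, R, m)
    dr = r[1] - r[0]
    total = 0.0 + 0.0j
    for sgn in (+1, -1):           # outgoing ray at +pi/3, incoming at -pi/3
        e = np.exp(sgn * 1j * np.pi / 3)
        f = np.exp((r * e) ** 3 / 3 - z * r * e) * e
        total += sgn * 0.5 * dr * (f[:-1] + f[1:]).sum()
    return (total / (2j * np.pi)).real

for z in np.linspace(-3.0, 3.0, 13):
    assert abs(ai_contour(z) - airy(z)[0]) < 1e-5
print("contour representation of Ai matches scipy.special.airy")
\end{verbatim}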
When $\ft<0$ we have the identity $\fT_{\ft,\fx}=(\fT_{-\ft,\fx})^*$, which in particular yields
\begin{equation}
(\fT_{\ft,\fx})^*\fT_{\ft,-\fx}=\fI.
\end{equation}
\begin{exercise}
Prove that this identity indeed holds.
\end{exercise}
\begin{definition}[Hit operators]\label{def:hit}
For $\fg\in \LC$ let us define
\begin{equation}
\bar{\fT}^{\epi(\fg)}_{\ft,\fx}(v,u)=\ee_{\fB(0)=v}\big[\fT_{\ft,\fx-\ftau}(\fB(\ftau),u)\1{\ftau<\infty}\big]=\int_{0}^\infty\,\pp_{\fB(0)=v}(\ftau\in d\fs)\,\fT_{\ft,\fx-\fs}(\fB(\ftau),u)
\end{equation}
where $\fB(x)$ is a Brownian motion with diffusion coefficient $2$ and $\ftau$ is the hitting time of the epigraph of the function $\fg$.
\end{definition}
Note that for $v\ge\fg(0)$ we trivially have the identity
\begin{equation}\label{eq:epiabove}
\bar{\fT}^{\epi(\fg)}_{\ft,\fx}(v,u)=\fT_{\ft,\fx}(v,u).
\end{equation}
If $\fh\in \UC$, there is a similar operator $\bar{\fT}^{\hypo(\fh)}_{\ft,\fx}$, except that now $\ftau$ is the hitting time of the hypograph of $\fh$ and $\bar{\fT}^{\hypo(\fh)}_{\ft,\fx}(v,u)=\fT_{\ft,\fx}(v,u)$ for $v\le\fh(0)$.
One way to think of $\bar{\fT}^{\epi(\fg)}_{\ft,\fx}(v,u)$ is as a sort of asymptotic transformed transition `probability' for the Brownian motion $\fB$ to go from $v$ to $u$ hitting the epigraph of $\fg$ (note that $\fg$ is not necessarily continuous, so hitting $\fg$ is not the same as hitting $\epi(\fg)$; in particular, $\fB(\ftau)\geq\fg(\ftau)$ and in general the equality need not hold).
To see what we mean, write
\begin{equation}\label{eq:asymptTransTransProb}
\bar{\fT}^{\epi(\fg)}_{\ft,\fx}=\lim_{\mathbf{T}\to\infty}\bar{\fT}^{\epi(\fg),\mathbf{T}}\fT_{\ft,\fx-\mathbf{T}}
\quad\text{with}\quad\bar{\fT}^{\epi(\fg),\mathbf{T}}(v,u)=\ee_{\fB(0)=v}\big[\fT_{0,\mathbf{T}-\ftau}(\fB(\ftau),u)\1{\ftau\leq\mathbf{T}}\big]
\end{equation}
and note that $\bar{\fT}^{\epi(\fg),\mathbf{T}}(v,u)$ is nothing but the transition probability for $\fB$ to go from $v$ at time 0 to $u$ at time $\mathbf{T}$ hitting $\epi(\fg)$ in $[0,\mathbf{T}]$.
\begin{definition}[Brownian scattering operator]\label{def:epi}
For $\fg\in \LC$, $\fx\in \R$ and $\ft>0$ we define
\begin{equation}
\fK^{\epi(\fg)}_{-\ft}= \fI - \bigl(\fT_{-\ft,\fx} - \bar{\fT}^{\epi(\fg^-_\fx)}_{-\ft,\fx} \bigr)^* \bar\P_{\fg(\fx)}
\bigl(\fT_{-\ft,-\fx}-\bar{\fT}^{\epi(\fg^+_\fx)}_{-\ft,-\fx}\bigr),
\end{equation}
where $\fg^+_\fx(\fy)=\fg(\fx+\fy)$ and $\fg^-_\fx(\fy)=\fg(\fx-\fy)$.
\begin{exercise}
Show that the projection $\bar\P_{\fg(\fx)}$ can be removed from the formula without changing its meaning.
\end{exercise}
\begin{exercise}$\!\!\!\!{}^*$
Show that the kernel $\fK^{\epi(\fg)}_{-\ft}$ does not depend on $\fx$.
\end{exercise}
\noindent There is another operator which uses $\fh\in \UC$, and hits `from above',
\begin{equation}\label{eq:Kepihypo}
\fK^{\hypo(\fh)}_{\ft}= \fI - \bigl(\fT_{\ft,\fx} - \bar{\fT}^{\hypo(\fh^-_\fx)}_{\ft,\fx} \bigr)^*\P_{\fh(\fx)} \bigl(\fT_{\ft,-\fx}-\bar{\fT}^{\hypo(\fh^+_\fx)}_{\ft,-\fx}\bigr).
\end{equation}
\end{definition}
\begin{exercise} Show that the following identity holds
\begin{equation}\label{skew}
\fK^{\hypo(\fh)}_{\ft}=\big(\varrho\,\fK^{\epi(-\varrho\fh)}_{-\ft}\,\varrho\big)^*,
\end{equation}
where $\varrho\fh(\fx) = \fh(-\fx)$.
\end{exercise}
As above, $\fT_{\ft,\fx} - \bar{\fT}^{\epi(\fg_\fx)}_{\ft,\fx}$ may be thought of as a sort of asymptotic transformed transition probability for a Brownian motion $\fB$, started at time 0, not to hit $\epi(\fg)$. Therefore $\fK^{\epi(\fg)}_{\ft}$ may be thought of as the same sort of asymptotic transformed transition probability for $\fB$, in this case hitting $\epi(\fg)$, which is built out of the product of left and right `no hit' operators.
\subsection{The KPZ fixed point formula.}
\label{sec:KPZfp}
At this stage we are ready to state our main result which we prove in Section~\ref{sec:TASEPlimit}.
\begin{definition}[The KPZ fixed point formula]
The KPZ fixed point is the Markov process on $\UC$ with transition probabilities
\begin{equation}\label{eq:fpform}
\pp_{\fh_0} \bigl( \fh(\ft, \fx) \le \fg(\fx),\,\fx\in\R\bigr) = \det\left(\fI-\fK^{\hypo(\fh_0)}_{\ft/2} \fK^{\epi(\fg)}_{-\ft/2}\right)_{L^2(\R)},
\end{equation}
where $\fh_0\in\UC$ and $\fg\in \LC$. Here $ \pp_{\fh_0}$ means the process with initial data $\fh_0$.
\end{definition}
\begin{remark}
The fact that the Fredholm determinant in the formula is finite is a consequence of the fact that there is a (multiplication) operator $M$ such that
the map $(\fh_0,\fg)\mapsto M\fK^{\hypo(\fh_0)}_{\ft/2} \fK^{\epi(\fg)}_{-\ft/2}M^{-1}$ is continuous from $\UC\times\LC$ into the trace class (see
\cite{KPZ}). We will not get into such issues in these notes.
\end{remark}
\begin{exercise}[Finite dimensional distributions] Let $\fh_0\in \UC$ and $\fx_1<\fx_2<\dotsm<\fx_M$. Show that
\begin{align}
&\pp_{\fh_0} \bigl(\fh(\ft,\fx_1)\leq\fa_1,\ldots,\fh(\ft,\fx_M)\leq\fa_M\bigr)\nonumber\\
&\hspace{0.3in}=\det\left(\fI-\fK^{\hypo(\fh_0)}_{\ft,\fx_M}+\fK^{\hypo(\fh_0)}_{\ft,\fx_M} e^{(\fx_1-\fx_M)\p^2}\bar\P_{\fa_1}e^{(\fx_2-\fx_1)\p^2}\bar\P_{\fa_2}\dotsm e^{(\fx_M-\fx_{M-1})\p^2}\bar\P_{\fa_M}\right)_{L^2(\R)}.\label{eq:twosided-path}
\end{align}
\end{exercise}
\begin{remark}[Extended kernels]
The formula in the exercise can be rewritten as
\begin{equation}\label{eq:twosided-ext}
\pp_{\fh_0} \bigl(\fh(\ft,\fx_1)\leq\fa_1,\ldots,\fh(\ft,\fx_M)\leq\fa_M\bigr)=\det\left(\fI-\chi_{\fa}\fK^{\hypo(\fh_0)}_{\ft,\uptext{ext}}\chi_{\fa}\right)_{L^2(\{\fx_1,\ldots,\fx_M\}\times\R)},
\end{equation}
where
\begin{align}
\fK^{\hypo(\fh_0)}_{\ft,\uptext{ext}}(\fx_i,\cdot;\fx_j,\cdot)&=-e^{(\fx_j-\fx_i)\p^2}\uno{\fx_i<\fx_j}+e^{-\fx_i\p^2}\fK^{\hypo(\fh_0)}_\ft e^{\fx_j\p^2}.\label{eq:Khypo-ext}
\end{align}
The kernel in \eqref{eq:twosided-ext} is usually referred to as an \emph{extended kernel} (note that the Fredholm determinant is being computed on the `extended $L^2$ space' $L^2(\{\fx_1,\ldots,\fx_M\}\times\R)$).
The kernel appearing after the second hypo operator in \eqref{eq:twosided-path} is sometimes referred to as a \emph{path integral kernel} \cite{bcr}, and should be thought of as a discrete, pre-asymptotic version of the epi operators (on a finite interval).
The fact that $e^{-\fx\p^2}\fK^{\hypo(\fh_0)}_{\ft}e^{\fx\p^2}$ makes sense is not entirely obvious, but follows from the fact that
$\fK^{\epi(\fg)}_{\ft}$ equals
\begin{equation}\label{eq:Kep-expansion}
(\fT_{\ft,\fx} )^*\P_{\fg(\fx)}\fT_{\ft,-\fx} + ( \bar{\fT}^{\epi(\fg^-_\fx)}_{\ft,\fx} )^*\bar\P_{\fg(\fx)}
\fT_{\ft,-\fx} + (\fT_{\ft,\fx} )^* \bar\P_{\fg(\fx)}
\bar{\fT}^{\epi(\fg^+_\fx)}_{\ft,-\fx}
- ( \bar{\fT}^{\epi(\fg^-_\fx)}_{\ft,\fx} )^* \bar\P_{\fg(\fx)}
\bar{\fT}^{\epi(\fg^+_\fx)}_{\ft,-\fx},
\end{equation}
together with the group property \eqref{eq:groupS2} and the definition of the hit operators; the analogous expansion for $\fK^{\hypo(\fh_0)}_{\ft}$ then follows from \eqref{skew}.
\end{remark}
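To see where \eqref{eq:Kep-expansion} comes from, one can simply expand the definition of the scattering operator (written with a generic time index, as in \eqref{eq:Kep-expansion}): abbreviating $\bar\fT^{\pm}=\bar{\fT}^{\epi(\fg^\pm_\fx)}_{\ft,\mp\fx}$,
\begin{align*}
\fK^{\epi(\fg)}_{\ft}&=\fI-\bigl(\fT_{\ft,\fx}-\bar\fT^{-}\bigr)^*\bar\P_{\fg(\fx)}\bigl(\fT_{\ft,-\fx}-\bar\fT^{+}\bigr)\\
&=\fI-(\fT_{\ft,\fx})^*\bar\P_{\fg(\fx)}\fT_{\ft,-\fx}+(\bar\fT^{-})^*\bar\P_{\fg(\fx)}\fT_{\ft,-\fx}+(\fT_{\ft,\fx})^*\bar\P_{\fg(\fx)}\bar\fT^{+}-(\bar\fT^{-})^*\bar\P_{\fg(\fx)}\bar\fT^{+},
\end{align*}
and since $(\fT_{\ft,\fx})^*\fT_{\ft,-\fx}=\fI$ by the group property \eqref{eq:groupS2}, the first two terms combine into $(\fT_{\ft,\fx})^*\P_{\fg(\fx)}\fT_{\ft,-\fx}$, which gives \eqref{eq:Kep-expansion}.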
\begin{example}[Airy processes] For special initial data, at time $\ft=1$ we recover the known Airy$_1$, Airy$_2$ and Airy$_{2\to1}$ processes:
\begin{enumerate}
\item Narrow wedge initial data leads to the \emph{Airy$_2$ process} \cite{prahoferSpohn,johansson}:
\[\fh(1,\fx;\mathfrak{d}_\fu) + (\fx-\fu)^2~=~ \aip_2(\fx);\]
\item \emph{Flat} initial data $\fh_0\equiv0$ leads to the \emph{Airy$_1$ process} \cite{sasamoto,borFerPrahSasam}:
\[\fh(1,\fx;0)~=~ \aip_1(\fx);\]
\item \emph{Wedge} or \emph{half-flat} initial data $\fh_{\text{h-f}}(\fx) = -\infty$ for $\fx<0$ and $\fh_{\text{h-f}}(\fx)=0$ for $\fx\geq0$, leads to the \emph{Airy$_{2\to1}$ process} \cite{bfs}:
\[\fh(1,\fx;\fh_{\text{h-f}})+\fx^2\uno{\fx<0}~=~ \Bt(\fx).\]
\end{enumerate}
We leave the derivation of the processes $\aip_1$ and $\aip_2$ as Exercise~\ref{eq:getAiry}. To get the formula in case 3 we need to show that the finite dimensional distributions match, by computing the kernel on the right hand side of \eqref{eq:twosided-ext} with $\fh_0^-\equiv-\infty$ and $\fh_0^+(\fx)\equiv0$.
It is straightforward to check that $\bar{\fT}^{\hypo(\fh_0^-)}_{\ft,0}\equiv0$.
On the other hand, an application of the reflection principle based on \eqref{eq:asymptTransTransProb} yields that, for $v\geq0$,
\[\bar{\fT}^{\hypo(\fh_0^+)}_{\ft,0}(v,u)=\int_0^\infty\!\pp_v(\tau_0\in d\fy)\fT_{\ft,-\fy}(0,u)=\fT_{\ft,0}(-v,u),\]
which gives, with $\varrho$ the reflection operator $\varrho f(x)=f(-x)$,
\[\fK^{\hypo(\fh_0)}_{\ft}=\fI-(\fT_{\ft,0})^*\P_0[\fT_{\ft,0}-\varrho\fT_{\ft,0}]=(\fT_{\ft,0})^*(\fI+\varrho)\bar \P_0\fT_{\ft,0}.\]
This yields, using \eqref{eq:Khypo-ext},
\begin{align}
\fK^{\hypo(\fh_{\text{h-f}})}_{\ft,\text{ext}}(\fx_i,\cdot;\fx_j,\cdot)&=-e^{(\fx_j-\fx_i)\p^2}\uno{\fx_i<\fx_j}+\fT_{0,-\fx_i}(\fT_{\ft,0})^*(\fI+\varrho)\bar \P_0\fT_{\ft,0}\fT_{0,\fx_j}\\
&=-e^{(\fx_j-\fx_i)\p^2}\uno{\fx_i<\fx_j}+(\fT_{\ft,-\fx_i})^*(\fI+\varrho)\bar \P_0\fT_{\ft,\fx_j}.
\end{align}
Choosing $\ft=1$ and using \eqref{eq:fTdef} yields that the second term on the right hand side equals
\begin{multline}
\int_{-\infty}^0d\lambda\,e^{-2\fx_i^3/3-\fx_i(u-\lambda)}\,\Ai(u-\lambda+\fx_i^2)\, e^{2\fx_j^3/3+\fx_j(v-\lambda)}\,\Ai(v-\lambda+\fx_j^2)\\
+\int_{-\infty}^0d\lambda\,e^{-2\fx_i^3/3-\fx_i(u+\lambda)}\,\Ai(u+\lambda+\fx_i^2)\, e^{2\fx_j^3/3+\fx_j(v-\lambda)}\,\Ai(v-\lambda+\fx_j^2)
\end{multline}
which, after a simple conjugation, gives the kernel for the Airy$_{2\to1}$ process.
\end{example}
\begin{exercise}
Find the conjugation explicitly and check that it gives the kernel for the Airy$_{2\to1}$ process derived in \cite{bfs}.
\end{exercise}
\begin{example}[The Airy$_2$ process]
Given $\fx_1,\dots,\fx_n\in\mathbb{R}$ and $\fa_1,\dots,\fa_n\in\mathbb{R}$, we have
\begin{equation}
\mathbb{P}\bigl(\aip_2(\fx_1)\le \fa_1,\dots,\aip_2(\fx_n)\le \fa_n\bigr) =
\det \bigl(I-\chi_{\fa}\K^{\mathrm{ext}} \chi_{\fa}\bigr)_{L^2(\{\fx_1,\dots,\fx_n\}\times\mathbb{R})},
\end{equation}
where the {\it extended Airy kernel} is defined by
\begin{equation}
\K^\mathrm{ext}(\fx,u;\fx',u')=
\begin{cases}
\int_0^\infty d\lambda\,e^{-\lambda(\fx-\fx')}\Ai(u+\lambda)\Ai(u'+\lambda), &\text{if $\fx\ge \fx'$},\\
-\int_{-\infty}^0 d\lambda\,e^{-\lambda(\fx-\fx')}\Ai(u+\lambda)\Ai(u'+\lambda), &\text{if $\fx<\fx'$}.
\end{cases}\label{eq:extAiry}
\end{equation}
\end{example}
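As a quick consistency check, note that at equal space points the extended Airy kernel reduces to the Airy kernel: $\K^{\mathrm{ext}}(\fx,u;\fx,u')=\int_0^\infty d\lambda\,\Ai(u+\lambda)\Ai(u'+\lambda)=\K(u,u')$ (see \eqref{eq:K-AiryTr} below), so the one-point marginals of the Airy$_2$ process are GUE Tracy--Widom:
\begin{equation*}
\mathbb{P}\bigl(\aip_2(\fx)\le \fa\bigr)=\det\bigl(I-\chi_{\fa}\K\chi_{\fa}\bigr)_{L^2(\mathbb{R})}=F_{\rm GUE}(\fa),
\end{equation*}
in agreement with \eqref{eq:FGUE}.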
\begin{example}[The Airy$_1$ process]
For $\fx_i$ and $\fa_i$ as in the previous example, we have the identity
\begin{equation}
\mathbb{P}\bigl(\aip_1(\fx_1)\le \fa_1,\dots,\aip_1(\fx_n)\le \fa_n\bigr) =
\det\big(I-\chi_{\fa}K^{\mathrm{ext}}_1 \chi_{\fa}\big)_{L^2(\{\fx_1,\dots,\fx_n\}\times\mathbb{R})},
\end{equation}
with the kernel
\begin{multline}\label{eq:fExtAiry1}
K^{\rm ext}_1(\fx,u;\fx',u')=-\frac{1}{\sqrt{4\pi
(\fx'-\fx)}}\exp\!\left(-\frac{(u'-u)^2}{4 (\fx'-\fx)}\right)\uno{\fx'>\fx}\\
+\Ai(u+u'+(\fx'-\fx)^2) \exp\!\left((\fx'-\fx)(u+u')+\tfrac23(\fx'-\fx)^3\right).
\end{multline}
\end{example}
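Similarly, at equal points the first term in \eqref{eq:fExtAiry1} vanishes and $K^{\rm ext}_1(\fx,u;\fx,u')=\Ai(u+u')$, so the one-point marginals of the Airy$_1$ process are given by
\begin{equation*}
\mathbb{P}\bigl(\aip_1(\fx)\le \fa\bigr)=\det\bigl(I-\chi_{\fa}B_0\chi_{\fa}\bigr)_{L^2(\mathbb{R})},\qquad B_0(u,u')=\Ai(u+u'),
\end{equation*}
which is the known determinantal formula for $F_{\rm GOE}(2\fa)$.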
\begin{exercise}\label{eq:getAiry}
Obtain the kernels for the Airy$_2$ and Airy$_1$ processes from the formula \eqref{eq:fpform}.
\end{exercise}
\subsection{Symmetries and invariance.}
The KPZ fixed point inherits several nice properties as a scaling limit of TASEP. We will write $\fh(\ft, \fx;\fh_0)$ for the KPZ fixed point $\fh(\ft, \fx)$ started at $\fh_0$.
\begin{proposition}[Symmetries of $\fh$]\label{symmetries}
The KPZ fixed point $\fh$ has the following properties:
\begin{enumerate}[label=\emph{(\roman*)}]
\item (1:2:3 scaling invariance)\hskip0.1in
$\alpha\tts\fh(\alpha^{-3}\ft,\alpha^{-2}\fx;\alpha\fh_0(\alpha^{-2}\fx) )\stackrel{\uptext{dist}}{=} \fh(\ft, \fx;\fh_0), \quad \alpha>0$;
\item (Skew time reversal) \hskip0.1in
$\pp\big(\fh(\ft,\fx; \fg)\le -\ff(\fx)\big) =\pp\big(\fh(\ft,\fx;\ff)\le -\fg(\fx)\big),\quad\ff,\fg\in\UC;$
\item (Shift invariance) \hskip0.1in$\fh(\ft,\fx+\fu; \fh_0(\fx+\fu))\stackrel{\uptext{dist}}{=} \fh(\ft,\fx; \fh_0);$
\item (Reflection invariance) \hskip0.1in$\fh(\ft, -\fx;\fh_0(-\fx))\stackrel{\uptext{dist}}{=} \fh(\ft,\fx; \fh_0);$
\item (Affine invariance) \hskip0.1in$\fh(\ft,\fx; \ff(\fx) + \fa + c\fx)\stackrel{\uptext{dist}}{=} \fh(\ft,\fx; \ff(\fx+\frac12c\ft)) + \fa + c\fx + \frac14c^2\ft;$
\item (Preservation of max) \hskip0.1in$\fh(\ft,\fx;\ff_1\vee \ff_2)= \fh(\ft,\fx; \ff_1)\vee \fh(\ft,\fx; \ff_2)$.
\end{enumerate}
\end{proposition}
\begin{exercise} Prove property (i) from the fixed point formula. Prove that properties (ii), (iii),(iv) hold for TASEP, and therefore for the limiting
fixed point.
\end{exercise}
\noindent Properties (v) and (vi) require coupling and we give their proofs in Section~\ref{sec:variational}.
\subsection{Markov property.}
The sets $A_{\mathfrak{g}} = \bigl\{ \fh\in \UC: \fh(\fx) \le \fg(\fx),\,\fx\in\R\bigr\}$ with $\fg\in\LC$ form a generating family for the Borel sets $\mathcal{B}(\UC)$.
Hence from \eqref{eq:fpform} we can define the fixed point transition probabilities $p_{\fh_0}\,(\ft, A_{\mathfrak{g}})$, for which we have:
\begin{lemma}
For fixed $\fh_0\in\UC$ and $\ft>0$, the measure $p_{\fh_0}\,(\ft, \cdot)$ is a probability measure on $\UC$.
\end{lemma}
\begin{proof}[Sketch of the proof]
It is clear from the construction that $\pp_{\fh_0}( \fh(\ft, \fx_i) \le \fa_i, i=1,\ldots,n)$ is non-decreasing in each $\fa_i$ and is in $[0,1]$.
We need to show then that this quantity goes to $1$ as all $\fa_i$'s go to infinity, and to $0$ if any $\fa_i$ goes to $-\infty$.
The first one is standard, and relies on the inequality $\left|\det(\fI-\fK) - 1\right|\leq\|\fK\|_1e^{\|\fK\|_1+1}$ (with $\|\cdot\|_1$ denoting trace norm).
The second limit is in general very hard to show for a formula given in terms of a Fredholm determinant, but it turns out to be rather easy in our case, because the multipoint probability is trivially bounded by $\pp_{\fh_0}( \fh(\ft, \fx_i) \le \fa_i)$, for any $i$. By the skew time reversal symmetry this becomes the probability that the Airy$_2$ process minus a parabola is bounded everywhere by $-\fh_0+\fa_i$, which clearly
goes to $0$ as $\fa_i$ goes to $-\infty$.
\end{proof}
\begin{theorem}\label{thm:markovprop}
The KPZ fixed point $\big(\fh(\ft,\cdot)\big)_{\ft>0}$ is a (Feller) Markov process taking values in $\UC$.
\end{theorem}
\noindent The proof is based on the fact that $\fh(\ft,\fx)$ is the limit of $\fh^\ep(\ft,\fx)$, which is Markovian.
To derive from this the Markov property of the limit requires some compactness, which in our case is provided by Theorem~\ref{reg} below.
\subsection{Regularity and local Brownian behavior.}
Up to this point we only know that the fixed point is in $\UC$, but by the smoothing mechanism inherent to models in the KPZ class one should expect $\fh(\ft,\cdot)$
to at least be continuous for each fixed $\ft>0$.
The next result shows that for every $M>0$, $\fh(\ft,\cdot)\big|_{[-M,M]}$ is H\"older-$\beta$ for any $\beta<1/2$ with probability 1.
\begin{definition}[Local H\"older spaces]
Let us define the space
\[
\mathscr C= \bigl\{ \fh\!:\R \to [-\infty,\infty) ~\text{continuous with}~\fh(\fx)\le C(1+|\fx|)~\text{for some}~ C<\infty\bigr\}.
\]
For $M > 0$ we define the local H\"older norm
\begin{equation}
\| \fh \|_{\beta, [-M,M]} = \sup_{\fx_1\neq \fx_2 \in [-M,M]} \frac{ |\fh(\fx_2)-\fh(\fx_1)|}{|\fx_2-\fx_1|^\beta}
\end{equation}
and the local H\"older spaces
\[\mathscr C^\beta= \bigcap_{M \in \N} \bigl\{ \fh\in \mathscr C : \| \fh \|_{\beta, [-M,M]} <\infty\bigr\}.\]
The topology on $\UC$, when restricted to $\mathscr C$, is the topology of uniform convergence on compact sets.
$\UC$ is a Polish space and the spaces $\mathscr C^\beta$ are compact in $\UC$.
\end{definition}
Then we can get spatial regularity of the KPZ fixed point.
\begin{theorem}[Space regularity]\label{reg}
Fix $\ft>0$ and $\fh_0\in \UC$, and let $X_0^\ep$ be initial data for TASEP such that the corresponding rescaled height functions satisfy $\fh_0^\ep\rightarrow\fh_0$ locally in $\UC$ as $\ep\to0$.
Then for each $\beta\in (0,1/2)$ and $M > 0$ we have
\begin{equation}
\lim_{A\to \infty} \limsup_{\ep\to 0} \pp \bigl( \| \fh^\ep(\ft)\|_{\beta, [-M,M]}\ge A\bigr) = \lim_{A\to \infty} \pp\bigl( \| \fh\|_{\beta, [-M,M]}\ge A\bigr) =0.
\end{equation}
\end{theorem}
\noindent
The proof proceeds through an application of the Kolmogorov continuity theorem, which reduces regularity to two-point functions, and depends heavily on the representation \eqref{eq:twosided-path} for the two-point function in terms of path integral kernels. We prefer to skip the details.
\begin{remark}
Since the theorem shows that this regularity holds uniformly (in $\ep>0$) for the approximating $\fh^{\ep}(\ft,\cdot)$'s, we get the compactness needed for the proof of the Markov property.
\end{remark}
\begin{theorem}[Local Brownian behavior]
For any initial condition $\fh_0\in\UC$ the KPZ fixed point $\fh$ is locally Brownian in space in the sense that for each $\fy\in\R$, the finite dimensional distributions of
\[\mathfrak{b}^{\pm}_\ep(\fx)= \ep^{-1/2} \bigl(\fh(\ft,\fy \pm \ep \fx)-\fh(\ft,\fy) \bigr)\]
converge, as $\ep\to 0$, to those of Brownian motions with diffusion coefficient $2$.
\end{theorem}
\begin{proof}[A very brief sketch of the proof]
The proof is based again on the arguments of \cite{quastelRemAiry1}.
One uses \eqref{eq:twosided-path} and Brownian scale invariance to show that
\[
\pp \bigl(\fh(\ft,\ep\fx_1)\leq \fu+\sqrt{\ep}\fa_1 , \ldots, \fh(\ft,\ep\fx_n)\leq \fu+\sqrt{\ep}\fa_n\, \big|\,\fh(\ft,0)=\fu\bigr) =\ee\left(\uno{\fB(\fx_i)\le\fa_i, i=1,\ldots,n}\,\phi^\ep_{\fx,\fa}(\fu,\fB(\fx_n))\right),
\]
for some explicit function $\phi^\ep_{\fx,\fa}(\fu,\mathbf{b})$.
The Brownian motion appears from the product of heat kernels in \eqref{eq:twosided-path}, while $\phi^\ep_{\fx,\fa}$ contains the dependence on everything else in the formula (the Fredholm determinant structure and $\fh_0$ through the hypo operator $\fK^{\hypo(\fh_0)}_\ft$).
Then one shows that $\phi^\ep_{\fx,\fa}(\fu,\mathbf{b})$ goes to 1 in a suitable sense as $\ep\to0$.
\end{proof}
\begin{proposition}[Time regularity]\label{holder}
Fix $\fx_0\in \R$ and initial data $\fh_0\in \UC$. For $\ft>0$, $ \fh(\ft,\fx_0)$ is locally H\"older $\alpha$ in $\ft$ for any $\alpha<1/3$.
\end{proposition}
\noindent The proof uses the variational formula for the fixed point; we sketch it in the next section.
\subsection{Variational formulas and the Airy sheet}\label{sec:variational}
\begin{definition}[Airy sheet] The two parameter process
\[\aip(\fx,\fy) ~ = ~ \fh(1,\fy; \mathfrak{d}_\fx)+ (\fx-\fy)^2 \]
is called the \emph{Airy sheet} \cite{cqrFixedPt}.
Fixing either one of the variables, it is an Airy$_2$ process in the other.
We also write
\[\hat\aip(\fx,\fy) = \aip(\fx,\fy)- (\fx-\fy)^2.\]
\end{definition}
\begin{remark}
The KPZ fixed point inherits from TASEP a canonical coupling between the processes started with different initial data (using the same `noise').
It is this property that allows us to define the two-parameter Airy sheet.
An annoying difficulty is that we cannot prove that this process is unique.
More precisely, the construction of the Airy sheet in \cite{fixedpt} goes through using tightness of the coupled processes at the TASEP level and taking subsequential limits, and at the present time there seems to be no way to assure that the limit points are unique.
This means that we have actually constructed `an' Airy sheet, and the statements below should really be interpreted as about any such
limit.
\noindent It is natural to wonder whether the fixed point formulas at our disposal determine the joint probabilities $\pp(\aip(\fx_i,\fy_i)\le \fa_i, i=1,\ldots,M)$ for the Airy sheet.
Unfortunately, this is not the case.
In fact, the most we can compute using our formulas is \[\pp\bigl(\hat\aip(\fx,\fy) \le \ff(\fx)+\fg(\fy),\,\fx,\fy\in\R\bigr) = \det\left(\fI-\fK^{\hypo(-\fg)}_{1} \fK^{\epi(\ff)}_{-1}\right).\]
Suppose we want to compute the two-point distribution for the Airy sheet $\pp(\hat\aip(\fx_i,\fy_i) \le \fa_i,\,i=1,2)$ from this.
We would need to choose $\ff$ and $\fg$ taking two non-infinite values, which yields a formula for $\pp(\hat\aip(\fx_i,\fy_j) \le \ff(\fx_i)+\fg(\fy_j),\,i,j=1,2)$, and thus we need to take $\ff(\fx_1)+\fg(\fy_1)=\fa_1$, $\ff(\fx_2)+\fg(\fy_2)=\fa_2$ and $\ff(\fx_1)+\fg(\fy_2)=\ff(\fx_2)+\fg(\fy_1)=L$ with $L\to\infty$.
But $\{\ff(\fx_i)+\fg(\fy_j),\,i,j=1,2\}$ only spans a 3-dimensional linear subspace of $\R^4$, so this is not possible.
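To see this concretely, note that the four values always satisfy the linear relation
\[
\bigl[\ff(\fx_1)+\fg(\fy_1)\bigr]+\bigl[\ff(\fx_2)+\fg(\fy_2)\bigr]=\bigl[\ff(\fx_1)+\fg(\fy_2)\bigr]+\bigl[\ff(\fx_2)+\fg(\fy_1)\bigr],
\]
so the prescription above would force $\fa_1+\fa_2=2L$, which cannot hold for fixed $\fa_1,\fa_2$ as $L\to\infty$.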
\end{remark}
The preservation of max property allows us to write a variational formula for the KPZ fixed point in terms of the Airy sheet.
\begin{theorem}[Airy sheet variational formula]\label{thm:airyvar}
One has the identities
\begin{equation}\label{eq:var}
\fh(\ft,\fx;\fh_0) = \sup_\fy\big\{ \fh(\ft,\fx;\mathfrak{d}_\fy) + \fh_0(\fy)\big\} \stackrel{\uptext{dist}}{=} \sup_\fy\Big\{ \ft^{1/3}\hat\aip(\ft^{-2/3} \fx,\ft^{-2/3} \fy) + \fh_0(\fy)\Big\}.
\end{equation}
In particular, the Airy sheet satisfies the \emph{semi-group property}: If $\hat\aip^1$ and $\hat\aip^2$ are independent copies and $\ft_1+\ft_2=\ft$ are all positive, then
\begin{equation*}
\sup_\fz\left\{ \ft_1^{1/3}\hat\aip^1(\ft_1^{-2/3} \fx,\ft_1^{-2/3} \fz) + \ft_2^{1/3}\hat\aip^2(\ft_2^{-2/3} \fz,\ft_2^{-2/3} \fy) \right\} \stackrel{\uptext{dist}}{=} \ft^{1/3}\hat\aip^1(\ft^{-2/3} \fx,\ft^{-2/3} \fy).
\end{equation*}
\end{theorem}
\begin{proof}
Let $\fh_0^{n}$ be a sequence of initial conditions taking finite values $\fh^n_0(\fy^n_i)$ at $\fy^n_i$, $i=1,\ldots, k_n$, and $-\infty$ everywhere else, which converges to $\fh_0$ in $\UC$ as $n\to\infty$.
By repeated application of Proposition~\ref{symmetries}(v) (and the easy fact that $\fh(\ft,\fx;\fh_0+\fa)=\fh(\ft,\fx;\fh_0)+\fa$ for $\fa\in\R$) we get
\[\fh(\ft,\fx;\fh^n_0)=\sup_{i=1,\ldots,k_n}\big\{\fh(\ft,\fx;\mathfrak{d}_{\fy^n_i})+\fh^n_0(\fy^n_i)\big\},\]
and taking $n\to\infty$ yields the result (the second equality in \eqref{eq:var} follows from the scaling invariance in Proposition~\ref{symmetries}(i)).
\end{proof}
One of the interests in this variational formula is that it leads to proofs of properties of the fixed point, as we already mentioned in earlier sections.
\begin{proof}[Proof of Proposition~\ref{symmetries}(v)]
The fact that the fixed point is invariant under translations of the initial data is straightforward, so we may assume $\fa=0$.
By Theorem~\ref{thm:airyvar} we have
\begin{align*}
\fh(\ft,\fx;\fh_0+c\fx)&\stackrel{\uptext{dist}}{=}\sup_\fy\Big\{ \ft^{1/3}\aip(\ft^{-2/3} \fx,\ft^{-2/3} \fy)-\ft^{-1}(\fx-\fy)^2 + \fh_0(\fy)+c\fy\Big\}\\
&\stackrel{\hphantom{\uptext{dist}}}{=}\sup_\fy\Big\{ \ft^{1/3}\aip(\ft^{-2/3} \fx,\ft^{-2/3}(\fy+c\ft/2))-\ft^{-1}(\fx-\fy)^2 + \fh_0(\fy+c\ft/2)+c\fx+c^2\ft/4\Big\}\\
&\stackrel{\uptext{dist}}{=}\sup_\fy\Big\{ \ft^{1/3}\hat\aip(\ft^{-2/3} \fx,\ft^{-2/3}\fy)+\fh_0(\fy+c\ft/2)+c\fx+c^2\ft/4\Big\}\\
&\stackrel{\hphantom{\uptext{dist}}}{=}\fh(\ft,\fx;\fh_0(\fx+c\ft/2))+c\fx+c^2\ft/4.\qedhere
\end{align*}
\end{proof}
\begin{proof}[Sketch of the proof of Proposition~\ref{holder}]
Fix $\alpha<1/3$ and choose $\beta<1/2$ so that $\beta/(2-\beta)=\alpha$.
By the Markov property it is enough to assume that $\fh_0\in \mathscr C^\beta$ and check the H\"older-$\alpha$ regularity at time 0.
By space regularity of the Airy$_2$ process (proved in \cite{quastelRemAiry1}, but which also follows from Theorem~\ref{reg}) there is an $R<\infty$ a.s. such that $|\aip_2(\fx)|\le R(1+ |\fx|^{\beta})$, and making $R$ larger if necessary we may also assume $| \fh_0(\fx)- \fh_0( \fx_0)| \le R( |\fx-\fx_0|^{\beta} + |\fx-\fx_0|)$.
From the variational formula \eqref{eq:var}, $|\fh(\ft, \fx_0) - \fh(0,\fx_0)|$ is then bounded by
\begin{equation*}
\sup_{\fx\in\R}\Big(R ( |\fx-\fx_0|^{\beta} + |\fx-\fx_0| + \ft^{1/3} + \ft^{(1 - 2\beta)/3 }|\fx|^\beta) - \tfrac1\ft (\fx_0-\fx)^2\Big).
\end{equation*}
The supremum is attained roughly at $\fx-\fx_0=\ft^{\eta}$ with $\eta$ such that $|\fx-\fx_0|^{\beta}\sim\tfrac1\ft (\fx_0-\fx)^2$.
Then $\eta=1/(2-\beta)$ and the supremum is bounded by a constant multiple of $\ft^{\beta/(2-\beta)}=\ft^\alpha$, as desired.
\end{proof}
\section{The 1:2:3 scaling limit of TASEP}
\label{sec:TASEPlimit}
In this section we will prove that for a large class of initial data the growth process of TASEP converges to the KPZ fixed point defined in Section~\ref{sec:KPZfp}. To this end we consider the TASEP particles to be distributed with a density close to $\frac{1}{2}$, and take the following scaling of the height function $h$ from Section~\ref{sec:growth}:
\begin{equation}\label{eq:hep}
\fh^{\ep}(\ft,\fx) = \ep^{1/2}\!\left[h_{2\ep^{-3/2}\ft}(2\ep^{-1}\fx) + \ep^{-3/2}\ft\right].
\end{equation}
We will always consider the linear interpolation of $\fh^{\ep}$ to make it a continuous function of $\fx\in \R$. Suppose that we have initial data $X_0^\ep$ chosen to depend on $\ep$ in such a way that
\begin{equation}\label{xplim}
\fh_0=\lim_{\ep\to 0} \fh^{\ep}(0,\cdot)
\end{equation}
in the $\UC$ topology. For fixed $\ft>0$, we will prove that the limit
\begin{equation}\label{eq:height-cvgce}
\fh(\ft,\fx;\fh_0)=\lim_{\ep\to0}\fh^{\ep}(\ft,\fx)
\end{equation}
exists, and take it as our \emph{definition} of the KPZ fixed point $\fh(\ft,\fx; \fh_0)$. We will often omit $\fh_0$ from the notation when it is clear from the context.
\begin{exercise}
For any $\fh_0\in \UC$, we can find initial data $X^\ep_0$ so that \eqref{xplim} holds.
\end{exercise}
\noindent We have the following convergence result for TASEP:
\begin{theorem}\label{thm:fullfixedpt}
For $\fh_0\in\UC$, let $X_0^\ep$ be initial data for TASEP such that the corresponding rescaled height functions $\fh_0^\ep$ converge to $\fh_0$ in the $\UC$ topology as $\ep\to0$.
Then the limit \eqref{eq:height-cvgce} exists (in distribution) locally in $\UC$ and is the KPZ fixed point with initial value $\fh_0$.
\end{theorem}
In other words, under the 1:2:3 scaling, as long as the initial data for TASEP converges in $\UC$, the evolving TASEP height function converges to the KPZ fixed point.
We now sketch the proof. Our goal is to compute $\pp_{\fh_0} \bigl(\fh(\ft,\fx_i)\leq \fa_i,\;i=1,\ldots,M\bigr)$. We choose for simplicity the frame of reference
\begin{equation}\label{eq:x0conv}
\xx_0^{-1}(-1)=1,
\end{equation}
i.e. the particle labeled $1$ is initially the rightmost in $\Z_{<0}$. Then it follows from \eqref{defofh}, \eqref{eq:hep} and \eqref{eq:height-cvgce} that the required probability should coincide with the limit as $\ep\to 0$ of
\begin{equation}\label{eq:TASEPtofp}
\pp_{X_0}\!\left(X_{2\ep^{-3/2}\ft}(\tfrac12\ep^{-3/2}\ft-\ep^{-1}\fx_i-\tfrac12\ep^{-1/2}\fa_i+1)>2\ep^{-1}\fx_i-2,\;i=1,\ldots,M\right).
\end{equation}
We therefore want to consider Theorem~\ref{thm:tasepformulas} with
\begin{equation}\label{eq:KPZscaling}
t=2\ep^{-3/2}\ft,\qquad n_i = \tfrac{1}{2}\ep^{-3/2}\ft-\ep^{-1}\fx_i-\tfrac12\ep^{-1/2}\fa_i+1,\qquad a_i=2\ep^{-1}\fx_i-2.
\end{equation}
\begin{remark} One might worry that in \eqref{eq:Kt-2} the initial data is assumed to be right finite. In fact, one can obtain a formula without this condition,
but it is awkward. On the other hand, one could always cut off the TASEP data far to the right, take the limit, and then remove the cutoff.
If we call the macroscopic position of the cutoff $L$, this means
the cutoff data is $X_0^{\ep,L}(n) = X^\ep_0(n)$ if
$n > -\lfloor\ep^{-1}L\rfloor$ and $X_0^{\ep,L}(n) = \infty$ if $n\le -\lfloor\ep^{-1}L\rfloor$. This corresponds to replacing $\fh^{\ep}_0(\fx)$ by $\fh^{\ep,L}_0(\fx)$ with a straight line with slope $-2\ep^{-1/2}$ to the right of $\ep X^\ep_0(-\lfloor\ep^{-1}L\rfloor)\sim 2L$. The question is
whether one can justify the exchange of limits $L\to\infty$ and $\ep\to 0$. It turns out not to be a problem because one can use the exact formula
to get a uniform bound (in $\ep$, and over initial data in $\UC$ bounded by $C(1+|x|)$) showing that the difference between \eqref{eq:TASEPtofp} computed with initial data $X^\ep_0$ and with initial data $X_0^{\ep,L}$ is at most $ C e^{-cL^3}$.
\end{remark}
\begin{lemma}\label{lem:KernelLimit1}
Under the scaling \eqref{eq:KPZscaling} (dropping the $i$ subscripts) and assuming that \eqref{xplim} holds, if we set $z=2\ep^{-1}\fx+\ep^{-1/2}(u+\fa)-2$ and $y'=\ep^{-1/2}v$, then we have for $\ft>0$, as $\ep \to 0$,
\begin{align}\label{eq:QRcvgce}
\fT^\ep_{-\ft,\fx}(v,u)&:=\ep^{-1/2}\mathcal{S}_{-t,-n}(y',z)\longrightarrow{} \fT_{-\ft,\fx}(v,u),\\\label{eq:QRcvgce2}
\bar\fT^\ep_{-\ft,-\fx}(v,u)&:=\ep^{-1/2}\bar{\mathcal{S}}_{-t,n}(y',z)\longrightarrow {} \fT_{-\ft,-\fx}(v,u),\\\label{eq:QRcvgce3}
\bar{\fT}^{\ep,\epi(-\fh_0^-)}_{-\ft,-\fx}(v,u)&:=\ep^{-1/2} {\bar{\mathcal{S}}}^{\oepi(X_0)}_{-t,n}(y',z)\longrightarrow {} \bar{\fT}^{\epi(-\fh_0^-)}_{-\ft,-\fx}(v,u)
\end{align}
pointwise, where $\fh_0^-(x)=\fh_0(-x)$ for $x\geq0$. Here $\mathcal{S}_{-t,-n}$ and $\bar{\mathcal{S}}_{-t,n}$ are defined in \eqref{def:sm} and \eqref{def:sn}.
\end{lemma}
\begin{proof}
Note that from \eqref{def:sm},\eqref{def:sn}, $ \bar{\mathcal{S}}_{-t,n} (z_1,z_2)= \mathcal{S}_{-t,-n+1-z_1+z_2}(z_2,z_1)$, so \eqref{eq:QRcvgce2}
follows from \eqref{eq:QRcvgce}.
By changing variables $w\mapsto\frac12(1-\ep^{1/2}\tilde w)$ in \eqref{def:sm}, and using the scaling \eqref{eq:KPZscaling}, we have
\begin{align}
&\fT^\ep_{-\ft,\fx}(u)= \frac{1}{2\pi\I} \oint_{\tts C_\ep}d\tilde w\, e^{\ep^{-3/2}\ft F_\ep(\ep^{1/2} \tilde w,\ep^{1/2}\fx_\ep/\ft,\ep u_\ep/\ft ) } ,
\label{eq:ft1} \\\label{eq:ft2}
&F( w,x,u)=
( \arctanh w-w)-
x\log(1- w^2) - u \arctanh w
\end{align}
where $\fx_\ep= \fx+\ep^{1/2} (\fa-u)/2+\ep/2 $ and $u_\ep = u+\ep^{1/2} $ and $C_\ep $ is a circle of radius $\ep^{-1/2}$ centred at $\ep^{-1/2}$
and $\arctanh w = \tfrac12[\log(1+w)-\log(1-w)]$. Note that
\begin{equation}\label{eq:ft3}
\partial_w F( w,x,u) = ( w-w_+)(w-w_-)(1- w^2)^{-1},\qquad w_\pm = -x \pm \sqrt{ x^2+ u}.
\end{equation}
From \eqref{eq:ft3} it is easy to see that as $\ep\searrow 0$, $\ep^{-3/2}\ft F_\ep(\ep^{1/2} \tilde w,\ep^{1/2}\fx_\ep/\ft,\ep u_\ep/\ft )$ converges to the corresponding exponent in \eqref{eq:fTdef} (keeping in mind that $
\fT_{-\ft,\fx}=(\fT_{\ft,\fx})^*$). Alternatively, one can just use \eqref{eq:ft2} and that for small $w$, $ \arctanh w-w\sim w^3/3$,
$-\log(1-w^2)\sim w^2$ and $\arctanh w\sim w$. Deform $C_\ep$ to the contour $\langle_{{}_\ep}~\cup~C^{\pi/3}_\ep$ where $\langle_{{}_\ep} $ is the part of the Airy contour $\langle$ within the ball of radius $\ep^{-1/2}$ centred at $\ep^{-1/2}$, and $C^{\pi/3}_\ep$ is the part of $C_\ep$ to the right of $\langle$.
As $\ep\searrow 0$, $\langle_{{}_\ep} \to \langle$, so it only remains to show that the integral over $C^{\pi/3}_\ep$ converges to $0$.
To see this note that the real part of the exponent of the integral over $\Gamma_0$ in \eqref{def:sm} is given by $ \ep^{-3/2}\tfrac{\ft}2 [ \cos\theta -1 + (\tfrac18- c\ep^{1/2})\log( 1-4(\cos\theta -1))] $ where $w=\tfrac12 e^{i\theta}$ and $c= \fx/\ft + \fa/2\ep^{1/2} +\ep$. Using $\log(1+x) \le x$ for $x\ge 0$, this is less than or equal
to $
\ep^{-3/2}\tfrac{\ft}8 [ \cos\theta -1]
$ for sufficiently small $\ep$. Points $\tilde{w}\in C^{\pi/3}_\ep$ correspond to $\theta\ge \pi/3$, so the exponent there is less than $-\ep^{-3/2}\kappa\ft$ for some $\kappa>0$. Hence this part of the integral vanishes in the limit.
To prove \eqref{eq:QRcvgce3}, define the scaled walk $\fB^\ep(\fx) = \ep^{1/2}\left(B_{\ep^{-1}\fx} + 2\ep^{-1}\fx-1\right)$ for $\fx\in \ep\Z_{\geq0}$, interpolated linearly in between, and let $\ftau^\ep$ be the hitting time by $\fB^\ep$ of $\epi(-\fh^{\ep}(0,\cdot)^-)$.
By Donsker's invariance principle~\cite{billingsley}, $\fB^\ep$ converges locally uniformly in distribution to a Brownian motion $\fB(\fx)$ with diffusion coefficient $2$, and therefore (using convergence of the initial values of TASEP) the hitting time $\ftau^\ep$ converges to $\ftau$ as well, which yields \eqref{eq:QRcvgce3}.
\end{proof}
We will compute next the limit of \eqref{eq:TASEPtofp} using \eqref{eq:extKernelProb} under the scaling \eqref{eq:KPZscaling}.
To this end we change variables in the kernel as in Lemma~\ref{lem:KernelLimit1}, so that for $z_i=2\ep^{-1}\fx_i+\ep^{-1/2}(u_i+\fa_i)-2$ we need to compute the limit of $\ep^{-1/2}\big(\bar\chi_{2\ep^{-1}\fx-2}K_t\bar\chi_{2\ep^{-1}\fx-2}\big)(z_i,z_j)$.
Note that the change of variables turns $\bar\chi_{2\ep^{-1}\fx-2}(z)$ into $\bar\chi_{-\fa}(u)$.
We have $n_i<n_j$ for small $\ep$ if and only if $\fx_j<\fx_i$ and in this case we have, under our scaling,
\[\ep^{-1/2}Q^{n_j-n_i}(z_i,z_j)\longrightarrow e^{(\fx_i-\fx_j)\p^2}(u_i,u_j),\]
as $\ep\to 0$. For the second term in \eqref{eq:Kt-2} we have
\begin{multline}
\ep^{-1/2}(\mathcal{S}_{t,-{n}_i})^*\bar{\mathcal{S}}_{t,n_j}(z_i,z_j)=\ep^{-1}\int_{-\infty}^{\infty}dv\,(\fT_{-\ft,\fx_i}^\ep)^*(u_i,\ep^{-1/2}v)\bar\fT^{\ep,\epi(-\fh_0^-)}_{-\ft,-\fx_j}(\ep^{-1/2}v,u_j)\\
\xrightarrow[\ep\to0]{}(\fT_{-\ft,\fx_i})^*\bar{\fT}^{\epi(-\fh_0^-)}_{-\ft,-\fx_j}(u_i,u_j)
\end{multline}
(modulo suitable decay of the integrand).
Thus we obtain a limiting kernel
\begin{equation}\label{def:firstker}
\fK_{\lim}(\fx_i,u_i;\fx_j,u_j)=-e^{(\fx_i-\fx_j)\p^2}(u_i,u_j)\1{\fx_i>\fx_j}+(\fT_{-\ft,\fx_i})^*\bar \fT^{\epi(-\fh_0^-)}_{-\ft,-\fx_j}(u_i,u_j),
\end{equation}
surrounded by projections $\bar\chi_{-\fa}$.
Our computations here only give pointwise convergence of the kernels, but they can be upgraded to trace class convergence (see \cite{KPZ}), which thus yields convergence of the Fredholm determinants.
We prefer the projections $\bar\chi_{-\fa}$ which surround \eqref{def:firstker} to read $\chi_{\fa}$, so we change variables $u_i\longmapsto-u_i$ and replace the Fredholm determinant of the kernel by that of its adjoint to get
\[\det\left(\fI-\chi_{\fa}\fK^{\hypo(\fh_0)}_{\ft,\uptext{ext}}\chi_{\fa}\right)\qquad\text{with}\qquad
\fK^{\hypo(\fh_0)}_{\ft,\uptext{ext}}(\fx_i,u_i;\fx_j,u_j)=\fK_{\lim}(\fx_j,-u_j;\fx_i,-u_i).\]
The choice of superscript $\hypo(\fh_0)$ in the resulting kernel comes from the fact
\[
\bar \fT^{\epi(-\fh_0^-)}_{-\ft,\fx}(v,-u)= \bigl(\bar\fT^{\hypo(\fh_0^-)}_{\ft,\fx}\bigr)^*(u,-v),
\]
which together with $\fT_{-\ft,\fx}(-u,v)=(\fT_{\ft,\fx})^*(-v,u)$ yield
\begin{equation}\label{eq:Kexthalf}
\fK^{\hypo(\fh_0)}_{\ft,\uptext{ext}}(\fx_i,\cdot;\fx_j,\cdot)=-e^{(\fx_j-\fx_i)\p^2}\1{\fx_i<\fx_j}+(\bar \fT^{\hypo(\fh_0^-)}_{\ft,-\fx_i})^*\fT_{\ft,\fx_j}.
\end{equation}
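In detail (a short verification; recall that for kernels $\fA^*(u,v)=\fA(v,u)$, and that the heat kernel satisfies $e^{\fy\p^2}(-u,-v)=e^{\fy\p^2}(v,u)$):
\begin{align*}
\fK_{\lim}(\fx_j,-u_j;\fx_i,-u_i)&=-e^{(\fx_j-\fx_i)\p^2}(-u_j,-u_i)\1{\fx_j>\fx_i}+\int_\R dv\,\fT_{-\ft,\fx_j}(v,-u_j)\,\bar \fT^{\epi(-\fh_0^-)}_{-\ft,-\fx_i}(v,-u_i)\\
&=-e^{(\fx_j-\fx_i)\p^2}(u_i,u_j)\1{\fx_i<\fx_j}+\int_\R dv\,(\fT_{\ft,\fx_j})^*(u_j,-v)\,\bigl(\bar\fT^{\hypo(\fh_0^-)}_{\ft,-\fx_i}\bigr)^*(u_i,-v)\\
&=-e^{(\fx_j-\fx_i)\p^2}(u_i,u_j)\1{\fx_i<\fx_j}+\bigl[(\bar \fT^{\hypo(\fh_0^-)}_{\ft,-\fx_i})^*\fT_{\ft,\fx_j}\bigr](u_i,u_j),
\end{align*}
where the second line uses the two identities displayed above and the third follows by changing variables $v\mapsto-v$; this is exactly \eqref{eq:Kexthalf}.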
This gives the following one-sided fixed point formula for the limit $\fh$:
\begin{theorem}[One-sided fixed point formula]\label{thm:Kfixedpthalf}
Let $\fh_0\in \UC$ with $\fh_0(\fx) = -\infty$ for $\fx>0$.
Then given $\fx_1<\fx_2<\dotsm<\fx_M$ and $\fa_1,\ldots,\fa_M\in\R$, we have
\begin{align}
&\pp_{\fh_0} \bigl(\fh(\ft,\fx_1)\leq \fa_1,\ldots,\fh(\ft,\fx_M)\leq \fa_M\bigr) =\det \left(\fI-\chi_{\fa} \fK^{\hypo(\fh_0)}_{\ft,\uptext{ext}}\chi_{\fa}\right)_{L^2(\{\fx_1,\ldots,\fx_M\}\times\R)}\\
&\hspace{0.275in}=\det \left(\fI-\fK^{\hypo(\fh_0)}_{\ft,\fx_M}+\fK^{\hypo(\fh_0)}_{\ft,\fx_M}e^{(\fx_1-\fx_M)\p^2}\bar \P_{\fa_1}e^{(\fx_2-\fx_1)\p^2}\bar \P_{\fa_2}\dotsm e^{(\fx_M-\fx_{M-1})\p^2}\bar \P_{\fa_M}\right)_{L^2(\R)},\label{eq:onesidepath}
\end{align}
with the kernel
\begin{equation}
\fK^{\hypo(\fh_0)}_{\ft,\fx}(\cdot,\cdot) = \fK^{\hypo(\fh_0)}_{\ft,\uptext{ext}}(\fx,\cdot;\fx,\cdot),
\end{equation}
where the latter is defined in \eqref{eq:Kexthalf}.
\end{theorem}
\noindent The second identity in \eqref{eq:onesidepath} can be obtained similarly to \eqref{eq:path-int-kernel-TASEPgem} for the discrete kernels.
Our next goal is to take a continuum limit in the $\fa_i$'s of the path-integral formula \eqref{eq:onesidepath} on an interval $[-L,L]$ and then take $L\to\infty$. For this we take $\fx_1,\dotsc,\fx_M$ to be a partition of $[-L,L]$ and take $\fa_i=\fg(\fx_i)$. Then taking the limit $M \to \infty$ we get as in \cite{flat} (and actually dating back to \cite{cqr})
\begin{equation}
\fT_{\ft/2,-L}\bar\P_{\fg(\fx_1)}e^{(\fx_2-\fx_1)\p^2}\bar\P_{\fg(\fx_2)}\dotsm e^{(\fx_M-\fx_{M-1})\p^2}\bar\P_{\fg(\fx_M)}(\fT_{\ft/2,-L})^*
\longrightarrow\fT_{\ft/2,-L}\wt\Theta^{\fg}_{-L,L}(\fT_{\ft/2,-L})^*,\label{eq:continuumLimit}
\end{equation}
where
\[
\wt\Theta^g_{\ell_1,\ell_2}(u_1,u_2)=\pp_{\fB(\ell_1)=u_1}\big(\fB(s)\leq\fg(s)~\forall\,s\in[\ell_1,\ell_2],\,\fB(\ell_2)\in du_2\big)/du_2.
\]
When we pass now to the limit $L \to \infty$, one can see (at least roughly) that we obtain
\[
\fT_{\ft/2,-L}\wt\Theta^{\fg}_{-L,L}(\fT_{\ft/2,-L})^*\longrightarrow \fI-\fK^{\epi(\fg)}_{-\ft/2}.
\]
One can find a rigorous proof of these results in \cite{KPZ}. After taking these limits, the Fredholm determinant from \eqref{eq:onesidepath} thus converges to
\begin{equation}
\det\left(\fI-\fK^{\hypo(\fh_0)}_{\ft/2}+\fK^{\hypo(\fh_0)}_{\ft/2} \bigl(\fI-\fK^{\epi(\fg)}_{-\ft/2}\bigr)\right)
=\det\left(\fI-\fK^{\hypo(\fh_0)}_{\ft/2}\fK^{\epi(\fg)}_{-\ft/2}\right),
\end{equation}
which is exactly the content of Theorem~\ref{thm:fullfixedpt}.
As in the TASEP case, the kernel in \eqref{eq:Kexthalf} can be rewritten (thanks to the analog of \eqref{eq:epiabove}) as
\begin{equation}\label{eq:Kexthalf-alt}
\fK^{\hypo(\fh_0)}_{\ft,\uptext{ext}}(\fx_i,\cdot;\fx_j,\cdot)=-e^{(\fx_j-\fx_i)\partial^2}\1{\fx_i<\fx_j}
+(\fT_{\ft,-\fx_i})^*\bar \P_{\fh_0(0)}\fT_{\ft,\fx_j}+(\bar{\fT}^{\hypo(\fh_0^-)}_{\ft,-\fx_i})^* \P_{\fh_0(0)}\fT_{\ft,\fx_j}.
\end{equation}
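Indeed, inserting $\fI=\P_{\fh_0(0)}+\bar\P_{\fh_0(0)}$ between the two factors of the last term in \eqref{eq:Kexthalf} and using that $\bar{\fT}^{\hypo(\fh_0^-)}_{\ft,-\fx_i}(v,\cdot)=\fT_{\ft,-\fx_i}(v,\cdot)$ for $v\le\fh_0^-(0)=\fh_0(0)$ (the hypo analog of \eqref{eq:epiabove}), one gets
\[
(\bar{\fT}^{\hypo(\fh_0^-)}_{\ft,-\fx_i})^*\fT_{\ft,\fx_j}
=(\fT_{\ft,-\fx_i})^*\bar\P_{\fh_0(0)}\fT_{\ft,\fx_j}
+(\bar{\fT}^{\hypo(\fh_0^-)}_{\ft,-\fx_i})^*\P_{\fh_0(0)}\fT_{\ft,\fx_j},
\]
which is precisely the rewriting in \eqref{eq:Kexthalf-alt}.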
\subsection{From one-sided to two-sided formulas.}
Now we derive the formula for the KPZ fixed point with general initial data $\fh_0$
as the $L\to\infty$ limit of the formula with initial data \[\fh_0^L(\fx)=\fh_0(\fx)\1{\fx\leq L}-\infty\cdot\1{\fx>L},\] which can be obtained from the previous theorem by translation invariance.
We then take, in the next subsection, a continuum limit of the operator $e^{(\fx_1-\fx_M)\p^2}\bar \P_{a_1}e^{(\fx_2-\fx_1)\p^2}\bar \P_{\fa_2}\dotsm e^{(\fx_M-\fx_{M-1})\p^2}\bar \P_{\fa_M}$ on the right side of \eqref{eq:onesidepath} to obtain a ``hit'' operator for the final data as well. The result of all this is the same as if we started with two-sided data for TASEP.
The shift invariance of TASEP tells us that $\fh(\ft, \fx; \fh_0^L)\stackrel{\text{dist}}{=} \fh(\ft,\fx-L;\theta_L\fh_0^L)$, where $\theta_L$ is the shift operator.
Our goal then is to take $L\to\infty$ in the formula given in Theorem~\ref{thm:Kfixedpthalf} for $\fh(\ft,\fx-L;\theta_L\fh_0^L)$.
We get
\[\pp_{\theta_L\fh_0^L}\!\left(\fh(\ft,\fx_1-L)\leq \fa_1,\ldots,\fh(\ft,\fx_M-L)\leq \fa_M\right)
=\det\left(\fI-\chi_{\fa}\wt\fK^{\theta_L\fh_0^L}_L\chi_{\fa}\right)_{L^2(\{\fx_1,\ldots,\fx_M\}\times\R)}\]
with
\begin{equation}\label{eq:onesidedforlimit}
\wt\fK^{\theta_L\fh_0^L}_L(\fx_i,\cdot ;\fx_j,\cdot)=-e^{(\fx_j-\fx_i)\p^2}\1{\fx_i<\fx_j}+e^{(\fx_j-\fx_i)\p^2}\bigl(\bar{\fT}^{\hypo((\theta_L\fh_0^L)^-_0)}_{\ft,-\fx_j+L}\bigr)^*\fT_{\ft,\fx_j-L}.
\end{equation}
Since $(\theta_{L}\fh_0^L)_0^+\equiv-\infty$, we may rewrite $\bigl(\bar{\fT}^{\hypo((\theta_L\fh_0^L)^-_0)}_{\ft,-\fx_j+L}\bigr)^*\fT_{\ft,\fx_j-L}$ as\footnote{At first glance it may look as if the product $e^{-\fx\p^2}\fK^{\hypo(\fh)}_\ft e^{\fx\p^2}$ makes no sense, because $\fK^{\hypo(\fh)}_{\ft}$ is given in \eqref{eq:Kepihypo} as the identity minus a certain kernel, and applying $e^{\fx\p^2}$ to $\fI$ is ill-defined for $\fx<0$.
However, thanks to the analog for $\fK^{\hypo(\fh)}_\ft$ of the expansion \eqref{eq:Kep-expansion}, the action of $e^{\fx\p^2}$ on this kernel, on the left and on the right, is well defined for any $\fx\in\R$.
This also justifies the identity in \eqref{eq:heatkerout}.}
\[
\fI- \bigl(\fT_{\ft,-\fx_j+L}-\bar{\fT}^{\hypo((\theta_L\fh_0^L)^-_0)}_{\ft,-\fx_j+L}\bigr)^*\bigl(\fT_{\ft,\fx_j-L}-\bar{\fT}^{\hypo((\theta_L\fh_0^L)^+_0)}_{\ft,\fx_j-L}\bigr)
=e^{-\fx_j\p^2}\fK^{\hypo(\fh_0^L)}_\ft e^{\fx_j\p^2},\label{eq:heatkerout}
\]
where $\fK^{\hypo(\fh_0)}_\ft$ is the kernel defined in \eqref{eq:Kepihypo}.
Note the crucial fact that the right hand side depends on $L$ only through $\fh_0^L$ (the various shifts by $L$ on the left hand side of \eqref{eq:heatkerout} play no role).
It was shown in~\cite{flat} that $\fK^{\hypo(\fh_0^L)}_\ft\longrightarrow\fK^{\hypo(\fh_0)}_\ft$ as $L\to\infty$.
This tells us that the second term on the right hand side of \eqref{eq:onesidedforlimit} equals $e^{-\fx_i\p^2}\fK^{\hypo(\fh_0^L)}_\ft e^{\fx_j\p^2}$, which converges to $e^{-\fx_i\p^2}\fK^{\hypo(\fh_0)}_\ft e^{\fx_j\p^2}$ as $L\to\infty$, and leads to
\begin{theorem}[Two-sided fixed point formula]\label{thm:two-sided-lim-extended}
Let $\fh_0\in \UC$ and $\fx_1<\fx_2<\dotsm<\fx_M$.
Then for $\fh(\ft,\fx)$ given as in \eqref{eq:height-cvgce},
\begin{align}
&\pp_{\fh_0}\bigl(\fh(\ft,\fx_1)\leq\fa_1,\ldots,\fh(\ft,\fx_M)\leq\fa_M\bigr) =\det\left(\fI-\chi_{\fa}\fK^{\hypo(\fh_0)}_{\ft,\uptext{ext}}\chi_{\fa}\right)_{L^2(\{\fx_1,\ldots,\fx_M\}\times\R)}\label{eq:twosided-ext}\\
&\hspace{0.3in}=\det\left(\fI-\fK^{\hypo(\fh_0)}_{\ft,\fx_M}+\fK^{\hypo(\fh_0)}_{\ft,\fx_M} e^{(\fx_1-\fx_M)\p^2}\bar\P_{\fa_1}e^{(\fx_2-\fx_1)\p^2}\bar\P_{\fa_2}\dotsm e^{(\fx_M-\fx_{M-1})\p^2}\bar\P_{\fa_M}\right)_{L^2(\R)}\label{eq:twosided-path}
\end{align}
where $\fK^{\hypo(\fh_0)}_{\ft,\uptext{ext}}$ is the kernel defined in \eqref{eq:Khypo-ext} and $\fK^{\hypo(\fh_0)}_{\ft,\fx}(\cdot,\cdot)=\fK^{\hypo(\fh_0)}_{\ft,\uptext{ext}}(\fx,\cdot;\fx,\cdot)$.
\end{theorem}
\subsection{Continuum limit.}\label{sec:continuumfixedpt}
We turn now to the continuum limit in the $\fa_i$'s of the path-integral formula \eqref{eq:twosided-path} on an interval $[-R,R]$ (we will take $R\to\infty$ later on).
To this end we conjugate the kernel inside the determinant by $\fT_{\ft/2,\fx_M}$, leading to
\[\wt\fK^{\fh_0}_{\ft,\fx_M}-\wt\fK^{\fh_0}_{\ft,\fx_M}\big[\fT_{\ft/2,\fx_1}\bar\P_{\fa_1}e^{(\fx_2-\fx_1)\p^2}\bar\P_{\fa_2}\dotsm e^{(\fx_M-\fx_{M-1})\p^2}\bar\P_{\fa_M}(\fT_{\ft/2,-\fx_M})^*\big]\]
with $\wt\fK^{\fh_0}_{\ft,\fx_M}=\fT_{\ft/2,\fx_M}\fK^{\hypo(\fh_0)}_{\ft,\fx_M}(\fT_{\ft/2,-\fx_M})^*=\fK^{\hypo(\fh_0)}_{\ft/2}$ (the second equality follows from \eqref{eq:Kepihypo}).
Now we take the limit of the term in brackets, letting $\fx_1,\ldots,\fx_M$ be a partition of $[-R,R]$ and taking $M\to\infty$ with $\fa_i=\fg(\fx_i)$.
As in~\cite{cqr} we have
\[
\fT_{\ft/2,-R}\,\bar\P_{\fg(\fx_1)}e^{(\fx_2-\fx_1)\p^2}\bar\P_{\fg(\fx_2)}\dotsm e^{(\fx_M-\fx_{M-1})\p^2}\bar\P_{\fg(\fx_M)}(\fT_{\ft/2,-R})^*
\longrightarrow\fT_{\ft/2,-R}\,\wt\Theta^{\fg}_{-R,R}(\fT_{\ft/2,-R})^*,
\]
where, as before, $\wt\Theta^{\fg}_{\ell_1,\ell_2}(u_1,u_2)=\pp_{\fB(\ell_1)=u_1}\big(\fB(s)\leq\fg(s)~\forall\,s\in[\ell_1,\ell_2],\,\fB(\ell_2)\in du_2\big)/du_2$.
Next we rewrite the probability as
\begin{multline*}
\pp_{\ell_1, x_1;\ell_2, x_2}\!\left(B(t)\leq g(t)\text{ on }[\ell_1,\ell_2]\right)
= \int_{-\infty}^{g(\alpha)}\pp_{\ell_1, x_1; \ell_2, x_2} ( B(\alpha)\in dy)\\
\times \pp_{\ell_1, x_1; \alpha, y} (B(t)<g(t)\text{ on }[\ell_1,\alpha])\pp_{\alpha, y;\ell_2, x_2}(
B(t)<g(t)\text{ on } [\alpha,\ell_2]).
\end{multline*}
The last probability in the above integral can be rewritten as
\begin{equation*}
\pp_{0, y;\ell_2-\alpha, x_2}(B(t)<g_\alpha(t)\text{ on } [0,\ell_2-\alpha])
=1-\int_0^{\ell_2-\alpha}\pp_y(\tau_{g_\alpha^+}\in dt)\tfrac{p(\ell_2-\alpha-t,x_2-g_\alpha(t))}{p(\ell_2-\alpha,x_2-y)}.
\end{equation*}
A similar identity can be written for $\pp_{\ell_1, x_1; \alpha, y}(B(t)<g(t)\text{ on }[\ell_1,\alpha])$, now using $\tau_{g_\alpha^-}$, which we take to be independent of $\tau_{g_\alpha^+}$, and going backwards from time $\alpha$ to time
$\ell_1$.
Using this and writing $\pp_{\ell_1, x_1; \ell_2, x_2}(B(\alpha)\in dy)$ explicitly we find that
\begin{align*}
\pp_{\ell_1, x_1;\ell_2, x_2}\!\left(B(t)<g(t)\text{ on }[\ell_1,\ell_2]\right)
&=\int_{-\infty}^{g(\alpha)}dy\sqrt{\tfrac{\ell_2-\ell_1}{4\pi(\alpha-\ell_1)(\ell_2-\alpha)}}
e^{-\frac{((\ell_2-\alpha)x_1+(\alpha-\ell_1)x_2+(\ell_1-\ell_2)y)^2}{4(\alpha-\ell_1)(\ell_2-\alpha)(\ell_2-\ell_1)}}\\
&\hspace{0.35in}\times\left(1-\int_0^{\alpha-\ell_1}\pp_y(\tau_{g_\alpha^-}\in dt_1)\tfrac{p(
\alpha-\ell_1 - t_1, x_1-g_\alpha(-t_1))}{ p(\alpha-\ell_1 , x_1-y) }\right)\\
&\hspace{0.35in}\times\left(1-\int_{0}^{\ell_2-\alpha}\pp_y(\tau_{g_\alpha^+}\in dt_2)\tfrac{
p(\ell_2-\alpha - t_2, x_2-g_\alpha(t_2))}{p(\ell_2-\alpha , x_2-y) } \right)
\end{align*}
Recalling that in the formula for
$\wt\Theta^g_{\ell_1,\ell_2}(x_1,x_2)$ this probability is premultiplied by
$p(\ell_2-\ell_1,x_2-x_1+\ell_2^2-\ell_1^2)$ and observing that
\[\tfrac{p(\ell_2-\ell_1,x_2-x_1+\ell_2^2-\ell_1^2)}{p(\alpha-\ell_1,x_1-y)p(\ell_2-\alpha,x_2-y)}\sqrt{\tfrac{\ell_2-\ell_1}{4\pi(\alpha-\ell_1)(\ell_2-\alpha)}}
e^{-\frac{((\ell_2-\alpha)x_1+(\alpha-\ell_1)x_2+(\ell_1-\ell_2)y)^2}{4(\alpha-\ell_1)(\ell_2-\alpha)(\ell_2-\ell_1)}}
=e^{\frac14(\ell_1^2-\ell_2^2+2x_1-2x_2)(\ell_1+\ell_2)}\] we deduce that
\begin{multline*}
\wt\Theta^g_{\ell_1,\ell_2}(x_1,x_2)=e^{\frac14(\ell_1^2-\ell_2^2+2x_1-2x_2)(\ell_1+\ell_2)}\\
\times\int_{-\infty}^{g(\alpha)}dy\left[p(\alpha-\ell_1 , x_1-y)-\int_0^{\alpha-\ell_1}dt_1\,\pp_{y}(\tau_{g_\alpha^-}\in dt_1)p(\alpha-\ell_1 - t_1, x_1-g_\alpha(-t_1))\right]\\
\times\left[p(\ell_2-\alpha , x_2-y)-\int_{0}^{\ell_2-\alpha}dt_2\,\pp_{y}(\tau_{g_\alpha^+}\in
dt_2)p(\ell_2 -\alpha- t_2, x_2-g_\alpha(t_2))\right].
\end{multline*}
Taking $-\ell_1=\ell_2=L$, for any $\alpha\in(-L,L)$ we have
\begin{multline*}
A^*e^{-L\Delta}\wt\Theta^g_{-L,L}e^{-L\Delta}A(\lambda_1,\lambda_2)\\
=\int_{-\infty}^{g(\alpha)}dy\left[A^*e^{\alpha \Delta}(\lambda_1,y)-\int_{0}^{\alpha+L}dt_1\,\pp_y(\tau_{g_\alpha^-}\in
dt_1)A^*e^{(\alpha-t_1)\Delta}(\lambda_1,g_\alpha(-t_1))\right]\\\times
\left[e^{-\alpha \Delta}A(y,\lambda_2)-\int_{0}^{L-\alpha}dt_2\,\pp_y(\tau_{g_\alpha^+}\in
dt_2)e^{-(\alpha+t_2)\Delta}A(g_\alpha(t_2),\lambda_2)\right],
\end{multline*}
where the \emph{Airy transform} $A$ is defined by
\begin{equation}
\label{eq:B0}
A(x,\lambda)=\Ai(x-\lambda).
\end{equation}
Now we have $\fT_{\ft/2,-R}\wt\Theta^{\fg}_{-R,R}(\fT_{\ft/2,-R})^*\longrightarrow \fI-\fK^{\epi(\fg)}_{-\ft/2}$ as $R\to\infty$.
Our Fredholm determinant is thus now given by
\[
\det\left(\fI-\fK^{\hypo(\fh_0)}_{\ft/2}+\fK^{\hypo(\fh_0)}_{\ft/2}(\fI-\fK^{\epi(\fg)}_{-\ft/2})\right)
=\det\left(\fI-\fK^{\hypo(\fh_0)}_{\ft/2}\fK^{\epi(\fg)}_{-\ft/2}\right),\label{eq:preThmfixedpt}
\]
which is the KPZ fixed point formula \eqref{eq:fpform}.
\begin{exercise} The Airy transform satisfies $AA^*=I$, so that $f(x)=\int_{-\infty}^\infty d\lambda\,\Ai(x-\lambda)\,A^*\!f(\lambda)$.
In other words, the shifted Airy functions $\{\Ai(x-\lambda)\}_{\lambda\in\R}$ (which are not in $L^2(\R)$) form a generalized orthonormal basis of $L^2(\R)$. Thus the \emph{Airy kernel} $\K(x,y)=\int_{-\infty}^0 d\lambda\Ai(x-\lambda)\!\Ai(y-\lambda)$ is the projection onto the subspace spanned by $\{\Ai(x-\lambda)\}_{\lambda\leq0}$. Show that \begin{equation}\label{eq:K-AiryTr}
\K=A\bar \chi_0A^*,
\end{equation}
\begin{equation}\label{eq:FGUE}
F_{\rm GUE}(r)=\det\!\big(I-\chi_r\K \chi_r\big)=\det\!\big(I-\K \chi_r\K\big)
\end{equation}
and$^{*}$
\begin{equation}\label{eq:FGOE}
F_{\rm GOE}(4^{1/3}r)=\det\!\big(I-\K\varrho_r\K\big)
\end{equation}
where $\varrho_r$ is the reflection operator
\begin{equation}\label{eq:varrho}
\varrho_rf(x)=f(2r-x).
\end{equation}
\end{exercise}
\bibspread
\end{document}
\begin{document}
\topmargin -2pt
\baselineskip 20pt
\begin{flushright}
{\tt quant-ph/0308156} \\
\end{flushright}
\begin{center}
{\Large \bf Quantum Entanglement under Lorentz Boost }\\
{\sc Daeho Lee~ and~ Ee Chang-Young}\footnote{[email protected]}\\
{\it Department of Physics, Sejong University, Seoul 143-747, Korea}\\
{\bf ABSTRACT}
\end{center}
In order to understand the characteristics of quantum entanglement
of massive particles under Lorentz boost, we first introduce a
relevant relativistic spin observable, and evaluate its
expectation values for the Bell states under Lorentz boost. Then
we show that maximal violation of Bell's inequality can be
achieved by properly adjusting the directions of the spin
measurement even in a relativistically moving inertial
frame.
Based on this we infer that the entanglement information is preserved
under Lorentz boost as a form
of correlation information determined by the transformation
characteristic of the Bell state in use.
\\
\noindent PACS codes: 03.65.Ud, 03.30.+p
\thispagestyle{empty}
\section*{I. Introduction}
Quantum entanglement is a novel feature of quantum physics when
compared with classical physics. It demonstrates the nonlocal
character of quantum mechanics and is the very basis of quantum
information processing such as quantum computation and quantum
cryptography. Up until recently quantum entanglement was
considered only within the non-relativistic regime. Then, starting
with the work of \cite{czachor} there have been quite a lot of
works investigating how quantum entanglement behaves when measured in
an inertial frame moving at relativistic speed
\cite{am,pst,ging,tu,ahn,rsw,crsw,terno,czw,ps,czachor2,peres,bga,ahnk}.
Consider two spin half particles with total spin zero moving in
opposite directions. Suppose the spin component of each particle
is measured in the same direction by two observers in the
laboratory (lab) frame. Then the two spin components have opposite
values in whichever direction the spin measurements are performed.
This is known as the EPR (Einstein, Podolsky, Rosen) correlation
and is due to the isotropy of the spin singlet state. Is the EPR
correlation valid even for the two observers sitting in a moving
frame which is Lorentz boosted relativistically with respect to
the lab frame? This issue has been investigated from various
aspects by many people including the above quoted authors.
However, the answer to this question has not yet been clarified.
In \cite{czachor}, Czachor considered the spin singlet of two
spin-$\frac{1}{2}$ massive particles moving in the same direction.
He introduced the concept of a relativistic spin observable, which is
closely related to the spatial components of the Pauli-Lubanski
vector. For two observers in the lab frame measuring the spin
component of each particle in the same direction, the expectation
value of the joint spin measurement, i.e., the expectation value
of the tensor product of the relativistic spin observables of the
constituent particles, becomes dependent on the boost velocity.
Only when the boost speed reaches that of light, or when the
direction of the spin measurements is perpendicular to the boost
direction, does the expectation value become $-1$. Thus only in these
limiting cases do the results seem to agree with the EPR correlation.
Czachor considered only the changes in the spin operator part
by defining a new relativistic spin operator. There, the state
does not need to be transformed since the observer is at rest.
Starting a year ago there appeared a flurry of papers
investigating the effect of the Lorentz boost or the Wigner
rotation on entanglement. Here, we mention some of them that are
directly related to the issue in our work.
Alsing and Milburn \cite{am} considered the entanglement of two
particles moving in opposite directions and showed that the Wigner
rotation under Lorentz boost is a local unitary operation, with
which Dirac spinors representing the two particles transform. They
concluded that the entanglement is Lorentz invariant because this
operation is unitary.
Gingrich and Adami \cite{ging} investigated the entanglement
between the spin and momentum parts of two entangled particles.
They concluded that the entanglement of the spin part is carried
over to the entanglement of the momentum part under Lorentz boost,
although the entanglement of the whole system is Lorentz invariant
due to unitarity of the transformation.
However, the concept of reduced density matrices with
traced-out momenta used in that work has recently drawn some
criticism \cite{czw}.
Terashima and Ueda \cite{tu} considered the effect of Wigner
rotation on the spin singlet and evaluated the Bell observable
under Lorentz boost. They concluded that although the degree of
the violation of Bell's inequality is decreased under Lorentz
boost, the maximal violation of Bell's inequality can still be
obtained by properly adjusting the directions of the spin
measurements in the moving frame. They also claimed that the
perfect anti-correlation of the spin singlet seen in the EPR
correlation is maintained for appropriately chosen spin
measurement directions depending on the Lorentz boost, even
though the EPR correlation is not maintained when the directions
of spin measurements remain the same.
In \cite{tu}, Terashima and Ueda considered the changes in the
states only. Their spin operator has the same form as the
non-relativistic spin operator. In this sense their result that
the maximal violation of Bell's inequality can be achieved even in
the moving frame was somewhat expected due to the unitarity of the
state transformation. In fact, \cite{am} and \cite{tu} considered
the changes in the states only, and both reached a similar
conclusion that the entanglement can be preserved under Lorentz
boost.
However, if one considers the changes of the spin operator under
Lorentz transformation as Czachor did in \cite{czachor}, the story
becomes different: Bell's inequality might not be violated; that
is, the entanglement may not be preserved under the
transformation.
In a general situation, one has to consider both the changes
in the spin operator and the changes in the states. This was done
by Ahn {\it et al.} \cite{ahn}, who calculated
the Bell observable for the Bell states under Lorentz boost, and
showed that Bell's inequality is not violated in the
relativistic limit. They used Czachor's relativistic spin
operator and transformed the state under Lorentz boost
accordingly. Their result strongly suggested that the entanglement
is not preserved under Lorentz boost.
They further concluded
\cite{ahnk} that quantum entanglement is not invariant under
Lorentz boost based on the evaluation of the entanglement fidelity
\cite{schum}.
Here, we would like to note that the spin operator used in
\cite{ahn} is not as general as it should be. This is because the
spin operator used in \cite{ahn} is the same as Czachor's
\cite{czachor}, which is a spin operator for a restricted
situation that we call Czachor's limit in this paper.
In this paper, we consider the changes in the spin operator under
the Lorentz boost in a general situation compared with \cite{ahn}.
We first formulate the relativistic spin observable based on the
equality of the expectation values of a one-particle spin
measurement evaluated in two reference frames, one in the
lab frame in which the particle has a velocity $\vec{v}$ and the
observer is at rest, the other in the moving frame Lorentz boosted
with $\vec{v}$, in which the particle is at rest and the observer
is moving with a velocity $-\vec{v}$. Applying the relativistic
spin observable to the two-particle spin singlet state, we
evaluate the expectation value of the joint spin measurement. Then
we calculate the values of the Bell observable for the Bell
states. The values of the Bell observable decrease as the boost
velocity becomes relativistic. However, we find a new set of spin
measurement axes with which Bell's inequality is maximally
violated. This seems to imply that the information on the
correlation due to entanglement is kept even in the moving frame.
In fact, under a Lorentz boost certain entangled states transform
into combinations of different entangled states. However, in
certain directions of spin measurements the above combinations of
states become eigenstates of these spin operators. In this manner
the correlation information in one frame is maintained in other
frames.
The paper is organized as follows. In section II, we formulate the
relativistic spin observable. Then in section III, we evaluate the
expectation value of the joint spin measurement for the spin singlet.
In section IV, we find a new set of spin measurement axes for a
spin singlet state, with which Bell's inequality is maximally
violated even under Lorentz boost. In section V, we show that the
same thing can be done for the other Bell states. We conclude with
a discussion in section VI.
\\
\section*{II. Relativistic spin observable}
In this section, we consider a spin measurement of a massive
particle viewed from two different inertial reference frames: one
in the lab frame where the particle has a certain velocity, the
other in the moving frame where the particle is at rest. In order
to bring the particle to rest, the moving frame is Lorentz boosted
in the direction opposite to the particle's velocity in the lab
frame, just to compensate for the particle's motion in the lab frame.
Since we are just considering the same measurement in the two
inertial frames, the respective expectation values of this
measurement observed in the two frames should be the same:
\begin{equation}
\left<\Psi_{\vec{p}}\left| \frac{\vec{a}\cdot
\vec{\sigma_{p}}}{|\lambda(\vec{a}\cdot \vec{\sigma_{p}}) |}
\right|\Psi_{\vec{p}}\right>_{\text{lab}}
=\left<\Psi_{\vec{p}=0}\left| \frac{\vec{a_{p}}\cdot
\vec{\sigma}}{|\lambda(\vec{a_{p}}\cdot \vec{\sigma}) |}
\right|\Psi_{\vec{p}=0}\right>_{\text{rest}} \label{ce1}
\end{equation}
Here, $\vec{a}$ and $\vec{p}$ are the spin measurement axis
and the momentum of the particle respectively in the lab frame,
and
$\vec{a_{p}}$ is the vector obtained from $\vec{a}$ by the inverse
Lorentz boost, i.e., the boost by $-\vec{p}$.
$|\Psi_{\vec{p}}>$ and $|\Psi_{\vec{p}=0}>$ are the wave functions
of the particle in the lab and moving frames, respectively.
Now, the two wave functions are related by
\begin{equation}
|\Psi_{\vec{p}}> =U(L(\vec{p}))~ |\Psi_{\vec{p}=0}>
=U(R(\hat{p}))~U(L_{z}(|\vec{p}|))~ |\Psi_{\vec{p}=0}>,
\label{ce2}
\end{equation}
since an arbitrary four-momentum $p$ can be written as
$p=R(\hat{p})L_{z}(|\vec{p}|)k$, where $k=(m,0,0,0)$ is the four-momentum
of the particle at rest, $L(\vec{p})$ and
$L_{z}(|\vec{p}|)$ are the Lorentz boosts along $\vec{p}$ and the
$z$-axis respectively, and $R(\hat{p})$ is the rotation taking $\hat{z}$
to $\hat{p}$.
If we decompose the particle's wave functions into their spatial
and spin parts, for instance $|\Psi_{\vec{p}=0}>=|0>\otimes
|\chi_{\vec{p}=0}>$, then using the relation (\ref{ce1}) we can obtain the
relation between the relativistic spin operator $\vec{\sigma_{p}}$
in the lab frame and the non-relativistic spin operator
$\vec{\sigma}$ in the particle at rest frame (the moving frame) in
terms of the spin measurement axis $\vec{a}$ in the lab frame and
its Lorentz transformed spin measurement axis $\vec{a_{p}}$ in the
moving frame.
\begin{eqnarray}
& & \left<\chi_{\vec{p}=0}\left| \otimes \left< \vec{p} = 0\left|
\frac{\vec{a_{p}}\cdot \vec{\sigma}}{|\lambda(\vec{a_{p}}\cdot
\vec{\sigma}) |}\right|\vec{p}
= 0\right>\otimes\right|\chi_{\vec{p}=0}\right> _{\text{rest}} \nonumber\\
& & = \left<\Psi_{\vec{p}=0}\left|
U^{\dagger}(L_{z}(|\vec{p}|))U^{\dagger}(R(\hat{p}))
\left[\frac{\vec{a}\cdot \vec{\sigma_{p}}}{|\lambda(\vec{a}\cdot
\vec{\sigma_{p}}) |}\right] U(R(\hat{p}))~U(L_{z}(|\vec{p}|))
\right|\Psi_{\vec{p}=0}\right>_{\text{lab}}
\label{ce3}
\end{eqnarray}
Also, $U(L_{z}(|\vec{p}|))~|\Psi_{\vec{p}=0}>=|p_{z}> \otimes~
|\chi_{p_{z}}>$, and thus we have
\begin{eqnarray}
& & <\vec{p}=0|\vec{p}=0>\left<\chi_{\vec{p}=0}\left|
\frac{\vec{a_{p}}\cdot \vec{\sigma}}{|\lambda(\vec{a_{p}}\cdot
\vec{\sigma}) |}\right|\chi_{\vec{p}=0}\right>
_{\text{rest}} \nonumber\\
& & =\left<\chi_{p_{z}}\left|\otimes \left
<p_{z}\left| U^{\dagger}(R(\hat{p})) \left[\frac{\vec{a}\cdot
\vec{\sigma_{p}}}{|\lambda(\vec{a}\cdot \vec{\sigma_{p}})
|}\right] U(R(\hat{p})) \right|p_{z} \right >\otimes \right|
\chi_{p_{z}}\right>_{\text{lab}} \nonumber\\
& & =\left<\chi_{p_{z}}\left|U^{\dagger}(R(\hat{p}))\otimes \left
<\vec{p}\left| \left[\frac{\vec{a}\cdot
\vec{\sigma_{p}}}{|\lambda(\vec{a}\cdot \vec{\sigma_{p}})
|}\right] \right|\vec{p} \right >\otimes U(R(\hat{p})) \right|
\chi_{p_{z}}\right>_{\text{lab}}
\nonumber\\
& &
=<\vec{p}~|~\vec{p}>\left<\chi_{p_{z}}\left|U^{\dagger}(R(\hat{p}))
\left[\frac{\vec{a}\cdot \vec{\sigma_{p}}}{|\lambda(\vec{a}\cdot
\vec{\sigma_{p}}) |}\right] U(R(\hat{p})) \right|
\chi_{p_{z}}\right>_{\text{lab}} \label{ce4}
\end{eqnarray}
Here, we notice that the spin wave function in the particle at
rest frame is not affected under arbitrary Lorentz boost
$L(\vec{p})$ since the Wigner angle due to the Lorentz boost in
this case is zero:
\begin{equation}
|\Psi_{\vec{p}}>=U(L(\vec{p}))|\Psi_{\vec{p}=0}>
= U(L(\vec{p})) |0> \otimes \left(
\begin{array}{c}
\alpha \\
\beta \\
\end{array}
\right)
=|\vec{p}> \otimes \left(
\begin{array}{c}
\alpha \\
\beta \\
\end{array}
\right).
\end{equation}
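One way to see that the Wigner angle vanishes in this case: the Wigner rotation accompanying a Lorentz transformation $\Lambda$ acting on a state of four-momentum $q$ is $W(\Lambda,q)=L^{-1}(\Lambda q)\,\Lambda\, L(q)$, where $L(q)$ denotes the standard transformation taking the rest momentum $k$ to $q$. For a particle initially at rest we have $q=k$ and $L(k)=1$, so for the pure boost $\Lambda=L(\vec{p})$,
\begin{equation}
W\bigl(L(\vec{p}),k\bigr)=L^{-1}\bigl(L(\vec{p})\,k\bigr)\,L(\vec{p})=L^{-1}(p)\,L(\vec{p})=1,
\end{equation}
i.e., the spin part of the wave function is left untouched.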
Namely, $ | \chi_{\vec{p}=0}> _{\text{rest}} = |
\chi_{p_{z}}>_{\text{lab}} , $ and thus from (\ref{ce3}) and
(\ref{ce4}), we obtain the following relation.
\begin{equation}
U^{\dagger}(R(\hat{p})) \left[\frac{\vec{a}\cdot
\vec{\sigma_{p}}}{|\lambda(\vec{a}\cdot \vec{\sigma_{p}})
|}\right] U(R(\hat{p})) =\frac{\vec{a_{p}}\cdot
\vec{\sigma}}{|\lambda(\vec{a_{p}}\cdot \vec{\sigma}) |}
\frac{<\vec{p}=0|\vec{p}=0>}{<\vec{p}~|~\vec{p}>}. \label{ce6}
\end{equation}
Based on the above observation, we define the relativistic spin
observable as
\begin{equation}
\hat{a} \equiv \vec{a}\cdot \vec{\sigma_{p}}/|\lambda(\vec{a}\cdot
\vec{\sigma_{p}})| \equiv U(R(\hat{p})) \frac{\vec{a_{p}}\cdot
\vec{\sigma}}{|\lambda(\vec{a_{p}}\cdot \vec{\sigma})|}
U^{\dagger}(R(\hat{p})). \label{ce7}
\end{equation}
Here, $R(\hat{p})$ is the rotation from the $z$-axis to the
direction of $\vec{p}$, which can be written as
\begin{equation}
R(\hat{p}) =R_{z}(\phi_{p})R_{y}(\theta_{p}) = \left(
\begin{array}{ccc}
\cos\phi_{p}\cos\theta_{p} & -\sin \phi_{p} & \cos\phi_{p}\sin\theta_{p} \\
\sin \phi_{p}\cos\theta_{p} &\cos\phi_{p} & \sin \phi_{p}\sin\theta_{p} \\
-\sin\theta_{p} & 0 & \cos\theta_{p} \\
\end{array}
\right),
\label{ce8}
\end{equation}
and
\begin{equation}
U(R(\hat{p}))=\exp(-i\phi_{p}\sigma_{z}/2)\exp(-i\theta_{p}\sigma_{y}/2).
\end{equation}
The spin measurement axis in the moving frame, $\vec{a}_{p}$, is
given by the spatial part of
${a}_{p}=[R(\hat{p})L_{z}(|\vec{p}|)]^{-1}a$, where
$p=L_{\hat{p}}(|\vec{p}|)k = R(\hat{p})L_{z}(|\vec{p}|)k$ with
$k=(m,0,0,0)$, and the spin measurement axis in the lab frame,
$\vec{a}$, is the spatial part of $a$.
Putting all this together, the relativistic spin observable can be
expressed as follows.
\begin{eqnarray}
\hat{a} &= &U(R(\hat{p})) \frac{\vec{a_{p}}\cdot
\vec{\sigma}}{|\lambda(\vec{a_{p}}\cdot \vec{\sigma})|}
U^{\dagger}(R(\hat{p})) \nonumber\\
& = & \frac{\vec{a_{p}}}{|\lambda(\vec{a_{p}}\cdot \vec{\sigma})|}
\cdot [\exp(-i\phi_{p}\sigma_{z}/2)\exp(-i\theta_{p}\sigma_{y}/2)~
\vec{\sigma}~\exp(i\theta_{p}\sigma_{y}/2)\exp(i\phi_{p}\sigma_{z}/2)]
\nonumber\\
& = & \vec{a_{p}}\cdot
R(\hat{p})\vec{\sigma}/|\lambda(\vec{a_{p}}\cdot\vec{\sigma} )|
\label{ce10}
\end{eqnarray}
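As a simple consistency check, for a particle moving along the
$z$-axis one may take $\theta_{p}=\phi_{p}=0$, so that $R(\hat{p})=1$ and
(\ref{ce10}) reduces to
\[
\hat{a}=\frac{\vec{a_{p}}\cdot \vec{\sigma}}{|\lambda(\vec{a_{p}}\cdot
\vec{\sigma})|},
\]
which is the form used in the next section.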
\\
\section*{III. Relativistic joint spin measurement for spin singlet }
In this section, we apply the relativistic spin observable defined
in the previous section to a spin singlet state which consists of
two massive spin-$\frac{1}{2}$ particles.
First, we consider a simple case in which the spin measuring device
is fixed in the lab frame and
both particles are moving with the same
velocity in the lab frame. This is the same set-up as the one Czachor
considered in his work \cite{czachor}.
The spin measuring axes are in the direction of $\vec{a}$ for
particle 1, and in the direction of $\vec{b}$ for particle 2. We
choose the particles' moving direction as the $+z$ axis. Then the
expectation value of joint spin measurement for the particles can
be expressed as
\begin{eqnarray}
<\hat{a}\otimes \hat{b}> & = & \left<\Psi\left|\frac{\vec{a}\cdot
\sigma_{p}}{|\lambda(\vec{a}\cdot \sigma_{p})|} \otimes
\frac{\vec{b}\cdot \sigma_{p}}{|\lambda(\vec{b}\cdot \sigma_{p})|}
\right|\Psi \right> \nonumber\\
& =& \left<\Psi\left| \frac{\vec{a_{p}}\cdot
R(\hat{p})\vec{\sigma}}{|\lambda(\vec{a_{p}}\cdot\vec{\sigma} )|}
\otimes \frac{\vec{b_{p}}\cdot
R(\hat{p})\vec{\sigma}}{|\lambda(\vec{b_{p}}\cdot\vec{\sigma} )|}
\right|\Psi \right> \nonumber\\
& = & \left<\Psi\left| \frac{\vec{a_{p}}\cdot
\vec{\sigma}}{|\lambda(\vec{a_{p}}\cdot\vec{\sigma} )|} \otimes
\frac{\vec{b_{p}}\cdot
\vec{\sigma}}{|\lambda(\vec{b_{p}}\cdot\vec{\sigma} )|}
\right|\Psi \right>
\label{ce11}
\end{eqnarray}
where the state function is given by
$|\Psi>=\frac{1}{\sqrt{2}}~(|\vec{p},\frac{1}{2}>|\vec{p},\frac{-1}{2}>
-|\vec{p},\frac{-1}{2}>|\vec{p},\frac{1}{2}> )$.
In the last step, we used $R(\hat{p})=1$ since $\hat{p} =
\hat{z}$ in the present case.
The measuring axis $\vec{a_{p}}$ in the moving frame is given by the spatial part of
the Lorentz transformed vector $a_{p}^{~\mu}=L_{z}(- \xi )_{~\nu}^{\mu}~a^{\nu}$,
where $\tanh\xi \equiv \beta_{p}$
is the velocity of the particles:
\begin{equation}
a_{p}=\left(
\begin{array}{cccc}
\cosh\xi & 0 & 0 & -\sinh\xi \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
-\sinh\xi & 0 & 0 & \cosh\xi \\
\end{array}
\right) \left(
\begin{array}{c}
0 \\
a_{x} \\
a_{y} \\
a_{z} \\
\end{array}
\right)= \left(
\begin{array}{c}
-a_{z}\sinh\xi \\
a_{x} \\
a_{y} \\
a_{z}\cosh\xi \\
\end{array}
\right).
\label{ce12}
\end{equation}
Since the magnitude of $\vec{a_{p}}$ is the same as that of the
eigenvalue of
$\vec{a_{p}}\cdot \vec{\sigma}$,
we get
$|\lambda_{a_{p}}|=\sqrt{a_{x}^{2}+a_{y}^{2}+a_{z}^{2}\cosh^{2}\xi}
=\sqrt{1+a_{z}^{2}\sinh^{2}\xi}$, where we used $a_{x}^{2}+a_{y}^{2}+a_{z}^{2}=1$.
Thus, the relativistic spin observable for particle 1 in the
present case is given by
\begin{equation}
\hat{a} \equiv \frac{\vec{a} \cdot
\vec{\sigma_{p}}}{|\lambda(\vec{a} \cdot \vec{\sigma_{p}})|} =
\frac{\vec{a_p} \cdot \vec{\sigma}}{|\lambda(\vec{a_p} \cdot
\vec{\sigma})|}
=\frac{a_{x}\sigma_{x}+a_{y}\sigma_{y}+a_{z}\sigma_{z}\cosh\xi}
{\sqrt{1+a_{z}^{2}\sinh^{2}\xi}}.
\label{ce13}
\end{equation}
The same holds for particle 2. Thus, the expectation value of the
joint spin measurement (\ref{ce11}) is given by
\begin{eqnarray}
<\hat{a}\otimes \hat{b}> & = &
\frac{<\Psi|(a_{x}\sigma_{x}+a_{y}\sigma_{y}+a_{z}\sigma_{z}\cosh\xi)
\otimes(b_{x}\sigma_{x}+b_{y}\sigma_{y}+b_{z}\sigma_{z}\cosh\xi)
|\Psi>}{\sqrt{1+a_{z}^{2}\sinh^{2}\xi}\sqrt{1+b_{z}^{2}\sinh^{2}\xi}}\nonumber\\
& = & -\frac{(a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\cosh^{2}\xi)}
{\sqrt{1+a_{z}^{2}\sinh^{2}\xi}\sqrt{1+b_{z}^{2}\sinh^{2}\xi}}
\label{ce14}
\end{eqnarray}
where we used $<\Psi|~\sigma_{i} \otimes
\sigma_{j}~|\Psi>=-\delta_{ij}$ for $i,j=x,y,z$. Equation (\ref{ce14})
agrees with Czachor's result.
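As a quick check of (\ref{ce14}), setting $\xi=0$ gives back the
non-relativistic singlet correlation
\[
<\hat{a}\otimes \hat{b}>\Big|_{\xi=0}
=-(a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z})=-\vec{a}\cdot\vec{b},
\]
while for equal measurement axes, $\vec{a}=\vec{b}$, the numerator of
(\ref{ce14}) equals the denominator and the perfect anti-correlation
$-1$ is maintained for all $\xi$.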
In order to see whether Bell's inequality is still maximally
violated in this case, we now consider the so-called Bell
observable $C(a,a',b,b')$ defined as \cite{czachor}
\begin{equation}
C(a,a',b,b') \equiv <\hat{a} \otimes \hat{b}>+<\hat{a} \otimes
\hat{b'}>+<\hat{a'} \otimes \hat{b}>-<\hat{a'} \otimes \hat{b'}>.
\label{bell-obs}
\end{equation}
For maximal violation, we choose the following set of vectors for
spin measurements
\begin{eqnarray}
\vec{a} &=& (0,\frac{1}{\sqrt{2}}~,~\frac{1}{\sqrt{2}})~,~~
\vec{a'} = (0,-\frac{1}{\sqrt{2}}~,~\frac{1}{\sqrt{2}}),
\nonumber\\
\vec{b}&=& (0~,~0~,~1) ~, ~~~ \vec{b'}=(0~,~1~,~0),
\label{ce16}
\end{eqnarray}
which yields $ | C(a,a',b,b') | = 2 \sqrt{2} $ in the
non-relativistic case.
Using (\ref{ce14}), we get the Bell observable for the above
vector set as
\begin{equation}
C(a,a',b,b')
=-\frac{2(1+\cosh\xi)}{\sqrt{2+\sinh^{2}\xi}}.
\label{ce17}
\end{equation}
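Explicitly, the four correlations entering (\ref{bell-obs}) for this
vector set are
\[
<\hat{a}\otimes \hat{b}>=<\hat{a'}\otimes \hat{b}>
=-\frac{\cosh\xi}{\sqrt{2+\sinh^{2}\xi}}~,~~~
<\hat{a}\otimes \hat{b'}>=-<\hat{a'}\otimes \hat{b'}>
=-\frac{1}{\sqrt{2+\sinh^{2}\xi}}~,
\]
which, with the signs in (\ref{bell-obs}), add up to the value above.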
We see that $|C(a,a',b,b')|$ approaches 2 in the relativistic
limit $\xi \rightarrow \infty$, so that Bell's inequality is
no longer violated in this limit.
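Indeed, at $\xi=0$ one recovers the non-relativistic value
$|C(a,a',b,b')|=4/\sqrt{2}=2\sqrt{2}$, while
\[
\frac{2(1+\cosh\xi)}{\sqrt{2+\sinh^{2}\xi}}
\approx \frac{2\cosh\xi}{\sinh\xi} \rightarrow 2
\qquad (\xi \rightarrow \infty).
\]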
Next, we consider a more general situation in which the two
particles of the spin singlet move in opposite directions in the
lab frame and the two observers for particle 1 and 2 are sitting
in the moving frame Lorentz boosted with respect to the lab frame
in the direction perpendicular to the particles' movements.
Here, we choose particles 1 and 2 to be moving in the $+z$ and $-z$
directions respectively in the lab frame, and the moving frame in
which the two observers for particles 1 and 2, Alice and Bob, are
sitting is Lorentz boosted in the $-x$ direction.
Now, the expectation value of the joint spin measurements
performed by Alice and Bob can be expressed as
\begin{eqnarray}
<\hat{a}\otimes \hat{b}> & =& \left<\Phi\left|\frac{\vec{a}\cdot
\vec{\sigma}_{\Lambda p}}{|\lambda(\vec{a}\cdot
\vec{\sigma}_{\Lambda p})|} \otimes \frac{\vec{b}\cdot
\vec{\sigma}_{\Lambda Pp}}{|\lambda(\vec{b}\cdot
\vec{\sigma}_{\Lambda Pp})|}
\right|\Phi \right> \nonumber\\
& = & \left<\Phi\left|\frac{\vec{a}_{\Lambda p}\cdot R(\vec{p}_{\Lambda})
~\vec{\sigma}}{|\lambda(\vec{a}_{\Lambda p}\cdot \vec{\sigma})|}
\otimes \frac{\vec{b}_{\Lambda Pp}\cdot R(\vec{p}_{\Lambda P})
~\vec{\sigma}}{|\lambda(\vec{b}_{\Lambda Pp}\cdot \vec{\sigma})|}
\right|\Phi \right>.
\label{ce18}
\end{eqnarray}
Here,
$|\Phi>=U(\Lambda)~|\Psi>$ where
$ |\Psi> = \frac{1}{\sqrt{2}}~(|\vec{p},\frac{1}{2}>|-\vec{p},\frac{-1}{2}>
-|\vec{p},\frac{-1}{2}>|-\vec{p},\frac{1}{2}> )$, and
$\Lambda$ is the Lorentz boost to the moving frame in which Alice and Bob are sitting.
In general, the effect of a Lorentz transformation on a state can
be expressed as \cite{sw}
\begin{equation}
U(\Lambda)~|~p~,\sigma>=\sum_{\sigma'}D_{\sigma \sigma
'}(W(\Lambda , p ))~ |~\Lambda p~ ,~ \sigma'> .
\label{LB-state}
\end{equation}
The explicit form for the singlet is given by
\begin{equation}
U(\Lambda)~|\Psi>=\cos \Omega_{p} |\Psi_{\Lambda}^{(-)}> + \sin
\Omega_{p} |\Phi_{\Lambda}^{(+)}> \label{LB-singlet}
\end{equation}
where
\[
|\Psi_{\Lambda}^{(-)}>= \frac{1}{\sqrt{2}}(|\Lambda
p,\frac{1}{2}>|\Lambda Pp,\frac{-1}{2}>-|\Lambda
p,\frac{-1}{2}>|\Lambda Pp,\frac{1}{2}> ),
\]
\[
|\Phi_{\Lambda}^{(+)}>= \frac{1}{\sqrt{2}}(|\Lambda
p,\frac{1}{2}>|\Lambda Pp,\frac{1}{2}>+~|\Lambda
p,\frac{-1}{2}>|\Lambda Pp,\frac{-1}{2}> ),
\]
and $\Omega_{p}$ is the Wigner angle due to the Lorentz boost $\Lambda
$ applied to a particle with momentum $\vec{p}$; it is given
explicitly by
\begin{equation} \tan \Omega_{p}=\frac{\sinh \xi \sinh
\chi}{\cosh\xi+\cosh\chi}, ~~ \text{with} ~~ \tanh \xi = \beta_p
~, ~ \tanh \chi = \beta_{\Lambda}. \label{ce21}
\end{equation}
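Note for later use that $\Omega_{p}$ vanishes when either $\xi$ or
$\chi$ vanishes, while for $\xi,\chi \rightarrow \infty$
\[
\tan \Omega_{p}=\frac{\sinh \xi \sinh\chi}{\cosh\xi+\cosh\chi}
\rightarrow \infty~, \qquad \text{i.e.} \qquad
\Omega_{p} \rightarrow \frac{\pi}{2}.
\]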
Here, $P$ is the space inversion operator given by
$P=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \\
\end{array}
\right)$,
and thus $Pp$ is given by
$\left(
\begin{array}{c}
\sqrt{p^{2}+m^{2}} \\
0 \\
0 \\
-p \\
\end{array}
\right)$.
The other expressions appearing above are given by
\begin{eqnarray}
\Lambda p & = & \left(
\begin{array}{cccc}
\cosh\chi & \sinh\chi & 0 & 0 \\
\sinh\chi & \cosh\chi & 0 & 0 \\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1 \\
\end{array}
\right)\left(
\begin{array}{c}
\sqrt{p^{2}+m^{2}} \\
0 \\
0 \\
p \\
\end{array}
\right) = \left(
\begin{array}{c}
\sqrt{p^{2}+m^{2}}\cosh\chi \\
\sqrt{p^{2}+m^{2}}\sinh\chi \\
0 \\
p \\
\end{array}
\right), \nonumber \\
R_{y}(\hat{p}_{\Lambda})
& = & \left(
\begin{array}{ccc}
\cos\theta_{\Lambda} & 0 & \sin\theta_{\Lambda} \\
0 & 1 & 0 \\
-\sin\theta_{\Lambda} & 0 & \cos\theta_{\Lambda} \\
\end{array}
\right), \nonumber
\\
R_{y}(\hat{p}_{\Lambda P}) & = & R_{y}(\pi-\theta_{\Lambda}),
\nonumber
\end{eqnarray}
where $ E_{p}=\sqrt{p^{2}+m^{2}},$ and
$ \tan\theta_{\Lambda}=(E_{p}\sinh\chi)/p=\frac{\sinh\chi}{\tanh\xi}.$
\\
Thus,
\begin{eqnarray}
R_{y}(\hat{p}_{\Lambda})~\vec{\sigma} &= & \left(
\begin{array}{ccc}
\cos\theta_{\Lambda} & 0 & \sin\theta_{\Lambda} \\
0 & 1 & 0 \\
-\sin\theta_{\Lambda} & 0 & \cos\theta_{\Lambda} \\
\end{array}
\right)\left(
\begin{array}{c}
\sigma_{x} \\
\sigma_{y} \\
\sigma_{z} \\
\end{array}
\right) =\left(
\begin{array}{c}
\sigma_{x}\cos\theta_{\Lambda}+\sigma_{z}\sin\theta_{\Lambda} \\
\sigma_{y} \\
-\sigma_{x}\sin\theta_{\Lambda}+\sigma_{z}\cos\theta_{\Lambda} \\
\end{array}
\right),
\end{eqnarray}
and
\begin{eqnarray}
a_{\Lambda p} & = & [R_{y}(\theta_{\Lambda})L_{z}(\eta)]^{-1}~a
\nonumber\\
& = & \left(
\begin{array}{cccc}
\cosh\eta & 0 & 0 &-\sinh\eta \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
-\sinh\eta & 0 & 0 & \cosh\eta\\
\end{array}
\right)\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & \cos\theta_{\Lambda} & 0 & -\sin\theta_{\Lambda} \\
0 & 0 & 1 & 0\\
0 & \sin\theta_{\Lambda} & 0 & \cos\theta_{\Lambda} \\
\end{array}
\right)\left(
\begin{array}{c}
0 \\
a_{x}\\
a_{y}\\
a_{z}\\
\end{array}
\right), \label{ce23}
\end{eqnarray}
where $\tanh\eta=|\vec{p}_{\Lambda}|/E_{\Lambda
p}=\frac{\sqrt{\tanh^{2}\xi+\sinh^{2}\chi}}{\cosh\chi}$.
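In particular, this gives
\[
\cosh\eta=\frac{1}{\sqrt{1-\tanh^{2}\eta}}
=\frac{\cosh\chi}{\sqrt{\cosh^{2}\chi-\tanh^{2}\xi-\sinh^{2}\chi}}
=\cosh\xi\cosh\chi~,
\]
a relation which will be used again in section IV.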
Thus the spatial part of $a_{\Lambda p}$ and its magnitude are
given by
\begin{eqnarray}
\vec{a}_{\Lambda p} & = &
(a_{x}\cos\theta_{\Lambda}-a_{z}\sin\theta_{\Lambda},a_{y},
\cosh\eta(a_{x}\sin\theta_{\Lambda}+a_{z}\cos\theta_{\Lambda})), \nonumber \\
|\vec{a}_{\Lambda p}| & = & \sqrt{1+\sinh^{2}\eta ~(a_{x}\sin
\theta_{\Lambda}+a_{z}\cos\theta_{\Lambda})^{2}}.
\end{eqnarray}
Similarly, from
$b_{\Lambda
Pp}=[R_{y}(\pi-\theta_{\Lambda})L_{z}(\eta)]^{-1}~b$,
we get
\begin{eqnarray}
\vec{b}_{\Lambda Pp} & = &
(-b_{x}\cos\theta_{\Lambda}-b_{z}\sin\theta_{\Lambda},b_{y},
\cosh\eta(b_{x}\sin\theta_{\Lambda}-b_{z}\cos\theta_{\Lambda})),
\nonumber \\
|\vec{b}_{\Lambda Pp}| & = & \sqrt{1+\sinh^{2}\eta ~(-b_{x}\sin
\theta_{\Lambda}+b_{z}\cos\theta_{\Lambda})^{2}},
\end{eqnarray}
and
\begin{equation}
R_{y}(\hat{p}_{\Lambda P})~\vec{\sigma}=\left(
\begin{array}{ccc}
-\cos\theta_{\Lambda} & 0 & \sin\theta_{\Lambda} \\
0 & 1 & 0 \\
-\sin\theta_{\Lambda} & 0 & -\cos\theta_{\Lambda} \\
\end{array}
\right)\left(
\begin{array}{c}
\sigma_{x} \\
\sigma_{y} \\
\sigma_{z} \\
\end{array}
\right) =\left(
\begin{array}{c}
-\sigma_{x}\cos\theta_{\Lambda}+\sigma_{z}\sin\theta_{\Lambda} \\
\sigma_{y} \\
-\sigma_{x}\sin\theta_{\Lambda}-\sigma_{z}\cos\theta_{\Lambda} \\
\end{array}
\right).
\end{equation}
Therefore, the tensor product of relativistic spin observables of
particle 1 and 2 for the joint spin measurement
can be expressed as
\begin{equation}
\hat{a} \otimes \hat{b}=\frac{\vec{a}_{\Lambda p}\cdot
R(\vec{p}_{\Lambda}) ~\vec{\sigma}}{|\lambda(\vec{a}_{\Lambda
p}\cdot \vec{\sigma})|} \otimes \frac{\vec{b}_{\Lambda Pp}\cdot
R(\vec{p}_{\Lambda P}) ~\vec{\sigma}}{|\lambda(\vec{b}_{\Lambda
Pp}\cdot \vec{\sigma})|}
\equiv \frac{\vec{A}\cdot \vec{\sigma}
\otimes \vec{B}\cdot \vec{\sigma}}{|\vec{a}_{\Lambda
p}||\vec{b}_{\Lambda Pp}|} \label{ce27}
\end{equation}
where
\begin{eqnarray}
\vec{A} & = & \left(
\begin{array}{c}
a_{x}(\cos^{2}\theta_{\Lambda}-\cosh\eta\sin^{2}\theta_{\Lambda})-
a_{z}(1+\cosh\eta)\sin\theta_{\Lambda}\cos\theta_{\Lambda}\\
a_{y} \\
a_{x}(1+\cosh\eta)\sin\theta_{\Lambda}\cos\theta_{\Lambda}-
a_{z}(\sin^{2}\theta_{\Lambda}-\cosh\eta\cos^{2}\theta_{\Lambda}) \\
\end{array}
\right), \nonumber\\
\vec{B} & = &
\left(
\begin{array}{c}
b_{x}(\cos^{2}\theta_{\Lambda}-\cosh\eta\sin^{2}\theta_{\Lambda})
+b_{z}(1+\cosh\eta)\sin\theta_{\Lambda}\cos\theta_{\Lambda}\\
b_{y} \\
-b_{x}(1+\cosh\eta)\sin\theta_{\Lambda}\cos\theta_{\Lambda}
-b_{z}(\sin^{2}\theta_{\Lambda}-\cosh\eta\cos^{2}\theta_{\Lambda}) \\
\end{array}
\right).
\label{ce28}
\end{eqnarray}
Using the following relations
\begin{eqnarray}
\sigma_{x} \otimes \sigma_{x}~ |\Phi> & = & -\cos
\Omega_{p}~|\Psi_{\Lambda}^{-}> + \sin
\Omega_{p}~|\Phi_{\Lambda}^{+}>, \nonumber\\
\sigma_{y} \otimes \sigma_{y}~ |\Phi> & = & -\cos
\Omega_{p}~|\Psi_{\Lambda}^{-}> - \sin
\Omega_{p}~|\Phi_{\Lambda}^{+}>, \nonumber\\
\sigma_{z} \otimes \sigma_{z}~ |\Phi> &= & -\cos
\Omega_{p}~|\Psi_{\Lambda}^{-}> + \sin
\Omega_{p}~|\Phi_{\Lambda}^{+}>, \nonumber\\
\sigma_{x} \otimes \sigma_{z}~ |\Phi> & = & -\cos
\Omega_{p}~|\Phi_{\Lambda}^{+}> - \sin
\Omega_{p}~|\Psi_{\Lambda}^{-}>, \nonumber\\
\sigma_{z} \otimes \sigma_{x}~ |\Phi> & = & \cos
\Omega_{p}~|\Phi_{\Lambda}^{+}> + \sin
\Omega_{p}~|\Psi_{\Lambda}^{-}>,
\label{ce29}
\end{eqnarray}
and since the remaining terms do not contribute to the expectation
value, we finally get the following expression for the expectation
value of the joint spin measurement for the spin singlet.
\begin{eqnarray}
& & <\hat{a} \otimes \hat{b}> \nonumber \\
& & = \frac{-1}{|\vec{a}_{\Lambda
p}||\vec{b}_{\Lambda Pp}|}[(A_{x}B_{x}+A_{z}B_{z})\cos 2
~\Omega_{p}+A_{y}B_{y}+(A_{x}B_{z}-A_{z}B_{x})\sin2~\Omega_{p}]
\label{ce30}
\end{eqnarray}
Here, we examine two limiting cases of the above formula.
\\
1) When $\chi \rightarrow 0$, $<\hat{a} \otimes \hat{b}>
\rightarrow
\frac{-1}{\sqrt{1+a_{z}^{2}\sinh^{2}\xi}\sqrt{1+b_{z}^{2}\sinh^{2}\xi}}
(a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\cosh^{2}\xi)$.
\\
2) When $\xi \rightarrow 0$,
$<\hat{a} \otimes \hat{b}>
\rightarrow
\frac{-1}{\sqrt{1+a_{x}^{2}\sinh^{2}\chi}\sqrt{1+b_{x}^{2}\sinh^{2}\chi}}
(a_{x}b_{x}\cosh^{2}\chi+a_{y}b_{y}+a_{z}b_{z})$.
\\
Notice that the second case exactly corresponds to
Czachor's set-up and yields the same result.\\
Now, let us evaluate the Bell observable for a set of measurement
vectors which yield the maximal violation of the Bell's inequality
in the non-relativistic case:
\begin{eqnarray}
\vec{a}& = & (0,\frac{1}{\sqrt{2}}~,~\frac{1}{\sqrt{2}})~,~
\vec{a'}=(0,-\frac{1}{\sqrt{2}}~,~\frac{1}{\sqrt{2}}),
\nonumber\\
\vec{b} & = & (0~,~0~,~1) ~,~~~~ \vec{b'}=(0~,~1~,~0).
\label{ce31}
\end{eqnarray}
With this set of measurement vectors, the Bell observable
$C(a,a',b,b')$ is given by
\begin{eqnarray}
C(a,a',b,b') &= & <\hat{a} \otimes \hat{b}>+<\hat{a} \otimes
\hat{b'}>+<\hat{a'} \otimes \hat{b}>-<\hat{a'} \otimes \hat{b'}>
~ \nonumber \\
& = & - \frac{2}{\sqrt{1+\sin^{2}\theta_{\Lambda}+\cosh^{2}
\eta\cos^{2}\theta_{\Lambda}}} \nonumber \\
& & - \frac{2}{\sqrt{1+\sin^{2}\theta_{\Lambda}+\cosh^{2}
\eta\cos^{2}\theta_{\Lambda}}\sqrt{\sin^{2}\theta_{\Lambda}+\cosh^{2}
\eta\cos^{2}\theta_{\Lambda}}} \nonumber \\
& &
~ \times
\{\left[(\cosh\eta\cos^{2}\theta_{\Lambda}-\sin^{2}\theta_{\Lambda})^{2}
-(1+\cosh\eta)^{2}\sin^{2}\theta_{\Lambda}
\cos^{2}\theta_{\Lambda}\right]\cos 2 \Omega_{p}
\nonumber \\
& & ~~~~~
-(1+\cosh\eta)(\cosh\eta\cos^{2}\theta_{\Lambda}
-\sin^{2}\theta_{\Lambda})\sin
2\theta_{\Lambda}\sin2\Omega_{p}\}.
\label{ce32}
\end{eqnarray}
Here also, we consider two limiting cases of the above formula.
\\
1) When $ \chi \rightarrow 0 , ~ \theta_{\Lambda} \rightarrow 0, ~
\eta \rightarrow \xi$, we get
$|C(a,a',b,b')| \rightarrow
2(1+\cosh\xi)/\sqrt{2+\sinh^{2}\xi}$.
\\
2) When $\xi \rightarrow 0 , ~ \theta_{\Lambda} \rightarrow \pi/2,
~ \eta \rightarrow \chi$, we get $|C(a,a',b,b')| \rightarrow
2\sqrt{2}$.
\\
The first case is similar to Czachor's set-up in the sense
that the observers are at rest and only the particles are moving,
in opposite directions with the same speed. The result is the same
as the one we can infer from Czachor's result. The second case
corresponds to the case in Czachor's set-up in which the spin
measurement directions are perpendicular to the particles'
movement. The result agrees with Czachor's. \\
What happens if the two particles have different velocities and do not
move in opposite directions? In order to make the discussion simple, we
consider the case in which the observer is at rest in the lab frame.
Let $p$ and $q$ be the momentum of particle 1 and 2,
respectively. In this case, the Wigner angle $\Omega_p$ is zero.
Then, the expectation value of the joint spin measurement is given
by
\begin{equation}
<\hat{a} \otimes \hat{b}>=<\Psi|\frac{\vec{a}\cdot
R(\vec{p})\vec{\sigma}}{|\lambda(\vec{a}_{p}\cdot \vec{\sigma})|}
\otimes \frac{\vec{b}\cdot
R(\vec{q})\vec{\sigma}}{|\lambda(\vec{b}_{q}\cdot \vec{\sigma})|}
|\Psi>=<\Psi|\frac{\vec{A}\cdot \vec{\sigma}}{|\vec{a}_{p}|}
\otimes \frac{\vec{B}\cdot \vec{\sigma}}{|\vec{b}_{q}|} |\Psi>
\end{equation}
where
\begin{eqnarray}
\vec{A} &= & \left(
\begin{array}{c}
a_{x}(\cos^{2}\theta_{p}-\cosh\xi_{p}\sin^{2}\theta_{p})-a_{z}(1+\cosh\xi_{p})\sin\theta_{p}\cos\theta_{p} \\
a_{y}\\
a_{x}(1+\cosh\xi_{p})\sin\theta_{p}\cos\theta_{p}-a_{z}(\sin^{2}\theta_{p}-\cosh\xi_{p}\cos^{2}\theta_{p})\\
\end{array}
\right) , \nonumber \\
\vec{B} & = & \left(
\begin{array}{c}
b_{x}(\cos^{2}\theta_{q}-\cosh\xi_{q}\sin^{2}\theta_{q})-b_{z}(1+\cosh\xi_{q})\sin\theta_{q}\cos\theta_{q} \\
b_{y}\\
b_{x}(1+\cosh\xi_{q})\sin\theta_{q}\cos\theta_{q}-b_{z}(\sin^{2}\theta_{q}-\cosh\xi_{q}\cos^{2}\theta_{q})\\
\end{array}
\right) , \label{cea1}
\end{eqnarray}
and its value becomes
\begin{equation}
<\hat{a}\otimes \hat{b}>=-\frac{\vec{A}\cdot
\vec{B}}{|\vec{a}_{p}||\vec{b}_{q}|} ~ .
\end{equation}
For the same set of measurement vectors as in (\ref{ce31}), the
Bell observable is given by
\begin{eqnarray}
C(a,a',b,b')& = & -\frac{2}{\sqrt{2+\sinh^{2}\xi_{p}\cos^{2}\theta_{p}}}\nonumber\\
& & -\frac{2}
{\sqrt{2+\sinh^{2}\xi_{p}\cos^{2}\theta_{p}}\sqrt{1+\sinh^{2}\xi_{q}\cos^{2}\theta_{q}}}
\nonumber \\
& & ~ \times
\left\{
(1+\cosh\xi_{p})(1+\cosh\xi_{q})\sin\theta_{p}\cos\theta_{p}\sin\theta_{q}\cos\theta_{q}
\right. \nonumber \\
& & ~~~~~ \left.
+(\sin^{2}{\theta_{p}}-\cosh\xi_{p}\cos^{2}\theta_{p})
(\sin^{2}{\theta_{q}}-\cosh\xi_{q}\cos^{2}\theta_{q}) \right\} .
\end{eqnarray}
One can check that this result reduces to the one from
(\ref{ce32}), if $\vec{p}$
and $\vec{q}$ are in the opposite directions with the same
magnitude.
In the next section, we shall see that we can find the corrected
vector set for the maximal violation of Bell's inequality in this
case also.
\\
\section*{IV. Corrected Bell observable for spin singlet }
In this section, we will show that by appropriately choosing the
vector set for spin measurements the maximal violation of
Bell's inequality can be achieved even in a relativistically
moving inertial frame. In the non-relativistic case, it is known
that a fully entangled state such as the spin singlet maximally
violates Bell's inequality, giving the Bell
observable the value $2 \sqrt{2} $. For the spin singlet case, the vector
set inducing the maximal violation may be chosen as
\begin{eqnarray}
\vec{a} & = & (0,\frac{1}{\sqrt{2}}~,~\frac{1}{\sqrt{2}})~,~
\vec{a'}=(0,-\frac{1}{\sqrt{2}}~,~\frac{1}{\sqrt{2}})
\nonumber\\
\vec{b}& = & (0~,~0~,~1) ~,~~~ \vec{b'}=(0~,~1~,~0).
\label{ce33}
\end{eqnarray}
In the non-relativistic case, the expectation value of the joint
spin measurement for the spin singlet is given by
\begin{equation}
<\hat{a} \otimes \hat{b}> = -\vec{a} \cdot \vec{b} \label{ev-sing}
\end{equation}
for a set of measurement vectors, $\vec{a}$ for particle 1 and
$\vec{b}$ for particle 2, where $\hat{a}=\vec{a}\cdot
\vec{\sigma},~ \hat{b}=\vec{b}\cdot \vec{\sigma}$.
However, when the motion of the particles or the observers
becomes relativistic, the above expectation value is not maintained,
as Czachor and others have shown \cite{czachor,tu,ahn}.
Following the reasoning of Terashima and Ueda \cite{tu}, here we
investigate whether we can find a set of spin measurement
directions which preserve the non-relativistic expectation value
(\ref{ev-sing}) even under relativistic situations.
In order to do this, we consider the case in which a new set of
spin measurement directions $\vec{a}_{c}, ~ \vec{b}_{c}$ in a
Lorentz boosted frame yields the relation (\ref{ev-sing})
\begin{equation}
<\hat{a}_{c} \otimes \hat{b}_{c}> = -\vec{a} \cdot \vec{b}
\label{corr-set}
\end{equation}
with the previously chosen vector set $ \vec{a}, \vec{b} $ in the
non-relativistic lab frame.
The existence of the new vector set $\vec{a}_{c}, ~ \vec{b}_{c}$
implies that in the new frame the correlation between the two
entangled particles can be seen when the measurement is performed
along these new direction vectors, not along the previously given
directions $ \vec{a}, \vec{b} $ in the lab frame.
Thus we will try to find $\vec{a}_{c}, ~ \vec{b}_{c}$ satisfying
(\ref{corr-set}), first for the simple case of Czachor and then for a more
general case.
In Czachor's set-up, in which both particles are moving in
the $+z$ direction, the relation (\ref{corr-set}) is satisfied if
\[ \frac{\vec{a}_{c}\cdot
\vec{\sigma}}{|\vec{a}_{c}|}=\vec{a}\cdot \vec{\sigma}. \]
Let us denote $\vec{a}_{c}=(a_{cx},a_{cy},a_{cz})$, then from the
relation $ ~ L_{z}(-\xi)a_{c}=a_{cp} ~, ~ \tanh \xi = \beta_p ~$,
\begin{equation}
\left(
\begin{array}{cccc}
\cosh\xi & 0 & 0 & -\sinh\xi \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
-\sinh\xi & 0 & 0 & \cosh\xi \\
\end{array}
\right)\left(
\begin{array}{c}
0 \\
a_{cx} \\
a_{cy} \\
a_{cz} \\
\end{array}
\right)=\left(
\begin{array}{c}
-a_{cz}\sinh\xi \\
a_{cx} \\
a_{cy} \\
a_{cz}\cosh\xi \\
\end{array}
\right), \label{ce36}
\end{equation}
thus we can write $\vec{a}_{cp}=(a_{cx},a_{cy},a_{cz}\cosh\xi)$,
and we get the following equation for the corrected vector
$\vec{a}_{c}$.
\begin{equation}
\frac{1}{\sqrt{1+a_{cz}^{2}\sinh^{2}\xi}}(a_{cx},a_{cy},a_{cz}\cosh\xi)=(a_{x},a_{y},a_{z})
\end{equation}
Solving the above equation, we get
\begin{equation}
a_{cx}=\frac{a_{x}}{\sqrt{1-a_{z}^{2}\tanh^{2}\xi}} , ~~
a_{cy}=\frac{a_{y}}{\sqrt{1-a_{z}^{2}\tanh^{2}\xi}}, ~~
a_{cz}=\frac{a_{z}}{\cosh\xi\sqrt{1-a_{z}^{2}\tanh^{2}\xi}}.
\label{ce38}
\end{equation}
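For instance, the $z$-component is obtained from the third component of
the condition above, $a_{cz}\cosh\xi=a_{z}\sqrt{1+a_{cz}^{2}\sinh^{2}\xi}$:
squaring gives
\[
a_{cz}^{2}\left(\cosh^{2}\xi-a_{z}^{2}\sinh^{2}\xi\right)=a_{z}^{2}
\quad \Longrightarrow \quad
a_{cz}=\frac{a_{z}}{\cosh\xi\sqrt{1-a_{z}^{2}\tanh^{2}\xi}}~,
\]
and the $x$ and $y$ components then follow from
$\sqrt{1+a_{cz}^{2}\sinh^{2}\xi}=1/\sqrt{1-a_{z}^{2}\tanh^{2}\xi}$.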
Similarly, we get the same expression for the remaining corrected
vector $\vec{b}_{c}$ of the new frame.
The expectation value
of the joint spin measurement for the corrected vector set is now
given by (\ref{ce14}),
\begin{eqnarray}
<\hat{a_{c}} \otimes \hat{b_{c}}> &
= & -\frac{(a_{cx}b_{cx}+a_{cy}b_{cy}+a_{cz}b_{cz}\cosh^{2}\xi)}
{\sqrt{1+a_{cz}^{2}\sinh^{2}\xi}\sqrt{1+b_{cz}^{2}\sinh^{2}\xi}}
\label{ce39} \\
& = & -(a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}) = -\vec{a}\cdot\vec{b},
\nonumber
\end{eqnarray}
and thus satisfies our requirement.
For the evaluation of the Bell observable, we first evaluate, by use
of (\ref{ce38}), the corrected vectors for $ \vec{a}, \vec{a}'$ given
by (\ref{ce33}), and similarly for $ \vec{b}, \vec{b}'$:
\begin{eqnarray}
\vec{a}_{c} &= &(0,\frac{1}{\sqrt{2-\tanh^{2}\xi}}~,
~\frac{1}{\cosh\xi\sqrt{2-\tanh^{2}\xi}})~,~\nonumber\\
\vec{a'}_{c} & = &
(0,\frac{-1}{\sqrt{2-\tanh^{2}\xi}}~,~\frac{1}{\cosh\xi\sqrt{2-\tanh^{2}\xi}})~,~
\nonumber\\
\vec{b}_{c}& = & (0~,~0~,~1) ~, ~~~~~ \vec{b'}_{c}=(0~,~1~,~0).
\end{eqnarray}
By use of the formula (\ref{ce39}) the Bell observable with the
above corrected vector set is now evaluated as
\begin{eqnarray}
C(a_{c},a'_{c},b_{c},b'_{c}) & = & <\hat{a}_{c}\otimes
\hat{b}_{c}>+<\hat{a}_{c}\otimes
\hat{b'}_{c}>+<\hat{a'}_{c}\otimes
\hat{b}_{c}>-<\hat{a'}_{c}\otimes \hat{b'}_{c}> \nonumber\\
& = &
-\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{2}}
=-2\sqrt{2}
\end{eqnarray}
retrieving the value of the maximal violation of the Bell's
inequality in the non-relativistic case.
\\
Next, we consider a more general case as in section III. The two
particles of the spin singlet are moving in the $+z$ and $-z$
directions respectively in the lab frame, and the observers, Alice
and Bob, are sitting in a moving frame which is Lorentz boosted
toward the $-x$ direction.
In this case, the expectation value of the joint spin measurement
for a corrected vector set is given by (\ref{ce30}),
\begin{eqnarray}
& & <\hat{a}_{c} \otimes \hat{b}_{c}> \label{ce42} \\
& & =\frac{-1}{|\vec{a}_{c\Lambda
p}||\vec{b}_{c\Lambda Pp}|}[(A_{cx}B_{cx}+A_{cz}B_{cz})\cos 2
~\Omega_{p}+A_{cy}B_{cy}+(A_{cx}B_{cz}-A_{cz}B_{cx})\sin2~\Omega_{p}],
\nonumber
\end{eqnarray}
where $\vec{A}_{c} ,\vec{B}_{c}$ correspond to $\vec{A} ,\vec{B}$
of (\ref{ce28}) in which $\vec{a} ,\vec{b}$ are replaced with
$\vec{a}_{c} ,\vec{b}_{c}$, and other notations such as $\Omega_p,
\xi, \chi, \eta, \theta_\Lambda , $ etc. are the same as in
section III.
Here again, we will get the corrected vector set of spin
measurement directions if we find $\vec{a}_{c} ,\vec{b}_{c}$ which
give the expectation value (\ref{ce42}) to be $- \vec{a} \cdot
\vec{b} $. Namely, we want to find $\vec{a}_{c} ,\vec{b}_{c}$ that
satisfy the following equation:
\begin{equation}
\frac{-1}{|\vec{a}_{c\Lambda
p}||\vec{b}_{c\Lambda Pp}|}[(A_{cx}B_{cx}+A_{cz}B_{cz})\cos 2
~\Omega_{p}+A_{cy}B_{cy}+(A_{cx}B_{cz}-A_{cz}B_{cx})\sin2~\Omega_{p}]
= - \vec{a} \cdot \vec{b}
\label{ce43}
\end{equation}
where $(\vec{a}_{c\Lambda p} ,\vec{b}_{c\Lambda Pp})$ correspond
to $(\vec{a}_{\Lambda p} ,\vec{b}_{\Lambda Pp})$ of the previous
section as above.
One can check that equation (\ref{ce43}) is satisfied if the
following relation holds
\begin{equation}
\frac{A_{ci}}{|\vec{a}_{c\Lambda p}|}=\bar{a}_i ~, ~~
\frac{B_{ci}}{|\vec{b}_{c\Lambda Pp}|}=\bar{b}_i ~~~ \text{for} ~~
i=(x,y,z),
\label{ce44}
\end{equation}
where
\begin{equation}
\bar{a}_i \equiv R_y (\Omega_p) a_i ~, ~~ \bar{b}_i \equiv R_y
(- \Omega_p) b_i ~~~ \text{with}~~
R_{y}(\Omega_{p})=\left(
\begin{array}{ccc}
\cos \Omega_{p} & 0 & \sin\Omega_{p} \\
0 & 1 & 0 \\
-\sin\Omega_{p} & 0 & \cos\Omega_{p} \\
\end{array}
\right). \label{ce45}
\end{equation}
Solving (\ref{ce44}), we obtain $\vec{a}_{c} ,\vec{b}_{c}$ in
terms of $\vec{a} ,\vec{b}$:
\begin{eqnarray}
a_{cz}&=&\frac{\overline{a}_{z}}
{\sqrt{\left[F_{a}(1+\cosh\eta)\sin\theta_{\Lambda}\cos\theta_{\Lambda}-
(\sin^{2}\theta_{\Lambda}-\cosh\eta\cos^{2}\theta_{\Lambda})\right]^{2}
-\overline{a}_{z}^{2}\sinh^{2}\eta\left(
F_{a}\sin\theta_{\Lambda}+\cos\theta_{\Lambda}\right)^{2}}},
\nonumber\\
a_{cx}&=&\overline{a}_{x}\sqrt{1+a_{cz}^{2}\sinh^{2}\eta\left(F_{a}\sin\theta_{\Lambda}
+\cos\theta_{\Lambda}\right)^{2}},
\nonumber\\
a_{cy}&=&a_{y}\sqrt{1+a_{cz}^{2}\sinh^{2}\eta\left(F_{a}\sin\theta_{\Lambda}
+\cos\theta_{\Lambda}\right)^{2}}, \label{ce46}
\\
b_{cz}
&=&\frac{\bar{b}_{z}}
{\sqrt{\left[F_{b}(1+\cosh\eta)\sin\theta_{\Lambda}\cos\theta_{\Lambda}-
(\sin^{2}\theta_{\Lambda}-\cosh\eta\cos^{2}\theta_{\Lambda})\right]^{2}
-\bar{b}_{z}^{2}\sinh^{2}\eta\left(F_{b}\sin\theta_{\Lambda}
-\cos\theta_{\Lambda}\right)^{2}}},
\nonumber\\
b_{cx}&=&\overline{b}_{x}\sqrt{1+b_{cz}^{2}\sinh^{2}\eta\left(F_{b}\sin\theta_{\Lambda}
-\cos\theta_{\Lambda}\right)^{2}},
\nonumber\\
b_{cy}&= &
b_{y}\sqrt{1+b_{cz}^{2}\sinh^{2}\eta\left(F_{b}\sin\theta_{\Lambda}
-\cos\theta_{\Lambda}\right)^{2}},
\nonumber
\end{eqnarray}
where $ \bar{a}_i ~, ~\bar{b}_i ~~\text{for}~~ i=x,y,z ~$ are
given by (\ref{ce45}), and
\begin{eqnarray}
F_{a} & =
& \frac{(1+\cosh\eta)\tan\theta_{\Lambda}-f_{a}(\tan^{2}\theta_{\Lambda}-\cosh\eta)
}{(1-\cosh\eta\tan^{2}\theta_{\Lambda})-f_{a}(1+\cosh\eta)\tan\theta_{\Lambda}},
\nonumber \\
F_{b} & =
& -\frac{(1+\cosh\eta)\tan\theta_{\Lambda}+f_{b}(\tan^{2}\theta_{\Lambda}-\cosh\eta)
}{(1-\cosh\eta\tan^{2}\theta_{\Lambda})+f_{b}(1+\cosh\eta)\tan\theta_{\Lambda}},
\nonumber \\
\text{with} & & f_{a} \equiv
\frac{\overline{a}_{x}}{\overline{a}_{z}}~,~~~
f_{b} \equiv \frac{\overline{b}_{x}}{\overline{b}_{z}}. \nonumber
\end{eqnarray}
And thus $ |\vec{a}_{c\Lambda p}|, ~ |\vec{b}_{c\Lambda Pp}| $ are
given by
\begin{eqnarray}
|\vec{a}_{c\Lambda p}|&
=&\sqrt{1+a_{cz}^{2}\sinh^{2}\eta\left(F_{a}\sin\theta_{\Lambda}
+\cos\theta_{\Lambda}\right)^{2}},
\nonumber\\
|\vec{b}_{c\Lambda Pp}|
&=&\sqrt{1+b_{cz}^{2}\sinh^{2}\eta\left(F_{b}\sin\theta_{\Lambda}
-\cos\theta_{\Lambda}\right)^{2}} . \nonumber
\end{eqnarray}
Now, we consider how the correlation due to entanglement is
changed by Lorentz boost. For the spin singlet the two spins of
particle 1 and 2 are always antiparallel in the non-relativistic
case. Thus we would like to see how the corrected vector set in
the Lorentz boosted moving frame shows the correlation between the
two spins of the entangled particles that exists in the
non-relativistic lab frame.
As expressed in (\ref{ce43}), the corrected vector set for the
spin singlet is defined to satisfy
\[
<\hat{a}_{c} \otimes \hat{b}_{c}> =-\vec{a} \cdot \vec{b}.
\]
In other words, when the two measurement directions for particle 1
and 2 are the same, $\vec{a} = \vec{b}$, in the non-relativistic
lab frame, the expectation value of the joint spin measurement
with the new directions, $( \vec{a}_{c} ~, ~ \vec{b}_{c})$, in
the Lorentz boosted moving frame should be $-1$. Here, we would
like to see what these corrected spin measurement directions are
and consider the meaning of these new directions when $\vec{a} =
\vec{b}=(0,0,1)$.
In this case, from (\ref{ce45}) $~ \bar{a}_i ~, ~ \bar{b}_i ~ ~
\text{for} ~~ i=x,y,z $ are given by
\[
\{ \bar{a}_{i} \} = ( \sin\Omega_{p}, 0 , \cos\Omega_{p} ) ~,~~
\{ \bar{b}_{i} \} = ( -\sin\Omega_{p}, 0 , \cos\Omega_{p} ) ,
\]
and thus from (\ref{ce46}) the corrected vectors are given as
follows.
\begin{eqnarray}
a_{cz}&=&\frac{\cos \Omega_{p}} {\sqrt{D_a}},
\nonumber\\
a_{cx}&=&\sin\Omega_{p}\sqrt{1+\cos^{2}\Omega_{p}\sinh^{2}\eta\left(F_{a}\sin\theta_{\Lambda}
+\cos\theta_{\Lambda}\right)^{2}},
\nonumber\\
a_{cy}&=&0 , \nonumber \\
b_{cz}
&=&\frac{\cos\Omega_{p}} {\sqrt{D_b}} ,
\nonumber\\
b_{cx}&=&-\sin\Omega_{p}\sqrt{1+\cos^{2}\Omega_{p}\sinh^{2}\eta\left(F_{a}\sin\theta_{\Lambda}
+\cos\theta_{\Lambda}\right)^{2}} ,
\nonumber\\
b_{cy}&=&0, \nonumber
\end{eqnarray}
where
\[
D_a =
\left[F_{a}(1+\cosh\eta)\sin\theta_{\Lambda}\cos\theta_{\Lambda}-
(\sin^{2}\theta_{\Lambda}-\cosh\eta\cos^{2}\theta_{\Lambda})\right]^{2}
-\cos^{2}\Omega_{p}\sinh^{2}\eta\left(
F_{a}\sin\theta_{\Lambda}+\cos\theta_{\Lambda}\right)^{2} ,
\]
\[
D_b =
\left[F_{b}(1+\cosh\eta)\sin\theta_{\Lambda}\cos\theta_{\Lambda}+
(\sin^{2}\theta_{\Lambda}-\cosh\eta\cos^{2}\theta_{\Lambda})\right]^{2}
-\cos^{2}\Omega_{p} \sinh^{2}\eta\left(
F_{a}\sin\theta_{\Lambda}+\cos\theta_{\Lambda}\right)^{2} ,
\]
and
\[
F_{a} = \frac{(1+\cosh\eta)\tan\theta_{\Lambda} -\tan
\Omega_{p}(\tan^{2}\theta_{\Lambda}-\cosh\eta)
}{(1-\cosh\eta\tan^{2}\theta_{\Lambda})-\tan
\Omega_{p}(1+\cosh\eta)\tan\theta_{\Lambda}} ,
\]
with
$ ~ \tanh \xi = \beta_p , \ \tanh \chi = \beta_{\Lambda} , \
\cosh\eta=\cosh\xi\cosh\chi , \ \tan\theta_{\Lambda}
=\frac{\sinh\chi}{\tanh\xi} , \ \tan \Omega_{p}
=\frac{\sinh\xi\sinh\chi}{\cosh\xi+\cosh\chi} ~ .$
\\
In the limit $\xi \rightarrow \infty ~,~ \chi\rightarrow \infty $, the above
result yields, after some numerical calculation,
\[F_{a}
\rightarrow 0~,~a_{cz} \rightarrow 0 , ~a_{cx} \rightarrow 1 , \]
and
\[F_{b} \rightarrow
0~,~b_{cz} \rightarrow 0 , ~b_{cx} \rightarrow -1 . \]
This result tells us that in the highly relativistic limit, when
the boost speed approaches the speed of light, both spins become
parallel rather than anti-parallel, since the two spin measurement
directions must be opposite in order to maintain the same
expectation value $-1$ for the joint spin measurement in the
moving frame. This agrees with what we expect for spin rotation
under a Lorentz boost.
\\
Finally, we consider the case that we dealt with at the end of the
last section, in which the two particles are not moving in
opposite directions. Namely, the two particles have arbitrary
momenta $p$ and $q$ and the observer is at rest in the lab frame.
Following the same argument, the corrected vector set should
satisfy the following condition.
\begin{equation}
<\hat{a_{c}} \otimes
\hat{b_{c}}>=-\frac{\vec{A}_{c}\cdot\vec{B}_{c}}{|\vec{a}_{c
p}||\vec{b}_{c q}|}=-\vec{a}\cdot \vec{b} ~ .
\end{equation}
Here, $\vec{A}_{c}, ~ \vec{B}_{c}$ are given by $\vec{A}, ~
\vec{B}$ in (\ref{cea1}) with $\vec{a}, ~ \vec{b}$ replaced with
$\vec{a}_{c}, ~ \vec{b}_{c}$, respectively.
This condition can be split into two conditions
\begin{equation}
\frac{\vec{A}_{c}}{|\vec{a}_{c p}|}=\vec{a} ,
~~ \frac{\vec{B}_{c}}{|\vec{b}_{c q}|}=\vec{b} ,
\end{equation}
and the result for $\vec{a}_{c}$ is given by
\begin{eqnarray}
a_{cz}=\frac{a_{z}}{\sqrt{D_{a}}}
\end{eqnarray}
where
\[D_{a}=[F_{a}(1+\cosh\xi_{p})\sin\theta_{p}\cos\theta_{p}-(\sin^{2}\theta_{p}-\cosh\xi_{p}\cos^{2}\theta_{p})]^{2}
-a^{2}_{z}\sinh^{2}\xi_{p}(F_{a}\sin\theta_{p}+\cos\theta_{p})^{2},
\]
\[F_{a}=\frac{(1+\cosh\xi_{p})\tan\theta_{p}-\frac{a_{x}}{a_{z}}(\tan^{2}\theta_{p}-\cosh\xi_{p})}{(1-\cosh\xi_{p}\tan^{2}\theta_{p})-
\frac{a_{x}}{a_{z}}(1+\cosh\xi_{p})\tan\theta_{p}},
\]
and
\[a_{cx}=a_{x}\sqrt{1+a^{2}_{cz}\sinh^{2}\xi_{p}(F_{a}\sin\theta_{p}+\cos\theta_{p})^{2}},
\]
\[a_{cy}=a_{y}\sqrt{1+a^{2}_{cz}\sinh^{2}\xi_{p}(F_{a}\sin\theta_{p}+\cos\theta_{p})^{2}}
.
\]
In the case of $\vec{b}_{c}$, we have $
b_{cz}=\frac{b_{z}}{\sqrt{D_{b}}}$,
where $D_{b}$ and the remaining components are obtained from the
expressions for $\vec{a}_{c}$ by replacing $\vec{a}$ and $\vec{p}$ with
$\vec{b}$ and $\vec{q}$, respectively.
One can check that this result reduces to the one from
(\ref{ce46}) with $\Omega_{p}=0, ~ \eta \rightarrow \xi, ~ \tan
\theta_{\Lambda}=0$, if $\vec{p}$
and $\vec{q}$ are in the opposite directions with the same
magnitude.
\\
\section*{V. Corrected Bell observable for the Bell states}
In this section, we will find corrected vector sets of spin
measurement directions for the remaining Bell states.
The Bell states are defined by \cite{nc}
\begin{eqnarray}
|\Phi_{p}^{(+)}> & \equiv & \frac{1}{\sqrt{2}}(|p,1/2>|-p,1/2>+|p,-1/2>|-p,-1/2>), \nonumber\\
|\Phi_{p}^{(-)}> & \equiv & \frac{1}{\sqrt{2}}(|p,1/2>|-p,1/2>-|p,-1/2>|-p,-1/2>), \nonumber\\
|\Psi_{p}^{(+)}> & \equiv &
\frac{1}{\sqrt{2}}(|p,1/2>|-p,-1/2>+|p,-1/2>|-p,1/2>),
\label{ce47} \\
|\Psi_{p}^{(-)}> & \equiv &
\frac{1}{\sqrt{2}}(|p,1/2>|-p,-1/2>-|p,-1/2>|-p,1/2>), \nonumber
\end{eqnarray}
and transform under Lorentz boost as
\begin{eqnarray}
U(\Lambda)|\Phi_{p}^{(+)}>&=&\cos\Omega_{p} ~|\Phi_{\Lambda p}^{(+)}>
- \sin\Omega_{p}~ |\Psi_{\Lambda p}^{(-)}>, \nonumber\\
U(\Lambda)|\Phi_{p}^{(-)}>&=&|\Phi_{\Lambda p}^{(-)}>, \nonumber\\
U(\Lambda)|\Psi_{p}^{(+)}>&=&|\Psi_{\Lambda p}^{(+)}>, \label{ce48} \\
U(\Lambda)|\Psi_{p}^{(-)}>&=&\cos\Omega_{p}~|\Psi_{\Lambda
p}^{(-)}>+\sin\Omega_{p}~|\Phi_{\Lambda p}^{(+)}>, \nonumber
\end{eqnarray}
where $\Omega_{p}$ is the Wigner angle due to the Lorentz boost
$\Lambda $ performed to a particle with momentum $\vec{p}$ and is
given by (\ref{ce21}).
In the non-relativistic case, the expectation values of the joint
spin measurement for the Bell states are given by
\begin{eqnarray}
<\Phi_{p}^{(+)}| ~\vec{a}\cdot\vec{\sigma} \otimes
\vec{b}\cdot\vec{\sigma}~| \Phi_{p}^{(+)}>
& = & a_{x}b_{x}-a_{y}b_{y}+a_{z}b_{z}, \nonumber\\
<\Phi_{p}^{(-)}| ~\vec{a}\cdot\vec{\sigma} \otimes
\vec{b}\cdot\vec{\sigma}~| \Phi_{p}^{(-)}>
& = & a_{x}b_{x}-a_{y}b_{y}+a_{z}b_{z}, \nonumber\\
<\Psi_{p}^{(+)}| ~\vec{a}\cdot\vec{\sigma} \otimes
\vec{b}\cdot\vec{\sigma}~| \Psi_{p}^{(+)}>
& = & a_{x}b_{x}+a_{y}b_{y}-a_{z}b_{z}, \label{ce49} \\
<\Psi_{p}^{(-)}| ~\vec{a}\cdot\vec{\sigma} \otimes
\vec{b}\cdot\vec{\sigma}~| \Psi_{p}^{(-)}>
& = & -a_{x}b_{x}-a_{y}b_{y}-a_{z}b_{z} ~. \nonumber
\end{eqnarray}
In the relativistic case, we consider the general situation
studied in the previous sections, in which particles 1 and 2 are
moving in the $+z$ and $-z$ directions respectively in the lab
frame, and the two observers for particles 1 and 2, Alice and Bob,
are Lorentz boosted in the $-x$ direction. The expectation values
of the joint spin measurement are then given by use of the formula
(\ref{ce27}):
\begin{eqnarray}
<\tilde{\Phi}^{(+)}| \hat{a} \otimes \hat{b}| \tilde{\Phi}^{(+)}>
&=&\frac{1}{|\vec{a}_{\Lambda p}||\vec{b}_{\Lambda
Pp}|}[(A_{x}B_{x}+A_{z}B_{z})\cos 2 \Omega_{p} -A_{y}B_{y}
+(A_{x}B_{z}-A_{z}B_{x})\sin 2 \Omega_{p}],
\nonumber\\
<\tilde{\Phi}^{(-)}|\hat{a} \otimes \hat{b} | \tilde{\Phi}^{(-)}>
&=&\frac{1}{|\vec{a}_{\Lambda p}||\vec{b}_{\Lambda
Pp}|}[A_{x}B_{x}-A_{y}B_{y}+A_{z}B_{z}], \nonumber\\
<\tilde{\Psi}^{(+)}| \hat{a} \otimes \hat{b}| \tilde{\Psi}^{(+)}>
&=&\frac{1}{|\vec{a}_{\Lambda p}||\vec{b}_{\Lambda
Pp}|}[A_{x}B_{x}+A_{y}B_{y}-A_{z}B_{z}], \label{ce50} \\
<\tilde{\Psi}^{(-)}| \hat{a} \otimes \hat{b}| \tilde{\Psi}^{(-)}>
&=&\frac{-1}{|\vec{a}_{\Lambda p}||\vec{b}_{\Lambda
Pp}|}\left[(A_{x}B_{x}+A_{z}B_{z})\cos 2 \Omega_{p} +A_{y}B_{y}
+(A_{x}B_{z}-A_{z}B_{x})\sin 2 \Omega_{p}\right], \nonumber
\end{eqnarray}
where $A_i ~, ~ B_i ~~\text{for} ~~ i=x,y,z~$ are given by
(\ref{ce28}), and we denoted the transformed Bell states as
\begin{eqnarray}
|\tilde{\Phi}^{(+)}> & = & U(\Lambda)|\Phi_{p}^{(+)}> ~, ~~
|\tilde{\Phi}^{(-)}>=U(\Lambda)|\Phi_{p}^{(-)}> , \nonumber \\
|\tilde{\Psi}^{(+)}> & = & U(\Lambda)|\Psi_{p}^{(+)}> ~, ~~
|\tilde{\Psi}^{(-)}> =U(\Lambda)|\Psi_{p}^{(-)}> .\nonumber
\end{eqnarray}
Here, we directly consider the corrected vector sets for maximal
violation of Bell's inequality, as we have done in the singlet
case. First we notice that for the states $ \{ |
\tilde{\Phi}^{(+)}> ~, ~ | \tilde{\Psi}^{(-)}>\}$ the corrected
vector set $(\vec{a}_{c}~, ~ \vec{b}_{c})$ should satisfy
\begin{eqnarray}
\frac{\vec{A}_{c}}{|\vec{a}_{c\Lambda p}|}&=& R_{y}(\Omega_{p})\vec{a}, \nonumber \\
\frac{\vec{B}_{c}}{|\vec{b}_{c\Lambda
Pp}|}&=&R_{y}(-\Omega_{p})\vec{b}, \label{ce51}
\end{eqnarray}
in order to give the same expectation values of the
non-relativistic case such that
\begin{eqnarray}
<\hat{a}_{c} \otimes \hat{b}_{c}>
&=&a_{x}b_{x}-a_{y}b_{y}+a_{z}b_{z} ~~\text{for} ~~ | \tilde{\Phi}^{(+)}> , \nonumber \\
<\hat{a}_{c} \otimes \hat{b}_{c}>
&=&-a_{x}b_{x}-a_{y}b_{y}-a_{z}b_{z} ~~\text{for} ~~|
\tilde{\Psi}^{(-)}>. \label{ce52}
\end{eqnarray}
Next, for the states $ \{ | \tilde{\Phi}^{(-)}> , |
\tilde{\Psi}^{(+)}> \}$ the corrected vector set
$(\vec{a}_{c},\vec{b}_{c})$ should satisfy
\begin{eqnarray}
\frac{\vec{A}_{c}}{|\vec{a}_{c\Lambda p}|}&=& \vec{a}, \nonumber \\
\frac{\vec{B}_{c}}{|\vec{b}_{c\Lambda Pp}|}&=&\vec{b},
\label{ce53}
\end{eqnarray}
such that
\begin{eqnarray}
<\hat{a}_{c} \otimes \hat{b}_{c}>
&=&a_{x}b_{x}-a_{y}b_{y}+a_{z}b_{z} ~ , ~~ \text{for}~~ | \tilde{\Phi}^{(-)}> , \nonumber \\
<\hat{a}_{c} \otimes \hat{b}_{c}>
&=&a_{x}b_{x}+a_{y}b_{y}-a_{z}b_{z} ~, ~~ \text{for}~~
|\tilde{\Psi}^{(+)}> . \label{ce54}
\end{eqnarray}
Here, $(\vec{A}_{c},~ \vec{B}_{c})$ are obtained from (\ref{ce28}) with
$(\vec{a}_{c},~ \vec{b}_{c})$ in place of $(\vec{a}, ~ \vec{b})$.
In this manner, we can find the corrected vector sets for the
other Bell states, once the original vector sets which induce the
maximal violation of Bell's inequality for each Bell state in
the non-relativistic case are given.
Namely, once we find the solutions of the equations (\ref{ce51})
and (\ref{ce53}), we have the corrected vector sets for the joint
spin measurement which induce the maximal violation of
Bell's inequality for the remaining Bell states.
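For instance, for the state $|\Phi_{p}^{(+)}>$ an original vector set
which gives the maximal value $2\sqrt{2}$ of the Bell observable in the
non-relativistic case, by (\ref{ce49}), is
\[
\vec{a}=(0~,~0~,~1)~,~~ \vec{a'}=(1~,~0~,~0)~,~~
\vec{b}=(\frac{1}{\sqrt{2}}~,~0~,~\frac{1}{\sqrt{2}})~,~~
\vec{b'}=(-\frac{1}{\sqrt{2}}~,~0~,~\frac{1}{\sqrt{2}}),
\]
for which each of the four correlations in (\ref{bell-obs}) equals
$\pm\frac{1}{\sqrt{2}}$ with the signs adding constructively.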
\\
\section*{VI. Conclusion}
In this paper, we show that by appropriate rotations of the
directions of spin measurement one can achieve the maximal
violation of Bell's inequality even in a relativistically
moving frame, if the state is fully entangled in the non-relativistic
lab frame.
In order to do this, we first define the relativistic spin
observable which we use for the spin measurement in an arbitrary
Lorentz boosted inertial frame. With this relativistic spin
observable we evaluate the expectation values of the joint spin
measurement for the Bell states in a Lorentz boosted frame.
In the spin singlet case, the expectation value evaluated in a lab
frame in which the two particles are moving in the same direction
agrees exactly with Czachor's result.
To measure the degree of violation of Bell's inequality, we
then evaluate the so-called Bell observable for the Bell states.
The degree of violation decreases under a Lorentz boost. However,
this is the case only when one evaluates the Bell observable with the
same spin measurement directions as in the non-relativistic lab
frame.
In fact, we show that Bell's inequality is still
maximally violated in a Lorentz boosted frame if we properly
choose a new set of spin measurement directions. We show this,
following the reasoning of Terashima and Ueda \cite{tu}, for all
the Bell states.
In the non-relativistic case, maximal violation of Bell's
inequality implies full entanglement of a given state. Thus we may
infer that the restoration of maximal violation of Bell's
inequality in a Lorentz boosted frame indicates the preservation
of the entanglement information in a certain form even under a
Lorentz boost.
We check this idea by investigating how the EPR correlation of the
spin singlet, whose spins are up and down in the $z$-direction,
changes under an ultra-relativistic Lorentz boost, approaching the
speed of light, in the $x$-direction.
As we discussed in section IV, the new spin measurement directions
which give the maximal violation of Bell's inequality, i.e.,
which preserve the expectation value of the joint spin
measurement, become the $+x$ and $-x$ directions when the original
directions for the joint spin measurement in the lab frame are
both in the $+z$ direction.
Namely, the perfect anti-correlation of the spin singlet becomes
perfect correlation under an ultra-relativistic Lorentz boost
perpendicular to the original spin directions.
However, one can also check that for the Bell states which are
unchanged under the Lorentz boost, such as $| \Phi_{p}^{(-)} > ~, ~
|\Psi_{p}^{(+)} > $ in (\ref{ce47}), the form of the correlation does
not change.
We thus conclude that the entanglement information is preserved as
a form of correlation information determined by the transformation
characteristic of the Bell state in use.
Finally, we would like to note that even though the Lorentz
boost is not unitary, our relativistic spin observable (\ref{ce7})
in section II is determined by the rotation from the $z$-axis to
the direction of the particle's momentum $\vec{p}$. From this, and
from the fact that the spin state is transformed by the Wigner rotation,
which is unitary, we can infer that the restoration of the maximal
violation of Bell's inequality by adjusting the measurement axes
might be expected from the unitarity of the rotation. This is in a
sense similar to the cases of Refs. \cite{am,tu}, where the
unitarity of the spin state transformation given by the Wigner
rotation ensures the same result, since their spin observables are
not changed.
\\
\noindent
{\Large \bf Acknowledgments}
\noindent We would like to thank B.H. Lee for his comment on our
calculational error in section II. This work was supported in part
by Korea
Research Foundation, Grant No. KRF-2002-042-C00010. \\
\end{document}
\begin{document}
\title{Orbifold Cohomology as Periodic Cyclic Homology}
\author{Vladimir Baranovsky}
\date{June 22, 2002}
\maketitle
\begin{abstract}
It is known from the work of Feigin-Tsygan, Weibel and Keller that the
cohomology groups of a smooth complex variety $X$ can be recovered
from (roughly speaking) its derived category of coherent
sheaves. In this paper we show that for a finite group $G$ acting
on $X$ the same procedure applied to $G$-equivariant sheaves
gives the orbifold cohomology of $X/G$.
As an application, in some cases we are able to obtain simple
proofs of an additive isomorphism between the
orbifold cohomology of $X/G$ and the usual cohomology of its
crepant resolution (the equality of Euler and Hodge numbers
was obtained earlier by various authors). We also state some
conjectures on the product
structures and on the singular case, and describe a connection with
recent work of Kawamata.
\end{abstract}
\section{Introduction}
Let $X$ be a smooth variety over a field $k$ (for simplicity we
assume in this introduction that $k = \mathbb{C}$) and $G$ be a
finite group acting on $X$. If the quotient variety $X/G$ is Gorenstein
(i.e. the canonical class is a Cartier divisor) and $\pi: Y \to X/G$
is a crepant resolution of singularities then Ruan's Cohomological
Crepant Resolution Conjecture (which we call Cohomological Conjecture
for short) states, as a particular case, that the cohomology groups
$H^*(Y, \mathbb{C})$ should be isomorphic to the orbifold cohomology
$$
H^*_{orb}(X/G, \mathbb{C}) = \Big( \bigoplus_{g \in G} H^*(X^g, \mathbb{C})
\Big)_G
$$
where $(\ldots)_G$ denotes the coinvariants, $X^g$ is the fixed
point set, and $G$ acts on the above direct sum by conjugating $g$.
This definition of $H^*_{orb}(X, \mathbb{C})$ is slightly different
from the usual one, see Section 3 of \cite{Ru}, but equivalent to it.
Moreover, Ruan has introduced product structures on
$H^*(Y, \mathbb{C})$ (a deformation of the usual product using the
rational
curves contracted by $\pi$) and $H^*_{orb}(X/G, \mathbb{C})$,
\textit{loc. cit.},
and the Cohomological Conjecture states that $H^*(Y, \mathbb{C})
\simeq H^*_{orb}(X/G, \mathbb{C})$ is actually a ring isomorphism.
On the level of Betti (or Hodge) numbers this conjecture was recently
proved by Lupercio-Poddar and Yasuda, see \cite{Y}. However, with the
approach
used in the proof (motivic integration) it is not clear how to
identify the actual cohomology groups with their product structures.
In this paper we will try to outline a different approach to the
Cohomological
Crepant Resolution Conjecture and show that it is in fact a consequence
of a Categorical Resolution Conjecture stated (in a form and under
a name slightly different from ours) by Kawamata in \cite{Ka}. We hope
that the categorical approach will make it possible to interpret
the product structures. Besides, the author believes that it would
be very important to establish a link between the
categorical and the motivic integration
approaches, relating the derived category of sheaves to the space
of arcs (or possibly the space of formal loops of Kapranov-Vasserot).
For simplicity we only work with global quotients $X/G$ but
all statements can be made (and, hopefully, proved)
for general smooth Deligne-Mumford stacks and categories of
sheaves twisted by a gerbe.
In more detail: we would like to show that the above Cohomological Conjecture
follows from an equivalence of two derived categories: the bounded
derived category $D^b(Y)$ of coherent sheaves on $Y$ and the
bounded derived category $D^b_G(X)$ of $G$-equivariant sheaves on $X$
(i.e. sheaves on the quotient stack $[X/G]$).
Thus, a possible proof of the Cohomological Conjecture could consist
of three steps:
(1) Prove an equivalence of derived categories $D^b(Y) \to D^b_G(X)$
(Categorical Resolution Conjecture - see end of Section 5).
(2) Recover an isomorphism of cohomology groups
$H^*(Y, \mathbb{C}) \to H^*_{orb}(X/G, \mathbb{C})$ from the above
equivalence.
(3) Identify the two product structures.
\noindent
In this paper we mostly deal with the second step. As for the
other two, we note that (1) is known in some cases
due to the work of Bridgeland-King-Reid and
Kawamata (see \cite{BKR}, \cite{Ka} and Section 4 of this paper);
while the orbifold product in (3)
seems to arise from the convolution product of sheaves
(see Section 5).
To deal with (2) one needs a construction
which recovers the (orbifold) cohomology ring from the (equivariant)
derived category. In a sense, we are using some additional structure:
the derived category should ``remember'' that it was obtained
as a quotient of two DG-categories forming a \textit{localization pair},
see Section 2.4 of \cite{K1}. However, it follows from a result of Orlov
\cite{Or} that a fully faithful exact functor between the (non-equivariant)
derived
categories automatically preserves this additional structure; and
the same holds for equivalences of equivariant
derived categories (in the case of a finite group action).
In the non-equivariant case a construction recovering the cohomology groups
follows from the work of Feigin-Tsygan, Weibel and Keller. In fact,
for any exact category $\mathcal{A}$, such as the category $Vect(Y)$ of vector
bundles on $Y$ or the category $Vect_G(X)$ of $G$-equivariant vector bundles on
$X$, Keller constructs in \cite{K1} a \textit{mixed complex} $C(\mathcal{A})$ which
leads to a family of homology theories $HC_{\bullet}(\mathcal{A}, W)$
depending on a graded $k[u]$-module $W$ (usually taken to be of finite
projective dimension). We ``recall'' the relevant definitions in Section 2.
Two properties make this construction very attractive in our setup:
\begin{itemize}
\item When $\mathcal{A}$ is the category of vector bundles on $Y$, $k =
\mathbb{C}$ and $W=k[u, u^{-1}]$, the above homology group
$HC_0(\mathcal{A}, W)$ (resp. $HC_1(\mathcal{A}, W)$) can be identified
with $H^{even}(Y, \mathbb{C})$ (resp. $H^{odd}(Y, \mathbb{C})$). Note that
multiplication by $u$ gives an isomorphism $HC_i(\mathcal{A}, W) \simeq
HC_{i+2}(\mathcal{A}, W)$.
\item For any $W$ the homology theory $HC_{\bullet}(\mathcal{A}, W)$ is
invariant with respect to equivalences of derived categories
coming from functors between localization pairs.
\end{itemize}
\noindent
To formulate our results, recall that $G$ acts on $\coprod_{g \in G} X^g$:
$h \in G$ sends $x \in X^g$ to $hx \in X^{hgh^{-1}}$.
This action is inherited by $\bigoplus_{g \in G} HC_{\bullet}
(Vect(X^g), W)$.
\begin{theorem} \label{main} Let $G$ be a finite group acting on a smooth
quasiprojective variety $X$ over a field $k$ of characteristic not
dividing
$|G|$. For any graded $k[u]$-module $W$ of finite projective dimension
there exists an isomorphism functorial with respect to pullbacks
under $G$-equivariant maps:
$$
\psi_X: HC_{\bullet} (Vect_G(X), W) \simeq
\Big(\bigoplus_{g \in G} HC_{\bullet}(Vect(X^g), W)\Big)_G
$$
where $(\ldots)_G$ denotes the coinvariants.
\end{theorem}
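As a simple illustration, let $X$ be a point and $W = k[u]/uk[u]$, so that
$HC_{\bullet}(-, W)$ is Hochschild homology. Then $Vect_G(X)$ is the
category of finite dimensional representations of $G$; its Hochschild
homology agrees with that of the group algebra $k[G]$ and, since $k[G]$
is semisimple, it is concentrated in degree zero, where it equals the
space of class functions on $G$. On the other hand,
$$
\Big(\bigoplus_{g \in G} HC_{0}(Vect(pt), W)\Big)_G =
\Big(\bigoplus_{g \in G} k \Big)_G
$$
is spanned by the conjugacy classes of $G$, in agreement with the
theorem.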
In the $C^{\infty}$-manifold or $C^{\infty}$-etale groupoid setting
this result (formulated in terms of modules over smooth functions rather
than categories) has a long history. First Feigin-Tsygan, see \cite{FT},
constructed a spectral sequence computing cyclic homology of a general crossed
product algebra, which was later reformulated by Getzler-Jones, see
\cite{GJ2}; and also \cite{AK} for more general crossed products by Hopf
algebras. When the crossed product algebra comes from functions
on a smooth manifold the $E_2$ term of this spectral sequence can be
interpreted in terms of fixed point submanifolds: this result was
announced in \cite{Br} and the first published proof appears in \cite{BN}.
Later it was generalized in \cite{Cr}.
The case when $G$ is a Lie group was studied by Nistor in \cite{N}; later
Block and Getzler have related the corresponding crossed product cyclic
homology groups to equivariant differential forms and fixed points, see
\cite{BG}. We also mention a
closely related computation of $G$-equivariant topological $K$-theory
by G. Segal, see \cite{HH}; and its algebraic $K$-theory counterparts
\cite{V}, \cite{To}.
In Section 3 we adapt the proofs in \cite{BG} and \cite{GJ2}
to fit our case of categories and rings of regular functions. Note that
the proof of \cite{V} cannot be applied in our case due to the failure of
devissage, see Example 1.11 in \cite{K1}.
In those cases when the derived equivalence $D^b(Y) \to D^b_G(X)$ is known,
we get an isomorphism between $HC_\bullet(Vect(Y), W)$ and the right
hand side expression in Theorem \ref{main}, obtaining a
slightly generalized version of the Cohomological Conjecture (in general
the cyclic homology groups \textit{do not} satisfy the long exact
sequence, hence even
on the level of dimensions the equality cannot be derived using
motivic measures and motivic integration). This is the second main
result of this paper (see Corollary \ref{last}).
We expect that, in order to identify the product structures
in step (3) above, one should
modify $\psi_X$ to make it compatible with pushforwards under
$G$-equivariant closed embeddings, rather than pullbacks
(compare with Lemmas 4.2 and 4.3 in \cite{V}).
We note here that for a general algebraic group $G$ the equivariant derived
category $D^b_G(X)$ is defined \textit{not} by taking complexes
of $G$-equivariant sheaves but by a more delicate localization procedure,
see \cite{BL}. One can expect that the corresponding
cyclic homology groups satisfy nice properties, for example
similar to those proved in \cite{BG}.
This paper is organized as follows. Section 2 gives some basic
information of cyclic homology groups of exact categories. In Section 3
we prove Theorem \ref{main}. In Section 4 we show how an equivalence
of derived categories implies equality of (orbifold) cohomology groups
and also give some examples in which this equivalence is known.
Finally, in Section 5 we give a conjecture about the singular case
and a conjecture on how the orbifold cohomology product can be recovered
from the convolution product in the derived category.
\noindent
\textbf{Acknowledgements.} The present work was motivated by a lecture
of Y. Ruan on orbifold cohomology given by him at Caltech;
and the two beautiful papers by B. Keller
\cite{K1}, \cite{K2} on cyclic homology of exact categories. The author is
grateful to both of them for providing this inspiration.
\section{Generalities on Mixed Complexes}
Recall (cf. \cite{W1}) that a \emph{mixed complex} over a
commutative ring $k$ is a sequence of $k$-modules
$\{C_m: m \geq 0\}$ with two families of morphisms $b: C_m \to C_{m-1}$
and $B: C_m \to C_{m+1}$ satisfying
$b^2 = B^2 = Bb + bB = 0$. To any such mixed complex
one can apply the following formalism (see \cite{GJ1}): let $W$ be
a graded module over the polynomial ring $k[u]$, where $\deg(u) = -2$
(in practice it is always assumed that $W$ has finite homological
dimension). Then one can form a complex $C[[u]]
\otimes_{k[u]} W$ with a differential $b + u B$ and compute its
homology groups, to be denoted by $HC_{\bullet}(C, W)$. The
following are important examples:
\begin{itemize}
\item $W = k[u]/uk[u]$ gives the Hochschild homology $HH_{\bullet}
(C)$
\item $W = k [u, u^{-1}]/u k[u]$ gives cyclic homology $HC_{\bullet}
(C)$
\item $W = k[u, u^{-1}]$ gives periodic cyclic homology $HP_{\bullet}
(C)$
\item $W = k[u]$ gives negative cyclic homology $HN_{\bullet}
(C)$ (sometimes also denoted by $HC^-_{\bullet} (C)$).
\end{itemize}
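For instance, in the first case $W = k[u]/uk[u] \cong k$, so that
$$
C[[u]] \otimes_{k[u]} W \simeq C, \qquad b + uB \equiv b \ \ (\mathrm{mod}\ u),
$$
and one indeed recovers the homology of the underlying complex $(C, b)$,
i.e. the Hochschild homology.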
\noindent
The following lemma shows that for some purposes it suffices
to consider only the first case.
\begin{lemma} \label{mix-der}
Let $f: (C, b, B) \to (C', b', B')$ be a map of mixed
complexes such that $f$ induces an isomorphism $H(C, b) \to H(C', b')$.
Then for any coefficients $W$ of finite projective dimension
over $k[u]$,
$$
f: \qquad H_{\bullet} (C[[u]] \otimes_{k[u]} W, b + u B) \to
H_{\bullet}(C'[[u]] \otimes_{k[u]} W, b' + u B')
$$
is an isomorphism.
\end{lemma}
\noindent
\textit{Proof.} See Proposition 2.4 in \cite{GJ1}.
\noindent
This lemma justifies the following point of view on mixed complexes,
see \cite{K1}, Section 1.2. Let $\Lambda$ be the DG-algebra
generated by an
indeterminate $\varepsilon$ of chain degree 1 with $\varepsilon^2 = 0$
and $d \varepsilon = 0$. Then a mixed complex may be identified
with a left $\Lambda$-module whose underlying DG $k$-module is $(C, b)$
and where $\varepsilon$ acts by $B$. Moreover, if we are interested
only in the resulting homology groups (as is the case in this paper),
we can view a mixed complex as an object in the derived category
of the DG algebra $\Lambda$.
In what follows we will need a definition of the \textit{mapping cone}
over a map $f: C \to C'$ of mixed complexes. It is given by the
mixed complex
$$
\bigg(C' \oplus C[1], \bigg[
\begin{array}{cc} b_{C'} & f \\ 0 & - b_C \end{array}
\bigg], \bigg[
\begin{array}{cc} B_{C'} & 0 \\ 0 & - B_{C} \end{array}
\bigg] \bigg)
$$
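A quick check (ours) that this is again a mixed complex: the square of the first matrix is
$$
\bigg[
\begin{array}{cc} b_{C'}^2 & b_{C'} f - f b_C \\ 0 & b_C^2 \end{array}
\bigg],
$$
and the anticommutator of the two matrices has the single off-diagonal entry
$B_{C'} f - f B_C$; hence the identities $b^2 = B^2 = Bb + bB = 0$ for the cone hold
precisely because $f$ commutes with both $b$ and $B$.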
Now we briefly recall Keller's construction of the mixed complex
$C(\mathcal{A})$ of an exact category $\mathcal{A}$ over a field $k$
(actually in \cite{K1} the complex is defined for any commutative
ring $k$ but the general definition is somewhat more involved).
Starting from $\mathcal{A}$ one can construct its category
$\mathcal{C}^b \mathcal{A}$ of all bounded complexes over $\mathcal{A}$
and the category $\mathcal{A}c^b \mathcal{A}$ of bounded acyclic
complexes over $\mathcal{A}$. Both $\mathcal{C}^b\mathcal{A}$ and
$\mathcal{A}c^b \mathcal{A}$ are \textit{DG categories}, i.e. for any
pair of objects $X$, $Y$ the group $Hom(X, Y)$ is $\mathbb{Z}$-graded
(by degree of a map) with a differential, which satisfies some natural
axioms (see \cite{K3}).
For any small DG category $\mathcal{B}$ over a field $k$ Keller
constructs a mixed complex as follows. Denote for
notational convenience $Hom_{\mathcal{B}}(X, Y)$ by
$(X \to Y)$ and consider for each $n \in \mathbb{N}$ a vector space
$$
C_n (\mathcal{B})= \bigoplus (B_0 \to B_1) \otimes (B_1 \to B_2)
\otimes \ldots \otimes (B_{n-1} \to B_{n}) \otimes (B_n \to B_0)
$$
where the sum runs over all sequences $B_0, \ldots, B_n$ of objects
of $\mathcal{B}$. The face maps
$$
d_i (f_0, \ldots, f_i, f_{i+1}, \ldots, f_n) = \left\{
\begin{array}{ll}
(f_0, \ldots, f_{i} f_{i+1}, \ldots, f_n) & \textrm{if } 0 \leq i \leq n-1 \\
(-1)^{n+ \sigma} (f_n f_0, \ldots, f_{n-1}) & \textrm{if } i = n
\end{array} \right.
$$
(where $\sigma = \deg f_n \cdot (\deg f_0 + \ldots + \deg f_{n-1})$);
together with the degeneracy maps
$$
s_i (f_0, \ldots, f_i, f_{i+1}, \ldots, f_n) =
(f_0, \ldots, f_{i}, id_{B_i}, f_{i+1}, \ldots, f_n) \qquad i = 0, \ldots, n
$$
and the cyclic operator
$$
t_n (f_0, \ldots, f_n) = (-1)^{n + \sigma} (f_n, f_0, f_1,
\ldots, f_{n-1}),
$$
define a mixed complex $(C(\mathcal{B}), b, (1 - t) s N)$ as in \cite{GJ2},
Section 2. Note that, unlike in \cite{K1}, we write $fg$ for a
composition of $f:A \to B$ and $g:B \to C$ instead of the usual
$gf$. This non-traditional notation will allow us later to match
our formulas with those of Getzler-Jones in \cite{GJ2}.
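For example (a standard special case, and the one that reappears in Step 3 below): if
$\mathcal{B}$ has a single object whose endomorphism algebra is a $k$-algebra $A$
concentrated in degree $0$, then $C_n(\mathcal{B}) = A^{\otimes n+1}$, all the signs
$\sigma$ vanish, and $b = \sum_{i=0}^n (-1)^i d_i$ is the usual Hochschild boundary, so
one recovers the standard mixed complex of the algebra $A$ (up to the composition
convention just described).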
Next, for any DG subcategory $\mathcal{C} \subset \mathcal{B}$ one has
a mixed complex
$$
C(\mathcal{C}, \mathcal{B}) = Cone (C(\mathcal{C}) \to C(\mathcal{B})).
$$
We will always use this definition when $(\mathcal{C}, \mathcal{B})$
is a \emph{localization pair} in the sense of \cite{K1}, Section 2.4.
Every such localization pair leads to a triangulated
category $\mathcal{T}= \mathcal{B}/\mathcal{C}$ associated to it.
For example, the pair $(\mathcal{A}c^b \mathcal{A}, \mathcal{C}^b \mathcal{A})$
gives rise to the derived category $\mathcal{T}$ of $\mathcal{A}$. One of
the main properties of the mixed complex $C(\mathcal{C}, \mathcal{B})$
is expressed in the following proposition (see Theorem 2.4(a) in \cite{K1}):
\begin{prop}
\label{invariance}
Let $(\mathcal{C}, \mathcal{B})$ and $(\mathcal{C}', \mathcal{B}')$ be two
localization pairs and $\mathcal{T}$, $\mathcal{T}'$ their derived
categories.
If $F: \mathcal{B} \to \mathcal{B}'$ is an exact
functor which takes $\mathcal{C}$ to $\mathcal{C}'$ and induces an
equivalence up to factors $\mathcal{T} \to \mathcal{T}'$ (cf. 1.5 in
\cite{K1}), then $F$ induces an isomorphism $C(\mathcal{C}, \mathcal{B}) \to
C(\mathcal{C}', \mathcal{B}')$ in the derived category of $\Lambda$.
\end{prop}
If $\mathcal{A}$ is an exact category then its mixed
complex is defined as
$C(\mathcal{A}c^b \mathcal{A}, \mathcal{C}^b \mathcal{A})$.
We will write $\mathcal{A}c^b(X)$ (resp. $\mathcal{C}^b(X)$) instead
of $\mathcal{A}c^b Vect(X)$ (resp. $\mathcal{C}^b Vect(X))$ and
similarly for $\mathcal{A}c^b_G(X)$ and $\mathcal{C}^b_G(X)$.
Define $C(X) = C(\mathcal{A}c^b(X), \mathcal{C}^b(X))$; $C_G(X) =
C(\mathcal{A}c^b_G(X), \mathcal{C}^b_G(X))$.
\section{Mixed Complex for a Finite Group Action}
In this section we will prove Theorem \ref{main}. In view of Lemma
\ref{mix-der} and the subsequent remarks, it suffices to prove
the following proposition:
\begin{prop}
\label{actual}
Let $G$ be a finite group acting on a smooth
quasiprojective variety $X$ over a field $k$ of characteristic not dividing
$|G|$. There exists a quasiisomorphism in the derived category of $\Lambda$:
$$
\psi_X: C_G(X) \to
\Big(\bigoplus_{g \in G} C(X^g)\Big)_G
$$
which is functorial with respect to pullbacks under $G$-equivariant
maps of smooth varieties $f: Y \to X$.
\end{prop}
\noindent
The proof of the above proposition will be carried out in several
steps: first we replace $Vect_G(X)$ by a categorical analogue of
a crossed product ring $A \rtimes G$ and construct a functorial map
of objects in the derived category (Steps 1 and 2). Once this map is
constructed, we use Mayer-Vietoris, Luna's Fundamental Lemma and etale
descent of Weibel-Geller to reduce to the case when $X$ is a vector
space with a linear action of $G$; and conclude the argument using
exactness of a Koszul complex (Steps 3-6). Our proof is an adaptation of
the argument in \cite{BG} from the differentiable to the algebraic situation.
\noindent
\label{step1}
\textit{Step 1: Change of categories.}
\noindent
Note that the action of $G$ on $X$ is inherited by the categories
$Vect_G(X)$, $\mathcal{C}^b(X)$ and $\mathcal{A}c^b(X)$:
for any $g \in G$ and any bundle (or a complex of bundles) $\mathcal{F}$
we can consider $g\mathcal{F} := (g^{-1})^* \mathcal{F}$ and
for any $\psi: \mathcal{F} \to \mathcal{G}$ we have $g(\psi): g\mathcal{F} \to g\mathcal{G}$.
For any $\mathcal{F}$ consider
the object $\widetilde{\mathcal{F}} = \bigoplus_{g \in G} g\mathcal{F}$
with its natural $G$-equivariant structure. Then each object
$\mathcal{H}$ in $\mathcal{C}^b \;Vect_G(X)$
is isomorphic to a direct factor of some $\widetilde{\mathcal{F}}$.
In fact, take $\mathcal{F} = \mathcal{H}$ viewed as an object in
$\mathcal{C}^b\;Vect(X)$. Then the $G$-equivariant structure on $\mathcal{H}$
defines isomorphisms $i_g: \mathcal{H} \to g\mathcal{H}$ for
all $g \in G$. Now consider the $G$-equivariant maps
$$
\mathcal{H} \stackrel{a}\longrightarrow \widetilde{\mathcal{H}} =
\bigoplus_{g \in G} g\mathcal{H} \stackrel{b}\longrightarrow \mathcal{H}
$$
where $a$ is given by the direct sum of the $i_g$ and $b = \frac{1}{|G|} \sum i_g^{-1}$.
Since $b \circ a = id_{\mathcal{H}}$, $\mathcal{H}$ splits off as
a direct factor of $\widetilde{\mathcal{H}}$.
Denote by $\mathcal{C}^b(X) \rtimes G$, resp.
$\mathcal{A}c^b(X)\rtimes G$ the full subcategory of
$\mathcal{C}^b_G(X)$, resp. $\mathcal{A}c^b_G(X)$,
formed by all $\widetilde{\mathcal{F}}$. Since
$\mathcal{C}^b(X) \rtimes G$ and
$\mathcal{A}c^b(X)\rtimes G$ are closed under degree-wise
split extensions and shifts, both are exact DG categories
(by Example 2.2 (a) in \cite{K1}). The above argument shows that
the natural embedding $\mathcal{C}^b(X)\rtimes G \to
\mathcal{C}^b_G(X)$ induces an equivalence \textit{up to
factors} of associated derived categories. Now Theorem 2.4 (a)
in \cite{K1} gives
\begin{prop} \label{crossed}
There exists an isomorphism in the mixed derived category
$$
C(\mathcal{A}c^b(X)\rtimes G, \mathcal{C}^b(X)\rtimes G) \to C_G(X)
$$
which is functorial with respect to $G$-equivariant pullbacks
of vector bundles.
\end{prop}
\noindent
\label{step2}
\textit{Step 2: Construction of the Quasiisomorphism.}
\noindent
To proceed further, we take a closer look at $\mathcal{C}^b(X)
\rtimes G$. The objects in this category can be identified with
objects in
$\mathcal{C}^b(Vect(X))$ while the morphisms are given by
$$
(\mathcal{F} \to \mathcal{G})_{\mathcal{C}^b(X)\rtimes G}
= \bigoplus_{g \in G} (\mathcal{F} \to
g\mathcal{G})_{\mathcal{C}^b(X)}
$$
(the expression on the right obviously coincides with all
$G$-equivariant morphisms from $\widetilde{\mathcal{F}}$ to
$\widetilde{\mathcal{G}}$). If we denote by $\psi \cdot g$ and
$ \varphi \cdot h $
the components of
$(\mathcal{F} \to \mathcal{G})_{\mathcal{C}^b(X)\rtimes G}$ and
$(\mathcal{G} \to \mathcal{H})_{\mathcal{C}^b(X)\rtimes G}$
living in
$(\mathcal{F} \to g\mathcal{G})_{\mathcal{C}^b(X)}$ and
$(\mathcal{G} \to h\mathcal{H})_{\mathcal{C}^b(X)}$,
respectively, then the composition of $\psi \cdot g$ and
$\varphi \cdot h $ is given by
$$
\psi g(\varphi) \cdot g h: \mathcal{F} \to
g\mathcal{G} \to gh\mathcal{H}
$$
Now define the map
$$
C(\mathcal{C}^b(X) \rtimes G) \to \Big(\bigoplus_{g \in G}
C(\mathcal{C}^b(X^g)) \Big)_G
$$
by sending $(\varphi_0 \cdot g_0, \varphi_1 \cdot g_1,
\ldots, \varphi_n \cdot g_n)$ to
$({\varphi_0}_{|_{X^g}}, {g_0(\varphi_1)}_{|_{X^g}},
g_0 g_1(\varphi_2)_{|_{X^g}}, \ldots, (g_0 \ldots g_{n-1})
(\varphi_n)_{|_{X^g}})$ where $g = g_0 \ldots g_n$. Note that only
the map to the coinvariants is indeed a map of mixed complexes
since the individual maps $C(\mathcal{C}^b(X) \rtimes G) \to
C(\mathcal{C}^b(X^g))$ do not respect the last face map $d_n$ and
the cyclic operator $t_n$. Since the above morphism obviously sends
$
C(\mathcal{A}c^b(X) \rtimes G)$ to $\Big(\bigoplus_{g \in G}
C(\mathcal{A}c^b(X^g)) \Big)_G
$
we obtain a morphism of mixed complexes
$$
C(\mathcal{A}c^b(X) \rtimes G, \mathcal{C}^b(X) \rtimes G) \to
\Big(\bigoplus_{g \in G} C(\mathcal{A}c^b(X^g),
\mathcal{C}^b(X^g)) \Big)_G.
$$
Composing this map with the inverse of the quasiisomorphism
in Proposition \ref{crossed} we obtain a morphism of objects
in the derived category of $\Lambda$
$$
\psi_X: C_G(X) \to \Big( \bigoplus_{g \in G} C(X^g) \Big)_G
$$
which is functorial with respect to pullbacks under $G$-equivariant
maps $f: Y \to X$.
\noindent
\label{step3}
\textit{Step 3: Mayer-Vietoris and Luna's Fundamental Lemma}
\noindent
Now we use a Mayer-Vietoris sequence to reduce to the case when
$X$ is affine.
\begin{prop} Let $X$ be a quasiprojective scheme,
$V, W \subset X$ two $G$-invariant open subschemes and
$U = V \cap W$. There is a distinguished triangle in the
mixed derived category:
$$
C_G(X) \to C_G(V) \oplus C_G(W) \to C_G(U) \to C_G(X)[1]
$$
\end{prop}
\noindent
\textit{Outline of Proof.} Most of the argument is identical to
the non-equivariant case proved in Proposition 5.8 of \cite{K2}.
First one shows that for a quasiprojective scheme $X$,
$C_G(X)$ is quasiisomorphic to the complex
obtained from the category of $G$-equivariant perfect complexes
(see Section 5.1 of \cite{K2}). Moreover, let $\mathcal{T}_G(X)$
be the derived category of $G$-equivariant perfect complexes and, for
any closed $Z \subset X$, denote by $\mathcal{T}_G(X \textrm{ on } Z)$ the
subcategory of complexes which are exact on the complement of $Z$.
If $Z = X \setminus W$
and $j: V \to X$ is the open embedding, one shows that the lines
of the diagram
$$
\begin{CD}
0 @>>> \mathcal{T}_G (X \textrm{ on } Z)
@>>> \mathcal{T}_G X @>>> \mathcal{T}_G W @>>> 0\\
& & @Vj^*VV @VVV @VVV \\
0 @>>> \mathcal{T}_G (V \textrm{ on } Z)
@>>> \mathcal{T}_G V @>>> \mathcal{T}_G (V \cap W) @>>> 0\\
\end{CD}
$$
are exact up to factors and the functor $j^*$ is an
equivalence up to factors (the proof in Sections 5.4 and 5.5 of
\cite{TT} may be repeated almost word-by-word; note also
that working ``up to factors" allows one to ignore all problems
with non-surjectivity of $K^0_G(X) \to K^0_G(W)$ and so on,
since for any complex $E$, the sheaf $E \oplus E[1]$ has
zero class in $K$-theory). Finally, one applies Theorem 2.7
of \cite{K1}.
Alternatively, one can work with $\mathcal{C}^b(X) \rtimes G$ and
deduce the above fact from the non-equivariant version (\cite{K1},
Proposition 5.8). $\square$
The above Mayer-Vietoris sequence shows that if $\psi_U$, $\psi_V$
and $\psi_W$ are quasiisomorphisms then the same holds for $\psi_X$
(of course, this uses the functoriality of $\psi_X$ applied
to open embeddings). By induction on the number of elements in a
$G$-invariant affine covering of $X$ we can assume that $X = Spec\; A$
is affine. In this case we can reinterpret the map $\psi_X$ as
follows. The group $G$ acts on the $k$-algebra $A$ and we can define
$A \rtimes G$ to be the crossed product algebra $A \otimes k[G]$ with
the product defined by
$$
(a_1 \cdot g_1) (a_2 \cdot g_2) = (a_1 g_1(a_2)) \cdot g_1 g_2
$$
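A minimal illustration of this product (our own toy example): for
$G = \mathbb{Z}/2 = \{1, g\}$ acting on $A = k[x]$ by $g(x) = -x$, the formula gives
$(x \cdot g)(x \cdot g) = (x\, g(x)) \cdot g^2 = -x^2 \cdot 1$ in $A \rtimes G$.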
Then the category $Vect_G(X)$ is equivalent to the category of
$A \rtimes G$-modules which are projective as $A$-modules. Since $|G|$
is invertible in $k$, this is equivalent to
being projective as $A \rtimes G$-modules.
Now consider the category $dgfree\; A\rtimes G$ of complexes
of free $A\rtimes G$-modules. By Section 2.4 of \cite{K1} the
natural functor $dgfree\; A \rtimes G \to \mathcal{C}^b \; Vect_G(X)$
induces a quasiisomorphism of mixed complexes
$$
C(0, dgfree\; A\rtimes G) \to C_G(X).
$$
Moreover, if one considers $A \rtimes G$ as a subcategory of $dgfree \; A
\rtimes G$ with one object (a free rank one module viewed as
a complex in degree 0) then by Theorem 2.4 (a) of \cite{K1} one gets
a quasiisomorphism
$$
C(A \rtimes G) \to C(0, dgfree \; A \rtimes G)
$$
where the mixed complex $C(B)$ of any $k$-algebra $B$ is defined
as in Section 2 (if one views $B$ as a category with one object).
Thus, we obtain a chain of quasiisomorphisms
$$
C(A \rtimes G) \to C(0, dgfree\; A \rtimes G)
\to C (\mathcal{A}c^b(X) \rtimes G, \mathcal{C}^b (X) \rtimes G)
\to C_G(X)
$$
induced by embeddings of subcategories.
Similarly, if $g \in G$ and $J_g \subset A$ denotes the ideal
of the fixed point set $X^g \subset X$ then we have a chain of
quasiisomorphisms
$$
C(A/J_g) \to C(0, dgfree\; A/J_g) \to C(X^g)
$$
Restricting the map $\psi_X$ constructed in the previous step,
we see that it suffices to prove that the following map is a quasiisomorphism
in the derived category of $\Lambda$:
$$
\psi_A: C(A \rtimes G) \to \Big(\bigoplus_{g \in G} C(A/J_g) \Big)_G;
$$
where $\psi_A(a_0 \cdot g_0, \ldots, a_n \cdot g_n)$ is
the image of $(a_0, g_0(a_1), \ldots,
(g_0\ldots g_{n-1})(a_n)) \in A^{\otimes n + 1}$ in
$(A/J_g)^{\otimes n+1} = C_n(A/J_g)$ and $g = g_0 \ldots g_n$.
\noindent
Eventually we will assume that $X$ is not only affine but has some
additional properties. The following proposition is
probably well-known but the author was unable to
find a convenient reference.
\begin{prop}
\label{mayer}
Let $G$ be a finite group acting on a
smooth quasiprojective variety $X$ over a field $k$. Assume that
$|G|$ is invertible in $k$. There exists a covering of $X$ by
$G$-invariant affine open subsets $U_1, \ldots, U_n$ such that
for any $i = 1, \ldots, n$ there is a point $x_i \in U_i$
satisfying the following properties:
(a) the fixed point scheme $U_i^g$ is empty unless $g \in G_{x_i}$
(the stabilizer of $x_i$);
(b) if $T_{x_i}$ is the tangent space to $U_i$ at $x_i$ with
its natural $G_{x_i}$ action, then there exists a $G_{x_i}$-equivariant
etale morphism $\varphi_i: U_i \to T_{x_i}$ such that
for any subgroup $H \subset G_{x_i}$ the induced morphism
$U_i/H \to T_{x_i}/H$ is etale and the
diagram
$$
\begin{array}{ccc}
U_i & \longrightarrow & T_{x_i} \\
\downarrow & & \downarrow \\
U_i/H & \longrightarrow & T_{x_i}/H
\end{array}
$$
is Cartesian;
(c) for any $H \subset G_{x_i}$ the fixed point scheme $U_i^H$ is
a scheme-theoretic preimage of $T_{x_i}^H$.
\end{prop}
\noindent
\textit{Proof.} For any point $x \in X$ we can choose an open affine
$G$-invariant neighborhood $U$ such that (a) holds.
By Lemma 8.3 of \cite{BR} there exists a $G_x$-equivariant
map $\varphi: U \to T_x$ such that $\varphi(x) = 0$ and
$d\varphi (x)$ is equal to identity. Now by Theorem 6.2 of \cite{BR}
(Luna's Fundamental Lemma in finite characteristic) applied to
the finite set of subgroups $H \subset G_x$ we can shrink $U$ so
that (b) holds as well.
For (c) let $U_i = Spec\; B$ and $T_{x_i} = Spec\; A$. Denote by $I_A$
the $A^H$-submodule of $A$ generated by elements $a - h(a)$, $a \in A, h\in H$.
Let also $J_A$ be the ideal in $A$ generated by $I_A$; then $J_A$ is the
ideal of the fixed point scheme $T_{x_i}^H$. We use the similar
notation $I_B, J_B$ for the objects corresponding to $B$. Since $B$ is
flat over $A$, $J_A \otimes_A B$ is naturally an ideal in $B = A \otimes_A B$
and it suffices to prove that it coincides with $J_B$ (a priori we
just have an inclusion $J_A \otimes_A B \subset J_B$). Note that since
$|H|$ is invertible $A = I_A \oplus A^H$ as $A^H$-modules. Applying
$\otimes_{A^H} B^H$ and using the fact that $B = A \otimes_{A^H} B^H$ by
part (b) we conclude that $I_B = I_A \otimes_{A^H} B^H $ which implies
$J_B = J_A \otimes_A B$ by definition of $J_A$, $J_B$. $\square$
\noindent
Thus, later we may assume that $X$ is affine and there exists $x \in X$
such that (a), (b) and (c) above are satisfied.
\noindent
\label{step4}
\textit{Step 4: Eilenberg-Zilber Theorem}
\noindent
The mixed complex $C(A \rtimes G)$ was studied in a more general
situation by Jones and Getzler \cite{GJ2}. In the next proposition
we present their results (with some simplifications possible due to
the fact that $|G|$ is invertible in $k$; also our isomorphism
is more explicit than in \cite{GJ2}).
To state the proposition we need to fix some notation. For any $g \in G$
consider the sequence of vector spaces
$
\big(A^\natural_g\big)_n = A^{\otimes n+1}
$
together with the face, degeneracy and cyclic operators defined
by the formulas similar to those in Section 2:
$$
d_i (a_0, \ldots, a_i, a_{i+1}, \ldots, a_n) = \left\{
\begin{array}{ll}
(a_0, \ldots, a_{i} a_{i+1}, \ldots, a_n) & \textrm{if } 0 \leq i \leq n-1 \\
(g^{-1}(a_n) a_0, \ldots, a_{n-1}) & \textrm{if } i = n
\end{array} \right.
$$
$$
s_i (a_0, \ldots, a_n) = (a_0, \ldots, a_i, 1, a_{i+1}, \ldots, a_n) \qquad
i = 0, \ldots, n;
$$
$$
t_n(a_0, \ldots, a_n) = (g^{-1}(a_n), a_0, \ldots, a_{n-1})
$$
Note that the operator $t_n$ \emph{does not} satisfy the cyclic
identity $t_n^{n+1}=1$. However, we can still construct a mixed complex
from the spaces $A^{\natural}_g$ by considering a $G$-action on the
direct sum $\bigoplus_{g \in G} A^\natural_g$ such that $h \in G$
sends $(a_0, \ldots, a_n) \in A^\natural_{g}$ to
$(h(a_0), \ldots, h(a_n)) \in A^\natural_{hg h^{-1}}$.
Then this $G$-action commutes with $d_i$, $s_i$, $t_n$, hence we obtain
the face, degeneracy and cyclic operators on the quotient
space $\Big(\bigoplus_g A^\natural_g \Big)_G$ of $G$-coinvariants.
In this quotient space the operator $t_n$ does satisfy $t_n^{n+1} = 1$
and we denote the resulting mixed complex by
$C \Big( \bigoplus_{g \in G} A^\natural_g \Big)_G$.
\begin{prop} The map of mixed complexes
$$
\varphi_A: C(A \rtimes G) \to C\Big( \bigoplus_{g \in G}
A^\natural_g \Big)_G
$$
defined by $\varphi_A(a_0 \cdot g_0, \ldots, a_n \cdot g_n) =
(a_0, g_0(a_1), g_0 g_1 (a_2), \ldots, g_0 \ldots g_{n-1} (a_n))
\in \big(A^\natural_g\big)_n$ with $g = g_0 \ldots g_n$; is a
quasiisomorphism in the derived category of $\Lambda$.
\end{prop}
\noindent
\textit{Proof.} In \cite{GJ2} Getzler and Jones define
a bi-graded object
$A \natural G (p, q) = k[G^{p+1}] \otimes A^{\otimes q+1}$
with two families of face maps $d^{h}: A \natural G (p, q)
\to A \natural G(p-1, q)$ (horizontal maps) and
$ d^{v}: A \natural G (p, q) \to A \natural G(p, q-1)$ (vertical maps),
and similarly
for degeneracies and cyclic operators (we will not use
the precise definitions of these operators). These two families
of operators give $A\natural G(p, q)$ the structure of a
\emph{cylindrical module}, see \cite{GJ2} before Proposition 1.1.
This cylindrical module has a total complex $Tot_n (A, G) =
\bigoplus_{p + q = n} A \natural G (p, q)$ and a diagonal complex
$\Delta_n(A, G) = A \natural G (n, n)$, both being mixed complexes (see
\cite{GJ2} for more details).
The cylindrical module structure on $A \natural G(p, q)$ is
defined in such a way that $C(A \rtimes G)$ is isomorphic to
the diagonal complex $\Delta (A, G)$ via the map
$$
(a_0 \cdot g_0, \ldots, a_n \cdot g_n) \mapsto
(g_0, \ldots, g_n | h_0^{-1} a_0, \ldots, h_n^{-1} a_n) \in
A \natural G (n, n) = \Delta_n (A, G)
$$
where $h_i = g_i \ldots g_n$. Applying the Eilenberg-Zilber
Theorem for paracyclic modules, see Theorem 3.1 in \cite{GJ2},
one gets an explicit quasiisomorphism
$$
AW: \Delta (A, G) \to Tot (A, G)
$$
given by the Alexander-Whitney map defined in Section 8.5.2 of
\cite{W1}. We will only need one component of this map
(with values in $A \natural G (0, n)$) which is given simply by
$d_1^h \ldots d_n^h$, where $d^h_i: A \natural G (i, q)
\to A \natural G (i - 1, q)$ sends $(g_0, \ldots, g_i | a_0, \ldots, a_q)$
to $(g_i g_0, \ldots, g_{i-1} | g_i (a_0), \ldots, g_i (a_q))$.
Now the transformation
$$
(g_0, \ldots, g_p | a_0, \ldots, a_q) \mapsto
(g_1, \ldots, g_p | g_0 g_1 \ldots g_p | a_0, \ldots, a_q)
$$
identifies $A \natural G (p, q)$ with a cylindrical module
the vertical maps of which are given by the above operators on
$\bigoplus_{g \in G} A^\natural_g$
(and the index $g$ corresponds to $(\ldots | g | \ldots )$
in the above notation), while the rows can be identified with the
standard homological complex of $G$ acting
on $\bigoplus_{g \in G} A^\natural_g$. See Section 4 of \cite{GJ2}
for details. Since $|G|$ is invertible in $k$, the projection of
$Tot (A, G)$ onto its first column $A \natural G (0, \bullet)$
together with the projection to the coinvariants, gives a
quasiisomorphism
$$
Tot(A, G) \to C \Big(\bigoplus_{g \in G} A^\natural_g \Big)_G.
$$
The fact that it is indeed a quasiisomorphism can be proved using
the homotopy
$$
h: A \natural G (p, q) \to A \natural G (p+1, q);
\qquad (g_1, \ldots, g_p | g | a_0 \ldots a_q) \mapsto
\frac{1}{|G|}\sum_{g' \in G} (g', g_1, \ldots, g_p |g
| a_0, \ldots, a_q )
$$
The composition of (quasi)isomorphisms:
$$
C(A \rtimes G) \to \Delta (A, G) \to Tot (A, G) \to
C\Big(\bigoplus_g A^\natural_g \Big)_G
$$
is given by
$$
(a_0 \cdot g_0, \ldots, a_n \cdot g_n) \to
(g^{-1}_0(a_0), a_1, g_1(a_2), \ldots, (g_1 \ldots g_{n-1}) (a_n))
\in (A^\natural_g)_n; \qquad g = g_1 g_2 \ldots g_n g_0.
$$
Note that this differs from the map in the statement of our
proposition exactly by the action of $g_0 \in G$. Since on the
space of coinvariants $g_0$ acts by identity, this finishes the proof.
$\square$
\noindent
Thus, we have reduced Proposition \ref{actual} to the claim that the
map
$$
C\Big(\bigoplus_{g \in G} A^\natural_g \Big)_G \to
\Big(\bigoplus_{g \in G} C(A/J_g)\Big)_G
$$
defined by the natural surjections $(A^\natural_g)_n \to (A/J_g)^{\otimes
n+1}$, is a quasiisomorphism. Note also that we have not used the
smoothness assumption yet.
\noindent
\label{step5}
\textit{Step 5: Shapiro's Lemma and Etale Descent}
\noindent
Let $\mathcal{O} \subset G$ be a conjugacy class. It is
easy to see that
$
\bigoplus_{g \in \mathcal{O}} A^\natural_g
$
and
$
\bigoplus_{g \in \mathcal{O}} C(A/J_g)
$
are $G$-invariant subspaces, hence it suffices to prove
the quasiisomorphism
$$
C\Big(\bigoplus_{g \in \mathcal{O}} A^\natural_g \Big)_G \to
\Big(\bigoplus_{g \in \mathcal{O}} C(A/J_g)\Big)_G
$$
for all conjugacy classes $\mathcal{O}$. To that end, choose $g \in \mathcal{O}$
and denote by $C_g = \{h \in G | gh = hg \}$ the centralizer of $g$.
Then $C_g$ acts on $A^\natural_g$, and on the space of
coinvariants $\big(A^\natural_g\big)_{C_g}$ the cyclic operator $t_n$
satisfies $t_n^{n+1} = 1$; therefore we obtain a mixed complex
$C(A^\natural_g)_{C_g}$. Moreover,
we have natural isomorphisms of $G$-modules
$$
\bigoplus_{g' \in \mathcal{O}} A^\natural_{g'} \simeq Ind_{C_g}^G A^\natural_g;
\qquad \bigoplus_{g' \in \mathcal{O}} C(A/J_{g'})
\simeq Ind_{C_g}^G C(A/J_g)
$$
where $Ind_{C_g}^G$ denotes the induction map from $C_g$-modules
to $G$-modules. By Shapiro's Lemma we are reduced to
proving the quasiisomorphism
$$
C(A^\natural_g)_{C_g} \simeq (C(A/J_g))_{C_g};
$$
which would follow once we prove that the natural surjection $A \to A/J_g$
induces a quasiisomorphism
$$
C(A^\natural_g)_{\langle g \rangle} \simeq C(A/J_g)
$$
where $\langle g \rangle \subset C_g$ is the cyclic subgroup generated by
$g$. Note that $\langle g \rangle$ acts trivially on $A/J_g$ (in fact, $J_g$
is generated by elements $a - g(a)$ with $a \in A$), so we don't
have to take coinvariants on the right hand side.
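Here the form of Shapiro's Lemma we use is the elementary one for coinvariants (our
phrasing of the reduction): for any $C_g$-module $M$ one has
$\big(Ind_{C_g}^G M\big)_G \simeq M_{C_g}$, applied degreewise to the mixed complexes above.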
Now we finally use Proposition \ref{mayer} (and the smoothness
assumption which stands behind it). Since the properties (a), (b) and
(c) are preserved by finite intersections of
$G$-invariant affine open subsets, by a Mayer-Vietoris argument
we can assume that there exists a point $x \in X$ and a $G_x$-equivariant
map $\varphi: X = Spec\; B \to T_x = Spec \; A$ which is etale
and satisfies (a), (b) and (c) of Proposition \ref{mayer}.
Let $J_B$ and $J_A$ denote the ideal of $g$-fixed points in $B$ and $A$,
respectively. We have two
cases:
\noindent
When $g \notin G_x$, $g$ has no fixed points by (a) of Proposition \ref{mayer}, i.e.
$J_B = B$. Then by Theorem 6 of \cite{Lo} $C(B \rtimes \langle g \rangle)$ is
quasiisomorphic to $(C(B))_{\langle g \rangle}$ (the piece obtained from the
conjugacy class of identity), so the argument of Step 4 applied to
$G = \langle g \rangle$ shows that $C(B^\natural_g)$ is quasiisomorphic to zero
(if $g \neq 1$).
\noindent
When $g \in G_x$ we use the etale map $\varphi: X \to T_x$ to reduce to
the case of the flat space $T_x$. Note that until now all maps were defined
in the derived category of $\Lambda$. However, it
suffices to check that the map $C(B^\natural_g)_{\langle g \rangle} \to C(B/J_B)$
defined above is a composition of quasiisomorphisms of complexes
over $k$. Note that the action $b(b_0, \ldots, b_n)
= (b b_0, b_1, \ldots, b_n)$ actually turns both $C(B^\natural_g)$ and $C(B/J_B)$
into complexes of $B$-modules. We will show that they are isomorphic in the
derived category of $B$ (thus, taking coinvariants in $C(B^\natural_g)_{\langle g \rangle}$
is only necessary to define an \textit{a priori} mixed complex structure).
\begin{prop} Let $X = Spec \; B$, $x \in X$, $g \in G_x$ and $T_x = Spec\; A$
be as above. If the map $A \to A/J_A$ induces a quasiisomorphism
$C(A^\natural_g) \to C(A/J_A)$ in the derived category of $A$ then the map
$B \to B/J_B$ induces a quasiisomorphism $C(B^\natural_g) \to C(B/J_B)$ in
the derived category of $B$.
\end{prop}
\noindent
\textit{Proof.} This proof is a minor modification of the etale descent
result of \cite{GW}. In fact, if $C(A^\natural_g) \to C(A/J_A)$ is
a quasiisomorphism of complexes of $A$-modules then consider the following
commutative diagram:
$$
\begin{array}{ccc}
B \otimes_A C(A^\natural_g) & \longrightarrow & B \otimes_A C(A/J_A)
\simeq B/J_B \otimes_{A/J_A} C(A/J_A) \\
\downarrow & & \downarrow \\
C(B^\natural_g) & \longrightarrow & C(B/J_B)
\end{array}
$$
where the two vertical arrows are given by $b \otimes (a_0, \ldots, a_n)
= (ba_0, a_1, \ldots, a_n)$.
The top arrow is a quasiisomorphism since $B$ is flat over $A$. The isomorphism
in the top right corner holds since $B \otimes_A (A/J_A) \simeq B/J_B$ by part (c)
of Proposition \ref{mayer}. To show that the bottom arrow is a
quasiisomorphism we will show that this property holds for the two vertical arrows.
Moreover, it suffices to prove it for the left arrow only since then one
can set $g =1$, replace the pair $(B, A)$ by $(B/J_B, A/J_A)$, respectively, and
get the proof for the right arrow.
To prove the assertion about $B \otimes_A C(A^\natural_g) \to C(B^\natural_g)$
we borrow some formulas from pp. 513-514 of \cite{BG}. Let $A_{\Delta}$
and $A_g$ denote bimodules over $A \otimes A$ isomorphic to $A$ as abelian
groups, with the module structure given by $(a_0, a_1) \cdot a = a_0 a a_1$
for $a \in A_{\Delta}$ and $(a_0, a_1) \cdot a = a_0 a g^{-1} (a_1)$
for $a \in A_g$. Consider the free resolution $P_\bullet^A$ of $A_{\Delta}$ as
$A \otimes A$-module:
$$
\ldots \to A \otimes A \otimes A \otimes A \stackrel{\partial}\to A \otimes A
\otimes A \stackrel{\partial}\to A \otimes A \stackrel{\Delta}\to A \to 0
$$
where the $A \otimes A$-module structure on $A^{\otimes (k+2)}$ is given by
$$
(\bar{a}_0, \bar{a}_1) \cdot (a_0, \ldots, a_{k+1}) = (\bar{a}_0 a_0, a_1,
\ldots, a_k, a_{k+1} \bar{a}_1);
$$
the map $\Delta: A \otimes A \to A$ sends $(a_0, a_1)$ to $a_0 a_1$ and
$$
\partial (a_0, \ldots, a_{k+1}) =
\sum_{i = 0}^k (-1)^i (a_0, \ldots, a_i a_{i+1}, \ldots a_{k+1}).
$$
Exactness of $P_\bullet^A$ is proved using the homotopy $s: (a_0, \ldots,
a_{k+1}) \mapsto (1, a_0, \ldots, a_{k+1})$. Let $B_g$ and $P^B_\bullet$ be
the similar objects over $B$.
Now we have a chain of isomorphisms:
$$
B \otimes_A C (A^\natural_g)
\simeq B \otimes_A A_g \otimes_{A \otimes A} P^A_\bullet \simeq
B_g \otimes_{A \otimes A} P^A_\bullet \simeq B_g \otimes_{B \otimes B}
(B \otimes B) \otimes_{A \otimes A} P^A_\bullet.
$$
The first isomorphism follows from the definitions of $P^A_\bullet$, $C(A^\natural_g)$ and
$A_g$. The second uses
$\langle g \rangle$-equivariance of $Spec\; B \to Spec\; A$ and the $A \otimes A$-module
structure on $B_g$ which comes from $A \otimes A \to B \otimes B$. Taking
into account that $C(B^\natural_g) \simeq B_g \otimes_{B \otimes B} P^B_\bullet$,
we need to prove that the natural injective map of complexes
$\rho: (B \otimes B) \otimes_{A \otimes A}
P^A_\bullet \to P^B_\bullet$ becomes a quasiisomorphism after applying
$B_g \otimes_{B \otimes B}(\ldots)$. Explicitly, we have
$$
\begin{array}{cccccccc}
\ldots \to & B \otimes A \otimes B & \to & B \otimes B & \to & B \otimes_A B & \to
& 0 \\
& \downarrow \rho & & \downarrow \rho & & \downarrow a & \\
\ldots \to & B \otimes B \otimes B & \to & B \otimes B & \to & B & \to & 0
\end{array}
$$
where the upper row is exact since $B \otimes B$ is flat over $A \otimes A$ and the
right vertical arrow $a$ is the natural surjective map $B \otimes_A B \to B$.
Since $B$ is etale over $A$, the kernel $C$ of $a$ is a $B \otimes B$-module
supported away from the diagonal $X_\Delta \subset X \times X = Spec(B \otimes B)$.
Note that the support of $B_g$ coincides with the graph of the map $g^{-1}: X \to X$.
We now claim that the supports of $B_g$ and $C$ are disjoint. In fact, if $(x_1, x_2)
\in Supp (C) \subset X \times X$ then $x_1 \neq x_2$ but $\varphi(x_1) =
\varphi(x_2) \in T_x$. If $(x_0, g^{-1}(x_0)) \in Supp(C)$ then $g^{-1}$ stabilizes
$\varphi(x_0) \in T_x$ but does not stabilize $x_0 \in X$, which contradicts
property (c) in Proposition \ref{mayer}. Hence the supports of $B_g$ and $C$ are
disjoint and after tensoring $B_g \otimes_{B \otimes B} $ the two rows
in the above diagram become quasiisomorphic since $B_g \otimes_{B \otimes B} Coker
\rho$ computes $Tor^\bullet_{B \otimes B} (B_g, C) = 0$
(a bit more rigorously, one could first show that the
localization at each maximal prime vanishes - see the last lines on p. 374 of
\cite{GW}). This finishes the proof of the proposition.
$\square$
\noindent
Thus, it suffices to prove the quasiisomorphism $C(A^\natural_g) \to C(A/J_A)$ when
$A = k[T_x]$ is a polynomial ring with an action of the cyclic group $\langle g \rangle$
induced from its linear action on $T_x$.
\noindent
\label{step6}
\textit{Step 6: the Linear Case}
\noindent
As a last step we consider the case of a flat space $V = T_x= Spec\; A$ with a
linear action of the cyclic group $H=\langle g \rangle$. Let $V = V_0 \oplus V_1$ where
$V_0 = V^H$ and $V_1$ is the $H$-invariant complement. Note that
$\overline{A}=A/J_A$ is the algebra of regular functions on $V_0$.
Recall that for any variety $Y$, a vector bundle $E$ and its section $s$
one has a Koszul complex
$$
\ldots \to \Lambda^3 E^* \stackrel{\partial}\to \Lambda^2 E^* \stackrel{\partial}
\to E^* \stackrel{\partial} \to \mathcal{O}_Y \to 0
$$
where the differential is given by contraction with $s$. We denote this
Koszul complex by $Kos(Y, E, s)$. It is well-known that for a regular
section $s$ and affine $Y$, $Kos(Y, E, s)$ is a projective resolution of
$\mathcal{O}_Z$ where $Z$ is the zero scheme of $s$.
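A minimal example of the kind used below (ours): for $V = \mathbb{A}^1$, the diagonal
$V_\Delta \subset V \times V = Spec\; k[x,y]$ is the zero scheme of the section
$s(x,y) = x - y$ of the trivial line bundle, and the corresponding Koszul complex
$$
0 \to k[x,y] \stackrel{\cdot (x-y)}\longrightarrow k[x,y] \to 0
$$
is a free resolution of $k[x,y]/(x-y) \simeq \mathcal{O}_{V_\Delta}$.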
Recall from the previous step that $C(A^\natural_g)$ is obtained by
taking a particular projective resolution $P^A_\bullet$ of the
diagonal copy $V_\Delta \subset V \times V$ and tensoring it with
the $A \otimes A$-module corresponding to the graph of $g^{-1}: V \to V$.
Similarly, $C(\overline{A})$ is obtained by taking a particular resolution
$P^{\overline{A}}_\bullet$ of $(V_0)_\Delta \subset V_0 \times V_0$ and
tensoring it with the module of functions on $(V_0)_\Delta$. As in \cite{BG}
we prove the quasiisomorphism $C(A^\natural_g) \to C(\overline{A})$ by
looking at the Koszul resolutions instead of $P_\bullet$.
Indeed, $V_\Delta \subset V \times V$ is a zero scheme
of a section $s$ of the trivial vector bundle with fiber $V$, given by
$s(v_1, v_2) = v_1 - v_2$; and similarly $(V_0)_\Delta$ is a
zero scheme of the section $s_0 = s|_{V_0 \times V_0}$ which takes
values in the trivial vector bundle with the fiber $V_0$. Then
we have a commutative diagram
$$
\begin{array}{ccc}
P^A_\bullet \quad & \longrightarrow & P^{\overline{A}}_\bullet \quad \\
\downarrow \alpha_A & & \downarrow\alpha_{\overline{A}} \\
Kos(V \times V, V, s) & \longrightarrow &Kos (V_0 \times V_0, V_0, s_0)
\end{array}
$$
where $\alpha_A$ is an extension of the identity map $A_\Delta \to A_\Delta$
to the projective resolutions, and $\alpha_{\overline{A}}$ is its reduction
modulo $J_A$ (for example, we define $\alpha$ by the
formula on p. 515 of \cite{BG}). Since $\alpha_A$ (resp. $\alpha_{\overline{A}}$)
is a quasiisomorphism by definition of a projective resolution, and remains
one after applying $A_g \otimes_{A \otimes A}$ (resp. $\overline{A}_\Delta
\otimes_{\overline{A} \otimes \overline{A}}$) we only have to check that the
induced map
$$
A_g \otimes_{A \otimes A} Kos(V \times V, V, s) \to \overline{A}_\Delta
\otimes_{\overline{A} \otimes \overline{A}} Kos (V_0 \times V_0, V_0, s_0)
$$
is a quasiisomorphism. But by direct computation
(see \cite{SGA}, Expose VII, Proposition 2.5, for example) one can see that
the left hand side is isomorphic to the right hand side tensored by
$Kos(V_1, V_1, s')$
where $s'$ is the section of a trivial vector bundle
over $V_1$ with fiber $V_1$, given by $s'(v) = v - g^{-1}(v)$. Since $V_1^H = 0$,
$Kos(V_1, V_1, s')$ is quasiisomorphic to $k$. This finishes the proof of Step 6,
and the proof of Proposition \ref{actual}. $\square$
\section{Equivalences of Derived Categories and Cohomology}
\noindent
First we state a result which says that the cohomology of a complex
quasiprojective variety can be recovered from an enhanced version
of its derived category of vector bundles. The parts (a) and (b)
of the next theorem are not stated
explicitly in the papers \cite{K1}, \cite{K2} but easily follow
from their results.
\begin{theorem}
\label{cohom-der}(a) Let $X$ be a quasiprojective variety over the field
of complex numbers. In the notation of Section 1, the cyclic homology group
$HC_i(Vect(X), \mathbb{C}[u, u^{-1}])$
is isomorphic to $H^{even}(X, \mathbb{C})$ for $i = 2k$ and
$H^{odd}(X, \mathbb{C})$ for $i = 2k+1$.
(b) If $F: D^b(X) \to D^b(Y)$ is an equivalence of bounded derived
categories of sheaves on two smooth projective varieties $X, Y$ over
a field $k$, then
$F$ induces an isomorphism $HC_\bullet(Vect(X), W) \to
HC_\bullet(Vect(Y), W)$ for any graded $k[u]$-module $W$
of finite projective dimension. In particular, if $k =\mathbb{C}$ then
$F$ induces an isomorphism of complex cohomology groups.
(c) Let $X, Y$ be as in (b) and assume that a finite group $G$
acts on $X$. If $char\; k$ does not divide $|G|$ and there exists
an equivalence of derived categories $F: D^b(Y) \to D^b_G (X)$
then $F$ induces an isomorphism
$$
HC_\bullet(Vect(Y), W) \simeq \Big( \bigoplus_{g \in G} HC_\bullet
(Vect(X^g), W) \Big)_G
$$
(d) If $char\; k = 0$ (resp. $k = \mathbb{C}$) and $W = k [u, u^{-1}]$
then in the situation of (b) or (c) one has an isomorphism
of Hodge filtrations (resp. pieces of the Hodge decomposition).
\end{theorem}
\noindent
\textit{Proof.} By Corollary 5.2 of \cite{K2} the mixed complex
$C(X)$ is quasiisomorphic to the mixed complex obtained by
sheafifying the standard mixed complex of an algebra
(see Section 9 of \cite{W1}). This result holds for
any field $k$. If $char\; k = 0$ then by \cite{FT} (in the affine
case) and \cite{W3} (general quasiprojective schemes)
the homology groups $HC_i(Vect(X), k[u, u^{-1}])$
are given by the crystalline cohomology of $X$ (see Theorem 3.4 in \cite{W3}).
In particular, when $k = \mathbb{C}$ we get the Betti cohomology of
$X(\mathbb{C})$. This proves (a).
To prove (b) note that by smoothness the triangulated category obtained from
$(\mathcal{A}c^b(X), \mathcal{C}^b (X))$ is equivalent
to the bounded derived category $D^b(X)$. By
a fundamental theorem of Orlov (see Theorem 2.2,
\cite{Or}) any equivalence $F$ as above is induced by a Fourier-Mukai
transform with respect to some sheaf on $X \times Y$. Thus, any such
equivalence $F$ automatically comes from a morphism of localization pairs
and we can conclude by using the invariance property
(Theorem \ref{invariance}). Note that part (a) gives an isomorphism
of complex cohomology groups only as $\mathbb{Z}/2 \mathbb{Z}$-graded
vector spaces but by the main result of \cite{W3} this can be
refined to an isomorphism of $\mathbb{Z}$-graded vector spaces as
well.
Part (c) follows from Theorem \ref{invariance}; Theorem \ref{main}
and the fact that any equivalence $F$ still comes from a functor
between localization pairs due to 8.1 and 8.2 in \cite{K3}.
Part (d) follows from \cite{W3}.
$\square$
\noindent
\textbf{Example.} Let $X \dashrightarrow Y$ be an elementary flop of
Bondal-Orlov, see \cite{BO}, Theorem 3.6. By \emph{loc. cit.} any
such flop induces an equivalence
of derived categories $F: D^b(X) \to D^b(Y)$ hence an isomorphism
of cohomology groups by part (b) of the above theorem. Note that the
motivic integration approach only gives an equality of
Betti numbers or classes in the K-theory of Hodge structures. This
equivalence
was extended to more general flops in dimensions 3 and 4
by Bridgeland and Namikawa.
\noindent
Next we describe some situations when a derived category of sheaves
on one variety is equivalent to the equivariant derived category of sheaves
(with respect to a finite group action) on another variety. From now
on we assume that $k = \mathbb{C}$. Let $G$ be a finite group with a
unimodular action on a smooth irreducible quasiprojective variety $X$
(i.e. we require that for each $x \in X$ the image of the stabilizer
$G_x$ in $GL(T_x)$ actually belongs to the subgroup $SL(T_x)$). Denote
by $GHilb(X) \subset Hilb^{|G|}(X)$ the scheme of all $G$-invariant
0-dimensional subschemes $Z \subset X$ of multiplicity $|G|$ such that
$H^0(\mathcal{O}_Z)$ is isomorphic to the regular representation of $G$.
Assume further that the generic point of $X$ has trivial stabilizer
and let $Y(G, X) \subset GHilb(X)$ be the irreducible component
containing all free $G$-orbits.
\begin{theorem} \label{exmpl-der} \ \\
\noindent
(1) Assume that $X, G$ and $Y= Y(G, X)$ are as above and one of the following
conditions is satisfied
(a) $\dim X \leq 3$;
(b) $X$ is a complex symplectic variety, $G$ preserves the symplectic
structure and $Y$ is a crepant resolution of $X$;
(c) $X$ is the $n$-th cartesian power of a smooth quasiprojective surface
$S$; $G = \Xi_n$ (the symmetric group) with the natural permutation
action on $X$.
\noindent
Then there exists an equivalence $F: D^b(Y) \to D^b_G(X)$ coming from
a morphism of localization pairs.
\noindent
(2) Assume that $\dim X \leq 3$; $X$ is projective and $X'$ is another
smooth projective variety with an action of a finite group $G'$. Suppose
that $X/G$ and $X'/G'$ are Gorenstein and there exists a common resolution
of singularities $\pi: Z \to X/G$,
$\pi': Z \to X'/G'$ with $\pi^* K_{X/G} \simeq (\pi')^* K_{X'/G'}$. Then there
exists an equivalence of categories $D^b_G(X) \simeq D^b_{G'}(X')$.
In particular, if $Z = X'$ and $G'= \{1\}$ (i.e. $Z$ is a crepant
resolution) there exists an equivalence $D_G^b(X) \simeq D^b(Z)$.
\end{theorem}
\noindent
\textit{Proof.} Parts 1a, 1b correspond to Theorem 1.2 and Corollary 1.3
in \cite{BKR}, respectively. To prove 1c, note that for $S = \mathbb{C}^2$
by a result of Haiman, cf. \cite{Ha} Theorem 5.1, the variety $Y$ is
isomorphic to the Hilbert scheme $Hilb^n (S)$. Then the morphism $Y \to
X/\Xi_n$ is semismall, see \cite{GS}, and the assertion follows from a
general criterion of Theorem 1.1 in \cite{BKR}. In general the equality
$Hilb^n(S) = Y$ is derived from the $\mathbb{C}^2$ case by considering
completions of local rings at points $s \in S$ which are the same as for
$\mathbb{C}^2$ since $S$ is smooth.
Part 2 is a simplified version of Theorem 1.7 due to Kawamata, cf. \cite{Ka}.
$\square$.
\begin{corr} \label{last}In the first (resp. the second) case of the above theorem one
has an isomorphism
of the cyclic homology groups
$$
HC_\bullet(Vect_G(X), W) \simeq HC_\bullet(Vect(Y), W), \quad (resp. \quad
HC_\bullet(Vect_G(X), W) \simeq HC_\bullet(Vect_{G'}(X'), W))
$$
for all graded $k[u]$-modules $W$ of finite projective dimension.
For $W = \mathbb{C}[u, u^{-1}]$ this reduces to isomorphisms
$$
H^*_{orb} (X, \mathbb{C}) \simeq H^*(Y, \mathbb{C}); \qquad
H^*_{orb} (X, \mathbb{C}) \simeq H^*_{orb}(X', \mathbb{C})
$$
of (orbifold) cohomology groups with their Hodge filtrations.
\end{corr}
\noindent
\textit{Proof.} By inspecting the proofs in \cite{BKR} and \cite{Ka}
one can see that all the derived equivalences above are given
by functors between localization pairs. Hence the first two assertions
follow from the invariance property (Theorem \ref{invariance}).
To obtain the last pair of isomorphisms one applies Theorem \ref{main} and
Theorem \ref{cohom-der} (a). $\square$
\noindent
Finally we note that by a recent conjecture of Kawamata, the derived
equivalences of Theorem \ref{exmpl-der} should be part of a more general statement.
Here we formulate a part of Conjecture 1.2 in \cite{Ka} in a slightly
generalized form:
\begin{conj} (Categorical Resolution Conjecture)
Let $\mathcal{X}$, $\mathcal{Y}$ be smooth Deligne-Mumford
stacks with Gorenstein moduli spaces $X, Y$. Suppose that
there exist
birational maps $f:Z \to X$ and $g: Z \to Y$ such that
$f^*(K_X) = g^*(K_Y)$. Then the derived categories $D^b(\mathcal{X})$ and
$D^b(\mathcal{Y})$ of sheaves in the etale topology are equivalent.
\end{conj}
\section{Concluding Remarks}
The isomorphisms of Theorem \ref{main} are additive counterparts
of a $K$-theoretic statement in \cite{V}. However, in \textit{loc. cit.}
one also has a statement concerning $K'$-theory of singular varieties.
This motivates the following conjecture:
\begin{conj} Let $X$ be a quasiprojective scheme over a field
$k$, $G$ a finite group acting on $X$ and assume that $char\; k$ does not
divide $|G|$. Let $Coh(X)$ be the exact category of coherent sheaves on $X$
and $Coh_G(X)$ the exact category of $G$-equivariant sheaves.
Then for any $W$ of finite projective dimension over $k[u]$
there exists an isomorphism
$$
\phi_X: \Big( \bigoplus_{g \in G}
HC_\bullet(Coh(X^g), W) \Big)_G \to HC_\bullet(Coh_G(X), W)
$$
which is functorial with respect to the (derived) pushforwards under
$G$-equivariant proper maps.
\end{conj}
We expect that for smooth $X$ and $W = k[u, u^{-1}]$ (periodic cyclic homology
case) this isomorphism satisfies the multiplicative part of the Cohomological
Crepant Resolution Conjecture, see
\cite{Ru}.
We have seen in the proof of Theorem \ref{main} that the isomorphism
$\psi_X$ is defined using pullbacks to the fixed point sets. We expect
that, as in \cite{V}, the isomorphism $\phi_X$ can be defined using
direct images with respect to the closed embeddings $X^g \to X$. Note
that $\phi_X$ and $\psi_X$ are not expected to be mutually inverse, but
their composition should be invertible similarly to
Lemma 4.2 and Lemma 4.3 in \cite{V}. For singular $X$ the
cyclic homology of $Coh(X)$ may be different from the sheafified
cyclic homology of the algebras $\mathcal{O}(U)$, $U \subset X$; therefore
the general strategy of proof has to be completely different.
\noindent
Finally we state another conjecture aimed at a better understanding of the
orbifold product on orbifold cohomology, see Section 3 of \cite{Ru}.
Let $\mathcal{X}$ be a smooth Deligne-Mumford stack and $\pi: U \to
\mathcal{X}$ be an etale cover by a scheme. Consider $Z = U \times_{\pi} U
\subset U \times U$. If $\pi_i: U \times U \times U \to U \times U$
is the projection omitting the $i$-th factor, $i = 1, 2, 3$, then
$\pi_2(\pi_3^{-1} (Z) \cap \pi_1^{-1}(Z)) = Z$, i.e. $Z$ is an
idempotent correspondence from $U$ to itself.
In particular, we have a (finite) morphism $m: Z \times_U Z \to Z$
which allows one to define for any two objects $\mathcal{F}$,
$\mathcal{G} \in D^b(Z)$ a third object
$$
\mathcal{F} * \mathcal{G} := m_* \big(p_1^*\mathcal{F} \otimes p_2^*\mathcal{G}\big)
$$
where $p_1, p_2$ are the projections of $Z \times_U Z$ to $Z$, and all operations
are understood in the derived sense.
The $*$-product in general is not commutative, but it is
associative (more precisely, there is a canonical isomorphism between
$\mathcal{F}*(\mathcal{G} * \mathcal{H})$ and $(\mathcal{F} * \mathcal{G})
* \mathcal{H}$). When $\mathcal{X}$ is the quotient stack $[X/G]$ we
can take $U = X$ and then $Z = \coprod_{g \in G} X^g$ while
$m: Z \times_U Z \to Z$ is given by pointwise group product within
stabilizers.
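Concretely, in this case (our unwinding of the definitions):
$Z \times_U Z \simeq \coprod_{g,h \in G} X^g \cap X^h$, the projections $p_1$, $p_2$
send the $(g,h)$-component to $X^g$ and $X^h$, and $m$ maps it into $X^{gh}$ via the
inclusion $X^g \cap X^h \subset X^{gh}$.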
\begin{conj} When $\mathcal{X} = [X/G]$ and $U = X$ the above
product $(\mathcal{F}, \mathcal{G}) \mapsto \mathcal{F} * \mathcal{G}$
induces a product on the periodic cyclic homology of $Z = \coprod_{g \in G} X^g$
which coincides with the product on $A(X, G) = H^*(Z, \mathbb{C})$ described
in Section 3 of \cite{Ru}. Thus, taking the (co)invariants with respect to
$G$-action gives the orbifold product structure on $H^*_{orb}(X/G, \mathbb{C})$.
\end{conj}
Note that the only place where smoothness of $\mathcal{X}$ is important
is the relation between cyclic homology of coherent sheaves and the
orbifold cohomology groups. However, we could \textit{define} orbifold
cohomology for singular Deligne-Mumford stacks via cyclic homology of
coherent sheaves; such a definition, perhaps, would give invariants which
behave better than those defined via the usual cohomology.
Department of Mathematics
253-37 Caltech
Pasadena, CA 91125, USA
email: [email protected]
\end{document}
\begin{document}
\sloppy
\title{
Dynamical aspects of quantum entanglement
for coupled mapping systems
}
\section{Introduction}
Quantum information processing~\cite{NC00}
has been recognized as a new paradigm of science
because of its fundamentality, applicability, and
interdisciplinarity.
A quantum computer, a typical example of this paradigm,
must consist of
a large number of (interacting) qubits to be really effective, and its size cannot be microscopic.
Such a ``complex'' structure of an effective
quantum computer causes many problems due to decoherence
\cite{decoherence} and ``quantum chaos''.\cite{GS00}
Here we investigate the relation between
(quantum) entanglement (which is an essential ingredient of
quantum information processings) and chaotic behavior of the
corresponding classical systems. The systems we employ are
weakly coupled mapping systems, and each subsystem can be
chaotic in the classical limit.
One may imagine the following situation:
A quantum computer has many processing units which
are ``chaotic'' in some sense (e.g., in the classical limit),
and they weakly interact with each other.
How much entanglement is generated between the units
in such a situation? This question is interesting
from the viewpoint of the robustness of quantum computation.\cite{GS00}
Due to the weakness of the coupling,
we utilize a perturbation theory to analyze the
entanglement production in the system considered.
In this paper, we discuss the generality of our formula by employing
two typical chaotic systems,
i.e., coupled kicked tops and rotors (Sec.~\ref{sec:mapping}).
Numerical results for both systems are presented
(Sec.~\ref{sec:numerical}).
To complement our previous papers,~\cite{TFM02,FMT03}
we also discuss the wavefunction properties
of the subsystems in the entanglement production region
employing the Husimi representation (Sec.~\ref{sec:husimi}).
\section{Coupled mapping systems and
the perturbative formula for entanglement production}
\label{sec:mapping}
Here we consider a composite system which consists
of two subsystems.
We denote one-step time-evolution operators
for each subsystem as $U_1$ and $U_2$.
We also introduce another one-step time-evolution operator
$U_{\epsilon}$ which describes the interaction between them.
Furthermore, the coupling time-evolution
operator is assumed to be
\begin{equation}
U_{\epsilon}= \exp \{ -i \epsilon V /\hbar \}
\end{equation}
where $V=\sum_{\alpha} q_{\alpha}^{(1)} \otimes q_{\alpha}^{(2)}$
with $q_{\alpha}^{(i)}$ being
the $\alpha$-th dynamical variable of subsystem $i$,
and $\epsilon$ is a coupling parameter.
The latter is used in the perturbative treatment below.
Hence the one-step time evolution for the whole system is
described by
\begin{equation}
\label{eq:defCKTU}
|\Psi(t+1) \rangle =
U_{\epsilon} U_1 U_2 |\Psi(t) \rangle.
\end{equation}
Since we only examine the case where the whole
system is in a pure state,
we quantify the entanglement production by the
linear entropy of the subsystem
\begin{eqnarray}
S_{\rm lin}(t) = 1- {\rm Tr}_1 \{ \rho^{(1)}(t)^2 \},
\label{eq:defSlin}
\end{eqnarray}
where $\rho^{(1)}(t)$ is the reduced density
operator for the first subsystem.
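As a reminder (standard properties of the linear entropy, assuming the whole system is
in a pure state): $S_{\rm lin}(t) = 0$ if and only if the state is a product state, and
$S_{\rm lin}(t) \leq 1 - 1/N$, where $N$ is the dimension of the smaller subsystem
Hilbert space (e.g., $N = 2j+1$ for the coupled kicked tops below).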
Using the time-dependent perturbation theory,
we derive a formula (see Refs.~\citen{TFM02,FMT03}
for a simpler case and its derivation)
for the linear entropy: $S_{\rm lin}(t) =
S^{\rm PT}_{\rm lin}(t) + {\cal O}(\epsilon^3)$ as
\begin{equation}
\label{eq:miyaji}
S^{\rm PT}_{\rm lin}(t)
= S_0 \sum_{l=1}^t \sum_{m=1}^t D(l,m)
\end{equation}
where $S_0=2 \epsilon^2/\hbar^2$ and
$D(l,m)$ is a correlation function
of the uncoupled system:
\begin{equation}
\label{eq:defD}
D(l,m) = \sum_{\alpha,\beta}
C^{(1)}_{\alpha,\beta}(l,m) C^{(2)}_{\alpha,\beta}(l,m)
\end{equation}
with
\begin{equation}
\label{eq:defCi}
C^{(i)}_{\alpha,\beta}(l,m)
= \langle q^{(i)}_{\alpha}(l) q^{(i)}_{\beta}(m) \rangle_i
- \langle q^{(i)}_{\alpha}(l) \rangle_i
\langle q^{(i)}_{\beta}(m) \rangle_i
\end{equation}
and $q^{(1)}_{\alpha}(l)=(U_1^{l})^{\dagger} q^{(1)}_{\alpha} U_1^{l}$ etc.
represents a free time evolution of the subsystem's variable
without interaction.
In the next section,
we apply this result to two model mapping systems, i.e.,
coupled kicked tops and rotors.
\section{Stronger chaos does not imply larger entanglement
production rate}
\label{sec:numerical}
We introduce two coupled kicked systems as follows.
Coupled kicked tops are described by the
following unitary operators:
\begin{eqnarray}
U_1 &=& e^{-i k_1 J_{z_1}^2/(2j \hbar)} e^{-i \pi J_{y_1}/(2 \hbar)},
\\
U_2 &=& e^{-i k_2 J_{z_2}^2/(2j \hbar)} e^{-i \pi J_{y_2}/(2 \hbar)},
\\
U_{\epsilon} &=& e^{-i \epsilon J_{z_1} J_{z_2}/(j \hbar)},
\end{eqnarray}
and coupled kicked rotors are described by the following:
\begin{eqnarray}
U_1 &=& e^{-i I_1^2/(2 \hbar)} e^{-i k_1 \cos \theta_1/\hbar},
\\
U_2 &=& e^{-i I_2^2/(2 \hbar)} e^{-i k_2 \cos \theta_2/\hbar},
\\
U_{\epsilon} &=& e^{-i \epsilon \cos(\theta_1-\theta_2)/\hbar},
\end{eqnarray}
where $k_1$, $k_2$ represent the strength of nonlinearity
related to chaotic properties of the systems.
For detailed information on the former and the latter systems,
see Refs.~\citen{TFM02,FMT03} and Ref.~\citen{TAI89}, respectively.
We note that the periodic boundary
conditions with period $2 \pi$
are imposed for both variables $\theta_i$ and $I_i$.
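For orientation (a standard correspondence, not spelled out above): in the classical limit
each uncoupled kicked rotor reduces to the standard map
\begin{equation}
I_{n+1} = I_n + k \sin \theta_n, \qquad
\theta_{n+1} = \theta_n + I_{n+1} \quad ({\rm mod}~2\pi),
\end{equation}
whose dynamics becomes increasingly chaotic as $k$ grows, so the parameters $k_1$ and
$k_2$ directly control the strength of chaos of the subsystems.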
As an initial state,
we take a product, i.e., separable, state
of spin coherent states\cite{FMT03}
(coherent states)
for the coupled kicked tops (rotors).
In Fig.~\ref{fig:linear},
according to Eq.~(\ref{eq:defSlin}), we calculate the time
evolution of the linear entropy
for ``chaotic'' initial conditions,
i.e., initial conditions whose corresponding
classical states in phase space are embedded in chaotic seas.
For {\it both} cases,
the linear entropy increases linearly as a function of time
in this transient region, which is called
the $t$-linear entanglement production region in the following,
before the ``equilibration'' of the entropy.
In these cases, the coupling is in a sense weak, and
we apply the formula, Eq.~(\ref{eq:miyaji}),
to these situations.
\begin{figure}
\caption{Time evolution of the linear entropy $S_{\rm lin}(t)$ for chaotic initial conditions.}
\label{fig:linear}
\end{figure}
From classical intuition, we expect that
the correlation functions $D(l,m)$ decay very quickly
as a function of $|l-m|$
for strongly chaotic cases (this is numerically confirmed for
a kicked top\cite{TFM02,FMT03} and a kicked rotor\cite{Shepelyansky83}),
so we assume the form
\begin{equation}
\label{eq:Dpheno}
D(l,m) \simeq D_0 e^{-\gamma|l-m|}.
\end{equation}
Using the above, we can derive the following expression
for the entanglement production rate:
\begin{equation}
\label{eq:dMdt}
\Gamma \equiv
\left. \frac{d S^{\rm PT}_{\rm lin}(t)}{dt} \right|_{t \gg 1/\gamma}
\simeq
\Gamma_0 \coth (\gamma/2)
\end{equation}
with $\Gamma_0=S_0 D_0$.
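For completeness, here is one way to obtain Eq.~(\ref{eq:dMdt}) from
Eqs.~(\ref{eq:miyaji}) and (\ref{eq:Dpheno}) (a sketch of ours; the cited references may
argue differently). Counting the pairs $(l,m)$ with $|l-m|=d$ gives
\begin{equation}
S^{\rm PT}_{\rm lin}(t) \simeq S_0 D_0 \Big[ t + 2 \sum_{d=1}^{t-1} (t-d)\, e^{-\gamma d} \Big]
\simeq S_0 D_0\, \frac{1+e^{-\gamma}}{1-e^{-\gamma}}\, t + {\rm const}
= \Gamma_0 \coth (\gamma/2)\, t + {\rm const}
\end{equation}
for $t \gg 1/\gamma$, and the slope of this linear growth is Eq.~(\ref{eq:dMdt}).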
Since $\gamma$ typically becomes large as the nonlinearity parameters increase,
and numerical experiments show
that $\gamma$ is correlated with the sum
of the positive Lyapunov exponents in the
coupled kicked tops~\cite{FMT03},
this formula implies that strong chaos ($\gamma \rightarrow \infty$)
leads to a saturation of the entanglement production rate
($\Gamma \rightarrow \Gamma_0$).
This prediction is actually confirmed by numerical
experiments for {\it both} systems
(coupled kicked tops and rotors)
as shown in Fig.~\ref{fig:nonlinear}.
The above is a result for strongly chaotic cases.
In contrast, for weakly chaotic cases the same
formula gives the following picture:
For strongly chaotic cases {\it with bounded phase space},
$D_0$ saturates to a certain value determined by a
statistical argument, whereas, for weakly chaotic cases,
$D_0$ grows as the nonlinearity parameters increase.
This roughly explains the common belief:
{\it chaos enhances entanglement for weakly chaotic systems}.
(See the cited references in Refs.~\citen{TFM02,FMT03}.)
This seems to contradict our result,
but the point is that we focus on strongly chaotic
cases where $D_0$ saturates, and in such a situation,
stronger chaos saturates the entanglement production rate
for weakly coupled mapping systems.
We also comment that this formula can be easily extended
to describe flow systems with continuous time~\cite{FMT03},
which is useful when applying it to more realistic systems.
\begin{figure}
\caption{Entanglement production rate as a function of the nonlinearity parameters for coupled kicked tops and rotors.}
\label{fig:nonlinear}
\end{figure}
\section{Husimi representation as
a tool to study entanglement production}
\label{sec:husimi}
The linear entropy which we employ as a measure of
entanglement is calculated from the reduced
density operator $\rho^{(1)}(t)$.
Hence it is natural to ask what happens in
$\rho^{(1)}(t)$ itself when the entanglement
production occurs.
Here we shall address this issue
using the results of coupled kicked tops.
In Fig.~\ref{fig:wave} (b),
we show the absolute value
of the reduced density matrix
$\rho^{(1)}_{m_1,m_2}(t)
=
\langle j m_1 | \rho^{(1)}(t) | j m_2 \rangle$
during the $t$-linear entanglement production region
(see Sec.~\ref{sec:numerical}).
We also show the corresponding density matrix of a single kicked
top [Fig.~\ref{fig:wave} (a)].
Here $|jm \rangle$ is the simultaneous eigenstate
of $J^2$ and $J_z$ for top 1.
From the previous works~\cite{SKO96},
we expect that the decay of the off-diagonal elements
of the reduced density matrix reflects entanglement production.
Such an analysis is useful when the entanglement production
is almost equilibrated;
however, the two cases in Fig.~\ref{fig:wave} are very hard to
distinguish since
the system is still in
the $t$-linear entanglement production region.
(Of course, after a long time,
the off-diagonal elements finally decay.)
To solve this issue,
we propose to examine the Husimi representation of the
density operator~\cite{Takahashi}.
\begin{figure}
\caption{Absolute values of (a) the density matrix of a single kicked top and (b) the reduced density matrix of the coupled kicked tops during the $t$-linear entanglement production region.}
\label{fig:wave}
\end{figure}
The Husimi function for the kicked top system is defined by
\begin{equation}
H(\theta,\phi)=
\langle \theta,\phi | \rho^{(1)}(t) | \theta,\phi \rangle,
\end{equation}
where $|\theta,\phi \rangle$ is a spin-coherent state given by
\begin{equation}
\langle j m| \theta,\phi\rangle
=\frac{\gamma^{j-m}}{(1+|\gamma|^2)^j} \sqrt{\frac{(2j)!}{(j+m)!(j-m)!}}
\end{equation}
with $\gamma=e^{i \phi} \tan (\theta/2)$.
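Note that the coherent state is normalized: by the binomial theorem,
\[
\sum_{m=-j}^{j} |\langle j m|\theta,\phi\rangle|^2
=\frac{1}{(1+|\gamma|^2)^{2j}}\sum_{m=-j}^{j}
\frac{(2j)!}{(j+m)!(j-m)!}\,|\gamma|^{2(j-m)}
=\frac{(1+|\gamma|^2)^{2j}}{(1+|\gamma|^2)^{2j}}=1,
\]
so that $0\le H(\theta,\phi)\le 1$ for any reduced density operator.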
For simplicity of visualization,
we take a smaller value of $j$ ($j=30$ instead of $j=80$ used in
Figs.~\ref{fig:linear} and \ref{fig:nonlinear}).
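As a minimal numerical sketch (for illustration only; the routine names and
the maximally mixed test state below are our own choices, not part of the
analysis), the Husimi function can be evaluated pointwise from a given
reduced density matrix as follows:
\begin{verbatim}
# Minimal sketch: pointwise evaluation of the Husimi function
# H(theta, phi) = <theta,phi| rho |theta,phi> for a spin-j state rho
# given in the |j m> basis ordered as m = -j, ..., j.
import numpy as np
from math import comb

def coherent_state(j, theta, phi):
    # Components <j m|theta,phi>, using the expansion quoted above.
    gamma = np.exp(1j * phi) * np.tan(theta / 2.0)
    m = np.arange(-j, j + 1)
    weights = np.sqrt([comb(2 * j, j - int(mm)) for mm in m])
    return weights * gamma ** (j - m) / (1.0 + abs(gamma) ** 2) ** j

def husimi(rho, j, theta, phi):
    v = coherent_state(j, theta, phi)
    return np.real(np.vdot(v, rho @ v))

j = 30
rho = np.eye(2 * j + 1) / (2 * j + 1)   # maximally mixed test state
print(husimi(rho, j, 0.7, 1.3))         # ~ 1/(2j+1)
\end{verbatim}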
Though the normal-scale plots do not show
any significant fingerprint of entanglement production
due to the interaction (Fig.~\ref{fig:husimi1}),
the log-scale plots do show it (Fig.~\ref{fig:husimi2}).
The Husimi zeros are easily confirmed in Fig.~\ref{fig:husimi2} (a).
They become local minima with positive values
as the interaction strength gets larger
[Fig.~\ref{fig:husimi2} (b)].
Husimi zeros are considered to characterize the ``complexity''
of the wavefunction~\cite{LVT},
so quantifying these local minima,
the traces of the Husimi zeros,
might characterize the entanglement production
from a ``complexity theory'' point of view~\cite{Adachi}.
Many measures for the complexity of a Husimi function
have been proposed~\cite{LVT,Takahashi}, but
we suppose that another measure, e.g.,
the phase-space-averaged curvature of the Husimi function,
is needed to quantify the complexity related to entanglement production.
This issue will be discussed elsewhere.
\begin{figure}
\caption{\label{fig:husimi1}}
\end{figure}
\begin{figure}
\caption{\label{fig:husimi2}}
\end{figure}
\section{Summary and outlook}
\label{sec:summary}
We have investigated how the production rate of
quantum entanglement
is affected by the chaotic properties of the corresponding
classical system using coupled kicked tops and rotors.
We have derived and numerically confirmed that
{\it the increment of the strength of chaos does not
enhance the production rate of entanglement}
when the coupling is weak enough and the subsystems are strongly chaotic.
We believe that this conclusion is general and applicable
to any mapping system {\it with bounded phase space}.
It is often believed that {\it chaos enhances entanglement},
but that can be the case in weakly chaotic regimes,
and such effects can also be explained by the
factor $D_0$ in our formula, Eq.~(\ref{eq:dMdt}).
Note again that our conclusion above is essentially for
strongly chaotic regimes where $D_0$ saturates.
We also proposed to use the logarithmic plot of the
Husimi representation of a reduced density operator
to investigate entanglement production in coupled
quantum systems.
Since the dynamical aspects of entanglement production
have been studied in relation to quantum computing\cite{BS03},
we hope that our analysis will be useful in studies of
quantum information processing.
In Ref.~\citen{GS00}, Prosen and Znidaric showed
that quantum computing can be more robust
with the use of quantum chaotic systems
compared to the corresponding regular systems.
We expect that the saturation of entanglement production
found here will be utilized in such a situation.
In addition, entanglement production is strongly related
to decoherence processes \cite{decoherence},
so we expect that there are many applications of
our results to quantum dynamical processes
such as quantum optical processes in environments\cite{MS98}
or chemical reactions in solvents\cite{Okazaki01}.
One of the authors (H.F.) thanks
T.~Prosen, K.~Takahashi, M.~Toda, S.~Kawabata, and K.~Saito for
useful comments.
\end{document}
\begin{document}
\title[Admissibility Conjecture and Property (T)]
{Admissibility Conjecture and Kazhdan's Property (T) for quantum groups}
\author{Biswarup Das}
\address{Department of Mathematical Sciences, University of Oulu, Finland.}
\address{Instytut Matematyczny, Uniwersytet Wroc\l awski, Poland.}
\email{[email protected]}
\author{Matthew Daws}
\address{Jeremiah Horrocks Institute, University of Central Lancashire, UK.}
\email{[email protected]}
\author{Pekka Salmi}
\address{Department of Mathematical Sciences, University of Oulu, Finland.}
\email{[email protected]}
\begin{abstract}
We give a partial solution to a long-standing open problem in the
theory of quantum groups, namely we prove that all finite-dimensional
representations of a wide class of locally compact quantum groups
factor through matrix quantum groups (Admissibility Conjecture for
quantum group representations). We use this to study Kazhdan's
Property (T) for quantum groups with non-trivial scaling group,
strengthening and generalising some of the earlier results obtained
by Fima, Kyed and So\l tan, Chen and Ng, Daws, Skalski and
Viselter, and Brannan and Kerr. Our main results are:
\begin{enumerate}[label=\textup{(\roman*)}]
\item
All finite-dimensional unitary representations of locally compact
quantum groups which are either unimodular or arise through a
special bicrossed product construction are admissible.
\item
A generalisation of a theorem of Wang which characterises Property
(T) in terms of isolation of finite-dimensional irreducible
representations in the spectrum.
\item
A very short proof of the fact that quantum groups with Property (T)
are unimodular.
\item
A generalisation of a quantum version of a theorem of Bekka--Valette
proven earlier for quantum groups with trivial scaling group,
which characterises Property (T) in terms of non-existence of almost
invariant vectors for weakly mixing representations.
\item
A generalisation of a quantum version of Kerr--Pichot theorem, proven
earlier for quantum groups with trivial scaling group,
which characterises Property (T) in terms of denseness properties of
weakly mixing representations.
\end{enumerate}
\end{abstract}
\maketitle
\section{Introduction}
Property (T) was introduced in the mid-1960s by Kazhdan, as a tool to
demonstrate that a large class of lattices are finitely generated. The
discovery of Property (T) was a cornerstone in group theory and the
last decade saw its importance in many different subjects like ergodic
theory, abstract harmonic analysis, operator algebras and some of the
very recent topics like C*-tensor categories (see
\cite{bekka_property_T, Connes_Weiss,
popa_subfactors, neshveyev_Drinfeld_Center} and references
therein).
In the late 1980s the subject of operator algebraic quantum
groups gained prominence starting with the seminal work of Woronowicz
\cite{woroSUq2}, followed by works of Baaj, Skandalis,
\hbox{Woronowicz}, Van Daele, Kustermans, Vaes and others
\cite{baaj-skand,woro_mutiplicative,woro,
kusvaes,masuda_woronowicz_nakagami}. Quantum
groups can be looked upon as noncommutative analogues of locally
compact groups, so quite naturally the notion
of Property (T) appeared also in that more general context.
Property (T) was first studied within the framework of Kac
algebras (a precursor to the theory of locally compact quantum groups)
\cite{petrescu_property_T_for_Kac}, then for algebraic quantum groups
\cite{conti_property_T_algebraic_quantum_groups} and discrete quantum
groups \cite{Fima_property_T_discrete_quantum_group,kyed}, and more recently
for locally compact quantum groups
\cite{xiao_property_T,daws_skalski_viselter,brannan_property_T}.
By definition a locally compact group $G$ has Property (T) if every
unitary representation with approximately invariant vectors
has in fact a non-zero invariant vector. This definition
extends verbatim to locally compact \emph{quantum} groups,
using the natural extensions of the necessary terms.
By a result of Fima, a discrete quantum group having Property (T)
is necessarily a Kac algebra, which is equivalent to being unimodular
in the case of discrete quantum groups, and is the dual of a compact
matrix quantum group
\cite[Propositions~7~\&~8]{Fima_property_T_discrete_quantum_group}.
This is a quantum generalisation of a result originally due to Kazhdan
\cite[Theorem 1.3.1 \& Corollary~1.3.6]{bekka_property_T}.
In particular, while studying Property (T) for discrete quantum
groups, one is led to consider only unimodular discrete quantum
groups. Since a discrete quantum group is
unimodular if and only if it is of Kac type, unimodular discrete
quantum groups have trivial scaling automorphism groups,
and this is important in what follows.
Generalising to locally compact quantum groups,
Brannan and Kerr proved that a second countable
locally compact quantum group with Property~(T) is necessarily unimodular
\cite[Theorem~6.3]{brannan_property_T} --
a result for which we will also give a new and short proof
without the second countability assumption.
So again, while studying Property (T) for quantum groups, one is led
to consider only unimodular quantum groups.
However, a \emph{unimodular} locally compact quantum group can
have a \emph{non-trivial scaling automorphism group}. Examples of
such locally compact quantum groups are Drinfeld doubles
of non-Kac-type compact quantum groups: see Section
\ref{sec:quantum-double}.
A recent result of Arano (see
\cite[Theorem 7.5]{arano1}), which finds applications in the study of
C*-tensor categories and subfactors \cite{neshveyev_Drinfeld_Center,
popa_subfactors}, states that the Drinfeld double of the
Woronowicz compact quantum group $SU_q(2n+1)$ has Property (T). This
produces a concrete example of a unimodular locally compact quantum
group with non-trivial scaling automorphism group, which has
Property~(T).
In this paper, we study Property (T) and related problems,
in particular on unimodular locally compact quantum groups
with non-trivial scaling automorphism group.
To enable this study, we prove the
`Admissibility Conjecture' for unimodular locally compact quantum
groups, that is, we show that every finite-dimensional
unitary representation of a unimodular locally compact quantum group
is admissible. The Admissibility Conjecture is a long-standing open
problem in the theory of quantum groups, which was implicitly stated
in \cite{soltan} and was conjectured in
\cite[Conjecture~7.2]{daws}. Admissibility of a finite-dimensional
unitary representation of a quantum group means effectively that it
`factors' through a compact matrix quantum group.
Returning to locally compact groups,
we note the following important characterisation of
Property (T) by Bekka and Valette
\cite[Theorem~1]{bekka_property_T_amenable_representations}:
a locally compact group $G$ has Property (T) if and only if
every unitary representation of $G$ with approximately invariant vectors
is not weakly mixing (i.e.\ admits a non-zero finite-dimensional
subrepresentation).
This characterisation turns out to be more useful from the application
perspective than the definition itself, as has been elucidated in
\cite{brannan_property_T}. An important consequence of the
Bekka--Valette theorem is the Kerr--Pichot theorem which states that if
$G$ does not have Property (T), then within the set of all unitary
representations on a fixed separable Hilbert space, the weakly mixing
ones form a dense $G_\delta$-set in the weak topology, strengthening
an earlier result of Glasner and Weiss
\cite[Theorem~2$^\prime$]{glasner_property_T} concerning
the density of ergodic representations. Another important result
characterising
Property (T) is that $G$ has Property (T) if and only if the trivial
representation is isolated in the hull--kernel topology
of the dual space $\widehat{G}$ \cite{wang_property_T}. A theorem of
Wang \cite[Theorem~2.1]{wang_property_T} (see also \cite[Theorem
1.2.5]{bekka_property_T}) extends this to all irreducible
finite-dimensional unitary representations of $G$, i.e.\ $G$ has
Property (T) if and only if all irreducible finite-dimensional
unitary representations of $G$ are isolated in $\widehat{G}$. This in
particular helps us better understand the structure of the
full group C*-algebra $C^*_u(G)$ and has other important applications
\cite[Chapter I]{bekka_property_T}.
The first quantum version of Wang's characterisation of Property (T)
was proven for discrete quantum groups (see \cite[Remark 5.4]{kyed}).
Under the additional hypothesis of having low duals,
Bekka--Valette and Kerr--Pichot theorems were proven for unimodular
discrete quantum groups
\cite[Theorem 7.3, 7.6 \& 9.3]{daws_skalski_viselter}.
Recall that for discrete quantum groups being unimodular is the
same as being a Kac algebra
and that Kac algebras form a class of locally compact quantum groups
with trivial scaling group.
The study of Property (T) on quantum groups
progressed along these lines with
the quantum versions of the theorems of Wang,
Bekka--Valette and Kerr--Pichot
generalised to quantum groups with trivial scaling groups
in \cite[Proposition 3.2 \& Theorem 3.6]{xiao_property_T} and in
\cite[Theorem 4.7, 4.8, 4.9 \& 5.1]{brannan_property_T}.
Upon giving an affirmative answer to the Admissibility Conjecture for
unimodular locally compact quantum groups (including those with
non-trivial scaling groups), we proceed to prove a quantum version of
Wang's theorem for them as well as generalised versions of
the Bekka--Valette and the Kerr--Pichot theorems.
In particular, we show that for unimodular quantum groups with
non-trivial scaling automorphism group, the weakly mixing
representations are dense in the set of representations on a separable
Hilbert space if the quantum group does not have Property (T).
\section{Notation and terminology}
\label{Subsection: LCQG}
We collect a few facts from the theory of locally compact quantum
groups, as developed in the papers
\cite{kus,kusvaes,kusvaes_vNa}, and we refer the reader to
\cite{kuster_notes_LCQG} for a summary of the main results in the
theory. We will take the viewpoint
that whenever we consider a locally compact quantum group, the symbol
$\mathbb{G}$ denotes the underlying `locally compact quantum space' of the
quantum group. From this viewpoint, for a locally compact quantum
group $\mathbb{G}$ the corresponding C*-algebra of `continuous functions on
$\mathbb{G}$ vanishing at infinity' will be denoted by $C_0(\mathbb{G})$. It is
equipped with a coassociative \emph{comultiplication}
$\cop:C_0(\mathbb{G})\to M(C_0(\mathbb{G})\otimes C_0(\mathbb{G}))$ and left and right Haar
weights $\phi$ and $\psi$ \cite[Definition~4.1]{kusvaes}
(where we use the notation that $M(A)$ denotes the multiplier algebra
of a C*-algebra $A$). An important aspect of the theory of locally
compact quantum groups is a noncommutative Pontryagin duality theory,
which in particular allows one to view both a locally compact group
and its `dual' as locally
compact quantum groups \cite[Subsection 6.2]{kuster_notes_LCQG},
\cite[Section 8]{kusvaes}. The dual of $\mathbb{G}$, which is again a
locally compact quantum group, is denoted by $\mathbb{G}dual$.
(For example if $\mathbb{G}=G$, a locally compact group, then
$C_0(\mathbb{G}dual)=C^\ast_r(G)$, the reduced group C*-algebra of $G$.)
As in the case of $\mathbb{G}$,
$C_0(\mathbb{G}dual)$ is equipped with a coassociative comultiplication
$\widehat{\cop}:C_0(\mathbb{G}dual)\to M(C_0(\mathbb{G}dual)\otimes C_0(\mathbb{G}dual))$
and left and right Haar weights $\widehat{\phi}$ and $\widehat{\psi}$.
By the definition of the dual quantum group as given in \cite[Definition
8.1]{kusvaes}, we may think of both the C*-algebras $C_0(\mathbb{G})$ and
$C_0(\mathbb{G}dual)$ as acting faithfully and non-degenerately on the
Hilbert space $L^2(\mathbb{G})$ (obtained by applying the GNS construction
to the left Haar weight $\phi$).
A locally compact quantum group is said to be \emph{compact} if
$C_0(\mathbb{G})$ is unital. Compact quantum groups themselves
have a very nice theory \cite{woro, vandaele_cqg}.
The fundamental multiplicative unitary
$\textup{W}\in M(C_0(\mathbb{G})\otimes C_0(\mathbb{G}dual))$ (called the \emph{Kac--Takesaki operator}
in the \emph{theory of Kac algebras} \cite{enock_Kac}, a precursor to
the theory of locally compact quantum groups) implements the
comultiplications as follows:
\[
\cop(x)=\textup{W}^\ast(1\otimes x)\textup{W},\qquad x\in C_0(\mathbb{G}),
\]
and
\[
\widehat{\Delta}(x)=\chi(\textup{W}(x\otimes1)\textup{W}^\ast),\qquad x\in C_0(\mathbb{G}dual),
\]
where $\chi:B(L^2(\mathbb{G})\otimes L^2(\mathbb{G}))\to B(L^2(\mathbb{G})\otimes L^2(\mathbb{G}))$ is
the flip map
\cite[Definition 6.12 \& Subsection 6.2]{kuster_notes_LCQG},
\cite[pp. 872--873, Definition 8.1]{kusvaes}.
The von Neumann algebra generated by $C_0(\mathbb{G})$
(respectively, by $C_0(\mathbb{G}dual)$) in $B(L^2(\mathbb{G}))$ will be
denoted by $L^\infty(\mathbb{G})$ (respectively, by $L^\infty(\mathbb{G}dual)$).
Then the preduals of $L^\infty(\mathbb{G})$ and $L^\infty(\mathbb{G}dual)$ are
denoted by $L^1(\mathbb{G})$ and $L^1(\mathbb{G}dual)$, respectively. The above formulas
imply that both the maps $\cop$ and $\widehat{\cop}$ can be
lifted to normal $*$-homomorphisms on $L^\infty(\mathbb{G})$ and
$L^\infty(\mathbb{G}dual)$. The preadjoints of the normal maps $\cop$ and
$\widehat{\cop}$ equip $L^1(\mathbb{G})$ and $L^1(\mathbb{G}dual)$ with the
structure of a completely contractive Banach algebra.
The universal C*-algebra $C_0^u(\mathbb{G})$ associated with $\mathbb{G}$
is the universal C*-envelope of a distinguished Banach $*$-algebra
$L^1_\sharp(\mathbb{G}dual)$ (as an algebra,
$L^1_\sharp(\mathbb{G}dual)\subset L^1(\mathbb{G}dual)$).
The C*-algebra $C^u_0(\mathbb{G})$ is equipped with a coassociative
comultiplication denoted by $\cop_u:C^u_0(\mathbb{G})\to M(C^u_0(\mathbb{G})\otimes C^u_0(\mathbb{G}))$
\cite[Proposition 6.1]{kus}. Moreover,
there exists a surjective $*$-homomorphism
$\Lambda_\mathbb{G}:C^u_0(\mathbb{G})\to C_0(\mathbb{G})$
called the \emph{reducing morphism}, which intertwines the comultiplications:
$(\Lambda_\mathbb{G}\otimes\Lambda_\mathbb{G})\circ\cop_u=\cop\circ\Lambda_\mathbb{G}$.
The comultiplication on $C_0^u(\mathbb{G}dual)$ is denoted by
$\widehat{\cop}_u$.
The dual space $C_0^u(\mathbb{G})^*$ is a completely contractive Banach
algebra with respect to the \emph{convolution} product
\[
\omega_1\conv \omega_2 = (\omega_1\otimes \omega_2)\circ \cop_u,
\qquad \omega_1, \omega_2\in C_0^u(\mathbb{G})^*.
\]
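For orientation, in the commutative case $\mathbb{G}=G$, a classical locally
compact group, the reducing morphism identifies $C_0^u(\mathbb{G})$ with $C_0(G)$
(classical groups are co-amenable), so $C_0^u(\mathbb{G})^*$ is the measure
algebra $M(G)$, and since $\cop_u(f)(s,t)=f(st)$ the product above is the
usual convolution of measures:
\[
(\mu_1\conv\mu_2)(f)=\int_G\int_G f(st)\,d\mu_1(s)\,d\mu_2(t),
\qquad f\in C_0(G),\ \mu_1,\mu_2\in M(G).
\]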
As shown in \cite[Corollary 4.3 and Proposition 5.2]{kus}, the
fundamental multiplicative unitary $\textup{W}$ admits a lift
$\text{\reflectbox{$\Ww$}}\:\!\in M(C_0(\mathbb{G})\otimes C^u_0(\mathbb{G}dual))$ called the semi-universal bicharacter
of $\mathbb{G}$. It is characterised by the following universal property:
there is a one-to-one correspondence between
\begin{itemize}
\item
unitary elements $U\in M(C_0(\mathbb{G})\otimes B)$ such that
$(\cop\otimes\iota)(U)=U_{13}U_{23}$ (here $B$ is a C*-algebra)
\item
non-degenerate $*$-homomorphisms $\pi_U:C^u_0(\mathbb{G}dual)\to M(B)$
satisfying $(\iota\otimes\pi_U)(\text{\reflectbox{$\Ww$}}\:\!)=U$.
\end{itemize}
There are two important maps associated with a locally compact
quantum group $\mathbb{G}$ related to the inverse operation of a group.
The \emph{antipode} $S$ is a densely defined norm-closed map on
$C_0(\mathbb{G})$ \cite[Section 5]{kusvaes}. It can be extended to a
densely defined strictly closed map on $M(C_0(\mathbb{G}))$
\cite[Remark 5.44]{kusvaes}. The antipode has a universal counterpart
$S_u$ which is a densely defined map on $C^u_0(\mathbb{G})$
\cite[Section 9]{kus}. The \emph{unitary antipode}
$R:C_0(\mathbb{G})\to C_0(\mathbb{G})$ is a $*$-antiautomorphism
\cite[Proposition~5.20]{kusvaes} satisfying
$(R\otimes R)\circ\cop=\chi\circ\cop\circ R$.
Its universal counterpart $R_u$ is a $*$-antiautomorphism of
$C^u_0(\mathbb{G})$ having similar properties as $R$ \cite[Proposition 7.2]{kus}.
The corresponding maps on the dual quantum group
are denoted by $\widehat{S}$, $\widehat{S}_u$, $\widehat{R}$ and
$\widehat{R}_u$. It is worthwhile to note that if $\mathbb{G}$ is a Kac
algebra, then $S=R$ and $S_u=R_u$.
In general, the antipode has a polar decomposition
$S = R\circ \tau_{-i/2}$, where $\tau_{-i/2}$ is defined by
an analytic extension of the scaling automorphism group
$(\tau_t)_{t\in\mathbb{R}}$,
where each $\tau_t: C_0(\mathbb{G}) \to C_0(\mathbb{G})$ is a $*$-automorphism.
The scaling group is implemented by the modular operator $\widehat{\nabla}$
of the dual left Haar weight.
(If $\mathbb{G}$ is a Kac algebra, $\widehat{\nabla}$ is affiliated to the center of $L^\infty(\mathbb{G})$ and consequently $\tau_t = \iota$ for every
$t\in\mathbb{R}$.)
The scaling group of the dual quantum group $\mathbb{G}dual$ is denoted
by $\widehat{\tau}$. The scaling groups have
their universal counterparts on the C*-algebras
$C^u_0(\mathbb{G})$ and $C^u_0(\mathbb{G}dual)$ and these are denoted by
$\tau^u$ and $\widehat{\tau}^u$, respectively (see \cite[Definition 4.1]{kus}).
The universal antipode has a similar decomposition
$S_u = R_u\circ \tau^u_{-i/2}$.
The modular automorphism group associated to the left Haar weight
$\phi$ on $\mathbb{G}$ is denoted by $(\sigma_t)_{t\in\mathbb{R}}$,
and its universal counterpart by $(\sigma^u_t)_{t\in\mathbb{R}}$.
As shown in \cite[Proposition 6.3]{kus} there exist \emph{counits}
$\epsilon_u:C^u_0(\mathbb{G})\to\mathbb{C}$ and
$\widehat{\epsilon}_u:C^u_0(\mathbb{G}dual)\to\mathbb{C}$,
which are $*$-homomorphisms satisfying
\[
(\epsilon_u\otimes\iota)(\cop_u(x)) = x
=(\iota\otimes\epsilon_u)(\cop_u(x)), \qquad x\in C^u_0(\mathbb{G}),
\]
and
\[
(\widehat{\epsilon}_u\otimes\iota)(\widehat{\cop}_u(x)) = x
=(\iota\otimes\widehat{\epsilon}_u)(\widehat{\cop}_u(x)),
\qquad x\in C^u_0(\mathbb{G}dual).
\]
Moreover, $(\iota\otimes\widehat{\epsilon}_u)(\text{\reflectbox{$\Ww$}}\:\!)=1$.
A \emph{representation} of a locally compact quantum group $\mathbb{G}$
on a Hilbert space $H$ is an invertible
element $U \in M(C_0(\mathbb{G})\otimes \K(H))$ such that
\begin{equation} \label{eq:corep}
(\cop\otimes \iota)(U) = U_{13}U_{23}.
\end{equation}
We are mostly interested in \emph{unitary representations} in which case
$U$ is further a unitary.
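(When $\mathbb{G}=G$ is a classical locally compact group, such a unitary
representation is a strictly continuous map $g\mapsto U_g$ into the unitary
operators on $H$, and \eqref{eq:corep} becomes the usual homomorphism
property $U_{st}=U_sU_t$ for $s,t\in G$.)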
Note that if $U\in \linf(\mathbb{G})\overline{\otimes} B(H)$ is a unitary
that satisfies \eqref{eq:corep}, then $U \in M(C_0(\mathbb{G})\otimes \K(H))$.
Indeed, \eqref{eq:corep} implies that
\[
U_{13} = \textup{W}^*_{12}U_{23}\textup{W}_{12}U_{23}^* \in
M\bigl(C_0(\mathbb{G})\otimes\K(L^2(\mathbb{G}))\otimes\K(H)\bigr)
\]
as $\textup{W}\in M\bigl(C_0(\mathbb{G})\otimes\K(L^2(\mathbb{G}))\bigr)$, and the
claim follows.
The \emph{trivial representation}
$1\otimes 1\in M(C_0(\mathbb{G})\otimes \mathbb{C})$ is denoted by $1$.
Two representations $U$ and $V$ are \emph{similar} if
there is an invertible $a\in B(H_V, H_U)$
such that $V = (1\otimes a^{-1}) U (1\otimes a)$
(where $B(H_V, H_U)$ denotes the set of bounded linear maps from $H_V$
to $H_U$). If $a$ is further a unitary,
then $U$ and $V$ are said to be (\emph{unitarily}) \emph{equivalent}.
Given a representation $U\in M(C_0(\mathbb{G})\otimes \K(H))$,
its \emph{contragredient representation}
is
\[
U^c = (R\otimes \top) U \in M(C_0(\mathbb{G})\otimes \K(\conj{H}))
\]
where $R$ is the unitary antipode,
$\top(x)\conj\xi = \conj{x^*\xi}$ for $x\in B(H)$
and $\xi \in H$ and $\conj{H}$ is the dual Hilbert space of $H$.
If $U\in M(C_0(\mathbb{G})\otimes \K(H))$ and $V\in M(C_0(\mathbb{G})\otimes \K(K))$
are representations of $\mathbb{G}$, their tensor product is
\[
U\tp V = U_{12}V_{13}\in M(C_0(\mathbb{G})\otimes\K(H\otimes K)).
\]
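It is routine to check that $U\tp V$ is indeed a representation: applying
\eqref{eq:corep} to $U$ and $V$,
\[
(\cop\otimes\iota)(U\tp V)=U_{13}U_{23}V_{14}V_{24}
=\bigl(U_{13}V_{14}\bigr)\bigl(U_{23}V_{24}\bigr)
=(U\tp V)_{13}(U\tp V)_{23},
\]
where the middle terms are legged with respect to
$C_0(\mathbb{G})\otimes C_0(\mathbb{G})\otimes\K(H)\otimes\K(K)$, the second equality holds
because $U_{23}$ and $V_{14}$ act on disjoint legs, and in the outer terms
the third leg is $\K(H\otimes K)$.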
As noted above, every unitary representation $U$ of $\mathbb{G}$ is
associated with a representation $\pi$ of the C*-algebra
$C_0^u(\mathbb{G}dual)$ via $U = (\iota\otimes \pi)\text{\reflectbox{$\Ww$}}\:\!$, and vice versa.
In particular, the trivial representation is associated with
the counit $\widehat\epsilon_u$.
If $\pi_U$ and $\pi_V$ are the representations of $C_0^u(\mathbb{G}dual)$
associated with $U$ and $V$, respectively, then
the representation $(\pi_U\otimes\pi_V)\circ\chi\circ\widehat\cop_u$
is associated with $U\tp V$.
\section{A characterisation of admissible finite-dimensional unitary
representations}
\label{Section: Admissibility of a finite-dimensional unitary
representation of a LCQG}
Let $U\in M(C_0(\mathbb{G})\otimes M_n(\mathbb{C}))$ be a finite-dimensional unitary
representation of a locally compact quantum group $\mathbb{G}$. Choosing the
standard basis for $\mathbb{C}^n$ we write $U=(U_{ij})_{i,j=1}^n$, where $U_{ij}\in M(C_0(\mathbb{G}))$
for $i,j=1$,~$2$, \ldots, $n$ are the matrix coefficients of $U$, and
we have $\cop(U_{ij})=\sum_{k=1}^nU_{ik}\otimes U_{kj}$ for
$i,j=1,2,\ldots, n$.
\begin{dfn} \label{Definition: Admissible representations of LCQG}
A finite-dimensional unitary representation $U=(U_{ij})_{i,j=1}^n$ of a locally
compact quantum group $\mathbb{G}$ is called \emph{admissible} if
$U^t:=(U_{ji})_{i,j=1}^n$ is invertible in the C*-algebra
$M(C_0(\mathbb{G})\otimes M_n(\mathbb{C}))$.
\end{dfn}
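For orientation, note that if $\mathbb{G}=G$ is a classical locally compact group,
then every finite-dimensional unitary representation is admissible: writing
$U(g)=(U_{ij}(g))$ for $g\in G$, unitarity gives
\[
U(g)^t\,\overline{U(g)}=\overline{U(g)^\ast U(g)}=1
=\overline{U(g)U(g)^\ast}=\overline{U(g)}\,U(g)^t,
\]
so $U^t$ is invertible in $M(C_0(G)\otimes M_n(\mathbb{C}))$ with inverse
$\overline{U}=(U_{ij}^\ast)_{i,j=1}^n$.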
Admissible finite-dimensional representations of locally compact
quantum groups first appeared in the work of So\l tan
\cite{soltan}, who introduced the quantum Bohr compactification of a
locally compact quantum group.
Daws \cite{daws} studied further the quantum Bohr compactification
as well as questions related to admissibility.
It was conjectured (see \cite[Conjecture 7.2]{daws})
that every finite-dimensional unitary representation
of a locally compact quantum group is admissible.
Note that this conjecture is already false if we
replace quantum group by quantum semigroup: a
counterexample due to Woronowicz is given in
\cite[Example 4.1]{wang_free_product_CQG}.
From the results in \cite{woromatrixQG} it follows that if
$U\in M(C_0(\mathbb{G})\otimes M_n(\mathbb{C}))$ is an admissible finite-dimensional unitary
representation, then the C*-algebra generated by the matrix
coefficients of $U$ in $M(C_0(\mathbb{G}))$ gives rise to a compact matrix quantum
group. It follows that a finite-dimensional unitary representation
of a locally compact quantum group is admissible if and only if
it factors, in this sense,
through a compact quantum group (as finite-dimensional
representations of compact quantum groups are admissible).
It is worthwhile to note that the use of $C_0(\mathbb{G})$ above is
purely a matter of convenience: we can do similar considerations for
$C^u_0(\mathbb{G})$ as well.
The linear span of all matrix coefficients of admissible unitary
representations of $\mathbb{G}$ is denoted by $\aphopf(\mathbb{G})$.
Note that $\aphopf(\mathbb{G})$ is a Hopf $*$-algebra. Its norm closure
in $M(C_0(\mathbb{G}))$ is denoted by $\ap(\mathbb{G})$.
It may be that $\ap(\mathbb{G})$ is not the universal C*-completion of
$\aphopf(\mathbb{G})$. The compact quantum group
associated with $\aphopf(\mathbb{G})$ is the \emph{quantum Bohr compactification}
of $\mathbb{G}$ and is denoted by $\mathfrak{b}\mathbb{G}$. See \cite{soltan, daws}
for more details.
Next we characterise the admissibility of a finite-dimensional
unitary representation in terms of the
scaling group.
\begin{ppsn}\label{Proposition: covariant finite-dimensional C* representations give admissible corepresentations}
Let $\pi:C^u_0(\mathbb{G}dual)\to M_n(\mathbb{C})$ be a non-degenerate
$*$-homomorphism, $U=(\iota\otimes\pi)(\text{\reflectbox{$\Ww$}}\:\!)$ and
$L_U = \linsp\set{U_{ij}}{i,j=1,2,\ldots, n}$. Then the
following statements are equivalent:
\begin{enumerate}[label=\textup{(\roman*)}]
\item
The representation $U\in M(C_0(\mathbb{G})\otimes M_n(\mathbb{C}))$ is admissible.
\item
The vector space $L_U$ is invariant under the scaling group $(\tau_t)$.
\item
There exists a strongly continuous one-parameter automorphism group
$(\alpha_t)$ on $M_n(\mathbb{C})$ such that
\[
\pi\circ\widehat{\tau}^u_t=\alpha_t\circ\pi\qquad \text{for every }~t\in\mathbb{R}.
\]
\end{enumerate}
\end{ppsn}
\begin{proof}
(i)$\implies$(iii):
Since $U=(U_{ij})$ is admissible,
$U_{ij}\in \aphopf(\mathbb{G})$ for all $i, j=1, 2, \ldots, n$.
Let $\Theta:C^u(\mathfrak{b}\mathbb{G})\to \ap(\mathbb{G})\subset M(C_0(\mathbb{G}))$
be the canonical quantum group morphism from the universal
C*-completion of $\aphopf(\mathbb{G})$ onto the closure of $\aphopf(\mathbb{G})$.
Let $\widehat{\Theta}:C_0^u(\mathbb{G}dual)\to M(C^u_0(\widehat{\mathfrak{b}\mathbb{G}}))$
be the dual morphism so that
$(\iota\otimes\widehat\Theta)\text{\reflectbox{$\Ww$}}\:\! = (\Theta\otimes\iota){\mathds{V}\!\!\text{\reflectbox{$\mathds{V}$}}}_{\mathfrak{b}\mathbb{G}}$
(see \cite[Corollary 4.3]{mrw}).
Since $\widehat{\mathfrak{b}\mathbb{G}}$ is a discrete quantum group, we can drop $u$ from the
notation and write $c_0(\widehat{\mathfrak{b}\mathbb{G}})$ for notational convenience.
Let $\wt U\in M(C^u(\mathfrak{b}\mathbb{G})\otimes M_n(\mathbb{C}))$
be the lift of the representation $U$,
i.e.\ $(\Theta\otimes\iota)\wt U = U$.
Then there is a non-degenerate $*$-homomorphism
$\phi:c_0(\widehat{\mathfrak{b}\mathbb{G}})\to M_n(\mathbb{C})$
such that $\wt U = (\iota\otimes\phi){\mathds{V}\!\!\text{\reflectbox{$\mathds{V}$}}}_{\mathfrak{b}\mathbb{G}}$.
We have
\[
(\iota\otimes\phi\circ\widehat\Theta)\text{\reflectbox{$\Ww$}}\:\!
= (\Theta\otimes\phi){\mathds{V}\!\!\text{\reflectbox{$\mathds{V}$}}}_{\mathfrak{b}\mathbb{G}} = U = (\iota\otimes \pi)\text{\reflectbox{$\Ww$}}\:\!,
\]
which implies that $\pi=\phi\circ\widehat{\Theta}$.
There is an unbounded strictly positive operator $K$
affiliated to the von Neumann algebra $\ell^\infty(\widehat{\mathfrak{b}\mathbb{G}})$
such that $K$ implements the scaling group $(\tau'_t)$
of $\widehat{\mathfrak{b}\mathbb{G}}$ in the sense that
$\tau'_t(x) = K^{-2it} x K^{2it}$ for
every $x\in\ell^\infty(\widehat{\mathfrak{b}\mathbb{G}})$ and $t\in\mathbb{R}$ \cite[Proposition 4.3]{discrete_QG_daele}.
Note that $K^{2it}\in \ell^\infty(\widehat{\mathfrak{b}\mathbb{G}})$ for all $t\in\mathbb{R}$.
Now for $x\in C_0^u(\mathbb{G}dual)$ and $t\in\mathbb{R}$,
\[
\pi(\widehat{\tau}^u_t(x))
= \phi(\widehat{\Theta}(\widehat{\tau}^u_t(x)))
= \phi(\tau^\prime_t(\widehat{\Theta}(x)))
\]
since $\widehat{\Theta}$ intertwines the scaling groups
as a morphism of quantum groups (by \cite[Remark 12.1]{kus}).
Continuing the calculation, we have
\[
\pi(\widehat{\tau}^u_t(x))
= \phi\bigl(K^{-2it}\widehat{\Theta}(x)K^{2it}\bigr)
= \phi(K^{-2it})\pi(x)\phi(K^{2it}).
\]
Defining $\alpha_t(A) = \phi(K^{-2it}) A\phi(K^{2it})$ for $t\in\mathbb{R}$
and $A\in M_n(\mathbb{C})$ yields the desired result.
(iii)$\implies$(ii):
From the version of \cite[Proposition 9.1]{kus} for $C^u_0(\mathbb{G}dual)$,
we have $(\tau_t\otimes\iota)(\text{\reflectbox{$\Ww$}}\:\!) = (\iota\otimes\widehat{\tau}^u_{t})\text{\reflectbox{$\Ww$}}\:\!$
for all $t\in\mathbb{R}$. Then
\begin{equation*}
\begin{split}
(\tau_t\otimes\iota)(U) &= (\tau_t\otimes\iota)\circ (\iota \otimes\pi)(\text{\reflectbox{$\Ww$}}\:\!)
=(\iota\otimes \pi\circ\widehat{\tau}^u_t)(\text{\reflectbox{$\Ww$}}\:\!)\\
&=(\iota\otimes \alpha_t\circ\pi)(\text{\reflectbox{$\Ww$}}\:\!) = (\iota\otimes \alpha_t)(U).
\end{split}
\end{equation*}
It follows that $L_U$ is invariant under the scaling group $(\tau_t)$.
(ii)$\implies$(i):
Let $\tau_z$ denote the
analytic extension of $(\tau_t)_{t\in\mathbb{R}}$ at $z\in\mathbb{C}$ in the
$\sigma$-weak topology on $L^\infty(\mathbb{G})$. For $x\in L_U$ define in a standard
way the smear of $x$
\[ x_n = \frac{n}{\sqrt\pi} \int_{\mathbb R}
e^{-n^2t^2} \tau_t(x) \ dt, \]
where the integral converges in the $\sigma$-weak topology.
Each $x_n$ is analytic for $(\tau_t)$.
As $L_U$ is invariant under the scaling group and is finite-dimensional,
it follows that $x_n \in L_U$ for each $n$. It follows that $L_U \subseteq
D(\tau_z)$ for all $z$. In particular, $L_U\subset D(\tau_{{\frac{i}{2}}})
= D(S\inv)$, due to the polar
decomposition of $S\inv$. It then follows that
$U_{ij}^\ast\in D(S)$ for all $i,j=1,2,\ldots, n$. By
\cite[Proposition 3.11]{daws}, $U$ is admissible.
\end{proof}
\begin{rmrk}\label{Remark: Admissiblity conjecture is true for Kac algebras}
It follows from Proposition \ref{Proposition: covariant
finite-dimensional C* representations give admissible
corepresentations} that for quantum groups with trivial scaling
automorphism group, all finite-dimensional unitary representations are
admissible. In particular the Admissibility Conjecture
holds for Kac algebras, as also noted in \cite{daws}.
\end{rmrk}
\begin{rmrk}\label{Remark: Admissiblity conjecture implies
representation is covariant}
For a general locally compact quantum group, it follows that
admissible representations of $\mathbb{G}$
correspond to those finite-dimensional representations of
$C^u_0(\mathbb{G}dual)$ which are covariant with respect to the scaling
action of $\mathbb{R}$ on $C^u_0(\mathbb{G}dual)$.
\end{rmrk}
\begin{rmrk}\label{Remark: CQG}
Let $U\in M(C_0(\mathbb{G})\otimes M_n(\mathbb{C}))$ be admissible. Then $U$ is a
corepresentation of the compact quantum group $AP(\mathbb{G})$. As the
inclusion $AP(\mathbb{G}) \rightarrow M(C_0(\mathbb{G}))$ intertwines $R$, we see
that $U^c$ may also be considered as a corepresentation of the compact
quantum group $AP(\mathbb{G})$. Combining \cite[Definition 1.3.8, Definition
1.4.5 and equation (1.7.1)]{nesh-tuset} we can conclude that
$\overline{U} = (U_{ij}^*)_{i,j=1}^n$ is equivalent to $U^c$, as
corepresentations of $AP(\mathbb{G})$, and hence also of $C_0(\mathbb{G})$ (it is
worthwhile to point out that in \cite{nesh-tuset}, $U^c$ and
$\overline{U}$ are what we call $\overline{U}$ and $U^c$ respectively
in this article).
\end{rmrk}
\section{Examples of locally compact quantum groups with admissible
representations}
It is an open question whether all
finite-dimensional unitary representations of a locally compact
quantum group are admissible. In this section we give
examples of locally compact quantum groups for which the
Admissibility Conjecture is true.
\subsection{Quantum groups arising from a bicrossed product
construction}
\label{Subsection: Examples of locally compact
quantum groups with admissible representations}
We recall some facts about the bicrossed product construction of a matched
pair of quantum groups and refer to \cite{bicrossed} for
details. Define a normal $*$-homomorphism $\tau$ as follows:
\[
\tau : L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{R})\to L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{R}),\quad
\tau(f)(t)=\tau_t(f(t)) \quad (f\in L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{R}),~t\in\mathbb{R})
\]
where we have identified $L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{R})$ with
$L^\infty(\mathbb{R},L^\infty(\mathbb{G}))$.
Then the map $\tau$ is a matching between the locally compact quantum groups
$\mathbb{G}$ and $\mathbb{R}$ with trivial cocycles
($\mathscr{U}=1$ and $\mathscr{V} = 1$),
in the sense of \cite[Definition 2.1]{bicrossed}.
In the notation of \cite[Definition 2.1]{bicrossed}, we define
a left action
$\alpha:L^\infty(\mathbb{R})\to L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{R})$
and a right action
$\beta:L^\infty(\mathbb{G})\to L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{R})$ by the
formulas
\[
\alpha(f)=\tau(1\otimes f)=1 \otimes f,\qquad f\in L^\infty(\mathbb{R}),
\]
and
\[
\beta(x)(t) = \tau(x\otimes 1)(t) = \tau_t(x),\qquad t\in\mathbb{R},~x\in L^\infty(\mathbb{G}).
\]
Consider the crossed products
$M:= \mathbb{G}\tensor[_\alpha]{\ltimes}{} L^\infty(\mathbb{R})$ and
$\widehat{M}:=L^\infty(\mathbb{G})\tensor[]{\rtimes}{_\beta} \mathbb{R}$.
It follows from the discussion in \cite[Subsection 2.2]{bicrossed}
that $M$ is a locally compact quantum group in the reduced form
and $\widehat{M}$ is the reduced dual. Note that since the left
action $\alpha$ is trivial, $M = L^\infty(\mathbb{G}dual)\wot L^\infty(\mathbb{R})$.
Denote the quantum group underlying $\widehat{M}$ by
$\mathbb{G}\rtimes_{\beta}\mathbb{R}$.
The left multiplicative unitary of $\mathbb{G}\rtimes_{\beta}\mathbb{R}$
is
\[
\textup{W} = \textup{W}^\mathbb{R}_{24} \bigl((\iota\otimes\beta)(\dual\textup{W}^\mathbb{G})\bigr)_{134},
\]
where the leg numbering refers to the underlying Hilbert
space $\ltwo(\mathbb{G})\otimes\ltwo(\mathbb{R})\otimes\ltwo(\mathbb{G})\otimes\ltwo(\mathbb{R})$
(see \cite[p. 141]{double_crossed_product}).
It follows from the above formula that
$C_0(\mathbb{G}\rtimes_{\beta}\mathbb{R}) = C_0(\mathbb{G})\rtimes_{\beta}\mathbb{R}$.
We will show a similar characterisation of
$C^u_0(\mathbb{G}\rtimes_{\beta}\mathbb{R})$. Consider the action
\[
\beta^u:C^u_0(\mathbb{G})\to M(C^u_0(\mathbb{G})\otimes C_0(\mathbb{R})),\qquad
\beta^u(x)(t)=\tau^u_t(x), \quad x\in C^u_0(\mathbb{G}), t\in\mathbb{R}.
\]
\begin{lmma}
$C_0^u(\mathbb{G}\rtimes_\beta \mathbb{R}) \cong C_0^u(\mathbb{G})\rtimes_{\beta^u}\mathbb{R}$.
\end{lmma}
\begin{proof}
Denote the comultiplication of $\mathbb{G}\ltimes_\alpha\mathbb{R}$ by $\Delta$.
We recall the following formula from \cite[p. 141]{double_crossed_product}:
\[
(\iota\otimes\Delta)({\textup{W}^\mathbb{G}}\otimes 1_{L^\infty(\IR)}) =
\textup{W}^\mathbb{G}_{14}\bigl((\iota\otimes\alpha)\circ\beta\otimes\iota\bigr)({\textup{W}^\mathbb{G}})_{1452},
\]
where the leg numbering is done with respect to the Hilbert space
$L^2(\qg)\otimes L^2(\qg)\otimes L^2(\IR)\otimes L^2(\qg)\otimes L^2(\IR)$.
Applying the definition of $\alpha$, we obtain
\begin{equation}\label{eq:first-rep}
(\iota\otimes\Delta)({\textup{W}^\mathbb{G}}\otimes 1_{L^\infty(\IR)}) = \textup{W}^\mathbb{G}_{14}
\bigl((\iota\otimes\chi)\circ(\beta\otimes\iota)({\textup{W}^\mathbb{G}})\bigr)_{125}.
\end{equation}
where $\chi:L^\infty(\IR)\overline{\otimes} L^\infty(\mathbb{G}dual)\to L^\infty(\mathbb{G}dual)\overline{\otimes} L^\infty(\IR)$ is the flip map.
Write $U = (\iota\otimes\chi)\circ(\beta^u\otimes\iota)({\mathds{W}^\mathbb{G}})$.
Then \eqref{eq:first-rep} says that
\[
(\Lambda_\mathbb{G}\otimes \iota\otimes\iota\otimes\iota\otimes\iota)\bigl((\iota\otimes\Delta)({\mathds{W}^\mathbb{G}}\otimes
1_{L^\infty(\IR)})\bigr) = (\Lambda_\mathbb{G}\otimes \iota\otimes\iota\otimes\iota\otimes\iota)(\mathds{W}^\mathbb{G}_{14} U_{125}).
\]
where $\Lambda_\mathbb{G}:C^u_0(\mathbb{G})\to C_0(\mathbb{G})$ is the reducing
morphism. We would like to apply \cite[Result 6.1]{kus} (or a
version of it for $\mathbb{G}$) to deduce that in fact
\begin{equation}
\label{eq:univ-rep}
(\iota\otimes\Delta)(\mathds{W}^\mathbb{G}\otimes1_{L^\infty(\IR)})=\mathds{W}^\mathbb{G}_{14}U_{125}.
\end{equation}
To this end, we need to check that both sides of the preceding
equation define corepresentations of $C_0^u(\mathbb{G})$.
Since $(\tau_t^u\otimes\tau_t^u)\circ\Delta_u=\Delta_u\circ\tau_t^u$ for all
$t\in\mathbb{R}$, it follows that $U\in M(C_0^u(\mathbb{G})\otimes \K(L^2(\qg)\otimes L^2(\IR)))$ is a
corepresentation of $C_0^u(\mathbb{G})$.
Therefore, $\mathds{W}^\mathbb{G}\tp U$ is a corepresentation and hence
also
\[
\bigl((\iota\otimes\chi\otimes\iota)(\mathds{W}^\mathbb{G}\tp U)\bigr)_{1245}
\]
where $\chi$ is the appropriate flip map.
It follows that the right-hand side of
\eqref{eq:univ-rep} is a
corepresentation, and the left-hand side clearly is as well.
Suppose that $X\in M(C_0(\mathbb{G} \ltimes_\alpha\mathbb{R})\otimes \K(H))$ is a
unitary representation of $\mathbb{G}\ltimes_\alpha\mathbb{R}$.
By \cite[Proposition 4.1]{double_crossed_product} there
are unitary representations $z\in M(C_0(\mathbb{G}dual)\otimes \K(H))$ and
$y\in M(C_0(\mathbb{R})\otimes \K(H))$
such that $X=(\alpha\otimes\iota)(y)z_{13}$, where the leg numbering
is done with respect to the Hilbert space $L^2(\qg)\otimes L^2(\IR)\otimes H$.
(The unitarity of the representations is implicit in
the proof of \cite[Proposition 4.1]{double_crossed_product}.)
The equation after equation (4.2) on page 146 of
\cite{double_crossed_product} says that
\[
(\Delta\otimes\iota)(z_{13})z^\ast_{35}
=(\alpha\otimes\iota)(y^\ast)_{345}z_{15}(\alpha\otimes\iota)(y)_{345},
\]
where the leg numbering is done with respect to the Hilbert space
\[
L^2(\qg)\otimes L^2(\IR)\otimes L^2(\qg)\otimes L^2(\IR)\otimes H.
\]
Applying the fact that $\alpha(f)=1_{L^\infty(\qg)}\otimes f$ for all $f\in L^\infty(\IR)$,
it follows that the above equation can be reduced to:
\begin{equation}\label{Equation: equation 3}
(\Delta\otimes\iota)(z_{13})z^\ast_{35}=y^\ast_{45}z_{15}y_{45}.
\end{equation}
Let $\pi: C^u_0(\mathbb{G})\to B(H)$ be the non-degenerate
C*-representation associated with $z$, so that we have
$(\iota\otimes\pi)(\widehat{\text{\reflectbox{$\Ww$}}\:\!^\mathbb{G}})=z$. Then it is easy to check that
\begin{equation*}
\begin{split}
(\Delta\otimes\iota)(z_{13})=
(\iota\otimes\iota\otimes\iota\otimes\iota\otimes\pi)
\bigl(\sigma\bigl((\iota\otimes \Delta)({\mathds{W}^\mathbb{G}}^\ast\otimes1_{L^\infty(\IR)})\bigr)\bigr),
\end{split}
\end{equation*}
where
\[
\sigma:
M(C^u_0(\mathbb{G})\otimes C_0(\mathbb{G}dual)\otimes C_0(\mathbb{R})\otimes C_0(\mathbb{G}dual)\otimes C_0(\mathbb{R}))
\to M(C_0(\mathbb{G}dual)\otimes C_0(\mathbb{R})\otimes C_0(\mathbb{G}dual)\otimes C_0(\mathbb{R})\otimes C^u_0(\mathbb{G}))
\]
permutes the coordinates according to the permutation $(1\,5\,4\,3\,2)$.
Then, applying \eqref{eq:univ-rep} and \eqref{Equation: equation 3},
we get
\[
(\iota\otimes\iota\otimes\iota\otimes\iota\otimes\pi)
\Bigl(\bigl((\iota\otimes\chi)\circ(\iota\otimes\beta^u)
\bigl(\widehat{\text{\reflectbox{$\Ww$}}\:\!^\mathbb{G}}\bigr)\bigr)_{145}\Bigr)
= y^\ast_{45}z_{15}y_{45},
\]
where
$\chi : M(C^u_0(\mathbb{G})\otimes C_0(\mathbb{R}))\to M(C_0(\mathbb{R})\otimes C^u_0(\mathbb{G}))$
is the usual flip (we are also using the fact that
$\widehat{\text{\reflectbox{$\Ww$}}\:\!^\mathbb{G}} = \Sigma(\mathds{W}^\mathbb{G})^\ast\Sigma$).
Letting $y_t:=y(t)\in B(H)$ for $t\in\mathbb{R}$, it
follows from the above equation that
\[
\pi(\tau^u_t(x)) = y^\ast_t\pi(x)y_t,\qquad t\in\mathbb{R}, x\in C^u_0(\mathbb{G}).
\]
Therefore, $\pi:C^u_0(\mathbb{G})\to B(H)$ is a covariant C*-representation and
hence lifts to
a representation $\wt\pi$ of the crossed product C*-algebra
$C^u_0(\mathbb{G})\rtimes_{\beta^u}\mathbb{R}$.
Put
\[
\text{\reflectbox{$\Ww$}}\:\!:=\textup{W}^\mathbb{R}_{24} \bigl((\iota\otimes\beta^u)(\widehat{\text{\reflectbox{$\Ww$}}\:\!^\mathbb{G}})\bigr)_{134}.
\]
The above argument shows that
$X = (\iota\otimes\wt\pi)(\text{\reflectbox{$\Ww$}}\:\!)$, so that $\text{\reflectbox{$\Ww$}}\:\!$
is a maximal corepresentation of $(C_0(\mathbb{G}\ltimes_\alpha\mathbb{R}),\Delta)$
in the sense of \cite[Definition 23]{soltan_multiplicative_ii}.
Therefore, the right leg of $\text{\reflectbox{$\Ww$}}\:\!$ generates the universal C*-algebra
$C^u_0(\mathbb{G}\rtimes_{\beta}\mathbb{R})$, but this
is precisely $C^u_0(\mathbb{G})\rtimes_{\beta^u}\mathbb{R}$, which is what we wanted
to prove.
\end{proof}
In our case, $\alpha$ being trivial, the left Haar weight
of $\mathbb{G}\ltimes_\alpha\mathbb{R}$ is $\widehat\phi\otimes\psi$
where $\widehat\phi$ is the left Haar weight of $\dual\mathbb{G}$
and $\psi$ the Haar weight of $\mathbb{R}$
(see \cite[p. 141]{double_crossed_product}).
Hence, the modular operator of the left Haar weight of
$\mathbb{G}\ltimes_\alpha\mathbb{R}$ is $\widehat{\nabla}\otimes I_{L^2(\mathbb{R})}$,
where $\widehat{\nabla}$ is the modular operator of the
left Haar weight $\widehat\phi$ of $\dual\mathbb{G}$.
It follows that the scaling group of $\mathbb{G}\rtimes_{\beta}\mathbb{R}$ is given by
$\widetilde\tau_t = \tau_t\otimes\iota_{B(L^2(\mathbb{R}))}$, $t\in\mathbb{R}$, and so
the scaling group on $C^u_0(\mathbb{G}\rtimes_{\beta}\mathbb{R})$
is given by $\widetilde\tau^{u}_t = \tau^u_t\otimes\iota_{B(L^2(\mathbb{R}))}$,
$t\in\mathbb{R}$.
By the general theory of crossed products, the covariant
representations of $C^u_0(\mathbb{G})$ with respect to $(\tau^u_t)$
correspond to the non-degenerate representations of
$C_0^u(\mathbb{G})\rtimes_{\beta^u}\mathbb{R}$ (note that $\mathbb{R}$ is amenable
so the reduced and the full crossed products coincide).
Now suppose that $\rho$ is a non-degenerate representation of
$C_0^u(\mathbb{G})\rtimes_{\beta^u} \mathbb{R}$.
Then $\pi:=\rho\circ\beta^u$ is a covariant representation
of $C_0^u(\mathbb{G})$ and the associated one-parameter unitary group
is given by $U_t = \rho(1\otimes\lambda_t)$
where $\lambda_t\in M(C^*(\mathbb{R}))$ is the left translation by
$t\in\mathbb{R}$.
It is easy to see that
$\wt\tau^{u}_t \circ \beta^u = \beta^u\circ \tau^u_t$ for
all $t\in\mathbb{R}$.
Now for every $f\in C_c(\mathbb{R})$ (the space of
compactly supported continuous functions on $\mathbb{R}$)
and $x\in C^u_0(\mathbb{G})$, we have
\begin{equation*}
\begin{split}
\rho\big(\wt\tau^{u}_t\big(\beta^u(x)(1\otimes f)\big)\big)
&=\rho\big(\beta^u(\tau^u_t(x))(1\otimes f)\big)
=\pi(\tau^u_t(x))\int_\mathbb{R} f(s)U_s \,ds\\
&=U^\ast_t\Bigl(\pi(x)\int_\mathbb{R} f(s)U_s \,ds\Bigr)U_t
=U^\ast_t\bigl(\rho\big(\beta^u(x)(1\otimes f)\big)\bigr)U_t.
\end{split}
\end{equation*}
The norm-density of elements of the form $\beta^u(x)(1\otimes f)$
in $C^u_0(\mathbb{G})\rtimes_{\beta}\mathbb{R}$ implies that
$\rho(\wt\tau^{u}_t(X)) = U^\ast_t(\rho(X))U_t$ for all
$X\in C^u_0(\mathbb{G})\rtimes_{\beta}\mathbb{R}$ and $t\in\mathbb{R}$.
In particular, it follows from
Proposition \ref{Proposition: covariant finite-dimensional C*
representations give admissible corepresentations},
that all finite-dimensional unitary representations of the
quantum group $\mathbb{G}\ltimes_\alpha \mathbb{R} = \widehat{\mathbb{G}\rtimes_{\beta}\mathbb{R}}$
are admissible. We summarise these observations in the following
theorem.
\begin{thm} \label{thm:bicrossed}
Let $\mathbb{G}$ be a locally compact quantum group. Let
$(\tau_t)_{t\in\mathbb{R}}$ be the scaling group and consider the matching
\[
\tau: L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{R})\to L^\infty(\mathbb{G})\overline{\otimes}
L^\infty(\mathbb{R}),
\quad \tau(f)(t) = \tau_t(f(t)) \quad(f\in L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{R}),~t\in\mathbb{R}).
\]
Let $\alpha:L^\infty(\mathbb{R})\to L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{R})$
and $\beta:L^\infty(\mathbb{G})\to
L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{R})$
be the associated left and right actions
defined by the formulas
\begin{align*}
\alpha(f) &= \tau(1\otimes f)=1_{L^\infty(\mathbb{G})}\otimes f,\qquad f\in L^\infty(\mathbb{R}),\\
\beta(x)(t) &= \tau(x\otimes 1)(t) =\tau_t(x),\qquad t\in\mathbb{R},~x\in L^\infty(\mathbb{G}).
\end{align*}
Let $\mathbb{G}\ltimes_{\alpha}\mathbb{R}$ be the quantum group arising from the
action $\alpha$ through the bicrossed product construction.
Then all finite-dimensional unitary representations of $\mathbb{G}\ltimes_\alpha\mathbb{R}$
are admissible.
\end{thm}
\subsection{Unimodular quantum groups}
\label{sec:quantum-double}
A locally compact quantum group is \emph{unimodular}
if its left and right Haar weights coincide.
The following theorem states that all unimodular quantum groups
satisfy the Admissibility Conjecture.
\begin{thm}\label{thm:admissibility-unimodular}
Suppose that $\mathbb{G}$ is a unimodular locally compact quantum group.
Then all finite-dimensional unitary representations of $\mathbb{G}$ are admissible.
\end{thm}
\begin{proof}
Let $U\in M(C_0(\mathbb{G})\otimes M_n(\mathbb{C}))$ be a finite-dimensional unitary
representation, and let $V\in M(C^u_0(\mathbb{G})\otimes M_n(\mathbb{C}))$ be the
unique lift of $U$ to the universal level.
Write $V =(V_{ij})$, where $V_{ij}$ are the matrix coefficients of
$V$, and let
\[
L_V = \linsp\set{V_{ij}}{i, j=1, 2, \ldots, n}.
\]
Since $\mathbb{G}$ is unimodular, the universal modular automorphism groups
of the right and left invariant weights are the same. It then follows from
\cite[Proposition~9.2]{kus} that
\[
\Delta_u\circ\sigma^u_t = (\tau^u_t\otimes\sigma^u_t)\circ\Delta_u
\]
and
\[
\Delta_u\circ\sigma^u_t=(\sigma^u_t\otimes\tau^u_{-t})\circ\Delta_u
\]
for every $t\in\mathbb{R}$, where $\{\sigma^u_t\}_{t\in\mathbb{R}}$ denotes the universal modular automorphism group of both the right and left invariant weights. Applying these relations to the matrix coefficients of $V$,
we obtain
\[
\Delta_u(\sigma^u_t(V_{ij}))=\sum_{k=1}^n\tau_t^u(V_{ik})\otimes\sigma^u_t(V_{kj})
\]
and
\[
\Delta_u(\sigma^u_t(V_{ij}))=\sum_{k=1}^n\sigma^u_t(V_{ik})\otimes\tau^u_{-t}(V_{kj}).
\]
Then applying the counit of $C_0^u(\mathbb{G})$ to the above
identities, it follows that $\tau^u_{-t}(\sigma^u_{t}(L_V))\subset L_V$ and
$\tau^u_t(\sigma^u_t(L_V))\subset L_V$ for all $t\in\mathbb{R}$.
This implies that for $X\in L_V$,
\[
\tau^u_{2t}(X) = \big(\tau^u_{t}\circ\sigma^u_{-t}\big)\Big(\big(\tau^u_{t}\circ\sigma^u_t\big)(X)\Big)\in L_V,
\]
where we use that the universal scaling group commutes with the universal
modular automorphism group, together with the two inclusions above
(the first one applied with $t$ replaced by $-t$).
Therefore, $\tau^u_t(L_V)\subset L_V$ for all $t\in\mathbb{R}$.
Let $\Lambda_\mathbb{G}:C^u_0(\mathbb{G})\to C_0(\mathbb{G})$ be the reducing
morphism. Then $\tau_t\circ\Lambda_\mathbb{G}=\Lambda_\mathbb{G}\circ\tau^u_t$ for all
$t\in\mathbb{R}$ (see \cite[Section 4]{kus}).
Write $U = (U_{ij})$, and note that $\Lambda_\mathbb{G}(V_{ij})=U_{ij}$.
Since $L_V$ is $\tau^u_t$-invariant, it follows that
$L_U$ is $\tau_t$-invariant. By Proposition \ref{Proposition:
covariant finite-dimensional C* representations give admissible
corepresentations}, $U$ is admissible.
\end{proof}
A class of examples of unimodular quantum groups with non-trivial scaling
groups is given by Drinfeld doubles.
Given a matched pair of locally compact quantum groups $\mathbb{G}$ and
$\mathbb{H}$, the von Neumann algebra $L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{H})$ can
be given the structure of a locally compact quantum group, called the
double crossed product of $\mathbb{G}$ and $\mathbb{H}$
\cite[Section 3 \& Theorem 5.3]{double_crossed_product}.
If $\mathbb{H}=\mathbb{G}dual$, then a matching
\[
m:L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{G}dual)\to L^\infty(\mathbb{G})\overline{\otimes} L^\infty(\mathbb{G}dual)
\]
is given by
\[
m(X)=W_{\mathbb{G}^\mathrm{op}} X W_{\mathbb{G}^\mathrm{op}}^\ast,
\]
where $W_{\mathbb{G}^\mathrm{op}}$ is the left multiplicative unitary of the
opposite quantum group $\mathbb{G}^\mathrm{op}$ obtained from $\mathbb{G}$.
The resulting double crossed product quantum group is precisely
the Drinfeld double, as shown in
\cite[Section~8]{double_crossed_product}. Moreover,
by \cite[Proposition 8.1]{double_crossed_product},
Drinfeld doubles are always unimodular.
It also follows from \cite[Theorem 5.3]{double_crossed_product} that
if $\mathbb{G}$ has non-trivial scaling group, then the double crossed product has
non-trivial scaling group as well. Thus the Drinfeld double
construction produces concrete examples of unimodular locally compact
quantum groups with non-trivial scaling group, and by
Theorem \ref{thm:admissibility-unimodular}
the Admissibility Conjecture is true for such quantum groups.
\section{Characterisation of Kazhdan's Property (T) for locally compact
quantum groups}
\label{Section: Fell topology and weak containment of representations
of C*-algebras}
Let $A$ be a C*-algebra and let $\pi:A\to B(H_\pi)$ and
$\rho:A\to B(H_\rho)$ be two non-degenerate
representations. The representation
$\pi$ is said to be \emph{equivalent} to $\rho$ if there
exists a unitary $U:H_\pi\to H_\rho$ such that
$U^\ast\rho(x)U = \pi(x)$ for every $x\in A$. If $U$ is only an
isometry, then we will say that $\pi$ is \emph{contained} in $\rho$
and write $\pi\subset\rho$
(in other words, $\pi$ is a \emph{subrepresentation} of $\rho$).
Now let $\mathcal{S}$ be a set of representations of $A$.
We say that the representation $\pi$ is
\emph{weakly contained} in $\mathcal{S}$
and write $\pi\prec\mathcal{S}$ if
$\bigcap_{\rho\in\mathcal{S}}\Ker\rho\subset \Ker\pi$.
We will adopt the convention that whenever $\pi\prec\{\rho\}$ we will
simply write $\pi\prec\rho$.
Let $\widehat{A}$ denote the set of inequivalent irreducible
representations of $A$. The \emph{closure} of $\mathcal{S}\subset\widehat{A}$ is
defined as
\[
\overline{\mathcal{S}}= \set{\pi\in\widehat{A}}{\pi\prec\mathcal{S}}.
\]
From \cite[Lemma 1.6]{fell} it follows that the above closure defines
a topology on $\widehat{A}$, which is referred to as the Fell topology
on $\widehat{A}$ (in \cite{fell} this was called the hull-kernel topology on
$\widehat{A}$). The following result
from \cite[Lemma~2.1]{xiao_property_T} is crucial in the sequel.
\begin{ppsn}\label{prop:T}
Let $A$ be a C*-algebra.
If $\rho\in\widehat{A}$
is finite-dimensional, then $\{\rho\}$ is a closed subset of
$\widehat{A}$.
Moreover, the following statements are equivalent for $\rho\in\widehat{A}$:
\begin{enumerate}[label=\textup{(\roman*)}]
\item $\rho$ is an isolated point in $\widehat{A}$.
\item If a representation $\pi$ of $A$ satisfies $\rho\prec\pi$, then
$\rho\subset\pi$.
\item $A=\Ker\rho\oplus\bigcap_{\nu\in\widehat{A}\setminus\{\rho\}}\Ker\nu$.
\end{enumerate}
\end{ppsn}
\begin{rmrk}\label{Remark: Notion of weak containment from Brannan}
It is worthwhile to mention that recently in
\cite[Definition~3.3]{brannan_property_T} the authors have used a
slightly different notion of weak containment, namely a C*-algebraic
version of \cite[Definition~7.3.5]{Zimmer_weak_containment}.
However, we will be concerned with irreducible representations
in which case the definitions coincide
(as mentioned after Definition 3.3 in \cite{brannan_property_T}).
\end{rmrk}
The following definition is a natural extension of containment
to the setting of unitary representations of locally compact quantum
groups (see \cite[Definition~3.2]{haagerup_daws_fima_skalski}).
\begin{dfn}
Let $U\in M(C_0(\mathbb{G})\otimes \K(H))$ and $V\in M(C_0(\mathbb{G})\otimes \K(K))$ be two
unitary representations of a locally compact quantum group $\mathbb{G}$,
and let $\pi_U:C^u_0(\mathbb{G}dual)\to B(H)$ and
$\pi_V:C^u_0(\mathbb{G}dual)\to B(K)$ be the associated
representations of $C^u_0(\mathbb{G}dual)$
(i.e. $U=(\iota\otimes\pi_U)(\text{\reflectbox{$\Ww$}}\:\!)$ and $V=(\iota\otimes\pi_V)(\text{\reflectbox{$\Ww$}}\:\!)$).
If $\pi_U\prec\pi_V$, we say $U$ is \emph{weakly contained} in $V$
and write $U\prec V$. If $\pi_U\subset\pi_V$, we
say that $U$ is \emph{contained} in $V$
(or that $U$ is a \emph{subrepresentation} of $V$) and write $U\subset V$.
\end{dfn}
\begin{rmrk}\label{Remark: Tensor Containment}
Let $U \prec V$, so that $\pi_U \prec \pi_V$. Let $W$ be a corepresentation.
We claim that $W \tp U \prec W \tp V$. Indeed, this is equivalent to
$\pi_{W\tp U} = (\pi_W\otimes\pi_U)\circ\chi\circ\widehat\cop_u
\prec (\pi_W\otimes\pi_V)\circ\chi\circ\widehat\cop_u = \pi_{W\tp V}$, which
by definition, is equivalent to showing that $\ker (\pi_W\otimes\pi_V)\circ\widehat\cop_u
\subseteq \ker (\pi_W\otimes\pi_U)\circ\widehat\cop_u$.
Let $W$ act on $H_W$, and let $a\in \ker (\pi_W\otimes\pi_V)\circ\widehat\cop_u$. For
$\omega \in \K(H_W)^*$ let $b = (\omega\circ\pi_W\otimes\iota)\widehat\cop_u(a)$, so
that $\pi_V(b) = 0$. As $\pi_U\prec\pi_V$, it follows that $\pi_U(b)=0$, so
that $(\omega\otimes\iota) (\pi_W\otimes\pi_U)\circ\widehat\cop_u(a)=0$. As $\omega$ was
arbitrary, $(\pi_W\otimes\pi_U)\circ\widehat\cop_u(a)=0$, as required.
\end{rmrk}
We next recall the definitions of invariant and almost invariant
vectors for quantum group representations
(see \cite[Definition 3.2]{haagerup_daws_fima_skalski}).
\begin{dfn}\label{Definition: Invariant and almost invariant vectors}
Let $U\in M(C_0(\mathbb{G})\otimes \K(H))$ be a representation of a locally
compact quantum group $\mathbb{G}$. A vector $\xi\in H$ is called
\emph{invariant} for $U$ if $U(\eta\otimes\xi)=\eta\otimes\xi$ for all
$\eta\in L^2(\mathbb{G})$.
$U$ is said to have \emph{almost invariant vectors} if there exists a net
$(\xi_\alpha)_\alpha$ of unit vectors in $H$ such that
$\|U(\eta\otimes\xi_\alpha)-\eta\otimes\xi_\alpha\|\to0$ for all
$\eta\in L^2(\mathbb{G})$.
\end{dfn}
Note that if $U$ and $V$ are similar representations of a locally
compact quantum group $\mathbb{G}$, then
$U$ has an invariant vector if and only if $V$ does.
The analogous statement holds for almost invariant vectors.
We will need invariant means in the following arguments,
so next we set up the necessary terminology for those.
\begin{dfn}\label{Definition: Eberlein algebra for a LCQG}
The (reduced) \emph{Fourier--Stieltjes algebra} of $\mathbb{G}$ is defined by
\[
\fs(\mathbb{G}) = \linsp\set{(\iota\otimes\omega)(\text{\reflectbox{$\Ww$}}\:\!)}{\omega\in C^u_0(\mathbb{G}dual)^\ast}.
\]
Then the \emph{Eberlein algebra} of $\mathbb{G}$ is defined by
\[
\eb(\mathbb{G}) = \overline{\fs(\mathbb{G})}^{\|\cdot\|_{B(L^2(\mathbb{G}))}}.
\]
\end{dfn}
Note that $\fs(\mathbb{G})\subset M(C_0(\mathbb{G}))$ and that
$\fs(\mathbb{G})$ is a subalgebra of $M(C_0(\mathbb{G}))$.
It then follows that $\eb(\mathbb{G})$ is a closed subalgebra
of $M(C_0(\mathbb{G}))$. When $\mathbb{G}$ is of Kac type, $\eb(\mathbb{G})$ is
self-adjoint and so a C*-subalgebra of $M(C_0(\mathbb{G}))$
(see for example \cite[Section 7]{matbd}).
The Banach algebra $L^1(\mathbb{G})$ acts on its dual $L^\infty(\mathbb{G})$ by
\[
x\cdot\omega =(\omega\otimes\iota)(\Delta(x)),
\qquad \omega\in \lone(\mathbb{G}), x\in\linf(\mathbb{G}),
\]
and
\[
\omega\cdot x=(\iota\otimes\omega)(\Delta(x)), \qquad \omega\in
\lone(\mathbb{G}), x\in\linf(\mathbb{G}),
\]
making $\linf(\mathbb{G})$ an $\lone(\mathbb{G})$-bimodule. It is easy to see that
$\eb(\mathbb{G})$ is invariant under these actions. This allows us to define
the notion of an invariant mean on $\eb(\mathbb{G})$. It follows from
\cite[Proposition 3.15]{daws_skalski_viselter}
that $\eb(\mathbb{G})$ admits an \emph{invariant mean} $\mu\in \eb(\mathbb{G})^*$
in the sense that $\|\mu\| = \mu(1) = 1$ and
\[
\mu(\omega\cdot x) = \omega(1) \mu(x) = \mu(x\cdot\omega)
\qquad \omega\in L^1(\mathbb{G}), x\in \eb(\mathbb{G}).
\]
Note that the Hopf $*$-algebra $\aphopf(\mathbb{G})$ underlying the quantum
Bohr compactification is contained in $\fs(\mathbb{G})$, and therefore
$\ap(\mathbb{G})\subset \eb(\mathbb{G})$. The uniqueness of the Haar state of a
compact quantum group implies the following result.
\begin{lmma}\label{Lemma: the restriction of the invariant mean to the Bohr compactification is the Haar state}
Let $M$ be the restriction of the invariant mean $\mu\in \eb(\mathbb{G})^*$
to $\ap(\mathbb{G})$. Then $M$ is the Haar state of the
compact quantum group $(\ap(\mathbb{G}),\Delta)$.
\end{lmma}
\begin{lmma}\label{Lemma: making sense of tensoring with the mean}
Let $X\in B(L^2(\mathbb{G}))\overline{\otimes} B(H)$ for some Hilbert space $H$, and suppose
that $(\iota\otimes\omega)(X)\in \eb(\mathbb{G})$ for all $\omega\in B(H)_\ast$.
Let $\nu\in \eb(\mathbb{G})^*$. Then there exists an operator
$T\in B(H)$ such that for all $\omega\in B(H)_\ast$
\[
\langle T, \omega\rangle = \langle \nu, (\iota\otimes\omega)(X)\rangle.
\]
We will be denoting this operator by $(\nu\otimes\iota)(X)$, so that
\[
\omega\bigl((\nu\otimes\iota)(X)\bigr) = \nu\bigl((\iota\otimes\omega)(X)\bigr)
\]
for all $\omega\in B(H)_\ast$.
\end{lmma}
\begin{proof}
The map
\[
B(H)_\ast \to \mathbb{C},
\qquad
\omega\mapsto\langle \nu, (\iota\otimes\omega)(X)\rangle
\]
is linear and satisfies
$|\langle \nu, (\iota\otimes\omega)(X)\rangle|\leq\|\nu\|\,\|X\|\,\|\omega\|$,
so it defines a bounded functional on $B(H)_\ast$.
Since $B(H)=(B(H)_\ast)^\ast$, this gives the existence
of the operator $T\in B(H)$.
\end{proof}
The next result gives a formula for the orthogonal projection onto the subspace of invariant vectors of a unitary corepresentation of $C_0(\mathbb{G})$ (see also \cite[Proposition 3.14]{daws_skalski_viselter}).
\begin{lmma}\label{Corollary: expressing the projection onto the set of invarint subspace in terms of the invariant mean}
Let $V\in M(C_0(\mathbb{G})\otimes \K(H))$ be a unitary representation of $\mathbb{G}$
and let $\mu\in \eb(\mathbb{G})^*$ be the invariant mean on $\eb(\mathbb{G})$.
The operator $P = (\mu\otimes\iota)(V)\in B(H)$
is the projection onto the subspace of invariant vectors for $V$ (note that $P$ is well defined by Lemma \ref{Lemma: making sense of tensoring with the mean}, since $(\iota\otimes\omega)(V)\in\fs(\mathbb{G})\subset\eb(\mathbb{G})$ for every $\omega\in B(H)_\ast$).
\end{lmma}
\begin{proof}
Due to the subtle definition of $(\mu\otimes\iota)(V)$, we include
a careful calculation of the fact that the image of $P$
consists of invariant vectors. Given $\sigma\in \lone(\mathbb{G})$
and $\omega\in B(H)_*$, we have
\[
(\sigma\otimes\omega)\bigl(V(1\otimes P)\bigr)
= \omega((\sigma\otimes \iota)(V)P)
= \mu\bigl((\sigma\otimes\iota\otimes\omega)(V_{13}V_{23})\bigr)
\]
where the second equality follows from the defining property of
$P=(\mu\otimes\iota)(V)$ in Lemma~\ref{Lemma: making sense of tensoring with the mean}.
Continuing from here using the fact that $V$ is a representation,
we have
\begin{align*}
(\sigma\otimes\omega)\bigl(V(1\otimes P)\bigr)
&= \mu\bigl((\sigma\otimes\iota\otimes\omega)((\cop\otimes\iota)(V))\bigr)
= \mu\bigl((\sigma\otimes\iota)(\cop((\iota\otimes\omega)(V)))\bigr)\\
&= \sigma(1)\mu((\iota\otimes\omega)(V))
= (\sigma\otimes\omega)(1\otimes P)
\end{align*}
due to the invariance of $\mu$ and the fact that $\mu((\iota\otimes\omega)(V))=\omega(P)$ by virtue of Lemma \ref{Lemma: making sense of tensoring with the mean}.
It follows that the image of $P$ consists of invariant vectors.
That $P$ is idempotent is also straightforward to verify.
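For completeness, one way to carry out this verification is as follows
(using Lemma \ref{Lemma: making sense of tensoring with the mean} and the identity
$V(1\otimes P)=1\otimes P$, which follows from the computation above since the
functionals $\sigma\otimes\omega$ separate the points of
$\linf(\mathbb{G})\overline{\otimes}B(H)$): for every $\omega\in B(H)_\ast$, writing
$\omega(\,\cdot\,P)\in B(H)_\ast$ for the functional $x\mapsto\omega(xP)$, we have
\[
\omega(P^2)
=\omega(\,\cdot\,P)\bigl((\mu\otimes\iota)(V)\bigr)
=\mu\bigl((\iota\otimes\omega(\,\cdot\,P))(V)\bigr)
=\mu\bigl((\iota\otimes\omega)(V(1\otimes P))\bigr)
=\mu\bigl((\iota\otimes\omega)(1\otimes P)\bigr)
=\omega(P)\,\mu(1)=\omega(P),
\]
so that $P^2=P$.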
Notice that $\|P\|\leq 1$ because $\|\mu\|=1$; a contractive idempotent on a
Hilbert space is an orthogonal projection, and since the image of $P$ consists of
invariant vectors, $P$ is a subprojection of the orthogonal projection onto the subspace of invariant vectors.
It remains to show that $P\xi=\xi$ for every invariant vector $\xi$, from which the result follows. So let $\xi$ be an invariant vector and let $\eta\in H$. Writing $\omega_{\xi,\eta}\in B(H)_\ast$ for the functional given by $\omega_{\xi,\eta}(x):=\langle x\xi,\eta\rangle$ for $x\in B(H)$, we have
\begin{align*}
\omega_{\xi,\eta}(P)& = \omega_{\xi,\eta}\big((\mu\otimes\iota)(V)\big)\\
&=\mu\big((\iota\otimes\omega_{\xi,\eta})(V)\big)\quad(\text{by Lemma \ref{Lemma: making sense of tensoring with the mean}})\\
&=\mu\bigl(\omega_{\xi,\eta}(1)\,1\bigr)\quad(\text{as $\xi$ is invariant})\\
&=\omega_{\xi,\eta}(1)\quad(\text{as $\mu(1)=1$}).
\end{align*}
As this holds for any $\eta\in H$, we have that $P\xi=\xi$, as we wanted.
\end{proof}
The following result from
\cite[Corollary~2.5 and Corollary~2.8]{haagerup_daws_fima_skalski}
gives a nice criterion for the existence of invariant and almost
invariant vectors. Denote the trivial representation of $\mathbb{G}$ by~$1$.
\begin{ppsn}
\label{prop:inv-cont}
Let $U\in M(C_0(\mathbb{G})\otimes \K(H))$ be a unitary
representation of a locally compact quantum group $\mathbb{G}$.
\begin{enumerate}[label=\textup{(\roman*)}]
\item
$U$ has a nonzero invariant vector if and only if $1\subset U$.
\item
$U$ has almost invariant vectors if and only if $1\prec U$.
\end{enumerate}
\end{ppsn}
We now recall the definition of Kazhdan's Property (T) for quantum groups
(see \cite[Definition 3.1]{xiao_property_T}, which
goes back to \cite[Definition 6]{Fima_property_T_discrete_quantum_group}).
\begin{dfn}\label{Definition: Kazhdan property (T) for QG}
A locally compact quantum group $\mathbb{G}$ has (\emph{Kazhdan's})
\emph{Property (T)}
if every unitary representation of $\mathbb{G}$
that has almost invariant vectors has a nonzero invariant vector.
\end{dfn}
It follows from Proposition~\ref{prop:inv-cont}
that $\mathbb{G}$ has Property (T) if and only if
for every unitary representation $U$ of $\mathbb{G}$
\[
1\prec U\implies 1\subset U.
\]
The following theorem, taken from
\cite[Proposition 3.2 and Theorem 3.6]{xiao_property_T}
and \cite[Theorem 4.7]{brannan_property_T},
gives a series of conditions equivalent to Property (T).
\begin{thm}\label{thm:T-equi}
Let $\mathbb{G}$ be a locally compact quantum group.
The following statements are equivalent:
\begin{enumerate}[label=\textup{(T\arabic*)}]
\item $\mathbb{G}$ has Property (T).
\item The counit $\widehat{\epsilon}_u$ is an isolated point in
$\widehat{C^u_0(\mathbb{G}dual)}$.
\item $C^u_0(\mathbb{G}dual)=\Ker\widehat{\epsilon}_u\oplus\mathbb{C}$.
\item There is a projection $p\in M(C^u_0(\mathbb{G}dual))$ such that
$p C^u_0(\mathbb{G}dual) p = \mathbb{C} p$ and $\widehat{\epsilon}_u(p)=1$.
\end{enumerate}
If $\mathbb{G}$ has trivial scaling group, then the above conditions are further
equivalent to the following (quantum version of Wang's theorem \cite[Theorem 2.1]{wang_property_T}):
\begin{enumerate}[label=\textup{(T\arabic*)}] \setcounter{enumi}{4}
\item
Every finite-dimensional irreducible representation of
$C^u_0(\mathbb{G}dual)$ is an isolated point in $\widehat{C^u_0(\mathbb{G}dual)}$.
\item
$C^u_0(\mathbb{G}dual)\cong B\oplus M_n(\mathbb{C})$ for some C*-algebra $B$ and
$n\in\mathbb{N}_0$.
\end{enumerate}
\end{thm}
We will prove that for a locally compact quantum group with
non-trivial scaling group, (T5) and
a suitable generalisation of (T6) are equivalent to (T1)--(T4).
We first make two observations, which will be used later (also see \cite[Proposition 3.14~\&~Proposition 3.15]{viselter}).
\begin{lmma}\label{lemma:trivial}
Let $U\in M(C_0(\mathbb{G})\otimes M_n(\mathbb{C}))$ be a
finite-dimensional admissible unitary representation of $\mathbb{G}$. Then
$1\subset U^c\tp U$.
\end{lmma}
\begin{proof}
Write $U=(U_{ij})_{i,j=1}^n$, and define $\overline{U} = (U^*_{ij})_{i,j=1}^n$.
Since $U$ is admissible, $\overline{U}\in \eb(\mathbb{G})\otimes M_n(\mathbb{C})$.
Define $V =\overline{U}\tp U \in \eb(\mathbb{G})\otimes (M_n(\mathbb{C})\otimes M_n(\mathbb{C}))$.
Let $\mu$ be the invariant mean on $\eb(\mathbb{G})$.
We will show that $(\mu\otimes\iota)(V)\neq0$, where we are using the
notation of Lemma \ref{Lemma: making sense of tensoring with the
mean}.
Towards a contradiction, suppose that $(\mu\otimes\iota)(V)=0$, so that for all
$\nu\in (M_n(\mathbb{C})\otimes M_n(\mathbb{C}))_\ast$ we have
$\mu((\iota\otimes \nu)(V))=0$. Choosing $\nu$ such that
$(\iota\otimes \nu)(V)=U^\ast_{ij}U_{ij}$
leads to $\mu(U^\ast_{ij}U_{ij})=0$ for all $i,j = 1$,~$2$,
\ldots,~$n$. Since $U$ is unitary, $\sum_{i=1}^nU^\ast_{ij}U_{ij}=1$
for each $j$, and therefore
\[
\mu(1) = \mu\biggl(\sum_{i=1}^nU^\ast_{ij}U_{ij}\biggr)
= \sum_{i=1}^n\mu(U^\ast_{ij}U_{ij}) = 0,
\]
which cannot happen as $\mu(1) = 1$.
By Lemma~\ref{Corollary: expressing the projection onto the set of invarint subspace in terms of the invariant mean},
any vector in the image of $(\mu\otimes\iota)(V)$
is an invariant vector for $V$.
Since $\overline{U}$ is similar to $U^c$ (as $U$ is admissible, Remark~\ref{Remark: CQG}), the
representation $V$ is similar to $U^c\tp U$ and the result follows from
Proposition~\ref{prop:inv-cont}-(i).
\end{proof}
The following result may be considered as an extension
of \cite[Proposition 3.5]{xiao_property_T}
(see also \cite[Proposition A.1.12]{bekka_property_T} and
\cite[Theorem 2.6]{kyed}).
\begin{thm}
\label{Theorem: Tensor product contains trivial representation implies
that each contains a finite-dimensional representation}
Let $U\in M(C_0(\mathbb{G})\otimes M_n(\mathbb{C}))$ and $V\in M(C_0(\mathbb{G})\otimes \K(H))$ be
unitary representations of a locally compact quantum group $\mathbb{G}$ with
$U$ being admissible.
Then the following are equivalent:
\begin{enumerate}[label=\textup{(\roman*)}]
\item The representations $U$ and $V$ contain a common
finite-dimensional unitary representation.
\item The representation $U^c\tp V$ contains the trivial representation.
\end{enumerate}
\end{thm}
\begin{proof}
(i)$\implies$(ii):
Let $W$ be a finite-dimensional unitary representation contained in both $U$
and~$V$. Then $W$ is also admissible, as $U$ is admissible. We have
$W^c\tp W\subset U^c\tp V$ and by Lemma~\ref{lemma:trivial},
$1\subset W^c\tp W$.
(ii)$\implies$(i):
Let $\mu$ denote the invariant mean on $\eb(\mathbb{G})$.
Let $x\in B(H,\mathbb{C}^n)$ and
$y=(\mu\otimes\iota)(U^*(1\otimes x)V)$ so that $y\in B(H,\mathbb{C}^n)$.
Now
\begin{align*}
U^*(1\otimes y)V
&= (\mu\otimes\iota\otimes\iota)\bigl(U^*_{23}U^*_{13}(1\otimes1\otimes x)V_{13}V_{23}\bigr)\\
&= (\mu\otimes\iota\otimes\iota)\circ(\cop\otimes \iota)\bigl(U^*(1\otimes x)V\bigr)
= 1\otimes y
\end{align*}
due to the invariance of $\mu$ (see the proof
of Lemma~\ref{Corollary: expressing the projection onto the set of
invarint subspace in terms of the invariant mean}
for how to make the above calculation rigorous).
Since $U$ is unitary, we have
\[
U(1\otimes y)=(1\otimes y)V.
\]
Next we show that, for a suitable choice of $x$, the resulting $y$ is nonzero.
It follows from the hypothesis, via Proposition~\ref{prop:inv-cont}-(i),
that $U^c\tp V$, and hence also the similar representation $\overline{U}\tp V$,
has a nonzero invariant vector $\zeta\in \mathbb{C}^n\otimes H$,
and so
\begin{equation} \label{eq:zeta-eq}
(\iota\otimes \omega_{\zeta,\zeta})(\overline{U}\tp V) = \langle \zeta, \zeta\rangle 1.
\end{equation}
For $\xi\in H$ and $\alpha\in \mathbb{C}^n$,
let $x = \theta_{\alpha,\xi}\in B(H, \mathbb{C}^n)$ be defined by
$\theta_{\alpha,\xi}(\eta)=\langle \eta,\xi\rangle\alpha$ for $\eta\in H$.
If $y=(\mu\otimes\iota)(U^*(1\otimes\theta_{\alpha,\xi})V) = 0$
for every $\xi\in H$ and $\alpha\in \mathbb{C}^n$,
then for every $\alpha,\beta\in\mathbb{C}^n$ and $\xi,\eta\in H$, we have
\begin{equation*}
\begin{split}
0=\langle y\eta,\beta\rangle
&= \mu\bigl((\iota\otimes\omega_{\eta,\beta})(U^\ast(1\otimes\theta_{\alpha,\xi})V)\bigr)
=\mu\bigl(
(\iota\otimes\omega_{\beta,\alpha})(U)^\ast(\iota\otimes\omega_{\eta,\xi})(V)\bigr)\\
&=\mu\bigl( (\iota\otimes\omega_{\alpha,\beta})(\overline{U})
(\iota\otimes\omega_{\eta,\xi})(V)\bigr).
\end{split}
\end{equation*}
Therefore $(\mu\otimes\iota)(\overline{U}\tp V) = 0$, and so
by equation \eqref{eq:zeta-eq}
\[
0 = \omega_{\zeta,\zeta}\bigl((\mu\otimes\iota)(\overline{U}\tp V)\bigr)
= \mu\bigl((\iota\otimes \omega_{\zeta,\zeta})(\overline{U}\tp V)\bigr)
= \langle\zeta, \zeta\rangle \mu(1) \ne 0.
\]
Consequently, $y\ne 0$ for some $\xi\in H$, $\alpha\in \mathbb{C}^n$.
To finish the proof we now argue as in the last part of the proof of
\cite[Theorem 2.6]{kyed}. Since $U$ and $V$ are unitaries such that
$U(1\otimes y)=(1\otimes y)V$, it follows that
$V(1\otimes y^\ast)=(1\otimes y^\ast)U$. Moreover, since
$y$ is a compact operator, this implies that also $y^\ast y$ is
compact and satisfies $V(1\otimes y^\ast y)=(1\otimes y^\ast y)V$. Similarly we have
that $U(1\otimes yy^\ast)=(1\otimes yy^\ast)U$, where $yy^\ast\in
B(\mathbb{C}^n)$. Both $y^\ast y$ and $yy^\ast$ are positive compact
operators, so their eigenspaces corresponding to positive eigenvalues
are finite-dimensional. Moreover, the non-zero parts of the spectra of
$y^\ast y$ and $yy^\ast$ coincide. So let $\lambda>0$ be a common
eigenvalue of $y^\ast y$ and $yy^\ast$. Then the eigenprojection
of $yy^\ast$ corresponding to $\lambda$ also intertwines $U$, thereby
producing a finite-dimensional subrepresentation of $U$, say
$W$. Applying the same argument to $y^\ast y$, we get a
finite-dimensional subrepresentation of $V$, say $W^\prime$. Since $U$ is
admissible by hypothesis, also $W$ is admissible.
Moreover, the partial isometry coming from the polar
decomposition of $y$ gives an equivalence between $W$ and $W^\prime$,
which proves the claim.
\end{proof}
We now prove a generalisation of conditions (T5) and (T6) in
Theorem~\ref{thm:T-equi}.
\begin{thm} \label{thm:kazhdan}
Let $\mathbb{G}$ be any locally compact quantum group. Then
$\mathbb{G}$ having Property (T) is equivalent to any of the following
statements:
\begin{enumerate}[label=\textup{(T\arabic*)}] \setcounter{enumi}{4}
\item
Every finite-dimensional irreducible C*-representation of
$C_0^u(\mathbb{G}dual)$ is an isolated point in $\widehat{C^u_0(\mathbb{G}dual)}$.
\item[\textup{(T6$'$)}]
There is a finite-dimensional irreducible C*-representation of
$C_0^u(\mathbb{G}dual)$ which is covariant with respect to the scaling
automorphism group $(\widehat{\tau}^u_t)$
and is an isolated point in $\widehat{C^u_0(\mathbb{G}dual)}$.
\item
$C^u_0(\mathbb{G}dual)\cong B\oplus M_n(\mathbb{C})$ for some C*-algebra $B$ and
some $n\in\mathbb{N}_0$ with $\widehat{\tau}^u_t(B)\subset B$ for all $t\in\mathbb{R}$.
\end{enumerate}
\end{thm}
\begin{proof}
The proof is based on the same idea as the proof of
\cite[Theorem~3.6]{xiao_property_T}.
(T5)$\implies$(T6$'$): the counit $\widehat{\epsilon}_u$ is a finite-dimensional
irreducible representation which is covariant with respect to the scaling
automorphism group, and by (T5) it is an isolated point in $\widehat{C^u_0(\mathbb{G}dual)}$.
(T6$'$)$\implies$(T6):
Let $\pi$ be a finite-dimensional
irreducible representation of $C^u_0(\mathbb{G}dual)$ as in (T6$'$),
so that $\pi$ is covariant with respect to the scaling automorphism group
and is an isolated point in $\widehat{C^u_0(\mathbb{G}dual)}$.
By the equivalence of (i) and (iii) in Proposition~\ref{prop:T},
$C^u_0(\mathbb{G}dual)\cong \ker \pi \oplus M_n(\mathbb{C})$ for some $n\in\mathbb{N}_0$.
Since $\pi$ is covariant, we have
$\widehat{\tau}^u_t(\ker(\pi))\subset \ker\pi$ for all $t\in\mathbb{R}$,
and so (T6) holds.
It remains to prove that (T1) of
Theorem~\ref{thm:T-equi} implies (T5) and that (T6) implies (T2) of
Theorem~\ref{thm:T-equi}. The result will then follow from the
equivalence of (T1) and (T2) of Theorem~\ref{thm:T-equi}.
(T1)$\implies$(T5):
To prove (T5), it is enough to show that (ii) in Proposition~\ref{prop:T} holds
for every irreducible finite-dimensional representation $\rho$ of
$C^u_0(\mathbb{G}dual)$. To this end, let $\pi$ be a representation of
$C_0^u(\mathbb{G}dual)$ such that $\rho\prec\pi$.
Let $U=(\iota\otimes\rho)(\text{\reflectbox{$\Ww$}}\:\!)$ and $V=(\iota\otimes\pi)(\text{\reflectbox{$\Ww$}}\:\!)$.
Since $\mathbb{G}$ has Property (T), it is unimodular by
Theorem~\ref{thm: property T implies unimodularity}, and so
$\mathbb{G}$ satisfies the Admissibility Conjecture
by Theorem~\ref{thm:admissibility-unimodular}.
Hence $U$ is an irreducible finite-dimensional
admissible unitary representation of $\mathbb{G}$
and $U\prec V$. Now by Lemma
\ref{lemma:trivial} we have that $1\subset U^c\tp U$ and also
$U^c\tp U\prec U^c\tp V$ by Remark~\ref{Remark: Tensor Containment}.
Since $\mathbb{G}$ has Property (T) and $1\prec U^c\tp V$, it follows
that $1 \subset U^c \tp V$. Thus by Theorem \ref{Theorem: Tensor
product contains trivial representation implies that each contains a
finite-dimensional representation} there exists a finite-dimensional
unitary representation $W$ such that $W\subset U$ and
$W\subset V$. Since $U$ is irreducible, this implies that $W=U$, and so
$\rho\subset\pi$.
(T6)$\implies$(T2):
Suppose that $C^u_0(\mathbb{G}dual)\cong B\oplus M_n(\mathbb{C})$ and that $B$ is
invariant under the scaling automorphism group.
Let $\mu:C^u_0(\mathbb{G}dual)\to M_n(\mathbb{C})$ be the irreducible representation
corresponding to the summand $M_n(\mathbb{C})$, and let
$U=(\iota\otimes\mu)(\text{\reflectbox{$\Ww$}}\:\!)$ be the unitary representation of $\mathbb{G}$
associated with $\mu$. Since $\ker(\mu)\cong B$ and
$\widehat{\tau}^u_t(B)\subset B$ for all $t\in\mathbb{R}$, it follows that
$\mu$ is covariant with respect to the scaling automorphism group of
$C^u_0(\mathbb{G}dual)$. Hence $U$ is admissible by Proposition
\ref{Proposition: covariant finite-dimensional C* representations give
admissible corepresentations}.
The representation $\pi$ of $C^u_0(\mathbb{G}dual)$
associated to $U^c\tp U$ is finite-dimensional and covariant with
respect to the scaling automorphism group of $C^u_0(\mathbb{G}dual)$ by Proposition
\ref{Proposition: covariant finite-dimensional C* representations give
admissible corepresentations} since $U^c\tp U$ is admissible.
Therefore, $\pi$ decomposes into a direct sum of finitely many
covariant irreducible representations. Since
$1\subset U^c\tp U$ by Lemma~\ref{lemma:trivial},
one of the irreducible components is $\widehat{\epsilon}_u$.
Therefore $\pi=\bigoplus_{k=0}^m\pi_k$, where $\pi_0 =
\widehat{\epsilon}_u$ and $\pi_k$ is
an irreducible finite-dimensional covariant representation
for each $k=1,2,\ldots, m$. By Proposition~\ref{prop:T},
each singleton set $\{\pi_k\}$ is closed in
the Fell topology. Towards a contradiction, let us assume that
$\widehat{\epsilon}_u$ is not an isolated point.
So there is a net
\[
(\rho_\alpha)_{\alpha\in\Lambda} \subset
\widehat{C_0^u(\mathbb{G}dual)}\setminus
\{\widehat{\epsilon}_u,\pi_1,\pi_2,\ldots,\pi_m\}
\]
such that $(\rho_\alpha)_{\alpha\in\Lambda}$ converges in the Fell topology to
$\widehat{\epsilon}_u$. By the definition of closure in the
Fell topology, this implies that
$\widehat{\epsilon}_u \prec \bigoplus_{\alpha\in\Lambda}\rho_\alpha$,
and so
\[
\mu=(\mu\otimes\widehat{\epsilon}_u)\circ\chi\circ\widehat{\cop}_u
\prec\bigoplus_{\alpha\in\Lambda}(\mu\otimes\rho_\alpha)\circ\chi\circ\widehat{\cop}_u,
\]
where $\chi$ is the flip map.
Since $\mu$ satisfies condition (iii) in Proposition~\ref{prop:T}
(by definition), it follows by condition (ii) in
Proposition~\ref{prop:T} that
\[
\mu\subset\bigoplus_{\alpha\in\Lambda}
(\mu\otimes\rho_\alpha)\circ\chi\circ\widehat{\cop}_u.
\]
By irreducibility of $\mu$ we have
$\mu\subset(\mu\otimes\rho_{\alpha})\circ\chi\circ\widehat{\cop}_u$ for some
$\alpha\in\Lambda$. Letting $U_{\alpha}=(\iota\otimes\rho_{\alpha})(\text{\reflectbox{$\Ww$}}\:\!)$, this
means that $U\subset U\tp U_{\alpha}$. Combining this with Lemma
\ref{lemma:trivial} it follows that
\[
1\subset U^c\tp U\subset U^c\tp U\tp U_{\alpha},
\]
so that we have
\[
1\subset U^c\tp U\tp U_{\alpha}
=\Big(\bigoplus_{k=0}^m (\iota\otimes\pi_k)(\text{\reflectbox{$\Ww$}}\:\!)\Big)\tp U_{\alpha}.
\]
Since $(U^c\tp U)^c$ is equivalent to $U^c\tp U$ (recall that $U$ is
admissible, Remark~\ref{Remark: CQG}), it follows that
\[
1\subset\Big(\bigoplus_{k=0}^m(\iota\otimes\pi_k)(\text{\reflectbox{$\Ww$}}\:\!)\Big)^c
\tp U_{\alpha}.
\]
An application of Theorem \ref{Theorem: Tensor product contains
trivial representation implies that each contains a
finite-dimensional representation} now yields a finite-dimensional
unitary representation $W$ such that
$W\subset\bigoplus_{k=0}^m(\iota\otimes\pi_k)(\text{\reflectbox{$\Ww$}}\:\!)$ and
$W\subset U_{\alpha}$.
Since $U_{\alpha}$ is irreducible, we have $W=U_{\alpha}$.
On the other hand, $\pi_k$ is irreducible
for $k=0,1,2,\ldots, m$, and so
$U_{\alpha}=(\iota\otimes\pi_{k_0})(\text{\reflectbox{$\Ww$}}\:\!)$ for some
$k_0\in\{0,1,2,\ldots, m\}$. This means that $\rho_{\alpha}=\pi_{k_0}$,
which is a contradiction. Thus $\widehat{\epsilon}_u$ must be
an isolated point.
\end{proof}
\section{Properties of quantum groups with Property (T)}
In this section we prove several properties shared by quantum groups
with Property (T). We include a very short proof of the fact that a
quantum group with Property (T) is unimodular (see \cite[Section
6]{brannan_property_T} and \cite[Proposition
7]{Fima_property_T_discrete_quantum_group}). We consider unimodular
locally compact quantum groups as well as quantum groups arising
through the bicrossed product construction as discussed in Subsection
\ref{Subsection: Examples of locally compact quantum groups with
admissible representations}, and for these quantum groups we prove a variation of Theorem
\ref{thm:kazhdan}, as well as improved versions of the quantum
Bekka--Valette theorem \cite[Theorem 4.8]{brannan_property_T}
(characterising Property (T) in terms of the non-existence of almost
invariant vectors for weakly mixing representations) and of the quantum Kerr--Pichot
theorem \cite[Theorem 4.9]{brannan_property_T} (characterising
Property (T) in terms of density properties of weakly mixing
representations).
\subsection{Quantum groups with Property (T) are unimodular}
It is a well-known fact that a locally compact group $G$ with Property
(T) is unimodular \cite[Corollary 1.3.6-(ii)]{bekka_property_T}. The
proof of this result makes use of the fact that if $G$ has Property
(T) and admits a continuous homomorphism into a locally compact
group $H$ with dense range, then $H$ has Property (T) \cite[Theorem
1.3.4]{bekka_property_T}. A version of \cite[Theorem
1.3.4]{bekka_property_T} for locally compact quantum groups has been
obtained in \cite[Theorem 5.7]{daws_skalski_viselter}. Using this, it
seems plausible that one can prove that Property (T) for quantum
groups implies unimodularity similarly to the classical case.
However, the proofs in the case of quantum groups have
proceeded differently.
To the best of our knowledge,
the first result in this direction for quantum groups was
proven for discrete quantum groups
\cite[Proposition~7]{Fima_property_T_discrete_quantum_group}. This was
subsequently
generalised to second countable locally compact quantum groups
\cite[Section~6]{brannan_property_T}. The proof in the
case of a locally compact
quantum group, as given in \cite{brannan_property_T}, proceeds by
showing that non-unimodular locally compact quantum groups always
admit a weakly mixing representation that weakly contains the
trivial representation; as a consequence, such a quantum group cannot have Property (T).
We give a very short proof of the fact that Property (T) implies
unimodularity, using a completely different technique, which does not
require the second countability assumption.
\begin{thm}\label{thm: property T implies unimodularity}
Let $\mathbb{G}$ be a locally compact quantum group with Property (T). Then
$\mathbb{G}$ is unimodular.
\end{thm}
\begin{proof}
Let $\delta$ denote the modular element of $\mathbb{G}$, so that
$\delta$ is a strictly positive element affiliated to $C_0(\mathbb{G})$
\cite[Definition 7.11]{kusvaes}.
By \cite[Proposition 7.12]{kusvaes}, for all $s,t\in\mathbb{R}$,
\begin{enumerate}[label=\textup{(\roman*)}]
\item
$\Delta(\delta^{is})=\delta^{is}\otimes\delta^{is}$,
\item
$\tau_t(\delta^{is})=\delta^{is}$.
\end{enumerate}
Note that $\delta^0=1$ is the trivial representation of $\mathbb{G}$.
Relation (i) implies that for all $s\in\mathbb{R}$, $\delta^{is}$ is a
$1$-dimensional unitary representation of $\mathbb{G}$,
so there exist C*-representations
$\pi_s:C^u_0(\mathbb{G}dual)\to\mathbb{C}$ such that
$(\iota\otimes\pi_s)(\text{\reflectbox{$\Ww$}}\:\!)=\delta^{is}$.
For all $\omega\in B(L^2(\mathbb{G}))_\ast$, it follows that
\[
\lim_{s\to0} \pi_s( (\omega\otimes\iota)(\text{\reflectbox{$\Ww$}}\:\!) )
=\lim_{s\to0}\omega(\delta^{is})
=\omega(1)=\widehat{\epsilon}_u((\omega\otimes\iota)(\text{\reflectbox{$\Ww$}}\:\!)),
\]
where $\widehat{\epsilon}_u$ is the counit of $C_0^u(\mathbb{G}dual)$.
Since the family $\{\pi_s\}_{s\in\mathbb{R}}$ is
uniformly bounded and
the elements of the form $(\omega\otimes\iota)(\text{\reflectbox{$\Ww$}}\:\!)$ for
$\omega\in B(L^2(\mathbb{G}))_*$ are norm-dense in $C_0^u(\mathbb{G}dual)$
\cite[Equation (5.2)]{kus}, it follows that
$\lim_{s\to0}\pi_s(x)=\widehat{\epsilon}_u(x)$ for all $x\in
C^u_0(\mathbb{G}dual)$.
By \cite[Proposition 9.1]{kus},
$(\tau_t\otimes\iota)(\text{\reflectbox{$\Ww$}}\:\!)=(\iota\otimes\widehat{\tau}^u_{-t})\text{\reflectbox{$\Ww$}}\:\!$ for all
$t\in\mathbb{R}$, where $(\widehat{\tau}^u_t)_{t\in\mathbb{R}}$
is the universal scaling group of $\mathbb{G}dual$.
Since $\mathbb{G}$ has Property (T) by hypothesis,
$\widehat{\epsilon}_u$ is isolated in $\widehat{C_0^u(\mathbb{G}dual)}$.
Since $\lim_{t\to0}\pi_t(x) = \widehat{\epsilon}_u(x)$
for all $x\in C_0^u(\mathbb{G}dual)$, the net
$(\pi_t)$ converges to $\widehat{\epsilon}_u$ in the Fell
topology of $\widehat{C_0^u(\mathbb{G}dual)}$ as $t\to0$.
As $\widehat{\epsilon}_u$ is isolated, we must have
$\pi_t=\widehat{\epsilon}_u$ for all $t\in\mathbb{R}$
with $|t|$ sufficiently small, that is,
$\delta^{it}=1$ for all $t\in\mathbb{R}$ with $|t|$ sufficiently small.
Since $t\mapsto\delta^{it}$ is a one-parameter group, it follows that in fact
$\delta^{it} = 1$ for all $t\in\mathbb{R}$, and so $\delta = 1$, as required.
\end{proof}
\subsection{Other aspects of quantum groups with Property (T)}
Combining Proposition~\ref{Proposition: covariant finite-dimensional
C* representations give admissible corepresentations},
Theorem~\ref{thm:admissibility-unimodular} and the proof of ``\textup{(T6)} $\implies$ \textup{(T2)}'' in Theorem~\ref{thm:kazhdan}, we have the following.
\begin{crlre}\label{Corollary: T5 iff T6 for unimodular}
Let $\mathbb{G}$ be a locally compact quantum group
that is either unimodular or arises
through the bicrossed product construction as described in
Theorem~\ref{thm:bicrossed}.
Then $\mathbb{G}$ having Property (T) is equivalent to either of the following
statements:
\begin{enumerate}
\item[\textup{(T6$'$)}]
There is a finite-dimensional irreducible C*-representation of
$C_0^u(\mathbb{G}dual)$ that is an isolated point in $\widehat{C^u_0(\mathbb{G}dual)}$.
\item[\textup{(T6)}]
$C^u_0(\mathbb{G}dual)\cong B\oplus M_n(\mathbb{C})$ for some C*-algebra $B$ and
some $n\in\mathbb{N}_0$.
\end{enumerate}
\end{crlre}
A unitary representation of a locally compact quantum group
is \emph{weakly mixing} if it admits no nonzero admissible
finite-dimensional subrepresentation
(see \cite{viselter}).
Corollary~\ref{Corollary: T5 iff T6 for unimodular}
together with \cite[Lemma~3.6]{brannan_property_T} gives the
Bekka--Valette theorem for all unimodular locally compact quantum groups.
Previously this was known only for quantum groups with
trivial scaling automorphism group
\cite[Theorem~4.8]{brannan_property_T}.
\begin{crlre}\label{Corollary: Bekka-Valette theorem for unimodular
LCQG with nontrivial scaling action}
Let $\mathbb{G}$ be a second countable locally compact quantum group
that is either unimodular or arises
through the bicrossed product construction as described in
Theorem~\ref{thm:bicrossed}.
Then $\mathbb{G}$ has Property (T) if and only if
every weakly mixing unitary representation of $\mathbb{G}$ fails to have
almost invariant vectors.
\end{crlre}
Combining Corollary \ref{Corollary: Bekka-Valette theorem for
unimodular LCQG with nontrivial scaling action} with \cite[Theorems
3.7 \& 3.8]{brannan_property_T} we obtain the Kerr--Pichot theorem for
unimodular quantum groups with possibly non-trivial scaling group.
Again, this was previously known only for quantum groups with
trivial scaling group \cite[Theorem~4.9]{brannan_property_T}.
Given a second countable locally compact quantum group $\mathbb{G}$ and
a Hilbert space $H$, denote
the set of all unitary representations of $\mathbb{G}$ on $H$
by $\Rep(\mathbb{G}, H)$
and the set of all weakly mixing unitary representations of
$\mathbb{G}$ on $H$ by $W(\mathbb{G}, H)$.
\begin{crlre}\label{Corollary: Kerr-Pichot theorem for unimodular LCQG
with nontrivial scaling action}
Let $\mathbb{G}$ be a second countable locally compact quantum group
that is either unimodular or arises
through the bicrossed product construction as described in
Theorem~\ref{thm:bicrossed}. Let $H$ be a separable
infinite-dimensional Hilbert space.
If $\mathbb{G}$ does not have Property (T), then
$W(\mathbb{G}, H)$ is a dense $G_\delta$-set in $\Rep(\mathbb{G}, H)$.
If $\mathbb{G}$ has Property (T), then $W(\mathbb{G}, H)$ is closed and nowhere
dense in $\Rep(\mathbb{G}, H)$.
\end{crlre}
\end{document}
\begin{document}
\begin{center}
\title[A Note on Small Overlap Monoids]{A Note on the Definition of \\ Small Overlap Monoids}
\keywords{monoid, semigroup, word problem, finite presentation, small overlap, small cancellation}
\subjclass[2000]{20M05}
\maketitle
Mark Kambites \\
School of Mathematics, \ University of Manchester, \\
Manchester M13 9PL, \ England.
\texttt{[email protected]} \\
\end{center}
\begin{abstract}
Small overlap conditions are simple and natural combinatorial conditions
on semigroup and monoid presentations, which serve to limit the complexity
of derivation sequences between equivalent words in the generators. They
were introduced by J.~H.~Remmers, and more recently have been extensively
studied by the present author. However, the definition of small overlap
conditions hitherto used by the author was slightly more restrictive than
that introduced by Remmers; this note eliminates this discrepancy by extending
the recent methods and results of the author to apply to Remmers' small overlap monoids in full generality.
\end{abstract}
Small overlap conditions are simple and natural combinatorial conditions
on semigroup and monoid presentations, which serve to limit the complexity
of derivation sequences between equivalent words in the generators. Introduced
by J.~H.~Remmers \cite{Higgins92,Remmers71,Remmers80}, and more recently
studied by the present author \cite{K_generic,K_smallover1,K_smallover2},
they are the natural semigroup-theoretic analogue of the small cancellation
conditions widely used in combinatorial group theory \cite{Lyndon77}.
The definitions of small overlap conditions originally introduced by
Remmers are slightly more general than those used by the present author.
The aims of this note are to clarify this distinction, and then to extend
the methods and results introduced in \cite{K_smallover1,K_smallover2} to
the full generality of small overlap monoids as studied by Remmers.
In addition to this introduction, this article comprises three sections.
In Section~\ref{sec_prelim} we briefly recall the definitions of small
overlap conditions, and also discuss the distinction between Remmers' and
the author's definitions. In Section~\ref{sec_main} we show how to extend
the key technical results from \cite{K_smallover1}, from the slightly
restricted setting considered there to Remmers' small overlap conditions
in their more general form. Finally, Section~\ref{sec_apps} applies the
results of the previous section to extend the main results of
\cite{K_smallover1,K_smallover2} to the more general case.
The proofs for certain of the results in this paper are very similar (in
some cases identical) to arguments used in previous papers \cite{K_smallover1,K_smallover2}. In the interests of brevity we refrain
from repeating these, instead providing detailed references. Hence, while the
results of this paper may be read in isolation, the reader wishing to fully
understand the proofs is advised to read it in conjunction with
\cite{K_smallover1,K_smallover2}.
\section{Small Overlap Monoids}\label{sec_prelim}
We assume familiarity with basic notions of combinatorial semigroup
theory, including free semigroups and monoids, and semigroup and monoid
presentations. Except where stated otherwise, we assume we have a fixed
finite presentation for a monoid (or semigroup, the difference being
unimportant). Words are assumed to be drawn from the free monoid on the
generating alphabet unless otherwise stated. We write $u = v$ to indicate that two words are
equal in the free monoid or semigroup, and $u \equiv v$ to indicate that they represent
the same element of the monoid or semigroup presented. We say that a word $p$ is a
\textit{possible prefix} of $u$ if there exists a (possibly empty) word
$w$ with $pw \equiv u$, that is, if the element represented by $u$ lies in
the right ideal generated by the element represented by $p$. The empty
word is denoted $\epsilon$.
A \textit{relation word} is a word which occurs as one side of a
relation in the presentation. A \textit{piece} is a word in the
generators which occurs as a factor in sides of two \textit{distinct} relation
words, or in two different (possibly overlapping) places within one
side of a relation word. Note that this definition differs slightly from
that used in \cite{K_smallover1,K_smallover2} in the presence of the word
``distinct''; we shall discuss the
significance of this shortly. By convention, the empty word is always a piece.
We say that a presentation is \textit{weakly $C(n)$}, where
$n$ is a positive integer, if no relation word can be written as the product
of \textit{strictly fewer than} $n$ pieces. Thus for each $n$, being weakly
$C(n+1)$ is a stronger condition than being weakly $C(n)$.
In \cite{K_smallover1,K_smallover2} we used a slightly
more general definition of a piece, which in turn led
to slightly more
restrictive conditions $C(n)$; the author is grateful to Uri Weiss for
pointing out this discrepancy.
Specifically, in \cite{K_smallover1,K_smallover2} we defined a piece to be a
word which
occurs more than once as a factor of words in the \textit{sequence} of
relation words. Under this definition, if the same relation word appears
twice in a presentation then it is considered to be a piece, and so the
presentation fails to satisfy $C(2)$. By contrast, Remmers defined a piece
to be a word which appears more than once as a factor of words in the
\textit{set} of relation words. The effect of this is that Remmers' definition
permits $C(2)$ (and higher) presentations to have relations of, for example,
the form $(u, v_1)$ and $(u, v_2)$ with $v_1 \neq v_2$. (Equivalently, one
could choose to define a piece in terms of the sequence of relation words
but permit ``$n$-ary'' relations of the form $(u,v_1,v_2)$, to be interpreted
as equivalent to relations $(u,v_1)$ and $(u,v_2)$). In this paper, we say
that a presentation is \textit{strongly} $C(n)$ if it is weakly $C(n)$ and
has no repeated relation words, that is, if it satisfies the condition which
was called $C(n)$ in \cite{K_smallover1,K_smallover2}.
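Purely as an illustration (this is not part of the formal development), the
following Python sketch computes the set of pieces of a finite presentation in
Remmers' sense and tests the weakly $C(n)$ condition; relation words are
represented as Python strings over the generating alphabet, and the function
names are our own. Counting occurrences over the sequence of relation words
(with multiplicity) rather than over the set would recover the stricter notion
of piece used in \cite{K_smallover1,K_smallover2}.
\begin{verbatim}
def pieces(relation_words):
    # Non-empty pieces in Remmers' sense: factors occurring in two
    # *distinct* relation words, or in two different (possibly
    # overlapping) places within a single relation word.
    rels = set(relation_words)   # Remmers works with the *set* of relation words
    occurrences = {}             # factor -> set of (relation word, start position)
    for r in rels:
        for i in range(len(r)):
            for j in range(i + 1, len(r) + 1):
                occurrences.setdefault(r[i:j], set()).add((r, i))
    return {f for f, places in occurrences.items()
            if len({w for (w, _) in places}) >= 2 or len(places) >= 2}

def min_pieces(word, piece_set):
    # Minimum number of pieces whose product is `word' (infinity if none).
    INF = float("inf")
    best = [0] + [INF] * len(word)
    for j in range(1, len(word) + 1):
        for i in range(j):
            if best[i] != INF and word[i:j] in piece_set:
                best[j] = min(best[j], best[i] + 1)
    return best[len(word)]

def is_weakly_C(n, relation_words):
    # Weakly C(n): no relation word is a product of fewer than n pieces.
    ps = pieces(relation_words)
    return all(min_pieces(r, ps) >= n for r in relation_words)
\end{verbatim}
For instance, for the presentation $\langle a,b,c \mid abc=cba\rangle$ the
non-empty pieces are exactly $a$, $b$ and $c$, and the sketch reports the
presentation as weakly $C(3)$ but not weakly $C(4)$.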
In fact it transpires that the weakly $C(n)$ conditions still suffice
to establish the main methods and results of \cite{K_smallover1,K_smallover2}. However,
this fact is rather obscured by the technical details and notation in
\cite{K_smallover1,K_smallover2}. In particular, for a relation word $R$ we
defined $\ol{R}$ to be the (necessarily unique) word such that $(R, \ol{R})$ or $(\ol{R}, R)$ is a relation in the
presentation. The extensive use of this notation makes it difficult
to convince oneself that the arguments in \cite{K_smallover1,K_smallover2}
do indeed apply in the more general case, so the aim of this paper is to
provide full proofs of the results of those papers in the more general setting.
For each relation word $R$, let $X_R$ and $Z_R$ denote respectively the
longest prefix of $R$ which is a piece, and the longest suffix of $R$
which is a piece. If the presentation is weakly $C(3)$ then $R$ cannot be
written as a product of two pieces, so this prefix and suffix cannot meet;
thus, $R$ admits a factorisation $X_R Y_R Z_R$ for some non-empty word $Y_R$.
If moreover the presentation is weakly $C(4)$, then the relation word $R$
cannot be written as a product of three pieces, so $Y_R$ is not a piece. The
converse also holds: a weakly $C(3)$ presentation such that no $Y_R$ is a piece
is a weakly $C(4)$ presentation. We
call $X_R$, $Y_R$ and $Z_R$ the \textit{maximal piece prefix}, the
\textit{middle word} and the \textit{maximal piece suffix} respectively
of $R$.
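Continuing the illustrative sketch above (again, merely as an aid to
intuition and not part of the formal development), the factorisation
$R = X_R Y_R Z_R$ can be computed directly from the set of pieces; the
assertion simply records the weak $C(3)$ requirement that the maximal piece
prefix and suffix do not meet.
\begin{verbatim}
def xyz_factorisation(r, piece_set):
    # Split a relation word r as X Y Z, where X is the longest prefix of r
    # that is a piece and Z is the longest suffix of r that is a piece.
    x_len = max([k for k in range(1, len(r) + 1) if r[:k] in piece_set] or [0])
    z_len = max([k for k in range(1, len(r) + 1) if r[len(r) - k:] in piece_set] or [0])
    assert x_len + z_len < len(r), "not weakly C(3): prefix and suffix pieces meet"
    return r[:x_len], r[x_len:len(r) - z_len], r[len(r) - z_len:]
\end{verbatim}
For the relation word $abc$ in the example above, this returns $X_R=a$,
$Y_R=b$ and $Z_R=c$.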
Assuming now that the presentation is weakly $C(3)$,
we shall use the letters $X$, $Y$ and $Z$ (sometimes with adornments or
subscripts) exclusively to represent maximal piece prefixes, middle words
and maximal piece suffixes respectively of relation words; two such letters
with the same subscript or adornment (or with none) will be assumed to
stand for the appropriate factors of the same relation word.
We say that a relation word $\ol{R}$ is a \textit{complement} of a relation word
$R$ if there are relation words $R = R_1, R_2, \dots, R_n = \ol{R}$ such
that either $(R_i, R_{i+1})$ or $(R_{i+1}, R_i)$ is a relation in the
presentation for $1 \leq i < n$. We say that $\ol{R}$ is a \textit{proper}
complement of $R$ if, in addition, $\ol{R} \neq R$. Abusing notation and
terminology slightly, if $R = X_R Y_R Z_R$ and $\ol{R} = X_{\ol{R}} Y_{\ol{R}} Z_{\ol{R}}$ then we
write $\ol{X_R} = X_{\ol{R}}$, $\ol{X_R Y_R} = X_{\ol{R}} Y_{\ol{R}}$ and so
forth. We say that $\ol{X_R}$ is a complement of $X_R$, and $\ol{X_R Y_R}$
is a complement of $X_R Y_R$.
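The complement relation can be illustrated in the same spirit (a sketch only;
relations are given as pairs of words, and the helper name is ours): the
complements of $R$ are the words reachable from $R$ via defining relations,
used in either direction.
\begin{verbatim}
def complements(r, relations):
    # All complements of the relation word r, including r itself.
    found, frontier = {r}, [r]
    while frontier:
        w = frontier.pop()
        for (u, v) in relations:
            if w == u and v not in found:
                found.add(v); frontier.append(v)
            if w == v and u not in found:
                found.add(u); frontier.append(u)
    return found
\end{verbatim}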
A \textit{relation prefix} of a word
is a prefix which admits a (necessarily unique, as a consequence of the
small overlap condition) factorisation of the form $a X Y$ where $X$ and $Y$
are the maximal piece prefix and middle word respectively of some relation
word $XYZ$. An \textit{overlap prefix (of length $n$)} of
a word $u$ is a relation prefix which admits an (again necessarily unique)
factorisation of the form $b X_1 Y_1' X_2 Y_2' \dots X_n Y_n$ where
\begin{itemize}
\item $n \geq 1$;
\item $b X_1 Y_1' X_2 Y_2' \dots X_n Y_n$ has no factor of the form $X_0Y_0$,
where $X_0$ and $Y_0$ are the maximal piece prefix and middle word respectively
of some relation word, beginning before the end of the prefix $b$;
\item for each $1 \leq i \leq n$, $R_i = X_i Y_i Z_i$ is a relation word with
$X_i$ and $Z_i$ the maximal piece prefix and suffix respectively; and
\item for each $1 \leq i < n$, $Y_i'$ is a proper, non-empty prefix of $Y_i$.
\end{itemize}
Notice that if a word has a relation prefix, then the shortest such must
be an overlap prefix. A relation prefix $a XY$ of a word $u$ is called
\textit{clean} if $u$ does \textbf{not} have a prefix
$$a XY' X_1 Y_1$$
where $X_1$ and $Y_1$ are the maximal piece prefix and middle word respectively
of some relation word, and $Y'$ is a proper, non-empty prefix of $Y$. As in
\cite{K_smallover1}, clean overlap prefixes will play a crucial role in what
follows.
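In the same illustrative spirit, cleanliness of a given relation prefix $aXY$
of a word $u$ can be tested directly from the definition; here \texttt{a},
\texttt{x} and \texttt{y} are strings with $axy$ a prefix of $u$, and the
sketch uses the helpers introduced above.
\begin{verbatim}
def is_clean(u, a, x, y, relation_words, piece_set):
    # The relation prefix a x y of u fails to be clean precisely when u has
    # a prefix a x y' x1 y1, with y' a proper non-empty prefix of y and
    # x1, y1 the maximal piece prefix and middle word of some relation word.
    for k in range(1, len(y)):              # proper, non-empty prefixes y' of y
        head = a + x + y[:k]
        for r in set(relation_words):
            x1, y1, _ = xyz_factorisation(r, piece_set)
            if u.startswith(head + x1 + y1):
                return False
    return True
\end{verbatim}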
If $u$ is a word and $p$ is a piece, we say that $u$ is \textit{$p$-active} if $p u$ has a relation prefix
$aXY$ with $|a| < |p|$, and \textit{$p$-inactive} otherwise.
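Finally, whether a word $u$ is $p$-active can likewise be checked by searching
$pu$ for a relation prefix $aXY$ with $|a|<|p|$ (again a sketch only, built on
the helpers above).
\begin{verbatim}
def is_p_active(u, p, relation_words, piece_set):
    # u is p-active if p + u has a relation prefix a X Y with |a| < |p|,
    # i.e. some X Y (maximal piece prefix followed by middle word of a
    # relation word) occurs in p + u starting at a position < |p|.
    w = p + u
    for r in set(relation_words):
        x, y, _ = xyz_factorisation(r, piece_set)
        if any(w.startswith(x + y, i) for i in range(len(p))):
            return True
    return False
\end{verbatim}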
\section{Technical Results}\label{sec_main}
In this section we show how some technical results and methods from
\cite{K_smallover1} concerning strongly $C(4)$ monoids
can be extended to cover weakly $C(4)$ monoids. We assume
throughout this section a fixed monoid presentation which is weakly $C(4)$.
The following three
foundational statements are completely unaffected by our revised definitions,
and can still be proved exactly as in \cite{K_smallover1}.
\begin{proposition}\label{prop_overlapprefixnorel}
Let $a X_1 Y_1' X_2 Y_2' \dots X_n Y_n$
be an overlap prefix of some word. Then this prefix
contains no relation word as a factor, except possibly the suffix $X_n Y_n$ in
the case that $Z_n = \epsilon$.
\end{proposition}
\begin{proposition}\label{prop_opgivesmop}
Let $u$ be a word. Every overlap prefix of $u$ is contained in a
clean overlap prefix of $u$.
\end{proposition}
\begin{corollary}\label{cor_nomopnorel}
If a word $u$ has no clean overlap prefix, then it contains
no relation word as a factor, and so if $u \equiv v$ then $u = v$.
\end{corollary}
The following lemma is essentially a restatement of \cite[Lemma~1]{K_smallover1}
using our new notation. The proof is essentially the same as in
\cite{K_smallover1}, with the addition of an obvious inductive argument to
allow for the fact that several rewrites may be needed to obtain $\ol{XYZ}$
from $XYZ$.
\begin{lemma}\label{lemma_staysclean}
Suppose $u = w XYZ u'$, where $w XY$ is a clean overlap prefix of $u$, and
let $\ol{XYZ}$ be a complement of $XYZ$. Then $w \ol{XY}$ is a clean
overlap prefix of $w \ol{XYZ} u'$.
\end{lemma}
From now on, we shall assume that our presentation is weakly $C(4)$.
We are now ready to prove our first main technical result, which is an
analogue of \cite[Lemma 2]{K_smallover1}, and is fundamental to our
approach to weakly $C(4)$ monoids.
\begin{lemma}\label{lemma_overlap}
Suppose a word $u$ has clean overlap prefix $w X Y$. If
$u \equiv v$ then $v$ has overlap prefix $w \ol{X Y}$ for some
complement $\ol{XYZ}$ of $XYZ$, and
no relation word occurring as a factor of $v$ overlaps this prefix,
unless it is $\ol{X Y Z}$ in the obvious place.
\end{lemma}
\begin{proof}
Since $w X Y$ is an overlap prefix of $u$, it has by definition a
factorisation
$$w XY = a X_1 Y_1' \dots X_{n} Y_{n}' X Y$$
for some $n \geq 0$. We use this fact to prove the claim by induction on
the length $r$ of a rewrite sequence (using the defining relations) from
$u$ to $v$.
In the case $r = 0$, we have $u = v$, so $v$ certainly has (clean) overlap
prefix $w XY$.
By Proposition~\ref{prop_overlapprefixnorel}, no relation word factor can
occur entirely within this prefix, unless it is the suffix $X Y$ and $Z = \epsilon$. If
a relation word factor of $v$ overlaps the end of the given overlap prefix
and entirely contains $XY$ then, since $XY$ is not a piece, that
relation word must clearly be $XYZ$. Finally,
a relation word cannot overlap the end of the given overlap prefix but
not contain the suffix $XY$, since this would clearly contradict either the
fact that the given overlap prefix is clean, or the fact that $Y$ is not a
piece.
Suppose now for induction that the lemma holds for all values less than $r$,
and that there is a rewrite sequence from $u$ to $v$ of length $r$. Let
$u_1$ be the second term in the sequence, so that $u_1$ is obtained from
$u$ by a single rewrite using the defining relations, and $v$ from $u_1$
by $r-1$ rewrites.
Consider the relation word in $u$ which is to be rewritten in order to
obtain $u_1$, and in
particular its position in $u$. By Proposition~\ref{prop_overlapprefixnorel},
this relation word cannot be contained in the clean overlap prefix $w XY$,
unless it is $X Y$ where $Z = \epsilon$.
Suppose first that the relation word to be rewritten contains the final
factor $Y$
of the given clean overlap prefix. (Note that this covers in particular the
case that the relation word is $XY$ and $Z = \epsilon$.)
From the weakly $C(4)$ assumption we know that $Y$ is not a piece, so we may deduce
that the relation word is $X Y Z$ contained in the obvious place. In
this case, applying the rewrite clearly leaves $u_1$ with a prefix
$w \hat{X} \hat{Y}$ for some complement $\hat{X} \hat{Y} \hat{Z}$ of
$XYZ$. By Lemma~\ref{lemma_staysclean}, this is a clean overlap
prefix. Now $v$ can be obtained from
$u_1$ by $r-1$ rewrite steps, so it follows from the inductive hypothesis
that $v$ has overlap prefix
$w \ol{XY}$ where $\ol{XYZ}$ is a complement of $\hat{X} \hat{Y} \hat{Z}$ and hence of
$XYZ$. It follows also that no relation word occurring as a factor of $v$
overlaps this prefix, unless it is $\ol{X Y Z}$; this
completes the proof in this case.
Next, we consider the case in which the relation word factor in $u$ to be
rewritten does not contain the final factor $Y$ of the clean overlap
prefix, but does overlap with the end of the clean overlap prefix. Then
$u$ has a factor of the form $\hat{X} \hat{Y}$, where $\hat{X}$ is the
maximal piece prefix and $\hat{Y}$ the middle word of a relation word,
which overlaps $X Y$, beginning after the start of $Y$. This clearly
contradicts the assumption that the overlap prefix is clean.
Finally, we consider the case in which the relation word factor in $u$
which is to be rewritten does not overlap the given clean overlap prefix
at all. Then obviously, the given clean overlap prefix of $u$ remains an
overlap prefix of $u_1$. If this overlap prefix is clean, then a simple
application of the inductive hypothesis again suffices to prove that $v$
has the required property.
There remains, then, only the case in which the given overlap prefix is
no longer clean in $u_1$. Then by definition there exist words $\hat{X}$ and
$\hat{Y}$, being a maximal piece prefix and middle word respectively of some relation
word, such
that $u_1$ has the prefix
$$a X_1 Y_1' \dots X_n Y_n' X Y' \hat{X} \hat{Y}$$
for some proper, non-empty prefix $Y'$ of $Y$.
Now certainly this is not a prefix of $u$, since this would contradict
the assumption that $a X_1 Y_1' \dots X_n Y_n' XY$
is a clean overlap
prefix of $u$. So we deduce that $u_1$ can be transformed to $u$ by
rewriting a relation word
overlapping the final $\hat{X}\hat{Y}$. This relation word factor cannot contain
the whole of the factor $\hat{X}\hat{Y}$, since then it would overlap with
the prefix $a X_1 Y_1' \dots X_n Y_n' X Y$, which would
again contradict the assumption that this prefix is a clean overlap prefix of
$u$. Nor can the relation word contain the final factor $\hat{Y}$, since $\hat{Y}$ is not a piece.
Hence, $u_1$ must have a prefix
$$a X_1 Y_1' \dots X_{n-1} Y_{n-1}' X_n Y_n' X Y' \hat{X} \hat{Y}' R$$
for some proper, non-empty prefix $\hat{Y}'$ of $\hat{Y}$ and
some relation word $R$. Suppose $R = X_R Y_R Z_R$ where $X_R$ and $Z_R$
are the maximal piece prefix and suffix respectively. Then it is readily
verified that
$$a X_1 Y_1' \dots X_{n-1} Y_{n-1}' X_n Y_n' X Y' \hat{X} \hat{Y}' X_R Y_R$$
is a clean overlap prefix of $u_1$. Indeed, the fact it is an overlap prefix
is immediate, and if it were not clean then some factor of $u_1$ of the form
$\tilde{X} \tilde{Y}$ would have to overlap the end of the given prefix; but
this factor would either be contained in $Y_R Z_R$ (contradicting the
fact that $\tilde{X}$ is the maximal piece prefix of $\tilde{X} \tilde{Y} \tilde{Z}$)
or would contain a non-empty suffix of $Y_R$ followed by $Z_R$ (contradicting
the fact that $Z_R$ is the maximal piece suffix of $X_R Y_R Z_R$).
Now by the inductive
hypothesis, $v$ has prefix
\begin{equation}\label{vprefix2}
a X_1 Y_1' \dots X_{n-1} Y_{n-1}' X_n Y_n' X Y' \hat{X} \hat{Y}' \ol{X_R Y_R}
\end{equation}
for some complement $\ol{X_R Y_R}$ of $X_R Y_R$. But now
$v$ has prefix
$$a X_1 Y_1' \dots X_{n-1} Y_{n-1}' X_n Y_n' X Y' \hat{X} \hat{Y}'$$
which in turn has prefix
\begin{equation}\label{vprefix3}
a X_1 Y_1' \dots X_{n-1} Y_{n-1}' X_n Y_n' X Y.
\end{equation}
Moreover, by Proposition~\ref{prop_overlapprefixnorel}, the prefix
\eqref{vprefix2} of $v$ contains no relation
word as a factor, unless it is the final factor $\ol{X_R Y_R}$
and $\ol{Z_R} = \epsilon$, and it follows
easily that no relation word factor overlaps the prefix \eqref{vprefix3}
of $v$.
\end{proof}
The following results are now proved exactly as their analogues in
\cite{K_smallover1}.
\begin{corollary}\label{cor_noncleanprefix}
Suppose a word $u$ has (not necessarily clean) overlap prefix
$w XY$. If $u \equiv v$ then $v$ has a
prefix $w$ and contains no relation word overlapping this prefix.
\end{corollary}
\begin{proposition}\label{prop_dumpprefix}
Suppose a word $u$ has an overlap prefix $a X Y$ and that
$u = a X Y u''$. Then $u \equiv v$ if and only if $v = a v'$ where
$v' \equiv X Y u''$.
\end{proposition}
\begin{proposition}\label{prop_inactive}
Let $u$ be a word and $p$ a piece.
If $u$ is $p$-inactive then $p u \equiv v$ if and only if $v = p w$
for some $w$ with $u \equiv w$.
\end{proposition}
\begin{proposition}\label{prop_coactive}
Let $p_1$ and $p_2$ be pieces and suppose $u$ is $p_1$-active and $p_2$-active.
Then $p_1$ and $p_2$ have a
common non-empty suffix, and if $z$ is their maximal common suffix then
\begin{itemize}
\item[(i)] $u$ is $z$-active;
\item[(ii)] $p_1 u \equiv v$ if and only if $v = z_1 v'$ where $z_1 z = p_1$ and
$v' \equiv z u$; and
\item[(iii)] $p_2 u \equiv v$ if and only if $v = z_2 v'$ where $z_2 z = p_2$
and $v' \equiv z u$.
\end{itemize}
\end{proposition}
\begin{corollary}\label{cor_actsame}
Let $p_1$ and $p_2$ be pieces. Suppose $p_1 u \equiv p_1 v$ and $u$
is $p_2$-active. Then $p_2 u \equiv p_2 v$.
\end{corollary}
The following is a strengthening of \cite[Corollary 4]{K_smallover1}.
\begin{corollary}\label{cor_eitheror}
Let $u$ and $v$ be words and $p_1, p_2, \dots, p_k$ be pieces.
Suppose there exist words $u = u_1, \dots, u_n = v$ such that
for $1 \leq i < n$ there exists $1 \leq j_i \leq k$ with
$p_{j_i} u_i \equiv p_{j_i} u_{i+1}$.
Then $p_j u \equiv p_j v$ for some $j$ with $1 \leq j \leq k$.
\end{corollary}
\begin{proof}
Fix $u$, $v$ and $p_1, \dots, p_k$, and suppose $n$ is minimal such
that a sequence $u_1, \dots, u_n$ with the hypothesized properties exists.
Our aim is thus to show that $n \leq 2$. Suppose for a contradiction
that $n > 2$.
If $u_2$ was $p_{j_2}$-inactive then by Proposition~\ref{prop_inactive} we
would have $u_2 \equiv u_3$ so that $p_{j_1} u_1 \equiv p_{j_1} u_2 \equiv p_{j_1} u_3$
which clearly contradicts the minimality assumption on $n$.
Thus, $u_2$ is $p_{j_2}$-active.
But now since $p_{j_1} u_1 \equiv p_{j_1} u_2$, we apply
Corollary~\ref{cor_actsame} to see that
$p_{j_2} u_1 \equiv p_{j_2} u_2 \equiv p_{j_2} u_3$, which again
contradicts the minimality of $n$.
\end{proof}
We now present a lemma which gives a set of mutually exclusive combinatorial
conditions, the disjunction of which is necessary and sufficient for two words
of a certain form to represent the same element.
\begin{lemma}\label{lemma_eq}
Suppose $u = X Y u'$ where $XY$ is a clean overlap prefix of
$u$. Then $u \equiv v$ if and only if one of the following mutually
exclusive conditions holds:
\begin{itemize}
\item[(1)] $u = XYZ u''$ and $v = XYZ v''$ and $\ol{Z} u'' \equiv \ol{Z} v''$
for some complement $\ol{Z}$ of $Z$;
\item[(2)] $u = X Y u'$, $v = X Y v'$, and $Z$ fails to be a
prefix of at least one of $u'$ and $v'$, and $u' \equiv v'$;
\item[(3)] $u = X Y Z u''$, $v = \ol{X} \ol{Y} \ol{Z} v''$ for some
uniquely determined proper complement $\ol{XYZ}$ of $XYZ$,
and $\hat{Z} u'' \equiv \hat{Z} v''$ for some complement $\hat{Z}$
of $Z$;
\item[(4)] $u = X Y u'$, $v = \ol{X} \ol{Y} \ol{Z} v''$ for some uniquely
determined proper complement $\ol{XYZ}$ of $XYZ$ but
$Z$ is not a prefix of $u'$ and $u' \equiv Z v''$;
\item[(5)] $u = X Y Z u''$, $v = \ol{X} \ol{Y} v'$ for some uniquely determined
proper complement
$\ol{XYZ}$ of $XYZ$,
but $\ol{Z}$ is not a prefix of $v'$ and $\ol{Z} u'' \equiv v'$;
\item[(6)] $u = X Y u'$, $v = \ol{X} \ol{Y} v'$ for some uniquely determined proper complement
$\ol{XYZ}$ of $XYZ$, $Z$ is not
a prefix of $u'$ and $\ol{Z}$ is not a prefix of $v'$, but
$Z = z_1 z$, $\ol{Z} = z_2 z$, $u' = z_1 u''$, $v' = z_2 v''$ where
$u'' \equiv v''$ and $z$ is the maximal common suffix of $Z$ and $\ol{Z}$,
$z$ is non-empty, and $z$ is a possible prefix of $u''$.
\end{itemize}
\end{lemma}
\begin{proof}
It follows easily from the definitions that no complement of $XY$ is a
prefix of another. Hence, $v$ can have at most one of them as a prefix. Thus,
conditions (1)-(2) are not consistent with conditions (3)-(6), and the
prefixes of $v$ in (3)-(6) are uniquely determined. The mutual
exclusivity of (1) and (2) is self-evident from the definitions, and
likewise that of (3)-(6).
It is easily verified that each of the conditions
(1)-(5) imply that $u \equiv v$. We show next that (6) implies that
$u \equiv v$. Since $z$ is a possible prefix of $u''$ and $u'' \equiv v''$,
we may write $u'' \equiv zx \equiv v''$ for some word $x$. Now we have
\begin{align*}
u = X Y u' = XY z_1 u'' &\equiv XY z_1 z x = XYZ x \\
&\equiv \ol{XYZ} x = \ol{XY} z_2 z x \equiv \ol{XY} z_2 v'' = \ol{XY} v' = v.
\end{align*}
It remains to show that $u \equiv v$ implies that one of
the conditions (1)-(6) holds. To this end, suppose $u \equiv v$;
then there is a rewrite sequence taking $u$ to $v$.
By Lemma~\ref{lemma_overlap}, every term in this sequence will have a prefix
which is a complement of $XY$, and this prefix can only be modified by
the application of a relation, both sides of which are complements of
$XYZ$, in the obvious place. We now prove the claim by case analysis.
By Lemma~\ref{lemma_overlap}, $v$ begins either with $XY$ or with some
proper complement $\ol{XY}$.
Consider first the case in which $v$ begins with $XY$; we split this into
two further cases depending on whether $u$ and $v$ both begin with the full
relation word $XYZ$; these will correspond respectively to conditions (1)
and (2) in the statement of the lemma.
\textbf{Case (1).} Suppose $u = XYZ u''$ and $v = X Y Z v''$.
Then clearly there is a rewrite sequence taking $u$ to $v$ which by
Lemma~\ref{lemma_overlap} can be
broken up as:
\begin{align*}
u &= XYZ u'' = X_0 Y_0 Z_0 u'' \to^* X_0 Y_0 Z_0 u_1 \to X_1Y_1Z_1 u_1 \to^* X_1 Y_1 Z_1 u_2 \\
&\to X_2 Y_2 Z_2 u_2 \to^* \dots \to X_n Y_n Z_n u_n \to^* X_n Y_n Z_n v'' = XYZ v'' = v
\end{align*}
where each prefix $X_i Y_i Z_i$ is a complement of $XYZ$, and
none of the steps in the sequences indicated by $\to^*$ involves rewriting
a relation word overlapping with the prefix $X_i Y_i$.
It follows that there are rewrite sequences:
$$Z u'' \to^* Z u_1, \ Z_1 u_1 \to^* Z_1 u_2, \ Z_2 u_2 \to^* Z_2 u_3, \ \dots, \ Z_n u_n \to^* Z_n v''$$
Now by Corollary~\ref{cor_eitheror}, we have $Z_i u'' \equiv Z_i v''$ for
some $1 \leq i \leq n$, where $Z_i$ is a complement of $Z$ as required to
show that condition (1) holds.
\textbf{Case (2).} Suppose now that $u = X Y u'$, $v = XY v'$ and $Z$
fails to be a prefix of at least one of $u'$ and $v'$. We must show that
$u' \equiv v'$.
We again consider rewrite sequences
from $u = XY u'$ to $v = XY v'$. Again using Lemma~\ref{lemma_overlap}, we
see that there is either (i) such a sequence taking $u$ to $v$ containing
no rewrites of relation words overlapping the prefix $XY$, or (ii) such a
sequence taking $u$ to $v$ which can be broken up as:
\begin{align*}
u &= XY u' = X_0 Y_0 u' \to^* X_0 Y_0 Z_0 u_1 \to X_1Y_1Z_1 u_1 \to^* X_1 Y_1 Z_1 u_2 \\
&\to X_2 Y_2 Z_2 u_2 \to^* \dots \to X_n Y_n Z_n u_n \to^* X_n Y_n Z_n v'' = X_n Y_n v' = XY v' = v
\end{align*}
where each prefix $X_i Y_i Z_i$ is a complement of $XYZ$, and
none of the steps in the sequences indicated by $\to^*$ involves rewriting
a relation word overlapping with the prefix $X_i Y_i$.
In case (i) there is clearly a rewrite sequence
taking $u'$ to $v'$ so that $u' \equiv v'$ as required. In case (ii), there
are rewrite sequences:
$$u' \to^* Z u_1, \ Z_1 u_1 \to^* Z_1 u_2, \ Z_2 u_2 \to^* Z_2 u_3, \ \dots, \ Z_n u_n = Z u_n \to^* v'$$
Now if $u'$ does not begin with $Z$, we can deduce from
Proposition~\ref{prop_inactive} that $u_1$ is $Z$-active.
By Corollary~\ref{cor_eitheror}, we have $\hat{Z} u_1 \equiv \hat{Z} u_n$
for some complement $\hat{Z}$ of $Z$. Since $u_1$ is
$Z$-active, Corollary~\ref{cor_actsame} tells us that we also have
$Z u_1 \equiv Z u_n$. But now
$$u' \equiv Z u_1 \equiv Z u_n \equiv v'$$
so condition (2) holds. A similar argument applies if
$v'$ does not begin with $Z$.
\textbf{Case (3).} Suppose $u = XYZ u''$ and
$v = \ol{XYZ} v''$.
Then $u = XYZ u'' \equiv v \equiv XYZ v''$, so by the same argument as in case (1) we
have $\hat{Z} u'' \equiv \hat{Z} v''$ for some complement $\hat{Z}$ of $Z$, as required
to show that condition (3) holds.
\textbf{Case (4).} Suppose $u = XY u'$ and
$v = \ol{XYZ} v''$ but $Z$ is not a prefix of $u'$. Then
$u = XY u' \equiv v \equiv XYZ v''$. Now applying the same argument as
in case (2) (with $XYZ v''$ in place of $v$ and setting $v' = Zv''$) we
have $u' \equiv v' = Z v''$ so that condition (4) holds.
\textbf{Case (5).} Suppose $u = XYZ u''$, $v = \ol{XY} v'$
but $\ol{Z}$ is not a prefix of $v'$. Then we have
$\ol{XYZ} u'' \equiv u \equiv v = \ol{XY} v'$, and moreover,
Lemma~\ref{lemma_staysclean} guarantees that $\ol{XY}$ is a clean overlap
prefix of $\ol{XYZ} u''$. Now applying the same
argument as in case (2) (but with $\ol{XYZ} u''$ in place of $u$ and
setting $u' = \ol{Z} u''$) we
obtain $v' \equiv u' = \ol{Z} u''$, so that condition (5) holds.
\textbf{Case (6).} Suppose $u = XY u'$, $v = \ol{XY} v'$ and that $Z$ is not a
prefix of $u'$ and $\ol{Z}$ is not a prefix of $v'$.
It follows this time that there is a rewrite sequence taking $u$ to $v$ of
the form
\begin{align*}
u = XY u' = & X_0 Y_0 u' \to^* X_0 Y_0 Z_0 u_1 \to X_1Y_1Z_1 u_1 \to^* X_1 Y_1 Z_1 u_2 \\
&\to X_2 Y_2 Z_2 u_2 \to^* \dots \to X_n Y_n Z_n u_n \to^* X_n Y_n v' = \ol{XY} v' = v
\end{align*}
where once more by Lemma~\ref{lemma_overlap}
each prefix $X_i Y_i Z_i$ is a complement of $XYZ$, and
none of the steps in the sequences indicated by $\to^*$ involves rewriting
a relation word overlapping with the prefix $X_i Y_i$.
Now there are rewrite sequences:
$$u' \to^* Z u_1, \ Z_1 u_1 \to^* Z_1 u_2, \ Z_2 u_2 \to^* Z_2 u_3, \ \dots, \ Z_n u_n = \ol{Z} u_n \to^* v'$$
Notice that, since $u'$ does not begin with $Z$, we may deduce from
Proposition~\ref{prop_inactive} that $u_1$ is $Z$-active.
By Corollary~\ref{cor_eitheror}, we have $\hat{Z} u_1 \equiv \hat{Z} u_n$
for some complement $\hat{Z}$ of $Z$. Now since $u_1$ is
$Z$-active, Corollary~\ref{cor_actsame} tells us that we also have
$Z u_1 \equiv Z u_n$. But now
$$u' \equiv Z u_1 \equiv Z u_n$$ where $u'$ does not begin with $Z$, and
also $v' \equiv \ol{Z} u_n$ where $v'$ does not begin with $\ol{Z}$. By
applying Proposition~\ref{prop_inactive} twice, we deduce that $u_n$ is both
$Z$-active and $\ol{Z}$-active.
Let $z$ be the maximal common suffix of $Z$ and $\ol{Z}$. Then
applying Proposition~\ref{prop_coactive} (with $p_1 = Z$ and $p_2 = \ol{Z}$),
we see that $z$ is non-empty and
\begin{itemize}
\item $u' = z_1 u''$ where $Z = z_1 z$ and $u'' \equiv z u_n$; and
\item $v' = z_2 v''$ where $\ol{Z} = z_2 z$ and $v'' \equiv z u_n$.
\end{itemize}
But then we have
$u'' \equiv z u_n \equiv v''$ and also $z$ is a possible prefix of
$u''$ as required to show that condition (6) holds.
\end{proof}
\begin{lemma}\label{lemma_eqandprefix}
Suppose $u = X Y u'$ where $XY$ is a clean overlap prefix, let $v$ be a word, and suppose
$p$ is a piece. Then $u \equiv v$ and $p$ is a possible prefix of $u$
if and only if one of the following mutually exclusive conditions holds:
\begin{itemize}
\item[(1')] $u = XYZ u''$ and $v = XYZ v''$ and
$\ol{Z} u'' \equiv \ol{Z} v''$ for some complement $\ol{Z}$ of $Z$, and
also $p$ is a prefix of some complement of $X$;
\item[(2')] $u = X Y u'$, $v = X Y v'$, and $Z$ fails to be a
prefix of at least one of $u'$ and $v'$, and $u' \equiv v'$,
and also either
\begin{itemize}
\item $p$ is a prefix of $X$; or
\item $p$ is a prefix of some complement of $X$ and $Z$ is a possible prefix of $u'$.
\end{itemize}
\item[(3')] $u = X Y Z u''$, $v = \ol{X} \ol{Y} \ol{Z} v''$ for some
uniquely determined proper complement $\ol{XYZ}$ of $XYZ$, and $\hat{Z} u'' \equiv \hat{Z} v''$
for some complement $\hat{Z}$ of $Z$, and
$p$ is a prefix of some complement of $X$;
\item[(4')] $u = X Y u'$, $v = \ol{X} \ol{Y} \ol{Z} v''$ for some uniquely
determined proper
complement $\ol{XYZ}$ of $XYZ$, but
$Z$ is not a prefix of $u'$ and $u' \equiv Z v''$, and also
$p$ is a prefix of some complement of $X$;
\item[(5')] $u = X Y Z u''$, $v = \ol{X} \ol{Y} v'$ for some uniquely
determined proper
complement $\ol{XYZ}$ of $XYZ$,
but $\ol{Z}$ is not a prefix of $v'$ and $\ol{Z} u'' \equiv v'$,
and also $p$ is a prefix of some complement of $X$;
\item[(6')] $u = X Y u'$, $v = \ol{X} \ol{Y} v'$ for some uniquely
determined proper
complement $\ol{XYZ}$ of $XYZ$, $Z$ is not
a prefix of $u'$ and $\ol{Z}$ is not a prefix of $v'$, but
$Z = z_1 z$, $\ol{Z} = z_2 z$, $u' = z_1 u''$, $v' = z_2 v''$ where
$u'' \equiv v''$, $z$ is the maximal common suffix of $Z$ and $\ol{Z}$,
$z$ is non-empty, $z$ is a possible prefix of $u''$, and
also $p$ is a prefix of some complement of $X$.
\end{itemize}
\end{lemma}
\begin{proof}
Mutual exclusivity of the six conditions is proved exactly as for
Lemma~\ref{lemma_eq}. Suppose now that one of the six conditions above applies. Each condition
clearly implies the corresponding condition from Lemma~\ref{lemma_eq},
so we deduce immediately that $u \equiv v$. We must show, using the fact
that $p$ is a prefix of a complement of $X$, that $p$ is a possible prefix
of $u$, or equivalently of $v$.
In case (1'), $p$ is clearly a possible prefix of $u = XYZu''$, and cases
(3'), (4') and (5') are entirely similar.
In case (2'), if $p$ is a prefix of $X$ then
it is already a prefix of $u$, while if $p$ is a prefix of a proper
complement $\ol{X}$ of $X$ and $Z$ is a
possible prefix of $u'$, say $u' \equiv Z w$, then
$$u \ = \ XYu' \ \equiv \ XYZw \ \equiv \ \ol{XYZ} w$$
where the latter has $p$ as a possible prefix.
Finally, in case (6') we know that $z$ is a possible prefix of $u''$, say
$u'' \equiv z x$, so we have
$$u = XYu' = XYz_1u'' \equiv XYz_1zx = XYZx$$
and it is again clear that $p$ is a possible prefix of $u$.
Conversely, suppose $u \equiv v$ and $p$ is a possible prefix of $u$. Then
exactly one of the six conditions in Lemma~\ref{lemma_eq} applies. By
Lemma~\ref{lemma_overlap}, every word equivalent to $u$ begins with a
complement of $XY$, so $p$ must be a prefix of a word beginning with
some complement $\hat{X} \hat{Y}$. Since $\hat{X}$ is the maximal piece prefix of
$\hat{X} \hat{Y} \hat{Z}$ and $\hat{Y}$ is non-empty, it
follows that $p$ is a prefix of $\hat{X}$. If any condition other than condition (2)
of Lemma~\ref{lemma_eq} is satisfied, this suffices to show
that the corresponding condition from the statement of
Lemma~\ref{lemma_eqandprefix} holds.
If condition (2) from Lemma~\ref{lemma_eq} applies, we must show
additionally that either $p$ is a prefix of $X$, or that $Z$ is a
possible prefix of $u'$. Suppose $p$ is not
a prefix of $X$. Then by the above, $p$ is a prefix of some complement
$\hat{X}$. It follows from Lemma~\ref{lemma_overlap} that the
only way the prefix $XY$ of the word $u$ can be changed using the defining
relations is by application of
a relation of the form $(XYZ, \ol{XYZ})$. In order for this to happen, one must
clearly be able to rewrite $u = XYu'$ to a word of the form $XYZ w$;
consider the shortest possible rewrite sequence which achieves this.
By Lemma~\ref{lemma_overlap}, no term in the sequence except for the last
term will contain a relation word overlapping the initial $XY$. It follows
that the same rewriting steps rewrite $u'$ to $Zw$, so that $Z$ is a
possible prefix of $u'$, as required.
\end{proof}
\section{Applications}\label{sec_apps}
The main application presented in \cite{K_smallover1} was, for each
strongly $C(4)$ monoid presentation, a linear-time recursive algorithm to decide,
given words $u$, $v$ and a piece $p$, whether $u \equiv v$ and $p$ is
a possible prefix of $u$. In particular, by fixing $p = \epsilon$, we
obtain an algorithm which
solves the word problem for the presentation in linear time.
Figure~\ref{fig_algorithm} shows a modified version of the algorithm which works for weakly
$C(4)$ presentations. The proofs of correctness and
termination are essentially the same as those in \cite{K_smallover1}, but
relying on the more general results of Section~\ref{sec_main}. Thus, we
establish the following theorem.
\begin{theorem}\label{thm_lineartime}
For every weakly $C(4)$ finite monoid presentation, there exists a
two-tape Turing machine which solves the corresponding word problem in
time linear in the lengths of the input words.
\end{theorem}
\begin{figure}
\caption{Algorithm to solve the word problem for a fixed weakly $C(4)$ presentation.}
\label{fig_algorithm}
% The pseudocode listing of the algorithm (and its internal line labels) is not reproduced here.
\end{figure}
The algorithms presented in \cite[Section~5]{K_smallover1} for finding
the pieces of a presentation and hence testing strong small overlap conditions
may clearly also be used to test the weak variants of those conditions, with
the proviso
that one considers the \textit{set} of relation words in the presentation,
with any duplicates disregarded. In particular, we have:
\begin{corollary}
There is a RAM algorithm which, given as input a finite presentation $\langle \mathscr{A} \mid \mathscr{R} \rangle$,
decides in time $O(|\mathscr{R}|^2)$ whether the presentation is weakly $C(4)$.
\end{corollary}
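To make the piece computation concrete, the following is a naive brute-force sketch in Python (an illustration only: it is not the algorithm of \cite[Section~5]{K_smallover1}, it makes no attempt at the quadratic bound above, and the function names are ours). It computes the pieces of a finite presentation from the set of relation words, with duplicates disregarded as in the proviso above, and checks the classical requirement that no relation word is a product of fewer than four pieces; the weak and strong refinements of the $C(4)$ condition used in this paper involve further conditions, defined in the earlier sections, which this sketch does not test.
\begin{verbatim}
def pieces(relation_words):
    relation_words = set(relation_words)       # duplicates disregarded
    occurrences = {}                            # factor -> set of (word, position) pairs
    for w in relation_words:
        for i in range(len(w)):
            for j in range(i + 1, len(w) + 1):
                occurrences.setdefault(w[i:j], set()).add((w, i))
    # a piece is a factor occurring in the relation words in at least two distinct ways
    return {f for f, occ in occurrences.items() if len(occ) >= 2}

def min_piece_factorisation(word, piece_set):
    """Least number of pieces whose concatenation equals `word` (None if impossible)."""
    INF = float("inf")
    best = [INF] * (len(word) + 1)
    best[0] = 0
    for i in range(1, len(word) + 1):
        for j in range(i):
            if word[j:i] in piece_set:
                best[i] = min(best[i], best[j] + 1)
    return None if best[len(word)] == INF else best[len(word)]

def is_c4(relation_words):
    ps = pieces(relation_words)
    for w in set(relation_words):
        m = min_piece_factorisation(w, ps)
        if m is not None and m < 4:
            return False
    return True
\end{verbatim}
For the presentation $\langle a,b,c,d \mid abcd = dcba \rangle$, for instance, the pieces are exactly the four single letters, each relation word is a product of four of them and of no fewer, and the check succeeds.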
\begin{theorem}\label{thm_ramuniform}
There is a RAM algorithm which, given as input a weakly $C(4)$ finite presentation
$\langle \mathscr{A} \mid \mathscr{R} \rangle$ and two words $u, v \in \mathscr{A}^*$, decides whether
$u$ and $v$ represent the same element of the semigroup presented in
time
$$O \left( |\mathscr{R}|^2 \min(|u|,|v|) \right).$$
\end{theorem}
Just as with the algorithm from \cite{K_smallover1}, the algorithm in
Figure~\ref{fig_algorithm} is essentially a finite state process, and
can be implemented on a $2$-tape prefix-rewriting automaton using a
slight variation on the technique described in the proof of
\cite[Theorem~2]{K_smallover2}. It follows that we have:
\begin{theorem}\label{thm_main}
Let $\langle \mathscr{A} \mid \mathscr{R} \rangle$ be a finite monoid presentation which is
weakly $C(4)$. Then the relation
$$\lbrace (u, v) \in \mathscr{A}^* \times \mathscr{A}^* \mid u \equiv v \rbrace$$
is deterministic rational and reverse deterministic rational. Moreover,
one can, starting from the presentation, effectively compute 2-tape
deterministic automata recognising this relation and its reverse.
\end{theorem}
Just as in \cite{K_smallover2}, we obtain as corollaries a large number of
other facts about weakly $C(4)$ monoids. For brevity we refrain from explaining
all terms, and instead refer the reader to \cite{K_smallover2} for definitions.
\begin{corollary}
Every monoid admitting a weakly $C(4)$ finite presentation
\begin{itemize}
\item is \textit{rational} (in the sense of Sakarovitch \cite{Sakarovitch87});
\item is word hyperbolic (in the sense of Duncan and Gilman \cite{Duncan04});
\item is asynchronous automatic;
\item has a regular language of linear-time computable normal forms (namely,
the set of words minimal in their equivalence class with respect to the
lexicographical order induced by any total order on the generating set);
\item has a boolean algebra of rational subsets;
\item has uniformly decidable rational subset membership problem; and
\item has rational subsets which coincide with its recognisable subsets.
\end{itemize}
\end{corollary}
\end{document}
\begin{document}
\baselineskip=16pt
\title{\bf Duality Pairs Induced by Auslander and Bass Classes\thanks{2010 Mathematics Subject Classification: 18G25, 16E10, 16E30.}}
\begin{abstract}
Let $R$ and $S$ be any rings and $_RC_S$ a semidualizing bimodule, and let $\mathcal{A}_C(R^{op})$
and $\mathcal{B}_C(R)$ be the Auslander and Bass classes respectively. Then both the pairs
$$(\mathcal{A}_C(R^{op}),\mathcal{B}_C(R))\ {\rm and}\ (\mathcal{B}_C(R),\mathcal{A}_C(R^{op}))$$
are coproduct-closed and product-closed duality pairs and both
$\mathcal{A}_C(R^{op})$ and $\mathcal{B}_C(R)$ are covering and preenveloping; in particular,
the former duality pair is perfect. Moreover,
if $\mathcal{B}_C(R)$ is enveloping in $\mathop{\rm Mod}\nolimits R$, then $\mathcal{A}_C(S)$ is enveloping in $\mathop{\rm Mod}\nolimits S$.
Finally, some applications to the Auslander projective dimension of modules are given.
\end{abstract}
\pagestyle{myheadings}
\markboth{\rightline {\scriptsize Z. Y. Huang}}
{\leftline{\scriptsize Duality Pairs Induced by Auslander and Bass Classes}}
\section{Introduction}
In relative homological algebra, the theory of covers and envelopes is fundamental and important.
Let $R$ be a ring and $\mathop{\rm Mod}\nolimits R$ the category of left $R$-modules. Given a subcategory of $\mathop{\rm Mod}\nolimits R$,
it is always worth studying whether or when it is (pre)covering or (pre)enveloping.
This problem has been studied extensively, see \cite{BR}--\cite{HJ09} and references therein.
Let $R$ be a commutative noetherian ring and $C$ a semidualizing $R$-module,
and let $\mathcal{A}_C(R)$ and $\mathcal{B}_C(R)$ be the Auslander and Bass classes respectively.
By proving that both $\mathcal{A}_C(R)$ and $\mathcal{B}_C(R)$ are Kaplansky classes, Enochs and Holm
got in \cite[Theorems 3.11 and 3.12]{EH} that the pair $(\mathcal{A}_C(R),(\mathcal{A}_C(R))^\bot)$
is a perfect cotorsion pair, $\mathcal{A}_C(R)$ is covering and preenveloping
and $\mathcal{B}_C(R)$ is preenveloping. Holm and J{\o}rgensen introduced the notion of duality pairs
and proved the following remarkable result. Let $R$ be an arbitrary ring, and let $\mathscr{X}$ and
$\mathscr{Y}$ be subcategories of $\mathop{\rm Mod}\nolimits R$ and $\mathop{\rm Mod}\nolimits R^{op}$ respectively.
When $(\mathscr{X},\mathscr{Y})$ is a duality pair, the following assertions hold true:
(1) If $\mathscr{X}$ is closed under coproducts, then $\mathscr{X}$ is covering;
(2) if $\mathscr{X}$ is closed under products, then $\mathscr{X}$ is preenveloping; and
(3) if $_RR\in\mathscr{X}$ and $\mathscr{X}$ is closed under coproducts and extensions,
then $(\mathscr{X},\mathscr{X}^{\perp})$ is a perfect cotorsion pair
(\cite[Theorem 3.1]{HJ09}). By using it, they generalized the above result of Enochs and Holm to
the category of complexes, and Enochs and Iacob investigated in \cite{EI} the existence of Gorenstein
injective envelopes over commutative noetherian rings.
Let $R$ and $S$ be arbitrary rings and $_RC_S$ a semidualizing bimodule, and let $\mathcal{A}_C(R^{op})$
be the Auslander class in $\mathop{\rm Mod}\nolimits R^{op}$ and $\mathcal{B}_C(R)$ the Bass class in $\mathop{\rm Mod}\nolimits R$.
Our first main result is the following
\begin{theorem}\label{1.1} {\rm (Theorem \ref{3.3})}
\begin{enumerate}
\item[(1)] Both the pairs
$$(\mathcal{A}_C(R^{op}),\mathcal{B}_C(R))\ and\ (\mathcal{B}_C(R),\mathcal{A}_C(R^{op}))$$
are coproduct-closed and product-closed duality pairs; and furthermore, the former one is perfect.
\item[(2)] $\mathcal{A}_C(R^{op})$ is covering and preenveloping in $\mathop{\rm Mod}\nolimits R^{op}$
and $\mathcal{B}_C(R)$ is covering and preenveloping in $\mathop{\rm Mod}\nolimits R$.
\end{enumerate}
\end{theorem}
As a consequence of Theorem \ref{1.1}, we get that the pair
$$(\mathcal{A}_C(R^{op}),\mathcal{A}_C(R^{op})^{\bot})$$
is a hereditary perfect cotorsion pair and $\mathcal{A}_C(R^{op})$ is covering and preenveloping in $\mathop{\rm Mod}\nolimits R^{op}$,
where $\mathcal{A}_C(R^{op})^{\bot}$ is the right $\operatorname{Ext}$-orthogonal class of $\mathcal{A}_C(R^{op})$ (Corollary \ref{3.4}).
This result was proved in \cite[Theorem 3.11]{EH} when $R$ is a commutative noetherian ring and $_RC_S={_RC_R}$.
By Theorem \ref{1.1} and its symmetric result, we have that $\mathcal{B}_C(R)$ is preenveloping in $\mathop{\rm Mod}\nolimits R$
and $\mathcal{A}_C(S)$ is preenveloping in $\mathop{\rm Mod}\nolimits S$. Moreover, we prove the following
\begin{theorem}\label{1.2} {\rm (Theorem \ref{3.7}(2))}
If $\mathcal{B}_C(R)$ is enveloping in $\mathop{\rm Mod}\nolimits R$, then $\mathcal{A}_C(S)$ is enveloping in $\mathop{\rm Mod}\nolimits S$.
\end{theorem}
Then we apply these results and their symmetric results to study the Auslander projective dimension of modules.
We obtain some criteria for computing the Auslander projective dimension of modules in $\mathop{\rm Mod}\nolimits S$ (Theorem \ref{4.4}).
Furthermore, we get the following
\begin{theorem}\label{1.3} {\rm (Theorem \ref{4.10})}
If $_RC$ has an ultimately closed projective resolution, then
$$\mathcal{A}_C(S)={{C_S}^{\top}}={^{\bot}\mathcal{I}_C(S)},$$
where ${{C_S}^{\top}}$ is the $\operatorname{Tor}$-orthogonal class of $C_S$ and ${^{\bot}\mathcal{I}_C(S)}$ is
the left $\operatorname{Ext}$-orthogonal class of the subcategory $\mathcal{I}_C(S)$ of $\mathop{\rm Mod}\nolimits S$ consisting of $C$-injective modules.
\end{theorem}
As a consequence, we have that if $_RC$ has an ultimately closed projective resolution,
then the projective dimension of $C_S$ is at most $n$ if and only if the Auslander projective dimension of
any module in $\mathop{\rm Mod}\nolimits S$ is at most $n$ (Corollary \ref{4.11}).
\section{Preliminaries}
In this paper, all rings are associative with identities. Let $R$ be a ring. We use $\mathop{\rm Mod}\nolimits R$ to denote the category
of left $R$-modules and all subcategories of $\mathop{\rm Mod}\nolimits R$ are full and closed under isomorphisms.
For a subcategory $\mathscr{X}$ of $\mathop{\rm Mod}\nolimits R$, we write
$${^\perp{\mathscr{X}}}:=\{A\in\mathop{\rm Mod}\nolimits R\mid\operatorname{Ext}^{\geq 1}_{R}(A,X)=0 \mbox{ for any}\ X\in \mathscr{X}\},$$
$${{\mathscr{X}}^\perp}:=\{A\in\mathop{\rm Mod}\nolimits R\mid\operatorname{Ext}^{\geq 1}_{R}(X,A)=0 \mbox{ for any}\ X\in \mathscr{X}\},$$
$${^{\perp_1}{\mathscr{X}}}:=\{A\in\mathop{\rm Mod}\nolimits R\mid\operatorname{Ext}^{1}_{R}(A,X)=0 \mbox{ for any}\ X\in \mathscr{X}\},$$
$${{\mathscr{X}}^{\perp_1}}:=\{A\in\mathop{\rm Mod}\nolimits R\mid\operatorname{Ext}^{1}_{R}(X,A)=0 \mbox{ for any}\ X\in \mathscr{X}\}.$$
For subcategories $\mathscr{X},\mathscr{Y}$ of $\mathop{\rm Mod}\nolimits R$, we write $\mathscr{X}\perp\mathscr{Y}$ if
$\operatorname{Ext}^{\geq 1}_{R}(X,Y)=0$ for any $X\in \mathscr{X}$ and $Y\in \mathscr{Y}$.
\begin{definition} \label{2.1}
{\rm (\cite{E1,EJ00})
Let $\mathscr{X}\subseteq\mathscr{Y}$ be
subcategories of $\mathop{\rm Mod}\nolimits R$. A homomorphism $f: X\to Y$ in
$\mathop{\rm Mod}\nolimits R$ with $X\in\mathscr{X}$ and $Y\in \mathscr{Y}$ is called an {\bf $\mathscr{X}$-precover} of $Y$
if $\operatorname{Hom}_{R}(X^{'},f)$ is epic for any $X^{'}\in\mathscr{X}$; and $f$ is called {\bf right minimal}
if an endomorphism $h:X\to X$ is an automorphism whenever $f=fh$.
An {\bf $\mathscr{X}$-precover} $f: X\to Y$ is called an {\bf $\mathscr{X}$-cover} of $Y$
if it is right minimal. The subcategory $\mathscr{X}$ is called {\bf (pre)covering}
in $\mathscr{Y}$ if any object in $\mathscr{Y}$ admits an $\mathscr{X}$-(pre)cover.
Dually, the notions of an {\bf $\mathscr{X}$-(pre)envelope}, a {\bf left minimal homomorphism}
and a {\bf (pre)enveloping subcategory} are defined.}
\end{definition}
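As standard illustrations (recalled here only for orientation, and not needed in the sequel): for any ring $R$, the subcategory of projective modules is precovering in $\mathop{\rm Mod}\nolimits R$, since any epimorphism $P\to Y$ with $P$ projective (for instance from a free module) is a precover, and the subcategory of injective modules is enveloping in $\mathop{\rm Mod}\nolimits R$, the injective envelope $Y\to E(Y)$ being a left minimal preenvelope.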
\begin{definition}\label{2.2}
{\rm (\cite{EJ00, GT12})
Let $\mathscr{U},\mathscr{V}$ be subcategories of $\mathop{\rm Mod}\nolimits R$.
\begin{enumerate}
\item[(1)] The pair $(\mathscr{U},\mathscr{V})$ is called a {\bf cotorsion pair} in $\mathop{\rm Mod}\nolimits R$ if
$\mathscr{U}={^{\bot_1}\mathscr{V}}$ and $\mathscr{V}={\mathscr{U}^{\bot_1}}$.
\item[(2)] A cotorsion pair $(\mathscr{U},\mathscr{V})$ is called {\bf perfect} if $\mathscr{U}$ is covering and $\mathscr{V}$
is enveloping in $\mathop{\rm Mod}\nolimits R$.
\item[(3)] A cotorsion pair $(\mathscr{U},\mathscr{V})$ is called {\bf hereditary} if one of the following equivalent
conditions is satisfied.
\begin{enumerate}
\item[(3.1)] $\mathscr{U}\perp \mathscr{V}$.
\item[(3.2)] $\mathscr{U}$ is projectively resolving in the sense that $\mathscr{U}$ contains all projective modules
in $\mathop{\rm Mod}\nolimits R$, $\mathscr{U}$ is closed under extensions and kernels of epimorphisms.
\item[(3.3)] $\mathscr{V}$ is injectively coresolving in the sense that $\mathscr{V}$ contains all injective modules
in $\mathop{\rm Mod}\nolimits R$, $\mathscr{V}$ is closed under extensions and cokernels of monomorphisms.
\end{enumerate}
\end{enumerate}}
\end{definition}
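For orientation, we recall two classical examples (they are not used below). Writing $\mathcal{P}(R)$ and $\mathcal{I}(R)$ for the subcategories of projective and injective modules in $\mathop{\rm Mod}\nolimits R$ respectively, both pairs
$$(\mathcal{P}(R),\mathop{\rm Mod}\nolimits R)\ {\rm and}\ (\mathop{\rm Mod}\nolimits R,\mathcal{I}(R))$$
are hereditary cotorsion pairs; the latter is perfect, since every module has an injective envelope, while the former is perfect if and only if $R$ is left perfect, by Bass' characterization of perfect rings via projective covers.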
Set $(-)^+:=\operatorname{Hom}_{\mathbb{Z}}(-,\mathbb{Q}/\mathbb{Z})$, where $\mathbb{Z}$ is the additive group of integers and $\mathbb{Q}$
is the additive group of rational numbers. The following is the definition of duality pairs (cf. \cite{EI,HJ09}).
\begin{definition}\label{2.3}
{\rm Let $\mathscr{X}$ and $\mathscr{Y}$ be subcategories of $\mathop{\rm Mod}\nolimits R$ and $\mathop{\rm Mod}\nolimits R^{op}$ respectively.
\begin{enumerate}
\item[(1)] The pair ($\mathscr{X},\mathscr{Y}$) is called a {\bf duality pair} if the following conditions are satisfied.
\begin{enumerate}
\item[(1.1)] For a module $X\in\mathop{\rm Mod}\nolimits R$, $X\in\mathscr{X}$ if and only if $X^{+}\in \mathscr{Y}$.
\item[(1.2)] $\mathscr{Y}$ is closed under direct summands and finite direct sums.
\end{enumerate}
\item[(2)] A duality pair ($\mathscr{X},\mathscr{Y}$) is called {\bf (co)product-closed} if $\mathscr{X}$ is closed under
(co)products.
\item[(3)] A duality pair ($\mathscr{X},\mathscr{Y}$) is called {\bf perfect} if it is coproduct-closed,
$_RR\in\mathscr{X}$ and $\mathscr{X}$ is closed under extensions.
\end{enumerate}}
\end{definition}
We also recall the following remarkable result.
\begin{lemma}\label{2.4}
{\rm (\cite[p.7, Theorem]{EI} and \cite[Theorem 3.1]{HJ09})}
Let $\mathscr{X}$ and $\mathscr{Y}$ be subcategories of $\mathop{\rm Mod}\nolimits R$ and $\mathop{\rm Mod}\nolimits R^{op}$ respectively.
If $(\mathscr{X},\mathscr{Y})$ is a duality pair, then the following assertions hold true.
\begin{enumerate}
\item[(1)] If $(\mathscr{X},\mathscr{Y})$ is coproduct-closed, then $\mathscr{X}$ is covering.
\item[(2)] If $(\mathscr{X},\mathscr{Y})$ is product-closed, then $\mathscr{X}$ is preenveloping.
\item[(3)] If $(\mathscr{X},\mathscr{Y})$ is perfect, then $(\mathscr{X},\mathscr{X}^{\perp})$ is a perfect cotorsion pair.
\end{enumerate}
\end{lemma}
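A standard example to keep in mind (recorded only for illustration) is the pair consisting of the flat modules in $\mathop{\rm Mod}\nolimits R$ and the injective modules in $\mathop{\rm Mod}\nolimits R^{op}$: by Lambek's theorem, a module $M\in\mathop{\rm Mod}\nolimits R$ is flat if and only if $M^{+}$ is injective, and the injective modules in $\mathop{\rm Mod}\nolimits R^{op}$ are closed under direct summands and finite direct sums, so this pair is a duality pair. It is coproduct-closed and perfect, because $_RR$ is flat and the flat modules are closed under coproducts and extensions; Lemma \ref{2.4} then recovers the facts that every module has a flat cover and that the flat modules together with their right $\operatorname{Ext}$-orthogonal class form a perfect cotorsion pair.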
\begin{definition} \label{2.5}
{\rm (\cite{HW07}).
Let $R$ and $S$ be rings. An ($R,S$)-bimodule $_RC_S$ is called
{\bf semidualizing} if the following conditions are satisfied.
\begin{enumerate}
\item[(a1)] $_RC$ admits a degreewise finite $R$-projective resolution.
\item[(a2)] $C_S$ admits a degreewise finite $S$-projective resolution.
\item[(b1)] The homothety map $_RR_R\stackrel{_R\gamma}{\rightarrow} \operatorname{Hom}_{S^{op}}(C,C)$ is an isomorphism.
\item[(b2)] The homothety map $_SS_S\stackrel{\gamma_S}{\rightarrow} \operatorname{Hom}_{R}(C,C)$ is an isomorphism.
\item[(c1)] $\operatorname{Ext}_{R}^{\geq 1}(C,C)=0$.
\item[(c2)] $\operatorname{Ext}_{S^{op}}^{\geq 1}(C,C)=0$.
\end{enumerate}}
\end{definition}
Wakamatsu in \cite{W1} introduced and studied the so-called {\bf generalized tilting modules},
which are usually called {\bf Wakamatsu tilting modules}, see \cite{BR, MR}. Note that
a bimodule $_RC_S$ is semidualizing if and only if it is Wakamatsu tilting (\cite[Corollary 3.2]{W3}).
Examples of semidualizing bimodules can be found in \cite{HW07,W2}.
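For instance (recalled here only for convenience), $_RR_R$ is a semidualizing $(R,R)$-bimodule for any ring $R$, all six conditions above being trivially satisfied, and a dualizing module over a commutative noetherian local Cohen--Macaulay ring is a semidualizing module in the above sense.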
\section{Duality pairs}
In this section, $R$ and $S$ are arbitrary rings and $_RC_S$ is a semidualizing bimodule.
We write $(-)_*:=\operatorname{Hom}(C,-)$ and
$${{_RC}^{\bot}}:=\{M\in\mathop{\rm Mod}\nolimits R\mid\operatorname{Ext}^{\geq 1}_{R}(C,M)=0\}\ \text{and}\
{{C_S}^{\bot}}:=\{B\in\mathop{\rm Mod}\nolimits S^{op}\mid\operatorname{Ext}^{\geq 1}_{S^{op}}(C,B)=0\},$$
$${^{\top}{_RC}}:=\{N\in\mathop{\rm Mod}\nolimits R^{op}\mid\operatorname{Tor}_{\geq 1}^{R}(N,C)=0\}\ \text{and}\
{{C_S}^{\top}}:=\{A\in\mathop{\rm Mod}\nolimits S\mid\operatorname{Tor}_{\geq 1}^{S}(C,A)=0\}.$$
\begin{definition} \label{3.1}
{\rm (\cite{HW07})
\begin{enumerate}
\item[(1)] The {\bf Auslander class} $\mathcal{A}_{C}(R^{op})$ with respect to $C$ consists of all modules $N$
in $\mathop{\rm Mod}\nolimits R^{op}$ satisfying the following conditions.
\begin{enumerate}
\item[(a1)] $N\in{^{\top}{_RC}}$.
\item[(a2)] $N\otimes _{R}C\in{{C_S}^{\bot}}$.
\item[(a3)] The canonical valuation homomorphism
$$\mu_N:N\rightarrow (N\otimes_RC)_*$$
defined by $\mu_N(x)(c)=x\otimes c$ for any $x\in N$ and $c\in C$ is an isomorphism in $\mathop{\rm Mod}\nolimits R^{op}$.
\end{enumerate}
\item[(2)] The {\bf Bass class} $\mathcal{B}_C(R)$ with respect to $C$ consists of all modules $M$
in $\mathop{\rm Mod}\nolimits R$ satisfying the following conditions.
\begin{enumerate}
\item[(b1)] $M\in{{_RC}^{\bot}}$.
\item[(b2)] $M_*\in{{C_S}^{\top}}$.
\item[(b3)] The canonical valuation homomorphism
$$\theta_M:C\otimes_SM_*\rightarrow M$$
defined by $\theta_M(c\otimes f)=f(c)$ for any $c\in C$ and $f\in M_*$ is an isomorphism in $\mathop{\rm Mod}\nolimits R$.
\end{enumerate}
\item[(3)] The {\bf Auslander class} $\mathcal{A}_C(S)$ in $\mathop{\rm Mod}\nolimits S$ and the {\bf Bass class} $\mathcal{B}_C(S^{op})$
in $\mathop{\rm Mod}\nolimits S^{op}$ are defined symmetrically.
\end{enumerate}}
\end{definition}
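In the trivial case $C={_RR_R}$ (so that $S=R$), recorded only to fix intuition, the functors $-\otimes_RC$ and $\operatorname{Hom}_R(C,-)$ are naturally isomorphic to the identity functors, all of the conditions above hold automatically, and hence $\mathcal{A}_{C}(R^{op})=\mathop{\rm Mod}\nolimits R^{op}$ and $\mathcal{B}_C(R)=\mathop{\rm Mod}\nolimits R$.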
The following result is crucial. Its proof shows that the conditions in the definitions
of $\mathcal{A}_{C}(R^{op})$ and $\mathcal{B}_C(R)$ are dual to one another item by item.
\begin{proposition}\label{3.2}
\begin{enumerate}
\item[]
\item[(1)] For a module $N\in\mathop{\rm Mod}\nolimits R^{op}$, $N\in\mathcal{A}_{C}(R^{op})$ if and only if $N^+\in\mathcal{B}_C(R)$.
\item[(2)] For a module $M\in\mathop{\rm Mod}\nolimits R$, $M\in\mathcal{B}_C(R)$ if and only if $M^+\in\mathcal{A}_{C}(R^{op})$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) Let $N\in \mathop{\rm Mod}\nolimits R^{op}$. Then we have the following
(a) \begin{align*}
&\ \ \ \ \ \ \ \ \ \ N\in{^{\top}{_RC}}\\
&\ \ \ \ \ \Leftrightarrow \operatorname{op}eratorname{Tor}_{\geq 1}^R(N,C)=0\\
&\ \ \ \ \ \Leftrightarrow [\operatorname{op}eratorname{Tor}_{\geq 1}^R(N,C)]^+=0\\
&\ \ \ \ \ \Leftrightarrow \operatorname{op}eratorname{Ext}^{\geq 1}_R(C,N^+)=0\ \text{(by \cite[Lemma 2.16(b)]{GT12})}\\
&\ \ \ \ \ \Leftrightarrow N^+\in{_RC^{\mathcal{P}erp}}.
\end{align*}
(b) \begin{align*}
&\ \ \ \ \ \ \ \ \ \ N\otimes _{R}C\in{{C_S}^{\mathcal{P}erp}}\\
&\ \ \ \ \ \Leftrightarrow \operatorname{op}eratorname{Ext}_{S^{op}}^{\geq 1}(C,N\otimes_RC)=0\\
&\ \ \ \ \ \Leftrightarrow [\operatorname{op}eratorname{Ext}_{S^{op}}^{\geq 1}(C,N\otimes_RC)]^+=0\\
&\ \ \ \ \ \Leftrightarrow \operatorname{op}eratorname{Tor}_{\geq 1}^S(C,(N\otimes_RC)^+)=0\ \text{(by \cite[Lemma 2.16(d)]{GT12})}\\
&\ \ \ \ \ \Leftrightarrow \operatorname{op}eratorname{Tor}_{\geq 1}^S(C,(N^+)_*)=0\ \text{(by \cite[Lemma 2.16(a)]{GT12})}\\
&\ \ \ \ \ \Leftrightarrow (N^+)_*\in{{C_S}^{\top}}.
\end{align*}
(c) By \cite[Lemma 2.16(c)]{GT12}, the canonical valuation homomorphism
$$\alpha:C\otimes_S(N\otimes_RC)^+ \to [\operatorname{op}eratorname{Hom}_{S^{op}}(C,N\otimes_RC)]^+$$
defined by $\alpha(c\otimes g)(f)=gf(c)$ for any $c\in C$, $g\in (N\otimes_RC)^+$ and $f\in\operatorname{op}eratorname{Hom}_{S^{op}}(C,N\otimes_RC)$
is an isomorphism in $\mathop{\rm Mod}\nolimits R$. By \cite[Lemma 2.16(a)]{GT12}, the canonical valuation homomorphism
$$\beta:(N\otimes_RC)^+\to \operatorname{op}eratorname{Hom}_R(C,N^+)$$
defined by $\beta(g)(c)(x)=g(x\otimes c)$ for any $g\in (N\otimes_RC)^+$, $c\in C$ and $x\in N$
is an isomorphism in $\mathop{\rm Mod}\nolimits S$. So $$1_C\otimes\beta:C\otimes_S(N\otimes_RC)^+\to C\otimes_S\operatorname{op}eratorname{Hom}_R(C,N^+)$$
via $(1_C\otimes\beta)(c\otimes g)=c\otimes\beta(g)$ for any $c\in C$ and $g\in (N\otimes_RC)^+$ is
an isomorphism in $\mathop{\rm Mod}\nolimits R$.
Consider the following diagram
\begin{gather*}
\begin{split}
\xymatrix{
& C\otimes_S(N\otimes_RC)^+ \ar[rr]^{\alpha} \ar [d]^{1_C\otimes\beta} && [\operatorname{op}eratorname{Hom}_{S^{op}}(C,N\otimes_RC)]^+ \ar [d]^{(\mu_N)^+} \\
& C\otimes_S\operatorname{op}eratorname{Hom}_R(C,N^+) \ar[rr]^{\theta_{N^+}} &&N^+,}
\end{split}
\end{gather*}
where
$$(\mu_N)^+:[\operatorname{op}eratorname{Hom}_{S^{op}}(C,N\otimes_RC)]^+\to N^+$$
via $(\mu_N)^+(f^{'})=f^{'}\mu_N$ for any $f^{'}\in [\operatorname{op}eratorname{Hom}_{S^{op}}(C,N\otimes_RC)]^+$ is a natural homomorphism in $\mathop{\rm Mod}\nolimits R$,
and
$$\theta_{N^+}:C\otimes_S\operatorname{op}eratorname{Hom}_R(C,N^+)\to N^+$$
defined by $\theta_{N^+}(c\otimes f^{''})=f^{''}(c)$ for any $c\in C$ and $f^{''}\in\operatorname{op}eratorname{Hom}_R(C,N^+)$
is a canonical valuation homomorphism in $\mathop{\rm Mod}\nolimits R$. Then for any $c\in C$, $g\in(N\otimes_RC)^+$ and $x\in N$, we have
$$(\mu_N)^+\alpha(c\otimes g)(x)=\alpha(c\otimes g)\mu_N(x)=g\mu_N(x)(c)=g(x\otimes c)$$
$$\theta_{N^+}(1_C\otimes\beta)(c\otimes g)(x)=\theta_{N^+}(c\otimes \beta(g))(x)=\beta(g)(c)(x)=g(x\otimes c),$$
Thus $$(\mu_N)^+\alpha=\theta_{N^+}(1_C\otimes\beta),$$
and therefore $\mu_N$ is an isomorphism $\Leftrightarrow$ $(\mu_N)^+$ is an isomorphism
$\Leftrightarrow$ $\theta_{N^+}$ is an isomorphism.
We conclude that $N\in\mathcal{A}_{C}(R^{op})\Leftrightarrow N^+\in\mathcal{B}_C(R)$.
(2) Let $M\in \mathop{\rm Mod}\nolimits R$. Then we have the following
(a) \begin{align*}
&\ \ \ \ \ \ \ \ \ \ M\in{_RC^{\mathcal{P}erp}}\\
&\ \ \ \ \ \Leftrightarrow \operatorname{op}eratorname{Ext}^{\geq 1}_R(C,M)=0\\
&\ \ \ \ \ \Leftrightarrow [\operatorname{op}eratorname{Ext}^{\geq 1}_R(C,M)]^+=0\\
&\ \ \ \ \ \Leftrightarrow \operatorname{op}eratorname{Tor}_{\geq 1}^R(M^+,C)=0\ \text{(by \cite[Lemma 2.16(d)]{GT12})}\\
&\ \ \ \ \ \Leftrightarrow M^+\in{^{\top}{_RC}}.
\end{align*}
(b) \begin{align*}
&\ \ \ \ \ \ \ \ \ \ M_*\in{{C_S}^{\top}}\\
&\ \ \ \ \ \Leftrightarrow \operatorname{op}eratorname{Tor}_{\geq 1}^S(C,M_*)=0\\
&\ \ \ \ \ \Leftrightarrow [\operatorname{op}eratorname{Tor}_{\geq 1}^S(C,M_*)]^+=0\\
&\ \ \ \ \ \Leftrightarrow \operatorname{op}eratorname{Ext}_{S^{op}}^{\geq 1}(C,(M_*)^+)=0\ \text{(by \cite[Lemma 2.16(b)]{GT12})}\\
&\ \ \ \ \ \Leftrightarrow \operatorname{op}eratorname{Ext}_{S^{op}}^{\geq 1}(C,M^+\otimes_RC)=0\ \text{(by \cite[Lemma 2.16(c)]{GT12})}\\
&\ \ \ \ \ \Leftrightarrow M^+\otimes_RC\in{{C_S}^{\mathcal{P}erp}}.
\end{align*}
(c) By \cite[Lemma 2.16(a)]{GT12}, the canonical valuation homomorphism
$$\tau:[C\otimes_S\operatorname{op}eratorname{Hom}_R(C,M)]^+\to \operatorname{op}eratorname{Hom}_{S^{op}}(C,[\operatorname{op}eratorname{Hom}_R(C,M)]^+)$$
defined by $\tau(g^{'})(c)(f)=g^{'}(c\otimes f)$ for any $g^{'}\in [C\otimes_S\operatorname{op}eratorname{Hom}_R(C,M)]^+$, $c\in C$ and $f\in\operatorname{op}eratorname{Hom}_R(C,M)$
is an isomorphism in $\mathop{\rm Mod}\nolimits R^{op}$. By \cite[Lemma 2.16(c)]{GT12}, the canonical valuation homomorphism
$$\sigma:M^+\otimes_RC\to [\operatorname{op}eratorname{Hom}_R(C,M)]^+$$
defined by $\sigma(g\otimes c)(f)=gf(c)$ for any $g\in M^+$, $c\in C$ and $f\in\operatorname{op}eratorname{Hom}_R(C,M)$
is an isomorphism in $\mathop{\rm Mod}\nolimits S^{op}$. So $$\operatorname{op}eratorname{Hom}_{S^{op}}(C,\sigma):\operatorname{op}eratorname{Hom}_{S^{op}}(C,M^+\otimes_RC)\to \operatorname{op}eratorname{Hom}_{S^{op}}(C,[\operatorname{op}eratorname{Hom}_R(C,M)]^+)$$
via $\operatorname{op}eratorname{Hom}_{S^{op}}(C,\sigma)(g^{''})=\sigma g^{''}$ for any $g^{''}\in \operatorname{op}eratorname{Hom}_{S^{op}}(C,M^+\otimes_RC)$ is
an isomorphism in $\mathop{\rm Mod}\nolimits R^{op}$.
Consider the following diagram
\begin{gather*}
\begin{split}
\xymatrix{
& M^+ \ar[rr]^{(\theta_M)^+} \ar [d]^{\mu_{M^+}} && [C\otimes_S\operatorname{op}eratorname{Hom}_R(C,M)]^+ \ar [d]^{\tau} \\
& \operatorname{op}eratorname{Hom}_{S^{op}}(C,M^+\otimes_RC) \ar[rr]^{\operatorname{op}eratorname{Hom}_{S^{op}}(C,\sigma)} &&\operatorname{op}eratorname{Hom}_{S^{op}}(C,[\operatorname{op}eratorname{Hom}_R(C,M)]^+),}
\end{split}
\end{gather*}
where
$$(\theta_M)^+:M^+\to [C\otimes_S\operatorname{op}eratorname{Hom}_R(C,M)]^+$$
via $(\theta_M)^+(g)=g\theta_M$ for any $g\in M^+$ is a natural homomorphism in $\mathop{\rm Mod}\nolimits R^{op}$, and
$$\mu_{M^+}:M^+\to \operatorname{op}eratorname{Hom}_{S^{op}}(C,M^+\otimes_RC)$$ defined by $\mu_{M^+}(g)(c)=g\otimes c$ for any $g\in M^+$ and $c\in C$
is a canonical valuation homomorphism in $\mathop{\rm Mod}\nolimits R^{op}$. Then for any $g\in M^+$, $c\in C$ and $f\in\operatorname{op}eratorname{Hom}_R(C,M)$, we have
$$\tau(\theta_M)^+(g)(c)(f)=(\theta_M)^+(g)(c\otimes f)=g\theta_M(c\otimes f)=gf(c),$$
$$\operatorname{op}eratorname{Hom}_{S^{op}}(C,\sigma)\mu_{M^+}(g)(c)(f)=\sigma\mu_{M^+}(g)(c)(f)=\sigma(g\otimes c)(f)=gf(c),$$
Thus $$\tau(\theta_M)^+=\operatorname{op}eratorname{Hom}_{S^{op}}(C,\sigma)\mu_{M^+},$$
and therefore $\theta_M$ is an isomorphism $\Leftrightarrow$ $(\theta_M)^+$ is an isomorphism
$\Leftrightarrow$ $\mu_{M^+}$ is an isomorphism.
We conclude that $M\in\mathcal{B}_C(R)\Leftrightarrow M^+\in\mathcal{A}_{C}(R^{op})$.
\end{proof}
As a consequence, we get the following
\begin{theorem}\label{3.3}
\begin{enumerate}
\item[]
\item[(1)] The pair
$$(\mathcal{A}_C(R^{op}),\mathcal{B}_C(R))$$ is a perfect coproduct-closed and product-closed duality pair
and $\mathcal{A}_C(R^{op})$ is covering and preenveloping in $\mathop{\rm Mod}\nolimits R^{op}$.
\item[(2)] The pair
$$(\mathcal{B}_C(R),\mathcal{A}_C(R^{op}))$$ is a coproduct-closed and product-closed duality pair
and $\mathcal{B}_C(R)$ is covering and preenveloping in $\mathop{\rm Mod}\nolimits R$.
\end{enumerate}
\end{theorem}
\begin{proof}
It follows from \cite[Proposition 4.2(a)]{HW07} that both $\mathcal{A}_C(R^{op})$ and $\mathcal{B}_C(R)$ are closed under
direct summands, coproducts and products. So by Lemma \ref{2.4}(1)(2) and Proposition \ref{3.2}, we have that
both the pairs
$$(\mathcal{A}_C(R^{op}),\mathcal{B}_C(R))\ {\rm and}\ (\mathcal{B}_C(R),\mathcal{A}_C(R^{op}))$$
are coproduct-closed and product-closed duality pairs, $\mathcal{A}_C(R^{op})$ is covering and preenveloping in $\mathop{\rm Mod}\nolimits R^{op}$
and $\mathcal{B}_C(R)$ is covering and preenveloping in $\mathop{\rm Mod}\nolimits R$. Moreover, $\mathcal{A}_C(R^{op})$ is projectively resolving
by \cite[Theorem 6.2]{HW07}, so the duality pair $(\mathcal{A}_C(R^{op}),\mathcal{B}_C(R))$ is perfect.
\end{proof}
We write
$$\mathcal{A}_C(R^{op})^{\bot}:=\{Y\in\mathop{\rm Mod}\nolimits R^{op}\mid \operatorname{Ext}_{R^{op}}^{\geq 1}(N,Y)=0\ \text{for any}\ N\in\mathcal{A}_C(R^{op})\}.$$
The following corollary was proved in \cite[Theorem 3.11]{EH} when $R$ is a commutative noetherian ring and $_RC_S={_RC_R}$.
\begin{corollary}\label{3.4}
The pair $$(\mathcal{A}_C(R^{op}),\mathcal{A}_C(R^{op})^{\bot})$$ is a hereditary perfect cotorsion pair
and $\mathcal{A}_C(R^{op})$ is covering and preenveloping in $\mathop{\rm Mod}\nolimits R^{op}$.
\end{corollary}
\begin{proof}
It follows from Theorem \ref{3.3}(1) and Lemma \ref{2.4}(3).
\end{proof}
The following two results are the symmetric versions of Theorem \ref{3.3} and Corollary \ref{3.4} respectively.
\begin{theorem}\label{3.5}
\begin{enumerate}
\item[]
\item[(1)] The pair $$(\mathcal{A}_C(S),\mathcal{B}_C(S^{op}))$$ is a perfect coproduct-closed and product-closed duality pair
and $\mathcal{A}_C(S)$ is covering and preenveloping in $\mathop{\rm Mod}\nolimits S$.
\item[(2)] The pair $$(\mathcal{B}_C(S^{op}),\mathcal{A}_C(S))$$ is a coproduct-closed and product-closed duality pair
and $\mathcal{B}_C(S^{op})$ is covering and preenveloping in $\mathop{\rm Mod}\nolimits S^{op}$.
\end{enumerate}
\end{theorem}
We write
$$\mathcal{A}_C(S)^{\bot}:=\{X\in\mathop{\rm Mod}\nolimits S\mid \operatorname{Ext}_{S}^{\geq 1}(N^{'},X)=0\ \text{for any}\ N^{'}\in\mathcal{A}_C(S)\}.$$
\begin{corollary}\label{3.6}
The pair $$(\mathcal{A}_C(S),\mathcal{A}_C(S)^{\bot})$$ is a hereditary perfect cotorsion pair
and $\mathcal{A}_C(S)$ is covering and preenveloping in $\mathop{\rm Mod}\nolimits S$.
\end{corollary}
Holm and White proved in \cite[Proposition 4.1]{HW07} that there exist the following (Foxby) equivalences of categories
$$\xymatrix@C=16ex{{\mathcal{A}_C(S)}\ar@<0.8ex>[r]_-{\sim}^-{C\otimes_{S}-}&
{\mathcal{B}_{C}(R)}\ar@<0.8ex>[l]^-{\operatorname{Hom}_{R}(C,-)},&}$$
$$\xymatrix@C=16ex{{\mathcal{A}_C(R^{op})}\ar@<0.8ex>[r]_-{\sim}^-{-\otimes_{R}C}&
{\mathcal{B}_{C}(S^{op})}\ar@<0.8ex>[l]^-{\operatorname{Hom}_{S^{op}}(C,-)}.&}$$
Compare this result with Theorems \ref{3.3} and \ref{3.5}.
By Theorems \ref{3.3}(2) and \ref{3.5}(1), $\mathcal{B}_C(R)$ is preenveloping in $\mathop{\rm Mod}\nolimits R$
and $\mathcal{A}_C(S)$ is preenveloping in $\mathop{\rm Mod}\nolimits S$. In the following result, we construct
an $\mathcal{A}_C(S)$-preenvelope of a given module in $\mathop{\rm Mod}\nolimits S$ from a $\mathcal{B}_C(R)$-preenvelope
of some module in $\mathop{\rm Mod}\nolimits R$.
\begin{theorem}\label{3.7}
\begin{enumerate}
\item[]
\item[(1)] Let $N\in\mathop{\rm Mod}\nolimits S$ and
$$f:C\otimes_SN\to B$$ be a $\mathcal{B}_C(R)$-preenvelope of $C\otimes_SN$ in $\mathop{\rm Mod}\nolimits R$.
Then we have
\begin{enumerate}
\item[(1.1)] $$f_*\mu_N:N\to B_*$$ is an $\mathcal{A}_C(S)$-preenvelope of $N$ in $\mathop{\rm Mod}\nolimits S$.
\item[(1.2)] If $f$ is a $\mathcal{B}_C(R)$-envelope of $C\otimes_SN$, then $f_*\mu_N$ is an $\mathcal{A}_C(S)$-envelope of $N$.
\end{enumerate}
\item[(2)] If $\mathcal{B}_C(R)$ is enveloping in $\mathop{\rm Mod}\nolimits R$, then $\mathcal{A}_C(S)$ is enveloping in $\mathop{\rm Mod}\nolimits S$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1.1) Let $N\in\mathop{\rm Mod}\nolimits S$ and
$$f:C\otimes_SN\to B$$ be a $\mathcal{B}_C(R)$-preenvelope in $\mathop{\rm Mod}\nolimits R$.
By \cite[Proposition 4.1]{HW07}, we have $B_*\in\mathcal{A}_C(S)$.
Let $g\in\operatorname{op}eratorname{Hom}_S(N,A)$ with $A\in\mathcal{A}_C(S)$. By \cite[Proposition 4.1]{HW07}
again, we have $C\otimes_SA\in\mathcal{B}_C(R)$. So there exists $h\in\operatorname{op}eratorname{Hom}_R(B,C\otimes_SA)$
such that $1_C\otimes g=hf$, that is, the following diagram
$$\xymatrix@R=15pt@C=15pt{
&&C\otimes_SN\ar[d]_{1_C\otimes g}\ar[r]^{\ \ \ \ f}&B\ar@{-->}[ld]^{h}&\\
&&C\otimes_SA&&}$$
commutes. From the following commutative diagram
$$\xymatrix@R=20pt@C=20pt{
N\ar[r]^{g}\ar[d]_{\mu_N} &A\ar[d]^{\mu_A}\\
(C\otimes_SN)_*\ar[r]^{(1_C\otimes g)_*}&(C\otimes_SA)_*,}$$
we get $\mu_Ag=(1_C\otimes g)_*\mu_N$. Because $\mu_A$ is an isomorphism, we have
$$g={\mu_A}^{-1}(1_C\otimes g)_*\mu_N=({\mu_A}^{-1}h_*)(f_*\mu_N),$$
that is, the following diagram
$$\xymatrix@R=15pt@C=15pt{
&&N\ar[d]_{g}\ar[r]^{f_*\mu_N}&B\ar@{-->}[ld]^{{\mu_A}^{-1}h_*}&\\
&&A&&}$$
commutes. Thus $f_*\mu_N:N\to B_*$ is an $\mathcal{A}_C(S)$-preenvelope of $N$.
(1.2) By (1.1), it suffices to prove that if $f$ is left minimal, then so is $f_*\mu_N$.
Let $f$ be left minimal and $h\in\operatorname{op}eratorname{Hom}_S(B_*,B_*)$ such that $f_*\mu_N=h(f_*\mu_N)$. Then we have
$$(1_C\otimes f_*)(1_C\otimes \mu_N)=1_C\otimes (f_*\mu_N)=1_C\otimes (h(f_*\mu_N))
=(1_C\otimes h)(1_C\otimes f_*)(1_C\otimes \mu_N).\eqno{(3.1)}$$
From the following commutative diagram
$$\xymatrix@R=20pt@C=20pt{
C\otimes_S(C\otimes_SN)_*\ar[r]^{1_C\otimes f_*}\ar[d]_{\theta_{C\otimes_SN}} &C\otimes_SB_*\ar[d]^{\theta_B}\\
C\otimes_SN\ar[r]^{f}&B,}$$
we get
$$f\theta_{C\otimes_SN}=\theta_B(1_C\otimes f_*).\eqno{(3.2)}$$
So we have
\begin{align*}
&\ \ \ \ \ \ \ \ \ \ f=f1_{C\otimes_SN}\\
&\ \ \ \ \ \ \ \ \ \ \ \ =f(\theta_{C\otimes_SN}(1_C\otimes \mu_N))\ \text{(by \cite[Proposition 2.2(1)]{Wi}})\\
&\ \ \ \ \ \ \ \ \ \ \ \ =\theta_B(1_C\otimes f_*)(1_C\otimes \mu_N)\ \text{(by (3.2))}\\
&\ \ \ \ \ \ \ \ \ \ \ \ =\theta_B(1_C\otimes h)(1_C\otimes f_*)(1_C\otimes \mu_N)\ \text{(by (3.1))}\\
&\ \ \ \ \ \ \ \ \ \ \ \ =\theta_B(1_C\otimes h)({\theta_B}^{-1}\theta_B)(1_C\otimes f_*)(1_C\otimes \mu_N)\ \text{(because $\theta_B$ is an isomorphism)}\\
&\ \ \ \ \ \ \ \ \ \ \ \ =\theta_B(1_C\otimes h){\theta_B}^{-1}f\theta_{C\otimes_SN}(1_C\otimes \mu_N)\ \text{(by (3.2))}\\
&\ \ \ \ \ \ \ \ \ \ \ \ =\theta_B(1_C\otimes h){\theta_B}^{-1}f1_{C\otimes_SN}\ \text{(by \cite[Proposition 2.2(1)]{Wi}})\\
&\ \ \ \ \ \ \ \ \ \ \ \ =\theta_B(1_C\otimes h){\theta_B}^{-1}f.
\end{align*}
Because $f$ is left minimal, $\theta_B(1_C\otimes h){\theta_B}^{-1}$ is an isomorphism, which implies that
$1_C\otimes h$ and $(1_C\otimes h)_*$ are also isomorphisms. From the following commutative diagram
$$\xymatrix@R=20pt@C=20pt{
B_*\ar[r]^{h}\ar[d]_{\mu_{B_*}} &B_*\ar[d]^{\mu_{B_*}}\\
(C\otimes_SB_*)_*\ar[r]^{(1_C\otimes h)_*}&(C\otimes_SB_*)_*,}$$
we get
$$(1_C\otimes h)_*\mu_{B_*}=\mu_{B_*}h.$$
Because $B_*\in\mathcal{A}_C(S)$ by \cite[Proposition 4.1]{HW07}, $\mu_{B_*}$ is an isomorphism.
It follows that $h$ is also an isomorphism and $f_*\mu_N$ is left minimal.
(2) It follows from the assertion (1.2) immediately.
\end{proof}
We do not know whether a $\mathcal{B}_C(R)$-preenvelope of a given module in $\mathop{\rm Mod}\nolimits R$
can be constructed from an $\mathcal{A}_C(S)$-preenvelope of some module in $\mathop{\rm Mod}\nolimits S$,
and do not know whether the converse of Theorem \ref{3.7}(2) holds true.
By Theorems \ref{3.3}(2) and \ref{3.5}(1), $\mathcal{B}_C(R)$ is covering in $\mathop{\rm Mod}\nolimits R$
and $\mathcal{A}_C(S)$ is covering in $\mathop{\rm Mod}\nolimits S$. In the following result, we construct
a $\mathcal{B}_C(R)$-cover of a given module in $\mathop{\rm Mod}\nolimits R$ from an $\mathcal{A}_C(S)$-cover
of some module in $\mathop{\rm Mod}\nolimits S$.
\begin{proposition}\label{3.8}
Let $M\in\mathop{\rm Mod}\nolimits R$ and $$g:A\to M_*$$ be an $\mathcal{A}_C(S)$-cover of $M_*$ in $\mathop{\rm Mod}\nolimits S$.
Then
$$\theta_M(1_C\otimes g):C\otimes_SA\to M$$ is a $\mathcal{B}_C(R)$-cover of $M$ in $\mathop{\rm Mod}\nolimits R$.
\end{proposition}
\begin{proof}
Let $M\in\mathop{\rm Mod}\nolimits R$ and $$g:A\to M_*$$ be an $\mathcal{A}_C(S)$-cover of $M_*$ in $\mathop{\rm Mod}\nolimits S$.
By \cite[Proposition 4.1]{HW07}, we have $C\otimes_SA\in\mathcal{B}_C(R)$.
Let $f\in\operatorname{op}eratorname{Hom}_R(B,M)$ with $B\in\mathcal{B}_C(R)$. By \cite[Proposition 4.1]{HW07}
again, we have $B_*\in\mathcal{A}_C(S)$. So there exists $h\in\operatorname{op}eratorname{Hom}_S(B_*,A)$
such that ${f}_*=gh$, that is, the following diagram
$$\xymatrix{ & B_* \ar[d]^{{f}_*} \ar@{-->}[ld]_{h}\\
A \ar[r]^{g} & M_*}$$
commutes. From the following commutative diagram
$$\xymatrix@R=20pt@C=20pt{
C\otimes_SB_*\ar[r]^{1_C\otimes {f}_*}\ar[d]_{\theta_B} &C\otimes_SM_*\ar[d]^{\theta_M}\\
B\ar[r]^{f}&M,}$$
we get $f\theta_B=\theta_M(1_C\otimes {f}_*)$. Because $\theta_B$ is an isomorphism, we have
$$f=\theta_M(1_C\otimes {f}_*){\theta_B}^{-1}=\theta_M(1_C\otimes (gh)){\theta_B}^{-1}
=(\theta_M(1_C\otimes g))((1_C\otimes h)){\theta_B}^{-1}),$$
that is, the following diagram
$$\xymatrix{ & B \ar[d]^{f} \ar@{-->}[ld]_{(1_C\otimes h)){\theta_B}^{-1}}\\
C\otimes_SA \ar[r]_{\theta_M(1_C\otimes g)} & M}$$
commutes. Thus $\theta_M(1_C\otimes g):C\otimes_SA\to M$ is a $\mathcal{B}_C(R)$-precover of $M$.
In the following, it suffices to prove that $\theta_M(1_C\otimes g)$ is right minimal.
Let $h\in\operatorname{op}eratorname{Hom}_R(C\otimes_SA,C\otimes_SA)$ such that $\theta_M(1_C\otimes g)=(\theta_M(1_C\otimes g))h$. Then we have
$$(\theta_M)_*(1_C\otimes g)_*=(\theta_M(1_C\otimes g))_*=((\theta_M(1_C\otimes g))h)_*
=(\theta_M)_*(1_C\otimes g)_*h_*.\eqno{(3.3)}$$
From the following commutative diagram
$$\xymatrix@R=20pt@C=20pt{
A\ar[r]^{g}\ar[d]_{\mu_A} &M_*\ar[d]^{\mu_{M_*}}\\
(C\otimes_SA)_*\ar[r]^{(1_C\otimes g)_*}&(C\otimes_SM_*)_*,}$$
we get
$$\mu_{M_*}g=(1_C\otimes g)_*\mu_A.\eqno{(3.4)}$$
So we have
\begin{align*}
&\ \ \ \ \ \ \ \ \ \ g=1_{M_*}g\\
&\ \ \ \ \ \ \ \ \ \ \ \ =(\theta_M)_*\mu_{M_*}g\ \text{(by \cite[Proposition 2.2(1)]{Wi}})\\
&\ \ \ \ \ \ \ \ \ \ \ \ =(\theta_M)_*(1_C\otimes g)_*\mu_A\ \text{(by (3.4))}\\
&\ \ \ \ \ \ \ \ \ \ \ \ =(\theta_M)_*(1_C\otimes g)_*h_*\mu_A\ \text{(by (3.3))}\\
&\ \ \ \ \ \ \ \ \ \ \ \ =(\theta_M)_*(1_C\otimes g)_*\mu_A{\mu_A}^{-1}h_*\mu_A\ \text{(because $\mu_A$ is an isomorphism)}\\
&\ \ \ \ \ \ \ \ \ \ \ \ =(\theta_M)_*\mu_{M_*}g{\mu_A}^{-1}h_*\mu_A\ \text{(by (3.4))}\\
&\ \ \ \ \ \ \ \ \ \ \ \ =1_{M_*}g{\mu_A}^{-1}h_*\mu_A\ \text{(by \cite[Proposition 2.2(1)]{Wi}})\\
&\ \ \ \ \ \ \ \ \ \ \ \ =g{\mu_A}^{-1}h_*\mu_A.
\end{align*}
Because $g$ is right minimal, ${\mu_A}^{-1}h_*\mu_A$ is an isomorphism, which implies that
$h_*$ and $1_C\otimes h_*$ are also isomorphisms. From the following commutative diagram
$$\xymatrix@R=20pt@C=20pt{
C\otimes_S(C\otimes_SA)_*\ar[r]^{1_C\otimes h_*}\ar[d]_{\theta_{C\otimes_SA}} &C\otimes_S(C\otimes_SA)_*\ar[d]^{\theta_{C\otimes_SA}}\\
C\otimes_SA\ar[r]^{h}&C\otimes_SA,}$$
we get
$$h\theta_{C\otimes_SA}=\theta_{C\otimes_SA}(1_C\otimes h_*).$$
Because $C\otimes_SA\in\mathcal{B}_C(R)$ by \cite[Proposition 4.1]{HW07}, $\theta_{C\otimes_SA}$ is an isomorphism.
It follows that $h$ is also an isomorphism and $\theta_M(1_C\otimes g)$ is right minimal.
\end{proof}
We do not know whether an $\mathcal{A}_C(S)$-cover of a given module in $\mathop{\rm Mod}\nolimits S$
can be constructed from a $\mathcal{B}_C(R)$-cover of some module in $\mathop{\rm Mod}\nolimits R$.
\section{The Auslander projective dimension of modules}
For a subcategory $\mathscr{X}$ of $\mathop{\rm Mod}\nolimits S$ and $N\in \mathop{\rm Mod}\nolimits S$, the \textbf{$\mathscr{X}$-projective dimension}
$\mathscr{X}$-$\mathcal{P}d_SN$ of $N$ is defined as $\inf\{n\mid$ there exists an exact sequence
$$0 \to X_n \to \cdots \to X_1\to X_0 \to N\to 0$$
in $\mathop{\rm Mod}\nolimits S$ with all $X_i\in\mathscr{X}\}$, and we set $\mathscr{X}$-$\mathcal{P}d_SN$ infinite
if no such integer exists. We call $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN$ the \textbf{Auslander projective dimension}
of $N$. For any $n\geq 0$, we use $\Omega^n(N)$ to denote the $n$-th syzygy of $N$ (note: $\Omega^0(N)=N$).
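For instance (an illustration only): the modules of Auslander projective dimension $0$ are exactly the modules in $\mathcal{A}_{C}(S)$, and since every projective module in $\mathop{\rm Mod}\nolimits S$ lies in $\mathcal{A}_{C}(S)$ by \cite[Theorem 6.2]{HW07}, the Auslander projective dimension of a module is always bounded above by its ordinary projective dimension; in the trivial case $C={_RR_R}$ (so $S=R$) every module has Auslander projective dimension $0$.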
\begin{lemma}\label{4.1}
Let $N\in\mathop{\rm Mod}\nolimits S$ and $n\geq 0$. If $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN\leq n$ and
$$0\to K_n \to A_{n-1} \to \cdots \to A_1\to A_0 \to N\to 0$$
is an exact sequence in $\mathop{\rm Mod}\nolimits S$ with all $A_i$ in $\mathcal{A}_{C}(S)$,
then $K_n\in\mathcal{A}_{C}(S)$; in particular, $\Omega^n(N)\in\mathcal{A}_C(S)$.
\end{lemma}
\begin{proof}
Because $\mathcal{A}_{C}(S)$ is projectively resolving and is closed under direct summands and coproducts
by \cite[Theorem 6.2 and Proposition 4.2(a)]{HW07}, the assertion follows from \cite[Lemma 3.12]{AB}.
\end{proof}
We use $\mathcal{A}_{C}(S)$-$\mathcal{P}d^{<\infty}$ to denote the subcategory of $\mathop{\rm Mod}\nolimits S$ consisting of modules
with finite Auslander projective dimension.
\begin{proposition}\label{4.2}
$\mathcal{A}_{C}(S)$-$\mathcal{P}d^{<\infty}$ is closed under extensions, kernels of epimorphisms and
cokernels of monomorphisms.
\end{proposition}
\begin{proof}
Let $$0\to N_1 \to N_2 \to N_3 \to 0$$
be an exact sequence in $\mathop{\rm Mod}\nolimits S$ and $n\geq 0$. If
$\mathop{\rm max}\nolimits\{\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN_1,\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN_3\}\leq n$,
then by Lemma \ref{4.1}, there exist exact sequences
$$0\to \Omega^n(N_1)\to P^{n-1}_1 \to \cdots\to P^{1}_1 \to P^{0}_1 \to N_1 \to 0,$$
$$0\to \Omega^n(N_3)\to P^{n-1}_3 \to \cdots\to P^{1}_3 \to P^{0}_3 \to N_3 \to 0$$
in $\mathop{\rm Mod}\nolimits S$ with all $P_i^j$ projective and $\Omega^n(N_1),\Omega^n(N_3)\in\mathcal{A}_{C}(S)$.
Then we get exact sequences
$$0\to K_n\to P^{n-1}_1\operatorname{op}lus P^{n-1}_3 \to \cdots\to P^{1}_1 \operatorname{op}lus P^{1}_3\to P^{0}_1\operatorname{op}lus P^{0}_3 \to N_2 \to 0,$$
$$0\to \Omega^n(N_1)\to K_n \to \Omega^n(N_3)\to 0$$
in $\mathop{\rm Mod}\nolimits S$. By \cite[Theorem 6.2]{HW07}, we have $K_n\in\mathcal{A}_{C}(S)$ and
$\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN_2\leq n$.
If $\mathop{\rm max}\nolimits\{\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN_1,\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN_2\}\leq n$, then by Corollary \ref{3.6}
and Lemma \ref{4.1}, there exist $\operatorname{op}eratorname{Hom}_S(\mathcal{A}_{C}(S),-)$-exact exact sequences
$$0\to A^{n}_1\to A^{n-1}_1 \to \cdots\to A^{1}_1 \to A^{0}_1 \to N_1 \to 0,$$
$$0\to A^{n}_2\to A^{n-1}_2 \to \cdots\to A^{1}_2 \to A^{0}_2 \to N_2 \to 0$$
in $\mathop{\rm Mod}\nolimits S$ with all $A_i^j$ in $\mathcal{A}_{C}(S)$. By \cite[Theorem 3.6]{Hu}, we get an exact sequence
$$0\to A^n_1\to A^{n-1}_1\operatorname{op}lus A^{n}_2 \to \cdots\to A^{0}_1 \operatorname{op}lus A^{1}_2\to A^{0}_2 \to N_3 \to 0$$
in $\mathop{\rm Mod}\nolimits S$, and so $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN_3\leq n+1$.
If $\mathop{\rm max}\nolimits\{\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN_2,\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN_3\}\leq n$, then by Corollary \ref{3.6}
and Lemma \ref{4.1}, there exist $\operatorname{op}eratorname{Hom}_S(\mathcal{A}_{C}(S),-)$-exact exact sequences
$$0\to A^{n}_2\to A^{n-1}_2 \to \cdots\to A^{1}_2 \to A^{0}_2 \to N_2 \to 0,$$
$$0\to A^{n}_3\to A^{n-1}_3 \to \cdots\to A^{1}_3 \to A^{0}_3 \to N_3 \to 0$$
in $\mathop{\rm Mod}\nolimits S$ with all $A_i^j$ in $\mathcal{A}_{C}(S)$. By \cite[Theorem 3.2]{Hu}, we get exact sequences
$$0\to A^n_2\to A^{n-1}_2\operatorname{op}lus A^{n}_3 \to \cdots\to A^{1}_2 \operatorname{op}lus A^{2}_3\to A \to N_1 \to 0,$$
$$0\to A \to A_{2}^0 \operatorname{op}lus A_{3}^1\to A^{0}_3 \to 0$$
in $\mathop{\rm Mod}\nolimits S$. By \cite[Theorem 6.2]{HW07}, we have $A\in\mathcal{A}_{C}(S)$,
and so $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN_1\leq n$.
\end{proof}
We write
$$\mathcal{I}_C(S):=\{I_*\mid I\ {\rm \ is\ injective\ in}\ \mathop{\rm Mod}\nolimits R\}.$$
The modules in $\mathcal{I}_C(S)$ are called {\bf $C$-injective} (\cite{HW07}).
Let $Q$ be an injective cogenerator for $\mathop{\rm Mod}\nolimits R$. Then
$$\mathcal{I}_C(S)=\mathop{\rm Prod}\nolimits_SQ_*$$ by \cite[Proposition 2.4(2)]{LHX},
where $\mathop{\rm Prod}\nolimits_SQ_*$ is the subcategory of $\mathop{\rm Mod}\nolimits S$ consisting of direct summands
of products of copies of $Q_*$. By \cite[Lemma 2.16(b)]{GT12}, we have the following
isomorphism of functors
$$\operatorname{Hom}_R(\operatorname{Tor}_i^S(C,-),Q)\cong\operatorname{Ext}_S^i(-,Q_*)$$
for any $i\geq 1$. This gives the following
\begin{lemma}\label{4.3}
${C_S}^{\top}={^{\bot}\mathcal{I}_C(S)}$.
\end{lemma}
For a subcategory $\mathscr{X}$ of $\mathop{\rm Mod}\nolimits S$, a sequence in $\mathop{\rm Mod}\nolimits S$ is called
{\bf $\operatorname{Hom}_{S}(-,\mathscr{X})$-exact} if it is exact after applying the functor
$\operatorname{Hom}_{S}(-,X)$ for any $X\in\mathscr{X}$.
Now we give some criteria for computing the Auslander projective dimension of modules.
\begin{theorem}\label{4.4}
Let $N\in\mathop{\rm Mod}\nolimits S$ with $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN<\infty$ and $n\geq 0$.
Then the following statements are equivalent.
\begin{enumerate}
\item[(1)] $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN\leq n$.
\item[(2)] $\Omega^n(N)\in\mathcal{A}_{C}(S)$.
\item[(3)] $\operatorname{Tor}^S_{\geq n+1}(C,N)=0$.
\item[(4)] There exists an exact sequence
$$0\to H \to A \to N\to 0$$
in $\mathop{\rm Mod}\nolimits S$ with $A\in \mathcal{A}_{C}(S)$ and $\mathcal{I}_C(S)$-$\mathcal{P}d_SH\leq n-1$.
\item[(5)] There exists a ($\operatorname{Hom}_{S}(-,\mathcal{I}_C(S))$-exact) exact sequence
$$0\to N \to H^{'} \to A^{'} \to 0$$
in $\mathop{\rm Mod}\nolimits S$ with $A^{'}\in \mathcal{A}_{C}(S)$ and $\mathcal{I}_C(S)$-$\mathcal{P}d_SH^{'}\leq n$.
\end{enumerate}
\end{theorem}
\begin{proof}
By Lemma \ref{4.1} and the dimension shifting, we have $(1)\Leftrightarrow (2)\Rightarrow (3)$.
$(3)\Rightarrow (2)$ Because $\operatorname{op}eratorname{Tor}^S_{\geq n+1}(C,N)=0$ by (3), we have $\Omega^n(N)\in{C_S}^\top$,
and so $\Omega^n(N)\in{^{\bot}\mathcal{I}_C(S)}$ by Lemma \ref{4.3}.
Note that all projective modules in $\mathop{\rm Mod}\nolimits S$ are in $\mathcal{A}_C(S)$ by \cite[Theorem 6.2]{HW07}.
Because $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN<\infty$ by assumption,
we have $\mathcal{A}_{C}(S)$-$\mathcal{P}d_S\Omega^n(N)<\infty$ by Proposition 4.2.
Assume that $\mathcal{A}_{C}(S)$-$\mathcal{P}d_S\Omega^n(N)=m(<\infty)$ and
$$0\to A_m \to \cdots \to A_1 \to A_0 \to \Omega^n(N) \to 0 \eqno{(4.1)}$$
is an exact sequence in $\mathop{\rm Mod}\nolimits S$ with all $A_j$ in $\mathcal{A}_C(S)$.
Because $\mathcal{A}_C(S)\subseteq {C_S}^{\top}={^{\bot}\mathcal{I}_C(S)}$ by
Lemma \ref{4.3}, the exact sequence (4.1) is $\operatorname{op}eratorname{Hom}_{S}(-,\mathcal{I}_C(S))$-exact.
By \cite[Theorem 3.11(1)]{TH}, we have the following $\operatorname{op}eratorname{Hom}_{S}(-,\mathcal{I}_C(S))$-exact exact sequence
$$0\to A_j \to U_j^0\to U_j^1 \to \cdots \to U_j^i \to \cdots$$
in $\mathop{\rm Mod}\nolimits S$ with all $U_j^i$ in $\mathcal{I}_C(S)$ for any $0\leq j\leq m$ and $i\geq 0$.
It follows from \cite[Corollary 3.5]{Hu} that there exist
the following two exact sequences
$$0\to \Omega^n(N) \to U \to \operatorname{op}lus _{i=0}^mU_i^{i+1} \to \operatorname{op}lus_{i=0}^mU_i^{i+2}
\to \operatorname{op}lus_{i=0}^mU_i^{i+3} \to \cdots,$$
$$0\to U_m^0 \to U_m^1\operatorname{op}lus U_{m-1}^0 \to \cdots \to \operatorname{op}lus _{i=2}^mU_i^{i-2}
\to \operatorname{op}lus_{i=1}^mU_i^{i-1} \to \operatorname{op}lus _{i=0}^mU_i^i \to U \to 0,$$
and the former one is $\operatorname{op}eratorname{Hom}_{S}(-,\mathcal{I}_C(S))$-exact. Because $\mathcal{I}_C(S)$ is closed
under finite direct sums and cokernels of monomorphisms by \cite[Proposition 5.1(c) and Corollary 6.4]{HW07},
we have $U\in\mathcal{I}_{C}(S)$. By \cite[Theorem 3.11(1)]{TH} again, we have
$\Omega^n(N)\in\mathcal{A}_{C}(S)$.
$(1)\Rightarrow (4)$ By \cite[Theorem 6.2]{HW07}, $\mathcal{A}_C(S)$ is closed under extensions.
By \cite[Theorem 3.11(1)]{TH}, we have that
$\mathcal{I}_C(S)$ is an $\mathcal{I}_C(S)$-coproper cogenerator
for $\mathcal{A}_C(S)$ in the sense of \cite{Hu2}. Then the assertion follows from \cite[Theorem 4.7]{Hu2}.
$(4)\Rightarrow (5)$ Let
$$0\to H \to A \to N\to 0$$
be an exact sequence in $\mathop{\rm Mod}\nolimits S$ with $A\in \mathcal{A}_{C}(S)$ and $\mathcal{I}_C(S)$-$\mathcal{P}d_SH\leq n-1$.
By \cite[Theorem 3.11(1)]{TH}, there exists a $\operatorname{op}eratorname{Hom}_{S}(-,\mathcal{I}_C(S))$-exact exact sequence
$$0\to A\to U\to A^{'}\to 0$$ in $\mathop{\rm Mod}\nolimits S$ with $U\in\mathcal{I}_C(S)$
and $A^{'}\in \mathcal{A}_{C}(S)$. Consider the following push-out diagram
$$\xymatrix{
& & 0 \ar[d] &0 \ar@{-->}[d] & \\
0 \ar[r] & H \ar@{==}[d] \ar[r] & A \ar[d]\ar[r] &N \ar@{-->}[d] \ar[r] & 0 \\
0 \ar@{-->}[r] & H \ar@{-->}[r] & U \ar[d] \ar@{-->}[r] & H^{'} \ar@{-->}[d] \ar@{-->}[r] & 0 \\
& & A^{'} \ar[d]\ar@{==}[r] & A^{'}\ar@{-->}[d] & \\
& & 0 & 0. & } $$
By the middle row in this diagram, we have $\mathcal{I}_C(S)$-$\mathcal{P}d_SH^{'}\leq n$. Because the middle column
in the above diagram is $\operatorname{op}eratorname{Hom}_{S}(-,\mathcal{I}_C(S))$-exact, the rightmost column is also
$\operatorname{op}eratorname{Hom}_{S}(-,\mathcal{I}_C(S))$-exact by \cite[Lemma 2.4(2)]{Hu} and it is the desired exact sequence.
$(5)\Rightarrow (1)$ Let
$$0\to N \to H^{'} \to A^{'} \to 0$$
be an exact sequence in $\mathop{\rm Mod}\nolimits S$ with $A^{'}\in \mathcal{A}_{C}(S)$ and $\mathcal{I}_C(S)$-$\mathcal{P}d_SH^{'}\leq n$.
Then there exists an exact sequence
$$0\rightarrow U_n\rightarrow \cdots\rightarrow U_1\rightarrow U_0\rightarrow H^{'}\rightarrow 0$$ in $\mathop{\rm Mod}\nolimits S$ with all $U_i$ in $\mathcal{I}_C(S)$.
Set $H:=\mbox{Ker}(U_0\rightarrow H^{'})$.
Then $\mathcal{I}_C(S)$-$\mathcal{P}d_SH\leq n-1$. Consider the following pull-back diagram
$$\xymatrix@R=20pt@C=20pt{& 0 \ar@{-->}[d] & 0 \ar[d]&& &\\
& H \ar@{==}[r] \ar@{-->}[d] & H \ar[d]& &&\\
0 \ar@{-->}[r] & A \ar@{-->}[d] \ar@{-->}[r] & U_0 \ar[d] \ar@{-->}[r] &A^{'} \ar@{==}[d] \ar@{-->}[r] & 0\\
0 \ar[r] & N \ar@{-->}[d]\ar[r] & H^{'} \ar[r] \ar[d] & A^{'} \ar[r] & 0 &\\
& 0 & 0. & && }$$
Applying \cite[Theorem 6.2]{HW07} to the middle row in this diagram yields $A\in \mathcal{A}_{C}(S)$.
Thus $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN\leq n$ by the leftmost column in the above diagram.
\end{proof}
The only place where the assumption $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN<\infty$ in Theorem \ref{4.4} is used is in showing
$(3)\Rightarrow(2)$. By Theorem \ref{4.4}, it is easy to get the following standard observation.
\begin{corollary}\label{4.5}
Let $$0\to L \to M \to K\to 0$$
be an exact sequence in $\mathop{\rm Mod}\nolimits S$. Then we have
\begin{enumerate}
\item[(1)] $\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}K \leq \mathop{\rm max}\nolimits\{\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}M,\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}L+1\}$,
and the equality holds true if $\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}M \neq \mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}L$.
\item[(2)] $\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}L \leq \mathop{\rm max}\nolimits\{\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}M,\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}K-1\}$,
and the equality holds true if $\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}M\neq \mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}K$.
\item[(3)] $\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}M \leq \mathop{\rm max}\nolimits\{\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}L,\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}K\}$,
and the equality holds true if $\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}K \neq \mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}L+1$.
\end{enumerate}
\end{corollary}
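For instance, if in such a sequence $M\in\mathcal{A}_{C}(S)$ and $\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}L=n\geq 1$, then part (1) yields $\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}K=n+1$, since the two dimensions being compared differ.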
The following corollary is an addendum to the implications $(1)\Rightarrow (4)$ and $(1)\Rightarrow (5)$ in Theorem \ref{4.4}.
\begin{corollary}\label{4.6}
Let $N\in\mathop{\rm Mod}\nolimits S$ with $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN=n(<\infty)$.
Then there exist exact sequences
$$0\to H \to A \to N\to 0,$$
$$0\to N \to H^{'} \to A^{'} \to 0$$
in $\mathop{\rm Mod}\nolimits S$ with $A,A^{'}\in \mathcal{A}_{C}(S)$ and $\mathcal{I}_C(S)$-$\mathcal{P}d_SH=\mathcal{I}_C(S)$-$\mathcal{P}d_SH^{'}=n$.
\end{corollary}
\begin{proof}
Let $N\in\mathop{\rm Mod}\nolimits S$ with $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN=n(<\infty)$.
By Theorem \ref{4.4}, there exists an exact sequence
$$0\to H \to A \to N\to 0$$
in $\mathop{\rm Mod}\nolimits S$ with $A\in \mathcal{A}_{C}(S)$ and $(\mathcal{A}_{C}(S)$-$\mathcal{P}d_SH\leq)\mathcal{I}_C(S)$-$\mathcal{P}d_SH\leq n-1$.
By Theorem \ref{4.4} again, we have $\sup\{i\geq 0\mid \operatorname{Tor}^S_{i}(C,N)\neq 0\}=n$.
So $\sup\{i\geq 0\mid \operatorname{Tor}^S_{i}(C,H)\neq 0\}=n-1$, and hence
$\mathcal{A}_{C}(S)$-$\mathcal{P}d_SH=n-1$ by Theorem \ref{4.4}.
It follows that $\mathcal{I}_C(S)$-$\mathcal{P}d_SH=n-1$.
By Theorem \ref{4.4}, there exists an exact sequence
$$0\to N \to H^{'} \to A^{'} \to 0$$
in $\mathop{\rm Mod}\nolimits S$ with $A^{'}\in \mathcal{A}_{C}(S)$ and $(\mathcal{A}_{C}(S)$-$\mathcal{P}d_SH^{'}\leq)\mathcal{I}_C(S)$-$\mathcal{P}d_SH^{'}\leq n$.
By Corollary \ref{4.5}(3), we have $\mathcal{A}_{C}(S)$-$\mathcal{P}d_SH^{'}=\mathcal{A}_{C}(S)$-$\mathcal{P}d_SN=n$,
and so $\mathcal{I}_C(S)$-$\mathcal{P}d_SH^{'}=n$.
\end{proof}
Let $N\in \mathop{\rm Mod}\nolimits S$. Bican, El Bashir and Enochs proved in \cite{BBE} that $N$ has a flat cover. We use
$$\cdots \buildrel {f_{n+1}} \over \longrightarrow F_n(N) \buildrel {f_n} \over \longrightarrow \cdots
\buildrel {f_2} \over \longrightarrow F_1(N) \buildrel {f_1} \over \longrightarrow F_0(N) \buildrel {f_0}
\over \longrightarrow N \to 0\eqno{(4.2)}$$
to denote a minimal flat resolution of $N$ in $\mathop{\rm Mod}\nolimits S$, where each $F_i(N)\to\mathcal{I}m f_i$ is a flat cover of $\mathcal{I}m f_i$.
\begin{lemma}\label{4.7}
Let $N\in\mathop{\rm Mod}\nolimits S$ and $n\geq 0$. If $\operatorname{Tor}_{1\leq i\leq n}^S(C,N)=0$, then we have
\begin{enumerate}
\item[(1)] There exists an exact sequence
$$0\rightarrow \operatorname{Ext}_R^{n+1}(C,\mathcal{K}er(1_C\otimes f_{n+1}))\rightarrow N\stackrel{\mu_{N}}{\longrightarrow}
(C\otimes_SN)_*\rightarrow\operatorname{Ext}_R^{n+2}(C,\mathcal{K}er(1_C\otimes f_{n+1}))\rightarrow 0$$
in $\mathop{\rm Mod}\nolimits S$.
\item[(2)] $\operatorname{Ext}_R^{1\leq i \leq n}(C,\mathcal{K}er(1_C\otimes f_{n+1}))=0$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) The case for $n=0$ follows from \cite[Proposition 3.2]{TH}. Now suppose $n\geq 1$. If $\operatorname{Tor}_{1\leq i\leq n}^S(C,N)=0$,
then the exact sequence (4.2) yields the following exact sequence
$$0\to \mathcal{K}er(1_C\otimes f_{n+1}) \to C\otimes_SF_{n+1}(N) \buildrel {1_C\otimes f_{n+1}}\over \longrightarrow
C\otimes_SF_{n}(N) \buildrel {1_C\otimes f_{n}}\over \longrightarrow\cdots$$
$$\buildrel {1_C\otimes f_{2}}\over \longrightarrow C\otimes_SF_{1}(N) \buildrel {1_C\otimes f_{1}}
\over \longrightarrow C\otimes_SF_{0}(N) \buildrel {1_C\otimes f_{0}}\over \longrightarrow C\otimes_SN \to 0\eqno{(4.3)}$$
in $\mathop{\rm Mod}\nolimits R$. Because all $C\otimes_SF_i(N)$ are in $_RC^{\bot}$ by \cite[Lemma 2.3(1)]{TH}, dimension shifting along (4.3) gives
$$\operatorname{Ext}_R^{1}(C,\mathcal{K}er(1_C\otimes f_1))\cong \operatorname{Ext}_R^{n+1}(C,\mathcal{K}er(1_C\otimes f_{n+1})),$$
$$\operatorname{Ext}_R^{2}(C,\mathcal{K}er(1_C\otimes f_1))\cong \operatorname{Ext}_R^{n+2}(C,\mathcal{K}er(1_C\otimes f_{n+1})).$$
Now the assertion follows from \cite[Proposition 3.2]{TH}.
(2) Applying the functor $(-)_*$ to the exact sequence (4.3) we get the following commutative diagram
$$\xymatrix{
&&F_{n+1}(N) \ar[d]^{\mu_{F_{n+1}(N)}}\ar[r]^{\ \ \ \ f_{n+1}}& F_n(N)\ar[d]^{\mu_{F_{n}(N)}}\ar[r]^{f_n}
& \cdots \ar[r]^{f_1} & F_0(N)\ar[d]^{\mu_{F_{0}(N)}}\\
0\ar[r]&(\mathcal{K}er(1_C\otimes f_{n+1}))_*\ar[r]& (C\otimes_SF_{n+1}(N))_*\ar[r]^{\ \ \ \ (1_C\otimes f_{n+1})_*\ \ \ \ }
& (C\otimes_SF_{n}(N))_*\ar[r]^{\ \ \ \ \ \ (1_C\otimes f_{n})_*}\ar[r]& \cdots \ar[r]^{(1_C\otimes f_{1})_* \ \ \ \ \ \ \ }& (C\otimes_SF_{0}(N))_*.}$$
All the vertical maps in this diagram are isomorphisms by \cite[Lemma 4.1]{HW07}. So the bottom row in this diagram is exact.
Because all $C\otimes_SF_{i}(N)$ are in $_RC^{\bot}$, we have
$\operatorname{Ext}_R^{1\leq i \leq n}(C,\mathcal{K}er(1_C\otimes f_{n+1}))=0$.
\end{proof}
Let $X\in \mathop{\rm Mod}\nolimits R$ and let $$\cdots \buildrel {g_{n+1}} \over \longrightarrow P_n \buildrel {g_n} \over \longrightarrow
\cdots \buildrel {g_2} \over \longrightarrow P_1 \buildrel {g_1} \over \longrightarrow P_0 \buildrel {g_0} \over \longrightarrow X \to 0$$
be a projective resolution of $X$ in $\mathop{\rm Mod}\nolimits R$. If there exists $n\geq 1$ such that $\mathcal{I}m g_n\cong \oplus_jW_j$, where each $W_j$ is
isomorphic to a direct summand of some $\mathcal{I}m g_{i_j}$ with $i_j<n$, then we say that $X$ {\bf has an ultimately closed projective resolution at $n$};
and we say that $X$ {\bf has an ultimately closed projective resolution} if it has an ultimately closed projective resolution at some $n$ (\cite{J}).
It is trivial that if $\mathcal{P}d_{R}X$ (the projective dimension of $X$) $\leq n$, then $X$ has an ultimately closed projective resolution at $n+1$.
Let $R$ be an artin algebra. If either $R$ is of finite representation type or the square of the radical of $R$ is zero,
then any finitely generated left $R$-module has an ultimately closed projective resolution (\cite[p.341]{J}).
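For instance, if $X$ admits a projective resolution as above in which $\mathcal{I}m g_{p}\cong X$ for some $p\geq 1$ (e.g. a periodic module), then this resolution is ultimately closed at $p$, since $X=\mathcal{I}m g_{0}$.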
Following \cite{Wi}, a module $N\in\mathop{\rm Mod}\nolimits S$ is called {\bf $C$-adstatic} if $\mu_N$ is an isomorphism.
\begin{proposition}\label{4.8}
Let $N\in\mathop{\rm Mod}\nolimits S$ and $n\geq 1$. If $\operatorname{Tor}_{1\leq i\leq n}^S(C,N)=0$, then $N$ is $C$-adstatic
provided that one of the following conditions is satisfied.
\begin{enumerate}
\item[(1)] $\mathcal{P}d_{R}C\leq n$.
\item[(2)] $_RC$ has an ultimately closed projective resolution at $n$.
\end{enumerate}
\end{proposition}
\begin{proof} (1) It follows directly from Lemma \ref{4.7}(1).
(2) Let
$$\cdots \buildrel {g_{n+1}} \over \longrightarrow P_n \buildrel {g_n} \over \longrightarrow
\cdots \buildrel {g_2} \over \longrightarrow P_1 \buildrel {g_1} \over \longrightarrow
P_0 \buildrel {g_0} \over \longrightarrow C \to 0$$ be a projective resolution of $C$ in $\mathop{\rm Mod}\nolimits R$
ultimately closed at $n$. Then $\mathcal{I}m g_n\cong \operatorname{op}lus_jW_j$ such that each $W_j$ is
isomorphic to a direct summand of some $\mathcal{I}m g_{i_j}$ with $i_j<n$. Let $N\in\mathop{\rm Mod}\nolimits S$ with
$\operatorname{Tor}_{1\leq i\leq n}^S(C,N)=0$. By Lemma \ref{4.7}(2), we have
$$\operatorname{Ext}^{1}_R(\mathcal{I}m g_{i_j},\mathcal{K}er(1_C\otimes f_{n+1}))\cong \operatorname{Ext}^{i_j+1}_R(C,\mathcal{K}er(1_C\otimes f_{n+1}))=0.$$
Because $W_j$ is isomorphic to a direct summand of some $\mathcal{I}m g_{i_j}$, we have $\operatorname{Ext}^{1}_R(W_j,\mathcal{K}er(1_C\otimes f_{n+1}))=0$
for any $j$, which implies
\begin{align*}
&\ \ \ \ \operatorname{Ext}^{n+1}_R(C,\mathcal{K}er(1_C\otimes f_{n+1}))\\
& \cong \operatorname{Ext}^{1}_R(\mathcal{I}m g_{n},\mathcal{K}er(1_C\otimes f_{n+1}))\\
& \cong \operatorname{Ext}^{1}_R(\oplus_jW_j,\mathcal{K}er(1_C\otimes f_{n+1}))\\
& \cong \Pi_j\operatorname{Ext}^{1}_R(W_j,\mathcal{K}er(1_C\otimes f_{n+1}))\\
& =0.
\end{align*}
Then by Lemma \ref{4.7}(2), we conclude that $\operatorname{Ext}_R^{1\leq i \leq n+1}(C,\mathcal{K}er(1_C\otimes f_{n+1}))=0$.
Similar to the above argument we get $\operatorname{Ext}_R^{n+2}(C,\mathcal{K}er(1_C\otimes f_{n+1}))=0$.
It follows from Lemma \ref{4.7}(1) that $\mu_N$ is an isomorphism and $N$ is $C$-adstatic.
\end{proof}
\begin{corollary}\label{4.9}
For any $n\geq 1$,
a module $N\in \mathop{\rm Mod}\nolimits S$ satisfying $\operatorname{Tor}_{0\leq i\leq n}^S(C,N)=0$ implies $N=0$ provided that
one of the following conditions is satisfied.
\begin{enumerate}
\item[(1)] $\mathcal{P}d_{R}C\leq n$.
\item[(2)] $_RC$ has an ultimately closed projective resolution at $n$.
\end{enumerate}
\end{corollary}
\begin{proof}
Let $N\in \mathop{\rm Mod}\nolimits S$ with $\operatorname{Tor}_{0\leq i\leq n}^S(C,N)=0$. By Proposition \ref{4.8},
we have that $N$ is $C$-adstatic and $N\cong (C\otimes_SN)_*=0$.
\end{proof}
We are now in a position to give the following result.
\begin{theorem}\label{4.10}
If $_RC$ has an ultimately closed projective resolution, then
$$\mathcal{A}_C(S)={{C_S}^{\top}}={^{\bot}\mathcal{I}_C(S)}.$$
\end{theorem}
\begin{proof}
By the definition of $\mathcal{A}_C(S)$ and Lemma \ref{4.3}, we have
$\mathcal{A}_C(S)\subseteq{{C_S}^{\top}}={^{\bot}\mathcal{I}_C(S)}$.
Now let $N\in{^{\bot}\mathcal{I}_C(S)}$ and let $f:C\otimes_SN\to B$
be a $\mathcal{B}_C(R)$-preenvelope of $C\otimes_SN$ in $\mathop{\rm Mod}\nolimits R$ as in Theorem \ref{3.7}.
Because $\mathcal{B}_C(R)$ is injectively coresolving
in $\mathop{\rm Mod}\nolimits R$ by \cite[Theorem 6.2]{HW07}, $f$ is monic. By Proposition \ref{4.8},
$\mu_N$ is an isomorphism. Then by Theorem \ref{3.7}(1), we have a monic $\mathcal{A}_C(S)$-preenvelope
$$f^0:N\rightarrowtail A^0$$ of $N$, where $f^0=f_*\mu_N$ and $A^0=B_*$. So we have a
$\operatorname{Hom}_S(-,\mathcal{A}_C(S))$-exact exact sequence
$$0\to N \buildrel {f^0} \over \longrightarrow A^0 \to N^1 \to 0$$
in $\mathop{\rm Mod}\nolimits S$, where $N^1=\mathscr{C}oker f^0$. Because $A^0\in{^{\bot}\mathcal{I}_C(S)}$,
we have $N^1\in{^{\bot}\mathcal{I}_C(S)}$. Similar to the above argument, we get a
$\operatorname{Hom}_S(-,\mathcal{A}_C(S))$-exact exact sequence
$$0\to N^1 \buildrel {f^1} \over \longrightarrow A^1 \to N^2 \to 0$$
in $\mathop{\rm Mod}\nolimits S$ with $A^1\in\mathcal{A}_C(S)$ and $N^2\in{^{\bot}\mathcal{I}_C(S)}$. Repeating this procedure,
we get a $\operatorname{Hom}_S(-,\mathcal{A}_C(S))$-exact exact sequence
$$0\to N \buildrel {f^0} \over \longrightarrow A^0 \buildrel {f^1} \over \longrightarrow
A^1\buildrel {f^2} \over \longrightarrow \cdots \buildrel {f^i}
\over \longrightarrow A^i\buildrel {f^{i+1}} \over \longrightarrow \cdots$$
in $\mathop{\rm Mod}\nolimits S$ with all $A^i$ in $\mathcal{A}_C(S)$. Because $\mathcal{I}_C(S)\subseteq\mathcal{A}_C(S)$
by \cite[Corollary 6.1]{HW07}, this exact sequence is $\operatorname{Hom}_S(-,\mathcal{I}_C(S))$-exact.
By \cite[Theorem 3.11(1)]{TH}, there exists a $\operatorname{Hom}_S(-,\mathcal{A}_C(S))$-exact exact sequence
$$0\to A^i\to U^i_0 \to U^i_1 \to \cdots \to U^i_j\to \cdots$$
in $\mathop{\rm Mod}\nolimits S$ with all $U^i_j$ in $\mathcal{I}_C(S)$ for any $i,j\geq 0$. Then by \cite[Corollary 3.9]{Hu},
we get the following $\operatorname{Hom}_S(-,\mathcal{A}_C(S))$-exact exact sequence
$$0\to N\to U^0_0 \to U^0_1\oplus U^1_0 \to \cdots \to \oplus_{i=0}^{n}U^i_{n-i}\to \cdots$$
in $\mathop{\rm Mod}\nolimits S$ with all terms in $\mathcal{I}_C(S)$. It follows from \cite[Theorem 3.11(1)]{TH} that
$N\in\mathcal{A}_C(S)$. The proof is finished.
\end{proof}
We use $\mathcal{P}d_{S^{op}}C$ and $\mathop{\rm fd}\nolimits_{S^{op}}C$ to denote the projective and flat dimensions of $C_S$ respectively.
\begin{corollary}\label{4.11}
If $_RC$ has an ultimately closed projective resolution,
then the following statements are equivalent for any $n\geq 0$.
\begin{enumerate}
\item[(1)] $\mathcal{P}d_{S^{op}}C\leq n$.
\item[(2)] $\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}N \leq n$ for any $N\in\mathop{\rm Mod}\nolimits S$.
\end{enumerate}
\end{corollary}
\begin{proof}
Assume that $_RC$ has an ultimately closed projective resolution. By Theorem \ref{4.10},
we have $\mathcal{A}_C(S)={{C_S}^{\top}}$. Then
it is easy to see that $C_S$ is flat (equivalently, projective) if and only if $\mathcal{A}_{C}(S)=\mathop{\rm Mod}\nolimits S$,
so the assertion for the case $n=0$ follows. Now let $N\in\mathop{\rm Mod}\nolimits S$ and $n\geq 1$.
$(2)\Rightarrow (1)$ By (2) and Theorem \ref{4.4}, we have
$\Omega^n(N)\in\mathcal{A}_{C}(S)(\subseteq{C_S}^\top)$. Then by dimension shifting,
we have $\operatorname{Tor}_{\geq n+1}^S(C,N)=0$, and so $\mathcal{P}d_{S^{op}}C=\mathop{\rm fd}\nolimits_{S^{op}}C\leq n$.
$(1)\Rightarrow (2)$ If $\mathcal{P}d_{S^{op}}C\leq n$, then $\Omega^n(N)\in{{C_S}^{\top}}$ by dimension shifting.
By Theorem \ref{4.10}, we have $\Omega^n(N)\in\mathcal{A}_{C}(S)$ and $\mathcal{A}_{C}(S)$-$\mathcal{P}d_{S}N \leq n$.
\end{proof}
{\bf Acknowledgements.}
This research was partially supported by NSFC (Grant No. 11571164).
\end{document}
\begin{document}
\title{A two species trap for chromium and rubidium atoms}
\begin{abstract}
{We realize a combined trap for bosonic chromium ($^{52}$Cr) and
rubidium ($^{87}$Rb) atoms. First experiments focus on exploring a
suitable loading scheme for the combined trap and on studies of
new trap loss mechanisms originating from simultaneous trapping of
two species. By comparing the trap loss from the $^{87}$Rb
magneto-optical trap (MOT) in absence and presence of magnetically
trapped ground state $^{52}$Cr atoms we determine the scattering
cross section of $\sigma_{inel,RbCr}=(5.0\pm4.0)\cdot10^{-18}$\,m$^2$
for light induced inelastic collisions between the two species.
Studying the trap loss from the Rb magneto-optical trap induced by
the Cr cooling-laser light, the photoionization cross section of
the excited $5$P$_{3/2}$ state at an ionizing wavelength of
426\,nm is measured to be
$\sigma_{p}=(1.1\pm0.3)\cdot10^{-21}$\,m$^2$.}
\end{abstract}
\section{Introduction}
The dominant interaction in atomic Bose-Einstein condensates (BEC)
realized so far is the contact interaction. In the ultracold
regime, this interaction is isotropic and short range. Recently,
increasing theoretical interest has focused on the dipole-dipole
interaction \cite{Baranov2002,Goral2000,San00}. This interaction
is long range and anisotropic and thus would greatly enrich the
physics of degenerate quantum gases. Additionally, tuning of this
interaction from attraction to repulsion is possible by applying
time dependent external fields \cite{Gio02}. Together with the use
of a Feshbach resonance to vary the s-wave scattering length this
would allow control of the scattering properties of an ultracold
sample over a wide range of values and characteristics.
A promising candidate for the observation of the influence of the
dipole-dipole interaction on the dynamics of a BEC is atomic
chromium \cite{Schmidt2003}. Due to its comparably large magnetic
dipole moment of 6$\mu_B$, where $\mu_B$ is the Bohr magneton, the
dipole-dipole interaction is of the same order of magnitude as the
contact interaction. However, many of the effects proposed for
dipolar gases \cite{Pu01,demille,Baranov2002b} require a much
stronger dipole-dipole interaction. Magnetic dipole moments or
electric dipole moments aligned in external fields that lead to
such strong dipolar interaction can be found in heteronuclear
molecules. The strongly paramagnetic Cr-Rb molecule is thus
expected to have a large electric dipole moment besides its
magnetic moment. In addition, the generation of degenerate gases
of ultracold Cr-Rb-molecules formed by two ultracold gases of Cr
and Rb which are each produced by laser cooling and subsequent
sympathetic cooling \cite{Modugno2001} seems to be feasible.
The paper is organized as follows. After a short description of
our experimental setup, we present our findings on photoionization
of the excited Rb$^*$. Measurements concerning interspecies
collisions can be found in the subsequent section followed by our
conclusions.
\section{The combined magneto-optical trap for Cr and Rb}
As a first step on the way to the generation of ultracold
heteronuclear molecules, we have realized a two-species
magneto-optical trap for chromium and rubidium atoms. For our
measurement we prepare both MOTs in the quadrupole field of two
coils oriented in anti-Helmholtz configuration. Each trap is
formed by three orthogonal and retro-reflected trapping laser
beams, with the typical $\sigma^+-\sigma^-$-polarization. The
trapping beams of the two traps are tilted by a small angle with
respect to each other to be able to set up separate optics for
both traps. We load about $N_{Cr}=4\cdot10^6$ bosonic
$^{52}$Cr-atoms from a high temperature effusion cell via a
Zeeman-slower into the Cr-MOT. The 426\,nm cooling light is
generated by a frequency doubled Ti:Sa-laser. The Rb-MOT with a
steady state atom number of about $N_{Rb}=3\cdot10^6$ Rb-atoms is
loaded from the Rb-background gas provided by a continuously
operated Rb-getter source. The stabilized Rb-cooling and repumping
lasers are provided via a fiber from a different experimental
setup. Both frequencies separated by 6.8\,GHz are amplified by a
single laser diode using injection locking technique. When both
traps are operated at the same time, the steady state number of Rb
atoms $N_{Rb}$ drops by a factor of 5 due to photoionization of
the excited Rb-cooling state.
\section{Photoionization of magneto-optically trapped rubidium atoms}
Photoionization of a neutral atom is a transition from a bound
state $|i\rangle$ with internal energy $E_i$ to a continuum state
$|f\rangle$. Such a transition becomes possible if the energy
${E_p=\hbar\omega_p}$ of an incident photon exceeds the
ionization-energy ${E_{ion}=\hbar\omega_{ion}}$ of the bound state
$|i\rangle$. The transition rate ${\Gamma_{i{\rightarrow}f}}$ is
given by Fermi's Golden Rule:
\begin{equation}
\label{gammaif}
\Gamma_{i{\rightarrow}f}=\frac{2\pi}{\hbar}\left|\left<f\left|\hat{H}_{ia}\right|i\right>\right|^2\rho(E_{i}+E_p),
\end{equation}
where $\left<f\left|\hat{H}_{ia}\right|i\right>$ is the transition
matrix element of the interaction Hamiltonian $\hat{H}_{ia}$ and
$\rho(E_{i}+E_p)$ is the density of states in the continuum at the
final energy $E_{i}+E_p$. The photoionization rate can also be
expressed using the photoionization cross-section
$\sigma(\omega_p)$ if we define the photon-flux
$\Phi=\frac{I_p}{\hbar\omega_p}$ using the intensity $I_p$ of the
ionizing light with a frequency of $\omega_p$:
\begin{equation}\label{blueint}
\Gamma_{i{\rightarrow}f}(\omega_p,I_p)=\sigma_p(\omega_p)\Phi
\end{equation}
\begin{figure}
\caption{Energy diagram of the photoionization of Rubidium:
The energy of the incident photon is sufficient to ionize the excited
$5$P$_{3/2}$ state.}
\label{pischeme}
\end{figure}
\begin{figure}
\caption{Dependence of the one-body loss rate of the Rb trap on the population $\rho_{ee}$ of the excited state.}
\label{gammatotlin}
\end{figure}
For $^{87}$Rb the ionization threshold is at an energy level of
4.2\,eV (297\,nm) above the $5$S$_{1/2}$ ground state. With an
energy difference of 1.6\,eV (780\,nm) between the ground state
and the $5$P$_{3/2}$ excited state, the ionization energy of the
excited state is 2.6\,eV (479\,nm). Thus, if we apply the Cr
cooling laser (426\,nm) in the Rb-MOT, transitions of excited
Rb$^*$ atoms to continuum states become possible. This situation
is displayed in figure \ref{pischeme}. During the process, energy
and momentum are conserved and the excess energy is distributed
among the electron and the ion leading to a velocity of the
electron of $v_e\approx3.4\cdot10^5$\,m/s. Therefore,
recombination within the trap volume is very unlikely. Ions and
electrons are not supported by the trap. This additional loss
contributes to the total one-body loss rate, which reads
$\gamma_{tot}=\gamma_{Rb}+\gamma_{p}$, where $\gamma_{p}$ is the
loss rate induced by the ionization and $\gamma_{Rb}$ represents
all other one body losses which are mainly caused by background
gas collisions. As the ionization occurs in the excited state, the
ionization rate $\Gamma_{i{\rightarrow}f}$ has to be multiplied
by the population probability $\rho_{ee}$ of the excited state:
\begin{equation}
\label{gammaeq} \gamma_{p}=\Gamma_{i{\rightarrow}f}\, \rho_{ee},
\qquad \mathrm{where}
\end{equation}
\begin{equation}
\label{rhoeeeq} \rho_{ee}=\frac{s}{2({s+1})},\quad s=\frac{\langle
C\rangle^2 I/I_s}{1+(2\delta/\Gamma)^2}.
\end{equation}
Here $s$ is the saturation parameter expressed by the average
Clebsch-Gordan coefficient $\langle C\rangle^2=7/15$, $\Gamma$ the
natural linewidth and $I_s=1.6\,$mW/cm$^2$ the saturation
intensity of the Rb trapping transition in a light field with
total intensity $I$ and detuning $\delta$.
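As an illustration with representative values (the actual beam parameters vary between the measurements described below), a total intensity of $I=100$\,mW/cm$^2$ and a detuning of $\delta=2.25\,\Gamma$ give $s\approx1.4$ and hence $\rho_{ee}\approx0.29$ according to Eq. (\ref{rhoeeeq}).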
Neglecting two and three-body losses, which is a good
approximation for the low densities we observe in our trap, the
time evolution of the atom number $N_{Rb}(t)$ in the rubidium trap
can be described by the following well known rate equation:
\begin{equation}
\frac{dN_{Rb}}{dt}=L-\gamma_{tot} N_{Rb},
\end{equation}
where $L$ is the loading rate. We record loading curves of the
Rb-MOT with a photodiode at different intensities of the Rb-MOT
beams ranging from 20\,mW/cm$^2$ to 160\,mW/cm$^2$ and intensities
of the Cr trapping light from 0\,mW/cm$^2$ to 600\,mW/cm$^2$ and
determine the total loss rate $\gamma_{tot}$ under each of these
conditions. In figure \ref{gammatotlin} the measured loss rate
$\gamma_{tot}$ is plotted over $\rho_{ee}$ at a total intensity of
the ionizing laser of $I_p=100$\,mW/cm$^2$. Subtracting the
background loss rate $\gamma_{Rb}$, obtained from loading curves
with no ionizing light present, from the total loss rate, we
extract the ionization rate
${\gamma_{p}(I_{Rb})=\gamma_{tot}(I_{Rb})-\gamma_{Rb}}$. The
ionization rate shows a linear increase with the excited state
population and vanishes for $\rho_{ee}\rightarrow0$. We therefore
attribute the increase of the one-body loss rate when the 426\,nm
Cr-trapping light is switched on to photoionization of excited
Rb$^*$ atoms.
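For reference, assuming the trap is initially empty ($N_{Rb}(0)=0$), the above rate equation integrates to $N_{Rb}(t)=\frac{L}{\gamma_{tot}}\left(1-e^{-\gamma_{tot}t}\right)$, so the steady-state atom number is $L/\gamma_{tot}$ and $\gamma_{tot}$ can be extracted from the approach to this steady state.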
The population of the excited state is calculated using Eq.
(\ref{rhoeeeq}) and the intensities of the beams which are
measured outside the chamber and corrected taking into account the
transmission of the windows and the retroreflected beams. The
Rb-MOT laser is tuned such that the fluorescence of
the MOT is maximal, suggesting a detuning\footnote{According to several
publications (see e.g. \cite{Rapol2001}) maximum fluorescence is
observed at a detuning of $\delta\approx2.25\Gamma$ and we
estimated an accuracy of $\pm0.25\Gamma$ of this value.} of
$\delta\approx2.25\pm0.25\Gamma$. We estimate a total systematic
error caused by the uncertainty in the detuning and spatial
inhomogeneity of the laser beams of 20$\%$. For each intensity of
the Rb MOT beams a least square linear fit of Eq. (\ref{blueint})
to the measured rate constants $\Gamma_{i\rightarrow
f}=\gamma_{p}/\rho_{ee}$ yields $\sigma_{p}$. The mean value of
these photoionization cross sections of the $5P$ state of
$^{87}$Rb at a wavelength of 426\,nm is:
\begin{equation}
\sigma_{p}(426\,\mathrm{nm})=(1.1\pm0.3)\cdot10^{-21}\,\mathrm{m}^2
\end{equation}
This value is in good agreement with previously published values
\cite{Din92,Fuso2000} at comparable wavelengths and with
theoretical predictions \cite{Aym83}.
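As a rough illustration of the magnitude of this loss channel (using representative rather than measurement-specific parameters), an ionizing intensity of $I_p=100$\,mW/cm$^2$ at 426\,nm corresponds to a photon flux of $\Phi=I_p/\hbar\omega_p\approx2\cdot10^{21}$\,m$^{-2}$s$^{-1}$; Eq. (\ref{blueint}) then gives $\Gamma_{i\rightarrow f}\approx2.4$\,s$^{-1}$ and, for $\rho_{ee}\approx0.25$, an ionization-induced loss rate of $\gamma_p\approx0.6$\,s$^{-1}$.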
\section{Inelastic interspecies collisions}
In the second part of this article, we investigate the inelastic
trap losses in combined Cr-Rb traps. During an interspecies
collision in the presence of cooling light, each atom (Cr and Rb)
undergoes several transitions from the ground to the excited
state. If the atoms approach each other they experience a
molecular potential. For ground state atoms the long-range part of
this potential is dominated by the attractive van-der-Waals
potential $V_{gg}(r)=C_6/r^6$, where $r$ is the internuclear
distance. Based on a two level model Schl\"oder et al.
\cite{Schloeder1999} pointed out, that this part of the molecular
potential does not change its dependence on r if one of the atoms
is in the excited state $V_{eg}(r)=C^*_6/r^6$. The $C^*_6$
coefficient leads to an attractive (repulsive) interaction, if the
excited atom has the smaller (larger) transition frequency of both
the atoms. Since $C^*_6$ is always larger than $C_6$, the excited
state potential is steeper. This has two consequences for Cr-Rb
collisions:
First, inelastic collisions involving the excited state Cr$^*$
are prevented, because the steep, repulsive excited-state
potential keeps the two species from getting very close to each other.
Second, in contrast to homonuclear collisions, where the potential
$V_{eg}$ is proportional to $r^{-3}$, in the heteronuclear case,
the colliding atoms decouple from the light field at smaller
internuclear distances. This leads to a higher survival
probability \cite{Gallagher1989,Weiner1999}, which means that the
two atoms can approach each other in the Rb$^*$-Cr-potential to an
internuclear distance where fine structure changing collisions
(FC) and radiative escape (RE) lead to trap losses.
Since loss due to photoionization of excited Rb-atoms triggered by
the Cr-trapping light and loss due to Cr-light induced two-body
collisions in the Cr-MOT \cite{Bradley2000} are dominant in both
MOTs, it was not possible to observe trap loss due to interspecies
collisions during the simultaneous operation of these traps.
However, because of the 6 times larger magnetic moment of Cr we
are able to prepare a magnetically trapped (MT) cloud of Cr-atoms
and study the interaction with magneto-optically trapped Rb-atoms
in the absence of the Cr-cooling light. As explained above, in a
combined MOT for Cr and Rb we do not expect excited Cr atoms to
contribute to the inelastic loss coefficient, therefore, the
measured $\beta$-coefficient for inelastic collisions in the
overlapped MT for Cr and MOT for Rb should be very similar to a
coefficient measured while both MOTs are operated simultaneously.
For this measurement, we magnetically trap about $5\cdot10^7$
$^{52}$Cr-atoms at $100\,\mu$K in the $^7$S$_3$ ground state using
the continuous loading scheme presented by Stuhler et al.
\cite{Stu01}. Here, the MT potential is created by the same coil
configuration as described in the previous section. During the
measurements the field gradient in the direction of the coil axis
is $25$\,G/cm. Assuming a cut off parameter of $\eta=10$ this
results in a calculated magnetic trap depth of
$k_B\cdot530$\,$\mu$K for Cr-atoms in the extreme Zeeman substate
\cite{Luiten1996}. We then apply the Rb cooling and repumping
light for a certain interaction time $t$ to load the Rb-MOT. The
temperature of the Rb cloud is about $320\,\mu$K. The light forces
in the Rb-MOT result in a much deeper trapping potential of
approximately $k_B\cdot8$\,K. After the interaction time $t$ the
fluorescence of either the Rb-cloud or of the resonantly
illuminated Cr-atoms is imaged onto a calibrated CCD-camera while
no near resonant light is applied to the other species. The number
of atoms of the imaged species is calculated from the pixel count
of the image. We perform a series of such measurements in which we
record the evolution of the number of Cr-atoms $N_{Cr}(t)$ and
Rb-atoms $N_{Rb}(t)$. The temperature and the magnetic moment of
the magnetically trapped Cr-atoms are obtained from a 2D-fit to
the atomic distribution in a quadrupole field under the influence
of the gravity. The temperature of the Rb-atoms is deduced from
the ballistic expansion of the cloud.
Figure \ref{fig:zerfall_cr} illustrates the decay of the number of
magnetically trapped Cr-atoms during the first 30\,s with and
without trapped Rb-atoms being present. Neglecting two-body loss,
a fit of a one-body decay without trapped Rb-atoms present yields
a lifetime of 10\,s. In the presence of the Rb-MOT the atom number
is reduced by $8\cdot10^5$\,atoms (25\%) after 30\,s.
\begin{figure}
\caption{Decay of the number of magnetically trapped Cr-atoms with and without the influence of the Rb-MOT.
The inset indicates the constancy of the factor $N_{Cr}N_{Rb}/\overline{V}$.}
\label{fig:zerfall_cr}
\end{figure}
\begin{figure}
\caption{Loading curve of the Rb-MOT in the presence and absence of magnetically trapped Cr-atoms. The solid line indicates
a fit of the one-body loading equation to the measurement without magnetically trapped Cr-atoms.
The dotted line represents the initial slope of this fit.}
\label{fig:laden_rb}
\end{figure}
The loading process of the Rb-MOT is documented in figure
\ref{fig:laden_rb}. The measurement is performed with and without
magnetically trapped Cr-atoms. Neglecting two- and three-body
processes, we obtain a loading rate of
$L_{Rb}=2.6\cdot10^4$\,atoms/s and a lifetime of $9$\,s from a fit
of the one-body loading equation to the data without trapped
Cr-atoms. This yields a steady-state atom number of
$2.3\cdot10^5$\,atoms. The additional loss channel introduced by
magnetically trapped Cr-atoms can be clearly observed.
The decay of the number of magnetically trapped Cr-atoms
$N_{Cr}(t)$ and the increase of magneto-optically trapped Rb-atoms
$N_{Rb}(t)$ during loading in the presence of the other species is
governed by the following coupled rate equations:
\begin{eqnarray}
\label{eq:ratengleichungenCr}\frac{dN_{Cr}}{dt}&=&-\gamma_{Cr} N_{Cr}-\beta_{CrRb}\int d^3r\,n_{Cr} n_{Rb} \\
\label{eq:ratengleichungenRb}\frac{dN_{Rb}}{dt}&=&L_{Rb}-\gamma_{Rb} N_{Rb}- \beta_{RbCr}\int d^3r\,n_{Cr} n_{Rb}
\end{eqnarray}
Because of the different loss channels of the two types of traps,
$\beta_{CrRb}$ and $\beta_{RbCr}$ are not expected to be equal.
The integral over both density distributions in the interspecies
collision term can be approximated by the following expression
\cite{Stuhler2001b}:
\begin{eqnarray}\label{eq:ueberlappvolumen}
\int d^3r\, n_{Cr} n_{Rb}&=& \frac{N_{Cr} N_{Rb}}{\overline{V}},\\
\overline{V}&=&V_{MT}/\varsigma, \\
\varsigma&=&e^{\frac{\overline{\sigma}^2}{2
z^2}}\left(\frac{\overline{\sigma}^2}{z^2}+1\right)\left(1-\mathrm{Erf}
\left[\frac{\overline{\sigma}}{\sqrt{2}z}\right]\right)-
\sqrt{\frac{2}{\pi}}\frac{\overline{\sigma}}{z},
\end{eqnarray}
where the effective collision volume $\overline{V}$ varies between
the Cr-MT volume $V_{MT}=8\pi z^3$ for $\overline{\sigma}\ll z$
and the volume of the Rb-MOT $V_{MOT}$ for $\overline{\sigma}\gg
z$. The magnetic trap and the MOT are regarded to be isotropic
with a $1/e$-length $z$ for the Cr cloud in the MT in direction of
the coils axes and a mean $1/\sqrt{e}$-size $\overline{\sigma}$
for the Rb-cloud in the MOT, respectively.
In order to deduce the loss coefficient $\beta_{RbCr}$ from the Rb
loading curve, Eq. (\ref{eq:ratengleichungenRb}) is solved for the
first seconds by assuming a constant factor
$N_{Cr}N_{Rb}/\overline{V}$ which is well reproduced by our data
due to the inverse evolution of the atom numbers (see inset figure
\ref{fig:zerfall_cr}). From the initial slope $\alpha$, the loss
coefficient $\beta_{RbCr}$ can be calculated:
\begin{equation}\label{eq:slope}
\alpha=L_{Rb}-\beta_{RbCr}\frac{N_{Cr}N_{Rb}}{\overline{V}}.
\end{equation}
Using $L_{Rb}$ from the measurements in which no Cr-atom was
trapped we obtain a loss coefficient of
$\beta_{RbCr}=(1.4\pm1.1)\cdot10^{-17}$\,m$^3$/s with a population
probability of the excited Rb state of about 25\%. This yields an
inelastic cross section of $\sigma_{inel,RbCr}=\beta_{RbCr}
\overline{v}=(5.0\pm4.0)\cdot 10^{-18}$\,m$^2$, where we have used
the mean velocity $\overline{v}=\sqrt{\frac{8
k_B}{\pi}\left(\frac{T_{Cr}}{m_{Cr}}+\frac{T_{Rb}}{m_{Rb}}\right)}$.
Due to uncertainties in $N_{Cr}, N_{Rb}$ and $\overline{V}$ we
estimate a relative systematic error of 80\%.
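For the quoted temperatures ($T_{Cr}\approx100\,\mu$K and $T_{Rb}\approx320\,\mu$K) this mean velocity evaluates to $\overline{v}\approx0.34$\,m/s.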
From the difference in the number of Cr atoms in the presence and
absence of the trapped Rb cloud, we extract an upper and lower
limit for the loss coefficient taking the maximum and minimum
value of the factor $N_{Cr}N_{Rb}/\overline{V}$ and assuming this
factor to be constant over the first 30\,s:
$4.7\cdot10^{-16}\,$m$^3$/s $< \beta_{CrRb}<
5.5\cdot10^{-15}\,$m$^3$/s. For this approximation we used the
data points between 20\,s and 30\,s, where the data are well
separated. The systematic relative error of these limits is again
about 80\%. The loss coefficients $\beta_{RbCr}$ and
$\beta_{CrRb}$ we obtain, thus, differ by one order of magnitude.
We attribute this to the large difference of the trap depths of
the dissipative Rb-MOT and the conservative Cr-MT. Losses arise if
the energy gained by inelastic collisions between unpolarized Rb
and polarized Cr atoms is sufficiently high to eject an atom from
its trap. Due to energy and momentum conservation, a fraction $m_{Rb}/(m_{Cr}+m_{Rb})\approx63\%$ of the
released energy is transferred to the Cr atoms. While in the
Rb-MOT only fine structure changing (FC) and radiative escape (RE)
interspecies collisions lead to additional trap loss, in the
shallow Cr-MT FC, RE and hyperfine changing collision in the Rb
atom, interspecies dipolar relaxation and depolarizing collisions
which end in un-trapped states reduce the Cr atom number.
The atom loss in the Cr-MT in the presence of the Rb-MOT is
accompanied by heating ($\sim 8\,\mu$K/s) of the Cr
cloud. Figure \ref{fig:tempincr} depicts the evolution of the
temperature with and without the presence of the other species.
Without Rb atoms present, this heating rate is reduced to
$4\,\mu$K/s and is mainly caused by anti-evaporation due to
Majorana losses in the Cr-trap. We exclude thermalization with the
Rb-MOT as the cause of the additional heating, because the same
heating rate of the Cr cloud was measured in Cr traps prepared at
temperatures lower and higher than the temperature of the Rb-MOT.
Since the cloud is not collisionally dense, after most inelastic
processes the atoms are expelled from the trap without depositing
any energy in the cloud. Therefore, we attribute this increase to
anti-evaporation caused by the mentioned loss mechanism within an
inhomogeneous gas of Rb atoms and to collisions which lead to
depolarization of the magnetic sublevels. In the latter collisions,
the energy difference between the substates of Cr and Rb (Rb$^*$)
which is $3/2\mu_B B$ ($4/3\mu_B B$) in the small magnetic field
near the trap center, is released and heats up the cloud.
\begin{figure}
\caption{Temperature increase of the Cr-cloud in the MT. Depicted is the
temperature evolution of the cloud in the presence and absence of the Rb-MOT.}
\label{fig:tempincr}
\end{figure}
\section{Conclusions}
The first results on a combined Cr-Rb trap which are presented in
this paper show that the operation of both MOTs is limited by the
photoionization of the excited state of the Rb-atoms which occurs
with a cross section of $\sigma_{p}=(1.1\pm 0.3)\cdot10^{-21}$\,m$^2$.
Due to other dominant loss mechanisms interspecies collisions
could not be directly observed in the combined MOTs. By
overlapping the Rb-MOT and the Cr-MT in space and time, we
measured light induced interspecies collisions with a cross
section of $\sigma_{inel,RbCr}=(5.0\pm 4.0)\cdot10^{-18}$\,m$^2$ in
the Rb-MOT. Since collisions involving the excited state of Cr are
prevented in a combined MOT of Rb and Cr atoms, a very similar
cross section is expected for light induced collisions in a
combined MOT. Due to the contribution of additional loss channels,
the loss rate of Cr-atoms in the very shallow MT is more
pronounced. Simultaneous operation of both MOTs could be improved
by alternating cooling laser pulses for the two species which
would suppress photoionization. If a Rb ground state trap is
prepared before the Cr-MOT is loaded, light-induced interspecies
collisions could be prevented.
The discussed measurements have already indicated the richness of
the interaction in a combined system of trapped Cr and Rb atoms.
An improvement of our Rb-source should allow us to load a
significant number of Rb-atoms into a Rb-MT and study ground state
collisions. These measurements will allow us to extract the
elastic and inelastic interspecies ground state cross sections
which are important for sympathetic cooling. Here, studies of
inelastic processes resulting from the interaction of two species
with very different internal structures are of theoretical
interest to gain a deeper understanding of these relaxation
processes.
\section{Acknowledgment}
We thank Axel Grabowski for providing us with stabilized Rb
trapping light, K. Rz\c{a}\.{z}ewski and J. Stuhler for fruitful
discussions. This work was funded by the DFG SPP 1116 and the RTN
network ``Cold Quantum Gases'' under the contract No.
HPRN-CT-2000-00125.
\end{document}
\begin{document}
\title{Algebraic Characterization of the SSC $\Delta_s(\mathcal{G}_{n,r}^{1})$}
\author{Agha Kashif$^1$, Zahid Raza$^2$, Imran Anwar$^3$ }
\thanks{ {\bf 1.} University of Management and Technology Lahore, Pakistan.\\
{\bf 2.} University of Sharjah, College of Sciences, Department of Mathematics,United Arab Emirates.\\
{\bf 3.} ASSMS, Government College University, Lahore, Pakistan.}
\email {[email protected], [email protected], [email protected]}
\maketitle
\begin{abstract}
In this paper, we characterize the set of spanning trees of
$\mathcal{G}_{n,r}^{1}$ (a simple connected graph consisting of $n$
edges, containing exactly one {\em $1$-edge-connected chain} of $r$
cycles $\mathbb{C}_r^1$ and
$\mathcal{G}_{n,r}^{1}\setminus\mathbb{C}_r^1$ is a {\em forest}).
We compute the Hilbert series of the face ring
$k[\Delta_s(\mathcal{G}_{n,r}^{1})]$ for the spanning simplicial
complex $\Delta_s(\mathcal{G}_{n,r}^{1})$. Also, we characterize
associated primes of the facet ideal
$I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{n,r}^{1}))$. Furthermore, we
prove that the face ring $k[\Delta_s(\mathcal{G}_{n,r}^{1})]$ is
Cohen-Macaulay.
\end{abstract}
\noindent
{\it Key words: } simplicial complex, $f$-vector, face ring, facet ideal, spanning trees, primary decomposition, Hilbert series, Cohen-Macaulay ring.\\
{\it 2000 Mathematics Subject Classification}: Primary 13P10, Secondary 13H10, 13F20, 13C14.
\section{introduction}
The study of simplicial complexes arising from a simple graph has
been an important topic and has attracted considerable attention. One popular
part of this literature is the complementary simplicial complex
$\Delta_G$ of a graph $G$; for example, see \cite{Vi}. The notion of
spanning simplicial complex (SSC) $\Delta_s(G)$ associated to a
simple connected graph $G(V,E)$ was firstly introduced in
\cite{ARK}. For {\em uni-cyclic graphs} $U_{n,m}$, it is proved in \cite{ARK} that
$\Delta_s(U_{n,m})$ is {\em shifted}. Zhu, Shi and
Geng \cite{Zh} further investigated the algebraic and combinatorial
properties of SSC associated to $n$-cyclic graphs with a common
edge. In \cite{KAR}, the authors investigated the algebraic
properties of SSC $\Delta_s(G_{n,r})$
associated to {\em r-cyclic graphs} $G_{n,r}$ (containing exactly $r$ cycles having no edge in common). Moreover, they proved that the facet ideal $I_{\mathcal{F}}(\Delta_s(G_{n,r}))$ has linear quotients with respect to its generating set and computed the {\em betti numbers} of $I_{\mathcal{F}}(\Delta_s(G_{n,r}))$ for particular cases. Some other interesting classes of simple finite connected graphs are studied for SSC by Pan, Li and Zhu in \cite{PLZ} and Guo and Wu in \cite{GW}.\\
In this paper, we investigate the class of spanning simplicial
complexes $\Delta_s(\mathcal{G}_{n,r}^{1})$ associated to
$\mathcal{G}_{n,r}^{1}$, where $\mathcal{G}_{n,r}^{1}$ is a
connected graph having $n$ edges, containing exactly one {\em
$1$-edge-connected chain} of $r$ cycles $\mathbb{C}_r^1$ and
$\mathcal{G}_{n,r}^{1}\setminus\mathbb{C}_r^1$ is a {\em forest}. In other words, $\mathcal{G}_{n,r}^{1}$ is a graph consisting of $r$ cycles such that every pair of consecutive cycles have exactly one edge common between them. If
$C_1,C_2,\ldots,C_r$ are the $r$ cycles of the graph
$\mathcal{G}_{n,r}^{1}$ forming $\mathbb{C}_r^1$ with respective
lengths $m_1,m_2,\ldots, m_r$ then we fix the label of edge set of
$\mathcal{G}_{n,r}^{1}$ as follows;
\begin{equation}
E=\{e_{11},\ldots,e_{1m_1},e_{21}, \ldots
,e_{2m_2-1},\ldots,e_{r1},\ldots ,e_{rm_r-1},e_1,\ldots,e_t\}
\end{equation}
where $t=n-\sum\limits_{i=1}^{r} m_i+(r-1)$ and $\{e_{i1}, \ldots,
e_{iv}\}$ is the edge-set of the $i$th cycle such that $v=m_1$ for
$i=1$, $v=m_i-1$ for $i> 1$, and $e_{i1}$ always represents the
common edge between the $i$th and $(i+1)$th cycles (for $1\leq i< r$). We
give the characterization of $s(\mathcal{G}_{n,r}^{1})$ in
Lemma \ref{scn}. The formulation for $f$-vectors is presented in Proposition \ref{fsc},
which is further applied to derive a formula to compute the {\em
Hilbert series} of the {\em face ring}
$k\big[\Delta_s(\mathcal{G}_{n,r}^{1})\big]$ (see \ref{Hil}).
Moreover, in \ref{Ass} we characterize all the associated primes
of the facet ideal
$I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{n,r}^{1}))$. Finally, we
prove that the face ring $k[\Delta_s(\mathcal{G}_{n,r}^{1})]$ is
Cohen-Macaulay in \ref{CM}.
\section{Background and basic notions}
In this section, we give some background and preliminaries of the
topic and define some notions that will be useful in the sequel.
\begin{Definition}\label{spa}
\em {A {\em spanning tree} of a simple connected finite graph
$G(V,E)$ is a subtree of $G$ that contains every vertex of $G$. We
represent the collection of all edge-sets of the spanning trees of
$G$ by $s(G)$, in other words;
$$s(G):=\{E(T_i)\subset E , \hbox {\, where $T_i$ is a spanning tree of $G$}\}.$$
}\end{Definition} \noindent For any simple connected graph $G$, the
authors mentioned the {\em cutting-down method} to obtain all the
spanning trees of $G$ in \cite{ARK}. According to this method a
spanning tree is obtained by removing one edge from each cycle
appearing in the graph. However, for the graph
$\mathcal{G}_{n,r}^{1}$ with $r$ cycles having one edge common in
every consecutive cycles and the labeling given in $(1)$, one can
obtain its spanning trees by removing exactly $r$ edges from the
graph with not more than two edges deleted from any cycle. Note also
that if a common edge between two cycles is removed,
then only one further edge can be removed from the non-common edges of the two cycles on either side of that common edge.\\
For example by using the above said {\em cutting-down method} for
the graph $\mathcal{G}_{10,2}^{1}$ given in Fig. 1:\\
$s(\mathcal{G}_{10,2}^{1})=\big\{ \{ e_1, e_2, e_3, e_4, e_{13},
e_{11}, e_{23}, e_{21}\}, \{ e_1, e_2, e_3, e_4, e_{13}, e_{11},
e_{23}, e_{22}\},\\ \{ e_1, e_2, e_3, e_4, e_{13}, e_{11}, e_{21},
e_{22}\}, \{ e_1, e_2, e_3, e_4, e_{13}, e_{23}, e_{21}, e_{22}\},
\{ e_1, e_2, e_3, e_4, e_{12}, e_{11}, e_{23}, e_{21}\},\\ \{ e_1,
e_2, e_3, e_4, e_{12}, e_{11}, e_{23}, e_{22}\}, \{ e_1, e_2, e_3,
e_4, e_{12}, e_{11}, e_{21}, e_{22}\}, \{ e_1, e_2, e_3, e_4,
e_{12}, e_{23}, e_{21}, e_{22}\},\\ \{ e_1, e_2, e_3, e_4, e_{13},
e_{12}, e_{23}, e_{22}\},\{ e_1, e_2, e_3, e_4, e_{13}, e_{12},
e_{23}, e_{21}\},\{ e_1, e_2, e_3, e_4, e_{13}, e_{12}, e_{21},
e_{22}\}\big\}$
\begin{center}
\begin{picture}(300,80)\label{fig}
\thicklines
\put(100,27){\line(1,0){69.6}}\put(100,60){\line(1,0){70}}
\put(96,25){$\bullet$}
\put(77,27){\line(1,0){30}}\put(75,24){$\bullet$}\put(85,18){${e_{13}}$}\put(104,40){${e_{11}}$}
\put(57,27){\line(1,0){30}}\put(77,27){\line(2,3){23}}\put(55,24){$\bullet$}\put(65,18){${e_{1}}$}
\put(99,27){\line(0,1){33}}\put(97,57){$\bullet$}
\put(70,40){$e_{12}$}\put(172,40){$e_{21}$}\put(200,40){$e_{3}$}
\put(132,18){${e_{23}}$}\put(182,18){${e_{2}}$}\put(205,18){${e_{4}}$}
\put(132,64){$e_{22}$} \put(165.5,25){$\bullet$}
\put(166,27){\line(1,0){33}}\put(195,24.5){$\bullet$}
\put(197,27){\line(1,0){33}}\put(225,24.5){$\bullet$}
\put(197,27){\line(0,1){30}}\put(195,54){$\bullet$}
\put(168,27){\line(0,1){33}}\put(165.5,57){$\bullet$}
\put(116,-4){\tiny Fig. 1. $\mathcal{G}_{10,2}^1$}
\end{picture}
\end{center}
\begin{Definition}{\em
A {\em simplicial complex} $\Delta$ over a finite set $[n]=\{1,
2,\ldots,n \}$ is a collection of subsets of $[n]$, with the
property that $\{i\}\in \Delta$ for all $i\in[n]$, and if $F\in
\Delta$ then $\Delta$ will contain all the subsets of $F$
(including the empty set). An element of $\Delta$ is called a face
of $\Delta$, and the dimension of a face $F$ of $\Delta$ is defined
as $|F|-1$, where $|F|$ is the number of vertices of $F$. The
maximal faces of $\Delta$ under inclusion are called facets of
$\Delta$. The dimension of the simplicial complex $\Delta$ is :
$$\hbox{dim} \Delta = \max\{\hbox{dim} F | F \in \Delta\}.$$
We denote the simplicial complex $\Delta$ with facets $\{F_1,\ldots
, F_q\}$ by $$\Delta = \big\langle F_1,\ldots, F_q\big\rangle $$ }
\end{Definition}
\begin{Definition}\label{fvec}
{\em For a simplicial complex $\Delta$ over $[n]$ having dimension
$d$, its $f$-vector is a $(d+1)$-tuple, defined as:
$$f(\Delta)=(f_0,f_1,\ldots,f_d)$$
where $f_i$ denotes the number of $i-dimensional$ faces in $\Delta.$
}\end{Definition}
\begin{Definition}\label{co}{\bf (Spanning Simplicial Complex )}\\
{\em Let $G(V,E)$ be a simple finite connected graph and
$s(G)=\{E_1, E_2,\ldots,E_t\}$ be the edge-set of all possible
spanning trees of $G(V,E)$, then we defined (in \cite{ARK}) a
simplicial complex $\Delta_s(G)$ on $E$ such that the facets of
$\Delta_s(G)$ are precisely the elements of $s(G)$, we call
$\Delta_s(G)$ as the {\em spanning simplicial complex} of $G(V,E)$.
In other words;
$$\Delta_s(G)=\big\langle E_1,E_2,\ldots,E_t\big\rangle.$$
}\end{Definition}
Here we recall a definition from \cite{F1}.
\begin{Definition}{\em
Let $\Delta$ be a simplicial complex with vertex set $V=[n]$ and
facets $F_1,F_2,\ldots,F_q$. A {\em vertex cover} for $\Delta$ is a
subset $A$ of $V$ such that $A\cap F_i\neq\emptyset$ for all
$i\in\{1,2,\ldots,q\}$. A {\em minimal vertex cover} of $\Delta$ is
a subset $A$ of $V$ such that $A$ is a {\em vertex cover}, and no
proper subset of $A$ is a {\em vertex cover} for $\Delta$.}
\end{Definition}
For example, the {\em minimal vertex covers} for the {\em spanning
simplicial complex} $\Delta_s(\mathcal{G}_{10,2}^1)$ given in Fig.
1 include the following:
$$\{e_1\},\{e_2\},\{e_3\},\{e_4\},\{e_{13},e_{12}\},\{e_{23},e_{22}\}$$
\section{Spanning trees of $\mathcal{G}_{n,r}^1$ and Face ring $\Delta_s(\mathcal{G}_{n,r}^1)$ }
In this section, we discuss the combinatorial properties of
$\mathcal{G}_{n,r}^1$. We use $\bf \tau(\mathcal{G}_{n,r}^1)$ to
denote the {\bf total number of cycles} contained in
$\mathcal{G}_{n,r}^1$. We begin with an elementary result that
gives the total number of cycles contained in $\mathcal{G}_{n,r}^1$.
\begin{Proposition}\label{cycles}{\em
The total number of cycles in the graph $\mathcal{G}_{n,r}^1$ is
$$\tau(\mathcal{G}_{n,r}^1)=\frac{r(r+1)}{2}$$
}\end{Proposition}
\begin{proof}
The graph $\mathcal{G}_{n,r}^{1}$ contains the one-edge-connected
chain $\mathbb{C}_r^1$ of $r$ cycles $\{C_1, C_2,\ldots, C_r\}$. By
removing the common edges between any number of consecutive cycles,
we obtain a cycle formed by the remaining edges. The cycle obtained in this
way by adjoining consecutive cycles $C_i,C_{i+1},\ldots,C_{i+k}$ is
denoted by $\bf C_{i,i+1,\ldots,i+k}$. Therefore, we get the
following cycles
$$C_{1,2},C_{2,3},\ldots,C_{r-1,r},C_{1,2,3},\ldots,C_{r-2,r-1,r},\ldots,C_{2,3,\ldots,r},C_{1,2,3,\ldots,r}$$
Hence, the set of all possible cycles contained in the graph
$\mathcal{G}_{n,r}^{1}$ will be
$$\{C_{i,i+1,\ldots,i+k}\mid \;\;\;i\in\{1,2,\ldots,r-k\}\;\hbox{and}\; 0 \le k\le r-1\}.$$
Therefore, we get the total number of cycles contained in the graph
$\mathcal{G}_{n,r}^{1}$ as
$$\tau(\mathcal{G}_{n,r}^{1})=\sum\limits_{k=0}^{r-1}{\sum\limits_{i=1}^{r-k}1}=\frac{r(r+1)}{2}.$$\end{proof}
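For instance, the graph $\mathcal{G}_{10,2}^{1}$ of Fig. 1 contains exactly the three cycles $C_1$, $C_2$ and $C_{1,2}$, in agreement with $\tau(\mathcal{G}_{10,2}^{1})=\frac{2(2+1)}{2}=3$.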
It is clear from the above proposition that the cycle
$C_{i,i+1,\ldots,i+k}$ is obtained by removing the common edges
between the adjacent cycles $C_i, C_{i+1},\ldots, C_{i+k}$. We
denote the {\bf length of cycle $\bf C_{i,i+1,\ldots, i+k}$} by $\bf
|C_{i,i+1,\ldots,i+k}|$.
\begin{Proposition}\label{len}
\em{Let $\mathcal{G}_{n,r}^{1}$ be a graph containing the one-edge
connected chain $\mathbb{C}_r^1$ of $r$ cycles $\{C_1, C_2,\ldots,
C_r\}$, then the length of the cycle $C_{i,i+1,\ldots,i+k}$ is
$$\Big|C_{i,i+1,\ldots,i+k}\Big|=\sum\limits_{\alpha=0}^{k}\big|C_{i+\alpha}\big|-2k.$$
}\end{Proposition}
\begin{proof}
It is clear from above that $C_{i,i+1,\ldots,i+k}$ is obtained by
deleting the common edges shared by the adjacent cycles $\{C_i,
C_{i+1},\ldots, C_{i+k}\}$ in $\mathcal{G}_{n,r}^{1}$. Therefore,
the length of the cycle $C_{i,i+1,\ldots,i+k}$ is obtained by adding
lengths of all $C_i,C_{i+1},\ldots,C_{i+k}$ and subtracting $2k$
from it, since the common edges are being counted twice. Hence, we
have
$$
\Big|C_{i,i+1,\ldots,i+k}\Big|=\sum\limits_{\alpha=0}^{k}\big|C_{i+\alpha}\big|-2k.
$$
\end{proof}
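As a quick check, in the graph $\mathcal{G}_{10,2}^{1}$ of Fig. 1 we have $|C_1|=3$ and $|C_2|=4$, and the cycle $C_{1,2}$, formed by the edges $e_{12},e_{13},e_{21},e_{22},e_{23}$, indeed has length $|C_{1,2}|=3+4-2=5$.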
We use $\bf |C_{i,i+1,\ldots,i+k}\bigcap C_{j,j+1,\ldots,j+l}|$ to
denote the {\bf number of edges shared by the cycles}
$C_{i,i+1,\ldots,i+k}$ and $C_{j,j+1,\ldots,j+l}$. The following
proposition characterizes $|C_{i,i+1,\ldots,i+k}\bigcap
C_{j,j+1,\ldots,j+l}|$ in $\mathcal{G}_{n,r}^{1}$.
\begin{Proposition}\label{int}
\em{Let $\mathcal{G}_{n,r}^{1}$ be a graph containing the one-edge
connected chain $\mathbb{C}_r^1$ of $r$ cycles $\{C_1, C_2,\ldots,
C_r\}$ of lengths $m_1, m_2,\ldots, m_r$, then for $1\le k\le l\le
r$ we have
$$\Big|C_{i,i+1,\ldots,i+k}\bigcap C_{j,j+1,\ldots,j+l}\Big|=\left\{
\begin{array}{ll}
1, & {i+k=j-1;} \\
|C_{j,j+1,\ldots,j+\alpha}|-2, & {i+k=j+\alpha ;\; 0\le\alpha \le k-1;} \\
|C_{j,j+1,\ldots,j+k}|-1, & {i+k=j+k;} \\
|C_{j,j+1,\ldots,j+l}|, & {i+k=j+l\;and\; l=k;} \\
|C_{i,i+1,\ldots,i+k}|-2, & {i+k = j+\alpha; \; k+1 \le \alpha \le l-1;} \\
|C_{i,i+1,\ldots,i+k}|-1, & {i+k = j+l;} \\
|C_{i,i+1,\ldots,i+k-\alpha}|-2, & {i+k=j+l+\alpha; \; 1\le\alpha\le k;} \\
1, & {i=j+l+1;} \\
0, & {otherwise.}
\end{array}
\right.$$}
\end{Proposition}
\begin{proof}
Here we denote $m_{i,i+1,\ldots,i+k}=\Big|C_{i,i+1,\ldots,i+k}\Big|$.\\
Now for $1\le k\le l\le
r$ we discuss the following cases for $\Big|C_{i,i+1,\ldots,i+k}\bigcap C_{j,j+1,\ldots,j+l}\Big|$ :\\
\begin{description}
\item[Case (i)] If $i+k=j-1$, then the right most edges of the cycle $C_{i,i+1,\ldots,i+k}$ are from its adjoining cycle $C_{i+k}$ and the left most edges of the cycle $C_{j,j+1,\ldots,j+l}$ are from its adjoining cycle $C_{j}$, and since $C_{i+k}$ and $C_{j}$ are consecutive, they have only one edge in common.
\item[Case (ii)] If $i+k=j+\alpha ;\; 0\le\alpha \le k-1$, then the left most $\alpha$ adjoining cycles of the cycles of $C_{j,j+1,\ldots,j+l}$, i.e., $C_j,C_{j+1},\ldots,C_{j+\alpha}$ coincide with the right most $\alpha$ adjoining cycles of the cycles of $C_{i,i+1,\ldots,i+k}$. Therefore, the intersection $C_{i,i+1,\ldots,i+k}\bigcap C_{j,j+1,\ldots,j+l}$ will contain all edges of $C_{j,j+1,\ldots,j+\alpha}$ except its two edges, one the edge of $C_{j,j+1,\ldots,j+\alpha}$ which is the common edge of $C_{j+\alpha}$ and $C_{j+\alpha+1}$ and second the edge of $C_{j,j+1,\ldots,j+\alpha}$ which is common edge between $C_j$ and $C_{j-1}$.
\item[Case (iii)] If $i+k=j+k\; and\; k<l$, then $i=j$ and therefore the cycle $C_{i,i+1,\ldots,i+k}$ lies completely in the cycle $C_{j,j+1,\ldots,j+l}$ except its one edge which is the common edge between its adjoining cycle $C_{j+k}$ and the cycle $C_{j+k+1}$. Therefore the intersection $C_{i,i+1,\ldots,i+k}\bigcap C_{j,j+1,\ldots,j+l}$ will contain all edges of $C_{j,j+1,\ldots,j+k}$ except one.
\item[Case (iv)] If $i+k=j+l+\alpha; \; 1\le\alpha\le k$, then the left most $k-\alpha+1$ adjoining cycles of the cycle $C_{i,i+1,\ldots,i+k}$, which are $C_i,C_{i+1},\ldots,C_{i+k-\alpha}$ coincide with the right most $k-\alpha+1$ adjoining cycles of the cycle $C_{j,j+1,\ldots,j+l}$. Hence the intersection $C_{i,i+1,\ldots,i+k}\bigcap C_{j,j+1,\ldots,j+l}$ will contain all the edges of the cycle $C_{i,i+1,\ldots,i+k-\alpha}$ except two; one is the common edge of its adjoining cycle $C_i$ and the cycle $C_{i-1}$ and the other is the common edge of its adjoining cycle $C_{i+k-\alpha}$ and the cycle $C_{i+k-\alpha+1}$.
\end{description}
The remaining cases can be proved in a similar way.
\end{proof}
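As an illustration (for a chain of $r=3$ cycles, not pictured here), the cycles $C_{1,2}$ and $C_{2,3}$ correspond to $i=1$, $k=1$, $j=2$, $l=1$, so that $i+k=j+\alpha$ with $\alpha=0\le k-1$, and the proposition gives $\big|C_{1,2}\bigcap C_{2,3}\big|=|C_{2}|-2$, namely the edges of $C_2$ other than its two common edges $e_{11}$ and $e_{21}$.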
\begin{Lemma}\label{scn}{\bf Characterization of $s(\mathcal{G}_{n,r}^1)$}\\
\em{Let $\mathcal{G}_{n,r}^1$ be the graph with edge set $E$ as defined in (1). Then a subset $E(T_{(j_1i_1,j_2i_2,\hdots,j_ri_r)})\subset E$, where $j_\alpha\in\{1,2,\hdots,r\}$, $i_\alpha\in\{1,2,\hdots,m_{j_\alpha}-1\}$ for $j_\alpha\geq 2$ and $i_\alpha\in\{1,2,\hdots,m_1\}$ for $j_\alpha=1$, belongs to $s(\mathcal{G}_{n,r}^1)$ if and only if it satisfies one of the following:
\begin{enumerate}
\item if $j_\alpha i_\alpha\neq j_\alpha 1$ for all $\alpha$ except for which $j_\alpha i_\alpha=ri_\alpha$, then $E(T_{(j_1i_1,j_2i_2,\hdots,j_ri_r)})=E\setminus\{e_{1i_1},e_{2i_2},\hdots, e_{ri_r}\}$
\item if $j_\alpha i_\alpha= j_\alpha 1$ for any $\alpha$, then $E(T_{(j_1i_1,j_2i_2,\hdots,j_ri_r)})=E\setminus\{e_{j_1i_1},e_{j_2i_2},\hdots, e_{j_ri_r}\}$ where, $\{e_{j_1i_1},e_{j_2i_2},\hdots,e_{j_{\alpha -1}i_{\alpha -1}},e_{j_{\alpha +1}i_{\alpha +1}},\hdots , e_{j_ri_r}\}$ will contain exactly one edge from $C_{j_{\alpha}(j_{\alpha}+1)}\setminus \{ e_{(j_{\alpha}-1)1}, e_{j_{\alpha}1} \}$.
\item if $j_{\alpha} i_{\alpha}= j_{\alpha} 1$ for $\alpha \in \{r_1,r_1+1,\hdots,r_2\}$, where $1\le r_1<r_2< r$ then
\begin{enumerate}
\item if $e_{j_{r_1}1},e_{j_{(r_1+1)}1},\hdots, e_{j_{r_2}1}$ are common edges from consecutive cycles then $E(T_{(j_1i_1,j_2i_2,\hdots,j_ri_r)})=E\setminus\{e_{j_1i_1},e_{j_2i_2},\hdots, e_{j_ri_r}\}$ such that $\{e_{j_1i_1},e_{j_2i_2},\hdots, e_{j_ri_r}\}\setminus \{e_{j_{r_1}1},e_{j_{(r_1+1)}1},\hdots, e_{j_{r_2}1}\}$ will contain exactly one edge from $C_{j_{r_1}j_{(r_1+1)},\hdots,j_{r_2}}\setminus \{e_{(j_{r_1}-1)1},e_{j_{r_2}1} \}$,
\item if none of $e_{j_{r_1}1},e_{j_{(r_1+1)}1},\hdots, e_{j_{r_2}1}$ are common edges from consecutive cycles then $E(T_{(j_1i_1,j_2i_2,\hdots,j_ri_r)})=E\setminus\{e_{j_1i_1},e_{j_2i_2},\hdots, e_{j_ri_r}\}$ such that for each edge $e_{j_{r_t}1}$ case 2 holds.
\item if some of $e_{j_{r_1}1},e_{j_{(r_1+1)}1},\hdots, e_{j_{r_2}1}$ are common edges from consecutive cycles then $E(T_{(j_1i_1,j_2i_2,\hdots,j_ri_r)})=E\setminus\{e_{j_1i_1},e_{j_2i_2},\hdots, e_{j_ri_r}\}$ such that $(3.(a))$ is satisfied for the common edges of consecutive cycles and $(3.(b))$ is satisfied for remaining common edges.
\end{enumerate}
\end{enumerate}
In particular, if we denote the above classes of subsets of $E$ by $\mathcal{C}_{(1)},\mathcal{C}_{(2)},\mathcal{C}_{(3a)},\mathcal{C}_{(3b)},\mathcal{C}_{(3c)}$ respectively then,
$$ s(\mathcal{G}_{n,r}^1)=\mathcal{C}_{(1)}\bigcup\mathcal{C}_{(2)}\bigcup\mathcal{C}_{(3a)}\bigcup\mathcal{C}_{(3b)}\bigcup\mathcal{C}_{(3c)}$$}
\end{Lemma}
\begin{proof}
The graph $\mathcal{G}_{n,r}^1$ has cycles $C_1,C_2,\hdots,C_r$ with $e_{11},e_{21},\hdots,e_{(r-1)1}$ as the common edges between consecutive cycles, and by the cutting-down process a total of $r$ edges must be removed, with not more than one edge taken from the non-common edges of each cycle. Therefore, in order to obtain a spanning tree of $\mathcal{G}_{n,r}^1$ in which none of the common edges $e_{11},e_{21},\hdots,e_{(r-1)1}$ is removed, we need to remove exactly one edge from the non-common edges of each cycle. This explains case (1) of the lemma.
Now, for a spanning tree of $\mathcal{G}_{n,r}^1$ in which exactly one common edge $e_{j_\alpha 1}$ is removed, we need to remove precisely $r-1$ further edges by the cutting-down process from the remaining edges. However, from the non-common edges of the cycle $C_{j_{\alpha}(j_{\alpha}+1)}$, we cannot remove more than one edge (since that would result in a disconnected graph). This explains case (2) of the lemma.
Next, for case (3.a), we need to obtain a spanning tree of $\mathcal{G}_{n,r}^1$ such that $r_2-r_1$ common edges are removed from consecutive cycles. If $C_{j_{r_1}},C_{j_{r_1+1}},\hdots,C_{j_{r_2}}$ are consecutive cycles, then the remaining $r-(r_2-r_1)$ edges must be removed in such a way that exactly one edge is removed from the non-common edges of $C_{j_{r_1}j_{(r_1+1)},\hdots,j_{r_2}}$ and of each of the remaining cycles of the graph $\mathcal{G}_{n,r}^1$, which concludes the case.
The remaining cases of the lemma can be handled in a similar manner using the above cases. Consequently, if we denote the above disjoint classes of subsets of $E$ by $\mathcal{C}_{(1)},\mathcal{C}_{(2)},\mathcal{C}_{(3a)},\mathcal{C}_{(3b)},\mathcal{C}_{(3c)}$ respectively, then we get the desired result for $s(\mathcal{G}_{n,r}^1)$ as follows:
$$ s(\mathcal{G}_{n,r}^1)=\mathcal{C}_{(1)}\bigcup\mathcal{C}_{(2)}\bigcup\mathcal{C}_{(3a)}\bigcup\mathcal{C}_{(3b)}\bigcup\mathcal{C}_{(3c)}$$
\end{proof}
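To make the cutting-down description above easy to experiment with, the following small Python sketch (not part of the mathematical development; the particular graph, namely two $4$-cycles sharing one common edge, and all identifiers are illustrative assumptions) enumerates the spanning trees of such a graph by brute force and lists the $r$ edges removed in each case.
\begin{verbatim}
# Brute-force sketch: enumerate spanning trees of a small chain-of-cycles
# graph and record which r edges the cutting-down process removes.
from itertools import combinations

def is_spanning_tree(num_vertices, kept_edges):
    """True iff kept_edges is acyclic and has num_vertices - 1 edges."""
    parent = list(range(num_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in kept_edges:
        ru, rv = find(u), find(v)
        if ru == rv:                  # this edge would close a cycle
            return False
        parent[ru] = rv
    return len(kept_edges) == num_vertices - 1

# Two 4-cycles C1 = 0-1-2-3-0 and C2 = 0-1-4-5-0 sharing the edge (0, 1).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 4), (4, 5), (5, 0)]
num_vertices, r = 6, 2

facets = [removed for removed in combinations(edges, r)
          if is_spanning_tree(num_vertices,
                              [e for e in edges if e not in removed])]
for removed in facets:
    print("removed:", removed)
print("number of spanning trees:", len(facets))
\end{verbatim}
For this graph the sketch reports $15$ spanning trees, and in every case at most one of the removed edges is a non-common edge of each cycle, in line with the classification above.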
Our next result is the characterization of the $f$-vector of $\Delta_s(\mathcal{G}_{n,r}^1)$.
\begin{Proposition}\label{fsc}
\em{Let $\Delta_s(\mathcal{G}_{n,r}^1)$ be the spanning simplicial complex of the graph $\mathcal{G}_{n,r}^1$. Then $\dim(\Delta_s(\mathcal{G}_{n,r}^1))=n-r-1$, the $f$-vector is $f(\Delta_s(\mathcal{G}_{n,r}^1))=(f_0,f_1,\hdots,f_{n-r-1})$, and\\
$ f_i=\left(
\begin{array}{c}
n \\
i+1 \\
\end{array}
\right)+\sum\limits_{k=1}^{\tau}(-1)^k\\
\left[
\begin{array}{c}
{\sum\limits_{j=1}^{k}\sum\limits_{k_{s_j}=0}^{r-1}\sum\limits_{i_{s_j}=1
}^{r-k_{s_j}} \left(
\begin{array}{c}
n-\sum\limits_{j=1}^{k} m_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}+\sum\limits_{u,v=1}^{k}\big|C_{i_{s_u},i_{s_u}+1,\hdots,i_{s_u}+k_{s_u}}\bigcap C_{i_{s_v},i_{s_v}+1,\hdots,i_{s_v}+k_{s_v}}\big| \\
i+1-\sum\limits_{j=1}^{k} m_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}+\sum\limits_{u,v=1}^{k}\big|C_{i_{s_u},i_{s_u}+1,\hdots,i_{s_u}+k_{s_u}}\bigcap C_{i_{s_v},i_{s_v}+1,\hdots,i_{s_v}+k_{s_v}}\big| \\
\end{array}
\right)}\\
\end{array}
\right]$
\\where $0\le i\le n-r-1$}\end{Proposition}
\begin{proof}
Let $E$ be the edge set of $\mathcal{G}_{n,r}^1$ and let $\mathcal{C}_{(1)},\mathcal{C}_{(2)},\mathcal{C}_{(3a)}, \mathcal{C}_{(3b)}, \mathcal{C}_{(3c)}$ be the disjoint classes of spanning trees of $\mathcal{G}_{n,r}^1$ described above. Then, by Lemma \ref{scn}, we have
$$ s(\mathcal{G}_{n,r}^1)=\mathcal{C}_{(1)}\bigcup\mathcal{C}_{(2)}\bigcup\mathcal{C}_{(3a)}\bigcup\mathcal{C}_{(3b)}\bigcup\mathcal{C}_{(3c)}$$
Therefore, by Definition 2.4, we can write
$$ \Delta_s (\mathcal{G}_{n,r}^1)=\Big\langle\mathcal{C}_{(1)}\bigcup\mathcal{C}_{(2)}\bigcup\mathcal{C}_{(3a)}\bigcup\mathcal{C}_{(3b)}\bigcup\mathcal{C}_{(3c)} \Big\rangle$$
Since each facet $\hat{E}_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}=E(T_{(j_1i_1,j_2i_2,\hdots,j_ri_r)})$ is obtained by deleting exactly $r$ edges from the edge set of $\mathcal{G}_{n,r}^1$ (keeping Lemma \ref{scn} in view), every facet has the same dimension, namely $n-r-1$ (since $ |\hat{E}_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}|=n-r$), and hence the dimension of $\Delta_s(\mathcal{G}_{n,r}^1)$ is $n-r-1$.\\
Also, it is clear from the definition of $\Delta_s(\mathcal{G}_{n,r}^1)$ that it consists of all those subsets of $E$ which do not contain any cycle of the graph $\mathcal{G}_{n,r}^1$; in particular, no face contains any of the sets $\{e_{11},\hdots,e_{1m_1}\}$ or $\{e_{(i-1)1},e_{i1},\hdots,e_{im_i-1}\}$, $2\le i\le r$.\\
Now, by Lemma 3.1, the cycles of the graph $\mathcal{G}_{n,r}^1$ are exactly the $C_{i,i+1,\hdots,i+k}$ with $i\in\{1,2,\hdots,r-k\}$ and $0 \le k\le r-1$, and their total number is $\tau$. Let $F$ be a subset of $E$ of order $i+1$ that contains none of these cycles. The number of such $F$ is precisely $f_i$, and we compute it using the inclusion-exclusion principle. Therefore,\\
$f_i=$ the number of subsets of $E$ of order $i+1$ containing none of the cycles $C_{i,i+1,\hdots,i+k}$, $i\in\{1,2,\hdots,r-k\}$, $0 \le k\le r-1$.\\
By the inclusion-exclusion principle, we have\\
$f_i=\Big( $ the number of subsets of $E$ of order $i+1\Big)- \Big(\sum\limits_{j=1}^{1}\sum\limits_{k_{s_j}=0}^{r-1}\sum\limits_{i_{s_j}=1}^{r-k_{s_j}}$ the number of subsets of $E$ of order $i+1$ containing $C_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}\Big)+\Big(\sum\limits_{j=1}^{2}\sum\limits_{k_{s_j}=0}^{r-1}\sum\limits_{i_{s_j}=1}^{r-k_{s_j}}$ the number of subsets of $E$ of order $i+1$ containing both of the cycles $C_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}\Big)- \cdots +(-1)^{\tau}\Big(\sum\limits_{j=1}^{\tau}\sum\limits_{k_{s_j}=0}^{r-1}\sum\limits_{i_{s_j}=1
}^{r-k_{s_j}}$ the number of subsets of $E$ of order $i+1$ containing each of the cycles $C_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}\Big)$
\\
\\This implies\\
$f_i=\left(
\begin{array}{c}
n \\
i+1 \\
\end{array}
\right)-\Big[\sum\limits_{j=1}^{1}\sum\limits_{k_{s_j}=0}^{r-1}\sum\limits_{i_{s_j}=1}^{r-k_{s_j}}\left(
\begin{array}{c}
n-m_{i_{s_1},i_{s_1}+1,\hdots,i_{s_1}+k_{s_1}} \\
i+1-m_{i_{s_1},i_{s_1}+1,\hdots,i_{s_1}+k_{s_1}} \\
\end{array}
\right)
\Big]+\\ \left[
\begin{array}{c}
\sum\limits_{j=1}^{2}\sum\limits_{k_{s_j}=0}^{r-1}\sum\limits_{i_{s_j}=1}^{r-k_{s_j}}
\left(
\begin{array}{c}
n-\sum\limits_{j=1}^{2} m_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}+\sum\limits_{u,v=1}^{2}\big|C_{i_{s_u},i_{s_u}+1,\hdots,i_{s_u}+k_{s_u}}\bigcap C_{i_{s_v},i_{s_v}+1,\hdots,i_{s_v}+k_{s_v}}\big| \\
i+1-\sum\limits_{j=1}^{2} m_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}+\sum\limits_{u,v=1}^{2}\big|C_{i_{s_u},i_{s_u}+1,\hdots,i_{s_u}+k_{s_u}}\bigcap C_{i_{s_v},i_{s_v}+1,\hdots,i_{s_v}+k_{s_v}}\big| \\
\end{array}
\right) \\
\end{array}
\right]\\
- \cdots+(-1)^{\tau}\\
\left[
\begin{array}{c}
{\sum\limits_{j=1}^{\tau}\sum\limits_{k_{s_j}=0}^{r-1}\sum\limits_{i_{s_j}=1
}^{r-k_{s_j}} \left(
\begin{array}{c}
n-\sum\limits_{j=1}^{\tau} m_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}+\sum\limits_{u,v=1}^{\tau}\big|C_{i_{s_u},i_{s_u}+1,\hdots,i_{s_u}+k_{s_u}}\bigcap C_{i_{s_v},i_{s_v}+1,\hdots,i_{s_v}+k_{s_v}}\big| \\
i+1-\sum\limits_{j=1}^{\tau} m_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}+\sum\limits_{u,v=1}^{\tau}\big|C_{i_{s_u},i_{s_u}+1,\hdots,i_{s_u}+k_{s_u}}\bigcap C_{i_{s_v},i_{s_v}+1,\hdots,i_{s_v}+k_{s_v}}\big| \\
\end{array}
\right)}\\
\end{array}
\right]$
\\This implies\\
$ f_i=\left(
\begin{array}{c}
n \\
i+1 \\
\end{array}
\right)+\sum\limits_{k=1}^{\tau}(-1)^k\\
\left[
\begin{array}{c}
{\sum\limits_{j=1}^{k}\sum\limits_{k_{s_j}=0}^{r-1}\sum\limits_{i_{s_j}=1
}^{r-k_{s_j}} \left(
\begin{array}{c}
n-\sum\limits_{j=1}^{k} m_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}+\sum\limits_{u,v=1}^{k}\big|C_{i_{s_u},i_{s_u}+1,\hdots,i_{s_u}+k_{s_u}}\bigcap C_{i_{s_v},i_{s_v}+1,\hdots,i_{s_v}+k_{s_v}}\big| \\
i+1-\sum\limits_{j=1}^{k} m_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}+\sum\limits_{u,v=1}^{k}\big|C_{i_{s_u},i_{s_u}+1,\hdots,i_{s_u}+k_{s_u}}\bigcap C_{i_{s_v},i_{s_v}+1,\hdots,i_{s_v}+k_{s_v}}\big| \\
\end{array}
\right)}\\
\end{array}
\right]
$
\end{proof}
\begin{cor}
Let $\Delta_s(\mathcal{G}_{n,2}^1)$ be the spanning simplicial complex of a graph with $2$ cycles of lengths $m_1$ and $m_2$ having one common edge. Then $\dim(\Delta_s(\mathcal{G}_{n,2}^1))=n-3$, the $f$-vector is $f(\Delta_s(\mathcal{G}_{n,2}^1))=(f_0,f_1,\ldots,f_{n-3})$, and
\\$ f_i=\left(
\begin{array}{c}
n \\
i+1 \\
\end{array}
\right)-\Big[\left(
\begin{array}{c}
n-m_1 \\
i+1-m_1 \\
\end{array}
\right)+\left(
\begin{array}{c}
n-m_2 \\
i+1-m_2 \\
\end{array}
\right)+\left(
\begin{array}{c}
n-m_{1,2} \\
i+1-m_{1,2} \\
\end{array}
\right)\Big]+\\ \Big[\left(
\begin{array}{c}
n-m_1-m_2+|C_1\cap C_2| \\
i+1-m_1-m_2+|C_1\cap C_2| \\
\end{array}
\right)+\left(
\begin{array}{c}
n-m_1-m_{1,2}+|C_1\cap C_{1,2}| \\
i+1-m_1-m_{1,2}+|C_1\cap C_{1,2}| \\
\end{array}
\right)+\\ \left(
\begin{array}{c}
n-m_2-m_{1,2}+|C_2\cap C_{1,2}| \\
i+1-m_2-m_{1,2}+|C_2\cap C_{1,2}| \\
\end{array}
\right)
\Big]+\\ \Big[\left(
\begin{array}{c}
n-m_1-m_2-m_{1,2}+|C_1\cap C_2|+|C_1\cap C_{1,2}|+|C_2\cap C_{1,2}| \\
i+1-m_1-m_2-m_{1,2}+|C_1\cap C_2|+|C_1\cap C_{1,2}|+|C_2\cap C_{1,2}| \\
\end{array}
\right)
\Big]$
\\where $0\le i\le n-3.$
\end{cor}
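Since the faces of $\Delta_s(\mathcal{G}_{n,2}^1)$ are exactly the cycle-free subsets of $E$, the corollary can be checked by brute force on small instances. The following Python sketch (an illustrative computation only; the graph, two $4$-cycles sharing one edge so that $n=7$, $m_1=m_2=4$, and $m_{1,2}=6$, is our own choice) counts the $(i+1)$-element edge subsets containing no cycle.
\begin{verbatim}
# Brute-force f-vector of the spanning simplicial complex of a small
# two-cycle graph: f_i counts the (i+1)-edge subsets that contain no cycle.
from itertools import combinations

def is_forest(num_vertices, edge_subset):
    """Union-find test: True iff the edge subset contains no cycle."""
    parent = list(range(num_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edge_subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 4), (4, 5), (5, 0)]
n, num_vertices, r = len(edges), 6, 2

f_vector = tuple(sum(1 for F in combinations(edges, i + 1)
                     if is_forest(num_vertices, F))
                 for i in range(n - r))
print(f_vector)   # (7, 21, 35, 33, 15) for this graph
\end{verbatim}
The printed values agree with the corollary for this instance; in particular the top entry $f_{n-3}=15$ is the number of spanning trees of the graph.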
For a simplicial complex $\Delta$ over $[n]$, one associates to it the
Stanley-Reisner ideal, that is, the monomial ideal $I_{\mathcal N}(\Delta)$ in
$S=k[x_1, x_2,\ldots ,x_n]$ generated by the monomials corresponding to the
non-faces of the complex (here one variable of the polynomial ring is assigned
to each vertex of the complex). It is well known
that the face ring $k[\Delta]=S/I_{\mathcal N}(\Delta)$
is a standard graded algebra. We refer the reader to \cite{HP} and
\cite{Vi} for more details about a graded algebra $A$, its Hilbert
function $H(A,t)$, and its Hilbert series $H_t(A)$.
The main result of this section is as follows:
\begin{Theorem}\label{Hil} {\em Let $\Delta_s(\mathcal{G}_{n,r}^1) $ be the spanning simplicial complex of
$\mathcal{G}_{n,r}^1$, then the Hilbert series of the face ring
$k\big[\Delta_s(\mathcal{G}_{n,r}^1)\big]$ is given by,\\
$H(k[\Delta_s(\mathcal{G}_{n,r}^1)],t)=1+\sum\limits_{i=0}^{d}\frac{{n\choose
{i+1}}{t^{i+1}}}{(1-t)^{i+1}}+\sum\limits_{i=0}^{d}\sum\limits_{k=1}^{\tau}(-1)^k\\
\tiny{
\Big[
\begin{array}{c}
{\sum\limits_{j=1}^{k}\sum\limits_{k_{s_j}=0}^{r-1}\sum\limits_{i_{s_j}=1
}^{r-k_{s_j}} \left(
\begin{array}{c}
n-\sum\limits_{j=1}^{k} m_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}+\sum\limits_{u,v=1}^{k}\big|C_{i_{s_u},i_{s_u}+1,\hdots,i_{s_u}+k_{s_u}}\bigcap C_{i_{s_v},i_{s_v}+1,\hdots,i_{s_v}+k_{s_v}}\big| \\
i+1-\sum\limits_{j=1}^{k} m_{i_{s_j},i_{s_j}+1,\hdots,i_{s_j}+k_{s_j}}+\sum\limits_{u,v=1}^{k}\big|C_{i_{s_u},i_{s_u}+1,\hdots,i_{s_u}+k_{s_u}}\bigcap C_{i_{s_v},i_{s_v}+1,\hdots,i_{s_v}+k_{s_v}}\big| \\
\end{array}
\right)}\\
\end{array}
\Big]} \frac{t^{i+1}}{(1-t)^{i+1}}$,\\
where $d=\dim \Delta_s(\mathcal{G}_{n,r}^1)=n-r-1$.}
\end{Theorem}
\begin{proof}
From \cite{Vi}, we know that if $\Delta$ is a simplicial complex of
dimension $d$ with $f$-vector $f(\Delta)=(f_0, f_1, \ldots,f_d)$,
then the Hilbert series of the face ring $k[\Delta]$ is given
by $$H(k[\Delta],t)= 1+\sum_{i=0}^{d}\frac{f_i
t^{i+1}}{(1-t)^{i+1}}.$$ Substituting the values of the $f_i$ from
Proposition \ref{fsc} into this expression yields the desired result.
\end{proof}
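For completeness, the translation from an $f$-vector to the Hilbert series is a one-line computation; the following sketch (illustrative only, using the $f$-vector obtained above for the two $4$-cycle example) assembles the series with SymPy.
\begin{verbatim}
# Hilbert series of the face ring from an f-vector:
#   H(k[Delta], t) = 1 + sum_i f_i t^(i+1) / (1 - t)^(i+1).
import sympy as sp

t = sp.symbols('t')
f_vector = (7, 21, 35, 33, 15)   # illustrative f-vector from the earlier sketch

H = 1 + sum(f * t**(i + 1) / (1 - t)**(i + 1) for i, f in enumerate(f_vector))
print(sp.simplify(H))            # the series as a single rational function
\end{verbatim}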
\section{Associated primes of the facet ideal $I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{n,r}^1))$ }
We present the characterization of all associated
primes of the facet ideal $I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{n,r}^1))$ of the
spanning simplicial complex $\Delta_s(\mathcal{G}_{n,r}^1)$ in this section.
Associated to a simplicial complex $\Delta$ over $[n]$, one defines {\em the facet
ideal} $I_{\mathcal{F}}(\Delta)\subset S$, which is
generated by the square-free monomials $x_{i_1} \cdots x_{i_s}$, where
$\{v_{i_1} ,\ldots, v_{i_s}\}$ is a facet of $\Delta$.
\begin{Lemma}\label{Ass}{\em
Let $\Delta_s(\mathcal{G}_{n,r}^1)$ be the spanning simplicial complex of the
$r$-cycles graph $\mathcal{G}_{n,r}^1$. Then
$$I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{n,r}^1))=\left(\bigcap_{e_t\not\in C_{i}}^{ 1 \leq i\leq r}(x_t)\right)\bigcap \left(\bigcap_{ 2\leq i_\alpha\neq i_\beta\leq m_{j_\alpha}-1}^{2\leq j_\alpha\leq r-1}(x_{j_\alpha i_\alpha},x_{j_\alpha i_\beta})\right)$$
$$\bigcap\left(\bigcap_{2\leq i_\alpha\neq i_\beta\leq m_1}\left(x_{1i_\alpha},x_{1i_\beta} \right)\right)\bigcap\left(\bigcap_{1\leq i_\alpha\neq i_\beta\leq m_r-1}\left(x_{ri_\alpha},x_{ri_\beta} \right)\right)$$ }
\end{Lemma}
\begin{proof} Consider the spanning simplicial complex
$\Delta_s(\mathcal{G}_{n,r}^1)$ and let
$I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{n,r}^1))$ be the facet ideal of
$\Delta_s(\mathcal{G}_{n,r}^1)$. By \cite[Proposition 1.8]{F1}, the minimal prime ideals of the facet ideal
$I_{\mathcal{F}}(\Delta)$ are in one-to-one correspondence with the
minimal vertex covers of the simplicial complex. Therefore, in order
to compute the primary decomposition of the facet ideal
$I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{n,r}^1))$, it suffices to compute
all the minimal vertex covers of $\Delta_s(\mathcal{G}_{n,r}^1)$.\\
It is clear from the definition of $\Delta_s(\mathcal{G}_{n,r}^1)$ and from Lemma \ref{scn} that $\{e_t\}$ is a minimal vertex cover of $\Delta_s(\mathcal{G}_{n,r}^1)$ whenever $e_t\not\in
C_{i}$ for all $i\in \{1,\ldots, r\}$. Moreover, $\{e_{j_\alpha i_\alpha},e_{j_\alpha i_\beta}\}$ is also a minimal vertex cover of $\Delta_s(\mathcal{G}_{n,r}^1)$, with
$2\leq i_\alpha\neq i_\beta\leq m_{j_\alpha}-1$ for $ j_\alpha\in\{2,\hdots, r-1\}$, $2\leq i_\alpha\neq i_\beta\leq m_1$ for $ j_\alpha=1$, and $1\leq i_\alpha\neq i_\beta\leq m_r-1$ for $j_\alpha=r$. Indeed, for any $\hat{E}_{(j_1i_1,j_2i_2,\hdots, j_ri_r)}\in s(\mathcal{G}_{n,r}^1)$ the intersection $\{e_{j_\alpha i_\alpha},e_{j_\alpha i_\beta}\}\cap \hat{E}_{(j_1i_1,j_2i_2,\hdots, j_ri_r)} $ is nonempty.
\end{proof}
\section{Cohen-Macaulayness of the face ring of $\Delta_s(\mathcal{G}_{n,r}^1)$}
In this section, we include some definitions and results from \cite{AR} and use them to show that the face ring of $\Delta_s(\mathcal{G}_{n,r}^1)$ is Cohen-Macaulay.
\begin{Definition}\label{qlq}\cite{AR}\\
\em{Let $I\subset S=k[x_1,x_2,\hdots,x_n]$ be a monomial ideal. We
say that $I$ has \textit{quasi-linear quotients} if there
exists a minimal monomial system of generators $m_1,m_2,\hdots,m_r$
such that $\mathrm{mindeg}(\hat{I}_{m_i})=1$ for all $1<i\le r$, where
$$\hat{I}_{m_i}=(m_1,m_2,\hdots,m_{i-1}):(m_i).$$}
\end{Definition}
\begin{Theorem}\cite{AR}
\em{Let $\Delta$ be a pure simplicial complex of dimension $d$ over
$[n]$. Then $\Delta$ is a shellable simplicial complex if and
only if $I_{\mathcal{F}}(\Delta)$ has quasi-linear
quotients.}
\end{Theorem}
\begin{cor}\label{frcm}\cite{AR}
\em{ If the facet ideal $I_{\mathcal{F}}(\Delta)$ of a pure
simplicial complex $\Delta$ over $[n]$ has quasi-linear quotients,
then the face ring is Cohen-Macaulay.}
\end{cor}
\begin{Theorem}\label{CM}
\em{The face ring of $\Delta_s(\mathcal{G}_{n,r}^1)$ is
Cohen-Macaulay.}
\end{Theorem}
\begin{proof}
By Corollary \ref{frcm}, it suffices to show that $I_{\mathcal{F}}\big(\Delta_s(\mathcal{G}_{n,r}^1)\big)$ has quasi-linear quotients in $S=k[x_{11},x_{12},\hdots,x_{1m_1},x_{21},x_{22},\hdots,x_{2(m_2-1)},\hdots,x_{r1},x_{r2},\hdots,\\
x_{r(m_r-1)},x_1,x_2,\hdots,x_t]$. By Lemma \ref{scn}, we have
$$ s (\mathcal{G}_{n,r}^1)=\mathcal{C}_{(1)}\bigcup\mathcal{C}_{(2)}\bigcup\mathcal{C}_{(3a)}\bigcup\mathcal{C}_{(3b)}\bigcup\mathcal{C}_{(3c)}$$
Therefore,
$$ \Delta_s (\mathcal{G}_{n,r}^1)=\Big\langle\hat{E}_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}=E\backslash \{e_{j_1i_1},e_{j_2i_2},\hdots,e_{j_ri_r}\}\mid \hat{E}_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}\in s (\mathcal{G}_{n,r}^1)\Big\rangle$$
and hence we can write,
$$ I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{n,r}^1))=\Big( x_{\hat{E}_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}}\mid \hat{E}_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}\in s (\mathcal{G}_{n,r}^1)\Big).$$
Here $ I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{n,r}^1))$ is a pure
monomial ideal generated in degree $n-r$, where
$x_{\hat{E}_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}}$ denotes the product of all the
variables in $S$ except $x_{j_1i_1},x_{j_2i_2},\hdots,x_{j_ri_r}$.
Now we will show that $
I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{n,r}^1))$ has quasi-linear
quotients with respect to the following generating system:\\
$\{x_{\hat{E}_{(11,21,\hdots,r1)}}\},\{x_{\hat{E}_{(11,21,\hdots,(r-1)1,j_ri_r)}}\mid i_r\neq 1\},\{x_{\hat{E}_{(11,21,\hdots,(r-2)1,(r-1)i_{r-1},j_ri_r)}}\mid i_{r-1}\neq 1\},\\
\{x_{\hat{E}_{(11,21,\hdots,(r-3)1,(r-2)i_{r-2},j_{r-1}i_{r-1},j_ri_r)}}\mid i_{r-2}\neq 1\},\hdots ,\{x_{\hat{E}_{(11,2i_2,j_3i_3\hdots,j_ri_r)}}\mid i_2\neq 1\},\\
\{x_{\hat{E}_{(1i_1,j_2i_2,\hdots,j_ri_r)}}\mid i_1\neq 1\}$\\
Let us put
\\$\begin{array}{c}
C_{(11,21,\hdots,(r-1)1,j_ri_r)}=\{x_{\hat{E}_{(11,21,\hdots,(r-1)1,j_ri_r)}}\mid i_r\neq 1\}, \\
C_{(11,21,\hdots,(r-2)1,(r-1)i_{r-1},j_ri_r)}=\{x_{\hat{E}_{(11,21,\hdots,(r-2)1,(r-1)i_{r-1},j_ri_r)}}\mid i_{r-1}\neq 1\}, \\
\vdots \\
C_{(1i_1,j_2i_2,\hdots,j_ri_r)}=\{x_{\hat{E}_{(1i_1,j_2i_2,\hdots,j_ri_r)}}\mid i_1\neq 1\}.
\end{array}$\\
Also, for any $C_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}$, denote by $\bar{C}_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}$ the collection of all the generators which precede $C_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}$ in the above order. We will show that
$$(\bar{C}_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}):( x_{\hat{E}_{(j_1i_1,j_2i_2,\hdots,j_ri_r)}})$$
contains at least one linear generator.\\
Now, for any generator $x_{\hat{E}_{(11,\hdots,(k-1)1,j_ki_k,\hdots,j_ri_r)}}$, the above system of generators guarantees the existence of a generator $x_{\hat{E}_{(11,\hdots,(k-1)1,j_\alpha i_\alpha,j_{k+1}i_{k+1},\hdots,j_ri_r)}}$ in $\bar{C}_{(11,\hdots,(k-1)1,j_ki_k,\hdots,j_ri_r)}$ such that $j_\alpha i_\alpha\neq j_ki_k$. Therefore, using the definition of the colon ideal it is easy to see that
$$(\bar{C}_{(11,\hdots,(k-1)1,j_ki_k,\hdots,j_ri_r)}):(x_{\hat{E}_{(11,\hdots,(k-1)1,j_ki_k,\hdots,j_ri_r)}})$$
contains the linear generator $x_{j_ki_k}$. Hence $I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{n,r}^1))$ has quasi-linear quotients, as required.
\end{proof}
We conclude this section with an example.
\begin{Example}
\em{For the graph $\mathcal{G}_{10,2}^1$ given in Fig.~1,
the facet ideal of the spanning simplicial complex is
$$I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{10,2}^1))=(x_{11,21},C_{11,j_2i_2},C_{1i_1,j_2i_2}),$$
where $C_{11,j_2i_2}=x_{11,22},x_{11,23},x_{11,12},x_{11,13}$ and $C_{1i_1,j_2i_2}=x_{12,21},x_{12,22},x_{12,23},x_{13,21},x_{13,22},x_{13,23}$.
It is easy to see that $I_{\mathcal{F}}(\Delta_s(\mathcal{G}_{10,2}^1))$ has quasi-linear quotients with respect to the ordering given to its generators (by the above theorem). Hence the face ring of $\Delta_s(\mathcal{G}_{10,2}^1)$ is Cohen-Macaulay.}
\end{Example}
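The quasi-linear quotients condition itself is easy to test mechanically for square-free monomial ideals: writing each generator as the set of variables dividing it, the colon ideal $(m_1,\hdots,m_{i-1}):(m_i)$ is generated by the set differences $m_j\setminus m_i$, so the condition holds precisely when some earlier generator differs from $m_i$ in exactly one variable. The following Python sketch (an illustrative check only; the graph, two $4$-cycles sharing one edge, and the chosen ordering of generators are our own assumptions in the spirit of the theorem) applies this test to the facet ideal of the corresponding spanning simplicial complex.
\begin{verbatim}
# Quasi-linear quotients test for a square-free monomial ideal, applied to
# the facet ideal of the spanning simplicial complex of a two-cycle graph.
from itertools import combinations

def has_quasi_linear_quotients(generators):
    """generators: list of frozensets of variables, in the order tested."""
    for i in range(1, len(generators)):
        if min(len(generators[j] - generators[i]) for j in range(i)) != 1:
            return False
    return True

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 4), (4, 5), (5, 0)]
paths = [{(0, 1)}, {(1, 2), (2, 3), (3, 0)}, {(1, 4), (4, 5), (5, 0)}]
# Removing two edges leaves a spanning tree iff they lie on different paths
# between the two branch vertices.
removed_pairs = [set(p) for p in combinations(edges, 2)
                 if not any(set(p) <= path for path in paths)]
# Generators whose facet omits the common edge (0, 1) are listed first,
# mimicking the ordering used in the proof of the theorem.
removed_pairs.sort(key=lambda p: (0, 1) not in p)
generators = [frozenset(e for e in edges if e not in p) for p in removed_pairs]
print(has_quasi_linear_quotients(generators))   # expected: True
\end{verbatim}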
\end{document}
\begin{document}
\title{Spectral Methods for Parameterized Matrix Equations}
\begin{abstract}
We apply polynomial approximation methods --- known in the numerical PDEs
context as \emph{spectral methods} --- to approximate the vector-valued
function that satisfies a linear system of equations where the matrix
and the right hand side depend on a parameter.
We derive both an interpolatory pseudospectral method and a residual-minimizing
Galerkin method, and we show how each can be interpreted as solving a
truncated infinite system of equations; the difference between the two methods
lies in where the truncation occurs. Using classical theory, we derive
asymptotic error estimates related to the region of analyticity of the solution,
and we present a practical residual error estimate. We verify the results with
two numerical examples.
\end{abstract}
\begin{keywords}
parameterized systems, spectral methods
\end{keywords}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{P.~G. CONSTANTINE, D.~F. GLEICH, AND G.~IACCARINO}{SPECTRAL METHODS FOR MATRIX
EQUATIONS}
\section{Introduction}
We consider a system of linear equations where the elements of the matrix of
coefficients and right hand side depend analytically on a parameter.
Such systems often arise as an intermediate step within computational methods
for engineering models which depend on one or more parameters. A large class of
models employ such parameters to represent uncertainty in the input quantities;
examples include PDEs with random
inputs~\cite{Babuska04,Frauenfelder2005,Xiu02}, image deblurring
models~\cite{Chung08}, and noisy inverse problems~\cite{Chandrasekaran98}.
Other examples of parameterized linear systems occur in electronic circuit
design~\cite{Li2009}, applications of
PageRank~\cite{Brezinski06,Constantine07}, and dynamical
systems~\cite{Dieci03}. Additionally, we note a recent rational
interpolation scheme proposed by Wang et al.~\cite{Wang08} where each
evaluation of the interpolant involves a constrained least-squares problem that
depends on the point of evaluation. Parameterized linear operators have been
analyzed in their own right in the context of perturbation theory; the standard
reference for this work is Kato~\cite{Kato80}.
In our case, we are interested in approximating the vector-valued function
that satisfies the parameterized matrix equation. We will analyze the use of
polynomial approximation methods, which have evolved under the heading
``spectral methods'' in the context of numerical methods for
PDEs~\cite{Boyd01,Canuto06,Hesthaven07}. In their most basic form, these
methods are characterized by a global approximation of the function of interest by a finite
series of orthogonal (algebraic or trigonometric) polynomials. For smooth
functions, these methods converge geometrically, which is the primary reason for their popularity. The
use of spectral methods for parameterized equations is not unprecedented. In
fact, the authors were motivated primarily by the so-called polynomial chaos
methods~\cite{Ghanem91,Xiu02} and related work~\cite{Babuska04,Babuska05,Xiu05}
in the burgeoning field of uncertainty quantification. There has been some work
in the linear algebra community analyzing the fully discrete problems that
arise in this context~\cite{Ernst09,Powell08,Elman05}, but we know of no
existing work addressing the more general problem of parameterized matrix
equations.
There is an ongoing debate in spectral methods communities surrounding the
relative advantages of Galerkin methods versus pseudospectral methods. In the
case of parameterized matrix equations, the interpolatory
pseudospectral methods only require the solution of the parameterized model
evaluated at a discrete set of points,
which makes parallel implementation straightforward. In contrast,
the Galerkin method requires the solution of a coupled linear system whose
dimension is many times larger than the original parameterized set of
equations. We offer insight into this contest by establishing a fair ground
for rigorous comparison and deriving a concrete relationship between the two
methods.
In this paper, we will first describe the parameterized matrix equation and
characterize its solution in section~\ref{sec:parmmats}. We then derive a spectral
Galerkin method and a pseudospectral method for approximating the solution to
the parameterized matrix equation in section~\ref{sec:spectral}. In
section~\ref{sec:connections}, we analyze the relationship between these methods
using the symmetric, tridiagonal Jacobi matrices -- techniques which are
reminiscent of the analysis of Gauss quadrature by Golub and
Meurant~\cite{Golub94} and Gautschi~\cite{Gautschi02}. We derive error
estimates for the methods that relate the geometric rate of convergence to the
size of the region of analyticity of the solution in section~\ref{sec:error},
and we conclude with simple numerical examples in section~\ref{sec:examples}.
See table~\ref{tab:notation} for a list of notational conventions, and note
that \emph{all index sets begin at 0 to remain consistent with the ordering of
a set of polynomials by their largest degree.}
\begin{table}
\centering
\label{tab:notation}
\caption{We attempt to use a consistent and clear notation throughout the
paper. This table details the notational conventions, which we use unless
otherwise noted. Also, all indices begin at 0. }
\begin{tabular}{clc}
\toprule
\textbf{Notation} & \textbf{Meaning}\\
\midrule
$A(s)$ & a square matrix-valued function of a parameter $s$\\
$b(s)$ & a vector-valued function of the parameter $s$\\
$\mathbf{A}$ & a constant matrix\\
$\mathbf{b}$ & a constant vector\\
$\ip{\cdot}$ & the integral with respect to a given weight function\\
$\ipm{\cdot}{n}$ & the integral $\ip{\cdot}$ approximated by an $n$-point Gauss
quadrature rule\\
$\submat{\mathbf{M}}{r\times r}$ & the first $r\times r$ principal minor of a matrix
$\mathbf{M}$\\
\bottomrule
\end{tabular}
\end{table}
\section{Parameterized Matrix Equations}
\label{sec:parmmats}
In this section, we define the specific problem we will study and characterize
its solution. We consider problems that depend on a single parameter $s$ that
takes values in the finite interval $[-1,1]$. Assume that the
interval $[-1,1]$ is equipped with a positive scalar weight function $w(s)$ such
that all moments exist, i.e.
\begin{equation}
\ip{s^k}\equiv\int_{-1}^1 s^k w(s)\,ds<\infty,\qquad k=1,2,\dots,
\end{equation}
and the integral of $w(s)$ is equal to 1. We will use the bracket notation to
denote an integral against the given weight function. In a stochastic context,
one may interpret this as an expectation operator where $w(s)$ is the density
function of the random variable $s$.
Let the $\mathbb{R}^N$-valued function $x(s)$ satisfy the linear system of
equations
\begin{equation}
\label{eq:main}
A(s)x(s)=b(s),\qquad s\in[-1,1]
\end{equation}
for a given $\mathbb{R}^{N\times N}$-valued function $A(s)$ and
$\mathbb{R}^N$-valued function $b(s)$. We assume that both $A(s)$ and $b(s)$
are analytic in a region containing $[-1,1]$, which implies that they have a
convergent power series
\begin{equation}
\label{eq:powerseries}
A(s)=\mathbf{A}_0+\mathbf{A}_1s+\mathbf{A}_2s^2+\cdots,\qquad b(s)=\mathbf{b}_0+\mathbf{b}_1s+\mathbf{b}_2s^2+\cdots,
\end{equation}
for some constant matrices $\mathbf{A}_i$ and constant vectors $\mathbf{b}_i$. Additionally,
we assume that $A(s)$ is bounded away from singularity for all $s\in[-1,1]$.
This implies that we can write $x(s)=A^{-1}(s)b(s)$.
The elements of the solution $x(s)$ can also be written using Cramer's
rule~\cite[Chapter 6]{Meyer00} as a ratio of determinants.
\begin{equation}
\label{eq:cramer}
x_i(s) = \frac{\det(A_i(s))}{\det(A(s))}, \qquad i=0,\dots,N-1,
\end{equation}
where $A_i(s)$ is the parameterized matrix formed by replacing the $i$th column
of $A(s)$ by $b(s)$. From equation \eqref{eq:cramer} and the invertibility of
$A(s)$, we can conclude that $x(s)$ is analytic in a region containing
$[-1,1]$.
Equation \eqref{eq:cramer} reveals the underlying structure
of the solution as a function of $s$. If $A(s)$ and $b(s)$ depend polynomially
on $s$, then \eqref{eq:cramer} tells us that $x(s)$ is a rational function. Note
also that this structure is independent of the particular weight function $w(s)$.
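As a point of reference for the methods that follow, a parameterized matrix equation can be sampled directly at any fixed value of $s$. The following Python sketch (a small $2\times 2$ example of our own choosing, not one of the numerical examples considered later) sets up a linear-in-$s$ matrix with a polynomial right hand side and evaluates the exact solution $x(s)=A^{-1}(s)b(s)$ at a few parameter values.
\begin{verbatim}
# A small parameterized matrix equation A(s) x(s) = b(s) on [-1, 1]; by the
# Cramer's rule representation above, each component of x(s) is a rational
# function of s when A(s) and b(s) are polynomial in s.
import numpy as np

A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
A1 = np.array([[0.0, 0.5], [0.5, 0.0]])

def A(s):
    return A0 + s * A1            # analytic (in fact linear) dependence on s

def b(s):
    return np.array([1.0, s])     # polynomial right hand side

def x(s):
    return np.linalg.solve(A(s), b(s))   # exact solution at one value of s

for s in (-1.0, 0.0, 1.0):
    print(s, x(s))
\end{verbatim}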
\section{Spectral Methods}
\label{sec:spectral}
In this section, we derive the spectral methods we use to approximate the
solution $x(s)$. We begin with a brief review of the relevant theory of
orthogonal polynomials, Gaussian quadrature, and Fourier series. We include this
section primarily for the sake of notation and refer the reader to a standard
text on orthogonal polynomials~\cite{Szego39} for further theoretical details
and~\cite{Gautschi04} for a modern perspective on computation.
\subsection{Orthogonal Polynomials and Gaussian Quadrature}
Let $\mathbb{P}$ be the space of real polynomials defined on $[-1,1]$,
and let $\mathbb{P}_n\subset\mathbb{P}$ be the space of polynomials of degree
at most $n$. For any $p$, $q$ in $\mathbb{P}$, we define the inner product as
\begin{equation}
\label{eq:innerproduct}
\ip{pq} \equiv \int_{-1}^1 p(s)q(s)w(s)\,ds.
\end{equation}
We define a norm on $\mathbb{P}$ as $\norm{p}{L^2} = \sqrt{\ip{p^2}}$, which is
the standard $L^2$ norm for the given weight $w(s)$. Let $\{\pi_k(s)\}$ be the
set of polynomials that are orthonormal with respect to
$w(s)$, i.e.~$\ip{\pi_i\pi_j}=\delta_{ij}$. It is known that $\{\pi_k(s)\}$
satisfy the three-term recurrence relation
\begin{equation}
\label{eq:3term}
\beta_{k+1}\pi_{k+1}(s) = (s-\alpha_k)\pi_k(s) - \beta_k\pi_{k-1}(s), \qquad
k=0,1,2,\dots,
\end{equation}
with $\pi_{-1}(s)=0$ and $\pi_0(s)=1$. If we consider only the
first $n$ equations, then we can rewrite \eqref{eq:3term} as
\begin{equation}
s\pi_k(s) =
\beta_k\pi_{k-1}(s)+\alpha_k\pi_k(s)+\beta_{k+1}\pi_{k+1}(s),\qquad
k=0,1,\dots,n-1.
\end{equation}
Setting $\mbox{\boldmath$\pi$}_n(s) =
[\pi_0(s),\pi_1(s),\dots,\pi_{n-1}(s)]^T$, we can write this conveniently in
matrix form as
\begin{equation}
\label{eq:mat3term}
s\mbox{\boldmath$\pi$}_n(s) = \mathbf{J}_n\mbox{\boldmath$\pi$}_n(s) + \beta_{n}\pi_{n}(s)\mathbf{e}_{n}
\end{equation}
where $\mathbf{e}_n$ is a vector of zeros with a one in the last entry, and $\mathbf{J}_n$
(known as the \emph{Jacobi matrix}) is a symmetric, tridiagonal matrix defined
as
\begin{equation}
\label{eq:jacobi}
\mathbf{J}_n =
\begin{bmatrix}
\alpha_0 & \beta_1& & & \\
\beta_1 & \alpha_1 & \beta_2 & & \\
& \ddots & \ddots & \ddots & \\
& & \beta_{n-2} & \alpha_{n-2} & \beta_{n-1}\\
& & & \beta_{n-1} & \alpha_{n-1}
\end{bmatrix}.
\end{equation}
The zeros $\{\lambda_i\}$ of $\pi_{n}(s)$ are the eigenvalues of $\mathbf{J}_n$ and
$\mbox{\boldmath$\pi$}_n(\lambda_i)$ are the corresponding eigenvectors; this follows directly
from \eqref{eq:mat3term}.
Let $\mathbf{Q}_n$ be the orthogonal matrix of eigenvectors of $\mathbf{J}_n$. Then we write
the eigenvalue decomposition of $\mathbf{J}_n$ as
\begin{equation}
\label{eq:eigJ}
\mathbf{J}_n = \mathbf{Q}_n\mathbf{\Lambda}_n\mathbf{Q}_n^T.
\end{equation}
It is known (cf.~\cite{Gautschi04}) that the eigenvalues $\{\lambda_i\}$ are the
familiar Gaussian quadrature points associated with the weight function
$w(s)$. The quadrature weight $\nu_i$ corresponding to $\lambda_i$ is equal to
the square of the first component of the eigenvector associated with
$\lambda_i$, i.e.
\begin{equation}
\mathbf{Q}_n(0,i)^2 \;=\; \nu_i.
\end{equation}
The weights $\{\nu_i\}$ are known to be strictly positive. We will use
these facts repeatedly in the sequel. For an integrable scalar function $f(s)$,
we can approximate its integral by an $n$-point Gaussian quadrature rule, which
is a weighted sum of function evaluations,
\begin{equation}
\label{eq:gq}
\int_{-1}^1f(s)w(s)\,ds = \sum_{i=0}^{n-1} f(\lambda_i)\nu_i + R_n(f).
\end{equation}
If $f\in\mathbb{P}_{2n-1}$, then $R_n(f)=0$; that is to say the \emph{degree of
exactness} of the Gaussian quadrature rule is $2n-1$. We use the notation
\begin{equation}
\label{eq:gqnote}
\ipm{f}{n}\equiv \sum_{i=0}^{n-1} f(\lambda_i)\nu_i
\end{equation}
to denote the Gaussian quadrature rule. This is a discrete approximation to the
true integral.
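In computations we obtain the Gauss points and weights directly from the Jacobi matrix, as the following Python sketch illustrates (an illustrative implementation, not code from this paper; the recurrence coefficients shown are those of the normalized Legendre weight $w(s)=1/2$, and a different weight only changes $\alpha_k$ and $\beta_k$).
\begin{verbatim}
# Golub-Welsch construction: Gauss points are the eigenvalues of J_n and the
# weights are the squared first components of its orthonormal eigenvectors
# (the weight function is normalized so that the weights sum to 1).
import numpy as np

def gauss_rule(n):
    """Gauss points and weights for the normalized Legendre weight on [-1, 1]."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)      # beta_k, k = 1, ..., n-1
    J = np.diag(beta, 1) + np.diag(beta, -1)  # alpha_k = 0 for this weight
    lam, Q = np.linalg.eigh(J)
    return lam, Q[0, :]**2                    # points, weights

points, weights = gauss_rule(5)
# Degree of exactness 2n - 1 = 9: s^8 is integrated exactly against w(s) = 1/2.
print(np.dot(points**8, weights), 1.0 / 9.0)
\end{verbatim}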
\subsection{Fourier Series}
The polynomials $\{\pi_k(s)\}$ form an orthonormal basis for the Hilbert space
\begin{equation}
\label{eq:hilbert}
L^2\;\equiv\;L^2_w([-1,1])\;=\;
\left\{f:[-1,1]\rightarrow\mathbb{R}\;\left|\right.\;\norm{f}{L^2}<\infty\right\}.
\end{equation}
Therefore, any $f\in L^2$ admits a convergent \emph{Fourier series}
\begin{equation}
\label{eq:fourier}
f(s) = \sum_{k=0}^\infty \ip{f\pi_k}\pi_k(s).
\end{equation}
The coefficients $\ip{f\pi_k}$ are called the \emph{Fourier coefficients}. If we
truncate the series \eqref{eq:fourier} after $n$ terms, we are left
with a polynomial of degree $n-1$ that is the best approximation polynomial in
the $L^2$ norm. In other words, if we denote
\begin{equation}
\label{eq:truncation}
P_nf(s) = \sum_{k=0}^{n-1}\ip{f\pi_k}\pi_k(s),
\end{equation}
then
\begin{equation}
\label{eq:bestapprox}
\norm{f-P_nf}{L^2} = \inf_{p\in\mathbb{P}_{n-1}}\norm{f-p}{L^2}.
\end{equation}
In fact, the error made by truncating the series is equal to the sum of squares
of the neglected coefficients,
\begin{equation}
\label{eq:neglected}
\norm{f-P_nf}{L^2}^2 = \sum_{k=n}^\infty \ip{f\pi_k}^2.
\end{equation}
These properties of the Fourier series motivate the theory and practice of spectral
methods.
We have shown that each element of the solution $x(s)$ of the
parameterized matrix equation is analytic in a region containing the closed
interval $[-1,1]$. Therefore it is continuous and bounded on $[-1,1]$, which
implies that $x_i(s)\in L^2$ for $i=0,\dots,N-1$. We can thus write the
convergent Fourier expansion for each element using vector notation as
\begin{equation}
\label{eq:solfourier}
x(s) = \sum_{k=0}^\infty \ip{x\pi_k}\pi_k(s).
\end{equation}
Note that we are abusing the bracket notation here, but this will make further
manipulations very convenient. The computational strategy is to choose a
truncation level $n-1$ and estimate the coefficients of the truncated expansion.
\subsection{Spectral Collocation}
The term \emph{spectral collocation} typically refers to the technique of
constructing a Lagrange interpolating polynomial through the exact solution
evaluated at the Gaussian quadrature points. Suppose that $\lambda_i$,
$i=0,\dots,n-1$ are the Gaussian quadrature points for the weight function $w(s)$.
We can construct an $n-1$ degree polynomial interpolant of the solution through
these points as
\begin{equation}
\label{eq:collocation}
x_{c,n}(s) \;=\; \sum_{i=0}^{n-1}x(\lambda_i)\ell_i(s) \;\equiv\;\mathbf{X}_c\mathbf{l}_n(s).
\end{equation}
The vector $x(\lambda_i)$ is the solution to the equation
$A(\lambda_i)x(\lambda_i)=b(\lambda_i)$. The $n-1$ degree polynomial
$\ell_i(s)$ is the standard Lagrange basis polynomial defined as
\begin{equation}
\label{eq:lagrange}
\ell_i(s) = \prod_{j=0,\;j\not=i}^{n-1}
\frac{s-\lambda_j}{\lambda_i-\lambda_j}.
\end{equation}
The $N\times n$ constant matrix $\mathbf{X}_c$ (the subscript $c$ is for
\emph{collocation}) has one column for each $x(\lambda_i)$, and $\mathbf{l}_n(s)$ is a
vector of the Lagrange basis polynomials.
By construction, the collocation polynomial $x_{c,n}$ interpolates the true
solution $x(s)$ at the Gaussian quadrature points. We will use this construction
to show the connection between the pseudospectral method and the Galerkin
method.
\subsection{Pseudospectral Methods}
Notice that computing the true coefficients of the Fourier expansion of $x(s)$
requires the exact solution. The essential idea of the pseudospectral method
is to approximate the Fourier coefficients of $x(s)$ by a Gaussian quadrature
rule. In other words,
\begin{equation}
\label{eq:pseudospec}
x_{p,n}(s) \;=\; \sum_{k=0}^{n-1}\ipm{x\pi_k}{n}\pi_k(s) \;\equiv\;\mathbf{X}_p\mbox{\boldmath$\pi$}_n(s),
\end{equation}
where $\mathbf{X}_p$ is an $N\times n$ constant matrix of the approximated Fourier
coefficients; the subscript $p$ is for \emph{pseudospectral}. For clarity, we
recall
\begin{equation}
\label{eq:gaussfourier}
\ipm{x\pi_k}{n} = \sum_{i=0}^{n-1}x(\lambda_i)\pi_k(\lambda_i)\nu_i,
\end{equation}
where $x(\lambda_i)$ solves $A(\lambda_i)x(\lambda_i)=b(\lambda_i)$.
In general, the number of points in the quadrature rule need not
have any relationship to the order of truncation. However, when the number of
terms in the truncated series is equal to the number of points in the
quadrature rule, the pseudospectral approximation is equivalent to the
collocation approximation. This relationship is well-known, but we include
the following lemma and theorem for use in later proofs.
\begin{lemma}
\label{lem:basischange}
Let $\mathbf{q}_0$ be the first row of $\mathbf{Q}_n$, and define $\mathbf{D}_{\mathbf{q}_0} =
\mathrm{diag}(\mathbf{q}_0)$.
The matrices $\mathbf{X}_p$ and $\mathbf{X}_c$ are related by the equation
$\mathbf{X}_p = \mathbf{X}_c\mathbf{D}_{\mathbf{q}_0}\mathbf{Q}_n^T$.
\end{lemma}
\begin{proof}
Write
\begin{align*}
\mathbf{X}_p(:,k) &= \ipm{x\pi_k}{n}\\
&= \sum_{j=0}^{n-1} x(\lambda_j)\pi_k(\lambda_j)\nu_j\\
&= \sum_{j=0}^{n-1}
\mathbf{X}_c(:,j)\frac{1}{\|\mbox{\boldmath$\pi$}_n(\lambda_j)\|_2}\frac{\pi_k(\lambda_j)}{\|\mbox{\boldmath$\pi$}_n(\lambda_j)\|_2}\\
&= \mathbf{X}_c\mathbf{D}_{\mathbf{q}_0}\mathbf{Q}_n^T(:,k)
\end{align*}
which implies $\mathbf{X}_p = \mathbf{X}_c\mathbf{D}_{\mathbf{q}_0}\mathbf{Q}_n^T$ as required.
\end{proof}
\begin{theorem}
\label{thm:pseudoequalcollocation}
The $n-1$ degree collocation approximation is equal to the $n-1$ degree
pseudospectral approximation using an $n$-point Gaussian quadrature rule, i.e.
\begin{equation}
x_{c,n}(s) = x_{p,n}(s).
\end{equation}
for all $s$.
\end{theorem}
\begin{proof}
Note that the elements of $\mathbf{q}_0$ are all
non-zero, so $\mathbf{D}_{\mathbf{q}_0}^{-1}$ exists.
Then lemma \ref{lem:basischange} implies $\mathbf{X}_c = \mathbf{X}_p\mathbf{Q}_n\mathbf{D}_{\mathbf{q}_0}^{-1}$.
Using this change of variables, we can write
\begin{equation}
x_{c,n}(s) \;=\; \mathbf{X}_c\mathbf{l}_n(s) \;=\;
\mathbf{X}_p\mathbf{Q}_n\mathbf{D}_{\mathbf{q}_0}^{-1}\mathbf{l}_n(s).
\end{equation}
Thus it is sufficient to show that $\mbox{\boldmath$\pi$}_n(s) =
\mathbf{Q}_n\mathbf{D}_{\mathbf{q}_0}^{-1}\mathbf{l}_n(s)$. Since this is just a vector of polynomials with
degree at most $n-1$, we can do this by multiplying each
element by each orthonormal basis polynomial up to order $n-1$ and
integrating. Towards this end we define $\mathbf{\Theta} \equiv \ip{\mathbf{l}_n\mbox{\boldmath$\pi$}_n^T}$.
Using the polynomial exactness of the Gaussian quadrature rule, we compute
the $i,j$ element of $\mathbf{\Theta}$.
\begin{align*}
\mathbf{\Theta}(i,j) &= \ip{l_i\pi_j}\\
&= \sum_{k=0}^{n-1}
\ell_{i}(\lambda_{k})\pi_{j}(\lambda_{k})\nu_{k}\\
&=
\frac{1}{\|\mbox{\boldmath$\pi$}_n(\lambda_{i})\|_2}
\frac{\pi_{j}(\lambda_{i})}{\|\mbox{\boldmath$\pi$}_n(\lambda_{i})\|_2}\\
&= \mathbf{Q}_n(0,i)\mathbf{Q}_n(j,i),
\end{align*}
which implies that $\mathbf{\Theta} = \mathbf{D}_{\mathbf{q}_0}\mathbf{Q}_n^T$. Therefore
\begin{align*}
\ip{\mathbf{Q}_n\mathbf{D}_{\mathbf{q}_0}^{-1}\mathbf{l}_n\mbox{\boldmath$\pi$}_n^T} &=
\mathbf{Q}_n\mathbf{D}_{\mathbf{q}_0}^{-1}\ip{\mathbf{l}_n\mbox{\boldmath$\pi$}_n^T}\\
&= \mathbf{Q}_n\mathbf{D}_{\mathbf{q}_0}^{-1}\mathbf{\Theta}\\
&= \mathbf{Q}_n\mathbf{D}_{\mathbf{q}_0}^{-1}\mathbf{D}_{\mathbf{q}_0}\mathbf{Q}_n^T\\
&= \mathbf{I}_n,
\end{align*}
which completes the proof.
\end{proof}
Some refer to the pseudospectral
method explicitly as an interpolation method~\cite{Boyd01}.
See~\cite{Hesthaven07} for an insightful interpretation in terms of a discrete
projection. Because of this property, we will freely interchange the
collocation and pseudospectral approximations when convenient in the ensuing
analysis.
The work required to compute the pseudospectral approximation is
highly dependent on the parameterized system. In general, we assume that the
computation of $x(\lambda_i)$ dominates the work; in other words, the cost
of computing Gaussian quadrature formulas is negligible compared to computing the
solution to each linear system. Then if each $x(\lambda_i)$ costs
$\mathcal{O}(N^3)$, the pseudospectral approximation with $n$ terms costs
$\mathcal{O}(nN^3)$.
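To make the construction concrete, the following Python sketch (an illustrative implementation only, reusing the small $2\times 2$ example sketched at the end of section~\ref{sec:parmmats}; it is not code from this paper) computes the pseudospectral coefficients by solving the system at the Gauss points and applying the change of basis $\mathbf{X}_p=\mathbf{X}_c\mathbf{D}_{\mathbf{q}_0}\mathbf{Q}_n^T$ from Lemma~\ref{lem:basischange}.
\begin{verbatim}
# Pseudospectral coefficients: solve A(lam_i) x = b(lam_i) at the n Gauss
# points of the normalized Legendre weight, then map the nodal solutions to
# approximate Fourier coefficients via X_p = X_c diag(q_0) Q_n^T.
import numpy as np

A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
A1 = np.array([[0.0, 0.5], [0.5, 0.0]])
A = lambda s: A0 + s * A1
b = lambda s: np.array([1.0, s])

def pseudospectral_coefficients(n):
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)
    J = np.diag(beta, 1) + np.diag(beta, -1)        # Jacobi matrix J_n
    lam, Q = np.linalg.eigh(J)
    Xc = np.column_stack([np.linalg.solve(A(s), b(s)) for s in lam])
    return Xc @ np.diag(Q[0, :]) @ Q.T              # column k multiplies pi_k

Xp = pseudospectral_coefficients(6)
print(Xp[:, 0])   # approximate mean of x(s) with respect to w(s) = 1/2
\end{verbatim}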
\subsection{Spectral Galerkin}
The spectral Galerkin method computes a finite dimensional approximation to
$x(s)$ such that each element of the equation residual is orthogonal to the
approximation space. Define
\begin{equation}
\label{eq:resid}
r(y,s) = A(s)y(s)-b(s).
\end{equation}
The finite dimensional approximation space for each component $x_i(s)$ will be
the space of polynomials of degree at most $n-1$. This space is spanned by the
first $n$ orthonormal polynomials, i.e.
$\mathrm{span}(\pi_0(s),\dots,\pi_{n-1}(s))=\mathbb{P}_{n-1}$. We seek an
$\mathbb{R}^N$-valued polynomial $x_{g,n}(s)$ of maximum degree $n-1$ such that
\begin{equation}
\label{eq:orthoresid}
\ip{r_i(x_{g,n})\pi_k}=0,\qquad i=0,\dots,N-1,\qquad k=0,\dots,n-1,
\end{equation}
where $r_i(x_{g,n})$ is the $i$th component of the residual.
We can write equations \eqref{eq:orthoresid} in matrix notation as
\begin{equation}
\label{eq:matresid}
\ip{r(x_{g,n})\mbox{\boldmath$\pi$}_n^T} = \mathbf{0}
\end{equation}
or equivalently
\begin{equation}
\label{eq:varform}
\ip{Ax_{g,n}\mbox{\boldmath$\pi$}_n^T} = \ip{b\mbox{\boldmath$\pi$}_n^T}.
\end{equation}
Since each component of $x_{g,n}(s)$ is a polynomial of degree at most $n-1$, we
can write its expansion in $\{\pi_k(s)\}$ as
\begin{equation}
\label{eq:galerkin}
x_{g,n}(s) \;=\; \sum_{k=0}^{n-1}\mathbf{x}_{g,k}\pi_k(s) \;\equiv\; \mathbf{X}_g\mbox{\boldmath$\pi$}_n(s),
\end{equation}
where $\mathbf{X}_g$ is a constant matrix of size $N\times n$; the subscript $g$ is
for \emph{Galerkin}. Then equation \eqref{eq:varform} becomes
\begin{equation}
\label{eq:varform2}
\ip{A\mathbf{X}_g\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T} = \ip{b\mbox{\boldmath$\pi$}_n^T}.
\end{equation}
Using the vec notation~\cite[Section 4.5]{GVL96}, we can rewrite
\eqref{eq:varform2} as
\begin{equation}
\label{eq:galerkinsys}
\ip{\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T\otimes A}\mathrm{vec}(\mathbf{X}_g)=\ip{\mbox{\boldmath$\pi$}_n\otimes b}.
\end{equation}
where $\mathrm{vec}(\mathbf{X}_g)$ is an $Nn\times 1$ constant vector equal to the columns of
$\mathbf{X}_g$ stacked on top of each other. The constant matrix
$\ip{\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T\otimes A}$ has size $Nn\times Nn$ and a distinct block
structure; the $i,j$ block of size $N\times N$ is equal to $\ip{\pi_i\pi_j A}$.
More explicitly,
\begin{equation}
\ip{\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T\otimes A}
=
\bmat{
\ip{\pi_0\pi_0 A} & \cdots & \ip{\pi_{0}\pi_{n-1} A} \\
\vdots & \ddots & \vdots\\
\ip{\pi_{n-1}\pi_0 A} & \cdots & \ip{\pi_{n-1}\pi_{n-1} A}
}.
\end{equation}
Similarly, the $i$th block of the $Nn\times 1$ vector $\ip{\mbox{\boldmath$\pi$}_n\otimes b}$
is equal to $\ip{b\pi_i}$, which is exactly the $i$th Fourier coefficient of
$b(s)$.
Since $A(s)$ is bounded and nonsingular for all $s\in[-1,1]$, it is
straightforward to show that $x_{g,n}(s)$ exists and is unique using the
classical Galerkin theorems presented and summarized in Brenner and
Scott~\cite[Chapter 2]{Brenner02}. This implies that $\mathbf{X}_g$ is unique, and
since $b(s)$ is arbitrary, we conclude that the matrix
$\ip{\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T\otimes A}$ is nonsingular for all finite truncations $n$.
The work required to compute the Galerkin approximation depends on how one
computes the integrals in equation \eqref{eq:galerkinsys}. If we assume that
the cost of forming the system is negligible, then the costly part of the
computation is solving the system \eqref{eq:galerkinsys}. The size of the
matrix $\ip{\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T\otimes A}$ is $Nn\times Nn$, so we expect an
operation count of $\mathcal{O}(N^3n^3)$, in general. However, many
applications beget systems with sparsity or exploitable structure that can
considerably reduce the required work.
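For the special case in which $A(s)$ depends linearly on $s$ and $b(s)$ is constant, the Galerkin matrix has a particularly simple form, since $\ip{\pi_i\pi_j A}=\delta_{ij}\mathbf{A}_0+(\mathbf{J}_n)_{ij}\mathbf{A}_1$. The following Python sketch (an illustrative assembly with matrices of our own choosing, not code from this paper) builds and solves the resulting system.
\begin{verbatim}
# Galerkin system for A(s) = A0 + s*A1 and constant b(s) = b0 with the
# normalized Legendre weight: the Nn x Nn matrix is
#   kron(I_n, A0) + kron(J_n, A1),
# and the right-hand side carries the Fourier coefficients of b, here
# (b0, 0, ..., 0).
import numpy as np

A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
A1 = np.array([[0.0, 0.5], [0.5, 0.0]])
b0 = np.array([1.0, 0.0])
N, n = 2, 6

k = np.arange(1, n)
beta = k / np.sqrt(4.0 * k**2 - 1.0)
J = np.diag(beta, 1) + np.diag(beta, -1)              # Jacobi matrix J_n

K = np.kron(np.eye(n), A0) + np.kron(J, A1)           # <pi pi^T tensor A>
rhs = np.zeros(N * n)
rhs[:N] = b0                                          # <pi tensor b>
Xg = np.linalg.solve(K, rhs).reshape(n, N).T          # column k multiplies pi_k

print(Xg[:, 0])   # Galerkin approximation of the mean of x(s)
\end{verbatim}
Since this $A(s)$ has degree one and $b(s)$ has degree zero, Corollary~\ref{cor:equiv} below implies that these coefficients coincide with the pseudospectral coefficients of the same order.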
\subsection{Summary}
We have discussed two classes of spectral methods: (i) the
interpolatory pseudospectral method which approximates the truncated Fourier
series of $x(s)$ by using a Gaussian quadrature rule to approximate each Fourier
coefficient, and (ii) the Galerkin projection method which finds an
approximation in a finite dimensional subspace such that the residual
$A(s)x_{g,n}(s)-b(s)$ is orthogonal to the approximation space. In general, the
$n$-term pseudospectral approximation requires $n$ solutions of the original
parameterized matrix equation \eqref{eq:main} evaluated at the Gaussian quadrature
points, while the Galerkin method requires the solution of the
coupled linear system of equations \eqref{eq:galerkinsys} that is $n$ times as
large as the original parameterized matrix equation. A rough operation count
for the pseudospectral and Galerkin approximations is $\mathcal{O}(nN^3)$ and
$\mathcal{O}(n^3N^3)$, respectively.
Before discussing asymptotic error estimates, we first derive some interesting
and useful connections between these two classes of methods. In particular, we
can interpret each method as a set of functions acting on the infinite Jacobi
matrix for the weight function $w(s)$; the difference between the methods lies
in where each truncates the infinite system of equations.
\section{Connections Between Pseudospectral and Galerkin}
\label{sec:connections}
We begin with a useful lemma for
representing a matrix of Gauss quadrature integrals in terms of functions of
the Jacobi matrix.
\begin{lemma}
\label{lem:jacpoly}
Let $f(s)$ be a scalar function analytic in a region containing $[-1,1]$. Then
$\ipm{f\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T}{n} = f(\mathbf{J}_n)$.
\end{lemma}
\begin{proof}
We examine the $i,j$ element of the $n\times n$ matrix $f(\mathbf{J}_n)$.
\begin{align*}
\mathbf{e}_i^T f(\mathbf{J}_n) \mathbf{e}_j &= \mathbf{e}_i^T\mathbf{Q}_n f(\mathbf{\Lambda}_n) \mathbf{Q}_n^T\mathbf{e}_j\\
&= \mathbf{q}_i^T f(\mathbf{\Lambda}_n) \mathbf{q}_j\\
&= \sum_{k=0}^{n-1} f(\lambda_k)
\frac{\pi_i(\lambda_{k})}{\|\mbox{\boldmath$\pi$}_n(\lambda_{k})\|_2}
\frac{\pi_j(\lambda_{k})}{\|\mbox{\boldmath$\pi$}_n(\lambda_{k})\|_2}\\
&= \sum_{k=0}^{n-1} f(\lambda_k) \pi_i(\lambda_k)\pi_j(\lambda_k)\nu_k\\
&=\ipm{f\pi_i\pi_j}{n},
\end{align*}
which completes the proof.
\end{proof}
Note that Lemma \ref{lem:jacpoly} generalizes Theorem 3.4 in~\cite{Golub94}.
With this in the arsenal, we can prove the following theorem
relating pseudospectral to Galerkin.
\begin{theorem}
\label{thm:pseudogalerkin}
The pseudospectral solution is equal to an approximation of the Galerkin
solution where each integral in equation \eqref{eq:galerkinsys} is
approximated by an $n$-point Gauss quadrature formula. In other words, $\mathbf{X}_p$
solves
\begin{equation}
\label{eq:pseudospecsys}
\ipm{\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T\otimes A}{n}\mathrm{vec}(\mathbf{X}_p) = \ipm{\mbox{\boldmath$\pi$}_n\otimes b}{n}.
\end{equation}
\end{theorem}
\begin{proof}
Define the $N\times n$ matrix $\mathbf{B}_c=[b(\lambda_0) \cdots b(\lambda_{n-1})]$.
Using the power series expansion of $A(s)$ (equation
\eqref{eq:powerseries}), we can write the matrix at each
collocation point as
\begin{equation}
A(\lambda_k) = \sum_{i=0}^\infty\mathbf{A}_i\lambda_k^i
\end{equation}
for $k=0,\dots,n-1$. We collect these into one large block-diagonal system by
writing
\begin{equation}
\label{eq:pf1eq1}
\left(\sum_{i=0}^\infty\mathbf{\Lambda}_n^i\otimes\mathbf{A}_i\right)\mathrm{vec}(\mathbf{X}_c)
=
\mathrm{vec}(\mathbf{B}_c).
\end{equation}
Let $\mathbf{I}$ be the $N\times N$ identity matrix. Premultiply \eqref{eq:pf1eq1} by
$(\mathbf{D}_{\mathbf{q}_0}\otimes\mathbf{I})$, and by commutativity of diagonal matrices and the mixed
product property, it becomes
\begin{equation}
\label{eq:pf1eq2}
\left(\sum_{i=0}^\infty\mathbf{\Lambda}_n^i\otimes\mathbf{A}_i\right)(\mathbf{D}_{\mathbf{q}_0}\otimes\mathbf{I})\mathrm{vec}(\mathbf{X}_c)
=
(\mathbf{D}_{\mathbf{q}_0}\otimes\mathbf{I})\mathrm{vec}(\mathbf{B}_c).
\end{equation}
Premultiplying \eqref{eq:pf1eq2} by $(\mathbf{Q}_n\otimes\mathbf{I})$, properly inserting
$(\mathbf{Q}_n^T\otimes\mathbf{I})(\mathbf{Q}_n\otimes\mathbf{I})$ on the left hand side, and
using the eigenvalue decomposition \eqref{eq:eigJ}, this becomes
\begin{equation}
\label{eq:pf1eq4}
\left(\sum_{i=0}^\infty\mathbf{J}_n^i\otimes\mathbf{A}_i\right)(\mathbf{Q}_n\otimes\mathbf{I})(\mathbf{D}_{\mathbf{q}_0}\otimes\mathbf{I})\mathrm{vec}(\mathbf{X}_c)
=
(\mathbf{Q}_n\otimes\mathbf{I})(\mathbf{D}_{\mathbf{q}_0}\otimes\mathbf{I})\mathrm{vec}(\mathbf{B}_c).
\end{equation}
But note that Lemma \ref{lem:basischange} implies
\begin{equation}
(\mathbf{Q}_n\otimes\mathbf{I})(\mathbf{D}_{\mathbf{q}_0}\otimes\mathbf{I})\mathrm{vec}(\mathbf{X}_c) = \mathrm{vec}(\mathbf{X}_p).
\end{equation}
Using an argument identical to the proof of Lemma \ref{lem:basischange}, we can
write
\begin{equation}
(\mathbf{Q}_n\otimes\mathbf{I})(\mathbf{D}_{\mathbf{q}_0}\otimes\mathbf{I})\mathrm{vec}(\mathbf{B}_c) = \ipm{\mbox{\boldmath$\pi$}_n\otimes b}{n}
\end{equation}
Finally, using Lemma \ref{lem:jacpoly}, equation \eqref{eq:pf1eq4} becomes
\begin{equation}
\label{eq:pf1eq5}
\ipm{\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T\otimes A}{n}\mathrm{vec}(\mathbf{X}_p)
=
\ipm{\mbox{\boldmath$\pi$}_n\otimes b}{n}.
\end{equation}
as required.
\end{proof}
Theorem \ref{thm:pseudogalerkin} begets a corollary giving conditions for
equivalence between Galerkin and pseudospectral approximations.
\begin{corollary}
\label{cor:equiv}
If $b(s)$ contains only polynomials of maximum degree $m_b$ and $A(s)$ contains
only polynomials of maximum degree 1 (i.e. linear functions of $s$), then
$x_{g,n}(s)=x_{p,n}(s)$ for $n\geq m_b$ for all $s\in[-1,1]$.
\end{corollary}
\begin{proof}
The parameterized matrix $\mbox{\boldmath$\pi$}_n(s)\mbox{\boldmath$\pi$}_n(s)^T\otimes A(s)$ has polynomials of
degree at most $2n-1$. Thus, by the polynomial exactness of the Gauss quadrature
formulas,
\begin{equation}
\ipm{\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T\otimes A}{n} = \ip{\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T\otimes A},
\qquad
\ipm{\mbox{\boldmath$\pi$}_n\otimes b}{n} = \ip{\mbox{\boldmath$\pi$}_n\otimes b}.
\end{equation}
Therefore $\mathbf{X}_g = \mathbf{X}_p$, and consequently
\begin{equation}
x_{g,n}(s)\;=\;\mathbf{X}_g\mbox{\boldmath$\pi$}_n(s) \;=\; \mathbf{X}_p\mbox{\boldmath$\pi$}_n(s)\;=\;x_{p,n}(s).
\end{equation}
as required.
\end{proof}
By taking the transpose of equation \eqref{eq:varform2} and following the
steps of the proof of theorem \ref{thm:pseudogalerkin}, we get another
interesting corollary.
\begin{corollary}
\label{cor:aj}
First define $A(\mathbf{J}_n)$ to be the $Nn\times Nn$ constant matrix with the $i,j$
block of size $n\times n$ equal to $A(i,j)(\mathbf{J}_n)$. Next define $b(\mathbf{J}_n)$ to be
the $Nn\times n$ constant matrix with the $i$th $n\times n$ block equal to
$b_i(\mathbf{J}_n)$. Then the pseudospectral coefficients $\mathbf{X}_p$ satisfy
\begin{equation}
\label{eq:pseudospectranspose}
A(\mathbf{J}_n)\mathrm{vec}(\mathbf{X}_p^T)=b(\mathbf{J}_n)\mathbf{e}_0,
\end{equation}
where $\mathbf{e}_0=[1,0,\dots,0]^T$ is an $n$-vector.
\end{corollary}
Theorem \ref{thm:pseudogalerkin} leads to a fascinating connection between the
matrix operators in the Galerkin and pseudospectral methods, namely that the
matrix in the Galerkin system is equal to a submatrix of the matrix from a
sufficiently larger pseudospectral computation. This is the key to
understanding the relationship between the Galerkin and pseudospectral
approximations. In the following
lemma, we denote the first $r\times r$ principal minor of a matrix $\mathbf{M}$ by
$[\mathbf{M}]_{r\times r}$.
\begin{lemma}
\label{lem:galerkinsubmat}
Let $A(s)$ contain only polynomials of degree at most $m_a$, and let $b(s)$
contain only polynomials of degree at most $m_b$. Define
\begin{equation}
\label{eq:mdef}
m\equiv m(n)\geq
\max\left(
\left\lceil\frac{m_a+2n-1}{2}\right\rceil
\;,\;
\left\lceil\frac{m_b+n}{2}\right\rceil
\right)
\end{equation}
Then
\begin{align*}
\ip{\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T\otimes A} &= \submat{\ipm{\mbox{\boldmath$\pi$}_m\mbox{\boldmath$\pi$}_m^T\otimes
A}{m}}{Nn\times Nn}\\
\ip{\mbox{\boldmath$\pi$}_n\otimes b} &= \submat{\ipm{\mbox{\boldmath$\pi$}_m\otimes b}{m}}{Nn\times 1}.
\end{align*}
\end{lemma}
\begin{proof}
The integrands of the matrix $\ip{\mbox{\boldmath$\pi$}_n\mbox{\boldmath$\pi$}_n^T\otimes A}$ are polynomials of
degree at most $2n+m_a-2$. Therefore they can be integrated exactly with a Gauss
quadrature rule of order $m$. A similar argument holds for $\ip{\mbox{\boldmath$\pi$}_n\otimes
b}$.
\end{proof}
Combining Lemma \ref{lem:galerkinsubmat} with corollary \ref{cor:aj}, we get
the following proposition relating the Galerkin coefficients to the Jacobi
matrices for $A(s)$ and $b(s)$ that depend polynomially on
$s$.
\begin{proposition}
\label{prop:submatsys}
Let $m$, $m_a$, and $m_b$ be defined as in Lemma \ref{lem:galerkinsubmat}.
Define $[A]_n(\mathbf{J}_m)$ to be the $Nn\times Nn$ constant matrix with the $i,j$
block of size $n\times n$ equal to $[A(i,j)(\mathbf{J}_m)]_{n\times n}$ for
$i,j=0,\dots,N-1$.
Define $[b]_n(\mathbf{J}_m)$ to be the $Nn\times n$ constant matrix
with the $i$th $n\times n$ block equal to $[b_i(\mathbf{J}_m)]_{n\times n}$ for
$i=0,\dots,N-1$. Then the Galerkin coefficients $\mathbf{X}_g$ satisfy
\begin{equation}
[A]_n(\mathbf{J}_m)\mathrm{vec}(\mathbf{X}_g^T)=[b]_n(\mathbf{J}_m)\mathbf{e}_0,
\end{equation}
where $\mathbf{e}_0=[1,0,\dots,0]^T$ is an $n$-vector.
\end{proposition}
Notice that Proposition \ref{prop:submatsys} provides a way to compute the
exact matrix for the Galerkin computation without any symbolic
manipulation, but beware that $m$ depends on both $n$ and the largest degree of
polynomial in $A(s)$. Written in this form, we have no trouble taking $m$ to
infinity, and we arrive at the main theorem of this section.
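Before stating the theorem, we illustrate the finite-$m$ computation of Proposition \ref{prop:submatsys} with a short Python sketch (illustrative only; the polynomial data, a degree-two $A(s)$ and a degree-one $b(s)$ under the normalized Legendre weight, are assumptions made for the example and are not taken from this paper). Each scalar entry of $A(s)$ and $b(s)$ is evaluated at the Jacobi matrix $\mathbf{J}_m$, the leading $n\times n$ blocks are retained, and the resulting system is solved for the Galerkin coefficients.
\begin{verbatim}
# Exact Galerkin matrix via the truncated Jacobi matrix:
# block (i, j) of [A]_n(J_m) is [A(i,j)(J_m)]_{n x n}, and the right-hand
# side is [b]_n(J_m) e_0.
import numpy as np

A_coeffs = [np.array([[3.0, 0.5], [0.5, 2.0]]),     # A(s) = sum_d A_d s^d
            np.array([[0.2, 0.0], [0.0, 0.1]]),
            np.array([[0.1, 0.0], [0.0, 0.1]])]
b_coeffs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
N, n = 2, 6
m_a, m_b = len(A_coeffs) - 1, len(b_coeffs) - 1
m = max((m_a + 2 * n - 1 + 1) // 2, (m_b + n + 1) // 2)   # smallest valid m

k = np.arange(1, m)
beta = k / np.sqrt(4.0 * k**2 - 1.0)
Jm = np.diag(beta, 1) + np.diag(beta, -1)                 # Jacobi matrix J_m
powers = [np.linalg.matrix_power(Jm, d) for d in range(max(m_a, m_b) + 1)]

K = np.zeros((N * n, N * n))
rhs = np.zeros(N * n)
for i in range(N):
    for j in range(N):
        block = sum(A_coeffs[d][i, j] * powers[d] for d in range(m_a + 1))
        K[i * n:(i + 1) * n, j * n:(j + 1) * n] = block[:n, :n]
    bi = sum(b_coeffs[d][i] * powers[d] for d in range(m_b + 1))
    rhs[i * n:(i + 1) * n] = bi[:n, 0]

coeffs = np.linalg.solve(K, rhs)          # vec(X_g^T)
Xg = coeffs.reshape(N, n)                 # row i = coefficients of x_i(s)
print(Xg[:, 0])                           # Galerkin mean of x(s)
\end{verbatim}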
\begin{theorem}
\label{thm:connections}
Using the notation of Proposition \ref{prop:submatsys} and corollary
\ref{cor:aj}, the coefficients $\mathbf{X}_g$ of the $n$-term Galerkin approximation of
the solution $x(s)$ to equation \eqref{eq:main} satisfy the linear system of
equations
\begin{equation}
[A]_n(\mathbf{J}_\infty)\mathrm{vec}(\mathbf{X}_g^T)=[b]_n(\mathbf{J}_\infty)\mathbf{e}_0,
\end{equation}
where $\mathbf{e}_0=[1,0,\dots,0]^T$ is an $n$-vector.
\end{theorem}
\begin{proof}
Let $A^{(m_a)}(s)$ be the truncated power series of $A(s)$ up to order $m_a$,
and let $b^{(m_b)}(s)$ be the truncated power series of $b(s)$ up to order
$m_b$. Since $A(s)$ is analytic and bounded away from singularity for all
$s\in[-1,1]$, there exists an integer $M$ such that $A^{(m_a)}(s)$ is also
bounded away from singularity for all $s\in[-1,1]$ and all $m_a>M$ (although
the bound may depend on $m_a$). Assume that $m_a>M$.
Define $m$ as in equation \eqref{eq:mdef}. Then by Proposition
\ref{prop:submatsys}, the coefficients $\mathbf{X}_g^{(m_a,m_b)}$ of the $n$-term
Galerkin approximation to the solution of the truncated system satisfy
\begin{equation}
\label{eq:trunceq}
[A^{(m_a)}]_n(\mathbf{J}_m)\mathrm{vec}((\mathbf{X}_g^{(m_a,m_b)})^T)=[b^{(m_b)}]_n(\mathbf{J}_m)\mathbf{e}_0.
\end{equation}
By the definition of $m$ (equation \eqref{eq:mdef}), equation
\eqref{eq:trunceq} holds for all integers greater than some minimum value.
Therefore, we can take $m\rightarrow\infty$ without changing the solution at
all, i.e.
\begin{equation}
[A^{(m_a)}]_n(\mathbf{J}_\infty)\mathrm{vec}((\mathbf{X}_g^{(m_a,m_b)})^T)=[b^{(m_b)}]_n(\mathbf{J}_\infty)\mathbf{e}_0.
\end{equation}
Next we take $m_a,m_b\rightarrow\infty$ to get
\begin{align*}
[A^{(m_a)}]_n(\mathbf{J}_\infty) &\rightarrow [A]_n(\mathbf{J}_\infty)\\
[b^{(m_b)}]_n(\mathbf{J}_\infty) &\rightarrow [b]_n(\mathbf{J}_\infty)
\end{align*}
which implies
\begin{equation}
\mathbf{X}_g^{(m_a,m_b)} \rightarrow \mathbf{X}_g
\end{equation}
as required.
\end{proof}
Theorem \ref{thm:connections} and corollary \ref{cor:aj} reveal the fundamental
difference between the Galerkin and pseudospectral approximations. We put them
side-by-side for comparison.
\begin{equation}
\label{eq:comparison}
[A]_n(\mathbf{J}_\infty)\mathrm{vec}(\mathbf{X}_g^T)=[b]_n(\mathbf{J}_\infty)\mathbf{e}_0,\qquad
A(\mathbf{J}_n)\mathrm{vec}(\mathbf{X}_p^T)=b(\mathbf{J}_n)\mathbf{e}_0.
\end{equation}
The difference lies in where the truncation occurs. For pseudospectral, the
infinite Jacobi matrix is first truncated, and then the operator is applied.
For Galerkin, the operator is applied to the infinite Jacobi matrix, and the
resulting system is truncated. The question that remains is whether it matters.
As we will see in the error estimates in the next section, the interpolating
pseudospectral approximation converges at a rate comparable to the Galerkin
approximation, yet requires considerably less computational effort.
\section{Error Estimates}
\label{sec:error}
Asymptotic error estimates for polynomial approximation are well-established in
many contexts, and the theory is now considered classical. Our goal is to apply
the classical theory to relate the rate of geometric convergence to some
measure of singularity for the solution. We do not seek the tightest bounds in
the most appropriate norm as in~\cite{Canuto06}, but instead we offer
intuition for understanding the asymptotic rate of convergence.
We also present a residual error estimate that may be more useful in practice.
We complement
the analysis with two representative numerical examples.
To discuss convergence, we need to choose a norm. In the statements and
proofs, we will use the standard $L^2$ and $L^\infty$ norms generalized to
$\mathbb{R}^N$-valued functions.
\begin{definition}
\label{def:norm}
For a function $f:\mathbb{R}\rightarrow\mathbb{R}^N$, define the $L^2$ and
$L^\infty$ norms as
\begin{align}
\label{eq:norms}
\norm{f}{L^2} &:= \sqrt{\sum_{i=0}^{N-1} \int_{-1}^{1}f_i^2(s)w(s)\,ds}\\
\norm{f}{L^\infty} &:= \max_{0\leq i\leq N-1}\left(\sup_{-1\leq s\leq 1}
|f_i(s)|\right)
\end{align}
\end{definition}
With these norms, we can state error estimates for both Galerkin and
pseudospectral methods.
\begin{theorem}[Galerkin Asymptotic Error Estimate]
\label{thm:galerkinerr}
Let $\rho^\ast$ be the sum of the semi-axes of the greatest ellipse with foci
at $\pm 1$ in which $x_i(s)$ is analytic for $i=0,\dots,N-1$. Then for
$1<\rho<\rho^\ast$, the asymptotic error in the Galerkin approximation is
\begin{equation}
\label{eq:galerkinerr}
\norm{x-x_{g,n}}{L^2} \leq C\rho^{-n},
\end{equation}
where $C$ is a constant independent of $n$.
\end{theorem}
\begin{proof}
We begin with the standard error estimate for the Galerkin
method~\cite[Section 6.4]{Canuto06} in the $L^2$ norm,
\begin{equation}
\norm{x-x_{g,n}}{L^2}\leq C \norm{x-R_nx}{L^2}.
\end{equation}
The constant $C$ is independent of $n$ but depends on the extremes of the
bounded eigenvalues of $A(s)$. Under the consistency hypothesis, the operator
$R_n$ is a projection operator such that
\begin{equation}
\norm{x_i-R_nx_i}{L^2}\rightarrow 0,\qquad n\rightarrow\infty.
\end{equation}
for $i=0,\dots,N-1$.
For our purpose, we let $R_nx$ be the expansion of $x(s)$ in terms of the
Chebyshev polynomials,
\begin{equation}
\label{eq:chebseries}
R_nx(s) = \sum_{k=0}^{n-1}\mathbf{a}_kT_k(s),
\end{equation}
where $T_k(s)$ is the $k$th Chebyshev polynomial, and
\begin{equation}
\mathbf{a}_{k,i} = \frac{2}{\pi c_k}\int_{-1}^1 x_i(s)T_k(s)(1-s^2)^{-1/2}\,ds,\qquad
c_k=\left\{
\begin{array}{cc}
2 & \mbox{ if $k=0$}\\
1 & \mbox{ otherwise}
\end{array}
\right.
\end{equation}
for $i=0,\dots,N-1$.
Since $x(s)$ is continuous for all $s\in[-1,1]$ and $w(s)$ is normalized, we
can bound
\begin{equation}
\norm{x-R_nx}{L^2} \leq \sqrt{N}\norm{x-R_nx}{L^\infty}
\end{equation}
The Chebyshev series
converges uniformly for functions that are continuous on
$[-1,1]$, so we can bound
\begin{align}
\norm{x-R_nx}{L^\infty} &= \norm{\sum_{k=n}^\infty\mathbf{a}_kT_k(s)}{L^\infty}\\
&\leq \norm{\sum_{k=n}^\infty|\mathbf{a}_k|}{\infty}
\end{align}
since $-1\leq T_k(s)\leq 1$ for all $k$. To be sure, the quantity $|\mathbf{a}_k|$ is
the component-wise absolute value of the constant vector $\mathbf{a}_k$, and the norm
$\|\cdot\|_\infty$ is the standard infinity norm on $\mathbb{R}^N$.
Using the classical
result stated in~\cite[Section 3]{Gottlieb77}, we have
\begin{equation}
\limsup_{k\rightarrow\infty}|\mathbf{a}_{k,i}|^{1/k}=\frac{1}{\rho^\ast_i},\qquad
i=0,\dots,N-1
\end{equation}
where $\rho^\ast_i$ is the sum of the semi-axes of the greatest ellipse with
foci at $\pm 1$ in which $x_i(s)$ is analytic. This implies that asymptotically
\begin{equation}
|\mathbf{a}_{k,i}|=\mathcal{O}\left(\rho_i^{-k}\right),\qquad
i=0,\dots,N-1.
\end{equation}
for $\rho_i<\rho^\ast_i$. We take $\rho=\min_i \rho_i$, which suffices to prove
the estimate \eqref{eq:galerkinerr}.
\end{proof}
Theorem \ref{thm:galerkinerr} recalls the well-known fact that the convergence
of many polynomial approximations (e.g.~power series, Fourier series) depends
on the size of the region in the complex plane in which the function is
analytic. Thus, the location of the singularity nearest the interval $[-1,1]$
determines the rate at which the approximation converges as one includes
higher powers in the polynomial approximation. Next we derive a similar result
for the pseudospectral approximation using the fact that it interpolates
$x(s)$ at the Gauss points of the weight function $w(s)$.
\begin{theorem}[Pseudospectral Asymptotic Error Estimate]
\label{thm:pseudospecerr}
Let $\rho^\ast$ be the sum of the semi-axes of the greatest ellipse with foci
at $\pm 1$ in which $x_i(s)$ is analytic for $i=0,\dots,N-1$. Then for
$1<\rho<\rho^\ast$, the asymptotic error in the pseudospectral approximation is
\begin{equation}
\label{eq:pseudospecerr}
\norm{x-x_{p,n}}{L^2} \leq C\rho^{-n},
\end{equation}
where $C$ is a constant independent of $n$.
\end{theorem}
\begin{proof}
Recall that $x_{c,n}(s)$ is the Lagrange interpolant of $x(s)$ at the Gauss
points of $w(s)$, and let $x_{c,n,i}(s)$ be the $i$th component of
$x_{c,n}(s)$. We will use the result from~\cite[Theorem 4.8]{Rivlin69} that
\begin{equation}
\label{eq:lagrangeerr}
\int_{-1}^1 (x_i(s) - x_{c,n,i}(s))^2w(s)\,ds
\leq
4E^2_n(x_i),
\end{equation}
where $E_n(x_i)$ is the error of the best approximation polynomial in the
uniform norm. We can, again, bound $E_n(x_i)$ by the error of the Chebyshev
expansion \eqref{eq:chebseries}. Using Theorem \ref{thm:pseudoequalcollocation}
with equation \eqref{eq:lagrangeerr},
\begin{align*}
\norm{x-x_{p,n}}{L^2} &= \norm{x-x_{c,n}}{L^2} \\
&\leq 2\sqrt{N}\norm{x-R_nx}{L^\infty}.
\end{align*}
The remainder of the proof proceeds exactly as the proof of theorem
\ref{thm:galerkinerr}.
\end{proof}
We have shown, using classical approximation theory, that the interpolating
pseudospectral method and the Galerkin method have the same asymptotic rate of
geometric convergence. This rate of convergence depends on the size of the
region in the complex plane where the functions $x(s)$ are analytic. The
structure of the matrix equation reveals at least one singularity that occurs
when $A(s^\ast)$ is rank-deficient for some $s^\ast\in\mathbb{R}$, assuming the
right hand side $b(s^\ast)$ does not fortuitously remove it. For a general
parameterized matrix, this fact may not be useful. However, for many
parameterized systems in practice, the range of the parameter is dictated by
existence and/or stability criteria. The value that makes the system singular
is often known and has some interpretation in terms of the model. In these
cases, one may have an upper bound on $\rho$, which is the sum of the
semi-axes of the ellipse of analyticity, and this can be used to estimate the
geometric rate of convergence \emph{a priori}.
We end this section with a residual error estimate -- similar to residual error
estimates for constant matrix equations -- that may be more useful in practice
than the asymptotic results.
\begin{theorem}
\label{thm:residerr}
Define the residual $r(y,s)$ as in equation \eqref{eq:resid}, and let
$e(y,s)=x(s)-y(s)$ be the $\mathbb{R}^N$-valued function representing the error
in the approximation $y(s)$. Then
\begin{equation}
\label{eq:residerr}
C_1\norm{r(y)}{L^2}\leq\norm{e(y)}{L^2}\leq C_2\norm{r(y)}{L^2}
\end{equation}
for some constants $C_1$ and $C_2$, which are independent of $y(s)$.
\end{theorem}
\begin{proof}
Since $A(s)$ is non-singular for all $s\in[-1,1]$, we can write
\begin{equation}
A^{-1}(s)r(y,s)\;=\;y(s)-A^{-1}(s)b(s)\;=\;e(y,s)
\end{equation}
so that
\begin{align*}
\norm{e(y)}{L^2}^2 &= \ip{e(y)^Te(y)}\\
&= \ip{r^T(y)A^{-T}A^{-1}r(y)}\\
\end{align*}
Since $A(s)$ is bounded, so is $A^{-1}(s)$. Therefore, there exist constants
$C_1^\ast$ and $C_2^\ast$ that depend only on $A(s)$ such that
\begin{equation}
C_1^\ast\ip{r^T(y)r(y)}\leq\ip{e^T(y)e(y)}\leq C_2^\ast\ip{r^T(y)r(y)}.
\end{equation}
Taking the square root yields the desired result.
\end{proof}
Theorem \ref{thm:residerr} states that the $L^2$ norm of the residual behaves
like the $L^2$ norm of the error. In many cases, this residual error may be
much easier to compute than the true $L^2$ error. However, as in residual error
estimates for constant matrix problems, the constants in Theorem
\ref{thm:residerr} will be large if the bounds on the eigenvalues of $A(s)$
are large. We apply these results in the next section with two numerical
examples.
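As a practical companion to Theorem \ref{thm:residerr}, the residual norm can itself be estimated by Gauss quadrature. The sketch below is our illustration (Python/NumPy assumed); it takes the residual to be $r(y,s)=A(s)y(s)-b(s)$, which may differ in sign from the convention of equation \eqref{eq:resid}, although the norm is unaffected.
\begin{verbatim}
import numpy as np

def residual_l2(A, b, y, n_quad=200):
    # Gauss-Legendre estimate of ||r(y)||_{L^2}, r(y,s) = A(s) y(s) - b(s),
    # for the normalized constant weight w(s) = 1/2 used in the examples.
    s, w = np.polynomial.legendre.leggauss(n_quad)
    w = w / 2.0
    total = 0.0
    for si, wi in zip(s, w):
        r = A(si) @ y(si) - b(si)
        total += wi * float(r @ r)
    return np.sqrt(total)
\end{verbatim}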
\section{Numerical Examples}
\label{sec:examples}
We examine two simple examples of spectral methods applied to parameterized
matrix equations. The first is a $2\times 2$ symmetric parameterized matrix,
and the second comes from a discretized second order ODE. In both cases, we
relate the convergence of the spectral methods to the size of the region of analyticity
and verify this relationship numerically. We also compare the behavior of the
true error to the behavior of the residual error estimate from theorem
\ref{thm:residerr}.
To keep the computations simple, we use a constant weight function $w(s)$.
The corresponding orthonormal polynomials are the normalized Legendre
polynomials, and the Gauss points are the Gauss-Legendre points.
\subsection{A $2\times 2$ Parameterized Matrix Equation}
Let $\epsilon>0$, and consider the following parameterized matrix equation
\begin{equation}
\label{eq:2x2}
\bmat{1+\varepsilon & s\\ s & 1}\bmat{x_0(s) \\ x_1(s)} = \bmat{2\\ 1}.
\end{equation}
For this case, we can easily compute the exact solution,
\begin{equation}
x_0(s)=\frac{2-s}{1+\varepsilon-s^2},\qquad
x_1(s)=\frac{1+\varepsilon-2s}{1+\varepsilon-s^2}.
\end{equation}
Both of these functions have poles at $s=\pm\sqrt{1+\varepsilon}$, so the
sum of the semi-axes of the largest ellipse of analyticity is
$\rho^\ast=\sqrt{1+\varepsilon}+\sqrt{\varepsilon}$. Notice that the matrix is linear in $s$, and the
right hand side has no dependence on $s$. Thus, corollary \ref{cor:equiv}
implies that the Galerkin approximation is equal to the pseudospectral
approximation for all $n$; there is no need to solve the system
\eqref{eq:galerkinsys} to compute the Galerkin approximation.
In figure \ref{fig:ex1} we plot both the true $L^2$ error and the residual error
estimate for four values of $\varepsilon$. The results confirm the analysis.
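To make the rate visible numerically, one can reuse the \texttt{pseudospectral} sketch given earlier (recall that, by corollary \ref{cor:equiv}, the Galerkin approximation coincides with it for this problem) and compare successive $L^2$ errors against the exact solution; the ratio of consecutive errors gives an empirical estimate of the geometric rate. The following snippet is ours and only illustrative.
\begin{verbatim}
import numpy as np

eps = 0.1
A = lambda s: np.array([[1.0 + eps, s], [s, 1.0]])
b = lambda s: np.array([2.0, 1.0])
x_exact = lambda s: np.array([2.0 - s, 1.0 + eps - 2.0 * s]) / (1.0 + eps - s**2)

def l2_error(n, n_quad=400):
    _, _, X = pseudospectral(A, b, n)        # helper sketched earlier
    sq, wq = np.polynomial.legendre.leggauss(n_quad)
    P = np.array([np.sqrt(2 * k + 1)
                  * np.polynomial.legendre.Legendre.basis(k)(sq)
                  for k in range(n)])
    approx = P.T @ X                          # approximation at the quad points
    err2 = sum(0.5 * wq[i] * float((x_exact(si) - approx[i]) @
                                   (x_exact(si) - approx[i]))
               for i, si in enumerate(sq))
    return np.sqrt(err2)

errs = [l2_error(n) for n in range(4, 16)]
print([errs[i] / errs[i + 1] for i in range(len(errs) - 1)])
\end{verbatim}
The printed ratios settle near a constant, the empirical geometric rate, which decreases toward $1$ as $\varepsilon\rightarrow 0$.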
\begin{figure}
\caption{The convergence of the spectral methods applied to equation
\eqref{eq:2x2}.}
\label{fig:ex1}
\end{figure}
\subsection{A Parameterized Second Order ODE}
Consider the second order boundary value problem
\begin{align}
\label{eq:ode}
\frac{d}{dt}\left(
\alpha(s,t)\frac{du}{dt}
\right)
&=
1\qquad t\in[0,1]\\
u(0)&=0\\
u(1)&=0
\end{align}
where, for $\varepsilon>0$,
\begin{equation}
\label{eq:coefficients}
\alpha(s,t) = 1+4\cos(\pi s)(t^2-t),\qquad s\in[\varepsilon,1].
\end{equation}
The exact solution is
\begin{equation}
\label{eq:exactsol}
u(s,t) = \frac{1}{8\cos(\pi s)}\ln\left(1+4\cos(\pi s)(t^2-t)\right).
\end{equation}
The solution $u(s,t)$ has a singularity at $s=0$ and $t=1/2$. Notice that we
have adjusted the range of $s$ to be bounded away from 0 by $\varepsilon$.
We use a standard piecewise linear Galerkin finite element method with $512$
elements in the $t$ domain to construct a stiffness matrix parameterized by
$s$, i.e.
\begin{equation}
\label{eq:femat}
(K_0+\cos(\pi s)K_1)x(s) = b.
\end{equation}
Figure \ref{fig:ex2} shows the convergence of the residual error estimate for both
Galerkin and pseudospectral approximations as $n$ increases. (Despite having the
exact solution \eqref{eq:exactsol} available, we do not present the decay of the
$L^2$ error; it is dominated entirely by the discretization error in the $t$
domain.) As
$\varepsilon$ gets closer to zero, the geometric convergence rate of the
spectral methods degrades considerably. Also, note that each element of the
parameterized stiffness matrix is an analytic function of $s$, but figure
\ref{fig:ex2} verifies that the less expensive pseudospectral approximation
converges at the same rate as the Galerkin approximation.
\begin{figure}
\caption{The convergence of the residual error estimate for the Galerkin and
pseudospectral approximations applied to the parameterized matrix equation
\eqref{eq:femat}.}
\label{fig:ex2}
\end{figure}
\section{Summary and Conclusions}
\label{sec:summary}
We have presented an application of spectral methods to parameterized matrix
equations. Such parameterized systems arise in many applications. The goal of
a spectral method is to construct a global polynomial approximation of the
$\mathbb{R}^N$-valued function that satisfies the parameterized system.
We derived two basic spectral methods: (i) the interpolatory pseudospectral
method, which approximates the coefficients of the truncated Fourier series
with Gauss quadrature formulas, and (ii) the Galerkin method, which finds an
approximation in a finite dimensional subspace by requiring that the equation
residual be orthogonal to the approximation space. The primary work involved in
the pseudospectral method is solving the parameterized system at a finite set
of parameter values, whereas the Galerkin method requires the solution of a
coupled system of equations many times larger than the original parameterized
system.
We showed that one can interpret the differences between these
two methods as a choice of when to truncate an infinite linear system of
equations. Employing this relationship we derived conditions under which these
two approximations are equivalent. In this case, there is no reason to solve the
large coupled system of equations for the Galerkin approximation.
Using classical techniques, we presented asymptotic error estimates
relating the decay of the error to the size of the region of analyticity of the
solution; we also derived a residual error estimate that may be more useful in
practice. We verified the theoretical developments with two numerical examples:
a $2\times 2$ matrix equation and a finite element discretization of a
parameterized second order ODE.
The popularity of spectral methods for PDEs stems from their \emph{infinite}
(i.e. geometric) order of convergence for smooth functions compared to finite
difference schemes. We have the same advantage in the case of parameterized
matrix equations, plus the added bonus that there are no boundary conditions
to consider. The primary concern for these methods is determining the value of
the parameter closest to the domain that renders the system singular.
\end{document}
\begin{document}
\title{A power series method for solving ordinary and partial differential equations motivated by domain growth.}
\begin{abstract}
\noindent In this work we present a power series method for solving ordinary and partial differential equations. To demonstrate our method we solve a system of ordinary differential equations describing the movement of a random walker on a one-dimensional lattice, two nonlinear ordinary differential equations, a wave and diffusion equation (linear partial differential equations), and a nonlinear partial differential equation (quasilinear). The inclusion of boundary conditions and the general solutions to other equations of interest are included in the Supplementary material.
\end{abstract}
{\begin{center}{\noindent{\bf Keywords:}
Differential equations, ordinary, partial, power series solutions, growth.}
\end{center}}
\section{Introduction}
We present a method that can be used to generate power series solutions for linear and nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs). The method we present relies on a separation of variables in a system of equations we construct, and generates a power series weighted by coefficients written in terms of the initial condition. This method was motivated by work conducted on the mathematical modeling of domain growth \citep{Ross2017a}.
\\
\\
The outline of this work is as follows: In Section 2.1 we demonstrate how our method can be used to generate power series solutions for linear and nonlinear ODEs. In Sections 2.2 and 2.3 we demonstrate how this method can be extended to linear and nonlinear PDEs, including the implementation of boundary conditions in the linear PDE case. We finish with a short discussion of the method presented in this work in Section 3.
\section{Results}
\subsection{Solving a system of ordinary differential equations describing the movement of a random walker on a one-dimensional lattice}
A one-dimensional lattice with periodic boundaries (a ring) is displayed in Fig. \ref{fig:figure1}.
\begin{figure}
\caption{A one-dimensional lattice with periodic boundary conditions can be represented as a ring. The sites are sequentially labelled from $i \in \{1, 2, ..., N \}$.}
\label{fig:figure1}
\end{figure}
\\
The following equation describes the time evolution of the probability that an unbiased excluding random walker occupies site $i$ on a one-dimensional periodic lattice\footnote{A derivation of Eq. \eqref{eq:static_diffusion} can be found in the Supplementary material (SM1).}:
\begin{align}
\frac{\mathrm{d}p_{i}}{\mathrm{d}t} = \left(\frac{P_{m}}{2}\right)\left(p_{i-1} - 2p_{i} + p_{i+1}\right).
\label{eq:static_diffusion}
\end{align}
In Eq. \eqref{eq:static_diffusion} $p_{i}$ is the probability a random walker is situated at site $i$ at time $t$, and $P_{m}$ is the rate at which the random walker attempts to move to an adjacent site on the lattice.
\\
\\
We now write Eq. \eqref{eq:static_diffusion} in the following manner
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}\sum^{\infty}_{n=0}p^{n}_{i} = \sum^{\infty}_{n=0}\left(\frac{P_{m}}{2}\right)\left(p^{n}_{i-1} - 2p^{n}_{i} + p^{n}_{i+1}\right).
\label{eq:sum_back}
\end{align}
That is, we postulate that $p_{i}$ can be written as the infinite series
\begin{align}
p_{i} = \sum^{\infty}_{n=0}p^{n}_{i}.
\label{eq:pbar}
\end{align}
We now decompose Eq. \eqref{eq:sum_back} into the following infinite system of equations
\begin{align}
\frac{\mathrm{d}p^{0}_{i}}{\mathrm{d}t} = -\beta p^{0}_{i},
\label{eq:metastatic_diffusion_asep_0}
\end{align}
and
\begin{align}
\frac{\mathrm{d}p^{n}_{i}}{\mathrm{d}t} = -\beta p^{n}_{i} + \beta p^{n-1}_{i} + \left(\frac{P_{m}}{2}\right)\left(p^{n-1}_{i-1} - 2p^{n-1}_{i} + p^{n-1}_{i+1}\right), \ \ \ \forall \ n > 0.
\label{eq:metastatic_diffusion_asep}
\end{align}
Notice in Eq. \eqref{eq:metastatic_diffusion_asep} that we have written the terms associated with the movement of the random walker in terms of $n-1$, not $n$.\footnote{It has previously been shown \citep{Ross2017a} that if the lattice is growing, Eq. \eqref{eq:metastatic_diffusion_asep} would be written as
\begin{align}
\frac{\mathrm{d}p^{n}_{i}}{\mathrm{d}t} = -P_{g} p^{n}_{i} + P_{g} p^{n-1}_{i} + \left(\frac{P_{m}}{2}\right)\Big(p^{n}_{i-1} - 2p^{n}_{i} + p^{n}_{i+1}\Big), \nonumber
\end{align}
where the motility terms have the same $n$ as the time derivative, and $P_{g}$ is the rate at which the lattice grows. This observation is what initially motivated the work we present here.}
We include the parameter $\beta$ in Eqs. \eqref{eq:metastatic_diffusion_asep_0} and \eqref{eq:metastatic_diffusion_asep} as a `shape' parameter; however, its inclusion is not necessary and $\beta$ can be set to zero if desired.\footnote{If $\beta = 0$ then Eqs. \eqref{eq:metastatic_diffusion_asep_0} and \eqref{eq:metastatic_diffusion_asep} would be
\begin{align}
\frac{\mathrm{d}p^{0}_{i}}{\mathrm{d}t} = 0,
\end{align}
and
\begin{align}
\frac{\mathrm{d}p^{n}_{i}}{\mathrm{d}t} = \left(\frac{P_{m}}{2}\right)\left(p^{n-1}_{i-1} - 2p^{n-1}_{i} + p^{n-1}_{i+1}\right), \ \ \ \forall \ n > 0.
\end{align}} Finally, we simplify Eq. \eqref{eq:metastatic_diffusion_asep} to obtain
\begin{align}
\frac{\mathrm{d}p^{n}_{i}}{\mathrm{d}t} = -\beta p^{n}_{i} + (\beta - P_{m})p^{n-1}_{i} + \left(\frac{P_{m}}{2}\right)\left(p^{n-1}_{i-1} + p^{n-1}_{i+1}\right), \ \ \ \forall \ n > 0.
\label{eq:metastatic_diffusion2}
\end{align}
It is readily apparent that if we sum Eq. \eqref{eq:metastatic_diffusion_asep_0} and Eq. \eqref{eq:metastatic_diffusion_asep} for all $n > 0$ we obtain
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}\sum^{\infty}_{n=0}p^{n}_{i} = \sum^{\infty}_{n=0}\left(\frac{P_{m}}{2}\right)\Big(p^{n}_{i-1} - 2p^{n}_{i} + p^{n}_{i+1}\Big).
\label{eq:metastatic_diffusion_typ2}
\end{align}
This means if we substitute Eq. \eqref{eq:pbar} into Eq. \eqref{eq:metastatic_diffusion_typ2} we arrive at
\begin{align}
\frac{\mathrm{d} p_{i}}{\mathrm{d}t} = \left(\frac{P_{m}}{2}\right)\left(p_{i-1} - 2p_{i} + p_{i+1}\right),
\label{eq:sum_back_sub}
\end{align}
which recapitulates Eq. \eqref{eq:static_diffusion}.
\\
\\
The decomposition of Eq. \eqref{eq:static_diffusion} into the infinite system of equations contained in Eqs. \eqref{eq:metastatic_diffusion_asep_0} and \eqref{eq:metastatic_diffusion_asep} is straightforward to solve. To see this consider the initial equation, Eq. \eqref{eq:metastatic_diffusion_asep_0},
\begin{align}
\frac{\mathrm{d}p^{0}_{i}}{\mathrm{d}t} = -\beta p^{0}_{i}.
\label{eq:metastatic_diffusion_init}
\end{align}
As the initial equation does not `inherit' any terms associated with the movement of the random walker, Eq. \eqref{eq:metastatic_diffusion_init} admits the simple solution:
\begin{align}
p^{0}_{i} = A_{i}\operatorname{exp}(-\beta t),
\label{eq:metastatic_diffusion_init_sol}
\end{align}
where $A_{i}$ is the initial value of site $i$.
\\
\\
Equation \eqref{eq:metastatic_diffusion_init_sol} can be placed in Eq. \eqref{eq:metastatic_diffusion2} for $n = 1$, so that Eq. \eqref{eq:metastatic_diffusion2} becomes
\begin{align}
\frac{\mathrm{d}p^{1}_{i}}{\mathrm{d}t} = -\beta p^{1}_{i} + (\beta - P_{m})A_{i}\operatorname{exp}(-\beta t) + \left(\frac{P_{m}}{2}\right)\left(A_{i-1} + A_{i+1}\right)\operatorname{exp}(-\beta t).
\label{eq:metastatic_diffusion_simp}
\end{align}
This means Eq. \eqref{eq:metastatic_diffusion_simp} is now also straightforward to solve. Repeated application of this process admits the following recurrence formula as a solution for Eq. \eqref{eq:metastatic_diffusion2}:
\begin{align}
p_{i}^{n}(t) = \operatorname{exp}(-\beta t)\left(\frac{t^{n}}{n!}\right)\left[ \sum_{j=0}^{n}(\beta -P_{m})^{n-j}(P_{m})^{j}\frac{1}{2^{j}}\binom{n}{j}\left(\sum_{k=-j}^{-j:2:j}\binom{j}{\frac{(k+j)}{2}}A_{i+k}\right)\right], \ \ \ \forall \ n \geq 0.
\label{eq:metastatic_diffusion_sol}
\end{align}
Therefore, the probability of site $i$ being occupied by a walker at time $t$ is given by
\begin{align}
\sum_{n = 0}^{\infty} p_{i}^{n}(t) = \sum_{n = 0}^{\infty}\operatorname{exp}(-\beta t)\left(\frac{t^{n}}{n!}\right)\left[ \sum_{j=0}^{n}(\beta -P_{m})^{n-j}(P_{m})^{j}\frac{1}{2^{j}}\binom{n}{j}\left(\sum_{k=-j}^{-j:2:j}\binom{j}{\frac{(k+j)}{2}}A_{i+k}\right)\right],
\label{eq:metastatic_diffusion_sum}
\end{align}
in accordance with Eq. \eqref{eq:pbar}.
\\
\\
In Fig. \ref{fig:figure2} we demonstrate that, at our chosen level of truncation, the solution given by Eq. \eqref{eq:metastatic_diffusion_sum} agrees excellently with the evolution of the ensemble average of the discrete model (for the algorithm used in the discrete model see the Supplementary material (SM2)). In Fig. \ref{fig:figure2b} we display what we refer to as `streams', given by Eq. \eqref{eq:metastatic_diffusion_sol}, for a single value of the shape parameter $\beta$. In the Supplementary material (SM3) we display streams for different values of $\beta$, which demonstrates how $\beta$ influences the shape of the streams that compose the solution given by Eq. \eqref{eq:metastatic_diffusion_sum}.
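For readers who wish to reproduce such comparisons, the closed form \eqref{eq:metastatic_diffusion_sol} and the truncated sum \eqref{eq:metastatic_diffusion_sum} can be evaluated directly. The sketch below is purely illustrative (Python is our choice; the text does not prescribe an implementation), with \texttt{A} the vector of initial site values on the periodic lattice and \texttt{n\_max} the truncation level.
\begin{verbatim}
from math import comb, exp, factorial

def stream(n, i, t, A, Pm, beta):
    # p_i^n(t) of Eq. (metastatic_diffusion_sol) on a periodic lattice
    N = len(A)
    total = 0.0
    for j in range(n + 1):
        inner = sum(comb(j, (k + j) // 2) * A[(i + k) % N]
                    for k in range(-j, j + 1, 2))
        total += (beta - Pm) ** (n - j) * Pm ** j * comb(n, j) * inner / 2 ** j
    return exp(-beta * t) * t ** n / factorial(n) * total

def occupancy(i, t, A, Pm, beta, n_max=60):
    # truncated series of Eq. (metastatic_diffusion_sum)
    return sum(stream(n, i, t, A, Pm, beta) for n in range(n_max + 1))
\end{verbatim}
The truncation level should be increased until the additional streams are negligible over the time interval of interest.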
\begin{figure}
\caption{A comparison of an ensemble average of the discrete model with periodic boundary conditions and Eq. \eqref{eq:metastatic_diffusion_sum}.}
\label{fig:figure2}
\end{figure}
\begin{figure}
\caption{The streams of sites $i = 25$ and $i=30$ as given by Eq. \eqref{eq:metastatic_diffusion_sol}.}
\label{fig:figure2b}
\end{figure}
\subsubsection{Boundary conditions}
It is possible to implement boundary conditions in equations describing the movement of a random walker on a one-dimensional lattice with the method we are presenting. For an unbiased random walker on a one-dimensional lattice with no-flux boundary conditions the equations describing the probability of finding a walker at a given site are:
\begin{align}
\frac{\mathrm{d}p^{n}_{1}}{\mathrm{d}t} &= -\beta p^{n}_{1} + \beta p^{n-1}_{1} + \left(\frac{P_{m}}{2}\right)\left(p^{n-1}_{2} - p^{n-1}_{1}\right), \nonumber \\
&\vdotswithin{=} \nonumber \\
\frac{\mathrm{d}p^{n}_{i}}{\mathrm{d}t} &= -\beta p^{n}_{i} + \beta p^{n-1}_{i} + \left(\frac{P_{m}}{2}\right)\left(p^{n-1}_{i-1} - 2p^{n-1}_{i} + p^{n-1}_{i+1}\right), \nonumber \\
&\vdotswithin{=} \nonumber \\
\frac{\mathrm{d}p^{n}_{N}}{\mathrm{d}t} &= -\beta p^{n}_{N} + \beta p^{n-1}_{N} + \left(\frac{P_{m}}{2}\right)\left(p^{n-1}_{N-1} - p^{n-1}_{N}\right), \ \ \ \forall \ n > 0.
\label{eq:metastatic_diffusionBC1}
\end{align}
Following the same procedure as we did for Eqs. \eqref{eq:metastatic_diffusion_sol} and \eqref{eq:metastatic_diffusion_sum} we find the following recurrence relation for sites not situated on the boundary
\begin{align}
p_{i}^{n}(t) = \left(\frac{t}{n}\right)\left[(\beta -P_{m})p_{i}^{n-1}(t) + \left(\frac{P_{m}}{2}\right)(p_{i+1}^{n-1}(t) + p_{i-1}^{n-1}(t))\right], \ \ \ \forall \ n > 0,
\label{eq:ecom}
\end{align}
with
\begin{align}
p_{i}^{0}(t) = A_{i}\operatorname{exp}(-\beta t).
\label{}
\end{align}
The reader will notice that we have written Eq. \eqref{eq:ecom} in a more economical form than Eq. \eqref{eq:metastatic_diffusion_sol}. It is possible to write Eq. \eqref{eq:ecom} in the same manner as Eq. \eqref{eq:metastatic_diffusion_sol}, and if this is done, each site on the lattice will have its own recurrence formula describing the probability of a random walker being located at that site at time $t$.
\\
\\
From Eq. \eqref{eq:ecom} the evolution with respect to time of the probability of site $i$ being occupied by a walker is given by
\begin{align}
\sum^{\infty}_{n=0}p_{i}^{n}(t) = p_{i}^{0}(t) + \sum^{\infty}_{n=1}\left(\frac{t}{n}\right)\left[(\beta -P_{m})p_{i}^{n-1}(t) + \left(\frac{P_{m}}{2}\right)(p_{i+1}^{n-1}(t) + p_{i-1}^{n-1}(t))\right].
\label{eq:middle}
\end{align}
The recurrence relations for sites situated on the boundary are
\begin{align}
p_{1}^{n}(t) = \left(\frac{t}{n}\right)\left[\left(\beta -\frac{P_{m}}{2}\right)p_{1}^{n-1}(t) + \left(\frac{P_{m}}{2}\right)(p_{2}^{n-1}(t))\right] \ \ \ \forall \ n > 0,
\label{eq:bound1}
\end{align}
and
\begin{align}
p_{N}^{n}(t) = \left(\frac{t}{n}\right)\left[\left(\beta -\frac{P_{m}}{2}\right)p_{N}^{n-1}(t) + \left(\frac{P_{m}}{2}\right)(p_{N-1}^{n-1}(t))\right] \ \ \ \forall \ n > 0.
\label{eq:bound2}
\end{align}
Therefore, the evolution with respect to time of the probability of sites $1$ and $N$ being occupied by a walker are given by
\begin{align}
\sum^{\infty}_{n=0}p_{1}^{n}(t) = p_{1}^{0}(t) + \sum^{\infty}_{n=1}\left(\frac{t}{n}\right)\left[\left(\beta -\frac{P_{m}}{2}\right)p_{1}^{n-1}(t) + \left(\frac{P_{m}}{2}\right)(p_{2}^{n-1}(t))\right],
\label{eq:bound1gen}
\end{align}
and
\begin{align}
\sum^{\infty}_{n=0}p_{N}^{n}(t) = p_{N}^{0}(t) + \sum^{\infty}_{n=1}\left(\frac{t}{n}\right)\left[\left(\beta -\frac{P_{m}}{2}\right)p_{N}^{n-1}(t) + \left(\frac{P_{m}}{2}\right)(p_{N-1}^{n-1}(t))\right],
\label{eq:bound2gen}
\end{align}
respectively.
\\
\\
In Fig. \ref{fig:figure3} the solutions given by Eqs. \eqref{eq:middle}, \eqref{eq:bound1gen} and \eqref{eq:bound2gen} are displayed. It can be seen that Eqs. \eqref{eq:middle}, \eqref{eq:bound1gen} and \eqref{eq:bound2gen} and the ensemble average from the discrete model match excellently. The algorithm for the discrete model can be found in the Supplementary material (SM2).
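The recurrences \eqref{eq:ecom}, \eqref{eq:bound1} and \eqref{eq:bound2} also lend themselves to direct iteration; the following sketch (our illustrative Python, not part of the method statement) computes the truncated sums \eqref{eq:middle}, \eqref{eq:bound1gen} and \eqref{eq:bound2gen} for all sites at once.
\begin{verbatim}
import numpy as np

def noflux_occupancy(A, t, Pm, beta, n_max=60):
    # A : initial site values (length N); returns the truncated sums of
    # Eqs. (middle), (bound1gen) and (bound2gen) at time t.
    N = len(A)
    p_prev = np.asarray(A, dtype=float) * np.exp(-beta * t)    # p_i^0(t)
    total = p_prev.copy()
    for n in range(1, n_max + 1):
        p = np.empty(N)
        p[1:-1] = (t / n) * ((beta - Pm) * p_prev[1:-1]         # Eq. (ecom)
                             + 0.5 * Pm * (p_prev[2:] + p_prev[:-2]))
        p[0] = (t / n) * ((beta - 0.5 * Pm) * p_prev[0]         # Eq. (bound1)
                          + 0.5 * Pm * p_prev[1])
        p[-1] = (t / n) * ((beta - 0.5 * Pm) * p_prev[-1]       # Eq. (bound2)
                           + 0.5 * Pm * p_prev[-2])
        total += p
        p_prev = p
    return total
\end{verbatim}
The periodic case can be treated in the same way by using wrapped indices in place of the boundary rows.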
\\
\\
We provide a final example of our method being applied to a linear ODE in the Supplementary material (SM4).
\begin{figure}
\caption{A comparison of an ensemble average of the discrete model with no-flux boundary conditions and Eqs. \eqref{eq:middle}, \eqref{eq:bound1gen} and \eqref{eq:bound2gen}.}
\label{fig:figure3}
\end{figure}
\subsubsection{Nonlinear ordinary differential equations}
We now apply our method to nonlinear ODEs. This allows us to demonstrate another important aspect of our method.
Initially we solve
\begin{align}
\frac{\mathrm{d}p}{\mathrm{d}t} = \gamma p(1-p),
\label{eq:non_lin2}
\end{align}
where $\gamma > 0$. The analytic solution of Eq. \eqref{eq:non_lin2} is
\begin{align}
p(t) = \frac{C_{1}\operatorname{exp}(\gamma t)}{1 - C_{1} + C_{1}\operatorname{exp}(\gamma t)},
\label{eq:non_lin2_sol}
\end{align}
where $C_{1}$ is the value of $p(t)$ at $t = 0$.\\
\\
To solve Eq. \eqref{eq:non_lin2} in our framework we proceed as follows. To begin with we decompose Eq. \eqref{eq:non_lin2} into
\begin{align}
\frac{\mathrm{d}p^{n}(t)}{\mathrm{d}t} = -\beta p^{n}(t) + \beta p^{n-1}(t) + \gamma p^{n-1}(t) - \gamma p^{n-1}(t)\sum^{\infty}_{n=0}p^{n}(t), \ \ \ \forall \ n > 0,
\label{eq:nonlin_exp3}
\end{align}
with
\begin{align}
\frac{\mathrm{d}p^{0}(t)}{\mathrm{d}t} = -\beta p^{0}(t).
\label{eq:nonlin_exp4}
\end{align}
It is evident that Eq. \eqref{eq:nonlin_exp3} cannot be solved in the same iterative manner as Eq. \eqref{eq:metastatic_diffusion_asep} due to the nonlinear term present on its right-hand side.\footnote{One might think that Eq. \eqref{eq:nonlin_exp3} should be written as
\begin{align}
\frac{\mathrm{d}p^{n}(t)}{\mathrm{d}t} = -\beta p^{n}(t) + \beta p^{n-1}(t) + \gamma p^{n-1}(t) - \gamma p^{n-1}(t)p^{n-1}(t), \ \ \ \forall \ n > 0,
\label{eq:nonlin_exp_wrong}
\end{align}
but this is incorrect as each stream needs to be multiplied by every other stream to account for the nonlinearity in Eq. \eqref{eq:non_lin2}.} To circumvent this we sum Eq. \eqref{eq:nonlin_exp3} for all $n \geq 0$ to obtain
\begin{align}
\sum^{\infty}_{n=0}\frac{\mathrm{d}p^{n}(t)}{\mathrm{d}t} = \gamma\sum^{\infty}_{n=0}p^{n}(t) - \gamma\sum^{\infty}_{n=0}p^{n}(t)\sum^{\infty}_{n=0}p^{n}(t),
\label{eq:expand}
\end{align}
and then decompose Eq. \eqref{eq:expand} in the following manner:
\begin{align}
\frac{\mathrm{d}p^{0}(t)}{\mathrm{d}t} = -\beta p^{0}(t),
\label{eq:nonlin_exp_N0_new1}
\end{align}
\begin{align}
\frac{\mathrm{d}p^{1}(t)}{\mathrm{d}t} = -\beta p^{1}(t) + \beta p^{0}(t) + \gamma p^{0} - \gamma p^{0}(t)p^{0}(t),
\label{eq:nonlin_exp_N0_new2}
\end{align}
\begin{align}
\frac{\mathrm{d}p^{2}(t)}{\mathrm{d}t} = -\beta p^{2}(t) + \beta p^{1}(t) + \gamma p^{1} - \gamma p^{1}(t)p^{1}(t) - 2\gamma p^{1}(t)p^{0}(t),
\label{eq:nonlin_exp_N0_new3}
\end{align}
so that in general
\begin{align}
\frac{\mathrm{d}p^{n}(t)}{\mathrm{d}t} &= -\beta p^{n}(t) + \beta p^{n-1}(t) + \gamma p^{n-1}(t) \nonumber \\
& \ \ \ - \gamma p^{n-1}(t)p^{n-1}(t) - 2\gamma p^{n-1}(t)\left(\sum^{n-2}_{i=0}p^{i}(t)\right), \ \ \ \forall n > 0.
\label{eq:nonlin_exp_N0_new4}
\end{align}
The decomposition of Eq. \eqref{eq:expand} into the equations contained in Eqs. \eqref{eq:nonlin_exp_N0_new1} and \eqref{eq:nonlin_exp_N0_new4} allows us to solve the unknowns iteratively, and it is straightforward to demonstrate that summing Eqs. \eqref{eq:nonlin_exp_N0_new1} and \eqref{eq:nonlin_exp_N0_new4} for all $n > 0$ returns Eq. \eqref{eq:expand}. In Fig. \ref{fig:figureNLode} (a) we compare the solution of Eq. \eqref{eq:nonlin_exp_N0_new4} with the analytical solution Eq. \eqref{eq:non_lin2_sol}.
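Because each of Eqs. \eqref{eq:nonlin_exp_N0_new1} and \eqref{eq:nonlin_exp_N0_new4} is linear in its unknown stream, the iteration can be automated with an integrating factor. The sketch below is our illustration (Python/SymPy assumed); we take $p^{n}(0)=0$ for $n>0$, so that the initial value is carried entirely by $p^{0}$, as in the linear case.
\begin{verbatim}
import sympy as sp

t, tau = sp.symbols('t tau', nonnegative=True)
beta, gamma, C1 = sp.symbols('beta gamma C1', positive=True)

def logistic_streams(n_max):
    # streams of Eqs. (nonlin_exp_N0_new1)-(nonlin_exp_N0_new4), solved with
    # the integrating factor exp(beta*t) and p^n(0) = 0 for n > 0
    p = [C1 * sp.exp(-beta * t)]                       # p^0
    for n in range(1, n_max + 1):
        prev = p[n - 1]
        rhs = ((beta + gamma) * prev - gamma * prev**2
               - 2 * gamma * prev * sum(p[:n - 1]))    # empty sum when n = 1
        integrand = sp.exp(beta * tau) * rhs.subs(t, tau)
        p.append(sp.simplify(sp.exp(-beta * t)
                             * sp.integrate(integrand, (tau, 0, t))))
    return p

approx = sum(logistic_streams(4))
exact = C1 * sp.exp(gamma * t) / (1 - C1 + C1 * sp.exp(gamma * t))
\end{verbatim}
Evaluating \texttt{approx} and \texttt{exact} at numerical values of $\beta$, $\gamma$, $C_{1}$ and $t$ gives a check analogous to the comparison shown in Fig. \ref{fig:figureNLode} (a).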
\begin{figure}
\caption{In (a) Eqs. \eqref{eq:nonlin_exp_N0_new1}--\eqref{eq:nonlin_exp_N0_new4} are compared with the analytical solution Eq. \eqref{eq:non_lin2_sol}; in (b) Eq. \eqref{eq:non_lin3sol} is compared with Eq. \eqref{eq:non_lin3anal}.}
\label{fig:figureNLode}
\end{figure}
\\
It is also possible to derive power series solutions in terms of simple functions for nonlinear ODEs with the method we are presenting. For instance, the nonlinear ODE
\begin{align}
\frac{\mathrm{d}y}{\mathrm{d}t} = \alpha y^{2},
\label{eq:non_lin3}
\end{align}
has the following power series solution
\begin{align}
y = \sum^{\infty}_{n=0} \frac{(-1)^{n}y^{0}}{n!}\operatorname{log}^{n}(1 - \alpha y^{0} t),
\label{eq:non_lin3sol}
\end{align}
where
\begin{align}
y^{0} = y(0) = A.
\label{eq:non_lin3yo}
\end{align}
In Fig. \ref{fig:figureNLode} (b) we compare Eq. \eqref{eq:non_lin3sol} with the analytical solution of Eq. \eqref{eq:non_lin3},
\begin{align}
y = \frac{1}{\frac{1}{A} - \alpha t}.
\label{eq:non_lin3anal}
\end{align}
The details of how to derive Eq. \eqref{eq:non_lin3sol} are given in the Supplementary material (SM5).
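A quick numerical check of Eq. \eqref{eq:non_lin3sol} against Eq. \eqref{eq:non_lin3anal} is immediate (the snippet below is ours and merely illustrative); indeed, since $\sum_{n\geq 0}(-x)^{n}/n!=\operatorname{exp}(-x)$, the series \eqref{eq:non_lin3sol} sums exactly to $y^{0}/(1-\alpha y^{0}t)$ whenever $1-\alpha y^{0}t>0$.
\begin{verbatim}
from math import factorial, log

alpha, A, t = 0.5, 1.0, 0.8            # any values with alpha*A*t < 1
series = sum((-1)**n * A * log(1 - alpha * A * t)**n / factorial(n)
             for n in range(30))
exact = 1.0 / (1.0 / A - alpha * t)
print(series, exact)                    # agree to machine precision
\end{verbatim}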
\subsection{Solving a linear partial differential equation}
We now demonstrate that the method we are presenting is also applicable to PDEs.
We start by applying this method to linear PDEs, for instance the wave equation:
\begin{align}
\frac{\partial u(x,t)}{\partial t} = c\frac{\partial u(x,t)}{\partial x}.
\label{eq:hyperbolic_eq}
\end{align}
Motivated by the previous section we write Eq. \eqref{eq:hyperbolic_eq} as
\begin{align}
\frac{\partial u^{n}}{\partial t} = -\beta u^{n} + \beta u^{n-1} + c\frac{\partial u^{n-1}}{\partial x}, \ \ \ \forall n > 0,
\label{eq:hyperbolic_eq_growth}
\end{align}
with
\begin{align}
\frac{\partial u^{0}}{\partial t} = -\beta u^{0}.
\label{eq:hyperbolic_eq_growth0}
\end{align}
As before we initially solve Eq. \eqref{eq:hyperbolic_eq_growth0},
\begin{align}
u^{0}(x,t) = A(x)\operatorname{exp}(-\beta t).
\label{eq:pde_diffusion_init}
\end{align}
It can be seen in Eq. \eqref{eq:pde_diffusion_init} that our method relies on the separation of the spatial and temporal variables in the initial equation.
For general $u^{n}$ we obtain
\begin{align}
u^{n}(x,t) = \left(\frac{t^{n}}{n!}\right)\operatorname{exp}(-\beta t)\left[\sum^{n}_{j=0}\binom{n}{j}(\beta )^{n-j}c^{j}\left(\frac{\partial^{j}A(x)}{\partial x^{j}} \right)\right], \ \ \ \forall \ n \geq 0.
\label{eq:hyperbolic_eq_sol}
\end{align}
If $\beta = 0$ we have
\begin{align}
u^{n}(x,t) = \left(\frac{t^{n}}{n!}\right)\left[c^{n}\left(\frac{\partial^{n}A(x)}{\partial x^{n}} \right)\right], \ \ \ \forall \ n \geq 0,
\label{eq:hyperbolic_eq_sol_Pg0}
\end{align}
and each stream is a polynomial in $t$ weighted by coefficients written in terms of the initial condition\footnote{In the case of Eq. \eqref{eq:hyperbolic_eq_sol_Pg0} it is evident we have simply derived a Taylor series expansion \citep{Abbott2001}.}.
From Eq. \eqref{eq:hyperbolic_eq_sol} the general solution to Eq. \eqref{eq:hyperbolic_eq} is
\begin{align}
\sum^{\infty}_{n=0}u^{n}(x,t) = \sum^{\infty}_{n=0}\left(\frac{t^{n}}{n!}\right)\operatorname{exp}(-\beta t)\left[\sum^{n}_{j=0}\binom{n}{j}(\beta )^{n-j}c^{j}\left(\frac{\partial^{j}A(x)}{\partial x^{j}} \right)\right].
\label{eq:hyperbolic_eq_sol_gen}
\end{align}
A simple initial condition for Eq. \eqref{eq:hyperbolic_eq_sol_gen} is
\begin{align}
A(x) = \operatorname{sin}(x),
\label{eq:ic}
\end{align}
and if we substitute Eq. \eqref{eq:ic} into Eq. \eqref{eq:hyperbolic_eq_sol} we obtain
\begin{align}
\sum^{\infty}_{n=0}u^{n}(x,t) = \sum^{\infty}_{n=0}\left(\frac{t^{n}}{n!}\right)\operatorname{exp}(-\beta t)\left[\sum^{n}_{j=0}\binom{n}{j}(\beta )^{n-j}c^{j}\left(\frac{\partial^{j}}{\partial x^{j}}\operatorname{sin}{x} \right)\right].
\label{eq:hyperbolic_eq_sol_sin}
\end{align}
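As a sanity check (ours, not part of the derivation above), note that for $\beta=0$ and $A(x)=\operatorname{sin}(x)$ the series \eqref{eq:hyperbolic_eq_sol_Pg0} is the Taylor expansion in $t$ of $\operatorname{sin}(x+ct)$, the exact solution of Eq. \eqref{eq:hyperbolic_eq} for this initial condition. A short SymPy snippet confirms the agreement for a truncated sum:
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t')
c = sp.Rational(3, 2)
A = sp.sin(x)
series = sum(t**n / sp.factorial(n) * c**n * sp.diff(A, x, n)
             for n in range(26))
exact = sp.sin(x + c * t)        # solution of u_t = c u_x with u(x,0) = sin(x)
print(float((series - exact).subs({x: 0.3, t: 0.7})))   # essentially zero
\end{verbatim}
The diffusion equation treated next can be checked in the same way, replacing $c^{n}\partial^{n}/\partial x^{n}$ by $D^{n}\partial^{2n}/\partial x^{2n}$.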
It is also straightforward to solve the diffusion equation, which is
\begin{align}
\frac{\partial u}{\partial t} = D\frac{\partial^{2} u}{\partial x^{2}}.
\label{eq:diffusion_eq}
\end{align}
Solving Eq. \eqref{eq:diffusion_eq} in a similar manner to how we solved Eq. \eqref{eq:hyperbolic_eq} we obtain
\begin{align}
u^{n}(x,t) = \left(\frac{t^{n}}{n!}\right)\operatorname{exp}(-\beta t)\left[\sum^{n}_{j=0}\binom{n}{j}(\beta )^{n-j}D^{j}\left(\frac{\partial^{2j}A(x)}{\partial x^{2j}}\right)\right], \ \ \ \forall \ n \geq 0.
\label{eq:diffusion_eq_sol}
\end{align}
If $\beta = 0$ Eq. \eqref{eq:diffusion_eq_sol} is
\begin{align}
u^{n}(x,t) = \left(\frac{t^{n}}{n!}\right)\left[D^{n}\left(\frac{\partial^{2n}A(x)}{\partial x^{2n}}\right)\right], \ \ \ \forall \ n \geq 0.
\label{eq:diffusion_eq_sol_Pg0}
\end{align}
Therefore, from Eq. \eqref{eq:diffusion_eq_sol} our general solution to Eq. \eqref{eq:diffusion_eq} is
\begin{align}
\sum_{n=0}^{\infty}u^{n}(x,t) &= \sum_{n=0}^{\infty}\left(\frac{t^{n}}{n!}\right)\operatorname{exp}(-\beta t)\left[\sum^{n}_{j=0}\binom{n}{j}(\beta )^{n-j}D^{j}\left(\frac{\partial^{2j}A(x)}{\partial x^{2j}}\right)\right].
\label{eq:diffusion_eq_sol_gen}
\end{align}
If we use $A(x) = \operatorname{sin}(x)$ for the initial condition in Eq. \eqref{eq:diffusion_eq_sol} we obtain
\begin{align}
\sum_{n=0}^{\infty}u^{n}(x,t) &= \sum_{n=0}^{\infty}\left(\frac{t^{n}}{n!}\right)\operatorname{exp}(-\beta t)\left[\sum^{n}_{j=0}\binom{n}{j}(\beta )^{n-j}D^{j}\left((-1)^{j}\operatorname{sin}(x)\right)\right].
\label{eq:diffusion_eq_sol_exp}
\end{align}
This method is trivially extendable to two-dimensional linear PDEs, the details of which are given in the Supplementary material (SM6). It is also possible to implement boundary conditions, and this is also demonstrated in the Supplementary material (SM7).
\subsection{Solving a nonlinear partial differential equation}
Finally, we demonstrate that this method also extends to nonlinear PDEs, for instance the quasilinear inviscid Burgers equation, which is
\begin{align}
\frac{\partial u}{\partial t} = \alpha u\frac{\partial u}{\partial x},
\label{eq:inviscid_burger_eq}
\end{align}
where $\alpha$ is a constant. We begin by writing Eq. \eqref{eq:inviscid_burger_eq} in the following manner
\begin{align}
\frac{\partial u^{n}}{\partial t} = -\beta u^{n} + \beta u^{n-1} + \alpha u^{n-1}\left(\sum^{\infty}_{i=0}\frac{\partial u^{i}}{\partial x}\right), \ \ \ \forall n > 0,
\label{eq:inviscid_burger_eq_growth}
\end{align}
with
\begin{align}
\frac{\partial u^{0}}{\partial t} = -\beta u^{0}.
\label{eq:inviscid_burger_eq_growth0}
\end{align}
As in the case of nonlinear ODEs we have to multiply the $n^{th}$ stream by all other streams (including itself) to account for the nonlinearity in Eq. \eqref{eq:inviscid_burger_eq}. We then decompose Eqs. \eqref{eq:inviscid_burger_eq_growth} and \eqref{eq:inviscid_burger_eq_growth0} in the following manner:
\begin{align}
\frac{\partial u^{0}}{\partial t} &= -\beta u^{0},
\label{eq:inviscid_burger_eq_growth_simp_solve_L0}
\end{align}
with
\begin{align}
\frac{\partial u^{1}}{\partial t} &= -\beta u^{1} + \beta u^{0} + \alpha u^{0}\frac{\partial u^{0}}{\partial x},
\label{eq:inviscid_burger_eq_growth_simp_solve_L1}
\end{align}
and
\begin{align}
\frac{\partial u^{n}}{\partial t} &= -\beta u^{n} + \beta u^{n-1} + \alpha u^{n-1}\left(\sum^{n-1}_{j=0}\frac{\partial u^{j}}{\partial x}\right) + \alpha\frac{\partial u^{n-1}}{\partial x}\left(\sum^{n-2}_{k=0}u^{k}\right), \ \ \ \forall n > 0.
\label{eq:inviscid_burger_eq_growth_simp_solve}
\end{align}
In Fig. \ref{fig:figureNL} the solution of Eqs. \eqref{eq:inviscid_burger_eq_growth_simp_solve_L0}-\eqref{eq:inviscid_burger_eq_growth_simp_solve} is compared with the solution of Eq. \eqref{eq:inviscid_burger_eq} before the onset of the multivalued behaviour that the solution of Eq. \eqref{eq:inviscid_burger_eq} exhibits. We use symbolic integration in Matlab to compute Eqs. \eqref{eq:inviscid_burger_eq_growth_simp_solve_L0}-\eqref{eq:inviscid_burger_eq_growth_simp_solve}. It should be readily apparent how to extend this method to more complicated nonlinear PDEs.
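An analogous computation can be carried out symbolically in Python/SymPy; the sketch below is our illustration of the same iterative integrating-factor solve and is not the Matlab code referred to above. We take $u^{n}(x,0)=0$ for $n>0$, so that the initial condition enters only through $u^{0}$.
\begin{verbatim}
import sympy as sp

x, t, tau = sp.symbols('x t tau')
alpha, beta = sp.symbols('alpha beta', positive=True)

def burgers_streams(A, n_max):
    # streams of Eqs. (inviscid_burger_eq_growth_simp_solve_L0)-
    # (inviscid_burger_eq_growth_simp_solve) via the integrating factor
    # exp(beta*t), with u^n(x,0) = 0 for n > 0
    u = [A * sp.exp(-beta * t)]                                    # u^0
    for n in range(1, n_max + 1):
        prev = u[n - 1]
        rhs = (beta * prev
               + alpha * prev * sum(sp.diff(uj, x) for uj in u[:n])
               + alpha * sp.diff(prev, x) * sum(u[:n - 1]))        # empty for n=1
        integrand = sp.exp(beta * tau) * rhs.subs(t, tau)
        u.append(sp.expand(sp.exp(-beta * t)
                           * sp.integrate(integrand, (tau, 0, t))))
    return u

streams = burgers_streams(sp.sin(x), 3)    # truncated solution: sum(streams)
\end{verbatim}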
\begin{figure}
\caption{A comparison of the solution to Eq. \eqref{eq:inviscid_burger_eq} with the truncated solution of Eqs. \eqref{eq:inviscid_burger_eq_growth_simp_solve_L0}-\eqref{eq:inviscid_burger_eq_growth_simp_solve}.}
\label{fig:figureNL}
\end{figure}
\section{Discussion}
We have presented a power series method for solving both linear and nonlinear ODEs and PDEs. We finish by detailing some issues with the method we have introduced in this work.
\\
\\
Our main criticism of the method is that for some of the nonlinear equations presented in this work, for instance Eqs. \eqref{eq:non_lin2} and \eqref{eq:inviscid_burger_eq}, we have not supplied solutions for the $n^{th}$ stream written in terms of simple functions. The method presented here would be most useful if an efficient means of writing the power series solutions for nonlinear equations became evident, as this would allow analysis to be carried out directly on these solutions. The Supplementary material (SM5) shows that for some nonlinear equations it is possible to write the $n^{th}$ stream of the solution in terms of simple functions; however, a way to generalise the approach used on Eq. \eqref{eq:non_lin3} has not yet become apparent to the authors.
\\
\\
It is also important to acknowledge that we have not dealt with the issue of convergence of the power series we have presented. Clearly, the convergence of these power series, and their radius of convergence, will depend on the initial conditions of the equation, and on the equation itself \citep{Abbott2001}. However, a more general treatment of the convergence of the methods presented here is certainly required. Finally, a word on the role of the shape parameter $\beta$. Its role may seem somewhat superfluous; however, it is a simple way to circumvent numerical issues when the value of the streams that compose a solution becomes too large for a standard computer to represent accurately. It also means that the value of the streams composing a solution can be made positive on a given interval of interest by selecting an appropriate value of $\beta$, and so it provides another analytic tool to utilise when employing the methods presented here.
\section*{Supplementary material}
\subsubsection*{SM1: The derivation of Equation \eqref{eq:static_diffusion} in the main text.}
We derive Eq. \eqref{eq:static_diffusion} in the following manner. The probability that an unbiased excluding random walker occupies site $i$ on a one-dimensional periodic lattice at time $t + \delta t$ is given by
\begin{align}
p_{i}(A;t+\delta t) &= p_{i}(A;t) + \frac{P_{m}\delta t}{2}\Bigg(p_{i-1,i}(A,0;t) - p_{i-1,i}(0,A;t)\Bigg) \nonumber \\
& \ \ \ + \frac{P_{m}\delta t}{2}\Bigg(p_{i,i+1}(0,A;t) - p_{i,i+1}(A,0;t)\Bigg).
\label{eq:SM11}
\end{align}
In Eq. \eqref{eq:SM11} $p_{i-1,i}(A,0;t)$ is the second-order probability that site $i-1$ and $i$ are occupied and unoccupied, respectively, at time $t$. The other second-order terms in Eq. \eqref{eq:SM11} have similar meanings. If we rearrange Eq. \eqref{eq:SM11} and take $\delta t \rightarrow 0$ in the limit we obtain
\begin{align}
\frac{\mathrm{d}p_{i}(A;t)}{\mathrm{d}t} &= \frac{P_{m}}{2}\Bigg(p_{i-1,i}(A,0;t) - p_{i-1,i}(0,A;t)\Bigg) \nonumber \\
& \ \ \ + \frac{P_{m}}{2}\Bigg(p_{i,i+1}(0,A;t) - p_{i,i+1}(A,0;t)\Bigg).
\label{eq:SM12}
\end{align}
We now remove the second-order terms in Eq. \eqref{eq:SM12} by making the following closure
\begin{align}
p_{i,i+1}(A,0;t) = p_{i}(A;t)(1-p_{i+1}(A;t)).
\label{eq:SM13}
\end{align}
If we place Eq. \eqref{eq:SM13} in Eq. \eqref{eq:SM12} we obtain
\begin{align}
\frac{\mathrm{d}p_{i}(A;t)}{\mathrm{d}t} &= \frac{P_{m}}{2}\Bigg(p_{i-1}(A;t) - 2p_{i}(A;t) + p_{i+1}(A;t)\Bigg).
\label{eq:SM14}
\end{align}
If we drop the explicit `$A$' and `$t$' from our notation in Eq. \eqref{eq:SM14} we recapitulate Eq. \eqref{eq:static_diffusion}.
\subsubsection*{SM2: Algorithm for discrete random-walk}
We use a discrete random-walk model on a one-dimensional regular lattice with lattice spacing $\Delta$ \citep{Liggett} and length $N$, where $N$ is an integer describing the number of lattice sites. Simulations are performed with either periodic boundary or no-flux conditions. Each random walker is assigned to a lattice site, from which it can move into an adjacent site. If an agent attempts to move into a site that is already occupied, the movement event is aborted. This process, whereby only one agent is allowed per site, is generally known as an exclusion process. Time is evolved continuously, and random walker movements are attempted in accordance with the Gillespie algorithm \citep{Gillespie_orig}. Attempted agent movement events occur with rate $P_{m}$ per unit time. The initial conditions of the discrete model are provided in the main text when necessary.
\subsubsection*{SM3: The effect of different values of $\beta$ in Eq. \eqref{eq:metastatic_diffusion_sum} in the main text.}
In Fig. \ref{fig:figure2c} we display streams, Eq. \eqref{eq:metastatic_diffusion_sol}, for different values of $\beta$. This demonstrates how $\beta $ influences the shape of the streams that compose the solution given by Eq. \eqref{eq:metastatic_diffusion_sum}.
\begin{figure}
\caption{The streams of site $i = 25$ as given by Eq. \eqref{eq:metastatic_diffusion_sol} for different values of $\beta$.}
\label{fig:figure2c}
\end{figure}
\subsubsection*{SM4: Linear ordinary differential equation}
We provide the solution to the following linear ODE
\begin{align}
\frac{\mathrm{d}q}{\mathrm{d}t} = (1-2t)q,
\label{eq:non_lin1}
\end{align}
which is linear in $q$. The analytic solution of Eq. \eqref{eq:non_lin1} is
\begin{align}
q(t) = C_{2}\operatorname{exp}(t-t^{2}),
\label{eq:non_lin1_sol}
\end{align}
where $C_{2}$ is the value of $q(t)$ at $t = 0$.
To solve Eq. \eqref{eq:non_lin1} in our framework we rewrite it as
\begin{align}
\frac{\mathrm{d}q^{n}(t)}{\mathrm{d}t} = -\beta q^{n}(t) + \beta q^{n-1}(t) + (1-2t)q^{n-1}(t), \ \ \ \forall \ n > 0,
\label{eq:nonlin_exp}
\end{align}
with
\begin{align}
\frac{\mathrm{d}q^{0}(t)}{\mathrm{d}t} = -\beta q^{0}(t).
\label{eq:nonlin_exp0}
\end{align}
In Fig. \ref{fig:figureLMSM} (a) the solution of Eqs. \eqref{eq:nonlin_exp} and \eqref{eq:nonlin_exp0} is compared with the analytical solution Eq. \eqref{eq:non_lin1_sol}.
\begin{figure}
\caption{In (a) Eqs. \eqref{eq:nonlin_exp} and \eqref{eq:nonlin_exp0} are compared with the analytical solution Eq. \eqref{eq:non_lin1_sol}.}
\label{fig:figureLMSM}
\end{figure}
\subsubsection*{SM5: Nonlinear ordinary differential equation}
To solve Eq. \eqref{eq:non_lin3} we proceed in the following manner. We begin with
\begin{align}
\frac{\mathrm{d}y}{\mathrm{d}t} = \alpha y^{2},
\label{eq:SM31}
\end{align}
and
\begin{align}
y^{0} = A,
\label{eq:SM32}
\end{align}
and
\begin{align}
\frac{\mathrm{d}y^{1}}{\mathrm{d}t} = \alpha yy^{0}.
\label{eq:SM33}
\end{align}
In this derivation we assume $\beta = 0$ for simplicity.
Initially, we rewrite Eq. \eqref{eq:SM31} as
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}(\operatorname{log}(y)) = \alpha y.
\label{eq:SM34}
\end{align}
This means
\begin{align}
\frac{\mathrm{d}y^{1}}{\mathrm{d}t} = y^{0}\frac{\mathrm{d}}{\mathrm{d}t}(\operatorname{log}(y)),
\label{eq:SM35}
\end{align}
which gives
\begin{align}
y^{1} = y^{0}\operatorname{log}(y) + c_{1}.
\label{eq:SM36}
\end{align}
Therefore
\begin{align}
y = y^{0}\operatorname{exp}\left(\frac{y^{1}}{y^{0}}\right),
\label{eq:SM37}
\end{align}
because $c_{1} = -y^{0}\operatorname{log}(y^{0})$. If we place Eq. \eqref{eq:SM37} in Eq. \eqref{eq:SM33} we obtain
\begin{align}
\frac{\mathrm{d}y^{1}}{\mathrm{d}t} = \alpha (y^{0})^{2}\operatorname{exp}\left(\frac{y^{1}}{y^{0}}\right),
\label{eq:SM38}
\end{align}
which we can integrate to obtain
\begin{align}
y^{1} = -y^{0}\operatorname{log}\left(1 - \alpha y^{0}t\right).
\label{eq:SM39}
\end{align}
Now we recognise
\begin{align}
\frac{\mathrm{d}y^{2}}{\mathrm{d}t} = \frac{y^{1}}{y^{0}}\frac{\mathrm{d}y^{1}}{\mathrm{d}t}.
\label{eq:SM310}
\end{align}
If we integrate Eq. \eqref{eq:SM310}, and then solve for $y^{3}$ in a similar manner we obtain the following power series solution for $y$
\begin{align}
y = \sum^{\infty}_{n=0} \frac{(-1)^{n}y^{0}}{n!}\operatorname{log}^{n}(1 - \alpha y^{0} t).
\label{eq:non_lin3solSM}
\end{align}
Alternatively, we can place Eq. \eqref{eq:SM39} into Eq. \eqref{eq:SM37} to obtain
\begin{align}
y = \frac{y^{0}}{1 - \alpha y^{0} t},
\label{eq:non_lin3analSM}
\end{align}
which recapitulates the analytical solution to Eq. \eqref{eq:SM31}, given as Eq. \eqref{eq:non_lin3anal}.
\subsubsection*{SM6: Two-dimensional linear partial differential equation}
The general solution for the two-dimensional linear diffusion equation $(\beta = 0)$ is
\begin{align}
\sum^{\infty}_{n=0}p^{n}(x,y;t) = \sum^{\infty}_{n=0}\frac{(Dt)^{n}}{n!}\left[\sum^{n}_{i=0}\binom{n}{i}\frac{\partial^{2n}A(x,y)}{\partial x^{2(n-i)}\partial y^{2i}} \right].
\label{eq:lin_bc_2d}
\end{align}
\subsubsection*{SM7: Boundary conditions for linear partial differential equation}
We now demonstrate how to implement boundary conditions in linear PDEs with our method. A simple example is for the diffusion equation
\begin{align}
\frac{\partial u}{\partial t} = D\frac{\partial^{2} u}{\partial x^{2}},
\label{eq:lin_bc}
\end{align}
with
\begin{align}
u^{L}(x,0) = A(x) = \gamma + \lambda x,
\end{align}
and
\begin{align}
\sum^{\infty}_{i=0}{u^{L+i \delta L}(0,t)} = \gamma = u^{L+i \delta L}(0,0), \ \ \ \ \sum^{\infty}_{i=0}{u^{L+i \delta L}(L,t)} = \gamma + \lambda L = u^{L+i \delta L}(L,0).
\end{align}
If we implemented Neumann boundary conditions these would take the form
\begin{align}
\sum^{\infty}_{i=0}\frac{\partial u^{L+i \delta L}(0,t)}{\partial x} = -c_{1}, \ \ \ \ \sum^{\infty}_{i=0}\frac{\partial u^{L+i \delta L}(L,t)}{\partial x} = c_{2}.
\end{align}
To implement our boundary conditions we proceed as follows: as we already have the general solution, Eq. \eqref{eq:diffusion_eq_sol}, we can take its partial derivative with respect to time to obtain\footnote{A simpler way to obtain the solution for the given initial condition is the following. Implementing the initial condition in Eq. \eqref{eq:lin_bc} gives:
\begin{align}
\frac{\partial u^{L+\delta L}}{\partial t} = -\beta u^{L + \delta L} + \beta u^{L},
\end{align}
with the (straightforward) solution being
\begin{align}
\sum^{\infty}_{i=0}u^{L+i\delta L}(x,t) = \sum^{\infty}_{i=0}A(x)\frac{(\beta t)^{i}}{i!}\operatorname{exp}(-\beta t) = A(x)\sum^{\infty}_{i=0}\frac{(\beta t)^{i}}{i!}\operatorname{exp}(-\beta t) = A(x),
\end{align}
as the Poisson distribution sums to unity.}
\begin{align}
\frac{\partial u}{\partial t} = \sum_{n=0}^{\infty}\left(n\left(\frac{t^{n-1}}{n!}\right)\operatorname{exp}(-\beta t)-\beta \left(\frac{t^{n}}{n!}\right)\operatorname{exp}(-\beta t)\right)\left[\sum^{n}_{j=0}\binom{n}{j}(\beta )^{n-j}D^{(j)}\left(\frac{\partial^{2j}A(x)}{\partial x^{2j}}\right)\right],
\end{align}
which means
\begin{align}
\frac{\partial^{2} u}{\partial x^{2}} &= \nonumber \\
& \hspace{-0cm} \left(\frac{1}{D}\right)\sum_{n=0}^{\infty}\left(n\left(\frac{t^{n-1}}{n!}\right)\operatorname{exp}(-\beta t)-\beta \left(\frac{t^{n}}{n!}\right)\operatorname{exp}(-\beta t)\right)\left[\sum^{n}_{j=0}\binom{n}{j}(\beta )^{n-j}D^{(j)}\left(\frac{\partial^{2j}A(x)}{\partial x^{2j}}\right)\right].
\label{eq:diffusion_dx2}
\end{align}
Integrating Eq. \eqref{eq:diffusion_dx2} with respect to $x$ gives
\begin{align}
\frac{\partial u}{\partial x} + c_{1} &= \nonumber \\
&\hspace{-1cm} \left(\frac{1}{D}\right)\sum_{n=0}^{\infty}\left(n\left(\frac{t^{n-1}}{n!}\right)\operatorname{exp}(-\beta t)-\beta \left(\frac{t^{n}}{n!}\right)\operatorname{exp}(-\beta t)\right)\left[\sum^{n}_{j=0}\binom{n}{j}(\beta )^{n-j}D^{(j)}\left(\frac{\partial^{2j-1}A(x)}{\partial x^{2j-1}}\right)\right],
\label{eq:diffusion_dx}
\end{align}
and integrating Eq. \eqref{eq:diffusion_dx} with respect to $x$ gives
\begin{align}
u + c_{1}x + c_{2} &= \nonumber \\
&\hspace{-2cm} \left(\frac{1}{D}\right)\sum_{n=0}^{\infty}\left(n\left(\frac{t^{n-1}}{n!}\right)\operatorname{exp}(-\beta t)-\beta \left(\frac{t^{n}}{n!}\right)\operatorname{exp}(-\beta t)\right)\left[\sum^{n}_{j=0}\binom{n}{j}(\beta )^{n-j}D^{(j)}\left(\frac{\partial^{2j-2}A(x)}{\partial x^{2j-2}}\right)\right].
\label{eq:full_sol}
\end{align}
If we apply the boundary conditions and initial condition to Eq. \eqref{eq:full_sol} we obtain
\begin{align}
u &= \frac{1}{D}\sum_{n=0}^{\infty}\operatorname{exp}(-\beta t)\left(n\left(\frac{t^{n-1}}{n!}\right)-\beta \left(\frac{t^{n}}{n!}\right)\right)\Bigg[\left[(\beta )^{n}\left(\gamma \frac{x^{2}}{2} + \lambda \frac{x^{3}}{6}\right) + n(\beta )^{n-1}D(\gamma + \lambda x)\right] \nonumber \\
& \ \ \ - \left(\frac{x}{L}\right)\left[(\beta )^{n}\left(\gamma \frac{L^{2}}{2} + \lambda \frac{L^{3}}{6}\right) + n(\beta )^{n-1}D(\gamma + \lambda L)\right] + \left(\frac{x}{L}-1\right)\left[n(\beta )^{n-1}D\gamma\right]\Bigg] \nonumber \\ & \ \ \ + \lambda x + \gamma
\label{eq:lin_bc_sol}
\end{align}
as the solution to Eq. \eqref{eq:lin_bc}. In this solution we define $$n\left(\frac{t^{n-1}}{n!}\right) = 0$$ when $n = 0$ to avoid a singularity when $t = 0$.
\end{document}
\begin{document}
\title[$m$-isometric operators and their local properties]{$m$-isometric operators and their local properties}
\author[Z.\ J.\ Jab{\l}o\'nski]{Zenon Jan
Jab{\l}o\'nski}
\address{Instytut Matematyki,
Uniwersytet Jagiello\'nski, ul.\ \L ojasiewicza 6,
PL-30348 Kra\-k\'ow, Poland}
\email{[email protected]}
\author[I.\ B.\ Jung]{Il Bong Jung}
\address{Department of Mathematics, Kyungpook National University,
Da\-egu 41566, Korea} \email{[email protected]}
\author[J.\ Stochel]{Jan Stochel}
\address{Instytut Matematyki, Uniwersytet
Jagiello\'nski, ul.\ \L ojasiewicza 6, PL-30348
Kra\-k\'ow, Poland} \email{[email protected]}
\thanks{The research of the second author was supported by the National Research
Foundation of Korea (NRF) grant funded by the Korea
government (MSIT) (2018R1A2B6003660)}
\subjclass[2010]{Primary 47B20; Secondary 47A75}
\keywords{$m$-isometric operator, generalized
eigenspace, Jordan block, orthogonality, algebraic
operator}
\maketitle
\begin{abstract}
In this paper we give necessary and sufficient conditions
for a bounded linear Hilbert space operator to be an
$m$-isometry for an unspecified $m$ written in terms of
conditions that are applied to ``one vector at a time''. We
provide criteria for orthogonality of generalized
eigenvectors of an (a priori unbounded) linear operator $T$
on an inner product space that correspond to distinct
eigenvalues of modulus $1$. We also discuss a similar
question of when Jordan blocks of $T$ corresponding to
distinct eigenvalues of modulus $1$ are ``orthogonal''.
\end{abstract}
\section{\label{Sec1}Introduction and notation}
Let $\hh$ be a Hilbert space (all vector spaces are
assumed to be complex throughout this paper). Denote by
$\ogr{\hh}$ the $C^*$-algebra of all bounded linear
operators on $\hh$ and by $I_{\hh}$ (abbreviated to $I$ if
no confusion arises) the identity operator on $\hh$.
Suppose that $T\in \ogr{\hh}$. Define
\begin{align*}
\bscr_m(T) = \sum_{k=0}^m (-1)^k \binom{m}{k}
{T^*}^kT^k, \quad m=0,1,2,\ldots.
\end{align*}
Given a positive integer $m$, we say that $T$ is an
{\em $m$-isometry} if $\bscr_m(T)=0$. Clearly,
\begin{align} \label{miso-1}
\begin{minipage}{72ex}
{\em $T$ is an $m$-isometry if and only if
$\sum\limits_{k=0}^m (-1)^k \binom{m}{k} \|T^k h\|^2 =
0$ for all $h \in \hh$.}
\end{minipage}
\end{align}
A $1$-isometry is an isometry and {\em vice versa}. The
notion of an $m$-isometric operator was invented by
Agler (see \cite[p.\ 11]{Ag-0}). We refer the reader to the
trilogy \cite{Ag-St1,Ag-St2,Ag-St3} by Agler and Stankus
for the fundamentals of the theory of $m$-isometries. The
class of $m$-isometric operators emerged from the study of
the time shift operator of the modified Brownian motion
process \cite{Ag-St2} as well as from the investigation of
invariant subspaces of the Dirichlet shift
\cite{Rich88,Rich91}. The topics related to $m$-isometries
are currently being studied intensively (see e.g.,
\cite{Ber-Mar-No13,Ber-Mar-Mul-No14,Le15,Ab-Le16,McC-Ru2016,Ko-Lee2018,A-C-J-S19}).
Let us recall a few facts about $m$-isometries. It is
easily seen that $m$-isometries are injective. The
following is well known (see \cite[Lemma~1.21]{Ag-St1}).
\begin{align} \label{as-spec}
\begin{minipage}{70ex}
{\em If $\hh\neq \{0\}$ and $T\in\ogr{\hh}$ is an
$m$-isometry, then either $\sigma(T) \subseteq \tbb$ or
$\sigma(T)=\overline{\dbb}${\em ;} in both cases $r(T)=1$.}
\end{minipage}
\end{align}
Here $\sigma(T)$ is the spectrum of $T$, $r(T)$ is the
spectral radius of $T$, $\dbb$ is the open unit disc in the
complex plane centered at $0$ and $\tbb$ is its topological
boundary. Following \cite{Bo-Ja}, we say that an
$m$-isometry $T$ is {\em strict} if $m=1$ and $\hh\neq
\{0\}$, or $m\Ge 2$ and $T$ is not an $(m-1)$-isometry (in
both cases $\hh\neq \{0\}$). Examples of strict
$m$-isometries for any $m\Ge 2$ are provided in
\cite[Proposition~8]{At91} (see also
\cite[Example~2.3]{Sh-At}). As observed by Agler and
Stankus, if $T\in \ogr{\hh}$ is an $m$-isometry, then $T$
is a $k$-isometry for all $k\Ge m$ (see line 6 on page 389
in \cite{Ag-St1} and note that $\beta_m(T)=
\frac{(-1)^m}{m!} \bscr_m(T)$). This implies that if $T$ is
an $m$-isometry, then there exists a unique $k\in \nbb$
such that $T$ is a strict $k$-isometry and $k\Le m$.
The well-known characterization of $m$-isometries due to
Agler and Stankus states that an operator $T\in \ogr{\hh}$
is an $m$-isometry if and only if for every $h\in \hh$,
$\|T^nh\|^2$ is a polynomial in $n$ of degree at most $m-1$
(see \cite[p.\ 389]{Ag-St1}; see also Theorem~\ref{Newt4}).
The first question we deal with in this paper is whether
dropping the degree constraint yields $m$-isometricity of
$T$ for some unspecified $m$. This problem is formulated in
the spirit of Kaplansky's theorem \cite[Theorem~15]{Kap}
which asserts that an operator $T\in \ogr{\hh}$ is
algebraic if and only if it is locally algebraic. We answer
the above question in the affirmative (see
Theorem~\ref{weakmiso}). However, if we drop the assumption
that $\hh$ is complete the answer is no (see
Remark~\ref{wiel-nod}). Nevertheless, the assertion
\eqref{miso-1} enables one to define $m$-isometries in the
case of lack of completeness of $\hh$. Motivated by needs
of the theory of unbounded operators (see e.g.,
\cite{Ja-St01}), we consider $m$-isometries $T$ on an inner
product space $\mscr$ starting from Section~\ref{Sec4}.
Notice that any linear operator $T$ on $\mscr$ can be
regarded as an (a priori unbounded) operator in $\hh$ with
invariant domain $\mscr$, where $\hh$ is the Hilbert space
completion of $\mscr$. We do not assume that $\mscr$ is
invariant for $T^*$. Let us point out that even under these
circumstances it may happen that $\dz{T^*}=\{0\}$ (see
\cite[Example~3, p.\ 69]{Weid80}). In this part of the
paper we are mostly interested in finding criteria for
orthogonality of generalized eigenvectors of $T$
corresponding to distinct eigenvalues of modulus $1$. A
similar question can be asked about ``orthogonality'' of
Jordan blocks of $T$ corresponding to distinct eigenvalues
of modulus $1$. We answer both these questions by using the
polynomial approach as developed for $m$-isometries. The
most delicate step in the proof depends heavily on the
celebrated Carlson's theorem which gives a sufficient
condition for an entire function of exponential type
vanishing on nonnegative integers to vanish globally. These
two problems are related to \cite{A-H-S} where the question
of orthogonality of spectral spaces corresponding to
distinct eigenvalues was studied in the context of bounded
algebraic Hilbert space operators that are roots of
polynomials in two variables. As a byproduct, we answer the
question of when an algebraic operator on $\mscr$ is an
$m$-isometry. Namely, we show that for a given $m$,
algebraic $m$-isometries are precisely finite orthogonal
sums of operators of the form $z I + N$, where $z$ is a
complex number of modulus $1$ and $N$ is a nilpotent
operator with prescribed index of nilpotency (see
Proposition~\ref{m-iso-alg}). What is more, if an algebraic
operator is a strict $m$-isometry, then $m$ must be a
positive odd number. It is worth mentioning that there are
$m$-isometries and nilpotent operators that are unbounded
and closed as operators in $\hh$ (see
\cite[Example~6.4]{Ja-St01} and \cite[Example~3.3]{Ota88},
resp.). We also show that if $T\in\ogr{\hh}$ is a compact
$m$-isometry, then $\hh$ is finite dimensional (see
Proposition~\ref{compop}).
The organization of this paper is as follows. In
Section~\ref{Sec2} we collect the facts concerning
$m$-isometries that are needed in this paper. In particular, we
discuss a finite difference operator which plays an
essential role in our investigations. In Section~\ref{Sec3}
we state and prove a few ``local'' characterizations of
$m$-isometries including that mentioned above (see
Theorem~\ref{weakmiso}). These characterizations are stated
in terms of conditions that are applied to ``one vector at
a time''. In particular, we show that verifying
$m$-isometricity for some unspecified $m$ can be reduced to
considering the restrictions of the operator in question to
its cyclic invariant subspaces. This result resembles the
analogous ones for subnormal operators (see
\cite[Corollary~1]{Trent1981}; see also \cite{Sto-Sz1984}).
Starting from Section~\ref{Sec4}, we investigate
$m$-isometries on an inner product space $\mscr$.
Theorem~\ref{Bang1-m} is an adaptation of a result due to
Berm\'{u}dez, Martin\'{o}n, M\"{u}ller and Noda
\cite[Theorem~3]{Ber-Mar-Mul-No14} to the context of linear
operators on $\mscr$. The main result of
Section~\ref{Sec5}, Theorem~\ref{alg-m-iso-lem} provides a
criterion for orthogonality of generalized eigenvectors.
Finally, in Section~\ref{Sec6}, we characterize
``orthogonality'' of Jordan blocks corresponding to
distinct eigenvalues of modulus $1$ (see
Theorem~\ref{alg-m-iso-lem-b}).
We conclude this section by fixing notation. In what
follows, $\rbb$ and $\cbb$ stand for the fields of real and
complex numbers, respectively. Set $\tbb=\{z\in \cbb\colon
|z|=1\}$ and $\ubb=\{-1,1\} \times \{-\I,\I\}$, where $\I$
stands for the imaginary unit. The sets of integers,
nonnegative integers and positive integers are denoted by
$\zbb$, $\zbb_+$ and $\nbb$, respectively. The ring of all
polynomials in one indeterminate $x$ with coefficients in a
ring $R$ is denoted by $R[x]$. If $p=\sum_{j=0}^n a_j x^j
\in \cbb[x]$, then the polynomials $p^*, \, \mathrm{Re}\,
p, \, \mathrm{Im}\, p \in \cbb[x]$ are defined by
\begin{align*}
p^*=\sum_{j=0}^n \bar a_j x^j, \quad \mathrm{Re}\,
p=\sum_{j=0}^n \mathrm{Re} (a_j) x^j \quad \text{and} \quad
\mathrm{Im}\, p=\sum_{j=0}^n \mathrm{Im} (a_j) x^j.
\end{align*}
As usual, the identity transformation on a vector space
$\mscr$ is denoted by $I_{\mscr}$ and abbreviated to $I$ if
no confusion arises. Given a linear map $T\colon \mscr \to
\mscr$ and a vector $h\in \mscr$, we write $\cscr_T(h)$ for
the vector space spanned by $\{T^n h\colon n\in \zbb_+\}$.
\section{\label{Sec2}Basic characterizations of $m$-isometries}
This section provides necessary facts on a finite
difference operator that will be used in
characterizing $m$-isometries.
If $k\in \{-\infty\} \cup \zbb_+$, $R = \cbb$ or
$R=\ogr{\hh}$ and $\{\gamma_n\}_{n=0}^{\infty} \subseteq
R$, then we say that $\gamma_n$ is a {\em polynomial in
$n$} (of {\em degree} $k$) if there exists a polynomial
$p\in R[x]$ (of degree $k$) such that $p(n)=\gamma_n$ for
all $n\in \zbb_+$. It follows from the Fundamental Theorem
of Algebra that such $p$ is unique. Denote by
$\cbb^{\zbb_+}$ the vector space of all complex sequences
$\{\gamma_n\}_{n=0}^{\infty}$ with linear operations
defined coordinatewise. We regard $\cbb^{\zbb_+}$ as a
topological vector space equipped with the topology of
pointwise convergence. Note that $\cbb^{\zbb_+}$ is, in
fact, a Fr\'{e}chet space (see \cite[Remarks 1.38(c)]{Rud};
see also \cite[Ex.\ 13, p.\ 104]{con2}). Define the linear
transformation $\tscr\colon \cbb^{\zbb_+} \to
\cbb^{\zbb_+}$ by
\begin{align*}
(\tscr \gammab)_n = \gamma_{n+1}, \quad n\in \zbb_+,
\, \gammab=\{\gamma_n\}_{n=0}^{\infty} \in
\cbb^{\zbb_+}.
\end{align*}
Set $\triangle = \tscr - I$. It is easily seen that the
transformations $\tscr$ and $\triangle$ are continuous.
Applying Newton's binomial formula, we get
\begin{align} \label{Newt1}
(\triangle^m \gammab)_n = (-1)^m \sum_{k=0}^m (-1)^k
\binom{m}{k} \gamma_{n+k}, \quad m,n\in\zbb_+, \,
\gammab=\{\gamma_n\}_{n=0}^{\infty} \in \cbb^{\zbb_+}.
\end{align}
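To fix ideas, note that in the two lowest-order cases the
formula \eqref{Newt1} reads
\begin{align*}
(\triangle \gammab)_n = \gamma_{n+1}-\gamma_n \quad
\text{and} \quad (\triangle^2 \gammab)_n =
\gamma_{n+2}-2\gamma_{n+1}+\gamma_n, \quad n\in\zbb_+,
\end{align*}
so $\triangle^m$ is simply the $m$-th iterate of the
usual forward difference.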
Using Newton's binomial formula again, we can
reproduce the original sequence $\gammab$ by means of
$\{(\triangle^k \gammab)_0\}_{k=0}^{\infty}$ as
follows:
\allowdisplaybreaks
\begin{align} \notag
\gamma_n = (\tscr^n \gammab)_0 & = ((\triangle + I)^n
\gammab)_0
\\ \notag
& = \sum_{k=0}^n \binom{n}{k} (\triangle^{k}
\gammab)_0
\\ \label{Newt2}
& = \sum_{k=0}^{\infty} \frac{(\triangle^{k}
\gammab)_0}{k!} (n)_k, \quad n\in\zbb_+, \,
\gammab=\{\gamma_n\}_{n=0}^{\infty} \in \cbb^{\zbb_+},
\end{align}
where $(n)_k$ is a polynomial in $n$ of degree $k$ given by
\begin{align*}
(n)_k =
\begin{cases}
1 & \text{if } k=0 \text{ and } n \in \zbb_+,
\\
\prod_{j=0}^{k-1} (n-j) & \text{if } k\in \nbb \text{
and } n \in \zbb_+.
\end{cases}
\end{align*}
Observe that $(n)_k=0$ for all $k>n$. The formula
\eqref{Newt2}, which is known as Newton's interpolation
formula, enables us to describe the kernel $\ker
\triangle^m$ of $\triangle^m$ (cf.\ \cite[Exercise
7.2]{Dick}).
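For example, if $\gamma_n=n^2$ for $n\in \zbb_+$, then
$(\triangle^0\gammab)_0=0$, $(\triangle \gammab)_0=1$,
$(\triangle^2\gammab)_0=2$ and $(\triangle^k\gammab)_0=0$
for $k\Ge 3$, so \eqref{Newt2} reduces to the identity
\begin{align*}
n^2 = (n)_1 + (n)_2 = n + n(n-1), \quad n\in \zbb_+,
\end{align*}
in accordance with Proposition~\ref{kerdelta} below
($\gammab \in \ker \triangle^3$).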
\begin{pro} \label{kerdelta}
If $m\in \nbb$, then
\begin{align*}
\ker \triangle^m = \Big\{\{\gamma_n\}_{n=0}^{\infty}
\in \cbb^{\zbb_+}\colon \text{$\gamma_n$ is a
polynomial in $n$ of degree at most $m-1$}\Big\}.
\end{align*}
\end{pro}
\begin{proof}
The inclusion ``$\subseteq$'' is a direct consequence
of \eqref{Newt2}. In turn, the inclusion
``$\supseteq$'' can be proved by induction on $m$ by
using the fact that if $\gammab =
\{\gamma_n\}_{n=0}^{\infty} \in \cbb^{\zbb_+}$ and
$\gamma_n$ is a polynomial in $n$ of degree $m$, then
$(\triangle \gammab)_n$ is a polynomial in $n$ of
degree $m-1$.
\end{proof}
Now we apply the above formulas to operators. For
this, we attach to $T\in \ogr{\hh}$ and $h\in \hh$,
the sequence $\gammab_{T,h} =
\{(\gammab_{T,h})_n\}_{n=0}^{\infty}$ defined by
\begin{align*}
(\gammab_{T,h})_n=\|T^nh\|^2, \quad n\in \zbb_+.
\end{align*}
Note that the formula (iii) of Proposition~\ref{Newt3}
below is due to Agler and Stankus (see \cite[Eq.\
(1.3)]{Ag-St1}).
\begin{pro}\label{Newt3}
If $T\in \ogr{\hh}$, then the following formulas
hold{\em :}
\begin{enumerate}
\item[(i)] $(\triangle^m \gammab_{T,h})_n = (\triangle^m
\gammab_{T,T^nh})_0$ for all $h \in \hh$ and $m,n\in
\zbb_+$,
\item[(ii)]
$(\triangle^m \gammab_{T,h})_0 = (-1)^m\is{\bscr_m(T)h}{h}$
for all $h \in \hh$ and $m\in \zbb_+$,
\item[(iii)] $T^{*n}T^n = \sum_{k=0}^{\infty} (n)_k
\frac{(-1)^k}{k!} \bscr_k(T)$ for all $n\in \zbb_+$.
\end{enumerate}
\end{pro}
\begin{proof}
Applying \eqref{Newt1} to $\gammab_{T,h}$, we obtain
(i) and (ii). In turn, applying \eqref{Newt2} to
$\gammab_{T,h}$ and using (ii), we get (iii).
\end{proof}
Combining the conditions (i) and (ii) of
Proposition~\ref{Newt3}, we get the following
important property of $m$-isometries.
\begin{cor}[\mbox{\cite[p.\ 389]{Ag-St1}}] \label{Agl-St}
If $m\in \nbb$ and $T\in \ogr{\hh}$ is an
$m$-isometry, then $T$ is a $k$-isometry for every
integer $k \Ge m$.
\end{cor}
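For the reader's convenience, we sketch the argument. If
$T$ is an $m$-isometry, then $\bscr_m(T)=0$, so by the
conditions (i) and (ii) of Proposition~\ref{Newt3},
$(\triangle^m \gammab_{T,h})_n = (-1)^m
\is{\bscr_m(T)T^nh}{T^nh}=0$ for all $h\in \hh$ and $n\in
\zbb_+$. Hence $\triangle^k \gammab_{T,h} =
\triangle^{k-m}(\triangle^m \gammab_{T,h})=0$ for every
integer $k\Ge m$, and applying
Proposition~\ref{Newt3}(ii) once more we get
$\is{\bscr_k(T)h}{h}=0$ for all $h\in \hh$, that is,
$\bscr_k(T)=0$.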
The rest of this section is devoted to characterizing
the class of $m$-isometric operators. Note that the
equivalence (i)$\Leftrightarrow$(ii) of
Theorem~\ref{Newt4} below is due to Agler and Stankus
(see \cite[p.\ 389]{Ag-St1}). Denote by
$\{e_n\}_{n=0}^{\infty}$ the standard orthonormal
basis of $\ell^2$, the Hilbert space of all square
summable complex sequences indexed by $\zbb_+$. For a
given bounded sequence $\{\lambda_n\}_{n=0}^{\infty}
\subseteq \cbb$ there exists a unique operator
$W\in\ogr{\ell^2}$, called a {\em unilateral weighted
shift} with weights $\{\lambda_n\}_{n=0}^{\infty}$,
such that
\begin{align*}
We_n = \lambda_n e_{n+1}, \quad n\in \zbb_+.
\end{align*}
\begin{thm}\label{Newt4}
If $m\in \nbb$ and $T\in \ogr{\hh}$, then the
following conditions are equivalent{\em :}
\begin{enumerate}
\item[(i)] $T$ is an $m$-isometry,
\item[(ii)] $T^{*n}T^n$ is
a polynomial in $n$ of degree at most $m-1$,
\item[(iii)] for each $h\in \hh$,
$\|T^nh\|^2$ is a polynomial in $n$ of degree at most
$m-1$,
\item[(iv)] $T$ is injective and for each nonzero $h\in
\hh$, the unilateral weighted shift $W_{T,h}$ with
weights\footnote{\;$W_{T,h}$ is bounded because the
sequence of its weights is bounded above by $\|T\|$.}
$\big\{\frac{\|T^{n+1}h\|}{\|T^n h\|}\big\}_{n=0}^{\infty}$
is an $m$-isometry.
\end{enumerate}
Moreover, if $T$ is an $m$-isometry, then
\begin{align*}
T^{*n}T^n = \sum_{k=0}^{m-1} (n)_k \frac{(-1)^k}{k!}
\bscr_k(T), \quad n\in \zbb_+.
\end{align*}
\end{thm}
\begin{proof}
The implication (i)$\Rightarrow$(ii) and the
``moreover'' part are direct consequences of
Proposition~\ref{Newt3}(iii) and
Corollary~\ref{Agl-St}.
(ii)$\Rightarrow$(iii) Obvious.
(iii)$\Rightarrow$(i) Use Propositions~\ref{kerdelta}
and \ref{Newt3}(ii).
(i)$\Leftrightarrow$(iv) Apply
\cite[Proposition~2.4]{Ja-Ju-St}.
\end{proof}
As shown below, the equivalence (i)$\Leftrightarrow$(iii)
of Corollary~\ref{TrieuLe}, which is due to Abdullah and Le
\cite[Theorem~2.1]{Ab-Le16}, is an almost immediate
consequence of the Agler-Stankus assertion ``Hence if
$T\in\mathcal L(\hh)$ one sees via (1.3) that $T$ is an
$m$-isometry if and only if $s_T(k)$ is a polynomial in $k$
of degree less than $m$'' (see \cite[p.\ 389]{Ag-St1}).
\begin{cor} \label{TrieuLe}
Fix $m\in \nbb$. Let $W\in \ogr{\ell^2}$ be a unilateral
weighted shift with weights
$\{\lambda_n\}_{n=0}^{\infty}\subseteq \cbb$. Then the
following conditions are equivalent{\em :}
\begin{enumerate}
\item[(i)] $W$ is an $m$-isometry,
\item[(ii)] $\|W^n e_0\|^2$ is a
polynomial in $n$ of degree at most $m-1$, where
$e_0=(1,0,0,\ldots)$,
\item[(iii)] there exists a polynomial
$p\in \cbb[x]$ of degree at most $m-1$ such that
\begin{align} \label{Frank}
\text{$p(n) > 0$ and $|\lambda_n|^2 =
\frac{p(n+1)}{p(n)}$ for all $n\in \zbb_+$.}
\end{align}
\end{enumerate}
\end{cor}
\begin{proof}
(i)$\Rightarrow$(ii) Apply Theorem~\ref{Newt4}(iii).
(ii)$\Rightarrow$(iii) Let $p\in \cbb[x]$ be a polynomial
of degree at most $m-1$ such that $\|W^n e_0\|^2 = p(n)$
for all $n\in \zbb_+$. Since $m$-isometries are injective,
$p(n) > 0$ for all $n \in \zbb_+$. This and
\begin{align*}
|\lambda_n|^2 = \frac{\|W^{n+1} e_0\|^2}{\|W^n
e_0\|^2}, \quad n\in \zbb_+,
\end{align*}
imply (iii).
(iii)$\Rightarrow$(i) Denote by
$\{e_n\}_{n=0}^{\infty}$ the standard orthonormal
basis for $\ell^2$. Since
\begin{align*}
\|W^n h\|^2 = \sum_{j=0}^{\infty} |\is{h}{e_j}|^2
\|W^n e_j\|^2 \overset{\eqref{Frank}}=
\sum_{j=0}^{\infty} |\is{h}{e_j}|^2
\frac{p(n+j)}{p(j)}, \quad n \in \zbb_+, \, h \in
\ell^2,
\end{align*}
and the transformation $\triangle\colon \cbb^{\zbb_+} \to
\cbb^{\zbb_+}$ is linear and continuous, we infer from
Proposition~\ref{kerdelta} that $\triangle^m \gammab_{W,h}
= 0$ for all $h\in \ell^2$. This together with
Proposition~\ref{Newt3}(ii) completes the proof.
\end{proof}
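To illustrate the condition (iii), take, for instance,
$p(n)=n+1$. The unilateral weighted shift with weights
$\lambda_n=\sqrt{\frac{n+2}{n+1}}$, $n\in \zbb_+$,
satisfies $\|W^ne_0\|^2=n+1$ for all $n\in \zbb_+$, and
thus it is a $2$-isometry; it is a strict $2$-isometry
because $|\lambda_0|=\sqrt{2}$, so $W$ is not an isometry.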
\section{\label{Sec3}``Local'' characterizations of $m$-isometries}
In this section we prove, among other things, that if
an operator $T\in\ogr{\hh}$ has the property that
$\|T^n h\|^2$ is a polynomial in $n$ for every $h\in
\hh$, then there exists $m\in \nbb$ such that $T$ is
an $m$-isometry (see Theorem~\ref{weakmiso} below).
Before we do this, we need three lemmata. The first is
patterned on the polarization formula for polynomials
on vector spaces (see \cite[Theorem~A]{Bo-Si}; see
also \cite{M-O}). Its routine verification is left to
the reader.
\begin{lem} \label{polarf}
Let $\kbb\in \{\rbb,\cbb\}$, $\xscr$ be a vector space over
$\kbb$ and $\varphi\colon \xscr \times \xscr \to \kbb$ be a
mapping such that $\varphi(\cdot,h)$ and $\varphi(h,\cdot)$
are additive for every $h\in \xscr$. Then
\begin{align*}
\varphi(h,h) = \frac{1}{2} \sum_{j=0}^2 (-1)^{j}
\binom{2}{j} \varphi(h_0+ j h, h_0+ j h), \quad h,h_0 \in
\xscr.
\end{align*}
\end{lem}
The second lemma gives a sufficient condition for a
sequence of bounded operators to have at least one
vanishing term.
\begin{lem} \label{lnzero}
Let $\mathscr V$ be a nonempty open subset of $\hh$
and let $\{L_m\}_{m=1}^{\infty} \subseteq \ogr{\hh}$
be a sequence with the property that for every $h\in
\mathscr V$ there exists $m_h\in\nbb$ such that
\begin{align*}
\is{L_{m_h}h}{h}=0.
\end{align*}
Then there exists $m\in \nbb$ such that $L_{m}=0$.
\end{lem}
\begin{proof}
Define for every $k\in \nbb$ the subset $\mathscr V_k$ of
$\mathscr V$ by
\begin{align*}
\mathscr V_k = \big\{h\in \mathscr V\colon
\is{L_{k}h}{h}=0\big\}.
\end{align*}
Clearly, each set $\mathscr V_k$ is relatively closed
in $\mathscr V$. By assumption $\mathscr V =
\bigcup_{k=1}^{\infty} \mathscr V_k$. According to the
Baire category theorem \cite[Theorem~48.2,
Lemma~48.4]{Munkres00}, there exist $m\in \nbb$,
$h_0\in \mathscr V_{m}$ and $\epsilon \in (0,\infty)$
such that
\begin{align} \label{miso-a}
\text{$\is{L_{m}h}{h}=0$ for every $h\in \hh$ such
that $\|h-h_0\| < \epsilon$.}
\end{align}
Applying Lemma~\ref{polarf} to
$\varphi(f,g)=\is{L_{m}f}{g}$, we see that the
following equalities hold for every $h\in \hh$ with
$\|h\| < \frac{1}{2} \epsilon$,
\begin{align*}
\is{L_{m}h}{h} = \frac{1}{2} \sum_{j=0}^2 (-1)^{j}
\binom{2}{j} \is{L_{m}(h_0+ j h)}{h_0+ j
h}\overset{\eqref{miso-a}}=0.
\end{align*}
By the homogeneity of $L_{m}$, this implies that
$\is{L_{m}h}{h}=0$ for all $h\in \hh$, or equivalently
that $L_{m}=0$. This completes the proof.
\end{proof}
The third lemma provides yet another
characterization of $m$-isometries.
\begin{lem} \label{mialo-b}
If $m\in \nbb$, then an operator $T\in \ogr{\hh}$ is
an $m$-isometry if and only if there exists a nonempty
open subset $\mathscr V$ of $\hh$ such that for every
$h\in \mathscr V$, $\|T^nh\|^2$ is a polynomial in $n$
of degree at most $m-1$.
\end{lem}
\begin{proof}
The ``only if'' part follows from
Theorem~\ref{Newt4}(iii). To prove the ``if'' part,
take $h\in \mathscr V$. Since by assumption
$\|T^nh\|^2$ is a polynomial in $n$ of degree at most
$m-1$, we infer from Proposition~\ref{kerdelta} that
$\gammab_{T,h} \in \ker \triangle^{m}$. Hence, by
Proposition~\ref{Newt3}(ii), we have
\begin{align*}
(-1)^{m} \is{\bscr_{m}(T)h}{h} = (\triangle^{m}
\gammab_{T,h})_0 = 0.
\end{align*}
Hence $\is{\bscr_{m}(T)h}{h}=0$ for every $h\in
\mathscr V$. By Lemma~\ref{lnzero} or directly by
Lemma~\ref{polarf}, $\bscr_{m}(T)=0$, which means that
$T$ is an $m$-isometry.
\end{proof}
We are now ready to state and prove ``local''
characterizations of $m$-isometries. We also give a
topological description of certain sets of vectors $h$
having the property that $\|T^n h\|^2$ is a polynomial in
$n$ of prescribed degree, where $T$ is a strict
$m$-isometry. Note that the implication
(ii)$\Rightarrow$(i) of Theorem~\ref{weakmiso} below is not
true if we drop the assumption that $\hh$ is complete (see
Remark~\ref{wiel-nod}). Recall that $\cscr_T(h)$ is the
linear span of $\{T^n h\colon n\in \zbb_+\}$.
\begin{thm} \label{weakmiso}
The following conditions are equivalent for $T\in
\ogr{\hh}${\em :}
\begin{enumerate}
\item[(i)] there exists $m\in \nbb$ such that $T$
is an $m$-isometry,
\item[(ii)] for any $h\in \hh$,
$\|T^nh\|^2$ is a polynomial in $n$,
\item[(iii)] $\ker T=\{0\}$ and for any $h\in
\hh\setminus \{0\}$, there exists $m_h\in \nbb$ such
that the unilateral weighted shift $W_{T,h}$ with
weights $\big\{\frac{\|T^{n+1}h\|} {\|T^n
h\|}\big\}_{n=0}^{\infty}$ is an
\mbox{$m_h$-isometry},
\item[(iv)] there exists a nonempty open subset $\mathscr V$
of $\hh$ such that for any $h\in \mathscr V$, there
exists $m_h\in\nbb$ such that
\begin{align*}
\is{\bscr_{m_h}(T)h}{h}=0,
\end{align*}
\item[(v)] for any $h\in \hh$, there exists $m_h\in\nbb$
such that $T|_{\overline{\cscr_T(h)}}$ is an
$m_h$-isometry.
\end{enumerate}
Moreover, if $T\in \ogr{\hh}$ is a strict
$m$-isometry, where $m \Ge 2$, then
\begin{enumerate}
\item[(a)] $\hh=\Big\{h \in \hh\colon \|T^nh\|^2 \text{ is a
polynomial in $n$ of degree at most $m-1$}\Big\}$,
\item[(b)]
$\fscr:=\Big\{h \in \hh\colon \|T^nh\|^2 \text{ is a
polynomial in $n$ of degree at most $m-2$}\Big\}$ is a
closed nowhere dense subset of $\hh$,
\item[(c)]
$\mathscr U:=\Big\{h \in \hh\colon \|T^nh\|^2 \text{
is a polynomial in $n$ of degree $m-1$}\Big\}$ is an
open dense subset of $\hh$.
\end{enumerate}
\end{thm}
\begin{proof}
The implications (i)$\Rightarrow$(v) and
(v)$\Rightarrow$(iv) are obvious due to
\eqref{miso-1}. The implication (iv)$\Rightarrow$(i)
follows from Lemma~\ref{lnzero}, while the implication
(i)$\Rightarrow$(iii) is a direct consequence of
Theorem~\ref{Newt4}(iv).
(iii)$\Rightarrow$(ii) Assume that (iii) holds. Fix a
nonzero $h\in \hh$. By assumption, there is $m_h\in \nbb$
such that $W_{T,h}$ is an $m_h$-isometry. Let
$e_0=(1,0,0,\ldots)$ be the zeroth element of the standard
orthonormal basis of $\ell^2$. By Theorem~\ref{Newt4}(iii),
$\|W_{T,h}^ne_0\|^2$ is a polynomial in $n$ of degree at
most $m_h-1$; denote it by $p$. Then
\begin{align*}
p(n)=\|W_{T,h}^ne_0\|^2 = \frac{\|T^n h\|^2}{\|h\|^2},
\quad n\in \zbb_+,
\end{align*}
which yields (ii).
We now prove that (ii) implies (iv). For this, assume
that (ii) holds. Take $h\in \hh\setminus \{0\}$. Then
by assumption $\|T^nh\|^2$ is a polynomial in $n$ of
some degree $k_h\in \zbb_+$. By
Proposition~\ref{kerdelta}, $\gammab_{T,h} \in \ker
\triangle^{m_h}$ with $m_h=k_h+1$. Hence, by
Proposition~\ref{Newt3}(ii), we have
\begin{align*}
(-1)^{m_h} \is{\bscr_{m_h}(T)h}{h} = (\triangle^{m_h}
\gammab_{T,h})_0 = 0.
\end{align*}
Therefore (iv) is valid. This means that the
conditions (i)-(v) are equivalent.
It remains to prove the moreover part. The condition
(a) is a direct consequence of
Theorem~\ref{Newt4}(iii). To prove (b), recall that
$\cbb^{\zbb_+}$ is a topological vector space with the
topology of pointwise convergence on $\zbb_+$ denoted
here by $\tau$. Write $\mscr$ for the vector subspace
of $\cbb^{\zbb_+}$ which consists of all polynomials
in $n$ of degree at most $m-2$. We first show that
$\fscr$ is a closed subset of $\hh$. Indeed, if
$\{h_k\}_{k=1}^{\infty} \subseteq \fscr$ converges to
$h\in \hh$, then by the continuity of $T$, the
sequence $\{\gammab_{T,h_k}\}_{k=1}^{\infty} \subseteq
\mscr$ is $\tau$-convergent to $\gammab_{T,h}$ in
$\cbb^{\zbb_+}$. Since any finite dimensional vector
subspace of a topological vector space is closed, we
deduce that $\gammab_{T,h} \in \mscr$, which means
that $h\in \fscr$. Hence $\fscr$ is closed. Suppose,
to the contrary, that $\fscr$ is not a nowhere dense
subset of $\hh$. Then, by Lemma~\ref{mialo-b}, $T$ is
an $(m-1)$-isometry, which contradicts our assumption
that $T$ is a strict $m$-isometry. Finally, since by
(a), $\mathscr U=\hh\setminus \fscr$, the condition
(c) follows from (b). This completes the proof.
\end{proof}
\begin{rem}
Regarding the moreover part of Theorem~\ref{weakmiso},
it is worth pointing out that for any integer $m\Ge
2$, there exists a strict $m$-isometry. This was shown
by Athavale in the case of infinite dimensional
separable Hilbert spaces (see
\cite[Proposition~8]{At91}). In turn, in view of
Proposition~\ref{m-iso-alg} below, for any positive
odd number $m$, there exists a strict $m$-isometry
which is algebraic; what is more, this may happen even
in finite dimensional Hilbert spaces (see
Remark~\ref{inv-miso}). This justifies the following
construction. Take integers $m_1$ and $m_2$ such that
$2 \Le m_1 < m_2$. Let for $j=1,2$, $T_j\in
\ogr{\hh_j}$ be a strict $m_j$-isometry. Set
$\hh=\hh_1 \oplus \hh_2$ and $T=T_1 \oplus T_2$. Using
Theorem~\ref{Newt4}(iii) we deduce that $T$ is a
strict $m_2$-isometry. It is clear that for every $h
\in \hh_1 \oplus \{0\}$, $\|T^nh\|^2$ is a polynomial
in $n$ of degree at most $m_1-1$, hence of degree at
most $m_2-2$. In view of Theorem~\ref{weakmiso}, the
set $\mathscr U$ is open and dense in $\hh$ and its
complement $\fscr$ contains the closed vector subspace
$\hh_1 \oplus \{0\}$.
{$\diamondsuit$}
\end{rem}
\section{\label{Sec4}$m$-isometries on inner product spaces}
Motivated by our investigations and the needs of the
theory of unbounded operators (see e.g.,
\cite{Ja-St01}), we consider here $m$-isometries on
inner product spaces. We do not assume that the
operators in question are continuous. Suppose $\mscr$
is an inner product space and $\hh$ is its Hilbert
space completion. Denote by $\lino{\mscr}$ the algebra
of all linear operators on $\mscr$. A member of
$\lino{\mscr}$ can be thought of as a densely defined
operator in $\hh$ with invariant domain $\mscr$. Given
$T\in \lino{\mscr}$ and $m\in \nbb$, we say that $T$
is an {\em $m$-isometry}\/\footnote{\;Clearly, if
$\mscr=\hh$ and $T\in \ogr{\hh}$, then both
definitions of $m$-isometricity, the present one and
that of Section~\ref{Sec1}, coincide (cf.\
\eqref{miso-1}).} if
\begin{align} \label{def-miso}
\hat\fcal_{T;m}(h):=\sum\limits_{k=0}^m (-1)^k
\binom{m}{k} \|T^k h\|^2 = 0, \quad h \in \mscr.
\end{align}
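In the two lowest-order cases, \eqref{def-miso} states
that $\|Th\|^2=\|h\|^2$ for all $h\in \mscr$ (i.e., $T$ is
an isometry) when $m=1$, and that
$\|T^2h\|^2-2\|Th\|^2+\|h\|^2=0$ for all $h\in \mscr$ when
$m=2$.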
Similarly to Section~\ref{Sec1}, we define the notion
of a strict $m$-isometry. Arguing as in
Section~\ref{Sec2}, we verify that $T$ is an
$m$-isometry if and only if for each $h\in \mscr$,
$\|T^nh\|^2$ is a polynomial in $n$ of degree at most
$m-1$ (cf.\ Theorem~\ref{Newt4}). Consequently, if $T$
is an $m$-isometry, then $T$ is a $k$-isometry for
every integer $k \Ge m$. If $\mscr$ is infinite
dimensional it may happen that an $m$-isometry on
$\mscr$ is closed as an operator in $\hh$ but not
bounded (see e.g.\ \cite[Example~6.4]{Ja-St01}).
We begin our investigations by making the following
observation.
\begin{pro} \label{polar-1}
If $T\in\lino{\mscr}$ is an $m$-isometry, then for all
$f,g\in \mscr$, $\is{T^n f}{T^n g}$ is a polynomial in
$n$ of degree at most $m-1$, and
\begin{align*}
\is{T^n f}{T^n g}= \sum_{k=0}^{m-1} (n)_k
\frac{(-1)^k}{k!} \fcal_{T;k}(f,g), \quad n\in \zbb_+,
\, f, g \in \mscr,
\end{align*}
where $\fcal_{T;k}$ is the sesquilinear form on
$\mscr$ defined by
\begin{align*}
\fcal_{T;k}(f,g) = \sum_{j=0}^k (-1)^j \binom{k}{j}
\is{T^j f}{T^j g}, \quad f,g \in \mscr, \, k \in
\zbb_+.
\end{align*}
\end{pro}
\begin{proof}
We can argue as in Section~\ref{Sec2} and apply the
polarization formula to the sesquilinear form
$\mscr\times\mscr \ni (f,g) \to \is{T^n f}{T^n g} \in
\cbb$.
\end{proof}
Recall that an operator $N \in \lino{\mscr}$ is said
to be a {\em nilpotent operator} if there exists $k\in
\nbb$ such that $N^k=0$. If additionally $N^{k-1}\neq
0$, then $k$ is called the {\em index of nilpotency}
of $N$ and is denoted here by $\nil{N}$; note that
then the linear dimension of $\mscr$ must be at least
$\nil{N}$, so in particular $\mscr\neq \{0\}$. Again,
as in the case of $m$-isometries, if $\mscr$ is
infinite dimensional it may happen that a nilpotent
operator on $\mscr$ is closed as an operator in $\hh$
but not bounded (see \cite[Example~3.3]{Ota88}).
Theorem~\ref{Bang1-m}(i) below was proved by
Berm\'{u}dez, Martin\'{o}n, M\"{u}ller and Noda in the
case of bounded Hilbert space operators (see
\cite[Theorem~3]{Ber-Mar-Mul-No14}). In turn,
Theorem~\ref{Bang1-m}(ii) was shown by Le and
independently by Gu and Stankus, again for Hilbert
space operators (see \cite[Theorem~16]{Le15} and
\cite[Theorem~4]{Gu-St15}, resp.). The proof of
Theorem~\ref{Bang1-m}(i) is an adaptation (and a
simplification) of that of
\cite[Theorem~3]{Ber-Mar-Mul-No14} to the context of
inner product spaces.
\begin{thm} \label{Bang1-m}
Suppose $m\in \nbb$, $A\in \lino{\mscr}$ is an
$m$-isometry and $N\in \lino{\mscr}$ is a nilpotent
operator such that $AN=NA$. Set $m_N=m +
2(\nil{N}-1)$. Then
\begin{enumerate}
\item[(i)] $A+N$ is an $m_N$-isometry,
\item[(ii)] $A+N$ is a strict $m_N$-isometry
if and only if there exists $f_0\in \mscr$ such~that
\begin{align*}
\sum_{l=0}^{m-1} (-1)^{l} \binom{m-1}{l} \|A^{l}
N^{\nil{N}-1} f_0\|^2 \neq 0.
\end{align*}
\end{enumerate}
Moreover, if $A+N$ is a strict $m_N$-isometry, then
$A$ is a strict $m$-isometry.
\end{thm}
\begin{proof}
(i) Set $k=\nil{N}$ and $\kappa_n=\min\{n,k-1\}\in \zbb_+$
for $n\in \zbb_+$. By assumption and
Proposition~\ref{polar-1}, there exist polynomials
$p_{i,j;f}, q_{i,j;f} \in \cbb[x]$ such~that
\allowdisplaybreaks
\begin{align} \label{iso-21}
\left\langle A^{n}\Big(A^{j-i}N^{i}f\Big),
A^{n}\Big(N^{j}f\Big)\right\rangle = p_{i,j;f}(n),
\quad 0\Le i< j, \, n\in \zbb_+, \, f\in \mscr,
\\ \label{iso-22}
\left\langle A^{n}\Big(N^{i}f\Big),
A^{n}\Big(A^{i-j}N^{j}f\Big)\right\rangle =
q_{i,j;f}(n), \quad 0\Le j\Le i, n\in \zbb_+, \, f\in
\mscr.
\end{align}
Using Newton's binomial formula, we see that
\allowdisplaybreaks
\begin{align} \notag
\|(A+N)^n f\|^2 & = \left\langle \sum_{i=0}^{\kappa_n}
\frac{(n)_i}{i!} A^{n-i}N^if, \sum_{j=0}^{\kappa_n}
\frac{(n)_j}{j!} A^{n-j}N^jf\right\rangle
\\ \notag
& = \sum_{0\Le i < j \Le \kappa_n} \frac{(n)_i}{i!}
\frac{(n)_j}{j!} \left\langle
A^{n-j}\Big(A^{j-i}N^{i}f\Big),
A^{n-j}\Big(N^{j}f\Big)\right\rangle
\\ \notag
& \hspace{4ex}+ \sum_{0\Le j \Le i \Le \kappa_n}
\frac{(n)_i}{i!} \frac{(n)_j}{j!} \left\langle
A^{n-i}\Big(N^{i}f\Big),
A^{n-i}\Big(A^{i-j}N^{j}f\Big)\right\rangle
\\ \notag
& \hspace{-3.65ex}\overset{\eqref{iso-21} \&
\eqref{iso-22}}= \sum_{0\Le i < j \Le k-1}
\frac{(n)_i}{i!} \frac{(n)_j}{j!} p_{i,j;f}(n-j)
\\ \label{zeg-1}
& \hspace{8ex} + \sum_{0\Le j \Le i \Le k-1}
\frac{(n)_i}{i!} \frac{(n)_j}{j!} q_{i,j;f}(n-i),
\quad n \in \zbb_+, f\in \mscr.
\end{align}
Since $(n)_l$ is a polynomial in $n$ of degree $l$
and, by Proposition~\ref{polar-1}, $p_{i,j;f},
q_{i,j;f}$ are polynomials of degree at most $m-1$, we
conclude that $\|(A+N)^n f\|^2$ is a polynomial in $n$
of degree at most $m - 1 + 2(k-1)$. Therefore, $A+N$
is an $m_N$-isometry.
(ii) Observe that by Proposition~\ref{polar-1}, the
coefficient of the polynomial $q_{k-1,k-1;f}$ that
corresponds to the monomial $x^{m-1}$ equals
$\frac{(-1)^{m-1}}{(m-1)!}\fcal_{A;m-1}(N^{k-1} f,
N^{k-1}f)$. This together with \eqref{zeg-1} yields (ii).
The ``moreover'' part is a consequence of (ii). This
completes the proof.
\end{proof}
Regarding the moreover part of Theorem~\ref{Bang1-m},
it is worth pointing out that $A+N$ may not be a
strict $m_N$-isometry even if $A$ is a strict
$m$-isometry (see the paragraph preceding
\cite[Theorem 3]{Ber-Mar-Mul-No14}).
The question of when a scalar translate of a nilpotent
operator is an $m$-isometry is answered by
Proposition~\ref{Bang1} below. The assertion (i) of
this proposition was proved by Berm\'{u}dez,
Martin\'{o}n and Noda in the case of bounded Hilbert
space operators (see
\cite[Theorem~2.2]{Ber-Mar-No13}). Before proving
Proposition~\ref{Bang1}, we state a simple lemma whose
proof is left to the reader.
\begin{lem} \label{lim-1n}
If $p\in \cbb[x]$ is a nonzero polynomial, then
$\lim_{n\to\infty} |p(n)|^{1/n}=1$.
\end{lem}
\begin{pro} \label{Bang1}
Suppose $\mscr\neq \{0\}$ and $N\in \lino{\mscr}$ is a
nilpotent operator. Then the following statements are
valid{\em :}
\begin{enumerate}
\item[(i)] if $V\in \lino{\mscr}$ is an isometry such that
$VN=NV$, then $V+N$ is a strict $(2 \nil{N}-1)$-isometry,
\item[(ii)] if $z\in \cbb$ is such that
$\|(zI+N)^n h\|^2$ is a polynomial in $n$ for every
$h\in \mscr$, then $|z|=1$,
\item[(iii)] if $z\in \cbb$ and
$zI+N$ is an $m$-isometry, then~$|z|=1$.
\end{enumerate}
\end{pro}
\begin{proof}
In view of Proposition~\ref{polar-1} and
Theorem~\ref{Bang1-m}, it suffices to prove (ii). Under the
assumptions of (ii), we argue as follows. Set $k=\nil{N}$.
Let $h_0\in \mscr$ be such that $N^{k-1}h_0\neq 0$ and let
$p\in \cbb[x]$ be a polynomial such that
\begin{align*}
p(n)=\|(zI+N)^n h_0\|^2, \quad n\in \zbb_+.
\end{align*}
Clearly $p\neq 0$. Consider first the case $z=0$. Then
$p(n)=\|N^n h_0\|^2=0$ for all $n\Ge k$, which means
that the polynomial $p$ has infinitely many roots.
Hence $p=0$, a contradiction. Suppose now that $z\neq
0$. Using Newton's binomial formula, we obtain
\begin{align} \label{slon-2}
p(n) = \bigg\|\sum_{j=0}^{k-1} \frac{(n)_j}{j!}
z^{n-j} N^j h_0\bigg\|^2 = |z|^{2n}
\bigg\|\sum_{j=0}^{k-1} \frac{(n)_j}{j!z^j} N^j
h_0\bigg\|^2, \quad n\in \zbb_+.
\end{align}
Notice that $\big\|\sum_{j=0}^{k-1}
\frac{(n)_j}{j!z^j} N^j h_0\big\|^2$ is a polynomial
in $n$ which is nonzero (because $p(0)\neq 0$).
Combined with Lemma~\ref{lim-1n}, this yields
\begin{align*}
1=\lim_{n\to\infty} p(n)^{1/n}
\overset{\eqref{slon-2}}= |z|^2,
\end{align*}
which completes the proof.
\end{proof}
It is worth noting that the proof of
Proposition~\ref{Bang1}(iii) can be made much shorter
if we assume additionally that $\mscr=\hh$ and
$N\in\ogr{\hh}$. Indeed, then by the spectral mapping
theorem, $r(zI+N)=|z|$. Therefore, if $zI+N$ is an
$m$-isometry, then by \eqref{as-spec}, $|z|=1$.
\section{\label{Sec5}Orthogonality of generalized eigenvectors}
Theorem~\ref{alg-m-iso-lem} below provides a few
sufficient conditions for orthogonality of generalized
eigenvectors corresponding to distinct eigenvalues.
First we need the following lemma of algebraic nature
whose proof depends heavily on the Carlson theorem
recorded below.
\begin{thm}[\mbox{\cite{Car},\cite[Theorem 1]{Rubel}}] \label{Ca-Ru}
Let $\phi$ be an entire function on $\cbb$ such that
$\sup_{z \in \cbb} |\phi(z)|\E^{-\tau |z|} < \infty$
for some $\tau\in \rbb$ and $\sup_{y \in \rbb}
|\phi(\I y)|\E^{-\eta |y|} < \infty$ for some $\eta\in
(-\infty,\pi)$. If $\phi(n)=0$ for every $n\in\nbb$,
then $\phi(z)=0$ for all $z\in \cbb$.
\end{thm}
\begin{lem} \label{re-polyn}
Suppose $p,q \in \cbb[x]$ and $\alpha \in \tbb \setminus
\{1\}$ are such that
\begin{align} \label{penre}
p(n) = \mathrm{Re} (\alpha^n q(n)), \quad n \in
\zbb_+.
\end{align}
Then $\mathrm{Re}\, q = 0$ if $\alpha=-1$, and $q=0$
otherwise.
\end{lem}
\begin{proof}
We consider three mutually exclusive cases that together
exhaust all possibilities.
{\sc Case 1.} $\alpha=-1$.
It follows from \eqref{penre} that
\begin{align*}
p(2n) = (\mathrm{Re} \, q)(2n), \quad n\in \zbb_+.
\end{align*}
Then, by the Fundamental Theorem of Algebra, we have
\begin{align} \label{penre2}
p (n) = (\mathrm{Re} \, q)(n), \quad n\in \zbb_+.
\end{align}
As a consequence, we obtain
\begin{align*}
(\mathrm{Re} \, q)(2n+1) \overset{\eqref{penre2}}=
p(2n+1) \overset{\eqref{penre}}= - (\mathrm{Re} \,
q)(2n+1), \quad n\in \zbb_+.
\end{align*}
Thus, by the Fundamental Theorem of Algebra again,
$\mathrm{Re} \, q = 0$.
{\sc Case 2.} $\alpha=\E^{\I \vartheta}$ for some
$\vartheta \in (0,\pi)$.
Observe that
\begin{align} \notag
p(n) & \overset{\eqref{penre}}= \mathrm{Re} \, (\E^{\I
n \vartheta}) \mathrm{Re} \,(q(n)) - \mathrm{Im} \,
(\E^{\I n \vartheta}) \mathrm{Im} \,(q(n))
\\ \label{penre3}
&\hspace{1ex} =\cos(n \vartheta) (\mathrm{Re} \,q)(n)
- \sin (n \vartheta) (\mathrm{Im} \,q)(n), \quad n\in
\zbb_+.
\end{align}
Define the entire function $\phi$ on $\cbb$ by
\begin{align} \label{penre2.5}
\phi(z) = p(z) - \cos(\vartheta z) (\mathrm{Re}
\,q)(z) + \sin (\vartheta z) (\mathrm{Im} \,q)(z),
\quad z\in \cbb.
\end{align}
It follows from \eqref{penre3} that
\begin{align} \label{penre4}
\phi(n) = 0, \quad n\in \zbb_+.
\end{align}
Now we show that for any $\epsilon \in (0,\infty)$,
there exists $c_{\epsilon} \in (0,\infty)$ such that
\begin{align} \label{penre5}
|\phi(z)| \Le c_{\epsilon} \E^{(\vartheta + \epsilon)
|z|}, \quad z \in \cbb.
\end{align}
Indeed, if $r \in \cbb[x]$, then for every $\epsilon
\in (0,\infty)$ there exists $d_{\epsilon} \in
(0,\infty)$ such that $|r(z)| \Le d_{\epsilon}
\E^{\epsilon |z|}$ for all $z\in \cbb$. It is easily
seen that $|\cos(z\vartheta)| \Le \E^{\vartheta |z|}$
and $|\sin(z\vartheta)| \Le \E^{\vartheta |z|}$ for
all $z\in \cbb$. Putting this all together yields
\eqref{penre5}.
By \eqref{penre5}, $\phi$ is of exponential type
(i.e., $|\phi(z)| \Le c \E^{\tau |z|}$ for all $z \in
\cbb$ and for some $c,\tau\in \rbb$) and there exist
$d\in (0,\infty)$ and $\eta \in (0,\pi)$ such~that
\begin{align*}
|\phi(\I y)| \Le d\E^{\eta |y|}, \quad y\in \rbb.
\end{align*}
These two facts and \eqref{penre4} imply that the
entire function $\phi$ satisfies the assumptions of
Theorem \ref{Ca-Ru}. Therefore by this theorem
$\phi=0$. Combined with \eqref{penre2.5}, this yields
\begin{align} \label{penre6}
p(z) = \cos(\vartheta z) (\mathrm{Re} \,q)(z) - \sin
(\vartheta z) (\mathrm{Im} \,q)(z), \quad z\in \cbb.
\end{align}
Substituting $z=\I y$ with $y\in \rbb$ into
\eqref{penre6}, we obtain
\begin{align} \label{penre7}
p(\I y) = u_1(y) \E^{-\vartheta y} + u_2(y)
\E^{\vartheta y}, \quad y\in \rbb,
\end{align}
where $u_1, u_2 \in \cbb[x]$ are polynomials given by
\begin{align} \label{penre7.5}
2 u_1 = (\mathrm{Re} \,q)(\I x) + \I (\mathrm{Im}
\,q)(\I x) \quad \text{and} \quad 2 u_2 = (\mathrm{Re}
\,q)(\I x) - \I (\mathrm{Im} \,q)(\I x).
\end{align}
It follows from \eqref{penre7} that
\begin{align} \label{penre8}
u_2(y)=\frac{p(\I y)}{\E^{\vartheta y}} -
\frac{u_1(y)}{\E^{2 \vartheta y}}, \quad y\in \rbb.
\end{align}
Using the fact that $p, u_1, u_2 \in \cbb[x]$ and passing
to the limit as $y\to \infty$ in \eqref{penre8}, we deduce
that $u_2=0$. A similar argument shows that $u_1=0$.
Combined with \eqref{penre7.5}, this implies that
$\mathrm{Re} \,q = 0$ and $\mathrm{Im} \,q = 0$. Hence
$q=0$.
{\sc Case 3.} $\alpha=\E^{\I \vartheta}$ for some
$\vartheta \in (\pi,2\pi)$.
In view of \eqref{penre} we have
\begin{align*}
p(n) = \mathrm{Re} (\bar \alpha^n q^*(n)), \quad n \in
\zbb_+.
\end{align*}
Since $\bar \alpha = \E^{\I (2\pi -\vartheta)}$ and
$2\pi -\vartheta \in (0,\pi)$, we can apply Case 2. As
a consequence, we obtain $q^*=0$. Thus $q=0$, which
completes the proof.
\end{proof}
For the reader's convenience, we recall necessary
terminology related to generalized eigenvectors. Given
$T\in\lino{\mscr}$ and $z\in \cbb$, we set
\begin{align*}
\gev{T}{z}=\bigcup_{n\in \nbb} \ker((T-z I)^{n}).
\end{align*}
Since $\ker((T-z I)^{n}) \subseteq \ker((T-z
I)^{n+1})$ for all $n\in \nbb$, we deduce that
$\gev{T}{z}$ is a vector subspace of $\mscr$ which is
invariant for $T$. Notice that $\gev{T}{z} \neq \{0\}$
if and only if $z$ is an eigenvalue of $T$. If $z$ is
an eigenvalue of $T$, then a nonzero element of
$\gev{T}{z}$ is called a {\em generalized eigenvector}
of $T$ corresponding to $z$, and the vector space
$\gev{T}{z}$ itself is called the {\em generalized
eigenspace} (or {\em spectral space}) of $T$
corresponding to $z$.
We are now ready to prove the aforesaid criteria for
orthogonality of generalized eigenvectors. Recall that
$\ubb=\{-1,1\} \times \{-\I,\I\}$.
\begin{thm} \label{alg-m-iso-lem}
Let $z_1$ and $z_2$ be two distinct elements of
$\tbb$. Suppose $T\in\lino{\mscr}$ and $h_j \in
\gev{T}{z_j}$ for $j=1,2$. Then the following
assertions hold{\em :}
\begin{enumerate}
\item[(i)] $\mathrm{Re}\is{T^nh_1}{T^nh_2}=0$ for all
$n\in \zbb_+$ provided $z_1=-z_2$ and $\|T^n(h_1 +
h_2)\|^2$ is a polynomial in $n$,
\item[(ii)] $\is{T^nh_1}{T^nh_2}=0$
for all $n\in \zbb_+$ provided $z_1=-z_2$ and there is
$(\epsilon_1,\epsilon_2) \in\ubb$ such that
$\|T^n(\epsilon_k h_1 + h_2)\|^2$ is a polynomial in
$n$ for $k=1,2$,
\item[(iii)] $\is{T^nh_1}{T^nh_2}=0$
for all $n\in \zbb_+$ provided $z_1\neq -z_2$ and
$\|T^n(h_1 + h_2)\|^2$ is a polynomial in $n$.
\end{enumerate}
\end{thm}
\begin{proof} By assumption there exist $k_1,k_2 \in \nbb$ such that
$h_j \in \ker((T - z_j I)^{k_j})$ for $j=1,2$. Before
justifying the assertions (i)-(iii), we discuss the general
case when $\|T^n(h_1 + h_2)\|^2$ is a polynomial in $n$.
Let $r\in \cbb[x]$ be a polynomial such that
$r(n)=\|T^n(h_1 + h_2)\|^2$ for all $n\in \zbb_+$. Set
$N_j= T - z_j I$ for $j=1,2$. Then
\begin{align} \notag
r(n) & = \|(I+ \bar z_1 N_1)^n h_1\|^2 + \|(I+ \bar
z_2 N_2)^n h_2\|^2
\\ \label{pietrz1}
&\hspace{6.5ex}+ 2 \mathrm{Re} (\alpha^n \is{(I + \bar
z_1 N_1)^n h_1}{(I + \bar z_2 N_2)^n h_2}), \quad n\in
\zbb_+,
\end{align}
where $\alpha:=z_1 \bar z_2$. Since $z_1, z_2 \in
\tbb$ and $z_1 \neq z_2$, we see that $\alpha \in
\tbb\setminus \{1\}$. By assumption and Newton's
binomial formula, we have
\begin{align} \label{pietrz3}
(I+ \bar z_j N_j)^n h_j = \sum_{l=0}^n \binom{n}{l}
\bar z_j^l N_j^l h_j = \sum_{l=0}^{k_j-1}
\frac{(n)_l}{l!} \bar z_j^l N_j^l h_j, \quad j=1,2.
\end{align}
Since for any $l\in \zbb_+$, $(n)_l$ is a polynomial in
$n$, \eqref{pietrz3} implies that $\|(I + \bar z_1 N_1)^n
h_1\|^2$, $\|(I + \bar z_2 N_2)^n h_2\|^2$ and $\is{(I+
\bar z_1 N_1)^n h_1}{(I+ \bar z_2 N_2)^n h_2}$ are
polynomials in $n$. It follows from \eqref{pietrz1} that
there exist polynomials $p, q \in \cbb[x]$ such that
\begin{align} \label{pietrz2}
p(n) = \mathrm{Re}(\alpha^n q(n)), \quad n\in \zbb_+,
\end{align}
where
\begin{align*}
q(n) = \is{(I+ \bar z_1 N_1)^n h_1}{(I+ \bar z_2
N_2)^n h_2}, \quad n\in \zbb_+.
\end{align*}
The above discussion also gives the following identity:
\begin{align} \label{asu-1}
\is{T^n h_1}{T^n h_2} = \alpha^n q(n), \quad n\in
\zbb_+.
\end{align}
(i) If $z_1=-z_2$, then $\alpha=-1$ and thus by
\eqref{pietrz2} and Lemma~\ref{re-polyn},
$\mathrm{Re}\, q = 0$, or equivalently by
\eqref{asu-1}, $\mathrm{Re}\is{T^nh_1}{T^nh_2}=0$ for
all $n\in \zbb_+$.
(ii) Assume that $\|T^n(\epsilon h_1 + h_2)\|^2$ is a
polynomial in $n$ for $\epsilon\in \{1,\I\}$ (the
remaining cases can be proved in the same way).
Applying (i) to the pairs $(h_1,h_2)$ and $(\I
h_1,h_2)$, we see that
\begin{align*}
\mathrm{Re}\is{T^nh_1}{T^nh_2}=0=\mathrm{Im}\is{T^nh_1}{T^nh_2},
\quad n \in \zbb_+,
\end{align*}
which yields (ii).
(iii) Since now $\alpha \in \tbb\setminus \{-1\}$, the
assertion (iii) is a direct consequence of
\eqref{pietrz2}, \eqref{asu-1} and
Lemma~\ref{re-polyn}. This completes the proof.
\end{proof}
\begin{cor} \label{Bang-ch}
Let $z_1$ and $z_2$ be two distinct elements of
$\tbb$. Suppose $T\in\lino{\mscr}$ and $h_j \in
\gev{T}{z_j}$ for $j=1,2$. Consider the following
conditions{\em :}
\begin{enumerate}
\item[(i)] $z_1 = -z_2$ and there is
$(\epsilon_1,\epsilon_2) \in \ubb$ such that
$\|T^n(\epsilon_k T^j h_1 + h_2)\|^2$ is a polynomial
in $n$ for $k=1,2$ and every $j\in \zbb_+$,
\item[(ii)] $z_1 \neq -z_2$ and
$\|T^n(T^j h_1 + h_2)\|^2$ is a polynomial in $n$ for
every $j\in \zbb_+$.
\end{enumerate}
Then any of the conditions {\em (i)} and {\em (ii)} implies
that $\overline{\cscr_T(h_1)} \perp
\overline{\cscr_T(h_2)}$.
\end{cor}
Regarding Theorem~\ref{alg-m-iso-lem} and its proof,
it is worth noticing that if $z\in \tbb$ is an
eigenvalue of an operator $T\in \lino{\mscr}$ and $h$
is in $\gev Tz$, then $T^n h$ may not be a polynomial
in $n$ (though, by Proposition~\ref{Bang1}, $\|T^n
h\|^2$ is). A simple example is given below.
\begin{exa}
Take $\mscr=\cbb^2$, $z\in \tbb\setminus \{1\}$, $T=
[\begin{smallmatrix} z & 1
\\ 0 & z \end{smallmatrix}]$ and
$h=[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}]$.
Then $h\in \ker(T-zI)$ and $T^n h = z^n h$ for all
$n\in \zbb_+$. As a consequence, $\|T^n h\|^2=\|h\|^2$
is a polynomial in $n$, however $T^n h$ is not a
polynomial in $n$ because the sequence $\{T^n
h\}_{n=0}^{\infty}$ is not constant and nonconstant
polynomials in $n$ are unbounded.
{$\diamondsuit$}
\end{exa}
It turns out that the assumptions of
Theorem~\ref{alg-m-iso-lem}(i) do not imply that $h_1
\perp h_2$. In fact, it can be even worse as shown in
the example below.
\begin{exa}
Let $\mscr=\cbb^2$, $T=[\begin{smallmatrix} \I & 2
\\ 0 & -\I \end{smallmatrix}]$, $h_1=[\begin{smallmatrix} 1
\\ 0 \end{smallmatrix}]$, $h_2=[\begin{smallmatrix} \I
\\ 1 \end{smallmatrix}]$, $z_1=\I$ and $z_2=-\I$. Then
$z_1,z_2 \in \tbb$, $z_1\neq z_2$, $z_1 = - z_2$, $h_1
\in \ker(T-z_1 I)$, $h_2 \in \ker(T-z_2 I)$ and
$T^2=-I$. It is now a matter of routine to verify that
$\|T^n(h_1+h_2)\|^2=3$ for all $n\in \zbb_+$, which
means that $\|T^n(h_1+h_2)\|^2$ is a polynomial in
$n$. Clearly $\is{T^kh_1}{T^lh_2}=-\I^{k+l+1}$ for all
$k,l\in \zbb_+$.
{$\diamondsuit$}
\end{exa}
\section{\label{Sec6}Jordan blocks and algebraic operators}
We begin by recalling the necessary terminology and facts
related to the concept of a Jordan block. Let $\mscr$ be an
inner product space and $\hh$ be its Hilbert space
completion. Suppose that $z$ is an eigenvalue of $T\in
\lino{\mscr}$ and $h$ is in $\gev{T}{z}\setminus \{0\}$.
Then there exists $k \in \nbb$ such that
$h\in\ker((T-zI)^k)$. Using Newton's binomial formula, we
deduce that $\cscr_T(h)$ is the linear span of the vectors
$h$, $Th$, \ldots, $T^{k-1}h$, so $\cscr_T(h)$ is a finite
dimensional subspace of $\ker((T-zI)^k) \subseteq
\gev{T}{z}$. Clearly, we have
\begin{align} \label{cyc2}
(T|_{\cscr_T(h)} - z I_{\cscr_T(h)})^{k}=(T - z
I)^{k}|_{\cscr_T(h)}=0,
\end{align}
which implies that $T|_{\cscr_T(h)}$ is an algebraic
operator whose minimal polynomial takes the form
$(x-z)^n$ for some $n\Le k$. (Recall that an operator
$S\in \lino{\mscr}$ is said to be {\em algebraic} if
there exists a nonzero polynomial $p \in \cbb[x]$ such
that $p(S)=0$.)
Given $T\in \lino{\mscr}$, $h\in \mscr\setminus\{0\}$
and $z\in \cbb$, we say that $T$ admits a {\em Jordan
block} $J$ (or that $J$ is a {\em Jordan block} of
$T$\/) at $h$ with eigenvalue $z$ if
$J=T|_{\cscr_T(h)}$ and $J-z I_{\cscr_T(h)}$ is a
nilpotent operator. Notice that $T$ admits a Jordan
block at $h$ with eigenvalue $z$ if and only if $h\in
\gev{T}{z} \setminus \{0\}$. Moreover, if this is the
case, then by the spectral mapping theorem and
\eqref{cyc2}, $\sigma(T|_{\cscr_T(h)}) = \{z\}$ and
$z$ is an eigenvalue of $T$ (recall that $\dim
\cscr_T(h) < \infty$). This justifies that part of the
definition of a Jordan block which refers to the
expression ``with eigenvalue $z$''.
Suppose that $T$ admits a Jordan block $J$ at $h$ with
eigenvalue $z$. Set $N=J-zI_{\cscr_T(h)}$ and
$k=\nil{N}$. Then $\{0\} \varsubsetneq \jd{N}
\varsubsetneq \ldots \varsubsetneq \jd{N^k} = \cscr_T(h)$.
Since $\nil{N}=k$ and every vector of $\cscr_T(h)$ is of
the form $p(N)h$ with $p\in \cbb[x]$, we have
$N^{k-1}h\neq 0$, so the vectors $h, Nh, \ldots,
N^{k-1}h$ are linearly independent; consequently, $\dim
\cscr_T(h)=k$ and $\dim \jd{N^j}=j$ for $j=1,\ldots,k$.
Hence, the vectors $e_j:=N^{k-j}h$, $j=1,\ldots,k$, form a
Hamel basis of $\cscr_T(h)$ such that $Ne_1=0$ and
$Ne_j=e_{j-1}$ for $j=2,\ldots,k$. Then the matrix
representation of $J$ with respect to this basis takes
the familiar form (with zero entries in blank spaces)
\begin{align*}
\left[
\begin{matrix}
z & 1 & & &
\\
& z & 1 & &
\\
& & \ddots & \ddots&
\\
& & & z & 1
\\
& & & & z
\end{matrix} \right].
\end{align*}
Moreover, by Proposition~\ref{Bang1}, $J$ is an
$m$-isometry for some $m\in \nbb$ if and only if
$|z|=1$, and if this is the case, then $J$ is a strict
$(2 \nil{N}-1)$-isometry.
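For example, if $k=2$ and $|z|=1$, then $J^n e_2 = z^n e_2
+ n z^{n-1} e_1$ for all $n\in \zbb_+$, so
$\|J^ne_2\|^2=n^2+1$ is a polynomial in $n$ of degree $2$,
in accordance with the fact that $J$ is a strict
$3$-isometry.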
Theorem~\ref{alg-m-iso-lem-b} below, which essentially
follows from Corollary~\ref{Bang-ch} to Theorem~\ref{alg-m-iso-lem},
characterizes ``orthogonality'' of Jordan blocks
corresponding to distinct eigenvalues of modulus $1$.
The most striking of these is the implication
(ii)$\Rightarrow$(i).
\begin{thm} \label{alg-m-iso-lem-b}
Suppose $T\in\lino{\mscr}$ admits a Jordan block at
$h_j\in \mscr$ with eigenvalue $z_j\in \tbb$ for
$j=1,2$ and $z_1\neq z_2$. Then the following
conditions are~equivalent{\em :}
\begin{enumerate}
\item[(i)] $\cscr_T(h_1) \perp \cscr_T(h_2)$,
\item[(ii)] either $z_1=-z_2$ and there is
$(\epsilon_1,\epsilon_2) \in\ubb$ such that
$\|T^n(\epsilon_k T^j h_1 + h_2)\|^2$ is a polynomial
in $n$ for $k=1,2$ and every $j\in \zbb_+$, or
$z_1\neq -z_2$ and $\|T^n(T^jh_1 + h_2)\|^2$ is a
polynomial in $n$ for every $j\in \zbb_+$,
\item[(iii)] $\|T^n(g_1 + h_2)\|^2$ is a
polynomial in $n$ for every $g_1\in \cscr_T(h_1)$,
\item[(iv)] $\|T^n(g_1 + g_2)\|^2$ is a
polynomial in $n$ for all $g_1\in \cscr_T(h_1)$ and
$g_2\in \cscr_T(h_2)$,
\item[(v)] there is $m\in \nbb$ such that $T|_{\cscr_T(h_1) +
\cscr_T(h_2)}$ is an $m$-isometry.
\end{enumerate}
\end{thm}
\begin{proof}
The implication (v)$\Rightarrow$(iv) follows from
Proposition~\ref{polar-1}. The implications
(iv)$\Rightarrow$(iii) and (iii)$\Rightarrow$(ii) are
obvious.
(ii)$\Rightarrow$(i) Apply Corollary~\ref{Bang-ch}.
(i)$\Rightarrow$(v) By assumption and
Proposition~\ref{Bang1}, $T|_{\cscr_T(h_j)}$ is an
$m_j$-isometry, where $m_j=2 \nil{N_j}-1$ with
$N_j:=T|_{\cscr_T(h_j)}-z_j I_{\cscr_T(h_j)}$ for $j=1,2$.
Hence $T|_{\cscr_T(h_j)}$ is an $m$-isometry with
$m=\max\{m_1,m_2\}$ for $j=1,2$. Using \eqref{def-miso},
(i) and the fact that the vector spaces $\cscr_T(h_1)$,
$\cscr_T(h_2)$ and $\cscr_T(h_1) + \cscr_T(h_2)$ are
invariant for $T$, we easily verify that the operator
$T|_{\cscr_T(h_1) + \cscr_T(h_2)}$ is an $m$-isometry. This
completes the proof.
\end{proof}
\begin{rem}
It is worth mentioning that some implications of
Theorem~\ref{alg-m-iso-lem-b} can also be proved in a
different way. Namely, the implication (iv)$\Rightarrow$(v)
is a direct consequence of Theorem~\ref{weakmiso}. In turn,
the implication (v)$\Rightarrow$(i) can be proved as
follows. Under the assumptions of
Theorem~\ref{alg-m-iso-lem-b}, suppose that
$T_0:=T|_{\mscr_0}$ is an $m$-isometry, where
$\mscr_0:=\cscr_T(h_1) + \cscr_T(h_2)$. Recall that
$\mscr_0$ is a finite dimensional vector space which is
invariant for $T$. Moreover, there exist $k_1,k_2 \in \nbb$
such that
\begin{align} \label{rodo1}
\cscr_T(h_j)\subseteq \mscr_0 \cap \ker{(T - z_j
I)^{k_j}} = \ker{(T_0 - z_j I_{\mscr_0})^{k_j}}, \quad
j=1,2.
\end{align}
Noticing that
\begin{align*}
(T_0 - z_1 I_{\mscr_0})^{k_1}(T_0 - z_2
I_{\mscr_0})^{k_2}(g_1+g_2) \overset{\eqref{rodo1}}=
0, \quad g_1 \in \cscr_T(h_1), \, g_2 \in
\cscr_T(h_2),
\end{align*}
we see that $T_0$ is an algebraic operator (observe that
$\cscr_T(h_1) \cap \cscr_T(h_2)=\{0\}$ due to \eqref{rodo1}
and \cite[Theorem~3.5.2]{Bi-So87}). Since $\mscr_0$ is
finite dimensional, $\mscr_0$ is a Hilbert space and
$T_0\in \ogr{\mscr_0}$. Define the polynomial $p\in
\cbb[x,y]$ by $p(x,y)=(xy-1)^{m}$. Then one can verify that
\begin{align*}
p(T_0)=(-1)^m\bscr_m(T_0)=0,
\end{align*}
where the operator $p(T_0)$ is defined as in
\cite[(1)]{A-H-S}. Hence $T_0$ is a root of $p$ in the
sense of \cite[p.\ 126]{A-H-S}. Note now that if
$\lambda,\mu\in \sigma(T_0)$ and $\lambda\neq \mu$,
then $p(\lambda,\bar \mu)\neq 0$. Indeed, otherwise
$p(\lambda,\bar \mu)= 0$, which implies that
$\lambda\bar \mu=1$. Since by \eqref{as-spec} and
$\dim \mscr_0 < \infty$, $\sigma(T_0) \subseteq \tbb$,
we deduce that $\lambda = \mu$, a contradiction.
Applying \cite[Lemma~19]{A-H-S}, we get
$\gev{T_0}{z_1} \perp \gev{T_0}{z_2}$. Since by
\eqref{rodo1}, $\cscr_T(h_j) \subseteq \gev{T_0}{z_j}$
for $j=1,2$, we conclude that $\cscr_T(h_1) \perp
\cscr_T(h_2)$, which gives (i).
{$\diamondsuit$}
\end{rem}
Using Theorem~\ref{alg-m-iso-lem-b}, we can
completely characterize algebraic $m$-isometries on
inner product spaces. Note that the equivalence
(i)$\Leftrightarrow$(iv) of
Proposition~\ref{m-iso-alg} below was proved by
Berm\'{u}dez, Martin\'{o}n and Noda in the finite
dimensional case (see
\cite[Theorem~2.7]{Ber-Mar-No13}).
\begin{pro} \label{m-iso-alg}
Suppose $\mscr\neq \{0\}$ and $T\in \lino{\mscr}$.
Then the following conditions are equivalent{\em :}
\begin{enumerate}
\item[(i)] $T$ is an $m$-isometric algebraic operator
for some $m\in\nbb$,
\item[(ii)] $T$ is algebraic and $\|T^n h\|^2$ is a polynomial in $n$ for
every $h\in \mscr$,
\item[(iii)] $T$ has a nontrivial orthogonal decomposition\/\footnote{\;The
orthogonal decomposition in \eqref{sta-orth-1}
corresponds to the orthogonal decomposition
$\mscr=\mscr_1\oplus \ldots \oplus \mscr_{\varkappa}$,
while nontriviality means that $\mscr_j\neq \{0\}$ for
all $j=1, \ldots, \varkappa$.}
\begin{align} \label{sta-orth-1}
T=(z_1 I_{\mscr_1} + N_1) \oplus \ldots \oplus
(z_\varkappa I_{\mscr_\varkappa} + N_\varkappa) \qquad
(\varkappa\in \nbb),
\end{align}
where $z_1, \ldots, z_\varkappa$ are distinct elements
of $\tbb$ and $N_1 \in \lino{\mscr_1}, \ldots,
N_\varkappa \in \lino{\mscr_\varkappa}$ are nilpotent
operators,
\item[(iv)] $T$ is algebraic and there exist an
isometric $($equivalently, a unitary$)$ operator $V\in
\lino{\mscr}$ and a nilpotent operator $N\in
\lino{\mscr}$ such that $VN=NV$ and $T=V+N$.
\end{enumerate}
Moreover, if {\em (iii)} holds, then $T$ is a strict
$m$-isometry with
\begin{align} \label{maxnj}
m = \max_{j\in \{1, \ldots, \varkappa\}} \big(2
\nil{N_j}-1\big).
\end{align}
\end{pro}
\begin{proof}
(i)$\Rightarrow$(ii) This implication is a direct
consequence of Proposition~\ref{polar-1}.
(ii)$\Rightarrow$(iii) Since $T$ is algebraic, there
exist $\varkappa\in \nbb$, $k_1, \ldots, k_\varkappa
\in \nbb$, distinct numbers $z_1, \ldots, z_\varkappa
\in \cbb$ and nonzero vector subspaces $\mscr_1,
\ldots, \mscr_\varkappa$ of $\mscr$ such that $\mscr=
\mscr_1 \dotplus \ldots \dotplus \mscr_\varkappa$ (a
direct sum), $T(\mscr_j) \subseteq \mscr_j$ and $(T
-z_j I)^{k_j}|_{\mscr_j} = 0$ for $j=1, \ldots,
\varkappa$ (see e.g., \cite[Section~6]{C-J-S09}).
Then, by Proposition~\ref{Bang1}(ii), $z_j\in \tbb$
for $j=1, \ldots, \varkappa$. Applying the implication
(iv)$\Rightarrow$(i) of Theorem~\ref{alg-m-iso-lem-b}
shows that $\mscr_i \perp \mscr_j$ for all $i\neq j$,
which yields (iii).
(iii)$\Rightarrow$(i) Set $k_j=\nil{N_j}$ for
$j=1,\ldots,\varkappa$. Since $p(T)=0$ for $p\in \cbb[x]$
given by
\begin{align*}
p=(x-z_1)^{k_1} \cdots (x-z_\varkappa)^{k_\varkappa},
\end{align*}
the operator $T$ is algebraic. In turn, by
Proposition~\ref{Bang1}(i), $T_j:=z_j I_{\mscr_j} +
N_j$ is an $m$-isometry for $j=1, \ldots, \varkappa$,
where $m$ is as in \eqref{maxnj}. Hence by
\eqref{def-miso} and \eqref{sta-orth-1}, $T$ is an
$m$-isometry. We now show that $T$ is a strict
$m$-isometry. Let $j_0\in \{1, \ldots,\varkappa\}$ be
such that $m=2k_{j_0} -1$. Then by \eqref{def-miso}
and Proposition~\ref{Bang1}(i) applied to $T_{j_0}$,
there exists $h_{j_0}\in \mscr_{j_0}$ such that
$\hat\fcal_{T_{j_0};m-1}(h_{j_0}) \neq 0$, which
together with \eqref{sta-orth-1} implies that
$\hat\fcal_{T;m-1}(g_{0}) \neq 0$, where $g_0:=g_{0,1}
\oplus \ldots \oplus g_{0,\varkappa}$ with $g_{0,j}=0$
for all $j \neq j_0$ and $g_{0,j_0}=h_{j_0}$. This
shows that the operator $T$ is not an
$(m-1)$-isometry.
(iii)$\Rightarrow$(iv) Clearly $V:=z_1 I_{\mscr_1}
\oplus \ldots \oplus z_\varkappa I_{\mscr_\varkappa}$
and $N:= N_1 \oplus \ldots \oplus N_\varkappa$
satisfy~(iv).
(iv)$\Rightarrow$(i) It suffices to apply
Proposition~\ref{Bang1}(i).
\end{proof}
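To give a concrete instance of the decomposition
\eqref{sta-orth-1}, one may take, for example,
$\mscr=\cbb^3$ and $T=[1] \oplus [\begin{smallmatrix} -1 &
1 \\ 0 & -1 \end{smallmatrix}]$, i.e., $\varkappa=2$,
$z_1=1$, $N_1=0$, $z_2=-1$ and $\nil{N_2}=2$; by the
moreover part of Proposition~\ref{m-iso-alg}, $T$ is a
strict $3$-isometry.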
\begin{rem} \label{inv-miso}
In view of Proposition~\ref{m-iso-alg}, if $T\in
\ogr{\hh}$ is an algebraic strict $m$-isometry, then
$m$ is a positive odd number\footnote{\;The oddness of
$m$ can also be deduced from
\cite[Proposition~1.23]{Ag-St1} and the inclusion
$\sigma(T) \subseteq \tbb$ which follows from
Proposition~\ref{m-iso-alg}.}. In particular, this is
the case for strict $m$-isometries on finite
dimensional Hilbert spaces (use the Cayley-Hamilton
theorem). According to Proposition~\ref{m-iso-alg},
for any $\epsilon \in (0,\infty)$ and for any positive
odd number $m$, there exists a strict $m$-isometry $T$ on
a finite dimensional Hilbert space (of dimension at
least $\frac{1}{2}(m+1)$) such that $\|T\| \Le 1 +
\epsilon$.
{$\diamondsuit$}
\end{rem}
\begin{rem} \label{wiel-nod}
The implication (ii)$\Rightarrow$(i) of
Proposition~\ref{m-iso-alg} is not true if the assumption on
algebraicity is dropped. It suffices to consider the
countable orthogonal sum $T:=\bigoplus_{n=1}^{\infty}
T_n$ on ``finite vectors'', where each $T_n$ is a
strict $m_n$-isometry with $m_n \to \infty$ as $n\to
\infty$. If $\{m_n\}_{n=1}^{\infty}$ are positive odd
numbers, then in view of Remark~\ref{inv-miso} we can
also guarantee that $\sup_{n\in \nbb} \|T_n\| <
\infty$, or equivalently that $T$ is continuous.
{$\diamondsuit$}
\end{rem}
As shown below, the only compact $m$-isometries are
algebraic operators on finite dimensional Hilbert
spaces, and as such are described by
Proposition~\ref{m-iso-alg}.
\begin{pro} \label{compop}
Let $T\in \ogr{\hh}$ be an $m$-isometry. Suppose $T^l$ is
compact for some $l\in \nbb$. Then $\dim \hh < \infty$ and
$T$ is an algebraic $m$-isometry.
\end{pro}
\begin{proof}
Clearly, we can assume that $\hh\neq \{0\}$. Since, by
\cite[Theorem~2.3]{Ja}, $T^l$ is an $m$-isometry, there is
no loss of generality in assuming that $T$ is a compact
$m$-isometry. In view of the Riesz-Schauder theorem
\cite[Theorem~VI.15]{R-S}, the only possible accumulation
point for the spectrum of a compact operator is $0$.
Therefore by \eqref{as-spec}, $\sigma(T)\subseteq \tbb$.
This implies that $T$ is invertible in $\ogr{\hh}$. Since
$T$ is compact, we deduce that $\dim \hh < \infty$. By the
Cayley-Hamilton theorem, $T$ is algebraic. This completes
the proof.
\end{proof}
\begin{rem}
By Proposition~\ref{compop}, if $T\in \ogr{\hh}$ is an
$m$-isometry such that $T^l$ is compact for some $l\in
\nbb$, then $T$ is compact. This is not true for
operators which are not $m$-isometries. Indeed, if
$\dim{\hh} \Ge \aleph_0$, then for any integer $l\Ge
2$ there exists an operator $T\in \ogr{\hh}$ such that
$T^l$ is compact, though the operators $T^1, \ldots,
T^{l-1}$ are not, e.g., the nilpotent operator $T\in
\ogr{\hh}$ defined by
\begin{align*}
T(h_1 \oplus \ldots \oplus h_l) = 0 \oplus h_1 \oplus
\ldots \oplus h_{l-1}, \quad h_1, \ldots, h_l \in
\mathcal M,
\end{align*}
on the orthogonal sum $\hh = \mathcal M \oplus \ldots
\oplus \mathcal M$ of $l$ copies of an infinite
dimensional Hilbert space $\mathcal M$.
{$\diamondsuit$}
\end{rem}
\textbf{Acknowledgement}. A substantial part of this
paper was written while the first and the third author
visited Kyungpook National University during the autumn of
2018 and the spring of 2019. They wish to thank the faculty
and the administration of this unit for their warm
hospitality.
\end{document}
\begin{document}
\title{An exponential lower bound for cut sparsifiers in planar graphs\thanks{
This research is a part of projects that have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme
under grant agreements No 714704 (Marcin Pilipczuk and Anna Zych-Pawlewicz).
Nikolai Karpov has been supported by the Warsaw Centre of Mathematics and Computer Science and the Government of the Russian Federation (grant 14.Z50.31.0030).}}
\begin{textblock}{20}(0, 12.5)
\includegraphics[width=40px]{logo-erc}
\end{textblock}
\begin{textblock}{20}(-0.25, 12.9)
\includegraphics[width=60px]{logo-eu}
\end{textblock}
\begin{abstract}
Given an edge-weighted graph $G$ with a set $\ensuremath{Q}$ of $k$ terminals,
a \emph{mimicking network} is a graph with the same set of terminals that exactly preserves
the sizes of minimum cuts between any partition of the terminals.
A natural question in the area of graph compression is to provide as small mimicking networks as possible for
input graph $G$ being either an arbitrary graph or coming from a specific graph class.
In this note we show an exponential lower bound for cut mimicking networks in planar graphs:
there are edge-weighted planar graphs with $k$ terminals that require $2^{k-2}$ edges in any
mimicking network.
This nearly matches an upper bound of $\mathcal{O}(k 2^{2k})$ of Krauthgamer and Rika [SODA 2013, arXiv:1702.05951]
and is in sharp contrast with the $\mathcal{O}(k^2)$ upper bound under the assumption that all terminals lie on a single
face [Goranci, Henzinger, Peng, arXiv:1702.01136].
As a side result we show a hard instance for double-exponential upper bounds given by Hagerup, Katajainen, Nishimura, and Ragde~[JCSS 1998],
Khan and Raghavendra~[IPL 2014],
and Chambers and Eppstein~[JGAA 2013].
\end{abstract}
\section{Introduction}\label{sec:intro}
One of the most popular paradigms in the design of efficient algorithms is preprocessing.
These days, in many applications (in particular mobile ones), memory usage is the main limitation, even though fast running time is still desired. The preprocessing needed in such applications reduces the size of the input data prior to some resource-demanding computation,
without (significantly) changing the answer to the problem being solved.
In this work we focus on this kind of preprocessing, known also as graph compression, for flows and cuts.
The input graph needs to be compressed while preserving its essential flow and cut properties.
Central to our work is the concept of a \emph{mimicking network}, introduced by
Hagerup, Katajainen, Nishimura, and Ragde~\cite{HagerupKNR98}.
Let $G$ be an edge-weighted graph with a set $Q \subseteq V(G)$ of $k$ terminals.
For a partition $Q = S \uplus \bar{S}$,
a minimum cut between $S$ and $\bar{S}$ is called a \emph{minimum $S$-separating cut}.
A \emph{mimicking network} is an edge-weighted graph
$G'$ with $Q \subseteq V(G')$ such that the weights of minimum $S$-separating cuts
are equal in $G$ and $G'$ for every partition $Q = S \uplus \bar{S}$.
Hagerup et al~\cite{HagerupKNR98}
observed the following simple preprocessing step: if two vertices $u$ and $v$
are always on the same side of the minimum cut between $S$ and $\bar{S}$ for every choice
of the partition $Q = S \uplus \bar{S}$, then they can be merged without changing the size
of any minimum $S$-separating cut.
This procedure always
leads to a mimicking network with at most $2^{2^k}$ vertices.
The above upper bound can be improved to a still double-exponential bound
of roughly $2^{\binom{k-1}{\lfloor (k-1)/2 \rfloor}}$, as observed both by
Khan and Raghavendra~\cite{KhanR14} and by Chambers and Eppstein~\cite{ChambersE13}.
In 2013, Krauthgamer and Rika~\cite{KrauthgamerR13} observed that the aforementioned preprocessing
step can be adjusted to yield a mimicking network of size $\mathcal{O}(k^2 2^{2k})$ for planar graphs.
Furthermore, they introduced a framework for proving lower bounds, and showed that
there are (non-planar) graphs, for which any mimicking network
has $2^{\Omega(k)}$ edges; a slightly stronger lower bound
of $2^{(k-1)/2}$ has been shown by Khan and Raghavendra~\cite{KhanR14}.
On the other hand, for planar graphs the lower bound of~\cite{KrauthgamerR13} is $\Omega(k^2)$.
Furthermore, the planar graph lower bound applies even in the special case when all the terminals
lie on the same face.
Very recently, two improvements upon these results for planar graphs have been announced.
In a sequel paper, Krauthgamer and Rika~\cite{KrauthgamerR17} improve the
polynomial factor in the upper bound for planar graphs to $\mathcal{O}(k 2^{2k})$ and show that the exponential dependence
is actually only on the \emph{number of faces containing terminals}: if
the terminals lie on $\gamma$ faces, one can obtain a mimicking network
of size $\mathcal{O}(\gamma 2^{2\gamma} k^4)$.
In a different work, Goranci, Henzinger, and Peng~\cite{GoranciHP17} showed a tight $\mathcal{O}(k^2)$ upper bound
for mimicking networks for planar graphs with all terminals on a single face.
\myparagraph{Our results.}
We complement these results by showing an exponential lower bound for mimicking networks in planar graphs.
\begin{theorem}\label{thm:main}
For every integer $k \geq 3$,
there exists a planar graph $G$ with a set $Q$ of $k$ terminals
and edge cost function under which every mimicking network for $G$ has
at least $2^{k-2}$ edges.
\end{theorem}
This nearly matches the upper bound of $\mathcal{O}(k2^{2k})$ of Krauthgamer and Rika~\cite{KrauthgamerR17}
and is in sharp contrast with the polynomial bounds when the terminals lie on a constant
number of faces~\cite{GoranciHP17,KrauthgamerR17}.
Note that it also nearly matches the improved bound of $\mathcal{O}(\gamma 2^{2\gamma} k^4)$ for terminals on $\gamma$ faces~\cite{KrauthgamerR17},
as $k$ terminals lie on at most $k$ faces.
As a side result, we also show a hard instance for mimicking networks in general graphs.
\begin{theorem}\label{thm:side}
For every integer $k \geq 1$ that is equal to $6$ modulo $8$, there
exists a graph $G$ with a set $Q$ of $k$ terminals
and $\Omega(2^{\binom{k-1}{\lfloor (k-1)/2 \rfloor} - k/2})$ vertices,
such that no two vertices can be identified without strictly increasing the size of some minimum $S$-separating cut.
\end{theorem}
The example of Theorem~\ref{thm:side}, obtained by iterating the construction of Krauthgamer and Rika~\cite{KrauthgamerR13},
shows that the doubly exponential bound
is natural for the preprocessing step of Hagerup et al~\cite{HagerupKNR98}, and
one needs different techniques to improve upon it.
Note that the bound of Theorem~\ref{thm:side} is very close to the upper bound given
by~\cite{ChambersE13,KhanR14}.
\myparagraph{Related work.}
Apart from the aforementioned work on mimicking
networks~\cite{GoranciHP17,HagerupKNR98,KhanR14,KrauthgamerR13,KrauthgamerR17},
there has been substantial work on preserving cuts and flows approximately,
see e.g.~\cite{robi-apx,robi-old,mm-sparsifiers}.
If one wants to construct mimicking networks for vertex cuts in
unweighted graphs with deletable terminals (or with small integral
weights), the representative sets approach of Kratsch and Wahlstr\"{o}m~\cite{KratschW12}
provides a mimicking network with $\mathcal{O}(k^3)$ vertices, improving upon a previous
quasipolynomial bound of Chuzhoy~\cite{Chuzhoy12}.
We prove Theorem~\ref{thm:main} in Section~\ref{sec:main}
and show the example of Theorem~\ref{thm:side} in Section~\ref{sec:side}.
\section{Exponential lower bound for planar graphs}\label{sec:main}
In this section we present the main result of the paper.
We provide a construction that proves that there are planar graphs with $k$ terminals whose mimicking networks are of size $\Omega(2^k)$.
In order to present the desired graph, for the sake of simplicity, we describe its dual graph $(\du{G},\du{c})$. We let $\du{\ensuremath{Q}}=\{\du{f_n},\du{f_s},\du{f_1},\du{f_2},\dots,\du{f_{k-2}} \}$ be the set of faces in $\du{G}$ corresponding to terminals in the primal graph $\duu{G}$.
\footnote{Since the argument mostly operates on the dual graph, for notational simplicity,
we use regular symbols for objects in the dual graph, e.g., $G$, $c$, $f_i$,
while starred symbols refer to the dual of the dual graph, that is, the primal graph.}
There are two special terminal faces $\du{f_n}$ and $\du{f_s}$, referred to as the north face and the south face. The remaining faces of $\du{\ensuremath{Q}}$ are referred to as equator faces.
A set $\du{\ensuremath{Q}s} \subset \du{\ensuremath{Q}}$ is \emph{important} if $\du{f_n} \in \du{\ensuremath{Q}s}$ and $\du{f_s} \notin \du{\ensuremath{Q}s}$. Note that there are $2^{k-2}$ important sets; in what follows we care only
about minimum cuts in the primal graph for separations between important sets and their complements.
For an important set $\du{\ensuremath{Q}s}$,
we define its \emph{signature} as a bit vector $\sign{\du{\ensuremath{Q}s}} \in \bitv{|\du{\ensuremath{Q}}|-2}$ whose $i$'th position is defined as $\sign{\du{\ensuremath{Q}s}}[i]= 1 \text{ iff } \du{f_{i}} \in \du{\ensuremath{Q}s}$.
Graph $\du{G}$ will be composed of $2^{k-2}$ cycles referred to as important cycles, each corresponding to an important subset $\du{\ensuremath{Q}s} \subset \du{\ensuremath{Q}}$.
A cycle corresponding to $\du{\ensuremath{Q}s}$ is referred to as $\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{Q}s}}}$ and it separates $\du{\ensuremath{Q}s}$ from $\overline{\du{\ensuremath{Q}s}}$.
Topologically, we draw the equator faces on a straight horizontal line that we call the equator. We put the north face $\du{f_n}$ above the equator and the south face $\du{f_s}$ below the equator. For any important $\du{\ensuremath{Q}s} \subset \du{\ensuremath{Q}}$, in the plane drawing of $\du{G}$ the corresponding cycle $\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{Q}s}}}$ is a curve that goes to the south of $\du{f_i}$ if $\du{f_i} \in \du{\ensuremath{Q}s}$ and otherwise to the north of $\du{f_i}$. We formally define important cycles later on, see Definition~\ref{def:impcyc}.
We now describe in detail the construction of $\du{G}$.
We start with a graph $H$ that is almost a tree, and then embed $H$ in the plane
with a number of edge crossings, introducing a new vertex on every edge crossing.
The graph $H$ consists of a complete binary tree of height $k-2$ with root $v$ and an extra vertex
$w$ that is adjacent to the root $v$ and every one of the $2^{k-2}$ leaves of the tree.
In what follows, the vertices of $H$ are called \emph{branching vertices}, contrary
to \emph{crossing vertices} that will be introduced at edge crossings
in the plane embedding of $H$.
To describe the plane embedding of $H$, we need to introduce some notation of the vertices
of $H$.
The starting point of our construction is the edge $\du{e}=\{ \du{w}, \du{v} \}$.
Vertex $\du{v}$ is the first branching vertex and also the root of $H$.
In vertex $\du{v}$, edge $\du{e}$ branches into $\du{e_0}=\{\du{v},\du{v_0}\}$ and $\du{e_1}=\{\du{v},\du{v_1} \}$. Now $\du{v_0}$ and $\du{v_1}$ are also branching vertices.
The branching vertices are partitioned into layers $L_0,\ldots,L_{k-2}$. Vertex $\du{v}$ is in layer $L_0=\{ \du{v} \}$, while $\du{v_0}$ and $\du{v_1}$ are in layer $L_1=\{ \du{v_0}, \du{v_1} \}$. Similarly, we partition edges into layers $\mathcal{E}^H_0,\ldots \mathcal{E}^H_{k-1}$. So far we have $\mathcal{E}^H_0=\{ \du{e} \}$ and $\mathcal{E}^H_1=\{ \du{e_0}, \du{e_1} \}$.
The construction continues as follows. For any layer $L_i, i \in \{1, \ldots , k-3 \}$, all the branching vertices of $L_i=\{ \du{v_{00 \ldots 0}} \ldots \du{v_{11 \ldots 1}} \}$ are of degree $3$. In a vertex $\du{v_a} \in L_i$, $a \in \bitv{i}$, edge $\du{e_a} \in \mathcal{E}^H_i$ branches into edges $\du{e_{0a}}=\{ \du{v_a}, \du{v_{0a}} \},\du{e_{1a}}=\{ \du{v_a}, \du{v_{1a}} \} \in \mathcal{E}^H_{i+1}$, where $\du{v_{0a}},\du{v_{1a}} \in L_{i+1}$. We emphasize here that the new bit in the index is added \emph{as the first symbol}.
Every next layer is twice the size of the previous one, hence $|L_i|=|\mathcal{E}^H_i|=2^i$. Finally the vertices of $L_{k-2}$ are all of degree $2$. Each of them is connected to a vertex in $L_{k-3}$ via an edge in $\mathcal{E}^H_{k-2}$ and to the vertex $w$ via an edge in $\mathcal{E}^H_{k-1}$.
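For concreteness, the structure of $H$ can be generated programmatically. The short Python sketch below is an illustration only (it uses the \texttt{networkx} package, and the vertex labels are our own choice); it builds the complete binary tree of height $k-2$ together with the extra vertex $w$ and reports the layer sizes.
\begin{verbatim}
import networkx as nx

def build_H(k):
    """The 'almost tree' H: a complete binary tree of height k-2 rooted at v,
    plus an extra vertex w adjacent to the root and to all leaves."""
    H = nx.Graph()
    H.add_edge("w", "v")                 # the edge e = {w, v}
    layers = [["v"]]                     # layer L_0
    index = {"v": ""}                    # vertex -> bit-string index
    for i in range(1, k - 1):            # layers L_1, ..., L_{k-2}
        layer, new_index = [], {}
        for parent in layers[-1]:
            a = index[parent]
            for b in "01":               # the new bit is prepended to the index
                child = "v_" + b + a
                H.add_edge(parent, child)
                layer.append(child)
                new_index[child] = b + a
        layers.append(layer)
        index = new_index
    for leaf in layers[-1]:              # vertices of L_{k-2} are joined to w
        H.add_edge(leaf, "w")
    return H, layers

H, layers = build_H(6)
print([len(L) for L in layers])          # expect [1, 2, 4, 8, 16] for k = 6
\end{verbatim}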
We now describe the drawing of $H$, that we later make planar by adding crossing vertices, in order to obtain the graph $G$.
As we mentioned before, we want to draw equator faces $\du{f_1}, \ldots \du{f_{k-2}}$ in that order from left to right on a horizontal line (referred to as an equator). Consider equator face $\du{f_i}$ and vertex layer $L_i$ for some $i>0$. Imagine a vertical line through $\du{f_i}$ perpendicular to the equator, and let us refer to it as an $i$'th meridian. We align the vertices of $L_i$ along the $i$'th meridian, from the north to the south. We start with the vertex of $L_i$ with the (lexicographically) lowest index, and continue drawing vertices of $L_i$ more and more to the south while the indices increase. Moreover, the first half of $L_i$ is drawn to the north of $\du{f_i}$, and the second half to the south of $\du{f_i}$.
Every edge of $H$, except for $e$, is drawn as a straight line segment connecting its endpoints.
The edge $\du{e}$ is a curve encapsulating the north face $\du{f_n}$ and separating it from $\du{f_s}$, the outer face of $\du{G}$.
\begin{figure}
\caption{The graph $\du{G}$.}
\label{dual}
\end{figure}
The crossing vertices are added whenever the line segments cross. This way the edges of $H$
are subdivided and the resulting graph is denoted by $\du{G}$.
This completes the description of the structure and the planar drawing of $\du{G}$.
We refer to Figure~\ref{dual} for an illustration of the graph $G$.
The set $\mathcal{E}_i$ consists of all edges of $G$ that are parts of the (subdivided) edges of $\mathcal{E}^H_i$ from $H$, see Figure~\ref{subdivide}.
We are also ready to define important cycles formally.
\begin{figure}
\caption{The edge layer $\mathcal{E}_i$ and the cost assignment within it.}
\label{subdivide}
\end{figure}
\begin{definition}\label{def:impcyc}
Let $\du{\ensuremath{Q}s} \subset \du{\ensuremath{Q}}$ be important.
Let $\pi$ be a unique path in the binary tree $H-\{w\}$ from the root
$\du{v}$ to $\du{v_{\rev{\sign{\du{\ensuremath{Q}s}}}}}$,
where $\rev{\cdot}$ operator reverses the bit vector.
Let $\pi'$ be the path in $G$ corresponding to $\pi$.
The important cycle $\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{Q}s}}}$ is composed of $\du{e}$, $\pi'$, and an edge in $\mathcal{E}_{k-1}$ adjacent to $\du{v_{\rev{\sign{\du{\ensuremath{Q}s}}}}}$.
\end{definition}
We now move on to describing how weights are assigned to the edges of $\du{G}$.
The costs of the edges in $\du{G}$ admit $k-1$ values: $c_1, c_2, \ldots, c_{k-2}$, and $C$. Let $c_{k-2}=1$. For $i \in \{1, \ldots, k-3 \}$ let $c_i= \sum_{j=i+1}^{k-2}|\mathcal{E}_{j}|c_{j}$.
Let $C=\sum_{j=1}^{k-2} |\mathcal{E}_j|c_j$. Let us consider an arbitrary edge $\du{e_{ba}}=\{ \du{v_{a}}, \du{v_{ba}} \}$ for some $a \in \bitv{i}$, $i \in \{ 0, \ldots, k-3 \}$, $b \in \{ 0,1 \}$ (see Figure~\ref{subdivide} for an illustration). As we mentioned before, $\du{e_{ba}}$ is subdivided by crossing vertices into a number of edges. If $b=0$, then edge $\du{e_{ba}}$ is subdivided by\footnote{For a bit vector $a$, $\dec{a}$ denotes the integral value of $a$ read as a number in binary.} $\dec{a}$ crossing vertices into $\dec{a}+1$ edges: $\du{e^1_{ba}}=\{ \du{v_a}, \du{x^1_{ba}} \}, \du{e^2_{ba}}=\{ \du{x^1_{ba}},\du{x^2_{ba}} \} \ldots \du{e^{\dec{a}+1}_{ba}}=\{ \du{x^{\dec{a}}_{ba}}, \du{v_{ba}} \}$. Among those edges $\du{e^{\dec{a}+1}_{ba}}$ is assigned cost $C$, and the remaining edges subdividing $\du{e_{ba}}$ are assigned cost $c_i$. Analogously, if $b=1$, then edge $\du{e_{ba}}$ is subdivided by $2^i-1-\dec{a}$ crossing vertices into $2^i-\dec{a}$ edges: $\du{e^1_{ba}}=\{ \du{v_a}, \du{x^1_{ba}} \}, \du{e^2_{ba}}=\{ \du{x^1_{ba}},\du{x^2_{ba}} \} \ldots \du{e^{2^i-\dec{a}}_{ba}}=\{ \du{x^{2^i-1-\dec{a}}_{ba}}, \du{v_{ba}} \}$. Again, we let edge $\du{e^{2^i-\dec{a}}_{ba}}$ have cost $C$, and the remaining edges subdividing $\du{e_{ba}}$ are assigned cost $c_i$.
Finally, all the edges connecting the vertices of the last layer with $w$ have weight $c_{k-2} = 1$.
The cost assignment within an edge layer is presented in Figure~\ref{subdivide}.
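The recursion defining the costs can be made concrete with a small Python sketch; here \texttt{layer\_sizes[i]} stands for $|\mathcal{E}_i|$ (these sizes depend on the number of crossings in the drawing and are not computed here), and the values in the example call are made up purely to show how the recursion unrolls.
\begin{verbatim}
def edge_costs(layer_sizes):
    """layer_sizes: dict mapping i = 1, ..., k-2 to |E_i|.
    Returns the costs c_1, ..., c_{k-2} and the large cost C, following
    c_{k-2} = 1, c_i = sum_{j>i} |E_j| c_j, and C = sum_{j=1}^{k-2} |E_j| c_j."""
    top = max(layer_sizes)                    # the index k-2
    c = {top: 1}
    for i in range(top - 1, 0, -1):           # compute c_i from the higher layers
        c[i] = sum(layer_sizes[j] * c[j] for j in range(i + 1, top + 1))
    C = sum(layer_sizes[j] * c[j] for j in range(1, top + 1))
    return c, C

# hypothetical layer sizes, chosen only to illustrate the recursion
c, C = edge_costs({1: 3, 2: 5, 3: 4})
print(c, C)   # c[3] = 1, c[2] = 4, c[1] = 24; C = 96
\end{verbatim}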
This finishes the description of the dual graph $G$. We now consider the primal graph $\duu{G}$ with the set of terminals $\duu{\ensuremath{Q}}$ consisting of the
$k$ vertices of $\duu{G}$ corresponding to the faces $\ensuremath{Q}$ of $G$. In the remainder of this section we show that there is a cost function on the edges of $\duu{G}$, under which any mimicking network for $\duu{G}$
contains at least $2^{k-2}$ edges. This cost function is in fact a small perturbation of the edge costs implied by the dual graph $G$.
In order to accomplish this,
we use the framework introduced in~\cite{KrauthgamerR13}. In what follows,
$\mincut{G}{c}{S}{S'}$ stands for the minimum cut separating $S$ from $S'$ in a graph $G$ with cost function $c$. Below we provide the definition of the cutset-edge incidence matrix and the Main Technical Lemma from~\cite{KrauthgamerR13}.
\begin{definition}[Incidence matrix between cutsets and edges] Let $(G,c)$ be a $k$-terminal network, and fix an enumeration $S_1, \ldots S_m$ of all $2^{k-1}-1$ distinct and nontrivial bipartitions $Q=S_i \cup \overline{S}_i$. The cutset-edge incidence matrix of $(G,c)$ is the matrix $A_{G,c} \in \{ 0,1 \}^{m \times E(G)}$ given by
$$
(A_{G,c})_{i,e}=
\begin{cases}
1 \text{ if } e \in \mincut{G}{c}{S_i}{\overline{S}_i}\\
0 \text{ otherwise.}
\end{cases}
$$
\end{definition}
\begin{lemma}[Main Technical Lemma of \cite{KrauthgamerR13}]\label{lem:mtl}
Let $(G,c)$ be a $k$-terminal network. Let $A_{G,c}$ be its cutset-edge incidence matrix, and assume that for all $S \subset Q$ the minimum $S$-separating cut of $G$ is unique. Then there exists an edge cost function $\tilde{c}: E(G) \to \mathbb{R}^+$ for $G$ under which every mimicking network $(G',c')$ satisfies $|E(G')| \geq \rank{A_{G,c}}$.
\end{lemma}
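As a small numerical illustration of the quantity appearing in the lemma, the rank of a $0/1$ cutset-edge incidence matrix can be computed directly; the matrix below is a made-up toy example and does not come from our construction.
\begin{verbatim}
import numpy as np

# toy 0/1 incidence matrix: rows = bipartitions, columns = edges (made-up values)
A = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])

print(np.linalg.matrix_rank(A))   # lower bound on |E(G')| for any mimicking network
\end{verbatim}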
Recall that $\duu{G}$ is the dual graph to the graph $\du{G}$ that we constructed.
By slightly abusing the notation, we will use the cost function $c$ defined on the dual edges
also on the corresponding primal edges.
Let $\duu{\ensuremath{Q}}=\{ \duu{f_n}, \duu{f_s}, \duu{f_1}, \ldots \duu{f_{k-2}} \}$ be the set of terminals in $\duu{G}$ corresponding to $\du{f_n}, \du{f_s}, \du{f_1}, \ldots \du{f_{k-2}}$ respectively. We want to apply Lemma~\ref{lem:mtl} to $\duu{G}$ and $\duu{\ensuremath{Q}}$. For that we need to show that the cuts in $\duu{G}$ corresponding to important sets are unique and that $\rank{A_{\duu{G},c}}$ is high.
\begin{figure}
\caption{Primal graph $G^\ast$.}
\label{primal}
\end{figure}
As an intermediate step let us argue that the following holds.
\begin{claim}\label{clm:kc}
There are $k$ edge disjoint simple paths in $\duu{G}$ from $\duu{f_n}$ to $\duu{f_s}$: $\pi_0, \pi_1, \ldots, \pi_{k-2}, \pi_{k-1}$. Each $\pi_i$ is composed entirely of edges dual to the edges of $\mathcal{E}_i$ whose cost equals $C$. For $i \in \{ 1 \ldots k-2 \}$, $\pi_i$ contains vertex $\duu{f_i}$. Let $\pi_i^n$ be the prefix of $\pi_i$ from $\duu{f_n}$ to $\duu{f_i}$ and $\pi_i^s$ be the suffix from $\duu{f_i}$ to $\duu{f_s}$. The number of edges on $\pi_i$ is $2^i$, and the number of edges on $\pi_i^n$ and $\pi_i^s$ is $2^{i-1}$.
\end{claim}
\begin{proof}The primal graph $\duu{G}$ together with the paths $\pi_0, \pi_1, \ldots, \pi_{k-2},\pi_{k-1}$ is pictured in Figure~\ref{primal}. The paths $\pi_{k-2}$ and $\pi_{k-1}$ visit the same vertices in the same manner, so for the sake of clarity only one of these paths is shown in the picture. This proof contains a detailed description of these paths and of how they emerge from the dual graph $\du{G}$.
Consider a layer $L_i$. Recall that for any $ba \in \bitv{i}$ edge $\du{e_{ba}}$ of the almost tree is subdivided in $\du{G}$, and all the resulting edges are in $\mathcal{E}_i$. If $b=0$, then edge $\du{e_{ba}}$ is subdivided by $\dec{a}$ crossing vertices into $\dec{a}+1$ edges: $\du{e^1_{ba}}=\{ \du{v_a}, \du{x^1_{ba}} \}, \du{e^2_{ba}}=\{ \du{x^1_{ba}},\du{x^2_{ba}} \} \ldots \du{e^{\dec{a}+1}_{ba}}=\{ \du{x^{\dec{a}}_{ba}}, \du{v_{ba}} \}$, where $\du{c}(\du{e^{\dec{a}+1}_{ba}})=C$. Analogously, if $b=1$, then edge $\du{e_{ba}}$ is subdivided by $2^i-1-\dec{a}$ crossing vertices into $2^i-\dec{a}$ edges: $\du{e^1_{ba}}=\{ \du{v_a}, \du{x^1_{ba}} \}, \du{e^2_{ba}}=\{ \du{x^1_{ba}},\du{x^2_{ba}} \} \ldots \du{e^{2^i-\dec{a}}_{ba}}=\{ \du{x^{2^i-1-\dec{a}}_{ba}}, \du{v_{ba}} \}$. Again, $\du{c}(\du{e^{2^i-\dec{a}}_{ba}})=C$. Consider the edges of $\mathcal{E}_i$ incident to vertices in $L_i$. If we order these edges lexicographically by their lower index, then each consecutive pair of edges shares a common face. Moreover, the first edge $\du{e^1_{00\ldots0}}$ is incident to $\du{f_n}$ and the last edge $\du{e^1_{11\ldots1}}$ is incident to $\du{f_s}$. This gives a path $\pi_i$ from $\duu{f_n}$ to $\duu{f_s}$ through $\duu{f_i}$ in the primal graph where all the edges on $\pi_i$ have cost $C$. Path $\pi_{k-1}$ is given by the edges of $\mathcal{E}_{k-1}$ in a similar fashion and path $\pi_0$ is composed of a single edge dual to $\du{e}$.
\end{proof}
We move on to proving that the condition in Lemma~\ref{lem:mtl} holds.
We extend the notion of important sets $\ensuremath{Q}s \subseteq \ensuremath{Q}$ to sets $\duu{\ensuremath{Q}s} \subseteq \duu{\ensuremath{Q}}$
in the natural manner.
\begin{lemma}\label{lem:uniquecuts}
For every important $\duu{\ensuremath{Q}s} \subset \duu{\ensuremath{Q}}$, the minimum cut separating $\duu{\ensuremath{Q}s}$ from $\overline{\duu{\ensuremath{Q}s}}$ is unique and corresponds to cycle $\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{Q}s}}}$ in $\du{G}$.
\end{lemma}
\begin{proof}
Let $\ensuremath{\mathcal{C}}$ be the set of edges of $G$ corresponding to some
minimum cut between $\duu{\ensuremath{Q}s}$ and $\duu{\overline{\ensuremath{Q}s}}$ in $\duu{G}$.
Let $\ensuremath{Q}s \subseteq \ensuremath{Q}$ be the set of faces of $G$ corresponding to the set $\duu{\ensuremath{Q}s}$.
We start by observing that the edges of $\duu{G}$ corresponding to $\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{Q}s}}}$
form a cut between $\duu{\ensuremath{Q}s}$ and $\duu{\overline{\ensuremath{Q}s}}$. Consequently,
the total weight of edges of $\ensuremath{\mathcal{C}}$ is at most the total weight of the edges of
$\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{Q}s}}}$.
By Claim~\ref{clm:kc}, $\ensuremath{\mathcal{C}}$ contains at least $k$ edges of cost $C$: at least one edge of cost $C$ per edge layer (it needs to hit an edge on every path $\pi_0, \ldots, \pi_{k-1}$). Note that $\ensuremath{\mathcal{C}}_{\sign{ \du{\ensuremath{Q}s} }}$ contains exactly $k$ edges of cost $C$. Recall that the weights are assigned so that $C$ is larger than the total weight of all edges of smaller cost in the graph.
This implies that $\ensuremath{\mathcal{C}}$ contains exactly one edge of cost $C$ in every edge layer $\mathcal{E}_i$.
In particular, $\ensuremath{\mathcal{C}}$ contains the edge $e = \{ v,w \}$.
Furthermore, the fact that $\duu{f_i}$ lies on $\pi_i$ implies that
the edge of weight $C$ in $\mathcal{E}_i \cap \ensuremath{\mathcal{C}}$ lies on $\pi_i^n$ if $\duu{f_i} \notin \ensuremath{Q}s$
and lies on $\pi_i^s$ otherwise.
Consequently, in $\duu{G}-\ensuremath{\mathcal{C}}$ there is one connected component containing all vertices
of $\duu{\ensuremath{Q}s}$ and one connected component containing all vertices of $\overline{\duu{\ensuremath{Q}s}}$.
By the minimality of $\ensuremath{\mathcal{C}}$, we infer that $\duu{G}-\ensuremath{\mathcal{C}}$ contains
no other connected components apart from the aforementioned two components.
By planarity, since any minimum cut in a planar graph corresponds to a collection of cycles
in its dual, this implies that $\ensuremath{\mathcal{C}}$ is a single cycle in $G$.
Let $e_i$ be the unique edge of $\mathcal{E}_i \cap \ensuremath{\mathcal{C}}$ of weight $C$
and let $e_i'$ be the unique edge of $\mathcal{E}_i \cap \ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{Q}s}}}$ of weight $C$.
We inductively prove that $e_i = e_i'$ and
that the subpath of $\ensuremath{\mathcal{C}}$ between $e_i$ and $e_{i+1}$ is the same as on
$\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{Q}s}}}$.
For the base of the induction, note that $e_0 = e_0' = e$.
Consider an index $i > 0$ and the face $\du{f_i}$. If $\du{f_i} \in \du{\ensuremath{Q}s}$, i.e., $\du{f_i}$ belongs to the north side, then $e_i$ lies south of $f_i$, that is, lies on $\pi_i^s$.
Otherwise, if $f_i \notin \ensuremath{Q}s$, then $e_i$ lies north of $f_i$, that is, lies on $\pi_i^n$.
Let $v_a$ and $v_{ba}$ be the vertices of $\ensuremath{\mathcal{C}}_{\sign{\ensuremath{Q}s}}$ that lie
on $L_{i-1}$ and $L_i$, respectively. By the inductive assumption, $v_a$ is an endpoint
of $e_{i-1}' = e_{i-1}$ that lies on $\ensuremath{\mathcal{C}}$.
Let $e_i = xv_{bc}$, where $v_{bc} \in L_i$ and let $e_i' = x'v_{ba}$.
Since $\ensuremath{\mathcal{C}}$ is a cycle in $G$ that contains exactly one edge on each path $\pi_i$,
we infer that $\ensuremath{\mathcal{C}}$ contains a path between $v_a$ and $v_{bc}$ that consists of
$e_i$ and a number of edges of $\mathcal{E}_i$ of weight $c_i$.
A direct check shows that the subpath from $v_a$ to $v_{ba}$ on $\ensuremath{\mathcal{C}}_{\sign{\ensuremath{Q}s}}$
is the unique such path with minimum number of edges of weight $c_i$.
Since the weight $c_i$ is larger than the total weight of all edges of smaller weight,
from the minimality of $\ensuremath{\mathcal{C}}$ we infer that $v_{ba} = v_{bc}$ and $\ensuremath{\mathcal{C}}$
and $\ensuremath{\mathcal{C}}_{\sign{\ensuremath{Q}s}}$ coincide on the path from $v_a$ to $v_{ba}$.
Consequently, $\ensuremath{\mathcal{C}}$ and $\ensuremath{\mathcal{C}}_{\sign{\ensuremath{Q}s}}$ coincide on the path from the edge $e=vw$
to the vertex $v_{\rev{\sign{\ensuremath{Q}s}}} \in L_{k-2}$. From the minimality of $\ensuremath{\mathcal{C}}$
we infer that also the edge $\{w,v_{\rev{\sign{\ensuremath{Q}s}}} \}$ lies on the cycle $\ensuremath{\mathcal{C}}$ and, hence,
$\ensuremath{\mathcal{C}} = \ensuremath{\mathcal{C}}_{\sign{\ensuremath{Q}s}}$. This completes the proof.
\end{proof}
\begin{claim}\label{clm:rank}
$\rank{A_{G,c}} \geq 2^{k-2}$.
\end{claim}
\begin{proof}
Recall Definition~\ref{def:impcyc} and the fact that $\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{Q}s}}}$ is defined for every important $\ensuremath{Q}s \subseteq \ensuremath{Q}$.
This means that the only edge in $\mathcal{E}_{k-1}$ that belongs to $\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{Q}s}}}$ is the edge adjacent to $\du{v_{\rev{\sign{\du{\ensuremath{Q}s}}}}}$. Let us consider the part of the incidence matrix $A_{G,c}$ whose rows correspond to the cuts corresponding to $\ensuremath{\mathcal{C}}_{\sign{\du{\ensuremath{Q}s}}}$ for important $\ensuremath{Q}s \subset \ensuremath{Q}$ and whose columns correspond to the edges in $\mathcal{E}_{k-1}$ of weight $C$. Let us order the cuts according to $\rev{\sign{\du{\ensuremath{Q}s}}}$ and the edges by the index of the adjacent vertex in $L_{k-2}$ (lexicographically). Then this part of $A_{G,c}$ is an identity matrix. Hence, $\rank{A_{G,c}} \geq 2^{k-2}$.
\end{proof}
Lemma \ref{lem:uniquecuts} and Claim~\ref{clm:rank} provide the conditions necessary for Lemma~\ref{lem:mtl} to apply. This proves our main result stated in Theorem~\ref{thm:main}.
\section{Doubly exponential example}\label{sec:side}
\begin{figure}
\caption{Illustration of the construction. The two panels correspond to the two cases in the proof: either $u_{S_0} \in Z$ or $u_{S_0} \notin Z$.}
\label{fig:double-exp}
\end{figure}
In this section we show an example graph for which the compression technique introduced by Hagerup et al~\cite{HagerupKNR98} does indeed produce a mimicking network on
roughly $2^{\binom{k-1}{\lfloor (k-1)/2 \rfloor}}$ vertices.
Our example relies on doubly exponential edge costs. Note that an example with single exponential costs can be compressed into a mimicking network of size single exponential in $k$ using the techniques of~\cite{KratschW12}.
Before we go on, let us recall the technique of Hagerup et al~\cite{HagerupKNR98}. Let $G$ be a weighted graph and $Q$ be the set of terminals. Observe that a minimum cut separating $S \subset Q$ from $\overline{S}=Q \setminus S$, when removed from $G$, divides the vertices of $G$ into two sides: the side of $S$ and the side of $\overline{S}$. The side is defined for each vertex, as all connected components obtained by removing the minimum cut contain a terminal. Now if two vertices $u$ and $v$
are on the same side of the minimum cut between $S$ and $\overline{S}$ for every $S \subset Q$, then they can be merged without changing the size of any minimum $S$-separating cut. As a result, the compressed graph has at most $2^{2^k}$ vertices;
as observed by~\cite{ChambersE13,KhanR14}, this bound can be improved to roughly $2^{\binom{k-1}{\lfloor (k-1)/2 \rfloor}}$. After this brief introduction we move on to describing our example.
Our construction builds up on the example provided in~\cite{KrauthgamerR13} in the proof of Theorem 1.2.
As stated in Theorem~\ref{thm:side} of this paper, our construction works for parameter $k$ equal to $6$ modulo $8$.
Let $k = 2r+2$, that is, $r$ is equal to $2$ modulo $4$.
These remainder assumptions give the following observation via standard calculations.
\begin{lemma}\label{lem:ell-even}
The integer $\ell := \binom{2r+1}{r}$ is even.
\end{lemma}
\begin{proof}
Recall that $r$ equals $2$ modulo $4$.
Since $\binom{2r+1}{r} = \frac{(2r+1)!}{r!(r+1)!}$, while the largest power of $2$ that divides $a!$ equals $\sum_{i=1}^\infty \lfloor \frac{a}{2^i} \rfloor$, we have that
the largest power of $2$ that divides $\binom{2r+1}{r}$ equals:
\begin{align*}
& \sum_{i=1}^\infty \left\lfloor \frac{2r+1}{2^i} \right\rfloor - \sum_{i=1}^\infty \left\lfloor \frac{r}{2^i} \right\rfloor - \sum_{i=1}^\infty \left\lfloor \frac{r+1}{2^i} \right\rfloor
= r + \sum_{i=1}^\infty \left\lfloor \frac{r}{2^i} \right\rfloor - 2 \sum_{i=1}^\infty \left\lfloor \frac{r}{2^i} \right\rfloor \\
&\quad = r - \sum_{i=1}^\infty \left\lfloor \frac{r}{2^i} \right\rfloor
= r - \frac{r}{2} - \frac{r-2}{4} - \sum_{i=1}^\infty \left\lfloor \frac{r}{4 \cdot 2^i} \right\rfloor
\geq \frac{1}{2} + \frac{r}{4} - \sum_{i=1}^\infty \frac{r}{4 \cdot 2^i}
= \frac{1}{2}.
\end{align*}
In particular, it is positive. This finishes the proof of the lemma.
\end{proof}
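The parity statement is also easy to sanity-check numerically; the following illustrative Python snippet verifies that $\binom{2r+1}{r}$ is even for the first few values of $r$ congruent to $2$ modulo $4$.
\begin{verbatim}
from math import comb

for r in (2, 6, 10, 14, 18):          # r congruent to 2 modulo 4
    ell = comb(2 * r + 1, r)
    print(r, ell, ell % 2 == 0)       # the last column should always be True
\end{verbatim}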
We start our construction with a complete bipartite graph $G_0 = (Q_0, U, E)$, where one side of the graph consists of $2r+1 = k-1$ terminals $Q_0$, and the other side of the graph consists of $\ell = \binom{2r+1}{r}$ non-terminals
$U = \{u_S~|~S \in \binom{Q_0}{r}\}$. That is, the vertices $u_S \in U$ are indexed by subsets of $Q_0$ of size $r$.
The cost of edges is defined as follows. Let $\alpha$ be a large constant that we define later on.
Every non-terminal $u_S$ is connected by edges of cost $\alpha$ to every terminal $q \in Q_0 \setminus S$ and by edges of cost $(1+\frac{1}{r} + \frac{1}{r^2})\alpha$ to every terminal $q \in S$.
To construct the whole graph $G$, we extend $G_0$ with a last terminal $x$ (i.e., the terminal set is $Q = Q_0 \cup \{x\}$)
and build a third layer of $m = \binom{\ell}{\ell/2}$ non-terminal vertices $W = \{w_Z~|~Z \in \binom{U}{\ell/2}\}$. That is, the vertices $w_Z \in W$ are indexed by subsets of $U$ of size $\ell/2$.
There is a complete bipartite graph between $U$ and $W$ and every vertex of $W$ is adjacent to $x$.
The cost of edges is defined as follows. An edge $u_S w_Z$ is of cost $1$ if $u_S \in Z$, and of cost $0$ otherwise. Every edge of the form $xw_Z$ is of cost $\ell/2 - 1$.
This finishes the description of the construction. For the reference see the top picture in Figure~\ref{fig:double-exp}.
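To make the three-layer structure explicit, the following Python sketch (an illustration only, using \texttt{networkx}; the vertex labels are our own choice) builds the graph $G$ for a given $r$, with the cost $\alpha$ left as a parameter to be chosen larger than $r^2\ell|W|$ as in Lemma~\ref{lem:dblexp}. The construction can only be instantiated for very small $r$, since $|W|=\binom{\ell}{\ell/2}$ grows doubly exponentially.
\begin{verbatim}
from itertools import combinations
import networkx as nx

def build_G(r, alpha):
    """Terminals Q_0 = {q0, ..., q_{2r}} plus x; middle layer U indexed by
    r-subsets of Q_0; last layer W indexed by (ell/2)-subsets of U."""
    Q0 = [f"q{i}" for i in range(2 * r + 1)]
    G = nx.Graph()
    U = [frozenset(S) for S in combinations(Q0, r)]
    high = (1 + 1 / r + 1 / r ** 2) * alpha
    for S in U:                                   # complete bipartite (Q_0, U)
        for q in Q0:
            G.add_edge(("u", S), q, weight=high if q in S else alpha)
    ell = len(U)
    W = [frozenset(Z) for Z in combinations(U, ell // 2)]
    for Z in W:                                   # complete bipartite (U, W), plus x
        for S in U:
            G.add_edge(("w", Z), ("u", S), weight=1 if S in Z else 0)
        G.add_edge(("w", Z), "x", weight=ell // 2 - 1)
    return G

# r = 2: ell = 10, |W| = 252, and alpha = 10**6 > r^2 * ell * |W| = 10080
G = build_G(2, alpha=10**6)
\end{verbatim}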
We say that a set $S \subseteq Q$ is \emph{important} if $x \in S$ and $|S| = r+1$. Note that there are $\ell = \binom{2r+1}{r} = \binom{k-1}{\lfloor (k-1)/2 \rfloor}$ important sets.
We observe the following.
\begin{lemma}\label{lem:dblexp}
Let $S \subset Q$ be important and let $S_0 = S \setminus \{x\} = S \cap Q_0$. For $\alpha > r^2 \ell|W|$,
the vertex $w_Z$ is on the $S$ side of the minimum cut between $S$ and $Q \setminus S$
if and only if $u_{S_0} \in Z$.
\end{lemma}
\begin{proof}
First, note that if $\alpha > r^2 \ell |W|$, then the total cost of all the edges incident to vertices of $W$ is less than $\frac{1}{r^2} \alpha$.
Intuitively, this means that the contribution of the edges of $G_0$ to the cost of a cut dominates the contribution of the edges incident to $W$.
Consider an important set $S \subseteq Q$ and let $S_0 = S \setminus \{x\} = S \cap Q_0$.
Let $u_{S'} \in U$. The \emph{balance} of the vertex $u_{S'}$, denoted henceforth $\beta(u_{S'})$, is the difference of the cost of edges
connecting $u_{S'}$ with $S_0$ and the ones connecting $u_{S'}$ and $Q_0 \setminus S$. Note that we have
$$\beta(u_{S_0}) = r \cdot \left(1+\frac{1}{r}+\frac{1}{r^2}\right) \alpha - \left(r+1\right) \cdot \alpha = \frac{1}{r} \alpha.$$
On the other hand, for $S' \neq S_0$, the balance of $u_{S'}$ can be estimated as follows:
$$\beta(u_{S'}) \leq (r-1) \cdot \left(1+\frac{1}{r}+\frac{1}{r^2}\right) \alpha + \alpha - r \cdot \alpha - \left(1+\frac{1}{r}+\frac{1}{r^2}\right) \alpha
= -\frac{r+2}{r^2} \alpha < -\frac{1}{r^2} \alpha.$$
Consequently, as $\frac{1}{r^2} \alpha$ is larger than the cost of all edges incident with $W$,
in a minimum cut separating $S$ from $Q \setminus S$, the vertex $u_{S_0}$ picks the $S$ side, while every vertex $u_{S'}$ for $S' \neq S_0$ picks the $Q \setminus S$ side.
Consider now a vertex $w_Z \in W$ and consider two cases: either $u_{S_0} \in Z$ or $u_{S_0} \notin Z$; see also Figure~\ref{fig:double-exp}.
\myparagraph{Case 1: $u_{S_0} \in Z$. } As argued above, all vertices of $U$ choose their side according to what is best in $G_0$, so $u_{S_0}$ is the only vertex in $U$ on the $S$ side.
To join the $S$ side, $w_Z$ has to cut $\ell/2-1$ edges $u_{S'} w_Z$ of cost $1$ each, inflicting a total cost of $\ell/2-1$;
note that it does not need to cut the edge $u_{S_0}w_Z$, which is of cost $1$ as $u_{S_0} \in Z$.
To join the $Q \setminus S$ side, $w_Z$ needs to cut $xw_Z$ of cost $\ell/2-1$
and $u_{S_0}w_Z$ of cost $1$, inflicting a total cost of $\ell/2$.
Consequently, $w_Z$ joins the $S$ side.
\myparagraph{Case 2: $u_{S_0} \notin Z$. } Again, all vertices of $U$ choose their side according to what is best in $G_0$, so $u_{S_0}$ is the only vertex in $U$ on the $S$ side.
To join the $S$ side, $w_Z$ has to cut $\ell/2$ edges $u_{S'}w_Z$ of cost $1$ each, inflicting a total
cost of $\ell/2$.
To join the $Q \setminus S$ side, $w_Z$ has to cut one edge of positive cost, namely the edge $xw_Z$ of cost $\ell/2-1$.
Consequently, $w_Z$ joins the $Q \setminus S$ side.
This finishes the proof of the lemma.
\end{proof}
Lemma~\ref{lem:dblexp} shows that $G$ cannot be compressed using the technique presented in~\cite{HagerupKNR98}.
To see that let us fix two vertices $w_Z$ and $w_{Z'}$ in $W$,
and let $u_{S_0} \in Z \setminus Z'$; set $S = S_0 \cup \{x\}$.
Then, Lemma~\ref{lem:dblexp} shows that $w_Z$ and $w_{Z'}$ lie on different
sides of the minimum cut between $S$ and $Q \setminus S$.
Thus, $w_Z$ and $w_{Z'}$ cannot be merged.
Similar but simpler arguments show that no other pair of vertices in $G$ can be merged.
To finish the proof of Theorem~\ref{thm:side}, observe that
$$|W| = \binom{\ell}{\ell/2} = \Omega\left(2^{\ell}/\sqrt{\ell}\right) = \Omega\left(2^{\binom{k-1}{\lfloor (k-1)/2 \rfloor} - k/2}\right).$$
\end{document}
\begin{document}
\title{Machine learning models for DOTA 2 outcomes prediction}
\author{Kodirjon Akhmedov
and Anh Huy Phan
\thanks{Kodirjon Akhmedov is with the Center for Computational and Data-Intensive Science and Engineering (CDISE), Skolkovo Institute of Science and Technology, Moscow,
Russia, 121205 (e-mail: [email protected]). }
\thanks{Anh Huy Phan is with the Center for Computational and Data-Intensive Science and Engineering (CDISE), Skolkovo Institute of Science and Technology, Moscow,
Russia, 121205 (e-mail: [email protected])}
}
\maketitle
\begin{abstract}
Prediction of the match outcome in real-time multiplayer online battle arena (MOBA) games is one of the most important and exciting tasks in Esports analytics research. This paper focuses on building predictive machine learning and deep learning models to identify the outcome of the Dota 2 MOBA game using a new method of multi-forward-step predictions. Three models were investigated and compared: Linear Regression (LR), Neural Networks (NN), and Long Short-Term Memory (LSTM), a type of recurrent neural network. To achieve these goals, we developed a data-collecting Python server using Game State Integration (GSI) to track the real-time data of the players. Once the exploratory feature analysis and hyper-parameter tuning were done, our models were tested on players with dissimilar backgrounds and playing experience. The achieved accuracy scores depend on the multi-forward prediction parameters: linear regression reaches 69\% in the worst case and 82\% on average, while the deep learning models reach average accuracies of 88\% for NN and 93\% for LSTM.
\end{abstract}
\begin{IEEEkeywords}
Esports analytics; Machine learning in Esports; Dota 2 match outcome; Real-time data analytics; User experience (UX)
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
\IEEEPARstart{T}{he} term Esports is short for electronic sports, a form of competition based on video games. It is often organized in teams with multiple players, and it can be played by any individual online or offline. In world competitions we witness professional players, or "pro players," who have gained extensive experience in particular video games. Esports is not just playing on the computer; nowadays it is also one of the trending research areas in the scientific community, with work on outcome prediction and on improving players' performance [5]. Much analysis has been devoted to several games, with CS:GO and Dota 2 being the top games under the observation of many Esports researchers [9]. However, there are still many methods left to investigate for these games and their outcomes. Finding the best model for predicting real-time game results is one of the main tasks in Esports [6]. Prediction models are valuable in many respects: for coaches seeking to improve their players' performance, and for business enterprises, for which confident, high-quality models can bring considerable profit.
There have been past competitions and investigations of Dota 2 match prediction. For example, Kaggle's Dota 2: Win Probability Prediction, organized in collaboration with OpenDataScience, takes place almost every year, and we investigated those contests. The articles closest to our research area are Win Prediction in Multi-Player Esports: Live Professional Match Prediction [6], To win or not to win? A prediction model to determine the outcome of a Dota 2 match [8], Performance of Machine Learning Algorithms in Predicting Game Outcome from Drafts in Dota 2 [10], and Predicting the winning side of Dota 2 [11]. These and other articles were collected and their results analyzed; they still lack some specific, proper, and easy methodologies, which we discuss in the related works section.
We can confidently state that scientific research on Dota 2 is still far from exhausted, and the gap in knowledge calls for further breakthroughs. Meanwhile, the market is rising at a high rate: it is expected to reach 1.62 billion USD, and 291.6 million people are forecasted to be occasional viewers of Esports by 2024. This strongly motivates the Dota 2 analysis covered in this paper. The project develops a real-time data collection process for tracking the in-game data of the gamer using game state integration (GSI), focuses on the Dota 2 Esports Steam game, and investigates a multi-forward-step prediction approach using a target that characterizes the success of a player throughout the game. Real-time prediction of the game, as well as post-game prediction, becomes possible with our real-time data collection Python tool and GUI, together with machine and deep learning predictive models that analyze the match result in advance.
The paper has concrete short-term and long-term goals for handling the problem statement. We discuss the contemporary state-of-the-art work on Dota 2 match outcome prediction and investigate the possibility of developing machine learning models for our current plans. The following aims are implemented in this paper: \\
\textbf{Contributions}
\begin{itemize}
\item Investigating how the real-time data can be collected and creating a tool for collecting data from players.
\item Implementing machine learning and deep learning models to predict the outcome of the match.
\item Data analysis: investigating the features, defining the most essential variables, and testing the models' accuracy.
\item Experimenting with players on different occasions and recording results for comparison among several machine learning models.
\end{itemize}
The following section gives an overall outline of the Dota 2 video game and the Steam gaming platform. In Section III we focus on the literature review, where we discuss and explain the limitations of existing papers. In Section IV we propose our new method of collecting real-time data and analyze the shortcomings of the data tracking. Section V is dedicated to theory and algorithms, where the hypothesis, sensitivity analysis, feature engineering, and data preprocessing are fully explained. Section VI covers the methodology and experimental setup: it defines the machine learning predictive models and their parameters, compares the models, and discusses their performance. Section VII runs through the experiments, where the real-time data collection and the LR, NN, and LSTM experiments are presented in tables and graphs and discussed. Section VIII demonstrates the graphical user interface, built using the PyQt designer, with some examples of the working GUI. Finally, in the last section we summarize the outcomes of this paper and describe long-term future goals and the paper's impact.
\section{Dota 2 video game and Steam gaming platform}
\subsection{Dota 2 video game} Dota 2 (short for "Defense of the Ancients") is a MOBA video game developed by Valve. The game has ten players in two teams called ‘Dire’ (represented in red) and ‘Radiant’ (represented in green); there are five players per team, and the idea is to defend your Ancient tower and not let the enemy advance towards your side.
Each match is played on a map split into two sides by a river. Each player chooses a hero from 115 possible heroes (the number of heroes changes depending on Valve's updates). Heroes largely determine how the game goes: if the five players have chosen the right combination of heroes, they have a high chance of winning, and vice versa; good gamers specifically take the combination of their heroes into account [8].
Heroes are defined by strength, agility, and intelligence. There are five couriers (flying horses) that help the five heroes deliver items during the game from the team's shops and from a secret shop (where many valuable items can be bought by visiting it with a hero or by sending a courier to fetch them; however, if the courier is seen by the enemy on the way, it can be killed, and the player must wait for it to re-spawn) as heroes gain more gold during the game. Besides heroes there are creeps, which spawn every half minute; each wave has three Melee creeps and one Ranged creep, and every $10^{th}$ wave a Siege creep also spawns. During an attack, buildings also fire at the closest enemy, so a player should not come close to a tower without sufficient power. A full illustration of heroes, creeps, buildings, and the map can be seen in a screenshot taken from our live game in Figure 1. \\
\begin{figure}
\caption{Battlefield of Dota 2. The bottom left corner shows the current map; in the center, “Radiant” creeps and heroes fight against “Dire” while a tower fires at “Radiant.”
}
\end{figure}
\subsection{Steam gaming platform}
Steam is a video game digital distribution service, also developed by Valve, which enables players to connect with each other and play online games at the same time. Steam offers several capabilities, for instance video streaming and social networking, together with automatic updating of games. Each player on Steam can obtain a Steam Token, which we use to identify a particular player id. Our data collection process is also integrated with the Steam Token: the user's token is required in our game state integration (GSI) configuration.
\section{Related works}
We have researched the related work; several SOTA articles and different competitions based on Dota 2 win prediction, data tracking, and other related fields have been selected and reviewed for our literature. The authors in [6] used standard machine learning models, feature engineering, and optimization; their method gives up to 85\% accuracy after 5 minutes of gameplay, using 5.7k matches from The International 2017 Dota 2 tournament. The work in [12] proposed a method in two steps: in the first step, heroes in Dota 2 were quantified from 17 aspects more accurately; in the second step, a new method to represent a draft of heroes was proposed. A priority table of 113 heroes was created based on prior knowledge to support this method. The evaluation indexes of several machine learning methods on this task were compared and analyzed.
Another article [8] discussed improving the prediction results of an existing model, i.e., data collection, feature extraction, and feature encoding. Their augmented-regression idea was to define success probability = (heroes picked and successful heroes)/(heroes picked) and final success probability = (regression probability + success probability)/2. If the final success probability $> 0.5$, they predict a "Radiant" win; otherwise a "Dire" win. The metrics they used include how an individual hero contributed and how heroes complemented each other in a match. The results show that the pure logistic regression accuracy asymptotically approaches 69.42\%, and they conclude that hero selection alone plays a vital role in winning or losing the game. With augmented logistic regression, the test accuracy approached 74.1\%, and the authors compared both models.
Conley and Perry were the first to demonstrate the importance of information from the draft stage of the game with Logistic Regression and k-Nearest Neighbors (kNN) [2]. They got 69.8\% test accuracy on an 18,000 training dataset for Logistic Regression, but it could not capture the synergistic and antagonistic relationships between heroes inside and between teams. To address that issue, the authors used kNN with custom weights for neighbors and distance metrics with 2-fold cross-validation on 20,000 matches to choose d dimension parameters for kNN. For optimal d-dimension = 4 they got 67.43\% accuracy on cross-validation and 70\% accuracy on 50,000 test datasets [2].
The article [10] was also taken into consideration, where the authors compared several ML models: a Naive Bayes classifier and Logistic Regression were chosen to replicate the results of previous works on their collected datasets, while Gradient Boosted Decision Trees, Factorization Machines, and Decision Trees were chosen for their ability to work with sparse data. AUC and Log-Loss (Cross-Entropy) measured the quality of prediction on ten-fold cross-validation; the AUC was 70\% for XGBoost and 68\% for Logistic Regression and Naive Bayes, and the results were compared across three skill brackets: normal, high, and very high. Kuangyan Song, Tianyi Zhang, and Chao Ma [11] tried logistic regression to predict the winning side of Dota 2 games based on hero lineups. They collected data using an API provided by the game developer; the results were illustrated using training and testing errors, and a conclusion was drawn.
Paper [7] used Dota 2 replays and tested different classifiers. These articles and our research differ completely in their methods: we rely on ML models over our collected data, while they rely on other methods to analyze their projects.
The interesting article [1] used an ARMA model to investigate the time series of changes in a hero's health and also performed feature exploration analysis; the authors achieved slightly more than 80\% accuracy in predicting the sign of significant changes in health and reasonably accurate results with linear and logistic regression models.
The paper [4] predicted match results with \textit{xpm, gpm, kpm, lane efficiency, and solo competitive rank}; the authors compared SVM and NN. The work in [3] discussed zone changes, team member distribution, and time series methods, and concluded that MOBA team behaviour is indicative of team skill. The study [9] by HSE university researchers considered game statistics and TrueSkill, which is similar to the Elo rating for chess; it was done for CS:GO and Dota 2 with AUC, recall, and precision values, and the results were compared. Wang, W. demonstrated predicting the match outcome from the hero draft with the help of neural networks on GPU and logistic regression methods [13]. Yifan Yang, Tian Qin, and Yu-Heng Lei tried to predict matches using logistic regression on replay data and improved their accuracy from 58.5\% to 71.5\% [14].
From the previous research, we identified the following shortcomings:
\begin{itemize}
\item There is a need to develop deep learning models for better prediction accuracy.
\item A novel theoretical approach is needed to implement the most reliable models.
\item The authors did not develop their own data collection tools.
\item The authors provide only source code, while players are not scientists who work with code; thus there is a need for a user-friendly GUI.
\end{itemize}
Hence, our contribution fills the knowledge gap in the discussed research and proposes novel approaches that bring more light to the Dota 2 win prediction area of Esports.
\section{Data Collection and Analysis}
Identifying the player's reactions, movements, and real-time interactions can reveal more about the player's results over time. Therefore, it is essential to track such significant information as the game progresses; the \texttt{d2api} Python library is a useful tool here. For example, the following commands in Python:
\begin{itemize}
\item \texttt{import d2api}
\item \texttt{api = d2api.APIWrapper(api\_key="Your Steam Token")}
\end{itemize}
set up the API wrapper, and, given the id of a played match,
\begin{itemize}
\item \texttt{match\_details = api.get\_match\_details(Your Match ID)}
\end{itemize}
loads the match details into a variable; similarly, \texttt{api.get\_heroes()} gives details about the match heroes (these calls are collected in the sketch below). In practice, however, it was not possible to store the output in a file in our preferred format. With this approach we could only observe other players' post-game data, which was not what we expected, as we were planning to investigate and view changes of the data features of our own games and extract the data in CSV format for further research purposes.
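For completeness, the calls above can be put together as follows; this is only a minimal sketch in which the API key and match id are placeholders, and the method names follow the \texttt{d2api} usage described above.
\begin{verbatim}
import d2api

# placeholder credentials -- replace with a real Steam API key and match id
api = d2api.APIWrapper(api_key="YOUR_STEAM_TOKEN")

match_details = api.get_match_details(123456789)  # post-game match summary
heroes = api.get_heroes()                          # static hero metadata
\end{verbatim}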
The idea of writing our own code and integrating Anaconda with the Steam gaming platform became our predominant priority. We discovered game state integration (GSI), which helps developers pull data from live Dota 2 games. We selected Python as the programming language for the development of the data collection tool and created a tool that returns the real-time data in CSV, JSON, and TEXT files. We experimented with a large number of our own games and successfully collected real-time data using Python as a back-end server. Such a tool is vital for research on players' predictive machine learning and deep learning models and for general analysis of players throughout the game. Many researchers want to implement different applications and models for Esports; their models may differ, but a tool for collecting data remains beneficial for everyone with an interest in in-game, real-time data collection. Thus, this section supports both other researchers and our own models by providing a ready and reliable real-time data collection tool. The code is available in our Github repository.\footnote{ \url{https://github.com/KodirjonAkhmedov/Real-Time-Data-Collection-Dota-2}}
\subsection{Game State Integration (GSI) in Dota 2}
Game State Integration (GSI) links the game events that are happening to the computer, and it is an advantageous method for tracking real-time data of the game being played. To be more precise, in the case of Dota 2, game state integration has been developed using Python together with the configuration files required by Valve corporation. The configuration file must be placed inside the installation folder of Dota 2 on the device; for example, on Linux the directory has the following path: $/home/test/.steam/debian-installation/steamapps/common/Dota2beta/game/Dota/$ $cfg/gamestateintegration$, where we need to place our .cfg file among the configuration files of the game. GSI consists of an endpoint settings section; the official documentation covers CS:GO, but we created our own endpoint settings for Dota 2 using Valve's website\footnote{ \url{https://developer.valvesoftware.com/wiki/Counter-Strike:_Global_Offensive_Game_State_Integration}} (an example configuration assembled from these settings is sketched after the list below):
\begin{itemize}
\item \textbf{"uri"} which represents the server and port which the game will make post request to this uri: http://127.0.0.1:8080
\item \textbf{"timeout" "5.0"} The game expects an HTTP 2XX response code from its HTTP POST request, and the game will not attempt submitting the following HTTP POST request while a previous request is still in flight. The game will consider the request to be timed out if a response is not received within so many seconds and re-heartbeat next time with the whole state omitting any delta-computation. If the setting is not specified, then a default short timeout of 1.1 sec will be used.
\item \textbf{"buffer" "0.1"} Because multiple game events tend to occur one after another very quickly, it is recommended to specify a non-zero buffer. When buffering is enabled, the game will collect events for so many seconds to report a more significant delta. Setting 0.0 is to disable buffering completely. If the setting is not specified, then default buffer of 0.1 seconds will be used.
\item \textbf{"throttle" "0.1"} For high-traffic endpoints, this setting will make the game client not send another request for at least this many seconds after receiving the previous HTTP 2XX response to avoid notifying the service when the game state changes too frequently. If the setting is not specified, then a default throttle of 1.0 sec will be used.
\item \textbf{"heartbeat" "0.5"} Even if no game state change occurs, this setting instructs the game to request so many seconds after receiving the previous HTTP 2XX response. The service could be configured to consider a game as offline or disconnected if it did not get a notification for a significant period exceeding the heartbeat interval.
\end{itemize}
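For reference, a configuration file combining the settings above with the data components listed later in this section might look as follows. This is only a sketch following the KeyValues layout of the CS:GO example referenced above; the exact block names are assumptions and should be checked against our repository.
\begin{verbatim}
"Dota 2 Integration Configuration"
{
    "uri"        "http://127.0.0.1:8080"
    "timeout"    "5.0"
    "buffer"     "0.1"
    "throttle"   "0.1"
    "heartbeat"  "0.5"
    "data"
    {
        "provider"   "1"
        "map"        "1"
        "player"     "1"
        "hero"       "1"
        "abilities"  "1"
        "items"      "1"
        "buildings"  "1"
    }
}
\end{verbatim}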
\subsection{Real-time data collection and Steam integration}
For Steam platform integration and for collecting data, the Dota 2 game needs to be launched from the Steam gaming platform. Here the Python server works as the "third party" between the game and the computer, enabling us to store the data in a directory on the computer. The following quick steps describe the process (a stripped-down server sketch is given after the list):
\begin{itemize}
\item In the initial step, we define a class called the real-time Dota 2 parser. The parser takes a JSON request as input and produces a CSV file as output. Following that, we define a dictionary for buildings that contains information about buildings, and a function that saves all the key/value pairs given by the JSON file into a dictionary.
\item In the second step, we took the seven GSI values and their keys, namely, Provider, Map, Player, Hero, Building, Abilities, and Items.
\item In the third step, we save a list of values and column names for each arrived request in the dictionary but not buildings because we will save the building values separately and update information about buildings in each post request.
\item In the fourth step, we save the tracked data to the CSV file. Initially, we create a CSV file in the current directory; if it already exists, we reset it and start writing a new CSV file. An important detail concerns when the server starts writing: Dota 2 has the game states “pre-game”, “in-progress”, and “post-game”, and we use them in our condition so that the server writes real-time data only while the state is “pre-game” or “in-progress”.
\item In the fifth step, we define the server handler with functions for setting the response and handling GET and POST requests, in which we read the size of the data and the data itself and apply the real-time parser defined earlier.
\item In the sixth step, we write to three different data files, JSON, CSV, and TEXT, for each incoming post request.
\item In the final, seventh stage, we define the starting and stopping points of the HTTP server: the server runs until the user terminates it manually from their environment, in our case with Jupyter Notebook’s stop-kernel button. However, this process has also been integrated with the graphical user interface, so there is no need to interact with the Python code; start, stop, and view buttons allow the data to be tracked easily.
\end{itemize}
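The sketch below illustrates the core of such a server. It is a simplified illustration only: the output file name, port, and set of flattened game-state sections are placeholders, and the building handling, the game-state check, and the JSON/TEXT outputs described above are omitted.
\begin{verbatim}
import csv
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CSV_PATH = "dota2_realtime.csv"   # placeholder output file name

class GSIHandler(BaseHTTPRequestHandler):
    header_written = False

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        state = json.loads(self.rfile.read(length))   # JSON game state posted by Dota 2

        # flatten selected game-state sections into one row of "section.key" columns
        row = {}
        for section in ("provider", "map", "player", "hero"):
            for key, value in (state.get(section) or {}).items():
                row["%s.%s" % (section, key)] = value

        if row:
            with open(CSV_PATH, "a", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=sorted(row))
                if not GSIHandler.header_written:
                    writer.writeheader()
                    GSIHandler.header_written = True
                writer.writerow(row)

        self.send_response(200)                        # 2XX reply keeps the game posting
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), GSIHandler).serve_forever()
\end{verbatim}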
\subsection{Data features}
Once we had tracked the data, we started studying the features and their importance for our machine and deep learning models. The data collection tool enabled us to collect more than 160 real-time data features. The game state integration components are the following: \textit{"buildings", "provider", "map", "player", "hero", "abilities", "items", "draft", "wearables"}, and the data is joined to these main game states, including:
\begin{itemize}
\item \textbf{“Player”} represents a person who is playing the game and his ability to play the game.
\item \textbf{“Hero”} represents the player’s choice of hero for the game and hero statistics during the game.
\item \textbf{“Abilities”} represents the hero’s list of abilities used or how they fluctuated over the time.
\item \textbf{“Items”} represents the hero’s items and their usage over time.
\item \textbf{“Buildings”} represents “Dires” or “Radiants” towers (bottom, middle, and top) towers' statistics.
\item \textbf{“Provider”} represents what is the game which name that is currently in process and what is the game's id Etc.
\item \textbf{“Map”} represents the current map, and map states such as does the winner exists or the map is paused Etc.
\end{itemize}
It is noteworthy that the columns of our collected data are fixed across all games. The rows, by contrast, grow with game progress: the longer a game is played, the larger the data volume and the greater the number of rows compared to games that finish in a short period of time.
\subsection{Sensitivity analysis}
Before analyzing the game data features, we need to extract statistical data from more than 160 features, not all of which are numerical. More precisely, the collected data may contain objects as well as integers; therefore, we keep only the numerical values to fit our models. Thus, we have taken the data features whose values are of type \textit{'int16', 'int32', 'int64', 'float16', 'float32', or 'float64'}; the extracted statistical data can change depending on how long the game lasts.
Let us define the target variable as $y$ and the independent variables as $x$. Based on our scientific hypothesis, we chose the variable that primarily symbolizes the player’s success during the game, \textbf{PLAYER.GOLD}; the distribution of the target variable for one game can be seen in Figure 2. \textit{Gold} is the currency used to buy items or instantly revive a hero. Gold can be earned by killing heroes, creeps, couriers, or buildings, by killing the neutral enemies that remain in specific corners of the team’s side of the map, or by taking bonuses that appear at certain places on the map; it is also gained passively throughout the game. For the input $x$, we selected the ten fixed features with the largest correlation to the target variable; Figure 3 illustrates the distribution of these ten input features over the game, and a sketch of this selection step follows the figures.
\begin{figure}
\caption{Player gold over the course of one particular game
}
\end{figure}
\begin{figure}
\caption{Line plot of the chosen fixed independent features’ change throughout the game.
}
\end{figure}
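A minimal pandas sketch of this preprocessing step is shown below; the CSV file name and the exact label of the gold column are assumptions, and only the dtype filtering and the correlation-based selection described above are illustrated.
\begin{verbatim}
import pandas as pd

# Hypothetical file produced by the data collection tool.
df = pd.read_csv("dota2_realtime.csv")

# Keep only the numerical columns, as described in the text.
numeric = df.select_dtypes(
    include=["int16", "int32", "int64", "float16", "float32", "float64"])

# Target variable: player gold (column name assumed).
target = "player.gold"
correlations = numeric.corr()[target].drop(target).abs()

# Ten input features with the largest correlation to the target.
top_features = correlations.sort_values(ascending=False).head(10).index.tolist()
print(top_features)
\end{verbatim}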
\section{Theory and Algorithms}
There have been various attempts to predict the final match result using different techniques and analyses, but they lack a simple and convenient mechanism. This section describes our approach and explains how the methods and algorithms we tried differ and why they matter for prediction. As described in the previous section, we developed a real-time data collection tool using Python as a server for Dota 2, which means our theory and algorithms are based on our collected data. Starting from this data, we first developed a simple linear regression model, then trained a neural network and a long short-term memory network and calculated the accuracy scores of the models. We also performed a sensitivity analysis of the features and their importance and established that the \textit{player.gold} feature represents the success of the player. With that in mind, we built a new raw dataset by composing independent features $x$ that are highly correlated with the target $y$ and predicted multiple steps forward. We introduce the corresponding parameters as $L$ (time-lagged correlation) and $d$ (multi-forward steps).
\subsection{Time-lagged correlation and multi-forward step prediction}
The method we propose in this paper depends on the time-lagged correlation over rows $(L)$ and the number of multi-forward steps $(d)$. Both parameters can be adjusted manually by users: for example, to predict 20 steps ahead of the game while real-time data is being collected, the user can set $d=20$ and the models will produce results accordingly. Likewise, the user can edit the time-lagged correlation $L$ to define the number of composed rows. Let us describe both parameters in detail:
\begin{itemize}
\item \textbf{Multi-forward step prediction (d)} - represents the number of steps ahead for which the game is predicted. The parameter plays a crucial role in defining a model's bounds and in constructing the raw data described in the following subsection during preprocessing.
\item \textbf{Time-lagged correlation (L)} - represents the number of joined rows. Imagine the user is tracking data and has the initial data collected by our tracker. Treat each row of that data as a vector $x$ with independent variables $x_1, x_2,\ldots,x_n$ and target $y$. Depending on $L$ (say $L=3$), we create a new dataset by composing three consecutive rows of $x$ into a single value $X$ in one row; the window then shifts by one row and the process continues until row $NRow - (L+d-1)$, where $NRow$ is the number of rows. For the target variable $y$, we take the value shifted by the multi-forward step $d$ beyond the end of the window and write it to the new dataset, producing a raw dataset determined by the parameters (a code sketch of this construction follows the dataset display in the next subsection).
\end{itemize}
\subsection{Data construction using model parameters}
We construct a new data set using the model parameters, the time lag $(L)$ and the multi-forward step ahead prediction $(d)$, together with the independent features, the target, and a start index, and obtain our final working data set. Given the rows of Table 1, we form the following data set:
\begin{table}
\caption{Model construction process using independent variables}
\centering
\begin{tabular}{| c | c | c | c| c | c | c | c|}
\hline
Vector of features & $x_1$ & $x_2$ & $x_3$ & $x_4$ & $x_5$ & ... & $x_m$ \\
\hline
x(1) & * & * & * & * & * & * & * \\
\hline
x(2) & * & * & * & * & * & * & * \\
\hline
x(3) & * & * & * & * & * & * & * \\
\hline
x(4) & * & * & * & * & * & * & * \\
\hline
... & * & * & * & * & * & * & * \\
\hline
x(L) & * & * & * & * & * & * & * \\
\hline
x(L+1) & * & * & * & * & * & * & * \\
\hline
... & * & * & * & * & * & * & * \\
\hline
x(NRow-d) & * & * & * & * & * & * & * \\
\hline
\end{tabular}
\end{table}
$$Dataset=\left(
\begin{array}{lcr}
X(1) & \vert & y(L+d)\\
X(2) & \vert & y(L+1+d)\\
X(3) & \vert & y(L+2+d)\\
X(4) & \vert & y(L+3+d)\\
\ldots & \ldots & \ldots\\
X(NRow-d-L+1) & \vert & y(NRow)
\end{array}
\right)$$
where
\begin{align*}
X(1)&=\left(x(1), x(2), \ldots, x(L)\right),\\
X(2)&=\left(x(2), x(3), \ldots, x(L+1)\right),\\
X(3)&=\left(x(3), x(4), \ldots, x(L+2)\right),\\
X(4)&=\left(x(4), x(5), \ldots, x(L+3)\right),\\
&\;\;\vdots\\
X(k)&=\left(x(k), x(k+1), \ldots, x(L+k-1)\right),\\
&\;\;\vdots
\end{align*}
Eventually, we have $datasetX$ with the composed independent variables and $datasetY$ with the target variable. The shape of the data depends entirely on the game; in this particular case, $datasetX$ has 3833 rows and 70 columns, and $datasetY$ consists of 3833 rows and a single target column.
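A minimal sketch of this construction is given below. It assumes the numeric feature matrix and target array from the sensitivity-analysis sketch; the function name is ours, and flattening each window of $L$ rows into a single row is one possible realization of the composition described above.
\begin{verbatim}
import numpy as np


def build_lagged_dataset(x, y, L, d):
    """Compose L consecutive feature rows into one sample and pair it
    with the target value d steps after the end of the window."""
    n_rows = len(x)
    X, Y = [], []
    # Windows X(k) = (x(k), ..., x(k+L-1)) paired with y(k+L-1+d),
    # for k = 1, ..., NRow - (L+d-1) in the paper's 1-based notation.
    for k in range(n_rows - (L + d - 1)):
        X.append(np.asarray(x[k:k + L]).ravel())
        Y.append(y[k + L - 1 + d])
    return np.array(X), np.array(Y)


# Example usage: ten features with L = 7 give 70 columns in datasetX.
# datasetX, datasetY = build_lagged_dataset(
#     numeric[top_features].values, numeric["player.gold"].values, L=7, d=20)
\end{verbatim}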
\section{Methodology and Experimental Setup}
In this section, we enumerate the languages and programs used in the research, present the metrics and model parameters, describe the implementation of our machine and deep learning models, and compare the different models, discussing why some models performed better than others. All models and the Graphical User Interface were implemented in Python on a Linux operating system.
\subsection{Machine Learning models and their parameters}
Three machine learning models have been chosen: \newline
\begin{itemize}
\item \textbf{Linear Regression} is a supervised machine learning algorithm used to predict values within a continuous range. The regression formulation depends on the \textit{target variable} being predicted. Our dependent variable is player gold; this quantity is not stable during the game, and there are strong linear correlations with certain features, so a regression formulation was chosen for the prediction. We built a linear regression model by splitting our constructed $datasetX$ and $datasetY$ into training and test data; the test partition was 20\%, and the rest was dedicated to training. We fit the model and take the predicted values from the regressor (a minimal sketch is given after this list). The experiments were run on different datasets, and the results are compared in the experiments section. \newline
\item \textbf{Neural Networks} are among the leading models in time-series forecasting and esports. With artificial neural networks we can obtain better results thanks to their ability to learn through hidden layers: each layer takes the input data, processes it through activation functions, and passes it to the next layer. The loss function is reduced by forward and backward propagation, in which weights and biases are repeatedly updated during training over the specified number of epochs until the loss is sufficiently small, leading to better model performance. \newline
\item \textbf{Long Short-Term Memory} (LSTM) is a type of recurrent neural network (RNN). The output from the previous step is fed, together with the other inputs, into the next step, and the process continues. In an LSTM, the nodes are recurrent but also have an internal state, which the node uses as a working memory that lets information flow across many time steps. The input value, the previous output value, and the internal state are all used in the node's calculations; the results are not only the output value but also updates to the state. Neural networks have parameters that determine how the inputs are used in the calculations, but an LSTM additionally has parameters known as gates that control the node's information flow. LSTM nodes are more complicated than regular recurrent nodes, which lets them learn complex interdependencies and sequences in the input data and leads to better outcomes.
\end{itemize}
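A minimal scikit-learn sketch of the linear regression step referenced in the first item above is shown here; the random arrays are placeholders with the shapes reported earlier, and the random seed is an assumption.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Stand-in arrays; in practice datasetX and datasetY come from
# build_lagged_dataset (3833 rows, 70 feature columns in our example).
datasetX = np.random.rand(3833, 70)
datasetY = np.random.rand(3833)

# 20% test / 80% train split, as described in the text.
x_train, x_test, y_train, y_test = train_test_split(
    datasetX, datasetY, test_size=0.2, random_state=42)

regressor = LinearRegression()
regressor.fit(x_train, y_train)
y_predicted = regressor.predict(x_test)
\end{verbatim}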
Several metrics were used to evaluate the performance of all models. For the accuracy of the linear regression model, we used the Sklearn library to calculate $R^2$, but we report the final accuracy score as the adjusted $R^2$; the NN and LSTM accuracy scores were reported as $R^2$. \newline
\textbf{Why adjusted $R^2$?} When more input variables or predictors are added to a regression model, the $R^2$ value may increase, which tempts modelers to add even more. This is called \textbf{over-fitting} and results in an unexpectedly high $R^2$. The adjusted $R^2$, in contrast, indicates how trustworthy the correlation is and how much is explained by the added input features. The $R^2$ and adjusted $R^2$ scores are evaluated using the sum of squared errors (SSE) and the total sum of squares (SST):
$$R^2 = 1 - \frac{SSE}{SST}$$
where,
\[ SSE = \sum (y_{test} - y_{predicted})^2 \]
\[ SST = \sum (y_{test} - y_{testmean})^2 \]
then,
$$R^2_{adjusted} = 1 - (1-R^2) * \frac{N-1}{N-p-1}$$
where $p$ is the number of input features and $N$ is the number of rows.
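In code, the score can be computed directly from these formulas; the NumPy sketch below continues the variable names of the regression sketch above.
\begin{verbatim}
import numpy as np

# R^2 from the sums of squares defined above.
sse = np.sum((y_test - y_predicted) ** 2)
sst = np.sum((y_test - np.mean(y_test)) ** 2)
r2 = 1 - sse / sst

# Adjusted R^2: N rows and p input features in the test set.
N, p = x_test.shape
r2_adjusted = 1 - (1 - r2) * (N - 1) / (N - p - 1)
\end{verbatim}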
For the optimizers, we chose Adam for both the NN and LSTM models. Adam is a replacement optimization algorithm for stochastic gradient descent for training deep learning models and combines the best properties of the AdaGrad and RMSProp algorithms. We used Adam with learning rate $=0.01$, $\beta_1=0.9$, and $\beta_2=0.999$, which are the standard values for Adam. The loss function of the LSTM was the same MSE as for the neural network. Each model was trained for 100 epochs, where one epoch means that the optimizer has used every training example once; 100 epochs give the optimizer enough passes to update the weights and biases. The batch size was 64, which fits the model quickly, whereas training with a batch size of one would take many hours; we therefore used this common value and evaluated the model performance. The total number of parameters for LSTM training was 622,209, with no non-trainable parameters. The NN had 438,913 parameters in total, of which 436,225 were trainable and 2,688 non-trainable.
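A hedged Keras sketch of this training configuration follows, continuing the variable names from the earlier sketches. The dense layer sizes are placeholders (the actual architectures are described in the next subsection); only the optimizer settings, loss, number of epochs, and batch size stated above are taken from the text.
\begin{verbatim}
import tensorflow as tf

# Placeholder model; the real NN and LSTM architectures are given below.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          input_shape=(datasetX.shape[1],)),
    tf.keras.layers.Dense(1),
])

optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.01, beta_1=0.9, beta_2=0.999)
model.compile(optimizer=optimizer, loss="mse")

model.fit(x_train, y_train, epochs=100, batch_size=64,
          validation_data=(x_test, y_test))
\end{verbatim}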
\subsection{Architectures of NN and LSTM}
We tested several architectures for the Neural Network model, calculated the accuracy score for each experiment, and chose the architecture that gave the best performance. We used a sequential model, to which layers can be added easily, and imported the required layer types from the Keras library. Our neural network architecture has four layers, as shown in Figure 4.
\begin{figure}
\caption{Illustration of Neural Network architecture.
}
\end{figure}
In the LSTM model we used the same sequential approach. We standardized $xtrain$, $ytrain$, $xtest$, and $ytest$ with the mean and standard deviation, as we did for the NN. For the split we used the standard 20\% testing to 80\% training ratio. In contrast to the plain neural network, the input must be reshaped for the LSTM model to work correctly: the LSTM network always expects a three-dimensional array as input, where the first dimension represents the samples, the second the time steps, and the third the number of features in one input sequence. We imported the LSTM layer from the TensorFlow Keras layers, tested several different LSTM architectures, and chose the one with the best accuracy score as the final working architecture, shown in Figure 5.
\begin{figure}
\caption{Illustration of LSTM architecture.
}
\end{figure}
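The reshaping step described above can be sketched as follows; scikit-learn's StandardScaler is used as one possible way to standardize with the mean and standard deviation, the lag and feature counts are example values, and the single LSTM layer is a placeholder for the architecture in Figure 5.
\begin{verbatim}
import tensorflow as tf
from sklearn.preprocessing import StandardScaler

# Standardize inputs and target with mean and standard deviation.
x_scaler, y_scaler = StandardScaler(), StandardScaler()
x_train_s = x_scaler.fit_transform(x_train)
x_test_s = x_scaler.transform(x_test)
y_train_s = y_scaler.fit_transform(y_train.reshape(-1, 1))
y_test_s = y_scaler.transform(y_test.reshape(-1, 1))

# LSTM input must be 3-dimensional: (samples, time steps, features).
L, n_features = 7, 10            # e.g. 7 lagged rows of 10 features
x_train_3d = x_train_s.reshape(-1, L, n_features)
x_test_3d = x_test_s.reshape(-1, L, n_features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(L, n_features)),  # placeholder size
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train_3d, y_train_s, epochs=100, batch_size=64)
\end{verbatim}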
\section{Results of the Experiments}
This section describes the results of all tests and experiments performed with our models, theory, and algorithms. We show the outcomes of the working models and compare them in different situations. Overall, the experimental phase took the final six months of the project, during which we were mainly involved in experiments with different players and in collecting their gaming data.
\subsection{Real-time data collection experiments and results}
The real-time data collection tool was tested in more than 100 games; all the data was collected, used to train the models, and evaluated against the match results. Figure 6 shows the CSV-format data collected with our tool. The experiments were run with different values of $L$ and $d$: we first predicted a small number of time steps ahead and then increased the number of multi-forward steps. Table II compares the accuracy of the three models, and Table III describes experiments with increasing multi-forward steps $d$ and fixed time-lagged correlation $L$.
\newline
\begin{figure}
\caption{Illustration of real-time CSV data collected for a Dota 2 game.
}
\end{figure}
\begin{table}[!ht]
\caption{Accuracy scores comparisons of predictive models}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c | c| c | c | c | c |}
\hline
Models & d=2, L=5 & d=10, L=7 & d=15, L=10 & d=20,L=15 & d=25, L=20 \\ [0.5ex]
\hline\hline
LR & 0.98 & 0.93 & 0.90 & 0.90 & 0.88 \\
\hline
NN & 0.99 & 0.94 & 0.93 & 0.93 & 0.91\\
\hline
LSTM & 0.99 & 0.95 & 0.94 & 0.95 & 0.94\\ [0.1ex]
\hline
\end{tabular}
}
\end{table}
\begin{table}[!ht]
\caption{Accuracy scores comparisons of predictive models with fixed \textbf{L=15} and increasing multi-forward steps}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c | c| c | c | c | c | c | c | c |}
\hline
Models & d=30 & d=40 & d=50 & d=60 & d=70 & d=80 & d=90 & d=100 \\ [0.5ex]
\hline\hline
LR & 0.86 & 0.92 & 0.74 & 0.67 & 0.64 & 0.68 & 0.67 & 0.70 \\
\hline
NN & 0.90 & 0.92 & 0.90 & 0.89 & 0.93 & 0.86 & 0.94 & 0.95\\
\hline
LSTM & 0.93 & 0.98 & 0.92 & 0.91 & 0.98 & 0.96 & 0.95 & 0.92 \\ [0.1ex]
\hline
\end{tabular}
}
\end{table}
\begin{table}[!ht]
\caption{Accuracy scores from testing our models on 10 new game data sets with $d=20$, $L=15$, and 10 fixed input features in all experiments}
\centering
\scalebox{0.76}{
\begin{tabular}{|c | c| c | c | c | c | c | c| c | c | c|}
\hline
Models & Exp1 & Exp2 & Exp3 & Exp4 & Exp5 & Exp6 & Exp7 & Exp8 & Exp9 & Exp10 \\ [0.5ex]
\hline\hline
LR & 0.81 & 0.69 & 0.75 & 0.72 & 0.84 & 0.82 & 0.96 & 0.87 & 0.85 & 0.87 \\
\hline
NN & 0.87 & 0.82 & 0.87 & 0.79 & 0.88 & 0.88 & 0.97 & 0.92 & 0.91 & 0.92 \\
\hline
LSTM & 0.92 & 0.91 & 0.91 & 0.88 & 0.91 & 0.92 & 0.98 & 0.96 & 0.94 & 0.94\\
\hline
\end{tabular}
}
\end{table}
\begin{table}[!ht]
\caption{Accuracy scores from testing our models on 10 new game data sets with $d=20$, $L=15$, and 10 highly correlated input features in all experiments}
\centering
\scalebox{0.76}{
\begin{tabular}{|c | c| c | c | c | c | c | c| c | c | c|}
\hline
Models & Exp1 & Exp2 & Exp3 & Exp4 & Exp5 & Exp6 & Exp7 & Exp8 & Exp9 & Exp10 \\ [0.5ex]
\hline\hline
LR & 0.80 & 0.75 & 0.78 & 0.70 & 0.67 & 0.80 & 0.96 & 0.82 & 0.87 & 0.88 \\
\hline
NN & 0.90 & 0.88 & 0.86 & 0.78 & 0.86 & 0.87 & 0.96 & 0.83 & 0.90 & 0.92 \\
\hline
LSTM & 0.94 & 0.94 & 0.88 & 0.83 & 0.92 & 0.90 & 0.98 & 0.92 & 0.90 & 0.93\\ [1ex]
\hline
\end{tabular}
}
\end{table}
\subsection{Graphical visualization of results}
We graphically visualized the predicted and actual values using several types of plots: Q-Q plots to inspect the distributions, and bar plots and comparisons in separate and merged plots to analyse the performances. The plots for the linear regression model can be seen in Figure 8. \newline
\textbf{Linear Regression Results} \newline
First, let us consider the linear regression results. Figure 7 shows the Q-Q plot for linear regression; the distribution is almost right-skewed. The accuracy scores we achieved depend on the multi-forward prediction parameters: in the worst case linear regression reached 69\%, and 82\% on average. \newline
\begin{figure}
\caption{LR Q-Q plot of the quantiles}
\end{figure}
\begin{figure}
\caption{Plot of the difference between predicted and actual values of Linear Regression model
}
\end{figure}
\textbf{Neural Network}
In the next experiment, we tested the prediction results of our neural network model. The results show better performance for the neural network, with an average accuracy score of 88\% on the same data set used for linear regression, i.e. 6 percentage points higher than the LR model.
\begin{figure}
\caption{Line plot of true sorted (orange) and predicted sorted (blue) values of our target using NN
}
\end{figure}
\newline
We can see some outliers at the beginning and the end of the data set. Since we are working with large values, such extreme points can occur despite standardization, but they had no considerable impact on model performance. For the neural network, the predicted and actual values lie almost on the same points, reflecting the high accuracy score with slightly unstable values (Figure 9). \newline
\textbf{Long Short-Term Memory Results}
The third experiment is based on the LSTM model, using the same data as before. This model achieved a much better average accuracy score than NN and LR: 93\% compared to 82\% for LR and 88\% for NN, which means the Long Short-Term Memory results are 11 percentage points higher than those of the Linear Regression model. The reason for this significant difference is that in the LSTM model the biases and weights are updated more effectively over the training epochs than in the other two models. There are still some outliers in the prediction, with some predicted values falling below the actual target values at certain points, but overall this model performed very well; Figure 10 shows the Q-Q plot of the quantiles.
\begin{figure}
\caption{LSTM Q-Q plot of quantiles
}
\end{figure}
Besides evaluating the models' accuracy, we tested future predictions of the target variable with multi-forward steps $d$, predicting future values with all three models and comparing them. We verified that the predicted future values of the target were close to its previously observed values. LSTM and LR were quite optimistic, predicting an increase, while NN predicted a slight decrease in our experiments (Figure 11).
\begin{figure}
\caption{Comparison of the three models' $d=20$ multi-forward-step future predictions of the target with $L=15$ time-lagged correlation.
}
\end{figure}
\subsection{Discussion of the results among compared models}
The results show that we achieved reasonably good accuracy scores with all of the models, and the models we tried allowed us to assess the performance of several approaches. Initially, we were concerned about the accuracy, but after several experiments we became confident that the accuracy fluctuates across datasets without dropping far below 70\%, despite altering several model parameters and testing on varying data collected by our data collection tool.
For the deep learning models, LSTM and NN, we deduced that neural networks can give better accuracy than the linear regression model. Moreover, we found that as $d$ and $L$ increase, accuracy decreases, but in these tests it did not fall below 80\%; testing with the same dataset and increasing multi-forward steps showed that LSTM also outperformed Linear Regression and Neural Networks in almost all of the ten experiments. The exception was better NN performance for certain fixed choices of composing rows and multi-forward steps; in the first experiment, for example, the two had the same performance (Table II). Overall, the LSTM was the best of the three models thanks to its more complex nodes, which let it learn complex interdependencies and sequences in the input data and lead to better results.
The experiments also included fixing an optimal $L$ and predicting with increasing multi-forward steps $d$, testing predictions from 30 to 100 steps ahead (Table III). We deduced that Linear Regression is significantly affected by increasing $d$, where it showed low performance, while the NN and LSTM models could handle long-horizon prediction with accuracy no lower than 85\%.
Our real-time data collection work aided the implementation of the machine learning models. The data consists of more than 160 columns, with the number of rows depending on game time. The applications discussed here are useful for evaluating the performance of machine learning models for Dota 2 match outcome prediction. In terms of the data collection process, the real-time Python data collection script could ease the work of researchers by giving them a ready-made tool with which to implement such machine learning models in their own studies and advance the field. There are some limitations: the research needs further development, considering more parameters and additional methods and approaches, which we regard as long-term goals to be pursued with future grants and publications.
\section{Real time GUI using PyQt}
Once we had a working model and data collection tool, the primary question was how to make the work more user-friendly. We wanted users to click buttons to evaluate the accuracy scores of the models of their choice and to collect the data of their games. The main purpose of this section is to reduce interaction with the source code: everything is evaluated through the graphical user interface. For this reason we created a GUI and connected our source code in the backend. There were several options for this task; choosing between a web-based and a desktop-based GUI, we decided that desktop-based software was a good option. We found a handy tool for creating user interfaces called PyQt, a set of Python bindings for The Qt Company's Qt application framework; it runs on all platforms supported by Qt, including Windows, macOS, Linux, iOS, and Android. PyQt6 supports Qt v6, PyQt5 supports Qt v5, and PyQt4 supports Qt v4. The bindings are implemented as a set of Python modules and contain over 1,000 classes. \newline
We created the GUI with the PyQt Designer application on the Linux operating system. We initially had several ideas for the user interface, but in the end we built a tool that is easy to interact with; instructions for using it are given in the following subsection.
\subsection{Instruction of the tool}
In this section, we go through each step of using our GUI for collecting data and evaluating the models' performance:
\begin{itemize}
\item The GUI has three main sections: Data Collection, ML Models, and Results.
\item It has nine buttons and three text browsers for viewing model performances.
\item The first part, data collection, has “Start,” “Stop,” and “Go to the folder” buttons. Once the game starts, click “Start” and the tool will begin collecting real-time data. The current data can be viewed via “Go to the folder”; this button exists to confirm that the data is there and that collection is working. Once the game finishes, safely close the application and use the collected data.
\item The second part consists of the three machine learning models: Linear Regression, Neural Network, and Long Short-Term Memory. To use them, click on the name of a model and wait a while; the two deep learning models need time to train over 100 epochs, whereas Linear Regression trains quickly.
\item The third part shows the results, i.e. the accuracy scores. Once a model has finished, click one of the “See NN/LSTM/LR Result” buttons and the real-time model performance appears in the text box next to the model.
\item To close the GUI, click the x button and finish the work safely.
\end{itemize}
\subsection{Backend connection with data collection and models’ codes}
For the backend connection of the GUI, we used the PyQt widgets \textit{QApplication, QMainWindow, QPlainTextEdit, QtCore, and QtGui}, together with \textit{Tkinter} and its file dialog. These libraries enabled the backend implementation of the tool. For each model and for the data collection section we linked the corresponding widget of the main UI, for example \texttt{pushButton\_1.clicked.connect(...)} with the handler belonging to that GUI button. All the other push buttons in the subsections were connected in the same way.
The next stage was creating functions to run the .py files and integrating them into our GUI. Overall, nine functions were created: data collection is started with a simple call from the \textit{os} library that runs the .py file, and \textit{sys.exit()} is connected for stopping the collection. The models are launched with the same \textit{os}-based procedure. In addition, each model has one extra function responsible for reading its results: after a model run, it writes a TXT file to a directory, and this TXT file is read and displayed in the text browser with \texttt{ui.textBrowser.setText()}. The process was repeated for the other two models, and this stage was completed successfully.
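A minimal sketch of this wiring is shown below. The widget names, script path, and result file are assumptions for illustration, and \texttt{subprocess} is used here in place of the \texttt{os} call mentioned above so that the collection script runs without blocking the interface.
\begin{verbatim}
import subprocess
from PyQt5 import QtWidgets


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.start_button = QtWidgets.QPushButton("Start")
        self.result_button = QtWidgets.QPushButton("See LR Result")
        self.text_browser = QtWidgets.QTextBrowser()
        central = QtWidgets.QWidget(self)
        layout = QtWidgets.QVBoxLayout(central)
        for w in (self.start_button, self.result_button, self.text_browser):
            layout.addWidget(w)
        self.setCentralWidget(central)
        # Connect buttons to handlers, as described above.
        self.start_button.clicked.connect(self.start_collection)
        self.result_button.clicked.connect(self.show_result)

    def start_collection(self):
        # Launch the data collection script (path is hypothetical).
        subprocess.Popen(["python", "collect_data.py"])

    def show_result(self):
        # Read the TXT file written by the model run and display it.
        with open("lr_result.txt") as f:
            self.text_browser.setText(f.read())


if __name__ == "__main__":
    app = QtWidgets.QApplication([])
    window = MainWindow()
    window.show()
    app.exec_()
\end{verbatim}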
The final working GUI was tested during a game. Figure 12 shows an example from one of our games: the data has been collected, and the models' accuracy scores have been evaluated.
\begin{figure}
\caption{Example of the working GUI with the accuracy scores of the models at the bottom right
}
\end{figure}
\newline
\section{Conclusion}
In conclusion, these results suggest that we have built effective machine learning models: a supervised linear regression model and the deep learning models Neural Network and Long Short-Term Memory were implemented to predict game outcomes in Dota 2. Using recent approaches, we identified one possible way to achieve our goals. Moreover, with the help of the data collection, we successfully implemented our algorithms and investigated essential variables of the Dota 2 game. \newline
Across the experiments, the accuracy scores depend on the multi-forward prediction parameters: in the worst case, linear regression achieved 69\%, and 82\% on average, while the deep learning models reached average prediction accuracies ranging from 88\% for NN to 93\% for LSTM. The LSTM model performed better than the other two models in several experiments across different games. We also tested the dataset with increasing multi-forward steps and measured the accuracy scores, which again showed that LSTM outperformed Linear Regression and Neural Networks. The NN performed somewhat better than linear regression, with LR being the model with the lowest performance. \newline
We also experimented with predictions several steps ahead; as expected, fewer steps ahead and a smaller time-lagged correlation yield higher accuracy scores, and vice versa. The graphical user interface was designed for the sake of user experience, and we illustrated some examples of its use.
\end{document}
\begin{document}
\title{Computations of Mather Minimal Log Discrepancies}
\begin{abstract}
We compute the Mather minimal log discrepancy via jet schemes and arc spaces for toric varieties and very general hypersurfaces.
\end{abstract}
\section{Introduction}
\label{intro}
The minimal log discrepancy is an important invariant in algebraic geometry. It is well known that certain conjectures on the minimal log discrepancy imply the termination of the Minimal Model Program (see \cite{Sh04}). However, not much about minimal log discrepancy is known compared to other invariants defined in similar settings such as the log canonical threshold. Recently, the notion of \emph{Mather minimal log discrepancy} was introduced by Ishii in \cite{Ish11}. It is closely related to the minimal log discrepancy and they share many similar properties. But the Mather minimal log discrepancy is defined more generally for an arbitrary variety. This paper is concerned with the computation of Mather minimal log discrepancy in the context of toric varieties and very general hypersurfaces.
Let us start by recalling the definition of the minimal log discrepancy. Let $X$ be a normal $\mathbb{Q}$-Gorenstein variety over an algebraically closed field $k$ of characteristic zero and let $f: Y\rightarrow X$ be a birational morphism with $Y$ normal. For a divisor $E$ on $Y$ over $X$ and an ideal $\mathfrak{a}$ in $\mathcal{O}_X$, the \emph{log discrepancy} of the pair $(X,\mathfrak{a})$ with respect to $E$ is defined as
\begin{equation*}
a(E;X,\mathfrak{a}):=\textnormal{ord}_E(K_{Y/ X})-\textnormal{ord}_E(\mathfrak{a})+1
\end{equation*}
For each closed subset $W$ in $X$, the minimal log discrepancy of $(X,\mathfrak{a})$ with respect to $W$ is
\begin{equation*}
\textnormal{mld}(W;X,\mathfrak{a}):=\min\{a(E;X,\mathfrak{a}) | c_X(E)\subset W\},
\end{equation*}
where $c_X(E)$ is the center of $E$ on $X$. An introduction to minimal log discrepancies can be found in \cite{Am06}.
Now let $X$ be an arbitrary variety over an algebraically closed field $k$ of characteristic zero. Let $f:Y\rightarrow X$ be a resolution of singularities so that $Y$ is a sufficiently "high" birational model over $X$ (will be made clear in Section \ref{sec3}). The Mather minimal log discrepancy is defined in a similar way to the usual minimal log discrepancy (by simply replacing the relative canonical divisor with the \emph{Mather discrepancy divisor}) but it is much easier to describe in terms of jet schemes and arc spaces. The Mather minimal log discrepancy for a closed point $x$ of a variety $X$ is denoted by $\widehat{\textnormal{mld}}(x;X)$. Of the many nice properties of the Mather minimal log discrepancy, one of the most important is Inversion of Adjunction (\cite[Theorem 4.10]{dFD11} and \cite[Proposition 3.10]{Ish11}).
When both Mather and usual minimal log discrepancies are defined, the two differ by the pull back of a certain ideal sheaf (\cite[2.2]{Ish11}). In particular, Mather minimal log discrepancy is always larger than or equal to the usual minimal log discrepancy. Their relation has been further studied in \cite{Ish13}, \cite{EI13} and \cite{dFT16}. We note that contrary to usual minimal log discrepancies, the variety has "good" singularities when Mather minimal log discrepancies are small (see \cite[Theorem 4.7]{Ish11} for a more precise description).
In Section \ref{sec2} we give an overview of jet schemes and arc spaces. We start with the definition of jet schemes, and apply the definition to describing jet schemes of an affine variety over a field. It shows that the jet schemes of an affine variety $X$ are also affine and we get explicit defining equations for $X_m$. This explicit description will be important for our analysis in Section \ref{sec5}. Next we review arc spaces and cylinders (especially contact loci). The arc space of a variety $X$, denoted by $X_\infty$, is the projective limit of the projective system $\{X_m\}_{0\leq m<\infty}$ of jet schemes, and cylinders are inverse images of constructible subsets of $X_m$ in $X_\infty$.
In Section \ref{sec3} we review the basics about Mather minimal log discrepancy. We start by defining the notion of Mather discrepancy divisor through Nash blow-ups. Then we recall the following result connecting the Mather minimal log discrepancy to jet schemes:
\begin{prop}
\label{fundamental_prop1}
(\cite[Lemma 4.2]{Ish11})
Let $X$ be a variety over an algebraically closed field $k$ of characteristic $0$. If $x$ is a closed point of $X$, then we have
\begin{equation*}
\widehat{\textnormal{mld}}(x;X)=\lim_{m\rightarrow \infty}((m+1)\dim(X)-\dim (\psi_m(\pi^{-1}(x)))),
\end{equation*}
where $\psi_m: X_\infty\rightarrow X_m$ and $\pi: X_\infty\rightarrow X$ are canonical truncation maps.
\end{prop}
The key point for the examples considered in Section \ref{sec4} and Section \ref{sec5}, is to compute/bound $\dim(\psi_m(\pi^{-1}(x)))$ for $m$ large enough.
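As a simple illustration of how this proposition is applied (a standard example that is not needed in the sequel): if $X$ is smooth of dimension $n$ and $x\in X$ is a closed point, then $\pi_m^{-1}(x)\cong \mathbb{A}^{mn}$ and, since every $m$-jet of a smooth variety lifts to an arc, $\psi_m(\pi^{-1}(x))=\pi_m^{-1}(x)$. Hence
\begin{equation*}
\widehat{\textnormal{mld}}(x;X)=\lim_{m\rightarrow \infty}\big((m+1)n-mn\big)=n.
\end{equation*}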
Section \ref{sec4} is devoted to the study of the Mather minimal log discrepancy for a toric variety at a closed point $x$. The question is local so we assume $X=X(\sigma)$ is the affine toric variety associated to the cone $\sigma\subset N_\mathbb{R}:= N\otimes_\mathbb{Z}\mathbb{R}$, where $N\cong\mathbb{Z}^n$ is the lattice of $\sigma$. We further assume that $\sigma$ spans $N_\mathbb{R}$. First, we consider the case when $x$ is a torus-invariant point. By Proposition \ref{fundamental_prop1}, the key is to compute $\dim(\psi_m(\pi^{-1}(x)))$ for $m$ large enough. The space $\psi_m(\pi^{-1}(x))$ is decomposed into $T_m$-orbits, where $T_m$ is the $m^{\textnormal{th}}$ jet scheme of the torus $T$ in $X$ which naturally acts on $X_m$. We use the fact that those orbits correspond to lattice points in the interior of $\sigma$. This characterization of orbits follows from the work of Ishii (\cite{Ish03}). The problem thus comes down to finding the dimension of each $T_m$-orbit, which is in turn done by computing the dimension of its stabilizer.
In order to state our result, we introduce some notation. Let $n$ be the dimension of $X$ and $M=N^\vee$ be the dual lattice. We define the dual space $M_\mathbb{R}:=M\otimes_\mathbb{Z} \mathbb{R}$ and the dual cone $\sigma^\vee:=\{u\in M_\mathbb{R} | \langle u,v\rangle\geq 0\textnormal{ for all }v\in \sigma\}$. With this notation, we show the dimension of the $T_m$-orbit associated to a lattice point $a$ in the interior of $\sigma$ is equal to
\begin{equation*}
(m+1)n- \min \big\{\sum_{i=1}^n \langle a,u_{i} \rangle | u_{1},\ldots,u_{n}\ \textnormal{span}\ M_\mathbb{R},\textnormal{ with }u_i\in M\cap \sigma^\vee\textnormal{ for each }i\big\},
\end{equation*}
where the minimum is taken over all linearly independent sets of vectors $\{u_1,\ldots,u_n\}$ in $M\cap \sigma^\vee$. Now we let the point $a$ vary and take the maximum of the resulting orbit dimensions. Hence we obtain the following theorem:
\begin{thm}
Let $X$ be an affine toric variety associated to a cone $\sigma$ of dimension $n$ over an algebraically closed field $k$ of characteristic zero. Let $N$ be the lattice of $\sigma$ and $M$ be the dual lattice. If $\sigma$ spans $N_\mathbb{R}$ and $x$ is the torus-invariant point, then we have
\begin{equation*}
\widehat{\textnormal{mld}} (x;X)=\min_{a\in \textnormal{Int}(\sigma)\cap N}\Big\{ \min \big\{\sum_{i=1}^n \langle a,u_{i} \rangle | u_{1},\ldots,u_{n}\ \textnormal{span}\ M_\mathbb{R},\ u_i\in M\cap \sigma^\vee\textnormal{ for each }i\big\}\Big\},
\end{equation*}
where the second minimum is taken over all linearly independent sets of vectors $\{u_1,\ldots,u_n\}$ in $M\cap \sigma^\vee$.
\end{thm}
We use the theorem to compute $\widehat{\textnormal{mld}}(x;X)$ in some examples. For example, we show that if $X$ is a toric surface, then $\widehat{\textnormal{mld}} (x;X)=\dim(X)$ (which is $2$). In higher dimension, the same conclusion holds if the torus-fixed point $x$ is an isolated singularity point and $X$ is simplicial. We also give some examples where $\widehat{\textnormal{mld}}(x;X)\neq \dim(X)$.
We conclude Section \ref{sec4} by considering an arbitrary closed point on a toric variety $X$. Recall that the set of closed points of $X$ is a disjoint union of $T$-orbits associated to faces of the cone $\sigma$. Each orbit is generated by a distinguished point associated to the corresponding face. Therefore, the problem reduces to computing the Mather minimal log discrepancy at these distinguished points, and it is further reduced to the case of a torus-invariant point in the following sense:
\begin{thm}
Let $X=X(\sigma)$ be an affine toric variety of dimension $n$ over an algebraically closed field $k$ of characteristic zero. Let $\tau$ be a face of $\sigma$ of dimension $k<n$ and $x_\tau$ be the distinguished point associated to $\tau$. If $Y$ is the $k$-dimensional affine toric variety associated to the cone $\tau$ and $y$ is the torus-invariant point of $Y$, then we have
\begin{equation*}
\widehat{\textnormal{mld}} (x_\tau;X)-n=\widehat{\textnormal{mld}} (y;Y)-k.
\end{equation*}
\end{thm}
We consider the case of very general hypersurfaces in Section \ref{sec5}. Let $f=\sum_{i=1}^N a_{I^i} x^{I^i}$ be the defining equation of a hypersurface $X\subset \mathbb{A}^{n+1}$, where $I^i$ are multi-indices and $x^{I^i}$ stands for $\Pi _{j=1}^{n+1} x_j ^{I_j^i}$. The \emph{support} of $f$ is the set $A:=\{I^1,\ldots,I^N\}\subset \mathbb{Z}^{n+1}$. When $A\neq \emptyset$, the \emph{dimension} of $A$ is the dimension of the linear span over $\mathbb{Q}$ of the convex hull of $A-a$, for any $a\in A$. Following from the result of Yu (\cite[Theorem 3]{Yu16}), we deduce that for a support $A$ such that $\dim(A)\geq 2$ or $\dim(A)=1$ and the convex hull of $A$ contains exactly two integral points, and for general coefficients $a_{I^i}$, $X$ is an integral hypersurface. Then, under a certain generality condition, we give a lower bound for the Mather minimal log discrepancy of $X$ at the origin. As in the case of toric varieties, we write $\psi_m(\pi^{-1}(0))$ as a disjoint union, up to the image of a thin set, of subsets $C^m_\alpha$, with $\alpha=(\alpha_1,\ldots,\alpha_{n+1})$ running over all $(n+1)$-tuples of positive integers. For simplicity, we define the \emph{product} of an $(n+1)$-tuple $\alpha$ with a multi-index $I$ as $\alpha\cdot I:=\sum_{j=1}^{n+1}\alpha_j I_j$. An $(n+1)$-tuple $\alpha$ is called \emph{feasible} if $\min_{1\leq i \leq N} \{\alpha \cdot I^i\}$ is attained by at least two different $i$'s. We show that $C^m_\alpha=\emptyset$ if $\alpha$ is not feasible; when $f$ has a fixed support and very general coefficients, $\dim(C^m_\alpha)$ is bounded above by
\begin{equation*}
mn-\sum_{j=1}^{n+1} (\alpha_j-1)-1+ \min_{1\leq i\leq N} \{I^i\cdot \alpha \}-\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}}\{I^i\cdot \alpha-\alpha_j \}.
\end{equation*}
By taking the maximum over all feasible $\alpha$'s, we obtain the following theorem:
\begin{thm}
Let $f=\sum _{i=1} ^N a_{I^i} x^{I^i}$ be a polynomial with a fixed support $A$ such that $f$ has no constant term and that $f$ is not divisible by any $x_i$, and let $X$ be the hypersurface defined by $f$. If $A$ is $1$-dimensional and the convex hull contains only two integral points, or if $A$ has dimension $\geq 2$, then for very general coefficients $(a_{I^i})_{1\leq i\leq N}$, the hypersurface $X$ is integral and we have
\begin{equation*}
\widehat{\textnormal{mld}}(0;X)\geq \underset{\alpha}{\min} \{\sum_{j=1}^{n+1} (\alpha_j-1)+1-\underset{1\leq i\leq N}{\min} \{I^i\cdot \alpha \}+\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}}\{I^i\cdot \alpha-\alpha_j \} \}+n,
\end{equation*}
where the first minimum is taken over all feasible $(n+1)$-tuples $\alpha$.
\end{thm}
In spite of the fact that the theorem only gives a lower bound of Mather minimal log discrepancy, we can use the proof of the above result to show that the inequality is actually an equality in many cases. We end the section with various examples. These examples show that the lower bound can be attained in many cases, but we also see that the inequality in the theorem can be strict.
\proof[Acknowledgements]
I want to express my deepest gratitude to my advisor, Mircea Musta\c{t}\u{a}, for years of guidance and patience. This paper would not have been possible without him. I also thank Karen Smith for a lot of helpful comments, and Mattias Jonsson, James Tappenden and Yuchen Zhang for some corrections.
\section{Preliminaries on jet schemes and arcs spaces}
\label{sec2}
In this section we review some basic properties of jet schemes and arc spaces that we need in the following sections. We mostly follow \cite{EM09}. For more details, see \cite{M14}, \cite{DL99} and \cite{dF16}.
\subsection{Jet schemes}
$\\$
A variety is an integral, separated scheme of finite type over a field. Let $k$ be an algebraically closed field of arbitrary characteristic and $X$ be a scheme of finite type over $k$. For each nonnegative integer $m$, we define the $m^{\textnormal{th}}$ \emph{jet scheme} of $X$, denoted by $X_m$, to be a scheme over $k$ such that for every $k$-algebra $A$ we have a functorial bijection
\begin{equation}
\label{eqn_jet_defn}
\textnormal{Hom}_{\textnormal{Sch}/k}(\textnormal{Spec}(A),X_m)\cong \textnormal{Hom}_{\textnormal{Sch}/k}(\textnormal{Spec}\ A[t]/(t^{m+1}),X).
\end{equation}
The jet schemes $X_m$ exist according to \cite[Proposition 2.2]{EM09}. Moreover, they are unique up to a canonical isomorphism since the bijection (\ref{eqn_jet_defn}) describes the functor of points of $X_m$. In particular, each element of the left-hand side of the bijection (\ref{eqn_jet_defn}) is an $A$-valued point of $X_m$, which is also called an $A$-valued $m$-jet of $X$. A $k$-valued point of $X_m$ is simply called an $m$-jet of $X$. Clearly when $m=0$ we have $X_0\cong X$. The canonical truncation map $A[t]/(t^{m+1})\rightarrow A[t]/(t^{p+1})$ for $m>p$ induces the map
\begin{equation*}
\textnormal{Hom}(\textnormal{Spec}\ A[t]/(t^{m+1}),X)\longrightarrow \textnormal{Hom}(\textnormal{Spec}\ A[t]/(t^{p+1}),X).
\end{equation*}
This induces via the bijection (\ref{eqn_jet_defn}) a canonical projection $\pi_{m,p}:X_m\longrightarrow X_p$. We denote this map by $\pi_m$ when $p=0$. These canonical projections satisfy the obvious compatibilities $\pi_{p,q}\circ \pi_{m,p}=\pi_{m,q}$ for $m>p>q$.
\begin{rems}
\label{jet_rems}
The following facts follow easily from the definition:
(i) If $f:X\rightarrow Y$ is a morphism of schemes of finite type over $k$, then there is an induced morphism of jet schemes $f_m:X_m\rightarrow Y_m$. Note that the induced maps $f_m$ are compatible with the canonical projections $\pi_{p,q}$, i.e. $\pi_{m,p}\circ f_m=f_p\circ \pi_{m,p}$.
(ii) For schemes $X$ and $Y$ of finite type over $k$, there is a canonical isomorphism
\begin{equation*}
(X\times Y)_m\cong X_m\times Y_m,
\end{equation*}
for every $m\geq 0$.
(iii) If $G$ is a group scheme over $k$ acting on a scheme $X$ of finite type over $k$, then $G_m$ is also a group scheme over $k$ and it acts on $X_m$.
\end{rems}
\begin{exmp}
Consider an affine scheme $X\hookrightarrow \A^n$ and let $g_1,\ldots,g_r\in k[x_1,\ldots,x_n]$ be generators for the ideal defining $X$. For a $k$-algebra $A$, consider an $A$-valued m-jet $\gamma$ of $X$ represented by $\gamma :\textnormal{Spec}\ A[t]/(t^{m+1})\rightarrow X$. Giving $\gamma$ is equivalent to giving a morphism of $k$-algebras
\begin{equation*}
\gamma^\ast: k[x_1,\ldots,x_n]/(g_1,\ldots,g_r)\longrightarrow A[t]/(t^{m+1}).
\end{equation*}
Let us write
\begin{equation*}
\gamma^\ast(x_i)=\sum_{j=0}^m x_i^{(j)} t^j, \textnormal{ for } 1\leq i\leq n.
\end{equation*}
These must satisfy $g_l(\gamma^\ast(x_1),\ldots,\gamma^\ast(x_n))=0$ in $A[t]/(t^{m+1})$ for $1\leq l\leq r$. If we write
\begin{equation*}
g_l(\sum_{j=0}^m x_1^{(j)} t^j,\ldots, \sum_{j=0}^m x_n^{(j)} t^j)=\sum_{j=0}^m G_l^{(j)}(\underline{x})t^j\quad (\textnormal{mod } t^{m+1}),
\end{equation*}
we see that
\begin{equation}
X_m\cong \textnormal{Spec}\ k[x_i^{(j)} | 1\leq i\leq n, 0\leq j\leq m]/(G_l^{(j)}|1\leq l\leq r, 0\leq j\leq m).
\label{jet_affine}
\end{equation}
In particular, we conclude the jet schemes of an affine scheme are also affine schemes, of finite type over $k$.
\end{exmp}
\begin{rem}
It follows from the above example that the canonical projections $\pi_{m,p}:X_m\rightarrow X_p$ are affine morphisms.
\end{rem}
\begin{rem}
Another consequence of the above example is that if $X\hookrightarrow \A^n$ is a closed immersion, then the induced morphism of jet schemes $X_m\hookrightarrow (\A^n)_m$ is also a closed immersion. Moreover, we deduce from the explicit description of the equations of $X_m$ in $(\A^n)_m$ that more generally, if $X\hookrightarrow Y$ is a closed immersion then so is the induced map $X_m\rightarrow Y_m$.
\end{rem}
\begin{exmp}
\label{jet_affine_space}
The simplest (but important) example is $X=\A^n$. It follows immediately from equation (\ref{jet_affine}) that $(\A^n)_m\cong \A^{(m+1)n}$. Furthermore, the canonical projections $\pi_{m,p}$ are just projections along certain coordinate planes.
\end{exmp}
\begin{lem}
\label{lem_etale}
(\cite[Lemma 2.9]{EM09}) If $f:X\rightarrow Y$ is an \'etale morphism, then for every $m\geq 0$ the following commutative diagram is Cartesian:
$$\begin{CD}
X_m @>f_m>> Y_m\\
@V\pi_m^X VV @V\pi_m^Y VV\\
X @>f>> Y.
\end{CD}$$
\end{lem}
\begin{cor}
\label{jet_smooth}
(\cite[Corollary 2.11]{EM09}) If $X$ is a smooth variety of dimension $n$, then the canonical projections $\pi_{m,p}$ are locally trivial fibrations with fiber $\mathbb{A}^{(m-p)n}$. In particular, $X_m$ is smooth of dimension $(m+1)n$.
\end{cor}
\begin{proof}
For every point $x\in X$, one can find an open subset $x\in U$ and an \'etale morphism $U\rightarrow \mathbb{A}^n$. Using Lemma \ref{lem_etale}, the assertion is reduced to the case of an affine space, which follows from Example \ref{jet_affine_space}.
\end{proof}
\subsection{Arc spaces and cylinders}
\label{section_arc_spaces}
$\\$
Given a scheme $X$ of finite type over $k$ as before, we have a projective system
\begin{equation*}
\dots \longrightarrow X_m\longrightarrow X_{m-1}\longrightarrow \dots\longrightarrow X_1\longrightarrow X_0=X,
\end{equation*}
in which all morphisms are affine. Therefore, the projective limit exists in the category of $k$-schemes. The projective limit is denoted by $X_\infty$ and it is called the \emph{arc space} of $X$. Unlike the jet schemes, the arc space is typically not of finite type over $k$. We denote by $\psi_m$ the canonical map $X_\infty\rightarrow X_m$. We also write $\pi:=\psi_0: X_\infty\rightarrow X_0=X$ for the projection to the original scheme $X$.
It follows from the definition of jet schemes and projective limit that for every field extension $K$ of $k$, we have functorial isomorphisms
\begin{equation*}
\textnormal{Hom}(\textnormal{Spec}(K), X_\infty)\cong \underset{\longleftarrow}{\lim}\ \textnormal{Hom}(\textnormal{Spec}\ K[t]/t^{m+1},X) \cong \textnormal{Hom}(\textnormal{Spec}\ K[\![t]\!], X).
\end{equation*}
A $k$-valued point of $X_\infty$ is called an arc on $X$ and is represented by
\begin{equation}
\label{arc}
\gamma:\textnormal{Spec}\ k[\![t]\!]\longrightarrow X.
\end{equation}
For every field extension $K$ of $k$, a $K$-valued point of $X_\infty$ is called a $K$-valued arc of $X$. From now on, whenever we deal with $X_m$ and $X_\infty$ we will restrict to their $k$-valued points. Since the jet schemes are of finite type over $k$, this causes no ambiguity. Note that since we only consider the $k$-valued points, $X_\infty$ is the set-theoretic projective limit of the $X_m$ and the Zariski topology on $X_\infty$ is the projective limit topology.
\begin{rem}
As in the case of jet schemes, if $f:X\rightarrow Y$ is a morphism of schemes of finite type over $k$, then we have an induced map on the arc spaces $f_\infty: X_\infty\rightarrow Y_\infty$ that is compatible with canonical projections.
\end{rem}
\begin{rem}
\label{arc_product}
For schemes $X$ and $Y$ of finite type over $k$, there is a canonical isomorphism $(X\times Y)_\infty\cong X_\infty\times Y_\infty$ and we have the following commutative diagram:
$$\begin{CD}
(X\times Y)_\infty @>\cong>> X_\infty\times Y_\infty\\
@V\psi_m^{X\times Y} VV @V\psi_m^X\times \psi_m^Y VV\\
(X\times Y)_m @>\cong>> X_m\times Y_m.
\end{CD}$$
\end{rem}
We now define the notion of cylinders. Recall that a \emph{constructible set} in a scheme of finite type over $k$ is a finite union of locally closed subsets. A \emph{cylinder} in $X_\infty$ is a subset of the form $C=\psi_m^{-1} (S)$, for some nonnegative integer $m$ and some constructible subset $S$ of $X_m$. The arc spaces are typically not of finite type over $k$; so far, most of the study of arc spaces has focused on cylinders and their irreducible components.
There is a special type of cylinders, the \emph{contact loci}, that will play an important role in what follows. To an ideal sheaf $\mathfrak{a}$, we associate subsets of arcs with prescribed vanishing order along $\mathfrak{a}$. More precisely, if $\gamma: \textnormal{Spec}\ k[\![t]\!]\rightarrow X$ is an arc, the inverse image of $\mathfrak{a}$ is an ideal in $k[\![t]\!]$ generated by $t^r$, for some $r$ (if the ideal is not zero). This $r$ is \emph{the order of $\gamma$ along $\mathfrak{a}$}, denoted by $\textnormal{ord}_\gamma (\mathfrak{a})$. When the inverse image is zero, we put $\textnormal{ord}_\gamma(\mathfrak{a})=\infty$. A contact locus is a subset of $X_\infty$ of one of the following forms:
\begin{equation*}
\textnormal{Cont}^e(\mathfrak{a}):=\{\gamma\in X_\infty | \textnormal{ord}_\gamma(\mathfrak{a})=e\},
\end{equation*}
or
\begin{equation*}
\textnormal{Cont}^{\geq e}(\mathfrak{a}):=\{\gamma\in X_\infty | \textnormal{ord}_\gamma(\mathfrak{a})\geq e\}.
\end{equation*}
We can similarly define subsets of $X_m$ with specified order along $\mathfrak{a}$, namely $\textnormal{Cont}^e(\mathfrak{a})_m$ and $\textnormal{Cont}^{\geq e}(\mathfrak{a})_m$, for $m\geq e$. It is clear that for every $m\geq e$, we have
\begin{equation*}
\textnormal{Cont}^e(\mathfrak{a})=\psi_m^{-1}(\textnormal{Cont}^e(\mathfrak{a})_m),\ \textnormal{Cont}^{\geq e}(\mathfrak{a})=\psi_m^{-1}(\textnormal{Cont}^{\geq e}(\mathfrak{a})_m).
\end{equation*}
This implies that $\textnormal{Cont}^e(\mathfrak{a})$ is a locally closed set and $\textnormal{Cont}^{\geq e}(\mathfrak{a})$ is a closed set.
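For example, if $X=\mathbb{A}^1$, $\mathfrak{a}=(x)$, and $\gamma$ is the arc given by $x\mapsto t^e$, then the inverse image of $\mathfrak{a}$ is $(t^e)$, so $\textnormal{ord}_\gamma(\mathfrak{a})=e$ and $\gamma\in \textnormal{Cont}^e(\mathfrak{a})$.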
\begin{defn}
Let $X$ be a scheme of finite type over $k$ of pure dimension $d$. A subset $A\subset X_\infty$ is \emph{thin} if there is some closed subvariety $S$ of $X$ whose dimension is strictly less than $d$ such that $A\subset S_\infty$. If a subset $A$ is not thin, it is \emph{fat}.
\end{defn}
We need the following result for our discussion in the following chapters:
\begin{lem}
\label{lem_4.3}
(\cite[Proposition 5.10]{EM05})
Let $X$ be a variety over $k$ of dimension $d$. Then
(1) For every $m\geq 0$, we have
\begin{equation*}
\dim(\psi_m(X_\infty))\leq (m+1)d.
\end{equation*}
(2) For every $m,\ n\geq 0$ with $m\geq n$, the fibers of $\psi_m(X_\infty)\rightarrow \psi_n(X_\infty)$ are of dimension $\leq (m-n)d$.
\end{lem}
\section{Preliminaries on Mather minimal log discrepancy}
\label{sec3}
In this section we introduce the Mather minimal log discrepancy following \cite{Ish11}. The definition is very similar to the usual minimal log discrepancy. Details on usual minimal log discrepancy and its relation to arc spaces can be found in \cite{EMY03}. Results on the relation between Mather minimal log discrepancy and the usual minimal log discrepancy can be found in \cite{EI13} and \cite{Ish13}. For details on Mather minimal log discrepancy, we refer to \cite{Ish11}, \cite{EI13} and \cite{Ish15}.
\subsection{Definition}
\begin{defn}
Let $X$ be a variety over a field $k$ and $f:Y\rightarrow X$ be a proper birational morphism of varieties, with $Y$ normal. Each prime divisor $E$ on $Y$ gives a valuation $\textnormal{ord}_E$ on $K(Y)=K(X)$. Here $E$ is called a \emph{divisor over} $X$ and we equate two divisors on two normal varieties over $X$ if they give rise to the same valuation on $X$. The \emph{center} of $E$ is the closure of the image of $E$ on $X$. A \emph{divisorial valuation} on $X$ is one of the form $v=q\cdot \textnormal{ord}_E$ where $q$ is a positive integer and $E$ is a divisor over $X$.
\end{defn}
Let $X$ be a variety of dimension $d$ over an algebraically closed field $k$ of characteristic zero. For simplicity we write $\Omega_X$ for the sheaf of relative differentials $\Omega_{X/k}$. The projection
\begin{equation*}
\pi: \mathbb{P}_X(\wedge^d \Omega_X)\longrightarrow X
\end{equation*}
is an isomorphism over the smooth locus $X_{\textnormal{reg}}\subset X$. In particular, there is a section $\sigma: X_{\textnormal{reg}}\rightarrow \mathbb{P}_X(\wedge^d \Omega_X)$.
\begin{defn}
The \emph{Nash blow-up} of $X$ is the closure of the image of $\sigma$, and is denoted by $\hat{X}$. It is a variety over $k$ with a projective morphism $\pi|_{\hat{X}}: \hat{X}\rightarrow X$ that is an isomorphism over the smooth locus of $X$. The line bundle
\begin{equation*}
\widehat{K}_X:=\mathcal{O}_{\mathbb{P}_X(\wedge^d \Omega_X)}(1)|_{\hat{X}}
\end{equation*}
is called the \emph{Mather canonical line bundle} of $X$.
\end{defn}
\begin{rem}
If $X$ is smooth, then clearly $\hat{X}=X$ and $\widehat{K}_X$ is just the canonical line bundle of $X$. More generally, the Nash blow-up can be thought of as the parameter space of limits of all tangent directions at smooth points of $X$.
\end{rem}
One can always find a resolution of singularities $f:Y\rightarrow X$ that factors through the Nash blow-up. Then the image of $f^\ast (\wedge ^d \Omega_X)$ under the canonical homomorphism
\begin{equation*}
\wedge^d df: f^\ast (\wedge ^d \Omega_X)\rightarrow \wedge^d\Omega_Y
\end{equation*}
is of the form $J\wedge^d \Omega_Y$ where $J$ is an invertible ideal sheaf on $Y$ (\cite[Proposition 1.7]{dFEI08}). Let $\widehat{K}_{Y/X}$ be the effective divisor defined by $J$. This is supported on the exceptional locus of $f$ and it is called the \emph{Mather discrepancy divisor}. For each prime divisor $E$ on $Y$, we define $\hat{k}_E:=\textnormal{ord}_E(\widehat{K}_{Y/X})$. If $v=q\cdot \textnormal{ord}_E$ is a divisorial valuation, we write $\hat{k}_v:=q\cdot\hat{k}_E$.
\begin{defn}
Let $(X,\mathfrak{a})$ be a pair where $X$ is a variety over $k$ and $\mathfrak{a}$ is a nonzero ideal in $\mathcal{O}_X$. For a closed subset $W$ of $X$, the Mather minimal log discrepancy of $(X,\mathfrak{a})$ along $W$ is defined as
\begin{equation*}
\widehat{\textnormal{mld}}(W;X,\mathfrak{a}):=\inf\{\hat{k}_E-\textnormal{ord}_E(\mathfrak{a})+1 | E\textnormal{ is a divisor over }X\textnormal{ with center in }W\}.
\end{equation*}
When $\dim(X)=1$ and the infimum is negative, we make the convention that
\begin{equation*}
\widehat{\textnormal{mld}}(W;X,\mathfrak{a})=-\infty.
\end{equation*}
\end{defn}
\begin{rem}
If $\dim(X)\geq 2$ and $\widehat{\textnormal{mld}}(W;X,\mathfrak{a})<0$, then $\widehat{\textnormal{mld}}(W;X,\mathfrak{a})=-\infty$ (see \cite[Remark 3.4]{Ish11}). This is why we make the convention for the case when $\dim(X)=1$.
\end{rem}
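As a simple illustration of the definition (using only standard facts about discrepancies on smooth varieties), let $X=\mathbb{A}^d$ with $d\geq 2$, $W=\{0\}$ and $\mathfrak{a}=\mathcal{O}_X$. Since $X$ is smooth, the Nash blow-up is trivial and $\widehat{K}_{Y/X}=K_{Y/X}$ for any resolution $f:Y\rightarrow X$; for the exceptional divisor $E$ of the blow-up of the origin we have $\hat{k}_E=k_E=d-1$ and $\textnormal{ord}_E(\mathfrak{a})=0$, so $\widehat{\textnormal{mld}}(\{0\};X,\mathcal{O}_X)\leq d$. In fact equality holds, as follows from Remark \ref{relation_smooth} below.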
\subsection{Relation to jet schemes and arc spaces}
$\\$
From now on we specialize to the case when $W=\{x\}$ for some closed point $x\in X$ and $\mathfrak{a}=\mathcal{O}_X$. We denote the Mather minimal log discrepancy by $\widehat{\textnormal{mld}}(x;X)$ for simplicity and write $\dim(X)=d$.
\begin{defn}
Let $X$ and $Y$ be varieties over $k$, and let $A\subset X$ and $B\subset Y$ be constructible subsets. A map $f: A\rightarrow B$ is a \emph{piecewise trivial fibration with fiber} $F$ if there exists a finite partition of $B$ into locally closed subsets $S$ of $Y$ such that $f^{-1}(S)$ is isomorphic to $S\times F$ and, under this isomorphism, $f|_{f^{-1}(S)}$ is the projection $S\times F\rightarrow S$.
Recall that for a scheme $X$ of finite type over $k$, there are canonical morphisms $\pi:X_\infty\rightarrow X$ and $\psi_m:X_\infty\rightarrow X_m$ for every $m\geq 0$ (in Subsection \ref{section_arc_spaces}).
\end{defn}
\begin{defn}
Fix a closed point $x$ of $X$. For every $m\geq 0$ we define
\begin{equation*}
\lambda_m(x):=md-\dim \psi_m(\pi^{-1}(x)).
\end{equation*}
When there is no confusion we simply write $\lambda_m$ instead of $\lambda_m(x)$.
\end{defn}
\begin{rem}
\label{rem_smooth}
Corollary \ref{jet_smooth} shows that when $X$ is a smooth variety and $x$ is a closed point of $X$, we have for each $m\geq 0$,
\begin{equation*}
\lambda_m(x)=m\dim(X)-\dim(\psi_m(\pi^{-1}(x)))=0.
\end{equation*}
\end{rem}
\begin{lem}
\label{lem_lambda}
(\cite[Lemma 4.2]{Ish11}) For every $m\geq 0$, we have $\lambda_m\geq 0$ and $\lambda_{m+1} \geq \lambda_m$. Moreover, $\lambda_m$ is constant for $m\gg 0$.
\end{lem}
\begin{defn}
\label{defn_lambda}
According to the lemma above, $\underset{m\rightarrow \infty}{\lim} \lambda_m(x)$ exists and it is equal to $\lambda_m(x)$ for all $m$ large enough. We denote this limit by $\lambda(x)$. When there is no confusion, we also write this limit as $\lambda$.
\end{defn}
The following result from \cite{Ish11} describes the Mather minimal log discrepancy in terms of jet schemes and arc spaces. We use this result to reduce the problem to computing $\lambda(x)$ in what follows.
\begin{prop}
\label{fundamental_prop}
(\cite[Lemma 4.2]{Ish11}) If $X$ is a variety over $k$ of dimension $d$ and $x$ is a closed point of $X$, then
\begin{equation*}
\lambda(x)=\widehat{\textnormal{mld}} (x;X)-d.
\end{equation*}
\end{prop}
\begin{rem}
\label{relation_smooth}
If $X$ is smooth and $x\in X$ is a closed point, by Remark \ref{rem_smooth} we have $\lambda_m(x)=0$ for every $m\geq 0$. Hence, by Proposition \ref{fundamental_prop} we have $\widehat{\textnormal{mld}}(x;X)=\dim(X)$. The same conclusion holds if we only assume $x$ is a smooth point of $X$, because in this case we may replace $X$ by a smooth open neighborhood of $x$. Therefore, we only consider singular points in the following sections.
\end{rem}
\section{Mather minimal log discrepancy of toric varieties}
\label{sec4}
This section is devoted to the computation of the Mather minimal log discrepancy associated to a closed point on a toric variety. We will first do the computation for a torus-invariant point, and then show the computation generalizes to an arbitrary closed point.
The computation depends only on local properties of the toric variety so we assume throughout the section that $X=X(\sigma)$ is an affine toric variety associated to a cone $\sigma$ over an algebraically closed field of characteristic zero.
We write $x_\sigma$ for the torus-invariant point in $X$ (when it exists), and therefore according to Proposition \ref{fundamental_prop}, computing the Mather minimal log discrepancy associated to $x_\sigma$ is equivalent to computing the dimension of $C^m:=\psi_m (\pi^{-1}(x_\sigma))$ for $m$ large enough. More precisely, $\widehat{\textnormal{mld}} (x_\sigma;X)=(m+1)n-\dim (C^m)$ when $m\gg 0$. We will decompose $C^m$ into orbits under the $T_m$-action, where $T$ is the torus in $X$, and compute the dimension of $C^m$ by computing the dimension of these orbits instead.
\subsection{Quick review}
$\\$
Let $k$ be an algebraically closed field of characteristic zero. An affine \emph{toric variety} of dimension $n$ is defined using a \emph{lattice} $N\cong\mathbb{Z}^n$ and a cone $\sigma$ in $N_\mathbb{R}:=N\otimes_\mathbb{Z} \mathbb{R}$. Here a \emph{cone} $\sigma$ is a rational convex cone in $N_\mathbb{R}$ that is generated by finitely many lattice vectors and contains no nonzero linear subspace.
Let $M:=\textnormal{Hom}_\mathbb{Z}(N,\mathbb{Z})$ be the \emph{dual lattice} and put $M_\mathbb{R}=M\otimes_\mathbb{Z}\mathbb{R}$. We denote by $\langle \cdot, \cdot \rangle$ the canonical pairing $M\times N\rightarrow \mathbb{Z}$. The affine toric variety associated to the cone $\sigma$ is defined as
\begin{equation*}
X(\sigma):=\textnormal{Spec}\ k[M\cap \sigma^\vee],
\end{equation*}
where $\sigma^\vee$ is the \emph{dual cone} contained in $M_\mathbb{R}$, i.e. $\sigma^\vee=\{u\in M_\mathbb{R} | \langle u,v\rangle\geq 0\textnormal{ for all }v\in \sigma\}$. The semigroup algebra $k[M\cap \sigma^\vee]$ is defined as $\underset{u\in M\cap \sigma^\vee}{\oplus} k\cdot \chi^u$, with $\chi^u\cdot \chi^v=\chi^{u+v}$. Clearly, for elements $u_1,\ldots,u_s\in M\cap \sigma^\vee$, the monomials $\chi^{u_1},\ldots,\chi^{u_s}$ generate $k[M\cap \sigma^\vee]$ as a $k$-algebra if and only if $u_1,\ldots,u_s$ generate $M\cap \sigma^\vee$ as a semigroup.
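For instance (a standard example recorded here only for orientation), if $\sigma\subset N_\mathbb{R}\cong\mathbb{R}^n$ is the cone generated by the standard basis $e_1,\ldots,e_n$, then $\sigma^\vee$ is generated by the dual basis $e_1^\ast,\ldots,e_n^\ast$, the semigroup $M\cap\sigma^\vee$ is generated by $e_1^\ast,\ldots,e_n^\ast$, and $X(\sigma)=\textnormal{Spec}\ k[\chi^{e_1^\ast},\ldots,\chi^{e_n^\ast}]\cong\mathbb{A}^n$.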
A $k$-valued point $x$ of $X(\sigma)$ corresponds to a homomorphism of $k$-algebras
\begin{equation*}
x^\ast:k[\sigma^\vee\cap M]\longrightarrow k.
\end{equation*}
We put $\sigma^\perp:=\{u\in M_\mathbb{R} | \langle u,v\rangle= 0\textnormal{ for all }v\in \sigma\}$. A \emph{face} of $\sigma$ is a subset of $\sigma$ of the form $\{v\in \sigma| \langle u,v\rangle=0\}$, for some $u\in\sigma^\vee$. The \emph{distinguished point} $x_\tau$ corresponding to a face $\tau$ of $\sigma$ is defined by $x_\tau^\ast(\chi^u)=1$ if $u\in \tau^\perp$, and $x_\tau^\ast(\chi^u)=0$ otherwise.
\begin{rem}
The point $x_\sigma$ exists if $N_\mathbb{R}$ is the linear span of $\sigma$. This point will play a special role in what follows. The computation when $N_\mathbb{R}$ is not spanned by $\sigma$ can be easily reduced to this case, since $X(\sigma)$ will be a product of a torus with a lower-dimensional toric variety that contains a torus-invariant point. So from now on, we assume that $\sigma$ spans $N_\mathbb{R}$.
\end{rem}
The toric variety $X=X(\sigma)$ contains the torus $T=\textnormal{Spec}\ k[M]\cong(k^\ast)^n$ and the group action of $T$ on itself extends to an action on $X$. More precisely, the $T$-action on $X$ is given by $G:T\times X\rightarrow X$, which is equivalent to the following morphism of $k$-algebras:
\begin{equation}
\label{eqn_action}
k[M\cap\sigma^\vee]\longrightarrow k[M]\otimes k[M\cap \sigma^\vee], \chi^u\longmapsto \chi^u\otimes\chi^u.
\end{equation}
It is a general fact that the $T$-orbit $O(\tau)$ that contains $x_\tau$ is of dimension equal to the codimension of $\tau$ in $\sigma$. In particular, the point $x_\sigma$ is the unique torus-invariant point. The toric variety $X$ is the disjoint union of the orbits $O(\tau)$ with $\tau$ varying over all faces of $\sigma$. Therefore, any point of $X$ lies in the same orbit with one of the $x_\tau$'s.
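As a simple illustration, for $X=\mathbb{A}^2$ (with $\sigma$ generated by $e_1$ and $e_2$ in $N_\mathbb{R}=\mathbb{R}^2$) the faces of $\sigma$ are the origin, the two rays, and $\sigma$ itself, and the corresponding orbits $O(\tau)$ are the two-dimensional torus, the two punctured coordinate axes, and the torus-invariant point $x_\sigma$, respectively.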
A torus-invariant prime divisor $D$ is the closure of the orbit associated to a one-dimensional face. Let us call this one-dimensional face $\tau$. Then we have
\begin{equation*}
D=V (\tau):=\textnormal{Spec}\ k[M\cap \sigma^\vee\cap \tau^\perp].
\end{equation*}
For more details on toric varieties, we refer the reader to \cite{Ful93}.
\subsection{Characterization of orbits in $C^m$}
\label{section_orbits}
\begin{defn}
\label{defn_Cm}
Recall that for each variety $X$ over $k$ there are canonical morphisms $\psi_m:X_\infty\rightarrow X_m$ and $\pi: X_\infty\rightarrow X$. For every $m\geq 1$, we define $C^m$, a subset of $X_m$, as
\begin{equation*}
C^m:=\psi_m(\pi^{-1}(x_\sigma)).
\end{equation*}
\end{defn}
Let $T$ be the torus in $X=X(\sigma)$. It follows from Remark \ref{jet_rems} (iii) that there is a natural group action of $T_m$ on $X_m$. In this subsection, we approximate $C^m$ by a union of $T_m$-orbits and show that these orbits can be represented by lattice points in the interior of $\sigma$. This characterization builds on the work of Ishii \cite{Ish03} who gave a similar description for the $T_\infty$-orbits in $X_\infty$. We denote by $\mathbb{Z}_{\geq 0}$ the set of nonnegative integers.
Let $\gamma :\textnormal{Spec}\ k[t]/(t^{m+1})\longrightarrow X$ be an $m$-jet inside $C^m$ and let $\delta:\textnormal{Spec}\ k[\![t]\!]\longrightarrow X$ be an arc on $X$ which lifts $\gamma$. Then we have the following commutative diagram:
$$\begin{tikzcd}[column sep=small]
k[M\cap \sigma^\vee] \arrow{r}{\delta^\ast} \arrow{rd}{\gamma^\ast}
& k[\![t]\!]\arrow{d}{} \\
& k[t]/(t^{m+1}),
\end{tikzcd}$$
where the vertical map is the canonical truncation.
Let $\tau_1,\ldots,\tau_d$ be the one-dimensional faces of $\sigma$ and $D_i:=V(\tau_i)$ be the corresponding torus-invariant prime divisors of $X$. We assume that $\delta$ is not in the arc space of any $D_i$. Equivalently, $\delta^\ast(\chi ^u)\neq 0$ for every $u\in M\cap\sigma^\vee$. Thus the order in $t$ of $\delta^\ast(\chi ^u)\in k[\![t]\!]$ is well-defined. We put $\textnormal{ord}_\delta(u):=\textnormal{ord}_t(\delta^\ast(\chi^u))$ for each $u\in M\cap \sigma^\vee$. Let $S_m$ be the set $\{0,1,\ldots,m,\infty\}$ and $Tr_m:\mathbb{Z}_{\geq 0} \rightarrow S_m$ be the obvious truncation map that takes any number larger than $m$ to $\infty$. We define $\textnormal{ord}_\gamma$ to be the composition of $\textnormal{ord}_\delta$ with $Tr_m$. Then we get the following commutative diagram:
$$\begin{tikzcd}[column sep=large]
M\cap \sigma^\vee \arrow{r}{\textnormal{ord}_\delta} \arrow{rd}{\textnormal{ord}_\gamma}
& \mathbb{Z}_{\geq 0}\arrow{d}{Tr_m} \\
& S_m.
\end{tikzcd}$$
Note that for each $u\in M\cap \sigma^\vee$, the value of $\textnormal{ord}_\gamma(u)$ only depends on $\gamma^\ast(\chi^u)$. Thus $\textnormal{ord}_\gamma$ is independent of the choice of $\delta$, and we call it the \emph{order map} of $\gamma$.
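For instance, take $X=\mathbb{A}^2$, so that $M\cap\sigma^\vee$ is generated by $e_1^\ast$ and $e_2^\ast$, and fix $m\geq 2$. If $\delta$ is the arc with $\delta^\ast(\chi^{e_1^\ast})=t^2$ and $\delta^\ast(\chi^{e_2^\ast})=t^{m+3}$, then for the $m$-jet $\gamma=\psi_m(\delta)$ we have $\textnormal{ord}_\gamma(e_1^\ast)=2$, while $\textnormal{ord}_\gamma(e_2^\ast)=\textnormal{ord}_\gamma(e_1^\ast+e_2^\ast)=\infty$.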
Since $\textnormal{ord}_\delta$ is an additive map that takes lattice points in the cone $\sigma^\vee$ to nonnegative integers, it corresponds uniquely to a lattice point in $\sigma$. Now we give a first description of some of the $T_m$-orbits of $C^m$.
\begin{lem}
\label{lem_orbits}
With the above notation, $\psi_m(X_\infty\backslash \underset{i}{\cup} (D_i)_\infty)$ is preserved by the $T_m$-action and its orbits are in one-to-one correspondence with the maps $M\cap \sigma^\vee\rightarrow S_m$ that can be lifted to additive maps $M\cap \sigma^\vee\rightarrow\mathbb{Z}_{\geq 0}$. The corresponding map is exactly the order map of any element in the orbit.
Moreover, $\psi_m(\pi^{-1}(x_\sigma)\backslash \underset{i}{\cup} (D_i)_\infty)$ is also preserved by the $T_m$-action and its orbits are in one-to-one correspondence with the maps $M\cap \sigma^\vee\rightarrow S_m$ that can be lifted to additive maps $M\cap \sigma^\vee \rightarrow \mathbb{Z}_{\geq 0}$, such that the inverse image of $\{0\}$ is $\{0\}$.
\end{lem}
\begin{proof}
First let us describe the $T_m$-action on $X_m$ and the $T_\infty$-action on $X_\infty$. Let
\begin{equation*}
g:\textnormal{Spec}\ k[t]/(t^{m+1})\rightarrow \textnormal{Spec}\ k[M]
\end{equation*}
be a point of $T_m$ and
\begin{equation*}
\gamma:\textnormal{Spec}\ k[t]/(t^{m+1})\rightarrow \textnormal{Spec}\ k[M\cap \sigma^\vee]
\end{equation*}
be a point of $X_m$. Then $g\cdot \gamma$ is a morphism $\textnormal{Spec}\ k[t]/(t^{m+1})\rightarrow \textnormal{Spec}\ k[M\cap \sigma^\vee]$ that is equal to $G\circ (g,\gamma)$. By equation (\ref{eqn_action}), for each $u\in M\cap \sigma^\vee$ we have
\begin{equation*}
(g\cdot \gamma)^\ast (\chi^u)=g^\ast(\chi^u)\cdot \gamma^\ast(\chi^u).
\end{equation*}
Similarly, if $\delta:\textnormal{Spec}\ k[\![t]\!]\rightarrow \textnormal{Spec}\ k[M]$ is a point in $T_\infty$ and $\alpha:\textnormal{Spec}\ k[\![t]\!]\rightarrow \textnormal{Spec}\ k[M\cap \sigma^\vee]$ is a point of $X_\infty$, then for each $u\in M\cap \sigma^\vee$ we have
\begin{equation*}
(\delta\cdot\alpha)^\ast(\chi^u)=\delta^\ast(\chi^u)\cdot \alpha^\ast(\chi^u).
\end{equation*}
Note that both $g^\ast(\chi^u)$ and $\delta^\ast(\chi^u)$ above are units since $\chi^u$ has an inverse $\chi^{-u}$ in $k[M]$.
Now let $\gamma$ be an $m$-jet in $\psi_m(X_\infty\backslash \underset{i}{\cup} (D_i)_\infty)$ with a lifting $\delta$ in $X_\infty\backslash \underset{i}{\cup} (D_i)_\infty$. For each $\alpha\in T_m$, there is a lifting $\xi\in T_\infty$ of $\alpha$ by smoothness of $T$. Since $\xi^\ast(\chi^u)$ is a unit for each $u\in M$, $(\xi\cdot \delta)^\ast(\chi^u)\neq 0$ for each $u\in M\cap \sigma^\vee$. Hence $\xi\cdot \delta$ is also in $X_\infty\backslash \underset{i}{\cup} (D_i)_\infty$. Therefore, $\alpha\cdot \gamma=\psi_m(\xi\cdot \delta)$ is in $\psi_m(X_\infty\backslash \underset{i}{\cup} (D_i)_\infty)$. This shows that $\psi_m(X_\infty\backslash \underset{i}{\cup} (D_i)_\infty)$ is preserved by the $T_m$-action.
For $\psi_m(\pi^{-1}(x_\sigma)\backslash\underset{i}{\cup} (D_i)_\infty)$, one applies the same argument and observes that an arc $\delta$ lies above the torus-invariant point $x_\sigma$ if and only if $\delta^\ast(\chi^u)$ has positive order whenever $u\neq 0$, which is equivalent to $\textnormal{ord}_\delta^{-1}(0)=\{0\}$. Since $\xi^\ast(\chi^u)$ is a unit for each $u\in M$ and $\xi\in T_\infty$, $\xi\cdot \delta$ also lies above $x_\sigma$. This shows that $\psi_m(\pi^{-1}(x_\sigma)\backslash \underset{i}{\cup} (D_i)_\infty)$ is also preserved by the $T_m$-action.
Pick two $m$-jets $\alpha\in T_m$ and $\gamma\in \psi_m(X_\infty\backslash \underset{i}{\cup} (D_i)_\infty)$. The morphism $\alpha^\ast$ takes any $\chi^u$, with $u\in M$, to a unit. Therefore, multiplying $\gamma$ by $\alpha$ does not change the order map $\textnormal{ord}_\gamma$. In other words, $\textnormal{ord}_{\gamma}=\textnormal{ord}_{ \alpha\cdot\gamma}$. This shows that the order map is the same for all points in a $T_m$-orbit.
Now we show that two $m$-jets in $\psi_m(X_\infty\backslash \underset{i}{\cup} (D_i)_\infty)$ with the same order map are in the same $T_m$-orbit. Let $\gamma$ be in $\psi_m(X_\infty\backslash \underset{i}{\cup} (D_i)_\infty)$ and $\phi$ be its order map. We define the special $m$-jet $\gamma_\phi$ whose associated morphism is
\begin{equation*}
\gamma_\phi^\ast:k[M\cap \sigma^\vee]\longrightarrow k[t]/(t^{m+1}),\ \gamma_\phi^\ast(\chi^u)= t^{\phi(u)},
\end{equation*}
with the convention that $t^\infty=0$. If we write $\phi(a)+\phi(b)=\infty$ whenever the sum is $\geq m+1$, then we have $\phi(a)+\phi(b)=\phi(a+b)$ for any $a,b\in M\cap\sigma^\vee$. Therefore, $\gamma_\phi^\ast$ is a homomorphism of $k$-algebras.
Let $\delta\in X_\infty\backslash \underset{i}{\cup} (D_i)_\infty$ be a lifting of $\gamma$ and $\psi$ be the order map of $\delta$. Then we may define an arc $\delta_\psi$ such that
\begin{equation*}
\delta_\psi^\ast:k[M\cap \sigma^\vee]\longrightarrow k[\![t]\!],\ \delta_\psi^\ast(\chi^u)=t^{\psi(u)}.
\end{equation*}
Obviously, $\delta_\psi$ lifts $\gamma_\phi$, and it has the same order map as $\delta$. Hence we have a morphism of $k$-algebras as follows:
\begin{equation*}
\alpha^\ast:k[M\cap \sigma^\vee]\longrightarrow k[\![t]\!],\ \alpha^\ast(\chi^u)=\delta^\ast(\chi^u)/ \delta_\psi^\ast(\chi^u).
\end{equation*}
$\alpha^\ast$ extends to the entire $k[M]$ since $M\cap \sigma^\vee$ spans $M$ and since $\alpha^\ast(\chi^u)$ is a unit for each $u\in M\cap \sigma^\vee $. Hence $\alpha\in T_\infty$ and clearly we have $\alpha\cdot \delta_\psi=\delta$, and therefore, $\psi_m(\alpha)\cdot \gamma_\phi=\gamma$. This shows that $\gamma$ is in the same $T_m$-orbit as the special $m$-jet $\gamma_\phi$, and so is any other $m$-jet with the same order map.
Finally, we show that each map $\phi:M\cap \sigma^\vee\rightarrow S_m$ that can be lifted to an additive map $\psi: M\cap \sigma^\vee\rightarrow \mathbb{Z}_{\geq 0}$ is the order map of some $m$-jet in $\psi_m(X_\infty\backslash \underset{i}{\cup} (D_i)_\infty)$. Define the special $m$-jet $\gamma_\phi$ and the arc $\delta_\psi$ that lifts $\gamma_\phi$ in the same way as above. Then we have $\delta_\psi \in X_\infty\backslash\underset{i}{\cup} (D_i)_\infty$ because $\delta_\psi^\ast(\chi^u)$ has finite order for each $u\in M\cap \sigma^\vee$. Therefore, $\gamma_\phi$ is an $m$-jet in $\psi_m(X_\infty\backslash\underset{i}{\cup} (D_i)_\infty)$ and clearly $\textnormal{ord}_{\gamma_\phi}=\phi$. Hence we have produced a $T_m$-orbit whose corresponding order map is equal to the map $\phi$ that we started with.
\end{proof}
As mentioned above, an additive map $M\cap \sigma^\vee\rightarrow \mathbb{Z}_{\geq 0}$ corresponds uniquely to a lattice point in $\sigma$. Denote by $\varphi_a$ the additive map corresponding to the lattice point $a$ and $\bar{\varphi}_a$ the composition of $\varphi_a$ with the truncation map $Tr_m$. Then clearly every order map in Lemma \ref{lem_orbits} is equal to $\bar{\varphi}_a$ for some $a\in \sigma\cap N$. In particular, the order map takes only $0$ to $0$ if the lattice point $a$ is contained in $\textnormal{Int}(\sigma)$, the interior of $\sigma$. However, there could be more than one such $a$. To understand the additive maps $M\cap \sigma^\vee\rightarrow \mathbb{Z}_{\geq 0}$ better, we first study the semigroup $M\cap \sigma^\vee$ and show that there is a unique minimal set of generators.
\begin{defn}
An element $u\in M\cap \sigma^\vee$ is called \emph{irreducible} if it cannot be written as the sum of two nonzero elements of $M\cap \sigma^\vee$.
\end{defn}
\begin{lem}
The semigroup $M\cap\sigma^\vee$ has a unique minimal set of generators consisting of all the irreducible elements.
\end{lem}
\begin{proof}
First, since $\sigma^\vee$ is a convex polyhedral cone, $M\cap \sigma^\vee$ is finitely generated. Therefore, there exists a minimal set of generators.
Second, we show that any element of $M\cap\sigma^\vee$ can be written as a sum of irreducible elements. Pick an element $v\in \textnormal{Int}(\sigma)\cap N$. Then $\langle u,v\rangle$ is a positive integer for any nonzero $u\in M\cap \sigma^\vee$. We claim that for each $u\in M\cap \sigma^\vee$, $u$ can be written as the sum of at most $\langle u,v\rangle$ irreducible elements. If $\langle u,v\rangle=1$, then $u$ must be irreducible. Otherwise, there are nonzero elements $u_1,u_2\in M\cap \sigma^\vee$ such that $u=u_1+u_2$. But $\langle u_1,v\rangle$ and $\langle u_2,v\rangle$ are both positive integers since $v\in \textnormal{Int}(\sigma)\cap N$. This is not possible as they add up to $\langle u,v\rangle=1$. Inductively, suppose our claim holds for all $u$ such that $\langle u,v\rangle\leq p$. Pick $u\in M\cap \sigma^\vee$ such that $\langle u,v\rangle=p+1$. If $u$ is irreducible, then we are done. Otherwise, there are nonzero elements $u_1,u_2\in M\cap \sigma^\vee$ such that $u=u_1+u_2$. Both $\langle u_1,v\rangle$ and $\langle u_2,v\rangle$ are $\leq p$. By assumption, $u_1$ and $u_2$ can be written as the sum of at most $\langle u_1,v\rangle$ and $\langle u_2,v\rangle$ irreducible elements respectively. Therefore, $u$ can be written as the sum of at most $\langle u_1,v\rangle+\langle u_2,v\rangle=p+1$ irreducible elements.
Finally, note that any set of generators must contain all irreducible elements by definition. We conclude that the set of irreducible elements forms the unique minimal set of generators for $M\cap \sigma^\vee$.
\end{proof}
\begin{rem}
If $u_1,\ldots,u_s$ form the unique minimal set of generators of $M\cap \sigma^\vee$, then $\chi^{u_1},\ldots,\chi^{u_s}$ also form the unique minimal set of monomial generators of $k[M\cap \sigma^\vee]$.
\end{rem}
The following lemma makes a connection between the set of order maps and the set of lattice points.
\begin{lem}
\label{lem_compactness}
Fix an integer $m\geq 1$ and let $\chi^{u_1},\chi^{u_2},\ldots,\chi^{u_s}$ be the minimal set of monomial generators of $k[M\cap\sigma^\vee]$. For each integer $c\geq 0$ we define
\begin{equation*}
P_c:=\Big\{ a\in \sigma\cap N \Big |\textnormal{the set}\ \{u_i|\varphi_a(u_i)\leq m+c \}\ \textnormal{spans}\ M_{\mathbb{R}}\Big\}.
\end{equation*}
Then the following hold:\\
(1) For any two different $a,b\in P_0$, $\bar{\varphi}_a\neq \bar{\varphi}_b$.\\
(2) There exists some $c_0\in \mathbb{Z}^+$ such that for any $a\in \sigma\cap N$ one can find $b\in P_{c_0}$ with $\bar{\varphi}_a= \bar{\varphi}_b$.
\end{lem}
\begin{proof}
For (1), assume that $a,b\in P_0$ satisfy $\bar{\varphi}_a= \bar{\varphi}_b$. Define
\begin{equation*}
\Gamma_0:=\{u_i|\varphi_a(u_i)\leq m\}.
\end{equation*}
Since $\bar{\varphi}_a=\bar{\varphi}_b$, we deduce that $\varphi_a$ and $\varphi_b$ take the same values on $\Gamma_0$. By definition of $P_0$, $\Gamma_0$ spans $M_\mathbb{R}$. Thus we conclude that $\varphi_a=\varphi_b$, which implies that $a=b$.\\
For (2), we choose a positive integer $c_0$ large enough such that for any subset $S\subset \{u_1,u_2,\ldots,u_s\}$ that does not span $M_\mathbb{R}$, there is some $v\in N$ satisfying
\begin{equation}
\label{eqn_2.5}
\varphi_v(u_i)=0,\textnormal{ for all } u_i\in S,\ \textnormal{and }1\leq \max_{u_i\notin S}\{\varphi_v(u_i)\}\leq c_0.
\end{equation}
Such a number $c_0$ exists because there are only finitely many subsets of $\{1,2,\ldots,s\}$. For each point $b\in \sigma\cap N$ we put $S_b:=\big\{u\in \{u_1,\ldots,u_s\}|\varphi_b(u)\leq m+c_0\big\}$. If there is some $b$ such that $\bar\varphi_a=\bar\varphi_b$ and such that $S_b$ spans $M_\mathbb{R}$, then we are done.
Now suppose there is no such $b$. We pick a point $b$ such that $\bar\varphi_a=\bar\varphi_b$ and such that $S_b$ is maximal. By relabeling we may write $S_b=\{u_1,\ldots,u_l\}$ for some integer $l<s$. By assumption $S_b$ does not span $M_\mathbb{R}$, so we can find $v\in N$ that satisfies (\ref{eqn_2.5}) with $S$ replaced by $S_b$. Clearly there is some positive integer $k$ such that
\begin{equation*}
\varphi_{b-kv}(u_i) >m,\textnormal{ for all } i>l,
\end{equation*}
\begin{equation*}
\varphi_{b-kv}(u_{i_0}) \leq m+c_0,\textnormal{ for some } i_0>l.
\end{equation*}
Notice that $\bar{\varphi}_{b-kv}=\bar{\varphi}_b=\bar\varphi_a$, and hence $b-kv\in \sigma\cap N$. But clearly we have
\begin{equation*}
S_b\subsetneqq\{u_1,\ldots,u_l,u_{i_0}\}\subset S_{b-kv}.
\end{equation*}
This contradicts the maximality of $S_b$. So we conclude that there must be some $b\in P_{c_0}$ such that $\bar\varphi_a=\bar\varphi_b$.
\end{proof}
\begin{rem}
We have proved that for each $a\in \sigma\cap N$, the map $\bar{\varphi}_a$ corresponds to a $T_m$-orbit in $\psi_m(X_\infty\backslash \cup_i (D_i)_\infty)$. We denote this orbit by $T_{m,a}$.
\end{rem}
\begin{rem}
For each $a\in \textnormal{Int}(\sigma)\cap N$, $\varphi_a$ is an additive map $M\cap \sigma^\vee\rightarrow\mathbb{Z}_{\geq 0}$ such that $\varphi_a^{-1}(0)=\{0\}$. According to Lemma \ref{lem_orbits} the corresponding orbits $T_{m,a}$ are all the $T_m$-orbits contained in $\psi_m(\pi^{-1}(x_\sigma)\backslash \cup_i (D_i)_\infty)$.
\end{rem}
\begin{cor}
\label{cor_finiteness}
The sets $\psi_m(X_\infty\backslash \underset{i}{\cup} (D_i)_\infty)$ and $\psi_m(\pi^{-1}(x_\sigma)\backslash \underset{i}{\cup} (D_i)_\infty)$ contain only finitely many $T_m$-orbits.
\end{cor}
\begin{proof}
According to Lemma \ref{lem_orbits}, we just need to show there are finitely many order maps $\bar\varphi_a$ for $a\in \sigma\cap N$. By Lemma \ref{lem_compactness}, there is a positive integer $c_0$ such that every order map is equal to $\bar\varphi_a$ for some $a\in P_{c_0}$. Therefore, it suffices to show that $P_{c_0}$ is compact.
For any $u_{j_1},\ldots,u_{j_n}\subset \{u_1,\ldots,u_s\}$ that span $M_\mathbb{R}$, we define
\begin{equation*}
K_{j_1,j_2,\ldots,j_n}:= \{ a\in \sigma\cap N|\varphi_a(u_{j_i})\leq m+c_0\textnormal{ for }1\leq i\leq n\}.
\end{equation*}
Then $P_{c_0}$ is the union of all $K_{j_1,\ldots,j_n}$ as $(j_i)_{1\leq i\leq n}$ varies such that $u_{j_1},\ldots,u_{j_n}$ span $M_\mathbb{R}$. Since this is a finite union, it suffices to show that each $K_{j_1,\ldots,j_n}$ is compact.
By relabeling let us assume that $j_i=i$ for $1\leq i\leq n$. Let $v_1,\ldots,v_l$ be a minimal set of generators of $\sigma\cap N$. Since $u_1,\ldots,u_n$ span $M_\mathbb{R}$, for each $v_i$ there exists some $u_j$ with $1\leq j\leq n$ such that $\langle v_i,u_j\rangle$ is a positive integer. Therefore,
\begin{equation*}
K_{1,2,\ldots,n}\subset \{a\in \sigma\cap N|a=\sum_{i=1}^l c_i v_i,\textnormal{ with }0\leq c_i\leq m+c_0\textnormal{ for each } i\}.
\end{equation*}
This shows that $K_{1,2,\ldots,n}$ is compact.
\end{proof}
\begin{rem}
The structure of the jet schemes of toric varieties is in general very hard to describe, unlike the case of arc spaces. One can find a description of the jet schemes of toric surfaces in \cite{Mou11}. Instead of the entire jet schemes, we only describe the structure of the image of the arc space in the $m^{\textnormal{th}}$ jet scheme.
\end{rem}
\subsection{Main results}
\label{section_dimension}
$\\$
In this subsection we compute the dimension of the orbit $T_{m,a}$ by computing the dimension of the corresponding stabilizer. Denote by $H_{m,a}$ the stabilizer of any element of $T_{m,a}$ under the $T_m$-action. We start with the following lemma.
\begin{lem}
\label{lem_finiteness}
Let $u_1,\ldots,u_n\in M$ be elements that generate $M_{\mathbb{R}}$ over $\mathbb{R}$. For every $a_{i,j}\in k$ with $1\leq i\leq n$ and $0\leq j\leq m$ such that $a_{i,0}\neq 0$ for all $i$, the set of elements $\alpha\in T_m$ such that
\begin{equation}
\label{eqn_4}
\alpha^\ast(\chi^{u_i})=\sum_{j=0}^m a_{i,j}t^j\textnormal{ for }1\leq i\leq n
\end{equation}
is nonempty and finite.
\end{lem}
\begin{proof}
Consider the subgroup $M'$ of $M$ generated by $u_1,\ldots,u_n$ and the corresponding torus $T'=\textnormal{Spec}\ k[M']$. Note that we have an induced morphism $f\colon T\to T'$. It is well-known that in characteristic $0$, this map is finite and \'{e}tale. This follows, for example, by choosing a basis $w_1,\ldots,w_n$ of $M$ such that $d_1w_1,\ldots,d_nw_n$ is a basis of $M'$, for some positive integers $d_1,\ldots,d_n$. In this case, it follows from Lemma \ref{lem_etale} that
\begin{equation*}
T_m\simeq T'_m\times_{T'}T.
\end{equation*}
In particular, the induced morphism $T_m\to T'_m$ is finite and \'{e}tale and its fibers are non-empty and finite. Since it is clear that there is a unique $\beta\in T'_m$ such that $\beta^*(\chi^{u_i})=\sum_{j=0}^m a_{i,j}t^j$ for all $i$, we deduce the assertion in the lemma.
\end{proof}
\begin{defn}
\label{defn_ph}
For each $a\in \textnormal{Int}(\sigma)\cap N$, we define
\begin{equation}
\label{eqn_ph}
\Phi(a):= \min \big\{\sum_{i=1}^n \langle a,u_{i} \rangle | u_{1},\ldots,u_{n}\ \textnormal{span}\ M_\mathbb{R},\textnormal{ with }u_i\in M\cap \sigma^\vee\textnormal{ for each } i\big\},
\end{equation}
where the minimum is taken over all linearly independent sets of vectors $\{u_1,\ldots,u_n\}$ in $M\cap \sigma^\vee$.
\end{defn}
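For instance (a trivial but instructive case), let $X=\mathbb{A}^n$, so that $\sigma$ is generated by $e_1,\ldots,e_n$ and $M\cap\sigma^\vee$ is minimally generated by $e_1^\ast,\ldots,e_n^\ast$. A point $a=(a_1,\ldots,a_n)$ lies in $\textnormal{Int}(\sigma)\cap N$ exactly when all $a_i\geq 1$. Any $n$ linearly independent elements $u_1,\ldots,u_n$ of $M\cap\sigma^\vee$ must have, in each coordinate, at least one $u_i$ with a positive entry (otherwise they would all lie in a coordinate hyperplane), so $\sum_{i=1}^n\langle a,u_i\rangle\geq a_1+\cdots+a_n$; the dual basis attains this value, hence $\Phi(a)=a_1+\cdots+a_n$ and $\min_{a\in\textnormal{Int}(\sigma)\cap N}\Phi(a)=n$, attained at $a=(1,\ldots,1)$. Via Corollary \ref{cor_toric} below this is consistent with the smoothness of $\mathbb{A}^n$.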
\begin{rem}
\label{rem_ph}
Clearly, if the minimum in (\ref{eqn_ph}) is attained at some elements $u_1,\ldots,u_n$, then each $u_i$ must be irreducible. Below we describe one way to find elements $u_1,\ldots,u_n$ at which the above minimum is achieved.
\end{rem}
Fix $a\in \textnormal{Int}(\sigma)\cap N$. Let $u_1,\ldots, u_s$ be the minimal set of generators of the semigroup $M\cap \sigma^\vee$ and let $S_0:=\{u_1,\ldots,u_s\}$. We first choose $u_{j_1}\in S_0$ such that $\varphi_a(u_{j_1})=\langle a, u_{j_1}\rangle$ is minimal and define $S_1:=S_0\backslash \textnormal{Span}(u_{j_1})$. Recursively, for each $1\leq i\leq n-1$, assuming $u_{j_1},\ldots, u_{j_i}$ are chosen and $S_i=S_0\backslash \textnormal{Span}(u_{j_1},\ldots,u_{j_{i}})$, we choose $u_{j_{i+1}} \in S_i$ such that $\varphi_a(u_{j_{i+1}})=\langle a, u_{j_{i+1}}\rangle$ is minimal and define $S_{i+1}:=S_0\backslash \textnormal{Span}(u_{j_1},\ldots,u_{j_{i+1}})$. Once $u_{j_1},\ldots, u_{j_n}$ are all chosen, it is clear that they span $M_\mathbb{R}$.
\begin{lem}
\label{lem_ph}
For each $a\in \textnormal{Int}(\sigma)\cap N$ and $u_{j_1},\ldots, u_{j_n}$ chosen as above, we have
\begin{equation*}
\sum_{k=1}^n \langle a,u_{j_k} \rangle=\Phi(a).
\end{equation*}
\end{lem}
\begin{proof}
By Remark \ref{rem_ph} we can find $i_1,\ldots, i_n$ such that $u_{i_1},\ldots, u_{i_n}$ span $M_\mathbb{R}$ and compute $\Phi(a)$. If the set $\{u_{i_1},\ldots, u_{i_n}\}$ is equal to $\{u_{j_1},\ldots, u_{j_n}\}$, the claim in the lemma follows immediately. Hence, after relabeling, we may assume that there exists some $k$, with $1\leq k\leq n$, such that $i_1=j_1,\ldots, i_{k-1}=j_{k-1}$ and $i_k\neq j_k$. If $k=n$, we have $\langle a,u_{j_n}\rangle \leq \langle a,u_{i_n}\rangle$ by the choice of $u_{j_n}$. Hence
\begin{equation*}
\sum_{k=1}^n \langle a,u_{j_k} \rangle\leq \sum_{k=1}^n \langle a,u_{i_k}\rangle.
\end{equation*}
This proves the claim in the lemma.
Now suppose the conclusion holds when $k>k_0$ for some $k_0<n$, and we consider the case when $k=k_0$. We claim there exists some $l$, with $k_0\leq l\leq n$, such that
\begin{equation*}
u_{j_{k_0}}\not\in \textnormal{Span}(u_{i_1},\ldots, \hat{u}_{i_l},\ldots, u_{i_n}).
\end{equation*}
Otherwise, we have
\begin{eqnarray*}
u_{j_{k_0}}&\in& \bigcap_{l=k_0}^n \textnormal{Span}(u_{i_1},\ldots, \hat{u}_{i_l},\ldots, u_{i_n})\\
&=& \textnormal{Span}(u_{i_1},\ldots, u_{i_{k_0-1}})\\
&=& \textnormal{Span}(u_{j_1},\ldots, u_{j_{k_0-1}}).
\end{eqnarray*}
But this contradicts the fact that $u_{j_1},\ldots, u_{j_n}$ span $M_\mathbb{R}$.
The above claim implies that if we replace $u_{i_l}$ by $u_{j_{k_0}}$, $u_{i_1},\ldots, u_{i_n}$ still span $M_\mathbb{R}$. It also shows that
\begin{equation*}
u_{i_l}\not\in \textnormal{Span}(u_{j_1},\ldots, u_{j_{k_0-1}}),
\end{equation*}
and hence by the choice of $u_{j_{k_0}}$, we have $\langle a, u_{j_{k_0}} \rangle \leq \langle a, u_{i_l}\rangle$. We conclude that if we replace $u_{i_l}$ by $u_{j_{k_0}}$, the question is reduced to the case when $k\geq k_0+1$, and we are done by induction.
\end{proof}
\begin{thm}
\label{thm_stablizer}
Fix a lattice point $a\in \textnormal{Int}(\sigma)$. Let $\chi^{u_1},\chi^{u_2},\ldots,\chi^{u_s}$ be the minimal set of monomial generators of $k[M\cap\sigma^\vee]$. If $H_{m,a}$ is the stabilizer of any element of $T_{m,a}$ under the $T_m$-action, then the following hold:
(1) We have $\dim(H_{m,a})= \Phi(a)$ for
\begin{equation}
\label{eqn_big}
m\geq \max\{\max_{1\leq i\leq n} \langle a,u_{j_i}\rangle\},
\end{equation}
where the maximum is taken over all possible choices of $n$ vectors $u_{j_1},...,u_{j_n}$ among $u_1,u_2,...,u_s$ that span $M_\mathbb{R}$, and such that the minimum in (\ref{eqn_ph}) is attained.
(2) If $m$ does not satisfy the inequality (\ref{eqn_big}), then we have either $\dim(H_{m,a})= \Phi(a)$ or $m\leq \dim(H_{m,a})\leq \Phi(a)$.
\end{thm}
\begin{proof}
For simplicity we write $\varphi_m$ for $\min\{m,\bar{\varphi}_a\}$, and $\varphi^m$ for $\min\{m+1,\bar{\varphi}_a\}$. Let $H_{m,a}$ be the stabilizer of the special jet $\gamma_{\bar{\varphi}_a}$ defined in Lemma \ref{lem_orbits}. Then an $m$-jet $\alpha\in T_m$ is contained in $H_{m,a}$ if and only if
\begin{equation}
\label{eqn_6}
\alpha^\ast(\chi^{u_i})\cdot t^{\varphi^m(u_i)}=t^{\varphi^m(u_i)}\textnormal{ in } k[t]/(t^{m+1}) \textnormal{ for }1\leq i\leq s.
\end{equation}
This is clearly equivalent to
\begin{equation}
\label{eqn_7}
\begin{split}
\alpha^\ast (\chi^{u_i})=1+\sum_{j=m+1-\varphi^m(u_i)}^m a_{i,j}t^j,\ \textnormal{if }\varphi^m(u_i)\leq m,\\
\alpha^\ast (\chi^{u_i})=\sum_{j=0}^m a_{i,j}t^j,\ \textnormal{if }\varphi^m(u_i)=m+1,
\end{split}
\end{equation}
for each $i$ and for some $a_{i,j}\in k$, with the condition that $a_{i,0}\neq 0$ when $\varphi^m(u_i)=m+1$.
Choose any $n$ vectors from $\{u_1,\ldots,u_s\}$ that span $M_\mathbb{R}$. By relabeling, let us assume they are $u_1,\ldots,u_n$.
We define $A:=\mathbb{A}^{\sum_{i=1}^n \varphi^m(u_i)}$ and the map
\begin{equation*}
\pi:H_{m,a}\longrightarrow A,\ \pi(\alpha)=(a_{i,j})_{1\leq i\leq n,m+1-\varphi^m(u_i)\leq j\leq m}.
\end{equation*}
Then Lemma \ref{lem_finiteness} implies that $\pi$ has finite fibers. Therefore, we have
\begin{equation*}
\dim(H_{m,a})\leq\dim(A)= \sum_{i=1}^n\varphi^m(u_i).
\end{equation*}
By letting $u_1,\ldots,u_n$ vary so that they span $M_\mathbb{R}$, we conclude that
\begin{equation*}
\dim(H_{m,a})\leq\min\{\sum_{i=1}^n \varphi^m(u_{j_i})|u_{j_1},\ldots,u_{j_n}\textnormal{ span }M_\mathbb{R}\}\leq \Phi(a).
\end{equation*}\\
In what follows, we assume that after relabeling, $u_1,\ldots, u_n$ are chosen as in Lemma \ref{lem_ph}. We claim that
\begin{equation}
\label{eqn_8}
\dim(H_{m,a})\geq \sum_{i=1}^n \varphi_m(u_i).
\end{equation}
Consider the subgroup $M'$ of $M$ generated by $u_1,\ldots,u_n$ and the corresponding torus $T'=\textnormal{Spec}\ k[M']$. By the proof of Lemma \ref{lem_finiteness}, the commutative diagram
$$\begin{CD}
T_m @> >> T'_m\\
@V\pi_m^T VV @V\pi_m^{T'} VV\\
T @> >> T'.
\end{CD}$$
is Cartesian. Hence, for each $\alpha'\in T_m'$ that lies over $(1,\ldots, 1)\in T'$, there is a unique $\alpha\in T_m$ lying over $(1,\ldots, 1)\in T$ such that $\alpha$ is mapped to $\alpha'$. We claim that for each $\alpha'\in T_m'$ lying over $(1,\ldots, 1)$ that satisfies (\ref{eqn_7}) for $1\leq i\leq n$, the corresponding $\alpha\in T_m$ is an element in $H_{m,a}$.
To prove this, we just need to show that $\alpha$ satisfies conditions (\ref{eqn_7}) for $1\leq i\leq s$. Since $\alpha$ maps to $\alpha'$, it automatically satisfies (\ref{eqn_7}) for $1\leq i\leq n$. Now pick an integer $z$ such that $n+1\leq z\leq s$. Then there exist integers $l>0$, $d_i$ and $q\leq n$ such that
\begin{equation}
\label{eqn_7.5}
lu_z=\sum_{i=1}^q d_i u_i,
\end{equation}
where $d_q\neq 0$. By applying $\alpha^\ast$ on both sides, we get
\begin{equation*}
\alpha^\ast(\chi^{u_z})^l=\prod_{i=1}^q \alpha^\ast(\chi^{u_i})^{d_i}.
\end{equation*}
By using (\ref{eqn_7}) for $1\leq i\leq n$, we see that the $t$-order of $\prod_{i=1}^q \alpha^\ast(\chi^{u_i})^{d_i}-1$, hence also that of $\alpha^\ast(\chi^{u_z})^l-1$, is at least $\min_{1\leq i\leq q} \{m+1-\varphi_m(u_i)\}$. Since $\alpha$ lies over $(1,\ldots, 1)$, this implies that the $t$-order of $\alpha^\ast(\chi^{u_z})-1$ is at least $\min_{1\leq i\leq q} \{m+1-\varphi_m(u_i)\}$. On the other hand, equation (\ref{eqn_7.5}) implies that
\begin{equation*}
u_z\in \{u_1,\ldots,u_s\}\backslash \textnormal{Span}(u_1,\ldots, u_k),
\end{equation*}
for each $k$ with $1\leq k\leq q-1$. By the construction of $u_1,\ldots, u_n$, we have $\varphi_a(u_z)\geq \varphi_a(u_k)$ for each $1\leq k\leq q$. Hence, we have
\begin{equation*}
m+1-\varphi^m(u_z)\leq \min_{1\leq i\leq q} \{m+1-\varphi^m(u_i)\}.
\end{equation*}
So the $t$-order of $\alpha^\ast(\chi^{u_z})-1$ is $\geq m+1-\varphi^m(u_z)$. This, however, implies condition (\ref{eqn_7}) for $i=z$. Since $z$ is arbitrary, $\alpha$ satisfies conditions (\ref{eqn_7}) for $1\leq i\leq s$, and hence $\alpha\in H_{m,a}$.
Define the affine space $A$ and the map $\pi:H_{m,a}\rightarrow A$ as above with respect to $u_1,\ldots, u_n$. Let $Y\subset A$ be the subspace defined by $a_{i,0}=1$ for $1\leq i\leq n$ such that $\varphi^m(u_i)=m+1$. Then the above discussion shows that $Y$ is contained in the image of $\pi$. We conclude that
\begin{equation*}
\dim(H_{m,a})\geq \dim(Y)=\sum_{i=1}^n \varphi_m(u_i).
\end{equation*}
According to Lemma \ref{lem_ph}, the minimum in (\ref{eqn_ph}) is achieved by $u_1,\ldots, u_n$. Hence, the condition (\ref{eqn_big}) guarantees that $m\geq \varphi_a(u_i)$ for each $1\leq i\leq n$. Under this condition, we have
\begin{equation*}
\dim(H_{m,a})\geq \sum_{i=1}^n \varphi_m(u_i)= \sum_{i=1}^n \varphi_a(u_i)=\Phi(a).
\end{equation*}
This completes the proof of (1).
For (2), we consider two cases. If $m\geq \max_{1\leq i\leq n}\varphi_a(u_i)$, then (\ref{eqn_8}) implies that $\dim(H_{m,a})\geq \Phi(a)$ as in (1). If there is some $i$, with $1\leq i\leq n$, such that $m<\varphi_a(u_i)$, then $\varphi_m(u_i)=m$. So (\ref{eqn_8}) implies that $\dim(H_{m,a})\geq \varphi_m(u_i)=m$. Since we have proved that $\dim(H_{m,a})$ is always $\leq \Phi(a)$, the conclusions in (2) follow.
\end{proof}
\begin{cor}
\label{cor_orbits}
With the same assumptions as in Theorem \ref{thm_stablizer} and for all $m\geq 0$, the dimension of the orbit $T_{m,a}$ satisfies one of the following:
\begin{equation}
\label{eqn_8.1}
\dim(T_{m,a})=(m+1)n- \Phi(a),\textnormal{ or}
\end{equation}
\begin{equation}
\label{eqn_8.2}
(m+1)n- \Phi(a)\leq \dim(T_{m,a})\leq (m+1)n-m.
\end{equation}
\end{cor}
\begin{proof}
Observe that $T$ is smooth of dimension $n$. Hence by Corollary \ref{jet_smooth}, $\dim(T_m)=n(m+1)$. The conclusions follow immediately from Theorem \ref{thm_stablizer}.
\end{proof}
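As a sanity check in the smooth case, let $X=\mathbb{A}^2$, so that $n=2$, $M\cap\sigma^\vee$ is generated by $e_1^\ast,e_2^\ast$, and $\Phi(a)=a_1+a_2$ for $a=(a_1,a_2)\in\textnormal{Int}(\sigma)\cap N$ (the minimum in (\ref{eqn_ph}) being attained at the dual basis). The orbit $T_{m,a}$ contains the special jet $\gamma$ with $\gamma^\ast(\chi^{e_1^\ast})=t^{a_1}$ and $\gamma^\ast(\chi^{e_2^\ast})=t^{a_2}$, and for $m\geq\max\{a_1,a_2\}$ it consists of the jets $\big(u(t)t^{a_1},v(t)t^{a_2}\big)$ with $u,v$ units in $k[t]/(t^{m+1})$; hence $\dim(T_{m,a})=(m+1-a_1)+(m+1-a_2)=2(m+1)-\Phi(a)$, in agreement with (\ref{eqn_8.1}).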
Now we can prove our main result. Recall that the Mather minimal log discrepancy can be computed in terms of the invariant $\lambda$ defined in Definition \ref{defn_lambda}, via Proposition \ref{fundamental_prop}. According to Lemma \ref{lem_lambda}, this in turn can be computed from the dimension of $C^m$ (defined in Definition \ref{defn_Cm}), when $m$ is large enough. We have seen that $C^m$ can be approximated by a union of explicit $T_m$-orbits. Thus computing the dimension of $C^m$ boils down to computing the dimension of these $T_m$-orbits.
\begin{thm}
\label{thm_toric}
For $m$ large enough we have
\begin{equation}
\dim(C^m)=n(m+1)-\min_{a\in \textnormal{Int}(\sigma)\cap N} \Phi(a),
\end{equation}
where $\Phi$ is defined in Definition \ref{defn_ph}.
\end{thm}
\begin{proof}
First of all, note that $T_{m,a}$ lies over the torus-fixed point $x_\sigma$ if and only if $a$ is in the interior of the cone $\sigma$ (see Lemma \ref{lem_orbits}). Therefore $C^m$ is the union of finitely many $T_m$-orbits $T_{m,a}$ (by Lemma \ref{lem_orbits} and Corollary \ref{cor_finiteness}), for $a$ in the interior of $\sigma$, and of the orbits contained in the image of the $(D_i)_\infty$. But $\dim (\psi_m((D_i)_\infty))\leq (n-1)(m+1)$ by Lemma \ref{lem_4.3}. When $m$ is large enough, the dimension of these orbits contained in the image of the $(D_i)_\infty$ is smaller than $mn-\lambda(x_\sigma)$. Thus, we only need to compute $\max_{a\in \textnormal{Int}(\sigma)\cap N} \dim(T_{m,a})$ when $m$ is large enough. Note that even though $\textnormal{Int}(\sigma)\cap N$ is an infinite set, we are actually taking the maximum over the finite set of $T_m$-orbits.
By Lemma \ref{lem_lambda} we thus see that if $m$ is large enough, then
\begin{equation*}
mn-\lambda(x_\sigma)=\dim(C^m)=\max_{a\in \textnormal{Int}(\sigma)\cap N} \dim(T_{m,a}).
\end{equation*}
Let us fix such an $m$ satisfying, in addition, $m>n+\lambda(x_\sigma)$. From Corollary \ref{cor_orbits}, there are two cases, (\ref{eqn_8.1}) and (\ref{eqn_8.2}). If $\dim(T_{m,a})\leq (m+1)n-m$, then we have
\begin{equation*}
n(m+1)-\Phi(a)\leq \dim(T_{m,a})\leq(m+1)n-m<mn-\lambda(x_\sigma).
\end{equation*}
Therefore, replacing these $\dim(T_{m,a})$ by $n(m+1)-\Phi(a)$ does not change the maximum of $\dim(T_{m,a})$. So we get
\begin{eqnarray*}
&&\max_{a\in \textnormal{Int}(\sigma)\cap N} \dim(T_{m,a})\\
&=&\max_{a\in \textnormal{Int}(\sigma)\cap N}\Big\{(m+1)n- \Phi(a)\Big\}\\
&=&n(m+1)-\min_{a\in \textnormal{Int}(\sigma)\cap N}\Phi(a).
\end{eqnarray*}
The last formula gives the assertion in the theorem.
\end{proof}
\begin{cor}
\label{cor_toric}
Let $X$ be an affine toric variety over $k$ of dimension $n$ associated to a cone $\sigma$. Let $N$ be the lattice and $M$ be the dual lattice. If $\sigma$ spans $N_\mathbb{R}$ and $x_\sigma \in X$ is the torus-invariant point, the invariant $\lambda(x_\sigma)$ defined in Definition \ref{defn_lambda} is computed by the following formula
\begin{equation*}
\lambda(x_\sigma)=\min_{a\in \textnormal{Int}(\sigma)\cap N}\Phi(a)-n,
\end{equation*}
where the function $\Phi$ is defined in Definition \ref{defn_ph}.
\end{cor}
The following is a direct corollary of Corollary \ref{cor_toric} and Proposition \ref{fundamental_prop}.
\begin{cor}
With the same assumptions as in Corollary \ref{cor_toric}, we have
\begin{equation*}
\widehat{\textnormal{mld}} (x_\sigma;X)=\min_{a\in \textnormal{Int}(\sigma)\cap N}\Big\{ \min \big\{\sum_{i=1}^n \langle a,u_{i} \rangle | u_{1},\ldots,u_{n}\ \textnormal{span}\ M_\mathbb{R}\textnormal{, }u_i\in M\cap \sigma^\vee\textnormal{ for each } i\big\}\Big\},
\end{equation*}
where the second minimum is taken over all linearly independent sets of vectors $\{u_1,\ldots, u_n\}$ in $M\cap \sigma^\vee$.
\end{cor}
\subsection{Examples}
$\\$
The conclusions of Theorem \ref{thm_toric} and Corollary \ref{cor_toric} involve two minima. It is not clear whether the formula can be simplified in the case of an arbitrary toric variety. However, we can simplify this formula in some special cases. Here we provide some examples of computations of the invariant $\lambda$.
\begin{exmp}
Suppose $\sigma\subset \mathbb{R}^2$ is the two dimensional cone generated by $2e_1-e_2$ and $e_2$, where $e_1$ and $e_2$ form the standard basis of $N$. Then $\sigma^\vee$ is a cone in $M_\mathbb{R}$ generated by $e_1^\ast$ and $e_1^\ast+2e_2^\ast$, where $e_1^\ast$ and $e_2^\ast$ form the dual basis. It's easy to see that $u_1=e_1^\ast$, $u_2=e_1^\ast+e_2^\ast$ and $u_3=e_1^\ast+2e_2^\ast$ form the minimal set of generators of $M\cap \sigma^\vee$.
For each $a\in N$ we write $a=(x,y)$, where $x$, $y$ are coordinates with respect to the standard basis. In order that $a\in \textnormal{Int}(\sigma)\cap N$, we need to have $x>0$ and $x+2y>0$. Therefore, according to Corollary \ref{cor_toric} we have
\begin{eqnarray*}
\lambda(x_\sigma)&=&\underset{x>0,x+2y>0}{\min} \min \{x+(x+y),x+(x+2y),(x+y)+(x+2y)\}-2\\
&=& \underset{x>0,x+2y>0}{\min} \min \{ 2x+y,2x+3y\}-2.
\end{eqnarray*}
It's easy to see that the minimum is equal to $0$, which is attained when $x=1$ and $y=0$, and hence $\widehat{\textnormal{mld}}(x_\sigma;X)=\dim(X)=2$.
\end{exmp}
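\begin{rem}
Concretely, in the above example we have $k[M\cap\sigma^\vee]=k[\chi^{u_1},\chi^{u_2},\chi^{u_3}]\cong k[z_1,z_2,z_3]/(z_1z_3-z_2^2)$, so $X$ is the quadric cone (the $A_1$-singularity), and $x_\sigma$ is its unique singular point.
\end{rem}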
In fact, we have the following general result:
\begin{prop}
\label{prop_simplicial_isolated}
If the torus-invariant point $x_\sigma$ is an isolated singularity of a simplicial toric variety $X$, then $\lambda(x_\sigma)=0$, and hence $\widehat{\textnormal{mld}}(x_\sigma;X)=\dim(X)$.
\end{prop}
\begin{proof}
First we claim that if $x_\sigma$ is an isolated singularity, then all facets (faces of codimension $1$) of $\sigma$ are nonsingular. Suppose that there is a proper face $\tau$ of $\sigma$ that is singular. Recall that
\begin{equation*}
O(\tau) = \textnormal{Spec}\ k[M\cap \tau^\perp]\cong (k^\ast)^{n-\dim(\tau)}
\end{equation*}
is the $T$-orbit that contains the distinguished point $x_\tau$. Denote by $N_\tau$ the subgroup of $N$ generated by $N\cap \tau$. Then we may choose a splitting of $N$ and write
\begin{equation*}
N=N_\tau\oplus N',\ \tau=\tau'\oplus\{0\},
\end{equation*}
where $\tau'$ is a cone in $(N_\tau)_\mathbb{R}$. Dually, we can decompose $M=M_\tau\oplus M'$. Let
\begin{equation*}
U_\tau=\textnormal{Spec}\ k[M\cap \tau^\vee],
\end{equation*}
and let $U_{\tau'}$ be the affine toric variety corresponding to the cone $\tau'$ and lattice $N_\tau$. With this notation, we have
\begin{equation}
\label{eqn_decomp}
U_\tau\cong \textnormal{Spec}\ k[M_\tau\cap \tau'^\vee]\times \textnormal{Spec}\ k[M']\cong U_{\tau'}\times (k^\ast)^{n-\dim(\tau)}.
\end{equation}
Note that $U_\tau$ is an open subset of $X$ that contains $O(\tau)$. Since $\tau'$ is a singular cone, the torus-fixed point $x_{\tau'}\in U_{\tau'}$ is a singular point. In this case, the orbit $O(\tau)$, which corresponds via the above isomorphism to $\{x_{\tau'}\}\times {\textnormal{Spec}}\ k[M']$, is a subset of dimension $n-\dim(\tau)$ contained in the singular locus of $X$ and containing $x_\sigma$ in its closure. This contradicts the fact that $x_\sigma$ is an isolated singular point of $X$. So we conclude that all facets are nonsingular.
Since $X$ is simplicial, the cone $\sigma$ has only $n$ one-dimensional faces. Assume that $v_1,\ldots,v_n$ are the primitive lattice vectors on these one-dimensional faces. Then $v_1,\ldots,v_{n-1}$ span a facet of $\sigma$, which is therefore nonsingular. By applying an automorphism of $N$ one may assume that $v_1=e_1,\ldots,\ v_{n-1}=e_{n-1}$ and $v_n=a_1e_1+\cdots+a_{n-1}e_{n-1}+te_n$ with $0\leq a_i <t$. Define $a=e_1+\cdots+e_n$.
Note that $o_i:=te_i^\ast-a_ie_n^\ast$ is orthogonal to the facet spanned by $v_1,\ldots,\hat{v}_i,\ldots,v_n$ for every $i$, with $1\leq i\leq n-1$. In fact, the dual cone $\sigma^\vee$ is spanned by $o_1,\ldots,o_{n-1},e_n^\ast$. Since $\langle o_i, a\rangle=t-a_i>0$ and $\langle e_n^\ast, a\rangle=1$, $a$ is in the interior of $\sigma$.
Clearly $e_1^\ast,\ldots,e_n^\ast$ are all in the dual cone $\sigma^\vee$. In fact, each $e_i^\ast$ is on the face spanned by $o_i$ and $e_n^\ast$. Since $\varphi_a(e_i^\ast)=1$, we have $\Phi(a)\leq n$ and hence $\lambda(x_\sigma)=0$. By Proposition \ref{fundamental_prop} we get $\widehat{\textnormal{mld}}(x_\sigma;X)=\dim(X)$.
\end{proof}
\begin{cor}
If $X$ is a two-dimensional affine toric variety, then $\lambda(x_\sigma)=0$.
\end{cor}
\begin{proof}
Observe that every two-dimensional affine toric variety is simplicial, and that every facet is one-dimensional, hence nonsingular. By the argument in the proof of Proposition \ref{prop_simplicial_isolated}, $X$ is then smooth away from $x_\sigma$, so either $x_\sigma$ is a smooth point (in which case $\lambda(x_\sigma)=0$ by Remark \ref{relation_smooth}) or $x_\sigma$ is an isolated singularity of a simplicial toric variety. In the latter case the conclusion follows immediately from Proposition \ref{prop_simplicial_isolated}.
\end{proof}
The above examples might suggest that $\lambda$ is always $0$, or that $\widehat{\textnormal{mld}}$ is always equal to $\dim(X)$, for any toric variety. But this is not true in general, as we will see shortly. Let us now look at an example of a different type. We discuss this class of examples in detail in the next section (see Example \ref{exmp_binomial}), to which we refer for the proof of the formula that we use.
\begin{exmp}
Let $X\subset \mathbb{A}^{n+1}$ be the hypersurface defined by the binomial function
\begin{equation*}
f=x_1x_2\cdots x_n-y^{n-1}
\end{equation*}
for some $n\geq 3$. The dimension of $X$ is $n$ while the dimension of $X_{\textnormal{sing}}$ is $n-2$. Since $X$ is Cohen-Macaulay, being a hypersurface, it follows from Serre's criterion that $X$ is normal. It is a general fact that $X$ is a toric variety if it is normal and defined by binomials (\cite[Lemma 1.1]{St95}). By applying formula (\ref{eqn_binomial}) below from Section \ref{sec5}, we immediately get $\lambda=1$.
\end{exmp}
\subsection{Extension to arbitrary closed points}
$\\$
Corollary \ref{cor_toric} gives the formula that computes the invariant $\lambda$ associated to the torus-invariant point for an affine toric variety $X$ over $k$ that corresponds to a cone that spans $N_\mathbb{R}$. Now we show how this computation is generalized to an arbitrary closed point of a toric variety $X$. We start by proving the following proposition:
\begin{prop}
\label{prop_product}
Let $X$ and $Y$ be two varieties over $k$ such that $Y$ is smooth. For any closed points $x\in X$ and $y\in Y$, the invariant $\lambda$ for the closed point $(x,y)\in X\times Y$ is equal to $\lambda(x)$.
\end{prop}
\begin{proof}
By definition, we have
\begin{equation*}
\lambda((x,y))=(\dim(X)+\dim(Y))m-\dim\big(\psi_m^{X\times Y} \big((\pi^{X\times Y})^{-1}((x,y))\big)\big),
\end{equation*}
for $m$ large enough. According to Remark \ref{arc_product}, this is equal to
\begin{equation*}
(\dim(X)+\dim(Y))m-\dim\big(\psi_m^X\big((\pi^{X})^{-1}(x)\big)\big)- \dim\big(\psi_m^Y\big((\pi^{Y})^{-1}(y)\big)\big)=\lambda(x)+\lambda(y).
\end{equation*}
According to Remark \ref{relation_smooth}, $\lambda(y)=0$. Therefore, $\lambda((x,y))=\lambda(x)$.
\end{proof}
The key fact is that any closed point is in the orbit of the distinguished point of a face of $\sigma$. More precisely, let $X$ be the affine toric variety associated to a cone $\sigma$ and $O(\tau)$ be the $T$-orbit that contains the distinguished point $x_\tau$ for some face $\tau$ of $\sigma$. Then $X=\cup_\tau O(\tau)$ with $\tau$ varying over all faces of $\sigma$. We refer the reader to \cite[Chapter 3]{Ful93} for details. Then $\lambda(p)=\lambda(x_\tau)$ for every $p$ in $O(\tau)$, because there is an element $t\in T$ that maps $p$ to $x_\tau$, and such that multiplication by $t$ gives an automorphism of $X$. Therefore, it is enough to compute $\lambda(x_\tau)$, where $\tau$ is a face of $\sigma$.
Following the notation in the proof of Proposition \ref{prop_simplicial_isolated}, we have an open subset $U_\tau\cong U_{\tau'}\times (k^\ast)^{n-\dim(\tau)}$ of $X$ that contains $O(\tau)$. The point $x_\tau\in O(\tau)$ is mapped to $(x_{\tau'},\underline{1})$ by the isomorphism, where $x_{\tau'}$ is the torus-invariant point in $U_{\tau'}$. Therefore, we are reduced to computing $\lambda((x_{\tau'},\underline{1}))$ and we obtain the following corollary by using Proposition \ref{prop_product}:
\begin{cor}
With the above notation, if $X=X(\sigma)$ is an affine toric variety over $k$ of dimension $n$ and $\tau$ is a face of $\sigma$, then we have $\lambda(x_\tau)=\lambda(x_{\tau'})$. Equivalently, we have
\begin{equation*}
\widehat{\textnormal{mld}} (x_\tau;X)-n=\widehat{\textnormal{mld}} (x_{\tau'};U_{\tau'})-\dim(\tau).
\end{equation*}
\end{cor}
\section{Mather minimal log discrepancy of hypersurfaces}
\label{sec5}
This section is devoted to the computation of the Mather minimal log discrepancy of the origin on a hypersurface whose defining equation has fixed monomials and very general coefficients.
\subsection{Basic setup}
\label{sec_setup}
$\\$
In this subsection, we fix our notation and give a criterion on the defining equation guaranteeing that the hypersurface is integral. Throughout the section, we assume that $X$ is a hypersurface in
\begin{equation*}
\mathbb{A}^{n+1}=\textnormal{Spec}\ \mathbb{C}[x_1,\ldots,x_{n+1}],
\end{equation*}
for some positive integer $n$, over the field of complex numbers. We have $\dim(X)=n$. After a change of coordinates, we may and will assume that the origin of $\mathbb{A}^{n+1}$ is contained in $X$.
Let $f$ be the defining equation of $X$ in $\mathbb{A}^{n+1}$. We can write
$f=\sum _{i=1} ^N a_{I^i} x^{I^i}$, where $I^i=(I^i_1,I^i_2,\ldots,I^i_{n+1})$ are multi-indices and $x^{I^i}$ stands for $\prod _{j=1}^{n+1} x_j ^{I_j^i}$. Suppose that all the coefficients $a_{I^i}$ are nonzero. Then $N$, the number of monomials in the polynomial $f$, is at least $2$ when $X$ is integral, unless $X$ is a coordinate plane. We denote by $\mathbb{Z}_+$ the set of positive integers and by $\mathbb{Z}_{\geq 0}$ the set of nonnegative integers.
\begin{defn}
The \emph{support} of a multi-index $I^i$ is $| I^i| := \{j | I_j^i> 0\}$. Given an $(n+1)$-tuple $\alpha=(\alpha_1,\ldots,\alpha_{n+1})\in \mathbb{Z}^{n+1}$ and a multi-index $I$, we define the \emph{product} as $\alpha \cdot I := \sum_{j=1} ^{n+1} \alpha_j I_j$. The \emph{support} of a polynomial $f=\sum _{i=1} ^N a_{I^i} x^{I^i}$, with all $a_{I^i}\neq 0$, is the set
\begin{equation*}
A=\{I^1,\ldots,I^N\}\subset (\mathbb{Z}_{\geq 0})^{n+1}.
\end{equation*}
The \emph{dimension} of $A$ is defined by $\dim(A)=\dim_\mathbb{Q} (\textnormal{Span}_\mathbb{Q}\{A-a\})$, for any $a\in A$.
\end{defn}
\begin{rem}
Clearly the dimension of a support $A$ is independent of the choice of $a$. When $X$ is integral and is not a hyperplane, the support $A$ of $f$ has at least two points, hence $\dim(A)\geq 1$.
\end{rem}
\begin{rem}
We may assume without loss of generality that
\begin{equation*}
\cup_{1\leq i\leq N} |I^i |=\{1,2,\ldots,n+1\}.
\end{equation*}
Otherwise, $X$ is the product of an affine space with a hypersurface of lower dimension. Then by Proposition \ref{prop_product}, computing $\lambda$ for the origin in $X$ is reduced to computing the corresponding $\lambda(0)$ on the lower dimensional hypersurface.
\end{rem}
\begin{rem}
If $0$ is a smooth point, then the invariant $\lambda(0)$ is trivially zero by Remark \ref{relation_smooth}. So we focus on the case where $0$ is a singular point of $X$. In particular, we assume that $X$ is not a hyperplane.
\end{rem}
In order that the hypersurface $X$ contains the origin, we require that $f$ is a polynomial in
\begin{equation*}
(x_1,x_2,\ldots,x_n)\cdot \mathbb{C}[x_1,x_2,\ldots,x_{n+1}],
\end{equation*}
or equivalently, the point $(0,0,\ldots,0)$ is not in the support of $f$. By requiring that $X$ is irreducible and is not a hyperplane, we see that $f$ is not divisible by $x_i$ for each $i$. This means that the support of $f$ contains at least one point in each coordinate plane $x_i=0$. We first characterize those $A$, such that a general polynomial with support $A$ defines an integral hypersurface. We denote by $\textnormal{conv}(A)$ the convex hull of $A$. The following result is a simplified version of \cite[Theorem 3]{Yu16}:
\begin{thm}
\label{thm_integral}
Let $R=\mathbb{C}[x_1^{\pm 1},\ldots,x_{n+1}^{\pm 1}]$ be the Laurent polynomial ring in $n+1$ variables. Then a general polynomial $f$ with support $A$ generates a proper prime ideal in $R$ if and only if one of the following holds:
\item (1) $\dim(A)\geq 2$, or
\item (2) $\dim(A)=1$ and $\textnormal{conv}(A)$ contains only two integral points.
\end{thm}
\begin{lem}
\label{lem_prime}
Let $R=\mathbb{C}[x_1^{\pm 1},\ldots,x_{n+1}^{\pm 1}]$ be the Laurent polynomial ring in $n+1$ variables and $f$ be a polynomial in $\mathbb{C}[x_1,\ldots,x_{n+1}]$ that is not divisible by any $x_i$. If $f$ generates a prime ideal in $R$, then $f$ also generates a prime ideal in $\mathbb{C}[x_1,\ldots,x_{n+1}]$.
\end{lem}
\begin{proof}
The assertion follows from the fact that
\begin{equation*}
f\cdot \mathbb{C}[x_1,\ldots,x_{n+1}]=f\cdot \mathbb{C}[x_1^{\pm 1},\ldots,x_{n+1}^{\pm 1}]\cap \mathbb{C}[x_1,\ldots,x_{n+1}].
\end{equation*}
This identity follows easily from the fact that $\mathbb{C}[x_1,\ldots,x_{n+1}]$ is a UFD together with the fact that $f$ is not divisible by any $x_i$.
\end{proof}
\begin{defn}
\label{defn_integral}
A finite subset $A$ of $(\mathbb{Z}_{\geq 0})^{n+1}$ is called \emph{integral} if the following conditions hold:
\item (1) $A$ contains at least one point in each coordinate plane $x_i=0$,
\item (2) $A$ does not contain the origin $(0,\ldots,0)$, and
\item (3) $\dim(A)\geq 2$, or $\dim(A)=1$ and $\textnormal{conv}(A)$ contains only two integral points.
Let $|A|$ be the cardinality of $A$. We denote by $F(A)\subset (\mathbb{C}^\ast)^{|A|}$ the set of coefficients, such that a polynomial $f$ with support $A$ and these coefficients generates a prime ideal in the Laurent polynomial ring $R$.
\end{defn}
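For concreteness, the following small Python sketch (our own helper; the function names and the example are ours and not part of the paper) checks whether a given finite subset $A\subset (\mathbb{Z}_{\geq 0})^{n+1}$ is integral in the sense of Definition \ref{defn_integral}. The dimension of $A$ is computed as the rank of the differences $I-I^1$, and in the one-dimensional case the condition that $\textnormal{conv}(A)$ contains only two lattice points is tested via a gcd.
\begin{verbatim}
# Sketch only: test whether a finite support A in Z_{>=0}^{n+1} is
# "integral": (1) a point on each coordinate plane x_j = 0, (2) the
# origin is excluded, (3) dim(A) >= 2, or dim(A) = 1 with exactly two
# lattice points in conv(A).
from functools import reduce
from math import gcd

import numpy as np

def support_dimension(A):
    # dim_Q Span_Q(A - a), independent of the chosen base point a
    A = np.array(A)
    return np.linalg.matrix_rank(A - A[0])

def is_integral_support(A):
    A = [tuple(I) for I in A]
    m = len(A[0])  # m = n + 1 variables
    if not all(any(I[j] == 0 for I in A) for j in range(m)):
        return False                      # condition (1) fails
    if tuple([0] * m) in A:
        return False                      # condition (2) fails
    d = support_dimension(A)
    if d >= 2:
        return True                       # condition (3), first case
    if d == 1 and len(A) == 2:
        diff = [abs(a - b) for a, b in zip(A[0], A[1])]
        return reduce(gcd, diff) == 1     # no interior lattice points
    return False

# The support of x^2 - y^2 z (the Whitney umbrella considered below) is integral.
print(is_integral_support([(2, 0, 0), (0, 2, 1)]))  # True
\end{verbatim}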
By Theorem \ref{thm_integral}, the set $F(A)$, for each integral subset $A\subset (\mathbb{Z}_{\geq 0})^{n+1}$, contains a nonempty open subset of $(\mathbb{C}^\ast)^{|A|}$. The following is a direct corollary of Lemma \ref{lem_prime}:
\begin{cor}
\label{cor_integral}
If $f$ is a polynomial with an integral support $A$ and coefficients in $F(A)$, then $f$ defines an integral hypersurface in $\mathbb{A}^{n+1}$ containing the origin.
\end{cor}
In what follows, we fix an integral subset $A\subset (\mathbb{Z}_{\geq 0})^{n+1}$ with cardinality $N\geq 2$, and assume that the defining equation $f$ has support $A$ and coefficients in $F(A)$. Recall that we write $f=\sum _{i=1} ^N a_{I^i} x^{I^i}$ with all $a_{I^i}\neq 0$. In this case we have $A=\{I^1,\ldots, I^N\}$.
\begin{defn}
For each positive integer $m$, we define as in the previous section the sets $C^m:=\psi_m (\pi^{-1} (0))$ in the $m^{\textnormal{th}}$ jet scheme of $X$, where $0$ is the origin of $\mathbb{A}^{n+1}$. For each $(n+1)$-tuple $\alpha\in \mathbb{Z}^{n+1}$ such that $1\leq \alpha_j\leq m$ for each $j$, we define
\begin{equation*}
C_\alpha ^m:=C^m\cap (\cap_{1\leq j\leq n+1} \textnormal{Cont}^{\alpha_j} (x_j)_m).
\end{equation*}
In other words, $C_\alpha ^m$ is a subset of $C^m$ with prescribed order along each $x_j$.
\end{defn}
Now we fix $\alpha$ with $\alpha_j\geq 1$ for each $j$. Let $n_0(\alpha)=\min_{1\leq i \leq N} \{\alpha \cdot I^i\}$. After relabeling we may assume that the minimum is attained by precisely those $i$ with $1\leq i\leq k$ for some $k\geq 1$. Then the image of $f$ under the map
\begin{equation}
\label{eqn_base}
\mathbb{C}[x_1,x_2,\ldots,x_{n+1}]\longrightarrow (\mathbb{C}[x_j^{(s)}|1\leq j\leq n+1,s\geq \alpha_j])[\![t]\!],\ x_j\longmapsto \sum_{s=\alpha_j}^\infty x_j^{(s)}t^s,
\end{equation}
has $t$-order $\geq n_0(\alpha)$ and the coefficient of $t^{n_0(\alpha)}$ is
\begin{equation}
\label{eqn_P0}
P_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}):=\sum_{i=1}^k a_{I^i}\Pi_{j=1}^{n+1} (x_j^{(\alpha_j)})^{I_j^i}.
\end{equation}
$P_0$ is an element in $\mathbb{C}[x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}]$. We define the condition $\Delta ^\alpha$, which will be used in the statements of the main result in this section, as follows:
\begin{cond}
\label{open_condition}
We say that the condition $\Delta^\alpha$ holds for $f$ if
\begin{equation*}
\bigcap_{j=1}^{n+1} \mathbb{V}\Big(\frac {\partial P_0}{\partial x_j^{(\alpha_j)}}\Big) \cap \mathbb{V}(P_0)= \emptyset
\end{equation*}
in the torus
\begin{equation*}
(\mathbb{C}^\ast)^{n+1}= \textnormal{Spec}\ \mathbb{C}[x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots, x_{n+1}^{({\alpha_{n+1}})}]_{(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots, x_{n+1}^{({\alpha_{n+1}})})}.
\end{equation*}
\end{cond}
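To illustrate the objects just introduced, the following sketch (our own notation and helper names) computes $n_0(\alpha)$ and the polynomial $P_0$ of equation (\ref{eqn_P0}) from the support and coefficients of $f$ and a chosen tuple $\alpha$; verifying Condition \ref{open_condition} itself, i.e.\ the smoothness of $\mathbb{V}(P_0)$ in the torus, is not attempted here.
\begin{verbatim}
# Sketch only: compute n_0(alpha) and P_0 for f given as a dictionary
# {multi-index I: coefficient a_I}; the symbol xj_a stands for x_j^{(alpha_j)}.
import operator
from functools import reduce

import sympy as sp

def initial_data(monomials, alpha):
    weight = lambda I: sum(a * i for a, i in zip(alpha, I))
    n0 = min(weight(I) for I in monomials)
    xs = [sp.Symbol(f"x{j + 1}_{alpha[j]}") for j in range(len(alpha))]
    P0 = sum(c * reduce(operator.mul, (x**e for x, e in zip(xs, I)), sp.Integer(1))
             for I, c in monomials.items() if weight(I) == n0)
    return n0, sp.expand(P0)

# f = x^2 + y^2 + xy + y^3 with alpha = (1, 1): n_0(alpha) = 2 and
# P_0 = x1_1**2 + x1_1*x2_1 + x2_1**2.
print(initial_data({(2, 0): 1, (0, 2): 1, (1, 1): 1, (0, 3): 1}, (1, 1)))
\end{verbatim}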
\begin{defn}
The \emph{weight} of a monomial $\prod_{j=1}^{n+1}\prod_{i=1}^{b_j} x_j^{(\beta^j_i)}$ is the sum of the superscripts $\sum_{i,j}\beta_i^j$; the \emph{weight} of a polynomial in $(x_j^{(u)})_{1\leq j\leq {n+1};\ u>0}$ is the smallest weight among its monomials.
\end{defn}
\begin{rem}
\label{rem_order}
It is easy to see that for every $s$, each monomial in the coefficient of $t^s$ in the image of a polynomial under the map (\ref{eqn_base}) has weight $s$.
\end{rem}
\subsection{Main results}
\begin{lem}
\label{lem_first}
For a fixed $\alpha=(\alpha_1,\ldots,\alpha_{n+1})\in (\mathbb{Z}_+)^{n+1}$, the set
\begin{equation*}
F_\alpha :=\{ (a_{I^i})_{1\leq i\leq N} \in (\mathbb{C}^\ast)^N |\textnormal{condition}\ \Delta ^\alpha\textnormal{ is satisfied}\}
\end{equation*}
contains a nonempty open subset of $(\mathbb{C}^\ast)^N$.
\end{lem}
\begin{proof}
We use the Kleiman-Bertini Theorem in characteristic zero, which states that the general element of a linear system of divisors on a variety $Y$ is nonsingular away from the base locus of the linear system and the singular locus of $Y$.
Let $Y$ be the affine space $\mathbb{C}^{n+1}=\textnormal{Spec}\ \mathbb{C}[x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}]$ and $Z$ be the hypersurface in $Y$ defined by the polynomial $P_0=\sum_{i=1}^k a_{I^i}\Pi_{j=1}^{n+1} (x_j^{(\alpha_j)})^{I_j^i}$. Note that the left-hand side of the equation in condition $\Delta^\alpha$ is the singular locus of $Z$. Therefore, it suffices to show that for a general choice of coefficients, $Z$ will be nonsingular away from the coordinate planes.
The linear system of divisors $\mathbb{H}$ on $Y$ consisting of hypersurfaces defined by polynomials of the form
\begin{equation*}
p=\sum_{i=1}^k a_{I^i}\Pi_{j=1}^{n+1} (x_j^{(\alpha_j)})^{I_j^i},
\end{equation*}
for $a_{I^i}\in\mathbb{C}$, is clearly base point free away from the coordinate planes because each monomial $x^{I^i}$ is already so. By the Kleiman-Bertini Theorem, the hypersurface $Z$ is nonsingular away from the coordinate planes for a general choice of $(a_{I^i})_{1\leq i\leq N}\in(\mathbb{C}^\ast)^N$.
\end{proof}
We now study the image of $f$ under the map (\ref{eqn_base}). It suffices to study the image of each monomial of $f$. The image of $f$ is just the sum of the images of all the monomials of $f$.
\begin{lem}
\label{lem_third}
Fix $\alpha=(\alpha_1,\ldots,\alpha_{n+1})\in (\mathbb{Z}_+)^{n+1}$. For each $s\geq 1$, the coefficient of $t^s$ in the image of $x_1^{b_1}x_2^{b_2}\ldots x_{n+1}^{b_{n+1}}$ under the map (\ref{eqn_base}) is equal to
\begin{equation}
\label{eqn_monomial_image}
\sum_{c_{i,j}} \Big(\prod_{i=1}^{n+1} b_i! \cdot \prod_{j\geq \alpha_i} \frac{(x_i^{(j)})^{c_{i,j}}}{c_{i,j}!}\Big),
\end{equation}
where the sum is over all $c_{i,j}$ with $1\leq i\leq n+1$ and $j\geq \alpha_i$ such that
\begin{equation*}
\sum_{i,j}j\cdot c_{i,j}=s\textnormal{ and } \sum_{j\geq \alpha_i} c_{i,j}=b_i\textnormal{ for all }i.
\end{equation*}
\end{lem}
\begin{proof}
By considering the weight of a monomial $\prod_{i=1}^{n+1}\prod_{j\geq \alpha_i} (x_i^{(j)})^{c_{i,j}}$, it is clear that if this monomial appears with nonzero coefficient in the coefficient of $t^s$, then we have $\sum_{i,j}j\cdot c_{i,j}=s$. If such a monomial shows up in the image of $x_1^{b_1}x_2^{b_2}\ldots x_{n+1}^{b_{n+1}}$, then it clearly satisfies $\sum_{j\geq \alpha_i} c_{i,j}=b_i$ for all $i$. Moreover, it follows from the multinomial formula that if these conditions are satisfied, then the coefficient of the above monomial is
\begin{equation*}
\prod_{i=1}^{n+1} \frac{b_i!}{\prod_{j\geq \alpha_i} c_{i,j}!}
\end{equation*}
\end{proof}
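As a quick sanity check of the coefficient formula (\ref{eqn_monomial_image}) (a throwaway computation, not part of the argument), one can expand a truncation of the series image of a monomial and compare coefficients; the variable names below are ours.
\begin{verbatim}
# Image of x^2 under the map (eqn_base) with alpha = 1, truncated at t^3:
# the coefficient of t^4 should be 2*x1*x3 + x2**2, matching the formula
# with (c_1, c_2, c_3) = (1, 0, 1) and (0, 2, 0) and b = 2.
import sympy as sp

t, x1, x2, x3 = sp.symbols("t x1 x2 x3")
image = (x1 * t + x2 * t**2 + x3 * t**3) ** 2
print(sp.expand(image).coeff(t, 4))  # 2*x1*x3 + x2**2
\end{verbatim}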
This lemma shows that the images of two different monomials under the map (\ref{eqn_base}) do not mix together. Thus, the number of monomials in the coefficient of $t^s$ of the image of $f$ is the sum of the numbers of monomials in the coefficient of $t^s$ for the image of each of the monomials of $f$. Similarly, the highest superscript in the coefficient of $t^s$ of the image of $f$ is equal to the maximum of the highest superscript in the coefficient of $t^s$ that appears in the images of all the monomials of $f$.
\begin{lem}
\label{lem_forth}
Fix $\alpha=(\alpha_1,\ldots,\alpha_{n+1})\in (\mathbb{Z}_+)^{n+1}$ and a monomial $x_1^{b_1}x_2^{b_2}\ldots x_{n+1}^{b_{n+1}}$. Then for each $s\geq \sum_{i=1}^{n+1} b_i \alpha_i$, the largest superscript appearing in the coefficient of $t^s$ in the image of $x_1^{b_1}x_2^{b_2}\ldots x_{n+1}^{b_{n+1}}$ under the map (\ref{eqn_base}) is equal to $s-\mu$ for some fixed number $\mu$. Moreover, this largest superscript only appears in the monomials of the form
\begin{equation*}
(x_1^{(\alpha_1)})^{b_1}(x_2^{(\alpha_2)})^{b_2}\ldots(x_j^{(\alpha_j)})^{b_j-1}\ldots(x_{n+1}^{(\alpha_{n+1})})^{b_{n+1}} \cdot x_j^{(s-\mu)},
\end{equation*}
for some $j$ such that $\alpha_j=\max_{1\leq i\leq n+1} \alpha_i$, and we have $\mu=\sum_{i=1}^{n+1} b_i \alpha_i-\alpha_j$.
\end{lem}
\begin{proof}
According to Lemma \ref{lem_third}, the coefficient of $t^s$ in the image of the monomial $x_1^{b_1}x_2^{b_2}\ldots x_{n+1}^{b_{n+1}}$ consists of monomials of the form $\prod_{i=1}^{n+1}\prod_{j=1}^{b_i} x_i^{(\beta^i_j)}$, with $\beta^i_j\geq \alpha_i$ for every $i$ and $j$ and such that $\sum_{i,j}\beta^i_j =s$. When $s<\sum_{i=1}^{n+1}b_i\alpha_i$, there is no such monomial. Hence we require that $s\geq \sum_{i=1}^{n+1}b_i\alpha_i$. When $s=\sum_{i=1}^{n+1}b_i\alpha_i$, there is only one monomial
\begin{equation*}
(x_1^{(\alpha_1)})^{b_1}(x_2^{(\alpha_2)})^{b_2}\ldots(x_j^{(\alpha_j)})^{b_j}\ldots(x_{n+1}^{(\alpha_{n+1})})^{b_{n+1}}
\end{equation*}
in the coefficient of $t^s$. Hence the largest superscript that shows up in this case is equal to $\max_i \alpha_i$. In what follows, we assume that $s>\sum_{i=1}^{n+1}b_i\alpha_i$. Then the largest superscript that appears in the coefficient of $t^s$ is given by the following optimization problem:
\begin{eqnarray*}
\max &&\max_{i,j}\{\beta^i_j\}\\
\textnormal{s.t.} && \beta^i_j\geq \alpha_i \textnormal{ for each } i \textnormal{ and } j\\
\textnormal{and}&& \sum_{i,j}\beta^i_j =s.
\end{eqnarray*}
Let $(\bar\beta^i_j)_{i,j}$ be an optimal solution to this optimization problem. We claim that there exist $i_0$ and $j_0$, with $1\leq i_0\leq n+1$ and $1\leq j_0\leq b_{i_0}$, such that $\bar\beta^i_j=\alpha_i$ if and only if $(i,j)\neq (i_0,j_0)$. First we show that if the maximum is attained by $\bar\beta^{i_0}_{j_0}$, then we have $\bar\beta^{i_0}_{j_0}>\alpha_{i_0}$. By relabeling, we assume that $\alpha_1=\max_i \alpha_i$. Consider another feasible solution $(\tilde\beta^i_j)$ to the optimization problem, with $\tilde\beta^1_1=\alpha_1+s-\sum_{i=1}^{n+1}b_i\alpha_i$ and $\tilde\beta^i_j=\alpha_i$ for $(i,j)\neq (1,1)$. Then clearly $\max_{i,j}\{\tilde\beta^i_j\}=\tilde\beta^1_1>\max_i \alpha_i$. Hence
\begin{equation*}
\bar\beta^{i_0}_{j_0}= \max_{i,j}\{\bar\beta^i_j\}\geq \max_{i,j}\{\tilde\beta^i_j\}>\alpha_{i_0}.
\end{equation*}
Now suppose, contrary to our claim, that we have another pair $(i_1,j_1)\neq (i_0,j_0)$, such that $\bar\beta^{i_1}_{j_1}> \alpha_{i_1}$. We define $\beta^i_j=\bar\beta^i_j$ if $(i,j)\neq (i_0,j_0), (i_1,j_1)$, $\beta^{i_0}_{j_0}=\bar\beta^{i_0}_{j_0} +1$ and $\beta^{i_1}_{j_1}=\bar\beta^{i_1}_{j_1}-1$. Then clearly $(\beta^i_j)$ is also feasible, while $\max_{i,j}\{\beta^i_j\}>\max_{i,j}\{\bar\beta^i_j\}$. This contradicts our choice of $(\bar\beta^i_j)$.
The above discussion shows that the largest superscript that appears in a monomial in the coefficient of $t^s$ shows up only in monomials of the form
\begin{equation*}
(x_1^{(\alpha_1)})^{b_1}(x_2^{(\alpha_2)})^{b_2}\ldots(x_j^{(\alpha_j)})^{b_j-1}\ldots(x_{n+1}^{(\alpha_{n+1})})^{b_{n+1}} \cdot x_j^{(\beta_j(s))}.
\end{equation*}
Moreover, such monomials appear in the coefficient of $t^s$ for all $j$.
By considering the weight of such a monomial, we get $s=\beta_j(s)-\alpha_j+\sum_{i=1}^{n+1} b_i\alpha_i$. This implies that when $\beta_j(s)$ is the largest superscript, $\alpha_j=\max_i\alpha_i$, and
\begin{equation*}
\beta_j(s)=s-\sum_{i=1}^{n+1} b_i\alpha_i+\alpha_j.
\end{equation*}
This proves the lemma with $\mu=\sum_{i=1}^{n+1} b_i\alpha_i-\max_i\alpha_i$.
\end{proof}
\begin{rem}
With the same proof as above, one can show that for each fixed index $j$, with $1 \leq j\leq n+1$, the largest superscript for $x_j$ appearing in the coefficient of $t^s$ in the image of $x_1^{b_1}x_2^{b_2}\ldots x_{n+1}^{b_{n+1}}$ under the map (\ref{eqn_base}) appears in the monomials of the form
\begin{equation*}
(x_1^{(\alpha_1)})^{b_1}(x_2^{(\alpha_2)})^{b_2}\ldots(x_j^{(\alpha_j)})^{b_j-1}\ldots(x_{n+1}^{(\alpha_{n+1})})^{b_{n+1}} \cdot x_j^{(s-\mu)},
\end{equation*}
where $\mu=\sum_{i=1}^{n+1} b_i\alpha_i-\alpha_j$.
\end{rem}
Combining Lemma \ref{lem_third} and Lemma \ref{lem_forth}, we see that if $P=x_1^{b_1}\ldots x_{n+1}^{b_{n+1}}$ and if $\alpha_{j_0}=\max_i \alpha_i$, then for $s>\sum_{i=1}^{n+1}b_i\alpha_i$, the term with the highest superscript in the coefficient of $t^s$ for the image of $P$ under the map (\ref{eqn_base}) is equal to
\begin{equation*}
\frac{\partial P}{\partial x_{j_0}}(x_1^{(\alpha_1)},\ldots, x_{n+1}^{(\alpha_{n+1})})\cdot x_{j_0}^{(s-\mu)},
\end{equation*}
with $\mu=\sum_{i=1}^{n+1} b_i\alpha_i-\alpha_{j_0}$. When $s=\sum_{i=1}^{n+1}b_i\alpha_i$, the coefficient of $t^s$ is equal to $P(x_1^{(\alpha_1)},\ldots, x_{n+1}^{(\alpha_{n+1})})$. This is the smallest $s$ such that the coefficient of $t^s$ is nonzero. Similarly, for each fixed index $j$, the term with the highest superscript of $x_j$ is equal to
\begin{equation*}
\frac{\partial P}{\partial x_{j}}(x_1^{(\alpha_1)},\ldots, x_{n+1}^{(\alpha_{n+1})})\cdot x_{j}^{(s-\mu')},
\end{equation*}
with $\mu'=\sum_{i=1}^{n+1} b_i\alpha_i-\alpha_{j}$.
Since the images of different monomials of $f$ do not mix, by Lemma \ref{lem_forth} the coefficient of $t^s$ in the image of $f$ under the map (\ref{eqn_base}), for $s>\max_{1\leq i\leq N}\{\alpha\cdot I^i\}$, is of the form
\begin{equation}
\label{eqn_16}
\begin{split}
T_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})x_{j_0}^{(s-\mu_{j_0})} \qquad\qquad\qquad\\
+ Q_s(x_j^{(\beta_j)} | \alpha_j\leq \beta_j\leq s-\mu_{j_0}; \alpha_{j_0}\leq \beta_{j_0} <s-\mu_{j_0}),
\end{split}
\end{equation}
for some index $j_0$, some $\mu_{j_0}>0$, and polynomials $T_0$ and $Q_s$. In other words, the highest superscript is $s-\mu_{j_0}$ and it is attained at the index $j_0$. Recall that $f=\sum_{i=1}^N a_{I^i} x^{I^i}$ and $n_0(\alpha)=\min_i \{\alpha\cdot I^i\}$ and this minimum is attained by all $1\leq i\leq k$. For each $i$ with $I^i_{j_0}>0$, we compute the product $\alpha\cdot I^i$ and define
\begin{equation*}
n_0(\alpha)':=\min_{I^i_{j_0}>0} \{\alpha\cdot I^i\} \textnormal{ and } \sigma:=\{1\leq i\leq N|I^i_{j_0}>0,\ \alpha\cdot I^i=n_0(\alpha)'\}.
\end{equation*}
Clearly we have $n_0(\alpha)'\geq n_0(\alpha)$. According to the discussion for a monomial above, $n_0(\alpha)'$ is the smallest integer such that the coefficient of $t^{n_0(\alpha)'}$ contains a monomial divisible by $x_{j_0}^{(q)}$ for some $q$ (in fact it is divisible by $x_{j_0}^{(\alpha_{j_0})}$). Moreover, the coefficient of $t^{n_0(\alpha)'}$ is equal to
\begin{equation}
\label{eqn_P1}
P_1(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}) +\textnormal{ other terms without } x_{j_0},
\end{equation}
where $P_1(x_1,x_2,\ldots,x_{n+1})=\sum_{i\in \sigma} a_{I^i}x^{I^i}$. Clearly if $n_0(\alpha)'=n_0(\alpha)$, then $\sigma \subset \{1,2,\ldots,k\}$. Otherwise, if $n_0(\alpha)'>n_0(\alpha)$, then $\sigma \subset \{k+1,\ldots,N\}$. By the discussion for the case of a monomial, we have
\begin{equation}
\label{eqn_T0}
T_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}) =\frac {\partial P_1(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})}{\partial x_{j_0}^{(\alpha_{j_0})}}.
\end{equation}
For each fixed index $j$, similar arguments for the highest superscript of $x_j$ also hold.
\begin{rem}
Considering the weight of the first term in equation (\ref{eqn_P1}), we get $I^i\cdot \alpha =n_0(\alpha)'$ for each $i\in \sigma$. Hence each monomial in $T_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})$ has weight $n_0(\alpha)'-\alpha_{j_0}=I^i\cdot \alpha -\alpha_{j_0}$ for every $i\in \sigma$. Considering the weight of the first term in equation (\ref{eqn_16}), we get $s=I^i\cdot \alpha-\alpha_{j_0}+s-\mu_{j_0}$ for each $i\in \sigma$, or equivalently, $\mu_{j_0}=I^i\cdot \alpha-\alpha_{j_0}$. But $s-\mu_{j_0}$ is the highest superscript appearing in the coefficient of $t^s$. Therefore, we have
\begin{equation}
\label{eqn_mu}
\mu_{j_0}=\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}} \{I^i\cdot \alpha-\alpha_j \},
\end{equation}
and the minimum is attained when $i\in \sigma$ and $j=j_0$. The condition $I_j^i>0$ is equivalent to $\frac{\partial P_1}{\partial x_{j}^{(\alpha_{j})}} \neq 0$.
\end{rem}
Recall that $C^m=\psi_m(\pi^{-1}(0))$ is a contact locus in $X_m$ and
\begin{equation*}
C_\alpha ^m:=C^m\cap (\cap_{1\leq j\leq n+1} \textnormal{Cont}^{\alpha_j} (x_j)_m)
\end{equation*}
is a contact locus in $C^m$ for each $(n+1)$-tuple $\alpha$. The following lemma gives an upper bound for the dimension of $C^m_\alpha$.
is a contact locus in $C^m$ for each $(n+1)$-tuple $\alpha$. The following lemma gives an upper bound to the dimension of $C^m_\alpha$.
\begin{lem}
\label{lem_second}
Fix $\alpha=(\alpha_1,\ldots,\alpha_{n+1})\in (\mathbb{Z}_+)^{n+1}$. Let $m$ be an integer such that $\alpha_j\leq m$ for each $j$. If $\min_{1\leq i \leq N} \{\alpha \cdot I^i\}$ is attained by a unique $i$, then $C_\alpha^m=\emptyset$. If $\min_{1\leq i \leq N} \{\alpha \cdot I^i\}$ is attained by at least two different $i$'s and if $(a_{I^i})_{1\leq i\leq N} \in F_\alpha\cap F(A)$, where $F_\alpha$ is as defined in Lemma \ref{lem_first} and $F(A)$ is as defined in Definition \ref{defn_integral}, then we have
\begin{equation}
\label{eqn_14}
\dim C_\alpha^m \leq mn-\sum_{j=1}^{n+1} (\alpha_j-1)-1+ \min_{1\leq i\leq N} \{I^i\cdot \alpha \}-\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}} \{I^i\cdot \alpha-\alpha_j \},
\end{equation}
for all $m$ large enough.
\end{lem}
\begin{rem}
\label{rem_feasible}
An $(n+1)$-tuple $\alpha\in (\mathbb{Z}_+ )^{n+1}$ is called \emph{feasible} if $\min_{1\leq i \leq N} \{\alpha \cdot I^i\}$ is attained by at least two different $i$'s. Otherwise, it is called \emph{non-feasible}. According to Lemma \ref{lem_second}, $C^m_\alpha=\emptyset$ if $\alpha$ is non-feasible.
\end{rem}
\begin{proof}
Consider the affine space
\begin{equation*}
\textnormal{Spec}\ \mathbb{C}[x_1^{(\alpha_1)}, x_1^{(\alpha_1+1)}, \ldots, x_1^{(m)}, \ldots , x_{n+1}^{(\alpha_{n+1})}, x_{n+1}^{(\alpha_{n+1}+1)}, \ldots, x_{n+1}^{(m)}]=\mathbb{A}^{m(n+1)-\sum_{i=1}^{n+1} (\alpha_i-1)}.
\end{equation*}
Clearly $x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}$ are regular functions on this affine space. We denote by $U_m$ the open subset of
\begin{equation*}
\mathbb{A}^{m(n+1)-\sum_{i=1}^{n+1} (\alpha_i-1)}
\end{equation*}
where $x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}$ do not vanish. Let $m_0=\max_{1\leq j\leq n+1}\{\alpha_j\}$. When $m\geq m_0$, $C^m_\alpha$ is naturally embedded in $U_m$.
Pick an arc $\gamma$ in $X_\infty\cap(\cap_{1\leq i\leq n+1}\textnormal{Cont} ^{\alpha_i}(x_i))$. Then $\gamma$ is represented by a homomorphism of $\mathbb{C}$-algebras \begin{equation}
\label{eqn_base2}
\gamma^\ast: \mathbb{C}[x_1,\ldots,x_{n+1}]\longrightarrow \mathbb{C}[\![t]\!],\ \gamma^\ast(x_i)= \sum_{j=\alpha_i}^\infty x_i^{(j)}t^j,
\end{equation}
such that $\gamma^\ast(f)=0$ and $x_i^{(\alpha_i)}\neq 0$ for each $i$. Let us write $G_s$ for the coefficient of $t^s$ in $\gamma^\ast(f)$. Then by definition (equation (\ref{eqn_P0})), we have $G_{n_0(\alpha)}=P_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})$. Hence, as long as $m>n_0(\alpha)$, we have $C^m_\alpha \subset \V(P_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}))$. But when $\min_{1\leq i \leq N} \{\alpha \cdot I^i\}$ is attained by a unique $i$, $P_0$ is a monomial. Thus, we have $\V(P_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})) =\emptyset$ in $U_m$, which implies that $C^m_\alpha=\emptyset$. This shows that $C^m_\alpha=\emptyset$ if $\alpha$ is non-feasible. In what follows, we assume that $\alpha$ is feasible and $(a_{I_i})_i\in F_\alpha\cap F(A)$, and show that in such a case, $C_\alpha^m$ is a finite union of locally closed subsets of $U_m$, all of them having dimension less than or equal to the right-hand side of (\ref{eqn_14}).
For each $m\geq m_0$, we define $A_0:=C^m_\alpha \backslash \V (T_0)$. Since $(a_{I_i})_i\in F_\alpha$, we can find $j$ such that
\begin{equation*}
\frac{\partial P_0}{\partial x_j^{(\alpha_j)}}(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})\neq 0.
\end{equation*}
Then for each $s\geq 1$, by considering the highest superscript of $x_j$ in the coefficient of $t^{n_0(\alpha)+s}$, we get
\begin{equation}
\label{eqn_11}
G_{n_0(\alpha)+s} = \frac{\partial P_0}{\partial x_j^{(\alpha_j)}}(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})\cdot x_j^{(\alpha_j+s)} + R_s,
\end{equation}
where $R_s$ is a polynomial in $\{x_i^{(t_i)}|\alpha_i\leq t_i\leq \alpha_i+s\textnormal{ for all } i\neq j\textnormal{, }\alpha_j\leq t_j<\alpha_j+s\}$. For each $s\geq 1$, if we consider the highest superscript among all $x_i$, for $1\leq i\leq n+1$, in the coefficient of $t^{n_0(\alpha)'+s}$, we get
\begin{equation}
\label{eqn_12}
G_{n_0(\alpha) '+s} = T_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})\cdot x_{j_0}^{(\alpha_{j_0}+s)} + \tilde{R}_s,
\end{equation}
where $\tilde{R}_s$ is a polynomial in $\{x_i^{(t_i)}|\alpha_i\leq t_i\leq \alpha_{j_0}+s\textnormal{ for all } i\neq j_0\textnormal{, }\alpha_{j_0}\leq t_{j_0}<\alpha_{j_0}+s\}$.
Note that
\begin{equation*}
G_{n_0(\alpha)'+m-\alpha_{j_0}}=T_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{(\alpha_{n+1})})\cdot x_{j_0}^{(m)}+\tilde{R}_{m-n_0(\alpha)'}.
\end{equation*}
Every variable in the above expression of $G_{n_0(\alpha)'+m-\alpha_{j_0}}$ has a superscript $\leq m$. The same holds for $G_k$, with $n_0(\alpha)\leq k< n_0(\alpha)'+m-\alpha_{j_0}$. Hence each $\V(G_k)$, with $n_0(\alpha)\leq k\leq n_0(\alpha)'+m-\alpha_{j_0}$, can be considered as a closed subset of $U_m$.
We claim that if $m>n_0(\alpha)'$, then
\begin{equation*}
A_0 = \V(G_{n_0(\alpha)}, G_{n_0(\alpha)+1},\ldots, G_{n_0(\alpha)'+m-\alpha_{j_0}}) \backslash \V(T_0)\subset U_m.
\end{equation*}
In fact, if we embed $A_0$ naturally in $A_\infty:=\textnormal{Spec}\ \mathbb{C} [x_i^{(s_i)}|1\leq i\leq n+1,s_i\geq \alpha_i]$, and consider each $\V(G_k)$ as a subset of $A_\infty$, then we have
\begin{equation*}
A_0= U_m\cap (\cap_{s\geq 0}\V(G_{n_0(\alpha)+s}))\backslash \mathbb{V}(T_0).
\end{equation*}
Hence, $A_0$ is contained in $\V(P_0, G_{n_0(\alpha)+1},\ldots, G_{n_0(\alpha)'+m-\alpha_{j_0}}) \backslash \V(T_0)\subset U_m$.
On the other hand, for each $s\geq 1$, if we fix
\begin{equation*}
\{x_i^{(t_i)}|\alpha_i\leq t_i\leq \alpha_i+s\textnormal{ for all } i\neq j\textnormal{, }\alpha_j\leq t_j<\alpha_j+s\}
\end{equation*}
such that $\frac{\partial P_0}{\partial x_j^{(\alpha_j)}}(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})\neq 0$, then the equation $G_{n_0(\alpha)+s}=0$ has a unique solution for $x_j^{(\alpha_j+s)}$. Similarly, if we fix
\begin{equation*}
\{x_i^{(t_i)}|\alpha_i\leq t_i\leq \alpha_{j_0}+s\textnormal{ for all } i\neq j_0\textnormal{, }\alpha_{j_0}\leq t_{j_0}<\alpha_{j_0}+s\}
\end{equation*}
such that $T_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}) \neq 0$, then the equation $G_{n_0(\alpha)'+s}=0$ has a unique solution for $x_{j_0}^{(\alpha_{j_0}+s)}$. The existence of solutions for $G_{n_0(\alpha)+s}=0$ and $G_{n_0(\alpha)'+s}=0$, for each $s\geq 1$, shows that every element in $U_m\cap\V(P_0, G_{n_0(\alpha)+1},\ldots, G_{n_0(\alpha)'+m-\alpha_{j_0}}) \backslash \V(T_0)$ can be lifted to an element in $X_\infty$, and hence contained in $A_0$. Moreover, we see that each equation $G_{n_0(\alpha)+s}=0$ or $G_{n_0(\alpha)'+s}=0$ cuts down the dimension exactly by $1$. This shows that the codimension of $A_0$ in $U_m$ is exactly the number of equations unless $A_0=\emptyset$. We conclude that if $A_0\neq \emptyset$, then
\begin{eqnarray*}
\dim(A_0)&=&
m(n+1)-\sum_{i=1}^{n+1} (\alpha_i-1)+n_0(\alpha)-(n_0(\alpha)'-\alpha_{j_0})-1-m\\
&=& mn-\sum_{i=1}^{n+1} (\alpha_i-1)-1+n_0(\alpha)-\mu_{j_0}.
\end{eqnarray*}
Now suppose that $\psi_m(\gamma)\in C^m_\alpha\cap \V(T_0)$. Then the first term on the right-hand side of equation (\ref{eqn_12}) vanishes. If we delete the first term and rearrange the equation to get a new highest-superscript term, we claim that for $s$ sufficiently large (independent of $m$), the equation (\ref{eqn_12}) becomes
\begin{equation*}
G_{n_0(\alpha)'+s}=T_1\cdot x_{j_1}^{(s-\mu_1)}+ \textnormal{Remaining Terms without }x_{j_1}^{(s-\mu_1)}
\end{equation*}
for some polynomial $T_1$ and some number $\mu_1$, with $s-\mu_1$ being the highest superscript.
To prove this, let us consider the monomials in the expression of $G_{n_0(\alpha)'+s}$ after deleting $T_0\cdot x_{j_0}^{(\alpha_{j_0}+s)}$. According to the proof of Lemma \ref{lem_forth}, they are of the form $\prod_{j=1}^{n+1}\prod_{k=1}^{I^i_j} x_j^{(\beta^j_k)}$ for some $i$, with $\beta^j_k\geq \alpha_j$ and $\sum_{j,k}\beta^j_k=n_0(\alpha)'+s$. Hence, the number
\begin{equation*}
\max_{s\geq 1}\{\max_{j,k}\{\beta^j_k\}-s\}
\end{equation*}
is bounded above. In fact, it is bounded above by $\alpha_{j_0}$. Suppose that the maximum is attained by some $s_0\geq 1$ and $(\bar\beta^j_k)_{j,k}$, with $\bar\beta^{j_1}_1=\max_{j,k}\{\bar\beta^j_k\}$. In other words, the highest superscript appears in the monomial
\begin{equation*}
x_1^{(\bar\beta^1_1)}\ldots x_1^{\big(\bar\beta^1_{I^i_1}\big)}\ldots \widehat{x_{j_1}^{(\bar\beta^{j_1}_1)} }\ldots x_{n+1}^{\big(\bar\beta^{n+1}_{I^i_{n+1}}\big)}\cdot x_{j_1}^{(\bar\beta^{j_1}_1)}.
\end{equation*}
On the other hand, for each $s\geq s_0$, $G_{n_0(\alpha)'+s}$ contains the monomial
\begin{equation*}
x_1^{(\bar\beta^1_1)}\ldots x_1^{\big(\bar\beta^1_{I^i_1}\big)}\ldots \widehat{x_{j_1}^{(\bar\beta^{j_1}_1)} }\ldots x_{n+1}^{\big(\bar\beta^{n+1}_{I^i_{n+1}}\big)}\cdot x_{j_1}^{(\bar\beta^{j_1}_1+s-s_0)}.
\end{equation*}
This shows that $\max_{s\geq 1}\{\max_{j,k}\{\beta^j_k\}-s\}$ is attained by all $s\geq s_0$ and the same $j_1$. Hence the highest superscript in $G_{n_0(\alpha)'+s}$ is equal to $s-\mu_1$ for some fixed $\mu_1$ and for $s$ sufficiently large. We also see that if $M\cdot x_{j_1}^{(s-\mu_1)}$ is a monomial in $G_{n_0(\alpha)'+s}$ that contains the highest superscript, then $M\cdot x_{j_1}^{(s'-\mu_1)}$ is a monomial in $G_{n_0(\alpha)'+s'}$ that contains the highest superscript for each $s'>s$. Moreover, for every such monomial, the weight of $M$ is equal to $n_0(\alpha)'+\mu_1$. There are only finitely many monomials with a fixed weight. Hence, we get
\begin{equation*}
G_{n_0(\alpha)'+s}=T_1\cdot x_{j_1}^{(s-\mu_1)}+ \textnormal{Remaining Terms without }x_{j_1}^{(s-\mu_1)}
\end{equation*}
for $s$ sufficiently large and for a fixed polynomial $T_1$. Clearly, we have $s-\mu_1\leq \alpha_{j_0}+s$, or equivalently, $\mu_1\geq -\alpha_{j_0}$.
Let $m_1$ be an integer that is at least the largest superscript appearing in $T_1$ and that satisfies $m_1\geq m_0$. Then for each $m\geq m_1$, we define $A_1:=C^m_\alpha\cap \V(T_0)\backslash \V(T_1)\subset U_m$. With the same analysis as above we can show that
\begin{equation*}
A_1= \V(T_0,G_{n_0(\alpha)},G_{n_0(\alpha)+1},\ldots,G_{n_0(\alpha)' +\mu_1+m}) \backslash \V(T_1)
\end{equation*}
and that each $G_i$ cuts down dimension exactly by $1$. Hence either $A_1=\emptyset$ or
\begin{eqnarray*}
\dim(A_1)&\leq&
m(n+1)-\sum_{i=1}^{n+1} (\alpha_i-1)+n_0(\alpha)-1-n_0(\alpha)'-\mu_1-m\\
&\leq& mn-\sum_{i=1}^{n+1} (\alpha_i-1)-1+n_0(\alpha)-(n_0(\alpha)'-\alpha_{j_0})\\
&=& mn-\sum_{i=1}^{n+1} (\alpha_i-1)-1+n_0(\alpha)-\mu_{j_0}.
\end{eqnarray*}
Inductively, suppose we have $A_{k}=C^m_\alpha\cap (\cap_{l\leq k-1} \V(T_l))\backslash \V(T_k)$ for each $m\geq m_k$, for some number $m_k\geq \max_{0\leq i\leq k-1}\{m_i\}$, and when $\psi_m(\gamma)\in A_k$ we have
\begin{equation}
\label{eqn_17}
G_{n_0(\alpha)'+s}=T_k\cdot x_{j_k}^{(s-\mu_k)}+ \textnormal{Remaining Terms},
\end{equation}
for $s$ sufficiently large and some number $\mu_k\geq -\alpha_{j_0}$, where $s-\mu_k$ is the highest superscript.
Now suppose that $\psi_m(\gamma)\in C^m_\alpha\cap (\cap_{l\leq k} \V(T_l))$, then the first term of the right-hand side of equation (\ref{eqn_17}) vanishes. If we delete the first term and rearrange the equation to get a new highest-superscript term, with the same proof as above we can show that
\begin{equation*}
G_{n_0(\alpha)'+s}=T_{k+1}\cdot x_{j_{k+1}}^{(s-\mu_{k+1})}+ \textnormal{Remaining Terms},
\end{equation*}
for $s$ sufficiently large (independent of $m$). Clearly, we have $\mu_{k+1}\geq \mu_k\geq -\alpha_{j_0}$.
Let $m_{k+1}$ be an integer that is at least the highest superscript appearing in $T_{k+1}$ and that satisfies $m_{k+1}\geq m_k$. For each $m\geq m_{k+1}$, we define $A_{k+1}:=C^m_\alpha\cap (\cap_{l\leq k} \V(T_l))\backslash \V(T_{k+1})$. With the same analysis as in the case when $k=0$ we can show that
\begin{equation*}
A_{k+1}= \V(T_0,\ldots,T_k,G_{n_0(\alpha)},G_{n_0(\alpha)+1},\ldots,G_{n_0(\alpha)'+\mu_{k+1}+m}) \backslash \V(T_{k+1})
\end{equation*}
and that each $G_i$ cuts down dimension exactly by $1$. Hence either $A_{k+1}=\emptyset$ or
\begin{eqnarray*}
\dim(A_{k+1})&\leq&
m(n+1)-\sum_{i=1}^{n+1} (\alpha_i-1)+n_0(\alpha)-1-n_0(\alpha)'-\mu_{k+1}-m\\
&\leq& mn-\sum_{i=1}^{n+1} (\alpha_i-1)-1+n_0(\alpha)-\mu_{j_0}.
\end{eqnarray*}
We claim that there can be only finitely many such steps. First note that the highest superscript decreases, or equivalently, the number $\mu_k$ increases, by at least $1$ as $k$ increases by $n+1$ because we must have used the same subscript $j_k$ during $n+2$ steps. Second, the decrease in highest superscript must eventually stop because $G_{n_0(\alpha)+s}$ contains the term $\frac{\partial P_0}{\partial x_j^{(\alpha_j)}}(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})\cdot x_j^{(\alpha_j+s)}$ for some $j$, with $\frac{\partial P_0}{\partial x_j^{(\alpha_j)}}(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}) \neq 0$, and for every $s\geq 1$. Hence, for each $m$ large enough, we can decompose $C^m_\alpha$ into a finite union $\cup_{i\geq0} A_i$, where each $A_i$ either is empty or has dimension less than or equal to the number in the lemma. This completes the proof.
\end{proof}
\begin{rem}
\label{rem_equality}
From the proof of Lemma \ref{lem_second} we see that for a fixed feasible $\alpha$, coefficients $(a_{I_i})_i\in F_\alpha\cap F(A)$ and $m$ large enough, if $A_0\neq \emptyset$, then we have
\begin{equation}
\label{eqn_18}
\dim (C_\alpha^m)= mn-\sum_{j=1}^{n+1} (\alpha_j-1)-1+ \min_{1\leq i\leq N} \{I^i\cdot \alpha \}-\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}} \{I^i\cdot \alpha-\alpha_j \}.
\end{equation}
We also see that $G_{n_0(\alpha)+s}=0$ for each $s\geq 1$ has a solution as long as $T_0\neq 0$. Therefore, $A_0\neq \emptyset$ for $m$ large enough if and only if $\V(P_0)\backslash \V(T_0)\neq \emptyset$ in the torus
\begin{equation*}
(\mathbb{C}^\ast)^{n+1}= \textnormal{Spec}\ \mathbb{C}[x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots, x_{n+1}^{({\alpha_{n+1}})}]_{(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots, x_{n+1}^{({\alpha_{n+1}})})}.
\end{equation*}
Note that the condition $\V(P_0)\backslash \V(T_0)\neq \emptyset$ is independent of $m$.
\end{rem}
We also need the following result:
\begin{prop}
\label{prop_finite_components}
(\cite[Proposition 3.5]{dFEI08})
If $X$ is a variety over $k$, then the number of irreducible components of a cylinder on $X_\infty$ is finite.
\end{prop}
Recall that if we fix an integral support $A=\{I^1,\ldots, I^N\}$, then $F(A)$ and $F_\alpha$ are subsets of $(\mathbb{C}^\ast)^N$ defined in Definition \ref{defn_integral} and Lemma \ref{lem_first} respectively. Each of them contains an open dense subset of $(\mathbb{C}^\ast)^N$. An $(n+1)$-tuple $\alpha=(\alpha_1,\ldots, \alpha_{n+1})$ is called feasible if $\min_{1\leq i\leq N}\{\alpha\cdot I^i\}$ is attained by at least two different $i$'s. For each feasible $\alpha$, we define polynomials $P_0$ by equation (\ref{eqn_P0}) and $T_0$ by equation (\ref{eqn_16}). Using the above lemmas we obtain the following theorem:
\begin{thm}
\label{thm_hypersurface}
Let $A=\{I^1,\ldots, I^N\}$ be an integral support.
If
\begin{equation*}
(a_{I^i})_{1\leq i\leq N}\in \bigcap_{\textnormal{feasible }\alpha} F_\alpha\cap F(A)
\end{equation*}
and $X$ is the hypersurface in $\mathbb{A}^{n+1}$ defined by $f=\sum_{i=1}^N a_{I^i} x^{I^i}$, then $X$ is an integral hypersurface containing the origin $0$ and the invariant $\lambda$ (defined in Definition \ref{defn_lambda}) for the origin satisfies
\begin{equation}
\label{eqn_hypersurface}
\lambda(0) \geq \min \{\sum_{j=1}^{n+1} (\alpha_j-1)+1-\underset{1\leq i\leq N}{\min} \{I^i\cdot \alpha \}+\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}} \{I^i\cdot \alpha-\alpha_j \} \},
\end{equation}
where the first minimum is taken over all feasible $(n+1)$-tuples $\alpha$.
Moreover, assume the first minimum is attained at some feasible $\alpha$. If for this $\alpha$, we have $\V(P_0)\backslash \V(T_0)\neq \emptyset$ in the torus
\begin{equation*}
(\mathbb{C}^\ast)^{n+1}= \textnormal{Spec}\ \mathbb{C}[x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots, x_{n+1}^{({\alpha_{n+1}})}]_{(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots, x_{n+1}^{({\alpha_{n+1}})})},
\end{equation*}
then the inequality (\ref{eqn_hypersurface}) is in fact an equality.
\end{thm}
\begin{proof}
Let $f$ be fixed with $(a_{I^i})_{1\leq i\leq N}\in \cap_\alpha F_\alpha\cap F(A)$. By Corollary \ref{cor_integral}, $X$ is an integral hypersurface containing the origin $0$. According to Proposition \ref{prop_finite_components}, $\pi^{-1}(0)$ contains only finitely many irreducible components $C_1,\ldots,C_p,\ Z_1,\ldots,Z_q$, where each $C_j$ is thin and each $Z_i$ is fat. We have $\dim (C^m)=mn-\lambda(0)$ when $m$ is large enough. For each thin irreducible component $C_j$ of $\pi^{-1}(0)$, however, by Lemma \ref{lem_4.3} we see that $\dim(\psi_m(C_j))\leq (m+1)(n-1)$. Thus, for $m$ large enough, we have
\begin{equation*}
\dim (C^m)=\underset{1\leq i\leq q}{\max} \dim(\psi_m(Z_i)).
\end{equation*}
By Lemma \ref{lem_4.3}, the fibers of $\psi_{m+1}(\pi^{-1} (x))\rightarrow \psi_m(\pi^{-1} (x))$ have dimension $\leq n$. Since $\dim (C^m)=mn-\lambda(0)$ for every $m$ large enough, we have
\begin{equation*}
\dim (C^{m+1})= \dim(C^m)+n\textnormal{ for }m\gg 0.
\end{equation*}
This also implies that there is some $i$ such that
\begin{equation*}
\dim (C^m)=\dim(\psi_m(Z_i)) \textnormal{ for all }m\gg 0.
\end{equation*}
In fact, pick a positive integer $M$ such that $\dim (C^{m+1})= \dim(C^m)+n$ for all $m\geq M$. If $\dim(\psi_m(Z_i))< \dim (C^m)$ for some $m\geq M$, then
\begin{eqnarray*}
\dim(\psi_{m+k}(Z_i))&\leq& \dim(\psi_m(Z_i))+nk\\
&<&\dim (C^m)+nk\\
&=& \dim (C^{m+k}),
\end{eqnarray*}
for every $k\geq 0$. It follows that if there exists $m_i\geq M$ for each $i$ such that $\dim(\psi_{m_i}(Z_i))< \dim (C^{m_i})$, we have $\dim(C^m)> \max_{1\leq i\leq q}\dim(\psi_m(Z_i))$ when $m> \max_{1\leq i\leq q} \{m_i\}$, a contradiction. Therefore, by relabeling we may assume that
\begin{equation*}
\dim (C^m)=\dim(\psi_m(Z_1))\textnormal{ for all }m\geq M.
\end{equation*}
Since $X=\V(f)$ is irreducible and is not a hyperplane, we have $\V(x_i,f)\subsetneq \V(f)$ for each $i$. This implies that the fat component $Z_1$ is not contained in $\textnormal{Cont}^\infty(x_i)$ for each $i$, or equivalently, $Z_1$ does not have infinite order along any $x_i$. Choose an $(n+1)$-tuple $\alpha'=(\alpha_1',\ldots, \alpha_{n+1}')\in (\mathbb{Z}_{+})^{n+1}$ with
\begin{equation*}
\alpha_i'=\min\{\textnormal{ord}_\gamma(x_i)|\gamma\in Z_1\}\textnormal{, } 1\leq i\leq n+1.
\end{equation*}
Then if $m\geq \max\{M,\alpha_1',\ldots, \alpha_{n+1}'\}$, $C^m_{\alpha'}$ contains a dense open subset of $\psi_m(Z_1)$, hence $\dim(C^m)=\dim(C^m_{\alpha'})$. By applying Lemma \ref{lem_second}, we get for $m\gg 0$,
\begin{equation*}
\dim(C^m)\leq mn-\sum_{j=1}^{n+1} (\alpha_j'-1)-1+\underset{1\leq i\leq N}{\min} \{I^i\cdot \alpha' \}-\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}}\{I^i\cdot \alpha'-\alpha_j' \}.
\end{equation*}
Therefore, for $m\gg 0$ we have
\begin{eqnarray*}
\lambda(0)&=& mn-\dim(C^m)\\
&\geq& \sum_{j=1}^{n+1} (\alpha_j'-1)+1-\underset{1\leq i\leq N}{\min} \{I^i\cdot \alpha' \}+\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}}\{I^i\cdot \alpha'-\alpha_j' \}\\
&\geq& \underset{\textnormal{feasible }\alpha}{\min} \big\{\sum_{j=1}^{n+1} (\alpha_j-1)+1-\underset{1\leq i\leq N}{\min} \{I^i\cdot \alpha \}+\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}}\{I^i\cdot \alpha-\alpha_j \} \big\}.
\end{eqnarray*}
Now suppose the first minimum in (\ref{eqn_hypersurface}) is attained at some feasible $\alpha$ and that for this $\alpha$ we have
\begin{equation*}
\dim (C_\alpha^m)= mn-\sum_{j=1}^{n+1} (\alpha_j-1)-1+ \min_{1\leq i\leq N} \{I^i\cdot \alpha \}-\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}} \{I^i\cdot \alpha-\alpha_j \}.
\end{equation*}
Since $C^m_\alpha\subset C^m$, we have $\dim(C_\alpha^m)\leq \dim(C^m)$. On the other hand, we have $\dim(C^m_\alpha)\geq \dim(C^m_{\alpha'})=\dim(C^m)$ by the choice of $\alpha$. This shows that
\begin{eqnarray*}
\lambda(0)&=&mn-\dim(C^m_\alpha)\\
&=&\sum_{j=1}^{n+1} (\alpha_j-1)+1-\underset{1\leq i\leq N}{\min} \{I^i\cdot \alpha \}+\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}}\{I^i\cdot \alpha-\alpha_j \}.
\end{eqnarray*}
In other words, we obtain an equality if $\dim(C^m_\alpha)$ attains the upper bound in the statement of Lemma \ref{lem_second} for some feasible $\alpha$ where the first minimum in (\ref{eqn_hypersurface}) is attained. According to Remark \ref{rem_equality}, this happens when $\V(P_0)\backslash \V(T_0)\neq \emptyset$ for such an $\alpha$.
\end{proof}
Combining Lemma \ref{lem_first}, Theorem \ref{thm_hypersurface} and Proposition \ref{fundamental_prop}, we get the following corollary:
\begin{cor}
\label{cor_hypersurface}
Let $A=\{I^1,\ldots,I^N\}\subset (\mathbb{Z}_{\geq 0})^{n+1}$ be a fixed integral subset (see Definition \ref{defn_integral}). If $X$ is a hypersurface in $\mathbb{A}^{n+1}$ defined by a very general polynomial with support $A$, then $X$ is an integral hypersurface containing the origin $0$ and we have
\begin{equation}
\label{eqn_hypersurface_mld}
\widehat{\textnormal{mld}}(0;X)\geq \min \{\sum_{j=1}^{n+1} (\alpha_j-1)+1-\underset{1\leq i\leq N}{\min} \{I^i\cdot \alpha \}+\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}} \{I^i\cdot \alpha-\alpha_j \} \}+n,
\end{equation}
where the first minimum is taken over all $(n+1)$-tuples $\alpha$ such that $\underset{1\leq i\leq N}{\min} \{I^i\cdot \alpha \}$ is attained by at least two different $i$'s.
Moreover, assume the first minimum is attained at some feasible $\alpha$. If for this $\alpha$, the polynomials $P_0$ (defined in equation (\ref{eqn_P0})) and $T_0$ (defined in equation (\ref{eqn_16})) satisfy $\V(P_0)\backslash \V(T_0)\neq \emptyset$ in the torus
\begin{equation*}
(\mathbb{C}^\ast)^{n+1}= \textnormal{Spec}\ \mathbb{C}[x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots, x_{n+1}^{({\alpha_{n+1}})}]_{(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots, x_{n+1}^{({\alpha_{n+1}})})},
\end{equation*}
then the inequality (\ref{eqn_hypersurface_mld}) is in fact an equality.
\end{cor}
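The combinatorial bound in Corollary \ref{cor_hypersurface} can be evaluated mechanically. The sketch below (our own helper; the function name and the finite search box are ours) enumerates feasible tuples $\alpha$ in a box and minimizes the expression on the right-hand side of (\ref{eqn_hypersurface_mld}); since the true minimum is over all feasible $\alpha$, a box search only produces an upper bound for it, although in the examples of the next subsection the box is large enough. Whether the resulting number equals $\widehat{\textnormal{mld}}(0;X)$ then depends on the coefficients of $f$, as the examples below illustrate.
\begin{verbatim}
# Sketch only: evaluate min over feasible alpha (in a finite box) of
#   sum_j (alpha_j - 1) + 1 - min_i (I^i . alpha)
#                         + min_{i, j : I^i_j > 0} (I^i . alpha - alpha_j),
# then add n = dim X to obtain the right-hand side of the corollary.
from itertools import product

def mather_mld_bound(support, box=6):
    m = len(support[0])                       # m = n + 1 variables
    dot = lambda a, I: sum(x * y for x, y in zip(a, I))
    best = None
    for alpha in product(range(1, box + 1), repeat=m):
        weights = [dot(alpha, I) for I in support]
        n0 = min(weights)
        if weights.count(n0) < 2:
            continue                          # alpha is not feasible
        mu = min(dot(alpha, I) - alpha[j]
                 for I in support for j in range(m) if I[j] > 0)
        val = sum(a - 1 for a in alpha) + 1 - n0 + mu
        best = val if best is None else min(best, val)
    return best + (m - 1)                     # add n = dim X

# Plane curve with the support of x^2 + y^2 + xy + y^3 (cf. the example below):
print(mather_mld_bound([(2, 0), (0, 2), (1, 1), (0, 3)]))  # 1, i.e. lambda = 0
\end{verbatim}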
\subsection{Examples}
$\\$
There are many interesting examples of hypersurfaces where the inequality in Theorem \ref{thm_hypersurface} turns out to be an equality. According to Theorem \ref{thm_hypersurface}, we just need to show that the coefficients are in $\bigcap_\alpha F_\alpha\cap F(A)$, and that $\V(P_0)\backslash \V(T_0)\neq \emptyset$ for certain feasible $\alpha$. In these cases, the invariants $\lambda$ and Mather mld are independent of the coefficients in the defining equations.
\begin{exmp}
\label{exmp_binomial}
Let $X=\mathbb{V}(f)\subset \mathbb{A}^{n+1}$ be an integral variety of dimension $n$ where $f$ is a binomial. Note that $X$ is not necessarily normal. So it might not be a toric variety. The irreducibility of $X$ implies that we can write $f$ in the form
\begin{equation*}
f=ax_1^{\beta_1}x_2^{\beta_2}\cdots x_p^{\beta_p}-bx_{p+1}^{\beta_{p+1}}x_{p+2}^{\beta_{p+2}}\cdots x_{p+q}^{\beta_{p+q}},
\end{equation*}
where $p+q\leq n+1$. If $p+q<n+1$, $X$ is the product of a lower dimensional binomial hypersurface with an affine space. The question is hence reduced to the case when $p+q=n+1$. By assuming $0\in X$, we also require that $p\geq q\geq 1$. The support $A$ contains $N=2$ elements $(\beta_1,\beta_2,\ldots, \beta_p, 0,\ldots, 0)$ and $(0,\ldots, 0,\beta_{p+1},\ldots, \beta_{p+q})$. By requiring that $A$ is integral (see Definition \ref{defn_integral}), we further assume that the line segment connecting these two points does not contain any other integral point. Hence $X$ is integral if the coefficients $(a,b)\in F(A)$ according to Corollary \ref{cor_integral}. On the other hand, by applying a coordinate change that takes $x_1$ to $c\cdot x_1$ and preserves all $x_2,\ldots, x_{p+q}$, we see that any two such hypersurfaces $X$ and $X'$, with different coefficients $(a,b)$ and $(a',b')$, are isomorphic. Hence, we conclude that $F(A)=(\mathbb{C}^\ast)^2$. Clearly, an $(n+1)$-tuple $\alpha\in (\mathbb{Z}_+)^{n+1}$ is feasible (see Remark \ref{rem_feasible}) if and only if
\begin{equation}
\label{eqn_13}
\sum_{i=1}^p \alpha_i\beta_i=\sum_{i=p+1}^{p+q} \alpha_i\beta_i.
\end{equation}
For any feasible $\alpha$, following the notation in the previous section, we have $n_0(\alpha)=\sum_{i=1}^p \alpha_i\beta_i$ and
\begin{equation*}
P_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}) =f(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})
\end{equation*}
is of weight $n_0(\alpha)$. Since $\frac{\partial P_0}{\partial x_j^{(\alpha_j)}}$ is a monomial for each $j$, $F_\alpha=(\mathbb{C}^\ast)^{2}$ for each feasible $\alpha$.
Now fix a feasible $\alpha$. Clearly if $\alpha_{j_0}=\underset{1\leq j\leq n+1}{\max}\{\alpha_j\}$, then we get
\begin{equation*}
T_0(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})}) =\frac{\partial f(x_1^{({\alpha_1})},x_2^{({\alpha_2})},\ldots,x_{n+1}^{({\alpha_{n+1}})})} {\partial x_{j_0}^{(\alpha_{j_0})}}.
\end{equation*}
In particular, $P_0$ is a binomial while $T_0$ is a monomial. Hence $\V(P_0)\backslash \V (T_0)\neq \emptyset$ in the torus $(\mathbb{C}^\ast)^{n+1}$. According to Remark \ref{rem_equality}, for each feasible $\alpha$, we have
\begin{eqnarray*}
\dim (C^m_\alpha)
&=& mn-\sum_{j=1}^{n+1} (\alpha_j-1)-1+ \min_{1\leq i\leq N} \{I^i\cdot \alpha \}-\underset{1\leq j\leq n+1\textnormal{ with } I_j^i>0}{\min_{1\leq i\leq N}} \{I^i\cdot \alpha-\alpha_j \}\\
&=& mn -\sum_{j=1}^{n+1} (\alpha_j-1) +\underset{1\leq i\leq n+1}{\max}\{\alpha_i-1\}.
\end{eqnarray*}
Therefore, Theorem \ref{thm_hypersurface} implies that
\begin{equation}
\label{eqn_binomial}
\lambda=\min\{\sum_{i=1}^{n+1}\alpha_i -\underset{1\leq i\leq n+1}{\max}\alpha_i-n\},
\end{equation}
or equivalently,
\begin{equation}
\widehat{\textnormal{mld}}(0;X)=\min\{\sum_{i=1}^{n+1}\alpha_i -\underset{1\leq i\leq n+1}{\max}\alpha_i\},
\end{equation}
where the first minimum is taken over all $\alpha\in (\mathbb{Z}_+)^{n+1}$ that satisfy equation (\ref{eqn_13}).
\end{exmp}
\begin{exmp}
Consider the Whitney umbrella $X=\mathbb{V}(x^2-y^2z)$. The singular locus of $X$ has codimension $1$, so $X$ is not normal and therefore does not fall within the framework discussed in Section \ref{sec4}. Nevertheless, we can use the formula (\ref{eqn_binomial}) and conclude that $\lambda=1$ and $\widehat{\textnormal{mld}}(0;X)=3$.
\end{exmp}
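For the Whitney umbrella the same value can be recovered from formula (\ref{eqn_binomial}) by a direct search (a throwaway check; the search box is an ad hoc choice of ours, so in general it only certifies an upper bound for the minimum, which happens to be attained here):
\begin{verbatim}
# For x^2 - y^2 z: minimize alpha_1 + alpha_2 + alpha_3 - max(alpha) - n
# over positive integers with 2*alpha_1 = 2*alpha_2 + alpha_3, where n = 2.
from itertools import product

n = 2
best = min(sum(a) - max(a) - n
           for a in product(range(1, 8), repeat=3)
           if 2 * a[0] == 2 * a[1] + a[2])
print(best)  # 1, so lambda = 1 and the Mather mld at the origin is 1 + n = 3
\end{verbatim}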
\begin{rem}
The binomial hypersurfaces are nice examples where $\lambda$ and Mather mld can be computed directly in a simple form. Note that the result is independent of coefficients $a$ and $b$. This makes sense because we have seen that any two binomial polynomials with the same support define isomorphic hypersurfaces. However, this is not the case if $f$ is more complicated, and then $\lambda$ indeed depends on the coefficients.
\end{rem}
\begin{exmp}
Let $X$ be a curve in $\mathbb{A}^2$ defined by $f=a_1 x^2 +a_2 y^2 +a_3 xy +a_4 y^3$. For a very general choice of coefficients $a_i$, $\lambda$ has a lower bound given by equation (\ref{eqn_hypersurface}). The lower bound is $0$, which is achieved when $\alpha_1=\alpha_2=1$.
First, assume all $a_i$ are equal to $1$. Then $X$ is integral. For any choice of feasible $\alpha$ (see Remark \ref{rem_feasible}), it is clear that $P_0(x,y)$ can only be $x^2+y^2+xy$. Thus the condition $\Delta^\alpha$ (see Condition \ref{open_condition}) is satisfied, or equivalently, we have $(1,1,1,1)\in\cap_\alpha F_\alpha$. Since $T_0(x,y)=2x$ or $2y$, we get $\V(P_0)\backslash \V(T_0)\neq \emptyset$ in the torus $(\mathbb{C}^\ast)^2$. By Theorem \ref{thm_hypersurface}, we conclude that $\lambda=0$, and the minimum in equation (\ref{eqn_hypersurface}) is attained at the tuple $\alpha$ with $\alpha_1=\alpha_2=1$.
Now instead we assume that $a_1=a_2=a_4=1$ and $a_3=2$. Then $X$ is still integral, but the condition $\Delta^\alpha$ is no longer satisfied. By computing $\dim(\psi_m(\pi^{-1}(0)))$ directly from the definition, it can be shown that $\lambda=1$.
\end{exmp}
\begin{exmp}
\label{exmp_2}
Let $X\subset \mathbb{A}^{n+1}$ be a hypersurface defined by $f=\sum_{i=1}^{n+1} x_i^{b_i}$, with $n\geq 2$. As Lemma \ref{lem_second} suggests, we consider only feasible $(n+1)$-tuples $\alpha$ (see Remark \ref{rem_feasible}). Clearly, $\alpha$ is feasible if and only if $\underset{1\leq i\leq n+1}{\min} \{ b_i\alpha_i\}$ is attained by at least two different $i$'s.
Note that for any feasible $\alpha$, $\frac{\partial P_0}{\partial x_j^{(\alpha_j)}}$ is always a monomial for each $j$. Thus, we have $\cap_{\textnormal{feasible }\alpha} F_\alpha=(\mathbb{C}^\ast)^{n+1}$. Clearly, when $n\geq 2$, $X$ is an integral hypersurface.
Similarly, $T_0$ is always a monomial for any feasible $\alpha$. So we conclude $\mathbb{V}(P_0)\backslash \mathbb{V}(T_0)\neq \emptyset$. According to Theorem \ref{thm_hypersurface}, we get
\begin{eqnarray}
\label{eqn_15}
\lambda &=& \min\{ \sum_{i=1}^{n+1} (\alpha_i-1) +1- \underset{1\leq i\leq n+1}{\min} \{ b_i\alpha_i\} + \underset{1\leq i\leq n+1}{\min} \{(b_i-1)\alpha_i\}\}\textnormal{, or}
\end{eqnarray}
\begin{eqnarray*}
\widehat{\textnormal{mld}}(0;X) &=& \min\{ \sum_{i=1}^{n+1} (\alpha_i-1) +1- \underset{1\leq i\leq n+1}{\min} \{ b_i\alpha_i\} + \underset{1\leq i\leq n+1}{\min} \{(b_i-1)\alpha_i\}\}+n,
\end{eqnarray*}
where the first minimum is taken over all feasible $\alpha$.
\end{exmp}
There are many classical examples that fall into the category of Example \ref{exmp_2}. A large portion of the following class of examples are of this type.
\begin{exmp}
Consider here the ADE singularities. All the varieties here are integral.
\item (1) Singularities of type $A_k$: $X$ is defined by $f=x_1^{k+1}+x_2^2+\cdots + x_n^2$ for $n\geq 3$.
Choose multi-index $\alpha$ with $\alpha_i=1$ for $1\leq i\leq n$. The minimum weight $n_0(\alpha)$ is attained by $n-1$ monomials if $k>1$, or $n$ monomials if $k=1$. In both cases, $\alpha$ is feasible. Let $b_1=k+1$ and $b_i=2$ for $2\leq i\leq n$. Then we have
\begin{equation*}
\sum_{i=1}^{n} (\alpha_i-1) +1- \underset{1\leq i\leq n}{\min} \{ b_i\alpha_i\} + \underset{1\leq i\leq n}{\min} \{(b_i-1)\alpha_i\}=0.
\end{equation*}
Hence according to equation (\ref{eqn_15}), we get $\lambda=0$ or $\widehat{\textnormal{mld}}(0;X)=n-1$.
\item (2) Singularities of type $D_k$: $X$ is defined by $f=x_1^{k-1}+x_1x_2^2 +x_3^2+\cdots + x_n^2$ with $k\geq 4$. One checks easily that the coefficients are in $\cap_\alpha F_\alpha$.
If $n\geq 4$, then there are at least two quadratic terms. Hence $\alpha=(1,\ldots,1)$ is feasible, which achieves the minimum $0$ in equation (\ref{eqn_hypersurface}). Note that we have
\begin{equation*}
P_0=(x_3^{(1)})^2+\ldots+(x_n^{(1)})^2
\end{equation*}
and $T_0=\frac{\partial P_0}{\partial x_3^{(1)}}=2x_3^{(1)}$. Therefore, $\mathbb{V}(P_0)\backslash \mathbb{V}(T_0)\neq \emptyset$. By Theorem \ref{thm_hypersurface}, we get $\lambda=0$ or $\widehat{\textnormal{mld}}(0;X)=n-1$.
When $n=3$, the minimum $1$ of equation (\ref{eqn_hypersurface}) is achieved when $\alpha=(2,1,2)$. With similar analysis, we obtain $\lambda=1$ or $\widehat{\textnormal{mld}}(0;X)=n=3$.
\item (3) Singularities of type $E_6$: $X$ is defined by $f=x_1^4+x_2^3+x_3^2+\ldots+x_n^2$. This belongs to Example \ref{exmp_2}. So we use equation (\ref{eqn_15}).
If $n\geq 4$, with $\alpha=(1,\ldots,1)$, we get $\lambda=0$ or $\widehat{\textnormal{mld}}(0;X)=n-1$. When $n=3$, the minimum is achieved when $\alpha=(1,2,2)$, and we get $\lambda=1$ or $\widehat{\textnormal{mld}}(0;X)=n=3$.
\item (4) Singularities of type $E_7$: $X$ is defined by $f=x_1^3x_2+x_2^3+x_3^2+\ldots+x_n^2$.
This is very similar to case (2). Again one checks easily that the coefficients satisfy Condition $\Delta^\alpha$ for all feasible multi-indices $\alpha$.
If $n\geq 4$, $\alpha=(1,\ldots,1)$ is feasible. Similar to (2) we get $\lambda=0$ or $\widehat{\textnormal{mld}}(0;X)=n-1$.
When $n=3$, $\alpha=(2,2,3)$ is feasible and it gives the minimum in equation (\ref{eqn_hypersurface}). A simple analysis similar to the ones above shows that we have an equality, and hence $\lambda=2$ or $\widehat{\textnormal{mld}}(0;X)=n+1=4$.
\item (5) Singularities of type $E_8$: $X$ is defined by $f=x_1^5+x_2^3+x_3^2+\ldots+x_n^2$. This belongs to Example \ref{exmp_2} so we can apply formula (\ref{eqn_15}).
When $n\geq 4$, we get $\lambda=0$ or $\widehat{\textnormal{mld}}(0;X)=n-1$. The minimum is attained when $\alpha=(1,\ldots,1)$. When $n=3$, we have $\lambda=2$ or $\widehat{\textnormal{mld}}(0;X)=n+1=4$, and it is attained when $\alpha=(2,2,3)$.
\end{exmp}
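The values of $\lambda$ listed above for the surface cases of Fermat type can be reproduced from formula (\ref{eqn_15}) by a brute-force search over feasible $\alpha$. The helper below is our own and searches only a finite box, so in general it returns an upper bound for the minimum; for the exponents used here the box is large enough and the output matches the values computed above.
\begin{verbatim}
# Sketch only: lambda for f = sum_i x_i^{b_i} via formula (eqn_15),
# minimizing over feasible alpha in a finite box.
from itertools import product

def lambda_fermat(b, box=8):
    best = None
    for alpha in product(range(1, box + 1), repeat=len(b)):
        w = [bi * ai for bi, ai in zip(b, alpha)]
        if w.count(min(w)) < 2:
            continue                      # alpha is not feasible
        val = (sum(a - 1 for a in alpha) + 1 - min(w)
               + min((bi - 1) * ai for bi, ai in zip(b, alpha)))
        best = val if best is None else min(best, val)
    return best

# Surface cases (three variables, dim X = 2):
print(lambda_fermat([2, 2, 2]))  # A_1: lambda = 0
print(lambda_fermat([4, 3, 2]))  # E_6: lambda = 1
print(lambda_fermat([5, 3, 2]))  # E_8: lambda = 2
\end{verbatim}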
\subsection{Possible generalizations}
$\\$
We only treat the case when the hypersurface is defined by a very general polynomial with a fixed support. An obvious question is: what can we say if the hypersurface is defined by a general polynomial (so that it is integral) with a fixed support? Unfortunately, the polynomials $P_0$ defined in equation (\ref{eqn_P0}) and $T_0$ defined in equation (\ref{eqn_16}) do not behave well in this generality, and our method fails.
An obvious generalization of the results in this section is to treat the class of complete intersection varieties. However, our method does not work well when there are multiple defining equations.
\begin{thebibliography}{EMY02}
\bibitem[Amb06]{Am06}
Florin Ambro.
\newblock The minimal log discrepancy.
\newblock In {\em Proceedings of the Symposium ``Multiplier ideals and arc
spaces''(RIMS 2006), K. Watanabe (Ed.), RIMS Koukyuuroku}, volume 1550, pages
121--130, 2006.
\bibitem[dF16]{dF16}
Tommaso de~Fernex.
\newblock The space of arcs of an algebraic variety.
\newblock {\em arXiv preprint arXiv:1604.02728}, 2016.
\bibitem[dFD11]{dFD11}
Tommaso de~Fernex and Roi Docampo.
\newblock Jacobian discrepancies and rational singularities. To appear in J. Eur. Math. Soc.
\newblock {\em arXiv preprint arXiv:1106.2172}, 2011.
\bibitem[dFEI07]{dFEI08}
Tommaso de~Fernex, Lawrence Ein, and Shihoko Ishii.
\newblock Divisorial valuations via arcs.
\newblock {\em Publ. RIMS, 44, (2008) 425-448}, 2007.
\bibitem[dFT16]{dFT16}
Tommaso de~Fernex and Yu-Chao Tu.
\newblock Towards a link theoretic characterization of smoothness.
\newblock {\em arXiv preprint arXiv:1608.08510}, 2016.
\bibitem[DL99]{DL99}
Jan Denef and Fran{\c{c}}ois Loeser.
\newblock Germs of arcs on singular algebraic varieties and motivic
integration.
\newblock {\em Inventiones mathematicae}, 135(1):201--232, 1999.
\bibitem[EI15]{EI13}
Lawrence Ein and Shihoko Ishii.
\newblock Singularities with respect to Mather--Jacobian discrepancies.
\newblock {\em Commutative Algebra and Noncommutative Algebraic Geometry},
2:125, 2015.
\bibitem[EM05]{EM05}
L.~Ein and M.~Mustata.
\newblock Jet schemes and singularities, Algebraic geometry---Seattle 2005, Part
2, 505--546.
\newblock In {\em Proc. Sympos. Pure Math}, volume~80, 2005.
\bibitem[EM09]{EM09}
Lawrence Ein and Mircea Mustata.
\newblock Jet schemes and singularities.
\newblock In {\em Proc. Sympos. Pure Math}, volume 80, Part 2, pages 505--546,
2009.
\bibitem[EMY02]{EMY03}
Lawrence Ein, Mircea Mustata, and Takehiko Yasuda.
\newblock Jet schemes, log discrepancies and inversion of adjunction.
\newblock {\em Invent. Math. 153 (2003), 119-135.}, 2002.
\bibitem[Ful93]{Ful93}
William Fulton.
\newblock {\em Introduction to toric varieties}.
\newblock Number 131 in Annals of Mathematics Studies. Princeton University
Press, 1993.
\bibitem[IR13]{Ish13}
Shihoko Ishii and Ana~J Reguera.
\newblock Singularities with the highest mather minimal log discrepancy.
\newblock {\em Mathematische Zeitschrift}, 275(3-4):1255--1274, 2013.
\bibitem[IR15]{Ish15}
Shihoko Ishii and Ana Reguera.
\newblock Singularities in arbitrary characteristic via jet schemes.
\newblock {\em arXiv preprint arXiv:1510.05210}, 2015.
\bibitem[Ish04]{Ish03}
Shihoko Ishii.
\newblock The arc space of a toric variety.
\newblock {\em Journal of Algebra}, 278(2):666--683, 2004.
\bibitem[Ish13]{Ish11}
Shihoko Ishii.
\newblock Mather discrepancy and the arc spaces.
\newblock In {\em Annales de l'institut Fourier}, volume 63, Part 1, pages
89--111. Association des Annales de l'institut Fourier, 2013.
\bibitem[Mou11]{Mou11}
Hussein Mourtada.
\newblock Jet schemes of toric surfaces.
\newblock {\em Comptes Rendus Mathematique}, 349(9):563--566, 2011.
\bibitem[Mus14]{M14}
Mircea Mustata.
\newblock The dimension of jet schemes of singular varieties.
\newblock {\em arXiv preprint arXiv:1404.7731}, 2014.
\bibitem[Sho04]{Sh04}
Vyacheslav~Vladimirovich Shokurov.
\newblock Letters of a bi-rationalist v: Mld's and termination of log flips.
\newblock {\em Trudy Matematicheskogo Instituta imeni VA Steklova},
246:328--351, 2004.
\bibitem[Stu95]{St95}
Bernd Sturmfels.
\newblock Equations defining toric varieties. algebraic geometry?santa cruz
1995, 437--449.
\newblock In {\em Proc. Sympos. Pure Math}, volume~62, 1995.
\bibitem[Yu16]{Yu16}
Josephine Yu.
\newblock Do most polynomials generate a prime ideal?
\newblock {\em Journal of Algebra}, 459:468--474, 2016.
\end{thebibliography}
\end{document}
\begin{document}
\allowdisplaybreaks[4]
\numberwithin{equation}{section} \numberwithin{figure}{section}
\numberwithin{table}{section}
\def\O{\Omega}
\def\p{\partial}
\def\R{\mathbb{R}}
\def\argmin{\mathop{\rm argmin}}
\def\HO{H^2_0(\O)}
\def\cT{\mathcal{T}}
\def\cE{\mathcal{E}}
\def\cV{\mathcal{V}}
\def\cC{\mathcal{C}}
\def\Tsum{\sum_{T\in\cT_h}}
\def\Esum{\sum_{e\in\cE_h}}
\def\Psum{\sum_{p\in\cV_h}}
\def\EsumI{\sum_{e\in\cE_h^i}}
\def\Mean#1{\Big\{\hspace{-5pt}\Big\{\frac{\partial^2 #1}{\partial n^2}\Big\}\hspace{-5pt}\Big\}}
\def\Jump#1{\Big[\hspace{-3pt}\Big[\frac{\partial #1}{\partial n}\Big]\hspace{-3pt}\Big]}
\def\jump#1{[\hspace{-1pt}[{\partial #1}/{\partial n}]\hspace{-1pt}]}
\def\jumpTwo#1{[\hspace{-1pt}[{\partial^2 #1}/{\partial n^2}]\hspace{-1pt}]}
\def\jumpThree#1{[\hspace{-1pt}[{\partial^3 #1}/{\partial n^3}]\hspace{-1pt}]}
\def\Hh{H^2(\O;\cT_h)}
\def\Osch{\mathrm{Osc}_h}
\def\hT{h_{\scriptscriptstyle T}}
\def\hp{h_p}
\def\etaT{\eta_{\scriptscriptstyle T}}
\def\etaeone{\eta_{e,{\scriptscriptstyle 1}}}
\def\etaetwo{\eta_{e,{\scriptscriptstyle 2}}}
\def\etaethree{\eta_{e,{\scriptscriptstyle 3}}}
\def\zT{z_{\scriptscriptstyle T}}
\def\OSC{{\mathrm{Osc}(f;\cT_h)}}
\def\Osc{{\mathrm{Osc}}}
\def\tK{\tilde{K}}
\def\tu{\tilde{u}}
\def\tlam{\tilde{\lambda}}
\def\d{\displaystyle}
\def\bT{b_{\scriptscriptstyle T}}
\def\Csob{C_{\scriptscriptstyle \rm Sobolev}}
\def\CPF{C_{\scriptscriptstyle \rm PF}}
\def\tnorm{|\!|\!|}
\def\fT{\bar f_{\scriptscriptstyle T}}
\title[{\em A Posteriori} Analysis for the Obstacle Problem of Kirchhoff Plates]
{An {\em A Posteriori} Analysis of $\bm C^{\bf 0}$ Interior Penalty Methods
for the Obstacle Problem of Clamped Kirchhoff Plates}
\author[S.C. Brenner]{Susanne C. Brenner}
\address{Susanne C. Brenner, Department of Mathematics and Center for
Computation and Technology, Louisiana State University, Baton Rouge,
LA 70803} \email{[email protected]}
\author[J. Gedicke]{Joscha Gedicke}
\address{Joscha Gedicke, Department of Mathematics and Center for
Computation and Technology, Louisiana State University, Baton Rouge,
LA 70803} \email{[email protected]}
\author[L.-Y. Sung]{Li-yeng Sung} \address{Li-yeng Sung,
Department of Mathematics and Center for Computation and Technology,
Louisiana State University, Baton Rouge, LA 70803}
\email{[email protected]}
\author[Y. Zhang]{Yi Zhang}
\address{Yi Zhang, Department of Mathematics, University of Tennessee,
Knoxville, TN 37996}\email{[email protected]}
\keywords{Kirchhoff plates, obstacle problem, {\em a posteriori} analysis, adaptive,
$C^0$ interior penalty methods, discontinuous Galerkin methods, fourth order variational inequalities}
\begin{abstract}
We develop an {\em a posteriori} analysis of $C^0$ interior penalty methods
for the displacement obstacle problem of clamped Kirchhoff plates. We show that
a residual based error estimator originally designed for $C^0$ interior
penalty methods for the boundary value problem of clamped Kirchhoff plates can
also be used for the obstacle problem. We obtain reliability and efficiency
estimates for the error estimator and introduce an adaptive algorithm based on
this error estimator. Numerical results indicate that the performance of the
adaptive algorithm is optimal for both quadratic and cubic $C^0$ interior
penalty methods.
\end{abstract}
\subjclass[2010]{65N30, 65N15, 65K15, 74K20, 74S05}
\thanks{The work of the first and third authors was supported in part
by the National Science Foundation under Grant No.
DMS-13-19172. The work of the second author was supported in part by a fellowship
within the Postdoc-Program of the German Academic Exchange Service (DAAD)}
\maketitle
\section{Introduction}\label{sec:Introduction}
Let $\Omega\subset\mathbb{R}^2$ be a bounded polygonal domain, $f\in L_2(\Omega)$, $\psi\in C(\bar\Omega)\cap C^2(\Omega)$ and
$\psi<0$ on $\partial\Omega$.
The displacement obstacle problem for the clamped Kirchhoff plate is to find
\begin{equation}\label{eq:Obstacle}
u=\mathop{\rm argmin}_{v\in K}\Big[\frac12 a(v,v)-(f,v)\Big]
\end{equation}
where
\begin{equation}\label{eq:BilinearForms}
a(w,v)=\int_\Omega D^2w:D^2 v\,dx=\int_\Omega \sum_{i,j=1}^2
\Big(\frac{\partial^2 w}{\partial x_i\partial x_j}\Big)\Big(\frac{\partial^2 v}{\partial x_i\partial x_j}\Big)
dx, \quad (f,v)=\int_\Omega fv\,dx
\end{equation}
and
\begin{equation}\label{eq:KDef}
K=\{v\in H^2_0(\Omega):\,v\geq\psi\quad\text{in}\quad \Omega\}.
\end{equation}
\par
The unique solution $u\in K$ of \eqref{eq:Obstacle}--\eqref{eq:KDef}
is characterized by the variational inequality
\begin{equation*}
a(u,v-u)\geq (f,v-u)\qquad\forall\,v\in K,
\end{equation*}
which can be written in the following equivalent complementarity form:
\begin{equation}\label{eq:ComplementarityCondition}
\int_\Omega (u-\psi)\,d\lambda=0,
\end{equation}
where the Lagrange multiplier $\lambda$ is the nonnegative Borel measure defined by
\begin{equation}\label{eq:lambdaDef}
a(u,v)=(f,v)+\int_\Omega v\,d\lambda \qquad \forall\,v\in H^2_0(\Omega).
\end{equation}
\begin{remark}\label{rem:SupportOfLambda}
Since $u>\psi$ near $\partial\Omega$, the support of $\lambda$ is disjoint from $\partial\Omega$ because of
\eqref{eq:ComplementarityCondition}.
\end{remark}
\begin{remark}\label{rem:lambda}
We can treat $\lambda$ as a member of $H^{-2}(\Omega)=[H^2_0(\Omega)]'$ such that
\begin{equation*}
\langle\lambda,v\rangle=\int_\Omega v\,d\lambda \qquad \forall\,v\in H^2_0(\Omega).
\end{equation*}
\end{remark}
\par
$C^0$ interior penalty methods
\cite{EGHLMT:2002:DG3D,BSung:2005:DG4,BNeilan:2011:SingularPerturbation,
BGGS:2012:CH,Brenner:2012:C0IP,GGN:2013:C0IP}
form a natural hierarchy of
discontinuous Galerkin methods that are proven to be effective
for fourth order elliptic boundary value problems.
The goal of this paper is to develop an {\em a posteriori} error analysis of
$C^0$ interior penalty methods
for the obstacle problem defined by \eqref{eq:Obstacle}--\eqref{eq:KDef}.
While there is a substantial literature on the {\em a posteriori} error analysis of finite
element methods for second order
obstacle problems (cf.
\cite{HK:1994:MGObstacle,CN:2000:Obstacle,Veeser:2001:Obstacle,NSV:2003:Obstacle,BC:2004:Obstacle,
NSV:2005:Contact,SV:2007:Adaptive,
BHS:2008:Obstacle, BCH:2009:Obstacle,GP:2014:Obstacle,GP:2015:Quadratic,CH:2015:Obstacle}
and the references therein),
as far as we know this is the first paper on the {\em a posteriori} error analysis
for the displacement obstacle problem of Kirchhoff plates.
We note that there is a
fundamental difference between second order and fourth order obstacle problems, namely
that the Lagrange multipliers for the fourth order
discrete obstacle problems can be represented naturally as sums of Dirac point measures
(cf. Section~\ref{sec:C0IP}), which
leads to a simpler {\em a posteriori} error analysis (cf. Section~\ref{sec:Reliability} and
Section~\ref{sec:Efficiency}).
\par
The rest of the paper is organized as follows. We recall the $C^0$ interior penalty
methods in Section~\ref{sec:C0IP} and analyze a mesh-dependent boundary value problem
in Section~\ref{sec:BVP} that plays an important role in the {\em a posteriori} error analysis
carried out in Section~\ref{sec:Reliability} and Section~\ref{sec:Efficiency}.
An adaptive algorithm motivated by the {\em a posteriori} error analysis is
introduced in Section~\ref{sec:Adaptive} and we report results of several
numerical experiments
in Section~\ref{sec:Numerics}. We end the paper with some concluding remarks in Section~\ref{sec:Conclusions}.
\section{$C^0$ Interior Penalty Methods}\label{sec:C0IP}
Let $\mathcal{T}_h$ be a triangulation of $\Omega$, $\mathcal{V}_h$ be the set of the vertices of $\mathcal{T}_h$,
$\mathcal{E}_h$ be the set of the edges of $\mathcal{T}_h$, and $V_h\subset H^1_0(\Omega)$
be the $P_k$ Lagrange finite element space ($k\geq2$)
associated with $\mathcal{T}_h$. The discrete problem for the $C^0$ interior
penalty method \cite{BSZZ:2012:Kirchhoff,BSZ:2012:Obstacle} is to find
\begin{equation}\label{eq:C0IP}
u_h=\mathop{\rm argmin}_{v\in K_h}\Big[\frac12 a_h(v,v)-(f,v)\Big],
\end{equation}
where $K_h=\{v\in V_h:\, v(p)\geq\psi(p)\quad\text{for all}\;p\in \mathcal{V}_h\}$,
\begin{align*}
a_h(w,v)&=\sum_{T\in\cT_h} \int_TD^2w:D^2v\,dx+\sum_{e\in\mathcal{E}_h}\int_e\Big(\Mean{w}\Jump{v} +\Mean{v}\Jump{w}\Big)ds\\
&\hspace{40pt}+\sum_{e\in\mathcal{E}_h} \frac{\sigma}{|e|}\int_e\Jump{w}\Jump{v}\,ds,
\end{align*}
$\{\hspace{-3.5pt}\{\cdot\}\hspace{-3.5pt}\}$ denotes the average across an edge,
$[\hspace{-1.5pt}[\cdot]\hspace{-1.5pt}]$ denotes the jump across an edge, $|e|$ is the length of
the edge $e$,
and $\sigma\geq1$ is a penalty parameter large enough so that $a_h(\cdot,\cdot)$ is positive-definite
on $V_h$. Details for the notation and the choice of $\sigma$ can be found in
\cite{BSung:2005:DG4,JSY:2014:C0IP}.
\par
The unique solution $u_h\in K_h$ of \eqref{eq:C0IP} is characterized by the variational
inequality
\begin{equation*}
a_h(u_h,v-u_h)\geq (f,v-u_h)\qquad\forall\,v\in K_h,
\end{equation*}
which can be expressed in the following equivalent complementarity form:
\begin{equation}\label{eq:DiscreteComplentarityCondition}
\sum_{p\in\mathcal{V}_h}\lambda_h(p)\big(u_h(p)-\psi(p)\big)=0,
\end{equation}
where the Lagrange multipliers $\lambda_h(p)$ are defined by
\begin{equation}\label{eq:lambdahDef}
a_h(u_h,v)=(f,v)+\sum_{p\in\mathcal{V}_h}\lambda_h(p)v(p)\qquad\forall\,v\in V_h
\end{equation}
and satisfy
\begin{equation}\label{eq:SignOflambdah}
\lambda_h(p)\geq0 \qquad\forall\,p\in\mathcal{V}_h.
\end{equation}
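In an actual implementation the multipliers in \eqref{eq:lambdahDef} can be read off from the
assembled discrete system: testing \eqref{eq:lambdahDef} with the nodal basis functions of $V_h$
shows that $\lambda_h(p)$ equals the residual of the unconstrained linear system at the degree of
freedom attached to the vertex $p$, while the residual vanishes at the remaining degrees of freedom.
The following Python sketch illustrates this observation; it is only an illustration under these
assumptions (and not the code used for the experiments in Section~\ref{sec:Numerics}), with
\texttt{A}, \texttt{b}, \texttt{u} and \texttt{vertex\_dofs} as placeholder names for the matrix of
$a_h(\cdot,\cdot)$, the load vector of $(f,\cdot)$, the computed discrete solution and the indices
of the vertex degrees of freedom.
\begin{verbatim}
import numpy as np

def discrete_multipliers(A, b, u, vertex_dofs):
    # Residual of the unconstrained system; by the defining relation of the
    # discrete Lagrange multipliers it equals lambda_h(p) at the vertex
    # degrees of freedom and vanishes at all other degrees of freedom.
    residual = A @ u - b
    lam = np.zeros_like(b)
    lam[vertex_dofs] = residual[vertex_dofs]
    return lam
\end{verbatim}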
\par
We also use $\lambda_h$ to denote the measure $\sum_{p\in\mathcal{V}_h}\lambda_h(p)\delta_p$,
where $\delta_p$ is the Dirac point measure at $p$. The equation \eqref{eq:lambdahDef} can therefore
be written as
\begin{equation}\label{eq:AlternativelambdahDef}
a_h(u_h,v)=(f,v)+\int_\Omega v\,d\lambda_h\qquad\forall\,v\in V_h.
\end{equation}
\begin{remark}\label{rem:2And4}
For second order obstacle problems, the discrete Lagrange multiplier cannot be extended to
$H^{-1}(\Omega)$ as a sum of Dirac point measures since such measures do not belong to $H^{-1}(\Omega)$.
Consequently there are different choices for extending the discrete Lagrange multiplier
to $H^{-1}(\Omega)$ \cite{Veeser:2001:Obstacle,NSV:2003:Obstacle,NSV:2005:Contact}.
The fact that the Lagrange multiplier for the discrete fourth order obstacle problem can be expressed
naturally as a sum of Dirac point measures leads to the simple {\em a posteriori}
error analysis in Section~\ref{sec:Reliability} and
Section~\ref{sec:Efficiency}.
\end{remark}
\begin{remark}\label{rem:lambdah}
We can also treat $\lambda_h$ as a member of $H^{-2}(\Omega)=[H^2_0(\Omega)]'$ such that
\begin{equation*}
\langle\lambda_h,v\rangle=\int_\Omega v\,d\lambda_h=\sum_{p\in\mathcal{V}_h}\lambda_h(p)v(p) \qquad\forall\,v\in H^2_0(\Omega).
\end{equation*}
\end{remark}
\par
Let the mesh-dependent norm $\|\cdot\|_h$ be defined by
\begin{equation}\label{eq:hNormDef}
\|v\|_h^2=\sum_{T\in\cT_h} |v|_{H^2(T)}^2+\sum_{e\in\mathcal{E}_h}\frac{\sigma}{|e|}\|\jump{v}\|_{L_2(e)}^2.
\end{equation}
Note that
\begin{equation}\label{eq:SameNorm}
\|v\|_h=|v|_{H^2(\Omega)} \qquad\forall\,v\in H^2_0(\Omega).
\end{equation}
\par
The following {\em a priori} error estimate is known \cite{BSZ:2012:Obstacle,BSZZ:2012:Kirchhoff}:
\begin{equation}\label{eq:APriori}
\|u-u_h\|_h\leq Ch^\alpha,
\end{equation}
where the index of elliptic regularity $\alpha\in(\frac12,1]$ is determined by the interior angles
of $\Omega$ and can be taken to be $1$ if $\Omega$ is convex.
\par
Our goal is to develop {\em a posteriori} error estimates for $\|u-u_h\|_h$.
\par
Two useful tools for the analysis of $C^0$ interior penalty methods are the
nodal interpolation operator $\Pi_h:H^2_0(\Omega)\longrightarrow V_h$ and an enriching operator
$E_h:V_h\longrightarrow W_h\subset H^2_0(\Omega)$, where $W_h$ is the Hsieh-Clough-Tocher macro finite element space \cite{Ciarlet:1974:HCT}.
\par
The operator $E_h$ is defined by averaging
(cf. \cite[Section~4.1]{Brenner:2012:C0IP})
and hence
\begin{equation}\label{eq:Invariance}
(E_hu_h)(p)=u_h(p)\quad\text{for all}\; p\in\mathcal{V}_h.
\end{equation}
The following estimate can be found in the proof of \cite[Lemma~1]{Brenner:2012:C0IP}.
\begin{equation}\label{eq:EhFundamentalEstimate}
h_{\scriptscriptstyle T}^{-4}\|v-E_hv\|_{L_2(T)}^2\leq C\sum_{e\in\tilde\mathcal{E}_T}
\frac{1}{|e|}\|\jump{v}\|_{L_2(e)}^2 \qquad\forall\,T\in\mathcal{T}_h,
\end{equation}
where $\tilde\mathcal{E}_T$ is the set of the edges of $\mathcal{T}_h$ emanating from the vertices of $T$, and
the positive constant $C$ depends only on $k$ and the shape regularity of $\mathcal{T}_h$.
\par
From \eqref{eq:EhFundamentalEstimate} and standard inverse estimates \cite{Ciarlet:1978:FEM,BScott:2008:FEM}, we also have
\begin{alignat}{3}
h_{\scriptscriptstyle T}^{-2}\|v-E_hv\|_{L_\infty(T)}^2&\leq C\sum_{e\in\tilde\mathcal{E}_T}
\frac{1}{|e|}\|\jump{v}\|_{L_2(e)}^2 &\qquad&\forall\,T\in\mathcal{T}_h,\label{eq:EhLInfty}\\
\sum_{T\in\cT_h}|v-E_hv|_{H^2(T)}^2&\leq C\sum_{e\in\mathcal{E}_h}\frac{1}{|e|}\|\jump{v}\|_{L_2(e)}^2&\qquad&\forall\,v\in V_h,
\label{eq:EhEnergyEst}\\
\|v-E_hv\|_h^2&\leq C\sum_{e\in\mathcal{E}_h}\frac{\sigma}{|e|}\|\jump{v}\|_{L_2(e)}^2&\qquad&\forall\,v\in V_h,
\label{eq:EhhNormEst}
\end{alignat}
where the positive constant $C$ depends only on $k$ and the shape regularity of $\mathcal{T}_h$.
\section{A Mesh-Dependent Boundary Value Problem}\label{sec:BVP}
Let $z_h\in H^2_0(\Omega)$ be defined by
\begin{equation}\label{eq:zhDef}
a(z_h,v)=(f,v)+\int_\Omega v\,d\lambda_h=(f,v)+\sum_{p\in\mathcal{V}_h}
\lambda_h(p)v(p)\qquad\forall\,v\in H^2_0(\Omega).
\end{equation}
Then $u_h$ is the approximate solution of \eqref{eq:zhDef}
obtained by the $C^0$ interior penalty method.
\begin{remark}\label{rem:Braess}
The idea of considering such mesh-dependent boundary value problems was introduced in
\cite{Braess:2005:Obstacle} for second order obstacle problems.
\end{remark}
\par
A residual based error estimator \cite{BGS:2010:C0IP,Brenner:2012:C0IP} for $u_h$ (as an approximate solution of \eqref{eq:zhDef})
is given by
\begin{equation}\label{eq:BVPEstimator}
\eta_h=\Big(\sum_{e\in\mathcal{E}_h}\eta_{e,{\scriptscriptstyle 1}}^2+\sum_{e\in\mathcal{E}_h^i} (\eta_{e,{\scriptscriptstyle 2}}^2+\eta_{e,{\scriptscriptstyle 3}}^2)+\sum_{T\in\cT_h}\eta_{\scriptscriptstyle T}^2\Big)^\frac12,
\end{equation}
where $\mathcal{E}_h^i$ is the set of the edges of $\mathcal{T}_h$ interior to $\Omega$,
\begin{align}
\eta_{e,{\scriptscriptstyle 1}}&=\frac{\sigma}{|e|^\frac12}\|\jump{u_h}\|_{L_2(e)},\label{eq:etae1Def}\\
\eta_{e,{\scriptscriptstyle 2}}&=|e|^\frac12\|\jumpTwo{u_h}\|_{L_2(e)},\label{eq:etae2Def}\\
\eta_{e,{\scriptscriptstyle 3}}&=|e|^\frac32\|\jumpThree{u_h}\|_{L_2(e)},\label{eq:etae3Def}\\
\eta_{\scriptscriptstyle T}&=h_{\scriptscriptstyle T}^2\|f-\Delta^2 u_h\|_{L_2(T)}.\label{eq:etaTDef}
\end{align}
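Once the local quantities \eqref{eq:etae1Def}--\eqref{eq:etaTDef} have been computed edge by edge
and triangle by triangle, the assembly of $\eta_h$ amounts to a sum of squares. The short Python
sketch below is included only as an illustration of this step (it is not the code used for the
experiments in Section~\ref{sec:Numerics}); \texttt{eta\_e1}, \texttt{eta\_e2}, \texttt{eta\_e3}
and \texttt{eta\_T} are placeholder arrays holding $\eta_{e,1}$ over all edges, $\eta_{e,2}$ and
$\eta_{e,3}$ over the interior edges, and $\eta_{\scriptscriptstyle T}$ over the triangles.
\begin{verbatim}
import numpy as np

def total_estimator(eta_e1, eta_e2, eta_e3, eta_T):
    # eta_h squared is the sum of the squared edge and triangle contributions.
    total_sq = (np.sum(eta_e1**2) + np.sum(eta_e2**2)
                + np.sum(eta_e3**2) + np.sum(eta_T**2))
    return np.sqrt(total_sq)
\end{verbatim}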
\par
The following result will play an important role in the {\em a posteriori} error analysis
of the obstacle problem. Note that its proof is made simple by the representation of
the discrete Lagrange multiplier $\lambda_h$ as a sum of Dirac point measures supported
at the vertices of $\mathcal{T}_h$, which allows the analysis in \cite{Brenner:2012:C0IP}
to be used here.
\begin{lemma}\label{lem:ZhReliability}
There exists a positive constant $C$, depending only on $k$ and
the shape regularity of $\mathcal{T}_h$, such that
\begin{equation}\label{eq:zhuhEstimate}
\|z_h-u_h\|_h\leq C\eta_h.
\end{equation}
\end{lemma}
\begin{proof} We have an obvious estimate
\begin{equation}\label{eq:zhuhEst1}
\sum_{e\in\mathcal{E}_h}\frac{\sigma}{|e|}\left\|\Jump{(z_h-u_h)}\right\|_{L_2(e)}^2
=\sum_{e\in\mathcal{E}_h}\frac{\sigma}{|e|}\left\|\Jump{u_h}\right\|_{L_2(e)}^2\leq \sum_{e\in\mathcal{E}_h} \eta_{e,{\scriptscriptstyle 1}}^2,
\end{equation}
and it only remains to estimate $\sum_{T\in\cT_h} |z_h-u_h|_{H^2(T)}^2$.
\par
Let $E_h:V_h\longrightarrow H^2_0(\Omega)$ be the enriching operator. It follows from
\eqref{eq:EhEnergyEst} and \eqref{eq:etae1Def} that
\begin{align}\label{eq:zhuhEst2}
\sum_{T\in\cT_h} |z_h-u_h|_{H^2(T)}^2&\leq 2\sum_{T\in\cT_h}\big[
|z_h-E_hu_h|_{H^2(T)}^2+|u_h-E_hu_h|_{H^2(T)}^2\big]\\
&\leq 2|z_h-E_hu_h|_{H^2(\Omega)}^2+C\sum_{e\in\mathcal{E}_h}\eta_{e,{\scriptscriptstyle 1}}^2,\notag
\end{align}
and, by duality,
\begin{equation}\label{eq:zhuhEst3}
|z_h-E_hu_h|_{H^2(\Omega)}=\sup_{\phi\in H^2_0(\Omega)\setminus\{0\}}\frac{a(z_h-E_hu_h,\phi)}{|\phi|_{H^2(\Omega)}}.
\end{equation}
\par
In view of \eqref{eq:lambdahDef} and \eqref{eq:zhDef},
the numerator on the right-hand side of \eqref{eq:zhuhEst3}
becomes
\begin{align*}
&a(z_h-E_hu_h,\phi)=\sum_{T\in\cT_h} \int_T D^2(z_h-E_hu_h):D^2\phi\,dx\notag\\
&\hspace{40pt}=(f,\phi)+\sum_{p\in\mathcal{V}_h}\lambda_h(p)\phi(p)\notag\\
&\hspace{70pt}+\sum_{T\in\cT_h} \int_T D^2(u_h-E_hu_h):D^2\phi\,dx
-\sum_{T\in\cT_h}\int_T D^2u_h:D^2(\phi-\Pi_h\phi)\,dx\\
&\hspace{100pt}-\sum_{T\in\cT_h}\int_T D^2u_h:D^2(\Pi_h\phi)\,dx\\
&\hspace{40pt}=(f,\phi)+\sum_{p\in\mathcal{V}_h}\lambda_h(p)\phi(p)-(f,\Pi_h\phi)
-\sum_{p\in\mathcal{V}_h}\lambda_h(p)(\Pi_h\phi)(p)+a_h(u_h,\Pi_h\phi)\notag\\
&\hspace{70pt}+\sum_{T\in\cT_h} \int_T D^2(u_h-E_hu_h):D^2\phi\,dx
-\sum_{T\in\cT_h}\int_T D^2u_h:D^2(\phi-\Pi_h\phi)\,dx\\
&\hspace{100pt}-\sum_{T\in\cT_h}\int_T D^2u_h:D^2(\Pi_h\phi)\,dx.
\end{align*}
Since $\phi$ and $\Pi_h\phi$ agree on the vertices of $\mathcal{T}_h$, the two terms involving
$\lambda_h$ cancel each other and we end up with
\begin{align*}
a(z_h-E_hu_h,\phi)
=&\sum_{T\in\cT_h} \int_T D^2(u_h-E_hu_h):D^2\phi\,dx
-\sum_{T\in\cT_h}\int_T D^2u_h:D^2(\phi-\Pi_h\phi)\,dx\\
&\hspace{40pt}-\sum_{T\in\cT_h}\int_T D^2u_h:D^2(\Pi_h\phi)\,dx+a_h(u_h,\Pi_h\phi)+(f,\phi-\Pi_h\phi),\notag
\end{align*}
which is precisely the equation \cite[$(7.9)$]{Brenner:2012:C0IP} (and
which has nothing to do with either
$z_h$ or $\lambda_h$).
\par
It then follows from the estimates \cite[$(7.10)-(7.19)$]{Brenner:2012:C0IP} that
\begin{equation}\label{eq:zhuhEst4}
a(z_h-E_hu_h,\phi)\leq C\eta_h|\phi|_{H^2(\Omega)}.
\end{equation}
\par
The estimate \eqref{eq:zhuhEstimate} follows from \eqref{eq:hNormDef} and
\eqref{eq:zhuhEst1}--\eqref{eq:zhuhEst4}.
\end{proof}
\section{Reliability Estimates for the Obstacle Problem}\label{sec:Reliability}
We begin with a simple estimate.
\begin{lemma}\label{lem:UpperBdd}
There exists a positive constant $C$, depending only on $k$ and
the shape regularity of $\mathcal{T}_h$, such that
\begin{equation}\label{eq:UpperBdd}
\|u-u_h\|_h+\|\lambda-\lambda_h\|_{H^{-2}(\Omega)}\leq C\eta_h+\sqrt{\int_\Omega (\psi-E_hu_h)^+d\lambda}\;.
\end{equation}
\end{lemma}
\begin{proof}
Let $E_h:V_h\longrightarrow H^2_0(\Omega)$ be the enriching operator.
We can write
\begin{align}\label{eq:UpperBdd1}
|u-E_hu_h|_{H^2(\Omega)}^2&=a(u-E_hu_h,u-E_hu_h)\\
&=a(u-z_h,u-E_hu_h)+a(z_h-E_hu_h,u-E_hu_h),\notag
\end{align}
and, in view of \eqref{eq:SameNorm},
\eqref{eq:EhhNormEst}, \eqref{eq:etae1Def} and Lemma~\ref{lem:ZhReliability},
the second term on the right-hand side of \eqref{eq:UpperBdd1} is bounded by
\begin{align}\label{eq:UpperBdd2}
a(z_h-E_hu_h,u-E_hu_h)&\leq |z_h-E_hu_h|_{H^2(\Omega)}|u-E_hu_h|_{H^2(\Omega)}\notag\\
&\leq \big(\|z_h-u_h\|_h+\|u_h-E_hu_h\|_h\big)|u-E_hu_h|_{H^2(\Omega)}\\
&\leq C\eta_h|u-E_hu_h|_{H^2(\Omega)}.\notag
\end{align}
\par
By \eqref{eq:KDef}--\eqref{eq:lambdaDef}, \eqref{eq:DiscreteComplentarityCondition},
\eqref{eq:SignOflambdah},
\eqref{eq:Invariance} and \eqref{eq:zhDef},
the first term on the right-hand side of \eqref{eq:UpperBdd1} can be bounded as follows:
\begin{align}\label{eq:UpperBdd3}
&a(u-z_h,u-E_hu_h)=\int_\Omega (u-E_hu_h)\,d\lambda-
\sum_{p\in\mathcal{V}_h}\lambda_h(p)\big(u(p)-(E_hu_h)(p)\big)\\
&\hspace{40pt}=\int_\Omega (\psi-E_hu_h)\,d\lambda-
\sum_{p\in\mathcal{V}_h}\lambda_h(p)\big(u(p)-\psi(p)\big)\leq \int_\Omega (\psi-E_hu_h)^+\,d\lambda. \notag
\end{align}
\par
It follows from \eqref{eq:SameNorm} and \eqref{eq:UpperBdd1}--\eqref{eq:UpperBdd3} that
\begin{equation*}
\|u-E_hu_h\|_h\leq C\eta_h+\sqrt{\int_\Omega (\psi-E_hu_h)^+d\lambda}\;,
\end{equation*}
which together with \eqref{eq:EhhNormEst} implies
\begin{equation}\label{eq:UpperBdd4}
\|u-u_h\|_h\leq C\eta_h+\sqrt{\int_\Omega (\psi-E_hu_h)^+d\lambda}\;.
\end{equation}
\par
In order to estimate $\|\lambda-\lambda_h\|_{H^{-2}(\Omega)}$, we observe that
\eqref{eq:lambdaDef}, \eqref{eq:SameNorm} and \eqref{eq:zhDef}
imply
\begin{align}\label{eq:LMEst}
\|\lambda-\lambda_h\|_{H^{-2}(\Omega)}&=\sup_{v\in H^2_0(\Omega)\setminus\{0\}}\frac{\displaystyle\int_\Omega v\,d(\lambda-\lambda_h)}
{|v|_{H^2(\Omega)}}\\
&=\sup_{v\in H^2_0(\Omega)\setminus\{0\}}\frac{a(u-z_h,v)}
{|v|_{H^2(\Omega)}}
= |u-z_h|_{H^2(\Omega)}
\leq \|u-u_h\|_h+\|z_h-u_h\|_h.\notag
\end{align}
The estimate for $\|\lambda-\lambda_h\|_{H^{-2}(\Omega)}$ then follows from Lemma~\ref{lem:ZhReliability}
and \eqref{eq:UpperBdd4}.
\end{proof}
\par
We can also remove the inconvenient $E_h$ in the estimate \eqref{eq:UpperBdd}.
\begin{theorem}\label{thm:ReliableII}
There exists a positive constant $C$, depending only on $k$ and
the shape regularity of $\mathcal{T}_h$, such that
\begin{align}\label{eq:ReliableII}
\|u-u_h\|_h+\|\lambda-\lambda_h\|_{H^{-2}(\Omega)}
&\leq C\Big(\eta_h+|\lambda|^\frac12\sqrt{\max_{T\in\mathcal{T}_h}h_{\scriptscriptstyle T}\sum_{ e\in\tilde\mathcal{E}_T}|e|^{-1/2}\|\jump{u_h}\|_{L_2(e)}}\,\Big)\\
&\hspace{40pt}+ |\lambda|^\frac12\|(\psi-u_h)^+\|_{L_\infty(\Omega)}^\frac12,\notag
\end{align}
where $\tilde\mathcal{E}_T$ is the set of the edges in $\mathcal{T}_h$ that emanate from the vertices of $T$.
\end{theorem}
\begin{proof} We have
\begin{equation}\label{eq:uReliable1}
\int_\Omega (\psi-E_hu_h)^+\,d\lambda \leq
\big[\|(\psi-u_h)^+\|_{L_\infty(\Omega)}+\|u_h-E_hu_h\|_{L_\infty(\Omega)}\big]|\lambda|,
\end{equation}
and, by \eqref{eq:EhLInfty},
\begin{equation}\label{eq:uReliable2}
\|u_h-E_hu_h\|_{L_\infty(\Omega)}\leq C\max_{T\in\mathcal{T}_h}h_{\scriptscriptstyle T}\sum_{ e\in\tilde\mathcal{E}_T}|e|^{-1/2}\|\jump{u_h}\|_{L_2(e)}.
\end{equation}
\par
The estimate \eqref{eq:ReliableII} follows from \eqref{eq:UpperBdd}, \eqref{eq:uReliable1}, and \eqref{eq:uReliable2}.
\end{proof}
\begin{remark}\label{rem:ReliableII}
The estimate \eqref{eq:ReliableII} is {not} a genuine {\em a posteriori} error estimate since $|\lambda|$ is not known.
But it is useful for monitoring the asymptotic convergence of adaptive algorithms
(cf. Lemma~\ref{lem:AsymptoticConvergenceRate} and Lemma~\ref{lem:LMTest}).
\end{remark}
\begin{remark}\label{rem:Genuine}
Under the stronger assumption $\psi\in C^2(\bar\Omega)$ on the obstacle function,
one can also obtain a genuine
{\em a posteriori} error estimate by replacing $|\lambda|$ with a computable bound.
\par
Indeed, for any $w\in K$, we have
\begin{equation*}
\frac12|u|_{H^2(\Omega)}^2\leq\frac12 |w|_{H^2(\Omega)}^2-(f,w)+(f,u)
\leq \frac12 |w|_{H^2(\Omega)}^2-(f,w)+C\|f\|_{L_2(\Omega)}^2+\frac14|u|_{H^2(\Omega)}^2,
\end{equation*}
by a Poincar\'e-Friedrichs inequality \cite{Necas:2012:Direct} and the arithmetic-geometric means inequality,
and hence
\begin{equation}\label{eq:uH2Bdd}
|u|_{H^2(\Omega)}^2\leq 2|w|_{H^2(\Omega)}^2-4(f,w)+C\|f\|_{L_2(\Omega)}^2,
\end{equation}
where $C$ is a computable positive constant. Combining \eqref{eq:uH2Bdd} with the Sobolev
embedding (cf. \cite{ADAMS:2003:Sobolev})
$H^2(\Omega)\hookrightarrow C^{0,\gamma}(\Omega)$
for any $\gamma<1$, we see that there is a computable $\delta>0$ such that
$u(x)>\psi(x)$ if the distance from $x$ to $\partial\Omega$ is $<\delta$.
Therefore there is a computable $\phi\in C^\infty_c(\Omega)$ such that
$\phi=1$ on the support of $\lambda$.
\par
We then have, in view of \eqref{eq:lambdaDef} and \eqref{eq:uH2Bdd},
\begin{equation*}
|\lambda|=a(u,\phi)-(f,\phi)\leq |u|_{H^2(\Omega)}|\phi|_{H^2(\Omega)}+\|f\|_{L_2(\Omega)}\|\phi\|_{L_2(\Omega)}\leq C,
\end{equation*}
where the positive constant $C$ is computable.
\end{remark}
\section{Efficiency Estimates for the Obstacle Problem}\label{sec:Efficiency}
\par
Let the local data oscillation $\mathrm{Osc}(f;T)$ be defined by
\begin{equation*}
\mathrm{Osc}(f;T)=h_{\scriptscriptstyle T}^2\|f-\bar f_{\scriptscriptstyle T}\|_{L_2(T)},
\end{equation*}
where $\bar f_{\scriptscriptstyle T}$ is the $L_2$ projection of $f$ in the polynomial space $P_j(T)$ with
$j=\max(k-4,0)$. The global data oscillation is then given by
$$\mathrm{Osc}(f;\mathcal{T}_h)=\Big(\sum_{T\in\mathcal{T}_h}\mathrm{Osc}(f;T)^2\Big)^\frac12.$$
\begin{theorem}\label{thm:LocalEfficiencyII}
There exists a positive constant $C$, depending only on the shape regularity of $\mathcal{T}_h$, such that
\begin{alignat*}{3}
\eta_{e,{\scriptscriptstyle 1}}&\leq \frac{\sigma}{|e|^\frac12} \|\jump{(u-u_h)}\|_{L_2(e)}&\quad&\forall\,e\in\mathcal{E}_h,\\
\eta_{e,{\scriptscriptstyle 2}}&\leq C\Big[\sum_{T\in\mathcal{T}_e}\big[|u-u_h|_{H^2(T)}+\mathrm{Osc}(f;T)\big]+\|\lambda-\lambda_h\|_{H^{-2}(\Omega_e)}
\Big]&\qquad&\forall\,e\in\mathcal{E}_h^i,\\
\eta_{e,{\scriptscriptstyle 3}}&\leq C\Big[\sum_{T\in\mathcal{T}_e}\big[|u-u_h|_{H^2(T)}+\mathrm{Osc}(f;T)\big]+\|\lambda-\lambda_h\|_{H^{-2}(\Omega_e)}\\
&\hspace{40pt}+\frac{1}{|e|}\|\jump{(u-u_h)}\|_{L_2(e)}^2\Big]&\qquad&\forall\,e\in\mathcal{E}_h^i,\\
\eta_{\scriptscriptstyle T}&\leq C\big(|u-u_h|_{H^2(T)}+\mathrm{Osc}(f;T)+\|\lambda-\lambda_h\|_{H^{-2}(T)}
\big)&\qquad&\forall\,T\in\mathcal{T}_h,
\end{alignat*}
where $\mathcal{T}_e$ is the set of the two triangles that share the edge $e$ and
$\Omega_e$ is the interior of $\bigcup_{T\in\mathcal{T}_e}\bar T$.
\end{theorem}
\begin{proof}
The estimate for $\eta_{e,1}$ is obvious. The other estimates are obtained by modifying the
arguments in \cite[Section~5.3]{Brenner:2012:C0IP}.
\par
In the proof of the estimate \cite[$(5.17)$]{Brenner:2012:C0IP} (with $v=u_h$), we replace the relation
\begin{equation*}
\int_T (\bar f_{\scriptscriptstyle T}-\Delta^2 u_h)z\,dx=\int_T D^2(u-u_h):D^2z\,dx+\int_T (\bar f_{\scriptscriptstyle T}-f)z\,dx
\end{equation*}
by
\begin{align}\label{eq:AreaBubble}
\int_T (\bar f_{\scriptscriptstyle T}-\Delta^2 u_h)z\,dx=\int_T D^2(u-u_h):D^2z\,dx+\int_T (\bar f_{\scriptscriptstyle T}-f)z\,dx
-\int_T z\,d(\lambda-\lambda_h)
\end{align}
to obtain the estimate
\begin{align*}
\int_T (\bar f_{\scriptscriptstyle T}-\Delta^2u_h)z&\leq C\big(h_{\scriptscriptstyle T}^{-2}|u-u_h|_{H^2(T)}
+\|f-\bar f_{\scriptscriptstyle T}\|_{L_2(T)}+h_{\scriptscriptstyle T}^{-2}\|\lambda-\lambda_h\|_{H^{-2}(T)}\big)\|z\|_{L_2(T)},
\end{align*}
which then leads to the estimate for $\eta_{\scriptscriptstyle T}$. Note that \eqref{eq:AreaBubble} holds because
the bubble function $z$ vanishes at the vertices of $\mathcal{T}_h$.
\par
In the proof of the estimate \cite[$(5.26)$]{Brenner:2012:C0IP} (with $v=u_h$), we replace the relation
\begin{align*}
&\sum_{T\in\mathcal{T}_e}\Big(-\int_T D^2u_h:D^2(\zeta_1\zeta_2)\,dx
+\int_T(\Delta^2u_h)(\zeta_1\zeta_2)\,dx\Big)\\
&\hspace{40pt}=
\sum_{T\in\mathcal{T}_e}\int_TD^2(u-u_h):D^2(\zeta_1\zeta_2)\,dx-\sum_{T\in\mathcal{T}_e}\int_T (f-\Delta^2u_h)
(\zeta_1\zeta_2)\,dx
\end{align*}
that appears in \cite[$(5.24)$]{Brenner:2012:C0IP} by
\begin{align}\label{eq:EdgeBubble1}
&\sum_{T\in\mathcal{T}_e}\Big(-\int_T D^2u_h:D^2(\zeta_1\zeta_2)\,dx
+\int_T(\Delta^2u_h)(\zeta_1\zeta_2)\,dx\Big)\notag\\
&\hspace{40pt}=
\sum_{T\in\mathcal{T}_e}\int_TD^2(u-u_h):D^2(\zeta_1\zeta_2)\,dx-\sum_{T\in\mathcal{T}_e}\int_T
(f-\Delta^2u_h)(\zeta_1\zeta_2)\,dx\\
&\hspace{70pt} -\int_{\Omega_e}(\zeta_1\zeta_2)\,d(\lambda-\lambda_h)\notag
\end{align}
to obtain the estimate
\begin{align*}
&\sum_{T\in\mathcal{T}_e}\Big(-\int_T D^2u_h:D^2(\zeta_1\zeta_2)\,dx
+\int_T(\Delta^2u_h)(\zeta_1\zeta_2)\,dx\Big)\\
&\hspace{30pt}\leq C\Big[\sum_{T\in\mathcal{T}_e}\big(h_{\scriptscriptstyle T}^{-2}|u-u_h|_{H^2(T)}+\|f-\Delta^2u_h\|_{L_2(T)}\big)
+h_{\scriptscriptstyle T}^{-2}\|\lambda-\lambda_h\|_{H^{-2}(\Omega_e)}\Big]\|\zeta_1\zeta_2\|_{L_2(\Omega_e)},
\end{align*}
which then leads to the estimate for $\eta_{e,2}$. Note that
\eqref{eq:EdgeBubble1} holds because the bubble function $\zeta_1\zeta_2$ vanishes
at the vertices of $\mathcal{T}_h$.
\par
Finally, in the proof of the estimate \cite[$(5.32)$]{Brenner:2012:C0IP} (with $v=u_h$),
we replace the relation
\begin{align*}
&\sum_{T\in\mathcal{T}_e}\Big(\int_T D^2u_h:D^2(\zeta_2\zeta_3)\,dx
-\int_T (\Delta^2 u_h)(\zeta_2\zeta_3)\,dx\Big)\\
&\hspace{40pt}=\sum_{T\in\mathcal{T}_e}\int_T D^2(u_h-u):D^2(\zeta_2\zeta_3)\,dx+
\sum_{T\in\mathcal{T}_e}\int_T(f-\Delta^2u_h)(\zeta_2\zeta_3)\,dx
\end{align*}
that appears in \cite[$(5.30)$]{Brenner:2012:C0IP} by
\begin{align}\label{eq:EdgeBubble2}
&\sum_{T\in\mathcal{T}_e}\Big(\int_T D^2u_h:D^2(\zeta_2\zeta_3)\,dx
-\int_T (\Delta^2 u_h)(\zeta_2\zeta_3)\,dx\Big)\notag\\
&\hspace{40pt}=\sum_{T\in\mathcal{T}_e}\int_T D^2(u_h-u):D^2(\zeta_2\zeta_3)\,dx+
\sum_{T\in\mathcal{T}_e}\int_T(f-\Delta^2u_h)(\zeta_2\zeta_3)\,dx\\
&\hspace{70pt}+\int_{\Omega_e} (\zeta_2\zeta_3)\,d(\lambda-\lambda_h)
\notag
\end{align}
to obtain the estimate
\begin{align*}
&\sum_{T\in\mathcal{T}_e}\Big(\int_T D^2u_h:D^2(\zeta_2\zeta_3)\,dx
-\int_T (\Delta^2 u_h)(\zeta_2\zeta_3)\,dx\Big)\\
&\hspace{30pt}\leq C\Big[\sum_{T\in\mathcal{T}_e}\big(h_{\scriptscriptstyle T}^{-2}|u-u_h|_{H^2(T)}+\|f-\Delta^2u_h\|_{L_2(T)}\big)
+h_{\scriptscriptstyle T}^{-2}\|\lambda-\lambda_h\|_{H^{-2}(\Omega_e)}\Big]\|\zeta_2\zeta_3\|_{L_2(\Omega_e)},
\end{align*}
which then leads to the estimate for $\eta_{e,{\scriptscriptstyle 3}}$. Again \eqref{eq:EdgeBubble2} holds because
the bubble function $\zeta_2\zeta_3$ vanishes at the vertices of $\mathcal{T}_h$.
\end{proof}
\par
We can also prove a global efficiency result under the following assumption:
\begin{align}\label{eq:RefinementLevels}
&\text{The triangles (resp. interior edges) of $\mathcal{T}_h$ can be divided into $n$ disjoint
groups}\notag\\
&\text{so that the ratio of the diameters of any two triangles
(resp. interior edges) in}\\
&\text{the same group is bounded above by a constant $\tau\geq1$.}\notag
\end{align}
\begin{theorem}\label{thm:GlobalEfficiency}
Under assumption \eqref{eq:RefinementLevels},
there exists a positive constant $C$ depending only on $\tau$, $k$ and
the shape regularity of $\mathcal{T}_h$ such that
\begin{equation}\label{eq:GlobalEfficiency}
\eta_h\leq C\big(\sqrt{\sigma}\|u-u_h\|_h+\sqrt{n}\|\lambda-\lambda_h\|_{H^{-2}(\Omega)}+\mathrm{Osc}(f;\mathcal{T}_h)\big).
\end{equation}
\end{theorem}
\begin{proof} We have a trivial estimate
\begin{equation*}
\sum_{e\in\mathcal{E}_h}\eta_{e,{\scriptscriptstyle 1}}^2\leq C\sum_{e\in\mathcal{E}_h}\frac{\sigma^2}{|e|}\left\|\Jump{(u-u_h)}\right\|_{L_2(e)}^2.
\end{equation*}
\par
For the estimate involving $\eta_{\scriptscriptstyle T}$,
we first write
$\mathcal{T}_h$ as the disjoint union $\mathcal{T}_{h,1}\cup\cdots\cup\mathcal{T}_{h,n}$ so that the ratio of the diameters of
any two triangles in $\mathcal{T}_{h,j}$
is bounded by $\tau$. For $1\leq j\leq n$, the subdomain $\Omega_j$ is the interior of
$\cup_{T\in\mathcal{T}_{h,j}}\bar T$.
\par
For any $T\in\mathcal{T}_{h,j}$, let $z_{\scriptscriptstyle T}$ be the bubble function in \cite[Section~5.3.2]{Brenner:2012:C0IP}
associated with $T$ and we define
$z_j=\sum_{T\in\mathcal{T}_{h,j}}z_{\scriptscriptstyle T} \in H^2_0(\Omega_j)$.
It follows from \cite[$(5.16)$]{Brenner:2012:C0IP}, \eqref{eq:AreaBubble}
and a standard inverse estimate that
\begin{align*}
\|\bar f_{\scriptscriptstyle T}-\Delta^2u_h\|_{L_2(T)}^2&\leq C\int_T (\bar f_{\scriptscriptstyle T}-\Delta^2u_h)z_{\scriptscriptstyle T}\,dx\\
&\leq C\Big(\big[h_{\scriptscriptstyle T}^{-2}|u-u_h|_{H^2(T)}
+\|f-\bar f_{\scriptscriptstyle T}\|_{L_2(T)}\big]\|z_{\scriptscriptstyle T}\|_{L_2(T)}-\int_T z_{\scriptscriptstyle T}\,d(\lambda-\lambda_h)\Big)
\end{align*}
and hence
\begin{align*}
&\sum_{T\in\mathcal{T}_{h,j}}\|\bar f_{\scriptscriptstyle T}-\Delta^2 u_h\|_{L_2(T)}^2
\leq C\Big(\sum_{T\in\mathcal{T}_{h,j}}\big[h_{\scriptscriptstyle T}^{-2}|u-u_h|_{H^2(T)}
+\|f-\bar f_{\scriptscriptstyle T}\|_{L_2(T)}\big]\|z_{\scriptscriptstyle T}\|_{L_2(T)}\\
&\hspace{180pt}-\int_{\Omega_j}z_j\,d(\lambda-\lambda_h)\Big)\\
&\hspace{40pt}\leq C\Big[\Big(\sum_{T\in\mathcal{T}_{h,j}}\big[h_{\scriptscriptstyle T}^{-4}|u-u_h|_{H^2(T)}^2
+\|f-\bar f_{\scriptscriptstyle T}\|_{L_2(T)}^2\big]\Big)^\frac12\Big(\sum_{T\in\mathcal{T}_{h,j}}\|z_{\scriptscriptstyle T}\|_{L_2(T)}^2\Big)^\frac12\\
&\hspace{80pt}+
\|\lambda-\lambda_h\|_{H^{-2}(\Omega_j)}
\Big(\sum_{T\in\mathcal{T}_{h,j}}h_{\scriptscriptstyle T}^{-4}\|z_{\scriptscriptstyle T}\|_{L_2(T)}^2\Big)^\frac12\Big]
\end{align*}
by a standard inverse estimate.
\par
Therefore we have
\begin{align}\label{eq:BubbleEst1}
\sum_{T\in\mathcal{T}_{h,j}}h_{\scriptscriptstyle T}^4\|\bar f_{\scriptscriptstyle T}-\Delta^2u_h\|_{L_2(T)}^2 &\leq
C\Big(\sum_{T\in\mathcal{T}_{h,j}}\big[h_{\scriptscriptstyle T}^4\|f-\bar f_{\scriptscriptstyle T}\|_{L_2(T)}^2+
|u-u_h|_{H^2(T)}^2\big] \\
&\hspace{60pt}+\|\lambda-\lambda_h\|_{H^{-2}(\Omega_j)}^2\Big)\notag
\end{align}
because (cf. \cite[$(5.16)$]{Brenner:2012:C0IP})
$$\|z_{\scriptscriptstyle T}\|_{L_2(T)}\approx \|\bar f_{\scriptscriptstyle T}-\Delta^2u_h\|_{L_2(T)}$$
and the diameters $h_{\scriptscriptstyle T}$ are comparable for $T\in\mathcal{T}_{h,j}$.
\par
It follows from \eqref{eq:BubbleEst1} that
\begin{align*}
\sum_{T\in\cT_h}\eta_{\scriptscriptstyle T}^2&= \sum_{T\in\cT_h}h_{\scriptscriptstyle T}^4\|f-\Delta^2u_h\|_{L_2(T)}^2\\
&\leq
\sum_{j=1}^n \sum_{T\in\mathcal{T}_{h,j}}2h_{\scriptscriptstyle T}^4\big[\|\bar f_{\scriptscriptstyle T}-\Delta^2u_h\|_{L_2(T)
}+\|f-\bar f_{\scriptscriptstyle T}\|_{L_2(T)}\big]^2\\
&\leq C\sum_{j=1}^n \Big(\sum_{T\in\mathcal{T}_{h,j}}\big[h_{\scriptscriptstyle T}^4\|f-\bar f_{\scriptscriptstyle T}\|_{L_2(T)}^2+
|u-u_h|_{H^2(T)}^2\big] \\
&\hspace{60pt}+\|\lambda-\lambda_h\|_{H^{-2}(\Omega_j)}^2\Big)\\
&\leq C\Big(\mathrm{Osc}(f;\mathcal{T}_h)^2+\sum_{T\in\mathcal{T}_h}|u-u_h|_{H^2(T)}^2
+n\|\lambda-\lambda_h\|_{H^{-2}(\Omega)}^2\Big),
\end{align*}
where we have also used the trivial estimate
$
\|\lambda-\lambda_h\|_{H^{-2}(\Omega_j)}\leq \|\lambda-\lambda_h\|_{H^{-2}(\Omega)}.
$
\par
The estimates for $\eta_{e,{\scriptscriptstyle 2}}$ and $\eta_{e,{\scriptscriptstyle 3}}$ can be established by
using \eqref{eq:EdgeBubble1}, \eqref{eq:EdgeBubble2} and results in
\cite[Sections~5.3.3 and 5.3.4]{Brenner:2012:C0IP}. Their derivations are
similar to the derivation for $\eta_{\scriptscriptstyle T}$ and hence are omitted.
\end{proof}
\section{An Adaptive Algorithm}\label{sec:Adaptive}
In view of the efficiency estimates in Section~\ref{sec:Efficiency},
we will use $\eta_h$ from \eqref{eq:BVPEstimator} as the error indicator in the adaptive loop
\begin{equation*}
\textsf{Solve}\longrightarrow \textsf{Estimate}\longrightarrow \textsf{Mark}\longrightarrow
\textsf{Refine}
\end{equation*}
to define an adaptive algorithm for the $C^0$ interior penalty methods for
\eqref{eq:Obstacle}--\eqref{eq:KDef}.
\par
In the step \textsf{Solve}, we compute the solution of the discrete obstacle problem \eqref{eq:C0IP} by
a primal-dual active set method \cite{BIK:1999:PDAS,HIK:2003:PDAS}. In the step
\textsf{Estimate}, we
compute $\eta_{e,{\scriptscriptstyle 1}}$, $\eta_{e,{\scriptscriptstyle 2}}$, $\eta_{e,{\scriptscriptstyle 3}}$
and $\eta_{\scriptscriptstyle T}$ defined in \eqref{eq:etae1Def}--\eqref{eq:etaTDef}. In the step \textsf{Mark}, we use
the D\"orfler marking strategy \cite{Dorfler:1996:AdaptiveConvergence} to mark a minimum number of
triangles and edges whose contributions exceed $\theta\eta_h$ for some
$\theta\in(0,1)$. In the step \textsf{Refine},
we refine the marked triangles and edges followed by a closure algorithm that preserves the conformity
of the triangulation.
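The marking step admits a simple realization. The Python sketch below is only an illustration of
the D\"orfler bulk criterion under stated assumptions and is not the code used for the experiments
in Section~\ref{sec:Numerics}: it assumes that the squared local contributions to $\eta_h^2$ (one
entry per triangle or edge) have been collected in an array, and it returns the indices of a
smallest leading block whose cumulative contribution reaches the fraction $\theta$ of the total.
Whether the bulk criterion is applied to $\eta_h$ or to $\eta_h^2$ is an implementation choice
that we leave open here.
\begin{verbatim}
import numpy as np

def doerfler_mark(local_contributions, theta=0.5):
    # Sort the squared local contributions in decreasing order and mark the
    # smallest leading block whose cumulative sum reaches theta times the total.
    order = np.argsort(local_contributions)[::-1]
    cumulative = np.cumsum(local_contributions[order])
    k = int(np.searchsorted(cumulative, theta * cumulative[-1])) + 1
    return order[:k]
\end{verbatim}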
\par
In the adaptive setting the subscript $h$ will be replaced by the subscript $\ell$, where $\ell=0,1,\,\ldots$ denotes the
level of refinements.
The adaptive algorithm generates a sequence of triangulations $\mathcal{T}_\ell$ of $\Omega$,
a sequence of solutions
$u_\ell\in V_\ell$ of the discrete obstacle problems and a sequence of error indicators
$\eta_\ell$.
\par
According to Theorem~\ref{thm:ReliableII}, we can use the following result to monitor
the asymptotic convergence rate of the adaptive algorithm.
\begin{lemma}\label{lem:AsymptoticConvergenceRate}
Suppose $\eta_\ell=O(N_\ell^{-\gamma})$, where $N_\ell$ is the number of degrees of freedom
$($dof$)$ at
the refinement level $\ell$. Then we have
\begin{equation}\label{eq:AsymptoticConvergence}
\|u-u_\ell\|_\ell+\|\lambda-\lambda_\ell\|_{H^{-2}(\Omega)}=O(N_\ell^{-\gamma})
\end{equation}
provided that
\begin{align}
Q_{\ell,1}=\sqrt{\max_{T\in\mathcal{T}_\ell}h_{\scriptscriptstyle T}\sum_{ e\in\tilde\mathcal{E}_T}|e|^{-1/2}\|\jump{u_\ell}\|_{L_2(e)}}&=O(N_\ell^{-\gamma}),\label{eq:MaxJump}\\
Q_{\ell,2}=\|(\psi-u_\ell)^+\|_{L_\infty(\Omega)}^\frac12 &=O(N_\ell^{-\gamma}).\label{eq:Violation}
\end{align}
In particular, the estimate \eqref{eq:AsymptoticConvergence} holds if $Q_{\ell,1}$
and $Q_{\ell,2}$ are dominated by $\eta_\ell$.
\end{lemma}
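In practice the exponent $\gamma$ can be estimated from the computed pairs $(N_\ell,\eta_\ell)$ by
a least-squares fit in logarithmic coordinates. The following Python sketch is an illustration
only (not the code used in Section~\ref{sec:Numerics}); \texttt{N} and \texttt{eta} are placeholder
arrays holding the numbers of degrees of freedom and the error indicators over the refinement
levels.
\begin{verbatim}
import numpy as np

def empirical_rate(N, eta):
    # Fit log(eta) = -gamma*log(N) + c by least squares and return gamma.
    slope, _ = np.polyfit(np.log(N), np.log(eta), 1)
    return -slope
\end{verbatim}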
\par
Note that $\|\lambda-\lambda_h\|_{H^{-2}(\Omega)}$ is not computable. However, we can
test the convergence of $\|\lambda-\lambda_h\|_{H^{-2}(\Omega)}$ indirectly as follows.
Let $\phi\in C^\infty_c(\Omega)$ be equal to $1$ on the supports of $\lambda$ and the $\lambda_\ell$'s.
Then we have
\begin{equation}\label{eq:LMIndirect}
|\lambda|-|\lambda_\ell|=\int_\Omega \phi\,d(\lambda-\lambda_\ell)
\leq |\phi|_{H^2(\Omega)}\|\lambda-\lambda_\ell\|_{H^{-2}(\Omega)},
\end{equation}
which implies
\begin{align}\label{eq:DscreteLMEstimates} |\lambda_\ell|-|\lambda_{\ell+1}|
&=(|\lambda_\ell|-|\lambda|)+(|\lambda|-|\lambda_{\ell+1}|)\\
&\leq |\phi|_{H^2(\Omega)}\big(\|\lambda-\lambda_\ell\|_{H^{-2}(\Omega)}
+\|\lambda-\lambda_{\ell+1}\|_{H^{-2}(\Omega)}\big).\notag
\end{align}
\par
Let $\Lambda_\ell$ be defined by
\begin{equation}\label{eq:LambdaEllDef}
\Lambda_\ell=\big|\,|\lambda_\ell|-|\lambda_{\ell+1}|\,\big|.
\end{equation}
The following result is an immediate consequence of Lemma~\ref{lem:AsymptoticConvergenceRate}
and \eqref{eq:DscreteLMEstimates}.
\begin{lemma}\label{lem:LMTest}
Suppose $\eta_\ell=O(N_\ell^{-\gamma})$, where $N_\ell$ is the number of dof at
the refinement level $\ell$. Then we have
\begin{equation*}
\Lambda_\ell=O(N_\ell^{-\gamma})
\end{equation*}
provided that \eqref{eq:MaxJump} and \eqref{eq:Violation} are valid.
\end{lemma}
\begin{remark}\label{rem:lambdaEst}
In view of \eqref{eq:LMIndirect}, we can also replace $|\lambda|$ by $|\lambda_\ell|$
in \eqref{eq:ReliableII} to obtain a true {\em a posteriori} error estimate that is
asymptotically reliable under the assumptions of Lemma~\ref{lem:AsymptoticConvergenceRate}.
\end{remark}
\section{Numerical Experiments}\label{sec:Numerics}
In this section we report numerical results that demonstrate the estimate \eqref{eq:ReliableII}
and illustrate the performance of the adaptive algorithm
for quadratic and cubic $C^0$ interior penalty methods. We choose the penalty parameter $\sigma$
to be
$6$ (resp. $18$) for the quadratic (resp. cubic) $C^0$ interior penalty method.
We also take $\theta$ to be $0.5$ in the
D\"orfler marking strategy.
\par
We will consider three examples. The first one concerns
a problem on the unit square with known
exact solution. The second one is about a problem on an $L$-shaped domain with a two-dimensional
coincidence set (where $u=\psi$)
that has a fairly smooth boundary. The third example is also about
a problem on an $L$-shaped domain
but with a coincidence set that is one dimensional. For the second and third examples
where the exact solution is not known, we estimate the error
$\|u-u_\ell\|_\ell$ by using a reference solution computed on the mesh obtained by
a uniform refinement of the last mesh generated by the refinement procedure.
\par
In each of the experiments for the adaptive algorithm, we will present figures that display
the convergence histories for
$\|u-u_\ell\|_\ell$ and $\eta_\ell$, and for the quantities $Q_{\ell,1}$ and $Q_{\ell,2}$ defined in
\eqref{eq:MaxJump} and \eqref{eq:Violation}. We also present tables that contain
numerical results for the quantity $\Lambda_\ell$ defined in \eqref{eq:LambdaEllDef}
and examples of adaptively generated meshes.
\subsection{Example 1} \label{subsec:Example1}
In this example we consider an obstacle problem on the
unit square $\Omega=(-0.5,0.5)^2$ from \cite[Example~1]{BSZZ:2012:Kirchhoff} with $f=0$, $\psi=1-|x|^2$ and nonhomogeneous
boundary conditions,
whose exact solution is given by
\begin{equation*}
u(x)=\begin{cases}
C_1|x|^2 \ln(|x|) + C_2|x|^2 + C_3\ln(|x|)+C_4 & \qquad r_0 < |x|\\[6pt]
1-|x|^2 & \qquad |x|\leq r_0
\end{cases},
\end{equation*}
where
$r_0\approx 0.18134453$,
$C_1\approx 0.52504063$,
$C_2\approx -0.62860905$,
$C_3\approx 0.017266401$ and
$C_4\approx 1.0467463$.
\par
For this example the coincidence set
is the disc centered at the origin with radius $r_0$ whose
boundary is the free boundary, and we have
$|\lambda|=8\pi C_1\approx 13.1957.$
\par
Due to the nonhomogeneous boundary conditions, we modify
the discrete obstacle problem (cf. \cite{BSZZ:2012:Kirchhoff}) to find
\begin{equation*}
u_h = \mathop{\rm argmin}_{v \in K_h}\Big[\frac{1}{2} a_h(v,v) - F(v)\Big],
\end{equation*}
where $K_h= \{ v \in V_h : v-\Pi_h u \in H^1_0(\Omega), \, v(p) \geq \psi(p)
\quad \forall\,p \in \mathcal{V}_h \}$,
\begin{align*}
F(v) &= (f,v) + \sum_{e \in \mathcal{E}_h^b} \int_e
\left( \Mean{v} + \frac{\sigma}{|e|}
\Jump{v} \right)
\Jump{u}ds,
\end{align*}
and $\mathcal{E}_h^b$ is the set of the edges of $\mathcal{T}_h$ that are on the boundary of $\Omega$.
We also modify
the residual based error estimator:
\begin{equation*}
\eta_h=\Big(\sum_{e\in\mathcal{E}_h^i}\eta_{e,{\scriptscriptstyle 1}}^2+\sum_{e\in\mathcal{E}_h^i} (\eta_{e,{\scriptscriptstyle 2}}^2+\eta_{e,{\scriptscriptstyle 3}}^2)+\sum_{T\in\cT_h}\eta_{\scriptscriptstyle T}^2
+\sum_{e\in\mathcal{E}_h^b}\sigma^2|e|^{-1}\|\jump{(u_h-u)}\|_{L_2(e)}^2\Big)^\frac12.
\end{equation*}
\par
In the first experiment we
solve the discrete problem with the $P_2$ element
on uniform meshes and
compute the quantity
\begin{align}\label{eq:QDef}
Q_h&=C\Big(\eta_h+|\lambda|^\frac12\sqrt{\max_{T\in\mathcal{T}_h}h_{\scriptscriptstyle T}\sum_{ e\in\tilde\mathcal{E}_T}|e|^{-1/2}\|\jump{u_h}\|_{L_2(e)}}\,\Big)\\
&\hspace{40pt}+ |\lambda|^\frac12\|(\psi-u_h)^+\|_{L_\infty(\Omega)}^\frac12\notag
\end{align}
that appears on the right-hand side of \eqref{eq:ReliableII}, with $C=0.32$ and
$|\lambda|=13.196$. The results for $\|u-u_h\|_h/Q_h$ (cf.
Table~\ref{table:ReliabilityTest})
clearly demonstrate the estimate \eqref{eq:ReliableII}.
\begin{table}[hh]
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
$h$ & $2^{-1}$ & $2^{-2}$ &$2^{-3}$ & $2^{-4}$ &$2^{-5}$ &$2^{-6}$ &$2^{-7}$&
$2^{-8}$ &$2^{-9}$ & $2^{-10}$\\
\hline
$\|u-u_h\|_h/Q_h$ & $0.93$ & $0.94$ & $0.96$ & $0.97$ & $0.97$ & $0.97$ &$0.98$&
$0.98$ & $0.99$ & $1.04$
\end{tabular}
\par
\caption{Numerical results for the estimate \eqref{eq:ReliableII}}
\label{table:ReliabilityTest}
\end{table}
\par
In the second experiment we solve the discrete
obstacle problem with the cubic element
on uniform and adaptive meshes. We observe optimal (resp. suboptimal) convergence
rate for adaptive (resp. uniform) meshes in Figure~\ref{fig:P3Example1}\hspace{1pt}(a) and
also the reliability of $\eta_\ell$. Furthermore the optimal $O(N_\ell^{-1})$ convergence
rate of $\|u-u_\ell\|_\ell$ is justified by
Figure~\ref{fig:P3Example1}\hspace{1pt}(b) and Lemma~\ref{lem:AsymptoticConvergenceRate}.
\begin{figure}
\caption{Convergence histories for the cubic $C^0$ interior penalty method
for Example~1: (a) $\|u-u_\ell\|_\ell$ and
$\eta_\ell$, (b) $\eta_\ell$, $Q_{\ell,1}$ and $Q_{\ell,2}$}
\label{fig:P3Example1}
\end{figure}
\par
According to Lemma~\ref{lem:LMTest} and
Figure~\ref{fig:P3Example1}\hspace{1pt}(b), the magnitude of $\Lambda_\ell$
should be $O(N_\ell^{-1})$. This is confirmed by the
results in Table~\ref{table:LMExample1}, where $N_\ell$ increases from $N_0=49$ to
$N_{20}=231328$.
\begin{table}[hh]
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c}
$\ell$ & $0$ & $1$ &$2$ & $3$ & $4$ & $5$ &$6$ & $7$ & $8$ &$9$ & $10$\\
\hline
$\Lambda_\ell N_\ell$ & $155$ & $250$ & $60.5$ & $214$ & $164$ & $99.6$ & $101$
& $75.8$ & $21.8$ & $31.9$ & $17.9$ \\
\hline \hline
$\ell$ & $11$ & $12$ &$13$ & $14$ & $15$ & $16$ & $17$ & $18$& $19$ &$20$ \\
\hline
$\Lambda_\ell N_\ell$ & $12.2$ & $1.48$ & $23.8$ & $5.37$ &$0.403$ & $4.65$
& $20.5$ &$51.1$ & $259$ & $730$
\end{tabular}
\caption{$\Lambda_\ell N_\ell$ for the adaptive cubic $C^0$ interior penalty method for
Example~1}
\label{table:LMExample1}
\end{table}
\par
An adaptive mesh with roughly 3000 nodes is depicted in Figure~\ref{fig:MeshExp2} and
strong refinement near the free boundary is observed.
\begin{figure}
\caption{Adaptive mesh
for the cubic $C^0$ interior penalty method for Example~1}
\label{fig:MeshExp2}
\end{figure}
\subsection{Example 2}\label{subsec:Example2}
In this example we consider the obstacle problem
from \cite[Example~4]{BSZZ:2012:Kirchhoff} for a clamped plate occupying the $L$-shaped domain
$\Omega=(-0.5,0.5)^2\setminus[0,0.5]^2$ with $f=0$ and
$\displaystyle \psi(x)=1-\Big[\frac{(x_1+1/4)^2}{0.2^2} + \frac{x_2^2}{0.35^2}\Big]$.
The coincidence set for this problem is presented in
Figure~\ref{fig:ContactL1}\hspace{1pt}(a).
\begin{figure}
\caption{$L$-shaped domain for Example~2: (a) Coincidence set for the obstacle problem
(b) Adaptive mesh with $\approx 3000$ nodes
for the $P_2$ element
(c) Adaptive mesh with $\approx 5000$ nodes for the $P_3$ element}
\label{fig:ContactL1}
\end{figure}
\par
In the first experiment
we solve the discrete obstacle problem with the $P_2$ element
on uniform and adaptive meshes. Optimal (resp. suboptimal)
convergence rate for adaptive (resp. uniform) meshes and the reliability of $\eta_\ell$ are
observed in
Figure~\ref{fig:P2Example2}\hspace{1pt}(a), and
the $O(N_\ell^{-1/2})$ convergence rate of $\|u-u_\ell\|_\ell$ is justified
by Figure~\ref{fig:P2Example2}\hspace{1pt}(b) and Lemma~\ref{lem:AsymptoticConvergenceRate}.
\begin{figure}
\caption{Convergence histories for the quadratic $C^0$ interior penalty method
for Example~2: (a) $\|u-u_\ell\|_\ell$ and
$\eta_\ell$, (b) $\eta_\ell$, $Q_{\ell,1}$ and $Q_{\ell,2}$}
\label{fig:P2Example2}
\end{figure}
\par
The $O(N_\ell^{-1/2})$ bound for $\Lambda_\ell$ predicted by
Lemma~\ref{lem:LMTest} and Figure~\ref{fig:P2Example2}\hspace{1pt}(b)
is observed in Table~\ref{table:LMP2Example2}, where
$N_\ell$ increases from $65$ to $827483$.
\begin{table}[hh]
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
$\ell$ & $0$ & $1$ &$2$ & $3$ & $4$ & $5$ &$6$ & $7$ & $8$ \\
\hline
$\Lambda_\ell N_\ell^{1/2}$ & $2715$ & $391$ & $637$ & $756$ & $1454$ & $654$ & $613$
& $467$ & $411$ \\
\hline \hline
$\ell$ & $9$ & $10$ &$11$ & $12$ & $13$ & $14$ & $15$ & $16$& \\
\hline
$\Lambda_\ell N_\ell^{1/2}$ & $360$ & $149$ &$255$ &$105$ & $144$ & $70$ & $72$ & $52$
\end{tabular}
\caption{$\Lambda_\ell N_\ell^{1/2}$ for the adaptive quadratic $C^0$ interior penalty method for
Example~2}
\label{table:LMP2Example2}
\end{table}
\par
An adaptive mesh with roughly 3000 nodes is displayed in
Figure~\ref{fig:ContactL1}\hspace{1pt}(b), where
we observe a strong refinement near the reentrant corner. In contrast the refinement near the
free boundary is mild. This is due to the fact that away from the reentrant corner
the solution belongs to $H^3$ (cf. \cite{Frehse:1971:VarInequality,BR:1980:Biharmonic})
and we are using the $P_2$ element.
\par
In the second experiment we solve the obstacle problem
with the $P_3$ element on
uniform and adaptive meshes.
We observe optimal (resp. suboptimal) convergence rate for adaptive (resp. uniform) meshes
in Figure~\ref{fig:P3Example2}\hspace{1pt}(a) and that $\eta_\ell$ is reliable in both cases.
Moreover the $O(N_\ell^{-1})$ convergence rate of $\|u-u_\ell\|_\ell$ is
justified by Figure~\ref{fig:P3Example2}\hspace{1pt}(b) and Lemma~\ref{lem:AsymptoticConvergenceRate}.
\begin{figure}
\caption{Convergence histories for the cubic $C^0$ interior penalty method
for Example~2: (a) $\|u-u_\ell\|_\ell$ and
$\eta_\ell$, (b) $\eta_\ell$, $Q_{\ell,1}$ and $Q_{\ell,2}$}
\label{fig:P3Example2}
\end{figure}
\par
The results for $\Lambda_\ell$ are reported in Table~\ref{table:LMP3Example2},
where the $O(N_\ell^{-1})$ bound for $\Lambda_\ell$ predicted by
Lemma~\ref{lem:LMTest} and Figure~\ref{fig:P3Example2}\hspace{1pt}(b) can be observed.
Note that there are large oscillations at the beginning before the coincidence set has been
captured by the adaptive mesh. Here $N_\ell$ increases from
$N_0=133$ to $N_{22}=358792$.
\begin{table}[hh]
\begin{tabular}{c|c|c|c|c|c|c|c|c}
$\ell$ & $0$ & $1$ &$2$ & $3$ & $4$ & $5$ &$6$ & $7$ \\
\hline
$\Lambda_\ell N_\ell$ & $15398$ & $16877$ & $2893$ & $1035$ & $11806$& $8925$ & $15493$ & $5993$
\\
\hline \hline
$\ell$ & $8$ & $9$ & $10$ & $11$ & $12$ &$13$ & $14$ & $15$\\
\hline
$\Lambda_\ell N_\ell$ & $1048$ & $5162$ & $3544$ & $3271$ & $1362$ &$119$ & $580$ & $778$ \\
\hline\hline
$\ell$ & $16$ & $17$ & $18$& $19$ &$20$ & $20$ &$21$ &$22$\\
\hline
$\Lambda_\ell N_\ell$ & $77$ & $96$ & $68$ & $92$ & $147$ & $116$ &$754$ & $885$
\end{tabular}
\caption{$\Lambda_\ell N_\ell$ for the adaptive cubic $C^0$ interior penalty method for
Example~2}
\label{table:LMP3Example2}
\end{table}
\par
An adaptive mesh with roughly 5000 nodes is displayed in Figure~\ref{fig:ContactL1}\hspace{1pt}(c),
where we observe strong refinement near both the reentrant corner and the free boundary.
\goodbreak
\subsection{Example 3}\label{subsec:Example3}
In this example we consider the obstacle problem on the $L$-shaped domain
$\Omega=(-0.5,0.5)^2\setminus[0,0.5]^2$ with
$$
\psi(x)= - [\sin(2 \pi (x_1+0.5) (x_2+0.5)) \sin(4 \pi (x_1-0.5) (x_2-0.5))] - 0.35
$$
and
\begin{equation*}
f(x)=
\begin{cases}
10^3 \Big( \frac 12 e^{(x_1+0.25)^2+(x_2+0.25)^2} \Big) &\qquad x_1\leq 0, x_2 > 0 \\
0 &\qquad x_1 \leq 0, x_2 \leq 0 \\
10^3 \Big( \frac{1}{2} + [(x_1-0.25)^2+(x_2+0.25)^2]^{3/2} \Big) &\qquad x_1 \geq 0, x_2 \leq 0
\end{cases}\,.
\end{equation*}
For this example, the coincidence set is one dimensional
(cf. Figure~\ref{fig:ContactSetL2}\hspace{1pt}(a)).
\begin{figure}
\caption{$L$-shaped domain for Example~3: (a) Coincidence set for the obstacle problem
(b) Adaptive mesh with 11062 dof
for the $P_2$ element
(c) Adaptive mesh with 12841 dof for the $P_3$ element}
\label{fig:ContactSetL2}
\end{figure}
\par
In the first experiment we solve the obstacle problem with the $P_2$
element on uniform and adaptive meshes.
We observe optimal (resp. suboptimal) convergence rate for adaptive (resp. uniform) meshes
in Figure~\ref{fig:P2Example3}\hspace{1pt}(a)
and also the reliability of $\eta_\ell$.
The $O(N_\ell^{-1/2})$ convergence rate of $\|u-u_\ell\|_\ell$ is confirmed by
Figure~\ref{fig:P2Example3}\hspace{1pt}(b) and
Lemma~\ref{lem:AsymptoticConvergenceRate}.
\begin{figure}
\caption{Convergence histories for the quadratic $C^0$ interior penalty method
for Example~3: (a) $\|u-u_\ell\|_\ell$ and
$\eta_\ell$, (b) $\eta_\ell$, $Q_{\ell,1}$ and $Q_{\ell,2}$}
\label{fig:P2Example3}
\end{figure}
\par
The results in Table~\ref{table:LMP2Example3} agree with the
$O(N_\ell^{-1/2})$ bound for $\Lambda_\ell$ that follows from
Lemma~\ref{lem:LMTest} and Figure~\ref{fig:P2Example3}\hspace{1pt}(b).
The number of dof increases from $N_0=65$ to
$N_{12}=134096$.
\begin{table}[hh]
\begin{tabular}{c|c|c|c|c|c|c|c |c |c |c |c |c |c}
$\ell$ & $0$ & $1$ &$2$ & $3$ & $4$ & $5$ &$6$ & $7$ & $8$ & $9$& $10$ &$11$ & $12$ \\
\hline
&&&&&&&&&&&&& \\[-11pt]
$\Lambda_\ell N_\ell^{1/2}$ & $1151$ & $501$ & $92$ & $201$ & $120$ & $419$
&$201$ &$98$ & $75$ & $34$ & $76$ &$40$ & $36$\\
\end{tabular}
\caption{$\Lambda_\ell N_\ell^{1/2}$ for the adaptive quadratic $C^0$ interior penalty method for
Example~3}
\label{table:LMP2Example3}
\end{table}
\par
An adaptive mesh with 11062 dof is depicted in
Figure~\ref{fig:ContactSetL2}\hspace{1pt}(b), where
we observe that the only strong refinement is around the reentrant corner.
This is again due to the fact that
away from the reentrant corner the solution belongs to $H^3$ and we are using the $P_2$ element.
\par
In the second experiment we solve the obstacle problem
with the $P_3$ element on
uniform and adaptive meshes.
We observe optimal (resp. suboptimal) convergence rate for adaptive (resp. uniform) meshes
in Figure~\ref{fig:P3Example3}\hspace{1pt}(a) and also the reliability
of $\eta_\ell$. Furthermore the $O(N_\ell^{-1})$ convergence rate
for $\|u-u_\ell\|_\ell$ is justified by Figure~\ref{fig:P3Example3}\hspace{1pt}(b) and
Lemma~\ref{lem:AsymptoticConvergenceRate}.
\begin{figure}
\caption{Convergence histories for the cubic $C^0$ interior penalty method
for Example~3: (a) $\|u-u_\ell\|_\ell$ and
$\eta_\ell$, (b) $\eta_\ell$, $Q_{\ell,1}$ and $Q_{\ell,2}$}
\label{fig:P3Example3}
\end{figure}
\par
The results in Table~\ref{table:LMP3Example3} agree with
the $O(N_\ell^{-1})$ bound for $\Lambda_\ell$ predicted by
Lemma~\ref{lem:LMTest} and Figure~\ref{fig:P3Example3}\hspace{1pt}(b).
Here $N_\ell$ increases from $N_0=133$ to $N_{15}=88699$.
\begin{table}[hh]
\begin{tabular}{c|c|c|c|c|c|c|c|c}
$\ell$ & $0$ & $1$ &$2$ & $3$ & $4$ & $5$ &$6$ & $7$ \\
\hline
$\Lambda_\ell N_\ell$ & $11724$ &$ 1782$ &$ 1842$ & $32888$
& $1046$ & $6439$ & $2974$ & $2588$
\\
\hline \hline
$\ell$ & $8$ & $9$ & $10$ & $11$ & $12$ &$13$ & $14$ & $15$\\
\hline
$\Lambda_\ell N_\ell$ & $2781$ & $25657$ & $2215$& $3805$ & $5177$
& $2030$ & $1092$ & $2355$ \\
\end{tabular}
\caption{$\Lambda_\ell N_\ell$ for the adaptive cubic $C^0$ interior penalty method for
Example~3}
\label{table:LMP3Example3}
\end{table}
\par
An adaptive mesh with 12841 dof is depicted in
Figure~\ref{fig:ContactSetL2}\hspace{1pt}(c), where
we observe strong refinement around the reentrant corner and the coincidence set.
\section{Conclusions}\label{sec:Conclusions}
We have developed a simple {\em a posteriori} error analysis of $C^0$ interior penalty methods
for the displacement obstacle problem of clamped
Kirchhoff plates by taking advantage of the fact that the Lagrange multiplier for the
discrete problem can be represented naturally as the sum of Dirac point measures supported
at the vertices of the triangulation.
Numerical results indicate that the adaptive
algorithm based on a standard {\em a posteriori} error estimator originally
developed for boundary value problems also
performs optimally for quadratic and cubic $C^0$ interior penalty methods
for obstacle problems.
However, the theoretical justification of convergence and optimality for adaptive
$C^0$ interior penalty methods remains open even in the case
when the obstacle is absent.
\par
The results in this paper can be extended to the displacement obstacle problem of
the biharmonic equation with the boundary conditions of simply
supported plates or the Cahn-Hilliard type. In the case where $\Omega$ is convex, such
problems are related to distributed elliptic optimal control problems
with pointwise state constraints
\cite{LGY:2009:Control,GY:2011:State,BSZ:2013:OptimalControl,BSZ:2015:PP} and
can also be considered in three dimensional domains.
Adaptive finite element methods for these problems based on the approach in this paper
are ongoing projects.
\end{document}
\begin{document}
\title{Injective split systems}
\author{M. Hellmuth \and K. T. Huber \and V. Moulton \and G. E. Scholz \and P. F. Stadler}
\institute{M. Hellmuth\at
Department of Mathematics, Faculty of Science, Stockholm University, Sweden.\\
K. T. Huber \at
School of Computer Sciences, University of East Anglia, Norwich, UK.\\
V. Moulton \at
School of Computer Sciences, University of East Anglia, Norwich, UK.\\
G. E. Scholz \at
Bioinformatics Group, Department of Computer Science \& Interdisciplinary Center for Bioinformatics, Universität
Leipzig, Germany.
\email{[email protected]} \\
P. F. Stadler \at
{Bioinformatics Group, Department of Computer Science \& Interdisciplinary Center for Bioinformatics, Universität
Leipzig, Germany $\ \cdot \ $
Max Planck Institute for Mathematics in the Sciences,
Leipzig, Germany $\ \cdot \ $
Department of Theoretical Chemistry, University of Vienna, Austria $\ \cdot \ $
Facultad de Ciencias, Universidad Nacional de Colombia,
Bogot{\'a}, Colombia $\ \cdot \ $
Santa Fe Institute, Santa Fe, NM, USA.}
\\
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
A split system $\mathcal S$ on a finite set $X$, $|X|\ge3$, is a set of
bipartitions or splits of $X$ which contains all splits of
the form $\{\{x\},X-\{x\}\}$, $x \in X$. To any such split system $\mathcal S$ we can
associate the
Buneman graph $\mathcal B(\mathcal S)$ which is
essentially a median graph with leaf-set $X$ that displays
the splits in $\mathcal S$.
In this paper, we consider properties of injective split systems, that
is, split systems $\mathcal S$ with the property that
$\med_{\mathcal B(\mathcal S)}(Y) \neq \med_{\mathcal B(\mathcal S)}(Y')$ for
any two distinct 3-subsets $Y,Y'$ of $X$, where $\med_{\mathcal B(\mathcal S)}(Y)$
denotes the median in $\mathcal B(\mathcal S)$ of the three elements in $Y$ considered
as leaves in $\mathcal B(\mathcal S)$.
In particular, we show that for any set $X$ there always exists an injective split
system on $X$, and
we also give a characterization for when a split system is injective. We also consider
how complex the Buneman graph $\mathcal B(\mathcal S)$
needs to become in order for a split system $\mathcal S$ on $X$ to be injective. We do this
by introducing a quantity which we call the injective dimension of $|X|$,
as well as two related quantities, called the injective 2-split dimension and the rooted-injective
dimension. We derive some upper and lower bounds for
all three of these dimensions and also prove that some of these bounds are tight.
An underlying motivation for studying
injective split systems is that they can be used to
obtain a natural generalization of symbolic tree maps.
An important consequence of our results is that any three-way symbolic map on $X$ can
be represented using Buneman graphs.
\keywords{Median graph \and Split system \and Buneman graph}
\end{abstract}
\section{Introduction}
Let $X$ be a finite set with $|X| \ge 3$.
A {\em (three-way) symbolic map (on $X$)} is a map $\delta:{X \choose 3} \to M$
to some set $M$ of symbols.
In \cite{HMS19}, a special type of symbolic map
was studied, called a {\em symbolic tree map} which arises as follows.
Let $T$ be a phylogenetic tree with leaf-set $X$ (i.e. an unrooted tree with no vertices
of degree two and leaf set $X$
\cite{SS03})
in which each interior vertex $v$ of $T$ is labelled by some
element $l(v)$ in $M$ by some labelling map $l$.
The symbolic tree map $\delta$ associated to $T$ is
the map from ${X \choose 3}$ to $M$ that is
obtained by setting
\[
\delta(Y) = l(\med_T(Y)), \,\,\,\, Y \in {X \choose 3},
\]
where $\med_T(Y)$ is the unique interior vertex of $T$
that belongs to the shortest paths between
each pair of the three vertices in $Y$, and ${X \choose 3}$ denotes the set of all 3-subsets of $X$.
For example, for the
symbolic tree map $\delta$ associated to the labelled
tree in Figure~\ref{fig-intro}(i), $\delta(\{1,2,3\})=c$, and $\delta(\{2,3,4\})=b$.
Symbolic tree maps are closely
related to {\em symbolic ultrametrics} \cite{BD98} and also appear in
the theory of hypergraph colourings \cite{G84} -- see \cite{HMS19} for more details,
where amongst other results, a characterization of symbolic tree maps is presented.
There are also close connections with cograph theory \cite{H13} and modular
decompositions \cite{B22}.
\begin{figure}
\caption{For $X=\{1, \ldots, 5\}$}
\label{fig-intro}
\end{figure}
In \cite{HMS19} it was asked how results on symbolic tree maps might
be extended to {\em Buneman graphs} \cite{D97} (see also \cite[p.8]{B22}), as these graphs
provide a natural way to generalize phylogenetic trees. More
specifically, given a {\em split system (on $X$)}, i.\,e.\, a set $\mathcal S$ of
bipartitions or {\em splits} of $X$
that contains all splits of the form $\{\{x\}, X-\{x\}\}$, $x \in X$,
then the Buneman graph $\mathcal B(\mathcal S)$ on $X$ associated to $\mathcal S$
is essentially a median graph with leaf-set $X$
(see Section~\ref{sec:preli} for more details).
The fact that $\mathcal B(\mathcal S)$ is a median graph implies
that for any 3-subset $Y$ of $X$, there exists a unique vertex
$\med_{\mathcal B(\mathcal S)}(Y)$
in $\mathcal B(\mathcal S)$ (or {\em median}),
that lies on shortest paths between any pair of elements in $Y$.
Since every phylogenetic tree is a Buneman graph, the
notion of a symbolic tree map naturally generalises by considering labelling maps $\delta$ that can be represented by
labelling the internal vertices of some Buneman graph $\mathcal B(\mathcal S)$,
and, for any 3-subset $Y$ of $X$, taking $\delta(Y)$
to be the label of $\med_{\mathcal B(\mathcal S)}(Y)$. For example, for the
map $\delta$ associated to the interior vertex-labelled
Buneman graph depicted in Figure~\ref{fig-intro}(ii),
$\delta(\{1,2,3\})=k$, and $\delta(\{3,4,5\})=f$.
It is therefore of interest to understand under what circumstances we
can represent a symbolic map $\delta$ on $X$ by labelling the vertices of some Buneman graph
$\mathcal B(\mathcal S)$, with vertex set $V$, of a split system $\mathcal S$ on $X$.
In other words, we want to find some labelling map $l \colon V-X\to M$
such that $\delta(\{x,y,z\}) = l(\med_{\mathcal B(\mathcal S)}(Y))$ for all
$Y \in {X \choose 3}$.
Clearly this is the case if there is some split system $\mathcal S$ on $X$ such that
\begin{equation}\label{is-injective}
\med_{\mathcal B(\mathcal S)}(Y) \neq \med_{\mathcal B(\mathcal S)}(Y') \mbox{ for all distinct } Y, Y' \in {X \choose 3},
\end{equation}
since then we can just label the vertex $\med_{\mathcal B(\mathcal S)}(Y)$ by $\delta(Y)$ for every 3-subset $Y$ of $X$.
For example, the Buneman graph depicted in Figure~\ref{fig-intro}(ii)
enjoys Property~(\ref{is-injective}), whereas the phylogenetic tree $T$ (which is a Buneman graph
for the split system obtained by deleting all seven edges in turn)
in Figure~\ref{fig-intro}(i) does not since, for example, $\med_T(\{1,5,3\})=\med_T(\{1,5,4\})$.
Motivated by these considerations, we call a split system $\mathcal S$ {\em injective}
if Property~(\ref{is-injective}) holds. In this paper we shall focus on
understanding such split systems, in particular presenting
some results concerning their properties. We
now briefly summarize them.
In the next section, we begin by presenting some preliminaries
concerning Buneman graphs.
In Section~\ref{sec:c2} we then prove that for any finite set $X$ with $|X|\ge 3$,
there always exists some injective split system on $X$. In particular, we
show that the split system on $X$ which contains all
those splits $\{A,B\}$ of $X$ with $\min\{|A|,|B|\} \le 2$,
and the split system that is obtained by deleting any pair of
edges in a cycle with vertex set $X$ are both injective (Theorem~\ref{circbu}).
In particular, as mentioned above, it follows that any
symbolic map $\delta$ on a set $X$
can be represented by some Buneman graph.
In Section~\ref{sec:dice}, we provide a characterization of
injective split systems (Theorem~\ref{dices}).
This characterization is obtained by considering
how the restriction of a split system on $X$ to small subsets of $X$
partitions these subsets.
In particular, it implies that it can be decided if a split system $\mathcal S$ on $X$ is injective
or not by considering the restriction of $\mathcal S$ to subsets of $X$ with size at most 6.
In general, since we can always represent a symbolic map by some
Buneman graph, we would like to find representations
that are as simple as possible.
Since for any split system $\mathcal S$ the Buneman graph $B(\mathcal S)$ is an
isometric subgraph of an $|\mathcal S|$-cube in which the
convex hull of any isometric cycle of length $k$ is a $k$-cube, $k\ge 3$, a
natural measure for the complexity of a split system $\mathcal S$
is the dimension of the largest isometric $k$-cube in $B(\mathcal S)$.
We call this quantity the \emph{dimension} of $\mathcal S$; for
example, the split systems in Figure~\ref{fig-intro}(i) and (ii) have
dimension 1 and 2, respectively.
In Section~\ref{sec:id}, we investigate the
notion of the \emph{injective dimension} $\ID(n)$ which we define to be
the smallest dimension of any injective split system on a set
of size $n$, $n \ge 3$. In particular, as well as giving the values of $\ID(n)$ for all $n \le 8$, we
show that $\ID(n) \leq \lfloor \frac{n}{2} \rfloor$, and that $\ID(n) \ge 3$ for all $n
\ge 8$ (Theorem~\ref{cor:ID-numbers}).
As an immediate corollary to this result it follows that
to represent arbitrary symbolic maps on sets $X$ of size 6 or more
using Buneman graphs, Buneman graphs that contain 3-cubes are required.
We continue by considering two variants of the injective dimension.
The first variant, $\ID_2(n)$, is considered in Section~\ref{sec:id2} and is
given by restricting the definition of $\ID(n)$ to split systems $\mathcal S$
for which every split $\{A,B\} \in \mathcal S$
has $\min\{|A|,|B|\} \le 2$. We show that for all $n \geq 5$,
$\lfloor \frac{n}{2} \rfloor \le \ID_2(n) \leq n-3$ (Theorem~\ref{id2-lb}) which implies that
$\ID_2(5)=2$.
The second variant, $\ID^r(n)$, is considered in Section~\ref{sec:idr}, and is
defined by modifying the definition of injectivity as follows:
We say that a split system $\mathcal S$ on $X$ is \emph{rooted-injective} relative to some $r \in X$ if
\[
\med_{\mathcal B(\mathcal S)}(Z\cup\{r\}) \neq \med_{\mathcal B(\mathcal S)}(Z' \cup \{r\}) \mbox{ for all distinct } Z, Z' \in {X \choose 2}.
\]
The quantity $\ID^r(n)$ is given in an analogous way to $\ID(n)$
by taking the minimum over rooted-injective split systems relative to $r$.
Using a recent result from \cite{B22} concerning rooted median graphs, we show
that, in contrast to $\ID_2(n)$,
$\ID^r(n)=2$ for all $n \ge 4$. We conclude in Section~\ref{sec:discuss} with a
discussion of some open problems.
\section{Preliminaries} \label{sec:preli}
\subsection{Graphs and median graphs} \label{sec:graphs}
We consider undirected graphs $G=(V,E)$
whose vertex sets $V$ are finite with $|V| \ge 2$, and whose
edge sets $E$ are contained in $\binom{V}{2}$, i.e., graphs without loops and multiple
edges. A \emph{leaf} in such a graph is a vertex with degree one.
A \emph{cycle} is a connected graph in which every vertex has degree two.
The \emph{length} of a cycle $C$ is the number of edges or, equivalently, the
number of vertices in $C$.
A connected graph that does not contain a cycle is called a \emph{tree}.
If $G$ is connected then we denote by $d_{G}(v,w)$ the length of a shortest path between two
vertices $v$ and $w$ of $G$. Note that $d_{G}(v,w)=0$ if and only if $v=w$.
A connected subgraph $G'$ of $G$ is called isometric if
$d_{G'}(v,w) = d_{G}(v,w)$, for all vertices $v$ and $w$ in $G'$.
A vertex $x$ in $G$ is called a \emph{median} of three vertices $u,v,w\in V$
if $d_{G}(u,x)+d_{G}(x,v)=d_{G}(u,v)$,
$d_{G}(v,x)+d_{G}(x,w)=d_{G}(v,w)$ and
$d_{G}(u,x)+d_{G}(x,w)=d_{G}(u,w)$. A connected graph is called a
\emph{median graph} if any three of its vertices have a unique median \cite{Mulder1978}.
In other words, $G$ is a median graph if for all vertices $u$, $v$, and $w$ in $G$, there is
a unique vertex that belongs to shortest paths between each pair of $u, v$
and $w$. We denote the unique median of three vertices $u$, $v$ and $w$ in
a median graph $G$ by $\med_{G}(u,v,w)$. Median graphs
have several interesting characterizations and properties, see e.g. \cite{mulder2011median}.
For example, a connected graph $G$ is a median graph if and only if the
convex hull\footnote{A subgraph $G'$ of a graph $G$
is \emph{convex} if for any two vertices $v,w$ in $G'$
\emph{every} shortest path between $v$ and $w$ is a subgraph of $G'$.}
of any isometric cycle of $G$ is a hypercube (see e.g. \cite{klavzar1999median}).
\subsection{Buneman graphs}
From now on, we let $X$ be a finite set with $|X|\ge 3$. A {\em split (of
$X$)} is a bipartition $A|B=B|A$ of $X$ into
two non-empty subsets, that is, $A,B\subset X$, $A\cap B=\emptyset$ and $A\cup
B=X$.
For simplicity, we write $a_1\ldots a_k|b_1\ldots b_l$ or $a_1\ldots a_k|\overline{a_1\ldots a_k}$ for a split
$A|B$ if $A=\{a_1,\ldots, a_k\}$ and $B=\{b_1,\ldots, b_l\}$, for some $k,l\geq 1$.
We call the sets $A$ and $B$ the \emph{parts} of the split $A|B$.
If $S=A|B$
is such that $|A|<|B|$ then we call $A$ the {\em small part} of $S$.
The {\em size} of a split $A|B$ is defined as $\min\{|A|,|B|\}$,
and if a split $S$ has size $r$ we call $S$ an {\em $r$-split}.
A split $A|B$ of $X$ is called \emph{trivial} if it has size $1$ or, equivalently,
if $A|B$ is of the form $x|\overline{x}$ for some $x\in X$.
For a split $S=A|B$ of $X$, we let $S(x)$ denote the part of
$S$ that contains $x$.
We say that
$S$ {\em separates} two elements $x$ and $y$ in $X$ if $S(x)\not=S(y)$.
From now on we shall assume that all split systems on $X$ contain all trivial splits on $X$.
Following \cite{D97}, we define for a split system $\mathcal S$ on $X$, the
{\em Buneman graph $\mathcal B(\mathcal S)$ (on $X$)}
to be the graph with
vertex set consisting of all maps $\phi: \mathcal S \to \mathcal P(X)$
satisfying the following two conditions:
\begin{itemize}
\item[(B1)] For all $S\in \mathcal S$, $\phi(S) \in S$.
\item[(B2)] For all $S, S' \in \mathcal S$ distinct, $\phi(S) \cap \phi(S') \neq
\emptyset$.
\end{itemize}
Two vertices $\phi$ and $\phi'$ in $\mathcal B(\mathcal S)$ are joined by an edge if
there
is a unique split $S \in \mathcal S$ such that $\phi(S) \neq \phi'(S)$.
For example, the graphs in Figure~\ref{fig-intro}(i) and (ii)
are Buneman graphs on $X=\{1,\ldots, 5\}$ for the split systems
\[\mathcal S_1=\{15|234,24|135\} \cup \{x|\overline{x} \,:\, x \in X\}\]
and
\[\mathcal S_2=\{15|234,24|135, 12|345, 34|125,35|124\} \cup \{x|\overline{x} \,:\, x \in X\},\]
respectively.
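For concreteness, the construction of $\mathcal B(\mathcal S)$ from conditions (B1) and (B2) and the above edge rule can be carried out mechanically. The following is a minimal Python sketch; the encoding of a split as a pair of \texttt{frozenset}s and all function names are illustrative choices, not prescribed by the definition.
\begin{verbatim}
from itertools import product, combinations

def buneman_vertices(splits):
    # Vertices of B(S): choose one part of every split (B1) such that any
    # two chosen parts intersect (B2).  A vertex is stored as the tuple of
    # chosen parts, ordered as in the input list `splits`, whose entries
    # are pairs (A, B) of frozensets.
    vertices = []
    for choice in product(*[(A, B) for (A, B) in splits]):
        if all(choice[i] & choice[j]
               for i, j in combinations(range(len(choice)), 2)):
            vertices.append(choice)
    return vertices

def buneman_edges(vertices):
    # Two vertices are adjacent iff they differ on exactly one split.
    return [(u, v) for u, v in combinations(vertices, 2)
            if sum(a != b for a, b in zip(u, v)) == 1]

# The split system S_1 on X = {1,...,5} from the introduction.
X = frozenset(range(1, 6))
def split(small):
    small = frozenset(small)
    return (small, X - small)

S1 = [split({x}) for x in X] + [split({1, 5}), split({2, 4})]
V = buneman_vertices(S1)
print(len(V), "vertices,", len(buneman_edges(V)), "edges")
\end{verbatim}
Since $\mathcal S_1$ is compatible, this sketch should reproduce the tree of Figure~\ref{fig-intro}(i), with $|\mathcal S_1|+1$ vertices, $|X|$ of which are leaves.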
We now summarise some relevant properties of the Buneman graph
(for proofs of these facts see e.g. \cite[Chapter 4]{D12}; see also \cite{BG91} using different notation).
\begin{enumerate}
\item[(S1)] For all $x \in X$, the map
$\phi_x: \mathcal S \to \mathcal P(X)$ given by
putting $\phi_x(S)=S(x)$, for all $S \in \mathcal S$, is a leaf
in $\mathcal B(\mathcal S)$.
\item[(S2)] Let $S=A|B\in \mathcal{S}$. Then
the removal of all edges $\{\phi,\phi'\}$ in
$\mathcal B(\mathcal S)$ with $\phi(S)\neq \phi'(S)$
disconnects
$\mathcal B(\mathcal S)$ into precisely two connected
components, one of which contains the leaves $\phi_a$, $a \in A$ and the other
the leaves $\phi_b$, $b \in B$.
\item[(S3)] $\mathcal B(\mathcal S)$ is a median graph.
\item[(S4)] $\mathcal B(\mathcal S)$ is an isometric subgraph of
the $|\mathcal S|$-dimensional hypercube consisting of all those maps $\phi: \mathcal S \to \mathcal P(X)$
that only satisfy Property~(B1) in the definition of the Buneman graph
(with edge set defined in the analogous way).
\item[(S5)] For any three vertices $\phi_1, \phi_2,\phi_3$
in $\mathcal B(\mathcal S)$, the median
of $\phi_1, \phi_2$ and $\phi_3$ in $\mathcal B(\mathcal S)$ is the map that
assigns to each split $S \in \mathcal S$ the part of $S$ of multiplicity two or more in the
multiset $\{\phi_1(S),\phi_2(S),\phi_3(S)\}$ (see also \cite[p.\ 1905, Equ.\ (1)]{D11}).
\end{enumerate}
Suppose that ${\mathcal S}$ is a split system on $X$.
In light of Property~(S1), we shall consider $X$ as being the leaf-set of $\mathcal B(\mathcal S)$, since
each $x \in X$ corresponds to the map $\phi_x$ in $\mathcal B(\mathcal S)$.
As an example for (S2), consider the tree in Figure~\ref{fig-intro}(i).
Removing the edge associated to the split $15|234$ disconnects the tree
into two trees with leaf sets $\{1,5\}$ and $\{2,3,4\}$, respectively.
In this way, we see that $\mathcal B(\mathcal S_1)$ displays
each of the splits in $\mathcal S_1$.
Note that by Property~(S3) and the fact mentioned at the end of Section~\ref{sec:graphs},
the convex hull of any isometric cycle in $\mathcal B(\mathcal S)$ is a hypercube. In light of this, we
define the {\em dimension} $\dim(\mathcal S)$ of a split system $\mathcal S$
to be the dimension of the largest hypercube contained in $\mathcal B(\mathcal S)$
in case $\mathcal B(\mathcal S)$ is not a phylogenetic tree and one otherwise.
This dimension can be characterized in terms of splits as follows.
Suppose $S=A|B$ and $T=C|D$ are two splits in $\mathcal S$.
Then $S$ and $T$ are called {\em incompatible} if $S\not=T$ and
$A\cap C$, $A \cap D$, $B\cap C$ and $B \cap D$ are all non-empty; otherwise $S$
and $T$ are called {\em compatible}. Calling a set $\mathcal S$ of splits {\em incompatible}
if any two splits in $\mathcal S$ are incompatible, then $\dim(\mathcal S)$
is equal to the maximum size of an incompatible subset
of $\mathcal S$
(see e.g. \cite[p. 445]{C08}). If $\mathcal B(\mathcal S)$ contains a cycle then
it must contain a hypercube of dimension two or more.
Hence, a split system $\mathcal S$ on $X$
is 1-dimensional
if and only if $\mathcal B(\mathcal S)$ is a phylogenetic tree on $X$ (in which case it has $|\mathcal S|+1$ vertices and $|X|$ leaves), a
fact which also holds if and only if every
pair of splits in $\mathcal S$ is compatible (see e.g. \cite{D97}).
In particular, as mentioned in the introduction, it follows that
any phylogenetic tree is a Buneman graph of
some split system, and that any two distinct splits in this split system must be compatible.
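Since $\dim(\mathcal S)$ equals the maximum size of an incompatible subset of $\mathcal S$, it can be computed directly from the splits. The following is a minimal brute-force Python sketch (illustrative names; splits encoded as pairs of \texttt{frozenset}s), adequate only for the small split systems considered in this paper.
\begin{verbatim}
from itertools import combinations

def incompatible(S, T):
    # A|B and C|D are incompatible iff A&C, A&D, B&C, B&D are all non-empty.
    (A, B), (C, D) = S, T
    return all([A & C, A & D, B & C, B & D])

def dimension(splits):
    # dim(S): largest k such that some k splits are pairwise incompatible;
    # by convention dim(S) = 1 when no two splits are incompatible,
    # i.e. when B(S) is a phylogenetic tree.
    best = 1
    for k in range(2, len(splits) + 1):
        if any(all(incompatible(splits[i], splits[j])
                   for i, j in combinations(sub, 2))
               for sub in combinations(range(len(splits)), k)):
            best = k
        else:
            break  # no incompatible k-subset implies none of size k+1 either
    return best
\end{verbatim}
Applied to the split systems $\mathcal S_1$ and $\mathcal S_2$ from the introduction (encoded as pairs of parts as above), this yields the dimensions $1$ and $2$ mentioned there.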
\section{Two families of injective split systems}\label{sec:c2}
Let $\mathcal S$ be a split system on $X$.
For $Y =\{x,y,z\} \in {X \choose 3}$, we let $\phi_Y = \phi_{xyz}=\med_{\mathcal B(\mathcal S)}(Y)$ denote
the median of $\phi_x,\phi_y,\phi_z$ in $\mathcal B({\mathcal S})$, which exists by Property (S3).
In this notation, $\mathcal S$ is
{\em injective} if for all $Y,Y' \in {X \choose 3}$ distinct, we have
$\phi_Y \neq \phi_{Y'}$.
Note that if $|X|=3$, then there is only one split system $\mathcal S$ on $X$
(the one that contains only trivial splits), and that $\mathcal S$ is injective, since $|\binom{X}{3}|=1$.
In this section, we show that for every set $X$ with $|X|\ge 4$ there
exists an injective split system on $X$. To do this, we shall present two infinite
families of injective split systems.
We begin with a simple but useful lemma.
\begin{lemma}\label{medinbu}
Let $\mathcal S$ be a split system on a set $X$, $|X| \geq 3$, and
let $x,y,z \in X$ distinct. Then $\phi_{xyz}$ is the (unique) map
in $\mathcal B(\mathcal S)$ that assigns to each
split
$S \in
\mathcal S$ the part $A\in S$ for which
$|A \cap \{x,y,z\}| \geq 2$.
\end{lemma}
\begin{proof}
Let $S\in\mathcal S$. Then $\phi_v(S)=S(v)$, for all $v\in \{x,y,z\}$. By Property~(S5),
$\phi_{xyz}(S)$ is the part of $S$
that appears twice (or more) in the multiset $\{S(x),S(y),S(z)\}$, that is, the part of $S$
that contains (at least) two elements of $\{x,y,z\}$.
\end{proof}
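Lemma~\ref{medinbu} also yields a direct way to test Property~(\ref{is-injective}) on a computer: compute $\phi_Y$ for every $Y \in {X \choose 3}$ via the majority rule and compare. The following minimal Python sketch (representation and names are again illustrative) does exactly this:
\begin{verbatim}
from itertools import combinations

def median(splits, Y):
    # phi_Y for a 3-subset Y: for each split A|B choose the part that
    # contains at least two elements of Y (majority rule of the lemma).
    Y = frozenset(Y)
    return tuple(A if len(A & Y) >= 2 else B for (A, B) in splits)

def is_injective(splits, X):
    # Property (1): medians of all 3-subsets of X are pairwise distinct.
    meds = [median(splits, Y) for Y in combinations(sorted(X), 3)]
    return len(meds) == len(set(meds))

# X = {1,...,5} with the split systems S_1 and S_2 from the introduction.
X = frozenset(range(1, 6))
def split(small):
    small = frozenset(small)
    return (small, X - small)

trivial = [split({x}) for x in X]
S1 = trivial + [split(p) for p in ({1, 5}, {2, 4})]
S2 = trivial + [split(p) for p in ({1, 5}, {2, 4}, {1, 2}, {3, 4}, {3, 5})]
print(is_injective(S1, X), is_injective(S2, X))  # expected: False True
\end{verbatim}
The expected output reflects the discussion in the introduction: the tree of Figure~\ref{fig-intro}(i) identifies some medians, whereas the Buneman graph of Figure~\ref{fig-intro}(ii) does not.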
Now, a split system $\mathcal S$ on $X$ is called {\em circular} \cite{BD92} if
there exists a labelling $x_1, \ldots, x_n$, $n=|X|$, of the elements of
$X$ such that all splits of $\mathcal S$ are of the form
$x_i x_{i+1} \ldots x_j|\overline{x_i x_{i+1} \ldots x_j}$, some $1
\leq i \leq j \leq n$.
If $\mathcal S$ is a circular split system on $X$ and there is no circular split system $\mathcal S'$ on $X$
such that $\mathcal S \subsetneq \mathcal S'$, then we say that $\mathcal S$ is a \emph{maximal circular}
split system on $X$. Note that a maximal circular split system on $X$
has size ${|X| \choose 2}$ \cite[Section 3]{BD92}.
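A maximal circular split system is easy to generate explicitly: its splits are exactly the splits of $X$ into two circular intervals. A minimal Python sketch (names and representation are illustrative):
\begin{verbatim}
def maximal_circular_splits(n):
    # All splits of X = {1,...,n} whose parts are intervals of the
    # circular order 1, 2, ..., n; there are n*(n-1)/2 of them.
    X = frozenset(range(1, n + 1))
    splits = set()
    for i in range(1, n + 1):          # start of the interval
        for length in range(1, n):     # proper, non-empty intervals
            A = frozenset(((i - 1 + k) % n) + 1 for k in range(length))
            splits.add(frozenset({A, X - A}))
    return [tuple(sorted(S, key=len)) for S in splits]
\end{verbatim}
For example, \texttt{maximal\_circular\_splits(5)} yields the ${5 \choose 2}=10$ splits of the maximal circular split system for the natural circular order on $\{1,\ldots,5\}$; by Theorem~\ref{circbu}(ii) below, this split system is injective.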
We now use Lemma~\ref{medinbu} to show that there exist
families of split systems that are injective.
\begin{theorem}\label{circbu}
Let $\mathcal S$ be a split system on $X$, $|X| \geq 4$. Then:
\begin{itemize}
\item[(i)] If $\mathcal S$ contains all 2-splits of $X$, then $\mathcal
S$ is injective.
\item[(ii)] If $\mathcal S$ is maximal circular, then $\mathcal S$ is injective.
\end{itemize}
\end{theorem}
\begin{proof}
For both (i) and (ii), let $Y=\{x,y,z\}$ and $Y'$ denote two distinct subsets of $X$ of size
$3$. Assume without loss of generality that $x\notin Y'$.
(i) By Lemma \ref{medinbu}, $\phi_Y$
is the unique map in $\mathcal B(\mathcal S)$ that assigns to each
split $S \in
\mathcal S$ the part $A$ of $S$ such that $|A \cap Y| \geq 2$.
It follows that for $S=xy|\overline{xy}$ (which is an element of $\mathcal S$ as it has size two), $\phi_{xyz}(S)=\{x,y\}$.
Since $x\not\in Y'$, we obtain $\phi_{Y'}(S)= X-\{x,y\}$.
Consequently, $\phi_Y \neq \phi_{Y'}$.
(ii)
Put $X=\{x_1,\ldots, x_n\}$, $n\geq 4$.
Then there exist $i,j,k\in\{1,\ldots, n\}$ with $i<j<k$ ($\mathrm{mod}\ n$) such that $x=x_i$, $y=x_j$ and $z=x_k$.
With respect to the circular ordering of $X$ induced by $\mathcal S$ it follows that
one of the four sets $\{x=x_i,x_{i+1}, \ldots, y=x_j\}$, $\{y=x_j,x_{j+1}, \ldots, x=x_i\}$,
$\{x=x_i,x_{i+1}, \ldots, z=x_k\}$ and $\{z=x_k,x_{k+1},\ldots, x=x_i\}$
must contain at most one element of $Y'$.
Let $A$ be such a set. Since $\mathcal S$ is maximal circular by assumption, it follows that the split $S=A | X-A$
is contained in $\mathcal S$. By Lemma \ref{medinbu}, $\phi_Y(S)=A\not=X-A=\phi_{Y'}(S)$.
Hence, $\phi_Y \neq \phi_{Y'}$.
\end{proof}
\begin{figure}
\caption{For $X=\{1,\dots,n\}$}
\label{bubu}
\end{figure}
In view of Theorem~\ref{circbu}~(ii), it is interesting to understand if
maximal circular split systems admit proper subsets that are also injective. As it turns out,
the answer is no in general, as we show in our next result.
\begin{proposition}\label{prop:circ-proper-subset}
Let $\mathcal S$ be a circular split system on $X$ with $|X|\geq 4$ and
let $\mathcal S'$ denote a split system on $X$ that is contained in
$\mathcal S$ as a proper subset.
Then $\mathcal S'$ is not injective.
\end{proposition}
\begin{proof}
Let $S_0$ be a non-trivial split in $\mathcal S-\mathcal S'$.
We show that there exist
two distinct 3-subsets $Y$ and $Z$ of $X=\{1,\ldots, n\}$ such that $\phi_Y(S)=\phi_Z(S)$
for all $S \in \mathcal S-\{S_0\}$.
In particular, $\phi_Y(S)=\phi_Z(S)$ for all $S \in \mathcal S'$, so $\mathcal S'$ is not injective.
Assume that $\mathcal S$ is circular for the
natural ordering of $X$. Without loss of generality, we may assume that
$S_0=1\ldots k|k+1 \ldots n$, some $2 \leq k \leq \frac{n}{2}$. Consider the sets $Y=\{n,1,k\}$ and $Z=\{n,1,k+1\}$.
Let $S\in \mathcal S-\{S_0\}$. If $S(n)=S(1)$ then, by Lemma \ref{medinbu},
$\phi_Y(S)=S(n)=\phi_Z(S)$. If $S(n)\not=S(1)$ then $S$ must be of
the form $1\dots\ell | \ell+1\dots n$, some $1\leq \ell\leq n-1$. Since $S\neq S_0$, we have $\ell\neq k$.
Hence, $S(k)=S(k+1)$. Moreover, since
$S(1)\not=S(n)$ either $1$ or $n$ must
be contained in $S(k)$. We can then apply Lemma \ref{medinbu} again
to conclude that $\phi_Y(S)=\phi_Z(S)$ which completes
the proof.
\end{proof}
We remark that a similar result to Proposition~\ref{prop:circ-proper-subset}
does not necessarily hold for non-circular split systems even if they are
injective. For example, Theorem~\ref{circbu}(i) implies that
the split system $\mathcal S$ on $X=\{1, \ldots, n\}$, $n \geq 5$, that
consists precisely of all trivial splits and 2-splits on $X$ is injective. Let
$\mathcal S^*$ denote the split system containing all splits of $\mathcal S$ except those of the form
$1x|\overline{1x}$, $x \in X-\{1\}$. Then, $\mathcal S^*$ is injective. To see this,
consider the proof of Theorem~\ref{circbu}(i). Then, up to
potentially having to relabel the elements of $Y$ and $Y'$,
the elements $x$ and $y$ can always be chosen to be
different from $1$. Hence, the split $S=xy|\overline{xy}$ such that
$\phi_Y(S) \neq \phi_{Y'}(S)$ can always be chosen in such a way that $ S\in\mathcal S^*$.
As a consequence, it follows that for all $Y, Y' \in {X \choose 3}$ distinct,
there exists a split $S$ of $\mathcal S^*$ such that $\phi_Y(S) \neq \phi_{Y'}(S)$
which implies that $\mathcal S^*$ is injective.
\section{Characterization of injective split systems and Dicing}\label{sec:dice}
In this section, we characterize injective split systems
(Theorem~\ref{dices}). To this end, we shall consider the restriction of a
split system on $X$ to subsets of $X$ which is defined as follows. Given a
split system $\mathcal S$ on $X$, and a subset $Y \subseteq X$ with
$|Y|\geq 3$ then we define the restriction $\mathcal S|_Y$ of $\mathcal S$
to $Y$ as the set of splits $S|_Y$ restricted to $Y$, that is,
\[
\mathcal S|_Y = \{ S|_Y= A\cap Y| B \cap Y \,:\, A|B \in \mathcal S
\}.
\]
Note that $\mathcal S|_Y$ is in fact a split system on $Y$ since $\mathcal S|_Y$
contains all trivial splits on $Y$.
We begin by proving a useful lemma concerning such restrictions.
\begin{lemma}\label{cases}
Suppose that $S$ is a split on $X$ with $|X|\geq 4$, and
that $x,y,z,p$ are distinct elements of $X$. Then the following holds for
$Y=\{x,y,z,p\}$.
\begin{enumerate}
\item[(i)] $\phi_{xyz}(S) \neq \phi_{xyp}(S)$ if and only if
$S|_Y\in \{xz|yp,yz|xp\}$. In particular, $S|_Y\not=xy|pz$.
\item[(ii)] If $|X|\geq 5$ and $q\in X-Y$ then $\phi_{xyz}(S) \neq \phi_{xpq}(S)$ if and only if
$S|_{Y\cup\{q\}}$ is one of the splits $yz|xpq$,
$pq|xyz$, $xy|zpq$, $xz|ypq$, $xp|yzq$ or $xq|yzp$.
\item[(iii)] If $|X|\geq 6$ and $q,r\in X-Y$ distinct then $\phi_{xyz}(S) \neq \phi_{pqr}(S)$ if and only if
$S|_{Y\cup\{q,r\}}$ is a 3-split or it is a 2-split of $Y\cup\{q,r\}$
whose part of size 2 is contained in $\{x,y,z\}$ or $\{p,q,r\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
To see Assertion~(i) observe that,
by Lemma~\ref{medinbu}, we have $\phi_{xyz}(S) \neq
\phi_{xyp}(S)$ if and only if one of $A$ and $B$, say $A$, contains at
least two elements of $\{x,y,z\}$ while $B$ contains at
least two elements of $\{x,y,p\}$. Since $A\cap B=\emptyset$, this is only
possible if and only if $z\in A$ and $p\in B$ while either $x\in A$ and $y\in B$ or
$y\in A$ and $x\in B$. The latter is equivalent to $S|_Y\in
\{xz|yp,yz|xp\}$ which, in particular, implies that $S|_Y\neq xy|pz$.
Hence, Assertion~(i) must hold.
To see Assertion~(ii), observe that, by
Lemma~\ref{medinbu}, $\phi_{xyz}(S) \neq
\phi_{xpq}(S)$ if and only if one of $A$ and $B$,
say $A$, contains at least two elements of $\{x,y,z\}$ and $B$ contains
at least two elements of $\{x,p,q\}$. As is easy to see, this is the case if
and only if $S|_{Y'}$ is not a trivial split on $Y'=Y\cup\{q\}$ and one of
$S(y)=S(z)$ or $S(p)=S(q)$ holds. Consideration
of all ten non-trivial splits on $Y'$ shows that $S|_{Y'}$ must be one of $yz|xpq$, $pq|xyz$, $xy|zpq$,
$xz|ypq$, $xp|yzq$ or $xq|yzp$. Hence,
Assertion~(ii) must hold.
To see Assertion~(iii), observe that, by
Lemma~\ref{medinbu}, $\phi_{xyz}(S) \neq
\phi_{pqr}(S)$ holds if and only if one of $A$ and $B$, say $A$, contains at least two
elements of $\{x,y,z\}$ and $B$ contains at least two elements of
$\{p,q,r\}$. Put $Y'=Y\cup\{q,r\}$, $A'= A\cap Y'$ and
$B'=B\cap Y'$. Since $A\cap B=\emptyset$ it follows
that $S|_{Y'}$ must be a 2- or 3-split and that if $S|_{Y'}$ is a 2-split, its part
of size 2 is contained in $\{x,y,z\}$ or
$\{p,q,r\}$.
Conversely, assume first that $S|_{Y'}=A'|B'$ is a 3-split on $Y'=Y\cup\{q,r\}$. By the pigeonhole
principle we may assume that $|A'\cap \{x,y,z\}|\geq 2$, and consequently
$|B'\cap \{p,q,r\}|\geq 2$. Since $A'$ and $B'$ are contained in different parts of $S$,
Lemma~\ref{medinbu} yields $\phi_{xyz}(S) \neq
\phi_{pqr}(S)$. Furthermore, if $S|_{Y'} = A'|B'$ is a 2-split such that the
part of size 2 is contained in $\{x,y,z\}$ or $\{p,q,r\}$, then the
other part must be of size 4 and must contain $\{p,q,r\}$ or $\{x,y,z\}$, respectively.
Consequently, $\phi_{xyz}(S) \neq \phi_{pqr}(S)$. Hence, Assertion~(iii) must
hold.
\end{proof}
We now make a key definition. We shall say that a split
system $\mathcal S$ on $X$
\begin{description}
\item[$\bullet$ \textnormal{\em $4$-dices $X$}] if
$|X| < 4$ or for all $Y \in {X \choose 4}$,
$\mathcal S|_Y$ contains at least two 2-splits,
\item[$\bullet$ \textnormal{\em 5-dices $X$}] if
$|X| < 5$ or for all $Y \in {X \choose 5}$,
$\mathcal S|_Y$ contains at least five 2-splits, and
\item[$\bullet$ \textnormal{\em 6-dices $X$}] if
$|X| < 6$ or for all $Y \in {X \choose 6}$, $\mathcal
S|_Y$
contains at least one 3-split or a \emph{triangle of 2-splits}, that is,
three 2-splits of the form $xy|Y-\{x,y\}$, $xz|Y-\{x,z\}$ and
$yz|Y-\{y,z\}$ where $x$, $y$, and $z$ are distinct elements in $Y$.
\end{description}
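These three conditions can be checked mechanically from the definition of the restriction $\mathcal S|_Y$. The following minimal Python sketch (illustrative names; splits are encoded as pairs of \texttt{frozenset}s, and degenerate restrictions with an empty part are discarded) implements the three tests:
\begin{verbatim}
from itertools import combinations

def restrict(splits, Y):
    # S|_Y: intersect both parts with Y, keep genuine bipartitions of Y
    # (both parts non-empty) and remove duplicates.
    Y = frozenset(Y)
    out = {frozenset({A & Y, B & Y}) for (A, B) in splits if A & Y and B & Y}
    return [tuple(S) for S in out]

def size(S):  # size of a split = cardinality of its smaller part
    return min(len(P) for P in S)

def dices4(splits, X):
    return all(sum(size(S) == 2 for S in restrict(splits, Y)) >= 2
               for Y in combinations(sorted(X), 4))

def dices5(splits, X):
    return all(sum(size(S) == 2 for S in restrict(splits, Y)) >= 5
               for Y in combinations(sorted(X), 5))

def dices6(splits, X):
    for Y in combinations(sorted(X), 6):
        R = restrict(splits, Y)
        if any(size(S) == 3 for S in R):
            continue
        pairs = {frozenset(min(S, key=len)) for S in R if size(S) == 2}
        if not any(all(frozenset(e) in pairs for e in combinations(T, 2))
                   for T in combinations(sorted(Y), 3)):
            return False  # neither a 3-split nor a triangle of 2-splits
    return True
\end{verbatim}
(If $|X|<k$, the corresponding test is vacuously true, since there are no $k$-subsets to check.) Theorem~\ref{dices} below shows that these three tests together decide injectivity, so they can be cross-checked against the direct median comparison sketched in Section~\ref{sec:c2}.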
Note that, in general, if a split system on $X$ $k$-dices $X$ it need not $k'$-dice $X$, for
$k,k'\in \{4,5,6\}$ distinct.
Nevertheless, some interesting relationships between these concepts hold, as the next lemma illustrates.
\begin{lemma}\label{lm-up}
Suppose $\mathcal S$ is a split system on $X$.
\begin{itemize}
\item[(i)] If $\mathcal S$ 4-dices $X$ and $|X| \geq 5$ then, for all $Y
\in {X \choose 5}$,
$\mathcal S|_Y$ contains at least four 2-splits.
\item[(ii)] If $\mathcal S$ 5-dices $X$ and $|X| \geq 6$ then, for all $Y
\in {X \choose 6}$,
$\mathcal S|_Y$ contains a 3-split or (at least) eight
2-splits.
\end{itemize}
\end{lemma}
\begin{proof}
(i) Suppose that $\mathcal S$ 4-dices $X$ and that $|X|\geq 5$.
Let $Y=\{x,y,z,t,u\}\in {X \choose 5}$ and $Y'=\{x,y,z,t\}\in {X \choose
4}$.
Since $\mathcal S$ 4-dices $X$, $\mathcal S|_{Y'}$ contains at least two
2-splits $S'_1$ and $S'_2$. Hence, $\mathcal S|_Y$ contains two
splits $S_1$ and $S_2$ such that
$S_1|_{Y'}=S'_1$ and $S_2|_{Y'}=S'_2$.
Moreover, since $S'_1$ and $S'_2$ are both 2-splits on $Y'$, the part
$A_1$ of $S_1$ and
$A_2$ of $S_2$ of size 2 does
not contain $u$. Note that $A_1$ and $A_2$ must be parts of $S'_1$ and $S'_2$,
respectively. In particular, since $S'_1$ and $S'_2$ are splits on
$Y'$ and $S'_1\neq S'_2$ it follows that $|A_1 \cap A_2|=1$. Without loss of
generality,
we may assume that $A_1 \cap A_2=\{x\}$.
Replacing $Y'$ by $Y''=\{y,z,t,u\}$ and using an analogous argument implies that $\mathcal
S|_Y$ also contains two distinct 2-splits on $Y$, call them $S_3$ and $S_4$, whose
parts of size $2$ do not contain $x$. In particular,
$S_3$ and $S_4$ are distinct from $S_1$ and $S_2$.
In summary, $\mathcal S|_Y$ contains at least four distinct 2-splits.
(ii) Suppose that $\mathcal S$ 5-dices $X$ and that $|X|\geq 6$.
Let $Y \in {X \choose 6}$. If $\mathcal S|_Y$ contains a 3-split
we are done. Hence, assume $\mathcal S|_Y$ does not contain a 3-split.
Since $|Y|=6$ it follows that a split in $\mathcal S|_Y$ must be trivial or a
2-split. We continue with showing that $\mathcal S|_Y$
contains at least eight 2-splits.
Let $x\in Y$. Since a split in $\mathcal S|_Y$ is either trivial or a
2-split, all 2-splits of $\mathcal S|_{Y-\{x\}}$
correspond to the 2-splits of $\mathcal S|_Y$ whose
small part does not contain $x$.
We claim that there exists an element $x_0$ of $Y-\{x\}$ that belongs to the small
part of at least three 2-splits of $\mathcal S|_Y$.
To see this, we consider the following two cases: (a)
$\mathcal S|_Y$ does not contain a split whose small part contains $x$
and (b) $\mathcal S|_Y$ contains a split whose small part contains
$x$.
In case of (a), let $Y' =
\{x,y,a,b,c\}$ be a subset of $Y$ of size $5$.
Since $\mathcal S$ 5-dices $X$, it follows that $\mathcal S|_{Y'}$ contains at least five
of the ${4\choose 2}=6$ possible 2-splits in
$\{ya|\overline{ya}, yb|\overline{yb}, yc|\overline{yc}, ab|\overline{ab}, ac|\overline{ac},$
$bc|\overline{bc}\}$
that might be contained in
$\mathcal S|_Y$ and do not have $x$ in their small part. It is now straight-forward to
verify that there is some $x_0 \in
Y'-\{x\}$ such that $\mathcal S|_{Y'}$ contains three 2-splits whose small
part contains $x_0$.
Consider now Case (b). Since
$\mathcal S$ 5-dices $X$ and $|X|\geq 6$, $\mathcal S|_{Y-\{x\}}$ contains
again at least five
2-splits. Then if there exists an element $x_0 \in
Y-\{x\}$ such that $\mathcal S|_{Y-\{x\}}$ contains three 2-splits whose
small part contains $x_0$ then the claim follows. If
this is not the case, then consideration of all ${5\choose 2}=10$ possible 2-splits in
$\mathcal S|_{Y-\{x\}}$ shows that $\mathcal S|_{Y-\{x\}}$ must contain exactly five
2-splits and that all elements of $Y-\{x\}$ must belong to the small part of exactly two
2-splits of $\mathcal S|_{Y-\{x\}}$. In addition, by assumption on $x$,
there exists an element $x_0$ of $Y-\{x\}$ such that $\{x,x_0\}$ is the small
part of a split of $\mathcal S|_Y$. Since $x_0$ also belongs to the small part of
exactly two 2-splits in $\mathcal S|_{Y-\{x\}}$, it follows that $x_0$
belongs to the small part of exactly three 2-splits of $\mathcal S|_{Y}$.
Hence, the claim also holds in this case.
In summary, in both Case~(a) and Case~(b) there is some $x_0 \in Y-\{x\}$ that belongs to the small
part of at least three 2-splits of $\mathcal S|_{Y}$.
Moreover, $\mathcal S|_{Y-\{x_0\}}$ contains at least five 2-splits
because $\mathcal S$ 5-dices $X$ and $|Y|=6$.
Since the small part of a split in $\mathcal S|_{Y-\{x_0\}}$ is also the small part of a split in
$\mathcal S|_Y$ whose small part does not contain $x_0$, it follows that there also exists at least five
2-splits in $\mathcal S|_Y$ whose small part does not contain $x_0$.
Hence,
$\mathcal S|_Y$ contains at least eight 2-splits.
\end{proof}
To prove the main theorem of this section, we require a further result
concerning dicing.
\begin{proposition}\label{prop45d}
Suppose $\mathcal S$ is a split system on $X$ with $|X|\geq 4$. Then the following holds.
\begin{itemize}
\item[(i)] $\mathcal S$ 4-dices $X$ if and only if for all
$A,B \in {X \choose 3}$
with $|A \cap B|=2$, we have $\phi_A \neq \phi_B$.
\item[(ii)] If $|X|\geq 5$ then $\mathcal S$ 4- and 5-dices $X$ if and only if for all
distinct $A,B \in {X \choose 3}$
with $A \cap B \neq \emptyset$, we have $\phi_A \neq \phi_B$.
\end{itemize}
\end{proposition}
\begin{proof}
(i) Let $A=\{x,y,z\}$ and $B=\{x,y,t\}$ be subsets of $X$ and let $Y=A \cup B$.
Assume first that $\mathcal S$ 4-dices $X$. Then $\mathcal S|_Y$ contains at
least two 2-splits because $|Y|=4$. In particular, $\mathcal S|_Y$ contains at least one 2-split
$S$ distinct from $xy|tz$. By Lemma~\ref{cases}(i), it follows that
$\phi_A(S) \neq \phi_B(S)$. Consequently, $\phi_A \neq \phi_B$.
Conversely, if $\phi_A \neq \phi_B$, then there exists a split
$S$ in $\mathcal S$ such that $\phi_A(S) \neq \phi_B(S)$. By Lemma~\ref{cases}(i),
$S|_Y \in \{xz|yt,yz|xt\}$. If $S|_Y = xz|yt$, then consider the set
$C=\{x,z,t\}$.
Since, by assumption, $\phi_A \neq \phi_C$ there must exist a split $S'$ in
$\mathcal S$ such $\phi_A(S') \neq \phi_C(S')$.
By Lemma~\ref{cases}(i) it follows that $S'|_Y \neq S|_Y$. If $S|_Y = yz|xt$ then an
analogous argument with $C$ replaced by $D=\{y,z,t\}$ implies that there exists
a split $S''$ with $S''|_Y \in \{yx|zt,yt|zx\}$. Lemma~\ref{cases}(i) implies again that
$S|_Y \neq S''|_Y$. Hence,
$\mathcal S|_Y$ contains at least two 2-splits one of which is $S|_Y$ and
the other is $S'|_Y$ or $S''|_Y$.
(ii) Assume first that $\mathcal S$ 4-dices and 5-dices $X$.
Let $A,B\in \binom{X}{3}$ distinct such that $A\cap B\not=\emptyset$. If $|A\cap B|=2$ then, by
Proposition~\ref{prop45d}(i), $\phi_A \neq
\phi_B$ must hold. So assume that $|A\cap B|\not=2$.
Let $A=\{x,y,z\}$ and $B=\{x,p,q\}$. Then $|A\cap B|=1$.
Since $\mathcal S$ 5-dices $X$ and $|X|\geq 5$ it follows that $\mathcal S|_Y$
contains at least five 2-splits where $Y=A\cup B$. Since there are exactly ${5\choose 2}=10$ 2-splits on $Y$,
it follows that $\mathcal S|_Y$ contains at least one of the six 2-splits
in $\{yz|xpq, pq|xyz, xy|zpq, xz|ypq, xp|yzq,xq|yzp\}$.
By Lemma~\ref{cases}(ii), it follows that $\phi_A \neq \phi_B$.
Conversely, assume that for all distinct $A,B \in {X \choose 3}$
with $A \cap B \neq \emptyset$ we have that $\phi_A \neq \phi_B$.
Since, in particular, $\phi_A \neq \phi_B$ holds whenever $|A\cap B|=2$, the split system $\mathcal S$ 4-dices $X$ in view of Proposition~\ref{prop45d}(i).
To
see that $\mathcal S$ also 5-dices $X$, we need to show in view of $|X|\geq 5$ that for all $Y\in \binom{X}{5}$ the split system $\mathcal S|_Y$ contains at
least five 2-splits.
Let $Y\in \binom{X}{5}$.
Since $\mathcal S$ 4-dices $X$,
it follows by Lemma~\ref{lm-up}(i) that $\mathcal S|_Y$ contains at least four 2-splits.
Assume for contradiction that $\mathcal S|_Y$ contains
precisely four 2-splits $S_1, \ldots, S_4$. For all $1\leq i\leq 4$, let $A_i$ denote the small part of $S_i$.
Then the multiset $\mathcal A=A_1 \cup A_2 \cup A_3 \cup A_4$
contains eight elements. We claim that there exists no element $x\in Y$ with multiplicity three or more in $\mathcal A$.
To see the claim, assume for contradiction that there exists some $x \in Y$ that is
contained in three of the sets $A_i$, $1\leq i\leq 4$. Since, for all $1 \leq i \leq 4$, the split
$S_i|_{Y-\{x\}}$ is a 2-split of $Y-\{x\}$ if and only if $x\not\in A_i$, it follows that
$\mathcal S|_{Y-\{x\}}$ contains at most one 2-split. But this is not possible because $\mathcal S$ 4-dices $X$
and $|X|\geq 5$ thereby concluding the proof of the claim.
Hence, every element of $Y$ has multiplicity at most two in $\mathcal A$. Since $Y$ contains five
elements and $\mathcal A$ has size eight, one of the following two cases must hold:
(a) three elements of $Y$ have multiplicity two
in $\mathcal A$ and the other two have multiplicity one and (b)
four elements of $Y$ have multiplicity two in $\mathcal A$ and one does not appear in $\mathcal A$.
Suppose first that Case~(a) holds. Let $x$ and $y$ be the two elements in $\mathcal A$ that appear
only once. Then there exists an element $q\in Y- \{x,y\}$
such that neither $\{x,q\}$ nor $\{y,q\}$ is contained in $\{A_1,A_2,A_3,A_4\}$.
Since $q$ has multiplicity two in $\mathcal A$ while $x$ and $y$ have
multiplicity one each, this implies that there exist $i,j\in\{1,\ldots, 4\}$ distinct such that
the two sets $A_i$ and $A_j$ not containing
$q$ satisfy $A_i \cup A_j=Y-\{q\}$. It
follows that the only 2-split contained in $\mathcal S|_{A_i\cup A_j}$ is $A_i|A_j$,
contradicting the fact that $\mathcal S$ 4-dices $X$.
Suppose now that Case~(b) holds. Let $x$ be the element of $Y$
not present in $\mathcal A$. Since each element of $\{y,z,p,q\}$ appears
twice in $\mathcal A$, it follows that, up to potentially having to relabel the elements
of $Y-\{x\}$, the set of 2-splits in $\mathcal S|_Y$ is $\{yp|xzq, yq|xzp, zp|xyq, zq|xyp\}$. We can now use
Lemma~\ref{cases}(ii) to conclude that $\phi_A=\phi_B$, which contradicts our assumption
that $\phi_A\not=\phi_B$.
\end{proof}
Note that the assumption that $\mathcal S$ 4-dices $X$ is necessary
for the characterization in Proposition~\ref{prop45d}~(ii) to hold.
For example, the
split system on $X=\{1, \ldots, 5\}$ whose set of non-trivial splits equals
\[\{12|345, 23|451, 34|512, 45|123\}\]
does not 5-dice $X$ but $\phi_A \neq \phi_B$
holds for all $A,B \in {X \choose 3}$ with $|A \cap B|=1$.
We now show that injectivity of a split system can be characterized by
considering subsets of $X$ of size at most 6.
\begin{theorem}\label{dices}
Suppose $\mathcal S$ is a split system on $X$, $|X| \geq 3$.
Then $\mathcal S$ is injective if and only if $\mathcal S$ 4-, 5- and 6-dices $X$.
\end{theorem}
\begin{proof}
If $|X|=3$, then the equivalence trivially holds. Hence, we may assume for the following that $|X| \geq 4$.
Assume first that $\mathcal S$ 4-, 5- and 6- dices $X$, and let $A,B \in {X \choose 3}$ distinct.
If $|X|=4$, then $|A \cap B|=2$. In that case, Proposition~\ref{prop45d}(i) implies that
$\phi_A \neq \phi_B$. If $|X|=5$, then $A \cap B \neq \emptyset$. In that case,
Proposition~\ref{prop45d}(ii) implies that $\phi_A \neq \phi_B$. Finally, suppose that $|X| \geq 6$. In view of
Proposition~\ref{prop45d}(ii), we have that $\phi_A \neq \phi_B$
holds in case $A \cap B \neq \emptyset$.
It remains to show that $\phi_A \neq \phi_B$ also holds when $A \cap B=\emptyset$. To see this, let $A=\{x,y,z\}$ and $B=\{t,u,v\}$ be
subsets of $X$ such that $A\cap B=\emptyset$.
Let $Y=A \cup B$. Since $|X|\geq 6$ and $\mathcal S$ 6-dices $X$, the split system $\mathcal S|_Y$
contains either a 3-split or a triangle of 2-splits. In both cases, we can use
Lemma~\ref{cases}(iii) to conclude that $\phi_A \neq \phi_B$.
Conversely, assume that $\mathcal S$ is injective. Then $\phi_A \neq \phi_B$
for all distinct $A, B \in {X \choose 3}$ with $A \cap B \neq \emptyset$. By
Proposition~\ref{prop45d}(ii), it follows that $\mathcal S$ 4-dices and
5-dices $X$. To see that $\mathcal S$ also 6-dices $X$, suppose that $|X| \geq 6$ and let
$Y=\{x,y,z,t,u,v\}$ be a subset of $X$ of size 6.
Since $\mathcal S$ 5-dices $X$, Lemma~\ref{lm-up}(ii) implies
that $\mathcal S|_Y$ contains either a 3-split or at least eight 2-splits.
We claim that if $\mathcal S|_Y$ does not contain a 3-split
then $\mathcal S|_Y$ must contain a triangle of 2-splits. To see the claim, we remark first that
if $\mathcal S|_Y$ contains ten 2-splits or more, then it must contain a triangle of 2-splits.
Employing a case analysis, we obtain that, up to potentially having to relabel the elements of $Y$, a
split system on $Y$ containing eight 2-splits or more without containing a
triangle of 2-splits is either (a) the split system $\mathcal S_1$ whose set of non-trivial splits is
$\{xy|\overline{xy}, xz|\overline{xz}, xt|\overline{xt}, xu|\overline{xu},
yv|\overline{yv}, zv|\overline{zv}, tv|\overline{tv}, uv|\overline{uv}\}$ or (b)
a subset of the split system $\mathcal S_2$ whose set of non-trivial splits is
$\{xy|\overline{xy}, yz|\overline{yz}, zt|\overline{zt}, tu|\overline{tu},
uv|\overline{uv}, vx|\overline{vx}, xt|\overline{xt}, yu|\overline{yu}, zv|\overline{zv}\}$.
Since $\mathcal S_1|_{Y-\{x\}}$ contains only four 2-splits, whereas $\mathcal S$ 5-dices $X$,
it follows that $\mathcal S|_Y\not=\mathcal S_1$.
Hence, Case~(a) cannot hold. But Case~(b) cannot hold either since
if $\mathcal S|_Y$ is a subset of $\mathcal S_2$ then
Lemma~\ref{cases}(iii) implies $\phi_{uxz}= \phi_{tvy}$. But this is
impossible because $\mathcal S$ is injective. Hence,
$\mathcal S|_Y$ must contain a triangle of 2-splits, as claimed. Thus,
$\mathcal S$ also 6-dices $X$.
\end{proof}
As an important consequence of the last result, we see that
injectivity of a split system is well behaved with respect to restriction:
\begin{corollary}\label{sub-inj}
Suppose $\mathcal S$ is a split system on $X$ with $|X|\geq 3$.
If $\mathcal S$ is injective, then $\mathcal S|_Y$ is injective, for all $Y \subseteq X$ with $|Y|\geq 3$.
\end{corollary}
\begin{proof}
Suppose that $Y \subseteq X$ with $|Y|\geq 3$ and that $\mathcal S$ is injective.
Then, by Theorem~\ref{dices}, $\mathcal S$ 4-, 5- and 6-dices $X$.
So $\mathcal S|_Y$ 4-, 5- and 6-dices $Y$. By Theorem~\ref{dices}, it follows that
$\mathcal S|_Y$ is injective.
\end{proof}
\section{The injective dimension}\label{sec:id}
Recall that the dimension $\dim(\mathcal S)$ of a split system $\mathcal S$
is defined as the dimension of the largest hypercube in
$\mathcal B(\mathcal S)$ or, equivalently,
the size of the largest incompatible subset of $\mathcal S$.
For $n\ge 3$, we define the {\em injective dimension} $\ID(n)$ of $n$ to be
\begin{equation}
\label{define}
\ID(n) = \min\{ \dim(\mathcal S) \colon \mathcal S \mbox{ is an injective split
system on } \{1, \ldots, n\} \}.
\end{equation}
Note that since Theorem~\ref{circbu} implies that for all $X$ with $n=|X|\geq 4$ there
exists an injective split system on $X$, the quantity $\ID(n)$ is well-defined.
We are interested in $\ID(n)$ since its value gives a lower bound for
the number of vertices in the Buneman graph of any injective split system on $X$.
In particular, if $\ID(n)=m$ then the Buneman graph $\mathcal B(\mathcal S)$
of any injective split system $\mathcal S$ on $X$ must contain
an $m$-cube as a subgraph. Hence, $\mathcal B(\mathcal S)$ must contain at least $2^m$ vertices.
To be able to present some upper and lower bounds for $\ID(n)$ (Theorem~\ref{cor:ID-numbers}),
we first show that $\ID: \mathbb N_{\geq 3} \to \mathbb N$ is a monotone increasing function.
\begin{lemma}\label{thm:inj-dim}
For any two integers $n$ and $m$ with $n \geq m \ge 3$, we have $\ID(n) \geq \ID(m)$.
\end{lemma}
\begin{proof}
Let $\mathcal S$ be an injective split system on some set $X$ with $|X|=n$
such that $\dim(\mathcal S)=\ID(n)$. Let $Y$ be a subset of $X$ of size $m$. By
Corollary~\ref{sub-inj}, the split system $\mathcal S|_Y$ is injective, so
$\ID(m) \leq \dim(\mathcal S|_Y)$. To see that $\dim(\mathcal S|_Y) \leq
\dim(\mathcal S)$ also holds it suffices to remark that if two splits $S$ and $S'$ in $\mathcal S$ are such that
$S|_Y$ and $S'|_Y$ are incompatible then $S$ and $S'$ are also incompatible.
Hence, an incompatible subset of $\mathcal S|_Y$ naturally induces an incompatible subset of $\mathcal S$ of the same size.
It follows
that $\ID(m) \leq \dim(\mathcal S|_Y) \leq \dim(\mathcal S)=\ID(n)$, as desired.
\end{proof}
We now give upper and lower bounds for $\ID(n)$ where $n=|X|\geq 4$. As we shall
see in the proof, the upper bound comes from the
fact that a maximal circular split system on $X$ is injective by Theorem~\ref{circbu}(ii)
and that in \cite{C08} it was shown that the maximum dimension of a hypercube in $\mathcal B(\mathcal S)$
is $\lfloor \frac{n}{2} \rfloor$. Note that
the split system $\mathcal S$ formed by all splits of $X$ of size two or less is injective by Theorem~\ref{circbu}(i) and, by \cite{C08}, has
dimension $n-1$. Indeed, two splits $S$ and $S'$ in $\mathcal S$ are incompatible if there exists an
element $x \in X$ such that $x$ belongs to the small part of both $S$ and $S'$. Hence, the largest incompatible
subsets of $\mathcal S$ are the subsets of the form $\{xy|\overline{xy}\,:\, y \in X-\{x\}\}$, some $x \in X$, and these
subsets have size $n-1$.
\begin{theorem}\label{cor:ID-numbers}
For all integers $n \ge 4$, we have $\ID(n) \leq \lfloor \frac{n}{2}
\rfloor$.
Moreover, $\ID(3)=1$, $\ID(4)=\ID(5)=2$, $\ID(6)=\ID(7)=\ID(8)=3$, and for all
$n \ge 9$, $\ID(n) \ge 3$.
\end{theorem}
\begin{proof}
Let $X=\{1, \ldots, n\}$. To see that the first statement holds,
let $\mathcal S$ be a maximal circular split system on $X$.
If $n\geq 4$ then Theorem~\ref{circbu}(ii) implies that $\mathcal S$ is injective.
Hence, $\ID(n) \leq \dim(\mathcal S)$. By \cite{C08}, the Buneman
graph $\mathcal B(\mathcal S')$ of a maximal circular split system $\mathcal S'$ on $X$ contains an
$\lfloor \frac{n}{2} \rfloor$-cube, and all other subcubes in $\mathcal B(\mathcal S')$
have no larger dimension. Hence, $\dim(\mathcal S')=\lfloor \frac{n}{2} \rfloor$.
Thus, $\ID(n) \leq \lfloor \frac{n}{2} \rfloor$.
To see the remainder of the theorem, note first that $\ID(3)=1$
since, as was mentioned in Section~\ref{sec:c2} already, the unique split system
on $X$ is injective and its Buneman graph is a phylogenetic tree on $X$.
To see that $\ID(4)=\ID(5)=2$ holds, we first remark that
in view of the first statement of the theorem, we have $\ID(4) \leq 2$
and $\ID(5) \leq 2$. Now, let $X$ be such that $n \in \{4,5\}$ and
assume for contradiction that there exists an injective split
system $\mathcal S$ on $X$ with $\dim(\mathcal S)=1$. In particular, $\mathcal S$ is compatible. Then
$\mathcal B(\mathcal S)$ is a phylogenetic tree on $X$
and has $|\mathcal S|+1$ vertices. Moreover, since
a compatible split system on $X$ has at most $2n-3$ elements (see e.g. \cite[Theorem 3.3]{D12}),
it follows that $\mathcal B(\mathcal S)$ has at most $2$
internal vertices if $n=4$, and at most $3$ internal
vertices if $n=5$. But $\mathcal S$ is injective, so $\mathcal B(\mathcal S)$
must have at least ${4 \choose 3}=4$ internal vertices if $n=4$,
and at least ${5 \choose 3}=10$ internal vertices if $n=5$, a contradiction.
Hence, $\ID(4)=\ID(5)=2$.
We continue with showing
that $\ID(6)\ge 3$ from which it then follows by Lemma~\ref{thm:inj-dim} and the
first statement of the theorem
that $\ID(6)=\ID(7)=3$ and that $\ID(n) \ge 3$, for all $n \ge 8$.
Suppose that $\mathcal S$ is an injective split system on $X=\{1,\ldots, 6\}$.
Bearing in mind that, by Theorem~\ref{dices}, $\mathcal S$ 4-, 5- and 6-dices $X$
we next perform a case analysis on the number of 3-splits in $\mathcal S$.
If $\mathcal S$ contains three 3-splits or more then $\dim(\mathcal S) \geq 3$ since
all 3-splits of $X$ are pairwise incompatible.
If $\mathcal S$ contains two 3-splits, say $123|456$ and $234|561$,
then since $\mathcal S$ 4-dices $X$ it follows that there must exist a split $S\in\mathcal S$
such that $S(2) \neq S(3)$ and $S(5) \neq S(6)$. Since the splits $S$, $123|456$, and $234|561$ are pairwise
incompatible, we obtain $\dim(\mathcal S) \geq 3$.
If $\mathcal S$ contains one 3-split, say $123|456$, then one of the following two cases must hold.
If there exists an element $x \in X$ and three 2-splits $S_1$, $S_2$, and $S_3$ in $\mathcal S$ containing $x$
in their small part then
$\dim(\mathcal S) \geq 3$ because $\{S_1,S_2,S_3\}$ is incompatible. If no such element $x$ exists then
$\mathcal S$ contains at most six 2-splits. An exhaustive search shows that, up to potentially having to relabel the elements
in $\{1,2,3\}$, there exists only one such split system that is injective i.\,e.\,$\mathcal S$ is the split system
whose subset of non-trivial splits is the set
\[
\{123|456, 15|\overline{15}, 16|\overline{16}, 24|\overline{24}, 26|\overline{26}, 34|\overline{34}, 35|\overline{35}\}.
\]
One can then easily verify that $\{123|456,15|\overline{15}, 16|\overline{16}\}$ is incompatible. Hence, $\dim(\mathcal S)\geq 3$ in this case.
Finally, if $\mathcal S$ does not contain a 3-split, then
it must contain a triangle of 2-splits because $\mathcal S$ 6-dices $X$. Since the three splits in such a triangle
are pairwise incompatible it follows that $\dim(\mathcal S) \geq 3$. This concludes the proof that $\ID(6)\geq 3$.
To show that $\ID(8) =3$, we employed Theorem~\ref{dices} and used a computer program to verify
that the split system $\mathcal S$ whose set of non-trivial splits is
\[
\begin{array}{r l}
&\{1234|5678,1357|2468,123|\overline{123},246|\overline{246},478|\overline{478},156|\overline{156},12|\overline{12}, 34|\overline{34},56|\overline{56},78|\overline{78},26|\overline{26},\\
&35|\overline{35},17|\overline{17},48|\overline{48},68|\overline{68},57|\overline{57},23|\overline{23}\}
\end{array}
\]
is injective. Since $\dim(\mathcal S)=3$, it follows that
$\ID(8)=3$.
\end{proof}
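The computer verification invoked for $\ID(8)=3$ can be reproduced along the following lines. This is a minimal, self-contained Python sketch, not the program actually used: it rebuilds the split system listed above (adding all trivial splits) and tests Property~(\ref{is-injective}) by comparing the medians of all 3-subsets via Lemma~\ref{medinbu}; its dimension can then be obtained, for instance, with the brute-force \texttt{dimension} helper sketched at the end of Section~\ref{sec:preli}.
\begin{verbatim}
from itertools import combinations

X = frozenset(range(1, 9))
def split(chars):
    A = frozenset(int(c) for c in chars)
    return (A, X - A)

# the non-trivial splits listed above, plus all trivial splits
S = [split(s) for s in ("1234 1357 123 246 478 156 12 34 "
                        "56 78 26 35 17 48 68 57 23").split()]
S += [split(str(x)) for x in X]

def median(Y):
    Y = frozenset(Y)
    return tuple(A if len(A & Y) >= 2 else B for (A, B) in S)

medians = [median(Y) for Y in combinations(sorted(X), 3)]
print("injective:", len(medians) == len(set(medians)))
\end{verbatim}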
Note that as $\ID(8)=3$, the upper bound for $\ID(n)$ given
in Theorem~\ref{cor:ID-numbers} is not
tight even for $n=8$.
In general, it appears to be difficult to find better upper or lower bounds for
$\ID(n)$; however, in
the next two sections we shall give improved bounds for two variants of the injective dimension.
\section{The injective 2-split-dimension}\label{sec:id2}
To help better understand the injective dimension of a split system,
in this section we shall consider a restricted version of this
quantity that is defined as follows.
For $n\geq 3$, let $\mathbb{S}_2(n)$ be the set of all injective split systems
on $X=\{1,\dots, n\}$ whose non-trivial splits all have size 2.
As mentioned in the introduction, we define $\ID_2(n)$ for $n\ge 3$ as
\begin{equation}
\label{eq:define-2}
\ID_2(n) = \min\{ \dim(\mathcal S) \colon \mathcal S\in \mathbb{S}_2(n) \}.
\end{equation}
By Theorem~\ref{circbu}~(i), $\ID_2(n)$ is well-defined.
Clearly $\ID_2(n) \geq \ID(n)$ and
equality holds for $n =3,4,5$ since every non-trivial split
of a set $X$ of size 3, 4, or 5 is a 2-split.
In the main result of this section (Theorem~\ref{id2-lb}), we provide upper and lower
bounds for $\ID_2(n)$. To prove it, we shall use two lemmas.
For $\mathcal S$ a split system on $X$, we denote by $P(\mathcal S)$ the graph
with vertex set $X$ and with edge set all the pairs $\{x,y\}$ such that $xy|\overline{xy} \in \mathcal S$.
We also denote the degree of a vertex $x \in X$ in $P(\mathcal S)$ by $\deg_{P(\mathcal S)}(x)$.
If $\mathcal S$ contains only trivial splits and 2-splits then $P(\mathcal S)$ and dicing are related as stated
in Lemma~\ref{dice2}. We omit its straight-forward proof but remark in passing that Lemma~\ref{dice2}
is a strengthening of Theorem~\ref{dices} for split systems in $\mathbb{S}_2(n)$, for all $n\geq 3$.
\begin{lemma}\label{dice2}
Let $\mathcal S \in \mathbb{S}_2(|X|)$ be a split system on $X$ with $|X| \ge 3$.
Then,
\begin{itemize}
\item[$\bullet$] $\mathcal S$ 4-dices $X$ if and only if $|X| \leq 4$ or for all $Y \in {X \choose 4}$,
the restriction $P(\mathcal S|_Y)$ contains two edges that share a vertex.
\item[$\bullet$] $\mathcal S$ 5-dices $X$ if and only if $|X| \leq 5$ or for all $Y \in {X \choose 5}$,
the restriction $P(\mathcal S|_Y)$ contains five edges or more.
\item[$\bullet$] $\mathcal S$ 6-dices $X$ if and only if $|X| \leq 6$ or for all $Y \in {X \choose 6}$,
the restriction $P(\mathcal S|_Y)$ contains a 3-clique.
\end{itemize}
\end{lemma}
In terms of the dimension of a split system in $\mathbb{S}_2(n)$, $n\geq 3$, we also have
the following result.
\begin{lemma}\label{dim2}
Let $\mathcal S \in \mathbb{S}_2(|X|)$ be a split system on $X$ with $|X| \ge 3$.
Then,
\begin{itemize}
\item[(i)] If $P(\mathcal S)$ does not contain a 3-clique then
$\dim(\mathcal S)=\max_{x \in X} \{\deg_{P(\mathcal S)}(x)\}$.
\item[(ii)] If $P(\mathcal S)$ contains a 3-clique
then $\dim(\mathcal S)=\max \{\max_{x \in X} \{\deg_{P(\mathcal S)}(x)\}, 3\}$.
\end{itemize}
\end{lemma}
\begin{proof}
We prove (i) and (ii) together. For this, put $n=|X|$. If $n=3$ then $P(\mathcal S)$ consists of three isolated vertices. So Assertion~(i)
holds. Since $\mathcal S$ only contains trivial splits, it follows that Assertion~(ii) holds vacuously. So assume
that $n\geq 4$.
Let $\mathcal S \in \mathbb{S}_2(n)$.
Then a maximal incompatible subset $\mathcal S'$ of
$\mathcal S$ must be of one of the following two types:
\begin{itemize}
\item[(a)] A triangle of 2-splits.
\item[(b)] The set of all 2-splits in $\mathcal S$ containing some $x\in X$ in their small part.
\end{itemize}
To see that these are the only two possible types, it suffices to remark that any subset $\mathcal S'\subseteq \mathcal S$ with
$|\mathcal S'|\geq 4$ is incompatible if and only if there exists some $x\in X$ such that all splits of $\mathcal S'$ contain $x$ in their small part.
If $\mathcal S'$ is of Type~(a) then $\mathcal S'$ corresponds to a 3-clique in $P(\mathcal S)$
and $|\mathcal S'|=3$. If $\mathcal S'$ is of Type~(b) then $\mathcal S'$ corresponds to the
set of edges of $P(\mathcal S)$ that are incident with $x$. Hence, $|\mathcal S'|=\deg_{P(\mathcal S)}(x)\geq 3$. Thus,
if $P(\mathcal S)$ has a vertex $x$ with $\deg_{P(\mathcal S)}(x)\geq 3$ or if $P(\mathcal S)$ does
not contain a 3-clique then $\dim(\mathcal S)=\max_{x \in X} \{\deg_{P(\mathcal S)}(x)\}$.
Otherwise, $\dim(\mathcal S)=3$.
\end{proof}
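To illustrate the two cases in Lemma~\ref{dim2}, suppose that the non-trivial splits of $\mathcal S$ are the three 2-splits $12|\overline{12}$, $13|\overline{13}$ and $23|\overline{23}$ of $X=\{1,\dots,5\}$. Then $P(\mathcal S)$ is a triangle on $\{1,2,3\}$ together with the isolated vertices $4$ and $5$, so $\max_{x \in X}\{\deg_{P(\mathcal S)}(x)\}=2$, while the three 2-splits are pairwise incompatible and so form an incompatible subset of size~3; the formula in Assertion~(ii) accordingly yields $\dim(\mathcal S)=\max\{2,3\}=3$.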
We now prove the main result of this section.
\begin{theorem}\label{id2-lb}
For all $n \geq 5$,
\[
\lfloor \frac{n}{2} \rfloor \le \ID_2(n) \leq n-3.
\]
\end{theorem}
\begin{proof}
We first show that $\ID_2(n) \leq n-3$ by constructing an injective
split system $\mathcal S_n$ on $X_n=\{1, \ldots, n\}$ with $\dim(\mathcal S_n)=n-3$.
For this, let $\sigma_n$ denote some circular ordering of the
elements of $X_n$. Let $\mathcal S_n$ denote the set of all splits $xy|X_n-\{x,y\}$
such that $x,y \in X_n$ are not consecutive under $\sigma_n$. By definition of $\mathcal S_n$, all
vertices of $P(\mathcal S_n)$ have degree $n-3$. If $n \geq 6$, it follows by
Lemma~\ref{dim2} that $\dim(\mathcal S_n)=n-3$. If $n=5$, it is straightforward to
check that $P(\mathcal S_n)$ does not contain a 3-clique. So, by
Lemma~\ref{dim2}, $\dim(\mathcal S_n)=n-3$ holds in this case too.
Thus, it remains to show that $\mathcal S_n$ is injective. In view of Theorem~\ref{dices},
we do this by showing that $\mathcal S_n$ 4-, 5- and 6-dices $X_n$.
To see that $\mathcal S_n$ 4-dices $X_n$, let
$Y \in {X_n \choose 4}$ which exists as $n\geq 5$. By Lemma~\ref{dice2}, it suffices to show that
there exists an element of $Y$ that has degree 2 or more in $P(\mathcal S_n|_Y)$. Let $x \in Y$.
If $\deg_{P(\mathcal S_n|_Y)}(x)\geq 2$, we are done by the definition of $\mathcal S_n$.
Otherwise, $Y$ contains two
elements $y$ and $z$ such that $y$ and $z$ precede and follow $x$
under $\sigma_n$, respectively. Let $t$ be the fourth element of $Y$. Then $\{x,t\}$ is
an edge in $P(\mathcal S_n)$. Moreover, since $n \geq 5$ and $t \neq x$, at least
one of $y,z$ must be adjacent to $t$ in $P(\mathcal S_n)$. Thus,
$\deg_{P(\mathcal S_n|_Y)}(t)\geq 2$, as required.
To see that $\mathcal S_n$ 5-dices $X_n$, let
$Y \in {X_n \choose 5}$ which again exists because $n\geq 5$. By Lemma~\ref{dice2}, it suffices to show that
$P(\mathcal S_n|_Y)$ contains at least five edges. To see this, note first that, for all $x \in Y$, there are at
most two elements in $Y-\{x\}$ that do not form an edge with $x$ in $P(\mathcal S_n|_Y)$
because $\mathcal S_n$ is circular. For all
$x \in Y$, it follows that $\deg_{P(\mathcal S_n|_Y)}(x)\geq 2$. Since $Y$
contains five elements, this implies that $P(\mathcal S_n|_Y)$ contains at least five edges, as required.
Finally, to see that $\mathcal S_n$ 6-dices $X_n$, note first that we
may assume that $n\geq 6$ as otherwise $\mathcal S_n$ 6-dices $X_n$ by definition.
Let $Y \in {X_n \choose 6}$. By Lemma~\ref{dice2}, it suffices to show
that $P(\mathcal S_n|_Y)$ contains a 3-clique. To see this, let $x \in Y$.
Then, by the definition of $P(\mathcal S_n)$, there exist at least
three elements in $Y$, say $y$, $z$ and $t$, that form an edge with $x$ in $P(\mathcal S_n)$.
Moreover, at least two of $y$, $z$ and $t$, say $y$ and $z$, must form an
edge $\{y,z\}$ in $P(\mathcal S_n)$ since $y$, $z$ and $t$ cannot all be
consecutive with each other under $\sigma_n$. It follows that
$\{x,y,z\}$ is the vertex set of a 3-clique in $P(\mathcal S_n|_Y)$, as required.
This concludes the proof that $\ID_2(n) \leq n-3$.
We now show that $\lfloor \frac{n}{2} \rfloor \le \ID_2(n)$.
We begin by showing that $\ID_2(n+2)>\ID_2(n)$, for all $n \geq 3$.
Assume that $n\geq 3$. Also, assume that $\sigma_{n+2}$ is the natural
ordering of $X_{n+2}=\{1,2,\ldots, n,n+1, n+2\}$. Let $\mathcal S\in \mathbb{S}_2(n+2)$ denote a split system on
$X_{n+2}$ that attains $\ID_2(n+2)$.
Let $\mathcal S'$ denote a maximal incompatible subset of $\mathcal S$.
We claim that $\mathcal S'$ must contain a non-trivial split that separates
the elements $n+1$ and $n+2$. Clearly, $\mathcal S$ must contain such a split as
otherwise Lemma~\ref{medinbu} implies that $\phi_Y(S)=\phi_{Y'}(S)$ holds for all $S \in \mathcal S$ and
all $Y,Y' \in {X_{n+2} \choose 3}$ with $Y \cap Y'=\{n+1,n+2\}$. Hence, $\mathcal S$ is not injective,
which is impossible.
Choose a split $S_0\in\mathcal S$ such that $S_0(n+1)\not=S_0(n+2)$. Assume for contradiction that all splits $S \in \mathcal S'$
satisfy $S(n+1)=S(n+2)$. Then $S_0$ is incompatible with every split
in $\mathcal S'$ because $S_0$ and every split in $\mathcal S'$ have size two.
Hence, $\mathcal S'\cup\{S_0\}$ is an incompatible subset of $\mathcal S$ that contains $\mathcal S'$ as a proper
subset, which contradicts the maximality of $\mathcal S'$.
Consider now the restriction $\mathcal S_n$ of $\mathcal S$ to $X_n$. By
Corollary~\ref{sub-inj}, $\mathcal S_n$ is injective because $\mathcal S$ is injective. Moreover, since all maximal
incompatible subsets of $\mathcal S$ contain a split separating $n+1$ and $n+2$ by the previous claim, it follows that no maximal incompatible subset of $\mathcal S_n$ has size equal to
$\dim(\mathcal S)$. Hence, $\dim(\mathcal S_n)< \dim(\mathcal S)$.
Since $\dim(\mathcal S)=\ID_2(n+2)$ by the choice of $\mathcal S$, and
$\dim(\mathcal S_n) \geq \ID_2(n)$ by the injectivity of $\mathcal S_n$, it follows
that $\ID_2(n+2)>\ID_2(n)$, as required.
We conclude by showing that $\ID_2(n) \geq \lfloor \frac{n}{2} \rfloor$ holds by performing
induction on $n$.
If $n \in \{4,5\}$ then $\ID_2(n)=\ID(n)$ since all non-trivial splits on $X_n$
are 2-splits, and $\ID(n)= \lfloor \frac{n}{2} \rfloor$ holds by
Corollary~\ref{cor:ID-numbers}. This implies the stated inequality
in these cases. Now, let $n>5$ and assume that the stated
inequality holds for all $4\leq n'<n$. Since $\ID_2(n)>\ID_2(n-2)$,
it follows by the induction hypothesis
that $\ID_2(n)> \ID_2(n-2)\geq \lfloor \frac{n-2}{2} \rfloor$. Hence,
$\ID_2(n) \geq \lfloor \frac{n-2}{2} \rfloor +1=\lfloor \frac{n}{2} \rfloor$,
as desired.
\end{proof}
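For example, for $n=6$ and $\sigma_6$ the circular ordering $1,2,\dots,6$, the split system $\mathcal S_6$ consists of the nine 2-splits $13|\overline{13}$, $14|\overline{14}$, $15|\overline{15}$, $24|\overline{24}$, $25|\overline{25}$, $26|\overline{26}$, $35|\overline{35}$, $36|\overline{36}$ and $46|\overline{46}$. The graph $P(\mathcal S_6)$ is the complement of the 6-cycle $1,2,\dots,6$, in which every vertex has degree $3$ and $\{1,3,5\}$ induces a 3-clique, so Lemma~\ref{dim2} gives $\dim(\mathcal S_6)=3=n-3$.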
\section{Rooted injective dimension}\label{sec:idr}
In this section, we consider another variant of the injective
dimension which behaves quite differently from $\ID(n)$.
Let $X$ denote a set with $|X|=n$. Choose some element $r \in X$.
For $Z \in {X-\{r\} \choose 2}$, put $Z_r=Z \cup \{r\}$.
We say that a split system is {\em rooted-injective (relative to $r$)} if
\[
\phi_{Z_r} \neq \phi_{Z'_r}
\]
for all distinct $Z,Z' \in {X-\{r\} \choose 2}$. This
concept is closely related to the rooted median graphs considered in \cite{B22}.
Note that if $|X|=3$ then the (unique) split system on $X$ is rooted-injective relative to $r$ for
any choice of $r\in X$. Also,
note that if $\mathcal S$ is injective, then $\mathcal S$ is
rooted-injective relative to $r$, for all $r \in X$. The converse, however,
does not hold. For example, the split system $\mathcal S$ on $X=\{1, \ldots, 6\}$
whose set of non-trivial splits is:
\[
\{14|\overline{14}, 15|\overline{15}, 16|\overline{16}, 24|\overline{24}, 25|\overline{25}, 26|\overline{26},
34|\overline{34}, 35|\overline{35}, 36|\overline{36}\}
\]
is not injective because $\mathcal S$ does not 6-dice $X$, so the characterization of Theorem~\ref{dices} fails. But $\mathcal S$
is rooted-injective relative to $r$, for all $r \in X$.
For $n \ge 3$, $X$ a set with $|X|=n$ and some $r\in X$, we define the
{\em rooted-injective dimension} $\ID^r(n)$ to be
\[
\ID^r(n) = \min\{ \dim(\mathcal S) : \mathcal S \mbox{ is a rooted-injective
split system on $X$ relative to $r$} \}.
\]
Our next result (Theorem~\ref{th:idr}) shows that $\ID^r(n)$ is well-defined for all $n\geq 3$,
and that, in contrast to $\ID_2(n)$, $\ID^r(n)$ is always equal to 2 when $n \geq 4$.
\begin{theorem}\label{th:idr}
Suppose that $X$ is such that $n=|X|\ge 4$ and that $r \in X$. Then
there exists a rooted-injective split system $\mathcal S$ on $X$ relative to $r$ with $\dim(\mathcal S)=2$.
Moreover $\ID^r(n) =2$.
\end{theorem}
\begin{proof}
Put $X = \{1,2,\dots,n-1,r\}$.
First note that $\ID^r(n)\ge 2$, since if $\ID^r(n) =1$,
then there would be a rooted-injective split system $\mathcal S$
on $X$ relative to $r$ with $\dim(\mathcal S)=1$. But this is not possible
since then the Buneman graph $B(\mathcal S)$ associated to $\mathcal S$ would be
a phylogenetic tree on $X$ with $|\mathcal S|+1$ edges. Using a similar argument to the one used
to show that $\ID(4)=\ID(5)=2$ in the proof of Theorem~\ref{cor:ID-numbers}, it is straightforward to
check that then $\mathcal S$ is not rooted-injective, which is impossible.
Now, define the split system $\mathcal S$ on
$X$ whose subset of non-trivial splits is equal to $\mathcal S_1 \cup \mathcal S_2$, where:
\[
\mathcal S_1=\{ \{n-1-i,\ldots,n-1\} | \overline{\{n-1-i,\ldots,n-1\}} \cup\{r\} \,:\, 0 \le i \le n-3\}
\]
and
\[
\mathcal S_2=\{ \{n-1-i,\ldots,1\} | \overline{\{n-1-i,\ldots,1\}} \cup\{r\} \,:\, 0 \le i \le n-3\}.
\]
\begin{figure}
\caption{The Buneman graph of a split system on $\{1,2,3,4,5,6,r\}$.}
\label{fig:grid}
\end{figure}
For example, for $n=7$, the Buneman graph $B(\mathcal S)$ of $\mathcal S$
is the half-grid pictured in Figure~\ref{fig:grid}. More precisely, in that figure,
the splits in $\mathcal S_1$ and $\mathcal S_2$ are the splits associated to edges
oriented downwards from left to right and from right to left, respectively.
To see that $\dim(\mathcal S)=2$, it suffices to remark that
$\mathcal S_1$ and $\mathcal S_2$ are compatible, so a maximal incompatible
subset of $\mathcal S$ has size at most $2$. Since $\mathcal S$ is not compatible,
it follows that $\dim(\mathcal S)=2$.
We next show that $\mathcal S$ is rooted-injective relative to $r$. To see this,
let $Z, Z' \in {X-\{r\} \choose 2}$ distinct. Also, let $x^-=\min(Z \cup Z')$
and $x^+=\max(Z \cup Z')$. Since $Z \cup Z'$ has size
at least $3$, we have that $x^-$ and $x^+$ are distinct. Furthermore, $x^- \leq n-3$
and $x^+ \geq 3$ must hold. In particular, the splits $S^-=\{x^-+1, \ldots, n-1\}|\{1, \ldots, x^-\} \cup \{r\}$
and $S^+=\{1, \ldots, x^+-1\}|\{x^+, \ldots, n-1\} \cup \{r\}$ belong to $\mathcal S_1$
and $\mathcal S_2$ respectively, so both splits belong to $\mathcal S$.
Moreover, $Z \cap Z'$ contains at most one element, so at least one of $x^-$ and $x^+$
does not belong to $Z \cap Z'$. If $x^- \notin Z \cap Z'$
then $S^-$ satisfies $\phi_{Z_r}(S^-) \neq \phi_{Z'_r}(S^-)$, and if $x^+ \notin Z \cap Z'$
then $S^+$ satisfies $\phi_{Z_r}(S^+) \neq \phi_{Z'_r}(S^+)$. So, $\mathcal S$ is rooted-injective relative to $r$.
\end{proof}
\begin{remark}
The proof that the split system $\mathcal S$ is rooted-injective relative to $r$ in
Theorem~\ref{th:idr} gives an alternative
proof that the extended half-grid for $(n+1)$ in \cite[p.7]{B22} can be used to
represent a symbolic map,
since the Buneman graph $B(\mathcal S)$ with
the pendant edge containing $r$ contracted is isomorphic to
the extended half-grid on $n$.
\end{remark}
Note that the rooted-injective split system $\mathcal S$ in the proof of Theorem~\ref{th:idr}
is the union of two split systems $\mathcal S_1$ and $\mathcal S_2$
whose associated Buneman graphs are phylogenetic trees.
In general, if $\mathcal S$ is a split system on $X$ with this property
then $\dim(\mathcal S)\leq 2$ (since every
3-subset of $\mathcal S$ must contain at least one pair of splits
that is contained in one of the split systems, and so this pair of splits must be compatible).
Hence, by Theorem~\ref{cor:ID-numbers}, $\mathcal S$ cannot be injective in case $|X| \ge 6$.
\section{Discussion}\label{sec:discuss}
In this paper we have defined and explored the concept of injective split
systems, that is, split systems $\mathcal S$ on a set $X$ such that any two
distinct sets of three elements of $X$ have distinct median vertices in the
Buneman graph $\mathcal B(\mathcal S)$ associated to $\mathcal
S$. Making use of the notion of dicing, we have shown that a given
split system is injective if and only if its subsets of size $6$ or less
are injective, from which we derived a characterization of injective split
systems. We also studied the injective dimension of an integer $n \geq 3$,
that is, the minimal dimension of an injective split system on some set of
$n$ elements. On this topic, it remains an open question whether there is a
lower bound for $\ID(n)$ that is linear in $n$.
The notion of an injective split system also suggests considering a
matching concept of surjective split systems. We call a split system ${\mathcal S}$
on some set $X$ with $|X|\geq 3$ \emph{surjective} if the vertex set of
$B(\mathcal S)$ is equal to
\begin{equation}
\{\phi_x \,:\, x \in X\} \cup \{ \phi_Y \,:\, Y \in {X \choose 3}\},
\end{equation}
In other words, every non-leaf vertex in $B({\mathcal S})$ is the median of three
leaves in $B({\mathcal S})$. Note that every split system whose Buneman graph is a
phylogenetic tree is surjective but, for example, the split system
corresponding to the Buneman graph in Fig.~\ref{bubu}(ii) is not
surjective because the central vertex in the graph is not the median of any
three leaves. The general properties of surjective split systems remain to
be investigated.
Naturally, one may want to study \emph{bijective} split systems ${\mathcal S}$
that are both injective and surjective. We conjecture that a split
system ${\mathcal S}$ on some set $X$ with $|X|\geq 3$ is bijective if and only if
either $|X|=3$ and $|{\mathcal S}|=3$ or $|X|=4$, $|{\mathcal S}|=6$ (i.\,e.\,the Buneman
graph associated to ${\mathcal S}$ is a three-leaved phylogenetic tree or -- up to
leaf relabelling -- the graph in Fig.~\ref{bubu}(i), respectively). A
proof or counter-example for this conjecture might use concepts that are
related to the so-called median stabilization degree of a median algebra --
see e.g. \cite{B99,evans1982median}.
Finally, another interesting open problem is the following: Can we develop
a modular decomposition theory for Buneman graphs along the lines described
in \cite{B22}?
\section*{Conflict of interest}
The authors declare that they have no conflict of interest.
\section*{Data availability}
Not applicable.
\end{document}
\begin{document}
\title{3-critical subgraphs of snarks}
\begin{abstract}
In this paper we further our understanding of the structure of class two cubic graphs, or snarks, as they are commonly known. We do this by investigating their 3-critical subgraphs, or, as we will call them, minimal conflicting subgraphs. We consider how the minimal conflicting subgraphs of a snark relate to its possible minimal 4-edge-colourings. We fully characterise the relationship between the resistance of a snark and the set of minimal conflicting subgraphs. That is, we show that the resistance of a snark is equal to the minimum number of edges which can be selected from the snark such that the selection contains at least one edge from each minimal conflicting subgraph. We similarly characterise the relationship between what we call \textit{the critical subgraph} of a snark and the set of minimal conflicting subgraphs; the critical subgraph is the set of all edges which are conflicting in some minimal colouring of the snark. Further to this, we define groups, or \textit{clusters}, of minimal conflicting subgraphs. We then highlight some interesting properties and problems relating to clusters of minimal conflicting subgraphs.
\end{abstract}
\section{Introduction}{\label{Introduction}}
As is well-known, the edge chromatic number of a cubic graph is either three or four. Such graphs are referred to as cubic class one and cubic class two graphs, respectively. Cubic class two graphs are more commonly known as snarks. Snarks have long been of particular interest in graph theory, largely because many major problems in graph theory are easily solvable for graphs which are not snarks. Tutte’s 5-flow conjecture \cite{tutte} and the cycle double cover conjecture \cite{szekeres} are major examples of these problems.
Let $G=(V,E)$ be a graph. A $k$-edge-colouring, $f$, of $G$ is a mapping from the set of edges of $G$ to a set of $k$ colours. That is, $f : E \longrightarrow \{1,\dots,k\}$. $f$ is a \textit{proper} $k$-edge-colouring of $G$ if no two adjacent elements in $E$ are mapped to the same colour. By Vizing's theorem \cite[Theorem 6.2]{bondy}, if $G$ is a graph and $f$ is a proper colouring then the smallest possible value of $k$ is $\Delta$ or $\Delta + 1$, where $\Delta$ is the maximum degree of any vertex in $G$. If the smallest possible value of $k$ is $\Delta$, then we say that $G$ is class one, or $\Delta$-edge-colourable. Otherwise we say that $G$ is class two, or $(\Delta + 1)$-edge-colourable. Given a $k$-edge-colouring $f$, we call the set $f^{-1}(i)$ a colour class, for each $i \in \{1,\dots,k\}$. A vertex $v$ is \textit{conflicting} with regard to $f$ if more than one of the edges incident to $v$ are mapped to the same colour.
The \textit{resistance} of $G$, denoted as $r(G)$, is defined as $\min\{|f^{-1}(i)| : f$ is a proper $(\Delta+1)$-edge-colouring of $G$ and $f^{-1}(i)$ is a colour class$\}$. That is, it is the minimum number of edges that can be removed from a graph such that the resulting graph is 3-edge-colourable \cite{steffen}. As it turns out, somewhat counter-intuitively perhaps, the resistance of $G$ equals the \textit{vertex resistance} of $G$, denoted as $r_v(G)$, which is the minimum number of vertices that need to be removed from $G$ such that the resultant graph is class one. Given a 3-edge-colouring of $G$ with $r(G)$ conflicting vertices, we can also find a proper 4-edge-colouring of $G$ with $r(G)$ edges being mapped to one particular colour. Furthermore, the conflicting vertices in the 3-edge-colouring have a one-to-one relationship with the set of edges mapped to the fourth colour in the 4-edge-colouring. This is a result we use implicitly going forward, in that we do not consider 3-edge-colourings with conflicting vertices. We consider only proper 3-edge-colourings or proper 4-edge-colourings of cubic graphs. It has also been proven that $r(G) = 0$ or $r(G) \geq 2$ for any cubic graph $G$ \cite{fiol}.
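The Petersen graph, the smallest snark, attains this lower bound: its resistance and vertex resistance are both equal to 2.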
If $f$ is a proper $k$-edge-colouring of a graph $G$ and $|f^{-1}(i)| = r(G)$ for some $i \in \{1, \dots, k\}$, then we call $f$ a \textit{minimal colouring} \cite{steffen2}. For cubic graphs, we will use colour sets $\{1,2,3\}$ and $\{0,1,2,3\}$ for class one and class two graphs, respectively. We will assume $|f^{-1}(0)| = r(G)$ for a minimal colouring $f$ of $G$. Given a minimal colouring $f$ of $G$, if $f(e)=0$ for some edge $e \in G$ then we call $e$ a \textit{conflicting edge} with regard to $f$. Now, let $H \subset G$ and let $f_H$ be a proper colouring of $H$. A proper colouring of $G$, $f_G$, with $f_G(e) = f_H(e)$ for all $e \in H$ is called an \textit{extension} of $f_H$.
If $f_G$ is such that the number of conflicting edges in $G-H$ is minimal given $f_H$, then we call $f_G$ a \textit{minimal extension} of $f_H$.
In this paper, we further understand the complexity of these graphs by extending the definition of conflicting zones introduced in \cite{fiol}, although we opt for the term conflicting subgraphs. We define minimal conflicting subgraphs, as well as consequent concepts such as: the buffer subgraph, which is the maximum subgraph containing no edges in or adjacent to any minimal conflicting subgraph; the critical subgraph, which is the subgraph containing all edges which are conflicting in some minimal colouring of the graph; and clusters of minimal conflicting subgraphs, which are essentially overlapping minimal conflicting subgraphs (formal definitions to follow). We then prove the following insight about cubic class two graphs: for any collection of edges $R$ in a cubic graph $G$, such that $R$ contains an edge from every minimal conflicting subgraph in $G$, $G-R$ is 3-edge-colourable. Furthermore, for such an $R$ with minimal possible order, the resistance of $G$ is $|R|$. We are then able to characterise the resistance, as well as the critical subgraph of a graph, in terms of the set of minimal conflicting subgraphs. Finally, we discuss further problems of consideration.
\section{Minimal conflicting subgraphs}
A conflicting subgraph of a cubic graph $G$ is defined as a subgraph $H$ of $G$ which does not admit a proper 3-edge-colouring. That is, essentially, a subgraph which is not itself 3-edge-colourable. This idea was introduced in \cite{fiol}, where it was called a conflicting zone. With a view to further understanding what makes a cubic graph class two, we extend this idea by defining \textit{minimal conflicting subgraphs}. The essential idea is to isolate from the graph that which is not 3-edge-colourable.
\begin{definition} \label{mcz}{\rm
Let $G$ be a subcubic graph and let $M$ be a conflicting subgraph of $G$. If for every $e \in E(M)$ we have that $M - \{e\}$ is not a conflicting subgraph, then we call $M$ a \textit{minimal conflicting subgraph} of $G$. Let
$$M_G = \bigcup \{ M ~|~M {\rm~is~a~minimal~conflicting~subgraph~of~} G\}.$$
We call $M_G$ the \textit{maximal conflicting subgraph} of $G$. Let
$$C_G = \{ e \in E(G)~|~e \notin M_G~ {\rm and}~e~{\rm is~ adjacent~ to~ some}~e'~{\rm in}~ M_G\}.$$
We call $C_G$ \textit{the conflict-cut set} of $G$. Let
$$B_G = \{ e \in E(G)~|~e \notin M_G \cup C_G\}.$$
We call $B_G$ the \textit{buffer subgraph} of $G$.}
\end{definition}
A subcubic graph $G$ is called 3-critical if it has chromatic index 4 and $G - e$ has chromatic index 3 for every $e \in G$. It is easy to see that this definition coincides with our definition of minimal conflicting subgraphs, in that a minimal conflicting subgraph can be thought of as a 3-critical subgraph. If the 3-critical subgraphs represent only that which is essentially not 3-edge-colourable, then the buffer subgraph represents that which is essentially redundant in contributing to the non-colourability of the cubic graph. We list some properties of 3-critical subgraphs, or minimal conflicting subgraphs, of subcubic graphs. First we present some known properties of 3-critical graphs in general, after which we prove some more pertinent properties for our purposes regarding minimal conflicting subgraphs.
\begin{proposition} \label{properties_of_3crits}
Let $M$ be a 3-critical graph. The following statements are true.
\begin{enumerate}
\item[(i)] $r(M)=1$ and every edge $e \in M$ is conflicting in some minimal colouring of $M$.
\item[(ii)] $M$ is strictly subcubic.
\item[(iii)] $M$ is bridgeless.
\item[(iv)] Every vertex in $M$ has degree two or three.
\item[(v)] Every vertex in $M$ has at least two neighbours of degree three.
\end{enumerate}
\begin{proof}
These are known properties of 3-critical graphs and we omit the proofs.
\end{proof}
\end{proposition}
\begin{proposition} \label{properties_of_mczs}
Let $G$ be a bridgeless cubic graph. The following statements are true.
\begin{enumerate}
\item[(i)] The distance between any two disjoint minimal conflicting subgraphs of $G$ is at least one.
\item[(ii)] Every conflicting subgraph in $G$ contains a minimal conflicting subgraph.
\end{enumerate}
\begin{proof}
\begin{enumerate}
\item[(i)] This follows directly from Proposition \ref{properties_of_3crits} (iv).
\item[(ii)] Let $M$ be a conflicting subgraph of $G$. Choose an edge $e \in M$. We check $e$ by considering $r(M - \{ e \})$. If $r(M - \{ e \}) \neq 0$ then remove $e$ from $M$. If $r(M - \{ e \})=0$ then leave $M$ as is and mark $e$ as checked. Continue checking edges in $M$ until every edge is checked. Once every edge is checked, $M$ is then a minimal conflicting subgraph.
\end{enumerate}
\end{proof}
\end{proposition}
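The argument for (ii) above is effectively an algorithm. The following sketch (in Python; the edge representation, function names and the brute-force colourability test are purely illustrative and not part of the results of this paper) makes the greedy minimisation explicit.
\begin{verbatim}
def is_3_edge_colourable(edges):
    # Backtracking test for a proper 3-edge-colouring; `edges` is a
    # collection of frozensets {u, v}.  Exponential, but adequate for
    # the small subgraphs considered here.
    edges = list(edges)

    def extend(i, colour):
        if i == len(edges):
            return True
        for c in (1, 2, 3):
            if all(colour[j] != c or not (edges[j] & edges[i])
                   for j in range(i)):
                colour.append(c)
                if extend(i + 1, colour):
                    return True
                colour.pop()
        return False

    return extend(0, [])


def minimal_conflicting_subgraph(edges):
    # Greedy minimisation from the proof of (ii): discard an edge
    # whenever its removal still leaves a conflicting subgraph.
    current = {frozenset(e) for e in edges}
    assert not is_3_edge_colourable(current)  # input must be conflicting
    for e in list(current):
        if not is_3_edge_colourable(current - {e}):
            current.remove(e)
    return current
\end{verbatim}
A single pass suffices because 3-edge-colourability is inherited by subgraphs, so an edge that was kept at an earlier stage never needs to be re-examined after later deletions.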
We begin our investigation into these structures. We consider their existence relative to conflicting edges in minimal colourings. Note that although our primary interest is in cubic graphs, some results are applicable to subcubic graphs as well and are stated as such.
\begin{proposition} \label{r_distinct_mczs}
Let $G$ be a subcubic class two graph and let $f$ be a minimal colouring of $G$. For each conflicting edge $e$ with regard to $f$, there exists at least one minimal conflicting subgraph which contains $e$ and also contains no other conflicting edge with regard to $f$.
\begin{proof}
Let $f$ be a minimal colouring of $G$ and let $R = \{ e_1, \dots, e_r \}$ be the set of conflicting edges with regard to $f$. For each $i \in \{ 1, \dots ,r \}$ let $M_i = \{ e_i\}$ and conduct the following process. Choose an edge $e$ not contained in $M_i \cup R$ which is adjacent to some edge in $M_i$. Add edge $e$ to $M_i$. While $r(M_i)=0$, we keep adding such edges. Since $r(G - (R-\{ e_i\}))$ must equal 1, we know that eventually we will have $r(M_i)=1$. If $r(M_i)=1$ then $M_i$ is a conflicting subgraph which contains no other conflicting edge with regard to $f$ besides $e_i$. By Proposition \ref{properties_of_mczs} (ii), $M_i$ contains a minimal conflicting subgraph. Since $r(M_i - e_i) = 0$, this minimal conflicting subgraph must contain $e_i$. This completes the proof.
\end{proof}
\end{proposition}
From Proposition \ref{r_distinct_mczs}, it is clear that the resistance of a cubic class two graph is less than or equal to the number of distinct minimal conflicting subgraphs contained in the graph. We may be inclined to think that the number of minimal conflicting subgraphs is in some way upper bounded by resistance; however, this is not the case. The flower snarks and Loupekine snarks represent counterexamples to this idea. Each of the graphs in these classes has resistance 2. However, the order of the graphs can be arbitrarily large. Furthermore, the number of possible single vertices which can be removed from said graphs in order to leave behind a minimal conflicting subgraph is also arbitrarily large. Thus the number of minimal conflicting subgraphs is not bounded by resistance.
While no such upper bound exists, there does exist an essential relationship between resistance and minimal conflicting subgraphs. This relationship, as is proven in the following theorem, provides much insight into possible conflicting edges and minimal colourings of snarks. First, we present an important definition.
\begin{definition}
Let $G$ be a subcubic class two graph with minimal conflicting subgraphs $M_1,\dots,M_m$. A representative conflicting subset of $G$ is a set of distinct edges $R = \{e_1, \dots ,e_s\} \subseteq E(G)$ such that $R \cap M_i \neq \emptyset$ for each $i \in \{1,\dots,m\}$.
\end{definition}
We note that for a graph $G$ there may exist representative conflicting subsets of varying order.
\begin{theorem} \label{choose_confledges_in_mincol}
Let $G$ be a subcubic class two graph. Then
$$r(G) = \min\{|R| : R~ {\rm is~ a~ representative~ conflicting~ subset~ of}~ G\}$$
\begin{proof}
Let $\mathcal{M} = \{M_1,\dots,M_m\}$ be the set of all minimal conflicting subgraphs in $G$ and let $R$ be a representative conflicting subset of $G$. Note that no $M_i$ in $\mathcal{M}$ is a subgraph of $G-R$. Assume now that $G-R$ is not 3-edge-colourable. Then $G-R$ contains some minimal conflicting subgraph $M'$ by Proposition \ref{properties_of_mczs} (ii). But $M'$ is then also a minimal conflicting subgraph of $G$, and hence $M' \in \mathcal{M}$, which is a contradiction since no element of $\mathcal{M}$ is a subgraph of $G-R$. Therefore $G-R$ is 3-edge-colourable.
Let $|R|$ be minimal. Since $G-R$ is 3-edge-colourable, we know that $r(G) \leq |R|$. Assume that $r(G) < |R|$ and let $f$ be a minimal colouring of $G$. Let $R'$ be the set of conflicting edges with regard to $f$. By Proposition \ref{r_distinct_mczs}, every element in $R'$ is contained in some minimal conflicting subgraph of $G$. If $R'$ is not a representative conflicting subset then there exists some minimal conflicting subgraph $M' \subset G$ which contains no conflicting edges with regard to $f$. In which case, we have a minimal conflicting subgraph of $G$ which is properly coloured by $f$ using just three colours, a contradiction. If $R'$ is a representative conflicting subset, then the minimality of $|R|$ is contradicted since $|R'| = r(G) < |R|$. Therefore, $r(G)=|R|$.
\end{proof}
\end{theorem}
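Computationally, Theorem \ref{choose_confledges_in_mincol} says that $r(G)$ is the size of a smallest set of edges meeting every minimal conflicting subgraph. The following brute-force sketch (in Python; it assumes that the family of minimal conflicting subgraphs has already been computed, which is itself the expensive part, and the names are purely illustrative) spells this out.
\begin{verbatim}
from itertools import combinations

def resistance_from_mcs(edges, mcs_family):
    # edges: the edge set of G; mcs_family: one edge set per minimal
    # conflicting subgraph of G, in the same representation.
    edges = list(edges)
    families = [set(M) for M in mcs_family]
    for s in range(len(edges) + 1):
        for R in combinations(edges, s):
            if all(set(R) & M for M in families):
                # R is a representative conflicting subset of minimum
                # order, so r(G) = |R| by the theorem.
                return s
\end{verbatim}
If $G$ is class one the family is empty and the function returns 0, in agreement with $r(G)=0$.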
A cubic graph $G$ may have resistance $r(G)$, but given that information alone there is no way of knowing which combinations of $r(G)$ edges may be removed from $G$ in order to render it 3-edge-colourable. Theorem \ref{choose_confledges_in_mincol} is significant in that it informs us exactly which combinations of $r(G)$ edges are sufficient for this purpose. The prerequisite is that we can identify the minimal conflicting subgraphs of $G$. Another way of understanding the result is that we can choose a minimal colouring, relative to conflicting edges, by simply selecting a combination of edges from each minimal conflicting subgraph, as long as this is done minimally.
With Theorem \ref{choose_confledges_in_mincol} we further note that if there exists some edge $e$ which is contained in exactly one minimal conflicting subgraph $M$, but $M$ has non-empty intersection with some other minimal conflicting subgraph $M'$, then $e$ may not be conflicting in any minimal colouring of $G$. Another way of saying this is: if $e \in M$ where $M$ is a minimal conflicting subgraph of $G$, then it is not necessarily the case that $r(G-e)=r(G)-1$. Equivalently, we could say that there does not necessarily exist some minimal colouring of $H \subset G$ which can be extended to a minimal colouring of $G$. It is possible to have a minimal colouring of a subgraph $H \subset G$ with, say, $r_1$ conflicting edges, such that a minimal extension has $r_2$ further conflicting edges, but $r_1 + r_2 > r(G)$. What is also clear is that if the minimal conflicting subgraphs of $G$ are pairwise disjoint, then $r(G) = |\mathcal{M}|$, where $\mathcal{M}$ is the set of all minimal conflicting subgraphs in $G$. Consequent to this discussion, we define the following.
\begin{definition} \label{critical_zone} {\rm
Let $G$ be a subcubic graph. Let
$$K_G = \{e \in G ~|~ f(e)=0 {\rm ~for ~some~ minimal~ colouring~}f{\rm~ of}~ G\}.$$
We call $K_G$ the \textit{critical subgraph} of $G$.}
\end{definition}
As we did with resistance, we are also able to explicitly characterise the critical subgraph in terms of the minimal conflicting subgraphs.
\begin{theorem}
Let $G$ be a subcubic class two graph. Then
$$K_G = \bigcup\{R : R~ {\rm is~ a~ representative~ conflicting~ subset~ of}~G~{\rm of~minimal~order} \}.$$
\begin{proof}
Let $R$ be a representative conflicting subset of $G$ with minimal order. Then the edges in $R$ are the conflicting edges of some minimal colouring of $G$. Therefore, the edges in $R$ are all critical.
Let $e$ be a critical edge of $G$. Then it is a conflicting edge in some minimal colouring $f$ of $G$. Let $R$ be the set of all conflicting edges in $G$ with regard to $f$. Since $G - R$ is colourable, $R$ is a representative conflicting subset of $G$. Since $f$ is minimal, $R$ must have minimal order. Therefore, $e$ is contained in the union of all representative conflicting subsets of $G$ with minimal order.
\end{proof}
\end{theorem}
It is clear that $K_G \subseteq M_G$: by Proposition \ref{r_distinct_mczs}, every edge that is conflicting in some minimal colouring of $G$ is contained in some minimal conflicting subgraph of $G$.
We present an example where $K_G \subset M_G = G$, an example where $K_G = M_G \subset G$, and an example where $K_G = M_G = G$. The first two examples are specific graphs, while the third is the interesting general case of a hypo-Hamiltonian snark.
\begin{example} \label{ex1} {\rm
The subcubic graph $G$ depicted below consists of four identical minimal conflicting subgraphs, $M_1, M_2, M_3$ and $M_4$. Thus $M_G = G$. $M_1 \cap M_2$, $M_2 \cap M_3$ and $M_3 \cap M_4$ are
represented by the thicker edges. We have $r(G) = 2$ and $K_G = (M_1 \cap M_2) \cup (M_3 \cap M_4) \subset M_G = G$. The sets of two edges, one each from $(M_1 \cap M_2)$ and $(M_3 \cap M_4)$, are the only representative conflicting subsets of minimal order. Thus, even though $K_G = (M_1 \cap M_2) \cup (M_3 \cap M_4)$ is itself 3-edge-colourable, any minimal colouring of $G$ must contain a conflicting edge in each of $(M_1 \cap M_2)$ and $(M_3 \cap M_4)$.
\begin{center}
\begin{tikzpicture}[every node/.style={draw,shape=circle,fill=black,text=white,scale=0.6},scale=0.4]
\begin{scope}
\foreach \i [count=\ii from 0] in {54,126,198,270,342}{
\path (\i:15mm) node (a\ii) {};
}
\foreach \x in {0,...,4}{
\tikzmath{
integer \y;
\y = mod(\x+2,5);
}
\draw (a\x) -- (a\y);
}
\path let \p1 = (a2) in node (af0) at (\x1,-4) {};
\path let \p1 = (a3) in node (af1) at (\x1,-4) {};
\path let \p1 = (a4) in node (af2) at (\x1,-4) {};
\draw (a2) -- (af0);
\draw (a3) -- (af1);
\draw (a4) -- (af2);
\draw (af0) -- (af1) -- (af2);
\end{scope}
\begin{scope} [xshift=6cm]
\foreach \i [count=\ii from 0] in {54,126,198,270,342}{
\path (\i:15mm) node (b\ii) {};
}
\foreach \x in {0,...,4}{
\tikzmath{
integer \y;
\y = mod(\x+2,5);
}
\draw (b\x) -- (b\y) [line width=2.2pt];
}
\path let \p1 = (b2) in node (bf0) at (\x1,-4) {};
\path let \p1 = (b3) in node (bf1) at (\x1,-4) {};
\path let \p1 = (b4) in node (bf2) at (\x1,-4) {};
\draw (b2) -- (bf0) [line width=2.2pt] ;
\draw (b3) -- (bf1) [line width=2.2pt];
\draw (b4) -- (bf2) [line width=2.2pt];
\draw (bf0) -- (bf1) -- (bf2) [line width=2.2pt];
\end{scope}
\begin{scope} [xshift=12cm]
\foreach \i [count=\ii from 0] in {54,126,198,270,342}{
\path (\i:15mm) node (c\ii) {};
}
\foreach \x in {0,...,4}{
\tikzmath{
integer \y;
\y = mod(\x+2,5);
}
\draw (c\x) -- (c\y) [line width=2.2pt];
}
\path let \p1 = (c2) in node (cf0) at (\x1,-4) {};
\path let \p1 = (c3) in node (cf1) at (\x1,-4) {};
\path let \p1 = (c4) in node (cf2) at (\x1,-4) {};
\draw (c2) -- (cf0) [line width=2.2pt];
\draw (c3) -- (cf1) [line width=2.2pt];
\draw (c4) -- (cf2) [line width=2.2pt];
\draw (cf0) -- (cf1) -- (cf2) [line width=2.2pt];
\end{scope}
\begin{scope} [xshift=18cm]
\foreach \i [count=\ii from 0] in {54,126,198,270,342}{
\path (\i:15mm) node (d\ii) {};
}
\foreach \x in {0,...,4}{
\tikzmath{
integer \y;
\y = mod(\x+2,5);
}
\draw (d\x) -- (d\y) [line width=2.2pt];
}
\path let \p1 = (d2) in node (df0) at (\x1,-4) {};
\path let \p1 = (d3) in node (df1) at (\x1,-4) {};
\path let \p1 = (d4) in node (df2) at (\x1,-4) {};
\draw (d2) -- (df0) [line width=2.2pt];
\draw (d3) -- (df1) [line width=2.2pt];
\draw (d4) -- (df2) [line width=2.2pt];
\draw (df0) -- (df1) -- (df2) [line width=2.2pt];
\end{scope}
\begin{scope} [xshift=24cm]
\foreach \i [count=\ii from 0] in {54,126,198,270,342}{
\path (\i:15mm) node (e\ii) {};
}
\foreach \x in {0,...,4}{
\tikzmath{
integer \y;
\y = mod(\x+2,5);
}
\draw (e\x) -- (e\y);
}
\path let \p1 = (e2) in node (ef0) at (\x1,-4) {};
\path let \p1 = (e3) in node (ef1) at (\x1,-4) {};
\path let \p1 = (e4) in node (ef2) at (\x1,-4) {};
\draw (e2) -- (ef0);
\draw (e3) -- (ef1);
\draw (e4) -- (ef2);
\draw (ef0) -- (ef1) -- (ef2);
\end{scope}
\draw (a0) -- (b1) node (mid) [midway] {};
\draw (b0) -- (c1) node (mid) [midway] {};
\draw (c0) -- (d1) node (mid) [midway] {};
\draw (d0) -- (e1) node (mid) [midway] {};
\draw (af2) -- (bf0);
\draw (bf2) -- (cf0);
\draw (cf2) -- (df0);
\draw (df2) -- (ef0);
\end{tikzpicture}
\end{center}}
\end{example}
\begin{example} \label{ex2} {\rm
The snark $G$ depicted below consists of three identical non-overlapping minimal conflicting subgraphs $M_1, M_2$ and $M_3$. $M_1 \cup M_2 \cup M_3$ is represented by the thicker edges. Any set of three edges, one each from $M_1, M_2$ and $M_3$, is a representative conflicting subset. Therefore $r(G) = 3$ and $K_G = M_1 \cup M_2 \cup M_3 = M_G \subset G.$
\begin{center}
\begin{tikzpicture}[every node/.style={draw,shape=circle,fill=black,text=white,scale=0.7},scale=0.4]
\begin{scope} [rotate=60, yshift=5cm, rotate=288]
\foreach \i [count=\ii from 0] in {54,126,198,270,342}{
\path (\i:15mm) node (a\ii) {};
}
\foreach \x in {0,...,4}{
\tikzmath{
integer \y;
\y = mod(\x+2,5);
}
\draw (a\x) -- (a\y) [line width=2.2pt];
}
\foreach \i [count=\ii from 0] in {54,126,198,270}{
\path (\i:30mm) node (b\ii) {};
}
\draw (b0) -- (b1) [line width=2.2pt];
\draw (b1) -- (b2) [line width=2.2pt];
\draw (b2) -- (b3) [line width=2.2pt];
\draw (a0) -- (b0) [line width=2.2pt];
\draw (a1) -- (b1) [line width=2.2pt];
\draw (a2) -- (b2) [line width=2.2pt];
\draw (a3) -- (b3) [line width=2.2pt];
\end{scope}
\begin{scope} [rotate=180, yshift=5cm, rotate=288]
\foreach \i [count=\ii from 0] in {54,126,198,270,342}{
\path (\i:15mm) node (c\ii) {};
}
\foreach \x in {0,...,4}{
\tikzmath{
integer \y;
\y = mod(\x+2,5);
}
\draw (c\x) -- (c\y) [line width=2.2pt];
}
\foreach \i [count=\ii from 0] in {54,126,198,270}{
\path (\i:30mm) node (d\ii) {};
}
\draw (d0) -- (d1) [line width=2.2pt];
\draw (d1) -- (d2) [line width=2.2pt];
\draw (d2) -- (d3) [line width=2.2pt];
\draw (c0) -- (d0) [line width=2.2pt];
\draw (c1) -- (d1) [line width=2.2pt];
\draw (c2) -- (d2) [line width=2.2pt];
\draw (c3) -- (d3) [line width=2.2pt];
\end{scope}
\begin{scope} [rotate=300, yshift=5cm, rotate=288]
\foreach \i [count=\ii from 0] in {54,126,198,270,342}{
\path (\i:15mm) node (e\ii) {};
}
\foreach \x in {0,...,4}{
\tikzmath{
integer \y;
\y = mod(\x+2,5);
}
\draw (e\x) -- (e\y) [line width=2.2pt];
}
\foreach \i [count=\ii from 0] in {54,126,198,270}{
\path (\i:30mm) node (f\ii) {};
}
\draw (f0) -- (f1) [line width=2.2pt];
\draw (f1) -- (f2) [line width=2.2pt];
\draw (f2) -- (f3) [line width=2.2pt];
\draw (e0) -- (f0) [line width=2.2pt];
\draw (e1) -- (f1) [line width=2.2pt];
\draw (e2) -- (f2) [line width=2.2pt];
\draw (e3) -- (f3) [line width=2.2pt];
\end{scope}
\node (c) at (0,0) {};
\draw (a4) -- (c);
\draw (c4) -- (c);
\draw (e4) -- (c);
\draw (b3) -- (d0);
\draw (d3) -- (f0);
\draw (f3) -- (b0);
\end{tikzpicture}
\end{center}}
\end{example}
\begin{example} \label{ex3} {\rm
The general case of a hypo-Hamiltonian snark $G$ is depicted below. Let $e$ be any given edge in $G$. Then $e$ is contained in a Hamiltonian cycle of $G-v$, where $v$ is a vertex at distance 1 from $e$. In the diagram, $e$ is conflicting in a minimal 4-edge-colouring of $G$ with two conflicting edges. Therefore, $K_G = G$. The 3-coloured chordal edges and the alternately 1-2 coloured edges in the Hamiltonian cycle are not depicted in the diagram. Since hypo-Hamiltonian snarks are bicritical \cite{steffen2}, we note that this implies that every minimal conflicting subgraph of a hypo-Hamiltonian snark contains all but one vertex.
\begin{center}
\begin{tikzpicture}[every node/.style={draw,shape=circle,fill=black,text=white,scale=0.6},scale=0.6]
\node (a0) at (25:4cm) [label={[above left,color=black,scale=1.5]:$ $}] {};
\node (a1) at (45:4cm) [label={[above right,color=black,scale=1.5]:$ $}] {};
\node (a2) at (65:4cm) [label={[above left,color=black,scale=1.5]:$ $}] {};
\node (a3) at (135:4cm) [label={[above left,color=black,scale=1.5]:$ $}] {};
\node (a4) at (155:4cm) [label={[above left,color=black,scale=1.5]:$ $}] {};
\node (a5) at (175:4cm) [label={[above left,color=black,scale=1.5]:$ $}] {};
\node (a6) at (250:4cm) [label={[above left,color=black,scale=1.5]:$ $}] {};
\node (a7) at (270:4cm) [label={[above left,color=black,scale=1.5]:$ $}] {};
\node (a8) at (290:4cm) [label={[above left,color=black,scale=1.5]:$ $}] {};
\draw node (x1) at (5:4cm) [fill=white,text=black,draw = none,scale=1] {} ;
\draw node (x2) at (85:4cm) [fill=white,text=black,draw = none,scale=1] {} ;
\draw node (x3) at (115:4cm) [fill=white,text=black,draw = none,scale=1] {} ;
\draw node (x4) at (195:4cm) [fill=white,text=black,draw = none,scale=1] {} ;
\draw node (x5) at (230:4cm) [fill=white,text=black,draw = none,scale=1] {} ;
\draw node (x6) at (310:4cm) [fill=white,text=black,draw = none,scale=1] {} ;
\node (c) at (0,0) [label={[above,color=black,scale=1.5]:$v$}] {};
\draw (c) -- node [draw=none,fill=white,text=black] {$1$} (a1);
\draw (c) -- node [draw=none,fill=white,text=black] {$3$} (a7);
\draw (x1) -- (a0) [loosely dotted];
\draw (x2) -- (a2) [loosely dotted];
\draw (x3) -- (a3) [loosely dotted];
\draw (x4) -- (a5) [loosely dotted];
\draw (a4) -- node [draw=none,fill=white,text=black] {$0$} (c);
\draw (x5) -- (a6) [loosely dotted];
\draw (x6) -- (a8) [loosely dotted];
\draw (a0) -- node [draw=none,fill=white,text=black] {$0$} (a1) -- node [draw=none,fill=white,text=black] {$2$}(a2);
\draw (a3) -- node [draw=none,fill=white,text=black] {$2$} (a4) -- node [draw=none,fill=white,text=black] {$1$} (a5);
\draw (a6) -- node [draw=none,fill=white,text=black] {$1$} (a7) -- node [draw=none,fill=white,text=black] {$2$} (a8);
\end{tikzpicture}
\end{center}}
\end{example}
\section{Further considerations}
\subsection{Clusters}
We have noticed that it is typically the case in smaller snarks that minimal conflicting subgraphs have non-empty intersections. To facilitate further brief discussion, it serves to formally define groups of minimal conflicting subgraphs in terms of non-empty intersections, as well as distinguish between different types of these groups.
\begin{definition} \label{cluster} {\rm
Let $G$ be a subcubic class two graph. Let $\mathcal{M} = \{ M_1, \dots, M_m \}$ be a collection of minimal conflicting subgraphs of $G$.
\begin{enumerate}
\item [(i)] If either $m=1$ or for every $i \in \{1,\dots,m\}$ there exists some $j \in \{1,\dots,m\}$ with $j\neq i$ such that $M_i \cap M_j \neq \varnothing$, and, in addition, $M \cap M_i = \varnothing$ for every $i$ and any minimal conflicting subgraph $M \notin \mathcal{M}$, then we call $\mathcal{M}$ a \textit{cluster} of minimal conflicting subgraphs.
\item [(ii)] If $\mathcal{M}$ is a cluster and $\bigcap_{i=1}^m M_i \neq \varnothing$ then we call $\mathcal{M}$ a \textit{dense cluster}.
\item [(iii)] If $\mathcal{M}$ is a cluster and is not dense then it is a \textit{sparse cluster}.
\item [(iv)] If $\mathcal{M}$ is a sparse cluster such that for every $i, j \in \{1,\dots,m\}$ we have that $M_i \cap M_j \neq \varnothing$, then it is a \textit{densely sparse cluster}.
\end{enumerate}}
\end{definition}
We prove and discuss some immediate results on these structures. Our investigations suggest that it serves to consider strictly subcubic clusters and cubic clusters separately.
\begin{proposition} \label{propclusters}
The following statements are true.
\begin{enumerate}
\item[(i)] There exists no cubic dense cluster.
\item[(ii)] Let $G$ be a bridgeless cubic graph. If $M_G = G$ then $G$ consists entirely of one sparse cluster of minimal conflicting subgraphs.
\item[(iii)] There exists a strictly subcubic
cluster with $n$ minimal conflicting subgraphs for each $n \geq 1$.
\end{enumerate}
\begin{proof}
\begin{enumerate}
\item[(i)] Every dense cluster has a representative conflicting subset of order 1. No cubic graph can have resistance 1. Therefore, no dense cluster can be cubic.
\item[(ii)] Since the distance between any two disjoint minimal conflicting subgraphs must be at least one by Proposition \ref{properties_of_mczs} (i), an edge realising this distance cannot be contained in any minimal conflicting subgraph. Thus $M_G = G$ implies that $G$ consists entirely of one cluster of minimal conflicting subgraphs. By (i), this cluster cannot be dense, and is therefore sparse.
\item[(iii)] Consider Example \ref{ex1}, but with
$n$ minimal conflicting subgraphs $M_1, \dots , M_n$. As in Example \ref{ex1}, let $M_i$ intersect with $M_{i+1}$ for $i \in \{1, \dots , n-1\}$. The result is a strictly subcubic
cluster with $n$ minimal conflicting subgraphs.
\end{enumerate}
\end{proof}
\end{proposition}
Now, any cluster with two minimal conflicting subgraphs is trivially dense, and we have seen that we can easily find such clusters. In Proposition \ref{propclusters} (iii), the clusters are however sparse for $n \geq 3$. The question of whether there exist dense clusters with three or more minimal conflicting subgraphs remains.
\begin{problem} \label{prob2}
For which $n \geq 3$ does there exist a dense cluster with $n$ minimal conflicting subgraphs?
\end{problem}
Recall from Example \ref{ex3} that in a hypo-Hamiltonian snark, the removal of any vertex leaves behind a minimal conflicting subgraph containing all the remaining vertices. Thus any two minimal conflicting subgraphs intersect, and there is no single vertex which is present in every minimal conflicting subgraph. Hypo-Hamiltonian snarks are therefore densely sparse clusters. From our investigations, we suspect that the only cubic densely sparse clusters are those which are similar to Example \ref{ex3}. That is, possibly hypo-Hamiltonian, and consequently where the removal of any one vertex leaves behind a minimal conflicting subgraph. In such cases as well, the resistance is necessarily $2$. Recall as well that every edge in a hypo-Hamiltonian snark is critical.
Thus, we formulate the following conjecture.
\begin{conjecture} \label{con2}
Let $G$ be a bridgeless cubic graph. Then $K_G = G$ if and only if $r(G) = 2$ and $G$ is a densely sparse cluster.
\end{conjecture}
\subsection{Snark reduction}
Although triviality in snarks is not formally defined, snarks have generally been considered to contain more triviality if they are easily reducible to smaller snarks by some well-defined reduction (many variations of snark reductions have been considered by previous authors; see for example \cite{nedela, steffen2}). In particular, the most universally accepted notion of triviality concerns snarks with either girth less than 5 or cyclic connectivity less than 4.
There do, however, exist many snarks with these ``trivial'' properties, with arbitrarily large resistance, and which can only be reduced to smaller snarks with strictly less resistance. Reduction of resistance cannot be considered trivial, since some structural complexity is contributing to that resistance being present. Thus we propose instead that snarks which can be reduced in size, without reducing resistance, should be considered more trivial, and that so-called ``trivial'' snarks should be studied as much as so-called ``non-trivial'' snarks. Given Theorem \ref{choose_confledges_in_mincol}, these would typically be snarks with a large buffer subgraph. Thus we may perhaps formally define a snark to contain no triviality if its buffer subgraph is empty.
The notion of a maximal conflicting subgraph and buffer subgraph therefore opens up a new avenue of consideration regarding snark reductions. That is, reducing the snark to contain only the essentially uncolourable, by removing the buffer subgraph.
This new notion of reducibility relates interestingly to a problem concerning oddness and resistance in snarks. Recall that the oddness of a graph $G$ is the minimum number of odd components in a 2-factor of $G$, denoted as $\omega(G)$ \cite{huck}. In \cite{allie} we disproved a conjecture by Fiol et al. in \cite{fiol}, by showing that the ratio of oddness to resistance can be arbitrarily large. It was conjectured in \cite{fiol} that $\omega(G) \leq 2r(G)$ for any snark $G$. We disproved this by constructing a class of graphs with increasing oddness, but constant resistance equal to 3. Interestingly, each of the graphs in the class defined contains exactly three disjoint minimal conflicting subgraphs (thus the resistance is 3 in each graph). The increase in oddness, whilst keeping resistance constant, is a result of adding particular subgraphs to the buffer subgraph. This leads to an interesting reformulation of the disproved conjecture which was posed in \cite{fiol}.
\begin{conjecture} {\rm
Let $G$ be a snark with an empty buffer subgraph. Then $\omega(G) \leq 2r(G)$.}
\end{conjecture}
\end{document}
\begin{document}
\title{Commutative Regular Languages with Product-Form Minimal Automata}
\author{Stefan Hoffmann\orcidID{0000-0002-7866-075X}}
\authorrunning{S. Hoffmann}
\institute{Informatikwissenschaften, FB IV,
Universit\"at Trier, Universitätsring 15, 54296~Trier, Germany,
\email{[email protected]}}
\maketitle
\begin{abstract}
We introduce a subclass of the commutative regular languages
that is characterized by the property that
the state set of the minimal deterministic automaton can be written as a certain Cartesian product.
This class behaves much better with respect to the state complexity
of the shuffle, for which we find the bound~$2nm$ if the input languages have state complexities $n$ and $m$,
and the upward and downward closure and interior operations,
for which we find the bound~$n$.
In general, only the bounds $(2nm)^{|\Sigma|}$
and $n^{|\Sigma|}$ are known for these operations in the commutative case.
We prove different characterizations of this class
and present results to construct languages from this class.
Lastly, in a slightly more general setting of partial commutativity, we introduce other, related,
language classes and investigate the relations between them.
\keywords{finite automaton \and state complexity \and shuffle \and upward closure \and downward closure \and commutative language \and product-form minimal automaton \and partial commutation}
\end{abstract}
\section{Introduction}
\label{sec:introduction}
The state complexity, as used here, of a regular language $L$
is the minimal number of states needed in a complete deterministic automaton
recognizing~$L$. The state complexity of an operation
on regular languages is the greatest state complexity
of the result of this operation
as a function of the (maximal) state complexities of its arguments.
Investigating the state complexity of the result of a regularity-preserving operation on regular languages,
see~\cite{GaoMRY17} for a survey, was first initiated by Maslov in~\cite{Mas70} and systematically started by Yu, Zhuang \& Salomaa in~\cite{YuZhuangSalomaa1994}.
A language is called commutative, if
for each word in the language, every permutation of this word is also in the language.
The class of commutative automata, which recognize commutative regular languages, was introduced in~\cite{BrzozowskiS73}.
The shuffle and iterated shuffle have been introduced and studied to understand
the semantics of parallel programs. This was undertaken, as it appears
to be, independently by Campbell and Habermann~\cite{CamHab74}, by Mazurkiewicz~\cite{DBLP:conf/mfcs/Marzurkiewicz75}
and by Shaw~\cite{Shaw78zbMATH03592960}. They introduced \emph{flow expressions},
which allow for sequential operators (catenation and iterated catenation) as well
as for parallel operators (shuffle and iterated shuffle)
to specify sequential and parallel execution traces.
The shuffle operation as a binary operation, but not the iterated shuffle,
is regularity-preserving on all regular languages. The state complexity of the shuffle
operation in the general cases was investigated in~\cite{BrzozowskiJLRS16}
for complete deterministic automata and in~\cite{DBLP:journals/jalc/CampeanuSY02}
for incomplete deterministic automata. The bound $2^{nm-1} + 2^{(m-1)(n-1)}(2^{m-1}-1)(2^{n-1}-1)$ was obtained
in the former case, which is not known to be tight, and the tight bound $2^{nm}-1$
in the latter case.
A word is a (scattered) subsequence of another word, if it can be obtained from the latter word by deleting
letters. This gives a partial order, and the upward and downward closure and interior operations
refer to this partial order. The upward closures are also known as shuffle ideals.
The state complexity of these operations was investigated in~\cite{DBLP:journals/tcs/GruberHK07,DBLP:journals/fuin/GruberHK09,DBLP:journals/ita/Heam02,KarandikarNS16,DBLP:journals/fuin/Okhotin10}.
The state complexity of the projection operation was investigated in~\cite{Hoffmann2021DLT,DBLP:journals/tcs/JiraskovaM12,Wong98}.
In~\cite{Wong98}, the tight upper bound $3 \cdot 2^{n-2} - 1$
was shown, and in~\cite{DBLP:journals/tcs/JiraskovaM12} the refined, and tight, bound $2^{n-1} + 2^{n-m} - 1$
was shown, where $m$ is related to the number of unobservable transitions for the projection operator.
Both results were established for incomplete deterministic automata.
In~\cite{Hoffmann2021NISextended,DBLP:conf/cai/Hoffmann19,DBLP:conf/dcfs/Hoffmann21,Hoffmann2021DLT} the state complexity of these operations
was investigated for commutative regular languages.
The results are summarized in Table~\ref{tab:sc_known_results}.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Operation & Upper Bound & Lower Bound & References \\ \hline
$\pi_{\Gamma}(U)$, $\Gamma \subseteq \Sigma$ & $n$ & $n$ & \cite{Hoffmann2021NISextended,Hoffmann2021DLT} \\
$U \shuffle V$ & $\min\{(2nm)^{|\Sigma|},f(n,m)\}$ & $\Omega\left( nm \right) $ & \cite{BrzozowskiJLRS16,Hoffmann2021NISextended,DBLP:conf/cai/Hoffmann19} \\
$\uparrow\! U$ & $n^{|\Sigma|}$ & $\Omega\left( \left( \frac{n}{|\Sigma|} \right)^{|\Sigma|} \right)$ & \cite{DBLP:journals/ita/Heam02,Hoffmann2021NISextended,DBLP:conf/dcfs/Hoffmann21} \\
$\downarrow\! U$ & $n^{|\Sigma|}$ & $n$ & \cite{Hoffmann2021NISextended,DBLP:conf/dcfs/Hoffmann21} \\
$\uptodownarrow\! U$ & $n^{|\Sigma|}$ & $\Omega\left( \left( \frac{n}{|\Sigma|} \right)^{|\Sigma|} \right)$ & \cite{Hoffmann2021NISextended,DBLP:conf/dcfs/Hoffmann21} \\
$\downtouparrow\! U$ & $n^{|\Sigma|}$ & $n$ & \cite{Hoffmann2021NISextended,DBLP:conf/dcfs/Hoffmann21} \\
$U \cup V$,$U \cap V$ & $nm$ & tight for each $\Sigma$ & \cite{Hoffmann2021NISextended,DBLP:conf/cai/Hoffmann19} \\ \hline
\end{tabular}
\caption{Overview of results for commutative regular languages.
The state complexities of the input languages are $n$ and $m$.
Also, $f(n,m) = 2^{nm-1} + 2^{(m-1)(n-1)}(2^{m-1}-1)(2^{n-1}-1)$
is the general bound for shuffle from~\cite{BrzozowskiJLRS16} in case of complete automata.}
\label{tab:sc_known_results}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Operation & Upper Bound & Lower Bound & Reference \\ \hline
$\pi_{\Gamma}(U)$, $\Gamma \subseteq \Sigma$ & $n$ & $n$ & Thm.~\ref{thm:sc_results} \\
$U \shuffle V$ & $2nm$ & $\Omega\left( nm \right) $ & Thm.~\ref{thm:sc_results} \\
$\uparrow\! U$,$\downarrow\! U$,$\uptodownarrow\! U$,$\downtouparrow\! U$ & $n$ & $n$ & Thm.~\ref{thm:sc_results} \\
$U \cap V$, $U \cup V$ & $nm$ & tight for each $\Sigma$ & Thm.~\ref{thm:sc_results} \\ \hline
\end{tabular}
\caption{State Complexity results on the subclass
of commutative languages with product-form minimal automaton
for input languages with state complexities $n$ and~$m$.}
\label{tab:sc_product-from}
\end{table}
In~\cite{GomezA08} the minimal commutative automaton was introduced, which can be associated
with every commutative regular language.
This automaton played a crucial role in~\cite{Hoffmann2021NISextended,DBLP:conf/cai/Hoffmann19}
to derive the bounds mentioned in Table~\ref{tab:sc_known_results}.
Here, we will investigate the subclass of those languages
for which the minimal commutative automaton is in fact the smallest automaton
recognizing a given commutative language.
For this language class, we will derive the following state complexity bounds
summarized in Table~\ref{tab:sc_product-from}.
Additionally, we will prove other characterizations
and properties of the subclass considered and relate it with other subclasses, in a more general setting,
in the final chapter.
\section{Preliminaries}
In this section and Section~\ref{sec:product-form},
we assume that $k \ge 0$ denotes our alphabet size and
$\Sigma = \{a_1, \ldots, a_k\}$ is our \emph{alphabet}.
We will also write $a,b,c$ for $a_1,a_2,a_3$ in case of $|\Sigma| \le 3$.
The set $\Sigma^{\ast}$ denotes
the set of all finite sequences over $\Sigma$, i.e., of all \emph{words}. The finite sequence of length zero,
or the \emph{empty word}, is denoted by $\varepsilon$. For a given word we denote by $|w|$
its \emph{length}, and for $a \in \Sigma$ by $|w|_a$ the \emph{number of occurrences of the symbol $a$}
in $w$.
For $a \in \Sigma$, we set $a^* = \{a\}^*$.
A \emph{language} is a subset of $\Sigma^*$.
For $u \in \Sigma^*$, the \emph{left quotient} is $u^{-1}L = \{ v \in \Sigma^* \mid uv \in L\}$
and the \emph{right quotient} is $Lu^{-1} = \{ v \in \Sigma^* \mid vu \in L \}$.
The \emph{shuffle operation}, denoted by $\shuffle$, is defined by
\begin{multline*}
u \shuffle v = \{ w \in \Sigma^* \mid w = x_1 y_1 x_2 y_2 \cdots x_n y_n
\text{ for some words } \\ x_1, \ldots, x_n, y_1, \ldots, y_n \in \Sigma^*
\text{ such that } u = x_1 x_2 \cdots x_n \text{ and } v = y_1 y_2 \cdots y_n \},
\end{multline*}
for $u,v \in \Sigma^{\ast}$ and
$L_1 \shuffle L_2 := \bigcup_{x \in L_1, y \in L_2} (x \shuffle y)$ for $L_1, L_2 \subseteq \Sigma^{\ast}$.
If $L_1, \ldots, L_n \subseteq \Sigma^*$, we set $\bigshuffle_{i=1}^n L_i = L_1 \shuffle \ldots \shuffle L_n$.
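For example, $ab \shuffle c = \{abc, acb, cab\}$, and $\{a\}^* \shuffle \{b\}^*$ is the set of all words over $\{a,b\}$.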
Let $\Gamma \subseteq \Sigma$.
The \emph{projection homomorphism} $\pi_{\Gamma} : \Sigma^* \to \Gamma^*$
is given by $\pi_{\Gamma}(x) = x$ for $x \in \Gamma$
and $\pi_{\Gamma}(x) = \varepsilon$ for $x \notin \Gamma$ and extended
to $\Sigma^*$ by $\pi_{\Gamma}(\varepsilon) = \varepsilon$
and $\pi_{\Gamma}(wx) = \pi_{\Gamma}(w)\pi_{\Gamma}(x)$ for $w \in \Sigma^*$ and $x \in \Sigma$.
As a shorthand, we set, with respect to a given naming $\Sigma = \{a_1, \ldots, a_k\}$,
$\pi_j = \pi_{\{a_j\}}$. Then $\pi_j(w) = a_j^{|w|_{a_j}}$.
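To make these two operations concrete, the following short Python sketch (an illustration of ours, not part of the formal development; all function names are our own) computes the shuffle of two words and the projection homomorphism on small inputs.
\begin{verbatim}
# Illustration only: shuffle of two words and projection (names are ours).
from itertools import combinations

def shuffle(u, v):
    """Return the set of all interleavings of the words u and v."""
    n = len(u) + len(v)
    result = set()
    for pos in combinations(range(n), len(u)):   # positions taken by letters of u
        w, iu, iv = [], 0, 0
        for i in range(n):
            if i in pos:
                w.append(u[iu]); iu += 1
            else:
                w.append(v[iv]); iv += 1
        result.add("".join(w))
    return result

def project(w, gamma):
    """Projection homomorphism pi_Gamma: erase all letters not in gamma."""
    return "".join(x for x in w if x in gamma)

assert shuffle("ab", "cd") == {"abcd", "acbd", "acdb", "cabd", "cadb", "cdab"}
assert project("acbd", {"a", "b"}) == "ab"
\end{verbatim}
In particular, $ab \shuffle cd$ consists of exactly the six interleavings checked by the first assertion.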
A language $L \subseteq \Sigma^*$ is \emph{commutative},
if, for $u,v \in \Sigma^*$ such that $|v|_x = |u|_x$ for every $x \in \Sigma$,
we have $u \in L$ if and only if $v \in L$, i.e., $L$
is closed under permutation of letters in words from $L$.
A quintuple $\mathcal A = (\Sigma, Q, \delta, q_0, F)$ is a \emph{finite deterministic and complete automaton} (DFA),
where $\Sigma$ is the \emph{input alphabet},
$Q$ the \emph{finite set of states}, $q_0 \in Q$
the \emph{start state}, $F \subseteq Q$ the set of \emph{final states} and
$\delta : Q \times \Sigma \to Q$ is the \emph{totally defined state transition function}.
Here, we do not consider incomplete automata.
The transition function $\delta : Q \times \Sigma \to Q$
extends to a transition function on words $\delta^{\ast} : Q \times \Sigma^{\ast} \to Q$
by setting $\delta^{\ast}(q, \varepsilon) := q$ and $\delta^{\ast}(q, wa) := \delta(\delta^{\ast}(q, w), a)$
for $q \in Q$, $a \in \Sigma$ and $w \in \Sigma^{\ast}$. In the remainder, we drop
the distinction between both functions and also denote this extension by $\delta$.
The language \emph{recognized} by an automaton $\mathcal A = (\Sigma, Q, \delta, q_0, F)$ is
$
L(\mathcal A) = \{ w \in \Sigma^{\ast} \mid \delta(q_0, w) \in F \}.
$
A language $L \subseteq \Sigma^{\ast}$ is called \emph{regular} if $L = L(\mathcal A)$
for some finite automaton~$\mathcal A$.
The \emph{Nerode right-congruence}
with respect to $L \subseteq \Sigma^*$ is defined, for $u,v \in \Sigma^*$, by $u \equiv_L v$ if and only if
$
\forall x \in \Sigma^* : ux \in L \Leftrightarrow vx \in L.
$
The equivalence class of $w \in \Sigma^{\ast}$
is denoted by $[w]_{\equiv_L} = \{ x \in \Sigma^{\ast} \mid x \equiv_L w \}$.
A language is regular if and only if the above right-congruence has finite index, and it can
be used to define the \emph{minimal deterministic automaton}
$\mathcal A_L = (\Sigma, Q_L, \delta_L, [\varepsilon]_{\equiv_L}, F_L)$
with
$Q_L = \{ [u]_{\equiv_L} \mid u \in \Sigma^{\ast} \}$,
$\delta_L([w]_{\equiv_L}, a) = [wa]_{\equiv_L}$
and $F_L = \{ [u]_{\equiv_L} \mid u \in L \}$.
Let $L \subseteq \Sigma^*$ be regular
with minimal automaton $\mathcal A_L = (\Sigma, Q_L, \delta_L, [\varepsilon]_{\equiv_L}, F_L)$.
The number $|Q_L|$ is called the \emph{state complexity} of $L$ and
denoted by $\operatorname{sc}(L)$. The \emph{state complexity of a regularity-preserving operation}
on a class of regular languages is the greatest state complexity of the result
of this operation as a function of the (maximal) state complexities for argument languages from the class.
Given two automata $\mathcal A = (\Sigma, S, \delta, s_0, F)$
and $\mathcal B = (\Sigma, T, \mu, t_0, E)$, an \emph{automaton homomorphism}
$h : S \to T$ is a map between the state sets such that
for each $a \in \Sigma$ and state $s \in S$ we have
$
h(\delta(s, a)) = \mu(h(s),a),
$
$h(s_0) = t_0$ and $h^{-1}(E) = F$. If $h : S \to T$ is surjective, then $L(\mathcal B) = L(\mathcal A)$. A bijective homomorphism between automata $\mathcal A$
and $\mathcal B$
is called an \emph{isomorphism}, and the two automata are said to be isomorphic.
The \emph{minimal commutative automaton} was introduced in~\cite{GomezA08}
to investigate the learnability of commutative languages. In~\cite{Hoffmann2021NISextended,DBLP:conf/cai/Hoffmann19}
this construction was used to define the index and period vector and in
the derivation of the state complexity bounds mentioned in Table~\ref{tab:sc_known_results}.
\begin{definition}[minimal commutative aut.]
\label{def::min_com_aut}
Let $L \subseteq \Sigma^*$ be regular. The \emph{minimal commutative automaton} for $L$
is $\mathcal C_L = (\Sigma, S_1 \times \ldots \times S_k, \delta, s_0, F)$
with
\[
S_j = \{ [a_j^m]_{\equiv_L} : m \ge 0 \}, \quad
F = \{ ([\pi_1(w)]_{\equiv_L}, \ldots, [\pi_k(w)]_{\equiv_L}) : w \in L \}
\]
and $\delta((s_1, \ldots, s_j, \ldots, s_k), a_j) = (s_1, \ldots, \delta_{j}(s_j, a_j), \ldots, s_k)$
with one-letter transitions $\delta_{j}([a_j^m]_{\equiv_L}, a_j) = [a_j^{m+1}]_{\equiv_L}$ for $j = 1,\ldots, k$ and $s_0 = ([\varepsilon]_{\equiv_L}, \ldots, [\varepsilon]_{\equiv_L})$.
\end{definition}
\begin{toappendix}
\begin{remark}
\label{rem:equal_states_C_L}
Let $L \subseteq \Sigma^*$ be commutative and $\mathcal C_L = (\Sigma, S_1 \times \ldots \times S_k, \delta, s_0, F)$
be the minimal commutative automaton.
Note that, by the definition of the transition function in Definition~\ref{def::min_com_aut},
we have, for all $u,v \in \Sigma^*$,
\begin{equation}
\delta(s_0, u) = \delta(s_0, v) \Leftrightarrow \forall j \in \{1,\ldots,k\} : \pi_j(u) \equiv_L \pi_j(v).
\end{equation}
\end{remark}
\end{toappendix}
\begin{toappendix}
For a commutative language $L \subseteq \Sigma^*$,
the Nerode right- and left-congruence and the syntactic congruence
coincide. So, as $[u]_{\equiv_L} = [\pi_1(u) \cdots \pi_k(u)]_{\equiv_L}
= [\pi_1(u)]_{\equiv_L} \cdots [\pi_k(u)]_{\equiv_L}$
we have:
\begin{equation}
\label{eqn:proj_equivalent_words_equivalent}
\forall j \in \{1,\ldots,k\} : \pi_j(u) \equiv_L \pi_j(v)
\Rightarrow u \equiv_L v.
\end{equation}
\end{toappendix}
In~\cite{GomezA08}, the next result was shown.
\begin{theorem}[G{\'{o}}mez \& Alvarez~\cite{GomezA08}]
\label{thm::min_com_aut}
Let $L \subseteq \Sigma^*$ be a commutative regular language.
Then, $L = L(\mathcal C_L)$.
\end{theorem}
In general the minimal commutative automaton is not equal to the
minimal deterministic and complete automaton for a regular commutative language $L$, see Example~\ref{ex:min_aut_vs_min_com_aut}.
\begin{example}
\label{ex:min_aut_vs_min_com_aut}
For $L = \{ w \in \Sigma^* \mid |w|_a = 0 \mbox{ or } |w|_b > 0 \}$
with $\Sigma = \{a,b\}$ the minimal deterministic
and complete automaton and the minimal commutative automaton are not the same, see Figure~\ref{fig:ex:min_aut_vs_min_com_aut}. This language is from~\cite{GomezA08}.
In fact, the difference can get quite large, as shown by $L_p = \{ w \in \Sigma^* \mid \sum_{j=1}^k j\cdot |w|_{a_j} \equiv 0 \pmod{p} \}$ for a prime $p > k$. Here, $\operatorname{sc}(L_p) = p$, but $\mathcal C_{L_p}$ has $p^k$ states.
\begin{figure}
\caption{The minimal deterministic automaton (left) and the minimal commutative
automaton (right) of the language $\{ w \in \Sigma^* \mid |w|_a = 0 \mbox{ or } |w|_b > 0 \}$ over $\Sigma = \{a,b\}$.}
\label{fig:ex:min_aut_vs_min_com_aut}
\end{figure}
\end{example}
The next definition from~\cite{Hoffmann2021NISextended,DBLP:conf/cai/Hoffmann19} generalizes the notion of a cyclic and non-cyclic part for unary automata
\cite{PighizziniS02},
and the notion of periodic language~\cite{EhrenfeuchtHR83,Hoffmann2021NISextended,DBLP:conf/cai/Hoffmann19}.
\begin{definition}[index and period vector]
\label{def:index_and_period_vector}
The \emph{index vector} $(i_1, \ldots, i_k)$
and \emph{period vector} $(p_1, \ldots, p_k)$
for a commutative regular language $L \subseteq \Sigma^*$ with minimal commutative automaton
$\mathcal C_L = (\Sigma, S_1 \times \ldots \times S_k, \delta, s_0, F)$
are the unique minimal numbers such that $\delta(s_0, a_j^{i_j}) = \delta(s_0, a_j^{i_j + p_j})$
for all $j \in \{1,\ldots,k\}$.
\end{definition}
Note that, in Definition~\ref{def:index_and_period_vector},
we have, for all $j \in \{1,\ldots,k\}$, $|S_j| = i_j + p_j$. Also note that
for unary languages, i.e., if $|\Sigma| = 1$, $\mathcal C_L$ equals $\mathcal A_L$
and $i_1 + p_1$ equals the number of states of the minimal automaton.
\begin{example}
\label{ex:index_period}
Let $L = (aa)^{\ast} \shuffle (bb)^{\ast} \cup (aaaa)^{\ast} \shuffle b^{\ast}$.
Then $(i_1, i_2) = (0,0)$, $(p_1, p_2) = (4,2)$,
$\pi_1(L) = (a a)^{\ast}$ and $\pi_2(L) = b^{\ast}$.
\end{example}
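Since, as noted above, $|S_j| = i_j + p_j$, each coordinate of $\mathcal C_L$ can be represented by the letter count $|w|_{a_j}$, collapsed along a tail of length $i_j$ followed by a cycle of length $p_j$. The following Python sketch (an illustration of ours; it assumes that the index vector, the period vector and the accepting tuples have already been determined, and all names are our own) simulates $\mathcal C_L$ in this representation for the language of Example~\ref{ex:index_period}, whose membership condition reads: $|w|_a$ and $|w|_b$ both even, or $|w|_a \equiv 0 \pmod 4$.
\begin{verbatim}
# Illustration only: simulate the minimal commutative automaton C_L, assuming
# the index vector, period vector and accepting state tuples are already known.
def make_commutative_dfa(alphabet, index, period, accepting):
    def collapse(j, m):
        # class of a_j^m: counts below index[j] are kept, afterwards they cycle
        return m if m < index[j] else index[j] + (m - index[j]) % period[j]

    def accepts(word):
        counts = [sum(1 for x in word if x == a) for a in alphabet]
        state = tuple(collapse(j, counts[j]) for j in range(len(alphabet)))
        return state in accepting

    return accepts

# Example ex:index_period: index (0,0), period (4,2); accepting tuples encode
# "|w|_a and |w|_b both even, or |w|_a = 0 (mod 4)".
acc = {(a, b) for a in range(4) for b in range(2)
       if (a % 2 == 0 and b == 0) or a % 4 == 0}
in_L = make_commutative_dfa(["a", "b"], [0, 0], [4, 2], acc)
assert in_L("aabb") and in_L("aaaab") and not in_L("aab")
\end{verbatim}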
Let $u, v \in \Sigma^*$.
Then, $u$ is a \emph{subsequence}\footnote{Also called a \emph{scattered subword}
in the literature~\cite{DBLP:journals/tcs/GruberHK07,KarandikarNS16}.} of $v$, denoted by $u \preccurlyeq v$,
if and only if
$
v \in u \shuffle \Sigma^*.
$
The thereby given order is called the \emph{subsequence order}.
Let $L \subseteq \Sigma^*$.
Then, we define
(1) the \emph{upward closure}
$\mathop{\uparrow\!} L = L \shuffle \Sigma^* = \{ u \in \Sigma^* : \exists v \in L : v \preccurlyeq u \}$;
(2) the \emph{downward closure} $\mathop{\downarrow\!} L = \{ u \in \Sigma^* : u \shuffle \Sigma^* \cap L \ne \emptyset \} = \{ u \in \Sigma^* : \exists v \in L : u \preccurlyeq v \}$;
(3) the \emph{upward interior}, denoted by $\mathop{\downtouparrow\!} L$,
as the largest upward-closed set in $L$, i.e., the largest
subset $U \subseteq L$ such that $\mathop{\uparrow\!} U = U$
and
(4) the \emph{downward interior}, denoted by $\mathop{\uptodownarrow\!} L$,
as the largest downward-closed set in $L$, i.e., the largest
subset $U \subseteq L$ such that $\mathop{\downarrow\!} U = U$.
We have
$
\mathop{\uptodownarrow\!} L = \Sigma^* \setminus \mathop{\uparrow\!} (\Sigma^* \setminus L)
$
and
$
\mathop{\downtouparrow\!} L = \Sigma^* \setminus \mathop{\downarrow\!} (\Sigma^* \setminus L).
$
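For intuition, membership in the upward closure of a language reduces to subsequence tests against its ($\preccurlyeq$-minimal) elements; the following small Python sketch (purely illustrative, with names of our own, for a finite set of generators) spells this out.
\begin{verbatim}
# Illustration only: subsequence order and membership in the upward closure
# of a finite set of generator words.
def is_subsequence(u, v):
    """True iff u is a subsequence (scattered subword) of v."""
    it = iter(v)
    return all(any(x == y for y in it) for x in u)

def in_upward_closure(w, generators):
    """w lies in the upward closure iff some generator embeds into w."""
    return any(is_subsequence(g, w) for g in generators)

# The upward closure of {ab} consists of all words containing an a followed,
# not necessarily immediately, by a b.
assert in_upward_closure("cacb", ["ab"])
assert not in_upward_closure("ba", ["ab"])
\end{verbatim}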
The following two results, which will be needed later, are from~\cite{Hoffmann2021NISextended,DBLP:conf/cai/Hoffmann19}.
\begin{theorem}
\label{thm:sc_shuffle}
Let $U,V \subseteq \Sigma^*$ be commutative regular languages with index
vectors $(i_1, \ldots, i_k)$ and $(j_1, \ldots, j_k)$
and period vectors $(p_1, \ldots, p_k)$ and $(q_1, \ldots, q_k)$, respectively. Then, the index vector of $U \shuffle V$
is at most
\[
(i_1 + j_1 + \lcm(p_1, q_1) - 1, \ldots, i_k + j_k + \lcm(p_k,q_k) - 1)
\]
and the period vector is at most
$
(\lcm(p_1, q_1), \ldots, \lcm(p_k, q_k)).
$
So, $\operatorname{sc}(U\shuffle V) \le \prod_{l=1}^k (i_l + j_l + 2\cdot \lcm(p_l, q_l) - 1)$.
\end{theorem}
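For instance, for $k = 1$ and languages $U$ and $V$ with index $1$ and $0$ and period $2$ and $3$, respectively, Theorem~\ref{thm:sc_shuffle} gives an index of at most $1 + 0 + \lcm(2,3) - 1 = 6$, a period of at most $\lcm(2,3) = 6$, and hence $\operatorname{sc}(U \shuffle V) \le 1 + 0 + 2 \cdot \lcm(2,3) - 1 = 12$.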
\begin{theorem}
\label{thm:sc_closure_interior}
Let $\Sigma = \{a_1, \ldots, a_k\}$.
Suppose $L \subseteq \Sigma^*$ is commutative and regular
with index vector $(i_1, \ldots, i_k)$
and period vector $(p_1, \ldots, p_k)$.
Then,
$\max\{\operatorname{sc}(\uparrow\! L),\operatorname{sc}(\downarrow\! L),\operatorname{sc}(\downtouparrow\! L ),\operatorname{sc}(\uptodownarrow\! L) \} \le \prod_{j=1}^k (i_j + p_j)$.
\end{theorem}
\section{Product-Form Minimal Automata}
\label{sec:product-form}
As shown in Example~\ref{ex:min_aut_vs_min_com_aut}, the minimal automaton, in general,
does not equal
the minimal commutative automaton. Here, we introduce the class of commutative regular
languages for which both are isomorphic. The corresponding
commutative languages are called \emph{languages with a minimal automaton
of product-form}, as the minimal commutative automaton is built with the Cartesian product.
\begin{definition}[languages with product-form minimal automaton]
A commutative and regular language $L \subseteq \Sigma^*$
is said to have a \emph{minimal automaton of product-form},
if $\mathcal C_L$ is isomorphic to $\mathcal A_L$.
\end{definition}
If $|\Sigma| = 1$, we see easily that $\mathcal C_L$ is the minimal deterministic and complete automaton.
\begin{proposition}
\label{prop:unary_min_prod_form}
If $|\Sigma| = 1$, then each commutative and regular $L \subseteq \Sigma^*$
has a minimal automaton of product-form.
More generally, if $L \subseteq\{a\}^*$, then $L \shuffle (\Sigma\setminus\{a\})^*$
has a minimal automaton of product-form.
\end{proposition}
Apart from the unary languages, we give another example
of a language with minimal automaton of product-form next.
\begin{example} \label{ex::comm_aut}
Let $L = (a a)^{\ast} \shuffle (b b)^{\ast} \cup (a a a)^{\ast} \shuffle b(b b)^{\ast}$ over $\Sigma = \{a,b\}$.
See Figure~\ref{fig::example} for the minimal commutative automaton.
Here, the minimal commutative
automaton equals the minimal automaton.
\begin{figure}
\caption{$\mathcal C_L$ for $L = (a a)^{\ast} \shuffle (b b)^{\ast} \cup (a a a)^{\ast} \shuffle b(b b)^{\ast}$.}
\label{fig::example}
\end{figure}
\end{example}
However, the next proposition gives a strong necessary criterion
for a commutative language to have a minimal automaton of product-form.
\begin{propositionrep}
\label{prop:at_most_one_projection_finite}
If a non-empty language $L \subseteq \Sigma^*$ is commutative and regular
with a minimal automaton of product-form,
then
$|\{ x \in \Sigma \mid \pi_{\{x\}}(L) \mbox{ is finite } \}| \le 1$.
So, $\pi_{\Gamma}(L)$ is infinite for $|\Gamma| \ge 2$; in particular, no non-empty finite language over
an at least binary alphabet is in this class.
\end{propositionrep}
\begin{proof}
Suppose we have two distinct $j, j' \in \{1,\ldots,k\}$
such that $\pi_j(L)$ and $\pi_{j'}(L)$ are finite.
Set $N = \max\{\, |u| \mid u \in \pi_j(L) \cup \pi_{j'}(L) \,\} + 1$.
Then $a_j^N \equiv_L a_{j'}^N$, as $a_j^N x \notin L$ and $a_{j'}^N x \notin L$ for every $x \in \Sigma^*$
(their $j$-th, respectively $j'$-th, projections are too long to lie in $\pi_j(L)$, respectively $\pi_{j'}(L)$).
However, in $\mathcal C_L$ the words $a_j^N$ and $a_{j'}^N$ reach distinct states,
as they differ in the $j$-th coordinate: choosing some $w \in L$ (recall $L \ne \emptyset$),
we have $\varepsilon w \in L$ but $a_j^N w \notin L$, so $a_j^N \not\equiv_L \varepsilon$.
As all states of $\mathcal C_L$ are reachable, this implies that the minimal
commutative automaton has strictly more states than the minimal deterministic automaton.
For the last sentence, note that if $a \in \Gamma$, then
$\pi_{\{a\}}(L) = \pi_{\{a\}}(\pi_{\Gamma}(L))$. Hence, if $\pi_{\Gamma}(L)$
is finite with $|\Gamma| \ge 2$, then at least two one-letter projection
languages would be finite too. Hence, with the previous claim, if $L$
is commutative and regular, it does not have a minimal automaton of product-form in this case. \qed
\end{proof}
For example, $L = \{\varepsilon\}$ over $\Sigma$
does not have a minimal automaton of product-form if $|\Sigma| > 1$.
Recall that the minimal automaton, as defined here, is always complete.
Note that the converse of Proposition~\ref{prop:at_most_one_projection_finite}
is not true, as shown by $aa^*$ over $\Sigma = \{a,b\}$.
In the following statement, we give alternative characterizations
for commutative languages with minimal automata of product-form.
\begin{theoremrep}
\label{thm:product-form_characterizations}
Let $L \subseteq \Sigma^*$ be a commutative regular language
with index vector $(i_1, \ldots, i_k)$
and period vector $(p_1, \ldots, p_k)$.
The following are equivalent:
\begin{enumerate}
\item the minimal automaton has product-form;
\item $\operatorname{sc}(L) = \prod_{j=1}^k (i_j + p_j)$;
\item $u \equiv_L v$ implies $\forall a \in \Sigma : a^{|u|_a} \equiv_L a^{|v|_a}$;
\item $u \equiv_L v$ if and only if $\forall a \in \Sigma : a^{|u|_a} \equiv_L a^{|v|_a}$.
\end{enumerate}
\end{theoremrep}
\begin{proof}
First, suppose $\mathcal C_L$ is isomorphic to $\mathcal A_L$.
Then, by the definition of the state complexity, we find $\operatorname{sc}(L) = \prod_{j=1}^k (i_j + p_j)$.
Conversely, suppose $\operatorname{sc}(L) = \prod_{j=1}^k (i_j + p_j)$.
As $L$ is commutative, we have $L = L(\mathcal C_L)$.
Every automaton recognizing $L$ in which all states are reachable
from the start state can be mapped surjectively onto $\mathcal A_L$, see~\cite{Hopcroft:1971,DBLP:books/daglib/0088160}.
In particular, this holds true for $\mathcal C_L$.
By finiteness, as both have the same number of states, this must be an isomorphism.
Hence, the first two conditions are equivalent.
That the last condition is equivalent to the first
is shown, in a more general context without referring to any previous
result, in Theorem~\ref{thm:alternative_characterizations},
as the case of commutative languages corresponds
to the case $\Sigma_1 = \{a_1\}, \ldots, \Sigma_k = \{a_k\}$
with the notation from Section~\ref{subsec:subclasses}.
That the third condition is equivalent to the fourth
follows as, for a commutative language, we
have $[u]_{\equiv_L} = [a_1^{|u|_{a_1}} \cdots a_k^{|u|_{a_k}}]_{\equiv_L}$,
so if $\forall a \in \Sigma : a^{|u|_a} \equiv_L a^{|v|_a}$,
then $u \equiv_L v$; that is, the implication from right to left in the fourth condition always holds for commutative languages.\qed
\end{proof}
Next, we give a way to construct commutative regular languages with minimal automata
of product-form.
\begin{toappendix}
Let $L \subseteq \Sigma^*$. Next, we will need the following equations:
\begin{equation}\label{eqn:word_in_shuffle_lang}
u \in \bigshuffle_{i=1}^k \pi_i(L) \Leftrightarrow \forall i \in \{1,\ldots,k\} : \pi_i(u) \in \pi_i(L),
\end{equation}
and
\begin{equation}\label{eqn:L_in_shuffle_L}
L \subseteq \bigshuffle_{i=1}^k \pi_i(L).
\end{equation}
\begin{lemma}
\label{lem:union_shuffle_projection_lang}
Let $L \subseteq \Sigma^*$ be a commutative language
with minimal commutative automaton $\mathcal C_L = (\Sigma, S_1 \times \ldots\times S_k, \delta, s_0, F)$,
index vector $(i_1, \ldots, i_k)$ and period vector $(p_1, \ldots, p_k)$.
Set~$n = |F|$. Suppose $F = \{ (s_1^{(l)}, \ldots, s_k^{(l)}) \mid l \in \{ 1,\ldots, n \} \}$
and set, for $l \in \{1,\ldots, n\}$, $L^{(l)} = \{ w \in \Sigma^* \mid \delta(s_0, w) = (s_1^{(l)}, \ldots, s_k^{(l)}) \}$,
and $L_j^{(l)} = \pi_j(L^{(l)})$, $j \in \{1,\ldots,k\}$.
Then,
\[
L = \bigcup_{l = 1}^{n} L^{(l)} \mbox{ and } L^{(l)} = \bigshuffle_{j=1}^k L_j^{(l)}.
\]
For $j \in \{1,\ldots,k\}$, set $\mathcal C_{L, \{a_j\}} = (\{a_j\}, S_j, \delta_j, [\varepsilon]_{\equiv_L}, F_j)$
with
\[
\delta_j([a_j^m]_{\equiv_L}, a_j) = [a_j^{m+1}]_{\equiv_L} \mbox{ and }
F_j = \{ s_j^{(1)}, \ldots, s_j^{(n)} \}
\]
for $[a_j^m]_{\equiv_L} \in S_j$, $m \ge 0$.
Then,
\[
L_j^{(l)} = L((\{a_j\}, S_j, \delta_j, [\varepsilon]_{\equiv_L}, \{s_j^{(l)}\}))
\mbox{ and }
L(\mathcal C_{L, \{a_j\}}) = \pi_j(L) = \bigcup_{l=1}^{n} L_j^{(l)}.
\]
for $l \in \{1,\ldots,n\}$ and the automata $\mathcal C_{L,\{a_j\}}$
have index $i_j$ and period $p_j$.
\end{lemma}
\begin{proof}
As every accepted word drives $\mathcal C_L$ into some final state,
we have $L = \bigcup_{l = 1}^{n} L^{(l)}$. Next, we show the other claims.
\begin{claiminproof}
Let $l \in \{1,\ldots,n\}$. Then $L^{(l)} = \bigshuffle_{j=1}^k L_j^{(l)}$.
\end{claiminproof}
\begin{claimproof}
By Equation~\eqref{eqn:L_in_shuffle_L}, we have $L^{(l)} \subseteq \bigshuffle_{j=1}^k L_j^{(l)}$.
If $w \in \bigshuffle_{j=1}^k L_j^{(l)}$,
then, for $j \in \{1,\ldots,k\}$, there exists $u_j \in L_j^{(l)}$
such that $\pi_j(w) = \pi_j(u_j)$.
Hence, by Remark~\ref{rem:equal_states_C_L}, $\delta(s_0, w) = \delta(s_0, u_1 \cdots u_k) = (s_1^{(l)}, \ldots, s_k^{(l)})$,
and so $w \in L^{(l)}$.
\end{claimproof}
\begin{claiminproof}
Let $l \in \{1,\ldots,n\}$ and $j \in \{1,\ldots,k\}$. Then
\[ L_j^{(l)} = L((\{a_j\}, S_j, \delta_j, [\varepsilon]_{\equiv_L}, \{s_j^{(l)}\})).
\]
\end{claiminproof}
\begin{claimproof}
By Remark~\ref{rem:equal_states_C_L}, for $u \in a_j^*$,
\begin{align*}
& \delta_j([\varepsilon]_{\equiv_L}, u) = s_j^{(l)} \\
& \Leftrightarrow [u]_{\equiv_L} = s_j^{(l)} \\
& \Leftrightarrow \exists w \in L^{(l)} : u \equiv_L \pi_j(w) \\
& \Leftrightarrow \exists w \in L^{(l)} : \delta(s_0, \pi_1(w)\cdots \pi_{j-1}(w) u \pi_{j+1}(w) \cdots \pi_k(w)) = \delta(s_0, w) \\
& \Leftrightarrow \exists w \in \Sigma^* : \delta(s_0, \pi_1(w)\cdots \pi_{j-1}(w) u \pi_{j+1}(w) \cdots \pi_k(w)) = (s_1^{(l)}, \ldots, s_k^{(l)}) \\
& \Leftrightarrow u \in L_j^{(l)}.
\end{align*}
And this shows the claim.
\end{claimproof}
As $F_j = \{s_j^{(1)},\ldots, s_j^{(n)}\}$, we
find $L(\mathcal C_{L,\{a_j\}}) = \bigcup_{l=1}^n L_j^{(l)}$
and, with the previous equations and $L_j^{(l)} \subseteq a_j^*$,
\[
\pi_j(L) = \pi_j\left( \bigcup_{l=1}^{n} \bigshuffle_{i=1}^k L_i^{(l)} \right)
= \bigcup_{l=1}^{n} \pi_j\left( \bigshuffle_{i=1}^k L_i^{(l)} \right)
= \bigcup_{l=1}^{n} L_j^{(l)}.
\]
Lastly, that the automata $\mathcal C_{L,\{a_j\}}$ have index $i_j$
and period $p_j$ is implied as the Nerode right-congruence classes $[a_j^m]_{\equiv_L}$, $m \ge 0$
are the states of these automata. So, we have shown all equations in the statement.~\qed
\end{proof}
\begin{lemma}
\label{lem::nerode_on_projections}
Let $L \subseteq \Sigma^*$ be a language with $L = \bigshuffle_{i=1}^k \pi_i(L)$. Then,
for each $j \in \{1,\ldots k\}$ and $n,m\ge 0$, we have
$
a_j^m \equiv_{L} a_j^n
\Leftrightarrow a_j^m \equiv_{\pi_j(L)} a_j^n,
$
where on the right side the equivalence is considered with
respect to the unary alphabet $\{a_j\}$.
\end{lemma}
\begin{proof}
In the next equations, we will write ``$\Leftrightarrow$'' to mean that two formulas are semantically equivalent, and
``$\leftrightarrow$'' for the equivalence connective inside a formula.
By assumption and Equation~\eqref{eqn:word_in_shuffle_lang}, we have
\begin{equation} \label{eqn:nerode_on_projections}
w \in L \Leftrightarrow \forall r \in \{1,\ldots k \} : \pi_r(w) \in \pi_r(L).
\end{equation}
Now, let $j \in \{1,\ldots k\}$ be fixed and $n,m \ge 0$. We have
\begin{equation*}
a_j^m \equiv_L a_j^n \Leftrightarrow \forall x \in \Sigma^{\ast} : a_j^m x \in L \leftrightarrow a_j^n x \in L.
\end{equation*}
By Equation~\eqref{eqn:nerode_on_projections}, this is equivalent to
\begin{multline}\label{eqn:forall_r}
\forall x \in \Sigma^{\ast} : ( \forall r \in \{1,\ldots,k\} : \pi_r(a_j^m x) \in \pi_r(L) ) \\
\leftrightarrow ( \forall r \in \{1,\ldots,k\} : \pi_r(a_j^n x) \in \pi_r(L) ).
\end{multline}
\begin{claiminproof}
Equation~\eqref{eqn:forall_r} is equivalent to
\[
\forall x \in \Sigma^{\ast} : \pi_j(a_j^m x) \in \pi_j(L) \leftrightarrow \pi_j(a_j^n x) \in \pi_j(L).
\]
\end{claiminproof}
\begin{claimproof}
First, suppose Equation~\eqref{eqn:forall_r} holds true
and suppose, for $x \in \Sigma^*$,
we have $\pi_j(a_j^m x) \in \pi_j(L)$.
Choose, for $r \ne j$, words $u_r \in \pi_r(L)$, which is possible
as $L = \bigshuffle_{r=1}^k \pi_r(L)$.
Set $u_j = \pi_j(x)$ and $y = u_1 \cdots u_k$.
Then, for all $r \in \{1,\ldots,k\}$,
as $u_r \in a_r^*$, we have
\[
\pi_r(a_j^my) = \left\{
\begin{array}{ll}
\pi_r(a_j^mx) & \mbox{if } r = j; \\
\pi_r(u_r) & \mbox{if } r \ne j,
\end{array}
\right.
\]
and so $\pi_r(a_j^m y) \in \pi_r(L)$.
So, by Equation~\eqref{eqn:forall_r},
we can deduce that, for all $r \in \{1,\ldots,k\}$,
we have $\pi_r(a_j^n y) \in \pi_r(L)$.
In particular, we find $\pi_j(a_j^n y) \in \pi_j(L)$.
But $\pi_j(a_j^ny) = \pi_j(a_j^n)\pi_j(y) = \pi_j(a_j^n)\pi_j(x) = \pi_j(a_j^nx)$.
So, $\pi_j(a_j^nx) \in \pi_j(L)$.
Similarly, we can show that, for all $x \in \Sigma^*$,
$\pi_j(a_j^nx) \in \pi_j(L)$ implies $\pi_j(a_j^mx) \in \pi_j(L)$.
Conversely, now suppose, for each $x \in \Sigma^*$, we have
\[
\pi_j(a_j^m x) \in \pi_j(L) \leftrightarrow \pi_j(a_j^n x) \in \pi_j(L).
\]
Let $x \in \Sigma^*$ and suppose, for all $r \in \{1,\ldots, k\}$,
we have $\pi_r(a_j^mx) \in \pi_r(L)$.
Since, for $r \ne j$,
we have $\pi_r(a_j^mx) = \pi_r(x) = \pi_r(a_j^nx)$,
and since, by assumption, $\pi_j(a_j^mx) \in \pi_j(L)$
implies $\pi_j(a_j^nx) \in \pi_j(L)$,
we find that, for all $r \in \{1,\ldots,k\}$,
we have $\pi_r(a_j^nx) \in \pi_r(L)$.
Similarly, we can show the other
implication in Equation~\eqref{eqn:forall_r}.
\end{claimproof}
So, by the previous claim, Equation~\eqref{eqn:forall_r} simplifies to
\begin{align*}
& \forall x \in \Sigma^{\ast} : \pi_j(a_j^m x) \in \pi_j(L) \leftrightarrow \pi_j(a_j^n x) \in \pi_j(L) \\
& \Leftrightarrow \forall x \in \{a_j\}^{\ast} : a_j^m x \in \pi_j(L) \leftrightarrow a_j^n x \in \pi_j(L) \\
& \Leftrightarrow a_j^m \equiv_{\pi_j(L)} a_j^n,
\end{align*}
and we have shown the statement.~\qed
\end{proof}
\end{toappendix}
\begin{lemmarep}
\label{lem:sc_shuffle_lang}
Let $\Sigma = \{a_1, \ldots, a_k\}$ and, for $j \in \{1,\ldots,k\}$, $L_j \subseteq \{a_j\}^*$
be regular and infinite with index $i_j$ and period $p_j$.
Then,
$
\operatorname{sc}\left(\bigshuffle_{j=1}^k L_j\right) = \prod_{j=1}^k\operatorname{sc}(L_j) = \prod_{j=1}^k (i_j + p_j)
$
and $\bigshuffle_{j=1}^k L_j$ has index vector $(i_1, \ldots, i_k)$ and period vector $(p_1, \ldots, p_k)$.
With Thm.~\ref{thm:product-form_characterizations}, $\bigshuffle_{j=1}^k L_j$ has a product-form minimal automaton.
\end{lemmarep}
\begin{proof}
Set $L = \bigshuffle_{j=1}^k L_j$. Let $u,v \in \Sigma^*$ be two words such that
\[
u = a_1^{n_1} \cdot\ldots\cdot a_k^{n_k} \mbox{ and }
v = a_1^{m_1} \cdot\ldots\cdot a_k^{m_k}
\]
with $0\le n_j, m_j < \operatorname{sc}(L_j)$, $j \in \{1,\ldots,k\}$.
Suppose there exists $r \in \{1,\ldots, k\}$
such that $n_r \ne m_r$. As $L_r$ is unary and $\max\{n_r,m_r\}<\operatorname{sc}(L_r)$,
\[ \varepsilon, a_r, a_r^2, \ldots, a_r^{\operatorname{sc}(L_r)-1} \]
are representatives of pairwise distinct Nerode right-congruence classes of $L_r$;
in particular, $a_r^{n_r} \not\equiv_{L_r} a_r^{m_r}$.
Hence, there exists $l_r \ge 0$ such that, without loss of generality,
\[
a_r^{n_r + l_r} \in L_r \mbox{ and } a_r^{m_r + l_r} \notin L_r.
\]
As all the $L_j$, $j \in \{1,\ldots, k\}$, are infinite, there exists
$l_j \ge 0$ for $j \in \{1,\ldots, k\}\setminus\{r\}$
such that $a_j^{n_j + l_j} \in L_j$.
Then,
\[
a_1^{n_1 + l_1} \cdot\ldots\cdot a_k^{n_k + l_k} \in L.
\]
And, as $a_r^{m_r + l_r} \notin L_r$, we find
\[
a_1^{m_1 + l_1} \cdot\ldots\cdot a_k^{m_k + l_k} \notin L.
\]
Set $x = a_1^{l_1}\cdot\ldots\cdot a_k^{l_k}$.
Then, as $L$ is a commutative language,
\[
ux \in L \mbox{ and } vx \notin L,
\]
i.e., $u \not\equiv_L v$.
Hence, all words of the form $a_1^{n_1} \cdot\ldots\cdot a_k^{n_k}$
with $0 \le n_j < \operatorname{sc}(L_j)$ are pairwise non-equivalent.
So, $\prod_{j=1}^k\operatorname{sc}(L_j) \le \operatorname{sc}(L)$.
Let $j \in \{1,\ldots,k\}$ and $\mathcal A_{L_j} = (\{a_j\}, Q_{L_j}, \delta_{L_j}, [\varepsilon]_{\equiv_{L_j}}, F_{L_j})$
be the minimal automaton
for $L_j$. Also, let $\mathcal C_L = (\Sigma, S_1 \times \ldots\times S_k, \delta, s_0, F)$
be the minimal commutative automaton from Definition~\ref{def::min_com_aut}
and consider the automaton $\mathcal C_{L, \{a_j\}} = (\{a_j\}, S_j, \delta_j, [\varepsilon]_{\equiv_L}, F_j)$
from Lemma~\ref{lem:union_shuffle_projection_lang}.
By Theorem~\ref{thm::min_com_aut}, we have $L = L(\mathcal C_L)$.
Hence, $\operatorname{sc}(L) \le \prod_{j=1}^k |S_j|$.
By Lemma~\ref{lem::nerode_on_projections}, as $L_j = \pi_j(L)$,
the map
\[
[a_j^n]_{\equiv_L} \mapsto [a_j^n]_{\equiv_{L_j}}
\]
is a well-defined isomorphism between $\mathcal A_{L_j}$
and $\mathcal C_{L, \{a_j\}}$. So, $|S_j| = \operatorname{sc}(L_j)$.
Hence,
\[
\operatorname{sc}(L) \le \prod_{j=1}^k \operatorname{sc}(L_j),
\]
and, combining everything, $\operatorname{sc}(L) = \prod_{j=1}^k \operatorname{sc}(L_j)$.
The entry of the index and period vector of $L$ for $j \in \{1,\ldots,k\}$
is the index and period of the unary automaton $\mathcal C_{L,\{a_j\}}$.
As $\mathcal C_{L,\{a_j\}}$ and $\mathcal A_{L_j}$ are isomorphic,
they have the same index and period. So, all claims of the
statement are shown.~\qed
\end{proof}
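As a quick illustration of Lemma~\ref{lem:sc_shuffle_lang}: over $\Sigma = \{a,b\}$, the unary languages $L_1 = a(aaa)^*$ (index $0$, period $3$) and $L_2 = bb(bb)^*$ (index $1$, period $2$) yield $\operatorname{sc}(L_1 \shuffle L_2) = 3 \cdot 3 = 9$, with index vector $(0,1)$ and period vector $(3,2)$.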
In the next theorem and the following remark, we investigate closure properties of the class in question.
\begin{theoremrep}
\label{thm:closure_properties}
The class of commutative regular languages with minimal automata
of product-form is closed under left and right quotients
and complementation. It is not closed under union, intersection and projection.
\end{theoremrep}
\begin{proof}
As for every language $L \subseteq \Sigma^*$,
the state complexity of $L$ and the complement of $L$
are equal (recall we are only concerned with complete automata here)
and $\mathcal A_L$ is minimal for both languages, closure
of the class under complementation follows.
For commutative languages, left and right quotients
give the same sets, i.e., $u^{-1} L = Lu^{-1}$,
hence it is sufficient to show closure under left quotients.
Now, if $\mathcal A_L = (\Sigma, Q_L, \delta_L, [\varepsilon]_{\equiv_L}, F_L)$
is the minimal automaton, then, for $u \in \Sigma^*$,
$\mathcal B = (\Sigma, Q_L, \delta_L, [u]_{\equiv_L}, F_L)$
recognizes $u^{-1}L$. Indeed, for every $[u]_{\equiv_L} \ne [v]_{\equiv_L}$,
there exists $x \in \Sigma^*$ such that $ux \in L$ and $vx \notin L$, or vice versa;
so, $[ux]_{\equiv_L} \in F_L$ and $[vx]_{\equiv_L}\notin F_L$, or vice versa, i.e., distinct states of $\mathcal B$ remain distinguishable.
This implies, after discarding states not reachable from the start state,
that $\mathcal B$ is isomorphic to the minimal automaton of $u^{-1}L$.
Now, observe that $[w]_{\equiv_L}$ in $\mathcal A_L$
corresponds to the state $([a_1^{|w|_{a_1}}]_{\equiv_L},\ldots,[a_k^{|w|_{a_k}}]_{\equiv_L})$
in $\mathcal C_L$.
So, $\mathcal B$ is of product-form, which gives the claim.
Note that, for every $j \in \{1,\ldots,k\}$,
the index for the letter $a_j$ of $u^{-1}L$
is $\max\{ i_j - |u|_{a_j}, 0 \}$,
while its period for the letter $a_j$ is the same as that of $L$.
See Remark~\ref{rem:union_intersection_not_closed}
for examples of languages which show that the class considered is not closed
under union, intersection and projection.\qed
\end{proof}
\begin{remark}
\label{rem:union_intersection_not_closed}
We have $a\shuffle b^* \cap a^* \shuffle b = a\shuffle b$,
which, by Propositions~\ref{prop:unary_min_prod_form} and~\ref{prop:at_most_one_projection_finite},
shows that this class is not closed under intersection;
by De Morgan's laws, as we have closure under complementation,
we also cannot have closure under union.
Also, $L = aa^* \shuffle bb^* \shuffle cc^* \cup bb^* \shuffle a^* \cup b^*$
has a minimal automaton of product-form, but $\pi_{\{a,b\}}(L) = bb^* \shuffle a^* \cup b^*$
is the language from Example~\ref{ex:min_aut_vs_min_com_aut}. So, this class
is also not closed under projection.
\end{remark}
\begin{theoremrep}
\label{thm:sc_results}
Let $U, V \subseteq \Sigma^*$ be commutative regular languages
with product-form minimal automata with $\operatorname{sc}(U) = n$
and $\operatorname{sc}(V) = m$.
\begin{enumerate}
\item \label{thm:sc:shuffle}
We have $\operatorname{sc}(U \shuffle V) \le 2nm$ if $|\Sigma| > 1$
and $\operatorname{sc}(U \shuffle V) \le nm$ if $|\Sigma| = 1$.
Furthermore, for any $\Sigma$, there exist
$U, V$ as above such that $nm \le \operatorname{sc}(U \shuffle V)$.
\item In the worst case, $n$ states are sufficient
and necessary for a DFA to recognize~$\uparrow\! U$.
Similarly for the
downward closure and interior
operations.
\item In the worst case, $n$ states are sufficient
and necessary for a DFA to recognize
the projection of~$U$.
\item In the worst case, $nm$ states are sufficient
and necessary for a DFA to recognize
$U \cap V$ or $U \cup V$.
\end{enumerate}
\end{theoremrep}
\begin{proof}
\begin{enumerate}
\item If $|\Sigma| = 1$, then the shuffle operation
is the same as concatenation and the claim follows
by the state complexity of concatenation
on unary languages, see~\cite{YuZhuangSalomaa1994}.
Otherwise, with Theorem~\ref{thm:sc_shuffle},
\begin{multline*}
\operatorname{sc}(U \shuffle V) \le \prod_{l=1}^k (i_l + j_l + 2\lcm(p_l, q_l) - 1)
\\ \le \prod_{l=1}^k 2(i_l + p_l)(j_l + q_l)
= 2 \cdot \prod_{l=1}^k (i_l + p_l) \prod_{l=1}^k (j_l + q_l)
= 2 n m.
\end{multline*}
Let $p, q > 0$ be two coprime numbers
and set $U = \bigshuffle_{j=1}^k a_j^{p-1}(a_j^{p})^*$
and $V = \bigshuffle_{j=1}^k a_j^{q-1}(a_j^{q})^*$.
By Lemma~\ref{lem:sc_shuffle_lang}
both have minimal automata of product-form
and $\operatorname{sc}(U) = p^k$ and $\operatorname{sc}(V) = q^k$.
Also, $\operatorname{sc}( a_j^{p-1}(a_j^{p})^* \cdot a_j^{q-1}(a_j^{q})^* ) = pq$, see~\cite[Lemma 5.1 and Fact 5.2]{YuZhuangSalomaa1994}.
Further,
\[
U \shuffle V = \bigshuffle_{j=1}^k ( a_j^{p-1}(a_j^{p})^* \cdot a_j^{q-1}(a_j^{q})^* ).
\]
So, with Lemma~\ref{lem:sc_shuffle_lang},
the minimal automaton of $U \shuffle V$
has product-form and $\operatorname{sc}(U\shuffle V) = p^k \cdot q^k$.
\item This is a direct application
of Theorem~\ref{thm:sc_closure_interior},
as by assumption
and Theorem~\ref{thm:product-form_characterizations}
we have $n = \prod_{l=1}^k (i_l + p_l)$.
For the lower bound, use $L = a^n \shuffle (\Sigma\setminus\{a\})^*$.
Then, the upward closure is $a^na^* \shuffle (\Sigma\setminus\{a\})^*$
and the downward closure is $\{\varepsilon,a,\ldots,a^n\}\shuffle (\Sigma\setminus\{a\})^*$.
\item Set $L = \bigshuffle_{j=1}^k a_j^* = \Sigma^*$.
Then, $\operatorname{sc}(L) = 1$. Alternatively, for any $n > 1$, assuming $\Sigma = \{a,b\}$
for notational simplicity, the language $L = a^* \shuffle b^nb^*$ satisfies $\operatorname{sc}(L) = n+1$
and $\operatorname{sc}(\pi_{\{b\}}(L)) = n+1$.
\item The stated bound is valid for union and intersection
in general, see~\cite[Theorem 4.3]{YuZhuangSalomaa1994}.
Let $p, q > 0$ be two coprime numbers
and set $U = \bigshuffle_{j=1}^k a_j^{p-1}(a_j^{p})^*$
and $V = \bigshuffle_{j=1}^k a_j^{q-1}(a_j^{q})^*$.
By Lemma~\ref{lem:sc_shuffle_lang}
both have minimal automata of product-form.
Also, $\operatorname{sc}( a_j^{p-1}(a_j^{p})^* \cap a_j^{q-1}(a_j^{q})^* ) = pq$
and
\[
U \cap V = \bigshuffle_{j=1}^k ( a_j^{p-1}(a_j^{p})^* \cap a_j^{q-1}(a_j^{q})^* ).
\]
So, with Lemma~\ref{lem:sc_shuffle_lang},
the minimal automaton of $U \cap V$
has product-form. As the class is closed
under complementation by Theorem~\ref{thm:closure_properties},
the statement for union follows by De Morgan's laws.
\end{enumerate}
This finishes the proof.\qed
\end{proof}
\begin{remark}
\label{rem:lower_bound_shuffle}
We do not know whether the bound $2nm$ stated in Theorem~\ref{thm:sc:shuffle}
for the shuffle operation is tight, but the next example
shows that if we have a binary alphabet, we can find commutative languages
with state complexities $n$ and $m$
and product-form minimal automata whose shuffle
needs an automaton with strictly more than $nm$ states.
A similar construction works for more than two letters.
Let $p, q > 11$ be two coprime numbers.
Set $U = a \shuffle b^{p-1}(b^p)^* \cup a^{p-1}(a^p)^* \shuffle bb^{p-1}(b^p)^*$
and $V = b^{q-1}(b^q)^* \cup a^{q-1}(a^q)^* \shuffle bbb^{q-1}(b^q)^*$.
Then, using that shuffle distributes over union
and a number-theoretical result from~\cite[Lemma 5.1]{YuZhuangSalomaa1994},
we find
\begin{multline*}
U \shuffle V = a \shuffle W \cup a^{p-1} (a^p)^* \shuffle bW \cup \\
a^q(a^q)^* \shuffle bbW \cup a^{q-1 + p - 1}(a^p)^* (a^q)^* \shuffle bbbW,
\end{multline*}
where $a^{q-1 + p - 1}(a^p)^* (a^q)^* = F \cup a^{pq - 1}a^*$
for some finite set $F \subseteq \{\varepsilon, a, \ldots, a^{pq - 3} \}$
and $W = E \cup b^{pq-1}b^*$ for some $E \subseteq \{\varepsilon, b, \ldots, b^{pq-3} \}$.
Note that by~\cite[Lemma 5.1]{YuZhuangSalomaa1994} we have $a^{pq-2} \shuffle bbbW \cap U \shuffle V = \emptyset$.
All languages involved have a product-form minimal automaton.
The minimal automaton for $U$ has $(2 + p) \cdot (1+p)$
states, the minimal automaton for $V$ has $(1 + q)\cdot (q+2)$ states
and that for $U \shuffle V$ has $2pq\cdot (pq+3)$ states.
As $(p-11)(q-11) > 0$ we
can deduce $(1+p)(2+p)(1+q)(2+q) < 2(pq)^2 < 2pq(pq+3)$.
\end{remark}
\section{Partial Commutativity and Other Subclasses}
\label{subsec:subclasses}
A \emph{partial commutation} on $\Sigma$ is a symmetric and irreflexive relation $I \subseteq \Sigma \times \Sigma$, often called
the \emph{independence relation}. Of interest is the \emph{congruence} $\sim_I$ generated on $\Sigma^*$
by the relation
$
\{ (ab, ba) \mid (a,b) \in I \}.
$
A language $L \subseteq \Sigma^*$ is \emph{closed under $I$-commutation}
if $u \in L$ and $u \sim_I v$ implies $v \in L$.
If $I = \{ (a,b) \in \Sigma \times \Sigma \mid a \ne b \}$,
then the languages closed under $I$-commutation are precisely the commutative languages.
Languages closed under some partial commutation relation have been extensively studied,
see~\cite{DBLP:journals/iandc/GomezGP13}, also for further references,
and in particular with relation to (Mazurkiewicz) trace theory~\cite{DBLP:books/ws/95/DR1995,DBLP:journals/iandc/GomezGP13,mazurkiewicz77}, a formalism to describe the execution histories
of concurrent programs.
Here, we will focus on the case that $(\Sigma\times \Sigma) \setminus I$ is transitive, i.e.,
for $a,b,c \in \Sigma$, if $(a,b) \notin I$ and $(b,c) \notin I$, then $(a,c) \notin I$.
In this case, $(\Sigma\times \Sigma) \setminus I$ is an equivalence relation
and we will write $\Sigma_1, \ldots, \Sigma_k$ for the different equivalence classes.
The reason to focus on this particular generalization is, as we will see later, that the definition of the minimal commutative
automaton transfers to this more general setting without much difficulty.
To ease the notation, if we have a partial commutation relation as above with a corresponding
partition $\Sigma = \Sigma_1 \cup \ldots \cup \Sigma_k$ of the alphabet,
we also write $\mathcal L_{\Sigma_1, \ldots, \Sigma_k}$ for the \emph{class of languages
closed under this partial commutation}. Then, as is easily seen,
we have $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$
if and only if, for $x \in \Sigma_i$, $y \in \Sigma_j$ ($i \ne j$)
and each $u, v \in \Sigma^*$ we have
$
uxyv \in L \Leftrightarrow uyxv \in L.
$
For example, $L$ is commutative
if and only if $L \in \mathcal L_{\{a_1\}, \ldots, \{a_k\}}$
for $\Sigma = \{a_1, \ldots, a_k\}$.
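On finite word samples, closure under such a partial commutation can be checked mechanically: two words are related by $\sim_I$ exactly if their projections onto the blocks $\Sigma_1, \ldots, \Sigma_k$ coincide, a standard fact from trace theory~\cite{DBLP:books/ws/95/DR1995}. The following Python sketch (our own illustration, restricted to finite languages) implements this test.
\begin{verbatim}
# Illustration only: check, for a finite language, closure under the partial
# commutation whose dependence classes are the blocks of `partition`.
from itertools import permutations

def canon(w, partition):
    """Concatenate the projections of w onto the blocks, in block order; two
    words are related by the commutation iff their canonical forms agree."""
    return "".join("".join(x for x in w if x in block) for block in partition)

def closed_under_commutation(lang, partition):
    lang = set(lang)
    for u in lang:
        for p in set(permutations(u)):
            v = "".join(p)
            if canon(v, partition) == canon(u, partition) and v not in lang:
                return False
    return True

# Sample languages with blocks {a,b} and {c,d} (cf. the appendix example):
blocks = [{"a", "b"}, {"c", "d"}]
U = {"abcd", "acbd", "acdb", "cabd", "cadb", "cdab"}
V = {"abc", "bac", "cba"}
assert closed_under_commutation(U, blocks)
assert not closed_under_commutation(V, blocks)
\end{verbatim}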
\begin{toappendix}
\begin{example}
\label{ex:commutate}
Let $\Sigma = \Sigma_1 \cup \Sigma_2$
with $\Sigma_1 = \{a,b\}$ and $\Sigma_2 = \{c,d\}$.
Then, in the language $U = \{abcd, acbd, acdb, cabd, cadb, cdab \}$
the letters commute according to the partition $\Sigma = \Sigma_1 \cup \Sigma_2$.
On the contrary, in the language $V = \{abc, bac, cba\}$
the letters do not commute according to the partition,
as, for example, $abc \in V$, but $cab \notin V$.
\end{example}
\begin{remark}
\label{rem:letters_commutate_with_decomposition}
Let $\Sigma = \Sigma_1 \cup \ldots \cup \Sigma_k$
and $L \subseteq \Sigma^*$ be
such that the letters in $L$
commute according to the partition.
Then, for each $w \in \Sigma^*$,
\[
w \in L \Leftrightarrow
\pi_{\Sigma_1}(w) \pi_{\Sigma_2}(w) \cdot \ldots \cdot \pi_{\Sigma_k}(w) \in L
\]
and
$
w \equiv_L \pi_{\Sigma_1}(w) \pi_{\Sigma_2}(w) \cdot \ldots \cdot \pi_{\Sigma_k}(w).
$
\end{remark}
\end{toappendix}
\begin{toappendix}
The next lemma will be needed when we relate the different subclasses
in Subsection~\ref{subsec:subclasses}.
\begin{lemmarep}
\label{lem:equivalence_classes_projection}
If $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$
and $u \in \Sigma_i^*$,
then $[u]_{\equiv_L} \cap \Sigma_i^* \subseteq [u]_{\equiv_{\pi_{\Sigma_i}(L)}} \cap \Sigma_i^*$.
\end{lemmarep}
\begin{proof}
Suppose $u,v \in \Sigma_i^*$, $i \in \{1,\ldots,k\}$, and $u \equiv_L v$.
If $x \in \Sigma_i^*$ and $ux \in \pi_{\Sigma_i}(L)$,
then $ux = \pi_{\Sigma_i}(w)$ for some $w \in L$.
Set
\[
w' = \pi_{\Sigma_1}(w) \cdots \pi_{\Sigma_{i-1}}(w)vx\pi_{\Sigma_{i+1}}(w)\cdots \pi_{\Sigma_k}(w).
\]
Then, using that $\equiv_L$ is a right-congruence, $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$
and Remark~\ref{rem:letters_commutate_with_decomposition},
\begin{align*}
[w]_{\equiv_L}
& = [\pi_{\Sigma_1}(w) \cdots \pi_{\Sigma_{i-1}}(w)ux\pi_{\Sigma_{i+1}}(w)\cdots \pi_{\Sigma_k}(w)]_{\equiv_L} \\
& = [u \pi_{\Sigma_1}(w) \cdots \pi_{\Sigma_{i-1}}(w)x\pi_{\Sigma_{i+1}}(w)\cdots \pi_{\Sigma_k}(w)]_{\equiv_L} \\
& = [v \pi_{\Sigma_1}(w) \cdots \pi_{\Sigma_{i-1}}(w)x\pi_{\Sigma_{i+1}}(w)\cdots \pi_{\Sigma_k}(w)]_{\equiv_L} \\
& = [\pi_{\Sigma_1}(w) \cdots \pi_{\Sigma_{i-1}}(w)vx\pi_{\Sigma_{i+1}}(w)\cdots \pi_{\Sigma_k}(w)]_{\equiv_L}
\end{align*}
So, $w' \equiv_L w$, and hence, $w' \in L$. Also, $\pi_{\Sigma_i}(w') = vx$.
So, $vx \in \pi_{\Sigma_i}(L)$.~\qed
\end{proof}
\end{toappendix}
\subsection{The Canonical Automaton}
\label{subsec:can_aut}
Here, we generalize our notion of commutative minimal automaton, Definition~\ref{def::min_com_aut},
to have uniform recognition devices for languages in $\mathcal L_{\Sigma_1,\ldots,\Sigma_k}$.
\begin{definition}
\label{def:generalization_canonical_aut}
Let $\Sigma = \Sigma_1 \cup \ldots \cup \Sigma_k$ be a partition
and $L \subseteq \Sigma^*$.
Set $\mathcal C_{L, \Sigma_1, \ldots, \Sigma_k} = (\Sigma, S_1 \times \ldots \times S_k,
\delta, s_0, F)$
with, for $i \in \{1,\ldots, k\}$,
$
S_i = \{ [u]_{\equiv_L} \mid u \in \Sigma_i^* \}$,
$F = \{ ([\pi_{\Sigma_1}(u)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u)]_{\equiv_L}) \mid u \in L \}$,
$s_0 = ( [\varepsilon]_{\equiv_L}, \ldots, [\varepsilon]_{\equiv_L})$
and, for $x \in \Sigma_i$,
\[
\delta(([u_1]_{\equiv_L}, \ldots, [u_i]_{\equiv_L}, \ldots, [u_k]_{\equiv_L}), x)
= ([u_1]_{\equiv_L}, \ldots, [u_ix]_{\equiv_L}, \ldots, [u_k]_{\equiv_L})
\]
with words $u_j \in \Sigma_j^*$, $j \in \{1,\ldots, k\}$.
This is called the \emph{canonical automaton} for the given $L$ with
respect to $\Sigma = \Sigma_1 \cup \ldots \cup \Sigma_k$.
\end{definition}
\begin{toappendix}
\begin{remark}
\label{rem:can_aut_transition}
Let $\Sigma = \Sigma_1 \cup \ldots \cup \Sigma_k$ be a partition, $u_i,v_i \in \Sigma_i^*$, $i \in \{1,\ldots,k\}$,
and $w \in \Sigma^*$.
Then, for the canonical automaton $\mathcal C_{L, \Sigma_1, \ldots, \Sigma_k}$,
we have
\begin{multline*}
\delta(([u_1]_{\equiv_L}, \ldots, [u_k]_{\equiv_L}), w)
= ([v_1]_{\equiv_L}, \ldots, [v_k]_{\equiv_L}) \\
\Leftrightarrow
\forall i \in \{1,\ldots k\} : u_i\pi_{\Sigma_i}(w) \equiv_L v_i.
\end{multline*}
\end{remark}
\end{toappendix}
\begin{toappendix}
\begin{example}
\input{tikz_ex_gen_can_aut}
Please see Figure~\ref{fig:ex_gen_can_aut}
for an automaton that recognizes a language in $\mathcal L_{\{a,c\},\{b\}}$
and the canonical automaton derived from it.
That the recognized language is indeed in $\mathcal L_{\{a,c\},\{b\}}$
can be seen by noting that for each state $q \in Q$
we have $\delta(q, ab) = \delta(q, ba)$
and $\delta(q, cb) = \delta(q, bc)$.
\end{example}
\begin{example}
Let $\Sigma = \{a,b,c\}$ and $L = \Sigma^* ac^*b \Sigma^*$.
Then $L \in \mathcal L_{\{a,b\},\{c\}}$. The minimal automaton of $L$
has a single final state and is isomorphic to $\mathcal C_{L, \{a,b\},\{c\}}$,
as $[\varepsilon]_{\equiv_L} = [c^n]_{\equiv_L}$ for each $n \ge 0$.
The language $L \cap \Sigma^* c \Sigma^*$
also has the property that the minimal automaton is isomorphic
to $\mathcal C_{L \cap \Sigma^* c \Sigma^*, \{a,b\},\{c\}}$.
\end{example}
\end{toappendix}
Next, we show that the canonical automata recognize precisely the
languages in $\mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
Note that we have dropped the assumption of regularity of $L$.
\begin{theoremrep}
\label{thm:canonical_aut}
Let $L \subseteq \Sigma^*$ and $\Sigma = \Sigma_1 \cup \ldots \cup \Sigma_k$ be a partition.
Then,
\begin{enumerate}
\item $L \subseteq L(\mathcal C_{L,\Sigma_1,\ldots,\Sigma_k})$ and $L(\mathcal C_{L,\Sigma_1,\ldots,\Sigma_k}) \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
\item $L = L(\mathcal C_{L,\Sigma_1,\ldots,\Sigma_k}) \Leftrightarrow L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
\item Let $L \in \mathcal L_{\Sigma_1,\ldots,\Sigma_k}$. Then
$L$ is regular if and only if $\mathcal C_{L,\Sigma_1,\ldots,\Sigma_k}$
is finite.
\end{enumerate}
\end{theoremrep}
\begin{proof}
\begin{enumerate}
\item Let $\mathcal C_{L, \Sigma_1, \ldots, \Sigma_k} = (\Sigma, S_1 \times \ldots \times S_k,
\delta, s_0, F)$ be the canonical automaton for $L$.
For each $w \in \Sigma^*$, we have
\[
\delta(s_0, w)
= ([\pi_{\Sigma_1}(w)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(w)]_{\equiv_L}).
\]
If $w \in L$, then, by Definition~\ref{def:generalization_canonical_aut},
we find $\delta(s_0, w) \in F$.
So, $L \subseteq L(\mathcal C_{L,\Sigma_1, \ldots, \Sigma_k})$.
By Definition~\ref{def:generalization_canonical_aut},
for each $a \in \Sigma_i$, $b \in \Sigma_j$, $i \ne j$, $i,j \in \{1,\ldots,k\}$
and state $s \in S_1 \times \ldots \times S_k$,
we have $\delta(s, ab) = \delta(s, ba)$.
So, we conclude $L(\mathcal C_{L,\Sigma_1,\ldots,\Sigma_k}) \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
\item If $L = L(\mathcal C_{L, \Sigma_1, \ldots,\Sigma_k})$, then, by the previous
item, $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
Now, suppose $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
By the previous item, we only have to establish $L(\mathcal C_{L, \Sigma_1, \ldots,\Sigma_k}) \subseteq L$.
So, suppose $w \in L(\mathcal C_{L, \Sigma_1, \ldots, \Sigma_k})$.
By Definition~\ref{def:generalization_canonical_aut},
there exists $u \in L$ such that $\pi_{\Sigma_i}(w) \equiv_L \pi_{\Sigma_i}(u)$
for $i \in \{1,\ldots, k\}$.
As the letters in $L$ commute according to the partition,
we have, by Remark~\ref{rem:letters_commutate_with_decomposition}, as $u \in L$,
\[
\pi_{\Sigma_1}(u) \pi_{\Sigma_2}(u) \pi_{\Sigma_3}(u) \cdots \pi_{\Sigma_k}(u) \in L.
\]
So, using $\pi_{\Sigma_1}(w) \equiv_L \pi_{\Sigma_1}(u)$, we find
\[
\pi_{\Sigma_1}(w) \pi_{\Sigma_2}(u) \pi_{\Sigma_3}(u)\cdots \pi_{\Sigma_k}(u) \in L.
\]
Using that the letters in $L$ commute according to the partition
again, we find, with the previous equation,
\[
\pi_{\Sigma_2}(u) \pi_{\Sigma_1}(w) \pi_{\Sigma_3}(u) \cdots \pi_{\Sigma_k}(u) \in L.
\]
And then, using $\pi_{\Sigma_2}(w) \equiv_L \pi_{\Sigma_2}(u)$,
\[
\pi_{\Sigma_2}(w) \pi_{\Sigma_1}(w) \pi_{\Sigma_3}(u) \cdots \pi_{\Sigma_k}(u) \in L.
\]
Continuing in this manner, and reordering the result, we get
\[
\pi_{\Sigma_1}(w) \pi_{\Sigma_2}(w) \pi_{\Sigma_3}(w) \cdots \pi_{\Sigma_k}(w) \in L.
\]
So, as the letters in $L$ commute according to the partition, $w \in L$.
Hence $L(\mathcal C_{L, \Sigma_1, \ldots, \Sigma_k}) \subseteq L$.
\item As $\mathcal C_{L, \Sigma_1, \ldots \Sigma_k}$ is defined
with the Nerode right-congruence classes, if $L$
is regular, it must be a finite automaton. If the automaton is finite
and $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$,
then, by the previous item, $L = L(\mathcal C_{L, \Sigma_1, \ldots, \Sigma_k})$
and $L$ is regular.
\end{enumerate}
\noindent So, we have established all statements in the theorem.~\qed
\end{proof}
We will also derive from $\mathcal C_{L,\Sigma_1,\ldots,\Sigma_k}$ a canonical automaton for certain projected languages, which is used in defining a subclass in the next subsection. Essentially, the next definition and proposition mean that if we only use
one ``coordinate'' of $\mathcal C_{L,\Sigma_1, \ldots, \Sigma_k}$, then the resulting automaton recognizes a projection of $L$.
\begin{definition} \label{def::proj_aut}
Let $i \in \{1,\ldots, k\}$ and $L \in \mathcal L_{\Sigma_1,\ldots,\Sigma_k}$.
The \emph{canonical projection automaton (for $\Sigma_i$)}
is $\mathcal C_{L,\Sigma_i} = (\Sigma_i, S_i, \delta_i, [\varepsilon]_{\equiv_L}, F_i)$
with
$S_i = \{ [u]_{\equiv_L} \mid u \in \Sigma_i^* \}$,
$\delta_i([u]_{\equiv_L}, x) = [ux]_{\equiv_L} \mbox{ for } x \in \Sigma_i$
and $F_i = \{ [\pi_{\Sigma_i}(u)]_{\equiv_L} \mid u \in L \}$.
\end{definition}
\begin{propositionrep}
\label{prop:projected_language}
Let $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
Then, for $i \in \{1,\ldots,k\}$, $\pi_{\Sigma_i}(L) = L(\mathcal C_{L, \Sigma_i})$.
\end{propositionrep}
\begin{proof}
Let $\mathcal C_{L,\Sigma_i} = (\Sigma, S_i, \delta_i, [\varepsilon]_{\equiv_L}, F_i)$.
Suppose $u \in \pi_{\Sigma_i}(L)$.
Then, there exists $w \in L$ such that $u = \pi_{\Sigma_i}(w)$.
So, $[u]_{\equiv_L} \in F_i$. Hence, $u \in L(\mathcal C_{L, \Sigma_i})$.
Conversely, if $u \in L(\mathcal C_{L, \Sigma_i}) \subseteq \Sigma_i^*$,
then $u \equiv_L \pi_{\Sigma_i}(w)$ for some $w \in L$.
We have\footnote{Set, for $u_1, \ldots, u_n \in \Sigma^*$, $\prod_{i=1}^n u_i = u_1 \cdot \ldots \cdot u_n$.} $\pi_{\Sigma_i}(w)\prod_{j=1, j\ne i}^k \pi_{\Sigma_j}(w) \in L$.
So, $u \prod_{j=1, j \ne i}^k \pi_{\Sigma_j}(w) \in L$,
which gives $u \in \pi_{\Sigma_i}(L)$. \qed
\end{proof}
\subsection{Subclasses in \texorpdfstring{$\mathcal L_{\Sigma_1, \ldots, \Sigma_k}$}{L\_Sigma\_1...Sigma\_k}}
Here, we investigate several subclasses of $\mathcal L_{\Sigma_1,\ldots, \Sigma_k}$.
Recall that, for $L \subseteq \Sigma^*$, the minimal automaton
of $L$ is denoted by $\mathcal A_L$.
\begin{definition}
\label{def:Li}
Let $\Sigma = \Sigma_1 \cup \ldots \cup \Sigma_k$
be a partition. Then,
define the following classes of languages.
\begin{align*}
\mathcal L_1 & = \{ L\mid \mathcal C_{L,\Sigma_1,\ldots,\Sigma_k} \mbox{ has a single final state and } L = L(\mathcal C_{L,\Sigma_1,\ldots,\Sigma_k}) \}, \\
\mathcal L_2 & = \left\{ L \mid L = \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L) \right\}, \\
\mathcal L_3 & = \{ L \mid L = L(\mathcal C_{L,\Sigma_1,\ldots,\Sigma_k}), \forall i \in \{1,\ldots,k\} : \mathcal A_{\pi_{\Sigma_i}(L)} \mbox{ is isomorphic to } \mathcal C_{L,\Sigma_i} \}, \\
\mathcal L_4 & = \{ L \mid \mathcal A_L \mbox{ is isomorphic to } \mathcal C_{L,\Sigma_1,\ldots,\Sigma_k} \}.
\end{align*}
\end{definition}
First, we show that these are in fact subclasses of $\mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
\begin{propositionrep}
\label{prop:L_i_in_L_Sigmai}
Let $\Sigma = \Sigma_1 \cup \ldots \cup \Sigma_k$
be a partition.
For each $i \in \{1,2,3,4\}$
we have $\mathcal L_i \subseteq \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
\end{propositionrep}
\begin{proof}
If $i \in \{1,3,4\}$, by Theorem~\ref{thm:canonical_aut},
we have $\mathcal L_i \subseteq \mathcal L_{\Sigma_1,\ldots,\Sigma_k}$.
By the definition of the shuffle product, $\mathcal L_2 \subseteq \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.~\qed
\end{proof}
\begin{remark}
\label{rem:A_L_singlefinal_but_not_C_L}
Regarding $\mathcal L_1$, note that there exist languages $L = L(\mathcal C_{L,\Sigma_1, \ldots, \Sigma_k})$
such that the minimal automaton has a single final state, but
$\mathcal C_{L,\Sigma_1, \ldots, \Sigma_k}$
has more than one final state.
For example, $L = \{ w \in \{a,b\}^* \mid |w|_a > 0 \mbox{ or } |w|_b > 0 \}$.
However, if $\mathcal C_{L,\Sigma_1, \ldots, \Sigma_k}$ has a single final state, then
the minimal automaton also has only a single final state.
\end{remark}
\begin{example}
\label{ex:Lis}
Let $\Sigma = \Sigma_1 \cup \Sigma_2$
with $\Sigma_1 = \{a\}$ and $\Sigma_2 = \{b\}$.
Set $L = ( aa(aaa)^* \shuffle bb(bbb)^* ) \cup ( a(aaa)^* \shuffle b(bbb)^* )$.
Then $L \in (\mathcal L_3 \cap \mathcal L_4) \setminus \mathcal L_2$.
\end{example}
\begin{example}
Set $L = ( a(aaa)^* \shuffle b ) \cup aa(aaa)^*$.
Then $L \in \mathcal L_3 \setminus \mathcal L_4$.
\end{example}
The languages in $\mathcal L_1$ arise in connection
with the canonical automaton.
\begin{propositionrep}
Let $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$
and
$
\mathcal C_{L, \Sigma_1, \ldots, \Sigma_k} = (\Sigma, S_1 \times \ldots \times S_k, \delta, s_0, F).
$
Then, for all $s \in S_1 \times \ldots \times S_k$,
$
\{ w \in \Sigma^* \mid \delta(s_0, w) = s \} \in \mathcal L_1.
$
\end{propositionrep}
\begin{proof}
Let $s = (s_1, \ldots, s_k) \in S_1 \times \ldots \times S_k$. Set $U = \{ w \in \Sigma^* \mid \delta(s_0, w) = s \}$.
By construction of $\mathcal C_{L,\Sigma_1, \ldots, \Sigma_k}$, we
have $U \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
So, by Theorem~\ref{thm:canonical_aut},
$U = L(\mathcal C_{U, \Sigma_1, \ldots, \Sigma_k})$, where
$\mathcal C_{U, \Sigma_1, \ldots, \Sigma_k}$ is the canonical automaton for $U$.
Let
\[
E = \{ ([\pi_{\Sigma_1}(u)]_{\equiv_U}, \ldots, [\pi_{\Sigma_k}(u)]_{\equiv_U}) \mid u \in U \}
\]
be the final states of $\mathcal C_{U, \Sigma_1, \ldots, \Sigma_k}$.
We have to show $|E| = 1$. Let $u_i \in \Sigma_i^*$ be such that $s_i = [u_i]_{\equiv_L}$
for $i \in \{1,\ldots, k\}$.
\begin{claiminproof}
For all $v \in \Sigma_i^*$ and $u \in U$ we have
\[
v \equiv_U \pi_{\Sigma_i}(u) \Leftrightarrow
v \equiv_L u_i.
\]
\end{claiminproof}
\begin{claimproof} We show two separate statements that, taken together, imply
our claim.
\begin{enumerate}
\item For all $i \in \{1,\ldots, k\}$ and $u \in U$, $[\pi_{\Sigma_i}(u)]_{\equiv_U} \cap \Sigma_i^* \subseteq [u_i]_{\equiv_L}$.
Fix $i \in \{1,\ldots,k\}$.
Suppose $x \in \Sigma_i^*$
such that $x \equiv_U \pi_{\Sigma_i}(u)$.
Then, as
\[
\pi_{\Sigma_i}(u) \pi_{\Sigma_1}(u) \cdots \pi_{\Sigma_{i-1}}(u) \pi_{\Sigma_{i+1}}(u)\cdots \pi_{\Sigma_k}(u) \in U,
\]
we find, by the definition of the Nerode right-congruence,
\[
x \pi_{\Sigma_1}(u) \cdots \pi_{\Sigma_{i-1}}(u) \pi_{\Sigma_{i+1}}(u)\cdots \pi_{\Sigma_k}(u) \in U.
\]
Hence, $\delta(s_0, x \pi_{\Sigma_1}(u) \cdots \pi_{\Sigma_{i-1}}(u) \pi_{\Sigma_{i+1}}(u)\cdots \pi_{\Sigma_k}(u)) = s$.
By Remark~\ref{rem:can_aut_transition}, as $x \in \Sigma_i^*$,
we find $x \equiv_L u_i$.
\item For all $i \in \{1,\ldots, k\}$, if $x,y \in [u_i]_{\equiv_L}\cap \Sigma_i^*$,
then $x \equiv_U y$.
Fix $i \in \{1,\ldots,k\}$.
Assume there exist $x,y \in \Sigma_i^*$ such that $x \not\equiv_U y$, but $x,y \in [u_i]_{\equiv_L}$.
Then, without loss of generality, there exists $z \in \Sigma^*$
such that
\[
xz \in U, \quad yz \notin U.
\]
But then, for all $j \in \{1,\ldots, k\}$, $\pi_{\Sigma_j}(xz) \equiv_L u_j$.
As $x,y \in \Sigma_i^*$, for $j \ne i$, we also have $\pi_{\Sigma_j}(yz) \equiv_L u_j$.
So, for $yz \notin U$ to hold true, we must have $\pi_{\Sigma_i}(yz) \not\equiv_L u_i$.
However, by assumption $x,y \in [u_i]_{\equiv_L}$, so
\[
\pi_{\Sigma_i}(yz) = y \pi_{\Sigma_i}(z) \equiv_L x \pi_{\Sigma_i}(z) = \pi_{\Sigma_i}(xz) \equiv_L u_i,
\]
which implies $\pi_{\Sigma_i}(yz) \equiv_L u_i$. So, $yz \notin U$ is not possible.
Hence, all words from $\Sigma_i^*$ in $[u_i]_{\equiv_L}$
must be equivalent for $\equiv_U$.
\end{enumerate}
Combining the first and second part gives, for each $i \in \{1,\ldots, k\}$ and $u \in U$, $[\pi_{\Sigma_i}(u)]_{\equiv_U} \cap \Sigma_i^* = [u_i]_{\equiv_L} \cap \Sigma_i^*$.
\end{claimproof}
Now, choose $u, u' \in U$ and $i \in \{1,\ldots, k\}$.
Then, by the above claim, we find
\[
\pi_{\Sigma_i}(u) \equiv_L u_i \mbox{ and }
\pi_{\Sigma_i}(u') \equiv_L u_i.
\]
Using the above claim, with $v = \pi_{\Sigma_i}(u')$,
we can then deduce $\pi_{\Sigma_i}(u') \equiv_U \pi_{\Sigma_i}(u)$.
So, for each $u, u' \in U$, we have
\[
([\pi_{\Sigma_1}(u)]_{\equiv_U}, \ldots, [\pi_{\Sigma_k}(u)]_{\equiv_U})
= ([\pi_{\Sigma_1}(u')]_{\equiv_U}, \ldots, [\pi_{\Sigma_k}(u')]_{\equiv_U}),
\]
which implies $|E| = 1$.~\qed
\end{proof}
Next, we give alternative characterizations of $\mathcal L_2, \mathcal L_3$
and $\mathcal L_4$.
\begin{theoremrep}
\label{thm:alternative_characterizations}
Let $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$. Then,
\begin{enumerate}
\item $L \in \mathcal L_2$ if and only if, for each $w \in \Sigma^*$,
the following is true:
\[
w \in L \Leftrightarrow \forall i \in \{1,\ldots,k\} : \pi_{\Sigma_i}(w) \in \pi_{\Sigma_i}(L);
\]
\item $
L \in \mathcal L_3$ if and only if, for all $i \in \{1,\ldots,k\}$ and $u \in \Sigma_i^*$,
we have
\[
[u]_{\equiv_L} \cap \Sigma_i^* = [u]_{\equiv_{\pi_{\Sigma_i}(L)}} \cap \Sigma_i^*;
\]
\item $L \in \mathcal L_4$ if and only if, for each $u,v \in \Sigma^*$,
\[
u \equiv_L v
\Leftrightarrow \forall i \in \{1,\ldots,k\} : \pi_{\Sigma_i}(u) \equiv_L \pi_{\Sigma_i}(v).
\]
\end{enumerate}
\end{theoremrep}
\begin{proof}
The separate claims are stated in
Proposition~\ref{prop:char_L2}, Proposition~\ref{prop:char_L3}
and Proposition~\ref{prop:char_L4}.\qed
\end{proof}
\begin{toappendix}
Note that, for each $L \subseteq \Sigma^*$,
\begin{equation}
\label{eqn:L_in_bigshuffle_pi_Sigma_i}
L \subseteq \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L).
\end{equation}
\begin{proposition}
\label{prop:char_L2}
Let $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
Then, $L \in \mathcal L_2$ if and only if, for each $w \in \Sigma^*$,
the following is true:
\[
w \in L \Leftrightarrow \forall i \in \{1,\ldots,k\} : \pi_{\Sigma_i}(w) \in \pi_{\Sigma_i}(L).
\]
\end{proposition}
\begin{proof}
\begin{enumerate}
\item Suppose $L = \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L)$.
We show that the equivalence holds true.
If $w \in L$, then, for each $i \in \{1,\ldots,k\}$, $\pi_{\Sigma_i}(w) \in \pi_{\Sigma_i}(L)$.
Conversely, let $w \in \Sigma^*$ and assume, for each $i \in \{1,\ldots,k\}$, we have
$\pi_{\Sigma_i}(w) \in \pi_{\Sigma_i}(L)$.
Then, $w \in \bigshuffle_{i=1}^k \pi_{\Sigma_i}(w) \subseteq \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L) = L$.
\item Now, suppose, for each $w \in \Sigma^*$, we have
\[
w \in L \Leftrightarrow \forall i \in \{1,\ldots,k\} : \pi_{\Sigma_i}(w) \in \pi_{\Sigma_i}(L).
\]
By Equation~\eqref{eqn:L_in_bigshuffle_pi_Sigma_i}, $L \subseteq \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L)$.
So, assume $w \in \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L)$.
Fix $j \in \{1,\ldots,k\}$.
Then, as $\Sigma_1,\ldots,\Sigma_k$ are pairwise disjoint
and so $\pi_{\Sigma_i}(\pi_{\Sigma_j}(L)) = \{\varepsilon\}$ for $i \ne j$,
\[
\pi_{\Sigma_j}(w) \in \pi_{\Sigma_j}\left( \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L) \right) =
\bigshuffle_{i=1}^k \pi_{\Sigma_j}(\pi_{\Sigma_i}(L)) = \pi_{\Sigma_j}(L).
\]
Hence, by applying the equation, we find $w \in L$.
So, $\bigshuffle_{i=1}^k \pi_{\Sigma_i}(L) \subseteq L$.
\end{enumerate}
So, we have shown the equivalence of both conditions.~\qed
\end{proof}
\begin{proposition}
\label{prop:char_L3}
Let $L \in \mathcal L_{\Sigma_1,\ldots,\Sigma_k}$.
Then,
$
L \in \mathcal L_3$ if and only if, for each $i \in \{1,\ldots,k\}$ and $u \in \Sigma_i^*$,
we have
$[u]_{\equiv_L} \cap \Sigma_i^* = [u]_{\equiv_{\pi_{\Sigma_i}(L)}} \cap \Sigma_i^*.
$
\end{proposition}
\begin{proof}
The map $\varphi : \{ [u]_{\equiv_L} \cap \Sigma_i^* \mid u \in \Sigma_i^* \} \to
\{ [u]_{\equiv_{\pi_{\Sigma_i}(L)}} \cap \Sigma_i^* \mid u \in \Sigma_i^* \}$ given by
\[
\varphi([u]_{\equiv_L} \cap \Sigma_i^*) = [u]_{\equiv_{\pi_{\Sigma_i}(L)}} \cap \Sigma_i^*
\]
is well-defined by Lemma~\ref{lem:equivalence_classes_projection}.
Also, the non-empty sets $[u]_{\equiv_L} \cap \Sigma_i^*$
partition $\Sigma_i^*$ as a right-congruence over $\Sigma_i^*$.
Moreover, the non-empty right-congruence classes of the form $[u]_{\equiv_L} \cap \Sigma_i^*$
are precisely the states of $\mathcal C_{L, \Sigma_i}$.
So, $\varphi$ induces a surjective homomorphism
from $\mathcal C_{L,\Sigma_i}$ onto $\mathcal A_{\pi_{\Sigma_i}(L)}$.
This yields the stated equivalence.~\qed
\end{proof}
\begin{proposition}
\label{prop:char_L4}
Let
$L \in \mathcal L_{\Sigma_1,\ldots,\Sigma_k}$.
Then, $L \in \mathcal L_4$ if and only if, for each $u,v \in \Sigma^*$,
\[
u \equiv_L v
\Leftrightarrow \forall i \in \{1,\ldots,k\} : \pi_{\Sigma_i}(u) \equiv_L \pi_{\Sigma_i}(v).
\]
\end{proposition}
\begin{proof}
First, we show that the implication from right to left is always true.
Hence, we only have to argue that if the other implication
is true, this is equivalent to $L \in \mathcal L_4$.
\begin{claiminproof}
Let $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$ and $u,v \in \Sigma^*$. Then,
\[
\forall i \in \{1,\ldots,k\} : \pi_{\Sigma_i}(u) \equiv_L \pi_{\Sigma_i}(v) \Rightarrow u \equiv_L v.
\]
\end{claiminproof}
\begin{claimproof}
Suppose \[ \forall i \in \{1,\ldots,k\} : \pi_{\Sigma_i}(u) \equiv_L \pi_{\Sigma_i}(v). \]
Then, using $\pi_{\Sigma_1}(u) \equiv_L \pi_{\Sigma_1}(v)$, as it is a right-congruence, we find
\[
\pi_{\Sigma_1}(u) \pi_{\Sigma_2}(v) \cdots \pi_{\Sigma_k}(v)
\equiv_L
\pi_{\Sigma_1}(v) \pi_{\Sigma_2}(v) \cdots \pi_{\Sigma_k}(v).
\]
As $L \in \mathcal L_{\Sigma_1, \ldots, \Sigma_k}$, we have
\[
\pi_{\Sigma_1}(u) \pi_{\Sigma_2}(v)\pi_{\Sigma_3}(v) \cdots \pi_{\Sigma_k}(v) \equiv_L \pi_{\Sigma_2}(v)\pi_{\Sigma_1}(u)\pi_{\Sigma_3}(v) \cdots \pi_{\Sigma_k}(v).
\]
Then, using $\pi_{\Sigma_2}(u) \equiv_L \pi_{\Sigma_2}(v)$,
\[
\pi_{\Sigma_2}(u) \pi_{\Sigma_1}(u)\pi_{\Sigma_3}(v) \cdots \pi_{\Sigma_k}(v)
\equiv_L
\pi_{\Sigma_2}(v) \pi_{\Sigma_1}(u)\pi_{\Sigma_3}(v) \cdots \pi_{\Sigma_k}(v).
\]
Hence, up to now,
\[
\pi_{\Sigma_2}(u) \pi_{\Sigma_1}(u)\pi_{\Sigma_3}(v) \cdots \pi_{\Sigma_k}(v)
\equiv_L
\pi_{\Sigma_2}(v) \pi_{\Sigma_1}(v)\pi_{\Sigma_3}(v) \cdots \pi_{\Sigma_k}(v).
\]
Continuing similarly, we can show that
\[
\pi_{\Sigma_k}(u) \cdots \pi_{\Sigma_1}(u)
\equiv_L
\pi_{\Sigma_k}(v) \cdots \pi_{\Sigma_1}(v).
\]
So\footnote{Alternatively, we can note that by commutativity, the Nerode right-congruence is also
a left-congruence. Hence, it is the syntactic congruence, giving a natural composition operation
on the equivalence classes. Then,
by Remark~\ref{rem:letters_commutate_with_decomposition},
\[
[u]_{\equiv_L} = [\pi_{\Sigma_1}(u) \cdots \pi_{\Sigma_k}(u)]_{\equiv_L}
= [\pi_{\Sigma_1}(u)]_{\equiv_L} \cdots [\pi_{\Sigma_k}(u)]_{\equiv_L},
\]
which also gives that, if for all $i \in \{1,\ldots,k\}$
we have $\pi_{\Sigma_i}(u) \equiv_L \pi_{\Sigma_i}(v)$, then $u \equiv_L v$.},
by Remark~\ref{rem:letters_commutate_with_decomposition},
$u \equiv_L v$.
\end{claimproof}
Therefore, as
\[
\forall i \in \{1,\ldots,k\} : \pi_{\Sigma_i}(u) \equiv_L \pi_{\Sigma_i}(v) \Rightarrow
u \equiv_L v,
\]
the map
\[
([\pi_{\Sigma_1}(u_1)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u_k)]_{\equiv_L}) \mapsto [\pi_{\Sigma_1}(u_1) \cdots \pi_{\Sigma_k}(u_k)]_{\equiv_L}
\]
for $u_1, \ldots, u_k \in \Sigma^*$
is well-defined. For, suppose $v_1, \ldots, v_k \in \Sigma^*$
such that
\[
\forall i \in \{1,\ldots,k\} : \pi_{\Sigma_i}(u_i) \equiv_L \pi_{\Sigma_i}(v_i).
\]
Set $u = \pi_{\Sigma_1}(u_1)\cdots\pi_{\Sigma_k}(u_k)$
and $v = \pi_{\Sigma_1}(v_1)\cdots\pi_{\Sigma_k}(v_k)$.
Then, for all $i \in \{1,\ldots,k\}$,
we have $\pi_{\Sigma_i}(u) = \pi_{\Sigma_i}(u_i)$
and $\pi_{\Sigma_i}(v) = \pi_{\Sigma_i}(v_i)$,
which implies $\pi_{\Sigma_i}(u) \equiv_L \pi_{\Sigma_i}(v)$
and we find that $u \equiv_L v$.
Also, as $([\pi_{\Sigma_1}(u)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u)])$
gets mapped to $[\pi_{\Sigma_1}(u)\cdots\pi_{\Sigma_k}(u)]_{\equiv_L} = [u]_{\equiv_L}$,
the map is surjective.
By commutativity, we can also easily show that it is an automaton
homomorphism.
So, if $L \in \mathcal L_4$, then $\mathcal C_{L,\Sigma_1,\ldots, \Sigma_k}$
and $\mathcal A_L$ have the same number of states. Hence, the above given
surjective map is actually bijective.
So,
\[
[\pi_{\Sigma_1}(u_1) \cdots \pi_{\Sigma_k}(u_k)]_{\equiv_L}
= [\pi_{\Sigma_1}(v_1) \cdots \pi_{\Sigma_k}(v_k)]_{\equiv_L}
\]
implies
\[
([\pi_{\Sigma_1}(u_1)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u_k)]_{\equiv_L})
= ([\pi_{\Sigma_1}(v_1)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(v_k)]_{\equiv_L}).
\]
In particular, if $u \equiv_L v$,
then, by Remark~\ref{rem:letters_commutate_with_decomposition},
we have $[\pi_{\Sigma_1}(u) \cdots \pi_{\Sigma_k}(u)]_{\equiv_L}
= [\pi_{\Sigma_1}(v) \cdots \pi_{\Sigma_k}(v)]_{\equiv_L}$,
which gives
\[
([\pi_{\Sigma_1}(u)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u)]_{\equiv_L})
= ([\pi_{\Sigma_1}(v)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(v)]_{\equiv_L})
\]
and we have the implication
\begin{equation}\label{eqn:L_4_implication}
u \equiv_L v \Rightarrow
\forall i \in \{1,\ldots,k\} : \pi_{\Sigma_i}(u) \equiv_L \pi_{\Sigma_i}(v).
\end{equation}
Conversely, suppose Equation~\eqref{eqn:L_4_implication}
holds true.
Then, the map
\[
[u]_{\equiv_L} \mapsto ([\pi_{\Sigma_1}(u)]_{\equiv_L} , \ldots, [\pi_{\Sigma_k}(u)]_{\equiv_L} )
\]
is well-defined.
As, for $u_1, \ldots, u_k \in \Sigma^*$,
$u = \pi_{\Sigma_1}(u_1) \cdots \pi_{\Sigma_k}(u_k)$
gets mapped to $([\pi_{\Sigma_1}(u_1)]_{\equiv_L} , \ldots, [\pi_{\Sigma_k}(u_k)]_{\equiv_L} )$,
it is also surjective. So, by finiteness\footnote{Actually, we do not need to use
finiteness, and can show it directly by noting that the given map is a two-sided inverse
to the map
\[
([\pi_{\Sigma_1}(u_1)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u_k)]_{\equiv_L}) \mapsto [\pi_{\Sigma_1}(u_1) \cdots \pi_{\Sigma_k}(u_k)]_{\equiv_L}.
\]}, it is a bijection between $\mathcal A_L$
and $\mathcal C_{L,\Sigma_1, \ldots, \Sigma_k}$.~\qed
\end{proof}
\end{toappendix}
\begin{toappendix}
\begin{remark}
Note that the condition in Proposition~\ref{prop:char_L3}
is stated only for words from $\Sigma_i^*$, $i \in \{1,\ldots,k\}$.
For example, stipulating the equation
\[ [u]_{\equiv_L} \cap \Sigma_i^* = [\pi_{\Sigma_i}(u)]_{\equiv_{\pi_{\Sigma_i}(L)}} \cap \Sigma_i^*
\]
for all $u \in \Sigma^*$ is a strong demand.
It implies, for $a \in \Sigma_i$, $b \in \Sigma_j$, $i \ne j$,
$[ab]_{\equiv_L} = [a]_{\equiv_L} = [b]_{\equiv_L} $,
as
\begin{align*}
[ab]_{\equiv_L} \cap \Sigma_i^* & = [a]_{\equiv_{\pi_{\Sigma_i}(L)}} \cap \Sigma_i^*; \\
[ab]_{\equiv_L} \cap \Sigma_j^* & = [b]_{\equiv_{\pi_{\Sigma_j}(L)}} \cap \Sigma_j^*,
\end{align*}
which gives $a,b \in [ab]_{\equiv_L}$.
\end{remark}
\begin{remark}
Let $L \subseteq \Sigma^*$.
In several statements, we have intersected the equivalence classes for $\pi_{\Sigma_i}(L)$, $i \in \{1,\ldots,k\}$,
with $\Sigma_i^*$.
Equivalently, we could also say that we only consider the equivalence over the smaller alphabet $\Sigma_i$.
More generally, if $\Gamma \subseteq \Sigma$ and $U \subseteq \Gamma^*$,
then, $u, v \in \Gamma^*$ are equivalent for $\equiv_U$ over the alphabet $\Gamma$
if and only if they are equivalent for $\equiv_U$ over $\Sigma$.
Also, over the larger alphabet, all words using symbols not in $\Gamma$
are equivalent.
\end{remark}
\begin{remark}
The condition in Proposition~\ref{prop:char_L3}
does not imply that the language is in $\mathcal L_{\Sigma_1, \ldots, \Sigma_k}$.
Let $L = \{ ab, b\}$.
Then, $\pi_{\{a\}}(L) = \{\varepsilon, a\}$, $\pi_{\{b\}}(L) = \{b\}$ and, for each $n,m \ge 0$,
$
[a^n]_{\equiv_L} \cap a^* = [a^n]_{\equiv_{\pi_{\{a\}}(L)}} \cap a^*$ and
$[b^m]_{\equiv_L} \cap b^* = [b^m]_{\equiv_{\pi_{\{b\}}(L)}} \cap b^*.$
However, $L \notin \mathcal L_{\{a\},\{b\}}$.
\end{remark}
\end{toappendix}
\begin{example}
\label{ex:L4-incomparable}
Let $L_1$ be the language from Example~\ref{ex::comm_aut}.
Set $L_2 = a_1 \shuffle a_2 = \{a_1 a_2, a_2 a_1\}$.
Both of their letters commute for the partition $\{a_1,a_2\} = \{a_1\} \cup \{a_2\}$.
Then, $L_1 \in \mathcal L_4 \setminus \mathcal L_3$ and $L_2 \in \mathcal L_1 \setminus \mathcal L_4$.
\end{example}
Finally, in Theorem~\ref{thm:L1inL2inL3},
we establish inclusion relations, which are all proper, between
$\mathcal L_1, \mathcal L_2$ and $\mathcal L_3$, also see Figure~\ref{fig:inclusions_Li}.
\begin{figure}
\caption{Inclusion relations between the language classes.
}
\label{fig:inclusions_Li}
\end{figure}
\begin{toappendix}
\begin{example}\label{ex::lang_classes}
For the language $L = (a(aaa)^* \cup aa(aaa)^*) \shuffle b = a(aaa)^* \shuffle b \cup aa(aaa)^* \shuffle b$
the minimal commutative automaton has more than a single
final state, but $L = \pi_1(L) \shuffle \pi_2(L)$.
\end{example}
\begin{proposition} \label{prop::general_spec_form_final_states}
Let $L \in \mathcal L_{\Sigma_1,\ldots,\Sigma_k}$.
Then, the following statements are equivalent:
\begin{enumerate}
\item $L = \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L)$;
\item in the canonical automaton $\mathcal C_{L,\Sigma_1, \ldots, \Sigma_k} = (\Sigma, S_1 \times \ldots \times S_k, \delta, s_0, F)$
we can write $F = F_1 \times \ldots \times F_k$ with $F_i \subseteq S_i$.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}
\item Suppose $L = \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L)$.
For the final state set $F$ of the canonical automaton, we will
show $F = F_1 \times \ldots \times F_k$.
First, note that, by Definition~\ref{def:generalization_canonical_aut},
we find $F \subseteq F_1 \times \ldots \times F_k$.
For the converse inclusion, let $u_1, \ldots, u_k \in L$, and consider the tuple
\[ ( [\pi_{\Sigma_1}(u_1)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u_k)]_{\equiv_L} ) \in F_1 \times \ldots \times F_k.
\]
Choose $u \in \bigshuffle_{i=1}^k \pi_{\Sigma_i}(u_i)$.
By assumption, $\bigshuffle_{i=1}^k \pi_{\Sigma_i}(L) \subseteq L$.
Hence $u \in L$.
As $\pi_{\Sigma_i}(u) = \pi_{\Sigma_i}(u_i)$ for $i \in \{1,\ldots, k\}$,
we find
\[
( [\pi_{\Sigma_1}(u_1)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u_k)]_{\equiv_L} )
= ( [\pi_{\Sigma_1}(u)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u)]_{\equiv_L} ).
\]
By Definition~\ref{def:generalization_canonical_aut},
$( [\pi_{\Sigma_1}(u)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u)]_{\equiv_L} ) \in F$.
So, $F_1 \times \ldots \times F_k \subseteq F$.
\item Assume $F = F_1 \times \ldots \times F_k$.
Let $u \in \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L)$.
Then there exist words $u_i \in L$, $i \in \{1,\ldots, k\}$,
such that $\pi_{\Sigma_i}(u) = \pi_{\Sigma_i}(u_i)$.
So, $[\pi_{\Sigma_i}(u_i)]_{\equiv_L} \in F_i$
and we find
\[
([\pi_{\Sigma_1}(u)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u)]_{\equiv_L}) \in F.
\]
By Definition~\ref{def:generalization_canonical_aut}, there exists $v \in L$
such that
\[
([\pi_{\Sigma_1}(u)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(u)]_{\equiv_L})
= ([\pi_{\Sigma_1}(v)]_{\equiv_L}, \ldots, [\pi_{\Sigma_k}(v)]_{\equiv_L}).
\]
Using Remark~\ref{rem:letters_commutate_with_decomposition} and the previous equation,
\begin{align*}
[u]_{\equiv_L} & = [\pi_{\Sigma_1}(u) \cdots \pi_{\Sigma_k}(u)]_{\equiv_L} \\
& = [\pi_{\Sigma_1}(u)]_{\equiv_L} \cdot\ldots\cdot [\pi_{\Sigma_k}(u)]_{\equiv_L} \\
& = [\pi_{\Sigma_1}(v)]_{\equiv_L} \cdot\ldots\cdot [\pi_{\Sigma_k}(v)]_{\equiv_L} \\
& = [v]_{\equiv_L}.
\end{align*}
So, as $v \in L$, we have $u \in L$.
Hence, $\bigshuffle_{i=1}^k \pi_{\Sigma_i}(L) \subseteq L$.
So, with Equation~\eqref{eqn:L_in_bigshuffle_pi_Sigma_i},
$L = \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L)$.
\end{enumerate}
Hence, the first and the last condition are equivalent
and this concludes the proof.~\qed
\end{proof}
\end{toappendix}
\begin{theoremrep}
\label{thm:L1inL2inL3}
We have $\mathcal L_1 \subsetneq \mathcal L_2 \subsetneq \mathcal L_3$.
\end{theoremrep}
\begin{proof}
By Proposition~\ref{prop::general_spec_form_final_states},
we find $\mathcal L_1 \subseteq \mathcal L_2$. That the inclusion is proper is shown
by Example~\ref{ex::lang_classes}.
Suppose $L = \bigshuffle_{i=1}^k \pi_{\Sigma_i}(L)$.
Let $i \in \{1,\ldots,k\}$ and $u,v \in \Sigma_i^*$.
We have
\[
u \equiv_L v \Leftrightarrow \forall x \in \Sigma^* : ux \in L \leftrightarrow vx \in L.
\]
Using Proposition~\ref{prop:char_L2}, the condition on the right hand side is equivalent to
\begin{multline}\label{eqn:L_2}
\forall x \in \Sigma^* [ ( \forall j \in \{1,\ldots, k\} : \pi_{\Sigma_j}(ux) \in \pi_{\Sigma_j}(L) ) \leftrightarrow \\ ( \forall j \in \{1,\ldots, k\} : \pi_{\Sigma_j}(vx) \in \pi_{\Sigma_j}(L) ) ].
\end{multline}
\begin{claiminproof}
Equation~\eqref{eqn:L_2} is equivalent to
\begin{equation} \label{eqn:L_2_2}
\forall x \in \Sigma^* : \pi_{\Sigma_i}(ux) \in \pi_{\Sigma_i}(L) \leftrightarrow \pi_{\Sigma_i}(vx) \in \pi_{\Sigma_i}(L).
\end{equation}
\end{claiminproof}
\begin{claimproof}
Suppose Equation~\eqref{eqn:L_2_2} holds true.
Let $x \in \Sigma^*$.
Then, if, for each $j \in \{1,\ldots,k\}$,
we have
\[
\pi_{\Sigma_j}(ux) \in \pi_{\Sigma_j}(L),
\]
using that, for $u,v \in \Sigma_i^*$, we have,
\[
\pi_{\Sigma_j}(ux) = \left\{
\begin{array}{ll}
u \pi_{\Sigma_i}(x) & \mbox{if } i = j; \\
\pi_{\Sigma_j}(x) & \mbox{if } i \ne j,
\end{array}\right.
\]
and similarly for $\pi_{\Sigma_j}(vx)$,
we find with Equation~\eqref{eqn:L_2_2}
that, for each $j \in \{1,\ldots,k\}$,
\[
\pi_{\Sigma_j}(vx) \in \pi_{\Sigma_j}(L).
\]
The other implication of Equation~\eqref{eqn:L_2}
can be shown similarly. Hence, Equation~\eqref{eqn:L_2} holds true.
Conversely, suppose now Equation~\eqref{eqn:L_2}
holds true.
Let $x \in \Sigma^*$
and assume
\[
\pi_{\Sigma_i}(ux) \in \pi_{\Sigma_i}(L).
\]
As $L \ne \emptyset$,
we find $u_j \in \Sigma_j^*$
such that $u_j \in \pi_{\Sigma_j}(L)$
for each $j \in \{1,\ldots,k\}$.
Then, set $y = u_1 \cdots u_{i-1} \pi_{\Sigma_i}(x) u_{i+1}\cdots u_k$.
By choice and as $\pi_{\Sigma_i}(y) = \pi_{\Sigma_i}(x)$,
we have for each $j \in \{1,\ldots,k\}$ that
\[
\pi_{\Sigma_j}(u y) \in \pi_{\Sigma_j}(L).
\]
Hence, by Equation~\eqref{eqn:L_2}, for each $j \in \{1,\ldots,k\}$,
\[
\pi_{\Sigma_j}(v y) \in \pi_{\Sigma_j}(L).
\]
In particular, $\pi_{\Sigma_i}(vx) = \pi_{\Sigma_i}(vy) \in \pi_{\Sigma_i}(L)$.
The other implication of Equation~\eqref{eqn:L_2_2}
can be shown similarly. Hence, Equation~\eqref{eqn:L_2_2} holds true.
\end{claimproof}
So, by the previous claim, Equation~\eqref{eqn:L_2}
simplifies to
\[
\forall x \in \Sigma^* : \pi_{\Sigma_i}(ux) \in \pi_{\Sigma_i}(L) \leftrightarrow \pi_{\Sigma_i}(vx) \in \pi_{\Sigma_i}(L),
\]
which is equivalent to
\[
\forall x \in \Sigma_i^* : ux \in \pi_{\Sigma_i}(L) \leftrightarrow vx \in \pi_{\Sigma_i}(L).
\]
But this is precisely the definition of $u \equiv_{\pi_{\Sigma_i}(L)} v$.
Hence,
\[
[u]_{\equiv_L} \cap \Sigma_i^* = [u]_{\equiv_{\pi_{\Sigma_i}(L)}} \cap \Sigma_i^*
\]
and by Proposition~\ref{prop:char_L3}, we find $\mathcal L_2 \subseteq \mathcal L_3$.
That the inclusion is proper is shown by Example~\ref{ex:Lis}.~\qed
\end{proof}
\begin{remark}
\label{rem:inclusions}
Theorem~\ref{thm:L1inL2inL3} and Example~\ref{ex:L4-incomparable}
show that $\mathcal L_4$ is incomparable to each of the other language classes
with respect to inclusion.
\end{remark}
\section{Conclusion} The language class of commutative regular languages
with minimal automata of product-form behaves well with respect
to the descriptional complexity measure of state complexity for certain operations, see Table~\ref{tab:sc_product-from},
and Lemma~\ref{lem:sc_shuffle_lang} allows us to construct
infinitely many commutative regular languages with product-form minimal automaton.
The investigation started could be carried out for other operations and measures of descriptional complexity
as well. Likewise, as done in~\cite{GomezA08,DBLP:conf/icgi/Gomez10}
for commutative and more general partial commutativity conditions,
it might be interesting if the learning algorithms given there could be improved
for the language class introduced.
Lastly, whether the bound $2nm$ for shuffle is tight is an open problem.
Remark~\ref{rem:lower_bound_shuffle} shows that the bound $nm$ is not sufficient;
however, exhibiting an infinite family of commutative regular languages with minimal automata
of product-form that attains the bound $2nm$ for shuffle remains open.
{
\noindent \footnotesize
\textbf{Acknowledgement.} I thank the anonymous referees of~\cite{Hoffmann2021NISextended} (the extended version of~\cite{DBLP:conf/cai/Hoffmann19}), whose feedback also helped in the present work. I also sincerely
thank the referees of the present submission, whose comments helped me a lot
in identifying unclear or ungrammatical formulations and
a missing definition.
}
\end{document}
\begin{document}
\baselineskip 14.8pt
\title{
Stationary Probability Vectors of Higher-order Markov Chains}
\author{Chi-Kwong Li\thanks{Department of Mathematics,
College of William and Mary, Williamsburg, VA 23187, USA. ([email protected])}
\ and \ Shixiao Zhang\thanks{Department of Mathematics, University of Hong Kong,
Hong Kong. ([email protected])}}
\date{}
\maketitle
\begin{abstract}
We consider the higher-order Markov Chain, and characterize the
second order Markov chains admitting every probability distribution vector
as a stationary vector. The result is used to construct
Markov chains of higher-order with the same property.
We also study conditions under which the set of stationary
vectors of the Markov chain has a certain affine dimension.
\end{abstract}
{\bf Key words.} Transition probability tensor, higher-order Markov chains.
\section{Introduction}
A discrete-time Markov chain is a stochastic process with a sequence of
random variables
$$\left\{ {{X_t},t = 0,1,2 \ldots } \right\},$$
which takes on values in a discrete finite state space
$${\langle} n {\rangle} = \{1, \dots, n\}$$ for a positive integer $n$,
such that with time independent probability
\begin{eqnarray*}
{p_{ij}} &=& \Pr \left( {{X_{t + 1}=i}|{X_t} = j,{X_{t - 1}} = {i_{t - 1}},{X_{t - 2}}
= {i_{t - 2}}, \dots ,{X_1} = {i_1},{X_0} = {i_0}} \right) \\
&=& \Pr \left( {{X_{t + 1}}=i|{X_t} = j} \right)
\end{eqnarray*}
holds for all $i,j,{i_0}, \cdots ,{i_{t - 1}}$.
The nonnegative matrix $P = (p_{ij})_{1 \le i, j \le n}$ is the
transition matrix of the Markov process and is column
stochastic, i.e., $\sum_{i=1}^n p_{ij} = 1$ for $j = 1, \dots, n$.
Denote by
\begin{equation}\label{omega}
\Omega_n =\left\{ {\bf x} = (x_1, \dots, x_n)^t: x_1, \dots, x_n \ge 0,
\ \sum_{i=1}^n x_i = 1\right\}
\end{equation}
the simplex of probability vectors in ${\bf R}^n$.
A nonnegative vector ${\bf x}\in \Omega_n$
is a stationary probability vector (also known as the distribution)
of a finite Markov Chain if $P{\bf x} = {\bf x}$.
By the Perron-Frobenius Theory (e.g., see \cite{3,Ross})
every discrete-time Markov Chain has a stationary
probability vector, and the vector is unique if the transition matrix
is primitive, i.e., there is a positive integer $r$ such that all entries of $P^r$
are positive. The uniqueness condition is useful when one uses numerical schemes to
determine the stationary vectors. With the uniqueness condition, any
convergent scheme would lead to the unique stationary vector; e.g., see \cite{5}.
More generally, one may consider an $m$-th order Markov chain such that
\begin{eqnarray*}
{p_{i,{i_1}, \cdots ,{i_m}}}
&=& \Pr \left( {{X_{t + 1}} = i|{X_t} = {i_1},{X_{t - 1}}
= {i_2}, \dots ,{X_1} = {i_t},{X_0} = {i_{t + 1}}} \right) \\
&=&\Pr \left( {{X_{t + 1}} = i|{X_t} = {i_1}, \cdots ,{X_{t - m + 1}} = {i_m}} \right),
\end{eqnarray*}
where $i,{i_1}, \cdots ,{i_m} \in {\langle} n {\rangle}$; see \cite{1,2}.
In other words, the current state of the process depends on $m$ past states.
Observe that
$$\sum_{i=1}^n p_{i, i_1, \dots, i_m} = 1, \qquad
1 \le i_1, \dots, i_m \le n.$$
When $m=1$, it is just the standard Markov chain. There are many situations
in which one would use such Markov chain models.
We refer readers to the papers \cite{1,2,ling,19,20} and the references therein.
Note that $P=(p_{i,i_1, \dots, i_m})$ is an $(m+1)$-fold tensor of ${\bf R}^n$
governing the transition of states in the $m$-th order Markov chain according to the
following rule
$$x_i(t+1) = \sum_{1 \le i_1, \dots, i_m \le n} p_{i,i_1, \dots, i_m} x_{i_1}(t) \cdots x_{i_m}(t),
\quad i = 1, \dots, n.$$
We will call $P$ the transition probability tensor of the Markov
chain.\footnote{As pointed out by the referee, instead of
the tensor properties of $P$, we are actually studying the
hypermatrix of the tensor $P$ with respect to a special
choice of basis of $\otimes ^{m+1}{\bf C}^n$.}
A nonnegative vector ${\bf x} = (x_1, \dots, x_n)^t \in {\bf R}^n$ with entries summing up to 1 is a stationary (probability distribution) vector if
\begin{equation}\label{eq1}
x_i = \sum_{1 \le i_1, \dots, i_m \le n} p_{i,i_1, \dots, i_m} x_{i_1} \cdots x_{i_m},
\quad i = 1, \dots, n.
\end{equation}
By a weaker version of the Perron-Frobenius Theorem for tensors
in \cite{L} (see also \cite{CPZ,FGH}),
a stationary vector for a higher-order Markov chain always
exists. Moreover, the stationary vector will have positive entries
if the transition tensor $P = (p_{i,i_1, \dots, i_m})$ is irreducible,
i.e., there is no non-empty proper index subset
$I \subset \left\{ {1,2, \cdots ,n} \right\}$
such that ${p_{i,{i_1},\dots, {i_m}}} = 0$
for all ${i} \in I$, and ${i_1},\dots, {i_m} \notin I$.
Researchers have derived sufficient conditions for the
stationary vector to be unique, and proposed some iterative methods
to find the stationary vector; see \cite{CPZ,FGH,ling,L}.
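Before turning to the extreme situation studied below, we note that the defining equation
(\ref{eq1}) is easy to experiment with numerically. The following Python sketch is illustrative
only: the $2$-state tensor is invented for the example, and the plain fixed-point iteration shown
is not necessarily one of the iterative methods cited above.
\begin{verbatim}
# Hedged sketch: iterate x <- P x^(m) for a made-up second order (m = 2) chain.
import numpy as np

def step(P, x):
    """One application of the transition tensor: y_i = sum_{j,k} P[i,j,k] x_j x_k."""
    return np.einsum("ijk,j,k->i", P, x, x)

# Column stochastic in the first index: sum_i P[i, j, k] = 1 for every (j, k).
P = np.array([[[0.7, 0.4], [0.5, 0.2]],
              [[0.3, 0.6], [0.5, 0.8]]])
assert np.allclose(P.sum(axis=0), 1.0)

x = np.array([0.5, 0.5])                  # start from the uniform distribution
for _ in range(200):
    x = step(P, x)                        # converges to (0.4, 0.6) for this tensor
print(x, np.allclose(step(P, x), x))      # an (approximate) stationary vector
\end{verbatim}
Convergence of such an iteration is not guaranteed in general, which is precisely why the
sufficient conditions in the references above are of interest.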
In this paper, we consider an extreme situation of the problem, namely,
every probability vector in the simplex $\Omega_n$
is a stationary vector of a higher-order Markov chain.
In the standard (first-order) Markov chain,
this can happen if and only if $P$ is the identity matrix.
We show that such a phenomenon may occur for a large family of higher-ordered
Markov chains. In particular, we characterize those
second order Markov chains with this property.
The result is used to study higher-order Markov chains with a similar property.\footnote{
As pointed out by the referee, this problem is
related to the Inverse Perron-Frobenius Problem: Given a distribution, what are the Markov
chains having it as a stationary distribution?
For example, one may see \cite{GB}.}
In our discussion, we always let
$${\cal E} = \{e_1, \dots, e_n\}$$
denote the standard basis for ${\bf R}^n$.
Then $\Omega_n$ is the convex hull of the set ${\cal E}$, denoted by ${\rm conv}\, {\cal E}$.
For any $k \in \{1, \dots, n\}$, a subset of $\Omega_n$ obtained by
taking the convex hull of $k$ vectors from the set ${\cal E}$
is a face of the simplex $\Omega_n$ of affine dimension $k-1$.
We also consider higher-order Markov chains with a $(k-1)$-dimension
face of $\Omega_n$ as the set of stationary vectors.
Other geometrical features and problems concerning the set of
stationary vectors of higher-order Markov chains will also be mentioned.
\baselineskip 17.5pt
\section{Second Order Markov Chains}
In the following, we characterize those second order Markov chains
so that every vector in $\Omega_n$ is a stationary vector.
Note that for a second order Markov chain
the conditions for the stationary vector ${\bf x} = (x_1, \dots, x_n)^t$
in (\ref{eq1}) can be rewritten as
\begin{equation} \label{eq2}
{\bf x} = (x_1 P_1 + \cdots + x_n P_n) {\bf x},
\end{equation}
where for $i = 1, \dots, n$,
\begin{equation} \label{eq3}
P_i = (p_{ris})_{1 \le r, s \le n}
\end{equation}
is a column stochastic matrix, i.e., a nonnegative matrix
so that the sum of entries of each column is 1.
We have the following theorem.
\begin{theorem} \label{main} Suppose $P = (p_{i,i_1,i_2})$ is the transition tensor
of a second order Markov chain. Then every vector in the set $\Omega_n$
is a stationary vector if and only if
there are nonnegative vectors ${\bf v}_1, \dots, {\bf v}_n \in {\bf R}^n$
with entries in $[0,1]$, where ${\bf v}_i = (v_{i1}, \dots, v_{in})^t$ satisfies
$v_{ii} = 0$ and $v_{ij} = v_{ji}$ for all $i \ne j$, such that for $i = 1, \dots, n$,
$$P_i = I_n - {\rm diag}\,(v_{i1}, \dots, v_{in}) + e_i {\bf v}_i^t =
{\footnotesize\left(\begin{array}{ccccccc}
1 - v_{i1} & & & & & & \\
{}& \ddots & & & & & \\
{}& & 1 - v_{i,i-1} & & & & \\
v_{i1} & \cdots & v_{i,i-1} & 1 & v_{i,i + 1} & \cdots & v_{in} \\
{}&{}&{}&{}&1 - v_{i,i + 1}& {} & {} \\
{}&{}&{}&{}&{}& \ddots &{}\\
{}&{}&{}&{}&{}&{}&1 - v_{in}\\
\end{array}\right)}.$$
\end{theorem}
To prove Theorem \ref{main}, we need the following detailed analysis for the
second order Markov chain when $n = 2$.
\begin{proposition} \label{n=2case}
Let
${a_1},{a_2},{b_1},{b_2} \in \left[ {0,1} \right]$.
Consider the following equation with unknown $x\in [0,1]$:
$$ \left[ x\left( {\begin{array}{*{20}{c}}{{a_1}}&{{b_1}}\\
{1 - {a_1}}&{1 - {b_1}}\end{array}} \right) +
\left( {1 - x}\right) \left( {\begin{array}{*{20}{c}}{{a_2}}&{{b_2}}\\
{1 - {a_2}}&{1 - {b_2}}\end{array}} \right)\right]
\left( {\begin{array}{*{20}{c}}x\\{1 - x}\end{array}} \right)
= \left( {\begin{array}{*{20}{c}}x\\1-x
\end{array}} \right).
$$
Then one of the following holds for the above equation.
\begin{itemize}
\item[{\rm (1)}]
If ${a_1} = 1,{a_2} + {b_1} = 1,{b_2} = 0$, then every $x \in [0,1]$ is a solution.
\item[{\rm (2)}] If ${a_2} + {b_1} < 1 = a_1$, then there are two solutions in $[0,1]$,
namely, $x=1$ and $x = \frac{{{b_2}}}{{{b_2} + 1 - {a_2} - {b_1}}}$.
\item[{\rm (3)}] If $a_2+b_1 - a_1 > a_2+b_1 -1 \ge 0 = b_2$,
then there are two solutions in $[0,1]$,
namely, $x = 0$ and $x = \frac{a_2+b_1-1}{a_2+b_1-a_1}$.
\item[{\rm (4)}] Otherwise, there is a unique solution in $[0,1]$ determined as follows.
If ${a_1} - {a_2} - {b_1} + {b_2} = 0$,
then $x = \frac{{{b_2}}}{{2{b_2} + 1 - {a_2} - {b_1}}}$.
If ${a_1} - {a_2} - {b_1} + {b_2} \ne 0$,
then
\[x= \frac{{2{b_2} + 1 - {a_2} - {b_1} - \sqrt \Delta}}
{{2\left( {{a_1} - {a_2} - {b_1} + {b_2}} \right)}}\]
with $\Delta = {\left( {2{b_2} + 1 - {a_2} - {b_1}} \right)^2} -
4{b_2}\left( {{a_1} - {a_2} - {b_1} + {b_2}} \right) = {\left( {1 - {a_2} - {b_1}} \right)^2} + 4{b_2}
\left( {1 - {a_1}} \right) \ge 0$.
\end{itemize}
\end{proposition}
\it Proof. \rm Let
$$f(x) = \left( {{a_1} - {a_2} - {b_1} + {b_2}} \right){x^2}
+ \left( {{a_2} + {b_1} - 2{b_2}} \right)x + {b_2}$$
be the first entry of the vector
$$ \left[ x\left( {\begin{array}{*{20}{c}}{{a_1}}&{{b_1}}\\
{1 - {a_1}}&{1 - {b_1}}\end{array}} \right) +
\left( {1 - x}\right) \left( {\begin{array}{*{20}{c}}{{a_2}}&{{b_2}}\\
{1 - {a_2}}&{1 - {b_2}}\end{array}} \right)\right]
\left( {\begin{array}{*{20}{c}}x\\{1 - x}\end{array}} \right).
$$
We need only solve $f\left( x \right) = x$ with $x \in [0,1]$.
Then the equation corresponding to the second entry will also be satisfied. Set
\[g(x) = f(x)-x = \left( {{a_1} - {a_2} - {b_1} + {b_2}} \right){x^2}
+ \left( {{a_2} + {b_1} - 2{b_2} - 1} \right)x + {b_2} = 0.\]
Then $g\left( 0 \right) = {b_2} \ge 0$ and $g\left( 1 \right) = {a_1} - 1 \le 0$.
By the Intermediate Value Theorem, there is at least one ${x_0} \in \left[ {0,1} \right]$
such that $g\left( {{x_0}} \right) = 0$.
Let
$$\Delta = {\left( {2{b_2} + 1 - {a_2} - {b_1}} \right)^2}
- 4{b_2}\left( {{a_1} - {a_2} - {b_1} + {b_2}} \right)
= {\left( {1 - {a_2} - {b_1}} \right)^2} + 4{b_2}\left( {1 - {a_1}} \right) \ge 0.$$
Suppose ${a_1} - {a_2} - {b_1} + {b_2} = 0$. The quadratic equation reduces to
$\left( {{a_2} + {b_1} - 2{b_2} - 1} \right)x + {b_2} = 0$.
If ${{a_2} + {b_1} - 2{b_2} - 1 = 0}$, then one can readily check that condition (1) holds.
If ${{a_2} + {b_1} - 2{b_2} - 1 \ne 0}$, then the first case of condition (4) holds.
Suppose ${a_1} - {a_2} - {b_1} + {b_2} \ne 0$.
If $g(0) > 0$ and $g(1) < 0$, then the quadratic function $g(x)$ can
only have one solution in $[0,1]$.
If ${{a_1} - {a_2} - {b_1} + {b_2}} > 0$, then $g(x) \rightarrow \infty$ as
$x \rightarrow \infty$. Since $g(1) < 0$, the larger root of $g(x) = 0$, which equals
$\frac{{2{b_2} + 1 - {a_2} - {b_1} + \sqrt \Delta}}
{{2\left( {{a_1} - {a_2} - {b_1} + {b_2}} \right)}}$,
is larger than 1.
Hence, the second case of condition (4) holds.
If ${{a_1} - {a_2} - {b_1} + {b_2}} < 0$, then $g(x) \rightarrow -\infty$ as
$x \rightarrow -\infty$. Since $g(0) > 0$, the smaller root of $g(x) = 0$, which equals
$\frac{{2{b_2} + 1 - {a_2} - {b_1} + \sqrt \Delta}}
{{2\left( {{a_1} - {a_2} - {b_1} + {b_2}} \right)}}$,
is smaller than 0.
Hence, the second case of condition (4) holds.
Suppose $0 = g(1) = a_1 - 1$. Then $g(x)$ will have another solution in $[0,1]$
if and only if $a_1 - a_2 - b_1+b_2 = 1-a_2 - b_1 + b_2 \ge 0$. This happens
if and only if condition (2) holds.
Suppose $0 = g(0) = b_2$ and $0\ne g(1)$.
Then $g(x)$ will have another solution in $[0,1]$
if and only if $a_1-a_2-b_1+b_2 = a_1 - a_2 -b_1 < 0$ and the
maximum of $g(x)$ is attained at a positive number $x$. This happens if and only
if condition (3) holds.
$\Box$
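As a sanity check on condition (4), the closed-form root can be verified numerically; the
sketch below is ours (random parameters, not data from the paper) and simply confirms on
sampled instances that the displayed root solves $g(x) = 0$ and lies in $[0,1]$.
\begin{verbatim}
# Hedged numerical check of the root formula in condition (4).
import math, random

random.seed(0)
checked = 0
for _ in range(1000):
    a1, a2, b1, b2 = (random.random() for _ in range(4))
    denom = a1 - a2 - b1 + b2
    if abs(denom) < 1e-3:                 # skip the (near-)linear case of condition (4)
        continue
    delta = (2*b2 + 1 - a2 - b1)**2 - 4*b2*denom
    x = (2*b2 + 1 - a2 - b1 - math.sqrt(delta)) / (2*denom)
    g = denom*x**2 + (a2 + b1 - 2*b2 - 1)*x + b2
    assert 0 <= x <= 1 and abs(g) < 1e-6
    checked += 1
print("root formula verified on", checked, "random instances")
\end{verbatim}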
\noindent
{\bf Proof of Theorem \ref{main}}.
The sufficiency can be readily checked.
We focus on the necessity.
Note that Proposition \ref{n=2case} covers the case when $n = 2$.
We will use an inductive argument. It is illustrative to see the case when $n = 3$.
Consider the system
\[
\left[{x_1}\left( {\begin{array}{*{20}{c}}{{a_{11}}}&{{a_{12}}}&{{a_{13}}}\\
{{a_{21}}}&{{a_{22}}}&{{a_{23}}}\\{{a_{31}}}&{{a_{32}}}&{{a_{33}}} \end{array}} \right) +
{x_2}\left( {\begin{array}{*{20}{c}}{{b_{11}}}&{{b_{12}}}&{{b_{13}}}\\
{{b_{21}}}&{{b_{22}}}&{{b_{23}}}\\{{b_{31}}}&{{b_{32}}}&{{b_{33}}}
\end{array}} \right)
+ {x_3}\left( {\begin{array}{*{20}{c}}{{c_{11}}}&{{c_{12}}}&{{c_{13}}}\\
{{c_{21}}}&{{c_{22}}}&{{c_{23}}}\\{{c_{31}}}&{{c_{32}}}&{{c_{33}}}\end{array}} \right)\right]
{\bf x} = {\bf x}.\]
If we set the third entry of the stationary vector ${\bf x}$ to be 0,
then we can have infinitely many solutions of the form
${\bf x} = \left( {\begin{array}{*{20}{c}}x\\{1 - x}\\0\end{array}} \right)$
with $x \in \left[ {0,1} \right]$. By the 2-by-2 case, this happens
if and only if the sub-matrices
$\left( {\begin{array}{*{20}{c}}{{a_{11}}}&{{a_{12}}}\\{{a_{21}}}&{{a_{22}}}\end{array}} \right)$
and $\left( {\begin{array}{*{20}{c}}{{b_{11}}}&{{b_{12}}}\\{{b_{21}}}&{{b_{22}}}\end{array}} \right)$
are of the form
$\left( {\begin{array}{*{20}{c}}1&{{a_{12}}}\\0&{1 - {a_{12}}}\end{array}} \right),
\left( {\begin{array}{*{20}{c}}{1 - {a_{12}}}&0\\{{a_{12}}}&1\end{array}} \right)$.
Similarly, setting the second entry of ${\bf x}$ to be 0, we see that the submatrices
$\left( {\begin{array}{*{20}{c}}{{a_{11}}}&{{a_{13}}}\\{{a_{31}}}&{{a_{33}}}\end{array}} \right)$
and $\left( {\begin{array}{*{20}{c}}{{c_{11}}}&{{c_{13}}}\\{{c_{31}}}&{{c_{33}}}\end{array}} \right)$
are of the form $\left( {\begin{array}{*{20}{c}}1&{{a_{13}}}\\0&{1 - {a_{13}}}\end{array}} \right),
\left( {\begin{array}{*{20}{c}}{1 - {a_{13}}}&0\\{{a_{13}}}&1\end{array}} \right)$.
Finally, setting the first entry of ${\bf x}$ to be 0, we see that the sub-matrices
$\left( {\begin{array}{*{20}{c}}{{a_{22}}}&{{a_{23}}}\\{{a_{32}}}&{{a_{33}}}\end{array}} \right)$
and $\left( {\begin{array}{*{20}{c}}{{c_{22}}}&{{c_{23}}}\\{{c_{32}}}&{{c_{33}}}\end{array}} \right)$
are of the form $\left( {\begin{array}{*{20}{c}}1&{{a_{23}}}\\0&{1 - {a_{23}}}\end{array}} \right),
\left( {\begin{array}{*{20}{c}}{1 - {a_{23}}}&0\\{{a_{23}}}&1\end{array}} \right)$.
Thus, the three matrices in the equation are of the form
$$\left( {\begin{array}{*{20}{c}}
1&{{a_{12}}}&{{a_{13}}}\\ 0&{1 - {a_{12}}}&0\\ 0&0&{1 - {a_{13}}} \end{array}} \right), \
\left( {\begin{array}{*{20}{c}} {1 - {a_{12}}}&0&0\\ {{a_{12}}}&1&{{a_{23}}}\\
0&0&{1 - {a_{23}}} \end{array}} \right),
\ \left( \begin{array}{*{20}{c}} {1 - {a_{13}}}&0&0\\ 0&{1 - {a_{23}}}&0\\
a_{13}&a_{23}& 1 \end{array} \right).$$
More generally, suppose the result holds for the $(n-1)$-dimension case.
Consider the $n$-dimension case, and the equation
$$(x_1 P_1 + \cdots + x_nP_n){\bf x} = {\bf x} \qquad \hbox{ with } {\bf x} \in \Omega_n.$$
Let $j \in \{1, \dots, n\}$. Setting the $j$-th entry of
${\bf x} = (x_1, \dots, x_n)^t$ to be zero, we see that for $i \ne j$, the $(n-1)\times(n-1)$ sub-matrix
of $P_i$ obtained by deleting its $j$th row
and $j$th column has the form
$$I_{n-1} - {\rm diag}\,(a_{i,1}, \dots, a_{i,j-1}, a_{i,j+1}, \dots, a_{i,n})
+ \hat e_i (a_{i,1}, \dots, a_{i,j-1}, a_{i,j+1}, \dots, a_{i,n}),$$
where $\hat e_i$ is obtained from $e_i$ by removing the $j$th entry for
$i = 1, \dots, n$.
Combining the information for different $j = 1, \dots, n$, and
$i = 1, \dots, j-1, j+1, \dots,n$, we see that the matrices ${P_1}, \cdots ,{P_n}$
have the asserted form.
$\Box$
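For illustration, the sufficiency direction can be checked numerically. The sketch below is
ours (the dimension and the array $(v_{ij})$ are invented); it builds $P_1, \dots, P_n$ of the
stated form from a symmetric array with zero diagonal, as in the three-dimensional example
above, and verifies that every sampled probability vector is stationary.
\begin{verbatim}
# Hedged sketch: P_i = I - diag(v_i) + e_i v_i^T with v symmetric, zero diagonal.
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.uniform(0.0, 1.0, size=(n, n))
V = (A + A.T) / 2.0                              # symmetric array, entries in [0, 1]
np.fill_diagonal(V, 0.0)                         # v_{ii} = 0

E = np.eye(n)
P = [E - np.diag(V[i]) + np.outer(E[i], V[i]) for i in range(n)]
assert all((Pi >= 0).all() and np.allclose(Pi.sum(axis=0), 1.0) for Pi in P)

for _ in range(5):
    x = rng.dirichlet(np.ones(n))                # random point of the simplex
    M = sum(x[i] * P[i] for i in range(n))
    assert np.allclose(M @ x, x)                 # x is stationary
print("every sampled probability vector is stationary")
\end{verbatim}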
Theorem \ref{main} shows that
it is possible for a second order Markov chain to have many stationary vectors.
In previous study \cite{CPZ,FGH,ling,L}, researchers obtained sufficient conditions for
a higher-order Markov chain to have a unique stationary vector.
Here we construct a family of examples of second-order Markov chains
such that one of the following holds.
(a) There are exactly $k$ stationary vectors for a given
$k \in \{1,\dots, n+1\}$.
(b) The set of stationary vectors is a $k$ dimensional face
of $\Omega_n$ for $k = 1, \dots, n-2$.
(c) The set of stationary vectors is a disconnected set
equal to the union of a $k$ dimensional face
of $\Omega_n$ and $\{(\sum_{j=1}^n e_j)/n\}$,
for $k = 1, \dots, n-2$.
\begin{theorem} \label{main2}
Suppose $n > 2$ and consider a second order Markov chain with transition tensor
$P = (p_{i,i_1,i_2})$. Let $P_i = (p_{ris})_{1 \le r, s \le n}$ for $i = 1, \dots, n$.
Let $k \in \{1, \dots, n\}$ and $f_k = (e_1 + \cdots + e_k)/k$.
If every column of $P_i$ equals $f_k$, then
$f_k$ is the only stationary vector of the Markov chain.
\begin{itemize}
\item[{\rm (1)}]
If $k = 2$, replace the first column of $P_1$ by $e_1$ and all the columns of $P_2$ by
$e_2$.
Then the resulting Markov chain has 2 stationary vectors, namely,
$e_1$ and $e_2$.
\item[{\rm (2)}]
If $2 < k \le n$, replace the $i$th column of $P_i$ by $e_i$ and all other columns by
$e_k$ for $i = 1, \dots, k$. Then the resulting
Markov chain has $k$ stationary vectors, namely,
$e_1, \dots, e_k$.
\item[{\rm (3)}]
Suppose $k = n$. If we replace the $i$th column of $P_i$ by $e_i$ for all
$i = 1, \dots, n$, then
the resulting Markov chain has $n+1$ stationary vectors, namely,
$e_1, \dots, e_n$ and $f_n$.
\item[{\rm (4)}] If $k \in \{2, \dots, n-1\}$ and we replace the first $k$
columns of $P_i$ by $e_i$ for $i = 1, \dots, k$,
then the set of stationary vectors for
the Markov chain equals
${\rm conv}\, \{e_1, \dots, e_k\}$.
\item[{\rm (5)}] Suppose $k \in \{2, \dots, n-1\}$ and we reset the matrices
$P_1, \dots, P_n$ so that
the first $k$ columns of $P_i$ equal
$$v_i = \begin{cases} e_i & \mbox{ if } i = 1, \dots, k,\\
(e_{k+1} + \cdots + e_n)/(n-k) & \mbox{ if } i = k+1, \dots, n,\end{cases}$$
and all other columns equal to $f_n$.
Then the set of stationary vectors for
the Markov chain equals
$\{f_n\}\cup {\rm conv}\, \{e_1, \dots, e_k\}$.
\end{itemize}
\end{theorem}
\it Proof. \rm
Suppose $k \in \{1, \dots, n\}$ and
every column of $P_i$ equals $f_k$.
Then ${\bf x}=(x_1, \dots, x_n)^t \in \Omega_n$ satisfies
$${\bf x} = (x_1P_1 + \cdots + x_nP_n){\bf x} = (x_1+\cdots + x_k) f_k$$
if and only if $x_{k+1} = \cdots = x_n = 0$ and $x_1 = \cdots = x_k = 1/k$.
(1) Suppose $k = 2$, and we replace $P_1$ and $P_2$ as suggested.
Then ${\bf x} \in \Omega_n$ satisfies
$${\bf x} = (x_1 P_1 + \cdots + x_nP_n){\bf x}$$
if and only if
$x_3 = \cdots = x_n = 0$ and
$$x_1 \begin{pmatrix} 1 & 1/2 \cr 0 & 1/2\cr\end{pmatrix}+
x_2 \begin{pmatrix} 0 & 0 \cr 1 & 1\cr\end{pmatrix} =
\begin{pmatrix} x_1 \cr x_2\cr\end{pmatrix}.$$
By Proposition \ref{n=2case}, $x_1 = 1$ or $x_2 = 1$.
So, the Markov chain has two stationary vectors $e_1$ and $e_2$.
(2) Suppose $k > 2$, and
the $i$th column of $P_i$ is replaced by $e_i$
and all other columns by $e_k$, for $i = 1, \dots, k$.
Direct checking shows that $e_1, \dots, e_k$ are stationary vectors
of the Markov chain. Conversely,
suppose ${\bf x}=(x_1, \dots, x_n)^t \in \Omega_n$ satisfies
$${\bf x} = (x_1P_1 + \cdots + x_nP_n){\bf x}.$$
Then $x_{k+1} = \cdots = x_n = 0$,
$$x_k = \sum_{j=1}^{k-1} x_j(1-x_j) + x_k, \quad \hbox{ and } \quad x_j = x_j^2 \quad
\hbox{ for } j = 1, \dots, k-1.$$
Thus, $x_j \in \{0, 1\}$ so that ${\bf x} = e_j$ if $x_j = 1$
for any $j=1,\dots,k-1$.
If $x_1 = \cdots = x_{k-1} = 0$, then $x_k$ is the only nonzero entry and ${\bf x} = e_k$.
(3) Suppose $k = n$ and we replace the $i$th column of $P_i$ by $e_i$ for all
$i = 1, \dots, n$.
Direct computation shows that $e_1, \dots, e_n$ and $f_n$ are stationary vectors.
Conversely, suppose ${\bf x} = (x_1, \dots, x_n)^t \in \Omega_n$ satisfies
$${\bf x} = (x_1P_1 + \cdots + x_n P_n){\bf x}.$$
Then
$$x_i = \frac{1}{n} \left(\sum_{1 \le r,s\le n} x_rx_s - \sum_{j=1}^n x_j^2\right) + x_i^2
= \frac{1}{n} \left(1 - \sum_{j=1}^n x_j^2\right) + x_i^2.$$
Let $\ell = \frac{1}{n} \left(1- \sum_{j=1}^n x_j^2\right)$ and
consider two cases.
{\bf Case 1.} If $\ell = 0$, then $x_i \in \{0,1\}$ for each $i = 1, \dots, n$.
Thus, we have ${\bf x} \in \{e_1, \dots, e_n\}$.
{\bf Case 2.} Suppose $\ell > 0$. Because $x_i^2 - x_i + \ell = 0$, we see that
$x_i = \left(1\pm \sqrt{1-4\ell}\right)/2$.
If at least one of the $x_i$'s equals
$\left(1+ \sqrt{1-4\ell}\right)/2$, then by the fact that $n > 2$,
$$
1 = \sum_{j=1}^n x_j
\ge \frac{ \left(1+\sqrt{1-4\ell}\right)}{2} + (n-1)\frac{\left(1- \sqrt{1-4\ell}\right)}{2}
= 1 + (n-2) \frac{\left(1- \sqrt{1-4\ell}\right)}{2} > 1,
$$
which is a contradiction. Thus,
$x_i = \left(1- \sqrt{1-4\ell}\right)/2$ for each $i = 1,\dots, n$, and hence
${\bf x} = f_n$.
(4) Clearly, every vector
in ${\rm conv}\, \{e_1, \dots, e_k\}$ is a stationary vector of the Markov chain.
Conversely, suppose ${\bf x} = (x_1, \dots, x_n)^t \in \Omega_n$ is a stationary vector.
Then
$${\bf x} = (x_1 P_1 + \cdots + x_n P_n) {\bf x}$$
implies that $x_{k+1} = \cdots = x_n = 0$, and $x_1, \dots, x_k$ can be any nonnegative
numbers summing up to one.
(5) One readily checks that $f_n$ and every vector
in ${\rm conv}\, \{e_1, \dots, e_k\}$ is a stationary vector of the Markov chain.
Conversely, suppose ${\bf x} = (x_1, \dots, x_n)^t \in \Omega_n$ is a stationary vector.
If $\beta = (\sum_{j=k+1}^n x_j)$, then
\begin{eqnarray*}
{\bf x} &=& (x_1 P_1 + \cdots + x_n P_n) {\bf x} \\
&=& (1-\beta)(x_1e_1 + \cdots + x_k e_k) + \beta f_n
+ (1-\beta)\beta(e_{k+1} + \cdots + e_n)/(n-k).
\end{eqnarray*}
Thus, $x_{k+1} = \cdots = x_n$.
If all of them are zero, then $x_1, \dots, x_k$ can be any nonnegative
numbers summing up to one. If $x_{k+1} = \cdots = x_n = r > 0$, then
$x_i = (1-\beta)x_i + \beta/n$ so that $x_i = 1/n$ for $i = 1, \dots, k$.
It follows that $x_{k+1} = \cdots = x_n = r = 1/n$ also.
$\Box$
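A quick numerical illustration of case (3) for $n = 3$ follows; the code is ours and only
re-checks the stated stationary vectors, it does not reprove that there are no others.
\begin{verbatim}
# Hedged check of Theorem (main2), case (3), with n = k = 3.
import numpy as np

n = 3
f = np.ones(n) / n
P = [np.full((n, n), 1.0 / n) for _ in range(n)]     # every column of P_i equals f_n ...
for i in range(n):
    P[i][:, i] = np.eye(n)[i]                        # ... except the i-th column, set to e_i

def apply_chain(x):
    return sum(x[i] * P[i] for i in range(n)) @ x

for v in list(np.eye(n)) + [f]:
    assert np.allclose(apply_chain(v), v)            # e_1, e_2, e_3 and f_3 are stationary
print("found the n + 1 stationary vectors claimed in case (3)")
\end{verbatim}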
Next, we obtain a result illustrating some additional geometrical feature of
the set of stationary vectors of a second order Markov chain.
\begin{proposition} \label{1-face}
Consider the following equation for the stationary vectors of a second order Markov chain:
$$(x_1 P_1 + \cdots + x_n P_n){\bf x} = {\bf x}.$$
Suppose the Markov chain has two stationary vectors of the form
$xe_i + (1-x)e_j$ and $y e_i + (1-y) e_j$ for some $x, y \in (0,1)$ and
$1 \le i < j \le n$. Then every vector of the form $z e_i + (1-z) e_j$
with $z \in [0,1]$ is a stationary vector of the Markov chain.
\end{proposition}
\it Proof. \rm If $n = 2$, the result follows from Proposition \ref{n=2case}.
Suppose $n \ge 3$. The hypothesis of the proposition implies that the
2-by-2 submatrices of $P_i$ and $P_j$ lying in rows and columns indexed by
$i$ and $j$ have the form
$\left( {\begin{array}{*{20}{c}}1&a\\0&{1 - a}\end{array}} \right)$
and $\left( {\begin{array}{*{20}{c}}{1 - a}&0\\a&1\end{array}} \right)$.
It follows that every vector of the form $z e_i + (1-z) e_j$
with $z \in [0,1]$ is a stationary vector of the Markov chain.
$\Box$
Proposition \ref{1-face} asserts that
if the set of stationary vectors of a second order Markov chain
contains two interior points of a 1-dimensional face of $\Omega_n$,
then every vector in the 1-dimensional face is a stationary vector.
We conjecture that if the set of stationary vectors of a second order Markov
chain contains $k$ interior points of a $(k-1)$ dimensional face of the simplex
$\Omega_n$, then every vector in the $(k-1)$ dimensional face is a stationary
vector.
\section{Higher-Order Markov Chains}
In this section, we use the results in Section
2 to construct higher-order Markov chains so that
(I) every vector in $\Omega_n$ is a stationary vector, and
(II) the set of stationary vectors has a prescribed affine dimension.
\noindent
We will identify a transition probability tensor
$P = (p_{i,i_1, \dots, i_m})$ as the $n\times n^m$ hypermatrix
with row index $i = 1, \dots, n$, and column indexes
$i_1 \cdots i_m$ with $i_1, \dots, i_m \in{\langle} n{\rangle}=\{1, \dots, n\}$
arranged in lexicographic order. For example, for $n = 2$ and $m = 3$,
the row indexes are $1,2$, and the column indexes are
$111, 112, 121, 122, 211, 212, 221, 222$.
We will use the tensor (Kronecker) product notation for vectors in $\Omega_n$.
For example,
$${\bf x}^{(3)} = {\bf x} \otimes {\bf x} \otimes {\bf x} = (x_1x_1x_1, x_1x_1x_2, x_1x_2 x_1, x_1x_2x_2,
x_2x_1x_1, x_2x_1x_2, x_2x_2 x_1, x_2x_2x_2)^t .$$
The stationary vector condition can then be represented as the following matrix equation:
$$P {\bf x}^{(m)} = {\bf x}.$$
As pointed out by the referee, the above displayed equation
is precisely the definition of an $L^2$-eigenpair in \cite{L} or equivalently
a $Z$-eigenpair in \cite{Q}.
We first consider Markov chains satisfying condition (I).
We will illustrate the construction for the third order
Markov chains for $n = 2$, and then describe the general construction.
\noindent
{\bf First order Markov chains.} Every vector in $\Omega_2$ is a stationary
vector if and only if $P = I_2$.
\noindent
{\bf Second order Markov chains.}
We can use two copies of the first order chain
$I_2$
to produce $\tilde P = [I_2 | I_2]$ so that
$$\tilde P {\bf x}^{(2)} = [I_2 |I_2] {\bf x}^{(2)} = {\bf x}
\quad \hbox{ with } {\bf x}^{(2)} = (x_1x_1, x_1x_2, x_2x_1, x_2x_2)^t.$$
Observe that the second and third entries of ${\bf x}^{(2)}$ are the same, so one can permute
the second and third columns of $\tilde P = [I_2|I_2]$ to get
$\tilde P_1 = \begin{pmatrix}1 & 1 & 0 & 0 \cr 0 & 0 & 1 & 1 \cr\end{pmatrix}$ so that
$\tilde P_1 {\bf x}^{(2)} = {\bf x}$ for every ${\bf x} \in \Omega_2$.
Evidently, for every $a \in [0,1]$, $\tilde P_a = a\tilde P + (1-a)\tilde P_1$ will satisfy
$\tilde P_a {\bf x}^{(2)} = {\bf x}$. In fact, we have shown that these are all the possible transition
tensors with the desired property.
\noindent
{\bf
Third order Markov chains.} Suppose $\tilde P = [Q|Q]$, where
$Q = \begin{pmatrix}q_{111} & q_{112} & q_{121} & q_{122} \cr
q_{211} & q_{212} & q_{221} & q_{222}\cr\end{pmatrix}$
satisfies $Q {\bf x}^{(2)} = {\bf x}$ for every ${\bf x} \in \Omega_2$.
Then $\tilde P{\bf x}^{(3)} = {\bf x}$ for every ${\bf x} \in \Omega_2$.
Now, observe that the entries of ${\bf x}^{(3)}$ indexed by $112,121,211$ are all equal to
$x_1^2 x_2$. So, we can permute the columns of $\tilde P$ indexed by $112,121,211$ in
$6 (=3!)$ different ways to get matrices $\tilde P_1$ satisfying $\tilde P_1 {\bf x}^{(3)} = {\bf x}$.
Similarly, we can permute the columns of $\tilde P$ indexed by $122,212,221$ in
6 different ways to get matrices $\tilde P_2$ satisfying
$\tilde P_2 {\bf x}^{(3)} = {\bf x}$. As a result,
we get $6^2 = 36$ matrices with the desired property.
Now, we can take convex combinations of these matrices to get a
large family of matrices with the desired property.
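The column-permutation construction can be verified directly; the following sketch is ours
(for $n = 2$ and $m = 3$, with one arbitrarily chosen cyclic permutation) and checks that both
$\tilde P$ and the permuted variant fix every sampled probability vector.
\begin{verbatim}
# Hedged sketch: stack two copies of a second order solution Q, then permute
# columns that multiply the same monomial of x; P x^(3) = x still holds.
import numpy as np
from itertools import product

Q = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.]])                  # satisfies Q (x kron x) = x
Pt = np.hstack([Q, Q])                            # \tilde P = [Q | Q], a 2 x 8 matrix

idx = {w: j for j, w in enumerate(product((1, 2), repeat=3))}   # column <-> word i1 i2 i3
cols = [idx[(1, 1, 2)], idx[(1, 2, 1)], idx[(2, 1, 1)]]         # columns sharing x1^2 x2
P1 = Pt.copy()
P1[:, cols] = Pt[:, [cols[2], cols[0], cols[1]]]                # cyclically permute them

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.dirichlet([1.0, 1.0])
    x3 = np.kron(np.kron(x, x), x)                              # x^(3)
    assert np.allclose(Pt @ x3, x) and np.allclose(P1 @ x3, x)
print("both transition matrices fix every sampled probability vector")
\end{verbatim}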
One easily extends the above idea to obtain the following.
\begin{theorem}
Suppose $m,n \ge 2$, and $P$ is a transition probability
tensor represented as an $n \times n^{m-1}$ matrix such that
$P {\bf x}^{(m-1)} = {\bf x}$ for all ${\bf x} \in \Omega_n$.
Let $\tilde P = [P | \cdots | P] = {\bf 1} \otimes P$ with ${\bf 1} = (1, \dots, 1)\in
{\bf R}^{1\times n}$.
Then
\begin{equation}\label{eqa}
\tilde P {\bf x}^{(m)} = ({\bf 1} \otimes P)({\bf x}\otimes {\bf x}^{(m-1)})
= ({\bf 1} {\bf x})\otimes (P {\bf x}^{(m-1)}) = {\bf x} \qquad \hbox{ for all } {\bf x} \in \Omega_n.
\end{equation}
Moreover, one can permute the columns of $\tilde P$
corresponding to the entries in the vector ${\bf x}^{(m)}$ with the same values:
$x_1^{m_1} \cdots x_n^{m_n}$ for any nonnegative sequence $(m_1, \dots, m_n)$
with $m_1 + \cdots + m_n = m$ to yield other Markov chains
satisfying {\rm (\ref{eqa})}; in addition, taking convex combination of
these matrices will also result in Markov chains satisfying {\rm (\ref{eqa})}.
\end{theorem}
Note that there are
$${m \choose m_1, m_2, \dots, m_n} = \frac{m!} {m_1! \cdots m_n!}$$
entries in the vector ${\bf x}^{(m)}$ equal to $x_1^{m_1} \cdots x_n^{m_n}$, and hence there are
${m \choose m_1, m_2, \dots, m_n}!$ permutations of the corresponding columns in $\tilde P$.
Thus, we can generate many new matrices $\tilde P_1$ from $\tilde P$ satisfying
$\tilde P_1 {\bf x}^{(m)} = {\bf x}$ for all ${\bf x} \in \Omega_n$.
An interesting question is whether a higher-order Markov chain with transition
tensor $\tilde P$ satisfying $\tilde P{\bf x}^{(m)} = {\bf x}$ for every ${\bf x} \in \Omega_n$ can be
obtained from the above construction.
Next, we turn to higher-order Markov chains satisfying condition (II).
\begin{theorem} Suppose $n > 2$,
$k \in \{1, \dots, n\}$, and $f_k = (e_1 + \cdots + e_k)/k$.
Construct an $m$-th order Markov chain with transition tensor
$P = (p_{i,i_1,\dots i_m})$ identified as the $n \times n^m$ matrix
$P = [P_1 | \cdots | P_n]$, where each $P_i$ is $n \times n^{m-1}$,
and suppose that every column of $P$ equals $f_k$. Then
$f_k$ is the only stationary vector of the Markov chain.
\begin{itemize}
\item[{\rm (1)}]
If $k = 2$, replace the first column of $P_1$ by $e_1$ and all columns of $P_2$ by $e_2$.
The resulting Markov chain has 2 stationary vectors, namely,
$e_1$ and $e_2$.
\item[{\rm (2)}]
If $2 < k \le n$, replace the column of $P_i$ indexed by
$(i_1, \dots, i_m) = (i, \dots, i)$
by $e_i$
and all other columns by
$e_k$, for $i = 1, \dots, k$. Then the resulting
Markov chain has $k$ stationary vectors, namely,
$e_1, \dots, e_k$.
\item[{\rm (3)}]
Suppose $k = n$. If we replace
the column of $P_i$ indexed by
$(i_1, \dots, i_m) = (i, \dots, i)$ by $e_i$ for all
$i = 1, \dots, n$, then
the resulting Markov chain has $n+1$ stationary vectors, namely,
$e_1, \dots, e_n$ and $f_n$.
\item[{\rm (4)}] If $k \in \{2, \dots, n-1\}$ and we replace the
columns of $P_i$ indexed by $(i_1, \dots, i_m)$ with $1 \le i_1, \dots, i_m \le k$
by $e_i$ for $i = 1, \dots, k$,
then the set of stationary vectors for
the Markov chain equals
${\rm conv}\,\{e_1, \dots, e_k\}$.
\item[{\rm (5)}] Suppose $k \in \{2, \dots, n-1\}$ and we reset the matrices
$P_1, \dots, P_n$ so that for each $i = 1, \dots, n$,
the columns of $P_i$
indexed by $(i_1, \dots, i_m)$ with $1 \le i_1, \dots, i_m \le k$,
equal
$$v_i = \begin{cases} e_i & \mbox{ if } i = 1, \dots, k,\\
(e_{k+1} + \cdots + e_n)/(n-k) & \mbox{ if } i = k+1, \dots, n,\end{cases}$$
and all other columns of $P_i$ equal $f_n$.
Then the set of stationary vectors for
the Markov chain equals
$\{f_n\} \cup {\rm conv}\, \{e_1, \dots, e_k\}$.
\end{itemize}
\end{theorem}
\it Proof. \rm The proof is an easy adaptation of that of Theorem \ref{main2}.
$\Box$
To conclude our note, we remark that there are many interesting questions concerning
the stationary vectors of higher-order Markov chains that deserve further study.
\noindent
{\bf Acknowledgment}
The study of the problem in the paper
began when Li was visiting the University of Hong Kong in the Spring of 2012.
He gratefully acknowledges the support and hospitality of the colleagues at the
Department of Mathematics at the University of Hong Kong.
The research of Li was supported by USA NSF and HK RCG.
The authors would also like to thank the referee for some
helpful comments and references.
\end{document}
\begin{document}
\begin{center}
{\Large {\bf Bounds for Local Density of Sphere Packings \\and
the Kepler Conjecture}} \\
\vspace{1.5\baselineskip}
{\em Jeffrey C. Lagarias} \\
\vspace*{1.5\baselineskip}
\end{center}
\vspace{1.5\baselineskip}
{\small
\centerline{\bf Abstract}
This paper formalizes the local density inequality approach
to getting upper bounds for sphere packing densities in ${\mathbb R}^n$.
This approach was first suggested by L. Fejes-T\'oth
in 1954 as a method
to prove the Kepler conjecture that the densest packing of
unit spheres in ${\mathbb R}^3$ has density $\frac{\pi}{\sqrt{18}}$,
which is attained by the ``cannonball packing.''
Local density inequalities give upper bounds for the sphere
packing density formulated as an optimization
problem of a nonlinear function over a compact set in a finite
dimensional Euclidean space. The approaches of
L. Fejes-T\'oth, of W.-Y. Hsiang, and of T. C. Hales,
to the Kepler conjecture are each based on
(different) local density inequalities. Recently
T. C. Hales, together with S. P. Ferguson, has
presented extensive details carrying out a modified
version of the Hales
approach to prove the Kepler conjecture. We describe
the particular local density inequality underlying the
Hales and Ferguson approach to prove Kepler's
conjecture and sketch some features of their proof.\\
{\em AMS Subject Classification (2000):} Primary 52C17,~
Secondary: 11H31 \\
{\em Keywords:} sphere packing, Kepler conjecture \\}
\section{Introduction}
\hspace*{\parindent}
The Kepler conjecture was stated by Kepler in 1611.
It asserts that the
face-centered cubic lattice gives the
tightest possible packing of unit spheres in ${\mathbb R}^3$.
\paragraph{Kepler Conjecture.}
{\em Any packing $\Omega$ of unit spheres in ${\mathbb R}^3$ has upper
packing density
\beql{101}
\bar{\rho} (\Omega ) \le \frac{\pi}{\sqrt{18}}
\simeq 0.740480 ~.
\end{equation}
}
\vspace*{+.1in}
\noindent The definition of upper packing density is given in \S2.
The problem of proving the Kepler conjecture appears as part of
Hilbert's 18th problem, see \cite{Hi}.
T. C. Hales has described an approach for proving
Kepler's conjecture, and has announced a proof,
completed with the aid of S. Ferguson, which is currently presented in
a set of six preprints. The proof is computer-intensive,
and involves checking over $5000$ subproblems.
The Hales approach is similar to earlier approaches in that
it aims to prove a local density inequality that gives a (sharp)
upper bound on the density. It involves several new ideas
which are indicated in \S4 and \S5.
Local density inequalities
obtain upper bounds for the sphere packing constant via an
auxiliary nonlinear optimization problem over a compact set of
``local configurations''. They measure a ``local density'' in
the neighborhood of each sphere center separately.
The general approach to the Kepler conjecture
is first to find a local optimization problem that
actually attains the optimal bound
$\frac{\pi}{\sqrt{18}}$ (assuming that one exists), and then to prove it.
This approach was first suggested in the early 1950's by
L. Fejes-T\'oth\cite[pp. 174-181]{FT}, who presented some
evidence that an optimal local density inequality might exist
in three dimensions.
The objects of this paper are:
(i) To formulate local density inequalities for sphere
packings in ${\mathbb R}^n$, in sufficient generality to
include the known candidates for optimal
local inequalities.
(ii) To review the history of local density inequalities for
three dimensional sphere packing and the Kepler conjecture.
(iii) To give a precise statement of the local density
inequality considered in the Hales-Ferguson approach.
(iv) To outline some of the features of the Hales-Ferguson proof.
In \S2 we present a general framework for local density
inequalities, which is valid in ${\mathbb R}^n$,
given as Theorem~\ref{th21}. This framework is
sufficient to cover the approaches of L. Fejes-T\'oth,
W.-Y. Hsiang, and T. Hales and S. Ferguson to Kepler's conjecture.
A different framework for local density inequalities
appears in Oesterl\'{e}~\cite{Oes99}.
In \S3 we review the history of work on local optimization inequalities
for Kepler's conjecture.
In \S4 we describe the precise local optimization problem
formulated by Ferguson and Hales in \cite{FH},
which putatively attains $\frac{\pi}{\sqrt{18}}$.
In \S5 we make remarks on some details of the proof strategy
taken in the papers of Hales
\cite{I}, \cite{II}, \cite{III}, \cite{IV},
Hales and Ferguson \cite{FH}, and Ferguson \cite{Fe}.
In \S6 we make some concluding remarks.
The current status of the Hales-Ferguson proof is that
it appears to be sound. The proof has reputedly been examined in
fairly careful detail by a team of reviewers, but it is so long
and complicated that it seems difficult for any one person to
check it. This paper is intended as an aid in understanding the
overall structure of the Hales-Ferguson proof approach. For another
account of the Hales and Ferguson work,
see Oesterl\'{e} \cite{Oes99}.
For Hales' own perspective, see Hales \cite{H00}.
Two appendices are included which contain some information relevant to the
Hales-Ferguson proof.
Appendix A describes some of the Hales-Ferguson scoring functions.
Appendix B lists references in the Hales and Ferguson preprints for proofs of
lemmas and theorems stated without proof in \S4 and \S5.
This paper is a slightly revised version of the manuscript ~\cite{La99}.
\paragraph{Notation.}
${\bf B}_n := B_n( {\bf z}ero;1) = \{ {\bf x} \in {\mathbb R}^n : \| {\bf x} \| \le 1 \}$ is the
unit n-sphere. It has volume
$\kappa_n := \pi^{\frac{n}{2} }/\Gamma(\frac{n}{2} + 1)$,
with $\kappa_2 = \pi$ and $\kappa_3 = \frac {4\pi}{3}$. We let
${\bf C}_n ({\bf x}, T) := {\bf x} + [0,T]^n$
denote an n-cube of sidelength $T$, with sides parallel to the
coordinate axes,
and lowest corner at ${\bf x} \in {\mathbb R}^n$.
\section{Local Density Inequalities}
\hspace*{\parindent}
In this section we present a general formulation of local density
inequalities.
We recall the standard definition
of sphere packing densities,
following Rogers \cite{Ro}.
Let $\Omega$ denote a set of
unit sphere centers, so that
$\| {\bf v}- {\bf v}' \| \ge 2 $ for distinct
${\bf v} , {\bf v}' \in \Omega.$
\begin{defi}\label{de1}
{\rm
(i) For a bounded region $S$ in ${\mathbb R}^n$, and a sphere packing $\Omega + {\bf B}_n$
specified by the sphere centers $\Omega$,
the {\em density}
$\rho (S)= \rho(\Omega, S)$ of the packing in the region $S$ is
\beql{103}
\rho (S) := \frac{vol (S \cap (\Omega +{\bf B}_n))}{vol (S)} ~.
\end{equation}
(ii) For $T> 0$ the {\em upper density} $\bar{\rho} (\Omega ,T)$ is the
maximum density of the packing $\Omega$ over all cubes of size $T$, i.e.
\beql{104}
\bar{\rho} (\Omega, T ) := \sup_{{\bf x} \in {\mathbb R}^n} \rho([0, T]^n + {\bf x} ) ~.
\end{equation}
Then the {\em upper packing density} of $\Omega$ is
\beql{105}
\bar{\rho} (\Omega ) := \limsup_{T \to \infty} \bar{\rho} (\Omega , T) ~.
\end{equation}
(iii) The {\em sphere packing density} $\delta({\bf B}_n)$ of the ball ${\bf B}_n$
of unit radius is
\beql{eq105a}
\delta({\bf B}_n) : = \sup_{\Omega} \bar{\rho} (\Omega ).
\end{equation}
}
\end{defi}
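For orientation (an elementary check, not carried out in the original text): the face-centered
cubic packing of unit spheres attains the value in the Kepler conjecture. Its conventional cubic
cell has side $2\sqrt{2}$ (nearest sphere centers are at distance $2$) and contains $4$ sphere
centers, so its density is
$$
\frac{4 \cdot \frac{4\pi}{3}}{(2\sqrt{2})^3} \;=\; \frac{16\pi/3}{16\sqrt{2}}
\;=\; \frac{\pi}{3\sqrt{2}} \;=\; \frac{\pi}{\sqrt{18}} \;\simeq\; 0.740480 ~.
$$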
\begin{defi}\label{de22}
{\rm
A sphere packing $\Omega$ is {\em saturated} if no new sphere centers
can be added to it.}
\end{defi}
To obtain sphere packing bounds it obviously suffices to
study saturated sphere packings,
and in what follows we assume that all packings are saturated
unless otherwise stated.
\begin{defi}\label{de23}
{\rm
An {\em admissible partition rule}
is a rule assigning to each saturated packing $\Omegaega$ in ${\mathbb R}^n$
a collection of closed sets
${\cal P} ( \Omegaega ) : = \{ R_\alpha = R_\alpha ( \Omegaega ) \}$ with the
following properties.
(i) {\em Partition}.
Each set $R_\alpha$ is a finite union of bounded convex polyhedra.
The sets $R_\alpha $ cover ${\mathbb R}^n$ and have pairwise disjoint interiors.
(ii) {\em Locality}.
There is a positive constant $C$ (independent of $\Omega$) such that
each region $R_\alpha$ has
\beql{M21}
diameter (R_\alpha) \le C ~.
\end{equation}
Each $R_\alpha$ is completely determined by the set of sphere centers
${\bf w} \in \Omega$ with
\beql{M22}
distance ( {\bf w} , R_\alpha ) \le C ~.
\end{equation}
There are at most $C$ regions intersecting any cube of side 1.
(iii) {\em Translation-Invariance}.
The partition assigned to the translated packing $\Omega' = \Omega + {\bf x}$
consists of the sets
$\{ R_\alpha ( \Omega ) + {\bf x} \}$.
}
\end{defi}
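As an illustration (not spelled out at this point of the text): for a saturated packing $\Omega$,
the Voronoi decomposition, which assigns to each ${\bf v} \in \Omega$ the cell of points at least
as close to ${\bf v}$ as to any other center, is an admissible partition rule in the above sense.
Saturation forces every point of ${\mathbb R}^n$ to lie within distance $2$ of some center, so each
Voronoi cell is a bounded convex polyhedron contained in the ball of radius $2$ about its center;
the cell of ${\bf v}$ is determined by the centers within a bounded distance of ${\bf v}$, and the
$2$-separation of the centers bounds the number of cells meeting any unit cube. Translation
invariance is immediate.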
\begin{defi}\label{de4}
{\rm An {\em admissible weight function} (or {\em admissible score function})
$\sigma$ for an admissible
partition rule in ${\mathbb R}^n$ assigns to each region
$R_\alpha \in {\cal P} ( \Omega )$ and each ${\bf v} \in \Omega$ a real weight
$\sigma (R_\alpha , {\bf v} )$ which satisfies $|\sigma (R_\alpha , {\bf v} )| < C^*$
for an absolute constant $C^*$, and which has
the following properties.
(i)
{\em Weighted Density Average.}
There are positive constants $A$ and $B$ (independent of $\Omega$)
such that
for each set $R_\alpha$,
\beql{M23}
\sum_{{\bf v} \in \Omega} \sigma ( R_\alpha, {\bf v} ) = (A~ \rho ( R_\alpha ) - B)
vol ( R_\alpha ) ~,
\end{equation}
where
\beql{M24}
\rho (R_\alpha ) vol (R_\alpha ) = vol (R_\alpha \cap (\Omega + {\bf B}_n) )
\end{equation}
measures the volume covered in $R_\alpha$ by the sphere packing $\Omega$
with unit spheres.
(ii) {\em Locality.}
There is an absolute constant $C$ (independent of $\Omegaega$) such that each
value $\sigma (R_\alpha , {\bf v} )$ is completely determined by the set of
sphere centers
${\bf w} \in \Omegaega$ with $\| {\bf w} - {\bf v} \| \le C$.
Furthermore
\beql{M25}
\sigma (R_\alpha, {\bf v} ) = 0 \quad \mbox{if}\quad
dist ({\bf v} , R_\alpha ) > C~.
\end{equation}
(iii) {\em Translation-Invariance.}
The weight function $\sigma'$ assigned to the translated packing
$\Omegaega' = \Omegaega + {\bf x}$ satisfies
\beql{M26}
\sigma' (R_\alpha + {\bf x} , {\bf v} + {\bf x} ) = \sigma (R_\alpha, {\bf v} ) ~.
\end{equation}
}
\end{defi}
Note that this definition specifically allows negative weights.
The ``local density'' is measured by the sum of the weights associated
to a given vertex ${\bf v}$ in a saturated packing.
\begin{defi}\lambdabel{Nde25}
{\rm (i) The {\em vertex $D$-star} (or {\em decomposition star}) ${\cal D} ({\bf v} )$
at a vertex ${\bf v} \in \Omegaega$ consists of all sets
$R_\alpha \in {\cal P} (\Omegaega )$ such that
$\sigma (R_\alpha , {\bf v} ) \neq 0$.
(ii). The {\em total score} assigned to a vertex $D$-star ${\cal D} ({\bf v} )$ at
${\bf v} \in \Omegaega$ is
\beql{M27}
Score( {\cal D} ({\bf v} ) ) :=
\sum_{R_\alpha \in {\cal D} ({\bf v} )} \sigma ( R_\alpha , {\bf v} )~.
\end{equation}
}
\end{defi}
The total score at ${\bf v}$ depends only on
regions entirely contained within distance $C$ of ${\bf v}$.
Any admissible partition and weight function
$({\cal P} , \sigma )$ together yield a local inequality for the density of
sphere packings, as follows.
\begin{theorem}\lambdabel{th21}
Given an admissible partition in ${\mathbb R}^n$
and weight function $({\cal P} , \sigma )$,
set
\beql{M27a}
\theta = \theta_{{\cal P}, \sigma} (A,B) := \sup_{\Omega ~\mbox{saturated}}
\left ( \sup_{{\bf v} \in \Omega}~~ Score( {\cal D} ({\bf v} ) ) \right) ~,
\end{equation}
and suppose that $\theta < \kappa_n A,$ where $\kappa_n$ is
the volume of the unit ball ${\bf B}_n$.
Then the maximum sphere packing density satisfies
\beql{M28}
\delta({\bf B}_n) \le \frac{\kappa_n B}{\kappa_n A- \theta} ~.
\end{equation}
\end{theorem}
\paragraph{Remark.}(1) We let $f(A,B, \theta ) :=
\frac{ \kappa_n B}{\kappa_n A- \theta}$ denote the packing density bound
as a function of the score constants $A$ and $B$.
The sphere packing density bound actually depends only on the
{\em score constant ratio} $\frac {B}{A}$, rather than
on $B$ and $A$ separately, since
$\theta$ is a homogeneous linear function of $A$ and $B$. This ratio
determines the relative weighting of covered and uncovered volume used
in the inequality.
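To make this explicit (a one-line verification added here for the reader's convenience): for $\lambda > 0$ the rescaled weight function $\lambda \sigma$ is admissible with score constants $(\lambda A, \lambda B)$, every score is multiplied by $\lambda$, and hence
$$
\theta_{{\cal P}, \lambda \sigma} (\lambda A, \lambda B) = \lambda \, \theta_{{\cal P}, \sigma} (A,B)
\qquad \mbox{and} \qquad
f(\lambda A, \lambda B, \lambda \theta ) = \frac{\kappa_n \lambda B}{\kappa_n \lambda A - \lambda \theta}
= f(A,B, \theta ) ~.
$$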
(2) A natural approach to sphere packing bounds, used in
many previous upper bounds, is to partition space
into pieces $R({\bf v})$ corresponding to each sphere center ${\bf v}$,
with each piece containing the unit sphere around ${\bf v}$, and
to establish an upper bound
\beql{M28a}
\rho (R({\bf v})) = \frac {\kappa_n}{vol (R({\bf v}))} \le \theta~,~~~~
\mbox{for all}~~~ {\bf v} \in \Omega.
\end{equation}
Then one obtains $\bar{\rho}(\Omegaega) \le \theta.$ An
optimal sphere packing bound of this sort
must necessarily be {\em volume-independent,} in the sense that
if equality is to be attained at all local cells $R({\bf v})$
simultaneously, then they must all have the same volume.
In contrast the inequality
of Theorem~\ref{th21} does take into account the volumes of the
individual pieces in the vertex $D$-star, and this allows
more flexibility in the
local density inequalities that can be constructed, which might
attain an optimal bound.
\paragraph{Proof of Theorem~\ref{th21}.}
We may assume that $\Omega$ is saturated. Given $T > 0$ and any
$\epsilon > 0$,
we choose a point ${\bf x} \in {\mathbb R}^n$ which attains the density bound
$\bar{\rho} (\Omega, T )$ on the cube ${\bf C}_n ({\bf x}, T)$ to within $\epsilonsilon$.
We evaluate the scores of all vertex $D$-stars
of vertices ${\bf v} \in \Omegaega \cap {\bf C}_n ({\bf x}, T)$ in two ways.
First, by definition of $\theta$,
\beql{M209}
\sum_{{\bf v} \in \Omegaega \cap {\bf C}_n({\bf x}, T )} Score ({\cal D} ({\bf v}) ) \le \theta \# | \Omegaega \cap {\bf C}_n ({\bf x} , T ) | ~.
\end{equation}
However we also have
\begin{eqnarray}\lambdabel{M210}
\sum_{{\bf v} \in \Omegaega \cap {\bf C}_n({\bf x}, T)} Score ({\cal D} ({\bf v} ) ) & = &
\sum_{{\bf v} \in \Omegaega \cap {\bf C}_n ({\bf x} , T)}
\left( \sum_\alpha \sigma (R_\alpha, {\bf v} )\right) \nonumber \\
& = & \sum_{\alpha \atop R_\alpha \subseteq {\bf C}_n ({\bf x}, T)}
\left( \sum_{{\bf v} \in \Omega \cap {\bf C}_n({\bf x}, T)} \sigma (R_\alpha , {\bf v}) \right) +
O(T^{n - 1}) \nonumber \\
& = & \sum_{R_\alpha \subseteq {\bf C}_n ({\bf x};T )}
(A~~ \rho ( R_\alpha ) - B)~ vol (R_\alpha ) + O(T^{n - 1} ) \nonumber \\
& = & \kappa_n A \# | \Omegaega \cap {\bf C}_n ({\bf x},T) | -
B~~vol ( {\bf C}_n ({\bf x}, T)) + O (T^{n - 1} ) \nonumber \\
& = & \kappa_n A \#| \Omegaega \cap {\bf C}_n ({\bf x}, T ) | -
BT^n + O(T^{n - 1} ) ~.
\end{eqnarray}
Here we use the fact that $\{R_\alpha\}$ partitions ${\mathbb R}^n$, so covers the cube,
and the $O(T^{n - 1})$ error terms above occur because the counting is not
perfect within a constant distance $C$ of the boundary of the cube.
Combining these evaluations yields
$$ (\kappa_n A ~-~\theta)\#| \Omega \cap {\bf C}_n ({\bf x}, T ) | \leq ~B~T^n +
O(T^{n - 1}). $$
If $\theta < \kappa_n A$, then we can rewrite this as
\beql{M211}
\frac {\#| \Omegaega \cap {\bf C}_n ({\bf x}, T ) |}{T^n} \leq
\frac {B}{\kappa_n A - \theta}~ + O(\frac {1}{T}).
\end{equation}
By assumption
\begin{eqnarray*}
\bar{\rho} ( \Omega, T) - \epsilon & \leq &
\frac{vol ( {\bf C}_n ( {\bf x} , T) \cap ( \Omega + {\bf B}_n ))}{T^n} \\
& = & \kappa_n
\frac{\#| \Omega \cap {\bf C}_n ({\bf x}, T)|}{T^n} + O \left( \frac{1}{T} \right) ~.
\end{eqnarray*}
Together with \eqn{M211}, this yields
\beql{M212}
\bar{\rho} ( \Omega, T) - \epsilon \le \frac{\kappa_n B}{\kappa_n A- \theta}
+ O \left( \frac{1}{T} \right) ~,
\end{equation}
with an $O$-symbol constant independent of $\epsilon.$
Letting $\epsilon \to 0$ and then
$T \to \infty$ gives the inequality for $\bar{\rho} (\Omega)$.
Since this holds for all saturated packings the result follows.~~~${\vrule height .9ex width .8ex depth -.1ex }$
Determining the quantity $\theta_{{\cal P},\sigma} (A,B)$ for fixed $A,B$ can be
viewed as a nonlinear optimization problem over a compact set.
The translation-invariance property of $({\cal P} , \sigma )$ allows
the supremum \eqn{M27a} to be taken over the smaller set with
${\bf v} = {\bf 0}$ and admissible $\Omega$ containing ${\bf 0}$.
The locality property shows that the vertex $D$-star at ${\bf 0}$ is
completely determined by ${\bf w} \in \Omega$ with $\| {\bf w} \| \le C$.
The set of such configurations of nearby sphere centers forms a
compact set in the Euclidean topology.
Actually the partition and weight functions may be discontinuous functions
of the locations of sphere centers, so the optimization problem above
is not genuinely over a compact set.
One must compactify the space of
allowable vertex $D$-stars
by allowing some sets of sphere centers to be assigned
more than one possible
vertex $D$-star. In practical cases there is a finite upper bound
on the number of possibilities.
\begin{defi}\lambdabel{de14}
{\em A local density inequality in ${\mathbb R}^n$ is} optimal
{\em if}
\beql{M213}
f(A,B, \theta_{{\cal P}, \sigma} ) =
\delta({\bf B}_n) ~.
\end{equation}
\end{defi}
Optimal local density inequalities exist in one and two dimensions.
In discussing the three-dimensional
case, we shall presume that $\delta({\bf B}_3) = \frac {\pi}{\sqrt{18}}$,
so that an optimal density inequality in ${\mathbb R}^3$
will refer to one achieving this value.
The evidence indicates that there are many different possible optimal
local density inequalities in three dimensions, including
that of the Hales and Ferguson proof.
There are currently four candidates for local density
inequalities that may be optimal in three dimensions. The first is that of
L. Fejes-T\'oth, described in \S3,
which uses averages over Voronoi domains,
and for which the score constant ratio $\frac {B}{A} = \frac {\pi}{\sqrt{18}}.$
The second is that of Hsiang \cite{Hs}, which is a modification
of the Fejes-T\'oth averaging, and uses the same score constant ratio.
The third is due to Hales \cite{I}, and is based on the Delaunay
triangulation, using a modified
scoring rule described in \S3. The fourth is that given in
Ferguson and Hales \cite{FH}, and
uses a combination of Voronoi-type domains and Delaunay simplices,
with a complicated scoring rule, described in \S4.
In the latter two cases the score constant ratio is
$$\frac{B}{A} = \delta_{oct}= \frac{-3 \pi +
12 \arccos ( \frac{1}{\sqrt{3}} )}{\sqrt{8}} \approx 0.720903.$$
In all four cases the compact
set of local configurations to be searched
has very high dimension.
Each sphere center has three degrees of freedom, and the number of
sphere centers
involved in these methods to determine a vertex $D$-star seems
to be around 50, so the
search space consists of components of dimension up to roughly 150.
It is unknown whether {\em optimal} local density inequalities exist for
the sphere packing problem in ${\mathbb R}^n$ in any dimension $n \ge 4$.
In dimensions 4, 8 and 24 it seems plausible that the minimal volume
Voronoi cell in any sphere packing actually
occurs in the densest lattice packing. If so, the Voronoi cell
decomposition would yield
an optimal local inequality in these dimensions, and the densest packing
would be a lattice packing in these dimensions.
Another question asks: in which dimensions is the maximal
sphere packing density
attained by a sphere packing whose centers
form a finite number of cosets of an $n$-dimensional lattice?
Perhaps in such dimensions
an optimal local density inequality exists. The state of the
art in sphere
packings in dimensions four and above is given in
Conway and Sloane \cite{CS}.
\section{History}
\hspace*{\parindent}
We survey results on local density inequalities in
three dimensions.
The work on local density bounds was originally based on two
partitions of ${\mathbb R}^3$ associated to a set $\Omegaega$ of sphere
centers: the Voronoi tesselation and the Delaunay triangulation.
Since they will play an important role, we recall their definitions.
\begin{defi}\lambdabel{de31}
{\em The} Voronoi domain ({\em or} Voronoi cell) {\em of ${\bf v} \in \Omegaega$ is}
\beql{301}
V_{vor}( {\bf v} ) = V_{vor}( {\bf v} , \Omegaega) := \{ {\bf x} \in {\mathbb R}^3 : \| {\bf x}-{\bf v} \| \le \| {\bf x}-{\bf w} \| \quad\mbox{for all}\quad
{\bf w} \in \Omegaega \} ~.
\end{equation}
{\em The} Voronoi tesselation {\em for $\Omegaega$ is the set of Voronoi domains}
$\{V_{vor} ({\bf v}) : {\bf v} \in \Omegaega \}$.
\end{defi}
The Voronoi tesselation is a partition of space, up to boundaries of
measure zero.
If $\Omegaega$ is a saturated sphere packing, then all Voronoi domains
are compact sets with diameter bounded by $4 \sqrt{2}$.
\begin{defi}\lambdabel{de32}
{\em The} Delaunay triangulation {\em associated to a set $\Omegaega$ is
dual to the Voronoi tesselation.
It contains an edge between every pair of vertices that have Voronoi domains
that share a common face. Suppose now that the points of
$\Omegaega$ are in general
position, which means that each corner of a Voronoi domain has
exactly four incident Voronoi domains.
In this case these Voronoi domains have between them six faces that
touch this corner, and these faces in turn determine (the six edges of)
a Delaunay simplex. The resulting Delaunay simplices partition ${\mathbb R}^3$
and make up the Delaunay triangulation. In the
case of non-general position $\Omegaega$ the Delaunay triangulation is
not unique. The
possible Delaunay
triangulations are
determined locally as limiting cases of general position points. (There are
only finitely many triangulations possible in any bounded region of space.)}
\end{defi}
All the simplices in a Delaunay triangulation have vertices
${\bf v}_i \in \Omegaega$ and contain no other point ${\bf v} \in \Omegaega$.
We define, more generally:
\begin{defi}\lambdabel{de21}
{\rm A {\em $D$-simplex} (or
{\em weak Delaunay-simplex})
for $\Omegaega$ is any tetrahedron $T$ with vertices
${\bf v}_1, {\bf v}_2, {\bf v}_3, {\bf v}_4 \in \Omegaega$ such that no other
${\bf v} \in \Omegaega$ is in the closure of $T$.
We denote it $D( {\bf v}_1, {\bf v}_2, {\bf v}_3, {\bf v}_4)$.
}
\end{defi}
In the literature a {\em Delaunay
simplex} associated to a point set $\Omegaega$ is any simplex with vertices
in $\Omegaega$ whose circumscribing sphere
contains no other vertex of $\Omegaega$ in its interior.
All simplices in a Delaunay triangulation of $\Omegaega$ are
Delaunay simplices, so are
necessarily $D$-simplices, but the converse need not hold.
The admissible partitions that have been seriously studied all consist of a
domain $V({\bf v} )$ associated to each vertex ${\bf v} \in \Omega$, which we call a
{\em $V$-cell}, together with a collection of certain $D$-simplices
$D({\bf v}_1, {\bf v}_2, {\bf v}_3,{\bf v}_4)$ which we call the {\em $D$-system} of the
partition.
We use the term {\em $D$-set} to refer to a $D$-simplex included
in the $D$-system.
We note that a $V$-cell may consist of several polyhedral pieces,
and may even be disconnected.
The original approach of L. Fejes-T\'oth to getting local upper
bounds for sphere packing in ${\mathbb R}^3$
used the Voronoi tesselation associated to $\Omegaega$.
If $\Omegaega$ is a saturated packing, then each Voronoi domain
$V_{vor}( {\bf v} )$ is a
bounded polyhedron
consisting of points within distance at most $4 \sqrt{2}$ of ${\bf v}$.
Examples are known of Voronoi domains in a saturated packing
that have 44 faces;
an upper bound for the number of faces of a Voronoi domain of a
saturated packing is 49.
The {\em Voronoi partition} takes the $V$-sets
$V({\bf v} )$ to be the Voronoi domains of ${\bf v}$,
with no $D$-sets,
and the vertex $D$-star ${\cal D}_{vor} ( {\bf v} )$ is just $V_{vor}( {\bf v} )$.
A {\em Voronoi scoring rule} is:
\beql{302a}
Score ( {\cal D}_{vor} ({\bf v} )) := ( \rho (V( {\bf v} )) - B)~~vol(V({\bf v})) ~,
\end{equation}
with score constant $A = 1$ and with $B$ to be chosen optimally.
Such scoring rules are admissible.
However it has long been known that no Voronoi scoring rule gives an optimal
inequality \eqn{M213}.
The dodecahedral conjecture states that the maximum density of a
Voronoi domain in a unit sphere packing is attained for a local configuration of 12 spheres
touching the central sphere at the face centers of a circumscribed regular dodecahedron.
\paragraph{Dodecahedral Conjecture.}
{\em For a Voronoi domain $V_{vor}( {\bf v} )$ of a unit sphere packing,
\beql{301a}
\rho (V_{vor}( {\bf v} )) \le
\frac{\pi}{30 (1- \cos \frac{\pi}{5}) \tan \frac{\pi}{5} } \simeq
0.754697
\end{equation}
and equality is attained for the dodecahedral configuration.
}
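As a numerical check (added here; it is not part of the original statement), the regular dodecahedron circumscribed about the unit sphere has volume $40 (1- \cos \frac{\pi}{5}) \tan \frac{\pi}{5} \approx 5.5503$, so the right-hand side of \eqn{301a} is exactly the density of the unit ball inside this circumscribed dodecahedron:
$$
\frac{4 \pi /3}{40 (1- \cos \frac{\pi}{5}) \tan \frac{\pi}{5}}
= \frac{\pi}{30 (1- \cos \frac{\pi}{5}) \tan \frac{\pi}{5}} \approx 0.754697 ~.
$$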
A proof of the dodecahedral conjecture has been announced by
Hales and McLaughlin \cite{HM},
based on ideas similar to those in Hales' approach to the Kepler conjecture.
In 1953 L. Fejes-T\'oth \cite{FT} proposed that an optimal inequality
might exist based on a weighted averaging over Voronoi domains near a given
sphere center, and in 1964 he made a specific proposal for
such an optimal inequality.
In the notation of this paper he used the Voronoi partition and an
admissible scoring function of the form
\beql{N303a}
\sigma (V_{vor}( {\bf w} ), {\bf v} ) := \omegaega ({\bf w} , {\bf v} )\{ ( A \rho (V_{vor}( {\bf w} ))
- B) vol ~(V_{vor}( {\bf w} ))\},
\end{equation}
in which $A = 1$ and $B = \frac {\pi}{\sqrt{18}}$ and
the {\em weights} $\omegaega ({\bf w}, {\bf v} )$ are given by
\beql{N303b}
\omegaega ( {\bf w}, {\bf v} ) := \left\{
\begin{array}{cll}
\frac{1}{12} & {\rm if} & 2 \leq \| {\bf w} - {\bf v} \| \le 2 + t~, \\ [+.2in]
0 & {\rm if} & \| {\bf w} - {\bf v} \| > 2 + t ~,
\end{array}
\right.
\end{equation}
and
\beql{N303c}
\omegaega ({\bf v}, {\bf v} ) := 1 - \sum_{{\bf w} \ne {\bf v}} \omegaega ({\bf w}, {\bf v}).
\end{equation}
Here $t \ge 0$ is a fixed constant. The resulting inequality
in Theorem~\ref{th21} is optimal if the associated constant
$\theta_{{\cal P},\omegaega}(A,B) = 0$.
The Fejes-T\'oth scoring function corresponds to a weighted averaging over
the spheres touching a central sphere at ${\bf v}$, where the weight assigned the
central sphere depends on how many spheres touch it.
L. Fejes-T\'oth considered choosing $t$ as large as possible
consistent with requiring that
$\omegaega ({\bf v}, {\bf v} ) \ge 0,$
which is equivalent to requiring that it is impossible to pack $13$ spheres
around a given sphere, with all 13 sphere centers within distance $2 + t$
of the center of the given sphere.
In 1964 he suggested \cite[p.299]{FT1} that one could take the value
$t = 0.0534,$
and we consider this to be Fejes-T\'oth's candidate for an
optimal inequality. It has not been demonstrated that
$\omegaega ({\bf v}, {\bf v} ) \ge 0$ holds for this value of $t$, but note that
the argument of Theorem~\ref{th21} is valid
for any value of $t$, even if some $\omegaega ({\bf v}, {\bf v} )$ are negative. The
only issue is whether the resulting sphere packing bound is
optimal. Fejes-T\'oth \cite{FT1} explicitly noted that establishing
an optimal inequality, if it is true,
reduces the problem in principle to one in a finite number of variables,
possibly amenable to solution by computer.
In 1993 Wu-Yi Hsiang \cite{Hs} studied a variant of the Fejes-T\'oth
approach.
He used the Voronoi partition and an admissible scoring function
\footnote{We have converted the locally averaged density in Hsiang
\cite[Section 3]{Hs} to the form given in \S2 by clearing denominators,
and we have cancelled out Hsiang's factor of 13. Note also that each
Voronoi domain contains exactly one sphere, so that
$\rho(V({\bf v})) vol(V({\bf v}))= \frac {4 \pi}{3}.$}
of the form \eqn{N303a}, with the same $A=1$ and $B= \frac{\pi}{\sqrt{18}}$
and with the {\em weights} $\omegaega ({\bf w}, {\bf v} ) \ge 0$ given by
\beql{N304}
\omegaega ( {\bf w}, {\bf v} ) := \left\{
\begin{array}{cll}
\frac{1}{1+ N({\bf v} )} & {\rm if} & \| {\bf w} - {\bf v} \| \le \frac{218}{100} ~, \\ [+.2in]
0 & {\rm if} & \| {\bf w} - {\bf v} \| > \frac{218}{100} ~,
\end{array}
\right.
\end{equation}
where
\beql{N305}
N( {\bf v} ) : = \# \left\{
{\bf w} \in \Omegaega : ~ 0 < \| {\bf w} - {\bf v} \| \le
\frac{218}{100} \right\}
\end{equation}
counts the number of ``near neighbors'' of ${\bf v}$.
Hsiang announced that his local inequality is optimal
(with $\theta_{{\cal P},\omegaega}(A,B)= 0$), and that he had proved it,
which would then constitute a proof of Kepler's conjecture.
However his proof of optimality is regarded as incomplete by the mathematical
community, see G. Fejes-T\'oth's review of Hsiang's paper in
Mathematical Reviews, the critique in \cite{Ha94}, and
Hsiang's rejoinder~\cite{Hs95}.
In 1992 Hales (\cite{H1}, \cite{H2}) studied the
Delaunay triangulation, which partitions ${\mathbb R}^3$
into $D$-sets.
There are a finite number of
(local) choices for
Delaunay triangulations
of a neighborhood of a fixed ${\bf v} \in \Omegaega$.
Hales used the following function in defining his associated
weight function.
\begin{defi}\lambdabel{de25b}
{\rm
The {\em compression} of a finite region $R$ in ${\mathbb R}^3$ with respect
to a sphere packing
$\Omegaega$ is
\beql{201}
\Gamma (R) := (\rho ( R) - \delta_{oct}) vol(R)
\end{equation}
in which
\beql{202}
\delta_{oct} := \frac{-3 \pi + 12 \arccos ( \frac{1}{\sqrt{3}} )}{\sqrt{8}}
\approx 0.720903
\end{equation}
is the packing density of the regular octahedron of sidelength 2
with unit spheres centered at its vertices.
}
\end{defi}
Hales initially considered the admissible weight function
\beql{302}
\sigma (D({\bf v}_1, {\bf v}_2, {\bf v}_3, {\bf v}_4 ) , {\bf v}_i ) :=
\Gamma(D({\bf v}_1, {\bf v}_2, {\bf v}_3, {\bf v}_4 )).
\end{equation}
The vertex $D$-star of ${\bf v}$ consists of all the simplices in the
Delaunay triangulation that have ${\bf v}$ as a vertex; we call this
set of simplices the {\em Delaunay D-star}
${\cal D}_{Del} ({\bf v} )$ at ${\bf v}$. Hales used score
constants $A=4$ and $B= 4\delta_{oct}$, with $A = 4$ used since
each simplex is counted four times.
However he discovered that the {\em pentagonal prism}
attained a score value exceeding what is needed to
prove Kepler's conjecture.
The pentagonal prism is conjectured to be extremal for this score function.
The fact that the (conjectured) extremal configurations for the
Voronoi tesselation and Delaunay triangulation do not coincide suggested
to Hales that a
hybrid scoring rule be considered that combines the best features
of the Voronoi and Delaunay scoring function.
In 1997 Hales again considered a
Delaunay triangulation, but modified
the scoring rule to depend on the shape of the D-simplex
$D( {\bf v}_1, {\bf v}_2, {\bf v}_3, {\bf v}_4 )$. For some simplices he used the
weight function above, while for others he cut the simplex into
four pieces, one for each vertex, call the pieces $V(D, {\bf v}_i),$
and assigned the weights\footnote{More precisely, he used the
``analytic continuation'' of this scoring function that is described
in Appendix A.}
$$\sigma(D({\bf v}_1, {\bf v}_2, {\bf v}_3, {\bf v}_4 ), {\bf v}_i)= 4\Gamma( V(D, {\bf v}_i)), $$
for $1 \leq i \leq 4.$
He also partitioned
a vertex D-star into pieces called
``clusters'' whose score functions could be evaluated separately
and added up to get the total score. Each ``cluster'' is a
finite union of Delaunay simplices filling up that part of the
vertex D-star at ${\bf v}$ lying in a pointed cone with vertex ${\bf v}$.
This vertex cone subdivision
facilitates computer-aided proofs by decomposing the problem
into smaller subproblems.
Hales (\cite{I}, \cite{II}) presented evidence that this
modified scoring function
satisfies an optimal local inequality.
He showed that the two known local extremal configurations
\footnote{These correspond to Voronoi cells being the rhombic
dodecahedron or trapezo-rhombic dodecahedron in Fejes-T\'oth
\cite[p. 295]{FT1}.} gave
local maxima of the score of the Delaunay $D$-star in the
configuration space with
\beql{303}
Score({\cal D}_{Del} ({\bf v} )) = 8 pt~,
\end{equation}
where
\beql{303a}
pt := \frac{11 \pi}{3} - 12 \arccos \left(
\frac{1}{\sqrt{3}} \right) \simeq 0.0553736 ~,
\end{equation}
which is the optimal value.
He also showed that this was a global upper bound over the subset of
configurations described by a vertex map\footnote{See \S5 for a
definition of vertex map ${\cal G} ({\bf v})$.}
${\cal G} ( {\bf v} )$ that is triangulated.
Hales \cite[Conjecture 2.2]{I} conjectured that the
modified score function achieved the optimal inequality
\beql{304}
Score ({\cal D}_{Del} ( {\bf v} )) \le 8 pt ~,
\end{equation}
for all Delaunay D-stars ${\cal D}_{Del} ({\bf v} )$.
However he and his student S. P. Ferguson \cite{FH} discovered
that a pentagonal prism configuration comes very close to
violating the inequality \eqn{304}.
Furthermore there turned out to be many similar difficult configurations
which might possibly violate the inequality.
These and other difficulties indicated that it was not
numerically feasible to prove \eqn{304}
by a computer proof, assuming that \eqn{304} is actually true.
Hales and Ferguson together then
further modified both the partition rule ${\cal P}$ and
the scoring rule $\sigma$, to obtain a rule
with the following properties.
\begin{itemize}
\item[(i)]
It makes the score inequality stronger on the known bad cases related to the
pentagonal prism configuration.
\item[(ii)]
It uses a more complicated notion of ``cluster'', which includes
Voronoi pieces as well as D-sets, and which retains
the ``decoupling'' property that it
is completely determined by
vertices of $\Omegaega$ in the cone above it.
\item[(iii)]
It chooses a scoring function which when combined with
``truncation'' on clusters is still strong enough to rule out
most configurations. The ``truncation'' operation greatly reduces
the number of configurations to be checked, at the cost of weakening
the inequality to be proved.
\end{itemize}
In \S4 we give a precise description of the Hales-Ferguson rules
$({\cal P} , \sigma )$.
\section{Hales-Ferguson Partition and Score Function}
\hspace*{\parindent}
Ferguson and Hales \cite{FH} use the following partition
and scoring rule.
The partition uses two types of $D$-simplices, with a complicated
rule for picking which
ones to include as $D$-sets in the partition.
Modified Voronoi domains $V({\bf v} )$ are used as $V$-sets.
These differ from the usual Voronoi domain (with the $D$-sets removed)
by mutually exchanging some regions called ``tips''.
The scoring rule is also complicated:
the weight function used on a $D$-simplex no longer depends on
just its shape,
but depends on the structure of nearby $D$-sets.
We begin by defining the two types of $D$-simplices.
\begin{defi}\lambdabel{de41}
A QR-tetrahedron {\rm (or} quasi-regular tetrahedron$)$
{\rm is any tetrahedron with all vertices in $\Omegaega$ and all edges of
length $\le \frac{251}{100}$.
}
\end{defi}
\begin{defi}\lambdabel{de42}
{\rm A} QL-tetrahedron {\rm (or} quarter$)$
{\rm is any tetrahedron with all vertices in $\Omegaega$ and five edges of
length
$\le \frac{251}{100}$ and one edge with length
$\frac{251}{100} < l \le 2 \sqrt{2}$.
The long edge is called the {\em spine} (or {\em diagonal}) of the
QL-tetrahedron.
}
\end{defi}
For some purposes\footnote{In proving inequalities one wants to work on
a compact set. In compactifying the space of
configurations, this requires allowing the lower inequality in the
definition of $QL$-tetrahedron to be an equality.}
the case of a spine of length exactly $\frac{251}{100}$ should be considered
as either a QL-tetrahedron or a QR-tetrahedron.
Here we treat it exclusively as a QR-tetrahedron.
Neither kind of tetrahedron is guaranteed to be included in the Delaunay
triangulation of $\Omegaega$,
but we do have:
\begin{lemma}\lambdabel{le41}
All QR-tetrahedra and QL-tetrahedra are D-simplices.
\end{lemma}
The Hales-Ferguson partition rule starts by selecting which
$D$-simplices to include in the $D$-system.
These consist of:
\begin{itemize}
\item[(i)]
All QR-tetrahedra.
\item[(ii)]
Some QL-tetrahedra. The QL-tetrahedra included in the partition
satisfy the {\em common spine condition} which states
that for a given spine, either all QL-tetrahedra having that spine are
included, or none are.
\end{itemize}
This collection of tetrahedra must (by definition of admissible partition)
form a nonoverlapping set,
where we say that two sets $S_1$ and $S_2$ {\em overlap} if
$\bar{S_1} \cap \bar{S_2}$ has positive Lebesgue measure in ${\mathbb R}^3$.
To justify (i) we have:
\begin{lemma}\lambdabel{le42}
No two QR-tetrahedra overlap.
\end{lemma}
$QL$-tetrahedra may overlap $QR$-tetrahedra or other $QL$-tetrahedra,
hence one needs a rule
for deciding which $QL$-tetrahedra to include.
To begin with, $QL$-tetrahedra can overlap $QR$-tetrahedra in
essentially one way.
\begin{lemma}\lambdabel{le43}
If a $QL$-tetrahedron and $QR$-tetrahedron overlap, then the $QR$-tetrahedron
has a common face with an adjacent $QR$-tetrahedron, and the two unshared
vertices of these $QR$-tetrahedra are the endpoints of the spine of the
$QL$-tetrahedron.
The union of these two $QR$-tetrahedra can be
partitioned into three $QL$-tetrahedra having the given spine,
which includes the given $QL$-tetrahedron. Aside from these
$QL$-tetrahedra, no other $QL$-tetrahedron
overlaps either of these two $QR$-tetrahedra.
\end{lemma}
This lemma shows that the $QL$-tetrahedra having a given spine
have the property that either all of them or none of them overlap
the set of $QR$-tetrahedra. We next consider how $QL$-tetrahedra can overlap
other $QL$-tetrahedra. The following configuration plays an
important role.
\begin{defi}\lambdabel{Nde43}
{\rm A {\em $Q$-octahedron} is an octahedron whose 6 vertices
${\bf v}_i \in \Omegaega$ and whose 12 edges each have lengths
$2 \le l \le \frac{251}{100}$.}
\end{defi}
A $Q$-octahedron has three interior diagonals.
If a diagonal has length $2 < l \le \frac{251}{100}$ then it partitions the
$Q$-octahedron into four $QR$-tetrahedra.
If a diagonal has length $\frac{251}{100} < l \le 2 \sqrt{2}$ then it partitions
the $Q$-octahedron into four $QL$-tetrahedra, for which that diagonal is the common spine.
If a diagonal has length $l > 2\sqrt{2}$ it yields no partition.
A $Q$-octahedron thus gives between zero and three
different partitions into
four $QR$-tetrahedra or $QL$-tetrahedra.
We call it a {\em live $Q$-octahedron} if it has at least one such partition.
Lemma~\ref{le43} implies that if it has a partition into $QR$-tetrahedra
then it has no other partition into $QR$-tetrahedra or $QL$-tetrahedra.
\begin{lemma}\lambdabel{Nle43}
A $QL$-tetrahedron having spine a diagonal of a $Q$-octahedron
does not overlap any $QL$-tetrahedron whose spine is not a diagonal
of the same $Q$-octahedron.
\end{lemma}
The rules for choosing which $QL$-tetrahedra to include either take all
$QL$-tetrahedra having the same spine or take none of them.
Thus the selection rule really specifies which spines to include.
\begin{defi}\lambdabel{Nde44}
{\rm Consider an edge $[{\bf v}_1, {\bf v}_2]$ with ${\bf v}_1, {\bf v}_2 \in \Omegaega$ and
$\frac{251}{100} < \| {\bf v}_1 - {\bf v}_2 \| \le 2 \sqrt{2}$.
A vertex ${\bf w} \in \Omegaega$ is called an {\em anchor} of the edge
$[{\bf v}_1 , {\bf v}_2 ]$ if
$$\| {\bf w}- {\bf v}_i \| \le \frac{251}{100} \quad\mbox{for}\quad
i=1,2 ~.
$$
}
\end{defi}
Ferguson and Hales use the number of $QL$-tetrahedra having a
spine $[{\bf v}_1, {\bf v}_2 ]$
and the number of anchors of that spine in deciding which $QL$-tetrahedra
to include in the $D$-system.
Call a $QL$-tetrahedron {\em isolated} if it is the only $QL$-tetrahedron
on its
spine $[{\bf v}_1, {\bf v}_2 ]$.
The inclusion rule for an isolated
$QL$-tetrahedron is:
\begin{itemize}
\item[(QL0)]
An isolated $QL$-tetrahedron is included in the $D$-system if and only
if it
overlaps\footnote{It cannot overlap a $QR$-tetrahedron by
Lemma \ref{le43}.} no other $QL$-tetrahedron or $QR$-tetrahedron.
\end{itemize}
Next consider spines $[{\bf v}_1, {\bf v}_2 ]$ which have two or more
associated $QL$-tetrahedra.
Such spines have at least three anchors, and the inclusion rules are:
\begin{itemize}
\item[(QL1)]
Each non-isolated $QL$-tetrahedron on a spine with 5 or more anchors
is included in the $D$-system.
\item[(QL2)]
Each non-isolated $QL$-tetrahedron on a spine with 4 anchors is
included in the $D$-system,
if the spine is not a diagonal of some $Q$-octahedron.
In the case of a live $Q$-octahedron, we include all $QL$-tetrahedra having one
particular diagonal, and exclude
all $QL$-tetrahedra on other diagonals.
For definiteness, we choose the spine to
be the shortest diagonal. In case of a tie for shortest diagonal, a
suitable tie-breaking rule is used.
\item[(QL3)]
Each non-isolated $QL$-tetrahedron on a spine with 3 anchors is
included in the $D$-system if each $QL$-tetrahedron on the spine
does not overlap any other $QL$-tetrahedron or $QR$-tetrahedron,
or overlaps only isolated $QL$-tetrahedra.
It is excluded from the $D$-system if some tetrahedron on the spine
overlaps either a $QR$-tetrahedron
or a non-isolated $QL$-tetrahedron having four or more anchors.
Finally, if some tetrahedron on the spine overlaps a
nonisolated $QL$-tetrahedron having exactly
three anchors, then the spine of the overlapped set is unique,
and exactly one of these two sets of non-isolated
$QL$-tetrahedra with 3 anchors is to be included in the $D$-system,
according to a tie-breaking rule\footnote{The tie-breaking rule could
be to include the spine with lowest endpoint using a lexicographic
ordering of points in ${\mathbb R}^3$. It appears to me that
Hales would permit an
arbitrary choice of which one to include, see Lemma~\ref{Nle44} (iii).}.
\end{itemize}
The set of $QL$-tetrahedra selected above are pairwise disjoint and
are disjoint from all $QR$-tetrahedra.
This is justified by the following lemma.
\begin{lemma}
\lambdabel{Nle44}
(i) If two $QL$-tetrahedra overlap, then at most one of them has $5$ or
more anchors.
(ii)~If two overlapping $QL$-tetrahedra each have $4$ anchors, then their
spines are (distinct) diagonals of some $Q$-octahedron.
(iii)~If a nonisolated $QL$-tetrahedron with $3$
anchors overlaps another $QL$-tetrahedron having three anchors,
then each of their spines contains exactly two nonisolated $QL$-tetrahedra,
and these four $QL$-tetrahedra overlap no other $QL$-tetrahedron or
$QR$-tetrahedron.
\end{lemma}
We call the set of $QR$-tetrahedra and $QL$-tetrahedra selected as above the
{\em Hales-Ferguson $D$-system}.
(Hales and Ferguson call this a $Q$-system.)
We now define the $V$-cells of the Hales-Ferguson partition.
To begin with, we take the Voronoi domain $V_{vor} ({\bf v} )$ at vertex ${\bf v}$
and remove from it all $D$-simplices in the $D$-system to obtain a reduced
Voronoi region $V_{red} ( {\bf v} )$.
Next we move certain regions of $V_{red} ({\bf v} )$ called ``tips'' to
neighboring reduced Voronoi regions to obtain modified regions
$V_{mod} ({\bf v} )$ and finally we define the $V$-cell $V( {\bf v} )$ at ${\bf v}$ to be
the closure of $V_{mod} ({\bf v} )$.
\begin{defi}\lambdabel{Nde45}
{\rm Let $T$
be any tetrahedron such that the center $ {\bf x} = {\bf x} (T)$ of its
circumscribing sphere lies outside $T$.
A vertex ${\bf v}$ of $T$ is {\em negative} if the plane $H$ determined
by the face $F$ of $T$ opposite ${\bf v}$ separates ${\bf v}$ from ${\bf x}$.
The ``{\em tip}'' $\Delta (T,{\bf v} )$ of $T$ associated to a negative
vertex ${\bf v}$
is that part of the Voronoi region of ${\bf v}$ with respect to the vertices
$\{ {\bf v}_1, {\bf v}_2, {\bf v}_3, {\bf v}_4 \}$ of $T$ that lies in the closed half-space
$H^+$ determined by $H$ that contains ${\bf x}$.
The ``tip'' region $\Delta (T,{\bf v} )$ does not overlap $T$; it
is a tetrahedron having ${\bf x}$ as a vertex
and three other vertices lying on $H$.
}
\end{defi}
\begin{lemma}\lambdabel{Nle45}
(i) A $QR$-tetrahedron or $QL$-tetrahedron $T$ has at most
one negative vertex.
(ii) If a negative vertex is present, then the three vertices of
the associated ``tip'' that lie on $H$ actually lie in
the face of $T$ opposite to the negative vertex.
(iii)~The ``tip'' of any tetrahedron in the Hales-Ferguson $D$-system
either does not overlap any $D$-simplex in the Hales-Ferguson $D$-system,
or else is entirely contained in the union of the $D$-simplices in the
$D$-system.
\end{lemma}
We say that a ``tip'' that does not overlap any $D$-set is {\em uncovered.}
The lemma shows that uncovered ``tips'' lie in the union of the
Voronoi regions $\{V_{red} ({\bf w} ) : {\bf w} \in \Omegaega \}$, so that
rearrangement of uncovered ``tips'' is legal.
There is an a priori possibility that two ``tips'' may overlap\footnote{
I don't know if this possibility can occur.} each other.
\paragraph{Uncovered Tip Rearrangement Rule.}
Each ${\bf y} \in {\mathbb R}^3$ that belongs to an uncovered ``tip'' is reassigned to
the nearest {\em vertex} ${\bf w} \in \Omega$ such that
${\bf y}$ is not in an uncovered ``tip'' of
any pair $(T, {\bf w} )$ where $T$ is in the Hales-Ferguson $D$-system
and ${\bf w}$ is a negative vertex of $T$.
(A tiebreaking rule is used if two nearest vertices ${\bf w}$ are equidistant.)
This rule cuts an uncovered ``tip'' into a finite number of polyhedral pieces
and reassigns the pieces to different reduced Voronoi regions.
This prescribes how $V_{mod} ({\bf v} )$ is constructed, and thus defines the
Hales-Ferguson $V$-cells $V({\bf v} )$.
\begin{figure}
\caption{``Tip'' of a two-dimensional analogue of a $D$-simplex: the triangle $[{\bf v}_1, {\bf v}_2, {\bf v}_3]$ with negative vertex ${\bf v}_2$; the ``tip'' is the shaded region.}
\end{figure}
A two-dimensional analogue of a ``tip'' is pictured\footnote{
See Figure 2.1 of Hales \cite{II} for another example.} in Figure 1.
In this figure the triangle $T = [ {\bf v}_1, {\bf v}_2, {\bf v}_3 ]$ plays the role of a
$D$-simplex, with ${\bf v}_2$ as a negative vertex and the ``tip'' is the
shaded region. The points ${\bf c}_{012}, {\bf c}_{013}, {\bf c}_{123}$ are centroids
of the triangles determined by the corresponding ${\bf v}_i$'s.
The shaded triangle $[ {\bf c}_{012}, {\bf c}_{013}, {\bf c}_{123}]$ is in the
Voronoi cell $V_{vor}({\bf v}_0)$ while the remainder of the ``tip'' is
in the Voronoi cell $V_{vor}({\bf v}_2).$ The uncovered tip rearrangement rule
partitions the part in $V_{vor}({\bf v}_2)$ into three triangles which
are reassigned to the $V$-cells $V({\bf v}_0), V({\bf v}_1)$ and $ V({\bf v}_3),$
e.g. $[ {\bf y}_1, {\bf y}_2, {\bf c}_{012}]$ is reassigned to $ V({\bf v}_1)$.
The reassignment of the ``tip''
ensures that the pointed cone over ${\bf v}_2$
generated by the $D$-simplex $[{\bf v}_1, {\bf v}_2, {\bf v}_3]$ does not
contain any part of the $V$-cell at ${\bf v}_2$. In this example
the $V$-cell at ${\bf v}_0$ does not feel the effect
of the vertex ${\bf v}_2$, due to the rearrangement.
We now turn to the Hales-Ferguson scoring rules.
These use the compression function $\Gamma (S)$ given in \eqn{201}.
The compression function is additive:
If $S= S_1 \cup S_2$ is a partition, then
\beql{203}
\Gamma (S) = \Gamma (S_1 ) + \Gamma (S_2 ) ~.
\end{equation}
For a $D$-simplex $T$,
\beql{204}
vol (T) \rho (T) = \sum_{i=1}^4
\frac{(\mbox{solid angle})_i}{3} ~,
\end{equation}
where a full solid angle is $4 \pi$.
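As a worked example (added here, using the standard fact that the vertex solid angle of a regular tetrahedron equals $3 \arccos \frac{1}{3} - \pi$), let $T$ be the regular tetrahedron of edge length $2$. Then $vol(T) = \frac{2\sqrt{2}}{3}$, each of the four solid angles is $3 \arccos \frac{1}{3} - \pi$, and \eqn{204} gives $vol(T) \rho(T) = \frac{4}{3} ( 3 \arccos \frac{1}{3} - \pi )$, so that
$$
\Gamma (T) = \frac{4}{3} \left( 3 \arccos \frac{1}{3} - \pi \right) - \delta_{oct} \, \frac{2\sqrt{2}}{3}
= \frac{11 \pi}{3} - 12 \arccos \frac{1}{\sqrt{3}} \approx 0.0553736 ~,
$$
using the identity $\arccos \frac{1}{3} = \pi - 2 \arccos \frac{1}{\sqrt{3}}$. This is the quantity $pt$ of \eqn{303a}, in agreement with Lemma~\ref{le51} below.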
The Hales-Ferguson weight function for a V-cell is as follows.
\begin{itemize}
\item[(S1)]
For a $V$-cell $V({\bf v} )$,
\beql{N41}
\sigma_{HF} (V( {\bf v} ), {\bf w} ) = \left\{
\begin{array}{cll}
4 \Gamma ( V( {\bf v} )) & \mbox{if} & {\bf v} = {\bf w} ~, \\ [+.2in]
0 & \mbox{if} & {\bf v} \neq {\bf w} ~.
\end{array}
\right.
\end{equation}
\end{itemize}
We next consider the weight function for $D$-sets.
Let $(T, {\bf v} )$ denote a $D$-simplex together with
a vertex ${\bf v}$ of it.
\begin{defi}\lambdabel{Nde46}
{\rm The {\em Voronoi measure} $vor (T, {\bf v} )$ is defined as follows.
If the center of the circumscribing sphere of $T$ lies inside
$T$, then $T$ is partitioned into four pieces
$$V_{vor}^+ (T, {\bf v}_i ) := \{
{\bf x} \in T: \| {\bf x} - {\bf v}_i \| \le \| {\bf x} - {\bf v}_j \| \quad\mbox{for}\quad
1 \le j \le 4 \}
$$
and then
\beql{N42}
vor (T, {\bf v} ) := \Gamma ( V_{vor}^+ (T, {\bf v} )) ~.
\end{equation}
There is an analytic formula for the right side of \eqn{N42} given in
Appendix A, and this formula is used to define $vor (T, {\bf v} )$ in
cases where the circumcenter falls outside $T$.
}\end{defi}
In cases where the circumcenter lies outside $T$ and ${\bf v}$ is a
negative vertex, one has
\beql{N43}
vor (T, {\bf v} ) = \Gamma ( V_{vor}^+ (T, {\bf v} ) \cup \mbox{``tip''} )
\end{equation}
while for the other three vertices parts of the ``tip''
are counted with a negative
weight, in such a way that
\beql{N44}
\sum_{i=1}^4 vor (T, {\bf v}_i ) = 4 \Gamma ( T)
\end{equation}
holds in all cases. The weight function for a $D$-set is given as
follows:
\begin{itemize}
\item[(S2)]
For a $QR$-tetrahedron $T= D({\bf v}_1, {\bf v}_2, {\bf v}_3, {\bf v}_4 )$ in
the $D$-system,
\beql{N45}
\sigma_{HF} (T, {\bf v} ) =
\left\{
\begin{array}{cl}
\Gamma (T) & \mbox{if the circumradius of $T$ is at most
$\frac{141}{100}$.} \\ [+.2in]
vor (T, {\bf v} ) & \mbox{if the circumradius of $T$ exceeds $\frac{141}{100}$.}
\end{array}
\right.
\end{equation}
\end{itemize}
The $QL$-tetrahedron scoring function is complicated. For a
$QL$-tetrahedron $T$, let $\eta^+ (T)$ be the maximum of the circumradii
of the two
triangular faces of $T$ adjacent
to the spine of $T$, and define the function
\beql{N46}
\mu (T, {\bf v} ) :=
\left\{ \begin{array}{cll}
\Gamma (T) & \mbox{if} & \eta^+ (T) \le \sqrt{2} \\ [+.2in]
vor (T, {\bf v} ) & \mbox{if} & \eta^+ (T) > \sqrt{2}
\end{array}
\right.
\end{equation}
Then the $QL$-tetrahedron scoring function is defined by:
\begin{itemize}
\item[(S3)]
(``Flat quarter'' case)
For a $QL$-tetrahedron $T$ and a vertex ${\bf v}$ not on its spine,
\beql{N47}
\sigma_{HF} (T, {\bf v} ) := \mu (T, {\bf v} ) ~.
\end{equation}
\item[(S4)]
(``Upright quarter case'')
For a $QL$-tetrahedron $T$ with vertex ${\bf v}$ on its spine, let
$\hat{{\bf v}}$ denote the opposite vertex on the spine.
If $T$ is an isolated $QL$-tetrahedron, set
\beql{N48}
\sigma_{HF} (T, {\bf v} ) := \mu (T, {\bf v} ) ~.
\end{equation}
If $T$ is part of a $Q$-octahedron, set
\beql{N49}
\sigma_{HF} (T, {\bf v} ) := \frac{1}{2} (
\mu (T, {\bf v} ) + \mu (T, \hat{{\bf v}} )) ~.
\end{equation}
In all other cases, set
\beql{N410}
\sigma_{HF} (T, {\bf v} ) := \frac{1}{2}
( \mu (T, {\bf v} ) + \mu (T, \hat{{\bf v}} )) +
\frac{1}{2} ( vor_0 (T, {\bf v} ) - vor_0 (T, \hat{{\bf v}} )) ~,
\end{equation}
in which $vor_0 (T, {\bf v} )$ is a ``truncated Voronoi measure''
that only counts volume within radius $\frac{1}{2} ( \frac{251}{100} )$ of
vertex ${\bf v}$, which is defined in Appendix A, and in \cite[pp. 9-11]{FH}.
\end{itemize}
The scoring rule (S4) is the most complicated one. In it
the definition \eqn{N49} plays an important role in
obtaining good bounds
for the pentagonal prism case treated in Ferguson \cite{Fe}, while
the definition \eqn{N410} is important in analyzing general configurations
using truncation in Hales \cite{IV}.
\begin{theorem}\lambdabel{Nth41}
The Hales-Ferguson partition and scoring function
$({\cal P}_{HF} , \sigma_{HF} )$ are admissible, with score constants $A = 4$ and
$B = 4 \delta_{oct}.$
\end{theorem}
\paragraph{Proof.}
It is easy to verify that the
definitions for scoring
$QR$-tetrahedra and $QL$-tetrahedra satisfy the weighted density average
property
\beql{N411}
\sum_{i=1}^4 \sigma_{HF} (T, {\bf v}_i ) = 4 \Gamma (T) ~,
\end{equation}
which correspond to $A=4$ and $B = 4 \delta_{oct},$ using \eqn{N44}.
Most of the remaining admissibility conditions are verified by
Lemmas \ref{le41}--\ref{Nle45} except for locality.
For locality, a conservative estimate indicates that the rules for
removing and adding
``tips'' to determine the $V$-cell $V( {\bf v} )$ are determined by sphere centers
${\bf w} \in \Omegaega$ with $\| {\bf w} - {\bf v} \| \le 12 \sqrt{2}$.
Finally the score function on the $D$-simplices is determined by vertices
within distance $6 \sqrt{2}$ of ${\bf v}$.~~~${\vrule height .9ex width .8ex depth -.1ex }$
Theorem \ref{th21} associates to $({\cal P}_{HF}, \sigma_{HF} )$ a sphere-packing
bound that the Hales program asserts is optimal.
To establish the Kepler bound
\beql{213}
\bar{\rho} (\Omegaega) \le \frac{\pi}{\sqrt{18}} ~,
\end{equation}
via Theorem~\ref{th21}, one must prove that
\beql{214}
\theta :=\theta_{{\cal P}_{HF}, \sigma_{HF}}(4, 4 \delta_{oct}) = 8 pt~,
\end{equation}
where
\beql{215}
pt :=
\frac{11 \pi}{3} - 12 \arccos \left( \frac{1}{\sqrt{3}} \right) \simeq
0.0553736 ~.
\end{equation}
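For the record (this verification is added here and is not part of the original argument), these constants do recover \eqn{213} from Theorem~\ref{th21}: writing $a := \arccos \frac{1}{\sqrt{3}}$, so that $\delta_{oct} = \frac{3(4a - \pi)}{2\sqrt{2}}$ and $8\, pt = \frac{88 \pi}{3} - 96 a$, one computes
$$
\frac{\kappa_3 \cdot 4 \delta_{oct}}{4 \kappa_3 - 8\, pt}
= \frac{\frac{16 \pi}{3} \cdot \frac{3(4a - \pi)}{2\sqrt{2}}}{\frac{16 \pi}{3} - \frac{88 \pi}{3} + 96 a}
= \frac{\frac{8 \pi}{\sqrt{2}} (4a - \pi)}{24 (4a - \pi)}
= \frac{\pi}{3 \sqrt{2}} = \frac{\pi}{\sqrt{18}} ~,
$$
where $\kappa_3 = \frac{4 \pi}{3}$ is the volume of the unit ball ${\bf B}_3$.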
The score function
$Score({\cal D}_{HF}({{\bf v}}))$ is discontinuous as a function of the sphere centers
in $\Omegaega$ near ${\bf v}$, because it
is a sum of contributions of pieces which may appear and disappear
as sphere centers move, and discontinuities occur when $QL$-tetrahedra
convert to $QR$-tetrahedra.
To deal with this, one compactifies the configuration space
by allowing some sphere center configurations
to have more than one legal decomposition into pieces (but at most
finitely many).
The optimization problem can then be split into a finite number of subproblems
on each of which $\sigma_{HF}$ is continuous.
The complexity of the definition of $({\cal P}_{HF}, \sigma_{HF} )$ is designed
to yield a computationally tractable nonlinear
optimization problem.
The introduction of $QL$-tetrahedra and the complicated score function on
them is designed to help get good bounds for the pentagonal prism case and
similar cases.
The rule for moving ``tips'' is intended to facilitate decomposition of the
nonlinear optimization problem into more tractable pieces via
Theorem \ref{le55} below, and the use of ``truncation.''
\section{Kepler Conjecture}
\hspace*{\parindent}
The main result to be established by the Hales program is the following.
\begin{theorem}\lambdabel{th51}
{\rm (Main Theorem)}
For the Hales-Ferguson partition and scoring rule
$({\cal P}_{HF} , \sigma_{HF} )$, and any
${\bf v} \in \Omegaega$ in a saturated sphere-packing, the vertex $D$-star
${\cal D}_{HF} ({\bf v} )$ at ${\bf v}$ satisfies
\beql{501}
Score ({\cal D}_{HF} ( {\bf v} )) \le 8 pt ~,
\end{equation}
where
$pt := \frac{11 \pi}{3} - 12 \arccos \left( \frac{1}{\sqrt{3}} \right) \simeq 0.0553736$.
\end{theorem}
The Kepler conjecture follows by Theorem \ref{th21}.
To prove the inequality \eqn{501}, by translation-invariance we can reduce
to the case ${\bf v} = {\bf z}ero$ and search the set of all possible vertex stars,
which by
\S4 are determined by those points ${\bf w} \in \Omegaega$ with
$\| {\bf w} \| \le 12 \sqrt{2}$. From now on we assume
${\bf z}ero \in \Omegaega$ and ${\bf v} = {\bf z}ero$.
The space of possible sphere centers
$\{ {\bf w} \in \Omegaega: \| {\bf w} \| \le 12 \sqrt{2} \}$ is compact.
It can be decomposed into a large number of pieces, on
each of which the score function is continuous.
To obtain {\em compact} pieces, we must compactify the
configuration space by assigning more than one possible
local D-star ${\cal D} ({\bf v} )$
to certain arrangements
of sphere centers. The compactification assigns at most finitely
many to each arrangement
with an absolute upper bound on the number of possibilities.
The definition of the score $Score({\cal D} ( {\bf v} ))$ involves a sum over
the $V$-sets and $D$-sets.
The usefulness of the compression measure $\Gamma (S)$ is justified
by the following lemma.
\begin{lemma}\lambdabel{le51}
(i) Every $QR$-tetrahedron $T$ satisfies
\beql{502}
\Gamma (T) \le pt ~,
\end{equation}
with equality occurring only when $T$ is a regular tetrahedron of
edge length $2$.
(ii) A $QL$-tetrahedron $T$ has
\beql{503}
\Gamma (T) \le 0 ~,
\end{equation}
with equality occurring for those $T$ having five edges of length $2$ and
a spine of length $2 \sqrt{2}$.
\end{lemma}
Result (ii) illustrates a somewhat counterintuitive behavior of the
local density function:
when five edges of a tetrahedron are held fixed at length 2, and
the sixth edge is allowed to vary over
$\frac {251}{100} \leq l \leq 2 \sqrt {2}$,
the compression $\Gamma(T)$ is largest for a
spine of {\em maximal} length.
The vertices ${\bf w} \in \Omega$ with $\| {\bf w} \| \le \frac{251}{100}$
play a particularly
important role, for they determine all $QR$-tetrahedra of $\Omega$
containing ${\bf 0}$
as a vertex.
\begin{defi}\lambdabel{de51}
{\rm The {\em planar map} (or {\em graph}) ${\cal G} ( {\bf v} )$ associated
to a vertex
${\bf v} \in \Omegaega$
consists of the radial projection onto the unit sphere
$\partial B ({\bf v}; 1) = \{ {\bf x} \in {\mathbb R}^3 : \| {\bf x} - {\bf v} \| =1 \}$
centered at ${\bf v}$
of all vertices ${\bf w} \in \Omegaega$ with
$\| {\bf w} - {\bf v} \| \le \frac{251}{100}$ plus all those edges
$[{\bf w}, {\bf w}']$ between two such vertices which have length
$\| {\bf w} - {\bf w}' \| \le \frac{251}{100}$.
}
\end{defi}
Here we regard the planar map ${\cal G} ( {\bf v} )$ as being given with its embedding
as a set of arcs on the
sphere.
The following lemma asserts that no new vertices are introduced other
than those coming from points ${\bf w} \in \Omega$ with $\| {\bf w} - {\bf v} \| \le \frac{251}{100}$.
\begin{lemma}\lambdabel{le52}
The radial projection of two edges $[{\bf w}_1, {\bf w}_2]$, $[{\bf w}'_1 , {\bf w}'_2 ]$
as above onto the unit sphere $\partial B({\bf v} ;1)$ give two arcs in
${\cal G} ({\bf v} )$ which either are disjoint
or which intersect at an endpoint of both arcs.
\end{lemma}
We study local configurations classified by the planar map ${\cal G} ( {\bf 0} )$.
The planar map ${\cal G} ( {\bf 0} )$, which is determined by the vertices ${\bf w} \in \Omega$ with
$\| {\bf w} \| \le \frac{251}{100}$, does
not in general uniquely determine the vertex $D$-star ${\cal D}_{HF} ( {\bf 0} )$,
but does determine all points ${\bf x}$ in it with
$\| {\bf x} \| \le \frac{251}{200}$.
\begin{defi}
\lambdabel{deP52}
{\rm The part of ${\cal D}_{HF} ( {\bf v} )$ that lies in the pointed cone with
base point ${\bf v}$ determined by a face of the map
${\cal G} ( {\bf v} )$ is called the {\em cluster} over that face. Note that
the face need not be convex, or even simply connected- it could be
topologically an annulus, for example.
}
\end{defi}
The following lemma shows that the vertex D-star
${\cal D}_{HF}({\bf 0})$ can be cut up into clusters in a way compatible
with the scoring function.
\begin{lemma}\lambdabel{le53}
Each $QR$-tetrahedron or $QL$-tetrahedron in the D-star ${\cal D}_{HF}({\bf 0})$
is contained in a single cluster. Furthermore all such tetrahedra having a
common spine are contained in a single cluster.
\end{lemma}
In effect the partition of the vertex D-star into clusters
partitions the $V$-cell into smaller pieces, while leaving the $D$-sets
unaffected. The scoring function is additive over any partition of
a $V$-cell into smaller pieces, according to \eqn{203} and \eqn{N41}.
The {\em score} $\sigma_{HF}(F)$ of the cluster determined
by a face $F$ of ${\cal G} ( {\bf v} )$ is the sum of the scores of the
$QR$-tetrahedra and $QL$-tetrahedra in the cluster, plus the
Voronoi score $4\Gamma(R)$ of the remaining part $R$ of the cluster.
We then have
\beql{503a}
Score({\cal D}_{HF}({\bf v})) = \sum_{F \in {\cal G} ( {\bf v} )}\sigma_{HF}(F).
\end{equation}
We now consider clusters associated to the simplest faces $F$ in
the graph ${\cal G} ( {\bf v} )$. Each triangular face corresponds to a
$QR$-tetrahedron in ${\cal D}_{HF} ( {\bf v} )$, and, conversely,
each $QR$-tetrahedron in ${\cal D}_{HF} ( {\bf v} )$ produces a triangular face.
A {\em quad cluster} is a cluster over a quadrilateral face.
A $Q$-octahedron with spine ending at ${\bf 0}$ results in a quadrilateral
face, but there are many other kinds of quad clusters.
In the case of faces $F$ with $\ge 5$ edges, the cluster may consist of
a $V$-cell plus some $QL$-tetrahedra, in many possible ways.
All the possible decompositions into such pieces have to be
considered as separate configurations.
\begin{lemma}\lambdabel{le54}
(i) A cluster over a triangular face $F$ consists of a single
QR-tetrahedron, and
conversely. The score of such a cluster is at most
1 pt, and equality holds if and only if it is a regular tetrahedron
of edge length 2.
(ii) The sum of the score functions over any quad cluster is at most
zero. Equality can occur only if the four sphere centers ${\bf v}_i$ corresponding
to the vertices of the quad cluster each
lie at distance 2 from ${\bf v}$, and at distance 2 from each other whenever they
share an edge of the quad cluster.
(iii) The score of a cluster over any face with five or more
sides is strictly negative.
\end{lemma}
The extremal graphs where equality is known to occur in \eqn{501} have
eight triangular faces and six quadrilateral faces. The upper bound
of $8 pt$ for these cases is implied by this lemma.
(It appeared first in Hales~\cite[Theorem 4.1]{II}.)
The following result rules out graphs ${\cal G} ( {\bf v} )$ with
faces of high degree.
\begin{theorem}\lambdabel{th52}
All decomposition stars ${\cal D}_{HF} ( {\bf 0} )$ with planar maps
${\cal G} ( {\bf 0} )$ satisfy
\beql{504}
Score( {\cal D}_{HF}( {\bf 0} )) \le 8 pt
\end{equation}
unless the planar map ${\cal G} ({\bf 0})$ consists entirely of
(not necessarily convex) faces of the following kinds:
polygons having at most $8$ sides,
in which pentagons and hexagons may contain an
isolated interior vertex or a single edge
from an interior vertex to an outside vertex, and a pentagon
may exclude from its interior a
triangle with two interior vertices.
\end{theorem}
There remain a finite set of possible map structures that
satisfy the conditions of Theorem \ref{th52}.
Here we use the fact that there can be at most 50 vertices ${\bf v}$ with
$\| {\bf v} \| \le \frac{251}{100}$.
The list is further pruned by various methods, and reduced to about 5000
cases. Since the (putative) extremal cases are already covered
by Lemma~\ref{le54}, in the remaining cases
one wishes to prove a strict inequality
in \eqn{501}, and such bounds can be
obtained in principle by computer.
Most of the remaining cases are eliminated by linear programming bounds.
The linear programs obtain upper bounds for the score function
$Score ({\cal D}_{HF} ({\bf 0} ))$ for a planar map ${\cal G}$ of a particular
configuration type, using the score function as the objective function,
in the form:
\beql{P55}
Maximize ~~ Score ({\cal D}_{HF} ({\bf 0})) = \sum_{{\rm faces}~F} \sigma (F) ~,
\end{equation}
where the variable $\sigma (F)$ is the sum of weights
associated to the cluster over the face $F$.
The use of linear programming relaxations of the nonlinear program
seems to be a necessity in bounding the score function.
For example the compression function $\Gamma (R)$ for different
regions $R$ is badly
behaved: it is neither convex nor concave in general.
The linear constraints include hyperplanes bounding the convex hull of the
score function over the variable space.
One can decouple the contributions of the separate faces $F$ of
${\cal G} ({\bf v} )$ using the following result.
\begin{lemma}\lambdabel{le55}
{\rm (Decoupling Lemma)}
Let ${\bf v} \in \Omegaega$ be a vertex of a saturated packing and let $F$
be a face of the associated
planar map ${\cal G} ( {\bf v} )$, and let ${\cal C}_F$ denote the $($closed$)$
pointed cone over $F$ with vertex ${\bf v}$, and let ${\cal C}_{F,red}$ denote
the closure of the cone over $F$ obtained by removing from ${\cal C}_F$
all cones over $D$-sets with a corner at ${\bf v}$.
Then the portion of the $V$-cell $V({\bf v} )$ that lies in ${\cal C}_F$ is
completely determined by the vertices of
$\Omegaega$ that fall in the smallest closed convex cone
$\bar{{\cal C}}_F$ containing ${\cal C}_F$.
In particular,
\beql{505}
V_F :=V( {\bf v} ) \cap {\cal C}_F = V(\Omegaega \cap \bar{{\cal C}}_F , {\bf v} ) \cap {\cal C}_{F,red} ~.
\end{equation}
\end{lemma}
To obtain such a decoupling lemma requires the exchange of ``tips''
between Voronoi domains, as described in \S4.
The decoupling lemma permits the score function
$\sigma (V( {\bf v} ), {\bf v} )$ to be decomposed into polyhedral pieces
that depend on only a few of the nearby vertices.
This decomposes the problem into a sum of smaller problems, to
bound the scores of the pieces
$\sigma ( V( {\bf v} ) \cap {\cal C}_F , {\bf v} )$ in terms of these vertices.
It will often be applied when the face $F$ is convex,
in which case $\bar{{\cal C}}_F = {\cal C}_F$.
A further very important relaxation of the linear programs involves
``truncation.'' The {\em truncated $V$-cell} is
\beql{505a}
V_{trunc}({\bf v}) := V({\bf v}) \cap {\bf B}({\bf v}; \frac {251}{200}).
\end{equation}
We may consider truncation of that part of the
$V$-cell over each face of ${\cal G}$ separately.
\begin{lemma}\label{le56}
Let $F$ be a face of ${\cal G}({\bf v})$ and ${\cal C}_F$ the cone over that face.
The region $V_{trunc}({\bf v}) \cap {\cal C}_F$ is entirely determined by
the vertices of ${\cal G}({\bf v})$ in ${\cal C}_F$. If ${\cal V}_F$ denotes this set
of vertices, together with ${\bf v}$, then this region is the closure of
$(V_{vor}({\cal V}_F, {\bf v}) \cap {\cal C}_F) \setminus \{ D\mbox{-sets} \}.$ The compression
function satisfies the bound
\beql{505b}
\Gamma ( V_{trunc}({\bf v}) \cap {\cal C}_F) \ge \Gamma( V({\bf v}) \cap {\cal C}_F).
\end{equation}
\end{lemma}
The inequality \eqn{505b} implies that replacing a Voronoi-type region
by a truncated region can only increase the score, hence one can relax
the linear program
by using the score of truncated regions. If one is lucky the linear
programming bounds using truncated regions will still be strong enough to
give the desired inequality. The use of truncation greatly reduces the
number of configurations that must be examined.
Truncation bounds were also used in
proving Theorem~\ref{th52} above.
We add the following remarks about the construction of
the linear programming problems.
\begin{itemize}
\item[(1)]
For each face $F$ of a given graph type ${\cal G}$ Hales and Ferguson
construct a large number of linear
programming constraints in terms of the edge lengths,
dihedral angles and solid angles
of the polyhedral pieces making up the cluster of ${\cal D}_{HF} ({\bf v} )$
over face
$F$ of the graph ${\cal G}$.
The edge lengths, dihedral angles and solid angles are
variables
in the linear program. Some of the constraints embody geometric
restrictions that a polyhedron of the given type must satisfy. Others
of them are inequalities relating the weight function of the polyhedron, which
is also a variable in the linear program, to the geometric quantities.
The inequalities bound the score function on the cluster
(either as a $V$-cell
or as $D$-sets) in terms of these variables.
There are also some global constraints in the linear program,
for example that the solid
angles of the faces around ${\bf v}$ add up to $4 \pi$.
\item[(2)]
The weight function for $D$-sets does not
permit subdivision of the simplex, but the weight function on
the $V$-cell is additive under subdivision, so one can cut such
regions up into smaller pieces if necessary, to get improved
linear programming bounds, by including more stringent constraints.
\item[(3)]
In the linear programming relaxation, a feasible
solution to the constraints need not correspond to any geometrically
constructible vertex $D$-star. All that is required is that
every vertex $D$-star of the particular configuration type correspond
to some feasible point of the linear program.
\end{itemize}
In this fashion one obtains a long list of linear programs, one for
each configuration type, and to
rule out a map type ${\cal G}$ one needs an upper bound for the
linear program's objective function strictly below 8 pt.
To rigorously obtain such an upper bound, it suffices to find a feasible
solution to the dual linear program, and to obtain a good upper
bound one wants the dual feasible solution close to a dual
optimal solution.
The value of the dual $LP$'s objective function is then a
certified upper bound to the primal $LP$.
To obtain such a certification, it is useful to formulate
the linear programs so that
the dual linear program has only inequality constraints, with
no equality constraints, so that the feasible region for it is
full-dimensional.
This way, one can guarantee that the dual feasible solution is
{\em strictly} inside the dual feasible region which
facilitates checking feasibility.
This is necessary because the linear program put on the
computer is only an approximation to the true linear program.
For example, certain constraints of the true $LP$ involve transcendental
numbers like $\pi$, and one considers an approximation.
The effect of these errors is to perturb the {\em objective function}
of the dual linear program.
Thus a rigorous bound on the effect of these perturbations on the
upper bound can be obtained in terms of the dual feasible solution.
In this way one can (in principle) get a certified upper bound
\footnote{The Hales proof in the preprints
used a linear programming package CPLEX
that does not supply such certificates. Therefore the linear
programming part of the Ferguson-Hales proof needs to be re-done
to obtain {\em guaranteed} certificates.}
on the score for a map type ${\cal G}$, using a computer.
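To make the certification mechanism concrete, here is a toy numerical sketch
(mine, not taken from any of the Hales--Ferguson programs; the small LP and
the dual point in it are invented for illustration) of how weak duality
converts any dual feasible point into a rigorous upper bound for a
maximization LP:
\begin{verbatim}
# Toy illustration of certifying an upper bound on a maximization LP
# via a dual feasible point (weak duality).  The LP below is made up;
# it is NOT one of the Hales-Ferguson score LPs.
#
#   maximize   c^T x    subject to  A x <= b,  x >= 0.
#
# For any y >= 0 with A^T y >= c and any primal feasible x:
#   c^T x <= (A^T y)^T x = y^T (A x) <= y^T b,
# so b^T y is a certified upper bound, no matter how y was found.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

y = np.array([0.6, 0.8])                  # a hand-picked dual feasible point
assert np.all(y >= 0) and np.all(A.T @ y >= c - 1e-12)
bound = b @ y                             # certified upper bound (= 7.2 here)

for x in (np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([1.6, 1.2])):
    assert np.all(A @ x <= b + 1e-12)     # primal feasible points ...
    assert c @ x <= bound + 1e-12         # ... never exceed the certificate
print("certified upper bound:", bound)
\end{verbatim}
In the actual score LPs the analogous one-sided check is what must remain
valid after the perturbations of the objective discussed above are taken
into account.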
The linear programming bounds in the Ferguson-Hales approach above suffice
to eliminate all map types ${\cal G}$ not ruled out by Theorem~\ref{th52}
except for about 100 ``bad'' cases. These are then handled
by ad hoc methods.
[I am not sure
of the details about how these remaining ``bad'' cases are handled.
Presumably
they are split into smaller pieces, extra inequalities are generated
somehow, and perhaps specific information on the location of
vertices at distance more than $\frac{251}{100}$ from ${\bf v}$ is incorporated into the linear
programs.]
\section{Concluding Remarks}
\hspace*{\parindent}
The Kepler conjecture appears to be an extraordinarily difficult
nonlinear optimization problem.
The ``configuration space'' to be optimized over has an extremely
complicated
structure, of high dimensionality, and the function being optimized is
highly nonlinear and nonconvex, and lacks good monotonicity properties.
The crux of the Hales approach is to select a formulation of an
optimization problem that can be carried out (mostly by computer)
in a reasonable length of time.
This led to the Hales-Ferguson choice of a very complicated partition and
score function, giving an inelegant local inequality, which
however has good decomposition properties in terms of the nonlinear
program. Much of the work in the proof lies in the reductions to
reasonable sized cases, and the use of linear programming
relaxations.
The elimination of the most complicated cases, in Theorem \ref{th52} was
a major accomplishment of this approach.
The use of Delaunay simplices
to cover most of the volume where density
is high seems important to the proof and to the choice of score
functions, since simple analytic formulae are available
for tetrahedra.
The Hales-Ferguson proof, assumed correct, is a tour de force
of nonlinear optimization.
In contrast, the Hsiang approach formulates a relatively elegant local
inequality, involving only Voronoi domains and a fairly simple
weight function:
only nearest neighbor regions are counted.
It is conceivable that a rigorous proof of the Hsiang inequality can be
established, but it very likely will require an enormous
computer-aided proof of a
sort very similar to the Hales approach.
Voronoi domains do not seem well suited to computer proof:
they may have 40 or more faces each, and the Hsiang approach requires
considering up to twenty of them at a time.
A computer-aided proof would likely have to dissect the Voronoi domains
into pieces, further increasing the size of the problem.
\paragraph{Acknowledgments.} I am indebted to T. Hales for critical
readings of a preliminary version, with many suggestions and corrections.
G. Ziegler provided comments and corrections.
\subsection*{Appendix A. Hales Score Function Formulas}
\hspace*{\parindent}
These definitions are taken from Hales \cite[Section 8]{I} and
Ferguson and Hales \cite[p. 8--11]{FH}.
A tetrahedron $T( l_1, l_2 , l_3 , \ldots, l_6 )$ is
uniquely determined by its six edge lengths $l_i$.
Let the vertices of $T$ be ${\bf v}_0, {\bf v}_1, {\bf v}_2 , {\bf v}_3$ and number
the edges as
\beql{A1}
l_i = \| {\bf v}_0 - {\bf v}_i \|
\quad\mbox{for}\quad
1 \le i \le 3, ~
l_4 = \| {\bf v}_2 - {\bf v}_3 \| ,
l_5 = \| {\bf v}_1 - {\bf v}_3 \|\quad\mbox{and}\quad
l_6 = \| {\bf v}_1 - {\bf v}_2\| ~.
\end{equation}
We take ${\bf v}_0 = {\bf 0}$ for convenience.
Suppose that the circumcenter ${\bf w}_c = {\bf w}_c (T)$ of $T$ is contained in
the pointed cone over vertex ${\bf v}_0$, determined by $T$.
Let $\hat{T}_0$ denote the part of the Voronoi cell of ${\bf v}_0$
with respect to the set $\Omega = \{{\bf v}_0, {\bf v}_1, {\bf v}_2, {\bf v}_3 \}$ of
vertices of $T$ that lies in $T$.
Suppose in addition that the three faces of $T$ containing ${\bf v}_0$
are each {\em non-obtuse} triangles. Then
the set $\hat{T}_0$ subdivides into six pieces, called Rogers
simplices by Hales \cite[p. 31]{I}.
A {\em Rogers simplex} in $T$ is the convex hull of ${\bf v}_0$, the
midpoint
of an edge emanating from ${\bf v}_0$, the circumcenter of one face of $T$
containing that edge, and the circumcenter ${\bf w}_c = {\bf w}_c (T)$.
If $a$ denotes the half-length of an edge, $b$ the circumradius of a face and
$c= \| {\bf w}_c \|$ is the circumradius of $T$ then the associated
Rogers simplex has shape
\beql{A2}
R(a,b,c) := T(a,b,c, (c^2 -b^2 )^{1/2}, (c^2 -a^2)^{1/2}, (b^2 - a^2 )^{1/2} ) ~,
\end{equation}
with the positive square root taken.
The intersection of a unit sphere centered at ${\bf 0}$ with
$R(a,b,c)$ has volume $\frac{1}{3} Sol ({\bf v}_0; R(a,b,c))$,
where $Sol ({\bf v}_0 ; R(a,b,c))$ denotes the solid angle of
$R(a,b,c)$ at ${\bf v}_0$,
normalized so that a total solid angle is $4 \pi$.
Set
\beql{A3}
x_i = l_i^2
\end{equation}
so that the $x_i$ are the squares of the edge lengths.
\begin{lemma}\label{lA1}
The solid angle $Sol ( {\bf v}_0 , T)$ of a tetrahedron
$T(l_1, l_2, l_3, l_4 , l_5, l_6 )$ is given by
\beql{A4}
Sol ( {\bf v}_0 , T) := 2 ~{\mbox{arccot}} \left( \frac{2A}{\Delta^{1/2}} \right)
\end{equation}
in which the positive square root of $\Delta$ is taken,
the value of arccot lies in $[0, \pi ]$, and
\beql{A5}
A(l_1, l_2, l_3, l_4, l_5, l_6) :=
l_1 l_2 l_3 + \frac{1}{2} l_1 (l_2^2 + l_3^2 - l_4^2 ) +
\frac{1}{2} l_2 (l_1^2 + l_3^2 - l_5^2 ) + \frac{1}{2}
l_3 (l_1^2 + l_2^2 - l_6^2 )
\end{equation}
and
\begin{eqnarray}\label{A6}
\Delta
(l_1, l_2, l_3, l_4, l_5, l_6) & := &
l_1^2 l_4^2 (-l_1^2 + l_2^2 + l_3^2 - l_4^2 + l_5^2 + l_6^2 ) \nonumber \\
&&+ l_2^2 l_5^2 (l_1^2 - l_2^2 + l_3^2 + l_4^2 - l_5^2 + l_6^2 ) \nonumber \\
&&+ l_3^2 l_6^2 (l_1^2 + l_2^2 - l_3^2 + l_4^2 + l_5^2 - l_6^2 ) \nonumber \\
&&- l_2^2 l_3^2 l_4^2 - l_1^2 l_3^2 l_5^2 -l_1^2 l_2^2 l_6^2 - l_4^2 l_5^2 l_6^2 ~.
\end{eqnarray}
\end{lemma}
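For readers who wish to experiment with these formulas, here is a short
numerical transcription of Lemma~\ref{lA1} (a sketch of mine, in Python,
using only the formulas above); as a sanity check it recovers the solid
angle at a vertex of a regular tetrahedron with edge length $2$, i.e. the
configuration of centers of four mutually tangent unit spheres, whose
classical value is $\arccos(23/27)\approx 0.5513$:
\begin{verbatim}
# Numerical transcription of the solid-angle formula of Lemma A.1.
# Edge ordering follows (A1): l1,l2,l3 meet at v0; l4,l5,l6 are opposite.
import math

def Delta(l1, l2, l3, l4, l5, l6):
    x1, x2, x3, x4, x5, x6 = (l * l for l in (l1, l2, l3, l4, l5, l6))
    return (x1*x4*(-x1 + x2 + x3 - x4 + x5 + x6)
            + x2*x5*( x1 - x2 + x3 + x4 - x5 + x6)
            + x3*x6*( x1 + x2 - x3 + x4 + x5 - x6)
            - x2*x3*x4 - x1*x3*x5 - x1*x2*x6 - x4*x5*x6)

def solid_angle(l1, l2, l3, l4, l5, l6):
    A = (l1*l2*l3 + 0.5*l1*(l2**2 + l3**2 - l4**2)
                  + 0.5*l2*(l1**2 + l3**2 - l5**2)
                  + 0.5*l3*(l1**2 + l2**2 - l6**2))
    # atan2(sqrt(Delta), 2A) = arccot(2A / sqrt(Delta)), valued in (0, pi)
    return 2.0 * math.atan2(math.sqrt(Delta(l1, l2, l3, l4, l5, l6)), 2.0 * A)

print(solid_angle(2, 2, 2, 2, 2, 2))   # ~0.5513
print(math.acos(23.0 / 27.0))          # ~0.5513 (regular tetrahedron)
\end{verbatim}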
\paragraph{Definition A.1.}
(i) for a tetrahedron $T(l_1, l_2, \ldots, l_6)$ with vertex ${\bf v}_0$,
if the circumcenter ${\bf w}_c$ of $T$
falls inside the cone determined by $T$ at ${\bf v}_0$, then we set
\beql{A7}
vor (T, {\bf v}_0) : = 4 \sum_{i=1}^6 \left\{
vol (R_i (a,b,c)) (-\delta_{oct} ) + \frac{1}{3} Sol (R_i, {\bf v}_0)\right\}
\end{equation}
with
\beql{A8}
vol (R(a,b,c)) := \frac{a(b^2 - a^2)^{1/2} (c^2 - b^2 )^{1/2}}{6} ,
\quad\mbox{for}\quad 1 \le a \le b \le c ~.
\end{equation}
This formula satisfies $vor (T, {\bf v}_0 ) = 4 \Gamma ( \hat{T}_0)$.
(ii) The six tetrahedra $R_i (a,b,c)$ are still defined even when
the circumcenter
${\bf w}_c$ falls outside the cone of $T$ at vertex ${\bf v}_0$, and we still
take the formula \eqn{A7} to define $vor (T, {\bf v}_0)$, except that both
$vol (R_i (a,b,c))$ and $Sol(R_i,{\bf v}_0)$ are
counted with a negative sign:
each tetrahedron $R_i (a,b,c)$ falls outside $T$, and has no interior
in common with it.
Hales calls the definition (ii) the ``analytic continuation'' of case (i).
It has a geometric interpretation.
The truncated Voronoi function $vor (T, {\bf v}_0; t)$ of a tetrahedron
$T$ at vertex ${\bf v}_0$ is intended to measure the compression
$\Gamma ( \hat{T}_0 \cap B ({\bf v}_0 ; t))$.
Here we have truncated the region $\hat{T}_0$ by removing from it
all points at distance greater than $t$ from ${\bf v}_0$.
We set
\beql{A9}
vor_0 (T, {\bf v}_0) := vor \left(T, {\bf v}_0; \frac{251}{200} \right) ~.
\end{equation}
The definition
\beql{A10}
vor (T, {\bf v}_0 ; t) := \Gamma ( \hat{T}_0 \cap B ({\bf v}_0 ; t))
\end{equation}
is valid only when the circumcenter ${\bf w}_c$ of $T$ lies in the
cone generated from
$T$ at vertex ${\bf v}_0$.
In the remaining case one must construct an analytic representation
analogous to \eqn{A7} for $vor (T, {\bf v}_0; t)$.
This is done in \cite[pages 9--10]{FH}.
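As a consistency check on \eqn{A2} and \eqn{A8}, one can compare the
closed-form volume with the volume computed from the six edge lengths of the
Rogers simplex via the standard identity $\mathrm{vol}(T)=\Delta^{1/2}/12$,
with $\Delta$ as in Lemma~\ref{lA1}; this identity is not stated above and is
used in the following sketch (mine) only for verification:
\begin{verbatim}
# Check that the Rogers simplex shape (A2) and the volume formula (A8)
# are consistent, using vol(T) = sqrt(Delta)/12 with Delta as in Lemma A.1.
import math

def Delta(l1, l2, l3, l4, l5, l6):
    x1, x2, x3, x4, x5, x6 = (l * l for l in (l1, l2, l3, l4, l5, l6))
    return (x1*x4*(-x1 + x2 + x3 - x4 + x5 + x6)
            + x2*x5*( x1 - x2 + x3 + x4 - x5 + x6)
            + x3*x6*( x1 + x2 - x3 + x4 + x5 - x6)
            - x2*x3*x4 - x1*x3*x5 - x1*x2*x6 - x4*x5*x6)

def rogers_edges(a, b, c):            # the six edge lengths from (A2)
    return (a, b, c,
            math.sqrt(c*c - b*b), math.sqrt(c*c - a*a), math.sqrt(b*b - a*a))

def vol_A8(a, b, c):                  # the closed form (A8)
    return a * math.sqrt(b*b - a*a) * math.sqrt(c*c - b*b) / 6.0

a, b, c = 1.0, 1.2, 1.4               # any 1 <= a <= b <= c will do
print(vol_A8(a, b, c))                                    # the two printed
print(math.sqrt(Delta(*rogers_edges(a, b, c))) / 12.0)    # values agree
\end{verbatim}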
\subsection*{Appendix B. References to the Hales Program Results}
\hspace*{\parindent}
This paper was written to state the
Hales-Ferguson local inequality in as simple a way as I could find,
and does not match
the order in which things are done in the preprints of Hales and Ferguson.
Also, the lemmas and theorems stated here are not
all stated in the Hales and Ferguson preprints; some of
them are based on the talks that Hales gave at IAS in January 1999.
The pointers below indicate where to look in the preprints for
the results I formulate as lemmas and theorems.
{\em Warning}:
The Hales-Ferguson partition and scoring function given in \cite{FH},
which are the ones actually used for the proof of the Kepler conjecture,
differ from those used earlier by Hales in \cite{I} and \cite{II}.
\begin{itemize}
\item[(0)] The idea of considering local inequalities that weight
total area and covered area by spheres in a ratio $\frac {B}{A}$ that is
not equal to the optimal density occurs in Hales' original approach
based on Delaunay triangulations, see \cite{H1} \cite{H2}.
It also appears in Hales \cite[Lemma 2.1]{I}
and in Ferguson and Hales \cite[Proposition 3.14]{FH}. I have
inserted the parameters $A$ and $B$ in order to include the density
inequality of Hsiang\cite{Hs} in the same framework.
\item[(1)]
{\em Definitions \ref{de41} and \ref{de42}} appear in \cite[page 2]{FH}.
\item[(2)]
{\em Lemma \ref{le41}} is Lemma 1.2 of
\cite{FH}, proved in Lemma 3.5 of \cite{I}.
(The fact that no vertex of $\Omegaega$ occurs inside a face of
a $QL$-tetrahedron or a $QR$-tetrahedron
requires additional argument.)
\item[(3)]
{\em Lemma \ref{le42}} follows from Lemma 1.3 of \cite{FH}.
\item[(4)]
{\em Lemma \ref{le43}} is proved in \cite[p.3 bottom]{FH}.
\item[(5)]
{\em Lemma \ref{Nle43}}
is covered in the discussion on \cite[pages 5--6]{FH}.
\item[(6)]
{\em Lemma \ref{Nle44}} (i)--(iii) are covered in the discussion
on \cite[page 5]{FH}, including Lemma 1.8 of that paper.
\item[(7)]
The notion of ``tip'' is discussed at length in section 2 of Hales~\cite{II}.
In part II ``tips'' are not actually reassigned, although this is
mentioned; their existence affects the scoring rule used for the
Delaunay simplex to which the ``tip'' is associated.
The rules for moving ``tips'' around to make $V$-cells in the
Hales-Ferguson approach are
discussed on \cite[page 8]{FH}. Warning: the way that ``tips'' are
handled in part II and in \cite{FH} may not be the same:
\cite{FH} takes priority.
\item[(8)]
{\em Lemma \ref{Nle45}} (i) is \cite[Lemma 2.2]{II} and \cite[Lemma 4.17]{FH}.
Facts related to (ii) are discussed in \cite[Sect. 8.6.7]{I}. (For the
second part I do not have a reference.)
(iii) Hales mentioned this in IAS lectures, and sent me a proof sketch,
which I expanded into the following: Let $S$ be a simplex in the
$D$-system that overlaps a ``tip'' protruding from ${\bf v}$. Say that the ``tip''
overlaps by pointing into $S$ along a face $F$ of $S$.
Thus $F$ is a negatively oriented face of $S' =(F, {\bf v})$,
which means that the simplex $S'$ is a
$QR$-tetrahedron or else a $QL$-tetrahedron with spine on $F$.
Suppose first that $S'$ is a $QL$-tetrahedron.
It now follows that $S$ must be a
$QL$-tetrahedron with its spine on $F$
by \cite[Lemma 2.2]{FH}.
So $S'$ and $S$ are adjacent $QL$-tetrahedra with spines
on their common face $F$. Now $S$ is in the $D$-system
since $S'$ is in the $D$-system.
Thus the distance of ${\bf v}$ to the vertices in $F$ is at most
$\frac {251}{100},$ since ${\bf v}$ is not on the spine.
We now suppose that the ``tip'' is not
entirely contained in $S$, and derive a contradiction.
If it isn't contained in $S$, then it crosses out through a face
$F'$ of $S$.
By the same argument, the distance from ${\bf v}$ to the vertices of
$F'$ is at most $\frac {251}{100}.$ Thus ${\bf v}$ has distances at
most $\frac {251}{100}$ from all vertices of $S$, which is impossible
by \cite[Lemma 1.2]{FH} and \cite[Lemma 1.3]{FH}.
Suppose secondly that $S'$ is a $QR$-tetrahedron.
Then one shows that $S$ is
also a $QR$-tetrahedron, hence is in the $D$-system. The rest of
the argument goes as before, to the same contradiction.
\item[(9)]
{\em Theorem \ref{th51}}. The main theorem is first
stated as Conjecture 3.15 in \cite[p. 13]{FH}.
It is the theorem asserted to be proved in \cite{KC}.
\item [(10)]
{\em Lemma \ref{le51}}
(i) appears as \cite[Lemma 3.13]{FH}.
(ii) is a special case of \cite[Lemma 3.13]{FH}
for a quad cluster, which can consist of four congruent $QL$-tetrahedra.
\item[(11)]
The {\em standard regions}~ corresponding to the
graph ${\cal G} ( {\bf v} )$ are defined on \cite[p. 4]{FH}.
(``Planar map that breaks unit sphere into regions.'')
\item[(12)]
{\em Lemma \ref{le52}}
follows from Lemma 1.6 of \cite{FH}, which implies that crossing
lines come from $QL$-tetrahedra only.
\item[(13)]
{\em Lemma \ref{le53}} is an immediate consequence of Lemma~\ref{le52}
and \cite[Lemma 1.3]{FH}.
\item[(14)]
{\em Lemma \ref{le54}} appears as \cite[Lemma 3.13]{FH}.
\item[(15)]
{\em Theorem \ref{th52}}
follows from the Corollary to Theorem 4.4 of \cite{IV}.
See also Proposition 7.1 of \cite{III}.
\item[(16)]
{\em Lemma \ref{le55}} and {\em Lemma \ref{le56}}.
These results are briefly stated at the bottom of p. 8 of \cite{FH}.
There are also some relevant details in Hales \cite[Sect. 2.2]{II}.
(I do not know an exact reference for detailed proof.)
\end{itemize}
\begin{center}
\begin{tabular}{rll}
~ & $\underline{\mbox{Hales-Ferguson terminology}}$ & $\underline{\mbox{Terminology in this paper}}$ \\ [+.2in]
(1) & decomposition star ~~~~~~~~~~~~ & vertex D-star \\ [+.2in]
(2) & quasiregular tetrahedron & $QR$-tetrahedron \\ [+.2in]
(3) & quarter & $QL$-tetrahedron \\[+.2in]
(4) & diagonal (of quarter)~~~~~ & spine (of QL-tetrahedron)\\ [+.2in]
(5) & $Q$-system ~~~~~~~~~~~~~& $D$-system \\ [+.2in]
(6) & score $\sigma(R, {\bf v})$ ~~~& weight function $\sigma(R, {\bf v})$ \\ [+.2in]
(7) & standard cluster~~~~~~& cluster
\end{tabular}
\end{center}
\vspace*{.2\baselineskip}
\noindent AT\&T Labs - Research \\
Florham Park, NJ 07932-0971 \\
email: {\tt [email protected]} \\
\end{document}
\begin{document}
\title{Spin-augmented observables for efficient photonic quantum error correction}
\author{Elena Callus}\email{[email protected]}
\author{Pieter Kok}\email{[email protected]}
\affiliation{Department of Physics and Astronomy, The University of Sheffield, Sheffield, S3 7RH, UK}
\begin{abstract}\noindent
We demonstrate that the spin state of solid-state emitters inside micropillar cavities can serve as measure qubits in syndrome measurements. The photons, acting as data qubits, interact with the spin state in the microcavity and the total state of the system evolves conditionally due to the resulting circular birefringence. By performing a quantum non-demolition measurement on the spin state, the syndrome of the optical state can be obtained. Furthermore, due to the symmetry of the interaction, we can alternatively choose to employ the optical states as measure qubits. This protocol can be adapted to various resource requirements, including spectral discrepancies between the data qubits and codes with modified connectivities, by considering entangled measure qubits. Finally, we show that spin-systems with dissimilar characteristic energies can still be entangled with high levels of fidelity and tolerance to cavity losses in the strong coupling regime.
\end{abstract}
\date{\today}
\maketitle
Linear optical quantum computing with single photons becomes resource-inefficient and requires a high overhead due to weak photon--photon interactions, making multi-qubit gates difficult to implement \cite{Kok2007}. However, this drawback can be overcome at the measurement stage if one can resolve amongst a broader class of observables, e.g., performing measurements of two-photon qubits such as Bell states \cite{Knill2001}. This would thereby allow for the execution of non-linear gates more efficiently. Although quantum dot (QD) spin systems tend to be too short-lived for useable long-term memories, they interact efficiently with light. The spin--photon interaction augments photonic quantum information processing, with important applications in photonic state measurements. This complements linear optical quantum computing and can dramatically increase its efficiency. Here we propose an application of this interaction to the measurement of a larger class of qubit observables.
\begin{figure}
\caption{Schematic of the stabilizer measurement setup, with (a) a single QD and (b) multiple entangled QDs.}
\label{fig:setup}
\end{figure}
\begin{figure}
\caption{A schematic of the two-dimensional array of data (open circles) and measure (black circles) qubits of a surface code.}
\label{fig:sandp}
\end{figure}
The spin--photon interface is a promising candidate for applications in quantum information technologies and quantum communication \cite{Kimble2008,Atatre2018}. The low decoherence rate of the photon renders it suitable as a flying qubit, transporting information over large distances and interacting readily with the solid-state spin, which acts as a stationary qubit. Over the last few years, various systems belonging to this family have been extensively studied with the aim of applications in various quantum technologies, yielding the development of, e.g., photonic quantum gates \cite{Hacker.2016,Duan.2004} and optical non-linearities \cite{Javadi.2018,Javadi.2015}, as well as entanglement of remote spin states \cite{Delteil.2015,Young2013,Cirac.1997}, photon polarisation \cite{Hu.2008a} and spin--photon states \cite{Economou.2016}. The circular birefringence arising from the optical selection rules for a spin state confined in a cavity \cite{Young2011} has been used to develop schemes for, e.g., quantum teleportation \cite{Hu2011}, quantum non-demolition measurements \cite{Hu2009} and entanglement beam splitters \cite{Hu2009a}. Furthermore, this system also has applications in the design of complete and deterministic Bell-state analyzers \cite{Bonato.2010,Hu2011}, a marked improvement over what is possible using just linear optics \cite{Calsamiglia2001}. Here, the spin--photon system measures the qubit parity whilst information about the symmetry is obtained using linear optics.
In this work, we will discuss the application of spin--photon interfaces to carry out efficient photonic stabilizer measurements in the surface code. A key objective in quantum physics is the physical realisation of fault-tolerant quantum computers, necessitating the development of quantum error detection and correction \cite{Shor1995}. Surface codes are a well-studied set of stabilizer codes designed for the implementation of error-corrected quantum computing \cite{Roffe.2019}. The first proposal was presented by Kitaev in the form of the toric code \cite{Kitaev.2003,Kitaev.1997}, assuming periodic boundary conditions that allow it to be mapped onto a torus. This was later generalised to planar versions with different variations in the boundary conditions \cite{Bravyi.1998,Freedman.1998,Fowler.2012}. The surface code considers a 2D square lattice arrangement of data and measure qubits, with the latter being used to detect errors and perform stabilizer measurements of the encoded data qubits.
Stabilizer measurement is one type of error detection technique, indicating the presence of possible noisy errors in the physical data qubits \cite{Nielsen2012,Gottesman1997}. It consists of a series of projective measurements performed on specific sets of qubits, with the measurement outcomes, or syndromes, indicating the location and type of error. Given that a direct measurement of the physical data qubits interferes with the coherence of the state and destroys the encoded information, the measurements are performed on entangled measure qubits. There exist several approaches when it comes to the physical implementation of quantum error correction, with platforms including photonic architectures \cite{Bell2014,Yao2012,Aoki2009}, superconducting circuits \cite{Andersen2020,Rist2015,Kelly2015}, trapped atomic ions \cite{Linke2017,Lanyon2013} and nitrogen-vacancy centres \cite{Cramer2016} having been experimentally explored.
Using solid-state QDs trapped inside micropillar cavities and scattering interactions at the single-photon level, the total state of the optical and spin sub-systems evolves conditionally, with entanglement occurring in the presence of coherent errors. This allows us to perform a quantum non-demolition measurement of one of the two subsystems, effectively retrieving information about the state of the other. Letting the optical and spin states serve as data and measure qubits, we show that this interaction and measurement process can be used to extract the syndrome, as shown schematically in Fig. \ref{fig:setup}. The scheme also has the advantage that the assignment of the data and measure qubits can be swapped around. Moreover, we will discuss the use of multiple entangled measure qubits as a means of reducing resource requirements, accommodating possible spectral variations between the data qubits and codes that consider various connectivities between the qubits.
The detection of errors in the data qubits is performed by means of syndrome measurements. The measurement operators, or stabilizers, for a surface code are comprised of star, $\mathsf{X}_s =\prod_{j\in \text{star}(s)}\hat{X}_j $, and plaquette operators, $\mathsf{Z}_p=\prod_{j\in \text{plaq}(p)}\hat{Z}_j$, where $\hat{X}$ and $\hat{Z}$ are the Pauli-X and Pauli-Z operators. The operators act on either the four data qubits that are adjacent to a vertex, said to belong to a star $s$, or adjacent to a face, said to belong to a plaquette $p$, as shown in Fig. \ref{fig:sandp}. Furthermore, the operators all commute, with $[\mathsf{X}_s,\mathsf{X}_{s'}]=[\mathsf{Z}_p,\mathsf{Z}_{p'}]=[\mathsf{X}_s,\mathsf{Z}_{p}]=0$. The eigenstates of $\hat{Z}$ are $\ket{0}$ and $\ket{1}$, and those of $\hat{X}$ are $\ket{\pm}\propto\left(\ket{0}\pm\ket{1}\right)$. The graph of the surface code is stabilized by $\mathsf{X}_s$ and $\mathsf{Z}_p$. The eigenvalues obtained from the measurement of these operators indicate the possible presence of errors, and depend on the parity of the state, with an eigenvalue of $+1$ ($-1$) corresponding to a state with even (odd) parity. The state of the data qubits may be initialised such that they are simultaneous eigenstates of all the stabilizer operators with eigenvalues of $\pm1$, referred to as the quiescent state. The standard method of extracting the qubit syndrome involves the implementation of a CNOT gate on each of the data qubits belonging to a star or plaquette, with the measure qubit serving as the control.
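As a concrete numerical illustration of these definitions, the following sketch (ours; it only builds the bare four-qubit operators and does not model the spin--photon interaction discussed below) verifies the stated commutation and parity properties:
\begin{verbatim}
# Minimal numerical illustration of weight-four star and plaquette
# operators acting on four data qubits (illustration only).
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

X_s = kron_all([X, X, X, X])   # star operator on its four qubits
Z_p = kron_all([Z, Z, Z, Z])   # plaquette operator on its four qubits

# Both square to the identity and (acting on an even number of shared
# qubits here) commute, so they can be measured simultaneously.
assert np.allclose(X_s @ X_s, np.eye(16))
assert np.allclose(Z_p @ Z_p, np.eye(16))
assert np.allclose(X_s @ Z_p, Z_p @ X_s)

# Z_p eigenvalue on a computational basis state is +1 (-1) for even (odd)
# parity, e.g. |0011> -> +1, |0001> -> -1.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
state_even = kron_all([ket0, ket0, ket1, ket1])
state_odd  = kron_all([ket0, ket0, ket0, ket1])
assert np.allclose(Z_p @ state_even, state_even)
assert np.allclose(Z_p @ state_odd, -state_odd)
\end{verbatim}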
\begin{figure}
\caption{(a) A micropillar cavity, with resonance frequency $\omega_c$, coupled to a QD with strength $g$. The field in the cavity couples to the output mode and lossy modes with rates $\kappa$ and $\kappa_s$, respectively. (b) The polarisation- and spin-dependent coupling rules for the QD with central frequency $\omega_{X^-}$.}
\label{fig:micropillar}
\end{figure}
The physical spin setup, demonstrated in Fig. \ref{fig:micropillar}, consists of a single-sided micropillar with two distributed Bragg reflectors on both ends, where only one side is fully reflective and the other end is partially transmissive. The cavity mode couples to an electron spin in the form of a charged QD contained within the micropillar cavity. Given the optical selection rules, the interaction of a photon within the cavity becomes polarisation and spin dependent \cite{Warburton2013}. In the case of a negatively charged QD, the $\ket{\uparrow}$ ($\ket{\downarrow}$) spin state can be optically excited to the negative trion, $X^-$, state $\ket{\uparrow\downarrow,\Uparrow}$ ($\ket{\uparrow\downarrow,\Downarrow}$) by absorption of a left-handed (right-handed) circularly polarised photon, $\ket{L}$ ($\ket{R}$). Cross-transitions between the lower and higher energy states are not allowed by the conservation of angular momentum.
The post-interaction reflection coefficient for a single-sided cavity coupled to a QD is given by \cite{Hu.2008}
\begin{equation}\label{eq:rh}
r_h(\omega)=\frac{\left[\I\left(\omega_{X^-}-\omega\right)+\frac{\gamma}{2}\right]\left[\I\left(\omega_{c}-\omega\right)+\frac{\kappa_s}{2}-\frac{\kappa}{2}\right]+g^2}{\left[\I\left(\omega_{X^-}-\omega\right)+\frac{\gamma}{2}\right]\left[\I\left(\omega_{c}-\omega\right)+\frac{\kappa_s}{2}+\frac{\kappa}{2}\right]+g^2},
\end{equation}
where $\omega$, $\omega_c$ and $\omega_{X^-}$ represent the frequencies of the photon, the cavity mode and the trion transition, respectively; $\gamma$ represents the decay rate of the $X^-$ dipole, $\kappa$ and $\kappa_s$ are the cavity decay rates into the output and the lossy side modes, respectively, and $g$ is the coupling strength between the QD and the cavity field. When the photon does not couple to the QD due to the selection rules, the only contribution to the reflection coefficient is from the (empty) cavity interaction. Setting $g=0$, we can characterise the cold cavity interaction by
\begin{equation}
r_0(\omega)=\frac{\I\left(\omega_{c}-\omega\right)+\frac{\kappa_s}{2}-\frac{\kappa}{2}}{\I\left(\omega_{c}-\omega\right)+\frac{\kappa_s}{2}+\frac{\kappa}{2}}.
\end{equation}
We will consider only the resonant interaction case in this work, where $\omega_c=\omega_{X^-}$, and allow detuning of the photon frequency, $\omega$, where $\delta=\omega_c-\omega=\omega_{X^-}-\omega$. For small enough cavity losses $\kappa_s$, one sees that $|r_0(\omega)|\simeq 1$ for all frequency detunings, whilst $|r_h(\omega)|\simeq 1$ except in the region of $\delta= \pm g$, when in the strong-coupling regime with $g>\left(\kappa+\kappa_s\right)/4$. We apply a frequency detuning such that the difference in phase shifts imparted during the coupled and the cold cavity interactions is $\pm\pi/2$. This means that $\delta$ is set such that $\tilde{\phi}(\omega)\equiv\phi_h(\omega)-\phi_0(\omega)=\pm \pi/2$, where $\phi_i(\omega)=\arg\left[r_i(\omega)\right]$ for $i=h,0$. We will drop the notation for frequency dependence for ease of readability.
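The working point can be visualised with the following numerical sketch (ours; the rates are illustrative placeholders in units of $\kappa$ and are not taken from any particular experiment), which scans the detuning for $|\tilde{\phi}|\approx\pi/2$ and reports the corresponding reflection moduli:
\begin{verbatim}
# Sketch: evaluate r_h and r_0 (resonant case, omega_c = omega_X-) and
# scan the detuning delta for |phi_h - phi_0| ~ pi/2.  All rates are in
# units of kappa; the numbers are illustrative placeholders.
import numpy as np

kappa, kappa_s, gamma, g = 1.0, 0.05, 0.01, 2.4   # g > (kappa + kappa_s)/4

def r_h(delta):
    num = (1j*delta + gamma/2)*(1j*delta + kappa_s/2 - kappa/2) + g**2
    den = (1j*delta + gamma/2)*(1j*delta + kappa_s/2 + kappa/2) + g**2
    return num / den

def r_0(delta):
    return (1j*delta + kappa_s/2 - kappa/2) / (1j*delta + kappa_s/2 + kappa/2)

deltas = np.linspace(0.01, 3.0, 20000)
phase_diff = np.angle(r_h(deltas) / r_0(deltas))   # wrapped to (-pi, pi]
i = np.argmin(np.abs(np.abs(phase_diff) - np.pi/2))
best = deltas[i]
print("delta with |phi_h - phi_0| ~ pi/2:", best)
print("|r_h|, |r_0| there:", abs(r_h(best)), abs(r_0(best)))
\end{verbatim}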
Fig. \ref{fig:setup} shows a diagrammatic setup for the syndrome measurement, where we first consider situation (a) with four photons interacting sequentially with a single spin. The photons serve as data qubits with $\ket{L}$ and $\ket{R}$ encoding the logical $\ket{0}$ and logical $\ket{1}$ qubit states, respectively, whilst the spin states act as the measure qubits. The Hadamard gates are applied pre- and post-interaction only when performing a star measurement, $\mathsf{X}_s$, such that the $\hat{X}$-basis eigenstates transform as $\ket{+}\leftrightarrow\ket{g}$ and $\ket{-}\leftrightarrow\ket{e}$. The electron spin state is initialised to $\ket{+_S}=\left(\ket{\uparrow}+\ket{\downarrow}\right)/\sqrt{2}$, however we note that the procedure also works for an initial spin state given by $\ket{-_S}=\left(\ket{\uparrow}-\ket{\downarrow}\right)/\sqrt{2}$. Letting the four photons belonging to a plaquette or star set interact with the spin system sequentially in time, and assuming $|r_0 \left(\omega\right)|=|r_h\left(\omega\right)|= 1$, a photonic eigenstate and the electron spin state evolve together, up to a global phase, as
\begin{equation}\label{equation:state}
\begin{split}
\bigotimes_{\substack{j\in \text{star}(s) \\ \text{or }\text{plaq}(p)}}&\ket{i_j}\otimes\ket{+_S}\rightarrow \bigotimes_{\substack{j\in \text{star}(s) \\ \text{or }\text{plaq}(p)}} \exp\left(\I\tilde{\phi}\delta_{i_jL}\right)\ket{i_j}\\
&\otimes\left[\ket{\uparrow}+\prod_{\substack{k\in \text{star}(s) \\ \text{or }\text{plaq}(p)}}\exp\left[-\I\left(\phi_{k,\uparrow}-\phi_{k,\downarrow}\right)\right]\ket{\downarrow}\right]/\sqrt{2},\\
\end{split}
\end{equation}
where $\ket{i_j}\in\{\ket{L},\ket{R}\}$, $j$ and $k$ index the same four photonic qubits in a plaquette or star set, $\delta_{i_j L}$ is the Kronecker delta, and $\phi_{j,\ast}$ is the phase shift resulting from the interaction between the photonic state $\ket{i}_j$ and spin state $\ket{\ast}\in\left\{\ket{\uparrow},\ket{\downarrow}\right\}$. The frequency detuning, $\delta$, is set such that $\tilde{\phi}=\pm\pi/2$, resulting in $\left(\phi_{k,\uparrow}-\phi_{k,\downarrow}\right)=\pm\pi/2$ ($\mp\pi/2$) for a left-handed (right-handed) circularly polarised photon. The relative phase shift between the two spin states accumulates with every spin--photon interaction. The total phase shift imparted from two orthogonally polarised photons is zero, whilst that resulting from pairs of identically polarised photons is $\pm\pi$. Given the set of all eigenstates, the spin state evolves to $\ket{+_S}$ ($\ket{-_S}$) for an even (odd) parity photonic state. By measuring the spin in the $\hat{X}$-basis, we can therefore perform a quantum non-demolition measurement that reveals the syndrome of the data qubits in a complete and efficient manner. The phase shift acquired by the individual photonic eigenstates post-measurement is shown in Eq. \ref{equation:state} and has two contributions. The key contribution to the imparted phase shift is $\exp\left(\I\tilde{\phi}\delta_{i_jL}\right)$, which introduces unwanted phase flips to the encoded state. This is corrected by a polarisation-dependent phase shift acting only on the right-circularly polarised state such that $\ket{R}\rightarrow\exp\left(\I\tilde{\phi}\right)\ket{R}$, possible to achieve in a passive manner using linear optics. This rotation corrects the state, irrespective of the physical state, preserving the original pre-measurement encoded state, including any detected errors. The procedure is similar when we account for the presence of various boundary conditions in the surface code (see SM).
Next, we consider the confidence in the spin read-out \cite{Kok2001} as an appropriate figure of merit for the measurement performance given possible variations in the frequency detuning, $\delta$, from the optimal. The ground state of the planar code is given by \cite{Nielsen2012}
\begin{equation}
\ket{\psi_0} \propto \prod_s\left(\mathds{1}+\mathsf{X}_s\right)\ket{0}^{\otimes n},
\end{equation}
where $n$ is the number of physical data qubits and the product is over the whole set of stars $s$. The state is assumed to be prone to coherent errors that can be modelled by applying a Pauli channel of the form
\begin{equation}
\mathcal{E}\left(\rho\right)=(1-p)\rho+x\hat{X}\rho\hat{X}+y\hat{Y}\rho\hat{Y}+z\hat{Z}\rho\hat{Z}
\end{equation}
to each individual physical qubit, where $x,y,z$ are the probabilities of the respective Pauli errors and $p=x+y+z$ is the physical qubit error rate. Since plaquette (star) operators detect only $X$($Z$)-type errors, we may address the performance of each measurement type individually. For our confidence measure, we may simply assume that both star and plaquette measurements also factor in any possible $Y$-type errors, since $\hat{Y}=\hat{Z}\hat{X}$. We can then express the confidence in a spin read-out of $\ket{\pm}$ by
\begin{equation}
\frac{\text{Tr}\left[\mathbb{P}_\pm\otimes\ket{\pm_S}\bra{\pm_S}\left\{U\left(\mathcal{E}\left(\rho\right)\otimes\ket{+_S}\bra{+_S}\right)U^\dagger\right\}\right]}{\text{Tr}\left[\mathds{1}\otimes\ket{\pm_S}\bra{\pm_S}\left\{U\left(\mathcal{E}\left(\rho\right)\otimes\ket{+_S}\bra{+_S}\right)U^\dagger\right\}\right]},
\end{equation}
where $U$ is the spin- and polarisation-dependent two-qubit gate describing the interaction and $\mathbb{P}_\pm$ is the projection operator onto the $\pm1$ eigenstates.
We show in Fig. \ref{fig:confidence} the confidence in the two types of plaquette and star measurement outcomes for different coupling strengths $g$ and for various physical qubit error probabilities as a function of the frequency detuning $\delta$. We consider here only the strong-coupling regime, as it is in this case that we satisfy the requirement that $|r_h|\rightarrow 1$. Current experimental values show coupling strengths with $g/\kappa$ reaching values of up to around 2.4 \cite{Volz2012,Reitzenstein2007}. We note here that the distance of the code, i.e., the measure of the number of physical qubits used to encode a logical qubit, does not have an effect on the confidence value, and that the plaquette and star measurements show the same type of behaviour due to their equivalence up to a Hadamard gate. We see that the confidence in the $\ket{-}$ spin state measurement tends to be lower. This is due to the way the coefficients accumulate in the presence of errors: the accumulation in such cases builds up in an uneven way, resulting in a more volatile behaviour as the $\delta$ is varied. Furthermore, we show that the proposed scheme is robust and tolerant to deviations in the frequency detuning from the optimal.
\begin{figure}
\caption{Confidence in the $\ket{+_S}$ and $\ket{-_S}$ measurement outcomes as a function of the frequency detuning $\delta$, for different coupling strengths $g$ and physical qubit error probabilities.}
\label{fig:confidence}
\end{figure}
Next, we consider a setup where the syndrome measurement is performed in parallel on the incoming photons. This may be done in order to optimise for the type of resources required as well as to accommodate possible differences in the spectral characteristic of the data qubits. Due to the linear nature of our transformation, the total phase shift is equivalent to the sum of the individual interactions and therefore the syndrome measurement can be done using two or four measure qubits per stabilizer measurement. The spins need to be entangled into a general GHZ-state up to any Pauli operations. Each photon is then allowed to interact with exactly one of the spins (or, in the case of a two-qubit register, two photons interacting with each spin), such that the interaction satisfies the conditions specified for the single-qubit register. Finally, all the spins are measured in the $\hat{X}$-basis, as was done in the single-qubit register, in order to extract the syndrome.
A setup making use of two or four measure qubits would require fewer photon switches and optical circulators, and would cater for a larger spectral variation between the photons. On the other hand, this calls for entanglement generation, which may require additional resources in terms of time and physical components. One way of entangling the spin states of two QDs is to allow a linearly polarised photon to interact with each state sequentially \cite{Young2013,Hu.2008}. This results in a so-called optical Faraday rotation which rotates the polarisation and, given initial spin states $\left(\ket{\uparrow}+\ket{\downarrow}\right)/\sqrt{2}$ and $\tilde{\phi}=\pm\pi/2$, evolves the spin--photon state to $-\I\ket{V}\left(\ket{\uparrow\uparrow}-\ket{\downarrow\downarrow}\right)\pm\I\ket{H}\left(\ket{\uparrow\downarrow}+\ket{\downarrow\uparrow}\right)$, up to normalisation. Therefore, by measuring the polarisation of the photon, the spin state is projected onto a maximally entangled state. This protocol can be extended to four spin states by entangling another pair and then entangling together one spin state from each pair.
In the case of photonic data qubits with different frequencies, we would require spectrally different QD-spin systems in order to satisfy the condition of $\tilde{\phi}=\pm\pi/2$. In such cases, the entanglement procedure outlined above may still be used to generate states with high fidelity, albeit the heralded efficiency of the procedure is reduced. By setting the frequency of the linearly polarised photon, say $\ket{H}$, such that $\tilde{\phi_1}=-\tilde{\phi_2}$, where $\tilde{\phi_i}$ is the difference in phase shifts for QD-spin system $i$, the state of the total system post-interaction is
\begin{equation}\label{eq:prob}
\begin{split}
\ket{H}&\left(\ket{\uparrow\uparrow}+\ket{\downarrow\downarrow}\right)+\left(e^{\I\tilde{\phi}}\ket{L}+e^{-\I\tilde{\phi}}\ket{R}\right)\ket{\uparrow\downarrow}\\
&+\left(e^{-\I\tilde{\phi}}\ket{L}+e^{\I\tilde{\phi}}\ket{R}\right)\ket{\downarrow\uparrow},
\end{split}
\end{equation}
up to some global phase and normalisation constant. Upon the detection of an orthogonally polarised photon (in this case $\ket{V}$), the spin states would be projected onto the maximally entangled state $\left(\ket{\uparrow\downarrow}-\ket{\downarrow\uparrow}\right)/\sqrt{2}$. Similarly, one can set the photonic frequency such that $\tilde{\phi_1}=\tilde{\phi_2}$, probabilistically generating the entangled state $\left(\ket{\uparrow\uparrow}+\ket{\downarrow\downarrow}\right)/\sqrt{2}$. The efficiency of the entanglement generation increases with the energy detuning until it peaks at around $40-60\%$ of the maximum possible efficiency. This is because a phase shift that maximises the probability of obtaining an orthogonally polarised photon (i.e. $|\tilde{\phi}|\approx\pi/2$) while satisfying the requirement set in Eq. \ref{eq:prob} is easier to achieve in systems that are sufficiently dissimilar.
One physical limitation that needs to be accounted for is the spin decoherence time, $T_2$, whereby the coherence of the superposition of spin states decays mostly due to interactions with nuclear spins, with experimental values for $T_2$ in the range of several \si{\nano \second} \cite{Androvitsaneas2022,Tran2022,Huang2015}. In the case of a single QD, the fidelity of the spin state would reduce by a factor of $\left(1+\exp\left[-t/T_2\right]\right)/2$, where $t$ is the total time taken for all four photons to interact with the spin, with current lifetime values of exciton photons in micropillars reaching a few hundred \si{\pico\second} \cite{Gins2022,Gins2021,Huber2020}, depending on the detuning between the emitter and the cavity. In the case of $n$ QDs, the fidelity would decay by a factor of $\left(1+\exp\left[-nt/T_2\right]\right)/2$. Since the interaction time $t$ is inversely proportional to the register size of the spins, the reduction in fidelity due to spin decoherence when utilising multiple entangled spin states remains the same. Moreover, although there is a reduction in the measurement confidence and fidelity, the spin dephasing has no detrimental effect on the quiescent state of the data qubits once the syndrome has been extracted.
In conclusion, we have shown how the spin--photon interface may be applied to quantum error detection to perform syndrome measurements, specifically by utilising solid-state emitters inside micropillar cavities and optical circular birefringence. Working in the strong-coupling regime, we have also shown that the scheme is robust over the frequency detuning $\delta$ for coupling strengths $g$ routinely reached in experiment, making this proposed scheme a viable practical candidate. Our analysis has centred around the use of the spin state as the measure qubit, however due to the inherent symmetry of the interaction, it is possible to swap around the assignment of the two types of qubits and perform the syndrome measurement with the photonic state instead. Moreover, it might prove to be useful to use entangled spin states in some implementations as increasing the register of measure qubits in this way also allows for flexibility in the connectivity of the code \cite{Chamberland2020,Chamberland2020a} and accommodates for spectral variations between the data qubits. Such a setup may also prove to be a more resource efficient way of physically realising surface codes tailored to biased noise, where Hadamard transformations are applied to certain Pauli matrices of the star and plaquette operators \cite{Tuckett2018,BonillaAtaides2021,Tiurev2022}. We therefore show that entanglement is still possible for QD-systems with varying characteristic energies. This can be done with high levels of fidelity, albeit with lower generation efficiencies, even in lossy systems when working in the strong coupling regime.
Potential directions for future work include the extension of the proposed scheme to other measurement families in quantum error correction. Examples of these include the logical Pauli operators, acting on the whole column or row of the qubit array \cite{Bravyi2014}; lattice surgery, which results in logical operations on the encoded qubits by means of splitting or merging the qubit lattice \cite{Horsman2012}; and measurements in higher-dimensional hypergraph product codes \cite{Zeng2019}. Other avenues to explore would be the generalisation of a Bell-state analyser to a scheme that would allow for the observation and discrimination between non-maximally entangled states, as well as the measurement of photons for further versatility. It is evident that the spin--photon interface has potential applications in various aspects of optical quantum information processing, proving it to be a versatile and integral component in the design of quantum technology and vastly improving the performance of linear optical quantum computing.
\begin{acknowledgments}
EC is supported by an EPSRC studentship. PK is supported by the EPSRC Quantum Communications Hub, Grant No. EP/M013472/1. The authors thank Armanda Quintavalle, Joschka Roffe, Ruth Oulton and Andrew Young for valuable comments and discussions.
\end{acknowledgments}
{}
\widetext
\begin{center}
\textbf{\large Supplemental Material: Spin-augmented observables for efficient photonic quantum error correction}
\end{center}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\setcounter{page}{1}
\makeatletter
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\bibnumfmt}[1]{[S#1]}
\renewcommand{\citenumfont}[1]{S#1}
\section*{Star and plaquette measurements}
We assume that the spin, serving as the measure qubit, is initialised in the state $\ket{+_S}=\left(\ket{\uparrow}+\ket{\downarrow}\right)/\sqrt{2}$. The star and plaquette operators, defined as
\begin{equation}
\mathsf{X}_s =\prod_{j\in \text{star}(s)}\hat{X}_j \qquad \text{ and } \qquad
\mathsf{Z}_p=\prod_{j\in \text{plaq}(p)}\hat{Z}_j,
\end{equation}
respectively, where $\hat{X}$ and $\hat{Z}$ are the Pauli-X and Pauli-Z operators, stabilize the surface code. The eigenbasis for these operators may be expressed as
\begin{equation}\label{eq:eigenbasis}
\left\{\bigotimes_{j\in \text{star}(s)}\ket{i}_j \ \text{s.t.}\ \ket{i}=\ket{+} \text{ or } \ket{-}\right\} \qquad \text{ and } \qquad \left\{\bigotimes_{j\in \text{plaq}(p)}\ket{i}_j \ \text{s.t.}\ \ket{i}=\ket{0} \text{ or } \ket{1}\right\},
\end{equation}
respectively, where $\ket{\pm}=\left(\ket{0}\pm\ket{1}\right)/\sqrt{2}$.
We encode the logical $\ket{0}$ and logical $\ket{1}$ qubits into the left- and right-handed polarisations, $\ket{L}$ and $\ket{R}$ respectively, and apply a Hadamard gate before and after interaction in the case of a star measurement, such that $\ket{0}\leftrightarrow\ket{+}$ and $\ket{1}\leftrightarrow\ket{-}$. Then the evolution of a photonic eigenstate and the spin state can be expressed as
\begin{equation}\label{equation:SMstate}
\begin{split}
\bigotimes_{\substack{j\in \text{star}(s) \\ \text{or }\text{plaq}(p)}}\ket{i_j}\otimes\ket{+_S}\rightarrow & \bigotimes_{j}\exp\left(\I\phi_{j,\uparrow}\right)\ket{i_j}\otimes\ket{\uparrow}/\sqrt{2}+\bigotimes_{j}\exp\left(\I\phi_{j,\downarrow}\right)\ket{i_j}\otimes\ket{\downarrow}/\sqrt{2}\\
&=\bigotimes_j \exp\left(\I\phi_{j,\uparrow}\right)\ket{i_j}\otimes\left[\ket{\uparrow}+\prod_k\exp\left[-\I\left(\phi_{k,\uparrow}-\phi_{k,\downarrow}\right)\right]\ket{\downarrow}\right]/\sqrt{2}\\
&=e^{4\I\phi_0}\bigotimes_j \exp\left(\I\tilde{\phi}\delta_{i_jL}\right)\ket{i_j}\otimes\left[\ket{\uparrow}+\prod_k\exp\left[-\I\left(\phi_{k,\uparrow}-\phi_{k,\downarrow}\right)\right]\ket{\downarrow}\right]/\sqrt{2},
\end{split}
\end{equation}
where $\ket{i_j}\in\{\ket{L},\ket{R}\}$, $j$ and $k$ index the same four photonic qubits in a plaquette or star set, $\delta_{i_j L}$ is the Kronecker delta, and $\phi_{j,\ast}$ is the phase shift resulting from the interaction between the photonic state $\ket{i_j}$ and spin state $\ket{\ast}\in\left\{\ket{\uparrow},\ket{\downarrow}\right\}$. This gives us Eq. \ref{equation:state} up to a global phase $\exp\left[4\I\phi_0\right]$. (One may also consider an initial spin state of $\ket{-_S}=\left( \ket{\uparrow}-\ket{\downarrow}\right)/\sqrt{2}$, and perform syndrome extraction in a similar manner.)
\section*{Considering boundary conditions}
So far, we have considered the toric code, which exhibits only periodic boundary conditions. Different surface codes may also include various types of boundaries that result in modifications of the stabilizer operators applied to the data qubits located along these boundaries. Consider boundaries in the qubit array as shown in Fig. \ref{fig:sandp2}. The spin--photon interface can be used for these syndrome measurements as well, without requiring a change in the frequency detuning $\delta$, such that
\begin{equation}
\bigotimes_{\substack{j\in \text{star}(s) \\ \text{or }\text{plaq}(p)}}\ket{i_j}\otimes\ket{+_S}\rightarrow e^{3\I\phi_0}\bigotimes_j \exp\left(\I\tilde{\phi}\delta_{i_jL}\right)\ket{i_j}\otimes\left[\ket{\uparrow}+\prod_k\exp\left[-\I\left(\phi_{k,\uparrow}-\phi_{k,\downarrow}\right)\right]\ket{\downarrow}\right]/\sqrt{2}.\\
\end{equation}
As the accumulation of the relative phase in the electron spin state depends on the parity of the photonic state, where the $+1$ ($-1$) eigenstates are of even (odd) parity, the state evolves to $\ket{L}_S=\left(\ket{\uparrow}+\I\ket{\downarrow}\right)/\sqrt{2}$ ($\ket{R}_S=\left(\ket{\uparrow}-\I\ket{\downarrow}\right)/\sqrt{2}$). The syndrome can therefore be obtained by measuring the spin in the $\hat{Y}$-basis. Any corrections to the phase shift of the photonic state are addressed in the same manner as for weight-four operators.
\begin{figure}
\caption{A representation of boundary conditions in the two-dimensional surface code.}
\label{fig:sandp2}
\end{figure}
\section*{Swapping the assignment of the data and measure qubits}
The interaction between the photon and the spin is symmetric, meaning that the assignment of the data and measure qubits may be swapped around. We show this explicitly by starting off from, say, a horizontally polarised photon $\ket{H}=\left(\ket{L}+\ket{R}\right)/\sqrt{2}$ that then interacts consecutively with each spin state. The total state transforms to
\begin{equation}
\bigotimes_{\substack{j\in \text{star}(s) \\ \text{or }\text{plaq}(p)}}\ket{i_j}\otimes\ket{H}\rightarrow e^{4\I\phi_0}\bigotimes_j \exp\left(\I\tilde{\phi}\delta_{i_j\uparrow}\right)\ket{i}_j\otimes\left[\ket{L}+\prod_k\exp\left[-\I\left(\phi_{k,L}-\phi_{k,R}\right)\right]\ket{R}\right]/\sqrt{2},
\end{equation}
where now $\ket{i_j}\in\left\{\ket{\uparrow},\ket{\downarrow}\right\}$ and $\phi_{j,\ast}$ is the phase shift resulting from the interaction between the spin state $\ket{i_j}$ and the photonic state $\ket{\ast}\in\left\{\ket{L},\ket{R}\right\}$. The detection of a horizontally (vertically, $\ket{V}=-\I\left(\ket{L}-\ket{R}\right)/\sqrt{2}$) polarised photon would signal a syndrome of $+1$ ($-1$). In the case of a syndrome measurement at the boundaries, the photonic state evolves to $\ket{L}\pm\I\ket{R}$, which can be easily resolved into $\ket{H}$ and $\ket{V}$ by adding a polarisation-dependent $\pi/2$ phase shift post-interaction and before photon-detection.
In order to correct for possible changes in the relative phase shifts between the eigenstates making up the quiescent state, it is sufficient to simply allow a $\ket{R}$ photon to interact with each data qubit. Then
\begin{equation}\begin{split}
e^{4\I\phi_0}\bigotimes_j \exp\left(\I\tilde{\phi}\delta_{i_j\uparrow}\right)\ket{i}_j\otimes\ket{R}\rightarrow & e^{4\I\phi_0}\bigotimes_j \exp\left[\I\left(\tilde{\phi}\delta_{i_j\uparrow}+\phi_0 \delta_{i_j\uparrow}+\phi_h \delta_{i_j\downarrow}\right)\right]\ket{i}_j\otimes\ket{R}\\
& = \exp\left[4\I\left(\phi_0+\phi_h \right)\right]\bigotimes_j\ket{i}_j\otimes\ket{R},
\end{split}
\end{equation}
where $\tilde{\phi}+\phi_0=\phi_h$ and the reflection coefficient described in Eq. \ref{eq:rh} is independent of the spin orientation. This way, every eigenstate obtains an equivalent total phase shift, rendering just an overall global phase shift to the surface code.
\section*{Entanglement generation}
\begin{figure}
\caption{Heralded efficiency, $\eta$, of the entanglement generation protocol as a function of $\Delta/\kappa$, where $\Delta=\omega_{X_1}-\omega_{X_2}$.}
\label{fig:plot1}
\end{figure}
In Fig. \ref{fig:plot1} we show the heralded efficiency, $\eta$, of the entangling procedure described by Eq. \ref{eq:prob} as a function of the characteristic energy detuning, $\Delta=\omega_{X_1}-\omega_{X_2}$, for varying cavity decay rates $\kappa$ and coupling strengths $g$. (We work here both in the weak- and the strong-coupling regime to show that the entangling procedure may be employed in either.) We consider only the case of $\tilde{\phi_1}=-\tilde{\phi_2}$ due to higher probabilities of success, as phase shifts that satisfy this condition for QD--spin systems exhibiting typical variations can approach $\pm\pi/2$ very closely. Looking at Eq. \ref{eq:prob}, this maximises the probability of measuring an orthogonally polarised photon. Also, variations in the QD linewidths may marginally enhance the efficiency; however, the effect of this decreases as $g$ is increased.
We also show in Fig. \ref{fig:plot2} the effect of spectral variations as well as side-cavity losses, characterised by $\kappa_s$, on the fidelity of the entangled state:
\begin{equation}
\mathcal{F}=\frac{|r_{h_1}r_{h_2}-r_{0_1}r_{0_2}|^2}{|r_{h_1}r_{h_2}-r_{0_1}r_{0_2}|^2+|r_{h_1}r_{0_2}-r_{0_1}r_{h_2}|^2},
\end{equation}
where $r_{h_i}$ and $r_{0_i}$ are the reflection coefficients for the QD-coupled and empty cavity cases for system $i$, respectively.
\begin{figure}
\caption{Fidelity, $\mathcal{F}$, of the entangled state in the presence of spectral variations and side-cavity losses $\kappa_s$.}
\label{fig:plot2}
\end{figure}
\end{document}
\begin{document}
\title{Solutions with peaks for a coagulation-fragmentation equation. Part II: aggregation in peaks}
\begin{abstract}
The aim of this two-part paper is to investigate the stability properties of a special class of solutions to a coagulation-fragmentation equation. We assume
that the coagulation kernel is close to the diagonal kernel, and that the fragmentation kernel is diagonal. In a companion paper we constructed a two-parameter family of stationary solutions concentrated in Dirac masses, and we carefully studied the asymptotic decay of the tails of these solutions, showing that this behaviour is stable. In this paper we prove that for initial data which are sufficiently concentrated, the corresponding solutions approach one of these stationary solutions for large times.
\end{abstract}
\section{Introduction} \label{sect:intro}
The aim of this two-part paper is to investigate the stability properties of a special class of solutions to a coagulation-fragmentation model.
We consider the evolution equation
\begin{equation} \label{eq:coagfrag}
\begin{split}
\partial_t f(\xi,t) = \mathscr{C}[f](\xi,t) + \mathscr{F}[f](\xi,t)\,,
\end{split}
\end{equation}
where the coagulation operator and the fragmentation operator are respectively defined as
\begin{equation} \label{coag}
\mathscr{C}[f](\xi,t) := \frac12\int_0^{\xi} K(\xi-\eta,\eta)f(\xi-\eta,t)f(\eta,t)\,\mathrm{d} \eta - \int_0^\infty K(\xi,\eta)f(\xi,t)f(\eta,t)\,\mathrm{d} \eta\,,
\end{equation}
\begin{equation} \label{frag}
\mathscr{F}[f](\xi,t) := \int_0^\infty \Gamma(\xi+\eta,\eta)f(\xi+\eta,t)\,\mathrm{d}\eta - \frac12\int_0^\xi \Gamma(\xi,\eta)f(\xi,t)\,\mathrm{d}\eta\,.
\end{equation}
We consider a coagulation kernel $K$ compactly supported around the diagonal $\{\xi=\eta\}$, and a diagonal fragmentation kernel $\Gamma$ (see Section~\ref{sect:setting} for the precise assumptions).
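Although all of the analysis below is carried out analytically, readers who wish to experiment numerically may find the following crude rectangle-rule discretisation of the operators \eqref{coag}--\eqref{frag} useful; it is ours and purely illustrative, and the constant kernels at the end are placeholders rather than the near-diagonal and diagonal kernels studied in this paper.
\begin{verbatim}
# Rectangle-rule discretisation of the coagulation and fragmentation
# operators defined above, on the grid xi_i = i*h, i = 1..N (purely
# illustrative; the kernels below are placeholders).
import numpy as np

def coag_frag_rhs(f, h, K, Gamma):
    N = len(f)
    xi = h * np.arange(1, N + 1)
    df = np.zeros(N)
    for i in range(N):
        # coagulation gain: (1/2) int_0^xi K(xi-eta,eta) f(xi-eta) f(eta) deta
        gain_c = 0.5 * h * sum(K(xi[i-1-j], xi[j]) * f[i-1-j] * f[j]
                               for j in range(i))
        # coagulation loss: f(xi) int_0^infty K(xi,eta) f(eta) deta (truncated)
        loss_c = h * f[i] * sum(K(xi[i], xi[j]) * f[j] for j in range(N))
        # fragmentation gain: int_0^infty Gamma(xi+eta,eta) f(xi+eta) deta
        gain_f = h * sum(Gamma(xi[i] + xi[j], xi[j]) * f[i+j+1]
                         for j in range(N - 1 - i))
        # fragmentation loss: (1/2) f(xi) int_0^xi Gamma(xi,eta) deta
        loss_f = 0.5 * h * f[i] * sum(Gamma(xi[i], xi[j]) for j in range(i))
        df[i] = gain_c - loss_c + gain_f - loss_f
    return df

K = lambda x, y: 1.0          # placeholder kernels
Gamma = lambda x, y: 1.0
h, N = 0.1, 200
f0 = np.exp(-h * np.arange(1, N + 1))
print(coag_frag_rhs(f0, h, K, Gamma)[:5])
\end{verbatim}
On a uniform grid each evaluation costs $O(N^2)$ operations, so this is only meant for quick experiments.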
In a companion paper \cite{BNVd} we started the investigation of the stability properties of a family of stationary solutions to \eqref{eq:coagfrag} with peaks
concentrated in Dirac masses; the goal of this paper is to show that, for a class of initial data, the corresponding solutions to the evolution equation \eqref{eq:coagfrag}
converge for large times to one of these stationary solutions.
The coagulation-fragmentation equation with such kernels thus provides an example of nonuniqueness of stationary solutions with given mass. In addition, even though a detailed balance
condition is satisfied, one cannot exploit a corresponding entropy as in \cite{LM03,Can07}. Another motivation for our study is that for the pure coagulation equation
with kernels that concentrate near the diagonal there is evidence of evolution into time-periodic peak solutions \cite{HNV16} and we expect that the techniques
developed here will also be useful for a corresponding study.
We refer to the introduction of \cite{BNVd} for more detailed motivations, bibliographical references and related questions, and to \cite{BLL19a,BLL19b} for
general background on coagulation-fragmentation equations. We pass now to describe the main result proved in this paper.
It is shown in \cite{BNVd} (see also Proposition~\ref{prop:stationary} below) that, given any value of the total mass $M>0$ and a shifting parameter $\rho\in[0,1)$, there exists a stationary (measure) solution to \varepsilonqref{eq:coagfrag} in the form
\begin{equation} \label{intro1}
f_p(\xi;M,\rho) = \sum_{n=-\infty}^\infty f_n(M,\rho)\,\delta(\xi-2^{n+\rho}),
\end{equation}
with total mass
\begin{equation} \label{intro2}
\int_0^\infty \xi f_p(\xi;M,\rho) \,\mathrm{d}\xi = \sum_{n=-\infty}^\infty 2^{n+\rho}f_n(M,\rho) = M.
\end{equation}
The measure $f_p(\cdot\,;M,\rho)$ is concentrated in the discrete set $\{2^{n+\rho}\}_{n\in\mathbb Z}$.
In the main result of this paper (Theorem~\ref{thm:stability}) we show that, for a class of initial data compactly supported around the points $\{2^{n}\}_{n\in\mathbb Z}$, a solution to \eqref{eq:coagfrag} converges as $t\to\infty$ to one of the discrete measures $f_p(\cdot\,;M,\rho)$, for some $\rho\in[0,1)$, with the same mass $M$ as the initial datum.
The class of initial data for which this stability result holds is determined in terms of the asymptotic behaviour as $\xi\to\infty$. We consider an initial datum $f_0\in\mathcal{M}^+(0,\infty)$ with total mass $M=\int_0^\infty \xi f_0(\xi)\,\mathrm{d}\xi$ such that
\begin{equation} \label{intro3}
\mathbb Supp f_0 \mathbb Subset \bigcup_{n\in\mathbb Z}(2^{n-\,\mathrm{d}lta_0},2^{n+\,\mathrm{d}lta_0})
\varepsilonnd{equation}
for some $\,\mathrm{d}lta_0>0$ sufficiently small. Secondly, by introducing the quantities
\begin{equation} \label{intro5}
m_n(0) := \int_{(2^{n-\,\mathrm{d}lta_0},2^{n+\,\mathrm{d}lta_0})} f_0(\xi)\,\mathrm{d}\xi
\varepsilonnd{equation}
representing the number of particles located around the point $2^n$, we assume that the sequence $\{m_n(0)\}_{n\in\mathbb Z}$ is a small perturbation of the coefficients of one of the stationary states \varepsilonqref{intro1} (not necessarily the one with the same mass), in the sense that
\begin{equation} \label{intro4}
m_n(0) = (1+\varepsilon_n^0) f_n(M^0,\rho^0) , \qquad n\in\mathbb Z,
\varepsilonnd{equation}
for some $M^0>0$, $\rho^0>0$, and for a sequence $|\varepsilon_n^0|\leq\,\mathrm{d}lta_0$. Then our main result can be stated in the following form: there exists $\,\mathrm{d}lta_0>0$ small enough, depending ultimately only on the total mass $M>0$ of the initial datum, such that if $f_0$ satisfies \varepsilonqref{intro3}--\varepsilonqref{intro4} then there exists a (weak) solution to \varepsilonqref{eq:coagfrag} with initial datum $f_0$ which converges, as $t\to\infty$, to a stationary solution $f_p(\cdot\,;M,\rho)$, where $M$ is the mass of $f_0$ and $\rho\in[0,1)$.
This stability property results from the combination of two main effects. We introduce the first and second moments around each peak of the solution $f(\xi,t)$ at time $t$:
\begin{equation*}
\begin{split}
p_n(t) &:= \frac{1}{m_n(t)}\int_{2^{n-\,\mathrm{d}lta_0}}^{2^{n+\,\mathrm{d}lta_0}} \frac{\ln(\xi/2^n)}{\ln2}f(\xi,t)\,\mathrm{d}\xi,\\
q_n(t) &:= \frac{1}{m_n(t)}\int_{2^{n-\,\mathrm{d}lta_0}}^{2^{n+\,\mathrm{d}lta_0}} \Bigl( \frac{\ln(\xi/2^n)}{\ln 2}-p_n(t) \Bigr)^2 f(\xi,t)\,\mathrm{d}\xi
\varepsilonnd{split}
\varepsilonnd{equation*}
(where $m_n(t)$ is defined as in \varepsilonqref{intro5} with $f_0$ replaced by $f(\cdot,t)$). Notice that, for a solution concentrated in peaks in the form \varepsilonqref{intro1}, one has $p_n=\rho$, $q_n=0$ for all $n\in\mathbb Z$. We show that all the first moments $p_n(t)$ tend to align, as $t\to\infty$, to a common value $\rho\in[0,1)$, which describes the asymptotic position of the peaks. Furthermore, the second moments $q_n(t)$ converge exponentially to 0 as $t\to\infty$, for every $n\in\mathbb Z$: this yields concentration in peaks.
This asymptotic behaviour of the functions $p_n(t)$, $q_n(t)$ can be obtained by a careful analysis of the corresponding evolution equations. In turn, this relies on a representation of the functions $m_n(t)$, at each time, as a perturbation of a stationary state: indeed by a fixed point argument we show that an identity in the form \varepsilonqref{intro4} holds for every positive time $t$, with $M^0$ replaced by a value $M(t)$ which converges to the mass $M$ of the solution as $t\to\infty$. We refer to the beginning of Section~\ref{sect:strategy} for a more detailed discussion of the general strategy of the proof.
We conclude by noting that the result obtained in \cite{BNVd} can be seen as a particular case of the stability theorem proved in this paper: it corresponds to the case in which the initial datum is already supported at the points $\{2^{n+\rho}\}_{n\in\mathbb Z}$, and therefore all the shifting coefficients $p_n(t)$ are constant and equal to $\rho$, and all the variances $q_n(t)$ vanish identically. Hence the proof in \cite{BNVd} can be used also as a guide to get an insight of the main strategy, which is here technically more involved due to the presence of a dispersion around the peaks and of a not uniform shifting.
\noindent\textbf{Structure of the paper.} The paper is organized as follows. In Section~\ref{sect:setting} we formulate the precise assumptions on the coagulation and fragmentation kernels, we recall from \cite{BNVd} the construction of the family of stationary solutions in the form \varepsilonqref{intro1}, and we state the main result of the paper. In Section~\ref{sect:strategy} we discuss the strategy of the proof and we introduce some auxiliary results. In Section~\ref{sect:linear} we state the regularity result on the linearized equation, whose proof is postponed to Appendix~\ref{sect:appendix}. Finally, in Section~\ref{sect:proof} we give the proof of the main result of this paper.
\section{Setting and main result} \label{sect:setting}
\subsection{Assumptions on the kernels} \label{subsect:kernel}
We assume the following: the coagulation kernel is supported near the diagonal and has the form
\begin{equation} \label{kernel1}
K(\xi,\eta) = \frac{1}{\xi+\eta}k\Bigl(\frac{\xi+\eta}{2}\Bigr) Q\Bigl(\frac{2\eta}{\xi+\eta}-1\Bigr)\,,
\end{equation}
where $k\in C^2((0,\infty))$, $k>0$, satisfies the growth conditions
\begin{align}
\qquad\qquad\qquad&k(\xi)\sim\xi^{\alpha+1} & &\text{as }\xi\to\infty, \quad\alpha\in(0,1), & \qquad\qquad\qquad \label{kernel2}\\
\qquad\qquad\qquad&k(\xi)= k_0 + O(\xi^{\bar{\alpha}}) & &\text{as }\xi\to0^+, \quad\bar{\alpha}>1, & \qquad\qquad\qquad \label{kernel2bis}
\end{align}
\begin{align}
|k'(\xi)|&\leq k_1\xi^\alpha \quad\text{for $\xi\geq1$,} \qquad |k'(\xi)|\leq k_1\xi^{\bar{\alpha}-1} \quad\text{for $\xi\leq1$,} \label{kernel2ter} \\
|k''(\xi)|&\leq k_2\xi^{\alpha-1} \quad\text{for $\xi\geq1$,} \qquad |k''(\xi)|\leq k_2\xi^{\bar{\alpha}-2} \quad\text{for $\xi\leq1$,} \label{kernel2quater}
\end{align}
for some $k_0,k_1,k_2>0$, and $Q$ is a cut-off function such that
\begin{equation} \label{kernel3}
Q\in C^2(\mathbb R), \quad Q\geq0, \quad Q(0)=1, \quad \operatorname{supp} Q\subset\bigl(-{\textstyle\frac13},{\textstyle\frac13}\bigr), \quad Q(\xi)=Q(-\xi).
\end{equation}
The kernel $K$ has been written in the form \eqref{kernel1} to emphasize that it is close to the diagonal kernel in the sense of measures. The condition on the support of $Q$ guarantees that, for solutions concentrated in Dirac masses at points $\{2^n\}_{n\in\mathbb Z}$, the different peaks do not interact with each other; in particular
\begin{equation} \label{suppK0}
\operatorname{supp} K(\xi,\eta) \subset \Bigl\{ \frac12\xi < \eta < 2\xi \Bigr\}.
\end{equation}
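For later reference we record the elementary computation behind \eqref{suppK0}: by \eqref{kernel1} and \eqref{kernel3}, the kernel $K(\xi,\eta)$ can be nonzero only if $\bigl|\frac{2\eta}{\xi+\eta}-1\bigr|<\frac13$, and for $\xi,\eta>0$
\begin{equation*}
\frac23 < \frac{2\eta}{\xi+\eta} < \frac43
\qquad\Longleftrightarrow\qquad
\frac12\xi < \eta < 2\xi\,.
\end{equation*}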
As for the fragmentation kernel $\Gamma(\xi,\eta)$, we assume that
\begin{equation} \label{kernel4}
\Gamma(\xi,\eta) = \gamma(\xi)\delta(\xi-2\eta)\,,
\end{equation}
where $\gamma\in C^2((0,\infty))$, $\gamma(\xi)>0$, satisfies the growth conditions
\begin{align}
\qquad\qquad\qquad&\gamma(\xi)=\xi^{\beta} + O(\xi^{\tilde{\beta}}) & &\text{as }\xi\to\infty, \quad\beta\in(1,2),\; \tilde{\beta}<\beta & \qquad\qquad\qquad \label{kernel5}\\
\qquad\qquad\qquad&\gamma(\xi)= \gamma_0 + O(\xi^{\bar{\beta}}) & &\text{as }\xi\to0^+, \quad\bar{\beta}>1, & \qquad\qquad\qquad \label{kernel5bis}
\end{align}
\begin{align}
|\gamma'(\xi)|&\leq \gamma_1\xi^{\beta-1} \quad\text{for $\xi\geq1$,} \qquad |\gamma'(\xi)|\leq \gamma_1\xi^{\bar{\beta}-1} \quad\text{for $\xi\leq1$,} \label{kernel5ter} \\
|\gamma''(\xi)|&\leq \gamma_2\xi^{\beta-2} \quad\text{for $\xi\geq1$,} \qquad |\gamma''(\xi)|\leq \gamma_2\xi^{\bar{\beta}-2} \quad\text{for $\xi\leq1$,} \label{kernel5quater}
\end{align}
for some $\gamma_0,\gamma_1,\gamma_2>0$.
\subsection{Logarithmic variables} \label{sect:variables}
It is convenient to go over to logarithmic variables, which will be used throughout the rest of the paper: we set
\begin{equation}\label{variables}
g(x,t) := \xi f(\xi,t), \qquad \xi=2^{x}.
\end{equation}
After an elementary change of variables, \eqref{eq:coagfrag} takes the form
\begin{equation} \label{eq:coagfrag2}
\partial_t g(x,t) = \mathscr{C}[g](x,t) + \mathscr{F}[g](x,t)\,,
\end{equation}
where the coagulation operator $\mathscr{C}$ and the fragmentation operator $\mathscr{F}$ are now given by
\begin{equation} \label{coag2}
\begin{split}
\mathscr{C}[g](x,t) &:= \frac{\ln2}{2}\int_{-\infty}^{x} \frac{2^xK(2^x-2^y,2^y)}{2^x-2^y} g\Bigl(\frac{\ln(2^x-2^y)}{\ln2},t\Bigr)g(y,t)\,\mathrm{d} y \\
&\qquad - \ln2\int_{-\infty}^\infty K(2^x,2^y)g(x,t)g(y,t)\,\mathrm{d} y\,,
\end{split}
\end{equation}
\begin{equation} \label{frag2}
\begin{split}
\mathscr{F}[g](x,t) &:= \ln2\int_{-\infty}^\infty \frac{2^{x+y}\Gamma(2^x+2^y,2^y)}{2^x+2^y} g\Bigl(\frac{\ln(2^x+2^y)}{\ln2},t\Bigr)\,\mathrm{d} y \\
&\qquad - \frac{\ln2}{2}\int_{-\infty}^x \Gamma(2^x,2^y)g(x,t)2^y\,\mathrm{d} y
\end{split}
\end{equation}
and mass conservation is expressed by
\begin{equation} \label{mass}
\int_{\mathbb R} 2^x g(x,t)\,\mathrm{d} x = \int_{\mathbb R} 2^xg(x,0)\,\mathrm{d} x \qquad\text{for all }t>0.
\end{equation}
\subsection{Weak formulation and well-posedness} \label{subsect:weaksol}
In \cite{BNVd} we introduced the following notion of weak solution in the space of positive Radon measures $g\in\mathcal{M}_+(\mathbb R)$, which allows us to consider solutions to \eqref{eq:coagfrag2} concentrated in Dirac masses. In the following, with an abuse of notation, we denote by $\int_A\phi(x)g(x)\,\mathrm{d} x$ the integral of $\phi$ on $A\subset\mathbb R$ with respect to the measure $g$, also in the case that $g$ is not absolutely continuous with respect to the Lebesgue measure.
\begin{definition}[Weak solution] \label{def:weakg}
A map $g\in C([0,T];\mathcal{M}_+(\mathbb R))$ is a \emph{weak solution} to \eqref{eq:coagfrag2} in $[0,T]$ with initial condition $g_0\in\mathcal{M}_+(\mathbb R)$ if for every $t\in[0,T]$
\begin{equation} \label{weakg}
\begin{split}
\partial_t\biggl(\int_{\mathbb R} &g(x,t)\varphi(x)\,\mathrm{d} x \biggr) \\
& = \frac{\ln2}{2}\int_{\mathbb R}\int_{\mathbb R} K(2^y,2^z)g(y,t)g(z,t)\biggl[\varphi\Bigl(\frac{\ln(2^y+2^z)}{\ln2}\Bigr) - \varphi(y) - \varphi(z) \biggr]\,\mathrm{d} y \,\mathrm{d} z \\
& \qquad - \frac14\int_{\mathbb R} \gamma(2^{y+1}) g(y+1) \bigl[\varphi(y+1) - 2\varphi(y) \bigr]\,\mathrm{d} y
\end{split}
\end{equation}
for every test function $\varphi\in C(\mathbb R)$ such that $\lim_{x\to-\infty}\varphi(x)<\infty$ and $\varphi(x)\lesssim 2^x$ as $x\to\infty$, and $g(\cdot,0)=g_0$.
\end{definition}
Notice that, in order for the fragmentation term on the right-hand side of \eqref{weakg} to be well-defined for test functions with exponential growth at infinity,
in view of the growth assumption \eqref{kernel5} we require in the definition the finiteness of the integral $\int_{\mathbb R}2^{(1+\beta)x}g(x)\,\mathrm{d} x$. In particular,
as the test function $\varphi(x)=2^x$ is admissible, this notion of weak solution guarantees the mass conservation property \eqref{mass}.
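To illustrate the last claim, choosing the admissible test function $\varphi(x)=2^x$ in \eqref{weakg} makes both brackets on the right-hand side vanish, since
\begin{equation*}
(2^y+2^z) - 2^y - 2^z = 0, \qquad 2^{y+1} - 2\cdot 2^y = 0,
\end{equation*}
so that $\frac{\,\mathrm{d}}{\,\mathrm{d} t}\int_{\mathbb R}2^xg(x,t)\,\mathrm{d} x=0$, which is precisely \eqref{mass}.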
The existence of a (global in time) weak solution for a suitable class of initial data is proved in \cite{BNVd}.
We report the statement here for the reader's convenience.
\begin{theorem}[Existence of weak solutions] \label{thm:wp}
Suppose that $g_0\in\mathcal{M}_+(\mathbb R)$ satisfies
\begin{equation} \label{moment0}
\|g_0\| := \sup_{\substack{n\in\mathbb Z\\n< 0}} \, \frac{1}{2^n}\int_{[n,n+1)}g_0(x)\,\mathrm{d} x + \int_{[0,\infty)} g_0(x)\,\mathrm{d} x <\infty, \qquad \int_{\mathbb R} 2^{\theta x} g_0(x)\,\mathrm{d} x <\infty
\end{equation}
for some $\theta>\beta+1$. Then there exists a global weak solution $g$ to \eqref{eq:coagfrag2} with initial datum $g_0$, according to Definition~\ref{def:weakg}, which satisfies for all $T>0$
\begin{equation} \label{wpestg}
\sup_{0\leq t\leq T}\|g(\cdot,t)\| \leq C(T, g_0), \qquad
\sup_{0\leq t\leq T}\int_{\mathbb R} 2^{\theta x} g(x,t)\,\mathrm{d} x \leq C(T, g_0),
\end{equation}
where $C(T, g_0)$ denotes a constant depending on $T$, $g_0$, and on the properties of the kernels.
\end{theorem}
It is convenient to introduce a notation for the right-hand side of the weak equation \eqref{weakg}, evaluated on a given test function $\varphi$: therefore we define the operators
\begin{align}
B_{\mathrm c}[g,g;\varphi]
& :=\frac{\ln2}{2}\int_{\mathbb R}\int_{\mathbb R} K(2^y,2^z)g(y)g(z)\biggl[\varphi\Bigl(\frac{\ln(2^y+2^z)}{\ln2}\Bigr) - \varphi(y) - \varphi(z) \biggr]\,\mathrm{d} y \,\mathrm{d} z \,, \label{rhsweakc} \\
B_{\mathrm f}[g;\varphi]
&:= \frac14\int_{\mathbb R} \gamma(2^{y+1}) g(y+1) \bigl[\varphi(y+1) - 2\varphi(y) \bigr]\,\mathrm{d} y \,. \label{rhsweakf}
\end{align}
\begin{remark} \label{rm:supp}
Notice that, in view of \eqref{suppK0}, the coagulation kernel $K$ evaluated at the point $(2^y,2^z)$ is supported in the region $\{ |y-z|<1 \}$. In particular, there exists $\varepsilon_0\in(0,1)$, determined by the kernel $K$, such that
\begin{equation} \label{suppK}
\operatorname{supp} K(2^y,2^z) \subset \bigl\{ |y-z|<\varepsilon_0 \bigr\}.
\end{equation}
Furthermore, in view of the asymptotic properties \eqref{kernel2}--\eqref{kernel2bis} we have a uniform estimate
\begin{equation} \label{estK}
(2^y+2^z)K(2^y,2^z) \leq C_K (1+2^y2^z) \qquad\text{for every $y,z\in\mathbb R$, $|y-z|<1$.}
\end{equation}
\end{remark}
\subsection{Stationary solutions} \label{subsect:stationary}
In \cite{BNVd} we proved that the equation \eqref{eq:coagfrag2} admits a two-parameter family of stationary solutions supported in a set of Dirac masses at integer distance, of the form
\begin{equation}\label{peak1}
g_p(x;A,\rho) = \sum_{n=-\infty}^\infty a_n(A,\rho) \delta(x-n-\rho).
\end{equation}
The parameter $\rho\in[0,1)$ fixes the shifting of the peaks with respect to the integers, while $A>0$ characterizes the decay of the solution as $x\to\infty$ (see \eqref{peak5}). In particular, the parameter $A$ is in one-to-one correspondence with the total mass $M$ of the solution: given any $\rho\in[0,1)$ and any value of the total mass $M$, there exists a unique value $A_{M,\rho}>0$ such that the corresponding stationary solution $g_p(\cdot;A_{M,\rho},\rho)$ satisfies the mass constraint
\begin{equation} \label{peak2}
\int_{\mathbb R}2^{x}g_p(x;A_{M,\rho},\rho) \,\mathrm{d} x = \sum_{n=-\infty}^\infty 2^{n+\rho}a_n(A_{M,\rho},\rho) = M \,.
\end{equation}
By plugging the expression \eqref{peak1} into the weak formulation \eqref{weakg} of the equation, we see that $g_p$ is a stationary solution if the coefficients $a_n$ satisfy the recurrence equation
\begin{equation} \label{eq:stat}
a_{n+1} = \zeta_{n,\rho}a_n^2\,,
\qquad\text{where }
\zeta_{n,\rho} := \frac{\ln2}{2^{n+\rho}} \frac{k(2^{n+\rho})}{\gamma(2^{n+\rho+1})} \,.
\end{equation}
The existence of stationary solutions in the form \eqref{peak1} is guaranteed by the following proposition, proved in \cite{BNVd}.
\begin{proposition}[Stationary peak solutions] \label{prop:stationary}
Let $\rho\in[0,1)$ and $A>0$ be given. There exists a unique family of coefficients $\{a_n(A,\rho)\}_{n\in\mathbb Z}$ solving \eqref{eq:stat} which are positive, bounded, and satisfy
\begin{equation} \label{peak5}
\begin{split}
a_n & = a_{-\infty}\bigl(2^n + A_02^{2n}\bigr) + o(2^{2n}) \qquad\qquad\text{as }n\to-\infty, \\
a_n & \sim a_\infty 2^{(\beta-\alpha)n}e^{-A2^n} \qquad\qquad\qquad\qquad\text{as } n\to\infty,
\end{split}
\end{equation}
where $a_{-\infty}:=\frac{\gamma_0 2^{\rho+1}}{k_0\ln2}$, $a_\infty:=(\ln2)^{-1}2^\beta2^{(\beta-\alpha)(\rho+1)}$, and $A_0$ is uniquely determined by $A$.
In particular, the measure $g_p(\cdot;A,\rho)$ defined by \eqref{peak1} is a stationary solution to \eqref{eq:coagfrag2}.
\end{proposition}
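As an informal consistency check of the value of $a_{-\infty}$ (not needed in the sequel), one may insert the ansatz $a_n\approx a_{-\infty}2^n$ into the recurrence \eqref{eq:stat}: by \eqref{kernel2bis} and \eqref{kernel5bis} one has $\zeta_{n,\rho}\approx\frac{k_0\ln2}{\gamma_0}2^{-(n+\rho)}$ as $n\to-\infty$, and therefore
\begin{equation*}
a_{-\infty}2^{n+1} \approx \frac{k_0\ln2}{\gamma_0}2^{-(n+\rho)}\,a_{-\infty}^2\,2^{2n}
\qquad\Longleftrightarrow\qquad
a_{-\infty} \approx \frac{\gamma_0\,2^{\rho+1}}{k_0\ln2}\,,
\end{equation*}
which is the constant appearing in Proposition~\ref{prop:stationary}.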
In the proof of the main result of this paper we will not directly work with the solutions to the stationary equation \eqref{eq:stat} constructed in Proposition~\ref{prop:stationary}, as we did in \cite{BNVd}, but we will need to consider the more general case in which the parameter $\rho$ in \eqref{eq:stat} actually depends on $n$. More precisely, we assume that $p=\{p_n\}_{n\in\mathbb Z}$ is a given sequence such that $|p_n|\leq\delta_0$ for some given $\delta_0\in(0,1)$, and we consider the recurrence equation
\begin{equation} \label{nearlystat1}
\bar{m}_{n+1} = \zeta_{n}(p)\bar{m}_n^2\,,
\qquad\text{where}\quad
\zeta_{n}(p) := \frac{\ln2}{2^{n+p_n}} \frac{k(2^{n+p_n})}{\gamma(2^{n+1+p_{n+1}})} \,.
\end{equation}
The values $\bar{m}_n$ represent the coefficients of a stationary solution with peaks at the points $n+p_n$, $n\in\mathbb Z$.
We generalize Proposition~\ref{prop:stationary} to the case of a nonconstant shifting $p$ in Lemma~\ref{lem:nearlystat} below.
It will sometimes be convenient to switch to the new variables
\begin{equation} \label{nearlystat2}
\bar{\mu}_n := \frac12\zeta_{n}(p)\bar{m}_n\,,
\end{equation}
solving
\begin{equation} \label{nearlystat3}
\bar{\mu}_{n+1} = \theta_n(p) \bar{\mu}_n^2\,,
\qquad \text{with}\quad
\theta_n(p) := \frac{2\zeta_{n+1}(p)}{\zeta_{n}(p)}\,.
\end{equation}
We also have the relation
\begin{equation} \label{nearlystat3bis}
\frac{\bar{m}_{n+1}}{\bar{m}_n} = 2\bar{\mu}_n.
\end{equation}
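For the reader's convenience we record the elementary computation behind \eqref{nearlystat3} and \eqref{nearlystat3bis}, which follow from \eqref{nearlystat1} and the definition \eqref{nearlystat2}:
\begin{equation*}
\bar{\mu}_{n+1} = \tfrac12\zeta_{n+1}(p)\bar{m}_{n+1} = \tfrac12\zeta_{n+1}(p)\zeta_{n}(p)\bar{m}_n^2 = \frac{2\zeta_{n+1}(p)}{\zeta_{n}(p)}\,\bar{\mu}_n^2\,,
\qquad
\frac{\bar{m}_{n+1}}{\bar{m}_n} = \zeta_{n}(p)\bar{m}_n = 2\bar{\mu}_n\,.
\end{equation*}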
In view of assumptions \eqref{kernel2}--\eqref{kernel2bis} and \eqref{kernel5}--\eqref{kernel5bis} on the kernels, one can show the following asymptotic behaviour of the coefficients:
\begin{equation} \label{nearlystat8}
\begin{split}
\zeta_{n}(p)&= \Bigl(\frac{k_0\ln2}{\gamma_0}\Bigr)2^{-(n+p_n)} + O(2^{(\bar{c}-1)n}) \qquad\text{as }n\to-\infty, \\
\zeta_{n}(p)&\sim (\ln2) 2^{\alpha(n+p_n)}2^{-\beta(n+1+p_{n+1})} \qquad\qquad\text{ as }n\to\infty,
\end{split}
\end{equation}
and
\begin{equation} \label{nearlystat9}
\begin{split}
\theta_{n}(p) &= 2^{p_n-p_{n+1}} + O(2^{\bar{c}n}) \qquad\qquad\qquad\qquad\text{as }n\to-\infty, \\
\theta_{n}(p) &\sim 2^{\alpha-\beta+1}2^{\alpha(p_{n+1}-p_n)}2^{-\beta(p_{n+2}-p_{n+1})} \qquad\text{as }n\to\infty,
\end{split}
\end{equation}
where $\bar{c}:=\min\{\bar{\alpha},\bar{\beta}\}>1$.
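To illustrate how \eqref{nearlystat9} is obtained from \eqref{nearlystat8}, consider for instance the regime $n\to-\infty$: since $|p_n|\leq\delta_0$, the error terms in the first line of \eqref{nearlystat8} are of relative size $O(2^{\bar{c}n})$, and therefore
\begin{equation*}
\theta_n(p) = \frac{2\zeta_{n+1}(p)}{\zeta_{n}(p)}
= \frac{2\,\frac{k_0\ln2}{\gamma_0}2^{-(n+1+p_{n+1})}\bigl(1+O(2^{\bar{c}n})\bigr)}{\frac{k_0\ln2}{\gamma_0}2^{-(n+p_n)}\bigl(1+O(2^{\bar{c}n})\bigr)}
= 2^{p_n-p_{n+1}} + O(2^{\bar{c}n}).
\end{equation*}
The behaviour as $n\to\infty$ follows in the same way from the second line of \eqref{nearlystat8}.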
\begin{lemma} \label{lem:nearlystat}
Let $A>0$ and $p=\{p_n\}_{n\in\mathbb Z}$, with $|p_n|\leq\delta_0$, be given.
Then there exists a family of positive and bounded coefficients $\{\bar{m}_n(A,p)\}_{n\in\mathbb Z}$ solving \eqref{nearlystat1} which satisfy
\begin{equation} \label{nearlystat4}
\bar{m}_n = O(2^n) \quad\text{as $n\to-\infty$,}
\qquad
\bar{m}_n = O\bigl( 2^{(\beta-\alpha)n}e^{-A2^n} \bigr) \quad\text{as $n\to\infty$.}
\end{equation}
Furthermore
\begin{equation} \label{nearlystat6}
\frac{\partial\bar{m}_n}{\partial A} = -2^n\bar{m}_n,
\qquad\qquad
\Big|\frac{\partial\bar{m}_n}{\partial p_k}\Big| \leq c2^{n-k}\bar{m}_n \quad\text{for }k\geq n,
\end{equation}
and $\frac{\partial\bar{m}_n}{\partial p_k}=0$ for $k<n$, for some constant $c>0$ depending only on the kernels.
\end{lemma}
\begin{proof}
For given $A$ and $p$, it is easily checked that the sequence $\{\bar{\mu}_n\}_{n\in\mathbb Z}$ defined by
\begin{equation} \label{nearlystat5}
\bar{\mu}_n = e^{-A2^n}\exp\Biggl( -2^n\sum_{j=n+1}^{\infty}2^{-j}\ln(\theta_{j-1}(p))\Biggr)
\end{equation}
is a solution to \eqref{nearlystat3}. In turn, defining $\bar{m}_n$ by the relation \eqref{nearlystat2}, and recalling the asymptotic behaviour \eqref{nearlystat8}--\eqref{nearlystat9}, we obtain a positive and bounded solution to \eqref{nearlystat1} with the decay \eqref{nearlystat4}. The first identity in \eqref{nearlystat6} follows directly from \eqref{nearlystat2} and \eqref{nearlystat5}. Finally, by \eqref{nearlystat2} and the definitions \eqref{nearlystat1}, \eqref{nearlystat3} of $\zeta_n(p)$, $\theta_n(p)$, we have by straightforward computations
\begin{equation*}
\begin{split}
\frac{1}{\bar{m}_n}\frac{\partial\bar{m}_n}{\partial p_k}
& = \frac{1}{\bar{\mu}_n}\frac{\partial\bar{\mu}_n}{\partial p_k} - \frac{1}{\zeta_n}\frac{\partial\zeta_n}{\partial p_k}
= -2^n\sum_{j=n+1}^\infty \frac{2^{-j}}{\theta_{j-1}}\frac{\partial\theta_{j-1}}{\partial p_k} - \frac{1}{\zeta_n}\frac{\partial\zeta_n}{\partial p_k} \\
& = -2^n\sum_{j=n+1}^\infty 2^{-j} \Bigl( \frac{1}{\zeta_{j}}\frac{\partial\zeta_{j}}{\partial p_k} - \frac{1}{\zeta_{j-1}}\frac{\partial\zeta_{j-1}}{\partial p_k}\Bigr)
- \frac{1}{\zeta_n}\frac{\partial\zeta_n}{\partial p_k}
= - \sum_{j=n}^\infty 2^{n-j-1}\frac{1}{\zeta_j}\frac{\partial\zeta_j}{\partial p_k}.
\end{split}
\end{equation*}
Observing that
\begin{equation*}
\frac{1}{\zeta_j}\frac{\partial\zeta_j}{\partial p_k} =
\begin{cases}
\ln2\Bigl( \frac{k'(2^{k+p_k})2^{k+p_k}}{k(2^{k+p_k})}-1\Bigr) & \text{if }j=k,\\
-\frac{\gamma'(2^{k+p_k})2^{k+p_k}\ln2}{\gamma(2^{k+p_k})} & \text{if }j=k-1,\\
0 & \text{otherwise,}\\
\end{cases}
\end{equation*}
we finally obtain
\begin{equation} \label{nearlystat7}
\frac{1}{\bar{m}_n}\frac{\partial\bar{m}_n}{\partial p_k} =
\begin{cases}
-2^{n-k-1}\ln2\Bigl( \frac{k'(2^{k+p_k})2^{k+p_k}}{k(2^{k+p_k})}-1\Bigr) + 2^{n-k} \frac{\gamma'(2^{k+p_k})2^{k+p_k}\ln2}{\gamma(2^{k+p_k})} & \text{if } k>n,\\
-2^{n-k-1}\ln2\Bigl( \frac{k'(2^{k+p_k})2^{k+p_k}}{k(2^{k+p_k})}-1\Bigr)& \text{if } k=n,\\
0 & \text{if } k<n.\\
\end{cases}
\end{equation}
The second condition in \eqref{nearlystat6} follows thanks to the assumptions \eqref{kernel2ter}, \eqref{kernel5ter}.
\end{proof}
\subsection{Main result} \label{subsect:mainresult}
We are now in a position to state the main result of the paper. It is first convenient to introduce a notation for the following space of sequences: for $\theta\in\mathbb R$, we set
\begin{equation} \label{spacesequences}
\mathcal{Y}_\theta := \Bigl\{ y=\{y_n\}_{n\in\mathbb Z} \,:\, \|y\|_{\theta}<\infty \Bigr\},
\qquad\text{with}\quad
\|y\|_{\theta} := \sup_{n\leq0} \, 2^{n}|y_n| + \sup_{n>0} \, 2^{\theta n}|y_n|.
\end{equation}
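For orientation, note that the norm in \eqref{spacesequences} controls the sequence pointwise:
\begin{equation*}
\|y\|_\theta\leq\delta \quad\Longrightarrow\quad |y_n|\leq\delta\,2^{-n} \ \text{ for } n\leq0, \qquad |y_n|\leq\delta\,2^{-\theta n} \ \text{ for } n>0.
\end{equation*}
In particular, for $\theta=1$ the quantities $2^ny_n$ appearing in \eqref{m0} below are bounded by $\delta$ uniformly in $n\in\mathbb Z$.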
Given an initial datum $g_0\in\mathcal{M}_+(\mathbb R)$, we also introduce the following quantities:
\begin{align}
m_n^0 &:= \int_{[n-1/2,n+1/2)} g_0(x)\,\mathrm{d} x\,, \label{mn0}\\
p_n^0 &:= \frac{1}{m_n^0} \int_{[n-1/2,n+1/2)} g_0(x)(x-n)\,\mathrm{d} x\,, \label{pn0}\\
q_n^0 &:= \frac{1}{m_n^0} \int_{[n-1/2,n+1/2)} g_0(x)(x-n-p_n^0)^2\,\mathrm{d} x \,, \label{qn0}
\end{align}
and $m^0:=\{m_n^0\}_{n\in\mathbb Z}$, $p^0:=\{p_n^0\}_{n\in\mathbb Z}$, $q^0:=\{q_n^0\}_{n\in\mathbb Z}$.
Our main result reads as follows.
\begin{theorem}[Stability of stationary peak solutions] \label{thm:stability}
Given $M>0$, there exists $\delta_0>0$, depending only on $M$, with the following property.
Let $g_0\in\mathcal{M}_+(\mathbb R)$ be an initial datum with total mass
\begin{equation} \label{mass0}
\int_{\mathbb R} 2^x g_0(x)\,\mathrm{d} x = M,
\end{equation}
and support
\begin{equation} \label{supp0}
\operatorname{supp} g_0 \subset \bigcup_{n\in\mathbb Z} (n-\delta_0,n+\delta_0).
\end{equation}
Assume further that
\begin{equation} \label{m0}
m_n^0 = \bar{m}_n(A^0, p^0)(1+2^ny_n^0),
\end{equation}
for some $A^0>0$ and $y^0=\{y_n^0\}_{n\in\mathbb Z}\in\mathcal{Y}_1$ satisfying
\begin{equation} \label{A0y0}
|A^0-A_M|\leq\delta_0,\qquad \|y^0\|_1\leq\delta_0.
\end{equation}
Then there exists a weak solution $g$ to \eqref{eq:coagfrag2} with initial datum $g_0$, and $\rho\in[0,1)$, such that
\begin{equation*}
g(\cdot,t) \to g_p(\cdot\,;A_{M,\rho},\rho) \qquad\text{in the sense of measures as $t\to\infty$,}
\end{equation*}
where $g_p(\cdot\,;A_{M,\rho},\rho)$ is the stationary solution with total mass $M$ concentrated in peaks at the points $\{n+\rho\}_{n\in\mathbb Z}$, see \eqref{peak1}--\eqref{peak2}.
\end{theorem}
In the statement of the theorem, $\bar{m}_n$ are the coefficients introduced in Lemma~\ref{lem:nearlystat}, and $A_M=A_{M,0}>0$ is the unique value such that the stationary solution $g_p(\cdot;A_M,0)$ has total mass $M$ (see \eqref{peak2}). The full argument for the proof of Theorem~\ref{thm:stability} will be given in Section~\ref{sect:proof}.
\begin{remark} \label{rm:wasserstein}
The proof of Theorem~\ref{thm:stability} shows that a much stronger convergence result holds: indeed we also obtain a uniform exponential decay in time of the 2-Wasserstein distance between the (normalized) restriction of the measure $g(\cdot,t)$ to each interval $(n-\frac12,n+\frac12)$, $n\in\mathbb Z$, and the corresponding Dirac delta centered at the point $n+\rho$:
\begin{equation} \label{wasserstein}
\sup_{n\in\mathbb Z} W_2(g_n(t),\delta_\rho) \leq Ce^{-\frac{\nu}{2} t},
\qquad\text{where}\quad g_n(t):= \frac{1}{m_n(t)}g(\cdot+n,t)\chi_{(-\frac12,\frac12)}
\end{equation}
(see \eqref{mn} for the definition of $m_n(t)$), where $W_2$ denotes the 2-Wasserstein distance
$$
W_2(\mu,\nu) := \biggl( \inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb R\times\mathbb R} |x-y|^2\,\mathrm{d}\pi(x,y) \biggr)^\frac12.
$$
Indeed, denoting by $p_n(t)$ and $q_n(t)$ the first and second moment of $g(\cdot,t)$ around each peak (see \eqref{pn} and \eqref{qn}), from the definition of $W_2$ one finds
\begin{align*}
W_2^2(g_n(t),\delta_\rho)= \frac{1}{m_n(t)}\int_{(n-\frac12,n+\frac12)} |x-n-\rho|^2 g(x,t)\,\mathrm{d} x \leq 2q_n(t) + 2|p_n(t)-\rho|^2 \leq Ce^{-\nu t},
\end{align*}
where the uniform exponential convergence of $q_n(t)\to0$ and $p_n(t)\to\rho$ is obtained in the proof (see for instance \eqref{decaypn1}--\eqref{decayqn1}).
\end{remark}
\section{Strategy of the proof and auxiliary results}\label{sect:strategy}
In the following we assume that $g_0\in\mathcal{M}_+(\mathbb R)$ is a given initial datum satisfying the assumptions of Theorem~\ref{thm:stability}: in particular, $M>0$ is a fixed quantity denoting the total mass of the solution (see \eqref{mass0}), and $g_0$ is supported in the union of intervals $\bigcup_{n\in\mathbb Z} I_n$, where $I_n:=(n-\delta_0,n+\delta_0)$ (see \eqref{supp0}), for some $\delta_0\in(0,1)$ to be chosen later. We now present the general strategy for the proof of Theorem~\ref{thm:stability}.
We will first show in Lemma~\ref{lem:supp} that the structure of the support of the solution is preserved by the evolution, that is, every weak solution $g(\cdot,t)$ starting from $g_0$ remains supported for all positive times in the union of the intervals $I_n$. In view of this property, we define for $n\in\mathbb Z$ the following quantities:
\begin{align}
m_n(t) &:= \int_{I_n} g(x,t)\,\mathrm{d} x, \label{mn}\\
p_n(t) &:= \frac{1}{m_n(t)} \int_{I_n}g(x,t)(x-n)\,\mathrm{d} x, \label{pn}\\
q_n(t) &:= \frac{1}{m_n(t)} \int_{I_n}g(x,t)(x-n-p_n(t))^2\,\mathrm{d} x. \label{qn}
\end{align}
Notice in particular that $p_n(t)\in[-\delta_0,\delta_0]$ is chosen so that
\begin{equation} \label{pn2}
\int_{I_n} g(x,t)\bigl(x-n-p_n(t)\bigr)\,\mathrm{d} x =0,
\end{equation}
and that $0\leq q_n(t) \leq 4\delta_0^2$.
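The latter bound simply reflects the fact that $|x-n|\leq\delta_0$ on $I_n$ and $|p_n(t)|\leq\delta_0$, so that
\begin{equation*}
q_n(t) \leq \sup_{x\in I_n}\bigl(x-n-p_n(t)\bigr)^2 \leq (2\delta_0)^2 = 4\delta_0^2.
\end{equation*}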
The proof of Theorem~\ref{thm:stability} will be achieved by showing that $q_n(t)$ decays exponentially to zero as $t\to\infty$ (this gives concentration in peaks), and that each $p_n(t)$ aligns to a constant value $\rho$ independent of $n$ (which determines the asymptotic position of the peaks as $t\to\infty$).
The evolution equation for $m_n(t)$, which we derive below in Lemma~\ref{lem:approx} starting from the weak formulation of the equation, can be seen as a perturbation of the equation
\begin{equation*}
\frac{\,\mathrm{d} m_n}{\,\mathrm{d} t}
= \frac{\gamma(2^{n+\rho})}{4} \Bigl( \zeta_{n-1,\rho}(m_{n-1}(t))^2 - m_n(t) \Bigr) - \frac{\gamma(2^{n+1+\rho})}{2} \Bigl( \zeta_{n,\rho}(m_{n}(t))^2 - m_{n+1}(t) \Bigr) \,.
\end{equation*}
We studied this equation in detail in the companion paper \cite{BNVd}: it corresponds to the particular case in which all the variances $q_n(t)$ are identically equal to zero, and all the first moments $p_n(t)$ are equal to a constant value $\rho$. In this case, the functions $m_n(t)$ would represent the coefficients of a solution which remains concentrated for all times at the points $\{n+\rho\}_{n\in\mathbb Z}$. In the main result of \cite{BNVd} we showed that, if this solution is initially a small perturbation of one of the stationary states given by Proposition~\ref{prop:stationary}, then this property is preserved for later times. The approach is then to reproduce the argument of \cite{BNVd}, adapting it to this more general setting: that is, we try to construct a solution such that $m_n(t)$ is at each time a perturbation of a stationary state. More precisely, we will show that the quantities $m_n(t)$ can be represented in the form
\begin{equation} \label{yn}
m_n(t) = \bar{m}_n(A(t),p(t))\bigl( 1 + 2^ny_n(t) \bigr),
\end{equation}
where $\bar{m}_n$ are given by Lemma~\ref{lem:nearlystat}, for suitable functions $A(t)$ and $y(t)=\{y_n(t)\}_{n\in\mathbb Z}$ to be determined via a fixed point argument (Proposition~\ref{prop:fp}).
The structure \eqref{yn} allows us to write the evolution equations for the first and second moments $p_n(t)$, $q_n(t)$ in a more convenient form, and to prove the desired decay of these quantities.
The key idea is that all the evolution equations for $m_n$, $p_n$, $q_n$ can be seen as perturbations of the same linearized problem, for which we provide a full regularity theory in Section~\ref{sect:linear} (with the proof postponed to Appendix~\ref{sect:appendix}). The most technical part of the proof then consists in proving uniform estimates on the remainder terms.
A drawback of this approach is that in order to prove the representation \eqref{yn} we would need to assume \emph{a priori} that the expected decay of $p_n(t)$
and $q_n(t)$ holds, which on the other hand we can prove only by exploiting \eqref{yn} itself. To avoid the circularity of the argument, we first need to consider
a truncated problem, in which we cut off the tail of the solution at large distance $n>N\gg1$. The advantage is that for the truncated problem we can assume that
$p_n$ and $q_n$ have the expected decay \emph{for a short interval of time}, which allows us to start the process and to show that \eqref{yn} holds for small times.
Then, by a continuation argument, we can extend all the estimates globally in time. In a final step we will complete the proof by sending the truncation parameter $N$ to infinity.
In the rest of this section we prove some auxiliary results: we show the preservation-of-support property already mentioned, and we
compute the evolution equations for the quantities $m_n(t)$, $p_n(t)$, $q_n(t)$.
\subsection{Preservation of the support} \label{subsect:support}
We start by showing that any weak solution $g$ remains concentrated on the support of the initial datum for all positive times.
\begin{lemma} \label{lem:supp}
Assume that $g$ is a weak solution to \eqref{eq:coagfrag2} with initial datum $g_0$, according to Definition~\ref{def:weakg}. Suppose that the measure $g_0$ is supported in a union of intervals of the form $\bigcup_{n\in\mathbb Z}(n-\delta_0,n+\delta_0)$, for a given $\delta_0>0$ such that
\begin{equation} \label{delta0a}
\delta_0 < \frac{1-\varepsilon_0}{2} \,,
\end{equation}
where $\varepsilon_0$ is defined in \eqref{suppK}.
Then this condition is preserved for later times.
\end{lemma}
\begin{proof}
We write $I:=[-\delta_0,\delta_0]$ and we consider the test function
\begin{equation*}
\varphi:=\sum_{n=-\infty}^\infty \chi_I(\cdot-n)\,.
\end{equation*}
Let us split the total mass $M=\int_{\mathbb R} 2^xg(x,t)\,\mathrm{d} x$ of the solution, which is preserved along the evolution, as
\begin{equation} \label{lemsupp1}
M=M_{\mathrm{int}}(t) + M_{\mathrm{ext}}(t)\,,
\qquad\text{where }
M_{\mathrm{int}}(t) := \int_{\mathbb R} 2^{x}g(x,t)\varphi(x)\,\mathrm{d} x \,.
\end{equation}
We obtain an evolution equation for $M_{\mathrm{int}}(t)$ by using a sequence of continuous test functions approximating $\psi(x):=2^x\varphi(x)$ in the weak formulation \eqref{weakg}. Recall that the right-hand side of \eqref{weakg} is the difference of the two operators $B_{\mathrm c}$ and $B_{\mathrm f}$ introduced in \eqref{rhsweakc}--\eqref{rhsweakf}; in particular, as $B_{\mathrm c}$ is a quadratic form we have
\begin{equation} \label{lemsupp2}
B_{\mathrm c}[g,g;\psi] = B_{\mathrm c}[g\varphi,g\varphi;\psi] + 2 B_{\mathrm c}[g\varphi, g(1-\varphi);\psi] + B_{\mathrm c}[g(1-\varphi),g(1-\varphi);\psi] \,.
\end{equation}
The measure $g\varphi$ is supported on the union of intervals $\bigcup_{n\in\mathbb Z}(n+I)$. To compute the term $B_{\mathrm c}[g\varphi,g\varphi;\psi]$ it is then sufficient to consider values $y,z\in\bigcup_{n\in\mathbb Z}(n+I)$, since otherwise the corresponding contribution would vanish.
By \eqref{suppK} we integrate on the region $\{ |y-z|<\varepsilon_0 \}$, otherwise $K(2^y,2^z)$ would vanish; then the assumption \eqref{delta0a} gives that different intervals $(n+I)$ and $(m+I)$ do not interact, and we can assume that $y,z$ always belong to the same interval $(n+I)$:
\begin{multline} \label{lemsupp6}
B_{\mathrm c}[g\varphi,g\varphi;\psi] = \frac{\ln2}{2}\sum_{n\in\mathbb Z}\int_{n+I}\int_{n+I} K(2^y,2^z) g(y,t)g(z,t) \\ \times\biggl[(2^y+2^z)\varphi\Bigl(\frac{\ln(2^y+2^z)}{\ln2}\Bigr) - 2^y\varphi(y)- 2^z\varphi(z) \biggr] \,\mathrm{d} y \,\mathrm{d} z \,.
\end{multline}
For $y,z\in(n+I)$ we have $\varphi(y)=\varphi(z)=1$. We now claim that the following implication holds:
\begin{equation} \label{claimsupp}
y,z\in (n+I) \quad\Longrightarrow\quad \frac{\ln(2^y+2^z)}{\ln2} \in (n+1+I) \,.
\end{equation}
Indeed by elementary computations
\begin{align*}
\frac{\ln(2^y+2^z)}{\ln2} = \frac{\ln\bigl(2(2^{y-1}+2^{z-1})\bigr)}{\ln2} = 1 + \frac{\ln(2^{y-1}+2^{z-1})}{\ln2}\,,
\end{align*}
and by monotonicity it is easily seen that $\frac{\ln(2^{y-1}+2^{z-1})}{\ln2}\in (n+I)$ whenever $y,z\in (n+I)$.
From the previous considerations it follows that the term in brackets in the integral \eqref{lemsupp6} vanishes identically. Therefore $B_{\mathrm c}[g\varphi,g\varphi;\psi]=0$. Similarly for the fragmentation term
\begin{equation*}
B_{\mathrm f}[g;\psi] =
2\int_{\mathbb R}\frac{\gamma(2^{y+1})}{4} g(y+1,t)2^y \bigl[\varphi(y+1)-\varphi(y)\bigr]\,\mathrm{d} y = 0 \,.
\end{equation*}
Therefore by \eqref{lemsupp2} we have
\begin{equation} \label{lemsupp3}
\frac{\,\mathrm{d}}{\,\mathrm{d} t} M_{\mathrm{ext}}(t) = - \frac{\,\mathrm{d}}{\,\mathrm{d} t} M_{\mathrm{int}}(t)
= - 2 B_{\mathrm c}[g\varphi, g(1-\varphi);\psi] - B_{\mathrm c}[g(1-\varphi),g(1-\varphi);\psi] \,.
\end{equation}
We now bound the right-hand side of \eqref{lemsupp3} in terms of $M_{\mathrm{ext}}(t)$: using \eqref{estK},
\begin{equation*}
\begin{split}
\big|2 B_{\mathrm c}[g\varphi, & g(1-\varphi);\psi]\big| \\
& \leq \ln2\sum_{n\in\mathbb Z}\int_{[n,n+1)} \,\mathrm{d} z\, g(z,t)(1-\varphi(z)) \int_{\{ |y-z|<1\}} K(2^y,2^z)g(y,t)(2^y+2^z)\,\mathrm{d} y \\
& \leq C_K \ln2\sum_{n\in\mathbb Z}\int_{[n,n+1)} \,\mathrm{d} z\, g(z,t)(1-\varphi(z)) \int_{\{ |y-z|<1\}} (1+2^y2^z)g(y,t)\,\mathrm{d} y \\
& \leq C_K' \sum_{n<0}\int_{[n,n+1)} \,\mathrm{d} z\, g(z,t)(1-\varphi(z)) \int_{(n-1,n+2)} g(y,t)\,\mathrm{d} y \\
& \qquad + C_K' \sum_{n\geq0}\int_{[n,n+1)} \,\mathrm{d} z\, 2^zg(z,t)(1-\varphi(z)) \int_{(n-1,n+2)} 2^yg(y,t)\,\mathrm{d} y \\
& \leq C_K'' \|g(t)\| \sum_{n<0}\int_{[n,n+1)} 2^n g(z,t)(1-\varphi(z))\,\mathrm{d} z \\
& \qquad + C_K'' M \sum_{n\geq0}\int_{[n,n+1)} 2^{z}g(z,t)(1-\varphi(z))\,\mathrm{d} z \\
& \leq C_K''' \Bigl( \|g(t)\| + M \Bigr) M_{\mathrm{ext}}(t)
\end{split}
\end{equation*}
(where the norm $\|\cdot\|$ is defined in \eqref{moment0}).
The term $|B_{\mathrm c}[g(1-\varphi),g(1-\varphi);\psi]|$ can be bounded in a similar way.
We then deduce, from \eqref{lemsupp3} and from the uniform bound \eqref{wpestg} on the solution, that $\frac{\,\mathrm{d}}{\,\mathrm{d} t} M_{\mathrm{ext}}(t) \leq C M_{\mathrm{ext}}(t)$. Now the assumption on the support of $g_0$ implies that $M_{\mathrm{ext}}(0)=0$. It then follows by Gr\"onwall's Lemma that $M_{\mathrm{ext}}(t)\equiv0$ for all $t>0$, that is, $g(t)$ remains supported in $\bigcup_{n\in\mathbb Z}(n+I)$.
\end{proof}
\subsection{Derivation of the moment equations} \label{subsect:moments}
We now assume that $g$ is a weak solution to \eqref{eq:coagfrag2} with initial datum $g_0$, and we derive three evolution equations for the quantities $m_n(t)$, $p_n(t)$, $q_n(t)$, introduced in \eqref{mn}, \eqref{pn}, \eqref{qn}, by using the test functions
\begin{equation*}
\varphi_n^0(x) := \chi_{I_n}(x)\,, \quad \varphi^1_n(x) := (x-n)\chi_{I_n}(x)\,, \quad \varphi^2_n(x;t) := (x-n-p_n(t))^2\chi_{I_n}(x)
\end{equation*}
in the weak formulation \eqref{weakg} of the equation. In the first case, by using the condition \eqref{suppK} and the implication \eqref{claimsupp} we find
\begin{equation} \label{mneq}
\begin{split}
\frac{\,\mathrm{d} m_n}{\,\mathrm{d} t}
&= B_{\mathrm c}[g(\cdot,t),g(\cdot,t);\varphi_n^0] - B_{\mathrm f}[g(\cdot,t);\varphi_n^0] \\
& = \frac{\ln2}{2}\int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z)\,\mathrm{d} y\,\mathrm{d} z - \ln2\int_{I_n}\int_{I_n} K(2^y,2^z)g(y)g(z)\,\mathrm{d} y\,\mathrm{d} z \\
& \qquad - \frac14\int_{I_n}\gamma(2^y)g(y)\,\mathrm{d} y + \frac12\int_{I_{n+1}}\gamma(2^y)g(y)\,\mathrm{d} y \,.
\end{split}
\end{equation}
Similarly for $p_n(t)$ we have
\begin{equation*}
\begin{split}
\frac{\,\mathrm{d}}{\,\mathrm{d} t}\bigl(m_n p_n\bigr)
&= B_{\mathrm c}[g(\cdot,t),g(\cdot,t);\varphi_n^1] - B_{\mathrm f}[g(\cdot,t);\varphi_n^1] \\
& = \frac{\ln2}{2}\int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z) \biggl(\frac{\ln(2^y+2^z)}{\ln 2}-n\biggr)\,\mathrm{d} y\,\mathrm{d} z \\
& \qquad - \ln2\int_{I_n}\int_{I_n} K(2^y,2^z)g(y)g(z)(y-n)\,\mathrm{d} y\,\mathrm{d} z \\
& \qquad - \frac14\int_{I_n}\gamma(2^y)g(y)(y-n)\,\mathrm{d} y + \frac12\int_{I_{n+1}}\gamma(2^y)g(y)(y-(n+1))\,\mathrm{d} y \,.
\end{split}
\end{equation*}
By writing the right-hand side as $m_n'p_n+m_np_n'$ and using the equation \eqref{mneq} for $m_n$ we therefore have
\begin{align} \label{pneq}
\frac{\,\mathrm{d} p_n}{\,\mathrm{d} t}
& = \frac{1}{m_n} \biggl[ \frac{\ln2}{2}\int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z) \biggl(\frac{\ln(2^y+2^z)}{\ln 2}-n-p_n\biggr)\,\mathrm{d} y\,\mathrm{d} z \nonumber \\
& \qquad - \ln2\int_{I_n}\int_{I_n} K(2^y,2^z)g(y)g(z)\bigl(y-n-p_n\bigr)\,\mathrm{d} y\,\mathrm{d} z \\
& \qquad - \frac14\int_{I_n}\gamma(2^y)g(y) \bigl(y-n-p_n\bigr)\,\mathrm{d} y + \frac12\int_{I_{n+1}}\gamma(2^y)g(y) \bigl(y-(n+1)-p_n\bigr)\,\mathrm{d} y \biggr]\,. \nonumber
\end{align}
Finally we compute the evolution equation for $q_n(t)$:
\begin{equation*}
\begin{split}
\frac{\,\mathrm{d}}{\,\mathrm{d} t}\bigl(m_n q_n \bigr)
&= B_{\mathrm c}[g(\cdot,t),g(\cdot,t);\varphi_n^2(\cdot;t)] - B_{\mathrm f}[g(\cdot,t);\varphi_n^2(\cdot;t)] \\
& = \frac{\ln2}{2}\int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z) \biggl(\frac{\ln(2^y+2^z)}{\ln 2}-n-p_n\biggr)^2\,\mathrm{d} y\,\mathrm{d} z \\
& \qquad - \ln2\int_{I_n}\int_{I_n} K(2^y,2^z)g(y)g(z) \bigl(y-n-p_n\bigr)^2 \,\mathrm{d} y\,\mathrm{d} z \\
& \qquad - \frac14\int_{I_n}\gamma(2^y)g(y)\bigl(y-n-p_n\bigr)^2\,\mathrm{d} y + \frac12\int_{I_{n+1}}\gamma(2^y)g(y)\bigl(y-(n+1)-p_n\bigr)^2\,\mathrm{d} y \,,
\end{split}
\end{equation*}
which gives, after using the equation \eqref{mneq} for $m_n'(t)$,
\begin{equation} \label{qneq}
\begin{split}
\frac{\,\mathrm{d} q_n}{\,\mathrm{d} t}
&= \frac{1}{m_n}\biggl[
\frac{\ln2}{2}\int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z) \biggl( \biggl(\frac{\ln(2^y+2^z)}{\ln 2}-n-p_n\biggr)^2 - q_n\biggr)\,\mathrm{d} y\,\mathrm{d} z \\
& \qquad - \ln2\int_{I_n}\int_{I_n} K(2^y,2^z)g(y)g(z) \Bigl( \bigl(y-n-p_n\bigr)^2 -q_n \Bigr) \,\mathrm{d} y\,\mathrm{d} z \\
& \qquad - \frac14\int_{I_n}\gamma(2^y)g(y) \Bigl( \bigl(y-n-p_n\bigr)^2 - q_n \Bigr) \,\mathrm{d} y \\
& \qquad + \frac12\int_{I_{n+1}}\gamma(2^y)g(y) \Bigl( \bigl(y-(n+1)-p_n\bigr)^2 - q_n \Bigr) \,\mathrm{d} y \biggr] \,.
\end{split}
\end{equation}
\subsection{Approximate equations} \label{subsect:momentsapprox}
We next identify the leading order terms in the equations \eqref{mneq}, \eqref{pneq}, \eqref{qneq}.
In the following computations we will omit the dependence on the variable $t$.
\begin{lemma} \label{lem:taylor}
For all $h\in\mathbb Z$ the following identities hold:
\begin{equation} \label{approx1a}
\int_{I_h}\int_{I_h} K(2^y,2^z)g(y)g(z) \,\mathrm{d} y\,\mathrm{d} z = \frac{k(2^{h+p_h})}{2^{h+p_h+1}} m_h^2 (1+O(q_h)),
\end{equation}
\begin{equation} \label{approx1b}
\int_{I_h} \gamma(2^y)g(y)\,\mathrm{d} y = \gamma(2^{h+p_h}) m_h (1+O(q_h)),
\end{equation}
\begin{equation} \label{approx2a}
\int_{I_h}\int_{I_h} K(2^y,2^z)g(y)g(z)(y-h-p_h)\,\mathrm{d} y\,\mathrm{d} z = \frac{k(2^{h+p_h})}{2^{h+p_h+1}} m_h^2 O(q_h),
\end{equation}
\begin{equation} \label{approx2b}
\int_{I_h} \gamma(2^y)g(y)(y-h-p_h)\,\mathrm{d} y = \gamma(2^{h+p_h})m_hO(q_h),
\end{equation}
\begin{equation} \label{approx3a}
\int_{I_h}\int_{I_h} K(2^y,2^z)g(y)g(z) \bigl( (y-h-p_h)^2-q_h \bigr) \,\mathrm{d} y\,\mathrm{d} z = \frac{k(2^{h+p_h})}{2^{h+p_h+1}} \delta_0 m_h^2O(q_h),
\end{equation}
\begin{equation} \label{approx3b}
\int_{I_h} \gamma(2^y)g(y) \bigl( (y-h-p_h)^2 -q_h \bigr) \,\mathrm{d} y = \gamma(2^{h+p_h}) \delta_0 m_hO(q_h),
\end{equation}
where the notation $f_1=O(f_2)$ means that there exists a constant $C$ (depending only on the kernels) such that $|f_1|\leq C f_2$.
\end{lemma}
\begin{proof}
By a Taylor expansion of the function $K(2^y,2^z)$ at the point $(y,z)=(h+p_h,h+p_h)$ we have, for $y,z\in I_h$,
\begin{multline} \label{taylorK}
K(2^y,2^z)
= K(2^{h+p_h},2^{h+p_h}) + \frac{\partial K}{\partial\xi}(2^{h+p_h},2^{h+p_h})2^{h+p_h}\ln2 \bigl[(y-h-p_h)+(z-h-p_h)\bigr] \\
+ O\Bigl( K(2^{h+p_h},2^{h+p_h}) \bigl[(y-h-p_h)^2 + (z-h-p_h)^2\bigr] \Bigr)
\end{multline}
(here we used the fact that the quadratic term can be controlled in terms of the function $K$ itself, thanks to the growth assumptions \eqref{kernel2ter}--\eqref{kernel2quater} on the derivatives of the kernel).
Then, inserting this expression into the integral \eqref{approx1a}, and observing that in view of \eqref{pn2} the linear term vanishes, we obtain
\begin{multline*}
\int_{I_h}\int_{I_h} K(2^y,2^z)g(y)g(z) \,\mathrm{d} y\,\mathrm{d} z
= K(2^{h+p_h},2^{h+p_h}) \int_{I_h}g(y)\,\mathrm{d} y\int_{I_h} g(z)\,\mathrm{d} z \\
+ O\biggl( K(2^{h+p_h},2^{h+p_h}) \int_{I_h}\int_{I_h}g(y)g(z) (y-h-p_h)^2 \,\mathrm{d} y\,\mathrm{d} z \biggr),
\end{multline*}
from which \eqref{approx1a} follows by using the expression of the coagulation kernel in \eqref{kernel1}.
The equalities \eqref{approx2a} and \eqref{approx3a} are obtained similarly by inserting the Taylor expansion \eqref{taylorK} into the integrals, and recalling \eqref{pn2}.
The identities \eqref{approx1b}, \eqref{approx2b}, \eqref{approx3b} can be proved by analogous arguments, using the Taylor expansion of the fragmentation kernel, for $y\in I_h$,
\begin{equation} \label{taylorgamma}
\gamma(2^y) = \gamma(2^{h+p_h}) + \gamma'(2^{h+p_h})2^{h+p_h}\ln2(y-h-p_h) + O\bigl(\gamma(2^{h+p_h})(y-h-p_h)^2\bigr)
\end{equation}
(also in this case the quadratic term is controlled in terms of the function $\gamma$ itself thanks to the growth assumptions \eqref{kernel5ter}--\eqref{kernel5quater} on the derivatives of $\gamma$).
\end{proof}
\begin{lemma} \label{lem:approx}
The functions $m_n(t)$, $p_n(t)$, $q_n(t)$, introduced in \eqref{mn}, \eqref{pn}, and \eqref{qn} respectively, obey the following equations:
\begin{equation}\label{mneq2}
\begin{split}
\frac{\,\mathrm{d} m_n}{\,\mathrm{d} t}
&= \frac{\gamma(2^{n+p_n})}{4}\Bigl( \zeta_{n-1}(p) m_{n-1}^2(1+O(q_{n-1})) - m_n(1+O(q_n)) \Bigr) \\
& \qquad - \frac{\gamma(2^{n+1+p_{n+1}})}{2} \Bigl( \zeta_n(p) m_n^2(1+O(q_n)) - m_{n+1}(1+O(q_{n+1})) \Bigr) .
\end{split}
\end{equation}
\begin{equation} \label{pneq2}
\begin{split}
\frac{\,\mathrm{d} p_n}{\,\mathrm{d} t}
& = \frac{\ln2}{2}\frac{k(2^{n-1+p_{n-1}})}{2^{n+p_{n-1}}}\frac{m_{n-1}^2}{m_n} \Bigl( (p_{n-1}-p_n) + O(q_{n-1}) \Bigr) -\ln2\frac{k(2^{n+p_n})}{2^{n+p_n+1}}m_nO(q_n) \\
& \qquad -\frac{\gamma(2^{n+p_n})}{4}O(q_n) + \frac{\gamma(2^{n+1+p_{n+1}})}{2}\frac{m_{n+1}}{m_n} \Bigl( (p_{n+1}-p_n) + O(q_{n+1}) \Bigr),
\end{split}
\end{equation}
\begin{equation} \label{qneq2}
\begin{split}
\frac{\,\mathrm{d} q_n}{\,\mathrm{d} t}
&= \frac{\ln2}{2}\frac{k(2^{n-1+p_{n-1}})}{2^{n+p_{n-1}}}\frac{m_{n-1}^2}{m_n} \Bigl[ \Bigl(\frac12q_{n-1}-q_n\Bigr) +\delta_0O(q_{n-1}) + (p_{n-1}-p_n)^2 \Bigr]\\
& \qquad -\ln2\frac{k(2^{n+p_n})}{2^{n+p_n+1}}m_n\delta_0 O(q_n) - \frac{\gamma(2^{n+p_{n}})}{4}\delta_0O(q_n) \\
& \qquad + \frac{\gamma(2^{n+1+p_{n+1}})}{2}\frac{m_{n+1}}{m_n} \Bigl[ (q_{n+1}-q_n) + \delta_0O(q_{n+1}) + (p_{n+1}-p_n)^2 \Bigr] .
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
By inserting \eqref{approx1a} and \eqref{approx1b} into \eqref{mneq} we obtain the equation
\begin{equation*}
\begin{split}
\frac{\,\mathrm{d} m_n}{\,\mathrm{d} t} &= \frac{\ln2}{2}\frac{k(2^{n-1+p_{n-1}})}{2^{n+p_{n-1}}} m_{n-1}^2(1+O(q_{n-1})) - \ln2\frac{k(2^{n+p_n})}{2^{n+p_n+1}}m_n^2(1+O(q_n))\\
& \qquad -\frac{\gamma(2^{n+p_n})}{4}m_n(1+O(q_n)) + \frac{\gamma(2^{n+1+p_{n+1}})}{2}m_{n+1}(1+O(q_{n+1})) ,
\end{split}
\end{equation*}
which can be rewritten in the form \eqref{mneq2} by using the coefficients $\zeta_n(p)$ introduced in \eqref{nearlystat1}.
To prove \eqref{pneq2}, we first observe that by Taylor expansion, for all $h\in\mathbb Z$ and $y,z\in I_{h}$,
\begin{align} \label{taylorlog}
\frac{\ln(2^y+2^z)}{\ln 2}
& = \frac{1}{\ln2}\ln\Bigl( 2^{h+p_h} \Bigl[ 2+\ln2(y-h-p_h)+\ln2(z-h-p_h) \nonumber\\
& \qquad\qquad + O\bigl( (y-h-p_h)^2+(z-h-p_h)^2 \bigr) \Bigr] \Bigr) \nonumber \\
& = h + p_h + 1 + \frac{1}{2}\bigl(y-h-p_h+z-h-p_h\bigr) + O\bigl((y-h-p_h)^2+(z-h-p_h)^2\bigr) \nonumber \\
& = 1 + \frac{y+z}{2} + O\bigl((y-h-p_h)^2+(z-h-p_h)^2\bigr).
\end{align}
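As a quick consistency check of \eqref{taylorlog}, at the center point $y=z=h+p_h$ one has
\begin{equation*}
\frac{\ln(2^y+2^z)}{\ln2}=\frac{\ln\bigl(2^{h+p_h+1}\bigr)}{\ln2}=h+p_h+1=1+\frac{y+z}{2}\,,
\end{equation*}
in agreement with the implication \eqref{claimsupp}.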
The first term in \eqref{pneq} becomes, using the symmetry of the kernel, \eqref{taylorlog}, \eqref{approx1a}, and \eqref{approx2a},
\begin{equation*}
\begin{split}
\int_{I_{n-1}}&\int_{I_{n-1}} K(2^y,2^z)g(y)g(z) \biggl(\frac{\ln(2^y+2^z)}{\ln 2}-n-p_n\biggr)\,\mathrm{d} y\,\mathrm{d} z \\
& = \int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z) \Bigl( y-(n-1)-p_n + O\bigl((y-(n-1)-p_{n-1})^2\bigr) \Bigr)\,\mathrm{d} y\,\mathrm{d} z \\
& = \frac{k(2^{n-1+p_{n-1}})}{2^{n+p_{n-1}}}m_{n-1}^2 (p_{n-1}-p_n)(1+O(q_{n-1})) + \frac{k(2^{n-1+p_{n-1}})}{2^{n+p_{n-1}}}m_{n-1}^2O(q_{n-1}) .
\end{split}
\end{equation*}
For the other terms in \eqref{pneq} we can use directly \eqref{approx1b}, \eqref{approx2a}, and \eqref{approx2b}: we obtain
\begin{equation*}
\begin{split}
m_n\frac{\,\mathrm{d} p_n}{\,\mathrm{d} t}
& = \frac{\ln2}{2}\frac{k(2^{n-1+p_{n-1}})}{2^{n+p_{n-1}}}m_{n-1}^2 \Bigl( (p_{n-1}-p_n) (1+O(q_{n-1})) + O(q_{n-1}) \Bigr) \\
& \qquad -\ln2\frac{k(2^{n+p_n})}{2^{n+p_n+1}}m_n^2O(q_n) -\frac{\gamma(2^{n+p_n})}{4}m_nO(q_n) \\
& \qquad + \frac{\gamma(2^{n+1+p_{n+1}})}{2} m_{n+1} \Bigl( (p_{n+1}-p_n)(1+O(q_{n+1})) +O(q_{n+1}) \Bigr) ,
\end{split}
\end{equation*}
from which \eqref{pneq2} follows.
It remains to prove the equation \eqref{qneq2} for $q_n$, starting from \eqref{qneq}. The first term in \eqref{qneq} becomes, using \eqref{taylorlog}, the symmetry of the kernel, \eqref{approx3a}, \eqref{approx2a}, and \eqref{approx1a},
\begin{equation*}
\begin{split}
\int_{I_{n-1}}&\int_{I_{n-1}} K(2^y,2^z)g(y)g(z) \biggl( \biggl(\frac{\ln(2^y+2^z)}{\ln 2}-n-p_n\biggr)^2 - q_n\biggr)\,\mathrm{d} y\,\mathrm{d} z \\
& = \int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z) \biggl[ \biggl( \frac{y+z}{2} - (n-1) - p_n \\
& \qquad\qquad + O\bigl( (y-(n-1)-p_{n-1})^2+(z-(n-1)-p_{n-1})^2 \bigr) \biggr)^2 - q_n\biggr] \,\mathrm{d} y\,\mathrm{d} z \\
& = \Bigl(\frac12q_{n-1}-q_n\Bigr)\int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z)\,\mathrm{d} y\,\mathrm{d} z \\
& \qquad + \frac12\int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z) \Bigl( (y-(n-1)-p_{n-1})^2-q_{n-1} \Bigr) \,\mathrm{d} y \,\mathrm{d} z \\
& \qquad + \frac12\int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z)\bigl(y-(n-1)-p_{n-1}\bigr)\bigl(z-(n-1)-p_{n-1}\bigr) \,\mathrm{d} y \,\mathrm{d} z \\
& \qquad + 2(p_{n-1}-p_n)\int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z)\bigl(y-(n-1)-p_{n-1}\bigr) \,\mathrm{d} y \,\mathrm{d} z \\
& \qquad + (p_{n-1}-p_n)^2\int_{I_{n-1}}\int_{I_{n-1}} K(2^y,2^z)g(y)g(z)\,\mathrm{d} y \,\mathrm{d} z + \frac{k(2^{n-1+p_{n-1}})}{2^{n+p_{n-1}}}m_{n-1}^2\delta_0O(q_{n-1})\\
& = \frac{k(2^{n-1+p_{n-1}})}{2^{n+p_{n-1}}}m_{n-1}^2 \biggl[ \Bigl(\frac12q_{n-1}-q_n\Bigr)(1+O(q_{n-1})) + \delta_0 O(q_{n-1}) \\
& \qquad\qquad +(p_{n-1}-p_n)O(q_{n-1}) + (p_{n-1}-p_n)^2(1+O(q_{n-1}))\biggr].
\end{split}
\end{equation*}
For the second and third term in \eqref{qneq} we can use directly \eqref{approx3a} and \eqref{approx3b}, while for the last term in \eqref{qneq} we have, by using \eqref{approx3b}, \eqref{approx1b} and \eqref{approx2b},
\begin{equation*}
\begin{split}
\frac12 & \int_{I_{n+1}}\gamma(2^y)g(y) \Bigl( \bigl(y-(n+1)-p_n\bigr)^2 - q_n \Bigr) \,\mathrm{d} y \\
& = \frac12\int_{I_{n+1}}\gamma(2^y)g(y)\Bigl( (y-(n+1)-p_n)^2-(y-(n+1)-p_{n+1})^2\Bigr)\,\mathrm{d} y \\
& \qquad + \frac12\bigl(q_{n+1}-q_n\bigr)\int_{I_{n+1}}\gamma(2^y)g(y)\,\mathrm{d} y + \frac12 \gamma(2^{n+1+p_{n+1}}) \delta_0 m_{n+1}O(q_{n+1}) \\
& = (p_{n+1}-p_n)\int_{I_{n+1}}\gamma(2^y)g(y)(y-(n+1)-p_{n+1})\,\mathrm{d} y + \frac12(p_{n+1}-p_n)^2\int_{I_{n+1}}\gamma(2^y)g(y)\,\mathrm{d} y \\
& \qquad + \frac{\gamma(2^{n+1+p_{n+1}})}{2}m_{n+1} \Bigl( (q_{n+1}-q_n)(1+O(q_{n+1})) + \delta_0 O(q_{n+1}) \Bigr) \\
& = \frac{\gamma(2^{n+1+p_{n+1}})}{2}m_{n+1} \Bigl( (p_{n+1}-p_n)O(q_{n+1}) + (p_{n+1}-p_n)^2(1+O(q_{n+1})) \\
& \qquad + (q_{n+1}-q_n)(1+O(q_{n+1})) + \delta_0 O(q_{n+1}) \Bigr).
\end{split}
\end{equation*}
The equality \eqref{qneq2} follows.
\end{proof}
Suppose now that the coefficients $m_n(t)$ can be represented as perturbations of the functions $\bar{m}_n$, according to \eqref{yn}.
By plugging the expression \eqref{yn} into \eqref{mneq2}, and using \eqref{nearlystat1}, \eqref{nearlystat3bis}, and \eqref{nearlystat6}, we find
\begin{align} \label{yneq2}
\frac{\,\mathrm{d} y_n}{\,\mathrm{d} t}
& = - \frac{1}{2^n\bar{m}_n}\frac{\partial\bar{m}_n}{\partial A}\frac{\,\mathrm{d} A}{\,\mathrm{d} t} (1+2^ny_n)
- \frac{1}{2^n}\sum_{k=-\infty}^\infty \frac{1}{\bar{m}_n}\frac{\partial\bar{m}_n}{\partial p_k}\frac{\,\mathrm{d} p_k}{\,\mathrm{d} t}(1+2^ny_n) \nonumber\\
& + \frac{\gamma(2^{n+p_n})}{2^{n+2}}\Bigl( \zeta_{n-1}(p) \frac{\bar{m}_{n-1}^2}{\bar{m}_n}(1+2^{n-1}y_{n-1})^2(1+O(q_{n-1})) - (1+2^ny_n)(1+O(q_n)) \Bigr) \nonumber\\
& - \frac{\gamma(2^{n+1+p_{n+1}})}{2^{n+1}} \Bigl( \zeta_n(p) \bar{m}_n(1+2^ny_n)^2(1+O(q_n)) - \frac{\bar{m}_{n+1}}{\bar{m}_n}(1+2^{n+1}y_{n+1})(1+O(q_{n+1})) \Bigr) \nonumber\\
& = (1+2^ny_n)\frac{\,\mathrm{d} A}{\,\mathrm{d} t} - (1+2^ny_n)\frac{1}{2^n}\sum_{k=n}^\infty \frac{1}{\bar{m}_n}\frac{\partial\bar{m}_n}{\partial p_k}\frac{\,\mathrm{d} p_k}{\,\mathrm{d} t} \nonumber\\
& + \frac{\gamma(2^{n+p_n})}{2^{n+2}} \biggl[ (1+2^{n-1}y_{n-1})^2(1+O(q_{n-1})) - (1+2^ny_n)(1+O(q_n)) \nonumber\\
& - 4\bar{\mu}_n\frac{\gamma(2^{n+1+p_{n+1}})}{\gamma(2^{n+p_n})} \Bigl( (1+2^ny_n)^2(1+O(q_n)) - (1+2^{n+1}y_{n+1})(1+O(q_{n+1})) \Bigr) \biggr] ,
\end{align}
where $\bar{\mu}_n=\bar{\mu}_n(A(t),p(t))$. Similarly, under the assumption \eqref{yn} we can write the approximate equations \eqref{pneq2}, \eqref{qneq2} for $p_n(t)$ and $q_n(t)$ in the following form:
\begin{multline} \label{pneq3}
\frac{\,\mathrm{d} p_n}{\,\mathrm{d} t}
= \frac{\gamma(2^{n+p_n})}{4} \biggl[ \frac{(1+2^{n-1}y_{n-1})^2}{(1+2^ny_n)} \Bigl( (p_{n-1}-p_n) + O(q_{n-1}) \Bigr) + O(q_n) \\
- 4\bar{\mu}_n\frac{\gamma(2^{n+1+p_{n+1}})}{\gamma(2^{n+p_n})} \biggl( \frac{(1+2^{n+1}y_{n+1})}{(1+2^ny_n)}\Bigl( (p_{n}-p_{n+1}) + O(q_{n+1}) \Bigr) + (1+2^ny_n)O(q_n) \biggr)\biggr],
\end{multline}
\begin{align} \label{qneq3}
\frac{\,\mathrm{d} q_n}{\,\mathrm{d} t}
& = \frac{\gamma(2^{n+p_n})}{4} \biggl[ \frac{(1+2^{n-1}y_{n-1})^2}{(1+2^ny_n)} \Bigl( \frac{q_{n-1}}{2}-q_n + \delta_0O(q_{n-1}) + (p_{n-1}-p_n)^2 \Bigr) + \delta_0O(q_n) \nonumber \\
& \qquad - 4\bar{\mu}_n\frac{\gamma(2^{n+1+p_{n+1}})}{\gamma(2^{n+p_n})} \frac{(1+2^{n+1}y_{n+1})}{(1+2^ny_n)}\Bigl( (q_{n}-q_{n+1}) + \delta_0O(q_{n+1}) - (p_{n+1}-p_n)^2 \Bigr) \nonumber \\
& \qquad - 4\bar{\mu}_n\frac{\gamma(2^{n+1+p_{n+1}})}{\gamma(2^{n+p_n})} (1+2^ny_n)\delta_0O(q_n) \biggr],
\end{align}
with $\bar{\mu}_n=\bar{\mu}_n(A(t),p(t))$.
This is the most convenient form of the equations for $p_n$ and $q_n$ in which to prove their desired decay properties.
\section{The linearized equation}\label{sect:linear}
The main evolution equations that we are going to investigate in this paper, namely \eqref{yneq2}, \eqref{pneq3}, \eqref{qneq3}, can be seen as perturbations of the same linearized problem. The proof of the main result therefore relies on the properties of solutions to the linearized equation, which in its simplest form reads
\begin{equation} \label{linear1}
\frac{\,\mathrm{d} y_n}{\,\mathrm{d} t} = \frac{\gamma(2^{n+\rho})}{4} \Bigl[ y_{n-1}-y_n - \sigma_n\bigl( y_n-y_{n+1}\bigr) \Bigr] ,
\end{equation}
for given $\rho\in[0,1)$ and coefficients $\sigma_n$ with suitable asymptotic properties as $n\to\pm\infty$.
We studied this linear equation in full detail in \cite[Section~5]{BNVd}, with $\sigma_n$ given by
\begin{equation} \label{linear3}
\sigma_n:=4\zeta_{n,\rho}a_n(A_{M,\rho},\rho)\frac{\gamma(2^{n+1+\rho})}{\gamma(2^{n+\rho})}
\end{equation}
(here $\zeta_{n,\rho}$ are defined in \eqref{eq:stat} and $a_n(A_{M,\rho},\rho)$ are the coefficients of a stationary solution with total mass $M$, see Proposition~\ref{prop:stationary}).
We remark that for this analysis the particular form of $\sigma_n$ is not relevant; what matters is only the asymptotic behaviour, namely $\sigma_n\to8$ as $n\to-\infty$ and $\sigma_n=O(e^{-A_M2^n})$ as $n\to\infty$.
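For the reader's orientation we sketch why these asymptotic properties hold. The first limit can be checked directly from \eqref{linear3}: as $n\to-\infty$ one has $\zeta_{n,\rho}\approx\frac{k_0\ln2}{\gamma_0}2^{-(n+\rho)}$ by \eqref{eq:stat}, \eqref{kernel2bis} and \eqref{kernel5bis}, while $a_n(A_{M,\rho},\rho)\approx a_{-\infty}2^n$ with $a_{-\infty}=\frac{\gamma_02^{\rho+1}}{k_0\ln2}$ by \eqref{peak5}, and $\gamma(2^{n+1+\rho})/\gamma(2^{n+\rho})\to1$; hence
\begin{equation*}
\sigma_n \approx 4\cdot\frac{k_0\ln2}{\gamma_0}2^{-(n+\rho)}\cdot\frac{\gamma_0\,2^{\rho+1}}{k_0\ln2}\,2^n = 8 \qquad\text{as }n\to-\infty.
\end{equation*}
The second property follows from the exponential decay of $a_n$ in \eqref{peak5}, since $\zeta_{n,\rho}$ and the ratio of the $\gamma$'s are at most of polynomial size in $2^n$.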
The goal of this section is to extend the linear theory in order to treat the more general case of small time-dependent perturbations of the coefficients in \varepsilonqref{linear1}, which appears in the context of this paper.
\mathbb Subsection{The constant coefficients case}
We start by recalling the result proved in \cite{BNVd} for \varepsilonqref{linear1}. We will use in the following the notation introduced in \varepsilonqref{spacesequences} for the space of sequences $\mathcal{Y}_\theta$ and the norm $\|\cdot\|_\theta$. It is convenient to denote the linear operator on the right-hand side of \varepsilonqref{linear1}, acting on a sequence $y=\{y_n\}_{n\in\mathbb Z}$, by
\begin{equation} \label{linear4}
\mathscr{L}_n(y) := \frac{\gamma(2^{n})}{4} \Bigl[ y_{n-1}-y_n - \mathbb Sigma_n\bigl( y_n-y_{n+1}\bigr) \Bigr],
\qquad
\mathscr{L}(y) := \{ \mathscr{L}_n(y) \}_{n\in\mathbb Z}
\varepsilonnd{equation}
(we assume here for simplicity $\rho=0$). We also introduce a symbol for the discrete derivatives
\begin{equation} \label{linearD}
D^+_n(y) := y_{n+1}-y_n,
\qquad
D^-_n(y) := y_n-y_{n-1},
\qquad
D^{\pm}(y) := \{ D^{\pm}_n(y) \}_{n\in\mathbb Z} \,.
\varepsilonnd{equation}
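For later reference, we note that in this notation the operator \eqref{linear4} can be written compactly in terms of the discrete derivatives, namely
\begin{equation*}
\mathscr{L}_n(y) = \frac{\gamma(2^{n})}{4} \Bigl( \Sigma_n D^+_n(y) - D^-_n(y) \Bigr),
\end{equation*}
as one checks directly from \eqref{linear4} and \eqref{linearD}.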
The following result was proved in \cite[Theorem~5.1]{BNVd}.
\begin{theorem} \label{thm:linear}
Let $\mathbb Sigma_n$ be as in \varepsilonqref{linear3}, for a fixed value of $M$ (and $\rho=0$). Let also
\begin{equation*}
\theta\in(-1,\beta), \qquad \tilde{\theta}\in[\theta,\beta], \qquad\text{with }\tilde{\theta}-\theta<\beta,
\varepsilonnd{equation*}
be fixed parameters, and let $y^0=\{y^0_n\}_{n\in\mathbb Z}\in\mathcal{Y}_\theta$ be a given initial datum.
Then there exists a unique solution $t\mapsto S(t)(y^0)=\{S_n(t)(y^0)\}_{n\in\mathbb Z}\in\mathcal{Y}_{{\theta}}$ to the linear problem \varepsilonqref{linear1} with initial datum $y^0$.
Furthermore there exist constants $\nu>0$ (depending only on $M$), $C_1=C_1(M,\theta,\tilde{\theta})$, and $C_2=C_2(M,\theta,\tilde{\theta})$ such that for all $t>0$
\begin{equation} \label{linear5b}
\|D^+(S(t)(y^0))\|_{\tilde{\theta}} \leq C_1 \|y^0\|_\theta \, t^{-\frac{\tilde{\theta}-\theta}{\beta}} e^{-\nu t},
\varepsilonnd{equation}
\begin{equation} \label{linear5}
\|S(t)(y^0)-S_{\infty}(t)(y^0)\|_{\tilde{\theta}} \leq C_2 \|y^0\|_\theta \, t^{-\frac{\tilde{\theta}-\theta}{\beta}} e^{-\nu t} \qquad\text{if $\tilde{\theta}>0$,}
\varepsilonnd{equation}
where $S_{\infty}(t)(y^0):=\lim_{n\to\infty}S_n(t)(y^0)$.
\varepsilonnd{theorem}
All the constants in the statement above (and in the rest of this section) also depend on the properties of the coagulation and fragmentation kernels; however, we will not mention this dependence explicitly, as the kernels are fixed throughout the paper. As observed in \cite[Remark~A.5]{BNVd}, the constant $C_1$ in \eqref{linear5b} blows up if $\theta\to\beta$ or $\tilde{\theta}-\theta\to\beta$, while the constant $C_2$ in \eqref{linear5} blows up also if $\tilde\theta\to0$.
\subsection{The case of time-dependent coefficients} \label{subsect:linear}
We now extend the analysis of the linearized problem \varepsilonqref{linear1} to the case of time-dependent coefficients, which appears in the context of this paper. More precisely, we assume that a sequence $p(t)=\{p_n(t)\}_{n\in\mathbb Z}$ is given, with the following decay properties: for all $t>0$
\begin{equation} \label{asspn}
\begin{split}
|p_n(t)| \leq\varepsilonta_0,
\quad
\big\| D^+(p(t)) \big\|_{\bar{\theta}_1} \leq \varepsilonta_0 t^{-\bar{\theta}_1/\beta} e^{-\frac{\nu}{2} t},
\quad
\Big\| \frac{\,\mathrm{d} p(t)}{\,\mathrm{d} t} \Big\|_{\bar{\theta}_1-\beta} \leq \varepsilonta_0 \bigl( 1+t^{-\bar{\theta}_1/\beta}\bigr) e^{-\frac{\nu}{2} t}.
\varepsilonnd{split}
\varepsilonnd{equation}
Here $\varepsilonta_0\in(0,1)$ and $\nu>0$ are constants to be determined later, depending only on the given value of the mass $M>0$, and $\bar{\theta}_1\in(\beta-1,1)$ is a fixed parameter. The bounds \varepsilonqref{asspn} are the expected decay behaviour of the sequence of first moments introduced in \varepsilonqref{pn}, that will be recovered a posteriori.
We study the equation
\begin{equation} \label{slinear1}
\begin{split}
\frac{\,\mathrm{d} y_n}{\,\mathrm{d} t}
& = \frac{\gamma(2^{n+p_n(t)})}{4} \Bigl( y_{n-1}-y_n - \mathbb Sigma_n(t) (y_n-y_{n+1}) \Bigr) ,
\varepsilonnd{split}
\varepsilonnd{equation}
where the coefficients $\mathbb Sigma_n(t)$ are given by
\begin{equation} \label{slinear2}
\mathbb Sigma_n(t) := 8\bar{\mu}_n(A_M,p(t))\frac{\gamma(2^{n+1+p_{n+1}(t)})}{\gamma(2^{n+p_n(t)})}
\varepsilonnd{equation}
(recall here that $\bar{\mu}_n$ are the coefficients explicitly defined in \eqref{nearlystat5}, depending on a positive parameter $A_M$ and on the sequence $p(t)$). The value of the constant $A_M>0$ is fixed throughout this section, and is uniquely determined by the mass $M>0$. The equation \eqref{slinear1} can be seen as a perturbation of \eqref{linear1} in which the shifting parameter $\rho$ is no longer constant, but depends on the peak $n$ and on time. It is convenient to introduce a symbol for the linear operator on the right-hand side of \eqref{slinear1}, acting on a sequence $y=\{y_n\}_{n\in\mathbb Z}$ at time $t$:
\begin{equation} \label{slinear3}
\mathscr{L}_n(y;t) := \frac{\gamma(2^{n+p_n(t)})}{4} \Bigl( y_{n-1}-y_n - \mathbb Sigma_n(t) (y_n-y_{n+1}) \Bigr) ,
\quad \mathscr{L}(y;t) := \{ \mathscr{L}_n(y;t) \}_{n\in\mathbb Z}.
\varepsilonnd{equation}
We remark that, in view of \varepsilonqref{kernel5}, \varepsilonqref{kernel5bis}, \varepsilonqref{nearlystat9}, \varepsilonqref{nearlystat5}, and \varepsilonqref{asspn} we have
\begin{equation} \label{slinear8}
\limsup_{n\to-\infty}|\mathbb Sigma_n(t)-8| \leq c\varepsilonta_0, \qquad \limsup_{n\to\infty}\mathbb Sigma_n(t) \leq ce^{-A_M2^n}
\varepsilonnd{equation}
(for a uniform constant $c>0$, depending only on the kernels). The following result extends Theorem~\ref{thm:linear} to this situation, and its technical proof is postponed to the Appendix~\ref{sect:appendix}.
\begin{theorem} \label{thm:slinear}
There exist $\varepsilonta_0\in(0,1)$ and $\nu>0$, depending only on $M$, with the following property. Let $p(t)$ be a given sequence satisfying \varepsilonqref{asspn}.
Let also
\begin{equation*}
\theta\in(-1,\beta), \qquad \tilde{\theta}\in[\theta,\beta], \qquad\text{with }\tilde{\theta}-\theta<\beta,
\varepsilonnd{equation*}
be fixed parameters, and let $y^0=\{y_n^0\}_{n\in\mathbb Z}\in\mathcal{Y}_\theta$ be a given initial datum.
Then, for every $t_0\geq0$, there exists a unique solution
\begin{equation} \label{slinear4}
t\mapsto T(t;t_0)(y^0) = \{T_n(t;t_0)(y^0)\}_{n\in\mathbb Z}\in\mathcal{Y}_{\theta}
\varepsilonnd{equation}
to \varepsilonqref{slinear1} in $t\in(t_0,\infty)$, with $T(t_0;t_0)(y^0)=y^0$.
Furthermore, there exist constants $C_1=C_1(M,\theta,\tilde{\theta})$ and $C_2=C_2(M,\theta,\tilde{\theta})$ such that
\begin{equation} \label{slinear6b}
\|D^+(T(t;t_0)(y^0))\|_{\tilde{\theta}} \leq C_1 \|y^0\|_\theta (t-t_0)^{-\frac{\tilde{\theta}-\theta}{\beta}} e^{-\nu(t-t_0)},
\varepsilonnd{equation}
\begin{equation} \label{slinear6}
\|T(t;t_0)(y^0)-T_{\infty}(t;t_0)(y^0)\|_{\tilde{\theta}} \leq C_2 \|y^0\|_\theta (t-t_0)^{-\frac{\tilde{\theta}-\theta}{\beta}} e^{-\nu(t-t_0)} \qquad\text{if $\tilde{\theta}>0$}
\varepsilonnd{equation}
for all $t>t_0$, where $T_\infty(t;t_0)(y^0) := \lim_{n\to\infty} T_n(t;t_0)(y^0)$.
\varepsilonnd{theorem}
\subsection{The truncated problem} \label{subsect:lineartrunc}
Finally, we consider the analogue of Theorem~\ref{thm:slinear} obtained when the operator $\mathscr{L}_n$ is truncated for large values of $n$. More precisely, we fix a (large) $N\in\mathbb N$ and study the equation
\begin{equation} \label{tlinear1}
\begin{split}
\frac{\,\mathrm{d} y_n}{\,\mathrm{d} t}
& = \frac{\gamma(2^{n+p_n^N(t)})}{4} \Bigl( y_{n-1}-y_n - \mathbb Sigma_n^N(t) (y_n-y_{n+1}) \Bigr) \qquad\text{for }n\leq N,
\varepsilonnd{split}
\varepsilonnd{equation}
together with $y_n(t)=0$ for $n>N$; here we assume that $p^N(t)=\{p_n^N(t)\}_{n\in\mathbb Z}$ is a given sequence satisfying \eqref{asspn}, and that the coefficients $\Sigma_n^N(t)$ are given by
\begin{equation} \label{tlinear2}
\mathbb Sigma^N_n(t) :=
\begin{cases}
8\bar{\mu}_n(A_M,p^N(t))\frac{\gamma(2^{n+1+p^N_{n+1}(t)})}{\gamma(2^{n+p^N_n(t)})} & \text{if $n<N$,} \\
0 & \text{if $n=N$.}
\varepsilonnd{cases}
\varepsilonnd{equation}
As in \varepsilonqref{slinear3}, we introduce a symbol for the linear operator on the right-hand side of \varepsilonqref{tlinear1}, acting on a sequence $y=\{y_n\}_{n\in\mathbb Z}$ at time $t$:
\begin{equation} \label{tlinear3}
\mathscr{L}_n^N(y;t) := \frac{\gamma(2^{n+p_n^N(t)})}{4} \Bigl( y_{n-1}-y_n - \mathbb Sigma_n^N(t) (y_n-y_{n+1}) \Bigr) ,
\varepsilonnd{equation}
and $\mathscr{L}^N(y;t) := \{ \mathscr{L}_n^N(y;t) \}_{n\in\mathbb Z}$. We then have the following result, analogous to Theorem~\ref{thm:slinear}.
\begin{theorem} \label{thm:lineartrunc}
There exist $\varepsilonta_0\in(0,1)$ and $\nu>0$, depending only on $M$, with the following property. Let $p^N(t)$ be a given sequence satisfying \varepsilonqref{asspn}, and let $\mathbb Sigma^N_n(t)$ be defined by \varepsilonqref{tlinear2}. Let also
\begin{equation*}
\theta\in(-1,\beta), \qquad \tilde{\theta}\in[\theta,\beta], \qquad\text{with }\tilde{\theta}-\theta<\beta,
\varepsilonnd{equation*}
be fixed parameters, and let $y^0=\{y_n^0\}_{n\in\mathbb Z}\in\mathcal{Y}_\theta$ be a given initial datum with $y^0_n=0$ for $n>N$.
Then, for every $t_0\geq0$, there exists a unique solution
\begin{equation} \label{tlinear4}
t\mapsto T^N(t;t_0)(y^0) = \{T_n^N(t;t_0)(y^0)\}_{n\in\mathbb Z}\in\mathcal{Y}_{\theta}
\varepsilonnd{equation}
to \varepsilonqref{tlinear1} in $t\in(t_0,\infty)$, with $T^N(t_0;t_0)(y^0)=y^0$.
Furthermore, there exist constants $C_1=C_1(M,\theta,\tilde{\theta})$ and $C_2=C_2(M,\theta,\tilde{\theta})$ such that
\begin{equation} \label{tlinear5}
\|D^+(T^N(t;t_0)(y^0))\|_{\tilde{\theta}} \leq C_1 \|y^0\|_\theta (t-t_0)^{-\frac{\tilde{\theta}-\theta}{\beta}} e^{-\nu(t-t_0)},
\varepsilonnd{equation}
\begin{equation} \label{tlinear6}
\|T^N(t;t_0)(y^0)-T^N_{N}(t;t_0)(y^0)\|_{\tilde{\theta}} \leq C_2 \|y^0\|_\theta (t-t_0)^{-\frac{\tilde{\theta}-\theta}{\beta}} e^{-\nu(t-t_0)} \qquad\text{if $\tilde{\theta}>0$,}
\varepsilonnd{equation}
\begin{equation} \label{tlinear7}
|\mathscr{L}^N_N(T^N(t;t_0)(y^0),t)| \leq C_2 \|y^0\|_\theta (t-t_0)^{-\frac{\tilde{\theta}-\theta}{\beta}} e^{-\nu(t-t_0)} \qquad\text{if $\tilde{\theta}>0$.}
\varepsilonnd{equation}
\varepsilonnd{theorem}
The proof of the theorem can be obtained by a slight refinement of the argument given for Theorem~\ref{thm:slinear}.
\section{Proof of the main result}\label{sect:proof}
Throughout this section we assume that $g_0\in\mathcal{M}_+(\R)$ is a given initial datum, with total mass $M>0$, satisfying the assumptions of Theorem~\ref{thm:stability} for some $\delta_0>0$ that will be chosen at the end of the proof. For the moment we assume that $\delta_0\in(0,1)$ satisfies the condition \eqref{delta0a}, which guarantees the conservation of the support. We also let $\nu>0$ be the constant given by Theorem~\ref{thm:lineartrunc}, determined solely by $M$.
\subsection{Truncation} \label{subsect:truncation}
The first step in the proof is to obtain a truncated solution by cutting off the tail for large $n$. More precisely, we fix a parameter $N\in\mathbb N$ ($N\gg1$) and we truncate the kernels and the initial datum by setting
\begin{equation} \label{truncation1}
g_0^N(x):=g_0(x)\chi_{(-\infty,N+\frac12)}(x),
\varepsilonnd{equation}
\begin{equation} \label{truncation2}
K_N(2^y,2^z) := K(2^y,2^z)\chi_{(-\infty,N-\frac12)}(y),
\qquad
\gamma_N(2^y) := \gamma(2^y)\chi_{(-\infty,N+\frac12)}(y).
\varepsilonnd{equation}
We are now in a position to apply the well-posedness Theorem~\ref{thm:wp} (replacing the original kernels by the truncated ones) with the initial datum $g_0^N$: indeed, the assumption \eqref{m0} guarantees that the integrability conditions \eqref{moment0} are satisfied. This yields the existence of a global-in-time weak solution $t\mapsto g^N(t)\in\mathcal{M}_+(\R)$, in the sense of Definition~\ref{def:weakg}, with conserved mass; the estimates \eqref{wpestg} hold uniformly with respect to $N$.
Furthermore, by Lemma~\ref{lem:supp} (which continues to hold in this truncated setting) we have that $\mathbb Supp g^N(t) \mathbb Subset \bigcup_{n\in\mathbb Z}I_n$ for all positive times, where $I_n=(n-\,\mathrm{d}lta_0,n+\,\mathrm{d}lta_0)$. By using this information and bearing in mind that by \varepsilonqref{suppK} and \varepsilonqref{delta0a} different intervals do not interact in the coagulation term, we can write the weak formulation \varepsilonqref{weakg} of the equation as follows:
\begin{equation*}
\begin{split}
\partial_t\biggl(\int_{\mathbb R} &g^N(x,t)\varphi(x)\,\mathrm{d} x \biggr) \\
& = \frac{\ln2}{2}\mathbb Sum_{k\leq N-1}\int_{I_k}\int_{I_k} K_N(2^y,2^z)g^N(y,t)g^N(z,t)\biggl[\varphi\Bigl(\frac{\ln(2^y+2^z)}{\ln2}\Bigr) - \varphi(y) - \varphi(z) \biggr]\,\mathrm{d} y \,\mathrm{d} z \\
& \qquad - \frac14\mathbb Sum_{k\leq N-1} \int_{I_k} \gamma_N(2^{y+1}) g^N(y+1,t) \bigl[\varphi(y+1) - 2\varphi(y) \bigr]\,\mathrm{d} y
\varepsilonnd{split}
\varepsilonnd{equation*}
for every test function $\varphi\in C_{\mathrm c}(\mathbb R)$. It is then clear (recall the implication \varepsilonqref{claimsupp}) that
\begin{equation} \label{truncation4}
\mathbb Supp g^N(t) \mathbb Subset \bigcup_{n\leq N} I_n \qquad\text{for all $t>0$.}
\varepsilonnd{equation}
We define the quantities $m_n^N(t)$, $p_n^N(t)$, $q_n^N(t)$, according to \varepsilonqref{mn}, \varepsilonqref{pn}, \varepsilonqref{qn}, respectively, with $g$ replaced by $g^N$.
The evolution equations for these quantities (for a non-truncated weak solution) were obtained in Lemma~\ref{lem:approx}; since we are now working with the truncated kernels, those equations continue to hold for $m_n^N(t)$, $p_n^N(t)$, $q_n^N(t)$ for all $n\leq N-1$, while they have to be slightly modified for $n=N$. For the larger values $n>N$ all these quantities vanish identically by \eqref{truncation4}.
For instance, the equation for $m^N_n(t)$ for $n=N$ becomes
\begin{equation}\label{mneq2N}
\frac{\,\mathrm{d} m^N_N}{\,\mathrm{d} t} = \frac{\gamma(2^{N+p^N_N})}{4}\Bigl( \zeta_{N-1}(p^N) (m^N_{N-1})^2 (1+O(q^N_{N-1})) - m^N_N(1+O(q^N_N)) \Bigr).
\varepsilonnd{equation}
\subsection{Decay of the first and second moments} \label{subsect:pnqn}
We remark that by \varepsilonqref{truncation4}
\begin{equation} \label{decaypn0}
|p_n^N(t)| \leq \,\mathrm{d}lta_0, \qquad 0\leq q_n^N(t) \leq 4\,\mathrm{d}lta_0^2 \qquad\text{for all $t\geq0$.}
\varepsilonnd{equation}
We now fix two auxiliary parameters $\bar\theta_1$ and $\bar{\theta}_2$ satisfying
\begin{equation} \label{theta12}
\beta-1 < \bar{\theta}_1 < \bar{\theta}_2 <1.
\varepsilonnd{equation}
These parameters are fixed throughout the paper; therefore, in the following we will not explicitly mention the dependence of the constants on $\bar\theta_1$, $\bar\theta_2$. We also let $L_1$, $L_2$, $L_3$ be positive constants that will be chosen later, depending only on $M$.
Since we are considering a truncated problem, we can assume that there exists a small time interval $[0,t^N]$, with $t^N>0$ (depending on $N$), such that for all $t\in(0,t^N)$
\begin{equation} \label{decaypn1}
\|D^+(p^N(t))\|_{0} \leq 2L_1\,\mathrm{d}lta_0e^{-\frac{\nu}{2}t},
\qquad
\| D^+(p^N(t))\|_{\bar{\theta}_1} \leq 2L_1\,\mathrm{d}lta_0 \, t^{-\bar{\theta}_1/\beta} e^{-\frac{\nu}{2} t},
\varepsilonnd{equation}
\begin{equation} \label{decaypn2}
\Big\|\frac{\,\mathrm{d} p^N}{\,\mathrm{d} t}(t)\Big\|_{-\beta} \leq 2L_2 \,\mathrm{d}lta_0 e^{-\frac{\nu}{2} t},
\qquad
\Big\|\frac{\,\mathrm{d} p^N}{\,\mathrm{d} t}(t)\Big\|_{\bar{\theta}_1 - \beta} \leq 2L_2 \,\mathrm{d}lta_0 \bigl(1+t^{-\bar{\theta}_1/\beta}\bigr) e^{-\frac{\nu}{2} t},
\varepsilonnd{equation}
\begin{equation} \label{decayqn1}
\mathbb Sup_{n\in\mathbb Z}|q_n^N(t)| \leq 8\,\mathrm{d}lta_0^{3/2} e^{-\nu t},
\qquad
\mathbb Sup_{n>0} 2^{\bar{\theta}_2 n} | q^N_n(t) | \leq 2L_3\,\mathrm{d}lta_0^{3/2} \, t^{-\bar{\theta}_2/\beta} e^{-\nu t},
\varepsilonnd{equation}
where $D^+$ denotes the discrete derivative (see \varepsilonqref{linearD}).
This is the expected decay of the functions $p^N$, $q^N$, as suggested by the natural scaling in the corresponding evolution equations. For technical reasons we cannot obtain the optimal decay, namely the one corresponding to $\bar{\theta}_1=\bar{\theta}_2=1$, and we have to introduce the two additional parameters satisfying \eqref{theta12}.
In addition to \eqref{delta0a}, we now make a second assumption on $\delta_0$, namely
\begin{equation} \label{delta0b}
\delta_0<\eta_0, \qquad 2L_1\delta_0 < \eta_0, \qquad 2L_2\delta_0 < \eta_0,
\end{equation}
where $\eta_0>0$ is the fixed constant given by Theorem~\ref{thm:lineartrunc}, determined by $M$; with this assumption the sequence $p^N$ satisfies the condition \eqref{asspn} in the interval $(0,t^N)$, which allows us to apply the linear theory developed in Section~\ref{sect:linear}.
\subsection{Fixed point} \label{subsect:fixedpoint}
The next goal is to represent the functions $m_n^N(t)$, in the time interval $[0,t^N]$, as perturbations of stationary states, as in \varepsilonqref{yn}.
In order to do this, we first need to rewrite the starting assumption \varepsilonqref{m0} in an equivalent form for the functions $m^N(0)$.
\begin{lemma} \label{lem:initialcond}
For all sufficiently large $N$ there exist $A^N(0)>0$ and $y^N(0)=\{y^N_n(0)\}_{n\in\mathbb Z}$ such that
\begin{equation} \label{m0N}
m_n^N(0) = \bar{m}_n(A^N(0),p^N(0)) (1+2^ny^N_n(0)) \qquad\text{for all $n\leq N$,}
\varepsilonnd{equation}
with $y^N_N(0)=0$ and
\begin{equation} \label{A0Ny0N}
|A^N(0)-A^0| \leq c_02^{-N}\,\mathrm{d}lta_0, \qquad \|y^N(0)-y^0\|_1 \leq c_0\,\mathrm{d}lta_0
\varepsilonnd{equation}
for a uniform constant $c_0>0$, independent of $N$.
\varepsilonnd{lemma}
\begin{proof}
We use the explicit representation of $\bar{m}_n$ that can be obtained by combining \varepsilonqref{nearlystat2} and \varepsilonqref{nearlystat5}:
\begin{equation} \label{proofinitialcond1}
\bar{m}_n(A,p) = \frac{2e^{-A2^n}}{\zeta_n(p)} \varepsilonxp\Biggl( -2^n\mathbb Sum_{j=n+1}^{\infty}2^{-j}\ln(\theta_{j-1}(p))\Biggr),
\varepsilonnd{equation}
where the coefficients $\zeta_n(p)$, $\theta_n(p)$ are defined in \varepsilonqref{nearlystat1} and \varepsilonqref{nearlystat3} respectively. By observing that $m^N_n(0)=m_n(0)$ for all $n\leq N$, recalling the assumption \varepsilonqref{m0} and imposing that \varepsilonqref{m0N} holds at $n=N$ with $y^N_N(0)=0$, we obtain the condition that defines $A^N(0)$:
\begin{equation*}
\bar{m}_N(A^N(0),p^N(0)) = \bar{m}_N(A^0,p^0)(1+2^Ny_N^0),
\varepsilonnd{equation*}
which yields, using \varepsilonqref{proofinitialcond1},
\begin{equation*}
e^{(A^N(0)-A^0)2^N}
= \frac{\zeta_N(p^0)}{\zeta_N(p^N(0))} \varepsilonxp\Biggl( 2^N\mathbb Sum_{j=N+1}^{\infty}2^{-j} \ln\Bigl(\frac{\theta_{j-1}(p^0)}{\theta_{j-1}(p^N(0))}\Bigr) \Biggr)(1+2^Ny_N^0)^{-1},
\varepsilonnd{equation*}
or equivalently
\begin{equation*}
A^N(0)-A^0
= 2^{-N}\ln\biggl(\frac{\zeta_N(p^0)}{\zeta_N(p^N(0))}\biggr) +\mathbb Sum_{j=N+1}^{\infty}2^{-j} \ln\Bigl(\frac{\theta_{j-1}(p^0)}{\theta_{j-1}(p^N(0))}\Bigr) - 2^{-N}\ln(1+2^Ny_N^0).
\varepsilonnd{equation*}
This equation selects the value $A^N(0)$, and moreover implies the first estimate in \varepsilonqref{A0Ny0N}. We further define $y^N_n(0)$, for $n<N$, by imposing that \varepsilonqref{m0N} holds: this gives for $n<N$
\begin{align*}
2^ny_n^N(0) &:= \frac{m_n^N(0)}{\bar{m}_n(A^N(0),p^N(0))} - 1 = \frac{\bar{m}_n(A^0,p^0)(1+2^ny_n^0)}{\bar{m}_n(A^N(0),p^N(0))} - 1 \\
& \xupref{proofinitialcond1}{=} e^{(A^N(0)-A^0)2^n}\frac{\zeta_n(p^N(0))}{\zeta_n(p^0)} \varepsilonxp\Biggl( - 2^n\mathbb Sum_{j=n+1}^{\infty}2^{-j}\ln\Bigl(\frac{\theta_{j-1}(p^0)}{\theta_{j-1}(p^N(0))}\Bigr)\Biggr) (1+2^ny^0_n) - 1.
\varepsilonnd{align*}
From this expression the second estimate in \eqref{A0Ny0N} also follows.
\varepsilonnd{proof}
We now look for functions $A^N(t)$, $y^N(t)=\{y_n^N(t)\}_{n\in\mathbb Z}$ such that
\begin{equation} \label{ynN}
m^N_n(t) = \bar{m}_n(A^N(t),p^N(t))\bigl( 1 + 2^ny^N_n(t) \bigr) \qquad \text{for all $n\leq N$.}
\varepsilonnd{equation}
By plugging this expression into \varepsilonqref{mneq2}, as in \varepsilonqref{yneq2}, we find for all $n<N$
\begin{align*}
\frac{\,\mathrm{d} y^N_n}{\,\mathrm{d} t}
& = (1+2^ny^N_n)\frac{\,\mathrm{d} A^N}{\,\mathrm{d} t} - (1+2^ny^N_n)\frac{1}{2^n}\mathbb Sum_{k=n}^\infty \frac{1}{\bar{m}_n}\frac{\partial\bar{m}_n}{\partial p_k}\frac{\,\mathrm{d} p^N_k}{\,\mathrm{d} t} \nonumber\\
& \qquad + \frac{\gamma(2^{n+p^N_n})}{2^{n+2}} \biggl[ (1+2^{n-1}y^N_{n-1})^2(1+O(q^N_{n-1})) - (1+2^ny^N_n)(1+O(q^N_n)) \nonumber\\
& \qquad - 4\bar{\mu}_n\frac{\gamma(2^{n+1+p^N_{n+1}})}{\gamma(2^{n+p^N_n})} \Bigl( (1+2^ny^N_n)^2(1+O(q^N_n)) - (1+2^{n+1}y^N_{n+1})(1+O(q^N_{n+1})) \Bigr) \biggr] ,
\varepsilonnd{align*}
where $\bar{m}_n=\bar{m}_n(A^N(t),p^N(t))$, $\bar{\mu}_n=\bar{\mu}_n(A^N(t),p^N(t))$ (see \varepsilonqref{nearlystat2}). For $n=N$, using instead \varepsilonqref{mneq2N}, we have
\begin{align*}
\frac{\,\mathrm{d} y^N_N}{\,\mathrm{d} t}
& = (1+2^Ny^N_N)\frac{\,\mathrm{d} A^N}{\,\mathrm{d} t} - (1+2^Ny^N_N)\frac{1}{2^N}\mathbb Sum_{k=N}^\infty \frac{1}{\bar{m}_N}\frac{\partial\bar{m}_N}{\partial p_k}\frac{\,\mathrm{d} p^N_k}{\,\mathrm{d} t} \nonumber\\
& \qquad + \frac{\gamma(2^{N+p^N_N})}{2^{N+2}} \biggl[ (1+2^{N-1}y^N_{N-1})^2(1+O(q^N_{N-1})) - (1+2^Ny^N_N)(1+O(q^N_N)) \biggr] .
\varepsilonnd{align*}
The previous equations for the sequence $y^N(t)$ can be written in a more compact form, where we highlight the leading order linear operator:
\begin{multline} \label{yneq}
\frac{\,\mathrm{d} y^N_n}{\,\mathrm{d} t}
= \frac{\gamma(2^{n+p^N_n})}{4} \Bigl( y^N_{n-1}-y^N_n - \mathbb Sigma^N_n(t) (y^N_n-y^N_{n+1}) \Bigr) \\
+ (1+2^ny^N_n)\frac{\,\mathrm{d} A^N}{\,\mathrm{d} t} + \mathbb Sum_{i=1}^{3}r^{(i)}_n(y^N(t),A^N(t),t) \qquad\text{for $n\leq N$}
\varepsilonnd{multline}
and $y_n^N(t)\varepsilonquiv0$ for $n>N$, where we introduced the coefficients
\begin{equation} \label{sigman}
\mathbb Sigma^N_n(t) :=
\begin{cases}
8\bar{\mu}_n(A_M,p^N(t))\frac{\gamma(2^{n+1+p^N_{n+1}(t)})}{\gamma(2^{n+p^N_n(t)})} & \text{if $n<N$,} \\
0 & \text{if $n=N$.}
\varepsilonnd{cases}
\varepsilonnd{equation}
The remainders $r^{(i)}_n$ in \varepsilonqref{yneq} are explicitly given (for $n<N$) by
\begin{multline} \label{r1}
r^{(1)}_n(y,A,t) := \frac{2^n\gamma(2^{n+p^N_n})}{4} \biggl[ \frac14 y_{n-1}^2 - 4\bar{\mu}_n(A,p^N)\frac{\gamma(2^{n+1+p^N_{n+1}})}{\gamma(2^{n+p^N_n})} y_n^2 \biggr] \\
+ 2\gamma(2^{n+1+p^N_{n+1}}) \bigl(\bar{\mu}_n(A_M,p^N)-\bar{\mu}_n(A,p^N)\bigr)\bigl( y_n-y_{n+1} \bigr),
\varepsilonnd{multline}
\begin{multline} \label{r2}
r^{(2)}_n(y,A,t) := \frac{\gamma(2^{n+p^N_n})}{2^{n+2}} \Bigl( (1+2^{n-1}y_{n-1})^2O(q^N_{n-1}) - (1+2^ny_n)O(q^N_n) \Bigr) \\
- \bar{\mu}_n(A,p^N)\frac{\gamma(2^{n+1+p^N_{n+1}})}{2^n} \Bigl( (1+2^ny_n)^2O(q^N_n) - (1+2^{n+1}y_{n+1})O(q^N_{n+1}) \Bigr),
\varepsilonnd{multline}
\begin{equation} \label{r3}
r^{(3)}_n(y,A,t)
:= - (1+2^ny_n)\frac{1}{2^n}\mathbb Sum_{k=n}^\infty \frac{1}{\bar{m}_n(A,p^N(t))}\frac{\partial\bar{m}_n}{\partial p_k}(A,p^N(t))\frac{\,\mathrm{d} p^N_k}{\,\mathrm{d} t}
\varepsilonnd{equation}
(for $n=N$, the terms containing $\bar{\mu}_n$ in $r^{(1)}_N$ and $r^{(2)}_N$ are not present).
At this point it is important to recall that $p^N(t)=\{p^N_n(t)\}_{n\in\mathbb Z}$ and $q^N(t)=\{q^N_n(t)\}_{n\in\mathbb Z}$ are given sequences, satisfying the estimates \varepsilonqref{decaypn1}--\varepsilonqref{decayqn1} in the small time interval $(0,t^N)$. The goal is to show the existence of a pair $(y^N(t),A^N(t))$ solving \varepsilonqref{yneq}.
The linearized operator in \varepsilonqref{yneq} has exactly the form \varepsilonqref{tlinear3}, with the sequence $p^N(t)$ satisfying the assumption \varepsilonqref{asspn} in $(0,t^N)$ in view of \varepsilonqref{delta0b}; we then denote by $T^N(t;s)$ the corresponding resolvent operator, according to \varepsilonqref{tlinear4}.
By Duhamel's formula, the solution to \varepsilonqref{yneq} with a given initial datum $y^N(0)$ can be represented in terms of the solution to the linearized problem as
\begin{multline} \label{yn2}
y^N_n(t) = T^N_n(t;0)(y^N(0)) + A^N(t) - A^0 + \int_0^t \frac{\,\mathrm{d} A^N}{\,\mathrm{d} t}(s)T^N_n(t;s)(P(y^N(s)))\,\mathrm{d} s \\
+ \mathbb Sum_{i=1}^{3}\int_0^t T^N_n(t;s)(r^{(i)}(y^N(s),A^N(s),s))\,\mathrm{d} s
\varepsilonnd{multline}
for $n\leq N$, where for notational convenience we introduced the operator
\begin{equation} \label{linearP}
P_n(y) := 2^ny_n, \qquad P(y):=\{ P_n(y) \}_{n\in\mathbb Z} \,.
\varepsilonnd{equation}
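For the reader's convenience, we recall the abstract form of Duhamel's formula being used here: if $y$ solves $\frac{\,\mathrm{d} y}{\,\mathrm{d} t}=\mathscr{L}^N(y;t)+f(t)$ and $T^N(t;s)$ denotes the resolvent of the linear part provided by Theorem~\ref{thm:lineartrunc}, then
\begin{equation*}
y(t) = T^N(t;0)(y(0)) + \int_0^t T^N(t;s)(f(s))\,\mathrm{d} s ;
\end{equation*}
the representation \eqref{yn2} is obtained by applying this identity to \eqref{yneq}, whose forcing term is $f_n=(1+2^ny^N_n)\frac{\,\mathrm{d} A^N}{\,\mathrm{d} t}+\sum_{i=1}^{3}r^{(i)}_n(y^N,A^N,t)$.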
We select the function $A^N(t)$ by imposing that $y^N_N(t)=0$ for all $t>0$ (notice that this condition is satisfied at time $t=0$, see Lemma~\ref{lem:initialcond}): this gives the equation
\begin{multline} \label{yn3}
A^N(t) - A^0 = - T^N_N(t;0)(y^N(0)) - \int_0^t \frac{\,\mathrm{d} A^N}{\,\mathrm{d} t}(s)T^N_N(t;s)(P(y^N(s)))\,\mathrm{d} s \\
- \mathbb Sum_{i=1}^{3}\int_0^t T^N_N(t;s)(r^{(i)}(y^N(s),A^N(s),s))\,\mathrm{d} s ,
\varepsilonnd{multline}
and by differentiating with respect to $t$
\begin{multline} \label{fp2}
\frac{\,\mathrm{d} A^N}{\,\mathrm{d} t}
= - \biggl[ \mathscr{L}^N_N(T^N(t;0)(y^N(0)),t)
+ \int_0^t \frac{\,\mathrm{d} A^N}{\,\mathrm{d} t}(s) \mathscr{L}^N_N \bigl[ T^N(t;s)( P(y^N(s)) ), t \bigr] \,\mathrm{d} s \\
+ \mathbb Sum_{i=1}^3\int_0^t \mathscr{L}^N_N \bigl[ T^N(t;s)(r^{(i)}(y^N(s),A^N(s),s)), t \bigr] \,\mathrm{d} s
+ \mathbb Sum_{i=1}^3 r^{(i)}_N(y^N(t),A^N(t),t)\biggr],
\varepsilonnd{multline}
where $\mathscr{L}^N$ denotes the leading-order linear operator on the right-hand side of \eqref{yneq}, see \eqref{tlinear3}.
In turn, by inserting \varepsilonqref{yn3} into \varepsilonqref{yn2} we have for $n\leq N$
\begin{multline} \label{fp1}
y^N_n(t)
= \bigl[T^N_n(t;0)-T^N_N(t;0)\bigr](y^N(0)) + \int_0^t \frac{\,\mathrm{d} A^N}{\,\mathrm{d} t}(s) \bigl[T^N_n(t;s)-T^N_N(t;s)\bigr]( P(y^N(s)) )\,\mathrm{d} s \\
+ \mathbb Sum_{i=1}^3 \int_0^t \bigl[T^N_n(t;s)-T^N_N(t;s)\bigr](r^{(i)}(y^N(s),A^N(s),s))\,\mathrm{d} s \,.
\varepsilonnd{multline}
The pair $(y^N(t),A^N(t))$ will be determined by applying a fixed point argument to the two equations \varepsilonqref{fp2}--\varepsilonqref{fp1}.
This is the content of the following proposition.
\begin{proposition} \label{prop:fp}
There exists $\,\mathrm{d}lta_0>0$, depending on $M$, $L_1$, $L_2$, $L_3$, with the following property. Let $g_0$ be an initial datum satisfying the assumptions of Theorem~\ref{thm:stability} and let $g^N$ be the corresponding weak solution to the truncated problem, obtained in Section~\ref{subsect:truncation}. Assume further that the maps $t\mapsto p^N(t)$, $t\mapsto q^N(t)$ satisfy the estimates \varepsilonqref{decaypn1}, \varepsilonqref{decaypn2}, \varepsilonqref{decayqn1} for all $t\in(0,t^N)$, for some $t^N>0$.
Then there exist functions $t\mapsto (y^N(t),A^N(t))$, for $t\in(0,t^N)$, such that $y^N_N(t)\varepsilonquiv0$ and
\begin{equation} \label{fp}
m^N_n(t) = \bar{m}_n(A^N(t),p^N(t))\bigl( 1 + 2^ny^N_n(t) \bigr) \qquad \text{for all $n\leq N$ and $t\in(0,t^N)$.}
\varepsilonnd{equation}
Moreover the following estimates hold:
\begin{equation}\label{fp3}
\begin{split}
\|y^N(t)\|_1\leq C_0\,\mathrm{d}lta_0,
\qquad
\|y^N(t)\|_\beta \leq C_0\,\mathrm{d}lta_0\bigl(1+ t^{-\frac{\beta-1}{\beta}}) e^{-\frac{\nu}{2} t}, \\
|A^N(t)-A_M|\leq C_0\,\mathrm{d}lta_0,
\qquad
\Big| \frac{\,\mathrm{d} A^N}{\,\mathrm{d} t}(t) \Big| \leq C_0\,\mathrm{d}lta_0\bigl(1+t^{-\frac{\beta-1}{\beta}}\bigr) e^{-\frac{\nu}{2}t},
\varepsilonnd{split}
\varepsilonnd{equation}
for a constant $C_0$ depending only on $M$, $L_1$, $L_2$, and $L_3$.
\varepsilonnd{proposition}
\begin{proof}
Throughout the proof, we will denote by $C$ a generic constant, possibly depending on the properties of the kernels and on $M$, $L_1$, $L_2$, and $L_3$, which might change from line to line. Let $\delta>0$ be a small parameter, to be chosen later, and let $g:(0,\infty)\to\mathbb R$ be the function
\begin{equation} \label{pfp0}
g(t) := \bigl(1+t^{-\frac{\beta-1}{\beta}}\bigr) e^{-\frac{\nu}{2} t}.
\varepsilonnd{equation}
Since we always deal with truncated sequences, it is convenient to denote by $\mathcal{Y}_\beta^N$ the space of sequences $y\in\mathcal{Y}_\beta$ (see \varepsilonqref{spacesequences}) such that $y_n=0$ for all $n\geq N$.
We work in the space $\mathcal{X}:=\mathcal{X}_1\times\mathcal{X}_2$, where
\begin{equation} \label{pfp1}
\mathcal{X}_1 := \Bigl\{ y\in C([0,t^N];\mathcal{Y}_\beta^N) \,:\, \|y\|_{\mathcal{X}_1}\leq \,\mathrm{d}lta \Bigr\},
\qquad
\|y\|_{\mathcal{X}_1} := \mathbb Sup_{0<t<t^N} \Bigl( \|y(t)\|_1 + \frac{\|y(t)\|_\beta}{g(t)}\Bigr) \,,
\varepsilonnd{equation}
\begin{equation} \label{pfp2}
\mathcal{X}_2 := \Bigl\{ \Lambda\in C([0,t^N];\mathbb R) \,:\, \|\Lambda\|_{\mathcal{X}_2}\leq \,\mathrm{d}lta \Bigr\},
\qquad
\|\Lambda\|_{\mathcal{X}_2} := \mathbb Sup_{0<t<t^N} \, \frac{|\Lambda(t)|}{g(t)} \,.
\varepsilonnd{equation}
For $\Lambda\in\mathcal{X}_2$ we let
\begin{equation} \label{pfp5}
A_\Lambda(t) := A^N(0) + \int_0^t \Lambda(s)\,\mathrm{d} s \,.
\varepsilonnd{equation}
Notice that for every $\Lambda\in\mathcal{X}_2$
\begin{equation} \label{pfp5bis}
|A_\Lambda(t)-A^N(0)| = \bigg|\int_0^t\Lambda(s)\,\mathrm{d} s\bigg| \leq \|\Lambda\|_{\mathcal{X}_2}\int_0^t g(s)\,\mathrm{d} s \leq C\,\mathrm{d}lta .
\varepsilonnd{equation}
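Notice that $g$ is integrable on $(0,\infty)$: since $\frac{\beta-1}{\beta}<1$,
\begin{equation*}
\int_0^\infty g(s)\,\mathrm{d} s = \int_0^\infty \bigl( 1+s^{-\frac{\beta-1}{\beta}} \bigr) e^{-\frac{\nu}{2} s}\,\mathrm{d} s < \infty,
\end{equation*}
with a bound depending only on $\beta$ and $\nu$; this integral gives the constant $C$ in \eqref{pfp5bis}.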
By combining \varepsilonqref{A0y0}, \varepsilonqref{A0Ny0N}, and \varepsilonqref{pfp5bis} we find
\begin{equation} \label{pfp5ter}
|A_{\Lambda}(t)-A_M| \leq C\,\mathrm{d}lta + c_02^{-N}\,\mathrm{d}lta_0 + \,\mathrm{d}lta_0 \leq C(\,\mathrm{d}lta+\,\mathrm{d}lta_0)
\varepsilonnd{equation}
and in particular we can assume without loss of generality that $|A_{\Lambda}(t)-A_M| \leq \frac{A_M}{2}$ for every $\Lambda\in\mathcal{X}_2$, provided that we choose $\,\mathrm{d}lta_0$ and $\,\mathrm{d}lta$ sufficiently small (depending on $M$).
We define a map $\mathcal{T}:\mathcal{X}\to\mathcal{X}$ by setting $\mathcal{T}(y,\Lambda):=(\tilde{y},\tilde{\Lambda})$, where
\begin{multline} \label{pfp3}
\tilde{y}_n(t) := \bigl[T^N_n(t;0)-T^N_N(t;0)\bigr](y^N(0)) + \int_0^t \Lambda(s) \bigl[T^N_n(t;s)-T^N_N(t;s)\bigr]( P(y(s)) )\,\mathrm{d} s \\
+ \mathbb Sum_{i=1}^3 \int_0^t \bigl[T^N_n(t;s)-T^N_N(t;s)\bigr](r^{(i)}(y(s),A_\Lambda(s),s))\,\mathrm{d} s
\varepsilonnd{multline}
for $n\leq N$, $\tilde{y}_n(t)=0$ for $n> N$, and
\begin{multline} \label{pfp4}
\tilde{\Lambda}(t) :=
- \biggl[ \mathscr{L}^N_N(T^N(t;0)(y^N(0)),t)
+ \int_0^t \Lambda(s) \mathscr{L}^N_N \bigl[ T^N(t;s)( P(y(s)) ) , t \bigr] \,\mathrm{d} s \\
+ \mathbb Sum_{i=1}^3\int_0^t \mathscr{L}^N_N \bigl[ T^N(t;s)(r^{(i)}(y(s),A_\Lambda(s),s)) , t \bigr] \,\mathrm{d} s
+ \mathbb Sum_{i=1}^3 r^{(i)}_N(y(t),A_\Lambda(t),t)\biggr].
\varepsilonnd{multline}
The rest of the proof amounts to showing that the map $\mathcal{T}$ is a contraction in $\mathcal{X}$, provided that $\,\mathrm{d}lta$ and $\,\mathrm{d}lta_0$ are chosen small enough.
\smallskip\noindent\textit{Step 1: $\mathcal{T}(y,\Lambda)\in\mathcal{X}$ for every $(y,\Lambda)\in\mathcal{X}$.}
Since $y^N(0)\in\mathcal{Y}_1$, by Theorem~\ref{thm:lineartrunc} we have (using also \varepsilonqref{A0y0}, \varepsilonqref{A0Ny0N})
\begin{equation} \label{pfp10}
\begin{split}
\big\| \bigl[ T^N(t;0)-T^N_N(t;0) \bigr] (y^N(0)) \big\|_\beta & \leq C_2(M,1,\beta)\|y^N(0)\|_1 t^{-\frac{\beta-1}{\beta}} e^{-\nu t} \leq C\,\mathrm{d}lta_0 g(t), \\
|\mathscr{L}^N_N(T^N(t;0)(y^N(0)),t)| & \leq C_2(M,1,\beta)\|y^N(0)\|_1 t^{-\frac{\beta-1}{\beta}} e^{-\nu t} \leq C\,\mathrm{d}lta_0 g(t).
\varepsilonnd{split}
\varepsilonnd{equation}
Similarly, since $y(s)\in\mathcal{Y}_\beta$ we have $P(y(s))\in\mathcal{Y}_{\beta-1}$ for every positive $s$, with
\begin{equation} \label{pfp13}
\|P(y(s))\|_{\beta-1} \leq \|y(s)\|_\beta,
\varepsilonnd{equation}
and it follows, again by Theorem~\ref{thm:lineartrunc}, that
\begin{equation} \label{pfp11}
\begin{split}
\big\| \bigl[ T^N(t;s)-T^N_N(t;s)\bigr] (P(y(s))) \big\|_\beta &\leq C_2(M,\beta-1,\beta)\|y(s)\|_\beta (t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)} \\
& \leq C \,\mathrm{d}lta g(s) (t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)}, \\
\big|\mathscr{L}^N_N \bigl[ T^N(t;s)(P(y(s))),t \bigr] \big| &\leq C \,\mathrm{d}lta g(s) (t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)} .
\varepsilonnd{split}
\varepsilonnd{equation}
In the same way, we can use the bounds \varepsilonqref{pfp30} on the remainders $r^{(i)}$ proved in Lemma~\ref{lem:fixedpointr} below (the assumptions of the lemma are satisfied in view of \varepsilonqref{pfp5ter} and $\|y(t)\|_1\leq\,\mathrm{d}lta$), together with Theorem~\ref{thm:lineartrunc}, to obtain the following estimates:
\begin{equation} \label{pfp12a}
\begin{split}
\big\| \bigl[ T^N(t;s)-&T^N_N(t;s)\bigr] (r^{(1)}(y(s),A_\Lambda(s),s)) \big\|_\beta \\
& \leq C_2(M,\beta-1,\beta)\|r^{(1)}(y(s),A_\Lambda(s),s)\|_{\beta-1} (t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)} \\
& \leq C \Bigl( \|y(s)\|_{\beta}^2 + |A_{\Lambda}(s)-A_M|\|y(s)\|_\beta \Bigr) (t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)} \\
& \leq C \Bigl( \,\mathrm{d}lta^2(g(s))^2 + \bigl( \,\mathrm{d}lta + \,\mathrm{d}lta_0 \bigr)\,\mathrm{d}lta g(s) \Bigr) (t-s)^{-\frac{1}{\beta}}e^{-\nu(t-s)} ,
\varepsilonnd{split}
\varepsilonnd{equation}
\begin{equation} \label{pfp12b}
\begin{split}
\big\| \bigl[ T^N&(t;s)-T^N_N(t;s)\bigr] (r^{(2)}(y(s),A_\Lambda(s),s)) \big\|_\beta \\
& \leq C_2(M,\bar{\theta}_2-\beta+1,\beta)\|r^{(2)}(y(s),A_\Lambda(s),s)\|_{\bar{\theta}_2-\beta+1} (t-s)^{-\frac{2\beta-\bar{\theta}_2-1}{\beta}} e^{-\nu(t-s)} \\
& \leq C\,\mathrm{d}lta_0 s^{-\bar{\theta}_2/\beta} (t-s)^{-\frac{2\beta-\bar{\theta}_2-1}{\beta}} e^{-\frac{\nu}{2} s}e^{-\nu(t-s)},
\varepsilonnd{split}
\varepsilonnd{equation}
\begin{equation} \label{pfp12c}
\begin{split}
\big\| \bigl[ T^N&(t;s)-T^N_N(t;s)\bigr] (r^{(3)}(y(s),A_\Lambda(s),s)) \big\|_\beta \\
& \leq C_2(M,\bar{\theta}_1-\beta+1,\beta)\|r^{(3)}(y(s),A_\Lambda(s),s)\|_{\bar{\theta}_1-\beta+1} (t-s)^{-\frac{2\beta-\bar{\theta}_1-1}{\beta}} e^{-\nu(t-s)} \\
& \leq C\,\mathrm{d}lta_0 s^{-\bar{\theta}_1/\beta} (t-s)^{-\frac{2\beta-\bar{\theta}_1-1}{\beta}} e^{-\frac{\nu}{2} s}e^{-\nu(t-s)} .
\varepsilonnd{split}
\varepsilonnd{equation}
The same estimates hold for the terms $\mathscr{L}^N_N [ T^N(t;s)(r^{(i)}(y(s),A_\Lambda(s),s)) , t ]$, $i=1,2,3$.
By plugging the bounds \varepsilonqref{pfp10}, \varepsilonqref{pfp11}, \varepsilonqref{pfp12a}, \varepsilonqref{pfp12b}, \varepsilonqref{pfp12c} into \varepsilonqref{pfp3} we find
\begin{equation} \label{pfp16}
\begin{split}
\|\tilde{y}(t)\|_\beta
& \leq C\,\mathrm{d}lta_0 g(t) + C\,\mathrm{d}lta^2 \int_0^t \bigl(g(s)\bigr)^2 (t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)} \,\mathrm{d} s \\
& \qquad + C\bigl( \,\mathrm{d}lta + \,\mathrm{d}lta_0 \bigr)\,\mathrm{d}lta \int_0^t g(s)(t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)}\,\mathrm{d} s \\
& \qquad + C\,\mathrm{d}lta_0\mathbb Sum_{i=1}^2 \int_0^t s^{-\bar{\theta}_i/\beta}(t-s)^{-\frac{2\beta-\bar{\theta}_i-1}{\beta}} e^{-\frac{\nu}{2} s} e^{-\nu(t-s)} \,\mathrm{d} s \\
& \leq C\bigl( \,\mathrm{d}lta_0 + \,\mathrm{d}lta^2 + \,\mathrm{d}lta_0\,\mathrm{d}lta \bigr)g(t),
\varepsilonnd{split}
\varepsilonnd{equation}
where we used the elementary estimates
\begin{equation} \label{pfp15}
\begin{split}
\int_0^t \bigl(g(s)\bigr)^2 (t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)} \,\mathrm{d} s &\leq C g(t),\\
\int_0^t g(s) (t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)}\,\mathrm{d} s &\leq C g(t),\\
\int_0^t s^{-\bar{\theta}_i/\beta}(t-s)^{-\frac{2\beta-\bar{\theta}_i-1}\beta}e^{-\frac{\nu}{2} s}e^{-\nu(t-s)} \,\mathrm{d} s & \leq Cg(t) \qquad(i=1,2),
\varepsilonnd{split}
\varepsilonnd{equation}
for $C$ depending only on $\beta$, $\bar{\theta}_1$, $\bar{\theta}_2$, $\nu$ (recall the assumption \varepsilonqref{theta12}). In particular, by choosing $\,\mathrm{d}lta$ and $\,\mathrm{d}lta_0$ sufficiently small, depending ultimately only on $M$ (with $\,\mathrm{d}lta_0$ depending on $\,\mathrm{d}lta$, and $\,\mathrm{d}lta_0\mathbb Sim\,\mathrm{d}lta$), \varepsilonqref{pfp16} yields $\|\tilde{y}(t)\|_\beta\leq \frac{\,\mathrm{d}lta}{2}g(t)$.
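For the reader's convenience, we sketch how the second bound in \eqref{pfp15} can be checked; the other two follow by the same scheme. For $t\leq1$ the exponential factors are harmless and, since both exponents are strictly smaller than $1$, a Beta-type computation gives
\begin{equation*}
\int_0^t \bigl(1+s^{-\frac{\beta-1}{\beta}}\bigr)(t-s)^{-\frac{1}{\beta}}\,\mathrm{d} s \leq C\bigl(t^{1-\frac{1}{\beta}}+1\bigr) \leq C \leq C\,g(t).
\end{equation*}
For $t\geq1$ we split the integral at $s=t/2$: on $(0,t/2)$ we have $(t-s)^{-\frac{1}{\beta}}e^{-\nu(t-s)}\leq Ce^{-\frac{\nu}{2}t}$ and $\int_0^{t/2}g(s)\,\mathrm{d} s\leq C$, while on $(t/2,t)$ we have $g(s)\leq Ce^{-\frac{\nu}{2}s}$ and $e^{-\frac{\nu}{2}s}e^{-\nu(t-s)}= e^{-\frac{\nu}{2}t}e^{-\frac{\nu}{2}(t-s)}$; in both cases the contribution is bounded by $Ce^{-\frac{\nu}{2}t}\leq C\,g(t)$.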
In order to control $\|\tilde{y}(t)\|_1$, by a similar procedure we repeatedly apply Theorem~\ref{thm:lineartrunc} and we use the bounds \varepsilonqref{pfp30}: we find from \varepsilonqref{pfp3}
\begin{equation*}
\begin{split}
\|\tilde{y}(t)\|_1
& \leq C_2(M,1,1)\|y^N(0)\|_1e^{-\nu t} + C_2(M,0,1)\,\mathrm{d}lta \int_0^t g(s) \|y(s)\|_1 (t-s)^{-\frac{1}{\beta}}e^{-\nu(t-s)}\,\mathrm{d} s \\
& \qquad + C_2(M,\beta-1,1) \int_0^t \|r^{(1)}(y(s),A_\Lambda(s),s)\|_{\beta-1} (t-s)^{-\frac{2-\beta}{\beta}}e^{-\nu(t-s)}\,\mathrm{d} s\\
& \qquad + C_2(M,\bar{\theta}_2-\beta+1,1)\int_0^t \|r^{(2)}(y(s),A_\Lambda(s),s)\|_{\bar{\theta}_2-\beta+1} (t-s)^{-\frac{\beta-\bar{\theta}_2}{\beta}}e^{-\nu(t-s)}\,\mathrm{d} s\\
& \qquad + C_2(M,\bar{\theta}_1-\beta+1,1)\int_0^t \|r^{(3)}(y(s),A_\Lambda(s),s)\|_{\bar{\theta}_1-\beta+1} (t-s)^{-\frac{\beta-\bar{\theta}_1}{\beta}}e^{-\nu(t-s)}\,\mathrm{d} s\\
& \leq C\,\mathrm{d}lta_0 + C\,\mathrm{d}lta^2\int_0^t g(s)(t-s)^{-\frac{1}{\beta}}e^{-\nu(t-s)}\,\mathrm{d} s \\
& \qquad + C \int_0^t \Bigl[ \,\mathrm{d}lta^2\bigl(g(s)\bigr)^2 + (\,\mathrm{d}lta+\,\mathrm{d}lta_0)\,\mathrm{d}lta g(s) \Bigr] (t-s)^{-\frac{2-\beta}{\beta}}e^{-\nu(t-s)}\,\mathrm{d} s \\
& \qquad + C \,\mathrm{d}lta_0 \mathbb Sum_{i=1}^2\int_0^t s^{-\bar{\theta}_i/\beta} e^{-\frac{\nu}{2} s} (t-s)^{-\frac{\beta-\bar{\theta}_i}{\beta}}e^{-\nu(t-s)}\,\mathrm{d} s \,.
\varepsilonnd{split}
\varepsilonnd{equation*}
By estimates similar to \varepsilonqref{pfp15} we then find $\|\tilde{y}(t)\|_1\leq C( \,\mathrm{d}lta_0 + \,\mathrm{d}lta^2 )\leq\frac{\,\mathrm{d}lta}{2}$, provided that we choose $\,\mathrm{d}lta$ and $\,\mathrm{d}lta_0$ sufficiently small; this, combined with the previous estimate, gives $\|\tilde{y}\|_{\mathcal{X}_1} \leq \,\mathrm{d}lta$.
We proceed similarly for $\tilde{\Lambda}$: the first three terms on the right-hand side of \varepsilonqref{pfp4} can be estimated exactly in the same way, by using \varepsilonqref{pfp10}, \varepsilonqref{pfp11}, and the equivalent of \varepsilonqref{pfp12a}--\varepsilonqref{pfp12c} for $\mathscr{L}^N_N [ T^N(t;s)(r^{(i)}(y(s),A_\Lambda(s),s)) , t ]$. The only novelty is the last term in \varepsilonqref{pfp4}, which can be controlled thanks to \varepsilonqref{pfp30b}. In this way one obtains an estimate of the form
\begin{equation*}
|\tilde{\Lambda}(t)| \leq C\bigl( \,\mathrm{d}lta_0 + \,\mathrm{d}lta^2 + \,\mathrm{d}lta_0\,\mathrm{d}lta \bigr)g(t)
\varepsilonnd{equation*}
and in turn $\|\tilde{\Lambda}\|_{\mathcal{X}_2} \leq \,\mathrm{d}lta$. Hence we can conclude that $\mathcal{T}$ maps $\mathcal{X}$ into itself.
\smallskip\noindent\textit{Step 2: contractivity.}
Let $(y^1,\Lambda^1),(y^2,\Lambda^2)\in\mathcal{X}$ and set $(\tilde{y}^i,\tilde{\Lambda}^i):=\mathcal{T}(y^i,\Lambda^i)$, $i=1,2$. In view of the definition \varepsilonqref{pfp3} of $\tilde{y}^i$ we have
\begin{align} \label{pfp14}
\big| \tilde{y}_n^1(t)-\tilde{y}_n^2&(t) \big| \leq \int_0^t \big| \Lambda^1(s)-\Lambda^2(s)\big| \big|T_n^N(t;s)-T_N^N(t;s)\big|( P(y^1(s)) )\,\mathrm{d} s \nonumber\\
& + \int_0^t |\Lambda^2(s)| \big|T_n^N(t;s)-T^N_N(t;s)\big| \bigl( P(y^1(s)-y^2(s)) \bigr)\,\mathrm{d} s \\
& + \mathbb Sum_{i=1}^3\int_0^t \big|T_n^N(t;s)-T_N^N(t;s)\big| \bigl( r^{(i)}(y^1(s),A_{\Lambda^1}(s),s)-r^{(i)}(y^2(s),A_{\Lambda^2}(s),s) \bigr)\,\mathrm{d} s \,. \nonumber
\varepsilonnd{align}
The first two integrals can be estimated using \varepsilonqref{pfp11}; for the last integral containing the remainders, similarly to \varepsilonqref{pfp12a}--\varepsilonqref{pfp12c} we find, using \varepsilonqref{pfp31} in Lemma~\ref{lem:fixedpointr} below,
\begin{equation*}
\begin{split}
\big\| \bigl(T^N&(t;s)-T_N^N(t;s)\bigr) \bigl( r^{(1)}(y^1(s),A_{\Lambda^1}(s),s)-r^{(1)}(y^2(s),A_{\Lambda^2}(s),s) \bigr) \big\|_\beta \\
& \leq C_2(M,\beta-1,\beta)\| r^{(1)}(y^1(s),A_{\Lambda^1}(s),s)-r^{(1)}(y^2(s),A_{\Lambda^2}(s),s) \|_{\beta-1} (t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)} \\
& \leq C \biggl[ \|y^1-y^2\|_{\mathcal{X}_1}g(s) \Bigl( \,\mathrm{d}lta g(s) + (\,\mathrm{d}lta + \,\mathrm{d}lta_0) \Bigr) \\
& \qquad\qquad + \|\Lambda^1-\Lambda^2\|_{\mathcal{X}_2} \Bigl( \,\mathrm{d}lta^2\bigl(g(s)\bigr)^2 + \,\mathrm{d}lta g(s)\Bigr) \biggr] (t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)} \,,
\varepsilonnd{split}
\varepsilonnd{equation*}
where we used the bound $|A_{\Lambda^1}(s)-A_{\Lambda^2}(s)|\leq C\|\Lambda^1-\Lambda^2\|_{\mathcal{X}_2}$, which follows from \varepsilonqref{pfp5}; and
\begin{equation*}
\begin{split}
\big\| \bigl(T^N&(t;s)-T_N^N(t;s)\bigr) \bigl( r^{(2)}(y^1(s),A_{\Lambda^1}(s),s)-r^{(2)}(y^2(s),A_{\Lambda^2}(s),s) \bigr) \big\|_\beta \\
& \leq C_2\| r^{(2)}(y^1(s),A_{\Lambda^1}(s),s)-r^{(2)}(y^2(s),A_{\Lambda^2}(s),s) \|_{\bar{\theta}_2-\beta+1} (t-s)^{-\frac{2\beta-\bar{\theta}_2-1}{\beta}} e^{-\nu(t-s)} \\
& \leq C\,\mathrm{d}lta_0 \Bigl( \|y^1-y^2\|_{\mathcal{X}_1} + \|\Lambda^1-\Lambda^2\|_{\mathcal{X}_2} \Bigr) s^{-\bar{\theta}_2/\beta} e^{-\frac{\nu}{2}s} (t-s)^{-\frac{2\beta-\bar{\theta}_2-1}{\beta}} e^{-\nu(t-s)}
\varepsilonnd{split}
\varepsilonnd{equation*}
(the same estimate holds for $r^{(3)}$, with $\bar{\theta}_1$ in place of $\bar{\theta}_2$). Hence from \varepsilonqref{pfp14} it is straightforward to obtain an estimate of the form
\begin{multline*}
\big\| \tilde{y}^1(t) - \tilde{y}^2(t) \big\|_{\beta}
\leq C(\,\mathrm{d}lta+\,\mathrm{d}lta_0) \Bigl( \|y^1-y^2\|_{\mathcal{X}_1} + \|\Lambda^1-\Lambda^2\|_{\mathcal{X}_2} \Bigr) \\
\biggl( \int_0^t \Bigl[ \bigl(g(s)\bigr)^2 + g(s) \Bigr] (t-s)^{-\frac{1}{\beta}} e^{-\nu(t-s)} \,\mathrm{d} s + \mathbb Sum_{i=1}^2\int_0^t s^{-\bar{\theta}_i/\beta} e^{-\frac{\nu}{2}s} (t-s)^{-\frac{2\beta-\bar{\theta}_i-1}{\beta}} e^{-\nu(t-s)} \,\mathrm{d} s \biggr),
\varepsilonnd{multline*}
which in turn yields, recalling \varepsilonqref{pfp15},
\begin{equation*}
\| \tilde{y}^1(t) - \tilde{y}^2(t) \|_{\beta} \leq C(\,\mathrm{d}lta+\,\mathrm{d}lta_0) \Bigl( \|y^1-y^2\|_{\mathcal{X}_1} + \|\Lambda^1-\Lambda^2\|_{\mathcal{X}_2} \Bigr) g(t).
\varepsilonnd{equation*}
In a similar way we obtain an estimate for $\|\tilde{y}^1(t)-\tilde{y}^2(t)\|_1$, which combined with the previous one gives
\begin{equation} \label{pfp20}
\big\| \tilde{y}^1 - \tilde{y}^2 \big\|_{\mathcal{X}_1} \leq C(\,\mathrm{d}lta+\,\mathrm{d}lta_0) \Bigl( \|y^1-y^2\|_{\mathcal{X}_1} + \|\Lambda^1-\Lambda^2\|_{\mathcal{X}_2} \Bigr) .
\varepsilonnd{equation}
Starting from the inequality
\begin{align*}
\big| \tilde{\Lambda}^1(t) - \tilde{\Lambda}^2(t) \big|
& \leq \int_0^t \big| \Lambda^1(s)-\Lambda^2(s)\big| \big| \mathscr{L}^N_N\bigl[T^N(t;s)(P(y^1(s))),t\bigr] \big| \,\mathrm{d} s \\
& + \int_0^t |\Lambda^2(s)| \big| \mathscr{L}^N_N\bigl[T^N(t;s)(P(y^1(s)-y^2(s))),t\bigr] \big| \,\mathrm{d} s \\
& + \mathbb Sum_{i=1}^3 \int_0^t \big| \mathscr{L}^N_N\bigl[ T^N(t;s) \bigl( r^{(i)}(y^1(s),A_{\Lambda^1}(s),s)-r^{(i)}(y^2(s),A_{\Lambda^2}(s),s) \bigr) , t \bigr] \big| \,\mathrm{d} s\\
& + \mathbb Sum_{i=1}^3 \big| r^{(i)}_N(y^1(t),A_{\Lambda^1}(t),t)-r^{(i)}_N(y^2(t),A_{\Lambda^2}(t),t) \big| \,,
\varepsilonnd{align*}
the same argument (using also \varepsilonqref{pfp31b} in Lemma~\ref{lem:fixedpointr}) shows that
\begin{equation} \label{pfp21}
\big\| \tilde{\Lambda}^1 - \tilde{\Lambda}^2 \big\|_{\mathcal{X}_2} \leq C(\delta+\delta_0) \Bigl( \|y^1-y^2\|_{\mathcal{X}_1} + \|\Lambda^1-\Lambda^2\|_{\mathcal{X}_2} \Bigr) .
\varepsilonnd{equation}
Therefore, by \eqref{pfp20}--\eqref{pfp21}, the map $\mathcal{T}$ is a contraction in the space $\mathcal{X}$, provided that $\delta$ and $\delta_0$ are small enough (so that, for instance, $C(\delta+\delta_0)\leq\frac12$).
\smallskip\noindent\textit{Step 3: conclusion.}
In view of the previous steps, Banach's fixed point theorem yields the existence of a unique pair $(y^N,\Lambda^N)$ in the space $\mathcal{X}$ such that $(y^N,\Lambda^N)=\mathcal{T}(y^N,\Lambda^N)$; that is, setting $A^N(t):=A^N(0)+\int_0^t\Lambda^N(s)\,\mathrm{d} s$, the maps $t\mapsto y^N(t)$, $t\mapsto A^N(t)$ satisfy the two equations \eqref{fp2}--\eqref{fp1}. Moreover, by construction $y^N_n(t)=0$ for all $n\geq N$.
Now, by integrating \varepsilonqref{fp2} in $(0,t)$ we have
\begin{equation*}
\begin{split}
A^N(t) - A^N(0)
& = - \int_0^t \frac{\,\mathrm{d}}{\,\mathrm{d} s} \bigl[ T^N_N(s;0)(y^N(0))\bigr] \,\mathrm{d} s \\
& \qquad\qquad - \int_0^t \,\mathrm{d} s \int_0^s \frac{\,\mathrm{d} A^N}{\,\mathrm{d}\xi}(\xi) \frac{\,\mathrm{d}}{\,\mathrm{d} s}\bigl[ T_N^N(s;\xi)( P(y^N(\xi)) ) \bigr] \,\mathrm{d}\xi \\
& \qquad\qquad - \mathbb Sum_{i=1}^3\int_0^t \,\mathrm{d} s \int_0^s \frac{\,\mathrm{d}}{\,\mathrm{d} s}\bigl[ T_N^N(s;\xi)(r^{(i)}(y^N(\xi),A^N(\xi),\xi)) \bigr] \,\mathrm{d}\xi \\
& \qquad\qquad - \mathbb Sum_{i=1}^3\int_0^t r^{(i)}_N(y^N(s),A^N(s),s)\,\mathrm{d} s \\
& = y^N_N(0) - T_N^N(t;0)(y^N(0)) \\
& \qquad\qquad + \int_0^t \frac{\,\mathrm{d} A^N}{\,\mathrm{d}\xi}(\xi) \Bigl( 2^Ny^N_N(\xi) - T_N^N(t;\xi)( P(y^N(\xi)) ) \Bigr) \,\mathrm{d}\xi \\
& \qquad\qquad + \mathbb Sum_{i=1}^3\int_0^t \Bigl( r^{(i)}_N(y^N(\xi),A^N(\xi),\xi) - T^N_N(t;\xi)(r^{(i)}(y^N(\xi),A^N(\xi),\xi)) \Bigr) \,\mathrm{d} \xi \\
& \qquad\qquad - \mathbb Sum_{i=1}^3\int_0^t r^{(i)}_N(y^N(s),A^N(s),s)\,\mathrm{d} s \,.
\varepsilonnd{split}
\varepsilonnd{equation*}
Recalling that $y^N_N(t)=0$ for all $t\geq0$, we see that the previous equation is exactly the identity \eqref{yn3}. Hence, combining \eqref{yn3} and \eqref{fp1}, we conclude that the pair $t\mapsto(y^N(t),A^N(t))$ satisfies \eqref{yn2} and, in turn, \eqref{yneq}.
Finally, if we define the quantities
\begin{equation*}
\tilde{m}^N_n(t) := \bar{m}_n(A^N(t),p^N(t))\bigl(1+2^ny_n^N(t)\bigr) \qquad \text{for all }n\leq N,
\varepsilonnd{equation*}
we see that \eqref{yneq} implies that the functions $\tilde{m}^N_n(t)$ satisfy the same evolution equation as $m^N_n(t)$, with the same initial datum $\tilde{m}^N_n(0)=m^N_n(0)$ (see Lemma~\ref{lem:initialcond}). Therefore, by uniqueness of the solution to this system of ODEs (which follows from the fact that we are considering a truncated problem), we conclude that $\tilde{m}^N(t)=m^N(t)$, that is, the property in the statement holds.
\varepsilonnd{proof}
The following lemma contains the main estimates on the remainders needed in the proof of Proposition~\ref{prop:fp}.
\begin{lemma} \label{lem:fixedpointr}
Let $y\in\mathcal{Y}_\beta$, $A>0$, and let $r^{(i)}(y,A,t)$, $i=1,2,3$, be the sequences defined by \varepsilonqref{r1}, \varepsilonqref{r2}, and \varepsilonqref{r3}.
Assume also that $\|y\|_1\leq1$ and $A\geq \frac12A_M$. Then there exists a constant $C_3$, depending on $M$, $L_1$, $L_2$, $L_3$, such that
\begin{equation} \label{pfp30}
\begin{split}
\|r^{(1)}(y,A,t)\|_{\beta-1} &\leq C_3 \|y\|_{\beta}^2 + C_3 |A_M-A|\|y\|_\beta \,, \\
\|r^{(2)}(y,A,t)\|_{\bar{\theta}_2-\beta+1} & \leq C_3 \,\mathrm{d}lta_0^{3/2} t^{-\bar{\theta}_2/\beta} e^{-\frac{\nu}{2} t} \,, \\
\|r^{(3)}(y,A,t)\|_{\bar{\theta}_1-\beta+1} &\leq C_3 \,\mathrm{d}lta_0 t^{-\bar{\theta}_1/\beta} e^{-\frac{\nu}{2}t} \,,
\varepsilonnd{split}
\varepsilonnd{equation}
and
\begin{equation} \label{pfp30b}
\begin{split}
|r^{(1)}_N(y,A,t)| & \leq C_3 \|y\|_\beta\|y\|_1 \,,\\
|r^{(2)}_N(y,A,t)| & \leq C_3 \,\mathrm{d}lta_0^{3/2} t^{-\frac{\beta-1}{\beta}}e^{-\nu t}\,, \\
|r^{(3)}_N(y,A,t)| & \leq C_3 \,\mathrm{d}lta_0 t^{-\frac{\beta-1}{\beta}}e^{-\frac{\nu}{2} t}\,.
\varepsilonnd{split}
\varepsilonnd{equation}
Furthermore, for every $y^1,y^2\in\mathcal{Y}_\beta$ with $\|y^i\|_1\leq1$ and $A^1,A^2>\frac12 A_M$ we have
\begin{align} \label{pfp31}
\| r^{(1)}(y^1,A^1,t)-r^{(1)}(y^2,A^2,t) \|_{\beta-1} &\leq C_3 \|y^1-y^2\|_\beta \Bigl( \max\{ \|y^1\|_\beta, \|y^2\|_\beta\} + |A_M-A^1| \Bigr) \nonumber\\
& \qquad + C_3|A^1-A^2| \Bigl( \|y^2\|_\beta^2 + \|y^2\|_\beta \Bigr) \,, \nonumber \\
\| r^{(2)}(y^1,A^1,t)-r^{(2)}(y^2,A^2,t) \|_{\bar\theta_2-\beta+1} &\leq C_3\,\mathrm{d}lta_0^{3/2} \Bigl( \|y^1-y^2\|_1 + |A^1-A^2| \Bigr) t^{-\bar\theta_2/\beta}e^{-\frac{\nu}{2} t}\,, \\
\| r^{(3)}(y^1,A^1,t)-r^{(3)}(y^2,A^2,t) \|_{\bar{\theta}_1-\beta+1} &\leq C_3\,\mathrm{d}lta_0\|y^1-y^2\|_1 t^{-\bar{\theta}_1/\beta}e^{-\frac{\nu}{2}t} \,, \nonumber
\varepsilonnd{align}
and
\begin{equation} \label{pfp31b}
\begin{split}
|r^{(1)}_N(y^1,A^1,t)-r^{(1)}_N(y^2,A^2,t)| & \leq C_3\bigl(\|y^1\|_1+\|y^2\|_1\bigr) \|y^1-y^2\|_\beta \,,\\
|r^{(2)}_N(y^1,A^1,t)-r^{(2)}_N(y^2,A^2,t)| & \leq C_3 \,\mathrm{d}lta_0^2 \|y^1-y^2\|_\beta\,, \\
|r^{(3)}_N(y^1,A^1,t)-r^{(3)}_N(y^2,A^2,t)| & \leq C_3 \,\mathrm{d}lta_0 \|y^1-y^2\|_\beta\,.
\varepsilonnd{split}
\varepsilonnd{equation}
\varepsilonnd{lemma}
\begin{proof}
Throughout the proof, the symbol $\lesssim$ will be used for inequalities that hold up to constants depending only on the properties of the kernels and on $M$, $L_1$, $L_2$, $L_3$. We first consider the remainder $r^{(1)}$. The corresponding estimates in \eqref{pfp30} and \eqref{pfp31} are proved in \cite[Lemma~6.3]{BNVd} (with minor modifications). For \eqref{pfp30b} and \eqref{pfp31b}, it is sufficient to observe that for $n=N$ the expression of $r^{(1)}_N$ simplifies and yields (using \eqref{kernel5})
\begin{equation*}
|r^{(1)}_N(y,A,t)| = \frac{2^N\gamma(2^{N+p^N_N(t)})}{16}y_{N-1}^2 \lesssim 2^{(\beta+1)N}y_{N-1}^2 \lesssim \|y\|_{\beta}\|y\|_1,
\varepsilonnd{equation*}
\begin{equation*}
\begin{split}
|r^{(1)}_N(y^1,A^1,t)-r^{(1)}_N(y^2,A^2,t)|
& \lesssim 2^{(\beta+1)N} |y^1_{N-1}-y^2_{N-1}| |y^1_{N-1}+y^2_{N-1}| \\
& \lesssim \bigl(\|y^1\|_1+\|y^2\|_1\bigr) \|y^1-y^2\|_\beta\,.
\varepsilonnd{split}
\varepsilonnd{equation*}
We next consider the term $r^{(2)}$. We have for $n<0$, using the bound $\|y\|_1\leq 1$, \varepsilonqref{kernel5bis}, the asymptotics of $\bar{\mu}_n$ as $n\to-\infty$, and the estimate \varepsilonqref{decayqn1},
\begin{equation*}
\begin{split}
|r^{(2)}_n(y,A,t)|
&\lesssim 2^{-n}\bigl( |q^N_{n-1}(t)| + |q^N_n(t)| \bigr) + 2^{-n}\bar{\mu}_n(A,p^N(t)) \bigl( |q^N_n(t)| + |q^N_{n+1}(t)|\bigr) \\
&\lesssim \,\mathrm{d}lta_0^{3/2} 2^{-n}e^{-\nu t}.
\varepsilonnd{split}
\varepsilonnd{equation*}
Similarly, for $n\geq0$ we have, using \varepsilonqref{kernel5} and \varepsilonqref{decayqn1},
\begin{equation*}
\begin{split}
|r^{(2)}_n(y,A,t)|
&\lesssim 2^{(\beta-1)n}\bigl( |q^N_{n-1}(t)| + |q^N_n(t)| \bigr) + 2^{(\beta-1)n}\bar{\mu}_n(A,p^N(t)) \bigl( |q^N_n(t)| + |q^N_{n+1}(t)|\bigr) \\
&\lesssim \,\mathrm{d}lta_0^{3/2} 2^{(\beta-1-\bar{\theta}_2)n}t^{-\bar{\theta}_2/\beta}e^{-\nu t} .
\varepsilonnd{split}
\varepsilonnd{equation*}
Then the estimate \varepsilonqref{pfp30} for $r^{(2)}$ follows.
For $n=N$, we first observe that by interpolating between the two estimates in \varepsilonqref{decayqn1} we have for all $n>0$
\begin{equation*}
2^{(\beta-1)n}|q_n^N(t)| = \Bigl(2^{\bar{\theta}_2n}|q_n^N(t)|\Bigr)^{\frac{\beta-1}{\bar{\theta}_2}} |q_n^N(t)|^{1-\frac{\beta-1}{\bar{\theta}_2}} \lesssim \,\mathrm{d}lta_0^{3/2}t^{-\frac{\beta-1}{\beta}}e^{-\nu t},
\varepsilonnd{equation*}
which yields, using also $\|y\|_1\leq1$,
\begin{equation*}
\begin{split}
|r^{(2)}_N(y,A,t)|
& = \frac{\gamma(2^{N+p^N_N(t)})}{2^{N+2}}\Big| (1+2^{N-1}y_{N-1})^2O(q^N_{N-1}(t)) - (1+2^Ny_N)O(q^N_N(t)) \Big| \\
& \lesssim 2^{(\beta-1)N}\Bigl( |q^N_{N-1}(t)| + |q^N_N(t)| \Bigr)
\lesssim \,\mathrm{d}lta_0^{3/2}t^{-\frac{\beta-1}{\beta}}e^{-\nu t},
\varepsilonnd{split}
\varepsilonnd{equation*}
which is the second estimate in \varepsilonqref{pfp30b}. To prove the Lipschitz continuity of $r^{(2)}$ (estimate \varepsilonqref{pfp31}), we first observe that in view of the explicit expression of $\bar{\mu}_n$ in \varepsilonqref{nearlystat5} and of the assumption $A^1,A^2\geq \frac12 A_M$ one can show that
\begin{equation} \label{estmun}
|\bar{\mu}_n(A^1,p^N)-\bar{\mu}_n(A^2,p^N)| \lesssim 2^n e^{-\frac12 A_M 2^n}|A^1-A^2|.
\varepsilonnd{equation}
Then the bound in \varepsilonqref{pfp31} can be obtained straightforwardly, using this estimate and \varepsilonqref{decayqn1}. We obtain the estimate in \varepsilonqref{pfp31b} for $r^{(2)}$ by using the trivial bound $|q^N_n(t)|\lesssim\,\mathrm{d}lta_0^2$:
\begin{equation*}
\begin{split}
|r^{(2)}_N(y^1,A^1,t)-r^{(2)}_N(y^2,A^2,t)|
& \lesssim 2^{(\beta-1)N} \big|(1+2^{N-1}y_{N-1}^1)^2-(1+2^{N-1}y_{N-1}^2)^2\big| |q^N_{N-1}(t)| \\
& \qquad + 2^{\beta N} \big|y_N^1-y_N^2\big| |q^N_{N}(t)| \\
& \lesssim \,\mathrm{d}lta_0^2\|y^1-y^2\|_\beta\,.
\varepsilonnd{split}
\varepsilonnd{equation*}
Finally, we consider the remainder $r^{(3)}$: by \eqref{nearlystat6}, \eqref{decaypn2}, and the assumption $\|y\|_1\leq1$,
\begin{align*}
|r^{(3)}_n(y,A,t)|
& \lesssim (1+\|y\|_1)\,\mathrm{d}lta_0t^{-\bar{\theta}_1/\beta}e^{-\frac{\nu}{2}t} \biggl( \mathbb Sum_{k=n\wedge1}^0 2^{-k} + \mathbb Sum_{k=n\vee 1}^N 2^{-k}2^{(\beta-\bar{\theta}_1)k} \biggr) \\
& \lesssim
\begin{cases}
\,\mathrm{d}lta_0 2^{-n} t^{-\bar{\theta}_1/\beta} e^{-\frac{\nu}{2}t} &\text{for $n\leq0$,}\\
\,\mathrm{d}lta_0 2^{(\beta-1-\bar{\theta}_1)n} t^{-\bar{\theta}_1/\beta}e^{-\frac{\nu}{2}t} &\text{for $n>0$,}
\varepsilonnd{cases}
\varepsilonnd{align*}
from which the last estimate in \varepsilonqref{pfp30} follows.
We next observe that by interpolating between the two estimates in \varepsilonqref{decaypn2} we have for $n>0$
\begin{equation*}
2^{-n}\bigg| \frac{\,\mathrm{d} p^N_n}{\,\mathrm{d} t}(t)\bigg|
\leq \biggl( 2^{(\bar{\theta}_1-\beta)n}\bigg| \frac{\,\mathrm{d} p^N_n}{\,\mathrm{d} t}(t)\bigg| \biggr)^{\frac{\beta-1}{\bar{\theta}_1}}
\biggl( 2^{-\beta n}\bigg|\frac{\,\mathrm{d} p^N_n}{\,\mathrm{d} t}(t)\bigg|\biggr)^{1-\frac{\beta-1}{\bar{\theta}_1}}
\lesssim \,\mathrm{d}lta_0 t^{-\frac{\beta-1}{\beta}} e^{-\frac{\nu}{2}t},
\varepsilonnd{equation*}
hence for $n=N$ we obtain the third estimate in \varepsilonqref{pfp30b} (using also \varepsilonqref{nearlystat6}):
\begin{equation*}
|r^{(3)}_N(y,A,t)| \lesssim (1+\|y\|_1)2^{-N}\bigg| \frac{\,\mathrm{d} p^N_N}{\,\mathrm{d} t}(t)\bigg|
\lesssim \,\mathrm{d}lta_0t^{-\frac{\beta-1}{\beta}}e^{-\frac{\nu}{2}t}.
\varepsilonnd{equation*}
Observe that, in view of \varepsilonqref{nearlystat7}, the quantity $\frac{1}{\bar{m}_n(A,p^N(t))}\frac{\partial\bar{m}_n}{\partial p_k}(A,p^N(t))$ is actually independent of $A$: then using \varepsilonqref{nearlystat6} and \varepsilonqref{decaypn2} we find
\begin{align*}
|r^{(3)}_n(y^1,A^1,t)-r^{(3)}_n(y^2,A^2,t)|
& = \bigg| (y_n^1-y_n^2)\mathbb Sum_{k=n}^\infty \frac{1}{\bar{m}_n}\frac{\partial\bar{m}_n}{\partial p_k}\frac{\,\mathrm{d} p^N_k}{\,\mathrm{d} t}\bigg| \\
& \lesssim
\begin{cases}
\,\mathrm{d}lta_0 2^{-n}2^n|y_n^1-y_n^2| t^{-\bar{\theta}_1/\beta}e^{-\frac{\nu}{2}t} &\text{for $n\leq0$,}\\
\,\mathrm{d}lta_0 2^{(\beta-1-\bar{\theta}_1)n}2^n|y_n^1-y_n^2| t^{-\bar{\theta}_1/\beta}e^{-\frac{\nu}{2} t} &\text{for $n>0$,}
\varepsilonnd{cases}
\varepsilonnd{align*}
so that the third estimate in \eqref{pfp31} also holds.
For $n=N$, arguing similarly we obtain the last estimate in \varepsilonqref{pfp31b}.
\varepsilonnd{proof}
\subsection{Continuation argument} \label{subsect:continuation}
The next goal is to extend the representation \eqref{fp} of the functions $m_n^N(t)$, obtained in Proposition~\ref{prop:fp}, to all positive times. This will be achieved by a continuation argument. Indeed, recall that the fundamental assumption in Proposition~\ref{prop:fp} is that the maps $p^N(t)$, $q^N(t)$ satisfy the estimates \eqref{decaypn1}--\eqref{decayqn1} in the time interval $[0,t^N]$. The idea is now to show that \emph{at the time $t^N$} the same estimates \eqref{decaypn1}--\eqref{decayqn1} hold with strict inequality, so that they can be extended to larger times $t\in[t^N,t^N+\varepsilon]$; in turn, this allows us to repeat the proof of Proposition~\ref{prop:fp} and also to extend the representation \eqref{fp} to $[t^N,t^N+\varepsilon]$.
In order to prove the claim, we take advantage of the representation \eqref{fp} to write the evolution equations for $p^N$ and $q^N$ in a more convenient form. In Lemma~\ref{lem:approx} we computed the equations \eqref{pneq2}--\eqref{qneq2} for the (non-truncated) functions $p_n(t)$, $q_n(t)$; then, at the end of Section~\ref{sect:strategy}, we saw that those equations can be written in the form \eqref{pneq3}--\eqref{qneq3} under the assumption that $m_n(t)$ can be represented as in \eqref{yn}. Now, since the truncated functions $m^N_n(t)$ satisfy \eqref{fp} for $t\in[0,t^N]$, the very same equations hold for $p^N_n(t)$, $q^N_n(t)$ for all $t\in[0,t^N]$ and for all $n<N$:
\begin{multline} \label{pneqN}
\frac{\,\mathrm{d} p_n^N}{\,\mathrm{d} t}
= \frac{\gamma(2^{n+p^N_n})}{4} \biggl[ \frac{(1+2^{n-1}y^N_{n-1})^2}{(1+2^ny^N_n)} \Bigl( (p^N_{n-1}-p^N_n) + O(q^N_{n-1}) \Bigr) + O(q_n^N) \\
\qquad -4\bar{\mu}_{n}\frac{\gamma(2^{n+1+p^N_{n+1}})}{\gamma(2^{n+p^N_n})} \biggl( \frac{(1+2^{n+1}y^N_{n+1})}{(1+2^ny^N_n)}\Bigl( (p^N_{n}-p^N_{n+1}) + O(q^N_{n+1}) \Bigr) + (1+2^ny_n^N)O(q_n^N) \biggr) \biggr] ,
\end{multline}
\begin{align} \label{qneqN}
\frac{\,\mathrm{d} q_n^N}{\,\mathrm{d} t}
&= \frac{\gamma(2^{n+p^N_n})}{4} \biggl[ \frac{(1+2^{n-1}y^N_{n-1})^2}{(1+2^ny^N_n)} \Bigl( \frac{q^N_{n-1}}{2}-q^N_n +\delta_0O(q^N_{n-1}) + (p^N_{n-1}-p^N_n)^2\Bigr) +\delta_0O(q^N_n) \nonumber \\
& \qquad -4\bar{\mu}_n\frac{\gamma(2^{n+1+p^N_{n+1}})}{\gamma(2^{n+p^N_n})} \frac{(1+2^{n+1}y^N_{n+1})}{(1+2^ny^N_n)}
\Bigl( (q^N_{n}-q^N_{n+1}) + \delta_0O(q^N_{n+1}) - (p^N_{n+1}-p^N_n)^2 \Bigr) \nonumber \\
& \qquad -4\bar{\mu}_n\frac{\gamma(2^{n+1+p^N_{n+1}})}{\gamma(2^{n+p^N_n})} (1+2^ny^N_n)\delta_0O(q^N_n) \biggr],
\end{align}
where $\bar{\mu}_n=\bar{\mu}_n(A^N(t),p^N(t))$. For $n=N$, recalling also that $y^N_N=0$,
\begin{equation} \label{pneqNb}
\frac{\,\mathrm{d} p_N^N}{\,\mathrm{d} t} = \frac{\gamma(2^{N+p_N^N})}{4} \biggl[ (1+2^{N-1}y^N_{N-1})^2 \Bigl( (p^N_{N-1}-p^N_N) + O(q^N_{N-1}) \Bigr) + O(q^N_N) \biggr],
\end{equation}
\begin{multline} \label{qneqNb}
\frac{\,\mathrm{d} q^N_N}{\,\mathrm{d} t}
= \frac{\gamma(2^{N+p^N_N})}{4} \biggl[ (1+2^{N-1}y^N_{N-1})^2 \Bigl( \frac{q^N_{N-1}}{2}-q^N_N + \delta_0O(q^N_{N-1}) + (p^N_{N-1}-p^N_N)^2 \Bigr) + \delta_0O(q^N_N) \biggr],
\end{multline}
and $p^N_n(t)=q^N_n(t)=0$ for $n>N$.
\begin{proposition} \label{prop:continuation}
There exist positive constants $L_1$, $L_2$, $L_3$, depending only on $M$, and $\delta_0>0$ sufficiently small, such that the estimates \eqref{decaypn1}, \eqref{decaypn2}, \eqref{decayqn1} and the conclusion of Proposition~\ref{prop:fp} hold for all $t>0$.
\end{proposition}
\begin{proof}
We let $\overline{T}$ be the supremum of the times $T>0$ such that the estimates \eqref{decaypn1}, \eqref{decaypn2}, \eqref{decayqn1} hold for every $t\in(0,T)$.
Notice that $\overline{T}>0$, as the estimates are satisfied in $(0,t^N)$. The proof amounts to showing that $\overline{T}=\infty$: indeed, this allows us to repeat the proof of Proposition~\ref{prop:fp} in the time interval $(0,\infty)$. We assume by contradiction that $\overline{T}<\infty$, and we will show that \eqref{decaypn1}--\eqref{decayqn1} hold at $t=\overline{T}$ with strict inequality: this would allow us to extend the estimates to larger times, leading to a contradiction.
In view of the fact, already observed, that the proof of Proposition~\ref{prop:fp} can be repeated in the time interval $(0,\overline{T})$, the sequences $p^N(t)$, $q^N(t)$ obey the evolution equations \eqref{pneqN}--\eqref{qneqNb} for $t\in(0,\overline{T})$.
As usual, we denote by $C$ a constant which can depend only on the properties of the kernels and on $M$, and might change from line to line.
\smallskip\noindent\textit{Step 1: decay of $p^N$.}
We first show that \eqref{decaypn1} holds at $t=\overline{T}$ with strict inequality, with the choice
\begin{equation} \label{L1}
L_1\geq\max\Bigl\{C_1(M,0,0), C_1(M,0,\bar{\theta}_1)\Bigr\}
\end{equation}
(where $C_1$ is the constant given by Theorem~\ref{thm:lineartrunc}).
We highlight the leading order linear operator in the equations \eqref{pneqN}, \eqref{pneqNb} for $p^N(t)$: for $n\leq N$
\begin{equation} \label{pcont1}
\frac{\,\mathrm{d} p_n^N}{\,\mathrm{d} t} = \frac{\gamma(2^{n+p^N_n})}{4}\Bigl( p^N_{n-1}-p^N_n - \sigma^N_n(t)(p^N_n-p^N_{n+1}) \Bigr) + R^{1}_n(t) + R^{2}_n(t) + R^{3}_n(t)
\end{equation}
where we introduced the following quantities:
\begin{equation} \label{pcont3}
\sigma_n^N(t) :=
\begin{cases}
4\bar{\mu}_n(A_M,p^N(t))\frac{\gamma(2^{n+1+p^N_{n+1}(t)})}{\gamma(2^{n+p^N_n(t)})} & \text{if } n<N,\\
0 & \text{if }n=N,
\end{cases}
\end{equation}
\begin{multline} \label{pcont4}
R^{1}_n(t) := \frac{\gamma(2^{n+p^N_n})}{4} \biggl(\frac{(1+2^{n-1}y^N_{n-1})^2}{(1+2^ny^N_n)}-1\biggr) (p^N_{n-1}-p^N_n) \\
-\bar{\mu}_n(A^N(t),p^N(t))\gamma(2^{n+1+p^N_{n+1}}) \biggl( \frac{(1+2^{n+1}y^N_{n+1})}{(1+2^ny^N_n)}-1\biggr) (p^N_{n}-p^N_{n+1}) ,
\end{multline}
\begin{multline} \label{pcont4b}
R^{2}_n(t)
:= \frac{\gamma(2^{n+p^N_n})}{4} \biggl[ \frac{(1+2^{n-1}y^N_{n-1})^2}{(1+2^ny^N_n)} O(q^N_{n-1}) + O(q_n^N) \biggr] \\
-\bar{\mu}_n(A^N(t),p^N(t))\gamma(2^{n+1+p^N_{n+1}}) \biggl[ \frac{(1+2^{n+1}y^N_{n+1})}{(1+2^ny^N_n)} O(q^N_{n+1}) + (1+2^ny_n^N)O(q_n^N)\biggr],
\end{multline}
\begin{equation} \label{pcont4c}
R^{3}_n(t)
:= \gamma(2^{n+1+p^N_{n+1}}) \Bigl[ \bar{\mu}_n(A_M,p^N(t)) - \bar{\mu}_n(A^N(t),p^N(t)) \Bigr] \bigl(p^N_n-p^N_{n+1}\bigr)
\end{equation}
(for $n=N$ the terms containing $\bar{\mu}_n$ are not present in $R^1$, $R^2$, $R^3$).
The linearized operator in \eqref{pcont1} is of the form considered in Section~\ref{subsect:lineartrunc}, see \eqref{tlinear1}; since also the assumption \eqref{asspn} is satisfied, we are
in a position to apply Theorem~\ref{thm:lineartrunc}. We can write the solution to \eqref{pcont1} in terms of the solution to the linearized problem as
\begin{equation}
p^N_n(t) = T^N_n(t;0)(p^N(0)) + \sum_{i=1}^3\int_0^t T^N_n(t;s)(R^{i}(s))\,\mathrm{d} s
\end{equation}
(see \eqref{tlinear4} for the definition of the operator $T^N$). By applying Theorem~\ref{thm:lineartrunc} and using the estimates \eqref{contrp1}--\eqref{contrp3} proved in Lemma~\ref{lem:continuation} below we then find
\begin{align*}
\big\| D^+\bigl(p^N(t)\bigr) \big\|_{0}
& \leq \big\| D^+\bigl(T^N(t;0)(p^N(0))\bigr) \big\|_{0} + \sum_{i=1}^3\int_0^t \big\| D^+\bigl(T^N(t;s)(R^{i}(s)) \bigr) \big\|_{0} \,\mathrm{d} s \\
& \leq C_1(M,0,0) \|p^N(0)\|_0 e^{-\nu t} \\
& \qquad + \sum_{i=1}^3 C_1(M,\bar{\theta}_1-\beta,0)\int_0^t \big\| R^{i}(s) \big\|_{\bar{\theta}_1-\beta}(t-s)^{-\frac{\beta-\bar{\theta}_1}{\beta}}e^{-\nu(t-s)} \,\mathrm{d} s \\
& \leq L_1\delta_0 e^{-\nu t} + C\delta_0^{3/2} \int_0^t \bigl(1+s^{-\bar{\theta}_1/\beta}\bigr)e^{-\frac{\nu}{2} s}(t-s)^{-\frac{\beta-\bar{\theta}_1}{\beta}}e^{-\nu(t-s)} \,\mathrm{d} s \,.
\end{align*}
It can be checked by elementary arguments that the integral on the right-hand side of the previous inequality is bounded by $Ce^{-\frac{\nu}{2}t}$. Then
\begin{equation*}
\big\| D^+\bigl(p^N(t)\bigr) \big\|_{0}
\leq L_1\delta_0 e^{-\nu t} + C\delta_0^{3/2} e^{-\frac{\nu}{2}t}
\leq \frac32 L_1\delta_0e^{-\frac{\nu}{2}t}
\end{equation*}
for all $t\in(0,\overline{T})$, by choosing $\delta_0$ small enough. Therefore the first estimate in \eqref{decaypn1} holds with strict inequality at $t=\overline{T}$.
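For the reader's convenience, we sketch one possible elementary argument for the integral bound used above (it only requires $0<\bar{\theta}_1<\beta$): since $e^{-\frac{\nu}{2}s}e^{-\nu(t-s)}=e^{-\frac{\nu}{2}t}e^{-\frac{\nu}{2}(t-s)}$ for $0\leq s\leq t$, it suffices to check that
\begin{equation*}
\int_0^t \bigl(1+s^{-\bar{\theta}_1/\beta}\bigr)(t-s)^{-\frac{\beta-\bar{\theta}_1}{\beta}}e^{-\frac{\nu}{2}(t-s)}\,\mathrm{d} s \leq C
\end{equation*}
uniformly in $t>0$. This follows by splitting the integral at $s=t/2$: on $(0,t/2)$ one uses the Beta-function identity $\int_0^t s^{-\bar{\theta}_1/\beta}(t-s)^{-\frac{\beta-\bar{\theta}_1}{\beta}}\,\mathrm{d} s=B\bigl(1-\tfrac{\bar{\theta}_1}{\beta},\tfrac{\bar{\theta}_1}{\beta}\bigr)$, while on $(t/2,t)$ one bounds $1+s^{-\bar{\theta}_1/\beta}\leq1+(t/2)^{-\bar{\theta}_1/\beta}$ and $\int_{t/2}^t(t-s)^{-\frac{\beta-\bar{\theta}_1}{\beta}}e^{-\frac{\nu}{2}(t-s)}\,\mathrm{d} s\leq\min\bigl\{C\,t^{\bar{\theta}_1/\beta},C\bigr\}$.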
By the same argument, using also \eqref{contrp1bis}, we have
\begin{align*}
\big\| D^+\bigl(p^N(t)\bigr) \big\|_{\bar{\theta}_1}
& \leq \big\| D^+\bigl(T^N(t;0)(p^N(0))\bigr) \big\|_{\bar{\theta}_1} + \sum_{i=1}^3\int_0^t \big\| D^+\bigl(T^N(t;s)(R^{i}(s)) \bigr) \big\|_{\bar{\theta}_1} \,\mathrm{d} s \\
& \leq C_1(M,0,\bar{\theta}_1) \|p^N(0)\|_0 t^{-\bar{\theta}_1/\beta} e^{-\nu t} \\
& \qquad + C_1(M,\bar{\theta}_1-1,\bar{\theta}_1)\int_0^t \big\| R^{1}(s) \big\|_{\bar{\theta}_1-1} (t-s)^{-\frac{1}{\beta}}e^{-\nu(t-s)} \,\mathrm{d} s \\
& \qquad + C_1(M,\bar{\theta}_2-\beta,\bar{\theta}_1) \int_0^t \big\| R^{2}(s) \big\|_{\bar{\theta}_2-\beta}(t-s)^{-\frac{\beta-(\bar{\theta}_2-\bar{\theta}_1)}{\beta}}e^{-\nu(t-s)} \,\mathrm{d} s \\
& \qquad + C_1(M,0,\bar{\theta}_1)\int_0^t \|R^3(s)\|_{0}(t-s)^{-\bar{\theta}_1/\beta}e^{-\nu(t-s)}\,\mathrm{d} s \\
& \leq L_1\delta_0 t^{-\bar{\theta}_1/\beta}e^{-\nu t} \\
& \qquad + C\delta_0^2 \int_0^t \bigl(1+s^{-\frac{\beta-1}{\beta}}\bigr)s^{-\bar{\theta}_1/\beta}e^{-\frac{\nu}{2}s}(t-s)^{-\frac{1}{\beta}}e^{-\nu(t-s)} \,\mathrm{d} s \\
& \qquad + C\delta_0^{3/2} \int_0^t \bigl(1+s^{-\bar{\theta}_2/\beta}\bigr) e^{-\nu s} (t-s)^{-\frac{\beta-(\bar{\theta}_2-\bar{\theta}_1)}{\beta}}e^{-\nu(t-s)} \,\mathrm{d} s \\
& \leq L_1\delta_0 t^{-\bar{\theta}_1/\beta}e^{-\nu t} + C\delta_0^{3/2}t^{-\bar{\theta}_1/\beta} e^{-\frac{\nu}{2}t}
\leq \frac32 L_1\delta_0 t^{-\bar{\theta}_1/\beta}e^{-\frac{\nu}{2}t}
\end{align*}
for all $t\in(0,\overline{T})$, by choosing $\delta_0$ small enough. Therefore also the second estimate in \eqref{decaypn1} holds with strict inequality at $t=\overline{T}$.
\smallskip\noindent\textit{Step 2: decay of $\frac{\,\mathrm{d} p^N}{\,\mathrm{d} t}$.}
We next choose $L_2$ such that
\begin{equation} \label{L2}
L_2\geq 2L_1 \biggl( \sup_{\xi\leq2}\gamma(\xi)+ \sup_{\xi\geq1}\xi^{-\beta}\gamma(\xi) \biggr) \Bigl(1+\sup_{n,t}\sigma_n^N(t)\Bigr).
\end{equation}
Going back to the equation \eqref{pcont1}, we see that we can bound, using the estimates \eqref{contrp1}--\eqref{contrp3} in Lemma~\ref{lem:continuation} below and the definition of $L_2$,
\begin{align*}
\Big\|\frac{\,\mathrm{d} p^N}{\,\mathrm{d} t}(t)\Big\|_{-\beta}
& \leq \frac{L_2}{2L_1}\|D^+(p^N(t))\|_{0} + \| R^{1}(t) \|_{-\beta} + \|R^{2}(t) \|_{-\beta} + \|R^{3}(t) \|_{-\beta}\\
& \leq L_2\delta_0 e^{-\frac{\nu}{2}t} + 5C_4\delta_0^{3/2}e^{-\frac{\nu}{2}t}
\leq \frac32L_2\delta_0 e^{-\frac{\nu}{2}t}
\end{align*}
by choosing $\delta_0$ small enough, and similarly
\begin{align*}
\Big\|\frac{\,\mathrm{d} p^N}{\,\mathrm{d} t}(t)\Big\|_{\bar{\theta}_1-\beta}
& \leq \frac{L_2}{2L_1}\|D^+(p^N(t))\|_{\bar{\theta}_1} + \| R^{1}(t) \|_{\bar{\theta}_1-\beta} + \|R^{2}(t) \|_{\bar{\theta}_1-\beta} + \|R^{3}(t) \|_{\bar{\theta}_1-\beta} \\
& \leq L_2\delta_0t^{-\bar{\theta}_1/\beta}e^{-\frac{\nu}{2}t} + 3C_4\delta_0^{3/2}(1+t^{-\bar{\theta}_1/\beta})e^{-\frac{\nu}{2}t} \\
& \leq \frac32L_2\delta_0 \bigl(1+t^{-\bar{\theta}_1/\beta}\bigr)e^{-\frac{\nu}{2}t}.
\end{align*}
These two estimates imply that \eqref{decaypn2} holds with strict inequality at $t=\overline{T}$, as claimed.
\smallskip\noindent\textit{Step 3: decay of $q^N$.}
It remains to prove the decay \eqref{decayqn1} of $q^N$. This will be obtained by comparison with an explicit supersolution for the equations \eqref{qneqN}, \eqref{qneqNb} satisfied by $q^N$.
We first consider the sequence $\bar{q}_n(t)=4\delta_0^{3/2}e^{-\nu t}$: thanks to the bounds \eqref{decaypn1}, \eqref{decayqn1} and \eqref{fp3}, one can see that, possibly taking a smaller $\nu>0$ (depending only on the kernels), the sequence $\bar{q}_n$ is a supersolution for the equation \eqref{qneqN}, namely for $t\in[0,\overline{T}]$ and $n\leq N$
\begin{align*}
\frac{\,\mathrm{d} \bar{q}_n}{\,\mathrm{d} t}
&\geq \frac{\gamma(2^{n+p^N_n})}{4} \biggl[ \frac{(1+2^{n-1}y^N_{n-1})^2}{(1+2^ny^N_n)} \Bigl( \frac{\bar{q}_{n-1}}{2}-\bar{q}_n +\delta_0O(q^N_{n-1}) + (p^N_{n-1}-p^N_n)^2\Bigr) +\delta_0O(q^N_n) \\
& \qquad -\sigma_n^N(t)\frac{(1+2^{n+1}y^N_{n+1})}{(1+2^ny^N_n)}
\Bigl( (\bar{q}_{n}-\bar{q}_{n+1}) + \delta_0O(q^N_{n+1}) - (p^N_{n+1}-p^N_n)^2 \Bigr) \\
& \qquad - \sigma_n^N(t) (1+2^ny^N_n)\delta_0O(q^N_n) \biggr] .
\end{align*}
It follows that $w_n(t):=\bar{q}_n(t)+\varepsilon2^{-n}-q^N_n(t)$ satisfies, for $t\in[0,\overline{T}]$ and $n\leq N$,
\begin{equation*}
\frac{\,\mathrm{d} w_n}{\,\mathrm{d} t} \geq
\frac{\gamma(2^{n+p^N_n})}{4} \biggl[ \frac{(1+2^{n-1}y^N_{n-1})^2}{(1+2^ny^N_n)} \Bigl( \frac{w_{n-1}}{2}-w_n \Bigr)
-\sigma_n^N(t)\frac{(1+2^{n+1}y^N_{n+1})}{(1+2^ny^N_n)}(w_n-w_{n+1}) \biggr]
\end{equation*}
with $w_n(0)\geq0$ by \eqref{decaypn0}, $w_{N+1}(t)\geq0$, and $w_{-N_1}(t)\geq0$ for $N_1$ large enough, depending on $\varepsilon$; hence by applying the maximum principle in the region $n\in[-N_1,N]$, $t\in[0,\overline{T}]$ we obtain $w_n(t)\geq0$, which yields (by passing to the limit first as $N_1\to\infty$, then as $\varepsilon\to0$)
\begin{equation*}
\bar{q}_n(t)\geq q_n^N(t) \qquad\text{for all $n\leq N$ and $t\in[0,\overline{T}]$.}
\end{equation*}
Hence the first estimate in \eqref{decayqn1} holds with strict inequality at $t=\overline{T}$.
Next, we let $n_0$ be given by Lemma~\ref{lem:continuation2} below. Notice that the first estimate in \eqref{decayqn1} yields
\begin{equation} \label{qcont2}
\sup_{0<n\leq n_0} 2^{\bar{\theta}_2n}|q_n^N(t)| \leq 2^{\bar{\theta}_2n_0+2}\delta_0^{3/2}e^{-\nu t}.
\end{equation}
We then let $\hat{q}_n$ be the solution to the initial/boundary value problem \eqref{hatqn1}. If $\delta_0$ is small enough, one can show that $\hat{q}_n$ is a supersolution for the equation \eqref{qneqN} solved by $q_n^N$, in the sense that for $n>n_0$ and $t\in[0,\overline{T}]$
\begin{align*}
\frac{\,\mathrm{d} \hat{q}_n}{\,\mathrm{d} t}
& = \frac{\gamma(2^n)}{4} \Bigl( \frac12(1+\delta_1)\hat{q}_{n-1} - (1-\delta_1)\hat{q}_n \Bigr) \\
&\geq \frac{\gamma(2^{n+p^N_n})}{4} \biggl[ \frac{(1+2^{n-1}y^N_{n-1})^2}{(1+2^ny^N_n)} \Bigl( \frac{\hat{q}_{n-1}}{2}-\hat{q}_n +\delta_0O(q^N_{n-1}) + (p^N_{n-1}-p^N_n)^2\Bigr) +\delta_0O(q^N_n) \\
& \qquad -\sigma_n^N(t)\frac{(1+2^{n+1}y^N_{n+1})}{(1+2^ny^N_n)}
\Bigl( ({q}^N_{n}-{q}^N_{n+1}) + \delta_0O(q^N_{n+1}) - (p^N_{n+1}-p^N_n)^2 \Bigr) \\
& \qquad - \sigma_n^N(t) (1+2^ny^N_n)\delta_0O(q^N_n) \biggr] .
\end{align*}
Indeed, by using the decay estimates \eqref{decaypn1} and \eqref{decayqn1} for $D^+(p^N)$ and $q^N$, the estimate \eqref{fp3} on $y^N$, the fast decay \eqref{contsigma} of $\sigma_n^N$, and the estimate from below \eqref{hatqn2} on $\hat{q}_n$, one can show that all the terms on the right-hand side can be bounded in terms of $C(\delta_0)\hat{q}_n$, where $C(\delta_0)$ can be made arbitrarily small by choosing $\delta_0$ small enough.
Hence the function $w_n(t)=\hat{q}_n(t)-q^N_n(t)$ satisfies for $t\in[0,\overline{T}]$ and $n> n_0$
\begin{equation*}
\frac{\,\mathrm{d} w_n}{\,\mathrm{d} t} \geq
\frac{\gamma(2^{n+p^N_n})}{4} \frac{(1+2^{n-1}y^N_{n-1})^2}{(1+2^ny^N_n)} \Bigl( \frac{w_{n-1}}{2}-w_n \Bigr),
\end{equation*}
with $w_n(0)\geq0$ and $w_{n_0}(t)\geq0$ (by \eqref{qcont2}). The maximum principle gives $w_n(t)\geq0$ for all $t\in[0,\overline{T}]$ and $n>n_0$, that is, in view of \eqref{hatqn2},
\begin{equation*}
\sup_{n>n_0} 2^{\bar{\theta}_2n}|q_n^N(t)| \leq c_2\delta_0^{3/2}t^{-\bar{\theta}_2/\beta}e^{-\nu t}.
\end{equation*}
By combining this estimate with \eqref{qcont2}, we eventually find that also the second estimate in \eqref{decayqn1} holds at $t=\overline{T}$ with strict inequality, as claimed, choosing $L_3=\max\{c_2,2^{\bar{\theta}_2n_0+2} \}$.
\end{proof}
The following two lemmas are instrumental in the proof of Proposition~\ref{prop:continuation}.
\begin{lemma} \label{lem:continuation}
Let $R^{1}$, $R^{2}$, $R^{3}$ be the sequences defined in \eqref{pcont4}, \eqref{pcont4b}, \eqref{pcont4c} respectively. Then there exists a constant $C_4$, depending on $M$, $L_1$, $L_2$, $L_3$, such that for all $t\in(0,\overline{T})$
\begin{equation} \label{contrp1}
\|R^{1}(t)\|_{\theta-\beta} \leq C_4\delta_0^2 \bigl(1+t^{-\theta/\beta}\bigr)e^{-\frac{\nu}{2} t},
\qquad\text{for all $\theta\in[0,\bar{\theta}_1]$,}
\end{equation}
\begin{equation} \label{contrp2}
\|R^{2}(t)\|_{\theta-\beta} \leq C_4\delta_0^{3/2} \bigl(1+t^{-\theta/\beta}\bigr)e^{-\nu t},
\qquad\text{for all $\theta\in[0,\bar{\theta}_2]$,}
\end{equation}
\begin{equation} \label{contrp3}
\|R^{3}(t)\|_{\theta-\beta} \leq C_4\delta_0^2 e^{-\frac{\nu}{2} t},
\qquad\text{for all $\theta\in[0,\beta]$,}
\end{equation}
\begin{equation} \label{contrp1bis}
\|R^{1}(t)\|_{\bar{\theta}_1-1} \leq C_4\delta_0^2 \bigl(1+t^{-\frac{\beta-1}{\beta}}\bigr)t^{-\bar{\theta}_1/\beta}e^{-\frac{\nu}{2}t}.
\end{equation}
\end{lemma}
\begin{proof}
Along the proof, the symbol $\lesssim$ will be used for inequalities up to constants which can depend only on $M$, $L_1$, $L_2$, $L_3$. We remark that, in view of the explicit expression \eqref{nearlystat5} of $\bar{\mu}_n$ and of the fact that, by construction, $A^N(t)\geq\frac{A_M}{2}$, we have
\begin{equation} \label{contsigma}
\limsup_{n\to-\infty} | \bar{\mu}_n(A^N(t),p^N(t))-1|\lesssim\delta_0,
\quad
\bar{\mu}_n(A^N(t),p^N(t)) = O(e^{-\frac12 A_M2^n}) \quad\text{as $n\to\infty$.}
\end{equation}
The estimates below follow essentially by using the assumptions \eqref{kernel5}--\eqref{kernel5bis} on $\gamma$, the asymptotics \eqref{contsigma} of $\bar{\mu}_n$, the bounds \eqref{fp3} on $y^N$ and $A^N$, and the estimates \eqref{decaypn1}--\eqref{decayqn1} on $p^N$, $q^N$ (which by assumption hold for $t\in(0,\overline{T})$).
We first consider $R^{1}$. Observe that by interpolating between the two estimates in \eqref{decaypn1} we have for $\theta\in[0,\bar{\theta}_1]$
\begin{equation*}
\|D^+(p^N(t))\|_{\theta} \leq \bigl(\|D^+(p^N(t))\|_{\bar{\theta}_1}\bigr)^{\frac{\theta}{\bar{\theta}_1}} \bigl(\|D^+(p^N(t))\|_{0}\bigr)^{1-\frac{\theta}{\bar{\theta}_1}} \lesssim \delta_0 t^{-\theta/\beta}e^{-\frac{\nu}{2}t}.
\end{equation*}
For $n\leq0$ we then find
\begin{align*}
|R^{1}_n(t)|
& \lesssim \Bigl(2^n|y^N_{n-1}-y^N_n| + 2^{2n}|y^N_{n-1}|^2 \Bigr)|p^N_{n-1}-p^N_n| + |2^{n+1}y^N_{n+1}-2^ny^N_n||p^N_n-p^N_{n+1}| \\
& \lesssim 2^{-n}\|y^N(t)\|_1 \|D^+(p^N(t))\|_{0} \lesssim 2^{-n}\delta_0^2 e^{-\frac{\nu}{2}t},
\end{align*}
whereas for $n\geq0$
\begin{align*}
|R^{1}_n(t)|
& \lesssim 2^{\beta n}\Bigl(2^n|y^N_{n-1}-y^N_n| + 2^{2n}|y^N_{n-1}|^2 \Bigr)|p^N_{n-1}-p^N_n| \\
& \qquad + 2^{\beta n} e^{-\frac12 A_M2^n} |2^{n+1}y^N_{n+1}-2^ny^N_n||p^N_n-p^N_{n+1}| \\
& \lesssim 2^{(\beta-\theta)n}\|y^N(t)\|_1 \|D^+(p^N(t))\|_{\theta} \lesssim 2^{(\beta-\theta)n}\delta_0^2 t^{-\theta/\beta}e^{-\frac{\nu}{2}t}.
\end{align*}
The previous estimates combined yield \eqref{contrp1}.
For the proof of \eqref{contrp1bis} we estimate as before ($n>0$)
\begin{align*}
|R^{1}_n(t)|
& \lesssim 2^{(1-\bar{\theta}_1)n}\|y^N(t)\|_\beta \|D^+(p^N(t))\|_{\bar{\theta}_1} \\
& \lesssim 2^{(1-\bar{\theta}_1)n} \delta_0^2 \bigl(1+t^{-\frac{\beta-1}{\beta}}\bigr)e^{-\frac{\nu}{2}t}t^{-\bar{\theta}_1/\beta}e^{-\frac{\nu}{2}t}.
\end{align*}
We next consider $R^{2}$: we first observe that by interpolating between the two estimates in \eqref{decayqn1} we have for all $n>0$ and $\theta\in[0,\bar{\theta}_2]$
\begin{equation*}
2^{\theta n}|q_n^N(t)| = \Bigl(2^{\bar{\theta}_2n}|q_n^N(t)|\Bigr)^{\frac{\theta}{\bar{\theta}_2}} |q_n^N(t)|^{1-\frac{\theta}{\bar{\theta}_2}} \lesssim \delta_0^{3/2}t^{-\theta/\beta}e^{-\nu t}.
\end{equation*}
Hence
\begin{align*}
|R^{2}_n(t)| &\lesssim |q^N_{n-1}(t)| + |q^N_{n}(t)| + |q^N_{n+1}(t)| \lesssim \delta_0^{3/2} e^{-\nu t} & & \text{for $n<0$,} \\
|R^{2}_n(t)| &\lesssim 2^{\beta n} \Bigl( |q^N_{n-1}(t)| + |q^N_{n}(t)| + |q^N_{n+1}(t)| \Bigr) \lesssim 2^{(\beta-\theta)n}\delta_0^{3/2}t^{-\theta/\beta}e^{-\nu t} & & \text{for $n\geq0$,}
\end{align*}
and the two estimates combined yield \eqref{contrp2}.
Finally, to prove the estimate \eqref{contrp3} for $R^{3}$, we recall \eqref{estmun} and we obtain
\begin{align*}
2^n|R^{3}_n(t)| &\lesssim \delta_02^n |p^N_n-p^N_{n+1}| \lesssim \delta_0^{2} e^{-\frac{\nu}{2} t} & & \text{for $n<0$,} \\
2^{(\theta-\beta)n}|R^{3}_n(t)| &\lesssim \delta_0 2^{\theta n}2^ne^{-\frac12 A_M2^n} |p^N_n-p^N_{n+1}| \lesssim \delta_0|p^N_n-p^N_{n+1}| \lesssim \delta_0^{2} e^{-\frac{\nu}{2} t} & & \text{for $n\geq0$.}
\end{align*}
The conclusion follows.
\end{proof}
\begin{lemma} \label{lem:continuation2}
There exist $\delta_1>0$, $n_0\in\mathbb N$, and $c_2>c_1>0$ such that if $\hat{q}(t)=\{\hat{q}_n(t)\}_{n\in\mathbb Z}$ solves
\begin{equation} \label{hatqn1}
\begin{cases}
\frac{\,\mathrm{d}\hat{q}_n}{\,\mathrm{d} t} = \frac{\gamma(2^n)}{4} \Bigl( \frac12(1+\delta_1)\hat{q}_{n-1} - (1-\delta_1)\hat{q}_n \Bigr) & \text{for $n>n_0$,}\\
\hat{q}_n(0) = 4\delta_0^{3/2} & \text{for $n>n_0$,}\\
\hat{q}_{n_0}(t) = 4\delta_0^{3/2}e^{-\nu t} & \text{for $t\geq0$,}
\end{cases}
\end{equation}
then
\begin{equation} \label{hatqn2}
c_1\delta_0^{3/2} e^{-\nu t} \leq 2^{\bar{\theta}_2n}\hat{q}_n(t) \leq c_2\delta_0^{3/2} t^{-\bar{\theta}_2/\beta} e^{-\nu t}
\qquad\text{for all $n>n_0$.}
\end{equation}
\end{lemma}
\begin{proof}
Let $\sigma>0$ satisfy $2^{-\sigma}=\frac{1-\delta_1}{1+\delta_1}$. With the change of variables $w_n(t):=2^{(1-\sigma)n}\hat{q}_n(\frac{t}{1-\delta_1})$, the equation \eqref{hatqn1} becomes
\begin{equation} \label{hatqn3}
\frac{\,\mathrm{d} w_n}{\,\mathrm{d} t} = \frac{\gamma(2^n)}{4}\bigl( w_{n-1}-w_n \bigr).
\end{equation}
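For completeness, a quick check of this change of variables: setting $\tau:=\frac{t}{1-\delta_1}$ and using $\frac{1+\delta_1}{1-\delta_1}=2^{\sigma}$,
\begin{equation*}
\frac{\,\mathrm{d} w_n}{\,\mathrm{d} t}(t) = \frac{2^{(1-\sigma)n}}{1-\delta_1}\frac{\,\mathrm{d}\hat{q}_n}{\,\mathrm{d} t}(\tau)
= \frac{\gamma(2^n)}{4}\Bigl( \tfrac12\,2^{\sigma}\,2^{1-\sigma} w_{n-1}(t) - w_n(t) \Bigr)
= \frac{\gamma(2^n)}{4}\bigl( w_{n-1}(t)-w_n(t) \bigr),
\end{equation*}
which is exactly \eqref{hatqn3}.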
We can then express $w_n$ in terms of the fundamental solution to \eqref{hatqn3}, which has been computed in \cite[Lemma~A.3]{BNVd}:
\begin{equation*}
w_n(t) = \frac{\gamma(2^{n_0+1})}{4} \int_0^t \Psi_n^{(n_0+1)}(t-s)w_{n_0}(s)\,\mathrm{d} s + \sum_{\ell=n_0+1}^n \Psi_n^{(\ell)}(t)w_\ell(0), \qquad n\geq n_0+1.
\end{equation*}
From the explicit expression of $\Psi$ provided by \cite[Lemma~A.3]{BNVd} one can obtain an estimate of the form
\begin{equation*}
c_1 e^{-\frac{\gamma(2^\ell)}{4}t} \leq \Psi_n^{(\ell)}(t) \leq c_2 e^{-\frac{\gamma(2^\ell)}{4}t} \qquad (n\geq\ell\geq n_0)
\end{equation*}
for $c_2>c_1>0$ depending only on the fragmentation kernel $\gamma$; then, combining the previous estimate with the assumptions on $w_n(0)$, $w_{n_0}(t)$, we find for all $n\geq n_0+1$
\begin{equation*}
\begin{split}
w_n(t)
& \leq C_{n_0}\delta_0^{3/2}\int_0^t e^{-\frac{\gamma(2^{n_0+1})}{4}(t-s)} e^{-\frac{\nu s}{1-\delta_1}}\,\mathrm{d} s + 4c_2\delta_0^{3/2}\sum_{\ell=n_0+1}^n 2^{(1-\sigma)\ell}e^{-\frac{\gamma(2^\ell)}{4}t} \\
& \leq C_{n_0}\delta_0^{3/2}e^{-\frac{\nu t}{1-\delta_1}} + C\delta_0^{3/2} t^{-\frac{1-\sigma}{\beta}}e^{-\frac{\gamma(2^{n_0+1})}{8} t}
\leq C_{n_0}\delta_0^{3/2} t^{-\frac{1-\sigma}{\beta}}e^{-\frac{\nu t}{1-\delta_1}}
\end{split}
\end{equation*}
(the sum can be bounded by arguing as in \cite[equation~(A.43)]{BNVd}), and similarly
\begin{equation*}
w_n(t) \geq C'_{n_0}\delta_0^{3/2} e^{-\frac{\nu t}{1-\delta_1}}.
\end{equation*}
Going back to the function $\hat{q}_n$ with the change of variables, and choosing $\delta_1>0$ such that $1-\sigma=\bar{\theta}_2$, we obtain the estimate in the statement.
\end{proof}
\subsection{Conclusion} \label{subsect:conclusion}
We are now in a position to conclude the proof of the main result of the paper, by passing to the limit in the truncation parameter $N\to\infty$.
\begin{proof}[Proof of Theorem~\ref{thm:stability}]
For every sufficiently large $N\in\mathbb N$ we constructed in Section~\ref{subsect:truncation} a weak solution corresponding to the truncated initial datum and the truncated kernels, see \eqref{truncation1} and \eqref{truncation2}. These solutions exist for all positive times, remain supported in small intervals around the integers \eqref{truncation4}, and the corresponding sequence of masses $m^N(t)$ can be represented as in \eqref{fp} for all $t>0$, for suitable functions $t\mapsto(y^N(t),A^N(t))$.
Moreover, the sequences of first and second moments $p^N(t)$, $q^N(t)$ obey the estimates \eqref{decaypn1}--\eqref{decayqn1} for $t\in(0,\infty)$.
All the constants in the estimates are in particular independent of $N$.
For every $r>0$ we have the uniform bound
\begin{equation} \label{concl2}
\begin{split}
\int_{\mathbb R} 2^{rx}g^N(x,t)\,\mathrm{d} x
& \leq 2^{r\delta_0}\sum_{n\in\mathbb Z} 2^{rn}m^N_n(t) \\
& \xupref{fp}{\leq} 2^{r\delta_0}\bigl(1+\|y^N(t)\|_1\bigr)\sum_{n\in\mathbb Z} 2^{rn}\bar{m}_n(A^N(t),p^N(t)) \leq C_r
\end{split}
\end{equation}
for $C_r$ independent of $N$, in view of the asymptotics \eqref{nearlystat4} of $\bar{m}_n$ and of \eqref{fp3}. This in particular implies that, for every fixed $t$, the sequence of measures $\{g^N(\cdot,t)\}_N$ is tight, and hence relatively compact with respect to narrow convergence. Moreover, the family $\{g^N\}_N$ is equicontinuous, in the sense that for every $0<s<t<T$ and $\varphi\in C_{\mathrm b}(\mathbb R)$ we have, by using the weak formulation of the equation and the assumptions on the kernels,
\begin{equation*}
\begin{split}
\bigg| & \int_{\mathbb R}\varphi(x)g^N(x,t)\,\mathrm{d} x - \int_{\mathbb R}\varphi(x)g^N(x,s)\,\mathrm{d} x\bigg| \\
& \leq \frac{\ln2}{2}\sum_{k\leq N-1} \int_s^t \int_{I_k}\int_{I_k} K_N(2^y,2^z)g^N(y,\tau)g^N(z,\tau)\bigg| \varphi\Bigl(\frac{\ln(2^y+2^z)}{\ln2}\Bigr) - \varphi(y) - \varphi(z) \bigg| \,\mathrm{d} y \,\mathrm{d} z \,\mathrm{d} \tau\\
& \qquad + \frac14\sum_{k\leq N-1} \int_s^t\int_{I_k} \gamma_N(2^{y+1}) g^N(y+1,\tau) \big| \varphi(y+1) - 2\varphi(y) \big| \,\mathrm{d} y\,\mathrm{d} \tau \\
& \leq C\|\varphi\|_\infty\int_s^t \biggl( \sum_{k\leq0}2^{-k}\bigl(m^N_k(\tau)\bigr)^2 + \sum_{k>0}2^{\alpha k}\bigl(m^N_k(\tau)\bigr)^2 + \sum_{k\leq0} m^N_k(\tau) + \sum_{k>0}2^{\beta k}m^N_k(\tau) \biggr) \,\mathrm{d}\tau \\
& \leq C\|\varphi\|_\infty |t-s|,
\end{split}
\end{equation*}
with $C$ independent of $N$. Hence by the Ascoli--Arzel\`a theorem we can find a subsequence $N_j\to\infty$ such that the measures $g^{N_j}(\cdot,t)$ narrowly converge to some limit measure $g(\cdot,t)$ for every $t>0$, in the sense that
\begin{equation} \label{concl1}
\int_{\mathbb R}\varphi(x)g^{N_j}(x,t)\,\mathrm{d} x \to\int_{\mathbb R}\varphi(x)g(x,t)\,\mathrm{d} x \qquad\text{for every }\varphi\in C_{\mathrm b}(\mathbb R).
\end{equation}
Next, thanks to \eqref{concl1} we can pass to the limit in the weak formulation \eqref{weakg} of the equation and obtain that $g$ is a weak solution, in the sense of Definition~\ref{def:weakg}, with initial datum $g_0$. Moreover $\operatorname{supp} g(\cdot,t)\subset\bigcup_{n\in\mathbb Z}I_n$. By defining $m_n(t)$, $p_n(t)$, $q_n(t)$ as in \eqref{mn}, \eqref{pn}, \eqref{qn} respectively, the convergence \eqref{concl1} implies
\begin{equation} \label{concl3}
m^{N_j}_n(t)\to m_n(t),
\quad
p^{N_j}_n(t)\to p_n(t),
\quad
q^{N_j}_n(t)\to q_n(t)
\qquad\text{as $j\to\infty$, for every $n\in\mathbb Z$.}
\end{equation}
In particular the sequences $p(t)$, $q(t)$ obey the bounds \eqref{decaypn1}--\eqref{decayqn1} (which are uniform in $N$), and in turn there exists $\rho\in[-\delta_0,\delta_0]$ such that
\begin{equation} \label{concl4}
p_n(t)\to\rho, \qquad q_n(t)\to0 \qquad\text{as $t\to\infty$, for all $n\in\mathbb Z$.}
\end{equation}
We next show that the limit sequence $m_n(t)$ can also be represented in terms of the coefficients $\bar{m}_n$. Indeed, by \eqref{fp3} we have that (up to further subsequences) $y^{N_j}_n(t)\to y_n(t)$ and $A^{N_j}(t)\to A(t)$ as $j\to\infty$, for some limit functions $y_n(t)$, $A(t)$. We deduce that
\begin{equation} \label{concl6}
\begin{split}
m_n(t)
& = \lim_{j\to\infty} m^{N_j}_n(t)
\xupref{fp}{=} \lim_{j\to\infty} \bar{m}_n(A^{N_j}(t),p^{N_j}(t)) \bigl(1+2^ny_n^{N_j}(t)\bigr) \\
& = \bar{m}_n(A(t),p(t))(1+2^ny_n(t)).
\end{split}
\end{equation}
We eventually pass to the limit as $t\to\infty$. We extract a subsequence $t_j\to\infty$ such that $g(\cdot,t_j)\to \mu$ in the sense of measures as $j\to\infty$, for some limit measure $\mu$. As the sequence $y_n(t)$ satisfies the bound \eqref{fp3}, we have $2^ny_n(t_j)\to0$ as $j\to\infty$, and we can further assume that the limit $A_\infty:=\lim_{j\to\infty}A(t_j)$ exists. Hence by \eqref{concl6} we have for every $n\in\mathbb Z$
\begin{equation} \label{concl5}
\lim_{j\to\infty}m_n(t_j)=\lim_{j\to\infty}\bar{m}_n(A(t_j),p(t_j))(1+2^ny_n(t_j)) = a_n(A_{\infty},\rho) .
\end{equation}
We claim that $A_\infty=A_{M,\rho}$: indeed, in view of the conservation of mass of the solution $g(x,t)$ and by a Taylor expansion, we have
\begin{equation*}
M= \sum_{n=-\infty}^\infty\int_{I_n}2^xg(x,t_j)\,\mathrm{d} x = \sum_{n=-\infty}^\infty \biggl( 2^{n+p_n(t_j)}m_n(t_j) + O(2^nm_n(t_j)q_n(t_j)) \biggr),
\end{equation*}
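A sketch of the Taylor expansion behind the last display (recalling the definitions \eqref{mn}, \eqref{pn}, \eqref{qn} of $m_n$, $p_n$, $q_n$ as the mass, normalized first moment and normalized second moment of $g(\cdot,t_j)$ on $I_n$): writing $2^x=2^{n+p_n(t_j)}\,2^{x-n-p_n(t_j)}$ and expanding $2^{u}=1+(\ln2)u+O(u^2)$ for $u=x-n-p_n(t_j)$, which is bounded on $I_n$, the linear term integrates to zero and the quadratic remainder contributes $O\bigl(m_n(t_j)q_n(t_j)\bigr)$, so that
\begin{equation*}
\int_{I_n}2^xg(x,t_j)\,\mathrm{d} x = 2^{n+p_n(t_j)}\Bigl( m_n(t_j)+O\bigl(m_n(t_j)q_n(t_j)\bigr)\Bigr).
\end{equation*}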
so that by passing to the limit as $j\to\infty$ and using \eqref{concl4}, \eqref{concl5} we find
\begin{equation*}
M= \sum_{n=-\infty}^\infty 2^{n+\rho}a_n(A_\infty,\rho),
\end{equation*}
which implies $A_\infty=A_{M,\rho}$ (see \eqref{peak2}).
Thanks to \eqref{concl4} we have for every $n\in\mathbb Z$
\begin{equation*}
\int_{I_n}(x-n-\rho)^2\,\mathrm{d}\mu(x) = \lim_{j\to\infty}\int_{I_n}(x-n-p_n(t_j))^2g(x,t_j)\,\mathrm{d} x = \lim_{j\to\infty} m_n(t_j)q_n(t_j)=0,
\end{equation*}
therefore $\operatorname{supp}\mu\subset\bigcup_{n\in\mathbb Z}\{n+\rho\}$, and $\mu=\sum_{n\in\mathbb Z}b_n\delta_{n+\rho}$ for suitable coefficients $b_n$. Moreover by \eqref{concl5}
\begin{equation*}
b_n = \lim_{j\to\infty} m_n(t_j)=a_n(A_{M,\rho},\rho),
\end{equation*}
and we conclude that the limit measure $\mu$ coincides with the stationary solution $g_p(A_{M,\rho},\rho)$. Finally, by uniqueness of the limit we also have that the full family of measures $g(\cdot,t)$ converges to $g_p(A_{M,\rho},\rho)$ as $t\to\infty$.
\end{proof}
\appendix
\section{Proof of the regularity result for the linearized problem} \label{sect:appendix}
We provide in this section the proof of the regularity result for the linearized problem \eqref{slinear1}, Theorem~\ref{thm:slinear}. The proof follows the same strategy as that of Theorem~\ref{thm:linear}, given in \cite[Appendix~A]{BNVd}, which can be seen as a particular case of Theorem~\ref{thm:slinear}.
\begin{proof}[Proof of Theorem~\ref{thm:slinear}]
Along the proof, we will denote by $C$ a generic constant, possibly depending on the properties of the kernels, on $M$, and on $\bar{\theta}_1$, which might change from line to line. The estimate in the statement will be proved for the exponent $\nu>0$ given by
\begin{equation} \label{sprooflinear0}
\nu = \frac{1}{4c_0},
\end{equation}
where $c_0$ is the constant given by Lemma~\ref{lem:poincare} below. We divide the proof into several steps.
\smallskip\noindent\textit{Step 1.}
A maximum principle argument as in the first step of the proof of \cite[Lemma~A.1]{BNVd} can be applied also in this case, with minor changes, and shows, for a given initial datum $y^0\in\mathcal{Y}_\theta$, the existence and uniqueness of a solution $t\mapsto y(t)$ with $y(0)=y^0$ in the space $\mathcal{Y}_0$ for $\theta\geq0$ and in the space $\mathcal{Y}_\theta$ if $\theta<0$, satisfying in addition
\begin{equation} \label{sprooflinear1}
\begin{split}
\|y(t)\|_{0} &\leq 2\|y^0\|_\theta e^{\mu t} \qquad\text{(if $\theta\geq0$),}\\
\|y(t)\|_{\theta} &\leq 2\|y^0\|_\theta e^{\mu t} \qquad\text{(if $\theta<0$),}
\end{split}
\end{equation}
for some $\mu>0$ (we omit the details here).
The maximum principle also yields a uniform estimate on $y_n(t)$ in the region $n\geq N$, for a fixed $N$ sufficiently large, in terms of the initial values $y^0$ and of the boundary values $y_N(t)$. In order to obtain such an estimate, we again distinguish between the two cases $\theta\geq0$ and $\theta<0$. In the first case ($\theta\geq0$), the initial datum $y^0_n$ is bounded as $n\to\infty$, and we can directly use a comparison principle with the constant $\sup_{n\geq N}|y^0_n| + \sup_{0\leq s\leq t}|y_N(s)|$ (since the constants are solutions to \eqref{slinear1}). In the other case ($\theta<0$), we can compare with the sequence $2^{-\theta n}$, which is a supersolution to \eqref{slinear1} in the region $n\geq N$ for $N$ large enough (exploiting the fact that $\sigma_n\to0$ as $n\to\infty$). In conclusion, we find for every $t>0$
\begin{equation} \label{sprooflinear1bis}
\begin{split}
\sup_{n\geq N} |y_n(t)| &\leq \sup_{n\geq N}|y^0_n| + \sup_{0\leq s\leq t}|y_N(s)| \leq \|y^0\|_\theta + \sup_{0\leq s\leq t}|y_N(s)| \qquad \text{(if $\theta\geq0$),}\\
\sup_{n\geq N} 2^{\theta n}|y_n(t)| &\leq \sup_{n\geq N}2^{\theta n}|y^0_n| + \sup_{0\leq s\leq t}|y_N(s)| \leq \|y^0\|_\theta + \sup_{0\leq s\leq t}|y_N(s)| \qquad\text{(if $\theta<0$).}
\end{split}
\end{equation}
\smallskip\noindent\textit{Step 2.}
We will now prove a uniform decay estimate in bounded regions $n\in[-n_0,n_0]$, for $n_0\in\mathbb N$ sufficiently large.
To this aim, we introduce the quantities
\begin{equation} \label{sprooflinear2}
\bar{m}(t) := \frac{\sum_{n=-\infty}^\infty 2^{2n}\bar{m}_n(A_M,p(t)) y_n(t)}{\sum_{n=-\infty}^\infty 2^{2n}\bar{m}_n(A_M,p(t))} \,,
\qquad
I(t) := \sum_{n=-\infty}^\infty 2^{2n}\bar{m}_n(A_M,p)(y_n-\bar{m})^2 \,,
\end{equation}
where the coefficients $\bar{m}_n$ are defined in Lemma~\ref{lem:nearlystat}. In view of the rough bound \eqref{sprooflinear1}, and of the decay \eqref{nearlystat4} of the sequence $\bar{m}_n$, we easily obtain a uniform estimate for small times:
\begin{equation} \label{sprooflinear201}
|\bar{m}(t)| \leq \overline{C} \|y^0\|_\theta,
\qquad
|I(t)| \leq \overline{C} \|y^0\|_\theta^2
\qquad\text{for all $t\leq1$,}
\end{equation}
for a uniform constant $\overline{C}$.
We now compute the evolution equations for $\bar{m}(t)$ and $I(t)$ and show that these quantities decay exponentially to 0, by a Gr\"onwall-type argument.
In view of \eqref{sprooflinear201} we can restrict to times $t\geq1$, so that we do not have to take into account the time singularity $t^{-\bar{\theta}_1/\beta}$ in \eqref{asspn}. We preliminarily notice that, by using the estimates \eqref{nearlystat6} and \eqref{asspn}, we find
\begin{equation} \label{sprooflinear202}
\begin{split}
\bigg| \frac{\,\mathrm{d}}{\,\mathrm{d} t}\Bigl[\bar{m}_n(A_M,p(t))\Bigr] \bigg|
& \leq \sum_{k=n}^\infty \bigg| \frac{\partial\bar{m}_n}{\partial p_k}(A_M,p(t)) \frac{\,\mathrm{d} p_k}{\,\mathrm{d} t} \bigg| \\
& \leq C\eta_0 e^{-\frac{\nu}{2}t} \bar{m}_n(A_M,p(t)) \biggl( \sum_{k=n\wedge 1}^0 2^{n-k} +\sum_{k=n\vee 1}^\infty 2^{n-k}2^{(\beta-\bar{\theta}_1)k} \biggr) \\
& \leq C\eta_0 e^{-\frac{\nu}{2}t} \bar{m}_n(A_M,p(t)) \max\bigl\{ 1,2^{(\beta-\bar{\theta}_1)n} \bigr\} .
\end{split}
\end{equation}
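The last passage amounts to two geometric sums; assuming (as implicitly used here) that $\bar{\theta}_1>\beta-1$, so that the second series converges, one has
\begin{equation*}
\sum_{k=n\wedge 1}^{0}2^{n-k}\leq 2,
\qquad
\sum_{k=n\vee 1}^{\infty}2^{n-k}2^{(\beta-\bar{\theta}_1)k}
=2^{n}\sum_{k=n\vee 1}^{\infty}2^{-(1-\beta+\bar{\theta}_1)k}
\leq C\max\bigl\{1,2^{(\beta-\bar{\theta}_1)n}\bigr\},
\end{equation*}
which gives the claimed bound.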
By elementary computations using the equation \eqref{slinear1}, the definition \eqref{slinear2} of $\sigma_n(t)$, and the relation \eqref{nearlystat3bis}, one can check that
\begin{equation*}
\begin{split}
\frac{\,\mathrm{d}\bar{m}(t)}{\,\mathrm{d} t}
= \frac{1}{\mathbb Sum_{n=-\infty}^\infty 2^{2n}\bar{m}_n(A_M,p(t))} {\mathbb Sum_{n=-\infty}^\infty 2^{2n} \frac{\,\mathrm{d}}{\,\mathrm{d} t}\Bigl[\bar{m}_n(A_M,p(t))\Bigr] \bigl(y_n(t)-\bar{m}(t)\bigr) },
\end{split}
\end{equation*}
hence in view of \eqref{sprooflinear202}, for all $t\geq1$,
\begin{equation} \label{sprooflinear203}
\begin{split}
\bigg| \frac{\,\mathrm{d}\bar{m}(t)}{\,\mathrm{d} t} \bigg|
& \leq C\eta_0 e^{-\frac{\nu}{2} t} \biggl( \sum_{n=-\infty}^0 2^{2n}\bar{m}_n\big|y_n-\bar{m}\big| + \sum_{n=1}^\infty 2^{(\beta-\bar{\theta}_1)n}2^{2n}\bar{m}_n\big|y_n-\bar{m}\big| \biggr) \\
& \leq C\eta_0 e^{-\frac{\nu}{2} t} \sqrt{I(t)}
\end{split}
\end{equation}
(where the last passage follows by the H\"older inequality).
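Explicitly, writing $c_n:=1$ for $n\leq0$ and $c_n:=2^{(\beta-\bar{\theta}_1)n}$ for $n\geq1$, the Cauchy--Schwarz inequality with respect to the weights $2^{2n}\bar{m}_n$ gives
\begin{equation*}
\sum_{n=-\infty}^\infty 2^{2n}\bar{m}_n c_n|y_n-\bar{m}|
\leq \Bigl(\sum_{n=-\infty}^\infty 2^{2n}\bar{m}_n c_n^2\Bigr)^{1/2}\Bigl(\sum_{n=-\infty}^\infty 2^{2n}\bar{m}_n(y_n-\bar{m})^2\Bigr)^{1/2}
= \Bigl(\sum_{n=-\infty}^\infty 2^{2n}\bar{m}_n c_n^2\Bigr)^{1/2}\sqrt{I(t)},
\end{equation*}
and the first factor is bounded uniformly in $t$ thanks to the decay \eqref{nearlystat4} of $\bar{m}_n$.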
In order to obtain an evolution equation for $I(t)$, we observe that, recalling the notation \eqref{linearD} for the discrete derivatives, the equation \eqref{slinear1} can be written in the form
\begin{equation*}
2^{2n}\bar{m}_n(A_M,p(t)) \frac{\,\mathrm{d} y_n}{\,\mathrm{d} t} = D^-_n\Bigl( \bigl\{ 2^{2k}\gamma(2^{k+1+p_{k+1}(t)})\bar{m}_{k+1}(A_M,p(t)) D_k^+(y(t)) \bigr\}_k \Bigr);
\end{equation*}
from this identity, by subtracting $\bar{m}(t)$, multiplying by $(y_n(t)-\bar{m}(t))$, and summing over $n$, a straightforward computation yields
\begin{multline*}
\frac{\,\mathrm{d} I(t)}{\,\mathrm{d} t}
= - 2\sum_{n=-\infty}^\infty 2^{2n} \gamma(2^{n+1+p_{n+1}})\bar{m}_{n+1}(D^+_n(y))^2
+ \sum_{n=-\infty}^\infty 2^{2n} \frac{\,\mathrm{d}}{\,\mathrm{d} t}\Bigl[\bar{m}_n(A_M,p(t))\Bigr] (y_n-\bar{m})^2.
\end{multline*}
Then, by using the Poincar\'e-type inequality in Lemma~\ref{lem:poincare}, \eqref{sprooflinear202}, and \eqref{sprooflinear1bis}, we find
\begin{align} \label{sprooflinear207}
\frac{\,\mathrm{d} I(t)}{\,\mathrm{d} t}
& \leq -\frac{2}{c_0}I(t) + C \eta_0 e^{-\frac{\nu}{2} t} \sum_{n=-\infty}^{\infty} 2^{2n}\max\{ 1,2^{(\beta-\bar{\theta}_1)n} \}\bar{m}_n(A_M,p)(y_n-\bar{m})^2 \nonumber \\
& \leq -\Bigl(\frac{2}{c_0} - C\eta_0 e^{-\frac{\nu}{2} t} \Bigr) I(t) + C\eta_0 e^{-\frac{\nu}{2} t} \sum_{n=N+1}^\infty 2^{(\beta-\bar\theta_1)n}2^{2n}\bar{m}_n(A_M,p)(y_n-\bar{m})^2 \nonumber\\
& \leq -\frac{1}{c_0}I(t) + C\eta_0 e^{-\frac{\nu}{2} t} \Bigl( |\bar{m}(t)|^2 + \|y^0\|_\theta^2 + \sup_{0\leq s\leq t}|y_N(s)|^2 \Bigr),
\end{align}
provided that we choose $\eta_0$ sufficiently small. Hence, recalling the choice \eqref{sprooflinear0},
\begin{equation} \label{sprooflinear204}
\frac{\,\mathrm{d} I(t)}{\,\mathrm{d} t}
\leq -4\nu I(t) + C\eta_0 e^{-\frac{\nu}{2} t} \Bigl( \|y^0\|_\theta^2 + \sup_{0\leq s\leq t}|\bar{m}(s)|^2 + \sup_{0\leq s\leq t}|I(s)| \Bigr).
\end{equation}
Setting
\begin{equation*}
\bar{t} := \sup\Bigl\{ t\geq1 \,:\, |\bar{m}(s)| \leq 2\overline{C} \|y^0\|_\theta, \, |I(s)| \leq 2\overline{C} \|y^0\|_\theta^2 \text{ for all $s\in[1,t]$} \Bigr\},
\end{equation*}
we find that $\bar{t}=\infty$ by a standard continuation argument, using \eqref{sprooflinear201} and the two estimates \eqref{sprooflinear203}--\eqref{sprooflinear204} (which hold for $t\geq1$), and choosing $\eta_0$ small enough; hence
\begin{equation} \label{sprooflinear205}
|\bar{m}(t)| \leq 2\overline{C} \|y^0\|_\theta,
\qquad
|I(t)| \leq 2\overline{C} \|y^0\|_\theta^2
\qquad\text{for all $t>0$.}
\end{equation}
By inserting \eqref{sprooflinear205} into \eqref{sprooflinear204} we find by Gr\"onwall's inequality
\begin{equation} \label{sprooflinear206}
|I(t)| \leq C \|y^0\|_\theta^2 e^{-\frac{\nu}{4} t}
\qquad\text{for all $t>0$,}
\end{equation}
which in turn implies that for every $n_0\in\mathbb N$ there exists a constant $C_{n_0}$, depending on $n_0$, on the kernels, and on the fixed parameter $M$, such that
\begin{equation} \label{sprooflinear3}
\sup_{-n_0\leq n \leq n_0}|y_n(t)-\bar{m}(t)| \leq C_{n_0} \|y^0\|_\theta e^{-\frac{\nu}{8}t} \qquad\text{for every }t>0.
\end{equation}
Moreover, by \eqref{sprooflinear203} we have $|\frac{\,\mathrm{d}\bar{m}}{\,\mathrm{d} t}|\leq C\eta_0 \|y^0\|_\theta e^{-\frac{\nu}{2}t}$ for all $t\geq1$, which yields the existence of the limit $\bar{m}_\infty:=\lim_{t\to\infty}\bar{m}(t)$ and the estimate
\begin{equation} \label{sprooflinear24}
\big| \bar{m}(t_2)-\bar{m}(t_1) \big| \leq C \|y^0\|_\theta e^{-\frac{\nu}{2} t_1} \quad\text{for all $t_2\geq t_1 \geq 1$.}
\end{equation}
In the rest of the proof, $C_{n_0}$ always denotes a constant depending on $n_0$, on the kernels, and on the fixed parameter $M$, possibly changing from line to line.
\smallskip\noindent\textit{Step 3.}
We now want to extend the estimate \eqref{sprooflinear3} to the region $n\leq-n_0$. We first notice that, thanks to \eqref{sprooflinear1} and \eqref{sprooflinear205}, we have for all $n\leq -n_0$
\begin{equation} \label{sprooflinear25}
|y_n(t)-\bar{m}_\infty| \leq C2^{-n}\|y^0\|_\theta \qquad\text{for all $t\leq1$.}
\end{equation}
We next extend \eqref{sprooflinear25} to times $t>1$. This can be achieved by a maximum principle argument, similar to the third step in the proof of \cite[Lemma~A.1]{BNVd}.
For fixed $T>1$ and $\varepsilon>0$ we consider the sequence
\begin{equation*}
z_n(t) := N 2^{-n}e^{-\frac{\nu}{8} t} + \varepsilon 4^{-n},
\end{equation*}
where $N>0$ is a constant to be fixed later. We first observe that
\begin{equation*}
\frac{\,\mathrm{d} z_n}{\,\mathrm{d} t} - \mathscr{L}_n(z;t) = N 2^{-n}e^{-\frac{\nu}{8} t} \Bigl( -\frac{\nu}{8} - \frac{\gamma(2^{n+p_n(t)})}{4} \bigl(1-\sigma_n(t)/2\bigr) \Bigr) - \varepsilon\frac{\gamma(2^{n+p_n(t)})}{4^{n+1}}\Bigl(3-\frac34\sigma_n(t)\Bigr).
\end{equation*}
Recalling the asymptotics \eqref{kernel5bis} and \eqref{slinear8} as $n\to-\infty$ and taking $\eta_0$ sufficiently small, assuming without loss of generality that $\nu<2\gamma_0$, we obtain that $z_n$ is a supersolution for \eqref{slinear1} in the region $n\in(-\infty,-n_0]$, for every sufficiently large $n_0$. Furthermore, for $t=1$
\begin{equation*}
|y_n(1)-\bar{m}_\infty| \leq C2^{-n}\|y^0\|_{\theta} < N 2^{-n} \leq z_n(0) \qquad\text{for all }n\leq -n_0,
\end{equation*}
provided that we choose $N>C\|y^0\|_{\theta}$. By \eqref{sprooflinear3} and \eqref{sprooflinear24}, for $n=-n_0$ we have
\begin{equation*}
|y_{-n_0}(t)-\bar{m}_\infty| \leq (C_{n_0}+C)\|y^0\|_{\theta}e^{-\frac{\nu}{8}t} \leq z_{-n_0}(t) \qquad\text{for every }t\geq1,
\end{equation*}
if we choose $N>2^{-n_0}(C_{n_0}+C) \|y^0\|_{\theta}$.
Finally, by \eqref{sprooflinear1} and \eqref{sprooflinear205} we can choose $n_1>n_0$ sufficiently large, depending on $\varepsilon$ and $T$, such that
\begin{equation*}
|y_{-n_1}(t)-\bar{m}_\infty| \leq (2^{n_1+1}e^{\mu T}+2\overline{C})\|y^0\|_{\theta} \leq \varepsilon4^{n_1} \leq z_{-n_1}(t) \qquad\text{for every }t\in[1,T].
\end{equation*}
Therefore, with the choice $N > \max\{ C, 2^{-n_0}(C_{n_0}+C)\}\|y^0\|_{\theta}$, we can apply the maximum principle in the compact region $(n,t)\in[-n_1,-n_0]\times[1,T]$:
\begin{equation*}
|y_n(t)-\bar{m}_\infty| \leq z_n(t) \qquad\text{for all $n\in[-n_1,-n_0]$ and $t\in[1,T]$.}
\end{equation*}
Letting first $n_1\to\infty$, and then $\varepsilon\to0$, $T\to\infty$, the previous argument shows that
\begin{equation} \label{sprooflinear20}
|y_n(t)-\bar{m}_\infty| \leq N 2^{-n}e^{-\frac{\nu}{8} t} \qquad\text{for all $n\leq -n_0$ and $t\geq1$.}
\end{equation}
By combining the estimates \eqref{sprooflinear3}, \eqref{sprooflinear24}, \eqref{sprooflinear25}, and \eqref{sprooflinear20}, we obtain that for every $n_0\in\mathbb N$ sufficiently large there exists a constant $C_{n_0}$ such that
\begin{equation} \label{sprooflinear21}
|y_n(t)-\bar{m}_\infty| \leq C_{n_0} 2^{-n} \|y^0\|_\theta e^{-\frac{\nu}{8} t} \qquad\text{for all $n\leq n_0$ and $t>0$.}
\end{equation}
\smallskip\noindent\textit{Step 4.}
We eventually investigate the behaviour of solutions to \eqref{slinear1} as $n\to\infty$. From now on we restrict to the region $n\geq n_0$, where $n_0\in\mathbb N$ is a sufficiently large constant. In particular, we are allowed to use the asymptotics \eqref{kernel5}, and we will always assume without loss of generality that
\begin{equation} \label{sprooflinear30}
\gamma(2^n)<\gamma(2^{n+1}),
\qquad
\frac{1}{2} \, 2^{\beta(n-m)} \leq \frac{\gamma(2^n)}{\gamma(2^m)} \leq \frac32 \, 2^{\beta(n-m)}
\qquad
\text{for all $n,m\geq n_0$.}
\end{equation}
For $\ell\in\mathbb Z$, $\ell\geq n_0$, let $\Psi^{(\ell)}_n$ be the solution to the problem
\begin{equation} \label{linearfund1}
\begin{cases}
\frac{\,\mathrm{d} \Psi_n^{(\ell)}}{\,\mathrm{d} t} = \frac{2^{\beta n}}{4}\bigl( \Psi_{n-1}^{(\ell)} - \Psi_n^{(\ell)} \bigr) ,\\
\Psi_n^{(\ell)}(0) = \delta(n-\ell).
\end{cases}
\end{equation}
The functions $\Psi_n^{(\ell)}$ have been explicitly computed in \cite[Lemma~A.3]{BNVd}.
As a particular case of \cite[Lemma~A.3]{BNVd}, there exists a uniform constant $c>0$ such that
\begin{equation} \label{linearfund2}
\big| \Psi_n^{(\ell)}(t) - \Psi_{n+1}^{(\ell)}(t) \big| \leq c 2^{-\beta(n-\ell)}e^{-\frac{2^{\beta\ell}}{4} t} \qquad\text{for all $n\geq\ell\geq n_0$.}
\end{equation}
In particular, the limit $\Psi^{(\ell)}_\infty(t):=\lim_{n\to\infty}\Psi^{(\ell)}_n(t)$ exists, and it satisfies
\begin{equation} \label{linearfund3}
\big| \Psi_n^{(\ell)}(t) - \Psi_{\infty}^{(\ell)}(t) \big| \leq c 2^{-\beta(n-\ell)}e^{-\frac{2^{\beta\ell}}{4} t} \qquad\text{for all $n\geq\ell\geq n_0$.}
\end{equation}
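The existence of the limit and the estimate \eqref{linearfund3} follow from \eqref{linearfund2} by telescoping: for $m>n\geq\ell$,
\begin{equation*}
\big|\Psi_n^{(\ell)}(t)-\Psi_m^{(\ell)}(t)\big|
\leq \sum_{k=n}^{m-1}\big|\Psi_k^{(\ell)}(t)-\Psi_{k+1}^{(\ell)}(t)\big|
\leq c\,e^{-\frac{2^{\beta\ell}}{4}t}\sum_{k=n}^{\infty}2^{-\beta(k-\ell)}
= \frac{c}{1-2^{-\beta}}\,2^{-\beta(n-\ell)}e^{-\frac{2^{\beta\ell}}{4}t},
\end{equation*}
so the sequence is Cauchy in $n$ and the same bound passes to the limit $m\to\infty$ (up to enlarging the constant $c$).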
By means of the fundamental solutions $\Psi_n^{(\ell)}$ we can write a representation formula for the solution to \eqref{slinear1} in the region $n\geq n_0$ in terms of the initial values $y_n^0$ and of the values of the solution for $n=n_0-1$. More precisely, we denote by $p_\infty(t):=\lim_{n\to\infty}p_n(t)$ (which exists by \eqref{asspn}), and we solve the initial/boundary value problem
\begin{equation} \label{sprooflinear31}
\begin{cases}
\frac{\,\mathrm{d} y_n}{\,\mathrm{d} t} = \frac{2^{\beta(n+p_\infty(t))}}{4} \bigl( y_{n-1}-y_n \bigr) + r_n(t) & n\geq n_0, \\
y_n(0) = y_n^0 & n\geq n_0,\\
y_{n_0-1}(t) = \lambda(t) & t>0,
\end{cases}
\end{equation}
where $r_n(t):=r_n^{(1)}(t) + r_n^{(2)}(t)$,
\begin{align*}
r_n^{(1)}(t) &:= \frac14\Bigl[ \gamma(2^{n+p_n(t)}) - 2^{\beta(n+p_\infty(t))} \Bigr] \bigl( y_{n-1}-y_n \bigr), \\
r_n^{(2)}(t) &:= -\frac{\gamma(2^{n+p_n(t)})}{4}\sigma_n(t)\bigl(y_n(t)-y_{n+1}(t)\bigr).
\end{align*}
By rescaling the time variable, that is, by introducing $\tau:=\int_0^t 2^{\beta p_\infty(s)}\,\mathrm{d} s$ and $\tilde{y}_n(\tau):=y_n(t)$ (and defining similarly $\tilde{p}_n(\tau)$, $\tilde{\lambda}(\tau)$, $\tilde{r}_n(\tau)$), the problem \eqref{sprooflinear31} takes the form
\begin{equation} \label{sprooflinear32}
\begin{cases}
\frac{\,\mathrm{d} \tilde{y}_n}{\,\mathrm{d}\tau} = \frac{2^{\beta n}}{4} \bigl( \tilde{y}_{n-1}-\tilde{y}_n \bigr) + 2^{-\beta \tilde{p}_\infty(\tau)}\tilde{r}_n(\tau) & n\geq n_0, \\
\tilde{y}_n(0) = y_n^0 & n\geq n_0,\\
\tilde{y}_{n_0-1}(\tau) = \tilde\lambda(\tau) & \tau>0.
\end{cases}
\end{equation}
Notice that $c_1t\leq \tau \leq c_2t$ for positive constants $c_2>c_1>0$.
By Duhamel's Principle we can write the solution to \varepsilonqref{sprooflinear32}, for all $n\geq n_0$, as
\begin{equation} \label{sprooflinear33}
\begin{split}
\tilde{y}_n(\tau)
& = \frac{2^{\beta n_0}}{4}\int_0^\tau \Psi_n^{(n_0)}(\tau-s)\tilde\lambda(s)\,\mathrm{d} s + \sum_{\ell=n_0}^n \Psi_n^{(\ell)}(\tau)y_\ell^0 \\
& \qquad\qquad + \int_0^\tau \sum_{\ell=n_0}^n \Psi_n^{(\ell)}(\tau-s) 2^{-\beta \tilde{p}_\infty(s)}\tilde{r}_\ell(s)\,\mathrm{d} s \,.
\end{split}
\end{equation}
Using the representation formula \eqref{sprooflinear33}, together with the fact that $\frac{2^{\beta\ell}}{4}\int_0^\infty \Psi^{(\ell)}_n(s)\,\mathrm{d} s =1$ for every $n\geq\ell$ (see \cite[(A.26)]{BNVd}), we can write the difference between $\tilde{y}_n(\tau)$ and $\tilde{y}_{n+1}(\tau)$, $n\geq n_0$, as follows:
\begin{align}\label{sprooflinear33b}
\tilde{y}_n(\tau) & - \tilde{y}_{n+1}(\tau)
= \frac{2^{\beta n_0}}{4} \int_0^\tau \bigl( \Psi_n^{(n_0)}-\Psi_{n+1}^{(n_0)} \bigr) (\tau-s) ( \tilde\lambda(s)-\bar{m}_\infty) \,\mathrm{d} s \nonumber \\
& - \bar{m}_\infty \frac{2^{\beta n_0}}{4} \int_\tau^\infty \bigl( \Psi_n^{(n_0)}-\Psi_{n+1}^{(n_0)} \bigr) (s)\,\mathrm{d} s
+ \sum_{\ell=n_0}^{n+1} \bigl( \Psi_n^{(\ell)}(\tau)-\Psi_{n+1}^{(\ell)}(\tau) \bigr) y_\ell^0 \nonumber \\
& + \int_0^\tau \sum_{\ell=n_0}^{n+1} \bigl( \Psi_n^{(\ell)}-\Psi_{n+1}^{(\ell)} \bigr) (\tau-s) 2^{-\beta\tilde{p}_\infty(s)} \tilde{r}_\ell(s) \,\mathrm{d} s
=: I_1 + I_2 + I_3 + I_4.
\end{align}
We now estimate separately each term on the right-hand side of \eqref{sprooflinear33b}. For a fixed $\tilde{\theta}\in[\theta,\beta]$ as in the statement, we set $w(\tau):=\sup_{n\geq n_0} 2^{\tilde{\theta} n}|\tilde{y}_n(\tau)-\tilde{y}_{n+1}(\tau)|$. It is also convenient to introduce the constant $\Lambda:=\frac{2^{\beta n_0}}{8}$.
Notice that, by \eqref{sprooflinear21}, we have
\begin{equation*}
|\tilde\lambda(\tau)-\bar{m}_\infty| = |y_{n_0-1}(t)-\bar{m}_\infty| \leq C_{n_0}\|y^0\|_\theta e^{-\frac{\nu}{8} \tau} .
\end{equation*}
Combining this estimate with \eqref{linearfund2} we obtain
\begin{equation} \label{sprooflinear34a}
2^{\tilde{\theta}n} |I_1| \leq C_{n_0}\|y^0\|_\theta 2^{(\tilde\theta-\beta) n}\int_0^\tau e^{-\Lambda(\tau-s)}e^{-\frac{\nu}{8} s}\,\mathrm{d} s \leq C_{n_0}\|y^0\|_\theta e^{-\frac{\nu}{8}\tau}.
\end{equation}
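The last inequality in \eqref{sprooflinear34a} uses the elementary bound (valid since $\Lambda=\frac{2^{\beta n_0}}{8}>\frac{\nu}{8}$ for $n_0$ large)
\begin{equation*}
\int_0^\tau e^{-\Lambda(\tau-s)}e^{-\frac{\nu}{8}s}\,\mathrm{d} s
= e^{-\frac{\nu}{8}\tau}\int_0^\tau e^{-(\Lambda-\frac{\nu}{8})(\tau-s)}\,\mathrm{d} s
\leq \frac{e^{-\frac{\nu}{8}\tau}}{\Lambda-\frac{\nu}{8}},
\end{equation*}
together with $2^{(\tilde{\theta}-\beta)n}\leq1$, since $\tilde{\theta}\leq\beta$ and $n\geq n_0\geq0$.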
For the second term $I_2$, using again \eqref{linearfund2} and \eqref{sprooflinear205} we have
\begin{equation} \label{sprooflinear34b}
2^{\tilde{\theta}n} |I_2| \leq C_{n_0}\|y^0\|_\theta 2^{(\tilde\theta-\beta) n} \int_\tau^\infty e^{-\Lambda s}\,\mathrm{d} s \leq C_{n_0}\|y^0\|_\theta e^{-\Lambda\tau}.
\end{equation}
The following estimate is proved in \cite[(A.44)]{BNVd}:
\begin{equation} \label{sprooflinear34c}
2^{\tilde{\theta}n} |I_3|
\leq C \|y^0\|_\theta 2^{(\tilde\theta-\beta) n} \sum_{\ell=n_0}^{n+1} 2^{(\beta-\theta)\ell}e^{-\frac{2^{\beta\ell}}{4}\tau}
\leq C\|y^0\|_\theta \bigl( 1+\tau^{-\frac{\tilde{\theta}-\theta}{\beta}}\bigr) e^{-\Lambda\tau}.
\end{equation}
To bound the term $I_4$ containing the remainder $\tilde{r}_\ell$, we need some preliminary estimates. In view of \eqref{asspn} we have for all $\bar\theta\in(0,\bar{\theta}_1]$, by interpolation,
\begin{equation*}
|\tilde{p}_\ell(\tau)-\tilde{p}_\infty(\tau)|
\leq \sum_{j=\ell}^\infty |\tilde{p}_j(\tau)-\tilde{p}_{j+1}(\tau)|
\leq \sum_{j=\ell}^\infty \Bigl( \eta_02^{-\bar{\theta}_1 j}\tau^{-\frac{\bar{\theta}_1}{\beta}} \Bigr)^{\frac{\bar{\theta}}{\bar{\theta}_1}} \eta_0^{1-\frac{\bar{\theta}}{\bar{\theta}_1}}
\leq C_{\bar{\theta}}\eta_02^{-\bar{\theta}\ell}\tau^{-\frac{\bar{\theta}}{\beta}}.
\end{equation*}
Then, using also \eqref{kernel5} and \eqref{slinear8},
\begin{align*}
2^{-\beta \tilde{p}_\infty(\tau)}|\tilde{r}_\ell^{(1)}(\tau)|
& \leq C \Bigl( |\gamma(2^{\ell+\tilde{p}_\ell(\tau)})-2^{\beta(\ell+\tilde{p}_\ell(\tau))}| + 2^{\beta\ell}|2^{\beta\tilde{p}_\ell(\tau)}-2^{\beta\tilde{p}_\infty(\tau)}| \Bigr) |\tilde{y}_{\ell-1}(\tau)-\tilde{y}_{\ell}(\tau)| \\
& \leq C \Bigl( 2^{\tilde{\beta}\ell} + 2^{\beta\ell}|\tilde{p}_\ell(\tau)-\tilde{p}_\infty(\tau)| \Bigr) |\tilde{y}_{\ell-1}(\tau)-\tilde{y}_{\ell}(\tau)| \\
& \leq C_{\bar{\theta}} \Bigl( 2^{\tilde{\beta}\ell} + \eta_0 2^{(\beta-\bar{\theta})\ell} \tau^{-\frac{\bar{\theta}}{\beta}} \Bigr) |\tilde{y}_{\ell-1}(\tau)-\tilde{y}_{\ell}(\tau)| , \\
2^{-\beta \tilde{p}_\infty(\tau)}|\tilde{r}_\ell^{(2)}(\tau)| &\leq C 2^{\beta\ell}e^{-A_M2^\ell} |\tilde{y}_\ell(\tau)-\tilde{y}_{\ell+1}(\tau)| .
\end{align*}
In turn we obtain, recalling \eqref{linearfund2},
\begin{align*}
2^{\tilde{\theta}n}|I_4|
& \leq C 2^{(\tilde{\theta}-\beta) n} \int_0^\tau \sum_{\ell=n_0}^{n+1} 2^{\beta\ell}e^{-\frac{2^{\beta\ell}}{4}(\tau-s)} \Bigl[ 2^{\tilde{\beta}\ell} + 2^{(\beta-\bar{\theta})\ell}s^{-\frac{\bar{\theta}}{\beta}} \Bigr] |\tilde{y}_{\ell-1}(s)-\tilde{y}_{\ell}(s)| \,\mathrm{d} s \\
& \qquad + C 2^{(\tilde{\theta}-\beta) n} \int_0^\tau \sum_{\ell=n_0}^{n+1} 2^{2\beta\ell}e^{-\frac{2^{\beta\ell}}{4}(\tau-s)} e^{-A_M2^\ell} |\tilde{y}_\ell(s)-\tilde{y}_{\ell+1}(s)| \,\mathrm{d} s \\
& \leq C_{n_0} 2^{(\tilde{\theta}-\beta) n} \int_0^\tau e^{-\Lambda(\tau-s)} \Bigl[ 1 + s^{-\frac{\bar{\theta}}{\beta}} \Bigr] |\tilde{y}_{n_0-1}(s)-\tilde{y}_{n_0}(s)| \,\mathrm{d} s \\
& \qquad + C 2^{(\tilde{\theta}-\beta) n} \int_0^\tau \sum_{\ell=n_0+1}^{n+1} 2^{(\beta-\tilde{\theta})\ell}e^{-\frac{2^{\beta\ell}}{4}(\tau-s)} \Bigl[ 2^{\tilde{\beta}\ell} + 2^{(\beta-\bar{\theta})\ell}s^{-\frac{\bar{\theta}}{\beta}} \Bigr] w(s) \,\mathrm{d} s \\
& \qquad + C 2^{(\tilde{\theta}-\beta) n} \int_0^\tau \biggl( \sum_{\ell=n_0}^{n+1} 2^{(2\beta-\tilde{\theta})\ell} e^{-A_M2^\ell} \biggr) e^{-\Lambda(\tau-s)}w(s) \,\mathrm{d} s \,.
\end{align*}
By using \eqref{sprooflinear21} in the first term, and an estimate similar to \eqref{sprooflinear34c} in the second integral, we end up with
\begin{align} \label{sprooflinear34d}
2^{\tilde{\theta}n}|I_4|
& \leq C_{n_0} \|y^0\|_\theta \int_0^\tau e^{-\Lambda(\tau-s)} \Bigl[ 1 + s^{-\frac{\bar{\theta}}{\beta}}\Bigr] e^{-\frac{\nu}{8} s} \,\mathrm{d} s \nonumber \\
& \quad + C \int_0^\tau \sum_{\ell=n_0+1}^{n+1} e^{-\frac{2^{\beta\ell}}{4}(\tau-s)} \Bigl[ 2^{\tilde{\beta}\ell} + 2^{(\beta-\bar{\theta})\ell}s^{-\frac{\bar{\theta}}{\beta}} \Bigr] w(s) \,\mathrm{d} s
+ C \int_0^\tau e^{-\Lambda(\tau-s)}w(s) \,\mathrm{d} s \nonumber\\
& \leq C_{n_0} \|y^0\|_\theta e^{-\frac{\nu}{8}\tau} + C \int_0^\tau \bigl( 1+ (\tau-s)^{-\frac{\tilde{\beta}}{\beta}} \bigr) e^{-\Lambda(\tau-s)}w(s)\,\mathrm{d} s \nonumber \\
& \quad + C \int_0^\tau \bigl(1+(\tau-s)^{-\frac{\beta-\bar{\theta}}{\beta}} \bigr) s^{-\frac{\bar{\theta}}{\beta}} e^{-\Lambda(\tau-s)}w(s)\,\mathrm{d} s
+ C \int_0^\tau e^{-\Lambda(\tau-s)}w(s) \,\mathrm{d} s .
\end{align}
By inserting \eqref{sprooflinear34a}, \eqref{sprooflinear34b}, \eqref{sprooflinear34c}, \eqref{sprooflinear34d} into \eqref{sprooflinear33b} we find
\begin{multline} \label{sprooflinear35}
w(\tau) \leq C_{n_0}\|y^0\|_\theta \biggl( e^{-\frac{\nu}{8}\tau} + e^{-\Lambda\tau} + \tau^{-\frac{\tilde{\theta}-\theta}{\beta}} e^{-\Lambda\tau} \biggr)
+ C \int_0^\tau \bigl( 1+ (\tau-s)^{-\frac{\tilde{\beta}}{\beta}} \bigr) e^{-\Lambda(\tau-s)}w(s)\,\mathrm{d} s \\
+ C \int_0^\tau \bigl( 1+ (\tau-s)^{-\frac{\beta-\bar{\theta}}{\beta}} \bigr) s^{-\frac{\bar{\theta}}{\beta}} e^{-\Lambda(\tau-s)} w(s)\,\mathrm{d} s
+ C \int_0^\tau e^{-\Lambda(\tau-s)}w(s) \,\mathrm{d} s .
\end{multline}
Thanks to this estimate, the exponential-in-time decay of $w(\tau)$ can be obtained by means of a Gr\"onwall-type argument. We first consider small times $0<\tau<1$: in this case \eqref{sprooflinear35} becomes
\begin{equation*}
w(\tau) \leq C_{n_0} \|y^0\|_\theta \tau^{-\frac{\tilde{\theta}-\theta}{\beta}} + C \int_0^\tau \Bigl( 1 + (\tau-s)^{-\frac{\tilde{\beta}}{\beta}} + s^{-\frac{\bar{\theta}}{\beta}} + (\tau-s)^{-\frac{\beta-\bar{\theta}}{\beta}}s^{-\frac{\bar{\theta}}{\beta}} \Bigr)w(s)\,\mathrm{d} s \,,
\end{equation*}
hence
\begin{equation} \label{sprooflinear36}
w(\tau) \leq C_{n_0}\|y^0\|_\theta \tau^{-\frac{\tilde{\theta}-\theta}{\beta}} \qquad\text{for all $0<\tau<1$.}
\end{equation}
For $\tau\geq1$, we set $\Psi(\tau):=\sup_{1\leq s\leq \tau}e^{\frac{\nu}{8}s}w(s)$. Choosing $\bar{\theta}\in(0,\beta)$ such that $\frac{\tilde{\theta}-\theta+\bar{\theta}}{\beta}<1$, we obtain from \eqref{sprooflinear35}
\begin{equation*}
\begin{split}
\Psi(\tau)
&\leq C_{n_0}\|y^0\|_\theta \\
& \qquad + C_{n_0}\|y^0\|_\theta\sup_{1\leq s\leq\tau} e^{\frac{\nu}{8} s}\int_0^1\Bigl[ 1+(s-r)^{-\frac{\tilde{\beta}}{\beta}} + r^{-\frac{\bar{\theta}}{\beta}} + (s-r)^{-\frac{\beta-\bar{\theta}}{\beta}} r^{-\frac{\bar{\theta}}{\beta}} \Bigr] e^{-\Lambda(s-r)} r^{-\frac{\tilde{\theta}-\theta}{\beta}}\,\mathrm{d} r \\
& \qquad + C \Psi(\tau) \sup_{1\leq s \leq\tau} \int_1^s \Bigl[ 1+(s-r)^{-\frac{\tilde{\beta}}{\beta}} + (s-r)^{-\frac{\beta-\bar{\theta}}{\beta}} \Bigr] e^{-(\Lambda-\nu/8)(s-r)}\,\mathrm{d} r \\
& \leq C_{n_0}\|y^0\|_\theta + C\Psi(\tau) \int_0^\infty \Bigl[ 1+ t^{-\frac{\tilde{\beta}}{\beta}} + t^{-\frac{\beta-\bar{\theta}}{\beta}} \Bigr] e^{-(\Lambda-\nu/8)t}\,\mathrm{d} t \\
& \leq C_{n_0}\|y^0\|_\theta + C\Bigl( \frac{1}{\Lambda-\nu/8} + \frac{\Gamma(1-\frac{\tilde{\beta}}{\beta})}{(\Lambda-\nu/8)^{1-\tilde{\beta}/\beta}} + \frac{\Gamma(\frac{\bar{\theta}}{\beta})}{(\Lambda-\nu/8)^{\bar{\theta}/\beta}} \Bigr) \Psi(\tau) .
\end{split}
\end{equation*}
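For the reader's convenience, we record the elementary identity used in the last inequality,
\begin{equation*}
\int_0^\infty t^{-a} e^{-\lambda t} \,\mathrm{d} t = \frac{\Gamma(1-a)}{\lambda^{1-a}} \qquad\text{for $0\leq a<1$ and $\lambda>0$,}
\end{equation*}
applied here with $\lambda=\Lambda-\nu/8$ and $a\in\bigl\{0,\tfrac{\tilde{\beta}}{\beta},\tfrac{\beta-\bar{\theta}}{\beta}\bigr\}$.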
Recalling that $\Lambda=\frac{2^{\beta n_0}}{8}$, by choosing $n_0$ sufficiently large we obtain $\Psi(\tau)\leq C_{n_0}\|y^0\|_\theta$ and, in turn, $w(\tau)\leq C_{n_0}\|y^0\|_\theta e^{-\frac{\nu}{8}\tau}$ for all $\tau\geq1$. Combining this estimate with \eqref{sprooflinear36} we conclude
\begin{equation*}
w(\tau) \leq C_{n_0}\|y^0\|_\theta \bigl( 1+\tau^{-\frac{\tilde{\theta}-\theta}{\beta}} \bigr) e^{-\frac{\nu}{8}\tau} \qquad\text{for all $\tau>0$.}
\end{equation*}
By rescaling the time, going back to the original variables, and bearing in mind the estimate \eqref{sprooflinear21}, we obtain
\begin{equation} \label{sprooflinear37}
\| y_n(t)-y_{n+1}(t)\|_{\tilde{\theta}} \leq C_{n_0} \|y^0\|_\theta \bigl( 1+ t^{-\frac{\tilde{\theta}-\theta}{\beta}}\bigr) e^{-\frac{\nu}{8} t}.
\varepsilonnd{equation}
\smallskip\noindent\textit{Step 5.}
The last step of the proof consists in improving the exponent of the exponential in \eqref{sprooflinear37}, in order to obtain the optimal decay. To do this, we go back to Step 2 and observe that, thanks to \eqref{sprooflinear3} and \eqref{sprooflinear37}, for all $t\geq1$ and $n\geq n_0$
\begin{equation*}
|y_n(t)-\bar{m}(t)| \leq |y_{n_0}(t)-\bar{m}(t)| + \sum_{j=n_0}^{n-1}|y_j(t)-y_{j+1}(t)| \leq C_{n_0}\|y^0\|_\theta e^{-\frac{\nu}{8}t}.
\end{equation*}
Then, inserting this estimate into \eqref{sprooflinear207}, we obtain
\begin{align*}
\bigg| \frac{\,\mathrm{d} I(t)}{\,\mathrm{d} t} \bigg|
& \leq -4\nu I(t) + C\eta_0 e^{-\frac{\nu}{2} t} \sum_{n=n_0+1}^\infty 2^{(\beta-\bar\theta_1)n}2^{2n}\bar{m}_n(A_M,p)(y_n-\bar{m})^2 \nonumber\\
& \leq -4\nu I(t) + C_{n_0}\eta_0\|y^0\|_\theta^2 e^{-\frac{\nu}{4}t} e^{-\frac{\nu}{2} t},
\end{align*}
so that Gr\"onwall inequality yields $|I(t)|\leq C\|y^0\|_\theta^2e^{-\frac{\nu}{2}t}$. Hence we have improved the exponent in the estimate \varepsilonqref{sprooflinear206}, and repeating the steps 3 and 4 we obtain that \varepsilonqref{sprooflinear37} holds with $\frac{\nu}{4}$ in place of $\frac{\nu}{8}$. By iterating this argument, we eventually obtain the desired decay
\begin{equation} \label{sprooflinear38}
\| y_n(t)-y_{n+1}(t)\|_{\tilde{\theta}} \leq C_{n_0} \|y^0\|_\theta t^{-\frac{\tilde{\theta}-\theta}{\beta}}e^{-\nu t},
\end{equation}
that is, \eqref{slinear6b}. This also shows the existence of the limit $y_\infty(t):=\lim_{n\to\infty}y_n(t)$ and the estimate \eqref{slinear6}.
\end{proof}
The following discrete Poincar\'e-type inequality is used in the first step of the proof of Theorem~\ref{thm:slinear}.
\begin{lemma}\label{lem:poincare}
With the notation introduced in the proof of Theorem~\ref{thm:slinear}, there exists a constant $c_0>0$ (depending on $M$) such that for every $t>0$
\begin{equation*}
\sum_{n=-\infty}^\infty 2^{2n}\bar{m}_n(A_M,p) (y_n-\bar{m})^2 \leq c_0 \sum_{n=-\infty}^\infty 2^{2n} \gamma(2^{n+1+p_{n+1}})\bar{m}_{n+1}(A_M,p)(D^+_n(y))^2 \,.
\varepsilonnd{equation*}
\end{lemma}
\begin{proof}
The proof can be obtained by adapting the corresponding result in \cite[Lemma~A.2]{BNVd}, with minor changes.
\end{proof}
\noindent
{\bf Acknowledgments.}
The authors acknowledge support through the CRC 1060 \textit{The mathematics of emergent effects} at the University of Bonn that is funded through the German Science Foundation (DFG).
\end{document}
\begin{document}
\title[Three results on Frobenius categories]
{Three results on Frobenius categories}
\author[Xiao-Wu Chen] {Xiao-Wu Chen}
\thanks{}
\subjclass{18E10, 16G50, 18E30}
\date{\today}
\keywords{Frobenius category, Cohen-Macaulay module,
weighted projective line, matrix factorization, minimal monomorphism}
\thanks{This project was supported by Alexander von Humboldt
Stiftung and National Natural Science Foundation of China
(No.10971206).}
\thanks{E-mail: xwchen$\symbol{64}$mail.ustc.edu.cn}
\maketitle
\dedicatory{}
\commby{}
\begin{abstract}
This paper consists of three results on Frobenius categories: (1) we
give sufficient conditions on when a factor category of a Frobenius
category is still a Frobenius category; (2) we show that any
Frobenius category is equivalent to an extension-closed exact
subcategory of the Frobenius category formed by Cohen-Macaulay
modules over some additive category; this is an analogue of
Gabriel-Quillen's embedding theorem for Frobenius categories; (3) we
show that under certain conditions an exact category with enough
projective and enough injective objects allows a natural new exact
structure, with which the given category becomes Frobenius. Several applications
of the results are discussed.
\end{abstract}
\section{Introduction}
Recently, Ringel and Schmidmeier have intensively studied the classification
problem in the (graded) submodule category over the truncated
polynomial algebra $k[t]/{(t^p)}$ (\cite{RS08}). Here, $k$ is a
field and $p\geq 1$ is a natural number. This problem goes back to
Birkhoff and has been studied by Arnold and by Simson. For an account of
the history, we refer to \cite{RS08}. The complexity of this
classification problem depends on the parameter $p$. According to
whether $p<6$, $p=6$ or $p>6$, the classification problem turns out to be
finite, tame or wild, respectively. We denote by
$\mathcal{S}(\widetilde{p})$ the graded submodule category which is
called the category of Ringel-Schmidmeier in \cite{Ch09}. It has a
natural exact structure and becomes an exact category in the sense
of Quillen; moreover, it is a Frobenius category.
In more recent work (\cite{KLM2}), Kussin, Lenzing and Meltzer give
a surprising link between the category $\mathcal{S}(\widetilde{p})$
of Ringel-Schmidmeier and the category of vector bundles on the
weighted projective line of type $(2, 3, p)$. To be more precise,
let $\mathbb{X}$ be the weighted projective line of type $(2, 3, p)$
in the sense of Geigle and Lenzing (\cite{GL}). Denote by ${\rm
vect}\; \mathbb{X}$ the category of vector bundles on $\mathbb{X}$.
It has a natural exact structure such that it is a Frobenius
category (\cite{KLM1}). Following \cite{KLM2} we denote by
$\mathcal{F}$ the additive closure of the so-called fading line
bundles. Consider the factor category ${\rm vect}\;
\mathbb{X}/{[\mathcal{F}]}$ of ${\rm vect}\; \mathbb{X}$ modulo
those morphisms factoring through $\mathcal{F}$. One of the main
results in \cite{KLM2} states that there is an equivalence of
categories between ${\rm vect}\; \mathbb{X}/{[\mathcal{F}]}$ and
$\mathcal{S}(\widetilde{p})$; also see Example
\ref{exm:KLMcontinued}. From this equivalence the authors recover
some major results in \cite{RS08} via certain known results on
vector bundles over the weighted projective lines. Note that
according to whether $p< 6$, $p=6$ or $p>6$ the weighted projective line
$\mathbb{X}$ is domestic, tubular or wild, respectively.
We have noted above that the category $\mathcal{S}(\widetilde{p})$
of Ringel-Schmidmeier is Frobenius. Hence via the equivalence
mentioned above one infers that the factor category ${\rm vect}\;
\mathbb{X}/{[\mathcal{F}]}$ is also a Frobenius category; see
\cite[Theorem A]{KLM2}. However, a direct argument for this surprising
fact seems to be missing. More generally, one may ask when a factor
category of a Frobenius category is still Frobenius. This is one of
the motivations of the present paper. Another motivation is to understand
the minimal monomorphism in the sense of Ringel-Schmidmeier (\cite{RS06}),
which plays an important role in the study of Auslander-Reiten sequences in submodule categories.
The paper, which mainly consists of three results on Frobenius
categories, is organized as follows. We collect in Section 2 some basic facts and
notions on exact categories and Frobenius categories. In Section 3
we give sufficient conditions on when a factor category of a
Frobenius category is still Frobenius; see Theorem \ref{thm:partI}.
We apply the obtained result to recover \cite[Theorem A]{KLM2},
modulo a certain technical fact which is somehow hidden in
\cite{KLM2}. We also apply the result to the Frobenius category of
matrix factorizations. In Section 4 we prove a general result on
Frobenius categories: each Frobenius category is equivalent, as
exact categories, to an extension-closed exact subcategory of the
Frobenius category formed by Cohen-Macaulay modules over some
additive category; see Theorem \ref{thm:partII}. This can be viewed
as an analogue of Gabriel-Quillen's embedding theorem for Frobenius
categories. We observe that the category
$\mathcal{S}(\widetilde{p})$ of Ringel-Schmidmeier can be viewed as
the category of Cohen-Macaulay modules over some graded algebras
(\cite{Ch09}). Together with this observation and the result in
Section 3, our general result recovers a part of \cite[Theorem
C]{KLM2}. In Section 5 we give sufficient conditions such that on an
exact category with enough projective and enough injective objects
there exists another natural exact structure, with which the given
category becomes Frobenius; see Theorem \ref{thm:partIII}. An
application of this result allows us to interpret the minimal
monomorphism operation in \cite{RS06} as a triangle functor, which
is right adjoint to an inclusion triangle functor.
\section{Preliminaries on exact categories}
In this section we collect some basic facts and notions on exact
categories and Frobenius categories. The basic reference
is \cite[Appendix A]{Ke3}. For a systematic treatment of
exact categories, we refer to \cite{Bu10}.
\vskip 5pt
Let $\mathcal{A}$ be an additive category. A \emph{composable pair}
of morphisms is a sequence $X \stackrel{i}\rightarrow Y \stackrel{d}\rightarrow Z$; such a composable pair is denoted by $(i, d)$. Two
composable pairs $(i, d)$ and $(i', d')$ are \emph{isomorphic}
provided that there are isomorphisms $f\colon X\rightarrow X'$,
$g\colon Y\rightarrow Y'$ and $h\colon Z\rightarrow Z'$ such that
$g\circ i=i'\circ f$ and $h\circ d=d' \circ g$. A composable pair
$(i, d)$ is called a \emph{kernel-cokernel pair} provided that
$i={\rm Ker}\; d$ and $d={\rm Cok}\; i$.
An \emph{exact structure} on an additive category $\mathcal{A}$
is a chosen class $\mathcal{E}$ of kernel-cokernel pairs in
$\mathcal{A}$, which is closed under isomorphisms and
is subject to the following axioms (Ex0), (Ex1), (Ex1)$^{\rm op}$,
(Ex2) and (Ex2)$^{\rm op}$. A pair $(i, d)$ in the chosen class
$\mathcal{E}$ is called a \emph{conflation}, while $i$ is called an
\emph{inflation} and $d$ is called a \emph{deflation}. The pair
$(\mathcal{A}, \mathcal{E})$ is called an \emph{exact category} in
the sense of Quillen (\cite{Qui73}); sometimes we suppress the class
$\mathcal{E}$ and just say that $\mathcal{A}$ is an exact category.
Following \cite[Appendix A]{Ke3}, the axioms of exact category are
listed as follows:
\begin{enumerate}
\item[(Ex0) \; ] the identity morphism of the zero object is a deflation;
\item[(Ex1) \; ] a composition of two deflations is a deflation;
\item[(Ex1)$^{\rm op}$] a composition of two inflations is an
inflation;
\item[(Ex2) \; ] for a deflation $d\colon Y \rightarrow Z$ and a
morphism $f\colon Z'\rightarrow Z$ there exists a pullback diagram
such that $d'$ is a deflation:
\[\xymatrix{ Y' \ar@{.>}[r]^{d'} \ar@{.>}[d]^-{f'} & Z' \ar[d]^{f} \\
Y \ar[r]^-{d} & Z}\]
\item[(Ex2)$^{\rm op}$] for an inflation $i\colon X \rightarrow Y$ and a
morphism $f\colon X\rightarrow X'$ there exists a pushout diagram
such that $i'$ is an inflation:
\[\xymatrix{ X \ar[r]^-{i} \ar[d]^-{f} & Y \ar@{.>}[d]^-{f'} \\
X' \ar@{.>}[r]^-{i'} & Y'}\]
\end{enumerate}
Let us remark that the axiom (Ex1)$^{\rm op}$ can be deduced from
the other axioms; see \cite[Appendix A]{Ke3}.
For an exact category $\mathcal{A}$, a full additive subcategory
$\mathcal{B}\subseteq \mathcal{A}$ is said to be
\emph{extension-closed} provided that for any conflation
$X \stackrel{i}\rightarrow Y \stackrel{d}\rightarrow Z$ with $X,
Z\in \mathcal{B}$ we have $Y\in \mathcal{B}$. In this case, the
subcategory $\mathcal{B}$ inherits the exact structure from
$\mathcal{A}$ to become an exact category. We will call such a
subcategory an \emph{extension-closed exact subcategory}. Observe
that any abelian category has a natural exact structure such that
conflations are induced by short exact sequences. Consequently, any
full additive subcategory in an abelian category which is closed
under extensions has a natural exact structure and then becomes an
exact category.
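For instance (a minimal example, recorded here only for orientation), on any additive category $\mathcal{A}$ the class of kernel-cokernel pairs isomorphic to the split pairs
$$X \stackrel{\binom{{\rm Id}_X}{0}}\longrightarrow X\oplus Z \stackrel{(0,\; {\rm Id}_Z)}\longrightarrow Z$$
is an exact structure, and it is the smallest one: such split pairs belong to every exact structure on $\mathcal{A}$; compare \cite{Bu10}.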
Recall that an additive functor $F\colon \mathcal{B}\rightarrow
\mathcal{A}$ between two exact categories is called \emph{exact}
provided that it sends conflations to conflations; an exact functor
$F\colon \mathcal{B}\rightarrow \mathcal{A}$ is said to be an
\emph{equivalence of exact categories} provided that $F$ is an
equivalence and there exists a quasi-inverse of $F$ which is exact.
From now on $\mathcal{A}$ is an exact category. We will need the
following two facts. For the first fact, we refer to the first step
in the proof of \cite[Proposition A.1]{Ke3}; for the second one, we
refer to the axiom c) in the proof of \cite[Proposition A.1]{Ke3}.
\begin{lem}\label{lem:lem1}
Consider the diagram in {\rm (Ex2)}. Then the sequence
$$Y' \stackrel{\binom{d'}{-f'}}\longrightarrow Z'\oplus Y \stackrel{(f, d)}\longrightarrow Z$$ is a conflation and we have a commutative
diagram such that the two rows are conflations:
\[\xymatrix{ X\ar@{=}[d] \ar[r]^{i'} & Y' \ar[r]^{d'} \ar[d]^-{f'} & Z' \ar[d]^{f} \\
X\ar[r]^-{f'\circ i'} & Y \ar[r]^-{d} & Z}\]
\end{lem}
\begin{lem}\label{lem:lem2}
Let $d$ be a morphism such that $d\circ e$ is a deflation for some
morphism $e$. Assume further that $d$ has a kernel. Then $d$ is a
deflation.
$\square$
\end{lem}
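Let us point out that the statement of Lemma \ref{lem:lem2} is often referred to as the ``obscure axiom''; compare \cite{Bu10}.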
Recall that an object $P$ in $\mathcal{A}$ is \emph{projective}
provided that the functor ${\rm Hom}_\mathcal{A}(P, -)$ sends
conflations to short exact sequences; this is equivalent to the condition that any
deflation ending at $P$ splits. The exact category $\mathcal{A}$ is
said to \emph{have enough projective objects} provided that each
object $X$ fits into a deflation $d\colon P\rightarrow X$ with $P$
projective. Dually one has the notions of \emph{injective object}
and \emph{having enough injective objects}.
An exact category $\mathcal{A}$ is said to be \emph{Frobenius}
provided that it has enough projective and enough injective objects,
and the class of projective objects coincides with the class of
injective objects (\cite[Section 3]{He60}). The importance of
Frobenius categories lies in that they give rise naturally to
triangulated categories; see \cite{Ha1} and \cite[1.2]{Ke3}.
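A basic example, recorded here only for orientation: if $\Lambda$ is a finite dimensional self-injective algebra over a field (for instance $\Lambda=k[t]/{(t^p)}$), then the abelian category of finite dimensional $\Lambda$-modules, with all short exact sequences as conflations, is a Frobenius category, since for such an algebra the classes of projective and injective modules coincide.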
The following notion will be convenient for us: for a Frobenius
category $\mathcal{A}$, an extension-closed exact subcategory
$\mathcal{B}\subseteq \mathcal{A}$ is said to be \emph{admissible}
provided that each object $B$ in $\mathcal{B}$ fits into conflations
$B\rightarrow P\rightarrow B'$ and $B''\rightarrow Q\rightarrow B$
in $\mathcal{B}$ such that $P, Q$ are projective in $\mathcal{A}$.
Note that an admissible subcategory $\mathcal{B}$ of a Frobenius
category $\mathcal{A}$ is still Frobenius; moreover, an object $B$
in $\mathcal{B}$ is projective if and only if it is projective
viewed as an object in $\mathcal{A}$.
\section{Factor category of Frobenius category}
In this section we study a certain factor category of a Frobenius
category. We give sufficient conditions on when the factor category
inherits the exact structure from the given Frobenius category such
that it becomes a Frobenius category. As an application, our result
specializes to \cite[Theorem A]{KLM2} modulo certain technical
results which are somehow hidden in \cite{KLM2}. We give an example to apply
our result to the category of matrix factorizations (\cite{Ei80}).
\vskip 5pt
Let $(\mathcal{A}, \mathcal{E})$ be a Frobenius category. Denote by
$\mathcal{P}$ the full subcategory consisting of projective objects.
Let $\mathcal{F}\subseteq \mathcal{P}$ be a full additive
subcategory. For two objects $X, Y$ in $\mathcal{A}$ denote by
$[\mathcal{F}](X, Y)$ the subgroup of ${\rm Hom}_\mathcal{A}(X, Y)$
consisting of those morphisms which factor through an object in
$\mathcal{F}$. Denote by $\mathcal{A}/[\mathcal{F}]$ the
\emph{factor category} of $\mathcal{A}$ modulo $\mathcal{F}$: the
objects are the same as the ones in $\mathcal{A}$, for two objects
$X$ and $Y$ the Hom space is given by the quotient group ${\rm Hom}_\mathcal{A}(X,
Y)/[\mathcal{F}](X, Y)$ and the composition is induced by the one in
$\mathcal{A}$; compare \cite[p.101]{ARS}. Note that the factor
category $\mathcal{A}/[\mathcal{F}]$ is an additive category.
Denote by $\pi_\mathcal{F}\colon \mathcal{A}\rightarrow
\mathcal{A}/[\mathcal{F}]$ the canonical functor. Denote by
$\mathcal{E}_\mathcal{F}$ the class of composable pairs in
$\mathcal{A}/[\mathcal{F}]$ which are isomorphic to composable
pairs $(\pi_\mathcal{F}(i), \pi_\mathcal{F}(d))$ for $(i, d)\in
\mathcal{E}$.
The case $\mathcal{F}=\mathcal{P}$ is of particular interest, since
the corresponding factor category, known as the \emph{stable
category} of $\mathcal{A}$ and denoted by
$\underline{\mathcal{A}}$, has a natural triangulated structure. In
this case the canonical functor $\pi_\mathcal{P}\colon
\mathcal{A}\rightarrow \underline{\mathcal{A}}$ sends conflations to
exact triangles. For details, see \cite[Chapter I, Section 2]{Ha1}.
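For the reader's convenience, we sketch how the triangles arise (following \cite{Ha1}): for each object $X$ one chooses a conflation $X\rightarrow I(X)\rightarrow \Sigma X$ with $I(X)$ injective; the assignment $X\mapsto \Sigma X$ induces an auto-equivalence of $\underline{\mathcal{A}}$. Given a conflation $X\stackrel{i}\rightarrow Y\stackrel{d}\rightarrow Z$, the morphism $X\rightarrow I(X)$ extends along the inflation $i$ since $I(X)$ is injective, and the induced morphism $Z\rightarrow \Sigma X$ on cokernels completes $\pi_\mathcal{P}(i)$ and $\pi_\mathcal{P}(d)$ to an exact triangle $X\rightarrow Y\rightarrow Z\rightarrow \Sigma X$ in $\underline{\mathcal{A}}$.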
We are interested in the following question: when does the factor
category $\mathcal{A}/[\mathcal{F}]$ become a Frobenius category
whose exact structure is given by $\mathcal{E}_\mathcal{F}$?
Note that in general the case $\mathcal{F}=\mathcal{P}$ will not
meet the requirement. The aim of this section is to give a partial
answer to this question.
\vskip 5pt
Recall that a \emph{pseudo-cokernel} of a morphism $f\colon
X\rightarrow Y$ is a morphism $c\colon Y\rightarrow C$ such that
$c\circ f=0$ and any morphism $c'\colon Y\rightarrow C'$ with
$c'\circ f=0$ factors through $c$. Dually one has the notion of
\emph{pseudo-kernel}; see \cite[Section 2]{AS81}.
Recall that for a subcategory $\mathcal{S}$ of $\mathcal{A}$, a
morphism $f\colon S \rightarrow X$ is said to be a \emph{right
$\mathcal{S}$-approximation} of $X$ provided that $S\in \mathcal{S}$
and any morphism from an object in $\mathcal{S}$ to $X$ factors
through $f$. Dually one has the notion of \emph{left
$\mathcal{S}$-approximation}; see \cite[Section 1]{AR91}.
\vskip 5pt
Our first result is as follows, which gives sufficient conditions on
when the pair $(\mathcal{A}/[\mathcal{F}], \mathcal{E}_\mathcal{F})$
is a Frobenius category.
\begin{thm}\label{thm:partI}
Let $(\mathcal{A}, \mathcal{E})$ be a Frobenius category and let
$\mathcal{P}$ denote the subcategory of projective objects. Suppose
that $\mathcal{F}\subseteq \mathcal{P}$ satisfies the following
conditions:
\begin{enumerate}
\item any object $A$ in $\mathcal{A}$ fits into a sequence
$$A \stackrel{i_A}\longrightarrow F_A \stackrel{p_A}\longrightarrow
P_A$$ such that $i_A$ is a left $\mathcal{F}$-approximation of $A$,
$P_A\in \mathcal{P}$ and $p_A$ is a pseudo-cokernel of $i_A$;
\item any object $A$ in $\mathcal{A}$ fits into a sequence
$$P^A \stackrel{i^A}\longrightarrow F^A \stackrel{p^A}\longrightarrow
A$$
such that $p^A$ is a right $\mathcal{F}$-approximation of $A$,
$P^A\in \mathcal{P}$ and $i^A$ is a pseudo-kernel of $p^A$.
\end{enumerate}
Then the pair $(\mathcal{A}/[\mathcal{F}], \mathcal{E}_\mathcal{F})$
is a Frobenius category.
\end{thm}
\begin{proof} In the proof, we write $\pi_\mathcal{F}$ as $\pi$. We
will divide the proof into three steps.
\vskip 3pt
\emph{Step 1.} We will first show that the composable pairs
in $\mathcal{E}_\mathcal{F}$ are kernel-cokernel pairs. It suffices
to show that for any conflation $X \stackrel{i}\rightarrow Y \stackrel{d}\rightarrow Z$ in $\mathcal{A}$ we have $\pi(i)={\rm
Ker}\; \pi(d)$ and $\pi(d)={\rm Cok}\; \pi(i)$. We will only show
that $\pi(i)={\rm Ker}\; \pi(d)$, and the remaining equality is shown by a
dual argument.
To show that $\pi(i)$ is mono, it suffices to show that any morphism
$a\colon A \rightarrow X$ in $\mathcal{A}$ having the property
$i\circ a\in [\mathcal{F}](A, Y)$ necessarily lies in
$[\mathcal{F}](A, X)$. Consider the sequence in (1) for $A$. Since
$i\circ a\colon A\rightarrow Y$ factors through an object in
$\mathcal{F}$ and $i_A\colon A\rightarrow F_A$ is a left
$\mathcal{F}$-approximation, there is a morphism $t\colon
F_A\rightarrow Y$ such that $i\circ a=t\circ i_A$. Using that $p_A$
is a pseudo-cokernel of $i_A$, we have a morphism $s\colon
P_A\rightarrow Z$ making the following diagram commute
\[\xymatrix{ A \ar[d]^-a \ar[r]^-{i_A} & F_A \ar@{.>}[d]^{t} \ar[r]^-{p_A} & P_A
\ar@{.>}[d]^-{s}\\
X \ar[r]^-i & Y \ar[r]^-d & Z}\] Since $P_A$ is projective and $(i,
d)$ is a conflation, we may lift $s$ to a morphism $s'\colon
P_A\rightarrow Y$ such that $d\circ s'=s$. Then we have
$$d\circ
(t-s'\circ p_A)=d\circ t-s\circ p_A=0.$$ Since $i={\rm Ker}\;d $,
there exists $a'\colon F_A\rightarrow X$ such that $i\circ
a'=t-s'\circ p_A$. Composing the two sides with $i_A$, we get
$i\circ a'\circ i_A=t\circ i_A=i\circ a$. Note that $i$ is mono and
$F_A\in \mathcal{F}$. Then we have $a=a'\circ i_A$ and it lies in
$[\mathcal{F}](A, X)$.
Having shown that $\pi(i)$ is mono, it suffices to show that
$\pi(i)$ is a pseudo-kernel of $\pi(d)$. Then we have $\pi(i)={\rm
Ker}\; \pi(d)$. To this end, take a morphism $a\colon A\rightarrow
Y$ such that $d\circ a\in [\mathcal{F}](A, Z)$. We will show that
$\pi(a)$ factors through $\pi(i)$. Assume that $d\circ a$ factors as
$A \stackrel{x}\rightarrow F \stackrel{y}\rightarrow Z$ with $F\in
\mathcal{F}$. Since $F$ is projective and $(i, d)$ is a conflation,
we may lift $y$ to a morphism $y'\colon F\rightarrow Y$ such that
$d\circ y'=y$. Then we have
$$d\circ (a-y'\circ x)=d\circ a-y\circ
x=0.$$
Hence there exists a morphism $a'\colon A\rightarrow X$ such
that $a-y'\circ x=i\circ a'$. Note that $F\in \mathcal{F}$. Applying
$\pi$ we get $\pi(a)=\pi(i)\circ \pi(a')$.
\vskip 3pt
\emph{Step 2.} We will show next that the pair
$(\mathcal{A}/[\mathcal{F}], \mathcal{E}_\mathcal{F})$ is an exact
category. Note that by definition a morphism $\delta \colon
\pi(Y)\rightarrow \pi(Z)$ is a deflation if and only if there exist
morphisms $a\colon Y\rightarrow Y'$ and $b\colon Z'\rightarrow Z$
such that $\pi(a)$ and $\pi(b)$ are isomorphisms, and a deflation
$d\colon Y'\rightarrow Z'$ in $\mathcal{A}$ such that we have a
\emph{factorization} $\delta=\pi(b)\circ \pi(d)\circ \pi(a)$. The axiom
(Ex0) is trivial.
To show (Ex1), assume that we are given two deflations $\delta\colon
\pi(Y)\rightarrow \pi(Z)$ and $\gamma\colon \pi(Z)\rightarrow
\pi(W)$ in $\mathcal{A}/[\mathcal{F}]$. We may assume that $\delta$
and $\gamma$ factor as $\pi(Y) \stackrel{\pi(a)}\rightarrow \pi(Y') \stackrel{\pi(d)}\rightarrow \pi(Z') \stackrel{\pi(b)}\rightarrow \pi(Z)$
and $\pi(Z) \stackrel{\pi(x)}\rightarrow \pi(Z'') \stackrel{\pi(e)}\rightarrow \pi(W') \stackrel{\pi(y)}\rightarrow \pi(W)$, respectively. Here $d\colon Y'\rightarrow Z'$ and
$e\colon Z''\rightarrow W'$ are deflations in $\mathcal{A}$. Take a
morphism $z\colon Z''\rightarrow Z'$ such that $\pi(z)=(\pi(x)\circ
\pi(b))^{-1}$. By (Ex2) we have the pullback diagram in $\mathcal{A}$
\[\xymatrix{
Y'' \ar@{.>}[r]^-{d'} \ar@{.>}[d]^-{z'} & Z'' \ar[d]^{z}\\
Y'\ar[r]^-{d} & Z'
}\]
such that $d'$ is a deflation. By Lemma \ref{lem:lem1} the sequence
$Y'' \stackrel{\binom{d'}{-z'}}\rightarrow Z''\oplus Y' \stackrel{(z, d)}\rightarrow Z'$ is a conflation in $\mathcal{A}$. By the first step, applying $\pi$ to this sequence we
get a kernel-cokernel pair in the factor category $\mathcal{A}/[\mathcal{F}]$.
In particular, the diagram above is
still a pullback diagram in $\mathcal{A}/[\mathcal{F}]$. Hence the
fact that $\pi(z)$ is an isomorphism implies that $\pi(z')$ is also an
isomorphism. Take a morphism $a'\colon Y'\rightarrow Y''$ in $\mathcal{A}$
such that $\pi(a')=\pi(z')^{-1}$.
Then $\gamma \circ \delta$ factors as $$\pi(Y) \stackrel{\pi(a'\circ a)}\longrightarrow \pi(Y'') \stackrel{\pi(e\circ d')}\longrightarrow \pi(W') \stackrel{\pi(y)}\longrightarrow \pi(W).$$
By (Ex1) $e \circ d'$ is a deflation in $\mathcal{A}$. Observe that
both $\pi(a'\circ a)$ and $\pi(y)$ are isomorphisms. Then we have
that $\gamma\circ \delta$ is a deflation in
$\mathcal{A}/[\mathcal{F}]$, proving the axiom (Ex1). Dually one
shows (Ex1)$^{\rm op}$.
To show (Ex2), take a deflation $\delta\colon \pi(Y)\rightarrow
\pi(Z)$ and a morphism $\pi(f)\colon \pi(Z')\rightarrow \pi(Z)$.
Without loss of generality we may assume that $\delta=\pi(d)$ for a
deflation $d\colon Y\rightarrow Z$ in $\mathcal{A}$. Then we apply
(Ex2) for $\mathcal{A}$ to get a pullback diagram in $\mathcal{A}$.
As above, using Lemma \ref{lem:lem1} and the first step, the
obtained diagram is also a pullback diagram in the factor category
$\mathcal{A}/[\mathcal{F}]$. This proves the axiom (Ex2) for
$\mathcal{A}/[\mathcal{F}]$. Dually one shows (Ex2)$^{\rm op}$.
\vskip 3pt
\emph{Step 3.} The exact category $(\mathcal{A}/[\mathcal{F}],
\mathcal{E}_\mathcal{F})$ is Frobenius. Recall that up to
isomorphism conflations in $\mathcal{A}/[\mathcal{F}]$ are given by the
images of conflations in $\mathcal{A}$. Then it follows immediately
that objects in $\mathcal{P}/[\mathcal{F}]$ are projective and
injective in the exact category $(\mathcal{A}/[\mathcal{F}],
\mathcal{E}_\mathcal{F})$; moreover, each object $\pi(X)$ in
$\mathcal{A}/[\mathcal{F}]$ admits a deflation $\pi(P)\rightarrow
\pi(X)$ and an inflation $\pi(X)\rightarrow \pi(I)$ with $\pi(P),
\pi(I)\in \mathcal{P}/[\mathcal{F}]$. From these, one concludes
immediately that the exact category $(\mathcal{A}/[\mathcal{F}],
\mathcal{E}_\mathcal{F})$ is Frobenius.
\end{proof}
\begin{rem}
As shown in the third step above, the full subcategory of
$\mathcal{A}/[\mathcal{F}]$ consisting of projective objects is
equal to $\mathcal{P}/[\mathcal{F}]$. Using again the fact that up
to isomorphism conflations in $\mathcal{A}/[\mathcal{F}]$ are given
by the images of conflations in $\mathcal{A}$, we have an
identification
$\underline{\mathcal{A}}=\underline{\mathcal{A}/[\mathcal{F}]}$ of
triangulated categories.
$\square$
\end{rem}
We will apply Theorem \ref{thm:partI} in two examples.
We begin with our motivating example. We will see that, modulo certain technical results in
\cite{KLM2}, Theorem \ref{thm:partI} specializes to \cite[Theorem
A]{KLM2}.
\begin{exm}\label{exm:KLM}
Let $k$ be a field and $p\geq 2$ be a natural number. Let
$\mathbb{X}$ be the \emph{weighted projective line} of type $(2, 3,
p)$ in the sense of Geigle and Lenzing (\cite{GL}). Denote by ${\rm
coh}\; \mathbb{X}$ the abelian category of coherent sheaves on
$\mathbb{X}$ and by $\mathcal{O}$ the structure sheaf on
$\mathbb{X}$. Denote by $L$ the rank one abelian group on three
generators $\vec{x}_1$, $\vec{x}_2$, $\vec{x}_3$ subject to the
relations $2\vec{x}_1=3\vec{x}_2=p\vec{x}_3$. Recall that the group
$L$ acts on ${\rm coh}\; \mathbb{X}$. We denote the action of an
element $\vec{x}\in L$ on a sheaf $E$ by $E(\vec{x})$.
Denote by ${\rm vect}\; \mathbb{X}$ the full subcategory of ${\rm
coh}\; \mathbb{X}$ consisting of vector bundles. Recall that all the
line bundles on $\mathbb{X}$ are given by $\mathcal{O}(\vec{x})$ for
$\vec{x}\in L$; moreover, $\mathcal{O}(\vec{x})\simeq
\mathcal{O}(\vec{y})$ implies that $\vec{x}=\vec{y}$. In other
words, the Picard group of $\mathbb{X}$ is isomorphic to $L$; see
\cite[Proposition 2.1]{GL}. Recall that the subcategory ${\rm
vect}\; \mathbb{X}\subseteq {\rm coh}\; \mathbb{X}$ is closed under
extensions and then it has a natural exact structure. However with
this exact structure the category ${\rm vect}\; \mathbb{X}$ is not
Frobenius.
Following \cite{KLM1} a short exact sequence $\eta\colon
0\rightarrow E'\rightarrow E\rightarrow E''\rightarrow 0 $ of vector bundles is
\emph{distinguished} provided that the sequences ${\rm
Hom}(\mathcal{O}(\vec{x}), \eta)$ are exact for all $\vec{x}\in L$.
By Serre duality this is equivalent to the condition that the sequences ${\rm
Hom}(\eta, \mathcal{O}(\vec{x}))$ are exact for all $\vec{x}\in L$.
Observe that the category ${\rm vect}\; \mathbb{X}$ of vector
bundles is an exact category such that conflations are induced by
distinguished short exact sequences; compare Lemma \ref{lem:lem3}.
We denote by $\mathcal{A}$ this exact category. Moreover, the exact
category $\mathcal{A}$ is Frobenius such that its subcategory
$\mathcal{P}$ of projective objects is equal to the additive closure
of all line bundles. For details, see \cite{KLM1}.
The following terminology is taken from \cite{KLM2}. A line bundle
$\mathcal{O}(\vec{x})$ is said to be \emph{fading} provided that
$\vec{x}\notin \mathbb{Z}\vec{x}_3\cup (\vec{x}_2+\mathbb{Z}\vec{x}_3)$. Take $\mathcal{F}\subseteq \mathcal{P}$ to be
the additive closure of these fading line bundles. We claim that the
subcategory $\mathcal{F}$ satisfies the conditions in Theorem
\ref{thm:partI}. Then it follows from Theorem \ref{thm:partI} that
the factor category $\mathcal{A}/[\mathcal{F}]$ inherits the
Frobenius exact structure from the one of $\mathcal{A}$; this is
\cite[Theorem A]{KLM2}.
In fact, the proof of \cite[Proposition 3.13]{KLM2} yields the
following technical fact: for a vector bundle $E$ there is a short
exact sequence $0\rightarrow E \stackrel{\alpha}\rightarrow C \rightarrow P_1\rightarrow 0$ with $C\in \mathcal{F}$ and $P_1\in
\mathcal{P}$; moreover, the morphism $\alpha$ is a left
$\mathcal{F}$-approximation (by \cite[Lemma 3.12 (2)]{KLM2}). Here
our notation is consistent with the proof of \cite[Proposition
3.13]{KLM2}. Note that one has a dual version of this result using
the duality $d\colon \mathcal{A}\rightarrow \mathcal{A}$ in the
proof of \cite[Proposition 3.2]{KLM2}.
$\square$
\end{exm}
The second example shows that a certain factor category of the
category of matrix factorizations has a Frobenius exact structure.
\begin{exm}\label{exm:mf}
Let $R$ be a commutative noetherian ring and let $f\in R$ be a
regular element. Recall that a \emph{matrix factorization} of $f$ is
a composable pair $P^0 \stackrel{d_P^0}\rightarrow P^1 \stackrel{d_P^1}\rightarrow P^0$ consisting of finitely generated
projective $R$-modules such that $d_P^1\circ d_P^0=f\; {\rm
Id}_{P^0}$ and $d_P^0\circ d_P^1=f\; {\rm Id}_{P^1}$; a morphism
$(f^0, f^1)\colon(d_P^0, d_P^1) \rightarrow (d_Q^0, d_Q^1) $ between
matrix factorizations consists of two morphisms $f^0\colon
P^0\rightarrow Q^0$ and $f^1\colon P^1\rightarrow Q^1$ of
$R$-modules such that $d_Q^0\circ f^0=f^1\circ d_P^0$ and
$d_Q^1\circ f^1=f^0\circ d_P^1$. Observe that since $f$ is regular,
the two morphisms $d_P^0$ and $d_P^1$ in a matrix factorization are
mono. For details, see \cite[Section 5]{Ei80}.
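As a simple illustration (a standard example, not needed in what follows), let $R=k[x]$ be a polynomial ring over a field and $f=x^2$. Then
$$R \stackrel{x}\longrightarrow R \stackrel{x}\longrightarrow R$$
is a matrix factorization of $f$, while for a finitely generated projective $R$-module $P$ the pairs $({\rm Id}_P, f\; {\rm Id}_P)$ and $(f\; {\rm Id}_P, {\rm Id}_P)$ are the ``trivial'' matrix factorizations.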
Denote by ${\rm MF}_R(f)$ the category of matrix factorizations of
$f$. It has a natural exact structure such that a sequence
$(d_{P'}^0, d_{P'}^1)\rightarrow (d_{P}^0, d_{P}^1)\rightarrow
(d_{P''}^0, d_{P''}^1)$ is a conflation if and only if the
corresponding sequences $0\rightarrow P'^i\rightarrow P^i\rightarrow
P''^i\rightarrow 0$ of $R$-modules are short exact, $i=0, 1$.
Moreover, with this exact structure ${\rm MF}_R(f)$ is a Frobenius
category, and its projective objects are equal to direct summands of
an object of the form $({\rm Id}_P, f\; {\rm Id}_P)\oplus (f\; {\rm
Id}_P, {\rm Id}_P)$ for a projective $R$-module $P$; compare
\cite[Chapter I, 3.2]{Ha1} and \cite[Example 5.3]{Ke}.
Denote by $\mathcal{F}$ the full subcategory of ${\rm MF}_R(f)$ consisting of
objects of the form $({\rm Id}_P, f\; {\rm Id}_P)$ for a projective $R$-module $P$. We claim that
$\mathcal{F}$ satisfies the conditions in Theorem \ref{thm:partI}. Indeed, for a matrix
factorization $(d_P^0, d_P^1)$, the following two sequences
$$(d_P^0, d_P^1) \stackrel{(d_P^0, {\rm Id}_{P^1})}\longrightarrow ({\rm Id}_{P^1}, f\; {\rm Id}_{P^1})
\longrightarrow (0, 0)$$
and
$$(0, 0)\rightarrow ({\rm Id}_{P^0}, f\; {\rm Id}_{P^0}) \stackrel{({\rm Id}_{P^0}, d_P^0)}\longrightarrow (d_P^0, d_P^1)$$
are the required sequences in (1) and (2), respectively. In this way, we get a factor Frobenius
category ${\rm MF}_R(f)/[\mathcal{F}]$.
$\square$
\end{exm}
\section{Frobenius category and Cohen-Macaulay module}
In this section we will show that any Frobenius category is
equivalent, as exact categories, to an admissible subcategory of the
Frobenius category formed by Cohen-Macaulay modules over an additive
category. This is an analogue of Gabriel-Quillen's embedding theorem
for Frobenius categories; see \cite{Bu10, Ke3}. In particular, our
result suggests that the category of Cohen-Macaulay modules serves
as a standard model for Frobenius categories. We apply the obtained
result to recover a part of \cite[Theorem C]{KLM2}. We also make an application to
the category of matrix factorizations.
\vskip 5pt
Let $\mathcal{C}$ be an additive category. Denote by ${\rm Mod}\;
\mathcal{C}$ the (large) abelian category of additive contravariant
functors from $\mathcal{C}$ to the category of abelian groups; by
abuse of terminology these functors are called
$\mathcal{C}$-\emph{modules}. Note that exact sequences of
$\mathcal{C}$-modules are given by sequences of functors over
$\mathcal{C}$, which are exact taking values at each object $C\in
\mathcal{C}$.
For an object $C$ in $\mathcal{C}$, denote by $H_C={\rm
Hom}_\mathcal{C}(-, C)$ the corresponding representable functor.
This gives rise to the \emph{Yoneda functor} $H\colon
\mathcal{C}\rightarrow {\rm Mod}\; \mathcal{C}$. Yoneda Lemma says
that there exists a natural isomorphism ${\rm Hom}_{{\rm Mod}\;
\mathcal{C}}(H_C, M)\simeq M(C)$ for each object $C\in \mathcal{C}$
and $M\in {\rm Mod}\; \mathcal{C}$. From these one infers that the
Yoneda functor $H$ is fully faithful and the modules $H_C$ are
projective for all $C\in \mathcal{C}$. Recall that a
$\mathcal{C}$-module $M$ is \emph{finitely generated} provided that
there exists an epimorphism $H_C\rightarrow M$ for some object $C\in
\mathcal{C}$. Observe that a $\mathcal{C}$-module is finitely
generated projective if and only if it is a direct summand of $H_C$
for an object $C\in \mathcal{C}$. For details, we refer to
\cite{Mit72}.
Recall that a cochain complex $P^\bullet=(P^n, d^n\colon
P^n\rightarrow P^{n+1})_{n\in \mathbb{Z}}$ consisting of finitely
generated projective $\mathcal{C}$-modules is said to be
\emph{totally acyclic} provided that it is acyclic and for each
object $C$ the Hom complex ${\rm Hom}_{{\rm Mod}\;
\mathcal{C}}(P^\bullet, H_C)$ is acyclic; compare \cite[p.400]{AM}.
Following \cite{Buc87} and \cite{Bel3} a $\mathcal{C}$-module $M$ is
said to be (maximal) \emph{Cohen-Macaulay} provided that there exists a
totally acyclic complex $P^\bullet$ such that the $0$-th cocycle
$Z^0(P^\bullet)$ is isomorphic to $M$. In this case, the complex
$P^\bullet$ is said to be a \emph{complete resolution} of $M$.
Observe that a finitely generated projective $\mathcal{C}$-module
$P$ is Cohen-Macaulay, since we may take its complete resolution as
$\cdots \rightarrow 0\rightarrow P \stackrel{{\rm Id}_P}\rightarrow
P\rightarrow 0\rightarrow \cdots$. Note that in the literature,
Cohen-Macaulay modules are also called \emph{modules of G-dimension
zero} (\cite{ABr69}) and \emph{Gorenstein-projective modules}
(\cite{EJ}). Let us remark that Cohen-Macaulay modules are closely related
to singularity categories (\cite{Buc87,Or04,Ch10}).
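For a concrete illustration (a standard example, recorded here only for orientation), take $\mathcal{C}$ to be the category of finitely generated projective modules over $S=k[x]/{(x^2)}$; evaluation at $S$ identifies ${\rm Mod}\; \mathcal{C}$ with the category of $S$-modules. Then the simple module $k=S/{(x)}$ is Cohen-Macaulay, with complete resolution
$$\cdots \longrightarrow S \stackrel{x}\longrightarrow S \stackrel{x}\longrightarrow S \longrightarrow \cdots,$$
since this complex is acyclic, remains acyclic after applying ${\rm Hom}_S(-, S)$, and its $0$-th cocycle is isomorphic to $k$.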
Denote by ${\rm CM}(\mathcal{C})$ the full subcategory of ${\rm
Mod}\; \mathcal{C}$ consisting of Cohen-Macaulay
$\mathcal{C}$-modules. Note that since each Cohen-Macaulay module is
finitely generated, the category ${\rm CM}(\mathcal{C})$ has small
Hom sets. Observe that ${\rm CM}(\mathcal{C})\subseteq {\rm Mod}\; \mathcal{C}$ is closed under extensions; compare \cite[Proposition 5.1]{AR91}. Then it becomes an exact category such that conflations
are induced by short exact sequences with terms in ${\rm
CM}(\mathcal{C})$.
The following result is well known; compare \cite[Proposition
3.1(1)]{Ch10}. For the definition of an admissible subcategory, see
Section 2.
\begin{lem}\label{lem:lem3.5}
The exact category ${\rm CM}(\mathcal{C})$ is Frobenius; moreover,
its projective objects are equal to finitely generated projective
$\mathcal{C}$-modules. Consequently, any admissible subcategory of
${\rm CM}(\mathcal{C})$ is a Frobenius category.
\end{lem}
\begin{proof}
Observe first that for a Cohen-Macaulay $\mathcal{C}$-module $M$ and
a finitely generated projective $\mathcal{C}$-module $P$ we have
${\rm Ext}^i_{{\rm Mod}\; \mathcal{C}}(M, P)=0$ for $i\geq 1$;
compare \cite[Lemma 2.1]{CFH06}. Hence the object $P$ is injective
in ${\rm CM}(\mathcal{C})$; while it is clearly projective in ${\rm
CM}(\mathcal{C})$. Observe from the definition that for each
Cohen-Macaulay module $M$ with its complete resolution $P^\bullet$,
we have two conflations $Z^{-1}(P^\bullet)\rightarrow
P^{-1}\rightarrow M$ and $M\rightarrow P^0\rightarrow
Z^1(P^\bullet)$. These two conflations imply that the exact category
${\rm CM}(\mathcal{C})$ has enough projective and enough injective objects;
moreover, from these one infers that the class of projective objects
coincides with the class of injective objects, both of which are
equal to the class of finitely generated projective
$\mathcal{C}$-modules. This shows that the category ${\rm CM}(\mathcal{C})$ is a Frobenius
category, and the last statement follows immediately; see Section 2.
\end{proof}
Let $(\mathcal{A}, \mathcal{E})$ be a Frobenius category. Denote by
$\mathcal{P}$ the full subcategory of its projective objects.
Consider the category ${\rm Mod}\; \mathcal{P}$ of
$\mathcal{P}$-modules. For each object $A$ in $\mathcal{A}$ denote
by $h_A$ the $\mathcal{P}$-module obtained by restricting the
functor $H_A={\rm Hom}_\mathcal{A}(-, A)$ on $\mathcal{P}$. This
yields a functor $h\colon \mathcal{A}\rightarrow {\rm Mod}\;
\mathcal{P}$ sending $A$ to $h_A$; such a functor is known as the
\emph{restricted Yoneda functor}. Observe that for an object $P\in
\mathcal{P}$ we have $h_P=H_P$.
\vskip 5pt
Recall from Lemma \ref{lem:lem3.5} that an admissible subcategory of
the category of Cohen-Macaulay modules is Frobenius. In fact, all
Frobenius categories arise in this way. This is our second result,
which is an analogue of Gabriel-Quillen's embedding theorem for
Frobenius categories; see \cite[Proposition A.2]{Ke3} and
\cite[Theorem A.1]{Bu10}.
\begin{thm}\label{thm:partII}
Use the notation as above. Then the restricted Yoneda functor
$h\colon \mathcal{A}\rightarrow {\rm Mod}\; \mathcal{P}$ induces an
equivalence of exact categories between $\mathcal{A}$
and an admissible subcategory of ${\rm CM}(\mathcal{P})$.
\end{thm}
\begin{proof}
We will divide the proof into four steps. First observe that the functor
$h$ sends conflations in $\mathcal{A}$ to short exact sequences of
$\mathcal{P}$-modules, and sends projective objects in $\mathcal{A}$ to
representable functors over $\mathcal{P}$, in particular, finitely
generated projective $\mathcal{P}$-modules.
\vskip 3pt
\emph{Step 1.} We will show that for each object $A\in \mathcal{A}$
the $\mathcal{P}$-module $h_A$ is Cohen-Macaulay. For this, take
conflations $\eta^i\colon A^i\rightarrow P^i \rightarrow A^{i+1}$
such that $A^0=A$ and $P^i$'s are projective for $i\in \mathbb{Z}$.
Applying $h$ to these conflations we get short exact sequences
$0\rightarrow h_{A^i}\rightarrow h_{P^i} \rightarrow
h_{A^{i+1}}\rightarrow 0$. Splicing these short exact sequences we
get an acyclic complex $h_{P^\bullet}$ of finitely generated
projective $\mathcal{P}$-modules which satisfies that
$Z^0(h_{P^\bullet})\simeq h_A$. It remains to show that the complex
$h_{P^\bullet}$ satisfies that for each object $P\in \mathcal{P}$
the Hom complex ${\rm Hom}_{{\rm Mod}\; \mathcal{P}}(h_{P^\bullet},
H_P)$ is acyclic. Here $H_P$ denotes the representable functor
corresponding to $P$. Using Yoneda Lemma this Hom complex is
isomorphic to the Hom complex ${\rm Hom}_\mathcal{A}(P^\bullet, P)$.
Here the complex $P^\bullet$ in $\mathcal{A}$ is constructed by
splicing the conflations $\eta^i$ together. Then the Hom complex
${\rm Hom}_\mathcal{A}(P^\bullet, P)$ is acyclic, since it is
constructed by splicing the short exact sequences ${\rm Hom}_\mathcal{A}(\eta^i, P)$ together; here we use the fact that the
object $P$ is injective in $\mathcal{A}$. Consequently the complex
$h_{P^\bullet}$ is totally acyclic and then the $\mathcal{P}$-module
$h_A$ is Cohen-Macaulay.
\vskip 3pt
\emph{Step 2.} We will show that the functor $h$ is fully faithful. This
is indeed fairly standard; compare the argument in
\cite[p.102]{ARS}. We will only show the fullness, and by a similar
argument one can show the faithfulness.
For an object $A\in \mathcal{A}$, we obtain from the conflations
$\eta^{-1}$ and $\eta^{-2}$ in the first step a cokernel sequence
$P^{-2}\rightarrow P^{-1}\rightarrow A\rightarrow 0$. This sequence
induces a projective presentation $H_{P^{-2}}\rightarrow
H_{P^{-1}}\rightarrow h_A \rightarrow 0$ of $\mathcal{P}$-modules.
Similarly for another object $A'$ we get a projective presentation
$H_{P'^{-2}}\rightarrow H_{P'^{-1}}\rightarrow h_{A'}\rightarrow 0$.
Given a morphism $\theta\colon h_A\rightarrow h_{A'}$, there exists
a commutative diagram
\[\xymatrix{
H_{P^{-2}}\ar@{.>}[d]^-{\theta^{-2}} \ar[r] & H_{P^{-1}}\ar@{.>}[d]^-{\theta^{-1}}
\ar[r] & h_A \ar[r] \ar[d]^-\theta & 0\\
H_{P'^{-2}} \ar[r] & H_{P'^{-1}} \ar[r] & h_{A'} \ar[r] & 0
}\]
Observe that by Yoneda Lemma there exist morphisms $\mu^{-i}\colon
P^{-i}\rightarrow P'^{-i}$ such that $h_{\mu^{-i}}=\theta^{-i}$;
moreover, these two morphisms make the left side square in the
following diagram commute.
\[\xymatrix{
P^{-2} \ar[d]^-{\mu^{-2}} \ar[r] & P^{-1} \ar[d]^-{\mu^{-1}}
\ar[r] & A \ar[r] \ar@{.>}[d]^-\mu & 0\\
P'^{-2} \ar[r] & P'^{-1} \ar[r] & A' \ar[r] & 0
}\]
Since the two rows in the diagram above are cokernel sequences, one
infers that there exists $\mu\colon A \rightarrow A'$ making the
diagram commute. It is direct to see that $h_\mu=\theta$ and this
proves that the functor $h$ is full.
\vskip 3pt
\emph{Step 3.} Denote by ${\rm Im}\; h$ the \emph{essential
image} of the functor $h$. We have shown that ${\rm Im}\; h\subseteq {\rm CM}(\mathcal{P})$. We will now show that it is
extension-closed. Note that the functor $h$ sends projective objects
to projective modules, and sends conflations to short exact
sequences. This will imply that ${\rm Im}\; h$ is an admissible
subcategory of ${\rm CM}(\mathcal{P})$; see Section 2.
Take a conflation $h_X \rightarrow M \rightarrow h_Y$ in ${\rm
CM}(\mathcal{P})$ with $X, Y\in \mathcal{A}$. We will show that $M$
lies in ${\rm Im}\; h$. For this, take a conflation $X\rightarrow Q
\stackrel{d}\rightarrow X'$ with $Q$ projective. Then we have the
following commutative exact diagram
\[\xymatrix{
h_X \ar[r] \ar@{=}[d] & M \ar[r] \ar@{.>}[d] & h_Y
\ar@{.>}[d]^-{\theta}\\
h_X \ar[r] & H_Q \ar[r]^{h_d} & h_{X'}}\] Here we use that $h_Q=H_Q$
is injective in ${\rm CM}(\mathcal{P})$; see Lemma \ref{lem:lem3.5}. From this diagram we have
a conflation $M \rightarrow h_Y\oplus H_Q \stackrel{(\theta, h_d)}\rightarrow h_{X'}$
in ${\rm CM}(\mathcal{P})$.
By the second step there
exists a morphism $\mu\colon Y\rightarrow X'$ such that
$h_\mu=\theta$.
By Lemma \ref{lem:lem1} there exists a conflation
$Z\rightarrow Y\oplus Q \stackrel{(\mu, d)}\rightarrow X'$ in $\mathcal{A}$ for some object $Z$. Applying $h$ to this conflation we get an isomorphism $M\simeq h_Z$.
\vskip 3pt
\emph{Step 4.} We will show that the functor $h$ induces an
equivalence of exact categories between $\mathcal{A}$ and ${\rm Im}\;
h$. Then we are done with the proof. What remains to show is that the functor $h$
\emph{reflects exactness}, that is, any sequence $\eta\colon X
\stackrel{i}\rightarrow Y \stackrel{d}\rightarrow Z$
in $\mathcal{A}$ is a conflation provided that $h_\eta$ is a
conflation in ${\rm Im}\; h$. View the functor $h$ as a full
embedding. Since $h_i={\rm Ker}\; h_d$, we have $i={\rm Ker}\; d$.
For each projective object $P$, we have an isomorphism ${\rm
Hom}_\mathcal{A}(P, \eta)\simeq {\rm Hom}_{{\rm
CM}(\mathcal{P})}(H_P, h_\eta)$ of sequences, and hence they are
both exact. In particular, for a chosen deflation
$P \stackrel{d'}\rightarrow Z$ with $P$ projective, there exists a
morphism $t\colon P\rightarrow Y$ such that $d\circ t=d'$. Now we
apply Lemma \ref{lem:lem2} to the morphism $d$. Then $d$ is a
deflation and as its kernel, $i$ is an inflation. Consequently, the
sequence $\eta$ is a conflation in $\mathcal{A}$, completing the
proof.
\end{proof}
We will apply Theorem \ref{thm:partII} to recover a part of
\cite[Theorem C]{KLM2}, which gives a surprising link between the
category of vector bundles on weighted projective lines and a
certain submodule category.
\begin{exm}\label{exm:KLMcontinued}
Consider the factor Frobenius category
$\mathcal{A'}={\rm vect}\; \mathbb{X}/[\mathcal{F}]$ in Example \ref{exm:KLM}.
The full subcategory $\mathcal{P}'$ consisting of projective objects
is equal to $\mathcal{P}/[\mathcal{F}]$. Then Theorem
\ref{thm:partII} implies that the associated restricted Yoneda
functor $h\colon \mathcal{A}'\rightarrow {\rm Mod}\; \mathcal{P}'$
induces an equivalence of exact categories between $\mathcal{A}'$
and an admissible subcategory of ${\rm CM}(\mathcal{P}')$. This
might be viewed as a part of \cite[Theorem C]{KLM2}.
In this situation, a highly nontrivial result is that the corresponding
admissible subcategory is ${\rm CM}(\mathcal{P}')$ itself; compare \cite[Proposition
3.18]{KLM2}. Finally observe that the category ${\rm
CM}(\mathcal{P}')$ of Cohen-Macaulay $\mathcal{P}'$-modules is equal to the
submodule category ${\mathcal{S}(\widetilde{p})}$ of Ringel-Schmidmeier
(by combining \cite[Lemma B]{KLM2} and a graded version of \cite[Lemma 4.3]{Ch09}). From
these we conclude that there is an equivalence of exact categories between
the factor category ${\rm vect}\; \mathbb{X}/[\mathcal{F}]$ and the category
$\mathcal{S}(\widetilde{p})$ of Ringel-Schmidmeier;
this is \cite[Theorem C]{KLM2}.
$\square$
\end{exm}
In the next example, we apply Theorem \ref{thm:partII} to the factor Frobenius
category obtained in Example \ref{exm:mf}.
\begin{exm}\label{exm:mfcontinued}
Let $R$ be a commutative noetherian ring and let $f\in R$ be a
regular element. We consider the factor Frobenius category $\mathcal{A}={\rm MF}_R(f)/[\mathcal{F}]$ in
Example \ref{exm:mf}. Observe that its full subcategory $\mathcal{P}$ of projective objects
is the additive closure of the object $T:=(f{\rm Id}_R, {\rm Id}_R)$; moreover,
the endomorphism ring of $T$ (in $\mathcal{A}$) is isomorphic to the quotient ring $S:=R/(f)$.
By a version of Morita equivalence we have an equivalence ${\rm Mod}\; \mathcal{P}\simeq \mbox{Mod}\; S$
of module categories; here $\mbox{Mod}\; S$ denotes the category of $S$-modules. Furthermore, this
equivalence restricts to an equivalence ${\rm CM}(\mathcal{P})\simeq {\rm CM}(S)$. Here, ${\rm CM}(S)$ is
the category of (maximal) Cohen-Macaulay $S$-modules (\cite{Bu10, Bel3}).
Together with this equivalence we apply Theorem \ref{thm:partII} to $\mathcal{A}$.
Then the restricted Yoneda functor
$$h\colon {\rm MF}_R(f)/[\mathcal{F}] \longrightarrow {\rm CM}(S)$$
identifies ${\rm MF}_R(f)/[\mathcal{F}]$ as an admissible subcategory of ${\rm CM}(S)$. We will
describe this admissible subcategory of ${\rm CM}(S)$. To this end,
we will first give another description of the functor $h$.
\vskip 3pt
Consider the following functor
$${\rm Cok}\colon {\rm MF}_R(f)\longrightarrow {\rm CM}(S)$$
which sends a matrix factorization $(d_P^0, d_P^1)$ to ${\rm Cok}\;
d_P^1$ and which acts on morphisms naturally. Observe that ${\rm
Cok}\; d_P^1$ is indeed a Cohen-Macaulay $S$-module; compare \cite[Proposition 5.1]{Ei80}.
Note that the functor ${\rm Cok}$ is exact and vanishes on $\mathcal{F}$. Then we have an induced
functor ${\rm Cok}\colon {\rm MF}_R(f)/[\mathcal{F}]\rightarrow {\rm CM}(S)$. We claim that
there is a natural isomorphism between $h$ and ${\rm Cok}$. In fact, to see this isomorphism,
it suffices to note the natural isomorphisms ${\rm Hom}_\mathcal{A}(T, (d_P^0, d_P^1))\simeq {\rm Cok}\; d_P^1$ for
all matrix factorizations $(d_P^0, d_P^1)$.
Denote by $\mathcal{B}$ the full subcategory of ${\rm CM}(S)$ consisting of modules which, when viewed as $R$-modules, have
projective dimension at most one. Observe that $\mathcal{B}$ is an extension-closed exact subcategory of
${\rm CM}(S)$. Recall that in a matrix factorization $(d_P^0, d_P^1)$ both morphisms $d_P^0$ and
$d_P^1$ are mono. It follows that the image of the functor ${\rm Cok}$ lies in $\mathcal{B}$.
We claim that any module in $\mathcal{B}$ lies in the image of the functor ${\rm Cok}$.
To see this, for an $S$-module $M$ in $\mathcal{B}$ we take an exact sequence
$0\rightarrow P^1 \stackrel{d_P^1}\rightarrow P^0 \stackrel{\pi}\rightarrow M\rightarrow 0$ such that
$P^i$ are finitely generated projective $R$-modules, $i=0,1$. Since $f$ vanishes on $M$, we have $\pi\circ f{\rm Id}_{P^0}=0$, and hence $f{\rm Id}_{P^0}$ factors uniquely
through $d_P^1$. In this way, we obtain a morphism $d_P^0\colon P^0\rightarrow P^1$ such that
$(d_P^0, d_P^1)$ is a matrix factorization. Observe that $M\simeq {\rm Cok}\; d_P^1$. This shows the claim.
Recall that the two functors $h$ and ${\rm Cok}$ are isomorphic. Then we conclude that
the essential image of $h$ is $\mathcal{B}$.
In particular, the subcategory $\mathcal{B}\subseteq {\rm CM}(S)$ is admissible. Hence the restricted Yoneda
functor induces an equivalence of exact categories ${\rm MF}_R(f)/[\mathcal{F}]\simeq \mathcal{B}$. One might
compare this with \cite[Corollary 6.3]{Ei80} and \cite[Theorem 3.9]{Or04}.
\vskip 3pt
The situation is particularly nice if we assume that the ring $R$ is \emph{regular}, that is, $R$ has finite global
dimension. In this case, the quotient ring $S$ is Gorenstein.
Observe that each Cohen-Macaulay $S$-module has projective
dimension at most one, when viewed as an $R$-module (by \cite[Lemma
18.2(i)]{Mat}), that is, $\mathcal{B}={\rm CM}(S)$. Then the restricted
Yoneda functor $h$ induces an equivalence of exact categories ${\rm MF}_R(f)/[\mathcal{F}]\simeq {\rm CM}(S).$
$\square$
\end{exm}
We introduce the following notion: a Frobenius category
$\mathcal{A}$ is \emph{standard} provided that the associated
restricted Yoneda functor $h\colon \mathcal{A}\rightarrow {\rm
CM}(\mathcal{P})$ is an equivalence of exact categories; by Theorem
\ref{thm:partII} this is equivalent to the condition that the functor $h$ is
dense. For example, one can show that a Frobenius abelian category
is standard; Example
\ref{exm:KLMcontinued} claims that the factor Frobenius category
$\mathcal{A}'$ is standard; Example \ref{exm:mfcontinued} implies that for a regular ring
$R$ and a regular element $f\in R$, the factor Frobenius category
${\rm MF}_R(f)/[\mathcal{F}]$ is standard.
In general, it would be very nice to have an intrinsic criterion on
when a Frobenius category is standard.
\section{Frobenius category from exact category}
In this section we give sufficient conditions such that on an exact
category with enough projective and enough injective objects there
exists another natural exact structure, with which the given
category becomes Frobenius. We apply the result to the morphism
category of a Frobenius abelian category; it turns out that this
morphism category has a natural Frobenius exact structure. This
observation allows us to interpret the minimal monomorphism
operation in \cite{RS06} as a triangle functor, which is right
adjoint to an inclusion triangle functor.
\vskip 5pt
Let $(\mathcal{A}, \mathcal{E})$ be an exact category with enough
projective and enough injective objects. We denote by $\mathcal{P}$
and $\mathcal{I}$ the full subcategory of $\mathcal{A}$ consisting
of projective and injective objects, respectively. Note that the
exact category $\mathcal{A}$ might not be Frobenius. The aim is to
show that under certain conditions there is a new exact structure
$\mathcal{E}'$ on $\mathcal{A}$ such that $(\mathcal{A},
\mathcal{E}')$ is a Frobenius category.
Recall that a full additive subcategory $\mathcal{S}$ of
$\mathcal{A}$ is said to be \emph{contravariantly finite} provided
that each object in $\mathcal{A}$ has a right
$\mathcal{S}$-approximation. Dually one has the notion of
\emph{covariantly finite subcategory} (\cite[Section 2]{AS81}). For
two full subcategories $\mathcal{X}$ and $\mathcal{Y}$ of
$\mathcal{A}$, denote by $\mathcal{X}\vee \mathcal{Y}$ the smallest
full additive subcategory of $\mathcal{A}$ which contains
$\mathcal{X}$ and $\mathcal{Y}$ and is closed under taking direct
summands.
Recall that for a full additive subcategory $\mathcal{S}$ of an exact category $\mathcal{A}$,
a conflation $\eta\colon X\rightarrow Y\rightarrow Z$ is \emph{right
$\mathcal{S}$-acyclic} provided that the sequences ${\rm
Hom}_\mathcal{A}(S, \eta)$ are short exact for all $S\in
\mathcal{S}$. Dually one has the notion of \emph{left
$\mathcal{S}$-acyclic conflation}.
\vskip 5pt
Here is our third result, which gives sufficient
conditions such that there is a natural (and new) exact structure on
$\mathcal{A}$,
with which $\mathcal{A}$ becomes a Frobenius category.
\begin{thm}\label{thm:partIII}
Use the notation as above. Assume that $\mathcal{P}'\subseteq \mathcal{P}$ and $\mathcal{I}'\subseteq \mathcal{I}$ are two full
additive subcategories subject to the following conditions:
\begin{enumerate}
\item $\mathcal{P}'\vee \mathcal{I}=\mathcal{I}'\vee \mathcal{P}$;
\item $\mathcal{P}'\subseteq \mathcal{A}$ is covariantly finite and $\mathcal{I}'\subseteq \mathcal{A}$ is contravariantly finite;
\item the class of right $\mathcal{I}'$-acyclic conflations
coincides with the class of left $\mathcal{P}'$-acyclic conflations.
\end{enumerate}
Denote the class of conflations in (3) by $\mathcal{E}'$. Then the
pair $(\mathcal{A}, \mathcal{E}')$ is a Frobenius exact category.
\end{thm}
The proof of this result is quite direct, once we notice the
following general observation.
\begin{lem}\label{lem:lem3}
Let $(\mathcal{A}, \mathcal{E})$ be an exact category. For a full
additive subcategory $\mathcal{S}\subseteq \mathcal{A}$, denote by
$\mathcal{E}'$ the class of right $\mathcal{S}$-acyclic conflations.
Then the pair $(\mathcal{A}, \mathcal{E}')$ is an exact category.
\end{lem}
\begin{proof}
For a conflation $(i, d)$ in $\mathcal{E}'$, we will temporarily
call $i$ an $\mathcal{E}'$-inflation and $d$ an
$\mathcal{E}'$-deflation. We verify the axioms for the pair
$(\mathcal{A}, \mathcal{E}')$. Recall that the axiom (Ex1)$^{\rm
op}$ can be deduced from the others; see \cite[Appendix A]{Ke3}. So
we only show the remaining four axioms. The axiom (Ex0) is clear.
Recall that a deflation $d\colon Y\rightarrow Z$ is an
$\mathcal{E}'$-deflation if and only if every morphism from an
object in $\mathcal{S}$ to $Z$ factors through $d$. This observation
yields (Ex1) immediately.
Consider the pullback diagram in the axiom (Ex2); see Section 2.
Assume that $d\colon Y\rightarrow Z$ is an $\mathcal{E}'$-deflation.
We will show that $d'\colon Y'\rightarrow Z'$ is also an
$\mathcal{E}'$-deflation. Take a morphism $s\colon S\rightarrow Z'$
with $S\in \mathcal{S}$. Since $d$ is an $\mathcal{E}'$-deflation,
the morphism $f\circ s$ lifts to $Y$, that is, there exists
$s'\colon S\rightarrow Y$ such that $d\circ s'=f\circ s$. Using the
universal property of the pullback diagram, we infer that there
exists a unique morphism $t\colon S\rightarrow Y'$ such that $d'
\circ t=s$ and $f' \circ t=s'$. In particular, the morphism $s$
factors through $d'$, proving that $d'$ is an
$\mathcal{E}'$-deflation.
It remains to verify (Ex2)$^{\rm op}$. Consider the pushout diagram
in (Ex2)$^{\rm op}$; see Section 2. We assume that $i\colon
X\rightarrow Y$ is an $\mathcal{E}'$-inflation. We will show that
$i'$ is an $\mathcal{E}'$-inflation. We apply the dual of Lemma
\ref{lem:lem1} to get the following commutative diagram such that
the two rows are conflations
\[\xymatrix{ X \ar[r]^-{i} \ar[d]^-{f} & Y \ar[d]^-{f'} \ar[r]^-d & Z \ar@{=}[d] \\
X' \ar[r]^-{i'} & Y' \ar[r]^-{d'} & Z }\]
Here $d=d'\circ f'$. The fact that $i$ is an
$\mathcal{E}'$-inflation implies that $d$ is an
$\mathcal{E}'$-deflation. Consider any morphism $s\colon
S\rightarrow Z$ with $S\in \mathcal{S}$. Then $s$ factors through
$d$. Since $d=d'\circ f'$, we infer that the morphism $s$ factors
through $d'$. This shows that $d'\colon Y'\rightarrow Z$ is an
$\mathcal{E}'$-deflation, and hence $i'$ is an
$\mathcal{E}'$-inflation. We are done.
\end{proof}
\vskip 10pt
\noindent {\bf Proof of Theorem \ref{thm:partIII}:}\quad By Lemma
\ref{lem:lem3} the pair $(\mathcal{A}, \mathcal{E}')$ is an exact
category. We will call a conflation in $\mathcal{E}'$ an
$\mathcal{E}'$-conflation. Note by the condition (3) that the
objects in $\mathcal{P}'\vee \mathcal{I}=\mathcal{I}'\vee
\mathcal{P}$ are projective and injective in the exact category
$(\mathcal{A}, \mathcal{E}')$.
Observe from the condition (3) that a conflation
$X \stackrel{i}\rightarrow Y \stackrel{d}\rightarrow Z$ is an
$\mathcal{E}'$-conflation if and only if any morphism from an object
in $\mathcal{I}'$ to $Z$ factors through $d$, if and only if any
morphism from $X$ to an object in $\mathcal{P}'$ factors through
$i$. For an object $Z$ in $\mathcal{A}$, take a deflation $d\colon
P\rightarrow Z$ with $P\in \mathcal{P}$ and a right
$\mathcal{I}'$-approximation $s\colon I'\rightarrow Z$. By Lemma
\ref{lem:lem1} the morphism $(d, s)\colon P\oplus I'\rightarrow Z$
is a deflation. It induces a conflation $\eta\colon X\rightarrow
P\oplus I' \stackrel{(d, s)}\rightarrow Z $.
From the observation just made, we
obtain that the conflation $\eta$ is an $\mathcal{E}'$-conflation.
This proves that the exact category $(\mathcal{A}, \mathcal{E}')$
has enough projective objects and the class of projective objects
is equal to $\mathcal{I}'\vee \mathcal{P}$. Dually one shows that
the exact category $(\mathcal{A}, \mathcal{E}')$ has enough
injective objects and the class of injective objects is equal to
$\mathcal{P}'\vee \mathcal{I}$. Then we conclude that the exact
category $(\mathcal{A}, \mathcal{E}')$ is Frobenius, completing the
proof.
$\square$
\vskip 5pt
We apply Theorem \ref{thm:partIII} to the morphism category of a
Frobenius abelian category. This allows us to interpret the minimal
monomorphism operation (\cite{RS06}) as a right adjoint to an
inclusion triangle functor.
\begin{exm}
Let $\mathcal{A}$ be an abelian category. Denote by ${\rm
Mor}(\mathcal{A})$ the \emph{morphism category} of $\mathcal{A}$:
its objects are given by morphisms $\alpha\colon X\rightarrow Y$ in
$\mathcal{A}$, and morphisms $(f, g)\colon \alpha \rightarrow
\alpha'$ are given by commutative squares in $\mathcal{A}$, that is,
two morphisms $f\colon X\rightarrow X'$ and $g\colon Y\rightarrow
Y'$ such that $\alpha'\circ f=g\circ \alpha$. It is an abelian
category; a sequence $\alpha \rightarrow \alpha'\rightarrow
\alpha''$ in ${\rm Mor}(\mathcal{A})$ is exact if and only if the
corresponding sequences of domains and targets are exact in
$\mathcal{A}$; see \cite[Corollary 1.2]{FGR75}.
Assume that the abelian category $\mathcal{A}$ is Frobenius. In
general the abelian category ${\rm Mor}(\mathcal{A})$ is not
Frobenius. In fact, the category ${\rm Mor}(\mathcal{A})$
has enough projective and injective objects; projective objects are
equal to objects of the form
$(0\rightarrow P)\oplus (Q \stackrel{\rm Id_Q}\rightarrow
Q)$ for some projective objects $P, Q\in \mathcal{A}$; dually injective
objects are equal to objects of the form $(P\rightarrow 0)\oplus
(Q \stackrel{\rm Id_Q}\rightarrow Q)$ for some injective objects $P, Q\in \mathcal{A}$;
compare \cite[Section 2]{RS06}.
Denote by $\mathcal{P}$ and
$\mathcal{I}$ the full subcategory consisting of projective and injective objects in
${\rm Mor}(\mathcal{A})$,
respectively.
Take $\mathcal{P}' \subseteq \mathcal{P}$ to be the
full subcategory consisting of objects of the form $0\rightarrow
P$. Take $\mathcal{I}' \subseteq \mathcal{I}$ to be the full
subcategory consisting of objects of the form $P\rightarrow 0$. We
will verify the conditions in Theorem \ref{thm:partIII}.
The condition (1) is clear. To see (2), take an object $\alpha \colon X\rightarrow Y$
in ${\rm Mor}(\mathcal{A})$ and consider its cokernel $\pi\colon Y\rightarrow {\rm Cok}\; \alpha$
and a monomorphism $i\colon {\rm Cok}\; \alpha\rightarrow P$ with
$P$ injective. Then the morphism $(0, i\circ \pi)\colon \alpha \rightarrow (0\rightarrow
P)$ is a left $\mathcal{P}'$-approximation. This proves that
$\mathcal{P}' \subseteq {\rm Mor}(\mathcal{A})$ is covariantly finite. Dually
$\mathcal{I}' \subseteq {\rm Mor}(\mathcal{A})$ is
contravariantly finite. For (3), observe that a short exact
sequence $ 0\rightarrow \alpha \rightarrow
\alpha'\rightarrow \alpha'' \rightarrow 0$ in ${\rm
Mor}(\mathcal{A})$ is left $\mathcal{P}'$-acyclic if and only if the
corresponding sequence of cokernels is exact; by the Snake Lemma this is
equivalent to the corresponding sequence of kernels being exact,
which in turn is equivalent to the sequence being right
$\mathcal{I}'$-acyclic.
\vskip 3pt
We apply Theorem \ref{thm:partIII} to obtain a Frobenius
exact structure on ${\rm Mor}(\mathcal{A})$. Note that the
corresponding conflations are given by short exact sequences in ${\rm
Mor}(\mathcal{A})$ such that the associated sequences of kernels
and cokernels are exact in $\mathcal{A}$; moreover, projective
objects are equal to objects of the form $(0\rightarrow P)\oplus
(Q \stackrel{\rm Id_Q}\rightarrow Q)\oplus (R\rightarrow 0)$ for some
projective objects $P, Q, R\in \mathcal{A}$. Denote by
$\mathcal{P}_{\rm new}$ the full subcategory of ${\rm Mor}(\mathcal{A})$
formed by these objects. We denote by $\underline{\rm Mor}(\mathcal{A})$ the stable
category of ${\rm Mor}(\mathcal{A})$ modulo $\mathcal{P}_{\rm
new}$; it is a triangulated category (\cite{Ha1, Ke3}).
Recall that ${\rm Mon}(\mathcal{A})$ is the extension-closed exact
subcategory of ${\rm Mor}(\mathcal{A})$ consisting of monomorphisms
in $\mathcal{A}$; it is called the \emph{monomorphism category} of
$\mathcal{A}$. In fact, it is a Frobenius category such that its
projective objects are equal to objects of the form $(0\rightarrow
P)\oplus (Q \stackrel{{\rm Id}_Q}\rightarrow Q)$ for projective
objects $P, Q\in \mathcal{A}$. Denote by $\underline {\rm
Mon}(\mathcal{A})$ the stable category. For details, see
\cite{Ch09}. Hence we have an inclusion triangle functor ${\rm
inc}\colon \underline{\rm Mon}(\mathcal{A}) \hookrightarrow
\underline{\rm Mor}(\mathcal{A})$. It is remarkable that this
functor admits a right adjoint ${\rm inc}_\rho\colon \underline{\rm
Mor}(\mathcal{A})\rightarrow \underline{\rm Mon}(\mathcal{A})$: for
each object $\alpha\colon X\rightarrow Y$, consider a monomorphism
$i_X\colon X\rightarrow I(X)$ with $I(X)$ injective, and set ${\rm
inc}_\rho(\alpha)=\binom{\alpha}{i_X}\colon X\rightarrow Y\oplus
I(X)$; the action of ${\rm inc}_\rho$ on morphisms is defined
naturally. In particular, the functor ${\rm inc}_\rho$ is a triangle
functor; see \cite[Section 8]{Ke}.
Suppose that the abelian category $\mathcal{A}$ has injective hulls. For an object
$\alpha\colon X\rightarrow Y$ in ${\rm Mor}(\mathcal{A})$, consider
its kernel $i\colon K\rightarrow X$ and an injective hull $j\colon K\rightarrow
I(K)$. Then there exists a morphism
$\bar{i}\colon X\rightarrow I(K)$ such that $\bar{i}\circ i=j$.
Note that $\binom{\alpha}{\bar{i}}\colon X\rightarrow Y\oplus I(K)$
is a monomorphism; it is called the \emph{minimal
monomorphism} associated to $\alpha$ (\cite[Sections 2,4]{RS06}).
Denote the minimal monomorphism by ${\rm Mimo}(\alpha)$. It is
remarkable that there is a natural isomorphism between the object
${\rm inc}_\rho(\alpha)$ and ${\rm Mimo}(\alpha)$ in the stable
category $\underline{\rm Mon}(\mathcal{A})$; compare \cite[Section
4, Claim 2]{RS06}. Then the minimal monomorphism operation ${\rm
Mimo}(\mbox{-})$ becomes naturally a triangle functor.
$\square$
\end{exm}
We would like to point out that the Frobenius category ${\rm
Mor}(\mathcal{A})$ in the above example is standard, that is, the
associated restricted Yoneda functor yields an equivalence ${\rm
Mor}(\mathcal{A}) \simeq {\rm CM}(\mathcal{P}_{\rm new})$ of exact
categories; see Section 4. Indeed, both exact categories are
equivalent to the category of left exact sequences in $\mathcal{A}$
with the obvious exact structure. This observation and its
generalization will be treated elsewhere.
\vskip 10pt
{\footnotesize \noindent Xiao-Wu Chen, Department of
Mathematics, University of Science and Technology of
China, Hefei 230026, P. R. China \\
Homepage: http://mail.ustc.edu.cn/$^\sim$xwchen \\
\emph{Current address}: Institut fuer Mathematik, Universitaet
Paderborn, 33095, Paderborn, Germany}
\end{document}
\begin{document}
\makeatletter
\newcommand\rlarrows{\mathop{\operator@font \rightleftarrows}\nolimits}
\makeatother
\newenvironment{itemizePacked}{
\begin{itemize}
\setlength{\itemsep}{1pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
}{\end{itemize}}
\newenvironment{enumeratePacked}{
\begin{enumerate}
\setlength{\itemsep}{5pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
}{\end{enumerate}}
\newcommand{\f}[2]{{\frac{#1}{#2}}}
\newcommand{\wt}[1]{{\widetilde{#1}}}
\newcommand{\wh}[1]{{\widehat{#1}}}
\newcommand{\wc}[1]{{\widecheck{#1}}}
\newcommand{\chem}[1]{\textup{e}nsuremath{\mathrm{#1}}}
\newcommand{\lrangle}[1]{{\langle{#1}\rangle}}
\newcommand{\lrcurl}[1]{{\{{#1}\}}}
\newcommand{\ol}[1]{{\overline{#1}}}
\newcommand{\ul}[1]{{\underline{#1}}}
\newcommand{{\scriptscriptstyle\mathsf T}}{{\scriptscriptstyle\mathsf T}}
\newcommand{{\scriptscriptstyle\displaystyleelta}}{{\scriptscriptstyle\displaystyleelta}}
\newcommand{{\scriptscriptstyle <}}{{\scriptscriptstyle <}}
\newcommand{{\varepsilon}}{{\varepsilon}}
\textup def\rm{ref}{\rm{ref}}
\textup def_{\REFUP}{_{\rm{ref}}}
\textup defreaction progress parameter{reaction progress parameter}
\newcommand{\bfit}[1]{\textbf{\textit{#1}}}
\newcommand{\colred}[1]{{\color{red} #1}}
\newcommand{\colblue}[1]{{\color{blue} #1}}
\newcommand{\colwhite}[1]{{\color{white} #1}}
\newcommand{\colgreen}[1]{{\color{green} #1}}
\newcommand{\colbrown}[1]{{\color{Brown} #1}}
\newcommand{\colfucsia}[1]{{\color{Fuchsia} #1}}
\newcommand{\colBlue}[1]{{\color{Blue} #1}}
\newcommand{\corrections}[1]{{\color{blue}\textup{i}t#1}}
\newcommand{\comment}[1]{{\color{blue}\bf#1}}
\textup def\chi_{Z,\textup{st}}{\chi_{Z,\textup{st}}}
\textup def\chi_{Z,\textup{q}}{\chi_{Z,\textup{q}}}
\textup def\chi_{Z,\textup{i}}{\chi_{Z,\textup{i}}}
\textup def
{\partialagestyle{empty}\cleardoublepage}{
{\partialagestyle{empty}\cleardoublepage}}
\textup def{\cal{A}}{{\cal{A}}}
\textup def{\cal{B}}{{\cal{B}}}
\textup def{\cal{C}}{{\cal{C}}}
\textup def{\cal{O}}{{\cal{O}}}
\textup def{\cal{D}}{{\cal{D}}}
\textup def{\cal{E}}{{\cal{E}}}
\textup def{\cal{F}}{{\cal{F}}}
\textup def{\cal{H}}{{\cal{H}}}
\textup def{\cal{G}}{{\cal{G}}}
\textup def{\cal{N}}{{\cal{N}}}
\textup def{\cal{L}}{{\cal{L}}}
\textup def{\cal{S}}{{\cal{S}}}
\textup def{\cal{T}}{{\cal{T}}}
\textup def{\cal{U}}{{\cal{U}}}
\textup def{\cal{C}}{{\cal{C}}}
\textup def{\cal{M}}{{\cal{M}}}
\textup def{\cal{P}}{{\cal{P}}}
\textup def{\cal{R}}{{\cal{R}}}
\textup def{\cal{V}}{{\cal{V}}}
\textup def{\cal{Q}}{{\cal{Q}}}
\textup def{\rm{erf}}{{\rm{erf}}}
\textup def\textup{\textup}
\textup def\partial{\partialartial}
\textup def\textup d{\textup d}
\textup def\displaystyle{\textup displaystyle}
\textup def\textup{i}iint\limits_{{\Omega_{\cal F}}}{\textup{i}iint\limits_{{\Omega_{\cal F}}}}
\textup def{\Omega_{\cal F}}{{\Omega_{\cal F}}}
\textup def{\Omega_{\cal A}}{{\Omega_{\cal A}}}
\textup def\textup{e}{\textup{e}}
\textup def\textup{i}{\textup{i}}
\textup def{\rm{Fr}}{{\rm{Fr}}}
\textup def{\rm{Ma}}{{\rm{Ma}}}
\textup def{\rm{Re}}{{\rm{Re}}}
\textup def{\rm{Kn}}{{\rm{Kn}}}
\textup def{\rm{Rd}}{{\rm{Rd}}}
\textup def{\rm{Le}}{{\rm{Le}}}
\textup def\displaystylea{{\rm{Da}}}
\textup def{\rm{Ka}}{{\rm{Ka}}}
\textup def{\rm{Nu}}{{\rm{Nu}}}
\textup def{\rm{Sc}}{{\rm{Sc}}}
\textup def{\rm{Ri}}{{\rm{Ri}}}
\textup def{\rm{Ec}}{{\rm{Ec}}}
\textup def{\rm{Tu}}{{\rm{Tu}}}
\textup def{\rm{St}}{{\rm{St}}}
\textup def{\rm{Mi}}{{\rm{Mi}}}
\textup def{\rm{Ra}}{{\rm{Ra}}}
\textup def{\mbox{\boldmath$a$}}{{\mbox{\boldmath$a$}}}
\textup def{\mbox{\boldmath$b$}}{{\mbox{\boldmath$b$}}}
\textup def{\mbox{\boldmath$B$}}{{\mbox{\boldmath$B$}}}
\textup def{\mbox{\boldmath$c$}}{{\mbox{\boldmath$c$}}}
\textup def\textup dvec{{\mbox{\boldmath$d$}}}
\textup def\textup{e}vec{{\mbox{\boldmath$e$}}}
\textup def{\mbox{\boldmath$F$}}{{\mbox{\boldmath$F$}}}
\textup def{\mbox{\boldmath$N$}}{{\mbox{\boldmath$N$}}}
\textup def{\mbox{\boldmath$f$}}{{\mbox{\boldmath$f$}}}
\textup def{\mbox{\boldmath$g$}}{{\mbox{\boldmath$g$}}}
\textup def{\mbox{\boldmath$h$}}{{\mbox{\boldmath$h$}}}
\textup def\textup{i}vec{{\mbox{\boldmath$i$}}}
\textup def{\mbox{\boldmath$j$}}{{\mbox{\boldmath$j$}}}
\textup def{\mbox{\boldmath$k$}}{{\mbox{\boldmath$k$}}}
\textup def\partialvec{{\mbox{\boldmath$p$}}}
\textup def{\mbox{\boldmath$P$}}{{\mbox{\boldmath$P$}}}
\textup def{\mbox{\boldmath$u$}}{{\mbox{\boldmath$u$}}}
\textup def{\mbox{\boldmath$U$}}{{\mbox{\boldmath$U$}}}
\textup def{\mbox{\boldmath$n$}}{{\mbox{\boldmath$n$}}}
\textup def{\mbox{\boldmath$t$}}{{\mbox{\boldmath$t$}}}
\textup def{\mbox{\boldmath$R$}}{{\mbox{\boldmath$R$}}}
\textup def{\mbox{\boldmath$r$}}{{\mbox{\boldmath$r$}}}
\textup def{\mbox{\boldmath$s$}}{{\mbox{\boldmath$s$}}}
\textup def{\mbox{\boldmath$S$}}{{\mbox{\boldmath$S$}}}
\textup def{\mbox{\boldmath$x$}}{{\mbox{\boldmath$x$}}}
\textup def{\mbox{\boldmath$v$}}{{\mbox{\boldmath$v$}}}
\textup def{\mbox{\boldmath$w$}}{{\mbox{\boldmath$w$}}}
\textup def{\mbox{\boldmath$y$}}{{\mbox{\boldmath$y$}}}
\textup def{\mbox{\boldmath$m$}}{{\mbox{\boldmath$m$}}}
\textup def{\mbox{\boldmath$z$}}{{\mbox{\boldmath$z$}}}
\textup def{\mbox{\boldmath$X$}}{{\mbox{\boldmath$X$}}}
\textup def{\mbox{\boldmath$q$}}{{\mbox{\boldmath$q$}}}
\textup def{\mbox{\boldmath$0$}}{{\mbox{\boldmath$0$}}}
\textup def{\mbox{\boldmath$\xi$}}{{\mbox{\boldmath$\xi$}}}
\textup def{\boldsymbol{\wp}}{{\boldsymbol{\wp}}}
\textup def\partialsivec{{\mbox{\boldmath$\partialsi$}}}
\textup def{\varepsilon}vec{{\mbox{\boldmath${\varepsilon}ilon$}}}
\textup def\partialhivec{{\mbox{\boldmath$\partialhi$}}}
\textup def{\mbox{\boldmath$\varphi$}}{{\mbox{\boldmath$\varphi$}}}
\textup def{\mbox{\boldmath$\zeta$}}{{\mbox{\boldmath$\zeta$}}}
\textup def{\mbox{\boldmath$\kappa$}}{{\mbox{\boldmath$\kappa$}}}
\textup def{\pmb{\varkappa}}{{\partialmb{\varkappa}}}
\textup def\textup{e}tavec{{\mbox{\boldmath$\textup{e}ta$}}}
\textup def{\boldsymbol{\Psi}}{{\boldsymbol{\Psi}}}
\textup def{\boldsymbol{W}}{{\boldsymbol{W}}}
\textup def{\mbox{\boldmath$Y$}}{{\mbox{\boldmath$Y$}}}
\textup def{\mbox{\boldmath$V$}}{{\mbox{\boldmath$V$}}}
\textup def{\cal{L}}vec{{\boldsymbol{\cal{L}}}}
\textup def{\cal{M}}vec{{\boldsymbol{\cal{M}}}}
\textup def{\cal{H}}vec{{\boldsymbol{\cal{H}}}}
\textup def{\cal{V}}vec{{\boldsymbol{\cal{V}}}}
\textup def{\mbox{\boldmath$\omega$}}{{\mbox{\boldmath$\omega$}}}
\textup def{\mbox{\boldmath$\Omega$}}{{\mbox{\boldmath$\Omega$}}}
\textup def{\boldsymbol{\sigma}}{{\boldsymbol{\sigma}}}
\textup def{\underline{\underline{{A}}}}{{\underline{\underline{{A}}}}}
\textup def{\underline{\underline{{B}}}}{{\underline{\underline{{B}}}}}
\textup def{\underline{\underline{{\tau}}}}{{\underline{\underline{{\tau}}}}}
\textup def{\underline{\underline{{\sigma}}}}{{\underline{\underline{{\sigma}}}}}
\textup def{\underline{\underline{{C}}}}{{\underline{\underline{{C}}}}}
\textup def{\underline{\underline{{I}}}}{{\underline{\underline{{I}}}}}
\textup def{\underline{\underline{{S}}}}{{\underline{\underline{{S}}}}}
\textup def{\underline{\underline{{R}}}}{{\underline{\underline{{R}}}}}
\textup def{\underline{\underline{{T}}}}{{\underline{\underline{{T}}}}}
\textup def{\underline{\underline{{E}}}}{{\underline{\underline{{E}}}}}
\textup def{\underline{\underline{{t}}}}{{\underline{\underline{{t}}}}}
\textup def{\boldsymbol{\alpha}}{{\boldsymbol{\alpha}}}
\textup def\begin{eqnarray}tavec{{\boldsymbol{\begin{eqnarray}ta}}}
\textup def{\boldsymbol{\tau}}{{\boldsymbol{\tau}}}
\textup def{\boldsymbol{\theta}}{{\boldsymbol{\theta}}}
\textup def{\mbox{\boldmath$\lambda$}}{{\mbox{\boldmath$\lambda$}}}
\textup def{\cal{T}}mat{{\underline{\underline{{\cal{T}}}}}}
\textup def{\cal{L}}mat{{\underline{\underline{{\cal{L}}}}}}
\textup def{\cal{M}}mat{{\underline{\underline{{\cal{M}}}}}}
\textup def{\underline{\underline{{\kappa}}}}{{\underline{\underline{{\kappa}}}}}
\textup def\nabla{\nabla}
\textup def\displaystyleiv{\nabla \cdot}
\textup def\nabla^2{\nabla^2}
\begin{frontmatter}
\title{Entropy-Bounded Discontinuous Galerkin Scheme for Euler Equations}
\author{Yu Lv\corref{CORR1}}
\cortext[CORR1]{Corresponding author}
\ead{[email protected]}
\author{Matthias Ihme}
\ead{[email protected]}
\address{Department of Mechanical Engineering, Stanford University, Stanford, CA 94305, USA}
\begin{abstract}
An entropy-bounded Discontinuous Galerkin (EBDG) scheme is proposed in which the solution is regularized by constraining the entropy. The resulting scheme is able to stabilize the solution in the vicinity of discontinuities and retains the optimal accuracy for smooth solutions. The properties of the limiting operator according to the entropy-minimum principle are proved analytically, and an optimal CFL-criterion is derived. We provide a rigorous description for locally imposing entropy constraints to capture multiple discontinuities. Significant advantages of the EBDG-scheme are its general applicability to arbitrary high-order elements and its simple implementation for two- and three-dimensional configurations. Numerical tests confirm the properties of the scheme, and particular focus is attributed to the robustness in treating discontinuities on arbitrary meshes.
\end{abstract}
\end{frontmatter}
\tableofcontents
\section{\label{INTRO}Introduction}
The stabilization of solutions near flow-field discontinuities remains an open problem for the discontinuous Galerkin (DG) community. Considerable progress has been made on the development of limiters for two-dimensional quadrilateral and triangular elements. These limiters can be categorized into three classes. Methods that limit the solution using information about the slope along certain spatial directions \cite{TVBlim, TVBlim2D} fall in the first class. The second class of limiters extends this idea by limiting based on the moments of the solution~\cite{MOMLIM1_BISWAS,MOMLIM2_KRIVO}, and schemes in which the DG-solution is projected onto a WENO~\cite{WENOLIM1_QIU,WENOLIM_ZHU_2008,WENOLIM_ZHONG} or Hermite WENO (HWENO)~\cite{HWENOLIM_LUO} representation fall in the last category.
Although these limiters show promising results for canonical test cases on regular elements and structured mesh partitions, the following two issues related to practical applications have not been clearly answered:
\begin{itemize}
\item How can discontinuous solutions be regularized on multi-dimensional curved high-order elements?
\item How can non-physical solutions that are triggered by strong discontinuities and geometric singularities be avoided?
\end{itemize}
The present work attempts to simultaneously address both of these questions.
Recently, positivity-preserving DG-schemes have been developed for the treatment of flow-field discontinuities, and relevant contributions are by Zhang and Shu~\cite{ZHANG_SHU_JCP2010, ZHANG_SHU_JCP2011,ZHANG_SHU_NUMERMATH2010}. The positivity preserving method provides a robust framework with provable $L_1$-stability, preventing the appearance of negative pressure and density. Resulting algorithmic modifications are minimal, and these schemes have been used in simulations of detonation systems with complex reaction chemistry~\cite{LV_IHME_PCI_2014,LV_IHME_JCP_2014}.
Motivated by these attractive properties, the present work aims at developing an algorithm that avoids non-physical solutions on arbitrary elements and multi-dimensional spatial representations. The resulting scheme that will be developed in this work has the following properties: First, by invoking the entropy principle, solutions are constrained by a local entropy bound. Second, a general implementation on arbitrary elements is proposed without restriction to a specific quadrature rule. Third, the entropy constraint is imposed on the solutions through few algebraic operations, thereby avoiding the computationally expensive inversion of a nonlinear system. Fourth, a method for the evaluation of an optimal CFL-criterion is derived, which is applicable to general polynomial orders and arbitrary element types.
The remainder of this paper has the following structure. The governing equations and the discretization are summarized in the next two sections. The entropy-bounded DG (EBDG) formulation is presented in Sec.~\ref{SEC_ENTROPY_PRINCIPLE}, and the derivation of the CFL-constraint and the limiting operator are presented. This analysis is performed by considering a one-dimensional setting, and the generalization to multi-dimensional and arbitrary elements is presented in Sec.~\ref{SEC_GENERALIZATION}. Section~\ref{SEC_EVALUATION_ENTROPY_BOUND} is concerned with the evaluation of the entropy-bounded DG-scheme, and a detailed description of the algorithmic implementation is given in Sec.~\ref{SEC_ALG_IMPLEMENTATION}. The EBDG-method is demonstrated by considering several test cases, and the accuracy and stability are examined in Sec.~\ref{SEC_NUMERICAL_TEST}. The paper finishes with conclusions.
\section{\label{SEC_GOVERNING_EQUATIONS}Governing equations}
We consider a system of conservation equations,
\begin{equation}
\label{EQ_GOVERNING_EQU}
\frac{\partial \mathsf{U}}{\partial t} + \nabla \cdot \mathsf{F} = 0\qquad\text{in $\Omega$}\;,
\end{equation}
where the solution variable $\mathsf{U}:\mathbb{R}\times\mathbb{R}^{N_d} \rightarrow \mathbb{R}^{N_v}$ and the flux term $\mathsf{F}:\mathbb{R}^{N_v}\rightarrow \mathbb{R}^{N_v\times N_d}$. Here, $N_d$ denotes the spatial dimension and $N_v$ is the dimension of the solution vector. For the Euler equations, $\mathsf{U}$ and $\mathsf{F}$ take the form:
\begin{subeqnarray}
\label{EQ_EULER_FLUX}
\slabel{EQ_EULER_STATE}
\mathsf{U}(x,t) &=& (\rho, \rho u, \rho e)^T\;,\\
\slabel{EQ_EULER_FLUX}
\mathsf{F}(\mathsf{U}) & = & \left(\rho u, \rho u \otimes u + p \text{I}, u (\rho e+p)\right)^T\;,
\end{subeqnarray}
where $t$ is the time, $x\in \mathbb{R}^{N_d}$ is the spatial coordinate vector, $\rho$ is the density, $u \in \mathbb{R}^{N_d}$ is the velocity vector, $e$ is the specific total energy, and $p$ is the pressure. Equation (\ref{EQ_GOVERNING_EQU}) is closed with the ideal gas law:
\begin{equation}
p = (\gamma - 1) \left(\rho e - \frac{\rho |u|^2}{2}\right)\;,
\end{equation}
in which $\gamma$ is the ratio of specific heats, which, for the present work, is set to a constant value of $\gamma=1.4$. Here and in the following, we use $|\cdot|$ to represent the Euclidean norm. With this, we define the local maximum characteristic speed as,
\begin{eqnarray}
\nu = |u| + c\qquad\text{with}\qquad c = \sqrt{\frac{\gamma p}{\rho}}\;,
\end{eqnarray}
where $c$ is the speed of sound.
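
For concreteness, the following minimal Python sketch (added here for illustration and not part of the original formulation) evaluates the primitive variables, the speed of sound, and the local maximum characteristic speed $\nu$ from a one-dimensional conservative state; the state layout $(\rho, \rho u, \rho e)$ and $\gamma = 1.4$ follow the definitions above.
\begin{verbatim}
import numpy as np

GAMMA = 1.4  # ratio of specific heats used throughout

def primitives(U):
    """Conservative state U = (rho, rho*u, rho*e) -> primitives (rho, u, p)."""
    rho, mom, ener = U
    u = mom / rho
    p = (GAMMA - 1.0) * (ener - 0.5 * rho * u**2)  # ideal-gas law
    return rho, u, p

def max_char_speed(U):
    """Local maximum characteristic speed nu = |u| + c, c = sqrt(gamma*p/rho)."""
    rho, u, p = primitives(U)
    return abs(u) + np.sqrt(GAMMA * p / rho)

# example: rho = 1, u = 0.5, p = 1  ->  nu = 0.5 + sqrt(1.4) ~ 1.68
U = np.array([1.0, 0.5, 1.0 / (GAMMA - 1.0) + 0.5 * 1.0 * 0.5**2])
print(max_char_speed(U))
\end{verbatim}
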
Because of the presence of discontinuities in the solution of Eq.~(\ref{EQ_GOVERNING_EQU}), we seek a weak solution that satisfies physical principles. This is the so-called entropy solution. By introducing ${\cal{U}}$ as a convex function of $\mathsf{U}$ with ${\cal{U}}: \mathbb{R}^{N_v} \rightarrow \mathbb{R}$, Lax~\cite{LAX_ENTROPY_BOUND_1971} showed that the entropy solution of Eq.~(\ref{EQ_GOVERNING_EQU}) satisfies the following inequality:
\begin{equation}
\label{EQ_ENTROPY_GOVERNING_EQU}
\frac{\partial \mathcal{U}}{\partial t} + \nabla \cdot \mathcal{F} \leq 0\;,
\end{equation}
where ${\cal{F}}: \mathbb{R}^{N_v} \rightarrow \mathbb{R}^{N_d}$ is the corresponding flux of $\mathcal{U}$. The consistency condition between Eqs.~(\ref{EQ_GOVERNING_EQU}) and~(\ref{EQ_ENTROPY_GOVERNING_EQU}) requires~\cite{LAX_ENTROPY_BOUND_1971}:
\begin{equation}
\left(\frac{\partial \mathcal{U}}{\partial \mathsf{U}} \right)^T\frac{\partial \mathsf{F}}{\partial \mathsf{U}} = \frac{\partial \mathcal{F}}{\partial \mathsf{U}}\;.
\end{equation}
The weak solution of Eq.~(\ref{EQ_GOVERNING_EQU}) that satisfies this additional condition for the pair $(\mathcal{U}, \mathcal{F})$ is called an entropy solution. With this definition, Eq.~(\ref{EQ_ENTROPY_GOVERNING_EQU}) is commonly called entropy inequality or entropy condition, and $\mathcal{U}$ is called the entropy variable. A familiar example for gas-dynamic applications is to relate ${\cal{U}}$ to the physical entropy $s$ with:
\begin{equation}
\label{ENTROPY_DEFINITION}
s = \ln(p) - \gamma \ln(\rho) + s_0\;,
\end{equation}
where $s_0$ is the reference entropy. The corresponding definition of the entropy variable and its flux in the context of the Euler system is \cite{TADMOR_1986}:
\begin{equation}
\label{EQ_ENTROPY_DEF}
(\mathcal{U}, \mathcal{F})=(-\rho s, -\rho s u)\;.
\end{equation}
Note that Eq.~(\ref{ENTROPY_DEFINITION}) directly provides a constraint on the positivity of pressure $p$ and density $\rho$.
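
As an illustration of this constraint, the short Python sketch below (an addition for illustration, not part of the original text) evaluates the physical entropy of Eq.~(\ref{ENTROPY_DEFINITION}) with $s_0 = 0$ and the equivalent algebraic admissibility test $p \geq \exp(s_{\min})\,\rho^\gamma$ that is used later when constraining the solution.
\begin{verbatim}
import numpy as np

GAMMA = 1.4

def pressure(U):
    rho, mom, ener = U
    return (GAMMA - 1.0) * (ener - 0.5 * mom**2 / rho)

def physical_entropy(U, s0=0.0):
    """s = ln(p) - gamma*ln(rho) + s0; finite only if p > 0 and rho > 0."""
    return np.log(pressure(U)) - GAMMA * np.log(U[0]) + s0

def satisfies_entropy_bound(U, s_min, s0=0.0):
    """Algebraic form of s(U) >= s_min: p - exp(s_min - s0)*rho^gamma >= 0."""
    return pressure(U) - np.exp(s_min - s0) * U[0]**GAMMA >= 0.0
\end{verbatim}
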
\section{\label{SEC_DG_DISCRETIZATION}Discontinuous Galerkin discretization}
We consider the problem to be posed on the domain $\Omega$ with boundary $\partial \Omega$. A mesh partition is defined as $\Omega = \cup_{e=1}^{N_e} \Omega_e$, where $\Omega_e$ corresponds to a discrete element of this partition. The edge of element $\Omega_e$ is defined as $\partial \Omega_e$. In order to distinguish different sides of the edge, the superscripts ``$+$" and ``$-$" are used to denote the interior and exterior, respectively. We define a global space of test functions as
\begin{equation}
\mathpzc{V} = \oplus_{e=1}^{N_e} \mathpzc{V}_e\;,\qquad\mathpzc{V}_e = \text{span}\{\varphi_n(\Omega_e)\}_{n=1}^{N_p}\;,
\end{equation}
where $\varphi_n$ is the $n$th polynomial basis, and $N_p$ is the number of bases. On the space $\mathpzc{V}_e$ we seek an approximate solution to Eq.~(\ref{EQ_GOVERNING_EQU}) of the form:
\begin{equation}
\mathsf{U} \simeq U = \oplus_{e=1}^{N_e} U_e\;,\qquad U_e \in \mathpzc{V}_e\;,
\end{equation}
where the solution vector $U_e$ on each individual element takes the general form
\begin{equation}
U_e(x,t) = \sum_{m=1}^{N_p} \widetilde{U}_{e,m}(t) \varphi_m(x)\;,
\end{equation}
and the unknown vector of basis coefficients $\widetilde{U}_{e,m} \in \mathbb{R}^{N_v \times N_p}$ is obtained from the discretized weak form of Eq.~(\ref{EQ_GOVERNING_EQU}):
\begin{equation}
\label{WEAK_FORM}
\frac{d \widetilde{U}_{e,m}}{d t}\int_{\Omega_e}\varphi_n \varphi_m d\Omega - \int_{\Omega_e} \nabla\varphi_n \cdot F(U_e)d\Omega + \int_{\partial \Omega_e} \varphi^+_n \wh{F}(U^{+}_e, U^{-}_e, \wh{\textmd{n}}) d\Gamma = 0\;,
\end{equation}
$\forall \varphi_n$ with $n=1, \ldots,N_p$. The numerical Riemann flux $\wh{F}$ is evaluated based on the states at both sides of the interface $\partial \Omega_e$ and the outward-pointing normal vector $\wh{\textmd{n}}$. It is of interest to note that for the particular case of $N_p=1$ and $\varphi_1 = 1$, the weak form reduces to the classical first-order finite-volume (FV) discretization. It can also be seen that the DG-scheme does not rely on a specific type of basis functions. Since the following derivation is based on this mathematical property, we introduce the following lemma.
\begin{lemma}
\label{LEMMA_BASIS_SWITCH}
A polynomial $P$,
\begin{eqnarray}
\label{EQ_LM_POLYNOMIAL1}
P(x) = \sum_{m=1}^{N_p} \widetilde{P}_m \varphi_m(x)\qquad \text{for $x\in \Omega_e$}\;,
\end{eqnarray}
with a set of polynomial bases $\{\varphi_m(x), m = 1,\ldots,N_p\}$, can be exactly interpolated by a Lagrangian polynomial of $N_p$ points $\{y_n \in \Omega_e,~n = 1,\ldots,N_p\}$ under the condition that $\big[\varphi_m(y_n)\big]$ is non-singular:
\begin{eqnarray}
\label{EQ_LM_POLYNOMIAL2}
P(x) = \sum_{n=1}^{N_p} P(y_n)\phi_n(x)\qquad \text{for $x\in \Omega_e$}\;.
\end{eqnarray}
\\
Proof: By equating Eqs.~(\ref{EQ_LM_POLYNOMIAL1}) and (\ref{EQ_LM_POLYNOMIAL2}), and comparing terms, it follows that
\begin{eqnarray}
\label{EQ_BASIS_SWITCHING_TENSOR}
\varphi_m(x) = \sum_{n=1}^{N_p} \varphi_m(y_n) \phi_n(x) \qquad \text{or}\qquad [\varphi_m(x)] = [\varphi_m(y_n)] ~ [\phi_n(x)]\;,
\end{eqnarray}
where we use $[\cdot]$ to denote a tensor or a vector. Since $[\varphi_m(y_n)]$ is non-singular, Eq.~(\ref{EQ_BASIS_SWITCHING_TENSOR}) can be inverted:
\begin{eqnarray*}
[\phi_n(x)] = [\varphi_m(y_n)]^{-1} ~ [\varphi_m(x)]\;.
\end{eqnarray*}
\end{lemma}
\begin{remark}
The significance of this lemma is that it provides a description to convert any basis set to a Lagrangian basis set with $N_p$ interpolation points, as long as these points are in general position. To facilitate the following derivation, we choose the points $y_n$ with $n=1,\ldots,N_p$ from the $N_q$ quadrature points~\cite{CUBATURE_FORMULAS}. According to the accuracy requirement of the quadrature scheme for Eq.~(\ref{WEAK_FORM}), $N_q \geq N_p$ always holds.
\end{remark}
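
To make Lemma~\ref{LEMMA_BASIS_SWITCH} concrete, the following Python sketch (illustrative only; the choice of a Legendre modal basis and Gauss points on the reference element $[-1,1]$ is an assumption made here) converts a modal expansion into its Lagrangian representation through the generalized Vandermonde matrix $[\varphi_m(y_n)]$ and verifies that both representations coincide.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legvander, leggauss

Np = 4                                  # number of basis functions (here P3)
y, _ = leggauss(Np)                     # Np interpolation points y_n
V = legvander(y, Np - 1)                # V[n, m] = varphi_m(y_n); non-singular

P_tilde = np.array([1.0, -0.3, 0.7, 0.2])   # modal coefficients of P(x)
P_at_y = V @ P_tilde                        # nodal values P(y_n)

x = np.linspace(-1.0, 1.0, 7)
phi_x = legvander(x, Np - 1)                # varphi_m(x) at sample points
lagr_x = phi_x @ np.linalg.inv(V)           # [phi_n(x)] = [varphi_m(y_n)]^{-1}[varphi_m(x)]

# the nodal interpolation reproduces the modal polynomial exactly
assert np.allclose(lagr_x @ P_at_y, phi_x @ P_tilde)
\end{verbatim}
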
\section{\label{SEC_ENTROPY_PRINCIPLE}Entropy principle and entropy-bounded discontinuous Galerkin method}
In this section, we review the entropy principle by considering a three-point FV-setting. Then, we will explore how to extend this principle to a DG-scheme, which leads to the concept of entropy boundedness. In order to enable the implementation of this concept, two important ingredients will be discussed, namely a time-step constraint and a limiting operator. After conducting numerical analyses by considering a one-dimensional configuration, we will extend the entropy boundedness to multi-dimensional and arbitrary element types. The dimensional generality, geometric adaptability and simple implementation are major advantages of the resulting entropy-bounded DG-method.
\subsection{\label{ENTROPY_PRINCIPLE_THREE-POINT_FVM}Preliminaries and related work}
To illustrate the entropy principle, we consider a local Lax-Friedrichs flux, which can be written as:
\begin{equation}
\label{LAX_FRIEDRICHS_FLUX}
\wh{F}(U_L, U_R, \wh{\textmd{n}}) = \frac{1}{2}\left(F(U_L) + F(U_R)\right) \cdot \wh{\textmd{n}} - \frac{1}{2}\lambda(U_R - U_L)\;,
\end{equation}
and
\begin{eqnarray*}
\lambda \geq \max\limits_{k \in \{L, R\}} \nu (U_k)
\end{eqnarray*}
is the dissipation coefficient. Note that this flux function satisfies consistency: $\wh{F}(U, U, \wh{\textmd{n}}) = F(U) \cdot \wh{\textmd{n}}$, conservation: $\wh{F}(U_L, U_R, \wh{\textmd{n}}) = - \wh{F}(U_R, U_L, -\wh{\textmd{n}})$, and Lipschitz-continuity. In the following, we consider the simplest case of the DGP0 scheme, with $N_p = 1$, in a one-dimensional setting. This formulation is consistent with the classical three-point FV-discretization. For $x \in \Omega_e = [x_{_{e-1/2}},~ x_{_{e+1/2}}]$, the discretized solution to Eq.~(\ref{EQ_GOVERNING_EQU}) can be written as:
\begin{eqnarray}
\label{1D_FVM}
\nonumber
\widetilde{U}_e(t+\Delta t) & = & \widetilde{U}_e -\frac{\Delta t}{h}\left(\wh{F}(\widetilde{U}_e, \widetilde{U}_{e-1}, -1) + \wh{F}(\widetilde{U}_e, \widetilde{U}_{e+1}, 1) \right)\;,
\end{eqnarray}
where $\widetilde{U}_e$ is the basis coefficient, which is identical to the piecewise constant approximation to the exact solution in $\Omega_e$. In the following, we introduce $\widetilde{U}_e^{\Delta t}$ to denote the solution vector $\widetilde{U}_e(t+\Delta t)$, and use the superscript $\Delta t$ to denote a temporally updated quantity at $t+\Delta t$. With the numerical flux given in Eq.~(\ref{LAX_FRIEDRICHS_FLUX}), this discretization preserves the positivity of pressure and density under the CFL-condition \cite{ZHANG_SHU_JCP2010, PERTHAME_SHU_POSI_1996}:
\begin{equation}
\label{1D_CFL_CONDI}
\frac{\Delta t \lambda}{h} \leq \frac{1}{2}\;.
\end{equation}
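
A minimal Python sketch of this three-point scheme (an illustration added here, not part of the original presentation) combines the local Lax-Friedrichs flux of Eq.~(\ref{LAX_FRIEDRICHS_FLUX}) with the update of Eq.~(\ref{1D_FVM}) and selects the time step from the CFL-condition of Eq.~(\ref{1D_CFL_CONDI}).
\begin{verbatim}
import numpy as np

GAMMA = 1.4

def primitives(U):
    rho, mom, ener = U
    u = mom / rho
    p = (GAMMA - 1.0) * (ener - 0.5 * rho * u**2)
    return rho, u, p

def euler_flux(U):
    rho, u, p = primitives(U)
    return np.array([rho * u, rho * u**2 + p, u * (U[2] + p)])

def nu(U):
    rho, u, p = primitives(U)
    return abs(u) + np.sqrt(GAMMA * p / rho)

def llf_flux(UL, UR, n, lam):
    """Local Lax-Friedrichs flux in 1D; n is the outward normal (+1 or -1)."""
    return 0.5 * (euler_flux(UL) + euler_flux(UR)) * n - 0.5 * lam * (UR - UL)

def p0_update(Um1, U0, Up1, h):
    """One forward-Euler step of the three-point (DGP0) scheme at the CFL limit."""
    lam = max(nu(Um1), nu(U0), nu(Up1))
    dt = 0.5 * h / lam                 # dt*lam/h <= 1/2
    return U0 - dt / h * (llf_flux(U0, Um1, -1.0, lam)
                          + llf_flux(U0, Up1, +1.0, lam))
\end{verbatim}
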
In addition, it was discussed in \cite{PERTHAME_SHU_POSI_1996} that Eq.~(\ref{1D_FVM}) satisfies the discrete minimum entropy principle proposed by Tadmor~\cite{TADMOR_1986},
\begin{equation}
\label{TADMOR_ENTROPY_PRIN}
s(\widetilde{U}_e^{\Delta t}) \geq {s}^0_e(t) = \min\limits_{j\in \{e-1, e, e+1\}}s(\widetilde{U}_j) .
\end{equation}
To show this property, we can rewrite Eq.~(\ref{1D_FVM}) and split $\widetilde{U}_e(t + \Delta t)$ into two parts. For $x \in \Omega_e$, this is written as
\begin{subeqnarray}
\label{DISCRET_SPLIT}
\slabel{DISCRET_SPLIT_I}
\widetilde{U}_e(t + \Delta t) & = & \f{1}{2}\left(\widetilde{U}_{e,p1}^{\Delta t} + \widetilde{U}_{e,p2}^{\Delta t} \right)\;,\\
\slabel{DISCRET_SPLIT_P1}
\widetilde{U}^{\Delta t}_{e,p1} &=& \widetilde{U}_e -\frac{\Delta t}{h}\left( F(\widetilde{U}_{e+1}) - \lambda_{_{e+1/2}}\widetilde{U}_{e+1} - F(\widetilde{U}_{e}) + \lambda_{_{e+1/2}}\widetilde{U}_{e} \right)\;,\\
\slabel{DISCRET_SPLIT_P2}
\widetilde{U}^{\Delta t}_{e,p2} &=& \widetilde{U}_e + \frac{\Delta t}{h}\left(F(\widetilde{U}_{e-1}) + \lambda_{_{e-1/2}}\widetilde{U}_{e-1} - F(\widetilde{U}_{e}) - \lambda_{_{e-1/2}}\widetilde{U}_{e} \right)\;,
\end{subeqnarray}
where $\widetilde{U}_{e,p1}^{\Delta t}$ and $\widetilde{U}_{e,p2}^{\Delta t}$ can be viewed as the P0-approximations to the solutions of the hyperbolic systems (under the CFL constraint of Eq. (\ref{1D_CFL_CONDI}))
\begin{subeqnarray}
\label{SYSTEM_INEQ_MODIFIED}
\slabel{SYSTEM_INEQ_MODIFIED_RIGHT}
\frac{\partial \mathsf{U}}{\partial t} + \left(\mathsf{F}'(\mathsf{U}) - \lambda_{_{e+1/2}} \rm{I} \right) \frac{\partial \mathsf{U}}{\partial x}= 0\;,\\
\slabel{SYSTEM_INEQ_MODIFIED_LEFT}
\frac{\partial \mathsf{U}}{\partial t} + \left(\mathsf{F}'(\mathsf{U}) + \lambda_{_{e-1/2}}\rm{I}\right) \frac{\partial \mathsf{U}}{\partial x } = 0\;,
\end{subeqnarray}
with the exact (Godunov) flux. If we denote the exact solutions to Eq.~(\ref{SYSTEM_INEQ_MODIFIED_RIGHT}) and (\ref{SYSTEM_INEQ_MODIFIED_LEFT}) as $\mathsf{U}_{p1}(x, t + \Delta t)$ and $\mathsf{U}_{p2}(x, t + \Delta t)$, respectively, then their P0-approximations in $\Omega_e$ yield $\widetilde{U}_{e,p1}^{\Delta t} = \frac{1}{h} \int_{x_{e-1/2}}^{x_{e+1/2}} \mathsf{U}_{p1}(x, t + \Delta t) dx$ and $\widetilde{U}_{e,p2}^{\Delta t} = \frac{1}{h} \int_{x_{e-1/2}}^{x_{e+1/2}} \mathsf{U}_{p2}(x, t + \Delta t) dx$. Both equation systems are obtained by imposing a constant shift on the characteristic speeds without modifying the characteristic variables. With these modifications, all characteristics in Eq.~(\ref{SYSTEM_INEQ_MODIFIED_RIGHT}) are right-running while those in Eq.~(\ref{SYSTEM_INEQ_MODIFIED_LEFT}) are left-running. The corresponding entropy inequalities take the form
\begin{subeqnarray}
\label{ENTROPY_INEQ_MODIFIED}
\slabel{ENTROPY_INEQ_MODIFIED_RIGHT}
\frac{\partial \mathcal{U}}{\partial t} + \frac{\partial}{\partial x} \left(\mathcal{F} - \lambda_{_{e+1/2}}\mathcal{U} \right) \leq 0\;,\\
\slabel{ENTROPY_INEQ_MODIFIED_LEFT}
\frac{\partial \mathcal{U}}{\partial t} + \frac{\partial}{\partial x} \left(\mathcal{F} + \lambda_{_{e-1/2}}\mathcal{U} \right)\leq 0\;.
\end{subeqnarray}
Without loss of generality, we now consider Eq. (\ref{ENTROPY_INEQ_MODIFIED_RIGHT}) and integrate over $[t,t+\Delta t]\times[x_{e-1/2},x_{e+1/2}]$, resulting in the following expression:
\begin{equation}
\begin{split}
\int\limits_{x_{e-1/2}}^{x_{e+1/2}}\mathcal{U}\big(\mathsf{U}_{p1}(x, t+{\Delta t})\big) dx -
\int\limits_{x_{e-1/2}}^{x_{e+1/2}} \mathcal{U}(\widetilde{U}_e)dx +
\int\limits_t^{t+\Delta t}\left(\mathcal{F}(\mathsf{U}(x_{_{e+1/2}},t)) - \lambda_{_{e+1/2}}\mathcal{U}(\mathsf{U}(x_{_{e+1/2}},t)) \right) dt \\
- \int\limits_t^{t+\Delta t}\left(\mathcal{F}(\mathsf{U}(x_{_{e-1/2}},t)) - \lambda_{_{e-1/2}}\mathcal{U}(\mathsf{U}(x_{_{e-1/2}},t)) \right) dt \leq 0\;.
\end{split}
\end{equation}
Recognizing that all characteristics are right-running, the temporal integral can be evaluated exactly since $\mathsf{U}(x_{_{e-1/2}},t) = \widetilde{U}_e$ and $\mathsf{U}(x_{_{e+1/2}},t) = \widetilde{U}_{e+1}$ under the condition of Eq.~(\ref{1D_CFL_CONDI}). Then by utilizing the convexity of $\mathcal{U}$ with respect to $\mathsf{U}$, the following estimate is obtained:
\begin{eqnarray*}
\mathcal{U}(\widetilde{U}_{e, p1}^{\Delta t}) & = & \mathcal{U}\left(\frac{1}{h}\int_{x_{e-1/2}}^{x_{e+1/2}}\mathsf{U}_{p1}(x, t+\Delta t)dx \right) \leq \frac{1}{h}\int_{x_{e-1/2}}^{x_{e+1/2}}\mathcal{U}(\mathsf{U}_{p1}(x, t+\Delta t)) dx\;\\
& \leq & \mathcal{U}(\widetilde{U}_e) + \frac{\Delta t}{h}\left(\mathcal{F}(\widetilde{U}_{e}) - \lambda_{_{e+1/2}} \mathcal{U}(\widetilde{U}_{e}) \right) - \frac{\Delta t}{h}\left( \mathcal{F}(\widetilde{U}_{e+1}) - \lambda_{_{e+1/2}} \mathcal{U}(\widetilde{U}_{e+1}) \right) \;.
\end{eqnarray*}
With the definition of $({\cal{U}}, {\cal{F}})$, given in Eq.~(\ref{EQ_ENTROPY_DEF}), it follows
\begin{eqnarray}
s(\widetilde{U}_{e,p1}^{\Delta t}) \geq \frac{\rho_e}{\rho_{e,p1}^{\Delta t}} \left[1- \frac{\Delta t}{h}\left(\lambda_{_{e+1/2}} - u_e\right)\right] s(\widetilde{U}_e) + \frac{\rho_{e+1}}{\rho_{e,p1}^{\Delta t}} \frac{\Delta t}{h}\left(\lambda_{_{e+1/2}} - u_{e+1} \right) s(\widetilde{U}_{e+1})\;.
\end{eqnarray}
The constraint (\ref{1D_CFL_CONDI}) ensures that the coefficients in front of $s(\widetilde{U}_{e})$ and $s(\widetilde{U}_{e+1})$ are positive and sum to unity according to Eq. (\ref{DISCRET_SPLIT_P1}). From these arguments, it directly follows that:
\begin{equation}
s(\widetilde{U}_{e,p1}^{\Delta t}) \geq \min\{s(\widetilde{U}_e), s(\widetilde{U}_{e+1})\}\;,
\end{equation}
and
\begin{equation}
s(\widetilde{U}_{e,p2}^{\Delta t}) \geq \min\{s(\widetilde{U}_{e-1}), s(\widetilde{U}_{e})\}\;.
\end{equation}
Combining these two relations with the quasi-concavity of the entropy $s$ (Lemma 2.1 of \cite{ZHANG_SHU_NUMERMATH2010}), the discrete minimum entropy principle of Eq. (\ref{TADMOR_ENTROPY_PRIN}) is obtained.
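
The splitting argument can also be checked numerically. The Python sketch below (added for illustration; the randomly sampled admissible states are an assumption of this test) evaluates the two partial updates of Eqs.~(\ref{DISCRET_SPLIT_P1}) and (\ref{DISCRET_SPLIT_P2}) at the CFL limit and verifies the entropy inequalities stated above as well as the minimum principle for their average.
\begin{verbatim}
import numpy as np

GAMMA = 1.4
rng = np.random.default_rng(0)

def cons(rho, u, p):
    return np.array([rho, rho * u, p / (GAMMA - 1.0) + 0.5 * rho * u**2])

def flux(U):
    rho, mom, ener = U
    u = mom / rho
    p = (GAMMA - 1.0) * (ener - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, u * (ener + p)])

def entropy(U):
    rho, mom, ener = U
    p = (GAMMA - 1.0) * (ener - 0.5 * mom**2 / rho)
    return np.log(p) - GAMMA * np.log(rho)

def nu(U):
    rho, mom, ener = U
    u = mom / rho
    p = (GAMMA - 1.0) * (ener - 0.5 * rho * u**2)
    return abs(u) + np.sqrt(GAMMA * p / rho)

for _ in range(1000):
    Um1, U0, Up1 = (cons(rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0),
                         rng.uniform(0.5, 2.0)) for _ in range(3))
    lam = max(nu(Um1), nu(U0), nu(Up1))
    r = 0.5 / lam                      # dt/h at the CFL limit dt*lam/h = 1/2
    # partial updates, Eqs. (DISCRET_SPLIT_P1) and (DISCRET_SPLIT_P2)
    Up_1 = U0 - r * (flux(Up1) - lam * Up1 - flux(U0) + lam * U0)
    Up_2 = U0 + r * (flux(Um1) + lam * Um1 - flux(U0) - lam * U0)
    assert entropy(Up_1) >= min(entropy(U0), entropy(Up1)) - 1e-12
    assert entropy(Up_2) >= min(entropy(Um1), entropy(U0)) - 1e-12
    # the full DGP0 update is the average of the two partial updates
    s_new = entropy(0.5 * (Up_1 + Up_2))
    assert s_new >= min(entropy(Um1), entropy(U0), entropy(Up1)) - 1e-12
\end{verbatim}
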
The result above is obtained for a one-dimensional setting. However, in the following we derive a rotational version of this entropy principle for multi-dimensional cases by following the idea of~\cite{PERTHAME_SHU_POSI_1996}. By considering an arbitrary face with a normal $\wh{\textmd{n}}$, we can define a tangential vector $\wh{\textmd{t}}$. By projecting the velocity in Cartesian coordinates onto this local coordinate $(\wh{\textmd{n}},~ \wh{\textmd{t}})$ with the following mapping,
\begin{eqnarray*}
u ~\rightarrow ~(u_n,~u_t )^T\;,
\end{eqnarray*}
with
\begin{eqnarray*}
u_n = u \cdot \wh{\textmd{n}}, \qquad~ u_t = u \cdot \wh{\textmd{t}}\;,
\end{eqnarray*}
the conservative variables and flux terms can then be written as follows
\begin{subeqnarray}
\label{3D_AUG_SYS}
\slabel{3D_AUG_SYS_U}
\mathsf{U} &=& (\rho,~\rho u_n,~\rho u_t,~\rho e_{n},~\rho e_{t})^T,\;\\
\slabel{3D_AUG_SYS_F}
\mathsf{F} &=& (\rho u_{n},~\rho u_{n}^2 + p,~\rho u_{t} u_{n},~ u_{n}(\rho e_{n} + p ),~\rho u_{n}e_{t})^T\;,
\end{subeqnarray}
and
\begin{eqnarray*}
e_{n} = \frac{p}{\rho(\gamma - 1)} + \frac{1}{2}u_{n}^2, \qquad~e_{t} = \frac{1}{2} u_{t}^2.
\end{eqnarray*}
For this augmented system, it can be seen that the variations of density, pressure and entropy are all governed by a 1D reduced system that is parallel to $\wh{\textmd{n}}$:
\begin{subeqnarray}
\label{3D_RED_SYS}
\slabel{3D_RED_SYS_U}
\mathsf{U} &=& (\rho,~\rho u_n,~\rho e_n)^T,\;\\
\slabel{3D_RED_SYS_F}
\mathsf{F} &=& (\rho u_n,~\rho u_n^2 + p,~ u_n(\rho e_n + p ))^T,\;
\end{subeqnarray}
and its discretized version is identical to Eq. (\ref{1D_FVM}). Thus, the solution preserves the positivity of density and pressure and satisfies the entropy principle under the CFL-constraint, Eq. (\ref{1D_CFL_CONDI}), with $\lambda \geq \max\limits_{j \in \{e-1, e, e+1\}} \nu(U_j)$.
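
For illustration, the following Python sketch (an addition, restricted here to two dimensions) performs the projection underlying Eq.~(\ref{3D_AUG_SYS_U}): it rotates a Cartesian conservative state into the face-aligned frame $(\wh{\textmd{n}}, \wh{\textmd{t}})$ and assembles the augmented state $(\rho, \rho u_n, \rho u_t, \rho e_n, \rho e_t)$; the particular choice of the unit tangent is an assumption made in this sketch.
\begin{verbatim}
import numpy as np

GAMMA = 1.4

def rotate_state_2d(U, nhat):
    """U = (rho, rho*ux, rho*uy, rho*e), nhat a unit face normal."""
    nhat = np.asarray(nhat, dtype=float)
    that = np.array([-nhat[1], nhat[0]])      # one admissible unit tangent
    rho, mx, my, rhoe = U
    ux, uy = mx / rho, my / rho
    un = ux * nhat[0] + uy * nhat[1]
    ut = ux * that[0] + uy * that[1]
    p = (GAMMA - 1.0) * (rhoe - 0.5 * rho * (ux**2 + uy**2))
    en = p / (rho * (GAMMA - 1.0)) + 0.5 * un**2   # e_n
    et = 0.5 * ut**2                               # e_t
    return np.array([rho, rho * un, rho * ut, rho * en, rho * et])
\end{verbatim}
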
We now conclude the above analysis with the following lemma as a critical element for the subsequent derivation.
\begin{lemma}
\label{LEMMA_THREEPOINT_ROT}
For a three-point system defined on ${\mathbb{R}}^{N_d}$, the solution along an arbitrary direction {$\wh{\textmd{n}}$},
\begin{eqnarray}
\label{ROT_FVM_DISC}
\widetilde{U}_e^{\Delta t} = \widetilde{U}_e - \frac{\Delta t}{h}\left(\wh{F}(\widetilde{U}_e, \widetilde{U}_{e-1}, - \wh{\textmd{n}}) + \wh{F}(\widetilde{U}_e, \widetilde{U}_{e+1}, \wh{\textmd{n}})\right)\;,
\end{eqnarray}
with the flux function $\wh{F}$ specified in Eq. (\ref{LAX_FRIEDRICHS_FLUX}), preserves the positivity of density and pressure, and satisfies the entropy principle:
\begin{eqnarray}
\label{ROT_ENTROPY_PRIN}
s(\widetilde{U}_e^{\Delta t}) \geq \min\limits_{j \in \{e-1, e, e+1\}} s(\widetilde{U}_j)\;,
\textup{e}nd{eqnarray}
under the CFL condition:
\begin{eqnarray*}
\frac{\Delta t \lambda}{h } \leq \frac{1}{2},~\qquad \lambda \geq \max\limits_{j \in \{e-1, e, e+1\}} \nu(\widetilde{U}_j).
\end{eqnarray*}
\end{lemma}
This three-point system is consistent with that used by Zhang and Shu~\cite{ZHANG_SHU_JCP2010, ZHANG_SHU_NUMERMATH2010}. The difference is that we introduce a local entropy bound $s_e^0$ at time $t$ instead of a global entropy bound that is derived from the initial conditions $\min\limits_{x\in\Omega} s(x, 0)$. Although a local Lax-Friedrichs flux was used for illustrative purposes, other Riemann solvers that preserve positivity and entropy stability are equally suitable, for example, the Roe-type solver with entropy fix \cite{ROE_ENTROPY_FIX_TADMOR}, the kinetic-type solver \cite{KINE_ENTROPY_SOLVER_1994} and the exact Godunov solver.
\subsection{\label{SEC_ENTROPY_BOUNDED_DG}Entropy-bounded DG-scheme}
To robustly capture shocks while retaining the high-order benefit of the DG-scheme, sub-cell shock resolution is required \cite{ZJWANG_REVIEW_2013}. We now extend the discussion by considering a high-order DG-solution with sub-cell representation. In each DG-cell, the whole solution is approximated by a function space. However, there is no guarantee that the high-order ($N_p > 1$) solution obeys the physical entropy principle. This is the reason that DG suffers from numerical instability in the vicinity of discontinuities. To suppress these instabilities, one approach is to consider imposing constraints based on the behavior of the entropy solution. The positivity-preserving DG-method \cite{ZHANG_SHU_JCP2010, ZHANG_SHU_JCP2011} is a successful example for this approach. Based on the entropy principle, Eq.~(\ref{TADMOR_ENTROPY_PRIN}), Zhang and Shu \cite{ZHANG_SHU_NUMERMATH2010} extended their implementation to an entropy-based constraint. Here, we propose a general framework that is based on the entropy principle, and major differences and advantages have been highlighted in Sec. \ref{INTRO}.
We define the constraint for the high-order DG-scheme as follows:
\begin{eqnarray}
\label{ENTROPY_PRINCIPLE_FOR_DG}
\forall x \in \Omega_e,~~s(U_e^{\Delta t}(x)) \geq \min\{s(U(y))|~ y \in \Omega_e \cup \partial \Omega_e^-\} \equiv s_e^0(t)\;.
\end{eqnarray}
In this equation, the right-hand-side sets an entropy bound for an element-local solution in $\Omega_e$; with this, we refer to a DG-solution as \emph{entropy-bounded} if it satisfies this principle. $s_e^0(t)$ is a local estimate for the true entropy minimum in $\Omega_e$, $|s_e^0(t) - \min\limits_{x \in \Omega_e} s\left(\mathsf{U}(x)\right)| \sim \mathcal{O}(h^k)$, where $k$ is the local order of accuracy. Besides that, $s_e^0(t)$ is bounded if the entropy is bounded at the domain boundaries, $s_e^0(t)\geq \min\limits_{x\in \Omega}s(U(x, t=0)) = s^0$, where $s^0$ is the minimum entropy at the initial condition. By imposing this constraint, we expect that the sub-cell DG-solution is regularized, avoiding the appearance of non-physical solutions. This idea is illustrated in Fig. \ref{EB_IDEA_DEMO}. At time level $t$, $s_e^0(t)$ is calculated and used to set a reference bound for the solution at the next step, $U_e^{\Delta t}$. If $U_e^{\Delta t}$ yields an entropy undershoot with respect to $s_e^0(t)$, it will be modified to satisfy the constraint of Eq.~(\ref{ENTROPY_PRINCIPLE_FOR_DG}). In order to implement this regularization for a high-order DG scheme, the following aspects require addressing:
\begin{itemize}
\item[i] To impose Eq.~($\ref{ENTROPY_PRINCIPLE_FOR_DG}$) on the DG-solution, we introduce a limiting operator ${\cal{L}}$. The regularized solution, denoted by $^\mathcal{L}U_e^{\Delta t}$, requires that $s(^\mathcal{L}U_e^{\Delta t}(x)) \geq s_e^0(t) $ $\forall x \in \Omega_e$. In the following, we relax this condition, and impose Eq.~(\ref{ENTROPY_PRINCIPLE_FOR_DG}) only on the set of quadrature points, $\mathcal{D}$, that are involved in solving the weak form in Eq. (\ref{WEAK_FORM}).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth, clip=, keepaspectratio]{EBDGidea}
\caption{\label{EB_IDEA_DEMO}(Color online) Schematic of entropy-bounding of the EBDG scheme.}
\end{figure}
\item[ii] Guaranteeing that the constraint (\ref{ENTROPY_PRINCIPLE_FOR_DG}) is always imposed requires the existence of the operator ${\cal{L}}$. A sufficient condition for this is that the element-averaged solution is entropy-bounded, $s(\ol{U}_e^{\Delta t}) \geq s_e^0(t)$. Enforcing this condition relies on the selection of a proper CFL-condition, and this analysis will be developed in Sec.~\ref{DERIVE_CFL_1DDG} for a one-dimensional system; it is subsequently extended in Sec.~\ref{SUBSEC_CFL_CONSTRAINT_ARB_ELEMENT} to general multi-dimensional elements.
\item[iii] Algorithmic details on the implementation of the operator ${\cal{L}}$ constraining the element-local DG-solution are discussed in Sec.~\ref{1D_LIMITER_DESIGN}.
\item[iv] The evaluation of the lower bound $s_e^0(t)$ that is necessary to constrain the entropy solution is given in Sec.~\ref{SEC_EVALUATION_ENTROPY_BOUND}.
\end{itemize}
\subsection{\label{DERIVE_CFL_1DDG}CFL-constraint for one-dimensional entropy-bounded DG}
The objective now is to extend the analysis for DGP0 to a DG-scheme with high-order polynomial representations. Consider a one-dimensional domain in which the element $\Omega_e$ is centered at $x_e$, and a quadrature rule with weights $w_q$, $\sum_{q=1}^{N_q}w_q = 1$, associated with the quadrature points $x_q \in [x_{e-1/2},~x_{e+1/2}]$. The discretized cell-averaged solution $\ol{U}_e$ is defined as:
\begin{eqnarray}
\ol{U}_e=\f{1}{h}\int_{\Omega_e} U_e d x,
\end{eqnarray}
(for the P0-case discussed above, $\ol{U}_e = \wt{U}_e$), which can be further expanded by a quadrature rule with sufficient accuracy:
\begin{equation}
\begin{split}
\overline{U}_e
\label{1DDG_CELL_AVE}
= & \sum\limits_{q=1}^{N_q}w_qU_e(x_q)\;,\\
= & \sum_{q=1}^{N_q}\left({w_q} - \theta_l\phi_q(x_{e-1/2}) -
\theta_r \phi_q(x_{e+1/2})\right)U_e(x_q) + \theta_l U_e(x_{e-1/2})+ \theta_rU_e(x_{e+1/2})\;,\\
= & \sum_{q=1}^{N_q}\theta_q U_e(x_q)+ \theta_l U_e(x_{e-1/2})+ \theta_rU_e(x_{e+1/2})\;,
\end{split}
\end{equation}
where the first line utilizes the exactness of the quadrature rule, the second line utilizes Lemma~\ref{LEMMA_BASIS_SWITCH}, and the third line defines $\theta_q =w_q - \theta_l\phi_q(x_{e-1/2}) - \theta_r \phi_q(x_{e+1/2})$. Under the condition that $\theta_{r,l} > 0$ and $\theta_q \geq 0$, the last line of Eq.~(\ref{1DDG_CELL_AVE}) is a convex combination. Since the quadrature weights $w_q$ are positive, the existence of $\theta_{r,l}$ is guaranteed through the condition $w_q\geq \theta_l\phi_q(x_{e-1/2}) + \theta_r \phi_q(x_{e+1/2})$. If $\phi_q(x_{e\pm {1}/{2}})>0$, $\theta_{r,l}$ is constrained as $(0, \min_q{w_q/\max\{\phi_q(x_{e\pm {1}/{2}})\}}]$. If some of $\phi_q(x_{e\pm {1}/{2}})$ are negative, they are not essential in setting the upper bound for $\theta_{r,l}$.
\begin{remark}
In the following, $\theta_{r,l}$ will be related to a CFL-constraint. To obtain an optimal CFL-number, the largest value of $\theta_{r,l}$ needs to be found. This can be formulated as a maximization problem subject to the constraints, $\theta_{r,l} > 0$ and $\theta_q \geq 0$.
\end{remark}
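
A small Python sketch of this maximization is given below (added here for illustration; it assumes a Lagrangian basis through $N_q = N_p = 3$ Gauss-Legendre points on the reference element $[-1,1]$ with the weights normalized to unit sum, and solves the resulting linear program with SciPy).
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.optimize import linprog

def lagrange_at(x, nodes, q):
    """Lagrange cardinal function phi_q(x) for the given interpolation nodes."""
    val = 1.0
    for j, yj in enumerate(nodes):
        if j != q:
            val *= (x - yj) / (nodes[q] - yj)
    return val

Nq = 3
y, w = leggauss(Nq)
w = w / 2.0                                  # normalize so that sum_q w_q = 1

phi_l = np.array([lagrange_at(-1.0, y, q) for q in range(Nq)])
phi_r = np.array([lagrange_at(+1.0, y, q) for q in range(Nq)])

# variables (theta_l, theta_r, t); maximize t = min(theta_l, theta_r)
A_ub = np.vstack([np.column_stack([phi_l, phi_r, np.zeros(Nq)]),  # theta_q >= 0
                  [[-1.0, 0.0, 1.0], [0.0, -1.0, 1.0]]])          # t <= theta_l, theta_r
b_ub = np.concatenate([w, [0.0, 0.0]])
res = linprog(c=[0.0, 0.0, -1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
theta_l, theta_r, t_opt = res.x
print(theta_l, theta_r, 0.5 * t_opt)   # 0.5*min(theta_l, theta_r) bounds dt*lam/h
\end{verbatim}
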
For illustration, we fully discretize Eq.~(\ref{WEAK_FORM}) using a forward Euler time integration scheme and insert the results from Eq. (\ref{1DDG_CELL_AVE}). The element-averaged solution in $\Omega_e$ is then updated as:
\begin{eqnarray}
\label{DG_MEAN_SOLUTION}
\overline{U}_e^{\Delta t}
& = &\overline{U}_e - \frac{\Delta t}{h}\left(\wh{F}(U_e(x_{e-1/2}), U_{e-1}(x_{e-1/2}), - 1) + \wh{F}(U_e(x_{e+1/2}), U_{e+1}(x_{e+1/2}), 1)\right)\;,\nonumber\\
& = & \sum_{q=1}^{N_q}\theta_q U_e(x_q) +
\nonumber\\
&& \theta_l U_e(x_{e-1/2}) - \frac{\Delta t}{h}\left(\wh{F}(U_e(x_{e-1/2}), U_{e-1}(x_{e-1/2}), -1) + \wh{F}(U_e(x_{e-1/2}), U_e^*, 1)\right) + \nonumber
\\
&& \theta_r U_e(x_{e+1/2}) - \frac{\Delta t}{h}\left(\wh{F}(U_e(x_{e+1/2}), U_e^*, -1) + \wh{F}(U_e(x_{e+1/2}),U_{e+1}(x_{e+1/2}), 1)\right)\;,
\end{eqnarray}
where
\begin{eqnarray}
\label{EXP_U_STAR}
U_e^* = \frac{1}{2}\left(U_e(x_{e-1/2}) + U_e(x_{e+1/2})\right)- \frac{1}{2\lambda} \left( F(U_e(x_{e+1/2})) - F(U_e(x_{e-1/2})) \right)
\end{eqnarray}
is introduced to simplify subsequent analyses. Note that $U_e^*$ ensures the validity of the second equality in Eq. (\ref{DG_MEAN_SOLUTION}) with a $\lambda$ that is defined in the following lemma. We can see that Eq. (\ref{DG_MEAN_SOLUTION}) contains two three-point systems discussed in Sec.~\ref{ENTROPY_PRINCIPLE_THREE-POINT_FVM}. To guarantee that $\overline{U}_e^{\Delta t}$ is entropy-bounded, it is necessary that these systems conform to the entropy principle of Eq.~(\ref{TADMOR_ENTROPY_PRIN}). This leads to the following lemma.
\begin{lemma}
\label{ENTROPY_THREE_ELEMENT_DG}
For a one-dimensional DG-system, the element-averaged solution satisfies the entropy principle
\begin{equation}
\label{MEAN_ENTROPY_BOUNDED_1D_DG}
s(\overline{U}_e^{\Delta t}) \geq s^0_e(t) = \min\{s(U(y))|~ y\in \Omega_e \cup \partial\Omega_e^- \}\;,
\end{equation}
under the CFL-constraint
\begin{equation}
\label{CFL_CONDITION_1D_DG}
\frac{\Delta t \lambda}{h} \leq \frac{1}{2} \min \left\{\theta_l, \theta_r\right\},
\end{equation}
where $\lambda$ is the maximum wave speed that is evaluated over the set of point-wise solutions
\begin{eqnarray}
\lambda \geq \max_{U}~\nu(U)\qquad\text{with}\qquad U \in \{U_{e-1}(x_{e-1/2}), U_{e}(x_{e\pm1/2}), U_{e+1}(x_{e+1/2})\}\;,
\end{eqnarray}
given the conditions $\theta_{r,l} > 0 $ and $\theta_q \geq 0$.
\\
Proof: First, we need to show that $U_e^*$ satisfies the discretized entropy principle of Eq.~(\ref{TADMOR_ENTROPY_PRIN}). This is indeed the case since $U_e^*$ is essentially the left-hand-side of Eq. (\ref{DISCRET_SPLIT_P1}) with the time step taken as $\Delta t = h/(2\lambda)$, corresponding to the upper bound of the CFL-constraint. Since $U_e^*$ is an entropy solution, $\nu(U_e^*)$ is bounded by $\lambda$, which makes the Lax-Friedrichs flux $\wh{F}$ involving $U_e^*$ in Eq. (\ref{DG_MEAN_SOLUTION}) valid according to the definition in Eq.~(\ref{LAX_FRIEDRICHS_FLUX}). Therefore, we have
\begin{eqnarray*}
s(U_e^*) &\geq& \min\left\{s(U_e(x_{e-1/2})), s(U_e(x_{e+1/2}))\right\}\;.
\end{eqnarray*}
Second, we reformulate Eq. (\ref{DG_MEAN_SOLUTION}) as
\begin{eqnarray}
\label{DG_MEAN_SOLUTION_REF}
\overline{U}_e^{\Delta t} =\sum_{q=1}^{N_q}\theta_q U_e(x_q) + \theta_l \overline{U}_{e,p1}^{\Delta t} + \theta_r \overline{U}_{e,p2}^{\Delta t}\;,
\end{eqnarray}
in which $\overline{U}_{e,p1}^{\Delta t}$ and $\overline{U}_{e,p2}^{\Delta t}$ are the two updated solutions of the three-point system. Their definitions are readily obtained by comparing Eqs. (\ref{DG_MEAN_SOLUTION_REF}) and (\ref{DG_MEAN_SOLUTION}). The given constraints, $\theta_{r,l} > 0 $ and $\theta_q \geq 0$, guarantee that the form of the convex combination in Eq. (\ref{DG_MEAN_SOLUTION_REF}) always holds. According to Lemma \ref{LEMMA_THREEPOINT_ROT}, it follows
\begin{eqnarray*}
s(\overline{U}_{e,p1}^{\Delta t}) &\geq& \min\left\{s(U_e^*), s(U_e(x_{e-1/2})), s(U_{e-1}(x_{e-1/2})) \right\}\;,\\
s(\overline{U}_{e,p2}^{\Delta t}) &\geq& \min\left\{s(U_e^*), s(U_e(x_{e+1/2})), s(U_{e+1}(x_{e+1/2}))\right\}\;,
\end{eqnarray*}
under the given CFL-constraint, Eq. (\ref{CFL_CONDITION_1D_DG}). Combining this with the quasi-concavity of entropy~\cite{ZHANG_SHU_NUMERMATH2010}, it follows
\begin{eqnarray*}
s(\overline{U}_e^{\Delta t}) &\geq& \min\left\{s(U_e(x_q)),s(\overline{U}_{e,p1}^{\Delta t}), s(\overline{U}_{e,p2}^{\Delta t}) \right\}\;,\\
& \geq & \min\{s(U(y))|~ y\in \Omega_e \cup \partial \Omega_e^- \}\;.
\end{eqnarray*}
\end{lemma}
\begin{remark}
Equation (\ref{CFL_CONDITION_1D_DG}) ensures the positivity of $p(\overline{U}_e^{\Delta t})$ and $\rho(\overline{U}_e^{\Delta t})$.
\end{remark}
In this context, we emphasize that the CFL-constraint (\ref{CFL_CONDITION_1D_DG}) guarantees entropy boundedness and does not conflict with the general CFL-constraint for linear stability, CFL$^{\rm L}$. To distinguish between the two constraints, we use CFL$^{\rm EB}$ to denote the CFL-number that guarantees entropy boundedness. In general, the time step has to be selected to satisfy both criteria. Equation~(\ref{CFL_CONDITION_1D_DG}) shows that CFL$^{\rm EB}$ depends on the value $\min \{\theta_l, \theta_r\}$, and a rigorous evaluation of this dependence is given below.
Although we consider the specific case of a forward Euler time discretization, all derivations and conclusions are directly applicable to any explicit Runge-Kutta (RK) method with positive coefficients, since the RK-solution is a convex combination of solutions obtained from several forward Euler sub-steps. In practice, RK-methods are preferred as DG time-integration schemes due to their compatible stability properties \cite{COCKBURN_SHU_JSC01}.
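To make this carry-over concrete, the following minimal, non-authoritative sketch writes a three-stage SSP-RK scheme as convex combinations of forward Euler sub-steps; the residual and limiter callables are placeholders, and the limiter is applied at the end of each stage as in the implementation described later.
\begin{verbatim}
def euler_substep(u, dt, residual):
    # One forward Euler sub-step: u + dt * R(u).
    return u + dt * residual(u)

def ssprk33_step(u, dt, residual, limiter):
    # Shu-Osher SSP-RK3: every stage is a convex combination of forward
    # Euler sub-steps, so an entropy bound that holds for a single Euler
    # step carries over; the limiting operator L is applied per stage.
    u1 = limiter(euler_substep(u, dt, residual))
    u2 = limiter(0.75 * u + 0.25 * euler_substep(u1, dt, residual))
    return limiter(u / 3.0 + 2.0 / 3.0 * euler_substep(u2, dt, residual))
\end{verbatim}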
\subsection{\label{1D_LIMITER_DESIGN}Construction of a limiting operator $\mathcal{L}$}
Following Lemma~\ref{ENTROPY_THREE_ELEMENT_DG}, the entropy constraint is imposed on the set of quadrature points, $x \in \mathcal{D}\subset \Omega_e$. For the one-dimensional case, ${\cal{D}}$ is:
\begin{eqnarray}
\mathcal{D} = \{x_{e\pm1/2}, x_q, q=1,\ldots,N_q ~(N_q \geq N_p) \}\;.
\end{eqnarray}
In the following, we are concerned with the construction of a limiting operator ${\cal{L}}$, such that
\begin{eqnarray}
\label{EQ_ENTROPY_1D_DG_APPL}
\forall x \in \mathcal{D}, \qquad s(^{{\cal{L}}}U_e^{\Delta t}(x)) \geq s^0_e(t) \;.
\end{eqnarray}
Since the operator ${\cal{L}}$ is applied at the end of each sub-iteration, we will omit the superscript $\Delta t$ in the subsequent analysis. According to the entropy definition (\ref{ENTROPY_DEFINITION}), Eq.~(\ref{EQ_ENTROPY_1D_DG_APPL}) can be written as:
\begin{equation}
\label{OPERATOR_CONSTRAINT}
p(^{{\cal{L}}}U_e(x)) \geq \exp(s^0_e)\rho^\gamma (^{{\cal{L}}}U_e(x))\qquad\forall x\in \mathcal{D}\;.
\end{equation}
To define the operator ${\cal{L}}$, we follow the work of Zhang and Shu~\cite{ZHANG_SHU_JCP2010,ZHANG_SHU_NUMERMATH2010}, and
introduce a linear scaling:
\begin{equation}
\label{PROJECTED_SOLUTION}
^\mathcal{L}U_e = U_e + \varepsilon (\overline{U}_e - U_e)\;.
\end{equation}
The parameter $\varepsilon$ is then determined by substituting Eq.~(\ref{PROJECTED_SOLUTION}) into Eq.~(\ref{OPERATOR_CONSTRAINT}) and by applying Jensen's inequality:
\begin{eqnarray}
\label{EQ_EPSILON_RELATION}
p\left((1- \varepsilon) U_e + \varepsilon \overline{U}_e\right) & \geq &(1-\varepsilon) p(U_e) + \varepsilon p(\overline{U}_e) \;,\nonumber\\
& \geq & \exp({s}^0_e) \left[(1-\varepsilon) \rho^\gamma(U_e) + \varepsilon \rho^\gamma(\overline{U}_e) \right]\;,\\
& \geq & \exp({s}^0_e) \rho^\gamma \left((1- \varepsilon) U_e + \varepsilon \overline{U}_e\right)\;.\nonumber
\end{eqnarray}
Solving for $\varepsilon$ gives
\begin{equation}
\label{EVALUATE_VAREPSILON}
\varepsilon = \frac{\tau}{\tau - [p(\overline{U}_e) - \exp({s}^0_e)\rho^\gamma(\overline{U}_e)]}\qquad\text{with}\qquad
\tau = \min\left\{0,~\min_{x\in \mathcal{D}}\{ p(U_e(x)) - \exp({s}^0_e) \rho^\gamma(U_e(x))\}\right\}\;,
\end{equation}
which is subject to the conditions
\begin{equation}
\label{CONSTRAINTS_ON_MEAN}
\rho(\overline{U}_e) > 0\;,\qquad
p(\overline{U}_e) >\exp(s^0_e) \rho^\gamma(\overline{U}_e)\;.
\end{equation}
These conditions are automatically guaranteed through the CFL$^{\rm EB}$-constraint of Lemma \ref{ENTROPY_THREE_ELEMENT_DG}. While the positivity condition for pressure is embedded in Eq.~(\ref{EQ_EPSILON_RELATION}), the positivity of density must be imposed for all $x \in \mathcal{D}$ before $\mathcal{L}$ is applied; the methodology for this is presented in \cite{ZHANG_SHU_JCP2011}.
Compared to the limiting operator presented in \cite{ZHANG_SHU_NUMERMATH2010}, the method proposed here is substantially simplified. Specifically, the step for imposing the positivity of pressure is avoided; in addition, $\varepsilon$ is obtained from an algebraic relation and does not require a computationally expensive Newton iteration. It is also noted that the operator ${\cal{L}}$ contains the positivity-preserving limiter as a special case, which is obtained by setting $s_e^0 \to -\infty$.
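For illustration, the following sketch implements the linear scaling of Eqs.~(\ref{PROJECTED_SOLUTION}) and (\ref{EVALUATE_VAREPSILON}) for a single element; it assumes one-dimensional conservative states $(\rho, \rho u, E)$, a calorically perfect gas with $\gamma=1.4$, and the entropy definition $s = \ln(p\rho^{-\gamma})$, and all function and variable names are illustrative only.
\begin{verbatim}
import numpy as np

GAMMA = 1.4

def pressure(U):
    # Pressure of 1D conservative states U = [rho, rho*u, E].
    rho, mom, E = U[..., 0], U[..., 1], U[..., 2]
    return (GAMMA - 1.0) * (E - 0.5 * mom**2 / rho)

def entropy_limiter(U_quad, U_mean, s0):
    # U_quad: (N, 3) states at the points of the set D; U_mean: (3,) element
    # average (assumed to satisfy Eq. (CONSTRAINTS_ON_MEAN)); s0: entropy bound.
    r_quad = pressure(U_quad) - np.exp(s0) * U_quad[..., 0]**GAMMA
    r_mean = pressure(U_mean) - np.exp(s0) * U_mean[0]**GAMMA
    tau = min(0.0, r_quad.min())
    eps = 0.0 if tau == 0.0 else tau / (tau - r_mean)   # eps lies in [0, 1)
    return (1.0 - eps) * U_quad + eps * U_mean, eps
\end{verbatim}
Choosing a very low value of $s_0$ effectively deactivates the entropy part of the correction, in line with the positivity-preserving special case noted above.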
\subsection{Numerical analysis of the limiting operator $\mathcal{L}$}
In this section, numerical properties of the limiting operator are examined.
\paragraph{Conservation}
Integrating Eq. (\ref{PROJECTED_SOLUTION}) over $\Omega_e$,
\begin{equation}
\nonumber
\int_{\Omega_e} {}^\mathcal{L} U_e dx
= (1-\varepsilon) \int_{\Omega_e} U_e dx + \varepsilon \overline{U}_e \int_{\Omega_e} dx
= \int_{\Omega_e} U_e dx\;,
\end{equation}
confirms that the limiting operator preserves the conservation properties of the solution vector.
\paragraph{Stability} Since the positivity of density and pressure is preserved at the quadrature points, $\mathcal{L}$ is $L_1$-stable, which was shown in~\cite{ZHANG_SHU_JCP2010, ZHANG_SHU_NUMERMATH2010}. Here, we extend this stability analysis and evaluate the $L_2$-stability. By considering a periodic domain and taking the $L_2$-norm of Eq.~(\ref{PROJECTED_SOLUTION}) we obtain:
\begin{subeqnarray}
||^\mathcal{L}U_e||^2 & = & \int_{\Omega_e} [U_e + \varepsilon ( \overline{U}_e - U_e)]^2 dx\;,\\
& = & (1-\varepsilon)^2 \int_{\Omega_e} U_e^2 dx - \varepsilon (\varepsilon-2) \int_{\Omega_e} \overline{U}_e^2 dx\;,\\
& \leq & (1-\varepsilon)^2 \int_{\Omega_e} U_e^2 dx - \varepsilon (\varepsilon-2) \int_{\Omega_e} U_e^2 dx\;,\\
& \leq & ||U_e||^2\;.
\end{subeqnarray}
After integrating over the entire domain, we obtain
\begin{eqnarray*}
||^\mathcal{L}U||^2_{\Omega} \leq ||U||^2_{\Omega}\;,
\end{eqnarray*}
which shows that $\mathcal{L}$ does not affect the stability of the DG-discretization. Further, since ${\cal{L}}$ constrains pressure and density, $\lambda$ in Eq.~(\ref{CFL_CONDITION_1D_DG}) provides a robust CFL-criterion, without the need for arbitrarily reducing $\Delta t$ to increase the stability region.
\paragraph{Accuracy} In regions where the solution is smooth, we assume that the weak solution before limiting has optimal accuracy:
\begin{eqnarray*}
||U - \mathsf{U} ||_\Omega \leq C_1 h^{p+1}\;,
\end{eqnarray*}
and that undershoots in entropy remain small, so that $\varepsilon \sim {\cal{O}}(h^p)$. Thus, the error is estimated as follows:
\begin{subeqnarray}
||^\mathcal{L} U - \mathsf{U}||_\Omega^2 & = & \sum_e ||^\mathcal{L}U_e - \mathsf{U}_e||^2\;,\\
& = & \sum_e ||\varepsilon (\overline{U}_e - \mathsf{U}_e) + (1-\varepsilon) (U_e-\mathsf{U}_e) ||^2\;,\\
& \leq & \sum_e \left( 2\varepsilon^2||(\overline{U}_e-\mathsf{U}_e) ||^2 + 2(1-\varepsilon)^2 ||U_e-\mathsf{U}_e ||^2 \right)\;,\\
& \leq & C_2 h^{2p+2}\;,
\end{subeqnarray}
where, for simplicity, $\mathsf{U}_e$ denotes the element-wise restriction of $\mathsf{U}$. Here we use the fact that $\overline{U}_e$ is locally a first-order approximation to $\mathsf{U}_e$, $\overline{U}_e = \mathsf{U}_e + C^e_3 {\cal{O}}(h)$.
In the vicinity of a discontinuity, the DG-solution loses its regularity, so that the convergence rate reduces to first order: $||U - \mathsf{U} ||_\Omega \leq C_4 h.$
Triggered by spurious sub-cell solutions, the entropy undershoot can be very large, so that $\varepsilon \sim {\cal{O}}(1) $. By repeating the above argument, we obtain an estimate for the accuracy of the discontinuous solution:
\begin{eqnarray*}
||^\mathcal{L}U - \mathsf{U} ||_\Omega \leq C_5 h.
\end{eqnarray*}
The accuracy arguments given here are substantiated through numerical tests in Sec. \ref{SEC_NUMERICAL_TEST}.
\section{\label{SEC_GENERALIZATION}Generalization to multi-dimension and arbitrary elements}
The entropy-bounded DG scheme that was presented for one-dimensional systems in the previous section can be generalized to arbitrary elements in multi-dimensions. This extension is the subject of the following analysis.
Since EBDG does not rely on a specific quadrature rule, any quadrature method can be used as long as it integrates the weak form with sufficient accuracy and has positive quadrature weights. The limiting procedure requires the definition of a new set of quadrature points $\mathcal{D}$ for the general multi-dimensional setting; the selection of these points is given in the next section. The extension to arbitrary elements also requires special consideration of the CFL$^{\rm EB}$ number.
\subsection{\label{SEC_GENERALIZATION_MULT_EXT}Generalization to multi-dimension and arbitrary elements}
To present a general formulation for multi-dimensional configurations, we first introduce the notation needed to describe general elements with curvature. For this, we define a geometric mapping function $\Phi: \mathbb{R}^{N_d} \rightarrow \mathbb{R}^{N_d}$ on a reference element $\Omega_e^{\rm{r}}$, such that $x = \Phi(r)$ maps any point $r \in \Omega_e^{\rm{r}}$ onto $x \in \Omega_e$, and $\mathcal{J} = [\partial x/\partial r ]$ is the geometric Jacobian. With these specifications, we can write the discretized state vector as:
\begin{equation}
U_e(x, t) = \sum_{m=1}^{N_p} ~\widetilde{U}_{e,m}(t)\varphi_m(r)\;,\qquad x = x(r) \in \Omega_e\;,\qquad \forall r \in \Omega_e^{\rm{r}}\;.
\end{equation}
The mapping function is commonly parameterized by a polynomial function $x(r) = \sum_{m=1}^{N_g} \widetilde{x}_m\varphi^g_m(r) $, where $\varphi^g_m(r)$ is a Lagrange interpolation basis and $N_g$ is the number of geometric bases used to represent $\Omega_e$. Since the reference element is regular, we can use a subspace of $r$ to parameterize the element edges. Therefore, to parameterize the $k$th edge of $\Omega_e$ we define $g_k = \mathcal{P}_k(r) \in \mathbb{R}^{N_d-1}$ such that $\forall r \in \partial \Omega_{e, k}^{\rm{r}}$, $r = \mathcal{P}^{-1}_{k}(g_k)$, in which $\mathcal{P}^{-1}_k$ is the pseudo-inverse of $\mathcal{P}_k$. For the physical element, the edge can be represented as:
\begin{equation}
\partial \Omega_{e,k} = \left\{x \in {\Omega}_e ~|~ x = \Phi(r),~ r = \mathcal{P}^{-1}_k (g_k) \in \partial \Omega_{e,k}^{\rm{r}}\right\}\;.
\end{equation}
The integral in Eq.~(\ref{WEAK_FORM}) is evaluated using multi-dimensional quadrature rules. Independent of the dimensionality, we follow the quadrature convention that $\sum_{v=1}^{N_q} w_v = V_e^{\rm{r}}$ (the volume of $\Omega_e^{\rm{r}}$) and $\sum_{q=1}^{N_q^k} w_{k,q} = S_{e,k}^{\rm{r}}$ (the area of $\partial \Omega_{e,k}^{\rm{r}}$). With these preliminaries, we can evaluate any volume integral in Eq.~(\ref{WEAK_FORM}) as:
\begin{equation}
\int_{\Omega_e} f(x) dx = \int_{\Omega_e^{\rm{r}}} |\mathcal{J}(r)| f(x(r)) dr =
\sum_{v=1}^{N_q}|\mathcal{J}(r_v)| f(x(r_v))w_v\;.
\end{equation}
The surface integral of a scalar function on $\partial \Omega_{e,k}$ can be written as:
\begin{eqnarray}
\int_{\partial \Omega_{e,k}} f(x) d\Gamma &=& \int_{\partial \Omega_{e,k}^{\rm{r}}} f(x(g_k)) |\mathcal{J}_k^{\partial}| dg_k = \sum_{q=1}^{N_{q,k}^{\partial}}|\mathcal{J}_k^\partial(g_{k,q})|f(x(g_{k,q})) w_{k,q}\;,
\end{eqnarray}
and the surface integral of a vector-valued function is evaluated as:
\begin{eqnarray}
\int_{\partial \Omega_{e,k}}f(x) \cdot \wh{\textmd{n}} d\Gamma &=& \int_{\partial \Omega_{e,k}^{\rm{r}}} f(x(g_k)) \cdot \mathcal{J}^\partial_k dg_k = \sum_{q=1}^{N^{\partial}_{q,k}} |\mathcal{J}_k^\partial(g_{k,q})|f(x(g_{k,q})) \cdot \wh{\textmd{n}}(g_{k,q})w_{k, q}\;,
\end{eqnarray}
where $\mathcal{J}^\partial_k$ is the surface Jacobian, and $\widehat{\textmd{n}}$ refers to the unit vector ${\mathcal{J}^\partial_k}/{|\mathcal{J}^\partial_k|}$. Note that for a general element, the quadrature expression might be subject to a small numerical error bounded by ${\cal{O}}(h^{2p+1})$, given the accuracy requirement for integrating Eq.~(\ref{WEAK_FORM}).
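As a concrete illustration of this mapped quadrature convention, the sketch below evaluates a volume integral and a surface flux integral from tabulated reference-space data; the array names and shapes are assumptions made for the example only.
\begin{verbatim}
import numpy as np

def volume_integral(f_vals, jac_det, weights):
    # int_{Omega_e} f dx  ~=  sum_v |J(r_v)| f(x(r_v)) w_v,
    # with the weights normalized to the reference-element volume.
    return np.sum(jac_det * weights * f_vals)

def surface_flux_integral(f_vec, surf_jac, weights):
    # int_{dOmega_{e,k}} f . n dGamma  ~=  sum_q f(x(g_q)) . J_k(g_q) w_q,
    # where surf_jac holds the (non-normalized) surface Jacobian vectors.
    return np.sum(weights * np.einsum('qd,qd->q', f_vec, surf_jac))
\end{verbatim}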
With the above notation, we are now able to define the set of quadrature points $\mathcal{D}$ for general curved elements:
\begin{equation}
\mathcal{D} = \bigcup_{k=1}^{N_\partial} \{g_{k,q},~q=1,\ldots,N^\partial_{q,k}\} \cup \{r_v,~v=1,\ldots,N_q\}\;,
\end{equation}
where $N_\partial$ is the number of element edges (which is equal to the number of neighbor elements). In this context, it is noted that $\mathcal{D}$ includes all quadrature points that are involved in the integration. With this specification of $\mathcal{D}$, the limiting operator $\mathcal{L}$, developed in Sec.~\ref{1D_LIMITER_DESIGN}, can be directly extended to arbitrary elements on multi-dimensional configurations. In the following, a CFL$^{\rm EB}$-constraint is derived that extends the results of Lemma~\ref{ENTROPY_THREE_ELEMENT_DG}, thereby ensuring the existence of the general limiter $\mathcal{L}$.
\subsection{\label{SUBSEC_CFL_CONSTRAINT_ARB_ELEMENT}CFL-constraint}
Following the same approach as for the one-dimensional derivation in Sec. \ref{DERIVE_CFL_1DDG}, the element-averaged solution $\overline{U}_e^{\Delta t}$ is evaluated as:
\begin{subeqnarray}
\label{DG_MEAN_GENERAL_ELEMENT}
\overline{U}_e^{\Delta t} & = & \overline{U}_e - \frac{\Delta t}{V_e} \sum_{k=1}^{N_\partial}\int_{\partial \Omega_{e,k}} \wh{F}\left(U_e^{+}, U_e^{-}, \wh{\textmd{n}}\right) d\Gamma\;,\\
& = & \sum_{v=1}^{N_q} \frac{|\mathcal{J}(r_{v})| w_v}{V_e}U_e(r_v) - \sum_{k=1}^{N_\partial}\sum_{q=1}^{N_{q,k}^\partial} \frac{\Delta t |\mathcal{J}^\partial_k(g_{k,q})|w_{k,q}}{V_e} \wh{F}\left(U_e^+(r(g_{k,q})), U_e^-(r(g_{k,q})), \wh{\textmd{n}}(g_{k,q})\right) \;,\\
\nonumber
& = & \sum_{v=1}^{N_q}\theta_v U_e(r_v)
+ \sum_{k=1}^{N_\partial}\sum_{q=1}^{N^\partial_{q,k}} \left[\theta_{k,q} U_e^+(r(g_{k,q})) \right. \\
\slabel{DG_MEAN_GENERAL_ELEMENT_C}
&& \left. - {\Delta t \zeta_{k,q}} \left(\wh{F}\left(U_e^+(r(g_{k,q})), U_e^-(r(g_{k,q})), \wh{\textmd{n}}(g_{k,q})\right) + \wh{F}\left(U_e^+(r(g_{k,q})), U_e^*, -\wh{\textmd{n}}(g_{k,q})\right) \right) \right] \;,
\end{subeqnarray}
where
\begin{eqnarray}
\theta_v & = & \frac{|\mathcal{J}(r_v)| w_v}{V_e} - \sum_{k=1}^{N_\partial}\sum_{q=1}^{N^\partial_{q,k}} \theta_{k,q}\phi_v\left(r(g_{k,q})\right)\;
\end{eqnarray}
is introduced to decompose the volumetric quadrature so as to obtain $U_e^+(r(g_{k,q}))$. For notational simplification, we define
\begin{eqnarray}
\zeta_{k,q} & = & \frac{|\mathcal{J}_k^\partial(g_{k,q})|w_{k,q}}{V_e}\;,
\end{eqnarray}
such that $S_e=\sum_{k=1}^{N_\partial}\sum_{q=1}^{N^\partial_{q,k}}\zeta_{k,q}$ equals the surface area of $\Omega_e$ scaled by $1/V_e$, and $\sum_{k=1}^{N_\partial}\sum_{q=1}^{N^\partial_{q,k}}\zeta_{k,q} \wh{\textmd{n}}(g_{k,q}) =0$ since $\Omega_e$ has a closed surface. To apply the results from the three-point system to the multi-dimensional configuration, we introduce the auxiliary variable $U_e^*$:
\begin{eqnarray}
\label{MULTID_U_STAR}
U_e^* = \sum_{k=1}^{N_\partial}\sum_{q=1}^{N^\partial_{q,k}}\frac{\zeta_{k,q}}{S_e} \left[ U_e^+(r(g_{k,q})) -\frac{1}{\lambda^*} F(U_e^+(r(g_{k,q}))) \cdot \wh{\textmd{n}}(g_{k,q})\right]\;.
\end{eqnarray}
It can be shown that $U_e^*$ is essentially the solution to the following equation:
\begin{eqnarray}
\sum_{k=1}^{N_\partial}\sum_{q=1}^{N^\partial_{q,k}} \zeta_{k,q}
\wh{F}\left(U_e^+(r(g_{k,q})), U_e^*, -\wh{\textmd{n}}(g_{k,q})\right) = 0\;,
\end{eqnarray}
subject to a preselected dissipation coefficient $\lambda^*$, so that the equality in Eq.~(\ref{DG_MEAN_GENERAL_ELEMENT_C}) holds true. Here, we evaluate $\lambda^*$ from the following relation:
\begin{eqnarray}
\label{LAMBDA_STAR}
\lambda^* = \tau \max\left\{\nu(U)~|~U \in \{U_e^+(r(g_{k,q}))\}\right\},\qquad \tau = \max\left\{\sqrt{N_d}, \sqrt{2+\gamma(\gamma-1)}\right\}\;,
\end{eqnarray}
and the rationale for this selection is provided later in Remark~\ref{LAMBDA_RATIONALE}. To prove that $\overline{U}_e^{\Delta t}$ is entropy bounded, we present the following lemma.
\begin{lemma}
\label{LEMMA_ENTROPY_MULTID}
$U_e^*$ in Eq. (\ref{MULTID_U_STAR}) satisfies $s(U_e^*) \geq \min \left\{s(U)~|~ U \in \{U_e^+(r(g_{k,q}))\}\right\}$.
\\
Proof: For notational simplification, we combine the indices $k$ and $q$ into a single index $j$, and we denote the total number of surface quadrature points on $\partial\Omega_e$ by $N_{\rm{tot}}$, $N_{\rm{tot}} = \sum_{k=1}^{N_\partial} N^\partial_{q,k}$. To denote the $d$th component of a vector, we introduce the superscript $(d)$. Considering $\sum_{j=1}^{N_{\rm{tot}}} \zeta_j\wh{\textmd{n}}_j^{(d)} = 0$ and $\zeta_j >0$, the $d$th components of the surface-normal vectors have different signs. We sort $\wh{\textmd{n}}_j^{(d)}$ so that the first $N^>_{\rm{tot}}$ vector components are positive. The following statement is then true for any $d$:
\begin{eqnarray}
\sum\limits_{j=1}^{N_{\rm{tot}}^>} \zeta_j\wh{\textmd{n}}_j^{(d)} = -\sum\limits_{j=N_{\rm{tot}}^> + 1}^{N_{\rm{tot}}} \zeta_j\wh{\textmd{n}}_j^{(d)} = \sum_{n=1}^{N_{par}} l_n\;,
\end{eqnarray}
where $l_n$ introduces a partition as illustrated in Fig. \ref{PARTITION_IDEA_DEMO} and $N_{par}$ is the size of this partition. With this, we are able to introduce a variable mapping,
\begin{eqnarray*}
U_{n}^{s+} = U_e^+(r_j),~~&\text{if}&~\sum\limits_{i=1}^{j-1} \zeta_i\wh{\textmd{n}}_i^{(d)}< \sum\limits_{i=1}^n l_i \leq \sum\limits_{i=1}^{j}\zeta_i\wh{\textmd{n}}_i^{(d)} \;,\\
U_{n}^{s-} = U_e^+(r_j),~~&\text{if}&~-\sum\limits_{i=N_{\rm{tot}}^> + 1}^{j-1} \zeta_i\wh{\textmd{n}}_i^{(d)}< \sum\limits_{i=1}^n l_i \leq -\sum\limits_{i=N_{\rm{tot}}^> + 1}^{j}\zeta_i\wh{\textmd{n}}_i^{(d)} \;.
\end{eqnarray*}
With this, Eq. (\ref{MULTID_U_STAR}) is equivalent to:
\begin{eqnarray*}
U_e^* &=& \frac{1}{S_e}\left(\sum_{j=1}^{N_{\rm{tot}}} \zeta_j U_e^+(r_j) - \sum_{j=1}^{N_{\rm{tot}}} \frac{\zeta_j}{\lambda^*}F(U_e^+(r_j)) \cdot \wh{\textmd{n}}_j \right)\;,\\
& = & \sum_{j=1}^{N_{\rm{tot}}} \frac{\zeta_j}{S_e} \left(1 - \sum_{d=1}^{N_d} \frac{|\wh{\textmd{n}}_{j}^{(d)}|}{\sqrt{N_d}}\right) U_e^+(r_j) + \sum_{d=1}^{N_d} \sum_{j=1}^{N_{\rm{tot}}} \frac{1}{S_e}\left(\frac{\zeta_j|\wh{\textmd{n}}_{j}^{(d)}|}{\sqrt{N_d}} U_e^+(r_j)- \frac{\zeta_j\wh{\textmd{n}}_j^{(d)}}{\lambda^*} F^{(d)}(U_e^+(r_j))\right) \;,\\
& = & \sum_{j=1}^{N_{\rm{tot}}} \frac{\zeta_j}{S_e} \left(1 - \sum_{d=1}^{N_d} \frac{|\wh{\textmd{n}}_{j}^{(d)}|}{\sqrt{N_d}}\right) U_e^+(r_j) + \sum_{d=1}^{N_d} \frac{1}{S_e}\left(\frac{2}{\sqrt{N_d}} \sum_{n=1}^{N_{par}}l_n U_{d,n}^{**}\right) \;,
\end{eqnarray*}
where we introduce
\begin{eqnarray*}
U_{d,n}^{**} &=& \frac{1}{2}(U_{n}^{s+} + U_{n}^{s-}) - \frac{\sqrt{N_d}}{2\lambda^*}\left(F^{(d)}(U_{n}^{s+}) - F^{(d)}(U_{n}^{s-})\right)\;,
\end{eqnarray*}
which takes the same form as the left-hand-side of Eq.~(\ref{EXP_U_STAR}). Note that $U_{d,n}^{**}$ is essentially expressed in a one-dimensional setting along $x^{(d)}$. Therefore, one can follow the same argument used for Eq. (\ref{EXP_U_STAR}) in Lemma \ref{ENTROPY_THREE_ELEMENT_DG} to verify that
\begin{eqnarray*}
s(U_{d,n}^{**} ) \geq \min \left\{s(U_{n}^{s\pm})\right\} \geq \min \left\{s(U)~|~ U \in \{U_e^+(r_j)\},~j = 1, \ldots, N_{\rm{tot}}\right\}\;,
\end{eqnarray*}
with the given form of $\lambda^*$ in Eq. (\ref{LAMBDA_STAR}). As shown above, $U_e^*$ is a convex combination; by using the quasi-concavity of entropy~\cite{ZHANG_SHU_NUMERMATH2010}, we conclude that
\begin{eqnarray*}
s(U_e^*) \geq \min \left\{s(U)~|~ U \in \{U_e^+(r_j)\},~j = 1, \ldots, N_{\rm{tot}}\right\}\;.
\end{eqnarray*}
\end{lemma}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\textwidth, clip=, keepaspectratio]{partitiondemo}
\caption{\label{PARTITION_IDEA_DEMO} Illustration of the partition introduced in Lemma~\ref{LEMMA_ENTROPY_MULTID}.}
\end{figure}
\begin{remark}
\label{LAMBDA_RATIONALE}
Note that the maximum characteristic speed of $U_{d,n}^{**}$ is bounded, $\nu(U_{d,n}^{**}) \leq \nu(U_e^+(r))$. According to the combination rule given in Appendix~\ref{APP_COMB_RULE}, we have $\nu(U_e^*) \leq \sqrt{2+ \gamma(\gamma-1)} \max \left\{\nu(U_e^+(r))\right\} \leq \lambda^*$. Hence, the maximum characteristic speed of $U_e^*$ is bounded by the chosen value of $\lambda^*$, so that the Lax-Friedrichs flux $\wh{F}\left(U_e^+(r), U_e^*, -\wh{\textmd{n}}\right)$ in Eq. (\ref{DG_MEAN_GENERAL_ELEMENT}) is valid according to the definition of Eq. (\ref{LAX_FRIEDRICHS_FLUX}).
\end{remark}
To enforce the entropy boundedness, the decomposition of $\overline{U}_e^{\Delta t}$ in Eq. (\ref{DG_MEAN_GENERAL_ELEMENT}) is required to be convex. This can be satisfied under the following condition:
\begin{eqnarray}
\label{DG_ENTROPY_CONVEX_CONS}
\begin{cases}
\theta_v \geq 0\;,~~\forall v = 1,\ldots,N_q,\\
\theta_{k,q} > 0\;,~~\forall (k,~ q),~q = 1,\ldots, N^\partial_k,~k = 1,\ldots, N_\partial\;.
\end{cases}
\end{eqnarray}
With this, the entropy boundedness of $\ol{U}_e^{\Delta t}$ is shown by the following lemma.
\begin{lemma}
\label{ENTROPY_GENERAL_ELEMENT_DG}
For a general DG element, the element-averaged solution is entropy bounded,
\begin{equation}
s(\overline{U}^{\Delta t}_e) \geq s_e^0(t) = \min\{s\left(U(y)\right)~|~ y \in \Omega_e \cup \partial \Omega_e^-\},
\end{equation}
under the condition that Eq. (\ref{DG_ENTROPY_CONVEX_CONS}) holds and that the following constraint is fulfilled:
\begin{equation}
\label{CFL_GENERAL}
\Delta t \lambda \leq \frac{1}{2}\min~\left\{\frac{\theta_{k,q}}{\zeta_{k,q}} \right\},~~\forall (k,~ q),~q = 1,\cdots, N^\partial_k,~k = 1,\cdots, N_\partial\;,
\end{equation}
where $\lambda \geq \max \{\nu(U)~|~ U \in \{U^{\pm}_e(r(g_{k,q}))\}\}$ and $\lambda \geq \lambda^*$.
\\
Proof: The proof follows Lemma~\ref{ENTROPY_THREE_ELEMENT_DG}, utilizing Lemma~\ref{LEMMA_THREEPOINT_ROT} and the quasi-concavity of entropy.
\end{lemma}
Note that Lemma~\ref{ENTROPY_GENERAL_ELEMENT_DG} does not rely on any assumption regarding the dimensionality or shape of the finite element, and is therefore general. Another observation is that Eq.~(\ref{CFL_GENERAL}) essentially provides an estimate for CFL$^{\rm EB}$ that is only a function of the geometry of the element. For practical applications, we require the right-hand-side of Eq.~(\ref{CFL_GENERAL}) to be as large as possible so that larger time steps can be taken. This can be achieved by solving a convex optimization problem:
\begin{align}
\label{OPT_FOR_TIME_STEP}
&\textrm{maximize}~ \left(\min~\left\{\frac{\theta_{k,q}}{\zeta_{k,q}} \right\} \right)\;,\\
\nonumber
&\textrm{subject~to}~ \textrm{Eq. (\ref{DG_ENTROPY_CONVEX_CONS})}\;,
\end{align}
where $\theta_{k,q},~\zeta_{k,q}$ are properties of the geometry alone. This problem can be solved for each individual element as a pre-processing step prior to the simulation. Another way to interpret the expression is to identify a length scale from the right-hand side of Eq. (\ref{CFL_GENERAL}), for which the CFL$^{\rm EB}$ number can be explicitly defined. For this, $L_e = \min V_e / |\mathcal{J}^\partial_k(g_{k,q})|$ is used as a characteristic length for $\Omega_e$. Hence,
\begin{eqnarray*}
\min~\left\{\frac{\theta_{k,q}}{\zeta_{k,q}} \right\} \geq L_e \min ~\left\{\frac{\theta_{k,q}}{w_{k,q}} \right\}
\end{eqnarray*}
and an alternative expression to Eq.~(\ref{OPT_FOR_TIME_STEP}) is
\begin{align}
&\textrm{maximize}~ \min~\left\{\frac{\theta_{k,q}}{w_{k,q}} \right\}\;,\\
\nonumber
&\textrm{subject~to}~ \textrm{Eq. (\ref{DG_ENTROPY_CONVEX_CONS})}\;,
\end{align}
where the optimal objective value defines $\text{CFL}^{\text{EB}}$. With this, the CFL-constraint can be written as
\begin{eqnarray}
\label{CFL_COND_FINAL}
\frac{\Delta t \lambda}{L_e} \leq \frac{1}{2}\text{CFL}^{\text{EB}}\;,
\end{eqnarray}
which is used in the following numerical tests. The factor of 1/2 is a consequence of the Riemann flux formulation. For some of the most relevant element types with regular shapes, the value of $\text{CFL}^{\rm EB}$ has been calculated and is listed in Table~\ref{OPT_CFL_NUM_LIST} for different polynomial orders. In practice, we found that the bound in Eq.~(\ref{CFL_COND_FINAL}) leads to a conservative estimate for the time step. Considering computational efficiency and the linear-stability constraint of \cite{COCKBURN_SHU_JSC01}, this condition is relaxed, and we use $0.8\,\text{CFL}^{\rm EB}$ in the following numerical experiments.
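To make the resulting time-step selection explicit, the following sketch evaluates Eq.~(\ref{CFL_COND_FINAL}) per element and takes the global minimum; the tabulated $\text{CFL}^{\rm EB}$ entries (reproduced here only as an example), the characteristic length $L_e$, and the wave-speed estimate $\lambda$ are assumed to be provided, and applying the relaxation factor directly to $\text{CFL}^{\rm EB}$ is one possible implementation choice.
\begin{verbatim}
# A few tabulated CFL^EB values, keyed by (element type, polynomial order).
CFL_EB = {("quad", 2): 0.083, ("tri", 2): 0.067, ("brick", 2): 0.056}

def element_dt(elem_type, p, L_e, lam, safety=0.8):
    # Eq. (CFL_COND_FINAL): dt <= 0.5 * CFL^EB * L_e / lambda,
    # relaxed by the safety factor used in the numerical experiments.
    return 0.5 * safety * CFL_EB[(elem_type, p)] * L_e / lam

def global_dt(elements):
    # elements: iterable of (elem_type, p, L_e, lambda_max) tuples.
    return min(element_dt(*e) for e in elements)
\end{verbatim}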
\begin{table}[!htb]
\centering
\caption{\label{OPT_CFL_NUM_LIST}Summary of quadrature orders and optimal CFL numbers for different types of elements. Quadrature rule (QR) applied: line, quadrilateral, and brick: tensor-product Gauss-Legendre; triangle: Dunavant \cite{DUNAVANT_QUADRATURE_1985}; tetrahedron: Zhang, et al.~\cite{ZHANG_QUADRATURE_2009}. (Note that Dunavant's triangle rule includes negative weights for 3rd- and 7th-order quadrature; therefore, only quadrature rules with positive weights are used, with one extra order.)}
\begin{tabular}{| c || c | c | c | c |}
\hline
Element & Order & QR on $\partial \Omega_e$ & QR on $\Omega_e$ & $\text{CFL}^{\rm EB}$ \\ \hline\hline
\multirow{4}{*}{\includegraphics[width=0.1\textwidth, clip=, keepaspectratio]{Shape_lineseg}} & $p=1$ & / & 3 & 0.5 \\ \cline{2-5}
& $p=2$ & / & 5 & 0.167 \\ \cline{2-5}
& $p=3$ & / & 7 & 0.123 \\ \cline{2-5}
& $p=4$ & / & 9 & 0.073 \\ \cline{2-5} \hline
\multirow{4}{*}{\includegraphics[width=0.1\textwidth, clip=, keepaspectratio]{Shape_quadrilateral}} & $p=1$ & 3 & 3 & 0.25 \\ \cline{2-5}
&$p=2$ & 5 & 5 & 0.083 \\ \cline{2-5}
&$p=3$ & 7 & 7 & 0.062 \\ \cline{2-5}
&$p=4$ & 9 & 9 & 0.036 \\ \cline{2-5} \hline
\multirow{4}{*}{\includegraphics[width=0.1\textwidth, clip=, keepaspectratio]{Shape_triangle}} & $p=1$ & 3 & 4 & 0.135 \\ \cline{2-5}
&$p=2$ & 5 & 5 & 0.067 \\ \cline{2-5}
&$p=3$ & 7 & 8 & 0.058 \\ \cline{2-5}
&$p=4$ & 9 & 9 & 0.033 \\ \cline{2-5} \hline
\multirow{4}{*}{\includegraphics[width=0.1\textwidth, clip=, keepaspectratio]{Shape_brick}} & $p=1$ & 3 & 3 & 0.167 \\ \cline{2-5}
&$p=2$ & 5 & 5 & 0.056 \\ \cline{2-5}
&$p=3$ & 7 & 7 & 0.041 \\ \cline{2-5}
&$p=4$ & 9 & 9 & 0.024 \\ \cline{2-5} \hline
\multirow{4}{*}{\includegraphics[width=0.1\textwidth, clip=, keepaspectratio]{Shape_tetrahedron}} & $p=1$ & 4 & 3 & 0.066 \\ \cline{2-5}
&$p=2$ & 5 & 5 & 0.035 \\ \cline{2-5}
&$p=3$ & 8 & 7 & 0.015 \\ \cline{2-5}
&$p=4$ & 9 & 9 & 0.013 \\ \hline
\end{tabular}
\end{table}
\section{\label{SEC_EVALUATION_ENTROPY_BOUND}Evaluation of entropy bound}
In this section, we propose an approach for evaluating the entropy bound $s_e^0(t)$, addressing the fourth implementation issue listed in Sec. \ref{SEC_ENTROPY_BOUNDED_DG}. The most accurate way of evaluating a lower bound on the entropy is to search for the minimum directly, e.g., using Newton's method. However, this approach can significantly impair the efficiency, since searching for the minimum over a multi-dimensional high-order element is computationally intractable. To overcome this issue, we propose the following two approaches:
\begin{itemize}
\item{\emph{User-defined global bound}}. The first strategy is to let the user specify a global entropy bound, which is then kept constant and used everywhere in the computational domain. Although this approach is simple and robust, it is not optimal. It is suitable for certain problems with a well-defined entropy bound. As an example, for a supersonic flow over an airfoil, the free-stream entropy can be used to impose this bound. However, for more complex cases with multiple discontinuities that involve several entropy jumps, such a constant bound is not able to enforce the constraint for all elements. Note that this approach recovers the positivity constraint of Zhang and Shu \cite{ZHANG_SHU_JCP2010, ZHANG_SHU_JCP2011} in the limit of $s_e^0(t) \rightarrow -\infty$.
\item{\emph{Estimate of local entropy bound}}. This strategy imposes an entropy bound for each element and dynamically updates $s_e^0(t)$ during the simulation; a sketch of the resulting estimate is given after this list. Instead of relying on a sophisticated search algorithm, $s_e^0(t)$ can be approximately evaluated by reusing available information at the quadrature points. According to the definition of $s_e^0(t)$, Eq. (\ref{ENTROPY_PRINCIPLE_FOR_DG}), we also need to consider the set of quadrature points on $\partial \Omega_e^-$, denoted as $\mathcal{D}^-$. Therefore, the estimate of $s_e^0(t)$ is obtained according to the following formulation,
\begin{equation}
\label{ESITMATE_SENOB_T_F1}
^\mathcal{E}{s}_e^0(t) = \min\left\{\min_{x\in \mathcal{D}^- }{s(U(x))},~s_{m} - \frac{\min\limits_{x \in \mathcal{D}, x \neq x_m} \{|x_m- x|\}}{|x_m - x_n|} \big(s_{n}-s_{m}\big)\right\}\;,
\end{equation}
where we introduce $x_m$ and $x_n$ to denote the locations of the minimum and maximum entropy values, respectively,
\begin{eqnarray*}
s_{m} = s(U_e(x_m)) = \min\limits_{x \in \mathcal{D}}s(U_e(x))\;,\\
s_{n} = s(U_e(x_n)) = \max\limits_{x \in \mathcal{D}}s(U_e(x))\;.
\end{eqnarray*}
Although this estimate is simple and inexpensive, one has to realize that any extrapolation in the vicinity of discontinuities becomes unreliable due to the spurious behavior of the sub-cell solution. This can be addressed by referring to the entropy bounds around $\Omega_e$ at the previous time step,
\begin{equation}
\label{ESITMATE_SENOB_T_F2}
s_e^0(t) = \max \left\{^\mathcal{E}{s}_e^0(t),~ \min_{k \in \mathcal{N}_e \cup \{e\}} s_k^0(t-\Delta t)\right\}\;,
\end{equation}
where $\mathcal{N}_e$ refers to the set of indices of all neighbor elements of $\Omega_e$ that share a common edge.
\end{itemize}
In practical tests, we found that the above estimates can be applied in a combined way. Specifically, Eq. (\ref{ESITMATE_SENOB_T_F1}) is used for initializing the simulation, and Eq. (\ref{ESITMATE_SENOB_T_F2}) is then applied during the subsequent time steps.
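A minimal sketch of this local estimate is given below; it assumes that the entropy values and coordinates at the points of $\mathcal{D}$ and $\mathcal{D}^-$ are already available as arrays, and all names are illustrative.
\begin{verbatim}
import numpy as np

def local_entropy_bound(s_D, x_D, s_Dminus, s_prev_nbrs):
    # s_D, x_D    : entropy values and (N, dim) coordinates on the set D,
    # s_Dminus    : entropy values on the inflow-boundary set D^-,
    # s_prev_nbrs : bounds s_k^0(t - dt) of the element and its edge neighbors.
    m, n = np.argmin(s_D), np.argmax(s_D)
    s_min, s_max = s_D[m], s_D[n]
    if s_max == s_min:                # constant entropy: no extrapolation needed
        s_extrap = s_min
    else:
        dist = np.linalg.norm(x_D - x_D[m], axis=-1)
        d_near = np.min(dist[np.arange(len(s_D)) != m])
        s_extrap = s_min - d_near / dist[n] * (s_max - s_min)  # Eq. (..._F1)
    s_est = min(np.min(s_Dminus), s_extrap)                    # Eq. (..._F1)
    return max(s_est, np.min(s_prev_nbrs))                     # Eq. (..._F2)
\end{verbatim}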
\section{\label{SEC_ALG_IMPLEMENTATION}Algorithmic implementation}
Algorithm \ref{BRILLIANT_YU} provides a description of the implementation details of the EBDG scheme.
\begin{algorithm}[H]
\label{BRILLIANT_YU}
\SetAlgoLined
\textbf{Pre-computation of CFL condition}:
For each element, solve Eq. (\ref{OPT_FOR_TIME_STEP}); alternatively, take CFL$^{\text{EB}}$ from Table~\ref{OPT_CFL_NUM_LIST} and compute $L_e$ (recommended for simplicity)
\textbf{Initialization}: Initialize solution vector $U(x, 0) = U_0$
\While{$t \leq t_{\rm{end}}$}{
\For {each element} {
Estimate entropy bound $s_e^{0}(t)$ according to Eqs. (\ref{ESITMATE_SENOB_T_F1}) and (\ref{ESITMATE_SENOB_T_F2}) \\
Find $\lambda$ and estimate time step size $\Delta t$ according to Eq. (\ref{CFL_GENERAL})\\
}
Find the minimum permissible time step $\Delta t_{\min}$ over all elements\;
\For {each stage $k$ of a Runge-Kutta integration scheme} {
\For {each element} {
step 1: Update solution vector $U^{k+1} = U^k + \Delta t_{\min} R^k$ ($R$ refers to the residual)\\
step 2: Apply $\mathcal{L}$ on $U^{k+1}$ with $s_e^0(t)$ according to Eqs. (\ref{PROJECTED_SOLUTION}) and (\ref{EVALUATE_VAREPSILON})\\
}
}
Advance time $t = t + \Delta t_{\min}$
}
\caption{Implementation of EBDG scheme.}
\end{algorithm}
\section{\label{SEC_NUMERICAL_TEST}Results and numerical test cases}
In the following, EBDG is applied to a series of test cases to demonstrate the performance of this method. We begin by considering one-dimensional configurations to confirm the high-order accuracy and essential convergence properties. This is followed by two- and three-dimensional cases with specific emphasis on applications to unstructured meshes and general curved elements.
\subsection{One-dimensional smooth solution}
The first case considers a one-dimensional periodic domain $x\in[0, 1]$ with smooth initial conditions:
\begin{eqnarray*}
\rho(x,0) & = & 1 + 0.1\sin(2\pi x)\;,\\
u(x,0) & = & 1\;,\\
p(x,0) & = & 1\;.
\end{eqnarray*}
The accuracy is examined by considering different spatial resolutions and polynomial orders. For each polynomial order, the CFL number is set to $0.8\,$CFL$^{\rm EB}$, in which CFL$^{\rm EB}$ is taken from Table \ref{OPT_CFL_NUM_LIST}. Initially, $s_0$ is set to 0.874, corresponding to the minimum entropy value of the initial condition. The SSPRK33 time-integration scheme~\cite{SSP_REF_2001} is used, and the convergence rates are given in Table~\ref{1D_ACCURATE_SSPRK34}. Although the EBDG-scheme remains stable, it can be seen that the solutions do not reach the optimal rates for DGP3 and DGP4; the reason for this is that the accuracy of the third-order time-integration scheme is not sufficient for these cases. To demonstrate this, we switch the time-integration scheme to a standard RK45. As can be seen from Table~\ref{1D_ACCURATE_RK45}, the optimal convergence rates for all cases are achieved, demonstrating that the optimal convergence for smooth solutions is preserved by the EBDG-scheme. In the following, the standard RK45 is used for all other cases.
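The rate columns reported in the following tables are the standard observed convergence orders; a minimal sketch of how such a rate is computed from two successive mesh refinements (assuming the spacing is halved, as in the tables) is given below, using the DGP2 entries of Table~\ref{1D_ACCURATE_SSPRK34} as an example.
\begin{verbatim}
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    # Observed convergence rate between two successive meshes:
    # rate = log(e_coarse / e_fine) / log(h_coarse / h_fine).
    return math.log(err_coarse / err_fine) / math.log(refinement)

print(observed_order(1.274e-4, 1.513e-5))   # ~3.07, matching the table entry
\end{verbatim}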
\begin{table}[htp]
\begin{center}
\begin{tabular}{|c||cc|cc|cc|cc|}\hline
\multirow {2}{*}{$h$} & \multicolumn{2}{c|}{DGP1} & \multicolumn{2}{c|}{DGP2} & \multicolumn{2}{c|}{DGP3} & \multicolumn{2}{c|}{DGP4}\\
& $L_2$-error & rate & $L_2$-error & rate & $L_2$-error & rate & $L_2$-error & rate \\
\hline\hline
1/10 & 3.074\textup{e}{-3} & {-} & 1.274\textup{e}{-4} & {-} & 4.716\textup{e}{-6} & {-} & 2.036\textup{e}{-7} & {-}\\
1/20 & 6.508\textup{e}{-4} & 2.240 & 1.513\textup{e}{-5} & 3.073 & 3.073\textup{e}{-7} & 3.940 & 1.980\textup{e}{-8} & 3.362 \\
1/40 & 1.535\textup{e}{-4} & 2.084 & 1.891\textup{e}{-6} & 3.000 & 2.182\textup{e}{-8} & 3.816 & 2.454\textup{e}{-9} & 3.013 \\
1/80 & 3.775\textup{e}{-5} & 2.024 & 2.364\textup{e}{-7} & 3.000 & 1.880\textup{e}{-9} & 3.537 & 3.130\textup{e}{-10} & 2.971 \\
1/160 & 9.398\textup{e}{-6} & 2.006 & 2.955\textup{e}{-8} & 3.000 & 2.001\textup{e}{-10} & 3.232 & 3.924\textup{e}{-11} & 2.995 \\
1/320 & 2.347\textup{e}{-6} & 2.002 & 3.694\textup{e}{-9} & 3.000 & 2.401\textup{e}{-11} & 3.059 & 4.922\textup{e}{-12} & 2.995 \\
\hline
\end{tabular}
\end{center}
\caption{\label{1D_ACCURATE_SSPRK34}Convergence test of 1D advection with SSPRK33, showing degradation of convergence order for DGP3 and DGP4 (here we use density to evaluate the error).}
\end{table}
\begin{table}[htp]
\begin{center}
\begin{tabular}{|c||cc|cc|cc|cc|}
\hline
\multirow {2}{*}{$h$} & \multicolumn{2}{c|}{DGP1} & \multicolumn{2}{c|}{DGP2} & \multicolumn{2}{c|}{DGP3} & \multicolumn{2}{c|}{DGP4}\\
& $L_2$-error & rate & $L_2$-error & rate & $L_2$-error & rate & $L_2$-error & rate \\ \hline\hline
1/10 & 3.494\textup{e}{-3} & {-} & 2.140\textup{e}{-4} & {-} & 4.650\textup{e}{-6} & {-} & 1.438\textup{e}{-7} & {-}\\
1/20 & 7.231\textup{e}{-4} & 2.273 & 1.513\textup{e}{-5} & 3.823 & 2.920\textup{e}{-7} & 3.993 & 4.517\textup{e}{-9} & 4.992 \\
1/40 & 1.630\textup{e}{-4} & 2.150 & 1.891\textup{e}{-6} & 3.000 & 1.826\textup{e}{-8} & 3.999 & 1.419\textup{e}{-10} & 4.992 \\
1/80 & 3.790\textup{e}{-5} & 2.105 & 2.364\textup{e}{-7} & 3.000 & 1.141\textup{e}{-9} & 4.000 & 4.444\textup{e}{-12} & 4.997 \\
1/160 & 9.398\textup{e}{-6} & 2.012 & 2.955\textup{e}{-8} & 3.000 & 7.134\textup{e}{-11} & 4.000 & 1.497\textup{e}{-13} & 4.892 \\
1/320 & 2.347\textup{e}{-6} & 2.002 & 3.694\textup{e}{-9} & 3.000 & 4.463\textup{e}{-12} & 3.999 & 8.930\textup{e}{-14} & 7.453\textup{e}{-1}\\
\hline
\end{tabular}
\end{center}
\caption{\label{1D_ACCURATE_RK45}Convergence test of 1D advection with standard RK45 (here we use density to evaluate the error).}
\end{table}
\subsection{\label{1D_MOV_SHOCK}One-dimensional moving shock wave}
A moving shock wave in a one-dimensional domain is considered as a test case for evaluating the robustness and performance of EBDG for shock-capturing. A domain with $x\in[-0.1, 1.1]$ is considered, in which the initial shock front is located at $x=0$. The domain is initialized in $x<0$ with the following
pre-shock state:
\begin{eqnarray*}
\rho &=& 1.4\;,\\
u &=& 0\;,\\
p &=& 1\;.
\end{eqnarray*}
Shocks are specified with different Mach numbers $({\rm{Ma}} = u_s/c)$, and Ma $=\{2, 5, 100\}$ are considered in this case. For all cases considered, the initial value for the entropy, $s_0$, is set to 0.620, corresponding to the minimum value in the initial condition. The simulation ends when the exact shock front reaches the location $x=1$. Results are illustrated in Fig.~\ref{1D_SHOCK_CAPTURE}, showing that the entropy boundedness guarantees robustness and consistent performance over a wide range of shock strengths. Entropy bounding ($\varepsilon \neq 0 $) is only activated in elements that are occupied by flow discontinuities. Compared to the positivity-preserving method, entropy bounding entirely avoids unphysical undershoots in pressure and provides an improved suppression of oscillations in the post-shock region. Compared to limiting, entropy bounding shows better robustness in describing shocks at different conditions, introducing lower dissipation in the vicinity of discontinuities.
\begin{figure}[!tb]
\begin{subfigmatrix}{3}
\subfigure[${\rm{Ma}}=2, h = 1/100$.]{\includegraphics[width=0.32\textwidth, clip=, keepaspectratio]{Mach2_coarse}}
\subfigure[${\rm{Ma}}=5, h = 1/100$.]{\includegraphics[width=0.32\textwidth, clip=, keepaspectratio]{Mach5_coarse}}
\subfigure[${\rm{Ma}}=100, h = 1/100$.]{\includegraphics[width=0.32\textwidth, clip=, keepaspectratio]{Mach100_coarse}}
\subfigure[${\rm{Ma}}=2, h = 1/200$.]{\includegraphics[width=0.32\textwidth, clip=, keepaspectratio]{Mach2_fine}}
\subfigure[${\rm{Ma}}=5, h = 1/200$.]{\includegraphics[width=0.32\textwidth, clip=, keepaspectratio]{Mach5_fine}}
\subfigure[${\rm{Ma}}=100, h = 1/200$.]{\includegraphics[width=0.32\textwidth, clip=, keepaspectratio]{Mach100_fine}}
\end{subfigmatrix}
\caption{\label{1D_SHOCK_CAPTURE}(Color online) DGP2 simulation of a moving shock wave for different Mach numbers (abbreviations: EB$-$entropy bounding; PP$-$positivity preserving~\cite{ZHANG_SHU_JCP2010}; Limiter+PP$-$WENO limiter~\cite{WENOLIM_ZHONG} with positivity preserving~\cite{ZHANG_SHU_JCP2010}).}
\end{figure}
\subsection{Two-dimensional flow over a cylinder}
In this section, we verify the convergence order of the EBDG-scheme for high-order curved elements by considering a two-dimensional flow over a circular cylinder. The radius of the cylinder is $R=1$ and the far-field boundary is a concentric circle with $R=20$. The free-stream condition is given as:
\begin{eqnarray*}
\rho_\infty &=& 1.4\;,\\
u_\infty &=& 5.32\;,\\
v_\infty &=& 0.0\;,\\
p_\infty &=& 1\;.
\end{eqnarray*}
The corresponding Mach number is 0.38 and characteristic boundary conditions are imposed at the far-field. The entire domain is initialized with free-stream conditions and $s_0 = 0.620$. We compare results on quadrilateral and triangular meshes at three levels of refinement. High-order elements are generated using cubic polynomials to accommodate the curvature of the geometry. The CFL number is set to the CFL$^{\rm EB}$ number from Table \ref{OPT_CFL_NUM_LIST} for the corresponding shape and polynomial order, multiplied by a factor of 0.8.
A main issue in these simulations is the occurrence of numerical instabilities that are initiated at the leading edge of the cylinder. As a result of this instability, DGP2 and DGP3 without any entropy bounding diverge after a few iterations. Previously, limiters have been used for stabilizing the transient solutions in this case \cite{HWENOLIM_LUO}. However, for high-order polynomials, it is difficult to develop limiters that achieve the optimal convergence rate without a nontrivial implementation. In contrast, EBDG provides a considerably simpler means of enabling high-order simulations for such complex geometric configurations.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c||cc|cc|cc|}
\hline
\multirow {2}{*}{Mesh} & \multicolumn{2}{c|}{DGP1} & \multicolumn{2}{c|}{DGP2} & \multicolumn{2}{c|}{DGP3} \\
& $L_2$-error & rate & $L_2$-error & rate& $L_2$-error & rate\\
\hline\hline
\multicolumn{7}{|>{\columncolor[gray]{0.7}}c|}{Quadrilateral Elements} \\ \hline
Level 1 & 7.272\textup{e}{-2} & {-} & 1.694\textup{e}{-2} & {-} & 3.816\textup{e}{-3} & {-}\\
Level 2 & 1.318\textup{e}{-2} & 2.464 & 7.219\textup{e}{-4} & 4.552 & 1.827\textup{e}{-4} & 4.384 \\
Level 3 & 2.441\textup{e}{-3} & 2.433 & 6.029\textup{e}{-5} & 3.582 & 1.036\textup{e}{-5} & 4.141 \\
\hline
\hline
\multicolumn{7}{|>{\columncolor[gray]{0.7}}c|}{Triangular Elements} \\ \hline
Level 1 & 1.137\textup{e}{-1} & {-} & 2.590\textup{e}{-2} & {-} & 4.086\textup{e}{-3} & {-}\\
Level 2 & 1.865\textup{e}{-2} & 2.608 & 8.899\textup{e}{-4} & 4.863 & 1.291\textup{e}{-4} & 4.984 \\
Level 3 & 3.391\textup{e}{-3} & 2.459 & 7.222\textup{e}{-5} & 3.623 & 6.939\textup{e}{-6} & 4.217 \\
\hline
\end{tabular}
\end{center}
\caption{\label{2D_ACCURATE_RK45}Comparisons of convergence rate for 2D flow over a cylinder (here we use entropy to evaluate the error).}
\end{table}
\begin{figure}[!htb]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& Mesh Level 1 & Mesh Level 2 & Mesh Level 3 \\
\hline
\begin{sideways}\hspace*{20mm} Mesh \end{sideways}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv1_mesh}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv2_mesh}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv3_mesh} \\
\hline
\begin{sideways}\hspace*{14mm} DGP2 solutions \end{sideways}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv1_res}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv2_res}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv3_res} \\
\hline
\begin{sideways}\hspace*{14mm} Convergence \end{sideways}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{quad_lv1}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{quad_lv2}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{quad_lv3} \\
\hline
\end{tabular}
\caption{\label{FLOW_OVER_CIRCLE_QUAD}EBDG-solution of flow over a cylinder on curved quadrilateral meshes with
three different refinement levels; top: computational mesh in the near-field of the cylinder; middle: Mach number; bottom: convergence history and activation of entropy bounding as a function of iteration.}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& Mesh Level 1 & Mesh Level 2 & Mesh Level 3 \\
\hline
\begin{sideways}\hspace*{20mm} Mesh \end{sideways}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv1tri_mesh}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv2tri_mesh}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv3tri_mesh} \\
\hline
\begin{sideways}\hspace*{14mm} DGP2 solutions \end{sideways}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv1tri_res}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv2tri_res}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{lv3tri_res} \\
\hline
\begin{sideways}\hspace*{14mm} Convergence \end{sideways}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{tri_lv1}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{tri_lv2}
& \includegraphics[width=0.28\textwidth, clip=, keepaspectratio]{tri_lv3} \\
\hline
\end{tabular}
\caption{\label{FLOW_OVER_CIRCLE_TRI}DG-solution of flow over a cylinder on curved triangular meshes with
three different refinement levels; top: computational mesh in the near-field of the cylinder; middle: Mach number; bottom: convergence history and activation of entropy bounding as a function of iteration.}
\end{figure}
Comparisons of the computational meshes, simulation results, and convergence properties are presented in Figs.~\ref{FLOW_OVER_CIRCLE_QUAD} and \ref{FLOW_OVER_CIRCLE_TRI}. It is evident that the solution is improved by increasing the mesh resolution. The convergence history of the residual, provided in the last row of both figures, shows that entropy bounding is mostly activated during the start-up phase of the simulation to suppress numerical oscillations and ensure stability. It is interesting to note that the elements that require bounding are restricted to the region near the stagnation point upstream of the cylinder and amount to less than 8\% of the total number of elements. As the solution converges to the steady-state condition, entropy bounding remains deactivated, retaining the high-order accuracy. Since the solution is smooth, the physical entropy production is zero. Therefore, the convergence rates are measured in terms of the entropy error using the discrete $L_2$-norm. A comparison of the convergence rates is presented in Table~\ref{2D_ACCURATE_RK45}, confirming that the optimal convergence rate is preserved even for complex geometries with curved elements.
\subsection{Two-dimensional double Mach reflection}
This test case is designed to assess the performance of EBDG for the simulation of flows with strong shocks and complex wave structures. The numerical setup follows that of Woodward and Colella \cite{DMR_1984}, representing a Mach 10 shock over a $30^\circ$-wedge. All quantities are non-dimensionalized, and the computational domain is $[0, 4] \times [0, 1]$. In the present study, we consider two different mesh-discretizations, consisting of a Cartesian mesh with quadrilateral elements ($L_e=h=0.02$) and a mesh with triangular elements ($L_e\approx0.02$). The pre-shock state is the same as that in Sec.~\ref{1D_MOV_SHOCK} and hence $s_0$ is set to 0.620. The CFL number is prescribed from Table \ref{OPT_CFL_NUM_LIST} using a safety factor of 0.8.
\begin{figure}
\begin{subfigmatrix}{2}
\subfigure[EBDGP1, quadrilateral mesh with $h=0.02.$]{\includegraphics[width=0.475\textwidth, clip=, keepaspectratio]{P1str}}
\subfigure[EBDGP1, triangular mesh with $h=0.02.$]{\includegraphics[width=0.475\textwidth, clip=, keepaspectratio]{P1unstr}}
\subfigure[EBDGP2, quadrilateral mesh with $h=0.02.$]{\includegraphics[width=0.475\textwidth, clip=, keepaspectratio]{P2str_PIX}}
\subfigure[EBDGP2, triangular mesh with $h=0.02.$]{\includegraphics[width=0.475\textwidth, clip=, keepaspectratio]{P2unstr_PIX}}
\subfigure[\label{DMR_SOLUTIONS_WENO5}WENO5, quadrilateral mesh with $h=0.0067.$]{\includegraphics[width=0.475\textwidth, clip=, keepaspectratio]{weno}}
\subfigure[\label{DMR_SOLUTIONS_DG_WENO_LIMITER}DGP2 with WENO limiter~\cite{WENOLIM_ZHONG}, quadrilateral mesh with $h=0.02$.]{\includegraphics[width=0.475\textwidth, clip=, keepaspectratio]{comp_P2str}}
\end{subfigmatrix}
\begin{subfigmatrix}{1}
\subfigure[Instantaneous snapshot of $\varepsilon$ for DGP2 on the quadrilateral mesh.]{\includegraphics[width=0.85\textwidth, clip=, keepaspectratio]{indicator}}
\end{subfigmatrix}
\caption{\label{DMR_SOLUTIONS}(Color online) Simulation results of the double Mach reflection over a $30^\circ$-wedge.}
\end{figure}
Simulation results for density contours at time $t=0.25$ are shown in Fig.~\ref{DMR_SOLUTIONS}. The proposed EBDG-method captures all wave-features, and it is found that without enforcing the entropy constraint the solution diverges in the first iteration for these strong shock conditions. For comparison, a reference solution obtained using a fifth-order WENO-scheme is shown in Fig.~\ref{DMR_SOLUTIONS_WENO5}, and results from a DGP2-simulation using a WENO-limiter~\cite{WENOLIM_ZHONG} are presented in Fig.~\ref{DMR_SOLUTIONS_DG_WENO_LIMITER}. Comparisons between EBDGP1 and EBDGP2 results show the benefit of the high-order scheme in providing improved representations of the shock-wave structure. At the same degrees of freedom, the DGP2-solution provides comparable predictions to that of the fifth-order WENO scheme, except for the small oscillations that cannot be removed by the linear scaling procedure. Compared to the DG-simulation with WENO-limiter (Fig.~\ref{DMR_SOLUTIONS_DG_WENO_LIMITER}), EBDG effectively avoids introducing excessive numerical dissipation since the solution is only entropy-constrained in regions in which the entropy condition is violated.
\subsection{Three-dimensional supersonic flow over a sphere}
This test case extends the evaluation of the EBDG-method to three-dimensional configurations with complex geometries. Robust approaches for capturing strong shocks on three-dimensional curved elements are still a subject of investigation. This test case considers a flow at a Mach number of 6.8 over a sphere. The radius of the sphere is $R=1$. Due to the geometric symmetry, the computational domain covers only one eighth of the full configuration, and it extends to $3R$ in the radial direction. Symmetry boundary conditions are imposed at the planes $y=0$ and $z=0$, and outflow boundary conditions are prescribed at $x=0.$ A normal-velocity inflow is prescribed at the outer shell with the following specification:
\begin{eqnarray*}
\rho_\infty &=& 1.4\;,\\
u_\infty &=& -6.80\;,\\
v_\infty &=& 0.0\;,\\
w_\infty &=& 0.0\;,\\
p_\infty &=& 1\;.
\end{eqnarray*}
Slip-wall conditions are imposed at the surface of the sphere. The computational domain is discretized with quadratically curved hexahedral elements. For the initial mesh, the radial dimension is partitioned with 14 elements using a linear stretching factor of 1.1, while the azimuthal dimension of the plane at $x = 0$ is partitioned using 12 elements. DGP2 is applied for this case and the CFL number is $0.8\,$CFL$^{\text{EB}}$ with CFL$^{\text{EB}}=0.056$.
Simulation results are illustrated in Fig.~\ref{SPHERE_SOLU_MESH_BL}, showing the surface mesh and isocontours of the Mach number. The bounding parameter $\varepsilon$ can be utilized as an indicator for local mesh refinement. We sample the elements with non-zero $\varepsilon$ values over a few iterations, and then locally refine these elements. Results using one and two levels of refinement are shown in Figs.~\ref{SPHERE_SOLU_MESH_L1} and~\ref{SPHERE_SOLU_MESH_L2}, respectively. This direct comparison shows that the shock profiles become sharper with increasing resolution. Since the bounding parameter is sharply localized, the mesh refinement is confined to a narrow region in the vicinity of the shock. The flow-field solution behind the shock is smooth, and no entropy bounding is applied in this region. To provide a quantitative analysis, simulation results from the EBDG-method are compared against measurements by Billig~\cite{REFDATA_SHOCKSPHERE_1967} in Fig. \ref{SPHERE_SOLU_MESH_L3}, showing good agreement between experiments and computations.
\begin{figure}[!htb]
\begin{subfigmatrix}{2}
\subfigure[\label{SPHERE_SOLU_MESH_BL}Baseline mesh.]{\includegraphics[width=0.48\textwidth, clip=, keepaspectratio]{sphere_lv1}}
\subfigure[\label{SPHERE_SOLU_MESH_L1}Local refinement at level one.]{\includegraphics[width=0.48\textwidth, clip=, keepaspectratio]{sphere_lv2}}
\subfigure[\label{SPHERE_SOLU_MESH_L2}Local refinement at level two.]{\includegraphics[width=0.48\textwidth, clip=, keepaspectratio]{sphere_lv3}}
\subfigure[\label{SPHERE_SOLU_MESH_L3}Comparison with measurements in~\cite{REFDATA_SHOCKSPHERE_1967}.]{\includegraphics[width=0.48\textwidth, clip=, keepaspectratio]{sphere_comp}}
\end{subfigmatrix}
\caption{\label{SPHERE_SOLU}(Color online) Simulations of Mach 6.8 flow over a sphere showing the $\varepsilon$ profile ($y=0$ plane) and Mach-number distribution ($z=0$ plane) on (a) the baseline mesh, and simulation results with local refinement with (b) one refinement level and (c) two refinement levels. Comparisons of the shock location with measurements by Billig~\cite{REFDATA_SHOCKSPHERE_1967} are shown in (d).}
\end{figure}
\section{Conclusions}
A regularization technique for the discontinuous Galerkin scheme was developed using the entropy principle. Motivated by the FV entropy solution, the high-order DG-scheme is stabilized by constraining the solution to obey the entropy condition. The implementation of the resulting entropy-bounded discontinuous Galerkin scheme relies on two key components, namely a limiting operator and a CFL-constraint. These components were first derived in a one-dimensional setting and subsequently extended to multi-dimensional configurations with arbitrary and curved elements. Specifically, utilizing the interpolation basis, we were able to extend the entropy bounding (which includes positivity preservation) to arbitrarily shaped elements, independent of specific quadrature rules. The bounding procedure is obtained from algebraic operations, resulting in a computationally efficient and simple implementation. A sufficient CFL-condition was rigorously derived and proven to ensure that the entropy constraint can be enforced on different types and orders of elements. Numerical tests on a range of configurations were conducted to examine the accuracy and stability of the entropy-bounded DG-scheme. These test cases confirm its efficacy in regularizing solutions in the vicinity of discontinuities, generated either by the true flow physics or during the transient solution update. An added benefit of the entropy-bounding method is its utilization as a refinement indicator.
Since the entropy-bounding scheme proposed here relies on a linear scaling operator, it is not capable of removing shock-triggered oscillations of smaller magnitude, although it stabilizes the solution and prevents the solver from diverging. As a final remark, the derivation presented in this study is general and extendable to other discontinuous schemes with sub-cell solution representations, such as spectral finite volume schemes~\cite{ZJWANG_SPV_2006} and the flux reconstruction scheme~\cite{HUYNH_FR_2013}. Therefore, entropy bounding, as an idea, has the potential to improve the robustness of shock-capturing for these emerging high-order numerical methods.
\section*{Acknowledgment}
Financial support through NSF with Award No. CBET-0844587 is gratefully acknowledged. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ASC130004. Helpful discussions with Yee Chee See on the mathematical analysis are appreciated.
\appendix
\section{\label{APP_COMB_RULE}Combination Rule}
In this section, we derive an estimate for the maximum characteristic speed of a convex combination of states. For this, we consider a state vector $\mathsf{U}$ of Eq.~(\ref{EQ_EULER_STATE}), which is written in the following form:
\begin{eqnarray}
\label{COMBINATION_U_STAR_LAMBDA}
\mathsf{U} = \sum_{k} \beta_k \mathsf{U}_k\;,
\end{eqnarray}
where $\beta_k > 0$ and $\sum_k \beta_k = 1$. The maximum characteristic speed of $\mathsf{U}$ is:
\begin{eqnarray*}
\nu(\mathsf{U}) = |u(\mathsf{U})| + c(\mathsf{U}) = |u(\mathsf{U})| + \sqrt{\gamma(\gamma - 1) \left( e(\mathsf{U}) - \frac{1}{2} |u(\mathsf{U})|^2\right)}\;,
\end{eqnarray*}
in which $u$ and $e$ can be calculated according to Eq.~(\ref{COMBINATION_U_STAR_LAMBDA}) as
\begin{eqnarray*}
u(\mathsf{U}) = \sum_k \alpha_k u(\mathsf{U}_k)\;, \qquad e(\mathsf{U}) = \sum_k \alpha_k e(\mathsf{U}_k)\;,
\end{eqnarray*}
and
\begin{eqnarray*}
\alpha_k = \frac{\beta_k \rho(\mathsf{U}_k)}{\sum_k \beta_k \rho(\mathsf{U}_k) }\;
\end{eqnarray*}
is a new set of coefficients that is introduced to convert from conservative to primitive variables. Furthermore, because of
\begin{eqnarray*}
\gamma(\gamma -1) e(\mathsf{U}) &=& \sum_k \alpha_k \left(c^2(\mathsf{U}_k) + \frac{\gamma(\gamma-1)}{2}|u(\mathsf{U}_k)|^2\right)\;,\\
|u(\mathsf{U})| & = & \sqrt{|\sum_k\alpha_k u(\mathsf{U}_k)|^2} \;,\\
&\leq& \sqrt{\sum_k\alpha_k |u(\mathsf{U}_k)|^2}\;,
\end{eqnarray*}
we obtain
\begin{eqnarray*}
\nu(\mathsf{U}) &=& |u(\mathsf{U})| + \sqrt{\sum_k \alpha_k c^2(\mathsf{U}_k) +\frac{\gamma(\gamma-1)}{2} \left( \sum_k \alpha_k |u(\mathsf{U}_k)|^2 - {|u(\mathsf{U})|^2}\right)} \;,\\
& \leq & \sqrt{\sum_k\alpha_k |u(\mathsf{U}_k)|^2} + \sqrt{\sum_k \alpha_k c^2(\mathsf{U}_k) +\frac{\gamma(\gamma-1)}{2} \sum_k \alpha_k |u(\mathsf{U}_k)|^2 }\;,\\
& \leq & \sqrt{2 \sum_k \alpha_k c^2(\mathsf{U}_k) + (2 + \gamma(\gamma-1))\sum_k \alpha_k |u(\mathsf{U}_k)|^2}\;,\\
& \leq & \sqrt{2 + \gamma(\gamma-1)} \sqrt{\sum_k \alpha_k (c^2(\mathsf{U}_k) + |u(\mathsf{U}_k)|^2)}\;,\\
& \leq & \sqrt{2 + \gamma(\gamma-1)} \sqrt{\sum_k \alpha_k (c(\mathsf{U}_k) + |u(\mathsf{U}_k)|)^2}\;,\\
& \leq & \sqrt{2 + \gamma(\gamma-1)}\max_k \left\{c(\mathsf{U}_k) + |u(\mathsf{U}_k)|\right\}\;.
\end{eqnarray*}
We used this estimate for preselecting the dissipation coefficient $\lambda^*$ used in Lemma \ref{LEMMA_ENTROPY_MULTID}.
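As a simple numerical sanity check of this bound (not part of the derivation above), the following Python sketch verifies the inequality for a convex combination of two states. It assumes a one-dimensional ideal-gas state vector $\mathsf{U} = (\rho, \rho u, \rho E)$ with $\gamma = 1.4$; the particular state values are illustrative only.
\begin{verbatim}
import numpy as np

gamma = 1.4

def max_char_speed(U):
    # U = (rho, rho*u, rho*E); returns nu(U) = |u| + c
    rho, momentum, rhoE = U
    u = momentum / rho
    e = rhoE / rho                      # specific total energy
    c = np.sqrt(gamma * (gamma - 1.0) * (e - 0.5 * u * u))
    return abs(u) + c

U1 = np.array([1.0, 0.5, 2.5])          # two admissible states
U2 = np.array([0.125, -0.1, 0.25])
beta = 0.3                              # convex weights beta and 1 - beta
U = beta * U1 + (1.0 - beta) * U2

bound = np.sqrt(2.0 + gamma * (gamma - 1.0)) * max(max_char_speed(U1),
                                                   max_char_speed(U2))
assert max_char_speed(U) <= bound
\end{verbatim}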
\section{Formulation of $\mathcal{J}^\partial_k $}
For a two-dimensional configuration, the curve of an edge is parameterized by $g_k \in \mathbb{R}$, and the surface Jacobian is written as $\mathcal{J}^\partial_k = \left[\frac{\partial x_1}{\partial g_k}, \frac{\partial x_2}{\partial g_k}\right]$; for a three-dimensional configuration, an edge surface is parameterized by $g_k=\left[g_{k}^{(1)}, g_{k}^{(2)}\right]^T \in \mathbb{R}^2$, and the Jacobian can be written as $\mathcal{J}^\partial_k = \left[\frac{\partial x_1}{\partial g_{k}^{(1)}}, \frac{\partial x_2}{\partial g_{k}^{(1)}}, \frac{\partial x_3}{\partial g_{k}^{(1)}}\right] \times \left[\frac{\partial x_1}{\partial g_{k}^{(2)}},\frac{\partial x_2}{\partial g_{k}^{(2)}},\frac{\partial x_3}{\partial g_{k}^{(2)}}\right]$.
\begin{thebibliography}{10}
\bibitem{TVBlim}
B.~Cockburn and C.-W. Shu.
\newblock {TVB} {Runge-Kutta} local projection discontinuous {G}alerkin finite
element method for conservation laws {III}: {O}ne dimensional systems.
\newblock {\em J. Comp. Phys.}, 84:90--113, 1989.
\bibitem{TVBlim2D}
B.~Cockburn and C.-W. Shu.
\newblock The {Runge-Kutta} discontinuous {G}alerkin method for conservation
laws {V}: {M}ultidimensional systems.
\newblock {\em J. Comp. Phys.}, 141:199--224, 1998.
\bibitem{MOMLIM1_BISWAS}
R.~Biswas, K.~Devine, and J.~E. Flaherty.
\newblock Parallel adaptive finite element methods for conservation laws.
\newblock {\em Appl. Numer. Math.}, 14:255--284, 1994.
\bibitem{MOMLIM2_KRIVO}
L.~Krivodonova.
\newblock Limiters for high-order discontinuous {G}alerkin methods.
\newblock {\em J. Comp. Phys.}, 226:879--896, 2007.
\bibitem{WENOLIM1_QIU}
J.~Qiu and C.-W. Shu.
\newblock {Runge-Kutta} discontinuous {G}alerkin method using {WENO} limiters.
\newblock {\em SIAM J. Sci. Comput.}, 26:907--929, 2005.
\bibitem{WENOLIM_ZHU_2008}
J.~Zhu, J.~Qiu, C.-W. Shu, and M.~Dumbser.
\newblock {Runge-Kutta} discontinuous {G}alerkin method using {WENO} limiters
{II}: {U}nstructured meshes.
\newblock {\em J. Comp. Phys.}, 227:4330--4353, 2008.
\bibitem{WENOLIM_ZHONG}
X.~Zhong and C.-W. Shu.
\newblock A simple weighted essentially nonoscillatory limiter for
{Runge-Kutta} discontinuous {G}alerkin methods.
\newblock {\em J. Comp. Phys.}, 232:397--415, 2013.
\bibitem{HWENOLIM_LUO}
H.~Luo, J.~D. Baum, and R.~L\"ohner.
\newblock A {H}ermite {WENO}-based limiter for discontinuous {G}alerkin method
on unstructured grids.
\newblock {\em J. Comp. Phys.}, 225:686--713, 2007.
\bibitem{ZHANG_SHU_JCP2010}
X.~Zhang and C.-W. Shu.
\newblock On positivity-preserving high order discontinuous {G}alerkin schemes
for compressible {Euler} equations on rectangular meshes.
\newblock {\em J. Comp. Phys.}, 229:8918--8934, 2010.
\bibitem{ZHANG_SHU_JCP2011}
X.~Zhang and C.-W. Shu.
\newblock Positivity-preserving high order discontinuous {G}alerkin schemes
for compressible {Euler} equations with source terms.
\newblock {\em J. Comp. Phys.}, 230:1238--1248, 2011.
\bibitem{ZHANG_SHU_NUMERMATH2010}
X.~Zhang and C.-W. Shu.
\newblock A minimum entropy principle of high order schemes for gas dynamics
equations.
\newblock {\em Numer. Math.}, 121:545--563, 2012.
\bibitem{LV_IHME_PCI_2014}
Y.~Lv and M.~Ihme.
\newblock Computational analysis of re-ignition and re-initiation mechanisms of
quenched detonation waves behind a backward facing step.
\newblock {\em Proc. Combust. Inst.}, 2014.
\newblock Available online.
\bibitem{LV_IHME_JCP_2014}
Y.~Lv and M.~Ihme.
\newblock Discontinuous {G}alerkin method for multicomponent chemically
reacting flows and combustion.
\newblock {\em J. Comp. Phys.}, 270:105--137, 2014.
\bibitem{LAX_ENTROPY_BOUND_1971}
P.~D. Lax.
\newblock Shock waves and entropy.
\newblock In E.~H. Zarantonello, editor, {\em Contributions to Nonlinear
Functional Analysis}, pages 603--634. Academic Press, New York, London, 1971.
\bibitem{TADMOR_1986}
E.~Tadmor.
\newblock A minimum entropy principle in the gas dynamics equations.
\newblock {\em Appl. Numer. Math.}, 2:211--219, 1986.
\bibitem{CUBATURE_FORMULAS}
S.~L. Sobolev and V.~Vaskevich.
\newblock {\em The Theory of Cubature Formulas}.
\newblock Springer, 1997.
\bibitem{PERTHAME_SHU_POSI_1996}
B.~Perthame and C.-W. Shu.
\newblock On positivity preserving finite volume schemes for {E}uler equations.
\newblock {\em Numer. Math.}, 73:119--130, 1996.
\bibitem{ROE_ENTROPY_FIX_TADMOR}
E.~Tadmor.
\newblock Entropy stability theory for difference approximations of nonlinear
conservation laws and related time-dependent problems.
\newblock {\em Acta Numer.}, 12:451--512, 2003.
\bibitem{KINE_ENTROPY_SOLVER_1994}
B.~Khobalatte and B.~Perthame.
\newblock Maximum principle on the entropy and second-order kinetic schemes.
\newblock {\em Math. Comp.}, 62:119--131, 1994.
\bibitem{ZJWANG_REVIEW_2013}
Z.~J. Wang, K.~Fidkowski, R.~Abgrall, F.~Bassi, D.~Caraeni, A.~Cary,
H.~Deconinck, R.~Hartmann, K.~Hillewaert, H.~T. Huynh, N.~Kroll, G.~May,
P.-O.~Persson, B.~van Leer, and M.~Visbal.
\newblock High-order {CFD} methods: {c}urrent status and perspective.
\newblock {\em Int. J. Numer. Meth. Eng.}, 72:811--845, 2013.
\bibitem{COCKBURN_SHU_JSC01}
B.~Cockburn and C.-W. Shu.
\newblock {Runge-Kutta} discontinuous {G}alerkin methods for
convection-dominated problems.
\newblock {\em J. Sci. Comput.}, 16(3):173--261, 2001.
\bibitem{DUNAVANT_QUADRATURE_1985}
D.~Dunavant.
\newblock High degree efficient symmetrical {G}aussian quadrature rules for the
triangle.
\newblock {\em Int. J. Numer. Meth. Eng.}, 21:1129--1148, 1985.
\bibitem{ZHANG_QUADRATURE_2009}
L.~Zhang, T.~Cui, and H.~Liu.
\newblock A set of symmetric quadrature rules on triangles and tetrahedra.
\newblock {\em J. Comput. Math.}, 21:89--96, 2009.
\bibitem{SSP_REF_2001}
S.~Gottlieb, C.-W. Shu, and E.~Tadmor.
\newblock Strong stability-preserving high-order time discretization methods.
\newblock {\em SIAM Review}, 43:89--112, 2001.
\bibitem{DMR_1984}
P.~R. Woodward and P.~Colella.
\newblock The numerical simulation of two-dimensional fluid flow with strong
shocks.
\newblock {\em J. Comp. Phys.}, 54:115--173, 1984.
\bibitem{REFDATA_SHOCKSPHERE_1967}
F.~S. Billig.
\newblock Shock-wave shapes around spherical- and cylindrical-nosed bodies.
\newblock {\em J. Spacecraft Rockets}, 4:822--823, 1967.
\bibitem{ZJWANG_SPV_2006}
Z.~J. Wang and Y.~Liu.
\newblock Spectral (finite) volume method for conservation laws on unstructured
grids {V}: {E}xtension to three-dimensional systems.
\newblock {\em J. Comp. Phys.}, 212:454--472, 2006.
\bibitem{HUYNH_FR_2013}
H.~T. Huynh.
\newblock A flux reconstruction approach to high-order schemes including
discontinuous {G}alerkin methods.
\newblock In {\em 18th AIAA Computational Fluid Dynamics Conference}, Miami,
FL, 2007. AIAA 2007-4079.
\end{thebibliography}
\end{document}
\begin{document}
\begin{frontmatter}
\title{A Technical Note: \\
Two-Step PECE Methods for Approximating Solutions \\
To First- and Second-Order ODEs}
\author[add]{A.\ D.~Freed}
\ead{[email protected]}
\date{\today}
\address[add]{Department of Mechanical Engineering,
Texas A\&\mbox{M} University,
College Station, \\ TX 77843,
United States}
\begin{abstract}
Two-step predictor\slash corrector methods are provided to solve three classes of problems that present themselves as systems of ordinary differential equations (ODEs). In the first class, velocities are given from which displacements are to be solved. In the second class, velocities and accelerations are given from which displacements are to be solved. And in the third class, accelerations are given from which velocities and displacements are to be solved. Two-step methods are not self-starting, so compatible one-step methods are provided for taking the first step. An algorithm is presented for controlling the step size so that the local truncation error does not exceed a specified tolerance.
\end{abstract}
\end{frontmatter}
\section{Multi-Step Methods for Mechanical Engineers}
Multi-step methods \cite{Butcher08,HairerWanner91} are numerical schemes that are used to approximate solutions for systems of ODEs which commonly arise in engineering practice. Because the intended readers of this document are my students, who will become Mechanical Engineers upon graduation, I present these methods using variables that are intuitive to them: time $t$ is the independent variable of integration, and position $\mathbf{x} = \{ x_1 , x_2 , x_3 \}^{\mathsf{T}}$ is the dependent variable of integration (plus, sometimes, velocity) while velocity $\mathbf{v} = \{ v_1 , v_2 , v_3 \}^{\mathsf{T}} = \mathbf{v}(t, \mathbf{x})$ and acceleration $\mathbf{a} = \{ a_1 , a_2 , a_3 \}^{\mathsf{T}} = \mathbf{a}(t, \mathbf{x}, \mathbf{v})$ are functions of these independent and dependent variables. The time rate-of-change of acceleration is jerk $\dot{\mathbf{a}} = \{ \dot{a}_1 , \dot{a}_2 , \dot{a}_3 \}^{\mathsf{T}}$, which is introduced as a means by which improvements in solution accuracy can be made. Problems like these commonly arise in applications within disciplines like kinematics, dynamics, thermo\-dynamics, vibrations, controls, process kinetics, etc. The methods presented in this document apply to systems of any dimension; it is just that $t$, $\mathbf{x}$, $\mathbf{v}$ and $\mathbf{a}$ are physical notions for which my students have an intuitive understanding.
Current engineering curricula expose students to some basic methods like Euler's method (you should never use forward-Euler by itself), a simple Euler predictor with a trapezoidal corrector, often called Heun's method, and \textit{the\/} Runge-Kutta method. Kutta \cite{Kutta01} derived \textit{the\/} Runge-Kutta method (Runge played no part here). This method, likely the most popular of all ODE solvers, was not the method Kutta actually advocated for use. He derived a more accurate fourth-order method in his paper---a method that has sadly become lost to the obscurity of dusty shelves.
The intent of this note is to inform my students about the existence and utility of a whole other class of ODE solvers that have great value in many applications. These are called multi-step methods. A multi-step method makes an informed decision on the direction in which its solution will advance into the future based upon where that solution has been in the recent past. In contrast, a Runge-Kutta method samples multiple paths in the present to make an informed decision on the direction in which its solution will advance into the future. The past does not enter into the Runge-Kutta process. These two classes of numerical methods are fundamentally different in this regard. There is an emerging field within computational mathematics where these two approaches are being melded into one. They are called general linear methods, and two such methods can be found in Appendix~D of my textbook \cite{Freed14}. We will not address them here.
\section{The Objective}
Throughout this document we shall consider an interval in time $[0,T]$ over which $N$ solutions are to be extracted at nodes $n = 1,2, \ldots, N$ spaced at uniform intervals in time with a common step size of $h = T/N$ separating them. This is referred to as the global step size. A local step size will be introduced later, which will be the actual step size that an integrator uses to advance along its solution path. This size dynamically adjusts to maintain solution accuracy, and is under the control of a proportional integral (PI) controller.
Node $n$ is located at current time. Here is where the solution front resides. Node $n \! - \! 1$ is where the previous solution was acquired, while node $n \! + \! 1$ is where the next solution is to be calculated. In this regard, information storage required by these methods is compatible with memory strategies and coding practices adopted by many industrial codes like finite elements. This requirement of working solely with nodes $n \! - \! 1$, $n$, $n \! + \! 1$ will limit the accuracy that one can achieve with these methods. Higher-order multi-step methods require more nodes, and as such, more information history.
Our objective is to construct a collection of numerical methods that resemble the popular, second-order, backward-difference formula \cite{HairerWanner91} denoted as BDF2 in the literature and software packages. BDF2 is described by
\begin{equation*}
\mathbf{x}_{n+1} = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n -
\mathbf{x}_{n-1} \bigr) + \tfrac{2}{3} \, h
\mathbf{v}_{n+1} + \mathcal{O} (h^3)
\end{equation*}
and is an implicit method in that $\mathbf{v} = \mathbf{v} (t, \mathbf{x})$, typically, and therefore $\mathbf{x}_{n+1}$ appears on both sides of the equals sign. There are good reasons for selecting this numerical model upon which to construct other methods; specifically, BDF2 is a convergent method in that it is consistent and A~stable \cite{Butcher08}. These are noble properties to aspire to, but whose discussion lies beyond the scope of this document.
Here your professor seeks to provide techniques that address three questions: \textit{i\/}) How can one apply an implicit multi-step method where you need to know the solution to get the solution? \textit{ii\/}) How can one startup a multi-step method, because at the initial condition there is no solution history? and \textit{iii\/}) Numerical ODE solvers typically solve first-order systems, but Newton's Laws for Motion are described with a second-order system. How can one construct an ODE solver designed to handle these types of problems?
An answer to the first question is: We will introduce a predictor to get an initial solution estimate; specifically, predict\slash evaluate\slash correct\slash evaluate (PECE) schemes are developed. An answer to the second question is: A single-step method can be used to start up a two-step method. And an answer to the third question is: We will use the natural features of multi-step methods and Taylor series expansions to construct solvers for second-order ODEs. Several of the methods found in this document are not found in the literature. Your professor created them just for you!
\subsection{Strategy}
The strategy used to construct multi-step algorithms is to expand an appropriate linear combination of Taylor series for displacement $\mathbf{x}$ taken about solution nodes at discrete times. In our case, expansions are taken about times $t_{n-1}$, $t_n$ and $t_{n+1}$ such that their sum replicates the general structure of the BDF2 method. Specifically, we seek two-step methods with constituents $\mathbf{x}_{n+1} = \tfrac{1}{3} (4 \textbf{x}_n - \textbf{x}_{n-1}) + \cdots$ that are common betwixt them.
Each Taylor series is expanded out to include acceleration $\mathbf{a}$ for methods that solve first-order ODEs, and each Taylor series is expanded out to include jerk $\dot{\mathbf{a}}$ for methods that solve second-order ODEs. The pertinent series for displacement include
\begin{subequations}
\label{TaylorDisplacements}
\begin{align}
\mathbf{x}_{n+1} & = \mathbf{x}_n + h \mathbf{v}_n +
\tfrac{1}{2} h^2 \mathbf{a}_n + \tfrac{1}{6} h^3
\dot{\mathbf{a}}_n + \cdots
\label{displacementA} \\
\mathbf{x}_n & = \mathbf{x}_{n+1} - h \mathbf{v}_{n+1} +
\tfrac{1}{2} h^2 \mathbf{a}_{n+1} -
\tfrac{1}{6} h^3 \dot{\mathbf{a}}_{n+1} + \cdots
\label{displacementB} \\
\mathbf{x}_n & = \mathbf{x}_{n-1} + h \mathbf{v}_{n-1} +
\tfrac{1}{2} h^2 \mathbf{a}_{n-1} +
\tfrac{1}{6} h^3 \dot{\mathbf{a}}_{n-1} + \cdots
\label{displacementC} \\
\mathbf{x}_{n-1} & = \mathbf{x}_n - h \mathbf{v}_n +
\tfrac{1}{2} h^2 \mathbf{a}_n - \tfrac{1}{6} h^3
\dot{\mathbf{a}}_n + \cdots
\label{displacementD}
\end{align}
\end{subequations}
where the set of admissible expansions only involve nodes $n \! - \! 1$, $n$ and $n \! + \! 1$. Once these are in place, like Taylor expansions for the velocity are secured
\begin{subequations}
\label{TaylorVelocities}
\begin{align}
\mathbf{v}_{n+1} & = \mathbf{v}_n + h \mathbf{a}_n +
\tfrac{1}{2} h^2 \dot{\mathbf{a}}_n + \cdots
\label{velocityA} \\
\mathbf{v}_n & = \mathbf{v}_{n+1} - h \mathbf{a}_{n+1} +
\tfrac{1}{2} h^2 \dot{\mathbf{a}}_{n+1} + \cdots
\label{velocityB} \\
\mathbf{v}_n & = \mathbf{v}_{n-1} + h \mathbf{a}_{n-1} +
\tfrac{1}{2} h^2 \dot{\mathbf{a}}_{n-1} + \cdots
\label{velocityC} \\
\mathbf{v}_{n-1} & = \mathbf{v}_n - h \mathbf{a}_n +
\tfrac{1}{2} h^2 \dot{\mathbf{a}}_n + \cdots .
\label{velocityD}
\end{align}
\end{subequations}
These series are solved for acceleration for the first-order ODE solvers, and for jerk for the second-order ODE solvers. These solutions for acceleration\slash jerk are then inserted back into the original series for displacement. The net effect is to incorporate contributions for acceleration\slash jerk by approximating them in terms of velocities and, possibly, accelerations, thereby increasing the order of accuracy for the overall method by one order, e.g. from second-order, i.e., $\mathcal{O}(h^3)$, to third-order, viz., $\mathcal{O}(h^4)$, for the second-order ODE methods. This is accomplished without the solver explicitly needing any information about jerk from the user, which would be hard to come by in practice.
We speak of a method being, say, second-order accurate, and designate this with the notation $\mathcal{O}(h^3)$. There may seem to be an apparent discrepancy between the order of a method and the exponent of $h$. This comes into being because the `order' of a method represents the global order of accuracy in a solution, whereas the exponent on the $h$-term represents an order of accuracy in the solution over a local step of integration with the exponent on $h$ designating the order of its error estimate.
Our objective, viz., $\mathbf{x}_{n+1} = \mathbf{x}_n + \cdots$ for one-step (startup) methods and $\mathbf{x}_{n+1} = \tfrac{1}{3} (4 \textbf{x}_n - \textbf{x}_{n-1}) + \cdots$ for two-step methods, is achieved by applying the following linear combinations of Taylor series
\begin{displaymath}
\textrm{predictors} \Leftarrow \begin{cases}
1 (\ref{displacementA}) & \textrm{one-step} \\
1 (\ref{displacementA}) -
\tfrac{1}{6} (\ref{displacementC}) +
\tfrac{1}{6} (\ref{displacementD}) &
\textrm{two-step}
\end{cases}
\end{displaymath}
and
\begin{displaymath}
\textrm{correctors} \Leftarrow \begin{cases}
\tfrac{1}{2} (\ref{displacementA}) -
\tfrac{1}{2} (\ref{displacementB}) & \textrm{one-step} \\
\tfrac{4}{3} (\ref{displacementA}) +
\tfrac{1}{3} (\ref{displacementB}) +
\tfrac{1}{3} (\ref{displacementD}) &
\textrm{two-step } \# 1 \\
\tfrac{4}{3} (\ref{displacementA}) +
\tfrac{1}{3} (\ref{displacementB}) -
\tfrac{1}{6} (\ref{displacementC}) +
\tfrac{1}{6} (\ref{displacementD}) &
\textrm{two-step } \# 2
\end{cases}
\end{displaymath}
with
\begin{displaymath}
\textrm{truncation errors} \Leftarrow \begin{cases}
\tfrac{1}{2} \| (\ref{displacementA}) +
(\ref{displacementB}) \| & \textrm{one-step} \\
\tfrac{1}{6} \| 2 (\ref{displacementA}) +
2 (\ref{displacementB}) + 1 (\ref{displacementC}) +
1 (\ref{displacementD}) \| & \textrm{two-step } \# 1 \\
\tfrac{1}{3} \| (\ref{displacementA}) +
(\ref{displacementB}) \| & \textrm{two-step } \# 2
\end{cases}
\end{displaymath}
wherein the parenthetical numbers refer to the sub-equations listed in Eq.~(\ref{TaylorDisplacements}) and where the coefficients out front designate the weight applied to that formula. To be a corrector requires expansion (\ref{displacementB}), which must not appear in a predictor.
There are two ways to construct a corrector that satisfy our objective, and both will be used. A design objective is to come up with a predictor\slash corrector pair that weigh their contributions the same; specifically, their displacements are weighted the same, their velocities are weighted the same, and when present, their accelerations are weighted the same, too.
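As a worked illustration of this recipe (a sketch of the algebra that is carried out in the sections that follow), consider the two-step predictor combination $1 (\ref{displacementA}) - \tfrac{1}{6} (\ref{displacementC}) + \tfrac{1}{6} (\ref{displacementD})$ with the series truncated after their acceleration terms. Collecting terms gives
\begin{displaymath}
\mathbf{x}_{n+1} = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) + \tfrac{1}{6} h \bigl( 5 \mathbf{v}_n - \mathbf{v}_{n-1} \bigr) + \tfrac{1}{12} h^2 \bigl( 7 \mathbf{a}_n - \mathbf{a}_{n-1} \bigr) + \cdots
\end{displaymath}
and, for a first-order ODE solver, replacing both $\mathbf{a}_n$ and $\mathbf{a}_{n-1}$ with the estimate $( \mathbf{v}_n - \mathbf{v}_{n-1} ) / h$ obtained from the velocity expansions (\ref{velocityC} \& \ref{velocityD}) turns the acceleration contribution into $\tfrac{1}{2} h ( \mathbf{v}_n - \mathbf{v}_{n-1} )$, so that
\begin{displaymath}
\mathbf{x}_{n+1} = \tfrac{1}{3} \bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) + \tfrac{2}{3} h \bigl( 2 \mathbf{v}_n - \mathbf{v}_{n-1} \bigr) + \mathcal{O} (h^3) ,
\end{displaymath}
which is the two-step predictor of Eq.~(\ref{1stOrderPredictor}) below.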
\section{PECE Methods for First-Order ODEs}
\label{Sec:firstOrder}
The following algorithm is suitable for numerically approximating solutions to stiff systems of ODEs, which engineers commonly encounter. The idea of mathematical stiffness is illustrated through an example in \S\ref{Sec:Brusselator}.
For this class of problems it is assumed that velocity is described as a function in time and displacement, e.g., at step $n$ a formula would give $\mathbf{v}_n = \mathbf{v} (t_n , \mathbf{x}_n)$. An initial condition $\mathbf{x} (0) = \mathbf{x}_0$ is required to start an analysis. The objective is to solve this ODE for displacement $\mathbf{x}_{n+1}$ evaluated at the next moment in time $t_{n+1}$, wherein $n$ sequences as $n = 0, 1, \ldots , N \! - \! 1$.
Heun's method is used to take the first integration step. Begin by applying a predictor (it is a forward Euler step)
\begin{subequations}
\label{startUp1stOrderODEs}
\begin{align}
\mathbf{x}_1^p & = \mathbf{x}_0 + h \mathbf{v}_0 +
\mathcal{O} (h^2)
\label{startUp1stOrderPredictor} \\
\intertext{which is to be followed with an evaluation for velocity $\mathbf{v}^p_1 = \mathbf{v} (t_1 , \mathbf{x}_1^p)$ using this predicted estimate for displacement. A corrector is then applied (it is the trapezoidal rule)}
\mathbf{x}_1 & = \mathbf{x}_0 + \tfrac{1}{2} h
\bigl( \mathbf{v}_1^p + \mathbf{v}_0 \bigr) +
\mathcal{O} (h^3)
\label{startUp1stOrderCorrector}
\end{align}
\end{subequations}
after which a final re-evaluation for velocity $\mathbf{v}_1 = \mathbf{v} (t_1 , \mathbf{x}_1)$ is made and the first step comes to a close. In this case, using another Taylor series to subtract out the influences from acceleration did not bring about any change to the formula. This is not unexpected, as the trapezoidal method is already second-order accurate, i.e., it has a truncation error on the order of $\mathcal{O}(h^3)$. The step counter is assigned a value of $n = 1$, after which control of the solution process is passed over to the following method.
For entering step counts that lie within the interval $n=1$ to $n = N \! - \! 1$, numeric integration continues by employing a predictor
\begin{subequations}
\label{1stOrderODEs}
\begin{align}
\mathbf{x}_{n+1}^p & = \tfrac{1}{3}
\bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) +
\tfrac{2}{3} h \bigl( 2\mathbf{v}_n - \mathbf{v}_{n-1}
\bigr) + \mathcal{O} (h^3)
\label{1stOrderPredictor} \\
\intertext{followed by an evaluation for velocity via $\mathbf{v}^p_{n+1} = \mathbf{v} (t_{n+1} , \mathbf{x}_{n+1}^p)$ using this predicted estimate for displacement. Here including correction terms for acceleration changed $\tfrac{1}{6} h ( 5 \mathbf{v}_n - \mathbf{v}_{n-1})$ to $\tfrac{2}{3} h ( 2 \mathbf{v}_n - \mathbf{v}_{n-1} )$ and in the process improved its accuracy from $\mathcal{O}(h^2)$ to $\mathcal{O}(h^3)$. The corrector obtained according to our recipe for a type \#1 method is}
\mathbf{x}_{n+1} & = \tfrac{1}{3}
\bigl( 4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) +
\tfrac{2}{3} h \mathbf{v}^p_{n+1} + \mathcal{O} (h^3)
\label{1stOrderCorrector}
\end{align}
\end{subequations}
which culminates with a re-evaluation for $\mathbf{v}_{n+1} = \mathbf{v} ( t_{n+1}, \mathbf{x}_{n+1})$. This corrector is the well-known BDF2 formula, the method we are generalizing around. Including correction terms for acceleration changed $- \tfrac{1}{3} h ( \mathbf{v}^p_{n+1} - 3 \mathbf{v}_n)$ to $\tfrac{2}{3} h \mathbf{v}^p_{n+1}$ and in the process improved its accuracy from $\mathcal{O}(h^2)$ to $\mathcal{O}(h^3)$. For both integrators, displacement has weight 1, while velocity has weight $\tfrac{2}{3} h$. The predictor and corrector are consistent in this regard, a required design objective when deriving an admissible PECE method.
Variables are to be updated according to $n \! - \! 1 \leftarrow n$ and $n \leftarrow n \! + \! 1$ after which counter $n$ gets incremented. After finishing with the data management, the solution is ready for advancement to the next integration step, with looping continuing until $n = N$ whereat the solution becomes complete.
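For concreteness, a minimal fixed-step Python sketch of this PECE scheme (start-up step (\ref{startUp1stOrderODEs}) followed by the two-step pair (\ref{1stOrderODEs})), written without the step-size control of \S\ref{Sec:PI}, is given below; the function name and its interface are illustrative assumptions, not part of the method itself.
\begin{verbatim}
import numpy as np

def pece_first_order(v, x0, T, N):
    # v(t, x) returns the velocity vector; x0 is the initial condition
    h = T / N
    x = [np.asarray(x0, dtype=float)]
    v0 = v(0.0, x[0])
    xp = x[0] + h * v0                        # predict (forward Euler)
    vp = v(h, xp)                             # evaluate
    x.append(x[0] + 0.5 * h * (vp + v0))      # correct (trapezoidal rule)
    v_prev, v_curr = v0, v(h, x[1])           # re-evaluate
    for n in range(1, N):                     # two-step PECE loop
        t = (n + 1) * h
        xm = (4.0 * x[n] - x[n - 1]) / 3.0
        xp = xm + (2.0 * h / 3.0) * (2.0 * v_curr - v_prev)   # predict
        vp = v(t, xp)                                         # evaluate
        xc = xm + (2.0 * h / 3.0) * vp                        # correct (BDF2-like)
        x.append(xc)
        v_prev, v_curr = v_curr, v(t, xc)                     # re-evaluate
    return np.array(x)
\end{verbatim}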
\section{PECE Methods for Second-Order ODEs}
\label{Sec:secondOrder}
For this class of problems it is assumed that the velocity is described as a function of time and displacement, e.g., $\mathbf{v}_n = \mathbf{v} (t_n , \mathbf{x}_n)$, and likewise, the acceleration is also a prescribed function in terms of time, displacement and velocity, e.g., $\mathbf{a}_n = \mathbf{a} (t_n , \mathbf{x}_n , \mathbf{v}_n)$. An initial condition is to be supplied by the user, viz., $\mathbf{x} (0) = \mathbf{x}_0$. The objective of this method is to solve this second-order ODE for displacement $\mathbf{x}_{n+1}$, which is to be evaluated at the next moment in time $t_{n+1}$, wherein $n=0,1, \ldots , N \! - \! 1$.
Like the previous method, this is a two-step method so, consequently, it is not self-starting. To take a first step, apply the predictor (a straightforward Taylor series expansion)
\begin{subequations}
\label{startup}
\begin{align}
\mathbf{x}_1^p & = \mathbf{x}_0 + h \mathbf{v}_0 +
\tfrac{1}{2} h^2 \mathbf{a}_0 + \mathcal{O} (h^3)
\label{startupPredictor} \\
\intertext{followed by evaluations $\mathbf{v}^p_1 = \mathbf{v} (t_1, \mathbf{x}^p_1)$ and $\mathbf{a}^p_1 = \mathbf{a} (t_1, \mathbf{x}^p_1, \mathbf{v}^p_1)$ to prepare for executing its corrector}
\mathbf{x}_1 & = \mathbf{x}_0 + \tfrac{1}{2} h
\bigl( \mathbf{v}^p_1 + \mathbf{v}_0 \bigr) -
\tfrac{1}{12} h^2 \bigl( \mathbf{a}^p_1 -
\mathbf{a}_0 \bigr) + \mathcal{O} (h^4)
\label{startupCorrector}
\end{align}
\end{subequations}
after which one re-evaluates $\mathbf{v}_1 = \mathbf{v} (t_1, \mathbf{x}_1)$ and $\mathbf{a}_1 = \mathbf{a} (t_1, \mathbf{x}_1, \mathbf{v}_1)$. Including correction terms for jerk changed $- \tfrac{1}{4} h^2 ( \mathbf{a}^p_{n+1} - \mathbf{a}_n)$ to $- \tfrac{1}{12} h^2 ( \mathbf{a}^p_{n+1} - \mathbf{a}_n)$ and in the process improved its accuracy from $\mathcal{O}(h^3)$ to $\mathcal{O}(h^4)$. After this integrator has been run once, a switch is made to employ the two-step PECE method described below to finish up.
For entering step counts that lie within the interval $n=1$ to $n = N \! - \! 1$, numeric integration continues by employing a predictor
\begin{subequations}
\label{PECE}
\begin{align}
\mathbf{x}_{n+1}^p & = \tfrac{1}{3} \bigl(
4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) +
\tfrac{1}{6} h \bigl( 3 \mathbf{v}_n +
\mathbf{v}_{n-1} \bigr) \notag \\
\mbox{} & \hspace{4.5cm} +
\tfrac{1}{36} h^2 \bigl( 31 \mathbf{a}_n -
\mathbf{a}_{n-1} \bigr) + \mathcal{O} (h^4)
\label{predictor} \\
\intertext{followed by evaluations $\mathbf{v}^p_{n+1} = \mathbf{v} (t_{n+1}, \mathbf{x}^p_{n+1})$ and $\mathbf{a}^p_{n+1} = \mathbf{a} (t_{n+1}, \mathbf{x}^p_{n+1}, \mathbf{v}^p_{n+1})$ to be made sequentially. Here including correction terms for jerk changed $\tfrac{1}{6} h ( 5 \mathbf{v}_n - \mathbf{v}_{n-1} )$ $+$ $\tfrac{1}{12} h^2 ( 7 \mathbf{a}_n - \mathbf{a}_{n-1} )$ to $\tfrac{1}{6} h ( 3 \mathbf{v}_n + \mathbf{v}_{n-1} )$ $+$ $\tfrac{1}{36} h^2 ( 31 \mathbf{a}_n - \mathbf{a}_{n-1} )$ and in the process improved its accuracy from $\mathcal{O}(h^3)$ to $\mathcal{O}(h^4)$. A corrector that is consistent with the above predictor is}
\mathbf{x}_{n+1} & = \tfrac{1}{3} \bigl(
4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) +
\tfrac{1}{24} h \bigl( \mathbf{v}^p_{n+1} +
14 \mathbf{v}_n + \mathbf{v}_{n-1} \bigr)
\notag \\
\mbox{} & \hspace{4.5cm} +
\tfrac{1}{72} h^2 \bigl( 10 \mathbf{a}^p_{n+1} +
51 \mathbf{a}_n - \mathbf{a}_{n-1} \bigr) +
\mathcal{O} (h^4)
\label{corrector}
\end{align}
\end{subequations}
whose derivation follows below in \S\ref{Sec:derivation}. With the corrector having been run, finish by re-evaluating $\mathbf{v}_{n+1} = \mathbf{v} (t_{n+1}, \mathbf{x}_{n+1})$ and $\mathbf{a}_{n+1} = \mathbf{a} (t_{n+1}, \mathbf{x}_{n+1}, \mathbf{v}_{n+1})$.
Variables are to be updated according to $n \! - \! 1 \leftarrow n$, $n \leftarrow n \! + \! 1$, plus the counter $n$ gets incremented. After that the solution is ready for advancement to the next integration step, with looping continuing until $n = N$ whereat the solution becomes complete.
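A corresponding fixed-step Python sketch of this second-order method (start-up (\ref{startup}) followed by the PECE pair (\ref{PECE})) is given below; again, the interface is an illustrative assumption and the step-size control of \S\ref{Sec:PI} is omitted.
\begin{verbatim}
def pece_second_order(v, a, x0, T, N):
    # v(t, x) and a(t, x, v) are given; x0 may be a float or a NumPy array
    h = T / N
    x = [x0]
    v0 = v(0.0, x0); a0 = a(0.0, x0, v0)
    xp = x0 + h * v0 + 0.5 * h**2 * a0                      # start-up predictor
    vp = v(h, xp); ap = a(h, xp, vp)                        # evaluate
    x.append(x0 + 0.5*h*(vp + v0) - h**2*(ap - a0)/12.0)    # start-up corrector
    v_nm1, a_nm1 = v0, a0
    v_n = v(h, x[1]); a_n = a(h, x[1], v_n)                 # re-evaluate
    for n in range(1, N):                                   # two-step PECE loop
        t = (n + 1) * h
        xm = (4*x[n] - x[n-1]) / 3.0
        xp = xm + h*(3*v_n + v_nm1)/6.0 + h**2*(31*a_n - a_nm1)/36.0
        vp = v(t, xp); ap = a(t, xp, vp)                    # evaluate
        xc = xm + h*(vp + 14*v_n + v_nm1)/24.0 \
                + h**2*(10*ap + 51*a_n - a_nm1)/72.0        # correct
        x.append(xc)
        v_nm1, a_nm1 = v_n, a_n
        v_n = v(t, xc); a_n = a(t, xc, v_n)                 # re-evaluate
    return x
\end{verbatim}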
\subsection{Derivation of the Corrector}
\label{Sec:derivation}
The corrector obtained via our recipe for a type \#1 corrector is
\begin{displaymath}
\mathbf{x}_{n+1} = \tfrac{1}{3} \bigl(
4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) +
\tfrac{1}{9} h \bigl( \mathbf{v}^p_{n+1} +
5 \mathbf{v}_n \bigr) +
\tfrac{2}{9} h^2 \bigl( \mathbf{a}^p_{n+1} +
3 \mathbf{a}_n \bigr) + \mathcal{O} (h^4)
\end{displaymath}
where inclusion of correction terms for jerk changed $\tfrac{1}{3} h ( -\mathbf{v}^p_{n+1} + 3 \mathbf{v}_n )$ $+$ $\tfrac{1}{6} h^2 ( \mathbf{a}^p_{n+1} + 5 \mathbf{a}_n )$ to $\tfrac{1}{9} h ( \mathbf{v}^p_{n+1} + 5 \mathbf{v}_n )$ $+$ $\tfrac{2}{9} h^2 ( \mathbf{a}^p_{n+1} + 3 \mathbf{a}_n )$ and in the process improved its accuracy from $\mathcal{O}(h^3)$ to $\mathcal{O}(h^4)$.
The corrector obtained via our recipe for a type \#2 corrector is
\begin{displaymath}
\begin{aligned}
\mathbf{x}_{n+1} & = \tfrac{1}{3} \bigl(
4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) +
\tfrac{1}{36} h \bigl( - \mathbf{v}^p_{n+1} +
22 \mathbf{v}_n + 3 \mathbf{v}_{n-1} \bigr) \\
\mbox{} & \hspace{4.5cm} +
\tfrac{1}{36} h^2 \bigl( 2 \mathbf{a}^p_{n+1} +
27 \mathbf{a}_n - \mathbf{a}_{n-1} \bigr) +
\mathcal{O} (h^4)
\end{aligned}
\end{displaymath}
where the correction terms for jerk changed $-\tfrac{1}{6} h ( -2 \mathbf{v}_{n+1}^p + 7 \mathbf{v}_n - \mathbf{v}_{n-1} )$ $+$ $\tfrac{1}{12} h^2 ( 2 \mathbf{a}^p_{n+1} + 9 \mathbf{a}_n - \mathbf{a}_{n-1} )$ to $\tfrac{1}{36} h ( - \mathbf{v}^p_{n+1} + 22 \mathbf{v}_n + 3 \mathbf{v}_{n-1} )$ $+$ $\tfrac{1}{36} h^2 ( 2 \mathbf{a}^p_{n+1} + 27 \mathbf{a}_n - \mathbf{a}_{n-1} )$ and in the process improved its accuracy from $\mathcal{O}(h^3)$ to $\mathcal{O}(h^4)$.
Unfortunately, neither of these two correctors is consistent with the predictor in Eq.~(\ref{predictor}). This predictor has a weight imposed on displacement of 1, a weight imposed on velocity of $\tfrac{2}{3} h$, and a weight imposed on acceleration of $\tfrac{5}{6} h^2$. It is desirable to seek a corrector with these same weights. This would imply that if a field, say acceleration, were uniform over a time interval, say $[t_{n-1}, t_{n+1}]$, then both the predictor and corrector would produce the same numeric value for acceleration's contribution to the overall result at this location in time. The correctors derived from types~\#1 and \#2 are consistent with this predictor for all contributions except acceleration. In terms of acceleration, the predictor has a weight of $\tfrac{5}{6} h^2$, while corrector~\#1 has a weight of $\tfrac{8}{9} h^2$ and corrector \#2 has a weight of $\tfrac{7}{9} h^2$. Curiously, averaging correctors~\#1 and \#2 does produce the correct weight. There is consistency between the predictor and this `averaged' corrector, which is the corrector put forward in Eq.~(\ref{corrector}).
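A quick arithmetic check makes the averaging argument transparent: summing the acceleration coefficients in each formula gives
\begin{displaymath}
\tfrac{2}{9} (1 + 3) = \tfrac{8}{9} , \qquad
\tfrac{1}{36} (2 + 27 - 1) = \tfrac{7}{9} , \qquad
\tfrac{1}{2} \bigl( \tfrac{8}{9} + \tfrac{7}{9} \bigr) = \tfrac{5}{6} =
\tfrac{1}{36} (31 - 1) = \tfrac{1}{72} (10 + 51 - 1) ,
\end{displaymath}
i.e., the averaged corrector of Eq.~(\ref{corrector}) carries the same acceleration weight of $\tfrac{5}{6} h^2$ as the predictor of Eq.~(\ref{predictor}).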
\subsection{When Only Acceleration is Controlled}
\label{Sec:Newton}
There is an important class of problems that is similar to the above class in that acceleration is described through a function of state; however, velocity is not. Velocity, like displacement, is a response function for this class of problems. Acceleration is still described by a function of time, displacement and velocity, e.g., $\mathbf{a}_n = \mathbf{a} (t_n , \mathbf{x}_n , \mathbf{v}_n)$; however, instead of the velocity being given as a function, it, like displacement, is to be solved through integration. Two initial conditions must be supplied, viz., $\mathbf{x} (0) = \mathbf{x}_0$ and $\mathbf{v} (0 , \mathbf{x}_0) = \mathbf{v}_0$. This is how Newton's Second Law usually presents itself for analysis. Beeman \cite{Beeman76} constructed a different set of multi-step methods that can also be used to get solutions for this class of problems.
This is a two-step method. Therefore, it will require a one-step method to startup an analysis. To start integration, take the first step using predictors
\begin{subequations}
\label{pairedStartUp}
\begin{align}
\mathbf{x}_1^p & = \mathbf{x}_0 + h \mathbf{v}_0 +
\tfrac{1}{2} h^2 \mathbf{a}_0 + \mathcal{O} (h^3)
\label{startupDisplacementPredictor} \\
\mathbf{v}^p_1 & = \mathbf{v}_0 + h \mathbf{a}_0 +
\mathcal{O} (h^3)
\label{startUpVelocityPredictor} \\
\intertext{followed by an evaluation for $\mathbf{a}^p_1 = \mathbf{a} (t_1, \mathbf{x}^p_1, \mathbf{v}^p_1)$. Their paired correctors are}
\mathbf{x}_1 & = \mathbf{x}_0 + \tfrac{1}{2} h
\bigl( \mathbf{v}^p_1 + \mathbf{v}_0 \bigr) -
\tfrac{1}{12} h^2 \bigl( \mathbf{a}^p_1 -
\mathbf{a}_0 \bigr) + \mathcal{O} (h^4)
\label{startupDisplacementCorrector} \\
\mathbf{v}_1 & = \mathbf{v}_0 + \tfrac{1}{2} h
\bigl( \mathbf{a}_1^p + \mathbf{a}_0 \bigr) +
\mathcal{O} (h^4)
\label{startUpVelocityCorrector}
\end{align}
\end{subequations}
followed with a re-evaluation for $\mathbf{a}_1 = \mathbf{a} (t_1, \mathbf{x}_1, \mathbf{v}_1)$. With the first step of integration taken, one can switch to the PECE algorithm described below.
For entering step counts that lie within the interval $n=1$ to $n = N \! - \! 1$, numeric integration continues by employing predictors
\begin{subequations}
\label{pairedMethods}
\begin{align}
\mathbf{x}_{n+1}^p & = \tfrac{1}{3} \bigl(
4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) +
\tfrac{1}{6} h \bigl( 3 \mathbf{v}_n +
\mathbf{v}_{n-1} \bigr) \notag \\
\mbox{} & \hspace{3.175cm} +
\tfrac{1}{36} h^2 \bigl( 31 \mathbf{a}_n -
\mathbf{a}_{n-1} \bigr) + \mathcal{O} (h^4)
\label{displacementPredictor} \\
\mathbf{v}_{n+1}^p & = \tfrac{1}{3}
\bigl( 4 \mathbf{v}_n - \mathbf{v}_{n-1} \bigr) +
\tfrac{2}{3} h \bigl( 2\mathbf{a}_n - \mathbf{a}_{n-1}
\bigr) + \mathcal{O} (h^4)
\label{velocityPredictor} \\
\intertext{followed with an evaluation of $\mathbf{a}^p_{n+1} = \mathbf{a} (t_{n+1}, \mathbf{x}^p_{n+1}, \mathbf{v}^p_{n+1})$. The paired correctors belonging with these predictors are}
\mathbf{x}_{n+1} & = \tfrac{1}{3} \bigl(
4 \mathbf{x}_n - \mathbf{x}_{n-1} \bigr) +
\tfrac{1}{24} h \bigl( \mathbf{v}^p_{n+1} +
14 \mathbf{v}_n + \mathbf{v}_{n-1} \bigr)
\notag \\
\mbox{} & \hspace{3.175cm} +
\tfrac{1}{72} h^2 \bigl( 10 \mathbf{a}^p_{n+1} +
51 \mathbf{a}_n - \mathbf{a}_{n-1} \bigr) +
\mathcal{O} (h^4)
\label{displacementCorrector} \\
\mathbf{v}_{n+1} & = \tfrac{1}{3}
\bigl( 4 \mathbf{v}_n - \mathbf{v}_{n-1} \bigr) +
\tfrac{2}{3} h \mathbf{a}^p_{n+1} + \mathcal{O} (h^4)
\label{velocityCorrector}
\end{align}
\end{subequations}
which are followed with a re-evaluation for $\mathbf{a}_{n+1} = \mathbf{a} (t_{n+1}, \mathbf{x}_{n+1}, \mathbf{v}_{n+1})$.
Variables are to be updated according to $n \! - \! 1 \leftarrow n$ and $n \leftarrow n \! + \! 1$, after which counter $n$ gets incremented. Upon finishing the data management, a solution is ready for advancement to the next integration step, with looping continuing until $n = N$ whereat the solution becomes complete.
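The following Python sketch strings these pieces together (start-up (\ref{pairedStartUp}) followed by the paired PECE formul\ae\ (\ref{pairedMethods})) with a fixed step size; the interface is an illustrative assumption, and the step-size control of \S\ref{Sec:PI} is again omitted.
\begin{verbatim}
def pece_newton(a, x0, v0, T, N):
    # only a(t, x, v) is given; solve for both x and v
    h = T / N
    x, v = [x0], [v0]
    a0 = a(0.0, x0, v0)
    xp = x0 + h*v0 + 0.5*h**2*a0                            # start-up predictors
    vp = v0 + h*a0
    ap = a(h, xp, vp)                                       # evaluate
    x.append(x0 + 0.5*h*(vp + v0) - h**2*(ap - a0)/12.0)    # start-up correctors
    v.append(v0 + 0.5*h*(ap + a0))
    a_nm1, a_n = a0, a(h, x[1], v[1])                       # re-evaluate
    for n in range(1, N):                                   # two-step PECE loop
        t = (n + 1) * h
        xm = (4*x[n] - x[n-1]) / 3.0
        vm = (4*v[n] - v[n-1]) / 3.0
        xp = xm + h*(3*v[n] + v[n-1])/6.0 + h**2*(31*a_n - a_nm1)/36.0
        vp = vm + 2.0*h*(2*a_n - a_nm1)/3.0
        ap = a(t, xp, vp)                                   # evaluate
        x.append(xm + h*(vp + 14*v[n] + v[n-1])/24.0
                    + h**2*(10*ap + 51*a_n - a_nm1)/72.0)   # correct
        v.append(vm + 2.0*h*ap/3.0)
        a_nm1, a_n = a_n, a(t, x[n+1], v[n+1])              # re-evaluate
    return x, v
\end{verbatim}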
\section{Error and Step-Size Control}
\label{Sec:PI}
To be able to control the local truncation error one must first have an estimate for its value. Here error is defined as a norm in the difference between predicted and corrected values. A recipe for computing this is stated in the \textit{Strategy\/} section. These expressions, although informative, cannot be used as stated because Taylor expansions for velocity have been applied to remove the next higher-order term in the Taylor series for displacement to improve accuracy.
An estimate for truncation error is simply
\begin{equation}
\varepsilon_{n+1} = \frac{ \| \mathbf{x}_{n+1} -
\mathbf{x}^p_{n+1} \| }
{\max (1 , \| \mathbf{x}_{n+1} \| )}
\label{truncationError}
\end{equation}
which can be used to control the size of a time step applied to an integrator, i.e., a local time step. Our objective here is to keep $\varepsilon$ below some allowable error, i.e., a user specified tolerance denoted as \textit{tol}, typically set within the range of $[10^{-8}, 10^{-2}]$.
At this juncture it is instructive to introduce separate notations for the two time steps that arise in a typical implementation for an algorithm of this type into code. Let $\Delta t$ denote the global time step, and let $h$ denote the local time step. The global time step is considered to be uniformly sized at $\Delta t = T / N$, where $T$ is the time at which analysis stops and $N$ is the number of discrete nodes whereat information is to be passed back from the solver to its driver. Typically $N$ is selected to be dense enough so that a user can create a suitable graphical representation of the result. On the other hand, the local time step $h$ that appears in formul\ae\ (\ref{startUp1stOrderODEs}--\ref{pairedMethods}) is dynamically sized to maintain accuracy. If error $\varepsilon$ becomes too large, then $h$ is reduced, and if it becomes too small, then $h$ is increased.
If there is to be a local time step of size $h$ that adjusts dynamically, then the first question one must answer is: What is an acceptable value for $h$ to start an integration with? It has been your professor's experience that the user is not as reliable in this regard as he\slash she would like to believe. The following automated procedure has been found to be useful in this regard \cite{FreedIskovitz96}. From the initial conditions, compute
\begin{displaymath}
h_0 = \frac{ \| \mathbf{x}_0 \| }{\| \mathbf{v}_0 \|}
\qquad \text{constrained so that} \qquad
\frac{\Delta t}{100} < h_0 < \frac{\Delta t}{10}
\end{displaymath}
and with this initial estimate for the step size, take an Euler step forward $\mathbf{x}^p_1 = \mathbf{x}_0 + h_0 \mathbf{v}_0$, evaluate $\mathbf{v}^p_1 = \mathbf{v} (h_0, \mathbf{x}^p_1)$, follow with a trapezoidal correction $\mathbf{x}_1 = \mathbf{x}_0 + \tfrac{1}{2} h_0 (\mathbf{v}^p_1 + \mathbf{v}_0)$, and re-evaluate $\mathbf{v}_1 = \mathbf{v} (h_0, \mathbf{x}_1)$. At this juncture, one can get an improved estimate for the initial step size via
\begin{displaymath}
h_1 = 2 \left| \frac{\| \mathbf{x}_1 \| - \| \mathbf{x}_0 \|}
{\| \mathbf{v}_1 \| + \| \mathbf{v}_0 \|} \right|
\qquad \text{subject to} \qquad
\frac{\Delta t}{1000} < h_1 .
\end{displaymath}
With this information, one can calculate the number of steps $S$ needed by a local solver to traverse the first step belonging to the global solver whose step size is $\Delta t$; specifically,
\begin{equation}
S = \max \bigl( 2 ,
\mathrm{round} ( \Delta t / h_1 ) \bigr)
\qquad \text{with} \qquad
h = \Delta t / S
\label{initialStepSize}
\end{equation}
and a reasonable value for the initial, local, step size $h$ is now in hand. As a minimum, there are to be two local steps taken for each global step traversed.
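In code, this start-up procedure might look as follows (a Python sketch; it transcribes the formul\ae\ above directly and omits guards, e.g.\ against $\| \mathbf{v}_0 \| = 0$, that a production implementation would need).
\begin{verbatim}
import numpy as np

def initial_local_step(x0, v0, v_fn, dt):
    # dt is the global step T/N; v_fn(t, x) returns velocity
    norm = np.linalg.norm
    h0 = norm(x0) / norm(v0)
    h0 = min(max(h0, dt / 100.0), dt / 10.0)     # constrain to (dt/100, dt/10)
    xp = x0 + h0 * v0                            # Euler predictor
    x1 = x0 + 0.5 * h0 * (v_fn(h0, xp) + v0)     # trapezoidal corrector
    v1 = v_fn(h0, x1)                            # re-evaluate
    h1 = 2.0 * abs((norm(x1) - norm(x0)) / (norm(v1) + norm(v0)))
    h1 = max(h1, dt / 1000.0)                    # subject to dt/1000 < h1
    S = max(2, int(round(dt / h1)))              # at least two local steps
    return S, dt / S
\end{verbatim}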
From here on a discrete PI controller (originally derived from control theory as a senior engineering project at Lund Institute of Technology in Lund, Sweden \cite{Gustafssonetal88}) is employed to automatically manage the size of $h$. The goal of this PI controller is to allow a solution to traverse its path with maximum efficiency, all the while maintaining a specified tolerance on error.
The P in PI stands for proportional feedback and accounts for current error, while the I in PI stands for integral feedback and accounts for an accumulation of error. The simplest controller is an I controller. For controlling step size, this I controller adjusts $h$ via \cite{Soderlind02}
\begin{displaymath}
C = \frac{h_{n+1}}{h_n} = \left(
\frac{tol}{\varepsilon_{n+1}} \right)^{k_I}
\end{displaymath}
wherein $k_I$ designates gain in the integral feedback loop, while $tol$ is the maximum truncation error to be tolerated over a local step of integration. Such controllers have been used by the numerical analysis community for a long time, and are known to be problematic \cite[pp.~31--35]{HairerWanner91}. Controls engineers know that PI controllers are superior to I controllers, and for the task of managing $h$, in 1988 a team of students at Lund University derived \cite{Gustafssonetal88}
\begin{displaymath}
C = \frac{h_{n+1}}{h_n} = \left(
\frac{tol}{\varepsilon_{n+1}} \right)^{k_I+k_P} \left(
\frac{\varepsilon_{n} \vphantom{l}}{tol} \right)^{k_P}
\end{displaymath}
wherein $k_P$ designates gain in the proportional feedback loop. This PI controller has revolutionized how commercial-grade ODE solvers are built today.
A strategy for managing error by dynamically adjusting the size of time step $h$ can now be put forward. To do so, it is instructive to introduce a second counter $s$ that decrements from $S$ down to 0. It designates the number of steps left to go before reaching the node located at the end of a global step that the integrator is currently traversing. $S$ needs to be redetermined each time the algorithm advances to its next global step. If there is a discontinuity in step size $h$ across this interface, then the history variables will need to be adjusted using, e.g., a Hermite interpolator \cite{Shampine85}. A suitable algorithm for controlling truncation error by managing step size is described below.
Initialize the controller by setting $\varepsilon_n = 1$.
\begin{enumerate}
\item After completing an integration for displacement $\mathbf{x}_{n+1}$, and possibly velocity $\mathbf{v}_{n+1}$, via any of the integrators given in Eqs.~(\ref{startUp1stOrderODEs}--\ref{pairedMethods}), calculate an estimate for its local truncation error $\varepsilon_{n+1}$ via Eq.~(\ref{truncationError}).
\item Calculate a scaling factor $C$ that comes from the controller
\begin{displaymath}
C = \begin{cases}
\left( \vphantom{\frac{a}{a}} \right.
\frac{\mathit{tol}}{\varepsilon_{n+1}}
\left. \vphantom{\frac{a}{a}} \right)^{0.7/(p+1)}
\left( \frac{\varepsilon_n}{\mathit{tol}}
\right)^{0.4/(p+1)} &
\text{if} \; \varepsilon_{n} < \mathit{tol}
\; \text{and} \; \varepsilon_{n+1} < \mathit{tol} \\
\left( \frac{\mathit{tol}}
{\varepsilon_{n+1}} \right)^{1/p} & \text{otherwise}
\end{cases}
\end{displaymath}
wherein \textit{tol\/} is the truncation error that the controller targets and $p$ is the order of the method, e.g., it appears as $\mathcal{O} (h^{p+1})$ in formul\ae\ (\ref{startUp1stOrderODEs}--\ref{pairedMethods}).
\item If $C > 2$, $s > 3$, and $s$ is even, then double the step size $h = 2h$, halve the steps to go $s = s / 2$, and continue on to the next step.
\item If $1 \leq C \leq 2$ then maintain the step size, decrement the counter, and continue on to the next step.
\item If $C < 1$ yet $\varepsilon_{n+1} \leq \mathit{tol}$, then halve the step size $h = h / 2$, double the steps to go $s = 2 s$, and continue on to the next step.
\item Else $C < 1$ and $\varepsilon_{n+1} > \mathit{tol}$, then halve the step size $h = h / 2$, double the steps to go $s = 2 s$, and \textit{repeat\/} the integration step from $n$ to $n \! + \! 1$.
\end{enumerate}
For the I controller, the gain on feedback has been set at $k_I = 1/p$. For the PI controller, the gain on I feedback has been set at $k_I = 0.3 / (p+1)$ while the gain on P feedback has been set at $k_P = 0.4 / (p+1)$, wherein factors 0.3 and 0.4 have been selected based upon the developer's experience in working with their controller \cite{Gustafssonetal88,Soderlind02}. By only admitting either a doubling or a halving of the current step size, a built-in mechanism is in play that mitigates the likelihood that wind-up or wind-down instabilities will happen in practice.
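Put into code, the six-item recipe above reduces to a few lines. The Python sketch below transcribes it directly: it returns the new local step, the new steps-to-go counter, and a flag indicating whether the step from $n$ to $n \! + \! 1$ must be repeated (the function name and return convention are illustrative assumptions).
\begin{verbatim}
def control_step(h, s, err_n, err_np1, tol, p):
    if err_n < tol and err_np1 < tol:                    # PI controller
        C = (tol / err_np1)**(0.7/(p+1)) * (err_n / tol)**(0.4/(p+1))
    else:                                                # I controller
        C = (tol / err_np1)**(1.0/p)
    if C > 2.0 and s > 3 and s % 2 == 0:
        return 2.0*h, s // 2, False                      # double the step
    if 1.0 <= C <= 2.0:
        return h, s - 1, False                           # maintain the step
    if err_np1 <= tol:
        return 0.5*h, 2*s, False                         # halve, keep the result
    return 0.5*h, 2*s, True                              # halve, repeat the step
\end{verbatim}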
Whenever a step is to be halved, the displacement at a half step can be approximated via
\begin{equation}
\mathbf{x}_{n-\scriptfrac{1}{2}} =
\tfrac{1}{2} ( \mathbf{x}_n + \mathbf{x}_{n-1} ) -
\tfrac{1}{8} \, h ( \mathbf{v}_n - \mathbf{v}_{n-1} )
+ \mathcal{O} (h^4) + \mathcal{O} (h^{p+1})
\label{stepHalving}
\end{equation}
which is a cubic Hermite interpolant \cite{Shampine85} whose accuracy is $\mathcal{O}(h^4)$ with $\mathcal{O}(h^{p+1})$ designating accuracy of the numerical method used to approximate displacements $\mathbf{x}_n$ and $\mathbf{x}_{n+1}$ and, for solvers (\ref{pairedStartUp} \& \ref{pairedMethods}), a like interpolation for velocities $\mathbf{v}_n$ and $\mathbf{v}_{n+1}$ will be required, too.
As a closing comment, many PECE methods are often implemented as $\text{PE}(\text{CE})^m$ methods with the correct\slash evaluate steps being repeated $m$ times, or until convergence. It has been your professor's experience that PECE, i.e., $m=1$, is usually sufficient whenever the step size $h$ is properly controlled to keep the truncation error in check, provided a reasonable assignment for permissible error has been made, typically $tol \approx 10^{-(p+1)}$.
\section{Examples}
Examples are provided to illustrate the numerical methods put forward. A chemical kinetics problem, popular in the numerical analysis literature \cite[pp.~115--116]{Haireretal93}, is considered for testing the two-step PECE method of Eqs.~(\ref{startUp1stOrderODEs} \& \ref{1stOrderODEs}) used to solve first-order systems of ODEs, including stiff ODEs. The vibrational response of a Formula SAE race car is simulated to illustrate a problem belonging to the class of solvers that are appropriate for applications of Newton's Second Law of motion.
\subsection{Brusselator}
\label{Sec:Brusselator}
The Brusselator describes a chemical kinetics problem where six substances are being mixed, and whose evolution through time is characterized by two coupled differential equations in two unknowns $y_1$ and $y_2$, with parameters $A$ and $B$, viz.,
\begin{subequations}
\begin{align}
\dot{y}_1 & = A + y^2_1 y_2 - (B + 1) y_1 \notag \\
\dot{y}_2 & = B y_1 - y^2_1 y_2 \notag
\end{align}
\end{subequations}
whose eigenvalues are
\begin{displaymath}
\lambda = \frac{1}{2} \left( - \left( 1 - B + A^2 \right)
\pm \sqrt{( 1 - B + A^2)^2 - 4 A^2} \right)
\end{displaymath}
where parameters $A$ and $B$ are, to an extent, at the disposal of a chemist.
This system exhibits very different behaviors for different values of its parameters. For values $A=1$ and $B=3$ (see Fig.~\ref{fig:brusselator1}) the solution converges to a limit cycle that orbits the steady state located at coordinate (1,~3) for these values of $A$ and $B$. This limit cycle does not depend upon the initial condition (IC), provided the IC does not reside at the steady state.
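To reproduce this qualitative behavior, the Brusselator right-hand side can be coded in a few lines; the Python sketch below feeds it to the illustrative pece\_first\_order driver from \S\ref{Sec:firstOrder} (a fixed-step stand-in for the full error-controlled algorithm), with the step count chosen only for illustration.
\begin{verbatim}
import numpy as np

def brusselator(A, B):
    def v(t, y):                       # velocity function v(t, x) for the solver
        y1, y2 = y
        return np.array([A + y1**2 * y2 - (B + 1.0) * y1,
                         B * y1 - y1**2 * y2])
    return v

# limit-cycle case of Fig. 1: A = 1, B = 3, t_end = 20 s, IC (1.5, 3.0)
trajectory = pece_first_order(brusselator(1.0, 3.0), [1.5, 3.0], T=20.0, N=2000)
\end{verbatim}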
\begin{figure}
\caption{A concentration plot for a Brusselator response with $A=1$ and $B=3$. Solutions are presented for several initial conditions. All solutions approach a limit cycle.}
\label{fig:brusselator1}
\end{figure}
The behavior is very different for parameters $A=100$ and $B=3$. Here the solutions rapidly settle in on asymptotic responses (see Fig.~\ref{fig:brusselator2}). Figures \ref{fig:brusselator1} \& \ref{fig:brusselator2} came from the same system of equations, just different parameters.
\begin{figure}
\caption{Brusselator response versus time with $A=100$ and $B=3$ for several initial conditions. The response curves have been normalized against their initial values.}
\label{fig:brusselator2}
\end{figure}
The ability of the PI controller discussed in \S\ref{Sec:PI} to manage the local truncation error by adjusting the local step size is illustrated in Fig.~\ref{fig:brusselator3}. Statistics gathered from these runs are reported in Table~\ref{Table:Brusselator}.
\begin{figure}
\caption{Local truncation error versus time for both Brusselator problems. The error tolerance was set at $10^{-4}$.}
\label{fig:brusselator3}
\end{figure}
\begin{table}
\small
\begin{center}
\begin{tabular}{|c|ccc|ccc|} \hline
Initial &
\multicolumn{3}{c|}{$A=1$, $B=3$, $t_{\mathrm{end}}=20$~s} &
\multicolumn{3}{c|}{$A=100$, $B=3$, $t_{\mathrm{end}}=0.1$~s} \\
\cline{2-7} Condition & \#steps & \#halved & \#doubled &
\#steps & \#halved & \#doubled \\ \hline
(0.1, 0.1) & 1186 & 6 & 9 & 353 & 0 & 6 \\
(1.5, 3.0) & 1592 & 6 & 9 & 362 & 0 & 4 \\
(2.0, 0.5) & 1332 & 7 & 10 & 467 & 0 & 3 \\
(3.25, 2.5) & 1451 & 6 & 12 & 414 & 0 & 5 \\ \hline
\end{tabular}
\end{center}
\normalsize
\caption{Runtime statistics for the results plotted in Figs.~\ref{fig:brusselator1}--\ref{fig:brusselator3}. There were 200 global steps for the limit cycle analyses, and 100 global steps for the stiff analyses. In none of these numerical experiments did the integrator have to restart because of excessive error.}
\label{Table:Brusselator}
\end{table}
The solutions in Fig.~\ref{fig:brusselator1} have an eigenvalue ratio of $| \lambda_{\max} | / | \lambda_{\min} | = 2.6$, whereas the solutions in Fig.~\ref{fig:brusselator2} have a ratio of $| \lambda_{\max} | / | \lambda_{\min} | = 9,602$. Although there is no accepted `definition' for stiffness in the numerical analysis literature, there are some rules of thumb that exist. Probably the simplest to apply is the ratio $\Lambda = | \lambda_{\max} | / | \lambda_{\min} |$ with $\Lambda \approx 10$ being the boundary. Systems of ODEs whose ratio of extreme eigenvalues is less than about 10 do not exhibit stiffness; whereas, systems of ODEs whose ratio $\Lambda$ exceeds 10, and certainly 100, do exhibit stiffness.
Explicit methods, e.g., the predictors presented herein, when used alone, do not fare well when attempting to acquire solutions from systems of ODEs that are mathematically stiff. Implicit methods are needed, e.g., the correctors presented herein. The solutions graphed in Fig.~\ref{fig:brusselator1} are for a non-stiff problem, while the solutions graphed in Fig.~\ref{fig:brusselator2} are for a stiff problem. The implicit two-step method of Eqs.~(\ref{startUp1stOrderODEs} \& \ref{1stOrderODEs}) is a viable integrator for solving stiff systems of ODEs of first order; in contrast, explicit Runge-Kutta methods are not suitable.
\subsection{Vibrational Response of a Vehicle}
In this example we consider the vibrational response of a car as it travels down a roadway. This response is excited by an unevenness in the roadway, accentuated by the speed of a vehicle. This simulation determines the heave $z$, pitch $\theta$, and roll $\phi$ of a vehicle at its center of gravity excited by its traversal over a roadway.
There are three degrees of freedom for this problem with the position $\mathbf{x}$, velocity $\mathbf{v}$, and acceleration $\mathbf{a}$ vectors taking on forms of
\begin{displaymath}
\mathbf{x} = \left\{ \begin{matrix}
z \\ \theta \\ \phi
\end{matrix} \right\} , \qquad
\mathbf{v} = \left\{ \begin{matrix}
\dot{z} \\ \dot{\theta} \\ \dot{\phi}
\end{matrix} \right\} , \qquad
\mathbf{a} = \left\{ \begin{matrix}
\ddot{z} \\ \ddot{\theta} \\ \ddot{\phi}
\end{matrix} \right\}
\end{displaymath}
wherein $\dot{z} = \partial{z} / \partial t$, $\ddot{z} = \partial^2 z / \partial t^2$, etc. In our application of this simulator, we consider a Formula SAE race car like the one our seniors design, fabricate and compete with every year in a capstone project here at Texas~A\mbox{\&}M.
There are three matrices that establish the vibrational characteristics of a vehicle. There is a mass matrix
\begin{equation*}
\mathbf{M} = \begin{bmatrix}
m & 0 & 0 \\ 0 & J_{\theta} & 0 \\ 0 & 0 & J_{\phi}
\end{bmatrix}
\end{equation*}
where $m$ is the collective mass of the car and its driver, $J_{\theta}$ is the moment of inertia resisting pitching motions, and $J_{\phi}$ is the moment of inertia resisting rolling motions. There is also a damping matrix
\begin{multline}
\mathbf{C} =
\left[ \begin{matrix}
c_1 + c_2 + c_3 + c_4 \\
-(c_1 + c_2) \ell_f + (c_3 + c_4) \ell_r \\
-(c_1 - c_2) \rho_f + (c_3 - c_4) \rho_r
\end{matrix} \right. \notag \\
\left. \begin{matrix}
-(c_1 + c_2) \ell_f + (c_3 + c_4) \ell_r &
-(c_1 - c_2) \rho_f + (c_3 - c_4) \rho_r \\
(c_1 + c_2) \ell_f^2 + (c_3 + c_4) \ell_r^2 &
(c_1 - c_2) \ell_f \rho_f + (c_3 - c_4) \ell_r \rho_r \\
(c_1 - c_2) \ell_f \rho_f + (c_3 - c_4) \ell_r \rho_r &
(c_1 + c_2) \rho_f^2 + (c_3 + c_4) \rho_r^2
\end{matrix} \right] \notag
\end{multline}
and a like stiffness matrix
\begin{multline}
\mathbf{K} =
\left[ \begin{matrix}
k_1 + k_2 + k_3 + k_4 \\
-(k_1 + k_2) \ell_f + (k_3 + k_4) \ell_r \\
-(k_1 - k_2) \rho_f + (k_3 - k_4) \rho_r
\end{matrix} \right. \notag \\
\left. \begin{matrix}
-(k_1 + k_2) \ell_f + (k_3 + k_4) \ell_r &
-(k_1 - k_2) \rho_f + (k_3 - k_4) \rho_r \\
(k_1 + k_2) \ell_f^2 + (k_3 + k_4) \ell_r^2 &
(k_1 - k_2) \ell_f \rho_f + (k_3 - k_4) \ell_r \rho_r \\
(k_1 - k_2) \ell_f \rho_f + (k_3 - k_4) \ell_r \rho_r &
(k_1 + k_2) \rho_f^2 + (k_3 + k_4) \rho_r^2
\end{matrix} \right] \notag
\end{multline}
wherein $c_1$ and $k_1$ are the effective damping coefficient and spring stiffness for the suspension located at the driver's front, $c_2$ and $k_2$ are located at the passenger's front, $c_3$ and $k_3$ are located at the passenger's rear, and $c_4$ and $k_4$ are located at the driver's rear. Lengths $\ell_f$ and $\ell_r$ measure distance from the front and rear axles to the center of gravity (CG) for the car and driver with their sum being the wheelbase. Lengths $\rho_f$ and $\rho_r$ measure distance from the centerline (CL) of the vehicle out to the center of a tire patch along the front and rear axles, respectively. Typically, $\rho_f > \rho_r$ to allow a driver to take a tighter\slash shorter path into a corner during competition.
Interacting with these three matrices is a vector that establishes how a roadway excites a vehicle. It is described by
\begin{displaymath}
\mathbf{f} = \left\{ \begin{matrix}
w -
c_1 \dot{R}_1 - c_2 \dot{R}_2 - c_3 \dot{R}_3 - c_4 \dot{R}_4
- k_1 R_1 - k_2 R_2 - k_3 R_3 - k_4 R_4 \\
\bigl( c_1 \dot{R}_1 + c_2 \dot{R}_2
+ k_1 R_1 + k_2 R_2 \bigr) \ell_f -
\bigl( c_3 \dot{R}_3 + c_4 \dot{R}_4
+ k_3 R_3 + k_4 R_4 \bigr) \ell_r \\
\bigl( c_1 \dot{R}_1 - c_2 \dot{R}_2
+ k_1 R_1 - k_2 R_2 \bigr) \rho_f -
\bigl( c_3 \dot{R}_3 - c_4 \dot{R}_4
+ k_3 R_3 - k_4 R_4 \bigr) \rho_r
\end{matrix} \right\}
\end{displaymath}
where $w$ is the weight (mass times gravity) of the car and its driver. Functions $R(t)$ and $\dot{R}(t)$ are for displacement and velocity occurring normal to a roadway, measured from smooth. Roadway velocity is proportional to vehicle speed. It is through these functions that time enters into a solution. $R_i$ and $\dot{R_i}$, $i=1,2,3,4$, follow the same numbering scheme as the damping coefficients and spring stiffnesses.
To apply our numerical algorithm (\ref{pairedStartUp} \& \ref{pairedMethods}), one simply computes
\begin{displaymath}
\mathbf{a} (t , \mathbf{x} , \mathbf{v}) = \mathbf{M}^{-1}
\cdot \bigl( \mathbf{f}(t) - \mathbf{C} \cdot \mathbf{v}
- \mathbf{K} \cdot \mathbf{x} \bigr)
\end{displaymath}
and assigns a suitable pair of ICs: one for displacement and the other for velocity, as they pertain to the motion of a vehicle at its center of gravity. Initial conditions can be cast in various ways. The simplest ICs come from either starting at rest or starting at a constant velocity on a smooth roadway. Either way, one arrives at
\begin{displaymath}
\mathbf{x}_0 = \mathbf{K}^{-1} \cdot \mathbf{f}_0
\quad \text{and} \quad
\mathbf{v}_0 = \left\{ \begin{matrix}
0 \\ 0 \\ 0
\end{matrix} \right\}
\quad \text{wherein} \quad
\mathbf{f}_0 = \left\{ \begin{matrix}
w \\ 0 \\ 0
\end{matrix} \right\}
\end{displaymath}
because $R_i = 0$ and $\dot{R}_i = 0$, $i=1,2,3,4$, in these two cases. Remember, velocity $\mathbf{v}_0$ is not the speed of your car; rather, it is the rate of change of the heave, pitch, and roll coordinates that describe vehicle motion about its center of gravity.
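A minimal sketch of the right-hand-side evaluation and of these static initial conditions, again with illustrative names and assuming that $\mathbf{M}$, $\mathbf{C}$, $\mathbf{K}$ and $\mathbf{f}(t)$ have been assembled as in the earlier sketches, reads as follows.
\begin{verbatim}
# Minimal sketch (illustrative names): the right-hand side a(t, x, v)
# and the static initial conditions, assuming M, C, K and f(t) have
# been assembled as in the earlier sketches.
import numpy as np

def make_acceleration(M, C, K, f):
    """Return a callback a(t, x, v) = M^{-1} (f(t) - C v - K x)."""
    def a(t, x, v):
        return np.linalg.solve(M, f(t) - C @ v - K @ x)
    return a

def static_initial_conditions(K, w):
    """Starting at rest (or at constant speed on a smooth road),
    f_0 = (w, 0, 0) and the static deflection solves K x_0 = f_0,
    with v_0 = 0."""
    f0 = np.array([w, 0.0, 0.0])
    x0 = np.linalg.solve(K, f0)
    v0 = np.zeros(3)
    return x0, v0
\end{verbatim}
The returned callback plays the role of the acceleration function $\mathbf{a}(t,\mathbf{x},\mathbf{v})$ required by the integrator (\ref{pairedStartUp} \& \ref{pairedMethods}).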
To illustrate the simulator, a roadway was constructed with five gradual waves at a wavelength equal to the wheelbase. To excite roll, the passenger side lagged out of phase with the driver side by a tenth of the wheelbase. Vehicle speed was set at 10~mph. There were 500 global nodes so that the density of output would produce nice graphs; these required 5,422 local integration steps, of which 8 were doubled. No steps were halved, and no steps had to be restarted. The responses are plotted in Fig.~\ref{fig:fsae1}, while the errors are reported in Fig.~\ref{fig:fsae2}. It is apparent that the integrator (\ref{pairedStartUp} \& \ref{pairedMethods}) performs to expectations, and that the PI controller of \S\ref{Sec:PI} does an admirable job in managing the local truncation error.
\begin{figure}
\caption{Heave $z$ is plotted against time in the left graphic, while pitch $\theta$ and roll $\phi$ are plotted against time in the right graphic. Heave and pitch have static offsets, whereas roll does not.}
\label{fig:fsae1}
\end{figure}
\begin{figure}
\caption{Local truncation error versus time for the FSAE race car driving over a sequence of bumps. The error tolerance was set at $10^{-4}$.}
\label{fig:fsae2}
\end{figure}
Vitals for the car that was simulated include: $m$ = 14~slugs ($w$ = 450~lbs), $J_{\theta} = 45 \text{ ft.lbs/(rad/sec}^2)$, $J_{\phi} = 20 \text{ ft.lbs/(rad/sec}^2)$, $\ell_f$ = 3.2~ft, $\ell_r$ = 1.8~ft, $\rho_f$ = 2.1~ft, $\rho_r$ = 2~ft, the front dampers were set at 10 lbs/(in/sec) and the rears were set at 15 lbs/(in/sec), while the front springs had stiffnesses of 150 lbs/in and the rears were selected at 300~lbs/in. These are reminiscent of a typical FSAE race car.
\section{Summary}
Two-step methods have been constructed that aspire to the structure of the well-known BDF2 formula. A predictor is derived for each case allowing PECE solution schemes to be put forward. The first method (\ref{startUp1stOrderODEs} \& \ref{1stOrderODEs}) that was introduced solves the classic problem where $\partial \mathbf{x} / \partial t = \mathbf{v} (t , \mathbf{x})$ subject to an IC of $\mathbf{x}(0) = \mathbf{x}_0$. The second method (\ref{startup} \& \ref{PECE}) introduced solves a fairly atypical case where functions for both velocity $\mathbf{v} ( t , \mathbf{x})$ and acceleration $\mathbf{a} (t , \mathbf{x} , \mathbf{v} )$ are given and a solution for the displacement $\mathbf{x}$ is sought, subject to an initial condition $\mathbf{x}(0) = \mathbf{x}_0$. And the third method (\ref{pairedStartUp} \& \ref{pairedMethods}) melds these two algorithms to construct a solver for the case where acceleration is given via a function $\mathbf{a} ( t , \mathbf{x} , \mathbf{v})$ from which solutions for both velocity $\mathbf{v}$ and displacement $\mathbf{x}$ are sought, subject to initial conditions of $\mathbf{x}(0) = \mathbf{x}_0$ and $\mathbf{v}(0, \mathbf{x}_0) = \mathbf{v}_0$. A PI controller is used to manage the local truncation error by dynamically adjusting the size of the local time step. All integrators have been illustrated using non-trivial example problems.
\noindent\textbf{Acknowledgment}
The author is grateful to Prof.\ Kai Diethelm, Institut Computational Mathematics, Technische Universit\"at Braunschweig, Germany, for critiquing this document and for providing instructive comments.
\noindent\textbf{References}
\small
\end{document}
\betaegin{document}
\deltaate{}
\thetaitle{\Large \betaf Non-Existence of Classical Solutions with Finite Energy to the Cauchy Problem of the Compressible Navier-Stokes Equations }
\alphauthor{HAILIANG LI, YUEXUN WANG, AND ZHOUPING XIN}
\muaketitle
\betaegin{abstract}
The well-posedness of classical solutions with finite energy to the compressible Navier-Stokes equations (CNS) subject to arbitrarily large and smooth initial data is a challenging problem. In the case when the fluid density is away from vacuum (strictly positive), this problem was first solved for the CNS either in one dimension for general smooth initial data or in multiple dimensions for smooth initial data near some equilibrium state (i.e., small perturbations)~\cite{A,K1,K2,M1,M2,M3}. In the case that the flow density may contain vacuum (the density can be zero at some space-time point), the well-posedness problem for the CNS appears to be rather subtle. The local well-posedness of classical solutions containing vacuum was shown in a homogeneous Sobolev space (without information on the velocity in the $L^2$-norm) for general regular initial data satisfying some compatibility conditions initially~\cite{CK1,CCK,CK2,CK3}, and the global existence of classical solutions in the same space was established under the additional assumption of small total initial energy but possibly large oscillations~\cite{HLX}. However, it was shown that classical solutions to the compressible Navier-Stokes equations in the finite energy (inhomogeneous Sobolev) space cannot exist globally in time, since they may blow up in finite time provided that the density is compactly supported~\cite{Xin}. In this paper, we investigate the well-posedness of classical solutions to the Cauchy problem of the Navier-Stokes equations,
and prove that a classical solution with finite energy does not exist in the inhomogeneous Sobolev space for any short time, under some natural assumptions on the initial data near the vacuum. This implies in particular that the homogeneous Sobolev space is crucial when studying the well-posedness of the Cauchy problem of the compressible Navier-Stokes equations in the presence of vacuum at far fields, even locally in time.
\epsilonnd{abstract}
\sigmaetcounter{equation}{0}
\sigmaetcounter{Assumption}{0}
\sigmaetcounter{Theorem}{0}
\sigmaetcounter{Proposition}{0}
\sigmaetcounter{Corollary}{0}
\sigmaetcounter{Lemma}{0}
\mubox{}
\betaegin{center}
\sigmaection{Introduction and Main Results}
\epsilonnd{center}
The motion of an $n$-dimensional compressible viscous, heat-conductive, Newtonian polytropic fluid
is governed by the following full compressible
Navier-Stokes system:
\betaegin{eqnarray}\lambdaabel{5}
\lambdaeft\{ \betaegin{array}{ll}
\phiartial_t\rhoho+\thetaextrm{div}(\rhoho
u)=0,\\
\phiartial_t(\rhoho u)+\thetaextrm{div}(\rhoho u\omegatimes u)+\nuabla p=\muu\Delta u+(\muu+\lambdaambda)\nuabla\thetaextrm{div}u,\\
\phiartial_t(\rhoho e)+\thetaextrm{div}(\rhoho eu)+ p\thetaextrm{div}u=\fracrac{\muu}{2}|\nuabla u+(\nuabla u)^*|^2+\lambdaambda(\thetaextrm{div}u)^2+\fracrac{\kappaappa(\gamma}\deltaef\G{\Gammaamma-1) }{R}\Delta e,
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
where $(x,t)\inftyn\muathbb{R}^n\thetaimes\muathbb{R}_+ $, $\rhoho,u,p$ and $e$ denote the
density, velocity, pressure and internal energy, respectively. $\muu$ and $\lambdaambda$ are the coefficient of viscosity
and the second coefficient of viscosity, respectively, and $\kappaappa$ denotes the coefficient of heat conduction; they satisfy
\betaegin{equation*}
\muu>0,\quad 2\muu+n\lambdaambda\gamma}\deltaef\G{\Gammaeq0, \quad \kappaappa\gamma}\deltaef\G{\Gammaeq0.
\epsilonnd{equation*}
The equation of state for polytropic gases satisfies
\betaegin{equation}\lambdaabel{6}
p=(\gamma}\deltaef\G{\Gammaamma-1)\rhoho e, \quad p=A\epsilonxp(\fracrac{(\gamma}\deltaef\G{\Gammaamma-1)S}{R})\rhoho^\gamma}\deltaef\G{\Gammaamma,
\epsilonnd{equation}
where $A>0$ and $R>0$ are positive constants, $\gamma}\deltaef\G{\Gammaamma> 1$ is the specific heat ratio, $S$ is the entropy, and we set $A=1$ in this paper for simplicity. The initial data is given by
\betaegin{eqnarray}\lambdaabel{7}
(\rhoho,u,e)(x,0)=(\rhoho_0,u_0,e_0)(x),\quad x\inftyn\R^n
\epsilonnd{eqnarray}
and is assumed to be continuous. In particular, the initial density is compactly supported on an open bounded set $\Omega\sigmaubset \muathbb{R}^n$ with smooth boundary, i.e.,
\betaegin{eqnarray}\lambdaabel{8}
\mubox{supp}_x\,\rhoho_0=\betaar{\Omega},\quad \rhoho_0(x)>0, \ x\inftyn \Omega
\epsilonnd{eqnarray}
and the initial internal energy $e_0$ is assumed to be nonnegative but not identically zero in $\Omega$ to avoid the trivial case.
When the heat conduction can be neglected and the compressible viscous fluids are isentropic, the compressible Navier-Stokes equations \epsilonqref{5} can be reduced to the following system
\betaegin{eqnarray}\lambdaabel{1}
\lambdaeft\{ \betaegin{array}{ll}
\phiartial_t\rhoho+\thetaextrm{div}(\rhoho
u)=0,\\
\phiartial_t(\rhoho u)+\thetaextrm{div}(\rhoho u\omegatimes u)+\nuabla p=\muu\Delta u+(\muu+\lambdaambda)\nuabla\thetaextrm{div}u,
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
for $(x,t)\inftyn\muathbb{R}^n\thetaimes\muathbb{R}_+ $, where the equation of state satisfies
\betaegin{equation}\lambdaabel{2}
p=A\rhoho^\gamma}\deltaef\G{\Gammaamma
\epsilonnd{equation}
and the initial data are given by
\betaegin{eqnarray}\lambdaabel{3}
(\rhoho,u)(x,0)=(\rhoho_0,u_0)(x),\quad x\inftyn\R^n
\epsilonnd{eqnarray}
with the initial density being compactly supported, i.e., the assumption \epsilonqref{8} holds.
It is an important issue to study the global existence (well-posedness) of classical/strong solutions to the CNS \epsilonqref{5} and \epsilonqref{1}, and much significant progress has been made recently on this and related topics, such as the global existence and asymptotic behavior of solutions to \epsilonqref{5} and \epsilonqref{1}. For instance, in the case when the flow density is strictly away from vacuum ($\inftynf_\Omega\rhoho>0$), the short time existence of classical solutions was shown for general regular initial data~\cite{Ka}, and the global existence of solutions was proved in spatial one dimension by Kazhikhov et al. \cite{A,K1,K2} for sufficiently smooth data and by Serre \cite{S1,S2} and Hoff \cite{H1} for discontinuous initial data.
The key point behind the strategies to establish the global existence of strong solutions lies in the fact that if the flow density is strictly positive at the initial time, then it remains so at any later time~\cite{H}. This is also proved to be true for weak solutions to the compressible
Navier-Stokes equations \epsilonqref{1} in one space dimension; namely, weak solutions do not exhibit vacuum states in any
finite time provided that no vacuum is present initially~\cite{H4}.
The corresponding multidimensional problems were also investigated when the flow density is away from vacuum; for instance, the short time well-posedness of classical solutions was shown by Nash and Serrin for general smooth initial data~\cite{Nash,Se}, and the global existence of a unique strong solution was first proved by Matsumura and Nishida~\cite{M1,M2,M3} in the energy space (inhomogeneous Sobolev space)
\betaegin{equation}
\lambdaeft\{\betaegin{aligned}\lambdaabel{inhomSpace}
\rhoho-\betaar{\rhoho}\inftyn C(0,T;H^3(\muathbb{R}^3))\cap C^1(0,T;H^2(\muathbb{R}^3)), \\
u,\, e-\betaar{e}\inftyn C(0,T;H^3(\muathbb{R}^3))\cap C^1(0,T;H^1(\muathbb{R}^3)),
\epsilonnd{aligned}\rhoight.
\epsilonnd{equation}
with $\betaar{\rhoho}>0$ and $\betaar{e}>0$ for any $T\inftyn(0,\inftynfty]$, where the additional assumption of small oscillations is required on the perturbation of the initial data near the non-vacuum equilibrium state $(\betaar{\rhoho},0,\betaar{e})$. The global existence of non-vacuum solutions was also obtained by Hoff for discontinuous initial data \cite{H2}, and by Danchin \cite{D}, who set up a framework based on Besov type spaces (functional spaces invariant under the natural scaling of the associated equations) to obtain existence and uniqueness of global solutions; there, small oscillations of the perturbation of the initial data near some non-vacuum equilibrium state are also required. It should be mentioned here that the above smallness of the initial oscillations of the perturbation near the non-vacuum equilibrium state, together with the uniform a-priori estimates established on the classical solutions to the CNS \epsilonqref{5} or \epsilonqref{1}, is sufficient to establish the strict positivity and uniform bounds of the flow density, which is essential to prove the global existence of solutions with the flow density away from vacuum in the inhomogeneous Sobolev space~\epsilonqref{inhomSpace} or other function spaces~\cite{D,H2}. However, this assumption of small oscillations of the initial perturbation of a non-vacuum state was recently removed, at least for the isentropic case, by Huang-Li-Xin in~\cite{HLX}, provided that the initial total mechanical energy is suitably small, which is equivalent to requiring that the mean square norm of the initial difference from the non-vacuum state be small, so that the perturbation may contain large oscillations and vacuum states. See also~\cite{WenZhu}.
In the case when the flow density may contain vacuum (the flow density is nonnegative), it is rather difficult and challenging to investigate the global existence (well-posedness) of classical/strong solutions to the CNS~\epsilonqref{5} and the CNS~\epsilonqref{1}, corresponding to the well-posedness theory of classical solutions~\cite{M1,M2,M3}, and the possible appearance of vacuum in the flow density (i.e., the flow density is zero) is one of the essential difficulties in the analysis of the well-posedness and related problems~\cite{CCK,CK1,CK2,CK3,H1,H3,H4,Sa,S1,WenZhu,Xin,XYa,XYu}. Indeed, since \epsilonqref{5} and \epsilonqref{1} are strongly coupled systems of hyperbolic-parabolic type, the density $\rhoho(x,t)$ can be determined from its initial value $\rhoho_0(x_0)$ through Eq.~$\epsilonqref{1}_1$ along the particle path $x(t)$ satisfying $\fracrac{dx(t)}{dt}=u(x(t),t)$ and $x(0)=x_0$, provided that the flow velocity $u(x,t)$ is a priori regular enough.
Yet, the flow velocity can only be solved from Eq.~$\epsilonqref{1}_2$, which is uniformly parabolic so long as the density is an a priori strictly positive and uniformly bounded function. However, the appearance of vacuum leads to strong degeneracy of the hyperbolic-parabolic system, and the behavior of the solution may become singular, leading for instance to ill-posedness and finite time blow-up of classical solutions~\cite{CJ,H3,S1,Xin,XYa}.
Recently, the global existence of weak solutions with finite energy to the isentropic system~\epsilonqref{1} subject to general initial data with finite initial energy (initial data may include vacuum states) was established by Lions \cite{L1,L2,L3}, Jiang-Zhang~\cite{JZ} and Feireisl et al. \cite{F}, where the exponent $\gamma}\deltaef\G{\Gammaamma$ may be required to be large and the flow density is allowed to vanish. Despite this important progress, the regularity, uniqueness and behavior of these
weak solutions remain largely open. As emphasized before~\cite{CJ,H3,S1,Xin,XYa}, the possible appearance of
vacuum is one of the major difficulties when trying to prove global existence and strong
regularity results. Indeed, Xin~\cite{Xin} first showed that it is impossible to obtain the global existence of finite energy classical solutions to the Cauchy problem for \epsilonqref{5} in the inhomogeneous Sobolev space~\epsilonqref{inhomSpace} for any smooth initial data with compactly supported initial flow density, and a similar phenomenon happens for the isentropic system~\epsilonqref{1} for a large class of smooth initial data with compactly supported density. To be more precise, if there exists a solution $(\rhoho,u,e)\inftyn C^1(0,T;H^2(\muathbb{R}^3))$ for some time $T>0$, then it must hold that $T<+\inftynfty$, which also implies the finite time blow-up of any such solution $(\rhoho,u,e)\inftyn C^1(0,T;H^2(\muathbb{R}^3))$, if it exists, in the presence of vacuum.
Yet, Cho et al. \cite{CK1,CCK,CK2,CK3} proved the local well-posedness of classical solutions to the Cauchy problem for isentropic compressible Navier-Stokes equations~\epsilonqref{1} and full Navier-Stokes equations~\epsilonqref{5} with the initial density containing vacuum for some $T>0$ in the homogeneous energy space
\betaegin{equation}
\lambdaeft\{\betaegin{aligned}\lambdaabel{homSpace}
\rhoho\inftyn C(0,T;H^3(\muathbb{R}^3))\cap C^1(0,T;H^2(\muathbb{R}^3)), \\
u,\, e\inftyn C(0,T;D^3(\muathbb{R}^3))\cap L^2(0,T;D^{4}(\muathbb{R}^3)),
\epsilonnd{aligned}\rhoight.
\epsilonnd{equation}
where $D^k(\muathbb{R}^3)=\{f\inftyn L_{\thetaextrm{{loc}}}^1(\muathbb{R}^3): \nuabla f\inftyn H^{k-1}(\muathbb{R}^3)\,\}$, under some additional compatibility conditions such as \epsilonqref{12.6} on $u$ and a similar compatibility condition on $e$.
Moreover, under an additional smallness assumption on the initial energy, the global existence and
uniqueness of classical solutions to the isentropic system~\epsilonqref{1} were established by Huang-Li-Xin in the homogeneous Sobolev space~\cite{HLX}. Interestingly, such a theory of global in time existence of classical solutions fails to be true for the full CNS~\epsilonqref{5} due to the blow-up results of Xin-Yan \cite{XYa}, where they show that any classical solution to~\epsilonqref{5} will blow up in finite time as long as the initial density has an isolated mass group. Note that the blow-up results in \cite{XYa} are independent of the spaces the solutions may belong to and of whether the data are small or large.
It should be noted that the main difference between the homogeneous Sobolev space~\epsilonqref{homSpace} and the inhomogeneous Sobolev space~\epsilonqref{inhomSpace} lies in the fact that there is no estimate on the $L^2$-norm $\|u\|_{L^2}$ of the velocity.
Thus, it is natural and important to determine whether or not a classical solution to the Cauchy problem for the CNS~\epsilonqref{5} and the CNS~\epsilonqref{1} exists in the inhomogeneous Sobolev space~\epsilonqref{inhomSpace} for some small time.
We study the well-posedness of classical solutions to the Cauchy problem for the full compressible Navier-Stokes equations~\epsilonqref{5} and the isentropic Navier-Stokes equations~\epsilonqref{1} in the inhomogeneous Sobolev space~\epsilonqref{inhomSpace} in the present paper, and we prove that there does not exist any classical solution in the inhomogeneous Sobolev space~\epsilonqref{inhomSpace} for any small time (refer to Theorems~\rhoef{1001}--\rhoef{1005} for details). This implies that homogeneous Sobolev spaces such as~\epsilonqref{homSpace} are crucial in the study of the well-posedness theory of classical solutions to the Cauchy problem of compressible Navier-Stokes equations in the presence of vacuum at far fields.
The main results in this paper can be stated as follows:
\betaegin{Theorem}
\lambdaabel{1001}
The one-dimensional isentropic Navier-Stokes equations \epsilonqref{1}-\epsilonqref{3} with the initial density satisfying \epsilonqref{8} with $\Omega\thetariangleq I=(0,1)$ have no solution $(\rhoho,u)$ in the inhomogeneous Sobolev space $C^1([0,T];H^m(\muathbb{R}))$, $m>2$, for any positive time $T$, if the initial data $(\rhoho_0,u_0)$ satisfy one of the following two conditions in the interval $I$:
there exist positive numbers $\lambdaambda_i,i=1,2,3,4$ with $0<\lambdaambda_3,\lambdaambda_4<1$ such that
\betaegin{eqnarray}\lambdaabel{11}
\lambdaeft\{ \betaegin{array}{ll}
\fracrac{(\rhoho_0)_x}{\rhoho_0}\gamma}\deltaef\G{\Gammaeq \lambdaambda_1,\ in \ (0,\lambdaambda_3),\\
u_0(\lambdaambda_3)<0,u_0\lambdaeq0,\ in \ (0,\lambdaambda_3),
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
or
\betaegin{eqnarray}\lambdaabel{12}
\lambdaeft\{ \betaegin{array}{ll}
\fracrac{(\rhoho_0)_x}{\rhoho_0}\lambdaeq-\lambdaambda_2,\ in \ (\lambdaambda_4,1),\\
u_0(\lambdaambda_4)>0,u_0\gamma}\deltaef\G{\Gammaeq0,\ in \ (\lambdaambda_4,1).
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
\epsilonnd{Theorem}
The following remark is helpful for understanding the conditions \epsilonqref{11}-\epsilonqref{12} and Theorem \rhoef{1001}.
\betaegin{Remark} The set of initial data $(\rhoho_0,u_0)$ satisfying the condition \epsilonqref{11} or \epsilonqref{12} is non-empty.
For example, for any given positive integers $k$ and $l$, set
\betaegin{eqnarray}\lambdaabel{12.2}
\rhoho_0(x)=
\lambdaeft\{ \betaegin{array}{ll}
x^k(1-x)^k, \ for \ x\inftyn \ [0,1],\\
0,\ for\ x\inftyn \muathbb{R}\sigmaetminus [0,1]
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
and
\betaegin{eqnarray}\lambdaabel{12.4}
u_0(x)=
\lambdaeft\{ \betaegin{array}{ll}
- x^l, \ for\ x\inftyn \ [0,\fracrac{1}{4}],\\
\ smooth\ connection, \ for\ x\inftyn \ (\fracrac{1}{4},\fracrac{3}{4}),\\
(1-x)^l, \ for\ x\inftyn \ [\fracrac{3}{4},1],\\
0, \ for\ x\inftyn \muathbb{R}\sigmaetminus [0,1],
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
then $(\rhoho_0,u_0)$ satisfies both \epsilonqref{11} and \epsilonqref{12}.
It is known that the system \epsilonqref{1}-\epsilonqref{3} is well-posed in the homogeneous Sobolev space in the classical sense if and only if $\rhoho_0$ and $u_0$ satisfy the following compatibility condition (see \cite{CK2})
\betaegin{eqnarray}\lambdaabel{12.6}
\lambdaeft\{ \betaegin{array}{ll}
-\muu\Delta u_0-(\muu+\lambdaambda)\nuabla\epsilonmph{div}u_0+\nuabla p_0=\rhoho_0g,\\
g\inftyn D^1,\sigmaqrt{\rhoho_0}g\inftyn L^2.
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
In one-dimensional case, for $(\rhoho_0,u_0)$ given by \epsilonqref{12.2} and \epsilonqref{12.4}, we have
\betaegin{eqnarray*}
g=
\lambdaeft\{ \betaegin{array}{ll}
O(x^{l-k-2})+O(x^{l-k-1})+O(x^{k(\gamma}\deltaef\G{\Gammaamma-1)-1}), \ for \ x\inftyn \ [0,\fracrac{1}{4}],\\
\ smooth\ connection, \ for\ x\inftyn \ (\fracrac{1}{4},\fracrac{3}{4}),\\
O((1-x)^{l-k-2})+O((1-x)^{l-k-1})+O((1-x)^{k(\gamma}\deltaef\G{\Gammaamma-1)-1}),\ for\ x\inftyn \ [\fracrac{3}{4},1],\\
0, \ for\ x\inftyn \muathbb{R}\sigmaetminus [0,1].
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray*}
Direct calculations show that $(\rhoho_0,u_0)$ satisfies \epsilonqref{12.6} if and only if
\betaegin{eqnarray}\lambdaabel{12.8}
\lambdaeft\{ \betaegin{array}{ll}
k>\fracrac{3}{2(\gamma}\deltaef\G{\Gammaamma-1)},\\
l>k+\fracrac{5}{2}.
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
For the initial data $(\rhoho_0,u_0)$ given by \epsilonqref{12.2} and \epsilonqref{12.4} with \epsilonqref{12.8}, the system \epsilonqref{1}-\epsilonqref{3} is well-posed in the homogeneous Sobolev space but has no solution in $ C^1([0,T];H^m(\muathbb{R}))$, $m>2$, for any positive time $T$. Therefore, the solution constructed in \cite{CK2} has no finite energy in $ C^1([0,T];H^m(\muathbb{R}))$, $m>2$, for any positive time $T$, even if the initial data have finite energy in $H^m(\muathbb{R})$. Precisely, even if
\betaegin{eqnarray}
\inftynt_{\muathbb{R}}u_0(x)^2dx<\inftynfty,
\epsilonnd{eqnarray}
it nevertheless holds that
\betaegin{eqnarray}
\inftynt_{\muathbb{R}}u(x,t)^2dx=\inftynfty \quad \ for\ any\ t>0.
\epsilonnd{eqnarray}
\epsilonnd{Remark}
\betaegin{Theorem}\lambdaabel{1003} The one-dimensional full Navier-Stokes equations \epsilonqref{5}-\epsilonqref{7} with zero heat conduction and the initial density satisfying \epsilonqref{8} with $\Omega\thetariangleq I=(0,1)$ have no solution $(\rhoho,u,e)$ in the inhomogeneous Sobolev space $C^1([0,T];H^m(\muathbb{R}))$, $m>2$, for any positive time $T$, if the initial data $(\rhoho_0,u_0,e_0)$ satisfy one of the following two conditions in the interval $I$:
there exist positive numbers $\lambdaambda_i,i=5,6,7,8$ with $0<\lambdaambda_7,\lambdaambda_8<1$ such that
\betaegin{eqnarray}\lambdaabel{14}
\lambdaeft\{ \betaegin{array}{ll}
\fracrac{(\rhoho_0)_x}{\rhoho_0}+\fracrac{(e_0)_x}{\rhoho_0}\gamma}\deltaef\G{\Gammaeq \lambdaambda_5,\ in \ (0,\lambdaambda_7),\\
u_0(\lambdaambda_7)<0,u_0\lambdaeq0,\ in \ (0,\lambdaambda_7),
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
or
\betaegin{eqnarray}\lambdaabel{15}
\lambdaeft\{ \betaegin{array}{ll}
\fracrac{(\rhoho_0)_x}{\rhoho_0}+\fracrac{(e_0)_x}{\rhoho_0}\lambdaeq-\lambdaambda_6,\ in \ (\lambdaambda_8,1),\\
u_0(\lambdaambda_8)>0,u_0\gamma}\deltaef\G{\Gammaeq0,\ in \ (\lambdaambda_8,1).
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
\epsilonnd{Theorem}
Huang and Li \cite{HL} proved the well-posedness of the Cauchy problem for the $n$-dimensional full compressible Navier-Stokes equations \epsilonqref{5}-\epsilonqref{6} with positive heat conduction in Sobolev spaces, but the entropy function $S(t,x)$ is infinite in the vacuum domain (see Remark 4.2 in \cite{XYa}). If the entropy function $S(t,x)$ is required to be finite in vacuum domains, then we have the following non-existence result:
\betaegin{Theorem}\lambdaabel{1005}
The $n$-dimensional full compressible Navier-Stokes equations \epsilonqref{5}-\epsilonqref{7} with positive heat conduction
and the initial density satisfying \epsilonqref{8} have no solution $(\rhoho,u,e)$ in the inhomogeneous Sobolev space $C^1([0,T];H^m(\muathbb{R}^n))$, $m>[\fracrac{n}{2}]+2$, with finite entropy $S(t,x)$ for any positive time $T$.
\epsilonnd{Theorem}
To prove Theorems \rhoef{1001}--\rhoef{1005}, we will carry out the following steps. First, we reduce the original Cauchy problem to an initial-boundary value problem, which can then be reduced further, by the Lagrangian coordinate transformation, to an integro-differential system that degenerates in the $t$-derivative. One can then define a linear parabolic operator from the integro-differential system and establish Hopf's lemma and a strong maximum principle for the resulting operator. Finally, we prove by contradiction that the resulting system is over-determined.
Because the linear parabolic operator here degenerates in the $t$-derivative, owing to the fact that the initial density vanishes on the boundary, careful analysis is needed to deduce a localized version of the strong maximum principle on rectangles away from the boundary.
We should stress that our method is based on the maximum principle for parabolic operators. We therefore treat the one-dimensional isentropic case in Section 2, the one-dimensional zero heat conduction case in Section 3, and the $n$-dimensional positive heat conduction case in Section 4 separately: for the first two cases we define parabolic operators from the momentum equation near the degenerate boundary in the Lagrangian coordinates, imposing some conditions on the initial data, while for the last case we use the energy equation in the whole domain.
\betaegin{center}
\sigmaection{Proof of Theorem \rhoef{1001}}
\epsilonnd{center}
\sigmaubsection{Reformulation of Theorem \rhoef{1001}}
Let $n=1$ and $(\rhoho,u)\inftyn C^1([0,T];H^m(\muathbb{R})), m>2$ be a solution to the system \epsilonqref{1}-\epsilonqref{3} with the initial density satisfying $\epsilonqref{8}$.
Let $a(t)$ and $b(t)$ be the particle paths starting from $0$ and $1$, respectively. The following argument is due to Xin \cite{Xin}.
It follows from the first equation of \epsilonqref{1} that $\mubox{supp}_x\,\rhoho=[a(t), b(t)]$.
It follows from the second equation of \epsilonqref{1} that
\betaegin{eqnarray*}
u_{xx}(x,t)=0, \fracorall \ (x,t)\inftyn (\muathbb{R}\betaackslash [a(t), b(t)])\thetaimes (0,T],
\epsilonnd{eqnarray*}
which gives
\betaegin{eqnarray*}
u(x,t)=
\lambdaeft\{ \betaegin{array}{ll}
u(b(t),t)+(x-b(t))u_x(b(t),t), \ if \ x>b(t),\\
u(a(t),t)+(x-a(t))u_x(a(t),t), \ if \ x<a(t).
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray*}
Since $u(\cdot,t)\inftyn H^m(\muathbb{R}), m>2$, one has
\betaegin{eqnarray}\lambdaabel{9}
u(x,t)=u_x(x,t)=0, \fracorall \ (x,t)\inftyn (\muathbb{R}\betaackslash [a(t), b(t)])\thetaimes (0,T],
\epsilonnd{eqnarray}
which implies $[a(t), b(t)]=[0,1]$, i.e., $\mubox{supp}_x\,\rhoho(x,t)=[0,1]$.
Therefore, by the above argument, studying the well-posedness of the system \epsilonqref{1}-\epsilonqref{3} with the initial density satisfying \epsilonqref{8} is equivalent to
studying the well-posedness of the following initial-boundary value problem
\betaegin{eqnarray}\lambdaabel{10}
\lambdaeft\{ \betaegin{array}{ll}
\rhoho_t+(\rhoho
u)_x=0,\ in\ I\thetaimes (0,T],\\
(\rhoho u)_t+(\rhoho u^2+p)_x=\nuu u_{xx},\ in\ I\thetaimes (0,T],\\
(\rhoho,u)=(\rhoho_0,u_0), \ on \ I\thetaimes \{t=0\},\\
\rhoho=u=u_x=0,\ on \ \phiartial I\thetaimes (0,T],
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
where $\nuu=2\muu+\lambdaambda$.
The non-existence of solutions to the Cauchy problem \epsilonqref{1}-\epsilonqref{3} in $ C^1([0,T];H^m(\muathbb{R}))$, $m>2$, is equivalent to the non-existence of solutions to the initial-boundary value problem \epsilonqref{10} in $C^{2,1}(\betaar{I}\thetaimes [0,T])$, which here and in the following sections denotes the collection of functions that are $C^2$ in space and $C^1$ in time in $\betaar{I}\thetaimes [0,T]$. Thus, in order to prove Theorem \rhoef{1001}, one needs only to show the following:
\betaegin{Theorem}\lambdaabel{1000} The initial-boundary value problem \epsilonqref{10} has no solution $(\rhoho,u)$ in $C^{2,1}(\betaar{I}\thetaimes [0,T])$ for any positive time $T$, if the initial data $(\rhoho_0,u_0)$ satisfy the condition \epsilonqref{11} or \epsilonqref{12}.
\epsilonnd{Theorem}
Let $\epsilonta(x,t)$ denote the position
of the gas particle starting from $x$ at time $t=0$ satisfying
\betaegin{equation}\lambdaabel{16.5}
\lambdaeft\{ \betaegin{array}{ll}
\epsilonta_t(x,t)=u(\epsilonta(x,t),t), \\
\epsilonta(x,0)=x.
\epsilonnd{array}
\rhoight.
\epsilonnd{equation}
$\varepsilonarrho$ and $v$ are the Lagrangian density and velocity given by
\betaegin{eqnarray*}
\lambdaeft\{ \betaegin{array}{ll}
\varepsilonarrho(x,t)=\rhoho(\epsilonta(x,t),t),\\
v(x,t)=u(\epsilonta(x,t),t).
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray*}
Then the system \epsilonqref{10} can be rewritten in the Lagrangian coordinates as
\betaegin{eqnarray}\lambdaabel{17}
\lambdaeft\{ \betaegin{array}{ll}
\varepsilonarrho_t+{\frac{\varepsilonarrho v_x}{\epsilonta_x}}=0, \ in \ I\thetaimes(0,T],\\
\epsilonta_x\varepsilonarrho v_t+(\varepsilonarrho^\gamma}\deltaef\G{\Gammaamma)_x=\nuu({\frac{v_x}{\epsilonta_x}})_x, \ in \ I\thetaimes(0,T],\\
\epsilonta_t(x,t)=v(x,t), \\
(\varepsilonarrho,v,\epsilonta)=(\rhoho_0,u_0,x), \ on \ I\thetaimes \{t=0\},\\
\varepsilonarrho=v=v_x=0,\ on \ \phiartial I\thetaimes (0,T].
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
The first equation of \epsilonqref{17} implies that
\betaegin{eqnarray*}
\varepsilonarrho(x,t)={\frac{\rhoho_0(x)}{\epsilonta_x(x,t)}}.
\epsilonnd{eqnarray*}
Regarding $\rhoho_0$ as a parameter, one can reduce the system $\epsilonqref{17}$ further to
\betaegin{equation}\lambdaabel{18}
\lambdaeft\{ \betaegin{aligned}
&\rhoho_0v_t+(\frac{\rhoho_0^\gamma}\deltaef\G{\Gammaamma}{\epsilonta_x^\gamma}\deltaef\G{\Gammaamma})_x=\nuu(\frac{v_x}{\epsilonta_x})_x, \ in \ I\thetaimes(0,T],\\
&\epsilonta_t(x,t)=v(x,t), \\
& (v,\epsilonta)=(u_0,x), \ on \ I\thetaimes \{t=0\},\\
&v=v_x=0,\ on \ \phiartial I\thetaimes (0,T].
\epsilonnd{aligned}
\rhoight.
\epsilonnd{equation}
The condition \epsilonqref{11} or \epsilonqref{12} on the initial data $(\rhoho_0,u_0)$ takes the following form in the Lagrangian coordinates
\betaegin{eqnarray}\lambdaabel{19}
\lambdaeft\{ \betaegin{array}{ll}
\fracrac{(\rhoho_0)_x}{\rhoho_0}\gamma}\deltaef\G{\Gammaeq \lambdaambda_1,\ in \ (0,\lambdaambda_3),\\
v_0(\lambdaambda_3)<0,v_0\lambdaeq0,\ in \ (0,\lambdaambda_3),
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
or
\betaegin{eqnarray}\lambdaabel{20}
\lambdaeft\{ \betaegin{array}{ll}
\fracrac{(\rhoho_0)_x}{\rhoho_0}\lambdaeq-\lambdaambda_2,\ in \ (\lambdaambda_4,1),\\
v_0(\lambdaambda_4)>0,v_0\gamma}\deltaef\G{\Gammaeq0,\ in \ (\lambdaambda_4,1).
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
The non-existence of solutions to the initial-boundary value problem \epsilonqref{10} is equivalent to the non-existence of solutions to the initial-boundary value problem \epsilonqref{18} in $C^{2,1}(\betaar{I}\thetaimes [0,T])$. Thus, Theorem \rhoef{1000} is a consequence of the following:
\betaegin{Theorem}\lambdaabel{1006} The problem \epsilonqref{18} has no solution $(v,\epsilonta)$ in $C^{2,1}(\betaar{I}\thetaimes [0,T])$ for any positive time $T$, if the initial data $(\rhoho_0,u_0)$ satisfy the condition \epsilonqref{19} or \epsilonqref{20}.
\epsilonnd{Theorem}
\sigmaubsection{Proof of Theorem \rhoef{1006}}
Given a sufficiently small positive time $T^*$, we let $(v,\epsilonta)\inftyn C^{2,1}(\betaar{I}\thetaimes [0,T^*])$ be a solution of the system \epsilonqref{18} with \epsilonqref{19} or \epsilonqref{20}.
Define the linear parabolic operator $\rhoho_0\phiartial_t+L$ by
\betaegin{eqnarray}\lambdaabel{21}
\rhoho_0\phiartial_t+L:=\rhoho_0\phiartial_t-\fracrac{\nuu}{\epsilonta_x}\phiartial_{xx}+\fracrac{\nuu \epsilonta_{xx}}{\epsilonta_x^2}\phiartial_x,
\epsilonnd{eqnarray}
where
\betaegin{eqnarray*}
\epsilonta_x=1+\inftynt_0^tv_xds \quad\ \ and\ \quad \epsilonta_{xx}=\inftynt_0^tv_{xx}d s.
\epsilonnd{eqnarray*}
Then, it follows from the first equation of \epsilonqref{18} that
\betaegin{eqnarray}\lambdaabel{22}
\rhoho_0v_t+Lv=-(\fracrac{\rhoho_0^\gamma}\deltaef\G{\Gammaamma}{\epsilonta_x^\gamma}\deltaef\G{\Gammaamma})_x.
\epsilonnd{eqnarray}
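Indeed, by the definitions of the operator $L$ and of $\eta_x$, a direct computation (recorded here only for the reader's convenience) gives
\[
Lv=-\frac{\nu}{\eta_x}\,v_{xx}+\frac{\nu\,\eta_{xx}}{\eta_x^{2}}\,v_x=-\nu\Big(\frac{v_x}{\eta_x}\Big)_{x},
\]
so that \eqref{22} is precisely the first equation of \eqref{18} rewritten in terms of $\rho_0\partial_t+L$.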
Let $M$ be a positive constant such that
\betaegin{eqnarray}\lambdaabel{23}
\rhoho_0+|v_0|+|(v_0)_x|+|(v_0)_{xx}|<M.
\epsilonnd{eqnarray}
It follows from continuity in time that, for short times, it holds that
\betaegin{eqnarray*}
|v|+|v_x|+|v_{xx}|\lambdaeq M,\ in \ I\thetaimes (0,T^*].
\epsilonnd{eqnarray*}
Taking a positive time $T<T^*$ sufficiently small such that $T\lambdaeq\fracrac{1}{2M}$, one has
\betaegin{eqnarray}\lambdaabel{24}
|\inftynt_0^tv_xds|\lambdaeq MT\lambdaeq \fracrac{1}{2},\ in \ I\thetaimes (0,T].
\epsilonnd{eqnarray}
This implies
\betaegin{eqnarray*}
\fracrac{1}{2}\lambdaeq\epsilonta_x\lambdaeq\fracrac{3}{2},\ in \ I\thetaimes (0,T].
\epsilonnd{eqnarray*}
Thus, the equation \epsilonqref{22} is a well-defined integro-differential equation with degeneracy in the $t$-derivative, because the initial density $\rhoho_0$ vanishes on the boundary $\phiartial I$.
Restrict $T$ further such that $T\lambdaeq \fracrac{\lambdaambda_1}{4M}$. Then, \epsilonqref{24} implies
\betaegin{eqnarray}\lambdaabel{25}
-(\fracrac{\rhoho_0^\gamma}\deltaef\G{\Gammaamma}{\epsilonta_x^\gamma}\deltaef\G{\Gammaamma})_x
=-\fracrac{\gamma}\deltaef\G{\Gammaamma\rhoho_0^\gamma}\deltaef\G{\Gammaamma}{\epsilonta_x^\gamma}\deltaef\G{\Gammaamma}[\fracrac{(\rhoho_0)_x}{\rhoho_0}-\fracrac{\epsilonta_{xx}}{\epsilonta_x}]
\lambdaeq-\fracrac{\gamma}\deltaef\G{\Gammaamma\rhoho_0^\gamma}\deltaef\G{\Gammaamma}{\epsilonta_x^\gamma}\deltaef\G{\Gammaamma}(\lambdaambda_1-\fracrac{\lambdaambda_1}{2})<0, \ in \ (0,\lambdaambda_3)\thetaimes (0,T].
\epsilonnd{eqnarray}
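Here we have also used the elementary bound, recorded only for convenience,
\[
\Big|\frac{\eta_{xx}}{\eta_x}\Big|\le 2\Big|\int_0^t v_{xx}\,ds\Big|\le 2MT\le\frac{\lambda_1}{2}\quad\text{in } I\times(0,T],
\]
which follows from $\eta_x\ge\frac12$, $|v_{xx}|\le M$ and the restriction $T\le\frac{\lambda_1}{4M}$; combined with $\frac{(\rho_0)_x}{\rho_0}\ge\lambda_1$ on $(0,\lambda_3)$ this yields the strict sign in \eqref{25}.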
Thus, it follows from \epsilonqref{22} and \epsilonqref{25} that $v$ satisfies the following differential inequality
\betaegin{eqnarray}\lambdaabel{26}
\rhoho_0v_t+Lv
\lambdaeq0, \ in \ (0,\lambdaambda_3)\thetaimes (0,T].
\epsilonnd{eqnarray}
Similarly, $v$ also satisfies
\betaegin{eqnarray}\lambdaabel{27}
\rhoho_0v_t+Lv
\gamma}\deltaef\G{\Gammaeq0, \ in \ (\lambdaambda_4,1)\thetaimes (0,T].
\epsilonnd{eqnarray}
In the rest of this section, our main task is to establish Hopf's lemma and a strong maximum principle for the differential inequalities \epsilonqref{26} and \epsilonqref{27}.
First recall the definition of the parabolic boundary (see \cite{H}) of a bounded domain $D$ of $\muathbb{R}^d\thetaimes \muathbb{R}^+$: the parabolic boundary $\phiartial_p D$ of $D$ consists of the points $(x_0,t_0)\inftyn \phiartial D$ such that $B_r(x_0)\thetaimes (t_0-r^2,t_0]$
contains points not in $D$, for any $r>0$. In the following, $U$ denotes a bounded domain of $\muathbb{R}$, and we use the notation $U_T:=U\thetaimes (0,T]$ for the corresponding cylinder in $(0,\lambdaambda_3)\thetaimes (0,T]$.
Let $Q_T$ be any such cylinder contained in $(0,\lambdaambda_3)\thetaimes (0,T]$. We first derive a weak maximum principle for the differential inequality \epsilonqref{26} in $Q_T$.
\betaegin{Lemma}\lambdaabel{1007}
Suppose that $w\inftyn C^{2,1}(Q_T)\cap C(\betaar{Q}_T)$ satisfies
\betaegin{eqnarray}\lambdaabel{28}
\rhoho_0w_t+Lw
\lambdaeq0, \ in \ Q_T.
\epsilonnd{eqnarray}
Then $w$ attains its maximum on the parabolic boundary of $Q_T$.
\epsilonnd{Lemma}
\thetaextbf{Proof.} We first prove the statement under a stronger hypothesis instead of \epsilonqref{28} that
\betaegin{eqnarray}\lambdaabel{29}
\rhoho_0w_t+Lw<0, \ in \ Q_T.
\epsilonnd{eqnarray}
Assume, to the contrary, that $w$ attains its maximum at an interior point $(x_0,t_0)$ of the domain $Q_T$. Then
\betaegin{eqnarray*}
w_t(x_0,t_0)\gamma}\deltaef\G{\Gammaeq0, w_x(x_0,t_0)=0, w_{xx}(x_0,t_0)\lambdaeq0,
\epsilonnd{eqnarray*}
which implies $\rhoho_0w_t+Lw\gamma}\deltaef\G{\Gammaeq0$ at $(x_0,t_0)$; this contradicts \epsilonqref{29}. For the general case \epsilonqref{28}, define the auxiliary function
\betaegin{eqnarray*}
\varepsilonarphi^\varepsilonarepsilon=w-\varepsilonarepsilon t,
\epsilonnd{eqnarray*}
for a positive number $\varepsilonarepsilon$. Then
\betaegin{eqnarray*}
\rhoho_0\varepsilonarphi^\varepsilonarepsilon_t+L\varepsilonarphi^\varepsilonarepsilon&=&\rhoho_0w_t+Lw-\varepsilonarepsilon \rhoho_0<0, \ in \ Q_T.
\epsilonnd{eqnarray*}
Thus $\varepsilonarphi^\varepsilonarepsilon$ attains its maximum on the parabolic boundary of $Q_T$, which proves the assertion of Lemma \rhoef{1007} by letting $\varepsilonarepsilon$ go to zero.\qed
The result in Lemma \rhoef{1007} can be extended to a general domain $D\sigmaubset (0,\lambdaambda_3)\thetaimes (0,T]$ (see \cite{Friedman}).
\betaegin{Lemma}\lambdaabel{1008}
Suppose that $w\inftyn C^{2,1}(D)\cap C(\betaar{D})$ satisfies
\betaegin{eqnarray}\lambdaabel{30}
\rhoho_0w_t+Lw
\lambdaeq0, \ in \ D.
\epsilonnd{eqnarray}
Then $w$ attains its maximum on the parabolic boundary of $D$.
\epsilonnd{Lemma}
Next, we prove Hopf's lemma for the differential inequality \epsilonqref{26}, which is critical for proving Theorem \rhoef{1006}.
\betaegin{Proposition}\lambdaabel{1009}
Suppose that $w\inftyn C^{2,1}((0,\lambdaambda_3)\thetaimes (0,T])\cap C([0,\lambdaambda_3]\thetaimes [0,T])$ satisfies \epsilonqref{26} and there exists a point $(0,t_0)\inftyn\{0\}\thetaimes (0,T]$ such that $w(x,t)<w(0,t_0)$ for any point $(x,t)$ in a
neighborhood $D$ of the point $(0,t_0)$, where
\betaegin{eqnarray*}
D:=\{(x,t): (x-r)^2+(t_0-t)< r^2,0< x<\fracrac{r}{2},0<t\lambdaeq t_0\}, 0<r<\lambdaambda_3.
\epsilonnd{eqnarray*}
Then it holds that
\betaegin{eqnarray*}
\fracrac{\phiartial w(0,t_0)}{\phiartial \varepsilonec{n}}>0,
\epsilonnd{eqnarray*}
where $\varepsilonec{n}$ is the outer unit normal vector at the point $(0,t_0)$.
\epsilonnd{Proposition}
\thetaextbf{Proof.}
For positive constants $\alphalpha$ and $\varepsilonarepsilon$ to be determined, set
\betaegin{eqnarray*}
q(\alphalpha,x,t)=e^{-\alphalpha[(x-r)^2+(t_0-t)]}-e^{-\alphalpha r^2}
\epsilonnd{eqnarray*}
and
\betaegin{eqnarray*}
\varepsilonarphi(\varepsilonarepsilon,\alphalpha,x,t)=w(x,t)-w(0,t_0)+\varepsilonarepsilon q(\alphalpha,x,t).
\epsilonnd{eqnarray*}
First, we determine $\varepsilonarepsilon$.
The parabolic boundary $\phiartial_p D$ consists of two parts $\Sigma_1$ and $\Sigma_2$ given by
\betaegin{gather*}
\Sigma_1=\{(x,t):(x-r)^2+(t_0-t)<r^2, x=\fracrac{r}{2}, 0<t\lambdaeq t_0\}
\epsilonnd{gather*}
and
\betaegin{gather*}
\Sigma_2=\{(x,t):(x-r)^2+(t_0-t)=r^2, 0\lambdaeq x\lambdaeq\fracrac{r}{2}, 0<t\lambdaeq t_0\}.
\epsilonnd{gather*}
On $\betaar{\Sigma}_1$, $w(x,t)-w(0,t_0)<0$, and hence $w(x,t)-w(0,t_0)<-\varepsilonarepsilon_0$ for some $\varepsilonarepsilon_0>0$.
Note that $q\lambdaeq1$ on $\Sigma_1$. Then for such an $\varepsilonarepsilon_0$, $\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha,x,t)<0$ on $\Sigma_1$. For $(x,t)\inftyn \Sigma_2$, $q=0$ and $w(x,t)\lambdaeq w(0,t_0)$. Thus, $\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha,x,t)\lambdaeq0$
for any $(x,t)\inftyn \Sigma_2$ and $\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha,0,t_0)=0$. One concludes that
\betaegin{eqnarray}\lambdaabel{31}
\lambdaeft\{ \betaegin{array}{ll}
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha,x,t)\lambdaeq0,\ on\ \phiartial_p D,\\
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha,0,t_0)=0.
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
Next, we choose $\alphalpha$. It follows from \epsilonqref{26} that
\betaegin{eqnarray}\lambdaabel{32}
&&\rhoho_0\varepsilonarphi_t(\varepsilonarepsilon_0,\alphalpha,x,t)+L\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha,x,t)\nuonumber\\
&=&\rhoho_0w_t(x,t)+L w(x,t)
+\varepsilonarepsilon_0[\rhoho_0q_t(\alphalpha,x,t)+L q(\alphalpha,x,t)]\nuonumber\\
&\lambdaeq&\varepsilonarepsilon_0[\rhoho_0q_t(\alphalpha,x,t)+L q(\alphalpha,x,t)].
\epsilonnd{eqnarray}
A direct calculation yields
\betaegin{eqnarray}\lambdaabel{33}
&&e^{\alphalpha[(x-r)^2+(t_0-t)]}[\rhoho_0q_t(\alphalpha,x,t)+L q(\alphalpha,x,t)]\nuonumber\\
&=&-\fracrac{4\nuu(x-r)^2}{\epsilonta_x}\alphalpha^2
+[\rhoho_0+\fracrac{2\nuu}{\epsilonta_x}+\fracrac{2\nuu \epsilonta_{xx} (r-x)}{\epsilonta_x^2}]\alphalpha\nuonumber\\
&\lambdaeq& -\fracrac{2\nuu r^2}{3}\alphalpha^2
+(M+4\nuu+8\nuu M r)\alphalpha.
\epsilonnd{eqnarray}
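For the reader's convenience, the derivatives entering this computation are simply
\[
q_t=\alpha\,e^{-\alpha[(x-r)^2+(t_0-t)]},\qquad
q_x=-2\alpha(x-r)\,e^{-\alpha[(x-r)^2+(t_0-t)]},\qquad
q_{xx}=\bigl(4\alpha^2(x-r)^2-2\alpha\bigr)\,e^{-\alpha[(x-r)^2+(t_0-t)]},
\]
while the final estimate uses $\frac12\le\eta_x\le\frac32$ together with $(x-r)^2\ge\frac{r^2}{4}$ for $0<x<\frac{r}{2}$.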
Therefore, there exists a positive number $\alphalpha_0=\alphalpha_0(\nuu,r,M)$ such that
\betaegin{eqnarray}\lambdaabel{34}
\rhoho_0q_t(\alphalpha_0,x,t)+L q(\alphalpha_0,x,t)\lambdaeq0,\ in \ D,
\epsilonnd{eqnarray}
Thus, it follows from \epsilonqref{32} and \epsilonqref{34} that
\betaegin{eqnarray}\lambdaabel{35}
\rhoho_0\varepsilonarphi_t(\varepsilonarepsilon_0,\alphalpha_0,x,t)+L\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)\lambdaeq 0,\ in \ D.
\epsilonnd{eqnarray}
In conclusion, in view of \epsilonqref{31} and \epsilonqref{35}, one has
\betaegin{eqnarray*}
\lambdaeft\{ \betaegin{array}{ll}
\rhoho_0\varepsilonarphi_t(\varepsilonarepsilon_0,\alphalpha_0,x,t)+L\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)\lambdaeq 0,\ in \ D,\\
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)\lambdaeq0, \ on \ \phiartial_pD,\\
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,0,t_0)=0.
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray*}
This, together with Lemma \rhoef{1008} yields
\betaegin{eqnarray*}
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)\lambdaeq0, \ in\ D.
\epsilonnd{eqnarray*}
Therefore, $\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,\cdot,\cdot)$ attains its maximum at the point $(0,t_0)$ in $D$. In particular, it holds that
\betaegin{eqnarray*}
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t_0)\lambdaeq \varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,0,t_0)\quad \ for\ all\ x\inftyn (0,\fracrac{r}{2}).
\epsilonnd{eqnarray*}
This implies
\betaegin{eqnarray*}
\fracrac{\phiartial \varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,0,t_0)}{\phiartial \varepsilonec{n}}\gamma}\deltaef\G{\Gammaeq0.
\epsilonnd{eqnarray*}
Finally, we get
\betaegin{eqnarray*}
\fracrac{\phiartial w(0,t_0)}{\phiartial \varepsilonec{n}}\gamma}\deltaef\G{\Gammaeq-\varepsilonarepsilon_0 \fracrac{\phiartial q(\alphalpha_0,0,t_0)}{\phiartial \varepsilonec{n}}
=2\varepsilonarepsilon_0\alphalpha_0 r e^{-\alphalpha_0 r^2}>0.
\epsilonnd{eqnarray*}
\qed
In order to establish a strong maximum principle for the differential inequality \epsilonqref{26}, we need to study the t-derivative of
interior maximum point. The main ideas in the following lemmas come from \cite{Friedman}.
\betaegin{Lemma}\lambdaabel{1010}
Let $w\inftyn C^{2,1}((0,\lambdaambda_3)\thetaimes (0,T])\cap C([0,\lambdaambda_3]\thetaimes [0,T])$ satisfy \epsilonqref{26} and have a maximum $M_0$ in the domain $(0,\lambdaambda_3)\thetaimes (0,T]$. Suppose that $(0,\lambdaambda_3)\thetaimes (0,T]$ contains a closed solid ellipsoid
\betaegin{eqnarray*}
\Omega^\sigmaigma:=\{(x,t): (x-x_*)^2+\sigmaigma(t-t_*)^2\lambdaeq r^2\},\sigmaigma>0
\epsilonnd{eqnarray*}
and $w(x,t)<M_0$ for any interior point $(x,t)$ of $\Omega^\sigmaigma$ and $w(\betaar{x},\betaar{t})=M_0$ at some point $(\betaar{x},\betaar{t})$ on the boundary of $\Omega^\sigmaigma$. Then $\betaar{x}=x_*$.
\epsilonnd{Lemma}
\thetaextbf{Proof.} Without loss of generality, one can assume that $(\betaar{x},\betaar{t})$ is the only point on $\phiartial \Omega^\sigmaigma$ such that $w=M_0$ in $\Omega^\sigmaigma$. Otherwise, one can limit it to a smaller closed ellipsoid lying in $\Omega^\sigmaigma$ and having $(\betaar{x},\betaar{t})$ as the only common point with $\phiartial \Omega^\sigmaigma$. We prove the desired result by contradiction.
Suppose that $\betaar{x}\nueq x_*$. Applying Lemma \rhoef{1008} on $\Omega^\sigmaigma$ shows $\betaar{t}<T$.
Choose a closed ball $D$ with center $(\betaar{x},\betaar{t})$ and radius $\thetailde{r}<|\betaar{x}-x_*|$ contained in $(0,\lambdaambda_3)\thetaimes (0,T]$. Then $|x-x_*|\gamma}\deltaef\G{\Gammaeq|\betaar{x}-x_*|-\thetailde{r}=:\hat{r}$ for any point $(x,t)\inftyn D$.
The parabolic boundary of $D$ is composed of a part $\Sigma_1$ lying in $\Omega^\sigmaigma$ and a part $\Sigma_2$ lying outside $\Omega^\sigmaigma$.
For positive constants $\alphalpha$ and $\varepsilonarepsilon$ to be determined, set
\betaegin{eqnarray*}
q(\alphalpha,x,t)=e^{-\alphalpha[(x-x_*)^2+\sigmaigma(t-t_*)^2]}-e^{-\alphalpha r^2}
\epsilonnd{eqnarray*}
and
\betaegin{eqnarray*}
\varepsilonarphi(\varepsilonarepsilon,\alphalpha,x,t)=w(x,t)-M_0+\varepsilonarepsilon q(\alphalpha,x,t).
\epsilonnd{eqnarray*}
Note that $q(\alphalpha,x,t)>0$ in the interior of $\Omega^\sigmaigma$, $q(\alphalpha,x,t)=0$ on $\phiartial \Omega^\sigmaigma$ and $q(\alphalpha,x,t)<0$ outside $\Omega^\sigmaigma$. So, it holds that $\varepsilonarphi(\varepsilonarepsilon,\alphalpha,\betaar{x},\betaar{t})=0$.
On $\Sigma_1$, $w(x,t)-M_0<0$, and hence $w(x,t)-M_0<-\varepsilonarepsilon_0$ for some $\varepsilonarepsilon_0>0$.
Note that $q(\alphalpha,x,t)\lambdaeq1$ on $\Sigma_1$. Then for such an $\varepsilonarepsilon_0$, $\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha,x,t)<0$ on $\Sigma_1$. For $(x,t)\inftyn \Sigma_2$, $q(\alphalpha,x,t)<0$ and $w(x,t)-M_0\lambdaeq0$. Thus, $\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha,x,t)<0$
for any $(x,t)\inftyn \Sigma_2$. One concludes that
\betaegin{eqnarray}\lambdaabel{36}
\lambdaeft\{ \betaegin{array}{ll}
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha,x,t)<0,\ on\ \phiartial_pD,\\
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha,\betaar{x},\betaar{t})=0.
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
Next, we estimate $\rhoho_0q_t(\alphalpha,x,t)+L q(\alphalpha,x,t)$.
One calculates that for $(x,t)\inftyn D$,
\betaegin{eqnarray*}
&&e^{\alphalpha[(x-x_*)^2+\sigmaigma(t-t_*)^2]}[\rhoho_0q_t(\alphalpha,x,t)+L q(\alphalpha,x,t)]\nuonumber\\
&=&-\fracrac{4\nuu(x-x_*)^2}{\epsilonta_x}\alphalpha^2
+[2\sigmaigma\rhoho_0(t_*-t)+\fracrac{2\nuu}{\epsilonta_x}+\fracrac{2\nuu \epsilonta_{xx} (x_*-x)}{\epsilonta_x^2}]\alphalpha\nuonumber\\
&\lambdaeq& -\fracrac{8\nuu \hat{r}^2}{3}\alphalpha^2
+(2\sigmaigma M+4\nuu+8\nuu M r)\alphalpha.
\epsilonnd{eqnarray*}
Therefore, there exists a positive number $\alphalpha_0=\alphalpha_0(\nuu,r,\hat{r},\sigmaigma,M)$ such that
\betaegin{eqnarray}\lambdaabel{37}
\rhoho_0q_t(\alphalpha_0,x,t)+L q(\alphalpha_0,x,t)\lambdaeq0,\ in \ D.
\epsilonnd{eqnarray}
Thus, it follows from \epsilonqref{26}, \epsilonqref{32} and \epsilonqref{37} that
\betaegin{eqnarray}\lambdaabel{38}
\rhoho_0\varepsilonarphi_t(\varepsilonarepsilon_0,\alphalpha_0,x,t)+L\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)\lambdaeq 0,\ in \ D.
\epsilonnd{eqnarray}
In conclusion, it follows from \epsilonqref{36} and \epsilonqref{38} that
\betaegin{eqnarray*}
\lambdaeft\{ \betaegin{array}{ll}
\rhoho_0\varepsilonarphi_t(\varepsilonarepsilon_0,\alphalpha_0,x,t)+L\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)\lambdaeq 0,\ in \ D,\\
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)<0, \ on \ \phiartial_pD,\\
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,\betaar{x},\betaar{t})=0.
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray*}
However, Lemma \rhoef{1008} implies that
\betaegin{eqnarray*}
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)<0, \ in\ D,
\epsilonnd{eqnarray*}
which contradicts $\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,\betaar{x},\betaar{t})=0$, since $(\betaar{x},\betaar{t})\inftyn D$.\qed
Based on Lemma \rhoef{1010}, it is standard to prove the following lemma. For details, please refer to Lemma 3 of Chapter 2 in \cite{Friedman}.
\betaegin{Lemma}\lambdaabel{1011}
Suppose that $w\inftyn C^{2,1}((0,\lambdaambda_3)\thetaimes (0,T])\cap C([0,\lambdaambda_3]\thetaimes [0,T])$ satisfies \epsilonqref{26}.
If $w$ attains its maximum at an interior point $P_0=(x_0,t_0)$ of $(0,\lambdaambda_3)\thetaimes (0,T]$, then
$w(P)=w(P_0)$ for any point $P=(x,t_0)$ of $(0,\lambdaambda_3)\thetaimes (0,T]$.
\epsilonnd{Lemma}
We first prove a localized version strong maximum principle in a rectangle $\muathcal{R}$ of the domain $(0,\lambdaambda_3)\thetaimes (0,T]$.
\betaegin{Lemma}\lambdaabel{1012}
Suppose that $w\inftyn C^{2,1}((0,\lambdaambda_3)\thetaimes (0,T])\cap C([0,\lambdaambda_3]\thetaimes [0,T])$ satisfies \epsilonqref{26}.
If $w$ attains its maximum at an interior point $P_0=(x_0,t_0)$ of $(0,\lambdaambda_3)\thetaimes (0,T]$, then there exists a rectangle
\betaegin{eqnarray*}
\muathcal{R}(P_0):=\{(x,t):x_0-a_1\lambdaeq x\lambdaeq x_0+a_1, t_0-a_0\lambdaeq t\lambdaeq t_0\}
\epsilonnd{eqnarray*}
in $(0,\lambdaambda_3)\thetaimes (0,T]$ such that
$w(P)=w(P_0)$ for any point $P$ of $\muathcal{R}(P_0)$.
\epsilonnd{Lemma}
\thetaextrm{\thetaextbf{Proof}} We prove the desired result by contradiction. Suppose that there exists an interior point $P_1=(x_1,t_1)$ of $(0,\lambdaambda_3)\thetaimes (0,T]$ with $t_1< t_0$ such that $w(P_1)<w(P_0)$.
Connect $P_1$ to $P_0$ by a simple smooth curve $\gamma}\deltaef\G{\Gammaamma$. Then there exists a point $P_*=(x_*,t_*)$ on $\gamma}\deltaef\G{\Gammaamma$ such that $w(P_*)=w(P_0)$ and $w(\betaar{P})<w(P_*)$ for any point $\betaar{P}$ of $\gamma}\deltaef\G{\Gammaamma$ between $P_1$ and $P_*$. We may assume that $P_*=P_0$ and that $P_1$ is very near to $P_0$. There exists a rectangle $\muathcal{R}(P_0)$ in $(0,\lambdaambda_3)\thetaimes (0,T]$ with small positive numbers $a_0$ and $a_1$ (to be determined) such that
$P_1$ lies on $t=t_0-a_0$.
For each $\betaar{t}\inftyn[t_0-a_0,t_0)$, the slice $(\muathcal{R}(P_0)\sigmaetminus \{t=t_0\})\cap\{t=\betaar{t}\}$ contains some point $\betaar{P}=(\betaar{x},\betaar{t})$ of $\gamma}\deltaef\G{\Gammaamma$ with $w(\betaar{P})<w(P_0)$; hence, by Lemma \rhoef{1011}, $w(P)<w(P_0)$ for each point $P$ in this slice. Therefore, $w(P)<w(P_0)$ for each point $P$ in $\muathcal{R}(P_0)\sigmaetminus \{t=t_0\}$.
For positive constants $\alphalpha$ and $\varepsilonarepsilon$ to be determined, set
\betaegin{eqnarray*}
q(\alphalpha,x,t)=t_0-t-\alphalpha(x-x_0)^2
\epsilonnd{eqnarray*}
and
\betaegin{eqnarray*}
\varepsilonarphi(\varepsilonarepsilon,\alphalpha,x,t)=w(x,t)-w(P_0)+\varepsilonarepsilon q(\alphalpha,x,t).
\epsilonnd{eqnarray*}
Assume further that $P=(x_0-a_1,t_0-a_0)$ is on the parabola $q(\alphalpha,x,t)=0$. Then
\betaegin{equation}\lambdaabel{38.2}
\alphalpha=\fracrac{a_0}{a_1^2}.
\epsilonnd{equation}
To choose $\alphalpha$, one calculates
\betaegin{eqnarray}\lambdaabel{38.4}
&&\rhoho_0q_t(\alphalpha,x,t)+L q(\alphalpha,x,t)\nuonumber\\
&=&
-\rhoho_0+[\fracrac{2\nuu}{\epsilonta_x}-\fracrac{2\nuu \epsilonta_{xx} (x-x_0)}{\epsilonta_x^2}]\alphalpha\nuonumber\\
&\lambdaeq& -\rhoho_0
+(4\nuu+8\nuu M a_1)\alphalpha.
\epsilonnd{eqnarray}
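Here, only for convenience, we record that $q_t=-1$, $q_x=-2\alpha(x-x_0)$ and $q_{xx}=-2\alpha$, so the identity above follows directly from the definition \eqref{21} of the operator $\rho_0\partial_t+L$.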
Since $\rhoho_0$ has a positive lower bound, depending on $x_0-a_1$, in $\muathcal{R}(P_0)$, one can choose $\alphalpha_0$ such that
\betaegin{equation}\lambdaabel{39}
\alphalpha_0<\fracrac{\rhoho_0}{4\nuu+8\nuu M a_1}.
\epsilonnd{equation}
This and \epsilonqref{38.4} imply that
\betaegin{eqnarray}\lambdaabel{40}
\rhoho_0\varepsilonarphi_t(\alphalpha_0,x,t)+L \varepsilonarphi(\alphalpha_0,x,t)
\lambdaeq0, \ in\ \muathcal{R}(P_0).
\epsilonnd{eqnarray}
One can now fix $a_1$ such that
\betaegin{equation*}
a_1<\muin\{x_0,\lambdaambda_3-x_0\}
\epsilonnd{equation*}
and it then follows from \epsilonqref{38.2} and \epsilonqref{38.4} that one can choose $a_0$ such that
\betaegin{equation*}
a_0<\muin\{t_0,\fracrac{a_1^2\rhoho_0}{2(4\nuu+8\nuu M a_1)}\}.
\epsilonnd{equation*}
Denote $\muathcal{S}=\{(x,t)\inftyn \muathcal{R}(P_0), q(\alphalpha_0,x,t)\gamma}\deltaef\G{\Gammaeq0\}$. The parabolic boundary $\phiartial_p\muathcal{S}$ of $\muathcal{S}$ is composed of a part $\Sigma_1$ lying in $\muathcal{R}(P_0)$ and a part $\Sigma_2$ lying on $\muathcal{R}(P_0)\cap \{t=t_0-a_0\}$.
We now determine $\varepsilonarepsilon$. Note that on $\Sigma_2$, $w(x,t)-w(P_0)<0$ and $q(\alphalpha_0,x,t)$ is bounded, so one can choose a sufficiently small number $\varepsilonarepsilon_0$ such that $\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)<0$ on $\Sigma_2$.
On $\Sigma_1\sigmaetminus \{P_0\}$, $q(\alphalpha_0,x,t)=0$ and $w(x,t)-w(P_0)<0$. Thus, $\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)<0$ on $\Sigma_1\sigmaetminus \{P_0\}$ and
$\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x_0,t_0)=0$. One concludes that
\betaegin{eqnarray}\lambdaabel{40.5}
\lambdaeft\{ \betaegin{array}{ll}
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)<0,\ on\ \phiartial_p\muathcal{S}\sigmaetminus \{P_0\},\\
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x_0,t_0)=0.
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
In conclusion, it follows from \epsilonqref{40} and \epsilonqref{40.5} that there exist $\varepsilonarepsilon_0$, $a_0$ and $a_1$ such that
\betaegin{eqnarray}\lambdaabel{41}
\lambdaeft\{ \betaegin{array}{ll}
\rhoho_0\varepsilonarphi_t(\varepsilonarepsilon_0,\alphalpha_0,x,t)+L\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)\lambdaeq 0,\ in\ \muathcal{S},\\
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x,t)<0,\ on\ \phiartial_p\muathcal{S}\sigmaetminus \{P_0\},\\
\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x_0,t_0)=0.
\epsilonnd{array}
\rhoight.
\epsilonnd{eqnarray}
In view of Lemma \rhoef{1008} and \epsilonqref{41}, $\varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,\cdot,\cdot)$ only attains its maximum at $P_0$ in $\muathcal{\betaar{S}}$, thus
\betaegin{eqnarray*}
\fracrac{\phiartial \varepsilonarphi(\varepsilonarepsilon_0,\alphalpha_0,x_0,t_0)}{\phiartial t}\gamma}\deltaef\G{\Gammaeq0.
\epsilonnd{eqnarray*}
Note that $q$ satisfies at $P_0$
\betaegin{eqnarray*}
\fracrac{\phiartial q(\alphalpha_0,x_0,t_0)}{\phiartial t}=-1.
\epsilonnd{eqnarray*}
Therefore
\betaegin{eqnarray}\lambdaabel{42}
\fracrac{\phiartial w(x_0,t_0)}{\phiartial t}\gamma}\deltaef\G{\Gammaeq \varepsilonarepsilon_0.
\epsilonnd{eqnarray}
But, since by assumption $w$ attains its maximum at $P_0$, it follows that
\betaegin{eqnarray*}
\rhoho_0\fracrac{\phiartial w(x_0,t_0)}{\phiartial t}\lambdaeq -L w(x_0,t_0)\lambdaeq0,
\epsilonnd{eqnarray*}
which contradicts \epsilonqref{42}.\qed
Now we can prove the following strong maximum principle.
\betaegin{Proposition}\lambdaabel{1013}
Suppose that $w\inftyn C^{2,1}((0,\lambdaambda_3)\thetaimes (0,T])\cap C([0,\lambdaambda_3]\thetaimes [0,T])$ satisfies \epsilonqref{26}.
If $w$ attains its maximum at some interior point $P_0=(x_0,t_0)$ of $ (0,\lambdaambda_3)\thetaimes (0,T]$, then
$w(P)=w(P_0)$ for any point $P\inftyn(0,\lambdaambda_3)\thetaimes (0,t_0]$.
\epsilonnd{Proposition}
\thetaextrm{\thetaextbf{Proof}} We prove the desired result by contradiction. Suppose that $w\nuot\epsilonquiv w(P_0)$. Then there exists a point $P_1=(x_1,t_1)$ of $ (0,\lambdaambda_3)\thetaimes (0,t_0]$ such that $w(P_1)<w(P_0)$. By Lemma \rhoef{1011}, it must hold that $t_1<t_0$.
Connect $P_1$ to $P_0$ by a straight line $\gamma}\deltaef\G{\Gammaamma$. There exists a point $P_*$ on $\gamma}\deltaef\G{\Gammaamma$ such that $w(P_*)=w(P_0)$ and $w(\betaar{P})<w(P_*)$ for any point $\betaar{P}$ on $\gamma}\deltaef\G{\Gammaamma$ lying between $P_*$ and $P_1$. Denote by $\gamma}\deltaef\G{\Gammaamma_0$ the closed segment of $\gamma}\deltaef\G{\Gammaamma$ lying between $P_*$ and $P_1$.
Construct a series of rectangles $\muathcal{R}_n, n=1,2,\cdots,N$, with small $a_n$ and $b_n$ such that $\gamma}\deltaef\G{\Gammaamma_0\sigmaubset\cup_{n=1}^N\muathcal{R}_n$, $P_*\inftyn \muathcal{R}_1$ and $P_1\inftyn \muathcal{R}_N$. Applying Lemma \rhoef{1012} on $\muathcal{R}_1, \muathcal{R}_2, \cdots, \muathcal{R}_N$ step by step, it follows that $w=w(P_*)$ in $\cup_{n=1}^N\muathcal{R}_n$. Hence, one deduces $w(P_1)=w(P_*)$, since $P_1$ lies on $\gamma}\deltaef\G{\Gammaamma_0$, which is a contradiction.\qed
Let $D$ be a bounded domain contained in the domain $(\lambda_4,1)\times (0,T]$. Similar to Lemma \ref{1008}, Proposition \ref{1009} and Proposition \ref{1013},
we have the corresponding weak minimum principle, Hopf's lemma and strong minimum principle for the differential inequality \eqref{27}.
\begin{Lemma}\label{1014}
Suppose that $w\in C^{2,1}(D)\cap C(\bar{D})$ satisfies
\begin{eqnarray*}
\rho_0w_t+Lw
\geq0, \ in \ D.
\end{eqnarray*}
Then $w$ attains its minimum on the parabolic boundary of $D$.
\end{Lemma}
\begin{Proposition}\label{1015}
Suppose that $w\in C^{2,1}((\lambda_4,1)\times (0,T])\cap C([\lambda_4,1]\times [0,T])$ satisfies \eqref{27} and there exists a point $(1,t_0)\in\{1\}\times (0,T]$ such that $w(x,t)>w(1,t_0)$ for any point $(x,t)$ in a
neighborhood $D$ of the point $(1,t_0)$, where
\begin{eqnarray*}
D:=\{(x,t):(x-(1-r))^2+(t_0-t)< r^2,\ 1-\frac{r}{2}<x<1 ,\ 0<t\leq t_0\}, \quad 1-r>\lambda_4.
\end{eqnarray*}
Then it holds that
\begin{eqnarray*}
\frac{\partial w(1,t_0)}{\partial \vec{n}}<0,
\end{eqnarray*}
where $\vec{n}$ is the outer unit normal vector at the point $(1,t_0)$.
\end{Proposition}
\begin{Proposition}\label{1016}
Suppose that $w\in C^{2,1}((\lambda_4,1)\times (0,T])\cap C([\lambda_4,1]\times [0,T])$ satisfies \eqref{27}.
If $w$ attains its minimum at some interior point $P_0=(x_0,t_0)$ of $(\lambda_4,1)\times (0,T]$, then
$w(P)=w(P_0)$ for any point $P$ of $(\lambda_4,1)\times (0,t_0]$.
\end{Proposition}
We are now ready to prove Theorem \ref{1006}.\\
\textrm{\textbf{Proof of Theorem \ref{1006}.}} We first consider the case of the domain $(0,\lambda_3)\times (0,T]$.
Recall that $v$ satisfies \eqref{26}, so the weak maximum principle, Hopf's lemma and the strong maximum principle established above for $w$ also hold for $v$.
Since $v_0(\lambda_3)<0$, by continuity of $v$ in time there exists a time $t_0>0$ such that $v(\lambda_3,\cdot)<0$ in $(0,t_0)$. By Lemma \ref{1007}, $v$ attains its maximum on the parabolic boundary $\{x=0\}\times (0,t_0]\cup \{x=\lambda_3\}\times (0,t_0]\cup (0,\lambda_3)\times\{t=0\}$. Since $v=0$ on the parabolic boundary $\{x=0\}\times (0,t_0]$ and $v_0\leq0$ in $(0,\lambda_3)$, by Proposition \ref{1013}, $v$ attains its maximum only on the set $\{x=0\}\times (0,t_0]\cup (0,\lambda_3)\times\{t=0\}$. Thus, $v(x,t)<v(0,t_0)(=0)$ for any point $(x,t)\in (0,\lambda_3)\times (0,t_0]$. Applying Proposition \ref{1009} shows that $\frac{\partial v(0,t_0)}{\partial \vec{n}}>0$, which contradicts $v_x(x,t)=0$ on $\partial I\times (0,T]$ of the system \eqref{18}. The other case is similar.\qed
\begin{center}
\section{Proof of Theorem \ref{1003}}
\end{center}
\subsection{Reformulation of Theorem \ref{1003}}
Suppose that $\kappa=0$ and $n=1$.
Let $(\rho,u,e)\in C^1([0,T];H^m(\mathbb{R})), m>2$, be a solution to the system \eqref{5}-\eqref{7} with the initial density satisfying \eqref{8}.
Let $a(t)$ and $b(t)$ be the particle paths starting from $0$ and $1$, respectively.
Similar to \eqref{9}, one can show that
\begin{eqnarray*}
\left\{ \begin{array}{ll}
[a(t), b(t)]=[0,1],\\
u(x,t)=u_x(x,t)=0,
\end{array}
\right.
\end{eqnarray*}
where $t\in(0,T^*)$ and $x\in [a(t), b(t)]^c$.
Therefore, studying the ill-posedness of the system \eqref{5}-\eqref{7} with the initial density satisfying \eqref{8} is equivalent to
studying that of the following initial-boundary value problem:
\begin{eqnarray}\label{13}
\left\{ \begin{array}{ll}
\rho_t+(\rho u)_x=0,\ in\ I\times (0,T],\\
(\rho u)_t+(\rho u^2+p)_x=\mu u_{xx},\ in\ I\times (0,T],\\
(\rho e)_t+(\rho eu)_x+ pu_x=\mu u_x^2,\ in\ I\times (0,T],\\
(\rho,u,e)=(\rho_0,u_0,e_0), \ on \ I\times \{t=0\},\\
\rho=u=u_x=0,\ on \ \partial I\times (0,T].
\end{array}
\right.
\end{eqnarray}
The non-existence of a solution to the Cauchy problem \eqref{5}-\eqref{7} in $C^1([0,T];H^m(\mathbb{R}))$, $m>2$, is equivalent to the non-existence of a solution to the initial-boundary value problem \eqref{13} in $C^{2,1}(\bar{I}\times [0,T])$. Thus, in order to prove Theorem \ref{1003}, we need only to show the following:
\begin{Theorem}\label{1002} The initial-boundary value problem \eqref{13} has no solution $(\rho,u,e)$ in $C^{2,1}(\bar{I}\times [0,T])$ for any positive time $T$, if the initial data $(\rho_0,u_0,e_0)$ satisfy the condition \eqref{14} or \eqref{15}.
\end{Theorem}
Let $\eta(x,t)$ be the position
of the gas particle starting from $x$ at time $t=0$, defined by \eqref{16.5}.
Let $\varrho$, $v$ and $\mathfrak{e}$ be the Lagrangian density, velocity and internal energy, which are defined by
\begin{eqnarray}\label{42.5}
\left\{ \begin{aligned}
\varrho(x,t)=\rho(\eta(x,t),t),\\
v(x,t)=u(\eta(x,t),t),\\
\mathfrak{e}(x,t)=e(\eta(x,t),t).
\end{aligned}
\right.
\end{eqnarray}
Then the system \eqref{13} can be rewritten in the Lagrangian coordinates as
\begin{equation}\label{43}
\left\{ \begin{aligned}
&\rho_0v_t+(\frac{\rho_0\mathfrak{e}}{\eta_x})_x=\mu(\frac{v_x}{\eta_x})_x, \ in \ I\times(0,T],\\
&\rho_0\mathfrak{e}_t+(\gamma-1)\frac{\rho_0\mathfrak{e}v_x}{\eta_x}=\mu\frac{v_x^2}{\eta_x},\ in \ I \times (0,T],\\
&\eta_t(x,t)=v(x,t),\\
& (v,\mathfrak{e},\eta)=(u_0,e_0,x), \ on \ I\times \{t=0\},\\
&v=v_x=0,\ on \ \partial I\times (0,T].
\end{aligned}
\right.
\end{equation}
In the Lagrangian coordinates, the condition \eqref{14} or \eqref{15} on the initial data $(\rho_0,u_0,e_0)$ becomes
\begin{eqnarray}\label{44}
\left\{ \begin{array}{ll}
\frac{(\rho_0)_x}{\rho_0}+\frac{(\mathfrak{e}_0)_x}{\rho_0}\geq \lambda_5,\ in \ (0,\lambda_7),\\
v_0(\lambda_7)<0,\ v_0\leq0,\ in \ (0,\lambda_7),
\end{array}
\right.
\end{eqnarray}
or
\begin{eqnarray}\label{45}
\left\{ \begin{array}{ll}
\frac{(\rho_0)_x}{\rho_0}+\frac{(\mathfrak{e}_0)_x}{\rho_0}\leq-\lambda_6,\ in \ (\lambda_8,1),\\
v_0(\lambda_8)>0,\ v_0\geq0,\ in \ (\lambda_8,1),
\end{array}
\right.
\end{eqnarray}
respectively.
The non-existence of a solution to the initial-boundary value problem \eqref{43} is equivalent to the non-existence of a solution to the initial-boundary value problem \eqref{13} in $C^{2,1}(\bar{I}\times [0,T])$. Thus, in order to prove Theorem \ref{1002}, we need only to show the following:
\begin{Theorem}\label{1017} The initial-boundary value problem \eqref{43} has no solution $(v,\mathfrak{e},\eta)$ in $C^{2,1}(\bar{I}\times [0,T])$ for any positive time $T$, if the initial data $(\rho_0,u_0,e_0)$ satisfy the condition \eqref{44} or \eqref{45}.
\end{Theorem}
\subsection{Proof of Theorem \ref{1017}}
Let $T^*$ be a given, sufficiently small positive time, and let $(v,\mathfrak{e},\eta)\in C^{2,1}(\bar{I}\times [0,T^*])$ be a solution of the system \eqref{43} with \eqref{44} or \eqref{45}.
Define the linear parabolic operator $\rho_0\partial_t+L$, similar to Subsection 3.1, by
\begin{eqnarray*}
\rho_0\partial_t+L:=\rho_0\partial_t-\frac{\mu}{\eta_x}\partial_{xx}+\frac{\mu \eta_{xx}}{\eta_x^2}\partial_x.
\end{eqnarray*}
Then, it follows from the first equation of \eqref{43} that
\begin{eqnarray}\label{47}
\rho_0v_t+Lv=-(\frac{\rho_0\mathfrak{e}}{\eta_x})_x.
\end{eqnarray}
Let $M$ be a positive constant such that
\begin{eqnarray*}
\rho_0+|v_0|+|(v_0)_x|+|(v_0)_{xx}|+|\mathfrak{e}_0|+|(\mathfrak{e}_0)_x|<M.
\end{eqnarray*}
It follows from continuity in time that, for suitably small $T^*$,
\begin{eqnarray*}
|v|+|v_x|+|v_{xx}|+|\mathfrak{e}|+|\mathfrak{e}_x|\leq M,\ in \ I\times (0,T^*]
\end{eqnarray*}
and
\begin{eqnarray}\label{48}
\frac{(\rho_0)_x}{\rho_0}+\frac{\mathfrak{e}_x}{\rho_0}\geq \frac{\lambda_5}{2},\ in \ (0,\lambda_7)\times (0,T^*].
\end{eqnarray}
Take a positive time $T<T^*$ sufficiently small such that $T\leq\frac{1}{2M}$. Then one gets
\begin{eqnarray*}
|\int_0^tv_xds|\leq MT\leq \frac{1}{2},\ in \ I\times (0,T].
\end{eqnarray*}
This implies
\begin{eqnarray*}
\frac{1}{2}\leq\eta_x\leq\frac{3}{2},\ in \ I\times (0,T].
\end{eqnarray*}
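For clarity, we record the elementary identity behind the last implication; it uses only the third equation of \eqref{43} and the initial condition $\eta(x,0)=x$:
\begin{eqnarray*}
\eta_x(x,t)=1+\int_0^tv_x(x,s)\,ds,\qquad so\ that\qquad |\eta_x(x,t)-1|\leq MT\leq\frac{1}{2},\ in \ I\times (0,T].
\end{eqnarray*}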
Thus, \eqref{43} is a well-defined system of integro-differential equations with a degeneracy in the $t$-derivative, since the initial density $\rho_0$ vanishes on the boundary $\partial I$.
Take $T$ smaller still, if necessary, such that $T\leq \frac{\lambda_5}{8M}$. Then \eqref{48} implies
\begin{eqnarray}\label{49}
-(\frac{\rho_0\mathfrak{e}}{\eta_x})_x
=-\frac{\rho_0\mathfrak{e}}{\eta_x}[\frac{(\rho_0)_x}{\rho_0}
+\frac{\mathfrak{e}_x}{\rho_0}-\frac{\eta_{xx}}{\eta_x}]
\leq-\frac{\rho_0\mathfrak{e}}{\eta_x}(\frac{\lambda_5}{2}-\frac{\lambda_5}{4})<0, \ in \ (0,\lambda_7)\times (0,T].
\end{eqnarray}
Thus, it follows from \eqref{47} and \eqref{49} that $v$ satisfies the differential inequality
\begin{eqnarray*}
\rho_0v_t+Lv
\leq0, \ in \ (0,\lambda_7)\times (0,T].
\end{eqnarray*}
Similarly, $v$ also satisfies
\begin{eqnarray*}
\rho_0v_t+Lv
\geq0, \ in \ (\lambda_8,1)\times (0,T].
\end{eqnarray*}
The rest is the same as the proof of Theorem \ref{1006} in Subsection 2.2 and is thus omitted.
\begin{center}
\section{Proof of Theorem \ref{1005}}
\end{center}
\subsection{Reformulation of Theorem \ref{1005}}
Suppose that $\kappa>0$.
Let $(\rho,u,e)\in C^1([0,T];H^m(\mathbb{R}^n)), m>[\frac{n}{2}]+2$, be a solution to the system \eqref{5}-\eqref{7} with the initial density satisfying \eqref{8}. Denote by $X(x_0,t)$ the particle
trajectory starting at $x_0$ when $t=0$, that is,
\begin{equation*}
\left\{ \begin{array}{ll}
\partial_tX(x_0,t)=u(X(x_0,t),t), \\
X(x_0,0)=x_0.
\end{array}
\right.
\end{equation*}
Set
\begin{equation*}
\Omega=\Omega(0)\quad\ and\ \quad \Omega(t)=\{x=X(x_0,t):x_0\in \Omega(0)\}.
\end{equation*}
It follows from the first equation of \eqref{5} that $\mbox{supp}_x\,\rho=\Omega(t)$. Under the assumption that the entropy $S(t,x)$ is finite in the vacuum domain $\Omega(t)^c$, one deduces from the equation of state \eqref{6} that
\begin{equation*}
e(x,t)=0\quad \ for \ x\in \Omega(t)^c.
\end{equation*}
Since $e(\cdot,t)\in H^m(\mathbb{R}^n), m>[\frac{n}{2}]+2$, one gets
\begin{eqnarray}\label{49.2}
e_{x_i}(x,t)=e_{x_ix_j}(x,t)=0\quad \ for \ x\in \Omega(t)^c, \ i,j=1,2,\cdots,n.
\end{eqnarray}
It follows from the third equation of $\eqref{5}$ that
\begin{eqnarray}
\frac{\mu}{2}|\nabla u+\nabla u^T|^2+\lambda(\textrm{div}u)^2=0\quad \ for \ x\in \Omega(t)^c.
\end{eqnarray}
Following the arguments in \cite{Xin}, one can calculate that
\begin{eqnarray}\label{49.4}
\frac{\mu}{2}|\nabla u+\nabla u^T|^2+\lambda(\textrm{div}u)^2
\geq
\left\{ \begin{array}{ll}
(2\mu+n\lambda)\sum_{i=1}^n(u_{x_i})^2+\mu\sum_{i> j}^n(u_{x_i}+u_{x_j})^2, \ if \ \lambda\leq0,\\
2\mu\sum_{i=1}^n(u_{x_i})^2+\mu\sum_{i> j}^n(u_{x_i}+u_{x_j})^2, \ if \ \lambda>0,
\end{array}
\right.
\end{eqnarray}
which, together with \eqref{49.2}, implies
\begin{eqnarray*}
\partial_iu_j+\partial_ju_i=0\quad \ for \ x\in \Omega(t)^c, \ i,j=1,2,\cdots,n.
\end{eqnarray*}
Since $u(\cdot,t)\in H^m(\mathbb{R}^n), m>[\frac{n}{2}]+2$, it holds that
\begin{eqnarray*}
u(x,t)=u_{x_i}(x,t)=u_{x_ix_j}(x,t)=0\quad \ for \ x\in \Omega(t)^c, \ i,j=1,2,\cdots,n.
\end{eqnarray*}
Furthermore, one has $\Omega(t)=\Omega(0)$.
One concludes that
\begin{eqnarray*}
\left\{ \begin{array}{ll}
\Omega(t)=\Omega(0),\\
e(x,t)=e_{x_i}(x,t)=0,
\end{array}
\right.
\end{eqnarray*}
where $t\in(0,T^*)$ and $x\in \Omega(t)^c, \ i=1,2,\cdots,n$.
Therefore, to study the ill-posedness of the system \eqref{5}-\eqref{7} with the initial density satisfying \eqref{8}, one needs only to
study the ill-posedness of the following initial-boundary value problem:
\begin{eqnarray}\label{16}
\left\{ \begin{array}{ll}
\partial_t\rho+\textrm{div}(\rho u)=0,\ in\ \Omega\times (0,T],\\
\partial_t(\rho u)+\textrm{div}(\rho u\otimes u)+\nabla p=\mu\Delta u+(\mu+\lambda)\nabla\textrm{div}u,\ in\ \Omega\times (0,T],\\
\partial_t(\rho e)+\textrm{div}(\rho eu)+ p\textrm{div}u=\frac{\mu}{2}|\nabla u+(\nabla u)^*|^2+\lambda(\textrm{div}u)^2
+\frac{\kappa(\gamma-1) }{R}\Delta e, \ in\ \Omega\times (0,T],\\
(\rho,u,e)=(\rho_0,u_0,e_0), \ on \ \Omega\times \{t=0\},\\
e(x,t)=e_{x_i}(x,t)=0, \ on \ \partial \Omega\times (0,T].
\end{array}
\right.
\end{eqnarray}
The non-existence of a solution to the Cauchy problem \eqref{5}-\eqref{7} in $C^1([0,T];H^m(\mathbb{R}^n))$, $m>[\frac{n}{2}]+2$, will follow from the non-existence of a solution to the initial-boundary value problem \eqref{16} in $C^{2,1}(\bar{\Omega}\times [0,T])$. Thus, in order to prove Theorem \ref{1005}, we need only to show the following theorem:
\begin{Theorem}\label{1004} The initial-boundary value problem \eqref{16} in the case of $\kappa>0$ has no solution $(\rho,u,e)$ in $C^{2,1}(\bar{\Omega}\times [0,T])$ for any positive time $T$.
\end{Theorem}
Let $\eta(x,t)$ denote the position
of the gas particle starting from $x$ at time $t=0$, defined by \eqref{16.5}.
Let $\varrho$, $v$ and $\mathfrak{e}$ be the Lagrangian density, velocity and internal energy, respectively, which are defined by \eqref{42.5}.
We will also use the following notation (see also \cite{CS,CLS,JM1,JM2}):
\begin{eqnarray*}
\left\{ \begin{array}{ll}
J=\det D\eta \quad (Jacobian\ determinant),\\
B=[D\eta]^{-1} \quad (inverse\ of \ the\ deformation\ tensor),\\
b=JB \quad (transpose\ of \ the\ cofactor\ matrix).
\end{array}
\right.
\end{eqnarray*}
We will always use the convention in this section that repeated Latin indices $i,j,k,$ etc., are summed from $1$ to $n$.
Then the system \eqref{5} can be rewritten in the Lagrangian coordinates as
\begin{eqnarray}\label{51}
\begin{cases}
\partial_t\varrho+\varrho B_i^j\partial_jv^i=0, \ in \ \Omega \times (0,T],\\
\varrho \partial_tv^i+(\gamma-1)B_i^j\partial_j(\varrho \mathfrak{e})=\mu B_l^k\partial_k(B_l^j\partial_jv^i)+(\mu+\lambda)B_i^k\partial_k(B_l^j\partial_jv^l),\ in \ \Omega \times (0,T],\\
\varrho \partial_t\mathfrak{e}+(\gamma-1)\varrho \mathfrak{e} B_i^j\partial_jv^i
=\frac{\mu}{2}|B_l^j\partial_jv^i+(B_l^j\partial_jv^i)^*|^2+\lambda(B_i^j\partial_jv^i)^2
+\frac{\kappa(\gamma-1) }{R}B_l^k\partial_k(B_l^j\partial_j\mathfrak{e}),\ in \ \Omega \times (0,T],\\
\eta_t(x,t)=v(x,t),\\
(\varrho,v,\mathfrak{e},\eta)=(\rho_0,u_0,e_0,x), \ on \ \Omega\times \{t=0\},\\
\mathfrak{e}(x,t)=\mathfrak{e}_{x_i}(x,t)=0, \ on \ \partial \Omega\times (0,T].
\end{cases}
\end{eqnarray}
It follows from \eqref{51} that
\begin{eqnarray*}
\varrho(x,t)=\frac{\rho_0(x)}{J(x,t)}.
\end{eqnarray*}
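For completeness, this identity follows from the first equation of \eqref{51} together with the differentiation formula $J_t=JB_i^j\partial_jv^i$ recalled below. Indeed,
\begin{eqnarray*}
\partial_t(\varrho J)=J\partial_t\varrho+\varrho J_t=J\big(\partial_t\varrho+\varrho B_i^j\partial_jv^i\big)=0,\qquad (\varrho J)|_{t=0}=\rho_0,
\end{eqnarray*}
so that $\varrho J\equiv\rho_0$.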
Regarding the initial density $\rho_0$ as a parameter, one can rewrite the system $\eqref{51}$ as
\begin{eqnarray}\label{52}
\left\{ \begin{array}{ll}
\rho_0\partial_tv^i+(\gamma-1)b_i^j\partial_j(J^{-1}\rho_0 \mathfrak{e} )=\mu b_l^k\partial_k(J^{-1}b_l^j\partial_jv^i)
+(\mu+\lambda)b_i^k\partial_k(J^{-1}b_l^j\partial_jv^l), \ in \ \Omega\times(0,T],\\
\rho_0\partial_t\mathfrak{e}+(\gamma-1)J^{-1}\rho_0 \mathfrak{e} b_i^j\partial_jv^i=\frac{\mu}{2}J^{-1}|b_l^j\partial_jv^i+(b_l^j\partial_jv^i)^*|^2+\lambda J^{-1}(b_i^j\partial_jv^i)^2
+\frac{\kappa(\gamma-1) }{R}b_l^k\partial_k(J^{-1}b_l^j\partial_j\mathfrak{e}),\ in \ \Omega \times (0,T],\\
\eta_t(x,t)=v(x,t),\\
(v,\mathfrak{e},\eta)=(u_0,e_0,x), \ on \ \Omega\times \{t=0\},\\
\mathfrak{e}(x,t)=\mathfrak{e}_{x_i}(x,t)=0, \ on \ \partial \Omega\times (0,T].
\end{array}
\right.
\end{eqnarray}
The non-existence of a solution to the initial-boundary value problem \eqref{16} will be a consequence of the non-existence of a solution to the initial-boundary value problem \eqref{52} in $C^{2,1}(\bar{\Omega}\times [0,T])$. Thus, in order to prove Theorem \ref{1004}, we need only to show the following:
\begin{Theorem}\label{1018} The problem \eqref{52} in the case of $\kappa>0$ has no solution $(v, \mathfrak{e},\eta)$ in $C^{2,1}(\bar{\Omega}\times [0,T])$ for any positive time $T$.
\end{Theorem}
\subsection{Proof of Theorem \ref{1018}}
Let $T^*$ be a given, suitably small positive time, and let $(v,\mathfrak{e},\eta)\in C^{2,1}(\bar{\Omega}\times [0,T^*])$ be a solution of the system \eqref{52}.
Let $M$ be a positive constant such that
\begin{eqnarray*}
\rho_0+\sum_{|\alpha|\leq2}|D^\alpha v_0|+\sum_{|\alpha|\leq2}|D^\alpha \mathfrak{e}_0|< M.
\end{eqnarray*}
It follows from continuity in time that, for suitably small $T^*$,
\begin{eqnarray*}
\sum_{|\alpha|\leq2}|D^\alpha v|+\sum_{|\alpha|\leq2}|D^\alpha \mathfrak{e}|\leq M,\ in \ \Omega\times (0,T^*].
\end{eqnarray*}
Due to the third equation of \eqref{52} and the initial condition $\eta(x,0)=x$, it holds that
\begin{eqnarray*}
\partial_j\eta^i(x,t)=\delta_j^i+\int_0^t\partial_jv^i(x,s)d s.
\end{eqnarray*}
Thus, $D\eta$ can be regarded as a small perturbation of the identity matrix, which implies that both $D\eta$ and $B$ are positive definite matrices. Thereby, there exist two positive numbers $\Lambda_1\leq\Lambda_2$ such that
\begin{eqnarray}\label{53}
\Lambda_1|\xi|^2\leq b_k^ib_k^j\xi_j\xi_i\leq\Lambda_2|\xi|^2\quad\ for\ all\ \xi\in \mathbb{R}^n\quad\ and\quad (x,t)\in \Omega\times (0,T^*].
\end{eqnarray}
It follows from the definition of cofactor matrices that
\begin{eqnarray*}
|B_i^j|\leq (1+M T)^{n-1}.
\end{eqnarray*}
Note that (see \cite{MB})
\begin{eqnarray*}
J_t=J\textrm{div}u.
\end{eqnarray*}
The chain rule gives
\begin{eqnarray*}
J_t=JB_i^j\partial_jv^i=b_i^j\partial_jv^i.
\end{eqnarray*}
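For completeness, the preceding identity is Jacobi's formula for the derivative of a determinant, combined with $\eta_t=v$ (the third equation of \eqref{52}); explicitly,
\begin{eqnarray*}
J_t=\partial_t\det D\eta=J\,\mathrm{tr}\big([D\eta]^{-1}\partial_t D\eta\big)=JB_i^j\,\partial_t\partial_j\eta^i=JB_i^j\partial_jv^i=b_i^j\partial_jv^i.
\end{eqnarray*}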
Take a positive time $T<T^*$ sufficiently small such that $T\leq\frac{1}{2^{n+1}M}$. Then one has
\begin{eqnarray*}
|J(x,t)-1|&=&|\int_0^tb_i^j(x,s)\partial_jv^i(x,s)d s|\leq J T|B_i^j||\partial_jv^i|\nonumber\\
&\leq& J M T(1+M T)^{n-1}\leq\frac{J}{4},\ in\ \Omega\times(0,T].
\end{eqnarray*}
This implies
\begin{eqnarray}\label{54}
\frac{1}{2}< J(x,t)<\frac{3}{2}, \ in\ \Omega\times(0,T].
\end{eqnarray}
Direct calculations show (see also \cite{CLS})
\begin{eqnarray*}
\partial_iJ=b_k^j\partial_{ij}\eta^k
\end{eqnarray*}
and
\begin{eqnarray*}
\partial_j{b_i^k}=J^{-1}\partial_{sj}\eta^r(b_r^sb_i^k-b_i^sb_r^k).
\end{eqnarray*}
Therefore, one gets that
\begin{eqnarray}\label{55}
|\partial_iJ|\leq \frac{3}{2}(1+M T)^{n-1}MT
\end{eqnarray}
and
\begin{eqnarray}\label{56}
|\partial_j{b_i^k}|\leq 9 (1+M T)^{2n-2}MT.
\end{eqnarray}
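We briefly indicate how \eqref{55} follows (a sketch, suppressing the implicit summation over repeated indices): from $\eta(x,0)=x$ one has $\partial_{ij}\eta^k=\int_0^t\partial_{ij}v^k\,ds$, so that, by \eqref{54} and the cofactor bound above,
\begin{eqnarray*}
|\partial_iJ|=|b_k^j\partial_{ij}\eta^k|\leq J|B_k^j|\Big|\int_0^t\partial_{ij}v^k\,ds\Big|\leq \frac{3}{2}(1+M T)^{n-1}MT.
\end{eqnarray*}
The bound \eqref{56} is obtained in the same way from the formula for $\partial_j b_i^k$.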
Thus, the system \eqref{52} is a well-defined integro-differential system with a degeneracy in the $t$-derivative, since the initial density $\rho_0$ vanishes on the boundary $\partial \Omega$.
Define the linear parabolic operator $\rho_0\partial_t+L$ by
\begin{eqnarray*}
\rho_0\partial_tw+Lw:&=&\rho_0\partial_tw-\frac{\kappa(\gamma-1) }{R}J^{-1}b_k^ib_k^j\partial_{ij}w\nonumber\\
&&-\frac{\kappa(\gamma-1) }{R}b_k^i\partial_i(J^{-1}b_k^j)\partial_jw
+(\gamma-1)J^{-1}\rho_0 b_i^j\partial_jv^iw.
\end{eqnarray*}
Then, it follows from the second equation of \eqref{52} that
\begin{eqnarray}\label{57}
\rho_0\partial_t\mathfrak{e}+L\mathfrak{e}=\frac{\mu}{2}J^{-1}|b_l^j\partial_jv^i+(b_l^j\partial_jv^i)^*|^2+\lambda J^{-1}(b_i^j\partial_jv^i)^2.
\end{eqnarray}
In the rest of this section, our main task is to establish Hopf's lemma and a strong minimum principle for solutions of the following differential inequality:
\begin{eqnarray}\label{58}
\rho_0\partial_tw+Lw
\geq0, \ in \ \Omega\times (0,T].
\end{eqnarray}
It follows from \eqref{57} and \eqref{49.4} that $\mathfrak{e}$ also satisfies \eqref{58}.
We first derive a weak minimum principle for the
differential inequality \eqref{58}.
\begin{Lemma}\label{1019}
Suppose that $w\in C^{2,1}(Q_T)\cap C(\bar{Q}_T)$ satisfies \eqref{58}.
If $w\geq0\,(>0)$ on $\partial_p Q_T$,
then $w\geq0\,(>0)$ in $Q_T$.
\end{Lemma}
\textbf{Proof.} Set
\begin{eqnarray*}
d=(\gamma-1)\max_{\bar{\Omega}\times[0,T]}|\frac{J_t}{J}|
\end{eqnarray*}
and
\begin{eqnarray*}
\varphi=\exp(dt)w.
\end{eqnarray*}
Define a new linear parabolic operator by
\begin{eqnarray*}
\rho_0\partial_t\varphi+\tilde{L}\varphi:=\rho_0\partial_t\varphi+L\varphi
-d\rho_0\varphi.
\end{eqnarray*}
A direct calculation shows that
\begin{eqnarray*}
\rho_0\partial_t\varphi+\tilde{L}\varphi=\exp(dt)(\rho_0\partial_tw+Lw)\geq0,\ in\ Q_T.
\end{eqnarray*}
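For the reader's convenience, this identity can be verified directly; it uses only that $L$ is linear and contains no $t$-derivatives:
\begin{eqnarray*}
\rho_0\partial_t(e^{dt}w)+L(e^{dt}w)-d\rho_0 e^{dt}w
=e^{dt}\big(\rho_0 w_t+d\rho_0 w+Lw-d\rho_0 w\big)
=e^{dt}(\rho_0\partial_t w+Lw).
\end{eqnarray*}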
We first prove the statement under the following hypothesis, which is stronger than \eqref{58}:
\begin{eqnarray}\label{59}
\rho_0\partial_t\varphi+\tilde{L}\varphi>0,\ in\ Q_T.
\end{eqnarray}
Assume that $\varphi$ attains its non-negative minimum at an interior point $(x_0,t_0)$ of the domain $Q_T$. Then
\begin{eqnarray*}
\partial_t\varphi(x_0,t_0)\leq0, \quad \partial_j\varphi(x_0,t_0)=0, \quad b_k^ib_k^j\partial_{ij}\varphi(x_0,t_0)\geq0,
\end{eqnarray*}
which implies $\rho_0\partial_t\varphi+\tilde{L}\varphi\leq0$ at $(x_0,t_0)$; this contradicts \eqref{59}. Next, choose the auxiliary function
\begin{eqnarray*}
\psi^\varepsilon=\varphi+\varepsilon t,
\end{eqnarray*}
for a positive number $\varepsilon$. One calculates
\begin{eqnarray*}
\rho_0\partial_t\psi^\varepsilon+\tilde{L}\psi^\varepsilon&=&\rho_0\varphi_t+\tilde{L}\varphi+\varepsilon \rho_0>0,\ in\ Q_T.
\end{eqnarray*}
Thus $\psi^\varepsilon$ attains its non-negative minimum on $\partial_p Q_T$, which implies that $\varphi$ also attains its non-negative minimum on $\partial_p Q_T$ by letting $\varepsilon$ go to zero.
Since $w\geq0\,(>0)$ on $\partial_p Q_T$, it follows that
$\varphi\geq0\,(>0)$ on $\partial_p Q_T$ by the definition of $\varphi$, and furthermore $\varphi\geq0\,(>0)$ in $Q_T$. Therefore, $w\geq0\,(>0)$ in $Q_T$.\qed
The result in Lemma \ref{1019} can also be extended to a general domain $D\subset \Omega\times (0,T]$.
\begin{Lemma}\label{1020}
Suppose that $w\in C^{2,1}(D)\cap C(\bar{D})$ satisfies \eqref{58}.
If $w\geq0\,(>0)$ on $\partial_p D$,
then $w\geq0\,(>0)$ in $D$.
\end{Lemma}
Next, we establish Hopf's lemma for the
differential inequality \eqref{58}, which is critical for proving Theorem \ref{1018}.
\begin{Proposition}\label{1021}
Suppose that $w\in C^{2,1}(\Omega\times (0,T])\cap C(\bar{\Omega}\times [0,T])$ satisfies \eqref{58}
and there exists a point $(x_0,t_0)\in \partial\Omega\times (0,T]$ such that $w(x,t)>w(x_0,t_0)$ for any point $(x,t)$ in $D$, where
\begin{eqnarray*}
D:=\{(x,t): |x-\tilde{x}|^2+(t_0-t)< r^2,\ 0< |x-x_0|<\frac{r}{2},\ 0<t\leq t_0\}
\end{eqnarray*}
with $|x_0-\tilde{x}|=r$ and $(x_0-\tilde{x})\perp\partial\Omega$ at $x_0$.
Then it holds that
\begin{eqnarray*}
\frac{\partial w(x_0,t_0)}{\partial \vec{n}}<0,
\end{eqnarray*}
where $\vec{n}=\frac{x_0-\tilde{x}}{|x_0-\tilde{x}|}$.
\end{Proposition}
\textbf{Proof.}
For positive constants $\alpha$ and $\varepsilon$ to be determined, set
\begin{eqnarray*}
q(\alpha,x,t)=-e^{-\alpha[|x-\tilde{x}|^2+(t_0-t)]}+e^{-\alpha r^2}
\end{eqnarray*}
and
\begin{eqnarray*}
\varphi(\varepsilon,\alpha,x,t)=w(x,t)-w(x_0,t_0)+\varepsilon q(\alpha,x,t).
\end{eqnarray*}
First, we determine $\varepsilon$.
The parabolic boundary $\partial_p D$ consists of two parts $\Sigma_1$ and $\Sigma_2$ given by
\begin{gather*}
\Sigma_1=\{(x,t):|x-\tilde{x}|^2+(t_0-t)<r^2,\ |x-x_0|=\frac{r}{2},\ 0<t\leq t_0\}
\end{gather*}
and
\begin{gather*}
\Sigma_2=\{(x,t):|x-\tilde{x}|^2+(t_0-t)=r^2,\ 0\leq |x-x_0|\leq\frac{r}{2},\ 0<t\leq t_0\}.
\end{gather*}
On $\bar{\Sigma}_1$, $w(x,t)-w(x_0,t_0)>0$, and hence $w(x,t)-w(x_0,t_0)>\varepsilon_0$ for some $\varepsilon_0>0$.
Note that $q\geq-1$ on $\Sigma_1$. Then for such an $\varepsilon_0$, $\varphi(\varepsilon_0,\alpha,x,t)>0$ on $\Sigma_1$. For $(x,t)\in \Sigma_2$, $q=0$ and $w(x,t)-w(x_0,t_0)\geq0$. Thus, $\varphi(\varepsilon_0,\alpha,x,t)\geq0$
for any $(x,t)\in \Sigma_2$, and $\varphi(\varepsilon_0,\alpha,x_0,t_0)=0$. One concludes that
\begin{eqnarray}\label{60}
\left\{ \begin{array}{ll}
\varphi(\varepsilon_0,\alpha,x,t)\geq0,\ on\ \partial_p D,\\
\varphi(\varepsilon_0,\alpha,x_0,t_0)=0.
\end{array}
\right.
\end{eqnarray}
Next, we choose $\alpha$. In view of \eqref{58}, one has
\begin{eqnarray}\label{61}
&&\rho_0\partial_t\varphi(\varepsilon_0,\alpha,x,t)+L\varphi(\varepsilon_0,\alpha,x,t)\nonumber\\
&=&\rho_0\partial_tw(x,t)+Lw(x,t)
+\varepsilon_0[\rho_0\partial_tq(\alpha,x,t)+L q(\alpha,x,t)]\nonumber\\
&\geq&\varepsilon_0[\rho_0\partial_tq(\alpha,x,t)+L q(\alpha,x,t)].
\end{eqnarray}
A direct calculation yields
\begin{eqnarray}\label{62}
&&e^{\alpha[|x-\tilde{x}|^2+(t_0-t)]}[\rho_0\partial_tq(\alpha,x,t)+L q(\alpha,x,t)]\nonumber\\
&=&\frac{4\kappa(\gamma-1) }{R}J^{-1}b_k^ib_k^j(x_i-\tilde{x}_i)(x_j-\tilde{x}_j)\alpha^2
-[\rho_0+\frac{2\kappa(\gamma-1) }{R}J^{-1}b_k^ib_k^j\delta_{ij}\nonumber\\
&&+\frac{2\kappa(\gamma-1) }{R}b_k^i\partial_i(J^{-1}b_k^j)(x_j-\tilde{x}_j)]\alpha
-(\gamma-1)J^{-1}\rho_0 b_i^j\partial_jv^i\nonumber\\
&&\times(1-e^{\alpha[|x-\tilde{x}|^2+(t_0-t)- r^2]}).
\end{eqnarray}
It follows from \eqref{53} and \eqref{54} that
\begin{eqnarray}\label{63}
&&\frac{4\kappa(\gamma-1) }{R}J^{-1}b_k^ib_k^j(x_i-\tilde{x}_i)(x_j-\tilde{x}_j)\nonumber\\
&\geq&\frac{8\kappa(\gamma-1)\Lambda_1 }{R}(|x_0-\tilde{x}|-|x-x_0|)^2
\geq\frac{2\kappa(\gamma-1)r^2\Lambda_1}{R}.
\end{eqnarray}
The other terms on the right-hand side of \eqref{62} can be estimated by means of \eqref{53}-\eqref{56} as follows:
\begin{align}\label{64}
|\frac{2\kappa(\gamma-1) }{R}J^{-1}b_k^ib_k^j\delta_{ij}|\leq& \frac{4\kappa(\gamma-1)\Lambda_2 }{R},
\\
|\frac{2\kappa(\gamma-1) }{R}b_k^i\partial_i(J^{-1}b_k^j)(x_j-\tilde{x}_j)|
\leq& \frac{81\kappa(\gamma-1)r }{R}(1+M T)^{3n-3}M T\nonumber\\
\leq& \frac{81\cdot2^{2n-4}\kappa(\gamma-1)r }{R}, \label{65}
\\
|(\gamma-1)J^{-1}\rho_0 b_i^j\partial_jv^i(1-e^{\alpha[|x-\tilde{x}|^2+(t_0-t)- r^2]})|
\leq& 3(\gamma-1)M^2 (1+M T)^{n-1}\nonumber\\
\leq& 3\cdot2^{n-1}(\gamma-1)M^2.\label{66}
\end{align}
Finally, one gets
\begin{eqnarray*}
&&e^{\alpha[|x-\tilde{x}|^2+(t_0-t)]}[\rho_0\partial_tq(\alpha,x,t)+L q(\alpha,x,t)]\nonumber\\
&\geq& \frac{2\kappa(\gamma-1)r^2\Lambda_1 }{R}\alpha^2-(M+\frac{4\kappa(\gamma-1)\Lambda_2 }{R}+\frac{81\cdot2^{2n-4}\kappa(\gamma-1)r }{R})\alpha- 3\cdot2^{n-1}(\gamma-1)M^2.
\end{eqnarray*}
Thereby, there exists a positive number $\alpha_0=\alpha_0(\kappa,\gamma,r,R,M,\Lambda_1,\Lambda_2)$ such that
\begin{eqnarray}\label{66.5}
\rho_0\partial_tq(\alpha_0,x,t)+L q(\alpha_0,x,t)\geq0,\ in \ D.
\end{eqnarray}
In conclusion, in view of \eqref{60}, \eqref{61} and \eqref{66.5}, one has
\begin{eqnarray}\label{67}
\left\{ \begin{array}{ll}
\rho_0\partial_t\varphi(\varepsilon_0,\alpha_0,x,t)+L\varphi(\varepsilon_0,\alpha_0,x,t)\geq 0,\ in \ D,\\
\varphi(\varepsilon_0,\alpha_0,x,t)\geq0, \ on \ \partial_pD,\\
\varphi(\varepsilon_0,\alpha_0,x_0,t_0)=0.
\end{array}
\right.
\end{eqnarray}
Lemma \ref{1020}, together with \eqref{67}, shows that
\begin{eqnarray*}
\varphi(\varepsilon_0,\alpha_0,x,t)\geq0, \ in\ D.
\end{eqnarray*}
Therefore, $\varphi(\varepsilon_0,\alpha_0,\cdot,\cdot)$ attains its minimum over $\bar{D}$ at the point $(x_0,t_0)$. In particular, it holds that
\begin{eqnarray*}
\varphi(\varepsilon_0,\alpha_0,x,t_0)\geq \varphi(\varepsilon_0,\alpha_0,x_0,t_0)\quad \ for\ all\ x\in \{x:|x-x_0|\leq\frac{r}{2}\}.
\end{eqnarray*}
This implies
\begin{eqnarray*}
\frac{\partial \varphi(\varepsilon_0,\alpha_0,x_0,t_0)}{\partial \vec{n}}\leq0.
\end{eqnarray*}
Finally, one obtains
\begin{eqnarray*}
\frac{\partial w(x_0,t_0)}{\partial \vec{n}}\leq-\varepsilon_0 \frac{\partial q(\alpha_0,x_0,t_0)}{\partial \vec{n}}
=-2\varepsilon_0\alpha_0 r e^{-\alpha_0 r^2}<0.
\end{eqnarray*}
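For the reader's convenience, the value of $\frac{\partial q}{\partial \vec{n}}$ used in the last step is a routine computation from the definition of $q$: at $(x_0,t_0)$ one has $|x_0-\tilde{x}|=r$ and $t=t_0$, so
\begin{eqnarray*}
\nabla_x q(\alpha_0,x,t)\big|_{(x_0,t_0)}=2\alpha_0(x_0-\tilde{x})e^{-\alpha_0 r^2},\qquad
\frac{\partial q(\alpha_0,x_0,t_0)}{\partial \vec{n}}=2\alpha_0 r e^{-\alpha_0 r^2}>0.
\end{eqnarray*}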
\qed
In order to establish a strong minimum principle for the
differential inequality \eqref{58}, we first study the behavior of $w$ near an
interior minimum point.
\begin{Lemma}\label{1022}
Let $w\in C^{2,1}(\Omega\times (0,T])\cap C(\bar{\Omega}\times [0,T])$ satisfy \eqref{58} and have a minimum $M_0$ in the domain $\Omega\times (0,T]$. Suppose that $\Omega\times (0,T]$ contains a closed solid ellipsoid
\begin{eqnarray*}
\Omega^\sigma:=\{(x,t): |x-x_*|^2+\sigma(t-t_*)^2\leq r^2\},\quad\sigma>0,
\end{eqnarray*}
and that $w(x,t)>M_0$ for any interior point $(x,t)$ of $\Omega^\sigma$ and $w(\bar{x},\bar{t})=M_0$ at some point $(\bar{x},\bar{t})$ on the boundary of $\Omega^\sigma$. Then $\bar{x}=x_*$.
\end{Lemma}
\textbf{Proof.} One can assume that $(\bar{x},\bar{t})$ is the only point on $\partial \Omega^\sigma$ at which $w=M_0$; otherwise, one can pass to a smaller closed ellipsoid contained in $\Omega^\sigma$ that has $(\bar{x},\bar{t})$ as its only common point with $\partial \Omega^\sigma$. We prove the desired result by contradiction.
Suppose that $\bar{x}\neq x_*$. Choose a closed ball $D$ with center $(\bar{x},\bar{t})$ and radius $\tilde{r}<|\bar{x}-x_*|$ contained in $\Omega\times (0,T]$. Then, one has
\begin{eqnarray}\label{68}
|x-x_*|\geq|\bar{x}-x_*|-\tilde{r}=:\hat{r}\quad \ for\ (x,t)\in D.
\end{eqnarray}
The parabolic boundary $\partial_pD=\partial D$ of $D$ consists of a part $\Sigma_1$ lying in $\Omega^\sigma$ and a part $\Sigma_2$ lying outside $\Omega^\sigma$.
For positive constants $\alpha$ and $\varepsilon$ to be determined, set
\begin{eqnarray*}
q(\alpha,x,t)=-e^{-\alpha[|x-x_*|^2+\sigma(t-t_*)^2]}+e^{-\alpha r^2}
\end{eqnarray*}
and
\begin{eqnarray*}
\varphi(\varepsilon,\alpha,x,t)=w(x,t)-M_0+\varepsilon q(\alpha,x,t).
\end{eqnarray*}
We first determine the value of $\varepsilon$. Note that $q(\alpha,x,t)<0$ in the interior of $\Omega^\sigma$, $q(\alpha,x,t)=0$ on $\partial \Omega^\sigma$ and $q(\alpha,x,t)>0$ outside $\Omega^\sigma$. In particular, it holds that $\varphi(\varepsilon,\alpha,\bar{x},\bar{t})=0$.
On $\Sigma_1$, $w(x,t)-M_0>0$, and hence $w(x,t)-M_0>\varepsilon_0$ for some $\varepsilon_0>0$.
Note that $q(\alpha,x,t)\geq-1$ on $\Sigma_1$. Then for such an $\varepsilon_0$, $\varphi(\varepsilon_0,\alpha,x,t)>0$ on $\Sigma_1$. For $(x,t)\in \Sigma_2$, we have $q(\alpha,x,t)>0$ and $w(x,t)-M_0\geq0$. Thus, $\varphi(\varepsilon_0,\alpha,x,t)>0$
for any $(x,t)\in \Sigma_2$. One concludes that
\begin{eqnarray}\label{69}
\left\{ \begin{array}{ll}
\varphi(\varepsilon_0,\alpha,x,t)>0,\ on\ \partial_pD,\\
\varphi(\varepsilon_0,\alpha,\bar{x},\bar{t})=0.
\end{array}
\right.
\end{eqnarray}
Next, we choose $\alpha$.
We need to estimate $\rho_0q_t(\alpha,x,t)+L q(\alpha,x,t)$, in view of \eqref{61}.
One calculates
\begin{eqnarray*}
&&e^{\alpha[|x-x_*|^2+\sigma(t-t_*)^2]}[\rho_0\partial_tq(\alpha,x,t)+L q(\alpha,x,t)]\nonumber\\
&=&\frac{4\kappa(\gamma-1) }{R}J^{-1}b_k^ib_k^j(x_i-(x_*)_i)(x_j-(x_*)_j)\alpha^2
-[2\sigma\rho_0(t-t_*)\nonumber\\
&&+\frac{2\kappa(\gamma-1) }{R}J^{-1}b_k^ib_k^j\delta_{ij}+\frac{2\kappa(\gamma-1) }{R}b_k^i\partial_i(J^{-1}b_k^j)(x_j-(x_*)_j)]\alpha\nonumber\\
&&-(\gamma-1)J^{-1}\rho_0 b_i^j\partial_jv^i(1-e^{\alpha[|x-x_*|^2+\sigma(t-t_*)^2- r^2]}).
\end{eqnarray*}
Similarly to \eqref{63}-\eqref{66}, there exists a positive number $\alpha_0=\alpha_0(\kappa,\gamma,\sigma,r,\hat{r},R,M,\Lambda_1,\Lambda_2)$ such that
\begin{eqnarray}\label{70}
\rho_0\partial_tq(\alpha_0,x,t)+L q(\alpha_0,x,t)\geq0,\ in \ D.
\end{eqnarray}
In conclusion, it follows from \eqref{61} and \eqref{70} that
\begin{eqnarray}\label{71}
\left\{ \begin{array}{ll}
\rho_0\partial_t\varphi(\varepsilon_0,\alpha_0,x,t)+L\varphi(\varepsilon_0,\alpha_0,x,t)\geq 0,\ in \ D,\\
\varphi(\varepsilon_0,\alpha_0,x,t)>0, \ on \ \partial_pD,\\
\varphi(\varepsilon_0,\alpha_0,\bar{x},\bar{t})=0.
\end{array}
\right.
\end{eqnarray}
Then Lemma \ref{1020} and \eqref{71} imply that
\begin{eqnarray*}
\varphi(\varepsilon_0,\alpha_0,x,t)>0, \ in\ D,
\end{eqnarray*}
which contradicts $\varphi(\varepsilon_0,\alpha_0,\bar{x},\bar{t})=0$, since $(\bar{x},\bar{t})\in D$.\qed
Based on Lemma \ref{1022}, it is standard to prove the following lemma. For details, one can refer to Lemma 3 of Chapter 2 in \cite{Friedman}.
\begin{Lemma}\label{1023}
Suppose that $w\in C^{2,1}(\Omega\times (0,T])\cap C(\bar{\Omega}\times [0,T])$ satisfies \eqref{58}.
If $w$ attains its minimum at an interior point $P_0=(x_0,t_0)$ of $\Omega\times (0,T]$, then
$w(P)=w(P_0)$ for any point $P=(x,t_0)$ with $x\in\Omega$.
\end{Lemma}
Next, we prove a local strong minimum principle in a rectangle $\mathcal{R}$ of the domain $\Omega\times (0,T]$.
\begin{Lemma}\label{1024}
Suppose that $w\in C^{2,1}(\Omega\times (0,T])\cap C(\bar{\Omega}\times [0,T])$ satisfies \eqref{58}.
If $w$ attains its minimum at an interior point $P_0=(x_0,t_0)$ of $\Omega\times (0,T]$, then there exists a rectangle
\begin{eqnarray*}
\mathcal{R}(P_0):=\{(x,t):(x_0)_i-c_i\leq x_i\leq (x_0)_i+c_i,\ t_0-c_0\leq t\leq t_0,\ i=1,2,\cdots,n\}
\end{eqnarray*}
in $\Omega\times (0,T]$ such that
$w(P)=w(P_0)$ for any point $P$ of $\mathcal{R}(P_0)$.
\end{Lemma}
\textrm{\textbf{Proof.}} We prove the desired result by contradiction. Suppose that there exists an interior point $P_1=(x_1,t_1)$ of $\Omega\times (0,T]$ with $t_1< t_0$ such that $w(P_1)>w(P_0)$.
Connect $P_1$ to $P_0$ by a simple smooth curve $\gamma$. Then there exists a point $P_*=(x_*,t_*)$ on $\gamma$ such that $w(P_*)=w(P_0)$ and $w(\bar{P})>w(P_*)$ for any point $\bar{P}$ of $\gamma$ lying between $P_1$ and $P_*$. We may assume that $P_*=P_0$ and that $P_1$ is very close to $P_0$. There exists a rectangle $\mathcal{R}(P_0)$ in $\Omega\times (0,T]$, with small positive numbers $c_0$ and $c_i$ (to be determined), such that
$P_1$ lies on $\{t=t_0-c_0\}$.
Since $(\mathcal{R}(P_0)\setminus \{t=t_0\})\cap\{t=\bar{t}\}$ contains some point $\bar{P}=(\bar{x},\bar{t})$ of $\gamma$ with $w(\bar{P})>w(P_0)$, we deduce from Lemma \ref{1023} that $w(P)>w(P_0)$ for each point $P$ in $(\mathcal{R}(P_0)\setminus \{t=t_0\})\cap\{t=\bar{t}\}$. Therefore, $w(P)>w(P_0)$ for each point $P$ in $\mathcal{R}(P_0)\setminus \{t=t_0\}$.
For positive constants $\alpha$ and $\varepsilon$ to be determined, set
\begin{eqnarray*}
q(\alpha,x,t)=-t_0+t+\alpha|x-x_0|^2
\end{eqnarray*}
and
\begin{eqnarray*}
\varphi(\varepsilon,\alpha,x,t)=w(x,t)-w(P_0)+\varepsilon q(\alpha,x,t).
\end{eqnarray*}
Assume further that $P=(x_0-c,t_0-c_0)$ lies on the parabola $q(\alpha,x,t)=0$; then
\begin{equation}\label{71.2}
\alpha=\frac{c_0}{|c|^2},
\end{equation}
where $|c|=(\sum_{i=1}^n|c_i|^2)^{\frac{1}{2}}$.
A direct calculation shows that
\begin{eqnarray}\label{71.4}
&&\rho_0\partial_tq(\alpha,x,t)+L q(\alpha,x,t)\nonumber\\
&=&-\alpha[\frac{2\kappa(\gamma-1) }{R}J^{-1}b_k^ib_k^j\delta_{ij}
+\frac{2\kappa(\gamma-1) }{R}b_k^i\partial_i(J^{-1}b_k^j)(x_j-(x_0)_j)\nonumber\\
&&-(\gamma-1)J^{-1}\rho_0 b_i^j\partial_jv^i|x-x_0|^2]+\rho_0[1+(\gamma-1)J^{-1} b_i^j\partial_jv^i(-t_0+t)].
\end{eqnarray}
The first three terms on the right-hand side of \eqref{71.4} can be estimated similarly to \eqref{63}-\eqref{65}. For the last term, one has
\begin{gather*}
|(\gamma-1)J^{-1} b_i^j\partial_jv^i(-t_0+t)|
\leq 6(\gamma-1)(1+M T)^{n-1}MT\leq \frac{3}{2}(\gamma-1).
\end{gather*}
Consequently, one gets
\begin{eqnarray}\label{72}
&&\rho_0\partial_tq(\alpha,x,t)+L q(\alpha,x,t)\nonumber\\
&\geq&-\alpha[\frac{\kappa(\gamma-1) }{R}(4\Lambda_2+81\cdot2^{2n-4}|c|)+3\cdot2^{n-1}(\gamma-1)M^2|c|^2]+\frac{3\gamma-1}{2}\rho_0.
\end{eqnarray}
Since $\rho_0$ has a positive lower bound on $\mathcal{R}(P_0)$ (depending on $x_0$ and $c$), one can choose $\alpha_0$ such that
\begin{eqnarray}\label{73}
\alpha_0 <\frac{(3\gamma-1)R\rho_0}{\kappa(\gamma-1)(8\Lambda_2+81\cdot2^{2n-3}|c|) +3\cdot2^n(\gamma-1)RM^2|c|^2},
\end{eqnarray}
and then it follows from \eqref{71.4}-\eqref{73} that
\begin{eqnarray}\label{74}
\rho_0\partial_t\varphi(\varepsilon,\alpha_0,x,t)+L \varphi(\varepsilon,\alpha_0,x,t)
\geq0, \ in\ \mathcal{R}(P_0),\ for\ every\ \varepsilon>0.
\end{eqnarray}
Next, for the fixed $c_0$, one can choose $c$ such that $\mathcal{R}(P_0)\subset\Omega\times (0,T]$; then, in view of \eqref{71.2} and \eqref{74}, $c_0$ can be chosen such that
\begin{equation*}
c_0<\min\{t_0,\frac{(3\gamma-1)|c|^2R\rho_0}{\kappa(\gamma-1)(16\Lambda_2+81\cdot2^{2n-2}|c|) +3\cdot2^{n+1}(\gamma-1)RM^2|c|^2}\}.
\end{equation*}
Denote $\mathcal{S}=\{(x,t)\in \mathcal{R}(P_0): q(\alpha_0,x,t)\leq0\}$. The parabolic boundary $\partial_p\mathcal{S}$ of $\mathcal{S}$ consists of a part $\Sigma_1$ lying in $\mathcal{R}(P_0)$ and a part $\Sigma_2$ lying on $\mathcal{R}(P_0)\cap \{t=t_0-c_0\}$.
Finally, one can choose $\varepsilon$.
On $\Sigma_2$, $w(x,t)-w(P_0)>0$. Since $q(\alpha_0,x,t)$ is bounded on $\Sigma_2$, one can choose $\varepsilon_0$ suitably small such that $\varphi(\varepsilon_0,\alpha_0,x,t)>0$ on $\Sigma_2$.
On $\Sigma_1\setminus \{P_0\}$, $q(\alpha_0,x,t)=0$ and $w(x,t)-w(P_0)>0$. Thus, $\varphi(\varepsilon_0,\alpha_0,x,t)>0$ on $\Sigma_1\setminus \{P_0\}$ and
$\varphi(\varepsilon_0,\alpha_0,x_0,t_0)=0$. One concludes that
\begin{eqnarray}\label{75}
\left\{ \begin{array}{ll}
\varphi(\varepsilon_0,\alpha_0,x,t)>0,\ on\ \partial_p\mathcal{S}\setminus \{P_0\},\\
\varphi(\varepsilon_0,\alpha_0,x_0,t_0)=0.
\end{array}
\right.
\end{eqnarray}
In conclusion, it follows from \eqref{74} and \eqref{75} that
\begin{eqnarray}\label{76}
\left\{ \begin{array}{ll}
\rho_0\partial_t\varphi(\varepsilon_0,\alpha_0,x,t)+L\varphi(\varepsilon_0,\alpha_0,x,t)\geq 0,\ in\ \mathcal{S},\\
\varphi(\varepsilon_0,\alpha_0,x,t)>0,\ on\ \partial_p\mathcal{S}\setminus \{P_0\},\\
\varphi(\varepsilon_0,\alpha_0,x_0,t_0)=0.
\end{array}
\right.
\end{eqnarray}
In view of Lemma \ref{1020} and \eqref{76}, $\varphi(\varepsilon_0,\alpha_0,\cdot,\cdot)$ attains its minimum over $\mathcal{\bar{S}}$ at $P_0$, and thus
\begin{eqnarray*}
\frac{\partial \varphi(\varepsilon_0,\alpha_0,x_0,t_0)}{\partial t}\leq0.
\end{eqnarray*}
Note that $q$ satisfies, at $P_0$,
\begin{eqnarray*}
\frac{\partial q(\alpha_0,x_0,t_0)}{\partial t}=1.
\end{eqnarray*}
Therefore
\begin{eqnarray}\label{77}
\frac{\partial w(x_0,t_0)}{\partial t}\leq-\varepsilon_0.
\end{eqnarray}
On the other hand, since $w$ attains its minimum at $P_0$ by assumption, it follows that
\begin{eqnarray*}
\rho_0\frac{\partial w(x_0,t_0)}{\partial t}\geq -L w(x_0,t_0)\geq0,
\end{eqnarray*}
which contradicts \eqref{77}.\qed
Now the following global strong minimum principle can be proved in the same way as Proposition \ref{1013}.
\begin{Proposition}\label{1025}
Suppose that $w\in C^{2,1}(\Omega\times (0,T])\cap C(\bar{\Omega}\times [0,T])$ satisfies \eqref{58}.
If $w$ attains its minimum at some interior point $P_0=(x_0,t_0)$ of $\Omega\times (0,T]$, then
$w(P)=w(P_0)$ for any point $P$ of $\Omega\times (0,t_0]$.
\end{Proposition}
We are now ready to prove Theorem \ref{1018}.\\
\textrm{\textbf{Proof of Theorem \ref{1018}.}}
Recall that $\mathfrak{e}$ satisfies \eqref{58}, so the weak minimum principle, Hopf's lemma and the strong minimum principle established above hold for $\mathfrak{e}$.
Since $\mathfrak{e}_0\geq0$ and $\mathfrak{e}_0\not\equiv0$ in $\Omega$, and $\mathfrak{e}=0$ on $\partial\Omega\times (0,T]$ due to \eqref{52}, it follows from Proposition \ref{1025} that $\mathfrak{e}>0$ in $\Omega\times(0,T]$. Taking any point $(x_0,t_0)$ of $\partial\Omega\times(0,T]$ and applying Proposition \ref{1021}, we obtain $\frac{\partial \mathfrak{e}(x_0,t_0)}{\partial \vec{n}}<0$, which contradicts $\mathfrak{e}_{x_i}(x_0,t_0)=0$ on $\partial\Omega\times (0,T]$ due to \eqref{52}. \qed
\vskip 5mm
\noindent \textrm{\textbf{Acknowledgements:}} The research of Li was supported partially by the National Natural Science Foundation of China (Nos. 11231006, 11225102, 11461161007 and 11671384), and the Importation and Development of High Caliber Talents Project of Beijing Municipal Institutions (No. CIT\&TCD20140323).
The research of Wang was supported by grant nos. 231668 and 250070 from the Research Council of Norway.
The research of Xin was supported partially by the Zheng Ge Ru Foundation, Hong Kong RGC Earmarked Research grants CUHK-14305315 and CUHK-4048/13P, NSFC/RGC Joint Research Scheme N-CUHK443/14, and Focused Innovations Scheme from The Chinese University of Hong Kong.
\begin{thebibliography}{000}
\bibitem{A}
S. N. Antontsev, A. V. Kazhikhov, V. N. Monakhov, Boundary value problems in mechanics of nonhomogeneous fluids, North-Holland Publishing Co., Amsterdam, 1990.
\bibitem{CCK}
Y. Cho, H. J. Choe, H. Kim, Unique solvability of the initial boundary value problems for compressible viscous fluids. J. Math. Pures Appl. 83 (2004), 243-275.
\bibitem{CK1}
H. J. Choe, H. Kim, Strong solutions of the Navier-Stokes equations for isentropic compressible fluids. J. Differential Equations 190 (2003), 504-523.
\bibitem{CK2}
Y. Cho, H. Kim, On classical solutions of the compressible Navier-Stokes equations with nonnegative initial densities. Manuscripta Math. 120 (2006), 91-129.
\bibitem{CK3}
Y. Cho, H. Kim, Existence results for viscous polytropic fluids with vacuum. J. Differential Equations 228 (2006), no. 2, 377-411.
\bibitem{CJ}
Y. Cho, B. Jin, Blow-up of viscous heat-conducting compressible flows. J. Math. Anal. Appl. 320 (2006), no. 2, 819-826.
\bibitem{CLS}
D. Coutand, H. Lindblad, S. Shkoller, A priori estimates for the free-boundary 3D compressible Euler equations in physical vacuum. Comm. Math. Phys. 296 (2010), no. 2, 559-587.
\bibitem{CS}
D. Coutand, S. Shkoller, Well-posedness in smooth function spaces for the moving-boundary three-dimensional compressible Euler equations in physical vacuum. Arch. Ration. Mech. Anal. 206 (2012), no. 2, 515-616.
\bibitem{D}
R. Danchin, Global existence in critical spaces for compressible Navier-Stokes equations. Invent. Math. 141 (2000), 579-614.
\bibitem{F}
E. Feireisl, A. Novotny, H. Petzeltov\'{a}, On the existence of globally defined weak solutions to the Navier-Stokes equations. J. Math. Fluid Mech. 3 (2001), no. 4, 358-392.
\bibitem{Friedman}
A. Friedman, Partial differential equations of parabolic type. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1964.
\bibitem{H}
Q. Han, A basic course in partial differential equations. Graduate Studies in Mathematics, 120. American Mathematical Society, Providence, RI, 2011.
\bibitem{HLX}
X. D. Huang, J. Li, Z. Xin, Global well-posedness of classical solutions with large oscillations and vacuum to the three-dimensional isentropic compressible Navier-Stokes equations. Comm. Pure Appl. Math. 65 (2012), no. 4, 549-585.
\bibitem{HL}
X. D. Huang, J. Li, Global classical and weak solutions to the three-dimensional full compressible Navier-Stokes system with vacuum and large oscillations. http://arxiv.org/abs/1107.4655v3 [math-ph], 2011.
\bibitem{H1}
D. Hoff, Global existence for 1D, compressible, isentropic Navier-Stokes equations with large initial data. Trans. Amer. Math. Soc. 303 (1987), no. 1, 169-181.
\bibitem{H2}
D. Hoff, Strong convergence to global solutions for multidimensional flows of compressible, viscous fluids with polytropic equations of state and discontinuous initial data. Arch. Rat. Mech. Anal. 132 (1995), 1-14.
\bibitem{H3}
D. Hoff, D. Serre, The failure of continuous dependence on initial data for the Navier-Stokes equations of compressible flow. SIAM J. Appl. Math. 51 (1991), no. 4, 887-898.
\bibitem{H4}
D. Hoff, J. Smoller, Non-formation of vacuum states for compressible Navier-Stokes equations. Commun. Math. Phys. 216 (2001), no. 1, 255-276.
\bibitem{JM1}
J. Jang, N. Masmoudi, Well and ill-posedness for compressible Euler equations with vacuum. J. Math. Phys. 53 (2012), no. 11, 115625, 11 pp.
\bibitem{JM2}
J. Jang, N. Masmoudi, Well-posedness of compressible Euler equations in a physical vacuum. Comm. Pure Appl. Math. 68 (2015), no. 1, 61-111.
\bibitem{JZ}
S. Jiang, P. Zhang, On spherically symmetric solutions of the compressible isentropic Navier-Stokes equations. Comm. Math. Phys. 215 (2001), no. 3, 559-581.
\bibitem{Ka}
J. I. Kanel, The Cauchy problem for equations of gas dynamics with viscosity. (Russian) Sibirsk. Mat. Zh. 20 (1979), no. 2, 293-306, 463.
\bibitem{K1}
A. V. Kazhikhov, V. V. Shelukhin, Unique global solution with respect to time of initial-boundary value problems for one-dimensional equations of a viscous gas. J. Appl. Math. Mech. 41 (1977), no. 2, 273-282.
\bibitem{K2}
A. V. Kazhikhov, Cauchy problem for viscous gas equations, Siberian Math. J. 23 (1982), 44-49.
\bibitem{L1}
P. L. Lions, Existence globale de solutions pour les \'{e}quations de Navier-Stokes compressibles isentropiques. C. R. Acad. Sci. Paris, S\'{e}r. I Math. 316 (1993), 1335-1340.
\bibitem{L2}
P. L. Lions, Limites incompressible et acoustique pour des fluides visqueux, compressibles et isentropiques. C. R. Acad. Sci. Paris S\'{e}r. I Math. 317 (1993), 1197-1202.
\bibitem{L3}
P. L. Lions, Mathematical topics in fluid mechanics. Vol. 2. Compressible models. New York: Oxford University Press, 1998.
\bibitem{MB}
A. J. Majda, A. L. Bertozzi, Vorticity and incompressible flow. Cambridge Texts in Applied Mathematics, 27. Cambridge University Press, Cambridge, 2002.
\bibitem{MUK}
T. Makino, S. Ukai, S. Kawashima, Sur la solution \`{a} support compact de l'\'{e}quation d'Euler compressible, Jpn. J. Appl. Math. 3 (1986), 249-257.
\bibitem{M1}
A. Matsumura, T. Nishida, The initial value problem for the equations of motion of compressible viscous and heat-conductive fluids. Proc. Japan Acad. Ser. A Math. Sci. 55 (1979), no. 9, 337-342.
\bibitem{M2}
A. Matsumura, T. Nishida, The initial value problem for the equations of motion of viscous and heat-conductive gases. J. Math. Kyoto Univ. 20 (1980), no. 1, 67-104.
\bibitem{M3}
A. Matsumura, T. Nishida, The initial boundary value problems for the equations of motion of compressible and heat-conductive fluids. Commun. Math. Phys. 89 (1983), 445-464.
\bibitem{Nash}
J. Nash, Le probl\`{e}me de Cauchy pour les \'{e}quations diff\'{e}rentielles d'un fluide g\'{e}n\'{e}ral. Bull. Soc. Math. France 90 (1962), 487-497.
\bibitem{Sa}
R. Salvi, I. Stra\v{s}kraba, Global existence for viscous compressible fluids and their behavior as $t\rightarrow\infty$. J. Fac. Sci. Univ. Tokyo Sect. IA, Math. 40 (1993), 17-51.
\bibitem{S1}
D. Serre, Solutions faibles globales des \'{e}quations de Navier-Stokes pour un fluide compressible. C. R. Acad. Sci. Paris S\'{e}r. I Math. 303 (1986), no. 13, 639-642.
\bibitem{S2}
D. Serre, On the one-dimensional equation of a viscous, compressible, heat-conducting fluid. C. R. Acad. Sci. Paris S\'{e}r. I Math. 303 (1986), no. 14, 703-706.
\bibitem{Se}
J. Serrin, On the uniqueness of compressible fluid motion. Arch. Rational Mech. Anal. 3 (1959), 271-288.
\iffalse
\bibitem{So}
V. A. Solonnikov, On solvability of an initial boundary value problem for the equations of motion of viscous compressible fluid. Zap. Nauchn. Sem. LOMI 56 (1976), 128-142.
\bibitem{T}
A. Tani, On the first initial-boundary value problem of compressible viscous fluid motion. Publ. Res. Inst. Math. Sci. Kyoto Univ. 13 (1971), 193-253.
\bibitem{V}
A. Valli, W. M. Zajaczkowski, Navier-Stokes equations for compressible fluids: global existence and qualitative properties of the solutions in the general case. Commun. Math. Phys. 103 (1986), no. 2, 259-296.
\fi
\bibitem{WenZhu}
H. Y. Wen, C. J. Zhu, Global spherically symmetric classical solution to compressible Navier-Stokes equations with large initial data and vacuum. SIAM J. Math. Anal. 44 (2012), no. 2, 1257-1278.
\bibitem{Xin}
Z. P. Xin, Blow up of smooth solutions to the compressible Navier-Stokes equations with compact density. Comm. Pure Appl. Math. 51 (1998), 229-240.
\bibitem{XYa}
Z. P. Xin, Y. Wei, On blowup of classical solutions to the compressible Navier-Stokes equations. Comm. Math. Phys. 321 (2013), no. 2, 529-541.
\bibitem{XYu}
Z. P. Xin, H. Yuan, Vacuum state for spherically symmetric solutions of the compressible Navier-Stokes equations. J. Hyperbolic Differ. Equ. 3 (2006), 403-442.
\end{thebibliography}
\textsc{School of Mathematics, Capital Normal University, Beijing 100048, P. R. China}
E-mail address: [email protected]
\textsc{Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim 7491, Norway}
E-mail address: [email protected]
\textsc{The Institute of Mathematical Sciences, The Chinese University of Hong Kong, Hong Kong}
E-mail address: [email protected]
\end{document}
\begin{document}
\title{Diophantine Approximation with Products of Two Primes}
\section{Introduction}
Let $\|x\|$ denote the distance from the real number $x$ to the
nearest integer. Given an irrational $\alpha$, we are interested in
the problem of determining the values of $\tau$ for which there are
infinitely many prime solutions, $p$, to the Diophantine inequality
\begin{equation}\label{diophantineineq}
\|p\alpha\|\leq p^{-\tau}.
\end{equation}
It is an easy consequence of the Generalised Riemann Hypothesis that
any $\tau<\frac{1}{3}$ is admissible. This was proved unconditionally
by Matom{\"a}ki, \cite{mat}, and is currently the strongest result
known. Progress on this problem began with Vinogradov,
\cite{vinogradov}, who proved that we can take any $\tau<\frac{1}{5}$.
Vaughan, \cite{vaughan}, simplified the proof whilst improving the
exponent to $\tau<\frac{1}{4}$. In both of these works an asymptotic
formula for the number of prime solutions is proved. Harman,
\cite{har83}, introduced a sieve method to the problem. This only
gives a lower bound for the number of solutions but this is
sufficient. He increased the size of $\tau$ to $\tau<\frac{3}{10}$,
improving this in \cite{har96} to $\tau<\frac{7}{22}$. These results
of Harman used identical arithmetic information to the results of
Vaughan; the improvements were in the sieve method. Heath-Brown and
Jia, \cite{rhbjia}, found new arithmetic information which they were
able to use to get $\tau<\frac{16}{49}$. Matom{\"a}ki, by using
results on averages of Kloosterman sums, was able to extend this to
handle any $\tau<\frac{1}{3}$.
If we only require the solutions of (\ref{diophantineineq}) to have at most two prime factors
then the problem is considerably easier as classical sieve methods may
be used. In particular Harman, \cite[Theorem 2]{har83}, states that
any $\tau<0.46$ is sufficient. One reason for a stronger result is
that the parity problem of sieve theory is no longer an issue. In
order to circumvent the parity problem and detect primes it is
necessary to prove estimates for bilinear forms, known as ``Type II''
sums. Matom{\"a}ki, \cite{mat}, describes all the estimates known
for $\tau<\frac{1}{3}$ but none of her proofs are valid for $\tau\geq
\frac{1}{3}$. We will prove a Type II bound in which one may take
$\tau$ slightly larger than $\frac13$. This estimate is too weak to
show the existence of prime solutions to (\ref{diophantineineq}). It
does, however, show that there are solutions which have precisely two
prime factors. Hence we can break the parity barrier for some
$\tau>\frac13$.
We are also interested in the set $\mathcal P_3(b)$ of $3$-digit palindromes
in base $b$. We say that a number is palindromic in base $b$ if its
digits in base $b$ are the same when reversed. Thus
$$\mathcal P_3(b)=\{j(b^2+1)+kb:j\in (0,b)\cap\mathbb Z,k\in [0,b)\cap\mathbb Z\}.$$
As we shall see in Section \ref{proofs}, elements in this set correspond closely to solutions of (\ref{diophantineineq}) when $\tau=\frac13$. We may therefore also conclude that $\mathcal P_3(b)$ contains
numbers with precisely two prime factors provided that $b$ is
sufficiently large.
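For illustration (this example plays no role in the proofs), in base $b=10$ the set $\mathcal P_3(10)=\{101j+10k:1\leq j\leq 9,\ 0\leq k\leq 9\}$ consists of the usual three-digit palindromes; for instance $393=101\cdot 3+10\cdot 9=3\cdot 131$ lies in $\mathcal P_3(10)$ and has precisely two prime factors.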
To handle both of these problems simultaneously we work with the
following set. For a natural number $q$, positive reals $x,z$ and an
integer $a$ with $(a,q)=1$ we let
$$\mathcal A=\mathcal A(x,q,z,a)=\{n\in (\frac{x}{4},x]:n\equiv ak\pmod q\text{ for some
}k\in [0,z)\cap\mathbb Z\}.$$
For a fixed constant $\tau\in (0,1)$ we shall only consider the case when
$$z\in \left[\frac{1}{2}q^{\frac{1-\tau}{1+\tau}},2q^{\frac{1-\tau}{1+\tau}}\right]$$
and
$$x\in \left[\frac{1}{2}q^{\frac{2}{1+\tau}},2q^{\frac{2}{1+\tau}}\right].$$
All implied constants in our results may depend on $\tau$. Observe that
$zq\asymp x$.
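To check this last observation: the assumed ranges give $zq\in\left[\frac{1}{2}q^{\frac{1-\tau}{1+\tau}+1},2q^{\frac{1-\tau}{1+\tau}+1}\right]$ and $\frac{1-\tau}{1+\tau}+1=\frac{2}{1+\tau}$, so $zq$ and $x$ lie in intervals of the same order and hence $zq\asymp x$ with implied constants at most $4$.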
Our aim is to estimate Type I and Type II sums for the set
$\mathcal A$ and use them to prove the following.
\begin{thm}\label{mainthm}
Suppose $\tau<\frac{8}{23}$ is fixed. Let $\mathcal E_2$ be the set of natural numbers having precisely $2$ prime factors. With the above definitions and hypotheses we have
$$\#(\mathcal A\cap \mathcal E_2)\gg \frac{z^2}{\log z},$$
provided that $q$ is sufficiently large in terms of $\tau$.
\end{thm}
A result of this form for $\tau<\frac13$ would follow immediately from Vaughan's work, \cite{vaughan}. The key new idea to handle larger $\tau$ is our Type II estimate, Theorem \ref{typeii}.
This theorem enables us to prove the following results regarding the problems discussed above.
\begin{thm}\label{diophantine}
Let $\alpha$ be irrational. For any $\tau<\frac{8}{23}$ there exist infinitely many $n\in\mathcal E_2$ such that
$$\|n\alpha\|\leq n^{-\tau}.$$
\end{thm}
\begin{thm}\label{palindrome}
For all sufficiently large $b$ we have
$$\#(\mathcal P_3(b)\cap\mathcal E_2)\gg \frac{b^2}{\log b}.$$
\end{thm}
\section{Notation and Useful Results}
We will write $e(x)=e^{2\pi ix}$ and
$$\mathbf{1}_\mathcal A(n)=\begin{cases}
1 & n\in \mathcal A\\
0 & n\notin \mathcal A.\\
\end{cases}$$
We will use the notation $n\sim N$ to mean $N<n\leq 2N$ and similarly $n\asymp N$ to mean $aN<n\leq bN$ for some $a,b>0$. We will also need the Fourier transform $\hat f$ of the function $f$, defined by
$$\hat f(x)=\int_{-\infty}^\infty f(t)e(-tx)\,dt.$$
We will write $\tau(n)$ for the number of divisors of $n$. It is well known that for any $\epsilon>0$ we have $\tau(n)\ll_\epsilon n^\epsilon$. It is slightly more convenient to work with a weighted version of the primes so we let
$$\varpi(n)=\begin{cases}
\log n & n\text{ is prime}\\
0 & \text{otherwise}.\\
\end{cases}$$
Finally, we adopt the standard convention that the value of $\epsilon$ may be different at each occurrence. For example, we may write $x^{\epsilon}\log x\ll x^\epsilon$ and $x^{2\epsilon}\ll x^\epsilon$.
We require the following forms of the Poisson Summation
Formula, which hold for all compactly supported smooth functions $f$,
all $v\in\mathbb R_{>0}$ and all $u\in \mathbb R$:
\begin{equation}\label{pois1}
\sum_{m\in\mathbb Z}f(vm+u)=\frac{1}{v}\sum_{n\in \mathbb Z}\hat
f\left(\frac{n}{v}\right)e\left(\frac{un}{v}\right),
\mathcal End{equation}
\begin{equation}\label{pois2}
\sum_{n\in\mathbb Z}f\left(\frac{n}{v}\right)e\left(\frac{un}{v}\right)=v\sum_{m\in\mathbb Z}\hat f(vm-u).
\end{equation}
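For completeness we note that both identities are instances of the classical Poisson Summation Formula $\sum_{m\in\mathbb Z}g(m)=\sum_{n\in\mathbb Z}\hat g(n)$: for (\ref{pois1}) one takes $g(t)=f(vt+u)$, so that $\hat g(n)=\frac{1}{v}\hat f(\frac{n}{v})e(\frac{un}{v})$, and for (\ref{pois2}) one takes $g(t)=f(\frac{t}{v})e(\frac{ut}{v})$, so that $\hat g(m)=v\hat f(vm-u)$.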
\section{Reduction of the Problem}
As we only require a lower bound we may smooth the function $\mathbf{1}_{\mathcal A}$.
\begin{defn}\label{wdef}
Let $W$ be a smooth function satisfying the following conditions.
\begin{enumerate}
\item If $x\notin [\frac{1}{4},\frac{3}{4}]$ then $W(x)=0$.
\item If $x\in [\frac{1}{3},\frac{2}{3}]$ then $W(x)=1$.
\item For all $x$, $0\leq W(x)\leq 1$.
\end{enumerate}
\end{defn}
It is a well known fact that many functions, $W$, satisfying the conditions of this definition exist. The precise choice of $W$ does not matter but all implied constants may depend on it. For any $B\in\mathbb N$ we may integrate by parts $B$ times to obtain the standard estimate
\begin{equation}\label{fourierbound}
|\hat W(x)|\ll_B \min(1,|x|^{-B}).
\end{equation}
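For completeness, one way to obtain this estimate: integrating by parts $B$ times gives
$$\hat W(x)=(2\pi ix)^{-B}\int_{-\infty}^\infty W^{(B)}(t)e(-tx)\,dt\ll_B|x|^{-B},$$
while trivially $|\hat W(x)|\leq\int_{-\infty}^\infty W(t)\,dt\leq 1$.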
\begin{defn}\label{defPhi}
Let
$$\Phi(n)=\twosum{k}{n\equiv ka\pmod q}W(\frac{k}{z}).$$
\end{defn}
\begin{lem}
If $\frac{x}{4}<n<x$ then
$$0\leq \Phi(n)\leq \mathbf{1}_{\mathcal A}(n)\leq 1.$$
Therefore, to prove Theorem \ref{mainthm} it is sufficient to prove a lower bound for
$$\twosum{\frac{x}{4}<n<x}{n\in \mathcal E_2}\Phi(n).$$
\end{lem}
\begin{proof}
This follows from the definitions of $\mathcal A$ and $\Phi$: the integers $k$ with $n\equiv ka\pmod q$ form a single residue class modulo $q$ and, since $W$ is supported on $[\frac{1}{4},\frac{3}{4}]$ and $z<q$ for $q$ large, at most one such $k$ satisfies $W(\frac{k}{z})\ne 0$; this $k$ lies in $(0,z)$, so $0\leq\Phi(n)\leq 1$ and $\Phi(n)>0$ forces $n\in\mathcal A$.
\end{proof}
\section{Type I Sums}
The Type I estimate we prove, Theorem \ref{typei}, has been known in essence since the work of Vaughan, \cite{vaughan}. However, it is useful to
prove it again to get a result which is valid in our precise
situation. In addition, Vaughan's proof uses estimates for
exponential sums whereas we use results from the geometry of numbers.
The exponential sum approach is possibly simpler for standard Type I
sums but we also need to estimate a variant of such sums, Theorem
\ref{typeiv}, which is easier with the geometry of numbers.
Throughout this section $M,N\geq 1$ satisfy $\frac{x}{4}\leq MN\leq 4x$ and $M\leq z^{2-\delta}$ for some $\delta>0$. This means that
$$N\gg \frac{x}{M}\gg \frac{q}{z^{1-\delta}}.$$
All our implied constants may depend on $\delta$.
For an integer $m$ let
$$\Psi(m)=\Psi(m;N)=\sum_{n\sim N}\Phi(mn).$$
We will consider $\Psi(m)$ as a counting function of points of a certain lattice, $\lambda(m)$.
\begin{lem}
Let
$$\lambda(m)=\{(j,k)\in\mathbb Z^2:jq+ka\equiv 0\pmod m\}.$$
The set $\lambda(m)$ is a lattice in $\mathbb Z^2$ with determinant $m$.
\end{lem}
\begin{proof}
It is clear that $\lambda(m)$ is a lattice. Since $(a,q)=1$ we know that $jq+ka$ takes on all integer values as $j,k$ vary over $\mathbb Z^2$. Thus $jq+ka$ represents all congruence classes mod $m$ so the determinant of $\lambda(m)$ is $m$.
\end{proof}
Define $b_1(m)$ to be the shortest nonzero vector in $\lambda(m)$ and let $R_1(m)$ be the Euclidean length of $b_1(m)$. We know, by Minkowski's Theorem, that $R_1(m)\ll \sqrt{m}$.
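A toy example, not used in what follows: if $q=7$, $a=3$ and $m=5$ then $\lambda(5)=\{(j,k)\in\mathbb Z^2:7j+3k\equiv 0\pmod 5\}$ has basis $\{(1,1),(0,5)\}$ and determinant $5$, and its shortest nonzero vectors are $\pm(1,1)$, so $R_1(5)=\sqrt{2}\ll\sqrt{5}$.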
\begin{lem}\label{Psilem}
With the previous assumptions on $M,N,x,z$ and $q$ we have
$$\Psi(m)=\frac{N\hat W(0)z}{q}+O(\frac{z}{R_1(m)}),$$
for any $m\sim M$.
\end{lem}
\begin{proof}
From the definitions of $\Psi$ and $\Phi$ we get
\begin{eqnarray*}
\Psi(m)&=&\sum_{n\sim N}\Phi(mn)\\
&=&\sum_{n\sim N}\twosum{k}{mn\equiv ka\pmod q}W(\frac{k}{z})\\
&=&\sum_{n\sim N}\twosum{j,k}{mn=jq+ka}W(\frac{k}{z})\\
&=&\twosum{(j,k)\in\lambda(m)}{(jq+ka)/m\sim N}W(\frac{k}{z}).\\
\end{eqnarray*}
Since $W$ is supported on $(0,1)$ the sum only contains points with $k\in (0,z)$. Let
$$f(t)=\#\{(j,k)\in \lambda(m):\frac{jq+ka}{m}\sim N,k\in (0,t]\}.$$
Summing by parts we get
$$\Psi(m)=-\frac{1}{z}\int_0^z f(t)W'(\frac{t}{z})\,dt.$$
Let
$$A(t)=\{(x,y)\in \mathbb R^2:\frac{xq+ya}{m}\sim N,y\in (0,t]\}.$$
By a standard result for counting lattice points we have
$$f(t)=\frac{\text{area}(A(t))}{m}+O(\frac{\text{perimeter}(A(t))}{R_1(m)}+1).$$
The vertices of $A(t)$ are
$$(Nm/q,0),(2Nm/q,0),((Nm-ta)/q,t),((2Nm-ta)/q,t).$$
Therefore
$$\text{area}(A(t))=\frac{Nmt}{q}$$
and
$$\text{perimeter}(A(t))\ll \frac{NM}{q}+t+\frac{ta}{q}\ll z.$$
It follows that
\begin{eqnarray*}
\Psi(m)&=&-\frac{1}{z}\int_0^z \left(\frac{Nt}{q}+O(\frac{z}{R_1(m)}+1)\right)W'(\frac{t}{z})\,dt\\
&=&-\frac{N}{qz}\int_0^ztW'(\frac{t}{z})\,dt+O(\frac{z}{R_1(m)}+1)\\
&=&\frac{N\hat W(0)z}{q}+O(\frac{z}{R_1(m)}+1).\\
\end{eqnarray*}
Since $R_1(m)\ll \sqrt{m}\ll\sqrt{M}\ll z$ the result follows.
\end{proof}
We need a bound for the number of $m$ for which $R_1(m)$ is unusually small.
\begin{lem}\label{lcount}
For any $\epsilon>0$, any $M\leq z^{2-\delta}$ and any integer $l$ we have
$$\#\{m\leq M:R_1(m)^2=l\}\ll_\epsilon z^\epsilon.$$
\end{lem}
\begin{proof}
We know that $R_1(m)^2\ll m\ll M$. Thus the only case to consider is $0<l\ll M$.
If $R_1(m)^2=l$ then there exist integers $j,k$ with $j^2+k^2=l$ and $m| jq+ka$. It follows that the quantity of interest is bounded by
$$\twosum{(j,k)\in\mathbb Z^2}{j^2+k^2=l}\#\{m:m|jq+ka\}\leq\twosum{(j,k)\in\mathbb Z^2}{j^2+k^2=l}\tau(jq+ka).$$
For the remainder of the proof let $h=jq+ka$, where $j^2+k^2=l$. We now use an argument by contradiction to show that $h\ne0$. If $h=0$ then
$k\ne0$ since $(j,k)\ne(0,0)$. Moreover $q|k$, whence $|k|\ge q$. However
$$|k|\leq \sqrt{l}\ll \sqrt{M}=o(z)=o(q),$$
giving a contradiction if $q$ is large enough. We therefore
conclude that $h\ne0$. In addition we have
$$h\ll q\sqrt{l}\ll qz\ll x,$$
so that $\tau(h)\ll x^\epsilon$. Letting $r(l)$ denote the number of ways in which $l$ may be written as the sum of two squares, the cardinality of the set in the lemma is then
$$\ll r(l)x^\epsilon\ll z^\epsilon,$$
since $r(l)\ll_\epsilon l^\epsilon$, $x$ is bounded by a fixed power of $z$, and we use the convention on different values of $\epsilon$.
\end{proof}
We may now prove an estimate for Type I sums.
\begin{thm}\label{typei}
If $\alpha_m$ are complex numbers with $|\alpha_m|\leq 1$ then, with the previous assumptions on $M,N,x,z$ and $q$ we have, for any $A>0$
$$\sum_{m\sim M,n\sim N}\alpha_m \Phi(mn)=\frac{\hat W(0)Nz}{q}\sum_{m\sim M}\alpha_m+O_A(z^2(\log z)^{-A}).$$
\end{thm}
\begin{proof}
Let
$$S=\sum_{m\sim M,n\sim N}\alpha_m \Phi(mn)=\sum_{m\sim M}\alpha_m \Psi(m).$$
Applying Lemma \ref{Psilem} we get
$$S=\frac{N\hat W(0)z}{q}\sum_{m\sim M}\alpha_m+O(z\sum_{m\sim M}\frac{1}{R_1(m)}).$$
Using Lemma \ref{lcount} we deduce that
\begin{eqnarray*}
\sum_{m\sim M}\frac{1}{R_1(m)}&=&\sum_{l\ll M}\frac{1}{\sqrt{l}}\#\{m\sim M,R_1(m)^2=l\}\\
&\ll_\epsilon&z^\epsilon\sum_{l\ll M}l^{-\frac12}\\
&\ll_\epsilon&z^\epsilon M^{\frac12}.\\
\end{eqnarray*}
We conclude that
$$S=\frac{N\hat W(0)z}{q}\sum_{m\sim M}\alpha_m+O(z^{1+\epsilon}M^{\frac12}).$$
Since $M\ll z^{2-\delta}$ the error term is
$$O(z^{2+\epsilon-\delta/2}).$$
The result follows on taking $\epsilon<\frac{\delta}{2}$.
\end{proof}
Observe that if
$$\sum_{m\sim M}\alpha_m \asymp M$$
then the leading term in this estimate has size $\frac{xz}{q}\asymp z^2$. This is larger than the error term.
It is also necessary to bound a Type I sum where $\sum_{n\sim N}$ is replaced by a smooth weight.
\begin{thm}\label{typei2}
Suppose the above conditions on $M,N,x,z$ and $q$ hold. If $\alpha_m$ are complex numbers with $|\alpha_m|\leq 1$ then, for any $A>0$, we have
$$\twosum{n}{m\sim M}\alpha_m W\left(\frac{n}{3N}\right)\Phi(mn)=\frac{3\hat W(0)^2Nz}{q}\sum_{m\sim M}\alpha_m+O_A(z^2(\log z)^{-A}).$$
\end{thm}
\begin{proof}
After using partial summation to remove the smooth weight $W(\frac{n}{3N})$, the result follows by an almost identical proof to that of Theorem \ref{typei}.
\end{proof}
Define $\Psi_1(m)$ by
$$\Psi(m)=\frac{\hat W(0)Nz}{q}+\Psi_1(m).$$
We will require the following two lemmas.
\begin{lem}
For any $\epsilon>0$ and any $M,N,x,q$ and $z$ satisfying the previous assumptions we have
$$\sum_{m\asymp M}\Psi_1(m)^2\ll_\epsilon z^{2+\epsilon}.$$
\end{lem}
\begin{proof}
From Lemma \ref{Psilem} we have
$$\Psi_1(m)^2\ll_\epsilon \frac{z^2}{R_1(m)^2}.$$
By Lemma \ref{lcount} we get
\begin{eqnarray*}
\sum_{m\asymp M}\frac{1}{R_1(m)^2}&\ll&\sum_{l\ll M}\frac{1}{l}\#\{m\asymp M,R_1(m)^2=l\}\\
&\ll_\epsilon&z^\epsilon\sum_{l\ll M}l^{-1}\\
&\ll_\epsilon&z^\epsilon.\\
\end{eqnarray*}
The result follows.
\end{proof}
\begin{lem}
Under the same assumptions as the last lemma we have
$$\sum_{m\asymp M}\Psi(m)^2\ll \frac{Nz^3}{q}.$$
\end{lem}
\begin{proof}
We have
\begin{eqnarray*}
\sum_{m\asymp M}\Psi(m)^2&=&\sum_{m\asymp M}(\frac{N\hat W(0)z}{q}+\Psi_1(m))^2\\
&\ll&\sum_{m\asymp M}\frac{N^2z^2}{q^2}+\sum_{m\asymp M}\Psi_1(m)^2\\
&\ll_\epsilon&\frac{MN^2z^2}{q^2}+z^{2+\epsilon}\\
&\ll_\epsilon& \frac{Nz^3}{q}+z^{2+\epsilon}.\\
\end{eqnarray*}
Since $N\gg \frac{q}{z^{1-\delta}}$ the first term is larger if we take a small enough $\epsilon$.
\end{proof}
We may now estimate a variant of a Type I sum which will be useful later.
\begin{thm}\label{typeiv}
Suppose that the above assumptions on $M,N,x,z$ and $q$ hold. In addition, assume that $N\leq z^{2-\delta}$. Then, for any complex numbers $\beta_n$ bounded by $1$ and any $A>0$,
$$\twosum{m}{n_1,n_2\sim N}\beta_{n_1}W\left(\frac{m}{3M}\right)
\Phi(mn_1)\Phi(mn_2)=\frac{3NM\hat W(0)^3z^2}{q^2}\sum_{n\sim N}\beta_n+O_A(\frac{z^4(\log z)^{-A}}{M}).$$
\end{thm}
\begin{proof}
Let
$$S=\twosum{m}{n_1,n_2\sim N}\beta_{n_1}W\left(\frac{m}{3M}\right)\Phi(mn_1)\Phi(mn_2)=\twosum{m}{n\sim N}\beta_nW\left(\frac{m}{3M}\right)\Phi(mn)\Psi(m).$$
Writing
$$\Psi(m)=\frac{\hat W(0)Nz}{q}+\Psi_1(m)$$
we get a contribution from $\frac{\hat W(0)Nz}{q}$ of
$$\frac{\hat W(0)Nz}{q}\twosum{m}{n\sim N}\beta_nW\left(\frac{m}{3M}\right)\Phi(mn).$$
This sum is in a form which can be estimated by Theorem \ref{typei2},
with $m,n$ interchanged. All the conditions needed for that theorem are satisfied since $N\leq z^{2-\delta}$. The main term is thus
$$\frac{3\hat W(0)^3NMz^2}{q^2}\sum_{n\sim N}\beta_n+O_A(\frac{z^3(\log z)^{-A}N}{q}).$$
On writing $N\ll\frac{zq}{M}$ the error here is
$$O_A(\frac{z^4(\log z)^{-A}}{M}).$$
The contribution from $\Psi_1(m)$ is
$$\twosum{m}{n\sim N}\beta_nW\left(\frac{m}{3M}\right)\Phi(mn)\Psi_1(m).$$
Trivially estimating the $\beta_n$ by $1$ this is majorised by
$$\sum_mW\left(\frac{m}{3M}\right)\Psi(m)|\Psi_1(m)|.$$
Since $W(x)\leq 1$ for all $x$ we may remove the factor $W(\frac{m}{3M})$ and apply Cauchy's inequality to get a bound of
$$(\sum_{m\asymp M}\Psi(m)^2)^{1/2}(\sum_{m\asymp M}\Psi_1(m)^2)^{1/2}.$$
Applying the previous two lemmas this is
$$\ll_\epsilon N^{1/2}z^{5/2+\epsilon}q^{-1/2}\ll \frac{z^{3+\epsilon}}{\sqrt M}.$$
Since $M\leq z^{2-\delta}$ the error here is
$$\frac{z^{3+\epsilon}\sqrt{M}}{M}\leq \frac{z^{4+\epsilon-\delta/2}}{M}.$$
The result follows on taking $\epsilon<\frac{\delta}{2}$.
\end{proof}
Observe that if
$$\sum_{n\sim N}\beta_n\asymp N$$
then the main term in this last theorem has size
$$\frac{N^2Mz^2}{q^2}\asymp \frac{z^4}{M}.$$
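A quick verification of this, using $MN\asymp x\asymp zq$: we have $N\asymp\frac{zq}{M}$, so
$$\frac{N^2Mz^2}{q^2}\asymp\frac{z^2q^2}{M^2}\cdot\frac{Mz^2}{q^2}=\frac{z^4}{M}.$$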
\section{Type II Sums}
We will prove the following Type II result.
\begin{thm}\label{typeii}
Let $\alpha_m$ be complex numbers bounded by $1$. Suppose that $\frac{x}{4}\leq MN\leq 4x$ and
$$\max(z,\frac{q}{z^{1-\delta}})\leq N\leq z^{\frac{16}{15}-\delta}$$
for some $\delta>0$. Then, for every $A>0$, we have
$$\sum_{m\sim M,n\sim N}\alpha_m(\varpi(n)-1)\Phi(mn)\ll z^2(\log z)^{-A},$$
where the implied constant depends on both $A$ and $\delta$.
\end{thm}
Observe that the restrictions on $M,N$ in this theorem imply that
$$M\ll z^{2-\delta}.$$
The hypothesis that $N\geq z$ is only used once in our
argument, in the proof of Lemma \ref{zerosum}. When $\tau>\frac13$ this assumption is weaker than
$$N\geq \frac{q}{z^{1-\delta}}.$$
Let
$$S=\sum_{m\sim M,n\sim N}\alpha_m\beta_n\Phi(mn),$$
where
$$\beta_n=\varpi(n)-1.$$
We wish to show that $S=O(z^2(\log z)^{-A})$. Our arguments can be
modified to handle arbitrary $\beta_n$, although the range of $N$ is
then much smaller. However, this introduces some additional
technicalities. Since our Type II estimate does not cover a
sufficiently large range of $N$ to detect primes we have chosen to give the details only for the specific choice $\beta_n=\varpi(n)-1$.
Vaughan, \cite{vaughan}, used exponential sum methods to establish Type II estimates which are only valid when $x^\tau<N<x^{1-2\tau}$. This range is empty when $\tau\geq\frac{1}{3}$. Heath-Brown and Jia, \cite{rhbjia}, introduced a new method which reduces the problem to the estimation of certain Kloosterman sums. Matom{\"a}ki, \cite{mat}, used the same reduction but then used stronger bounds on the resulting averages of Kloosterman sums and was thus able to get enough Type II information to detect primes for any $\tau<\frac{1}{3}$. The range of $N$ in the Type II bounds found by Heath-Brown, Jia and Matom{\"a}ki remains nonempty as $\tau\rightarrow\frac{1}{3}$. However, it is not valid for $\tau\geq\frac{1}{3}$ as the reduction to Kloosterman sums gives an error which is too large in this case. Our method is essentially an extension of that of Heath-Brown and Jia which avoids this problem.
\begin{lem}
We have $S=O(\sqrt{MS_1})$ where
$$S_1=\sum_{n_1,n_2\sim N}\beta_{n_1}\beta_{n_2}\sum_m W\left(\frac{m}{3M}\right)\Phi(mn_1)\Phi(mn_2).$$
It follows that a bound of
$$S_1=O(\frac{z^4(\log z)^{-A}}{M})$$
will be sufficient.
\end{lem}
\begin{proof}
Applying Cauchy's inequality gives
$$S^2\leq\sum_{m\sim M}|\alpha_m|^2\sum_{m\sim M}(\sum_{n\sim N}\beta_n \Phi(mn))^2.$$
By definition of the function $W$ we know that $W\left(\frac{m}{3M}\right)=1$ when $m\sim M$. Therefore
\begin{eqnarray*}
S^2&\ll&M\sum_mW\left(\frac{m}{3M}\right)(\sum_{n\sim N}\beta_n\Phi(mn))^2\\
&=&M\sum_{n_1,n_2\sim N}\beta_{n_1}\beta_{n_2}\sum_m W\left(\frac{m}{3M}\right)\Phi(mn_1)\Phi(mn_2)\\
&=&MS_1.\\
\end{eqnarray*}
\end{proof}
On putting $\beta_n=\varpi(n)-1$ into $S_1$ we will get three sums all of which must be evaluated asymptotically. However, on combining the sums, all the main terms will cancel and we will get the required result. Specifically let
$$S_1=S_{1,1}-2S_{1,2}+S_{1,3}$$
where
$$S_{1,1}=\sum_{n_1,n_2\sim N}\varpi(n_1)\varpi(n_2)\sum_m W\left(\frac{m}{3M}\right)\Phi(mn_1)\Phi(mn_2),$$
$$S_{1,2}=\sum_{n_1,n_2\sim N}\varpi(n_1)\sum_m W\left(\frac{m}{3M}\right)\Phi(mn_1)\Phi(mn_2)$$
and
$$S_{1,3}=\sum_{n_1,n_2\sim N}\sum_m W\left(\frac{m}{3M}\right)\Phi(mn_1)\Phi(mn_2).$$
We begin by dealing with the sums $S_{1,2}$ and $S_{1,3}$.
\begin{lem}
With our assumptions on $M,N,x,q$ and $z$ we have, for $i=2,3$ that
$$S_{1,i}=\frac{3N^2M\hat W(0)^3z^2}{q^2}+O_A\left(\frac{z^4(\log z)^{-A}}{M}\right).$$
\end{lem}
\begin{proof}
We have
$$N\leq z^{\frac{16}{15}-\delta}\leq z^{2-\delta}.$$
We may therefore use Theorem \ref{typeiv} with $\beta_n=\varpi(n)$ or $\beta_n=1$. These coefficients are only bounded by $\log n$ but this can be absorbed into the error term. In either case we have
$$\sum_{n\sim N}\beta_n=N+O(N(\log N)^{-A})$$
so the result follows.
\end{proof}
Next we deal with the contribution to $S_{1,1}$ from pairs with $n_1=n_2$. This is
$$\sum_{n\sim N}\varpi(n)^2\sum_m W\left(\frac{m}{3M}\right)\Phi(mn)^2.$$
All the terms are positive and $\Phi$ takes values in $[0,1]$ so this is at most
$$\sum_{n\sim N}\varpi(n)^2\sum_m W\left(\frac{m}{3M}\right)\Phi(mn).$$
Using Theorem \ref{typei2} we may bound this Type I sum by $O(z^2\log N)$. Since $M\ll z^{2-\delta}$ this is $O(\frac{z^4(\log z)^{-A}}{M})$.
The remaining terms in $S_{1,1}$ have $n_1\ne n_2$. Since the coefficients $\varpi(n)$ are supported on primes all such pairs actually satisfy $(n_1,n_2)=1$. We therefore consider
$$S_2=\twosum{n_1,n_2\sim N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)\sum_m W\left(\frac{m}{3M}\right)\Phi(mn_1)\Phi(mn_2).$$
\subsection{Harmonic Analysis of the Sum $S_{2}$}
Let
$$T=\sum_m W\left(\frac{m}{3M}\right)\Phi(mn_1)\Phi(mn_2).$$
Since $(a,q)=1$ there exists an $\overline a$ satisfying
$$a\overline a\equiv 1\pmod q.$$
\begin{lem}
We have
$$T=\frac{3Mz^2}{q^2}\sum_{k_1,k_2}\hat W(\frac{k_1z}{q})\hat W(\frac{k_2z}{q})\sum_m\hat W\left(3M\left(m-\frac{\overline a(k_1n_1+k_2n_2)}{q}\right)\right).$$
\end{lem}
\begin{proof}
The definition of $\Phi$ gives
\begin{eqnarray*}
\Phi(n)&=&\twosum{k}{k\equiv n\overline a\pmod q}W(\frac{k}{z})\\
&=&\sum_m W\left(\frac{qm+n\overline a}{z}\right).\\
\end{eqnarray*}
Applying the Poisson Summation Formula in the form (\ref{pois1}) we therefore get
$$\Phi(n)=\frac{z}{q}\sum_k\hat W\left(\frac{kz}{q}\right)e\left(\frac{n\overline ak}{q}\right),$$
so that
$$T=\frac{z^2}{q^2}\sum_{k_1,k_2} \hat W\left(\frac{k_1z}{q}\right)\hat W\left(\frac{k_2z}{q}\right)\sum_mW\left(\frac{m}{3M}\right)e\left(\frac{m\overline a(k_1n_1+k_2n_2)}{q}\right).$$
We can now use the Poisson Summation Formula (\ref{pois2}) to obtain
$$\sum_mW\left(\frac{m}{3M}\right)e\left(\frac{m\overline a(k_1n_1+k_2n_2)}{q}\right)=3M\sum_m\hat W\left(3Mm-\frac{3M\overline a(k_1n_1+k_2n_2)}{q}\right).$$
The result follows on substituting this into the above expression for $T$.
\end{proof}
Let $S_3$ be the subsum of $S_2$ coming from terms with $k_1n_1+k_2n_2=0$. Since $(n_1,n_2)=1$ any solution of this may be written uniquely as $k_1=n_2h$ and $k_2=-n_1h$ for some $h\in \mathbb Z$. Therefore
$$S_3=\frac{3Mz^2}{q^2}\twosum{n_1,n_2\sim N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)\sum_{h,m}\hat W(\frac{n_2hz}{q})\hat W(\frac{-n_1hz}{q})\hat W(3Mm).$$
\begin{lem}
For any $A>0$ we have, under the previous assumptions on $M,N,x,q$ and $z$, that
$$S_3=\frac{3MN^2z^2\hat W(0)^3}{q^2}+O_A(\frac{z^4(\log z)^{-A}}{M}).$$
\end{lem}
\begin{proof}
Our assumptions imply that for $n_i\sim N$ we have
$$\frac{n_iz}{q}\gg \frac{Nz}{q}\gg z^\delta$$
and that
$$M\gg z^\delta.$$
It follows, using the bound (\ref{fourierbound}), that the contribution to $S_3$ from terms with $h\ne 0$ or $m\ne 0$ is negligible. Specifically, for any $B\in\mathbb N$ we have
$$S_3=\frac{3Mz^2\hat W(0)^3}{q^2}\twosum{n_1,n_2\sim N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)+O_B(z^{-B}).$$
Observe that
$$\frac{Mz^2}{q^2}\sum_{n\sim N}\varpi(n)^2\ll \frac{MNz^2\log N}{q^2}\ll \frac{z^3\log N}{q}\ll_A \frac{z^4(\log z)^{-A}}{M},$$
where the last inequality uses that $M\ll z^{2-\delta}\leq qz^{1-\delta}$. We deduce that
$$S_3=\frac{3Mz^2\hat W(0)^3}{q^2}\sum_{n_1,n_2\sim N}\varpi(n_1)\varpi(n_2)+O_A(\frac{z^4(\log z)^{-A}}{M}).$$
The result follows on applying the Prime Number Theorem to the sum
$$\sum_{n\sim N}\varpi(n).$$
\end{proof}
Let $S_4$ be the sum of the remaining terms from $S_2$, those with $k_1n_1+k_2n_2\ne 0$. Thus
$$S_4=\twosum{n_1,n_2\sim N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)T_1,$$
where
$$T_1=\frac{3Mz^2}{q^2}\twosum{k_1,k_2}{k_1n_1+k_2n_2\ne 0}\hat W(\frac{k_1z}{q})\hat W(\frac{k_2z}{q})\sum_m\hat W\left(3M\left(m-\frac{\overline a(k_1n_1+k_2n_2)}{q}\right)\right).$$
For any integers $m,k_1,k_2$ there exists a unique integer $k$ such that
$$m-\frac{\overline a(k_1n_1+k_2n_2)}{q}=\frac{k}{q}.$$
There is then a unique integer $j$ such that
$$k_1n_1+k_2n_2=jq-ka.$$
Writing $c=jq-ka$ it follows that
$$T_1=\frac{3Mz^2}{q^2}\twosum{j,k,k_1,k_2}{k_1n_1+k_2n_2=c\ne 0}\hat W(\frac{k_1z}{q})\hat W(\frac{k_2z}{q})\hat W(\frac{3Mk}{q}).$$
If we let
$$F(n_1,n_2;c)=\twosum{k_1,k_2}{k_1n_1+k_2n_2=c}\hat W(\frac{k_1z}{q})\hat W(\frac{k_2z}{q})$$
then
$$S_4=\frac{3Mz^2}{q^2}\twosum{j,k}{c\ne 0}\hat W(\frac{3Mk}{q})\twosum{n_1,n_2\sim N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)F(n_1,n_2;c).$$
\subsection{Transforming the Function $F$}
To deal with the sum $S_4$ we begin by applying Poisson Summation to the function $F$.
\begin{lem}\label{Fpois}
Let $\overline n_1$ be an inverse of $n_1$ modulo $n_2$, which exists since $(n_1,n_2)=1$. We have
$$F(n_1,n_2)=\frac{1}{n_2}\sum_l \hat g\left(\frac{l}{n_2};n_1,n_2,c\right)e\left(\frac{c\overline n_1 l}{n_2}\right),$$
where
$$g(t;n_1,n_2,c)=\hat W\left(\frac{tz}{q}\right)\hat W\left(\frac{(c-tn_1)z}{n_2q}\right)$$
and $\hat g$ is the Fourier transform of $g$ with respect to the single variable $t$.
\end{lem}
\begin{proof}
We are interested in pairs $k_1,k_2$ satisfying the equation
$$k_1n_1+k_2n_2=c.$$
For a given $k_1$ this has at most $1$ solution which exists if and only if
$$k_1n_1\equiv c\pmod {n_2}.$$
Since $(n_1,n_2)=1$ this condition is equivalent to
$$k_1\equiv c\overline{n_1}\pmod {n_2}.$$
If this congruence holds then the corresponding $k_2$ is given by
$$k_2=\frac{c-k_1n_1}{n_2}.$$
We therefore have
$$F(n_1,n_2)=\twosum{k}{k\equiv c\overline{n_1}\pmod{n_2}}\hat W\left(\frac{kz}{q}\right)\hat W\left(\frac{(c-kn_1)z}{n_2q}\right).$$
Now, if we let
$$g(t;n_1,n_2,c)=\hat W\left(\frac{tz}{q}\right)\hat W\left(\frac{(c-tn_1)z}{n_2q}\right),$$
then by the Poisson Summation Formula, (\ref{pois1}), we get
$$F(n_1,n_2)=\frac{1}{n_2}\sum_l \hat g\left(\frac{l}{n_2}\right)e\left(\frac{c\overline n_1 l}{n_2}\right).$$
\end{proof}
Applying this lemma to the sum $S_4$ we deduce that
$$S_4=\frac{3Mz^2}{q^2}\twosum{j,k,l}{c\ne 0}\hat W(\frac{3Mk}{q})\twosum{n_1,n_2\sim N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)\frac{1}{n_2}\hat g\left(\frac{l}{n_2};n_1,n_2,c\right)e\left(\frac{c\overline n_1 l}{n_2}\right).$$
The sums considered by Heath-Brown and Jia, as well as by Matom{\"a}ki, are essentially just the $k=0$ terms of $S_4$.
\subsection{Terms with $l=0$}
We will need the following result concerning the function $\hat g$.
\begin{lem}\label{ltrunc}
For all $t$ and all $n_1,n_2\sim N$ we have $\hat g(t)\ll
\frac{q}{z}$. Furthermore, if $|t|\geq \frac{4z}{q}$ then $\hat g(t)=0$.
\end{lem}
\begin{proof}
Recall that
$$g(t)=\hat W\left(\frac{tz}{q}\right)\hat W\left(\frac{(c-tn_1)z}{n_2q}\right)=g_1(t)g_2(t),$$
say. It follows that
$$\hat g(t)=(\hat g_1\star \hat g_2)(t)=\int_{-\infty}^\infty \hat g_1(x)\hat g_2(t-x)\,dx.$$
We have
$$g_1(t)=\hat W(\frac{tz}{q})$$
so
$$\hat g_1(t)=\frac{q}{z}W(-tq/z).$$
We also have
$$g_2(t)=\hat W\left(\frac{(c-tn_1)z}{n_2q}\right)$$
so
$$\hat g_2(t)=\frac{n_2q}{n_1z}W(\frac{n_2qt}{n_1z})e(\frac{-ctq}{n_2z}).$$
Therefore, for all $t$ we deduce that
$$|\hat g_i(t)|\ll \frac{q}{z}.$$
Furthermore, if $|t|\geq \frac{2z}{q}$, then
$$\hat g_i(t)=0.$$
It follows that for all $t$ we have
$$\hat g(t)=\int_{-\infty}^\infty \hat g_1(x)\hat g_2(t-x)\,dx\ll
\int_{|x|\leq \frac{2z}{q}}(q/z)^2\,dx\ll \frac{q}{z}.$$
In addition, if $|t|\geq \frac{4z}{q}$
then for any $x$ either
$$|x|\geq \frac{2z}{q}$$
or
$$|t-x|\geq \frac{2z}{q}.$$
It follows that $\hat g(t)=0$.
\end{proof}
Let $S_5$ be the subsum of $S_4$ containing the terms with $l=0$, that is
$$S_5=\frac{3Mz^2}{q^2}\twosum{j,k}{c\ne 0}\hat
W(\frac{3Mk}{q})\twosum{n_1,n_2\sim
N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)\frac{1}{n_2}\hat
g(0;n_1,n_2,c).$$
It is convenient to reinstate the terms with $c=0$. These correspond
to pairs $(j,k)$ with $k=hq,j=ha$
so their contribution is
$$\frac{3Mz^2}{q^2}\sum_h\hat W(3Mh)\twosum{n_1,n_2\sim N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)\frac{1}{n_2}\hat g(0;n_1,n_2,0).$$
From the estimate (\ref{fourierbound}) we may deduce that for any $B\in\mathbb N$ the contribution to this from terms with $h\ne 0$ is $O_B(z^{-B})$. Using the estimate for $\hat g$ given in Lemma \ref{ltrunc} we may bound the $h=0$ terms by
$$\frac{MNz}{q}\ll z^2\ll_A \frac{z^4(\log z)^{-A}}{M},$$
since $M\ll z^{2-\delta}$. It is therefore enough to bound
$$S_6=\frac{3Mz^2}{q^2}\sum_{j,k}\hat W(\frac{3Mk}{q})\twosum{n_1,n_2\sim N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)\frac{1}{n_2}\hat g(0;n_1,n_2,c).$$
We may move the sum over $j$ inside the other summations to transform this to
$$S_6=\frac{3Mz^2}{q^2}\sum_k\hat W(\frac{3Mk}{q})\twosum{n_1,n_2\sim N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)\frac{1}{n_2}\sum_j\hat g(0;n_1,n_2,c).$$
Inserting the definition of $\hat g$ and reordering we see that
$$S_6=\frac{3Mz^2}{q^2}\sum_k\hat W(\frac{3Mk}{q})\twosum{n_1,n_2\sim N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)\frac{1}{n_2}\int_{-\infty}^\infty\hat W\left(\frac{tz}{q}\right)\sum_j\hat W\left(\frac{(c-tn_1)z}{n_2q}\right)\,dt.$$
\begin{lem}\label{zerosum}
For all $t\in\mathbb R$, $N\geq z$ and
$n_1,n_2\sim N$ we have
$$\sum_j\hat W\left(\frac{(c-tn_1)z}{n_2q}\right)=0.$$
\end{lem}
\begin{proof}
The sum is
$$\sum_j\hat W\left(\frac{(jq-ka-tn_1)z}{n_2q}\right).$$
We may apply the Poisson Summation Formula, (\ref{pois1}), to obtain
$$\frac{n_2}{z}\sum_j W(\frac{n_2j}{z})e(\gamma j),$$
for a $\gamma$ which depends on all the outer variables.
Since $N\geq z$ we have
$$\frac{n_2}{z}\geq \frac{N}{z}\geq 1.$$
However, $W$ is supported on $[\frac{1}{4},\frac{3}{4}]$, while $\frac{n_2j}{z}\geq 1$ for $j\geq 1$ and $\frac{n_2j}{z}\leq 0$ for $j\leq 0$, so for every integer $j$ we have
$$W(\frac{n_2j}{z})=0.$$
\end{proof}
It follows from this that $S_6=0$ and therefore that
$$S_5\ll_A\frac{z^4(\log z)^{-A}}{M}.$$
\subsection{The Remaining Terms}
Let $S_7$ be the subsum of $S_4$ containing all the remaining terms,
that is to say, all those with $l\ne 0$. Thus
$$S_7=\frac{3Mz^2}{q^2}\twosum{j,k,l}{c\ne 0,l\ne 0}\hat W(\frac{3Mk}{q})\twosum{n_1,n_2\sim N}{(n_1,n_2)=1}\varpi(n_1)\varpi(n_2)\frac{1}{n_2}\hat g\left(\frac{l}{n_2};n_1,n_2,c\right)e\left(\frac{c\overline n_1 l}{n_2}\right).$$
We now truncate the sums over $j,k,l$ to finite ranges.
\begin{lem}
Suppose $\eta>0$.
The contribution to $S_7$ from $(j,k,l)$ for
which any of
$$|l|\geq \frac{8Nz}{q},$$
$$|k|\geq \frac{qz^{\eta}}{M}$$
or
$$|j|\geq Nz^{-1+2\eta}$$
hold is $O_{B,\eta}(z^{-B})$ for any $B\in\mathbb N$.
\end{lem}
\begin{proof}
From Lemma \ref{ltrunc} we know that if $|t|\geq \frac{4z}{q}$ then
$\hat g(t)=0$. It follows that terms with
$$|l|\geq \frac{8Nz}{q}$$
make no contribution to the sum.
Let $R$ be the set of $(j,k)$ for which
$$|k|\geq \frac{qz^{\eta}}{M}$$
or
$$|j|\geq Nz^{-1+2\eta}.$$
To complete the proof it is sufficient to give a bound of $O_B(z^{-B})$ for
$$\sum_{(j,k)\in R}\left|\hat W(\frac{3Mk}{q})\hat g\left(\frac{l}{n_2};n_1,n_2,c\right)\right|.$$
By definition of $\hat g$ this is at most
$$\int_{-\infty}^\infty\sum_{(j,k)\in R}\left|\hat W(\frac{3Mk}{q})\hat W\left(\frac{tz}{q}\right)\hat W\left(\frac{(jq-ka-tn_1)z}{n_2q}\right)\right|\,dt.$$
We make repeated use of the estimate (\ref{fourierbound}). This shows that any part of the above where $\hat W$ is evaluated at a point $x$ with $|x|\geq z^{\eta}$ may be bounded by $O_B(z^{-B})$. From the factor
$\hat W(\frac{tz}{q})$
we see that such a bound holds when
$$|t|\geq \frac{q}{z^{1-\eta}}$$
and from the factor
$\hat W(\frac{3Mk}{q})$
it holds when
$$|k|\geq \frac{qz^{\eta}}{M}.$$
Finally we assume that
$$|t|< \frac{q}{z^{1-\eta}}$$
and
$$|k|< \frac{qz^{\eta}}{M}.$$
In this case we have
$$|j|\geq Nz^{-1+2\eta}.$$
For sufficiently large $q$ these assumptions imply that
$$\frac{(jq-ka-tn_1)z}{n_2q}\gg z^{\eta}.$$
A bound of $O_B(z^{-B})$ therefore holds for all parts of the sum.
\end{proof}
Let $S_8$ be the sum $S_7$ with the following ranges of summation:
$$0<|l|<\frac{8Nz}{q},$$
$$|k|< \frac{qz^{\eta}}{M}$$
and
$$|j|< Nz^{-1+2\eta}.$$
The last lemma shows that, for a fixed $\eta>0$, we only need to bound $S_8$. We ignore any potential cancellation in the outer sums so we write
$$S_8\ll \frac{Mz^2\log N}{q^2N}
\twosum{|j|<Nz^{-1+2\eta},\,|k|<\frac{qz^{\eta}}{M},\,0<|l|<\frac{8Nz}{q}}{c\ne0}S_9$$
where
$$S_9=\sum_{n_2\sim N}|\twosum{n_1\sim N}{(n_1,n_2)=1}\varpi(n_1)\hat
g\left(\frac{l}{n_2};n_1,n_2,c\right)e\left(\frac{c\overline n_1
l}{n_2}\right)|.$$
Let $h(n_1,n_2)$ be the weight in this sum:
$$h(n_1,n_2)=\hat g\left(\frac{l}{n_2}\right)=\int_{-\infty}^\infty \hat W\left(\frac{tz}{q}\right)\hat W\left(\frac{(c-tn_1)z}{n_2q}\right)e(-\frac{tl}{n_2})\,dt.$$
\begin{lem}
The function $h$ depends smoothly on $n_1$ and $n_2$. For
$n_1,n_2\sim N$ and the same $\eta$ as above, we have
$$h(n_1,n_2)\ll \frac{q}{z}$$
and
$$h_{n_1}(n_1,n_2)\ll_{\eta} \frac{q}{Nz^{1-\eta}}.$$
\end{lem}
\begin{proof}
Since $W$ is smooth, it follows that $g$ depends smoothly on $n_1,n_2$ and therefore so does $\hat g$ and hence so does $h$. The bound for $h$ follows from that for $\hat g$ given in Lemma \ref{ltrunc}.
Differentiating we get
$$h_{n_1}(n_1,n_2)=\int_{-\infty}^\infty \hat W\left(\frac{tz}{q}\right)\frac{-tz}{n_2q}\hat W'\left(\frac{(c-tn_1)z}{n_2q}\right)e(-\frac{tl}{n_2})\,dt.$$
The contribution to the integral from $|t|\geq \frac{q}{z^{1-\eta/2}}$ can be shown to be sufficiently small. The remainder of the integral is then bounded by
$$\int_{|t|\leq \frac{q}{z^{1-\eta/2}}}\frac{|t|z}{Nq}\,dt\leq \int_{|t|\leq \frac{q}{z^{1-\eta/2}}}\frac{z^{\eta/2}}{N}\,dt\ll \frac{q}{Nz^{1-\eta}}.$$
\end{proof}
We may now use partial summation to remove the weight $h(n_1,n_2)$ from $S_9$. We deduce that
$$S_9\ll_{\eta} \frac{q}{z^{1-\eta}}S_{10}$$
where
$$S_{10}=\max_{N'\sim N}\sum_{n_2\sim N}|\twosum{N\leq n_1<N'}{(n_1,n_2)=1}\varpi(n_1)e\left(\frac{c\overline n_1 l}{n_2}\right)|.$$
We will estimate $S_{10}$ using our bound, \cite[Theorem 1.3]{mykloost}. For any $\epsilon>0$ this gives
$$S_{10}\ll_\epsilon\left(1+\frac{|cl|}{N^2}\right)^{\frac12}N^{2-\alpha+\epsilon},$$
with the specific value $\alpha=\frac18$. Since
$$0<|cl|\ll N^2z^{2\eta}$$
we deduce that
$$S_{10}\ll_\epsilon z^{\eta}N^{2-\alpha+\epsilon}.$$
We will eventually choose $\eta$ in such a way that the factor $z^\eta$ in this bound has no effect on the quality of our final result. It is the value of $\alpha$ which determines the size of the admissible range for $N$ and hence the limitation on $\tau$.
\begin{lem}
Under the previous assumptions on $M,N,x,q$ and $z$ we have
$$S_7\ll_A \frac{z^4(\log z)^{-A}}{M},$$
for any fixed $A>0$.
\end{lem}
\begin{proof}
We deduce from our bound for $S_{10}$ that
$$S_9\ll_\epsilon \frac{q}{z^{1-2\eta}}N^{2-\alpha+\epsilon}$$
and therefore that
$$S_8\ll_\epsilon \frac{N^{3-\alpha}z^{1+5\eta+\epsilon}}{q}.$$
By assumption we have
$$N\leq z^{\frac{16}{15}-\delta}=z^{\frac{2}{2-\alpha}-\delta}.$$
It follows that
\begin{eqnarray*}
MS_8&\ll_\epsilon& \frac{MN^{3-\alpha}z^{1+5\eta+\epsilon}}{q}\\
&\ll& N^{2-\alpha}z^{2+5\eta+\epsilon}\\
&\leq& z^{4-\delta(2-\alpha)+5\eta+\epsilon}.\\
\end{eqnarray*}
We can choose $\epsilon,\eta$ sufficiently small so that
$$5\eta+\epsilon< \delta(2-\alpha),$$
whence
$$S_8\ll_{A,\delta}\frac{z^4(\log z)^{-A}}{M}.$$
The bound for $S_7$ follows.
\end{proof}
Recall that we are assuming $N\gg \frac{q}{z^{1-\delta}}$. Observe that
$$\frac{q}{z}<z^{\frac{2}{2-\alpha}}$$
if and only if
$$q<z^{\frac{4-\alpha}{2-\alpha}}.$$
We note that
$$\frac{4-\alpha}{2-\alpha}\frac{1-\tau}{1+\tau}>1$$
if and only if $\tau<\frac{1}{3-\alpha}=\frac{8}{23}$. We therefore impose the condition $\tau<\frac{8}{23}$ in order to ensure that our range
for $N$ is nonempty.
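For the record, the algebra behind this equivalence: since $z\asymp q^{\frac{1-\tau}{1+\tau}}$, the condition $q<z^{\frac{4-\alpha}{2-\alpha}}$ amounts, up to the value of $\delta$, to $\frac{4-\alpha}{2-\alpha}\cdot\frac{1-\tau}{1+\tau}>1$; and $(4-\alpha)(1-\tau)>(2-\alpha)(1+\tau)$ rearranges to $(6-2\alpha)\tau<2$, that is $\tau<\frac{1}{3-\alpha}$, which equals $\frac{8}{23}$ when $\alpha=\frac{1}{8}$.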
It should be noted that in this section we have made nontrivial use of
the fact that our coefficients are the indicator function of the
primes. If we want to estimate a general Type II sum with
coefficients $\beta_n$ then different bounds must be used.
Specifically, if we use Duke, Friedlander and Iwaniec's result,
\cite[Theorem 2]{dfi}, then we can take $\alpha=\frac{1}{48}$. This
is much worse than the value $\frac{1}{8}$ which we have for our
special coefficients; although even that is considerably weaker than
$\alpha=\frac{1}{2}$, which we conjecture should be best possible.
\subsection{Completing the Proof of Theorem \ref{typeii}}
The result follows on combining all the above estimates. We have
\begin{eqnarray*}
S_1&=&S_{1,1}-2S_{1,2}+S_{1,3}\\
&=&S_{1,1}-\frac{3N^2M\hat W(0)^3z^2}{q^2}+O_A\left(\frac{z^4(\log z)^{-A}}{M}\right)\\
&=&S_2-\frac{3N^2M\hat W(0)^3z^2}{q^2}+O_A\left(\frac{z^4(\log z)^{-A}}{M}\right)\\
&=&S_3+S_4-\frac{3N^2M\hat W(0)^3z^2}{q^2}+O_A\left(\frac{z^4(\log z)^{-A}}{M}\right)\\
&=&S_4+O_A\left(\frac{z^4(\log z)^{-A}}{M}\right)\\
&=&S_5+S_7+O_A\left(\frac{z^4(\log z)^{-A}}{M}\right)\\
&=&O_A\left(\frac{z^4(\log z)^{-A}}{M}\right).\\
\end{eqnarray*}
It follows that
$$S=O_A(z^2(\log z)^{-A}),$$
as required.
\section{Proof of the Theorems}\label{proofs}
\subsection{Proof of Theorem \ref{mainthm}}
Suppose $MN=\frac{x}{4}$ and $M\leq z^{2-\delta}$, for some $\delta>0$. For any $A>0$ we have
$$\sum_{m\sim M}\varpi(m)=M+O_A(M(\log M)^{-A}).$$
It follows by Theorem \ref{typei} that
$$\sum_{m\sim M,n\sim N}\varpi(m)\Phi(mn)=\frac{\hat W(0)xz}{4q}+O_{\delta,A}(z^2(\log z)^{-A});$$
the fact that $\varpi(m)$ is only bounded by $\log m$ does not matter as this factor can be absorbed into the error term. Note that $\frac{xz}{q}\asymp z^2$.
Suppose, in addition, that
$$\max(z,\frac{q}{z^{1-\delta}})\leq N\leq z^{\frac{16}{15}-\delta}.$$
It follows from Theorem \ref{typeii} that for any $A>0$ we have
$$\sum_{m\sim M,n\sim N}\varpi(m)(\varpi(n)-1)\Phi(mn)\ll_{A,\delta} z^2(\log z)^{-A}.$$
Combining these two estimates we immediately deduce that
$$\sum_{m\sim M,n\sim N}\varpi(m)\varpi(n)\Phi(mn)=\frac{\hat W(0)xz}{4q}+O_{A,\delta}(z^2(\log z)^{-A}).$$
If $m$ and $n$ are prime then $\varpi(m)\varpi(n)\asymp (\log z)^2$. It follows that for sufficiently large $q$ we have
$$\twosum{m\sim M,n\sim N}{mn\in\mathcal E_2}\Phi(mn)\gg \frac{z^2}{(\log z)^2}.$$
For $\tau<\frac{8}{23}$ there are exponents $a(\tau)<b(\tau)$ such that the above bound holds for any range $(M,2M]\subseteq(z^{a(\tau)},z^{b(\tau)}]$. There are therefore $\gg_{\tau} \log z$ dyadic ranges available. Theorem
\ref{mainthm} follows.
\subsection{Proof of Theorem \ref{diophantine}}
Suppose $\alpha$ is irrational and $\tau<\frac{8}{23}$. By replacing $\tau$ by $\tau+\epsilon$ for a sufficiently small $\epsilon>0$ it is enough to show that there are infinitely many $n\in\mathcal E_2$ with
$$\|n\alpha\|\ll n^{-\tau}.$$
Let $\frac{c}{q}$ be a convergent in the continued fraction expansion of $\alpha$ with a sufficiently large denominator. We therefore have
$$|\alpha-\frac{c}{q}|\leq \frac{1}{q^2}.$$
If we let $x=q^{\frac{2}{1+\tau}}$, $z=\frac{x}{q}$ and $a=\overline c$ then any $n\in\mathcal A$ satisfies
$$cn\equiv k\pmod q\text{ for some }k\in [0,z].$$
We therefore have
$$\|\frac{cn}{q}\|\leq\frac{z}{q}.$$
It follows that
$$\|n\alpha\|\leq\|(\alpha-\frac{c}{q})n\|+\|\frac{cn}{q}\|\ll n^{-\tau}.$$
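A quick check of the exponents behind the last bound, which is not needed elsewhere: for $n\in\mathcal A$ we have $n\asymp x=q^{\frac{2}{1+\tau}}$, so $n^{-\tau}\asymp q^{-\frac{2\tau}{1+\tau}}$, while $\|(\alpha-\frac{c}{q})n\|\leq\frac{n}{q^2}\ll q^{\frac{2}{1+\tau}-2}=q^{-\frac{2\tau}{1+\tau}}$ and $\frac{z}{q}=q^{\frac{1-\tau}{1+\tau}-1}=q^{-\frac{2\tau}{1+\tau}}$.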
Since there are infinitely many convergents to $\alpha$ it is thus sufficient to show that $\mathcal A$ contains members of $\mathcal E_2$. This follows from Theorem \ref{mainthm}.
\subsection{Proof of Theorem \ref{palindrome}}
Recall that
$$\mathcal P_3(b)=\{j(b^2+1)+kb:j\in (0,b)\cap\mathbb Z,k\in [0,b)\cap\mathbb Z\}.$$
We take $\tau=\frac{1}{3},q=b^2+1$, $z=b$, $x=b^3$ and $a=b$. The set $\mathcal A$ is then contained in $\mathcal P_3(b)$ so the result follows from Theorem \ref{mainthm}.
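In a little more detail (a routine check): if $n\in\mathcal A$ then $n\equiv bk\pmod{b^2+1}$ for some integer $k\in[0,b)$, so $n=j(b^2+1)+kb$ for some integer $j$; since $\frac{b^3}{4}<n\leq b^3$ and $0\leq kb<b^2$ we get $0<j<b$ for all sufficiently large $b$, and hence $n\in\mathcal P_3(b)$.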
\addcontentsline{toc}{section}{References}
Mathematical Institute,
24--29, St. Giles',
Oxford
OX1 3LB
UK
{\tt [email protected]}
\end{document}
\begin{document}
\title{Recursive formula for
$\psi^g-\lambda_1\psi^{g-1}+\cdots+(-1)^g\lambda_g$ in ${\overline{\cM}}_{g,1}$}
\author{D.\ Arcara}
\address{Department of Mathematics, University of Utah,
155 S. 1400 E., Room 233, Salt Lake City, UT 84112-0090, USA}
\email{[email protected]}
\author{F.\ Sato}
\address{School of Mathematics, Korea Institute for Advanced Study,
Cheongnyangni 2-dong, Dongdaemun-gu, Seoul 130-722, South Korea}
\email{[email protected]}
\begin{abstract}
Mumford proved that
$\psi^g-\lambda_1\psi^{g-1}+\cdots+(-1)^g\lambda_g=0$ in the Chow ring of
${\mathcal M}_{g,1}$ [Mum83].
We find an explicit recursive formula for
$\psi^g-\lambda_1\psi^{g-1}+\cdots+(-1)^g\lambda_g$
in the tautological ring of ${\overline{\cM}}_{g,1}$ as a combination of classes supported
on boundary strata.
\end{abstract}
\maketitle
\section{Introduction}
Mumford proved in [Mum83] that
$\psi^g-\lambda_1\psi^{g-1}+\cdots+(-1)^g\lambda_g=0$ in the Chow ring of
${\mathcal M}_{g,1}$.
Moreover, he showed that this class is supported on the boundary strata with a
marked genus $0$ component.
Graber and Vakil proved in [GraVak05] that every codimension $g$ class in the
tautological ring of ${\overline{\cM}}_{g,1}$ is supported on the boundary strata with at
least one genus $0$ component.
We complement these results by finding an explicit recursive formula for
$\psi^g-\lambda_1\psi^{g-1}+\cdots+(-1)^g\lambda_g$ in the tautological ring
of ${\overline{\cM}}_{g,1}$ as a combination of classes supported on boundary strata.
It is clear from the formula being recursive that all the boundary strata
have a genus $0$ component in them, but it is not obvious from the formula
that the marked point must be on a genus $0$ component.
We simplified the formula for $g<5$ in Section \ref{simplify}, and checked
that this is the case.
\begin{thm}\label{thm}
In the tautological ring of ${\overline{\cM}}_{g,1}$,
$$ \sum_{i=0}^g (-1)^i \lambda_i \psi^{g - i} =
\sum_{h=1}^g \left( 1 - \frac{h}{g} \right) \iota_h{}_*(c_h), $$
where
$$ c_h := \sum_{i=0}^{g-1} (-1)^{h+i} \left[
\left( \sum_{j=0}^h (-1)^j \lambda^0_j \psi_0^{i-j} \right)
\left( \sum_{j=0}^{g-h} (-1)^j \lambda^\infty_j \psi_\infty^{g-1-i-j} \right) \right], $$
$\iota_h$ is the natural boundary map
$$ \iota_h \colon {\overline{\cM}}_{h,2} \times {\overline{\cM}}_{g-h,1} {\longrightarrow} {\overline{\cM}}_{g,1}, $$
$\psi_0$, $\psi_\infty$ are descendents at the marked points glued by $\iota_h$, and
$\lambda^0$, $\lambda^\infty$ are the $\lambda$-classes on ${\overline{\cM}}_{h,2}$ and ${\overline{\cM}}_{g-h,1}$,
respectively.
\end{thm}
This formula is actually the first step of an algorithm which calculates each
of the classes $\psi^g$, $\lambda_1\psi^{g-1}$, $\dots$, $\lambda_g$ in terms
of classes supported on boundary strata.
We want to single out the class
$\psi^g-\lambda_1\psi^{g-1}+\cdots+(-1)^g\lambda_g$, though, because it is the
only class we found so far in the tautological ring of ${\overline{\cM}}_{g,1}$ which has
a nice recursive formula, and can therefore be easily calculated.
\section{Virtual localization}
The main tool we use to prove our theorems is the virtual localization theorem
by Graber and Pandharipande [GraPan99].
\begin{thm2}[Virtual localization theorem]
Suppose $f\colon X\rightarrow X'$ is a ${\mathbb{C}}^*$-equivariant map of proper
Deligne--Mumford quotient stacks with a ${\mathbb{C}}^*$-equivariant perfect
obstruction theory.
If $i'\colon F'\hookrightarrow X'$ is a fixed substack and $c\in A^*_{{\mathbb{C}}^*}(X)$, let
$f|_{F_i}\colon F_i\rightarrow F'$ be the restriction of $f$ to each of the fixed
substacks $F_i\subseteq f^{-1}(F')$.
Then
$$\sum_{F_i}{f|_{F_i}}_*\frac{i_{F_i}^*c}{\epsilon_{{\mathbb{C}}^*}(F_i^{\text{vir}})}
=\frac{i'^*f_*c}{\epsilon_{{\mathbb{C}}^*}(F'^{\text{vir}})}, $$
where $i_{F_i}\colon F_i\rightarrow X$ and $\epsilon_{{\mathbb{C}}^*}(F^{\text{vir}})$ is the
virtual equivariant Euler class of the ``virtual'' normal bundle
$F^{\text{vir}}$.
\end{thm2}
\begin{rem}
The conditions in the theorem are satisfied for the Kontsevich--Manin spaces
${\overline{\cM}}_{g,n}({\mathbb{P}}^m,d)$ of stable maps, and $\epsilon_{{\mathbb{C}}^*}(F^{\text{vir}})$
can be explicitly computed in terms of $\psi$ and $\lambda$-classes [GraPan99]
(see also [FabPan05]).
\end{rem}
We define a ${\mathbb{C}}^*$-action on ${\mathbb{P}}^{1}$ by
$ a \cdot [x : y] = [x : a y] $
for $a\in{\mathbb{C}}^*$ and $[x:y]\in{\mathbb{P}}^{1}$.
There are two fixed points, $0$ and $\infty$, and the torus acts with weight
$1$ on the tangent space at $0$ and $-1$ on the tangent space at $\infty$.
This ${\mathbb{C}}^*$-action induces ${\mathbb{C}}^*$-actions on ${\overline{\cM}}_{g,n}({\mathbb{P}}^{1},d)$, and we
shall consider the trivial ${\mathbb{C}}^*$-action on ${\overline{\cM}}_{g,n}$.
\section{Proof of Theorem \ref{thm}}
We use virtual localization on the natural function
$ f \colon {\overline{\cM}}_{g,3}({\mathbb{P}}^1,1) {\longrightarrow} {\overline{\cM}}_{g,3} \times ({\mathbb{P}}^1)^3 $
defined by
$ f([g \colon (C,p_1,p_2,p_3) \rightarrow {\mathbb{P}}^1]) =
((C_{\textrm{stab}},p_1,p_2,p_3),g(p_1),g(p_2),g(p_3)). $
Consider the fixed locus
$$ F' := {\overline{\cM}}_{g,3} \times \{ 0 \} \times \{ \infty \} \times \{ \infty \}
\hookrightarrow {\overline{\cM}}_{g,3} \times ({\mathbb{P}}^1)^3, $$
and apply the virtual localization theorem with $c=[1]^\textrm{vir}$ to obtain
$$ \sum_{F_i} \left( f|_{F_i} \right)_*
\frac{[1]^\textrm{vir}}{\epsilon_{{\mathbb{C}}^*}(F_i^{\text{vir}})} =
\frac{i'^* f_*[1]^\textrm{vir}}{t(-t)(-t)}. $$
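For clarity, the denominator here is the equivariant Euler class of the virtual normal bundle of $F'$: the normal directions are the tangent spaces of ${\mathbb{P}}^1$ at $0$, $\infty$ and $\infty$, on which ${\mathbb{C}}^*$ acts with weights $1$, $-1$ and $-1$, so that $\epsilon_{{\mathbb{C}}^*}(F'^{\text{vir}})=t(-t)(-t)$.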
There are $g+1$ fixed loci mapping to $F'$.
One fixed locus has a marked point mapping to $0$ and a curve in ${\overline{\cM}}_{g,3}$
mapping to $\infty$.
We shall denote it by $F_0$.
Then there are $g$ fixed loci which have a curve in ${\overline{\cM}}_{h,2}$ mapping to
$0$ and a curve in ${\overline{\cM}}_{g-h,3}$ mapping to $\infty$ (with $1\leq h\leq g$).
We shall denote these fixed loci by $F_h$.
Note that $F_0\simeq{\overline{\cM}}_{g,3}$ and $F_h\simeq{\overline{\cM}}_{h,2}\times{\overline{\cM}}_{g-h,3}$
($1\leq h\leq g$).
\begin{eqnarray*}
\setlength{\unitlength}{0.01cm}
\begin{picture}(1120,250)(0,0)
\thicklines
\put(50,220){$F_0$}
\put(140,0){\line(0,1){200}}
\put(140,15){\circle*{15}}
\put(0,185){\line(1,0){150}}
\put(50,185){\circle*{15}}
\put(80,185){\circle*{15}}
{\tiny
\put(150,10){$0$}
\put(150,180){$\infty$}
\put(20,150){$\textrm{genus }g$}
}
\put(300,220){$F_1$}
\put(390,0){\line(0,1){200}}
\put(250,15){\line(1,0){150}}
\put(320,15){\circle*{15}}
\put(250,185){\line(1,0){150}}
\put(300,185){\circle*{15}}
\put(330,185){\circle*{15}}
{\tiny
\put(400,10){$0$}
\put(400,180){$\infty$}
\put(270,35){$\textrm{genus }1$}
\put(240,150){$\textrm{genus }g-1$}
}
\put(500,100){\circle*{7}}
\put(600,100){\circle*{7}}
\put(550,100){\circle*{7}}
\put(750,220){$F_{g-1}$}
\put(840,0){\line(0,1){200}}
\put(700,15){\line(1,0){150}}
\put(770,15){\circle*{15}}
\put(700,185){\line(1,0){150}}
\put(750,185){\circle*{15}}
\put(780,185){\circle*{15}}
{\tiny
\put(850,10){$0$}
\put(850,180){$\infty$}
\put(720,150){$\textrm{genus }1$}
\put(690,35){$\textrm{genus }g-1$}
}
\put(1000,220){$F_g$}
\put(1090,0){\line(0,1){200}}
\put(950,15){\line(1,0){150}}
\put(1020,15){\circle*{15}}
\put(950,185){\line(1,0){150}}
\put(1000,185){\circle*{15}}
\put(1030,185){\circle*{15}}
{\tiny
\put(1100,10){$0$}
\put(1100,180){$\infty$}
\put(970,35){$\textrm{genus }g$}
\put(970,150){$\textrm{genus }0$}
}
\end{picture}
\end{eqnarray*}
Since $i'^*f_*[1]^\textrm{vir}$ is a polynomial in $t$, the sum of the
contributions from the coefficient of $t^{-4}$ on each fixed locus is $0$.
Call this contribution $a_{-4}$.
We have that $\pi_{1,*}(\pi_{2,*}(a_{-4}\cdot\psi_3))=0$.
We now calculate the contribution to the left hand side one fixed locus at a
time.
\begin{itemize}
\item
For $F_0$, we obtain
$$ \frac{[1]^\textrm{vir}}{\epsilon_{{\mathbb{C}}^*}(F_0^{\text{vir}})} = \frac{1}{t}
\cdot (-1)^g
\frac{t^g + \lambda_1 t^{g-1} + \cdots + \lambda_g}{- t (- t - \psi_\infty)}, $$
and the coefficient of $t^{-4}$ is
$ - \left( \psi_\infty^{g+1} - \lambda_1 \psi_\infty^g + \cdots +
(-1)^g \lambda_g \psi_\infty \right). $
Under the isomorphism $F_0\simeq{\overline{\cM}}_{g,3}$, $\psi_\infty$ gets identified with
$\psi_1$, and the contribution is therefore
$$ - \left( \psi_1^{g+1} - \lambda_1 \psi_1^g + \cdots +
(-1)^g \lambda_g \psi_1 \right). $$
\item
For $F_h$ ($1\leq h\leq g$), we obtain
$$ \frac{[1]^\textrm{vir}}{\epsilon_{{\mathbb{C}}^*}(F_h^{\text{vir}})} =
\frac{t^h - \lambda^0_1 t^{h-1} + \cdots + (-1)^h \lambda^0_h}{t (t - \psi_0)} \cdot
(-1)^{g-h}
\frac{t^{g-h} + \lambda^\infty_1 t^{g-h-1} + \cdots + \lambda^\infty_{g-h}}{- t (- t - \psi_\infty)}, $$
and the coefficient of $t^{-4}$ is\footnote{Note that $c'_h$ is the
summation (with the appropriate sign) of all possible products of codimension
$g$ of a class on the curve mapping to $0$ with a class on the curve mapping
to $\infty$.}
$$ c'_h := \sum_{i=0}^g (-1)^{h+i} \left[
\left( \sum_{j=0}^h (-1)^j \lambda^0_j \psi_0^{i-j} \right)
\left( \sum_{j=0}^{g-h} (-1)^j \lambda^\infty_j \psi_\infty^{g-i-j} \right) \right]. $$
This is a class of codimension $g$ in ${\overline{\cM}}_{h,2}\times{\overline{\cM}}_{g-h,3}$ which
maps to the codimension $g+1$ class $\iota_h{}_*(c'_h)$ in ${\overline{\cM}}_{g,3}$ under
$(f|_{F_h})_*$.
\end{itemize}
To summarize, we obtain that
$$ - \left( \psi_1^{g+1} - \lambda_1 \psi_1^g + \cdots +
(-1)^g \lambda_g \psi_1 \right) + \sum_{h=1}^g \iota_h{}_*(c'_h) = 0 $$
in ${\overline{\cM}}_{g,3}$.
The first step is now to multiply by $\psi_3$ and push-forward to
${\overline{\cM}}_{g,2}$.
\begin{itemize}
\item
If $h=0$, we obtain
$ - 2 g \left( \psi_1^{g+1} - \lambda_1 \psi_1^g + \cdots +
(-1)^g \lambda_g \psi_1 \right) $
in ${\overline{\cM}}_{g,2}$.
\item
If $1\leq h<g$, note that, since the third marked point is on the curve at
$\infty$, we are really multiplying by $\psi_3$ in ${\overline{\cM}}_{g-h,3}$ and
pushing-forward to ${\overline{\cM}}_{g-h,2}$.
We therefore obtain, by Dilaton, the class
$ 2 (g - h) \iota_h{}_*(c'_h), $
which is a class of codimension $g+1$ in ${\overline{\cM}}_{g,2}$.
\item
If $h=g$, then $\psi_3=0$ because it is a descendent at a marked point of a
genus $0$ curve with $3$ markings (the curve mapping to $\infty$).
\end{itemize}
Let us now suppose that $h<g$.
The second and last step is to push-forward this class via the map that
forgets the second marked point.
\begin{itemize}
\item
If $h=0$, we obtain, by String,
$ - 2 g \left( \psi_1^g - \lambda_1 \psi_1^{g - 1} + \cdots +
(-1)^g \lambda_g \right). $
\item
If $1\leq h<g$, we obtain, by String, the class
$ 2 (g - h) \iota_h{}_*(c_h), $
where $c_h$ is just $c'_h$ with every power of $\psi_\infty$ lowered by $1$ (with
the convention that $\psi_\infty^{-1}=0$), i.e.,
$$ c_h = \sum_{i=0}^{g-1} (-1)^{h+i} \left[
\left( \sum_{j=0}^h (-1)^j \lambda^0_j \psi_0^{i-j} \right)
\left( \sum_{j=0}^{g-h} (-1)^j \lambda^\infty_j \psi_\infty^{g-1-i-j} \right) \right]. $$
\end{itemize}
Putting it all together, we obtain that
$$ - 2 g \left( \psi_1^g - \lambda_1 \psi_1^{g - 1} + \cdots + (-1)^g \lambda_g \right)
+ \sum_{h=1}^g 2 (g - h) \iota_h{}_*(c_h) = 0, $$
from which we can derive the formula of Theorem \ref{thm}.
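In a little more detail: dividing the last displayed relation by $2g$ and rearranging gives
$$ \sum_{i=0}^g (-1)^i \lambda_i \psi_1^{g-i} = \sum_{h=1}^g \frac{g-h}{g}\, \iota_h{}_*(c_h) = \sum_{h=1}^g \left( 1 - \frac{h}{g} \right) \iota_h{}_*(c_h), $$
which is the statement of Theorem \ref{thm}.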
\qed
\begin{rems}
(I) By taking the coefficient of $t^{-3-j}$ with $j>1$, it is possible to
find a similar formula for
$ \psi^{g+j-1} - \lambda_1 \psi^{g+j-2} + \cdots +
(-1)^g \lambda_g \psi^{j-1} $
in terms of classes supported on boundary strata.
(II) In [GraVak05], Graber and Vakil proved that a codimension $g$ class in
the tautological ring of ${\overline{\cM}}_{g,1}$ can be written as a sum of classes
supported on boundary strata with at least one genus $0$ component.
By induction on $g$, it is easy to see that this is the case for our $c_h$
classes.
(III) Using the same function $f$ as above, but with the fixed locus
${\overline{\cM}}_{g,2}\times\{0\}^2\times\{\infty\}$ instead of
${\overline{\cM}}_{g,2}\times\{0\}\times\{\infty\}^2$, it is possible to obtain the
following tautological relation on ${\overline{\cM}}_{g,1}$:
$$ \sum_{h=1}^{g-1} (2h) \iota_h{}_* (c_h) + (2g) \pi_* \left( \psi_2^{g+1} -
\lambda_1 \psi_2^g + \cdots + (-1)^g \lambda_g \psi_2 \right) = 0. $$
\end{rems}
\section{Explicit formulas for low genus}\label{simplify}
The formula of Theorem \ref{thm} can be simplified recursively, and we
calculated the answer for low values of $g$.
Note that these formulas were already known for $g=1$ and $g=2$, but they were
unknown for higher $g$'s.
Genus $1$: In ${\overline{\cM}}_{1,1}$,
{\tiny
\begin{eqnarray*}
\setlength{\unitlength}{0.0075cm}
\begin{picture}(280,140)(0,0)
\thicklines
\put(100,50){\ellipse{200}{100}}
\path(50,54)(60,52)(70,50)(80,48)(90,46)(100,45)(110,46)(120,48)(130,50)(140,52)(150,54)
\path(70,50)(80,52)(90,54)(100,55)(110,54)(120,52)(130,50)
\put(100,22){\circle*{10}}
\put(50,115){$\psi-\lambda_1$}
\put(210,45){$=0.$}
\end{picture}
\end{eqnarray*}
}
Genus $2$: In ${\overline{\cM}}_{2,1}$,
{\tiny
\begin{eqnarray*}
\setlength{\unitlength}{0.0075cm}
\begin{picture}(900,140)(0,0)
\thicklines
\path(0,50)(1,57)(2,60)(5,66)(8,70)(14,76)(18,79)(21,81)(23,82)(26,84)(34,88)(37,89)(40,90)(52,94)(60,96)(65,97)(72,98)(80,99)(100,100)(101,100)
\path(0,50)(1,43)(2,40)(5,34)(8,30)(14,24)(18,21)(21,19)(23,18)(26,16)(34,12)(37,11)(40,10)(52,6)(60,4)(65,3)(72,2)(80,1)(100,0)(101,0)
\path(175,88)(160,90)(148,94)(140,96)(135,97)(128,98)(120,99)(100,100)(99,100)
\path(175,12)(160,10)(148,6)(140,4)(135,3)(128,2)(120,1)(100,0)(99,0)
\path(350,50)(349,57)(348,60)(345,66)(342,70)(336,76)(332,79)(329,81)(327,82)(324,84)(316,88)(313,89)(310,90)(298,94)(290,96)(285,97)(278,98)(270,99)(250,100)(249,100)
\path(350,50)(349,43)(348,40)(345,34)(342,30)(336,24)(332,21)(329,19)(327,18)(324,16)(316,12)(313,11)(310,10)(298,6)(290,4)(285,3)(278,2)(270,1)(250,0)(249,0)
\path(175,88)(190,90)(202,94)(210,96)(215,97)(222,98)(230,99)(250,100)(251,100)
\path(175,12)(190,10)(202,6)(210,4)(215,3)(222,2)(230,1)(250,0)(251,0)
\path(50,54)(60,52)(70,50)(80,48)(90,46)(100,45)(110,46)(120,48)(130,50)(140,52)(150,54)
\path(70,50)(80,52)(90,54)(100,55)(110,54)(120,52)(130,50)
\path(300,54)(290,52)(280,50)(270,48)(260,46)(250,45)(240,46)(230,48)(220,50)(210,52)(200,54)
\path(280,50)(270,52)(260,54)(250,55)(240,54)(230,52)(220,50)
\put(175,50){\circle*{10}}
\put(50,115){$\psi^2-\lambda_1\psi+\lambda_2$}
\put(360,45){$=$}
\put(500,50){\ellipse{200}{100}}
\path(450,54)(460,52)(470,50)(480,48)(490,46)(500,45)(510,46)(520,48)(530,50)(540,52)(550,54)
\path(470,50)(480,52)(490,54)(500,55)(510,54)(520,52)(530,50)
\put(650,50){\circle{100}}
\path(700,50)(690,48)(680,46)(670,44)(660,42)(650,41)(640,42)(630,44)(620,46)(610,48)(600,50)
\path(690,52)(680,54)\path(670,56)(660,58)\path(650,59)(640,58)\path(630,56)(620,54)\path(610,52)(600,50)
\put(650,20){\circle*{10}}
\put(800,50){\ellipse{200}{100}}
\path(750,54)(760,52)(770,50)(780,48)(790,46)(800,45)(810,46)(820,48)(830,50)(840,52)(850,54)
\path(770,50)(780,52)(790,54)(800,55)(810,54)(820,52)(830,50)
\end{picture}
\end{eqnarray*}
}
Genus $3$: In ${\overline{\cM}}_{3,1}$,
{\tiny
\begin{eqnarray*}
\setlength{\unitlength}{0.0075cm}
\begin{picture}(1750,200)(0,-50)
\thicklines
\path(0,50)(1,57)(2,60)(5,66)(8,70)(14,76)(18,79)(21,81)(23,82)(26,84)(34,88)(37,89)(40,90)(52,94)(60,96)(65,97)(72,98)(80,99)(100,100)(101,100)
\path(0,50)(1,43)(2,40)(5,34)(8,30)(14,24)(18,21)(21,19)(23,18)(26,16)(34,12)(37,11)(40,10)(52,6)(60,4)(65,3)(72,2)(80,1)(100,0)(101,0)
\path(175,88)(160,90)(148,94)(140,96)(135,97)(128,98)(120,99)(100,100)(99,100)
\path(175,12)(160,10)(148,6)(140,4)(135,3)(128,2)(120,1)(100,0)(99,0)
\path(175,88)(190,90)(202,94)(210,96)(215,97)(222,98)(230,99)(250,100)(251,100)
\path(175,12)(190,10)(202,6)(210,4)(215,3)(222,2)(230,1)(250,0)(251,0)
\path(325,88)(310,90)(298,94)(290,96)(285,97)(278,98)(270,99)(250,100)(249,100)
\path(325,12)(310,10)(298,6)(290,4)(285,3)(278,2)(270,1)(250,0)(249,0)
\path(325,88)(340,90)(352,94)(360,96)(365,97)(372,98)(380,99)(400,100)(401,100)
\path(325,12)(340,10)(352,6)(360,4)(365,3)(372,2)(380,1)(400,0)(401,0)
\path(500,50)(499,57)(498,60)(495,66)(492,70)(486,76)(482,79)(479,81)(477,82)(474,84)(466,88)(463,89)(460,90)(448,94)(440,96)(435,97)(428,98)(420,99)(400,100)(399,100)
\path(500,50)(499,43)(498,40)(495,34)(492,30)(486,24)(482,21)(479,19)(477,18)(474,16)(466,12)(463,11)(460,10)(448,6)(440,4)(435,3)(428,2)(420,1)(400,0)(399,0)
\path(50,54)(60,52)(70,50)(80,48)(90,46)(100,45)(110,46)(120,48)(130,50)(140,52)(150,54)
\path(70,50)(80,52)(90,54)(100,55)(110,54)(120,52)(130,50)
\path(300,54)(290,52)(280,50)(270,48)(260,46)(250,45)(240,46)(230,48)(220,50)(210,52)(200,54)
\path(280,50)(270,52)(260,54)(250,55)(240,54)(230,52)(220,50)
\path(350,54)(360,52)(370,50)(380,48)(390,46)(400,45)(410,46)(420,48)(430,50)(440,52)(450,54)
\path(370,50)(380,52)(390,54)(400,55)(410,54)(420,52)(430,50)
\put(250,22){\circle*{10}}
\put(50,115){$\psi^3-\lambda_1\psi^2+\lambda_2\psi-\lambda_3$}
\put(510,45){$=$}
\path(550,50)(551,57)(552,60)(555,66)(558,70)(564,76)(568,79)(571,81)(573,82)(576,84)(584,88)(587,89)(590,90)(602,94)(610,96)(615,97)(622,98)(630,99)(650,100)(651,100)
\path(550,50)(551,43)(552,40)(555,34)(558,30)(564,24)(568,21)(571,19)(573,18)(576,16)(584,12)(587,11)(590,10)(602,6)(610,4)(615,3)(622,2)(630,1)(650,0)(651,0)
\path(725,88)(710,90)(698,94)(690,96)(685,97)(678,98)(670,99)(650,100)(649,100)
\path(725,12)(710,10)(698,6)(690,4)(685,3)(678,2)(670,1)(650,0)(649,0)
\path(900,50)(899,57)(898,60)(895,66)(892,70)(886,76)(882,79)(879,81)(877,82)(874,84)(866,88)(863,89)(860,90)(848,94)(840,96)(835,97)(828,98)(820,99)(800,100)(799,100)
\path(900,50)(899,43)(898,40)(895,34)(892,30)(886,24)(882,21)(879,19)(877,18)(874,16)(866,12)(863,11)(860,10)(848,6)(840,4)(835,3)(828,2)(820,1)(800,0)(799,0)
\path(725,88)(740,90)(752,94)(760,96)(765,97)(772,98)(780,99)(800,100)(801,100)
\path(725,12)(740,10)(752,6)(760,4)(765,3)(772,2)(780,1)(800,0)(801,0)
\path(700,54)(690,52)(680,50)(670,48)(660,46)(650,45)(640,46)(630,48)(620,50)(610,52)(600,54)
\path(680,50)(670,52)(660,54)(650,55)(640,54)(630,52)(620,50)
\path(750,54)(760,52)(770,50)(780,48)(790,46)(800,45)(810,46)(820,48)(830,50)(840,52)(850,54)
\path(770,50)(780,52)(790,54)(800,55)(810,54)(820,52)(830,50)
\put(675,115){$\psi-\lambda_1$}
\put(950,50){\circle{100}}
\path(1000,50)(990,48)(980,46)(970,44)(960,42)(950,41)(940,42)(930,44)(920,46)(910,48)(900,50)
\path(990,52)(980,54)\path(970,56)(960,58)\path(950,59)(940,58)\path(930,56)(920,54)\path(910,52)(900,50)
\put(950,20){\circle*{10}}
\put(1100,50){\ellipse{200}{100}}
\path(1050,54)(1060,52)(1070,50)(1080,48)(1090,46)(1100,45)(1110,46)(1120,48)(1130,50)(1140,52)(1150,54)
\path(1070,50)(1080,52)(1090,54)(1100,55)(1110,54)(1120,52)(1130,50)
\put(1210,45){$+$}
\put(1350,0){\ellipse{200}{100}}
\path(1300,4)(1310,2)(1320,0)(1330,-2)(1340,-4)(1350,-5)(1360,-4)(1370,-2)(1380,0)(1390,2)(1400,4)
\path(1320,0)(1330,2)(1340,4)(1350,5)(1360,4)(1370,2)(1380,0)
\put(1500,0){\circle{100}}
\path(1550,0)(1540,-2)(1530,-4)(1520,-6)(1510,-8)(1500,-9)(1490,-8)(1480,-6)(1470,-4)(1460,-2)(1450,0)
\path(1540,2)(1530,4)\path(1520,6)(1510,8)\path(1500,9)(1490,8)\path(1480,6)(1470,4)\path(1460,2)(1450,0)
\put(1500,-30){\circle*{10}}
\put(1650,0){\ellipse{200}{100}}
\path(1600,4)(1610,2)(1620,0)(1630,-2)(1640,-4)(1650,-5)(1660,-4)(1670,-2)(1680,0)(1690,2)(1700,4)
\path(1620,0)(1630,2)(1640,4)(1650,5)(1660,4)(1670,2)(1680,0)
\put(1500,100){\ellipse{200}{100}}
\path(1450,104)(1460,102)(1470,100)(1480,98)(1490,96)(1500,95)(1510,96)(1520,98)(1530,100)(1540,102)(1550,104)
\path(1470,100)(1480,102)(1490,104)(1500,105)(1510,104)(1520,102)(1530,100)
\end{picture}
\end{eqnarray*}
}
Genus $4$: In ${\overline{\cM}}_{4,1}$,
{\tiny
\begin{eqnarray*}
\setlength{\unitlength}{0.0075cm}
\begin{picture}(1900,700)(-50,0)
\thicklines
\path(600,600)(601,607)(602,610)(605,616)(608,620)(614,626)(618,629)(621,631)(623,632)(626,634)(634,638)(637,639)(640,640)(652,644)(660,646)(665,647)(672,648)(680,649)(700,650)(701,650)
\path(600,600)(601,593)(602,590)(605,584)(608,580)(614,574)(618,571)(621,569)(623,568)(626,566)(634,562)(637,561)(640,560)(652,556)(660,554)(665,553)(672,552)(680,551)(700,550)(701,550)
\path(775,638)(760,640)(748,644)(740,646)(735,647)(728,648)(720,649)(700,650)(699,650)
\path(775,562)(760,560)(748,556)(740,554)(735,553)(728,552)(720,551)(700,550)(699,550)
\path(775,638)(790,640)(802,644)(810,646)(815,647)(822,648)(830,649)(850,650)(851,650)
\path(775,562)(790,560)(802,556)(810,554)(815,553)(822,552)(830,551)(850,550)(851,550)
\path(650,604)(660,602)(670,600)(680,598)(690,596)(700,595)(710,596)(720,598)(730,600)(740,602)(750,604)
\path(670,600)(680,602)(690,604)(700,605)(710,604)(720,602)(730,600)
\path(925,638)(910,640)(898,644)(890,646)(885,647)(878,648)(870,649)(850,650)(849,650)
\path(925,562)(910,560)(898,556)(890,554)(885,553)(878,552)(870,551)(850,550)(849,550)
\path(925,638)(940,640)(952,644)(960,646)(965,647)(972,648)(980,649)(1000,650)(1001,650)
\path(925,562)(940,560)(952,556)(960,554)(965,553)(972,552)(980,551)(1000,550)(1001,550)
\path(800,604)(810,602)(820,600)(830,598)(840,596)(850,595)(860,596)(870,598)(880,600)(890,602)(900,604)
\path(820,600)(830,602)(840,604)(850,605)(860,604)(870,602)(880,600)
\path(1075,638)(1060,640)(1048,644)(1040,646)(1035,647)(1028,648)(1020,649)(1000,650)(999,650)
\path(1075,562)(1060,560)(1048,556)(1040,554)(1035,553)(1028,552)(1020,551)(1000,550)(999,550)
\path(1075,638)(1090,640)(1102,644)(1110,646)(1115,647)(1122,648)(1130,649)(1150,650)(1151,650)
\path(1075,562)(1090,560)(1102,556)(1110,554)(1115,553)(1122,552)(1130,551)(1150,550)(1151,550)
\path(950,604)(960,602)(970,600)(980,598)(990,596)(1000,595)(1010,596)(1020,598)(1030,600)(1040,602)(1050,604)
\path(970,600)(980,602)(990,604)(1000,605)(1010,604)(1020,602)(1030,600)
\path(1250,600)(1249,607)(1248,610)(1245,616)(1242,620)(1236,626)(1232,629)(1229,631)(1227,632)(1224,634)(1216,638)(1213,639)(1210,640)(1198,644)(1190,646)(1185,647)(1178,648)(1170,649)(1150,650)(1149,650)
\path(1250,600)(1249,593)(1248,590)(1245,584)(1242,580)(1236,574)(1232,571)(1229,569)(1227,568)(1224,566)(1216,562)(1213,561)(1210,560)(1198,556)(1190,554)(1185,553)(1178,552)(1170,551)(1150,550)(1149,550)
\path(1100,604)(1110,602)(1120,600)(1130,598)(1140,596)(1150,595)(1160,596)(1170,598)(1180,600)(1190,602)(1200,604)
\path(1120,600)(1130,602)(1140,604)(1150,605)(1160,604)(1170,602)(1180,600)
\put(925,600){\circle*{10}}
\put(655,665){$\psi^4-\lambda_1\psi^3+\lambda_2\psi^2-\lambda_3\psi+\lambda_4$}
\put(1265,595){$=$}
\path(50,400)(51,407)(52,410)(55,416)(58,420)(64,426)(68,429)(71,431)(73,432)(76,434)(84,438)(87,439)(90,440)(102,444)(110,446)(115,447)(122,448)(130,449)(150,450)(151,450)
\path(50,400)(51,393)(52,390)(55,384)(58,380)(64,374)(68,371)(71,369)(73,368)(76,366)(84,362)(87,361)(90,360)(102,356)(110,354)(115,353)(122,352)(130,351)(150,350)(151,350)
\path(225,438)(210,440)(198,444)(190,446)(185,447)(178,448)(170,449)(150,450)(149,450)
\path(225,362)(210,360)(198,356)(190,354)(185,353)(178,352)(170,351)(150,350)(149,350)
\path(225,438)(240,440)(252,444)(260,446)(265,447)(272,448)(280,449)(300,450)(301,450)
\path(225,362)(240,360)(252,356)(260,354)(265,353)(272,352)(280,351)(300,350)(301,350)
\path(375,438)(360,440)(348,444)(340,446)(335,447)(328,448)(320,449)(300,450)(299,450)
\path(375,362)(360,360)(348,356)(340,354)(335,353)(328,352)(320,351)(300,350)(299,350)
\path(375,438)(390,440)(402,444)(410,446)(415,447)(422,448)(430,449)(450,450)(451,450)
\path(375,362)(390,360)(402,356)(410,354)(415,353)(422,352)(430,351)(450,350)(451,350)
\path(550,400)(549,407)(548,410)(545,416)(542,420)(536,426)(532,429)(529,431)(527,432)(524,434)(516,438)(513,439)(510,440)(498,444)(490,446)(485,447)(478,448)(470,449)(450,450)(449,450)
\path(550,400)(549,393)(548,390)(545,384)(542,380)(536,374)(532,371)(529,369)(527,368)(524,366)(516,362)(513,361)(510,360)(498,356)(490,354)(485,353)(478,352)(470,351)(450,350)(449,350)
\path(100,404)(110,402)(120,400)(130,398)(140,396)(150,395)(160,396)(170,398)(180,400)(190,402)(200,404)
\path(120,400)(130,402)(140,404)(150,405)(160,404)(170,402)(180,400)
\path(350,404)(340,402)(330,400)(320,398)(310,396)(300,395)(290,396)(280,398)(270,400)(260,402)(250,404)
\path(330,400)(320,402)(310,404)(300,405)(290,404)(280,402)(270,400)
\path(400,404)(410,402)(420,400)(430,398)(440,396)(450,395)(460,396)(470,398)(480,400)(490,402)(500,404)
\path(420,400)(430,402)(440,404)(450,405)(460,404)(470,402)(480,400)
\put(175,465){$\psi^2-\lambda_1\psi+\lambda_2$}
\put(600,400){\circle{100}}
\path(650,400)(640,398)(630,396)(620,394)(610,392)(600,391)(590,392)(580,394)(570,396)(560,398)(550,400)
\path(640,402)(630,404)
\path(620,406)(610,408)
\path(600,409)(590,408)
\path(580,406)(570,404)
\path(560,402)(550,400)
\put(600,370){\circle*{10}}
\put(750,400){\ellipse{200}{100}}
\path(700,404)(710,402)(720,400)(730,398)(740,396)(750,395)(760,396)(770,398)(780,400)(790,402)(800,404)
\path(720,400)(730,402)(740,404)(750,405)(760,404)(770,402)(780,400)
\put(860,395){$+$}
\path(900,400)(901,407)(902,410)(905,416)(908,420)(914,426)(918,429)(921,431)(923,432)(926,434)(934,438)(937,439)(940,440)(952,444)(960,446)(965,447)(972,448)(980,449)(1000,450)(1001,450)
\path(900,400)(901,393)(902,390)(905,384)(908,380)(914,374)(918,371)(921,369)(923,368)(926,366)(934,362)(937,361)(940,360)(952,356)(960,354)(965,353)(972,352)(980,351)(1000,350)(1001,350)
\path(1075,438)(1060,440)(1048,444)(1040,446)(1035,447)(1028,448)(1020,449)(1000,450)(999,450)
\path(1075,362)(1060,360)(1048,356)(1040,354)(1035,353)(1028,352)(1020,351)(1000,350)(999,350)
\path(1250,400)(1249,407)(1248,410)(1245,416)(1242,420)(1236,426)(1232,429)(1229,431)(1227,432)(1224,434)(1216,438)(1213,439)(1210,440)(1198,444)(1190,446)(1185,447)(1178,448)(1170,449)(1150,450)(1149,450)
\path(1250,400)(1249,393)(1248,390)(1245,384)(1242,380)(1236,374)(1232,371)(1229,369)(1227,368)(1224,366)(1216,362)(1213,361)(1210,360)(1198,356)(1190,354)(1185,353)(1178,352)(1170,351)(1150,350)(1149,350)
\path(1075,438)(1090,440)(1102,444)(1110,446)(1115,447)(1122,448)(1130,449)(1150,450)(1151,450)
\path(1075,362)(1090,360)(1102,356)(1110,354)(1115,353)(1122,352)(1130,351)(1150,350)(1151,350)
\path(950,404)(960,402)(970,400)(980,398)(990,396)(1000,395)(1010,396)(1020,398)(1030,400)(1040,402)(1050,404)
\path(970,400)(980,402)(990,404)(1000,405)(1010,404)(1020,402)(1030,400)
\path(1200,404)(1190,402)(1180,400)(1170,398)(1160,396)(1150,395)(1140,396)(1130,398)(1120,400)(1110,402)(1100,404)
\path(1180,400)(1170,402)(1160,404)(1150,405)(1140,404)(1130,402)(1120,400)
\put(1025,465){$\psi-\lambda_1$}
\put(1300,400){\circle{100}}
\path(1350,400)(1340,398)(1330,396)(1320,394)(1310,392)(1300,391)(1290,392)(1280,394)(1270,396)(1260,398)(1250,400)
\path(1340,402)(1330,404)
\path(1320,406)(1310,408)
\path(1300,409)(1290,408)
\path(1280,406)(1270,404)
\path(1260,402)(1250,400)
\put(1300,370){\circle*{10}}
\path(1350,400)(1351,407)(1352,410)(1355,416)(1358,420)(1364,426)(1368,429)(1371,431)(1373,432)(1376,434)(1384,438)(1387,439)(1390,440)(1402,444)(1410,446)(1415,447)(1422,448)(1430,449)(1450,450)(1451,450)
\path(1350,400)(1351,393)(1352,390)(1355,384)(1358,380)(1364,374)(1368,371)(1371,369)(1373,368)(1376,366)(1384,362)(1387,361)(1390,360)(1402,356)(1410,354)(1415,353)(1422,352)(1430,351)(1450,350)(1451,350)
\path(1525,438)(1510,440)(1498,444)(1490,446)(1485,447)(1478,448)(1470,449)(1450,450)(1449,450)
\path(1525,362)(1510,360)(1498,356)(1490,354)(1485,353)(1478,352)(1470,351)(1450,350)(1449,350)
\path(1700,400)(1699,407)(1698,410)(1695,416)(1692,420)(1686,426)(1682,429)(1679,431)(1677,432)(1674,434)(1666,438)(1663,439)(1660,440)(1648,444)(1640,446)(1635,447)(1628,448)(1620,449)(1600,450)(1599,450)
\path(1700,400)(1699,393)(1698,390)(1695,384)(1692,380)(1686,374)(1682,371)(1679,369)(1677,368)(1674,366)(1666,362)(1663,361)(1660,360)(1648,356)(1640,354)(1635,353)(1628,352)(1620,351)(1600,350)(1599,350)
\path(1525,438)(1540,440)(1552,444)(1560,446)(1565,447)(1572,448)(1580,449)(1600,450)(1601,450)
\path(1525,362)(1540,360)(1552,356)(1560,354)(1565,353)(1572,352)(1580,351)(1600,350)(1601,350)
\path(1400,404)(1410,402)(1420,400)(1430,398)(1440,396)(1450,395)(1460,396)(1470,398)(1480,400)(1490,402)(1500,404)
\path(1420,400)(1430,402)(1440,404)(1450,405)(1460,404)(1470,402)(1480,400)
\path(1650,404)(1640,402)(1630,400)(1620,398)(1610,396)(1600,395)(1590,396)(1580,398)(1570,400)(1560,402)(1550,404)
\path(1630,400)(1620,402)(1610,404)(1600,405)(1590,404)(1580,402)(1570,400)
\put(1475,465){$\psi-\lambda_1$}
\put(1710,395){$+$}
\put(-40,145){$+$}
\path(0,100)(1,107)(2,110)(5,116)(8,120)(14,126)(18,129)(21,131)(23,132)(26,134)(34,138)(37,139)(40,140)(52,144)(60,146)(65,147)(72,148)(80,149)(100,150)(101,150)
\path(0,100)(1,93)(2,90)(5,84)(8,80)(14,74)(18,71)(21,69)(23,68)(26,66)(34,62)(37,61)(40,60)(52,56)(60,54)(65,53)(72,52)(80,51)(100,50)(101,50)
\path(175,138)(160,140)(148,144)(140,146)(135,147)(128,148)(120,149)(100,150)(99,150)
\path(175,62)(160,60)(148,56)(140,54)(135,53)(128,52)(120,51)(100,50)(99,50)
\path(350,100)(349,107)(348,110)(345,116)(342,120)(336,126)(332,129)(329,131)(327,132)(324,134)(316,138)(313,139)(310,140)(298,144)(290,146)(285,147)(278,148)(270,149)(250,150)(249,150)
\path(350,100)(349,93)(348,90)(345,84)(342,80)(336,74)(332,71)(329,69)(327,68)(324,66)(316,62)(313,61)(310,60)(298,56)(290,54)(285,53)(278,52)(270,51)(250,50)(249,50)
\path(175,138)(190,140)(202,144)(210,146)(215,147)(222,148)(230,149)(250,150)(251,150)
\path(175,62)(190,60)(202,56)(210,54)(215,53)(222,52)(230,51)(250,50)(251,50)
\path(50,104)(60,102)(70,100)(80,98)(90,96)(100,95)(110,96)(120,98)(130,100)(140,102)(150,104)
\path(70,100)(80,102)(90,104)(100,105)(110,104)(120,102)(130,100)
\path(300,104)(290,102)(280,100)(270,98)(260,96)(250,95)(240,96)(230,98)(220,100)(210,102)(200,104)
\path(280,100)(270,102)(260,104)(250,105)(240,104)(230,102)(220,100)
\put(125,165){$\psi-\lambda_1$}
\put(400,100){\circle{100}}
\path(450,100)(440,98)(430,96)(420,94)(410,92)(400,91)(390,92)(380,94)(370,96)(360,98)(350,100)
\path(440,102)(430,104)
\path(420,106)(410,108)
\path(400,109)(390,108)
\path(380,106)(370,104)
\path(360,102)(350,100)
\put(400,70){\circle*{10}}
\put(550,100){\ellipse{200}{100}}
\path(500,104)(510,102)(520,100)(530,98)(540,96)(550,95)(560,96)(570,98)(580,100)(590,102)(600,104)
\path(520,100)(530,102)(540,104)(550,105)(560,104)(570,102)(580,100)
\put(400,200){\ellipse{200}{100}}
\path(350,204)(360,202)(370,200)(380,198)(390,196)(400,195)(410,196)(420,198)(430,200)(440,202)(450,204)
\path(370,200)(380,202)(390,204)(400,205)(410,204)(420,202)(430,200)
\put(660,145){$+$}
\put(800,150){\ellipse{200}{100}}
\path(750,154)(760,152)(770,150)(780,148)(790,146)(800,145)(810,146)(820,148)(830,150)(840,152)(850,154)
\path(770,150)(780,152)(790,154)(800,155)(810,154)(820,152)(830,150)
\put(950,150){\circle{100}}
\path(1000,150)(990,148)(980,146)(970,144)(960,142)(950,141)(940,142)(930,144)(920,146)(910,148)(900,150)
\path(990,152)(980,154)
\path(970,156)(960,158)
\path(950,159)(940,158)
\path(930,156)(920,154)
\path(910,152)(900,150)
\put(950,120){\circle*{10}}
\put(950,50){\ellipse{200}{100}}
\path(900,54)(910,52)(920,50)(930,48)(940,46)(950,45)(960,46)(970,48)(980,50)(990,52)(1000,54)
\path(920,50)(930,52)(940,54)(950,55)(960,54)(970,52)(980,50)
\put(950,250){\ellipse{200}{100}}
\path(900,254)(910,252)(920,250)(930,248)(940,246)(950,245)(960,246)(970,248)(980,250)(990,252)(1000,254)
\path(920,250)(930,252)(940,254)(950,255)(960,254)(970,252)(980,250)
\put(1100,150){\ellipse{200}{100}}
\path(1050,154)(1060,152)(1070,150)(1080,148)(1090,146)(1100,145)(1110,146)(1120,148)(1130,150)(1140,152)(1150,154)
\path(1070,150)(1080,152)(1090,154)(1100,155)(1110,154)(1120,152)(1130,150)
\put(1210,145){$-$}
\path(1250,150)(1251,157)(1252,160)(1255,166)(1258,170)(1264,176)(1268,179)(1271,181)(1273,182)(1276,184)(1284,188)(1287,189)(1290,190)(1302,194)(1310,196)(1315,197)(1322,198)(1330,199)(1350,200)(1351,200)
\path(1250,150)(1251,143)(1252,140)(1255,134)(1258,130)(1264,124)(1268,121)(1271,119)(1273,118)(1276,116)(1284,112)(1287,111)(1290,110)(1302,106)(1310,104)(1315,103)(1322,102)(1330,101)(1350,100)(1351,100)
\path(1425,188)(1410,190)(1398,194)(1390,196)(1385,197)(1378,198)(1370,199)(1350,200)(1349,200)
\path(1425,112)(1410,110)(1398,106)(1390,104)(1385,103)(1378,102)(1370,101)(1350,100)(1349,100)
\path(1600,150)(1599,157)(1598,160)(1595,166)(1592,170)(1586,176)(1582,179)(1579,181)(1577,182)(1574,184)(1566,188)(1563,189)(1560,190)(1548,194)(1540,196)(1535,197)(1528,198)(1520,199)(1500,200)(1499,200)
\path(1600,150)(1599,143)(1598,140)(1595,134)(1592,130)(1586,124)(1582,121)(1579,119)(1577,118)(1574,116)(1566,112)(1563,111)(1560,110)(1548,106)(1540,104)(1535,103)(1528,102)(1520,101)(1500,100)(1499,100)
\path(1425,188)(1440,190)(1452,194)(1460,196)(1465,197)(1472,198)(1480,199)(1500,200)(1501,200)
\path(1425,112)(1440,110)(1452,106)(1460,104)(1465,103)(1472,102)(1480,101)(1500,100)(1501,100)
\path(1300,154)(1310,152)(1320,150)(1330,148)(1340,146)(1350,145)(1360,146)(1370,148)(1380,150)(1390,152)(1400,154)
\path(1320,150)(1330,152)(1340,154)(1350,155)(1360,154)(1370,152)(1380,150)
\path(1550,154)(1540,152)(1530,150)(1520,148)(1510,146)(1500,145)(1490,146)(1480,148)(1470,150)(1460,152)(1450,154)
\path(1530,150)(1520,152)(1510,154)(1500,155)(1490,154)(1480,152)(1470,150)
\put(1650,150){\circle{100}}
\path(1700,150)(1690,148)(1680,146)(1670,144)(1660,142)(1650,141)(1640,142)(1630,144)(1620,146)(1610,148)(1600,150)
\path(1690,152)(1680,154)
\path(1670,156)(1660,158)
\path(1650,159)(1640,158)
\path(1630,156)(1620,154)
\path(1610,152)(1600,150)
\put(1650,120){\circle*{10}}
\put(1750,150){\circle{100}}
\path(1800,150)(1790,148)(1780,146)(1770,144)(1760,142)(1750,141)(1740,142)(1730,144)(1720,146)(1710,148)(1700,150)
\path(1790,152)(1780,154)
\path(1770,156)(1760,158)
\path(1750,159)(1740,158)
\path(1730,156)(1720,154)
\path(1710,152)(1700,150)
\put(1750,50){\ellipse{200}{100}}
\path(1700,54)(1710,52)(1720,50)(1730,48)(1740,46)(1750,45)(1760,46)(1770,48)(1780,50)(1790,52)(1800,54)
\path(1720,50)(1730,52)(1740,54)(1750,55)(1760,54)(1770,52)(1780,50)
\put(1750,250){\ellipse{200}{100}}
\path(1700,254)(1710,252)(1720,250)(1730,248)(1740,246)(1750,245)(1760,246)(1770,248)(1780,250)(1790,252)(1800,254)
\path(1720,250)(1730,252)(1740,254)(1750,255)(1760,254)(1770,252)(1780,250)
\end{picture}
\end{eqnarray*}
}
We have also calculated the formula for
$ \psi^5 - \lambda_1 \psi^4 + \lambda_2 \psi^3 - \lambda_3 \psi^2 +
\lambda_4 \psi - \lambda_5 $
in ${\overline{\cM}}_{5,1}$.
We do not write it here because it was calculated via a (possibly incorrect)
computer program and because it is rather long.
Note that non-integer coefficients do appear in genus $5$.
\end{document}
\begin{document}
\title{Ramsey games near the critical threshold}
\author{David Conlon\thanks{Department of Mathematics, California Institute of Technology, Pasadena, CA 91125, USA.
E-mail: {\tt
[email protected]}. Research supported in part by ERC Starting Grant RanDM 676632.} \and
Shagnik Das\thanks{Institut f\"ur Mathematik, Freie Universit\"at Berlin, 14195 Berlin, Germany. E-mail: {\tt [email protected]}. Research supported by GIF grant G-1347-304.6/2016 and by the Deutsche Forschungsgemeinschaft (DFG) project 415310276.}\and
Joonkyung Lee\thanks{
Department of Mathematics, University College London, Gower Street, London WC1E 6BT, UK.
E-mail: {\tt
[email protected]}. Research supported in part by ERC Consolidator Grant PEPCo 724903.}\and
Tam\'as M\'esz\'aros\thanks{Institut f\"ur Mathematik, Freie Universit\"at Berlin, 14195 Berlin, Germany. E-mail: {\tt [email protected]}. Funded by the DRS Fellowship Program and the Berlin Mathematics Research Center MATH+, Project ``Learning hypergraphs".}
}
\date{}
\maketitle
\begin{abstract}
A well-known result of R\"odl and Ruci\'nski states that for any graph $H$ there exists a constant $C$ such that if $p \geq C n^{- 1/m_2(H)}$, then the random graph $G_{n,p}$ is a.a.s.~$H$-Ramsey, that is, any $2$-colouring of its edges contains a monochromatic copy of $H$. Aside from a few simple exceptions, the corresponding $0$-statement also holds, that is, there exists $c>0$ such that whenever $p\leq cn^{-1/m_2(H)}$ the random graph $G_{n,p}$ is a.a.s.~not $H$-Ramsey.
We show that near this threshold, even when $G_{n,p}$ is not $H$-Ramsey, it is often extremely close to being $H$-Ramsey. More precisely, we prove that for any constant $c > 0$ and any strictly $2$-balanced graph $H$, if $p \geq c n^{-1/m_2(H)}$, then the random graph $G_{n,p}$ a.a.s.~has the property that every $2$-edge-colouring without monochromatic copies of $H$ cannot be extended to an $H$-free colouring after $\omega(1)$ extra random edges are added. This generalises a result by Friedgut, Kohayakawa, R\"odl, Ruci\'nski and Tetali, who in 2002 proved the same statement for triangles, and addresses a question raised by those authors. We also extend a result of theirs on the three-colour case and show that these theorems need not hold when $H$ is not strictly $2$-balanced.
\end{abstract}
\section{Introduction}
The study of sparse generalisations of combinatorial theorems has attracted considerable interest in recent years and there are now several general mechanisms~\cite{BMS15, CG16, ST15, S16} that allow one to prove that analogues of classical results such as Ramsey's theorem, Tur\'an's theorem and Szemer\'edi's theorem hold relative to sparse random graphs and sets of integers.
Much of this work is based, in one way or another, on the beautiful random Ramsey theorem of R\"odl and Ruci\'nski~\cite{RR93, RR95} from 1995. This seminal result gives a complete answer to the question of when the binomial random graph $G_{n,p}$ is \emph{$(H, r)$-Ramsey}, that is, has the property that any $r$-colouring of its edges contains a monochromatic copy of the graph $H$.
To state the R\"odl--Ruci\'nski theorem precisely, we need some notation. For a graph $H$, we write $d_2(H) = 0$ if $H$ has no edges, $d_2(H) = 1/2$ when $H = K_2$ and $d_2(H) = (e(H)-1)/(v(H)-2)$ in the general case. We then write $m_2(H) = \max_{H' \subseteq H} d_2(H')$ and call this quantity the \emph{$2$-density} of $H$. Though we will not use these definitions immediately, we also say that $H$ is \emph{$2$-balanced} if $m_2(H') \leq m_2(H)$ and \emph{strictly $2$-balanced} if $m_2(H') < m_2(H)$ for all proper subgraphs $H'$ of $H$.
\begin{thm}(R\"odl--Ruci\'nski, 1995) \label{thm:rodlrucinski}
Let $r \geq 2$ be a positive integer and let $H$ be a graph that is not a forest consisting of stars and paths of length $3$. Then there are positive constants $c$ and $C$ such that
\[\lim_{n\rightarrow \infty} \mathbb{P}[G_{n,p} \mbox{ is $(H, r)$-Ramsey}] = \left\{ \begin{array}{ll}
0 & \mbox{if $p < c n^{-1/m_2(H)}$},\\
1 & \mbox{if $p > C n^{-1/m_2(H)}$}.\end{array} \right.\]
\end{thm}
There has been much work extending this result. We will not attempt an exhaustive survey, but refer the interested reader instead to some of the latest progress on hypergraphs~\cite{GNPSST17}, the asymmetric case~\cite{MNS19}, establishing sharp thresholds~\cite{SS18} and the equivalent problem in settings other than the binomial random graph~\cite{DT19, P19}. Our particular concern here will be with the following surprising result of Friedgut, Kohayakawa, R\"odl, Ruci\'nski and Tetali~\cite{FKRRT03} regarding two-round Ramsey games against a random builder.
\begin{thm}(Friedgut--Kohayakawa--R\"odl--Ruci\'nski--Tetali, 2003)
Let $c > 0$ be fixed and, for $p = cn^{-1/2}$, let $G = G_{n,p}$. Then, with high probability, the following statements hold:
\begin{itemize}
\item[(a)] Let $\varphi_2$ be an arbitrary monochromatic-$K_3$-free $2$-edge-colouring of $G$. If $q_2 = \omega(n^{-2})$, then, with high probability, $\varphi_2$ cannot be extended to a monochromatic-$K_3$-free $2$-edge-colouring of $G \cup G_{n,q_2}$.
\item[(b)] Let $\varphi_3$ be an arbitrary monochromatic-$K_3$-free $3$-edge-colouring of $G$. If $q_3 = \omega(n^{-1})$, then, with high probability, $\varphi_3$ cannot be extended to a monochromatic-$K_3$-free $3$-edge-colouring of $G \cup G_{n,q_3}$.
\end{itemize}
\end{thm}
When $H = K_3$, the R\"odl--Ruci\'nski theorem implies that if $p = C n^{-1/2}$ for some sufficiently large $C$, then, with high probability, every $2$-edge-colouring of $G_{n,p}$ contains a monochromatic triangle. Part (a) of the theorem above says that for any $c > 0$, no matter how small, if $p = c n^{-1/2}$, then, even though there are $2$-edge-colourings of $G_{n,p}$ containing no monochromatic $K_3$, no such colouring can be extended to a monochromatic-$K_3$-free $2$-edge-colouring after $\omega(1)$ extra random edges are added. One interpretation of this result is that for any $c > 0$ the random graph $G_{n,p}$ with $p = c n^{-1/2}$ is, with high probability, already extremely close to being $(K_3, 2)$-Ramsey. Part (b) gives a similar result for $3$-edge-colourings, though in this case $\omega(n)$ extra edges may be needed in the second round of colouring to guarantee a monochromatic triangle.
Addressing a problem raised by Friedgut, Kohayakawa, R\"odl, Ruci\'nski and Tetali~\cite{FKRRT03}, our main result says that a similar statement holds for all graphs $H$ containing an edge $h$ for which $m_2(H \setminus h) < m_2(H)$. In particular, the result applies when $H$ is strictly $2$-balanced, since any edge $h$ works in this case.
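For example, $K_4$ is strictly $2$-balanced with $m_2(K_4) = (6-1)/(4-2) = 5/2$, while for any edge $h$ the graph $K_4 \setminus h$ and all of its subgraphs have $2$-density at most $2$, so $m_2(K_4 \setminus h) = 2 < m_2(K_4)$ and any edge of $K_4$ may play the role of $h$.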
\begin{theorem} \label{thm:main}
Let $H$ be a graph and suppose that there is some edge $h \in E(H)$ whose removal decreases the $2$-density, that is, $m_2(H \setminus h) < m_2(H)$. Let $c > 0$ be fixed and, for $p = c n^{-1/m_2(H)}$, let $G = G_{n,p}$. Then, with high probability, the following statements hold:
\begin{itemize}
\item[(a)] Let $\varphi_2$ be an arbitrary monochromatic-$H$-free $2$-edge-colouring of $G$. If $q_2 = \omega(n^{-2})$, then, with high probability, $\varphi_2$ cannot be extended to a monochromatic-$H$-free $2$-edge-colouring of $G \cup G_{n,q_2}$.
\item[(b)] Let $\varphi_3$ be an arbitrary monochromatic-$H$-free $3$-edge-colouring of $G$. If $q_3 = \omega(n^{-1/m(H)})$, then, with high probability, $\varphi_3$ cannot be extended to a monochromatic-$H$-free $3$-edge-colouring of $G \cup G_{n,q_3}$.
\end{itemize}
\end{theorem}
Observe that the densities $q_2$ and $q_3$ of the random graphs that must be added to create monochromatic copies of $H$ are best possible. Indeed, if $q = O(n^{-2})$, then with positive probability $G_{n,q}$ has no edges, so $\varphi_2$ trivially extends to $G \cup G_{n,q}$. Only slightly less trivially, if $\varphi_3$ only uses two of the three colours on the edges of $G$, then we can colour all the edges of $G_{n,q}$ with the third colour. If $q = O(n^{-1/m(H)})$, then with positive probability $G_{n,q}$ is $H$-free, thus giving a valid extension of $\varphi_3$. Finally, note that these results cannot be extended to $r \ge 4$ colours, since the two random graphs $G_{n,p}$ and $G_{n,q}$ can be coloured independently with disjoint pairs of colours, so we can avoid creating a monochromatic copy of $H$ until the density of one of the two random graphs exceeds the random Ramsey threshold $C n^{-1/m_2(H)}$ from Theorem~\ref{thm:rodlrucinski}.
\section{The necessity of a condition}
In Theorem~\ref{thm:main}, we impose the condition that there is some edge $h \in E(H)$ such that $H \setminus h$ has a strictly lower $2$-density than $H$. While this condition covers, for example, strictly $2$-balanced graphs (where the edge $h$ can be chosen arbitrarily), it is natural to ask whether it is necessary. In this section we show that Theorem~\ref{thm:main} does not apply to all graphs $H$, so some condition is indeed required.
\subsection{Edge-rooted products of graphs}
We first define the edge-rooted product of graphs.
\begin{definition} \label{def:product}
Let $G$ be a graph, let $H$ be a graph rooted at an edge $h = \{u,v\} \in E(H)$ and let $k \in \mathbb{N}$. To build the \emph{$k$-fold edge-rooted product} $G \rprod{k} (H,h)$, we start with a central copy of $G$ and then attach $k$ copies of $H$ to each edge $g = \{x,y\} \in E(G)$ such that $\{x,y\}$ is the root-edge $h$ in each copy of $H$ and all other vertices in each copy are new and distinct.
In other words, $V( G \rprod{k} (H,h) ) = V(G) \cup \left( E(G) \times [k] \times \left( V(H) \setminus \{u,v\} \right) \right)$, $V(G)$ induces a copy of $G$ and, for each $g = \{x,y\} \in E(G)$ and $i \in [k]$, $\left( \{g \} \times \{i \} \times \left( V(H) \setminus \{u,v\} \right) \right) \cup \{x,y\}$ induces a copy of $H$ with $\{x,y\}$ playing the role of $\{u,v\}$. (Note that there is some slack in this definition, since we have not prescribed an orientation for each attached copy of $H$. In practice, the particular choice of orientation makes no difference, so we will simply assume that some fixed choice has been made.)
The \emph{reduced} $k$-fold edge-rooted product, denoted $G \rrprod{k} (H,h)$, is the subgraph obtained by removing all the edges from the central copy of $G$.
\end{definition}
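Counting directly from this definition, $G \rprod{k} (H,h)$ has $v(G) + k e(G) (v(H)-2)$ vertices and $e(G) + k e(G) (e(H)-1)$ edges, while $G \rrprod{k} (H,h)$ has the same vertex set and $k e(G) (e(H)-1)$ edges. For example, $C_4 \rprod{2} (K_3,h)$ has $4 + 2 \cdot 4 \cdot 1 = 12$ vertices and $4 + 2 \cdot 4 \cdot 2 = 20$ edges.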
\begin{figure}
\caption{$C_4\rprod{2}\cdots$}
\label{fig:examples}
\end{figure}
We have already defined $d_2(H)$, $m_2(H)$ and stated what it means for a graph to be $2$-balanced or strictly $2$-balanced. In a similar fashion, we write $d_1(H) = 0$ if $H$ has no edges and $d_1(H) = e(H)/(v(H)-1)$ otherwise. We then write $m_1(H) = \max_{H' \subseteq H} d_1(H')$ and call this quantity the \emph{$1$-density} of $H$. We say that $H$ is \emph{$1$-balanced} if $d_1(H') \leq d_1(H)$ and \emph{strictly $1$-balanced} if $d_1(H') < d_1(H)$ for all proper subgraphs $H'$ of $H$. Finally, write $d(H) = e(H)/v(H)$ and $m(H) = \max_{H' \subseteq H} d(H')$, which we call the \emph{density} of $H$. We then say that $H$ is \emph{balanced} if $d(H') \leq d(H)$ and \emph{strictly balanced} if $d(H') < d(H)$ for all proper subgraphs $H'$ of $H$. We will make repeated use of the following simple lemma in what follows.
\begin{lemma} \label{lem:2become1}
If $H$ is $2$-balanced with $d_2(H) > 1$, then $H$ is strictly $1$-balanced and strictly balanced.
\end{lemma}
\begin{proof}
Suppose that $H$ is not strictly $1$-balanced and let $F \subset H$ be a subgraph with $d_1(F) \ge d_1(H)$. That is, $e(F) / (v(F) - 1) \geq e(H) / (v(H) - 1)$ or, by multiplying the expression out,
\begin{equation} \label{eq:1bal}
e(F) v(H) - e(F) \geq e(H) v(F) - e(H).
\end{equation}
Since $H$ is $2$-balanced, we have $d_2(H) \ge d_2(F)$, which implies that
$(e(H) - 1)/(v(H) - 2) \geq (e(F) - 1)/(v(F) - 2)$. Rearranging gives
$e(H)v(F) - 2e(H) - v(F) + 2 \geq e(F)v(H) - 2e(F) - v(H) + 2$.
Substituting \eqref{eq:1bal}, we get
\[e(H)v(F) - 2e(H) - v(F) + 2 \geq e(F)v(H) - 2e(F) - v(H) + 2 \geq e(H)v(F) - e(H) - e(F) - v(H) + 2.\]
Cancelling the like terms gives $-e(H) - v(F) \geq -e(F) - v(H)$, which in turn implies that $(e(F) - 1) - (v(F) - 2) \geq (e(H) - 1) - (v(H) - 2)$, which can be rewritten as
$(d_2(F) - 1)(v(F) - 2) \geq (d_2(H) - 1)(v(H) - 2)$.
However, this is a contradiction: since $d_2(H) - 1 \geq d_2(F) - 1$, $d_2(H) - 1 > 0$ and $v(H) - 2 > v(F) - 2 \geq 0$, the right-hand side of this inequality is strictly larger than the left-hand side. The argument in the strictly balanced case follows along almost exactly the same lines.
\end{proof}
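It is also helpful to keep in mind how the three densities compare on a single graph, since comparisons of this kind are used implicitly several times below. For any graph $F$ with at least one edge,
\[ d(F) = \frac{e(F)}{v(F)} < \frac{e(F)}{v(F)-1} = d_1(F), \]
and if, in addition, $v(F) \ge 3$ and $e(F) \ge v(F)$, then cross-multiplying shows that $d_1(F) < d_2(F)$ as well. For example, $d(K_4) = 3/2$, $d_1(K_4) = 2$ and $d_2(K_4) = 5/2$.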
The key observation for our purposes is that the edge-rooted product behaves well with respect to the various graph densities.
\begin{lemma} \label{lem:productdensity}
For any graphs $G$ and $H$ of density at least $1$, any edge $h \in E(H)$ and any $k \in \mathbb{N}$:
\begin{itemize}
\item[(a)] if $G$ is strictly balanced, $H$ is $2$-balanced and $d(G) < d_2(H)$, then $G \rprod{k} (H,h)$ is strictly balanced,
\item[(b)] $m_2( G \rprod{k} (H,h) ) = \max \{ m_2(G), m_2(H) \}$ and
\item[(c)] if $m_2(H \setminus h) < m_2(H)$, then $m_2( G \rrprod{k} (H,h) ) < m_2(H)$.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item[(a)] Let $F \subseteq G \rprod{k} (H,h)$ be a smallest (induced) subgraph maximising $d(F) = e(F) / v(F)$. We wish to show that $F = G \rprod{k} (H,h)$. We start with a lower bound on the density of $G \rprod{k} (H,h)$:
\begin{align}
m(G \rprod{k} (H,h)) &\ge d( G \rprod{k} (H,h) ) = \frac{e(G) + k e(G) (e(H) - 1)}{v(G) + k e(G) (v(H) - 2)} \notag \\
&= \frac{e(H) - (1 - \tfrac{1}{k})}{v(H) - 1 - (1 - \tfrac{1}{kd(G)})} \ge \frac{e(H)}{v(H) - 1} = d_1(H) = m_1(H), \label{ineq:largerthan1densityb}
\end{align}
where the inequality on the second line follows since either $k d(G) = 1$, in which case we have equality, or $(1 - \tfrac{1}{k})/(1 - \tfrac{1}{kd(G)}) \le 1 \le d(H) < d_1(H)$. Note that the final equality, $d_1(H) = m_1(H)$, is an application of Lemma~\ref{lem:2become1} above.
We also observe that $d(G \rprod{k} (H,h))$ is a convex combination of $d(G)$ and $d_2(H)$:
\begin{equation} \label{ineq:largerthan1densitya}
\frac{e(G) + k e(G) (e(H) - 1)}{v(G) + k e(G) (v(H) - 2)} = d(G) \frac{v(G)}{v(G \rprod{k} (H,h))} + d_2(H) \left(1 - \frac{v(G)}{v(G \rprod{k} (H,h))} \right).
\end{equation}
Now, for each $g \in E(G)$ and $i \in [k]$, let $F_{g,i} \subseteq H$ be the subgraph induced by the vertices of $F$ in the $i$th copy of $H$ attached to the edge $g$ in the central copy of $G$. Let $F_0 \subseteq G$ be the subgraph induced by the vertices of $F$ in the central copy of $G$.
By the minimality of the size of $F$, we may assume that $F$ is connected, as otherwise its densest component would be a smaller subgraph attaining the maximum density. We cannot have $F \subseteq H$ since, by~\eqref{ineq:largerthan1densityb}, the density of $G \rprod{k} (H,h)$ is at least $m_1(H)$, which is strictly larger than $m(H)$. Thus, $F_{g,i}$ must be non-empty for at least two pairs $(g,i) \in E(G) \times [k]$ and, hence, to be connected, each non-empty $F_{g,i}$ must contain at least one vertex of $g$.
Now suppose there was some $(g,i)$ such that $F_{g,i}$ contained only one of the two endpoints of $g$. Then, by removing $F_{g,i}$ from $F$, we lose $e(F_{g,i})$ edges and $v(F_{g,i})-1$ vertices. Since $F_{g,i} \subset H$, the ratio $e(F_{g,i})/(v(F_{g,i})-1)$ is at most $m_1(H)$, which by~\eqref{ineq:largerthan1densityb} is at most $m(G \rprod{k} (H,h))$. Removing $F_{g,i}$ would therefore not decrease the density of $F$, contradicting the minimality of its size. Thus, if $F_{g,i}$ is non-empty, we must have $g \in E(F_{g,i})$. Hence,
\begin{equation} \label{eqn:prodparameters}
v(F) = v(F_0) + \sum_{g \in E(F_0)} \sum_{i = 1}^k \left( v(F_{g,i}) - 2 \right) \quad \textrm{and} \quad e(F) = e(F_0) + \sum_{g \in E(F_0)} \sum_{i=1}^k \left( e(F_{g,i}) - 1 \right).
\end{equation}
Thus,
\[ d(F) = \frac{e(F_0) + \sum_{g \in E(F_0)} \sum_{i=1}^k (e(F_{g,i}) - 1)}{v(F_0) + \sum_{g \in E(F_0)} \sum_{i=1}^k (v(F_{g,i}) - 2)} = d(F_0) \frac{v(F_0)}{v(F)} + \sum_{g \in E(F_0)} \sum_{i=1}^k d_2(F_{g,i}) \frac{v(F_{g,i}) - 2}{v(F)}. \]
Since $F_0 \subseteq G$ and $G$ is balanced, $d(F_0) \le d(G)$. Similarly, for each $g$ and $i$, $d_2(F_{g,i}) \le d_2(H)$. We therefore have
\[ d(F) \le d(G) \frac{v(F_0)}{v(F)} + d_2(H) \left( 1 - \frac{v(F_0)}{v(F)} \right). \]
Comparing this to~\eqref{ineq:largerthan1densitya}, since $d(G) < d_2(H)$, for $d(F) \ge d(G \rprod{k} (H,h))$ to hold we require
\begin{equation} \label{ineq:F0contribution}
\frac{v(F_0)}{v(F)} \le \frac{v(G)}{v(G \rprod{k} (H,h))} = \frac{1}{1 + k d(G) (v(H) - 2)}.
\end{equation}
Now $v(F) = v(F_0) + \sum_{g \in E(F_0)} \sum_{i=1}^k (v(F_{g,i}) - 2) \le v(F_0) + k e(F_0) (v(H) - 2)$, with equality if and only if $F_{g,i} = H$ for all $g \in E(F_0)$ and $i \in [k]$. Therefore,
\[ \frac{v(F_0)}{v(F)} \ge \frac{v(F_0)}{v(F_0) + k e(F_0) (v(H) - 2)} = \frac{1}{1 + k d(F_0) (v(H) - 2)}. \]
Thus, in order to satisfy the inequality of~\eqref{ineq:F0contribution}, we must have $d(F_0) \ge d(G)$. As $G$ is strictly balanced, it follows that $F_0 = G$, and then equality must hold in the bound on $v(F)$ above, so $F_{g,i} = H$ for all $g$ and $i$ and hence $F = G \rprod{k} (H,h)$, as required.
\item[(b)] Since $G, H \subseteq G \rprod{k} (H,h)$, we immediately have $m_2(G \rprod{k} (H,h)) \ge \max \{ m_2(G), m_2(H) \}$. The proof of the upper bound follows the same lines as in part (a). Let $F \subseteq G \rprod{k} (H,h)$ be a smallest subgraph realising the $2$-density, that is, $m_2(G \rprod{k} (H,h)) = d_2(F) = (e(F) - 1)/(v(F) - 2)$.
Let $F_0$ and, for each $(g,i) \in E(G) \times [k]$, $F_{g,i}$ be defined as in part (a). We may assume that $F_{g,i} \neq \emptyset$ for at least two pairs $(g,i)$, since otherwise $F \subseteq H$ and thus $d_2(F) \le m_2(H)$. By the minimality of the size of $F$, we may further assume that $F$ is $2$-connected, as otherwise one of the blocks $B$ of $F$ will satisfy $m_2(B) \ge m_2(F)$ (see, for instance, Lemma 8 of~\cite{NS16}). In particular, this implies that $g \in E(F_{g,i})$ whenever $F_{g,i} \neq \emptyset$.
The vertices and edges of $F$ can then be enumerated as in~\eqref{eqn:prodparameters}, so
\begin{equation} \label{eqn:2densityofF}
d_2(F) = \frac{e(F) - 1}{v(F) - 2} = \frac{e(F_0) - 1 + \sum_{g \in E(F_0)} \sum_{i = 1}^k \left( e(F_{g,i}) - 1 \right)}{v(F_0) - 2 + \sum_{g \in E(F_0)} \sum_{i=1}^k \left( v(F_{g,i}) - 2 \right)}.
\end{equation}
Since $e(F_0)-1 \leq m_2(G) (v(F_0) - 2)$ and $e(F_{g, i}) - 1 \leq m_2(H) (v(F_{g,i})-2)$ for each $(g,i)$, it follows that $d_2(F) = m_2(G \rprod{k} (H,h)) \le \max \{ m_2(G), m_2(H) \}$.
\item[(c)] The product $G \rrprod{k} (H,h)$ is obtained by deleting the edges of the central copy of $G$ from the product $G \rprod{k} (H,h)$. We show $m_2(G \rrprod{k} (H,h)) < m_2(H)$ by following the argument of part (b). To start, let $F \subseteq G \rrprod{k} (H,h)$ be a smallest subgraph attaining the $2$-density.
As before, let $F_0$ be the subgraph of $G$ induced by the vertices of $F$ from the central copy of $G$ and, for $g \in E(G)$ and $i \in [k]$, let $F_{g,i}$ be the subgraph of $H$ induced by the vertices of $F$ in the $i$th copy of $H$ attached to the edge $g$. Note that, since the edges from the central copy of $G$ are deleted in $G \rrprod{k} (H,h)$, neither the edges of $F_0$ nor the edge $g$ in $F_{g,i}$ (if present) appear in $F$. However, it will be convenient for us to include them in $F_0$ and $F_{g,i}$ for our calculations.
If $F_{g,i}$ is only non-empty for one pair of $(g,i)$, then $F = F_{g,i} \setminus g \subseteq H \setminus h$, so $d_2(F) \le m_2(H \setminus h) < m_2(H)$. Otherwise, since $F$ must be $2$-connected, $g$ must be in $F_{g,i}$ whenever $F_{g,i}$ is non-empty. We can then compute the $2$-density of $F$ as in part (b), arriving at an expression similar to~\eqref{eqn:2densityofF}, except the edges in $F_0$ do not appear in $F$. Thus,
\[ d_2(F) = \frac{-1 + \sum_{g \in E(F_0)} \sum_{i=1}^k \left( e(F_{g,i}) - 1 \right) }{v(F_0)- 2 + \sum_{g \in E(F_0)} \sum_{i=1}^k \left( v(F_{g,i}) - 2 \right) } < \frac{\sum_{g \in E(F_0)} \sum_{i=1}^k \left( e(F_{g,i}) - 1 \right)}{\sum_{g \in E(F_0)} \sum_{i=1}^k \left( v(F_{g,i}) - 2 \right)} \le m_2(H), \]
since $F_{g,i} \subseteq H$ implies $e(F_{g,i}) - 1 \le m_2(H) (v(F_{g,i}) - 2)$. \qedhere
\end{itemize}
\end{proof}
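As a quick sanity check of parts (b) and (c) on a small example, take $G = C_4$, $H = K_3$ and $k = 1$. The product $C_4 \rprod{1} (K_3,h)$ has $8$ vertices and $12$ edges, and its densest subgraphs in the $d_2$ sense are its four triangles, so $m_2(C_4 \rprod{1} (K_3,h)) = 2 = \max\{m_2(C_4), m_2(K_3)\}$, as predicted by (b). Deleting the four central edges leaves a copy of $C_8$, so $m_2(C_4 \rrprod{1} (K_3,h)) = d_2(C_8) = \tfrac{7}{6} < 2 = m_2(K_3)$, as predicted by (c); the hypothesis of (c) holds here since $m_2(K_3 \setminus h) = 1$.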
\subsection{Graphs requiring unusually many extra random edges}
Part (c) of Lemma~\ref{lem:productdensity} shows the role played by the assumption of the existence of the edge $h$ in Theorem~\ref{thm:main}. We will show how to use this to prove Theorem~\ref{thm:main} in the next section, but first we use the other parts of this lemma to construct graphs for which the conclusion of Theorem~\ref{thm:main} does not hold.
\begin{theorem} \label{thm:condition}
Let $F$ be a $2$-balanced graph containing a cycle, let $f \in E(F)$ be an arbitrary edge of $F$ and let $H = F \rprod{1} (F,f)$. Let $G = G_{n,p}$ for $p = c n^{-1/m_2(H)}$, where $c > 0$ is a sufficiently small constant. Then, with high probability, the following statements hold:
\begin{itemize}
\item[(a)] There is a monochromatic-$H$-free $2$-edge-colouring of $G$ such that if $q = o(n^{-v(F) / e(F)})$ the colouring can with high probability be extended to a colouring of $G \cup G_{n,q}$ without monochromatic copies of $H$.
\item[(b)] There is a monochromatic-$H$-free $3$-edge-colouring of $G$ and $\delta = \delta(H) > 0$ such that if $q = o(n^{-1/m(H) + \delta})$ the colouring can with high probability be extended to a colouring of $G \cup G_{n,q}$ without monochromatic copies of $H$.
\end{itemize}
\end{theorem}
\begin{proof}
Since $F$ is $2$-balanced and contains a cycle, we have $d(F) \ge 1$ and $d_2(F) > 1$. Using Lemma~\ref{lem:productdensity}(b), $m_2(H) = m_2(F)$. Hence, if the constant $c$ is sufficiently small, the R\"odl--Ruci\'nski theorem implies that with high probability we can find a $2$-colouring $\varphi$ of $E(G)$ without any monochromatic copy of $F$. This is the edge-colouring we extend in both cases.
\begin{itemize}
\item[(a)] In this case, extend $\varphi$ to the edges of $G_{n,q}$ arbitrarily. Observe that $H = F \rprod{1} (F,f)$ consists of $e(F)$ edge-disjoint copies of $F$. Since there are no monochromatic copies of $F$ in $G$, any monochromatic copy of $H$ in $G \cup G_{n,q}$ must contain at least $e(F)$ edges from $G_{n,q}$.
There are at most $n^{v(H)}$ potential copies of $H$ and $2^{e(H)}$ ways to distribute its edges between $G$ and $G_{n,q}$. Since $q \le p$, the probability that a copy with at least $e(F)$ edges from $G_{n,q}$ appears in $G \cup G_{n,q}$ is at most $p^{e(H) - e(F)} q^{e(F)}$. Thus, by the union bound, the probability that there is a copy of $H$ in $G \cup G_{n,q}$ with at least $e(F)$ edges from $G_{n,q}$ is at most
\[ n^{v(H)} 2^{e(H)} p^{e(H) - e(F)} q^{e(F)} = 2^{e(H)}c^{e(H) - e(F)} n^{v(F) + e(F)(v(F)-2) - e(F) (e(F)-1) / m_2(H)} q^{e(F)}, \]
where we used that $v(H) = v(F) + e(F)(v(F)-2)$ and $e(H) = e(F)^2$.
As $m_2(H) = m_2(F) = \frac{e(F)-1}{v(F)-2}$, this simplifies to $2^{e(H)}c^{e(H) - e(F)} n^{v(F)} q^{e(F)}$, which is $o(1)$ by our choice of $q$. Hence, with high probability our arbitrary extension of $\varphi$ to $G \cup G_{n,q}$ does not create a monochromatic copy of $H$.
\item[(b)] The colouring $\varphi$ uses two colours, say red and blue. This leaves us with one unused colour, say green, that we can use when extending $\varphi$ to the edges of $G_{n,q}$.
As $F$ is $2$-balanced, Lemma~\ref{lem:2become1} implies that it is also strictly balanced. By Lemma~\ref{lem:productdensity}(a), it follows that $H$ is strictly balanced. As a consequence, any union of two copies of $H$ that share at least an edge must be strictly denser than $H$ itself. Indeed, the subgraph common to both copies of $H$ is a proper subgraph and therefore strictly sparser than $H$. Hence, the vertices and edges added in the second copy in the union must increase the overall density.
There is thus some $\delta_1 = \delta_1(H) > 0$ such that, whenever $q = o(n^{-1/m(H) + \delta_1})$, intersecting copies of $H$ do not appear in $G_{n,q}$. That is,
the copies of $H$ appearing in $G_{n,q}$ are with high probability pairwise edge-disjoint. We shall choose our $\delta(H)$ to be less than this $\delta_1(H)$.
We now order the edges of $G_{n,q}$ arbitrarily and process them one-by-one. We colour each edge green, unless that would create a green copy of $H$, in which case we colour the edge red. When colouring in this fashion, if we create a monochromatic copy of $H$, it clearly must be red.
Consider a red copy $H_0$ of $H$ in our colouring of $G \cup G_{n,q}$. Since $H_0$ is an edge-disjoint union of $e(F)$ copies of $F$ and the colouring $\varphi$ of $G$ has no monochromatic copy of $F$, each copy of $F$ in $H_0$ must contain at least one red edge from $G_{n,q}$. An edge $e$ from $G_{n,q}$ is only red if it is the last edge of an otherwise green copy $H_e$ of $H$, which must be wholly contained in $G_{n,q}$. Moreover, for $e \neq e'$, the copies $H_e$ and $H_{e'}$ of $H$ are edge-disjoint.
This gives us a subgraph of $G \cup G_{n,q}$ with at most $v(F) + e(F)(v(F) - 2 + v(H) - 2)$ vertices and $e(F) ( e(F) + e(H) - 1)$ edges, of which at least $e(F)e(H)$ edges come from $G_{n,q}$. Since $q \le p$ and there are at most some constant $K$ ways of building such a subgraph and dividing its edges between $G$ and $G_{n,q}$, the probability of finding such a structure is at most
\[ K n^{v(F) + e(F) (v(F) - 2 + v(H) - 2)} p^{e(F) (e(F) - 1)} q^{e(F) e(H)}. \]
Since $p \le n^{-1/m_2(H)} = n^{-(v(F)-2)/(e(F)-1)}$, this is at most $K n^{v(F) + e(F)(v(H) - 2)} q^{e(F)e(H)}$. Now $q = o(n^{-1/m(H) + \delta}) = o(n^{-v(H) / e(H) + \delta})$, since $H$ is strictly balanced. Thus the upper bound on the probability of the appearance of a red copy of $H$ is $o(n^{v(F) - 2e(F) + \delta e(F) e(H)}) = o(n^{e(F)(\delta e(H) - 1)})$, since $e(F) \ge v(F)$. Hence, if we choose $\delta \le \min \{ 1/e(H), \delta_1(H) \}$, this probability is $o(1)$, so with high probability we can extend the colouring to the edges of $G_{n,q}$ without creating a monochromatic copy of $H$. \qedhere
\end{itemize}
\end{proof}
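To see a concrete instance of this construction, take $F = K_3$, the smallest $2$-balanced graph containing a cycle. Then $H = K_3 \rprod{1} (K_3,f)$ consists of a central triangle with a further triangle glued onto each of its edges, so $H$ has $6$ vertices, $9$ edges, exactly four triangles and $m_2(H) = 2$. Every edge of $H$ lies in at most two of these triangles, so for any edge $h$ the graph $H \setminus h$ still contains a triangle and $m_2(H \setminus h) = 2 = m_2(H)$. Thus no edge of $H$ satisfies the hypothesis of Theorem~\ref{thm:main}, and Theorem~\ref{thm:condition} shows that the conclusions of that theorem indeed fail for such $H$.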
\section{The proof of Theorem~\ref{thm:main}}
Having shown in the previous section that some condition on the graph $H$ is necessary in Theorem~\ref{thm:main}, we now show that our condition is sufficient. We begin with a sketch of the proof and then recall several useful results before providing the details of the argument.
\subsection{An overview of the proof}
We shall assume the colours used are red, blue and, in the case of three-colourings, green. Our goal is to find structures in the first random graph, $G$, that force the creation of a monochromatic copy of $H$ no matter how the edges of the second random graph, $G_{n,q}$, are coloured. To that end, we make the following definitions.
\begin{definition}[Colour-forced edges]
A copy of $H \setminus h$ in $G$ is \emph{supported} on the pair $\{x,y\}$ if $\{x,y\}$ maps to the missing edge $h$. We then call $\{x,y\}$ the \emph{base} of the copy. Given an edge-colouring $\varphi$, we say $\{x,y\}$ is a \emph{red, blue or green base} if it is the base of a monochromatic copy of $H \setminus h$ of the corresponding colour. Finally, we say a pair $\{x,y\}$ is \emph{green-forced} if it is both a red and a blue base simultaneously, with \emph{blue-forced} and \emph{red-forced} defined similarly.
\end{definition}
In the two-colour case, observe that it is impossible to extend $\varphi_2$ to a green-forced pair, since colouring it either red or blue would create a monochromatic copy of $H$. For the first assertion of Theorem~\ref{thm:main}, we shall show that with high probability $G$ is such that every two-colouring $\varphi_2$ admits quadratically many green-forced pairs. Then, again with high probability when $q_2 = \omega(n^{-2})$, one of these pairs will be an edge of the second random graph $G_{n,q_2}$, so any extension of $\varphi_2$ to $G \cup G_{n,q_2}$ will create a monochromatic copy of $H$.
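Quantitatively, if a given colouring $\varphi_2$ admits at least $\delta n^2$ green-forced pairs for some constant $\delta > 0$ (we introduce $\delta$ only for this calculation), then, since these pairs are determined by $G$ and $\varphi_2$ alone and the edges of $G_{n,q_2}$ are sampled independently,
\[ \mathbb{P}\big[\text{no green-forced pair appears as an edge of } G_{n,q_2}\big] \le (1-q_2)^{\delta n^2} \le e^{-\delta n^2 q_2} = o(1) \]
whenever $q_2 = \omega(n^{-2})$.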
When dealing with three colours, our goal will instead be to show that there is some colour, say green, such that the green-forced pairs in $G$ are sufficiently dense that, when $q_3 = \omega(n^{-1/m(H)})$, we will find a copy of $H$ in $G_{n,q_3}$ consisting solely of green-forced pairs. If any one of its edges is coloured red or blue, it will complete a monochromatic copy of $H$ with edges from $G$. On the other hand, if all of its edges are coloured green, we obtain a green copy of $H$ instead.
To find these colour-forced structures, we consider the reduced graph of a regular partition of $G$ (with respect to the colouring $\varphi_2$ or $\varphi_3$). In this reduced graph we will find two colours, say red and blue, and a copy of $H \rrprod{2} (H,h)$ such that for each (removed) edge from the central copy of $H$, one of the attached copies of $(H,h)$ is monochromatic red and the other is monochromatic blue. By applying the sparse counting lemma, we will deduce the existence of many potential copies of $H$ consisting of green-forced edges, from which we will be able to draw the desired conclusion.
Although the proof can be simplified in the two-coloured setting, for the sake of brevity we shall present a single unified argument allowing for three colours throughout, and only differentiate between the two cases at the end of the proof.
\subsection{Some preliminaries}
Here we collect several results about random graphs and sparse regularity that we shall use in our proof.
\subsubsection{Random graphs}
The Hoeffding inequality shows that, with high probability, the number of edges of $G_{n,p}$ within or between linear-sized vertex subsets is never far from its expectation.
\begin{proposition} \label{prop:upperuniform}
Let $\eta > 0$ be fixed and suppose $p = \omega(n^{-1})$. Then, with high probability, $G_{n,p}$ is such that the following holds for any disjoint sets $X, Y$ of vertices with $\abs{X}, \abs{Y} \ge \eta n$:
\begin{itemize}
\item[(i)] $\tfrac12 \binom{\abs{X}}{2} p \le e(G_{n,p}[X]) \le 2 \binom{\abs{X}}{2} p$ and
\item[(ii)] $\tfrac12 \abs{X} \abs{Y} p \le e(G_{n,p}[X,Y]) \le 2 \abs{X} \abs{Y} p$.
\end{itemize}
\end{proposition}
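This can be verified by a standard Chernoff--Hoeffding estimate together with a union bound: for fixed disjoint $X$ and $Y$ with $\abs{X}, \abs{Y} \ge \eta n$, the number of edges between them is binomially distributed with mean $\abs{X}\abs{Y}p \ge \eta^2 n^2 p = \omega(n)$, so
\[ \mathbb{P}\Big[ e(G_{n,p}[X,Y]) \notin \big[\tfrac12 \abs{X}\abs{Y}p,\, 2\abs{X}\abs{Y}p\big] \Big] \le 2e^{-c'\abs{X}\abs{Y}p} \]
for some absolute constant $c' > 0$, which is small enough to absorb the at most $4^n$ choices of $(X,Y)$; the same argument applies to the edges within a single set $X$.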
A simple application of Markov's inequality also shows that $G_{n,p}$ is unlikely to contain many more copies of any subgraph than expected.
\begin{proposition} \label{prop:fewsubgraphs}
Given any graph $F$ with $v$ vertices and $e$ edges and any $K > 1$, the probability that there are more than $K n^v p^e$ copies of $F$ in $G_{n,p}$ is at most $1/K$.
\end{proposition}
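Indeed, writing $X_F$ for the number of copies of $F$ in $G_{n,p}$ (a notation used only here), linearity of expectation gives $\mathbb{E}[X_F] \le n^v p^e$, so by Markov's inequality
\[ \mathbb{P}\big[X_F > K n^v p^e\big] \le \frac{\mathbb{E}[X_F]}{K n^v p^e} \le \frac{1}{K}. \]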
In the other direction, we can use Chebyshev's inequality to establish the existence of subgraphs in $G_{n,p}$ when $p$ is suitably large. More precisely, it follows from Theorem 4.4.5 in The Probabilistic Method by Alon and Spencer~\cite{AS08} that if $p = \omega(n^{-1/m(F)})$, then the number of copies of $F$ in $G_{n,p}$ is concentrated around its expectation.
\begin{proposition} \label{prop:manysubgraphs}
Given a graph $F$ on $v$ vertices and a constant $\zeta > 0$, let $\mc F$ be a collection of $\zeta n^v$ potential copies of $F$. If $p = \omega_n n^{-1/m(F)}$ with some $\omega_n=\omega(1)$, then the probability that $G_{n,p}$ does not contain a copy of $F$ from $\mc F$ is at most $\frac{v! 2^v}{\zeta \omega_n}$.
\end{proposition}
If the edge probability $p$ is even larger, then the following result, a consequence of Theorem~3.29 from the book Random Graphs by Janson, {\L}uczak and Ruci\'nski~\cite{JLR00}, shows that there will be many pairwise edge-disjoint copies of $H$ in $G_{n,p}$.
\begin{proposition} \label{prop:disjointcopies}
For every graph $H$ with $m_2(H) > 1$, there is a constant $\kappa = \kappa(H)$ such that, given constants $\rho, c > 0$ and setting $p = c n^{-1/m_2(H)}$, with high probability every induced subgraph of $G_{n,p}$ on at least $\tfrac12 \rho n$ vertices contains at least $\kappa c^{e(H) - 1} \rho^{v(H)} n^2 p$ edge-disjoint copies of $H$.
\end{proposition}
\subsubsection{Sparse regularity and counting}
Given an $n$-vertex graph $G$, two disjoint sets of vertices $X$ and $Y$ form an \emph{$(\varepsilon,p)$-regular pair of density $d$} if $d(X,Y) = d$ and, for all $X' \subseteq X$ with $\abs{X'} \ge \varepsilon \abs{X}$ and $Y' \subseteq Y$ with $\abs{Y'} \ge \varepsilon \abs{Y}$, we have $\abs{d(X',Y') - d(X,Y)} < \varepsilon p$,
where $d(U,V)$ denotes $\frac{e(U,V)}{|U||V|}$. This notion of regularity is inherited by induced and random subgraphs (see Lemma 4.3 in~\cite{GS05}).
\begin{proposition} \label{prop:slicing}
Suppose that $c \in (0, \tfrac12]$, $G$ is a graph and $U, W$ are disjoint vertex sets, both of size $N$, with $(U,W)$ an $(\varepsilon, p)$-regular pair of density $d = \omega(N^{-1})$. Then the following is true:
\begin{itemize}
\item[(i)] for $X \subset U$ and $Y \subset W$ with $\abs{X}, \abs{Y} \ge c N$, the pair $(X,Y)$ is $(\varepsilon / c,p)$-regular with density at least $d - \varepsilon p$ and
\item[(ii)] for $m \ge cdN^2$, the subgraph $G'$ of $G$ obtained by choosing $m$ edges from $G[U,W]$ uniformly at random forms a $(2 \varepsilon, p)$-regular pair with high probability.
\end{itemize}
\end{proposition}
An \emph{$(\varepsilon, p)$-regular partition} $\mc P$ of $G$ is a partition $V(G) = V_0 \cup V_1 \cup \dots \cup V_k$ such that $\abs{V_0} \le \varepsilon n$, $\abs{V_1} = \abs{V_2} = \dots = \abs{V_k}$ and all but at most $\varepsilon k^2$ pairs $(V_i, V_j)$, $1 \le i < j \le k$, are $(\varepsilon,p)$-regular. When the graph $G$ is edge-coloured, we say a partition is \emph{$(\varepsilon,p)$-regular} if for all but at most $\varepsilon k^2$ pairs of parts the edges of each colour between the two parts form an $(\varepsilon,p)$-regular subgraph. If $G$ has density $d$, we say it is \emph{$(\eta, D)$-upper-uniform} if, for all disjoint sets $X$ and $Y$ of size at least $\eta n$, we have $d(X,Y) \le D d$. With these definitions in place, we may state a version of the sparse regularity lemma, originally due to Kohayakawa and R\"odl~\cite{K97}.
\begin{theorem} \label{thm:sparsereg}
For all $\varepsilon, D > 0$ and $r, t \in \mathbb{N}$, there are $\eta > 0$ and $T \in \mathbb{N}$ such that every $r$-colouring of the edges of an $(\eta, D)$-upper-uniform graph $G$ of density $d$ on at least $T$ vertices has an $(\varepsilon,d)$-regular partition $\mc P$ with $k$ parts for some $k \in [t,T]$.
\end{theorem}
The final ingredient we will need is a sparse counting lemma due to Conlon, Gowers, Samotij and Schacht~\cite{CGSS14}. Given a graph $H$, integers $N$ and $m$, and $\varepsilon, p > 0$, we define the family $\mc G(H,N,m,p,\varepsilon)$ to be all graphs obtained by replacing each vertex of $H$ by an independent set of size $N$ and replacing each edge of $H$ by an $(\varepsilon,p)$-regular bipartite graph with exactly $m$ edges. Given such a graph $G$, let $G(H)$ denote the number of canonical copies of $H$ in $G$ (by which we mean that each vertex of $H$ in the copy belongs to the corresponding independent set in $G$).
\begin{theorem} \label{thm:sparsecount}
For every graph $H$ and every $d > 0$, there exist $\varepsilon, \xi > 0$ with the following property. For every $\eta > 0$, there is $C > 0$ such that if $p \ge C n^{-1/m_2(H)}$, then, with high probability, for every $N \ge \eta n$, $m \ge d p N^2$ and every subgraph $G$ of $G_{n,p}$ in $\mc G(H,N,m,p,\varepsilon)$, $G(H) \ge \xi N^{v(H)} \left( \frac{m}{N^2} \right)^{e(H)}$.
\end{theorem}
\subsection{The reduced graph}
With these preliminaries in hand, we can proceed with the proof of Theorem~\ref{thm:main}. We begin by describing the (standard) construction of the reduced graph and proving it has some useful properties.
Let $t, \varepsilon, \alpha$ be defined such that $1/t \le \varepsilon \ll \alpha \ll \kappa, c$, where $\kappa = \kappa(H)$ is the constant from Proposition~\ref{prop:disjointcopies}, $p = cn^{-1/m_2(H)}$ and `$\ll$' means these parameters are sufficiently small for the subsequent calculations to hold.
Now consider a monochromatic-$H$-free $3$-edge-colouring $\varphi$ of the edges of $G \sim G_{n,p}$ (where, as in the case of $\varphi_2$, we may only be using two of the three colours) and let $G_{red}$, $G_{blue}$ and $G_{green}$ represent the red, blue and green subgraphs of $G$, respectively. Given our choice of $\varepsilon$ and $t$ and setting $r = 3$ and $D = 4$, let $\eta$ and $T$ be as in Theorem~\ref{thm:sparsereg}. Proposition~\ref{prop:upperuniform} shows that $G$ is with high probability $(\eta, 4)$-upper-uniform. Hence, there is an $(\varepsilon,p)$-regular partition $V(G) = V_0 \cup V_1 \cup \hdots \cup V_k$, where $t \le k \le T$.
We next define three graphs, $\Gammared$, $\Gammablue$ and $\Gammagreen$, on the same vertex set $[k]$. $\Gammared$ has an edge between $i$ and $j$ if and only if the bipartite induced subgraph $G_{red}[V_i,V_j]$ forms an $(\varepsilon,p)$-regular pair of density at least $\alpha p$, with $\Gammablue$ and $\Gammagreen$ defined similarly with respect to $G_{blue}$ and $G_{green}$, respectively. The reduced (multi)graph $\Gamma$ is the coloured union of $\Gammared$ in red, $\Gammablue$ in blue and $\Gammagreen$ in green. Given a vertex $i \in [k]$, we write $N_{red}(i)$, $N_{blue}(i)$ and $N_{green}(i)$ for its neighbourhoods in $\Gammared$, $\Gammablue$ and $\Gammagreen$, respectively, and write $d_{red}(i), d_{blue}(i)$ and $d_{green}(i)$ for the sizes of these sets.
We first show that any induced subgraph of $\Gamma$ with linearly many vertices has a vertex with large degree in at least two of the colours.
\begin{lemma} \label{lem:largedegrees}
Define $f(\rho) = \tfrac{1}{24} \kappa c^{e(H)-1} \rho^{v(H) - 1}$. Suppose $\rho$ satisfies
\begin{equation} \label{ineq:rho}
6 \rho f(\rho) \ge 3 \varepsilon + \tfrac12 \alpha.
\end{equation}
Then, with high probability, for any subset $U \subseteq [k]$ of $\rho k$ vertices of the reduced graph $\Gamma$, we can find a vertex $u \in U$, two disjoint sets $X_1, X_2 \subset U$ of size at least $f(\rho) k$ and two distinct colours $\chi_1, \chi_2$ such that, for each $i \in [2]$, $u$ is adjacent to all vertices in $X_i$ with edges of colour $\chi_i$.
\end{lemma}
\begin{proof}
Let $W = \cup_{u \in U} V_u$ be the vertices in the parts of $G$ corresponding to the vertices of $U$ and note that $\abs{W} \ge (1 - \varepsilon)\rho n \ge \tfrac12 \rho n$. Hence, by Proposition~\ref{prop:disjointcopies}, we may assume $G[W]$ contains at least $24 \rho f(\rho) n^2 p$ edge-disjoint copies of $H$. Since there are no monochromatic copies of $H$ in the $3$-edge-colouring of $G$, each such copy must contain two edges of distinct colours. It easily follows that there are two colours, say red and blue, that each appear on at least $12 \rho f(\rho) n^2 p$ edges of $G[W]$.
Using Proposition~\ref{prop:upperuniform}, we observe that all but at most $(3 \varepsilon + \tfrac12 \alpha) n^2 p$ red edges of $G[W]$ are contained within dense $(\varepsilon,p)$-regular pairs. Indeed, at most $k \cdot 2 \binom{n/k}{2} p \leq n^2 p/k \leq \varepsilon n^2 p$ edges can be contained within the parts $V_i$, at most $\varepsilon k^2 \cdot 2 (n/k)^2 p \leq 2 \varepsilon n^2 p$ edges can be within irregular pairs $(V_i, V_j)$ and at most $\binom{k}{2} \cdot (n/k)^2 \alpha p \leq \frac{1}{2} \alpha n^2 p$ edges are within $(\varepsilon,p)$-regular pairs $(V_i, V_j)$ of density less than~$\alpha p$. From~\eqref{ineq:rho}, it follows that there are at least $6 \rho f(\rho) n^2 p$ red edges in $G[W]$ that are contained in $(\varepsilon,p)$-regular pairs of density at least $\alpha p$. Again by Proposition~\ref{prop:upperuniform}, each such pair can account for at most $2 (n/k)^2 p$ edges in $G[W]$, so there must be at least $3 \rho f(\rho) k^2$ such pairs, each of which corresponds to an edge of $\Gammared[U]$. By symmetry, we also find at least $3 \rho f(\rho) k^2$ edges in $\Gammablue[U]$.
Now let $A = \{ a \in U : d_{red}(a,U) \ge 2 f(\rho) k \}$. By summing the red degrees of vertices in $U$, distinguishing between those in $A$ and those not, we have
\[ 6 \rho f(\rho) k^2 \le \rho k \cdot \abs{A} + 2 f(\rho) k \cdot \rho k, \]
from which we deduce that $\abs{A} \ge 4 f(\rho) k$. Defining $B = \{ b \in U : d_{blue}(b,U) \ge 2 f(\rho) k \}$, we similarly have $\abs{B} \ge 4 f(\rho) k$. If $A \cap B \neq \emptyset$, let $u \in A \cap B$. Since $d_{red}(u,U), d_{blue}(u,U) \ge 2 f(\rho) k$, we can find the required disjoint sets $X_1$ and $X_2$ of size $f(\rho) k$ of red and blue neighbours, respectively.
Otherwise, for every $a \in A$ and $b \in B$, by Proposition~\ref{prop:upperuniform}, there are at least $\tfrac12 (n/k)^2 p$ edges in $G$ between $V_a$ and $V_b$, so one of the three colours appears on at least $\tfrac16 (n/k)^2 p > \alpha (n/k)^2 p$ edges. Let $\chi_1$ be the colour that appears most commonly as the majority colour in these $\abs{A} \abs{B}$ pairs. Ignoring the pairs that give rise to irregular pairs in $\Gamma_{\chi_1}$, it follows that there are at least $\tfrac13 \abs{A} \abs{B} - \varepsilon k^2$ edges in $\Gamma_{\chi_1}$ between $A$ and $B$. Provided $\alpha$ is sufficiently large with respect to $\varepsilon$, \eqref{ineq:rho} and our lower bound on $\abs{A}, \abs{B}$ imply this is at least $\tfrac14 \abs{A} \abs{B}$ edges.
If $\chi_1$ is not red, then take $\chi_2$ to be red and, by averaging, find some $u \in A$ with a set $X_1$ of at least $\tfrac14 \abs{B} \ge f(\rho) k$ neighbours in $B$ in the colour $\chi_1$. Since $u \in A$, we have $d_{red}(u,U) \ge 2 f(\rho) k$, so we can find a disjoint set $X_2$ of $f(\rho) k$ red neighbours of $u$, as required. Otherwise, if $\chi_1$ is red, we take $\chi_2$ to be blue. By the same argument, we can find some $u \in B$ with a set $X_1$ of at least $f(\rho) k$ red neighbours in $A$ and, since $u \in B$, it has large enough degree in $\Gammablue[U]$ to guarantee a disjoint set $X_2$ of blue neighbours.
\end{proof}
Through repeated use of this lemma, we can build large multicoloured structures in $\Gamma$.
\begin{corollary} \label{cor:twocliques}
Given $t \in \mathbb{N}$, let $\rho_0 = 1$ and, for $1 \le i \le 2t-2$, let $\rho_i = f(\rho_{i-1})$. Provided $6 \rho_{2t-3} f(\rho_{2t-3}) \ge 3 \varepsilon + \tfrac12 \alpha$, there is with high probability a vertex $v_0$ of $\Gamma$ contained in two monochromatic $t$-cliques of distinct colours.
\end{corollary}
\begin{proof}
Applying Lemma~\ref{lem:largedegrees} with $\rho = \rho_0 = 1$ and $U = [k]$, we find a vertex $u_0$ with large degrees in two colours. Without loss of generality, let the colours be red and blue. In the first stage of this algorithm, we iterate within the red neighbourhood of $u_0$, finding either a vertex in blue and green $t$-cliques, in which case we are done, or a red $t$-clique containing $u_0$.
To start, apply Lemma~\ref{lem:largedegrees} again, this time taking $U$ to be the set of red neighbours of $u_0$. This gives a vertex with large degrees in two colours. While one of those colours is red, we repeat the process, giving us a sequence of vertices with large nested red neighbourhoods. If this sequence (including $u_0$) has length $t-1$, by choosing an arbitrary vertex in the final red neighbourhood, we obtain a red $t$-clique containing $u_0$ and can proceed to the second stage.
Otherwise, after some $h \le t-2$ steps we obtain a vertex $u_1$ that has large blue and green neighbourhoods. In this case, we first iterate within the blue neighbourhood of $u_1$. Each subsequent vertex has either a large red neighbourhood or a large blue neighbourhood within which we can proceed. Once we have obtained a sequence of $2t-3$ vertices, there are either $t-1$ of them (including $u_0$) for which we iterated within a red neighbourhood or $t-1$ of them (including $u_1$) for which we iterated within a blue neighbourhood. In the first case, we choose an arbitrary vertex in the final neighbourhood to create a red $t$-clique containing $u_0$ and can then proceed to the second stage.
In the second case, choosing an arbitrary vertex in the final neighbourhood gives a blue $t$-clique containing $u_1$. We can then return to the green neighbourhood of $u_1$ and repeatedly iterate, at each point proceeding with a red or green neighbourhood of the latest vertex. Once we reach a sequence of length $2t-3$ (including the vertices between $u_0$ and $u_1$), we again either have $t-1$ vertices with green neighbourhoods or $t-1$ vertices with red neighbourhoods. In the first case, we can complete a green $t$-clique containing $u_1$ that, together with the earlier blue $t$-clique, completes the desired structure. In the second case, choosing a vertex in the final neighbourhood again completes a red $t$-clique containing $u_0$, with which we proceed to the second stage.
If we proceed to the second stage, we will have already found a red $t$-clique containing $u_0$. The second stage consists of mirroring the above process in the blue neighbourhood of $u_0$. This results in a blue $t$-clique containing $u_0$ or a vertex $u_2$ in the blue neighbourhood that is contained in both red and green $t$-cliques; in either case, we are done.
\end{proof}
\subsection{Building colour-forced structures}
Let $\mc K_n(H)$ denote the family of all copies of $H$ in $K_n$. Using the cliques from Corollary~\ref{cor:twocliques}, we will prove the following key proposition.
\begin{proposition} \label{prop:colourforced}
There are positive constants $\kappa = \kappa(c,H)$ and $\zeta = \zeta(c,H)$ such that, for any $K > 1$, with probability at least $1 - \kappa K^{-1} - o(1)$, for every monochromatic-$H$-free $3$-edge-colouring $\varphi$ of $G$, there is some colour $\chi$ with at least $\zeta K^{-1} n^{v(H)}$ $\chi$-forced copies of $H$ in $\mc K_n(H)$.
\end{proposition}
\begin{proof}
For convenience, we write $v = v(H)$ and $e = e(H)$. Setting $t = e (v - 2) + 1$ and applying Corollary~\ref{cor:twocliques}, which holds with high probability, we find a vertex $x$ in the reduced graph $\Gamma$ that is in, say, both a red and a blue $K_t$. Let $u_1, u_2, \hdots, u_{t-1}$ be the other vertices from the red clique and $w_1, w_2, \hdots, w_{t-1}$ be the other vertices from the blue clique. Consider the corresponding parts in the graph $G$. We know that, for all $1 \le i < j \le t-1$, the pairs $G_{red}[V_x, V_{u_i}], G_{red}[V_{u_i}, V_{u_j}], G_{blue}[V_x, V_{w_i}]$ and $G_{blue}[V_{w_i}, V_{w_j}]$ are all $(\varepsilon, p)$-regular pairs of density at least $\alpha p$. This situation is illustrated below in the case $H = K_3$.
\begin{figure}
\caption{The parts of $G$ corresponding to the two cliques from Corollary~\ref{cor:twocliques}.}
\label{fig:biclique}
\end{figure}
Partition the part $V_x$ into $v$ equal-sized subsets, $X_1, X_2, \hdots, X_v$, letting $N$ denote the size of these sets. Define $\eta$ by $N = \eta n$, noting that $\eta \ge \frac{1-\varepsilon}{kv}$, where we recall that $k \le T$ is the number of parts in the $(\varepsilon,p)$-regular partition of $G$. For each $i$, let $R_i \subset V_{u_i}$ and $B_i \subset V_{w_i}$ be arbitrary subsets of size $N$. Let $\mc X = \{X_1, \hdots, X_v\}$, $\mc R = \{R_1, \hdots, R_{t-1} \}$ and $\mc B = \{ B_1, \hdots, B_{t-1} \}$. By Proposition~\ref{prop:slicing}(i), it follows that the pairs $G_{red}[X_i, R_j], G_{red}[R_i, R_j], G_{blue}[X_i, B_j]$ and $G_{blue}[B_i, B_j]$ are all $(\varepsilon v, p)$-regular of density at least $(\alpha - \varepsilon)p$.
\begin{figure}
\caption{We divide the central part into $v(H)$ subsets and shrink the other parts accordingly.}
\label{fig:smallparts}
\end{figure}
Next consider the graph $H \rrprod{2} (H,h)$ and note that it has precisely $v + 2(t-1)$ vertices, with one central copy $H_0$ of $H$, whose edges are deleted, and each deleted edge $g \in E(H_0)$ supporting two otherwise vertex-disjoint copies $H_{g,1}$ and $H_{g,2}$ of $H$. We can build a bijection $\psi : V(H \rrprod{2} (H,h)) \rightarrow \mc X \cup \mc R \cup \mc B$ such that:
\begin{itemize}
\item $\psi(H_0) = \mc X$ and
\item for all $g \in E(H_0)$, $\psi(V(H_{g,1}) \setminus g) \subset \mc R$ and $\psi(V(H_{g,2}) \setminus g) \subset \mc B$.
\end{itemize}
That is, for each edge $g \in E(H_0)$, we send one of the attached copies of $H$ to the red parts $\mc R$ and the other copy to the blue parts $\mc B$.
\begin{figure}
\caption{We imagine a copy of $H$ between the subsets of the central part, with each edge supporting both a red and a blue copy of $H \setminus h$ using the parts from the red and blue cliques.}
\label{fig:forcedparts}
\end{figure}
Let $m = \tfrac12 \alpha p N^2$ and consider an edge $f = \{y,z\} \in E(H \rrprod{2} (H,h))$. If $f \in E(H_{g,1})$ for some $g \in E(H_0)$, then the pair $G_{red}[\psi(y), \psi(z)]$ is an $(\varepsilon v, p)$-regular pair of density at least $(\alpha - \varepsilon) p$. Define $\psi(f) \subseteq G_{red}[\psi(y), \psi(z)]$ to be the subgraph obtained from this pair by choosing $m$ edges uniformly at random. By Proposition~\ref{prop:slicing}, $\psi(f)$ is $(2 \varepsilon v, p)$-regular with high probability. Otherwise, $f \in E(H_{g,2})$ for some $g \in E(H_0)$, in which case we define $\psi(f)$ to be the subgraph obtained by selecting $m$ edges uniformly at random from $G_{blue}[\psi(y), \psi(z)]$. We again have, with high probability, that $\psi(f)$ is $(2 \varepsilon v, p)$-regular.
Now define the subgraph $G' \subset G$ to be the union of all these subgraphs $\psi(f)$, that is,
\[ V(G') = \bigcup_{y \in V(H \rrprod{2} (H,h))} \psi(y) \quad \textrm{and} \quad E(G') = \bigcup_{f \in E(H \rrprod{2} (H,h))} \psi(f). \]
From the above discussion, it is clear that $G' \in \mc G(H \rrprod{2} (H,h), N, m, p, 2 \varepsilon v)$, where this family of graphs is as defined before Theorem~\ref{thm:sparsecount}. Since $p = cn^{-1/m_2(H)}$ and, by Lemma~\ref{lem:productdensity}(c), $m_2(H \rrprod{2} (H,h)) < m_2(H)$, we can apply Theorem~\ref{thm:sparsecount}. This gives some constant $\xi > 0$ such that there are with high probability at least $\xi N^{v(H \rrprod{2} (H,h))} \left( \tfrac{m}{N^2} \right)^{e(H \rrprod{2} (H,h))}$ copies of $H \rrprod{2} (H,h)$ in $G'$, where each vertex $y$ comes from the set $\psi(y)$. To simplify this expression, we define $c' = \xi \eta^{v(H \rrprod{2} (H,h))} \left( \tfrac12 \alpha \right)^{e(H \rrprod{2} (H,h))}$ and $\mu = n^{v - 2} p^{e - 1}$. Our lower bound on the number of copies of $H \rrprod{2} (H,h)$ can then be written as $c' \mu^{2e} n^v$. Note that $c' > 0$ is a constant, while, since $p = cn^{-1/m_2(H)} \ge cn^{-(v-2)/(e-1)}$, $\mu = \Omega(1)$.
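For clarity, this rewriting can be checked directly. Recall that $v(H \rrprod{2} (H,h)) = v + 2e(v-2)$ and that, since each of the $2e$ attached copies contributes the $e-1$ edges of $H \setminus h$, $e(H \rrprod{2} (H,h)) = 2e(e-1)$. Hence
\[ \xi N^{v(H \rrprod{2} (H,h))} \Big( \frac{m}{N^2} \Big)^{e(H \rrprod{2} (H,h))} = \xi \eta^{v+2e(v-2)} \Big( \tfrac{\alpha}{2} \Big)^{2e(e-1)} n^{v + 2e(v-2)} p^{2e(e-1)} = c' \mu^{2e} n^v. \]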
In each such copy of $H \rrprod{2} (H,h)$, each missing edge $g \in E(H_0)$ in the central copy of $H$ supports both a red copy $H_{g,1}$ of $H \setminus h$ and a blue copy $H_{g,2}$ of $H \setminus h$. In particular, this means $g$ is green-forced and, as this holds for all edges $g$, this shows that the central copy $H_0$ forms a green-forced copy of $H$ in $\mc K_n(H)$.
\begin{figure}
\caption{Applying Theorem~\ref{thm:sparsecount}.}
\label{fig:colorforcing}
\end{figure}
However, we are not quite done, as these green-forced copies of $H$ may contribute to multiple copies of $H \rrprod{2} (H,h)$, in which case they will have been overcounted. To rectify this, and complete the proof, we now show that most of these copies of $H$ are not counted too often.
To this end, suppose we have found $r$ distinct green-forced central copies $H_0$ of $H$ above and enumerate them as $H^{(1)}, H^{(2)}, \hdots, H^{(r)}$. For each $1 \le i \le r$, let $Z_i$ denote the number of copies of $H \rrprod{2} (H,h)$ found above in which $H^{(i)}$ is the central copy $H_0$. We have thus far established that
\begin{equation} \label{ineq:firstmoment}
\sum_{i=1}^r Z_i \ge c' \mu^{2e} n^v,
\end{equation}
while we wish to show that $r \ge \tfrac{\zeta}{K} n^v$.
Now consider the quantity $Z_i^2$. This counts the number of ordered pairs $(A,B)$ of canonical copies of $H \rrprod{2} (H,h)$ with $H^{(i)}$ as the central copy $H_0$. Given such a pair, let $J = A \cup B$. In $J$, each edge $g \in H_0$ is contained in a red copy of $H \setminus h$ from $A$ and one from $B$ as well. The same holds true for the blue copies of $H \setminus h$. These attached copies of $H \setminus h$ in $J$ are mostly disjoint outside the central $H_0$, except that the two copies of the same colour supported on the same edge $g$ may share some vertices. We consider such a graph $J$ as a degenerate copy of $H \rrprod{4} (H,h)$. There are several isomorphism classes $J$ could belong to, depending on which vertices are shared by $A$ and $B$.
For each edge $g \in H_0$, let $F_{g,1}$ be the subgraph of $H$ induced by the vertices shared between the red copies of $H \setminus h$ in $A$ and $B$ supported on $g$ and define $F_{g,2}$ analogously for the blue copies. We include the edge $g$ in $F_{g,1}$ and $F_{g,2}$, even though it does not appear in $J$. Note that the union $\cup_{g \in E(H_0)} \cup_{j=1}^2 F_{g,j}$ determines the isomorphism class of $J$. Hence, there are at most $2^{v(H \rrprod{2} (H,h))}$ possible isomorphism types, as for each vertex in $H \rrprod{2} (H,h)$, we can decide whether or not it belongs to the corresponding $F_{g,j}$. Set $\kappa = 2^{v(H \rrprod{2} (H,h))}$.
We shall use Proposition~\ref{prop:fewsubgraphs} to show that, regardless of isomorphism type, there cannot be many copies of $J$ in $G$. Indeed, we have
\[ v(J) = v(H) + \sum_{g \in E(H_0)} \sum_{j=1}^2 \left(2 (v(H) - 2) - (v(F_{g,j}) - 2) \right) = v + 4e(v-2) - \sum_{g,j} (v(F_{g,j}) - 2) \]
and
\[ e(J) = \sum_{g \in E(H_0)} \sum_{j=1}^2 \left(2 (e(H) - 1) - (e(F_{g,j}) - 1) \right) = 4e (e-1) - \sum_{g,j} (e(F_{g,j}) - 1). \]
This gives
\[ n^{v(J)} p^{e(J)} = n^{v + 4e(v-2) - \sum_{g,j} (v(F_{g,j}) - 2)} p^{4e(e-1) - \sum_{g,j} (e(F_{g,j}) - 1)} = \frac{\mu^{4e} n^v}{\prod_{g,j} \left( n^{v(F_{g,j}) - 2} p^{e(F_{g,j}) - 1} \right) }. \]
Since $F_{g,j} \subseteq H$ and $p = cn^{-1/m_2(H)}$, we have $n^{v(F_{g,j}) - 2} p^{e(F_{g,j}) - 1} \ge c^{e(F_{g,j}) - 1}$ for all $g,j$. Thus, $n^{v(J)} p^{e(J)} \le c^{-2e^2} \mu^{4e} n^v$. Hence, by Proposition~\ref{prop:fewsubgraphs}, with probability at least $1 - K^{-1}$ there are at most $K c^{-2e^2} \mu^{4e} n^v $ copies of $J$ in $G$. Taking a union bound over all isomorphism classes, we find that with probability at least $1 - \kappa K^{-1}$, there are at most $\kappa K c^{-2e^2} \mu^{4e} n^v$ of these degenerate copies of $H \rrprod{4} (H,h)$ in $G$.
We noted earlier that each pair $(A,B)$ of copies of $H \rrprod{2} (H,h)$ counted by $\sum_i Z_i^2$ gives rise to a degenerate copy $J = A \cup B$ of $H \rrprod{4} (H,h)$. To reverse the correspondence, for each vertex of $J$ outside $A \cap B$, we must decide whether to assign it to $A$ or to $B$. Thus, there are at most $2^{v(H \rrprod{4} (H,h))} \le \kappa^2$ pairs $(A,B)$ giving rise to the same $J = A \cup B$.
Putting all this together, we have, with probability at least $1 - \kappa K^{-1}$,
\[ \sum_{i=1}^r Z_i^2 \le \kappa^3 K c^{-2e^2} \mu^{4e} n^v. \]
Define $I = \{ i : Z_i \ge \tfrac{2}{c'} \cdot \kappa^3 K c^{-2e^2} \mu^{2e} \}$. It then follows from the above inequality that $\sum_{i \in I} Z_i \le \tfrac12 c' \mu^{2e} n^v$. Plugging this into~\eqref{ineq:firstmoment}, we obtain $\sum_{i \notin I} Z_i \ge \tfrac12 c' \mu^{2e} n^v$. As there are at most $r$ summands, each of which has size less than $\tfrac{2}{c'} \cdot \kappa^3 K c^{-2e^2} \mu^{2e}$, we can conclude that
\[ r \ge \left( \frac{(c')^2 c^{2e^2}}{4 \kappa^3 K} \right) n^v. \]
Setting $\zeta = \tfrac{1}{4} (c')^2 c^{2e^2} \kappa^{-3}$ completes the proof.
\end{proof}
\subsection{Finishing the proof}
We begin with part (a). Suppose we have a monochromatic-$H$-free $2$-edge-colouring $\varphi_2$ of $G$ and $q_2 = \omega_n n^{-2}$ for some $\omega_n \rightarrow \infty$. Set $K = \omega_n^{1/2}$. By Proposition~\ref{prop:colourforced}, with probability $1 - \kappa K^{-1} - o(1) = 1 - o(1)$, there is some colour $\chi$ such that there are at least $\zeta K^{-1} n^{v(H)}$ $\chi$-forced copies of $H$. As the colouring $\varphi_2$ only has red and blue edges, the colour $\chi$ must be green.
Each edge can be in at most $n^{v(H)-2}$ green-forced copies of $H$, so there must be at least $\zeta K^{-1} n^2$ green-forced edges. If any of these edges were to appear in $G_{n,q_2}$, we would not be able to extend the colouring $\varphi_2$, as colouring the edge red or blue creates a monochromatic copy of $H$. Hence, the probability that $\varphi_2$ extends to $G \cup G_{n,q_2}$ is at most
\[ \left( 1 - q_2 \right)^{\zeta K^{-1} n^2} \le \exp \left( - \zeta K^{-1} n^2 q_2 \right) = \exp \left( - \zeta K \right) = o(1), \]
as required.
Part (b) follows the same lines. We begin as before: given the colouring $\varphi_3$ and some $q_3 = \omega_n n^{-1/m(H)}$, where $\omega_n \rightarrow \infty$, we set $K = \omega_n^{1/2}$. By Proposition~\ref{prop:colourforced}, with probability $1 - o(1)$, there is some colour $\chi$ with at least $\zeta K^{-1} n^{v(H)}$ $\chi$-forced copies of $H$.
If any $\chi$-forced copy of $H$ appears in $G_{n,q_3}$, then $\varphi_3$ cannot be extended. Indeed, colouring all of its edges with the colour $\chi$ clearly creates a monochromatic copy of $H$, but since all the edges are $\chi$-forced, using any other colour on an edge also completes a monochromatic copy. By Proposition~\ref{prop:manysubgraphs}, the probability that none of the $\chi$-forced copies of $H$ appear in $G_{n,q_3}$ is at most
\[ \frac{v(H)! 2^{v(H)} K}{\zeta \omega_n} = \frac{v(H)! 2^{v(H)}}{ \zeta K} = o(1), \]
as desired. This completes the proof of Theorem~\ref{thm:main}.
\section{Concluding remarks}
Our investigations point to several open problems, perhaps the most interesting of which is to classify all graphs $H$ for which Theorem~\ref{thm:main} holds. We have shown that our condition, that there exists an edge $h$ such that $m_2(H \setminus h) < m_2(H)$, cannot be entirely dispensed with. However, there are also examples of graphs which do not satisfy this condition, but still satisfy some of the conclusions of Theorem~\ref{thm:main}.
Indeed, our proof of Theorem~\ref{thm:main}(a) readily generalises to the following statement.
\begin{theorem} \label{thm:general}
Given a graph $H$, suppose there are graphs $F_{red}$, $F_{blue}$ and a matching $M$ such that
\begin{itemize}
\item[(i)] $V(F_{red}) \cap V(F_{blue}) = V(M)$, with $V(M)$ forming an independent set in $F_{red}$ and $F_{blue}$,
\item[(ii)] $m_2(H) > m_2(F_{red} \cup F_{blue})$,
\item[(iii)] $m_2(H) \ge \frac{e(J)}{v(J) - v(M)}$ for all $J \subseteq F_{red} \cup F_{blue}$ with $V(M) \subset V(J)$ and $e(J) \ge 1$ and
\item[(iv)] for any partition of the matching $M = M_{red} \cup M_{blue}$, $H$ is a subgraph of $F_{red} \cup M_{red}$ or $F_{blue} \cup M_{blue}$.
\end{itemize}
Let $c > 0$ be fixed and, for $p = c n^{-1/m_2(H)}$, let $G = G_{n,p}$. Then, with high probability, the following holds. Let $\varphi$ be an arbitrary monochromatic-$H$-free $2$-edge-colouring of $G$. If $q = \omega(n^{-2})$, then, with high probability, $\varphi$ cannot be extended to a monochromatic-$H$-free $2$-edge-colouring of $G \cup G_{n,q}$.
\end{theorem}
One of the simplest examples satisfying the conditions of Theorem~\ref{thm:general} is the graph $H$ consisting of two triangles joined by a path of length $\ell \geq 2$. In this case we can take $M$ to have size three with the corresponding graphs $F_{red}$ and $F_{blue}$ depicted below.
This example shows that the condition in Theorem~\ref{thm:main} is not best possible, as Theorem~\ref{thm:general} applies to a wider class of graphs. However, there is a subtle trade-off in finding appropriate forcing structures $F_{red} \cup F_{blue}$ for Theorem~\ref{thm:general} --- we need them to be sparse enough to satisfy (ii) and (iii), but to have enough copies of $H$ for (iv).
\begin{figure}
\caption{$H$, drawn on the left, consists of two triangles joined by a path of length $\ell$. On the right, the corresponding graphs $F_{red}$ and $F_{blue}$.}
\label{fig:blah}
\end{figure}
\noindent
\textbf{Acknowledgements.} Part of this work was carried out while the third author visited the second and fourth authors at FU Berlin and he is grateful for their hospitality.
\end{document}
\begin{document}
\setcounter{secnumdepth}{5}
\numberwithin{equation}{section}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{coro}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{assum}{Assumption}[section]
\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\renewcommand{\theequation}
{\thesection.\arabic{equation}}
\newcommand{\mar}[1]{{\marginpar{\sffamily{\scriptsize
#1}}}}
\newcommand\R{\mathbb{R}}
\newcommand\RR{\mathbb{R}}
\newcommand\CC{\mathbb{C}}
\newcommand\NN{\mathbb{N}}
\newcommand\ZZ{\mathbb{Z}}
\renewcommand\Re{\operatorname{Re}}
\renewcommand\Im{\operatorname{Im}}
\newcommand\D{\mathcal{D}}
\def \l {\lambda}
\newcommand{\supp}{{\rm supp}{\hspace{.05cm}}}
\newcommand\wrt{\,{\rm d}}
\newcommand {\BMO}{{\mathrm{BMO}}}
\newcommand {\Rn}{{\mathbb{R}^{n}}}
\newcommand {\rb}{\rangle}
\newcommand {\lb}{{\langle}}
\newcommand {\HT}{\mathcal{H}}
\newcommand {\Hp}{\mathcal{H}^{p}_{FIO}(\Rn)}
\newcommand {\ud}{\mathrm{d}}
\newcommand {\Sp}{S^{*}(\Rn)}
\newcommand {\Sw}{\mathcal{S}}
\newcommand {\w}{{\omega}}
\newcommand {\ph}{{\varphi}}
\newcommand {\para}{{\mathrm{par}}}
\newcommand {\N}{{{\mathbb N}}}
\newcommand {\Z}{{{\mathbb Z}}}
\newcommand {\F}{{\mathcal{F}}}
\newcommand {\C}{{\mathbb C}}
\newcommand {\vanish}[1]{\relax}
\newcommand {\ind}{{\mathbf{1}}}
\newtheorem{corollary}[theorem]{Corollary}
\def\pnorm#1{{ \Big( #1 \Big) }}
\def\norm#1{{ \Big| #1 \Big| }}
\def\Norm#1{{ \Big\| #1 \Big\| }}
\def\inn#1#2{\langle#1,#2\rangle}
\def\set#1{{ \left\{ #1 \right\} }}
\newcommand{\red}[1]{{\color{red} #1}}
\newcommand{\blue}[1]{{\color{blue} #1}}
\newcommand{\sg}[1]{{\bf\color{red}{#1}}}
\title[Hilbert transform along planar curves]
{Hilbert transforms along variable planar curves: Lipschitz regularity}
\thanks{{\it 2010
Mathematics Subject Classification.} Primary 42B20;
Secondary 42B25.}
\thanks{{\it Key words and phrases:} Hilbert transform, variable curve, square function estimate, local smoothing estimate.}
\author{Naijia Liu \ and \ Haixia Yu}
\address{
Naijia Liu,
Department of Mathematics,
Sun Yat-sen University,
Guangzhou, 510275,
P.R.~China}
\email{[email protected]}
\address{
Haixia Yu,
Department of Mathematics,
Sun Yat-sen University,
Guangzhou, 510275,
P.R.~China}
\email{[email protected]}
\subjclass[]{}
\begin{abstract}
In this paper, for $1<p<\infty$, we obtain the $L^p$-boundedness of the Hilbert transform $H^{\gamma}$ along a variable plane curve $(t,u(x_1, x_2)\gamma(t))$, where $u$ is a Lipschitz function with small Lipschitz norm, and $\gamma$ is a general curve satisfying some suitable smoothness and curvature conditions.
\end{abstract}
\maketitle
\section{introduction}
Let $u:\ \R^2\to \R$ be a measurable function and let $\gamma:\ [-1, 1]\to \R$ with $\gamma(0)=0$ be a smooth function that is either even or odd and is strictly increasing on $[0, 1]$. We define
\begin{align}
\label{0721-operator-hilbert} H^{\gamma}f(x_1,x_2)={\rm p.\,v.}\int_{-1}^{1}f(x_{1}-t,x_{2}-u(x_1,x_2)\gamma(t))\,\frac{dt}{t}.
\end{align}
The main purpose of this article is to study the $L^{p}$-boundedness of the Hilbert transform $H^{\gamma}$ along the variable plane curve $(t,u(x_1,x_2)\gamma(t))$ defined in \eqref{0721-operator-hilbert}.
Our Hilbert transform $H^{\gamma}$ is motivated by Stein's conjecture \cite{MR934224}, which corresponds to the limiting case $\gamma(t)=t$ not covered here. The so-called Stein conjecture can be stated as follows: Denote
$$H_{\varepsilon_0}f(x_1,x_2)={\rm p.\,v.}\int_{-\varepsilon_0}^{\varepsilon_0}
f(x_1-t,x_2-u(x_1,x_2)t)\,\frac{dt}{t},
$$
and one wishes to know whether the operator $H_{\varepsilon_0}$ is bounded on $L^p$ for some $p\in(1,\infty)$, where $u:\ \mathbb{R}^2\rightarrow \mathbb{R}$ is a Lipschitz function and $\varepsilon_0>0$ is small enough depending on $\|u\|_{\textrm{Lip}}$. It should be mentioned that this Lipschitz regularity imposed on $u$ is critical, since a counterexample based on a construction of the Besicovitch-Kakeya set shows that no $L^p$-boundedness of $H_{\varepsilon_0}$ is possible for $C^\alpha$ regularity with $\alpha<1$, for any $1<p<\infty$.
We now review some of the history of this conjecture. For any real analytic function $u$, Stein and Street \cite{MR2880220} obtained the $L^p$-boundedness of $H_{\varepsilon_0}$ for all $1<p<\infty$. For $C^\infty$ vector fields $u$ under certain geometric assumptions, Christ, Nagel, Stein and Wainger \cite{MR1726701} showed that $H_{\varepsilon_0}$ is bounded on $L^{p}$ for all $1<p<\infty$. For $u\in C^{1+\alpha}$ with $\alpha>0$, Lacey and Li \cite{MR2654385} proved the $L^2$-boundedness of $H_{\varepsilon_0}$, conditional on
the boundedness of the so-called Lipschitz-Kakeya maximal operator. For some other partial results toward this conjecture, we refer to \cite{MR545242,MR1009171,MR2219012}.
In the last decade, further model problems with additional simplifying conditions on $u$ have been considered. Based on the works of Lacey and Li \cite{MR2219012,MR2654385}, Bateman \cite{MR3090145} proved the $L^p$-boundedness of $H_{\varepsilon_0}P_{j}^{2}$, with a bound independent of $j\in \mathbb{Z}$, where $u(x_1,x_2)=u(x_1,0)$ is only a one-variable measurable function, $P_{j}^{2}$
is the Littlewood-Paley projection applied in the second variable and $1<p<\infty$. Later, under the same condition on $u$, Bateman and Thiele \cite{MR3148061} obtained the $L^p$-boundedness of $H_{\varepsilon_0}$ for all $3/2<p<\infty$. Furthermore, Guo \cite{MR3393679,MR3592519} obtained similar results under the condition that $u$ is constant along a Lipschitz curve, i.e., a perturbation of Bateman and Thiele's results \cite{MR3090145,MR3148061} at the critical Lipschitz regularity.
The so-called Stein conjecture is very difficult. However, we tend to think that introducing curvature into $H_{\varepsilon_0}$ offers another way of approaching this outstanding open problem. Therefore, we extend the variable straight line $(t,u(x_1,x_2)t)$ in $H_{\varepsilon_0}$ to the variable curve $(t,u(x_1,x_2)\gamma(t))$ in $H^{\gamma}$ (see \eqref{0721-operator-hilbert}) and consider the $L^{p}$-boundedness of the operator $H^{\gamma}$.
We now state our main theorem.
\begin{theorem}\label{f19}
We assume that $\gamma$ satisfies the following conditions:
\begin{enumerate}\label{curve gamma}
\item[\rm(i)]
$\inf_{0<t\le 1}\big|(\gamma'/\gamma'')'(t)\big|>0;$
\item[\rm(ii)]
$\min_{j\in \{1, 2\}} \inf_{0<t\le 1}\big|t^j\gamma^{(j)}(t)/\gamma(t)\big|>0;$
\item[\rm(iii)]
$\sup_{1\le j\le N} \sup_{0<t\le 1}\big|t^{j}\gamma^{(j)}(t)/\gamma(t)\big|<\infty,$ for a sufficiently large integer $N$.
\end{enumerate}
Then for every $u: \R^2\to \R$ with $\|u\|_{\mathrm{Lip}}\le 1/(2\gamma(1))$, it holds that
\begin{equation}
\|H^{\gamma} f\|_p \lesssim_{\gamma, p} \|f\|_p
\end{equation}
for every $1<p<\infty$.
\end{theorem}
\begin{remark}
In \cite{NjSlHx20}, for $1<p\leq\infty$, Liu, Song and Yu obtained the $L^{p}$-boundedness of the corresponding maximal function $M^{\gamma}$ along the variable curve $(t,u(x_1,x_2)\gamma(t))$, under the same assumptions imposed on $u$ and $\gamma$ as in Theorem \ref{f19}. Therefore, Theorem \ref{f19} should be regarded as a natural continuation of \cite{NjSlHx20}.
\end{remark}
\begin{remark}
The smoothness assumption and the strict monotonicity assumption on $\gamma$ are made to avoid certain technicality. In the assumption $\rm(iii)$ of the above theorem, it is more than enough to take $N=10^3$. Some examples of curves satisfying all of the assumptions in Theorem \ref{f19} can be found in \cite[Example 1.5]{NjSlHx20}. In particular, the non-flat homogeneous curves $[t]^{\alpha}$ satisfy our assumptions in Theorem \ref{f19}, where the notation $[t]^{\alpha}$ stands for either $|t|^{\alpha}$ or \textrm{sgn}$(t)|t|^{\alpha}$, $\alpha\in (0,1)\cup (1,\infty)$. Moreover, one can allow the function $u$ to have a larger Lipschitz norm if one is willing to replace the truncation $[-1, 1]$ in \eqref{0721-operator-hilbert} by a smaller interval.
\end{remark}
To put our operator \eqref{0721-operator-hilbert} in further historical context, let us ignore the truncation $[-1, 1]$ and define the Hilbert transform along the variable curve $(t,u(x_1,x_2)\gamma(t))$ as
$$H^{\gamma}_{\infty}f(x_1,x_2)={\rm p.\,v.}\int^{\infty}_{-\infty}
f(x_1-t,x_2-u(x_1,x_2)\gamma(t))\,\frac{dt}{t}.$$
For the case where $u$ is constant, the study of the boundedness properties of $H^{\gamma}_{\infty}$ first appeared in the works of Jones \cite{MR161099} and Fabes and Rivi\`ere \cite{MR209787} in connection with the behavior of constant coefficient parabolic differential operators. Later, the study was extended to more general families of curves; see, for example, \cite{MR508453,MR714828,MR813582,MR1046743,MR1296728}. This is a classical area of harmonic analysis, which is directly related to the pointwise convergence of Fourier series and to Calder\'on-Zygmund theory.
For the case where $u$ is a one-variable function, Carbery, Wainger and Wright \cite{MR1364881} first obtained the $L^{p}$-boundedness of $H^{\gamma}_{\infty}$ for all $1<p<\infty$, but with the restriction that $u(x_1,x_2)=x_1$, where $\gamma\in C^{3}(\mathbb{R})$ is either an odd or even convex curve on $(0,\infty)$ satisfying $\gamma(0)=\gamma'(0)=0$ such that the quantity $t\gamma''(t)/\gamma'(t)$ is decreasing and bounded below on $(0,\infty)$. Under the same conditions imposed on $\gamma$, Bennett \cite{MR1926840} proved the $L^2$-boundedness of $H^{\gamma}_{\infty}$ under the restriction that $u(x_1,x_2)=P(x_1)$, where $P$ is a polynomial. For some other related results under the same restriction on $u$, we refer to \cite{MR2948242,LiYu21}. In \cite{MR1689214}, Carbery and P\'{e}rez showed the $L^{p}$-boundedness of $H^{\gamma}_{\infty}$ for all $1<p<\infty$ as a generalization of the result of Seeger \cite{MR1258491}, where $u(x_1,x_2)\gamma(t)$ was written as $S(x_1,x_1-t)$, under more restrictive third order assumptions on $S$. A key breakthrough on this one-variable case was made by Guo, Hickman, Lie and Roos in \cite{MR3669936}, where the $L^p$-boundedness of $H^{\gamma}_{\infty}$ was proved for all $1<p<\infty$, with $u(x_1,x_2)=u(x_1,0)$ only a measurable function and $\gamma(t)=[t]^{\alpha}$, $\alpha\in (0,1)\cup (1,\infty)$.
For the case where $u$ is a general two-variable function, Seeger and Wainger \cite{MR2053571} obtained the $L^{p}$-boundedness of $H^{\gamma}_{\infty}$ for all $1<p<\infty$, where $u(x_1,x_2)\gamma(t)$ was written as $\Gamma(x_1,x_2,t)$, under some convexity and doubling hypotheses holding uniformly in $(x_1,x_2)$. Single annulus $L^p$ estimates for $H^{\gamma}_{\infty}$ for all $2<p<\infty$ were obtained by Guo, Hickman, Lie and Roos \cite{MR3669936}, where $u$ is only a measurable function and $\gamma(t)=[t]^{\alpha}$ with $\alpha\in (0,1)\cup (1,\infty)$. Recently, for the same non-flat homogeneous curve $[t]^{\alpha}$, Di Plinio, Guo, Thiele and Zorin-Kranich \cite{MR3841536} obtained the $L^{p}$-boundedness of the truncated Hilbert transform $H^{\gamma}$ along the variable curve $(t,u(x_1, x_2)[t]^{\alpha})$ for $1<p<\infty$ when $\|u\|_{\textrm{Lip}}$ is small, where the truncation $[-1, 1]$ plays a crucial role.
Finally, let us mention the new ingredients that will be used to prove our Theorem \ref{f19}. Theorem \ref{f19} generalizes the result of Di Plinio, Guo, Thiele and Zorin-Kranich \cite{MR3841536} for $H^{\gamma}$ from the special non-flat homogeneous curve $[t]^{\alpha}$ to more general curves $\gamma(t)$. In the non-flat homogeneous curve case $\gamma(t)=[t]^\alpha$, we have the following special property:
\begin{align}\label{special property}
\gamma(ab)=\gamma(a)\gamma(b), \quad \textrm{for all}\ \ a, b>0,
\end{align}
which implies that one can add $v(x,y)(u(x,y)v^{-1}(x,y))^{\alpha/(\alpha-1)}$ to the partition of unity used to split the operators under consideration, where $v(x,y)$ is the largest integer power of $2$ less than $u(x,y)$; see \cite[(2.36)]{MR3841536} for more details. In the general curve case, such a decomposition is not appropriate, owing to the absence of the property \eqref{special property}. Therefore, we have to split our operators by the classical partition of unity, i.e., $1=\Sigma_{l\in \mathbb{Z}} \phi(2^{l}t)$; see \eqref{201127e6-9}. However, we still encounter the difficulty that $\gamma(2^{-l}t)\neq \gamma(2^{-l})\gamma(t)$, even though we have used this classical partition of unity. To overcome this difficulty and separate $2^{-l}$ from $\gamma(2^{-l}t)$, we replace $\gamma(2^{-l}t)$ by $\gamma(2^{-l})\gamma_l(t)$, where $\gamma_l(t)=\gamma(2^{-l}t)/\gamma(2^{-l})$ (see \eqref{c2}), in our proof. In Lemma \ref{b44} below, we will see that $\gamma_l$ behaves ``uniformly" in the parameter $l$.
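Schematically, and ignoring the truncation to $[-1,1]$ and the behavior at negative $t$, this rescaling amounts to the change of variables $t=2^{-l}s$ in the piece of \eqref{0721-operator-hilbert} localized by $\phi(2^{l}t)$:
\begin{align*}
{\rm p.\,v.}\int f\big(x_{1}-t,x_{2}-u(x_1,x_2)\gamma(t)\big)\,\phi(2^{l}t)\,\frac{dt}{t}
={\rm p.\,v.}\int f\big(x_{1}-2^{-l}s,x_{2}-u(x_1,x_2)\gamma(2^{-l})\gamma_{l}(s)\big)\,\phi(s)\,\frac{ds}{s},
\end{align*}
so that the parameter $l$ enters only through the factors $2^{-l}$ and $\gamma(2^{-l})$ and through the rescaled curve $\gamma_{l}$, which Lemma \ref{b44} controls uniformly in $l$.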
On the other hand, in \cite{MR3841536}, the authors need to obtain a local smoothing estimate for
$$\mathcal{A}_{u,t^\alpha} f(x,y)=\int_{-\infty}^{\infty} f(x-ut,y-ut^\alpha)\psi_0(t)\,dt.
$$
However, for the general curve case considered here, owing to the lack of the property \eqref{special property}, we cannot reduce matters to a local smoothing estimate for the corresponding operator $\mathcal{A}_{u,\gamma(t)}$, which is one of the main difficulties in the problem. In this paper, we establish a local smoothing estimate for $A^l_u$ (see \eqref{201210e7.7}) based on Sogge's cinematic curvature condition in \cite{MR4078231}. Compared with $\mathcal{A}_{u,t^\alpha}$, the local smoothing estimate for $A^l_u$ leads to essential difficulties, since the critical point of the phase function in $\mathcal{A}_{u,t^\alpha}$ is independent of $u$, whereas the critical point of the phase function in $A^l_u$ depends on $u$.
Another highlight of this paper is that we generalize the result in Seeger \cite{MR955772} to a general homogeneous type space associated with the curve $\gamma$. Because the dilation $(2^{-l_{0}}\gamma_{l_{0}}(2^{l})^{-1},2^{-k})$ is not homogeneous, we cannot use the result in Seeger \cite{MR955772} directly; one novelty is that we develop a corresponding theory which works in this general homogeneous type space. To prove Lemma \ref{a39}, if one uses the Sobolev embedding inequality, it causes a different loss for different values of $k$. To overcome this difficulty, we would have to impose an additional restriction on the doubling constants $C_{0,L}$ and $C_{0,U}$, which would lead to a uniform estimate for different values of $k$. To get a better result, we do not use the Sobolev embedding inequality first; instead, we regard $\sup_{u\in[1,2]}(\sum_{k\in E_{v}}|\cdot|^{2})^{1/2}$ as a norm in a Banach space and generalize Seeger's result to functions taking values in a Banach space.
The article is organized as follows. In Section $2$, we obtain several properties of $\gamma$ which we need in the proof of our main theorem. In Section $3$, we study dyadic cubes, dyadic maximal functions and dyadic sharp maximal functions in a general homogeneous type space, which allows us to prove the key Lemma \ref{a13} in Section $4$. In Section $4$, we prove this key Lemma \ref{a13} in a manner similar to that in Seeger \cite{MR955772}, where the underlying space is $\mathbb{R}^{n}$ instead of the general homogeneous type space. This lemma allows us to obtain Lemma \ref{a39}, which further leads to Theorem \ref{f19} for $p>2$. In Section $5$, we prove our main Theorem \ref{f19}. To make the proofs in Section $5$ clearer, we defer the proofs of four lemmas from Section $5$ to Sections $6$, $7$ and $8$.
Throughout this paper, we write $f(s)\lesssim g(s)$ to indicate that $f(s)\leq Cg(s)$ for all $s$, where the constant $C>0$ is independent of $s$ but allowed to depend on $\gamma$, and similarly for $f(s)\gtrsim g(s)$ and $f(s)\simeq g(s)$. $\hat{f}$ denotes the Fourier transform of $f$, and $\check{f}$ is the inverse Fourier transform of $f$. Let $\mathcal{S}$ denote the collection of all Schwartz functions. For any $s\in \R$, we denote by $[s]$ the integer part of $s$. For any set $E$, we use $\chi_E$ to denote the characteristic function of $E$, and $E^{\complement}$ indicates its complementary set.
\section{some properties about $\gamma$}
First of all, without loss of generality, we assume that $\gamma(1)=1$. As a consequence, the function $u$ satisfies $\|u\|_{{\rm Lip}}\le 1/2$.
To simplify our future discussion, we extend the domain of the function $\gamma$ from $[-1, 1]$ to $\R$ by setting
\begin{equation}\label{201124e2-3}
\gamma(t)=\gamma(t^{-1})^{-1}
\end{equation}
for every $|t|\ge 1$. Then we have $\gamma\in C([0,\infty))$, $\gamma$ is increasing on $[0,\infty)$ and $\gamma([0,\infty))=[0,\infty).$ From the assumption $\rm(ii)$ in Theorem \ref{f19} with $j=2$, we see that $\gamma$ is either convex or concave on $(0,1]$. These two cases can be handled in exactly the same way, and therefore we assume without loss of generality that $\gamma$ is convex, that is, $\gamma''(t)>0$ for every $0<t\leq1$.
\begin{lemma}\label{c1}
Under the above assumptions on $\gamma$, we have
\begin{enumerate}\label{curve gamma_z}
\item[\rm(i)] $\gamma(t)$ satisfies a doubling condition on $\R^{+}$, i.e.,
\begin{equation}
C_{0, L}\le \frac{\gamma(2t)}{\gamma(t)}\le C_{0, U}
\end{equation}
for some $C_{0, L}>1$ and $C_{0, U}>1$ that depend only on $\gamma$;
\item[\rm(ii)] There exists a positive constant $C_{1, L}>1$ such that
\begin{align*}
C_{1, L}\leq \frac{\gamma'(2t)}{\gamma'(t)}
\end{align*}
for all $0< t\leq1/2$.
\end{enumerate}
\end{lemma}
\begin{proof}
We start with the proof of (i).
Define $h(t)=\ln \gamma(t)$ for all $t\in(0,1]$. By conditions (ii) and (iii) of Theorem
\ref{f19}, we know that there exists $C>1$ such that $th'(t)\in[C^{-1},C]$. Then for all $t\in(0,1/2],$ we have
\begin{align*}
\ln \frac{\gamma(2t)}{\gamma(t)}=h(2t)-h(t)=h'(\theta t)t\in [C^{-1}\theta^{-1},C\theta^{-1}]\subset[2^{-1}C^{-1},C]
\end{align*}
for some $\theta\in[1,2],$ and therefore
\begin{align}\label{f22}
1<e^{2^{-1}C^{-1}}\le \frac{\gamma(2t)}{\gamma(t)}\le e^{C}
\end{align}
for all $t\in(0,1/2]$. By \eqref{201124e2-3}, we know that \eqref{f22} also holds
for all $t\in[1,\infty)$.
It remains to consider $t\in[1/2,1]$.
By continuity, there exists $t_{0}\in[1/2,1]$ such that $\sup_{t\in[1/2,1]}\gamma(t)\gamma(2^{-1}t^{-1})=\gamma(t_{0})\gamma(2^{-1}t_{0}^{-1})$. Note that either $t_0$ or $2^{-1}t_0^{-1}$ must be less than $3/4$, and recall that $\gamma(1)=1$.
Then, $\gamma(t_{0})\gamma(2^{-1}t_{0}^{-1})\leq\gamma(3/4)<1$.
As a result, for all $t\in[1/2,1]$, we have
$$\frac{\gamma(2t)}{\gamma(t)}=\frac{1}{\gamma(t)\gamma(2^{-1}t^{-1})}\geq \frac{1}{\gamma(3/4)}>1.$$
Because $\gamma(t)$ is increasing on $[0,1]$, for all $t\in[1/2,1]$, we obtain
$$\frac{\gamma(2t)}{\gamma(t)}=\frac{1}{\gamma(t)\gamma(2^{-1}t^{-1})}\leq \frac{1}{\gamma(1/2)^{2}}.$$
By setting
\begin{equation}
C_{0, L}=\min\{e^{2^{-1}C^{-1}},\gamma(3/4)^{-1}\}>1 \ \ \textrm{and} \ \ C_{0, U}=\max\{e^{C},\gamma(1/2)^{-2}\}>1,
\end{equation} we finish the proof of (i). The proof of (ii) is essentially the same as that of \eqref{f22}, so we omit it.
\end{proof}
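For instance, in the model case $\gamma(t)=t^{\alpha}$ with $\alpha>1$ (so that $\gamma$ is convex, extended by \eqref{201124e2-3} to all of $[0,\infty)$), one has $\gamma(2t)/\gamma(t)=2^{\alpha}$ for every $t>0$ and $\gamma'(2t)/\gamma'(t)=2^{\alpha-1}$ for every $0<t\leq1/2$, so one may take $C_{0, L}=C_{0, U}=2^{\alpha}$ and $C_{1, L}=2^{\alpha-1}$.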
For $l\in \N$, we define
\begin{equation}\label{c2}
\gamma_{l}(t)=\frac{\gamma(2^{-l}t)}{\gamma(2^{-l})}.
\end{equation}
In the following Lemma \ref{b44}, we would like to prove some estimates for $\gamma_{l}$ uniformly in $l\in \N$. In the model case $\gamma(t)=t^2$, it always holds that $\gamma_l(t)=t^2$. In the general case, $\gamma$ is not homogeneous anymore, and therefore we need a uniform control over the rescaled versions of $\gamma$.
\begin{lemma}\label{b44}
Let $I=[1/2, 2]$. For $l\in \N$ with $2^l\gg 1$, we then have
\begin{enumerate}
\item[\rm(i)] $ |\gamma_{l}^{(k)}(t)|\simeq1$, \ \ for $t\in I$ and $k=0,1,2$;
\item[\rm(ii)] $|\gamma_{l}^{(k)}(t)|\lesssim 1$, \ \ for $t\in I$ and $3\leq k\leq N$;
\item[\rm(iii)] $|((\gamma_{l}')^{-1})(t)|\simeq 1$, \ \
for $t\simeq 1$;
\item[\rm(iv)] $|((\gamma_{l}')^{-1})^{(k)}(t)|\lesssim 1$, \ \
for $t\simeq 1$ and $0<k<N$,
\end{enumerate}
where $(\gamma_{l}')^{-1}$ is the inverse function of $\gamma_{l}'.$ Here the implicit constants depend only on $\gamma$.
\end{lemma}
\begin{proof}
We start with the proof of (i).
Because $\gamma_{l}$ is increasing on $[0,\infty)$, for $t\in I$ we have
\begin{align*}
\gamma_{l}(t)\leq \frac{\gamma(2^{-l+1})}{\gamma(2^{-l})}\leq C_{0,U}\ \ \textrm{and }\ \
\gamma_{l}(t)\geq \frac{\gamma(2^{-l-1})}{\gamma(2^{-l})}\geq C_{0,U}^{-1}.
\end{align*}
This finishes the proof of (i) with $k=0.$ Next for $k=1,2,$
by the conditions (ii) and (iii) of Theorem
\ref{f19}, we conclude that
\begin{align}\label{f23}
|t^{k}\gamma_{l}^{(k)}(t)|=\bigg|\frac{(2^{-l}t)^{k}\gamma^{(k)}(2^{-l}t)}{\gamma(2^{-l})}\bigg|\simeq \bigg|\frac{\gamma(2^{-l}t)}{\gamma(2^{-l})}\bigg|=|\gamma_{l}(t)|.
\end{align}
Then we have
$|\gamma_{l}^{(k)}(t)|\simeq |\frac{\gamma_{l}(t)}{t^{k}} |\simeq1.$ This finishes the proof of (i).
Next we prove (ii). By the condition (iii) of Theorem
\ref{f19}, it follows that
\begin{align*}
|t^{k}\gamma_{l}^{(k)}(t)|=\bigg|\frac{(2^{-l}t)^{k}\gamma^{(k)}(2^{-l}t)}{\gamma(2^{-l})}\bigg|\lesssim \bigg|\frac{\gamma(2^{-l}t)}{\gamma(2^{-l})}\bigg|=|\gamma_{l}(t)|,
\end{align*}
which further leads to
$|\gamma_{l}^{(k)}(t)|\lesssim|\gamma_{l}(t)|t^{-k}\simeq1.$
Finally, we prove (iii) and (iv). The case $k=0$ follows from the doubling property of $\gamma'$ in Lemma \ref{c1}. The case $k>0$ follows from (i), (ii) and (iii) via elementary computations involving inverse functions, and we leave out the details.
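As a sketch of the omitted computation, for $k=1$, writing $s=(\gamma_{l}')^{-1}(t)$ and noting that $s\simeq1$ when $t\simeq1$ by (iii), one may compute
\begin{align*}
\big((\gamma_{l}')^{-1}\big)'(t)=\frac{1}{\gamma_{l}''\big((\gamma_{l}')^{-1}(t)\big)}=\frac{1}{\gamma_{l}''(s)}\simeq1
\end{align*}
by (i); the cases $2\leq k<N$ follow by repeatedly differentiating this identity and using (i) and (ii).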
\end{proof}
\begin{lemma}\label{a21}
Under the above assumptions on $\gamma$, we obtain
\begin{enumerate}
\item[\rm(i)] For each $C>0,$ we have
\begin{align}\label{c40}
\gamma^{-1}(C t)\lesssim_C \gamma^{-1}(t)
\end{align}
for all $t\geq0$;
\item[\rm(ii)] There exist constants $c_{1},c_{2}>0$
so that
\begin{align}\label{21-312-210}
\gamma^{-1}(2^{-k})\gamma^{-1}(2^{-j})\lesssim 2^{-c_{1}(k+j)}
\end{align}
for all $j,k\in \mathbb{Z}$ with $j+k\geq0,$
and
\begin{align}\label{21-312-211}
2^{-c_{2}(k+j)}\lesssim\gamma^{-1}(2^{-k})\gamma^{-1}(2^{-j})
\end{align}
for all $j,k\in \mathbb{Z}$ with $j+k\leq0,$
where $c_{1},c_{2}$ depend only on $C_{0,U}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first item (i) follows immediately from the doubling property of $\gamma$. We will therefore only present the proof of (ii). Firstly, we prove that $\gamma(2^{j})\gamma(2^{k})\geq C_{0,U}^{j+k}$ for all $j,k\in \mathbb{Z}$ satisfying $\gamma(2^{j})\gamma(2^{k})\leq1$. It suffices to show $\gamma(2^{j})\gamma(2^{k})\geq \min\{C_{0,L}^{j+k},C_{0,U}^{j+k}\}$ for all $j,k\in \mathbb{Z}$. Furthermore, by symmetry, we need only show the above inequality for $j\geq k$. There are the following four cases:
If $j,k\geq 0$, we have $\gamma(2^{j})\gamma(2^{k})\geq C_{0,L}^{j}C_{0,L}^{k}\gamma(1)^{2}=C_{0,L}^{j+k}$;
If $j,k\leq 0$, we have $\gamma(2^{j})\gamma(2^{k})\geq C_{0,U}^{j}C_{0,U}^{k}\gamma(1)^{2}=C_{0,U}^{j+k}$;
If $j\geq 0$, $k\leq 0$ and $j+k\geq0$, we have
$\gamma(2^{j})\gamma(2^{k})=\frac{\gamma(2^{k+j}2^{-j})}{\gamma(2^{-j})}
\geq C_{0,L}^{k+j};$
If $j\geq 0$, $k\leq 0$ and $j+k\leq0$, we have
$\gamma(2^{j})\gamma(2^{k})=\frac{\gamma(2^{k+j}2^{-j})}{\gamma(2^{-j})}
\geq C_{0,U}^{k+j}.$
Altogether, we have shown $\gamma(2^{j})\gamma(2^{k})\geq C_{0,U}^{j+k}$ holds for all $j,k\in \mathbb{Z}$ satisfying $\gamma(2^{j})\gamma(2^{k})\leq1$.
Then, for all $a,b>0$ and $\gamma(a)\gamma(b)\leq1$, we can choose $j,k\in \mathbb{Z}$ such that $a\in[2^{j},2^{j+1})$ and $b\in[2^{k},2^{k+1})$. Furthermore, from the fact that $\gamma(t)$ is increasing on $[0,\infty),$ we conclude that
\begin{align*}
\gamma(a)\gamma(b)&\geq\gamma(2^{j})\gamma(2^{k})\geq C_{0,U}^{j+k}
\simeq C_{0,U}^{j+k+2}
=(2^{j+1}2^{k+1})^{\log_{2}C_{0,U}}
\geq (ab)^{\log_{2}C_{0,U}}.
\end{align*}
Therefore, for all $a,b>0$ and $\gamma(a)\gamma(b)\leq1,$ we obtain
$ab\lesssim (\gamma(a)\gamma(b))^{c_{1}}$
with $c_{1}=\log_{C_{0,U}}2$.
Taking $a=\gamma^{-1}(2^{-k})$ and $b=\gamma^{-1}(2^{-j})$, for all $j,k\in \mathbb{Z}$ with $j+k\geq0$ we then have
$\gamma^{-1}(2^{-k})\gamma^{-1}(2^{-j})\lesssim 2^{-c_{1}(k+j)}.$ The proof of \eqref{21-312-211} is similar to that of \eqref{21-312-210}, so we omit it. This finishes the proof of (ii).
\end{proof}
\begin{definition}\label{a38}
We define a function $\rho$ on $\mathbb{R}^{2}$ in the following way: for all $x=(x_1,x_2)\in \mathbb{R}^{2}$,
\begin{equation*}
\rho(x)=\begin{cases}\gamma(|x_{1}|^{-1})^{-1}+|x_{2}|, &x_{1}\neq0; \\ |x_{2}|, &x_{1}=0. \end{cases}
\end{equation*}
For all $r>0$ and $x_{0}\in \mathbb{R}^{2}$, we also define $B(x_{0},r)=\{x\in \mathbb{R}^{2}:\ \rho(x-x_{0})< r\}$ as a ball centered at $x_{0}$ with radius $r>0$ associated with $\rho$.
\end{definition}
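For instance, in the model case $\gamma(t)=t^{\alpha}$, the extension \eqref{201124e2-3} gives $\gamma(|x_{1}|^{-1})^{-1}=|x_{1}|^{\alpha}$ for every $x_{1}\neq0$, so that $\rho(x)=|x_{1}|^{\alpha}+|x_{2}|$ is a standard parabolic-type quasi-distance; the general definition simply replaces $|x_{1}|^{\alpha}$ by the $\gamma$-adapted quantity $\gamma(|x_{1}|^{-1})^{-1}$.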
We have the following lemma, which says that the function $\rho$ gives rise to a quasi-distance function on $\R^2$.
\begin{lemma}\label{a12}
The following quasi-triangle inequality
\begin{align}\label{a11}
\rho(x+y)\leq C_{0,U}(\rho(x)+\rho(y))
\end{align}
holds for all $x,y\in \mathbb{R}^{2}$. Moreover, the doubling condition
\begin{align}\label{c7}
|B(x,2r)|\lesssim |B(y,r)|
\end{align}
holds for all $r>0$ and $x,y\in \mathbb{R}^{2}$. As a result, $(\mathbb{R}^{2},\rho,dx)$ is a homogeneous type space.
\end{lemma}
\begin{proof}
We first prove \eqref{a11}. Let $x=(x_1, x_2)$ and $y=(y_1, y_2)$. By Definition \ref{a38}, it suffices to show
$$\gamma(|x_{1}+y_{1}|^{-1})^{-1}\leq C_{0,U}\big( \gamma(|x_{1}|^{-1})^{-1}+\gamma(|y_{1}|^{-1})^{-1}\big).$$
By monotonicity of $\gamma$, it is enough to prove
$$\gamma\big((|x_{1}|+|y_{1}|)^{-1}\big)^{-1}\leq C_{0,U}\big( \gamma(|x_{1}|^{-1})^{-1}+\gamma(|y_{1}|^{-1})^{-1}\big).$$
Without loss of generality, we assume $|x_{1}|\geq|y_{1}|$. Then
\begin{align*}
\gamma\big((|x_{1}|+|y_{1}|)^{-1}\big)^{-1}&\leq \gamma\big((2|x_{1}|)^{-1}\big)^{-1}\leq C_{0,U} \gamma\big(|x_{1}|^{-1}\big)^{-1}
\leq C_{0,U}\big(\gamma(|x_{1}|^{-1})^{-1}+\gamma(|y_{1}|^{-1})^{-1}\big),
\end{align*}
where we used the doubling property of $\gamma$ in Lemma \ref{curve gamma_z}. This finishes the proof of \eqref{a11}.
Next we show \eqref{c7}.
Without loss of generality, we assume $x=y=0$.
Fix $z\in \mathbb{R}^{2}$ satisfying $\rho(z)<2r$. It suffices to show that there exists a constant $C$, independent of $z$ and $r$, such that
$\rho(z/C)< r.$
Note that
$\rho(z/2^{j})=\gamma(2^{j}|z_{1}|^{-1})^{-1}+2^{-j}|z_{2}|$; by the doubling property of $\gamma$ in Lemma \ref{curve gamma_z}, we have $\gamma(2^{j}|z_{1}|^{-1})\geq C_{0,L}^{j}\gamma(|z_{1}|^{-1}).$
Then
$$\rho(z/2^{j})\leq C_{0,L}^{-j}\gamma(|z_{1}|^{-1})^{-1}+2^{-j}|z_{2}|\leq \max\{C_{0,L}^{-j},2^{-j}\}\rho(z).$$
Choosing $j$ such that $\max\{C_{0,L}^{-j},2^{-j}\}\leq 1/2$, we obtain $\rho(z/2^{j})\leq \frac{1}{2}\rho(z)< r$. Consequently, $B(0,2r)\subset 2^{j}B(0,r)$, and hence $|B(0,2r)|\leq 2^{2j}|B(0,r)|\lesssim |B(0,r)|$, which gives \eqref{c7}.
\end{proof}
\section{dyadic cubes}
To prove Lemma \ref{a13} in Section 4, we need to obtain Lemma \ref{a14}. To prove Lemma \ref{a14}, we need to construct dyadic cubes on the homogeneous type space $(\mathbb{R}^{2},\rho,dx)$. So we first need the following lemma:
\begin{lemma}[\cite{MR1096400}]\label{a27}
Let $(\mathbb{R}^{2},\rho,dx)$ be a homogeneous type space as in Lemma \ref{a12}. Then
there exist an index set $I_k$ with $k\in \Z$, a collection of open subsets $\{Q^{k}(\alpha)\subset \mathbb{R}^{2}:\ k\in \mathbb{Z}, \alpha\in I_{k}\}$, and constants $\delta\in(0,1)$ and $ C_{\rho}, c_{\rho}>0$, such that
\begin{enumerate}
\item[\rm(i)] For all $k\in \mathbb{Z},$
$$\bigg|\bigg(\bigcup_{\alpha}Q^{k}(\alpha)\bigg)^{\complement}\bigg|=0; $$
\item[\rm(ii)] If $l\geq k,$ then for all $\alpha\in I_{k},\beta\in I_{l}$, we have
$Q^{l}(\beta)\subset Q^{k}(\alpha)$ or $Q^{l}(\beta)\bigcap Q^{k}(\alpha)=\varnothing;$
\item[\rm(iii)] For all $(k,\alpha)$ and $l<k$, there exists a unique $\beta$ such that $Q^{k}(\alpha)\subset Q^{l}(\beta);$
\item[\rm(iv)] $\textrm{Diameter}~(Q^{k}(\alpha))\leq C_{\rho}\delta^{k}$ for every $\alpha\in I_k$ and every $k\in \Z$;
\item[\rm(v)] Each $Q^{k}(\alpha)$ contains a ball of radius $c_{\rho}\delta^{k}$.
\end{enumerate}
\end{lemma}
Let $Q=Q^k(\alpha)$ be a dyadic cube constructed in Lemma \ref{a27}. We will use $r=r(Q)$ to denote $k \log_2 \delta$. Moreover, denote $Q^{*}=B(z_{\alpha}^{k},2C_{0,U}C_{\rho}\delta^{k})$, where $z^k_{\alpha}$ is an arbitrary point in $Q$. The choice of the parameters in $Q^*$ guarantees that $Q\subset Q^*$. For later use, we collect a few properties about the dyadic cubes constructed above.
\begin{lemma}\label{d5}
For every dyadic cube $Q$, the following hold:
\begin{enumerate}
\item[\rm(i)] If $x\in Q$ and $y\in (Q^{*})^{\complement},$ then $\rho(y-x)\gtrsim 2^{r(Q)}$;
\item[\rm(ii)] $|Q^{*}|\lesssim |Q|$;
\item[\rm(iii)] $|Q^{k-1}(\alpha)|\lesssim |Q^{k}(\beta)|\ $ for all $k\in \Z,$ $\alpha\in I_{k-1}$ and $\beta\in I_{k}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Recall the definition of $Q^*$ and the notation $z^{k}_{\alpha}$. By Lemma \ref{a12}, we have
\begin{align*}
\rho(y-x)\geq \frac{1}{C_{0,U}}\rho(y-z^k_{\alpha})-\rho(x-z^k_{\alpha})\geq\frac{1}{C_{0,U}}2C_{0,U}C_{\rho}\delta^{k}-C_{\rho}\delta^{k}\geq C_{\rho}\delta^{k}=C_{\rho}2^{r(Q)}.
\end{align*}
This finishes the proof of (i). The other two statements follow immediately from \eqref{c7} and the properties (iv) and (v) in Lemma \ref{a27}.
\end{proof}
Let $(\mathbb{R}^{2},\rho,dx)$ be a homogeneous type space as in Lemma \ref{a12} and let $B$ be a Banach space with norm $\|\cdot \|$. Suppose $(X,d\mu)$ is a measure space and let $1\leq p<\infty$. We define $L^{p}(X,d\mu,B)$ as the collection of all $B$-valued measurable functions $f$ satisfying $\int_X \|f(x)\|^{p}\,d\mu(x)<\infty$.
Then we define
\begin{align*}
\|f\|_{L^{p}(X,d\mu,B)}=\bigg(\int_X \|f(x)\|^{p}\,d\mu(x)\bigg)^{1/p}
\end{align*}
and define $L^{\infty}(X,d\mu,B)$ in the usual way. We define $L_{\textrm{loc}}^{1}(\mathbb{R}^{2},B)$ as the space of functions that lie in $L^{1}(K,B)$ for every compact set $K$ in $\mathbb{R}^{2}$. We suppress the measure $d\mu$ and always use the notation $L^{p}(X,B)$.
We define $C(X,B)$ as the collection of all $B$-valued continuous functions $f$ on $X$. For
$f\in L_{\textrm{loc}}^{1}(\mathbb{R}^{2},B)$, we define
\begin{align}\label{c8}
M_{\textrm{dyad}}f(x)=\sup_{x\in Q}\fint_{Q}\|f(y)\| \,dy,
\end{align}
where the supremum is taken over all dyadic cubes from Lemma \ref{a27} that contain $x$. In the rest of this section, we state a few useful lemmas; their proofs are standard and are therefore left out. One can consult \cite{MR1232192} for details.
\begin{lemma}\label{c25}
Let $f\in C(\mathbb{R}^{2},B)$, then we have
\begin{align*}
\textrm{$\|f(x)\|\leq M_{\textrm{dyad}}f(x), \ $ a.e. \ $x\in \mathbb{R}^{2}$.}
\end{align*}
\end{lemma}
To prove the following Lemma \ref{a14}, we also need the following Calder$\acute{\textrm{o}}$n-Zygmund decomposition.
\begin{lemma}\label{a28}
Let $f\in L^{p}(\mathbb{R}^{2},B) $ for some $1\leq p<\infty$. Suppose $\alpha>0$ and define
\begin{align*}
\Omega_{\alpha}=\{x\in \mathbb{R}^{2} :\ M_{\textrm{dyad}}f(x)>\alpha\}.
\end{align*}
Then there exists a sequence of disjoint dyadic cubes $\{Q_{j}\}$ as in Lemma \ref{a27} such that, for all $j$,
\begin{enumerate}
\item[\rm(i)] $Q_{j}\subset \Omega_{\alpha}$ and
\begin{align}\label{c12}
\Omega_{\alpha}=\bigcup_{j}Q_{j};
\end{align}
\item[\rm(ii)]
\begin{align}
\alpha<\fint_{Q_{j}} \|f\|\,dx\lesssim\alpha.
\end{align}
\end{enumerate}
\end{lemma}
\begin{lemma}\label{d1}
Let $\vec{T}$ be a sublinear operator mapping the space $L^{p_{0}}(\mathbb{R}^{2},B)+L^{\infty}(\mathbb{R}^{2},B)$ to the space of all measurable functions on $\mathbb{R}^{2}$, where $1\leq p_{0}<\infty$. Suppose that
\begin{align}\label{f27}
\|\vec{T}f\|_{\infty}\lesssim\|f\|_{\infty} \text{\ \ for all\ } f\in L^{\infty}(\mathbb{R}^{2},B),
\end{align}
and
\begin{align}\label{f28}
\|\vec{T}f\|_{p_{0},\infty}\lesssim \|f\|_{p_{0}} \text{\ \ for all\ } f\in L^{p_{0}}(\mathbb{R}^{2},B).
\end{align}
Then, for all $p_{0}<p<\infty$, we have
\begin{align*}
\|\vec{T}f\|_{p}\lesssim \|f\|_{p} \text{\ \ for all\ } f\in L^{p}(\mathbb{R}^{2},B).
\end{align*}
\end{lemma}
By Lemmas \ref{a28} and \ref{d1} and a trivial $L^{\infty}$ estimate, we have
\begin{lemma}\label{c32}
Let $B$ be a Banach space and $M_{\textrm{dyad}}f$ be defined as in \eqref{c8}, we then have
\begin{align}\label{c19}
\big\|M_{\textrm{dyad}}f\big\|_{p}\lesssim \|f\|_{p}
\end{align}
for all $1<p<\infty$ and $f\in L^{p}(\mathbb{R}^{2},B)$.
\end{lemma}
Let $\rho$ be as in Definition \ref{a38} and let $B$ be a Banach space. For
$f\in L_{\textrm{loc}}^{1}(\mathbb{R}^{2},B)$, we define $M^{\sharp}_{\textrm{dyad}}f$ by
$$M^{\sharp}_{\textrm{dyad}}f(x)=\sup_{x\in Q}\fint_{Q} \big\|f(y)-f\big|_{Q}\big\|\,dy,$$
where $Q$ ranges over all dyadic cubes as in Lemma \ref{a27} that contain $x$ and $f\big|_{Q}=\fint_{Q} f(y)\,dy.$
\begin{lemma}\label{a14}
Let $1<p<\infty$ and B be a Banach space. Then one has
\begin{align*}
\|f\|_{p}\lesssim \big\|M^{\sharp}_{\textrm{dyad}}f\big\|_{p}
\end{align*}
for all $f\in L^{p}(\mathbb{R}^{2},B)\bigcap C(\mathbb{R}^{2},B)$.
\end{lemma}
Given the lemmas established in this section, the proof of Lemma \ref{a14} can be carried out as in \cite{MR447953}.
\section{A key vector-valued estimate}
This section is devoted to proving Lemma \ref{a13}. We follow the spirit of the proof in \cite{MR955772}, where the underlying space is $\mathbb{R}^{n}$ rather than a general space of homogeneous type. Working with this more general space brings some extra technical difficulties. We first state a few useful lemmas before proving the key Lemma \ref{a13}.
Throughout this section, we let $2<p<\infty$, $k,v\in \mathbb{Z}$ and $l\in \mathbb{N}$. We also let $a_{k}(u,\timesi)\in C^{\infty}(\mathbb{R}^{3})$ and $\supp a_{k}\subset I\times K_{1}\times K_{2}$, where $I$ is a compact subset of $\mathbb{R}$ and $K_{1},K_{2}$ are compact subsets of $\mathbb{R}\setminus0$. Define $m_{k}(u,\timesi)=a_{k}(u,2^{-l_{0}}\gamma_{l_{0}}(2^{l})^{-1}\timesi_{1},2^{-k}\timesi_{2})$,
where $l_{0}\in \mathbb{Z}$ satisfies $2^{k+v}\gamma(2^{-l_{0}})\simeq1$. There may be multiple choices of $l_{0}$; we pick an arbitrary one. We let $Q_{x}$ be a dyadic cube as in Lemma \ref{a27} satisfying $x\in Q_{x}$; the definitions of $r_{x}=r(Q_{x})$ and $Q_{x}^{*}$ can be found in the previous section.
\begin{lemma}\lambdabel{a26}
Let $v=0$ and suppose that there exists a constant $F$ such that $a_{k}$ satisfies
\begin{align}\lambdabel{21-314-41}
\bigg\|\sup_{u\in I}|a_{k}(u,D)f|\bigg\|_{q}\leq F\|f\|_{q}
\end{align}
for $q\in\{2,\infty\}$ and $k\in \mathbb{Z}$. Define
$$S_{1}f(x)=\fint_{Q_{x}} \sup_{u\in I}\bigg(\sum_{|k+r_{x}|\leq N}\bigg|m_{k}(u,D)f(y)-\fint_{Q_{x}}m_{k}(u,D)f(z)\,dz\bigg|^{2}\bigg)^{1/2}\,dy.$$
Then for all $N\geq1$, we have
\begin{align}\lambdabel{21-314-42}
\|S_{1}f\|_{p}\lesssim N^{1/2-1/p}F\|f\|_{p}.
\end{align}
\end{lemma}
\begin{proof}
By \eqref{21-314-41} and scaling, for $q\in\{2,\infty\}$, we conclude that
$$\bigg\|\sup_{u\in I}\big|m_{k}(u,D)f\big|\bigg\|_{q}\leq F\|\tilde{P}_{k}^{2}f\|_{q},$$
where $\tilde{P}_{k}^{2}f=(\tilde{\phi}(2^{-k}\timesi_{2})\widehat{f})^\vee$ for some $\tilde{\phi}\in C_{c}^{\infty}(\mathbb{R}\setminus0)$ and $\tilde{\phi}=1$ on $K_{2}$.
Define $\vec{T}f=\{m_{k}(u,D)f\}_{k}$, then by Lemma \ref{c32} we have
\begin{align}\lambdabel{b10}
\|S_{1}f\|_{2}&\lesssim \big\|M^{\sharp}_{\textrm{dyad}}\vec{T}f\big\|_{2}\lesssim \big\|M_{\textrm{dyad}}\vec{T}f\big\|_{2}\lesssim \big\|\vec{T}f\big\|_{L^{2}(L^{\infty}l^{2})}\nonumber\\
&\leq \bigg(\sum_{k}\bigg\|\sup_{u\in I}\big|m_{k}(u,D)f\big|\bigg\|_{2}^{2}\bigg)^{1/2}\leq
F\bigg(\sum_{k}\big\|\tilde{P}_{k}^{2}f\big\|_{2}^{2}\bigg)^{1/2}\lesssim F\|f\|_{2},
\end{align}
and
$$\sup_{x}|S_{1}f(x)|\lesssim N^{1/2}\sup_{k}\sup_{u\in I}\sup_{y}\big|m_{k}(u,D)f(y)\big|\leq N^{1/2}F\|f\|_{\infty}.$$
By interpolation, \eqref{21-314-42} follows.
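For completeness, let us record the exponent bookkeeping in the last step: interpolating the $L^{2}$ bound $\|S_{1}f\|_{2}\lesssim F\|f\|_{2}$ from \eqref{b10} with the $L^{\infty}$ bound $\|S_{1}f\|_{\infty}\lesssim N^{1/2}F\|f\|_{\infty}$ gives, for $2<p<\infty$,
\begin{align*}
\|S_{1}f\|_{p}\lesssim F^{2/p}\big(N^{1/2}F\big)^{1-2/p}\|f\|_{p}=N^{1/2-1/p}F\|f\|_{p},
\end{align*}
which is exactly \eqref{21-314-42}.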
\end{proof}
To prove Lemma \ref{a19}, we need the following estimate:
\begin{lemma}\lambdabel{a20}
Let $v=0$ and suppose that there exists a constant $G$ such that $a_{k}$ satisfies
\begin{align}\lambdabel{f24}
\big|\partial_{\timesi}^{\alpha}a_{k}(u,\timesi)\big|\leq G
\end{align}
for all $u\in I,$ $\timesi\in K_{1}\times K_{2}$, $|\alpha|\leq 6$ and $k\in \mathbb{Z}$. Define
$$E_{k}^{u}(x,y,z)=\int_{(Q_{x}^{*})^{\complement}} \big|(m _{k}(u,\cdot))^\vee(y-w)-(m _{k}(u,\cdot))^\vee(z-w)\big|\,dw.$$
Then there exists a constant $\delta>0$ such that
\begin{align}\lambdabel{f2}
E_{k}^{u}(x,y,z)\lesssim GC_{0,U}^{l}2^{-\delta|k+r_{x}|}
\end{align}
for all $y,z\in Q_{x}$ and $u\in I.$
\end{lemma}
\begin{proof}
Define $K_{k}(x)=a _{k}(u,\cdot)^\vee(x)$, then
$$m _{k}(u,\cdot)^\vee(x)=\gamma^{-1}(2^{-k})^{-1}\gamma_{l_{0}}(2^{l})2^{k}K_{k}\big(\gamma^{-1}(2^{-k})^{-1}\gamma_{l_{0}}(2^{l})x_{1},2^{k}x_{2}\big).$$
By \eqref{f24} and integration by parts, we obtain a kernel estimate for $K_{k}$ and then we have
\begin{align}\lambdabel{a24}
\big|m _{k}(u,\cdot)^\vee(x)\big|\lesssim G\gamma^{-1}(2^{-k})^{-1}\gamma_{l_{0}}(2^{l})2^{k}\big(1+|\gamma^{-1}(2^{-k})^{-1}\gamma_{l_{0}}(2^{l})x_{1}|+|2^{k}x_{2}|\big)^{-4}.
\end{align}
We prove \eqref{f2} by considering two cases $k+r_{x}\geq0$ and $k+r_{x}\leq0$. For the first case $k+r_{x}\geq0$,
by Lemma \ref{d5} and \eqref{a24}, we bound $E_{k}^{u}(x,y,z)$ by $\int_{\rho(w)\geq C2^{r_{x}}}|m _{k}(u,\cdot)^\vee(w)|\,dw$, which can be further bounded by
\begin{align}\lambdabel{f26}
G\gamma^{-1}(2^{-k})^{-1}\gamma_{l_{0}}(2^{l})2^{k}
\int_{\gamma(|w_{1}|^{-1})^{-1}+|w_{2}|\geq C2^{r_{x}}}\big(1+|\gamma^{-1}(2^{-k})^{-1}\gamma_{l_{0}}(2^{l})w_{1}|+|2^{k}w_{2}|\big)^{-4}\,dw.
\end{align}
Next, for $\gamma(|w_{1}|^{-1})^{-1}+|w_{2}|\geq C2^{r_{x}}$, we will show
\begin{align}\lambdabel{f25}
\big(1+|\gamma^{-1}(2^{-k})^{-1}w_{1}|+|2^{k}w_{2}|\big)^{-1}\lesssim 2^{-\min\{c_{1},1\}(k+r_{x})}.
\end{align}
Here $c_{1}$ is the same as that in Lemma \ref{a21}. Then by \eqref{f26}, \eqref{f25} and an $L^{1}$ estimate, we have verified \eqref{f2} if $k+r_{x}\geq0.$
To show \eqref{f25}, we consider the cases $\gamma(|w_{1}|^{-1})^{-1}\geq \frac{C}{2}2^{r_{x}}$ and $|w_{2}|\geq \frac{C}{2}2^{r_{x}}$ separately.
If $\gamma(|w_{1}|^{-1})^{-1}\geq \frac{C}{2}2^{r_{x}}$, we have $|w_{1}|^{-1}\leq \gamma^{-1}(\frac{2}{C}2^{-r_{x}})$ and
\begin{align*} \big(1+|\gamma^{-1}(2^{-k})^{-1}w_{1}|+|2^{k}w_{2}|\big)^{-1}&\lesssim \gamma^{-1}(2^{-k})|w_{1}|^{-1}\lesssim \gamma^{-1}(2^{-k})\gamma^{-1}\big(\frac{2}{C}2^{-r_{x}}\big)\\
&\lesssim\gamma^{-1}(2^{-k})\gamma^{-1}(2^{-[r_{x}]-1})\lesssim 2^{-c_{1}(k+r_{x})},
\end{align*}
where in the last inequality we used Lemma \ref{a21}.
If $|w_{2}|\geq \frac{C}{2}2^{r_{x}}$, similarly, we have
$$\big(1+|\gamma^{-1}(2^{-k})^{-1}w_{1}|+|2^{k}w_{2}|\big)^{-1}\lesssim 2^{-(k+r_{x})}.$$
Then \eqref{f25} follows and thus we finish the proof of \eqref{f2} if $k+r_{x}\geq0$.
Next we consider the case $k+r_{x}\leq0.$ By the mean value theorem, we can write
$$\big|m_{k}(u,\cdot)^\vee(y-w)-m_{k}(u,\cdot)^\vee(z-w)\big|$$
as the sum of the following two terms:
$$\gamma^{-1}(2^{-k})^{-1}\gamma_{l_{0}}(2^{l})(y_{1}-z_{1})\int_{0}^{1}m' _{k}(u,\cdot)^\vee(\theta(y-z)+z-w)\,d\theta,$$
and
$$2^{k}(y_{2}-z_{2})\int_{0}^{1}m'' _{k}(u,\cdot)^\vee(\theta(y-z)+z-w)\,d\theta,$$
where $$\partial_{x_{1}}m _{k}(u,\cdot)^\vee(x)=\gamma^{-1}(2^{-k})^{-1}\gamma_{l_{0}}(2^{l})(m _{k}'(u,\cdot))^\vee(x)\ \ \textrm{and} \ \ \partial_{x_{2}}m _{k}(u,\cdot)^\vee(x)=2^{k}(m _{k}''(u,\cdot))^\vee(x)$$ for some $m_{k}',m_{k}''$ that share the same properties as $m_{k}$.
After integrating in $w$, for all $k+r_{x}\leq0$, we find that
\begin{align*}
E_{k}^{u}(x,y,z)&\lesssim G\big(\gamma^{-1}(2^{-k})^{-1}\gamma_{l_{0}}(2^{l})|y_{1}-z_{1}|+2^{k}|y_{2}-z_{2}|\big)\\
&\lesssim \gamma_{l_{0}}(2^{l})G\big(2^{c_{2}(k+r_{x})}+2^{k+r_{x}}\big)\lesssim C_{0,U}^{l}G2^{\min\{c_{2},1\}(k+r_{x})},
\end{align*}
where in the first inequality we used
$$\int_{\mathbb{R}^{2}}\big|m' _{k}(u,\cdot)^\vee(w)\big|\,dw\lesssim G \ \ \textrm{ and} \ \ \int_{\mathbb{R}^{2}}\big|m'' _{k}(u,\cdot)^\vee(w)\big|\,dw\lesssim G,$$ which can be proved by a kernel estimate like \eqref{a24}. In the second inequality we used Lemma \ref{a21} and the doubling property of $\gamma$ in Lemma \ref{c1}.
Then we have verified \eqref{f2} if $k+r_{x}\leq0.$
\end{proof}
\begin{lemma}\lambdabel{a19}
Let $v=0$ and suppose that there exist constants $F$ and $G$ such that $a_{k}$ satisfies
$$\bigg\|\sup_{u\in I}\big|a_{k}(u,D)f\big|\bigg\|_{2}\leq F\|f\|_{2} \ \ \textrm{and}\ \ |\partial_{\timesi}^{\alpha}a_{k}(u,\timesi)|\leq G$$
for all $u\in I,$ $\timesi\in K_{1}\times K_{2}$, $|\alpha|\leq 6$ and $k\in \mathbb{Z}$.
Furthermore, let us set $N=\frac{1}{\delta}\log_{2}(C_{0,U}^{l}G/F)$, and define
$$S_{2}f(x)=\fint_{Q_{x}} \sup_{u\in I}\bigg(\sum_{|k+r_{x}|> N}\bigg|m_{k}(u,D)f(y)-\fint_{Q_{x}}m_{k}(u,D)f(z)\,dz\bigg|^{2}\bigg)^{1/2}\,dy,$$
where $Q_{x}$ is a dyadic cube satisfying $x\in Q_{x}$.
Then we have
$$\|S_{2}f\|_{p}\lesssim F\|f\|_{p}.$$
\end{lemma}
\begin{proof}
Define
\begin{align*}
\vec{T}(\{f_{k}\})(x)=\fint_{Q_{x}} \sup_{u\in I}\bigg(\sum_{|k+r_{x}|> N}\bigg|m_{k}(u,D)f_{k}(y)-\fint_{Q_{x}}m_{k}(u,D)f_{k}(z)\,dz\bigg|^{2}\bigg)^{1/2}\,dy.
\end{align*}
By Lemma \ref{d1}, it suffices to show
\begin{align}\lambdabel{a33}
\Big\|\vec{T}(\{f_{k}\})\Big\|_{2}\lesssim F\bigg\|\bigg(\sum_{k}|f_{k}|^{2}\bigg)^{1/2}\bigg\|_{2}
\end{align}
and
\begin{align}\lambdabel{a34}
\Big\|\vec{T}(\{f_{k}\})\Big\|_{\infty}\lesssim F \bigg\|\bigg(\sum_{k}|f_{k}|^{2}\bigg)^{1/2}\bigg\|_{\infty}.
\end{align}
We start with the proof of \eqref{a33}. Arguing as in \eqref{b10}, we have
$$\Big\|\vec{T}(\{f_{k}\})\Big\|_{2}\lesssim \bigg(\sum_{k}\bigg\|\sup_{u\in I}\big|m_{k}(u,D)f_{k}\big|\bigg\|_{2}^{2}\bigg)^{1/2}\leq F\bigg\|\bigg(\sum_{k}|f_{k}|^{2}\bigg)^{1/2}\bigg\|_{2}.$$
It remains to show \eqref{a34}.
Notice that $\vec{T}(\{\chi_{Q_{x}^{*}}f_{k}\})(x)$ can be written as
$$\fint_{Q_{x}} \sup_{u\in I}\bigg(\sum_{|k+r_{x}|> N}\bigg|m_{k}(u,D)(\chi_{Q_{x}^{*}}f_{k})(y)-\fint_{Q_{x}}m_{k}(u,D)(\chi_{Q_{x}^{*}}f_{k})(z)\,dz\bigg|^{2}\bigg)^{1/2}\,dy,$$
and $\vec{T}(\{\chi_{(Q_{x}^{*})^{\complement}}f_{k}\})(x)$ can be written as
$$\fint_{Q_{x}} \sup_{u\in I}\bigg(\sum_{|k+r_{x}|> N}\bigg|m_{k}(u,D)(\chi_{(Q_{x}^{*})^{\complement}}f_{k})(y)-\fint_{Q_{x}}m_{k}(u,D)(\chi_{(Q_{x}^{*})^{\complement}}f_{k})(z)\,dz\bigg|^{2}\bigg)^{1/2}\,dy.$$
By the triangle inequality, it suffices to show
\begin{align}\lambdabel{a17}
\sup_{x}\vec{T}(\{\chi_{Q_{x}^{*}}f_{k}\})(x)\lesssim F \bigg\|\bigg(\sum_{k}|f_{k}|^{2}\bigg)^{1/2}\bigg\|_{\infty}
\end{align}
and
\begin{align}\lambdabel{a18}
\sup_{x}\vec{T}(\{\chi_{(Q_{x}^{*})^{\complement}}f_{k}\})(x)\lesssim F \bigg\|\bigg(\sum_{k}|f_{k}|^{2}\bigg)^{1/2}\bigg\|_{\infty}.
\end{align}
We first show \eqref{a17}. From the triangle inequality, the Cauchy-Schwarz inequality and Fubini's theorem, it follows that
\begin{align*}
\vec{T}(\{\chi_{Q_{x}^{*}}f_{k}\})(x)&\lesssim |Q_{x}|^{-1/2}\bigg( \sum_{k}\bigg\|\sup_{u\in I}\big|m_{k}(u,D)(\chi_{Q_{x}^{*}}f_{k})(\cdot)\big|\bigg\|_{2}^{2}\bigg)^{1/2}\\
&\lesssim F|Q_{x}|^{-1/2}\bigg( \int_{Q_{x}^{*}}\sum_{k}|f_{k}(y)|^{2}\,dy\bigg)^{1/2}
\lesssim F\sup_{y}\bigg( \sum_{k}|f_{k}(y)|^{2}\bigg)^{1/2}.
\end{align*}
We next show \eqref{a18}. By Lemma \ref{a20}, for all $y,z\in Q_{x}$ and $u\in I,$ we have
\begin{align*} &\big|m_{k}(u,D)(\chi_{(Q_{x}^{*})^{\complement}}f_{k})(y)-m_{k}(u,D)(\chi_{(Q_{x}^{*})^{\complement}}f_{k})(z)\big|\\
&\lesssim E_{k}^{u}(x,y,z)\sup_{w}|f_{k}(w)|\lesssim G C_{0,U}^{l} 2^{-\delta|k+r_{x}|}\bigg\|\bigg(\sum_{k}|f_{k}|^{2}\bigg)^{1/2}\bigg\|_{\infty}.
\end{align*}
As a result, we find that, for all $y\in Q_{x}$ and $u\in I,$
\begin{align*}&\bigg(\sum_{|k+r_{x}|> N}\bigg|m_{k}(u,D)(\chi_{(Q_{x}^{*})^{\complement}}f_{k})(y)-\fint_{Q_{x}}m_{k}(u,D)(\chi_{(Q_{x}^{*})^{\complement}}f_{k})(z)\,dz\bigg|^{2}\bigg)^{1/2}\\
&\lesssim C_{0,U}^{l}G 2^{-\delta N}\bigg\|\bigg(\sum_{k}|f_{k}|^{2}\bigg)^{1/2}\bigg\|_{\infty},\end{align*}
which in turn implies
$$\sup_{x}\vec{T}(\{\chi_{(Q_{x}^{*})^{\complement}}f_{k}\})(x)\lesssim C_{0,U}^{l}G 2^{-\delta N}\bigg\|\bigg(\sum_{k}|f_{k}|^{2}\bigg)^{1/2}\bigg\|_{\infty}.$$
Choose $N=\frac{1}{\delta}\log_{2}(C_{0,U}^{l}G/F)$ such that $C_{0,U}^{l}G2^{-\delta N}=F$, where $\delta$ is the one in \eqref{f2}. Then we conclude that
$$\sup_{x}\vec{T}(\{\chi_{(Q_{x}^{*})^{\complement}}f_{k}\})(x)\lesssim F \bigg\|\bigg(\sum_{k}|f_{k}|^{2}\bigg)^{1/2}\bigg\|_{\infty}.$$
This finishes the proof of Lemma \ref{a19}.
\end{proof}
Now we use Lemmas \ref{a26} and \ref{a19} to prove our key Lemma \ref{a13}.
\begin{lemma}\lambdabel{a13}
Suppose that there exist constants $F$ and $G$ such that $a_{k}$ satisfies
$$\bigg\|\sup_{u\in I}\big|a_{k}(u,D)f\big|\bigg\|_{q}\leq F\|f\|_{q}\ \ \textrm{and} \ \ \big|\partial_{\timesi}^{\alpha}a_{k}(u,\timesi)\big|\leq G$$
for $q\in\{2,\infty\}$, $u\in I,$ $\timesi\in K_{1}\times K_{2}$, $|\alpha|\leq 6$ and $k\in \mathbb{Z}$. Then we have
$$\bigg\|\sup_{u\in I}\bigg(\sum_{k}\big|m_{k}(u,D)f\big|^{2}\bigg)^{1/2}\bigg\|_{p}\lesssim \bigg(1+\big(\log_{2}(C_{0,U}^{l}G/F)\big)^{1/2-1/p}\bigg)F\|f\|_{p}.$$
\end{lemma}
\begin{proof}
By scaling, it suffices to show
$$\bigg\|\sup_{u\in I}\bigg(\sum_{k}\big|m_{k}(u,D_{1},2^{-v}D_{2})f\big|^{2}\bigg)^{1/2}\bigg\|_{p}\lesssim \bigg(1+\big(\log_{2}(C_{0,U}^{l}G/F)\big)^{1/2-1/p}\bigg)F\|f\|_{p}.$$
By the fact that
$$\bigg\|\sup_{u\in I}\bigg(\sum_{k}\big|m_{k}(u,D_{1},2^{-v}D_{2})f\big|^{2}\bigg)^{1/2}\bigg\|_{p}=\bigg\|\sup_{u\in I}\bigg(\sum_{k}\big|m_{k-v}(u,D_{1},2^{-v}D_{2})f\big|^{2}\bigg)^{1/2}\bigg\|_{p}, $$
it suffices to show
$$\bigg\|\sup_{u\in I}\bigg(\sum_{k}\big|m_{k-v}(u,D_{1},2^{-v}D_{2})f\big|^{2}\bigg)^{1/2}\bigg\|_{p}\lesssim \bigg(1+\big(\log_{2}(C_{0,U}^{l}G/F)\big)^{1/2-1/p}\bigg)F\|f\|_{p}.$$
Therefore, without loss of generality, we can assume $v=0$ and show
$$\bigg\|\sup_{u\in I}\bigg(\sum_{k}\big|m_{k}(u,D)f\big|^{2}\bigg)^{1/2}\bigg\|_{p}\lesssim \bigg(1+\big(\log_{2}(C_{0,U}^{l}G/F)\big)^{1/2-1/p}\bigg)F\|f\|_{p}.$$
By Fatou's lemma, it suffices to show, for all $M\geq1,$
$$\bigg\|\sup_{u\in I}\bigg(\sum_{|k|\leq M}\big|m_{k}(u,D)f\big|^{2}\bigg)^{1/2}\bigg\|_{p}\lesssim \bigg(1+\big(\log_{2}(C_{0,U}^{l}G/F)\big)^{1/2-1/p}\bigg)F\|f\|_{p},$$
where the implicit constant is independent of $M$.
By density we can assume $f\in \mathcal{S}$. Define $$\vec{T}_{M}f=\{m_{k}(u,D)f\}_{|k|\leq M}\in L^{p}(\mathbb{R}^{2},L^{\infty}l^{2}_{M})\bigcap C(\mathbb{R}^{2},L^{\infty}l^{2}_{M}).$$
By Lemma \ref{a14}, it suffices to show
$$\Big\|M^{\sharp}_{\textrm{dyad}}\vec{T}_{M}f\Big\|_{p}\lesssim \bigg(1+\big(\log_{2}(C_{0,U}^{l}G/F)\big)^{1/2-1/p}\bigg)F\|f\|_{p}.$$
Choose $N=\frac{1}{\delta}\log_{2}(C_{0,U}^{l}G/F)$ and define
$$S_{1}f(x)=\fint_{Q_{x}} \sup_{u\in I}\bigg(\sum_{|k+r_{x}|\leq N}\bigg|m_{k}(u,D)f(y)-\fint_{Q_{x}}m_{k}(u,D)f(z)\,dz\bigg|^{2}\bigg)^{1/2}\,dy$$
and
$$S_{2}f(x)=\fint_{Q_{x}} \sup_{u\in I}\bigg(\sum_{|k+r_{x}|> N}\bigg|m_{k}(u,D)f(y)-\fint_{Q_{x}}m_{k}(u,D)f(z)\,dz\bigg|^{2}\bigg)^{1/2}\,dy.$$
For each $x\in \mathbb{R}^{2}$, we can find a dyadic cube $Q_{x}$ containing $x$ such that
$M^{\sharp}_{\textrm{dyad}}\vec{T}_{M}f(x)\leq 2S_{1}f(x)+2S_{2}f(x).$
It suffices to obtain
$$\|S_{1}f\|_{p}\lesssim \big(\log_{2}(C_{0,U}^{l}G/F)\big)^{1/2-1/p}F\|f\|_{p} \ \ \textrm{and} \ \ \|S_{2}f\|_{p}\lesssim F\|f\|_{p},$$
which follows from Lemmas \ref{a26} and \ref{a19}.
\end{proof}
\section{The proof of the main theorem}
This section is devoted to the proof of our main Theorem \ref{curve gamma}. First of all, we will apply Minkowski's inequality and replace the sharp truncation $[-1, 1]$ in the definition of $H^{\gamma}$ by a smooth truncation. Let $\varepsilonilon_0>0$ be a small number, to be determined later, whose choice depends only on $\gamma$, and let $\widetilde{\varphi}:\ \R\to \R$ be a non-negative smooth bump function supported on $(-\varepsilonilon_0, \varepsilonilon_0)$ with $\widetilde{\varphi}(t)=1$ for every $|t|\le \varepsilonilon_0/2$. Let $x=(x_{1},x_{2})$. By Minkowski's inequality, we observe that
\begin{equation}
\bigg\|\int_{-1}^1 f(x_1-t, x_2-u(x)\gamma(t))(1-\widetilde{\varphi}(t))\,\frac{dt}{t}\bigg\|_p \lesssim_{\varepsilonilon_0} \|f\|_p
\end{equation}
for every $p>1$. Therefore, to prove the $L^p$-boundedness of $H^{\gamma}$, it suffices to prove the same boundedness of the operator
\begin{equation}\lambdabel{201205e6-2}
\int_{\R} f(x_1-t, x_2-u(x)\gamma(t))\widetilde{\varphi}(t)\,\frac{dt}{t},
\end{equation}
which, for the sake of simplifying notation, will still be denoted by $H^{\gamma}$. Without loss of generality, we assume $u>0$.
Let $\varphi\in C_{c}^{\infty}(\mathbb{R})$ be a nonnegative function with $\supp\varphi\subset[-2,2]$ and $\varphi=1$ on $[-1,1]$. Define $\phi(t)=\varphi(t)-\varphi(2t)$; then
\begin{align}\lambdabel{f3}
\textrm{$\supp\phi\subset \left\{t\in \mathbb{R} :\ 1/2\leq|t|\leq2\right\},\ $ $\sum_{k\in \mathbb{Z}}\phi_{k}(t)=1$ \ \ for all $t\neq0$,}
\end{align}
where $\phi_{k}(t)=\phi(2^{-k}t)$. Let $\tilde{\phi}\in C_{c}^{\infty}(\mathbb{R}\backslash 0)$ be such that $\tilde{\phi}=1$ on $\{t\in \mathbb{R} :\ 1/4\leq|t|\leq4\}$. We then define $P_{k}^{2}f(x)=(\phi_{k}(\timesi_{2})\widehat{f}(\timesi))^\vee(x)$ and $\tilde{P}_{k}^{2}f(x)=(\tilde{\phi}_{k}(\timesi_{2})\widehat{f}(\timesi))^\vee(x)$ for $k\in \mathbb{Z}$.
To proceed, we will reduce the $L^p$-boundedness of $H^{\gamma}$ to a square function estimate.
Recall $u:\ \R^2\to \R$ is a Lipschitz function with Lipschitz norm $\|u\|_{\mathrm{Lip}}\le 1/2$. This further implies that $\|u(x_1,\cdot)\|_{\mathrm{Lip}}\leq 1/2\ $ for every $x_1\in \mathbb{R}$.
\begin{lemma}\lambdabel{a1}
For every $1<p<\infty$, we have
\begin{align*}
\bigg\|\sum_{k}(1-\tilde{P}^2_k)H^{\gamma}P_{k}^{2}f\bigg\|_{p}\lesssim \|f\|_{p}
\end{align*}
for every Schwartz function $f$.
\end{lemma}
\begin{proof}[Proof (of Lemma \ref{a1})]
This lemma follows from Theorem 1.1 in \cite{MR3841536}, which further relies on Jones' beta numbers \cite{MR1013815}. In the special case where $\gamma$ is a monomial, such a lemma was already proved and used in \cite{MR3841536}. The proof of the general case is essentially the same. For the sake of completeness, we still include the proof here.
We first write
\begin{equation}
(1-\tilde{P}^2_k) H^{\gamma} P^2_k f(x)=\int_{\R} (1-\tilde{P}^2_k) \big[P^2_k f(x_1-t, x_2-u(x)\gamma(t))\big]\widetilde{\varphi}(t)\,\frac{dt}{t}.
\end{equation}
By Minkowski's inequality, we have
\begin{equation}
\Big\|(1-\tilde{P}^2_k) H^{\gamma} P^2_k f\Big\|_{p} \le \int_{\R}\Big \|(1-\tilde{P}^2_k) \big[P^2_k f(x_1-t, x_2-u(x)\gamma(t))\big]\Big\|_{p} \widetilde{\varphi}(t)\,\frac{dt}{|t|}.
\end{equation}
By \cite[Theorem 1.1]{MR3841536}, the last display can be further bounded by
\begin{equation}\lambdabel{201127e6-5}
\int_{\R} \Big\|\|f(x_1-t, x_2)\|_{L^p_{x_2}}\|u(x_1, \cdot) \gamma(t)\|_{\mathrm{Lip}}\Big\|_{L^p_{x_1}} \widetilde{\varphi}(t)\,\frac{dt}{|t|}.
\end{equation}
Note that $\|u(x_1, \cdot) \gamma(t)\|_{\mathrm{Lip}}\le |\gamma(t)|\|u\|_{\mathrm{Lip}}$. Therefore, \eqref{201127e6-5} is bounded by
\begin{equation}
\|f\|_p \|u\|_{\mathrm{Lip}} \int_{\R} |\gamma(t)|\widetilde{\varphi}(t)\,\frac{dt}{|t|}\lesssim \|f\|_p \|u\|_{\mathrm{Lip}}.
\end{equation}
In the last step, we used the fact that $|\gamma(t)|\lesssim |t|^{\log_2 C_{0, L}}$, which is a simple consequence of \eqref{curve gamma_z}.
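For completeness, we record the short computation behind this bound (assuming, as is implicit in Theorem \ref{f19}, that $C_{0,L}>1$; we argue for $t>0$, the case $t<0$ being analogous): iterating $\gamma(t)\le \gamma(2t)/C_{0,L}$ gives, for $t\simeq 2^{-m}$ with $m\in\N$,
\begin{align*}
\gamma(t)\leq C_{0,L}^{-m}\,\gamma(2^{m}t)\lesssim C_{0,L}^{-m}=2^{-m\log_{2}C_{0,L}}\simeq t^{\log_{2}C_{0,L}}.
\end{align*}
In particular $|\gamma(t)|/|t|$ is integrable near the origin, which is what was used above.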
\end{proof}
Lemma \ref{a1} tells us that in order to prove the $L^p$-boundedness of $H^{\gamma}$, it suffices to consider the $L^p$-boundedness of the ``diagonal" term:
\begin{equation}
\bigg\|\sum_k \tilde{P}^2_k H^{\gamma} P^2_k f\bigg\|_p \lesssim \|f\|_p
\end{equation}
for every $p>1$. By the Littlewood-Paley theory and the vector-valued estimates for the maximal operator, it suffices to prove
\begin{equation}\lambdabel{201127e6-8}
\bigg\|\bigg(\sum_k |H^{\gamma} P^2_k f|^2\bigg)^{1/2}\bigg\|_p \lesssim \|f\|_p.
\end{equation}
The proof of this square function estimate will occupy the rest of the paper. In order to bound such a square function, we will first apply a dyadic decomposition in the $t$ variable. For every $u\in \R$, define
\begin{equation}\lambdabel{201127e6-9}
A_{u,l}f(x)=\int_{\R} f(x_{1}-t,x_{2}-u\gamma(t))\phi(2^{l}t)\widetilde{\varphi}(t)\,\frac{dt}{t}.
\end{equation}
We write $u(x)=u_{x}$. With this notation, we can write $H^{\gamma}P^2_k f$ as a sum of $A_{u_{x}, l}P^2_k f$ over $l\in \N$. For each fixed $k\in \Z$, we will split the sum over $l\in \N$ into two parts: a high-frequency part and a low-frequency part. The intuition behind the splitting is quite standard: if $l$ is ``large'', that is, $t\simeq 2^{-l}$ is ``small'', then by the uncertainty principle applied to the second variable, the term $u \gamma$ can be treated as a small perturbation, since $P^2_k f$ is essentially constant at scale $2^{-k}$ in the second variable; if $l$ is ``small'', that is, $t$ is ``large'', then we will be able to make use of oscillations to obtain exponential decay in $l$, via the local smoothing estimates established in Lemma \ref{201127lem6-4} below. \\
Let us proceed with the decomposition in $l$ described above. For $v\in \Z$, define
\begin{equation}
E_{v}=\{k\in \Z:\ 2^{v+k}\gamma(\varepsilonilon_{0}/2)\geq1\}.
\end{equation}
Define $v_{x}\in \mathbb{Z}$ to be such that $2^{-v_{x}}u_{x}\in[1,2)$.
We see that if $k\notin E_{v_x}$, then
\begin{equation}
2^{v_x+k}\gamma(\varepsilonilon_{0}/2)\simeq |u(x)|2^k \gamma(\varepsilonilon_{0}/2)\lesssim 1,
\end{equation}
and as a consequence, for every choice of $l$, the term $u\gamma$ can be treated as a perturbation when considering $P^2_k f$. To be more precise, we have
\begin{lemma}\lambdabel{201128lem6-2}
For every $x\in \R^2$ and $k\notin E_{v_x}$, we have the pointwise estimate
\begin{equation}\lambdabel{b5}
\big|H^{\gamma}P_{k}^{2}f(x)\big|\lesssim H_1^{*}P_{k}^{2}f(x)+ M_{S}P_{k}^{2}f(x).
\end{equation}
Here and hereafter, $H_1^*$ denotes the maximal Hilbert transform applied in the first variable, $M_{S}$ denotes the composition of some $M_{1}$ and $M_{2}$, which are the maximal function applied in the first and second variables, respectively.
\end{lemma}
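For definiteness, these auxiliary operators can be taken in the usual sense: writing $x=(x_{1},x_{2})$,
\begin{align*}
H_{1}^{*}f(x)=\sup_{\varepsilonilon>0}\bigg|\int_{|t|>\varepsilonilon} f(x_{1}-t,x_{2})\,\frac{dt}{t}\bigg|,\qquad
M_{1}f(x)=\sup_{r>0}\frac{1}{2r}\int_{-r}^{r}|f(x_{1}-t,x_{2})|\,dt,
\end{align*}
with $M_{2}$ defined analogously in the second variable and $M_{S}=M_{1}\circ M_{2}$. All of these operators are bounded on $L^{p}(\mathbb{R}^{2})$ for $1<p<\infty$ and admit the classical vector-valued extensions, which is what we use below.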
Next, for $v, k\in \Z$, define
\begin{equation}\lambdabel{201127e6-13}
l_0=l_0(v, k)\in \mathbb{Z} \ \text{ such that }\ 2^{v+k} \gamma(2^{-l_0})\simeq 1.
\end{equation}
There may be multiple choices of $l_0$; we pick an arbitrary one. Similarly to Lemma \ref{201128lem6-2}, we have the following pointwise estimate, Lemma \ref{201128lem6-3}. Both lemmas will be proved in Section $6$.
\begin{lemma}\lambdabel{201128lem6-3}
For every $x\in \R^2$ and $k\in E_{v_x}$, it holds that
\begin{equation}
\bigg|\sum_{l\ge l_0(v_x, k)} A_{u_x, l} P^2_k f(x)\bigg|\lesssim H_1^* P^2_k f(x)+ M_S P^2_k f(x).
\end{equation}
\end{lemma}
By Lemmas \ref{201128lem6-2} and \ref{201128lem6-3} and the vector-valued estimates for $H_1^*$ and $M_S$, to prove \eqref{201127e6-8}, it remains to prove
\begin{equation}\lambdabel{201127e6-15}
\bigg\|\bigg( \sum_k \chi_{\{k\in E_{v_x}\}} \bigg|\sum_{l\le l_0(v_x, k)} A_{u_x, l} P^2_k f(x)\bigg|^2 \bigg)^{1/2} \bigg\|_{p} \lesssim \|f\|_p
\end{equation}
for every $p>1$. For $k\in \mathbb{Z}$, define $P_{k}^{1}f(x)=(\phi_{k}(\timesi_{1})\widehat{f}(\timesi))^\vee(x)$.
Now we carry out a last decomposition, which is a Littlewood-Paley decomposition in the first variable of $f$, write
\begin{equation}\lambdabel{201127e6-16}
A_{u_x, l}P^2_k f(x)=\sum_{k'\in \Z} A_{u_x, l}P^1_{k'} P^2_k f(x).
\end{equation}
For each fixed $u$, the Fourier multiplier of the convolution operator $A_{u, l}$ is given by
\begin{equation}\lambdabel{201127e6-17}
\int_{\R} e^{i(t\timesi+u\gamma(t)\eta)} \phi(2^{l}t)\widetilde{\varphi}(t)\,\frac{dt}{t}=\int_{\R} e^{i(2^{-l} t\timesi+u\gamma(2^{-l} t)\eta)} \phi(t)\widetilde{\varphi}(2^{-l}t)\,\frac{dt}{t}.
\end{equation}
Assume that $|\timesi|\simeq 2^{k'}$ and $|\eta|\simeq 2^k$. In order for the multiplier to have ``non-trivial" contributions, it should admit a critical point in the region where $|t|\simeq 2^{-l}$, that is,
\begin{equation}
\timesi+u\gamma'(t)\eta=0\ \text{ for some }\ |t|\simeq 2^{-l}.
\end{equation}
This leads to
\begin{equation}\lambdabel{201127e6-19}
2^{k'}\simeq |u| \gamma'(2^{-l}) 2^k.
\end{equation}
In other words, for fixed $u, k$ and $l$, in the sum over $k'$ in \eqref{201127e6-16}, only those terms satisfying \eqref{201127e6-19} will have non-trivial contributions; all other terms can be handled via elementary methods like integration by parts. Therefore, to prove \eqref{201127e6-15}, we will only prove
\begin{equation}\lambdabel{201127e6-20}
\bigg\|\bigg( \sum_k \chi_{\{k\in E_{v_x}\}} \bigg|\sum_{l\le l_0(v_x, k)} A_{u_x, l} P^1_{k_1}P^2_k f(x)\bigg|^2 \bigg)^{1/2} \bigg\|_{p} \lesssim \|f\|_p,
\end{equation}
where
\begin{equation}
k_1=k_1(v_x, k, l)\in \mathbb{Z}\ \ \text{is such that}\ \ 2^{k_1}\simeq 2^{v_x} 2^k \gamma'(2^{-l}).
\end{equation}
In what follows, for ease of notation, we will suppress the dependence of $k_1$ on $x, k$ and $l$. We will emphasize such dependence whenever necessary. Moreover, if $x$ and $k$ are clear from the context, we will also leave out the dependence of $l_0$ on $x$ and $k$. \\
One key step in the proof of \eqref{201127e6-20} is the following estimate of local smoothing type, which originally only holds for $p>2$. Here we obtain the same estimate for every $p>1$, thanks to the Lipschitz assumption on $u$.
\begin{lemma}\lambdabel{201127lem6-4}
For every $p>1$, it holds that
\begin{equation}
\Big\|\chi_{\{k\in E_{v_x}\}} A_{u_x, l_0-l}P^1_{k_1} P^2_k f(x) \Big\|_{p} \lesssim 2^{-\delta_p l} \|f\|_p
\end{equation}
for every $l\in \N$, where $\delta_p>0$ is a small constant, $k_1=k_1(v_x, k, l_0-l)$ and $l_0=l_0(v_x, k)$.
\end{lemma}
Note that by Fubini's theorem, the desired estimate \eqref{201127e6-20} at $p=2$ follows immediately from Lemma \ref{201127lem6-4}.
The proof of Lemma \ref{201127lem6-4} will be given in Section $7$; we now make one remark. Recall that the phase function of the multiplier in \eqref{201127e6-17} is given by
\begin{equation}\lambdabel{201127e6-23}
2^{-(l_0-l)} t\timesi+u_x \gamma(2^{-(l_0-l)}t)\eta.
\end{equation}
Note that for every $t$ in the support of $\phi$, it holds that $|t|\simeq 1$; for every $(\timesi, \eta)$ in the Fourier support of $P^1_{k_1} P^2_k f$, it holds that
\begin{equation}
|\timesi|\simeq 2^{k_1}\simeq |u_x| 2^k \gamma'(2^{-(l_0-l)})\ \ \textrm{and}\ \ |\eta|\simeq 2^k.
\end{equation}
Let us show that both terms in \eqref{201127e6-23} have scale bigger than one. This is another way of saying that our phase function \eqref{201127e6-23} may oscillate very fast; the local smoothing estimates of Mockenhaupt, Seeger and Sogge \cite{MR1168960} allow us to exploit this high oscillation and obtain a favourable decay estimate as in the above lemma. That both terms in \eqref{201127e6-23} are of scale bigger than one is trivial in the case when $\gamma$ is a monomial. In the general case, the assumptions we made on $\gamma$ in Theorem \ref{f19} come into play. Let us be more precise. First, note that
\begin{equation}
|u_x| \gamma(2^{-l_0+l}t)|\eta|\ge |u_x| \gamma(2^{-l_0})|\eta|\gtrsim 1,
\end{equation}
which follows from the monotonicity assumption on $\gamma$ and the definition of $l_0$ in \eqref{201127e6-13}. The other term is the more interesting one. We need to show that
\begin{equation}
2^{-l_0+l} |u_x| 2^k \gamma'(2^{-l_0+l})\gtrsim 1,
\end{equation}
which follows from applying item (ii) in the statement of Theorem \ref{f19} with $j=1$. \\
So far we have finished the proof of \eqref{201127e6-20} at $p=2$, which yields the main Theorem \ref{curve gamma} at $p=2$. Next, we consider $p\neq 2$. It turns out that the cases $p<2$ and $p>2$ require entirely different tools. The case $p<2$ is easier, and can be handled essentially via interpolation and the bootstrapping argument of Nagel, Stein and Wainger \cite{MR0466470}. The case $p>2$ is much more complicated, and requires a vector-valued version of the $L^p$ orthogonality principle of Seeger \cite{MR955772}; we will prove this vector-valued result in a general space of homogeneous type instead of $\mathbb{R}^{n}$. Homogeneous spaces were introduced
by Coifman and Weiss \cite{MR0499948}, who discovered that the ``spaces of
homogeneous type'' are the metric spaces to which the Calder$\acute{\textrm{o}}$n-Zygmund
theory extends naturally. It would be interesting to find an approach that works for every $p$ simultaneously. \\
Let us first look at the easier case $p<2$. By interpolation with the exponential decay estimate in Lemma \ref{201127lem6-4} and the Littlewood-Paley inequality, it suffices to prove that
\begin{equation}\lambdabel{201127e6-27}
\bigg\|\bigg( \sum_k \chi_{\{k\in E_{v_x}\}} \bigg| A_{u_x, l_0-l} P^1_{k_1}P^2_k f(x)\bigg|^{2} \bigg)^{1/2} \bigg\|_{p} \lesssim \bigg\|\bigg(\sum_k |P^2_k f|^{2}\bigg)^{1/2}\bigg\|_p
\end{equation}
for every $l\in \N$. Here $k_1=k_1(v_x, k, l_0-l)$ and $l_0=l_0(v_x, k)$. In order to prove the above vector-valued estimates, we will appeal to the $L^p$-boundedness of the maximal operators that were established in \cite{NjSlHx20}. Indeed, one can also apply Lemma \ref{201127lem6-4} and recover the above result. Define
\begin{equation}
M^{\gamma} f(x)=\sup_{\varepsilonilon>0} \frac{1}{2\varepsilonilon}\int_{-\varepsilonilon}^{\varepsilonilon} |f(x_1-t, x_2-u(x)\gamma(t))|\widetilde{\varphi}(t)\,dt.
\end{equation}
It was proven in \cite{NjSlHx20} that, under the same assumptions as in Theorem \ref{f19}, it holds that
\begin{equation}\lambdabel{201208e5-29}
\|M^{\gamma} f\|_p \lesssim \|f\|_p
\end{equation}
for every $p>1$. Moreover, one can upgrade such scalar-valued bound into a vector-valued bound for free. To be more precise, we also have that
\begin{equation}\lambdabel{201201e6-30}
\bigg\|\bigg(\sum_k |M^{\gamma} f_k|^2\bigg)^{1/2}\bigg\|_p \lesssim \bigg\|\bigg(\sum_k | f_k|^2\bigg)^{1/2}\bigg\|_p
\end{equation}
for every $1<p\le 2$. To show \eqref{201127e6-27}, we observe the following pointwise estimate
\begin{equation}
\big|A_{u_x, l_0-l} P^1_{k_1}P^2_k f(x)\big|\lesssim M^{\gamma} M_S P^2_k f(x).
\end{equation}
Therefore, the desired estimate \eqref{201127e6-27} follows immediately from \eqref{201201e6-30} and the vector-valued estimates for the strong maximal operator. \\
In the remaining part, we focus on the case $p>2$. By interpolating with the $L^2$ bounds in Lemma \ref{201127lem6-4}, it suffices to prove
\begin{equation}\lambdabel{201127e6-20zz}
\bigg\|\bigg( \sum_k \chi_{\{k\in E_{v_x}\}} \bigg| A_{u_x, l_0-l} P^1_{k_1}P^2_k f(x)\bigg|^2 \bigg)^{1/2} \bigg\|_{p} \lesssim (l+1)^{10} \bigg\|\bigg(\sum_k |P^2_k f|^2\bigg)^{1/2}\bigg\|_p
\end{equation}
for every $l\ge 0$, where
\begin{equation}
k_1=k_1(v_x, k, l_0-l) \ \ \text{is such that} \ \ 2^{k_1}\simeq 2^{v_x} 2^k \gamma'(2^{-(l_0-l)}).
\end{equation}
We bound the left hand side of \eqref{201127e6-20zz} by
\begin{equation}\lambdabel{201201e6-34}
\Bigg\| \Bigg[\sum_{v\in \Z} \sup_{u\in [1, 2]} \bigg( \sum_k \chi_{\{k\in E_{v}\}} \bigg| A_{2^v u, l_0-l} P^1_{k_1}P^2_k f\bigg|^2 \bigg)^{p/2}\Bigg]^{1/p} \Bigg\|_{p},
\end{equation}
where
\begin{equation}\lambdabel{201201e6-35}
l_0=l_0(v, k) \ \ \text{ satisfying }\ \ 2^{v+k}\gamma(2^{-l_0})\simeq 1,
\end{equation}
and
\begin{equation}\lambdabel{201201e6-36}
k_1=k_1(v, k, l_0-l) \ \ \text{ satisfying } \ \ 2^{k_1}\simeq 2^v 2^k \gamma'(2^{-(l_0-l)}).
\end{equation}
We now state two lemmas.
\begin{lemma}\lambdabel{lem21-315-51}
For each fixed $k_{1}$, $k$ and $l$, there are at most $C(l+1)$ choices of $v$ such that \eqref{201201e6-36} holds.
\end{lemma}
\begin{proof}[Proof (of Lemma \ref{lem21-315-51})]
Fix $k_{1}$, $k$ and $l$. By assumption (ii) of Theorem \ref{f19}, \eqref{201201e6-35} and \eqref{201201e6-36}, we have $$C_{0,L}^{l}2^{l_{0}-l}\lesssim2^{k_1}\simeq \gamma_{l_{0}}(2^{l})2^{l_{0}-l}\leq C_{0,U}^{l}2^{l_{0}-l}.$$
Therefore, for each fixed $k_{1}$, there are at most $C_{1}(l+1)$ choices of $l_{0}$. By \eqref{201201e6-35}, for each fixed $l_{0}$, there are at most $C_{2}$ choices of $v$. Then for each fixed $k_{1}$, there are at most $C_{1}C_{2}(l+1)$ choices of $v$ such that \eqref{201201e6-36} holds.
\end{proof}
\begin{lemma}\lambdabel{a39}
Let $2<p<\infty$ and $v\in \Z$. Then
\begin{align}\lambdabel{a41}
\bigg\|\sup_{ u\in[1,2]}\bigg(\sum_{k\in E_{v}}\bigg|A_{2^{v}u,l_{0}-l}P^1_{k_1} P^2_{k} f\bigg|^{2}\bigg)^{1/2}\bigg\|_{p}\lesssim (l+1)^{5} \bigg\|\bigg(\sum_{k\in E_{v}}\big|P^1_{k_1}P^2_k f\big|^{2}\bigg)^{1/2}\bigg\|_{p},
\end{align}
where $l_0$ and $k_1$ are defined as in \eqref{201201e6-35} and \eqref{201201e6-36}, respectively.
\end{lemma}
We first assume Lemma \ref{a39}, which will be proved in Section $8$, and continue the proof of \eqref{201127e6-20zz}. By \eqref{201201e6-34}, $l^{2}\subseteq l^{p}$ and Lemmas \ref{lem21-315-51} and \ref{a39}, the left hand side of \eqref{201127e6-20zz} can be bounded by
\begin{align}\lambdabel{21-315-538}
&(l+1)^{5} \Bigg\| \Bigg[\sum_v \bigg(\sum_{k\in E_{v}}\big|P^1_{k_1}P^2_k f\big|^{2}\bigg)^{p/2}\Bigg]^{1/p}\Bigg\|_{p}
\leq (l+1)^{5} \bigg\| \bigg(\sum_v \sum_{k\in E_{v}}\big|P^1_{k_1}P^2_k f\big|^{2}\bigg)^{1/2}\bigg\|_{p}\nonumber\\
&\lesssim (l+1)^{10} \bigg\| \bigg(\sum_{k'\in \Z} \sum_{k\in \Z}\big|P^1_{k'}P^2_k f\big|^{2}\bigg)^{1/2}\bigg\|_{p}.
\end{align}
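Let us briefly explain the origin of the extra factor in the last inequality: by Lemma \ref{lem21-315-51}, for fixed $l$ each pair $(k_{1},k)$ arises from at most $C(l+1)$ values of $v$, so pointwise
\begin{align*}
\sum_{v}\sum_{k\in E_{v}}\big|P^1_{k_1}P^2_k f\big|^{2}\lesssim (l+1)\sum_{k'\in \Z}\sum_{k\in \Z}\big|P^1_{k'}P^2_k f\big|^{2},
\end{align*}
and the resulting factor $(l+1)^{1/2}$ is absorbed into $(l+1)^{10}$.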
In the end, we apply the bi-parameter Littlewood-Paley inequality and finish the proof of \eqref{201127e6-20zz}.\\
\section{Proofs of Lemmas \ref{201128lem6-2} and \ref{201128lem6-3}}
Let us begin with the proof of Lemma \ref{201128lem6-2}. We fix $x\in \R^2$ and $k\notin E_{v_x}$, which means $2^{v_x+k}\lesssim 1$. We have
\begin{equation}\lambdabel{201202e7-40}
H^{\gamma} P^2_k f(x)=\int_{\R} P^2_k f(x_1-t, x_2-u_x \gamma(t))\widetilde{\varphi}(t)\,\dtt.
\end{equation}
From $P^2_k f=P^2_k \tilde{P}_{k}^{2}f$ and the definition of $\tilde{P}_{k}^{2}f$, it easily follows that
\begin{equation}
P^2_k f(x)=\int_{\R}P^2_k f(x_1, x_2-z)2^k \widecheck{\tilde{\phi}}(2^k z)\,dz.
\end{equation}
We write \eqref{201202e7-40} as
\begin{equation}
\begin{split}
& \int_{\R} \int_{\R}P^2_k f(x_1-t, x_2-u_x \gamma(t)-z) 2^k \widecheck{\tilde{\phi}}(2^k z) \widetilde{\varphi}(t)\,dz \,\dtt\\
& =\int_{\R} \int_{\R}P^2_k f(x_1-t, x_2-z) 2^k \widecheck{\tilde{\phi}}(2^k (z+u_x \gamma(t))) \widetilde{\varphi}(t)\,dz\, \dtt.
\end{split}
\end{equation}
We compare it with
\begin{equation}
\int_{\R} \int_{\R}P^2_k f(x_1-t, x_2-z) 2^k \widecheck{\tilde{\phi}}(2^k z) \widetilde{\varphi}(t)\,dz\, \dtt,
\end{equation}
which is certainly bounded by $H_1^* P^2_k f$. The difference is controlled by
\begin{equation}\lambdabel{201203e7-5}
\begin{split}
& \int_{\R} \int_{\R} |P^2_k f(x_1-t, x_2-z)| 2^k \big|\widecheck{\tilde{\phi}}(2^k (z+u_x \gamma(t)))-\widecheck{\tilde{\phi}}(2^k z)\big| \widetilde{\varphi}(t) \,dz \,\frac{dt}{|t|}\\
& \lesssim \sum_{\tau\in \N} (1+\tau)^{-100} \int_{\R} \int_{\R} |P^2_k f(x_1-t, x_2-z)| 2^k \chi_{\{\tau 2^{-k}\le |z|\le (\tau+1) 2^{-k}\}} \widetilde{\varphi}(t)\, dz \frac{|\gamma(t)|}{|t|}\,dt.
\end{split}
\end{equation}
Recalling that $|\gamma(t)|\lesssim |t|^{\log_2 C_{0, L}}$, so that the weight $|\gamma(t)|/|t|$ is integrable on the support of $\widetilde{\varphi}$, we see that \eqref{201203e7-5} is bounded by the strong maximal function $M_{S}P_{k}^{2}f$ (in passing to \eqref{201203e7-5} we also used that $2^{k}|u_{x}|\simeq 2^{v_{x}+k}\lesssim 1$, since $k\notin E_{v_x}$). This finishes the proof of Lemma \ref{201128lem6-2}.\\
Next we prove Lemma \ref{201128lem6-3}. Fix $x\in \R^2$ and abbreviate $l_0(v_x, k)$ to $l_0$; then
\begin{equation}
\sum_{l\ge l_0} A_{u_x, l} P^2_k f(x)=\sum_{l\ge l_0} \int_{\R} P^2_k f(x_1-t, x_2-u_x \gamma(t)) \phi_l(t)\widetilde{\varphi}(t)\,\dtt.
\end{equation}
We still compare it with
\begin{equation}
\sum_{l\ge l_0} \int_{\R} P^2_k f(x_1-t, x_2) \phi_l(t)\widetilde{\varphi}(t)\,\dtt,
\end{equation}
which can be bounded by the maximal Hilbert transform $H_1^* P^2_k f$. The difference is bounded by
\begin{equation}\lambdabel{201203e7-8}
\begin{split}
& \sum_{\tau\in \N} (1+\tau)^{-100} \int_{|t|\lesssim 2^{-l_0}} \int_{\R} |P^2_k f(x_1-t, x_2-z)| 2^k \chi_{\{\tau 2^{-k}\le |z|\le (\tau+1) 2^{-k}\}} \widetilde{\varphi}(t) \,dz \frac{2^k |u_x||\gamma(t)|}{|t|}\,dt\\
& \lesssim \sum_{\tau\in \N} (1+\tau)^{-100} \int_{|t|\lesssim 2^{-l_0}} \int_{\R} |P^2_k f(x_1-t, x_2-z)| 2^k \chi_{\{\tau 2^{-k}\le |z|\le (\tau+1) 2^{-k}\}} \widetilde{\varphi}(t) \,dz \frac{|\gamma(t)|}{\gamma(2^{-l_0})|t|}\,dt.
\end{split}
\end{equation}
Recall that from Lemma \ref{curve gamma_z}, we have $\gamma(t)\le \gamma(2t)/C_{0, L}$. This implies that if $|t|\simeq 2^{-l}\le 2^{-l_0}$, then
\begin{equation}
|\gamma(t)|\le (C_{0, L})^{-(l-l_0)} \gamma(2^{-l_0}).
\end{equation}
From this we see again that \eqref{201203e7-8} can be bounded by the strong maximal function $M_{S}P_{k}^{2}f$. This finishes our proof.
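To spell out the last step (assuming, as in Theorem \ref{f19}, that $C_{0,L}>1$): the bound $|\gamma(t)|\le (C_{0, L})^{-(l-l_0)} \gamma(2^{-l_0})$ makes the $t$-integral in \eqref{201203e7-8} summable over the dyadic scales $|t|\simeq 2^{-l}$, $l\ge l_{0}$, since
\begin{align*}
\sum_{l\geq l_{0}}\int_{|t|\simeq 2^{-l}}\frac{|\gamma(t)|}{\gamma(2^{-l_{0}})|t|}\,dt\lesssim \sum_{l\geq l_{0}}C_{0,L}^{-(l-l_{0})}\lesssim 1,
\end{align*}
while the remaining averages in $t$ and $z$ produce the strong maximal function $M_{S}P_{k}^{2}f$.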
\section{Proof of Lemma \ref{201127lem6-4}}
By interpolating with the estimate \eqref{201208e5-29}, we see that to prove Lemma \ref{201127lem6-4}, it suffices to prove
\begin{equation}\lambdabel{201205e8-40}
\Big\|\chi_{\{k\in E_{v_x}\}} A_{u_x, l_0-l}P^1_{k_1} P^2_k f(x) \Big\|_{p} \lesssim C_{0,L}^{-\delta_{p}l} \|f\|_p
\end{equation}
for some $p\ge 6$, where $\delta_p>0$ is allowed to depend on $p$. Here $k_1=k_1(v_x, k, l_0-l)$ is such that
\begin{equation*}
2^{k_1}\simeq 2^{v_x} 2^k 2^{l_0-l} \gamma(2^{-(l_0-l)}).
\end{equation*}
We bound the left hand side of \eqref{201205e8-40} by
\begin{align}\lambdabel{201205e8-42}
\bigg\| \sup_{v\in \mathbb{Z}}\sup_{u\in[1,2]}\chi_{\{k\in E_{v}\}}\big|A_{2^{v}u, l_0-l}P^1_{k_1} P^2_k f\big| \bigg\|_{p}.
\end{align}
Now we bound the sup over $v\in \Z$ by an $\ell^p$ norm. By Lemma \ref{lem21-315-51} and an argument similar to that in the proof of \eqref{21-315-538}, it then suffices to prove
\begin{align}\lambdabel{201205e8-43}
\bigg\| \sup_{u\in[1,2]}\chi_{\{k\in E_{v}\}}\big|A_{2^{v}u, l_0-l}P^1_{k_1} P^2_k f\big| \bigg\|_{p}\lesssim C_{0,L}^{-\delta_{p}l}\|f\|_p
\end{align}
uniformly in $v\in \Z$, where $k_1=k_1(v, k, l_0-l)$.
By scaling and the support assumption in \eqref{201205e6-2}, it suffices to show
\begin{align}\lambdabel{201205e8-44}
\bigg\| \sup_{u\in[1,2]}\Big|m_{\log_{2}\gamma_{l_0}(2^{l})}^{l_{0}-l}(u,D)f\Big| \bigg\|_{p}\lesssim \gamma_{l_0}(2^{l})^{-\delta_{p}}\|f\|_p\lesssim C_{0,L}^{-\delta_{p}l}\|f\|_p
\end{align}
for all $k,v\in \mathbb{Z}$ satisfying $k\in E_{v}$ and $2^{-l_{0}+l}\lesssim \varepsilonilon_{0}$, where
\begin{equation}\lambdabel{201211e7.6}
m_{j}^{l}(u,\timesi)=\phi_{j}(\timesi_{1})\phi_{j}(\timesi_{2}) \int_{\R} e^{i\timesi_{1}t}e^{i\timesi_{2}u\gamma_{l}(t)}\phi(t)\,\dtt .
\end{equation}
We remark that $m_j^l(u, \timesi)$ is the multiplier of the convolution operator $A^l_u P^1_j P^2_j$, where
\begin{equation}\lambdabel{201210e7.7}
A^l_u f(x)=\int_{\R} f(x_1-t, x_2-u\gamma_l(t))\phi(t)\,\dtt.
\end{equation}
Let us organize what we need to prove in the following lemma.
\begin{lemma}\lambdabel{201210lem7.1}
For every $j\ge 1$ and $l\in \N$ with $2^l\gg 1$, it holds that
\begin{equation}
\bigg\|\sup_{u\in [1, 2]} \big|m_j^l(u, D) f\big|\bigg\|_{p}\lesssim 2^{-\delta_p j} \|f\|_p
\end{equation}
for some $p\ge 6$ and some $\delta_p>0$ that is allowed to depend on $p$. In particular, the implicit constant is uniform in $l$.
\end{lemma}
The rest of this section is devoted to the proof of Lemma \ref{201210lem7.1}. Before we start the proof, let us remark that this is the place where we need to apply Lemma \ref{b44}, which states that $\gamma_l$ behaves ``uniformly'' in the parameter $l$. In particular, if we assume that $\gamma:\ \R\to \R$ is homogeneous, then $\gamma_l$ is independent of $l$, and certainly all the estimates we obtain are also uniform in $l$. \\
Let us begin the proof with an estimate for a fixed $u$, which is a simple corollary of van der Corput's lemma.
\begin{lemma}\lambdabel{b28}
For every $j\ge 1$, $2^{l}\gg 1$, $u\in [1, 2]$ and every $p\ge 2$, it holds that
\begin{align}\lambdabel{b27}
\Big\|m_j^l(u, D) f\Big\|_{p}\lesssim 2^{-j/p} \|f\|_p,
\end{align}
where the implicit constant is uniform in $l$.
\end{lemma}
\begin{proof}[Proof (of Lemma \ref{b28})]
By interpolation, it suffices to prove \eqref{b27} at $p=2$ and $p=\infty$. The estimate at $p=\infty$ is trivial since $A^l_u$ is an averaging operator. The desired $L^2$ bound follows immediately from the following pointwise estimate of the multiplier:
\begin{align}\lambdabel{b23}
|m_{j}^{l}(u,\timesi)|\lesssim 2^{-j/2},
\end{align}
which further follows from van der Corput's lemma and
\begin{equation}
|\gamma_{l}''(t)|\gtrsim 1 \ \text{ for all}\ t\in [1/2, 2],
\end{equation}
uniformly in $l$, as stated in Lemma \ref{b44}.
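For the reader's convenience, the application of van der Corput's lemma can be spelled out as follows: on the support of $\phi_{j}(\timesi_{1})\phi_{j}(\timesi_{2})$, for $u\in[1,2]$ and $t\in\supp\phi$,
\begin{align*}
\big|\partial_{t}^{2}\big(\timesi_{1}t+\timesi_{2}u\gamma_{l}(t)\big)\big|=|\timesi_{2}|\,u\,|\gamma_{l}''(t)|\gtrsim 2^{j},
\end{align*}
so van der Corput's lemma, in its standard form with the smooth compactly supported amplitude $\phi(t)/t$, gives $|m_{j}^{l}(u,\timesi)|\lesssim 2^{-j/2}$, which is \eqref{b23}.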
\end{proof}
Let us continue with the proof of Lemma \ref{201210lem7.1}. In the next lemma, we prove a local smoothing estimate, which, combined with Lemma \ref{b28} and the Sobolev embedding inequality, further implies Lemma \ref{201210lem7.1}. We refer to \cite{NjSlHx20} for a more detailed discussion.
\begin{lemma}\lambdabel{b56}
For every $j\ge 1$, $2^l\gg 1$ and every $p\ge 6$, it holds that
\begin{align}\lambdabel{b29}
\bigg(\int_{1}^{2}\Big\|m_j^l(u, D)f\Big\|_{p}^{p}\,du\bigg)^{1/p}\lesssim 2^{-(1/p+\delta_{p})j}\|f\|_{p},
\end{align}
for some $\delta_p>0$, uniformly in $l$.
\end{lemma}
\begin{proof}[Proof (of Lemma \ref{b56})] Recall the definition of the multiplier $m_j^l$ in \eqref{201211e7.6}. Let $t_c$ be the critical point of the phase function there, that is,
\begin{equation}\lambdabel{21-316-712}
\timesi_1+ \timesi_2 u \gamma'_l(t_c)=0.
\end{equation}
Next we show that if $t_{c}$ satisfies \eqref{21-316-712}, then $t_{c}\simeq1$.
By \eqref{21-316-712}, for all $\timesi_{1},\timesi_{2}\in\supp \phi_{j}$ and $u\in (1/2,4)$, one has
\begin{align}\lambdabel{21-316-713}
\gamma'_l(t_c)\in [2^{-4},2^{4}].
\end{align}
By the doubling property of $\gamma'$ in Lemma \ref{c1}, there exists a constant $c_{3}$ (independent of $l$) such that if $c_{3}\leq l$, we have
\begin{align}\lambdabel{21-316-714}
[2^{-4},2^{4}]\subset\gamma_{l}'([2^{-c_{3}},2^{c_{3}}]).
\end{align}
We now define $\varepsilonilon_{0}= 2^{-c_{3}-4}$. The condition $c_{3}\leq l$ is only to make sure $\gamma_{l}'(t)$ is well defined on $[2^{-c_{3}},2^{c_{3}}]$ and will be satisfied by the support condition in \eqref{201205e6-2}. By \eqref{21-316-713} and \eqref{21-316-714}, we have $t_{c}\in[2^{-c_{3}},2^{c_{3}}]$.
By the stationary phase formula in \cite{MR1232192}, we can write
\begin{equation}
m_j^l(u, \timesi)=a_{j,l}(u, \timesi) e^{i\Psi_l(u, \timesi)} +e_{j,l}(u, \timesi),
\end{equation}
where
\begin{equation}
\Psi_l(u, \timesi)=\timesi_1 t_c(\timesi)+ \timesi_2 u \gamma_l(t_c(\timesi)),
\end{equation}
$a_{j,l}(u, \timesi)$ is a symbol of order $-1/2$ that is supported on $\{\timesi\in \mathbb{R}^2 :\ |\timesi_1|\simeq |\timesi_2|\simeq 2^j\}$ and $e_{j,l}$ is a smooth symbol. The assumption (iii) in Theorem \ref{f19} guarantees that everything is uniform in $l$ whenever $2^l\gg 1$. It suffices to show that
\begin{equation}\lambdabel{21-315-716}
\bigg\|\int_{\R^2} \widehat{f}(\timesi) a_{j,l}(u, \timesi) e^{i\Psi_l(u, \timesi)} e^{ix\cdot \timesi} \,d\timesi\bigg\|_{L^p([1, 2]\times \R^2)} \lesssim 2^{-(1/p+\delta_p)j}\|f\|_p.
\end{equation}
We will show that $\Psi_l(u, \timesi)$ satisfies Sogge's cinematic curvature condition in \cite{MR4078231}. Let $Q$ be the orthogonal matrix such that $Qe_{1}=(\frac{\sqrt{2}}{2},-\frac{\sqrt{2}}{2})$, and define $\Psi_l^{Q}(x,u,\timesi)=\Psi_l(x,u,Q\timesi)$ and $a_{j,l}^{Q}(x,u,\timesi)=a_{j,l}(x,u,Q\timesi)$. On $\supp a_{j,l}^{Q}$, by assumption (i) of Theorem \ref{f19}, we have
\begin{align}\lambdabel{9}
\big|\partial_{\timesi_{2}}^{2}\partial_{u}\Psi_l^{Q}(x,u,\timesi)\big|\gtrsim1
\end{align}
uniformly in $l$ whenever $2^l\gg 1$. So we have verified the cinematic curvature condition in the first quadrant of $\timesi$-space. For the other quadrants, the proofs are similar and we omit them. With the cinematic curvature condition for the phase function $\Psi_l(u, \timesi)$ at our disposal, \eqref{21-315-716} follows from the local smoothing estimate proven in \cite{MR4078231}. We refer to \cite{NjSlHx20} for more details.
\end{proof}
\section{Proof of Lemma \ref{a39}}
We first show that, for all $v\in \mathbb{Z}$ and $l\geq0$, we have
\begin{align}\lambdabel{a40}
\bigg\|\sup_{ u\in[1,2]}\bigg(\sum_{k\in E_{v}}\bigg|A_{2^{v}u,l_{0}-l}P_{k_1}^{1}P_{k}^{2}f\bigg|^{2}\bigg)^{1/2}\bigg\|_{p}\lesssim (l+1)^{5}\|f\|_{p}.
\end{align}
Define
\begin{align*}
m_{k}^{l}(u,\timesi)=\int_{\R} e^{i2^{-l_{0}}\timesi_{1} t}e^{iu2^{-k}\timesi_{2}\gamma_{l_{0}}(2^{l})\gamma_{l_{0}-l}(t)}\phi(t)\tilde{\varphi}(2^{l-l_{0}}t)\,\frac{dt}{t}\phi_{l_{0}}(\gamma_{l_{0}}(2^{l})^{-1}\timesi_{1})\phi_{k}(\timesi_{2})
\end{align*}
where $\gamma_{l_{0}}(2^{l})$ and $\gamma_{l_{0}-l}(t)$ are defined as in \eqref{c2}.
By scaling, it suffices to show
\begin{align}\lambdabel{d10}
\bigg\|\sup_{ u\in[1,2]}\bigg(\sum_{k\in E_{v}}\big|m_{k}^{l}(u,D)f\big|^{2}\bigg)^{1/2}\bigg\|_{p}\lesssim (l+1)^{5} \|f\|_{p}.
\end{align}
Define $a_{k}^{l}(u,\timesi)=\psi(u)m_{k}^{l}(u,2^{l_{0}}\gamma_{l_{0}}(2^{l})\timesi_{1},2^{k}\timesi_{2})$ for some $\psi(u)\in C_{c}^{\infty}(1/2,4)$ and $\psi=1$ on $[1,2]$. Then
\begin{align}\lambdabel{b15}
a_{k}^{l}(u,\timesi)=\psi(u)\int_{\R} e^{i\gamma_{l_{0}}(2^{l})\timesi_{1} t}e^{iu\timesi_{2}\gamma_{l_{0}}(2^{l})\gamma_{l_{0}-l}(t)}\phi(t)\tilde{\varphi}(2^{l-l_{0}}t)\,\frac{dt}{t}\phi(\timesi_{1})\phi(\timesi_{2})
\end{align}
and
$\supp a_{k}^{l}\subset I\times K$, where $I=\supp\psi$ and $K=\supp\phi\times \supp\phi\subset[1/2,2]\times[1/2,2]$.
We have that, for all $u\in I,$ $\timesi\in K$ and $k\in E_{v}$,
\begin{align}\lambdabel{b18}
|a_{k}^{l}(u,\timesi)|\lesssim \gamma_{l_{0}}(2^{l})^{-1/2},
\end{align}
where we used van der Corput's lemma and Lemma \ref{b44}.
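More precisely, by \eqref{b15} the phase of the oscillatory integral defining $a_{k}^{l}$ is $\gamma_{l_{0}}(2^{l})\big(\timesi_{1}t+u\timesi_{2}\gamma_{l_{0}-l}(t)\big)$, and for $\timesi\in K$, $u\in I$ and $t\in\supp\phi$,
\begin{align*}
\big|\partial_{t}^{2}\big[\gamma_{l_{0}}(2^{l})\big(\timesi_{1}t+u\timesi_{2}\gamma_{l_{0}-l}(t)\big)\big]\big|
=\big|u\timesi_{2}\big|\,\gamma_{l_{0}}(2^{l})\,\big|\gamma_{l_{0}-l}''(t)\big|\gtrsim \gamma_{l_{0}}(2^{l}),
\end{align*}
where the lower bound on $|\gamma_{l_{0}-l}''|$ comes from Lemma \ref{b44} (applicable since $2^{-l_{0}+l}\lesssim\varepsilonilon_{0}$, so that $2^{l_{0}-l}\gg1$). Since the amplitude $\phi(t)\tilde{\varphi}(2^{l-l_{0}}t)/t$ has uniformly bounded $C^{1}$ norm, van der Corput's lemma then yields \eqref{b18}.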
Note that $\frac{d}{du}a_{k}^{l}(u,\timesi)$ can be written as the sum of the following two terms:
\begin{align*}
\psi'(u)\int_{\R} e^{i\gamma_{l_{0}}(2^{l})\timesi_{1} t}e^{iu\timesi_{2}\gamma_{l_{0}}(2^{l})\gamma_{l_{0}-l}(t)}\phi(t)\tilde{\varphi}(2^{l-l_{0}}t)\,\frac{dt}{t}\phi(\timesi_{1})\phi(\timesi_{2}),
\end{align*}
and
\begin{align*}
\gamma_{l_{0}}(2^{l})\psi(u)\int_{\R} e^{i\gamma_{l_{0}}(2^{l})\timesi_{1} t}e^{iu\timesi_{2}\gamma_{l_{0}}(2^{l})\gamma_{l_{0}-l}(t)}\gamma_{l_{0}-l}(t)\phi(t)\tilde{\varphi}(2^{l-l_{0}}t)\,\frac{dt}{t}\phi(\timesi_{1})\timesi_{2}\phi(\timesi_{2}).
\end{align*}
By Lemma \ref{b44}, for all $m\geq0$ and $t\in \supp\phi$, we have $|(\frac{d}{dt})^{m}(\gamma_{l_{0}-l}(t))|\lesssim 1$, where the implicit constant is independent of $l_{0}$ and $l$. Then, arguing as for \eqref{b18}, we have, for all $u\in I,$
\begin{align}\lambdabel{b19}
\bigg|\frac{d}{du}a_{k}^{l}(u,\timesi)\bigg|\lesssim \gamma_{l_{0}}(2^{l})^{1/2}.
\end{align}
By \eqref{b18}, \eqref{b19} and Plancherel's theorem, we conclude that
\begin{align*}
\big\|a_{k}^{l}(u,D)f\big\|_{2}\lesssim\gamma_{l_{0}}(2^{l})^{-1/2}\|f\|_{2} \ \ \textrm{and} \ \ \bigg\|\frac{d}{du}a_{k}^{l}(u,D)f\bigg\|_{2}\lesssim\gamma_{l_{0}}(2^{l})^{1/2}\|f\|_{2}.
\end{align*}
By the following Sobolev embedding inequality,
\begin{align*}
\sup_{s\in I}|g(s)|^{2}\lesssim |g(1)|^{2}+\bigg(\int_{I}|g(s)|^{2}\,ds\bigg)^{1/2}\bigg(\int_{I}\big|\frac{d}{ds}g(s)\big|^{2}\,ds\bigg)^{1/2}
\end{align*}
where $g\in C^{\infty}(I)$, we get
\begin{align}\lambdabel{b11}
\bigg\|\sup_{u\in I}\big|a_{k}^{l}(u,D)f\big|\bigg\|_{2}\lesssim \|f\|_{2}.
\end{align}
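Let us spell out how the powers of $\gamma_{l_{0}}(2^{l})$ cancel in this step: applying the above inequality pointwise in $x$ to $g(u)=a_{k}^{l}(u,D)f(x)$, integrating in $x$ and using the Cauchy-Schwarz inequality in $x$ for the product term, we obtain
\begin{align*}
\Big\|\sup_{u\in I}\big|a_{k}^{l}(u,D)f\big|\Big\|_{2}^{2}
&\lesssim \big\|a_{k}^{l}(1,D)f\big\|_{2}^{2}
+\bigg(\int_{I}\big\|a_{k}^{l}(u,D)f\big\|_{2}^{2}\,du\bigg)^{1/2}
\bigg(\int_{I}\Big\|\frac{d}{du}a_{k}^{l}(u,D)f\Big\|_{2}^{2}\,du\bigg)^{1/2}\\
&\lesssim \gamma_{l_{0}}(2^{l})^{-1}\|f\|_{2}^{2}+\gamma_{l_{0}}(2^{l})^{-1/2}\gamma_{l_{0}}(2^{l})^{1/2}\|f\|_{2}^{2}\lesssim\|f\|_{2}^{2},
\end{align*}
which is \eqref{b11}.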
By \eqref{b15}, it follows that
\begin{align*}
a_{k}^{l}(u,D)f=\psi(u)\int_{\R} P_{0}^{1}P_{0}^{2}f(x_{1}-\gamma_{l_{0}}(2^{l})t,x_{2}-u\gamma_{l_{0}}(2^{l})\gamma_{l_{0}-l}(t))\phi(t)\tilde{\varphi}(2^{l-l_{0}}t)\,\frac{dt}{t}.
\end{align*}
Then we have $\sup_{x}\sup_{u\in I}|a_{k}^{l}(u,D)f(x)|\lesssim \|f\|_{\infty}.$
We also have
$|\partial_{\timesi}^{\alpha}a_{k}^{l}(u,\timesi)|\lesssim \gamma_{l_{0}}(2^{l})^{6}\lesssim C_{0,U}^{6l}$
for all $u\in I,$ $\timesi\in K$, $k\in E_{v}$ and $|\alpha|\leq 6$, where we used the doubling property of $\gamma$ in Lemma \ref{c1}.
Therefore, by Lemma \ref{a13}, we conclude that
\begin{align*}
\bigg\|\sup_{u\in I}\bigg(\sum_{k\in E_{v}}\big|\psi(u)m_{k}^{l}(u,D)f\big|^{2}\bigg)^{1/2}\bigg\|_{p}\lesssim \big(1+\log_{2}(C_{0,U}^{7l})\big)^{\frac{1}{2}-\frac{1}{p}}\|f\|_{p}\lesssim (l+1)^{5} \|f\|_{p}.
\end{align*}
This along with the fact that $\psi=1$ on $[1,2]$ yields \eqref{d10}.
Finally, we use \eqref{a40} to show \eqref{a41}.
Define
\begin{align*}
\textrm{$J_{i}=\{3k+i:\ k\in \mathbb{Z}\}$,\ \ $i=1,2,3.$}
\end{align*}
Fix $i=1,2,3$ and define
$g=\sum_{k\in J_{i}\bigcap E_{v}}\tilde{P}_{k_1}^{1}\tilde{P}_{k}^{2}f$ with $(\tilde{P}_{k}^{2}f)^\wedge(\timesi)=\tilde{\phi}(2^{-k}\timesi_{2})\widehat{f}(\timesi)$,
where $\tilde{\phi}\in C_{c}^{\infty}(1/4\leq|\timesi_{2}|\leq4)$ and $\tilde{\phi}=1$ on $\{\timesi_2\in \mathbb{R} :\ 1/2\leq|\timesi_2|\leq2\}$.
By \eqref{a40}, we find that
\begin{align*}
\bigg\|\sup_{ u\in[1,2]}\bigg(\sum_{k\in J_{i}\bigcap E_{v}}\big|A_{2^{v}u,l_{0}-l}P_{k_1}^{1}P_{k}^{2}g\big|^{2}\bigg)^{1/2}\bigg\|_{p}\lesssim (l+1)^{5}\|g\|_{p}.
\end{align*}
Note that
\begin{align*}
P_{k_1}^{1}P_{k}^{2}g=P_{k_1}^{1}P_{k}^{2}f ~ \textrm{for~ all} ~k\in J_{i}\bigcap E_{v}, \ \ \textrm{and} \ \ \ \|g\|_{p}\lesssim \bigg\|\bigg(\sum_{k\in J_{i}\bigcap E_{v}}\big|\tilde{P}_{k_1}^{1}\tilde{P}_{k}^{2}f\big|^{2}\bigg)^{1/2}\bigg\|_{p}.
\end{align*}
For all $i=1,2,3,$ we have
$$\bigg\|\sup_{ u\in[1,2]}\bigg(\sum_{k\in J_{i}\bigcap E_{v}}\big|A_{2^{v}u,l_{0}-l}P_{k_1}^{1}P_{k}^{2}f\big|^{2}\bigg)^{1/2}\bigg\|_{p}\lesssim (l+1)^{5}\bigg\|\bigg(\sum_{k\in J_{i}\bigcap E_{v}}\big|\tilde{P}_{k_1}^{1}\tilde{P}_{k}^{2}f\big|^{2}\bigg)^{1/2}\bigg\|_{p}.$$
Then \eqref{a41} follows by the triangle inequality.
{\bf Acknowledgements.} The authors are very grateful to Shaoming Guo for his constant support and many valuable discussions. The authors thank Lixin Yan and Liang Song for helpful suggestions. H.Y. is supported by the Guangdong Basic and Applied Basic Research Foundation (No. 2020A1515110241).
\end{document}
\begin{document}
\title{Magnetic field effects in ultracold molecular collisions}
\author{Alessandro Volpi and John L. Bohn \cite{byline}}
\address{JILA and Department of Physics, University of Colorado, Boulder, CO}
\date{\today}
\maketitle
\begin{abstract}
We investigate the collisional stability of magnetically trapped
ultracold molecules, taking into account the influence of
magnetic fields. We compute elastic and spin-state-changing inelastic
rate constants for collisions of the prototype molecule $^{17}$O$_2$
with a $^3$He buffer gas as a function of the magnetic field and
the translational collision energy. We find that spin-state-changing
collisions are suppressed by Wigner's threshold laws as long as the
asymptotic Zeeman splitting between incident and final
states does not exceed the height of the centrifugal barrier
in the exit channel.
In addition, we propose a useful one-parameter
fitting formula that describes the threshold behavior
of the inelastic rates as a function of the field and
collision energy. Results show semi-quantitative
agreement of this formula with the full quantum calculations,
and suggest that it may be usefully applied to other systems as well.
As an example, we predict the low-energy rate constants
relevant to evaporative cooling of molecular oxygen.
\end{abstract}
\pacs{34.20.Cf, 34.50.-s, 05.30.Fk}
\narrowtext
\section{Introduction}
The probable success of experiments aimed at producing
magnetically trapped ultracold molecular samples depends heavily
on the effects of collisional processes. For example, paramagnetic
alkali dimers can be produced via photoassociation of ultracold
atoms \cite{Paul}, but the resulting
molecules, typically in high-lying
vibrational states, are subject to vibrational quenching
collisions \cite{balfordalPRL98,forkhabaldalPRA99}
which can release a
large amount of energy and dramatically affect the efficiency
of the cooling. Alternatively, cold molecules in their
vibrational ground states can be produced
either by thermal contact with
a cold helium buffer gas \cite{weidecguifridoynat98}
or by Stark slowing, for species
that possess an electric dipole moment
\cite{Meier,Dineen}. Collisions are
of obvious importance to buffer-gas cooling (BGC), as well as to
forced evaporative cooling (EC), which will be required to lower
the temperature of these gases further and to achieve,
for instance, Bose-Einstein condensation (BEC). Both processes require
large elastic collision rates to thermalize the gas.
So far EC has not been realized in practice for molecules, but the
success of the BGC technique for
the production of cold CaH \cite{weidecguifridoynat98}
and PbO \cite{egoweipatfridoyPRA01} molecules suggests
that in the near future it will be possible
to achieve BEC using cold molecules. This would
open the way for a number of new and fascinating
experiments.
In order to be magnetically trapped, atoms or molecules must
be in a weak-field seeking state, i.e. a state whose
energy increases with the strength of the magnetic field.
For each trappable weak-field seeking state
there is in general a lower-energy untrapped strong-field
seeking state, in which the molecules experience a force
away from the center of the trap. Collisions can drive transitions
between trapped and untrapped states. These ``bad'' collisions can cause
heating or atom loss. It is therefore important to assess
the rate constants for the inelastic collisions.
In a series of previous papers the resilience of molecular oxygen
against spin-changing collisions was investigated, in collisions
of O$_2$ molecules both with a helium buffer gas \cite{bohPRA00}, and with
other O$_2$ molecules \cite{avdbohPRA01}. These studies found that
spin-changing rates due to spin-rotation coupling could be quite large.
However, in the case of the $^{17}$O$_2$ molecule, where
in the limit of zero field the only allowed
exit channels are energetically degenerate with
the incident channels, spin-flipping
transitions require boosting the centrifugal angular
momentum from $L$ = 0 to $L$ = 2, meaning that these
processes are
strongly suppressed by Wigner's threshold laws
at collision energies smaller than
the height of the exit channel centrifugal barrier.
The calculations in Refs. \cite{bohPRA00,avdbohPRA01} considered only
the case of a vanishing external magnetic field, a situation that
obviously does not apply to experiments
that trap molecules using spatially inhomogeneous magnetic fields.
The present paper therefore explores the role that fields play
in determining spin-changing
collision rates. As we demonstrate, the presence of a magnetic field
causes an asymptotic Zeeman splitting between incident and
exit channels, thus raising the collision energy relative to
the centrifugal barrier in the exit channel
and removing the Wigner suppression.
Studies of spin-changing ultracold collisions in the presence of an
external magnetic field have been performed so far only for
atomic species
\cite{tieverstoPRA93,moeverPRA96,boemoeverPRA96}.
In this paper, we present a detailed dynamical study
at cold and ultracold temperatures for the
atom-diatom system $^{17}$O$_2$ $-$
$^3$He in a field. The basic model is described
in Sec. II. In Sec. III we calculate elastic and inelastic rate constants
for collisions involving the lowest-lying trappable state of $^{17}$O$_2$
over a wide range of field values (from 0 up to 5000 gauss), then discuss
the dependence of the rates on collision energy for several representative
values of the field. This system is of direct relevance to
BGC of molecular oxygen. Generally, it allows us to quantify the
removal of the Wigner-law suppression as the field increases in
strength. On this basis we determine a simple one-parameter fitting
formula that reproduces the trend with field and energy of the loss rates.
In Sec. IV we use this formula to extend previous results on O$_2$-O$_2$
collisions to estimate the influence of the field on EC of this system.
\section{Theory}
As mentioned in the Introduction, we will consider
in this paper molecules consisting of two
$^{17}$O atoms, whose
nuclear spin $i$ is equal to $5/2$. We
assume that total spin
{\bf I} = {\bf i}$_1$ + {\bf i}$_2$ is conserved in the collision
and polarized to its maximum value {\bf I} = 5,
implying that the even molecular
rotational states $N$ are separated from the odd ones
\cite{Mizushima}.
We limit the discussions
of this paper to the ``even-$N$'' manifold
of molecular states, which is
more appealing for cooling purposes
\cite{bohPRA00,avdbohPRA01} having a spin 1
paramagnetic ground state.
The electronic spin {\bf S} of the O$_2$ molecule has
magnitude $S$ = 1 in the electronic ground state
$^3\Sigma_g^-$ we are concerned with throughout this paper.
The angular momentum {\bf S} is coupled to the molecular
rotation angular momentum {\bf N} to give {\bf J}, the total
molecular angular momentum, which assumes the values
$N-1$, $N$ and $N+1$ for $N$ $>$ 0 and is 1 for $N$
= 0. The Hamiltonian operator ${\bf \hat{H}}_{{\rm O}_2}$
for molecular oxygen in the presence of an external magnetic
field $B$ can be written as:
\begin{equation}
\label{O2Ham}
{\bf \hat{H}}_{{\rm O}_2} =
B_e {\bf \hat{N}}^2 + {\bf \hat{H}}_{fs} + {\bf \hat{H}}_{B}
\end{equation}
where $B_e$ is the rotational constant.
The $^{17}$O$_2$ molecule is considered to be a rigid rotor,
with internuclear distance frozen to the equilibrium
value of $r_0$ = 2.282 bohr (the rigid rotor model
has been shown to be very accurate for this system
at the investigated collision energies
\cite{volboh01}).
The fine-structure Hamiltonian ${\bf \hat{H}}_{fs}$
and the Hamiltonian ${\bf \hat{H}}_{B}$
for the interaction of the molecule with the external
magnetic field follow the treatment
in Ref. \cite{fremildeslurJCP70}, disregarding
the molecular hyperfine interaction.
The fine structure Hamiltonian is given as \cite{fremildeslurJCP70}
\begin{equation}
\label{Hfs}
{\bf \hat{H}}_{fs} = \left(\frac{2}{3}\right)^{1/2}
\lambda T^2 ({\bf \hat{S}},{\bf \hat{S}}) \cdot
T^2(\vec{\alpha},\vec{\alpha}) +
\gamma {\bf \hat{N}}\cdot {\bf \hat{S}}
\end{equation}
where $\vec{\alpha}$ is a unit vector parallel
to the molecular axis and $T^2$ is a second-rank
tensor \cite{vanRMP51,tinstrPR55}.
The fine structure parameters $\gamma$ and $\lambda$
have been taken from Ref. \cite{Cazzoli}, where they
have been determined by microwave spectroscopy.
The interaction of the field with the electronic
spin can be expressed as
\cite{curMP65,carlevmilACP70}:
\begin{equation}
\label{HH}
{\bf \hat{H}}_{B} = g \cdot \mu_0 {\bf \hat{S}} \cdot
{\bf \hat{B}}
\end{equation}
where ${\bf \hat{B}}$ indicates the external magnetic
field, $g$ is the g-factor of the electron
and $\mu_0$ is the Bohr magneton.
Following \cite{carlevmilACP70}
we ignore a small interaction between the field
and the rotational angular momentum.
The matrix elements for ${\bf \hat{H}}_{fs}$
and ${\bf \hat{H}}_{B}$ have been given
in Ref. \cite{fremildeslurJCP70}, eqs. (A5) and (A6),
for a Hund's case b basis set.
We note here that the molecular rotational quantum number $N$
is no longer strictly a good quantum number for the molecular states,
because different values of $N$
are coupled together by the fine-structure
Hamiltonian ${\bf \hat{H}}_{fs}$ and by the interaction
with the external field.
The molecular total
angular momentum quantum number $J$ is still a good
quantum number with respect to ${\bf \hat{H}}_{fs}$,
but not with respect to the field
interaction ${\bf \hat{H}}_{B}$ term.
However, its projection $M_J$ on the quantization axis
is still conserved.
Consequently, our basis functions
should be labeled as $| n M_J \rangle$, where
$n$ is a shorthand index denoting the pair of
quantum numbers ($N, J$) in the field dressed
basis \cite{Mizushima}. However, the coupling between
different $N$'s is weak (the fine-structure coupling
is small compared to the rotational separation)
and $N$ can be considered ``almost'' a good quantum number.
Similarly, $J$ is also approximately good for
laboratory-strength magnetic fields,
so that we can use without confusion the label
$| N J M_J \rangle$.
Magnetic trapping is strongly related to the
behavior of the molecules in a magnetic field.
The low-energy
Zeeman levels of oxygen are displayed in Fig. 1 for
the even-$N$ species. (Throughout this paper
we report energies in units of Kelvin by dividing by the Boltzmann
constant $k_B$. These units are related to
wave numbers via 1 K = 0.695 cm$^{-1}$).
In order to be trapped in the usual magnetic traps
a molecule must be in a weak-field-seeking
state, i.e., one whose energy rises with increasing magnetic
field strength. Thus the state $| N\;J\;M_J \rangle$
= $| 0\;1\;1 \rangle$
is the lowest-lying trappable state of the even-$N$ manifold,
and this is the
state on which we focus our attention below. This state is indicated
by a heavy line in Figure 1. Higher-lying states with $N$ $\ge$ 2
are energetically forbidden at low temperatures.
It is clear from Fig. 1 that for any trapped state there is
an untrapped, strong-field-seeking Zeeman state at a lower
energy. These states are not merely
untrapped but antitrapped, experiencing a force away from
the trapping region.
In a magnetic trap
collisions with buffer gas atoms,
or more generally with other molecules, will therefore
ultimately deplete the trap of its molecular population.
The time available for cooling processes like BGC or EC,
as well as the lifetime of a molecular Bose-Einstein condensate,
is therefore limited and knowledge of the rate constants
for spin-flipping collisions is essential to predict their
feasibility.
In Ref. \cite{bohPRA00} the theoretical
framework for atom - diatom scattering was
derived in the limit of zero external field,
along the lines of the model originally due to Arthurs
and Dalgarno \cite{Arthurs,Child}, and properly modified
to incorporate the electronic spin of the oxygen molecule.
Here, the formulation of the scattering problem is further extended
to account for the interaction with the external magnetic field.
The full Hamiltonian operator describing the He - O$_2$ collision is
given by
\begin{equation}
\label{totalH}
{\bf \hat{H}} = -{\hbar^2 \over 2 \mu} \left[ {d^2 \over dR^2}
-{ {\bf \hat{L}}^2 \over R^2} \right] + {\bf \hat{H}}_{{\rm O}_2} + V(R,\theta)
\end{equation}
after multiplying the wave function by $R$ in order
to remove first derivatives.
Here $\mu$ is the reduced mass for the He-O$_2$ system,
$R$ is the modulus of the Jacobi vector joining the atom
to the molecule's center-of-mass, ${\bf \hat{L}}^2$ is the
centrifugal angular momentum operator, and ${\bf \hat{H}}_{{\rm O}_2}$
is the molecular oxygen Hamiltonian defined in (\ref{O2Ham}).
The potential term $V$, depending on both the Jacobi
distance $R$ and on the angle $\theta$ that the molecule's axis makes
with respect to ${\bf R}$, accounts for the He - O$_2$ interaction.
We use the {\it ab initio} potential energy surface
(PES) of Cybulski et al. \cite{Cyb}, which
approximates the true well depth to within $\sim$ 20\%.
The full multichannel calculation requires casting $V(R,\theta)$
in an appropriate angular momentum basis.
Our field dressed basis for close-coupling calculations is then
\begin{equation}
\label{basis}
|{\rm O}_2(^3\Sigma_g^-) \rangle |{\rm He}(^1S) \rangle
| N J M_J L M_L {\cal M} \rangle
\end{equation}
where the electronic spin quantum number $S$
is not explicitly indicated, since it is always
equal to 1 in the problem treated here.
The quantum number $L$ stands for the partial wave
representing the rotation of the molecule and the He atom
about their common center of mass, $M_L$ is the projection
of ${\bf \hat{L}}$ onto the laboratory axis, and ${\cal M}$
is the laboratory projection of the total angular momentum,
${\cal M} = M_J + M_L$.
At the collision energies of interest we assume that
the oxygen electronic state and
the helium atom state are preserved and therefore we
suppress the first two kets in equation (\ref{basis}) in the following.
We note here that, at variance with the formulation
in the zero field limit, the total angular momentum of the
system ${\bf {\cal J}}$ (equal to {\bf N} + {\bf S} + {\bf L})
is no longer a good quantum number, because different
${\cal J}$'s are coupled together by the interaction
with the external field.
This means that the dynamical problem is no longer
factorizable for different values of ${\cal J}$,
thus requiring larger numbers of channels to be treated
simultaneously. The problem is still factorizable
for ${\cal M}$, but in general the number of channels
to be included for each calculation is much larger
than in the previous case. Numerical details of
the calculations will be given in the next section.
The coupled-channel
equations are then propagated using a log-derivative
method \cite{johJCP73} and
solved subject to scattering boundary
conditions to yield a scattering matrix $S$:
\begin{equation}
\label{Smatrix}
\langle N J M_J L M_L | S({\cal M})
| N^{\prime} J^{\prime} M_J^{\prime} L^{\prime} M_L^{\prime} \rangle
\end{equation}
As already noted, the projection of the total angular momentum
${\cal M}$ is still a good quantum number, implying that
$M_J$ $+$ $M_L$ $=$ $M_J^{\prime}$ $+$ $M_L^{\prime}$.
Note that in general each of the quantum numbers $N$, $J$, $M_J$,
$L$, and $M_L$ is subject to change in a collision,
consistent with conserving ${\cal M}$.
Following Ref. \cite{bohPRA00}, the state-to-state
cross sections are given by:
\begin{equation}
\label{crossec}
\sigma_{N J M_J \rightarrow N^{\prime} J^{\prime} M_J^{\prime}} =
{\pi \over k_{N J M_J}^2 }
\sum_{LM_L L^{\prime} M_L^{\prime}}
|\langle N J M_J LM_L | S-I |
N^{\prime} J^{\prime} M_J^{\prime} L^{\prime}M_L^{\prime}
\rangle |^2
\end{equation}
and the corresponding rate coefficients
are given by
\begin{equation}
\label{ratek}
K_{N J M_J \rightarrow N^{\prime} J^{\prime} M_J^{\prime}}
= v_{N J M_J}
\sigma_{N J M_J \rightarrow N^{\prime} J^{\prime} M_J^{\prime}}
\end{equation}
where $v_{N J M_J}$ is the relative velocity of the collision
partners before the collision. For notational convenience we will
in the following refer to collisions that preserve the incident
molecular quantum numbers as ``elastic,'' and those that change the
quantum numbers as ``loss.''
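For illustration, the short sketch below (in Python; it is not the coupled-channel code used in this work, and the reduced mass and summed $T$-matrix element are placeholder values) shows how a summed $|\langle i|S-I|f\rangle|^{2}$ is turned into a cross section and a rate constant according to (\ref{crossec}) and (\ref{ratek}).
\begin{verbatim}
# Minimal Python sketch (not the production scattering code): converts a
# summed squared T-matrix element into the state-to-state cross section and
# rate constant defined above.  Reduced mass and t2_sum are illustrative.
import numpy as np

hbar = 1.054571817e-34      # J s
amu  = 1.66053906660e-27    # kg
kB   = 1.380649e-23         # J/K

def rate_constant(E_coll_K, t2_sum, mu_amu):
    """E_coll_K: collision energy in K; t2_sum: sum over L, M_L, L', M_L'
    of |<N J M_J L M_L|S - I|N' J' M_J' L' M_L'>|^2; mu_amu: reduced mass."""
    mu = mu_amu * amu
    k  = np.sqrt(2.0 * mu * E_coll_K * kB) / hbar   # incident wave number, 1/m
    sigma = np.pi / k**2 * t2_sum                   # cross section, m^2
    v = hbar * k / mu                               # relative velocity, m/s
    return v * sigma * 1e6                          # rate constant, cm^3/s

print(rate_constant(1e-6, 1e-4, 2.76))   # ~2.76 amu for 3He-O2 (illustrative)
\end{verbatim}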
\section{Results}
In this section we consider elastic and state-changing
(inelastic) rate constants
for the incident channel
$| N\;J\;M_J \rangle$ = $|0\;1\;1 \rangle$.
At the investigated collision energies
(from 1 $\mu$K up to 10 K) there are two open inelastic
channels, namely $| N\;J\;M_J \rangle$ = $|0\;1\;0 \rangle$ and
$|0\;1\;-1 \rangle$, both of which are untrapped (see Fig. 1).
\subsection{Magnetic field dependence}
We begin by computing rate constants for the
elastic and spin-flipping
transitions $| 0\;1\;1 \rangle$ $\rightarrow$
$| 0\;1\;-1 \rangle$ and
$| 0\;1\;1 \rangle$ $\rightarrow$ $| 0\;1\;0 \rangle$
in the low-field limit.
Results in this section refer to 1 $\mu$K collision energy and
are converged using
partial waves up to $L$ = 6 and including the rotational
states $N$ = 0, 2, 4 and 6 for the oxygen molecules.
The maximum value of $R$ to which the coupled-channel equations
are propagated depends on the strength of the field,
ranging from 600 bohr for the smallest fields
to 450 bohr for the highest. These parameters ensure
rate constants converged to within less than 5 \%,
which is adequate for our purposes.
In principle, the scattering matrix should be determined for each
possible value of the projection of the total angular
momentum ${\cal M}$. However, we know that s-wave collisions
dominate the incident channel at ultralow collision energies, which
corresponds for our incident channel to the value ${\cal M}=1$.
We have verified that including in the calculation only the ${\cal M}$ = 1
channels changes the results by less than 1 \%
at $\mu$K energies; in this section
we therefore include only this contribution. The number of channels
to be propagated with the convergence parameters given above
is then only 205.
Fig. 2 shows elastic and inelastic rate constants
at energy $E=1\mu$K and for low values of the field.
The inelastic rate constants are nonzero even at zero field,
as shown in Ref. \cite{bohPRA00}. However, in this limit
the final states are degenerate with the initial state, and
inelastic transitions are strongly
suppressed by the presence of a $d$-wave centrifugal barrier
in the exit channel, whose height is about 0.59 K.
This effect is able to suppress
the molecule loss,
at least as long as the collision energy does
not exceed the barrier height \cite{avdbohPRA01}.
As soon as a field is applied,
the thresholds are no longer degenerate in energy,
so that the energy in the exit channel is not as far
below the centrifugal barrier.
As a consequence, inelastic transitions are no
longer as strongly suppressed, even in the limit of very
low collision energy. Rather, they increase dramatically even
in a weak field, with rates being boosted
by 5 or 6 orders of magnitude in a 1 gauss field. On the other hand,
elastic scattering is nearly unchanged by the field.
This sudden increase of the inelastic
transition rates can be reproduced semiquantitatively by applying
the distorted wave Born approximation
(DWBA) \cite{Child}, as has been successfully
done for the magnetic dipolar interaction
of cold alkali atoms \cite{tieverstoPRA93,moeverPRA96}
as well as in a number of problems in
cold collisions \cite{bohjulPRA99,forbaldalhaghelPRA01}.
The first-order DWBA is a simple two-state perturbative approximation
applicable in cases where inelastic scattering is weak in comparison
with elastic scattering. The DWBA expression for the off-diagonal
K-matrix elements is:
\begin{equation}
\label{Kmatr}
K_{if} = - \pi \; \int_0^{\infty} f_i(R) V_{if} (R)
f_f(R) dR
\end{equation}
where $f_i$ and $f_f$ represent the
energy-normalized scattering wave functions
of the initial and final states, calculated on the diabatic
potentials corresponding to the states involved in the
inelastic transition. $V_{if}$ is the corresponding
diabatic coupling term of the Hamiltonian (\ref{totalH}).
The first-order DWBA result is
shown as a dotted line in Fig. 2.
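To make the structure of (\ref{Kmatr}) concrete, the following sketch (again in Python, and not the DWBA code used to produce Fig. 2) evaluates the radial integral using free-particle radial functions $\sqrt{k}\,j_{L}(kR)$ and a model $C_{6}/R^{6}$ coupling; all numerical inputs are illustrative.
\begin{verbatim}
# Rough sketch of the off-diagonal DWBA K-matrix element defined above,
# with model radial functions ~ sqrt(k) j_L(kR) and a model C6/R^6
# diabatic coupling.  C6, cutoffs and wave numbers are placeholders.
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

def K_dwba(k_i, k_f, L_i=0, L_f=2, C6=1.0, R_min=5.0, R_max=600.0):
    f_i = lambda R: np.sqrt(k_i) * spherical_jn(L_i, k_i * R)
    f_f = lambda R: np.sqrt(k_f) * spherical_jn(L_f, k_f * R)
    V_if = lambda R: -C6 / R**6          # model diabatic coupling
    val, _ = quad(lambda R: f_i(R) * V_if(R) * f_f(R), R_min, R_max, epsabs=0)
    return -np.pi * val

# Near threshold the s -> d matrix element grows as k_f**(5/2):
for k_f in (1e-3, 2e-3, 4e-3):
    print(k_f, K_dwba(1e-4, k_f))
\end{verbatim}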
The DWBA also yields information on the threshold dependence of
the loss rates on energy and field. To see this, first note that
the spin-changing processes we are considering are strongly dominated at
low collision energy by s-waves in the incident channel, and
by d-waves in the exit channel. This change of partial wave is
necessary to conserve angular momentum during a collision
that changes the molecule's spin.
For small values of the magnetic field (for which the
Zeeman splitting does not exceed the height of the
exit channel centrifugal barrier) the exit channel is still
in the threshold regime, whereby the wave functions $f_i$ and $f_f$
in (\ref{Kmatr}) can be approximated by the small-argument limit of
energy-normalized spherical Bessel functions,
\begin{equation}
\label{bessel}
f_i \propto \sqrt{ k_i} j_{L_i}(k_i R) \propto (k_iR)^{L_i+1/2},
\;\;\;\;\;
f_f \propto \sqrt{ k_f} j_{L_f}(k_f R) \propto (k_f R)^{L_f + 1/2},
\end{equation}
where $k_i$ and $k_f$ are the incident and final wave numbers
and $L_i$ and $L_f$ are the incident and final partial waves.
Assuming a small-$R$ cutoff to ensure convergence of the integral
in (\ref{Kmatr}) with respect to the $1/R^6$ singularity in the coupling
potential $V_{if}(R)$, the energy dependence of the K-matrix element is
\begin{equation}
\label{prop}
K_{if} \propto k_{i}^{L_i + 1/2} k_{f}^{L_f + 1/2}
\end{equation}
By considering the relationship between $K_{if}$
and the effective rate constant
$K_{N J M_J \rightarrow N^{\prime} J^{\prime} M_J^{\prime}}$, it is
straightforward to show that the rate constant
behaves approximately as
\begin{equation}
\label{propE}
K_{N J M_J \rightarrow N^{\prime} J^{\prime} M_J^{\prime}}
\propto E^{L_i} (E + \Delta M_J g \mu_0 B )^{L_f + 1/2}
\end{equation}
where $E$ is the collision energy, and we have taken
into account that the final kinetic energy in the exit channel
is incremented by an amount $\Delta E_B$
= $\Delta {M_J}$ $g$ $\mu_0 B$
corresponding to the linear Zeeman shift.
$\Delta {M_J}$
(= $M_J$ $-$ $M_J^{\prime}$)
stands for the difference between the initial and final
values of $M_J$ in the two channels involved
in the transition. The actual value of $\Delta E_B$ is modified by
the quadratic Zeeman shift, but the linear approximation is
adequate for obtaining a simple fitting formula. In the
present case these shifts amount to less than 10\% changes in
the approximated rate constants.
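For completeness, we sketch the step from (\ref{prop}) to (\ref{propE}): using the standard threshold relations $\sigma_{if}\propto|K_{if}|^{2}/k_{i}^{2}$ and $K\propto v_{i}\,\sigma_{if}\propto k_{i}\,\sigma_{if}$, together with $k_{i}^{2}\propto E$ and $k_{f}^{2}\propto E+\Delta M_{J}\,g\,\mu_{0}B$, one finds
\begin{equation*}
K_{N J M_J \rightarrow N^{\prime} J^{\prime} M_J^{\prime}}
\propto k_{i}\,\frac{|K_{if}|^{2}}{k_{i}^{2}}
\propto k_{i}^{2L_{i}}\,k_{f}^{2L_{f}+1}
\propto E^{L_{i}}\,\bigl(E+\Delta M_{J}\,g\,\mu_{0}B\bigr)^{L_{f}+1/2}.
\end{equation*}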
From eq. (\ref{propE}), considering that in our case
$L_i$ = 0 and $L_f$ = 2, a simple expression for the
rate constants can be derived:
\begin{equation}
\label{rateK}
K_{N J M_J \rightarrow N^{\prime} J^{\prime} M_J^{\prime}}
= K_{0} \left( \frac{E + \Delta M_J g \mu_0 B}{E_0}
\right)^{5/2}
\end{equation}
where $K_{0}$ represents an overall scaling constant
and $E_0$ is conveniently chosen as the height of the centrifugal barrier
in the exit, d-wave channel.
In the limit of very low collision energy,
the $\Delta M_J g \mu_0 B$ term obviously dominates over $E$,
leading to a nonzero
rate constant, as is the case for exothermic collisions.
This simple expression allows us to interpret the
threshold behavior of the rates with the field,
explaining the $B^{5/2}$ power-law dependence
found in our calculation and shown explicitly in
the bilogarithmic plot of the rate constants (Fig. 3).
Here the dashed lines represent a fit to the rate constants in the limit
of zero magnetic field, yielding coefficients
$K_{0}$ = 2.73 $\times$ 10$^{-14}$ cm$^3$ sec$^{-1}$
and $K_{0}$ = 1.45 $\times$ 10$^{-14}$ cm$^3$ sec$^{-1}$
for the transition to the final states $| 0\;1\;-1 \rangle$
and $| 0\;1\;0 \rangle$, respectively. Apart from zeros in the
actual rate constants, the overall trend is indeed
$K_{N J M_J \rightarrow N^{\prime} J^{\prime} M_J^{\prime}}
\propto B^{5/2}$. The zeros in the real rates arise from
interferences between the s- and d-wave radial wave functions, as
we have verified qualitatively by the DWBA. Nevertheless, the simple
one-parameter expression (\ref{rateK}) provides a reasonable upper bound to
the complete calculation which, it will be recalled, requires
a calculation involving 205 coupled channels.
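In practice (\ref{rateK}) is trivial to evaluate; a minimal Python sketch, with the fitted $K_{0}$ and $E_{0}$ quoted above and the Zeeman term converted to kelvin using the standard Bohr-magneton value and an electron g-factor of $\approx 2$ (our own bookkeeping, not taken from the text), reads:
\begin{verbatim}
# Sketch of the one-parameter fit discussed above:
# K = K0 * ((E + dMJ * g * mu0 * B) / E0)**2.5, with K0 and E0 from the
# text; the K/gauss conversion and g-factor are standard-value assumptions.
def K_fit(E_K, B_gauss, dMJ=2, K0=2.73e-14, E0=0.59, g=2.002):
    muB_K_per_gauss = 6.717e-5           # Bohr magneton / k_B, in K/gauss
    return K0 * ((E_K + dMJ * g * muB_K_per_gauss * B_gauss) / E0) ** 2.5

# e.g. the |0 1 1> -> |0 1 -1> loss rate at 1 microkelvin and 1 gauss:
print(K_fit(1e-6, 1.0))
\end{verbatim}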
The simple formula (\ref{rateK}) holds, of course, only when both
incident and final channels are in the threshold regime. Assuming
low incident energies, this restriction therefore limits the size of
magnetic field for which (\ref{rateK}) applies. Namely, this expression
is only useful when the Zeeman energy splitting between
incident and final states remains smaller than
the height of the d-wave centrifugal barrier. For the channels
considered here, these fields are 2430 gauss and 4860 gauss for
the $| 0\;1\;-1 \rangle$ and $| 0\;1\;0 \rangle$ final states, respectively;
vertical arrows in Fig. 3 indicate these field values.
Relation (\ref{rateK}) serves as a useful quick fitting formula
for data at smaller fields,
and allows us to generalize the results obtained
in this paper also to different systems.
Possible applications are discussed in section IV.
When stronger values of the magnetic field are
considered, the DWBA is no longer
able to reproduce the full calculation,
because the coupling is no longer a weak perturbation.
The strong field interaction
mixes different channels, leading to a much more
complicated picture which cannot be explained
in terms of a simple two-state model. However, in this limit the
loss rate constants seem to be unacceptably large anyway, owing to
the effectiveness of spin-rotation coupling in changing spins.
For example, for $B$ $\sim$ 4800 gauss, the inelastic
rate constant for the $| N\;J\;M_J \rangle$
= $| 0\;1\;-1 \rangle$ exit channel
exceeds the elastic one. This indicates
that, in the high-field limit, fine-structure-changing
collisions represent a potential limitation
for the success of collisional cooling processes.
\subsection{Collision energy dependence}
In this section, the dependence of the elastic and inelastic
rate constants on the collision energy is analyzed
in the zero-field limit and for three representative
values of the field, namely 10, 200 and 4500 gauss.
In Fig. 4(a), we report our results for
collision energies from 1 $\mu$K to 1 K, for
the four field values mentioned above.
Elastic scattering, which is largely determined by s-waves in both
incident and final channels, is weakly affected
by the strength of the field. For spin-changing collisions,
however, the energy dependence changes dramatically, in
accordance with (\ref{rateK}), which is shown in the figure using
dashed lines. Again the trends are well-represented,
although the formula overestimates the rates due to the
zeros in the real rates described in the previous section.
As a general trend, we observe that
when the field is increased, the low-energy
inelastic rate constants are substantially
pushed up towards the elastic ones,
but at higher energies the rates
are less sensitive to the field.
This is better illustrated in Fig. 4(b), where
the total loss rate constants (that is,
the sum of the inelastic rates in the
two exothermic channels) for the different
values of the field are plotted together
with the elastic channel results in zero field
(which, as stressed before, are essentially
independent of the $B$ value).
In the range of collision energies from 1$\mu$K to 1 K
and for low magnetic fields
the rates for elastic collisions remain significantly
higher than the rates for spin flipping. In particular,
at buffer-gas-cooling energies of $\sim 1$K and below, the
inelastic rate constants are in the $10^{-14}$ cm$^3$/sec range,
and remain so even in the presence of a field. This
result verifies the suitability of the
$^{17}$O$_2$ molecule for BGC. Given the comparatively small
uncertainties in the PES \cite{Cyb}, the results
shown here are probably fairly realistic for the He-O$_2$
system.
We have continued the analysis in
the range of collision energies from 1 to 10 K.
We note that for energies larger than the height
of the centrifugal barrier, the approximation of
including only ${\cal M}$ = 1 in the calculation
is no longer accurate to within a few percent,
as was the case at lower energies. We checked
that in this energy range the full
calculations including all the possible ${\cal M}$
values provide results which
differ (in the worst case) by about a
factor of 3 for the inelastic channels and by
about a factor of 5 for the elastic one.
The full calculation is computationally very expensive
for $B$ $>$ 0, as opposed to the case of zero field
where a total-${\cal J}$ basis can be adopted.
Results indicate that
inelastic rates for high collision energies are
not much lower than elastic rates, and of course
become higher still at energies where resonances exist.
At collision energies above 1K both Feshbach and shape resonances
appear in the cross sections, as noted in \cite{bohPRA00}. We find that
these resonances move somewhat as a function of magnetic field.
Nevertheless, they are sufficiently narrow that they are
completely ``washed out'' by thermal averaging in a gas. We therefore
present these results as a function of temperature rather than energy.
To this end we assume a Maxwellian velocity distribution of
the collision partners characterized by a kinetic
temperature $T$. The thermally averaged rate constants
are then expressed as:
\begin{equation}
\label{thermalav}
\bar{K}(T) = \Bigl(\frac{8k_{B}T}{\pi \mu}\Bigr)^{1/2}
\frac{1}{(k_{B}T)^{2}}
\int_{0}^{\infty}E \sigma(E) e^{-E/k_{B}T} dE
\end{equation}
where $k_B$ is the Boltzmann constant and $\sigma(E)$
stands for the cross section.
To compute this average, the values of the cross sections
for $E$ $>$ 10 K are extrapolated from their values
at 10 K.
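A minimal numerical sketch of this thermal average (not the procedure used to produce Fig. 5; the cross-section model, grid and reduced mass are illustrative placeholders) is:
\begin{verbatim}
# Python sketch of the Maxwellian thermal average defined above; all
# inputs below are placeholders, not the data used for Fig. 5.
import numpy as np

kB  = 1.380649e-23        # J/K
amu = 1.66053906660e-27   # kg

def thermal_rate(T, sigma_of_E_K, mu_amu, E_max_K=100.0, n=4000):
    mu = mu_amu * amu
    E = np.linspace(1e-8, E_max_K, n) * kB            # energies in J
    integrand = E * sigma_of_E_K(E / kB) * np.exp(-E / (kB * T))
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E))
    return np.sqrt(8.0 * kB * T / (np.pi * mu)) / (kB * T)**2 * integral

# toy, energy-independent cross section of 1e-18 m^2; rate printed in cm^3/s
print(thermal_rate(1.0, lambda E_K: 1e-18 * np.ones_like(E_K), 2.76) * 1e6)
\end{verbatim}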
Averaged rate constants are shown in Fig. 5 for the
same set of field values as in Fig. 4.
The condition for magnetic trapping to be
successful is usually expressed as $K_{\rm el}$ $>$ 10
$K_{\rm loss}$ \cite{bohPRA00rc}, so that we can conclude
that for collisions of $^{17}$O$_2$ with $^3$He this
condition is fulfilled at least for temperatures
up to 1 K and values of the field for which the
asymptotic Zeeman splitting does not exceed the
height of the exit channel centrifugal barrier.
\section{Applications}
Even in the absence of detailed information on cold collisions of
a particular molecule, the fitting formula (\ref{rateK}) can be used
as an approximate guide to what the rates might be. For
example, we can inquire about the prospects for evaporatively cooling
$^{17}$O$_2$ molecules once they have been
successfully cooled in a first stage
of BGC. For this system, the d-wave centrifugal
barrier height is $\sim 13$ mK,
meaning that the threshold law is expected
to hold for field values smaller than $\sim$ 53 gauss
for transitions that produce one or more molecules in the $| 0\;1\;-1 \rangle$
final state.
In the case of $^{17}$O$_2$ cold collisions we have access to the zero-field
calculations of Ref. \cite{avdbohPRA01}. We have fit the energy dependence
of all the inelastic rate constants
to yield the dashed curve labeled ``$B=0$'' in Figure 6,
which represents the total loss.
The full calculation (solid line) has some additional features due to
scattering resonances near zero energy, but this will not affect our
conclusions here. Based on the zero-field fit, we use (\ref{rateK})
to estimate the loss rates in nonzero field. The general trends are the same,
namely, the rates rise sharply at low energy, but
are roughly field-independent at larger energies.
For evaporative cooling to be successful, roughly speaking, the
ratio of elastic to inelastic collision rates, $K_{\rm el}/K_{\rm loss}$, must
exceed 100 \cite{moncorsacmyawiePRL93}. For the estimated results shown
in Figure 6, this condition holds
only at energies below $\sim 1$ mK for fields as low as 10 gauss, and
not at all for near-critical fields of $\sim 50$ gauss.
Thus evaporative cooling
from an initially buffer-gas-cooled sample may prove trickier than previously
expected. On the other hand, BGC is characterized by a large number of
molecules cooled in the initial step; it is possible that a certain loss
can be tolerated, and that a final sample at sub-$\mu$K temperatures
will still hold enough molecules to reach critical phase space density
for BEC. Detailed kinetic simulations are required to determine if this is so.
Alternatively, a recent proposal suggests that NH molecules could be cooled
via Stark slowing to temperatures as low as 1 mK \cite{vanjonbetmeiPRA01}.
In this case EC may work quite well.
For many systems of interest to ultracold studies, there does not
exist any information on spin-changing collisions at low enough
temperatures. In such cases
it may be possible to make order-of-magnitude guesses anyway. For example,
suppose a rate constant is known for higher temperature collisions.
In the absence of any other information we could simply assert that
the rate has the same value at the energy $E_0$ corresponding to the
height of the centrifugal barrier. The fitting formula (\ref{rateK})
then gives the behavior of this rate at lower temperatures.
For example, the zero-field rate constant in the first panel of
Fig. 5 has a value of $10^{-11}$ cm$^3$/sec at a temperature
of 4 K. Extrapolating this value down to $E_0=0.59$ K yields
a coefficient $K_0 = 7.2 \times 10^{-14}$ cm$^3$/sec
for the total loss, just a factor
of two higher than the fit to the low-energy calculation.
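A one-line numerical check of this extrapolation (using the numbers quoted above and the bare $E^{5/2}$ threshold scaling) gives a value of the same order as the quoted $K_0$:
\begin{verbatim}
# Back-of-the-envelope check (illustrative only): scale the 4 K rate quoted
# above down to E0 = 0.59 K with the zero-field E**2.5 threshold dependence.
K_4K = 1.0e-11                    # cm^3/s, rate read off Fig. 5 at 4 K
E0   = 0.59                       # K, d-wave centrifugal barrier height
print(K_4K * (E0 / 4.0)**2.5)     # ~8e-14 cm^3/s, same order as the fit K0
\end{verbatim}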
\section{Conclusions}
One of the main aims of this paper is to understand in a broad sense
collisions of paramagnetic molecules in the limit of ultralow
temperatures in the presence of a magnetic field.
This is part of our effort to show that molecules
with nonzero spin in the lowest energy state
(such as the $^{17}$O$_2$ investigated here) can be
successfully cooled and used for BEC purposes.
Elastic and inelastic scattering in the presence of a magnetic field
for the specific system $^3$He - $^{17}$O$_2$ has been characterized
in detail. Our attention has been focused on
the lowest lying trappable state of the molecule.
This information is immediately relevant for BGC of molecular
oxygen, and suggests that the presence of the field does not
particularly hinder the BGC process. This work extends previous
predictions which referred to the oxygen molecule
in zero field \cite{bohPRA00,avdbohPRA01}, and
definitely assess the theoretical possibility for
trapping this species.
Moreover, we have illustrated the simple underlying physics of
spin-changing rates in general. For fields low enough that
both the incident and exit channels are in the threshold regime,
the rate constants vary according to the Wigner-law dependence
($E$ + $\Delta M_J g \mu_0 B$)$^{5/2}$.
This insight allows us to make estimates
of rate constants beyond the ones calculated in detail. In
particular, a small field was found to have a dramatic effect on the
evaporative cooling of $^{17}$O$_2$, which must be taken into account
in future experiments aimed at quantum degenerate molecular gases.
\acknowledgements
This work was supported by the National Science Foundation
and by NIST. We acknowledge useful discussions with A. Avdeenkov
and J. Hutson.
\begin{references}
\bibitem[*]{byline} Email: [email protected]
\bibitem{Paul} For recent reviews, see W. C. Stwalley
and H. Wang, J. Mol. Spect. {\bf 195}, 194 (1999);
J. Weiner, V. S. Bagnato, S. Zilio, and P. S.
Julienne, Rev. Mod. Phys. {\bf 71}, 1 (1999).
\bibitem{balfordalPRL98} N. Balakrishnan, R. C. Forrey,
and A. Dalgarno, Phys. Rev. Lett. {\bf 80}, 3224 (1998).
\bibitem{forkhabaldalPRA99} R. C. Forrey,
V. Kharchenko, N. Balakrishnan, and A. Dalgarno,
Phys. Rev. A {\bf 59}, 2146 (1999).
\bibitem{weidecguifridoynat98} J. D. Weinstein, R. deCarvalho,
T. Guillet, B. Friedrich, and J. M. Doyle,
Nature {\bf 395}, 148 (1998); B. Friedrich, J. D. Weinstein,
R. deCarvalho, and J. M. Doyle, J. Chem. Phys. {\bf 110},
2376 (1998).
\bibitem{Meier} H. L. Bethlem, G. Berden, and G. Meijer, Phys. Rev. Lett.
{\bf 83}, 1558 (1999).
\bibitem{Dineen} J. A. Maddi, T. P. Dineen, and H. Gould, Phys. Rev. A
{\bf 60}, 3882 (1999).
\bibitem{egoweipatfridoyPRA01} D. Egorov, J. D. Weinstein,
D. Patterson, B. Friedrich, and J. M. Doyle,
Phys. Rev. A {\bf 63}, 30501 (2001).
\bibitem{bohPRA00} J. L. Bohn, Phys. Rev. A {\bf 62}, 32701 (2000).
\bibitem{avdbohPRA01} A. V. Avdeenkov and J. L. Bohn, Phys. Rev. A
{\bf 64}, 52703 (2001).
\bibitem{tieverstoPRA93} E. Tiesinga, B. J. Verhaar, and
H. T. C. Stoof, Phys. Rev. A {\bf 47}, 4114 (1993).
\bibitem{moeverPRA96} A. J. Moerdijk and B. J. Verhaar,
Phys. Rev. A {\bf 53}, 19 (1996).
\bibitem{boemoeverPRA96} H. M. J. M. Boesten, A. J. Moerdijk,
and B. J. Verhaar,
Phys. Rev. A {\bf 54}, 29 (1996).
\bibitem{Mizushima} M. Mizushima, {\it The Theory of Rotating
Diatomic Molecules}, (Wiley, New York, 1975), p. 170.
\bibitem{volboh01} A. Volpi and J. L. Bohn, to be submitted to
Phys. Rev. A.
\bibitem{fremildeslurJCP70} R. S. Freund, T. A. Miller,
D. De Santis, and A. Lurio,
J. Chem. Phys. {\bf 53}, 2290 (1970).
\bibitem{vanRMP51} J. H. Van Vleck, Rev. Mod. Phys. {\bf 23}, 213 (1951).
\bibitem{tinstrPR55} M. Tinkham and M. W. P. Strandberg,
Phys. Rev. {\bf 97}, 937 (1955).
\bibitem{Cazzoli} G. Cazzoli and C. Degli Esposti, Chem. Phys. Lett.
{\bf 113}, 501 (1985).
\bibitem{curMP65} R. F. Curl, Mol. Phys. {\bf 9}, 585 (1965).
\bibitem{carlevmilACP70} A. Carrington, D. H. Levy,
and T. A. Miller, Advan. Chem. Phys. {\bf 18}, 149 (1970).
\bibitem{Arthurs} A. M. Arthurs and A. Dalgarno, Proc. Roy. Soc.
{\bf A256}, 540 (1960).
\bibitem{Child} M. S. Child, {\it Molecular Collision Theory}
(Mineola, Dover Publications, 1996), p. 100.
\bibitem{Cyb} S. M. Cybulski {\it et al.}, J. Chem. Phys.
{\bf 104}, 7997 (1996).
\bibitem{johJCP73} B. R. Johnson, J. Comp. Phys. {\bf 14},
445 (1973).
\bibitem{bohjulPRA99} J. L. Bohn and P. S. Julienne,
Phys. Rev. A {\bf 60}, 414 (1999).
\bibitem{forbaldalhaghelPRA01} R. C. Forrey, N. Balakrishnan,
A. Dalgarno, M. R. Haggerty, and E. J. Heller,
Phys. Rev. A {\bf 64}, 22706 (2001).
\bibitem{bohPRA00rc} J. L. Bohn, Phys. Rev. A {\bf 61},
40702 (2000).
\bibitem{moncorsacmyawiePRL93} C. R. Monroe, E. A. Cornell,
C. A. Sackett, C. J. Myatt, and C. E. Wieman,
Phys. Rev. Lett. {\bf 70}, 414 (1993).
\bibitem{vanjonbetmeiPRA01} S. Y. T. van de Meerakker,
R. T. Jongma, H. L. Bethlem, and G. Meijer,
Phys. Rev. A {\bf 64}, 41401 (2001).
\end{references}
\begin{figure}
\caption{The lowest-energy Zeeman levels of O$_2$ for the
even-$N$ rotational manifold. These levels are usefully
labeled by the approximate rotational ($N$) and total
molecular angular momentum ($J$) quantum numbers, along with the projection $M_J$
of $J$ onto the magnetic field.
The heavy line indicates the lowest-lying
trappable state $| N\;J\;M_J \rangle$ = $| 0\;1\;1 \rangle$.}
\end{figure}
\begin{figure}
\caption{Rate constants (logarithmic scale) for collisions of
$^3$He and $^{17}$O$_2$.}
\end{figure}
\begin{figure}
\caption{Rate constants for collisions of $^3$He and
$^{17}$O$_2$.}
\end{figure}
\begin{figure}
\caption{(a) Collision energy dependence of elastic and
inelastic rate constants in the range 1$\mu$K - 1 K
(bilogarithmic scale). In each panel the corresponding
value of the magnetic field is indicated.
The solid lines refer to the full quantum
calculations
while dashed lines refer to the inelastic rate constants
calculated using relation (\ref{rateK}).}
\end{figure}
\begin{figure}
\caption{Thermally averaged elastic (solid line)
and total loss (dashed line) rates for $^3$He $-$ $^{17}$O$_2$ collisions.}
\end{figure}
\begin{figure}
\caption{Comparison between elastic and total loss rate constants
for collisions of $^{17}$O$_2$ molecules.}
\end{figure}
\end{document}
\begin{document}
\captionsetup[figure]{labelfont={bf},labelformat={default},labelsep=period,name={Fig.}}
\begin{center}
\centerline{\large\bf \vspace*{6pt} The isoparametric functions on a class of Finsler spheres\,{$^*\,$}}
\footnote {$^*\,$ Project supported by AHNSF (No.2108085MA11).
\\\indent\ \ $^\dag\,$ [email protected]
}
\centerline{Yali Chen$^1$, Qun He$^1$$^\dag\,$}
\centerline{\small 1 School of Mathematical Sciences, Tongji University, Shanghai,
200092, China.}
\end{center}
{\small
\parskip .005 truein
\baselineskip 3pt \lineskip 3pt
\noindent{{\bf Abstract:}
In this paper, we give global expressions of geodesics and isoparametric functions on a Randers sphere by navigation. We obtain isoparametric families and focal submanifolds in $(\mathbb{S}^{n}, F, d\mu_{BH})$ by Cartan-M\"{u}nzner polynomials. Furthermore, we construct some examples of closed and non-closed geodesics, isoparametric functions, isoparametric families and focal submanifolds.
\noindent{\bf Key words:}
geodesic; isoparametric hypersurface; isoparametric function; Randers sphere.}
\noindent{\it Mathematics Subject Classification (2010):} 53C60, 53C42, 34D23.}
\parskip .001 truein\baselineskip 6pt \lineskip 6pt
\section{Introduction}
~~~~The study on isoparametric hypersurfaces is an important topic in Riemannian geometry. E. Cartan began to study the isoparametric hypersurfaces in real space forms systematically\cite{C}. Then many mathematicians started working on it and made many important contributions~\cite{C1,TP,GT}. In Finsler geometry, the conception of isoparametric hypersurfaces has been introduced in~\cite{HYS}. Let $(N,F,d\mu)$ be an
$n$-dimensional Finsler manifold with volume form $d\mu$. A function $f$
on $(N,F,d\mu)$ is called \textit{isoparametric} if it is almost everywhere smooth and there are functions $\widetilde{a}(t)$ and $\widetilde{b}(t)$ such that
\begin{equation}\label{1.1} \left\{\begin{aligned}
&F(\nabla f)=\tilde{a}(f),\\
&\Delta f=\tilde{b}(f),
\end{aligned}\right.
\end{equation}
where $\nabla f$ denotes the gradient of $f$, which is defined by means of the Legendre transformation, and $\Delta f$ is a nonlinear Finsler-Laplacian of $f$ (see Section 2.2 for details). Each regular level hypersurface of an isoparametric function is called an isoparametric hypersurface. As in Riemannian geometry, we call a complete and simply connected Finsler manifold with constant flag curvature a Finsler space form. Minkowski spaces (with zero flag curvature) and Finsler spheres (with positive constant flag curvature) are two important classes of Finsler space forms~\cite{YH}. For a Finsler sphere $(\mathbb{S}^{n},F)$, if $F$ is a Randers metric, we call it a Randers sphere. The works~\cite{HYS,HYS1,HD,HDY} gave a complete classification of $d\mu_{BH}-$isoparametric hypersurfaces in Minkowski spaces, Randers space forms and Funk-type spaces, and~\cite{HCY} studied isoparametric hypersurfaces in Finsler space forms by investigating focal points, tubes and parallel hypersurfaces.
The study on isoparametric hypersurfaces is inseparable from isoparametric functions. In Riemannian geometry, M\"{u}nzner showed that an isoparametric hypersurface in $\mathbb{S}^{n}\subseteq\mathbb{R}^{n+1}$ with $g$ distinct principal curvatures is obtained by a homogeneous polynomial of degree $g$ on $\mathbb{R}^{n+1}$ satisfying the Cartan-M\"{u}nzner differential equations\cite{M1,M}. In Finsler geometry, it is still an open problem to study isoparametric functions on $\mathbb{S}^{n}$. It has been proved that the isoparametric hypersurfaces in Euclidean spheres and Randers spheres are the same, but their isoparametric functions, isoparametric families and focal submanifolds are all different, except in very special cases \cite{HDY}. From \cite{ZHC}, we know the isoparametric family is a special mean curvature flow in Finsler manifolds, which means the mean curvature flows generated by the same isoparametric hypersurface are different in general. So it is natural and meaningful to study isoparametric functions on Randers spheres and their corresponding isoparametric families.
Geodesics are important in Finsler geometry, especially in the study of isoparametric theory. It is well known that the geodesics in Minkowski spaces and Funk spaces are all straight lines, but geodesics on a Randers sphere are more complicated: they may not be closed great circles. C. Robles classified geodesics on Randers manifolds of constant flag curvature~\cite{RO}. M. Xu proved that the number of geometrically distinct closed geodesics on $(\mathbb{S}^n, F)$ is at least $\dim I(\mathbb{S}^n, F)$~\cite{XM}.
Navigation is a technique to manufacture new Finsler metrics and study their geometric properties. D. Bao, C. Robles and Z. M. Shen classified Randers metrics of constant flag curvature by navigation on Riemannian manifolds\cite{BRS}. L. Huang and X. Mo gave a geometric description of the geodesics of the Finsler metric produced from any Finsler metric and any homothetic field in terms of a navigation presentation\cite{HM12,HM1}. M. Xu, V. S. Matveev, et al discussed the correspondences of geodesics and Jacobi fields for homothetic navigation\cite{XM1}.
The local correspondences of isoparametric functions for homothetic navigation have been given in \cite{XM1}. In this paper, we give a global expression of isoparametric functions on a Randers sphere by navigation techniques, via a global expression of geodesics. Furthermore, we obtain the corresponding isoparametric families and focal submanifolds.
Let $(\mathbb S^n, F_{Q})$ be a Randers sphere, where $F_{Q}$ is a Randers metric with the navigation datum $(h, V)$, $h$ is a standard Euclidean sphere metric and $V=Qx\mid_{\mathbb S^{n}}$,$~x\in \mathbb R^{n+1}$, $Q\in o(n+1)$ such that $|V|<1$. We can prove that $V=Qx\mid_{\mathbb S^{n}}$ is a Killing vector field with respect to $h$. In fact, any Killing vector field on $(\mathbb S^{n},h)$ can be obtained in this way. Hence, we have
\begin{theo} \label{thm01}
Let $\overline{f}$ be a global isoparametric function on $(\mathbb S^n, h)$. Set $|\nabla^{h}\overline{f}|=a(\overline{f})$ and $\overline{f}(\mathbb{S}^n)=[c,d]$. Then $\psi(x)=\exp(Q\int^{\overline{f}(x)}_{t_{0}}\frac{1}{a(t)}dt)x$ is a homeomorphism from $\mathbb{S}^n$ to $\mathbb{S}^n$ and $f=\overline{f}\circ\psi^{-1}$ is a global isoparametric function on $(\mathbb S^n, F_{Q})$. Conversely, any isoparametric function $f$ on $(\mathbb S^n, F_{Q})$ which satisfies $F_{Q}(\nabla f)=a(f)$, where $a^{2}(t)\in C^{2}(f(\mathbb{S}^n))$, can be obtained in this way.
\end{theo}
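To illustrate the construction in Theorem \ref{thm01}, consider the height function $\overline{f}(x)=x_{3}$ on $\mathbb{S}^{2}$ (an illustrative choice, not part of the proof), for which $a(t)=\sqrt{1-t^{2}}$ and, taking $t_{0}=0$, $\int_{0}^{t}\frac{ds}{a(s)}=\arcsin t$, so that $\psi(x)=\exp(\arcsin(x_{3})\,Q)x$. The following short numerical sketch (in Python, with an arbitrary admissible $Q$) computes the image $\psi(\overline{f}^{-1}(t))=f^{-1}(t)$ of a level circle and checks that it stays on $\mathbb{S}^{2}$:
\begin{verbatim}
# Numerical illustration (our own sketch) of the theorem above with the
# height function fbar(x) = x_3 on S^2: psi(x) = exp(arcsin(x_3) Q) x, and
# the level set f^{-1}(t) of f = fbar o psi^{-1} is psi({x_3 = t}).
import numpy as np
from scipy.linalg import expm

Q = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.5],
              [0.0, -0.5, 0.0]])       # Killing field V = Qx with I + Q^2 > 0

def psi(x):
    return expm(np.arcsin(x[2]) * Q) @ x

t = 0.3
theta = np.linspace(0.0, 2.0 * np.pi, 6)
level_fbar = np.stack([np.sqrt(1 - t**2) * np.cos(theta),
                       np.sqrt(1 - t**2) * np.sin(theta),
                       np.full_like(theta, t)], axis=1)
level_f = np.array([psi(p) for p in level_fbar])   # points of f^{-1}(t)
print(level_f)
print(np.linalg.norm(level_f, axis=1))             # all still on S^2
\end{verbatim}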
In Riemannian case, M\"{u}nzner made the association between an isoparametric function and a homogeneous polynomial.
\begin{thmA}~\cite{TP}
Let $M\subset\mathbb S^{n}\subset\mathbb R^{n+1}$ be a connected isoparametric hypersurface with $g$ principal curvatures $\lambda_{i}=\cot\theta_{i}$, $0<\theta_{i}<\pi$, with multiplicities $m_{i}$. Then $M$ is an open subset of a level set of the restriction to $\mathbb S^{n}$ of a homogeneous polynomial $\phi$ on
$\mathbb R^{n+1}$ of degree $g$ satisfying the differential equations,
\begin{equation}\label{1.4}
\left\{\begin{array}{ll}
|\nabla^E\phi|^2=g^{2}r^{2g-2},\\
\Delta^{E}\phi=cr^{g-2},
\end{array}\right.
\end{equation}
where $r=|x|$, $c=\frac{g^{2}(m_{2}-m_{1})}{2}$, and $m_{1}$, $m_{2}$ are the two distinct multiplicities. $\phi$ is called the Cartan-M\"{u}nzner polynomial of $M$, and (\ref{1.4}) are called the Cartan-M\"{u}nzner differential equations.
\end{thmA}
\begin{thmB}~\cite{TP}
Let $\phi: \mathbb{R}^{n+1}\rightarrow\mathbb{R}$ be a Cartan-M\"{u}nzner polynomial of degree $g$ and let $f$ be its restriction to $\mathbb{S}^{n}$. Then each isoparametric hypersurface
$$M_{t}=f^{-1}(t),\ \ \ \ -1<t<1$$
is connected. Moreover, $M_{+}=f^{-1}(1)$ and $M_{-}=f^{-1}(-1)$ are the focal submanifolds, respectively, and they are also connected.
\end{thmB}
Due to Theorem A, Theorem B and Theorem \ref{thm01}, we can characterize all isoparametric families of isoparametric hypersurfaces in a Randers sphere by a Cartan-M\"{u}nzner polynomial.
\begin{theo} \label{thm02}
Let $\{M_{t}\}$ be a connected isoparametric family in $(\mathbb S^{n}, F_{Q})$ with $g$ principal curvatures with multiplicities $m_{i}$. Then there exists a Cartan-M\"{u}nzner polynomial $\phi$ of degree $g$ such that $f=\phi\circ\psi^{-1}\mid_{\mathbb S^{n}}$ is an isoparametric function on $(\mathbb S^{n}, F_{Q})$, where $\psi(x)=\exp\left(\frac{1}{g}\arcsin\frac{\phi(x)}{|x|^{g}}Q\right)x$, and each $M_{t}$ is an open subset of a level set of $f$. Furthermore, the maximal connected isoparametric family and the two connected focal submanifolds can be expressed as
$$f^{-1}(t)=\{\exp(\frac{\arcsin t}{g}Q)((\cos\frac{\arcsin t}{g})x+\sin(\frac{\arcsin t}{g})(\mathbf{n}-V))\mid x\in M\},\ \ -1<t<1$$
and
$$M_{\pm}=\{\exp(\pm \frac{\pi}{2g}Q)(\cos\frac{\pi}{2g}x\pm \sin\frac{\pi}{2g}(\mathbf{n}-V))\mid x\in M\},$$
where $M=f^{-1}(0)$ and $\mathbf{n}$ is the unit normal vector of $M$ at $x$.
\end{theo}
\begin{rema}
Every isoparametric family is a special mean curvature flow. Hence, from a given isoparametric hypersurface $M$, we can construct two different mean curvature flows in $(\mathbb{S}^{n},h)$ and $(\mathbb{S}^{n},F_{Q})$ by a Cartan-M\"{u}nzner polynomial, respectively.
\end{rema}
The contents of this paper are organized as follows. In Section 2, some fundamental concepts and formulas are given. In Section 3,
we give global expressions of a one-parameter group and of geodesics on $(\mathbb S^{n},F_{Q})$. In particular, when $n=2$, the images of all geodesics can be depicted concretely. In Section 4, we characterize isoparametric functions on $(\mathbb S^{n},F_{Q})$ and obtain global expressions of isoparametric families and focal submanifolds by homogeneous polynomial functions in $\mathbb{R}^{n+1}$. Furthermore, we give some examples of isoparametric functions, isoparametric families and focal submanifolds.
\section{Preliminaries}
~~~~In this section, we will give some definitions and lemmas that will be used in the proof of our main results.
\subsection{Finsler manifolds}
~~~~Let $N$ be a manifold and let $TN=\cup_{x\in N}T_xN$ be the tangent bundle of $N$, where $T_xN$ is the tangent space at
$x\in N$. A Finsler metric is a Riemannian metric without quadratic restriction. Precisely, a function $F(x,y)$ on $TN$ is called a Finsler metric on a manifold $N$ with local coordinates $(x,y)$, where $x=(x^i)$ and $y=y^i\frac{\partial}{\partial x^{i}}$ ,
if it has the following properties:
(i)\ \ Regularity:\ \ $F(x,y)$ is $C^{\infty}$ on $TN\backslash\{0\}$;
(ii)\ \ Positive homogeneity:\ \ $F(ty)=tF(y),\ \forall t>0, y\in T_xN$;
(iii)\ \ Strong convexity:\ \ The $n\times n$ matrix $(\frac{\partial^2F^2}{\partial y^i \partial y^j}(x,y))(y\neq 0)$ is positive definite.
The fundamental form~$g$ of~$(N,F)$ is
\begin{equation*}
g=g_{ij}(x,y)dx^{i} \otimes dx^{j}, ~~~~~~~g_{ij}(x,y)=\frac{1}{2}[F^{2}] _{y^{i}y^{j}}.
\end{equation*}
For a Finsler metric $F=F(x,y)$ on a manifold $N$, the geodesics $\gamma=\gamma(t)$ of $F$ in local coordinates $(x^i)$ are characterized by
$$\frac{\textmd{d}^2x^i}{\textmd{d}t^2}+2G^i(x,\frac{\textmd{d}x}{\textmd{d}t})=0,$$
where $(x^i(t))$ are the coordinates of $\gamma(t)$ and $G^i=G^i(x,y)$ are defined by
$$G^i=\frac{g^{il}}{4}\{[F^2]_{x^ky^l}y^k-[F^2]_{x^l}\},$$
which are called \textit{the spray coefficients}. For $X=X^{i}\frac{\partial}{\partial x^{i}}\in\Gamma(TN)$, the covariant derivative of $X$ along $v=v^{i}\frac{\partial}{\partial x^{i}}\in T_{x}N$ with respect to a reference vector $w\in T_{x}N\setminus0$ is defined by
$$\nabla^{w}_{v}X(x):=\{v^{j}\frac{\partial X^{i}}{\partial x^{j}}(x)+\Gamma^{i}_{jk}(w)v^{j}X^{k}(x)\}\frac{\partial}{\partial x^{i}}.$$
The equation of geodesics can be expressed by $\nabla^{\dot{\gamma}}_{\dot{\gamma}}\dot{\gamma}\equiv0$.
Let~${\mathcal L}:TN\rightarrow T^{\ast}N$ denote the \emph{Legendre transformation}, satisfying~${\mathcal L}(\lambda
y)=\lambda {\mathcal L}(y)$ for all~$\lambda>0,~y\in TN$.
For a smooth function~$f: N\rightarrow \mathbb{R}$, the \emph{gradient vector} of~$f$ at~$x$ is defined as~$\nabla f(x)={\mathcal
L}^{-1}(df(x))\in T_{x}N$.
Set~$N_{f}=\{x\in N|df(x)\neq 0\}$ and~$\nabla^{2}f(x)=\nabla^{\nabla f}(\nabla f)(x)$ for~$x\in N_{f}$. The \emph{Laplacian} of~$f$ can be defined
by
\begin{equation}\label{2.1}
\hat{\Delta} f=\textmd{tr}_{g_{_{\nabla f}}}(\nabla^{2}f).
\end{equation}
And the \emph{Laplacian} of~$f$ with respect to the volume form~$d\mu=\sigma(x)dx=\sigma(x)dx^{1}\wedge dx^{2}\wedge\cdots\wedge dx^{n}$ can be represented as
\begin{equation}\label{2.2}
\Delta_{\sigma} f=\textmd{div}_{\sigma}(\nabla f)=\frac{1}{\sigma}\frac{\partial}{\partial x^{i}}(\sigma g^{ij}(\nabla f)f_{j})=\hat{\Delta} f-\textbf{S}(\nabla f),
\end{equation}
where
$$\textbf{S}(x,y)=\frac{\partial G^{i}}{\partial y^{i}}-y^{i}\frac{\partial}{\partial x^{i}}(\ln \sigma(x))$$
is the \emph{$\mathbf{S}$-curvature}~\cite{SZ}.
\subsection{Isoparametric functions and isoparametric hypersurfaces}
~~~~Let $f$ be a non-constant $C^1$ function defined on a Finsler manifold $(N, F, d\mu)$ and smooth on $N_{f}$. Set $J=f(N_{f})$. The function $f$ is called \textit{isoparametric (resp. $d\mu-$isoparametric)} on $(N, F, d\mu)$, where $d\mu=\sigma(x)dx$, if there exist a smooth function $a(t)$ and a continuous function $b(t)$ defined on $J$ such that (\ref{1.1}) holds for $\Delta f=\hat{\Delta} f$ (resp. $\Delta f=\Delta_{\sigma}f$), which is defined by (\ref{2.1}) (resp.\ (\ref{2.2})). All the regular level hypersurfaces $M_{t}=f^{-1}(t)$ are called a \textit{($d\mu$-)isoparametric family}, each of which is called a \textit{($d\mu$-)isoparametric hypersurface} in $(N, F, d\mu)$. If each $M_{t}$ is connected, it is a connected isoparametric family.
Since $(\mathbb{S}^n,F_{Q}, d\mu_{BH})$ is a Randers sphere with $\textbf{S}=0$, $M$ is an isoparametric hypersurface if and only if $M$ is a $d\mu_{BH}$-isoparametric hypersurface.
\subsection{Geometric correspondences for homothetic navigation}
~~~~Let $V$ be a smooth vector field on a Finsler manifold $(N, F)$. Around each $x\in N$, $V$ generates a family of local diffeomorphisms $\psi_{t}$, which is a flow in an alternative terminology and satisfies
\begin{equation}\label{0.3}
\left\{\begin{array}{ll}
\psi_{0}=id: N\rightarrow N,\\
\psi_{s}\circ\psi_{t}=\psi_{s+t}, \ \ \ \ \forall s, t\in(-\varepsilon, \varepsilon), s+t\in(-\varepsilon, \varepsilon).
\end{array}\right.
\end{equation}
$V$ is called a \textit{homothetic field} when it satisfies
$$(\psi_{t}^{\ast}F)(x,y)=F(\psi_{t}(x),\psi_{t*}(y))=e^{-2ct}F(x,y),$$
where $x\in N$, $y\in T_{x}N$ and $t\in\mathbb{R}$. The constant $c$ is the dilation of $V$. If $c=0$, $V$ is Killing~\cite{SS}.
The main technique of the navigation problem is described as follows. Suppose $F$ is a Finsler metric and $V$ is a vector field with $F(x,-V)<1$; then we can define a new Finsler metric $\tilde{F}$ by
$$F(x, y-\tilde{F}(x,y)V)=\tilde{F}(x,y), \ \ \ \ \ \ \forall x\in N,\ \ y\in T_{x}N.$$
\begin{lemm} \label{lemm33}~\cite{HM12}
Let $F=F(x,y)$ be a Finsler metric on a manifold $N$ and let $V$ be a vector field on $N$ with $F(x, -V_{x})<1$. Suppose that $V$ is homothetic with dilation $c$. Let $\tilde{F}=\tilde{F}(x,y)$ denote the Finsler metric on $N$. Then the geodesics of $\tilde{F}$ are given by $\psi_{t}(\gamma(a(t)))$, where $\psi_{t}$ is the flow of $V$, $\gamma(t)$ is a geodesic of $F$ and $a(t)$ is defined by
$$a(t):=\left\{\begin{array}{ll}
\frac{e^{2ct}-1}{2c},\ \ \ \ if\ \ c\neq0,\\
t,\ \ \ \ \ \ \ \ \ \ if\ \ c=0.
\end{array}\right.$$
\end{lemm}
If $F$ is a Randers metric with the navigation datum $(h, V)$, where $h$ is a Riemannian metric and $V$ is a Killing vector field, then we have
\begin{lemm}\label{lemm334}
For any geodesic $\overline{\gamma}(t)$ with respect to the metric $h$ satisfying $\overline{\gamma}(0)=\overline{x}$, $\gamma(t+t_{0})=\psi_{t+t_{0}}(\overline{\gamma}(t))$ is a geodesic with respect to the metric $F$ satisfying $\gamma(t_{0})=x=\psi_{t_{0}}(\overline{\gamma}(0))$.
\end{lemm}
\proof By (\ref{0.3}), $\gamma(t+t_{0})=\psi_{t+t_{0}}(\overline{\gamma}(t))=\psi_{t}(\psi_{t_{0}}(\overline{\gamma}(t)))$. Since $V$ is Killing, $\psi_{t}$ is an isometry with respect to the metric $h$. Hence, $\psi_{t_{0}}(\overline{\gamma}(t))$ is still a geodesic for the metric $h$ with $x=\psi_{t_{0}}(\overline{\gamma}(0))=\gamma(t_{0})$. Combining this with Lemma \ref{lemm33}, we complete the proof.
\endproof
\begin{lemm} \label{lemm46}~\cite{XM1}
Let $\widetilde{F}$ be the Finsler metric defined by navigation from the datum $(F,V)$ in which $V$ is a Killing vector field. Assume $x_{0}$ is a point where $F(x_{0},-V(x_{0}))<1$. Then for any normalized isoparametric function $f$ for $(F,d\mu^{F}_{BH})$ around the point $x_{0}$, the function $\widetilde{f}$ defined by $\widetilde{f}^{-1}(t)=\psi_{t}(f^{-1}(t))$ is a normalized isoparametric function for $(\widetilde{F},d\mu_{BH}^{\widetilde{F}})$ around $x_{0}$.
\end{lemm}
\section{ The geodesics of a Randers sphere}
\begin{lemm} \label{lemm31}
Let $x\in\mathbb{R}^{n+1}$, $Q\in o(n+1)$ and $h$ be a standard Euclidean sphere metric, then $V=Qx\mid_{\mathbb S^{n}}$ is a global Killing vector field on $(\mathbb S^n, h)$, and $|V|<1$ if and only if $I+Q^2$ is positive definite.
\end{lemm}
\proof Set $\widetilde{V}=Qx$ and take $X,Y\in T\mathbb{R}^{n+1}$; we have
$$\langle\nabla_{X}^{E}\widetilde{V},Y\rangle=\langle Q\nabla_{X}^{E}x,Y\rangle=\langle QX,Y\rangle=-\langle X,QY\rangle=-\langle X, Q\nabla_{Y}^{E}x\rangle=-\langle X, \nabla_{Y}^{E}\widetilde{V}\rangle,$$
where $\nabla^{E}$ is the covariant derivative in a Euclidean space. Hence, $\widetilde{V}$ is a global Killing vector field on $\mathbb{R}^{n+1}$.
Due to $\langle Qx, x\rangle=0$, we know
$V=\widetilde{V}\mid_{\mathbb S^{n}}=Qx\mid_{\mathbb S^{n}}\in T\mathbb{S}^n$. For $X,Y\in T\mathbb{S}^n$, we have
$$h(\nabla^{h}_{X}V, Y)=\langle\nabla^{E}_{X}\widetilde{V}, Y\rangle=-\langle\nabla^{E}_{Y}\widetilde{V}, X\rangle=-h(\nabla^{h}_{Y}V, X).$$
Hence, $V=Qx\mid_{\mathbb S^{n}}$ is a global Killing vector field on $(\mathbb S^n, h)$.
Furthermore, $|V|<1$ if and only if $x^TQ^TQx<1=x^Tx$, which means $x^T(I+Q^2)x>0$. This completes the proof.
\endproof
\begin{lemm} \label{lemm32}
Suppose that $\psi_{t}(x_{0})$ is the integral curve of $V=Qx\mid_{\mathbb{S}^n}$ through $x(0)=x_{0}\in\mathbb{S}^{n}$, where $t\in\mathbb{R}$. Then $\psi_{t}=\exp(tQ)\mid_{\mathbb S^n}$.
\end{lemm}
\proof Let $\widetilde{\psi}_{t}(x_{0})$ be the integral curve of $\widetilde{V}=Qx$, $\widetilde{x}(0)=x_{0}$. Due to the definition of the one-parameter local group,
$$\widetilde{x}'(t)=\widetilde{V}(\widetilde{x}(t))=Q\widetilde{x}(t).$$
Then
$$\widetilde{x}(t)=\exp (tQ)C,$$
where $C$ is a constant vector. Noting that $\widetilde{x}(0)=x_{0}$, we have
$$\widetilde{x}(t)=\widetilde{\psi}_{t}(x_{0})=\exp (tQ)x_{0}.$$
Furthermore,
$$|\widetilde{x}(t)|^2=x_{0}^{T}\exp (tQ)^{T}\exp (tQ)x_{0}=|x_{0}|^2=1,$$
so we know that $\widetilde{x}(t)$ is still on $\mathbb{S}^{n}$. From the existence and uniqueness theorem for initial value problems of ODEs, $\widetilde{\psi}_{t}(x_{0})=\widetilde{x}(t)=\psi_{t}(x_{0})\in\mathbb{S}^{n}$ is the unique integral curve through $x_{0}$. Since $\widetilde{\psi}_{t}$ is a global one-parameter group on $\mathbb{R}^{n+1}$, $\psi_{t}$ is a global one-parameter group on $\mathbb{S}^{n}$.
\endproof
Let $(\mathbb S^n, F_{Q})$ be a Randers sphere, where $F_{Q}$ is a Randers metric with navigation datum $(h, V)$, $h$ is a standard Euclidean sphere metric, $V=Qx\mid_{\mathbb S^{n}},~x\in \mathbb R^{n+1}$, $Q\in o(n+1)$ such that $I+Q^2>0$. Then we have
\begin{theo} \label{thm0}
The unit speed geodesic $\gamma(s)$ on $(\mathbb S^n, F_{Q})$ satisfying $\gamma(0)=x\in\mathbb S^n$ and $\dot{\gamma}(0)=X\in T_x\mathbb S^n$ can be expressed as
$$\gamma(s)=\exp (sQ)((\cos s)x+(\sin s)(X-V(x))).$$
\end{theo}
\proof From Lemma \ref{lemm31} and Lemma \ref{lemm32}, we know that there exist a global Killing vector field $V=Qx\mid_{\mathbb{S}^n}$ and a global flow of diffeomorphisms $\psi_{t}=\exp (tQ)\mid_{\mathbb{S}^n}$ on $\mathbb{S}^{n}$. Let $\overline{\gamma}(s)$ be a unit speed geodesic on $(\mathbb S^n, h)$ satisfying $\overline{\gamma}(0)=x$ and $\dot{\overline{\gamma}}(0)=\overline{X}$; then $$\overline{\gamma}(s)=(\cos s)x+(\sin s)\overline{X},$$ where $s\in\mathbb{R}$. From Lemma \ref{lemm33},
$$\gamma(s)=\psi_{s}(\overline{\gamma}(s))=\exp (sQ)(\overline{\gamma}(s))$$
is a local unit speed geodesic on $(\mathbb{S}^n,F_{Q})$ satisfying $\gamma(0)=x$, $\dot{\gamma}(0)=X$, where $s\in (-\varepsilon, \varepsilon)$. Obviously, $\psi_{t}$ satisfies (\ref{0.3}).
From Lemma \ref{lemm334}, we know that, whatever the initial point of $\overline{\gamma}(t)$ is, $\gamma(t)$ is a geodesic on $(\mathbb S^n, F_{Q})$. Hence, $\gamma(s)=\psi_{s}(\overline{\gamma}(s))$ is a global geodesic on $(\mathbb{S}^n,F_{Q})$, where $s\in \mathbb{R}$. By the notion of navigation, $X=\dot{\gamma}(0)=\overline{X}+V$. Hence, $$\gamma(s)=\exp (sQ)(\overline{\gamma}(s))=\exp (sQ)((\cos s)x+\sin s(X-V)),~~s\in\mathbb{R}.$$ This completes the proof of Theorem \ref{thm0}.
\endproof
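As a quick numerical sanity check of Theorem \ref{thm0} (an illustration only; the choices of $Q$, $x$ and $\overline{X}$ below are arbitrary admissible data), one can evaluate $\gamma(s)$ with a matrix exponential and verify that it stays on $\mathbb{S}^{2}$:
\begin{verbatim}
# Our own sketch: evaluate gamma(s) = exp(sQ)((cos s) x + (sin s)(X - V(x)))
# with V(x) = Qx and check that the curve remains on the unit sphere.
import numpy as np
from scipy.linalg import expm

Q = np.array([[0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0],
              [-0.5, 0.0, 0.0]])        # a = c = 0, b = 1/2; I + Q^2 > 0
x    = np.array([1.0, 0.0, 0.0])        # gamma(0) = x on S^2
Xbar = np.array([0.0, 1.0, 0.0])        # h-unit tangent vector at x
X    = Xbar + Q @ x                     # navigation: X = Xbar + V(x)

def gamma(s):
    return expm(s * Q) @ (np.cos(s) * x + np.sin(s) * (X - Q @ x))

for s in np.linspace(0.0, 4.0 * np.pi, 9):
    print(np.round(gamma(s), 6), np.linalg.norm(gamma(s)))
\end{verbatim}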
By Theorem \ref{thm0}, we can obtain all geodesics on $(\mathbb{S}^{n},F_{Q})$ and determine whether they are closed or non-closed.
\begin{exam}\label{exam3}
In $(\mathbb{S}^2, F_{Q})$, we choose $x=(1,0,0)^{T}$, $\overline{X}=(0,1,0)^{T}$, then $\overline{\gamma}(s)=(\cos s, \sin s, 0)^{T}$. Take
\begin{equation}\label{3.1.1.1}
Q=\left(
\begin{array}{ccc}
0&a&b\\
-a&0&c\\
-b&-c&0
\end{array}
\right),
\end{equation}
where $I+Q^2$ is positive definite. By a direct calculation,
\begin{equation}\label{3.111}
\gamma(s)=\left(
\begin{array}{c}
\cos bs\cos(1-a)s\\
\cos cs\sin(1-a)s-\sin bs\sin cs\cos(1-a)s\\
-\sin bs\cos cs\cos(1-a)s-\sin(1-a)s\sin cs
\end{array}
\right).
\end{equation}
Due to (\ref{3.111}), we try to find some closed or non-closed geodesics on $(\mathbb{S}^{2},F_{Q})$.
\begin{case}
The geodesics are closed if and only if there exists a constant $T$ such that
\begin{equation}\label{3.X}
\left\{\begin{array}{ll}
\gamma(s)=\gamma(s+T),\\
\dot{\gamma}(s)=\dot{\gamma}(s+T).
\end{array}\right.
\end{equation}
By a direct calculation, for $\gamma(s)$ in (\ref{3.111}), (\ref{3.X}) holds if and only if $\frac{b}{1-a}$ and $\frac{c}{1-a}$ are both rational numbers. Namely, $\gamma(s)$ in (\ref{3.111}) is closed if and only if $\frac{b}{1-a}$ and $\frac{c}{1-a}$ are both rational numbers.
\begin{rema}
When $b=c=0$, the geodesic in (\ref{3.111}) reduces to the expression in \cite{RO}. It is still a great circle and its length is $\frac{2\pi}{1-a}$.
\end{rema}
In other cases, the closed geodesics may not be great circles (see Fig. 1, where $a=c=0$, $b=\frac{1}{2}$ and the length is $l=4\pi$).
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{1.jpg}
\caption{$a=c=0$, $b=\frac{1}{2}$, $s\in[0,4\pi]$}
\end{figure}
\begin{rema}
At the points $(0,1,0)$ and $(0,-1,0)$, the geodesic in Fig. 1 is self-intersecting and self-tangent, but the tangent directions are opposite. Namely, in Finsler geometry there can exist two mutually tangent geodesics through a point, which is different from the Riemannian case.
\end{rema}
\end{case}
\begin{case}
If at least one of $\frac{b}{1-a}$ and $\frac{c}{1-a}$ is irrational, the geodesics on $(\mathbb{S}^2,F_{Q})$ are all non-closed and cover the sphere irregularly (see Fig. 2, where $a=c=0$, $b=1-\frac{1}{\sqrt{2}}$, $s\in[0,29\pi]$).
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{2.jpg}
\caption{$a=c=0$, $b=1-\frac{1}{\sqrt{2}}$, $s\in[0,29\pi]$}
\end{figure}
\end{case}
\end{exam}
\section{Isoparametric Functions}
\subsection{Normal geodesics and focal submanifolds }
~~~~Let $\tau:M \rightarrow N$ be an embedded submanifold. For simplicity, we will denote $d\tau X$ by $X$. Define
$$\mathcal{V}(M)=\{(x,\xi)~|~x\in M,\xi\in T_x^{*}N,\xi (X)=0,\forall X\in T_xM\},$$
which is called the
{\it normal bundle} of $\tau$ or $M$. Let
$$ \mathcal{N}M={\mathcal L}^{-1}(\mathcal{V}(M))=\{(x, \eta)~|~x\in \tau(M), \eta={\mathcal L}^{-1}(\xi), \xi \in \mathcal{V}_{x}(M)\}.$$
Then $\mathcal{N}M\subset TN$. Moreover, denote
$$\mathcal{N}^{0}M=\{(x, \textbf{n})~|~x\in M, \textbf{n}\in\mathcal{N}M, F(\textbf{n})=1\}.$$
We call $\textbf{n}\in\mathcal{N}^{0}_{x}M$ the \textit{unit normal vector} of $M$.
Let $\textmd{Exp}: TN\rightarrow N$ be the exponential map of $N$. For $\eta\in\mathcal{N}_{x}M$, define the normal exponential map $E(x,\eta)=\textmd{Exp}_{x}\eta$. The focal points of $M$ are the critical values of the normal exponential map $E$. Let $M$ be an oriented hypersurface and $\textbf{n}\in\mathcal{N}^{0}_{x}M$. Set
$$\tau_{s}(x):=\textmd{Exp}(x,s\textbf{n})=\textmd{Exp}_{x}s\textbf{n}.$$
By Theorem \gammaef{thm0} and the notion of navigation, we immediately get the explicit expression of normal geodesics of $M$ on $(\mathbb{S}^n,F_{Q})$.
\betaegin{lemm}\lambdabel{coro35}
The unit speed normal geodesic $\gamma_{x}(s)$ on $(\mathbb{S}^n,F_{Q})$ with initial point $x\in M$ and initial tangent vector $\mathbf{n}$ can be expressed by
\betaegin{equation}\lambdabel{3.3.5.3}
\gamma(s)=\exp (sQ)((\cos s)x+\sin s(\mathbf{n}(x)-V(x))),~~~~s\in \mathbb{R}.
\end{equation}
\end{lemm}
From Lemma \gammaef{coro35}, we know that
\betaegin{equation}\lambdabel{3.3.5.3.3}
\tau_{s}(x)=\exp (sQ)((\cos s)x+\sin s(\mathbf{n}(x)-V(x))),~~ x\in M,
\end{equation}for any $s\in \mathbb{R}$.
\begin{lemm} \label{lemm23}~\cite{HCY}
Let $\tau:M \rightarrow N(c) $ be an immersed submanifold, and let $\mathbf{n}$ be a unit normal vector to $\tau(M)$ at ${x}$.
Then $p=E(x, s\mathbf{n})$ is a focal point
of $(M ,x)$ of multiplicity $m_0>0$ if and only if there is an eigenvalue $\lambdambda$ of the shape operator $A_{\mathbf{n}}$ of multiplicity $m_0$ such that
\begin{align} \lambda=\left\{
\begin{array}{rcl}
\frac{1}{s}, & & {c=0,}\\
\sqrt{c}\cot \sqrt{c}s, & & {c>0,}\\
\sqrt{-c}\coth \sqrt{-c}s, & & {c<0.}
\end{array} \right.\label{2.3}\end{align}
\end{lemm}
Let $M$ be a connected, oriented isoparametric hypersurface in $(\mathbb{S}^n,F_{Q})$ with $\textbf{n}\in\mathcal{N}^{0}M$, and let its $g$ distinct constant principal curvatures be
\begin{align*}
\lambda_{1},~\lambda_{2},~\cdots, ~\lambda_{g}.
\end{align*}
We denote the multiplicity and the corresponding principal foliation of $\lambda_{i}$ by $m_{i}$ and $V_{i}$, respectively. Let $s_{i}=\mathrm{arccot}\,\lambda_{i}$ and $X\in V_{i}$. Set $\overline{\tau}_{s}(x)=(\cos s)x+\sin s(\textbf{n}(x)-V(x))$, so that $\tau_{s}(x)=\exp(sQ)\overline{\tau}_{s}(x)$. Differentiating $\tau_{s}(x)$ in the direction $X$, from (3.41) in \cite{TP}, we have
$$\tau_{s*}X=\exp(sQ)\overline{\tau}_{s*}X=\exp(sQ)(\cos sI-\sin s\overline{A})X=\exp(sQ)\frac{\sin(s_{i}-s)}{\sin s_{i}}X,$$
where $\overline{A}$ is the shape operator of $M$ in $(\mathbb{S}^n,h)$ and on the right side we identify $X$ with its Euclidean parallel translate at $\tau_{s}(x)$; the last equality uses $\overline{A}X=\cot (s_{i})X$ together with $\cos s-\sin s\cot s_{i}=\frac{\sin(s_{i}-s)}{\sin s_{i}}$. Hence $\tau_{s*}$ is injective on $V_{i}$ unless $s=s_{i}$ (mod $\pi$) for some $i$, in which case $\tau_{s_{i}*}V_{i}=0$ and $\tau_{s_{i}}(x)$ is a focal point of $(M, x)$. Combining this with Lemma \ref{coro35}, we have
\betaegin{lemm}\lambdabel{coro34}
The parallel hypersurfaces and focal submanifolds of the isoparametric hypersurface $M$ in $(\mathbb{S}^n,F_{Q})$ can be expressed as $M_{s}:=\tau_{s}M,~s\neq s_{i}$ and $M_{s_{i}}:=\tau_{s_{i}}M$, respectively, where $\tau_{s}$ is expressed by (\gammaef{3.3.5.3.3}).
\end{lemm}
\subsection{Proof of Theorem \gammaef{thm01}}
~~~~Let $\overline{f}$ be a global isoparametric function on $(\mathbb{S}^n,h)$. Set $|\nabla^{h}\overline{f}|=a(\overline{f})$ and $\overline{f}(\mathbb{S}^n)=[c,d]$. From \cite{QM}, we know that $a(t)=0$ if and only if $t=c$ or $t=d$; in this case, $\overline{f}^{-1}(c)$ and $\overline{f}^{-1}(d)$ are the only two focal submanifolds. From \cite{QM}, we also know that $\int^{d}_{c}\frac{1}{a(t)}dt$ converges. Define
$$\zeta(t)=\int_{t_{0}}^{t}\frac{1}{a(\theta)}d\theta$$
for some given $t_{0}\in(c,d)$ and any $t\in [c,d]$. Set
$$\overline{\rho}(x)=\int_{t_{0}}^{\overline{f}(x)}\frac{1}{a(t)}dt=\zeta(\overline{f}(x)),\ \ \ \ x\in\mathbb{S}^n.$$
Then $\overline{\rho}$ is a normalized isoparametric function with respect to $h$. Since $\zeta'(t)=\frac{1}{a(t)}>0$ for $t\in(c,d)$, $\zeta(t)$ is strictly increasing on $[c,d]$. Set $\zeta[c,d]=[\alpha,\beta]$; then
$$\overline{M}_{s}=\overline{\rho}^{-1}(s),~~ \forall s\in[\alpha,\beta],~~~~M=\overline{M}_{0}=\overline{f}^{-1}(t_0).$$
Define the map $\psi: \mathbb{S}^n\gammaightarrow \mathbb{S}^n$ such that \betaegin{equation}\lambdabel{4.5.1.1}\psi\mid_{\overline{M}_{s}}=\psi_{s}=\exp (sQ),~~~~s\in[\alphalpha,\betaeta].\end{equation}
From \cite{XM1}, we know that $\psi$ is an orientation preserving local diffeomorphism around $M$.
If we define the function $\gammaho$ such that $$\gammaho^{-1}(s)=\psi(\overline{M}_{s})=\psi(\overline{\gammaho}^{-1}(s)),$$
then by Lemma \gammaef{lemm46}, $\gammaho$ is a local normalized isoparametric function around $M$ on $(\mathbb S^n, F_{Q})$.
Next, we extend the local isoparametric function to a global one. Let $M$ be an oriented hypersurface with a given unit normal vector field $\textbf{n}$. Since $M$ is complete, we can define a distance function $\rho(x)$: if $x$ is on the side of $M$ pointed to by $\textbf{n}$, then $\rho(x)=d(M,x)=\inf\{d(z,x)\mid z\in M\}$, the distance from $M$ to $x$; if $x$ is on the other side of $M$, then $\rho(x)=-d(x,M)=-\inf\{d(x,z)\mid z\in M\}$, where $d(x,M)$ is the distance from $x$ to $M$.
\begin{lemm}\label{lemm0.4}
There exists a minimal normal geodesic $\gamma(s)$ of $M$ such that $\gamma(\rho(x))=x$.
\end{lemm}
\proof By the definition of the distance function $\rho(x)=d(M,x)$, there exists a sequence $\{z_{n}\}\subset M$ such that $s_{0}=\rho(x)=d(M ,x)=\lim\limits_{n\to+\infty}d(z_{n},x)$. Since $\mathbb{S}^n$ is compact, there exists a subsequence of $\{z_{n}\}$ which converges to some $z_{0}\in\mathbb{S}^n$. Since $M$ is complete, $z_{0}\in M$ and $d(z_{0},x)=\lim\limits_{n\to+\infty}d(z_{n},x)=\rho(x)=s_{0}$.
Since $\mathbb{S}^n$ is complete and connected, there exists a minimal geodesic $\gamma:[0,s_{0}]\rightarrow \mathbb{S}^n$ satisfying $\gamma(0)=z_{0}$, $\gamma(s_{0})=x$. Then, from the first variation formula of the arc length, we know that $\dot{\gamma}(0)=\textbf{n}(z_{0})$. Namely, $\gamma(s)=E(z_{0},s\textbf{n}(z_{0}))$ is a minimal normal geodesic of $M$.
The case $\rho(x)=-d(x,M)$ is proved similarly.
\endproof
Lemma \ref{lemm0.4} also holds for $(\mathbb{S}^{n},h)$. Similarly, we can define a distance function $\overline{\rho}(x)=\overline{d}(M, x)$ on $(\mathbb S^n, h)$.
\begin{lemm}\label{lemm0.5}
$\psi(x)=\exp(Q\int^{\overline{f}(x)}_{t_{0}}\frac{1}{a(t)}dt)x$ is a homeomorphism from $\mathbb{S}^n$ to $\mathbb{S}^n$.
\end{lemm}
\proof Suppose there exist two distinct points $x_{1}$, $x_{2}\in \mathbb{S}^n$ such that $\psi x_{1}=\psi x_{2}$. Then there exist two minimal normal geodesics $\overline{\gamma}_{1}(s)$ and $\overline{\gamma}_{2}(s)$ of $M$ on $(\mathbb S^n, h)$ such that $x_{1}=\overline{\gamma}_{1}(s_{1})$, $x_{2}=\overline{\gamma}_{2}(s_{2})$. By the definition of $\psi$, $\psi_{s_{1}}x_{1}=\psi x_{1}=\psi x_{2}=\psi_{s_{2}}x_{2}$. Note that $s_{1}\neq s_{2}$, since otherwise $x_{1}=x_{2}$ because $\exp(s_{1}Q)$ is invertible. Assume $s_{1}<s_{2}$. Then $x_{1}=\psi_{s_{2}-s_{1}}x_{2}$, which means that $x_{1}$ and $x_{2}$ lie on the same orbit of the one-parameter group.
Set $$\sigma(t)=\exp (-tQ)x_{1},~~~~t\in [0,s_{2}-s_{1}],$$ so that $\sigma(0)=x_{1}$ and $\sigma(s_{2}-s_{1})=x_{2}$. Since $|Qx|<1$ on $\mathbb{S}^n$, the length satisfies $L_{\sigma}<s_{2}-s_{1}$. Hence,
$$\overline{d}(x_{1},x_{2})\leq L_{\sigma}<s_{2}-s_{1}.$$
Set $\overline{\gamma}_{1}(0)=z_{1}$, $\overline{\gamma}_{2}(0)=z_{2}$. Since $$s_{2}>s_{1} +\overline{d}(x_{2},x_{1})=\overline{d}(z_{1},x_{1})+\overline{d}(x_{2},x_{1})\geq \overline{d}(z_{1},x_{2})\geq s_{2},$$
we obtain a contradiction. Hence, $\psi$ is an injection.
For any $x\in\mathbb{S}^n$, set $s_0=\rho(x)$. From Lemma \ref{lemm0.4}, there exists a minimal normal geodesic $\gamma(s)$ of $M$ on $(\mathbb S^n, F_{Q})$ such that $\gamma(s_0)=x$. Set $\gamma(0)=z_{0}\in M$ and $\dot{\gamma}(0)=\textbf{n}(z_{0})$. There also exists a minimal normal geodesic $\overline{\gamma}(s)=\exp (-sQ){\gamma}(s)$ of $M$ on $(\mathbb S^n, h)$ such that $\overline{\gamma}(0)=z_{0}$, $\dot{\overline{\gamma}}(0)=\textbf{n}(z_{0})-V$. Set $\overline{x}=\overline{\gamma}(s_0)\in \overline{M}_{s_0}$; then
$$x=\gamma(s_{0})=\exp (s_{0}Q)\overline{\gamma}(s_{0})=\exp (s_{0}Q)(\overline{x})=\psi(\overline{x}).$$
Namely, $\psi$ is a surjection. Hence, $\psi: \mathbb{S}^{n}\rightarrow\mathbb{S}^{n}$ is a bijection. In addition, by its explicit expression, $\psi$ is continuous. Since $\psi$ is a continuous bijection of the compact space $\mathbb{S}^{n}$, the inverse $\psi^{-1}$ exists and is also continuous. This completes the proof of Lemma \ref{lemm0.5}.
\endproof
From (\ref{4.5.1.1}), we have
\begin{equation}\label{4.5.1}
\psi(x)=\exp(\overline{\rho}(x)Q)x,\ \ \ \ \ \ x\in \mathbb S^n.
\end{equation}
Combining Lemma \ref{lemm0.4} and Lemma \ref{lemm0.5}, we immediately obtain
\begin{equation}\label{4.4.1}
\rho(x)=\overline{\rho}(\psi^{-1}x),\ \ \ \ \ \ x\in \mathbb S^n.
\end{equation}
Set $\hat{U}=\mathbb{S}^{n}_{\overline{f}}$ and $U=\psi\hat{U}$. Both $\overline{\rho}$ and $\psi$ are smooth on $\hat{U}$. Set $P=\exp(\overline{\rho}(x)Q)\in O(n+1)$. By (\ref{4.5.1}),
$$D\psi=P(I+Qx\,d\overline{\rho}).$$
Since $|d\overline{\rho}|=1$ on $\hat{U}$ and there exists a constant $q<1$ such that $|Qx|\leq q$ on $\mathbb S^n$, the Jacobian matrix $D\psi$ is invertible. Then we have
\betaegin{lemm}\lambdabel{lemm0.6}
$\psi:\hat{U}\gammaightarrow U$ is a $C^{\infty}$ diffeomorphism.
\end{lemm}
Hence, $\gammaho$ is a normalized isoparametric function on $(U,F_{Q})$. Define
$f(x)=\zeta^{-1}\gammaho(x).$
That is
\betaegin{equation}\lambdabel{4.6.2}
f(x)=\zeta^{-1}\overline{\gammaho}(\psi^{-1}x)=\overline{f}(\psi^{-1}x),\ \ \ \ \ \ \ x\in \mathbb{S}^n,
\end{equation}
where
\betaegin{equation}\lambdabel{4.6.2.1}
\psi(x)=\exp(\zeta( \overline{f}(x))Q)x=\exp(\left(\int^{\overline{f}(x)}_{t_{0}}\frac{1}{a(t)}dt\gammaight)Q)x,\ \ \ \ \ \ \ x\in \mathbb{S}^n.
\end{equation}
Then $f$ is smooth on $U$.
Since $f$ and $\overline{f}$ are isoparametric functions on $(U,F_{Q})$ and $(\hat{U},h)$, respectively, they can be viewed as homogeneous functions of degree 0 on $\mathbb{R}^{n+1}$. Then $\psi$ is a homogeneous map of degree 1 on $\mathbb{R}^{n+1}$. By (\ref{4.6.2}), we get $d{f}=d\overline f\,D\psi^{-1}$. That is,
\begin{equation}\label{4.7.1}
d\overline{f}=df\,D\psi=df\,P\left(I+Qx\,d\overline{\rho}\right).
\end{equation}
Since $D\psi$ and $D\psi^{-1}$ are both bounded on $U$ and $\hat{U}$, we have $d\overline{f}\rightarrow0$ if and only if $df\rightarrow0$. This means that $f$ is $C^1$ at the points $x$ where $df(x)=0$. Hence, by Lemma \ref{lemm0.5} and Lemma \ref{lemm0.6}, combining (\ref{4.6.2}) and (\ref{4.7.1}), we know that $f$ is a globally defined isoparametric function on $(\mathbb S^n, F_{Q})$.
Conversely, suppose $f$ is an isoparametric function on $(\mathbb{S}^n, F_{Q})$ satisfying $F_Q(\nabla{f})=a({f})$, where $a^{2}({f})\in C^{2}(f(\mathbb S^n))$ and ${f}(\mathbb{S}^n)=[c,d]$. Then, for the distance functions $d(M,x)=\inf\{d(z,x)\mid z\in M\}$ and $-d(x,M)=-\inf\{d(x,z)\mid z\in M\}$ on $(\mathbb S^n, F_{Q})$, we have
\begin{lemm}\label{lemm1.11}
If $d$ is the only critical value of $f$ in $[c,d]$, then
\begin{equation}\label{1.11}
d(M,M_{d})=\lim\limits_{c_{i}\rightarrow d^{-}\atop i\rightarrow \infty}d(M,M_{c_{i}})=\int_{t_{0}}^{d}\frac{1}{a(f)}df
\end{equation}
and the improper integral in (\ref{1.11}) converges.
\end{lemm}
\proof Let $y$ be any point in $M_{d}$. Since the set $\{f\geqslant d\}$ in $\mathbb S^n$ is closed and $\mathbb S^n$ is connected, there exist points $p_{i}\in \{f<d\}$ in $\mathbb S^n$ which tend to $y$, namely $\lim\limits_{i\rightarrow\infty}p_{i}=y$. Set $f(p_{i})=c_{i}$.
Let $\sigma$ be any piecewise $C^{1}$ curve from $M$ to $y$. For every $c_{i}\in(c,d)$, $\sigma$ contains a sub-curve $\sigma_{i}$ from $M$ to $M_{c_{i}}$ and a sub-curve $\widetilde{\sigma}_{i}$ from $M_{c_{i}}$ to $y$. Then we have
\betaegin{equation}\lambdabel{1.12}
\inf\limits_{\sigma}L(\sigma)\geq\inf\limits_{\widetilde{\sigma}_{i}}L(\widetilde{\sigma}_{i})+\inf\limits_{{\sigma}_{i}}L(\sigma_{i})=\inf\limits_{\widetilde{\sigma}_{i}}L(\widetilde{\sigma}_{i})+d(M,M_{c_{i}}).
\end{equation}
Since $\lim\limits_{i\rightarrow \infty}d(p_{i},y)=0$, we have $\lim\limits_{i\rightarrow \infty}\inf\limits_{\widetilde{\sigma}_{i}}L(\widetilde{\sigma}_{i})=0$. From (\ref{1.12}), we obtain
\begin{equation}\label{1.13}
d(M,M_{d})\geq\lim\limits_{c_{i}\rightarrow d^{-}\atop i\rightarrow \infty}d(M,M_{c_{i}}).
\end{equation}
On the other hand, since $M$ is complete, from Lemma \ref{lemm0.4} and \cite{HYS} we can find points $x_{i}\in M$ and normal geodesics $\gamma_{i}$ such that $L(\gamma_{i})=d(x_{i},p_{i})=d(M, M_{c_{i}})$. Since $d(x_{i},p_{i})+d(p_{i},y)\geq d(x_{i},y)\geq d(M,M_{d})$, we have
\begin{equation}\label{1.14}
\lim\limits_{i\rightarrow \infty}L(\gamma_{i})=\lim\limits_{c_{i}\rightarrow d^{-}\atop i\rightarrow \infty}d(M,M_{c_{i}})\geq d(M,M_{d}).
\end{equation}
From \cite{HYS}, combining (\ref{1.13}) and (\ref{1.14}), we obtain
$$d(M,M_{d})=\lim\limits_{c_{i}\rightarrow d^{-}\atop i\rightarrow \infty}d(M,M_{c_{i}})=\int_{t_{0}}^{d}\frac{1}{a(f)}df.$$
\endproof
From Lemma \ref{lemm1.11}, $\int^{d}_{c}\frac{1}{a(t)}dt$ converges. In addition, since $a^{2}(t)\in C^{2}(f(\mathbb{S}^{n}))$, using a method similar to that in \cite{QM}, we have
\betaegin{lemm}\lambdabel{lemm1.12}
$f$ has no critical value in $(c,d)$.
\end{lemm}
Then, from Lemma \ref{lemm1.11} and Lemma \ref{lemm1.12}, we can similarly define $\widetilde{\psi}(x)=\exp(-\zeta( {f}(x))Q)x$ and prove that $\overline{f}=f\circ\widetilde{\psi}^{-1}$ is an isoparametric function on $(\mathbb{S}^n, h)$. This completes the proof of Theorem \ref{thm01}.
\subsection{Homogeneous functions and homogeneous polynomials}
~~~~For Riemannian isoparametric functions, the following fact is known from \cite{TP}. Let $\phi: \mathbb{R}^{n+1}\rightarrow \mathbb{R}$ be a homogeneous function. Then $f=\phi\mid_{(\mathbb S^{n},h)}$ is an isoparametric function on $(\mathbb S^n,h)$ if and only if there exist a smooth function $\widehat{a}(\phi,r)$ and a continuous function $\widehat{b}(\phi,r)$ such that $\phi$ satisfies
\begin{equation}\label{1.2}
\left\{\begin{array}{ll}
|\nabla^E\phi|^2=\widehat{a}(\phi,r),\\
\Delta^{E}\phi=\widehat{b}(\phi,r),
\end{array}\right.
\end{equation}
where $r=|x|$.
We try to deduce similar equations to characterize isoparametric functions on a Randers sphere. Recall that a Randers metric $F$ can be defined by navigation from the datum $(h, V)$, where $h=\sqrt{h_{ij}y^{i}y^{j}}$, $V=v^{i}\frac{\partial}{\partial x^{i}}$. Set $V_{0}=v_{i}y^{i}=h_{ij}v^{j}y^{i}$, $\lambdambda=1-\|V\|_{h}^{2}$.
\betaegin{lemm} \lambdabel{lemm41}~\cite{HYS1}
Let $(N, F, d\mu_{BH})$ be an $n$-dimensional Randers space, where $F=\frac{\sqrt{\lambdambda h^{2}+V_{0}^{2}}-V_{0}}{\lambdambda}$. Then $f$ is an isoparametric function if and only if $f$ satisfies
$$\left\{\begin{array}{ll}
|df|_h+\langle df, V^*\rangle_h=\tilde a(f),\\
\frac{1}{ |df|_h}\Delta^hf+\text{div}V +\frac{1}{ |df|_h^2}\langle d\langle df,V^*\rangle_h,df\rangle_h=\frac{\tilde b(f)}{\tilde a(f)}.
\end{array}\right.$$
\end{lemm}
When $V$ is a Killing vector field, we have
$$\text{div}_{\sigma} V=h^{ij}v_{i|j}=0.$$
\betaegin{lemm} \lambdabel{lemm42}~\cite{TP}
Let~$(\mathbb S^n,h)\hookrightarrow \mathbb R^{n+1} (n\geq2)$ be the standard Euclidean sphere and $\phi: \mathbb R^{n+1}\gammaightarrow \mathbb R$ be a homogeneous function of degree $k$. Then
\betaegin{align}\lambdabel{na}
\nabla^h\phi=\nabla^E\phi-k\phi x,
\end{align}
\betaegin{align}\lambdabel{na1}
\Delta^h\phi=\Delta^E\phi-k(k+n-1)\phi,
\end{align}
where $\nabla^E$ and $\Delta^E$ denote the Euclidean gradient and Laplacian in $\mathbb R^{n+1}$, respectively.
\end{lemm}
If $\phi$ is a positively homogeneous function of degree $k$, then (\ref{na}) and (\ref{na1}) still hold. By Lemma \ref{lemm41} and Lemma \ref{lemm42}, we get the following
\betaegin{theo}\lambdabel{thm41}
Let $\phi: \mathbb R^{n+1}\gammaightarrow \mathbb R$ be a positively homogeneous function of degree $k$. Then $f=\phi\mid_{\mathbb S^{n}}$ is an isoparametric function on $(\mathbb S^n, F_{Q})$ if and only if $\phi$ satisfies
\begin{equation}\label{1.3}
\left\{\begin{array}{ll}
|\nabla^E\phi-k\phi x|+\langle \nabla^E\phi,xQ\rangle=\tilde a(\phi,r),\\
\frac{\Delta^E\phi-k(k+n-1)\phi}{ |\nabla^E\phi-k\phi x|}
+\frac{\langle \nabla^E\langle \nabla^E\phi,xQ\rangle,\nabla^E\phi\rangle}{|\nabla^E\phi-k\phi x|^2}=\tilde b(\phi,r).
\end{array}\right.
\end{equation}
\end{theo}
Theorem \ref{thm41} gives equations characterizing isoparametric functions on a Randers sphere, but these equations are hard to solve directly. Hence, we use Theorem \ref{thm01} to derive solutions of (\ref{1.3}).
Let $\overline{\phi}(x)$ be a homogeneous function of degree $k$ on $\mathbb R^{n+1}$ which satisfies (\ref{1.2}). Then $\overline{f}=\overline{\phi}\mid_{\mathbb{S}^n}$ is a globally defined isoparametric function on $(\mathbb{S}^n,h)$ and $\overline{\phi}(x)=|x|^{k}\overline{f}(\frac{x}{|x|})$. By Theorem \ref{thm01}, $f=\overline{f}\circ\psi^{-1}$ is a globally defined isoparametric function on $(\mathbb{S}^n, F_{Q})$. Set $\phi=|x|^{k}f(\frac{x}{|x|})$. Then $\phi$ is a positively homogeneous function of degree $k$ on $\mathbb R^{n+1}$ such that $f=\phi\mid_{\mathbb{S}^n}$, and it must be a solution of (\ref{1.3}).
Set $x=\psi(\overline{x})=\exp (\zeta (\overline{f}(\overline{x}))Q)\overline{x}$. Obviously, $|x|=|\overline{x}|$. Then
\betaegin{equation}\lambdabel{4.8}
\phi(x)=|x|^{k}f(\frac{x}{|x|})=|\overline x|^{k}\overline{f}\circ\psi^{-1}(\frac{x}{|\overline{x}|}),
\ \ \ \ \ \ x\in\mathbb{R}^{n+1}.\end{equation}
Since $\psi^{-1}$ is a homogeneous map of degree 1,
\betaegin{equation}\lambdabel{4.8.2}
\phi(x)=|\overline x|^{k}\overline f(\frac{\overline{x}}{|\overline{x}|})=\overline{\phi}(\overline{x})
=\overline{\phi}(\psi^{-1}x),
\ \ \ \ \ \ x\in\mathbb{R}^{n+1}.\end{equation}
From Lemma \ref{lemm42} and Euler's relation $\langle\nabla^E\overline{\phi},x\rangle=k\overline{\phi}$ on $\mathbb{S}^n$, we get $|\nabla^h\overline{\phi}|^2=|\nabla^E\overline{\phi}-k\overline{\phi}x|^{2}=|\nabla^E\overline{\phi}|^2-k^2\overline{\phi}^2,$ that is, $a(t)^2=\widehat{a}(t,1)-k^{2}t^{2}$. Thus, we have
\begin{equation}\label{4.8.1}
\psi(\overline{x})=\exp\left(Q\int^{\frac{\overline{\phi}(\overline x)}{|\overline{x}|^{k}}}_{t_{0}}\frac{1}{\sqrt{\widehat{a}(t,1)-k^{2}t^{2}}}dt\right)\overline x, \ \ \ \ \ \ \overline x\in\mathbb{R}^{n+1}.
\end{equation}
Combining (\ref{4.8.2}) and (\ref{4.8.1}), we obtain a solution of (\ref{1.3}).
Conversely, if there exists a positively homogeneous function $\phi$ of degree $k$ on $\mathbb R^{n+1}$ which satisfies (\ref{1.3}), then $f=\phi|_{\mathbb{S}^n}$ is an isoparametric function on $(\mathbb{S}^n, F_{Q})$ and $\phi=|x|^{k}f(\frac{x}{|x|})$. From Theorem \ref{thm01}, $\overline{f}=f\circ\widetilde{\psi}^{-1}$ is an isoparametric function on $(\mathbb{S}^n, h)$, where $\widetilde{\psi}(x)=\exp(-\zeta( {f}(x))Q)x$. We can also find a homogeneous function $\overline{\phi}$ of degree $k$ which satisfies $\overline{\phi}\mid_{\mathbb{S}^{n}}=\overline{f}$ and $\overline{\phi}=\phi\circ\widetilde{\psi}^{-1}$. Namely, we have the following
\betaegin{theo}$\overline{\phi}$ is a homogeneous function of degree $k$ satisfying (\gammaef{1.2}) if and only if $\phi=\overline{\phi}\circ\psi^{-1}$ is a positively homogeneous function of degree $k$ satisfying (\gammaef{1.3}), where $\psi$ is expressed by (\gammaef{4.8.1}).
\end{theo}
\betaegin{lemm} \lambdabel{lemm021}~\cite{HDY}
Let~$M$ be an anisotropic submanifold in a Randers space~$(N,F,d\mu_{BH})$ with the navigation datum~$(h,V)$. If~$F$ has isotropic~$\mathbf{S}$-curvature~$\mathbf{S}=(n+1)c(x)F$, then for any $\mathbf{n} \in \mathcal{N}^0(M)$, the shape operators of $M$ in Randers space~$(N, F)$ and Riemannian space~$(N, h)$, ${A}_{\mathbf{n}}$ and $\betaar{A}_{\betaar{\mathbf{n}}}$, have the same principal vectors and satisfy
\betaegin{align}\lambdabel{3.24}\lambdambda=\betaar{\lambdambda}+c(x),\end{align}
where~$\lambdambda$ and~$\betaar{\lambdambda}$ are the principal curvatures of~$M$ in a Randers space~$(N, F)$ and a Riemannian space~$(N, h)$, respectively.\end{lemm}
From Lemma \ref{lemm021}, the number and the multiplicities of the principal curvatures are preserved under navigation. Hence, a hypersurface is isoparametric in $(\mathbb{S}^n,h)$ if and only if it is isoparametric in $(\mathbb{S}^n,F_{Q})$. So from Theorem A, we immediately have
\begin{prop}
Let $M$ be a connected isoparametric hypersurface in $(\mathbb S^{n},F_{Q})$ with $g$ principal curvatures $\lambda_{i}=\cot\theta_{i}$, $0<\theta_{i}<\pi$, with multiplicities $m_{i}$. Then $M$ is an open subset of a level set of the restriction to $\mathbb S^{n}$ of a Cartan-M\"{u}nzner polynomial $\phi$ of degree $g$.
\end{prop}
\subsection{Proof of Theorem \gammaef{thm02}}
~~~~For a given connected isoparametric family $M_{t}$ in $(\mathbb{S}^n,F_{Q})$, there exists an isoparametric function $\tilde{f}:(\mathbb{S}^n,F_{Q})\gammaightarrow[c,d]$ such that $M_{t}\subset \widetilde{f}^{-1}(t)$ and $M=M_{t_{0}}\subseteq \tilde{f}^{-1}(t_{0})=\tilde{f}^{-1}(\frac{1}{2}(c+d))$. Let $\gammaho$ be a normalized isoparametric function such that $\nabla \gammaho=\frac{\nabla \tilde{f}}{F_Q(\nabla \tilde{f})}$ and $\gammaho(\mathbb{S}^n_{\tilde{f}})=(-\alphalpha,\alphalpha)$, where $\alphalpha>0$ is a constant. Set $\gammaho^{-1}(0)=\tilde{f}^{-1}(t_{0})$, then $M\subset\gammaho^{-1}(0)$. From Lemma \gammaef{lemm021}, $M$ is also a connected isoparametric hypersurface in $(\mathbb{S}^n,h)$. From Theorem A, there exists a Cartan-M\"{u}nzner polynomial $\phi$ of degree $g$ such that $\overline{f}=\phi\mid_{\mathbb{S}^n}$ is an isoparametric function on $(\mathbb{S}^n,h)$ and $M$ is an open subset of a level set of $\overline{f}$.
In this case, from (\gammaef{1.4}) and Lemma \gammaef{lemm42}, $$a(\overline{f})=|\nabla^{h}\phi|=\sqrt{|\nabla^{E}\phi|^{2}-g^2\phi^2}\betaig|_{\mathbb{S}^{n}}=\sqrt{g^{2}-g^2\overline{f}^2}=g\sqrt{1-\overline{f}^2}.$$
Hence, $a(t)=g\sqrt{1-t^2}$. Obviously, $\zeta(t)=\int_{{0}}^{t}\frac{1}{a(\theta)}d\theta=\frac{1}{g}\arcsin t$. From (\ref{4.6.2.1}), we have
$$\psi(x)=\exp(\frac{1}{g}\arcsin \overline{f}(x)Q)x=\exp(\frac{1}{g}\arcsin\frac{\phi(x)}{|x|^{g}}Q)x.$$
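As a quick consistency check, note that $\zeta'(t)=\frac{1}{g\sqrt{1-t^{2}}}=\frac{1}{a(t)}$ and that $\zeta$ maps $[-1,1]$ onto $[-\frac{\pi}{2g},\frac{\pi}{2g}]$, so the extreme values $t=\pm1$ correspond exactly to the focal distances $\pm\frac{\pi}{2g}$ appearing in (\ref{4.7.2}) below.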
Due to Theorem \gammaef{thm01}, $f=\overline{f}\circ\psi^{-1}$ is an isoparametric function on $(\mathbb{S}^n,F_{Q})$ and each $M_{t}$ is an open subset of a level set of $f$.
Namely, there exists a $t'$ such that $M_{t'}\subset f^{-1}(t)$ and $M=f^{-1}(0)$. Meanwhile, from Lemma \gammaef{lemm32} and Lemma \gammaef{lemm0.4},
\begin{equation}\label{0.01}
x=\exp(sQ)\overline{x}=\exp(\overline{d}(M,\overline{x})Q)\overline{x}.
\end{equation}
Hence, from (\ref{3.3.5.3.3}),
\begin{align}\label{0.001}
&f^{-1}(t)=E(x,\zeta(t)\textbf{n})=\tau_{\zeta(t)}(x)=\exp(\zeta(t)Q)\overline{\tau}_{\zeta(t)}(x)\nonumber\\
&=\{\exp(\frac{\arcsin t}{g}Q)((\cos\frac{\arcsin t}{g})x+\sin(\frac{\arcsin t}{g})(\textbf{n}-V))\mid x\in M\},-1<t<1.
\end{align}
Conversely, for a given Cartan-M\"{u}nzner polynomial $\phi$ of degree $g$, set $f=\phi\circ\psi^{-1}\mid_{\mathbb{S}^n}$. Then the level sets $f^{-1}(t)$, $-1<t<1$, form a connected isoparametric family in $(\mathbb{S}^n,F_{Q})$. Namely, we can construct an isoparametric family in $(\mathbb{S}^n,F_{Q})$ from a positively homogeneous polynomial, which is equivalent to constructing a mean curvature flow.
Furthermore, we discuss the focal submanifolds in $(\mathbb S^n, F_{Q})$. By Theorem B, there exist only two connected focal submanifolds $\overline{M}_{+}=\overline{\tau}_{\frac{\pi}{2g}}(x)$ and $\overline{M}_{-}=\overline{\tau}_{-\frac{\pi}{2g}}(x)$ in $(\mathbb S^n, h)$. Let $\overline{x}$ be a focal point of $M$ in $(\mathbb S^n, h)$. From (\ref{4.7.1}), $\nabla \overline{f}(\overline{x})\rightarrow0$ if and only if $\nabla f(x)\rightarrow0$. Hence, $x$ is a focal point of $M$ in $(\mathbb S^n, F_{Q})$. By (\ref{0.01}), there also exist two focal submanifolds $M_{\pm}$ in $(\mathbb S^n, F_{Q})$ corresponding to $\overline{M}_{\pm}$ in $(\mathbb S^n, h)$. From (\ref{0.001}), we have
\betaegin{equation}\lambdabel{4.7.2}
M_{\pm}=\tau_{\pm \frac{\pi}{2g}}(x)=\{\exp(\pm \frac{\pi}{2g}Q)(\cos\frac{\pi}{2g}x\pm \sin\frac{\pi}{2g}(\textbf{n}-V))\mid x\in M\}.
\end{equation}
This completes the proof of Theorem \ref{thm02}.
\subsection{Examples of isoparametric functions and focal submanifolds}
~~~~In this section, we give some examples of isoparametric functions on $(\mathbb{S}^n,F_{Q})$, together with the corresponding isoparametric families and focal submanifolds. Recall that, under an orthogonal similarity transformation, every real antisymmetric matrix can be written in the standard form
$$Q=\left(
\betaegin{array}{cccccccccc}
Q_{1}& & & & & & \\
&Q_{2}& & & & & \\
& &\ddots& & & & \\
& & &Q_{j}& & & \\
& & & &0& & \\
& & & & &\ddots& \\
& & & & & &0\\
\end{array}
\gammaight),$$
where \betaegin{equation}
Q_{i}=\left(
\betaegin{array}{cc}
0&a_{i}\\
-a_{i}&0
\end{array}
\gammaight), \ \ \ \ \ \ |a_{i}|<1, \ \ \ \ \ \ 1\leq i\leq j.
\end{equation}
If $\overline{\phi}$ is a homogeneous polynomial of degree $g$, then $\phi$ is a positively homogeneous function of degree $g$. Using (\gammaef{4.8.2}), we have $\phi(x)=\overline{\phi}(\psi^{-1}x)$, where
\begin{equation}\label{4.12}
\psi(x)=(\sum\limits_{i=1}^{j}I_{i}\cos a_{i}\left(\frac{1}{g}\arcsin\frac{\overline{\phi}(x)}{|x|^{g}}\right)+\frac{1}{a_{i}}P_{i}\sin a_{i}\left(\frac{1}{g}\arcsin\frac{\overline{\phi}(x)}{|x|^{g}}\right))x+I_{n+1-2j}x,
\end{equation}
where $I_{i}$ is the $(n+1)\times(n+1)$ matrix whose entries are all $0$ except that its $(2i-1,2i-1)$ and $(2i,2i)$ entries are both $1$; $P_{i}$ is the $(n+1)\times(n+1)$ matrix whose entries are all $0$ except that its $(2i-1,2i)$ entry is $a_{i}$ and its $(2i,2i-1)$ entry is $-a_{i}$ (that is, $P_{i}$ is the block $Q_{i}$ embedded into an $(n+1)\times(n+1)$ matrix); and $I_{n+1-2j}$ is the $(n+1)\times(n+1)$ matrix whose last $n+1-2j$ diagonal entries are $1$ and whose other entries are $0$.
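As an illustration of this notation, consider hypothetically the smallest case $n+1=3$, $j=1$, so that $Q$ consists of the single block $Q_{1}$ with $|a_{1}|<1$. Writing $\theta=\frac{1}{g}\arcsin\frac{\overline{\phi}(x)}{|x|^{g}}$, the matrix appearing in (\ref{4.12}) is
$$I_{1}\cos (a_{1}\theta)+\frac{1}{a_{1}}P_{1}\sin (a_{1}\theta)+I_{n+1-2j}=\left(
\begin{array}{ccc}
\cos (a_{1}\theta)&\sin (a_{1}\theta)&0\\
-\sin (a_{1}\theta)&\cos (a_{1}\theta)&0\\
0&0&1
\end{array}
\right)=\exp(\theta Q),$$
so that (\ref{4.12}) is just the block-wise form of $\psi(x)=\exp(\zeta(\overline{f}(x))Q)x$ from (\ref{4.6.2.1}).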
\begin{exam} \label{exam0}
$\textbf{(g=1)}$ Let $(\mathbb{S}^n, F_{Q})$ be an $n$-dimensional Randers sphere, where $F=\frac{\sqrt{\lambda h^{2}+V_{0}^{2}}-V_{0}}{\lambda}$. Then $\overline{f}$ is an isoparametric function on $(\mathbb{S}^n,h)$ if and only if there exists a homogeneous polynomial $\overline{\phi}:\mathbb{R}^{n+1}\rightarrow\mathbb{R}$ of the form
$$\overline{\phi}(x)=\langle x,p\rangle,\ \ \ \ p\in\mathbb{R}^{n+1},\ \ \ \ |p|=1,$$
such that $\overline{f}=\overline{\phi}\mid_{\mathbb{S}^n}$.
Then we have $\phi(x)=\overline{\phi}(\psi^{-1}x)$. Using (\gammaef{4.12}),
\betaegin{equation*}
\psi(x)=(\sum\limits_{i=1}^{j}I_{i}\cos a_{i}(\alpharcsin\frac{\lambdangle x,p\gammaangle}{|x|})+\frac{1}{a_{i}}P_{i}\sin a_{i}(\alpharcsin\frac{\lambdangle x,p\gammaangle}{|x|}))x+I_{n+1-2j}x.
\end{equation*}
In the Riemannian case, for this linear height function, consider the regular level sets $\overline{M}_{s}=\{x\in\mathbb{S}^{n}\mid\langle x,p\rangle=\sin s\}$, $-\frac{\pi}{2}<s<\frac{\pi}{2}$. The two focal submanifolds are $\overline{M}_{+}=\overline{M}_{\frac{\pi}{2}}=\{p\}$ and $\overline{M}_{-}=\overline{M}_{-\frac{\pi}{2}}=\{-p\}$, respectively. From (\ref{0.001}) and (\ref{4.7.2}), the regular level sets are $$M_{s}=\{x\in\mathbb{S}^{n}\mid\langle x,\exp(sQ)p\rangle=\sin s\},\ \ \ \ -\frac{\pi}{2}<s<\frac{\pi}{2}.$$
The two connected focal submanifolds $M_{+}=\exp(\frac{\pi}{2}Q)\{p\}$ and $M_{-}=\exp(-\frac{\pi}{2}Q)\{-p\}$ are still two points.
\end{exam}
\betaegin{rema} \lambdabel{rema03}
Take
$Q=\left(
\betaegin{array}{cc}
Q'&0\\
0&0\\
\end{array}
\gammaight)$
and $p=e_{n+1}$, where $Q'\in o(n)$. Example \gammaef{exam0} reduces to Example $\textit{3.11}$ in~\cite{HDY}.
\end{rema}
Take $p=(1,0,0)$ and
\betaegin{equation*}
Q=\left(
\betaegin{array}{cccc}
0&0&\frac{1}{2}\\
0&0&0\\
-\frac{1}{2}&0&0
\end{array}
\gammaight),
\end{equation*}
we can give the image of mean curvature flow in $(\mathbb{S}^{2},h)$ and $(\mathbb{S}^{2},F_{Q})$ (See Fig. 3 and Fig. 4). In this case, the two focal submanifolds
$M_{+}=\{(\frac{\sqrt{2}}{2},0,-\frac{\sqrt{2}}{2})\}$ and $M_{-}=\{(-\frac{\sqrt{2}}{2},0,-\frac{\sqrt{2}}{2})\}$ are two points.
\betaegin{figure}[H]
\centering
\betaegin{tabular}{cc}
\betaegin{minipage}[t]{3.5in}
\includegraphics[width=2.5in]{9.jpg}
\caption{Isoparametric family in $(\mathbb{S}^{2},h)$}
\end{minipage}
\betaegin{minipage}[t]{3.5in}
\includegraphics[width=2.5in]{8.jpg}
\caption{Isoparametric family in $(\mathbb{S}^{2},F_{Q})$}
\end{minipage}
\end{tabular}
\end{figure}
\begin{exam} \label{exam01}
$\textbf{(g=2)}$ Define the function
$$\overline{\phi}:\mathbb{R}^{p+1}\times \mathbb{R}^{q+1}\rightarrow \mathbb{R},\ \ \ \ (\overline{x}_{1},\overline{x}_{2})\mapsto |\overline{x}_{1}|^2-|\overline{x}_{2}|^{2},$$
where $\overline{x}_{1}=(x_{1}, x_{2},\dots, x_{p+1})$, $\overline{x}_{2}=(x_{p+2}, x_{p+3},\dots, x_{p+q+2})$, $p,q\in \mathbb{N}^{+}$ and $p+q=n-1$. Then $\overline{\phi}$ is a homogeneous polynomial of degree $2$.
Set $Q=(q^{\alpha}_{\beta})_{(n+1)\times(n+1)}$, where $q^{p+2}_{p+1}=-q^{p+1}_{p+2}=a$ and all the other entries are $0$. By (\ref{4.12}), we obtain $\phi(x)=\overline{\phi}(\psi^{-1}x)$, where
\begin{align*}
\psi(x)&=(x_{1},\dots,x_{p},\cos (\frac{a}{2}\arcsin\frac{|\overline{x}_{1}|^{2}-|\overline{x}_{2}|^{2}}{|\overline{x}_{1}|^{2}+|\overline{x}_{2}|^{2}})x_{p+1}+\sin (\frac{a}{2}\arcsin\frac{|\overline{x}_{1}|^{2}-|\overline{x}_{2}|^{2}}{|\overline{x}_{1}|^{2}+|\overline{x}_{2}|^{2}})x_{p+2},\\
&\ \ \ \ -\sin(\frac{a}{2}\arcsin\frac{|\overline{x}_{1}|^{2}-|\overline{x}_{2}|^{2}}{|\overline{x}_{1}|^{2}+|\overline{x}_{2}|^{2}})x_{p+1}+\cos (\frac{a}{2}\arcsin\frac{|\overline{x}_{1}|^{2}-|\overline{x}_{2}|^{2}}{|\overline{x}_{1}|^{2}+|\overline{x}_{2}|^{2}})x_{p+2},x_{p+3},\dots,x_{n+1}).
\end{align*}
In the Riemannian case, consider the regular level sets $\overline{M}_{s}=\{x=(\overline{x}_{1},\overline{x}_{2})\in\mathbb{S}^{n}\mid|\overline{x}_{1}|^2-|\overline{x}_{2}|^{2}=\sin 2s\}$, $-\frac{\pi}{4}<s<\frac{\pi}{4}$. The two focal submanifolds are $\overline{M}_{+}=\overline{M}_{\frac{\pi}{4}}=\mathbb{S}^{p}\times\{0\}$ and $\overline{M}_{-}=\overline{M}_{-\frac{\pi}{4}}=\{0\}\times\mathbb{S}^{q}$, respectively. From (\ref{0.001}) and (\ref{4.7.2}), the regular level sets can be expressed as
\betaegin{align*}
M_{s}=\{(&x_{1},\dots,x_{p},\cos(as)x_{p+1}+\sin(as)x_{p+2},-\sin(as)x_{p+1}+\cos(as)x_{p+2},\\
&x_{p+3},\dots,x_{n+1})\in\mathbb{S}^n\mid|\overline{x}_{1}|^2-|\overline{x}_{2}|^{2}=\sin 2s\},\ \ \ \ -\frac{\pi}{4}<s<\frac{\pi}{4}.
\end{align*}
The two focal submanifolds are
\betaegin{align*}
M_{+}&=(\exp \frac{\pi}{4}Q)\{S^{p}\times\{0\}\}\\
&=\{(x_{1},\dots,x_{p+2},0,\dots,0)\mid x_{p+2}+\tan(\frac{\pi}{4}a)x_{p+1}=0,\sum\limits_{i=1}^{p+2}x_{i}^{2}=1\}
\end{align*}
and
\betaegin{align*}
M_{-}&=\exp(-\frac{\pi}{4}Q)\{\{0\}\times\mathbb{S}^{q}\}\\
&=\{(0,\dots,0,x_{p+1},\dots,x_{n+1})\mid x_{p+1}+\tan(\frac{\pi}{4}a)x_{p+2}=0,\sum\limits_{i=p+1}^{n+1}x_{i}^{2}=1\}.
\end{align*}
Namely, $M_{+}$ is $\mathbb{S}^{p}$ in $\mathbb{R}^{p+2}$ and $M_{-}$ is $\mathbb{S}^{q}$ in $\mathbb{R}^{q+2}$.
\end{exam}
\small \begin{thebibliography}{9999}
\bibitem{C}
E. Cartan, \textit{Familles de surfaces isoparam$\acute{e}$triques dans les espaces $\grave{a}$ courbure constante}, Ann. Mat. Pura Appl., \textbf{17} (1938), no. 1, 177-191.
\bibitem{C1}
Q. S. Chi, \textit{Isoparametric hypersurfaces with four principal curvatures IV}, J. Differential Geom., \textbf{115} (2020), no. 2, 225-301.
\bibitem{TP}
T. E. Cecil and P. J. Ryan, \textit{Geometry of Hypersurfaces}, Springer Monogr. Math., 2015.
\bibitem{GT}
J. Q. Ge and Z. Z. Tang, \textit{Geometry of isoparametric hypersurfaces in Riemannian manifolds}, Asian J. Math., \textbf{18} (2010), no. 1, 117-125.
\bibitem{HYS}
Q. He, S. T. Yin, Y. B. Shen, \textit{Isoparametric hypersurfaces in Minkowski spaces}, Differential Geom. Appl., \textbf{47} (2016), 133-158.
\bibitem{YH}
S. T. Yin, Q. He, \textit{The maximum diam theorem on Finsler manifolds}, J. Geom. Anal., \textbf{31} (2021), 12231-12249.
\bibitem{HYS1}
Q. He, S. T. Yin, Y. B. Shen, \textit{Isoparametric hypersurfaces in Funk manifolds}, Sci. China Math., \textbf{60} (2017), no. 12, 2447-2464.
\bibitem{HD}
P. L. Dong, Q. He, \textit{Isoparametric hypersurfaces of a class of Finsler manifolds induced by navigation problem in Minkowski spaces}, Differential Geom. Appl., \textbf{68} (2020), 101581.
\bibitem{HDY}
Q. He, P. L. Dong and S. T. Yin, \textit{Classifications of isoparametric hypersurfaces in Randers space forms}, Acta Math. Sin., \textbf{36} (2020), no. 9, 1049-1060.
\bibitem{HCY}
Q. He, Y. L. Chen, T. T. Ren and S. T. Yin, \textit{Isoparametric hypersurfaces in Finsler space forms}, Sci. China Math., \textbf{64} (2021), no. 7, 1463-1478.
\bibitem{M1}
H. F. M\"{u}nzner, \textit{Isoparametrische Hyperfl\"{a}chen in Sph\"{a}ren I}, Math. Ann., \textbf{251} (1980), no. 1, 57-71.
\bibitem{M}
H. F. M\"{u}nzner, \textit{Isoparametrische Hyperfl\"{a}chen in Sph\"{a}ren II}, Math. Ann., \textbf{256} (1981), no. 2, 215-232.
\bibitem{ZHC}
F. Q. Zeng, Q. He, B. Chen, \textit{The mean curvature flow in Minkowski spaces}, Sci. China Math., \textbf{61} (2018), no. 10, 1833-1850.
\bibitem{RO}
C. Robles, \textit{Geodesics in Randers spaces of constant curvature}, Trans. Amer. Math. Soc., \textbf{359} (2007), no. 4, 1633-1651.
\bibitem{XM}
M. Xu, \textit{The number of geometrically distinct reversible closed geodesics on a Finsler sphere with $K=1$}, arXiv:1801.08868.
\bibitem{BRS}
D. W. Bao, C. Robles and Z. M. Shen, \textit{Zermelo navigation on Riemannian manifolds}, J. Differential Geom., \textbf{66} (2004), no. 3, 391-449.
\bibitem{HM12}
L. B. Huang, X. Mo, \textit{On geodesics of Finsler metrics via navigation problem}, Proc. Amer. Math. Soc., \textbf{139} (2011), no. 8, 3015-3024.
\bibitem{HM1}
L. B. Huang, X. Mo, \textit{On the flag curvature of a class of Finsler metrics produced by the navigation problem}, Pacific J. Math., \textbf{277} (2015), no. 1, 149-168.
\bibitem{XM1}
M. Xu, V. Matveev, K. Yan, S. X. Zhang, \textit{Some geometric correspondences for homothetic navigation}, Publ. Math. Debrecen, \textbf{97} (2020), 449-474.
\bibitem{SZ}
Z. M. Shen, \textit{Lectures on Finsler Geometry}, World Scientific, Singapore, 2001.
\bibitem{SS}
Y. B. Shen, Z. M. Shen, \textit{Introduction to Modern Finsler Geometry}, Higher Education Press, Beijing, 2016.
\bibitem{QM}
Q. M. Wang, \textit{Isoparametric functions on Riemannian manifolds I}, Math. Ann., \textbf{277} (1987), no. 4, 639-646.
\end{thebibliography}
Yali Chen \\
School of Mathematical Sciences, Tongji University, Shanghai, 200092, China\\
E-mail: [email protected]\\
Qun He \\
School of Mathematical Sciences, Tongji University, Shanghai, 200092, China\\
E-mail: [email protected]\\
\end{document}
\begin{document}
\title{Stochastic unfolding and homogenization}
\begin{abstract}
The notion of periodic two-scale convergence and the method of periodic unfolding are prominent and useful tools in multiscale modeling and analysis of PDEs with rapidly oscillating periodic coefficients. In this paper we are interested in the theory of stochastic homogenization for continuum mechanical models in form of PDEs with random coefficients, describing random heterogeneous materials. The notion of periodic two-scale convergence has been extended in different ways to the stochastic case. In this work we introduce a stochastic unfolding method that features many similarities to periodic unfolding. In particular it allows to characterize the notion of stochastic two-scale convergence in the mean by mere weak convergence in an extended space. We illustrate the method on the (classical) example of stochastic homogenization of convex integral functionals, and prove a new result on stochastic homogenization for a non-convex evolution equation of Allen-Cahn type. Moreover, we discuss the relation of stochastic unfolding to previously introduced notions of (quenched and mean) stochastic two-scale convergence. The method described in the present paper extends to the continuum setting the notion of discrete stochastic unfolding, as recently introduced by the second and third author in the context of discrete-to-continuum transition.
\noindent
{\bf Keywords:} stochastic homogenization, unfolding, two-scale convergence, gradient systems
\end{abstract}
\setlength{\parindent}{0pt}
\tableofcontents
\section{Introduction}\label{Intro}
Homogenization theory deals with the derivation of effective, macroscopic models for problems that involve two or more length-scales. Typical examples are continuum mechanical models for microstructured materials that give rise to boundary value problems or evolutionary problems for partial differential equations with coefficients that feature rapid, spatial oscillations. The first results in homogenization theory were motivated by a mechanics problem concerning the determination of the macroscopic behavior of linearly elastic composites with periodic microstructure, see Hill \cite{Hill1963}. In the mathematical community early contributions in the 70s came from the \textit{French school} (e.g.~see \cite{Bensoussan1978} for an early standard reference, and \cite{Tartar1977,Murat1997} for Tartar and Murat's notion of $H$-convergence), the \textit{Russian school} (e.g.~Zhikov, Kozlov and Oleinik, see \cite{Zhikov1979,jikov2012homogenization}), and from the \textit{Italian school} for variational problems (e.g., Marcellini \cite{Marcellini1978}, Spagnolo \cite{Spagnolo1976} for $G$-convergence, and De Giorgi and Franzoni for $\Gamma$-convergence \cite{DeGiorgi1975}). In the 80s and later, homogenization was intensively studied for a variety of models from continuum mechanics including non-convex integral functionals and applications to non-linear elasticity (e.g.~M\"uller \cite{Mueller1987, Geymonat1993} and Braides \cite{Braides1985}), or the topic of effective flow through porous media (e.g.~see Hornung et al. \cite{arbogast1990derivation,Hornung1991,Hornung2012} and Allaire \cite{Allaire1989}). Most results in homogenization theory discuss problems with periodic microstructure, and specific analytic tools for periodic homogenization of linear (or monotone) operators have been developed, including the notions of two-scale convergence and periodic unfolding \cite{nguetseng1989general, allaire1992homogenization, Visintin2006, cioranescu2002periodic}, which by now are standard tools in multiscale modeling and analysis. In the last decade considerable interest in applied mathematics emerged in understanding random heterogeneous materials, i.e.~materials whose properties on a small length-scale are only described on a statistical level, such as polycrystalline composites, foams, or biological tissues, see \cite{Torquato2013} for a standard reference. Although the first results in stochastic homogenization were already obtained in the 70s and 80s for linear elliptic equations and convex minimization problems, see \cite{Papanicolaou1979, Kozlov1979, DalMaso1985,DalMaso1986}, the theory in the stochastic case is still less developed than in the periodic case and is the object of various recent studies, e.g.~regarding error estimates and regularity properties (see \cite{GO11,GO12, GNO1, GNO2, GNOreg, armstrong2016quantitative, Armstrong2017}), or modeling of random heterogeneous materials~\cite{Zhikov2006,Alicandro2011, Cicalese2017, Hornung2017, Heida2017b, HeidaNesenenko2017monotone, Berlyand2017, neukamm2017stochastic}. With the present paper we contribute to the latter. In particular, we introduce a \textit{stochastic unfolding method} that shares many similarities with periodic unfolding and two-scale convergence, with the intention to systematize and simplify the process of lifting results from periodic homogenization to the stochastic case.
We illustrate this by reconsidering stochastic homogenization of convex integral functionals and by proving a new stochastic homogenization result for semilinear gradient flows of Allen-Cahn type. In order to put the notion into perspective, in the following we recall the concepts of two-scale convergence and periodic unfolding.
For problems with periodic coefficients, the notion of (periodic) two-scale convergence was introduced in \cite{nguetseng1989general} and further developed in \cite{allaire1992homogenization,lukkassen2002two}. Two-scale convergence refines weak convergence in $L^p$-spaces: the two-scale limit captures not only the averaged behavior of an oscillating sequence (as opposed to the weak limit), but also oscillations on a prescribed small scale $\varepsilon$. In particular, for $Q\subset \mathbb{R}^d$ and $\Box=[0,1)^d$, a sequence $(u_{\varepsilon})\subset L^p(Q)$ is said to two-scale converge to $u\in L^p(Q\times \Box)$ (as $\varepsilon\to 0$) if
\begin{equation*}
\lim_{\varepsilon \to 0}\int_{Q}u_{\varepsilon}(x)\varphi\brac{x,\frac{x}{\varepsilon}}dx = \int_{Q}\int_{\Box}u(x,y)\varphi(x,y)dy dx,
\end{equation*}
for all $\varphi \in L^q(Q; C_{\#}(\Box))$. Here $C_{\#}(\Box)$ denotes the space of continuous and $\Box$-periodic functions, and $p,q\in (1,\infty)$ are dual exponents.
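As a simple illustration of this notion (a standard example, stated here under the simplifying assumption that $Q$ is bounded and $\varphi\in C(\overline{Q};C_{\#}(\Box))$), the oscillating sequence $u_{\varepsilon}(x):=\varphi\brac{x,\frac{x}{\varepsilon}}$ two-scale converges to $u(x,y)=\varphi(x,y)$, whereas its weak limit in $L^p(Q)$ only retains the cell average,
\begin{equation*}
u_{\varepsilon}\rightharpoonup \int_{\Box}\varphi(\cdot,y)\,dy\qquad\text{weakly in }L^p(Q);
\end{equation*}
in this sense the two-scale limit records the microscopic information that is lost in the weak limit.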
In \cite{arbogast1990derivation}, in the specific context of homogenization of flow through porous media, Arbogast et~al.~introduced a dilation operator to resolve oscillations on a prescribed scale of weakly converging sequences; it turned out that the latter yields a characterization of two-scale convergence (see \cite[Proposition~4.6]{bourgeat1996convergence}). In a similar spirit, Cioranescu et al.~introduced in \cite{cioranescu2002periodic, cioranescu2008periodic} the periodic unfolding method as a systematic approach to homogenization. The key object of this method is a linear isometry $\mathcal{T}_{\varepsilon}^{p}:L^p(Q)\to L^p(Q\times \Box)$ (the periodic unfolding operator) which invokes a change of scales and makes it possible (at the expense of doubling the dimension) to use standard weak and strong convergence theorems in $L^p$-spaces to capture the microscopic behavior of oscillatory sequences. It turned out that the method is well-suited for periodic multiscale problems, e.g.~see~\cite{cioranescu2012periodic,griso2004error,mielke2007two, Visintin2006, neukamm2010homogenization, hanke2011homogenization, liero2015homogenization}. Moreover, the unfolding method allows one to rephrase two-scale convergence: Applied to an oscillatory sequence $(u_{\varepsilon})\subset L^p(Q)$, the unfolded sequence $(\mathcal{T}_{\varepsilon}^{p}u_{\varepsilon})$ weakly converges in $L^p(Q \times \Box)$ if and only if $(u_{\varepsilon})$ two-scale converges, and the corresponding limits are the same. We refer to \cite{mielke2007two} where this perspective on two-scale convergence is investigated and applied in the context of evolutionary problems.
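For orientation (and restricting, for simplicity, to points $x$ whose cell $\varepsilon(\lfloor\frac{x}{\varepsilon}\rfloor+\Box)$ is entirely contained in $Q$, so that the standard treatment of boundary cells is suppressed), the periodic unfolding operator acts as
\begin{equation*}
(\mathcal{T}_{\varepsilon}^{p}u)(x,y)=u\Big(\varepsilon\Big\lfloor\frac{x}{\varepsilon}\Big\rfloor+\varepsilon y\Big),
\end{equation*}
where $\lfloor\cdot\rfloor$ denotes the componentwise integer part; the variable $x$ keeps track of the macroscopic position (through its cell), while $y\in\Box$ resolves the position within the cell.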
Motivated by the idea of (periodic) two-scale convergence, the notion of stochastic two-scale convergence in the mean was introduced in \cite{bourgeat1994stochastic}, suited for homogenization problems that involve random coefficients; see also \cite{andrews1998stochastic}. In stochastic homogenization one typically considers random coefficients of the form $a(\omega,x)=a_0(\tau_x\omega)$ (for $x\in\mathbb{R}^d$), where $\omega$ stands for a ``random configuration'' and $a_0$ is defined on a probability space $(\Omega,\mathcal F,P)$ that is equipped with a measure preserving action $\tau_x:\Omega\to\Omega$, see Section \ref{S_Desc}. A sequence $(u_{\varepsilon})\subset L^p(\Omega\times Q)$ (where $Q\subset\mathbb{R}^d$ denotes a continuum domain) is said to two-scale converge in the mean to some $u\in L^p(\Omega\times Q)$ if
\begin{equation*}
\lim_{\varepsilon\to 0} \int_{\Omega}\int_{Q}u_{\varepsilon}(\omega,x)\varphi(\tau_{\frac{x}{\varepsilon}}\omega,x)dx dP(\omega)= \int_{\Omega}\int_{Q}u(\omega,x)\varphi(\omega,x)dx dP(\omega)
\end{equation*}
for all $\varphi \in L^q(\Omega\times Q)$ satisfying suitable measurability conditions.
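As an elementary illustration (under the simplifying assumption that the integrands below are jointly measurable and integrable, so that Fubini's theorem applies), the stationarily oscillating sequence $u_{\varepsilon}(\omega,x):=v(\tau_{\frac{x}{\varepsilon}}\omega)\eta(x)$ with $v\in L^p(\Omega)$ and $\eta\in L^p(Q)$ two-scale converges in the mean to $u(\omega,x)=v(\omega)\eta(x)$: since $\tau_{\frac{x}{\varepsilon}}$ is measure preserving, for every fixed $x$ we have
\begin{equation*}
\int_{\Omega}v(\tau_{\frac{x}{\varepsilon}}\omega)\eta(x)\varphi(\tau_{\frac{x}{\varepsilon}}\omega,x)\,dP(\omega)=\int_{\Omega}v(\omega)\eta(x)\varphi(\omega,x)\,dP(\omega),
\end{equation*}
so the defining identity even holds for every fixed $\varepsilon>0$.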
Motivated by the concept of the periodic unfolding method, in \cite{neukamm2017stochastic} the second and third author developed a stochastic unfolding method for a discrete-to-continuum analysis of discrete models of random heterogeneous materials. In the present work, we extend the concept to problems defined on continuum domains $Q\subset\mathbb{R}^d$. In particular, we introduce a stochastic unfolding operator $\mathcal{T}_{\varepsilon}: L^p(\Omega\times Q)\to L^p(\Omega\times Q)$ which is an isometric isomorphism (see Section \ref{S_Stoch_1}). It displays properties similar to those of the periodic unfolding operator; in particular, weak convergence of the unfolded sequence $(\mathcal{T}_{\varepsilon}u_{\varepsilon})$ is equivalent to stochastic two-scale convergence in the mean, and -- as in the periodic case -- we recover a compactness statement for two-scale limits of gradients.
A first example that we treat via stochastic unfolding is the classical problem of stochastic homogenization of convex integral functionals. As in the periodic case, the proof of the homogenization theorem via unfolding is merely based on elementary properties of the unfolding operator and on (semi-)continuity of convex functionals (with suitable growth assumptions). The second example we consider is homogenization for gradient flows driven by $\lambda$-convex energies. In particular, we consider an Allen-Cahn type equation with random and oscillating coefficients. The argument follows an abstract strategy for evolutionary $\Gamma$-convergence of gradient systems, see \cite{mielke2016evolutionary} and the references therein (we provide more references in Section 3). The homogenization results that we obtain via stochastic unfolding establish convergence in the \textit{mean} (i.e.~in a statistically averaged sense). This is in contrast to \textit{quenched} homogenization, where a finer topology is considered -- namely convergence for almost every random realization. Although homogenization in the mean (via unfolding) is easier to prove than homogenization in a quenched sense (which in most cases relies on a subadditive ergodic theorem), typically the homogenization limits in both cases are the same, see Section~\ref{Section:3:3}. One thus might view stochastic unfolding as a convenient and easy tool to rigorously identify homogenized models.
The alternative, \textit{quenched} notion of stochastic two-scale convergence was introduced by Zhikov and Piatnitski in \cite{Zhikov2006}. In a very general setting, they introduced two-scale convergence on random measures as a generalization of periodic two-scale convergence as presented in \cite{Zhikov2000}. In this work, we restrict to the simplest case where the random measure is the Lebesgue measure. The concept of stochastic two-scale convergence in \cite{Zhikov2006} is based on Birkhoff's ergodic theorem.
Although the definition of (quenched) stochastic two-scale convergence, which we recall in Section~\ref{Section_4}, and two-scale convergence in the mean look quite similar, it is non-trivial to relate both notions. In this paper we investigate this issue and provide some tools that allow to draw conclusions on quenched homogenization from mean homogenization. As an example we treat convex integral functionals. For the analysis, we appeal to Young measures generated by stochastically two-scale convergent sequences in the mean and in particular establish a compactness result (see Theorem \ref{thm:Balder-Thm-two-scale} and Lemma \ref{lem:Balder-Lem-two-scale}). Moreover, we exploit a lower semicontinuity result of convex integral functionals w.r.t.~quenched stochastic two-scale convergence that has been recently obtained by the first author and Nesenenko in \cite{HeidaNesenenko2017monotone}.
\textbf{Structure of the paper.} In Section \ref{S_Stoch} we introduce the standard setting for stochastic homogenization, introduce the notion of stochastic unfolding and derive the most significant properties of the unfolding operator. In the following Section \ref{Section_Applications} two examples of the homogenization procedure via stochastic unfolding are presented. Namely, Section \ref{Section_Convex} is dedicated to homogenization of convex functionals and in Section \ref{Section_3.2} homogenization for Allen-Cahn type gradient flows is provided. In Section \ref{Section_4} we discuss the relations of stochastic unfolding and quenched stochastic two-scale convergence. Section~\ref{S_Stoch} and ~\ref{Section_Convex}, which contain the basic concepts and the application to convex homogenization, are self-contained and require only basic input from functional analysis. Section~\ref{Section_3.2} and Section~\ref{Section_4} require some advanced tools from analysis and measure theory.
\section{Stochastic unfolding and properties}\label{S_Stoch}
\subsection{Description of random media - a functional analytic framework}\label{S_Desc}
To fix ideas we consider for a moment the setup of Papanicolaou and Varadhan \cite{Papanicolaou1979} for homogenization of elliptic operators of the form $-\nabla\cdot a(\frac{x}{\varepsilon})\nabla$ with a coefficient field $a:\mathbb{R}^d\to\mathbb{R}^{d\times d}$. In the stochastic case the coefficients are assumed to be random and thus $a$ can be viewed as a family of random variables $\{a(x)\}_{x\in\mathbb{R}^d}$. A minimal requirement for stochastic homogenization of such operators is that the distribution of the coefficient field is \textit{stationary} and \textit{ergodic}. Stationarity means that the coefficients are statistically homogeneous (i.e.~for any finite set of points $x_1,\ldots,x_n\in\mathbb{R}^d$ the joint distribution of the \textit{shifted} random variables $a(x_1+z),\ldots,a(x_n+z)$ is independent of $z\in\mathbb{R}^d$), while \textit{ergodicity} (see below for the precise definition) is an assumption that ensures a separation of scales in the sense that long-range correlations of the coefficients become negligible in the large scale limit, e.g.~$\mbox{cov}[\fint_{B+z}a,\fint_Ba]\to 0$ as $z\to\infty$. In \cite{Papanicolaou1979}, Papanicolaou and Varadhan introduced a (by now standard) setup that allows to phrase these conditions in the following functional analytic framework (see also \cite{jikov2012homogenization}):
\begin{assumption} \label{Assumption_2_1}
Let $\brac{\Omega,\mathcal{F},P}$ denote a probability space with a countably generated $\sigma$-algebra, and let $\tau=\cb{\tau_x}_{x\in \re{d}}$ denote a group of measurable bijections $\tau_x:\Omega\to \Omega$ such that
\begin{enumerate}[(i)]
\item (group property). $\tau_0=Id$ and $\tau_{x+y}=\tau_x\circ \tau_y$ for all $x,y\in \re{d}$,
\item (measure preserving). $P(\tau_x A)=P(A)$ for all $A\in \mathcal{F}$ and $x\in \re{d}$,
\item (measurability). $(\omega,x)\mapsto \tau_{x}\omega$ is $\mathcal{F}\otimes \mathcal{L}$-measurable ($\mathcal{L}$ denotes the Lebesgue-$\sigma$-algebra on $\mathbb{R}^d$).
\end{enumerate}
\end{assumption}
From now on we assume that $(\Omega,\mathcal F,P,\tau)$ satisfies these assumptions and we write $\langle\cdot\rangle:=\int_\Omega\cdot\, dP$ as a shorthand for the expectation.
In the functional analytic setting, a random coefficient field is described by a map $a:\Omega\times \mathbb{R}^d\to\mathbb{R}^{d\times d}$ with the interpretation that $a(\omega,\cdot):\mathbb{R}^d\to\mathbb{R}^{d\times d}$ with $\omega\in\Omega$ sampled according to $P$ yields a realization of the random coefficient field. Likewise, solutions to an associated PDE with physical domain $Q\subset\mathbb{R}^d$ might be considered as \textit{random} functions, i.e.~quantities defined on the product $\Omega\times Q$. In this paper we denote by $L^p(\Omega)$ and $L^p(Q)$ (with $Q\subset\mathbb{R}^d$ open) the usual Banach spaces of $p$-integrable functions defined on $(\Omega,\mathcal F,P)$ and $Q$, respectively. We introduce function spaces for functions defined on $\Omega\times Q$ as follows: For closed subspaces $X\subset L^p(\Omega)$ and $Y\subset L^p(Q)$ (resp. $Y\subset W^{1,p}(Q)$) we denote by $X\otimes Y$ the closure of $$X\overset{a}{\otimes}Y:=\cb{\sum_{i=1}^{n}\varphi_i \eta_i: \varphi_i \in X, \eta_i\in Y, n\in \mathbb{N}}$$ in $L^p(\Omega;L^p(Q))$ (resp. $L^p(\Omega;W^{1,p}(Q))$, with a slight abuse of notation we use ``$X\otimes Y$'' for both types of spaces). Since the probability space is countably generated, $L^p(\Omega)$ (with $1\leq p<\infty$) is separable, and thus we have $L^p(\Omega)\otimes L^p(Q)=L^p(\Omega\times Q)=L^p(\Omega;L^p(Q))$ up to isometric isomorphisms. We therefore simply write $L^p(\Omega\times Q)$ instead of $L^p(\Omega)\otimes L^p(Q)$.
In the functional analytic setting and in view of the measure preserving property of $\tau$, the requirement of stationarity can be rephrased as the assumption that the coefficient field can be written in the form $a(\omega,x)=a_0(\tau_x\omega)$ for some measurable map $a_0:\Omega\to\mathbb{R}^{d\times d}$. The transition from $a_0$ to $a$ conserves measurability. As usual we denote by $\mathcal B(Q)$ (resp. $\mathcal L(Q)$) the Borel (resp. Lebesgue)-$\sigma$-algebra on $Q\subset\mathbb{R}^d$. The proof of the following lemma is obvious and therefore we do not present it.
\begin{lemma}[Stationary extension]\label{L:stat}
Let $\varphi:\Omega\to\mathbb{R}$ be $\mathcal F$-measurable. Then $S\varphi:\Omega\times Q\to\mathbb{R}$, $S\varphi(\omega,x):=\varphi(\tau_x\omega)$ defines an $\mathcal F\otimes\mathcal L(Q)$-measurable function -- called the stationary extension of $\varphi$. Moreover, if $Q$ is bounded, for all $1\leq p<\infty$ the map $S:L^p(\Omega)\to L^p(\Omega\times Q)$ is a linear injection satisfying
\begin{equation*}
\|S\varphi\|_{L^p(\Omega\times Q)}=|Q|^\frac{1}{p}\|\varphi\|_{L^p(\Omega)}.
\end{equation*}
\end{lemma}
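For the reader's convenience we record the short computation behind the isometry property: by Fubini's theorem and the measure preserving property of $\tau$,
\begin{equation*}
\|S\varphi\|_{L^p(\Omega\times Q)}^{p}=\int_{Q}\int_{\Omega}|\varphi(\tau_x\omega)|^{p}\,dP(\omega)\,dx=\int_{Q}\int_{\Omega}|\varphi(\omega)|^{p}\,dP(\omega)\,dx=|Q|\,\|\varphi\|_{L^p(\Omega)}^{p}.
\end{equation*}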
The assumption of ergodicity can be phrased as follows: We say $(\Omega,\mathcal F,P,\tau)$ is \textit{ergodic} (we also say $\langle\cdot\rangle$ is ergodic), if
\begin{align*}
\text{ every shift invariant }A\in \mathcal{F} \text{ (i.e.~}\tau_x A=A \text{ for all }x\in \re d)\text{ satisfies } P(A)\in \cb{0,1}.
\end{align*}
In this case the celebrated ergodic theorem of Birkhoff applies, which we recall in the following form:
\begin{thm}[{Birkhoff's ergodic Theorem \cite[Theorem 10.2.II]{Daley1988}}]
\label{thm:ergodic-thm} Let $\langle\cdot\rangle$ be ergodic and $\varphi:\Omega\to\mathbb{R}$ be integrable. Then for $P$-a.e.~$\omega\in\Omega$ it holds: $ S\varphi(\omega,\cdot)$ is locally integrable and for all open, bounded sets $Q\subset\mathbb{R}^d$ we have
\begin{equation}
\lim_{{\varepsilon}\rightarrow0}\int_{Q}S\varphi(\omega,\tfrac{x}{\varepsilon})\,dx=|Q|\langle\varphi\rangle\,.\label{eq:ergodic-thm}
\end{equation}
Furthermore, if $\varphi\in L^p(\Omega)$ with $1\leq p\leq\infty$, then for $P$-a.e.~$\omega\in\Omega$ it holds: $S\varphi(\omega,\cdot)\in L_{loc}^{p}(\mathbb{R}^d)$, and provided $p<\infty$ it holds $S\varphi(\omega,\frac{\cdot}{\varepsilon})\rightharpoonup \langle\varphi\rangle$ weakly in $L_{loc}^{p}(\mathbb{R}^d)$ as ${\varepsilon}\rightarrow0$.
\end{thm}
\begin{example}\label{example:1}
Basic examples of stationary and ergodic systems include the random checkerboard (e.g.~see \cite[Example 2.12]{Neukamm_lecture}) and Gaussian random fields (e.g.~see \cite[Example 2.13]{Neukamm_lecture}).
We remark that the setting of periodic homogenization fits into this framework as well. In particular, $\Omega = \Box$ equipped with the Lebesgue $\sigma$-algebra and the Lebesgue measure, together with the shift $\tau_{x}y = y + x \mod 1$, defines a system satisfying Assumption~\ref{Assumption_2_1} and ergodicity. We refer to \cite{duerinckx2017weighted} for further examples of stationary and ergodic systems.
\end{example}
\subsection{Stochastic unfolding operator and two-scale convergence in the mean}\label{S_Stoch_1}
In the following we introduce the stochastic unfolding operator, which is a key object in this paper. It is a linear, $\varepsilon$-parametrized, isometric isomorphism $\mathcal{T}_{\varepsilon}$ on $L^p(\Omega\times Q)$ where $Q\subset\mathbb{R}^d$ denotes an open set which we think of as the domain of a PDE.
\begin{lemma}\label{L:unf}
Let $\varepsilon>0$, $1<p<\infty$, $q:=\frac{p}{p-1}$, and $Q\subset\mathbb{R}^d$ be open. There exists a unique linear isometric isomorphism
\begin{equation*}
\mathcal{T}_{\varepsilon}: L^p(\Omega \times Q)\rightarrow L^p(\Omega \times Q)
\end{equation*}
such that
\begin{equation*}
\forall u\in L^p(\Omega)\overset{a}{\otimes} L^p(Q)\,:\qquad (\mathcal{T}_{\varepsilon} u)(\omega,x)=u(\tau_{-\frac{x}{\varepsilon}}\omega,x)\qquad \text{a.e. in }\Omega\times Q.
_{\varepsilon}nd{equation*}
Moreover, its adjoint is the unique linear isometric isomorphism $\mathcal{T}_{\varepsilon}^{*}:L^q(\Omega \times Q)\to L^q(\Omega \times Q)$ that satisfies $(\mathcal{T}_{\varepsilon}^{*}u)(\omega,x)=u(\tau_{\frac{x}{\varepsilon}}\omega,x)$ a.e.~in $\Omega\times Q$ for all $u\in L^q(\Omega)\overset{a}{\otimes}L^q(Q)$.
(For the proof see Section \ref{S_Proofs}.)
_{\varepsilon}nd{lemma}
\begin{defn}[Unfolding operator and two-scale convergence in the mean]\label{def46}
The operator $\mathcal{T}_{\varepsilon}:L^p(\Omega \times Q)\to L^p(\Omega \times Q)$ of Lemma~\ref{L:unf} is called the stochastic unfolding operator. We say that a sequence $(u_{\varepsilon}) \subset L^p(\Omega \times Q)$ weakly (strongly) two-scale converges in the mean in $L^p(\Omega \times Q)$ to $u\in L^p(\Omega \times Q)$ if (as $\varepsilon\to 0$)
\begin{equation*}
\mathcal{T}_{\varepsilon} u_{\varepsilon} \rightarrow u \quad \text{ weakly (strongly) in }L^p(\Omega \times Q).
_{\varepsilon}nd{equation*}
In this case we write $u_{\varepsilon}\wt u$ (resp. $u_{\varepsilon} \st u$) in $L^p(\Omega\times Q)$.
_{\varepsilon}nd{defn}
See Remark \ref{R_Stoch_1} for an explanation of the origin of the term \textit{weak/strong stochastic two-scale convergence in the mean} used for the above notion of convergence.
To motivate the definition, let $u_{\varepsilon}\in H^1_0(Q)$ denote a (distributional) solution to $-\nabla\cdot a_{\varepsilon}(x)\nabla u_{\varepsilon}=f$ in $Q$, where $a_{\varepsilon}$ is a family of uniformly elliptic, random coefficient fields of the form $a_{\varepsilon}(\omega, x)=a_0(\tau_{\frac{x}{\varepsilon}}\omega)$. The main difficulty in homogenization of this PDE is the passage to the limit $\varepsilon\to 0$ in the product $a_{\varepsilon} \nabla u_{\varepsilon}$, since both factors in general only weakly converge. The stochastic unfolding operator $\mathcal{T}_{\varepsilon}$ turns this expression into a product of a strongly and a weakly convergent sequence in $L^2(\Omega\times Q)$: Indeed, we have $\mathcal{T}_{\varepsilon}(a_{\varepsilon}\nabla u_{\varepsilon})=a_0(\mathcal{T}_{\varepsilon}\nabla u_{\varepsilon})$ and thus it remains to characterize the weak limit of $\mathcal{T}_{\varepsilon}\nabla u_{\varepsilon}$, as will be done in the next section.
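In fact, the identity $\mathcal{T}_{\varepsilon}(a_{\varepsilon}\nabla u_{\varepsilon})=a_0\,\mathcal{T}_{\varepsilon}(\nabla u_{\varepsilon})$ is an immediate consequence of the definition of $\mathcal{T}_{\varepsilon}$ and of stationarity: working with measurable representatives (cf.~the remark on measurability below), for a.e.~$(\omega,x)\in\Omega\times Q$ we have
\begin{equation*}
\mathcal{T}_{\varepsilon}\big(a_{\varepsilon}\nabla u_{\varepsilon}\big)(\omega,x)
=a_0\big(\tau_{\frac{x}{\varepsilon}}\tau_{-\frac{x}{\varepsilon}}\omega\big)\,\nabla u_{\varepsilon}(\tau_{-\frac{x}{\varepsilon}}\omega,x)
=a_0(\omega)\,\big(\mathcal{T}_{\varepsilon}\nabla u_{\varepsilon}\big)(\omega,x).
\end{equation*}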
Since $\mathcal{T}_{\varepsilon}$ is an isometry, we obtain the following properties, which resemble the key properties of the periodic unfolding method. The lemma below follows directly from the isometry property of $\mathcal{T}_{\varepsilon}$ and the usual properties of weak and strong convergence in $L^p(\Omega\times Q)$; we therefore omit its proof.
\begin{lemma}[Basic properties]\label{lemma_basics} Let $p\in (1,\infty)$, $q=\frac{p}{p-1}$ and $Q\subset \mathbb{R}^d$ be open.
Consider sequences $(u_{\varepsilon}) \subset L^p(\Omega \times Q)$ and $(v_{\varepsilon}) \subset L^q(\Omega \times Q)$.
\begin{enumerate}[(i)]
\item (Boundedness and lower-semicontinuity of the norm). If $u_{\varepsilon} \wt u$, then \\ $\sup_{\varepsilon\in (0,1)}\ns{u_{\varepsilon}}< \infty$ and $\ns{u}\leq \liminf_{\varepsilon\to 0}\ns{u_{\varepsilon}}$.
\item (Compactness of bounded sequences). If $\limsup_{\varepsilon\rightarrow 0}\ns{u_{\varepsilon}}<\infty$, then there exists a subsequence $\varepsilon'$ and $u\in L^p(\Omega \times Q)$ such that $u_{\varepsilon'} \wt u$ in $L^p(\Omega \times Q)$.
\item (Characterization of strong two-scale convergence). $u_{\varepsilon} \st u$ if and only if $u_{\varepsilon} \wt u$ in $L^p(\Omega\times Q)$ and $\ns{u_{\varepsilon}}\to \ns{u}$.
\item (Products of strongly and weakly two-scale convergent sequences). If $u_{\varepsilon} \wt u$ in $L^p(\Omega \times Q)$ and $v_{\varepsilon} \st v$ in $L^q(\Omega \times Q)$, then
\begin{equation*}
_{\varepsilon}x{\int_Qu_{\varepsilon}(\omega,x) v_{\varepsilon} (\omega,x) dx}\rightarrow _{\varepsilon}x{\int_Qu(\omega,x)v(\omega,x) dx}.
_{\varepsilon}nd{equation*}
_{\varepsilon}nd{enumerate}
_{\varepsilon}nd{lemma}
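For illustration, we indicate the short argument behind (iv): since the adjoint of $\mathcal{T}_{\varepsilon}$ inverts the unfolding on $L^q(\Omega\times Q)$ (Lemma~\ref{L:unf}), we have
\begin{equation*}
\int_\Omega\int_Q u_{\varepsilon}\,v_{\varepsilon}\,dx\,dP
=\int_\Omega\int_Q (\mathcal{T}_{\varepsilon} u_{\varepsilon})\,(\mathcal{T}_{\varepsilon} v_{\varepsilon})\,dx\,dP,
\end{equation*}
and the right-hand side converges to $\int_\Omega\int_Q u\,v\,dx\,dP$ as the integral of a product of a weakly and a strongly convergent sequence.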
\begin{remark}
The stochastic unfolding operator shares many similarities with the periodic unfolding operator; however, we would like to point out one considerable difference. In the periodic case, if a sequence $(u_{\varepsilon})\subset L^p(Q)$ satisfies $u_{\varepsilon}\to u$ strongly in $L^p(Q)$, it follows that $\mathcal{T}_{\varepsilon}^p u_{\varepsilon}\to u$ strongly in $L^p(Q\times \Box)$ (see e.g. \cite[Proposition 2.4]{mielke2007two}). In the stochastic case this does not hold in general; indeed, even for a fixed function $u \in L^p(\Omega\times Q)$, in general $\mathcal{T}_{\varepsilon} u \rightharpoonup u$ fails. However, if $_{\varepsilon}x{\cdot}$ is ergodic, Proposition \ref{prop1} below implies that for a bounded sequence $(u_{\varepsilon}) \subset L^{p}(\Omega)\otimes W^{1,p}(Q)$ with $u_{\varepsilon}\rightharpoonup u$ weakly in $L^p(\Omega\times Q)$ it holds that $u_{\varepsilon}\overset{2}{\rightharpoonup} _{\varepsilon}x{u}$. In this respect, stochastic two-scale convergence might be viewed as an ergodic theorem for weakly convergent sequences.
_{\varepsilon}nd{remark}
\begin{remark}
The choice $\Omega=\Box =[0,1)^d$ (see Example \ref{example:1}) provides us with a tool for periodic homogenization (we might call it \textit{periodic unfolding in the mean}). However, we remark that the notion of convergence obtained via the unfolding operator (in the mean) differs slightly from the notion used in standard periodic homogenization results (e.g. those based on the usual periodic unfolding operator). To see this, consider a standard convex periodic homogenization problem: let $V: \Box \times \mathbb{R}^d \to \mathbb{R}$ be convex in its second variable (see Section \ref{Section_Convex} for the precise assumptions) and, for $y\in \Box$, let $u_{\varepsilon}(y,\cdot) \in H^1_0(Q)$ be the minimizer of the functional
\begin{equation*}
\mathcal{E}_{\varepsilon}^{y}: H^1_0(Q)\to \mathbb{R}, \quad {\mathcal{E}}_{\varepsilon}^{y}(u)= \int_{Q}V(\tau_{\frac{x}{\varepsilon}}y,\nabla u(x))dx.
_{\varepsilon}nd{equation*}
Let $u\in H^1_0(Q)$ be the minimizer of the homogenized functional ${\mathcal{E}}_{\mathsf{hom}}: H^1_0(Q)\to \mathbb{R}$, ${\mathcal{E}}_{\mathsf{hom}} (u)= \int_{Q}V_{hom}(\nabla u(x))dx$ (see Section \ref{Section_Convex} for the formula for $V_{\mathsf{hom}}$).
Theorem \ref{thm2} below implies that $\int_{\Box}u_{\varepsilon}(y,\cdot)dy \to u(\cdot)$ in $L^2(Q)$, whereas classical periodic results (e.g. \cite{cioranescu2004homogenization}) include the convergence $u_{\varepsilon}(y)\to u$ in $L^2(Q)$ for all $y\in \Box$. Note that, in the case of a strongly convex integrand (see Proposition \ref{prop3}), we might recover the convergence $u_{\varepsilon}(y)\to u$ in $L^2(Q)$, but merely for a.e. $y\in \Box$ and for a subsequence (which might depend on the choice of the exceptional set in $\Box$).
_{\varepsilon}nd{remark}
For homogenization of variational problems (in particular, convex integral functionals) the following transformation and (lower semi-)continuity properties are useful.
\begin{prop}\label{P_Cont_1}
Let $p\in (1,\infty)$ and $Q\subset \mathbb{R}^d$ be open and bounded. Let $V: \Omega \times Q \times \mathbb{R}^{m}\to \mathbb{R}$ be such that $V(\cdot,\cdot,F)$ is $\mathcal{F} \otimes \mathcal{L}(Q)$-measurable for all $F\in \mathbb{R}^m$ and $V(\omega,x,\cdot)$ is continuous for a.e. $(\omega,x)\in \Omega \times Q$. Also, we assume that there exists $C>0$ such that for a.e. $(\omega,x)\in \Omega\times Q$
\begin{equation*}
|V(\omega, x, F)|\leq C(1+|F|^p), \quad \text{for all }F\in \mathbb{R}^m.
_{\varepsilon}nd{equation*}
\begin{enumerate}[(i)]
\item We have
\begin{equation}\label{intform}
\forall u\in L^p(\Omega \times Q)^{m}\quad _{\varepsilon}x{\int_Q V(\tau_{\frac{x}{\varepsilon}}\omega, x,u(\omega,x))dx}=_{\varepsilon}x{\int_Q V(\omega, x, \mathcal{T}_{\varepsilon} u(\omega,x))dx} \,.
_{\varepsilon}nd{equation}
\item
If $u_{\varepsilon} \overset{2s}{\to} u$ in $L^p(\Omega\times Q)^m$, then
\begin{equation*}
\lim_{\varepsilon\to 0}_{\varepsilon}x{\int_{Q}V(\tau_{\frac{x}{\varepsilon}}\omega,x, u_{\varepsilon}(\omega,x))dx} = _{\varepsilon}x{\int_{Q} V(\omega, x, u(\omega,x))dx}.
_{\varepsilon}nd{equation*}
\item We additionally assume that for a.e. $(\omega,x)\in \Omega\times Q$, $V(\omega, x,\cdot)$ is convex. Then, if $u_{\varepsilon} \overset{2s}{\rightharpoonup} u$ in $L^p(\Omega\times Q)^m$,
\begin{equation*}
\liminf_{\varepsilon\to 0}_{\varepsilon}x{\int_{Q} V(\tau_{\frac{x}{\varepsilon}} \omega,x,u_{\varepsilon}(\omega,x)) dx}\geq _{\varepsilon}x{\int_{Q} V(\omega, x, u(\omega,x))dx}.
_{\varepsilon}nd{equation*}
_{\varepsilon}nd{enumerate}
_{\varepsilon}nd{prop}
\textit{(For the proof see Section \ref{S_Proofs}.)}
\begin{remark}[A technical remark about measurability]
The stochastic unfolding operator $\mathcal{T}_{\varepsilon}$ is defined as a linear operator on the Banach space $L^p(\Omega\times Q)$, which is convenient since it spares us (fruitless) discussions of measurability properties. The elements of $L^p(\Omega\times Q)$ are, strictly speaking, not functions but equivalence classes of functions that coincide a.e.~in $\Omega\times Q$. In general, a representative $\tilde u$ of an element of $L^p(\Omega\times Q)$ is only measurable w.r.t.~the completion of the product $\sigma$-algebra $\mathcal F\otimes\mathcal L(Q)$, and the map $(\omega,x)\mapsto \tilde u(\tau_{x}\omega,x)$ might therefore fail to be measurable. However, if $\tilde u$ is $\mathcal F\otimes\mathcal L(Q)$-measurable (e.g. if $\tilde u\in L^p(\Omega)\overset{a}{\otimes}L^p(Q)$), then $\tilde u_{\varepsilon}(\omega,x):=\tilde u(\tau_{\frac{x}{\varepsilon}}\omega,x)$ is $\mathcal F\otimes \mathcal L(Q)$-measurable. In particular, since $L^p(\Omega)\overset{a}{\otimes}L^p(Q)$ is dense in $L^p(\Omega\times Q)$, for any $u\in L^p(\Omega\times Q)$ we can find an $\mathcal F\otimes\mathcal L(Q)$-measurable representative $\tilde u:\Omega\times Q\to\mathbb{R}$, and we have $\mathcal{T}_{\varepsilon} u=\tilde u_{\varepsilon}$ a.e.~in $\Omega\times Q$.
_{\varepsilon}nd{remark}
\begin{remark}[{Comparison to the notion of} \cite{bourgeat1994stochastic}]\label{R_Stoch_1}
The notion of weak two-scale convergence in the mean of Definition~\ref{def46}, i.e.~the weak convergence of the unfolded sequence, coincides with the convergence introduced in \cite{bourgeat1994stochastic} (see also \cite{andrews1998stochastic}). More precisely, for a bounded sequence $(u_{\varepsilon})\subset L^p(\Omega\times Q)$ we have $u_{\varepsilon}\wt u$ in $L^p(\Omega\times Q)$ (in the sense of Definition~\ref{def46}) if and only if $u_{\varepsilon}$ stochastically 2-scale converges in the mean to $u$ in the sense of \cite{bourgeat1994stochastic}, i.e.
\begin{equation}\label{eq:1}
\lim_{\varepsilon\rightarrow 0}_{\varepsilon}x{\int_{Q}u_{\varepsilon}(\omega,x)\varphi(\tau_{\frac{x}{\varepsilon}}\omega,x)dx}=_{\varepsilon}x{ \int_{Q}u(\omega,x)\varphi(\omega,x)dx},
_{\varepsilon}nd{equation}
for any $\varphi\in L^q(\Omega\times Q)$ that is admissible (in the sense that the transformation $(\omega,x)\mapsto \varphi(\tau_{\frac{x}{\varepsilon}}\omega,x)$ is well-defined). Indeed, with the help of $\mathcal{T}_{\varepsilon}$ (and its adjoint) we can rewrite the integral on the left-hand side of \eqref{eq:1} as
\begin{equation}\label{eq:1234}
_{\varepsilon}x{\int_{Q}u_{\varepsilon}(\mathcal{T}_{\varepsilon}^{*}\varphi)\, dx}=_{\varepsilon}x{\int_{Q}(\mathcal{T}_{\varepsilon} u_{\varepsilon})\varphi dx},
_{\varepsilon}nd{equation}
which proves the equivalence. In view of this equivalence, we use the terms \textit{weak} and \textit{strong stochastic two-scale convergence in the mean} instead of speaking of \textit{weak} or \textit{strong convergence of unfolded sequences}.
_{\varepsilon}nd{remark}
\subsection{Two-scale limits of gradients}\label{S_Two}
\def\per{{\sf per}}
As for periodic homogenization via periodic unfolding or two-scale convergence, also in the stochastic case it is important to understand the interplay between the unfolding operator and the gradient operator, and to characterize two-scale limits of gradient fields. As a motivation we first recall the periodic case. A standard result states that from any bounded sequence $(u_{\varepsilon})\subset W^{1,p}(Q)$ we can extract a subsequence such that $u_{\varepsilon}$ converges weakly in $W^{1,p}(Q)$ to a single-scale function $u\in W^{1,p}(Q)$ and $\nabla u_{\varepsilon}$ weakly two-scale converges to a limit of the form $\nabla u(x)+\chi(x,y)$, where $\chi$ is a vector field in $L^p(Q)\otimes L^p_{\per}(\Box)^d$, $L^p_{\per}(\Box)$ denoting the space of locally $p$-integrable, $\Box$-periodic functions on $\mathbb{R}^d$, and $\chi$ is mean-free and curl-free w.r.t. $y\in \Box:=[0,1)^d$. Since $\Box$ is compact, such vector fields can be represented with the help of a periodic potential, i.e.~there exists $\varphi\in L^p(Q,W^{1,p}_{\per}(\Box))$ s.t. $\chi(x,y)=\nabla_y\varphi(x,y)$ for a.e.~$(x,y)$. A helpful example to keep in mind is $u_{\varepsilon}(x):=\varepsilon\varphi(\frac{x}{\varepsilon})\eta(x)$ with $\eta\in W^{1,p}(Q)$ and $\varphi\in C^\infty_{\per}(\Box)$. Then a direct calculation (spelled out below) shows that $\nabla u_{\varepsilon}(x)=\nabla_y\varphi(\tfrac{x}{\varepsilon})\eta(x)+O(\varepsilon)$, which obviously two-scale converges to $\nabla_y\varphi(y)\eta(x)$.
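For the reader's convenience, the direct calculation reads
\begin{equation*}
\nabla u_{\varepsilon}(x)=\nabla_y\varphi\big(\tfrac{x}{\varepsilon}\big)\,\eta(x)+\varepsilon\,\varphi\big(\tfrac{x}{\varepsilon}\big)\,\nabla\eta(x),
\end{equation*}
and the second term is of order $\varepsilon$ in $L^p(Q)$ and therefore does not contribute to the two-scale limit.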
In the stochastic case the torus of the periodic setting (represented above by $\Box$) is replaced by the probability space $\Omega$, and periodic functions (e.g.~$\varphi$ above) are conceptually replaced by stationary functions, i.e.~functions of the form $S\varphi(\omega,x)=\varphi(\tau_{x}\omega)$ with $\varphi:\Omega\to\mathbb{R}$ measurable. To proceed, we need to introduce an analogue of the gradient $\nabla_{y}$ and of its domain $W^{1,p}_{\per}(\Box)$ in the stochastic setting. As illustrated below, the shift group $\tau$, together with standard concepts from functional analysis, leads to a horizontal gradient $D$ and the space $W^{1,p}(\Omega)$. With the help of these objects we prove, as in the periodic case, that any bounded sequence in $L^p(\Omega)\otimes W^{1,p}(Q)$ admits (up to extraction of a subsequence) a weak two-scale limit $u$, and that the sequence of gradients converges weakly two-scale to a limit of the form $\nabla u+\chi$, where $\chi$ is $D$-curl-free w.r.t.~$\omega$. A difference from the periodic case worth pointing out is that $\chi$ in general does not admit a representation by means of a stationary potential.
In order to implement the above philosophy we require some input from functional analysis, which we recall from the original work by Papanicolaou and Varadhan \cite{Papanicolaou1979} (see also \cite{jikov2012homogenization}). We consider the group of isometric operators $\cb{U_x:x\in \re{d}}$ on $L^p(\Omega)$ defined by $U_x \varphi(\omega)=\varphi(\tau_{x}\omega)$. This group is strongly continuous (see \cite[Section 7.1]{jikov2012homogenization}). For $i=1,...,d$, we consider the 1-parameter group of operators $\cb{U_{h e_i}:h\in \re{}}$ ($\cb{e_i}$ being the usual basis of $\re d$) and its infinitesimal generator $D_i:\mathcal{D}_i\subset L^p(\Omega)\rightarrow L^p(\Omega)$
\begin{equation*}
D_i \varphi=\lim_{h\rightarrow 0} \frac{U_{he_i}\varphi-\varphi}{h},
_{\varepsilon}nd{equation*}
which we refer to as \textit{horizontal} derivative. $D_i$ is a linear and closed operator and the associated domain $\mathcal{D}_i$ is dense in $L^p(\Omega)$. We set $W^{1,p}(\Omega)=\cap_{i=1}^{d}\mathcal{D}_i$ and define for $\varphi\in W^{1,p}(\Omega)$ the horizontal gradient as $D\varphi=(D_1 \varphi,...,D_d \varphi)$. In this manner, we obtain a linear, closed and densely defined operator $D:W^{1,p}(\Omega)\rightarrow L^p(\Omega)^d$, and we denote by
\begin{equation*}
L^p_{\varphiot}(\Omega):=\overline{\mathcal R(D)}\subset L^p(\Omega)^d
_{\varepsilon}nd{equation*}
the closure of the range of $D$ in $L^p(\Omega)^d$. We denote the adjoint of $D$ by $D^*:\mathcal{D}^*\subset{L^q(\Omega)^d}\rightarrow L^q(\Omega)$ which is a linear, closed and densely defined operator ($\mathcal{D}^*$ is the domain of $D^*$).
Note that $W^{1,q}(\Omega)^d\subset \mathcal{D}^*$ and for all $\varphi\in W^{1,p}(\Omega)$ and $\varphisi\in W^{1,q}(\Omega)$ ($i=1,...,d$) we have the integration by parts formula
\begin{equation*}
_{\varepsilon}x{D_i \varphi \varphisi}=-_{\varepsilon}x{\varphi D_i \varphisi},
_{\varepsilon}nd{equation*}
and thus $D^*\varphisi=-\sum_{i=1}^d D_i \varphisi_i$ for $\varphisi\in W^{1,q}(\Omega)^d$. We define the subspace of shift invariant functions in $L^p(\Omega)$ by
\begin{equation*}
L^p_{{{\textsf{inv}}}}(\Omega)=\cb{\varphi\in L^p(\Omega):U_x \varphi=\varphi \quad \text{for all }x \in \re{d}},
_{\varepsilon}nd{equation*}
and denote by $ P_{{\textsf{inv}}}:L^p(\Omega) \to L^p_{{\textsf{inv}}}(\Omega)$ the conditional expectation with respect to the $\sigma$-algebra of shift invariant sets $\cb{ A \in \mathcal{F} : \tau_x A = A \text{ for all } x\in \re d}$. It is a contractive projection and for $p=2$ it coincides with the orthogonal projection onto $L^2_{{\textsf{inv}}}(\Omega)$.
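To fix ideas, in the periodic setting of Example~\ref{example:1} (i.e.~$\Omega=\Box$ with the Lebesgue measure and $\tau_x y=y+x\ \mathrm{mod}\ 1$) these objects reduce, up to the usual identifications, to familiar ones: $U_x$ is the shift by $x$, the horizontal gradient $D$ is the weak gradient of $\Box$-periodic functions with domain $W^{1,p}(\Omega)\simeq W^{1,p}_{\per}(\Box)$, the shift invariant functions are the (a.e.) constant functions, $P_{{\textsf{inv}}}\varphi=\int_\Box\varphi(y)\,dy$, and it is readily checked that the closure of $\mathcal R(D)$ consists of the gradients of $W^{1,p}_{\per}(\Box)$-functions.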
\begin{prop}[Compactness]\label{prop1}
Let $p\in (1,\infty)$ and $Q\subset \mathbb{R}^d$ be open. Let $(u_{\varepsilon})$ be a bounded sequence in $L^p(\Omega)\otimes W^{1,p}(Q)$. Then, there exist $u\in L^p_{{{\textsf{inv}}}}(\Omega)\otimes W^{1,p}(Q)$ and $\chi\in L^p_{\varphiot}(\Omega)\otimes L^p(Q)$ such that (up to a subsequence)
\begin{equation}\label{equation1}
u_{\varepsilon} \wt u \text{ in }L^p(\Omega \times Q), \quad \nabla u_{\varepsilon} \wt \nabla u +\chi \text{ in }L^p(\Omega \times Q)^d.
_{\varepsilon}nd{equation}
If, additionally, $_{\varepsilon}x{\cdot}$ is ergodic, then $u=P_{\mathsf{{{\textsf{inv}}}}} u=_{\varepsilon}x{u} \in W^{1,p}(Q)$ and $_{\varepsilon}x{u_{\varepsilon}}\rightharpoonup u$ weakly in $W^{1,p}(Q)$.
_{\varepsilon}nd{prop}
We remark that the above result was already established in \cite{bourgeat1994stochastic} in the context of two-scale convergence in the mean in the $L^2$-setting. We recapitulate its (short) proof from the perspective of stochastic unfolding; see Section \ref{S_Proofs}.
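For orientation, we mention an elementary example (in the spirit of the construction in the proof of Lemma~\ref{Nonlinear_recovery} below): for $v\in W^{1,p}(Q)$, $\varphi\in W^{1,p}(\Omega)$ and $\eta\in C^{\infty}_c(Q)$ set
\begin{equation*}
u_{\varepsilon}(\omega,x):=v(x)+\varepsilon\,\varphi(\tau_{\frac{x}{\varepsilon}}\omega)\,\eta(x),
\qquad
\nabla u_{\varepsilon}(\omega,x)=\nabla v(x)+(D\varphi)(\tau_{\frac{x}{\varepsilon}}\omega)\,\eta(x)+\varepsilon\,\varphi(\tau_{\frac{x}{\varepsilon}}\omega)\,\nabla\eta(x).
\end{equation*}
Unfolding removes the shift, i.e.~$\mathcal{T}_{\varepsilon}\nabla u_{\varepsilon}=\nabla v+D\varphi\,\eta+\varepsilon\,\varphi\,\nabla\eta$, and thus $u_{\varepsilon}\st v$ and $\nabla u_{\varepsilon}\st \nabla v+\chi$ with $\chi=D\varphi\,\eta$, which is an admissible limit in the sense of (\ref{equation1}).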
\begin{remark}\label{R_Two_1}
Since closed, convex subsets of a Banach space are also weakly closed, the following holds: if a sequence $(u_{\varepsilon})$ satisfies the assumptions of Proposition \ref{prop1} and $\mathcal{T}_{\varepsilon} u_{\varepsilon} \in X$ for a closed and convex set $X\subset L^p(\Omega\times Q)$, then the two-scale limit $u$ from Proposition \ref{prop1} satisfies $u\in X$. This is useful for problems with boundary conditions.
_{\varepsilon}nd{remark}
\begin{lemma}[Nonlinear recovery sequence]\label{Nonlinear_recovery} Let $p \in (1,\infty)$ and $Q\subset \mathbb{R}^d$ be open. For $\chi\in L^p_{\varphiot}(\Omega)\otimes L^p(Q)$ and $\delta>0$, there exists a sequence $g_{\delta,\varepsilon}(\chi) \in L^p(\Omega)\otimes W^{1,p}_0(Q)$ such that
\begin{equation*}
\|g_{\delta,\varepsilon}(\chi)\|_{L^{p}(\Omega\times Q)} \leq \varepsilon C(\delta), \quad \limsup_{\varepsilon\to 0}\|\mathcal{T}_{\varepsilon}\nabla g_{\delta, \varepsilon}(\chi)-\chi\|_{L^p(\Omega\times Q)^d}\leq \delta.
_{\varepsilon}nd{equation*}
(For the proof see Section \ref{S_Proofs}.)
_{\varepsilon}nd{lemma}
\begin{prop}[Linear recovery sequence]\label{prop2}
Let $p\in (1,\infty)$ and $Q\subset \mathbb{R}^d$ be open, bounded and $C^1$. For $\varepsilon>0$ there exists a linear operator $\mathcal{G}_{\varepsilon}: L^p_{\varphiot}(\Omega)\otimes L^p(Q) \to L^p(\Omega)\otimes W^{1,p}_0(Q)$, that is uniformly bounded in ${\varepsilon}$, with the property that for any $\chi\in L^p_{\varphiot}(\Omega)\otimes L^p(Q)$
\begin{equation*}
\mathcal{G}_{\varepsilon} \chi \st 0 \text{ in } L^p(\Omega \times Q), \quad \nabla \mathcal{G}_{\varepsilon} \chi \st \chi \text{ in }L^p(\Omega \times Q)^d.
_{\varepsilon}nd{equation*}
_{\varepsilon}nd{prop}
\textit{(For the proof see Section \ref{S_Proofs}.)}
\begin{remark}\label{rem14}
If $Q\subset \mathbb{R}^d$ is open, bounded and $C^1$, using Proposition \ref{prop2}, we obtain a mapping
$$\brac{L^p_{{{\textsf{inv}}}}(\Omega)\otimes W^{1,p}(Q)} \times \brac{L^p_{\varphiot}(\Omega)\otimes L^p(Q)}\ni(u,\chi)\mapsto u_{\varepsilon}(u,\chi):= u+ \mathcal{G}_{\varepsilon} \chi \in L^p(\Omega)\otimes W^{1,p}(Q)$$
which is linear, uniformly bounded in ${\varepsilon}$ and it satisfies (for all $(u,\chi)$)
\begin{equation}\label{cor22_eq1}
u_{\varepsilon}(u,\chi) \overset{2s}{\to} u \text{ in }L^p(\Omega\times Q), \quad \nabla u_{\varepsilon}(u,\chi) \overset{2s}{\to} \nabla u + \chi \text{ in }L^p(\Omega\times Q).
_{\varepsilon}nd{equation}
In the case that $Q$ is merely open, we can use the nonlinear construction from Lemma \ref{Nonlinear_recovery}. Specifically, for $(u,\chi)\in \brac{L^p_{{{\textsf{inv}}}}(\Omega)\otimes W^{1,p}(Q) }\times \brac{L^p_{\varphiot}(\Omega)\otimes L^p(Q)}$ we define $u_{\delta,\varepsilon}(u,\chi)= u + g_{\delta,\varepsilon}(\chi)$. Using Attouch's diagonal argument, we find a sequence $u_{\varepsilon}(u,\chi)=u_{\delta(\varepsilon),\varepsilon}$ which satisfies (\ref{cor22_eq1}). We remark that in both cases, the recovery sequence $u_{\varepsilon}$ matches the boundary conditions of the function $u$ (see constructions in Section \ref{S_Proofs}).
_{\varepsilon}nd{remark}
We conclude this section with some basic facts from functional analysis used in the proof of Proposition~\ref{prop1}.
\begin{remark}
Let $p \in (1,\infty)$ and $q= \frac{p}{p-1}$.
\begin{enumerate}[(i)]
\item $_{\varepsilon}x{\cdot}$ is ergodic $\Leftrightarrow$ $L^p_{{\textsf{inv}}}(\Omega)\simeq \re{}$ $\Leftrightarrow$ $\varphiinv f=_{\varepsilon}x{f}$ for all $f\in L^p(\Omega)$.
\item The following orthogonality relations hold (for a proof see \cite[Section 2.6]{brezis2010functional}): Identify the dual space $L^p(\Omega)^*$ with $L^q(\Omega)$, and define for a set $A\subset L^q(\Omega)$ its orthogonal complement $A^{\bot}\subset L^p(\Omega)$ as $A^{\bot}=\cb{\varphi\in L^p(\Omega):_{\varepsilon}x{\varphi,\varphisi}_{L^p,L^q}=0 \text{ for all }\varphisi\in A}$. Then
\begin{equation}\label{orth}
\mathcal N(D)=\mathcal{R}(D^*)^{\bot}, \quad
L^p_{\varphiot}(\Omega)= \overline{\mathcal{R}(D)}=\mathcal N(D^*)^{\bot}.
_{\varepsilon}nd{equation}
Above, $\mathcal{N}(\cdot)$ denotes the kernel and $\mathcal{R}(\cdot)$ the range of an operator.
_{\varepsilon}nd{enumerate}
_{\varepsilon}nd{remark}
\subsection{Proofs}\label{S_Proofs}
\begin{proof}[Proof of Lemma \ref{L:unf}]
We first define $\mathcal{T}_{\varepsilon}$ on ${\mathscr A}:=\{\varphisi(\omega,x)=\varphi(\omega)_{\varepsilon}ta(x)\,:\,\varphi\in L^p(\Omega),\,_{\varepsilon}ta\in L^p(Q)\,\}\subset L^p(\Omega\times Q)$ by setting $(\mathcal{T}_{\varepsilon}\varphisi)(\omega,x)=(S\varphi)(\omega,-\tfrac{x}{{\varepsilon}})_{\varepsilon}ta(x)$ for all $\varphisi=\varphi_{\varepsilon}ta\in{\mathscr A}$. In view of Lemma~\ref{L:stat} $(\mathcal{T}_{\varepsilon}\varphisi)$ is $\mathcal F\otimes\mathcal L(Q)$-measurable, and
\begin{equation*}
_{\varepsilon}x{\int_Q|\mathcal{T}_{\varepsilon}\varphisi|^p\,dx}=\int_Q\big(\int_\Omega|S\varphi(\omega,-\tfrac{x}{{\varepsilon}})|^p\,dP(\omega)\big)|_{\varepsilon}ta(x)|^p\,dx=\|\varphi\|_{L^p(\Omega)}^p\|_{\varepsilon}ta\|_{L^p(Q)}^p=\|\varphisi\|_{L^p(\Omega\times Q)}^p.
_{\varepsilon}nd{equation*}
Since $\mbox{span}({\mathscr A})$ is dense in $L^p(\Omega\times Q)$, $\mathcal{T}_{\varepsilon}$ extends to a linear isometry from $L^p(\Omega\times Q)$ to $L^p(\Omega\times Q)$. We define a linear isometry $\mathcal{T}_{-\varepsilon}: L^q(\Omega\times Q)\to L^q(\Omega\times Q)$ analogously as $\mathcal{T}_{\varepsilon}$ with $\varepsilon$ replaced by $-\varepsilon$. Then for any $\varphi\in L^p(\Omega)\stackrel{a}{\otimes} L^p(Q)$ and $\varphisi\in L^q(\Omega)\stackrel{a}{\otimes} L^q(Q)$ we have (thanks to the measure preserving property of $\tau$):
\begin{eqnarray*}
_{\varepsilon}x{\int_Q(\mathcal{T}_{\varepsilon}\varphi)\varphisi\,dx}&=&\int_Q\int_\Omega\varphi(\tau_{-\frac{x}{{\varepsilon}}}\omega,x)\varphisi(\omega,x)\,dP(\omega)\,dx\\
&=&\int_Q\int_\Omega\varphi(\omega,x)\varphisi(\tau_{\frac{x}{{\varepsilon}}}\omega,x)\,dP(\omega)\,dx=_{\varepsilon}x{\int_Q\varphi(\mathcal T_{-{\varepsilon}}\varphisi)}\,dx.
_{\varepsilon}nd{eqnarray*}
Since $L^p(\Omega)\stackrel{a}{\otimes} L^p(Q)$ and $L^q(\Omega)\stackrel{a}{\otimes} L^q(Q)$ are dense in $L^p(\Omega\times Q)$ and $L^q(\Omega\times Q)$, respectively, we conclude that $\mathcal{T}_{\varepsilon}^*=\mathcal T_{-{\varepsilon}}$.
It remains to argue that $\mathcal{T}_{\varepsilon}$ and $\mathcal{T}_{\varepsilon}^*$ are surjective. Since $\mathcal{T}_{\varepsilon}^*$ is an isometry (in particular, injective with closed range), it follows that $\mathcal{T}_{\varepsilon}$ is surjective (see \cite[Theorem 2.20]{brezis2010functional}). Since $\mathcal{T}_{\varepsilon}$ is an isometry as well, the same argument shows that $\mathcal{T}_{\varepsilon}^*$ is surjective.
_{\varepsilon}nd{proof}
\begin{proof}[Proof of Proposition \ref{P_Cont_1}]
We first note that $V$ is a \textit{Carath{\'e}odory integrand} (which is defined as a function satisfying the measurability and continuity assumptions given in the statement of the proposition) and therefore $V$ is $\overline{\mathcal{F}\otimes \mathcal{L}(Q)}\otimes \mathcal{B}(\mathbb{R}^{m})$-measurable (see \cite[Proposition 1]{rockafellar1971} and the remarks following it). For fixed $\varepsilon>0$, the mapping $(\omega,x)\mapsto (\tau_{\frac{x}{\varepsilon}}\omega,x)$ is $\mathcal{F}\otimes \mathcal{L}(Q)$-$\mathcal{F}\otimes \mathcal{L}(Q)$-measurable and therefore $(\omega,x,F)\mapsto V(\tau_{\frac{x}{\varepsilon}}\omega,x,F)$ also defines a Carath{\'e}odory integrand (with the same measurability as $V$). As a consequence, for any function $u\in L^p(\Omega\times Q)^m$ the maps $(\omega,x)\mapsto V(\omega,x,u(\omega,x))$ and $(\omega,x)\mapsto V(\tau_{\frac{x}{\varepsilon}}\omega, x, u(\omega,x))$ are measurable with respect to the completion of $\mathcal{F}\otimes \mathcal{L}(Q)$. Additionally, these functions are integrable thanks to the growth assumptions on $V$. Thus all the integrals in the statement of the proposition are well-defined.
(i) We first argue that it suffices to prove that
\begin{equation}\label{eq1_prop2.8}
_{\varepsilon}x{\int_{Q}V(\tau_{\frac{x}{\varepsilon}}\omega,x,u(\omega,x))dx}=_{\varepsilon}x{\int_{Q}V(\omega,x,\mathcal{T}_{\varepsilon} u(\omega,x))dx} \quad \text{for all }u\in L^p(\Omega)\overset{a}{\otimes} L^p(Q)^m.
_{\varepsilon}nd{equation}
Indeed, for any $u\in L^p(\Omega\times Q)^m$ we can find a sequence $u_k \in L^p(\Omega)\overset{a}{\otimes} L^p(Q)^m$ such that $u_k \to u$ strongly in $L^p(\Omega\times Q)^m$, and by passing to a subsequence (not relabeled) we may additionally assume that $u_k\to u$ pointwise a.e.~in $\Omega\times Q$.
By continuity of $V$ in its last variable, we thus have $V(\tau_{\frac{x}{\varepsilon}}\omega,x, u_{k}(\omega,x)) \to V(\tau_{\frac{x}{\varepsilon}}\omega,x, u(\omega,x))$ for a.e.~$(\omega,x)\in \Omega\times Q$. Since $|V(\tau_{\frac{x}{\varepsilon}}\omega,x, u_{k}(\omega,x))|\leq C(1+|u_{k}(\omega,x)|^p)$ a.e.~in $\Omega\times Q$ and $|u_k|^p\to|u|^p$ in $L^1(\Omega\times Q)$, the generalized dominated convergence theorem (Vitali) implies that $\lim_{k\to \infty}_{\varepsilon}x{\int_{Q}V(\tau_{\frac{x}{\varepsilon}}\omega,x,u_{k}(\omega,x))dx} = _{\varepsilon}x{\int_{Q}V(\tau_{\frac{x}{\varepsilon}}\omega,x,u(\omega,x))dx}$. In the same way we conclude that
$$\lim_{k\to \infty}_{\varepsilon}x{\int_{Q}V(\omega,x,\mathcal{T}_{\varepsilon} u_k(\omega,x))dx} = _{\varepsilon}x{\int_{Q}V(\omega,x,\mathcal{T}_{\varepsilon} u(\omega,x))dx}\,,$$
and thus (\ref{eq1_prop2.8}) extends to general $u\in L^p(\Omega\times Q)^m$.
It is left to show (\ref{eq1_prop2.8}). Let $u\in L^p(\Omega)\overset{a}{\otimes} L^p(Q)^m$. By Fubini's theorem, the measure preserving property of $\tau$, and the transformation $\omega\mapsto \tau_{-\frac{x}{\varepsilon}}\omega$ in the second equality below, it follows
\begin{equation*}
_{\varepsilon}x{\int_{Q}V(\tau_{\frac{x}{\varepsilon}}\omega,x,u(\omega,x))dx}=\int_{Q}_{\varepsilon}x{V(\tau_{\frac{x}{\varepsilon}}\omega,x,u(\omega,x))}dx = \int_{Q}_{\varepsilon}x{V(\omega,x,u(\tau_{-\frac{x}{\varepsilon}}\omega,x))}dx.
_{\varepsilon}nd{equation*}
Since $u\in L^p(\Omega)\stackrel{a}{\otimes}L^p(Q)$, we have $u(\tau_{-\frac{x}{\varepsilon}}\omega,x)=\mathcal{T}_{\varepsilon} u(\omega,x)$, and thus the right-hand side equals $_{\varepsilon}x{\int_{Q}V(\omega,x,\mathcal{T}_{\varepsilon} u(\omega,x))dx}$, which completes the proof of (i).
(ii) By part (i) we get $_{\varepsilon}x{\int_Q V(\tau_{\frac{x}{\varepsilon}}\omega, x,u_{\varepsilon}(\omega,x))dx}=_{\varepsilon}x{\int_Q V(\omega, x, \mathcal{T}_{\varepsilon} u_{\varepsilon}(\omega,x))dx}$. Since $\mathcal{T}_{\varepsilon} u_{\varepsilon} \to u$ strongly in $L^p(\Omega\times Q)^m$ (by assumption), using the growth conditions of $V$ and the dominated convergence theorem, it follows (similarly as in part (i)) that $\lim_{\varepsilon\to 0}_{\varepsilon}x{\int_Q V(\omega, x, \mathcal{T}_{\varepsilon} u_{\varepsilon}(\omega,x))dx}= _{\varepsilon}x{\int_{Q}V(\omega,x,u(\omega,x))dx}$.
(iii) We note that the functional $L^p(\Omega\times Q)^m\ni u \mapsto _{\varepsilon}x{\int_{Q}V(\omega, x,u(\omega,x))dx}$ is convex and lower semi-continuous, therefore it is weakly lower semi-continuous (see \cite[Corollary 3.9]{brezis2010functional}). Combining this fact with the transformation formula from (i) and the weak convergence $\mathcal{T}_{\varepsilon} u_{\varepsilon}\rightharpoonup u$ (by assumption), the claim follows.
_{\varepsilon}nd{proof}
Before stating the proof of Proposition \ref{prop1}, we present some auxiliary lemmas.
\begin{lemma}\label{lemA} Let $p \in (1,\infty)$ and $q=\frac{p}{p-1}$.
\begin{enumerate}[(i)]
\item
If $\varphi\in \cb{D^*\varphisi:\varphisi\in W^{1,q}(\Omega)^d}^{\bot}$, then $\varphi\in L^p_{{{\textsf{inv}}}}(\Omega)$.
\item
If $\varphi \in \cb{\varphisi\in W^{1,q}(\Omega)^d: D^*\varphisi=0}^{\bot}$, then $\varphi\in L^p_{\varphiot}(\Omega)$.
_{\varepsilon}nd{enumerate}
_{\varepsilon}nd{lemma}
\begin{proof}
(i) First, we note that
\begin{align*}
\varphi \in L^p_{{\textsf{inv}}}(\Omega) \quad \Leftrightarrow \quad U_{h e_i}U_{y}\varphi=U_y\varphi \quad \text{for all }y\in \mathbb{R}^d,h\in \mathbb{R}, i=1,...,d.
_{\varepsilon}nd{align*}
We consider $\varphi\in \cb{D^*\varphisi:\varphisi\in W^{1,q}(\Omega)^d}^{\bot}$ and we show that $\varphi\in L^p_{{\textsf{inv}}}(\Omega)$ using the above equivalence.
Let $\varphisi \in W^{1,q}(\Omega)$ and $i\in \cb{1,...,d}$. Then, by the group property we have $U_{-h e_i}\varphisi-\varphisi=\int_{0}^h U_{-t e_i}D_i^*\varphisi dt$ and therefore
\begin{align*}
_{\varepsilon}x{(U_{h e_i}\varphi-\varphi)\varphisi}=_{\varepsilon}x{\varphi (U_{-he_i}\varphisi-\varphisi)}=_{\varepsilon}x{\varphi \int_{0}^h U_{-t e_i}D_i^*\varphisi dt}=\int_{0}^h_{\varepsilon}x{\varphi D^*_i(U_{-t e_i}\varphisi)}dt.
_{\varepsilon}nd{align*}
Since $U_{-t e_i}\varphisi \in W^{1,q}(\Omega)$ for any $t\in [0,h]$, we obtain $_{\varepsilon}x{\varphi D^*_i(U_{-t e_i}\varphisi)}=0$, and hence $_{\varepsilon}x{(U_{h e_i}\varphi-\varphi)\varphisi}=0$; since $W^{1,q}(\Omega)$ is dense in $L^q(\Omega)$, this yields $U_{h e_i}\varphi = \varphi$. Furthermore, for any $y\in \mathbb{R}^d$, we have $_{\varepsilon}x{(U_{h e_i}U_y \varphi - U_y \varphi)\varphisi}=_{\varepsilon}x{(U_{h e_i}\varphi -\varphi)U_{-y}\varphisi}=0$ by the same argument, and the claim follows from the equivalence stated above.
(ii) In view of $L^p_{\varphiot}(\Omega)=\mathcal{N}(D^*)^{\bot}$ (see (\ref{orth})), it is sufficient to prove that $\cb{\varphi\in W^{1,q}(\Omega)^d: D^*\varphi=0}$ is dense in $\mathcal{N} (D^*)$. This follows by an approximation argument as in \cite[Section 7.2]{jikov2012homogenization}.
Let $\varphi \in \mathcal{N}(D^*)$ and define, for $t>0$,
\begin{equation*}
\varphi^t(\omega)=\int_{\re{d}}p_t(y)\varphi(\tau_y \omega)dy, \quad \text{where }p_t(y)=\frac{1}{\brac{4\pi t}^{\frac{d}{2}}}e^{-\frac{|y|^2}{4t}}.
_{\varepsilon}nd{equation*}
Then the claimed density follows, since $\varphi^t\in W^{1,q}(\Omega)^d$, $D^*\varphi^t=0$ for any $t>0$ and $\varphi^t \rightarrow \varphi$ strongly in $L^q(\Omega)^d$. The last statement can be seen as follows. By the continuity property of $U_x$, for any $\varepsilon>0$ there exists $\delta>0$ such that $_{\varepsilon}x{|\varphi(\tau_y \omega)-\varphi(\omega)|^q}\leq \varepsilon$ for any $y\in B_{\delta}(0)$.
It follows that
\begin{align*}
_{\varepsilon}x{|\varphi^t-\varphi|^q} & =_{\varepsilon}x{\bigg|\int_{\re{d}}p_t(y)\brac{\varphi(\tau_y \omega)-\varphi(\omega)}dy\bigg|^q}\\ & \leq \int_{\re{d}}p_t(y)_{\varepsilon}x{|\varphi(\tau_y \omega)-\varphi(\omega)|^q}dy \\ & = \int_{B_{\delta}}p_t(y)_{\varepsilon}x{|\varphi(\tau_y \omega)-\varphi(\omega)|^q}dy+\int_{\re{d}\setminus B_{\delta}}p_t(y)_{\varepsilon}x{|\varphi(\tau_y \omega)-\varphi(\omega)|^q}dy.
_{\varepsilon}nd{align*}
The first term on the right-hand side of the above inequality is bounded by $\varepsilon$, since $p_t$ is a probability density; the second term is bounded by $2^q_{\varepsilon}x{|\varphi|^q}\int_{\re{d}\setminus B_{\delta}}p_t(y)dy$ and is therefore also bounded by $\varepsilon$ for sufficiently small $t>0$.
_{\varepsilon}nd{proof}
\begin{lemma}\label{lem6}
Let $u_{\varepsilon} \in L^p(\Omega)\otimes W^{1,p}(Q)$ be such that $u_{\varepsilon} \wt u$ in $L^p(\Omega \times Q)$ and $\varepsilon \nabla u_{\varepsilon} \wt 0$ in $L^p(\Omega \times Q)^d$. Then $u\in L^p_{{{\textsf{inv}}}}(\Omega)\otimes L^p(Q)$.
_{\varepsilon}nd{lemma}
\begin{proof}
Consider a sequence $v_{\varepsilon}=\varepsilon \mathcal{T}_{\varepsilon}^*(\varphi _{\varepsilon}ta)$ such that $\varphi\in W^{1,q}(\Omega)$ and $_{\varepsilon}ta\in C^{\infty}_c(Q)$. Note that $\mathcal{T}_{\varepsilon} v_{\varepsilon}=\varepsilon \varphi _{\varepsilon}ta$ and we have ($i=1,...,d$)
\begin{equation*}
_{\varepsilon}x{\int_Q \varphiartial_i u_{\varepsilon} v_{\varepsilon} dx}=_{\varepsilon}x{\int_Q\mathcal{T}_{\varepsilon} \varphiartial_i u_{\varepsilon} \mathcal{T}_{\varepsilon} v_{\varepsilon} dx}=_{\varepsilon}x{\int_Q\mathcal{T}_{\varepsilon} \varphiartial_i u_{\varepsilon} \varepsilon \varphi _{\varepsilon}ta dx}\rightarrow 0.
_{\varepsilon}nd{equation*}
Moreover, it holds that $\varphiartial_{i}v_{\varepsilon}= \mathcal{T}_{\varepsilon}^*(D_i \varphi _{\varepsilon}ta + \varepsilon \varphi \varphiartial_i _{\varepsilon}ta)$ and therefore
\begin{align*}
_{\varepsilon}x{\int_Q\varphiartial_i u_{\varepsilon} v_{\varepsilon} dx} &=-_{\varepsilon}x{\int_Q u_{\varepsilon} \varphiartial_i v_{\varepsilon} dx} =-_{\varepsilon}x{\int_Q u_{\varepsilon} \mathcal{T}_{\varepsilon}^*(D_i \varphi _{\varepsilon}ta + \varepsilon \varphi \varphiartial_i _{\varepsilon}ta)dx}\\ &=
-_{\varepsilon}x{\int_Q \mathcal{T}_{\varepsilon} u_{\varepsilon} D_i\varphi _{\varepsilon}ta+\varepsilon \mathcal{T}_{\varepsilon} u_{\varepsilon} \varphi \varphiartial_i_{\varepsilon}ta dx}.
_{\varepsilon}nd{align*}
The last expression converges to $-_{\varepsilon}x{\int_Q u D_i\varphi _{\varepsilon}ta dx}$ as $\varepsilon\to 0$.
As a result of this, $_{\varepsilon}x{u(x)D_i\varphi}=0$ for almost every $x\in Q$ and therefore $u\in L^p_{{{\textsf{inv}}}}(\Omega)\otimes L^p(Q)$ by Lemma \ref{lemA} (i).
_{\varepsilon}nd{proof}
\begin{lemma}\label{lem7}
Let $u_{\varepsilon}$ be a bounded sequence in $L^p(\Omega)\otimes W^{1,p}(Q)$. Then there exists $u\in L^p_{{{\textsf{inv}}}}(\Omega)\otimes W^{1,p}(Q)$ such that
\begin{equation*}
u_{\varepsilon} \wt u \text{ in }L^p(\Omega \times Q), \quad \varphiinv u_{\varepsilon}\wt u \text{ in }L^p(\Omega \times Q),\quad \varphiinv \nabla u_{\varepsilon} \wt \nabla u \text{ in }L^p(\Omega \times Q)^d.
_{\varepsilon}nd{equation*}
_{\varepsilon}nd{lemma}
\begin{proof}
\textit{Step 1. $\varphiinv \circ \mathcal{T}_{\varepsilon}=\mathcal{T}_{\varepsilon}\circ \varphiinv =\varphiinv$.}
The second equality is clear, since $\varphiinv v(\cdot,x)$ is shift invariant for a.e.~$x\in Q$ and therefore invariant under unfolding. To show that $\varphiinv \circ \mathcal{T}_{\varepsilon}=\varphiinv$, we consider $v\in L^p(\Omega \times Q)$, $\varphi\in L^q(\Omega)$ and $_{\varepsilon}ta\in L^q(Q)$. We have
\begin{align*}
_{\varepsilon}x{\int_Q (\varphiinv\mathcal{T}_{\varepsilon} v) (\varphi _{\varepsilon}ta) dx}& =_{\varepsilon}x{\int_Q (\mathcal{T}_{\varepsilon} v) \varphiinv^* (\varphi _{\varepsilon}ta) dx}\\ &=_{\varepsilon}x{\int_Q v \varphiinv^* (\varphi _{\varepsilon}ta) dx}=_{\varepsilon}x{\int_Q (\varphiinv v) (\varphi _{\varepsilon}ta) dx},
_{\varepsilon}nd{align*}
where we use the fact that $\mathcal{T}_{\varepsilon}^* P_{{\textsf{inv}}}^*= P_{{\textsf{inv}}}^*$ since the adjoint $P_{{\textsf{inv}}}^*$ of $P_{{\textsf{inv}}}$ satisfies $\mathcal{R}(P_{{\textsf{inv}}}^*)\subset L^q_{{\textsf{inv}}}(\Omega)$. The claim follows by an approximation argument since $L^q(\Omega)\overset{a}{\otimes}L^q(Q)$ is dense in $L^q(\Omega\times Q)$.
\textit{Step 2. Convergence of $\varphiinv u_{\varepsilon}$.}
$\varphiinv$ is bounded and commutes with $\nabla$, and therefore
\begin{equation*}
\limsup_{\varepsilon\to 0} _{\varepsilon}x{\int_Q |\varphiinv u_{\varepsilon}|^p+|\nabla \varphiinv u_{\varepsilon}|^p dx}< \infty.
_{\varepsilon}nd{equation*}
As a result of this and with help of Lemma \ref{lemma_basics} (ii) and Lemma \ref{lem6}, it follows that $\varphiinv u_{\varepsilon}\overset{2s}{\rightharpoonup} v$ and $\nabla \varphiinv u_{\varepsilon}\overset{2s}{\rightharpoonup} w$ (up to a subsequence), where $v\in L^p_{{{\textsf{inv}}}}(\Omega)\otimes L^p(Q)$ and $w\in L^p_{{{\textsf{inv}}}}(\Omega)\otimes L^p(Q)^d$.
Let $\varphi\in W^{1,q}(\Omega)$ and $_{\varepsilon}ta\in C^{\infty}_c(Q)$.
On the one hand, we have
\begin{equation*}
_{\varepsilon}x{\int_Q (\varphiartial_i \varphiinv u_{\varepsilon}) \mathcal{T}_{\varepsilon}^*(\varphi _{\varepsilon}ta) dx}=_{\varepsilon}x{\int_Q\mathcal{T}_{\varepsilon} (\varphiartial_i \varphiinv u_{\varepsilon}) (\varphi_{\varepsilon}ta) dx}\rightarrow _{\varepsilon}x{\int_Q w_i \varphi_{\varepsilon}ta dx}.
_{\varepsilon}nd{equation*}
On the other hand,
\begin{equation*}
_{\varepsilon}x{\int_Q (\varphiartial_i \varphiinv u_{\varepsilon}) \mathcal{T}_{\varepsilon}^*(\varphi _{\varepsilon}ta) dx}=-\frac{1}{\varepsilon}_{\varepsilon}x{\int_Q (\varphiinv u_{\varepsilon}) (D_i\varphi_{\varepsilon}ta) dx}-_{\varepsilon}x{\int_Q(\varphiinv u_{\varepsilon}) (\varphi \varphiartial_i_{\varepsilon}ta) dx}.
_{\varepsilon}nd{equation*}
The first term on the right-hand side vanishes since $\varphiinv u_{\varepsilon}(\cdot,x)\in L^p_{{{\textsf{inv}}}}(\Omega)$ for almost every $x\in Q$ and by (\ref{orth}). The second term converges to $-_{\varepsilon}x{\int_Q v \varphi \varphiartial_i _{\varepsilon}ta dx}$ as $\varepsilon\rightarrow 0$. Consequently, we obtain $w=\nabla v$ and therefore $v\in L^p_{{{\textsf{inv}}}}(\Omega)\otimes W^{1,p}(Q)$.
\textit{Step 3. Convergence of $u_{\varepsilon}$.}
Since $u_{\varepsilon}$ is bounded, by Lemma \ref{lemma_basics} (ii) and Lemma \ref{lem6} there exists $u\in L^p_{{{\textsf{inv}}}}(\Omega)\otimes L^p(Q)$ such that $u_{\varepsilon} \wt u$ in $L^p(\Omega \times Q)$. Also, $\varphiinv$ is a linear and bounded operator which, together with Step 1, implies that $\varphiinv u_{\varepsilon}\rightharpoonup u$. Using this, we conclude that $u=v$.
_{\varepsilon}nd{proof}
\begin{proof}[Proof of Proposition \ref{prop1}]
Lemma \ref{lem7} implies that $u_{\varepsilon} \wt u$ in $L^p(\Omega \times Q)$ (up to a subsequence), where $u\in L^p_{{{\textsf{inv}}}}(\Omega)\otimes W^{1,p}(Q)$. Moreover, it follows that there exists $v\in L^p(\Omega \times Q)^d$ such that $\nabla u_{\varepsilon} \wt v$ in $L^p(\Omega \times Q)^d$ (up to another subsequence). We show that $\chi:=v-\nabla u\in L^p_{\varphiot}(\Omega)\otimes L^p(Q)$.
Let $\varphi \in W^{1,q}(\Omega)^d$ with $D^*\varphi=0$ and $_{\varepsilon}ta\in C^{\infty}_c(Q)$. We have
\begin{equation}\label{eq4321}
_{\varepsilon}x{\int_Q \nabla u_{\varepsilon} \cdot \mathcal{T}_{\varepsilon}^*(\varphi _{\varepsilon}ta) dx} = _{\varepsilon}x{\int_Q \mathcal{T}_{\varepsilon} \nabla u_{\varepsilon} \cdot \varphi _{\varepsilon}ta dx} \rightarrow _{\varepsilon}x{\int_{Q}v \cdot \varphi _{\varepsilon}ta dx}.
_{\varepsilon}nd{equation}
On the other hand,
\begin{align}\label{eq1234}
\begin{split}
_{\varepsilon}x{\int_Q \nabla u_{\varepsilon} \cdot \mathcal{T}_{\varepsilon}^*(\varphi _{\varepsilon}ta) dx} &=-_{\varepsilon}x{\int_Q u_{\varepsilon} \sum_{i=1}^d \mathcal{T}_{\varepsilon}^*(\frac{1}{\varepsilon} D_i \varphi _{\varepsilon}ta+\varphi_i \varphiartial_i_{\varepsilon}ta) dx}
\\ &= \frac{1}{\varepsilon} _{\varepsilon}x{\int_Q (\mathcal{T}_{\varepsilon} u_{\varepsilon}) (D^*\varphi _{\varepsilon}ta) dx}-_{\varepsilon}x{\int_Q (\mathcal{T}_{\varepsilon} u_{\varepsilon}) \sum_{i=1}^d \varphi_i\varphiartial_i _{\varepsilon}ta dx}.
_{\varepsilon}nd{split}
_{\varepsilon}nd{align}
Above, the first term on the right-hand side vanishes since $D^*\varphi=0$, and the second converges to $ _{\varepsilon}x{\int_Q\nabla u\cdot \varphi _{\varepsilon}ta}$ as $\varepsilon\rightarrow 0$. Using
(\ref{eq1234}), (\ref{eq4321}) and Lemma \ref{lemA} (ii) we complete the proof.
_{\varepsilon}nd{proof}
\begin{proof}[Proof of Lemma \ref{Nonlinear_recovery}]
For $\chi \in L^p_{\varphiot}(\Omega)\otimes L^p(Q)$ and $\delta>0$, by definition of the space $L^p_{\varphiot}(\Omega)\otimes L^p(Q)$ and by density of $\mathcal{R}(D)$ in $L^p_{\varphiot}(\Omega)$, we find $g_{\delta}=\sum_{i=1}^{n(\delta)}\varphi^{\delta}_i _{\varepsilon}ta^{\delta}_i$ with $\varphi_i^{\delta} \in W^{1,p}(\Omega)$ and $_{\varepsilon}ta^{\delta}_i\in C^{\infty}_c(Q)$ such that
\begin{equation*}
\|\chi - Dg_{\delta} \|_{L^p(\Omega\times Q)^d} \leq \delta.
_{\varepsilon}nd{equation*}
We define $g_{\delta,\varepsilon}= \varepsilon \mathcal{T}_{\varepsilon}^{-1} g_{\delta}$ and note that $g_{\delta,\varepsilon} \in L^p(\Omega)\otimes W_0^{1,p}(Q)$ and $\nabla g_{\delta,\varepsilon}=\mathcal{T}_{\varepsilon}^{-1}D g_{\delta}+\mathcal{T}_{\varepsilon}^{-1}\varepsilon\nabla g_{\delta}$. As a result of this and with help of the isometry property of $\mathcal{T}_{\varepsilon}^{-1}$, the claim of the lemma follows.
_{\varepsilon}nd{proof}
\begin{proof}[Proof of Proposition \ref{prop2}]
For $\chi \in L^p_{\varphiot}(\Omega)\otimes L^p(Q)$ we define $\mathcal{G}_{\varepsilon} \chi=v_{\varepsilon}$ as the unique weak solution in $W^{1,p}_0(Q)$ to the equation (for $P$-a.e. $\omega\in \Omega$)
\begin{equation}\label{eq99}
-\Delta v_{\varepsilon}(\omega)=- \nabla \cdot (\mathcal{T}_{\varepsilon}^{-1}\chi(\omega)).
_{\varepsilon}nd{equation}
Above and in the remainder of this proof, we use the notation $u(\omega):= u(\omega,\cdot)\in L^p(Q)$ for functions $u\in L^p(\Omega\times Q)$.
By Poincar{\'e}'s inequality and the Calder{\'o}n-Zygmund estimate, we obtain
\begin{equation*}
\| v_{\varepsilon}(\omega) \|_{L^p(Q)} \leq C \|\nabla v_{\varepsilon}(\omega) \|_{L^p(Q)^d} \leq C \| \mathcal{T}_{\varepsilon}^{-1}\chi(\omega) \|_{L^p(Q)^d},
_{\varepsilon}nd{equation*}
and therefore
\begin{equation*}
\| v_{\varepsilon} \|_{L^p(\Omega \times Q)} \leq C \|\nabla v_{\varepsilon} \|_{L^p(\Omega \times Q)^d} \leq C \| \chi \|_{L^p(\Omega\times Q)^d}.
_{\varepsilon}nd{equation*}
Using Lemma \ref{Nonlinear_recovery}, we find a sequence $g_{\delta,\varepsilon}\in L^p(\Omega)\otimes W^{1,p}_0(Q)$ such that
\begin{equation*}
\|g_{\delta,\varepsilon}(\chi)\|_{L^p(\Omega\times Q)} \leq \varepsilon C(\delta), \quad \limsup_{\varepsilon\to 0}\|\mathcal{T}_{\varepsilon}\nabla g_{\delta, \varepsilon}(\chi)-\chi\|_{L^p(\Omega\times Q)^d}\leq \delta.
_{\varepsilon}nd{equation*}
Note that $v_{\varepsilon}(\omega)-g_{\delta,\varepsilon}(\omega)\in W^{1,p}_0(Q)$ (for $P$-a.e. $\omega\in \Omega$) and it is the unique weak solution to
\begin{equation*}
-\Delta(v_{\varepsilon}(\omega) - g_{\delta,\varepsilon}(\omega))=-\nabla \cdot (\mathcal{T}_{\varepsilon}^{-1}\chi(\omega)-\nabla g_{\delta,\varepsilon}(\omega)).
_{\varepsilon}nd{equation*}
As before, we have
\begin{equation}\label{eq98}
\| v_{\varepsilon}- g_{\delta,\varepsilon}\|_{L^p(\Omega \times Q)}\leq C \|\nabla v_{\varepsilon}- \nabla g_{\delta,\varepsilon} \|_{L^p(\Omega \times Q)^d} \leq C \| \chi -\mathcal{T}_{\varepsilon} \nabla g_{\delta,\varepsilon} \|_{L^p(\Omega\times Q)^d}.
_{\varepsilon}nd{equation}
Therefore, using the isometry property of $\mathcal{T}_{\varepsilon}$, we obtain
\begin{align*}
\|\mathcal{T}_{\varepsilon} \nabla v_{\varepsilon}- \chi \|_{L^p(\Omega \times Q)^d} & \leq \|\nabla v_{\varepsilon}- \nabla g_{\delta,\varepsilon} \|_{L^p(\Omega \times Q)^d}+\|\mathcal{T}_{\varepsilon} \nabla g_{\delta,\varepsilon} - \chi \|_{L^p(\Omega \times Q)^d} \\ & \leq C \| \chi -\mathcal{T}_{\varepsilon} \nabla g_{\delta,\varepsilon} \|_{L^p(\Omega\times Q)^d}.
_{\varepsilon}nd{align*}
Consequently, first letting $\varepsilon\rightarrow 0$ and then $\delta\rightarrow 0$ we obtain that $\nabla v_{\varepsilon} \overset{2s}{\rightarrow} \chi$ in $L^p(\Omega \times Q)^d$. Furthermore, using (\ref{eq98}) we obtain that $v_{\varepsilon} \overset{2s}{\rightarrow }0$ in $L^p(\Omega \times Q)$ which completes the proof.
_{\varepsilon}nd{proof}
\section{Applications to homogenization in the mean}\label{Section_Applications}
In this section we apply the stochastic unfolding method to homogenization problems. We discuss the classical homogenization problem of convex integral functionals and derive a homogenization result for an evolutionary gradient system. We refer to \cite{neukamm2017stochastic} where a similar analysis has been conducted in a discrete-to-continuum setting for convex integral functionals and for an evolutionary rate-independent system.
The treatment of integral functionals is a well-known topic in stochastic homogenization, and previous results typically rely on the subadditive ergodic theorem (see e.g. \cite{DalMaso1986,neukamm_schaeffner}) or on the notion of quenched stochastic two-scale convergence (see \cite{HeidaNesenenko2017monotone} and Section 4). The analysis via unfolding is less involved than these methods, since it merely relies on lower semi-continuity of convex functionals and weak compactness properties of ``unfolded'' sequences in $L^p(\Omega\times Q)$. On the other hand, the method we present yields weaker results than other procedures: convergence of solutions is obtained in a statistically averaged sense (see Theorem \ref{thm2}), whereas the analysis based on the subadditive ergodic theorem (e.g. \cite{neukamm_schaeffner}) yields convergence for every typical realization of the medium and even allows one to consider non-convex functionals. We refer to the recent study \cite{Berlyand2017} for an investigation of homogenization of non-convex integral functionals by a two-scale $\Gamma$-convergence approach.
The second part of this section is dedicated to the analysis of an evolutionary problem, a gradient system corresponding to an Allen-Cahn type equation. Many mathematical models can be phrased as evolutionary gradient systems, which are formulated variationally with the help of an energy and a dissipation functional (see Section \ref{Section_3.2} for a specific example). We refer to \cite{ambrosio2008gradient,savare2007gradient,mielke2013nonsmooth} for the abstract theory of gradient systems. Typically, the asymptotic analysis of sequences of gradient systems (so-called evolutionary $\Gamma$-convergence \cite{mielke2016evolutionary}) relies merely on $\Gamma$-convergence properties of the two underlying functionals. For various general strategies for such problems we refer to \cite{attouch1984variational,sandier2004gamma,daneri2010lecture,mielke2013nonsmooth,mielke2016evolutionary}.
In \cite{liero2015homogenization} a gradient system driven by a non-convex (Cahn-Hilliard type) energy is considered and a periodic homogenization result is established using periodic unfolding. In this study, we consider a related random model and derive a homogenization result based on the stochastic unfolding procedure (see Section \ref{Section_3.2}). We refer to \cite{hornung1994reactive,mielke2014two} for other related periodic homogenization results, where reaction-diffusion equations with periodic coefficients are considered.
In Section \ref{Section:3:3} we argue, on the level of convex functionals, that \textit{quenched homogenization} and \textit{homogenization in the mean} (via stochastic unfolding) typically lead to the same limiting equation. We therefore view stochastic unfolding as a useful tool to identify homogenized limit equations.
\subsection{Convex integral functionals}\label{Section_Convex}
Let $p\in (1,\infty)$ and $Q\subset \mathbb{R}^d$ be open and bounded. We consider $V:\Omega\times Q\times \re{d\times d}\rightarrow \re{}$ and the following set of assumptions.
\begin{itemize}
\item[(A1)] $V(\cdot,\cdot, F)$ is $\mathcal{F}\otimes \mathcal{L}(Q)$-measurable for all $F\in \mathbb{R}^{d\times d}$.
\item[(A2)] $V(\omega, x, \cdot)$ is convex for a.e. $(\omega,x)\in \Omega\times Q$.
\item[(A3)]\label{grow_cond} There exists a $C>0$ such that
\begin{equation*}
\frac{1}{C}|F|^p-C\leq V(\omega, x, F) \leq C(|F|^p+1)
_{\varepsilon}nd{equation*}
for a.e. $(\omega,x) \in \Omega\times Q$ and all $F\in \re{d\times d}$.
\item[(A4)] For a.e. $(\omega,x) \in \Omega\times Q$, $V(\omega, x,\cdot)$ is uniformly convex with modulus $(\cdot)^p$, i.e. there exists $C>0$ (independent of $\omega$ and $x$) such that for all $F, G\in \re{d\times d}$ and $t\in [0,1]$
\begin{align*}
V(\omega, x,tF+(1-t)G)\leq t V(\omega, x,F)+(1-t) V(\omega, x,G)-(1-t)tC|F-G|^p.
_{\varepsilon}nd{align*}
_{\varepsilon}nd{itemize}
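Before proceeding we record a simple example of an admissible integrand: for $p=2$ and a measurable $a:\Omega\to\mathbb{R}$ with $\frac{1}{C}\leq a\leq C$ a.s., the integrand
\begin{equation*}
V(\omega,x,F)=a(\omega)\,|F|^2
\end{equation*}
satisfies (A1)--(A4); indeed, (A4) holds with constant $\frac{1}{C}$ thanks to the identity $|tF+(1-t)G|^2=t|F|^2+(1-t)|G|^2-t(1-t)|F-G|^2$. For $p>2$ one may similarly consider $a(\omega)|F|^p$, using that $F\mapsto|F|^p$ is uniformly convex with a modulus of power type $p$.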
Below we use the shorthand notation $\nabla^s u=\frac{1}{2}\brac{\nabla u+\nabla u ^{T}}$ and $\chi^{s}=\frac{1}{2}\brac{\chi+\chi^{T}}$. We consider problems with homogeneous Dirichlet boundary conditions and energy functional
\begin{equation}\label{energy}
\mathcal{E}_{\varepsilon}:L^p(\Omega)\otimes W^{1,p}_0(Q)^d\rightarrow \re{},\quad
\mathcal{E}_{\varepsilon}(u)=_{\varepsilon}x{\int_Q V(\tau_{\frac{x}{\varepsilon}}\omega, x,\nabla^s u(\omega,x))dx}.
_{\varepsilon}nd{equation}
Under the assumptions $(A1)-(A3)$, in the limit $\varepsilon\rightarrow 0$ we obtain the following functional
\begin{align}\label{energy_hom}
\begin{split}
& \mathcal{E}_0:\brac{L^p_{{{\textsf{inv}}}}(\Omega)\otimes W^{1,p}_0(Q)^d} \times \brac{L^p_{\varphiot}(\Omega)\otimes L^p(Q)^d}\rightarrow \re{},\\
& \mathcal{E}_0(u,\chi)=_{\varepsilon}x{\int_Q V(\omega, x, \nabla^s u(\omega,x)+ \chi^{s}(\omega,x)) dx}.
_{\varepsilon}nd{split}
_{\varepsilon}nd{align}
\begin{thm}[Two-scale homogenization]\label{thm1}
Let $p\in (1,\infty)$ and $Q\subset \mathbb{R}^d$ be open and bounded. Assume $(A1)-(A3)$.
\begin{itemize}
\item[(i)](Compactness) Let $u_{\varepsilon} \in L^p(\Omega)\otimes W^{1,p}_0(Q)^d$ be such that
$\limsup_{\varepsilon\rightarrow 0}\mathcal{E}_{\varepsilon}(u_{\varepsilon})<\infty$.
There exist $(u,\chi) \in \brac{L^p_{{{\textsf{inv}}}}(\Omega)\otimes W^{1,p}_0(Q)^d} \times \brac{L^p_{\varphiot}(\Omega)\otimes L^p(Q)^d}$ and a subsequence (not relabeled) such that
\begin{equation}\label{convergence}
u_{\varepsilon} \wt u \text{ in }L^p(\Omega \times Q)^d, \quad \nabla u_{\varepsilon} \wt \nabla u+\chi \text{ in }L^p(\Omega \times Q)^{d\times d}.
_{\varepsilon}nd{equation}
\item[(ii)](Liminf inequality) If the above convergence holds for the whole sequence, then
\begin{equation*}
\liminf_{\varepsilon\rightarrow 0}\mathcal{E}_{\varepsilon}(u_{\varepsilon})\geq \mathcal{E}_0(u,\chi).
_{\varepsilon}nd{equation*}
\item[(iii)](Limsup inequality) Let $(u,\chi)\in \brac{L^p_{{{\textsf{inv}}}}(\Omega)\otimes W^{1,p}_0(Q)^d} \times \brac{L^p_{\varphiot}(\Omega)\otimes L^p(Q)^d}$. There exists a sequence $u_{\varepsilon} \in L^p(\Omega)\otimes W^{1,p}_0(Q)^d$ such that
\begin{equation*}
u_{\varepsilon} \st u \text{ in }L^p(\Omega \times Q)^d, \quad \nabla u_{\varepsilon} \st \nabla u+\chi \text{ in }L^p(\Omega \times Q)^{d\times d}, \quad
\lim_{\varepsilon\rightarrow 0}\mathcal{E}_{\varepsilon}(u_{\varepsilon})=\mathcal{E}_0(u,\chi).
_{\varepsilon}nd{equation*}
_{\varepsilon}nd{itemize}
_{\varepsilon}nd{thm}
\textit{(For the proof see Section \ref{S_Proof_3}.)}
\begin{corollary}\label{C:thm1}
Assume the same assumptions as in Theorem \ref{thm1}. Let $u_{\varepsilon}\in L^p(\Omega)\otimes W^{1,p}_0(Q)^d$ be a minimizer of ${\mathcal{E}}_{\varepsilon}$. Then there exists a subsequence (not relabeled), $u\in L^p_{{\textsf{inv}}}(\Omega)\otimes W^{1,p}_0(Q)^d$, and $\chi\in L^p_{\varphiot}(\Omega)\otimes L^p(Q)^d$ such that $u_{\varepsilon} \wt u \text{ in }L^p(\Omega \times Q)^d$, $\nabla u_{\varepsilon} \wt \nabla u+\chi \text{ in }L^p(\Omega \times Q)^{d\times d}$, and
\begin{equation*}
\lim\limits_{{\varepsilon}\to 0}\min{\mathcal{E}}_{\varepsilon}= \lim\limits_{{\varepsilon}\to 0}{\mathcal{E}}_{\varepsilon}(u_{\varepsilon})={\mathcal{E}}_0(u,\chi)=\min{\mathcal{E}}_0.
_{\varepsilon}nd{equation*}
_{\varepsilon}nd{corollary}
\textit{(For the proof see Section \ref{S_Proof_3}.)}
\begin{remark}
If $V(\omega, x,\cdot)$ is strictly convex the minimizers are unique and the convergence in the above corollary holds for the entire sequence.
_{\varepsilon}nd{remark}
\begin{remark}
We might consider the perturbed energy functional $\mathcal I_{\varepsilon}(\cdot)={\mathcal{E}}_{\varepsilon}(\cdot)+_{\varepsilon}x{l_{\varepsilon},\cdot}_{(L^p)^*,L^p}$ with $l_{\varepsilon} \overset{2}{\to} l$ in $L^{q}(\Omega\times Q)$. As in Corollary \ref{C:thm1}, minimizers of $\mathcal I_{\varepsilon}$ converge in the above two-scale sense (up to a subsequence) to minimizers of $(u,\chi)\mapsto \mathcal I_0(u,\chi):={\mathcal{E}}_{0}(u,\chi)+_{\varepsilon}x{P_{{\textsf{inv}}}l,u}_{(L^p)^*,L^p}$.
_{\varepsilon}nd{remark}
If we additionally assume that $_{\varepsilon}x{\cdot}$ is ergodic, the limit functional reduces to a single-scale energy
\begin{equation*}
\mathcal{E}_{\hom}:W^{1,p}_0(Q)^d \rightarrow \re{}, \quad
\mathcal{E}_{\hom}(u)=\int_Q V_{\hom}(x,\nabla u(x))dx,
_{\varepsilon}nd{equation*}
where the homogenized integrand $V_{\hom}$ is given for $x\in \mathbb{R}^d$ and $F\in \mathbb{R}^{d\times d}$ by
\begin{align}\label{equation}
V_{\hom}(x,F)=\inf_{\chi\in L^p_{\varphiot}(\Omega)^d}_{\varepsilon}x{V(\omega, x,F^s+\chi^s(\omega))}.
_{\varepsilon}nd{align}
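In the periodic setting of Example~\ref{example:1}, the admissible fields $\chi$ are (up to the usual identifications) gradients of periodic Sobolev functions, and (\ref{equation}) is consistent with the classical periodic cell formula
\begin{equation*}
V_{\hom}(x,F)=\inf_{\varphi\in W^{1,p}_{\per}(\Box)^d}\int_{\Box}V\big(y,x,F^s+(\nabla\varphi(y))^s\big)\,dy.
\end{equation*}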
\begin{thm}[Ergodic case]\label{thm2}
Assume the same assumptions as in Theorem \ref{thm1}. Moreover, we assume that $_{\varepsilon}x{\cdot}$ is ergodic.
\begin{itemize}
\item[(i)] Let $u_{\varepsilon} \in L^p(\Omega)\otimes W^{1,p}_0(Q)^d$ be such that $\limsup_{\varepsilon\rightarrow 0}\mathcal{E}_{\varepsilon}(u_{\varepsilon})<\infty$. There exist $u \in W^{1,p}_0(Q)^d $ and a subsequence (not relabeled) such that
\begin{gather}
u_{\varepsilon} \wt u \text{ in }L^p(\Omega \times Q)^d , \quad _{\varepsilon}x{u_{\varepsilon}}\rightarrow u \text{ strongly in } L^p(Q)^d, \nonumber \\ _{\varepsilon}x{\nabla u_{\varepsilon}} \rightharpoonup \nabla u \text{ weakly in } L^p(Q)^{d\times d}.
_{\varepsilon}nd{gather}
Moreover,
\begin{align*}
\liminf_{\varepsilon\rightarrow 0}\mathcal{E}_{\varepsilon}(u_{\varepsilon})\geq \mathcal{E}_{\hom}(u).
_{\varepsilon}nd{align*}
\item[(ii)] Let $u\in W^{1,p}_0(Q)^d$. There exists a sequence $u_{\varepsilon} \in L^p(\Omega)\otimes W^{1,p}_0(Q)^d$ such that
\begin{align*}
u_{\varepsilon} \st u \text{ in } L^p(\Omega \times Q)^d, \quad _{\varepsilon}x{\nabla u_{\varepsilon}} \to \nabla u \text{ strongly in }L^p(Q)^{d\times d},
\quad \lim_{\varepsilon\rightarrow 0}\mathcal{E}_{\varepsilon}(u_{\varepsilon})= \mathcal{E}_{\hom}(u).
_{\varepsilon}nd{align*}
_{\varepsilon}nd{itemize}
_{\varepsilon}nd{thm}
\textit{(For the proof see Section \ref{S_Proof_3}.)}
We now consider problems with an additional strong convexity assumption and obtain that the whole sequence of (unique) minimizers of ${\mathcal{E}}_{\varepsilon}$ converges in the usual strong topology of $L^p(\Omega\times Q)$ to the unique minimizer of ${\mathcal{E}}_{\hom}$:
\begin{prop}\label{prop3} Assume the same assumptions as in Theorem \ref{thm2} and, additionally, $(A4)$. Then ${\mathcal{E}}_{\varepsilon}$ and ${\mathcal{E}}_{\hom}$ admit unique minimizers $u_{\varepsilon} \in L^p(\Omega)\otimes W^{1,p}_0(Q)^d$ and $u\in W^{1,p}_0(Q)^d$, respectively, and we have
\begin{equation*}
u_{\varepsilon} \to u \text{ in }L^p(\Omega \times Q)^d, \quad _{\varepsilon}x{\nabla u_{\varepsilon}}{\rightharpoonup} \nabla u \text{ weakly in }L^p(Q)^{d\times d}.
_{\varepsilon}nd{equation*}
_{\varepsilon}nd{prop}
\textit{(For the proof see Section \ref{S_Proof_3}.)}
\subsection{Allen-Cahn type gradient flows}\label{Section_3.2}
In this section we provide a homogenization result for an evolutionary gradient system. Let $Q\subset \mathbb{R}^d$ be open and bounded. The system is posed on the state space ${\mathscr{B}} := L^2(\Omega \times Q)$ and is determined by two functionals: a dissipation potential $\mathcal{R}_{\varepsilon}$ and an energy functional ${\mathcal{E}}_{\varepsilon}$.
The dissipation potential $\mathcal{R}_{\varepsilon}: {\mathscr{B}}\to [0,\infty)$ is given by
\begin{equation*}
\mathcal{R}_{\varepsilon}(v)=\frac{1}{2}\langle\int_{Q}r(\tau_{\frac{x}{\varepsilon}}\omega)|v(\omega,x)|^2 dx\rangle,
\end{equation*}
and the energy functional ${\mathcal{E}}_{\varepsilon}:{\mathscr{B}}\to \mathbb{R}\cup \cb{\infty}$ is defined as follows: For $u \in L^2(\Omega)\otimes H^1(Q)\cap L^p(\Omega \times Q)=:dom(\mathcal{E}_{\varepsilon})$ (where $p>2$ is fixed throughout this section),
\begin{equation*}
{\mathcal{E}}_{\varepsilon}(u)=\langle\int_{Q}A(\tau_{\frac{x}{\varepsilon}}\omega)\nabla u(\omega,x)\cdot \nabla u(\omega,x)+f(\tau_{\frac{x}{\varepsilon}}\omega,u(\omega,x))dx\rangle,
\end{equation*}
and ${\mathcal{E}}_{\varepsilon} = \infty$ otherwise. Our assumptions on $r:\Omega \to \mathbb{R}_{+}$, $A: \Omega \to \mathbb{R}^{d\times d}_{sym}$ and $f:\Omega\times \mathbb{R} \to \mathbb{R}$ are given as follows:
\begin{itemize}
\item[(B1)] $r\in L^{\infty}(\Omega)$, $A\in L^{\infty}(\Omega)^{d\times d}$ and there exists $C>0$ such that for $P$-a.e. $\omega\in \Omega$ it holds that $\frac{1}{C}\leq r(\omega)\leq C $ and $A(\omega)F\cdot F \geq \frac{1}{C} |F|^2$ for all $F\in \mathbb{R}^d$.
\item[(B2)] $f(\cdot,y)$ is measurable for all $y\in \mathbb{R}$ and $f(\omega,\cdot)$ is continuous for $P$-a.e. $\omega\in \Omega$. There exists $\lambda \in \mathbb{R}$ such that for $P$-a.e. $\omega \in \Omega$
\begin{align*}
& f(\omega, \cdot) \text{ is }\lambda\text{-convex, i.e. } \quad y \mapsto f(\omega, y)-\frac{\lambda}{2} |y|^2 \text{ is convex},\\
&\frac{1}{C}|y|^p - C \leq f(\omega,y)\leq C(|y|^p+1) \quad \text{for all }y\in \mathbb{R}.
\end{align*}
\end{itemize}
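A prototypical example satisfying $(B2)$ (included here purely as an illustration) is the random double-well nonlinearity $f(\omega,y)=a(\omega)(y^2-1)^2$ with $a\in L^{\infty}(\Omega)$ and $\frac{1}{C}\leq a\leq C$, which corresponds to $p=4$: indeed,
\begin{equation*}
\partial_y^2 f(\omega,y)=a(\omega)(12y^2-4)\geq -4C,
\end{equation*}
so $f(\omega,\cdot)$ is $\lambda$-convex with $\lambda=-4C$, and the elementary bounds $\frac{1}{2}|y|^4-1\leq (y^2-1)^2\leq 2(|y|^4+1)$ yield the growth conditions in $(B2)$ (after possibly enlarging $C$).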
We remark that the above assumptions imply that $u\mapsto {\mathcal{E}}_{\varepsilon}(u)-\Lambda \mathcal{R}_{\varepsilon}(u)$ is convex, where $\Lambda:=\frac{\lambda}{C}$.
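Note also that $\mathcal{R}_{\varepsilon}$ is quadratic and hence G{\^a}teaux differentiable, with
\begin{equation*}
\langle D\mathcal{R}_{\varepsilon}(v),w\rangle_{{\mathscr{B}}^*,{\mathscr{B}}}=\langle\int_{Q}r(\tau_{\frac{x}{\varepsilon}}\omega)v(\omega,x)w(\omega,x)dx\rangle \quad\text{for all }v,w\in{\mathscr{B}};
\end{equation*}
upon the usual identification ${\mathscr{B}}^*\simeq {\mathscr{B}}$ this means $D\mathcal{R}_{\varepsilon}(v)=r(\tau_{\frac{x}{\varepsilon}}\omega)v$, a formula that is used repeatedly below.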
Let $T>0$. We consider the following differential inclusion
\begin{equation}\label{diff_incl}
0\in D\mathcal{R}_{\varepsilon}(\dot{u}(t))+\partial_{F}\mathcal{E}_{\varepsilon}(u(t)) \quad \text{for a.e. } t\in (0,T), \quad u(0)=u_{0,\varepsilon},
\end{equation}
where $\partial_F \mathcal{E}_{\varepsilon}: {\mathscr{B}} \to 2^{{\mathscr{B}}^*}$ is the Fr{\'e}chet subdifferential of $\mathcal{E}_{\varepsilon}$ given by
\begin{equation*}
\partial_F \mathcal{E}_{\varepsilon}(u)=\cb{\xi \in {\mathscr{B}}^{*}: \liminf_{w\to u}\frac{\mathcal{E}_{\varepsilon}(w)-\mathcal{E}_{\varepsilon}(u)-\langle\xi,w-u\rangle_{{\mathscr{B}}^*,{\mathscr{B}}}}{\|w-u\|_{{\mathscr{B}}}}\geq 0}.
\end{equation*}
Since ${\mathcal{E}}_{\varepsilon}(\cdot)-\Lambda\mathcal{R}_{\varepsilon}(\cdot)$ is convex, using standard convex analysis arguments (see Proposition 1.2 and Corollary 1.12.2 in \cite{kruger2003frechet}), it follows that
\begin{equation*}
\partial_F \mathcal{E}_{\varepsilon}(u)= \cb{\xi \in {\mathscr{B}}^*: {\mathcal{E}}_{\varepsilon}(u)\leq {\mathcal{E}}_{\varepsilon}(w)+\langle\xi,u-w\rangle_{{\mathscr{B}}^*,{\mathscr{B}}}- \Lambda \mathcal{R}_{\varepsilon}(u-w) \text{ for all }w\in {\mathscr{B}}}.
\end{equation*}
We refer to \cite{mielke2016evolutionary} for various other formulations of the differential inclusion (\ref{diff_incl}). If we assume $(B1)-(B2)$ and $u_{0,\varepsilon}\in dom({\mathcal{E}}_{\varepsilon})$, then (\ref{diff_incl}) admits a unique solution $u_{\varepsilon} \in H^1(0,T;{\mathscr{B}})$ (see e.g., \cite[Theorem 3.2]{clement2009introduction}).
As $\varepsilon\to 0$, we derive a limit gradient system which is described in the following. The state space for the effective model is ${\mathscr{B}}_0:=L^2_{{\textsf{inv}}}(\Omega)\otimes L^2(Q)$. The effective dissipation potential $\mathcal{R}_{\mathsf{hom}}: {\mathscr{B}}_0\to [0,\infty)$ is given by
\begin{equation*}
\mathcal{R}_{\mathsf{hom}}(v)=\frac{1}{2}\langle\int_{Q}r(\omega) |v(\omega,x)|^2 dx\rangle.
\end{equation*}
The energy functional ${\mathcal{E}}_{\mathsf{hom}}: {\mathscr{B}}_{0} \to \mathbb{R}\cup \cb{\infty}$ is defined as
\begin{align}\label{equation_442}
\begin{split}
& {\mathcal{E}}_{\mathsf{hom}}(u) = \\ &\inf_{\chi \in L^2_{pot}(\Omega)\otimes L^2(Q)}\langle \int_{Q} A(\omega)\brac{\nabla u(\omega,x)+\chi(\omega,x)}\cdot\brac{\nabla u(\omega,x)+\chi(\omega,x)}+ f(\omega,u(\omega,x))dx\rangle
\end{split}
\end{align}
for $u \in L^2_{{\textsf{inv}}}(\Omega)\otimes H^1(Q)\cap L^p_{\mathsf{inv}}(\Omega)\otimes L^p(Q)=: dom({\mathcal{E}}_{\mathsf{hom}})$ and ${\mathcal{E}}_{\mathsf{hom}}=\infty$ otherwise. We remark that $u\mapsto {\mathcal{E}}_{\mathsf{hom}}(u)-\Lambda \mathcal{R}_{\mathsf{hom}}(u)$ is convex. The limit differential inclusion is
\begin{equation}\label{diff:incl:2}
0\in D\mathcal{R}_{\mathsf{hom}}(\dot{u}(t))+\partial_{F}\mathcal{E}_{\mathsf{hom}}(u(t)) \quad \text{for a.e. } t\in (0,T), \quad u(0)=u_{0},
\end{equation}
where $\partial_{F}{\mathcal{E}}_{\mathsf{hom}}: {\mathscr{B}}_0 \to 2^{{\mathscr{B}}_0^*}$ is the Fr{\'e}chet subdifferential of ${\mathcal{E}}_{\mathsf{hom}}$, defined analogously to $\partial_{F}{\mathcal{E}}_{\varepsilon}$.
If $(B1)-(B2)$ hold and the initial datum satisfies $u_0\in dom({\mathcal{E}}_{\mathsf{hom}})$, then (\ref{diff:incl:2}) admits a unique solution $u\in H^1(0,T;{\mathscr{B}}_0)$ (see e.g. \cite[Theorem 3.2]{clement2009introduction}).
The following homogenization result is based on a strategy related to a general method for evolutionary $\Gamma$-convergence of abstract gradient systems presented in \cite[Theorem 3.2]{mielke2014deriving}. In our particular case, the latter strategy does not apply due to the lack of a compactness property used to treat the non-convexity of the energy functional. In our model a priori bounds do not imply compactness: namely $L^2(\Omega)\otimes H^1(Q)$ is not compactly embedded into ${\mathscr{B}}= L^2(\Omega\times Q)$. In contrast, in deterministic homogenization of similar problems (e.g. \cite{liero2015homogenization}) the compact Sobolev embedding $H^1(Q){\subset} L^p(Q)$ with $p<2^*$ is critically used. In the stochastic case, we only have $L^2(\Omega)\otimes H^1(Q)\subset L^2(\Omega)\otimes L^p(Q)$ continuously. We remedy this issue by reducing problem (\ref{diff_incl}) to an equivalent evolutionary variational inequality with a modified (convex) energy functional which allows us to pass to the limit $\varepsilon\to 0$ using merely weak convergence. As an additional merit of this procedure, in our results the growth assumptions on the integrand $f$ are independent of the Sobolev exponents.
\begin{thm}[Evolutionary $\Gamma$-convergence]\label{s3_thm_5} Let $p>2$ and $Q\subset \mathbb{R}^d$ be open and bounded. Assume $(B1)-(B2)$, and consider $u_0\in dom({\mathcal{E}}_{\mathsf{hom}})$, $u_{0,\varepsilon} \in dom({\mathcal{E}}_{\varepsilon})$ such that
\begin{equation*}
u_{0,\varepsilon} \to u_0 \text{ strongly in }{\mathscr{B}}, \quad {\mathcal{E}}_{\varepsilon}(u_{0,\varepsilon}) \to {\mathcal{E}}_{\mathsf{hom}}(u_0) \quad \text{(well-prepared initial data).}
\end{equation*}
Then $u_{\varepsilon}\in H^1(0,T;{\mathscr{B}})$, the unique solution to (\ref{diff_incl}), satisfies: For all $t\in [0,T]$
\begin{align*}
& u_{\varepsilon}(t) \to u(t) \text{ in }{\mathscr{B}}, \quad P_{{\textsf{inv}}}\nabla u_{\varepsilon}(t)\rightharpoonup \nabla u (t) \text{ weakly in }{\mathscr{B}}^d,
\end{align*}
where $u\in H^1(0,T;{\mathscr{B}}_0)$ is the unique solution to (\ref{diff:incl:2}). Moreover, it holds $\dot{u}_{\varepsilon} \to \dot{u}$ strongly in $L^2(0,T; {\mathscr{B}})$ and for any $t\in [0,T]$
\begin{equation*}
{\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(t))\to {\mathcal{E}}_{\mathsf{hom}}(u(t)).
\end{equation*}
\end{thm}
\textit{(For the proof see Section \ref{S_Proof_3}.)}
\begin{remark}
Note that the proof of the above theorem (in particular the convergence of the energies) allows us to additionally characterize the two-scale limit of $\nabla u_{\varepsilon}$. Specifically, for all $t\in [0,T]$ it holds
\begin{equation*}
\nabla u_{\varepsilon}(t) \overset{2s}{\rightharpoonup} \nabla u(t) +\chi(t) \; \text{in }{\mathscr{B}},
\end{equation*}
where $\chi(t)\in L^2_{\mathsf{pot}}(\Omega)\otimes L^2(Q)$ is the solution to the minimization problem given on the right-hand side of (\ref{equation_442}) (with $u=u(t)$).
\end{remark}
\begin{remark}[Ergodic case]
If we additionally assume that $\langle\cdot\rangle$ is ergodic, the limit system is driven by deterministic functionals. In particular, the limit is described by a state space $\tilde{{\mathscr{B}}}_0=L^2(Q)$, dissipation potential
\begin{equation*}
\tilde{\mathcal{R}}_{\mathsf{hom}}(u)=\frac{1}{2}\int_Q \langle r\rangle|u(x)|^2 dx,
\end{equation*}
and energy functional (for $u\in H^1(Q)\cap L^p(Q)$ and otherwise $\infty$)
\begin{equation*}
\tilde{{\mathcal{E}}}_{\mathsf{hom}}(u)=\int_{Q}A_{\mathsf{hom}}\nabla u(x)\cdot \nabla u(x) + f_{\mathsf{hom}}(u(x)) dx,
\end{equation*}
where $A_{\mathsf{hom}}$ and $f_{\mathsf{hom}}$ are defined as follows: Let $A_{\mathsf{hom}}F\cdot F=\inf_{\chi \in L^2_{pot}(\Omega)}\langle A(\omega)(F+\chi(\omega))\cdot (F+\chi(\omega))\rangle$ for $F\in \mathbb{R}^d$, and let $f_{\mathsf{hom}}(y)=\langle f(\omega,y)\rangle$ for $y\in \mathbb{R}$. This suggests that in the ergodic case we may lift the above averaged result to a quenched statement (convergence for $P$-a.e. $\omega\in \Omega$), similarly as in Section \ref{Section:4:3} for the homogenization of convex integrals.
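For instance, in the one-dimensional case $d=1$ the formula for $A_{\mathsf{hom}}$ can be evaluated explicitly: by ergodicity, $L^2_{pot}(\Omega)$ reduces to the mean-free fields, the minimizing $\chi$ makes $A(F+\chi)$ constant in $\omega$, and one obtains the harmonic mean
\begin{equation*}
A_{\mathsf{hom}}=\langle A^{-1}\rangle^{-1}\leq \langle A\rangle.
\end{equation*}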
\end{remark}
\subsection{Equality of mean and quenched limits}\label{Section:3:3}
In this section we show that for sequences of random functionals both mean and quenched homogenization (if both are possible) yield the same effective functional.
Let $p\in (1,\infty)$ and $Q\subset \mathbb{R}^d$ be open. Consider $\cb{{\mathcal{E}}^{\omega}_{\varepsilon}: L^p(Q)\to \mathbb{R} \cup \cb{\infty}}_{\omega\in\Omega}$, a family of random functionals that $\Gamma$-converges to a deterministic functional ${\mathcal{E}}_{\mathsf{hom}}: L^p(Q)\to \mathbb{R} \cup \cb{\infty}$ for $P$-a.e. $\omega\in \Omega$ (we refer to this notion as quenched homogenization). Under certain measurability assumptions (detailed below), we may consider the averaged functional ${\mathcal{E}}_{\varepsilon}: L^p(\Omega\times Q)\to \mathbb{R} \cup \cb{\infty}$, ${\mathcal{E}}_{\varepsilon}(u)=\langle{\mathcal{E}}^{\omega}_{\varepsilon}(u(\omega))\rangle$. We assume that ${\mathcal{E}}_{\varepsilon}$ $\Gamma$-converges in the mean ($2$-scale sense) to a deterministic limit $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}: L^p(Q)\to \mathbb{R}\cup \cb{\infty}$. A specific example of such a situation is given by integral functionals of the form ${\mathcal{E}}_{\varepsilon}^{\omega}(u)=\int_{Q}V(\tau_{\frac{x}{\varepsilon}}\omega,\nabla u(x))dx$, where $V$ satisfies the assumptions from Section \ref{Section_Convex}. A quenched homogenization result for integral functionals of this form is obtained in \cite{DalMaso1986} (based on the subadditive ergodic theorem), and a mean homogenization result for the corresponding averaged functionals is given in Section \ref{Section_Convex} (using the unfolding procedure). In this generic setting, we show below that the mean and quenched $\Gamma$-limits coincide, i.e. ${\mathcal{E}}_{\mathsf{hom}}=\widetilde{{\mathcal{E}}}_{\mathsf{hom}}$.
To make the above discussion precise, we require the following assumptions: There exist $\Omega'\subset \Omega$ with $P(\Omega')= 1$, $C>0$, and $\psi \in L^1(\Omega)$ such that:
\begin{enumerate}[(C1)]
\item {The mapping $\Omega\times L^p(Q)\ni (\omega,u)\mapsto {\mathcal{E}}^{\omega}_{\varepsilon}(u)$ is $\mathcal{F}\otimes {\mathcal{B}}(L^p(Q))$-measurable. For all $\omega \in \Omega'$, ${\mathcal{E}}^{\omega}_{\varepsilon}\geq -C$, $\inf_{u}{\mathcal{E}}^{\omega}_{\varepsilon}(u) \leq \psi(\omega)$ and ${\mathcal{E}}^{\omega}_{\varepsilon}$ is convex, proper, and l.s.c.}
(This implies that ${\mathcal{E}}_{\varepsilon}$ is a (well-defined) convex, proper and l.s.c. functional.)
\item {It holds that $dom({\mathcal{E}}_{\varepsilon}^{\omega})=X \subset W^{1,p}(Q)$ ($X$ is convex, closed and compactly embedded in $L^p(Q)$) and for all $\omega\in \Omega'$, ${\mathcal{E}}_{\varepsilon}^{\omega}(u)\geq \frac{1}{C}\|u\|^p_{W^{1,p}(Q)} - C$ for all $u\in L^p(Q)$.}
(This implies that ${\mathcal{E}}_{\varepsilon}(u)\geq \frac{1}{C}\|u\|^p_{L^p(\Omega)\otimes W^{1,p}(Q)}-C$ for all $u \in L^p(\Omega\times Q)$. Moreover, ${\mathcal{E}}_{\varepsilon}^{\omega}$ (resp. ${\mathcal{E}}_{\varepsilon}$) is equi-mildly coercive in $L^p(Q)$ (resp. w.r.t. weak two-scale convergence).)
\item {There exist ${\mathcal{E}}_{\mathsf{hom}}: L^p(Q)\to \mathbb{R} \cup \cb{\infty}, \; \widetilde{{\mathcal{E}}}_{\mathsf{hom}}: L^p(Q)\to \mathbb{R} \cup \cb{\infty}$ such that for all $\omega\in \Omega'$, ${\mathcal{E}}_{\varepsilon}^{\omega}\overset{\Gamma}{\to}{\mathcal{E}}_{\mathsf{hom}}$ in $L^p(Q)$, and ${\mathcal{E}}_{\varepsilon} \overset{\Gamma}{\rightharpoonup} \widetilde{{\mathcal{E}}}_{\mathsf{hom}}$ in the following sense:}
\begin{enumerate}[(i)]
\item {If $u_{\varepsilon} \overset{2}{\rightharpoonup} u$ where $u\in L^p(Q)$, then $\liminf_{\varepsilon\to 0}{{\mathcal{E}}}_{\varepsilon}(u_{\varepsilon}) \geq \widetilde{{\mathcal{E}}}_{\mathsf{hom}}(u)$.}
\item {For $u\in L^p(Q)$, there exists $u_{\varepsilon} \in L^p(\Omega\times Q)$ such that $u_{\varepsilon} \overset{2}{\rightharpoonup} u$ and ${{\mathcal{E}}}_{\varepsilon}(u_{\varepsilon}) \to \widetilde{{\mathcal{E}}}_{\mathsf{hom}}(u)$.}
_{\varepsilon}nd{enumerate}
_{\varepsilon}nd{enumerate}
\begin{prop}\label{proposition:818}
Let $p\in (1,\infty)$, $Q\subset \mathbb{R}^d$ be open and $\langle\cdot\rangle$ be ergodic. If we assume $(C1)-(C3)$, then
\begin{equation*}
{\mathcal{E}}_{\mathsf{hom}}=\widetilde{{\mathcal{E}}}_{\mathsf{hom}}.
\end{equation*}
(For the proof see Section \ref{S_Proof_3}.)
\end{prop}
\subsection{Proofs}\label{S_Proof_3}
\begin{proof}[Proof of Theorem \ref{thm1}]
(i)
The Poincar{\'e}-Korn inequality and the growth conditions of $V$ imply that $u_{\varepsilon}$ is bounded in $L^p(\Omega)\otimes W^{1,p}(Q)^d$. By Proposition \ref{prop1} there exist $u \in L^p_{{{\textsf{inv}}}}(\Omega)\otimes W^{1,p}(Q)^d$ and $\chi \in L^p_{\mathsf{pot}}(\Omega)\otimes L^p(Q)^d$ with the claimed convergence (up to a subsequence). From $\mathcal{T}_{\varepsilon} u_{\varepsilon} \in L^p(\Omega)\otimes W^{1,p}_0(Q)^d$ for every $\varepsilon>0$, we conclude that $u \in L^p_{{{\textsf{inv}}}}(\Omega)\otimes W^{1,p}_0(Q)^d$ (cf. Remark \ref{R_Two_1}).
(ii) The claim follows from Proposition \ref{P_Cont_1} (iii).
(iii) The existence of a strongly two-scale convergent sequence $u_{\varepsilon} \in L^p(\Omega)\otimes W^{1,p}_0(Q)^d$ follows from Remark \ref{rem14}. Furthermore, the convergence of the energy ${\mathcal{E}}_{\varepsilon}(u_{\varepsilon})\to {\mathcal{E}}_{0}(u,\chi)$ follows from Proposition \ref{P_Cont_1} (ii).
\end{proof}
\begin{proof}[Proof of Corollary~\ref{C:thm1}]
The statement follows by a standard argument from $\Gamma$-convergence: Since $u_{\varepsilon}$ is a minimizer we conclude that $\limsup_{{\varepsilon}\to 0}{\mathcal{E}}_{\varepsilon}(u_{\varepsilon})\leq\limsup_{{\varepsilon}\to 0}{\mathcal{E}}_{\varepsilon}(0)<\infty$. Hence, by Theorem~\ref{thm1} there exist $u\in L^p_{{\textsf{inv}}}(\Omega)\otimes W^{1,p}_0(Q)^d$ and $\chi\in L^p_{\mathsf{pot}}(\Omega)\otimes L^p(Q)^d$ such that $u_{\varepsilon} \wt u \text{ in }L^p(\Omega \times Q)^d$, $\nabla u_{\varepsilon} \wt \nabla u+\chi \text{ in }L^p(\Omega \times Q)^{d\times d}$, and
\begin{equation*}
\liminf\limits_{{\varepsilon}\to 0}{\mathcal{E}}_{\varepsilon}(u_{\varepsilon})\geq {\mathcal{E}}_0(u,\chi).
\end{equation*}
Let $(u_0,\chi_0)$ denote the minimizer of ${\mathcal{E}}_0$. Then by Theorem~\ref{thm1} (iii) there exists a recovery sequence $v_{\varepsilon}$ s.t.~${\mathcal{E}}_{\varepsilon}(v_{\varepsilon})\to {\mathcal{E}}_0(u_0,\chi_0)$, and thus
\begin{equation*}
\min{\mathcal{E}}_0=\lim\limits_{{\varepsilon}\to 0}{\mathcal{E}}_{\varepsilon}(v_{\varepsilon})\geq \liminf\limits_{{\varepsilon}\to 0}{\mathcal{E}}_{\varepsilon}(u_{\varepsilon})=\liminf\limits_{{\varepsilon}\to 0}\min{\mathcal{E}}_{\varepsilon}\geq {\mathcal{E}}_0(u,\chi)\geq \min{\mathcal{E}}_0,
\end{equation*}
and thus $(u,\chi)$ is a minimizer of ${\mathcal{E}}_0$ and ${\mathcal{E}}_{\varepsilon}(u_{\varepsilon})=\min{\mathcal{E}}_{\varepsilon}\to \min{\mathcal{E}}_0={\mathcal{E}}_0(u,\chi)$.
\end{proof}
Before presenting the proof of Theorem \ref{thm2}, we provide two auxiliary results.
\begin{lemma}[Stochastic Korn inequality]\label{lem8} Let $p\in (1,\infty)$. There exists $C>0$ such that
\begin{equation*}
\langle|\chi|^p\rangle\leq C \langle|\chi^s|^p\rangle \quad \text{for every }\chi\in L^p_{\mathsf{pot}}(\Omega)^d.
\end{equation*}
\end{lemma}
The proof of the above inequality is similar to the argument for the case $p=2$ in \cite{heida_schweizer2017stochastic}. For the reader's convenience, we provide it in Appendix \ref{appendix:1}.
For the proof of Theorem~\ref{thm2} we apply Castaing's measurable selection lemma in the following form:
\begin{lemma}[See Theorem III.6 and Proposition III.11 in \cite{castaing2006convex}]\label{castaing}
Let $X$ be a complete separable metric space, $(\mathcal{S},\sigma)$ a measurable space and $\Gamma:\mathcal{S}\rightarrow 2^X$ a multifunction. Further, assume that for all $x\in \mathcal{S}$, $\Gamma(x)$ is nonempty and closed in $X$, and for any closed $G\subset X$ we have
\begin{equation*}
\Gamma^{-1}(G):=\cb{x\in \mathcal{S}: \Gamma(x)\cap G\ne \varnothing}\in \sigma.
\end{equation*}
Then $\Gamma$ admits a measurable selection, i.e. there exists $\tilde{\Gamma}:\mathcal{S}\rightarrow X$ measurable with $\tilde{\Gamma}(x)\in \Gamma(x)$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm2}]
(i) According to Theorem \ref{thm1} (i) there exist $u\in W^{1,p}_0(Q)^d$ and $\chi\in L^p_{\mathsf{pot}}(\Omega)\otimes L^p(Q)^d$ such that (using Proposition \ref{prop1}) $u_{\varepsilon}$ satisfies the claimed convergences. Furthermore, we have
\begin{align*}
\liminf_{\varepsilon\rightarrow 0}\mathcal{E}_{\varepsilon}(u_{\varepsilon})\geq \mathcal{E}_{0}(u,\chi)\geq \mathcal{E}_{\hom}(u).
\end{align*}
(ii) We show that there exists $\chi \in L^p_{\mathsf{pot}}(\Omega)\otimes L^p(Q)^d$ such that ${\mathcal{E}}_{0}(u,\chi)={\mathcal{E}}_{\hom}(u)$, which implies the claim applying Theorem \ref{thm1} (iii). It is sufficient to show that for fixed $F \in \mathbb{R}^{d\times d}$ and a fixed $Q'\subset Q$ (measurable), we can find $\chi \in L^p_{\mathsf{pot}}(\Omega)\otimes L^p(Q')^d$ such that
\begin{equation}\label{claim1290}
\int_{Q'}\langle V(\omega,x,F^s+ \chi^s(x,\omega))\rangle dx = \int_{Q'}V_{\mathsf{hom}}(x,F)dx.
\end{equation}
Indeed, if the above holds, we approximate $\nabla u$ by piecewise-constant functions $F_k=\sum_{i}\mathbf{1}_{Q_{k,i}}F_{k,i} $ (in the strong $L^p(Q)$ topology), where $F_{k,i}\in \mathbb{R}^{d \times d}$, and we find $\chi_{k}=\sum_{i}\mathbf{1}_{Q_{k,i}}\chi_{k,i} \in L^p_{\mathsf{pot}}(\Omega)\otimes L^p(Q)^d$ such that
\begin{equation}\label{claim858}
\int_{Q}\langle V(\omega,x,F_k^s(x)+\chi_k^s(x,\omega))\rangle dx = \int_{Q}V_{\mathsf{hom}}(x,F_k(x))dx.
\end{equation}
Using the growth conditions of $V$ and Lemma \ref{lem8}, it follows $\limsup_{k\to \infty} \|\chi_k\|_{L^p_{\mathsf{pot}}(\Omega)\otimes L^p(Q)^d}<\infty$ and therefore we may extract a (not relabeled) subsequence and $\chi \in L^p_{\mathsf{pot}}(\Omega) \otimes L^p(Q)^d$ such that $\chi_k \rightharpoonup \chi$ weakly in $L^p(\Omega\times Q)^{d\times d}$. Note that the functional on the left-hand side of (\ref{claim858}) is weakly l.s.c. and the functional on the right-hand side is continuous (by continuity of $V_{\mathsf{hom}}(x,\cdot)$ and growth conditions of $V$). As a result of this, we may pass to the limit $k\to \infty$ in (\ref{claim858}), in order to obtain ${\mathcal{E}}_{0}(u,\chi) \leq {\mathcal{E}}_{\hom}(u)$. Also, the other inequality ${\mathcal{E}}_{0}(u,\chi) \geq {\mathcal{E}}_{\hom}(u)$ follows by the definition of $V_{\mathsf{hom}}$ and therefore we conclude that ${\mathcal{E}}_{0}(u,\chi) = {\mathcal{E}}_{\hom}(u)$.
In the following we show (\ref{claim1290}). Fix $F\in \mathbb{R}^{d\times d}$ and $Q'\subset Q$ and let
\begin{align*}
& f: Q'\times L^p_{\mathsf{pot}}(\Omega)^d \to \mathbb{R}, \quad f(x,\chi)= \langle V(\omega,x,F^s+\chi^s(\omega))\rangle,\\
& \phi: Q' \to \mathbb{R},\quad \phi(x)=\inf_{\chi\in L^p_{\mathsf{pot}}(\Omega)^d}f(x,\chi).
\end{align*}
We define a multifunction $\Gamma: Q' \to 2^{L^p_{\mathsf{pot}}(\Omega)^d}$ as (the set of all correctors corresponding to the point $x\in Q'$)
\begin{equation*}
\Gamma(x)= \cb{\chi \in L^p_{\mathsf{pot}}(\Omega)^d: f(x,\chi)\leq \phi(x)}.
\end{equation*}
For each $x\in Q'$, $\Gamma(x)$ is non-empty and closed (using the direct method of calculus of variations).
In the following, we show that for a closed set $G \subset L^p_{\mathsf{pot}}(\Omega)^d$, $\Gamma^{-1}(G)$ is measurable and therefore Lemma \ref{castaing} implies that there exists $\chi \in L^p_{\mathsf{pot}}(\Omega) \otimes L^p(Q')^d$ which satisfies (\ref{claim1290}).
Note that $f$ defines a Carath{\'e}odory integrand in the sense that for fixed $x\in Q'$, $f(x,\cdot)$ is continuous, and for fixed $\chi \in L^p_{\mathsf{pot}}(\Omega)$, $f(\cdot, \chi)$ is measurable. Moreover, since $L^p_{\mathsf{pot}}(\Omega)$ is separable, we find a countable (dense) set $\cb{\chi_k}\subset L^p_{\mathsf{pot}}(\Omega)^d$ such that $\phi(x)=\inf_{\chi\in L^p_{\mathsf{pot}}(\Omega)^d}f(x,\chi)=\inf_{k}f(x,\chi_k)$ (using that the infimum in the definition of $\phi$ is attained and that $f(x,\cdot)$ is continuous). As a result, we conclude that $\phi$ is measurable, and moreover the function $\tilde{f}:=f-\phi: Q'\times L^p_{\mathsf{pot}}(\Omega)^d\to \mathbb{R}$ is also a Carath{\'e}odory integrand. Consequently, \cite[Proposition 1]{rockafellar1971} implies that $x \mapsto \mathrm{epi}\, \tilde{f}_x$ (where $\mathrm{epi}\, \tilde{f}_x$ denotes the epigraph of the function $\tilde{f}(x,\cdot)$) is measurable in the following sense (see \cite[Theorem 1]{rockafellar1971}): For any closed set $\tilde{G} \subset \mathbb{R} \times L^p_{\mathsf{pot}}(\Omega)^d$, it holds that
\begin{equation*}
\cb{x \in Q': \mathrm{epi}\, \tilde{f}_x \cap \tilde{G} \neq \varnothing}= \cb{x\in Q': \exists (\alpha,\chi) \in \tilde{G}, \quad \tilde{f}(x,\chi)\leq \alpha}
\end{equation*}
is measurable. We choose $\tilde{G}=\cb{0}\times G$ and the above implies that $\Gamma^{-1}(G)$ is measurable. This concludes the proof.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop3}]
Uniqueness of minimizers follows by the uniform convexity assumption on the integrand $V$. As in the proof of Theorem \ref{thm2} (ii), we select $\chi \in L^p_{\mathsf{pot}}(\Omega)\otimes L^p(Q)^d$ such that $\int_{Q}V_{\hom}(x,\nabla u(x))dx=\int_{Q}\langle V(\omega, x, \nabla^s u(x)+\chi^s(\omega,x))\rangle dx$. Theorem \ref{thm1} (iii) implies that there exists a sequence $v_{\varepsilon}\in L^p(\Omega)\otimes W^{1,p}_0(Q)^d$ such that $v_{\varepsilon} \st u$ in $L^p(\Omega \times Q)^d$ and $\mathcal{E}_{\varepsilon}(v_{\varepsilon})\to \mathcal{E}_0(u,\chi)=\mathcal{E}_{\hom}(u)$. By the triangle inequality we have
$\|u_{\varepsilon} - u\|_{L^p(\Omega\times Q)}\leq \|u_{\varepsilon} - v_{\varepsilon}\|_{L^p(\Omega\times Q)}+\|v_{\varepsilon} - u\|_{L^p(\Omega\times Q)}$. By the isometry property of $\mathcal{T}_{\varepsilon}$ and strong two-scale convergence of $v_{\varepsilon}$ we have $\|v_{\varepsilon} - u\|_{L^p(\Omega\times Q)}=\|\mathcal{T}_{\varepsilon}(v_{\varepsilon} - u)\|_{L^p(\Omega\times Q)}=\|\mathcal{T}_{\varepsilon} v_{\varepsilon} - u\|_{L^p(\Omega\times Q)}\to 0$. Furthermore, the Poincar{\'e}-Korn inequality $\|u_{\varepsilon}-v_{\varepsilon}\|^p_{L^p(\Omega\times Q)}\leq C \|\nabla^su_{\varepsilon}-\nabla^sv_{\varepsilon}\|^p_{L^p(\Omega\times Q)}$ (for a generic constant $C$ that is independent of ${\varepsilon}$ but might change from line to line), the uniform convexity of $V$ in form of $\frac{C}{4}\|\nabla^su_{\varepsilon}-\nabla^sv_{\varepsilon}\|_{L^p(\Omega\times Q)}^p\leq \frac12{\mathcal{E}}_{\varepsilon}(v_{\varepsilon})+\frac12{\mathcal{E}}_{\varepsilon}(u_{\varepsilon})-{\mathcal{E}}_{\varepsilon}(\frac12(u_{\varepsilon}+v_{\varepsilon}))$,
and the minimality of $u_{\varepsilon}$, yield the estimate
\begin{equation*}
\|u_{\varepsilon}-v_{\varepsilon}\|^p_{L^p(\Omega\times Q)}\leq C\brac{{\mathcal{E}}_{\varepsilon}(v_{\varepsilon})-{\mathcal{E}}_{\varepsilon}(\tfrac12(u_{\varepsilon}+v_{\varepsilon}))}.
\end{equation*}
Since ${\mathcal{E}}_{\varepsilon}(v_{\varepsilon})\to{\mathcal{E}}_{\hom}(u)$ and $\liminf\limits_{{\varepsilon}\to 0}{\mathcal{E}}_{\varepsilon}(\tfrac12(u_{\varepsilon}+v_{\varepsilon}))\geq {\mathcal{E}}_0(u,\chi)={\mathcal{E}}_{\hom}(u)$, we conclude that the right-hand side converges to $0$.
Thus, $u_{\varepsilon}\to u$ in $L^p(\Omega\times Q)$, and the convergence of the gradient follows using Proposition \ref{prop1}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{s3_thm_5}]
\textit{Step 1. A priori estimates and compactness.}
In the following, using a standard argument, we derive an a priori estimate for the solution $u_{\varepsilon}$. We note that (\ref{diff_incl}) implies
\begin{equation*}
\mathcal{R}_{\varepsilon}(\dot{u}_{\varepsilon}(t))\leq \langle -\xi_{\varepsilon}(t),\dot{u}_{\varepsilon}(t)\rangle ,
\end{equation*}
where $\xi_{\varepsilon}(t)\in \varphiartial_F {\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(t))$. Integrating the above on the interval $(0,t)$ (with arbitrary $t\in (0,T]$) and using the chain rule for the ($\Lambda$-convex) energy functional ${\mathcal{E}}_{\varepsilon}$ (see e.g., \cite{Rossi2006} for the chain rule), we obtain
\begin{equation}\label{help:2}
{\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(t))+ \int_0^t \mathcal{R}_{\varepsilon}(\dot{u}_{\varepsilon}(s))ds \leq {\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(0)).
\end{equation}
Using the assumptions on the initial data $u_{\varepsilon}(0)$, the above implies that there exists $C>0$ (independent of $\varepsilon$) such that
\begin{equation}\label{uniform:estimate}
\sup_{t\in [0,T]} \brac{\|u_{\varepsilon}(t)\|_{L^p(\Omega\times Q)}+\|u_{\varepsilon}(t)\|_{L^2(\Omega)\otimes H^1(Q)}} + \|u_{\varepsilon}\|_{H^1(0,T;{\mathscr{B}})}\leq C.
\end{equation}
As a result of this, we find a (not relabeled) subsequence and $\widetilde{u}\in H^1(0,T; {\mathscr{B}})\cap L^p(0,T; L^p(\Omega\times Q))$ such that
\begin{equation*}
u_{\varepsilon} \rightharpoonup \widetilde{u} \quad \text{weakly in }H^1(0,T; {\mathscr{B}}) \text{ and weakly in }L^p(0,T; L^p(\Omega\times Q)).
\end{equation*}
Moreover, using the Arzel{\`a}-Ascoli theorem \cite[Proposition 3.3.1]{ambrosio2008gradient}, we may extract another subsequence such that for all $t\in [0,T]$
\begin{equation*}
u_{\varepsilon}(t)\rightharpoonup \widetilde{u}(t) \quad \text{weakly in }{\mathscr{B}},
\end{equation*}
and using (\ref{uniform:estimate}) and Proposition \ref{prop1} we conclude that $u_{\varepsilon}(t)\overset{2s}{\rightharpoonup} u(t)$ in ${\mathscr{B}}$, where $u(t):=P_{{\textsf{inv}}}\widetilde{u}(t)$. We consider the linear extension of $\mathcal{T}_{\varepsilon}$ to a (not relabeled) operator $\mathcal{T}_{\varepsilon}: L^2(0,T; {\mathscr{B}})\to L^2(0,T; {\mathscr{B}})$ and note that $\mathcal{T}_{\varepsilon}$ and $\dot{(\cdot)}$ commute. This results in the convergence $\mathcal{T}_{\varepsilon} \dot{u}_{\varepsilon} \rightharpoonup \dot{u}$ weakly in $L^2(0,T; {\mathscr{B}})$.
\textit{Step 2. Reduction to a convex problem.}
The differential inclusion in (\ref{diff_incl}) is equivalent to
\begin{equation*}
{\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(t))\leq {\mathcal{E}}_{\varepsilon}(w)-\langle D\mathcal{R}_{\varepsilon}(\dot{u}_{\varepsilon}(t)),u_{\varepsilon}(t)-w\rangle_{{\mathscr{B}}^*,{\mathscr{B}}}- \Lambda \mathcal{R}_{\varepsilon}(u_{\varepsilon}(t)-w) \quad \text{for all }w \in {\mathscr{B}}.
\end{equation*}
We set $w= e^{-\Lambda t} \tilde{w}$ with an arbitrary $\tilde{w}\in {\mathscr{B}}$ and multiply the above inequality by $e^{2\Lambda t}$ to obtain
\begin{eqnarray}\label{eq:898}
&& e^{2\Lambda t}{\mathcal{E}}_{\varepsilon}(e^{-\Lambda t}e^{\Lambda t}u_{\varepsilon}(t)) \\ & \leq & e^{2\Lambda t}{\mathcal{E}}_{\varepsilon}(e^{-\Lambda t}\tilde{w}) - e^{2\Lambda t} \langle D\mathcal{R}_{\varepsilon}(\dot{u}_{\varepsilon}(t)),u_{\varepsilon}(t) - e^{-\Lambda t}\tilde{w}\rangle_{{\mathscr{B}}^*,{\mathscr{B}}}-\Lambda e^{2\Lambda t} \mathcal{R}_{\varepsilon}(u_{\varepsilon}(t)-e^{-\Lambda t}\widetilde{w}). \nonumber
\end{eqnarray}
Using the quadratic structure of $\mathcal{R}_{\varepsilon}$ (and its homogeneity of degree 2), we compute
\begin{equation*}
-\Lambda e^{2\Lambda t}\mathcal{R}_{\varepsilon} (u_{\varepsilon}(t)-e^{-\Lambda t}\tilde{w})=\Lambda \mathcal{R}_{\varepsilon}(e^{\Lambda t}u_{\varepsilon}(t))-\Lambda \mathcal{R}_{\varepsilon}(\tilde{w})- \langle D\mathcal{R}_{\varepsilon}(\Lambda e^{\Lambda t}u_{\varepsilon}(t)),e^{\Lambda t}u_{\varepsilon}(t)-\tilde{w}\rangle_{{\mathscr{B}}^*,{\mathscr{B}}}.
\end{equation*}
Using this equality, (\ref{eq:898}) implies that for any $\tilde{w}\in {\mathscr{B}}$,
\begin{equation*}
\widetilde{{\mathcal{E}}}_{\varepsilon}(t,e^{\Lambda t}u_{\varepsilon}(t)) \leq \widetilde{{\mathcal{E}}}_{\varepsilon}(t,\tilde{w})-\langle D\mathcal{R}_{\varepsilon}(e^{\Lambda t}\dot{u}_{\varepsilon}(t)+\Lambda e^{\Lambda t}u_{\varepsilon}(t)),e^{\Lambda t}u_{\varepsilon}(t)-\tilde w\rangle_{{\mathscr{B}}^*,{\mathscr{B}}},
\end{equation*}
where $\widetilde{{\mathcal{E}}}_{\varepsilon}:[0,T]\times {\mathscr{B}} \to \mathbb{R} \cup \cb{\infty}$, $\widetilde{{\mathcal{E}}}_{\varepsilon}(t,v) = e^{2\Lambda t}{\mathcal{E}}_{\varepsilon}(e^{-\Lambda t}v)-\Lambda \mathcal{R}_{\varepsilon}(v)$. We remark that for each $t\in [0,T]$, $\widetilde{{\mathcal{E}}}_{\varepsilon}(t,\cdot)$ is a convex, l.s.c, proper functional.
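Indeed, since $\mathcal{R}_{\varepsilon}$ is homogeneous of degree $2$, we have $\Lambda\mathcal{R}_{\varepsilon}(v)=e^{2\Lambda t}\Lambda\mathcal{R}_{\varepsilon}(e^{-\Lambda t}v)$, and therefore
\begin{equation*}
\widetilde{{\mathcal{E}}}_{\varepsilon}(t,v)=e^{2\Lambda t}\brac{{\mathcal{E}}_{\varepsilon}(e^{-\Lambda t}v)-\Lambda\mathcal{R}_{\varepsilon}(e^{-\Lambda t}v)},
\end{equation*}
which is convex in $v$ as a positive multiple of the composition of the convex functional ${\mathcal{E}}_{\varepsilon}-\Lambda\mathcal{R}_{\varepsilon}$ (cf. the remark following $(B1)-(B2)$) with the linear map $v\mapsto e^{-\Lambda t}v$.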
We introduce a new variable $v_{\varepsilon}: [0,T]\to {\mathscr{B}}$ given by $v_{\varepsilon}(t):=e^{\Lambda t}u_{\varepsilon}(t)$ and note that $\dot{v}_{\varepsilon}(t)=e^{\Lambda t} \dot{u}_{\varepsilon}(t)+\Lambda e^{\Lambda t}u_{\varepsilon}(t)$. As a result of this,
the above inequality implies that for all $w \in L^2(0,T; {\mathscr{B}})$
\begin{equation}\label{main:B}
\widetilde{{\mathcal{E}}}_{\varepsilon}(t,v_{\varepsilon}(t))+\langle -D\mathcal{R}_{\varepsilon}(\dot{v}_{\varepsilon}(t)),w(t)\rangle_{{\mathscr{B}}^*,{\mathscr{B}}}-\widetilde{{\mathcal{E}}}_{\varepsilon}(t,w(t))\leq \langle -D\mathcal{R}_{\varepsilon}(\dot{v}_{\varepsilon}(t)),v_{\varepsilon}(t)\rangle_{{\mathscr{B}}^*,{\mathscr{B}}}.
\end{equation}
Integrating the above inequality on the interval $[0,T]$ and using the chain rule, we obtain (choosing $w=w_{\varepsilon}$, an arbitrary sequence that we specify below)
\begin{equation}\label{main:inequality:189}
\int_0^T \widetilde{{\mathcal{E}}}_{\varepsilon}(t,v_{\varepsilon}(t))+\langle -D\mathcal{R}_{\varepsilon}(\dot{v}_{\varepsilon}(t)),w_{\varepsilon}(t)\rangle_{{\mathscr{B}}^*,{\mathscr{B}}}-\widetilde{{\mathcal{E}}}_{\varepsilon}(t,w_{\varepsilon}(t))dt\leq -\mathcal{R}_{\varepsilon}(v_{\varepsilon}(T))+\mathcal{R}_{\varepsilon}(v_{\varepsilon}(0)).
\end{equation}
\textit{Step 3. Passage to the limit $\varepsilon\to 0$.}
Since $u_{\varepsilon}(t)\overset{2s}{\rightharpoonup}u(t)$ in ${\mathscr{B}}$, it follows that $v_{\varepsilon}(t)\overset{2s}{\rightharpoonup} v(t):= e^{\Lambda t} u(t)$ in ${\mathscr{B}}$ for all $t\in [0,T]$. Note that $v_{\varepsilon}(0)=u_{\varepsilon}(0)\to u(0)=v(0)$ strongly in ${\mathscr{B}}$ and therefore the last term on the right-hand side of (\ref{main:inequality:189}) converges to $\mathcal{R}_{\mathsf{hom}}(v(0))$. The first term on the right-hand side satisfies
\begin{equation}\label{eq:920}
\limsup_{\varepsilon\to 0}\brac{-\mathcal{R}_{\varepsilon}(v_{\varepsilon}(T))} = -\liminf_{\varepsilon\to 0}\frac{1}{2}\langle\int_Q r(\omega)|\mathcal{T}_{\varepsilon} v_{\varepsilon}(T)|^2 dx \rangle\leq -\mathcal{R}_{\mathsf{hom}}(v(T)).
\end{equation}
For the first term on the left-hand side, using Fatou's lemma, we have
\begin{equation*}
\liminf_{\varepsilon\to 0}\int_{0}^{T}\widetilde{{\mathcal{E}}}_{\varepsilon}(t,v_{\varepsilon}(t))dt \geq \int_{0}^{T}\liminf_{\varepsilon\to 0} \widetilde{{\mathcal{E}}}_{\varepsilon}(t,v_{\varepsilon}(t))dt.
\end{equation*}
Moreover, for a fixed $t\in (0,T)$, we find a subsequence $\varepsilon'$ such that $\liminf_{\varepsilon\to 0}\widetilde{{\mathcal{E}}}_{\varepsilon}(t,v_{\varepsilon}(t))=\lim_{\varepsilon'\to 0}\widetilde{{\mathcal{E}}}_{\varepsilon'}(t,v_{\varepsilon'}(t))$. With help of the uniform estimate (\ref{uniform:estimate}), we obtain $v_{\varepsilon'}(t)\overset{2s}{\rightharpoonup} v(t)$ in $L^p(\Omega\times Q)$ and (up to a subsequence) $\nabla v_{\varepsilon'}(t)\overset{2s}{\rightharpoonup}\nabla v(t)+\chi$ in $L^2(\Omega\times Q)$ for some $\chi \in L^2_{\mathsf{pot}}(\Omega)\otimes L^2(Q)$. Therefore, we have
\begin{align}\label{eq:928}
\begin{split}
\liminf_{\varepsilon\to 0}\widetilde{{\mathcal{E}}}_{\varepsilon}(t,v_{\varepsilon}(t)) \geq & \liminf_{\varepsilon'\to 0} \langle\int_Q A(\omega)\mathcal{T}_{\varepsilon'}\nabla v_{\varepsilon'}(t)(\omega,x)\cdot \mathcal{T}_{\varepsilon'} \nabla v_{\varepsilon'}(t)(\omega,x)dx\rangle \\ & +\liminf_{\varepsilon'\to 0} \langle\int_Q e^{2\Lambda t}f(\omega, e^{-\Lambda t} \mathcal{T}_{\varepsilon'} v_{\varepsilon'}(t)(\omega,x))-\Lambda \frac{r(\omega)}{2}|\mathcal{T}_{\varepsilon'} v_{\varepsilon'}(t)(\omega,x)|^2 dx\rangle\\ \geq & \widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,v(t)),
\end{split}
\end{align}
where $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,v):=e^{2\Lambda t}{\mathcal{E}}_{\mathsf{hom}}(e^{-\Lambda t}v)-\Lambda \mathcal{R}_{\mathsf{hom}}(v)$ and the last inequality follows by convexity (and l.s.c.) of the underlying functionals.
In order to complete the limit passage in (\ref{main:inequality:189}), it is left to treat the second and third terms on the left-hand side. In the following, we show that there exists a sequence $w_{\varepsilon}\in L^2(0,T; {\mathscr{B}})$ such that
\begin{equation}\label{second:term:convergence}
\lim_{\varepsilon\to 0} \int_0^T \langle -D\mathcal{R}_{\varepsilon}(\dot{v}_{\varepsilon}(t)),w_{\varepsilon}(t)\rangle_{{\mathscr{B}}^*,{\mathscr{B}}}-\widetilde{{\mathcal{E}}}_{\varepsilon}(t,w_{\varepsilon}(t))dt= \int_0^T \widetilde{{\mathcal{E}}}_{\mathsf{hom}}^*(t,-D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t)))dt,
\end{equation}
where $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}^*(t,\cdot):{\mathscr{B}}_0^* \to \mathbb{R} \cup \cb{\infty}$ denotes the conjugate of the functional $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,\cdot)$, and it is defined as
\begin{equation*}
\widetilde{{\mathcal{E}}}_{\mathsf{hom}}^*(t,\xi)=\sup_{w\in {\mathscr{B}}_0}\brac{\langle\xi,w\rangle_{{\mathscr{B}}_0^*,{\mathscr{B}}_0}-\widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,w)}.
\end{equation*}
We remark that $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}: [0,T]\times {\mathscr{B}}_0 \to \mathbb{R} \cup \cb{\infty}$ is a normal integrand in the sense that it is $\mathcal{L}(0,T)\otimes \mathcal{B}({\mathscr{B}}_0)$-measurable and for each $t\in [0,T]$, $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,\cdot)$ is l.s.c. on ${\mathscr{B}}_0$ and it is proper. Moreover, for each $t\in [0,T]$, $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,\cdot)$ is convex. \cite[Theorem 2]{rockafellar1971} and the fact that the functional $w\in L^2(0,T; {\mathscr{B}}_0)\mapsto \int_0^T \widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,w(t))dt$ attains its minimum imply that there exists $w\in L^2(0,T; {\mathscr{B}}_0)$ such that
\begin{equation*}
\int_0^T \widetilde{{\mathcal{E}}}_{\mathsf{hom}}^*(t,-D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t)))dt = \int_0^T \langle -D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t)),w(t)\rangle_{{\mathscr{B}}_0^*,{\mathscr{B}}_0}-\widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,w(t)) dt.
\end{equation*}
Moreover, it holds that $\nabla w \in L^2(0,T; {\mathscr{B}}_0^d)$ and therefore similarly as in the proof of Theorem \ref{thm2} (ii) we may find $\chi \in L^2(0,T; L^2_{\mathsf{pot}}(\Omega)\otimes L^2(Q))$ such that
\begin{equation*}
\int_0^T \widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,w(t))dt = \int_0^T e^{2\Lambda t}{\mathcal{E}}_0(e^{-\Lambda t}w(t),e^{-\Lambda t}\chi(t))-\Lambda \mathcal{R}_{\mathsf{hom}}(w(t))dt,
\end{equation*}
where ${\mathcal{E}}_0(v,\chi)= \langle\int_Q A(\omega)(\nabla v(\omega,x)+\chi(\omega,x))\cdot (\nabla v(\omega,x) + \chi(\omega,x))+ f(\omega, v(\omega,x))dx\rangle$. In the following, we construct a strong recovery sequence for $w$ and $\chi$ similarly as in Lemma \ref{Nonlinear_recovery} with the only difference that the functions to be recovered are time-dependent. Since $\chi \in L^2(0,T; L^2_{\mathsf{pot}}(\Omega)\otimes L^2(Q))$, we find a sequence $g_{\delta}=\sum_{i=1}^{n_{\delta}}\xi_i^{\delta}\eta_i^{\delta}\varphi_i^{\delta}$ with $\xi_i^{\delta}\in L^p(0,T)$, $\eta_i^{\delta}\in C^{\infty}_c(Q)$ and $\varphi_i^{\delta}\in H^1(\Omega)\cap L^p(\Omega)$, such that
\begin{equation*}
\|Dg_{\delta}-\chi\|_{L^2(0,T;{\mathscr{B}}^d)}\to 0 \quad \text{as }\delta \to 0.
\end{equation*}
Above, by a truncation and mollification argument (see e.g. \cite[Lemma 2.2]{bourgeat1994stochastic}) we may choose $\varphi_i^{\delta}\in H^1(\Omega)\cap L^p(\Omega)$ and not only in $H^1(\Omega)$ (as the definition of $L^2_{\mathsf{pot}}(\Omega)$ suggests).
We define $w_{\delta,\varepsilon}= w+ \varepsilon \mathcal{T}_{-\varepsilon}g_{\delta}$ and similarly as in the proof of Lemma \ref{Nonlinear_recovery}, we compute
\begin{eqnarray*}
&& \|\mathcal{T}_{\varepsilon} w_{\delta,\varepsilon} - w\|_{L^p(0,T; L^p(\Omega \times Q))}+\|\mathcal{T}_{\varepsilon} \nabla w_{\delta,\varepsilon} - \nabla w -\chi \|_{L^2(0,T; {\mathscr{B}}^d)}\\ & \leq & \varepsilon \|g_{\delta}\|_{L^p(0,T; L^p(\Omega \times Q))}+ \|Dg_{\delta} -\chi \|_{L^2(0,T; {\mathscr{B}}^d)}+\varepsilon \|\nabla g_{\delta} \|_{L^2(0,T; {\mathscr{B}}^d)}.
\end{eqnarray*}
Letting first $\varepsilon \to 0$ and then $\delta \to 0$, the right-hand side above vanishes, therefore we can extract a diagonal sequence $\delta(\varepsilon)\to 0$ (as $\varepsilon\to 0$) such that $w_{\varepsilon}:=w_{\delta(\varepsilon),\varepsilon}$ satisfies
\begin{equation*}
\mathcal{T}_{\varepsilon} w_{\varepsilon} \to w \text{ strongly in }L^p(0,T; L^p(\Omega\times Q)), \quad \mathcal{T}_{\varepsilon} \nabla w_{\varepsilon} \to \nabla w + \chi \text{ strongly in }L^2(0,T; {\mathscr{B}}^d).
\end{equation*}
For the sequence $w_{\varepsilon}$, we have
\begin{eqnarray}
& & \int_0^T \langle -D\mathcal{R}_{\varepsilon}(\dot{v}_{\varepsilon}(t)),w_{\varepsilon}(t)\rangle_{{\mathscr{B}}^*,{\mathscr{B}}}-\widetilde{{\mathcal{E}}}_{\varepsilon}(t,w_{\varepsilon}(t))dt\\ & = & \int_{0}^T\langle\int_Q -r(\omega)\mathcal{T}_{\varepsilon} \dot{v}_{\varepsilon}(t,\omega,x) \mathcal{T}_{\varepsilon} w_{\varepsilon}(t,\omega,x)- A(\omega)\mathcal{T}_{\varepsilon} \nabla w_{\varepsilon}(t,\omega,x)\cdot \mathcal{T}_{\varepsilon} \nabla w_{\varepsilon}(t,\omega,x)dx\rangle dt \nonumber \\
&& - \int_{0}^T e^{2\Lambda t} \langle\int_{Q}f(\omega,e^{-\Lambda t}\mathcal{T}_{\varepsilon} w_{\varepsilon}(t,\omega,x))-\frac{\Lambda}{2} r(\omega)|\mathcal{T}_{\varepsilon} w_{\varepsilon}(t,\omega,x)|^2 dx\rangle dt. \nonumber
\end{eqnarray}
Using the convergence properties of $u_{\varepsilon}$, we obtain that $\mathcal{T}_{\varepsilon} \dot{v}_{\varepsilon} \rightharpoonup \dot{v}$ weakly in $L^2(0,T; {\mathscr{B}})$ and therefore the first term on the right-hand side above converges to $\int_{0}^T \langle -D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t)),w(t)\rangle_{{\mathscr{B}}_0^*,{\mathscr{B}}_0}dt$. Using the strong convergence of $\mathcal{T}_{\varepsilon} \nabla w_{\varepsilon}$, it follows that the second term on the right-hand side converges to $-\int_0^{T}\langle\int_Q A(\omega)(\nabla w(t,\omega,x)+\chi(t,\omega,x))\cdot (\nabla w(t,\omega,x)+\chi(t,\omega,x))dx\rangle dt$. Using the growth assumptions of $f$ and its continuity in its second variable, and with the help of the strong convergence $\mathcal{T}_{\varepsilon} w_{\varepsilon} \to w$ in $L^p(0,T; L^p(\Omega\times Q))$, we conclude that the sum of the last two terms converges to
\begin{equation*}
-\int_{0}^T e^{2\Lambda t}\langle\int_{Q}f(\omega,e^{-\Lambda t}w(t,\omega,x))dx\rangle+\Lambda \mathcal{R}_{\mathsf{hom}}(w(t))dt.
\end{equation*}
Collecting the above statements, we have that $w_{\varepsilon}$ satisfies (\ref{second:term:convergence}).
Finally, considering all the above estimates for the terms in (\ref{main:inequality:189}), we are able to pass to the limit $\varepsilon\to 0$ in (\ref{main:inequality:189}) to obtain
\begin{eqnarray*}
& & \int_{0}^T \widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,v(t))+\widetilde{{\mathcal{E}}}^*_{\mathsf{hom}}(t,-D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t))) dt\\ & \leq & -\mathcal{R}_{\mathsf{hom}}(v(T))+\mathcal{R}_{\mathsf{hom}}(v(0))=\int_{0}^T \langle -D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t)),v(t)\rangle_{{\mathscr{B}}_0^*,{\mathscr{B}}_0}dt.
\end{eqnarray*}
We have $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,v(t))+\widetilde{{\mathcal{E}}}^*_{\mathsf{hom}}(t,-D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t))) \geq \langle -D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t)),v(t)\rangle_{{\mathscr{B}}_0^*,{\mathscr{B}}_0}$ (for a.e. $t$) by the definition of $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}^*$. As a result of this and of the above inequality, it follows that for a.e. $t$ it holds
\begin{equation}\label{help:1}
\widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,v(t))+\widetilde{{\mathcal{E}}}^*_{\mathsf{hom}}(t,-D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t))) = \langle -D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t)),v(t)\rangle_{{\mathscr{B}}_0^*,{\mathscr{B}}_0}.
\end{equation}
Consequently, we obtain (using standard convex analysis arguments)
\begin{equation*}
\widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,v(t)) \leq \widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,w)+\langle -D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t)),v(t)-w\rangle_{{\mathscr{B}}_0^*,{\mathscr{B}}_0} \quad \text{for all }w\in {\mathscr{B}}_0.
\end{equation*}
Using the above inequality and similar reasoning as in Step 2, we obtain that $u$ satisfies (\ref{diff:incl:2}).
Moreover, note that using (\ref{eq:920}) and the strong convergence of the initial data, we obtain
\begin{equation*}
\limsup_{\varepsilon\to 0} \brac{-\mathcal{R}_{\varepsilon}(v_{\varepsilon}(T))+\mathcal{R}_{\varepsilon}(v_{\varepsilon}(0))}\leq -\mathcal{R}_{\mathsf{hom}}(v(T))+\mathcal{R}_{\mathsf{hom}}(v(0)).
\end{equation*}
Also, exploiting the inequality (\ref{main:inequality:189}) and the liminf inequalities (\ref{eq:928}) and (\ref{second:term:convergence}), we obtain
\begin{align*}
\liminf_{\varepsilon\to 0}\brac{-\mathcal{R}_{\varepsilon}(v_{\varepsilon}(T))+\mathcal{R}_{\varepsilon}(v_{\varepsilon}(0))} \geq \int_{0}^T \widetilde{{\mathcal{E}}}_{\mathsf{hom}}(t,v(t))+\widetilde{{\mathcal{E}}}^*_{\mathsf{hom}}(t,-D\mathcal{R}_{\mathsf{hom}}(\dot{v}(t))) dt.
\end{align*}
Finally, the above and (\ref{help:1}) imply that $\liminf_{\varepsilon\to 0}\brac{-\mathcal{R}_{\varepsilon}(v_{\varepsilon}(T))+\mathcal{R}_{\varepsilon}(v_{\varepsilon}(0))} \geq -\mathcal{R}_{\mathsf{hom}}(v(T))+\mathcal{R}_{\mathsf{hom}}(v(0))$ and therefore
\begin{equation*}
\lim_{\varepsilon \to 0}\brac{-\mathcal{R}_{\varepsilon}(v_{\varepsilon}(T))+\mathcal{R}_{\varepsilon}(v_{\varepsilon}(0))}= -\mathcal{R}_{\mathsf{hom}}(v(T))+\mathcal{R}_{\mathsf{hom}}(v(0)).
\end{equation*}
Furthermore, since $v_{\varepsilon}(0)=u_{\varepsilon}(0)\to u(0)=v(0)$ strongly in ${\mathscr{B}}$, it follows that $\mathcal{R}_{\varepsilon}(v_{\varepsilon}(0))\to \mathcal{R}_{\mathsf{hom}}(v(0))$ and thus $\mathcal{R}_{\varepsilon}(v_{\varepsilon}(T))\to \mathcal{R}_{\mathsf{hom}}(v(T))$. This yields that $v_{\varepsilon}(T)\overset{2}{\to} v(T)$ strongly in ${\mathscr{B}}$ and using that $P_{\mathsf{inv}}v(T)=v(T)$ it follows that $v_{\varepsilon}(T) \to v(T)$ strongly in ${\mathscr{B}}$. Consequently, we obtain $u_{\varepsilon}(T)\to u(T)$ strongly in ${\mathscr{B}}$. The above procedure can be repeated with $T$ replaced by an arbitrary $t\in (0,T)$, hence we obtain that $u_{\varepsilon}(t)\to u(t)$ strongly in ${\mathscr{B}}$ for each $t\in [0,T]$.
\textit{Step 4. Convergence of $\dot{u}_{\varepsilon}$ and ${\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(t))$.}
We test (\ref{diff_incl}) with $\dot{u}_{\varepsilon}$ and with the help of the chain rule for ${\mathcal{E}}_{\varepsilon}$ we obtain
\begin{equation*}
\langle D\mathcal{R}_{\varepsilon}(\dot{u}_{\varepsilon}(t)),\dot{u}_{\varepsilon}(t)\rangle_{{\mathscr{B}}^{*},{\mathscr{B}}}=- \frac{d}{dt}{\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(t)).
\end{equation*}
For an arbitrary $t\in (0,T]$, we integrate the above equality on the interval $(0,t)$ to obtain
\begin{equation*}
\int_{0}^t \langle D\mathcal{R}_{\varepsilon}(\dot{u}_{\varepsilon}(s)),\dot{u}_{\varepsilon}(s)\rangle_{{\mathscr{B}}^{*},{\mathscr{B}}}ds = {\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(0))-{\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(t)).
\end{equation*}
Since $u_{\varepsilon}(t)\to u(t)$ strongly in ${\mathscr{B}}$, we obtain that $\liminf_{\varepsilon\to 0}{\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(t))\geq {\mathcal{E}}_{\mathsf{hom}}(u(t))$ using the usual two-scale convergence arguments for the first (quadratic) part of the energy and strong convergence of $\mathcal{T}_{\varepsilon} u_{\varepsilon}$ for the second (non-convex) part. As a consequence, using the convergence ${\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(0))\to {\mathcal{E}}_{\mathsf{hom}}(u(0))$, we obtain
\begin{equation*}
\limsup_{\varepsilon\to 0}\int_{0}^t \langle D\mathcal{R}_{\varepsilon}(\dot{u}_{\varepsilon}(s)),\dot{u}_{\varepsilon}(s)\rangle_{{\mathscr{B}}^{*},{\mathscr{B}}}ds \leq {\mathcal{E}}_{\mathsf{hom}}(u(0))-{\mathcal{E}}_{\mathsf{hom}}(u(t)) = \int_{0}^t \langle D\mathcal{R}_{\mathsf{hom}}(\dot{u}(s)),\dot{u}(s)\rangle_{{\mathscr{B}}^{*}_0,{\mathscr{B}}_0}ds,
\end{equation*}
where in the last equality we use that $u$ satisfies (\ref{diff:incl:2}) and the chain rule for ${\mathcal{E}}_{\mathsf{hom}}$. Note that $\int_{0}^t\langle D\mathcal{R}_{\varepsilon}(\dot{u}_{\varepsilon}(s)),\dot{u}_{\varepsilon}(s)\rangle_{{\mathscr{B}}^{*},{\mathscr{B}}}ds = \int_0^t\langle\int_{Q} r |\mathcal{T}_{\varepsilon} \dot{u}_{\varepsilon}(s)|^2 dx\rangle ds$ and since $\mathcal{T}_{\varepsilon} \dot{u}_{\varepsilon} \rightharpoonup \dot{u}$ weakly in $L^2(0,T; {\mathscr{B}})$, it follows that
\begin{equation*}
\liminf_{\varepsilon\to 0} \int_{0}^t\langle D\mathcal{R}_{\varepsilon}(\dot{u}_{\varepsilon}(s)),\dot{u}_{\varepsilon}(s)\rangle_{{\mathscr{B}}^{*},{\mathscr{B}}}ds \geq \int_{0}^t \langle D\mathcal{R}_{\mathsf{hom}}(\dot{u}(s)),\dot{u}(s)\rangle_{{\mathscr{B}}^{*}_0,{\mathscr{B}}_0}ds.
\end{equation*}
Combining the last two inequalities (and the weak convergence $\mathcal{T}_{\varepsilon} \dot{u}_{\varepsilon} \rightharpoonup \dot{u}$), we conclude that for any $t\in(0,T]$
\begin{equation*}
\dot{u}_{\varepsilon} \to \dot{u} \text{ strongly in }L^2(0,t; {\mathscr{B}}), \quad {\mathcal{E}}_{\varepsilon}(u_{\varepsilon}(t))\to {\mathcal{E}}_{\mathsf{hom}}(u(t)).
\end{equation*}
Note that all of the above results hold for a subsequence of $\cb{\varepsilon}$, however using the uniqueness property of the solution of the limit problem we conclude (using a standard contradiction argument) that all the convergence statements hold for the entire sequence $\cb{\varepsilon}$. This concludes the proof.
\end{proof}
\begin{proof}[Proof of Proposition \ref{proposition:818}]
Note that using the $\Gamma$-convergence in $(C3)$ and the properties of ${\mathcal{E}}_{\varepsilon}^{\omega}$ and ${\mathcal{E}}_{\varepsilon}$, it follows that ${\mathcal{E}}_{\mathsf{hom}}$ and $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}$ are convex, proper and l.s.c. functionals. To prove the claim of the proposition it is sufficient to show that $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}^*={\mathcal{E}}_{\mathsf{hom}}^*$ since by Proposition 2 in \cite{rockafellar1971} $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}^{**} = \widetilde{{\mathcal{E}}}_{\mathsf{hom}}$ and ${{\mathcal{E}}}_{\mathsf{hom}}^{**}={{\mathcal{E}}}_{\mathsf{hom}}$. Note that $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}^*: L^q(Q)\to \mathbb{R} \cup \cb{\infty}$ is the Legendre-Fenchel conjugate of the functional $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}$ defined by $\widetilde{{\mathcal{E}}}_{\mathsf{hom}}^*(f)=\sup_{u} \brac{\int_{Q}fu-\widetilde{{\mathcal{E}}}_{\mathsf{hom}}(u)}$ (analogously we define $\brac{{\mathcal{E}}_{\varepsilon}^{\omega}}^*, {\mathcal{E}}_{\varepsilon}^{*}$ and ${\mathcal{E}}_{\mathsf{hom}}^*$).
According to Theorem 2 in \cite{rockafellar1971}, it holds that for any $f\in L^q(Q)$,
\begin{equation}\label{equation:117}
\langle\brac{{\mathcal{E}}_{\varepsilon}^{\omega}}^{*}(f)\rangle= {{\mathcal{E}}}_{\varepsilon}^{*}(f).
\end{equation}
${\mathcal{E}}_{\mathsf{hom}}^*=\widetilde{{\mathcal{E}}}_{\mathsf{hom}}^*$ follows by passing to the limit $\varepsilon \to 0$ in the above equality and using the following: \textit{(a)} $\langle\brac{{\mathcal{E}}_{\varepsilon}^{\omega}}^*(f)\rangle \to {\mathcal{E}}_{\mathsf{hom}}^*(f)$, \textit{(b)} ${\mathcal{E}}^*_{\varepsilon}(f)\to \widetilde{{\mathcal{E}}}_{\mathsf{hom}}^*(f)$ as $\varepsilon\to 0$.
In the following we show only \textit{(a)}, and \textit{(b)} follows similarly (cf. proof of Corollary \ref{C:thm1}). For an arbitrary $u\in L^p(Q)$, using the growth assumption in $(C2)$, it follows that
\begin{equation*}
{\mathcal{E}}_{\varepsilon}^{\omega}(u)-\int_{Q}fu \leq \brac{\frac{C}{p}+1}{\mathcal{E}}_{\varepsilon}^{\omega}(u)+\frac{C^2}{p}+\frac{1}{q}{\|f\|^q_{L^q(Q)}}.
\end{equation*}
As a result of this and the assumption $\inf_{u}{\mathcal{E}}_{\varepsilon}^{\omega}(u)\leq \psi(\omega)$, it follows that
\begin{equation*}
\inf_{u}\brac{{\mathcal{E}}_{\varepsilon}^{\omega}(u)-\int_{Q}fu} \leq \brac{\frac{C}{p}+1}\psi(\omega)+\frac{C^2}{p}+\frac{1}{q}{\|f\|^q_{L^q(Q)}}.
\end{equation*}
Note that for $P$-a.e. $\omega\in \Omega$, $\psi(\omega)<\infty$, thus the above inequality and the $\Gamma$-convergence ${\mathcal{E}}_{\varepsilon}^{\omega}\overset{\Gamma}{\to}{\mathcal{E}}_{\mathsf{hom}}$ imply that for $P$-a.e. $\omega\in \Omega$ (using a standard $\Gamma$-convergence argument)
\begin{equation*}
-\brac{{\mathcal{E}}_{\varepsilon}^{\omega}}^*(f)=\inf_{u}\brac{{\mathcal{E}}_{\varepsilon}^{\omega}(u)-\int_{Q}f u} \to \inf_{u}\brac{{\mathcal{E}}_{\mathsf{hom}}(u)-\int_{Q}f u}= -{\mathcal{E}}_{\mathsf{hom}}^*(f).
\end{equation*}
Consequently, the dominated convergence theorem implies \textit{(a)}.
\end{proof}
\section{Quenched stochastic two-scale convergence and relation to stochastic unfolding}\label{Section_4}
In this section, we recall the concept of quenched stochastic two-scale convergence (cf.~\cite{Zhikov2006,heida2011extension}) and study its relation to stochastic unfolding. The notion of quenched stochastic two-scale convergence is based on the individual ergodic theorem, see Theorem~\ref{thm:ergodic-thm}. We thus assume throughout this section that
\begin{equation*}
(\Omega,\mathcal F,P,\tau)\text{ satisfies Assumption~\ref{Assumption_2_1} and $P$ is ergodic.}
\end{equation*}
Moreover, throughout this section we fix exponents $p\in(1,\infty)$, $q:=\frac{p}{p-1}$, and an open and bounded domain $Q\subset\mathbb{R}^d$. We denote by $({\mathscr{B}}^p, \|\cdot\|_{{\mathscr{B}}^p})$ the Banach space $L^p(\Omega\times Q)$ and the associated norm, and we write $({\mathscr{B}}^p)^*$ for the dual space. For the definition of quenched two-scale convergence we need to specify a suitable space of test-functions in ${\mathscr{B}}^q$ that is countably generated. To that end we fix sets ${\mathscr D}_\Omega$ and ${\mathscr D}_Q$ such that
\begin{itemize}
\item ${\mathscr D}_\Omega$ is a countable set of bounded, measurable functions on $(\Omega,\mathcal F)$ that contains the identity $\mathbf 1_{\Omega}\equiv 1$ and is dense in $L^1(\Omega)$ (and thus in $L^r(\Omega)$ for any $1\leq r<\infty$).
\item ${\mathscr D}_Q\subset C(\overline Q)$ is a countable set that contains the identity $\mathbf 1_Q\equiv 1$ and is dense in $L^1(Q)$ (and thus in $L^r(Q)$ for any $1\leq r<\infty$).
\end{itemize}
We denote by ${\mathscr A}:=\{\varphi(\omega,x)=\varphi_\Omega(\omega)\varphi_Q(x)\,:\,\varphi_\Omega\in{\mathscr D}_\Omega,\varphi_Q\in{\mathscr D}_Q\}$ the set of simple tensor products (a countable set), and by ${\mathscr D}_0$ the $\mathbb Q$-linear span of ${\mathscr A}$, i.e.~
\begin{equation*}
{\mathscr D}_0:=\big\{\,\sum_{j=1}^m\lambda_j\varphi_j\,:\,m\in\mathbb{N},\,\lambda_1,\ldots,\lambda_m\in\mathbb Q,\,\varphi_1,\ldots,\varphi_m\in{\mathscr A}\,\big\}.
\end{equation*}
We finally set ${\mathscr D}:=\mbox{span}{\mathscr A}=\mbox{span}{\mathscr D}_0$ and denote by $\overline{{\mathscr D}}:=\mbox{span}({\mathscr D}_Q)$ (the span of ${\mathscr D}_Q$ seen as a subspace of ${\mathscr D}$), and note that ${\mathscr D}$ and ${\mathscr D}_0$ are dense subsets of ${\mathscr{B}}^q$, while the closure of $\overline{{\mathscr D}}$ in ${\mathscr{B}}^q$ is isometrically isomorphic to $L^q(Q)$. Let us anticipate that ${\mathscr D}$ serves as our space of test-functions for stochastic two-scale convergence. As opposed to two-scale convergence in the mean, ``quenched'' stochastic two-scale convergence is defined relative to a fixed ``admissible'' realization $\omega_0\in\Omega$. Throughout this section we denote by $\Omega_0$ the set of admissible realizations; it is a set of full measure determined by the following lemma:
\begin{lemma}\label{L:admis}
There exists a measurable set $\Omega_0\subset\Omega$ with $P(\Omega_0)=1$ s.t. for all $\varphi,\varphi'\in{\mathscr A} $, all $\omega_0\in\Omega_0$, and $r\in\{p,q\}$ we have with $(\mathcal{T}_{\varepsilon}^*\varphi)(\omega,x):=\varphi(\tau_{\frac{x}{{\varepsilon}}}\omega,x)$,
\begin{align*}
\limsup\limits_{{\varepsilon}\to 0}\|(\mathcal{T}_{\varepsilon}^*\varphi)(\omega_0,\cdot)\|_{L^r(Q)}&\leq \|\varphi\|_{{\mathscr{B}}^r}\\ \text{and}\qquad \lim\limits_{{\varepsilon}\to 0}\int_Q\mathcal{T}_{\varepsilon}^*(\varphi\varphi')(\omega_0,x)dx&=\langle\int_Q(\varphi\varphi')(\omega,x)\,dx\rangle.
\end{align*}
\end{lemma}
\begin{proof}
This is a simple consequence of Theorem~\ref{thm:ergodic-thm} and the fact that ${\mathscr A}$ is countable.
\end{proof}
For the rest of the section $\Omega_0$ is fixed according to Lemma~\ref{L:admis}.
\textbf{Structure of Section \ref{Section_4}.} In Section \ref{Section:4:1} we quickly recall the definition of quenched two-scale convergence and its main properties. Section \ref{Section:4:2} is dedicated to the comparison of the notions of quenched and mean stochastic two-scale convergence using Young measures. In the last Section \ref{Section:4:3} we demonstrate that mean homogenization results (e.g., homogenization of convex integral functionals) might be extended to quenched results by appealing to some aspects of the theory of quenched two-scale convergence.
\subsection{Definition and basic properties}\label{Section:4:1}
The idea of quenched stochastic two-scale convergence is similar to periodic two-scale convergence: We associate with a bounded sequence $(u_{\varepsilon})\subset L^p(Q)$ and $\omega_0\in\Omega_0$, a sequence of linear functionals $(U_{\varepsilon})$ defined on ${\mathscr D}$. We can pass (up to a subsequence) to a pointwise limit $U$, which is again a linear functional on ${\mathscr D}$ and which (thanks to Lemma~\ref{L:admis}) can be uniquely extended to a bounded linear functional on ${\mathscr{B}}^q$. We then define the \textit{weak quenched $\omega_0$-two-scale limit} of $(u_{\varepsilon})$ as the Riesz-representation $u\in {\mathscr{B}}^p$ of $U\in({\mathscr{B}}^q)^*$.
\begin{defn}[quenched two-scale limit, cf.~\cite{Zhikov2006,Heida2017b}]
\label{def:two-scale-conv}
Let $(u_{\varepsilon})$ be a sequence in $L^{p}(Q)$, and let $\omega_0\in\Omega_0$ be fixed. We say that $u_{\varepsilon}$ converges (weakly, quenched) $\omega_0$-two-scale to $u\in {\mathscr{B}}^{p}$, and write
$u_{\varepsilon}\tsq{\omega_0}u$, if the sequence $u_{\varepsilon}$ is bounded in $L^p(Q)$, and for all $\varphi\in {\mathscr D}$ we have
\begin{equation}
\lim_{{\varepsilon}\to0}\int_{Q}u_{\varepsilon}(x)(\mathcal{T}_{\varepsilon}^*\varphi)(\omega_0,x)\,dx=\int_\Omega\int_{Q}u(x,\omega)\varphi(\omega,x)\,dx\,dP(\omega).\label{eq:def-quenched-two-scale}
_{\varepsilon}nd{equation}
_{\varepsilon}nd{defn}
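For orientation, note that test functions which do not depend on $\omega$ reduce \eqref{eq:def-quenched-two-scale} to ordinary weak convergence in $L^p(Q)$: for $\varphi=\mathbf 1_{\Omega}\varphi_Q$ with $\varphi_Q\in{\mathscr D}_Q$ we have $(\mathcal{T}_{\varepsilon}^*\varphi)(\omega_0,x)=\varphi_Q(x)$, and \eqref{eq:def-quenched-two-scale} becomes
\[
\lim_{{\varepsilon}\to0}\int_{Q}u_{\varepsilon}(x)\varphi_Q(x)\,dx=\int_\Omega\int_{Q}u(x,\omega)\varphi_Q(x)\,dx\,dP(\omega),
\]
in accordance with the weak convergence statement of Lemma~\ref{lem:two-scale-limit} below.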
\begin{lemma}[Compactness]\label{lem:two-scale-limit}
Let $(u_{\varepsilon})$ be a bounded sequence in $L^p(Q)$ and $\omega_0\in \Omega_0$. Then there exists a subsequence (still denoted by ${\varepsilon}$) and $u\in {\mathscr{B}}^p$ such that $u_{{\varepsilon}}\tsq{\omega_0}u$ and
\begin{equation}
\| u\|_{{\mathscr{B}}^{p}}\leq \liminf_{{\varepsilon}\to0}\|u_{{\varepsilon}}\|_{L^{p}(Q)},\,\label{eq:two-scale-limit-estimate}
_{\varepsilon}nd{equation}
and $u_{{\varepsilon}}\rightharpoonup _{\varepsilon}x{u}$ weakly in $L^p(Q)$.
_{\varepsilon}nd{lemma}
{\it (For the proof see Section~\ref{S:4:p1}).}
For our purpose it is convenient to have a metric characterization of two-scale convergence.
\begin{lemma}[Metric characterization]\label{L:metric-char}
\begin{enumerate}[(i)]
\item
Let $\{\varphi_j\}_{j\in\mathbb{N}}$ denote an enumeration of ${\mathscr A}_1:=\{\frac{\varphi}{\|\varphi\|_{{\mathscr{B}}^q}}\,:\,\varphi\in {\mathscr D}_0\}$. The vector space $\mbox{\rm Lin}({\mathscr D}):=\{U:{\mathscr D}\to\mathbb{R}\text{ linear}\,\}$ endowed with the metric $$d(U,V;\mbox{\rm Lin}({\mathscr D})):=\sum_{j\in\mathbb{N}}2^{-j}\frac{|U(\varphi_j)-V(\varphi_j)|}{|U(\varphi_j)-V(\varphi_j)|+1}$$ is complete and separable.
\item Let $\omega_0\in\Omega_0$. Consider the maps
\begin{align*}
&J_{\varepsilon}^{\omega_0}: L^p(Q)\to \mbox{\rm Lin}({\mathscr D}),\qquad (J_{\varepsilon}^{\omega_0} u)(\varphi):=\int_Qu(x)(\mathcal{T}_{\varepsilon}^*\varphi)(\omega_0,x)\,dx,\\
&J_0:{\mathscr{B}}^p\to \mbox{\rm Lin}({\mathscr D}),\qquad (J_0u)(\varphi):=_{\varepsilon}x{\int_Qu\varphi}.
_{\varepsilon}nd{align*}
Then for any bounded sequence $u_{\varepsilon}$ in $L^p(Q)$ and any $u\in {\mathscr{B}}^p$ we have $u_{\varepsilon}\tsq{\omega_0}u$ if and only if $J_{\varepsilon}^{\omega_0} u_{\varepsilon} \to J_0u$ in $\mbox{\rm Lin}({\mathscr D})$.
_{\varepsilon}nd{enumerate}
_{\varepsilon}nd{lemma}
{\it (For the proof see Section~\ref{S:4:p1}).}
\begin{remark}
Convergence in the metric space $(\mbox{\rm Lin}({\mathscr D}),d(\cdot,\cdot;\mbox{\rm Lin}({\mathscr D})))$ is equivalent to pointwise convergence. $({\mathscr{B}}^q)^*$ is naturally embedded into the metric space by means of the restriction $J:({\mathscr{B}}^q)^*\to\mbox{\rm Lin}({\mathscr D})$, $JU=U\vert_{{\mathscr D}}$. In particular, we deduce that for a bounded sequence $(U_k)$ in $({\mathscr{B}}^q)^*$ we have $U_k\stackrel{*}{\rightharpoonup} U$ if and only if $JU_k\to JU$ in the metric space. Likewise, ${\mathscr{B}}^p$ (resp. $L^p(Q)$) can be embedded into the metric space $\mbox{\rm Lin}({\mathscr D})$ via $J_0$ (resp. $J_{\varepsilon}^{\omega_0}$ with ${\varepsilon}>0$ and $\omega_0\in\Omega_0$ arbitrary but fixed), and for a bounded sequence $(u_k)$ in ${\mathscr{B}}^p$ (resp. $L^p(Q)$) weak convergence in ${\mathscr{B}}^p$ (resp. $L^p(Q)$) is equivalent to convergence of $(J_0u_k)$ (resp. $(J^{\omega_0}_{\varepsilon} u_k)$) in the metric space.
_{\varepsilon}nd{remark}
\begin{lemma}[Strong convergence implies quenched two-scale convergence]\label{L:strong}
Let $(u_{\varepsilon})$ be a strongly convergent sequence in $L^p(Q)$ with limit $u\in L^p(Q)$. Then for all $\omega_0\in\Omega_0$ we have $u_{\varepsilon}\tsq{\omega_0}u$.
_{\varepsilon}nd{lemma}
{\it (For the proof see Section~\ref{S:4:p1}).}
\begin{defn}[set of quenched two-scale cluster points]
For a bounded sequence $(u_{\varepsilon})$ in $L^p(Q)$ and $\omega_0\in\Omega_0$ we denote by ${\mathscr{C\!P}}(\omega_0,(u_{\varepsilon}))$ the set of all $\omega_0$-two-scale cluster points, i.e.~the set of $u\in{\mathscr{B}}^p$ with $J_0u\in \bigcap_{k=1}^\infty\overline{\big\{J^{\omega_0}_{{\varepsilon}} u_{\varepsilon}\,:\,{\varepsilon}<\frac{1}{k}\big\}}$ where the closure is taken in the metric space $\big(\mbox{\rm Lin}({\mathscr D}),d(\cdot,\cdot;\mbox{\rm Lin}({\mathscr D}))\big)$.
_{\varepsilon}nd{defn}
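In view of Lemma~\ref{L:metric-char}~(ii), membership in this set simply expresses that $u$ is attained as a quenched two-scale limit along some subsequence:
\[
u\in{\mathscr{C\!P}}(\omega_0,(u_{\varepsilon}))\qquad\Longleftrightarrow\qquad u_{{\varepsilon}'}\tsq{\omega_0}u\ \text{ along some subsequence }{\varepsilon}'\to0.
\]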
We conclude this section with two elementary results on quenched stochastic two-scale convergence:
\begin{lemma}[Approximation of two-scale limits]
\label{lem:Every-v-is-a-fB-limit}Let $u\in{\mathscr{B}}^p$.
Then for all $\omega_0\in\Omega_0$, there exists a sequence
$u_{\varepsilon}\in L^{p}(Q)$ such that $u_{\varepsilon}\stackrel{2s}{\rightharpoonup}_{\omega_0} u$ as ${\varepsilon}\to0$.
_{\varepsilon}nd{lemma}
{\it (For the proof see Section~\ref{S:4:p1}).}
Arguing as in the slightly different setting of \cite{Heida2017b}, one can prove the following result:
\begin{lemma}[Two-scale limits of gradients]
\label{lem:sto-conver-grad}
Let $(u_{{\varepsilon}})$ be a sequence in $W^{1,p}(Q)$ and $\omega_0\in\Omega_0$. Then there exist a subsequence (still denoted by ${\varepsilon}$) and functions $u\in W^{1,p}(Q)$ and $\chi\in L^p_{\textsf{pot}}(\Omega)\otimes L^{p}(Q)$ such that $u_{\varepsilon}\rightharpoonup u$ weakly in $W^{1,p}(Q)$ and
\[
u_{{\varepsilon}}\tsq{\omega_0}u\quad\mbox{and }\quad\nabla u_{{\varepsilon}}\tsq{\omega_0}\nabla u+\chi\qquad\mbox{as }{\varepsilon}\to0\,.
\]
_{\varepsilon}nd{lemma}
\subsubsection{Proofs}\label{S:4:p1}
\begin{proof}[Proof of Lemma~\ref{lem:two-scale-limit}]
Set $C_0:=\limsup\limits_{{\varepsilon}\to 0}\|u_{\varepsilon}\|_{L^p(Q)}$ and note that $C_0<\infty$. By passing to a subsequence (not relabeled) we may assume that $C_0=\liminf\limits_{{\varepsilon}\to 0}\|u_{\varepsilon}\|_{L^p(Q)}$.
Fix $\omega_0\in\Omega_0$. Define linear functionals $U_{\varepsilon}\in\mbox{\rm Lin}({\mathscr D})$ via
\begin{equation*}
U_{\varepsilon}(\varphi):=\int_Qu_{\varepsilon}(x)(\mathcal{T}_{\varepsilon}^*\varphi)(\omega_0,x)\,dx.
_{\varepsilon}nd{equation*}
Note that for all $\varphi\in{\mathscr A}$, $(U_{\varepsilon}(\varphi))$ is a bounded sequence in $\mathbb{R}$. Indeed, by H\"older's inequality and Lemma~\ref{L:admis},
\begin{equation}\label{eq:x1}
\limsup\limits_{{\varepsilon}\to0}|U_{\varepsilon}(\varphi)|\leq \limsup\limits_{{\varepsilon}\to0}\|u_{\varepsilon}\|_{L^p(Q)}\|\mathcal{T}_{\varepsilon}^*\varphi(\omega_0,\cdot)\|_{L^q(Q)}\leq C_0\|\varphi\|_{{\mathscr{B}}^q}.
_{\varepsilon}nd{equation}
Since ${\mathscr A}$ is countable we can pass to a subsequence (not relabeled) such that $U_{\varepsilon}(\varphi)$ converges for all $\varphi\in{\mathscr A}$. By linearity and since ${\mathscr D}=\mbox{span}({\mathscr A})$, we conclude that $U_{\varepsilon}(\varphi)$ converges for all $\varphi\in{\mathscr D}$, and $U(\varphi):=\lim\limits_{{\varepsilon}\to0}U_{\varepsilon}(\varphi)$ defines a linear functional on ${\mathscr D}$. In view of _{\varepsilon}qref{eq:x1} we have $|U(\varphi)|\leq C_0\|\varphi\|_{{\mathscr{B}}^q}$, and thus $U$ admits a unique extension to a linear functional in $({\mathscr{B}}^q)^*$. Let $u\in{\mathscr{B}}^p$ denote its Riesz-representation. Then $u_{\varepsilon}\tsq{\omega_0} u$, and
\begin{equation*}
\|u\|_{{\mathscr{B}}^p}=\|U\|_{({\mathscr{B}}^q)^*}\leq C_0=\liminf\limits_{{\varepsilon}\to 0}\|u_{\varepsilon}\|_{L^p(Q)}.
_{\varepsilon}nd{equation*}
Since $\mathbf 1_{\Omega}\in{\mathscr D}_\Omega$ we conclude that for all $\varphi_Q\in{\mathscr D}_Q$ we have
\begin{equation*}
\int_Qu_{\varepsilon}(x)\varphi_Q(x)\,dx=U_{\varepsilon}(\mathbf 1_{\Omega}\varphi_Q)\to U(\mathbf 1_{\Omega}\varphi_Q)=_{\varepsilon}x{\int_Q u(\omega,x)\varphi_Q(x)\,dx}=\int_Q _{\varepsilon}x{u(x)}\varphi_Q(x)\,dx.
_{\varepsilon}nd{equation*}
Since $(u_{\varepsilon})$ is bounded in $L^p(Q)$, and ${\mathscr D}_Q\subset L^p(Q)$ is dense, we conclude that $u_{\varepsilon}\rightharpoonup_{\varepsilon}x{u}$ weakly in $L^p(Q)$.
_{\varepsilon}nd{proof}
\begin{proof}[Proof of Lemma~\ref{L:metric-char}]
\begin{enumerate}[(i)]
\item Argument for completeness: If $(U_j)$ is a Cauchy sequence in $\mbox{\rm Lin}({\mathscr D})$, then for all $\varphi\in{\mathscr A}_1$,
$(U_j(\varphi))$ is a Cauchy sequence in $\mathbb{R}$. By linearity of the $U_j$'s this implies that $(U_j(\varphi))$ is Cauchy in $\mathbb{R}$ for all $\varphi\in{\mathscr D}$. Hence, $U_j\to U$ pointwise in ${\mathscr D}$ and it is easy to check that $U$ is linear. Furthermore, $U_j\to U$ pointwise in ${\mathscr A}_1$ implies $U_j\to U$ in the metric space.
Argument for separability: Consider the (injective) map $J:({\mathscr{B}}^q)^*\to\mbox{\rm Lin}({\mathscr D})$ where $J(U)$ denotes the restriction of $U$ to ${\mathscr D}$. The map $J$ is continuous, since for all $U,V\in({\mathscr{B}}^q)^*$ and $\varphi\in{\mathscr A}_1$ we have $|(JU)(\varphi)-(JV)(\varphi)|\leq \|U-V\|_{({\mathscr{B}}^q)^*}\|\varphi\|_{{\mathscr{B}}^q}=\|U-V\|_{({\mathscr{B}}^q)^*}$ (recall that the test functions in ${\mathscr A}_1$ are normalized). Since $({\mathscr{B}}^q)^*$ is separable (as a consequence of the assumption that $\mathcal F$ is countably generated), it suffices to show that the range $\mathcal R(J)$ of $J$ is dense in $\mbox{\rm Lin}({\mathscr D})$. To that end let $U\in\mbox{\rm Lin}({\mathscr D})$. For $k\in\mathbb{N}$ we denote by $U_k\in({\mathscr{B}}^q)^*$ the unique linear functional that is equal to $U$ on the finite dimensional (and thus closed) subspace $\mbox{span}\{\varphi_1,\ldots,\varphi_k\}\subset {\mathscr{B}}^q$ (where $\{\varphi_j\}$ denotes the enumeration of ${\mathscr A}_1$), and zero on the orthogonal complement in ${\mathscr{B}}^q$. Then a direct calculation (recorded at the end of this proof) shows that $d(U,J(U_k);\mbox{\rm Lin}({\mathscr D}))\leq\sum_{j>k}2^{-j}=2^{-k}$. Since $k\in\mathbb{N}$ is arbitrary, we conclude that $\mathcal R(J)\subset\mbox{\rm Lin}({\mathscr D})$ is dense.
\item Let $u_{\varepsilon}$ denote a bounded sequence in $L^p(Q)$ and $u\in {\mathscr{B}}^p$. Then by definition, $u_{\varepsilon}\tsq{\omega_0}u$ is equivalent to $J^{\omega_0}_{\varepsilon} u_{\varepsilon}\to J_0u$ pointwise in ${\mathscr D}$, and the latter is equivalent to convergence in the metric space $\mbox{\rm Lin}({\mathscr D})$.
_{\varepsilon}nd{enumerate}
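For completeness, we record the direct calculation used in the separability argument of (i): since $U$ and $U_k$ coincide on $\varphi_1,\ldots,\varphi_k$, only the tail of the series defining the metric contributes, and each summand is bounded by $2^{-j}$, so that
\[
d(U,J(U_k);\mbox{\rm Lin}({\mathscr D}))=\sum_{j>k}2^{-j}\,\frac{|U(\varphi_j)-U_k(\varphi_j)|}{|U(\varphi_j)-U_k(\varphi_j)|+1}\leq\sum_{j>k}2^{-j}=2^{-k}.
\]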
_{\varepsilon}nd{proof}
\begin{proof}[Proof of Lemma~\ref{L:strong}]
This follows from H{\"o}lder's inequality and Lemma~\ref{L:admis}, which imply for all $\varphi\in{\mathscr A}$ the estimate
\begin{multline*}
\limsup\limits_{{\varepsilon}\to 0}\int_Q|(u_{\varepsilon}(x)-u(x))\mathcal{T}_{\varepsilon}^*\varphi(\omega_0,x)|\,dx\\
\leq \limsup\limits_{{\varepsilon}\to 0}\Big(\|u_{\varepsilon}-u\|_{L^p(Q)}\left(\int_Q|\mathcal{T}_{\varepsilon}^*\varphi(\omega_0,x)|^q\,dx\right)^\frac1q\Big)=0.
_{\varepsilon}nd{multline*}
_{\varepsilon}nd{proof}
\begin{proof}[Proof of Lemma~\ref{lem:Every-v-is-a-fB-limit}]
Since ${\mathscr D}_0$ is dense in ${\mathscr{B}}^p$, for any $\delta>0$ there exists $v_\delta\in{\mathscr D}_0$ with $\|u-v_\delta\|_{{\mathscr{B}}^p}\leq \delta$. Define $v_{\delta,{\varepsilon}}(x):=\mathcal{T}_{\varepsilon}^*v_\delta(\omega_0,x)$. Let $\varphi\in{\mathscr D}$. Since $v_\delta$ and $\varphi$ (resp. $v_\delta\varphi$) are by definition linear combinations of functions (resp. products of functions) in ${\mathscr A}$, we deduce from Lemma~\ref{L:admis} that $(v_{\delta,{\varepsilon}})_{{\varepsilon}}$ is bounded in $L^p(Q)$, and that
\begin{equation*}
\int_Qv_{\delta,{\varepsilon}}\mathcal{T}_{\varepsilon}^*\varphi(\omega_0,x)=\int_Q\mathcal{T}_{\varepsilon}^*(v_\delta\varphi)(\omega_0,x)\to _{\varepsilon}x{\int_Qv_\delta\varphi}.
_{\varepsilon}nd{equation*}
By appealing to the metric characterization, we can rephrase the last convergence statement as $d(J^{\omega_0}_{\varepsilon} v_{\delta,{\varepsilon}},J_0v_\delta;\mbox{\rm Lin}({\mathscr D}))\to 0$. By the triangle inequality we have
\begin{eqnarray*}
d(J^{\omega_0}_{\varepsilon} v_{\delta,{\varepsilon}},J_0u;\mbox{\rm Lin}({\mathscr D}))&\leq& d(J^{\omega_0}_{\varepsilon} v_{\delta,{\varepsilon}},J_0v_\delta;\mbox{\rm Lin}({\mathscr D}))+d(J_0v_\delta,J_0u;\mbox{\rm Lin}({\mathscr D})).
_{\varepsilon}nd{eqnarray*}
The second term is bounded by $\|v_\delta-u\|_{{\mathscr{B}}^p}\leq \delta$, while the first term vanishes as ${\varepsilon}\downarrow 0$. Hence, there exists a diagonal sequence $u_{\varepsilon}:=v_{\delta({\varepsilon}),{\varepsilon}}$ (bounded in $L^p(Q)$) such that $d(J^{\omega_0}_{\varepsilon} u_{{\varepsilon}},J_0u;\mbox{\rm Lin}({\mathscr D}))\to 0$. The latter implies $u_{{\varepsilon}}\tsq{\omega_0}u$ by Lemma~\ref{L:metric-char}.
_{\varepsilon}nd{proof}
\subsection{Comparison to stochastic two-scale convergence in the mean via Young measures}\label{Section:4:2}
In this paragraph we establish a relation between quenched two-scale
convergence and two-scale convergence in the mean (in the sense of Definition
\ref{def46}). The relation is established via Young measures: We show that any bounded sequence $u_{\varepsilon}$ in ${\mathscr{B}}^p$ -- viewed as a functional acting on test-functions of the form $\mathcal{T}_{\varepsilon}^*\varphi$ -- generates (up to a subsequence) a Young measure on ${\mathscr{B}}^p$ that (a) concentrates on the quenched two-scale cluster points of $u_{\varepsilon}$, and (b) allows one to represent the two-scale limit (in the mean) of $u_{\varepsilon}$.
\begin{defn}
We say $\boldsymbol{\nu}:=\left\{ \nu_{\omega}\right\} _{\omega\in\Omega}$ is a Young measure on ${\mathscr{B}}^p$, if for all $\omega\in\Omega$, $\nu_\omega$ is a Borel probability measure on ${\mathscr{B}}^p$ (equipped with the weak topology) and
\[
\omega\mapsto\nu_{\omega}(B)\quad\mbox{is }\mbox{measurable for all }B\in{\mathcal{B}}({\mathscr{B}}^p),
\]
where ${\mathcal{B}}({\mathscr{B}}^p)$ denotes the Borel-$\sigma$-algebra on ${\mathscr{B}}^p$ (equipped with the weak topology).
_{\varepsilon}nd{defn}
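The simplest example, and the situation singled out in the last part of Theorem~\ref{thm:Balder-Thm-two-scale} below, is the Dirac family associated with a measurable map $v:\Omega\to{\mathscr{B}}^p$:
\[
\nu_\omega:=\delta_{v(\omega)},\qquad \nu_\omega(B)=\mathbf 1_{v^{-1}(B)}(\omega)\quad\text{for all }B\in{\mathcal{B}}({\mathscr{B}}^p),
\]
for which $\omega\mapsto\nu_\omega(B)$ is measurable precisely because $v$ is.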
\begin{thm}
\label{thm:Balder-Thm-two-scale}Let $u_{{\varepsilon}}$ denote a bounded sequence in ${\mathscr{B}}^p$.
Then there exists a subsequence (still denoted by ${\varepsilon}$) and a Young measure $\boldsymbol{\nu}$ on ${\mathscr{B}}^p$
such that for all $\omega_0\in\Omega_0$,
\[
\nu_{\omega_0}\mbox{ is concentrated on }{\mathscr{C\!P}}\left(\omega_0,\big(u_{{\varepsilon}}(\omega_0,\cdot)\big)\right),
\]
and
\[
\liminf_{{\varepsilon}\to0}\Vert u_{{\varepsilon}}\Vert_{{\mathscr{B}}^p}^{p}\geq \int_{\Omega}\left(\int_{{\mathscr{B}}^p}\left\Vert v\right\Vert _{{\mathscr{B}}^p}^{p}\,d\nu_{\omega}(v)\right)\,dP(\omega).
\]
Moreover, we have
\[
u_{{\varepsilon}}\ts u\qquad\text{where }u:=\int_{\Omega}\int_{{\mathscr{B}}^p}v\, d\nu_{\omega}(v)dP(\omega).
\]
Finally, if there exists a measurable $v:\Omega\to{\mathscr{B}}^p$ such that $\nu_{\omega}=\delta_{v(\omega)}$ for $P$-a.e. $\omega\in\Omega$,
then up to extraction of a further subsequence (still denoted by ${\varepsilon}$) we have
\[
u_{{\varepsilon}}(\omega)\tsq{\omega}v(\omega)\qquad\text{for $P$-a.e.~$\omega\in\Omega$}.
\]
_{\varepsilon}nd{thm}
{\it (For the proof see Section~\ref{S:4:p2}).}
In the opposite direction we observe that quenched two-scale convergence implies two-scale convergence in the mean in the following sense:
\begin{lemma}\label{L:fromquenchedtomean}
Consider a family $\{(u_{\varepsilon}^\omega)\}_{\omega\in\Omega}$ of sequences $(u^\omega_{\varepsilon})$ in $L^p(Q)$ and suppose that:
\begin{enumerate}[(a)]
\item There exists $u\in{\mathscr{B}}^p$ s.t.~for $P$-a.e. $\omega\in\Omega$ we have $u_{\varepsilon}^\omega\tsq{\omega}u$.
\item There exists a sequence $(\tilde u_{\varepsilon})$ s.t.~$u_{\varepsilon}^\omega(x)=\tilde u_{\varepsilon}(\omega,x)$ for a.e.~$(\omega,x)\in\Omega\times Q$.
\item There exists a bounded sequence $(\chi_{\varepsilon})$ in $L^p(\Omega)$ such that $\|u^\omega_{\varepsilon}\|_{L^p(Q)}\leq\chi_{\varepsilon}(\omega)$ for a.e.~$\omega\in\Omega$.
_{\varepsilon}nd{enumerate}
Then $\tilde u_{\varepsilon}\wt u$ weakly two-scale (in the mean).
_{\varepsilon}nd{lemma}
{\it (For the proof see Section~\ref{S:4:p2}).}
To compare homogenization of convex integral functionals w.r.t.~stochastic two-scale convergence in the mean and in the quenched sense, we appeal to the following notion and result.
\begin{defn}[Quenched two-scale normal integrand]\label{D:qtscnormal}
A function $h:\,\Omega\times Q\times\mathbb{R}^{d}\to\mathbb{R}$ is called a quenched two-scale normal integrand if for all $\xi\in\mathbb{R}^d$, $h(\cdot,\cdot,\xi)$ is $\mathcal F\otimes{\mathcal{B}}(Q)$-measurable, for a.e.~$(\omega,x)\in\Omega\times Q$, $h(\omega,x,\cdot)$ is lower semicontinuous, and for $P$-a.e.~$\omega_0\in\Omega_0$ and every sequence $(u_{\varepsilon})$ in $L^p(Q)$ the following implication holds:
\[
u_{\varepsilon}\tsq{\omega_0}u\qquad\Rightarrow\qquad
\liminf_{{\varepsilon}\to0}\int_{Q}h(\tau_{\frac{x}{{\varepsilon}}}\omega_0,x,u_{\varepsilon}(x))dx\geq\int_{\Omega}\int_{Q}h(\omega,x,u(\omega,x))\,dx\,dP(\omega).
\]
_{\varepsilon}nd{defn}
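A minimal example of such an integrand is the $(\omega,x)$-independent function $h(\omega,x,\xi):=|\xi|^p$: measurability and lower semicontinuity are immediate, and the required liminf-inequality is precisely the lower semicontinuity of the norm along quenched two-scale convergence from Lemma~\ref{lem:two-scale-limit}, since $u_{\varepsilon}\tsq{\omega_0}u$ implies
\[
\liminf_{{\varepsilon}\to0}\int_{Q}|u_{\varepsilon}(x)|^{p}\,dx\geq\|u\|_{{\mathscr{B}}^{p}}^{p}=\int_{\Omega}\int_{Q}|u(\omega,x)|^{p}\,dx\,dP(\omega).
\]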
\begin{lemma}
\label{lem:Balder-Lem-two-scale}
Let $h$ denote a quenched two-scale normal integrand, let $(u_{\varepsilon})$ denote a bounded sequence in ${\mathscr{B}}^p$ that generates a Young measure $\boldsymbol{\nu}$ on ${\mathscr{B}}^p$ in the sense of Theorem \ref{thm:Balder-Thm-two-scale}.
Suppose that $h_{\varepsilon}:\Omega\to\mathbb{R}$, $h_{\varepsilon}(\omega):=-\int_Q\min\big\{0,h(\tau_{\frac{x}{{\varepsilon}}}\omega,x,u_{{\varepsilon}}(\omega,x))\big\}\,dx$ is uniformly integrable. Then
\begin{multline}
\liminf_{{\varepsilon}\to 0}\int_{\Omega}\int_{Q}h(\tau_{\frac{x}{{\varepsilon}}}\omega,x,u_{{\varepsilon}}(\omega,x))\,dx\,dP(\omega)\\
\geq\int_{\Omega}\int_{{\mathscr{B}}^p}\left(\int_\Omega\int_Qh(\tilde{\omega},x,v(\tilde{\omega},x))\,dx\,dP(\tilde{\omega})\right)\,d\nu_{\omega}(v)\,dP(\omega).\label{eq:liminf-balder-ts-lower-semic}
_{\varepsilon}nd{multline}
_{\varepsilon}nd{lemma}
{\it (For the proof see Section~\ref{S:4:p2}).}
\subsubsection{Proof of Theorem~\ref{thm:Balder-Thm-two-scale} and Lemmas~\ref{lem:Balder-Lem-two-scale} and \ref{L:fromquenchedtomean}}\label{S:4:p2}
We first recall some notions and results of Balder's theory for Young measures \cite{Balder1984}. Throughout this section ${\mathscr M}$ is assumed to be a separable, complete metric space with metric $d(\cdot,\cdot;{\mathscr M})$.
\begin{defn}
\begin{itemize}
\item We say a function $s:\Omega\to{\mathscr M}$ is measurable, if it is $\mathcal F-\mathcal B({\mathscr M})$-measurable where $\mathcal B({\mathscr M})$ denotes the Borel-$\sigma$-algebra in ${\mathscr M}$.
\item A function $h:\Omega\times{\mathscr M}\to(-\infty,+\infty]$ is called a \textit{normal} integrand, if $h$ is $\mathcal F\otimes\mathcal B({\mathscr M})$-measurable, and for all $\omega\in\Omega$ the function $h(\omega,\cdot):{\mathscr M}\to(-\infty,+\infty]$ is lower semicontinuous.
\item A sequence $s_{\varepsilon}$ of measurable functions $s_{\varepsilon}:\Omega\to{\mathscr M}$ is called \textit{tight}, if there exists a normal integrand $h$ such that for every $\omega\in\Omega$ the function $h(\omega,\cdot)$ has compact sublevels in ${\mathscr M}$ and $\limsup_{{\varepsilon}\to 0}\int_\Omega h(\omega,s_{\varepsilon}(\omega))\,dP(\omega)<\infty$.
\item A Young measure in ${\mathscr M}$ is a family $\boldsymbol{\mu}:=\left\{ \mu_{\omega}\right\} _{\omega\in\Omega}$
of Borel probability measures on ${\mathscr M}$ such that for all $B\in\mathcal B({\mathscr M})$ the map $\Omega\ni \omega\mapsto \mu_\omega(B)\in\mathbb{R}$ is $\mathcal F$-measurable.
_{\varepsilon}nd{itemize}
_{\varepsilon}nd{defn}
\begin{thm}[\mbox{\cite[Theorem I]{Balder1984}}]\label{thm:Balder} Let $s_{\varepsilon}:\,\Omega\to{\mathscr M}$ denote a tight sequence of measurable functions. Then there exists a subsequence, still indexed by ${\varepsilon}$, and a Young measure ${\boldsymbol\mu}=\{\mu_\omega\}_{\omega\in\Omega}$ in ${\mathscr M}$ such that for every normal integrand $h:\,\Omega\times {\mathscr M}\rightarrow(-\infty,+\infty]$ we have
\begin{equation}\label{eq:Balders-ineq}
\liminf_{{\varepsilon}\to0}\int_{\Omega}h(\omega,s_{\varepsilon}(\omega))\,dP(\omega)\geq\int_{\Omega}\int_{{\mathscr M}}h(\omega,\xi) d\mu_{\omega}(\xi)dP(\omega)\,,
_{\varepsilon}nd{equation}
provided that the negative part $h^-_{\varepsilon}(\cdot)=|\min\{0,h(\cdot,s_{\varepsilon}(\cdot))\}|$ is uniformly integrable.
Moreover, for $P$-a.e. $\omega\in\Omega$ the measure $\mu_\omega$ is supported in the set of all cluster points of $s_{\varepsilon}(\omega)$, i.e.~in $\bigcap_{k=1}^\infty\overline{\{s_{\varepsilon}(\omega)\,:\,{\varepsilon}<\frac{1}{k}\}}$ (where the closure is taken in ${\mathscr M}$).
_{\varepsilon}nd{thm}
In order to apply the above theorem we require an appropriate metric space in which two-scale convergent sequences and their limits embed:
\begin{lemma}\label{L:metric-struct}
\begin{enumerate}[(i)]
\item We denote by ${\mathscr M}$ the set of all triples $(U,{\varepsilon},r)$ with $U\in\mbox{\rm Lin}({\mathscr D})$, ${\varepsilon}\geq 0$, $r\geq 0$. ${\mathscr M}$ endowed with the metric
\begin{equation*}
d((U_1,{\varepsilon}_1,r_1),(U_2,{\varepsilon}_2,r_2);{\mathscr M}):=d(U_1,U_2;\mbox{\rm Lin}({\mathscr D}))+|{\varepsilon}_1-{\varepsilon}_2|+|r_1-r_2|
_{\varepsilon}nd{equation*}
is a complete, separable metric space.
\item For $\omega_0\in\Omega_0$ we denote by ${\mathscr M}^{\omega_0}$ the set of all triples $(U,{\varepsilon},r)\in{\mathscr M}$ such that
\begin{equation}\label{eq:repr}
U=
\begin{cases}
J^{\omega_0}_{\varepsilon} u&\text{for some }u\in L^p(Q)\text{ with }\|u\|_{L^p(Q)}\leq r\text{ in the case }{\varepsilon}>0,\\
J_0 u&\text{for some }u\in{\mathscr{B}}^p\text{ with }\|u\|_{{\mathscr{B}}^p}\leq r\text{ in the case }{\varepsilon}=0.
_{\varepsilon}nd{cases}
_{\varepsilon}nd{equation}
Then ${\mathscr M}^{\omega_0}$ is a closed subspace of ${\mathscr M}$.
\item Let $\omega_0\in\Omega_0$, and $(U,{\varepsilon},r)\in{\mathscr M}^{\omega_0}$. Then the function $u$ in the representation _{\varepsilon}qref{eq:repr} of $U$ is unique, and
\begin{equation}\label{eq:x6}
\begin{cases}\displaystyle
\|u\|_{L^p(Q)}=\sup\limits_{\varphi\in\overline{{\mathscr D}},\ \|\varphi\|_{{\mathscr{B}}^q}\leq 1}|U(\varphi)|&\text{if }{\varepsilon}>0,\\
\|u\|_{{\mathscr{B}}^p}=\sup\limits_{\varphi\in {\mathscr D},\ \|\varphi\|_{{\mathscr{B}}^q}\leq 1}|U(\varphi)|&\text{if }{\varepsilon}=0.
_{\varepsilon}nd{cases}
_{\varepsilon}nd{equation}
\item For $\omega_0\in\Omega_0$ the function $\|\cdot\|_{\omega_0}:{\mathscr M}^{\omega_0}\to[0,\infty)$,
\begin{equation*}
\|(U,{\varepsilon},r)\|_{\omega_0}:=
\begin{cases}
\big(\sup\limits_{\varphi\in\overline{{\mathscr D}},\ \|\varphi\|_{{\mathscr{B}}^q}\leq 1}|U(\varphi)|^p+{\varepsilon}+r^p\big)^{\frac{1}{p}}&\text{if }(U,{\varepsilon},r)\in{\mathscr M}^{\omega_0},\,{\varepsilon}>0,\\
\big(\sup\limits_{\varphi\in {\mathscr D},\ \|\varphi\|_{{\mathscr{B}}^q}\leq 1}|U(\varphi)|^p+r^p\big)^\frac1p&\text{if }(U,{\varepsilon},r)\in{\mathscr M}^{\omega_0},\,{\varepsilon}=0,\\
_{\varepsilon}nd{cases}
_{\varepsilon}nd{equation*}
is lower semicontinuous on ${\mathscr M}^{\omega_0}$.
\item For all $(u,{\varepsilon})$ with $u\in L^p(Q)$ and ${\varepsilon}>0$ we have $s:=(J^{\omega_0}_{\varepsilon} u,{\varepsilon},\|u\|_{L^p(Q)})\in{\mathscr M}^{\omega_0}$ and $\|s\|_{\omega_0}=\big(2\|u\|_{L^p(Q)}^p+{\varepsilon}\big)^\frac1p$. Likewise, for all $(u,{\varepsilon})$ with $u\in{\mathscr{B}}^p$ and ${\varepsilon}=0$ we have $s=(J_0u,{\varepsilon},\|u\|_{{\mathscr{B}}^p})$ and $\|s\|_{\omega_0}=2^\frac1p\|u\|_{{\mathscr{B}}^p}$.
\item For all $R<\infty$ the set $\{(U,{\varepsilon},r)\in{\mathscr M}^{\omega_0}\,:\,\|(U,{\varepsilon},r)\|_{\omega_0}\leq R\}$ is compact in ${\mathscr M}$.
\item Let $\omega_0\in\Omega_0$ and let $u_{\varepsilon}$ denote a bounded sequence in $L^p(Q)$. Then the triple $s_{\varepsilon}:=(J^{\omega_0}_{\varepsilon} u_{\varepsilon},{\varepsilon},\|u_{\varepsilon}\|_{L^p(Q)})$ defines a sequence in ${\mathscr M}^{\omega_0}$. Moreover, we have $s_{\varepsilon}\to s_0$ in ${\mathscr M}$ as ${\varepsilon}\to0$ if and only if $s_0=(J_0u,0,r)$ for some $u\in{\mathscr{B}}^p$, $r\geq\|u\|_{{\mathscr{B}}^p}$, and $u_{\varepsilon}\tsq{\omega_0}u$.
_{\varepsilon}nd{enumerate}
_{\varepsilon}nd{lemma}
\begin{proof}
\begin{enumerate}[(i)]
\item This is a direct consequence of Lemma~\ref{L:metric-char} (i) and the fact that the product of complete, separable metric spaces remains complete and separable.
\item Let $s_k:=(U_k,{\varepsilon}_k,r_k)$ denote a sequence in ${\mathscr M}^{\omega_0}$ that converges in ${\mathscr M}$ to some $s_0=(U_0,{\varepsilon}_0,r_0)$. We need to show that $s_0\in{\mathscr M}^{\omega_0}$. By passing to a subsequence, it suffices to study the following three cases: ${\varepsilon}_k>0$ for all $k\in\mathbb{N}_0$, ${\varepsilon}_k=0$ for all $k\in\mathbb{N}_0$, and ${\varepsilon}_0=0$ while ${\varepsilon}_k>0$ for all $k\in\mathbb{N}$.
Case 1: ${\varepsilon}_k>0$ for all $k\in\mathbb{N}_0$.\\
W.l.o.g.~we may assume that $\inf_k{\varepsilon}_k>0$. Hence, there exist $u_k\in L^p(Q)$ with $U_k=J^{\omega_0}_{{\varepsilon}_k}u_k$. Since $r_k\to r_0$, we conclude that $(u_k)$ is bounded in $L^p(Q)$. We thus may pass to a subsequence (not relabeled) such that $u_k\rightharpoonup u_0$ weakly in $L^p(Q)$, and
\begin{equation}\label{eq:x3}
\|u_0\|_{L^p(Q)}\leq \liminf\limits_{k}\|u_k\|_{L^p(Q)}\leq \lim_k r_k=r_0.
_{\varepsilon}nd{equation}
Moreover, $U_k\to U_0$ in the metric space $\mbox{\rm Lin}({\mathscr D})$ implies pointwise convergence on ${\mathscr D}$, and thus for all $\varphi_Q\in{\mathscr D}_Q$ we have $U_k(\mathbf 1_{\Omega}\varphi_Q)=\int_Qu_k\varphi_Q\to \int_Qu_0\varphi_Q$. We thus conclude that $U_0(\mathbf 1_{\Omega}\varphi_Q)=\int_Q u_0\varphi_Q$. Since ${\mathscr D}_Q\subset L^q(Q)$ is dense, we deduce that $u_k\rightharpoonup u_0$ weakly in $L^p(Q)$ for the entire sequence.
On the other hand the properties of the shift $\tau$ imply that for any $\varphi_\Omega\in{\mathscr D}_\Omega$ we have $\varphi_\Omega(\tau_{\frac{\cdot}{{\varepsilon}_k}}\omega_0)\to\varphi_\Omega(\tau_{\frac{\cdot}{{\varepsilon}_0}}\omega_0)$ in $L^q(Q)$. Hence, for any $\varphi_\Omega\in{\mathscr D}_\Omega$ and $\varphi_Q\in{\mathscr D}_Q$ we have
\begin{equation*}
U_k(\varphi_\Omega\varphi_Q)=\int_Q u_k(x)\varphi_Q(x)\varphi_\Omega(\tau_{\frac{x}{{\varepsilon}_k}}\omega_0)\,dx\to
\int_Q u_0(x)\varphi_Q(x)\varphi_\Omega(\tau_{\frac{x}{{\varepsilon}_0}}\omega_0)\,dx=(J^{\omega_0}_{{\varepsilon}_0}u_0)(\varphi_\Omega\varphi_Q)
_{\varepsilon}nd{equation*}
and thus (by linearity) $U_0=J^{\omega_0}_{{\varepsilon}_0}u_0$.
Case 2: ${\varepsilon}_k=0$ for all $k\in\mathbb{N}_0$.\\
In this case there exists a bounded sequence $(u_k)$ in ${\mathscr{B}}^p$ with $U_k=J_0u_k$ for $k\in\mathbb{N}$. By passing to a subsequence we may assume that $u_k\rightharpoonup u_0$ weakly in ${\mathscr{B}}^p$ for some $u_0\in{\mathscr{B}}^p$ with
\begin{equation}\label{eq:x4}
\|u_0\|_{{\mathscr{B}}^p}\leq \liminf\limits_{k}\|u_k\|_{{\mathscr{B}}^p}\leq r_0.
_{\varepsilon}nd{equation}
This implies that $U_k=J_0u_k\to J_0u_0$ in $\mbox{\rm Lin}({\mathscr D})$. Hence, $U_0=J_0u_0$ and we conclude that $s_0\in{\mathscr M}^{\omega_0}$.
Case 3: ${\varepsilon}_k>0$ for all $k\in\mathbb{N}$ and ${\varepsilon}_0=0$.\\
In this case there exists a bounded sequence $(u_k)$ in $L^p(Q)$ with $U_k=J^{\omega_0}_{{\varepsilon}_k}u_k$. Thanks to Lemma~\ref{lem:two-scale-limit}, by passing to a subsequence we may assume that $u_k\tsq{\omega_0} u_0$ for some $u_0\in {\mathscr{B}}^p$ with
\begin{equation}\label{eq:x5}
\|u_0\|_{{\mathscr{B}}^p}\leq \liminf\limits_{k}\|u_k\|_{L^p(Q)}\leq r_0.
_{\varepsilon}nd{equation}
Furthermore, Lemma~\ref{L:metric-char} implies that $J^{\omega_0}_{{\varepsilon}_k}u_k\to J_0u_0$ in $\mbox{\rm Lin}({\mathscr D})$, and thus $U_0=J_0u_0$. We conclude that $s_0\in{\mathscr M}^{\omega_0}$.
\item We first argue that the representation _{\varepsilon}qref{eq:repr} of $U$ by $u$ is unique. In the case ${\varepsilon}>0$ suppose that $u,v\in L^p(Q)$ satisfy $U=J^{\omega_0}_{{\varepsilon}}u=J^{\omega_0}_{{\varepsilon}}v$. Then for all $\varphi_Q\in{\mathscr D}_Q$ we have $\int_Q(u-v)\varphi_Q=J^{\omega_0}_{\varepsilon} u(\mathbf 1_\Omega\varphi_Q)-J^{\omega_0}_{\varepsilon} v(\mathbf 1_\Omega\varphi_Q)=U(\mathbf 1_\Omega\varphi_Q)-U(\mathbf 1_\Omega\varphi_Q)=0$, and since ${\mathscr D}_Q\subset L^q(Q)$ is dense, we conclude that $u=v$. In the case ${\varepsilon}=0$ the statement follows by a similar argument from the fact that ${\mathscr D}$ is dense in ${\mathscr{B}}^q$.
To see _{\varepsilon}qref{eq:x6} let $u$ and $U$ be related by _{\varepsilon}qref{eq:repr}. Since $\overline{\mathscr D}$ (resp. ${\mathscr D}$) is dense in $L^q(Q)$ (resp. ${\mathscr{B}}^q$), we have
\begin{equation*}
\begin{cases}
\|u\|_{L^p(Q)}=\sup\limits_{\varphi\in\overline{{\mathscr D}},\ \|\varphi\|_{{\mathscr{B}}^q}\leq 1}|\int_Qu\varphi\,dx|=\sup\limits_{\varphi\in\overline{{\mathscr D}},\ \|\varphi\|_{{\mathscr{B}}^q}\leq 1}|U(\varphi)|&\text{if }{\varepsilon}>0,\\
\|u\|_{{\mathscr{B}}^p}=\sup\limits_{\varphi\in{\mathscr D},\ \|\varphi\|_{{\mathscr{B}}^q}\leq 1}|\int_{\Omega}\int_{Q}u\varphi\,dx\,dP|=\sup\limits_{\varphi\in{\mathscr D},\ \|\varphi\|_{{\mathscr{B}}^q}\leq 1}|U(\varphi)|&\text{if }{\varepsilon}=0.
_{\varepsilon}nd{cases}
_{\varepsilon}nd{equation*}
\item Let $s_k=(U_k,{\varepsilon}_k,r_k)$ denote a sequence in ${\mathscr M}^{\omega_0}$ that converges in ${\mathscr M}$ to a limit $s_0=(U_0,{\varepsilon}_0,r_0)$. By (ii) we have $s_0\in{\mathscr M}^{\omega_0}$. For $k\in\mathbb{N}_0$ let $u_k$ in $L^p(Q)$ or ${\mathscr{B}}^p$ denote the representation of $U_k$ in the sense of _{\varepsilon}qref{eq:repr}. We may pass to a subsequence such that one of the three cases in (ii) applies and (as in (ii)) either $u_k$ weakly converges to $u_0$ (in $L^p(Q)$ or ${\mathscr{B}}^p$), or $u_k\tsq{\omega_0}u_0$. In any of these cases the claimed lower semicontinuity of $\|\cdot\|_{\omega_0}$ follows from ${\varepsilon}_k\to{\varepsilon}_0$, $r_k\to r_0$, and _{\varepsilon}qref{eq:x6} in connection with one of the lower semicontinuity estimates _{\varepsilon}qref{eq:x3} -- _{\varepsilon}qref{eq:x5}.
\item This follows from the definition and duality argument _{\varepsilon}qref{eq:x6}.
\item Let $s_k$ denote a sequence in ${\mathscr M}^{\omega_0}$. Let $u_k$ in $L^p(Q)$ or ${\mathscr{B}}^p$ denote the (unique) representative of $U_k$ in the sense of _{\varepsilon}qref{eq:repr}. Suppose that $\|s_k\|_{\omega_0}\leq R$. Then $(r_k)$ and $({\varepsilon}_k)$ are bounded sequences in $\mathbb{R}_{\geq 0}$, and $\sup_{k}\|u_k\|\leq \sup_kr_k<\infty$ (where $\|\cdot\|$ stands short for either $\|\cdot\|_{L^p(Q)}$ or $\|\cdot\|_{{\mathscr{B}}^p}$). Thus we may pass to a subsequence such that~$r_k\to r_0$, ${\varepsilon}_k\to {\varepsilon}_0$, and one of the following three cases applies:
\begin{itemize}
\item Case 1: $\inf_{k\in\mathbb{N}_0}{\varepsilon}_k>0$. In that case we conclude (after passing to a further subsequence) that $u_k\rightharpoonup u_0$ weakly in $L^p(Q)$, and thus $U_k\to U_0=J^{\omega_0}_{{\varepsilon}_0}u_0$ in $\mbox{Lin}({\mathscr D})$.
\item Case 2: ${\varepsilon}_k=0$ for all $k\in\mathbb{N}_0$. In that case we conclude (after passing to a further subsequence) that $u_k\rightharpoonup u_0$ weakly in ${\mathscr{B}}^p$, and thus $U_k\to U_0=J_0u_0$ in $\mbox{Lin}({\mathscr D})$.
\item Case 3: ${\varepsilon}_k>0$ for all $k\in\mathbb{N}$ and ${\varepsilon}_0=0$. In that case we conclude (after passing to a further subsequence) that $u_k\tsq{\omega_0}u_0$, and thus $U_k\to U_0=J_0u_0$ in $\mbox{Lin}({\mathscr D})$.
_{\varepsilon}nd{itemize}
In all of these cases we deduce that $s_0=(U_0,{\varepsilon}_0,r_0)\in{\mathscr M}^{\omega_0}$, and $s_k\to s_0$ in ${\mathscr M}$.
\item This is a direct consequence of (ii) -- (vi), and Lemma~\ref{L:metric-char}.
_{\varepsilon}nd{enumerate}
_{\varepsilon}nd{proof}
Now we are in a position to prove Theorem~\ref{thm:Balder-Thm-two-scale}.
\begin{proof}[Proof of Theorem~\ref{thm:Balder-Thm-two-scale}]
Let ${\mathscr M}$, ${\mathscr M}^{\omega_0}$, $J^{\omega_0}_{\varepsilon}$ etc.~be defined as in Lemma~\ref{L:metric-struct}.
{\it Step 1. (Identification of $(u_{\varepsilon})$ with a tight ${\mathscr M}$-valued sequence).}
Since $u_{\varepsilon}\in{\mathscr{B}}^p$, by Fubini's theorem, we have $u_{\varepsilon}(\omega,\cdot)\in L^p(Q)$ for $P$-a.e.~$\omega\in\Omega$. By modifying $u_{\varepsilon}$ on a null-set in $\Omega\times Q$ (which does not alter two-scale limits in the mean), we may assume w.l.o.g.~that $u_{\varepsilon}(\omega,\cdot)\in\ L^p(Q)$ for all $\omega\in\Omega$. Consider the measurable function $s_{\varepsilon}:\Omega\to{\mathscr M}$ defined as
\begin{equation*}
s_{\varepsilon}(\omega):= \begin{cases}
\big(J^{\omega}_{\varepsilon} u_{\varepsilon}(\omega,\cdot),{\varepsilon},\|u_{\varepsilon}(\omega,\cdot)\|_{L^p(Q)}\big)&\text{if }\omega\in\Omega_0\\
(0,0,0)&\text{else.}
_{\varepsilon}nd{cases}
_{\varepsilon}nd{equation*}
We claim that $(s_{\varepsilon})$ is tight. To that end consider the integrand $h:\Omega\times{\mathscr M}\to(-\infty,+\infty]$ defined by
\begin{equation*}
h(\omega,(U,{\varepsilon},r)):=
\begin{cases}
\|(U,{\varepsilon},r)\|_{\omega}^p&\text{if }\omega\in\Omega_0\text{ and }(U,{\varepsilon},r)\in{\mathscr M}^{\omega},\\
+\infty&\text{else.}
_{\varepsilon}nd{cases}
_{\varepsilon}nd{equation*}
From Lemma~\ref{L:metric-struct} we deduce that $h$ is a normal integrand and $h(\omega,\cdot)$ has compact sublevels for all $\omega\in\Omega$. Moreover, for all $\omega_0\in\Omega_0$ we have $s_{\varepsilon}(\omega_0)\in{\mathscr M}^{\omega_0}$ and thus $h(\omega_0,s_{\varepsilon}(\omega_0))=2\|u_{\varepsilon}(\omega_0,\cdot)\|^p_{L^p(Q)}+{\varepsilon}$. Hence,
\begin{equation*}
\int_\Omega h(\omega,s_{\varepsilon}(\omega))\,dP(\omega)=2\|u_{\varepsilon}\|^p_{{\mathscr{B}}^p}+{\varepsilon}.
_{\varepsilon}nd{equation*}
We conclude that $(s_{\varepsilon})$ is tight.
{\it Step 2. (Compactness and definition of $\boldsymbol{\nu}$)}. By appealing to Theorem~\ref{thm:Balder} there exists a subsequence (still denoted by ${\varepsilon}$) and a Young measure $\boldsymbol{\mu}$ that is generated by $(s_{\varepsilon})$. Let $\boldsymbol{\mu_1}$ denote the first component of $\boldsymbol{\mu}$, i.e.~the Young measure on $\mbox{\rm Lin}({\mathscr D})$ characterized for $\omega\in\Omega$ by
\begin{equation*}
\int_{\mbox{\rm Lin}({\mathscr D})}f(\xi)\,d\mu_{1,\omega}(\xi)=\int_{{\mathscr M}}f(\xi_1)\,d\mu_\omega(\xi),
_{\varepsilon}nd{equation*}
for all $f:\mbox{\rm Lin}({\mathscr D})\to\mathbb{R}$ continuous and bounded, where ${\mathscr M}\ni\xi=(\xi_1,\xi_2,\xi_3)\mapsto\xi_1\in\mbox{\rm Lin}({\mathscr D})$ denotes the projection to the first component.
By Balder's theorem, $\mu_\omega$ is concentrated on the limit points of $(s_{\varepsilon}(\omega))$. By Lemma~\ref{L:metric-struct} we deduce that for all $\omega\in\Omega_0$ any limit point $s_0(\omega)$ of $s_{\varepsilon}(\omega)$ has the form $s_0(\omega)=(J_0u,0,r)$ where $0\leq r<\infty$ and $u\in{\mathscr{B}}^p$ is an $\omega$-two-scale limit of a subsequence of $u_{\varepsilon}(\omega,\cdot)$. Thus, $\mu_{1,\omega}$ is supported on $\{J_0u\,:\,u\in {\mathscr{C\!P}}(\omega,(u_{\varepsilon}(\omega,\cdot)))\}$ which in particular is a subset of $({\mathscr{B}}^q)^*$. Since $J_0:{\mathscr{B}}^p\to ({\mathscr{B}}^q)^*$ is an isometric isomorphism (by the Riesz-Frech\'et theorem), we conclude that $\boldsymbol{\nu}=\{\nu_\omega\}_{\omega\in\Omega}$, $\nu_\omega(B):=\mu_{1,\omega}(J_0B)$ (for all Borel sets $B\subset{\mathscr{B}}^p$ where ${\mathscr{B}}^p$ is equipped with the weak topology) defines a Young measure on ${\mathscr{B}}^p$, and for all $\omega\in\Omega_0$, $\nu_\omega$ is supported on ${\mathscr{C\!P}}(\omega,(u_{\varepsilon}(\omega,\cdot)))$.
{\it Step 3. (Lower semicontinuity estimate).} Note that $h:\Omega\times{\mathscr M}\to[0,+\infty]$,
\begin{equation*}
h(\omega,(U,{\varepsilon},r)):=
\begin{cases}
\sup_{\varphi\in\overline{\mathscr D},\,\|\varphi\|_{{\mathscr{B}}^q}\leq 1}|U(\varphi)|^p&\text{if }\omega\in\Omega_0\text{ and }(U,{\varepsilon},r)\in{\mathscr M}^{\omega},{\varepsilon}>0,\\
\sup_{\varphi\in{\mathscr D},\,\|\varphi\|_{{\mathscr{B}}^q}\leq 1}|U(\varphi)|^p&\text{if }\omega\in\Omega_0\text{ and }(U,{\varepsilon},r)\in{\mathscr M}^{\omega},{\varepsilon}=0,\\
+\infty&\text{else.}
_{\varepsilon}nd{cases}
_{\varepsilon}nd{equation*}
defines a normal integrand, as can be seen by arguing as in the proof of Lemma~\ref{L:metric-struct}. Thus Theorem~\ref{thm:Balder} implies that
\begin{equation*}
\liminf_{{\varepsilon}\to 0}\int_\Omega h(\omega,s_{\varepsilon}(\omega))\,dP(\omega)\geq \int_\Omega\int_{{\mathscr M}}h(\omega,\xi)\,d\mu_\omega(\xi)dP(\omega).
_{\varepsilon}nd{equation*}
In view of Lemma~\ref{L:metric-struct} we have $ \sup_{\varphi\in\overline{\mathscr D},\,\|\varphi\|_{{\mathscr{B}}^q}\leq 1}|(J^\omega_{\varepsilon} u_{\varepsilon}(\omega,\cdot))(\varphi)|=\|u_{\varepsilon}(\omega,\cdot)\|_{L^p(Q)}$ for $\omega\in\Omega_0$, and thus the left-hand side turns into $\liminf_{{\varepsilon}\to 0}\|u_{\varepsilon}\|^p_{{\mathscr{B}}^p}$. Thanks to the definition of $\boldsymbol{\nu}$ the right-hand side turns into $\int_\Omega \int_{{\mathscr{B}}^p}\|v\|_{{\mathscr{B}}^p}^p\,d\nu_\omega(v)dP(\omega)$.
{\it Step 4. (Identification of the two-scale limit in the mean)}.
Let $\varphi\in{\mathscr D}_0$. Then $h:\Omega\times{\mathscr M}\to(-\infty,+\infty]$,
\begin{equation*}
h(\omega,(U,{\varepsilon},r)):=
\begin{cases}
U(\varphi)&\text{if }\omega\in\Omega_0,\,(U,{\varepsilon},r)\in{\mathscr M}^\omega,\\
+\infty&\text{else.}
_{\varepsilon}nd{cases}
_{\varepsilon}nd{equation*}
defines a normal integrand. Since $h(\omega,s_{\varepsilon}(\omega))=\int_Qu_{\varepsilon}(\omega,x)\mathcal{T}_{\varepsilon}^*\varphi(\omega,x)\,dx$ for $P$-a.e.~$\omega\in\Omega$, we deduce that $|h(\cdot, s_{\varepsilon}(\cdot))|$ is uniformly integrable. Thus, _{\varepsilon}qref{eq:Balders-ineq} applied to $\pm h$ and the definition of $\boldsymbol{\nu}$ imply that
\begin{eqnarray*}
\lim\limits_{{\varepsilon}\to 0}\int_\Omega\int_Qu_{\varepsilon}(\omega,x)(\mathcal{T}_{\varepsilon}^*\varphi)(\omega,x)\,dx\,dP(\omega)&=& \lim\limits_{{\varepsilon}\to 0}\int_\Omega h(\omega,s_{\varepsilon}(\omega))\,dP(\omega)\\
&=&\int_\Omega\int_{{\mathscr{B}}^p}h(\omega,v)\,d\nu_\omega(v)\,dP(\omega)\\
&=&\int_\Omega\int_{{\mathscr{B}}^p}_{\varepsilon}x{\int_Qv\varphi}\,d\nu_\omega(v)\,dP(\omega).
_{\varepsilon}nd{eqnarray*}
Set $u:=\int_\Omega\int_{{\mathscr{B}}^p}v\,d\nu_\omega(v)dP(\omega)\in{\mathscr{B}}^p$. Then Fubini's theorem yields
\begin{eqnarray*}
\lim\limits_{{\varepsilon}\to 0}\int_\Omega\int_Qu_{\varepsilon}(\omega,x)(\mathcal{T}_{\varepsilon}^*\varphi)(\omega,x)\,dx\,dP(\omega)&=& _{\varepsilon}x{\int_Qu\varphi}.
_{\varepsilon}nd{eqnarray*}
Since $\mbox{span}({\mathscr D}_0)$ is dense in ${\mathscr{B}}^q$, we conclude that $u_{\varepsilon}\ts u$.
{\it Step 5. Recovery of quenched two-scale convergence}. Suppose that for $P$-a.e.~$\omega\in\Omega$ the measure $\nu_\omega$ is a Dirac measure on ${\mathscr{B}}^p$, say $\nu_\omega=\delta_{v(\omega)}$ for some measurable $v:\Omega\to{\mathscr{B}}^p$. Note that $h:\Omega\times{\mathscr M}\to(-\infty,0]$,
\begin{equation*}
h(\omega,(U,{\varepsilon},r)):=-d(U,J_0v(\omega);\mbox{\rm Lin}({\mathscr D}))
_{\varepsilon}nd{equation*}
is a normal integrand and $|h(\cdot, s_{\varepsilon}(\cdot))|$ is uniformly integrable. Thus, _{\varepsilon}qref{eq:Balders-ineq} yields
\begin{eqnarray*}
&&\limsup\limits_{{\varepsilon}\to 0}\int_{\Omega} d(J^\omega_{\varepsilon} u_{\varepsilon}(\omega,\cdot),J_0v(\omega);\mbox{\rm Lin}({\mathscr D}))\,dP(\omega)\\
&=&-\liminf\limits_{{\varepsilon}\to 0}\int_\Omega h(\omega,s_{\varepsilon}(\omega))\,dP(\omega)\\
&\leq&-\int_\Omega\int_{{\mathscr{B}}^p}h(\omega,J_0v)\,d\nu_\omega(v)\,dP(\omega)=-\int_\Omega h(\omega,J_0v(\omega))\,dP(\omega)=0.
_{\varepsilon}nd{eqnarray*}
Thus, there exists a subsequence (not relabeled) such that $d(J^\omega_{\varepsilon} u_{\varepsilon}(\omega,\cdot),J_0v(\omega);\mbox{\rm Lin}({\mathscr D}))\to 0$ for a.e.~$\omega\in\Omega_0$. In view of Lemma~\ref{L:metric-char} this implies that $u_{\varepsilon}\tsq{\omega}v(\omega)$ for a.e.~$\omega\in\Omega_0$.
_{\varepsilon}nd{proof}
\begin{proof}[Proof of Lemma~\ref{lem:Balder-Lem-two-scale}]
{\it Step 1. Representation of the functional by a lower semicontinuous integrand on ${\mathscr M}$.}\\
For all $\omega_0\in\Omega_0$ and $s=(U,{\varepsilon},r)\in{\mathscr M}^{\omega_0}$ we write $\varphii^{\omega_0}(s)$ for the unique representation $u$ in ${\mathscr{B}}^p$ (resp. $L^p(Q)$) of $U$ in the sense of _{\varepsilon}qref{eq:repr}. We thus may define for $\omega_0\in\Omega_0$ and $s\in{\mathscr M}^{\omega_0}$ the integrand
\begin{equation*}
\overline h(\omega_0,s):=
\begin{cases}
\int_Qh(\tau_{\frac{x}{{\varepsilon}}}\omega_0,x,(\varphii^{\omega_0}s)(x))\,dx&\text{if }s=(U,{\varepsilon},r)\text{ with }{\varepsilon}>0,\\
\int_\Omega\int_Qh(\omega,x,(\varphii^{\omega_0}s)(\omega,x))\,dx\,dP(\omega)&\text{if }s=(U,{\varepsilon},r)\text{ with }{\varepsilon}=0.
_{\varepsilon}nd{cases}
_{\varepsilon}nd{equation*}
We extend $\overline h(\omega_0,\cdot)$ to ${\mathscr M}$ by $+\infty$, and define $\overline h(\omega,\cdot)_{\varepsilon}quiv 0$ for $\omega\in\Omega\setminus\Omega_0$. We claim that $\overline h(\omega,\cdot):{\mathscr M}\to(-\infty,+\infty]$ is lower semicontinuous for all $\omega\in\Omega$. It suffices to consider $\omega_0\in\Omega_0$ and a convergent sequence $s_k=(U_k,{\varepsilon}_k,r_k)$ in ${\mathscr M}^{\omega_0}$. For brevity we only consider the (interesting) case when ${\varepsilon}_k\downarrow {\varepsilon}_0=0$. Set $u_k:=\varphii^{\omega_0}(s_k)$. By construction we have
\begin{equation*}
\overline h(\omega_0,s_k)=\int_Q h(\tau_{\frac{x}{{\varepsilon}_k}}\omega_0,x,u_k(x))\,dx,
_{\varepsilon}nd{equation*}
and
\begin{equation*}
\overline h(\omega_0,s_0)=\int_{\Omega}\int_Q h(\omega,x,u_0(\omega,x))\,dx\,dP(\omega).
_{\varepsilon}nd{equation*}
Since $s_k\to s_0$ and ${\varepsilon}_k\to 0$, Lemma~\ref{L:metric-struct} (vi) implies that $u_k\tsq{\omega_0}u_0$, and since $h$ is assumed to be a quenched two-scale normal integrand, we conclude that $\liminf\limits_{k} \overline h(\omega_0,s_k)\geq \overline h(\omega_0,s_0)$, and thus $\overline h$ is a normal integrand.
{\it Step 2. Conclusion.}\\
As in Step~1 of the proof of Theorem~\ref{thm:Balder-Thm-two-scale} we may associate with the sequence $(u_{\varepsilon})$ a sequence of measurable functions $s_{\varepsilon}:\Omega\to{\mathscr M}$ that (after passing to a subsequence that we do not relabel) generates a Young measure $\boldsymbol{\mu}$ on ${\mathscr M}$. Since by assumption $u_{\varepsilon}$ generates the Young measure ${\boldsymbol \nu}$ on ${\mathscr{B}}^p$, we deduce that the first component $\boldsymbol{\mu_1}$ satisfies $\nu_\omega(B)=\mu_{1,\omega}(J_0B)$ for any Borel set $B$. Applying _{\varepsilon}qref{eq:Balders-ineq} to the integrand $\overline h$ of Step~1 yields
\begin{eqnarray*}
&&\liminf\limits_{{\varepsilon}\to 0}\int_\Omega\int_Q h(\tau_{\frac{x}{{\varepsilon}}}\omega,x,u_{\varepsilon}(\omega,x))\,dx\,dP(\omega)\\
&=&\liminf\limits_{{\varepsilon}\to 0}\int_\Omega\overline h(\omega,s_{\varepsilon}(\omega))\,dP(\omega)\\
&\geq&\int_\Omega\int_{{\mathscr M}}\overline h(\omega,\xi)\,d\mu_\omega(\xi)\,dP(\omega)\\
&=&\int_\Omega\int_{{\mathscr{B}}^p}\Big(\int_\Omega\int_Qh(\tilde{\omega},x,v(\tilde{\omega},x))\,dx\,dP(\tilde{\omega})\Big)\,d\nu_\omega(v)\,dP(\omega).
_{\varepsilon}nd{eqnarray*}
_{\varepsilon}nd{proof}
\begin{proof}[Proof of Lemma~\ref{L:fromquenchedtomean}]
By (b) and (c) the sequence $(\tilde u_{\varepsilon})$ is bounded in ${\mathscr{B}}^p$ and thus we can pass to a subsequence such that $(\tilde u_{\varepsilon})$ generates a Young measure $\boldsymbol \nu$. Set $\tilde u:=\int_\Omega\int_{{\mathscr{B}}^p}v\,d\nu_\omega(v)\,dP(\omega)$ and note that Theorem~\ref{thm:Balder-Thm-two-scale} implies that $\tilde u_{\varepsilon}\wt \tilde u$ weakly two-scale in the mean. On the other hand the theorem implies that $\nu_\omega$ concentrates on the quenched two-scale cluster points of $(u^\omega_{\varepsilon})$ (for a.e.~$\omega\in\Omega$). Hence, in view of (a) we conclude that for a.e.~$\omega\in\Omega$ the measure $\nu_\omega$ is a Dirac measure concentrated on $u$, and thus $\tilde u=u$ a.e.~in $\Omega\times Q$. Since this argument applies to any subsequence and the limit $u$ does not depend on the chosen subsequence, the convergence $\tilde u_{\varepsilon}\wt u$ holds for the whole sequence.
_{\varepsilon}nd{proof}
\subsection{Quenched homogenization of convex functionals}\label{Section:4:3}
In this section we demonstrate how to lift homogenization results w.r.t.~two-scale convergence in the mean to quenched statements at the example of a convex minimization problem. Throughout this section we assume that $V:\Omega\times Q\times\mathbb{R}^{d\times d}\to\mathbb{R}$ is a convex integrand satisfying the assumptions $(A1)-(A3)$ of Section \ref{Section_Convex}. For $\omega\in\Omega$ we define ${\mathcal{E}}^{\omega}_{\varepsilon}: W^{1,p}_0(Q)\to \re{}$,
\begin{equation*}
{\mathcal{E}}^{\omega}_{{\varepsilon}}(u):=\int_{Q}V\left(\tau_{\frac{x}{{\varepsilon}}}\omega, x,\nabla^s u(x)\right)\,dx,
_{\varepsilon}nd{equation*}
and recall from Section~\ref{Section_Convex} the definition _{\varepsilon}qref{energy} of the averaged energy ${\mathcal{E}}_{{\varepsilon}}$ and the definition _{\varepsilon}qref{energy_hom} of the two-scale limit energy
${\mathcal{E}}_{0}$.
The goal of this section is to relate two-scale limits of ``mean''-minimizers, i.e.~functions $u_{\varepsilon}\in L^p(\Omega)\otimes W^{1,p}_0(Q)$ that minimize ${\mathcal{E}}_{{\varepsilon}}$, with limits of ``quenched''-minimizers, i.e.~families $\{u_{\varepsilon}(\omega)\}_{\omega\in\Omega}$ of minimizers to ${\mathcal{E}}^{\omega}_{\varepsilon}$ in $W^{1,p}_0(Q)$.
\begin{thm}
\label{thm:Quenched-hom-convex-grad} Let $u_{\varepsilon}\in L^{p}(\Omega)\otimes W_{0}^{1,p}(Q)$
be a minimizer of ${\mathcal{E}}_{{\varepsilon}}$. Then there exists a subsequence such that $(u_{\varepsilon},\nabla u_{\varepsilon})$ generates a Young measure $\boldsymbol{\nu}$ in ${\mathscr{B}}:=({\mathscr{B}}^p)^{d+d^2}$ in the sense of Theorem~\ref{thm:Balder-Thm-two-scale}, and for $P$-a.e.~$\omega\in\Omega$, $\nu_{\omega}$ concentrates on the set $ \big\{\,(u,\nabla u+\chi)\,:\,{\mathcal{E}}_0(u,\chi)=\min{\mathcal{E}}_0\,\big\}$ of minimizers of the limit functional.
Moreover, if $V(\omega,x,\cdot)$ is strictly convex for all $x\in Q$ and $P$-a.e.~$\omega\in\Omega$, then the minimizer $u_{\varepsilon}$ of ${\mathcal{E}}_{{\varepsilon}}$ and the minimizer $(u,\chi)$ of ${\mathcal{E}}_0$ are unique, and for $P$-a.e.~$\omega\in\Omega$ we have (for a not relabeled subsequence)
\begin{gather*}
u_{\varepsilon}\rightharpoonup u\text{ weakly in }W^{1,p}(Q),\qquad u_{\varepsilon}(\omega,\cdot)\tsq{\omega}u,\qquad\nabla u_{\varepsilon}(\omega,\cdot)\tsq{\omega}\nabla u+\chi,\\
\text{and }\min{\mathcal{E}}^\omega_{\varepsilon}={\mathcal{E}}^\omega_{\varepsilon}(u_{\varepsilon}(\omega,\cdot))\to {\mathcal{E}}_0(u,\chi)=\min{\mathcal{E}}_0.
_{\varepsilon}nd{gather*}
_{\varepsilon}nd{thm}
\begin{remark}[Identification of quenched two-scale cluster points]
If we combine Theorem~\ref{thm:Quenched-hom-convex-grad} with the identification of the support of the Young measure in Theorem~\ref{thm:Balder-Thm-two-scale} we conclude the following: There exists a subsequence such that
$(u_{\varepsilon},\nabla u_{\varepsilon})$ two-scale converges in the mean to a limit of the form $(u_0,\nabla u_0+\chi_0)$ with ${\mathcal{E}}_0(u_0,\chi_0)=\min{\mathcal{E}}_0$, and for a.e.~$\omega\in\Omega$ the set of quenched $\omega$-two-scale cluster points ${\mathscr{C\!P}}(\omega, (u_{\varepsilon}(\omega,\cdot),\nabla u_{\varepsilon}(\omega,\cdot)))$ is contained in $\big\{\,(u,\nabla u+\chi)\,:\,{\mathcal{E}}_0(u,\chi)=\min{\mathcal{E}}_0\,\big\}$. In the strictly convex case we further obtain that ${\mathscr{C\!P}}(\omega, (u_{\varepsilon}(\omega,\cdot),\nabla u_{\varepsilon}(\omega,\cdot)))=\{(u,\nabla u+\chi)\}$ where $(u,\chi)$ is the unique minimizer to ${\mathcal{E}}_0$. Note, however, that our argument (that extracts quenched two-scale limits from the sequence of ``mean'' minimizers) involves an exceptional $P$-null-set that a priori depends on the selected subsequence. This is in contrast to the classical result in \cite{DalMaso1986} which is based on a subadditive ergodic theorem and states that there exists a set of full measure $\Omega'$ such that for all $\omega\in\Omega'$ the minimizer $u_{\varepsilon}^\omega$ to ${\mathcal{E}}^{\omega}_{\varepsilon}$ weakly converges in $W^{1,p}(Q)$ to the deterministic minimizer $u$ of the reduced functional ${\mathcal{E}}_{\hom}$ for any sequence ${\varepsilon}\to 0$.
_{\varepsilon}nd{remark}
In the proof of Theorem~\ref{thm:Quenched-hom-convex-grad} we combine homogenization in the mean in form of Theorem~\ref{thm1}, the connection to quenched two-scale limits via Young measures in form of Theorem~\ref{thm:Balder-Thm-two-scale}, and a recent result by Nesenenko and the first author that states that $V$ is a quenched two-scale normal integrand:
\begin{lemma}[\mbox{\cite[Lemma~5.1]{HeidaNesenenko2017monotone}}]\label{lem:General-Hom-Convex}
$V$ is a quenched two-scale normal integrand in the sense of Definition~\ref{D:qtscnormal}.
_{\varepsilon}nd{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:Quenched-hom-convex-grad}]
{\it Step 1. (Identification of the support of $\boldsymbol{\nu}$).}
Since $u_{\varepsilon}$ is a sequence of minimizers, by Corollary~\ref{C:thm1} there exist a subsequence (not relabeled) and a minimizer $(u,\chi)\in W^{1,p}_0(Q)\times (L^p_{\varphiot}(\Omega)\otimes L^p(Q)^d)$ of ${\mathcal{E}}_0$ such that
$u_{\varepsilon} \wt u \text{ in }L^p(\Omega \times Q)^d$, $\nabla u_{\varepsilon} \wt \nabla u+\chi \text{ in }L^p(\Omega \times Q)^{d\times d}$, and
\begin{equation}\label{eq:conv-minima}
\lim\limits_{{\varepsilon}\to 0}\min{\mathcal{E}}_{\varepsilon}= \lim\limits_{{\varepsilon}\to 0}{\mathcal{E}}_{\varepsilon}(u_{\varepsilon})={\mathcal{E}}_0(u,\chi)=\min{\mathcal{E}}_0.
_{\varepsilon}nd{equation}
In particular, the sequence $(u_{\varepsilon},\nabla u_{\varepsilon})$ is bounded in ${\mathscr{B}}$. By Theorem~\ref{thm:Balder-Thm-two-scale} we may pass to a further subsequence (not relabeled) such that $(u_{\varepsilon},\nabla u_{\varepsilon})$ generates a Young measure $\boldsymbol{\nu}$ on ${\mathscr{B}}$. Since $\nu_\omega$ is supported on the set of quenched $\omega$-two-scale cluster points of $(u_{\varepsilon}(\omega,\cdot),\nabla u_{\varepsilon}(\omega,\cdot))$, we deduce from Lemma~\ref{lem:sto-conver-grad} that the support of $\nu_\omega$ is contained in ${\mathscr{B}}_0:=\{\xi=(\xi_1,\xi_2)=(u',\nabla u'+\chi')\,:\,u'\in W^{1,p}_0(Q),\,\chi\in L^p_{\varphiot}(\Omega)\otimes L^p(Q)^d\}$ which is a closed subspace of ${\mathscr{B}}$. Moreover, thanks to the relation of the generated Young measure and stochastic two-scale convergence in the mean, we have $(u,\chi)=\int_\Omega \int_{{\mathscr{B}}_0}(\xi_1,\xi_2-\nabla\xi_1)\,\nu_\omega(d\xi)\,dP(\omega)$. By Lemma~\ref{lem:General-Hom-Convex}, $V$ is a quenched two-scale normal integrand and thus Lemma~\ref{lem:Balder-Lem-two-scale} implies that
\begin{equation*}
\lim\limits_{{\varepsilon}\to 0}{\mathcal{E}}_{\varepsilon}(u_{\varepsilon})\geq \int_\Omega\int_{{\mathscr{B}}}\Big(\int_\Omega\int_Q V(\tilde{\omega},x,\xi_2)\,dx\,dP(\tilde{\omega})\Big)\,\nu_\omega(d\xi)\,dP(\omega).
_{\varepsilon}nd{equation*}
In view of _{\varepsilon}qref{eq:conv-minima} and the fact that $\nu_\omega$ is supported in ${\mathscr{B}}_0$, we conclude that
\begin{equation*}
\min{\mathcal{E}}_0\geq \int_\Omega\int_{{\mathscr{B}}_0}{\mathcal{E}}_0(\xi_1,\xi_2-\nabla\xi_1)\,\nu_\omega(d\xi)\,dP(\omega)\geq \min{\mathcal{E}}_0\int_\Omega\int_{{\mathscr{B}}_0}\nu_\omega(d\xi)dP(\omega).
_{\varepsilon}nd{equation*}
Since $\int_\Omega\int_{{\mathscr{B}}_0}\nu_\omega(d\xi)dP(\omega)=1$, we have $\int_\Omega\int_{{\mathscr{B}}_0}|{\mathcal{E}}_0(\xi_1,\xi_2-\nabla\xi_1)-\min{\mathcal{E}}_0|\,\nu_\omega(d\xi)\,dP(\omega)= 0$, and thus we conclude that for $P$-a.e.~$\omega\in\Omega_0$, $\nu_\omega$ concentrates on $\{(u,\nabla u+\chi)\,:\,{\mathcal{E}}_0(u,\chi)=\min{\mathcal{E}}_0\}$.
{\it Step 2. (The strictly convex case).}
The uniqueness of $u_{\varepsilon}$ and $(u,\chi)$ is clear. From Step~1 we thus conclude that $\nu_\omega=\delta_{\xi}$ where $\xi=(u,\nabla u+\chi)$. Theorem~\ref{thm:Balder-Thm-two-scale} implies that $(u_{\varepsilon}(\omega,\cdot),\nabla u_{\varepsilon}(\omega,\cdot))\tsq{\omega}(u,\nabla u+\chi)$ (for $P$-a.e.~$\omega\in\Omega$).
By Lemma~\ref{lem:General-Hom-Convex}, $V$ is a quenched two-scale normal integrand and thus for $P$-a.e.~$\omega\in\Omega$,
\begin{equation*}
\liminf\limits_{{\varepsilon}\to 0}{\mathcal{E}}^\omega_{\varepsilon}(u_{\varepsilon}(\omega,\cdot))\geq {\mathcal{E}}_0(u,\chi)=\min{\mathcal{E}}_0.
_{\varepsilon}nd{equation*}
On the other hand, since $u_{\varepsilon}(\omega,\cdot)$ minimizes ${\mathcal{E}}^\omega_{\varepsilon}$, we deduce by a standard argument that for $P$-a.e.~$\omega\in\Omega$,
\begin{equation*}
\lim\limits_{{\varepsilon}\to 0}\min{\mathcal{E}}^\omega_{\varepsilon}=\lim\limits_{{\varepsilon}\to 0}{\mathcal{E}}^\omega_{\varepsilon}(u_{\varepsilon}(\omega,\cdot))={\mathcal{E}}_0(u,\chi)=\min{\mathcal{E}}_0.
_{\varepsilon}nd{equation*}
_{\varepsilon}nd{proof}
\begin{appendix}
\section{Proof of Lemma \ref{lem8}}\label{appendix:1}
\begin{proof}
We prove the claim for $\chi=D\varphi$ with $\varphi\in W^{1,p}(\Omega)^d$; the general case then follows by density.
Recall that the stationary extension of $D\varphi$ is given by $S{D\varphi}(\omega,x)=D\varphi(\tau_x\omega)$ and that $\nabla S{\varphi}(\omega,x)=S{D\varphi}(\omega,x)$. Let $R>0$, $K>0$, and let $_{\varepsilon}ta_{R}\in C^{\infty}_c(B_{R+K})$ be a cut-off function satisfying $_{\varepsilon}ta_R=1$ in $B_R$, $0 \leq _{\varepsilon}ta_R\leq 1$ and $|\nabla _{\varepsilon}ta_{R}|\leq \frac{2}{K}$.
Using stationarity of $P$, we obtain
\begin{equation*}
_{\varepsilon}x{|D\varphi|^p}=_{\varepsilon}x{\fint_{B_R}|\nabla S{\varphi}|^p dx}=_{\varepsilon}x{\fint_{B_R}|\nabla(_{\varepsilon}ta_R S{\varphi})|^p dx}\leq _{\varepsilon}x{\frac{1}{|B_R|}\int_{\re d}|\nabla(_{\varepsilon}ta_R S{\varphi})|^p dx}.
_{\varepsilon}nd{equation*}
Using this and Korn's inequality in $L^p(\mathbb{R}^d)$,
\begin{align*}
_{\varepsilon}x{|D\varphi|^p}&\leq 2 _{\varepsilon}x{\frac{1}{|B_R|}\int_{\re d}|\nabla^s(_{\varepsilon}ta_R S{\varphi})|^p}\\ &=2_{\varepsilon}x{\fint_{B_R}|\nabla^s S{\varphi}|^p dx}+\frac{2}{|B_R|}_{\varepsilon}x{\int_{B_{R+K}\setminus B_R}|\nabla^s(_{\varepsilon}ta_R S{\varphi})|^p dx}.
_{\varepsilon}nd{align*}
The first term on the right-hand side of the above inequality equals $2 _{\varepsilon}x{|D^s\varphi|^p}$ and therefore to conclude the proof, it is sufficient to show that the second term vanishes in the limit $R\to \infty$.
We have
\begin{align}\label{ineq47}
\begin{split}
&\frac{1}{|B_R|}_{\varepsilon}x{\int_{B_{R+K}\setminus B_R}|\nabla^s(_{\varepsilon}ta_R S{\varphi})|^p dx}\leq \frac{1}{|B_R|}_{\varepsilon}x{\int_{B_{R+K}\setminus B_R}|\nabla(_{\varepsilon}ta_R S{\varphi})|^p dx}\\
&\qquad\qquad \leq \frac{C}{|B_R|} _{\varepsilon}x{\int_{B_{R+K}\setminus B_R}|_{\varepsilon}ta_R|^p|\nabla S{\varphi}|^p +|\nabla _{\varepsilon}ta_R|^p|S{\varphi}|^p dx} \\
&\qquad\qquad \leq \frac{C}{|B_R|}_{\varepsilon}x{\int_{B_{R+K}\setminus B_R}|\nabla S{\varphi}|^pdx}+\frac{C}{|B_R|K^p}_{\varepsilon}x{\int_{B_{R+K}\setminus B_R}|S{\varphi}|^p dx}.
_{\varepsilon}nd{split}
_{\varepsilon}nd{align}
For the first term on the right-hand side, we have
\begin{align*}
\frac{C}{|B_R|}_{\varepsilon}x{\int_{B_{R+K}\setminus B_R}|\nabla S{\varphi}|^pdx}= & \frac{C|B_{R+K}|}{|B_R|}_{\varepsilon}x{\fint_{B_{R+K}}|\nabla S{\varphi}|^pdx}-C_{\varepsilon}x{\fint_{B_R}|\nabla S{\varphi}|^pdx}\\ = & C _{\varepsilon}x{|D\varphi|^p}\brac{\frac{|B_{R+K}|}{|B_R|}-1}
_{\varepsilon}nd{align*}
and as $R\to \infty$ the last expression vanishes. Similarly, the second term on the right-hand side of (\ref{ineq47}) vanishes as $R\to \infty$.
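For the reader's convenience we record the analogous computation for this second term: by Fubini's theorem and stationarity of $P$,
\[
\frac{C}{|B_R|K^p}\int_\Omega\int_{B_{R+K}\setminus B_R}|S{\varphi}(\omega,x)|^p\,dx\,dP(\omega)
=\frac{C}{K^p}\Big(\frac{|B_{R+K}|}{|B_R|}-1\Big)\int_\Omega|\varphi|^p\,dP
\longrightarrow 0\qquad\text{as }R\to\infty,
\]
since $K$ is kept fixed and $|B_{R+K}|/|B_R|\to1$ as $R\to\infty$.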
_{\varepsilon}nd{proof}
_{\varepsilon}nd{appendix}
\section*{Acknowledgments}
The authors thank Alexander Mielke for fruitful discussions and valuable comments. SN and MV thank Goro Akagi for useful discussions and valuable comments.
MH has been funded by Deutsche Forschungsgemeinschaft (DFG) through grant CRC 1114 ``Scaling Cascades
in Complex Systems'', Project C05 ``Effective models for interfaces with many scales''. SN and MV were supported by the DFG
in the context of TU Dresden's Institutional Strategy ``The Synergetic University''.
\nocite{*}
\end{document}
|
\begin{document}
\author{Francesco Ciccarello}
\author{Vittorio Giovannetti}
\affiliation{NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, Piazza dei Cavalieri 7, I-56126 Pisa, Italy}
\title{Local-channel-induced rise of quantum correlations in continuous-variable systems}
\date{\today}
\begin{abstract}
It was recently discovered that the quantum correlations of a pair of disentangled qubits, as measured by the quantum discord, can increase solely because of their interaction with a {\it local} dissipative bath. Here, we show that a similar phenomenon can occur in continuous-variable bipartite systems. To this aim, we consider a class of two-mode squeezed thermal states and study the behavior of Gaussian quantum discord under various {\it local} Markovian non-unitary channels. While these in general cause a monotonic drop of quantum correlations, an initial rise can take place with a thermal-noise channel.
\end{abstract}
\maketitle
\section{Introduction}
Within the remit of quantum information processing (QIP) theory \cite{nc} and beyond, a paramount topic is the study of quantum correlations (QCs). Until recently, the emergence of QCs has been regularly highlighted in connection with non-separable states of multipartite quantum systems \cite{horo1}. The violation of the celebrated Bell inequalities is a typical signature of the extra amount of correlations that quantum systems can possess besides those of a purely classical nature \cite{nc, horo1}. Non-separability, \ie entanglement, which relies on the superposition principle, has in fact been regarded as a necessary prerequisite in order for QCs to occur. In 2001, however, a new way was discovered \cite{seminal} in which the superposition principle can entail an alternative type of non-classical correlations even in the {\it absence of any entanglement}. Typical instances are mixtures of pure separable states which are locally non-orthogonal, \ie indistinguishable \cite{talmor}. More rigorously, a state has non-classical correlations whenever its associated density operator cannot be diagonalized in a basis which is the tensor product of local orthonormal bases \cite{talmorsimil}. When this occurs, the entire correlation content cannot be retrieved by any local measurement, at variance with the classical framework where this is always achievable.
Following such breakthrough, a growing interest has developed especially after the realization that this new QCs'~paradigm could be key to some known entanglement-free QIP schemes \cite{ent-free}.
Various measures have been proposed in the literature to detect such kind of QCs. A prominent one is {\it quantum discord} \cite{seminal} whose definition simply arises from the quantum generalization of two classically-equivalent versions of the mutual information (one being based on the conditional entropy). These are found to differ in the quantum framework, the corresponding discrepancy just being the quantum discord. This encompasses both QCs associated with entanglement and those that can be exhibited by separable states. The non-equivalence between such two forms of non-classical correlations arises when mixed states are addressed since as long as the state is pure the discord coincides with the entropy of entanglement.
Significant motivations to pursue a deeper understanding of these problems are currently being provided by various investigations targeting non-unitary dynamics. While entanglement is known to be extremely fragile to environmental interactions, QCs -- as given by such extended notion -- are generally very robust and in some cases even fully insensitive to phase noise \cite{laura}. This is related to the fact that zero-discord states are a set of negligible measure within the entire Hilbert space \cite{subset}.
Still in the framework of non-unitary dynamics, very recently it was found that the interaction with a {\it local and memoryless} bath can even create QCs initially fully absent \cite{ciccarello, mauro,bruss, gong}. Such effect, which is unattainable with entanglement \cite{horo1}, was shown to take place in particular for qubits prepared in a fully classical state and undergoing a local amplitude-damping channel. This describes the dissipative interaction with a local bath (such as the spontaneous emission of a two-level atom) \cite{nc}. Specifically, the behavior consists of an initial rise of QCs until a maximum is reached followed by a slow decay, the entanglement being zero throughout. The essential underlying mechanism is that such non-unitary process can map orthogonal states of a subsystem onto non-orthogonal ones. The state thus can no more be diagonalized in a tensor product of orthonormal bases, \ie it acquires QCs \cite{ciccarello, bruss}.
Most of the work carried out so far along these lines, though, tackled qubits. It is however natural to wonder how the above findings are generalized for quantum {\it continuous-variable} (CV) systems, which routinely occur in quantum optics. Such investigations are still in their infancy. Indeed, the first measure of generalized QCs for Gaussian states of bipartite CV systems has been worked out only last year \cite{dattaadesso, paris}. Such quantity, called {\it Gaussian discord}, is basically defined in the same spirit of standard discord \cite{seminal} and likewise can be non-zero for separable states \cite{dattaadesso, paris}. Recently, another measure has been proposed \cite{adessoAMID}.
To date, only few studies \cite{paris, ruggero} have targeted the non-unitary dynamics of Gaussian discord in the presence of environmental interactions including one model of non-Markovian reservoir \cite{nota}. It was found that apart from the case of a common reservoir the non-unitary dynamics is detrimental to QCs \cite{paris, ruggero,torun}.
Our goal in this work is to analyze the behavior of Gaussian discord in the case of two CV modes under the most relevant known instances of local -- \ie single-mode -- Gaussian memoryless channels \cite{eisert, holevo,njp} each described by an associated completely positive quantum map. Our major motivation is to assess whether growth of QCs can take place for some local non-unitary channels as in the case of qubits \cite{ciccarello,mauro,bruss}. We will indeed find that in significant analogy with the qubits' framework this can occur with the {\it lossy} channel (more in general with low-temperature thermal-noise channels).
This paper is organized as follows. In Section \ref{discord2}, we briefly review the definition of Gaussian discord. In Section \ref{states}, we describe the class of squeezed thermal states, which will be our focus in this work. In Section \ref{channels}, we review each of the aforementioned Gaussian channels and the related main features that we will refer to. In Section \ref{effect}, we show the behavior of QCs under each of the considered channels for some paradigmatic initial states. In Section \ref{insight}, we provide a simple picture that allows us to obtain insight into the effects presented in the previous section. Finally, in Section \ref{concl} we draw our conclusions.
\section{Quantum discord of Gaussian states: review} \label{discord2}
Given two systems $1$ and $2$ in a state $\rho$, the quantum mutual information is defined as
\begin{equation}\label{mutinf}
\mathcal{I}(\rho)\ug S(\rho_1)\piu S(\rho_2)-S(\rho)\,\,
\end{equation}
where $\rho_{1(2)}\ug{\rm Tr}_{2(1)}\rho$ is the reduced density operator describing the state of 1 (2) and $S(\sigma)\!=\!-{\rm Tr}\,(\sigma{\rm\, log} \sigma)$ is the Von Neumann entropy of an arbitrary state $\sigma$.
A local generalized measurement on $2$ can be specified by a complete set of positive-operator-valued projectors (POVM) $\{\Pi_k\}$, where $k$ indexes a possible outcome. If $k$ is recorded with probability $p_k\!=\!{\rm Tr}[\rho \openone \tens \Pi_k]$ the overall system collapses onto the (normalized) state $\rho_k\!=\!(\rho \openone \tens \Pi_k)/p_k$. The maximum of $S(\rho_1)\meno \sum_k p_k S(\rho_k)$, \ie the mismatch between the entropy of 1 and the average conditional entropy, reads
\begin{equation}\label{j}
\mathcal{J}(\rho)\ug \max_{\{\Pi_k\}} \left[S(\rho_1)\meno \sum_k p_k S(\rho_k)\right]\,\,,
\end{equation}
and is taken over all possible POVM measurements {each described by the operator $\Pi_k$}. If 1 and 2 are CV modes and one restricts to generalized {\it Gaussian} measurements
\cite{cirac} the Gaussian discord $\mathcal{D}^\leftarrow$ is defined as the discrepancy between (\ref{mutinf}) and (\ref{j}) \cite{paris,dattaadesso}
\begin{equation}\label{disc1}
\mathcal{D}^\leftarrow=\mathcal{I}-\mathcal{J}\,\,.
\end{equation}
The arrow reminds that measurements over mode 2 have been considered.
As is well-known, any two-mode zero-mean Gaussian state $\rho_{12}$ is fully specified by its covariance matrix $\sigma_{ij}\ug{\rm Tr}\left[ \rho_{12}\left(R_{i}R_{j}\piu R_{j}R_{i}\right)\right]$, where ${\bf R}\ug\left\{x_{1},p_{1},x_{2},p_{2}\right\}$ is the set of operators corresponding to the phase-space coordinates. Up to local symplectic, \ie unitary, operations the covariance matrix can be arranged in the form
\begin{equation} \label{sigma}
\mbox{\boldmath$\sigma$}\! =\!\left( \begin{array}{cc}
A& C \\
C &B \end{array} \right)\,\,,
\end{equation}
where $A\ug{\rm diag}(a,a)$, $B\ug{\rm diag}(b,b)$ and $C\ug{\rm diag}(c_1,c_2)$. The determinants of such diagonal matrices $I_1\ug{\rm det}\,A$, $I_2\ug{\rm det}\,B$, $I_3\ug{\rm det}\,C$ along with $I_4\ug{\rm det}\mbox{\boldmath$\,\sigma$}$ are symplectic invariants, \ie they are invariant under local symplectic transformations. The quantum discord $\mathcal{D}^{\leftarrow}$ is a function of these symplectic invariants according to \cite{paris, dattaadesso}
\begin{equation}\label{disc}
\mathcal{D}^{\leftarrow}\ug h(\sqrt{I_2})-h(d_-)-h(d_+)+h\left(\frac{\sqrt{I_1}+2\sqrt{I_1I_2}+2I_3}{1+2\sqrt{I_2}}\right)\,\,,
\end{equation}
where $h(x)\ug(x\piu1/2)\ln(x\piu1/2)-(x\meno1/2)\ln(x\meno1/2)$ and
\begin{equation}\label{dpm}
d_{\pm}^2\ug\frac{\Delta\pm\sqrt{\Delta^2-4I_4}}{2}
\end{equation}
with $\Delta\ug I_1+I_2+2 I_3$. The discord in terms of measurements on mode 1 $\mathcal{D}^\rightarrow$ is obtained from (\ref{disc}) upon the replacement $I_1\leftrightarrow I_2$.
In the remainder of this work, we will be solely interested in the discord (\ref{disc}), hence we will drop the subscript henceforth and set $\mathcal{D}\ug \mathcal{D}^\leftarrow$.
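As an illustration, Eq.~(\ref{disc}) can be evaluated numerically in a few lines of Python. The sketch below is ours and is not part of the derivation: it assumes a covariance matrix already brought to the standard form (\ref{sigma}) and the convention of Eqs.~(\ref{abc1})-(\ref{abc3}), in which the vacuum corresponds to $a=b=1/2$; the helper names are arbitrary.
\begin{verbatim}
import numpy as np

def h(x):
    # h(x) entering the discord formula; h(1/2) = 0 (pure-state limit).
    if x <= 0.5:
        return 0.0
    return (x + 0.5) * np.log(x + 0.5) - (x - 0.5) * np.log(x - 0.5)

def gaussian_discord(a, b, c1, c2):
    # Symplectic invariants of the standard-form covariance matrix.
    I1, I2, I3 = a * a, b * b, c1 * c2
    I4 = (a * b - c1 * c1) * (a * b - c2 * c2)   # determinant of the full matrix
    Delta = I1 + I2 + 2.0 * I3
    d_plus = np.sqrt((Delta + np.sqrt(Delta ** 2 - 4.0 * I4)) / 2.0)
    d_minus = np.sqrt((Delta - np.sqrt(Delta ** 2 - 4.0 * I4)) / 2.0)
    e = (np.sqrt(I1) + 2.0 * np.sqrt(I1 * I2) + 2.0 * I3) / (1.0 + 2.0 * np.sqrt(I2))
    return h(np.sqrt(I2)) - h(d_minus) - h(d_plus) + h(e)
\end{verbatim}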
\section{Two-mode squeezed-thermal states} \label{states}
In this work, we shall focus on the class of two-mode squeezed-thermal states (STSs) whose generic element reads
\begin{equation} \label{sts}
\rho\ug e^{r(a_1^\dagger a_2^\dagger-a_1a_2)} (\rho_1 \otimes \rho_2 )\; \left[e^{r(a_1^\dagger a_2^\dagger-a_1a_2)} \right]^\dagger\,\,,
\end{equation}
where each single-mode thermal state $\rho_i$ ($i\ug1,2$) is given by
\begin{equation}\label{thermal}
\rho_i\ug\sum_{n\ug0}^{\infty}\frac{N_i^n}{(1+N_i)^{n+1}}|n\rangle_i\langle n|\;,
\end{equation}
with $N_i$ the corresponding average number of photons.
The characteristic function $\chi(\lambda_1,\lambda_2)$ \cite{walls} is obtained from (\ref{sts})
as
\begin{equation}\label{chilambda}
\chi(\lambda_1,\lambda_2)\ug{\rm Tr}\left[\rho D(\lambda_1)D(\lambda_2)\right]\,\,,
\end{equation}
where $D(\lambda_i)\ug\exp(\lambda_i a_i^{\dagger}\meno\lambda_i^{*}a_i)$ is the displacement operator of the $i$th mode ($i\ug1,2$, with $a_i$ and $a_i^\dagger$ the usual ladder operators of the $i$th mode). The density operator is retrieved from the characteristic function as $\rho\ug\int d^2 \!\lambda_1 d^2\!\lambda_2\, \chi(\lambda_1,\lambda_2)D(\meno \lambda_1)D(\meno \lambda_2)/\pi^2$.
It is then straightforwardly checked that
\begin{equation}\label{cfsts}
\chi(\lambda_1,\lambda_2)\ug e^{-(N_1\piu 1/2)|\cosh r\lambda_1\meno \sinh r \lambda_2^*|^2}\,e^{-(N_2\piu 1/2)|\cosh r\lambda_2\meno \sinh r \lambda_1^*|^2}\,\,.
\end{equation}
As states (\ref{sts}) are Gaussian, the covariance matrix is related to the characteristic function according to \cite{ferraronotes}
\begin{equation}\label{chisigma}
\chi({\bf \Lambda})=\exp\left[ -\frac{{\bf \Lambda}^{\rm T}\boldsigma{\bf \Lambda}}{2}\right]\,\,,
\end{equation}
where ${\bf \Lambda\ug(\alpha_1,\beta_1,\alpha_2,\beta_2)}^{\rm T}$ and we have decomposed each complex variable $\lambda_j$ ($j\ug1,2$) as $\lambda_j\ug(\alpha_j\piu i \,\beta_j)/\sqrt{2}$. Upon comparison between Eqs.~(\ref{cfsts}) and (\ref{chisigma}) and in the light of (\ref{sigma}), the diagonal matrix elements defining $A$, $B$ and $C$ [\cf Eq.~(\ref{sigma})] are obtained as
\begin{eqnarray}
a&=&(1\piu N_r)N_1\piu N_r N_2\piu N_r\piu1/2\,\,,\label{abc1}\\
b&=&N_r N_1\piu (1\piu N_r) N_2\piu N_r\piu1/2\,\,,\label{abc2}\\
c_1&=&-c_2=-(1\piu N_1\piu N_2)\sqrt{ N_r (1\piu N_r)}\label{abc3}\,\,,
\end{eqnarray}
where we have set $N_r\ug(\sinh r)^2$.
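For completeness, a minimal Python sketch (ours) returning the standard-form elements of Eqs.~(\ref{abc1})-(\ref{abc3}) for given $r$, $N_1$ and $N_2$ reads:
\begin{verbatim}
import numpy as np

def sts_covariance_elements(r, N1, N2):
    # Standard-form elements (a, b, c1, c2) of the squeezed thermal state.
    Nr = np.sinh(r) ** 2
    a = (1 + Nr) * N1 + Nr * N2 + Nr + 0.5
    b = Nr * N1 + (1 + Nr) * N2 + Nr + 0.5
    c1 = -(1 + N1 + N2) * np.sqrt(Nr * (1 + Nr))
    return a, b, c1, -c1
\end{verbatim}
Feeding these elements into the \texttt{gaussian\_discord} routine sketched in Section \ref{discord2} should return the input-state discord, \ie the value taken at $\eta\ug1$ in the plots discussed below.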
\section{Local Gaussian channels} \label{channels}
In this Section, we review the salient features of the local one-mode Gaussian channels whose effect on the quantum correlations of two-mode states we aim to scrutinize. Throughout, we will assume that each of such channels acts on mode 2 only.
The channels are Gaussian in that each of these maps Gaussian states into Gaussian states, hence allowing for use of Gaussian discord \cite{paris,dattaadesso} to shed light onto the QCs' behavior.
Specifically, we address the thermal-noise channel, which reduces to the lossy channel in the limit of vanishing reservoir temperature, the amplifier channel and the classical-noise channel \cite{eisert}.
\subsection{Thermal-noise channel}
This channel describes the dissipative interaction of a single-mode CV system with an environment at thermal equilibrium. Indeed, its associated map is routinely worked out by assuming that a single-mode environment in a thermal state specified by the average photon number ${N}$ is mixed by a beam splitter with the single-mode system. The state of the latter, and hence the map describing the channel, can then be obtained by simply tracing out the environmental degree of freedom. The channel quantum efficiency is measured by the parameter $\eta$ such that $0\!\le\!\eta\!\le\!1$ (in the above model $\eta$ is the transmissivity of the beam splitter).
Under the thermal-noise channel of efficiency $\eta$ and environment's average photon number $N$, the characteristic function $\chi(\lambda_1,\lambda_2)$ transforms into $\chi'(\lambda_1,\lambda_2)$ according to \cite{pra-lossy}
\begin{equation}\label{chitn}
\chi'(\lambda_1,\lambda_2)\ug\chi(\lambda_1,\!\sqrt{\eta}\lambda_2)\,e^{{-(1-\eta)(N+1/2)|\lambda_2|^2}}\,\,\,\,\,\,\,\,(0\!\le\!\eta\!\le\!1)\,.
\end{equation}
For $N\ug0$ (zero-temperature environment) Eq.~(\ref{chitn}) reduces to the case of a {\it lossy channel} \cite{eisert} acting on mode 2.
By substituting (\ref{cfsts}) in Eq.~(\ref{chisigma}) and comparing this with Eq.~(\ref{chitn}) it is straightforwardly found that the covariance matrix keeps the same structure as in Eq.~(\ref{sigma}) with $A\ug{\rm diag}(a',a')$, $B\ug{\rm diag}(b',b')$ and $C\ug{\rm diag}(c_1',c_2')$. The new diagonal matrix elements $\{a',b', c'\}$ are related to the input ones [\cf Eq.~s(\ref{abc1})-(\ref{abc3})] according to
\begin{eqnarray}\label{abctnc}
a'&=&a\,\,,\\
b'&=&\eta b+(1-\eta)\left(N\piu\frac{1}{2}\right)\,\,,\\
c'_1&=&-c'_2=\sqrt{\eta}\, c_1\,\,.
\end{eqnarray}
\subsection{Amplifier channel}
This channel shares features similar to the thermal-noise channel but with the essential difference that it brings about an intensity amplification instead of a damping. Its corresponding map changes the characteristic function according to \cite{amplifier}
\begin{equation}\label{chiac}
\chi'(\lambda_1,\lambda_2)\ug\chi(\lambda_1,\!\sqrt{k}\lambda_2)\,e^{{-(k-1)(N+1/2)|\lambda_2|^2}}\,\,\,\,\,\,\,\,(k\!\ge\!1)\,,
\end{equation}
where $k\!\ge\!1$ is a gain parameter. In the limit $k\ug1$ the channel reduces to the identity operator.
By proceeding analogously to the case of the thermal-noise channel we find that the covariance matrix has again the same form as in Eq.~(\ref{sigma}) with the diagonal matrix elements now given by
\begin{eqnarray}\label{abcac}
a'&=&a\,\,,\\
b'&=&k b+(k-1)\left(N\piu\frac{1}{2}\right)\,\,,\\
c'_1&=&-c'_2=\sqrt{k}\, c_1\,\,.
\end{eqnarray}
\subsection{Classical-noise channel}
This channel arises when classical Gaussian noise is superimposed on the single-mode system \cite{cnchannel}. The parameter on which it depends is the number of injected noise photons $n\!\ge\!0$ in a way that the corresponding map becomes the identity in the limit $n\ug0$.
The characteristic function is transformed under this channel according to
\begin{equation}\label{chicn}
\chi'(\lambda_1,\lambda_2)\ug\chi(\lambda_1,\lambda_2)\;e^{-n|\lambda_2|^2}\,\,\,\,\,(n\!\ge\!0)\,\,.
\end{equation}
By comparing this with Eqs.~(\ref{cfsts}) and (\ref{chisigma}) it turns out that the covariance matrix has the form (\ref{sigma}) with diagonal matrix elements
\begin{eqnarray}\label{abccn}
a'&=&a\,\,,\\
b'&=&b+n\,\,,\\
c'_1&=&-c'_2=c_1\,\,.
\end{eqnarray}
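The action of the three channels on the standard-form elements can likewise be condensed into a few lines of Python (again a sketch of ours; \texttt{eta}, \texttt{k}, \texttt{n} and \texttt{N} denote the channel parameters defined above):
\begin{verbatim}
import numpy as np

def thermal_noise_channel(a, b, c1, c2, eta, N):
    # Lossy channel for N = 0; eta in [0, 1].
    return a, eta * b + (1 - eta) * (N + 0.5), np.sqrt(eta) * c1, np.sqrt(eta) * c2

def amplifier_channel(a, b, c1, c2, k, N):
    # Gain parameter k >= 1; k = 1 is the identity.
    return a, k * b + (k - 1) * (N + 0.5), np.sqrt(k) * c1, np.sqrt(k) * c2

def classical_noise_channel(a, b, c1, c2, n):
    # n >= 0 injected noise photons; n = 0 is the identity.
    return a, b + n, c1, c2
\end{verbatim}
Scanning $\eta$ from 1 to 0 (or $k$ and $n$ upwards) and re-evaluating the discord after each map traces curves of the kind discussed in the next Section.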
\section{Behavior of quantum correlations} \label{effect}
Our aim in this Section is to present some typical behaviors of Gaussian discord when states {of the family} (\ref{sts}) are subject to the local Gaussian channels introduced in the previous Section. Unfortunately, a general analysis is quite demanding owing to the complicated functional form of (\ref{disc}). Nonetheless, as will become clear, one can obtain significant insight by addressing some paradigmatic instances.
\begin{figure*}
\caption{(Color online) Behavior of Gaussian discord $\mathcal{D}$.}
\label{Fig1}
\end{figure*}
To begin with, we consider initial STSs [\cf Eqs.~(\ref{sts}) and (\ref{thermal})] having $r\ug1$ and $N_1\ug N_2$. In Fig.~1, we plot the behavior of Gaussian discord (\ref{disc}) for increasing values of $N_1\ug N_2$ in the presence of the thermal-noise and amplifier channel (both with $N\ug0$) as well as the classical-noise channel. We recall that each local non-unitary map affects only mode 2.
The classical-noise channel [see Fig.~1(c)] has a merely detrimental effect on the QCs, which monotonically decay for a growing number of noise photons regardless of $N_1$. A similar effect is exhibited in the case of the amplifier channel when the gain parameter $k$ is increased [see Fig.~1(b)]. In contrast, however, a non-monotonic behavior can take place with the lossy channel, as shown in Fig.~1(a). For low average photon numbers of each mode $N_1\ug N_2$, similarly to Figs.~1(b) and (c), a monotonic drop of QCs occurs as the quantum efficiency $\eta$ decreases (we recall that the corresponding map reduces to the identity operator for $\eta\ug1$). As the photon number becomes larger, though, the discord remarkably undergoes a {\it rise} so as to reach a maximum value and eventually drop to zero. Such an increase can be quite significant. For instance, as shown in Fig.~1(a), in the case $N_1\ug N_2\ug10$ the maximum taken by $\mathcal{D}$ is about $2.5$ times the initial value (for $\eta\ug1$) and even $\simeq\!10$ times larger when $N_1\ug N_2\ug50$. For higher photon numbers (still keeping the value of $r$ fixed) the initial discord becomes extremely low but a significant rise still takes place before the asymptotic decay. We illustrate the latter feature in the inset of Fig.~1(a) for the paradigmatic case $N_1\ug N_2\ug1000$, which corresponds to a fully separable state \cite{nota-sep} with an initial discord $\mathcal{D}|_{\eta=1}\!\simeq\!10^{-6}$. Yet, under the lossy channel this undergoes a rise reaching a maximum $\simeq\!0.5$ before the eventual slow decay.
\begin{figure}
\caption{(Color online) Gaussian discord $\mathcal{D}$.}
\end{figure}
For all practical purposes, one can thus regard this behavior as mere creation of (previously absent) quantum correlations, which is significantly reminiscent of an analogous effect occurring for qubits under local amplitude-damping channels \cite{ciccarello, mauro,bruss}.
It is natural to wonder how the discord rise is affected in the presence of a reservoir having non-zero temperature. To investigate this, in Fig.~2 we display $\mathcal{D}$ against $\eta$ in the case of the thermal-noise channel for growing values of the reservoir average photon number $N$. Clearly, the effect of temperature is to spoil the discord increase. At high enough temperatures, the initial rise is no longer exhibited and the behavior reduces to a mere monotonic decay similarly to the amplifier and classical-noise channels [\cf Figs.~1(b) and (c)]. {Note that there is a threshold value for $N$ separating the non-monotonic regime (featuring QC's rise) from the monotonic one, the latter occurring above the threshold. For fixed $r$, such a critical value grows linearly with $N_1$. This is shown in Fig.~3, where we plot the initial slope of the Gaussian discord $p$, \ie $p\ug\partial \mathcal{D}/\partial \eta$ at $\eta\ug0$, \vs $N$ and $N_1\ug N_2$. By recalling that $\eta$ decreases during the system's evolution under the thermal-noise channel, $p\!>\!0$ ($p\!<\!0$) means that an initial decay (rise) of QCs occurs. Hence, the intersection of $p(N,N_1)$ with the $N\!-\!N_1$ plane provides the functional dependence of the aforementioned threshold value on $N_1$.}
\begin{figure}
\caption{(Color online) Initial slope of the Gaussian discord $p$.}
\end{figure}
\begin{figure*}
\caption{(Color online) (a) Contour plot of Gaussian discord $\mathcal{D}$.}
\end{figure*}
\section{Insight into the rise of quantum correlations} \label{insight}
Here, we give a picture that illustrates the emergence of some key features and behaviors presented in the previous Section.
We focus on the initial state specified by $r\ug1$ and $N_1\ug N_2\ug10$ exhibiting a rise of discord followed by a decay in the case of the thermal-noise channel (at low temperatures) and a monotonic drop with the other two channels (\cf Figs.~1 and 2). As in each of such three cases the environment does not affect parameter $a'$ [see Eqs.~(\ref{abctnc}), (\ref{abcac}) and (\ref{abccn})] and the equality $c'_1\ug-c'_2$ always holds, in the addressed regime the discord is in fact a function of $b'$ and $c'\ug|c_1'|\ug|c_2'|$ [\cf Eq.~(\ref{disc})]. As shown in the contour plot in Fig.~4(a), in its domain of definition $\mathcal{D}$ decreases with $b'$ and increases with $c'$. One can now obtain insight into the behavior of QCs by observing that the equations for $b'$ and $c'$ define a characteristic curve (associated with the specific channel) parametrized by $\eta\!\in\![0,1]$, $k\!>\!1$ and $n\!>\!0$ in the case of the thermal-noise, amplifier and classical-noise channel, respectively. By expressing each of such parameters as a function of $c'$ and replacing it into the equation for $b'$, one obtains the trajectory of each channel in the $b'\!-\!c'$ plane as
\begin{eqnarray}\label{trajectories}
b'&\ug& \frac{b\meno \left(N\piu1/2\right)}{c^2}c'^2+(N\piu1/2)\,\,\,\,\,\,\,\,\,\,\,\left(c'\in[0,c]\right)\,\,,\label{tratn}\\
b'&\ug& \frac{b\piu \left(N\piu1/2\right)}{c^2}c'^2-(N\piu1/2)\,\,\,\,\,\,\,\,\,\,\,\left(c'\in[c,\infty]\right)\,\,,\label{traac}\\
c'&\ug& c \,\,\,\,\,\,\,\,\,\,\,\left(b'\!\in\![b,\infty]\right)
\end{eqnarray}
In Figs.~4(b) and (c), we report the above trajectories along with the same contour plot of discord as in Fig.~4(a). Evidently, due to the above discussed functional shape of $\mathcal{D}$ a monotonic decrease of QCs necessarily takes place under the amplifier and classical-noise channels. It is also clear from the bottom left portion of Fig.~4(b) that for the lossy channel as $\eta$ is decreased $\mathcal{D}$ must eventually drop. However, the trajectory is such that over the first stage ($\eta$ slightly below 1) the QCs {\it grow} [this is best illustrated by {the zoom presented} in Fig.~4(c)].
The effect of temperature in the case of the thermal-noise channel (\cf Fig.~2) can be understood by scrutinizing Eq.~(\ref{tratn}) and Fig.~4(d). For $\eta\!\rightarrow\!0$, $c'$ tends to zero and thereby $b'\!\rightarrow\!(N\piu1/2)$. Hence, the higher $N$ (\ie the temperature) the larger the asymptotic value of $b'$. Accordingly, the concavity $C\ug \left[b\meno \left(N\piu1/2\right)\right]/c^2$ -- which is positive for $N\ug0$ -- progressively decreases so as to eventually become negative. As soon as it becomes small enough, the shape of $\mathcal{D}(b',c')$ [\cf Fig.~4(a)] prevents any initial rise from taking place, which results in a monotonic decay (see Fig.~2).
\section{Conclusions} \label{concl}
In this paper, we have investigated the behavior of Gaussian discord for a two-mode squeezed thermal state subject to various local Gaussian channels. Mainly motivated by the finding that a local amplitude-damping channel can create QCs in a pair of qubits, we have explored the dynamics under the thermal-noise, amplifier and classical-noise channel. While the typical behavior is a monotonic decrease of QCs, {as one would expect}, we have found that in significant analogy with the qubits' framework a thermal-noise channel can give rise to a non-monotonic behavior comprising an initial {\it rise} of discord. {For large enough photon numbers of each mode such that the initial QCs are in fact negligible, the above entails creation of previously absent QCs for all practical purposes.}
The reservoir temperature spoils this phenomenon in a way that when it is high enough a mere monotonic decay occurs.
We have provided a picture that allows us to shed light on various features, in particular the reason why the discord rise is exhibited only under the thermal-noise channel as well as the detrimental effect of temperature on it.
These findings significantly extend to the CV scenario one of the most {counter-intuitive} effects entailed by the emerging extended paradigm of QCs: {quantum correlations can be established in a composite system through the interaction with a local and memoryless bath.}
\begin{acknowledgments}
{We thank Matteo Paris and Mauro Paternostro for comments and acknowledge support from FIRB IDEAS through project RBID08B3FM.}
\end{acknowledgments}
\begin {thebibliography}{99}
\bibitem{nc} M. A. Nielsen and I. L. Chuang, \textit{Quantum Computation and Quantum Information} (Cambridge University Press, Cambridge, U. K.,2000).
{ \bibitem{horo1} R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki,
Rev. Mod. Phys. {\bf 81}, 865 (2009).}
\bibitem{seminal} L. Henderson and V. Vedral, J. Phys. A {\bf 34}, 6899 (2001);
H. Ollivier and W. H. Zurek, Phys. Rev. Lett. {\bf 88}, 017901
(2001).
\bibitem{talmor} B. Groisman, D. Kenigsberg, and T. Mor, arXiv:quant-ph/0703103.
\bibitem{talmorsimil} J. Oppenheim, M. Horodecki, P. Horodecki, and
R. Horodecki, Phys. Rev. Lett. {\bf 89}, 180402 (2002); M. Horodecki, P. Horodecki, R. Horodecki, J. Oppenheim, A. Sen, U. Sen, and B. Synak-Radtke, Phys. Rev. A {\bf 71}, 062307 (2005).
\bibitem{ent-free} A. Datta, A. Shaji, and C. M. Caves, Phys. Rev. Lett.
{\bf 100} 050502 (2008); B. P. Lanyon, M. Barbieri, M. P. Almeida, and A. G. White ,
Phys. Rev. Lett. {\bf 101}, 200501 (2008).
\bibitem{laura} T. Werlang, S. Souza, F. F. Fanchini, and C. J. Villas Boa, Phys. Rev. A {\bf 80}, 024103 (2009); J. Maziero, L. C. C\'eleri, R. M. Serra, and V. Vedral,
Phys. Rev. A {\bf 80}, 044102 (2009);
L. Mazzola, J. Piilo, and S. Maniscalco, Phys. Rev. Lett. {\bf 104}, 200401 (2010).
\bibitem{subset} A. Ferraro, L. Aolita, D. Cavalcanti, F. M. Cucchietti, and A. Acin, Phys. Rev. A {\bf 81}, 052318 (2010).
\bibitem{ciccarello} F. Ciccarello and V. Giovannetti, arXiv:1105.5551.
\bibitem{mauro} S. Campbell, T. J. G. Apollaro, C. Di Franco, L. Banchi, A. Cuccoli, R. Vaia, F. Plastina, and M. Paternostro, Phys. Rev. A {\bf 84}, 052316 (2011).
\bibitem{bruss} A. Streltsov, H. Kampermann, and D. Bruss, Phys. Rev. Lett. {\bf 107}, 170502 (2011).
\bibitem{gong} X. Hu, Y. Gu, Q. Gong, and G. Guo, Phys. Rev. A {\bf 84}, 022113 (2011).
\bibitem{paris} P. Giorda and M. G. A. Paris, Phys. Rev. Lett. {\bf 105}, 020503
(2010).
\bibitem{dattaadesso} G. Adesso and A. Datta, Phys. Rev. Lett. {\bf 105}, 030501 (2010).
\bibitem{adessoAMID} L. Mista, Jr., R. Tatham, D. Girolami, N. Korolkova, and Gerardo Adesso, Phys. Rev. A {\bf 83}, 042325 (2011).
\bibitem{ruggero} R. Vasile, P. Giorda, S. Olivares, M. G. A. Paris, and S. Maniscalco, \pra {\bf 82}, 012313 (2010).
\bibitem{torun} A. Isar, Open Sys. Inf. Dynamics {\bf18}, 175 (2011).
\bibitem{nota} Here, we are not concerned with the open dynamics of two (or more) CV systems featuring a {\it direct interaction} between each other, which has recently been the focus of some studies.
\bibitem{holevo} A. S. Holevo and R. F. Werner, Phys. Rev. A 63, 032312 (2001).
\bibitem{eisert} J. Eisert and M. M. Wolf, {\it Quantum Information with Continuous Variables of Atoms and Light}, pages 23-42 (Imperial College Press, London, 2007).
\bibitem{njp} F. Caruso, J. Eisert, V. Giovannetti and A. S. Holevo, New J. Phys. {\bf 10}, 083030 (2008).
\bibitem{cirac} I. Cirac and G. Giedke, Phys. Rev. A {\bf 66}, 032316 (2002).
\bibitem{walls} D. F. Walls and G. J. Milburn, {\it Quantum Optics} (Springer,
Berlin , 1994).
\bibitem{ferraronotes} A. Ferraro, S. Olivares, and M. G. A. Paris, {\it Gaussian states in continuous variable quantum information} (Bibliopolis, Napoli, 2005).
\bibitem{pra-lossy} V. Giovannetti, S. Guha, S. Lloyd, L. Maccone, and J. H. Shapiro, \pra {\bf 70}, 032315 (2004).
\bibitem{amplifier} F. Caruso and V. Giovannetti, Phys. Rev. A {\bf 74}, 062307 (2006).
\bibitem{cnchannel} M. J. W. Hall and M. J. O'Rourke, Quantum Opt. {\bf 5}, 161 (1993); M. J. W. Hall, Phys. Rev. A {\bf 50}, 3295 (1994).
\bibitem{nota-sep} Using the separability criterion in J. S. Prauzner-Bechcicki, J. Phys. A {\bf 37}, L173 (2004), state (\ref{sts}) is separable whenever $N_1 N_2/(1\piu N_1\piu N_2)\!>\!N_r$.
\end {thebibliography}
\end{document}
|
\begin{document}
\begin{frontmatter}
\title{RBF-MGN: Solving spatiotemporal PDEs with Physics-informed Graph Neural Network}
\author[1]{Zixue Xiang}
\ead{[email protected]}
\author[2]{Wei Peng}
\author[2]{Wen Yao \corref{cor1}}
\cortext[cor1]{Corresponding author}
\address[1]{College of Aerospace Science and Engineering, National University of Defense Technology, No. 109, Deya Road, Changsha 410073, China}
\address[2]{National Innovation Institute of Defense Technology, Chinese Academy of Military Science, No. 55, Fengtai East Street, Beijing 100071, China}
\begin{abstract}
Physics-informed neural networks (PINNs) have lately received significant attention as a representative deep learning-based technique for solving partial differential equations (PDEs). Most fully connected network-based PINNs use automatic differentiation to construct their loss functions, which leads to slow convergence and makes boundary conditions difficult to enforce. In addition, although convolutional neural network (CNN)-based PINNs can significantly improve training efficiency, CNNs have difficulty in dealing with irregular geometries with unstructured meshes. Therefore, we propose a novel framework based on graph neural networks (GNNs) and radial basis function finite difference (RBF-FD). We introduce GNNs into physics-informed learning to better handle irregular domains with unstructured meshes. RBF-FD is used to construct a high-precision difference scheme for the differential equations to guide model training. Finally, we perform numerical experiments on Poisson and wave equations on irregular domains. We illustrate the generalizability, accuracy, and efficiency of the proposed algorithm for different PDE parameters, numbers of collocation points, and several types of RBFs.
\end{abstract}
\end{frontmatter}
\section{Introduction}
Partial differential equations (PDEs), especially spatiotemporal PDEs, have been extensively used in several fields, such as physics, biology, and finance. However, except for some simple equations for which analytical solutions exist, solving PDEs is a challenging problem. Consequently, numerical approaches, including the finite element (FEM), finite volume (FVM), and finite difference (FDM), were developed to solve PDEs in various practical problems.
In recent years, the rise of deep learning has provided an alternative way to solve complex nonlinear PDEs without the domain discretization used in numerical methods. A pioneering work in this direction is physics-informed neural networks (PINNs) \cite{raissi2019physics}, which constrain the output of deep neural networks to satisfy the PDEs by minimizing a loss function. PINNs have emerged as a promising framework for exploiting information from observational data and physical equations and can be classified into two categories: continuous and discrete. Continuous PINNs build a map from the domain to the solution with a feed-forward multi-layer neural network, where the partial derivatives can be easily computed through automatic differentiation (AD) \cite{2015AutomaticML}. They have been widely used in engineering applications such as fluid flow \cite{highspeedflows,2020Physics} or solid mechanics \cite{2021Asolidmechanics}. Continuous PINNs nevertheless have some limitations. First, a large number of points are required to represent a high-dimensional domain, and AD requires storing the differentiation computation graph during training, which significantly increases the training cost and computation time. Second, the residual form of the PDE and its initial (IC) and boundary conditions (BC) are combined into a composite objective function, yielding an unconstrained optimization problem; as a consequence, it is difficult for continuous PINNs to enforce the IC and BC strictly. Moreover, the use of fully connected networks limits the fitting accuracy of PINNs.
To improve representation power and effectiveness, discrete PINNs that employ numerical discretizations to compute the derivative terms of the physics-informed loss have attracted significant attention. Chen et al. \cite{2021HCP} discretized the computational domain with a regular mesh, used the FDM to discretize the PDE, and further proposed theory-guided hard constraint projection (HCP) to define the PDE loss function. In addition to FD-based PINNs, CAN-PINN \cite{chiu2021canpinn}, based on a coupled automatic-numerical differentiation method, has been presented. To address parametric PDEs with unstructured grids, several recent works have been devoted to constructing generalized discrete loss functions based on the FVM \cite{Rezaei2022} or FEM \cite{Jot} and integrating them into physics-based neural network algorithms. In addition to numerical discretization, convolutional neural networks (CNNs) are often used in discrete PINNs. Zhu et al. \cite{ZHU201956} demonstrated that CNN-based discrete PINNs have higher computational efficiency when solving high-dimensional elliptic PDEs. Cai et al. \cite{cai8793167} applied the CNN named LiteFlowNet to the fluid motion estimation problem. Further, Fang et al. \cite{fang9403414} developed hybrid PINNs based on CNNs and the FVM to solve PDEs on arbitrary geometries.
However, due to the inherent limitations of classical CNN convolution operations, it remains challenging for CNN-based discrete PINNs to handle irregular domains with unstructured grids. This problem can be addressed with graph neural networks (GNNs). Because graph convolution operates on local non-Euclidean neighborhoods, it allows the network to learn spatially localized evolution, which is consistent with the underlying physical processes and offers better interpretability. Jiang et al. \cite{Jiang} proposed PhyGNNet, which solves spatiotemporal PDEs with physics-informed graph neural networks, using the FDM to embed physical knowledge. Gao et al. \cite{GAO2022114502} presented a novel discrete PINN framework based on GNNs and used a meshless Galerkin method to construct the PDE residual. In general, GNN-based discrete PINNs have more desirable fitting ability and generalization performance.
In this work, we propose a physics-informed framework (RBF-MGN) based on GNNs and radial basis function finite difference (RBF-FD) to solve spatio-temporal PDEs. The contributions are summarized as follows:
(a) We introduce graph convolutional neural networks into physics-informed learning to better handle irregular domains with unstructured meshes. We choose MeshGraphNets \cite{Mesh}, a graph neural network model with an Encoder-Processer-Decoder architecture, to model the discretized solution.
(b) Radial basis function finite difference (RBF-FD), a meshless method, is used to process the model output node solution maps and construct a high-precision difference format of the differential equations to guide model training. Moreover, ensure that the output fully satisfies the underlying boundary conditions.
(c) We conduct several experiments on Poisson and wave equations, which indicates that our method has excellent ability and extrapolates well on irregular domains.
The rest of the paper is structured as follows. Section 2 provides a detailed introduction to the GNN, RBF-FD, and the principle of the RBF-MGN method. In Section 3, we provide numerical results showcasing the performance of the proposed approach. In Section 4, we conclude this work and discuss extensions to address its limitations.
\section{Methods}
\subsection{Overview}
Consider a dynamic physical process governed by general nonlinear and time-dependent PDEs of the form:
\begin{equation}
\begin{array}{c}
\boldsymbol{u}_{t}+\mathcal{L}[\boldsymbol{u},\eta]=0, \quad x \in \Omega, t \in[0, T], \\
\boldsymbol{u}(x, 0)=\boldsymbol{h}(x), \quad x \in \Omega, \\
\boldsymbol{u}(x, t)=\boldsymbol{g}(x, t), \quad x \in \partial \Omega, t \in[0, T],
\end{array}
\label{nonlinear PDE}
\end{equation}
where $\Omega \subset \mathbb{R}^{d} $ and $t \in[0, T]$ denote the computational domain and time coordinates. $\mathcal{L}$ represents the spatial-temporal differential operator. $\eta$ is the PDE parameter vector. The set of PDEs is subjected to the initial condition $\boldsymbol{h}(x)$ and boundary condition $\boldsymbol{g}(x,t)$, which is defined on the boundary $\partial \Omega$ of the domain.
In this paper, we propose an innovative physics-informed graph neural network (RBF-MGN) to seek a solution function $\boldsymbol{u}(x,t)$ under the IC and BC. In the framework, we generate an unstructured grid and regard the mesh as a graph on which the GNN is trained. The GNN takes the solution at time $t$ as input and predicts the solution at the next time step $t + \Delta t$; the IC serves as the input to the model at the very first step. The loss function is computed with the RBF-FD method, combining the BC and the model output. The following subsections describe each component of the proposed method in detail.
\begin{figure}
\caption{An example of a GNN, given the input/output graph $G = (V, E)$, where $V$ is the set of vertices and $E$ the set of edges.}
\label{Graph}
\end{figure}
\subsection{Graph neural networks}
As an emerging technology for flexible processing of unstructured data in deep learning, Graph neural networks (GNNs) have been widely used to solve various scientific machine learning problems.
A mesh with an unstructured grid and the corresponding nodal PDE solutions can be naturally described as a graph. The task of the GNN is to approximate the solution of Eq.~\eqref{nonlinear PDE} at time $t+\Delta t$ given the current solution.
\subsubsection{Graphs}
First, we generate an irregular mesh and express it as a graph $G = (V, E)$ with nodes $V$ connected by edges $E$. Each node $i \in V$ is defined by its feature vector, and adjacent nodes are connected via edges. In the framework, each node of the input graph carries the current PDE solution at time $t$, and the output graph carries the solution at time $t+\Delta t$, as shown in Fig.~\ref{Graph}.
\subsubsection{MeshGraphNets}
We use MeshGraphNets \cite{Self-Adaptive}, a graph neural network model with an Encoder-Processor-Decoder architecture, to model the discretized solution. The MeshGraphNets architecture is shown in Fig.~\ref{MeshGraphNets} and consists of three main parts.
First, the encoder encodes features into graph nodes and edges; it has two hidden layers with the ReLU activation function, each with 128 hidden units. Second, the processor predicts the latent feature variation of the nodes via a Graph Network (GN) that updates the graph state, including the attributes of the nodes, edges, and the whole graph. Each block contains a separate set of network parameters and is applied in sequence to the output of the previous block, updating the edges $e_{ij}$ and then the nodes $v_{i}$. Finally, the decoder decodes the node features with an MLP of the same architecture as the encoder, acting as a correction of the input to create the final predictions. During training, the losses are computed to update the network parameters $\boldsymbol{\Theta}$.
\begin{figure}
\caption{Diagram of MeshGraphNets.}
\label{MeshGraphNets}
\end{figure}
\begin{figure}
\caption{The corresponding local region of (a) The classical difference. (b) The RBF-FD method.}
\label{RBF-FD}
\end{figure}
\subsection{PDE-informed loss function}
The GNN training requires the known differential equations to be enforced through a loss function built from PDE residuals. The above PDE contains several derivative operators, such as $\boldsymbol{u}_{t}$, $\nabla u$, and $\Delta u$. In continuous PINNs, the spatial and temporal gradient operators of the PDEs are computed using AD. However, this approach is essentially a soft constraint: the regularization term in the loss function can only guarantee that the predicted results do not severely violate the constraint in an average sense, and it may still produce physically inconsistent results. A hard constraint approach is therefore needed to ensure that the PDE is strictly satisfied in the computational domain. In this work, a hard constraint approach based on the radial basis function finite difference (RBF-FD) technique is used to embed domain knowledge into the neural network.
\begin{table*}[!t]
\begin{center}
\caption{\label{tab1} Common Radial Basis Functions.}
\begin{tabular}{cccccccccccc} \toprule
RBF & $\phi(r)$ \\ \hline
Gaussian (GA) & $exp^{-(\varepsilon r)^2}$ \\
inverse multiquadric (IMQ) & $\frac{1}{\sqrt{1+(\varepsilon r)^2}}$ \\
3rd order polyharmonic spline (ph3) & $(\varepsilon r)^3$\\
\bottomrule
\end{tabular}
\end{center}
\end{table*}
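The radial basis functions of Table \ref{tab1} translate directly into code; the following short Python definitions (ours, with \texttt{eps} denoting the shape parameter $\varepsilon$) illustrate them:
\begin{verbatim}
import numpy as np

def rbf_ga(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)               # Gaussian (GA)

def rbf_imq(r, eps=1.0):
    return 1.0 / np.sqrt(1.0 + (eps * r) ** 2)   # inverse multiquadric (IMQ)

def rbf_ph3(r, eps=1.0):
    return (eps * r) ** 3                        # 3rd-order polyharmonic spline (ph3)
\end{verbatim}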
\subsubsection{RBF-FD}
Tolstykh et al. \cite{Tolstykh} first discussed the radial basis function finite difference method (RBF-FD), which applies radial basis functions to finite differences. The RBF-FD method belongs to the class of meshless methods, which makes it easy to handle problems on irregular domains and scattered node layouts. It requires neither mesh generation nor numerical integration, which saves a lot of computing time. The RBF-FD method has been applied to numerical solutions in many scientific and engineering fields, for example, incompressible flow and heat conduction problems \cite{RBF-FD}.
The RBF-FD method spatially discretizes differential operators on scattered nodes. Assume there are $n$ collocation points $\left\{\mathbf{x}_i\right\}_{i=1}^n$ on the domain $\Omega$, and for each point $\mathbf{x}_i$ we choose the $m$ nearest neighbor points to form the corresponding local region $\left\{\mathbf{x}_i^k\right\}_{k=1}^m=\Omega_i$. The classical difference method approximates the solution $u(\mathbf{x})$ as a polynomial function on a regular local grid, shown in Fig.~\ref{RBF-FD}(a), and represents the derivative of the solution as a weighted sum of the function values at several grid nodes via Taylor expansion. The RBF-FD method approximates the solution $u(\mathbf{x}_i)$ in the local irregular region $\Omega_i$, shown in Fig.~\ref{RBF-FD}(b), as a combination of radial basis functions $\phi\left(x_i, x_{i}^{k}\right)$ and polynomial functions $p(x_i)$.
\begin{equation}
u(x_i) \approx \sum_{k=1}^{m} \lambda_{k} \phi\left(x, x_{i}^{k}\right)+\sum_{k=1}^{q} \mu_{k} p_{k}(x_i),
\label{ux}
\end{equation}
where $\lambda_{k}$ and $\mu_{k}$ are the corresponding combination coefficients and $q$ is the number of polynomial terms, with the constraint conditions
$\sum_{k=1}^{m} \lambda_{k} p_{j}(x_i^k)=0$, $j=1,\ldots,q$. Combining the above equations, we obtain
\begin{equation}
\begin{gathered}
\left[\begin{array}{cc}
\boldsymbol{A} & \boldsymbol{P} \\
\boldsymbol{P}^{\mathrm{T}} & \mathbf{0}
\end{array}\right]\left[\begin{array}{l}
\lambda \\
\mu
\end{array}\right]=\left[\begin{array}{l}
\boldsymbol{u} \\
\mathbf{0}
\end{array}\right]\\
\boldsymbol{A}=\left[\begin{array}{cccc}
\phi\left(x_{1}, x_{1}\right) & \phi\left(x_{2}, x_{1}\right) & \cdots & \phi\left(x_{m}, x_{1}\right) \\
\phi\left(x_{1}, x_{2}\right) & \phi\left(x_{2}, x_{2}\right) & \cdots & \phi\left(x_{m}, x_{2}\right) \\
\vdots & \vdots & \ddots & \vdots \\
\phi\left(x_{1}, x_{m}\right) & \phi\left(x_{2}, x_{m}\right) & \cdots & \phi\left(x_{m}, x_{m}\right)
\end{array}\right] \\
\boldsymbol{P}=\left[\begin{array}{cccc}
p_{1}\left(x_{1}\right) & p_{2}\left(x_{1}\right) & \cdots & p_{q}\left(x_{1}\right) \\
p_{1}\left(x_{2}\right) & p_{2}\left(x_{2}\right) & \cdots & p_{q}\left(x_{2}\right) \\
\vdots & \vdots & \ddots & \vdots \\
p_{1}\left(x_{m}\right) & p_{2}\left(x_{m}\right) & \cdots & p_{q}\left(x_{m}\right)
\end{array}\right] \\
\mu=\left[\begin{array}{llll}
\mu_{1} & \mu_{2} & \cdots & \mu_{q}
\end{array}\right]^{\mathrm{T}} \\
\lambda=\left[\begin{array}{llll}
\lambda_{1} & \lambda_{2} & \cdots & \lambda_{m}
\end{array}\right]^{\mathrm{T}} \\
\boldsymbol{u}=\left[\begin{array}{llll}
u_{1} & u_{2} & \cdots & u_{m}
\end{array}\right]^{\mathrm{T}}
\end{gathered}
\end{equation}
Therefore, the corresponding combination coefficients are as follows:
\begin{equation}
\left[\begin{array}{l}
\lambda \\
\mu
\end{array}\right]=\left[\begin{array}{cc}
\boldsymbol{A} & \boldsymbol{P} \\
\boldsymbol{P}^{\mathrm{T}} & \mathbf{0}
\end{array}\right]^{-1}\left[\begin{array}{l}
\boldsymbol{u} \\
\mathbf{0}
\end{array}\right].
\label{the corresponding combination coefficients}
\end{equation}
Further, consider the differential operator $\mathcal{L}$ on the solution $u(x)$, $\left.\mathcal{L} u(x)\right|_{x=x_{i}}$ at the space point $x_i$ can be approximated as:
\begin{equation}
\left.\mathcal{L} u(x)\right|_{x=x_{i}} \approx \sum_{k=1}^{m} w^{i}_{k} u_{x_{i}^{k}},
\label{operator}
\end{equation}
where $w^{i}_{k}$ are the differentiation weights for the point $x_i$. To determine $w^{i}_{k}$, we substitute Eq.~\eqref{ux} into the above equation:
\begin{equation}
\sum_{k=1}^{m} \lambda_{k} \mathcal{L} \phi\left(x, x_{i}^{k}\right)+\sum_{k=1}^{q} \mu_{k} \mathcal{L} p_{k}(x_i) \approx \sum_{k=1}^{m} w^{i}_{k} u_{x_{i}^{k}},
\end{equation}
Performing matrix decomposition yields,
\begin{equation}
\begin{aligned}
&\left[\begin{array}{ll}
\boldsymbol{b} & \boldsymbol{c}
\end{array}\right]\left[\begin{array}{l}
\lambda \\
\mu
\end{array}\right] =\left[\begin{array}{ll}
w & v
\end{array}\right]\left[\begin{array}{l}
u \\
0
\end{array}\right]\\
\boldsymbol{b} &=\left[\begin{array}{llll}
\mathcal{L} \phi\left(x_{i}, x_{1}\right) & \mathcal{L} \phi\left(x_{i}, x_{2}\right) & \cdots & \mathcal{L} \phi\left(x_{i}, x_{m}\right)
\end{array}\right] \\
\boldsymbol{c} &=\left[\begin{array}{llll}
\mathcal{L} p_{1}\left(x_{i}\right) & \mathcal{L} p_{2}\left(x_{i}\right) & \cdots & \mathcal{L} p_{q}\left(x_{i}\right)
\end{array}\right] \\
\boldsymbol{w} &=\left[\begin{array}{llll}
w_{1} & w_{2} & \cdots & w_{m}
\end{array}\right] \\
\boldsymbol{v} &=\left[\begin{array}{llll}
v_{1} & v_{2} & \cdots & v_{q}
\end{array}\right]
\end{aligned}
\end{equation}
Substitute Eq. \eqref{the corresponding combination coefficients} into the above equation, we obtain all $w^{i}_{k}$:
\begin{equation}
\left[\begin{array}{l}
\boldsymbol{w}^{\mathrm{T}} \\
\boldsymbol{v}^{\mathrm{T}}
\end{array}\right]=\left[\begin{array}{cc}
\boldsymbol{A} & \boldsymbol{P} \\
\boldsymbol{P}^{\mathrm{T}} & \mathbf{0}
\end{array}\right]^{-1}\left[\begin{array}{l}
\boldsymbol{b}^{\mathrm{T}} \\
\boldsymbol{c}^{\mathrm{T}}
\end{array}\right].
\end{equation}
and substituting into Eq.~\eqref{operator} gives the approximate solution of $\mathcal{L}u(x)$ at the point $x_i$. In this way, the approximation of $\mathcal{L}u(x)$ can be obtained at every point in the irregular computational domain.
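As an illustration, a minimal Python sketch (ours) of this weight computation for the Laplace operator in 2D, using the ph3 RBF $\phi(r)=r^3$ (for which $\mathcal{L}\phi = \Delta\phi = 9r$ in two dimensions) together with a quadratic polynomial tail, reads:
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def laplacian_weights(points, i, m=10):
    # RBF-FD weights w with (Laplacian u)(x_i) ~ sum_k w_k u(x_i^k).
    tree = cKDTree(points)
    _, nbrs = tree.query(points[i], k=m)          # m-nearest-neighbor stencil
    Y = points[nbrs] - points[i]                  # stencil in local coordinates
    r = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    A = r ** 3                                    # ph3 RBF phi(|y_j - y_k|)
    P = np.column_stack([np.ones(m), Y[:, 0], Y[:, 1],
                         Y[:, 0] ** 2, Y[:, 0] * Y[:, 1], Y[:, 1] ** 2])
    b = 9.0 * np.linalg.norm(Y, axis=-1)          # Laplacian of r^3, evaluated at x_i
    c = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 2.0])  # Laplacian of {1,x,y,x^2,xy,y^2}
    q = P.shape[1]
    K = np.block([[A, P], [P.T, np.zeros((q, q))]])
    wv = np.linalg.solve(K, np.concatenate([b, c]))
    return nbrs, wv[:m]                           # stencil indices and weights
\end{verbatim}
The same routine yields the weights of any other linear operator by replacing \texttt{b} and \texttt{c} with the operator applied to the RBF and to the polynomial basis, respectively.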
\subsubsection{PDE residuals}
In this work, given $u^l(x)=u(x, t^{l})$ and the network output $u^{l+1}(x)=u(x, t^{l+1})$, we construct the loss function with the RBF-FD method. Equation \eqref{nonlinear PDE} can represent a wide range of time-dependent PDEs, such as the Poisson equation and the wave equation.
Here, we give a heat transfer example $u_t = \alpha \Delta u$ to demonstrate how to define PDE residuals using RBF-FD. First, we sample $n_c$ collocation points on the domain $\Omega$ and $n_b$ boundary nodes, where $n = n_c + n_b$, and we denote by $\tau = t^{l+1}-t^{l}$ the time step.
The operator $u_t$ can be approximated with a backward difference in time:
\begin{equation}
u_t \approx \frac{u^{l+1}(x)-u^{l}(x)}{\tau}
\end{equation}
According to equation \eqref{operator}, the Laplace item $\Delta u$ at the space point $x_i$ can be approximated as:
\begin{equation}
\Delta u^l(x) \approx \sum_{k=1}^{m} w^{i}_{k} u(x_{i}^{k})
\label{Delta}
\end{equation}
Therefore, the heat equation is discretized as the following formulation,
\begin{equation}
\alpha \sum_{k=1}^m \omega_k^i u^l\left(\mathbf{x}_i^k\right)+\frac{1}{\tau} u^l\left(\mathbf{x}_i\right)=\frac{1}{\tau} u^{l+1}\left(\mathbf{x}_i\right), \quad i=1,2, \ldots, n_c
\end{equation}
We consider the essential boundary conditions $u^l(\mathbf{x}_i)=h(\mathbf{x}_i, t^l), \quad i=n_c +1, n_c+2, \ldots, n$ together with the discretized PDE, and transform the problem into the following linear algebraic equations:
\begin{equation}
\begin{aligned}
&\mathrm{A} U^l=\frac{1}{\tau} U^{l+1}+H^l\\
&a_{i j}= \begin{cases}\alpha \omega_{k}^{i} & i \neq j, \text { for } i=1,2, \ldots, n_{c}, \\ \alpha \omega_{k}^{i}+\frac{1}{\tau} & i=j, \text { for } i=1,2, \ldots, n_{c}, \\ 1 & i=j, \text { for } i=n_{c}+1, n_{c}+2, \ldots, n\end{cases}\\
&U^{l}=\left[\begin{array}{lllll}
u^{l}\left(\mathbf{x}_{1}\right) & u^{l}\left(\mathbf{x}_{2}\right) & \ldots & u^{l}\left(\mathbf{x}_{n}\right)
\end{array}\right]^{T}, \\
&H^{l}=\left[\begin{array}{lllllll}
0 & \ldots & 0 & h^{l}\left(\mathbf{x}_{n_{c}+1}\right) & h^{l}\left(\mathbf{x}_{n_{c}+2}\right) & \ldots & h^{l}\left(\mathbf{x}_{n}\right)
\end{array}\right]^{T},\\
&U^{l+1}=\left[\begin{array}{lllll}
u^{l+1}\left(\mathbf{x}_{1}\right) & u^{l+1}\left(\mathbf{x}_{2}\right) & \ldots & u^{l+1}\left(\mathbf{x}_{n_c}\right)
\end{array}\right]^{T},
\end{aligned}
\label{AMatrix}
\end{equation}
where $U^{l}$ is the solution vector at time $t^l$ and $U^{l+1}$ collects the interior solution at $t^{l+1}$. $\mathrm{A}$ is the constraint matrix that encodes the physical constraints, and $H^l$ is the boundary vector.
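A minimal sketch (ours) of assembling $\mathrm{A}$ and $H^l$, assuming the interior collocation nodes are stored first and reusing the \texttt{laplacian\_weights} helper sketched above, reads:
\begin{verbatim}
import numpy as np

def assemble_heat_system(points, n_c, alpha, tau, h_bdry, m=10):
    # Builds A and H^l such that A U^l = (1/tau) [U^{l+1}; 0] + H^l,
    # with interior nodes 0..n_c-1 and boundary nodes n_c..n-1.
    n = points.shape[0]
    A = np.zeros((n, n))
    H = np.zeros(n)
    for i in range(n_c):                 # interior rows: alpha*Laplacian + 1/tau
        nbrs, w = laplacian_weights(points, i, m=m)
        A[i, nbrs] += alpha * w
        A[i, i] += 1.0 / tau
    A[n_c:, n_c:] = np.eye(n - n_c)      # boundary rows enforce u^l = h^l
    H[n_c:] = h_bdry                     # Dirichlet values at time t^l
    return A, H
\end{verbatim}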
The solution $\hat{U}^{l+1}$ will be learned by GNN as the output graph $\hat{U}^{l+1}(\boldsymbol{\Theta})$. The PDE residual is as follows:
\begin{equation}
\boldsymbol{R}_u\left(\hat{U}^{l+1}(\boldsymbol{\Theta}), U^l ; \boldsymbol{\alpha}\right) = \mathrm{A} U^l-\frac{1}{\tau} \hat{U}^{l+1}-H^l,
\label{PDE residual}
\end{equation}
The PDE-informed loss function for the GNN has the following form, the essential boundary condition will be satisfied automatically:
\begin{equation}
\begin{aligned}
\mathcal{L}_{\mathrm{f}}(\boldsymbol{\Theta})=\left\|\boldsymbol{R}_u\left(\hat{U}^{l+1}(\boldsymbol{\Theta}), U^l ; \boldsymbol{\alpha}\right)\right\|_2 .
\label{loss}
\end{aligned}
\end{equation}
Then we use Adam to drive the loss $\mathcal{L}_{\mathrm{f}}(\boldsymbol{\Theta})$ as close to zero as possible by adjusting the network weights $\boldsymbol{\Theta}$. Solving the heat transfer problem with RBF-MGN is summarized in Algorithm~\ref{alg:RBF-MGN}.
\begin{algorithm}[htb]
\caption{Solving heat transfer problem with RBF-MGN}
\label{alg:RBF-MGN}
\begin{algorithmic}
\State \textbf{Step\hspace{0.5em}1}:Generate an unstructured grid and regard the mesh as a graph $G = (V, E)$.
\State \textbf{Step\hspace{0.5em}2}:Construct MeshGraphNets to put the matrix $U^{l}$ and at time $t$ to obtain prediction matrix $\hat{U}^{l+1}$ at next time step $t + \Delta t$. The IC is first used as an input to the model at the very beginning.
\State \textbf{Step\hspace{0.5em}3}:Compute the constraint matrix $\mathrm{A}$ (Eq. \eqref{AMatrix}) based on the RBF-FD. Obtain the boundary matrix $H^l$ with BC.
\State \textbf{Step\hspace{0.5em}4}:Formulate the the PDE residual (Eq.\eqref{PDE residual}).
\State \textbf{Step\hspace{0.5em}5}:Solve the optimization problem (Eq.\eqref{loss}) to obtain the next state solution.
\end{algorithmic}
\end{algorithm}
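To make the training step concrete, the following PyTorch sketch (ours) minimizes the residual of Eq.~\eqref{PDE residual}; a small fully connected surrogate stands in for the MeshGraphNets model of Section 2.2, and all names are illustrative only:
\begin{verbatim}
import torch

class NodalSurrogate(torch.nn.Module):
    # Placeholder for MeshGraphNets: maps the nodal solution at t^l
    # to the predicted interior solution at t^{l+1}.
    def __init__(self, n_nodes, n_interior, hidden=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_nodes, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, n_interior))

    def forward(self, u_l):
        return self.net(u_l)

def train_step(model, optimizer, A, H, u_l, tau):
    # One Adam step on || A U^l - (1/tau) U_hat^{l+1} - H^l ||_2.
    optimizer.zero_grad()
    u_next = model(u_l)                               # interior prediction
    pad = torch.zeros(A.shape[0] - u_next.shape[0])   # boundary rows carry no U^{l+1}
    residual = A @ u_l - torch.cat([u_next, pad]) / tau - H
    loss = torch.linalg.norm(residual)
    loss.backward()
    optimizer.step()
    return loss.item()

# Typical usage, with A, H, u_l converted to float32 torch tensors:
# model = NodalSurrogate(n_nodes=n, n_interior=n_c)
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for _ in range(200):
#     loss = train_step(model, opt, A, H, u_l, tau)
\end{verbatim}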
\section{Results}
In this section, we present several numerical experiments to show the ability of the proposed method to solve PDEs, especially on complex domains, including the two-dimensional Poisson's equation and the two-dimensional wave equation. We also study the effect of different time steps $\tau$, PDE parameters, and several types of RBFs on the learning performance of the proposed method. We further study the performance of RBF-MGN with different numbers of collocation points $n$ and nearest neighbor nodes $m$. All numerical experiments are based on PyTorch. MLPs with two hidden layers of 64 neurons each are employed in the encoder, processor, and decoder in all experiments. The activation function is ReLU. Unless otherwise specified, the optimizer is Adam and the learning rate is set to 0.001. The remaining detailed configurations are described in the respective experiments. To test the accuracy, two error metrics, the absolute error and the relative L2 error, are used; they are defined as follows:
\begin{equation}
\begin{aligned}
\text {absolute error }=\left|\hat {u}-u\right|,\\
\text {relative L2 error }=\frac{\sqrt{\sum_{i}\left|\hat {u}(\mathbf{x}_i)-u(\mathbf{x}_i)\right|^{2}}}{\sqrt{\sum_{i}\left|u(\mathbf{x}_i)\right|^{2}}}.
\label{L2error}
\end{aligned}
\end{equation}
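In a Python implementation these metrics amount to the following (a sketch of ours, with \texttt{u\_hat} the prediction and \texttt{u} the reference solution on the collocation points):
\begin{verbatim}
import numpy as np

def error_metrics(u_hat, u):
    abs_err = np.abs(u_hat - u)                          # pointwise absolute error
    rel_l2 = np.linalg.norm(u_hat - u) / np.linalg.norm(u)
    return abs_err, rel_l2
\end{verbatim}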
\subsection{Two-dimensional Poisson’s equation}
The first experiment considers a simple two-dimensional Poisson’s equation:
\begin{equation}
\begin{aligned}
u_{t}+\gamma \Delta u+f(x, y, t)=0, \quad x \in [0, 1], y \in[0, 1].
\end{aligned}
\end{equation}
First, assuming that $f(x, y, t)=0$, the Laplace operator $\Delta u$ approximated using Eq.~\eqref{Delta} and using AD is shown in Fig.~\ref{Square-delta}. In the computation, we choose $n=167$ unstructured collocation points and $m = 10$ nearest neighbor nodes. We use the ph3 RBF with shape parameter $\varepsilon=1$ to compute the weights, and the order of the added polynomial is 2. It can easily be seen that RBF-FD is an accurate substitute for AD for approximating the Laplace operator, which in turn can be used to define the PDE residuals.
Assuming Dirichlet boundary conditions and $f(x, y, t)=-3-2\gamma(x+y)$, we take $\gamma=1$ to obtain the analytical solution $u(x, y, t)=xy^2+yx^2+3t$. We represent the computational domain as a graph $G=(V, E)$ via a 2D Delaunay triangulation constructed with the Bowyer-Watson algorithm, shown in Fig.~\ref{Square}. $V$ is the set of points in the two-dimensional domain, including boundary nodes (red triangles) and interior nodes (blue pentagons); each edge $e$ is the closed line segment joining two such points, and $E$ is the set of all edges.
First, we take the IC to be the exact solution at time $t=0$ and train the network to infer the solution at $t \in[0, T], T=1$ with the time step $\tau = 0.01s$. For the RBF-FD method, we use the ph3 RBF, and, analogous to Eq.~\eqref{PDE residual}, the PDE residual is $\mathrm{A} U^l-\frac{1}{\tau} \hat{U}^{l+1}-H^l + F^l$, where $F^l$ is the matrix associated with $f(x, y, t)$. As shown in Fig.~\ref{Square-loss}, the residual based on the RBF-FD definition converges quickly, guaranteeing that the prediction relative errors are all less than 0.001. The predicted and exact solutions at $t = 1.02, 2.0$ and the absolute errors are compared in Fig.~\ref{Square-error}.
In addition, we also consider the inverse problem of recovering the initial temperature from the final temperature at time $T$. We employ the IMQ RBF to define the loss function. The exact and predicted solutions at the initial moment are presented in Fig.~\ref{Square-inverse} with $\tau = 0.1$, and the two figures are almost identical. Furthermore, RBF-MGN achieves excellent accuracy for different values of $\tau$, as shown in Table~\ref{tab2} and Fig.~\ref{Square-inverse}.
\begin{figure}
\caption{The Laplace operator $\Delta u$ approximated using RBF-FD (Eq.~\eqref{Delta}).}
\label{Square-delta}
\end{figure}
\begin{figure}
\caption{The graph $G = (V, E)$ with nodes $V$ connected by edges $E$. $V$ is a set of 167 points in a two-dimensional domain, consisting of boundary nodes (red triangles) and interior nodes (blue pentagons).}
\label{Square}
\end{figure}
\begin{figure}
\caption{Two-dimensional Poisson's equation: the PDE residual (a) and the train error (b) on different spatial areas over the training iterations.}
\label{Square-loss}
\end{figure}
\begin{figure}
\caption{The results of the two-dimensional Poisson's equation at different time steps. The predicted results are compared with the exact solutions and the difference is also presented.}
\label{Square-error}
\end{figure}
\begin{figure}
\caption{Two-dimensional Poisson's equation: the initial temperature reconstructed by RBF-MGN.}
\label{Square-inverse}
\end{figure}
\begin{table*}[!t]
\begin{center}
\caption{\label{tab2}Two-dimensional Poisson’s equation: The results of RBF-MGN with different $\tau$.}
\begin{tabular}{cccc} \toprule
$\tau$ & final $T$ & max absolute error & relative L2 error \\ \midrule
{0.5} & {0.5} & {3.03e-02} & {1.94e-02}\\
{0.25} & {0.25} & {6.14e-03} & {5.24e-03}\\
{0.1} & {0.1} & {9.90e-04} & {7.61e-04}\\
{0.01} & {0.01} & {8.11e-04} & {6.43e-04}\\
\bottomrule
\end{tabular}
\end{center}
\end{table*}
\subsection{Two-dimensional Poisson’s equation on amoeba domain}
In this example, we consider the heat transfer problem on the amoeba domain:
\begin{equation}
\begin{aligned}
u_t &= \lambda \Delta u, \quad(x, y) \in \Omega,\\
\partial \Omega&=\{(x, y) \mid x=\rho \cos \theta+1, \quad y=\rho \sin \theta+1, \quad \theta \in[0,2 \pi]\},
\end{aligned}
\end{equation}
where $\rho=\left(\exp (\sin \theta) \sin ^2 2 \theta+\exp (\cos \theta) \cos ^2 2 \theta\right) / 2$. We use an analytical solution of the form $u(x,y,t)=\lambda \exp(-t)(\cos x+\cos y)$, and the initial condition is given as $u=\lambda (\cos x+\cos y)$.
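For reference, the amoeba boundary defined above can be sampled with a short script; the number of boundary points (64, matching $n_b$ below) is only illustrative.
\begin{verbatim}
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
rho = (np.exp(np.sin(theta)) * np.sin(2 * theta) ** 2
       + np.exp(np.cos(theta)) * np.cos(2 * theta) ** 2) / 2.0
boundary = np.column_stack([rho * np.cos(theta) + 1.0,
                            rho * np.sin(theta) + 1.0])
print(boundary.shape)   # (64, 2) boundary nodes of the amoeba domain
\end{verbatim}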
The irregular region is triangulated as shown in Fig.~\ref{amoeba} to form the graph $G=(V, E)$, which includes $n_c = 195$ interior collocation points and $n_b = 64$ boundary nodes. This is far fewer than the number of collocation points required by a typical pointwise PINN. We aim to obtain the solution at $t \in[0, T]$, $T=2$, with time step $\tau = 0.01$\,s using the RBF-MGN method. We construct data sets over the time range 0 to 1 for training and test the ability of the model to infer solutions for $t \in[1, 2]$. For the RBF-FD method, we use the ph3 RBF and sample $m = 15$ nearest neighbor nodes.
We set $\lambda = 1.0$, the batch size to 5, and the fixed number of iterations to 200 Adam steps. As shown in Fig.~\ref{amoeba-loss}, the residuals defined according to Eq.~\eqref{PDE residual} are easy to minimize, ensuring that the model predictions closely conform to the physical constraints; the validation loss then reaches $10^{-5}$. RBF-MGN accurately recovers the temperature at $t = 1.99$, as shown in Fig.~\ref{amoeba-error}.
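A compact sketch of this training setup is given below, reusing the \texttt{pde\_residual\_loss} sketch above. A small MLP and synthetic data stand in for the MGN model and the graph data, and the batching over time levels is omitted; everything named here is a placeholder, not the authors' code.
\begin{verbatim}
import torch

n_nodes = 259                                  # illustrative node count
model = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 1))
coords = torch.rand(n_nodes, 2)                # node coordinates (placeholder)
u_prev = torch.rand(n_nodes)                   # solution at the current level
h = torch.zeros(n_nodes); f = torch.zeros(n_nodes)
A = torch.eye(n_nodes).to_sparse()             # stand-in for the RBF-FD operator
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):                        # fixed number of Adam steps
    optimizer.zero_grad()
    u_next = model(coords).squeeze(1)          # prediction at the next time level
    loss = pde_residual_loss(A, u_next, u_prev, h, f, tau=0.01)
    loss.backward()
    optimizer.step()
\end{verbatim}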
In addition, the errors at different time steps of the heat transfer problem on the amoeba domain with different $\lambda$ are shown in Table~\ref{tab3}, where the time step is fixed to $\tau = 0.01$. As can be seen in Fig.~\ref{amoeba-weight}, the errors at steps 1--10 are on the order of $10^{-6}$, which indicates that our approach fits well even when extrapolating in time.
\begin{figure}
\caption{The graph $G = (V, E)$ with nodes $V$ connected by edges $E$. $V$ is a set of points in a two-dimensional domain, consisting of $n_b = 64$ boundary nodes (red triangles) and $n_c = 195$ interior nodes (blue pentagons).}
\label{amoeba}
\end{figure}
\begin{figure}
\caption{Two-dimensional Poisson's equation on the amoeba domain: the PDE residual (a) and the test error (b) on different spatial areas over the training iterations.}
\label{amoeba-loss}
\end{figure}
\begin{figure}
\caption{The results of the two-dimensional Poisson's equation on the amoeba domain at $t = 1.99$. The predicted results are compared with the exact solutions and the difference is also presented.}
\label{amoeba-error}
\end{figure}
\begin{table} \fontsize{8}{8}\centering
\caption{The errors at different time steps of the heat transfer problem on the amoeba domain with different $\lambda$.}
\begin{tabular}{ | c | c | c | c | c | c | }\hline
\diagbox{$\lambda$}{steps} & 1 & 10 & 50 & 100 & 200 \\\hline\hline
1 & 1.6e-6 & 6.3e-6 & 3.4e-5 & 4.8e-5 & 1.6e-5 \\\hline\hline
2 & 3.1e-6 & 6.6e-6 & 9.4e-6 & 5.3e-5 & 2.1e-5 \\\hline\hline
3 & 4.7e-6 & 7.3e-6 & 4.8e-5 & 5.2e-5 & 2.3e-5 \\\hline
\end{tabular}
\label{tab3}\end{table}
\begin{figure}
\caption{The errors at different time steps of the heat transfer problem on the amoeba domain with different $\lambda$.}
\label{amoeba-weight}
\end{figure}
\subsection{Two-dimensional Poisson’s equation on butterfly domain}
In this experiment, we again consider the problem of the previous example, but with the analytical solution of the heat conduction equation given by:
\begin{equation}
\begin{aligned}
u(x, y, t)=\exp \left(-\frac{\pi^2 t}{4}\right)\left[y \sin \left(\frac{\pi x}{2}-\frac{\pi}{4}\right)+x \sin \left(\frac{\pi y}{2}-\frac{\pi}{4}\right)\right].
\end{aligned}
\end{equation}
The initial and boundary conditions are obtained from the analytical solution. The butterfly computational domain is defined as
$\Omega=\{(x, y) \mid x=0.55 \rho(\theta) \cos (\theta),\; y=0.75 \rho(\theta) \sin (\theta)\}$, where $\rho(\theta)=1+\cos (\theta) \sin (4 \theta)$ and $0\leqslant \theta\leqslant 2\pi$.
The Delaunay algorithm is used to discretize the irregular computational domain; the resulting graph, with its collocation points and boundary nodes, is shown in Fig.~\ref{butterfly}. We compare the predicted solution of RBF-MGN with the analytical reference at $t \in[0, T]$, $T=2$, with time step $\tau = 0.01$\,s. For the RBF-FD method, which approximates the differential operators on the locally irregular point sets, we use the ph3 RBF and sample $m = 15$ nearest neighbor nodes.
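As an illustration, a Delaunay triangulation produced with \texttt{scipy.spatial.Delaunay} can be converted to the edge set of $G=(V,E)$ as follows; the random node array is a placeholder for the sampled domain nodes.
\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

nodes = np.random.rand(200, 2)          # placeholder for the sampled domain nodes
tri = Delaunay(nodes)
edges = set()
for simplex in tri.simplices:           # each triangle contributes three edges
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))
edge_index = np.array(sorted(edges)).T  # (2, |E|) array, the usual GNN edge format
print(edge_index.shape)
\end{verbatim}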
Using the ph3 RBF with $\varepsilon = 1.0$, the RBF-MGN forward solution is almost identical to the analytical reference, and the relative prediction error is only 0.00225, as shown in Fig.~\ref{butterfly-loss}. The test case in Fig.~\ref{butterfly-error} demonstrates that the graph-based discrete model easily handles irregular domains with unstructured meshes, and that the RBF-FD-based PDE residual ensures the boundary condition is also satisfied to high accuracy. In addition, we repeat the experiment with the Gaussian RBF and the ph3 RBF for different shape parameters $\varepsilon$; the results are shown in Table~\ref{tab4}. It can easily be seen from Fig.~\ref{butterfly-weight} that different RBF parameters affect the solution accuracy, but all give good results.
\begin{figure}
\caption{The Delaunay algorithm is used to discretize the irregular computational domain. The graph $G = (V, E)$ has boundary nodes (red triangles) and interior nodes (blue pentagons).}
\label{butterfly}
\end{figure}
\begin{figure}
\caption{Two-dimensional Poisson's equation on the butterfly domain: the PDE residual (a) and the test error (b) on different spatial areas over the training iterations.}
\label{butterfly-loss}
\end{figure}
\begin{figure}
\caption{The results of the two-dimensional Poisson's equation on the butterfly domain at $t = 1.99$. The predicted results are compared with the exact solutions and the difference is also presented.}
\label{butterfly-error}
\end{figure}
\begin{table} \fontsize{8}{8}\centering
\caption{The errors of the heat transfer problem on the butterfly domain with different $\varepsilon$.}
\begin{tabular}{ | c | c | c | c | c | }\hline
\diagbox{RBF}{$\varepsilon$} & 0.1 & 0.5 & 1.0 & 2.0 \\\hline\hline
ph3 & 5.7e-5 & 4.4e-5 & 7.7e-6 & 4.4e-6 \\\hline\hline
GA & 6.1e-5 & 4.9e-5 & 8.3e-6 & 5.1e-6 \\\hline
\end{tabular}
\label{tab4}\end{table}
\begin{figure}
\caption{The errors of the heat transfer problem on the butterfly domain with different $\varepsilon$.}
\label{butterfly-weight}
\end{figure}
\subsection{Two-dimensional wave equation on L-shaped domain}
In this example, we consider the two-dimensional wave equation on an L-shaped domain with free boundary conditions:
\begin{equation}
\begin{aligned}
\frac{\partial^2 u}{\partial t^2}&=D\left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}\right), \quad t \in[0, T], \\ (x, y) &\in \Omega = \operatorname{Polygon}\left[(0, 0),\, (2, 0),\, (2, 1),\, (1, 2),\, (0, 2),\, (1, 1)\right].
\end{aligned}
\end{equation}
We prescribe the initial displacements at the interior and boundary nodes, the initial velocities are zero, and the PDE parameter is $D=10^{-6}$. The irregular region is triangulated as shown in Fig.~\ref{L} to form the graph $G=(V, E)$; 405 observation points are randomly sampled in the domain, including 84 boundary nodes. We aim to obtain the solution at $t \in[0, T]$ with time step $\tau = 0.1$\,s using the proposed method. The Adam optimizer is applied to update the neural network parameters during the iterations. We again use the ph3 RBF and sample $m = 25$ nearest neighbor nodes. RBF-MGN accurately recovers the displacement field, as shown in Fig.~\ref{L-error}, where we present the predicted solutions of RBF-MGN at the time instants $T = 0.50, 1.00, 2.00, 3.00$.
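The text does not spell out how the second-order time derivative is discretized; one natural choice, analogous to Eq.~\eqref{PDE residual}, is a central difference in time, which would give a residual of the form (this is an assumption on our part, not the authors' stated scheme)
\begin{equation*}
\frac{\hat{U}^{l+1}-2 U^{l}+U^{l-1}}{\tau^{2}}-D\, \mathrm{A}\, U^{l},
\end{equation*}
where $\mathrm{A}$ denotes the assembled RBF-FD approximation of the Laplacian.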
To further scrutinize the performance of the proposed method, we perform a systematic study of the size of the observation data set $n$. First, 100, 200, 300, and 400 points in the computational domain are randomly selected to build the graph. Table~\ref{tab5} shows the results at different time steps, where the number of nearest neighbor nodes is fixed to $m = 25$. Second, setting $n = 400$, we analyze the performance of RBF-MGN for different numbers of nearest neighbor nodes $m$ in Table~\ref{tab6}. As shown in Fig.~\ref{L-pa}, RBF-MGN is able to achieve an accurate solution even when trained with the smallest set of collocation points. RBF-MGN is also insensitive to the parameter $m$ (10, 15, 20, 25), which indicates that the approximation capability of the neural network largely offsets the effect of this parameter on the RBF-FD approximation of the differential operator. As shown in Fig.~\ref{L-weight}, RBF-MGN achieves acceptably small errors in all cases, especially with $n=400$, $m = 25$.
\begin{figure}
\caption{The Delaunay algorithm is used to discretize the L-shaped domain. The graph $G = (V, E)$ has boundary nodes (red triangles) and interior nodes (blue pentagons).}
\label{L}
\end{figure}
\begin{figure}
\caption{The results of the two-dimensional wave equation on the L-shaped domain at different time steps.}
\label{L-error}
\end{figure}
\begin{table} \fontsize{8}{8}\centering
\caption{The relative errors ($\%$) at different time steps of the two-dimensional wave equation on the L-shaped domain for different numbers of collocation points $n$.}
\begin{tabular}{ | c | c | c | c | c | c | }\hline
\diagbox{$n$}{steps} & 1 & 5 & 10 & 20 & 30 \\\hline\hline
100 & 0.023 & 0.022 & 0.031 & 0.043 & 0.034 \\\hline\hline
200 & 0.021 & 0.011 & 0.015 & 0.0084 & 0.0091 \\\hline\hline
300 & 0.0078 & 0.0079 & 0.0068 & 0.0077 & 0.0065 \\\hline\hline
400 & 0.0054 & 0.0037 & 0.0045 & 0.0064 & 0.0041 \\\hline
\end{tabular}
\label{tab5}\end{table}
\begin{table} \fontsize{8}{8}\centering
\caption{The relative errors ($\%$) at different time steps of the two-dimensional wave equation on the L-shaped domain for different numbers of nearest neighbor nodes $m$.}
\begin{tabular}{ | c | c | c | c | c | c | }\hline
\diagbox{$m$}{steps} & 1 & 5 & 10 & 20 & 30 \\\hline\hline
10 & 0.0074 & 0.0089 & 0.0075 & 0.0065 & 0.0068 \\\hline\hline
15 & 0.0044 & 0.0076 & 0.0079 & 0.0084 & 0.0055 \\\hline\hline
20 & 0.0068 & 0.0057 & 0.0054 & 0.0070 & 0.0061 \\\hline\hline
25 & 0.0054 & 0.0037 & 0.0045 & 0.0064 & 0.0041 \\\hline
\end{tabular}
\label{tab6}\end{table}
\begin{figure}
\caption{Two-dimensional wave equation on L-shaped domain: Boxplot of the relative errors ($\%$) with different numbers of collocation points (Left) and nearest neighbor nodes (Right).}
\label{L-pa}
\end{figure}
\begin{figure}
\caption{The errors of the two-dimensional wave equation on the L-shaped domain with different parameters $n$ and $m$.}
\label{L-weight}
\end{figure}
\section{Conclusions}
This paper proposes a physics-informed framework (RBF-MGN) based on GNNs and RBF-FD to solve spatio-temporal PDEs. GNNs and RBF-FD are introduced into physics-informed learning to better handle irregular domains with unstructured meshes. Combined with the boundary conditions, a high-accuracy finite-difference-type discretization of the differential equations is constructed to guide model training. The numerical results for several Poisson-type equations on complex domains have shown the effectiveness of the proposed method. Furthermore, we also tested the robustness of RBF-MGN with different time steps, PDE parameters, numbers of collocation points, and several types of RBFs.
It should be noted that the loss functions constructed from RBF-FD exhibit fluctuations; in future work we should investigate the reasons for this phenomenon and effectively reduce the gradient fluctuations of these loss functions.
\end{document}
|
\betaegin{document}
\betaegin{abstract}
In this note we establish some rigidity and stability results for Caffarelli's log-concave perturbation theorem. As an application we show that if a \(1\)-log-concave measure has almost the same Poincar\'e constant as the Gaussian measure, then it almost splits off a Gaussian factor.
\varepsilonnd{abstract}
\deltadicatory{To Nicola Fusco, for his 60th birthday, con affetto e ammirazione.}
\maketitle
\sigmaection{Introduction}
Let $\gammaamma_n$ denote the centered Gaussian measure in \(\mathbb R^n\), i.e. \(\gammaamma_n=(2\partiali)^{-n/2}e^{-|x|^2/2}dx\), and let $\mu$ be a probability measure
on $\mathbb R^n$.
By a classical theorem of Brenier \cite{Br}, there exists a convex function $\varphi:\mathbb R^n\to \mathbb R$ such that $T=\nablaabla\varphi:\mathbb R^n\to\mathbb R^n$ transports $\gammaamma_n$ onto $\mu$, i.e. \(T_\sigmaharp \gammaamma_n=\mu\), or equivalently
\betaegin{equation*}
\int h\circ T \,d\gammaamma_n=\int h\, d\mu\qquad \textrm{for all continuous and bounded functions \(h\in C_b(\mathbb R^n)\)}.
\varepsilonnd{equation*}
In the sequel we will refer to \(T\) as the {\varepsilonm Brenier map} from $\gammaamma_n$ to $\mu$.
In \cite{Caf1,Caf2} Caffarelli proved that if $\mu$ is ``more log-concave'' than $\gammaamma_n$, then $T$ is $1$-Lipschitz, that is, all the eigenvalues
of $D^2\varphi$ are bounded from above by $1$. Here is the exact statement:
\betaegin{theorem}[Caffarelli]\lambdaabel{thm:caffarelli} Let \(\gammaamma_n\) be the Gaussian measure in \(\mathbb R^n\), and let \(\mu=e^{-V} dx\) be a probability measure satisfying \(D^2 V\gammae \Id_n\). Consider the Brenier map \(T=\nablaabla \varphi\) from \(\gammaamma_n\) to \(\mu\). Then \(T\) is \(1\)-Lipschitz, i.e. \(D^2\varphi(x)\lambdae\Id \)
for a.e. $x$.
\varepsilonnd{theorem}
This theorem allows one to show that optimal constants in several functional inequalities are extremized by the Gaussian measure. More precisely, let \(F,G,H,L,J\) be continuous functions on \(\mathbb R\) and assume that \(F,G,H,J\) are nonnegative, and that \(H\) and \(J\) are {increasing}. For \(\varepsilonll\in \mathbb R_+\) let
\betaegin{equation}\lambdaabel{eq:lambda}
\lambdaambda(\mu,\varepsilonll):=\inf\Bigg\{\frac{H\Big(\int J(|\nablaabla u|) \,d\mu\Big)} {F\Big(\int G(u)\, d\mu\Big)}\,:\qquad u\in {\rm Lip}(\mathbb R^n)\,, \int L(u)\,d\mu=\varepsilonll\Bigg\}.
\varepsilonnd{equation}
Then
\betaegin{equation}\lambdaabel{eq:lambda2}
\lambdaambda(\gammaamma_n,\varepsilonll)\lambdae \lambdaambda(\mu,\varepsilonll).
\varepsilonnd{equation}
Indeed, given a function \(u\) admissible in the variational formulation for \(\mu\), we set \(v:=u\circ T\) and note that, since \(T_\sigmaharp \gammaamma_n=\mu\),
$$
\int K(v)\,d\gammaamma_n=\int K(u\circ T)\,d\gammaamma_n=\int K(u)\,d\mu\qquad \text{for $K=G,L$.}
$$
In particular, this implies that $v$ is admissible in the variational formulation for \(\gammaamma_n\).
Also, thanks to Caffarelli's Theorem,
\[
|\nablaabla v|\lambdae |\nablaabla u|\circ T\,|\nablaabla T|\lambdae |\nablaabla u|\circ T,
\]
therefore
$$
H\Big(\int J(|\nablaabla v|)\, d\gammaamma_n\Big)
\lambdaeq H\Big(\int J(|\nablaabla u|)\circ T\, d\gammaamma_n\Big)=
H\Big(\int J(|\nablaabla u|)\, d\mu\Big).
$$
Thanks to these formulas, \varepsilonqref{eq:lambda2} follows easily.
Note that the classical Poincar\'e and Log-Sobolev inequalities fall in the above general framework. \\
Two questions that naturally arise from the above considerations are:
\betaegin{itemize}
\item[-]\varepsilonmph{Rigidity}: What can be said of \(\mu\) when \(\lambdaambda (\mu,\varepsilonll)=\lambdaambda(\gammaamma_n,\varepsilonll)\)?
\item[-]\varepsilonmph{Stability}: What can be said of \(\mu\) when \(\lambdaambda (\mu,\varepsilonll)\alphapprox \lambdaambda(\gammaamma_n,\varepsilonll)\)?
\varepsilonnd{itemize}
Looking at the above proof,
these two questions can usually be reduced to the study of the corresponding ones concerning the optimal map \(T\) in Theorem \ref{thm:caffarelli} (here $|A|$
denotes the operator norm of a matrix $A$):
\betaegin{itemize}
\item[-]\varepsilonmph{Rigidity}: What can be said of \(\mu\) when \(|\nablaabla T(x)|=1\) for a.e. \(x\) ?
\item[-]\varepsilonmph{Stability}: What can be said of \(\mu\) when \(|\nablaabla T(x)|\alphapprox 1\) (in suitable sense)?
\varepsilonnd{itemize}
Our first main result states that if \(|\nablaabla T(x)|=1\) for a.e. \(x\) then \(\mu\) ``splits off'' a Gaussian factor. More precisely, it splits off as many Gaussian factors as the number of eigenvalues of \(\nablaabla T=D^2\varphi\) that are equal to \(1\).
In the following statement and in the sequel, given \(p \in \mathbb R^k\) we denote by \(\gammaamma_{p,k}\) the Gaussian measure in $\mathbb R^k$ with barycenter \(p\), that is, \(\gammaamma_{p,k}=(2\partiali)^{-k/2}e^{-|x-p|^2/2}dx\).
\betaegin{theorem}[Rigidity]\lambdaabel{thm:rigidity}Let \(\gammaamma_n\) be the Gaussian measure in \(\mathbb R^n\), and let \(\mu=e^{-V} dx\) be a probability measure with \(D^2 V\gammae \Id_n\). Consider the Brenier map \(T=\nablaabla \varphi\) from \(\gammaamma_n\) to \(\mu\), and let
\[
0\lambdae \lambdaambda_1(D^2 \varphi(x))\lambdae \dots\lambdae \lambdaambda_n(D^2 \varphi(x))\lambdaeq 1
\]
be the eigenvalues of the matrix $D^2\varphi(x)$. If \(\lambdaambda_{n-k+1}(D^2 \varphi(x))=1\) for a.e. \(x\) then
\(\mu= \gammaamma_{p,k}\otimes e^{-W(x')}d x'\), where \(W:\mathbb R^{n-k}\to \mathbb R\) satisfies \(D^2W\gammae \Id_{n-k}\).
\varepsilonnd{theorem}
Our second main result is a quantitative version of the above theorem. Before stating it let us recall that, given two probability measures \(\mu, \nablau\in \mathcal P(\mathbb R^n)\), the \(1\)-Wasserstein distance between them is defined as
\[
W_1(\mu,\nablau):=\inf\Big\{\int |x-y|\,d\sigmaigma(x,y)\,:\quad \sigmaigma\in \mathcal P(\mathbb R^n\times \mathbb R^n)\textrm{ such that } ({\rm pr}_1)_\sigmaharp\sigmaigma=\mu,\, ({\rm pr}_2)_\sigmaharp\sigmaigma=\nablau\Big\},
\]
where \({\rm pr}_1\) (resp. \({\rm pr}_2\)) is the projection of \(\mathbb R^n\times\mathbb R^n\) onto the first (resp. second) factor.
\betaegin{theorem}[Stability]\lambdaabel{thm:stability}Let \(\gammaamma_n\) be the Gaussian measure in \(\mathbb R^n\), and let \(\mu=e^{-V} dx\) be a probability measure with \(D^2 V\gammae \Id_n\). Consider the Brenier map \(T=\nablaabla \varphi\) from \(\gammaamma_n\) to \(\mu\), and let
\[
0\lambdae \lambdaambda_1(D^2 \varphi(x))\lambdae \dots\lambdae \lambdaambda_n(D^2 \varphi(x))\lambdaeq 1
\]
be the eigenvalues of $D^2\varphi(x)$. Let \(\varepsilon\in (0,1)\) and assume that
\betaegin{equation}\lambdaabel{eq:almost}
1-\varepsilon\lambdae \int \lambdaambda_{n-k+1}(D^2 \varphi(x))\,d\gammaamma_n(x)\lambdae 1\,.
\varepsilonnd{equation}
Then there exists a probability measure \(\nablau= \gammaamma_{p,k}\otimes e^{-W(x')}d x'\), with \(W:\mathbb R^{n-k}\to \mathbb R\) satisfying \(D^2W\gammae \Id_{n-k}\), such that
\betaegin{equation}\lambdaabel{eq:log}
W_1(\mu,\nablau) \lambdaesssim \frac{1}{|\lambdaog\varepsilon|^{1/4_-}}.
\varepsilonnd{equation}
\varepsilonnd{theorem}
In the above statement, and in the rest of the note, we are employing the following notation:
\[
X \lambdaesssim Y^{\betaeta_-} \qquad \textrm{if \(X\lambdae C(n,\alphalpha)Y^{\alphalpha}\) for all \(\alphalpha< \betaeta\).}
\]
Analogously,
\[
X \gammatrsim Y^{\betaeta_-} \qquad \textrm{if \(C(n,\alphalpha )X\gammae Y^{\alphalpha}\) for all \(\alphalpha<\betaeta\).}
\]
\betaegin{remark}
We do not expect the stability estimate in the previous theorem to be sharp. In particular, in dimension $1$ an elementary argument (but completely specific to the one dimensional case) gives a linear control in $\varepsilon$. Indeed,
if we set $\partialsi(x):=x^2/2-\varphi(x)$, then our assumption can be rewritten as
$$
\int \partialsi'' \,d\gammaamma_1 \lambdaeq \varepsilon.
$$
Since $\partialsi''=(x-T)'\gammae 0$, this gives
$$
\int |(x-T)'|\,d\gammaamma_1 \lambdaeq \varepsilon
$$
and using the $L^1$-Poincar\'e inequality for the Gaussian measure we obtain
$$
W_1(\mu,\gammaamma_1) \lambdaeq \int|x-y|\,d\sigmaigma_T(x,y)= \int |x-T(x)|\,d\gammaamma_1(x) \lambdaeq C\varepsilon,
$$
where $\sigmaigma_T:=(\Id\times T)_\#\gammaamma_1$.
\varepsilonnd{remark}
As explained above, Theorems \ref{thm:rigidity} and \ref{thm:stability} can be applied to study the structure of \(1\)-log-concave measures (i.e., measures of the form \(e^{-V}dx\) with \(D^2V\gammae \Id_n\)) that almost achieve equality in \varepsilonqref{eq:lambda2}. To simplify the presentation and emphasize the main ideas, we limit ourselves to a particular instance of \varepsilonqref{eq:lambda}, namely the optimal constant in the $L^2$-Poincar\'e inequality for \(\mu\):
\[
\lambdaambda_\mu :=\inf\Bigg\{\frac{\int |\nablaabla u|^2 \,d\mu} {\int u^2 \,d\mu}\,:\qquad u\in {\rm Lip}(\mathbb R^n)\,, \int u \,d\mu=0\Bigg\}.
\]
It is well-known that \(\lambdaambda_{\gammaamma_n}=1\) and that $\{u_i(x)=x_i\}_{1\lambdaeq i \lambdaeq n}$ are the corresponding minimizers. In particular it follows by \varepsilonqref{eq:lambda2} that, for every \(1\)-log-concave measure \(\mu\),
\betaegin{equation}\lambdaabel{poin}
\int u^2\,d\mu \lambdaeq \int |\nablaabla u|^2\,d\mu\qquad\textrm{for all \(u\in {\rm Lip}(\mathbb R^n)\) with $\int u\,d\mu=0$. }
\varepsilonnd{equation}
As a consequence of Theorems \ref{thm:rigidity} and \ref{thm:stability} we have:
\betaegin{theorem}
\lambdaabel{cor:poincare}
Let \(\mu=e^{-V} dx\) be a probability measure with \(D^2 V\gammae \Id_n\),
and assume there exist $k$ functions $\{u_i\}_{1\lambdaeq i \lambdaeq k}\sigmaubset W^{1,2}(\mathbb R^n,\mu)$, $k \lambdaeq n$, such that
$$
\int u_i\,d\mu =0,\qquad \int u_i^2\,d\mu =1,\qquad \int \nablaabla u_i\cdot \nablaabla u_j\, d\mu =0\qquad \forall\,i \nablaeq j,
$$
and
$$
\int |\nablaabla u_i|^2\,d\mu\lambdae(1+\varepsilon)
$$
for some $\varepsilon >0$.
Then there exists a probability measure \(\nablau= \gammaamma_{p,k}\otimes e^{-W(x')}d x'\), with \(W:\mathbb R^{n-k}\to \mathbb R\) satisfying \(D^2W\gammae \Id_{n-k}\), such that
$$
W_1(\mu,\nablau)\lambdaesssim \frac{1}{|\lambdaog\varepsilon|^{1/4_-}}.
$$
In particular, if there exist $n$ orthogonal functions $\{u_i\}_{1\lambdaeq i \lambdaeq n}$ that attain the equality in \varepsilonqref{poin} then $\mu=\gammaamma_{p,n}$.
\varepsilonnd{theorem}
We conclude this introduction recalling that the rigidity version of the above theorem (i.e. the case $\varepsilon=0$) has already been proved by Cheng and Zhou in \cite[Theorem 2]{CZ} with completely different techniques.
\sigmaection{Proof of Theorem \ref{thm:rigidity}}
\betaegin{proof}[Proof of Theorem \ref{thm:rigidity}]
Set \(\partialsi(x):=|x|^2/2-\varphi(x)\) and note that,
as a consequence of Theorem \ref{thm:caffarelli}, \(\partialsi:\mathbb R^n\to \mathbb R\) is a \(C^{1,1}\) convex function with \(0 \lambdaeq D^2\partialsi \lambdaeq \Id\).
Also, our assumption implies that
\betaegin{equation}\lambdaabel{zeroeigenvalue}
\lambdaambda_1(D^2\partialsi(x))=\lambdadots=\lambdaambda_{k}(D^2 \partialsi(x))=0\qquad \textrm{for a.e. \(x\in \mathbb R^d\).}
\varepsilonnd{equation}
We are going to show that $\partialsi$ depends only on $n-k$ variables. As we shall show later, this will immediately imply the desired conclusion.
In order to prove the above claim, we note it is enough to prove it for \(k=1\), since then one can argue recursively on $\mathbb R^{n-1}$ and so on.
Note that \varepsilonqref{zeroeigenvalue} implies that
\betaegin{equation}
\lambdaabel{eq:det0}
\deltat D^2\partialsi\varepsilonquiv 0.
\varepsilonnd{equation}
Up to a translation of \(\mu\), we can subtract a linear function from \(\partialsi\) and assume without loss of generality that \(\partialsi(x)\gammae \partialsi(0)=0\).
Consider the convex set $\Sigma:=\{\partialsi=0\}$. We claim that $\Sigma$ contains a line.
Indeed, if not, this set would contain an exposed point $\betaar x$.
Up to a rotation, we can assume that $\betaar x=a\,e_1$ with $a \gammaeq 0$.
Also, since $\betaar x$ is an exposed point,
$$
\Sigma\sigmaubset \{x_1\lambdaeq a\}\quad
\text{and}
\quad \Sigma\cap \{x_1=a\}=\{\betaar x\}.
$$
Hence, by convexity of $\Sigma$, the set $\Sigma\cap\{x_1\gammaeq -1\}$ is compact.
Consider the affine function
$$
\varepsilonll_\varepsilonta(x):=\varepsilonta(x_1+1),\qquad \varepsilonta>0\text{ small},
$$
and define $\Sigma_\varepsilonta:=\{\partialsi \lambdaeq \varepsilonll_\varepsilonta\}$.
Note that, as $\varepsilonta \to 0$, the sets $\Sigma_\varepsilonta$
converge in the Hausdorff distance to the compact set $\Sigma\cap\{x_1\gammaeq -1\}$. In particular,
this implies that $\Sigma_\varepsilonta$ is bounded for $\varepsilonta$ sufficiently small.
We now apply the Alexandrov estimate (see for instance \cite[Theorem 2.2.4]{figalli_book}) to the convex function $\partialsi-\varepsilonll_\varepsilonta$
inside $\Sigma_\varepsilonta$,
and it follows by \varepsilonqref{eq:det0} that
(note that $D^2\varepsilonll_\varepsilonta\varepsilonquiv 0$)
$$
|\partialsi(x)-\varepsilonll_\varepsilonta(x)|^n\lambdaeq C_{n}({\rm diam}(\Sigma_\varepsilonta))^n \int_{\Sigma_\varepsilonta}\deltat D^2\partialsi=0\qquad \forall\,x \in\Sigma_\varepsilonta.
$$
In particular this implies that $\partialsi(0)=\varepsilonll_\varepsilonta(0)=\varepsilonta$,
a contradiction to the fact that $\partialsi(0)=0.$
Hence, we proved that $\{\partialsi=0\}$ contains a line, say $\mathbb R e_1$.
Consider now a point \(x\in \mathbb R^n\). Then, by convexity of $\partialsi$,
\[
\partialsi(x)+\nablaabla \partialsi(x)\cdot (s e_1-x)\lambdae \partialsi(s e_1)=0\qquad \forall\,s \in \mathbb R,
\]
and by letting \(s\to \partialm\infty\) we deduce that \(\partialartial_1\partialsi(x)=\nablaabla \partialsi(x)\cdot e_1=0\). Since $x$ was arbitrary, this means that $\partialartial_1\partialsi\varepsilonquiv 0$,
hence \(\partialsi(x)=\partialsi(0,x')\), $x'\in \mathbb R^{n-1}$.
Going back to $\varphi$, this proves that
$$
T(x)=(x_1,x'-\nablaabla \partialsi(x')),
$$
and because $\mu=T_\#\gammaamma_n$ we immediately deduce that $\mu=\gammaamma_1\otimes \mu_1$ where $\mu_1:=(\Id_{n-1}-\nablaabla \partialsi)_\#\gammaamma_{n-1}$.
Finally, to deduce that $\mu_1=e^{-W}dx'$ with $D^2W\gammaeq \Id_{n-1}$
we observe that $\mu_1=(\partiali')_\#\mu$ where $\partiali':\mathbb R^n\to\mathbb R^{n-1}$ is the projection given by $\partiali'(x_1,x'):=x'$.
Hence, the result is a consequence of the fact that $1$-log-concavity is preserved when taking marginals, see \cite[Theorem 4.3]{BL} or \cite[Theorem 3.8]{SW}.
\varepsilonnd{proof}
\sigmaection{Proof of Theorem \ref{thm:stability}}
To prove Theorem \ref{thm:stability},
we first recall a basic property of convex sets
(see for instance \cite[Lemma 2]{Cafbdry} for a proof).
\betaegin{lemma}
\lambdaabel{lem:john}
Given $S$ an open bounded convex set in $\mathbb R^n$
with barycenter at $0$,
let $\mathcal E$ denote an ellipsoid of minimal volume
with center $0$ and containing $S$.
Then there exists a dimensional constant $\kappaappa_n>0$
such that $\kappaappa_n \mathcal E\sigmaubset S$.
\varepsilonnd{lemma}
Thanks to this result, we can prove the following simple geometric lemma:
\betaegin{lemma}\lambdaabel{lm:geo}
Let $\kappaappa_n$ be as in Lemma \ref{lem:john},
set $c_n:=\kappaappa_n/2$,
and consider \(S\sigmaubset \mathbb R^n\) an open convex set with barycenter at \(0\).
Assume that \(S\sigmaubset B_R\) and \(\partialartial S\cap \partialartial B_R\nablaeq \varepsilonmptyset\). Then there exists a unit vector $v \in \mathbb S^{n-1}$ such that \(\partialm c_n R v\in S\).
\varepsilonnd{lemma}
\betaegin{proof}
By scaling we can assume that \(R=1\).
Let $v \in \partialartial S\cap \partialartial B_1$, and consider the ellipsoid $\mathcal E$ provided by Lemma \ref{lem:john}.
Since $v \in \overline{\mathcal E}$ and $\mathcal E$
is symmetric with respect to the origin, also $-v \in \overline{\mathcal E}$.
Hence
$$
\partialm c_n v \in c_n\overline{\mathcal E}\sigmaubset \kappaappa_n\mathcal E
\sigmaubset S,
$$
as desired.
\varepsilonnd{proof}
\betaegin{proof}[Proof of Theorem \ref{thm:stability}]
As in the proof of Theorem \ref{thm:rigidity} we set \(\partialsi:=|x|^2/2-\varphi\). Then, inequality \varepsilonqref{eq:almost} gives
\betaegin{equation}\lambdaabel{eq:det}
\int \lambdaambda_{k}(D^2\partialsi)\,d\gammaamma_n \lambdaeq \varepsilon.
\varepsilonnd{equation}
Up to subtracting a linear function (i.e. replacing \(\mu\) with one of its translates, which does not affect the conclusion of the theorem) we can assume that \(\partialsi(x)\gammae \partialsi (0)=0\), therefore \(\nablaabla \partialsi(0)=\nablaabla \varphi(0)=0\). Since $(\nablaabla \varphi)_\#\gammaamma_n=\mu$ and \(\|D^2\varphi\|_\infty\lambdae 1\), these conditions imply that
\[
\int |x|\,d\mu(x)=\int|\nablaabla\varphi(x)|\,d\gammaamma_n(x)=\int |\nablaabla \varphi(x)-\nablaabla \varphi(0)| \,d\gammaamma_n(x) \lambdae \int |x|\, d\gammaamma_n(x) \lambdae C_n.
\]
In particular
\[
W_1(\mu,\gammaamma_n)\lambdae W_1(\mu,\deltalta_0)+W_1(\deltalta_0,\gammaamma_n)\lambdae C_n.
\]
This proves that \varepsilonqref{eq:log} holds true with \(\nablau=\gammaamma_n\) and with a constant \(C\alphapprox |\lambdaog \varepsilon_0|^{1/4}\) whenever \(\varepsilon\gammae \varepsilon_0\). Hence, when showing the validity of \varepsilonqref{eq:log}, we can safely assume that \(\varepsilon\lambdae \varepsilon_0(n)\lambdal1\).
Furthermore, we can assume that the graph of \(\partialsi\) does not contain lines (otherwise, by the proof of Theorem \ref{thm:rigidity}, we would deduce that $\mu$ splits off a Gaussian factor, and we could simply repeat the argument in $\mathbb R^{n-1}$).
Thanks to these considerations, we can apply \cite[Lemma 1]{Cafbdry} to find a slope \(p\in \mathbb R^n\) such that the open convex set
\[
S_1:=\{x\in \mathbb R^n: \partialsi (x)< p\cdot x+ 1\}
\]
is nonempty, bounded, and with barycenter at $0$. Applying the Alexandrov estimate in \cite[Theorem 2.2.4]{figalli_book} to the convex function \(\tilde\partialsi(x):=\partialsi(x)-p\cdot x-1\) inside the set \(S_1\), we get (note that $D^2\tilde\partialsi=D^2\partialsi$)
\betaegin{equation}\lambdaabel{eq:abp}
1\lambdae \Bigl(-\min_{S_1} \tilde\partialsi \Bigr)^n\lambdae C_n(\diam(S_1))^n\int_{S_1} \deltat D^2 \partialsi.
\varepsilonnd{equation}
Consider now the smallest radius \(R>0\) such that \(S_1\sigmaubset B_R\) (note that \(R<+\infty\) since \(S_1\) is bounded).
Since $\gammaamma_n \gammaeq c_ne^{-R^2/2}$ in $B_R$
and $\lambdaambda_i(D^2\partialsi)\lambdaeq 1$ for all $i=1,\lambdadots,n$,
\varepsilonqref{eq:det} implies that
$$
\int_{B_R}\deltat D^2\partialsi \lambdaeq C_n e^{R^2/2}\varepsilon.
$$
Hence, using \varepsilonqref{eq:abp}, since ${\rm diam}(S_1)\lambdaeq 2R$ we get
$$
1 \lambdaeq C_n R^n e^{R^2/2}\varepsilon
$$
which yields
\betaegin{equation}\lambdaabel{eq:R}
R \gammatrsim |\lambdaog \varepsilon|^{1/2_+}.
\varepsilonnd{equation}
Now, up to a rotation and by Lemma \ref{lm:geo}, we can assume that
\[
\partialm c_nRe_1 \in S_1.
\]
Consider $1\lambdal \rho \lambdal R^{1/2}$ to be chosen. Since \(S_1\sigmaubset B_R\) and \(\partialsi \gammae 0\) we get that \(|p|\lambdae 1/R\), therefore \(\partialsi\lambdae 2\) on \(S_1\sigmaubset B_R\). Hence
$$
2 \gammaeq \partialsi(z)\gammaeq \partialsi(x)+\lambdaangle \nabla \partialsi(x), z-x\rangle \gammaeq \lambdaangle \nabla \partialsi(x), z-x\rangle \qquad \forall\,z \in S_1,\,x \in B_{\rho}.
$$
Thus, since $|\nabla \partialsi|\lambdaeq \rho$ in $B_{\rho}$ (by $\|D^2\partialsi\|_{L^\infty(\mathbb R^n)}\lambdae 1$ and \(|\nablaabla \partialsi(0)|=0\)),
choosing $z =\partialm c_nRe_1$ we get
\betaegin{equation}\lambdaabel{eq:der}
|\partialartial_1\partialsi|\lambdaeq \frac{C_n\rho^2}{R} \qquad \text{inside }B_{\rho}.
\varepsilonnd{equation}
Consider now $\betaar x_1\in [-1,1]$ (to be fixed later) and define $\partialsi_1(x'):=\partialsi(\betaar x_1,x')$ with $x' \in \mathbb R^{n-1}$. Integrating \varepsilonqref{eq:der} with respect to $x_1$ inside $B_{\rho/2}$, we get
$$
|\partialsi -\partialsi_1| \lambdaeq C_n \frac{\rho^3}{R} \qquad \text{inside }B_{\rho/2}.
$$
Thus, using the interpolation inequality
$$
\|\nabla \partialsi -\nabla \partialsi_1\|_{L^\infty(B_{\rho/4})}^2 \lambdaeq C_n\| \partialsi -\partialsi_1\|_{L^\infty(B_{\rho/2})} \|D^2 \partialsi -D^2 \partialsi_1\|_{L^\infty(B_{\rho/2})}
$$
and recalling that $\|D^2\partialsi\|_{L^\infty(\mathbb R^n)} \lambdaeq 1$ (hence $\|D^2\partialsi_1\|_{L^\infty(\mathbb R^{n-1})}\lambdaeq 1$),
we get
\betaegin{equation*}
|\nabla \partialsi -\nabla \partialsi_1| \lambdaeq C_n \frac{\rho^{3/2}}{R^{1/2}} \qquad \text{inside }B_{\rho/4}.
\varepsilonnd{equation*}
If $k=1$ we stop here, otherwise we notice that \varepsilonqref{eq:det} implies that
$$
\int_\mathbb R d\gammaamma_1(x_1)\int_{\mathbb R^{n-1}}
\deltat D_{x'x'}^2\partialsi(x_1,x')\,d\gammaamma_{n-1}(x') \lambdaeq
\int_\mathbb R d\gammaamma_1(x_1)\int_{\mathbb R^{n-1}}
\lambdaambda_{2}(D^2\partialsi)(x_1,x')\,d\gammaamma_{n-1}(x') \lambdaeq \varepsilon,
$$
where we used that\footnote{This inequality follows from the general fact that, given $A \in \mathbb R^{n\times n}$ symmetric matrix and \(W\sigmaubset \mathbb R^n\) a \(k\)-dimensional vector space,
\[
\lambdaambda_1\betaig(A\betaig|_W\betaig)=\min_{v\in W} \frac{A v\cdot v}{|v|^2}\lambdae \max_{\sigmaubstack{v\in W'\sigmaubset \mathbb R^n\\ \textrm{\(W'\) \(k\)-dim}}} \min_{W'} \frac{A v\cdot v}{|v|^2}=\lambdaambda_{n-k+1}(A).
\]}
$$
\lambdaambda_1\betaigl(D^2\partialsi|_{\{0\}\times\mathbb R^{n-1}}\betaigr) \lambdaeq \lambdaambda_2(D^2\partialsi)
$$
and that (since $D^2\partialsi\lambdaeq \Id$)
$$
\deltat D_{x'x'}^2\partialsi(x_1,x')\lambdaeq \lambdaambda_1\betaigl(D^2\partialsi|_{\{0\}\times\mathbb R^{n-1}}\betaigr).
$$
Hence, by Fubini's Theorem, there exists $\betaar x_1\in [-1,1]$ such that $\partialsi_1(x')=\partialsi(\betaar x_1,x')$ satisfies
$$
\int_{\mathbb R^{n-1}} \deltat D^2\partialsi_1\,d\gammaamma_{n-1}(x) \lambdaeq C_n\varepsilon.
$$
This allows us to repeat the argument above in $\mathbb R^{n-1}$ with
$$
\widetilde \partialsi_1(x'):=\partialsi_1(x')-\nablaabla_{x'} \partialsi_1(0)\cdot x'-\partialsi_1(0)
$$
in place of $\partialsi$, and up to a rotation we deduce that
$$
|\nabla \widetilde{\partialsi}_1 -\nabla \partialsi_2| \lambdaeq C_n \frac{\rho^{3/2}}{R^{1/2}} \qquad \text{inside }B_{\rho/4}.
$$
where $\partialsi_2(x''):=\partialsi_1(\betaar x_2,x'')$ for some $\betaar x_2 \in [-1,1]$.
By triangle inequality, this yields
$$
|\nabla \partialsi +p'- \nabla \partialsi_2| \lambdaeq C_n \frac{\rho^{3/2}}{R^{1/2}} \qquad \text{inside }B_{\rho/4},
$$
where \(p'=-(0,\nablaabla_{x'} \partialsi(\betaar x_1,0))\).
Note that, since \(|\betaar x_1|\lambdae 1\), \(\nablaabla \partialsi(0)=0\), and \(\|D^2\partialsi\|_\infty\lambdae 1\), we have \(|p'|\lambdae 1\). Iterating this argument $k$ times, we conclude that
$$
|\nabla \partialsi +\betaar p-\nabla \partialsi_k| \lambdaeq C_n \frac{\rho^{3/2}}{R^{1/2}} \qquad \text{inside }B_{\rho/4}
$$
where \(\betaar p=(p,p'')\in \mathbb R^k\times \mathbb R^{n-k}=\mathbb R^n\) with \(|\betaar p|\lambdae C_n\),
$$
\partialsi_k(y):=\partialsi(\betaar x_1,\lambdadots,\betaar x_k,y),\qquad y \in \mathbb R^{n-k},
$$
and $\betaar x_i \in [-1,1]$.
Recalling that $\nabla \varphi=x-\nablaabla \partialsi$, we have proved that
$$
T(x)=\nabla \varphi(x)=(x_1+p_1,\lambdadots,x_k+p_k,S(y)+p'') +Q(x),
$$
where $Q:=-(\nablaabla \partialsi-\nablaabla\partialsi_k+\betaar p)$ satisfies
\[
\|Q\|_{L^\infty(B_{\rho})}\lambdaeq C_n\frac{\rho^{3/2}}{R^{1/2}}\qquad\textrm{and}\qquad |Q(x)|\lambdae C_n (1+|x|)
\]
(in the second bound we used that $T(0)=\nablaabla \varphi(0)=0$, \(|\betaar p|\lambdae C_n\), and
$T$ is $1$-Lipschitz).
Hence, if we set $\nablau:=(S+p'')_\# \gammaamma_{n-k}$, we have
$$
W_1(\mu,\gammaamma_{p,k}\otimes \nablau) \lambdaeq \int |Q| \,d\gammaamma_n
\lambdaeq C_n\frac{\rho^{3/2}}{R^{1/2}}+C_n\int_{\mathbb R^n\sigmaetminus B_{\rho}} |x| \,d\gammaamma_n
= C_n\frac{\rho^{3/2}}{R^{1/2}}+C_n\rho^{n}e^{-\rho^2/2},
$$
so, by choosing $\rho:=(\lambdaog R)^{1/2}$, we get
$$
W_1(\mu,\gammaamma_{p,k}\otimes \nablau) \lambdaesssim\frac{1}{R^{1/2_-}}.
$$
Consider now $\partiali_k:\mathbb R^n\to\mathbb R^n$ and $\betaar \partiali_{n-k}:\mathbb R^n\to \mathbb R^{n-k}$
the orthogonal projection onto the first $k$ and the last $n-k$ coordinates, respectively.
Define $\mu_1:=(\partiali_k)_\# (e^{-V}dx)$, $\mu_2:=(\betaar \partiali_{n-k})_\# (e^{-V}dx)$, and note that these are \(1\)-log-concave measures in \(\mathbb R^k\) and \(\mathbb R^{n-k}\) respectively
(see \cite[Theorem 4.3]{BL} or \cite[Theorem 3.8]{SW}). In particular \(\mu_2=e^{-W}\) with \(D^2W\gammae \Id_{n-k}\). Moreover, since $W_1$ decreases under orthogonal projection,
$$
W_1(\mu_2,\nablau)=W_1\betaigl((\betaar \partiali_{n-k})_\#\mu, (\betaar \partiali_{n-k})_\#(\gammaamma_{p,k}\otimes\nablau) \betaigr) \lambdaeq W_1(\mu,\gammaamma_{p,k}\otimes \nablau) \lambdaesssim\frac{1}{R^{1/2_-}},
$$
thus
\[
\betaegin{split}
W_1(\mu,\gammaamma_{p,k}\otimes \mu_2)
&\lambdaeq W_1(\mu,\gammaamma_{p,k}\otimes \nablau)+W_1(\gammaamma_{p,k}\otimes \nablau,\gammaamma_{p,k}\otimes \mu_2)
\\
&\lambdae W_1(\mu,\gammaamma_{p,k}\otimes \nablau)+W_1( \nablau, \mu_2) \lambdaesssim\frac{1}{R^{1/2_-}}
\varepsilonnd{split}
\]
where we used the elementary fact that \(W_1(\gammaamma_{p,k}\otimes \nablau,\gammaamma_{p,k}\otimes \mu_2)\lambdae W_1( \nablau, \mu_2) \). Recalling \varepsilonqref{eq:R}, this proves that
$$
W_1(\mu,\gammaamma_{p,k}\otimes \mu_2) \lambdaesssim\frac{1} {|\lambdaog \varepsilon|^{1/4_-}},
$$
concluding the proof.
\varepsilonnd{proof}
\sigmaection{Proof of Theorem \ref{cor:poincare}}
\betaegin{proof}[Proof of Theorem \ref{cor:poincare}]
As in the proof of Theorem \ref{thm:stability}, it is enough to prove the result when $\varepsilon \lambdaeq \varepsilon_0 \lambdal 1$.
Let $\{u_i\}_{1\lambdaeq i \lambdaeq k}$ be as in the statement,
and set $v_i:=u_i\circ T$, where $T=\nablaabla \varphi:\mathbb R^n\to \mathbb R^n$ is the Brenier map
from $\gammaamma_n$ to $\mu$.
Note that since $T_\#\gammaamma_n=\mu$,
$$
\int v_i\,d\gammaamma_n=\int u_i\circ T\,d\gammaamma_n=\int u_i\,d\mu=0.
$$
Also, since $|\nablaabla T|\lambdaeq 1$ and by our assumption on $u_i$,
\betaegin{align*}
\int |\nablaabla v_i|^2\,d\gammaamma_n&\lambdaeq \int |\nablaabla u_i|^2\circ T\,d\gammaamma_n
=\int |\nablaabla u_i|^2\,d\mu\\
&\lambdae (1+\varepsilon)
\int u^2_i\,d\mu =(1+\varepsilon) \int v^2_i\,d\gammaamma_n
\lambdae (1+\varepsilon) \int |\nablaabla v_i|^2\,d\gammaamma_n,
\varepsilonnd{align*}
where the last inequality follows from the Poincar\'e inequality for $\gammaamma_n$
applied to $v_i$. Since
$$
\int |\nablaabla u_i|^2\,d\mu\lambdae (1+\varepsilon),
$$
this proves that
\betaegin{equation}
\lambdaabel{eq:eps 1}
0 \lambdaeq \int \Bigl( |\nabla u_i|^2\circ T - |\nabla v_i|^2\Bigr)\,d\gammaamma_n \lambdaeq \varepsilon \int |\nablaabla v_i|^2\,d\mu \lambdaeq \varepsilon (1+\varepsilon).
\varepsilonnd{equation}
Moreover, by Theorem \ref{thm:caffarelli}, $\nablaabla T=D^2\varphi $ is a symmetric matrix satisfying $0\lambdaeq \nabla T\lambdaeq \Id_n$,
therefore \((\Id-\nablaabla T)^2\lambdae \Id -(\nabla T)^2\). Hence,
since $\nabla v_i= \nablaabla T\cdot\nabla u_i\circ T $, it follows by \varepsilonqref{eq:eps 1} that
\betaegin{equation}\lambdaabel{eq:v1}
\betaegin{aligned}
\int | \nabla u_i\circ T-\nabla v_i|^2\,d\gammaamma_n
&=\int |(\Id_n - \nablaabla T)\cdot \nabla u_i\circ T|^2\,d\gammaamma_n\\
&=\int (\Id_n - (\nabla T))^2[\nabla u_i\circ T,\nablaabla u_i\circ T]\,d\gammaamma_n\\
&\lambdae \int (\Id_n - (\nabla T)^2)[\nabla u_i\circ T,\nablaabla u_i\circ T]\,d\gammaamma_n\\
&=\int \Bigl( |\nabla u_i|^2\circ T - |\nabla v_i|^2\Bigr)\,d\gammaamma_n\lambdaeq 2\varepsilon,
\varepsilonnd{aligned}
\varepsilonnd{equation}
where, given a matrix \(A\) and a vector \(v\), we have used the notation \(A[v,v]\) for \(Av\cdot v\).
In particular, recalling the orthogonality constraint $\int \nabla u_i\cdot \nabla u_j\,d\mu =0$, we deduce that
\betaegin{equation}
\lambdaabel{eq:ortho eps}
\int \nabla v_i\cdot \nabla v_j\,d\gammaamma_n=O(\sigmaqrt{\varepsilon}).
\varepsilonnd{equation}
In addition, if we set
$$
f_i(x):=\frac{\nabla u_i\circ T(x)}{|\nabla u_i\circ T(x)|}
$$
then, using again that \(|\nablaabla T|\lambdae 1\),
\betaegin{equation}
\lambdaabel{eq:2eps}
\int |\nabla (u_i\circ T)|^2\Bigl(1 - |\nabla T\cdot f_i|^2\Bigr)\,d\gammaamma_n
\lambdaeq \int |\nabla u_i|^2\circ T\Bigl(1 - |\nabla T\cdot f_i|^2\Bigr)\,d\gammaamma_n \lambdaeq 2\varepsilon.
\varepsilonnd{equation}
Now, for \(j\in \mathbb N\), let \(H_j:\mathbb R\to \mathbb R\) be the one dimensional Hermite polynomial of degree \(j\) (see \cite[Section 9.2]{DaPrato} for a precise definition). It is well known (see for instance \cite{DaPrato}) that for \(J=(j_1,\dots,j_n)\in \mathbb N^n\) the functions
\[
H_J(x_1,\dots,x_n)=H_{j_1}(x_1)H_{j_2}(x_2)\cdot\dots\cdot H_{j_n}(x_n)
\]
form a Hilbert basis of \(L^2(\mathbb R^n,\gammaamma_n)\). Hence,
since \(\alphalpha^i_0=\int v_i\,d\gammaamma_n=0\), we can write
\[
v_i=\sigmaum_{J\in \mathbb N^n\sigmaetminus \{0\}} \alphalpha^i_J H_J.
\]
By some elementary properties of Hermite polynomials (see \cite[Proposition 9.3]{DaPrato}), we get
\[
1=\int v_i^2 d\gammaamma_n=\sigmaum_{J\in \mathbb N^n\sigmaetminus \{0\}}\betaig(\alphalpha^i_J\betaig)^2, \qquad \int |\nablaabla v_i|^2d\gammaamma _n=\sigmaum_{J\in \mathbb N^n\sigmaetminus \{0\}} |J| \betaig(\alphalpha^i_J\betaig)^2.
\]
Hence, combining the above equations with the bound \(\int |\nablaabla v_i|^2d\gammaamma _n\lambdae (1+\varepsilon)\), we obtain
\[
\varepsilon\gammae \int |\nablaabla v_i|^2d\gammaamma _n-\int v_i^2d\gammaamma _n=\sigmaum_{J\in \mathbb N^n\,, |J|\gammae 2}(|J|-1)\betaig(\alphalpha^i_J\betaig)^2\gammae \frac{1}{2}\sigmaum_{J\in \mathbb N^n\,, |J|\gammae 2}|J|\betaig(\alphalpha^i_J\betaig)^2,
\]
where $|J|=\sigmaum_{m=1}^nj_m$.
Recalling that the first Hermite polynomials are just linear functions (since \(H_1(t)=t\)), using the notation
$$
\alphalpha_j^i:=\alphalpha_J^i \qquad \text{with }J=e_j \in \mathbb N^n
$$
we deduce that
\[
v_i(x)=\sigmaum_{j=1}^n \alphalpha^{i}_j x_j+z(x),
\qquad
\textrm{with}
\qquad
\|z\|^2_{W^{1,2}(\mathbb R^n,\gammaamma_n)}=O(\varepsilon).
\]
In particular, if we define the vector
\[
V_i:=\sigmaum_{j=1}^n \alphalpha^i_{j}e_j \in \mathbb R^n,
\]
and we recall that $\int |\nablaabla v_i|^2\,d\gammaamma_n=1+O(\varepsilon)$ and the almost orthogonality relation \varepsilonqref{eq:ortho eps}, we infer that $|V_i|=1+O(\varepsilon)$ and $|V_i\cdot V_l|=O(\sigmaqrt{\varepsilon})$ for all $i \nablaeq l\in\{1,\dots,k\}$.
Hence, up to a rotation, we can assume that $|V_i-e_i|=O(\sigmaqrt{\varepsilon})$ for all $i=1,\lambdadots,k$, and \varepsilonqref{eq:v1} yields
\betaegin{equation}
\lambdaabel{eq:close 1}
\int |\nabla (u_i \circ T) -e_i|^2\,d\gammaamma_n \lambdaeq C\,\varepsilon.
\varepsilonnd{equation}
Since $0 \lambdaeq 1 - |\nabla T\cdot f_i|^2\lambdaeq 1$,
it follows by \varepsilonqref{eq:2eps} and \varepsilonqref{eq:close 1} that
\betaegin{equation}
\lambdaabel{eq:close 2}
\int \Bigl(1 - |\nabla T\cdot f_i|^2\Bigr)\,d\gammaamma_n \lambdaeq
2\int \Bigl(|\nabla (u_i\circ T)|^2+|\nabla (u_i \circ T) -e_i|^2\Bigr)\Bigl(1 - |\nabla T\cdot f_i|^2\Bigr)\,d\gammaamma_n
\lambdaeq C \varepsilon.
\varepsilonnd{equation}
Set $w_i:=\nabla u_i \circ T$ so that $f_i=\frac{w_i}{|w_i|}$.
We note that, since all the eigenvalues of \(\nablaabla T=D^2\varphi\) are bounded by \(1\), given $\deltalta \lambdal 1$ the following holds: whenever
$$
|\nabla T\cdot w_i -e_i|\lambdaeq \deltalta
\qquad\text{and}\qquad
|\nabla T\cdot f_i|\gammaeq 1-\deltalta
$$
then $|w_i|=1+O(\deltalta)$. In particular,
$$
|\nabla T\cdot f_i - e_i|\lambdaeq C\deltalta.
$$
Hence, if $\deltalta \lambdaeq \deltalta_0$ where \(\deltalta_0\) is a small geometric constant, this implies that
the vectors $f_i$ are a basis of $\mathbb R^k$, and
$$
\nablaabla T|_{{\rm span}(f_1,\lambdadots,f_k)} \gammaeq (1-C\deltalta)\,\Id.
$$
Defining $\partialsi(x):=|x|^2/2-\varphi(x)$,
this proves that
\betaegin{equation}\lambdaabel{inclusione}
\betaiggl\{x\,:\,\sigmaum_i |\nabla T(x)\cdot w_i(x) -e_i| + \Bigl(1-|\nabla T(x)\cdot f_i(x)|\Bigr)\lambdaeq \deltalta\betaiggr\}
\sigmaubset \lambdaeft\{x\,:\,\lambdaambda_{n-k+1}(D^2\partialsi(x)) \lambdaeq C\deltalta\right\}
\varepsilonnd{equation}
for all $0<\deltalta\lambdaeq \deltalta_0$.
By the layer-cake formula, \varepsilonqref{eq:close 1}, and \varepsilonqref{eq:close 2},
this implies that
\betaegin{equation*}
\betaegin{split}
\int_{\{\lambdaambda_{n-k+1}(D^2\partialsi) \lambdaeq C\deltalta_0\}}\lambdaambda_{n-k+1}(D^2\partialsi)\,d\gammaamma_n
&= C\int_0^{\deltalta_0} \gammaamma_n\betaigl(\{\lambdaambda_{n-k+1}(D^2\partialsi) > Cs\}\betaigr)\,ds
\\
&\lambdaeq C\sigmaum_i \int_0^{\deltalta_0} \gammaamma_n\betaigl(\{|\nabla T\cdot w_i - e_i|>s\}\betaigr)\,ds\\
&\quad+C\sigmaum_i \int_0^{\deltalta_0} \gammaamma_n\betaigl(\{1-|\nabla T\cdot f_i|>s\}\betaigr)\,ds
\\
&\lambdaeq C \sigmaum_i \int \Bigl(|\nabla T\cdot w_i - e_i|+\betaigl(1-|\nabla T\cdot f_i|\betaigr)\Bigr)\,d\gammaamma_n\lambdaeq C\sigmaqrt{\varepsilon}.
\varepsilonnd{split}
\varepsilonnd{equation*}
On the other hand, again by \varepsilonqref{inclusione}, \varepsilonqref{eq:close 1}, \varepsilonqref{eq:close 2},
and Chebyshev's inequality,
\betaegin{multline*}
\gammaamma_n\betaigl(\{\lambdaambda_{n-k+1}(D^2\partialsi) > C\deltalta_0\}\betaigr)
\lambdaeq \sigmaum_i \gammaamma_n\betaigl(\{|\nabla T\cdot w_i - e_i|>\deltalta_0\}\betaigr)\\
\quad+\sigmaum_i\gammaamma_n\betaigl(\{1-|\nabla T\cdot f_i|>\deltalta_0\}\betaigr)\lambdaeq C\,\frac{\varepsilon}{\deltalta_0^2}.
\varepsilonnd{multline*}
Hence, since $\deltalta_0$ is a small but fixed geometric constant, combining the two equations above and recalling that \(\lambdaambda_{n-k+1}(D^2\partialsi)\lambdae 1\), we obtain
$$
\int \lambdaambda_{n-k+1}(D^2\partialsi)\,d\gammaamma_n \lambdaeq C\sigmaqrt{\varepsilon}.
$$
This implies that \varepsilonqref{eq:almost} holds with $C\sigmaqrt{\varepsilon}$ in place of $\varepsilon$, and the result follows by Theorem \ref{thm:stability}.
\varepsilonnd{proof}
\sigmaubsection*{Acknowledgements}
G.D.P. is supported by the MIUR SIR-grant ``Geometric Variational Problems'' (RBSI14RVEZ). G.D.P is a member of the ``Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni'' (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). A.F. is supported by NSF Grants DMS-1262411 and
DMS-1361122.
\betaegin{thebibliography}{99}
\betaibitem{BL} {\sigmac Brascamp H., Lieb E.:} {\varepsilonm On extensions of the Brunn-Minkowski and Pr\'ekopa-Leindler theorems, including inequalities for log-concave functions, and with an application to the diffusion equation}. J. Functional Analysis {\betaf 22} (1976), 366--389.
\betaibitem{Br} {\sigmac Brenier Y.}:
{\varepsilonm Polar factorization and monotone rearrangement of vector-valued
functions.} Comm. Pure Appl. Math. {\betaf 44} (1991), no. 4,
375--417.
\betaibitem{Cafbdry}
{\sigmac Caffarelli L.}: {\varepsilonm Boundary regularity of maps with convex potentials.} Comm. Pure Appl. Math. {\betaf 45} (1992), no. 9, 1141--1151.
\betaibitem{Caf1}
{\sigmac Caffarelli L}: {\varepsilonm Monotonicity properties of optimal transportation and the FKG and related inequalities.} Comm.
Math. Phys. {\betaf 214} (2000), 547--563.
\betaibitem{Caf2}
{\sigmac Caffarelli L}: {\varepsilonm Erratum: Monotonicity properties of optimal transportation and the FKG and related inequalities.} Comm.
Math. Phys {\betaf 225} (2002), 449--450.
\betaibitem{CZ} {\sigmac Cheng X., Zhou D.}: {\varepsilonm Eigenvalues of the drifted Laplacian on complete metric measure spaces}. Commun. Contemp. Math. \texttt{http://dx.doi.org/10.1142/S0219199716500012}.
\betaibitem{DaPrato}{\sigmac Da Prato, Giuseppe}: {\varepsilonm An introduction to infinite-dimensional analysis.} Universitext. Springer-Verlag, Berlin, 2006. x+209 pp.
\betaibitem{figalli_book} {\sigmac Figalli, A:} {\it The Monge-Amp\`ere Equation and its Applications}. Z\"urich Lectures in Advanced Mathematics, to appear.
\betaibitem{SW} {\sigmac Saumard A., Wellner J.}: {\varepsilonm Log-concavity and strong log-concavity: a review.}
Stat. Surv. {\betaf 8} (2014), 45--114
\varepsilonnd{thebibliography}
\varepsilonnd{document}
|
\begin{document}
\title{A complete characterization of group-strategy\-proof mechanisms of cost-sharing}
\begin{abstract}
We study the problem of designing group-strategy\-proof cost-sharing mechanisms. The players report their bids for getting serviced and the mechanism decides which players are going to be serviced and how much each one of them is going to pay.
We determine three conditions: \emph{Fence Monotonicity}, \emph{Stability} of the allocation, and \emph{Validity} of the tie-breaking rule, that are necessary and sufficient for group-strategy\-proofness, regardless of the cost function. Fence Monotonicity puts restrictions only on the payments of the mechanism, and Stability only on the allocation. Consequently, Fence Monotonicity characterizes group-strategy\-proof cost-sharing schemes.
Finally, we use our results to prove that there exist families of cost functions, where any group-strategy\-proof mechanism has unbounded approximation ratio.
\end{abstract}
\section{Introduction}
Algorithmic Mechanism Design~\cite{NR99} is a field of Game Theory that tries to construct algorithms for allocating resources which give the players incentives to report their true interest in receiving a good, a service, or in participating in a given collective activity. The pivotal constraint when designing a mechanism for any problem is that it be truthful. Truthfulness, also known as strategy-proofness or incentive compatibility, requires that no player can strictly improve her utility by lying, when the values of the other players are fixed. In many settings this single requirement of truthfulness restricts the repertoire of possible algorithms dramatically~\cite{Rob79}.
In settings where truthfulness does not restrict the repertoire of possible algorithms too much, as for example in cost-sharing problems, it is desirable to construct mechanisms that are also resistant to manipulation by groups of players. Group-stra\-te\-gy\-proofness naturally generalizes truthfulness by requiring that no group of players can improve their utility by lying, when the values of the other players are fixed. More precisely, there should not exist any group of players who can change their bids in such a way that every member of the coalition is at least as happy as in the truthful scenario and at least one member is happier, for fixed values of the players that do not belong to the coalition.
In this paper we study the following problem: we want to determine a set of $n$ customers/players who are going to receive a service. Each player reports her willingness to pay for getting serviced, and the mechanism decides which players are going to be serviced and the price that each one of them will pay; that is, we consider direct revelation mechanisms. We want to characterize all mechanisms that satisfy group-strategy\-proofness, i.e., to find necessary and sufficient conditions for a mechanism to be group-strategy\-proof and to determine the corresponding payments.
We provide a complete characterization of group-stra\-te\-gy\-proof mechanisms and cost-sharing schemes, closing an open question posed by Immorlica, Mahdian and Mirrokni~\cite{imm08,agt07}, by extending the condition of Semi-cross-monotonicity they identified in~\cite{imm08} to a new condition, which we call Fence Monotonicity, and by defining Fencing Mechanisms, a new general framework for designing group-stra\-te\-gy\-proof mechanisms.
Our results are of special importance for the Cost-Sharing Problem, whose study was initiated in~\cite{m99}, where we additionally have a cost function $C$ such that, for each subset of players $S$, the cost of providing service to all the players in $S$ is $C(S)$. The strength of our results is that they apply to any cost function, since throughout our proofs we make no assumptions about this cost function. We believe that our work can be the starting point for constructing new interesting classes of mechanisms for specific cost-sharing problems.
Recently, Mehta, Roughgarden and Sundararajan~\cite{mrs07} proposed the notion of weak group-strategy-proofness, which relaxes group-strategy\-proofness. It regards the formation of a coalition as successful only when \emph{each} player who participates in the coalition \emph{strictly} increases her personal utility. They also introduced acyclic mechanisms, a general framework for designing weakly group-strategy\-proof mechanisms; however, the question of determining all possible weakly group-strategy\-proof mechanisms remains open. Another alternative notion, slightly stronger than weak group-strategy\-proofness and weaker than group-strategy\-proofness, was proposed by Bleischwitz, Monien and Schoppmann in~\cite{bms07}.
\section{Our results and related work}
The design of group-stra\-te\-gy\-proof mechanisms for cost-sharing was first discussed by Moulin and Shenker~\cite{m99,ms01}. Moulin defined a condition on the payments called cross-monotonicity, which states that the payment of a serviced player should decrease as the set of serviced players grows. Any mechanism whose payments satisfy cross-monotonicity can easily be turned into a simple mechanism, now called a Moulin mechanism. A Moulin mechanism first checks whether all players can be serviced with positive utility and then gradually shrinks the set of candidate players for service, by removing at each step a player who cannot pay to get serviced (and who, because of cross-monotonicity, also cannot pay in any smaller set of serviced players). In fact, if the cost function is sub-modular and 1-budget balanced,
then the only possible group-stra\-te\-gy\-proof mechanisms are Moulin mechanisms~\cite{ms01}. The great majority of cost-sharing mechanisms proposed are Moulin mechanisms~\cite{jv01, pt03, kls05, gklrs07}. However, recent results showed that for several important cost-sharing games Moulin mechanisms can only achieve a very poor budget balance factor~\cite{imm08,bs07,rs06,klsz05}. Another direction, proposed by Moulin and rediscovered in~\cite{bs09,j08}, resulting in weakly group-stra\-te\-gy\-proof mechanisms, are Incremental mechanisms, where, after ordering the players appropriately, one asks them one by one whether their bid is greater than an appropriate cost share. Some alternative, very interesting, and much more complicated mechanisms that are group-stra\-te\-gy\-proof but not Moulin have been proposed in~\cite{imm08,pv06}; however, these do not exhaust the class of group-stra\-te\-gy\-proof mechanisms.
In this work we introduce \emph{Fencing Mechanisms}, a new general framework for designing group-stra\-te\-gy\-proof mechanisms that generalizes Moulin mechanisms~\cite{m99}. Unfortunately, for the general case we do not know whether there exists a polynomial-time algorithm that implements these mechanisms.
Finding a complete characterization of the cost-sharing schemes that give rise to a group-stra\-te\-gy\-pr\-oof mechanism was a question posed in~\cite{imm08,agt07}. The same question was posed in~\cite{mrs07} for weakly group-stra\-te\-gy\-proof cost-sharing schemes and still remains open. Many interesting results arose in the attempt to find such a characterization~\cite{pv06,j08}. In contrast to previous attempts, which characterized mechanisms satisfying some additional boundary constraints~\cite{imm08,j08}, our characterization is complete and succinct. The only complete characterization previously known was for the case of two players~\cite{pv06,j08}. It remains open how our characterization can help in constructing new efficient mechanisms for specific cost-sharing problems or in obtaining lower bounds, and we believe that it can significantly enrich the repertoire of mechanisms with good approximation guarantees for specific problems.
In the notion of group-stra\-te\-gy\-proofness it is important to understand that ties play a crucial role. This is in contrast to mechanisms that are only required to satisfy strategyproofness, where ties can in most cases be broken arbitrarily (see for example~\cite{nr01}). An intuitive way to understand this is that the designer of a group-stra\-te\-gy\-proof
mechanism expects a player to tell a lie in order to help the other players increase their utility, even when she would not gain any profit for herself. Such a player is at a tie but decides strategically whether she should lie or not. Consequently a characterization that assumes a priori a tie-breaking rule, and thus greatly restricts the repertoire of possible mechanisms, like the one in~\cite{imm08,j09}, might be useful for specific problems and easier in its statement, but can never capture the very notion of group-stra\-te\-gy\-proofness.
We determine three conditions: \emph{Fence Monotonicity}, \emph{Stability} of the allocation and \emph{Validity} of the tie-breaking rule, which are necessary and sufficient for group-strategy\-proof\-ness, regardless of the cost function and without any additional constraints (like the tie-breaking rules used in~\cite{imm08,j08}). Fence Monotonicity concerns only the payments of the mechanism, while Stability and Validity of the tie-breaking rule concern only the allocation. Consequently Fence Monotonicity characterizes the cost-sharing schemes of group-stra\-te\-gy\-proof mechanisms. Having only the payments of a group-stra\-te\-gy\-proof mechanism is, however, not enough to determine its allocation. The allocation of a mechanism based on a cost-sharing scheme that satisfies Fence Monotonicity should additionally satisfy a condition we call Stability. Managing to separate the payments from the allocation part of the mechanism, and avoiding any additional restrictions in the characterization we propose, are undoubtedly its great virtues.
Our proofs are involved and based on set-theoretic arguments and the repeated use of induction. The main difficulty of our work was to identify some necessary and sufficient conditions for group-stra\-te\-gy\-proof payments that are also succinct to describe and add to our understanding of the notion of group-stra\-te\-gy\-proofness. In proving that Fence Monotonicity is a necessary condition for group-strategyproofness we first have to prove Lemmas that also reveal interesting properties of the allocation part of the mechanism. A novel tool that we introduce is the \emph{harm relation} that generalizes the notion of negative elements defined in~\cite{imm08}. Proving that Fencing Mechanisms, i.e. mechanisms whose payments satisfy Fence Monotonicity and whose allocation satisfies Stability and Validity of the tie-breaking rule, are group-stra\-te\-gy\-proof turns out to be rather complicated.
\section{Defining the model}
\subsection{The Mechanism}
Suppose that $\mathcal{A}=\{1,2,\ldots ,n\}$ is a set of players interested in receiving a service. Each of the players has a private type $v_i$, which is her valuation for receiving the service.
\begin{defn}
A cost sharing mechanism $(O,p)$ consists of a pair of functions: $O:\mathbb{R}^n\rightarrow 2^{\mathcal{A}}$, which associates with each bid vector $b$ the set of serviced players, and $p:\mathbb{R}^n\rightarrow \mathbb{R}^n$, which associates with each bid vector $b$ a vector $p(b)=p(b_1,\ldots,b_n)$, where the $i$-th coordinate is the payment of player $i$.
\end{defn}
Each player wants to maximize her utility, which assuming quasi-linear utilities is $v_i a_i-p_i(b)$ where $a_i=1$ if $i\in O(b)$ and $a_i=0$ if $i\notin O(b)$.
As is common in the literature, and in order to exclude degenerate mechanisms, we concentrate on mechanisms that satisfy the following very simple conditions~\cite{m99}:
\begin{itemize}
\item \emph{Voluntary Participation (VP)}: A player that is not serviced is not charged ($i\notin O(b)\Rightarrow p_i(b)=0$) and a serviced player is never charged more than his bid ($i\in O(b)\Rightarrow p_i(b)\leq b_i$).
\item \emph{No Positive Transfer (NPT)}: The payment of each player $i$ is non-negative ($p_i(b)\geq 0$ for all $i$).
\item \emph{Consumer Sovereignty (CS)}: For each player $i$ there exists a value $b_i^*\in \mathbb{R}$ such that if he bids $b_i^*$ then it is guaranteed that $i$ will receive the service no matter what the other players bid.
\end{itemize}
Obviously, VP implies that if a player is truthful, then her utility is lower bounded by zero. Moreover, VP and NPT imply that if a player announces a negative amount, then she will not be included in the outcome. While negative bids are not realistic, they may be used to model the denial of revealing
any information to the mechanism. Finally, notice that the crucial value $b_i^*$ in the definition of CS is independent of the bid vector. Thus, this value has to be strictly greater than any possible payment this player may be charged, i.e. $b^*_i> p_i(b)$, for all $b\in\mathbb{R}^n$.
\begin{defn} We say that a cost-sharing mechanism is
\emph{group-strategyproof (GSP)} if and only if the following holds:
For every two valuation vectors $v, v'$ and every $S \subseteq \mathcal{A}$ satisfying $v_i = v'_i$ for all $i \notin S$, one of the following is true (where $a$ and $a'$ denote the allocations at $v$ and $v'$ respectively):
\emph{(a)} There is some $i \in S$, such that $v_i a'_i - p_i (v' ) < v_i a_i -
p_i (v)$, or
\emph{(b)} for all $i \in S$, it holds that $v_i a'_i - p_i (v' ) = v_i a_i - p_i (v)$.
\end{defn}
In other words, a GSP mechanism does not allow successful coalitions of the players, i.e. situations where a group of players announces false values instead of their true valuations,
no ``liar'' sacrifices her utility, and at least one
player (not necessarily a liar) strictly profits after the manipulation.
\begin{defn}
A cost-sharing scheme is a function $\xi :\mathcal{A}\times 2^{\mathcal{A}}\rightarrow \mathbb{R}^+\cup \{0\}$, such that, for every $S\subset \mathcal{A}$ and every $i\notin S$ we have $\xi(i,S)=0$.
\end{defn}
You can think of $\xi(i,S)$ as the payment of player $i$ if the serviced set is $S$. In fact it can be shown that in any group-stra\-te\-gy\-proof mechanism for our setting the payment of a player depends only on the allocation of the mechanism and not directly on the bids of the players. In this sense we do not restrict the mechanism in any way by assuming that the payments are given by a cost-sharing scheme $\xi$.
\subsection{The cost function and budget balance}
The cost of providing the service is given by a cost function $C:2^{\mathcal{A}}\rightarrow \mathbb{R}^+\cup \{0\}$, where $C(S)$ specifies the cost of providing service to all players in $S$.
A desirable property of cost-sharing mechanisms is budget balance. We say that
a mechanism is \emph{$\alpha$-budget balanced}, where $0\leq \alpha\leq 1$, if for all bid vectors $b$ it holds that $ \alpha \cdot C(O(b))\leq \sum_{i\in O(b)}\xi(i,O(b))\leq C(O(b))$.
We chose to define the cost function last in order to stress that our results are completely independent of the cost function and apply to any cost-sharing problem.
\subsection{Cross- and Semi-cross-mo\-no\-to\-ni\-ci\-ty}
We say that a cost-sharing scheme is \emph{cross-monotonic}~\cite{ms01} if $\xi(i,S)\geq \xi(i,T)$ for every $S\subset T\subseteq \mathcal{A}$ and every player $i\in S$.
In the attempt to provide a characterization of GSP cost-sharing schemes, Immorlica, Mahdian and Mirrokni~\cite{imm08} provided a partial characterization and identified semi-cross-monotonicity, an important condition that must be satisfied by any GSP cost-sharing scheme.
A cost sharing scheme $\xi$ is \emph{semi-cross monotonic} if for every $S\subseteq \mathcal{A}$ and every player $i\in S$: either
$\xi(j,S\setminus\{i\})\leq\xi(j,S)$ for all $j\in S\setminus\{i\}$, or $\xi(j,S\setminus\{i\})\geq\xi(j,S)$ for all $j\in S\setminus\{i\}$. Notice that every cross monotonic cost sharing scheme is also semi-cross monotonic, since the second condition is always true; the converse, however, does not hold.
As we later show in Proposition~\ref{prop:neg-neu} (a), semi-cross-mo\-no\-to\-ni\-ci\-ty can be almost directly derived from the condition of Fence Monotonicity we define in this work, and more specifically from part (a) of Fence Monotonicity.
\section{Our Characterization}
\subsection{Fence Monotonicity }
Fence Monotonicity considers, each time, a restriction of the mechanism that can only output as the serviced set subsets of $U$ that contain all players in $L$. To be more formal, consider all possible subsets of the players $L,U$ such that $L\subseteq U\subseteq \mathcal{A}$. Fixing a pair $L\subseteq U$, Fence Monotonicity considers only sets of players $S$ with $L\subseteq S\subseteq U$.
Let $\m{i}$ be the minimum payment of player $i$ for getting serviced when the output of the mechanism is between $L$ and $U$, i.e. $\m{i}:=\min_{\{L\subseteq S\subseteq U,i\in S\}}\xi(i,S)$.
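To make this quantity concrete, the following minimal Python sketch computes $\m{i}$ by brute force, assuming (purely for illustration) that a cost-sharing scheme is stored as a dictionary mapping each non-empty serviced set (a \texttt{frozenset} of players) to a dictionary of per-player payments. The helper names \texttt{xi} and \texttt{xi\_star} are hypothetical, and the enumeration is exponential in $|U\setminus L|$, so the sketch is meant only for small hand-built examples such as the ones given later in this section.
\begin{verbatim}
from itertools import combinations

def xi(scheme, i, S):
    # Payment of player i when the serviced set is S; players outside S pay 0.
    S = frozenset(S)
    return scheme[S][i] if i in S else 0.0

def xi_star(scheme, i, L, U):
    # Minimum payment of player i over all serviced sets S with
    # L <= S <= U and i in S (exhaustive enumeration).
    L, U = frozenset(L), frozenset(U)
    free = U - (L | {i})
    best = float("inf")
    for r in range(len(free) + 1):
        for extra in combinations(free, r):
            S = L | {i} | set(extra)
            best = min(best, xi(scheme, i, S))
    return best
\end{verbatim}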
\begin{defn}[Fence Monotonicity]
We will say that a cost-sharing scheme satisfies \emph{Fence Monotonicity } if it satisfies the following three conditions:
\begin{itemize}
\item[(a)] There exists at least one set $S$ with $L\subseteq S\subseteq U$, such that for all $i\in S$ we have $\xi(i,S)=\m{i}$.
\item[(b)] For each player $i\in U\setminus L$ there exists at least one set $S_i$, with $L\subseteq S_i\subseteq U$, such that for all $j\in S_i\setminus L$, we have $\xi(j,S_i)=\m{j}$.
(Note that $i \in S_i\setminus L$ and thus $\xi(i,S_i)=\m{i}$. Also note that we might have $S_i\neq S_j$ for $i\neq j$.)
\item[(c)] If there exists a set $C\subset U$, such that for some player $i$ we have $i\in C$ and $\xi(i,C)<\m{i}$ (obviously $L\nsubseteq C$), then there exists at least one set $T\neq \emptyset$, with $T\subseteq L\setminus C$ such that for all $j\in T$, $\xi(j,C\cup T)=\m{j}$.
\end{itemize}
\end{defn}
The first condition says that if we necessarily have to service the players in $L$, then there exists a superset $S$ of $L$ (with $S\subseteq U$) such that all players in $S$ achieve their lowest possible (non-zero) payment $\m{i}$. As we show in Proposition~\ref{prop:neg-neu}(a), this condition generalizes the condition of semi-cross-monotonicity, which was identified as necessary for group-strategyproofness in~\cite{imm08}, and in this sense our work completes the partial characterization obtained in~\cite{imm08}.
The second condition says that for each player $i\in U\setminus L$ there exists an outcome $S_i$, such that $i\in S_i$ and all players in $S_i\setminus L$ are served at their lowest possible payment. Loosely speaking, this gives a way to enlarge $L$ by adding more players to it in a way that is optimal for the players that we add. Note that the cost of the players already in $L$ might however increase, which in fact relaxes cross-monotonicity, in the sense that if we further require the second condition to hold for every $j \in S_i$, then the underlying cost sharing schemes are cross-monotonic.
The third condition compares the minimum possible non-zero payment of each player in this restriction of the mechanism with her minimum possible non-zero payment when the outcome can be any subset of $U$ (i.e. the output need not necessarily contain the players in $L$). If the first payment is bigger, then this means that some of the players in $L$ are responsible for this higher payment and ``harm'' the player. Very loosely speaking, condition (c) says that
at least one non-empty subset of $L\setminus C$ does not get ``harmed''
by a coalition that restricts the outcome to be a subset of $L\cup C$ (we remove all the players in $U\setminus (L\cup C)$ from $U$) and to contain every player in $C$ (we add the players in $C\setminus L$ to $L$). The intuition behind the third condition will become clearer when we show some important allocation properties of GSP mechanisms.
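The three conditions can also be checked mechanically on a small example. The sketch below reuses the hypothetical \texttt{xi} and \texttt{xi\_star} helpers introduced above and assumes the scheme defines a payment vector for every non-empty subset of players; it tests conditions (a)--(c) at a fixed pair $L\subseteq U$ by exhaustive enumeration, as an illustration of the definition rather than an efficient procedure.
\begin{verbatim}
from itertools import combinations  # xi, xi_star as in the sketch above

def fence_monotone_at(scheme, L, U):
    # Check conditions (a)-(c) of Fence Monotonicity at a fixed pair L <= U.
    L, U = frozenset(L), frozenset(U)
    mins = {i: xi_star(scheme, i, L, U) for i in U}
    between = [L | set(extra)  # all S with L <= S <= U
               for r in range(len(U - L) + 1)
               for extra in combinations(U - L, r)]
    # (a) some S between L and U in which every served player pays her minimum
    cond_a = any(all(xi(scheme, i, S) == mins[i] for i in S) for S in between)
    # (b) every i in U\L has some S_i containing i where all of S_i\L pay their minimum
    cond_b = all(any(i in S and all(xi(scheme, j, S) == mins[j] for j in S - L)
                     for S in between)
                 for i in U - L)
    # (c) every strict subset C of U containing a player paid below her minimum
    #     admits a non-empty T inside L\C with xi(j, C union T) == mins[j] for all j in T
    cond_c = True
    for r in range(len(U)):
        for Cs in combinations(U, r):
            C = set(Cs)
            if any(xi(scheme, i, C) < mins[i] for i in C):
                witnesses = (set(Ts)
                             for t in range(1, len(L - C) + 1)
                             for Ts in combinations(L - C, t))
                if not any(all(xi(scheme, j, C | T) == mins[j] for j in T)
                           for T in witnesses):
                    cond_c = False
    return cond_a, cond_b, cond_c
\end{verbatim}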
\begin{theorem}\label{theo:gsp=>bsm}
A cost sharing scheme gives rise to a group-strategyproof mechanism if and only if it satisfies Fence Monotonicity.
\end{theorem}
\subsubsection{Examples of Mechanisms that violate just one part of Fence Monotonicity and are not GSP}
We will give three representative examples to illustrate why a cost sharing scheme that does not satisfy Fence Monotonicity cannot give rise to a group-stra\-te\-gy\-proof mechanism. We chose our examples in such a way that only one condition of Fence Monotonicity is violated, and only at a specific pair $L,U$. (In fact, in condition (c) the violation is present at two pairs; however, it can be shown that this is unavoidable.)
\begin{example}[a]\label{ex:a}
Let $\mathcal{A}=\{1,2,3,4\}$. We construct a cost sharing scheme, such that condition (a) of Fence Monotonicity is not satisfied at $L=\{1,2\}$ and $U=\{1,2,3,4\}$, as follows.
$$\begin{array}{|c||c|c|c|c|}
\hline
\xi & 1 & 2& 3 & 4 \\\hline\hline
\mathbf{\{1,2,3,4\}} & \mathbf{30}& \mathbf{30} & \mathbf{30} & \mathbf{30}\\\hline
\mathbf{\{1,2,3\}}&\mathbf{20}&\mathbf{30}& \mathbf{30} &\mathbf{-}\\\hline
\mathbf{\{1,2,4\}}&\mathbf{30}&\mathbf{20}&\mathbf{-}&\mathbf{30}\\\hline
\{1,3,4\}& 30 & -& 20 &30 \\\hline
\{2,3,4\}&- & 30&30& 20\\\hline
\end{array}
\hspace{5pt}
\begin{array}{|c||c|c|c|c|}
\hline
\xi & 1 & 2& 3 & 4 \\\hline\hline
\mathbf{\{1,2\}}& \mathbf{30} & \mathbf{30} &\mathbf{-}&\mathbf{-}\\\hline
\{1,3\}& 20 & - &30&-\\\hline
\{1,4\} & 30& 30 & - & -\\\hline
\{2,3\}&- & 30&30&-\\\hline
\{2,4\}&- & 20&-&30\\\hline
\end{array}
\hspace{5pt}
\begin{array}{|c||c|c|c|c|}
\hline
\xi & 1 & 2& 3 & 4 \\\hline\hline
\{3,4\}& - & -&30&30\\\hline
\{1\}& 30 & -& -&-\\\hline
\{2\}& - & 30 &-&-\\\hline
\{3\} & -& - & 30 & -\\\hline
\{4\}& - & -& - &30 \\\hline
\end{array}
\hspace{5pt}
$$
Consider the bid vector $b := (b^*_1, b^*_2, 30, 30).$ Notice that players 3 and 4 are indifferent to being serviced or not, as the single value they may be charged as payment equals their bid. Moreover, notice that either player 1 or player 2 (or both) must pay 30, strictly over their minimum payment 20 under
this restriction. Without loss of generality assume that $\xi(1,O(b)) = 30$. Consider the bid vector $b' := (b^*_1 , b^*_2 , b^*_3, -1)$. By VP and CS it
holds that $O(b') = \{1, 2, 3\}$ and thus $\xi(1,O(b')) < \xi(1,O(b))$.
Notice that the utilities of players $3$ and $4$ remain zero, and
consequently $\{1, 3, 4\}$ form a successful coalition.
In a similar manner we prove the existence of a successful coalition when $\xi(2,O(b)) = 30$.
\end{example}
\begin{example}[b]\label{ex:b}
Let $\mathcal{A}=\{1,2,3,4\}$. We construct a cost sharing scheme, such that condition (b) of Fence Monotonicity is not satisfied at $L=\{1,2\}$ and $U=\{1,2,3,4\}$ for player $3$, as follows.
$$\begin{array}{|c||c|c|c|c|}
\hline
\xi & 1 & 2& 3 & 4 \\\hline\hline
\mathbf{\{1,2,3,4\}} & \mathbf{30}& \mathbf{30} & \mathbf{30} & \mathbf{30}\\\hline
\mathbf{\{1,2,3\}}& \mathbf{30}& \mathbf{30}& \mathbf{40} &-\\\hline
\mathbf{\{1,2,4\}}& \mathbf{30}& \mathbf{30}&-& \mathbf{20}\\\hline
\{1,3,4\}& 30 & -& 30 &30 \\\hline
\{2,3,4\}&- & 30&30& 30\\\hline
\end{array}
\hspace{5pt}
\begin{array}{|c||c|c|c|c|}
\hline
\xi & 1 & 2& 3 & 4 \\\hline\hline
\mathbf{\{1,2\}}& \mathbf{30} & \mathbf{30} &-&-\\\hline
\{1,3\}& 30 & - &30&-\\\hline
\{1,4\} & 30& - & - & 30\\\hline
\{2,3\}&- & 30&30&-\\\hline
\{2,4\}&- & 30&-&30\\\hline
\end{array}
\hspace{5pt}
\begin{array}{|c||c|c|c|c|}
\hline
\xi & 1 & 2& 3 & 4 \\\hline\hline
\{3,4\}& - & -&30&30\\\hline
\{1\}& 30 & -& -&-\\\hline
\{2\}& - & 30 &-&-\\\hline
\{3\} & -& - & 30 & -\\\hline
\{4\}& - & -& - &30 \\\hline
\end{array}
$$
Consider the bid vector $b^3 := (b^*_1, b^*_2, 35, b^*_4 )$. Strategy\-proof\-ness implies that $3 \in O(b^3)$, since otherwise, if she is not
serviced (zero utility), she can misreport $b_3^*$,
changing the outcome to $\{1, 2, 3, 4\}$ and increasing her util\-ity to $35 -30>0$.
Next, consider the bid vector $b^4 :=
(b^*_1 , b^*_2, 30, 25)$.
Assume towards a contradiction that $4 \notin O(b^4)$. Moreover, notice that by VP
it is impossible that $3 \in O(b^4)$. Thus, $\{3, 4\}$ can form a
successful coalition bidding $b' = (b^*_1, b^*_2,-1,b^*_4 )$, changing
the outcome to $\{1, 2, 4\}$, increasing the utility of player $4$ to
$25-20>0$, while keeping the utility of player $3$ at zero. Hence group-stra\-te\-gy\-proofness implies that $4 \in O(b^4)$.
Finally, consider the bid vector $b^{3,4} := (b^*_1, b^*_2 , 35, 25)$.
Notice that $b^{3,4}$ differs from $b^3$ and $b^4$ in the coordinates
that correspond to players $4$ and $3$ respectively. As in the
case of $b^4$, the only possible outcomes by VP and CS at $b^{3,4}$
are $\{1, 2, 4\}$ and $\{1, 2\}$.
Assume that player 4 is serviced at $b^{3,4}$, which implies
that $\xi(4,O(b^{3,4})) < \xi(4,O(b^3))$. This contradicts strategy\-proofness, since when the true values
are $b^3$, player $4$ can bid according to $b^{3,4}$ in order to decrease
her payment and still be serviced.
Now, assume that player $4$ is not serviced at $b^{3,4}$. Then
$\{3, 4\}$ can form a successful coalition when the true values are
$b^{3,4}$ by bidding $b^4$, increasing the utility of player $4$ to $25-20>0$, while keeping the utility of player 3 at zero.
\end{example}
\begin{example}[c]
Let $\mathcal{A}=\{1,2,3,4\}$. This time we construct a cost sharing scheme such that part (c) is not satisfied for $L=\{1,2\}$ (or $\{1,2,3\}$)
and $U=\{1,2,3,4\}$, and specifically for $C=\{3,4\}$ and player $i=3$.
$$\begin{array}{|c||c|c|c|c|}
\hline
\xi & 1 & 2& 3 & 4 \\\hline\hline
\mathbf{\{1,2,3,4\}} & \mathbf{30}& \mathbf{30} & \mathbf{30} & \mathbf{30}\\\hline
\mathbf{\{1,2,3\}}&\mathbf{20}&\mathbf{20}& \mathbf{30} &\mathbf{-}\\\hline
\mathbf{\{1,2,4\}}&\mathbf{30}&\mathbf{30}&\mathbf{-}&\mathbf{30}\\\hline
\{1,3,4\}& 30 & -& 30 &30 \\\hline
\{2,3,4\}&- & 30&30& 30\\\hline
\end{array}
\hspace{5pt}
\begin{array}{|c||c|c|c|c|}
\hline
\xi & 1 & 2& 3 & 4 \\\hline\hline
\mathbf{\{1,2\}}& \mathbf{20} & \mathbf{20} &\mathbf{-}&\mathbf{-}\\\hline
\{1,3\}& 20 & - &20&-\\\hline
\{1,4\} & 30& 30 & - & -\\\hline
\{2,3\}&- & 20&20&-\\\hline
\{2,4\}&- & 30&-&30\\\hline
\end{array}
\hspace{5pt}
\begin{array}{|c||c|c|c|c|}
\hline
\xi & 1 & 2& 3 & 4 \\\hline\hline
\{3,4\}& - & -&\mathit{20}&30\\\hline
\{1\}& 30 & -& -&-\\\hline
\{2\}& - & 30 &-&-\\\hline
\{3\} & -& - & 30 & -\\\hline
\{4\}& - & -& - &30 \\\hline
\end{array}
\hspace{5pt}
$$
Suppose that the values are $b := (25, 25, b^*_3 ,30)$. The
only outcomes feasible by VP are $\{1, 2, 3\}, \{1, 3\}$, $\{2,3\}$, $\{3, 4\}$ and $\{3\}$. Notice that player $4$ has zero utility regardless of
the outcome.
Assume that $O(b) \neq \{1, 2, 3\}$ and w.l.o.g.\ that $1 \notin O(b)$.
Then $\{1, 2, 4\}$ can form a successful coalition bidding $b' :=
(b^*_1, b^*_2, b^*_3, -1)$, increasing the utility of player $1$ to $25-20>0$ without
decreasing the utility of player 2 (either $O(b) = \{2, 3\}$, in which case $\xi(2,O(b')) = \xi(2,O(b))$ and her utility remains the same, or
$2\notin O(b)$, in which case her utility increases to $25-20>0$, like that of player $1$) and keeping the utility of player 4 at zero.
Finally, assume that $O(b) = \{1, 2, 3\}$. Then $\{3, 4\}$ can
form a successful coalition bidding $b'':= (25, 25, b^*_3, b^*_4 )$.
Obviously $\{3, 4\} \subseteq O(b'')$. Notice that VP excludes each of the following outcomes: $\{1, 3, 4\},$ $\{2, 3, 4\}$ and $\{1, 2, 3, 4\}$. As a
result, $O(b'') = \{3, 4\}$ implying that the utility of player 3
increases, as $\xi(3,O(b)) > \xi(3,O(b''))$, while player 4 keeps
her utility at zero.
\end{example}
If you are given a cross-monotonic cost sharing scheme, it is rather straightforward how to construct a Moulin mechanism (a sketch is given below).
However, if you are given a cost sharing scheme that satisfies Fence Monotonicity, it is not straightforward how to construct a GSP mechanism. Fence Monotonicity has to be coupled with an allocation rule that satisfies a simple property, which we call Stability, and with a valid tie-breaking rule in order for the resulting mechanism to be GSP.
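For comparison, here is a minimal sketch of the Moulin construction in the same hypothetical Python setting as before (bids given as a dictionary from players to reals): starting from the full player set, it repeatedly drops every player whose bid is strictly below her current cost share and charges the survivors their cost shares. Dropping all violators at once is, for a cross-monotonic scheme, equivalent to dropping them one at a time, since removing players can only increase the remaining cost shares.
\begin{verbatim}
def moulin_mechanism(scheme, players, b):
    # Moulin mechanism for a cross-monotonic scheme: start from all players
    # and repeatedly drop anyone whose bid is strictly below her cost share.
    S = set(players)
    while True:
        dropped = {i for i in S if b[i] < xi(scheme, i, S)}
        if not dropped:
            break
        S -= dropped
    payments = {i: xi(scheme, i, S) for i in players}
    return S, payments
\end{verbatim}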
\subsection{Fencing Mechanisms}
Given a cost sharing scheme $\xi$ that satisfies Fence Monotonicity, we construct a mechanism that uses $\xi$ as its payment function and satisfies group-stra\-te\-gy\-proofness.
The mechanism takes as input the bids of the players and determines a pair of sets $L\subseteq U$, where $L$ is the set of players that are going to be serviced by the mechanism with strictly positive utility. On the other hand, $U\setminus L$ is the set of players that are indifferent between getting serviced or not, because their bid equals their payment, and we use a tie-breaking policy to determine which of these players will get serviced. The existence of a tie-breaking policy that does not violate group-stra\-te\-gy\-proofness is guaranteed by part (a) of Fence Monotonicity. The intuition behind the tie-breaking rule is that it is optimal for the players in $L$, in the sense that from all subsets of $U$ we choose to serve one where the players in $L$ achieve their minimum payments, provided that they all get serviced.
The mechanisms we design can be put in the following general framework: Given a bid vector as input, we search for a certain pair of sets $L\subseteq U\subseteq \mathcal{A}$ that meets the criteria of Stability defined below, and then we choose one of the allocations that service the players in $L$ according to a valid tie-breaking rule. If the search is exhaustive the resulting algorithm takes exponential time, and we do not know of any polynomial-time algorithm. However, if we restrict our attention to payments that satisfy certain conditions, like for example cross-monotonicity, we can come up with a polynomial-time algorithm for finding a stable pair.
\begin{defn}[Stability]
A pair $L,U$ is \emph{stable} at $b$ if the following conditions are true:
\begin{enumerate}
\item For all $i\in L$, $b_i>\m{i}$,
\item for all $i\in U\setminus L$, $b_i=\m{i}$ and
\item for every non-empty $R\subseteq \mathcal{A}\setminus U$, there is some $i\in R$, such that
$b_i<\mlu{i}{L}{U\cup R}$.
\end{enumerate}
\end{defn}
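In the same hypothetical Python setting as before, the three conditions of Stability translate directly into a brute-force check; the sketch below (exponential in the number of players outside $U$) is only meant to make the definition concrete.
\begin{verbatim}
from itertools import combinations  # xi_star as in the sketch above

def is_stable(scheme, players, L, U, b):
    # Check the three conditions of Stability for the pair (L, U) at bids b.
    L, U, A = frozenset(L), frozenset(U), frozenset(players)
    # 1. every player in L bids strictly more than her minimum payment at (L, U)
    if any(b[i] <= xi_star(scheme, i, L, U) for i in L):
        return False
    # 2. every player in U \ L bids exactly her minimum payment at (L, U)
    if any(b[i] != xi_star(scheme, i, L, U) for i in U - L):
        return False
    # 3. every non-empty R outside U contains a player bidding strictly
    #    below her minimum payment at (L, U union R)
    outside = A - U
    for r in range(1, len(outside) + 1):
        for extra in combinations(outside, r):
            R = set(extra)
            if not any(b[i] < xi_star(scheme, i, L, U | R) for i in R):
                return False
    return True
\end{verbatim}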
\begin{remark}
Assume that $\xi$ is cross-monotonic and let $S$ be the output of the Moulin mechanism for some bid vector $b$. Then the pair $L,U$, where $L=\{i\in S\mid b_i>\xi(i,S)\}$ and $U=S$,
is the unique stable pair at $b$.
\end{remark}
After identifying a stable pair, these mechanisms output a set $S$ with $L\subseteq S\subseteq U$, given by a tie-breaking function.
\begin{defn}
The mapping $\sigma:2^\mathcal{A}\times 2^\mathcal{A} \times \mathbb{R}^n \rightarrow 2^{\mathcal{A}}$ is a \emph{valid} tie-breaking rule for $\xi$,
if for all $b$ and all $L\subseteq U\subseteq \mathcal{A}$ such that $L,U$ is stable at $b$, the set $S=\sigma(L,U,b)$ satisfies $L\subseteq S\subseteq U$ and $\xi(i,S)=\m{i}$ for all $i\in S$. Part (a) of Fence Monotonicity guarantees that there exists at least one output $S$ with this property, and each different such output gives a different tie-breaking rule.
\end{defn}
\begin{defn}
We will say that a mechanism is a \emph{Fencing Mechanism} if at input $b$ it finds a stable pair $L,U$ at $b$, outputs $\sigma(L,U,b)$, where $\sigma$ is a valid tie-breaking rule and charges each player $i$, the value $\xi(i,\sigma(L,U,b))$.
\end{defn}
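Putting the hypothetical helpers together, a brute-force sketch of a Fencing Mechanism could look as follows: it enumerates pairs $L\subseteq U$ until it finds the stable one and then applies the simplest possible valid tie-breaking rule, serving some $S$ between $L$ and $U$ in which every served player pays her minimum payment. This is an illustration of the definitions, not an efficient implementation; in the cross-monotonic case it is consistent with the Remark above, where the Moulin outcome yields the unique stable pair.
\begin{verbatim}
from itertools import combinations  # xi, xi_star, is_stable as sketched above

def fencing_mechanism(scheme, players, b):
    # Brute-force Fencing Mechanism sketch: find the stable pair (L, U) at b,
    # then serve some S between L and U in which every served player pays her
    # minimum payment (such an S exists by condition (a) of Fence Monotonicity).
    A = list(players)
    subsets = [frozenset(c) for r in range(len(A) + 1)
               for c in combinations(A, r)]
    for L in subsets:
        for U in subsets:
            if not (L <= U and is_stable(scheme, A, L, U, b)):
                continue
            mins = {i: xi_star(scheme, i, L, U) for i in U}
            for r in range(len(U - L) + 1):
                for extra in combinations(U - L, r):
                    S = L | set(extra)
                    if all(xi(scheme, i, S) == mins[i] for i in S):
                        return S, {i: xi(scheme, i, S) for i in A}
    raise ValueError("no stable pair found at this bid vector")
\end{verbatim}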
It is easy to verify that every Fencing Mechanism satisfies VP, because the players that get serviced belong to the set $U$ and, by Stability, these players have non-negative utility; it also satisfies CS, because if a player bids higher than any of her payments, then again by Stability she belongs to the set $L$ and gets serviced.
\begin{remark}
Assume that for two distinct bid vectors $b$ and $b'$ the pair $L,U$ is stable. Then for a valid tie-breaking rule it may hold that $\sigma(L,U,b)\neq\sigma(L,U,b')$, which implies that the outcome of a Fencing Mechanism may change, though the utilities of the players remain unchanged. If we are not interested in considering the whole class of GSP allocations that arise from a cost-sharing scheme satisfying Fence Monotonicity, we can assume that the tie-breaking rule depends only on the sets $L$ and $U$.
\end{remark}
\begin{remark}
Moulin mechanisms are GSP and conseque\-nt\-ly they can be viewed as a special case of the general framework of Fencing Mechanisms. In Moulin mechanisms $\xi$ is cross-monotonic, and consequently the bigger the set of serviced players, the lower the cost for each one of them. Thus it holds for all $L\subseteq U\subseteq \mathcal{A}$ that for all $i \in U$, $\xi(i,U)=\m{i}$, and the mapping $\sigma(L,U,b)=U$, for all $L\subseteq U\subseteq \mathcal{A}$ and all $b$ such that $L,U$ is stable, is a valid tie-breaking rule. This simplifies the algorithm substantially, as we just have to find a set $U$ of players that can be serviced with utility greater than or equal to zero. Moreover, as the payment of each player increases when the set of serviced players becomes smaller, we need the set $U$ of maximal cardinality.
\end{remark}
\begin{theorem}\label{theo:gsp<=>bs}
A mechanism is group-strategyproof if and only if it is a Fencing Mechanism.
\end{theorem}
\section{Every GSP Mechanism is a Fencing Mechanism} \label{nec}
\subsection{Necessity of Fence Monotonicity }
In this section we prove that the payment function of every GSP mechanism satisfies Fence Monotonicity. A natural approach would be to assume that one condition of Fence Monotonicity is violated at some $L,U$, and to prove that it is impossible that the corresponding cost sharing scheme gives rise to a GSP mechanism. It turns out that our lack of knowledge of the payment function renders this approach unlikely to be fruitful.
Therefore, we will follow an alternative method. We select an arbitrary GSP mechanism and consider some $U\subseteq \mathcal{A}$. We show that for every $L\subseteq U$ the cost sharing scheme satisfies each one of the three conditions of Fence Monotonicity using induction on $|U \setminus L|$. When proving the induction step we also reveal several important allocation properties for GSP mechanisms.
\textbf{Base:} For $|U \setminus L|=0$, i.e.\ $L=U$, every part of Fence Monotonicity is trivially satisfied, as follows.
Condition (a): It holds that $\mlu{i}{U}{U}=\xi(i,U)$, since the minimum in the definition of $\xi^*$ is taken over the single possible outcome $U$.
Condition (b): This condition holds trivially as $U\setminus L=\emptyset$.
Condition (c):
Regardless of whether the ``if'' condition of this part is true, it holds that for all $C\subset U$ we can set
$T=U \setminus C$, since for all $j\in T$, $\xi(j, C\cup T)=\xi(j, U)=\mlu{j}{L}{U}$.
\textbf{Induction Step:} Proving the induction step requires some definitions that allow the effective use of the induction hypothesis in order to identify a successful coalition if some part is violated. We first define the notion of a harm relation and prove that it is a strict partial order.
\subsubsection*{Harm relation}
\begin{lemma}\label{lem:xicompare}
If $U\subseteq U_1$ and $L_1\subseteq L$, then for all $i \in U$, $\m{i}\geq\mlu{i}{L_1}{U_1}$.
\end{lemma}
\begin{defn}[Harm]
We say that $i$ harms $j$, where $i,j\in U$, if and only if $\mlu{j}{L}{U}<\mlu{j}{L\cup\{i\}}{U}$.
\end{defn}
Consequently, for all distinct $i,j\in U$, $i$ either \emph{harms} $j$ or
otherwise it holds that $\m{j}=\mlu{j}{L\cup \{i\}}{U}$ (from Lemma \ref{lem:xicompare}). Trivially every $i \in L$ \emph{does not harm} any other player $j \in U$.
\begin{claim}\label{cla:harm}
The harm relation satisfies anti-symmetry and transitivity, and consequently it is a strict partial order.
\end{claim}
\begin{corollary}\label{cor:harmdag}
The sub-graph $G[U \setminus L]$ of the harm relation induced on $U\setminus L$ is a directed acyclic graph.
\end{corollary}
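In the hypothetical Python setting used earlier, the harm relation and its induced graph on $U\setminus L$ can be transcribed directly from the definition:
\begin{verbatim}
def harms(scheme, i, j, L, U):
    # Player i harms player j at (L, U) iff forcing i into the serviced set
    # strictly raises j's minimum payment.
    L, U = frozenset(L), frozenset(U)
    return xi_star(scheme, j, L, U) < xi_star(scheme, j, L | {i}, U)

def harm_graph(scheme, L, U):
    # Directed graph of the harm relation restricted to U \ L; by the
    # Corollary it contains no directed cycles.
    nodes = frozenset(U) - frozenset(L)
    return {i: {j for j in nodes if j != i and harms(scheme, i, j, L, U)}
            for i in nodes}
\end{verbatim}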
\subsubsection*{Condition (a) of Fence Monotonicity }
The core idea that we will use for the proof of conditions (a) and (b) is to construct bid vectors where VP and CS restrict the possible outcomes to be subsets of $U$ that contain every player in $L$, while for condition (c) we will restrict the possible outcomes to be subsets of $U$. Therefore, we assume that every player in $L$ has bidden a very high value and every player in $\mathcal{A}\setminus U$ has bidden a negative value.
For the proof of condition (a), we also want the players in $U\setminus L$ to be indifferent between being serviced and getting excluded from the outcome, i.e. they have zero utility. Thus, we assume that every player in $U \setminus L$ has bidden exactly her minimum payment $\xi^*$ at $L,U$.
We first use the induction hypothesis and the properties of the harm relation to prove the following Lemma, which is a condition somewhat milder than condition (a) of Fence Monotonicity.
\begin{lemma}\label{lem:a-milder}
For every $j \in L$, there is some set $S_j$, where $L\subseteq S_j \subseteq U$ such that for all $ i \in S_j \setminus L$, $\xi(i , S_j)=\m{i}$ and $\xi(j, S_j)= \m{j}$.
\end{lemma}
Then we use the preceding Lemma to show that if the cost-sharing scheme does not satisfy condition (a) of Fence Monotonicity then there exists a successful coalition.
\begin{lemma}\label{lem:a-alloc}
At the bid vector $b$, where for all $i \in L$, $b_i=b^*_i$, for all $i \in U \setminus L$, $b_i= \m{i}$ and for all $i \notin U$, $b_i =-1$, it holds that $L\subseteq O(b)\subseteq U$ and that for all $i \in O(b)$, $\xi(i, O(b))=\m{i}$.
Setting $S=O(b)$, condition (a) of Fence Monotonicity is satisfied at $L,U$.
\end{lemma}
\subsubsection*{Condition (b) of Fence Monotonicity }
We consider now the players in $U \setminus L$. Using the induced sub-graph $G[U\setminus L]$ of the harm relation, we can discriminate them by whether a player is a \emph{sink} of this graph or not. First we consider the sinks, as the satisfaction of the second condition of Fence Monotonicity for a sink is an immediate consequence of the induction hypothesis.
\begin{claim}\label{cla:sinkandb}
For every sink $k$ of $G[U\setminus L]$ condition (b) of Fence Monotonicity is satisfied at $L,U$.
\end{claim}
We continue with the rest of the players in $U \setminus L$.
\begin{claim}\label{claim:harmandU-L}
For every $j \in U \setminus L$ one of the following holds: either $j$ is a sink of the sub-graph $G[U \setminus L]$, or there is a sink $k$ such that $j$ harms $k$.
\end{claim}
Now consider a player $j$ in $U\setminus L$ that is \emph{not a sink} of $G[U\setminus L]$, and let $k$ be a \emph{sink} of $G[U\setminus L]$ such that $j$ \emph{harms} $k$. In order to prove that the second condition is satisfied for $j$, we will invoke group-stra\-te\-gy\-proofness at certain bid vectors (trying to generalize Example \ref{ex:b}, where $j$ takes the role of player $3$ and $k$ the role of player $4$). Prior to defining these inputs, we prove another allocation property.
\begin{lemma}\label{lem:b-alloc}
Consider some $L'\subseteq U'\subseteq \mathcal{A}$. Assume that the set $S_j$, as in the definition of condition (b) of Fence Monotonicity, exists for some $j\in U'\setminus L'$. At any bid vector $b^j$ such that for all $i \in L'$, $b^j_i=b^*_i$, $b^j_j> \mlu{j}{L'}{U'}$, for all $i \in U' \setminus (L'\cup\{j\})$, $b^j_i= \mlu{i}{L'}{U'}$, and for all $i \notin U'$, $b^j_i =-1$, it holds that $j \in O(b^j)$ and $\xi(j, O(b^j))=\mlu{j}{L'}{U'}$.
\end{lemma}
Let $\epsilon$ be a very small positive number, smaller than any positive payment difference. We construct two bid vectors at which we can characterize the allocation of the mechanism. First, consider the bid vector $b^k$, where for all $i \in L$, $b^k_i=b^*_i$, $b^k_k=\m{k}+\epsilon$, for all $i \in U \setminus (L\cup\{k\})$, $b^k_i=\m{i}$ and for all $i\notin U$,
$b^k_i=-1$.
\begin{claim} \label{claim:bk-alloc}
At the bid vector $b^k$ the following hold
(a) Player $k$ is serviced and charged $\m{k}$.
(b) Player $j$ is not serviced.
\end{claim}
Second, consider the bid vector $b^j$, where for all $i \in L\cup\{k\}$, $b^j_i=b^*_i$, $b^j_j=\m{j}+\epsilon$, for all $i \in U \setminus (L\cup\{k,j\})$, $b^j_i=\m{i}$ and for all $i\notin U$,
$b^j_i=-1$.
\begin{claim}\label{claim:bj-alloc}
At the bid vector $b^j$ the following hold
(a) For all $i \in U\setminus (L\cup\{j,k\})$, $b^j_i=\mlu{i}{L\cup\{k\}}{U}$ and $b^j_j=\mlu{j}{L\cup\{k\}}{U}+\epsilon$.
(b) Player $j$ is serviced and charged $\m{j}$.
(c) Player $k$ is serviced and charged more than $\m{k}$.
\end{claim}
Finally, we construct the intermediate bid vector $b^{j,k}$, where player $j$ bids according to $b^j$ ($\m{j}+\epsilon$), player $k$ bids according to $b^k$ ($\m{k}+\epsilon$), and every other player bids the same value as in both bid vectors.
\begin{claim} \label{claim:bjk-alloc}
At the bid vector $b^{j,k}$ the following hold
(a) Player $k$ is not serviced at $b^{j,k}$.
(b) Player $j$ is serviced at $b^{j,k}$.
(c) $L\subseteq O(b^{j,k})\subseteq U$ and every player $i \in O(b^{j,k})\setminus L$, is charged $\m{i}$.
\end{claim}
As a result, setting $S_j=O(b^{j,k})$, we conclude that the second condition is satisfied for any $j \in U\setminus L$ that is not a sink of $G[U\setminus L]$ as well.
\subsubsection*{Condition (c) of Fence Monotonicity }
To show that the cost-sharing scheme satisfies the third condition of Fence Monotonicity at $L,U$, we need the induction hypothesis only for showing (as we have already done) that condition (a) of Fence Monotonicity is satisfied at this pair, and specifically the allocation properties of Lemma~\ref{lem:a-alloc}.
The main idea is to define two families of bid vectors, which both contain the previous special case; moreover, the first family of inputs is a subset of the second.
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& $L$ & $U\setminus L$ & $\mathcal{A}\setminus U$ \\\hline\hline
Special Case & $b^*_i$ & $\m{i}$ & $-1$ \\\hline
First Family & $>\m{i}$ & $\m{i}$ & $-1$\\\hline
Second Family & $>\m{i}$ & $\in\mathbb{R} $& $-1$\\\hline
\end{tabular}
\end{center}
The use of induction was one of the basic techniques in~\cite{imm08}; however, here we need to use induction in a more powerful way. In~\cite{imm08} the authors first fix an ordering of the players and then apply induction, while here we start from a bid vector that satisfies a certain property (induction base) and use induction on the number of coordinates at which the new bid vector differs from the bid vector used in the induction base. This allows us to use the induction hypothesis more effectively, as we can alter the coordinates of our choice first, instead of selecting an ordering and then formulating the induction statement. While the proof of the allocation properties for the first family does not require the advantage of this technique, it is the essence of our proof of the corresponding allocation properties for the second family.
Notice that the special input differs from the bid vectors of the first family only in the bid coordinates that correspond to players in $L$. We apply our technique and extend the implications of Lemma~\ref{lem:a-alloc} for every bid vector in the first family.
\begin{lemma}\label{lem:char1}
For every bid vector $b$, where for all $i \in L$, $b_i>\m{i}$, for all $i \in U \setminus L$,
$b_i =\m{i}$ and for all $i \notin U$, $b_i=-1$, it holds that $L\subseteq O(b)\subseteq U$ and for all $i \in O(b)$, $\xi(i, O(b))=\m{i}$.
\end{lemma}
Next we provide a weaker allocation property about the inputs of the second family. We arbitrarily select a vector that belongs to the first family and then apply our technique for proving this property for every input that is ``reachable" by our initial vector by altering the coordinates that correspond to the players in $U\setminus L$. The arbitrary selection of the initial vector ensures that our statement holds for every input that belongs to the second family.
\begin{lemma}\label{lem:char2}
For every bid vector $b$, where for all $i \in L$, $b_i>\m{i}$, for all $i \in U \setminus L$, $b_i \in \mathbb{R}$ and for all $i \notin U$, $b_i=-1$, it holds that for all $i \in O(b)$, $\xi(i, O(b))\geq\m{i}$.
\end{lemma}
This Lemma may shed some light on the understanding of GSP mechanisms and can be interpreted as follows: Assume that the players in $\mathcal{A} \setminus U$ are uninterested in participating. Now, if all the bids of the players in a set $L\subseteq U$ have surpassed their respective minimum payments at $L,U$, then a GSP mechanism never excludes a group of players in $L$ from the outcome in order to charge a serviced player less than the minimum payment imposed by their presence. Loosely speaking, the players in $L$ ``fence'' any outcome $C\subset U$ such that there is some $j \in C$ with $\xi(j,C)<\m{j}$. Since this ``fencing'' phenomenon must be true for arbitrary bids of the players in $U\setminus L$, it follows that every GSP cost-sharing scheme must satisfy condition (c) of Fence Monotonicity, as shown below.
\begin{claim}\label{claim:bc-alloc}
We construct the bid vector $b^C$ as follows: For all $i \in C$, $b^C_i=b^*_i$, for all $i \in L\setminus C$, $b^C_i=\m{i}+\epsilon$ and for all $i \notin C\cup L$, $b^C_i=-1$.
For the bid vector $b^C$ it holds that:
(a) For all $i\in O(b^C)$ it holds that $\xi(i,O(b^C))\geq\m{i}$.
(b) $C\subset O(b^C) \subseteq L\cup C$.
(c) For all $i \in O(b^C) \setminus C$, it holds that
$\xi(i,O(b^C))=\m{i}$.
\end{claim}
Setting $T=O(b^C)\setminus C$ we complete the proof of the third condition.
\subsection{Necessity of Stability and Valid tie-breaking}
In the last part, we show that the allocation of every GSP mechanism satisfies Stability and uses a Valid tie-breaking rule. We already know that the payment function satisfies Fence Monotonicity and thus we can prove the following generalization of Lemma \ref{lem:char1}.
\begin{lemma}\label{lem:char3}
Let $L\subseteq U\subseteq \mathcal{A}$. For every bid vector $b$, such that for all $i \in L$, $b_i>\m{i}$, for all $i \in U\setminus L$, $b_i=\m{i}$ and for every non-empty $R\subseteq \mathcal{A}\setminus U$, there is some $i \in R$ such that $b_i<\mlu{i}{L}{U\cup R}$, it holds that $L\subseteq O(b)\subseteq U$ and for all $i \in O(b)$, $\xi(i,O(b))=\m{i}$.
\end{lemma}
Lemma \ref{lem:exists} implies that there is a stable pair at every input. Thus, given a bid vector, we may apply Lemma \ref{lem:char3} with $L,U$ being the corresponding stable pair at this input, and get that $L\subseteq O(b)\subseteq U$ (Stability) and for all $i \in O(b)$, $\xi(i,O(b))=\m{i}$ (Validity of the tie-breaking rule).
\section{The classes of GSP and Fencing Mechanisms coincide}
In this section, we complete our characterization by proving that Fencing Mechanisms are GSP.
\subsection{Properties of Stable pairs}
\begin{lemma}\label{lem:maxu}
For every bid vector $b$ and set $L$ with $L\subseteq \mathcal{A}$, there exists a unique maximal set $U$, $U \supseteq L$, such that for all $i \in U \setminus L$ we have $b_i \geq \m{i}$ and any other set with the same property is a subset of $U$.
Moreover, the pair $L,U$ satisfies condition 3 of Stability.
\end{lemma}
Consider all possible outcomes of the mechanism that contain every player in $L$. Now we remove all the sets $S$ in which at least one player $i \in S\setminus L$ has bidden strictly less than her payment in $S$. The set $U$ is the union of all the sets that remain after this filtering. Notice that this characterization is only guaranteed to be correct if the underlying cost sharing scheme satisfies condition (b) of Fence Monotonicity.
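A direct (exponential) transcription of this union characterization into the hypothetical Python setting used earlier could look as follows:
\begin{verbatim}
from itertools import combinations  # xi as in the sketch above

def maximal_U(scheme, players, L, b):
    # Union characterization of the maximal set U: take every S with
    # L <= S <= A in which no player of S \ L bids strictly below her
    # payment, and return the union of all such sets.
    A, L = frozenset(players), frozenset(L)
    U = set(L)
    free = A - L
    for r in range(len(free) + 1):
        for extra in combinations(free, r):
            S = L | set(extra)
            if all(b[i] >= xi(scheme, i, S) for i in S - L):
                U |= S
    return U
\end{verbatim}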
Furthermore, notice that we have not yet shown that there is a stable pair at every input. The next two properties will be used together in our proofs as a criterion for whether a pair $L,U$ is stable at a given bid vector $b$.
\begin{lemma}\label{lem:uncov2}
Suppose that $L,U$ is a stable pair at the bid vector $b$ and that $S$ is a set with the property that for all $i\in S$ we have $b_i-\xi(i,S)\geq 0$. (If the mechanism were to output $S$, then all players in $S$ would be served with non-negative utility.) If
(a) $S\not\subseteq U$, or
(b) $S\subseteq U$ and for some $i\in S$ we have $\m{i}>\xi(i,S)$,
\\
then there exists some non-empty set $T\subseteq L\setminus S$, such that for all $j\in T$ we have $\xi(j,S\cup T)<b_j$.
\end{lemma}
This lemma is an immediate consequence of part (c) of Fence Monotonicity and the definition of stability.
\begin{lemma}\label{lem:cov3}
Suppose that $L\subseteq S\subseteq U$ and that there exists a non-empty $T\subseteq \mathcal{A}\setminus S$ such that for all $i\in T$ we have $b_i\geq \xi(i,S\cup T)$ and that for at least one player from $T$ the inequality is strict.
Then $L,U$ is not a stable pair at the bid vector $b$.
\end{lemma}
\subsection{Uniqueness and group-stra\-te\-gy\-proofness}
\begin{lemma}\label{lem:unique}
If for some bid vector $b$ there exists a stable pair then it is unique.
\end{lemma}
A crucial point of our proof is that, for the following step, we assume that if there exists a stable pair the mechanism produces an outcome, and otherwise it terminates without providing an answer. We will first show that the mechanism is GSP whenever there exists a stable pair. Then we will use the fact that the mechanism satisfies group-stra\-te\-gy\-proofness for inputs that have a stable pair to prove the existence of a stable pair for every input.
\begin{lemma}
For the inputs where there exists a stable pair, every Fencing Mechanism is group-strategyproof.
\end{lemma}
\begin{proof}
Let $b$ and $b'$ be two bid vectors, and let $L,U$ and $L',U'$ be their corresponding unique (from Lemma \ref{lem:unique}) stable pairs and $O(b)$ and $O(b')$ the corresponding outputs. Assume towards a contradiction that some of the players can form a successful coalition when the true values are $b$ reporting $b'$.
We will first show that any player $i$ served in the new outcome, $i\in O(b' )$, has non-negative utility i.e. $b_i\geq\xi (i,O(b' ))$. Take some $i$ that is output in $O(b')$. If $b_i=b'_i$ then it holds trivially since the mechanism satisfies VP at the outcome $O(b')$. If $b_i\neq b'_i$ then $i$ changes his bid to be part of the coalition and consequently his utility after this coalition is non-negative, which gives $b_i-\xi(i,O(b'))\geq 0$.
The next step is to apply Lemma \ref{lem:uncov2} to show that there exists some non-empty $T\subseteq L\setminus O(b')$ such that for all $i\in T$ we have $b_i>\xi(i,O(b')\cup T)$. If $O(b')\not\subseteq U$ then the premises of the Lemma hold trivially.
Suppose that $O(b')\subseteq U$. For the coalition to be successful, the utility of at least one player $j$ increases strictly when the players bid $b'$; consequently $j\in O(b')$ and $b_j-\xi(j,O(b'))> 0$. We will show that $\m{j}>\xi(j,O(b'))$. If $j$ is not served at $b$, then since $O(b')\subseteq U$ and from stability we get that $j\in U\setminus L$ and $b_j=\m{j}>\xi(j,O(b'))$. If $j$ is served at $b$, then her payment equals $\m{j}$ by the definition of the mechanism, and in order for her to profit strictly it must be that $\xi(j,O(b'))<\m{j}$, so we can again apply Lemma \ref{lem:uncov2}.
Finally we will show that for all $i\in T$ we have $b_i=b'_i$. After the manipulation the players in $T$ are not serviced, while as $T\subseteq L$ from stability we have that in the truthful scenario the players in $T$ are serviced with positive utility. Consequently the players in $T$ wouldn't have an incentive to be part of the coalition and change their bids.
Putting everything together we get that there exists a $T\subseteq \mathcal{A} \setminus O(b')$ such that for all $i\in T$ we have $b'_i>\xi(i,O(b') \cup T)$, which by Lemma \ref{lem:cov3} contradicts our initial assumption that $L',U'$ is stable at $b'$.
\end{proof}
\subsection{Existence of a stable pair for every input}
\begin{lemma}\label{lem:exists}
For every bid vector $b$ there exists a unique stable pair.
\end{lemma}
\begin{proof}
Let $b_i^*$ be some value big enough that player $i$ always gets serviced; one can take $b_i^*>\max_{S\subseteq \mathcal{A}}\xi(i,S)$, since the allocation of the mechanism satisfies Stability. We will show that there exists a stable pair at any input $b$ by induction on the number $m$ of coordinates that are less than $b_i^*$, i.e. on the number $m=|\{i\mid b_i<b_i^*\}|$.
\textbf{Base:} For $m=0$, we only have to show that there exists a stable pair for the bid vector $(b_1^*,\ldots,b_n^*)$, and indeed $\mathcal{A},\mathcal{A}$ is a stable pair.
\textbf{Induction Step:} Suppose that if a bid vector has $m-1$ coordinates that are less than $b_i^*$, then it has a stable pair. We will show that if a bid vector $b$ has $m$ coordinates that are less than $b_i^*$ then it also has a stable pair. We will suppose towards a contradiction that there exists no stable pair at $b$.
We first need some definitions. Let $L^*:=\{i\mid b_i\geq b_i^*\}$ and $U^*$ the corresponding (from Lemma \ref{lem:maxu}) maximal set. Notice
that for all $i \in L^*$, $b_i \geq b^*_i > \mlu{i}{L^*}{U^*}$ by the definition of $b^*_i$, which implies that the pair $L^*,U^*$ satisfies the first condition of stability. Moreover, from Lemma \ref{lem:maxu} we get that the third condition of stability is satisfied for $L^* , U^*$ as well. Thus, as $L^*,U^*$ cannot be stable at $b$ (from our assumption), the set $W=\{i\mid i\in U^*\setminus L^* \text{ and } b_i>\mlu{i}{L^*}{U^*}\}$ should be non-empty.
For each $i\in W$ we define a corresponding pair $L_i,U_i$ as follows: The pair $L_i,U_i$ is the unique (by Lemma \ref{lem:unique}) stable pair of $(b_i^*,b_{-i})$, which exists by the induction hypothesis.
\begin{claim}\label{claim:inclusion*}
If $L_i,U_i$ is the stable pair of $(b_i^*,b_{-i})$, then
(a) $L^*\cup\{i\}\subseteq L_i$ and (b) $U_i\subseteq U^*$.
(c) If $j\in L_i\setminus (L^*\cup \{i\})$ then $j\in W$ and $b_j>\mlu{j}{L^*}{U^*}$.
\end{claim}
\begin{claim}\label{claim:subset}
If there exists no stable pair at $b$, then for all $i\in W$ we have
(a) $b_i\leq \mlu{i}{L_i}{U_i}$.
(b) $L^*\cup \{i\}\subset L_i$.
\end{claim}
Since $W\neq \emptyset$ there exists some $i\in W$ such that $L_i$ has minimum cardinality, i.e. $i=\arg\min_{i\in W}|L_i|$. By Claim \ref{claim:subset} (b) there exists some $j\in L_i\setminus (L^*\cup \{i\})$ and by Claim \ref{claim:inclusion*} (c) $j \in W$. We will show that $L_j\subset L_i$ contradicting the choice of $i$.
\begin{claim}\label{claim:notgsp}
If there exists no stable pair at $b$ and $j\in L_i\setminus (L^*\cup \{i\})$ then
(a) The pair $L_i,U_i$ is stable at the bid vector $(b_i^*,b_j^*,b_{-\{i,j\}})$.
(b) $\mlu{j}{L_j}{U_j}>\mlu{j}{L_i}{U_i}$ and $i\notin L_j$.
(c) If there exists some $k\in L_j\setminus L_i$, then the mechanism is not group-stra\-te\-gy\-proof at inputs where a stable pair exists.
\end{claim}
Now, since $j\in W$, from Claim \ref{claim:notgsp} we get that $i \notin L_j$ and that every $k\in L_j$ also belongs to $L_i$; hence $L_j\subset L_i$, which completes the proof.
\end{proof}
\section{A lower bound for the Budget Balance of any GSP mechanism}
In the last section we demonstrate the use of our character\-ization by showing that even in the case of three players
there is a family of cost functions parameterized with a vari\-able $x$, where every GSP mechanism cannot achieve better
budget-balance than $\frac{1}{x}$.
First, we show some consequences of Fence Monotonicity for small numbers of players that will simplify its use.
\begin{proposition}\label{prop:neg-neu}
Let $\xi$ be a cost sharing scheme that satisfies Fence Monotonicity. For all $S\subseteq\mathcal{A}$ and two distinct $i,j\in S$, if $\xi(j,S \setminus \{i\})<\xi(j,S)$ then
(a) For all $k\in S\setminus \{i,j\}$, $\xi(k,S\setminus\{i\})\leq\xi(k,S)$
(b) $\xi(i,S\setminus \{j\})\leq\xi(i,S)$.
(c) $\xi(i,S\setminus \{j\})\geq\xi(i,S)$.
(part (a) is an alternative definition of semi-cross monotonicity (\cite{imm08}))
\end{proposition}
Every part of the preceding Proposition is implied by the corresponding condition of Fence Monotonicity. Moreover, parts (b) and (c) fully characterize the
case of two players. However, this Proposition is far from characterizing the case of three players, as there are many
other constraints that are not captured by its implications.
\begin{theorem}\label{theo:low}
Let $\mathcal{A}=\{1,2,3\}$. Consider the cost function defined on $\mathcal{A}$ as follows: $C(\{1,2\})=C(\{1,3\})=1$, $C(\{1\})=C(\{2\})=C(\{3\})=x$,
$C(\{2,3\})=x^2+x$ and $C(\{1,2,3\})=x^3+x^2+x$, where $x\geq 1$.
There is no $\frac{1}{x}$-budget balanced cost sharing scheme that satisfies Fence Monotonicity.
\end{theorem}
It is important to understand that, for the proof of this theorem,
we use every implication of Proposition~\ref{prop:neg-neu} and thus every
condition of Fence Monotonicity.
\section{Complexity and open questions}
Does there exist a polynomial-time algorithm for finding the allocation of a cost-sharing scheme that satisfies Fence Monotonicity, or can we show that the problem of finding a stable pair is computationally hard?
A natural question that arises in this context is whether it is computationally more efficient to find the appropriate outcome than to identify the stable pair.
Suppose that you have an algorithm that computes the outcome of a GSP mechanism.
It is rather straightforward how to compute the lower set $L$ of the stable pair, simply by checking which players have positive utility. What remains is to find the upper set $U$, which is exactly the maximal set, as defined in Lemma \ref{lem:maxu}.
\begin{theorem}\label{theo:compl}
Suppose that we are given the outcome of a gr\-oup-str\-at\-egy\-pro\-of mechanism at $b$. Given that we have already computed $\m{i}$ for all $L\subseteq U\subseteq \mathcal{A}$ and all $i \in U$, there is a polynomial time algorithm for identifying its stable pair.
\end{theorem}
We believe that some other interesting directions for future research are the following: How can our characterization be applied to obtain cost-sharing mechanisms with better approximation and budget-balance guarantees, or lower bounds, for specific problems? And finally, can our techniques be extended to obtain a characterization of weakly group-stra\-te\-gy\-proof cost-sharing schemes?
\section*{Acknowledgements}
We would like to thank Elias Koutsoupias for suggesting the problem, as well as for many very helpful insights and discussions. We would also like to thank Janina Brenner, Nicole Immorlica, Evangelos Markakis, Tim Roughgarden, and Florian Schoppmann for helpful discussions and some pointers in the bibliography.
\appendix
\section{Missing proofs of Section~\ref{nec}}
\subsubsection*{Proof of Lemma \ref{lem:xicompare}}
It is obvious that the minimum payment of player $i$ can only decrease as the set of outcomes, over which the minimum in the definition of $\xi^*$ is taken, becomes larger.\qed
\subsubsection*{Proof of Claim \ref{cla:harm}}
We show this using the induction hypothesis at every $L\cup\{i\},U$ for every $i \in U \setminus L$, and more specifically condition (c) of Fence Monotonicity.
To show anti-symmetry we need to consider only elements of $ U\setminus L$, as it is trivially impossible that some $i \in L$ harms any other element. Consider two distinct $i,j\in U \setminus L$ and assume that $i$ \emph{harms} $j$, that is $\m{j}<\mlu{j}{L\cup\{i\}}{U}$. From the definition of $\xi^*$ there is a set $S_j$, where $j \in S_j$, $L\subseteq S_j\subseteq U$ and
$\xi(j,S_j)=\m{j}$. Notice that by our assumption it is impossible that $i \in S_j$, since $\mlu{j}{L\cup\{i\}}{U}>\xi(j,S_j)$. It follows that $S_j\subset U$ and by using condition (c) of Fence Monotonicity at $L\cup\{i\},U$, we get that
$\xi(i,S_j\cup\{i\})=\mlu{i}{L\cup\{i\}}{U}=\m{i}$ (the only non-empty subset of
$(L\cup\{i\}) \setminus S_j$ is $\{i\}$). Since $L\cup\{j\}\subseteq S_j\cup\{i\}\subseteq U$ we get that $\mlu{i}{L\cup\{j\}}{U}=\m{i}$, i.e. $j$ \emph{does not harm} $i$.
We show transitivity of the harm relation in a similar manner. Consider three distinct players $i$, $j$ and $k$, where $i,j\in U \setminus L$ and $k \in U$ (may also belong to $L$). Assume now that $i$ \emph{harms} $j$, $j$ \emph{harms} $k$ while $i$ \emph{does not harm} $k$. We will show that this contradicts our induction hypothesis. Since $\m{k}=\mlu{k}{L\cup\{i\}}{U}$, it follows that there is some set $S_k$, where $L\cup\{i\}\subseteq S_k\subseteq U$ such that
$\xi(k,S_k)=\m{k}$. Using similar arguments as in the previous part we show that $\xi(j,S_k\cup\{j\})=\m{j}$ and, as $L\cup\{i\}\subseteq S_k\cup\{j\}\subseteq U$, we reach a contradiction with our assumption that $\m{j}<\mlu{j}{L\cup\{i\}}{U}$.
\qed
\subsubsection*{Proof of Lemma \ref{lem:a-milder}}
We first have to prove the following Claim, which is an immediate consequence of the fact that the harm relation is a strict partial order.
\begin{claim}\label{claim:sinkandL}
For every $j \in L$ one of the following holds: either every $i \in U \setminus L$ \emph{harms} $j$, or
there is a $k \in U \setminus L$ that \emph{does not harm} $j$ and is also a \emph{sink} of $G[U\setminus L]$.
\end{claim}
\begin{proof}[of Claim \ref{claim:sinkandL}]
Suppose that there is some $i \in U \setminus L$ that \emph{does not harm} $j$.
If $i$ is a sink of $G[U\setminus L]$, setting $k=i$ completes our proof. Otherwise, there must be a directed path that starts at $i$ and does not stop there. Let $k$ be the sink at which this path ends ($G[U\setminus L]$ is a directed acyclic graph).
Notice that transitivity implies that $i$ \emph{harms} $k$. Thus, it is impossible that $k$ \emph{ harms }$j$, since using transitivity again we would deduce that $i$ \emph{harms} $j$ contradicting our assumption.
\end{proof}
Consider some $j \in L$. We split the proof in two cases as Claim \ref{claim:sinkandL} indicates.
\textbf{Case 1:} Suppose that every $ i \in U \setminus L$ \emph{harms} $j$. This implies that $\m{j}=\xi(j,L)$, and hence we can set $S_j=L$: the requirements of Lemma \ref{lem:a-milder} are trivially satisfied, since $S_j \setminus L=\emptyset$.
\textbf{Case 2:} Consider some sink $k$ that \emph{does not harm} $j$. Using the induction hypothesis at $L \cup \{k\},U$, and
in particular part (a), we get that there is a set $S$, where $L\cup\{k\}\subseteq S\subseteq U$, such that for all $i \in S$, $\xi(i, S)=\mlu{i}{L\cup\{k\}}{U}$. Using the fact that $k$ is a sink and also \emph{does not harm} $j$ we get that for all $i \in S \setminus L$, $\mlu{i}{L\cup\{k\}}{U}=\m{i}$ and $\mlu{j}{L\cup\{k\}}{U}=\m{j}$. As a result, we can set $S_j=S$. \qed
\subsubsection*{Proof of Lemma \ref{lem:a-alloc}}
By CS and VP the output of the mechanism satisfies $L\subseteq O(b)\subseteq U$. First note that in any case a player $i\in U\setminus L$ has utility zero: either she is not serviced, or, by VP, if she is serviced her payment cannot exceed her bid and cannot be less than her minimum payment $\m{i}$, so $\xi(i,O(b))= b_i=\mlu{i}{L}{U}$. What remains is to show that every player $j\in L$ is also served and charged $\mlu{j}{L}{U}$.
Suppose towards a contradiction that for some player $j\in L$, $\xi(j,O(b))>\mlu{j}{L}{U}$ (if the set $U\setminus L$ is empty, i.e. $U=L$, it is impossible that some $j \in L$ is charged more than $\mlu{j}{L}{U}=\xi(j,L)$). Then she could form a coalition with the players in $U\setminus L$, who would enforce the set $S_j$, where $S_j$ is the set guaranteed to exist by Lemma~\ref{lem:a-milder}, to be the output; i.e. the players $i\in U\setminus L$ could change their bids to $b_i^*$ if $i\in S_j$ and to $-1$ if $i\notin S_j$, so that by CS and VP the output is $S_j$.
Since for every $i \in S_j\setminus L$, $\xi(i,S_j)=\mlu{i}{L}{U}$ their utilities remain zero after this manipulation (the same holds trivially for all $i \in U\setminus S_j$), while the utility of player $j$ strictly increases, and thus the coalition is successful indeed.
Consequently for all $j\in L$, $j\in O(b)$ and $\xi(j,O(b))=\mlu{j}{L}{U}$ and the players in $O(b)\setminus L$ are charged at their minimum payment as well.\qed
\subsubsection*{Proof of Claim \ref{cla:sinkandb}}
Using the induction hypothesis at $L\cup\{k\},U$ we get from part (a) that there is a set $S$, where $L\cup\{k\}\subseteq S\subseteq U$, such that for all $i \in S$, $\xi(i, S)=\mlu{i}{L\cup\{k\}}{U}$. Using now the fact that $k$ is a sink we have that
for all $i \in S \setminus L$, $\m{i}=\mlu{i}{L\cup\{k\}}{U}$. Thus, setting $S_k=S$ we satisfy condition (b) of Fence Monotonicity for $k$ at $L,U$. \qed
\subsubsection*{Proof of Claim \ref{claim:harmandU-L}}
If $j$ is not a \emph{sink} of $G[U\setminus L]$, there must be a path starting from $j$, which obviously ends at a sink $k$ of this graph. Transitivity implies that $j$ \emph{harms} $k$. \qed
\subsubsection*{Proof of Lemma \ref{lem:b-alloc}}
As in the proof of Lemma \ref{lem:a-alloc}, by CS and VP we get that $L'\subseteq O(b^j)\subseteq U'$, and thus, from the definition of the bid vector and $\xi^*$, every player in $U'\setminus (L'\cup\{j\})$ has zero utility. We need to show that $j$ is serviced and charged $\mlu{j}{L'}{U'}$.
Suppose towards a contradiction that player $j$ is either not serviced or charged an amount greater than $\mlu{j}{L'}{U'}$.
Since we assumed that condition (b) of Fence Monotonicity is satisfied for
$j$ at $L',U'$, there must be a set $S_j$ with $L'\subseteq S_j \subseteq U'$, $j \in S_j$ and for all $i \in S_j\setminus L'$, $\xi(i,S_j)=\mlu{i}{L'}{U'}$.
Obviously, if she is serviced at a higher payment then she prefers the set $S_j$ to the current outcome. The same holds if she is not serviced, as we assumed that $b_j >\mlu{j}{L'}{U'}=\xi(j,S_j)$ and thus her utility would become \emph{strictly positive} if the outcome were $S_j$. In a similar manner as in the proof of Lemma \ref{lem:a-alloc}, she could form a coalition with the players in $U'\setminus (L'\cup\{j\})$ by enforcing $S_j$ to be the output. Again, our assumption about the payments of the remaining players in $S_j\setminus L'$ implies that their utility remains unchanged, and thus this coalition is successful.\qed
\subsubsection*{Proof of Claim \ref{claim:bk-alloc}}
(a) We get that player $k$ is serviced and charged $\m{k}$ by applying Lemma \ref{lem:b-alloc} for $k$ with $L'=L$ and $U'=U$.
(b)
Suppose towards a contradiction that $j \in O(b^k)$. The payment of player $k$ would be lower bounded by $\mlu{k}{L\cup\{j\}}{U}$, since $L\cup\{j\}\subseteq O(b^k) \subseteq U$, which contradicts the fact that $k$ is charged $\m{k}$, which is strictly smaller by our assumption that $j$ \emph{harms} $k$. \qed
\subsubsection*{Proof of Claim \ref{claim:bj-alloc}}
(a) Since $k$ is a sink of $G[U\setminus L]$, we have that for all $i \in U\setminus (L\cup\{j,k\})$, $\mlu{i}{L\cup\{k\}}{U}=\m{i}=b^j_i$. Similarly, we get that $\mlu{j}{L\cup\{k\}}{U}+\epsilon=\m{j}+\epsilon=b^j_j$.
(b) We apply Lemma \ref{lem:b-alloc} for $j$ with $L'=L\cup\{k\}$ and $U'=U$, since condition (b) is satisfied for $j$ at this pair (induction hypothesis) and the bid vector satisfies the requirements of this Lemma ($b^j_i=b^*_i$ for all $i \in L\cup\{k\}$, $b^j_i=-1$ for all $i \notin U$, and using Claim~\ref{claim:bj-alloc} (a)). As a result, $j\in O(b^j)$ and $\xi(j,O(b^j))=\mlu{j}{L\cup\{k\}}{U}=\m{j}$.
(c) From part (b) we get that the set $O(b^j)$ satisfies $L\cup\{j\}\subseteq O(b^j)\subseteq U$, and thus the payment of player $k$ is lower bounded by $\mlu{k}{L\cup\{j\}}{U}$. Since $j$ \emph{harms} $k$ we get that $\mlu{k}{L\cup\{j\}}{U}>\m{k}$, completing our proof.\qed
\subsubsection*{Proof of Claim \ref{claim:bjk-alloc}}
(a) Assume towards a contradiction that player $k$ is serviced at $b^{j,k}$. Notice that by VP and the definition of $\epsilon$, if $k$ is serviced at $b^{j,k}$ then her payment cannot exceed $\m{k}$. Additionally, notice that the only coordinate in which $b^{j,k}$ differs from $b^j$ is the bid of player $k$. Thus, strategyproofness is violated at $b^j$, since from Claim \ref{claim:bj-alloc} the payment of $k$ decreases.
(b) Suppose that $j$ is not serviced at $b^{j,k}$. Then $\{j,k\}$ can form a successful coalition when the true values are $b^k$ by bidding $b^{j,k}$, since from Claim \ref{claim:bk-alloc} the utility of $k$ increases from zero to $\epsilon$ and the utility of $j$ is kept at zero (she is not serviced at either input).
(c) By VP and CS we get that $L\subseteq O(b^{j,k}) \subseteq U$, thus the payment of every serviced player $i$ is lower bounded by $\m{i}$. By the definition of $b^{j,k}$ and VP of the mechanism we get equality for every player in $O(b^{j,k}) \setminus (L\cup\{j\})$. Moreover, from the definition of $\epsilon$, we conclude the same for $j$. \qed
\subsubsection*{Proof of Lemma \ref{lem:char1}}
We will prove our statement by induction on the cardinality of the set $T=\{i \in L\mid b_i \neq b^*_i\}$.
\textbf{Base:} Since $T=\emptyset$, we simply apply Lemma~\ref{lem:a-alloc}.
\textbf{Induction step:} For the induction step we will show that $L\subseteq O(b)\subseteq U$ and for all $i \in L$, $\xi(i,O(b))=\m{i}$. Then it is easy to see that every $i\in O(b)\setminus L$ must be serviced at $\m{i}$ by VP and the definition of $\xi^*$.
By the construction of the bid vector we have that $O(b)\subseteq U$. Consider now some $j \in T$. The induction hypothesis implies that $j$ is serviced and charged $\m{j}$ at the bid vector $(b^*_j,b_{-j})$. Suppose that $j$ is not serviced at $b$, resulting in zero utility. Since $b_j-\m{j}>0$, she can misreport $b^*_j$ so as to increase her utility to a strictly positive quantity and thus violate the strategyproofness of the mechanism. As a result, $j$ must be serviced at $b$, and hence by strategyproofness $j$ must be serviced at the same payment
(since $j$ is an arbitrary element of $T$, the same holds for all $j \in T$).
Now, since $j$ is indifferent between the two outcomes, group-strategyproofness requires the same of the remaining players. This is only possible if every $i \in L\setminus T$ ($i\in O(b)$ by CS, since $b_i=b^*_i$) is charged the same payment in either input, i.e., $\xi(i,O(b))=\xi(i,O(b^*_j,b_{-j}))=\m{i}$ (induction hypothesis).
\qed
\subsubsection*{Proof of Lemma~\ref{lem:char2}}
Let $b^0$ be any bid vector satisfying the conditions of Lemma \ref{lem:char1}, which means that for all $i \in L$, $b^0_i>\m{i}$, for all $i \in U\setminus L$, $b^0_i=\m{i}$, and for all $i \notin U$, $b^0_i=-1$. Then we relax the constraints we put on the bids of the players in $U \setminus L$ (the remaining players always bid according to $b^0$) and prove that no player becomes serviced at a price strictly less than her minimum payment at $L,U$, by using induction on $|T|$, where
$T:=\{i \in U\setminus L\mid b_i\neq b^0_i\}$.
\textbf{Base:} For $T=\emptyset$ we have that $b=b^0$ and the allocation property follows from Lemma \ref{lem:char1}.
\textbf{Induction step:} Assume that there is some $j \in O(b)$ that is charged less than $\m{j}$. Since by VP $O(b)\subseteq U$, either $j\in L$ which implies that $b^0_j>\m{j}$ or $j \in U\setminus L$ and thus $b^0_j=\m{j}$. In both cases we have that
\begin{equation}\label{eq:char2-1}
j\in O(b)\text{ and }\xi(j,O(b))<\m{j}\leq b^0_j.
\end{equation}
We will prove that there exists \emph{at least one} successful coalition, contradicting the assumed group-strategyproofness of the mechanism.
The key of our proof is the definition of the following special subset of $T$
\begin{equation}\label{eq:char2-2}
R:=\{ i\in T \mid i \in O(b)\text{ and } \xi(i,O(b))>b^0_i\}
\end{equation}
(notice that $j\notin R$).
This set can be interpreted as follows: if the true valuations are given by $b^0$ and the players in $T$ bid according to $b$, then $R$ is the set of players whose utility becomes negative; since every $i \in R$ had zero utility at $b^0$ (Lemma \ref{lem:char1}), this gives us the set of \emph{liars that sacrificed their utilities}. Notice that the set $R$ not being empty renders the manipulation we considered \emph{unsuccessful}. To prove the induction step we consider two cases regarding $R$.
\textbf{Case 1:} If $R \subset T$, we construct the bid vector $b'$, where for every $i\in R$, $b'_i=b_i$, and for every $i\notin R$, $b'_i=b^0_i$. We complete the proof of this case by showing that $(T\setminus R)\cup\{j\}$ forms a successful coalition when the true values are $b'$ by bidding $b$.
First, we prove that every player $i\in T\setminus R$ has zero utility in the truthful scenario and non-negative utility after the misreporting (profile $b$). By using the induction hypothesis at $b'$ (which differs from $b^0$ in $|R|<|T|$ coordinates), we get that for every $i \in T\setminus R$ such that $i \in O(b')$ (the remaining players obviously have zero utility), it holds that $\m{i}\leq\xi(i,O(b'))\leq b'_i$, where the last inequality follows from VP. Since $i \in T\setminus R$, we get that
$b'_i=b^0_i=\m{i}$ and thus $b'_i=\xi(i,O(b'))$. Now consider the outcome after the misreporting. Either $i \notin O(b)$, and hence she has zero utility after the misreporting, or $i \in O(b)$ and her utility becomes $b'_i-\xi(i,O(b))= b^0_i-\xi(i,O(b))\geq 0$, where the last inequality follows from Equation \ref{eq:char2-2}, as $i\in T\setminus R$.
Second, we prove that player $j$ strictly increases her utility. Considering the truthful scenario, there are two cases for $j$: either she is serviced at $b'$ and charged $\xi(j,O(b'))\geq\m{j}$ by the induction hypothesis; from Equation \ref{eq:char2-1} we get that $j$ is serviced after the misreporting and charged $\xi(j,O(b))<\m{j}$, and thus her payment decreases, while she is still serviced. Or $j$ is not serviced at $b'$, in which case she has zero utility. From Equation \ref{eq:char2-1} we get that $b^0_j>\xi(j,O(b))$ and, since $j \notin R$, it follows that $b^0_j=b'_j$. As a result, her utility increases to $b'_j-\xi(j,O(b))>0$.
\textbf{Case 2:} Otherwise, if $R = T$, we construct the bid vector $b''$, where for every $i \in R$, $b''_i=\xi(i,O(b))$, and for every $i \notin R$, $b''_i=b_i$.
Notice that for all $i \in R$, $b_i\geq b''_i$ by VP at $b$ since $R\subseteq O(b)$. Moreover, for every $i \notin R$, $b''_i=b^0_i$, since $R=T$.
\begin{claim}\label{claim:bdp}
If $T=R$ and there is some $j$ that satisfies Equation \ref{eq:char2-1}, then for the bid vector $b''$, it holds that
$R\cup\{j\}\subseteq O(b'')$ and for all $i \in R\cup\{j\}$,
$\xi(i,O(b''))=\xi(i,O(b))$.
\end{claim}
\begin{proof}[of Claim~\ref{claim:bdp}]
For all $S\subseteq R$, we define the bid vector $b^S$, where for all $i \in S$, $b^S_i=b''_i$ and for all $i \notin S$,
$b^S_i=b_i$. Notice that since we assumed that $T=R$, it holds that $b^S_i=b^0_i$ for all $i \notin R$ and all $S\subseteq R$. We show that for all $S\subseteq R$, $R\cup\{j\}\subseteq O(b^S)$ and for all $i \in R\cup\{j\}$,
$\xi(i,O(b^S))=\xi(i,O(b))$, with induction on $|S|$.
We will refer to this induction as the \emph{second} induction, to distinguish it from the \emph{first}. After proving this statement, we can set $S=R$ ($b^R=b''$) and complete the proof.
\textbf{Base (second):} If $S=\emptyset$, i.e. $b^\emptyset =b$, then every condition holds trivially.
\textbf{Induction step (second):}
If for some $i \in S$, it holds that $b''_i=b_i$, then the induction step follows trivially from induction hypothesis as $b^S=(b_i,b^S_{-i})=b^{S\setminus \{i\}}$.
Now assume that for all $i \in S$, $b''_i< b_i$, and consider some $i \in S$.
Player $i$ is serviced at $(b_i,b^S_{-i})$ and charged
$\xi(i,O(b))$ (induction hypothesis). Strategyproofness implies that, by lowering her bid down to her payment, which gives us the bid vector $b^S$, if she is serviced, then her payment must be the same, i.e.,
\begin{equation}\label{eq:char2-4}
\text{for all }i \in S\cap O(b^S), \text{ } \xi(i,O(b^S))=\xi(i,O(b)).
\end{equation}
This equation implies that if the true values are given by $b^S$, then every player in $S$ has zero utility. Moreover, by misreporting $b$ they are charged an amount equal to their true value, and thus their utilities are kept at zero. Group-strategyproofness implies that no player in $\mathcal{A}\setminus S$ has an incentive for this misreporting. Notice that $b^S_j=b^0_j$, since $j \notin R=T$. Moreover, from Equation \ref{eq:char2-1} we have that $b^0_j\geq \m{j}>\xi(j,O(b))$, and thus her utility after the manipulation we described is given by
$b^S_j-\xi(j,O(b))>0$. In order for this amount not to exceed her utility in the truthful scenario ($b^S$), it must hold that
\begin{equation}\label{eq:char2-3}
j \in O(b^S)\text{ and }\xi(j,O(b^S))\leq\xi(j,O(b)),
\end{equation}
which together with Equation \ref{eq:char2-1} and the fact that $b^S_j=b^0_j$ implies that
\begin{equation}\label{eq:char2-5}
\xi(j,O(b^S))<\m{j}\leq b^S_j.
\end{equation}
Using the latter equation we show that $R\subseteq O(b^S)$. Assume towards a contradiction that some $i\in R$ is not serviced at $b^S$. First, we show that $i$ is not serviced at
$(b^0_i,b^S_{-i})$. There are two cases for $i$: either $b^S_i=b''_i$, if $i \in S$, or $b^S_i=b_i\geq b''_i$, if $i \in R\setminus S$. In any case $b^S_i\geq b''_i>b^0_i$, where the second inequality follows from the definition of $R$ (Equation \ref{eq:char2-2}), since $b''_i=\xi(i,O(b))$. Strategyproofness implies that if the true values are $b^S$ and $i$ reports $b^0_i$ (the bid vector becomes $(b^0_i,b^S_{-i})$), then she is not serviced, as otherwise her payment could not exceed $b^0_i$ by VP, and thus her utility would increase to at least $b^S_i-b^0_i>0$.
Now we show that if $i$ is not serviced at $(b^0_i,b^S_{-i})$, then $\{i,j\}$ form a successful coalition when the true values are $(b^0_i,b^S_{-i})$ by bidding $b^S$. The utility of player $i$ is kept at zero, as she is excluded from the outcome in both inputs. Notice that by the \emph{first} induction hypothesis at the bid vector
$(b^0_i,b^S_{-i})$ (which differs from $b^0$ in $|R|-1=|T|-1$ coordinates), there are two cases for $j$: either $j$ is serviced and charged $\xi(j,O((b^0_i,b^S_{-i})))\geq\m{j}$, and thus her payment decreases to $\xi(j,O(b^S))<\m{j}$ (from Equation \ref{eq:char2-5}), or
she is not serviced, and her utility becomes $b^S_j-\xi(j,O(b^S))>0$ (from Equation \ref{eq:char2-5}).
As a result, it holds that $S\subseteq R\subseteq O(b^S)$, and together with Equation \ref{eq:char2-4} we get that for every $i \in S$, $\xi(i,O(b^S))=\xi(i,O(b))$. Therefore, the players in $S$ are indifferent between the two inputs, i.e., regardless of which of the two we consider as true values, they can misreport the other without losing utility. Group-strategyproofness implies that the other players are indifferent as well, which is only possible if for all $i \in R\setminus S$, $\xi(i,O(b^S))=\xi(i,O(b))$. Moreover, from Equation \ref{eq:char2-3} we get that $j \in O(b^S)$, and for similar reasons it holds that $\xi(j,O(b^S))=\xi(j,O(b))$.
\end{proof}
By VP we have that $O(b'')\subseteq U$ (for every $i \in \mathcal{A}\setminus U$, it holds that $i \notin R$ and thus $b''_i=b^0_i=-1$). Furthermore, from Claim \ref{claim:bdp} and Equation \ref{eq:char2-1} we get that $\xi(j,O(b''))<\m{j}$. Therefore, the definition of $\xi^*$ implies that $L\not\subseteq O(b'')$.
Assume that the true values are given by $b''$.
Every player $k \in (L\setminus O(b''))$ has obviously zero utility.
From Claim~\ref{claim:bdp} we deduce the same for every $i \in R$.
We complete our proof by showing that $R\cup (L\setminus O(b''))$ forms a successful coalition by bidding $b^0$.
First, we prove that the utility of every $i \in R$ remains non-negative after the misreporting. Consider some $i \in R$ such that $i \in O(b^0)$ (the remaining players obviously have zero utility). From Lemma \ref{lem:char1} it holds that $\xi(i,O(b^0))=\m{i}$, and since $\m{i}=b^0_i$ ($i \in U\setminus L$) and $b^0_i<\xi(i,O(b))=b''_i$ (Equation \ref{eq:char2-2}), we conclude that her utility increases to $b''_i-\xi(i,O(b^0))>0$.
Second, we show that the utility of every $k \in L\setminus O(b'')$ increases, as it becomes strictly positive. Since $k \notin R$ we get that $b''_k=b^0_k$. Moreover, since $k \in L$ we have $b^0_k>\m{k}$ and
from Lemma \ref{lem:char1} we get that $k \in O(b^0)$ and $\xi(k,O(b^0))=\m{k}$. As a result, her utility becomes $b''_k-\xi(k,O(b^0))>0$.
\qed
\subsubsection*{Proof of Claim \ref{claim:bc-alloc}}
(a) We can apply Lemma \ref{lem:char2} as every condition is satisfied.
(b) VP and CS imply $C\subseteq O(b^C) \subseteq L\cup C$. $C\subset O(b^C)$ follows from (a) and our assumption that $\xi(j,C)<\m{j}$.
(c) From the definition of $\epsilon$ we have that for all $i \in O(b^C) \setminus C$, $\xi(i,O(b^C))\leq\m{i}$. Together with (a) we get equality.
\qed
\subsubsection*{Proof of Lemma \ref{lem:char3}}
Notice that Lemma \ref{lem:char1} implies the same allocation property in the special case where every $i \in \mathcal{A} \setminus U$ has bid $-1$.
In order to show this Lemma we will use the same induction, i.e. we show that $L\subseteq O(b)$ and for all $i \in O(b)$, $\xi(i,O(b))=\m{i}$ using induction on $m=|\{i\in L\mid b_i\neq b^*_i\}|$.
\textbf{Base:} First, by CS we have that $L\subseteq O(b)$. Next, we show that no player in $\mathcal{A}\setminus U$ is serviced. Suppose that the set $R:=O(b)\setminus U$ is non-empty. We will show that VP is violated. By the
definition of the bid vectors we consider, there
is some $i \in R$ such that $b_i<\mlu{i}{L}{U\cup R}\leq \xi(i,O(b))$
(since $L\subseteq O(b)\subseteq U\cup R$), and thus we reach a contradiction.
As a result, it holds that $L \subseteq O(b) \subseteq U$. Thus, for every
$i \in O(b)$, we have that $\xi(i, O(b)) \geq \m{i}$. By VP
we get the equality for every $i \in O(b) \setminus L$. Now assume
that the previous inequality is strict for some $j \in L$. We
show that the players in $(\mathcal{A} \setminus U )\cup \{j\}$ can form a successful
coalition, where every $i \notin U$ announces $-1$.
Trivially, the utilities of these players are kept at zero after
the misreporting and applying Lemma \ref{lem:char1} we get that $j$ is serviced
and charged $\m{j}< \xi(j, O(b))$.
\textbf{Induction Step}: We show that $L \subseteq O(b)$ and that for all $i \in
L$, $\xi(i, O(b)) = \m{i}$, exactly as in the proof of Lemma
\ref{lem:char1}. Now, since $L \subseteq O(b)$, in a similar way we show that $O(b) \subseteq U$.
This restriction, together with the definition of the bid
vector, implies that for every $i \in O(b) \setminus L$, $\xi(i, O(b)) =
\m{i}$.\qed
\section{Missing Proofs of Section 6}
\subsubsection*{Proof of Lemma \ref{lem:maxu}}
Assume towards a contradiction that there exist two distinct sets $U_1,U_2$, neither of which is a subset of the other, that both satisfy this property. Then we could construct the set $U_1\cup U_2$, which also satisfies the same property and is a proper superset of both $U_1$ and $U_2$, reaching a contradiction. Indeed, for all $i\in U_1\setminus L$ we have $b_i\geq\mlu{i}{L}{U_1}\geq \mlu{i}{L}{U_1\cup U_2}$, as the minimum payment of player $i$ can only decrease as the set of outcomes, over which the minimum in the definition of $\xi^*$ is taken, becomes larger. A similar inequality holds for the players $i\in U_2\setminus L$, completing the proof.
Now we show that if $U$ is the maximal set with this property, then $L,U$ satisfy property 3.\ of stability. Consider some non-empty $R \subseteq \mathcal{A} \setminus U$. Assume that for all $i \in R$, $b_i \geq \mlu{i}{L}{U\cup R}$. From Lemma \ref{lem:xicompare} we have
that for all $i \in U \setminus L$, $b_i \geq \mlu{i}{L}{ U\cup R}$, and thus we reach a contradiction by the maximality of $U$.
\qed
\subsubsection*{Proof of Lemma \ref{lem:uncov2}}
We will first show that if $S\not\subseteq U$, then for some $i\in S$ we have $\mlu{i}{L}{U\cup S}>\xi(i,S)$. Since $L,U$ is stable (property 3.), there exists some $i\in S\setminus U$ such that $b_i<\mlu{i}{L}{U\cup S}$, and since by the initial assumption $\xi(i,S)\leq b_i$, we get that $\xi(i,S)<\mlu{i}{L}{U\cup S}$.
The rest of the proof is the same for both cases (note that in what follows, if $S\subseteq U$ then $S\cup U=U$). Note that $S\subset U\cup S$, since $L\nsubseteq S$. Applying part (c) of Fence Monotonicity we get that there exists a non-empty $T\subseteq L\setminus S$ such that for all $j\in T$ we have $\xi(j,S\cup T)=\mlu{j}{L}{U\cup S}\leq \m{j}$, where the last inequality holds by the definition of $\xi^*$, since the minimum cannot increase as the set of outcomes over which it is taken becomes larger. From property 1.\ of stability, and since $T\subseteq L$, for all $j\in T$ we have $\m{j}<b_j$. Consequently, for all $j\in T$ we have $\xi(j,S\cup T)<b_j$.
\qed
\subsubsection*{Proof of Lemma \ref{lem:cov3}}
Suppose towards a contradiction that $L,U$ is stable at $b$ and that there exists some $T\subseteq \mathcal{A}\setminus S$, such that for all $i\in T$, $b_i\geq \xi(i,S\cup T)$ and that for at least one player the inequality is strict.
Since $L\subseteq S\cup T\subseteq U\cup T$ from the definition of $\xi^*$ we get that for all $i\in T$ we have $\mlu{i}{L}{U\cup T}\leq \mlu{i}{L}{U}\leq \xi(i,S\cup T)$.
If $T\subseteq U$, then there exists some $i\in T$ such that $b_i>\xi(i,S\cup T)\geq \m{i}$. Since $T\subseteq \mathcal{A}\setminus L$, this contradicts stability (condition 2.) of $L,U$ at $b$.
If $T\not\subseteq U$, then $T\setminus U$ is not empty and for all $i \in T\setminus U$ we have $b_i\geq \xi(i,S\cup T)\geq \mlu{i}{L}{U\cup T}$, which contradicts stability (condition 3.) of $L,U$ at $b$.
\qed
\subsubsection*{Proof of Lemma \ref{lem:unique}}
Assume first that $L_1\neq L_2$ and without loss of generality that $L_2\not\subseteq L_1$. Consequently there exists some $j\in L_2\setminus L_1$.
By Fence Monotonicity (part (a)) there exists a set $S_2$ such that
$L_2\subseteq S_2\subseteq U_2$ and $\xi(i,S_2)=\mlu{i}{L_2}{U_2}$ for all $i\in S_2$. Notice that for all $j\in S_2$, $b_j \geq \xi(j, S_2)$, where strict inequality holds only if $j \in L_2$.
The idea is to apply Lemma~\ref{lem:uncov2} and get that there exists a non-empty $T\subseteq L\setminus S_2$ such that for all $i\in T$ we have $b_i>\xi(i,S_2\cup T)$. We will then apply Lemma~\ref{lem:cov3} to show that $L_2,U_2$ is not stable at $b$, which contradicts our initial assumption.
It only remains to show that we can apply Lemma~\ref{lem:uncov2}. If $S_2\not\subseteq U_1$ this is immediate. Suppose that $S_2\subseteq U_1$. If $j\in L_2\setminus L_1$, then also $j \in U_1 \setminus L_1$, thus from stability (condition 2.) of $L_1,U_1$ we get that $b_j=\mlu{j}{L_1}{U_1}$, and from stability (condition 1.) of $L_2,U_2$, since $j\in L_2$, we get that $b_j>\xi(j, S_2)$. Therefore, we get that $\xi(j,S_2)<\mlu{j}{L_1}{U_1}$, and consequently $S_2$ satisfies the requirements of Lemma~\ref{lem:uncov2}.
We showed that $L_1=L_2=L$. Suppose towards a contradiction that $U_1\neq U_2$. From Lemma \ref{lem:maxu} there exists a unique maximal set $U$ such that $b_i\geq \m{i}$ for all $i\in U$. Consequently, $U_1,U_2$ are subsets of $U$ and at least one of them, say $U_1$, is a proper subset of $U$. Then the players in $U\setminus U_1$ contradict stability (condition 3.).
\qed
\subsubsection*{Proof of Claim \ref{claim:inclusion*}}
(a) Using the definition of $L^*$ we get that $L^*\cup \{i\}$ contains all the players who have bid higher than any payment of the mechanism in $(b_i^*,b_{-i})$.
Since $L_i,U_i$ is stable at $(b_i^*,b_{-i})$, each one of these players, who has bid strictly higher than any payment of the mechanism, must be serviced (any value higher than any payment satisfies the definition of CS for Fencing Mechanisms), and if she is serviced she obviously has strictly positive utility; thus $L^*\cup \{i\}\subseteq L_i$.
(b)
The idea is to show that for all $j \in (U^*\cup U_i)\setminus L^*$, $b_j\geq \mlu{j}{L^*}{U^*\cup U_i}$; since we defined $U^*$ to be the maximal set with this property, we then get that $U_i\subseteq U^*$.
From definition of $U^*$ we have that for all $j \in U^*\setminus L^*$,
$b_j\geq \mlu{j}{L^*}{U^*} \geq \mlu{j}{L^*}{U^*\cup U_i}$, where the last inequality follows from Lemma \ref{lem:xicompare}.
Now consider some $j \in U_i\setminus U^*$ ($j\neq i$). Since $L_i,U_i$ is stable at $(b^*_i,b_{-i})$ we get that $b_j\geq \mlu{j}{L_i}{U_i}\geq \mlu{j}{L^*}{U_i\cup U^*}$ by applying Lemma \ref{lem:xicompare}, since from (a) $L^*\subset L_i$.
(c) From (a),(b) and Lemma \ref{lem:xicompare} we have that $\mlu{j}{L_i}{U_i}\geq \mlu{j}{L^*}{U^*}$. As we defined $L_i,U_i$ to be the stable pair at $(b_i^*,b_{-i})$ and $j\in L_i$, for $j\neq i$ we have $b_j>\mlu{j}{L_i}{U_i}\geq \mlu{j}{L^*}{U^*}$ and as $j\notin L^*$, we have $j\in W$. \qed
\subsubsection*{Proof of Claim \ref{claim:subset}}
(a) The pair $L_i,U_i$ is stable at $(b_i^*,b_{-i})$ (by definition of $L_i,U_i$) but it is not stable at $b$ (from our assumption that there exists no stable pair at $b$). Since the two bid vectors differ only on the $i$-th coordinate we deduce that stability (condition 1. as $i\in L_i$) is not satisfied by the $i$-th coordinate of $b$, thus $b_i\leq \mlu{i}{L_i}{U_i}$.
(b) From Claim~\ref{claim:inclusion*} (a) we already have that $L_i\supseteq L^*\cup \{i\}$. Suppose towards a contradiction that $L_i=L^*\cup \{i\}$. From Fence Monotonicity (condition (b)) we get that there exists some $S_i$, where $L^*\subseteq S_i\subseteq U^*$ and $i\in S_i$, such that for all $j\in S_i\setminus L^*$ we have $\xi(j,S_i)=\mlu{j}{L^*}{U^*}$.
The idea is to show that $L_i\subseteq S_i \subseteq U_i$, which implies that $\mlu{i}{L_i}{U_i}\leq \xi(i,S_i)=\mlu{i}{L^*}{U^*}$. Then considering also that $\mlu{i}{L^*}{U^*}<b_i$, because $i \in W$, we get $\mlu{i}{L_i}{U_i}<b_i$, contradicting the inequality we showed in (a).
It only remains to show that $L_i\subseteq S_i \subseteq U_i$. Notice first that $L_i\subseteq S_i$ since $L_i=L^* \cup\{ i \}$. Moreover, since for all $j \in S_i \setminus L_i$, it holds that $b_j \geq\xi(j, S_i)$ and the bids of these players are the same at $(b^*_i,b_{-i})$, stability of
$L_i,U_i$ (condition 3) implies that $S_i\subseteq U_i$, because otherwise $S_i\setminus U_i\neq \emptyset $ and its elements would violate condition 3. of stability.
\qed
\subsubsection*{Proof of Claim \ref{claim:notgsp}}
(a) The pair $L_i,U_i$ is stable at the bid vector $(b_i^*,b_j^*,b_{-\{i,j\}})$, since $L_i,U_i$ is stable at $(b_i^*,b_{-i})$ and raising the bid of player $j$ ($j\in L_i$ from our initial assumption) to $b_j^*$ does not affect its stability.
(b) From stability of $L_i,U_i$ at $(b_i^*,b_{-i})$ we get that $b_j>\mlu{j}{L_i}{U_i}$, while from part (a) of Claim \ref{claim:subset} we get that $\mlu{j}{L_j}{U_j}\geq b_j$. Consequently $\mlu{j}{L_j}{U_j}>\mlu{j}{L_i}{U_i}$.
Supposing towards a contradiction that $i\in L_j$ we also get in a similar way as before that the pair $L_j,U_j$ is stable at $(b_i^*,b_j^*,b_{-\{i,j\}})$. By Lemma \ref{lem:unique} these two pairs coincide, which is a contradiction because we just showed that the payment of $j$ is different.
(c) We will show that if there exists some $k\in L_j\setminus L_i$, then the mechanism is not GSP at inputs where a stable pair exists. Observe that $k\neq i,j$.
Consider first the vector $b^{i,j}:=(b_i^*,b_j^*,b_{-\{i,j\}})$, whose stable pair is $L_i,U_i$ by part (a). As $k\notin L_i$, either $k$ is not serviced, or, if $k$ is serviced, then her utility is zero. As for $j$, she is serviced with payment $\mlu{j}{L_i}{U_i}<\mlu{j}{L_j}{U_j}=\xi(j,O(b_j^*,b_{-j}))$ (from (a)).
Consider then $b^j:=(b_j^*,b_{-j})$, where $L_j,U_j$ is stable. As $k\in L_j$, $k$ is serviced and $b_k>\xi(k,O(b_j^*,b_{-j}))$.
Summing up, player $j$ strictly prefers $O(b^{i,j})$ to $O(b^j)$, while for $k$ the situation is exactly the opposite. The idea is to construct a bid vector $b'$ where $i$ has zero utility and either $\{i,j\}$ or $\{i,k\}$ is a successful coalition. Let $b'=(\xi(i,O(b^{i,j})),b_j^*,b_{-\{i,j\}})$ and notice that the induction hypothesis implies that there is a stable pair at $b'$. Moreover, observe that the three bid vectors differ only in the bid of player $i$, and consequently, by strategyproofness, at every input at which $i$ is served she is charged $\xi(i,O(b^{i,j}))$.
This implies that if $i$ is served at $b'$ she is charged an amount equal to her bid. Moreover, $i$ is serviced at $b^j$ in the degenerate case where $b'=b^j$, as otherwise VP would imply that
$\xi(j, O(b^j))\leq b^j_j< b^{i,j}_j=\xi(j, O(b^{i,j}))$. In every case we have that $i$ has zero utility
at both $b'$ and $b^j$.
Observe first that $k$ must be serviced at $b'$ and charged $\xi(k,O(b'))\leq\xi(k,O(b^{j}))$, as otherwise $\{i,k\}$ would have been able to form a successful coalition when the true values are $b'$ by bidding $b^j$. Similarly, we can show that $\xi(j,O(b'))\leq\xi(j,O(b^{i,j}))$, excluding the degenerate case $b'=b^j$, because otherwise $\{i,j\}$ would have been able to form a successful coalition when the true values are $b'$ by bidding $b^{i,j}$ (the utility of $i$ is kept at zero, as in the truthful scenario).
As a result players $j$ and $k$ strictly prefer $O(b')$ to $O(b^j)$ and $O(b^{i,j})$ respectively.
If $i\in O(b')$, then $\{i,k\}$, when the true utilities are given by $b^{i,j}$, can form a successful coalition by bidding $b'$, strictly increasing the utility of $k$ while keeping the utility of $i$ constant, since she is still served with the same payment. If $i\notin O(b')$, then we deduce that $\{i,j\}$, when the true utilities are given by $b^j$, can form a successful coalition by bidding $b'$, strictly increasing the utility of $j$ while keeping the utility of $i$ at zero.\qed
\section{Missing Proofs of Section 7}
\subsubsection*{Proof of Proposition \ref{prop:neg-neu}}
Let $L=S\setminus \{i,j\}$ and $U=S$.
(a) From condition (a) of Fence Monotonicity we get that at either $S\setminus \{i\}$ or $S$,
every player is charged the minimum payment at $L,U$. Since, by our assumption, this is not true for $S$, as $\xi(j,S\setminus\{i\})<\xi(j,S)$, it follows that for every other player $k \in S\setminus \{i,j\}$, $\xi(k,S\setminus\{i\})=\m{k}$ $\Rightarrow$ $\xi(k,S\setminus \{i\})\leq\xi(k,S)$.
(b) Notice that $j$ belongs only to
$S\setminus \{i\}$ and $S$. Moreover, our assumption implies that $\m{j}=\xi(j,S\setminus \{i\})<\xi(j,S)$.
From condition (b) of Fence Monotonicity there must be a set $S_i$ with $L\subseteq S_i\subseteq U$ such that $i \in S_i$ and for all $k \in S_i\setminus L$,
$\xi(k,S_i)=\m{k}$. By our assumption, it is impossible that $S_i=S$, and since the only remaining set that contains $i$ is $S\setminus \{j\}$, we get that
$\xi(i, S\setminus \{j\})=\m{i}$ $\Rightarrow$ $\xi(i, S\setminus\{j\})\leq\xi(i,S)$.
(c) Let $L'=S\setminus \{j\}$. Since $j$ is contained only in $S$, it follows that $\xi(j,S)=\mlu{j}{L'}{U}>\xi(j,S\setminus \{i\})$.
Since $L'\setminus (S\setminus\{i\})=\{i\}$, condition (c) of Fence Monotonicity implies that
$\mlu{i}{L\cup\{i\}}{U}=\m{i}=\xi(i,S)$ $\Rightarrow$
$\xi(i,S)\leq\xi(i,S\setminus \{j\})$.\qed
\subsubsection*{Proof of Theorem \ref{theo:low}}
Assume towards a contradiction that there is an
$\alpha$-budget balanced cost sharing scheme that satisfies Fence Monotonicity for $C$, where
$1\geq \alpha>\frac{1}{x}$. This implies the following relations:
\begin{eqnarray}
\frac{1}{x} C(S) <\sum_{i\in S}\xi(i,S) &\leq& C(S). \label{eq:bb}
\end{eqnarray}
Equation \ref{eq:bb} (right part) with $S=\{1,2\}$ and $S=\{1,3\}$ imply that $\xi(2,\{1,2\})\leq 1$ and $\xi(3,\{1,3\})\leq 1$. We apply Proposition \ref{prop:neg-neu} (c) and we get that either $\xi(2,\{1,2,3\})\leq 1$ or $\xi(3,\{1,2,3\})\leq 1$ (or both). W.l.o.g we assume that $\xi(2,\{1,2,3\})\leq 1$.
Suppose that $\xi(2,\{2,3\})\leq 1$ $\Rightarrow$ $\xi(2,\{2,3\})<\xi(2,\{2\})$ (Equation \ref{eq:bb} (left part) with $S=\{2\}$: $\xi(2,\{2\})>1$). We apply the contra-positive implication of Proposition \ref{prop:neg-neu} (b) and deduce that $\xi(3,\{2,3\})\leq \xi(3,\{3\})$ $\Rightarrow \xi(3,\{2,3\})\leq x$ (Equation \ref{eq:bb} (right part) with $S=\{3\}$: $\xi(3,\{3\})\leq x$). This contradicts Equation \ref{eq:bb} (left part) with $S=\{2,3\}$, since $\xi(2,\{2,3\})+\xi(3,\{2,3\}) \leq x+1$.
Suppose that $\xi(2,\{2,3\})>1$ $\Rightarrow$ $\xi(2,\{2,3\})>\xi(2,\{1,2,3\})$ (from our assumption). Notice that the contra-positive implication of Proposition \ref{prop:neg-neu} (a) implies that $\xi(3,\{1,2,3\})\leq\xi(3,\{2,3\})$ $\Rightarrow$ $\xi(3,\{1,2,3\})\leq x^2+x-1$ (from Equation \ref{eq:bb} (right part) with $S=\{2,3\}$ it follows that $\xi(3,\{2,3\})\leq x^2+x-\xi(2,\{2,3\})\leq x^2+x-1$). Using now the contra-positive implication of Proposition \ref{prop:neg-neu} (b) we get that $\xi(1,\{1,2,3\})\leq\xi(1,\{1,3\})$
$\Rightarrow$ $\xi(1,\{1,2,3\})\leq 1$ (Equation \ref{eq:bb} (right part) at $S=\{1,3\}$: $\xi(1,\{1,3\})\leq 1$). This contradicts Equation \ref{eq:bb} (left part) with $S=\{1,2,3\}$, as
$\xi(1,\{1,2,3\})+\xi(2,\{1,2,3\})+\xi(3,\{1,2,3\})\leq 1+1+x^2+x-1=x^2+x+1$.
\qed
\section{Missing Proofs of Section 8}
\subsubsection*{Proof of Theorem \ref{theo:compl}}
Consider the following process, which takes as input a bid vector $b$ and
a set $L$.
\begin{algorithmic}
\STATE $U\leftarrow \mathcal{A}$
\REPEAT
\STATE $U\leftarrow L\cup \{i\in U\setminus L\mid b_i\geq\m{i}\}$
\UNTIL{For all $i \in U\setminus L$, $b_i\geq\m{i}$}
\end{algorithmic}
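For concreteness, the following is a minimal Python sketch of the same process; the initialization of $U$ to the full player set $\mathcal{A}$ (consistent with the maximality argument below) and the oracle \texttt{min\_payment(i, L, U)}, which is assumed to return $\mlu{i}{L}{U}$, are our own conventions and not part of the original description.
\begin{verbatim}
# Minimal sketch (our notation): U starts as the full player set and shrinks
# until every remaining player outside L bids at least her minimum payment.
# min_payment(i, L, U) is a hypothetical oracle returning xi*(i | L, U).
def upper_set(bids, L, players, min_payment):
    U = set(players)
    while True:
        U_new = set(L) | {i for i in U - set(L)
                          if bids[i] >= min_payment(i, L, U)}
        if U_new == U:   # every i in U \ L clears her minimum payment: stop
            return U
        U = U_new
\end{verbatim}
Each iteration can only remove players from $U$, so the loop terminates after at most $|\mathcal{A}|$ iterations.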
We will prove that if we feed this process with the lower set $L$ of the stable pair at $b$, then the outcome is the upper set of the stable pair.
Obviously by the definition of the process, for its final set $U$ it holds that for all $i \in U$, $b_i\geq\m{i}$. First, we show that $U$ is the maximal set with this property.
Assume that there is some $U'$ with $U'\not\subseteq U$ that satisfies this property. Then, for all $i \in (U\cup U') \setminus L$, $b_i\geq\mlu{i}{L}{U\cup U'}$. Thus, it is impossible that the players in $U \cup U'$ are removed at any step of the previous process, contradicting our assumption that $U\neq U'\cup U$ is the outcome of this process.
Consider now the upper set $U''$ of the stable pair at $b$. Since $U''$ satisfies this property, it follows that $U''\subseteq U$. Notice that if $U''\subset U$, then $U\setminus U''\neq \emptyset$ and its elements would violate stability of $L,U''$. Thus, $U''=U$, and consequently this process outputs the upper set of the stable pair.
As a result, given an outcome $S$ of a GSP mechanism we can compute $L:=\{i\in S\mid b_i>\xi(i,S)\}$ and use this process to find the upper set $U$. The time-complexity of this algorithm is polynomial in the number of players assuming that we have polynomial-time access to every $\m{i}$ for all $L\subseteq U\subseteq \mathcal{A}$ and all $i \in U$. \qed
\end{document}
\begin{document}
\title{Limit of the Wulff Crystal when approaching criticality for site percolation on the triangular lattice}
\begin{abstract}
The understanding of site percolation on the triangular lattice progressed greatly in the last decade. Smirnov proved conformal invariance of critical percolation, thus paving the way for the construction of its scaling limit. Recently, the scaling limit of near-critical percolation was also constructed by Garban, Pete and Schramm. The aim of this very modest contribution is to explain how these results imply the convergence, as $p$ tends to $p_c$, of the Wulff crystal to a Euclidean disk. The main ingredient of the proof is the rotational invariance of the scaling limit of near-critical percolation proved by these three mathematicians.
\end{abstract}
\section{Introduction}
\paragraph{Definition of the model}
Percolation as a physical model was introduced by Broadbent and
Hammersley in the fifties~\cite{BH57}. For general background on percolation, we refer the reader to~\cite{Gri99,Kes82,BR06c}.
Let $\mathbb{T}$ be the regular triangular lattice given by the vertices $m+{\rm e}^{{\rm i} \pi/3}n$ where $m,n\in \mathbb{Z}$, and edges linking nearest neighbors together. In this article, the vertex set will be identified with the lattice itself.
For $p\in (0,1)$, \emph{site
percolation} on $\mathbb{T}$ is defined as follows. The set of configurations is given by $\{{\rm open},{\rm closed}\}^\mathbb{T}$. Each vertex, also called \emph{site}, is \emph{open} with probability $p$ and \emph{closed} otherwise,
independently of the state of other vertices. The probability measure thus obtained is denoted by $\mathbb{P}_p$.
A {\em path} between $a$ and $b$ is a sequence of sites $v_0,\dots,v_k$ such that $v_0=a$ and $v_k=b$, and such that $v_iv_{i+1}$ is an edge of $\mathbb{T}$ for any $0\le i<k$. A path is said to be {\em open} if all its sites are open. Two
sites $a$ and $b$ of the triangular lattice are \emph{connected} (this is denoted by $a\longleftrightarrow b$) if there exists an open path between them.
A {\em cluster} is a maximal connected graph for the relation $\longleftrightarrow$ on sites of $\mathbb{T}$.
\paragraph{The different phases} Bernoulli percolation undergoes a phase transition at $p_c=1/2$ (the corresponding result for bond percolation on the square lattice is due to Kesten \cite{Kes80}): in the {\em sub-critical phase} $p<p_c$, there is almost surely no infinite cluster, while in the {\em super-critical phase} $p>p_c$, there is almost surely a unique one.
The understanding of the {\em critical phase} $p=p_c$ has progressed greatly these last few years. In \cite{Smi01}, Smirnov proved Cardy's formula, thus providing the first rigorous proof of the conformal invariance of the model (see also \cite{Wer07,BD13} for details and references). This result led to many applications describing the critical phase. Among others, the convergence of interfaces was proved in \cite{CN06,CM07}, and critical exponents were computed in \cite{SW01}.
Another phase of interest is given by the so-called {\em near-critical phase}. It is obtained by letting $p$ go to $p_c$ as a well-chosen function of the size of the system (see below for more details). This phase was first studied in the context of percolation by Kesten \cite{Kes87}, who used it to relate fractal properties of the critical phase to the behavior of the correlation length and the density of the infinite cluster (as $p$ tends to $p_c$). Recently, the scaling limit of near-critical percolation was proved to exist in \cite{GPSa}. This result will be instrumental in the proof of our main theorem.
\paragraph{Main statement}
Mathematicians and physicists are particularly interested in the following quantity, called the {\em correlation length}. For $p<p_c$ and for any $u$ on the unit circle $\mathbb{U}=\{z\in \mathbb{C}:|z|=1\}$, define
$$\tau_p(u)~:=~\left(\lim_{n\rightarrow \infty}-\tfrac{1}{n}\log \mathbb{P}_p(0\longleftrightarrow \widehat{nu})\right)^{-1},$$
where $\widehat{nu}$ is the site of $\mathbb{T}$ closest to $nu$.
In \cite{SW01}, the correlation length $\tau_p(u)$ was proved to behave like $(p_c-p)^{-4/3+o(1)}$ as $p\nearrow p_c$.
Interestingly, conformal invariance at criticality is a strong indication that $\tau_p(u)$ becomes isotropic, meaning that it does not depend on $u\in \mathbb{U}$. The aim of this note is to show that this is indeed the case.
Let $|\cdot|$ be the Euclidean norm on $\mathbb{R}^2$.
\begin{theorem}\label{thm:correlation}
For percolation on the triangular lattice,
$\tau_p(u)/\tau_p(|u|)\longrightarrow 1$ uniformly in the direction $u\in \mathbb{U}$ as $p\nearrow p_c$.
\end{theorem}
While this result is very intuitive once conformal invariance has been proved, it does not follow directly from it. More precisely, it requires some understanding of the near-critical phase mentioned above.
The main input used in the proof is the spectacular and highly non-trivial result of \cite{GPSa}.
In this paper, the scaling limit for near-critical percolation is proved to exist and to be invariant under rotations. This result constitutes the heart of the proof of Theorem~\ref{thm:correlation}, which then consists in connecting the correlation length to properties of this near-critical scaling limit (in some sense, the proof can be understood as an exchange of two limits).
\paragraph{Wulff crystal} Theorem~\ref{thm:correlation} has an interesting corollary. Consider the cluster ${\sf C}_0$ of the origin. When $p<p_c$, there exists a deterministic shape $W_p$ such that for any $\varepsilon>0$,
$$\mathbb{P}_p\left(\mathbf d_{\rm Hausdorff}\Big(\frac{{\sf C}_0}{\sqrt{n}},\frac{W_p}{\sqrt{{\rm Vol}(W_p)}}\Big)>\varepsilon~\Big| ~|{\sf C}_0|\ge n\right)\longrightarrow 0\text{\quad as $n\rightarrow \infty$,}$$
where ${\rm Vol}(E)$ denotes the volume of the set $E$, and $\mathbf d_{\rm Hausdorff}$ is the Hausdorff distance. In the previous formula, $W_p$ is the {\em Wulff crystal} defined by
$$W_p~:=~\{x\in \mathbb{C}:\langle x|u\rangle\le \tau_p(u),u\in \mathbb{U}\},$$
where $\langle \cdot|\cdot\rangle$ is the standard scalar product on $\mathbb{C}$.
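Purely as a numerical illustration of this definition (and not part of the argument), the following Python sketch approximates $W_p$ by discretizing the directions $u$ and intersecting the corresponding half-planes; the function \texttt{tau} stands for an arbitrary positive direction-dependent correlation length, and all names are ours.
\begin{verbatim}
import math

def wulff_polygon(tau, n_dirs=360, n_rays=720):
    """Crude approximation of W = {x : <x, e^{i t}> <= tau(t) for all t}:
    along each ray e^{i phi}, the largest admissible radius is
    min over t with cos(phi - t) > 0 of tau(t) / cos(phi - t)."""
    thetas = [2 * math.pi * k / n_dirs for k in range(n_dirs)]
    boundary = []
    for j in range(n_rays):
        phi = 2 * math.pi * j / n_rays
        r = min(tau(t) / math.cos(phi - t)
                for t in thetas if math.cos(phi - t) > 0)
        boundary.append((r * math.cos(phi), r * math.sin(phi)))
    return boundary

# With a constant tau the output is (up to discretization) a Euclidean disk.
disk_like = wulff_polygon(lambda t: 1.0)
\end{verbatim}
With a constant \texttt{tau} the resulting polygon is, up to discretization, a Euclidean disk, which is the situation described by the corollary below.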
The Wulff crystal appears naturally when studying phase coexistence. Originally, the Wulff crystal was constructed rigorously in the context of the planar Ising model by Dobrushin, Koteck\'y and Shlosman \cite{DKS92} for very low temperature (see \cite{Pfi91,IS98} for extensions of this result). In the case of planar percolation, the first result is due to \cite{ACC90}. Let us mention that the Wulff construction was extended to higher dimensional percolation by Cerf \cite{Cer00} (see also \cite{Bod99,CP00} for the Ising case). We refer to \cite{Cer06} for a comprehensive exposition of the subject.
The geometry of the Wulff crystal has been studied extensively since then. Let us mention that for any $p<p_c$, it is a strictly convex body with analytic boundary \cite{ACC90,Ale92,CI02}.
The expression of $W_p$ in terms of the correlation length, together with Theorem~\ref{thm:correlation}, implies the following result.
\begin{corollary}
When $p\nearrow p_c$, $\displaystyle W_p/\sqrt{{\rm Vol}(W_p)}$ tends to the disk $\{u\in \mathbb{C}:|u|\le 1\}$.
\end{corollary}
This corollary has a strong geometric interpretation. As $p\nearrow p_c$, the typical shape of a cluster conditioned to be large becomes round.
\paragraph{Super-critical phase} For the super-critical phase, the previous results can be translated in the following way.
For $p>p_c$, define
$$\tau^{\rm f}_p(u)^{-1}~:=~\lim_{n\rightarrow \infty}-\frac{1}{n}\log \mathbb{P}_p(0\longleftrightarrow \widehat{nu},0\not\longleftrightarrow \infty).$$
One can prove that $\tau^{\rm f}_p(u)=\frac12\tau_{1-p}(u)$; see \cite[Theorem A]{CIL10} for a much more precise (and much harder) result. This fact, together with Theorem~\ref{thm:correlation}, immediately implies that $\tau^{\rm f}_p(u)/\tau^{\rm f}_p(|u|)\rightarrow 1$, uniformly in the direction $u\in \mathbb{U}$, as $p\searrow p_c$.
The Wulff crystal can also be extended to the super-critical phase.
When $p>p_c$, we find that for any $\varepsilon>0$,
$$\mathbb{P}_p\left(\mathbf d_{\rm Hausdorff}\Big(\frac{{\sf C}_0}{\sqrt{n}},\frac{W_{1-p}}{\sqrt{{\rm Vol}(W_{1-p})}}\Big)>\varepsilon~\Big|~n\le |{\sf C}_0|<\infty\right)\longrightarrow 0\text{\quad as $n\rightarrow \infty$.}$$
\paragraph{Other models} Let us mention that conformal invariance has been proved for a number of models, including the dimer model \cite{Ken00} and the Ising model \cite{Smi10,CS09}; see \cite{DCS11} for lecture notes on the subject. In both cases, exact computations (see \cite{McCW73} for the Ising model) allow one to show that the correlation length becomes isotropic, hence providing an extension of Theorem~\ref{thm:correlation}. For the Ising model, we refer to \cite{McCW73} for the original computation, and to \cite{BDCS11,Dum12} for a recent computation. For percolation, no exact computation is available and the passage via the near-critical regime seems required. For the Ising model, the near-critical phase was also studied in \cite{DGP12}.
\paragraph{An open question} To conclude, let us mention the following question, which was asked by I. Benjamini: let $p>p_c$ and condition 0 to be connected to infinity. Consider the sequence of balls of center 0 and radius $n$ for the graph distance on the infinite cluster. Show that these balls possess a limiting shape $U_p$ which becomes round as $p\searrow p_c$.
\paragraph{Notation}
When points are considered as elements of the plane, we usually use complex numbers to position them.
When they are considered as vertices of $\mathbb{T}$, we prefer oblique coordinates. More precisely, the point $(m,n)\in\mathbb{Z}^2$ is the point $m+{\rm e}^{{\rm i} \pi/3}n$.
It shall be clear from the context whether complex or oblique coordinates are used. The set $[m_1,m_2]\times [n_1,n_2]$ is therefore the parallelogram composed of sites $m+{\rm e}^{{\rm i} \pi/3}n$ with $m_1\le m\le m_2$ and $n_1\le n\le n_2$.
Define $\mathcal{C}_{\rm hor}([m_1,m_2]\times [n_1,n_2])$ to be the event that there exists an open path included in $[m_1,m_2]\times [n_1,n_2]$ from $\{m_1\}\times[n_1,n_2]$ to $\{m_2\}\times[n_1,n_2]$. If the event occurs, the parallelogram is said to be {\em crossed}. Let $\mathcal{C}_{\rm circuit}(x,n,2n)$ be the event that there exists an open circuit (meaning a path starting and ending at the same site) in $(x+[-2n,2n]^2)\setminus(x+[-n,n]^2)$ surrounding $x$.
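As a purely illustrative aside (not used in the proofs below), the following Python sketch samples site percolation on a parallelogram of $\mathbb{T}$ in oblique coordinates and tests the crossing event $\mathcal{C}_{\rm hor}$ by breadth-first search; the function names and the particular implementation are ours.
\begin{verbatim}
import random
from collections import deque

# Neighbors of (m, n) on the triangular lattice, in oblique coordinates:
# (m+1, n), (m-1, n), (m, n+1), (m, n-1), (m+1, n-1), (m-1, n+1).
NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def sample_parallelogram(m_max, n_max, p, rng=random):
    """Each site of [0, m_max] x [0, n_max] is open with probability p."""
    return {(m, n): rng.random() < p
            for m in range(m_max + 1) for n in range(n_max + 1)}

def crosses_horizontally(config, m_max, n_max):
    """C_hor([0, m_max] x [0, n_max]): is there an open path inside the
    parallelogram from the column {0} x [0, n_max] to {m_max} x [0, n_max]?"""
    start = [(0, n) for n in range(n_max + 1) if config[(0, n)]]
    seen, queue = set(start), deque(start)
    while queue:
        m, n = queue.popleft()
        if m == m_max:
            return True
        for dm, dn in NEIGHBORS:
            site = (m + dm, n + dn)
            if config.get(site) and site not in seen:
                seen.add(site)
                queue.append(site)
    return False
\end{verbatim}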
\subsection{Two important inputs}
We will use tools of percolation theory such as correlation inequalities (in our case the FKG and BK inequalities) and Russo's formula. We shall not remind these classical facts here, and the reader is referred to \cite{Gri99} for precise definitions. In addition to these facts, we will harness two important results. The first one relates crossing probabilities for parallelograms of different shape. It is usually referred to as the Russo-Seymour-Welsh theorem \cite{Rus78,SW78}, or simply RSW.
\begin{theorem}[Russo-Seymour-Welsh]\label{thm:RSW}There exist $p_0>0$ and $c_1,c_2>0$ such that for every $n,m$ and $p\in[p_0,1-p_0]$,
\begin{align}\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,m]\times[0,2n])\big] \le c_1\Big(\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,m]\times[0,n])\big]\Big)^{c_2}.\end{align}
\end{theorem}
Note that we do not require that the parameter $p\in[0,1]$ is critical, but rather that it is bounded away from 0 and 1. For a proof of this version of the theorem on $\mathbb{T}$, we refer to \cite{Nol08}.
\bigbreak
Another important result, due to Garban, Pete and Schramm, will be required. Let $\mathcal{A}_4(1,n)$ be the event that there exist four disjoint paths from neighbors of the origin to $\mathbb{T}\setminus [-n,n]^2$, indexed in clockwise order by $\gamma_1,\gamma_2,\gamma_3$ and $\gamma_4$, with the property that $\gamma_1$ and $\gamma_3$ are open, while $\gamma_2$ and $\gamma_4$ are closed (meaning that they contain closed sites only).
We set
$$r(n)=\frac{\mathbb{P}_{p_c}[\mathcal{A}_4(1,n)]}{n^2}.$$
For $r>0$ and $u\in\mathbb{R}^2$, let $\mathcal{B}_r(u)=\{z\in \mathbb{R}^2:|z-u|\le r\}$ be the Euclidean ball of radius $r$ around $u$. For two sets $A$ and $B$ in $\mathbb{R}^2$, we say that $A\longleftrightarrow B$ if there exist $a\in A\cap \mathbb{T}$ and $b\in B\cap \mathbb{T}$ such that $a\longleftrightarrow b$.
\begin{proposition}\label{prop:inv}There exists $f:\mathbb{R}\times \mathbb{R}_+\rightarrow [0,1]$ such that for any $u\in \mathbb{R}^2$,
\begin{eqnarray}\label{eq:32} \mathbb{P}_{p_c-\lambda r(n)}\big[\mathcal{B}_n(0)\longleftrightarrow \mathcal{B}_n(nu)\big]&\longrightarrow &f(\lambda,|u|)
\end{eqnarray}
as $n$ tends to $\infty$.
\end{proposition}
The previous proposition has two interesting features. First, the quantity on the left possesses a limit as $n$ tends to infinity. Second, this limit is invariant under rotations.
For $\lambda=0$, the result follows from the convergence of percolation interfaces to $\mathsf{CLE}(6)$ \cite{CN06}. For more general values of $\lambda$, the result is much more difficult. Let us briefly explain where this proposition comes from.
The scaling limit described in \cite{GPS10a,GPSa} is a limit, in the sense of the Quad-topology introduced in \cite{SS11}, of percolation configurations $\mathbb{P}_{p_c-\lambda r(n)}$ on $\frac1n \mathbb{T}$.
This topology is sufficiently strong to control events considered in Proposition~\ref{prop:inv}. The existence of the scaling limit is justified by a careful study of macroscopic ``pivotal points''. This existence implies the existence of the limit in \eqref{eq:32}.
The scale $r(n)$ corresponds to the scale for which a variation of $\lambda r(n)$ will alter the pivotal points, and therefore the scaling limit, but not too drastically. This fact enabled Garban, Pete and Schramm to construct the scaling limit of near-critical percolation from the scaling limit of critical percolation. The invariance under rotation of the near-critical scaling limit is then a consequence of the invariance under rotation of the critical one. We refer to \cite{GPSa} for more details.
In the proof, the near-critical phase will be used at its full strength. On the one hand, the scaling limit is still invariant under rotations. On the other hand, as $\lambda\rightarrow \infty$, the ``crossing probabilities'' tend to 0. The existence of such a phase is crucial in our proof.
\section{Proof of Theorem~\ref{thm:correlation}}
The proof consists in estimating the correlation length $\tau_p(u)$ using $f(\lambda,|u|)$. It is known since \cite{Kes87} that the correlation length is related to crossing probabilities. Yet, previous studies were interested in relations which are only valid up to bounded multiplicative constants. Here, we will need a slightly better control (roughly speaking that these constants tend to 1 as $p$ goes to $p_c$).
In order to relate $\tau_p(u)$ and $f(\lambda,|u|)$, we use the existence of different parameters $\delta,\lambda, p_0$ and $L_p$ with some specific properties presented in the next proposition.
\begin{proposition}\label{conditions}
Let $\varepsilon>0$. There exist $\lambda,\delta>0$ and $p_0<p_c$ such that for any $p\in[p_0,p_c]$ there exists $L_p\ge 0$ with the following three properties:
\begin{enumerate}
\item[{\rm P1}] \quad for any $\theta\in[0,2\pi)$,
$$\displaystyle f(\lambda,\delta^{-1})^{1+\varepsilon}~\le~ \mathbb{P}_{p}\big[\mathcal{B}_{\delta L_p}(0)\longleftrightarrow \mathcal{B}_{\delta L_p}(L_pe^{i\theta})\big]~\le~ f(\lambda,\delta^{-1})^{1-\varepsilon},$$
\item[{\rm P2}] \quad$\displaystyle \mathbb{P}_{p}\big[\mathcal{C}_{\rm circuit}(0,\delta L_p,2\delta L_p)\big]~\ge~ f(\lambda,\delta^{-1})^{\varepsilon},$
\item[{\rm P3}] \quad$\displaystyle \delta ~\ge~ 2f(\lambda,\delta^{-1})^{\varepsilon}$.
\end{enumerate}
\end{proposition}
The main part of the proof of Theorem~\ref{thm:correlation} will be to show Proposition~\ref{conditions}, i.e.\ that for any $\varepsilon>0$ the constants $\lambda,\delta, p_0$ and $L_p$ can indeed be constructed. Before proving Proposition~\ref{conditions}, let us show how it implies Theorem~\ref{thm:correlation}.
\begin{proof}[of Theorem~\ref{thm:correlation}]
Fix $\varepsilon>0$. Define $\lambda,\delta>0$ and $p_0<p_c$ such that Proposition~\ref{conditions} holds true. Let $\theta\in[0,2\pi)$ and $p_0<p<p_c$. Consider $L_p$ defined as in the proposition.
For $K\ge 1$, consider the following three events:
\betaegin{enumerate}
\item[$\mathcal{E}_1$=]``$\mathcal{B}_{\delta L_p}(0)$ and $\mathcal{B}_{\delta L_p}(KL_pe^{i\theta})$ are full '',
\item[$\mathcal{E}_2$=]`` $\mathcal{B}_{\delta L_p}(kL_p{\rm e}^{i\theta})\longleftrightarrow \mathcal{B}_{\delta L_p}((k+1)L_p{\rm e}^{i\theta})$ for every $0\le k\le K$ '',
\item[$\mathcal{E}_3$=]`` $\mathcal{C}_{\rm circuit}(kL_p{\rm e}^{i\theta},\delta L_p,2\delta L_p)$ for every $0\le k\le K$ ''.
\end{enumerate}
As shown on Fig.~\ref{fig:events}, if all these events occur, then $0$ and the site of $\mathbb{T}$ closest to $KL_p{\rm e}^{i\theta}$, denoted $\widehat{KL_p{\rm e}^{i\theta}}$, are connected by an open path. The FKG inequality (see \cite[Theorem 2.4]{Gri99}) implies that \begin{align*}\mathbb{P}_{p}\big[0\longleftrightarrow \widehat{KL_pe^{i\theta}}\big]&\ge \mathbb{P}_{p}\left[\mathcal{E}_1\right]\mathbb{P}_{p}\left[\mathcal{E}_2\right]\mathbb{P}_{p}\left[\mathcal{E}_3\right]\\
&\ge p^{8(\delta L_p)^2}\cdot f(\lambda,\delta^{-1})^{(1+\varepsilon)K}\cdot f(\lambda,\delta^{-1})^{\varepsilon K}.\end{align*}
We have used P1 and P2 to bound the probabilities of $\mathcal{E}_2$ and $\mathcal{E}_3$ in the second inequality. The bound on $\mathbb{P}_p[\mathcal{E}_1]$ comes from the fact that there are less than $8(\delta L_p)^2$ sites in $\mathcal{B}_{\delta L_p}(0)\cup\mathcal{B}_{\delta L_p}(KL_pe^{i\theta})$.
By taking the logarithm and letting $K$ tend to infinity, we obtain that for $p_0<p<p_c$,
\begin{equation}\label{lower}\frac1{\tau_{p}(e^{i\theta})}\le -(1+2\varepsilon)\frac{\log f(\lambda,\delta^{-1})}{L_p}.\end{equation}
This provides us with an upper bound that we will match with the lower bound below.
\bigbreak
Let us now turn to the lower bound. Assume that $0$ and $\widehat{KL_p{\rm e}^{i\theta}}$ are connected. Let $0\le N\le 2\delta^{-1}$ be such that $\sin (\frac{2\pi}{N})<\delta/2$. Define
$$\Theta=\big\{L_p,L_pe^{\frac{2\pi i}{N} },L_pe^{2\frac{2\pi i}{N}} ,\dots, L_pe^{(N-1)\frac{2\pi i}{N}} \big\}.$$
We claim that if $0$ and $KL_p{\rm e}^{i\theta}$ are connected, then there must exist a sequence of sites $0=x_0,x_1,\dots,x_K$ such that $x_{i+1}-x_i\in \Theta$ and $\mathcal{B}_{\delta L_p}(x_i)\longleftrightarrow \mathcal{B}_{\delta L_p}(x_{i+1})$ occurs for every $0\le i<K$. Furthermore, the events $\mathcal{B}_{\delta L_p}(x_i)\longleftrightarrow \mathcal{B}_{\delta L_p}(x_{i+1})$ occur disjointly in the sense of \cite[Section 2.3]{Gri99}.
In order to prove this claim, consider a self-avoiding open path $\gamma=(\gamma_i)_{0\le i\le r}$ from $0$ to $\widehat{KL_p{\rm e}^{i\theta}}$. Let $y_1$ be the first point of this path which is outside of the Euclidean ball of radius $L_p$ around 0. Define $x_1\in \Theta$ such that $|y_1-x_1|\le \delta L_p$. The choice of $N$ guarantees the existence of $x_1$. Let $y_2$ be the first point of $\gamma[y_1,r]$ outside of the Euclidean ball of radius $L_p$ around $x_1$. We pick $x_2$ such that $x_2-x_1\in \Theta$ and $|y_2-x_2|\le \delta L_p$. We construct $(x_i)_{0\le i\le K}$ iteratively. See Fig.~\ref{fig:events} for an illustration. By construction, the events occur {\em disjointly} since the path $\gamma$ is self-avoiding. We write $A\circ B$ for the disjoint occurrence (see \cite[Theorem 2.12]{Gri99}). The union bound and the BK inequality give
\begin{align*}\mathbb{P}_p[0\longleftrightarrow \widehat{KL_pe^{i\theta}}]&\le \sum_{(x_i)_{i\le K}}\mathbb{P}_p\Big[\big\{\mathcal{B}_{\delta L_p}(0)\longleftrightarrow \mathcal{B}_{\delta L_p}(x_{1})\big\}\circ\cdots\\
&\quad\quad\quad\quad\quad\quad\quad\quad \cdots\circ\big\{\mathcal{B}_{\delta L_p}(x_{K-1})\longleftrightarrow \mathcal{B}_{\delta L_p}(x_{K})\big\}\Big]\\
&\le\left(\frac{2}{\delta}\right)^K\mathbb{P}_p[\mathcal{B}_{\delta L_p}(0)\longleftrightarrow \mathcal{B}_{\delta L_p}(x_{1})]^K\\
&\le\Big(\frac2{\delta}f(\lambda,\delta^{-1})^{1-\varepsilon}\Big)^K\le f(\lambda,\delta^{-1})^{(1-2\varepsilon)K}.\end{align*}
In the second inequality, we used the fact that the cardinality of $\Theta$ is bounded by $2\delta^{-1}$. In the last line, we used P1 and then P3. By taking the logarithm and letting $K$ go to infinity, we obtain that for $p_0<p<p_c$,
\begin{equation}\label{upper}\frac1{\tau_{p}(e^{i\theta})}\ge -(1-2\varepsilon)\frac{\log f(\lambda,\delta^{-1})}{L_p}.\end{equation}
Therefore, for every $p_0<p<p_c$, \eqref{lower} and \eqref{upper} imply that $1-4\varepsilon<\tau_p(e^{i\theta})/\tau_p(1)\le 1+5\varepsilon$ for any $\theta\in[0,2\pi)$.\end{proof}
\begin{figure}
\begin{center}
\includegraphics[width=1.00\textwidth]{events}\end{center}
\caption{\label{fig:events}On the left, the events $\mathcal{E}_1$, $\mathcal{E}_2$ and $\mathcal{E}_3$. On the right, the construction corresponding to the upper bound.}
\end{figure}
We now prove Proposition~\ref{conditions}. Property P1 will be guaranteed by the fact that we set $L_p$ in such a way that
\begin{equation}\label{eq:def}p=p_c-\lambda r(\delta L_p)\end{equation} (we should be careful about the rounding operation since $r$ takes only discrete values, but this is easily shown to be irrelevant). P1 therefore follows from Proposition~\ref{prop:inv} and any choice of $\delta,\lambda$, by setting $L_p$ as in \eqref{eq:def}. The only mild condition is that $L_p$ is large enough, or equivalently that $p$ is close enough to $p_c$.
From now on, we will assume that $L_p$ is chosen as in \eqref{eq:def}. We need to explain how to choose $\delta$ and $\lambda$ in such a way that P2 and P3 are also satisfied.
Since properties P2 and P3 are conditions on the limit $f(\lambda,\delta^{-1})$, the proof boils down to finding $\lambda$ and $\delta$ in such a way that $f(\lambda,\delta^{-1})$ is small enough compared to $\mathbb{P}_{p}\big[\mathcal{C}_{\rm circuit}(\delta L_p,2\delta L_p)\big]$ and $\delta$.
\begin{remark} Taking $\delta$ small enough is not sufficient since, in that case, $f(\lambda,\delta^{-1})$ could a priori become larger than $\delta/2$. This is what happens when $\lambda=0$, since this quantity tends to zero as $\delta^{10/48}$. We will prove that for $\lambda$ large enough, $f(\lambda,\delta^{-1})$ tends to zero (as $\delta$ tends to zero) exponentially fast in $\delta^{-1}$.
\end{remark}
Let us start with two simple lemmata.
\begin{lemma}\label{lem:1}
Fix $p\in (0,1)$ and $n,k>0$. Then,
$$28\, \mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,2^{k}n]\times[0,2^{k+1}n])\big]~\le~ \Big(28\,\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\Big)^{2^{k}}.$$\end{lemma}
Even though this fact is classical, the proof is elegant and short, so we choose to include it here.
\begin{proof}
Let $n>0$ and consider the parallelograms $P_1,\dots,P_8$ defined by
\begin{center}
\begin{tabular}{ll}
$P_1=[0,n]\times[0,2n]$,\quad\quad & $P_2=[0,n]\times[n,3n]$, \\
$P_3=[0,n]\times[2n,4n]$,\quad\quad & $P_4=[n,2n]\times[0,2n]$,\\
$P_5=[n,2n]\times[n,3n]$,\quad\quad & $P_6=[n,2n]\times[2n,4n]$,\\
$P_7=[0,2n]\times[n,2n]$,\quad\quad & $P_8=[0,2n]\times[2n,3n]$.
\end{tabular}
\end{center}
These parallelograms have the property that whenever
there exists an open path from $\{0\}\times[0,4n]$ to $\{2n\}\times[0,4n]$, there exist (at least) two disjoint open paths, each crossing one of the parallelograms $P_i$. In other words, the event $\mathcal{C}_{\rm hor}([0,2n]\times[0,4n])$ is included in the event that two of the eight parallelograms $P_i$ contain an open crossing in the easy direction (meaning vertically if the base is larger than the height, and horizontally otherwise), and that these two crossing events occur disjointly.
The BK inequality (see \cite[Theorem 2.12]{Gri99}) implies that
\begin{align*}
\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,2n]\times[0,4n])\big] & \le
\binom 8 2~\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]^2\\
&=28\,\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]^2.
\end{align*}
The lemma follows by applying this inequality iteratively to integers of the form $2^jn$ for $0\le j\le k-1$.
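To spell out the iteration (the shorthand $u(m)$ is introduced here only for this remark): set $u(m):=28\,\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,m]\times[0,2m])\big]$. Applied with $n=m$, the previous display states that $u(2m)\le u(m)^2$, and applying it successively at $m=n,2n,\dots,2^{k-1}n$ gives
$$u(2^{k}n)\le u(2^{k-1}n)^2\le \cdots\le u(n)^{2^{k}},$$
which is exactly the claim of the lemma.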
\end{proof}
The next lemma is a standard application of the RSW theorem; we do not recall the proof here.
\begin{lemma}\label{lem:4}
There exist $0<p_0<p_c$ and $c_3,c_4>0$ such that for any $p\in[p_0,1-p_0]$ and $n\ge 0$,
$$\mathbb{P}_{p}\big[\mathcal{C}_{\rm circuit}(n,2n)\big]\ge c_3\,\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]^{c_4}.$$
\end{lemma}
The previous lemma provides us with a lower bound on $\mathbb{P}_{p}\big[\mathcal{C}_{\rm circuit}(n,2n)\big]$ in terms of $\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]$, while the next lemma provides us with an upper bound on $\mathbb{P}_{p}\big[\mathcal{B}_n(0)\longleftrightarrow \mathcal{B}_n(x)\big]$ in terms of the same quantity.
\begin{lemma}\label{lem:2}
There exist $0<p_0<p_c$ and $c_5,c_6>0$ such that for any $p\in[p_0,1-p_0]$, for any $n \ge 0$ and $\delta>0$,
$$\mathbb{P}_{p}\big[\mathcal{B}_n(0)\longleftrightarrow \mathcal{B}_n(n/\delta)\big]\le \Big(c_5\,\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\Big)^{c_6/\delta}.$$\end{lemma}
\begin{proof}
For $n<m$, let $\mathcal{C}_{\rm in/out}(n,m)$ be the event that there exists an open path between $[-n,n]^2$ and $\mathbb{T}\setminus [-m,m]^2$.
The RSW theorem implies that
$$\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,4n])\big]\le c_1\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]^{c_2}.$$
Since one of the four parallelograms $[-2n,2n]\times[-2n,-n]$, $[-2n,2n]\times[n,2n]$, $[-2n,-n]\times[-2n,2n]$ and $[n,2n]\times[-2n,2n]$ must be crossed if the annulus $[-2n,2n]^2\setminus[-n,n]^2$ contains an open path from the interior to the exterior, we obtain
$$\mathbb{P}_p\big[\mathcal{C}_{\rm in/out}(n,2n)\big]\le 4c_1\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]^{c_2}.$$
We find immediately that for any $k>0$,
\begin{align*}\mathbb{P}_p\big[\mathcal{C}_{\rm in/out}(n,2^kn)\big]&\le \prod_{j=0}^{k-1} \mathbb{P}_p\big[\mathcal{C}_{\rm in/out}(2^jn,2^{j+1}n)\big]\\
&\le \prod_{j=0}^{k-1}4c_1\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,2^jn]\times[0,2^{j+1}n])\big]^{c_2}\\
&\le \prod_{j=0}^{k-1}\frac{4c_1}{28}\Big(\big(28\,\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\big)^{2^j}\Big)^{c_2}\\
&\le \Big(c_7\,\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\Big)^{c_22^k}\end{align*}
where $c_7>0$ is large enough. We have used independence in the first inequality. In the third inequality, we used Lemma~\ref{lem:1}.
\medbreak
Let $2^k\le \delta^{-1}<2^{k+1}$. Since
$$\big[-2^{k-2}n,2^{k-2}n\big]^2\setminus\big[-n,n\big]^2\subset \mathcal{B}_{n/(2\delta)}(0)\setminus \mathcal{B}_n(0),$$ we immediately get
\begin{align*}\mathbb{P}_p\big[\mathcal{B}_n(0)\longleftrightarrow \mathcal{B}_n(n/\delta)\big]&\le \mathbb{P}_p\big[\mathcal{C}_{\rm in/out}(n,2^{k-2}n)\big]\\
&\le \Big(c_7\,\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\Big)^{c_22^{k-2}}\\
&\le \Big(c_7\,\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\Big)^{c_2/(8\delta)}\end{align*}
and the claim follows, since $2^{k-2}\ge 1/(8\delta)$ and the claimed bound is trivial when $c_7\,\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\ge 1$.\end{proof}
Now, fix $\mu=\mu(c_3,c_4,c_5)>0$ small enough that $c_5\mu\le \mu^{1/2}\le 1/2$ and $\mu^{c_4}\le c_3$.
For any $\varepsilon>0$, the two previous lemmata show the existence of $\delta=\delta(\mu,\varepsilon)>0$ such that $$\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\le \mu$$ implies that
$$\begin{cases}\delta\ge \mathbb{P}_p\big[\mathcal{B}_n(0)\longleftrightarrow \mathcal{B}_n(n/\delta)\big]^{\varepsilon/(1-\varepsilon)},&\,\\
\mathbb{P}_{p}\big[\mathcal{C}_{\rm circuit}(n,2n)\big]\ge\mathbb{P}_p\big[\mathcal{B}_n(0)\longleftrightarrow \mathcal{B}_n(n/\delta)\big]^{\varepsilon/(1-\varepsilon)}.&\, \end{cases}$$
In particular, if the previous implication can be applied with $p$ and $L_p$ such that $p=p_c-\lambda r(\delta L_p)$ (for a well-chosen $\lambda$), then P1, P2 and P3 are satisfied and Proposition~\ref{conditions} is proved. Therefore, it remains to prove that there exists $\lambda=\lambda(\mu,\delta)>0$ such that
$$\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,\delta L_p]\times[0,2\delta L_p])\big]\le \mu$$
for $p$ close enough to $p_c$, or equivalently, such that
\begin{equation}\label{ert} \limsup_{n\rightarrow \infty}\,\mathbb{P}_{p_c-\lambda r(n)}\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]<\mu.\end{equation}
This is the object of our last lemma, which concludes the proof of Proposition~\ref{conditions}.
\begin{lemma}\label{lem:8}
Let $\mu>0$. There exists $\lambda>0$ such that
\begin{equation} \limsup_{n\rightarrow \infty}\,\mathbb{P}_{p_c-\lambda r(n)}\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]<\mu.\end{equation}
\end{lemma}
Note that the $\limsup$ above is in fact a limit by \cite{GPSa}, but this fact is of no relevance here. While the proof is fairly easy, we justify it in detail. The main ingredient is the quasi-multiplicativity property for near-critical percolation.
\begin{proof}
Fix $\mu>0$ and let $\lambda>0$ be a constant to be fixed later. Fix $0<\eta<1/28$ and $p\in(p_0,1-p_0)$ with $p_0$ defined in Lemma~\ref{lem:2}. If
$$\mathbb{P}_{p}\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]<\eta,$$
then Lemma~\ref{lem:1} implies that for any $k\ge 1$,
$$\mathbb{P}_{p}\big[\mathcal{C}_{\rm hor}([0,2^{k}n]\times[0,2^{k+1}n])\big]\le \frac1{28}\Big(28\eta\Big)^{2^{k}}\le \eta.$$
By RSW, we get that for $2^k n\le m\le 2^{k+1}n$,
\begin{align*}\mathbb{P}_{p}\big[\mathcal{C}_{\rm hor}([0,m]\times[0,2m])\big]&\le \mathbb{P}_{p}\big[\mathcal{C}_{\rm hor}([0,2^kn]\times[0,2^{k+2}n])\big]\\
&\le c_1\mathbb{P}_{p}\big[\mathcal{C}_{\rm hor}([0,2^kn]\times[0,2^{k+1}n])\big]^{c_2}\\
&\le c_1\eta^{c_2}.\end{align*}
In conclusion, if $0<\eta\ll \mu$ are chosen in such a way that $\mu>c_1\eta^{c_2}$ and $\eta<1/28$, we find that
$\mathbb{P}_{p}\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]<\eta$ implies that $\mathbb{P}_{p}\big[\mathcal{C}_{\rm hor}([0,m]\times[0,2m])\big]<\mu$ for any $m\ge n$.
\medbreak
Define $$L_\eta(p)=\inf\Big\{n\ge 0:\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]<\eta\Big\}.$$
The assumption that $\mathbb{P}_{p_c-\lambda r(n)}\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\ge \mu$ boils down to the assumption that $L_\eta(p)\ge n$ for every $p_c-\lambda r(n)<p<p_c$. We use this formulation from now on in order to bound the derivative of $\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]$ from below.
Russo's formula implies that
\begin{align}\label{Russo}\frac{\rm d}{{\rm d}p}\mathbb{P}_p&\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\\
&=\sum_{v\in[0,n]\times[0,2n]}\mathbb{P}_p\big[v\text{ is pivotal for }\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big],\nonumber\end{align}
where $v$ is pivotal for an event $A$ in a configuration $\omega$ if the following is true: $\omega\in A$ if the site $v$ is switched to open and $\omega\notin A$ if it is switched to closed. In our case, $v$ is pivotal if there exist four disjoint arms, two open ones going from neighbors of $v$ to the left and right sides of $[0,n]\times[0,2n]$, and two closed ones going from neighbors of $v$ to the top and bottom sides of $[0,n]\times[0,2n]$. Since $n\le L_\eta(p)$, classical properties of arm-events (namely the extendability and quasimultiplicativity, see \cite[Propositions~15 and 16]{Nol08}) imply the existence of $c_8=c_8(\eta)>0$ such that
\begin{equation}\label{first bound}\mathbb{P}_{p}[v\text{ is pivotal for }\mathcal{C}_{\rm hor}([0,n]\times[0,2n])]\ge c_8 \mathbb{P}_{p}[\mathcal{A}_4(1,n)]\end{equation}
for any $v\in [\frac n3,\frac{2n}3]\times[\frac n2,n]$ and for any $p>p_c-\lambda r(n)$.
It is also well-known that below $L_\eta(p)$, arm-exponents do not vary (see e.g. \cite[Theorem~26]{Nol08} for the case of the triangular lattice), so that there exists $c_9=c_9(\eta)>0$ such that
\begin{equation}\label{second bound}\mathbb{P}_{p}[\mathcal{A}_4(1,n)]\ge c_9 \mathbb{P}_{p_c}[\mathcal{A}_4(1,n)]=c_9 \frac{1}{n^2r(n)}\end{equation}
for any $p_c>p>p_c-\lambda r(n)$.
Putting \eqref{Russo}, \eqref{first bound} and \eqref{second bound} together, we obtain that
$$\frac{\rm d}{{\rm d}p}\mathbb{P}_p\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\ge c_8c_9\sum_{v\in [\frac n3,\frac{2n}3]\times[\frac n2,n]} \frac{1}{n^2r(n)}= \frac{c_8c_9}{6r(n)}.$$
Integrating this inequality between $p_c-\lambda r(n)$ and $p_c$, we find that
$$\mathbb{P}_{p_c-\lambda r(n)}\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\le 1-\frac{c_8c_9\lambda}{6}.$$
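To make the integration step explicit (a brief justification of the previous display): the derivative bound holds for every $p\in(p_c-\lambda r(n),p_c)$ under the standing assumption $L_\eta(p)\ge n$, so
$$\mathbb{P}_{p_c}\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]-\mathbb{P}_{p_c-\lambda r(n)}\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\ge \lambda r(n)\cdot\frac{c_8c_9}{6r(n)}=\frac{c_8c_9\lambda}{6},$$
and bounding the probability at $p_c$ by $1$ gives the previous inequality.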
Since $c_8$ and $c_9$ depend only on $\mu$ (since they are functions of $\eta$), we can finally conclude that for $\lambda>\frac{6}{c_8c_9}$ the bound $1-\frac{c_8c_9\lambda}{6}$ is negative, which is incompatible with the assumption that this crossing probability is at least $\mu$; hence
$$\mathbb{P}_{p_c-\lambda r(n)}\big[\mathcal{C}_{\rm hor}([0,n]\times[0,2n])\big]\le \mu.$$
\end{proof}
\paragraph{Acknowledgements.} The author was supported by the ERC AG CONFRA, as well as by the Swiss
{FNS}. The author would like to thank Nicolas Curien for a very interesting and useful discussion, and for comments on the manuscript. The author would also like to thank Itai Benjamini, Christophe Garban, G\'abor Pete and Alan Hammond for stimulating discussions.
\bibliographystyle{amsalpha}
\bibliography{bibli}
\begin{flushright}
\footnotesize\obeylines
\textsc{Universit\'e de Gen\`eve}
\textsc{Gen\`eve, Switzerland}
\textsc{E-mail:} \texttt{[email protected]}
\end{flushright}
\end{document}
\begin{document}
\address{Graduate School of Pure and Applied Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba, Ibaraki, 305-8571, Japan}
\email{[email protected]}
\subjclass[2010]{11J61, 11J70, 11J82}
\keywords{Diophantine approximation; continued fractions; the Laurent series over a finite field.}
\begin{abstract}
In this paper, we study Diophantine exponents $w_n$ and $w_n ^{*}$ for Laurent series over a finite field.
In particular, we deal with the case $n=2$, that is, quadratic approximation.
We first show that the range of the function $w_2-w_2 ^{*}$ is exactly the closed interval $[0,1]$.
Next, we give an upper bound for the exponent $w_2$ of continued fractions whose sequence of partial quotients has low complexity.
\end{abstract}
\title{Quadratic approximation in $\lF _q((T^{-1}))$}
\tableofcontents
\section{Introduction}
Let $p$ be a prime and $q$ be a power of $p$.
We denote by $\lF _q$ the finite field with $q$ elements, $\lF _q[T]$ the ring of polynomials over $\lF _q$, $\lF _q(T)$ the field of rational functions over $\lF _q$, and $\lF _q((T^{-1}))$ the field of Laurent series over $\lF _q$.
For $\xi \in \lF _q((T^{-1})) \setminus \{ 0\}$, we can write
\begin{eqnarray*}
\xi = \sum _{n=N}^{\infty }a_n T^{-n},
\end{eqnarray*}
where $N\in \lZ $, $a_n \in \lF _q$, and $a_N \neq 0$.
We define the absolute value on $\lF _q ((T^{-1}))$ by $|0|:=0$ and $|\xi |:=q^{-N}$.
The absolute value extends uniquely to the algebraic closure of $\lF _q((T^{-1}))$, and we continue to write $|\cdot |$ for the extended absolute value.
Throughout this paper, we regard elements of $(\lF _q[T])[X]$ as polynomials in $X$.
For $P(X) \in (\lF _q [T])[X]$, the {\itshape height} of $P(X)$, denoted by $H(P)$, is defined to be the maximum of the absolute values of the coefficients of $P(X)$.
We denote by $(\lF _q[T])[X]_{\min }$ the set of non-constant irreducible primitive polynomials $P(X)\in (\lF _q[T])[X]$ whose leading coefficient is monic in $T$.
For $\alpha \in \overline{\lF _q(T)}$, there exists a unique polynomial $P(X)\in (\lF _q[T])[X]_{\min }$ such that $P(\alpha )=0$.
We call $P(X)$ the {\itshape minimal polynomial} of $\alpha $.
The {\itshape height} (resp.\ the {\itshape degree}, the {\itshape inseparable degree}) of $\alpha $, denoted by $H(\alpha )$ (resp.\ $\deg \alpha $, $\insep \alpha $), is defined to be the height of $P(X)$ (resp.\ the degree of $P(X)$, the inseparable degree of $P(X)$).
For $\xi \in \lF _q((T^{-1}))$ and integers $n, H\geq 1$, let $w_n (\xi ,H)$ and $w_n ^{*}(\xi ,H)$ be given by
\begin{gather*}
w_n (\xi ,H)=\min \{ |P(\xi )| \mid P(X)\in (\lF _q[T])[X], H(P)\leq H, \deg _X P\leq n, P(\xi )\neq 0 \} , \\
w_n ^{*}(\xi ,H)=\min \{ |\xi -\alpha | \mid \alpha \in \overline{\lF _q(T)}, H(\alpha )\leq H, \deg \alpha \leq n, \alpha \neq \xi \} .
\end{gather*}
The Diophantine exponents $w_n$ and $w_n ^{*}$ are defined by
\begin{gather*}
w_n(\xi )=\limsup _{H\rightarrow \infty } \frac{-\log w_n(\xi ,H)}{\log H},\quad w_n ^{*}(\xi )=\limsup _{H\rightarrow \infty } \frac{-\log \left( H\, w_n ^{*}(\xi ,H)\right)}{\log H}.
\end{gather*}
In other words, $w_n(\xi )$ (resp.\ $w_n ^{*}(\xi )$) is the supremum of the real numbers $w$ (resp.\ $w^{*}$) such that
\begin{eqnarray*}
0<|P(\xi )|\leq H(P)^{-w}\quad (\mbox{resp.\ } 0<|\xi -\alpha |\leq H(\alpha )^{-w^{*}-1})
\end{eqnarray*}
for infinitely many $P(X)\in (\lF _q[T])[X]$ of degree at most $n$ (resp.\ $\alpha \in \overline{\lF _q(T)}$ of degree at most $n$).
As in the classical theory of continued fractions of real numbers, if $\xi \in \lF _q((T^{-1}))\setminus \lF _q(T)$, then we can write
\begin{eqnarray*}
\xi = a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{\cdots }}},
\end{eqnarray*}
where $a_0, a_n \in \lF _q[T]$, $\deg a_n\geq 1$ for $n\geq 1$.
For simplicity of notation, we write $\xi =[a_0,a_1,a_2,\ldots ]$.
The polynomials $a_n$ ($n\geq 0$) are called the {\itshape partial quotients} of $\xi $.
We define $p_n$ and $q_n$ by
\begin{eqnarray*}
\begin{cases}
p_{-1}=1,\ p_0=a_0,\ p_n=a_n p_{n-1}+p_{n-2},\ n\geq 1,\\
q_{-1}=0,\ q_0=1,\ q_n=a_n q_{n-1}+q_{n-2},\ n\geq 1.
\end{cases}
\end{eqnarray*}
We call $(p_n/q_n)_{n\geq 0}$ the {\itshape convergent sequence} of $\xi $, and we have $p_n/q_n=[a_0,a_1,\ldots ,a_n]$ for $n\geq 0$ by induction on $n$.
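As a small illustrative example (introduced here only for illustration and not used later), take $a_0=0$, $a_1=T$, and $a_2=T+1$. The recursions give
\begin{eqnarray*}
p_0=0,\quad q_0=1,\quad p_1=1,\quad q_1=T,\quad p_2=T+1,\quad q_2=T^2+T+1,
\end{eqnarray*}
and indeed $[0,T,T+1]=1/(T+1/(T+1))=(T+1)/(T^2+T+1)=p_2/q_2$.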
In this paper, we study the difference of the Diophantine exponents $w_2$ and $w_2 ^{*}$ using continued fractions.
We denote by $\lfloor x\rfloor $ the integer part and $\lceil x\rceil $ the upper integer part of a real number $x$.
We explicitly construct continued fractions $\xi \in \lF _q((T^{-1}))$ for which $w_2(\xi )-w_2 ^{*}(\xi )=\delta $, for each $0<\delta \leq 1$, as follows:
\begin{thm}\label{main4}
Let $w$ be a real number which is greater than $(5+\sqrt{17})/2$ when $p\neq 2$, and greater than $(9+\sqrt{65})/2$ when $p=2$.
Let $b,c\in \lF _q[T]$ be distinct polynomials of degree at least one.
We define a sequence $(a_{n,w})_{n\geq 1}$ by
\begin{eqnarray*}
a_{n,w}=\begin{cases}
c & \mbox{if } n=\lfloor w^i \rfloor \mbox{ for some integer } i\geq 0,\\
b & \mbox{otherwise}.
\end{cases}
\end{eqnarray*}
Set $\xi _w :=[0,a_{1,w},a_{2,w},\ldots ]$.
Then we have $w_2 ^{*}(\xi _w)=w-1$ and $w_2(\xi _w)=w$.
\end{thm}
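For concreteness (an illustrative instance only, not used in the proofs): when $p\neq 2$, the choice $w=5>(5+\sqrt{17})/2$ is admissible, the sequence $(a_{n,5})_{n\geq 1}$ equals $c$ exactly at the indices $n\in \{ \lfloor 5^i\rfloor \mid i\geq 0\} =\{ 1,5,25,125,\ldots \}$ and equals $b$ elsewhere, and Theorem \ref{main4} gives $w_2 ^{*}(\xi _5)=4$ and $w_2(\xi _5)=5$.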
\begin{thm}\label{main5}
Let $w\geq 25$ be a real number and $b,c,d\in \lF _q[T]$ be distinct polynomials of degree at least one.
Let $0<\eta <\sqrt{w}/4$ be a positive number and put
\begin{eqnarray*}
m_i :=\left\lfloor \frac{\lfloor w^{i+1}\rfloor -\lfloor w^i-1\rfloor }{\lfloor \eta w^i\rfloor } \right\rfloor
\end{eqnarray*}
for all $i\geq 1$.
We define a sequence $(a_{n,w,\eta })_{n\geq 1}$ by
\begin{eqnarray*}
a_{n,w,\eta }=\begin{cases}
c & \mbox{if } n=\lfloor w^i \rfloor \mbox{ for some integer } i\geq 0,\\
d & \parbox{250pt}{$\mbox{if } n \neq \lfloor w^i\rfloor \mbox{ for all integer }i\geq 0 \mbox{ and } n=\lfloor w^j\rfloor +m \lfloor \eta w^j\rfloor \mbox{ for some integer }1\leq m\leq m_j, j\geq 1,$}\\
b & \mbox{otherwise}.
\end{cases}
\end{eqnarray*}
Set $\xi _{w,\eta } :=[0,a_{1,w,\eta },a_{2,w,\eta },\ldots ]$.
Then we have
\begin{align*}
w_2 ^{*}(\xi _{w,\eta })=\frac{2 w-2-\eta }{2+\eta },\quad w_2(\xi _{w,\eta })=\frac{2 w-\eta }{2+\eta }.
\end{align*}
Hence, we have
\begin{eqnarray*}
w_2(\xi _{w,\eta })-w_2 ^{*}(\xi _{w,\eta })=\frac{2}{2+\eta }.
\end{eqnarray*}
\end{thm}
Theorems \ref{main4} and \ref{main5} are analogues of Theorems 4.1 and 4.2 in \cite{Bugeaud2} and of Theorems 1 and 2 in \cite{Bugeaud3}, and they are proved by similar methods.
In Section \ref{Properties sec}, we prove
\begin{eqnarray*}
0\leq w_2(\xi )-w_2 ^{*}(\xi )\leq 1
\end{eqnarray*}
for all $\xi \in \lF _q((T^{-1}))$ (Proposition \ref{main7}).
We also prove $w_n (\xi )=w_n ^{*}(\xi )=0$ for all $n\geq 1$ and $\xi \in \lF _q(T)$ (Theorem \ref{alg}).
Consequently, we determine the range of the function $w_2-w_2 ^{*}$ from Theorem \ref{main4} and \ref{main5}.
\begin{cor}\label{main6}
The range of the function $w_2-w_2 ^{*}$ is exactly the closed interval $[0,1]$.
\end{cor}
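In brief, every value $\delta \in (0,1)$ is attained: taking $\eta =2/\delta -2>0$ and $w\geq 25$ large enough that $\eta <\sqrt{w}/4$, Theorem \ref{main5} yields $w_2(\xi _{w,\eta })-w_2 ^{*}(\xi _{w,\eta })=2/(2+\eta )=\delta $. The value $1$ is attained in Theorem \ref{main4}, and the value $0$ is attained by any $\xi \in \lF _q(T)$ by Theorem \ref{alg}.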
For $\xi \in \lF _q((T^{-1}))$, we set
\begin{eqnarray*}
w(\xi ) :=\limsup _{n\rightarrow \infty }\frac{w_n(\xi )}{n},\quad w^{*}(\xi ) :=\limsup _{n\rightarrow \infty }\frac{w_n ^{*}(\xi )}{n}.
\end{eqnarray*}
We say that $\xi $ is an
\begin{gather*}
A\mbox{-{\itshape number} if } w(\xi )=0;\\
S\mbox{-{\itshape number} if } 0<w(\xi )<+\infty ;\\
T\mbox{-{\itshape number} if } w(\xi )=+\infty \mbox{ and } w_n(\xi )<+\infty \mbox{ for all } n;\\
U\mbox{-{\itshape number} if } w(\xi )=+\infty \mbox{ and } w_n(\xi )=+\infty \mbox{ for some } n.
\end{gather*}
This classification of $\lF _q((T^{-1}))$ was first introduced by Bundschuh \cite{Bundschuh} and is called {\itshape Mahler's classification}.
Replacing $w_n$ and $w$ with $w_n ^{*}$ and $w^{*}$, we define $A^{*}$-, $S^{*}$-, $T^{*}$-, and $U^{*}$-numbers as above.
This classification of $\lF _q((T^{-1}))$ was first introduced by Bugeaud \cite[Section 9]{Bugeaud1} and is called {\itshape Koksma's classification}.
Let $n\geq 1$ be an integer, $\xi \in \lF _q((T^{-1}))$ be a $U$-number, and $\zeta \in \lF _q((T^{-1}))$ be a $U^{*}$-number.
The number $\xi $ (resp.\ the number $\zeta $) is called a $U_n$-{\itshape number} (resp.\ $U_n ^{*}$-{\itshape number}) if $w_n(\xi )$ is infinite and $w_m(\xi )$ is finite (resp.\ $w_n ^{*}(\zeta )$ is infinite and $w_m ^{*}(\zeta )$ is finite) for all $1\leq m<n$.
Let $\mathcal{A}$ be a finite set.
Let $\mathcal{A}^{*}$, $\mathcal{A}^{+}$, and $\mathcal{A}^{\lN }$ denote the set of finite words over $\mathcal{A}$, the set of nonempty finite words over $\mathcal{A}$, and the set of infinite words over $\mathcal{A}$, respectively.
We denote by $|W|$ the length of a finite word $W$ over $\mathcal{A}$.
For an integer $n\geq 0$, let $W^n=WW\cdots W$ ($n$ times repeated concatenation of the word $W$) and $\overline{W}=WW\cdots W\cdots $ (infinitely many concatenation of the word $W$).
Note that $W^0$ is equal to the empty word.
More generally, for a real number $w\geq 0$, let $W^{w}=W^{\lfloor w\rfloor}W'$, where $W'$ is the prefix of $W$ of length $\lceil (w-\lfloor w\rfloor )|W|\rceil $.
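For instance (an illustration of this notation only), if $W=01$ and $w=5/2$, then $\lfloor w\rfloor =2$ and $\lceil (w-\lfloor w\rfloor )|W|\rceil =1$, so $W^{5/2}=0101\cdot 0=01010$.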
Let ${\bf a}=(a_n)_{n\geq 0}$ be a sequence over $\mathcal{A}$.
We identify ${\bf a}$ with the infinite word $a_0 a_1 \cdots a_n \cdots $.
Let $\rho $ be a real number.
We say that ${\bf a}$ satisfies {\itshape Condition} $(*)_{\rho }$ if there exist sequences of finite words $(U_n)_{n\geq 1}$, $(V_n)_{n\geq 1}$ and a sequence of nonnegative real numbers $(w_n)_{n\geq 1}$ such that
\begin{enumerate}
\item[(i)] the word $U_n V_n ^{w_n}$ is the prefix of ${\bf a}$ for all $n\geq 1$,
\item[(ii)] $|U_n V_n ^{w_n}|/|U_n V_n|\geq \rho $ for all $n\geq 1$,
\item[(iii)] the sequence $(|V_n ^{w_n}|)_{n\geq 1}$ is strictly increasing.
\end{enumerate}
The {\itshape Diophantine exponent} of ${\bf a}$, first introduced in \cite{Adamczewski2} and denoted by $\Dio ({\bf a})$, is defined to be the supremum of the real numbers $\rho $ such that ${\bf a}$ satisfies Condition $(*)_{\rho }$.
It is obvious that
\begin{eqnarray*}
1 \leq \Dio ({\bf a}) \leq +\infty .
\end{eqnarray*}
The infinite word ${\bf a}$ is called {\itshape ultimately periodic} if there exist finite words $U\in \mathcal{A}^{*}$ and $V\in \mathcal{A}^{+}$ such that ${\bf a}=U\overline{V}$.
The {\itshape complexity function} of ${\bf a}$ is defined by
\begin{eqnarray*}
p({\bf a}, n)= \Card \{a_i a_{i+1}\ldots a_{i+n-1}\mid i\geq 0 \} ,\quad \mbox{for }\ n\geq 1.
\end{eqnarray*}
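For instance (an illustration only), the periodic infinite word ${\bf a}=\overline{01}=0101\cdots $ satisfies $p({\bf a},n)=2$ for every $n\geq 1$, since its only factors of length $n$ are the two shifts of the period.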
We now state the second set of main results.
\begin{thm}\label{main1}
Let $\kappa \geq 2, A\geq q$ be integers and ${\bf a}=(a_n)_{n\geq 1}$ be a sequence over $\lF _q[T]$ with $q\leq |a_n|\leq A$ for all $n \geq 1$.
Assume that there exists an integer $n_0\geq 1$ such that
\begin{eqnarray*}
p({\bf a}, n) \leq \kappa n \mbox{ for all } n\geq n_0,
\end{eqnarray*}
and the Diophantine exponent of ${\bf a}$ is finite.
Set $\xi :=[0,a_1,a_2,\ldots ]$.
Then we have
\begin{eqnarray}\label{main1.1}
w_2 (\xi ) \leq 128(2\kappa +1)^3 \Dio ({\bf a}) \left( \frac{\log A}{\log q}\right) ^4 .
\end{eqnarray}
In particular, if the sequence $(|q_n|^{1/n})_{n\geq 1}$ converges, then we have
\begin{eqnarray}\label{main1.2}
w_2 (\xi ) \leq 64(2\kappa +1)^3 \Dio ({\bf a}).
\end{eqnarray}
\end{thm}
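For orientation (a simple numerical instance of \eqref{main1.1}, not used elsewhere): if $\kappa =2$, then $128(2\kappa +1)^3=128\cdot 125=16000$, so \eqref{main1.1} reads $w_2(\xi )\leq 16000\, \Dio ({\bf a})\left( \log A/\log q\right) ^4$; this is the constant appearing in the Sturmian case in Section \ref{Automatic sec}.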
There are special sequences which satisfy the assumptions of Theorem \ref{main1}, for example automatic sequences, primitive morphic sequences, and Sturmian sequences under a suitable condition.
The details appear in Section \ref{Automatic sec}.
\begin{thm}\label{main2}
Let ${\bf a}=(a_n)_{n\geq 1}$ be a non-ultimately periodic sequence over $\lF _q[T]$ with $\deg a_n \geq 1$ for all $n\geq 1$.
Assume that $(|q_n |^{1/n})_{n\geq 1}$ is bounded.
Put
\begin{eqnarray*}
m:= \liminf_{n\rightarrow \infty } |q_n|^{1/n},\quad M:=\limsup_{n\rightarrow \infty} |q_n|^{1/n}.
\end{eqnarray*}
Set $\xi :=[0,a_1,a_2,\ldots ]$.
Then we have
\begin{eqnarray}\label{lDio}
w_2(\xi )\geq w_2 ^{*}(\xi ) \geq \max \left( 2,\frac{\log m}{\log M} \Dio ({\bf a})-1\right).
\end{eqnarray}
In particular, if the sequence $(|q_n|^{1/n})_{n\geq 1}$ converges, then we have
\begin{eqnarray*}
w_2(\xi )\geq w_2 ^{*}(\xi ) \geq \max (2,\Dio ({\bf a})-1).
\end{eqnarray*}
Furthermore, assume that the sequence $(|a_n|)_{n\geq 1}$ is bounded.
Then we have
\begin{eqnarray}\label{lDio2}
w_2(\xi )\geq \max \left( 2,\frac{\log m}{\log M}( \Dio ({\bf a})+1)-1\right).
\end{eqnarray}
In particular, if the sequence $(|q_n|^{1/n})_{n\geq 1}$ converges, then we have
\begin{eqnarray*}
w_2(\xi )\geq \max (2,\Dio ({\bf a})).
\end{eqnarray*}
\end{thm}
Theorems \ref{main1} and \ref{main2} are analogues of Theorems 2.2 and 2.3 in \cite{Bugeaud2} and are proved by a similar method.
We state an immediate consequence of Theorems \ref{main1} and \ref{main2}.
\begin{cor}\label{main3}
Let ${\bf a}=(a_n)_{n\geq 1}$ be a non-ultimately periodic sequence over $\lF _q[T]$ with $\deg a_n\geq 1$ for $n\geq 1$.
Assume that $(|a_n|)_{n\geq 1}$ is bounded and
\begin{eqnarray*}
\limsup_{n\rightarrow \infty } \frac{p({\bf a}, n)}{n}<+\infty .
\end{eqnarray*}
Set $\xi :=[0,a_1, a_2,\ldots ]$.
Then the Diophantine exponent of ${\bf a}$ is finite if and only if $\xi $ is not a $U_2$-number.
\end{cor}
We use the Vinogradov notation $A\ll B$ (resp.\ $A\ll _a B$) if $|A|\leq c |B|$ with some constant (resp.\ some constant depending at most on $a$) $c>0$.
We write $A\asymp B$ (resp.\ $A\asymp _a B$) if $A\ll B$ and $B\ll A$ (resp.\ $A\ll _a B$ and $B\ll _a A$) hold.
This paper is organized as follows.
In Section \ref{Automatic sec}, we define special sequences and apply Theorem \ref{main1} to these sequences.
In Section \ref{Liouville sec}, we prove Liouville inequalities, that is, nontrivial lower bounds for the absolute value of the difference of two algebraic numbers and for the absolute value of a polynomial evaluated at an algebraic point.
In Section \ref{Continued sec}, we prove some lemmas with respect to continued fractions.
In Section \ref{Properties sec}, we study the Diophantine exponents $w_n$ and $w_n ^{*}$.
In Section \ref{Applications sec}, applying Liouville inequality, we prove lemmas to determine the value of $w_2$ and $w_2 ^{*}$.
In Section \ref{Combinational sec}, we prove a combinatorial lemma used in the proof of Theorem \ref{main1}.
In Section \ref{Proof sec}, we prove Theorem \ref{main4}, \ref{main5}, \ref{main1}, and \ref{main2}.
In Appendix \ref{Rational sec}, we prove analogues of Theorems \ref{main1} and \ref{main2} for Laurent series over a finite field.
\section{Application of the main results}\label{Automatic sec}
In this section, we first recall properties of special sequences.
For a deeper discussion, we refer the reader to \cite{Allouche}.
Let $k\geq 2$ be an integer.
We denote by $\Sigma _k$ the set $\{ 0,1,\ldots ,k-1 \}$.
A $k$-{\itshape automaton} is a sextuple
\begin{eqnarray*}
A=(Q, \Sigma _k, \delta , q_0, \Delta ,\tau ),
\end{eqnarray*}
where $Q$ is a finite set, $\delta :Q\times \Sigma _k\rightarrow Q$ is a map, $q_0 \in Q$, $\Delta $ is a set, and $\tau :Q\rightarrow \Delta $ is a map.
For $q\in Q$ and a finite word $W=w_0 w_1 \cdots w_m$ over $\Sigma _k$, we define recursively $\delta (q,W)$ by $\delta (q,W)=\delta (\delta (q,w_0 w_1\cdots w_{m-1}), w_m)$.
Let $n\geq 0$ be an integer and $W_n=w_r w_{r-1}\cdots w_0$, where $\sum _{i=0}^{r}w_i k^i$ is the $k$-ary expansion of $n$.
A sequence ${\bf a}=(a_n)_{n\geq 0}$ is said to be $k$-{\itshape automatic} if there exists a $k$-automaton $A=(Q, \Sigma _k, \delta , q_0, \Delta ,\tau )$ such that $a_n=\tau (\delta (q_0,W_n))$ for all $n\geq 0$.
The $k$-{\itshape kernel} of a sequence ${\bf a}=(a_n)_{n\geq 0}$ is the set of all sequences $(a_{k^i n+j})_{n\geq 0}$, where $i\geq 0$ and $0\leq j<k^i$.
The following characterization of $k$-automatic sequences is known:
\begin{thm}[Eilenberg \cite{Eilenberg}]
Let $k\geq 2$ be an integer.
Then a sequence is $k$-automatic if and only if its $k$-kernel is finite.
\end{thm}
\begin{lem}[Adamczewski and Cassaigne \cite{Adamczewski1}]
Let $k\geq 2$ be an integer.
Let ${\bf a}$ be a non-ultimately periodic and $k$-automatic sequence.
Let $m$ be the cardinality of the $k$-kernel of ${\bf a}$.
Then we have
\begin{eqnarray*}
\Dio ({\bf a})<k^m.
\end{eqnarray*}
\end{lem}
Let $\mathcal{A}$ and $\mathcal{B}$ be finite sets.
A map $\sigma :\mathcal{A}^{*}\rightarrow \mathcal{B}^{*}$ is a {\itshape morphism} if $\sigma (UV)=\sigma (U)\sigma (V)$ for all $U,V \in \mathcal{A}^{*}$.
The {\itshape width} of $\sigma $ is defined to be $\max _{a\in \mathcal{A}}|\sigma (a)|$.
For an integer $k\geq 1$, a morphism $\sigma $ is said to be $k$-{\itshape uniform} if $|\sigma (a)|=k$ for all $a\in \mathcal{A}$.
In particular, we call a $1$-uniform morphism a {\itshape coding}.
A morphism $\sigma :\mathcal{A}^{*}\rightarrow \mathcal{A}^{*}$ is {\itshape primitive} if there exists an integer $n\geq 1$ such that $a$ occurs in $\sigma ^n(b)$ for all $a,b \in \mathcal{A}$.
A morphism $\sigma :\mathcal{A}^{*}\rightarrow \mathcal{A}^{*}$ is {\itshape prolongable} on $a\in \mathcal{A}$ if $\sigma (a)=aW,$ where $W\in \mathcal{A}^{+}$ and $\sigma ^n(W)$ is not an empty word for all $n\geq 1$.
A sequence ${\bf a}=(a_n)_{n\geq 0}$ is said to be $k$-{\itshape uniform morphic} (resp.\ {\itshape primitive morphic}) if there exist finite sets $\mathcal{A},\mathcal{B}$, a $k$-uniform morphism (resp.\ a primitive morphism) $\sigma :\mathcal{A}^{*}\rightarrow \mathcal{A}^{*}$ which is prolongable on some $a \in\mathcal{A}$, and a coding $\tau :\mathcal{A}^{*}\rightarrow \mathcal{B}^{*}$ such that ${\bf a}=\lim_{n\rightarrow \infty }\tau (\sigma ^n(a))$.
When ${\bf a}$ is $k$-uniform morphic, we call $\mathcal{A}$ the {\itshape initial alphabet} associated with ${\bf a}$.
\begin{thm}[Cobham \cite{Cobham}]
Let $k\geq 2$ be an integer.
Then a sequence is $k$-automatic if and only if it is $k$-uniform morphic.
\end{thm}
Moss\'e's result \cite{Mosse} implies the lemma below.
\begin{lem}
Let ${\bf a}$ be a non-ultimately periodic and primitive morphic sequence.
Then the Diophantine exponent of ${\bf a}$ is finite.
\end{lem}
Let $\theta $ and $\rho $ be real numbers with $0<\theta <1$ and $\theta $ irrational.
For $n\geq 1$, we put $s_{n,\theta ,\rho }:=\lfloor (n+1)\theta +\rho \rfloor - \lfloor n\theta +\rho \rfloor $ and $s_{n, \theta ,\rho } ':=\lceil (n+1)\theta +\rho \rceil -\lceil n\theta +\rho \rceil $.
A sequence ${\bf a}=(a_n)_{n\geq 1}$ is called {\itshape Sturmian} if there exist an irrational number $0<\theta <1$, a real number $\rho $, a finite set $\mathcal{A}$, and a coding $\tau :\{ 0,1\} ^{*}\rightarrow \mathcal{A}^{*}$ with $\tau (0)\neq \tau (1)$ such that $(a_n)_{n\geq 1}$ is $(\tau (s_{n,\theta ,\rho }))_{n\geq 1}$ or $(\tau (s_{n,\theta ,\rho } '))_{n\geq 1}$.
We call the irrational number $\theta $ the {\itshape slope} of ${\bf a}$ and the real number $\rho $ the {\itshape intercept} of ${\bf a}$.
\begin{lem}[Adamczewski and Bugeaud \cite{Adamczewski4}]
Let ${\bf a}$ be a Sturmian sequence.
Then the slope of ${\bf a}$ has bounded partial quotients if and only if the Diophantine exponent of ${\bf a}$ is finite.
\end{lem}
It is known that automatic sequences, primitive morphic sequences, and Sturmian sequences have low complexity.
\begin{lem}
Let $k\geq 2$ be an integer and ${\bf a}=(a_n)_{n\geq 0}$ be a $k$-automatic sequence.
Let $d$ be the cardinality of the initial alphabet associated with ${\bf a}$.
Then we have
\begin{eqnarray*}
p({\bf a}, n) \leq k d^2 n, \mbox{ for } n\geq 1.
\end{eqnarray*}
\end{lem}
\begin{proof}
See \cite[Theorem 10.3.1]{Allouche} or \cite{Cobham}.
\end{proof}
\begin{lem}
Let ${\bf a}=(a_n)_{n\geq 0}$ be a primitive morphic sequence over a finite set of cardinality $b\geq 2$.
Let $v$ be the width of $\sigma $ which generates the sequence ${\bf a}$.
Then we have
\begin{eqnarray*}
p({\bf a},n)\leq 2 v^{4b-2} b^3 n, \mbox{ for } n\geq 1.
\end{eqnarray*}
\end{lem}
\begin{proof}
See \cite[Theorem 10.4.12]{Allouche}.
\end{proof}
\begin{lem}
Let ${\bf a}$ be a Sturmian sequence.
Then we have
\begin{eqnarray*}
p({\bf a}, n)=n+1, \mbox{ for } n\geq 1.
\end{eqnarray*}
\end{lem}
\begin{proof}
See \cite[Theorem 10.5.8]{Allouche}.
\end{proof}
Consequently, by Theorems \ref{main1} and \ref{main2}, we obtain upper bounds for $w_2$ of automatic, primitive morphic, or Sturmian continued fractions as follows:
\begin{thm}
Let $k\geq 2$ be an integer.
Let ${\bf a}=(a_n)_{n\geq 0}$ be a non-ultimately periodic and $k$-automatic sequence over $\lF _q[T]$ with $\deg a_n \geq 1$ for all $n\geq 0$.
Let $A$ be an upper bound of the sequence $(|a_n|)_{n\geq 0}$, $m$ be the cardinality of the $k$-kernel of ${\bf a}$, and $d$ be the cardinality of the initial alphabet associated with ${\bf a}$.
Set $\xi :=[0,a_0,a_1,\ldots ]$.
Then we have
\begin{eqnarray*}
w_2 (\xi )\leq 128 (2 k d^2+1)^3 k^m \left( \frac{\log A}{\log q} \right) ^4.
\end{eqnarray*}
\end{thm}
\begin{thm}
Let ${\bf a}=(a_n)_{n\geq 0}$ be a non-ultimately periodic, primitive morphic sequence over $\lF _q [T]$ with $\deg a_n \geq 1$ for all $n\geq 0$, which is generated by a primitive morphism $\sigma $ over a finite set of cardinality $b\geq 2$.
Let $v$ be the width of $\sigma $ and let $A$ be an upper bound of the sequence $(|a_n|)_{n\geq 0}$.
Set $\xi :=[0,a_0,a_1,\ldots ]$.
Then we have
\begin{eqnarray*}
w_2 (\xi )\leq 128 (4 v^{4 b-2}b^3+1)^3 \Dio ({\bf a}) \left( \frac{\log A}{\log q} \right) ^4.
\end{eqnarray*}
\end{thm}
\begin{thm}
Let ${\bf a}=(a_n)_{n\geq 0}$ be a non-ultimately periodic Sturmian sequence over $\lF _q [T]$ with $\deg a_n \geq 1$ for all $n\geq 0$, and let $A$ be an upper bound of the sequence $(|a_n|)_{n\geq 0}$.
Set $\xi :=[0,a_0,a_1,\ldots ]$.
Then we have
\begin{eqnarray*}
w_2 (\xi )\leq 16000\Dio ({\bf a}) \left( \frac{\log A}{\log q} \right) ^4
\end{eqnarray*}
if the slope of ${\bf a}$ has bounded partial quotients, and we have $w_2(\xi )=+\infty $ otherwise.
\end{thm}
\section{Liouville inequalities}\label{Liouville sec}
The following lemma is well known and easily verified.
\begin{lem}\label{height}
Let $P(X)$ be in $(\lF _q[T])[X]$.
Assume that $P(X)$ can be factorized as
\begin{eqnarray*}
P(X)=A\prod_{i=1}^{n} (X-\alpha _i),
\end{eqnarray*}
where $A\in \lF _q[T]$ and $\alpha _i \in \overline{\lF _q(T)}$ for $1\leq i\leq n$.
Then we have
\begin{align*}
H(P)=|A|\prod_{i=1}^{n} \max (1, |\alpha _i|).
\end{align*}
Furthermore, for $P(X), Q(X) \in (\lF _q[T])[X]$, we have
\begin{align*}
H(P Q)=H(P)H(Q).
\end{align*}
\end{lem}
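For instance (a toy example, chosen here only to illustrate the lemma), the polynomial $P(X)=TX^2-(T^2+1)X+T=T(X-T)(X-T^{-1})$ satisfies
\begin{eqnarray*}
H(P)=|T^2+1|=q^2=|T|\max (1,|T|)\max (1,|T^{-1}|).
\end{eqnarray*}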
The lemma below is an analogue of Theorem A.1 in \cite{Bugeaud1}.
\begin{prop}\label{Liouville inequ1}
Let $ P(X), Q(X) \in (\lF_q[T])[X]$ be non-constant polynomials of degree $m,n$, respectively.
Let $\alpha $ be a root of $P(X)$ of order $t$ and $\beta $ be a root of $Q(X)$ of order $u$.
Assume that $P(\beta )\neq 0$.
Then we have
\begin{eqnarray}\label{Liouville0}
|P(\beta )|\geq \max (1,|\beta |)^m H(P)^{-n/u+1} H(Q)^{-m/u}.
\end{eqnarray}
Furthermore, we have
\begin{gather}\label{Liouville1}
|\alpha -\beta |\geq \max (1, |\alpha |) \max (1, |\beta |) H(P)^{-n/t u} H(Q)^{-m/t u}.
\end{gather}
\end{prop}
\begin{proof}
Write $P(X)=A\prod_{i=1}^{r} (X-\alpha _i)^{t_i}$ and $Q(X)=B\prod_{i=1}^{s} (X-\beta _i)^{u_i}$, where $\alpha =\alpha _1, \beta =\beta _1, t=t_1 , u=u_1$, and $\alpha $'s (resp.\ $\beta $'s) are pairwise distinct.
Let $Q_1(X)=B_1\prod_{i=1}^{s_1} (X-\beta ^{(i)})^g$ be the minimal polynomial of $\beta $, where $\beta ^{(1)} =\beta , g=\insep \beta $, and $\beta ^{(i)}$'s are pairwise distinct.
Since $P$ and $Q_1$ do not have common roots, the resultant $\Res (P,Q_1)$ is non-zero and is in $\lF _q[T]$.
Therefore, by $H(Q_1)^{u/g} \leq H(Q)$ and $s_1 u\leq n$, we obtain
\begin{eqnarray*}
1 & \leq & |\Res (P,Q_1)| = |B_1|^m \prod_{i=1}^{s_1} |P(\beta ^{(i)})|^g \\
& \leq & |B_1|^m |P(\beta )|^g H(P)^{(s_1-1)g}\prod_{i=2}^{s_1} \max (1, |\beta ^{(i)}|)^{mg} \\
& = & |P(\beta )|^g H(P)^{(s_1-1)g} \left( \frac{H(Q_1)}{\max (1, |\beta |)^g} \right) ^m \\
& \leq & |P(\beta )|^g H(P)^{(n/u-1)g} H(Q)^{mg/u} \max (1, |\beta |)^{-mg}.
\end{eqnarray*}
As a result, we have (\ref{Liouville0}).
From Lemma \ref{height}, it follows that
\begin{eqnarray*}
|P(\beta )| & \leq & |\beta -\alpha |^t |A| \max (1, |\beta |)^{m-t}\prod_{i=2}^{r}\max (1, |\alpha _i|)^{t_i} \\
& = & |\beta -\alpha |^t H(P) \max (1, |\alpha |)^{-t} \max (1, |\beta |)^{m-t}.
\end{eqnarray*}
Hence, we have (\ref{Liouville1}) by (\ref{Liouville0}).
\end{proof}
The lemma below is an analogue of Theorem A.3 in \cite{Bugeaud1} and Lemma 2.3 in \cite{Pejkovic}.
\begin{lem}\label{Liouville inequ2}
Let $P(X) \in (\lF_q[T])[X]$ be an irreducible polynomial of degree $n \geq 2$.
For any distinct roots $\alpha ,\beta $ of $P(X)$, we have
\begin{eqnarray}\label{Liouville2}
|\alpha -\beta | \geq H(P)^{-n/f^2 +1/f},
\end{eqnarray}
where $f$ is the inseparable degree of $P(X)$.
\end{lem}
\begin{proof}
We can write $P(X)=A\prod_{i=1}^{m} (X-\alpha _i)^f ,$ where $\alpha _1=\alpha , \alpha _2=\beta $ and $\alpha $'s are pairwise distinct.
Put $Q(X):=A\prod_{i=1}^{m} (X-\alpha _i ^f)$.
Since $Q(X)$ is separable, the discriminant $\Disc (Q)$ is non-zero and is in $\lF _q[T]$.
Therefore, we obtain
\begin{eqnarray*}
1 & \leq & |\Disc (Q)|
\leq |\alpha ^f-\beta ^f|^2 |A|^{2m-2} \prod_{\stackrel{1\leq i<j\leq m}{(i,j)\neq (1,2)}}^{} \max (1, |\alpha _i ^f|)^2\max (1, |\alpha _j ^f|)^2 \\
& \leq & |\alpha -\beta |^{2f} H(Q)^{2m-2} .
\end{eqnarray*}
Hence, we have (\ref{Liouville2}) by $H(P)=H(Q)$ and $n=mf$.
\end{proof}
The following proposition is an analogue of Corollary A.2 in \cite{Bugeaud1} and Lemma 2.5 in \cite{Pejkovic}, and is an extension of Theorem 1 in \cite{Mahler3}.
\begin{prop}\label{Liouville}
Let $\alpha ,\beta \in \overline{\lF _q(T)}$ be distinct algebraic numbers of degree $m, n$ and inseparable degree $f,g$, respectively.
Then we have
\begin{eqnarray}
|\alpha -\beta |\geq \max (1,|\alpha |) \max (1,|\beta |) H(\alpha )^{-n/f g} H(\beta )^{-m/f g}.
\end{eqnarray}
\end{prop}
\begin{proof}
From (\ref{Liouville1}) and (\ref{Liouville2}), the above inequality immediately holds.
\end{proof}
Let $\alpha \in \overline{\lF _q(T)}$ be a quadratic number.
Then we denote by $\alpha '$ the Galois conjugate of $\alpha $ which is different from $\alpha $ if $\insep \alpha =1$, and itself if $\insep \alpha =2$.
The lemma below is an analogue of Lemma 3.2 in \cite{Pejkovic}.
\begin{lem}\label{Galois conj}
Let $\alpha \in \overline{\lF _q(T)}$ be a quadratic number.
If $\alpha \neq \alpha '$, then we have
\begin{eqnarray}
H(\alpha )^{-1}\leq |\alpha -\alpha '|\leq H(\alpha ).
\end{eqnarray}
\end{lem}
\begin{proof}
Let $P_{\alpha }(X)= AX^2+BX+C$ be the minimal polynomial of $\alpha $.
Then we have
\begin{gather*}
|\alpha -\alpha '| = \frac{|B^2-4AC|^{1/2}}{|A|} \leq \max (|B|, |AC|^{1/2}) \leq H(\alpha )\\
\intertext{and}
|\alpha -\alpha '|\geq \frac{1}{|A|} \geq H(\alpha )^{-1}.
\end{gather*}
\end{proof}
We give a better estimate than Proposition \ref{Liouville} in some cases, which is an analogue of Lemma 7.1 in \cite{Bugeaud2} and Lemma 4 in \cite{Bugeaud3}.
\begin{prop}\label{Liouvilleinequ3}
Let $\alpha ,\beta \in \overline{\lF _q(T)}$ be quadratic numbers.
We denote by $P_{\alpha }(X)=A(X-\alpha )(X-\alpha '), P_{\beta }(X)=B(X-\beta )(X-\beta ')$ the minimal polynomials of $\alpha ,\beta $, respectively.
If $\alpha \neq \alpha '$ and $P_{\alpha }(X)\neq P_{\beta }(X)$, then we have
\begin{eqnarray}\label{Liouville3}
|\alpha -\beta |\geq \max (1, |\alpha -\alpha '|^{-1}) H(\alpha )^{-2} H(\beta )^{-2}.
\end{eqnarray}
\end{prop}
\begin{proof}
By Proposition \ref{Liouville}, we may assume that $|\alpha -\alpha '|<1$.
Since $P_{\alpha }(X)$ and $P_{\beta }(X)$ do not have common roots, we have
\begin{eqnarray*}
1 & \leq & |\Res (P_{\alpha },P_{\beta })| = |B|^2 |P_{\alpha }(\beta )||P_{\alpha }(\beta ')| \\
& \leq & |AB^2||\alpha -\beta ||\alpha '-\beta |H(\alpha ) \max (1,|\beta '|)^2 \\
& \leq & |\alpha -\beta ||\alpha '-\beta |H(\alpha )^2 H(\beta )^2.
\end{eqnarray*}
In the case of $|\alpha '-\beta |>|\alpha -\beta |$, we have $|\alpha -\alpha '|=|\alpha '-\beta |$.
Hence, we get (\ref{Liouville3}).
Otherwise, using Lemma \ref{Galois conj}, we obtain
\begin{eqnarray*}
|\alpha -\beta |^2 & \geq & |\alpha -\beta ||\alpha '-\beta | \geq H(\alpha )^{-2}H(\beta )^{-2}
\geq |\alpha -\alpha '|^{-2} H(\alpha )^{-4}H(\beta )^{-4}.
\end{eqnarray*}
Therefore, we have (\ref{Liouville3}).
\end{proof}
\section{Continued fractions}\label{Continued sec}
We collect fundamental properties of continued fractions for Laurent series over a finite field.
The lemma below is immediate by induction on $n$.
\begin{lem}\label{fund}
Consider a continued fraction $\xi =[a_0,a_1,a_2, \ldots ]\in \lF_q((T^{-1}))$.
Let \\ $(p_n/q_n)_{n\geq 0}$ be the convergent sequence of $\xi $.
Then the following hold: for any $n\geq 0$,
\begin{enumerate}
\item[(i)] $q_n p_{n-1} -p_n q_{n-1} =(-1)^n$,
\item[(ii)] $(p_n,q_n)=1$,
\item[(iii)] $|q_n|=|a_1||a_2|\cdots |a_n|$,
\item[(iv)] $\xi = \frac{\xi _{n+1}p_n+p_{n-1}}{\xi _{n+1}q_n +q_{n-1}}$, where $\xi =[a_0,\ldots ,a_n,\xi _{n+1}]$,
\item[(v)] $\left| \xi -p_n/q_n \right| =|q_n|^{-1}|q_{n+1}|^{-1}=|a_{n+1}|^{-1}|q_n|^{-2}$,
\item[(vi)] $q_n/q_{n-1}=[a_n,a_{n-1},\ldots a_1]$.
\end{enumerate}
\end{lem}
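As a quick sanity check of (i) and (iii) (an illustration only), take $a_0=0$, $a_1=T$, $a_2=T+1$, so that $p_1=1$, $q_1=T$, $p_2=T+1$, and $q_2=T^2+T+1$: then $q_2p_1-p_2q_1=(T^2+T+1)-T(T+1)=1=(-1)^2$ and $|q_2|=q^2=|a_1||a_2|$.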
We recall an analogue of Lagrange's theorem for Laurent series over a finite field.
\begin{thm}\label{Lagrange}
Let $\xi $ be in $\lF _q((T^{-1}))$.
Then $\xi $ is quadratic if and only if its continued fraction expansion is ultimately periodic.
\end{thm}
\begin{proof}
See e.g. \cite[Theorem 3 and 4]{Chaichana}.
\end{proof}
The lemma below is immediate by Lemma \ref{fund}.
\begin{lem}\label{height upper}
Consider an ultimately periodic continued fraction
\begin{eqnarray*}
\xi =[0,a_1,\ldots ,a_r,\overline{a_{r+1},\ldots ,a_{r+s}}]\in \lF _q((T^{-1}))
\end{eqnarray*}
for $r\geq 0, s\geq 1.$
Let $(p_n/q_n)_{n\geq 0}$ be the convergent sequence of $\xi $.
Then $\xi $ is a root of the following equation:
\begin{eqnarray*}
(q_{r-1}q_{r+s}-q_r q_{r+s-1})X^2-(q_{r-1}p_{r+s}-q_r p_{r+s-1}+p_{r-1}q_{r+s}-p_r q_{r+s-1})X \\
+p_{r-1}p_{r+s}-p_r p_{r+s-1}=0,
\end{eqnarray*}
and we have $\ H(\xi )\leq |q_r q_{r+s}|$.
In particular, if $\xi =[0,\overline{a_1,\ldots ,a_s}]$, then $\xi $ is a root of the following equation:
\begin{eqnarray*}
q_{s-1}X^2-(p_{s-1}-q_s)X-p_s=0,
\end{eqnarray*}
and we have $H(\xi )\leq |q_s|$.
\end{lem}
\begin{lem}\label{lower bound}
Let $M\geq q$ be an integer and $\xi =[0, a_1,a_2,\ldots ], \zeta =[0,b_1,b_2,\ldots ] \in \lF _q((T^{-1}))$ be continued fractions with $|a_n|,|b_n|\leq M$ for all $n\geq 1$.
Assume that there exists an integer $n_0\geq 1$ such that $a_n=b_n$ for all $1\leq n\leq n_0$ and $a_{n_0+1}\neq b_{n_0+1}$.
Then we have
\begin{eqnarray*}
|\xi -\zeta |\geq \frac{1}{M^2|q_{n_0}|^2}.
\end{eqnarray*}
\end{lem}
\begin{proof}
See \cite[Lemma 3]{Adamczewski3}.
\end{proof}
\begin{lem}\label{conj}
For $n\geq 0$, consider an ultimately periodic continued fraction \\
$\xi =[\overline{a_0,a_1,\ldots ,a_n}] \in \lF _q((T^{-1}))$ with $\deg a_0\geq 1$.
Then we have
\begin{eqnarray*}
-\frac{1}{\xi '}=[\overline{a_n,a_{n-1},\ldots ,a_0}].
\end{eqnarray*}
\end{lem}
\begin{proof}
See the proof of Lemma 2 in \cite{Dubois1}.
\end{proof}
The following lemma is an analogue of Lemma 6.1 in \cite{Bugeaud2}.
\begin{lem}\label{conj2}
For $r,s\geq 1$, consider an ultimately periodic continued fraction \\
$\xi =[0,a_1,\ldots ,a_r,\overline{a_{r+1},\ldots ,a_{r+s}}] \in \lF _q((T^{-1}))$ with $a_r\neq a_{r+s}$.
Let $(p_n/q_n)_{n\geq 0}$ be the convergent sequence of $\xi $.
Then we have
\begin{eqnarray}\label{conj3}
\frac{\min (|a_r|, |a_{r+s}| )}{|q_r|^2}\leq |\xi -\xi '|\leq \frac{|a_r a_{r+s}|}{|q_r|^2}.
\end{eqnarray}
\end{lem}
\begin{proof}
Put $\tau :=[\overline{a_{r+1},\ldots ,a_{r+s}}]$.
By Lemma \ref{conj}, we have $\tau '=-[0,\overline{a_{r+s},\ldots ,a_{r+1}}]$.
Since
\begin{eqnarray*}
\xi =\frac{p_r \tau +p_{r-1}}{q_r \tau +q_{r-1}},\quad \xi ' =\frac{p_r \tau '+p_{r-1}}{q_r \tau '+q_{r-1}},
\end{eqnarray*}
we obtain
\begin{eqnarray*}
|\xi -\xi '|=\frac{|\tau -\tau '|}{|q_r \tau +q_{r-1}||q_r \tau '+q_{r-1}|}
\end{eqnarray*}
by Lemma \ref{fund} (i).
We see $|\tau -\tau '|=|a_{r+1}|$ and $|q_r \tau +q_{r-1}|=|q_r||a_{r+1}|$.
It follows from Lemma \ref{fund} (vi) that
\begin{eqnarray*}
|q_r \tau '+q_{r-1}| & = & |q_r|\left| \tau '+\frac{q_{r-1}}{q_r}\right|
= \frac{|q_r||[\overline{a_{r+s},\ldots ,a_{r+1}}]-[a_r,\ldots ,a_1]|}{|[\overline{a_{r+s},\ldots ,a_{r+1}}]||[a_r,\ldots ,a_1]|} \\
& = & \frac{|q_r||a_{r+s}-a_r|}{|a_r a_{r+s}|}.
\end{eqnarray*}
Since $1\leq |a_{r+s}-a_r|\leq \max (|a_{r+s}|,|a_r|)$, we obtain (\ref{conj3}).
\end{proof}
The lemma below is an analogue of Lemma 6.3 in \cite{Bugeaud2}.
\begin{lem}\label{h and l}
Let $b,c,d\in \lF_q[T]$ be distinct polynomials of degree at least one, $n\geq 1$ be an integer, and $a_1,\ldots ,a_{n-1} \in \lF _q[T]$ be polynomials of degree at least one.
Put
\begin{eqnarray*}
\xi :=[0,a_1,\ldots ,a_{n-1},c,\overline{b}].
\end{eqnarray*}
Then $\xi $ is quadratic and
\begin{eqnarray*}
H(\xi )\asymp _{b,c} |q_n|^2,
\end{eqnarray*}
where $(p_k/q_k)_{k\geq 0}$ is the convergent sequence of $\xi $.
Let $m\geq 2$ be an integer.
Set
\begin{eqnarray*}
\zeta :=[0,a_1,\ldots ,a_{n-1},c,\overline{b,\ldots ,b,d}],
\end{eqnarray*}
where the length of the periodic part of $\zeta $ is $m$.
Then $\zeta $ is quadratic and
\begin{eqnarray*}
H(\zeta )\asymp _{b,c,d} |\tilde{q}_n \tilde{q}_{n+m}|,
\end{eqnarray*}
where $(\tilde{p}_k/\tilde{q}_k)_{k\geq 0}$ is the convergent sequence of $\zeta $.
\end{lem}
\begin{proof}
It follows from Theorem \ref{Lagrange} that $\xi $ and $\zeta $ are quadratic.
By Lemma \ref{height upper}, we have $H(\xi ) \ll _{b,c} |q_n|^2$ and $H(\zeta ) \ll _{b,c,d} |\tilde{q}_n \tilde{q}_{n+m}|$.
Let $P_\xi (X)=A(X-\xi )(X-\xi ')$ be the minimal polynomial of $\xi $.
Since $P_\xi (p_n/q_n)$ is non-zero, we obtain $|P_\xi (p_n/q_n)|\geq 1/|q_n|^2$.
From Lemma \ref{fund} (v) and \ref{conj2}, it follows that
\begin{eqnarray*}
\left| \xi -\frac{p_n}{q_n}\right| , \left| \xi '-\frac{p_n}{q_n}\right| \ll _{b,c} \frac{1}{|q_n|^2 }.
\end{eqnarray*}
Therefore, we obtain $|q_n|^2 \ll _{b,c} |A| \ll _{b,c} H(\xi )$.
We denote by $P_\zeta (X)$ the minimal polynomial of $\zeta $.
Since $P_\zeta $ and $P_\xi $ do not have a common root, we have
\begin{eqnarray*}
1\leq |\Res (P_\zeta ,P_\xi )| \leq H(\zeta )^2 H(\xi )^2 |\xi -\zeta ||\xi '-\zeta ||\xi -\zeta '||\xi '-\zeta '|.
\end{eqnarray*}
Note that $q_n=\tilde{q}_n$.
By Lemma \ref{conj}, we obtain
\begin{eqnarray*}
|\xi -\zeta |\ll _{b,c,d} |\tilde{q}_{n+m}|^{-2},\quad |\xi '-\zeta |, |\xi -\zeta '|, |\xi '-\zeta '|\ll _{b,c,d} |\tilde{q}_n|^{-2}.
\end{eqnarray*}
Therefore, it follows that $1 \ll _{b,c,d} H(\zeta )^2 H(\xi )^2 |\tilde{q}_n|^{-6} |\tilde{q}_{n+m}|^{-2}$.
Hence, we have the inequality $|\tilde{q}_n \tilde{q}_{n+m}|\ll _{b,c,d} H(\zeta )$.
\end{proof}
The next lemma is a well-known result.
\begin{lem}\label{irr ex}
Consider a continued fraction $\xi =[a_0,a_1,a_2,\ldots ]\in \lF _q((T^{-1}))$.
Let $(p_n/q_n)_{n\geq 0}$ be the convergent sequence of $\xi $.
Then we have
\begin{eqnarray*}
w_1 (\xi )= \limsup_{n\rightarrow \infty } \frac{\deg q_{n+1}}{\deg q_n}.
\end{eqnarray*}
\end{lem}
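For instance (a small consequence recorded here only as an illustration), if the degrees of the partial quotients are bounded, say $1\leq \deg a_n\leq D$ for all $n\geq 1$, then $\deg q_{n+1}/\deg q_n=1+\deg a_{n+1}/\deg q_n\leq 1+D/n$, and hence $w_1(\xi )=1$.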
The lemma below is an analogue of Lemma 5.6 in \cite{Adamczewski3.5}.
\begin{lem}
Consider a continued fraction $\xi =[a_0,a_1,a_2,\ldots ]\in \lF _q((T^{-1}))$.
Let $(p_n/q_n)_{n\geq 0}$ be the convergent sequence of $\xi $.
If the sequence $(|q_n|^{1/n})_{n\geq 1}$ is bounded, then $\xi $ is not a $U_1$-number.
\end{lem}
\begin{proof}
By the assumption, there exists an integer $A$ such that $q^n \leq |q_n|\leq A^n$ for all $n\geq 1$.
Thus, for all $n\geq 1$, we have
\begin{eqnarray*}
\frac{\deg q_{n+1}}{\deg q_n} \leq \left( 1+\frac{1}{n} \right) \frac{\log A}{\log q}.
\end{eqnarray*}
By Lemma \ref{irr ex}, we obtain $w_1 (\xi )\leq \log A/\log q$.
\end{proof}
\section{Properties of $w_n$ and $w_n ^{*}$}\label{Properties sec}
\begin{thm}\label{Mahler lower}
Let $n\geq 1$ be an integer and let $\xi \in \lF _q((T^{-1}))$ not be algebraic of degree at most $n$.
Then we have
\begin{align*}
w_n (\xi ) \geq n,\quad w_n ^{*}(\xi ) & \geq \frac{n+1}{2}.
\end{align*}
Furthermore, if $n=2$, then $w_2 ^{*}(\xi )\geq 2$.
\end{thm}
\begin{proof}
The former estimate follows from an analogue of Minkowski's theorem for Laurent series over a finite field \cite{Mahler2}, and the latter estimates are Satz 1 and Satz 2 of \cite{Guntermann}.
\end{proof}
We give an immediate consequence of Proposition \ref{Liouville inequ1} and \ref{Liouville}.
\begin{thm}\label{alg}
Let $n\geq 1$ be an integer and $\xi \in \lF _q((T^{-1}))$ be an algebraic number of degree $d$.
Then we have
\begin{align*}
w_n (\xi ), w_n ^{*}(\xi ) \leq d-1.
\end{align*}
\end{thm}
Note that if $\xi \in \lF _q((T^{-1}))$ is an algebraic number, then $\insep \xi =1$.
We next show that the definition of $w_n$ can be replaced by an apparently weaker one.
Let $n\geq 1$ be an integer and $\xi $ be in $\lF _q((T^{-1}))$.
We define a Diophantine exponent $\tilde{w}_n(\xi )$ as the supremum of the real numbers $w$ for which there exist infinitely many $P(X) \in (\lF _q[T])[X]_{\min }$ of degree at most $n$ such that
\begin{align*}
0<|P(\xi )|\leq H(P)^{-w}.
\end{align*}
The lemma below is a slight improvement of a result in \cite[Section 3.4]{Sprindzuk1}.
\begin{lem}\label{ok}
Let $n\geq 1$ be an integer and $\xi $ be in $\lF _q((T^{-1}))$.
Then we have
\begin{eqnarray*}
w_n (\xi )=\tilde{w}_n (\xi ).
\end{eqnarray*}
\end{lem}
\begin{proof}
It is immediate that $\tilde{w}_n (\xi )\leq w_n(\xi )$ and $\tilde{w}_n (\xi )\geq 0$.
Therefore, we may assume that $w_n(\xi )>0$ and $\tilde{w}_n (\xi )$ is finite.
For $0<w<w_n(\xi )$, there exist infinitely many $P(X)\in (\lF _q[T])[X]$ of degree at most $n$ such that
\begin{eqnarray}
0<|P(\xi )|\leq H(P)^{-w}.
\end{eqnarray}
We can write $P(X)=A\prod _{i=1}^{k}P_i(X),$ where $A\in \lF _q[T]$ and $P_i(X) \in (\lF _q[T])[X]_{\min }$ for $1\leq i\leq k$.
By the definition, for $\tilde{w}>\tilde{w}_n(\xi )$, there exists a positive number $C$ such that for all $Q(X)\in (\lF _q[T])[X]_{\min }$ of degree at most $n$,
\begin{eqnarray*}
|Q(\xi )|\geq CH(Q)^{-\tilde{w}}.
\end{eqnarray*}
Therefore, by Lemma \ref{height}, we obtain
\begin{eqnarray*}
|P(\xi )|\geq \min (1,C^n)H(P)^{-\tilde{w}},
\end{eqnarray*}
which implies $\min (1,C^n)H(P)^w\leq H(P)^{\tilde{w}}$.
Since there exist infinitely many such polynomials $P(X)$, we have $w\leq \tilde{w}$.
This completes the proof.
\end{proof}
The lemma below is an analogue of Lemma A.8 in \cite{Bugeaud1} and Lemma 2.4 in \cite{Pejkovic}.
\begin{lem}\label{insep}
Let $P(X) \in (\lF _q[T])[X]$ be a non-constant irreducible polynomial of degree $n$ and of inseparable degree $f$.
Let $\xi $ be in $\lF _q((T^{-1}))$ and $\alpha $ be a root of $P(X)$ such that $|\xi -\alpha |$ is minimal.
If $n\geq 2f$, then we have
\begin{eqnarray}\label{comp}
|\xi -\alpha |\leq |P(\xi )|^{1/f}H(P)^{n/f^2-2/f}.
\end{eqnarray}
\end{lem}
\begin{proof}
We may assume that $\xi $ and $\alpha $ are distinct.
We first consider the case of $f=1$ .
Write $P(X)=A\prod _{i=1}^{n} (X-\alpha _i)$, where $\alpha =\alpha _1$ and $|\xi -\alpha _1|\leq |\xi-\alpha _2|\leq \ldots \leq |\xi -\alpha _n|$.
Put $Q(X):=A\prod _{i=2}^{n} (X-\alpha _i)$ and $\Delta :=\prod _{i=2}^{n}|\alpha -\alpha _i|$.
Then we have $|\Disc (P)|^{1/2}=\Delta |A||\Disc (Q)|^{1/2}$.
By the definition of discriminant, we obtain
\begin{eqnarray*}
|\Disc (Q)|^{1/2} & = & |A|^{n-2}|\det (\alpha _i ^j)_{2\leq i\leq n, 0\leq j\leq n-2}| \\
& \leq & |A|^{n-2}\prod _{i=2}^{n} \max (1,|\alpha _i|)^{n-2} \\
& = & H(P)^{n-2} \max (1,|\alpha |)^{-n+2}.
\end{eqnarray*}
Since the polynomial $P$ is separable, we get
\begin{eqnarray*}
1 & \leq & |\Disc (P)|^{1/2}
\leq H(P)^{n-2}\max (1,|\alpha |)^{-n+2}|A|\prod_{j=2}^{n}|\xi -\alpha _j|\\
& = & H(P)^{n-2}\max (1,|\alpha |)^{-n+2}|\xi -\alpha |^{-1}|P(\xi )|.
\end{eqnarray*}
Therefore, we have (\ref{comp}).
We next consider the case of $f>1$.
We can write $P(X)=R(X^f)$, where $R(X) \in (\lF _q[T])[X]$ is a separable polynomial.
Thus, in the same way, it follows that
\begin{eqnarray*}
|\xi ^f -\alpha ^f|\leq |R(\xi ^f)|H(R)^{n/f-2}.
\end{eqnarray*}
Since $H(P)=H(R)$ and $f$ is a power of $p$, we have (\ref{comp}).
\end{proof}
\begin{lem}\label{good}
Let $\xi $ be in $\lF _q((T^{-1}))$ and $n\geq 1$ be an integer.
Then we have
\begin{eqnarray*}
w_1(\xi )=w_1(\xi ^{p^n}).
\end{eqnarray*}
\end{lem}
\begin{proof}
By Theorem \ref{alg}, we may assume that $\xi $ is not in $\lF _q(T)$.
Therefore, we can write $\xi =[a_0,a_1,\ldots ]$.
Then we have $\xi ^{p^n}=[a_0 ^{p^n},a_1 ^{p^n},\ldots ]$ by the Frobenius endomorphism.
Hence, it follows from Lemma \ref{fund} (iii) and \ref{irr ex} that
\begin{eqnarray*}
w_1(\xi ^{p^n})=\limsup _{k\rightarrow \infty }\frac{\sum _{i=1}^{k+1}\deg a_{i} ^{p^n}}{\sum _{i=1}^{k}\deg a_i ^{p^n}}=\limsup _{k\rightarrow \infty }\frac{p^n \deg q_{k+1}}{p^n \deg q_k}=w_1(\xi ),
\end{eqnarray*}
where $(p_k/q_k)_{k\geq 0}$ is the convergent sequence of $\xi $.
\end{proof}
\begin{prop}\label{main7}
Let $n\geq 1$ be an integer and $\xi $ be in $\lF _q((T^{-1}))$.
Let $k\geq 0$ be an integer such that $p^k\leq n<p^{k+1}$.
Then we have
\begin{eqnarray}
\frac{w_n(\xi )}{p^k}-n+\frac{2}{p^k}-1\leq w_n ^{*}(\xi )\leq w_n(\xi ).
\end{eqnarray}
Furthermore, if $1\leq n<2p$, then we have
\begin{eqnarray}\label{dif}
w_n(\xi )-n+1\leq w_n ^{*}(\xi )\leq w_n(\xi ).
\end{eqnarray}
\end{prop}
\begin{rem}
We are able to define analogues of Diophantine exponents $w_n$ and $w_n ^{*}$ for real numbers and $p$-adic numbers (see \cite[Section 3.1, 3.3, and 9.3]{Bugeaud1} for the definition of $w_n$ and $w_n ^{*}$).
It is known that for all $n$, analogues of (\ref{dif}) for real numbers and $p$-adic numbers hold (see \cite{Wirsing,Morrison}).
However, in our framework, we are not able to prove (\ref{dif}) for all $n$.
The main difficulty is the existence of inseparable irreducible polynomials in $(\lF _q[T])[X]$.
Therefore, it seems that Proposition \ref{main7} describes a difference between the approximation properties in characteristic zero and those in positive characteristic.
On the other hand, when $n$ is sufficiently small, we prove (\ref{dif}) using continued fraction theory and the Frobenius endomorphism.
\end{rem}
\begin{proof}
It is immediate that $w_n(\xi ), w_n ^{*}(\xi )\geq 0$.
We first show that $w_n ^{*}(\xi )\leq w_n(\xi )$.
We may assume that $w_n ^{*}(\xi )>0$.
For $0<w^{*}<w_n ^{*}(\xi )$, there exist infinitely many $\alpha \in \overline{\lF _q(T)}$ of degree at most $n$ such that
\begin{eqnarray*}
0<|\xi -\alpha |\leq H(\alpha )^{-w^{*}-1}.
\end{eqnarray*}
Let $P_{\alpha }(X)=\sum_{i=0}^{d}a_i X^i$ be the minimal polynomial of $\alpha $, where $d=\deg \alpha $.
Put
\begin{eqnarray*}
Q_{\alpha }(X):=a_d X^{d-1}+(a_d \alpha +a_{d-1})X^{d-2}+(a_d \alpha ^2+a_{d-1} \alpha +a_{d-2})X^{d-3}\\
+\cdots +(a_d \alpha ^{d-1}+a_{d-1} \alpha ^{d-2}+\cdots +a_1).
\end{eqnarray*}
Then we have $P_{\alpha }(X)=(X-\alpha )Q_{\alpha }(X)$.
Since $\max (1,|\alpha |)=\max (1,|\xi |)$, we obtain $|Q_{\alpha }(\xi )|\leq H(P_{\alpha })\max (1,|\xi |)^{2n}$.
Hence, it follows that
\begin{eqnarray*}
|P_{\alpha }(\xi )|\leq H(\alpha )^{-w^*} \max (1,|\xi |)^{2n},
\end{eqnarray*}
which gives $w^{*}\leq w_n(\xi)$.
Consequently, we have $w_n ^{*}(\xi )\leq w_n(\xi )$.
Our next claim is that $w_n(\xi )/p^k-n+2/p^k-1\leq w_n ^{*}(\xi )$.
We may assume that $w_n(\xi )>0$.
For $0<w<w_n(\xi )$, there exist infinitely many $P(X) \in (\lF _q[T])[X]_{\min }$ of degree at most $n$ such that
\begin{eqnarray*}
0<|P(\xi )|\leq H(P)^{-w}
\end{eqnarray*}
by Lemma \ref{ok}.
Let $m$ denote the degree of $P$ and $f$ denote the inseparable degree of $P$.
We first consider the case of $m\geq 2f$.
By Lemma \ref{insep}, there exists $\alpha $ of root of $P$ such that
\begin{eqnarray*}
|\xi -\alpha |\leq H(\alpha )^{-w/f+m/f^2-2/f}\leq H(\alpha )^{-w/p^k+n-2/p^k}.
\end{eqnarray*}
Now, assume that $m<2f$.
Then we have $m=f$ by $f|m$.
Therefore, we can write $P(X)=A(X^m-\alpha ^m)$, where $A\in \lF_q[T]$ and $\alpha \in \overline{\lF_q(T)}$.
Thus, we get $|\xi -\alpha |<|A|^{-1/f}H(\alpha )^{-w/f}$.
Since $\max (1,|\xi |)=\max (1,|\alpha |)$, we have
\begin{eqnarray*}
|\xi -\alpha |\leq \max (1,|\xi |)H(\alpha )^{-w/f-1/f}
\leq \max (1,|\xi |)H(\alpha )^{-w/p^k+n-2/p^k}
\end{eqnarray*}
by Lemma \ref{height}.
This is our claim.
Finally, we assume $1\leq n<2p$ and show (\ref{dif}).
Let $0<w<w_n(\xi )$.
If there exist infinitely many separable polynomials $P(X) \in (\lF _q[T])[X]_{\min }$ of degree at most $n$ such that
\begin{eqnarray*}
0<|P(\xi )|\leq H(P)^{-w},
\end{eqnarray*}
then we have $w-n+1\leq w_n ^{*}(\xi )$ as in the same line of the above proof.
Therefore, we may assume that there exist infinitely many inseparable polynomials $P(X) \in (\lF _q[T])[X]_{\min }$ of degree at most $n$ such that
\begin{eqnarray*}
0<|P(\xi )|\leq H(P)^{-w}.
\end{eqnarray*}
Then we can write such polynomials $P(X)=AX^p+B,$ where $A,B \in \lF _q[T]$.
By Lemma \ref{good} and the definition of $w_n$, we have $w\leq w_1(\xi )$.
Therefore, we obtain $w-n+1\leq w_n ^{*}(\xi )$ by $w_1(\xi )=w_1 ^{*}(\xi )$.
Hence, we have (\ref{dif}).
\end{proof}
It follows from Proposition \ref{main7} that for an integer $n\geq 1$ and $\xi \in \lF _q((T^{-1}))$
\begin{itemize}
\item $w_n(\xi )$ is finite if and only if $w_n ^{*}(\xi )$ is finite,
\item if $w(\xi )$ is finite, then $w^{*}(\xi )$ is finite.
\end{itemize}
Consequently, we obtain
\begin{itemize}
\item $\xi $ is a $U_n$-number if and only if it is a $U_n ^{*}$-number,
\item if $\xi $ is an $S$-number, then it is an $S^{*}$-number.
\end{itemize}
We close this section with the following questions.
\begin{prob}
Does (\ref{dif}) hold for all $n\geq 1$ and $\xi \in \lF _q((T^{-1}))$?
\end{prob}
\begin{prob}\label{prob1}
Does Mahler's classification coincide with Koksma's classification?
\end{prob}
Note that the analogue of Problem \ref{prob1} is known to have an affirmative answer for real numbers and $p$-adic numbers.
For details, see \cite[Sections 3.4 and 9.3]{Bugeaud1}, \cite[Chapter 6]{Pejkovic}, and \cite{Schmidt1}.
\section{Applications of Liouville inequalities}\label{Applications sec}
The following proposition is an analogue of Lemma 7.3 in \cite{Bugeaud2} and Lemma 5 in \cite{Bugeaud3}.
\begin{prop}\label{Mahler and Koksma}
Let $\xi $ be in $\lF_q((T^{-1}))$ and $c_1,c_2,c_3,c_4,\theta ,\rho ,\delta $ be positive numbers.
Let $\varepsilon $ be a non-negative number.
Assume that there exists a sequence $(\alpha _j)_{j\geq 1}$ of distinct quadratic elements $\alpha _j \in \overline{\lF _q(T)}$ such that for all $j\geq 1$
\begin{gather*}
\frac{c_1}{H(\alpha _j)^{1+\rho }} \leq |\xi -\alpha _j|\leq \frac{c_2}{H(\alpha _j)^{1+\delta }},\\
H(\alpha _j)\leq H(\alpha _{j+1})\leq c_3 H(\alpha _j)^\theta ,\\
0<|\alpha _j-\alpha _j '|\leq \frac{c_4}{H(\alpha _j)^\varepsilon }.
\end{gather*}
If $(\rho -1)(\delta -1+\varepsilon )\geq 2\theta (2-\varepsilon )$, then we have $\delta \leq w_2 ^{*}(\xi )\leq \rho $.
Furthermore, if $(\delta -1)(\delta -1+\varepsilon )\geq 2\theta (2-\varepsilon )$, then we have
\begin{eqnarray*}
\delta \leq w_2 ^{*}(\xi )\leq \rho ,\quad w_2(\xi )\geq w_2 ^{*}(\xi )+\varepsilon .
\end{eqnarray*}
Finally, assume that there exists a non-negative number $\chi $ such that
\begin{eqnarray*}
\limsup_{j\rightarrow \infty} \frac{-\log |\alpha _j-\alpha _j '|}{\log H(\alpha _j)}\leq \chi .
\end{eqnarray*}
If $(\delta -2+\chi )(\delta -1+\varepsilon )\geq 2\theta (2-\varepsilon )$ when $p\neq 2$ and $(\delta -4+\chi )(\delta -1+\varepsilon )\geq 4\theta (2-\varepsilon )$ when $p=2$, then we have
\begin{eqnarray*}
\delta \leq w_2 ^{*}(\xi )\leq \rho ,\quad \varepsilon \leq w_2(\xi )-w_2 ^{*}(\xi ) \leq \chi .
\end{eqnarray*}
\end{prop}
\begin{proof}
Assume that $(\rho -1)(\delta -1+\varepsilon )\geq 2\theta (2-\varepsilon )$.
By the assumption, we have $\theta \geq 1, \rho >1$, and $\delta +\varepsilon >1$.
Let $\alpha \in \overline{\lF _q(T)}$ be an algebraic number of degree at most two with sufficiently large height and $\alpha \notin \{\alpha _j\mid j\geq 1 \} $.
We define an integer $j_0\geq 1$ by $H(\alpha _{j_0})\leq c_3 \{ (c_2 c_4)^{\frac{1}{2}}H(\alpha )\} ^{\frac{2\theta }{\delta +\varepsilon -1}}<H(\alpha _{j_0 +1})$.
Then, by the assumption, we have
\begin{align*}
H(\alpha )<c_3 ^{-\frac{\delta +\varepsilon -1}{2\theta }}(c_2 c_4)^{-\frac{1}{2}} H(\alpha _{j_0+1})^{\frac{\delta +\varepsilon -1}{2\theta }}
\leq (c_2 c_4)^{-\frac{1}{2}} H(\alpha _{j_0})^{\frac{\delta +\varepsilon -1}{2}}.
\end{align*}
Hence, it follows from Proposition \ref{Liouville}, \ref{Liouvilleinequ3}, and Lemma \ref{Galois conj} that
\begin{eqnarray*}
|\alpha -\alpha _{j_0}| & \geq & |\alpha _{j_0}-\alpha _{j_0} '|^{-1}H(\alpha _{j_0})^{-2}H(\alpha )^{-2}\\
& > & c_2 H(\alpha _{j_0})^{-\delta -1} \geq |\xi -\alpha _{j_0}|.
\end{eqnarray*}
Therefore, we obtain
\begin{eqnarray}
|\xi -\alpha | & = & |\alpha -\alpha _{j_0}|>c_4^{-1}H(\alpha _{j_0})^{-2+\varepsilon }H(\alpha )^{-2} \nonumber \\
\label{alg lower} & \geq & c_2^{-\frac{\theta (2-\varepsilon )}{\delta +\varepsilon -1}}c_3 ^{-2+\varepsilon }c_4 ^{-1-\frac{2\theta (2-\varepsilon )}{\delta +\varepsilon -1}} H(\alpha )^{-2-\frac{2\theta (2-\varepsilon )}{\delta +\varepsilon -1}}.
\end{eqnarray}
By the assumption, we have
\begin{align*}
\delta \leq w_2 ^{*}(\xi )\leq \max \left( \rho ,1+\frac{2\theta (2-\varepsilon )}{\delta +\varepsilon -1} \right) =\rho .
\end{align*}
We next assume that $(\delta -1)(\delta -1+\varepsilon )\geq 2\theta (2-\varepsilon )$.
By (\ref{alg lower}), it follows that
\begin{align*}
|\xi -\alpha | \geq c_2^{-\frac{\theta (2-\varepsilon )}{\delta +\varepsilon -1}}c_3 ^{-2+\varepsilon }c_4 ^{-1-\frac{2\theta (2-\varepsilon )}{\delta +\varepsilon -1}} H(\alpha )^{-\delta -1}.
\end{align*}
Therefore, the sequence $(\alpha _j)_{j\geq 1}$ is the best algebraic approximation to $\xi $ of degree at most two, that is,
\begin{align*}
w_2 ^{*}(\xi )= \limsup _{j\rightarrow \infty } \frac{-\log |\xi -\alpha _j|}{\log H(\alpha _j)} -1.
\end{align*}
We denote by $P_j (X)=A_j (X-\alpha _j)(X-\alpha _j ')$ the minimal polynomial of $\alpha _j$.
By the assumption, we have
\begin{align*}
|P_j (\xi )|\leq \max (c_2, c_4)H(\alpha _j)^{-\varepsilon +1} |\xi -\alpha _j|.
\end{align*}
Therefore, we obtain
\begin{eqnarray*}
w_2 ^{*}(\xi )+\varepsilon \leq \limsup _{j\rightarrow \infty }\frac{-\log |P_j (\xi )|}{\log H(P_j)} \leq w_2(\xi ).
\end{eqnarray*}
Finally, assume that
\begin{eqnarray*}
\limsup_{j\rightarrow \infty} \frac{\log |\alpha _j-\alpha _j '|}{\log H(\alpha _j)}=-\chi .
\end{eqnarray*}
We also assume that $(\delta -2+\chi )(\delta -1+\varepsilon )\geq 2\theta (2-\varepsilon )$ when $p\neq 2$ and $(\delta -4+\chi )(\delta -1+\varepsilon )\geq 4\theta (2-\varepsilon )$ when $p=2$.
Since $|\xi -\alpha _j|\leq c_2$ and $|\alpha _j-\alpha _j '|\leq c_4$, we have
\begin{eqnarray*}
\max (1,|\alpha _j|), \max (1,|\alpha _j '|)\leq \max (1,c_2, c_4, |\xi |),
\end{eqnarray*}
which implies $H(P_j)\leq |A_j|\max (1,c_2, c_4, |\xi |)^2$.
By the assumption, we get $|\alpha _j-\alpha _j '|<|\xi -\alpha _j '|$ for sufficiently large $j$.
Therefore, we obtain for sufficiently large $j$
\begin{align*}
|P_j(\xi )|\geq \max (1,c_2, c_4, |\xi |)^{-2} H(P_j)|\xi -\alpha _j||\alpha _j -\alpha _j '|.
\end{align*}
Taking a logarithm and a limit superior, we have
\begin{align*}
\limsup _{j\rightarrow \infty } \frac{-\log |P_j(\xi )|}{\log H(P_j)} \leq w_2 ^{*}(\xi )+\chi .
\end{align*}
Let $P(X) \in (\lF_q[T])[X]_{\min }$ be a polynomial of degree at most two with sufficiently large height such that $P(\alpha _j)\neq 0$ for all $j\geq 1$.
When $\deg _X P=1$, we can write $P(X)=A(X-\alpha )$ and have
\begin{align*}
|P(\xi )|=|A||\xi -\alpha |\geq c_2^{-\frac{\theta (2-\varepsilon )}{\delta +\varepsilon -1}}c_3 ^{-2+\varepsilon }c_4 ^{-1-\frac{2\theta (2-\varepsilon )}{\delta +\varepsilon -1}} H(P)^{-\delta -\chi }
\end{align*}
by (\ref{alg lower}).
When $\deg _X P=2$, we can write $P(X)=A(X-\alpha )(X-\alpha ')$ and assume that $|\xi -\alpha |\leq |\xi -\alpha '|$.
We first consider the case of $p\neq 2$.
Then we obtain
\begin{eqnarray}
|P(\xi )| & \geq & |A(\alpha -\alpha ')||\xi -\alpha | =|\Disc (P)|^{1/2}|\xi -\alpha | \nonumber \\
\label{before} & \geq & c_2^{-\frac{\theta (2-\varepsilon )}{\delta +\varepsilon -1}}c_3 ^{-2+\varepsilon }c_4 ^{-1-\frac{2\theta (2-\varepsilon )}{\delta +\varepsilon -1}} H(P)^{-\delta -\chi }
\end{eqnarray}
by (\ref{alg lower}).
We next consider the case of $p=2$.
If $\alpha \neq \alpha '$, then we have (\ref{before}); otherwise, we have
\begin{eqnarray*}
|P(\xi )| & \geq & |A||\xi -\alpha |^2
\geq c_2^{-\frac{2\theta (2-\varepsilon )}{\delta +\varepsilon -1}}c_3 ^{-4+2\varepsilon }c_4 ^{-2-\frac{4\theta (2-\varepsilon )}{\delta +\varepsilon -1}} H(P)^{-4-\frac{4\theta (2-\varepsilon )}{\delta +\varepsilon -1}} \\
& \geq & c_2^{-\frac{2\theta (2-\varepsilon )}{\delta +\varepsilon -1}}c_3 ^{-4+2\varepsilon }c_4 ^{-2-\frac{4\theta (2-\varepsilon )}{\delta +\varepsilon -1}} H(P)^{-\delta -\chi }
\end{eqnarray*}
by (\ref{alg lower}).
Thus, it follows that $w_2(\xi )\leq w_2 ^{*}(\xi )+\chi $ by Lemma \ref{ok}.
\end{proof}
The following proposition is an analogue of Lemma 7.2 in \cite{Bugeaud2}.
\begin{prop}\label{Mahler and Koksma2}
Let $\xi $ be in $\lF_q((T^{-1}))$.
Let $c_0,c_1,c_2,c_3, \theta ,\rho ,\delta $ be positive numbers and $(\beta _j)_{j\geq 1}$ be a sequence of positive integers such that $\beta _j<\beta _{j+1}\leq c_0 \beta _j ^\theta $ for all $j\geq 1$.
Assume that there exists a sequence $(\alpha _j)_{j\geq 1}$ of distinct quadratic elements $\alpha _j \in \overline{\lF _q(T)}$ such that for all $j\geq 1$
\begin{gather*}
\frac{c_1}{\beta _j ^{2+\rho }} \leq |\xi -\alpha _j|\leq \frac{c_2 \max (1, |\alpha _j -\alpha _j '|^{-1})}{\beta _j ^{2+\delta }},\\
H(\alpha _j)\leq c_3 \beta _j, \quad \alpha _j \neq \alpha _j '.
\end{gather*}
Then we have
\begin{eqnarray*}
1+\delta \leq w_2 ^{*} (\xi )\leq (2+\rho )\frac{2\theta }{\delta }-1.
\end{eqnarray*}
\end{prop}
\begin{proof}
Let $\alpha \in \overline{\lF _q(T)}$ be an algebraic number of degree at most two with sufficiently large height.
We define an integer $j_0\geq 1$ by $\beta _{j_0} \leq c_0 c_2^{\frac{\theta }{\delta }}(c_3 H(\alpha ))^{\frac{2\theta }{\delta }} <\beta _{j_0+1}$.
We first consider the case of $\alpha =\alpha _{j_0}$.
By the assumption, we have
\begin{eqnarray*}
|\xi -\alpha |\geq c_1 \beta _{j_0} ^{-2-\rho } \geq c_0 ^{-2-\rho }c_1 c_2 ^{-(2+\rho )\frac{\theta }{\delta }}c_3 ^{-(2+\rho )\frac{2\theta }{\delta }}H(\alpha )^{-(2+\rho )\frac{2\theta }{\delta }}.
\end{eqnarray*}
We next consider the other case.
Then, by the assumption, we have
\begin{eqnarray*}
H(\alpha ) < c_2 ^{-\frac{1}{2}} c_3 ^{-1}(c_0 ^{-1} \beta _{j_0+1})^{\frac{\delta }{2\theta }} \leq c_2 ^{-\frac{1}{2}} c_3 ^{-1}\beta _{j_0} ^{\frac{\delta }{2}}.
\end{eqnarray*}
Hence, it follows from Proposition \ref{Liouville}, \ref{Liouvilleinequ3}, and Lemma \ref{Galois conj} that
\begin{eqnarray*}
|\alpha -\alpha _{j_0}| & \geq & \max (1,|\alpha _{j_0}-\alpha _{j_0} '|^{-1})H(\alpha _{j_0})^{-2}H(\alpha )^{-2} \\
& > & c_2 \max (1,|\alpha _{j_0}-\alpha _{j_0} '|^{-1})\beta _{j_0} ^{-2-\delta }\geq |\xi -\alpha _{j_0}|.
\end{eqnarray*}
Therefore, we obtain
\begin{eqnarray*}
|\xi -\alpha | & = & |\alpha -\alpha _{j_0}| \geq \max (1,|\alpha _{j_0}-\alpha _{j_0} '|^{-1})H(\alpha _{j_0})^{-2}H(\alpha )^{-2}\\
& \geq & c_3 ^{-2}\beta _{j_0} ^{-2} H(\alpha )^{-2} \geq c_0 ^{-2}c_2 ^{-\frac{2\theta }{\delta }}c_3 ^{-2-\frac{4\theta }{\delta }}H(\alpha )^{-2-\frac{4\theta }{\delta }}.
\end{eqnarray*}
By the assumption, we have
\begin{eqnarray*}
1+\delta \leq w_2 ^{*}(\xi )\leq \max \left( 1+\frac{4\theta }{\delta } , (2+\rho )\frac{2\theta }{\delta }-1\right) =(2+\rho )\frac{2\theta }{\delta }-1.
\end{eqnarray*}
\end{proof}
\section{Combinatorial lemma}\label{Combinational sec}
The lemma below is a slight improvement of \cite[Lemma 9.1]{Bugeaud2}.
\begin{lem}\label{combi}
Let ${\bf a}=(a_n)_{n \geq 1}$ be a sequence on a finite set $\mathcal{A}$.
Assume that there exist integers $\kappa \geq 2$ and $n_0\geq 1$ such that for all $n\geq n_0$,
\begin{eqnarray*}
p({\bf a},n)\leq \kappa n.
\end{eqnarray*}
Then, for each $n\geq n_0$, there exist finite words $U_n, V_n$ and a positive rational number $w_n$ such that the following hold:
\begin{enumerate}
\item[(i)] $U_n V_n ^{w_n}$ is a prefix of ${\bf a}$,
\item[(ii)] $|U_n|\leq 2\kappa |V_n|$,
\item[(iii)] $n/2 \leq |V_n|\leq \kappa n$,
\item[(iv)] if $U_n$ is not an empty word, then the last letters of $U_n$ and $V_n$ are different,
\item[(v)] $|U_n V_n ^{w_n}|/|U_n V_n|\geq 1+1/(4\kappa +2)$,
\item[(vi)] $|U_n V_n|\leq (\kappa +1)n-1$,
\item[(vii)] $|U_n ^2 V_n|\leq (2\kappa +1)n-2$.
\end{enumerate}
\end{lem}
\begin{proof}
For $n\geq 1$, we denote by $A(n)$ the prefix of ${\bf a}$ of length $n$.
By the pigeonhole principle, for each $n\geq n_0$, there exists a finite word $W_n$ of length $n$ that occurs in $A((\kappa +1)n)$ at least twice.
Therefore, for each $n\geq n_0$, there exist finite words $B_n, D_n, E_n \in \mathcal{A}^{*}$ and $C_n \in \mathcal{A}^{+}$ such that
\begin{eqnarray*}
A((\kappa +1)n)=B_n W_n D_n E_n =B_n C_n W_n E_n.
\end{eqnarray*}
We can take these words in such a way that if $B_n$ is not empty, then the last letter of $B_n$ is different from that of $C_n$.
Firstly, we consider the case of $|C_n|\geq |W_n|$.
Then, there exists $F_n \in \mathcal{A}^{*}$ such that
\begin{eqnarray*}
A((\kappa +1)n)=B_n W_n F_n W_n E_n.
\end{eqnarray*}
Put $U_n :=B_n, V_n:=W_n F_n$, and $w_n:=|W_n F_n W_n|/|W_n F_n|$.
Since $U_n V_n ^{w_n}=B_n W_n F_n W_n$, the word $U_n V_n ^{w_n}$ is a prefix of ${\bf a}$.
It is obvious that $|U_n|\leq (\kappa -1)|V_n|$ and $n\leq |V_n|\leq \kappa n$.
By the definition, we have (iv) and (vi).
Furthermore, we see that
\begin{gather*}
\frac{|U_n V_n ^{w_n}|}{|U_n V_n|}=1+\frac{n}{|U_n V_n|}\geq 1+\frac{1}{\kappa },\\
|U_n ^2 V_n|\leq |U_n V_n|+|U_n|\leq \kappa n+(\kappa -1)n=(2\kappa -1)n.
\end{gather*}
We next consider the case of $|C_n|< |W_n|$.
Since the two occurrences of $W_n$ do overlap, there exists a rational number $d_n>1$ such that $W_n=C_n ^{d_n}$.
Put $U_n:=B_n, V_n:=C_n ^{\lceil d_n /2 \rceil}$, and $w_n:=(d_n +1)/\lceil d_n/2 \rceil$.
Obviously, we have (i) and (iv).
Since $ \lceil d_n/2 \rceil \leq d_n$ and $d_n |C_n|\leq 2 \lceil d_n/2 \rceil |C_n|$, we get $n/2\leq |V_n|\leq n$.
Using (iii) and $|U_n|\leq \kappa n-1$, we can see (ii), (vi), and (vii).
It is immediate that $w_n \geq 3/2$.
Hence, we obtain
\begin{eqnarray*}
\frac{|U_n V_n ^{w_n}|}{|U_n V_n|} & = & 1+\frac{ \lceil (w_n-1)|V_n| \rceil}{|U_n V_n|} \geq 1+ \frac{w_n-1}{|U_n|/|V_n| +1} \\
& \geq & 1+\frac{1/2}{2\kappa +1}= 1+\frac{1}{4\kappa +2}.
\end{eqnarray*}
\end{proof}
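The pigeonhole step above is effective: given the prefix $A((\kappa +1)n)$, one can locate a repeated factor $W_n$ and the words $B_n, C_n$ by a single scan. The following sketch (in Python, with a hypothetical helper name; it is only an illustration of the argument and does not enforce the normalization of the last letters of $B_n$ and $C_n$ used in the proof) assumes the complexity bound $p({\bf a},n)\leq \kappa n$.
\begin{verbatim}
def find_repeated_factor(a, n, kappa):
    # a: indexable sequence of (hashable) letters; n, kappa: as in the lemma.
    # The prefix of length (kappa+1)*n contains kappa*n + 1 windows of length n;
    # if p(a, n) <= kappa*n, two of these windows must coincide (pigeonhole).
    prefix = tuple(a[: (kappa + 1) * n])
    first_seen = {}
    for i in range(len(prefix) - n + 1):
        w = prefix[i:i + n]
        if w in first_seen:
            j = first_seen[w]       # W_n starts at positions j and i (j < i)
            B = prefix[:j]          # B_n in the proof
            C = prefix[j:i]         # C_n; |C_n| >= n corresponds to the first case
            return B, C, w
        first_seen[w] = i
    return None                     # cannot happen under the complexity bound
\end{verbatim}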
\section{Proof of the main results}\label{Proof sec}
\begin{proof}[{\rm Proof of Theorem \ref{main4}}]
Put
\begin{eqnarray*}
\xi _{w,j}:=[0,a_{1,w},\ldots ,a_{\lfloor w^j \rfloor ,w},\overline{b}] \quad \mbox{for } j\geq 1.
\end{eqnarray*}
Since $\xi _w$ and $\xi _{w,j}$ have the same first $\lfloor w^{j+1} \rfloor -1$ partial quotients, while their $\lfloor w^{j+1} \rfloor $-th partial quotients differ, we have
\begin{eqnarray*}
|\xi _w -\xi _{w,j}|\asymp |q_{\lfloor w^{j+1} \rfloor}|^{-2}
\end{eqnarray*}
by Lemma \ref{fund} (v) and \ref{lower bound}.
Let $0<\iota <w$ be a real number such that $(w-\iota -2)(w-\iota -1)\geq 2(w+\iota )$ when $p\neq 2$, and $(w-\iota -4)(w-\iota -1)\geq 4(w+\iota )$ when $p=2$.
It is obvious that
\begin{eqnarray*}
|q_{\lfloor w^j \rfloor }|^{w-\iota }\ll |q_{\lfloor w^{j+1} \rfloor }|\ll |q_{\lfloor w^j \rfloor }|^{w+\iota }
\end{eqnarray*}
for sufficiently large $j$ by Lemma \ref{fund} (iii).
Thus, we have
\begin{eqnarray*}
H(\xi _{w,j})^{-w-\iota }\ll |\xi _w -\xi _{w,j}|\ll H(\xi _{w,j})^{-w+\iota }
\end{eqnarray*}
for sufficiently large $j$ by Lemma \ref{h and l}.
It follows from Lemma \ref{conj2} and \ref{h and l} that
\begin{eqnarray*}
|\xi _{w,j} -\xi ' _{w,j}|\asymp H(\xi _{w,j})^{-1}.
\end{eqnarray*}
For sufficiently large $j$, we see that
\begin{eqnarray*}
H(\xi _{w,j})\leq H(\xi _{w,j+1})\ll H(\xi _{w,j})^{w+\iota }.
\end{eqnarray*}
It follows from Proposition \ref{Mahler and Koksma} that $w_2 ^{*}(\xi _w)\in [w-\iota -1, w+\iota -1]$ and $w_2 (\xi _w)-w_2 ^{*}(\xi _w)=1$.
Since $\iota $ is arbitrary, we have $w_2 ^{*}(\xi _w)=w-1$ and $w_2(\xi _w)=w$.
\end{proof}
\begin{proof}[{\rm Proof of Theorem \ref{main5}}]
Put
\begin{eqnarray*}
\xi _{w,\eta ,j}:=[0,a_{1,w,\eta },\ldots ,a_{\lfloor w^j\rfloor ,w,\eta },\overline{b,\ldots ,b,d}] \quad \mbox{for } j\geq 1,
\end{eqnarray*}
where the length of period part is $\lfloor \eta w^j\rfloor$.
Since $\lfloor w^j\rfloor +(m_j +1)\lfloor \eta w^j\rfloor >\lfloor w^{j+1}\rfloor$, it follows that $\xi _{w,\eta }$ and $\xi _{w,\eta ,j}$ have the same first $\lfloor w^{j+1} \rfloor -1$ partial quotients, while their $\lfloor w^{j+1} \rfloor $-th partial quotients differ.
Thus, we have
\begin{eqnarray*}
|\xi _{w,\eta }-\xi _{w,\eta ,j}|\asymp |q_{\lfloor w^{j+1}\rfloor }|^{-2} \quad \mbox{for } j\geq 1
\end{eqnarray*}
by Lemma \ref{fund} (v) and \ref{lower bound}.
We see that for $j\geq 1$
\begin{eqnarray*}
|\xi _{w,\eta ,j}-\xi _{w,\eta ,j} '|\asymp |q_{\lfloor w^j\rfloor }|^{-2},\quad H(\xi _{w,\eta ,j})\asymp |q_{\lfloor w^j\rfloor }q_{\lfloor w^j\rfloor +\lfloor \eta w^j\rfloor }|
\end{eqnarray*}
by Lemma \ref{conj2} and \ref{h and l}.
Let $0<\iota <\min \{w, 2+\eta \}$ be a real number such that
\begin{eqnarray*}
\left( \frac{2 (w-\iota )}{2+\eta +\iota }-3+\frac{2}{2+\eta }\right) \left( \frac{2(w-\iota )}{2+\eta +\iota }-2+\frac{2}{2+\eta +\iota }\right) \geq 2(w+\iota )\frac{2+\eta +\iota }{2+\eta -\iota } \left( 2-\frac{2}{2+\eta +\iota }\right)
\end{eqnarray*}
when $p\neq 2$, and
\begin{eqnarray*}
\left( \frac{2 (w-\iota )}{2+\eta +\iota }-5+\frac{2}{2+\eta }\right) \left( \frac{2(w-\iota )}{2+\eta +\iota }-2+\frac{2}{2+\eta +\iota }\right) \geq 4(w+\iota )\frac{2+\eta +\iota }{2+\eta -\iota } \left( 2-\frac{2}{2+\eta +\iota }\right)
\end{eqnarray*}
when $p=2$.
It is obvious that
\begin{eqnarray*}
|q_{\lfloor w^j\rfloor }|^{w-\iota }\ll |q_{\lfloor w^{j+1}\rfloor }|\ll |q_{\lfloor w^j\rfloor }|^{w+\iota },\\
|q_{\lfloor w^j\rfloor }|^{1+\eta -\iota }\ll |q_{\lfloor w^j \rfloor +\lfloor \eta w^j \rfloor }|\ll |q_{\lfloor w^j\rfloor }|^{1+\eta +\iota }
\end{eqnarray*}
for sufficiently large $j$.
Hence, we obtain
\begin{eqnarray*}
H(\xi _{w,\eta ,j})^{-\frac{2(w+\iota )}{2+\eta -\iota }}\ll |\xi _{w,\eta }-\xi _{w,\eta ,j}|\ll H(\xi _{w,\eta ,j})^{-\frac{2(w-\iota )}{2+\eta +\iota }},\\
H(\xi _{w,\eta ,j})^{-\frac{2}{2+\eta -\iota }}\ll |\xi _{w,\eta ,j}-\xi _{w,\eta ,j} '|\ll H(\xi _{w,\eta ,j})^{-\frac{2}{2+\eta +\iota }},\\
H(\xi _{w,\eta ,j})\leq H(\xi _{w,\eta ,j+1})\ll H(\xi _{w,\eta ,j})^{(w+\iota )\frac{2+\eta +\iota }{2+\eta -\iota }}
\end{eqnarray*}
for sufficiently large $j$.
It follows from Proposition \ref{Mahler and Koksma} that
\begin{eqnarray*}
w_2 ^{*}(\xi _{w,\eta })\in \left[ \frac{2 w-2-\eta -3\iota }{2+\eta +\iota }, \frac{2 w-2-\eta +3\iota }{2+\eta -\iota } \right] ,\\
w_2(\xi _{w,\eta })-w_2 ^{*}(\xi _{w,\eta })\in \left[ \frac{2}{2+\eta +\iota }, \frac{2}{2+\eta } \right] .
\end{eqnarray*}
Since $\iota $ is arbitrary, we have
\begin{eqnarray*}
w_2 ^{*}(\xi _{w,\eta })=\frac{2 w-2-\eta }{2+\eta },\quad w_2(\xi _{w,\eta })=\frac{2 w-\eta }{2+\eta }.
\end{eqnarray*}
\end{proof}
\begin{proof}[{\rm Proof of Theorem \ref{main1}}]
Applying Lemma \ref{combi}, for $n \geq n_0$, we take finite words $U_n ,V_n$ and a rational number $w_n$ satisfying Lemma \ref{combi} (i)-(v) and (vii).
We define a positive integer sequence $(n_j)_{j\geq 0}$ by $n_{j+1}=2(2\kappa +1)\lceil \log A/\log q\rceil n_j$ for $j\geq 0$.
Put $r_j:=|U_{n_j}|, s_j:=|V_{n_j}|$, and $\tilde{w}_j:=w_{n_j}$ for $j\geq 0$.
By Lemma \ref{combi} (iv), we have $a_{r_j}\neq a_{r_j+s_j}$ for all $j\geq 0$.
By the assumption and Lemma \ref{fund} (iii), we get $q^n \leq |q_n|\leq A^n$ for all $n\geq 1$.
Therefore, it follows from Lemma \ref{combi} (iii) and (vi) that for $j\geq 0$
\begin{eqnarray}\label{theta}
|q_{r_j} q_{r_j+s_j}|<|q_{r_{j+1}} q_{r_{j+1}+s_{j+1}}|\leq |q_{r_j} q_{r_j+s_j}|^{4(2\kappa +1)^2 \left\lceil \frac{\log A}{\log q}\right\rceil \frac{\log A}{\log q}}.
\end{eqnarray}
Put $\alpha _j :=[0,a_1,\ldots ,a_{r_j},\overline{a_{r_j+1},\ldots ,a_{r_j+s_j}}]$ for $j\geq 1$.
By Lemma \ref{height upper}, we obtain
\begin{eqnarray}\label{beta}
H(\alpha _j)\leq |q_{r_j}q_{r_j+s_j}|
\end{eqnarray}
for $j\geq 0$.
Since $\xi $ and $\alpha _j$ have the same first $r_j +\lceil \tilde{w}_j s_j \rceil $ partial quotients, we have
\begin{eqnarray*}
|\xi -\alpha _j| & \leq & \max \left( \left| \xi -\frac{p_{r_j +\lceil \tilde{w} _j s_j \rceil }}{q_{r_j +\lceil \tilde{w}_j s_j \rceil }}\right| , \left| \alpha _j -\frac{p_{r_j +\lceil \tilde{w}_j s_j \rceil }}{q_{r_j +\lceil \tilde{w}_j s_j \rceil }}\right| \right) \\
& \leq & |q_{r_j +\lceil \tilde{w}_j s_j \rceil }|^{-2} q^{-1} \leq |q_{r_j+s_j}|^{-2} q^{-2(\lceil \tilde{w}_j s_j \rceil -s_j)-1}
\end{eqnarray*}
for $j\geq 0$ by Lemma \ref{fund} (iii) and (v).
By Lemma \ref{combi} (v), we have
\begin{eqnarray*}
q^{2(\lceil \tilde{w}_j s_j \rceil -s_j)+1}\gg q^{\frac{r_j+s_j}{2\kappa +1}}|q_{r_j} q_{r_j +s_j}|^{\frac{\log q}{(4\kappa +2)\log A}}
\end{eqnarray*}
for $j\geq 0$.
From Lemma \ref{conj2}, we deduce that
\begin{eqnarray*}
|q_{r_j}|^2 \leq A^2 \max (|\alpha _j-\alpha ' _j|^{-1},1)
\end{eqnarray*}
for $j\geq 0$.
Hence, we obtain
\begin{eqnarray}\label{delta}
|\xi -\alpha _j|\ll A^2 \max (|\alpha _j -\alpha ' _j|^{-1},1) |q_{r_j}q_{r_j +s_j}|^{-2-\frac{\log q}{(4\kappa +2)\log A}}
\end{eqnarray}
for $j\geq 0$.
Take a real number $\delta $ which is greater than $\Dio ({\bf a})$.
Then, for sufficiently large $j$, $\xi $ and $\alpha _j$ have at most the first $\lceil \delta (r_j +s_j)\rceil $ partial quotients in common.
By Lemma \ref{lower bound}, we have
\begin{eqnarray}
|\xi -\alpha _j| & \geq & A^{-2} |q_{\lceil \delta (r_j +s_j)\rceil }|^{-2} \gg |q_{r_j} q_{r_j +s_j}|^{-4\delta \frac{(r_j +s_j)\log A}{(2 r_j +s_j)\log q}} \nonumber \\
& \gg & |q_{r_j} q_{r_j +s_j}|^{-4\delta \frac{\log A}{\log q}} \label{rho}
\end{eqnarray}
for sufficiently large $j$.
Applying Proposition \ref{Mahler and Koksma2} with (\ref{theta}), (\ref{beta}), (\ref{delta}), and (\ref{rho}), we obtain
\begin{eqnarray*}
w_2 ^{*}(\xi ) \leq 128(2\kappa +1)^3 \Dio ({\bf a}) \left( \frac{\log A}{\log q}\right) ^4 -1.
\end{eqnarray*}
Thus, we have (\ref{main1.1}) by (\ref{dif}).
Assume that the sequence $(|q_n|^{1/n})_{n\geq 1}$ converges, and let $M$ denote its limit.
For any $\varepsilon >0$, there exists an integer $n_1$ such that for all $n \geq n_1$,
\begin{eqnarray*}
(M-\varepsilon )^n<|q_n|<(M+\varepsilon )^n.
\end{eqnarray*}
In the same manner as above, we see that
\begin{gather*}
|q_{r_j} q_{r_j+s_j}|<|q_{r_{j+1}} q_{r_{j+1}+s_{j+1}}|\leq |q_{r_j} q_{r_j+s_j}|^{4(2\kappa +1)^2 \left\lceil \frac{\log (M+\varepsilon )}{\log (M-\varepsilon )}\right\rceil \frac{\log (M+\varepsilon )}{\log (M-\varepsilon )}}, \\
|q_{r_j} q_{r_j +s_j}|^{-4\delta \frac{\log (M+\varepsilon )}{\log (M-\varepsilon )}}\ll |\xi -\alpha _j|\ll \max (|\alpha _j -\alpha ' _j|^{-1},1) |q_{r_j}q_{r_j +s_j}|^{-2-\frac{\log (M-\varepsilon )}{(4\kappa +2)\log (M+\varepsilon )}},
\end{gather*}
for sufficiently large $j$.
Applying Proposition \ref{Mahler and Koksma2}, we have
\begin{eqnarray*}
w_2 ^{*}(\xi ) \leq 64(2\kappa +1)^3 \Dio ({\bf a})-1.
\end{eqnarray*}
Thus, we have (\ref{main1.2}) by (\ref{dif}).
\end{proof}
\begin{proof}[{\rm Proof of Theorem \ref{main2}}]
From Theorem \ref{Lagrange}, \ref{Mahler lower}, and Proposition \ref{main7}, we have $w_2(\xi )\geq w_2 ^{*}(\xi )\geq 2$.
Without loss of generality, we may assume that $\Dio ({\bf a})>1$.
Take a real number $\delta $ such that $1<\delta <\Dio ({\bf a})$.
For $n\geq 1$, there exist finite words $U_n,V_n $ and a real number $w_n $ such that $U_n V_n ^{w_n}$ is a prefix of ${\bf a}$, the sequence $(|V_n ^{w_n}|)_{n\geq 1}$ is strictly increasing, and $|U_n V_n ^{w_n}|\geq \delta |U_n V_n|$.
Set $r_n :=|U_n|, s_n :=|V_n|$, and $\alpha _n :=[0,a_1,\ldots ,a_{r_n},\overline{a_{r_n+1},\ldots ,a_{r_n+s_n}}]$.
Let $\tilde{M}$ denote an upper bound of $(|q_n|^{1/n})_{n\geq 1}$.
For any $\varepsilon >0$, there exists an integer $n_0$ such that for all $n\geq n_0$,
\begin{eqnarray*}
(m-\varepsilon )^n<|q_n|<(M+\varepsilon )^n.
\end{eqnarray*}
Since $\xi $ and $\alpha _n$ have the same first $r_n+\lceil w_n s_n\rceil $ partial quotients, we obtain
\begin{eqnarray*}
|\xi -\alpha _n|\leq |q_{r_n+\lceil w_n s_n\rceil}|^{-2}<(M+\varepsilon )^{-2(r_n+\lceil w_n s_n\rceil )\frac{\log (m-\varepsilon )}{\log (M+\varepsilon )}}.
\end{eqnarray*}
Assume that the sequences $(r_n)_{n\geq 1}$ and $(s_n)_{n\geq 1}$ are bounded.
Then, for all $n\geq 1$, we have
\begin{eqnarray*}
H(\alpha _n)\leq |q_{r_n}q_{r_n+s_n}|\leq \tilde{M}^{2 r_n+s_n}\leq C,
\end{eqnarray*}
where $C$ is some constant, by Lemma \ref{height upper}.
Therefore, the set $\{ \alpha _n \mid n\geq 1\} $ is finite.
Take a positive integer sequence $(n_i)_{i\geq 1}$ such that $n_i\rightarrow \infty $ as $i\rightarrow \infty $ and $\alpha _{n_1}=\alpha _{n_2}=\cdots $.
Since $(s_n)_{n\geq 1}$ is bounded, we have $w_n \rightarrow \infty $ as $n\rightarrow \infty $.
Hence, we obtain ${\bf a}=U_{n_i}\overline{V_{n_i}}$, which is a contradiction.
We next consider the case that $(r_n)_{n\geq 1}$ is unbounded.
Here, if necessary, taking a subsequence of $(r_n)_{n\geq 1}$, we assume that $(r_n)_{n\geq 1}$ is increasing and $r_1\geq n_0$.
Since $H(\alpha _n)\leq (M+\varepsilon )^{2 r_n +s_n}$ by Lemma \ref{height upper}, we have
\begin{eqnarray*}
|\xi -\alpha _n|\leq H(\alpha _n)^{-\frac{r_n+\lceil w_n s_n\rceil }{r_n+s_n}\frac{\log (m-\varepsilon )}{\log (M+\varepsilon )}}\leq H(\alpha _n)^{-\delta \frac{\log (m-\varepsilon )}{\log (M+\varepsilon )}}.
\end{eqnarray*}
Hence, we obtain (\ref{lDio}).
We consider the case that $(r_n)_{n\geq 1}$ is bounded, $(s_n)_{n\geq 1}$ is unbounded, and $\Dio ({\bf a})$ is finite.
Here, if necessary, taking a subsequence of $(s_n)_{n\geq 1}$, we assume that $(s_n)_{n\geq 1}$ is increasing and $s_1\geq n_0$.
Then, for all $n\geq 1$, we have
\begin{eqnarray*}
H(\alpha _n)\leq \tilde{M}^{r_n} (M+\varepsilon )^{r_n+s_n}<C_1 (M+\varepsilon )^{r_n+s_n},
\end{eqnarray*}
where $C_1$ is some constant.
Therefore, we obtain
\begin{eqnarray*}
|\xi -\alpha _n| & \leq & (C_1 H(\alpha _n)^{-1})^{2\frac{r_n+\lceil w_n s_n\rceil }{r_n+s_n}\frac{\log (m-\varepsilon )}{\log (M+\varepsilon )}}
\leq C_1 ^{2\Dio ({\bf a})} H(\alpha _n)^{-2\delta \frac{\log (m-\varepsilon )}{\log (M+\varepsilon )}}.
\end{eqnarray*}
Hence, we obtain (\ref{lDio}).
We consider the case that $(r_n)_{n\geq 1}$ is bounded, $(s_n)_{n\geq 1}$ is unbounded, and $\Dio ({\bf a})$ is infinite.
Then, for all $n\geq 1$, we have $q^n\leq |q_n|\leq \tilde{M}^n$, which implies $H(\alpha _n)\leq \tilde{M}^{2r_n+s_n}$.
Therefore, in the same manner, we obtain
\begin{eqnarray*}
|\xi -\alpha _n|\leq H(\alpha _n)^{-\delta \frac{\log q}{\log \tilde{M}}}.
\end{eqnarray*}
Hence, we have $w_2 ^{*}(\xi )=+\infty$.
Assume that the sequence $(|a_n|)_{n\geq 1}$ is bounded.
We denote by $A$ its upper bound.
We consider the case that $(r_n)_{n\geq 1}$ is unbounded.
Here, if necessary, taking a subsequence of $(r_n)_{n\geq 1}$, we assume that $(r_n)_{n\geq 1}$ is increasing and $r_1\geq n_0$.
Let $P_n(X)$ be the minimal polynomial of $\alpha _n$.
From Lemma \ref{conj2}, we obtain
\begin{eqnarray*}
|P_n(\xi )| & \leq & H(\alpha _n)|\xi -\alpha _n||\xi -\alpha _n '| \leq A^2 H(\alpha _n)q_{r_n+\lceil w_n s_n\rceil} ^{-2} q_{r_n} ^{-2} \\
& \leq & A^2 H(\alpha _n)^{-2\frac{2 r_n+\lceil w_n s_n\rceil }{2 r_n+s_n}\frac{\log (m-\varepsilon )}{\log (M+\varepsilon )}+1}.
\end{eqnarray*}
Since
\begin{eqnarray*}
\frac{2 r_n+\lceil w_n s_n\rceil }{2 r_n+s_n} & \geq & \frac{r_n+\delta (r_n +s_n)}{2 r_n +s_n}\geq \frac{r_n +s_n/2 +\delta (r_n +s_n /2)}{2 r_n +s_n}
\geq \frac{1+\delta }{2},
\end{eqnarray*}
we obtain (\ref{lDio2}).
In the remaining case, we obtain (\ref{lDio2}) by the same argument as in the proof of (\ref{lDio}).
\end{proof}
\appendix
\section{Rational approximation in $\lF _q((T^{-1}))$}\label{Rational sec}
\begin{lem}
Let ${\bf a}=(a_n)_{n\geq 0}$ be a non-ultimately periodic sequence over $\lF _q$.
Set $\xi :=\sum_{n=0}^{\infty}a_n T^{-n}$.
Then we have
\begin{eqnarray}\label{last2}
w_1(\xi )\geq \max (1, \Dio ({\bf a})-1).
\end{eqnarray}
\end{lem}
\begin{proof}
From Theorem \ref{Mahler lower}, we have $w_1(\xi )\geq 1$.
Without loss of generality, we may assume that $\Dio ({\bf a})>1$.
Take a real number $\delta $ such that $1<\delta <\Dio ({\bf a})$.
For $n\geq 1$, there exist finite words $U_n, V_n$ and a real number $w_n$ such that $U_n V_n ^{w_n}$ is a prefix of ${\bf a}$, the sequence $(|V_n ^{w_n}|)_{n\geq 1}$ is strictly increasing, and $|U_n V_n ^{w_n}|\geq \delta |U_n V_n|$.
Put $q_n:=T^{|U_n|}(T^{|V_n|}-1)$.
Then there exists $p_n \in \lF _q[T]$ such that
\begin{eqnarray*}
\frac{p_n}{q_n}=\sum _{k=0}^{\infty }b_k ^{(n)} T^{-k},
\end{eqnarray*}
where $(b_k ^{(n)})_{k\geq 0}$ is the infinite word $U_n\overline{V_n}$ by Lemma 3.4 in \cite{Firicel}.
Since $\xi $ and $p_n/q_n$ have the same first $|U_nV_n ^{w_n}|$ digits, we obtain
\begin{eqnarray*}
\left| \xi -\frac{p_n}{q_n} \right| \leq |q_n|^{-\delta }.
\end{eqnarray*}
Hence, we have (\ref{last2}).
\end{proof}
The following theorem is an analogue of Th\'eor\`eme 2.1 in \cite{Adamczewski4} and Theorem 1.3 in \cite{Ooto}, and is an extension of Theorem 1.2 in \cite{Firicel}.
\begin{thm}
Let ${\bf a}=(a_n)_{n\geq 0}$ be a non-ultimately periodic sequence over $\lF _q$.
Set $\xi :=\sum_{n=0}^{\infty}a_n T^{-n}$.
Assume that there exist integers $n_0\geq 1$ and $\kappa \geq 2$ such that for all $n\geq n_0$,
\begin{eqnarray*}
p({\bf a}, n) \leq \kappa n.
\end{eqnarray*}
If the Diophantine exponent of ${\bf a}$ is finite, then we have
\begin{eqnarray}\label{last}
w_1(\xi )\leq 8(\kappa +1)^2(2\kappa +1)\Dio ({\bf a})-1.
\end{eqnarray}
\end{thm}
\begin{proof}
For $n\geq n_0$, take finite words $U_n ,V_n$ and a rational number $w_n$ satisfying Lemma \ref{combi} (i)-(vi).
Put $q_n:=T^{|U_n|}(T^{|V_n|}-1)$.
Then there exists $p_n \in \lF _q[T]$ such that
\begin{eqnarray*}
\frac{p_n}{q_n}=\sum _{k=0}^{\infty }b_k ^{(n)} T^{-k},
\end{eqnarray*}
where $(b_k ^{(n)})_{k\geq 0}$ is the infinite word $U_n\overline{V_n}$ by Lemma 3.4 in \cite{Firicel}.
Since $\xi $ and $p_n/q_n$ have the same first $|U_nV_n ^{w_n}|$ digits, we obtain
\begin{eqnarray*}
\left| \xi -\frac{p_n}{q_n} \right| \leq |q_n|^{-1-\frac{1}{4\kappa +2}}.
\end{eqnarray*}
Take a real number $\delta $ which is greater than $\Dio ({\bf a})$.
Note that $\delta >1.$
By the definition of Diophantine exponent, there exists an integer $n_1\geq n_0$ such that for all $n\geq n_1$
\begin{eqnarray*}
\left| \xi -\frac{p_n}{q_n} \right| \geq |q_n|^{-\delta }.
\end{eqnarray*}
We define a positive integer sequence $(n_j)_{j\geq 1}$ by $n_{j+1}=2(\kappa +1)n_j$ for $j\geq 1$.
It follows from Lemma \ref{combi} (iii) and (vi) that for $j\geq 1$
\begin{eqnarray*}
|q_{n_j}|<|q_{n_{j+1}}|\leq |q_{n_j}|^{4(\kappa +1)^2}.
\end{eqnarray*}
Thus, by Lemma 3.2 in \cite{Firicel}, we obtain (\ref{last}).
\end{proof}
Consequently, the following result holds.
\begin{cor}
Let ${\bf a}=(a_n)_{n\geq 0}$ be a non-ultimately periodic sequence over $\lF _q$.
Set $\xi :=\sum_{n=0}^{\infty}a_n T^{-n}$.
Then the Diophantine exponent of ${\bf a}$ is finite if and only if $\xi $ is not a $U_1$-number.
\end{cor}
\end{document}
\begin{document}
\title{Dynamics of an Ion Coupled to a Parametric Superconducting Circuit}
\author{Dvir Kafri$^1$, Prabin Adhikari$^1$, Jacob M. Taylor$^{1,2}$}
\affiliation{$^1$Joint Quantum Institute, University of Maryland, College Park}
\affiliation{$^2$National Institute of Standards and Technology, Gaithersburg, Maryland}
\begin{abstract}
Superconducting circuits and trapped ions are promising architectures for quantum information processing. However, the natural frequencies for controlling these systems -- radio frequency ion control and microwave domain superconducting qubit control -- make direct Hamiltonian interactions between them weak. In this paper we describe a technique for coupling a trapped ion's motion to the fundamental mode of a superconducting circuit, by applying to the circuit a carefully modulated external magnetic flux. In conjunction with a non-linear element (Josephson junction), this gives the circuit an effective time-dependent inductance. We then show how to tune the external flux to generate a resonant coupling between the circuit and ion's motional mode, and discuss the limitations of this approach compared to using a time-dependent capacitance.
\end{abstract}
\pacs{85.25.Cp, 37.10.Ty, 03.75.Lm}
\maketitle
\section{Introduction}
Superconducting circuits and trapped ions have distinct advantages in quantum information processing. Circuits are known for fast gate times, flexible fabrication methods, and macroscopic sizes, allowing multiple applications in quantum information science\cite{Dicarlo2009, Clarke2004, Blais2004, Zakka-Bajjani2011}. Unfortunately, they have short coherence times and their decoherence mechanisms are hard to address\cite{Devoret2013}. Trapped ions, on the other hand, serve as ideal quantum memories. Indeed, the hyperfine transition displays coherence times on the order of seconds to minutes\cite{Bollinger1991,Roos2004,Langer2005,Harty2014}, while high fidelity state readout is available through fluorescence spectroscopy \cite{Leibfried2003,Dehmelt1975}. Unfortunately, ions depend mainly on motional gates for interactions \cite{Cirac1995, Molmer1999, Blatt2008, Monz2009}; these are correspondingly slow, susceptible to motional heating associated with traps\cite{Wineland1998,Turchette2000,Deslauriers2004}, and occur only at short -- dipolar -- range. The distinct advantages of ion and superconducting systems therefore motivate a hybrid system comprising both architectures, producing a long-range coupling between high-quality quantum memories.
Early proposals for hybrid atomic and solid state systems \cite{Sorensen2004,Tian2004} have yet to be implemented experimentally. Other approaches involve coupling solid state systems to atomic systems with large dipole moments, such as ensembles of polar molecules\cite{Rabl2006} or Rydberg atoms\cite{Petrosyan2009}. Although the motional dipole couplings with the electric field of superconducting circuits can be several hundred $\rm{kHz}$, these systems suffer from a large mismatch between motional ($\sim \rm{MHz}$) and circuit ($\sim \rm{GHz}$) frequencies. This causes the normal modes of the coupled systems to be either predominantly motional or photonic in nature, thereby limiting the rate at which information is carried between them. Implementation of a practical hybrid device therefore requires something additional.
Parametric processes allow for efficient conversion of excitations between off-resonant systems. In the field of quantum optics, they are widely used in the frequency conversion of photons using nonlinear media \cite{Burnham1970, Mandel1995}. In the realm of superconducting quantum devices, parametric amplifiers provide highly sensitive, continuous readout measurements while adding little noise \cite{Bergeal2010,Castellanos-Beltran2008,Vijay2009,Vijay2011}. Parametric processes can also be used to generate controllable interactions between superconducting qubits and microwave resonators \cite{Beaudoin2012,Strand2013,Allman2014}. In the context of hybrid systems, Ref.~\cite{Kielpinski2012} presents a parametric coupling scheme between the resonant modes of an LC circuit and a trapped ion. The ion, confined in a trap with frequency $\omega_{i}$, is coupled to the driven sidebands of a high quality factor parametric LC circuit whose capacitance is modulated at frequency $\omega_{LC} - \omega_{i}$. This gives rise to a coupling strength on the order of tens of kHz for typical ion chip-trap parameters. A different approach proposed in Ref.~\cite{Daniilidis2013} is based on a position-charge interaction that is quadratic in the charged particle's position, so that driving of its motion produces a parametric coupling. In this approach the circuit couples to an electron rather than an ion, leading to an enhanced coupling strength ($\sim 1$ MHz) due to its reduced mass.
In this manuscript we describe an alternative parametric driving scheme to produce coherent interactions between atoms and circuits. By using a time-dependent external flux, we drive a superconducting loop containing a Josephson junction, which causes the superconductor to act as a parametric oscillator with a tunable inductance. By studying the characteristic, time-dependent excitations of this system, we show how to produce a resonant interaction between it and the motional mode of a capacitively coupled trapped ion. Although in principle our approach could be used to produce a strong coupling, we find that, in contrast with capacitive driving schemes, the mismatch between inductive driving and capacitive (charge-mediated) interaction causes a significant loss in coupling strength.
The manuscript proceeds as follows: In the first section, we describe the experimental setup of the circuit and ion systems and motivate the method used to couple them. To understand how the Josephson non-linearity and external flux affect the circuit, we transform to a reference frame corresponding to the classical solution of its non-linear Hamiltonian. We then linearize about this solution, resulting in a Hamiltonian that describes fluctuations about the classical equations of motion and corresponds to an LC circuit with a sinusoidally varying inductance. This periodic, linear Hamiltonian is characterized by the quasi-periodic solutions to Mathieu's equation. From these functions we define the `quasi-energy' annihilation operator for the system and transform to a second reference frame where it is time-independent. Finally, we derive the circuit-ion interaction in the interaction picture, which allows us to directly compute the effective coherent coupling strength between the systems. We conclude by comparing our results to previous work\cite{Kielpinski2012} (a capacitive driving scheme) and analyzing why inductive modulation is generically ineffective for capacitive couplings.
\section{Physical System and Hamiltonian}
We begin with a basic physical description of the ion-circuit system, and as in Ref.~\cite{Devoret1995}, construct the classical Lagrangian before deriving the quantized Hamiltonian. We consider a single ion placed close to two capacitive plates of a superconducting circuit. We assume that the ion is trapped in an effective harmonic potential with a characteristic frequency $\omega_z$ in the direction parallel to the electric field between the two plates\cite{Leibfried2003}. The associated ion Lagrangian is
\beq
\mathcal{L}_{ion}(z, \dot z, t) = \frac{1}{2} m \dot z^2 - \frac{1}{2} m \omega_z^2 z^2 \,,
\eeq
where $m$ is the ion mass and $z$ its displacement from equilibrium. Since the LC circuit will only couple to the ion motion in the $z$ direction, we ignore the Lagrangian terms associated with motion in the other axial directions.
\begin{figure}
\caption{Left: An rf-SQUID with Josephson energy $E_{J}$.}
\label{Circuit}
\end{figure}
The circuit interacting with the ion contains a Josephson junction \cite{Likharev1986} (Fig.~\ref{Circuit}) shunted by an inductive outer loop with inductance $L$. This configuration is known as a radio frequency superconducting quantum interference device (rf-SQUID)\cite{Clarke2004}. The SQUID is connected in parallel to a capacitor $C$, whose plates are coupled to the nearby trapped ion. An external, time-dependent magnetic flux $\phi_{x}(t)$ through the circuit is used to tune the characteristic frequencies of this system. In our proposed configuration, the SQUID Lagrangian can be written in terms of the node flux $\phi$ as\cite{Devoret1995}
\beq
\label{Lsquid}
\mathcal{L}_{q}(\phi,\dot \phi, t) = \frac{1}{2} C_\Sigma \dot{\phi}^2 + E_{J} \cos ( \phi/\tilde \phi_0) - \frac{1}{2 L} (\phi-\phi_{x}(t))^2 \,,
\eeq
where $\tilde \phi_0 = \hbar/(2 e)$ is the reduced flux quantum, and $C_\Sigma = C + C_J$ is the effective capacitance of the circuit including the capacitance $C_J$ of the Josephson junction. The Josephson energy is defined in terms of the critical current of the junction, $E_J = I_c \tilde \phi_0$. The Josephson term adds a non-linearity to the system $\propto E_J$ which will, in conjunction with the external driving, allow for coherent transfer of excitations between the otherwise off-resonant LC circuit (typical frequency $\sim 1-10\ \mathrm{GHz}$) and ion motional mode ($\sim 1-10\ \mathrm{MHz}$). Finally, we include the interaction between the two systems, associated with the ion motion through the electric field between the two plates in the dipole approximation:
\ba
\mathcal{L}_I = -e \l(\frac{\xi}{d} \dot{\phi}\r) z \,,
\ea
where $d$ is the distance between the two plates and $\xi\sim 0.25$ a dimensionless factor associated with the capacitor geometry\cite{Kielpinski2012}.
With the full Lagrangian $\mathcal{L} = \mathcal{L}_q + \mathcal{L}_{ion} + \mathcal{L}_I$ we follow the standard prescription to define the canonical variables conjugate to $z$ and $\phi$:
\ba
\label{p}
p_{z} &=& \pd{\mathcal{L}}{\dot{z}} = m \dot{z}\\
\label{q} q &=& \pd{\mathcal{L}}{\dot{\phi}} = C_{\Sigma} \dot{\phi} - \xi \frac{e}{d} z\,.
\ea
These are identified as the momentum of the ion in the $z$ direction and the effective Cooper-pair charge difference across the Josephson junction, respectively. We use the canonical Legendre transformation $H = q \dot \phi + p_z \dot z - \mathcal{L}$ to define the Hamiltonian and quantize the system, giving
\beq
\hat H(t) = \hat H_{ion} + \hat H_{q}(t) + \hat H_{I}\,,
\eeq
where
\ba
\hat H_{ion} &=& \frac{\hat p_{z}^2}{2m} + \frac{1}{2} m \omega_{i}^2 \hat z^2, \\
\hat H_{q}(t) &=& \frac{\hat q^2}{2 C_{\Sigma}} -E_{J} \cos(\hat \phi/\tilde \phi_0) + \frac{1}{2 L} (\hat \phi-\phi_{x}(t))^2, \\
\hat H_{I} &=& e\frac{ \xi}{d C_{\Sigma}} \hat q \hat z\,,
\ea
As the canonical charge $q$ of equation \eqref{q} has a term proportional to $z$, the ion Hamiltonian $\hat H_{ion}$ gains an extra potential term proportional to $\hat z^2$. Accounting for this term, we define the dressed trap frequency,
\ba
\omega_{i}^2 = \omega_{z}^2 + \frac{e^2 \xi^2}{d^2 C_\Sigma m}\,.
\ea
The latter correction is small compared to $\omega_z$; it is on the order of $50\ \mathrm{kHz}$ for the parameters suggested in Ref.~\cite{Kielpinski2012} ($d \approx 25\ \mu\mathrm{m}$, $C_\Sigma \approx 50\ \mathrm{fF}$ and $m \approx 1.5 \times 10^{-26}\ \mathrm{kg}$ the atomic mass of Beryllium). Finally, we note the resulting commutation relations, $[\hat \phi,\hat q] = [\hat z, \hat p_z] = i \hbar$.
To motivate the method for coherently coupling the ion and circuit, we consider the Hamiltonian in the interaction picture, i.e., in the frame rotating with respect to $\hat H_q(t) + \hat H_{ion}$. In this frame the Hamiltonian takes the form,
\beq
\label{Hint}
\hat H_{int}(t) = \frac{\xi e}{d C_\Sigma} \hat q(t) \l( \hat b e^{-i \omega_i t} + \hat b^\dagger e^{i \omega_i t}\r) \sqrt{\frac{\hbar}{2 m \omega_i}}\,,
\eeq
where $\hat b = \sqrt{\frac{m \omega_i}{2 \hbar}} \hat z + i \sqrt{\frac{1}{2 \hbar m \omega_i}} \hat p_z$ is the annihilation operator associated with excitations of the ion motional mode, and $\hat q(t)$ is propagated according to the time-dependent Hamiltonian $\hat H_{q}(t)$. Observe that if we set both $\phi_x(t)$ and $E_J$ to $0$ in the definition of $\hat H_q(t)$, the charge $\hat q(t)$ will oscillate as a harmonic oscillator,
\beq
\hat q(t) \rightarrow i\sqrt{\frac{\hbar C_\Sigma \omega_0}{2 }}(\hat a e^{-i \omega_0t}- \hat a^\dagger e^{i \omega_0t})\,,
\eeq
with $\hat a$ the circuit annihilation operator analogous to $\hat b$, and $\omega_0$ the undriven LC frequency,
\beq
\omega_0 = 1/\sqrt{L C_\Sigma}\,.
\eeq
In order to coherently exchange excitations between the systems, $\hat H_{int}(t)$ should only contain beam splitter-like terms, $\hat a \hat b^\dagger$ and $\hat a^\dagger \hat b$, while suppressing the excitation non-conserving terms, $\hat a^\dagger \hat b^\dagger$ and $\hat a \hat b$. Yet since $\omega_0 \gg \omega_i$, the $a b^\dagger$ terms oscillate at a frequency comparable to the $a^\dagger b^\dagger$ terms, both of which have negligible effect in the rotating wave approximation (RWA). Our goal is therefore the following: design $E_J$ and $\phi_x(t)$ such that $\hat a \hat b^\dagger$ contains a time-independent component in the interaction picture, while $\hat a \hat b$ contains only oscillating terms that drop out in the RWA.
\section{Linearization of the Parametric Oscillator}
Before we can determine the parameters $\phi_x(t)$ and $E_J$ allowing for coherent exchange of excitations, we must first bring $\hat H_q(t)$ into a more agreeable form. To begin, we linearize the circuit Hamiltonian about the classical solution to its equations of motion. This makes it possible (in the next section) to identify effective annihilation and creation operators for the LC system, and to analyze their spectrum.
We linearize by displacing the charge and flux variables by time-dependent scalars, $q_c(t)$ and $\phi_c(t)$, through the unitary transformations
\ba
\nonumber \hat U_{1}(t) &=& e^{i \hat \phi q_{c}(t)/\hbar}, \\
\hat U_{2}(t) &=& e^{-i \hat q \phi_{c}(t)/\hbar}\,.
\ea
Specifically, we consider the LC circuit in terms of the displaced states,
\beq
\ket{\tilde \Psi(t)} = \hat U^\dagger_2(t) \hat U^\dagger_1(t) \ket{\Psi(t)}\,,
\eeq
whose equation of motion satisfies
\ba
\begin{aligned}
\partial_t \ket{\tilde \Psi(t)} &= -i \hbar^{-1}\tilde H_q(t) \ket{\tilde \Psi(t)}\,,\\
\label{Hrot}
\tilde H_q &= \hat U_2^\dagger \hat U_1^\dagger \hat H_q \hat U_1 \hat U_2 - i \hbar \hat U_2^\dagger \hat U_1^\dagger \frac{\partial \hat U_1}{\partial t} \hat U_2\\
& - i \hbar \hat U_2^\dagger \frac{\partial \hat U_2}{\partial t}\,.
\end{aligned}
\ea
Given $[\hat \phi, \hat q] = i \hbar$, we have that $\hat U_1^\dagger \hat q \hat U_1 = \hat q + q_c(t) $ and $\hat U_2^\dagger \hat \phi \hat U_2 = \hat \phi + \phi_c(t) $, while clearly $\hat U_1^\dagger \hat \phi \hat U_1 = \hat \phi$ and $\hat U_2^\dagger \hat q \hat U_2 = \hat q$. The displaced state Hamiltonian is therefore
\ba
\tilde H_q(t) &=& \frac{(\hat q+q_{c}(t))^2}{2C_{\Sigma}} + V_{q}(\hat \phi+\phi_{c}(t)) \nonumber \\
& & + (\hat \phi+\phi_{c}(t)) \partial_t q_{c}(t) -\hat q \partial_t \phi_{c}(t),
\ea
where we have defined the nonlinear potential,
\beq
\label{Vq}
V_q(\hat \phi )= - E_J \cos( \hat \phi/\tilde \phi_0) + \frac{1}{2 L}( \hat \phi - \phi_x(t))^2 \,,
\eeq
with $\tilde \phi_0 = \hbar/(2 e)$. We now Taylor expand $V_q$ about $\phi_c(t)$, and collect all first and second order terms in $\hat q $ and $\hat \phi$, giving
\ba
\tilde H_q(t) &=& \hat q \l( q_c(t)/C_\Sigma - \partial_t \phi_c \r) + \hat \phi \l( V_q'(\phi_c) + \partial_t q_c \r)\nonumber\\
&& + \frac{\hat q^2}{2 C_\Sigma} + \frac{V_q''(\phi_c)}{2!}\hat \phi^2 + R(\hat \phi)\,,
\ea
where we have dropped all scalar terms, and
\beq
\label{R}
R(\hat \phi) = \sum_{k \geq 3} \tilde \phi_0^k V_q^{(k)}(\phi_c)\l(\hat \phi/\tilde \phi_0\r)^k\frac{1}{k!}
\eeq
represents all higher order terms in the Taylor expansion.
To complete the linearization, the displacements $q_c$ and $\phi_c$ are chosen so that the first order terms in $\hat q$ and $\hat \phi$ vanish, and therefore $\tilde H_q$ is quadratic to leading order:
\ba
\partial_t \phi_c &=& q_c/C_\Sigma\nonumber\\
\label{classical}\partial_t q_c &=& - V_q'(\phi_c)\,.
\ea
These relations are the solutions to the classical, driven Hamiltonian, $H_q = \frac{q_c^2}{2 C_\Sigma} + V_q(\phi_c,t)$, and can be substituted into each other to give
\beq
\label{driving}
\partial_t^2 \phi_c + \omega_0^2 (\phi_c + \beta \tilde \phi_0 \sin(\phi_c/\tilde \phi_0)) = \omega_0^2 \phi_x(t) \,,
\eeq
where
\beq
\beta = L E_J/\tilde \phi_0^2 = \frac{L I_c}{\tilde \phi_0}
\eeq
represents the strength of the non-linearity and $\omega_0 = 1/\sqrt{L C_\Sigma}$ is the bare LC resonance frequency. Substituting from \eqref{classical} and computing $V_q''(\phi_c) = \frac{1}{L}(1 + \beta \cos(\phi_c/\tilde \phi_0))$, we can now express the circuit Hamiltonian as
\beq
\label{tHq}
\tilde H_q(t) = \frac{ \hat q^2}{2 C_\Sigma} + \frac{1}{2 L } (1 + \beta \cos(\phi_c(t)/\tilde \phi_0)) \hat \phi^2 \,.
\eeq
Note that we have dropped the higher order terms $R(\hat \phi)$ of equation~\eqref{R}. Understanding when this is valid will require us to express the flux operator $\hat \phi$ in the interaction picture, which we carry out below. A complete analysis is deferred to the appendix.
With equation \eqref{tHq} we have converted to the Hamiltonian of a harmonic oscillator with time-dependent inductance. Although it is explicitly dependent on the classical solution $\phi_c(t)$, we note that engineering this function is rather straightforward. Given a desired $\phi_c(t) $, the external driving $\phi_x(t)$ needed to produce it is given explicitly by equation \eqref{driving} (see Fig.~\ref{fluxDrive}). For simplicity we assume that $\phi_c(t)$ satisfies the relation
\beq
\beta \cos(\phi_c(t)/\tilde \phi_0) = \eta \cos(\omega_d t)\,,
\eeq
where we assume $\eta<\beta$ so that $\phi_c$ is well defined at all $t$. This corresponds to an LC circuit with inverse inductance $L^{-1}$ modulated at amplitude $\eta$ and frequency $\omega_d$,
\beq
\label{tHq2}
\tilde H_q(t) = \frac{ \hat q^2}{2 C_\Sigma} + \frac{1}{2 L } (1 + \eta \cos(\omega_d t)) \hat \phi^2 \,.
\eeq
In the following section we will see how the design parameters $\eta$ and $\omega_d$ determine the evolution of this system.
\begin{figure}
\caption{External flux drive $\phi_x(t)/\tilde \phi_0$ required for a sinusoidal modulation of the circuit inductance. The flux $\phi_x(t)$ is defined explicitly by equation \eqref{driving}.}
\label{fluxDrive}
\end{figure}
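As an illustration of this prescription, the following sketch (Python; the parameter values are arbitrary illustrative choices, not taken from the text, and we work in units with $\omega_0 = \tilde \phi_0 = 1$) reconstructs the drive $\phi_x(t)$ of equation \eqref{driving} from the target modulation $\beta \cos(\phi_c/\tilde \phi_0) = \eta \cos(\omega_d t)$, using one branch of the inverse cosine and a numerical second derivative.
\begin{verbatim}
import numpy as np

beta, eta, omega_d = 1.5, 0.5, 0.95      # illustrative values (eta < beta)

t = np.linspace(0.0, 4 * 2 * np.pi / omega_d, 4001)
# One branch of the target classical flux solving beta*cos(phi_c) = eta*cos(omega_d*t):
phi_c = np.arccos((eta / beta) * np.cos(omega_d * t))
# Equation (driving) with omega_0 = 1:
#   phi_x = d^2(phi_c)/dt^2 + phi_c + beta*sin(phi_c)
d2phi_c = np.gradient(np.gradient(phi_c, t), t)
phi_x = d2phi_c + phi_c + beta * np.sin(phi_c)
\end{verbatim}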
\section{Time-Dependent Quantum Harmonic Oscillator}
\label{TDQHO}
Although we now have a quadratic Hamiltonian for the SQUID, since $\tilde H_q(t) = {\frac{\hat q^2}{2 C_\Sigma} + \frac{1}{2 L }(1 + \eta \cos(\omega_d t)) \hat \phi^2}$ is time-dependent, it is not immediately clear how to define its associated annihilation operator. To do so, we follow the approach of Ref.~\cite{Brown1991} to obtain an effective operator $\hat a$ retaining useful properties of the time-independent case. Specifically, it satisfies $[\hat a, \hat a^\dagger] = 1$ and, in the appropriate reference frame, is explicitly time-independent. Further, in this frame the Hamiltonian can be written as $\hbar (\partial_t \theta(t))( \hat a^\dagger \hat a + 1/2)$, where $\theta(t)$ is the effective phase accumulated by $\hat a$ in the Heisenberg picture. This new Hamiltonian commutes with itself at all times, allowing us to directly transform to the interaction picture in the next section. We will then describe how to control the spectral properties of $\hat q(t)$ in the interaction Hamiltonian \eqref{Hint}, producing the desired time-independent beam splitter-like interaction between circuit and ion motion.
Following Ref.~\cite{Brown1991}, we use unitary transformations to change $\tilde H_q(t)$ so that it is of the form $\sim g(t) ( \alpha \hat q^2 + \delta \hat \phi^2 )$, where $\alpha$ and $\delta$ are explicitly time-independent. To do this, we first analyze the classical equation of motion for the flux $\phi$ associated with $\tilde H_q(t)$, as derived from equation \eqref{tHq2},
\beq
\label{MathieuFirst}
\partial_t^2 f(t) + \omega_0^2(1 + \eta \cos(\omega_d t)) f(t) = 0\,.
\eeq
Since the drive has period $\tau = 2 \pi/\omega_d$, from Floquet's theorem \cite{Magnus1979,Kelley2010} we can find a quasi-periodic solution $f$ satisfying
\ba
\nonumber f(0) &=& 1\,,\\
\label{fPeriod}
f(t + \tau) &=& e^{i \mu \pi} f(t)\,.
\ea
In order for the solution to be stable, the characteristic exponent $\mu$ must be real valued. This imposes constraints on the parameters $\omega_d$ and $\eta$, which we discuss later. As we shall see, the properties of the function $f$ will be closely related to the spectrum of $\hat q(t)$ in the interaction picture.
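For concreteness, the characteristic exponent can be extracted numerically from the monodromy matrix of equation \eqref{MathieuFirst} over one drive period. The sketch below (Python with SciPy; the parameter values are illustrative assumptions only) follows this standard Floquet recipe and, in the stable case, recovers $\mu$ up to sign and modulo 2 from the trace of the monodromy matrix.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

w0, wd, eta = 1.0, 1.3, 0.1            # illustrative values, in units of omega_0
tau = 2 * np.pi / wd                   # drive period

def rhs(t, y):
    f, fdot = y
    return [fdot, -w0**2 * (1.0 + eta * np.cos(wd * t)) * f]

# Propagate the two fundamental solutions over one period -> monodromy matrix.
cols = [solve_ivp(rhs, (0.0, tau), y0, rtol=1e-10, atol=1e-12).y[:, -1]
        for y0 in ([1.0, 0.0], [0.0, 1.0])]
monodromy = np.array(cols).T

tr = np.trace(monodromy)
if abs(tr) <= 2.0:                     # stable: eigenvalues exp(+/- i*mu*pi)
    mu = np.arccos(tr / 2.0) / np.pi   # mu recovered up to sign and modulo 2
    print("stable, mu =", mu)
else:
    print("unstable drive parameters (|tr M| > 2)")
\end{verbatim}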
It is useful to express $f$ in polar form,
\beq
f(t) = r(t) e^{i \theta(t)}\,,
\eeq
where $r>0$ and $\theta$ is real valued. Substituting this relation into the characteristic equation \eqref{MathieuFirst} determines equations of motion for $r$ and $\theta$,
\ba
\label{polarHill}
\begin{split}
\partial_t^2 r & = \l( (\partial_t \theta)^2-\omega_0^2(1 + \eta \cos(\omega_d t))\r) r \,, \\
0 &= r \partial_t^2 \theta + 2 (\partial_t r) (\partial_t \theta)\,.
\end{split}
\ea
We define the associated Wronskian, which sets a characteristic frequency scale for the evolution,
\ba
\nonumber W &=& \frac{1}{2 i}\l(f^*(t) \partial_t f(t) - f(t) \partial_t f^*(t)\r) \\
\nonumber & =& \mathrm{Im}\l( r e^{-i \theta} \partial_t \l(r e^{i \theta} \r) \r)\\
\label{Wronskian} & = & r^2 \partial_t \theta \,.
\ea
We note that $\frac{1}{r}\partial_t W$ is equal to the second line of \eqref{polarHill}, so $\partial_t W = 0$ and $W$ is time-independent.
The above definitions are key to the transformations bringing $\tilde H_{q}(t)$ into the desired form. Our first transformation is
\beq
\hat U_3 = \exp(i \chi(t) \hat \phi^2/\hbar)\,,
\eeq
where
\beq
\chi(t) \equiv \frac{C_\Sigma}{2}\frac{\partial_t r}{r}
\eeq
is the real part of the effective admittance, $\frac{C_\Sigma}{2} \frac{\partial_t f}{ f}$. Using $[ \hat \phi, \hat q] = i \hbar$, we note
\ba
\nonumber \hat U_3^\dagger \hat q U_3 &=& \hat q + 2 \chi \hat \phi\\
\nonumber\hat U_3^\dagger \hat \phi U_3 &=& \hat \phi \\
-i \hbar \hat U_3^\dagger \partial_t \hat U_3 &=& (\partial_t \chi) \hat \phi^2
\ea
After some algebra, the Hamiltonian of Eq. \eqref{tHq2} is transformed to $\hat U_3^\dag \tilde H_q(t) \hat U_3 - i \hbar \hat U_3^\dagger \partial_t \hat U_3$ and becomes
\ba
\nonumber && \frac{\hat q^2}{2 C_\Sigma} + \frac{ \chi}{C_\Sigma} (\hat \phi \hat q + \hat q \hat \phi)\\
&& +\frac{C_\Sigma \hat \phi^2}{2}\l( \omega_0^2\l( 1 + \eta \cos(\omega_d t) \r) + (2 \chi/C_\Sigma)^2 + 2 \partial_t \chi/C_\Sigma \r) \\
\nonumber & = & \frac{\hat q^2}{2 C_\Sigma} + \frac{ \chi}{C_\Sigma} (\hat \phi \hat q + \hat q \hat \phi) + \frac{C_\Sigma}{2} (\partial_t \theta)^2 \hat \phi^2\,.
\ea
In the last line we used $2 \chi/C_\Sigma =(\partial_t r)/r$ and the first line of \eqref{polarHill} to compute $2 \partial_t \chi/C_\Sigma = (\partial_t \theta)^2 - \omega_0^2\l( 1 + \eta \cos(\omega_d t) \r) - (2 \chi/C_\Sigma)^2$.
The next transformation removes the cross term and rescales $\hat q$ and $\hat \phi$:
\beq
\hat U_4 = e^{-i F(t) (\hat \phi \hat q + \hat q \hat \phi)/\hbar} \,,
\eeq
where $F(t) = \frac{1}{2} \log(r)$ satisfies $\partial_t F = \chi/C_\Sigma$. Given $[(\hat \phi \hat q + \hat q \hat \phi) , \hat \phi] =- 2 i \hbar \hat \phi$ and $[(\hat \phi \hat q + \hat q \hat \phi) , \hat q] = 2 i \hbar \hat q$, we compute
\ba
\nonumber \hat U_4^\dagger \hat q U_4 &=& e^{-2 F} \hat q = \frac{\hat q}{r} \\
\nonumber \hat U_4^\dagger \hat \phi U_4 &=& e^{2 F} \hat \phi = r \hat \phi \\
-i \hbar \hat U_4^\dagger \partial_t \hat U_4 &=& -\frac{ \chi}{C_\Sigma} (\hat \phi \hat q + \hat q \hat \phi)\,.
\ea
The final transformed Hamiltonian is therefore
\ba
\nonumber \hat H_{LC} & = & \frac{\hat q^2}{2 C_\Sigma r^2} + \frac{C_\Sigma}{2} ( \partial_t \theta)^2 r^2 \hat \phi^2\\
\label{HLC} & = & \frac{\partial_t \theta }{W} \l(\frac{\hat q^2}{2 C_\Sigma } + \frac{1}{2} C_\Sigma W^2 \hat \phi^2 \r) \,,
\ea
where the last line follows from equation \eqref{Wronskian}.
Equation \eqref{HLC} allows us to define the effective annihilation operator in the standard way,
\beq
\label{a}
\hat a = \sqrt{\frac{ C_\Sigma W}{2 \hbar}} \hat \phi + i \sqrt{ \frac{1}{2\hbar C_\Sigma W}} \hat q\,,
\eeq
so that the Hamiltonian is also equal to
\beq
\label{HLCtrans}
\hat H_{LC}(t) = \hbar (\partial_t \theta) (\hat a^\dagger \hat a + 1/2) \,.
\eeq
In this reference frame, we can interpret $\hat a^\dagger$ as the creation operator for the instantaneous eigenstates of $\hat H_{LC}(t)$, whose energies are integer multiples of $\hbar (\partial_t \theta)(t)$.
\section{Interaction Picture}
\label{interactionPicture}
Using the series of unitary transformations derived above, we now compute the operators $\hat \phi$ and $\hat q$ in the interaction picture with respect to Eq. \eqref{HLCtrans}. This serves two purposes. First, it will allow us to calculate, in the interaction picture, the higher order terms $R(\hat \phi)$ of equation \eqref{R}, which we dropped in order to reach the driven, quadratic Hamiltonian of equation \eqref{tHq2}. In the appendix we evaluate the size of these corrections in the rotated frame, and suggest a technique for minimizing their effect. Second, going to the interaction picture will allow us to write the original interaction $\hat H_I = e\frac{ \xi}{d C_{\Sigma}} \hat q \hat z$ in terms of the effective annihilation operators of equation \eqref{a}, allowing for a straightforward calculation of the coherent coupling strength in the next section.
It is easier to first compute $\hat \phi$ in the interaction picture; the first two transformations are the displacements involved in linearization,
\beq
\hat \phi \rightarrow \hat \phi + \phi_c\,,
\eeq
while the third unitary $\hat U_3 = \exp(i \chi \hat \phi^2/\hbar)$ has no effect on $\hat \phi$. The final unitary $\hat U_4$ rescales $\hat \phi$ by the factor $r$, giving
\beq
\sqrt{\frac{\hbar}{2 C_\Sigma W}} r (\hat a + \hat a^\dagger) + \phi_c(t)\,,
\eeq
where we have used \eqref{a} to express $\hat \phi$ in terms of the annihilation operator $\hat a$. Finally, since in this rotated frame the circuit Hamiltonian is of the form $\hat H_{LC} = \hbar (\partial_t \theta) \l( \hat a^\dagger \hat a + 1/2\r)$, in the corresponding interaction picture $\hat a \rightarrow \hat a e^{-i \theta}$. The final form of $\hat \phi$ in the interaction picture is thus
\begin{align}
\label{phiInt}
\hat \phi_{int}(t) & = \phi_c(t) + \sqrt{\frac{\hbar}{2 C_\Sigma W}} r (\hat a e^{-i \theta} + \hat a^\dagger e^{i \theta})\\
& = \phi_c(t) + \sqrt{\frac{\hbar}{2 C_\Sigma W}}(f^* \hat a + f \hat a^\dagger)\,, \label{phiint}
\end{align}
where we have used $f = r e^{i \theta}$. Using this expression for the interaction picture flux operator, in the appendix we bound the effect of the higher order corrections $R(\hat \phi)$ to the linearized Hamiltonian of equation \eqref{tHq}. A parameter of interest arising in this discussion is the relative size of the characteristic flux, which is set by
\beq
\label{smallflux}
\gamma \equiv \frac{1}{\tilde \phi_0}\sqrt{\frac{\hbar}{2 C_\Sigma W}}\,.
\eeq
Importantly, the final coupling strength is linearly proportional to this parameter, which we shall see limits the efficacy of our approach.
The derivation of $\hat q_{int}$ is analogous to that of $\hat \phi_{int}$. From the action of the four transformations $\hat U_1$ through $\hat U_4$, $\hat q$ takes the form
\beq
q_c(t) + \frac{1}{r} \hat q + C_\Sigma (\partial_t r) \hat \phi\,.
\eeq
Expressing these operators in terms of $\hat a$ of equation \eqref{a} and then going to the rotating frame $\hat a \rightarrow \hat a e^{-i \theta}$ gives
\beq
\hat q \rightarrow q_c(t) + \sqrt{\frac{\hbar C_\Sigma }{2 W }}\l( g(t) \hat a + g(t)^* \hat a^\dagger \r)\,,
\eeq
where $g(t) = (\partial_t r - i \frac{W}{r})e^{-i \theta} $. Using the Wronskian identity $W = r^2 \partial_t \theta$ of equation \eqref{Wronskian}, we see that $g(t) = \partial_t(r e^{-i \theta}) = \partial_t f^*$, so the final form of $\hat q$ in the interaction picture is
\beq
\label{qint}
\hat q_{int}(t) = q_c(t) + \sqrt{\frac{\hbar C_\Sigma}{2 W}}\l(\partial_t f^* \hat a + \partial_t f\hat a^\dagger \r)\,.
\eeq
With this result, we may immediately compute the final ion-circuit interaction,
\begin{align}
\begin{split}
\hat H_{int}(t) &= e\frac{ \xi}{d C_{\Sigma}} \hat q_{int}(t) \hat z_{int}(t)\\
&= \xi \frac{e z_0}{C_\Sigma d} \sqrt{\frac{\hbar C_\Sigma }{2W }}\l( \partial_t f^* \hat a + \partial_t f \hat a^\dagger\r)\\
& \times ( \hat b e^{-i \omega_i t} + \hat b^\dagger e^{i \omega_i t}) \label{Hintfinal}\,,
\end{split}
\end{align}
where $\hat z_{int}(t) = z_0 ( \hat b e^{-i \omega_i t} + \hat b^\dagger e^{i \omega_i t}) $, with $z_0 = \sqrt{\frac{\hbar}{2 m \omega_i}}$ the characteristic displacement of the ion. Note that we have dropped the term $q_c(t) \hat z_{int}(t)$ by using the rotating wave approximation, since $q_c$ oscillates at characteristic frequency $\omega_d \gg \omega_i$.
\section{Coupling strength}
In order to compute the effective coupling strength between the circuit and ion motion, we must first analyze the Fourier spectrum of the characteristic function $f$. We begin by expressing the interaction Hamiltonian of \eqref{Hintfinal} in terms of dimensionless factors,
\ba
\label{canonical}
\nonumber\tilde H_{int}(u) &=& \frac{\hbar \omega_d}{4} \frac{z_0}{d} \xi \gamma \l( \partial_u f^* \hat a + \partial_u f \hat a^\dagger\r)\\
&& \times ( \hat b e^{-i 2 \omega_i u/\omega_d} + \hat b^\dagger e^{i2 \omega_i u/\omega_d}) \label{Hintdimensionless}\,,
\ea
where we have used $\tilde \phi_0 = \hbar/(2 e)$ to get an expression in terms of the flux parameter $\gamma$ of equation~\eqref{smallflux}. Here we have rewritten $f$ in terms of the dimensionless parameter $u = \omega_d t/2$, to better match the canonical form of Mathieu's equation,
\beq
\l[\partial_u^2 + (A - 2 Q \cos 2u)\r] f = 0\,. \label{Mathieu}
\eeq
Using the substitutions $A = 4 (\omega_0/\omega_d)^2$ and $Q = -\eta A/2$, this equation is equivalent to \eqref{MathieuFirst}. By Floquet's Theorem, $f$ can be expressed as a quasi-periodic function
\beq
\label{fseries}
f(u) = e^{i \mu u} \sum_k c_k e^{2 i k u}\,,
\eeq
where the sum has period $u_0 = \pi$ corresponding to $t = 2 \pi/\omega_d$. Multiplying the cross terms of equation \eqref{Hintdimensionless}, we see that the coupling strength is proportional to the time-independent part of $ e^{i 2\omega_i u/ \omega_d } \partial_u f^* \hat a \hat b^\dagger$ (the only term to survive under the RWA). This constant is exactly proportional to the Fourier component of $f$ corresponding to the ion's motional frequency, which is specified by the resonance condition
\beq
\label{resonance}
(\mu + 2 k) = 2\omega_i /\omega_d \,.
\eeq
Accounting for the derivative $\partial_u f^*$ in this expression, the coupling strength is
\beq
\label{coupling}
\hbar |\Omega| = \frac{\hbar \omega_d}{4} \frac{z_0}{d} \xi \gamma (\mu + 2 k) |c_k| = \frac{\hbar \omega_i}{2} \frac{z_0}{d} \xi \gamma |c_k|\,,
\eeq
where in the second equality we used condition \eqref{resonance}. Note that although $\Omega$ is not explicitly dependent on the driving amplitude $\eta$ and frequency $\omega_d$, both $\gamma$ and $c_k$ are functions of these parameters since both are dependent on the characteristic function $f$.
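For concreteness, the characteristic exponent $\mu$ and the Fourier coefficients $c_k$ that enter equation \eqref{coupling} can also be obtained numerically from the monodromy matrix of equation \eqref{Mathieu}. The following Python sketch is an illustration only (it assumes NumPy/SciPy and illustrative drive parameters, and is not part of the derivation): it integrates the Mathieu equation over one period, extracts $\mu$ in the stable regime, and computes the coefficients $c_k$ normalized so that $c_0 = 1$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def mathieu_rhs(u, y, A, Q):
    # Canonical Mathieu equation: f'' + (A - 2 Q cos 2u) f = 0, with y = [f, f'].
    return [y[1], -(A - 2.0 * Q * np.cos(2.0 * u)) * y[0]]

def floquet_data(A, Q, n=2001):
    us = np.linspace(0.0, np.pi, n)
    kw = dict(args=(A, Q), t_eval=us, rtol=1e-10, atol=1e-12)
    s1 = solve_ivp(mathieu_rhs, (0.0, np.pi), [1.0, 0.0], **kw)
    s2 = solve_ivp(mathieu_rhs, (0.0, np.pi), [0.0, 1.0], **kw)
    # Monodromy matrix over one period u in [0, pi]
    M = np.array([[s1.y[0, -1], s2.y[0, -1]],
                  [s1.y[1, -1], s2.y[1, -1]]], dtype=complex)
    evals, evecs = np.linalg.eig(M)
    i = int(np.argmax(np.angle(evals)))
    # Stable case assumed (|eigenvalue| = 1).  mu is recovered only modulo 2;
    # shifting mu by an even integer merely relabels the indices k of c_k.
    mu = np.angle(evals[i]) / np.pi
    v = evecs[:, i]
    f = v[0] * s1.y[0] + v[1] * s2.y[0]      # Floquet solution f(u)
    p = f * np.exp(-1j * mu * us)            # periodic part of f, period pi
    ks = np.arange(-3, 4)
    ck = np.array([trapezoid(p * np.exp(-2j * k * us), us) / np.pi for k in ks])
    ck = ck / ck[ks == 0][0]                 # normalize so that c_0 = 1
    return mu, dict(zip(ks, ck))

# Illustrative near-resonant, weak drive: omega_0/omega_d ~ 1.001, eta = 0.05
mu, ck = floquet_data(A=4.008, Q=-0.10)      # Q = -2*eta
print(mu, abs(ck[-1]))
\end{verbatim}
For this weak, near-resonant drive the printed value of $|c_{-1}|$ should be close to $\eta/2 = 0.025$, consistent with the perturbative expansion quoted below.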
Before we compute the effective coupling strength between the circuit and ion, we first rule out the presence of other resonances in equation~\eqref{canonical}. To do so we account for the ion's micromotion, which can be derived from its equation of motion in a time-dependent potential. Using calculations analogous to those in Sections~\ref{TDQHO} and~\ref{interactionPicture}, one may show that $\hat z_{int}(t) = z_0\l(\hat b e^{-i \omega_i t}h^*(t) + \mbox{h.c.} \r)$, where $h(t)$ is a periodic function whose fundamental frequency $\omega_{rf}$ matches the trapping potential's RF drive~\cite{Leibfried2003}. In the previous analysis we have approximated $h(t) = 1$, as this represents the largest Fourier coefficient of $h(t)$, while other coefficients correspond to frequencies displaced from $\omega_i$ by integer multiples of $\omega_{rf}$. Typically $\omega_{rf}$ ranges between $10$--$100$~MHz, in contrast to the $\omega_d \approx 1$~GHz frequency spacing of the charge oscillations (equations~\eqref{qint} and~\eqref{fseries}). Since the Fourier coefficients of both $h(t)$ and $f(t)$ are negligible at higher frequencies, the product $\hat z_{int}(t) \hat q_{int}(t)$ only has a resonance between $\hat a$ and $\hat b^\dagger$ at frequency $\omega_i$. By the same reasoning, the two-mode squeezing terms ($\hat a \hat b$ and $\hat a^\dagger \hat b^\dagger$) oscillate at frequency at least $2 \omega_i$ and may be dropped in the RWA. Likewise, no resonance exists between $\hat z_{int}(t)$ and the classical motion, since $q_c(t)$ is a periodic function with fundamental frequency $\omega_d$. Thus, the only remaining term after making the rotating wave approximation on $\hat H_{int}(t)$ is the desired interaction Hamiltonian, $\hbar \Omega \,\hat a \hat b^\dagger + \mbox{h.c.}$.
With equation \eqref{coupling} we can now evaluate the strength of the coupling for a driving strategy similar to that of Ref.~\cite{Kielpinski2012}. Specifically, we set the drive frequency to be approximately the difference between the circuit's bare frequency and the ion's motional frequency: $\omega_d \approx \omega_0 - \omega_i$. Since the ion frequency satisfies $\omega_i \ll \omega_0$, the drive frequency $\omega_d$ is nearly resonant with the LC circuit. This means that even a relatively small drive amplitude leads to a mathematical instability in the system. Indeed, for $\eta$ sufficiently large, the characteristic exponent $\mu$ (equation~\eqref{fPeriod}) describing the quasi-periodic function $f$ gains an imaginary component, causing the interaction picture charge operator $\hat q_{int}(t)$ to diverge over time. This instability is alleviated in the presence of environmental dissipation, as the system dynamics are then damped, though for simplicity we neglect these effects in our analysis. As seen in Fig.~\ref{stability}, for $\omega_0 \approx \omega_d\gg \omega_i$ the boundary between mathematically stable and unstable regions is set by $\eta \lesssim 2 \sqrt{\omega_i/\omega_0}$. For near-resonant driving to be mathematically stable, the parameter $\eta$ must therefore be perturbatively small. In this $\eta\ll 1$ limit, the characteristic exponent is equal to $\mu = 2\omega_0/\omega_d + O(\eta^2)$ \cite{McLachlan1947}, so the solution of the frequency matching condition $\mu + 2 k = 2 \omega_i/\omega_d$ corresponds to $k = -1$. Using the relations $A = 4 (\omega_0/\omega_d)^2 \approx 4$ and $Q = - \eta A /2 \approx -2 \eta$, these parameters can be mapped to the canonical form of equation \eqref{Mathieu}. The coefficient $c_{-1} = Q/4 + O(Q^2) = - \eta/2 + O(\eta^2)$ is known from standard perturbative expansions \cite{McLachlan1947}. Substituting this value into equation \eqref{coupling}, we conclude that in the perturbative regime $\eta \lesssim 2 \sqrt{\omega_i/\omega_0}$, the coupling strength between LC and ion modes scales as
\ba
\hbar |\Omega| &= &\frac{\hbar \omega_i}{4} \l( \frac{z_0}{d} \xi \gamma \eta \r)\l(1 + O(\eta) \r)\nonumber \\
&\lesssim& \frac{\hbar \omega_i}{2} \l( \frac{z_0}{d} \xi \gamma \r) \sqrt{\frac{\omega_i}{\omega_0}}\l(1 + O(\sqrt{\omega_i/\omega_0})\r)\,. \label{coupling2}
\ea
We note that this driving scheme may not be optimal. When $\omega_d$ is set far from resonance and $\eta\sim 1$, the characteristic exponent $\mu$ can be stable and vary over a large range of values, allowing for the resonance condition \eqref{resonance} to be satisfied at stronger driving. Note that this also changes the Wronskian $W$ (which changes $\gamma \propto 1/\sqrt{W}$) in a non-linear way, so a general analysis for the best driving parameters is difficult. Strong driving may be infeasible for experimental reasons, since it may require too large a Josephson energy (which is proportional to $\beta$, and $\beta>\eta$).
\begin{figure}
\caption{Stability diagram of Mathieu's equation in the resonant regime. The parameters $\omega_i/\omega_0$ and $\eta$ map to the canonical variables of equation \eqref{Mathieu}.}
\label{stability}
\end{figure}
Unfortunately, we find that the final effective coupling $\Omega$ is substantially weaker than in the proposal of Ref.~\cite{Kielpinski2012}. In that work, the ion's motional degree of freedom was also capacitively coupled to the circuit, but the characteristic equation of the circuit was instead modulated by a time varying capacitive element. The same driving scheme as above was used, leading to an effective coupling strength of the form,
\beq
\hbar \Omega_{cap} \sim \frac{e Q_0}{C_\Sigma} \l(\xi \frac{z_0}{d} \eta \r) \approx \hbar (2 \pi \times 60 \mbox{ kHz})\,.
\eeq
Here $Q_0 = \sqrt{\frac{\hbar C_\Sigma W}{2}}$ is the characteristic charge fluctuation in the driven circuit, and $\eta$ the relative amplitude of the time-varying capacitance, $C(t) = C_\Sigma ( 1 + \eta \sin(\omega_d t))$. For comparison, from equation \eqref{coupling2} we obtain a coupling rate $\Omega \simeq 1 \mbox{ Hz}$. Decoherence rates for these systems are expected to be on the order of $\rm{kHz}$, while leading order corrections from linearization scale as $\hbar \omega_i (\beta \gamma \omega_0/\omega_i)^2$, both of which make this approach infeasible as currently described\footnote{The presence of low frequency sidebands may also make the circuit somewhat more susceptible to $1/f$-like flux noise: Since the circuit's flux variable (Eq.~\eqref{phiInt}) gains a sideband at frequency $\omega_i$ scaling as $\sim \eta$ relative to its main Fourier component, we expect an additional low-frequency contribution to the decoherence rate. A simple Fermi's Golden Rule calculation suggests this should scale as $\eta^2 (\omega_i/\omega_d) \sim 1$, and is therefore comparable in magnitude to the undriven case.}. Note that we are making the same assumptions as Ref.~\cite{Kielpinski2012} about trap geometry ($\xi = 0.25, d = 25 \mu$m) and ion mass ($m \simeq 1.5 \times 10^{-26}$~kg for $^9\mbox{Be}^+$), as well as ion motional frequency ($\omega_i \approx 2 \pi \times 1$~MHz, so $z_0 = \sqrt{\frac{\hbar}{2 m \omega_i}} \approx 25 \mbox{ nm}$). Finally, the parameter $\gamma\sim 1.3$ is set by the circuit capacitance $C_\Sigma = 46 \mbox{ fF}$ and the Wronskian $W \approx \omega_0 = 2\pi \times 1 \mbox{ GHz}$.
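As a rough cross-check of these orders of magnitude, the parameter values listed above can be combined directly in a short Python snippet (an illustration only, using standard physical constants; taking the drive amplitude at the stability bound $\eta = 2\sqrt{\omega_i/\omega_0}$ is an assumption of this sketch):
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34      # J s
e = 1.602176634e-19         # C
two_pi = 2 * np.pi

# Parameter values quoted in the text (following Ref. [Kielpinski2012])
omega_0 = two_pi * 1e9      # bare LC frequency (rad/s)
omega_i = two_pi * 1e6      # ion motional frequency (rad/s)
C_sigma = 46e-15            # total capacitance (F)
W = omega_0                 # Wronskian ~ omega_0 for near-resonant driving
m_ion = 1.5e-26             # 9Be+ mass (kg)
d = 25e-6                   # ion-electrode distance (m)
xi = 0.25                   # geometric factor

phi0 = hbar / (2 * e)                              # reduced flux quantum
gamma = np.sqrt(hbar / (2 * C_sigma * W)) / phi0   # eq. (smallflux), ~1.3
z0 = np.sqrt(hbar / (2 * m_ion * omega_i))         # zero-point motion, ~25 nm

eta_max = 2 * np.sqrt(omega_i / omega_0)           # stability bound on eta
# Upper bound of eq. (coupling2), with eta taken at the stability edge
Omega = (omega_i / 4) * (z0 / d) * xi * gamma * eta_max

print(f"gamma ~ {gamma:.2f}, z0 ~ {z0 * 1e9:.0f} nm, eta_max ~ {eta_max:.3f}")
print(f"|Omega|/2pi ~ {Omega / two_pi:.1f} Hz")    # Hz-scale coupling
\end{verbatim}
With these inputs the coupling comes out at the Hz scale, in line with the comparison above; the precise prefactor depends on how close $\eta$ is taken to the stability boundary.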
As seen from the above comparison, when the ion-circuit interaction is capacitive (proportional to charge), modulating the circuit inductance at frequency $\omega_d \gg \omega_i$ is significantly less effective than modulating the capacitance. This results from the asymmetric dependence of the charge and flux operators on the characteristic function $f$, as demonstrated by the interaction picture expressions for these operators (equations \eqref{phiint} and \eqref{qint}). While the flux scales with $f(t)$, the charge depends explicitly on $\partial_t f(t)$. This means that the Fourier component of $\hat q_{int}(t)$ oscillating at frequency $\omega_i$ (and thus the time-independent component of $\hat H_{int}(t)$) picks up a factor of $\omega_i/\omega_d\ll 1$ compared to $\hat \phi_{int}(t)$. The opposite is true for capacitance modulation (for which the roles of $\hat q$ and $\hat \phi$ are reversed in transformations $\hat U_3$ and $\hat U_4$), which explains why it achieves a much larger effective coupling for a charge-based interaction.
\section{Conclusions}
We have studied a technique to coherently couple a superconducting circuit to the motional mode of a trapped ion by careful variation of the circuit's inductance. We describe a means of tuning the inductance (an external magnetic flux with a Josephson junction) and describe an approximation mapping the circuit's non-linear Hamiltonian to that of a driven harmonic oscillator. Notwithstanding corrections to this approximation, the mismatch between the inductive driving and capacitive interaction is the major reason why the resulting coupling strength is impractically small. We confirm this in a direct comparison with a capacitive modulation scheme for a specific driving strategy, though the general form of the coupling suggests our conclusions hold for a broader class of strategies as well. Indeed, equation \eqref{coupling} holds for any choice of drive parameters $\omega_d, \eta$, and in fact can be applied more generally to any periodic modulation of inductance\footnote{Specifically, we may replace Mathieu's equation \eqref{MathieuFirst} with the more general Hill equation, $\partial_t^2 f + Q(t)f = 0$, where $Q(t)$ is any periodic function. The bare LC resonance frequency would then correspond to $\omega_0^2= \frac{1}{\tau}\int_0^\tau Q(t)\, \mathrm{d}t$, where $Q(t + \tau) = Q(t)$.}. Conversely, our work suggests that for an inductive interaction (e.g. one based on mutual inductance between off-resonant circuits) an inductive modulation is the preferred approach.
\acknowledgements{The authors thank the following people for helpful discussions and comments concerning the work: Raymond Simmonds, Michael Foss-Feig, Hartmut H{\"a}ffner, David Kielpinski, Christopher Monroe, Dietrich Leibfried, and David Wineland. This research was supported by the U.S. Army Research Office Multidisciplinary University Research Initiative award W911NF0910406, and the NSF through the Physics Frontier Center at the Joint Quantum Institute.}
\section{Appendix -- Linearization Procedure}
The expression for $\hat \phi_{int}(t)$ allows us to evaluate the higher order corrections to the linearized Hamiltonian of equation \eqref{tHq2}. Since these corrections arise after the first two transformations ($(\hat \phi,\hat q)\rightarrow (\hat \phi + \phi_c, \hat q + q_c)$), they can be expressed as $\hat R_{int}(t) = R(\hat \phi_{int}(t) - \phi_c)$. Using relations \eqref{R} and $E_J = \beta \tilde \phi_0^2/L$ we obtain
\ba
\label{Rint}
\hat R_{int}(t) & =&\nonumber \sum_{k\geq3} \tilde \phi_0^k V^{(k)}_q|_{\phi_c(t)} \tilde \phi_0^{-k}\l(\hat \phi_{int}(t) - \phi_c(t) \r)^k/ k!\\
& =& \nonumber \frac{\tilde \phi_0^2}{L} \sum_{k\geq3} \beta c_k(t) \l(\sqrt{\frac{\hbar}{2 C_\Sigma W}}\frac{1}{\tilde \phi_0}\r)^k \l(\hat a f^* + \hat a^\dagger f \r)^k \frac{1}{k!}\\
& = & \nonumber\frac{\hbar}{ L C_\Sigma W} \sum_{k \geq 3} \beta c_k(t) \gamma^{k-2} \l(\hat a f^* + \hat a^\dagger f \r)^k \frac{1}{k!}\,,
\ea
where we have defined the characteristic flux parameter
\beq
\label{smallflux2}
\gamma \equiv \frac{1}{\tilde \phi_0}\sqrt{\frac{\hbar}{2 C_\Sigma W}}\,.
\eeq
Here $c_k(t) = E_J^{-1}\tilde \phi_0^k \frac{\partial^k V_q}{\partial \phi^k}|_{\phi = \phi_c}$ is defined according to equation \eqref{Vq}, giving
\ba
\nonumber \beta c_3(t) = &\beta \sin(\phi_c(t)/\tilde \phi_0)& = \sqrt{\beta^2 - \eta^2 \cos^2(\omega_d t)}\\
\nonumber \beta c_4(t) =& \beta \cos(\phi_c(t)/\tilde \phi_0)& = \eta \cos(\omega_d t)\\
\nonumber \beta c_k(t) = & -\beta \partial_x^k \cos(x) |_{x = \phi_c(t)/\tilde \phi_0} &,
\ea
where we have used $\beta \cos(\phi_c(t)/\tilde\phi_0) = \eta \cos(\omega_d t)$.
To bound the error introduced by $\hat R_{int}(t)$ we begin by considering only the $k = 3$ contribution. Using $\omega_0^2 = 1/L C_\Sigma$ this term can be written as
\beq
\label{R3}
\hbar \omega_0 \frac{\omega_0}{W} \frac{\gamma \beta}{3!} \sqrt{1 - (\eta/\beta)^2 \cos^2(\omega_d t)} \l(\hat a f^* + \hat a^\dagger f \r)^3\,.
\eeq
To analyze this term under the rotating wave approximation, we note that our coupling scheme is premised on giving $f$ a Fourier component matching the ion motional frequency, $\omega_i$. Specifically, in the driving scheme described in the text, the parameters $\omega_d\approx \omega_0-\omega_i$ and $\eta<\beta \ll 1$ are chosen such that $f$ has the form
\beq
f(t) = e^{i (\omega_d + \omega_i)t} \l( 1 + \frac{\eta}{2}\l( e^{i \omega_d t}/3 - e^{-i \omega_d t}\r) + ... \r)\,,
\eeq
where all other terms are of order $O(\eta^2)$ and correspond to frequencies $n \omega_d$ ($|n| \geq 2$). Thus the only slowly rotating term in equation \eqref{R3} is of order $\hbar \omega_0 \gamma \eta$ (since $W \approx \omega_0$ for these parameters) and rotates at frequency $\omega_i$. This slowly rotating part is the main contribution of \eqref{R3} for evolutions over a time scale $\sim 1/\omega_i$, and we may compute its effective size over this timescale by using a second-order Magnus expansion \cite{Magnus1954,Blanes2009},
\beq
\label{size}
\hat R_{int}(t) \sim \hbar \omega_i \l(\gamma \beta \frac{\omega_0}{\omega_i}\r)^2\,.
\eeq
Using a similar procedure, we can derive analogous estimates for the higher ($k >3$) order terms as well. But because these terms pick up extra factors of $\gamma$, equation \eqref{size} represents the overall scaling of all higher order terms. This is true when the scale of $|f|$ is on the order of $\sim 1$ (as in Fig.~\ref{fSmall}), the characteristic flux is small $(\gamma \lesssim 1)$, and there are only small fluctuations above the classical solution, $\Mean {\hat a^\dagger \hat a} \sim 1$.
\begin{figure}
\caption{A plot of the periodic component of the characteristic function, $p(u) = e^{-i \mu u} f(u)$.}
\label{fSmall}
\end{figure}
One technique to reduce the effective size of the corrections $\hat R_{int}(t)$ is to replace the single Josephson junction with a linear array of these elements. In the simplest approximation~\cite{Kaplunenko2004}, using a stack of $N$ junctions, the Josephson contribution to the Hamiltonian is transformed as
\beq
-E_J \cos\l(\frac{\hat \phi}{\tilde \phi_0}\r) \rightarrow -N E_J \cos\l(\frac{\hat \phi}{N\tilde \phi_0}\r)\,.
\eeq
In terms of the parameters in the definition of $\hat R_{int}(t)$, this corresponds to the map $\beta\rightarrow \beta/N$, $\tilde \phi_0 \rightarrow N \tilde \phi_0 $ (or equivalently $\gamma\rightarrow \gamma/N$). From equation \eqref{size}, this rescales the leading order contribution of $\hat R_{int}(t)$ by a factor of $1/N^4$. For the resonance ratio $\omega_0/\omega_i = 1000$, an array of $N \sim 100$ junctions should suffice to limit the effect of all higher order corrections.
\end{document}
\begin{document}
\renewcommand{\refname}{References}
\thispagestyle{empty}
\title[Numerical Simulation of 2.5-Set of Iterated
Stratonovich Stochastic Integrals]
{Numerical Simulation of 2.5-Set of Iterated Stratonovich
Stochastic Integrals of Multiplicities 1 to 5 From the
Taylor--Stratonovich Expansion}
\author[D.F. Kuznetsov]{Dmitriy F. Kuznetsov}
\address{Dmitriy Feliksovich Kuznetsov
\newline\hphantom{iii} Peter the Great Saint-Petersburg Polytechnic University,
\newline\hphantom{iii} Polytechnicheskaya ul., 29,
\newline\hphantom{iii} 195251, Saint-Petersburg, Russia}
\email{sde\[email protected]}
\thanks{\sc Mathematics Subject Classification: 60H05, 60H10, 42B05, 42C10}
\thanks{\sc Keywords: Ito stochastic differential equation,
Explicit one-step strong numerical method,
Iterated Stratonovich stochastic integral,
Iterated Ito stochastic integral,
Taylor--Stratonovich expansion, Generalized multiple Fourier series,
Multiple Fourier--Legendre series,
Mean-square approximation, Expansion}
\maketitle{\small
\begin{quote}
\noindent{\sc Abstract.}
The article is devoted to the construction of effective procedures
for the mean-square approximation of iterated Stratonovich stochastic
integrals of multiplicities 1 to 5. We apply the method of
generalized multiple Fourier series to the approximation of
iterated stochastic integrals. More precisely, we use multiple
Fourier--Legendre series converging in the sense of the norm
of the Hilbert space $L_2([t,T]^k),$ $k\in\mathbb{N}.$
The iterated Stra\-to\-no\-vich stochastic integrals under consideration are part
of the Taylor--Stratonovich expansion. For this reason, the results of
the article can be applied to the implementation of
numerical methods with orders 1.0, 1.5, 2.0, and 2.5
of strong convergence
for Ito stochastic differential
equations with multidimensional non-commutative noise.
\end{quote}
}
\setlength{\baselineskip}{2.0em}
\tableofcontents
\setlength{\baselineskip}{1.2em}
\section{Introduction}
Let $(\Omega,$ ${\rm F},$ ${\sf P})$ be a complete probability space, let
$\{{\rm F}_t, t\in[0,T]\}$ be a nondecreasing
right-continuous family of $\sigma$-algebras of ${\rm F},$
and let ${\bf f}_t$ be a standard $m$-dimensional Wiener
stochastic process, which is
${\rm F}_t$-measurable for any $t\in[0, T].$ We assume that the components
${\bf f}_{t}^{(i)}$ $(i=1,\ldots,m)$ of this process are independent. Consider
an Ito stochastic differential equation (SDE) in the integral form
\begin{equation}
\label{1.5.2}
{\bf x}_t={\bf x}_0+\int\limits_0^t {\bf a}({\bf x}_{\tau},\tau)d\tau+
\int\limits_0^t B({\bf x}_{\tau},\tau)d{\bf f}_{\tau},\ \ \
{\bf x}_0={\bf x}(0,\omega).
\end{equation}
\noindent
Here ${\bf x}_t$ is some $n$-dimensional stochastic process
satisfying equation (\ref{1.5.2}).
The nonrandom functions ${\bf a}: \mathbb{R}^n\times[0, T]\to\mathbb{R}^n$,
$B: \mathbb{R}^n\times[0, T]\to\mathbb{R}^{n\times m}$
guarantee the existence and uniqueness up to stochastic equivalence
of a solution
of the equation (\ref{1.5.2}) \cite{1}. The second integral on the
right-hand side of (\ref{1.5.2}) is
interpreted as an Ito stochastic integral.
Let ${\bf x}_0$ be an $n$-dimensional random variable, which is
${\rm F}_0$-measurable and
${\sf M}\{\left|{\bf x}_0\right|^2\}<\infty$
(${\sf M}$ denotes a mathematical expectation).
We assume that
${\bf x}_0$ and ${\bf f}_t-{\bf f}_0$ are independent when $t>0.$
It is well known \cite{KlPl2}-\cite{KPS}
that Ito SDEs are
adequate mathematical models of dynamic systems under
the influence of random disturbances. One of the effective approaches
to numerical integration of
Ito SDEs is an approach based on
the Taylor--Ito and
Taylor--Stratonovich expansions
\cite{KlPl2}-\cite{xxx333}.
The most important feature of such
expansions is the presence of the so-called iterated
Ito and Stratonovich stochastic integrals, which play a key
role in the numerical integration of Ito SDEs
and have the
following form
\begin{equation}
\label{ito}
J[\psi^{(k)}]_{T,t}=\int\limits_t^T\psi_k(t_k) \ldots \int\limits_t^{t_{2}}
\psi_1(t_1) d{\bf w}_{t_1}^{(i_1)}\ldots
d{\bf w}_{t_k}^{(i_k)},
\end{equation}
\begin{equation}
\label{str}
J^{*}[\psi^{(k)}]_{T,t}=
{\int\limits_t^{*}}^T
\psi_k(t_k) \ldots
{\int\limits_t^{*}}^{t_2}
\psi_1(t_1) d{\bf w}_{t_1}^{(i_1)}\ldots
d{\bf w}_{t_k}^{(i_k)},
\end{equation}
\noindent
where $\psi_1(\tau),\ldots,\psi_k(\tau)$ are continuous nonrandom
functions
on $[t,T],$ ${\bf w}_{\tau}^{(i)}={\bf f}_{\tau}^{(i)}$
for $i=1,\ldots,m$ and
${\bf w}_{\tau}^{(0)}=\tau,$\ \
$i_1,\ldots,i_k = 0, 1,\ldots,m,$
$$
\int\limits\ \hbox{and}\ \int\limits^{*}
$$
\noindent
denote Ito and
Stratonovich stochastic integrals,
respectively (in this paper,
we use the definition of the Stratonovich stochastic integral from \cite{KlPl2}).
Note that $\psi_l(\tau)\equiv 1$ $(l=1,\ldots,k)$ and
$i_1,\ldots,i_k = 0, 1,\ldots,m$ in the classical Taylor--Ito
and Taylor--Stratonovich expansions
\cite{KlPl2}-\cite{KlPl1}. At the same time
$\psi_l(\tau)\equiv (t-\tau)^{q_l}$ ($l=1,\ldots,k$;
$q_1,\ldots,q_k=0, 1, 2,\ldots $) and $i_1,\ldots,i_k = 1,\ldots,m$
in the unified Taylor--Ito
and Taylor--Stratonovich expansions
\cite{kk5}-\cite{xxx333}.
The effective solution
of the problem of
combined mean-square approximation of collections
of the iterated Ito and Stratonovich stochastic integrals
(\ref{ito}), (\ref{str}) of multiplicities 1 to 5 and beyond
constitutes the subject of this article.
We briefly mention that there are
two main convergence criteria for numerical methods
for Ito SDEs \cite{KlPl2}-\cite{Mi3}:
a strong (mean-square)
criterion and a
weak criterion, in which the object of approximation is not the solution
of the Ito SDE itself but rather the
distribution of the Ito SDE solution.
Using strong numerical methods, we can build
sample paths
of Ito SDEs numerically.
These methods require the combined mean-square approximation of collections
of the iterated Ito and Stratonovich stochastic integrals
(\ref{ito}) and (\ref{str}).
Strong numerical methods are used when constructing new mathematical
models on the basis of Ito SDEs, when
solving the problem of filtering a signal under the influence
of random disturbances in various settings,
when solving the problem of stochastic
optimal control, when testing procedures
for estimating parameters of stochastic
systems, etc. \cite{KlPl2}-\cite{KPS}.
The problem of effective joint numerical modeling
(with respect to the mean-square convergence criterion) of the iterated
Ito and Stratonovich stochastic integrals
(\ref{ito}) and (\ref{str}) is
difficult from
both the theoretical and computational points of view \cite{KlPl2}-\cite{KPS},
\cite{2006}-\cite{Rybakov1000}.
The only exception is the narrow particular case when
$i_1=\ldots=i_k\ne 0$ and
$\psi_1(s),\ldots,\psi_k(s)\equiv \psi(s)$.
This case can be
investigated using the Ito formula
\cite{KlPl2}-\cite{Mi3}.
Note that even under the mentioned coincidence ($i_1=\ldots=i_k\ne 0$),
but with different
functions $\psi_1(s),\ldots,\psi_k(s)$, the mentioned
difficulties persist, and
relatively simple families of
iterated Ito and Stratonovich stochastic integrals,
which are often
encountered in applications, cannot be represented effectively in a finite
form (for the mean-square approximation) using a system of standard
Gaussian random variables.
Note that for a number of special types of Ito SDEs
the problem of approximation of iterated
stochastic integrals can be simplified, although not solved completely. Equations
with additive vector noise, with additive or non-additive scalar
noise, or with a small parameter belong to such
types of equations \cite{KlPl2}-\cite{Mi3}.
For the mentioned types of equations, simplifications
are connected with the fact that either some coefficient functions
from stochastic analogues of the Taylor formula
(Taylor--Ito and Taylor--Stratonovich expansions)
identically equal to zero,
or scalar noise has an essential effect, or due to the presence
of a small parameter we can neglect some members from stochastic
analogues of the Taylor formula, which include difficult for approximation
iterated stochastic integrals \cite{KlPl2}-\cite{Mi3}.
In this article, we consider Ito SDEs
with multidimensional and non-additive noise.
The conditions of commutativity of the noise \cite{KlPl2}
are also not used.
It might seem that iterated stochastic integrals can be approximated by multiple
integral sums of different types \cite{Mi2}, \cite{Mi3}, \cite{allen}.
However, this approach implies partitioning the interval
of integration $[t, T]$ of the iterated stochastic integrals
(the length $T-t$ of this interval is a small
value, because it is the integration step of numerical methods for
Ito SDEs), and according to numerical
experiments this additional partitioning leads to significant computational
costs \cite{2006}.
In \cite{Mi2} (also see \cite{KlPl2}, \cite{Mi3})
Milstein G.N. proposed to expand (\ref{ito}) or (\ref{str})
into iterated series of products
of standard Gaussian random variables by representing the Wiener
process as a trigonometric Fourier series with random coefficients
(the version of the so-called Karhunen--Loeve expansion for
the Brownian bridge process).
For example,
to obtain the Milstein expansion of (\ref{str}), the truncated Fourier
expansions of components of the Wiener process ${\bf f}_s$ must be
iteratively substituted in the single integrals, and the integrals
must be calculated, starting from the innermost integral.
This is a complicated procedure that does not lead to a general
expansion of (\ref{str}) valid for an arbitrary multiplicity $k.$
For this reason, only expansions of single, double, and triple
stochastic integrals (\ref{ito}) and (\ref{str}) were presented
in \cite{KlPl2} (the integrals (\ref{str}) for $k=1, 2, 3$)
and in \cite{Mi2}, \cite{Mi3} (the integrals (\ref{ito}) for $k=1, 2$)
for the simplest case $\psi_1(s), \psi_2(s), \psi_3(s)\equiv 1;$
$i_1, i_2, i_3=0,1,\ldots,m.$ Moreover, the Milstein
approach \cite{Mi2} leads to iterated application
of the operation of limit transition (see above).
It should be noted that the authors of the works
\cite{KlPl2}
(Sect.~5.8, pp.~202--204), \cite{KPS} (pp.~82-84),
\cite{KPW} (pp.~438-439),
\cite{Zapad-9} (pp.~263-264) use
the Wong--Zakai approximation
\cite{W-Z-1}-\cite{Watanabe} (without rigorous proof) within the framework
of the method of expansion of iterated stochastic integrals
\cite{Mi2} (1988) based on the series expansion
of the Brownian bridge process (version
of the so-called Karhunen-Loeve expansion).
See discussions in \cite{2018a} (Sect.~2.18, 6.2),
\cite{xxx333} (Sect.~2.6.2, 6.2),
\cite{arxiv-1} (Sect.~11),
\cite{arxiv-3} (Sect.~8),
\cite{arxiv-4} (Sect.~11),
\cite{arxiv-5} (Sect.~6),
\cite{arxiv-7} (Sect.~6)
for details.
Note that in \cite{rr} the method of expansion
of iterated (double) Ito stochastic integrals (\ref{ito})
($k=2;$ $\psi_1(s), \psi_2(s) \equiv 1;$ $i_1, i_2 =1,\ldots,m$)
based on expansion
of the Wiener process using Haar functions and
trigonometric functions has been considered.
The restrictions of the method \cite{rr} are also connected
with iterated application of the operation
of limit transition (as in the Milstein approach \cite{Mi2} (1988))
at least starting from the third
multiplicity of iterated stochastic integrals.
It is necessary to note that the Milstein approach \cite{Mi2}
outperforms by several times or even by several orders of magnitude
the methods based on multiple integral sums
\cite{Mi2}, \cite{Mi3}, \cite{allen}
in terms of reduced computational cost.
An alternative strong approximation method was
proposed for (\ref{str}) in \cite{300a}, \cite{400a}
(also see \cite{2011-2}-\cite{xxx333},
\cite{2010-1}-\cite{2013}, \cite{arxiv-23}),
where $J^{*}[\psi^{(k)}]_{T,t}$ was represented as a multiple stochastic
integral
of a certain discontinuous nonrandom function of $k$ variables, and this
function
was then expanded into an iterated generalized Fourier series in complete
systems of continuous functions that are orthonormal in the space
$L_2([t, T]).$
In \cite{300a}, \cite{400a}
(also see \cite{2011-2}-\cite{xxx333},
\cite{2010-1}-\cite{2013}, \cite{arxiv-23})
the cases of Legendre polynomials and
trigonometric functions are considered in detail.
As a result,
the general iterated series expansion of (\ref{str}) in terms of products
of standard Gaussian random variables was obtained in
\cite{300a}, \cite{400a}
(also see \cite{2011-2}-\cite{xxx333},
\cite{2010-1}-\cite{2013}, \cite{arxiv-23})
for an arbitrary multiplicity $k.$
Hereinafter, this method is referred to as the method of generalized
iterated Fourier series.
It was shown in \cite{300a}, \cite{400a}
(also see \cite{2011-2}-\cite{xxx333},
\cite{2010-1}-\cite{2013}, \cite{arxiv-23}) that the method of
generalized iterated Fourier series leads to the
Milstein expansion \cite{Mi2} of (\ref{str})
in the case of trigonometric
functions
and to a substantially simpler expansion of (\ref{str}) in the case
of Legendre polynomials.
Note that the method of generalized
iterated Fourier series as well as the Milstein approach
\cite{Mi2}
lead to iterated application of the operation of limit transition.
As mentioned above, this problem appears for iterated (triple)
stochastic integrals
($i_1, i_2, i_3=1,\ldots,m$)
or even for some iterated (double) stochastic integrals
in the case, when $\psi_1(s),$
$\psi_2(s)\not\equiv 1$ ($i_1, i_2=1,\ldots,m$)
\cite{2006} (also see \cite{2011-2}-\cite{200a},
\cite{301a}-\cite{arxiv-12},
\cite{arxiv-24}-\cite{arxiv-6}).
The mentioned problem (iterated application of the operation
of limit transition) does not appear
in the efficient method, which
is considered for (\ref{ito}) in Theorems 1 and 2 (see below)
\cite{2006}-\cite{200a},
\cite{301a}-\cite{arxiv-12},
\cite{arxiv-24}-\cite{new-art-1-xxy}.
The idea of this method is as follows:
the iterated Ito stochastic
integral (\ref{ito}) of multiplicity $k$ is represented as
a multiple stochastic
integral of a certain discontinuous nonrandom function of $k$ variables
defined on the hypercube $[t, T]^k$, where $[t, T]$ is the interval of
integration of the iterated Ito stochastic
integral (\ref{ito}). Then,
the indicated
nonrandom function is expanded in the hypercube $[t, T]^k$
into the generalized
multiple Fourier series converging
in the mean-square sense
in the space
$L_2([t,T]^k)$. After a number of nontrivial transformations we come
(see Theorems 1, 2 below) to the
mean-square convergent expansion of
the iterated Ito stochastic
integral (\ref{ito})
into the multiple
series of products
of standard Gaussian random
variables. The coefficients of this
series are the coefficients of
generalized multiple Fourier series for the mentioned nonrandom function
of $k$ variables, which can be calculated using the explicit formula
regardless of the multiplicity $k$ of the
iterated Ito stochastic
integral (\ref{ito}).
Hereinafter, this method is referred to as the method of generalized
multiple Fourier series.
Thus, we obtain the following useful possibilities
of the method of generalized multiple Fourier series.
1. There is an explicit formula (see (\ref{ppppa}) below) for calculation
of expansion coefficients
of the iterated Ito stochastic integral (\ref{ito}) with any
fixed multiplicity $k$.
2. We have possibilities for exact calculation of the mean-square
error of approximation
of the iterated Ito stochastic integral (\ref{ito})
\cite{2017}-\cite{xxx333}, \cite{17a}, \cite{arxiv-2}.
3. Since the multiple Fourier series used here
is generalized in the sense
that it is constructed using various complete orthonormal
systems of functions in the space $L_2([t, T])$, we
have new possibilities
for approximation --- we can
use not only trigonometric functions, as in \cite{KlPl2}-\cite{Mi3},
but also Legendre polynomials.
4. As it turned out \cite{2006}-\cite{200a},
\cite{301a}-\cite{arxiv-12},
\cite{arxiv-24}-\cite{arxiv-6xx}, it is more convenient to work
with Legendre polynomials for constructing approximations
of the iterated Ito stochastic integrals (\ref{ito}).
Approximations based on the Legendre polynomials are essentially simpler
than their analogues based on the trigonometric functions.
Other advantages of the application of Legendre polynomials
in the framework of the mentioned problem are considered
in \cite{2018a}-\cite{xxx333}, \cite{29a}, \cite{301a}.
5. An approach based on the Karhunen--Loeve expansion
of the Brownian bridge process (also see \cite{rr})
leads to
iterated application of the operation of limit
transition (the operation of limit transition
is implemented only once in Theorems 1, 2 (see below))
starting from
the second multiplicity (in the general case)
and third multiplicity (for the case
$\psi_1(s), \psi_2(s), \psi_3(s)\equiv 1;$
$i_1, i_2, i_3=1,\ldots,m$)
of iterated Ito stochastic integrals.
Multiple series (the operation of limit transition
is implemented only once) are more convenient
for approximation than the iterated ones
(iterated application of the operation of limit
transition),
since partial sums of multiple series converge for any possible case of
convergence to infinity of their upper limits of summation
(let us denote them by $p_1,\ldots, p_k$),
for example,
when $p_1=\ldots=p_k=p\to\infty$.
For iterated series, the condition $p_1=\ldots=p_k=p\to\infty$ obviously
does not guarantee the convergence of this series.
However, the authors of the works
\cite{KlPl2}
(Sect.~5.8, pp.~202--204), \cite{KPS} (pp.~82-84),
\cite{KPW} (pp.~438-439),
\cite{Zapad-9} (pp.~263-264) use
the condition $p_1=p_2=p_3=p\to\infty$
together with the Wong--Zakai approximation
\cite{W-Z-1}-\cite{Watanabe} (but without rigorous proof) within the framework
of the method of expansion of iterated stochastic integrals
\cite{Mi2} (1988) based on the series expansion
of the Brownian bridge process.
See discussions in \cite{2018a} (Sect.~2.18, 6.2),
\cite{xxx333} (Sect.~2.6.2, 6.2),
\cite{arxiv-1} (Sect.~11),
\cite{arxiv-3} (Sect.~8),
\cite{arxiv-4} (Sect.~11),
\cite{arxiv-5} (Sect.~6),
\cite{arxiv-7} (Sect.~6)
for details.
As it turned out, Theorems 1, 2 can be adapted for the iterated
Stratonovich stochastic integrals (\ref{str}) at least
for multiplicities 1 to 6 \cite{2011-2}-\cite{xxx333},
\cite{2010-2}-\cite{2013}, \cite{30a}, \cite{300a},
\cite{400a}, \cite{271a},
\cite{arxiv-4}-\cite{arxiv-8}, \cite{arxiv-23}, \cite{arxiv-6}, \cite{new-art-1-xxy}.
Expansions of these iterated Stratonovich
stochastic integrals turned out to be
much simpler (see Theorems 4--10 below) than the corresponding expansions
of the iterated Ito stochastic integrals (\ref{ito}) from Theorems 1, 2.
\section{Explicit One-Step Strong Numerical Schemes With Orders 2.0 and 2.5 for Ito SDEs
Based on the Unified Taylor--Stratonovich Expansion}
Consider the partition $\{\tau_j\}_{j=0}^N$ of the interval $[0, T]$ such that
$$
0=\tau_0<\ldots <\tau_N=T,\ \ \
\Delta_N=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm max}\cr
$\stackrel{}{{}_{0\le j\le N-1}}$\cr
}} }\Delta\tau_j,\ \ \ \Delta\tau_j=\tau_{j+1}-\tau_j.
$$
Let ${\bf y}_{\tau_j}\stackrel{\sf def}{=}
{\bf y}_{j},$\ $j=0, 1,\ldots,N$ be a time discrete approximation
of the process ${\bf x}_t,$ $t\in[0,T],$ which is a solution of the Ito
SDE (\ref{1.5.2}).
{\bf Definition 1}\ \cite{KlPl2}.\
{\it We will say that a time discrete approximation
${\bf y}_{j}$\ $(j=0, 1,\ldots,N)$
corresponding to the maximal step of discretization $\Delta_N,$
converges strongly with order
$\gamma>0$ at time moment
$T$ to the process ${\bf x}_t,$ $t\in[0,T]$,
if there exists a constant $C>0,$ which does not depend on
$\Delta_N,$ and a $\delta>0$ such that
$$
{\sf M}\{|{\bf x}_T-{\bf y}_T|\}\le
C(\Delta_N)^{\gamma}
$$
\noindent
for each $\Delta_N\in(0, \delta).$}
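Definition 1 can be illustrated numerically on a simple test equation. The following Python sketch is an illustration only (the test SDE, its parameters, and the Euler--Maruyama scheme are chosen for simplicity and are unrelated to the scheme (\ref{4.470}) considered below); it estimates the strong order $\gamma$ empirically for a scalar geometric Brownian motion, whose exact solution is known:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Test SDE dX = a X dt + b X dW with known exact solution (geometric Brownian motion)
a, b, X0, T = 1.0, 0.5, 1.0, 1.0
n_paths = 20000

def strong_error(N):
    dt = T / N
    X = np.full(n_paths, X0)
    W = np.zeros(n_paths)
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + a * X * dt + b * X * dW          # Euler-Maruyama step
        W = W + dW
    X_exact = X0 * np.exp((a - 0.5 * b ** 2) * T + b * W)
    return np.mean(np.abs(X - X_exact))          # M{|x_T - y_T|}

Ns = [2 ** k for k in range(4, 10)]
errs = [strong_error(N) for N in Ns]
# Slope of log(error) versus log(dt) estimates the strong order gamma
gamma_hat = np.polyfit(np.log([T / N for N in Ns]), np.log(errs), 1)[0]
print(f"estimated strong order: {gamma_hat:.2f}")
\end{verbatim}
The estimated slope should be close to $1/2$, the well-known strong order of the Euler--Maruyama scheme for equations with multiplicative noise; schemes of higher strong order, such as (\ref{4.470}), require the approximation of iterated stochastic integrals.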
Consider the explicit one-step strong numerical scheme with order 2.5 for Ito SDEs
based on the so-called unified Taylor--Stratonovich
expansion \cite{kk6}-\cite{2010-1}, \cite{arxiv-6xx}
$$
{\bf y}_{p+1}={\bf y}_p+\sum_{i_{1}=1}^{m}B_{i_{1}}
\hat I_{(0)\tau_{p+1},\tau_p}^{*(i_{1})}+\Delta \bar {\bf a}
+\sum_{i_{1},i_{2}=1}^{m}G_{i_{2}}
B_{i_{1}}\hat I_{(00)\tau_{p+1},\tau_p}^{*(i_{2}i_{1})}+
$$
$$
+
\sum_{i_{1}=1}^{m}\Biggl(G_{i_{1}}\bar {\bf a}\left(
\Delta \hat I_{(0)\tau_{p+1},\tau_p}^{*(i_{1})}+
\hat I_{(1)\tau_{p+1},\tau_p}^{*(i_{1})}\right)
-\bar LB_{i_{1}}\hat I_{(1)\tau_{p+1},\tau_p}^{*(i_{1})}\Biggr)+
$$
$$
+\sum_{i_{1},i_{2},i_{3}=1}^{m} G_{i_{3}}G_{i_{2}}
B_{i_{1}}\hat I_{(000)\tau_{p+1},\tau_p}^{*(i_{3}i_{2}i_{1})}+
\frac{\Delta^2}{2}\bar L\bar {\bf a}+
$$
$$
+\sum_{i_{1},i_{2}=1}^{m}
\Biggl(G_{i_{2}}\bar LB_{i_{1}}\left(
\hat I_{(10)\tau_{p+1},\tau_p}^{*(i_{2}i_{1})}-
\hat I_{(01)\tau_{p+1},\tau_p}^{*(i_{2}i_{1})}
\right)
-\bar LG_{i_{2}}B_{i_{1}}\hat I_{(10)\tau_{p+1},\tau_p}^{*(i_{2}i_{1})}
+\Biggr.
$$
$$
\Biggl.+G_{i_{2}}G_{i_{1}}\bar {\bf a}\left(
\hat I_{(01)\tau_{p+1},\tau_p}
^{*(i_{2}i_{1})}+\Delta \hat I_{(00)\tau_{p+1},\tau_p}^{*(i_{2}i_{1})}
\right)\Biggr)+
$$
$$
+
\sum_{i_{1},i_{2},i_{3},i_{4}=1}^{m}G_{i_{4}}G_{i_{3}}G_{i_{2}}
B_{i_{1}}\hat I_{(0000)\tau_{p+1},\tau_p}^{*(i_{4}i_{3}i_{2}i_{1})}+
\frac{\Delta^3}{6}LL{\bf a}+
$$
$$
+\sum_{i_{1}=1}^{m}\Biggl(G_{i_{1}}\bar L\bar {\bf a}\left(\frac{1}{2}
\hat I_{(2)\tau_{p+1},\tau_p}^{*(i_{1})}+
\Delta \hat I_{(1)\tau_{p+1},\tau_p}^{*(i_{1})}+
\frac{\Delta^2}{2}\hat I_{(0)\tau_{p+1},\tau_p}^{*(i_{1})}\right)\Biggr.+
$$
$$
\Biggl.+\frac{1}{2}\bar L
\bar L B_{i_{1}}\hat I_{(2)\tau_{p+1},\tau_p}^{*(i_{1})}-
LG_{i_{1}}\bar {\bf a}\left(\hat I_{(2)\tau_{p+1},\tau_p}^{*(i_{1})}+
\Delta \hat I_{(1)\tau_{p+1},\tau_p}^{*(i_{1})}\right)\Biggr)+
$$
$$
+
\sum_{i_{1},i_{2},i_{3}=1}^m\Biggl(
G_{i_{3}}\bar LG_{i_{2}}B_{i_{1}}
\left(\hat I_{(100)\tau_{p+1},\tau_p}
^{*(i_{3}i_{2}i_{1})}-\hat I_{(010)\tau_{p+1},\tau_p}
^{*(i_{3}i_{2}i_{1})}\right)
\Biggr.+
$$
$$
+G_{i_{3}}G_{i_{2}}\bar LB_{i_{1}}\left(
\hat I_{(010)\tau_{p+1},\tau_p}^{*(i_{3}i_{2}i_{1})}-
\hat I_{(001)\tau_{p+1},\tau_p}^{*(i_{3}i_{2}i_{1})}\right)+
$$
$$
+
G_{i_{3}}G_{i_{2}}G_{i_{1}}\bar {\bf a}
\left(\Delta \hat I_{(000)\tau_{p+1},\tau_p}^{*(i_{3}i_{2}i_{1})}+
\hat I_{(001)\tau_{p+1},\tau_p}^{*(i_{3}i_{2}i_{1})}\right)
-
$$
$$
\Biggl.-\bar LG_{i_{3}}G_{i_{2}}B_{i_{1}}
\hat I_{(100)\tau_{p+1},\tau_p}^{*(i_{3}i_{2}i_{1})}\Biggr)+
$$
\begin{equation}
\label{4.470}
+\sum_{i_{1},i_{2},i_{3},i_{4},i_{5}=1}^m
G_{i_{5}}G_{i_{4}}G_{i_{3}}G_{i_{2}}B_{i_{1}}
\hat I_{(00000)\tau_{p+1},\tau_p}^{*(i_{5}i_{4}i_{3}i_{2}i_{1})},
\end{equation}
\noindent
where $\Delta=T/N$ $(N>1)$ is a constant (for simplicity)
step of integration,\
$\tau_p=p\Delta$ $(p=0, 1,\ldots,N)$,\
$\hat I_{(l_1\ldots l_k)s,t}^{*(i_1\ldots i_k)}$ is an
approximation of the iterated
Stratonovich stochastic integral
\begin{equation}
\label{str11}
I_{(l_1\ldots \hspace{0.2mm}l_k)s,t}^{*(i_1\ldots i_k)}
=
{\int\limits_t^{*}}^s
(t-t_k)^{l_k} \ldots {\int\limits_t^{*}}^{t_{2}}
(t-t_1)^{l_1} d{\bf f}_{t_1}^{(i_1)}\ldots
d{\bf f}_{t_k}^{(i_k)},
\end{equation}
\noindent
where $i_1,\ldots, i_k=1,\dots,m,$\ \ $l_1,\ldots,l_k=0, 1, 2,$\ \
$k=1, 2,\ldots, 5,$
$$
\bar{\bf a}({\bf x},t)={\bf a}({\bf x},t)-
\frac{1}{2}\sum\limits_{j=1}^m G_jB_j({\bf x},t),
$$
$$
\bar L=L-\frac{1}{2}\sum\limits_{j=1}^m G_j G_j,
$$
$$
L= {\partial \over \partial t}
+ \sum^ {n} _ {i=1} {\bf a}_i ({\bf x}, t)
{\partial \over \partial {\bf x}_i}
+ {1\over 2} \sum^ {m} _ {j=1} \sum^ {n} _ {l,i=1}
B_{lj} ({\bf x}, t) B_{ij} ({\bf x}, t) {\partial
^{2} \over \partial {\bf x}_l \partial {\bf x}_i},
$$
$$
G_i = \sum^ {n} _ {j=1} B_{ji} ({\bf x}, t)
{\partial \over \partial {\bf x}_j}\ ,\ \ \ i=1,\ldots,m,
$$
\noindent
$B_i$ and $B_{ij}$ are the $i$th column and the $ij$th
element of the matrix function $B$,
${\bf a}_i$ is the $i$th element of the vector function ${\bf a},$
${\bf x}_i$ is the $i$th element
of the column ${\bf x}$,
the functions
$$
B_{i_{1}},\ \bar {\bf a},\ G_{i_{2}}B_{i_{1}},\
G_{i_{1}}\bar {\bf a},\ \bar LB_{i_{1}},\ G_{i_{3}}G_{i_{2}}B_{i_{1}},\
\bar L\bar {\bf a},\ LL{\bf a},\
G_{i_{2}}\bar LB_{i_{1}},\
$$
$$
\bar LG_{i_{2}}B_{i_{1}},\ G_{i_{2}}G_{i_{1}}\bar{\bf a},\
G_{i_{4}}G_{i_{3}}G_{i_{2}}B_{i_{1}},\ G_{i_{1}}\bar L\bar{\bf a},\
\bar L\bar LB_{i_{1}},\ \bar LG_{i_{1}}\bar {\bf a},\
G_{i_{3}}\bar LG_{i_{2}}B_{i_{1}},\
G_{i_{3}}G_{i_{2}}\bar LB_{i_{1}},\
$$
$$
G_{i_{3}}G_{i_{2}}G_{i_{1}}\bar {\bf a},\
\bar LG_{i_{3}}G_{i_{2}}B_{i_{1}},\
G_{i_{5}}G_{i_{4}}G_{i_{3}}G_{i_{2}}B_{i_{1}}
$$
\noindent
are calculated at the point $({\bf y}_p,p).$
It is well known that
under the standard conditions \cite{KlPl2}, \cite{2006} the numerical
scheme (\ref{4.470}) has
strong order of convergence
2.5. The major emphasis below will be placed on the
approximation of the iterated
Stratonovich stochastic integrals appearing in (\ref{4.470}).
Therefore, among
the standard conditions, we note the
following appro\-xi\-ma\-ti\-on condi\-ti\-on for these
stochastic integrals \cite{KlPl2}, \cite{2006}
\begin{equation}
\label{ors}
{\sf M}\biggl\{\biggl(I_{(l_{1}\ldots\hspace{0.2mm} l_{k})\tau_{p+1},\tau_p}
^{*(i_{1}\ldots i_{k})}
- \hat I_{(l_{1}\ldots\hspace{0.2mm} l_{k})\tau_{p+1},
\tau_p}^{*(i_{1}\ldots i_{k})}
\biggr)^2\biggr\}\le C\Delta^{6},
\end{equation}
\noindent
where constant $C$ is independent of
$\Delta$.
Note that if we exclude from (\ref{4.470}) the terms starting from the
term $\Delta^3 LL{\bf a}/6$, then we have the explicit
one-step strong numerical scheme with order 2.0 \cite{KlPl2},
\cite{2006}, \cite{2017-1}-\cite{2010-1}.
Using the numerical scheme (\ref{4.470}) or its modifications based
on the classical Taylor--Stratonovich expansion \cite{KlPl1},
the implicit or multistep analogues of (\ref{4.470}) can be constructed
\cite{KlPl2}, \cite{2006}, \cite{2017-1}-\cite{2010-1}. The set of the
iterated Stratonovich
stochastic integrals to be approximated for implementing
these modifications is the same
as for the numerical scheme (\ref{4.470}) itself.
Interestingly, the truncated unified Taylor--Stratonovich expansion \cite{kk6}
(the
foundation of the numerical
scheme (\ref{4.470})) contains only $12$
different types of the iterated Stratonovich
stochastic integrals
(\ref{str11}), which cannot be
interconnected by linear relations \cite{2006}, \cite{2017-1}-\cite{2010-1}.
The analogous
classical Taylor--Stratonovich expansion \cite{KlPl2}, \cite{KlPl1} contains
$17$ different types of iterated Stratonovich
stochastic integrals, part of which
are interconnected by linear relations
and part of which have a higher multiplicity than the iterated
Stratonovich stochastic integrals (\ref{str11}). This
fact explains the choice of the numerical scheme (\ref{4.470}).
One of the main problems arising in the implementation of the
numerical scheme (\ref{4.470}) is the joint
numerical modeling of the iterated Stratonovich stochastic integrals
figuring in (\ref{4.470}).
\section{Expansions
of Iterated Ito
Stochastic Integrals
(Method of Generalized Multiple Fourier Series)}
An efficient numerical modeling method for iterated Ito
stochastic integrals based on generalized multiple
Fourier series was considered in \cite{2006} (also see
\cite{2011-2}-\cite{200a},
\cite{301a}-\cite{new-art-1-xxy}).
This method rests on important
results presented below (Theorems 1, 2).
Suppose that every $\psi_l(\tau)$ $(l=1,\ldots,k)$ is a function from the space $L_2([t, T])$.
Define the following function on the hypercube $[t, T]^k$
\begin{equation}
\label{ppp}
K(t_1,\ldots,t_k)=
\begin{cases}
\psi_1(t_1)\ldots \psi_k(t_k)\ &\hbox{for}\ \ t_1<\ldots<t_k\\
~\\
~\\
0\ &\hbox{otherwise}
\end{cases},\ \ \ \ t_1,\ldots,t_k\in[t, T],\ \ \ \ k\ge 2,
\end{equation}
\noindent
and
$K(t_1)\equiv\psi_1(t_1)$ for $t_1\in[t, T].$
Suppose that $\{\phi_j(x)\}_{j=0}^{\infty}$
is a complete orthonormal system of functions in the space
$L_2([t, T])$.
The function $K(t_1,\ldots,t_k)$ belongs to the space $L_2([t, T]^k).$
In this situation, it is well known that the generalized
multiple Fourier series
of $K(t_1,\ldots,t_k)\in L_2([t, T]^k)$ converges
to $K(t_1,\ldots,t_k)$ in the hypercube $[t, T]^k$ in
the mean-square sense, i.e.
$$
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm lim}\cr
$\stackrel{}{{}_{p_1,\ldots,p_k\to \infty}}$\cr
}} }\Biggl\Vert
K(t_1,\ldots,t_k)-
\sum_{j_1=0}^{p_1}\ldots \sum_{j_k=0}^{p_k}
C_{j_k\ldots j_1}\prod_{l=1}^{k} \phi_{j_l}(t_l)
\Biggr\Vert_{L_2([t,T]^k)}=0,
$$
\noindent
where
\begin{equation}
\label{ppppa}
C_{j_k\ldots j_1}=\int\limits_{[t,T]^k}
K(t_1,\ldots,t_k)\prod_{l=1}^{k}\phi_{j_l}(t_l)dt_1\ldots dt_k
\end{equation}
\noindent
is the Fourier coefficient,
$$
\left\Vert f\right\Vert_{L_2([t,T]^k)}=\left(\int\limits_{[t,T]^k}
f^2(t_1,\ldots,t_k)dt_1\ldots dt_k\right)^{1/2}.
$$
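For the case of Legendre polynomials, the Fourier coefficients (\ref{ppppa}) can be computed by direct numerical integration. The following Python sketch is an illustration only (it assumes NumPy/SciPy, and the interval $[t,T]=[0,1]$ is chosen arbitrarily); it evaluates $C_{j_2 j_1}$ for $k=2$ and $\psi_1(\tau)\equiv\psi_2(\tau)\equiv 1$:
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import dblquad

t, T = 0.0, 1.0            # illustrative integration interval [t, T]
Delta = T - t

def phi(j, s):
    # Orthonormal Legendre system on [t, T]
    return np.sqrt((2 * j + 1) / Delta) * eval_legendre(j, 2 * (s - t) / Delta - 1)

def C(j2, j1):
    # Coefficient (ppppa) for k = 2, psi_1 = psi_2 = 1: the kernel K vanishes
    # unless t1 < t2, so we integrate over the ordered region t < t1 < t2 < T.
    val, _ = dblquad(lambda t1, t2: phi(j1, t1) * phi(j2, t2),
                     t, T, lambda t2: t, lambda t2: t2)
    return val

for j2 in range(3):
    print([round(C(j2, j1), 6) for j1 in range(3)])
\end{verbatim}
For instance, $C_{00}=(T-t)/2$ follows immediately from (\ref{ppppa}), and the printed matrix reproduces this value in its upper-left entry.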
Consider the partition $\{\tau_j\}_{j=0}^N$ of $[t,T]$ such that
\begin{equation}
\label{1111}
t=\tau_0<\ldots <\tau_N=T,\ \ \
\Delta_N=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm max}\cr
$\stackrel{}{{}_{0\le j\le N-1}}$\cr
}} }\Delta\tau_j\to 0\ \ \hbox{if}\ \ N\to \infty,\ \ \
\Delta\tau_j=\tau_{j+1}-\tau_j.
\end{equation}
{\bf Theorem 1}\ \cite{2006} (2006), \cite{2011-2}-\cite{200a},
\cite{301a}-\cite{new-art-1-xxy}.\
{\it Suppose that
every $\psi_l(\tau)$ $(l=1,\ldots, k)$ is a continuous
nonrandom func\-tion on
$[t, T]$ and
$\{\phi_j(x)\}_{j=0}^{\infty}$ is a complete orthonormal system
of continuous func\-ti\-ons in the space $L_2([t,T]).$ Then
$$
J[\psi^{(k)}]_{T,t}\ =\
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,\ldots,p_k\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\ldots\sum_{j_k=0}^{p_k}
C_{j_k\ldots j_1}\Biggl(
\prod_{l=1}^k\zeta_{j_l}^{(i_l)}\ -
\Biggr.
$$
\begin{equation}
\label{tyyy}
-\ \Biggl.
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{N\to \infty}}$\cr
}} }\sum_{(l_1,\ldots,l_k)\in {\rm G}_k}
\phi_{j_{1}}(\tau_{l_1})
\Delta{\bf w}_{\tau_{l_1}}^{(i_1)}\ldots
\phi_{j_{k}}(\tau_{l_k})
\Delta{\bf w}_{\tau_{l_k}}^{(i_k)}\Biggr),
\end{equation}
\noindent
where $J[\psi^{(k)}]_{T,t}$ is defined by {\rm (\ref{ito}),}
$$
{\rm G}_k={\rm H}_k\backslash{\rm L}_k,\ \ \
{\rm H}_k=\{(l_1,\ldots,l_k):\ l_1,\ldots,l_k=0,\ 1,\ldots,N-1\},
$$
$$
{\rm L}_k=\{(l_1,\ldots,l_k):\ l_1,\ldots,l_k=0,\ 1,\ldots,N-1;\
l_g\ne l_r\ (g\ne r);\ g, r=1,\ldots,k\},
$$
\noindent
${\rm l.i.m.}$ is a limit in the mean-square sense$,$
$i_1,\ldots,i_k=0,1,\ldots,m,$
\begin{equation}
\label{rr23}
\zeta_{j}^{(i)}=
\int\limits_t^T \phi_{j}(s) d{\bf w}_s^{(i)}
\end{equation}
\noindent
are independent standard Gaussian random variables
for various
$i$ or $j$ {\rm(}in the case when $i\ne 0${\rm),}
$C_{j_k\ldots j_1}$ is the Fourier coefficient {\rm(\ref{ppppa}),}
$\Delta{\bf w}_{\tau_{j}}^{(i)}=
{\bf w}_{\tau_{j+1}}^{(i)}-{\bf w}_{\tau_{j}}^{(i)}$
$(i=0, 1,\ldots,m),$
$\left\{\tau_{j}\right\}_{j=0}^{N}$ is a partition of
the interval $[t, T],$ which satisfies the condition {\rm (\ref{1111})}.
}
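The random variables (\ref{rr23}) are convenient because, for a complete orthonormal system and a fixed $i\ne 0$, they form a sequence of independent standard Gaussian random variables. A minimal Python check of this property for the Legendre system (an illustration only, assuming NumPy/SciPy and a Wiener path discretized on a uniform grid) is the following:
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

rng = np.random.default_rng(1)
t, T = 0.0, 1.0
Delta = T - t
n_grid, n_paths, p = 1000, 20000, 4

s = np.linspace(t, T, n_grid + 1)
ds = Delta / n_grid
mid = 0.5 * (s[:-1] + s[1:])
Phi = np.array([np.sqrt((2 * j + 1) / Delta)
                * eval_legendre(j, 2 * (mid - t) / Delta - 1) for j in range(p)])

# zeta_j = int_t^T phi_j(s) dw_s, approximated on the grid for one component
dW = rng.normal(0.0, np.sqrt(ds), size=(n_paths, n_grid))
zeta = dW @ Phi.T                     # shape (n_paths, p)

# The sample covariance matrix should be close to the identity matrix,
# i.e. the zeta_j are (approximately) independent standard Gaussian variables.
print(np.round(np.cov(zeta, rowvar=False), 2))
\end{verbatim}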
It was shown in \cite{2007-2}-\cite{2013}
that Theorem 1 is valid for convergence
in the mean of degree $2n$ ($n\in \mathbb{N}$).
The convergence with probability 1 in Theorem 1
is proved in \cite{2018a}-\cite{xxx333}, \cite{arxiv-1}, \cite{OK1000}
for the cases of Legendre polynomials and trigonometric functions.
Moreover, the complete orthonormal systems of Haar and
Rademacher--Walsh functions in the space $L_2([t,T])$
can also be applied in Theorem 1
\cite{2006}-\cite{2013}.
The modification of Theorem 1 for
complete orthonormal systems of functions with weight $r(x)\ge 0$
in the space $L_2([t,T])$ can be found in
\cite{2018}, \cite{2018a}-\cite{xxx333}, \cite{arxiv-1}, \cite{arxiv-26b}.
Application of Theorem 1 and Theorem 2 (see below) for the mean-square
approximation of iterated stochastic integrals
with respect to the
infinite-dimensional $Q$-Wiener process can be found
in the monographs \cite{2018a}-\cite{xxx333} (Chapter 7) and in \cite{31a}, \cite{200a},
\cite{OK}, \cite{Kuzh-1}.
In order to evaluate the significance of Theorem 1 in practice, we will
demonstrate its transformed particular cases for
$k=1,\ldots,5$
\cite{2006}-\cite{200a}, \cite{301a}-\cite{new-art-1-xxy}
\begin{equation}
\label{a1}
J[\psi^{(1)}]_{T,t}
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}
C_{j_1}\zeta_{j_1}^{(i_1)},
\end{equation}
\begin{equation}
\label{a2}
J[\psi^{(2)}]_{T,t}
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,p_2\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\sum_{j_2=0}^{p_2}
C_{j_2j_1}\Biggl(\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}
-{\bf 1}_{\{i_1=i_2\ne 0\}}
{\bf 1}_{\{j_1=j_2\}}\Biggr),
\end{equation}
$$
J[\psi^{(3)}]_{T,t}=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,\ldots,p_3\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\sum_{j_2=0}^{p_2}\sum_{j_3=0}^{p_3}
C_{j_3j_2j_1}\Biggl(
\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
-\Biggr.
$$
\begin{equation}
\label{a3}
-\Biggl.
{\bf 1}_{\{i_1=i_2\ne 0\}}
{\bf 1}_{\{j_1=j_2\}}
\zeta_{j_3}^{(i_3)}
-{\bf 1}_{\{i_2=i_3\ne 0\}}
{\bf 1}_{\{j_2=j_3\}}
\zeta_{j_1}^{(i_1)}-
{\bf 1}_{\{i_1=i_3\ne 0\}}
{\bf 1}_{\{j_1=j_3\}}
\zeta_{j_2}^{(i_2)}\Biggr),
\end{equation}
$$
J[\psi^{(4)}]_{T,t}
=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,\ldots,p_4\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\ldots\sum_{j_4=0}^{p_4}
C_{j_4\ldots j_1}\Biggl(
\prod_{l=1}^4\zeta_{j_l}^{(i_l)}
\Biggr.
-
$$
$$
-
{\bf 1}_{\{i_1=i_2\ne 0\}}
{\bf 1}_{\{j_1=j_2\}}
\zeta_{j_3}^{(i_3)}
\zeta_{j_4}^{(i_4)}
-
{\bf 1}_{\{i_1=i_3\ne 0\}}
{\bf 1}_{\{j_1=j_3\}}
\zeta_{j_2}^{(i_2)}
\zeta_{j_4}^{(i_4)}-
$$
$$
-
{\bf 1}_{\{i_1=i_4\ne 0\}}
{\bf 1}_{\{j_1=j_4\}}
\zeta_{j_2}^{(i_2)}
\zeta_{j_3}^{(i_3)}
-
{\bf 1}_{\{i_2=i_3\ne 0\}}
{\bf 1}_{\{j_2=j_3\}}
\zeta_{j_1}^{(i_1)}
\zeta_{j_4}^{(i_4)}-
$$
$$
-
{\bf 1}_{\{i_2=i_4\ne 0\}}
{\bf 1}_{\{j_2=j_4\}}
\zeta_{j_1}^{(i_1)}
\zeta_{j_3}^{(i_3)}
-
{\bf 1}_{\{i_3=i_4\ne 0\}}
{\bf 1}_{\{j_3=j_4\}}
\zeta_{j_1}^{(i_1)}
\zeta_{j_2}^{(i_2)}+
$$
$$
+
{\bf 1}_{\{i_1=i_2\ne 0\}}
{\bf 1}_{\{j_1=j_2\}}
{\bf 1}_{\{i_3=i_4\ne 0\}}
{\bf 1}_{\{j_3=j_4\}}
+
$$
$$
+
{\bf 1}_{\{i_1=i_3\ne 0\}}
{\bf 1}_{\{j_1=j_3\}}
{\bf 1}_{\{i_2=i_4\ne 0\}}
{\bf 1}_{\{j_2=j_4\}}+
$$
\begin{equation}
\label{a4}
+\Biggl.
{\bf 1}_{\{i_1=i_4\ne 0\}}
{\bf 1}_{\{j_1=j_4\}}
{\bf 1}_{\{i_2=i_3\ne 0\}}
{\bf 1}_{\{j_2=j_3\}}\Biggr),
\end{equation}
$$
J[\psi^{(5)}]_{T,t}
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,\ldots,p_5\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\ldots\sum_{j_5=0}^{p_5}
C_{j_5\ldots j_1}\Biggl(
\prod_{l=1}^5\zeta_{j_l}^{(i_l)}
-\Biggr.
$$
$$
-
{\bf 1}_{\{i_1=i_2\ne 0\}}
{\bf 1}_{\{j_1=j_2\}}
\zeta_{j_3}^{(i_3)}
\zeta_{j_4}^{(i_4)}
\zeta_{j_5}^{(i_5)}-
{\bf 1}_{\{i_1=i_3\ne 0\}}
{\bf 1}_{\{j_1=j_3\}}
\zeta_{j_2}^{(i_2)}
\zeta_{j_4}^{(i_4)}
\zeta_{j_5}^{(i_5)}-
$$
$$
-
{\bf 1}_{\{i_1=i_4\ne 0\}}
{\bf 1}_{\{j_1=j_4\}}
\zeta_{j_2}^{(i_2)}
\zeta_{j_3}^{(i_3)}
\zeta_{j_5}^{(i_5)}-
{\bf 1}_{\{i_1=i_5\ne 0\}}
{\bf 1}_{\{j_1=j_5\}}
\zeta_{j_2}^{(i_2)}
\zeta_{j_3}^{(i_3)}
\zeta_{j_4}^{(i_4)}-
$$
$$
-
{\bf 1}_{\{i_2=i_3\ne 0\}}
{\bf 1}_{\{j_2=j_3\}}
\zeta_{j_1}^{(i_1)}
\zeta_{j_4}^{(i_4)}
\zeta_{j_5}^{(i_5)}-
{\bf 1}_{\{i_2=i_4\ne 0\}}
{\bf 1}_{\{j_2=j_4\}}
\zeta_{j_1}^{(i_1)}
\zeta_{j_3}^{(i_3)}
\zeta_{j_5}^{(i_5)}-
$$
$$
-
{\bf 1}_{\{i_2=i_5\ne 0\}}
{\bf 1}_{\{j_2=j_5\}}
\zeta_{j_1}^{(i_1)}
\zeta_{j_3}^{(i_3)}
\zeta_{j_4}^{(i_4)}
-{\bf 1}_{\{i_3=i_4\ne 0\}}
{\bf 1}_{\{j_3=j_4\}}
\zeta_{j_1}^{(i_1)}
\zeta_{j_2}^{(i_2)}
\zeta_{j_5}^{(i_5)}-
$$
$$
-
{\bf 1}_{\{i_3=i_5\ne 0\}}
{\bf 1}_{\{j_3=j_5\}}
\zeta_{j_1}^{(i_1)}
\zeta_{j_2}^{(i_2)}
\zeta_{j_4}^{(i_4)}
-{\bf 1}_{\{i_4=i_5\ne 0\}}
{\bf 1}_{\{j_4=j_5\}}
\zeta_{j_1}^{(i_1)}
\zeta_{j_2}^{(i_2)}
\zeta_{j_3}^{(i_3)}+
$$
$$
+
{\bf 1}_{\{i_1=i_2\ne 0\}}
{\bf 1}_{\{j_1=j_2\}}
{\bf 1}_{\{i_3=i_4\ne 0\}}
{\bf 1}_{\{j_3=j_4\}}\zeta_{j_5}^{(i_5)}+
{\bf 1}_{\{i_1=i_2\ne 0\}}
{\bf 1}_{\{j_1=j_2\}}
{\bf 1}_{\{i_3=i_5\ne 0\}}
{\bf 1}_{\{j_3=j_5\}}\zeta_{j_4}^{(i_4)}+
$$
$$
+
{\bf 1}_{\{i_1=i_2\ne 0\}}
{\bf 1}_{\{j_1=j_2\}}
{\bf 1}_{\{i_4=i_5\ne 0\}}
{\bf 1}_{\{j_4=j_5\}}\zeta_{j_3}^{(i_3)}+
{\bf 1}_{\{i_1=i_3\ne 0\}}
{\bf 1}_{\{j_1=j_3\}}
{\bf 1}_{\{i_2=i_4\ne 0\}}
{\bf 1}_{\{j_2=j_4\}}\zeta_{j_5}^{(i_5)}+
$$
$$
+
{\bf 1}_{\{i_1=i_3\ne 0\}}
{\bf 1}_{\{j_1=j_3\}}
{\bf 1}_{\{i_2=i_5\ne 0\}}
{\bf 1}_{\{j_2=j_5\}}\zeta_{j_4}^{(i_4)}+
{\bf 1}_{\{i_1=i_3\ne 0\}}
{\bf 1}_{\{j_1=j_3\}}
{\bf 1}_{\{i_4=i_5\ne 0\}}
{\bf 1}_{\{j_4=j_5\}}\zeta_{j_2}^{(i_2)}+
$$
$$
+
{\bf 1}_{\{i_1=i_4\ne 0\}}
{\bf 1}_{\{j_1=j_4\}}
{\bf 1}_{\{i_2=i_3\ne 0\}}
{\bf 1}_{\{j_2=j_3\}}\zeta_{j_5}^{(i_5)}+
{\bf 1}_{\{i_1=i_4\ne 0\}}
{\bf 1}_{\{j_1=j_4\}}
{\bf 1}_{\{i_2=i_5\ne 0\}}
{\bf 1}_{\{j_2=j_5\}}\zeta_{j_3}^{(i_3)}+
$$
$$
+
{\bf 1}_{\{i_1=i_4\ne 0\}}
{\bf 1}_{\{j_1=j_4\}}
{\bf 1}_{\{i_3=i_5\ne 0\}}
{\bf 1}_{\{j_3=j_5\}}\zeta_{j_2}^{(i_2)}+
{\bf 1}_{\{i_1=i_5\ne 0\}}
{\bf 1}_{\{j_1=j_5\}}
{\bf 1}_{\{i_2=i_3\ne 0\}}
{\bf 1}_{\{j_2=j_3\}}\zeta_{j_4}^{(i_4)}+
$$
$$
+
{\bf 1}_{\{i_1=i_5\ne 0\}}
{\bf 1}_{\{j_1=j_5\}}
{\bf 1}_{\{i_2=i_4\ne 0\}}
{\bf 1}_{\{j_2=j_4\}}\zeta_{j_3}^{(i_3)}+
{\bf 1}_{\{i_1=i_5\ne 0\}}
{\bf 1}_{\{j_1=j_5\}}
{\bf 1}_{\{i_3=i_4\ne 0\}}
{\bf 1}_{\{j_3=j_4\}}\zeta_{j_2}^{(i_2)}+
$$
$$
+
{\bf 1}_{\{i_2=i_3\ne 0\}}
{\bf 1}_{\{j_2=j_3\}}
{\bf 1}_{\{i_4=i_5\ne 0\}}
{\bf 1}_{\{j_4=j_5\}}\zeta_{j_1}^{(i_1)}+
{\bf 1}_{\{i_2=i_4\ne 0\}}
{\bf 1}_{\{j_2=j_4\}}
{\bf 1}_{\{i_3=i_5\ne 0\}}
{\bf 1}_{\{j_3=j_5\}}\zeta_{j_1}^{(i_1)}+
$$
\begin{equation}
\label{a5}
+\Biggl.
{\bf 1}_{\{i_2=i_5\ne 0\}}
{\bf 1}_{\{j_2=j_5\}}
{\bf 1}_{\{i_3=i_4\ne 0\}}
{\bf 1}_{\{j_3=j_4\}}\zeta_{j_1}^{(i_1)}\Biggr),
\end{equation}
\noindent
where ${\bf 1}_A$ is the indicator of the set $A$.
Note that we will consider the case $i_1,\ldots,i_5=1,\ldots,m$.
This case corresponds to the numerical scheme (\ref{4.470}).
Let us now consider the generalization of formulas (\ref{a1})--(\ref{a5})
to the case of an arbitrary multiplicity $k$ $(k\in\mathbb{N})$ of
the iterated Ito stochastic integral $J[\psi^{(k)}]_{T,t}$ defined by (\ref{ito}).
To this end, let us
introduce some notation.
Consider the unordered
set $\{1, 2, \ldots, k\}$
and separate it into two parts:
the first part consists of $r$ unordered
pairs (the order of these pairs is also unimportant) and the
second part consists of the
remaining $k-2r$ numbers.
So, we have
\begin{equation}
\label{leto5007}
(\{
\underbrace{\{g_1, g_2\}, \ldots,
\{g_{2r-1}, g_{2r}\}}_{\small{\hbox{part 1}}}
\},
\{\underbrace{q_1, \ldots, q_{k-2r}}_{\small{\hbox{part 2}}}
\}),
\end{equation}
\noindent
where
$$
\{g_1, g_2, \ldots,
g_{2r-1}, g_{2r}, q_1, \ldots, q_{k-2r}\}=\{1, 2, \ldots, k\},
$$
\noindent
braces
mean an unordered
set, and pa\-ren\-the\-ses mean an ordered set.
We will say that (\ref{leto5007}) is a partition
and consider the sum with respect to all possible
partitions
\begin{equation}
\label{leto5008}
\sum_{\stackrel{(\{\{g_1, g_2\}, \ldots,
\{g_{2r-1}, g_{2r}\}\}, \{q_1, \ldots, q_{k-2r}\})}
{{}_{\{g_1, g_2, \ldots,
g_{2r-1}, g_{2r}, q_1, \ldots, q_{k-2r}\}=\{1, 2, \ldots, k\}}}}
a_{g_1 g_2, \ldots,
g_{2r-1} g_{2r}, q_1 \ldots q_{k-2r}}.
\end{equation}
Below are several examples of sums of the form (\ref{leto5008}):
$$
\sum_{\stackrel{(\{g_1, g_2\})}{{}_{\{g_1, g_2\}=\{1, 2\}}}}
a_{g_1 g_2}=a_{12},
$$
$$
\sum_{\stackrel{(\{\{g_1, g_2\}, \{g_3, g_4\}\})}
{{}_{\{g_1, g_2, g_3, g_4\}=\{1, 2, 3, 4\}}}}
a_{g_1 g_2 g_3 g_4}=a_{1234} + a_{1324} + a_{2314},
$$
$$
\sum_{\stackrel{(\{g_1, g_2\}, \{q_1, q_{2}\})}
{{}_{\{g_1, g_2, q_1, q_{2}\}=\{1, 2, 3, 4\}}}}
a_{g_1 g_2, q_1 q_{2}}=
$$
$$
=a_{12,34}+a_{13,24}+a_{14,23}
+a_{23,14}+a_{24,13}+a_{34,12},
$$
$$
\sum_{\stackrel{(\{g_1, g_2\}, \{q_1, q_{2}, q_3\})}
{{}_{\{g_1, g_2, q_1, q_{2}, q_3\}=\{1, 2, 3, 4, 5\}}}}
a_{g_1 g_2, q_1 q_{2}q_3}
=
$$
$$
=a_{12,345}+a_{13,245}+a_{14,235}
+a_{15,234}+a_{23,145}+a_{24,135}+
$$
$$
+a_{25,134}+a_{34,125}+a_{35,124}+a_{45,123},
$$
$$
\sum_{\stackrel{(\{\{g_1, g_2\}, \{g_3, g_{4}\}\}, \{q_1\})}
{{}_{\{g_1, g_2, g_3, g_{4}, q_1\}=\{1, 2, 3, 4, 5\}}}}
a_{g_1 g_2, g_3 g_{4},q_1}
=
$$
$$
=
a_{12,34,5}+a_{13,24,5}+a_{14,23,5}+
a_{12,35,4}+a_{13,25,4}+a_{15,23,4}+
$$
$$
+a_{12,54,3}+a_{15,24,3}+a_{14,25,3}+a_{15,34,2}+a_{13,54,2}+a_{14,53,2}+
$$
$$
+
a_{52,34,1}+a_{53,24,1}+a_{54,23,1}.
$$
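To make the combinatorial structure of the sums (\ref{leto5008}) concrete,
the following short Python sketch (illustrative only; the function names are ours)
enumerates all partitions (\ref{leto5007}) of the set $\{1,\ldots,k\}$ into $r$ unordered pairs
and $k-2r$ remaining numbers, i.e. exactly the index sets over which the sums above run.
\begin{verbatim}
from itertools import combinations

def perfect_matchings(elements):
    # Partitions of `elements` (even length) into unordered pairs.
    if not elements:
        yield ()
        return
    first, rest = elements[0], elements[1:]
    for i, partner in enumerate(rest):
        for pairs in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield ((first, partner),) + pairs

def pair_partitions(k, r):
    # Yield (pairs, remaining) for all partitions of {1,...,k} into r
    # unordered pairs and k - 2r remaining numbers, cf. (leto5007).
    universe = tuple(range(1, k + 1))
    for paired in combinations(universe, 2 * r):
        singles = tuple(x for x in universe if x not in paired)
        for pairs in perfect_matchings(paired):
            yield pairs, singles

# k = 4, r = 2: the three terms a_{12,34}, a_{13,24}, a_{14,23}
for pairs, singles in pair_partitions(4, 2):
    print(pairs, singles)

# k = 5, r = 1: the ten terms a_{12,345}, ..., a_{45,123}
print(sum(1 for _ in pair_partitions(5, 1)))   # -> 10
\end{verbatim}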
Now we can write (\ref{tyyy}) as
$$
J[\psi^{(k)}]_{T,t}=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,\ldots,p_k\to \infty}}$\cr
}} }
\sum\limits_{j_1=0}^{p_1}\ldots
\sum\limits_{j_k=0}^{p_k}
C_{j_k\ldots j_1}\Biggl(
\prod_{l=1}^k\zeta_{j_l}^{(i_l)}+\sum\limits_{r=1}^{[k/2]}
(-1)^r \times
\Biggr.
$$
\begin{equation}
\label{leto6000hh}
\times
\sum_{\stackrel{(\{\{g_1, g_2\}, \ldots,
\{g_{2r-1}, g_{2r}\}\}, \{q_1, \ldots, q_{k-2r}\})}
{{}_{\{g_1, g_2, \ldots,
g_{2r-1}, g_{2r}, q_1, \ldots, q_{k-2r}\}=\{1, 2, \ldots, k\}}}}
\prod\limits_{s=1}^r
{\bf 1}_{\{i_{g_{{}_{2s-1}}}=~i_{g_{{}_{2s}}}\ne 0\}}
\Biggl.{\bf 1}_{\{j_{g_{{}_{2s-1}}}=~j_{g_{{}_{2s}}}\}}
\prod_{l=1}^{k-2r}\zeta_{j_{q_l}}^{(i_{q_l})}\Biggr),
\end{equation}
\noindent
where $[x]$ is the integer part of a real number $x;$
the other notations are the same as in Theorem {\bf 1}.
In particular, from (\ref{leto6000hh}) for $k=5$ we obtain
$$
J[\psi^{(5)}]_{T,t}=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,\ldots,p_5\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\ldots\sum_{j_5=0}^{p_5}
C_{j_5\ldots j_1}\Biggl(
\prod_{l=1}^5\zeta_{j_l}^{(i_l)}-\Biggr.
$$
$$
-
\sum\limits_{\stackrel{(\{g_1, g_2\}, \{q_1, q_{2}, q_3\})}
{{}_{\{g_1, g_2, q_{1}, q_{2}, q_3\}=\{1, 2, 3, 4, 5\}}}}
{\bf 1}_{\{i_{g_{{}_{1}}}=~i_{g_{{}_{2}}}\ne 0\}}
{\bf 1}_{\{j_{g_{{}_{1}}}=~j_{g_{{}_{2}}}\}}
\prod_{l=1}^{3}\zeta_{j_{q_l}}^{(i_{q_l})}+
$$
$$
+
\sum_{\stackrel{(\{\{g_1, g_2\},
\{g_{3}, g_{4}\}\}, \{q_1\})}
{{}_{\{g_1, g_2, g_{3}, g_{4}, q_1\}=\{1, 2, 3, 4, 5\}}}}
{\bf 1}_{\{i_{g_{{}_{1}}}=~i_{g_{{}_{2}}}\ne 0\}}
{\bf 1}_{\{j_{g_{{}_{1}}}=~j_{g_{{}_{2}}}\}}
\Biggl.{\bf 1}_{\{i_{g_{{}_{3}}}=~i_{g_{{}_{4}}}\ne 0\}}
{\bf 1}_{\{j_{g_{{}_{3}}}=~j_{g_{{}_{4}}}\}}
\zeta_{j_{q_1}}^{(i_{q_1})}\Biggr).
$$
\noindent
The last equality obviously agrees with
(\ref{a5}).
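For numerical experiments it is convenient to have a direct transcription of the
truncated expansion (\ref{leto6000hh}). The following Python sketch is a minimal
illustration only: the indexing conventions and names are ours, and the coefficient
array and the Gaussian random variables are assumed to be supplied by the user;
it evaluates the right-hand side of (\ref{leto6000hh}) before passing to the limit,
for the case $i_1,\ldots,i_k=1,\ldots,m$ and $p_1=\ldots=p_k=q$.
\begin{verbatim}
import numpy as np
from itertools import product, combinations

def perfect_matchings(elements):
    # Partitions of `elements` (even length) into unordered pairs.
    if not elements:
        yield ()
        return
    first, rest = elements[0], elements[1:]
    for i, partner in enumerate(rest):
        for pairs in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield ((first, partner),) + pairs

def truncated_expansion(C, zeta, i_idx):
    # Right-hand side of (leto6000hh) for p_1 = ... = p_k = q, i_1,...,i_k >= 1.
    # C     : k-dimensional array with C[j_1, ..., j_k] = C_{j_k ... j_1}
    # zeta  : array with zeta[i, j] = zeta_j^{(i)}, i = 1, ..., m, j = 0, ..., q
    # i_idx : tuple (i_1, ..., i_k)
    k, q = C.ndim, C.shape[0] - 1
    total = 0.0
    for j in product(range(q + 1), repeat=k):
        term = np.prod([zeta[i_idx[l], j[l]] for l in range(k)])
        for r in range(1, k // 2 + 1):
            for paired in combinations(range(k), 2 * r):
                singles = [l for l in range(k) if l not in paired]
                for pairs in perfect_matchings(paired):
                    # indicators 1{i_a = i_b != 0} 1{j_a = j_b} for every pair
                    if all(i_idx[a] == i_idx[b] and j[a] == j[b]
                           for a, b in pairs):
                        term += (-1) ** r * np.prod(
                            [zeta[i_idx[l], j[l]] for l in singles])
        total += C[j] * term
    return total
\end{verbatim}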
Let us consider a generalization of Theorem 1 to the case
of arbitrary complete orthonormal systems
of functions in the space $L_2([t,T])$
and $\psi_1(\tau),\ldots,\psi_k(\tau)\in L_2([t, T]).$
{\bf Theorem~2}\ \cite{2018a} (Sect.~1.11), \cite{arxiv-1} (Sect.~15).
{\it Suppose that
$\psi_1(\tau),\ldots,\psi_k(\tau)\in L_2([t, T])$ and
$\{\phi_j(x)\}_{j=0}^{\infty}$ is an arbitrary complete orthonormal system
of functions in the space $L_2([t,T]).$
Then the following expansion
$$
J[\psi^{(k)}]_{T,t}=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,\ldots,p_k\to \infty}}$\cr
}} }
\sum\limits_{j_1=0}^{p_1}\ldots
\sum\limits_{j_k=0}^{p_k}
C_{j_k\ldots j_1}\Biggl(
\prod_{l=1}^k\zeta_{j_l}^{(i_l)}+\sum\limits_{r=1}^{[k/2]}
(-1)^r \times
\Biggr.
$$
\begin{equation}
\label{leto6000}
\times
\sum_{\stackrel{(\{\{g_1, g_2\}, \ldots,
\{g_{2r-1}, g_{2r}\}\}, \{q_1, \ldots, q_{k-2r}\})}
{{}_{\{g_1, g_2, \ldots,
g_{2r-1}, g_{2r}, q_1, \ldots, q_{k-2r}\}=\{1, 2, \ldots, k\}}}}
\prod\limits_{s=1}^r
{\bf 1}_{\{i_{g_{{}_{2s-1}}}=~i_{g_{{}_{2s}}}\ne 0\}}
\Biggl.{\bf 1}_{\{j_{g_{{}_{2s-1}}}=~j_{g_{{}_{2s}}}\}}
\prod_{l=1}^{k-2r}\zeta_{j_{q_l}}^{(i_{q_l})}\Biggr)
\end{equation}
\noindent
con\-verg\-ing in the mean-square sense is valid,
where $[x]$ is the integer part of a real number $x;$
the other notations are the same as in Theorem~{\rm 1}.}
It should be noted that an analogue of Theorem 2 was considered
in \cite{Rybakov1000}.
Note that we use different notations
\cite{2018a} (Sect.~1.11), \cite{arxiv-1} (Sect.~15)
in comparison with \cite{Rybakov1000}.
Moreover, the proof of an analogue of Theorem 2
from \cite{Rybakov1000} is somewhat different from the proof given in
\cite{2018a} (Sect.~1.11), \cite{arxiv-1} (Sect.~15).
\section{Calculation of the Mean-Square Approximation Error
in the Method of Generalized Multiple Fourier Series}
Note that for the integrals $J[\psi^{(k)}]_{T,t}$ defined by
(\ref{ito})
the mean-square approximation error can be exactly
calculated and efficiently estimated.
Let $J[\psi^{(k)}]_{T,t}^{q}$ be the
expression on the right-hand side of (\ref{leto6000}) before passing to the limit
$\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,\ldots,p_k\to \infty}}$\cr
}} }$ for the case
$p_1=\ldots=p_k=q,$ i.e.
$$
J[\psi^{(k)}]_{T,t}^{q}=
\sum\limits_{j_1,\ldots,j_k=0}^{q}
C_{j_k\ldots j_1}\Biggl(
\prod_{l=1}^k\zeta_{j_l}^{(i_l)}+\sum\limits_{r=1}^{[k/2]}
(-1)^r \times
\Biggr.
$$
\begin{equation}
\label{r1}
\times
\sum_{\stackrel{(\{\{g_1, g_2\}, \ldots,
\{g_{2r-1}, g_{2r}\}\}, \{q_1, \ldots, q_{k-2r}\})}
{{}_{\{g_1, g_2, \ldots,
g_{2r-1}, g_{2r}, q_1, \ldots, q_{k-2r}\}=\{1, 2, \ldots, k\}}}}
\prod\limits_{s=1}^r
{\bf 1}_{\{i_{g_{{}_{2s-1}}}=~i_{g_{{}_{2s}}}\ne 0\}}
\Biggl.{\bf 1}_{\{j_{g_{{}_{2s-1}}}=~j_{g_{{}_{2s}}}\}}
\prod_{l=1}^{k-2r}\zeta_{j_{q_l}}^{(i_{q_l})}\Biggr).
\end{equation}
Let us denote
$$
{\sf M}\left\{\left(J[\psi^{(k)}]_{T,t}-
J[\psi^{(k)}]_{T,t}^{q}\right)^2\right\}\stackrel{{\rm def}}
{=}E_k^{q},
$$
$$
\int\limits_{[t,T]^k}
K^2(t_1,\ldots,t_k)dt_1\ldots dt_k
\stackrel{{\rm def}}{=}I_k.
$$
In \cite{2017-1}-\cite{xxx333}, \cite{arxiv-1},
\cite{arxiv-2} it was shown that
\begin{equation}
\label{qq4}
E_k^{q}\le k!\Biggl(I_k-\sum_{j_1,\ldots,j_k=0}^{q}C^2_{j_k\ldots j_1}\Biggr)
\end{equation}
\noindent
for the following two cases:
1.\ $i_1,\ldots,i_k=1,\ldots,m$ and $T-t\in (0, +\infty)$,
2.\ $i_1,\ldots,i_k=0, 1,\ldots,m$ and $T-t\in (0, 1)$.
The value $E_k^{q}$
can be calculated exactly.
{\bf Theorem 3} \cite{2018a} (Sect.~1.12), \cite{arxiv-2} (Sect.~6).
{\it Suppose that $\{\phi_j(x)\}_{j=0}^{\infty}$
is an arbitrary complete orthonormal system
of functions in the space $L_2([t,T])$ and
$\psi_1(\tau),\ldots,\psi_k(\tau)\in L_2([t, T]).$
Then
\begin{equation}
\label{tttr11}
E_k^q=I_k- \sum_{j_1,\ldots, j_k=0}^{q}
C_{j_k\ldots j_1}
{\sf M}\left\{J[\psi^{(k)}]_{T,t}
\sum\limits_{(j_1,\ldots,j_k)}
\int\limits_t^T \phi_{j_k}(t_k)
\ldots
\int\limits_t^{t_{2}}\phi_{j_{1}}(t_{1})
d{\bf f}_{t_1}^{(i_1)}\ldots
d{\bf f}_{t_k}^{(i_k)}\right\},
\end{equation}
\noindent
where
$i_1,\ldots,i_k = 1,\ldots,m;$\
the expression
$$
\sum\limits_{(j_1,\ldots,j_k)}
$$
\noindent
means the sum with respect to all
possible permutations
$(j_1,\ldots,j_k)$. At the same time, if
$j_r$ is swapped with $j_q$ in the permutation $(j_1,\ldots,j_k)$,
then $i_r$ is swapped with $i_q$ in the permutation
$(i_1,\ldots,i_k);$
the other notations are the same as in Theorems {\rm 1, 2.}
}
Note that
$$
{\sf M}\left\{J[\psi^{(k)}]_{T,t}
\int\limits_t^T \phi_{j_k}(t_k)
\ldots
\int\limits_t^{t_{2}}\phi_{j_{1}}(t_{1})
d{\bf f}_{t_1}^{(i_1)}\ldots
d{\bf f}_{t_k}^{(i_k)}\right\}=C_{j_k\ldots j_1}.
$$
Therefore, for the case of pairwise
different numbers $i_1,\ldots,i_k$
as well as for the case $i_1=\ldots=i_k$
from Theorem 3 it follows that
\cite{2018}, \cite{2018a}-\cite{xxx333}, \cite{17a},
\cite{arxiv-2}
\begin{equation}
\label{qq1}
E_k^q= I_k- \sum_{j_1,\ldots,j_k=0}^{q}
C_{j_k\ldots j_1}^2,
\end{equation}
$$
E_k^q= I_k - \sum_{j_1,\ldots,j_k=0}^{q}
C_{j_k\ldots j_1}\Biggl(\sum\limits_{(j_1,\ldots,j_k)}
C_{j_k\ldots j_1}\Biggr),
$$
\noindent
where
$$
\sum\limits_{(j_1,\ldots,j_k)}
$$
\noindent
is a sum with respect to all
possible permutations
$(j_1,\ldots,j_k)$.
Consider some examples \cite{2018}, \cite{2018a}-\cite{xxx333}, \cite{17a},
\cite{arxiv-2} of application of Theorem 3
$(i_1,i_2,i_3=1,\ldots,m)$
\begin{equation}
\label{qq2}
E_2^q
=I_2
-\sum_{j_1,j_2=0}^q
C_{j_2j_1}^2-
\sum_{j_1,j_2=0}^q
C_{j_2j_1}C_{j_1j_2}\ \ \ (i_1=i_2),
\end{equation}
\begin{equation}
\label{qq3}
E_3^q=I_3
-\sum_{j_3,j_2,j_1=0}^q C_{j_3j_2j_1}^2-
\sum_{j_3,j_2,j_1=0}^q C_{j_3j_1j_2}C_{j_3j_2j_1}\ \ \ (i_1=i_2\ne i_3),
\end{equation}
\begin{equation}
\label{882}
E_3^q=I_3-
\sum_{j_3,j_2,j_1=0}^q C_{j_3j_2j_1}^2-
\sum_{j_3,j_2,j_1=0}^q C_{j_2j_3j_1}C_{j_3j_2j_1}\ \ \ (i_1\ne i_2=i_3),
\end{equation}
\begin{equation}
\label{883}
E_3^q=I_3
-\sum_{j_3,j_2,j_1=0}^q C_{j_3j_2j_1}^2-
\sum_{j_3,j_2,j_1=0}^q C_{j_3j_2j_1}C_{j_1j_2j_3}\ \ \ (i_1=i_3\ne i_2).
\end{equation}
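As a simple illustration of (\ref{qq1}) and (\ref{qq2}), the following sympy sketch
(illustrative only; we assume $\psi_1=\psi_2\equiv 1$, $[t,T]=[0,1]$, and the orthonormal
shifted Legendre system, in which case $I_2=1/2$) evaluates $E_2^q$ exactly.
\begin{verbatim}
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
q = 6

def phi(j, x):
    # orthonormal (shifted) Legendre system on [0, 1]
    return sp.sqrt(2 * j + 1) * sp.legendre(j, 2 * x - 1)

# Fourier coefficients C_{j2 j1} of K(t1, t2) = 1_{t1 < t2} (psi_1 = psi_2 = 1)
C = [[sp.integrate(phi(j2, t2) * sp.integrate(phi(j1, t1), (t1, 0, t2)),
                   (t2, 0, 1))
      for j1 in range(q + 1)] for j2 in range(q + 1)]

I2 = sp.Rational(1, 2)              # integral of K^2 over the unit square
E_diff = I2 - sum(C[j2][j1] ** 2
                  for j1 in range(q + 1) for j2 in range(q + 1))
E_same = E_diff - sum(C[j2][j1] * C[j1][j2]
                      for j1 in range(q + 1) for j2 in range(q + 1))

print(E_diff, float(E_diff))        # error for i1 != i2, cf. (qq1)
print(E_same, float(E_same))        # error for i1 = i2,  cf. (qq2)
\end{verbatim}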
The values $E_4^q$ and $E_5^q$ were calculated exactly for all possible
combinations of $i_1,\ldots,i_5=1,\ldots,m$ in
\cite{2018}, \cite{2018a}-\cite{xxx333},
\cite{arxiv-2}.
\section{Expansions
of Iterated Stratonovich
Stochastic Integrals Based on Multiple Fourier--Legendre
Series and Multiple Trigonometric Fourier Series}
In contrast to the iterated Ito stochastic integrals (\ref{ito}),
the iterated Stratonovich stochastic integrals (\ref{str})
have simpler expansions (see Theorems 4--10 below)
than (\ref{tyyy}) but the calculation (or estimation)
of mean-square approximation
errors for the latter is a more difficult problem than
for the former. We will study this issue in
details below.
As we mentioned above, Theorems 1, 2 can be adapted for the iterated
Stratonovich stochastic integrals (\ref{str}) at least
for multiplicities 1 to 6.
Expansions of these iterated Stratonovich
stochastic integrals turned out to be
much simpler than the corresponding expansions
of the iterated Ito stochastic integrals (\ref{ito}) from Theorems 1, 2.
Let us formulate some old results on expansions of the iterated
Stratonovich stochastic integrals (\ref{str}) of
multiplicities 2 to 4.
{\bf Theorem 4}\ \cite{2011-2}-\cite{xxx333},
\cite{2010-2}-\cite{2013}, \cite{30a}, \cite{300a},
\cite{400a}, \cite{271a},
\cite{arxiv-5}, \cite{arxiv-8}, \cite{arxiv-23}.\
{\it Assume that the following conditions are fulfilled{\rm :}
{\rm 1}. The function $\psi_2(\tau)$ is continuously
differentiable on the interval $[t, T]$ and the
function $\psi_1(\tau)$ is twice continuously
differentiable on the interval $[t, T]$.
{\rm 2}. $\{\phi_j(x)\}_{j=0}^{\infty}$ is a complete orthonormal system
of Legendre polynomials or tri\-go\-no\-met\-ric functions
in the space $L_2([t, T]).$
Then, the iterated Stratonovich stochastic integral of second multiplicity
$$
{\int\limits_t^{*}}^T\psi_2(t_2)
{\int\limits_t^{*}}^{t_2}\psi_1(t_1)d{\bf f}_{t_1}^{(i_1)}
d{\bf f}_{t_2}^{(i_2)}\ \ \ (i_1, i_2=1,\ldots,m)
$$
\noindent
is expanded into the
following series
$$
{\int\limits_t^{*}}^T\psi_2(t_2)
{\int\limits_t^{*}}^{t_2}\psi_1(t_1)d{\bf f}_{t_1}^{(i_1)}=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p_1,p_2\to \infty}}$\cr
}} }\sum_{j_1=0}^{p_1}\sum_{j_2=0}^{p_2}
C_{j_2j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}
$$
\noindent
converging in the mean-square sense, where
$$
C_{j_2 j_1}=\int\limits_t^T
\psi_2(t_2)\phi_{j_2}(t_2)
\int\limits_t^{t_2}\psi_1(t_1)\phi_{j_1}(t_1)dt_1dt_2;
$$
\noindent
the other notations are the same as in Theorems {\rm 1, 2}.}
{\bf Theorem 5}\ \cite{2011-2}-\cite{xxx333},
\cite{2010-2}-\cite{2013}, \cite{271a},
\cite{arxiv-5}, \cite{arxiv-7}.\
{\it Assume that
$\{\phi_j(x)\}_{j=0}^{\infty}$ is a complete orthonormal
system of Legendre polynomials or trigonometric functions
in the space $L_2([t, T])$. Moreover,
the function $\psi_2(\tau)$ is continuously
differentiable on the interval $[t, T]$ and
the functions $\psi_1(\tau),$ $\psi_3(\tau)$ are twice continuously
differentiable on the interval $[t, T]$.
Then, for the iterated Stratonovich stochastic integral of
third multiplicity
$$
{\int\limits_t^{*}}^T\psi_3(t_3)
{\int\limits_t^{*}}^{t_3}\psi_2(t_2)
{\int\limits_t^{*}}^{t_2}\psi_1(t_1)
d{\bf f}_{t_1}^{(i_1)}
d{\bf f}_{t_2}^{(i_2)}d{\bf f}_{t_3}^{(i_3)}\ \ \ (i_1, i_2, i_3=1,\ldots,m)
$$
\noindent
the following
expansion
\begin{equation}
\label{feto19000a}
{\int\limits_t^{*}}^T\psi_3(t_3)
{\int\limits_t^{*}}^{t_3}\psi_2(t_2)
{\int\limits_t^{*}}^{t_2}\psi_1(t_1)
d{\bf f}_{t_1}^{(i_1)}
d{\bf f}_{t_2}^{(i_2)}d{\bf f}_{t_3}^{(i_3)}\
=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{q\to \infty}}$\cr
}} }
\sum\limits_{j_1, j_2, j_3=0}^{q}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
\end{equation}
\noindent
converging in the mean-square sense is valid, where
$$
C_{j_3 j_2 j_1}=\int\limits_t^T\psi_3(t_3)\phi_{j_3}(t_3)
\int\limits_t^{t_3}\psi_2(t_2)\phi_{j_2}(t_2)
\int\limits_t^{t_2}\psi_1(t_1)\phi_{j_1}(t_1)dt_1dt_2dt_3;
$$
\noindent
the other notations are the same as in Theorems {\rm 1, 2}.}
{\bf Theorem 6}\ \cite{2011-2}-\cite{xxx333},
\cite{2010-2}-\cite{2013}, \cite{271a},
\cite{arxiv-5}, \cite{arxiv-6}.\
{\it Suppose that
$\{\phi_j(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of
Legendre polynomials or trigonometric functions in $L_2([t, T]).$
Then, for the iterated Stra\-to\-no\-vich stochastic integral
of multiplicity {\rm 4}
$$
{\int\limits_t^{*}}^T
{\int\limits_t^{*}}^{t_4}
{\int\limits_t^{*}}^{t_3}
{\int\limits_t^{*}}^{t_2}
d{\bf w}_{t_1}^{(i_1)}
d{\bf w}_{t_2}^{(i_2)}d{\bf w}_{t_3}^{(i_3)}d{\bf w}_{t_4}^{(i_4)}
$$
\noindent
the following
expansion
$$
{\int\limits_t^{*}}^T
{\int\limits_t^{*}}^{t_4}
{\int\limits_t^{*}}^{t_3}
{\int\limits_t^{*}}^{t_2}
d{\bf w}_{t_1}^{(i_1)}
d{\bf w}_{t_2}^{(i_2)}d{\bf w}_{t_3}^{(i_3)}d{\bf w}_{t_4}^{(i_4)}=
\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{q\to \infty}}$\cr
}} }
\sum\limits_{j_1, j_2, j_3, j_4=0}^{q}
C_{j_4 j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
\zeta_{j_4}^{(i_4)}
$$
\noindent
converging in the mean-square sense
is valid, where $i_1, i_2, i_3, i_4=0, 1,\ldots,m,$
$$
C_{j_4 j_3 j_2 j_1}=\int\limits_t^T\phi_{j_4}(t_4)\int\limits_t^{t_4}
\phi_{j_3}(t_3)
\int\limits_t^{t_3}\phi_{j_2}(t_2)\int\limits_t^{t_2}\phi_{j_1}(t_1)
dt_1dt_2dt_3dt_4;
$$
\noindent
the other notations are the same as in Theorems {\rm 1, 2}.}
Recently, a new approach to the expansion and mean-square
approximation of iterated Stratonovich stochastic integrals has been obtained
\cite{2018a} (Sect.~2.10--2.16), \cite{arxiv-4} (Sect.~7--13), \cite{arxiv-5} (Sect.~13--19),
\cite{arxiv-6} (Sect.~5--11), \cite{new-art-1-xxy}
(Sect.~4--9).
Let us formulate four theorems that were obtained using this approach.
{\bf Theorem 7}\ \cite{2018a}, \cite{arxiv-4}, \cite{arxiv-5},
\cite{arxiv-6}, \cite{new-art-1-xxy}.\
{\it Suppose
that $\{\phi_j(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of
Legendre polynomials or trigonometric functions in the space $L_2([t, T]).$
Furthermore, let $\psi_1(\tau), \psi_2(\tau), \psi_3(\tau)$ be continuously dif\-ferentiable
nonrandom functions on $[t, T].$
Then, for the
iterated Stra\-to\-no\-vich stochastic integral of third multiplicity
$$
J^{*}[\psi^{(3)}]_{T,t}={\int\limits_t^{*}}^T\psi_3(t_3)
{\int\limits_t^{*}}^{t_3}\psi_2(t_2)
{\int\limits_t^{*}}^{t_2}\psi_1(t_1)
d{\bf w}_{t_1}^{(i_1)}
d{\bf w}_{t_2}^{(i_2)}d{\bf w}_{t_3}^{(i_3)}\ \ \ (i_1,i_2,i_3=0,1,\ldots,m)
$$
\noindent
the following
relations
\begin{equation}
\label{fin1}
J^{*}[\psi^{(3)}]_{T,t}
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p\to \infty}}$\cr
}} }
\sum\limits_{j_1, j_2, j_3=0}^{p}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)},
\end{equation}
\begin{equation}
\label{fin2}
{\sf M}\left\{\left(
J^{*}[\psi^{(3)}]_{T,t}-
\sum\limits_{j_1, j_2, j_3=0}^{p}
C_{j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}\right)^2\right\}
\le \frac{C}{p}
\end{equation}
\noindent
are fulfilled, where $i_1, i_2, i_3=0,1,\ldots,m$ in {\rm (\ref{fin1})} and
$i_1, i_2, i_3=1,\ldots,m$ in {\rm (\ref{fin2}),}
constant $C$ is independent of $p,$
$$
C_{j_3 j_2 j_1}=\int\limits_t^T\psi_3(t_3)\phi_{j_3}(t_3)
\int\limits_t^{t_3}\psi_2(t_2)\phi_{j_2}(t_2)
\int\limits_t^{t_2}\psi_1(t_1)\phi_{j_1}(t_1)dt_1dt_2dt_3
$$
\noindent
and
$$
\zeta_{j}^{(i)}=
\int\limits_t^T \phi_{j}(\tau) d{\bf f}_{\tau}^{(i)}
$$
\noindent
are independent standard Gaussian random variables for various
$i$ or $j$ {\rm (}in the case when $i\ne 0${\rm );}
the other notations are the same as in Theorems~{\rm 1, 2}.}
{\bf Theorem 8}\ \cite{2018a}, \cite{arxiv-4}, \cite{arxiv-5},
\cite{arxiv-6}, \cite{new-art-1-xxy}.\ {\it Let
$\{\phi_j(x)\}_{j=0}^{\infty}$ be a complete orthonormal system of
Legendre polynomials or trigonometric functions in the space $L_2([t, T]).$
Furthermore, let $\psi_1(\tau), \ldots, \psi_4(\tau)$ be continuously dif\-ferentiable
nonrandom functions on $[t, T].$
Then, for the
iterated Stra\-to\-no\-vich stochastic integral of fourth multiplicity
\begin{equation}
\label{fin0}
J^{*}[\psi^{(4)}]_{T,t}={\int\limits_t^{*}}^T\psi_4(t_4)
{\int\limits_t^{*}}^{t_4}\psi_3(t_3)
{\int\limits_t^{*}}^{t_3}\psi_2(t_2)
{\int\limits_t^{*}}^{t_2}\psi_1(t_1)
d{\bf w}_{t_1}^{(i_1)}
d{\bf w}_{t_2}^{(i_2)}d{\bf w}_{t_3}^{(i_3)}d{\bf w}_{t_4}^{(i_4)}
\end{equation}
\noindent
the following
relations
\begin{equation}
\label{fin3}
J^{*}[\psi^{(4)}]_{T,t}
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p\to \infty}}$\cr
}} }
\sum\limits_{j_1, j_2, j_3,j_4=0}^{p}
C_{j_4j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}\zeta_{j_4}^{(i_4)},
\end{equation}
\begin{equation}
\label{fin4}
{\sf M}\left\{\left(
J^{*}[\psi^{(4)}]_{T,t}-
\sum\limits_{j_1, j_2, j_3, j_4=0}^{p}
C_{j_4 j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
\zeta_{j_4}^{(i_4)}
\right)^2\right\}
\le \frac{C}{p^{1-\varepsilon}}
\end{equation}
\noindent
are fulfilled, where $i_1, \ldots , i_4=0,1,\ldots,m$ in {\rm (\ref{fin0}),} {\rm (\ref{fin3})}
and $i_1, \ldots, i_4=1,\ldots,m$ in {\rm (\ref{fin4}),}
constant $C$ does not depend on $p,$
$\varepsilon$ is an arbitrary
small positive real number
for the case of complete orthonormal system of
Legendre polynomials in the space $L_2([t, T])$
and $\varepsilon=0$ for the case of
complete orthonormal system of
trigonometric functions in the space $L_2([t, T]),$
$$
C_{j_4 j_3 j_2 j_1}=
$$
$$
=
\int\limits_t^T\psi_4(t_4)\phi_{j_4}(t_4)
\int\limits_t^{t_4}\psi_3(t_3)\phi_{j_3}(t_3)
\int\limits_t^{t_3}\psi_2(t_2)\phi_{j_2}(t_2)
\int\limits_t^{t_2}\psi_1(t_1)\phi_{j_1}(t_1)dt_1dt_2dt_3dt_4;
$$
\noindent
the other notations are the same as in Theorem~{\rm 7}.}
{\bf Theorem 9}\ \cite{2018a}, \cite{arxiv-4}, \cite{arxiv-5},
\cite{arxiv-6}, \cite{new-art-1-xxy}.\
{\it Assume
that $\{\phi_j(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of
Legendre polynomials or trigonometric functions in the space $L_2([t, T])$
and $\psi_1(\tau), \ldots, \psi_5(\tau)$ are continuously dif\-ferentiable
nonrandom functions on $[t, T].$
Then, for the
iterated Stra\-to\-no\-vich stochastic integral of fifth multiplicity
\begin{equation}
\label{fin7}
J^{*}[\psi^{(5)}]_{T,t}={\int\limits_t^{*}}^T\psi_5(t_5)
\ldots
{\int\limits_t^{*}}^{t_2}\psi_1(t_1)
d{\bf w}_{t_1}^{(i_1)}
\ldots d{\bf w}_{t_5}^{(i_5)}
\end{equation}
\noindent
the following
relations
\begin{equation}
\label{fin8}
J^{*}[\psi^{(5)}]_{T,t}
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p\to \infty}}$\cr
}} }
\sum\limits_{j_1,\ldots,j_5=0}^{p}
C_{j_5 \ldots j_1}\zeta_{j_1}^{(i_1)}\ldots \zeta_{j_5}^{(i_5)},
\end{equation}
\begin{equation}
\label{fin9}
{\sf M}\left\{\left(
J^{*}[\psi^{(5)}]_{T,t}-
\sum\limits_{j_1, \ldots, j_5=0}^{p}
C_{j_5 \ldots j_1}\zeta_{j_1}^{(i_1)}\ldots
\zeta_{j_5}^{(i_5)}
\right)^2\right\}
\le \frac{C}{p^{1-\varepsilon}}
\end{equation}
\noindent
are fulfilled, where $i_1, \ldots , i_5=0,1,\ldots,m$ in {\rm (\ref{fin7}),} {\rm (\ref{fin8})}
and $i_1, \ldots, i_5=1,\ldots,m$ in {\rm (\ref{fin9}),}
constant $C$ is independent of $p,$
$\varepsilon$ is an arbitrary
small positive real number
for the case of complete orthonormal system of
Legendre polynomials in the space $L_2([t, T])$
and $\varepsilon=0$ for the case of
complete orthonormal system of
trigonometric functions in the space $L_2([t, T]),$
$$
C_{j_5 \ldots j_1}=
\int\limits_t^T\psi_5(t_5)\phi_{j_5}(t_5)\ldots
\int\limits_t^{t_2}\psi_1(t_1)\phi_{j_1}(t_1)dt_1\ldots dt_5;
$$
\noindent
the other notations are the same as in Theorems~{\rm 7, 8}.}
{\bf Theorem 10}\ \cite{2018a}, \cite{arxiv-4}, \cite{arxiv-5},
\cite{arxiv-6}, \cite{new-art-1-xxy}.\
{\it Suppose that
$\{\phi_j(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of
Legendre polynomials or trigonometric functions in the space $L_2([t, T]).$
Then, for the
iterated Stratonovich stochastic integral of sixth multiplicity
\begin{equation}
\label{after10001qu1}
J_{T,t}^{*(i_1\ldots i_6)}={\int\limits_t^{*}}^T
\ldots
{\int\limits_t^{*}}^{t_2}
d{\bf w}_{t_1}^{(i_1)}
\ldots d{\bf w}_{t_6}^{(i_6)}
\end{equation}
\noindent
the following
expansion
$$
J_{T,t}^{*(i_1\ldots i_6)}
=\hbox{\vtop{\offinterlineskip\halign{
\hfil#\hfil\cr
{\rm l.i.m.}\cr
$\stackrel{}{{}_{p\to \infty}}$\cr
}} }
\sum\limits_{j_1, \ldots, j_6=0}^{p}
C_{j_6 \ldots j_1}\zeta_{j_1}^{(i_1)}\ldots
\zeta_{j_6}^{(i_6)}
$$
\noindent
that converges in the mean-square sense is valid, where
$i_1, \ldots, i_6=0, 1,\ldots,m,$
$$
C_{j_6 \ldots j_1}=
\int\limits_t^T\phi_{j_6}(t_6)\ldots
\int\limits_t^{t_2}\phi_{j_1}(t_1)dt_1\ldots dt_6;
$$
\noindent
the other notations are the same as in Theorems~{\rm 7--9}.}
\section{Approximation of Iterated
Stratonovich Stochastic Integrals Based on Multiple
Fourier--Legendre Series}
As was mentioned above,
one of the main problems arising in the implementation of the
numerical scheme (\ref{4.470}) is the joint
numerical modeling of the iterated Stratonovich stochastic integrals
figuring in (\ref{4.470}). Let us consider efficient
numerical modeling formulas for
the iterated Stratonovich stochastic integrals based on Theorems 4--9.
Using Theorems 1, 2 ($k=1$), Theorems 4--9, and multiple Fourier--Legendre series,
we obtain the following
approximations of iterated Stratonovich stochastic
integrals from (\ref{4.470}) \cite{2006}-\cite{arxiv-6}
\begin{equation}
\label{ccc1}
I_{(0)\tau_{p+1},\tau_p}^{*(i_1)}=\sqrt{\Delta}\zeta_0^{(i_1)},
\end{equation}
\begin{equation}
\label{ccc2}
I_{(1)\tau_{p+1},\tau_p}^{*(i_1)}=
-\frac{{\Delta}^{3/2}}{2}\left(\zeta_0^{(i_1)}+
\frac{1}{\sqrt{3}}\zeta_1^{(i_1)}\right),
\end{equation}
\begin{equation}
\label{ccc3}
{I}_{(2)\tau_{p+1},\tau_p}^{*(i_1)}=
\frac{\Delta^{5/2}}{3}\left(
\zeta_0^{(i_1)}+\frac{\sqrt{3}}{2}\zeta_1^{(i_1)}+
\frac{1}{2\sqrt{5}}\zeta_2^{(i_1)}\right),
\end{equation}
\begin{equation}
\label{ccc4}
I_{(00)\tau_{p+1},\tau_p}^{*(i_1 i_2)q}=
\frac{\Delta}{2}\left(\zeta_0^{(i_1)}\zeta_0^{(i_2)}+\sum_{i=1}^{q}
\frac{1}{\sqrt{4i^2-1}}\left(
\zeta_{i-1}^{(i_1)}\zeta_{i}^{(i_2)}-
\zeta_i^{(i_1)}\zeta_{i-1}^{(i_2)}\right)\right),
\end{equation}
$$
I_{(01)\tau_{p+1},\tau_p}^{*(i_1 i_2)q}=
-\frac{\Delta}{2}
I_{(00)\tau_{p+1},\tau_p}^{*(i_1 i_2)q}
-\frac{{\Delta}^2}{4}\Biggl(
\frac{1}{\sqrt{3}}\zeta_0^{(i_1)}\zeta_1^{(i_2)}+\Biggr.
$$
\begin{equation}
\label{ccc5}
+\Biggl.\sum_{i=0}^{q}\Biggl(
\frac{(i+2)\zeta_i^{(i_1)}\zeta_{i+2}^{(i_2)}
-(i+1)\zeta_{i+2}^{(i_1)}\zeta_{i}^{(i_2)}}
{\sqrt{(2i+1)(2i+5)}(2i+3)}-
\frac{\zeta_i^{(i_1)}\zeta_{i}^{(i_2)}}{(2i-1)(2i+3)}\Biggr)\Biggr),
\end{equation}
$$
I_{(10)\tau_{p+1},\tau_p}^{*(i_1 i_2)q}=
-\frac{\Delta}{2}I_{(00)\tau_{p+1},\tau_p}^{*(i_1 i_2)q}
-\frac{\Delta^2}{4}\Biggl(
\frac{1}{\sqrt{3}}\zeta_0^{(i_2)}\zeta_1^{(i_1)}+\Biggr.
$$
\begin{equation}
\label{ccc6}
+\Biggl.\sum_{i=0}^{q}\Biggl(
\frac{(i+1)\zeta_{i+2}^{(i_2)}\zeta_{i}^{(i_1)}
-(i+2)\zeta_{i}^{(i_2)}\zeta_{i+2}^{(i_1)}}
{\sqrt{(2i+1)(2i+5)}(2i+3)}+
\frac{\zeta_i^{(i_1)}\zeta_{i}^{(i_2)}}{(2i-1)(2i+3)}\Biggr)\Biggr)
\end{equation}
\noindent
or
\begin{equation}
\label{ccc50}
I_{(01)\tau_{p+1},\tau_p}^{*(i_1 i_2)q}
=
\sum_{j_1,j_2=0}^{q}
C_{j_2j_1}^{01}
\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)},
\end{equation}
\begin{equation}
\label{ccc51}
I_{(10)\tau_{p+1},\tau_p}^{*(i_1 i_2)q}
=
\sum_{j_1,j_2=0}^{q}
C_{j_2j_1}^{10}
\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)};
\end{equation}
\begin{equation}
\label{ccc7}
I_{(000)\tau_{p+1},\tau_p}^{*(i_1 i_2 i_3)q}
=\sum_{j_1, j_2, j_3=0}^{q}
C_{j_3 j_2 j_1}
\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)},
\end{equation}
\begin{equation}
\label{ccc8}
I_{(100)\tau_{p+1},\tau_p}^{*(i_1 i_2 i_3)q}
=\sum_{j_1, j_2, j_3=0}^{q}
C_{j_3 j_2 j_1}^{100}
\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)},
\end{equation}
\begin{equation}
\label{ccc9}
I_{(010)\tau_{p+1},\tau_p}^{*(i_1 i_2 i_3)q}
=\sum_{j_1, j_2, j_3=0}^{q}
C_{j_3 j_2 j_1}^{010}
\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)},
\end{equation}
\begin{equation}
\label{ccc10}
I_{(001)\tau_{p+1},\tau_p}^{*(i_1 i_2 i_3)q}
=\sum_{j_1, j_2, j_3=0}^{q}
C_{j_3 j_2 j_1}^{001}
\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)},
\end{equation}
\begin{equation}
\label{ccc11}
I_{(0000)\tau_{p+1},\tau_p}^{*(i_1 i_2 i_3 i_4)q}
=\sum_{j_1, j_2, j_3, j_4=0}^{q}
C_{j_4 j_3 j_2 j_1}
\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
\zeta_{j_4}^{(i_4)},
\end{equation}
\begin{equation}
\label{ccc12}
I_{(00000)\tau_{p+1},\tau_p}^{*(i_1 i_2 i_3 i_4 i_5)q}=
\sum\limits_{j_1, j_2, j_3, j_4, j_5=0}^{q}
C_{j_5j_4 j_3 j_2 j_1}\zeta_{j_1}^{(i_1)}\zeta_{j_2}^{(i_2)}\zeta_{j_3}^{(i_3)}
\zeta_{j_4}^{(i_4)}\zeta_{j_5}^{(i_5)},
\end{equation}
\noindent
where the Fourier--Legendre coefficients have the form
$$
C_{j_2j_1}^{01}
=
\int\limits_{\tau_p}^{\tau_{p+1}}(\tau_p-y)\phi_{j_2}(y)
\int\limits_{\tau_p}^{y}
\phi_{j_1}(x)dx dy =
\frac{\sqrt{(2j_1+1)(2j_2+1)}}{8}\Delta^{2}\bar
C_{j_2j_1}^{01},
$$
$$
C_{j_2j_1}^{10}
=\int\limits_{\tau_p}^{\tau_{p+1}}\phi_{j_2}(y)
\int\limits_{\tau_p}^{y}(\tau_p-x)
\phi_{j_1}(x)dx dy =
\frac{\sqrt{(2j_1+1)(2j_2+1)}}{8}\Delta^{2}\bar
C_{j_2j_1}^{10},
$$
$$
C_{j_3j_2j_1}=\int\limits_{\tau_p}^{\tau_{p+1}}\phi_{j_3}(z)
\int\limits_{\tau_p}^{z}\phi_{j_2}(y)
\int\limits_{\tau_p}^{y}
\phi_{j_1}(x)dx dy dz=
$$
\begin{equation}
\label{hhh1}
=
\frac{\sqrt{(2j_1+1)(2j_2+1)(2j_3+1)}}{8}\Delta^{3/2}\bar
C_{j_3j_2j_1},
\end{equation}
$$
C_{j_4j_3j_2j_1}=\int\limits_{\tau_p}^{\tau_{p+1}}\phi_{j_4}(u)
\int\limits_{\tau_p}^{u}\phi_{j_3}(z)
\int\limits_{\tau_p}^{z}\phi_{j_2}(y)
\int\limits_{\tau_p}^{y}
\phi_{j_1}(x)dx dy dz du=
$$
\begin{equation}
\label{hhh2}
=\frac{\sqrt{(2j_1+1)(2j_2+1)(2j_3+1)(2j_4+1)}}{16}\Delta^{2}\bar
C_{j_4j_3j_2j_1},
\end{equation}
$$
C_{j_3j_2j_1}^{001}=\int\limits_{\tau_p}^{\tau_{p+1}}(\tau_p-z)\phi_{j_3}(z)
\int\limits_{\tau_p}^{z}\phi_{j_2}(y)
\int\limits_{\tau_p}^{y}
\phi_{j_1}(x)dx dy dz=
$$
\begin{equation}
\label{hhh3}
=
\frac{\sqrt{(2j_1+1)(2j_2+1)(2j_3+1)}}{16}\Delta^{5/2}\bar
C_{j_3j_2j_1}^{001},
\end{equation}
$$
C_{j_3j_2j_1}^{010}=\int\limits_{\tau_p}^{\tau_{p+1}}\phi_{j_3}(z)
\int\limits_{\tau_p}^{z}(\tau_p-y)\phi_{j_2}(y)
\int\limits_{\tau_p}^{y}
\phi_{j_1}(x)dx dy dz=
$$
\begin{equation}
\label{hhh4}
=
\frac{\sqrt{(2j_1+1)(2j_2+1)(2j_3+1)}}{16}\Delta^{5/2}\bar
C_{j_3j_2j_1}^{010},
\end{equation}
$$
C_{j_3j_2j_1}^{100}=\int\limits_{\tau_p}^{\tau_{p+1}}\phi_{j_3}(z)
\int\limits_{\tau_p}^{z}\phi_{j_2}(y)
\int\limits_{\tau_p}^{y}
(\tau_p-x)\phi_{j_1}(x)dx dy dz=
$$
\begin{equation}
\label{hhh5}
=
\frac{\sqrt{(2j_1+1)(2j_2+1)(2j_3+1)}}{16}\Delta^{5/2}\bar
C_{j_3j_2j_1}^{100},
\end{equation}
$$
C_{j_5j_4 j_3 j_2 j_1}=
\int\limits_{\tau_p}^{\tau_{p+1}}\phi_{j_5}(v)
\int\limits_{\tau_p}^v\phi_{j_4}(u)
\int\limits_{\tau_p}^{u}
\phi_{j_3}(z)
\int\limits_{\tau_p}^{z}\phi_{j_2}(y)\int\limits_{\tau_p}^{y}\phi_{j_1}(x)
dxdydzdudv=
$$
\begin{equation}
\label{hhh6}
=\frac{\sqrt{(2j_1+1)(2j_2+1)(2j_3+1)(2j_4+1)(2j_5+1)}}{32}\Delta^{5/2}\bar
C_{j_5j_4 j_3 j_2 j_1},
\end{equation}
\noindent
where
$$
\bar C_{j_2j_1}^{01}=-\int\limits_{-1}^{1}(1+y)P_{j_2}(y)
\int\limits_{-1}^{y}
P_{j_1}(x)dx dy,
$$
$$
\bar C_{j_2j_1}^{10}=-\int\limits_{-1}^{1}P_{j_2}(y)
\int\limits_{-1}^{y}
(1+x)P_{j_1}(x)dx dy,
$$
\begin{equation}
\label{jjj1}
\bar C_{j_3j_2j_1}=
\int\limits_{-1}^{1}P_{j_3}(z)
\int\limits_{-1}^{z}P_{j_2}(y)
\int\limits_{-1}^{y}
P_{j_1}(x)dx dy dz,
\end{equation}
\begin{equation}
\label{jjj2}
\bar C_{j_4j_3j_2j_1}=
\int\limits_{-1}^{1}P_{j_4}(u)
\int\limits_{-1}^{u}P_{j_3}(z)
\int\limits_{-1}^{z}P_{j_2}(y)
\int\limits_{-1}^{y}
P_{j_1}(x)dx dy dz du,
\end{equation}
\begin{equation}
\label{jjj3}
\bar C_{j_3j_2j_1}^{100}=-
\int\limits_{-1}^{1}P_{j_3}(z)
\int\limits_{-1}^{z}P_{j_2}(y)
\int\limits_{-1}^{y}
P_{j_1}(x)(x+1)dx dy dz,
\end{equation}
\begin{equation}
\label{jjj4}
\bar C_{j_3j_2j_1}^{010}=-
\int\limits_{-1}^{1}P_{j_3}(z)
\int\limits_{-1}^{z}P_{j_2}(y)(y+1)
\int\limits_{-1}^{y}
P_{j_1}(x)dx dy dz,
\end{equation}
\begin{equation}
\label{jjj5}
\bar C_{j_3j_2j_1}^{001}=-
\int\limits_{-1}^{1}P_{j_3}(z)(z+1)
\int\limits_{-1}^{z}P_{j_2}(y)
\int\limits_{-1}^{y}
P_{j_1}(x)dx dy dz,
\end{equation}
\begin{equation}
\label{jjj6}
\bar C_{j_5j_4 j_3 j_2 j_1}=
\int\limits_{-1}^{1}P_{j_5}(v)
\int\limits_{-1}^{v}P_{j_4}(u)
\int\limits_{-1}^{u}P_{j_3}(z)
\int\limits_{-1}^{z}P_{j_2}(y)
\int\limits_{-1}^{y}
P_{j_1}(x)dx dy dz du dv,
\end{equation}
\noindent
where $P_i(x)$ $(i=0, 1, 2,\ldots)$ is the Legendre polynomial and
$$
\phi_i(x)=
\sqrt{\frac{2i+1}{\Delta}}P_i\left(\left(x-\tau_p-\frac{\Delta}{2}\right)
\frac{2}{\Delta}\right),\ \ \ i=0, 1, 2,\ldots
$$
The Fourier--Legendre coefficients
\begin{equation}
\label{sss1}
\bar C_{j_2 j_1}^{01},\ \bar C_{j_2 j_1}^{10},\
\bar C_{j_3 j_2 j_1},\ \bar C_{j_4 j_3 j_2 j_1},\ \bar C_{j_3 j_2 j_1}^{001},\
\bar C_{j_3 j_2 j_1}^{010},\ \bar C_{j_3 j_2 j_1}^{100},\
\bar C_{j_5 j_4 j_3 j_2 j_1}
\end{equation}
\noindent
can be calculated exactly before the start of the numerical method (\ref{4.470}).
This calculation can be done
with Python, Derive, or Maple.
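For instance, a minimal sympy sketch of such a calculation (the function names are ours)
evaluates $\bar C_{j_3 j_2 j_1}$ from (\ref{jjj1}) and the rescaled coefficient
$C_{j_3 j_2 j_1}$ from (\ref{hhh1}) exactly:
\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z')

def bar_C3(j3, j2, j1):
    # exact Fourier--Legendre coefficient bar C_{j3 j2 j1} from (jjj1)
    inner = sp.integrate(sp.legendre(j1, x), (x, -1, y))
    middle = sp.integrate(sp.legendre(j2, y) * inner, (y, -1, z))
    return sp.integrate(sp.legendre(j3, z) * middle, (z, -1, 1))

def C3(j3, j2, j1, Delta):
    # rescaled coefficient C_{j3 j2 j1} from (hhh1)
    scale = sp.sqrt((2 * j1 + 1) * (2 * j2 + 1) * (2 * j3 + 1)) / 8
    return scale * Delta ** sp.Rational(3, 2) * bar_C3(j3, j2, j1)

Delta = sp.Symbol('Delta', positive=True)
print(bar_C3(0, 0, 0))                     # -> 4/3
print(sp.simplify(C3(0, 0, 0, Delta)))     # -> Delta**(3/2)/6
\end{verbatim}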
In \cite{2006}, \cite{2017}-\cite{2013}, \cite{arxiv-3}
several tables with these coefficients can be found.
Moreover, in \cite{Kuz-Kuz}, \cite{Mikh-1}
a database with 270{,}000 exactly calculated
Fourier--Legendre coefficients (including (\ref{sss1}))
is described.
This database is used in a software package
written in the Python programming language
for the implementation of high-order strong
numerical schemes for Ito SDEs with non-commutative noise \cite{Kuz-Kuz}, \cite{Mikh-1}.
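As a minimal illustration of one building block of such an implementation (not the package
itself; the names are ours), the following sketch simulates the approximation
$I_{(00)\tau_{p+1},\tau_p}^{*(i_1 i_2)q}$ from (\ref{ccc4}) for $i_1\ne i_2$ together
with the increments (\ref{ccc1}) on a single integration step.
\begin{verbatim}
import numpy as np

def step_integrals(Delta, q, rng):
    # One-step simulation of sqrt(Delta)*zeta_0^{(i)} from (ccc1) and of
    # I^{*(i1 i2)q}_{(00)} from (ccc4) for i1 != i2.
    zeta1 = rng.standard_normal(q + 1)     # zeta_j^{(i1)}, j = 0, ..., q
    zeta2 = rng.standard_normal(q + 1)     # zeta_j^{(i2)}
    I00 = 0.5 * Delta * (zeta1[0] * zeta2[0]
          + sum((zeta1[i - 1] * zeta2[i] - zeta1[i] * zeta2[i - 1])
                / np.sqrt(4 * i * i - 1) for i in range(1, q + 1)))
    return np.sqrt(Delta) * zeta1[0], np.sqrt(Delta) * zeta2[0], I00

rng = np.random.default_rng(0)
Delta, q, n = 0.01, 5, 100000
samples = np.array([step_integrals(Delta, q, rng)[2] for _ in range(n)])
# mean ~ 0; variance -> Delta**2/2 as q -> infinity (slightly less for finite q)
print(samples.mean(), samples.var())
\end{verbatim}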
Note that the mentioned Fourier--Legendre coefficients
do not depend on the step of integration $\tau_{p+1}-\tau_p$ of the
numerical scheme,
which need not be constant in the general case.
On the basis of the presented
expansions (\ref{ccc1})--(\ref{ccc12}) of
iterated Stratonovich stochastic integrals we
can see that increasing the multiplicity of these integrals
or the degree indices of their weight functions
increases
the order of smallness with respect to $\Delta$ in the mean-square sense
for iterated stochastic integrals.
This leads to a sharp decrease
in the number of terms (the numbers $q$)
in the expansions of iterated Stratonovich stochastic
integrals that are required to achieve acceptable
approximation accuracy.
Generally speaking, the minimal values of $q$ that guarantee the condition
(\ref{ors})
for each approximation (\ref{ccc1})--(\ref{ccc12})
are different and decrease sharply as the
order of smallness with respect to $\Delta$ in the mean-square sense for
iterated stochastic integrals grows.
Consider in detail the question on calculation and estimation
of the mean-square approximation error for the iterated
Stratonovich stochastic integrals (\ref{str11}) (see \cite{2018a}, Chapter~5 for details).
Let us consider the following iterated Ito stochastic integrals
$$
I_{(l_1\ldots \hspace{0.2mm}l_k)T,t}^{(i_1\ldots i_k)}
=
\int\limits_t^T
(t-t_k)^{l_k} \ldots \int\limits_t^{t_{2}}
(t-t_1)^{l_1} d{\bf f}_{t_1}^{(i_1)}\ldots
d{\bf f}_{t_k}^{(i_k)},
$$
\noindent
where $i_1,\ldots, i_k=1,\dots,m,$\ \ $l_1,\ldots,l_k=0, 1, 2,$\ \
$k=1, 2,\ldots, 5.$
According to the standard relations between iterated
Ito and Stratonovich stochastic integrals, we obtain w.~p.~1
(with probability 1)
$$
I_{(00)\tau_{p+1},\tau_p}^{(i_1 i_2)}=
I_{(00)\tau_{p+1},\tau_p}^{*(i_1 i_2)}-
\frac{1}{2}{\bf 1}_{\{i_1=i_2\}}\Delta,
$$
$$
I_{(10)\tau_{p+1},\tau_p}^{(i_1 i_2)}=
I_{(10)\tau_{p+1},\tau_p}^{*(i_1 i_2)}+
\frac{1}{4}{\bf 1}_{\{i_1=i_2\}}\Delta^2,
$$
$$
I_{(01)\tau_{p+1},\tau_p}^{(i_1 i_2)}=
I_{(01)\tau_{p+1},\tau_p}^{*(i_1 i_2)}+
\frac{1}{4}{\bf 1}_{\{i_1=i_2\}}\Delta^2.
$$
Moreover,
the mean-square approximation error
for the iterated Ito stochastic integral
$$
I_{(00)\tau_{p+1},\tau_p}^{(i_1 i_2)}\ \ \ (i_1\ne i_2)
$$
\noindent
equals the
mean-square approximation error
for the iterated Stratonovich stochastic integral (see \cite{2018a}, Sect.~5.1
for details)
$$
I_{(00)\tau_{p+1},\tau_p}^{*(i_1 i_2)}\ \ \ (i_1\ne i_2).
$$
From Theorem 3 we obtain \cite{2006}-\cite{200a},
\cite{301a}-\cite{arxiv-6}
\begin{equation}
\label{1}
{\sf M}\Biggl\{\left(I_{(00)\tau_{p+1},\tau_p}^{(i_1 i_2)}-
I_{(00)\tau_{p+1},\tau_p}^{(i_1 i_2)q}
\right)^2\Biggr\}
=\frac{\Delta^2}{2}\Biggl(\frac{1}{2}-\sum_{i=1}^q
\frac{1}{4i^2-1}\Biggr)\ \ \ (i_1\ne i_2),
\end{equation}
$$
{\sf M}\Biggl\{\left(I_{(10)\tau_{p+1},\tau_p}^{(i_1 i_2)}-
I_{(10)\tau_{p+1},\tau_p}^{(i_1 i_2)
q}
\right)^2\Biggr\}=
{\sf M}\Biggl\{\left(I_{(01)\tau_{p+1},\tau_p}^{(i_1 i_2)}-
I_{(01)\tau_{p+1},\tau_p}^{(i_1 i_2)q}\right)^2\Biggr\}=
$$
$$
=\frac{\Delta^4}{16}\Biggl(\frac{5}{9}-
2\sum_{i=2}^{q}\frac{1}{4i^2-1}-
\sum_{i=1}^{q}
\frac{1}{(2i-1)^2(2i+3)^2}
-\Biggr.
$$
\begin{equation}
\label{2}
\Biggl.-
\sum_{i=0}^{q}\frac{(i+2)^2+(i+1)^2}{(2i+1)(2i+5)(2i+3)^2}
\Biggr)\ \ \ (i_1\ne i_2).
\end{equation}
The case $i_1=i_2$ is considered in \cite{2018a}, Sect.~5.1.
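The right-hand sides of (\ref{1}) and (\ref{2}) are easily evaluated numerically;
the following sketch (names ours) computes them as functions of $q$, which is convenient
when choosing the truncation numbers $q$ for a prescribed accuracy.
\begin{verbatim}
def err_I00(q, Delta):
    # right-hand side of (1): error of I_{(00)} for i1 != i2
    return Delta ** 2 / 2 * (0.5 - sum(1.0 / (4 * i * i - 1)
                                       for i in range(1, q + 1)))

def err_I10_I01(q, Delta):
    # right-hand side of (2): error of I_{(10)} and I_{(01)} for i1 != i2
    s = (5.0 / 9
         - 2 * sum(1.0 / (4 * i * i - 1) for i in range(2, q + 1))
         - sum(1.0 / ((2 * i - 1) ** 2 * (2 * i + 3) ** 2)
               for i in range(1, q + 1))
         - sum(((i + 2) ** 2 + (i + 1) ** 2)
               / ((2 * i + 1) * (2 * i + 5) * (2 * i + 3) ** 2)
               for i in range(0, q + 1)))
    return Delta ** 4 / 16 * s

for q in (1, 5, 10, 50):
    print(q, err_I00(q, 1.0), err_I10_I01(q, 1.0))
\end{verbatim}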
Let us estimate the mean-square approximation error
for the iterated Stratonovich stochastic integrals (\ref{str11})
of multiplicities $k\ge 3.$
From (\ref{1}) ($i_1\ne i_2$) we get
$$
{\sf M}\left\{\left(I_{(00)\tau_{p+1},\tau_p}^{*(i_1 i_2)}-
I_{(00)\tau_{p+1},\tau_p}^{*(i_1 i_2)q}
\right)^2\right\}=\frac{\Delta^2}{2}
\sum\limits_{i=q+1}^{\infty}\frac{1}{4i^2-1}\le
$$
\begin{equation}
\label{teac}
\le \frac{\Delta^2}{2}\int\limits_{q}^{\infty}
\frac{1}{4x^2-1}dx
=-\frac{\Delta^2}{8}{\rm ln}\left|
1-\frac{2}{2q+1}\right|\le C_1\frac{\Delta^2}{q},
\end{equation}
\noindent
where constant $C_1$ does not depend on $\Delta$.
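For completeness, the elementary integral used in (\ref{teac}) can be evaluated
explicitly (a routine partial fraction decomposition; this step is implicit in the
estimate above):
$$
\int\limits_{q}^{\infty}\frac{dx}{4x^2-1}=
\frac{1}{4}\,{\rm ln}\,\frac{2x-1}{2x+1}\,\biggr|_{x=q}^{x=\infty}=
-\frac{1}{4}\,{\rm ln}\left|1-\frac{2}{2q+1}\right|=
\frac{1}{2(2q+1)}+O\left(\frac{1}{q^2}\right),
$$
\noindent
which, after multiplication by $\Delta^2/2$, yields the bound $C_1\Delta^2/q$ in (\ref{teac}).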
As was mentioned above,
the value $\Delta$ plays the role of the integration step
in the numerical procedures for Ito SDEs,
so this value is sufficiently small.
Keeping this circumstance in mind, it is easy to see that there
exists a constant $C_2$ such that
\begin{equation}
\label{teac3}
{\sf M}\left\{\left(I_{(l_1\ldots l_k)\tau_{p+1},\tau_p}^{*(i_1\ldots i_k)}-
I_{(l_1\ldots l_k)\tau_{p+1},\tau_p}^{*(i_1\ldots i_k)q}\right)^2\right\}
\le C_2 {\sf M}\left\{\left(I_{(00)\tau_{p+1},\tau_p}^{*(i_1 i_2)}-
I_{(00)\tau_{p+1},\tau_p}^{*(i_1 i_2)q}\right)^2\right\},
\end{equation}
\noindent
where $I_{(l_1\ldots l_k)\tau_{p+1},\tau_p}^{*(i_1\ldots i_k)q}$
is the approximation of the iterated Stratonovich stochastic integral
(\ref{str11}) for $k\ge 3$.
From (\ref{teac}) and (\ref{teac3}) we finally have
\begin{equation}
\label{teac4}
{\sf M}\left\{\left(I_{(l_1\ldots l_k)\tau_{p+1},\tau_p}^{*(i_1\ldots i_k)}-
I_{(l_1\ldots l_k)\tau_{p+1},\tau_p}^{*(i_1\ldots i_k)q}\right)^2\right\}
\le K \frac{\Delta^2}{q},
\end{equation}
\noindent
where constant $K$ does not depend on $\Delta.$
The same idea can be found in \cite{KlPl2}
for the case of trigonometric functions.
Note that, in contrast to the estimate (\ref{teac4}),
the constant $C$ in Theorems 7--9 does not depend on $q.$
Essentially more information about the numbers $q$ can be obtained
by another approach. We have
$$
I_{(l_1\ldots l_k)\tau_{p+1},\tau_p}^{*(i_1\ldots i_k)}=
I_{(l_1\ldots l_k)\tau_{p+1},\tau_p}^{(i_1\ldots i_k)}\ \ \ \hbox{w.~p.~1}
$$
\noindent
for pairwise different $i_1,\ldots,i_k=1,\ldots,m$.
Then, for pairwise different $i_1,\ldots,i_5=1,\ldots,m$
from (\ref{qq1}) we obtain
$$
{\sf M}\left\{\left(
I_{(01)\tau_{p+1},\tau_p}^{*(i_1i_2)}-
I_{(01)\tau_{p+1},\tau_p}^{*(i_1i_2)q}\right)^2\right\}=
\frac{\Delta^{4}}{4}-\sum_{j_1,j_2=0}^{q}
\left(C_{j_2j_1}^{01}\right)^2,
$$
$$
{\sf M}\left\{\left(
I_{(10)\tau_{p+1},\tau_p}^{*(i_1i_2)}-
I_{(10)\tau_{p+1},\tau_p}^{*(i_1i_2)q}\right)^2\right\}=
\frac{\Delta^{4}}{12}-\sum_{j_1,j_2=0}^{q}
\left(C_{j_2j_1}^{10}\right)^2,
$$
$$
{\sf M}\left\{\left(
I_{(000)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)}-
I_{(000)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)q}\right)^2\right\}=
\frac{\Delta^{3}}{6}-\sum_{j_3,j_2,j_1=0}^{q}
C_{j_3j_2j_1}^2,
$$
$$
{\sf M}\left\{\left(
I_{(0000)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3 i_4)}-
I_{(0000)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3 i_4)q}\right)^2\right\}=
\frac{\Delta^{4}}{24}-\sum_{j_1,j_2,j_3,j_4=0}^{q}
C_{j_4j_3j_2j_1}^2,
$$
$$
{\sf M}\left\{\left(
I_{(100)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)}-
I_{(100)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)q}\right)^2\right\}=
\frac{\Delta^{5}}{60}-\sum_{j_1,j_2,j_3=0}^{q}
\left(C_{j_3j_2j_1}^{100}\right)^2,
$$
$$
{\sf M}\left\{\left(
I_{(010)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)}-
I_{(010)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)q}\right)^2\right\}=
\frac{\Delta^{5}}{20}-\sum_{j_1,j_2,j_3=0}^{q}
\left(C_{j_3j_2j_1}^{010}\right)^2,
$$
$$
{\sf M}\left\{\left(
I_{(001)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)}-
I_{(001)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)q}\right)^2\right\}=
\frac{\Delta^5}{10}-\sum_{j_1,j_2,j_3=0}^{q}
\left(C_{j_3j_2j_1}^{001}\right)^2,
$$
$$
{\sf M}\left\{\left(
I_{(00000)\tau_{p+1},\tau_p}^{*(i_1 i_2 i_3 i_4 i_5)}-
I_{(00000)\tau_{p+1},\tau_p}^{*(i_1 i_2 i_3 i_4 i_5)q}\right)^2\right\}=
\frac{\Delta^{5}}{120}-\sum_{j_1,j_2,j_3,j_4,j_5=0}^{q}
C_{j_5 j_4 j_3 j_2 j_1}^2.
$$
For example \cite{2006}-\cite{2013},
$$
{\sf M}\left\{\left(
I_{(000)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)}-
I_{(000)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)6}\right)^2\right\}=
\frac{\Delta^{3}}{6}-\sum_{j_3,j_2,j_1=0}^{6}
C_{j_3j_2j_1}^2
\approx
0.01956000\Delta^3,
$$
$$
{\sf M}\left\{\left(
I_{(100)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)}-
I_{(100)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)2}\right)^2\right\}=
\frac{\Delta^{5}}{60}-\sum_{j_1,j_2,j_3=0}^{2}
\left(C_{j_3j_2j_1}^{100}\right)^2
\approx
0.00815429\Delta^5,
$$
$$
{\sf M}\left\{\left(
I_{(010)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)}-
I_{(010)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)2}\right)^2\right\}=
\frac{\Delta^{5}}{20}-\sum_{j_1,j_2,j_3=0}^{2}
\left(C_{j_3j_2j_1}^{010}\right)^2
\approx
0.01739030\Delta^5,
$$
$$
{\sf M}\left\{\left(
I_{(001)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)}-
I_{(001)\tau_{p+1},\tau_p}^{*(i_1i_2 i_3)2}\right)^2\right\}=
\frac{\Delta^5}{10}-\sum_{j_1,j_2,j_3=0}^{2}
\left(C_{j_3j_2j_1}^{001}\right)^2
\approx 0.02528010\Delta^5,
$$
$$
{\sf M}\left\{\left(
I_{(0000)\tau_{p+1},\tau_p}^{*(i_1i_2i_3 i_4)}-
I_{(0000)\tau_{p+1},\tau_p}^{*(i_1i_2i_3 i_4)2}\right)^2\right\}=
\frac{\Delta^{4}}{24}-\sum_{j_1,j_2,j_3,j_4=0}^{2}
C_{j_4 j_3 j_2 j_1}^2\approx
0.02360840\Delta^4,
$$
$$
{\sf M}\left\{\left(
I_{(00000)\tau_{p+1},\tau_p}^{*(i_1i_2i_3i_4 i_5)}-
I_{(00000)\tau_{p+1},\tau_p}^{*(i_1i_2i_3i_4 i_5)1}\right)^2\right\}=
\frac{\Delta^5}{120}-\sum_{j_1,j_2,j_3,j_4,j_5=0}^{1}
C_{j_5 j_4 j_3 j_2 j_1}^2\approx
0.00759105\Delta^5.
$$
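The numerical constants above can be reproduced directly from the definitions; for
example, the following sympy sketch (illustrative only; names ours) recomputes the
coefficient of $\Delta^5$ in the mean-square error of
$I_{(100)\tau_{p+1},\tau_p}^{*(i_1 i_2 i_3)2}$ from (\ref{jjj3}) and the scaling (\ref{hhh5}).
\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z')

def bar_C100(j3, j2, j1):
    # bar C^{100}_{j3 j2 j1} from (jjj3)
    inner = sp.integrate(sp.legendre(j1, x) * (x + 1), (x, -1, y))
    middle = sp.integrate(sp.legendre(j2, y) * inner, (y, -1, z))
    return -sp.integrate(sp.legendre(j3, z) * middle, (z, -1, 1))

q = 2
acc = sp.Integer(0)
for j1 in range(q + 1):
    for j2 in range(q + 1):
        for j3 in range(q + 1):
            # scaling from (hhh5): C^{100} = sqrt(...)/16 * Delta^{5/2} * bar C^{100}
            scale = sp.Rational((2*j1 + 1) * (2*j2 + 1) * (2*j3 + 1), 256)
            acc += scale * bar_C100(j3, j2, j1) ** 2

# coefficient of Delta^5 in the error of I^{*}_{(100)} for q = 2,
# cf. the value 0.00815429 quoted above
print(float(sp.Rational(1, 60) - acc))
\end{verbatim}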
\end{document}
\begin{document}
\title{ Hypergroups with Unique $\alpha$-Means
}
\author{Ahmadreza Azimifard \\
\footnotesize\texttt{}
}
\date{}
\maketitle
\begin{description}
\item Abstract:
Let $K$ be a commutative hypergroup and $\alpha\in \widehat{K}$. We
show that $K$ is $\alpha$-amenable with the unique $\alpha$-mean
$m_\alpha$ if and only if $m_\alpha\in L^1(K)\cap L^2(K)$ and
$\alpha$ is isolated in $\widehat{K}$. In contrast to the case of
amenable noncompact locally compact groups, examples of
polynomial hypergroups with unique $\alpha$-means ($\alpha\not=1$)
are given. Further examples emphasize
that the $\alpha$-amenability of hypergroups depends
heavily on the asymptotic behavior of Haar measures and characters.
\footnote{
Parts of the paper
are taken from the author's Ph.D.~thesis at the Technical University of Munich.}
\item R\'esum\'e:
Soit $K$ un hypergroupe commutatif et $\alpha\in \widehat{K}$ .
Nous montrons que $K$ est $\alpha$-moyennable avec unicité de l'$\alpha$-moyenne
$m_\alpha$ si et seulement si $m_\alpha\in L^1(K)\cap L^2(K)$ et
$\alpha$ est isolé dans $\widehat{K}$.
Contrairement au cas des groupes moyennables localement compacts mais non compacts,
des exemples d'hyper-groupes polynomiaux avec unicité des $\alpha$-moyennes
($\alpha\not=1$) sont donnés. Nous montrons à l'aide d'autres examples
que l'$\alpha$-moyennabilité des hypergroupes dépend
fortement de leurs mesures de Haar ainsi que du comportement
du caractères.
\end{description}
{
\footnotesize{
\begin{tabular}{lrl}
\bf {Keywords.} & \multicolumn{2}{l}
{ Hypergroups: orthogonal polynomial, of Nevai classes.}\\
&{ $\alpha$-Amenable Hypergroups.}\\
\end{tabular}
{\bf AMS Subject Classification 2000:}{ primary 43A62, 43A07,} {secondary 46H20.}
}}
\section{Introduction}
Recently the notion of $\alpha$-amenable hypergroups was
introduced and studied in \cite{f.l.s}. Let $K$ be a commutative locally compact hypergroup and
let $L^1(K)$
denote the hypergroup algebra.
Assume that
$\alpha\in \widehat{K}$ and denote by $I(\alpha)$
the maximal ideal in $L^1(K)$ generated by $\alpha$.
As shown in \cite{f.l.s},
$K$ is $\alpha$-amenable if and only if either
$I(\alpha)$ has a b.a.i. (bounded approximate identity) or $K$ satisfies
the modified Reiter's condition of $P_1$-type in $\alpha$.
Commutative hypergroups are always 1-amenable \cite{Ska92}, whereas
a large class of
non $\alpha$-amenable
hypergroups, $\alpha\not=1$, are given in \cite{thesis, f.l.s}.
It is worth noting that $1\in \mbox{ supp }\pi_K$ does not
hold in general, where
$\mbox{ supp }\pi_K$ denotes the support
of the Plancherel measure on $\widehat{K}$ \cite{Jew75, Ska92}.
As in the case of locally compact groups \cite{Pat88},
if $K$ is a noncompact locally compact amenable hypergroup, then
the cardinality of (1-)means is $2^{2^{d}}$,
where $d$ is the smallest cardinality
of a cover of $K$ by compact sets \cite{Ska92}.
However, it is well known that $K$ has a unique (1-)mean if and only
if $K$ is compact \cite{Pat88, Ska92}. Hence,
$\mbox{ supp }\pi_K=\widehat{K}$ and $K$ is
$\alpha$-amenable for every $\alpha\in \widehat{K}$ \cite{BloHey94, f.l.s}.
For an $\alpha$-amenable hypergroup $K$ with a unique $\alpha$-mean,
one can pose the natural question of whether $K$
is compact or $K$ is $\beta$-amenable when $\alpha\not=\beta \in \widehat{K}$.
Theorem \ref{main.theorem} answers
this question completely. In addition, examples
of polynomial hypergroups show that
the $\alpha$-amenability of hypergroups depends on
the asymptotic behavior of the Haar measures and
characters. Furthermore, the $\alpha$-amenability of $K$ with
a unique $\alpha$-mean ($\alpha\not=1$), even for every $\alpha\in \widehat{K}\setminus{\{1\}}$,
does not imply the compactness of $K$; see Section \ref{examples}.
Different axioms for hypergroups are given
in \cite{ Dun73, Jew75, Spec75}. However, in this paper
we refer to Jewett's axioms in \cite{Jew75}.
\section{Preliminaries}
Let $(K, \omega, \sim )$ be a locally compact
hypergroup, where
$\omega:K\times K\rightarrow M^1(K)$ defined by $(x,y)\mapsto \omega(x,y)$,
and $\sim:K\rightarrow K$ defined by $x\mapsto \tilde{x}$,
denote the convolution and involution on $K$,
where $M^1(K)$ stands for the set of all probability
measures on $K$. The hypergroup $K$ is called commutative
if $\omega{(x,y)}=\omega{(y,x)}$ for every $x, y\in K$.
Throughout this paper $K$ is a commutative hypergroup.
Let $C_c(K)$ be the space of all continuous
functions on $K$ with compact support.
The translation of $f\in C_c(K)$ at
the point $x\in K$, $T_xf$, is defined by
\begin{equation}\notag
T_xf(y):=\int_K f(t)d\omega{(x,y)}(t), \mbox{ for every } y\in K.
\end{equation}
The commutativity of $K$ ensures the existence of a
Haar measure $m$ on $K$, which is unique up to a
multiplicative constant \cite{Spec75}.
Let $(L^p(K), \|\cdot \|_p)$ $(p=1, 2)$
denote the usual Banach space of Borel measurable functions on $K$ \cite[6.2]{Jew75}.
For $f, g\in L^1(K)$ we may define the convolution and
involution by
$ f*g(x):=\int_K f(y)T_{\tilde{y}}g(x)dm(y)$ ($m$-a.e. on $K$)
and
$f^\ast(x)=\overline{f(\tilde x)}$,
respectively, so that $\left(L^1(K), \|\cdot \|_1\right)$ becomes a Banach $\ast$-algebra.
If $K$ is discrete, then $L^1(K)$ has an
identity element. Otherwise $L^1(K)$ has a
b.a.i., i.e. there exists a net $\{e_i\}_i$
of functions in $L^1(K)$ with $\|e_i\|_1\leq M$,
for some $M>0$, such that $\|f \ast e_i-f\|_1\rightarrow 0$
as $i\rightarrow \infty$ \cite{BloHey94}.
The set of all multiplicative linear functionals on
$L^1(K)$, i.e. the maximal ideal space of $L^1(K)$ \cite{BonDun73},
can be identified with
\begin{equation}\notag
\mathfrak{X}^b(K):=\left\{
\alpha\in C^b(K): \alpha\not=0, \;\omega(x,y)
(\alpha)=\alpha(x)\alpha(y), \; \forall\;x,y\in K
\right\}
\end{equation}
via
$ \varphi_\alpha(f):=\int_K f(x)\overline{\alpha(x)}dm(x)$, for every $f\in L^1(K)$.
$\mathfrak{X}^b(K)$ is a
locally compact Hausdorff space with
the compact-open topology \cite{BloHey94}.
$\mathfrak{X}^b(K)$ and
its subset
\[
\widehat{K}:=\{\alpha\in \mathfrak{X}^b(K):
\alpha(\tilde x)=\overline{\alpha(x)}, \; \forall x\in K\}
\]
are considered as the character spaces of $K$.
The maximal ideal in $L^1(K)$ generated by the character $\alpha$ is
$I(\alpha):=\{f\in L^1(K):\varphi_\alpha(f) =0\}$.
The Fourier transform of $f\in L^1(K)$,
$\widehat{f}\in C_0(\widehat{K})$, is
$\widehat{f}(\alpha):=\varphi_\alpha(f)$
for every $\alpha\in \widehat{K}$.
There exists a unique (up to a multiplicative constant)
regular positive Borel measure $\pi_K$ on $\widehat{K}$
with $\mbox{ supp }\pi_K=\mathcal{S}$
such that
$\int_K |f(x)|^2dm(x)=\int_{\mathcal{S}}|\widehat{f}(\alpha)|^2 d\pi_K(\alpha)$
for all $f\in L^1(K)\cap L^2(K) $ \cite{BloHey94}.
The extension of
the Fourier transform
defined on $L^1(K)\cap L^2(K)$
to all of $L^2(K)$ onto $L^2(\widehat{K})$ is the Plancherel transform
which is an isometric isomorphism.
Observe that $\mathcal{S}$ is a
nonvoid closed subset of $\widehat{K}$,
and the constant function 1 is
in general not contained in $\mathcal{S}$ \cite[9.5]{Jew75}.
The inverse Fourier transform for
$\varphi\in L^1(\widehat{K})$ is given by
$\check{\varphi}(x)=\int_{\mathcal{S}} \varphi(\alpha)\alpha(x)d\pi_K(\alpha)$
for every $x\in K$. Then
$\check{\varphi}\in C_0(K)$ and
if $\check{\varphi}\in L^1(K)$
then
$\widehat{\check{\varphi}}=\varphi$ \cite{BloHey94}.
Let $L^1(K)^\ast$ and $L^1(K)^{\ast\ast}$ denote the dual
and the bidual spaces of $L^1(K)$ respectively. As usual,
$L^1(K)^\ast$ can be identified with the space $L^\infty(K)$ of essentially bounded Borel
measurable complex-valued functions on $K$.
We may define the Arens product on $L^1(K)^{\ast\ast}$ as follows:
\begin{equation}\notag
\langle m\cdot m', f\rangle=\langle m, m'\cdot f\rangle
\end{equation}
in which
$\langle m'\cdot f, g \rangle=\langle m', f\cdot g\rangle$
and $\langle f\cdot g, h\rangle=\langle f, g \ast h\rangle$
for all $m, m'\in L^1(K)^{\ast\ast}$, $f\in L^\infty(K)$
and $g,h\in L^1(K)$.
$L^1(K)^{\ast\ast}$ with the Arens product is
a noncommutative Banach algebra in general \cite{BonDun73, Civ}.
From the definitions of the Arens product and the convolution
we may have
$g\cdot f={g}^{\ast}\ast f$ and $m\cdot(f \cdot g)=(m\cdot f)\cdot g.$
\begin{definition}\label{ch.2.14}
\emph{Let $K$ be a commutative hypergroup
and $\alpha \in \widehat{K}$. $K$ is
called $\alpha$-amenable if there
exists a bounded linear functional
$m_\alpha$ on $L^{\infty }({{K}})$ with
the following properties:}
\begin{itemize}
\item[\emph{(i)}]$m_\alpha(\alpha)=1$,
\item[\emph{(ii)}]$m_\alpha({\delta_{\tilde x}}\ast f)
=
{\alpha(x)} m_\alpha (f), $ \hspace{.2in}\emph{for
every }$f\in L^{\infty }({{K}})$ \emph{and }$x \in
K$.
\end{itemize}
\end{definition}
\begin{comment}
$K$ satisfies the $P_1(\alpha)$-condition,
if there exists a $M>0$ such that for every
$\mbox{var}epsilon>0$ and
compact $C\subset K$ there exists a
$g\in L^1(K)$ with $\|g\|_1\leq M$
$\widehat{g}(\alpha)=1$ such that
$K$ is $\alpha$-amenable
if and only if $I(\alpha)$ has a b.a.i.
well as $K$ satisfies the $P_1(\alpha)$-condition \cite{f.l.s}.
$K$ satisfies the $P_2(\alpha)$-condition,
if for every $\mbox{var}epsilon>0$ and
every compact $C\subset K$ there exists
a $g\in L^2(K)$ with $\|g\|_2=1$ such that
$\|T_{\tilde{y}}g-\overline{\alpha(y)}g\|_2<\mbox{var}epsilon$
for all $y\in C$.
$K$ satisfies the $P_2(\alpha)$-condition if and only if
$\alpha\in \mbox{ supp}\pi_K$ \cite{FiLa99}.
For $\alpha\in \widehat{K}$, $K$ is $\alpha$-amenable
if and only if $I(\alpha)$ has
a bounded approximate identity as well
as if $K$
satisfies the modified the Reiter's
condition of $P_1$-type in the
$\alpha$, see \cite{f.l.s}.
For the sake of completeness we recall
the Reiter's condition of $P_p$-type, $p=1,2$.
\end{comment}
For example, if $K$ is compact or $L^1(K)$ is amenable, then $K$
is $\alpha$-amenable, for every $\alpha\in \widehat{K}$ \cite{f.l.s, Ska92}.
\section{Main Theorem}
\begin{theorem}\label{main.theorem}
\emph{
Let $K$ be a hypergroup and $\alpha\in \widehat{K}$.
If $K$ is
$\alpha$-amenable with the
unique $\alpha$-mean $m_\alpha$, then}
\begin{itemize}
\item[\emph{(i)}] \emph{
$m_\alpha$ and $\alpha$ belong to
$L^1(K)\cap L^2(K)$ and
$\alpha\in \mathcal{S} $
is isolated. Further,
$m_\alpha^2=m_\alpha$.
}
\item[\emph{(ii)}] \emph{
$m_\alpha=\pi(\alpha)/\|\alpha\|_2^2$,
where
$\pi:L^1(K)\rightarrow L^1(K)^{\ast\ast}$
is the canonical embedding.
}
\item[\emph{(iii)}] \emph{
If $\alpha$ is positive,
then $\alpha=1$, hence $K$ is compact.
}
\end{itemize}
\end{theorem}
\begin{proof}
Since $K$ is $\alpha$-amenable with
the unique $\alpha$-mean $m_\alpha$,
$m_\alpha(\alpha)=1$, and $ f\cdot{g} ={g}\cdot f$, we have
\begin{align}\notag
\langle m_\alpha, f\cdot g \rangle &=\langle m_\alpha, {g}^\ast \ast f\rangle\\ \notag
&=\langle m_\alpha, \int_K (\delta_{x}\ast f) g^\ast(x) dm(x)\rangle \\ \notag
& =\int_K \langle m_\alpha,\delta_{x}\ast f\rangle g^\ast(x) dm(x) \\\notag
& =\widehat{g^{\ast} }(\alpha) \langle m_\alpha, f\rangle, \notag
\end{align}
for every $f\in L^{\infty}(K)$ and $g\in L^1(K)$.
Moreover, if $n\in L^1(K)^{\ast \ast}$,
then
\[
\langle m_\alpha\cdot n,f\cdot{g} \rangle
=
\langle m_\alpha ,n\cdot(f\cdot{g})\rangle
=
\langle m_\alpha, (n\cdot f)\cdot{g}\rangle
=
{\widehat{g^{\ast}} (\alpha)}
\langle m_\alpha, n\cdot f\rangle
=
{\widehat{g^\ast} (\alpha)}
\langle m_\alpha\cdot n ,f\rangle.
\]
Since the $\alpha$-mean $m_\alpha$ is unique and
the functional associated to $\alpha$
on $L^1(K)^{\ast\ast}$
is multiplicative \cite{Civ},
$m_\alpha\cdot n=\lambda_n\cdot m_\alpha$,
where $\lambda_n=\langle n, \alpha\rangle$.
Let $(n_i)$ be a net in $L^1(K)^{\ast\ast}$
converging to $n$ in the $w^\ast$-topology.
Then the convergence $\lambda_{n_i}\rightarrow \lambda_n$, as $i\rightarrow \infty$, implies that
the mapping $n\rightarrow {m_\alpha}\cdot n$ is
$w^*$-$w^*$ continuous on
$L^1(K)^{\ast\ast}$,
hence
$m_\alpha$ is in $L^1(K)$, the topological centre of
$L^1(K)^{\ast\ast}$ \cite{Kamyabi.2}.
Since $\widehat{m_\alpha}(\alpha)=1$,
$g\cdot m_\alpha={\widehat{g^\ast}(\alpha)}m_\alpha$
for every $g\in L^1(K)$,
and the Arens product is continuous
in the first variable, we obtain
$m_\alpha^2=m_\alpha.$
Let $\beta\in \widehat{K}$.
The equality
$\beta(x)m_\alpha(\beta)
=m_\alpha(T_x \beta)
=\alpha(x)m_\alpha(\beta)$,
for all $x \in K$,
implies that
$m_\alpha(\beta)=\delta_\alpha(\beta)$.
Since $\widehat{m_\alpha}\in C_0(\widehat{K})$, $\alpha$ is isolated in
$\widehat{K}$ and $\widehat{m_\alpha}\in L^1(\widehat{K})$.
The Fourier inversion theorem yields
$m_\alpha=\widehat{m_\alpha}{^\vee}$, hence
$\alpha\in \mathcal{S}$. Moreover,
since the Plancherel transform is an isometric
isomorphism of $L^2(K)$
onto $L^2(\widehat{K})$
and $\widehat{m_\alpha}(\beta)=\delta_{\beta}(\alpha)$,
$m_\alpha\in L^2(K)$.\\
(ii) Plainly $\delta_x\cdot m_\alpha=\alpha(x)m_\alpha$,
for every $x\in K$, so it follows from part (i) that
$\alpha\in L^1(K)\cap L^2(K)$.
Let $n_\alpha=\pi(\alpha)/\|\alpha\|_2^2$.
We shall prove $m_\alpha=n_\alpha$.
Apparently $\langle n_\alpha,\alpha \rangle=1$,
and for every
$x\in K$ and $f\in L^\infty(K)$
we have $\langle n_\alpha, T_x f\rangle =\alpha(x)\langle n_\alpha, f\rangle $,
hence $n_\alpha$ is an $\alpha$-mean on
$L^\infty(K)$.
Since $m_\alpha\in L^1(K)^{\ast\ast}$,
by Goldstine's theorem \cite{Dunford} there exists
a net $(m_j)_j$ of
functions in $L^1(K)$
such that $\pi({m_j})\overset{w^\ast}{\longrightarrow} m_\alpha$. Moreover,
${m_j}\cdot {\alpha}=\alpha \cdot m_j
={\widehat{{m_j}}(\alpha)}{{\alpha}}$ and
$m_\alpha(\alpha)=1$,
so taking the $w^\ast$-limit yields
$m_\alpha\cdot {\pi(\alpha)}=\pi(\alpha)$. Therefore,
for every $f\in L^\infty(K)$ and $x\in K$
we have
\[
\|{\alpha}\|_2^2 \langle n_{{\alpha}},
f\rangle= \langle {\pi({\alpha})} , f \rangle
=
\langle m_\alpha\cdot\pi({\alpha}),f\rangle
=
\langle m_\alpha,\pi({\alpha})\cdot f \rangle
=
\langle m_\alpha,{\alpha}\cdot f \rangle
=
\|{\alpha}\|_2^2 \langle m_\alpha, f \rangle,
\]
hence
$m_\alpha=n_{{\alpha}}$.
(iii) By (i) since $\alpha\in L^1(K)\cap L^2(K)$,
we have
\[
\alpha(x)\int_K \alpha(y)dm(y)
=
\int_KT_x\alpha(y)dm(y)=\int_K \alpha(y)dm(y),
\]
which implies that $\alpha(x)=1$ for
every $x\in K$, hence $K$ is compact \cite{Jew75}.
\end{proof}
\begin{corollary}
\emph{
Let $K$ be an $\alpha$-amenable hypergroup
with a unique $\alpha$-mean for every
$\alpha\in \widehat{K}\setminus\{1\}$.
Then $1\in \mathcal{S}$.
}
\end{corollary}
\begin{remark}
\emph{
We observe that part (iii) of Theorem \ref{main.theorem} can also be derived from part (i) and
\cite[Theorem 2.1]{voit.p.c.}.
}
\end{remark}
\section{ Examples}\label{examples}
\begin{itemize}
\item[(I)]{\bf{
Symmetric hypergroup \cite{Voit91.1}:
}
}
For each $n\in \NN$, let $b_n\in\, ]0,1]$, $c_0=1$,
and
define numbers $c_n$ inductively
by $c_n=\frac{1}{b_n}\left(c_0+c_1+\cdots+c_{n-1}\right)$.
A symmetric
hypergroup structure on $\NN_0$ is
defined by
$\varepsilon_n\ast \varepsilon_m=\varepsilon_m\ast \varepsilon_n=\varepsilon_n \mbox{ if } 0\leq m<n$
and
\[\varepsilon_n\ast \varepsilon_n
=
\frac{c_0}{c_n}\varepsilon_0+\frac{c_1}{c_n}\varepsilon_1
+\cdots+
\frac{c_{n-1}}{c_n}\varepsilon_{n-1}+(1-b_n)\varepsilon_n.\]
$\NN_0$ with the above convolution and
an involution defined by the identity map
is a commutative hypergroup with
$\mathfrak{X}^b(\NN_0)=\mathcal{S}$.
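As a quick consistency check, the defining relation $c_n=\frac{1}{b_n}\left(c_0+c_1+\cdots+c_{n-1}\right)$ ensures that the weights appearing in $\varepsilon_n\ast \varepsilon_n$ sum to one,
\[
\sum_{k=0}^{n-1}\frac{c_k}{c_n}+(1-b_n)
=\frac{c_0+c_1+\cdots+c_{n-1}}{c_n}+(1-b_n)
=b_n+(1-b_n)=1,
\]
so each convolution product above is indeed a probability measure.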
Every nontrivial character $\alpha$ in
$ \widehat{\NN_0}$ has
finite support, so
$\alpha\in \ell^1(\NN_0)\cap \ell^2(\NN_0)$.
Consequently by Theorem \ref{main.theorem} we see that
$\NN_0$ is $\alpha$-amenable with a unique $\alpha$-mean if and only if $\alpha\not=1$.
\item [(II)]
Let $\{p_n\}_{n\in \NN_0}$ be a set of polynomials defined by a recursion relation
\begin{equation}\label{r.10}
p_1(x)p_n(x)=a_np_{n+1}(x)+b_np_n(x)+ c_np_{n-1}(x)
\end{equation}
for $n\in \NN$ and $p_0(x)=1$, $p_1(x)=\frac{1}{a_0}(x-b_0)$, where
$a_n>0$, $b_n\in \RR$ for all $n\in \NN_0$ and $c_n > 0$
for
$n\in \NN$. There exists a probability measure
$\pi\in M^1(\RR)$ such that
$\int_\RR p_n(x)p_m(x)d\pi(x)
=
\delta_{n,m}\mu_m\hspace{.1in}(\mu_m>0)$ \cite{Chi78}.
Assume that $p_n(1)\not=0$; after renormalizing,
we may take $p_n(1)=1$ for all $n\in \NN_0$.
Evaluating the relation (\ref{r.10}) and the definition of $p_1$ at $x=1$ shows that
$a_n+b_n+c_n=1$ and $a_0+b_0=1$.
The polynomial set $\{p_n\}_{n\in \NN_0}$
induces a hypergroup structure
on $\NN_0$ \cite{BloHey94},
which is known as a polynomial hypergroup.
\begin{itemize}
\item [(i)] {\bf{
Hypergroups of compact type \cite{Frankcompact}:
}
}
If in the recursion
formula (\ref{r.10}) $a_n, c_n\rightarrow 0$ and $b_n\rightarrow 1$
as $n\rightarrow \infty$, then the induced hypergroup $\NN_0$ is said to be of compact type.
In this case, $\mathcal{S}=\widehat{\NN_0}=\mathfrak{X}^b(\NN_0)$, 1 is the
only accumulation point of $\widehat{\NN_0}$ and nontrivial characters of $\NN_0$ belong to
$\ell^1(\NN_0)\cap \ell^2(\NN_0)$. By Theorem \ref{main.theorem}
we see that $\NN_0$ is $\alpha$-amenable with a unique $\alpha$-mean if and only if $\alpha\not=1$.
For instance, the little q-Legendre polynomial hypergroup is of compact type.
\item[(ii)]{
\bf{
Hypergroups of Nevai Classes:
}
}\label{examples.nevai}
Let $\{p_n\}_{n\in \NN_0}$ define a hypergroup structure on $\NN_0$ with the
relations (\ref{r.10}).
Consider the orthonormal polynomials
$q_n(x):=\sqrt{h(n)}p_n(x)$, which by the
recursion (\ref{r.10}) satisfy the following
recursion formula
\begin{equation}\notag
xq_n(x)=\lambda_{n+1}q_{n+1}(x)
+
\beta_nq_n(x)+\lambda_n q_{n-1}(x),\hspace{1.cm} \forall n\in \NN_0,
\end{equation}
where $q_0(x)=1$, $\lambda_n=a_0\sqrt{c_na_{n-1}}$ for $n\geq 2$,
$\lambda_1=a_0\sqrt{c_1}$,
$\lambda_0=0$, and $\beta_n=a_0b_n+b_0$ for
$n\geq 1$, with $\beta_0=b_0$.
The polynomial set $(q_n)_{n\in \NN_0}$ is of
the Nevai class $M(0,1)$ if
$\underset{n\rightarrow \infty}{\lim}\lambda_n=\frac{1}{2}$
and
$\underset{n\rightarrow \infty}{\lim}\beta_n=0$.
It is of bounded variation, $(q_n)_{n\in \NN_0}\in BV$,
if
\[\sum_{n=1}^{\infty}
(|\lambda_{n+1}-\lambda_n|+|\beta_{n+1}-\beta_n|)<\infty.\]
\begin{theorem}\label{main.2}
\emph{
Let $(q_n)_{n\in \NN_0}\in BV\cap M(0,1)$ and $\alpha_x\in\widehat{\NN_0}$, where
$\alpha_x(n):=q_n(x)$ for $n\in \NN_0$. Then the following hold:
\begin{enumerate}
\item[(i)] $\mathcal{S} \cong [-1,1]\cup A$,
where $A$ is a nonempty countable set
and $[-1, 1]\cap A= \emptyset$.
\item[(ii)]
If $x\in A$, then $\NN_0$ is $\alpha_x$-amenable
with a unique $\alpha_x$-mean.
\item[(iii)] If $h(n)$ is unbounded,
then $\NN_0$ is not $\alpha_x$-amenable for $x\in (-1,1)$.
\item[(iv)] If $h(n)$ is bounded,
then $\NN_0$ is $\alpha_x$-amenable for $x\in (-1,1)$.
\end{enumerate}
}
\end{theorem}
\begin{proof}
(i) This is shown in \cite[Theorem 7]{Nev79}.
(ii)
Let $\mathcal{S}$ be as in part (i).
If
$A\cap\, ]1,\infty[\,\not=\emptyset$, then $x_1:=\sup A\in A$
corresponds to a positive
character of $\NN_0$ \cite[Theorem 5.3]{Chi78}.
But this contradicts the fact that
a positive character in $\mathcal{S}$
cannot be isolated, \cite[Theorem 2.1]{voit.p.c.},
hence
$A\subset ]-\infty, -1[$.
By \cite[Theorem 18(p.36)]{Nev79} we have
\[
\underset{n\rightarrow \infty}{\lim}\frac{h(n+1)}{h(n)}
\left|
\frac{p_{n+1}(x)}{p_{n}(x)}
\right|
=
C\underset{n\rightarrow \infty}{\lim}\left|
\frac{p_{n+1}(x)}{p_{n}(x)}
\right|
=
\left(
|x|+(x^2-1)^{1/2}
\right)^{-1}
<1,
\]
whenever $x\in A$.
This shows that $\alpha_x$ belongs
to $\ell^1(\NN_0)$.
Hence, by Theorem \ref{main.theorem}
$\NN_0$ is $\alpha_x$-amenable with a unique $\alpha_x$-mean.
(iii) and (iv) are shown in \cite[Theorems 4.10--11]{f.l.s}.
\end{proof}
\begin{remark}
\begin{enumerate}
\item[\emph{(i)}] \emph{
Theorem \ref{main.2} reveals that the
$\alpha$-amenability of
$K$ in general depends on the asymptotic
behavior of the Haar measure and
$\alpha$.
}
\item[\emph{(ii)}]\emph{
Observe that in
Theorem \ref{main.2} (iii) if
$x\in (-1,1)$ then the functionals $m_{\alpha_x}$ are
distinct.
}
\end{enumerate}
\end{remark}
\end{itemize}
\end{itemize}
\begin{pr1}{\bf{Conjecture:}}
\emph{
Let $K$ be an $\alpha$-amenable hypergroup. Then $K$ has either a unique $\alpha$-mean
or
the cardinality of the set
of $\alpha$-means is at most $2^{2^d}$,
where $d$ is the smallest
cardinality of a cover of $K$ by compact sets.
}
\end{pr1}
\end{document}
\begin{document}
\title{Characterizing Multipartite Non-Gaussian Entanglement for Three-Mode Spontaneous Parametric Down-Conversion Process}
\author{Mingsheng Tian}
\affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing 100871, China}
\author{Yu Xiang}
\affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing 100871, China}
\affiliation{Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006, China}
\author{Feng-Xiao Sun}
\email{[email protected]}
\affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing 100871, China}
\affiliation{Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006, China}
\author{Matteo Fadel}
\affiliation{Department of Physics, ETH Z\"urich, 8093 Z\"urich, Switzerland}
\affiliation{Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland}
\author{Qiongyi He}
\affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing 100871, China}
\affiliation{Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006, China}
\affiliation{Peking University Yangtze Delta Institute of Optoelectronics, Nantong, Jiangsu, China}
\begin{abstract}
Very recently, strongly non-Gaussian states have been observed via a direct three-mode spontaneous parametric down-conversion in a superconducting cavity [Phys. Rev. X 10, 011011 (2020)]. The created multi-photon non-Gaussian correlations are attractive and useful for various quantum information tasks. However, how to detect and classify multipartite non-Gaussian entanglement has not yet been completely understood. Here, we present an experimentally practical method to characterize continuous-variable multipartite non-Gaussian entanglement, by introducing a class of nonlinear squeezing parameters involving accessible higher-order moments of phase-space quadratures. As these parameters can depend on arbitrary operators, we consider their analytical optimization over a set of practical measurements, in order to detect different classes of multipartite non-Gaussian entanglement ranging from fully separable to fully inseparable. We demonstrate that the nonlinear squeezing parameters act as an excellent approximation to the quantum Fisher information within accessible third-order moments. The level of the nonlinear squeezing quantifies the metrological advantage provided by those entangled states. Moreover, by analyzing the above mentioned experiment, we show that our method can be readily used to confirm fully inseparable tripartite non-Gaussian entangled states by performing a limited number of measurements without requiring full knowledge of the quantum state.
\end{abstract}
\maketitle
\section{Introduction}
Continuous-variable (CV) systems, where multimode entangled states can be deterministically prepared~\cite{weedbrook2012gaussian,yokoyama2013ultra,roslund2014wavelength,chen2014experimental,armstrong2015multipartite,cavalcanti2015detection,cai2017multimode,deng2017demonstration,larsen2019deterministic,asavanant2019generation,takeda2019demand,cai2020versatile,wang2020deterministic}, constitute an important platform for quantum technologies, including quantum teleportation networks~\cite{PhysRevLett.80.869,nature.2004}, quantum key distribution~\cite{PhysRevLett.88.057902}, quantum secret sharing~\cite{armstrong2015multipartite}, boson sampling~\cite{PhysRevLett.119.170501,np.15.925}, and multi-parameter quantum metrology~\cite{np.16.3,PhysRevLett.121.130503}. Non-Gaussian states in CV systems have attracted increasing attention in recent years~\cite{np11.713,PRXQuantum.2.030204}, as they have been proven to be indispensable resources for universal quantum computation~\cite{niset2009no,PhysRevLett.109.230503}, entanglement distillation~\cite{eisert2002distilling,fiuravsek2002gaussian,nphoton.2010.1}, quantum-enhanced sensing~\cite{science.345,PhysRevLett.107.083601}, and quantum imaging~\cite{PhysRevApplied.16.064037}. Such perspectives have led to a growing interest in the experimental preparation of multimode non-Gaussian quantum states~\cite{natphy.nicolas,olthreemode,prxthreemode}.
Despite the fact that substantial progress has been made in the generation of multimode non-Gaussian states, the characterization of their entanglement structure still poses a number of conceptual and practical challenges. The main reason being that the nontrivial correlations appear in higher-order moments of the observables that cannot be sufficiently uncovered by the widely used entanglement criteria based on second-order correlations~\cite{PhysRevLett.84.2726,PhysRevA.67.052315,PhysRevA.72.032334,PhysRevLett.96.050503,PhysRevLett.111.250403,PRA.90.062337}, partially hindering their application for quantum information tasks. To tackle this problem, several approaches have been developed to take higher-order moments into account, such as the entropic entanglement criteria~\cite{PhysRevLett.103.160505,RevModPhys.90.035007,PhysRevA.103.013704}, the generalized Hillery-Zubairy criteria based on the multimode moments~\cite{PhysRevA.81.062322,PhysRevA.84.032115,PhysRevLett.125.020502}, or nonlinear entanglement criteria based on the amplitude-squared squeezing~\cite{PhysRevLett.114.100403,PhysRevLett.127.150502}. The quantum Fisher information (QFI) also provides a powerful method for capturing strongly non-Gaussian features of quantum states. It is widely applied to detect multi-particle entanglement in nonclassical spin states~\cite{rev.mod.phys.90.035005, science.345,npys3700,PhysRevA.102.012412,PhysRevLett.126.080502,NC2021,fadel2022multiparameter} and has been recently developed to characterize continuous variables~\cite{npj.5.2056,pra2016manuel,Gessner2017entanglement}. Moreover, QFI provides a powerful tool to establish a quantitative link between entanglement and quantum metrology~\cite{PhysRevLett.102.100401,pra.85.022321}.
However, most of the above-mentioned methods are experimentally challenging, as they require full knowledge of the quantum state, which is extremely difficult in the multipartite scenario. Recently, a nonlinear squeezing parameter was proposed for the metrological characterization of non-Gaussian features~\cite{PhysRevLett.122.090503,guo2021detecting}. This can be optimized over a set of observables that are experimentally accessible, and it ultimately coincides with the QFI of the state as the order of the measured moments increases.
Experimentally, the recent advance in the generation of three-photon correlated non-Gaussian states via a direct three-mode spontaneous parametric down-conversion (SPDC) process~\cite{prxthreemode}, opens up novel possibilities for various quantum information processing applications. Thus, it would be interesting to develop experimentally feasible methods to witness different classes of multipartite entanglement in such systems.
In this paper, we construct a class of CV nonlinear squeezing parameters for testing fully separable, inseparable, and fully inseparable tripartite non-Gaussian states with practical measurements. Firstly, we analyze the QFI for arbitrary sets of accessible local observables (involving higher-order moments) in three subsystems and find out the optimal combination that maximizes the violation of different entanglement bounds. These bounds are divided into three classes according to the separability properties with respect to the three splittings (fully separable) and particular bipartite splittings (biseparable). To avoid the requirement of full quantum state tomography, we then construct a nonlinear squeezing parameter by analytically determining the optimal measurements within arbitrary accessible third-order observables, that results in an excellent approximation to the QFI. The level of the nonlinear squeezing parameter quantifies the metrological advantage provided by different classes of entangled states. Moreover, by analyzing the accessible conditions in the three-mode SPDC experiment of Ref.~\cite{prxthreemode}, we show that our method is capable of detecting fully inseparable tripartite entangled states by performing a limited number of measurements. Our results lead to an experimentally feasible way to systematically investigate CV multipartite non-Gaussian entanglement, which paves a way for exploiting their potential quantum advantages beyond Gaussian states.
\section{A general method to detect non-gaussian entanglement}
Before considering a specific system, we firstly introduce a general method to detect non-Gaussian entanglement with a two-step optimization.
\textit{Step 1.$-$}
We characterize the entanglement by choosing optimal operators in the QFI.
For an arbitrary $N$-partite separable quantum state $\hat{\rho}_{\mathrm{sep}}$, it was shown that the QFI must be less than a bound $B_n$ given by the variance~\cite{pra2016manuel}
\begin{equation}\label{eqfv}
F_Q\left[\hat{\rho}_{\mathrm{sep}}, \sum_{j=1}^N \hat{A}_j\right] \leq 4 \sum_{j=1}^{N} \mathrm{Var}\left(\hat{A}_{j}\right)_{\hat{\rho}_{\mathrm{sep}}} \equiv B_n.
\end{equation}
Here, $\hat{A}_j$ is a local observable in the reduced state $\hat{\rho}_j$, and $\mathrm{Var}(\hat{A}_j)_{\hat{\rho}}=\langle \hat{A}_j^2\rangle_{\hat{\rho}}-\langle \hat{A}_j\rangle^2_{\hat{\rho}}$ indicates the variance. $F_Q$ denotes the QFI, which describes the sensitivity of the parameter $\theta$ when the state $\hat{\rho}$ is transformed with unitary evolution $\hat{\rho}_{\theta}=e^{-i \sum_j \hat{A}_j \theta} \hat{\rho} e^{i \sum_j \hat{A}_j \theta}$ and provides a bound on the accuracy to determine $\theta$ as $(\Delta\theta_{est})^2\geq1/F_Q[\hat{\rho},\sum_j \hat{A}_j ]$.
Since Eq.~(\ref{eqfv}) represents a necessary criterion for separability, its violation is a sufficient criterion for entanglement.
In order to witness the entanglement in the largest possible parameter range, we need to choose an optimal local operator $\hat{A}_j$,
which can be constructed by analytically optimizing over arbitrary linear combinations of accessible operators (namely, $\hat{A}_j=\sum_{m=1}c_j^{(m)}\hat{A}_j^{(m)}=\mathbf{c}_j\cdot\mathbf{\hat{A}}_j$).
In this case, the full operator
$\hat{A}(\mathbf{c})=\sum_{j=1}^{N} \mathbf{c}_{j} \cdot \hat{\mathbf{A}}_{j}$
is characterized by the combined vector $\mathbf{c}=(\mathbf{c}_1,\cdots,\mathbf{c}_N)^T$.
According to Eq.~(\ref{eqfv}), the quantity
$
W[\hat{\rho},\hat{A}(\mathbf{c})]=F_Q[\hat{\rho},\hat{A}(\mathbf{c})]-4\sum_{j=1}^N \mathrm{Var}(\mathbf{c}_j\cdot\mathbf{\hat{A}}_j)_{\hat{\rho}}
$
must be nonpositive for arbitrary choices of $\mathbf{c}$ whenever the state is separable.
We can now maximize $W[\hat{\rho},\hat{A}(\mathbf{c})]$ by variation of $\mathbf{c}$ to obtain an optimized entanglement operator $\hat{A}$.
According to Ref.~\cite{pra2016manuel}, the quantity can be expressed as $W[\hat{\rho},\hat{A}(\mathbf{c})]=\mathbf{c}^{T}\left(Q_{\hat{\rho}}^{\mathcal{A}}-4 \Gamma_{\Pi(\hat{\rho})}^{\mathcal{A}}\right) \mathbf{c}$ and the optimal $\mathbf{c}$ can be obtained by calculating the eigenvector of the maximum eigenvalue of the matrix $Q_{\hat{\rho}}^{\mathcal{A}}-4 \Gamma_{\Pi(\hat{\rho})}^{\mathcal{A}}$
(see Appendix \ref{qfi} for details).
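As a minimal numerical sketch of this eigenvalue step (the small matrices below are random placeholders, not the actual SPDC values; the routine only illustrates the linear algebra), the optimal coefficient vector $\mathbf{c}$ can be extracted as follows:
\begin{verbatim}
import numpy as np

def optimal_generator_coefficients(Q, Gamma):
    """Return the largest eigenvalue and the eigenvector c maximizing
    W[rho, A(c)] = c^T (Q - 4*Gamma) c over normalized c.
    Q     : quantum Fisher matrix over the chosen operator set.
    Gamma : covariance matrix of Pi(rho) over the same operator set."""
    W = Q - 4.0 * Gamma
    W = 0.5 * (W + W.conj().T)        # enforce Hermiticity against round-off
    vals, vecs = np.linalg.eigh(W)    # eigenvalues in ascending order
    return vals[-1], vecs[:, -1]

# Placeholder matrices (illustration only, not the actual SPDC values).
rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
Q = B @ B.T                           # some positive semidefinite "QFI matrix"
Gamma = 0.2 * np.eye(5)               # some local covariance matrix
lam, c_opt = optimal_generator_coefficients(Q, Gamma)
print("largest eigenvalue of Q - 4*Gamma:", lam)
print("entanglement witnessed" if lam > 0 else "no violation for this operator set")
\end{verbatim}
A positive largest eigenvalue signals a violation of the separability bound for the corresponding operator $\hat{A}(\mathbf{c})$.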
\textit{Step 2.$-$}
Computing the QFI is challenging for arbitrary multipartite states, as it requires the full density matrix of the system~\cite{PhysRevLett.72.3439}. For this reason, it is often more practical to find its lower bound, which, regarded as squeezing parameters $\chi^2$, involves simple measurements that are usually experimentally feasible. The relation between the QFI and $\chi^2$ fulfills~\cite{PhysRevLett.102.100401},
\begin{equation}\label{eqfangsuo}
F_{\mathrm{Q}}[\hat{\rho}, \hat{A}]
\geq \frac{\left|\langle[\hat{A}, \hat{M}]\rangle_{\hat{\rho}}\right|^{2}}{\mathrm{Var}(\hat{M})_{\hat{\rho}}}
\equiv \chi^{-2}(\hat{\rho},\hat{A},\hat{M}),
\end{equation}
which is saturable for certain measurement operator $\hat{M}$.
Often $\hat{M}$ is a complicated operator, of difficult implementation. Therefore, similarly to the analytical optimization of $\hat{A}$, $\hat{M}$ will be optimized over a linear combination of experimentally practical observables (see the details in Appendix~\ref{optsqueezing}).
Note that the general method introduced here can be applied to arbitrary quantum states to detect multipartite entanglement; the optimal operators $\hat{A}$ and $\hat{M}$ will differ for different systems.
In the following, we take a particular non-Gaussian system as an example and obtain the optimal operators $\hat{A}$ and $\hat{M}$ for detecting entanglement with the above two-step optimization.
\section{Witnessing entanglement in three-mode non-Gaussian states via the QFI}
Here, we focus on the three-mode SPDC process realized in Ref.~\cite{prxthreemode}, where a pump photon at frequency $\omega_p$ is down converted to three nondegenerate photons at frequencies $\omega_1,~\omega_2,~\omega_3$, respectively. This process is described by the interaction Hamiltonian
\begin{equation}\label{eqham}
\hat{H}=i\hbar \kappa ({\hat{b}}\hat{a}_1^{\dagger}\hat{a}_2^{\dagger}\hat{a}_3^{\dagger}-{\hat{b}}^{\dagger}\hat{a}_1\hat{a}_2\hat{a}_3),
\end{equation}
where $\kappa$ is the third-order coupling constant, $\hat{b}$ and $\hat{a}_i$ ($i=1,2,3$) are the annihilation operators for the pump and $i$th mode, respectively.
With this Hamiltonian, the three-mode non-Gaussian state is obtained by solving $\partial \hat{\rho}/\partial t=-i[\hat{H},\hat{\rho}]/\hbar$, taking the initial state to be the vacuum for the generated triplets and a coherent state with amplitude $\alpha_p$ for the pump.
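The following is a minimal numerical sketch of such a state (not the calculation used in this work): it adopts the undepleted-pump approximation, replacing $\hat{b}$ by a real amplitude $\alpha_p$ so that the effective interaction is $\hat{H}_{\rm eff}=i\hbar\kappa\alpha_p(\hat{a}_1^{\dagger}\hat{a}_2^{\dagger}\hat{a}_3^{\dagger}-\hat{a}_1\hat{a}_2\hat{a}_3)$; the Fock cutoff and the value of $\alpha_p\kappa t$ are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

dim = 8                                        # Fock cutoff per mode (assumption)
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # single-mode annihilation operator
I = np.eye(dim)

def embed(op, mode):
    """Place a single-mode operator on mode 0, 1 or 2 of the three-mode space."""
    ops = [I, I, I]
    ops[mode] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

a1, a2, a3 = (embed(a, m) for m in range(3))

# Undepleted pump: H_eff/hbar = i*g*(a1^+ a2^+ a3^+ - a1 a2 a3) with g = kappa*alpha_p,
# so exp(-i H_eff t / hbar) = exp(g*t*(a1^+ a2^+ a3^+ - a1 a2 a3)) for real g.
g_t = 0.1                                      # dimensionless alpha_p*kappa*t (illustrative)
K = a1.T @ a2.T @ a3.T - a1 @ a2 @ a3          # a is real, so a.T is its dagger
U = expm(g_t * K)

vac = np.zeros(dim**3); vac[0] = 1.0           # three-mode vacuum |000>
psi = U @ vac

N = a1.T @ a1 + a2.T @ a2 + a3.T @ a3          # total photon-number operator
mean_N = psi @ (N @ psi)
var_N = psi @ (N @ N @ psi) - mean_N**2
print("<N> =", mean_N, "  pure-state QFI = 4*Var(N) =", 4 * var_N)
\end{verbatim}
For small $\alpha_p\kappa t$ the truncation error is negligible; larger couplings would require a higher cutoff.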
A three-mode fully separable state allows for a description in three splittings $1|2|3$ ($\hat{\rho}_{\mathrm{sep}}=\sum_{k} p_{k} \hat{\rho}_{1}^{k} \otimes \hat{\rho}_{2}^{k} \otimes \hat{\rho}_{3}^{k}$), which provides the bound $B_1$. Thus, observing $F_Q > B_1$ indicates inseparability among the three modes. Furthermore, we can denote the maximum among the sum of variances for different bipartitions $1|23, ~2|13, ~3|12$, as bound $B_2$. The state is then confirmed to be fully inseparable if $F_Q > B_2$ (see Appendix \ref{optfisher} for details).
\begin{figure}
\caption{The QFI $F_Q[\hat{\rho},\hat{A}_{opt}]$, the nonlinear squeezing parameters $\chi^{-2}$ obtained with the optimal and simplified measurements (dashed curves), and the bounds $B_0$, $B_1$, and $B_2$ (solid curves), as functions of the effective coupling strength $\alpha_{p}\kappa t$.}
\label{fig1}
\end{figure}
A key step to witness tripartite non-Gaussian entanglement is the choice of the local observables $\hat{A}_j$, corresponding to the generators of parameter imprinting. To find the optimal operator $\hat{A}_j$, we express $\mathcal{A}=\{\hat{\mathbf{A}}_{1},\hat{\mathbf{A}}_{2}, \hat{\mathbf{A}}_{3}\}$ through a set of accessible local operators for each mode $\hat{\mathbf{A}}_{j}=\{\hat{A}_j^{(1)},\hat{A}_j^{(2)},\dots\}$. The optimization of local observable $\hat{A}_{opt}$ to witness entanglement is analyzed over arbitrary linear combinations of the accessible operators. As the generated states are non-Gaussian, their characteristics cannot be sufficiently captured by linear quadratures $\hat{x}_j,\hat{p}_j$, where $\hat{x}_j=\hat{a}_j+\hat{a}_j^{\dagger}$, $\hat{p}_j=-i(\hat{a}_j-\hat{a}_j^{\dagger})$. Therefore, we extend the family of accessible operators to the second-order, i.e. $\mathbf{\hat{A}}_j=[\hat{x}_j,\hat{p}_j,\hat{x}_j^2,\hat{p}_j^2,(\hat{x}_j\hat{p}_j+\hat{p}_j\hat{x}_j)/2]$.
To find the optimal linear combination, we make use of the matrix form $W[\hat{\rho}, \hat{A}(\mathbf{c})]=\mathbf{c}^{T}\left(Q_{\hat{\rho}}^{\mathcal{A}}-4 \Gamma_{\Pi(\hat{\rho})}^{\mathcal{A}}\right) \mathbf{c}$ with the non-Gaussian state $\hat{\rho}$ generated in the three-mode SPDC process. Then by calculating the maximum eigenvalue and the corresponding eigenvector of the matrix $Q_{\hat{\rho}}^{\mathcal{A}}-4 \Gamma_{\Pi(\hat{\rho})}^{\mathcal{A}}$, the optimal local operator $\hat{A}_{opt}=\sum_{j=1}^3(\hat{x}_j^2+\hat{p}_j^2)$ is obtained. It can also be rewritten as $\hat{A}_{opt}=\sum_{j=1}^3\hat{a}_j^{\dagger}\hat{a}_j=\hat{N}$, because the constant term (and an overall positive factor) in $\hat{A}_{opt}$ has no effect on the results (see the detailed analytical optimization in Appendix~\ref{optfisher}).
By comparing the QFI $F_Q[\hat{\rho},\hat{A}_{opt}]$ to different bounds, such as the fully separable bound $B_1$ and the biseparable bound $B_2$, the three-mode non-Gaussian entanglement can be classified. In Fig.~\ref{fig1}, we plot the QFI and the different bounds (solid curves) as functions of the effective coupling strength $\alpha_{p}\kappa t$, where $t$ denotes the effective interaction time. We find that the QFI $F_Q[\hat{\rho},\hat{A}_{opt}]$ is higher than the biseparable bound $B_2$, which means that the non-Gaussian states generated in the three-mode SPDC dynamics Eq.~(\ref{eqham}) are fully inseparable.
It has been clarified that only a specific class of entangled states enables quantum-enhanced parameter estimation~\cite{rev.mod.phys.90.035005}; a sufficient condition for quantum-enhanced metrology is~\cite{RN612},
\begin{equation}\label{eqclassical}
F_Q [\hat{\rho},\hat{N}] > 4 \text{Tr}[\hat{\rho}\hat{N}] \equiv B_0,
\end{equation}
where $\hat{N}=\sum_{j=1}^3\hat{a}_j^{\dagger}\hat{a}_j$ is the total number operator for the three modes. The right-hand side of the inequality gives a bound $B_0$ for the QFI (the derivation is given in Appendix~\ref{C}). It is clearly seen from Fig.~\ref{fig1} that both of the entanglement bounds ($B_1$ and $B_2$) are above the metrological bound $B_0$ (the lowest solid curve), which indicates that the entanglement witnessed in the three-mode SPDC dynamics is useful for quantum metrology.
\section{Detecting non-Gaussian entanglement with nonlinear squeezing parameters}
In this section, we carry out the \textit{Step 2} optimization to obtain the squeezing parameter $\chi^2$ with optimal measurement $\hat{M}$. By extending the family of accessible observables to third order, the optimal $\hat{M}$ we obtained is $\hat{M}_{opt}=\hat{x}_1 \hat{x}_2 \hat{x}_3-\hat{x}_1 \hat{p}_2 \hat{p}_3-\hat{p}_1 \hat{x}_2 \hat{p}_3-\hat{p}_1 \hat{p}_2 \hat{x}_3$ (see the details in Appendix~\ref{optsqueezing}). As shown in Fig.~\ref{fig1} by the red dashed curve, the nonlinear squeezing parameter with optimal choices of measurements $\chi^{-2}(\hat{\rho},\hat{A}_{opt},\hat{M}_{opt})$ approximates extremely well the exact QFI $F_{\mathrm{Q}}[\hat{\rho}, \hat{A}_{opt}]$. Hence, the squeezing parameter $\chi^{-2}(\hat{\rho},\hat{A}_{opt},\hat{M}_{opt})$ can act as a faithful substitute for the QFI in the left-hand side of inequalities (\ref{eqfv}) and (\ref{eqclassical}), avoiding the need to perform quantum state tomography.
\begin{figure}
\caption{(a) The QFI $F_Q$, the nonlinear parameter $\chi^{-2}(\hat{\rho},\hat{A}_{opt},\hat{M}_1)$, and the bounds $B_n$ in the small-coupling regime; the red star marks the experimental parameters of Ref.~\cite{prxthreemode}. (b) Classification of the entanglement of the final state as a function of the effective coupling strength and the environmental temperature.}
\label{fig2}
\end{figure}
Although more accessible in experiments compared to the QFI, the nonlinear squeezing parameter $\chi^{2}(\hat{\rho},\hat{A}_{opt},\hat{M}_{opt})$ can still be demanding to implement, as (for the three-mode SPDC process) it requires measuring four collective observables in $\hat{M}_{opt}$. Therefore, we show that this can be further simplified by a measurement $\hat{M}$ which is still capable of detecting fully tripartite inseparable states but contains fewer observables. For this purpose, an experimentally feasible coincidence measurement is evaluated, which can be written as $\hat{Q}=[\hat{x}_1\sin(\theta_1)+\hat{p}_1\cos(\theta_1)]\times[\hat{x}_2\sin(\theta_2)+\hat{p}_2\cos(\theta_2)]\times[\hat{x}_3\sin(\theta_3)+\hat{p}_3\cos(\theta_3)]$, where $\theta_j\in [0,2\pi)$. The alternative observable $\hat{M}$ can be obtained by varying $\theta_j$. For the simplest case of $\hat{M}_1$ that contains only one collective observable, by optimizing $\theta_{1,2,3}$ we find that the nonlinear parameter $\chi^{-2}(\hat{M}_1)=|\langle[\hat{A}_{opt}, \hat{M}_1]\rangle_{\hat{\rho}}|^{2}/\mathrm{Var}( \hat{M}_1)_{\hat{\rho}}$ is maximized by any one of the four terms in $\hat{M}_{opt}$, e.g. $\hat{M}_1=\hat{x}_1 \hat{x}_2 \hat{x}_3$.
We find that this performs as an excellent proxy for the QFI when $\alpha_{p}\kappa t\leq 0.1$, as indicated by the blue dashed line in Fig.~\ref{fig1}.
To test entanglement, we need to compare the simplified nonlinear parameter $\chi^{-2}(\hat{M}_1)$ with the different bounds $B_n$. The simplified nonlinear parameter takes the form
\begin{equation}
\begin{aligned}
\chi^{-2}(\hat{M}_1)
&=\frac{|\langle[\hat{A}_{opt}, \hat{M}_1]\rangle_{\hat{\rho}}|^{2}}
{\mathrm{Var}(\hat{M}_1)_{\hat{\rho}}}\\
&=\frac{|\langle
\hat{p}_1\hat{x}_2\hat{x}_3+\hat{x}_1\hat{p}_2\hat{x}_3+\hat{x}_1\hat{x}_2\hat{p}_3
\rangle_{\hat{\rho}}|^{2}}
{\mathrm{Var}(\hat{x}_1\hat{x}_2\hat{x}_3)_{\hat{\rho}}}.
\end{aligned}
\end{equation}
In experiments, $\chi^{-2}(\hat{M}_1)$ can be obtained by detecting $\hat{p}_1\hat{x}_2\hat{x}_3$, $\hat{x}_1\hat{p}_2\hat{x}_3$, $\hat{x}_1\hat{x}_2\hat{p}_3$, and $\hat{x}_1\hat{x}_2\hat{x}_3$ with collective measurements~\cite{prxthreemode,pan1999,sa.collective.measurement,npj.collective.measurement}.
The different bounds $B_n$, which involve variances of the mode photon numbers $\hat{N}_{j}$ (e.g. $B_1=4 \sum_{j} \mathrm{Var}(\hat{N}_{j})$), can be obtained by number-resolving detectors~\cite{np2007-Photon-number-discriminating,np2008-detector,np-superconducting}.
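As a rough illustration of the post-processing involved (a minimal sketch; the random arrays stand in for recorded data, and the bound value is a placeholder), $\chi^{-2}(\hat{M}_1)$ can be estimated directly from sample moments of the four collective observables:
\begin{verbatim}
import numpy as np

def chi_inv2_M1(p1x2x3, x1p2x3, x1x2p3, x1x2x3):
    """Estimate chi^{-2}(M_1) = |<p1x2x3 + x1p2x3 + x1x2p3>|^2 / Var(x1x2x3)
    from samples (1D arrays) of the four collective observables."""
    numer = (np.mean(p1x2x3) + np.mean(x1p2x3) + np.mean(x1x2p3)) ** 2
    denom = np.var(x1x2x3, ddof=1)            # unbiased sample variance
    return numer / denom

# Placeholder data standing in for measured quadrature products.
rng = np.random.default_rng(1)
fake = lambda: rng.normal(loc=0.05, scale=1.0, size=10_000)
chi_inv2 = chi_inv2_M1(fake(), fake(), fake(), fake())

B_placeholder = 0.3                           # a separability bound, illustrative only
print("chi^{-2}(M1) =", chi_inv2, "  violates the bound?", chi_inv2 > B_placeholder)
\end{verbatim}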
Besides, we also provide the results for the simplified $\hat{M}_{2}$ and $\hat{M}_{3}$ that involve two and three collective observables, respectively, as shown by the two middle dashed curves in Fig.~\ref{fig1} (the detailed expressions and analysis can be found in Appendix~\ref{simplesqueezing}). As expected, the more observables are included in $\hat{M}$, the larger the range of coupling strengths over which non-Gaussian multipartite entanglement can be witnessed.
\section{Testing fully inseparable tripartite entangled states in three-photon SPDC experiment}
In this section we use the QFI $F_Q$ and the nonlinear parameter $\chi^{-2}$ to examine the three-mode non-Gaussian entanglement for a realistic scenario. In experiments, thermal noise in the initial state $\hat{\rho}_j(n_{th}^j)$ is inevitable, with average thermal photon number $\langle n_{th}^j \rangle=1/(e^{\beta\omega_j}-1)$, and $\beta \omega_j=\hbar\omega_j/(k_B T)$. Here, we refer to the reported tripartite states produced by three-mode SPDC in the superconducting experiment~\cite{prxthreemode}, where the noise temperature is $T=25$ mK and the frequencies of three modes are $\omega_1=2\pi\times 4.2$GHz, $\omega_2=2\pi\times 6.1$GHz, $\omega_3=2\pi\times 7.5$GHz, respectively. In this case, the simplified squeezing parameter $\chi^2(\hat{\rho},\hat{A}_{opt},\hat{M}_1)$ is sufficient to characterize the entanglement, since the achieved effective coupling strength is quite small.
Specifically, the experimental result for $\alpha_p \kappa t \approx 0.016$ is marked as a red star in Fig.~\ref{fig2}, where the parameters are verified by reproducing the reported results in Ref.~\cite{prxthreemode}. As shown in Fig.~\ref{fig2}(a), the simplified nonlinear parameter $\chi^{-2}(\hat{\rho},\hat{A}_{opt},\hat{M}_1)$ (red dashed line) approximates well the QFI $F_Q(\hat{\rho},\hat{A}_{opt})$ (solid black line). Thus, $\chi^{-2}(\hat{\rho},\hat{A}_{opt},\hat{M}_1)$ is adequate to witness entanglement in such range of coupling strengths. Besides this, different separable bounds to characterise entanglement are illustrated in the figure, showing that the metrological bound $B_0$ coincides with the separable bound $B_1$ in this small coupling regime.
\begin{figure}
\caption{The values of the witnesses (a) $I_{\chi}$ and (b) $I_{\mathrm{HZ}}$ as functions of the effective coupling strength and the inverse temperature.}
\label{fig3}
\end{figure}
In summary, Fig.~\ref{fig2}(b) shows that we can characterize the entanglement of the final state over a wide range of both coupling strength and environmental temperature. Full tripartite inseparability is witnessed with larger coupling and lower temperature (top right area) by violating the bound $B_2$ of all three possible $i|jk$ bipartitions. The intermediate area, where $\chi^2(\hat{\rho},\hat{A}_{opt},\hat{M}_1)$ violates the tripartite $a|b|c$ bound $B_1$ but does not violate $B_2$, indicates that there is entanglement among the three modes. States in the bottom left area are not detected as useful for quantum metrology, since $\chi^2(\hat{\rho},\hat{A}_{opt},\hat{M}_1)$ is lower than the metrological bound $B_0$. In addition, as the red star is located in the top right area, metrologically useful full tripartite inseparability is witnessed in the experimentally reported three-mode states~\cite{prxthreemode}.
\section{Comparison to other criteria}
In this section we compare our criterion to the widely used Hillery-Zubairy criterion~\cite{PhysRevA.81.062322}, in which full inseparability is witnessed if $I_\mathrm{HZ}=\mathrm{min}\{I_1,I_2,I_3\}>0$, where $I_i=|\langle\hat{a}_1\hat{a}_2\hat{a}_3\rangle|-\sqrt{\langle \hat{N}_i\rangle \langle \hat{N}_j \hat{N}_k\rangle}$ and $\hat{N}_i=\hat{a}_i^{\dagger}\hat{a}_i$, $i\neq j\neq k\neq i$.
Similarly, we define $I_{\chi}=\chi^{-2}(\hat{\rho},\hat{A}_{opt},\hat{M}_{opt})-B_2$ based on the optimal measurements we proposed, such that a state is fully inseparable if $I_{\chi}>0$. The values of $I_\mathrm{HZ}$ and $I_{\chi}$ are presented in Fig.~\ref{fig3} as functions of the effective coupling strength and the inverse temperature, where the three-mode states are generated under the interaction Hamiltonian (\ref{eqham}). It is shown that $I_\mathrm{HZ}$ fails to detect tripartite inseparability when $\alpha_{p}\kappa t>0.66$, while our witness $I_{\chi}$ works well in a larger coupling regime. This can be explained by the fact that detecting entanglement for large coupling strength requires observing correlations of higher order, which are not considered in the witness $I_\mathrm{HZ}$.
As it is shown in Fig.~\ref{fig3}(a), $I_{\chi}$ fails to witness fully inseparable tripartite entangled states in the area below the dashed black line at the region with very weak coupling and high temperature, where $I_\mathrm{HZ}$ has a better performance [Fig.~\ref{fig3}(b)]. This result can be explained by the fact that our entanglement conditions are tailored to detect metrologically useful entanglement, and that the region where the HZ criterion outperforms our criterion might correspond to states less useful for parameter-estimation tasks.
\section{Conclusions}
Based on the widely used quantifiers for metrological sensitivity, we have obtained an experimentally practical entanglement witness for CV tripartite non-Gaussian states generated by three-mode SPDC. First, the optimal phase-space observable that maximizes the QFI has been analyzed to detect the tripartite non-Gaussian entanglement, where fully inseparable states, biseparable entangled states, and fully separable states can be distinguished. Then, in order to avoid the full quantum state tomography required by the QFI, we constructed a nonlinear squeezing parameter by analytically determining the optimal measurements within arbitrary accessible observables, and demonstrate that it performs as well as the QFI by involving up to third-order moments of phase-space measurements. Moreover, by considering the experimental conditions in a recent three-mode SPDC experiment, we show that our method can detect fully inseparable tripartite non-Gaussian entanglement with only a limited number of measurements. Notably, the level of the nonlinear squeezing parameter quantifies the metrological advantage provided by the examined entangled states. Our results provide an approach to understand and characterize multipartite non-Gaussian entanglement, and paves the way to harness their potential applications in quantum metrology experiments.
\begin{acknowledgments}
This work is supported by the National Natural Science Foundation of China (Grants No.~11975026, No.~12125402, and No.~12147148), Beijing Natural Science Foundation (Grant No.~Z190005), the Key R\&D Program of Guangdong Province (Grant No. 2018B030329001), and the LabEx ENS-ICFP: ANR-10-LABX-0010 / ANR-10-IDEX-0001-02 PSL*. F.-X. S. acknowledges the China Postdoctoral Science Foundation (Grant No. 2020M680186).
\end{acknowledgments}
\appendix
\section{Quantum Fisher Information}\label{qfi}
\subsection{Fisher Information} \label{fisher}
Originally, the Fisher information was introduced in the context of parameter estimation (see Ref.~\cite{RN01} for a review). To infer the value of $\theta$, one performs a measurement $\hat{M}=$ $\left\{\hat{M}_{\mu}\right\}$, which in the most general case is given by a positive operator valued measure (POVM). The Fisher information $F[\hat{\rho}(\theta), \hat{M}]$ quantifies the sensitivity of $n$ independent measurements and gives a bound on the accuracy to determine $\theta$ as $(\Delta \theta)^{2} \geq 1 / (nF[\hat{\rho}(\theta), \hat{M}])$ in the central limit ($n\gg1$). In particular, the Fisher information is defined as \cite{PhysRevLett.72.3439}
\begin{equation}
F[\hat{\rho}(\theta), \hat{M}] \equiv \sum_{\mu} \frac{1}{P(\mu \mid \theta)}\left(\frac{\partial P(\mu \mid \theta)}{\partial \theta}\right)^{2},
\end{equation}
where $P(\mu \mid \theta) \equiv \operatorname{Tr}\left\{\hat{\rho}(\theta) \hat{M}_{\mu}\right\}$ is the probability to obtain the measurement outcome $\mu$ in a measurement of $\hat{M}$ given the state $\hat{\rho}(\theta)$.
The Fisher information for an optimal measurement, i.e., the one that gives the best resolution to determine $\theta$, is called the quantum Fisher information (QFI), and is defined as $F_{Q}[\hat{\rho}(\theta)] \equiv$ $\max _{\hat{M}} F[\hat{\rho}(\theta), \hat{M}] .$ Here, one is interested in distinguishing the state $\hat{\rho}$ from the state $\hat{\rho}({\theta})=e^{-i \hat{A} \theta} \hat{\rho} e^{i \hat{A} \theta}$, obtained by applying a unitary induced by a Hermitian generator $\hat{A}$. With the spectral decomposition $\hat{\rho}=\sum_k p_k|\Psi_k\rangle \langle \Psi_k|$, an explicit expression for $F_Q[\hat{\rho},\hat{A}]$ is given by \cite{PhysRevLett.72.3439}
\begin{equation}
F_{Q}[\hat{\rho}, \hat{A}]=2 \sum_{\substack{k, l\\p_k+p_l\neq 0}} \frac{\left(p_{k}-p_{l}\right)^{2}}{p_{k}+p_{l}}\left|\left\langle\Psi_{k}\right|\hat{A}\left| \Psi_{l}\right\rangle\right|^{2}.
\end{equation}
For pure states, it takes the simple form $ F_{Q}[|\psi\rangle\langle\psi|, \hat{A}]=4(\Delta \hat{A})^{2}. $
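The spectral formula above translates directly into a short numerical routine; the following sketch (with a random placeholder state and a number-like generator, purely for illustration) evaluates $F_Q[\hat{\rho},\hat{A}]$ from an eigendecomposition of $\hat{\rho}$:
\begin{verbatim}
import numpy as np

def qfi(rho, A, tol=1e-12):
    """Quantum Fisher information F_Q[rho, A] from the spectral decomposition:
    F_Q = 2 * sum_{k,l: p_k+p_l>0} (p_k - p_l)^2 / (p_k + p_l) * |<k|A|l>|^2."""
    p, V = np.linalg.eigh(rho)               # eigenvalues p_k, eigenvectors as columns
    A_kl = V.conj().T @ A @ V                # matrix elements <k|A|l>
    F = 0.0
    for k in range(len(p)):
        for l in range(len(p)):
            s = p[k] + p[l]
            if s > tol:
                F += 2.0 * (p[k] - p[l]) ** 2 / s * abs(A_kl[k, l]) ** 2
    return F

# Random placeholder state and a number-like generator (illustration only).
rng = np.random.default_rng(2)
d = 6
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = X @ X.conj().T
rho /= np.trace(rho).real                    # normalize to a valid density matrix
A = np.diag(np.arange(d, dtype=float))
print("F_Q =", qfi(rho, A))
# For a pure state |psi><psi| this reduces to 4*Var(A), as stated above.
\end{verbatim}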
\subsection{The Optimal operator in QFI}\label{optfisher}
Any separable quantum state
$\hat{\rho}_{\mathrm{sep}}=\sum_{n} p_{n} \hat{\rho}_{1}^{n} \otimes \cdots \otimes \hat{\rho}_{N}^{n}$ must satisfy~\cite{pra2016manuel}
\begin{equation}\label{eqfv1}
F_Q\left[\hat{\rho}_{\mathrm{sep}}, \sum_{j=1}^N \hat{A}_j\right] \leq 4 \sum_{j=1}^{N} \mathrm{Var}\left(\hat{A}_{j}\right)_{\hat{\rho}_{\mathrm{sep}}}=B_n,
\end{equation}
where $\hat{A}_i$ is the local operator acting on the reduced state $\hat{\rho}_i$, and $\mathrm{Var}(\hat{A})_{\hat{\rho}}=\langle \hat{A}^2\rangle_{\hat{\rho}}-\langle \hat{A}\rangle^2_{\hat{\rho}}$ denotes the variance. If a state violates this inequality, it cannot be written in the corresponding separable form. In our case, for a generated state with the local operators $\hat{A}_j$, the right-hand side of Eq.~(\ref{eqfv1}) can be characterized by the different bounds, taking the form
\begin{align}
B_1=4\, [\mathrm{Var}(&\hat{A}_1)_{\hat{\rho}_a}+\mathrm{Var}(\hat{A}_2)_{\hat{\rho}_b}+\mathrm{Var}(\hat{A}_3)_{\hat{\rho}_c}], \nonumber \\
B_2=4 \,\mathrm{max}\{&\mathrm{Var}(\hat{A}_1)_{\hat{\rho}_a}+\mathrm{Var}(\hat{A}_2+\hat{A}_3)_{\hat{\rho}_{bc}},\nonumber\\
&\mathrm{Var}(\hat{A}_2)_{\hat{\rho}_b}+\mathrm{Var}(\hat{A}_1+\hat{A}_3)_{\hat{\rho}_{ac}},\nonumber\\
&\mathrm{Var}(\hat{A}_3)_{\hat{\rho}_c}+\mathrm{Var}(\hat{A}_1+\hat{A}_2)_{\hat{\rho}_{ab}}
\}, \nonumber\\
B_3=4\, [\mathrm{Var}(&\hat{A}_1+\hat{A}_2+\hat{A}_3)_{\hat{\rho}}]. \nonumber
\end{align}
If the left-hand side satisfies $F_Q[\hat{\rho},\hat{A}]>B_1$, then the state is not fully separable, i.e. there is entanglement among the three modes. Furthermore, if $F_Q[\hat{\rho},\hat{A}]>B_2$, the state is fully inseparable, which means full tripartite inseparability among the three modes. Finally, $B_3$ gives a bound valid for all physical states, which satisfy $F_Q[\hat{\rho},\hat{A}]\leq B_3$ for arbitrary density matrices $\hat{\rho}$.
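As a minimal numerical sketch of how these bounds can be evaluated (assuming a truncated Fock space, local number operators for the $\hat{A}_j$, and a vacuum placeholder in place of the actual state), the reduced states entering $B_1$ and $B_2$ are obtained by partial traces:
\begin{verbatim}
import numpy as np

def var(rho, A):
    """Var(A)_rho = Tr(rho A^2) - Tr(rho A)^2 for a density matrix rho."""
    m1 = np.trace(rho @ A).real
    return np.trace(rho @ A @ A).real - m1 ** 2

d = 4                                             # Fock cutoff per mode (assumption)
n_op = np.diag(np.arange(d, dtype=float))         # local number operator
A1 = A2 = A3 = n_op
I = np.eye(d)
vac = np.zeros(d**3); vac[0] = 1.0
rho = np.outer(vac, vac)                          # placeholder state |000><000|

R = rho.reshape(d, d, d, d, d, d)                 # indices (i1,i2,i3, j1,j2,j3)
r1 = np.einsum('iabjab->ij', R)                   # trace out modes 2 and 3
r2 = np.einsum('aibajb->ij', R)
r3 = np.einsum('abiabj->ij', R)
r23 = np.einsum('aijakl->ijkl', R).reshape(d*d, d*d)   # trace out mode 1
r13 = np.einsum('iajkal->ijkl', R).reshape(d*d, d*d)   # trace out mode 2
r12 = np.einsum('ijakla->ijkl', R).reshape(d*d, d*d)   # trace out mode 3

B1 = 4 * (var(r1, A1) + var(r2, A2) + var(r3, A3))
B2 = 4 * max(var(r1, A1) + var(r23, np.kron(A2, I) + np.kron(I, A3)),
             var(r2, A2) + var(r13, np.kron(A1, I) + np.kron(I, A3)),
             var(r3, A3) + var(r12, np.kron(A1, I) + np.kron(I, A2)))
A_full = (np.kron(np.kron(A1, I), I) + np.kron(np.kron(I, A2), I)
          + np.kron(np.kron(I, I), A3))
B3 = 4 * var(rho, A_full)
print("B1 =", B1, " B2 =", B2, " B3 =", B3)       # all zero for the vacuum placeholder
\end{verbatim}
With the actual state in place of the placeholder, these quantities can be compared directly with the QFI as described above.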
The metrological witness for entanglement depends on the choice of the local operator $\hat{A}$. Certain choices of operators may be better suited than others to detect entanglement in a given state $\hat{\rho}$. In order to find an optimal operator for a family of accessible $\mathbf{\hat{A}}_j=[\hat{A}_j^{(1)},\hat{A}_j^{(2)}\cdots]^T$, the local operators $\hat{A}_j$ are denoted by the expression $\sum_{m=1}c_j^{(m)}\hat{A}_j^{(m)}=\mathbf{c}_j\cdot\mathbf{\hat{A}}_j$, and the full generator of the unitary transformation $\hat{A}(\mathbf{c})=\mathbf{c}_1\cdot \mathbf{\hat{A}}_1+\mathbf{c}_2\cdot \mathbf{\hat{A}}_2+\mathbf{c}_3\cdot \mathbf{\hat{A}}_3$ is characterized by the vector $\mathbf{c}=[\mathbf{c}_1,\mathbf{c}_2,\mathbf{c}_3]^T$. According to Eq.~\ref{eqfv1}, the quantity
\begin{equation}
W[\hat{\rho},\hat{A}(\mathbf{c})]=F_Q[\hat{\rho},\hat{A}(\mathbf{c})]-4\sum_{j=1}^N \mathrm{Var}(\mathbf{c}_j\cdot\mathbf{\hat{A}}_j)_{\hat{\rho}}
\end{equation}
must be nonpositive for arbitrary choices of $\mathbf{c}$ whenever the state $\rho$ is separable. We can now maximize $W[\hat{\rho},\hat{A}(\mathbf{c})]$ by variation of $\mathbf{c}$ to obtain an optimized entanglement witness for the state $\rho$, given the sets of available operators contained in $\mathcal{A}=\{\mathbf{\hat{A}}_1,\mathbf{\hat{A}}_2,\mathbf{\hat{A}}_3\}$.
To this aim, let us first express the quantum Fisher information in matrix form as $F_Q[\hat{\rho},\hat{A}(\mathbf{c})]=\mathbf{c}^T Q_{\rho}^{\mathcal{A}}c$, where the spectral decomposition $\hat{\rho}=\sum_k p_k|\Psi_k\rangle\langle\Psi_k|$ defines $(Q_{\hat{\rho}}^{\mathcal{A}})_{i j}^{m n}=$ $2 \sum_{k, l} \frac{(p_{k}-p_{l})^{2}}{p_{k}+p_{l}}\langle\Psi_{k}|\hat{A}_{i}^{(m)}| \Psi_{l}\rangle\langle\Psi_{l}|\hat{A}_{j}^{(n)}| \Psi_{k}\rangle$ element-wise and the sum extends over all pairs with $p_k+p_l\neq 0$. The indices $i$ and $j$ refer to different parties, while the indices $m$ and $n$ label the respective local sets of observables. Similarly, we can express the elements of the covariance matrix of $\hat{\rho}$ as $(\Gamma_{\hat{\rho}}^{\mathcal{A}})_{i j}^{m n}=\mathrm{Cov}(\hat{A}_{i}^{(m)}, \hat{A}_{j}^{(m)})_{\hat{\rho}}$. If the above covariance matrix is evaluated after replacing $\hat{\rho}$ with $\Pi(\hat{\rho})=\hat{\rho}_{1} \otimes \cdots \otimes \hat{\rho}_{N}$, where $\hat{\rho}_i$ is the reduced density operator, we arrive at the expression for the local variances, $\sum_{j=1}^{N} \mathrm{Var}(\mathbf{c}_{j}\cdot \hat{\mathbf{A}}_{j})_{\hat{\rho}}=\mathbf{c}^{T} \Gamma_{\Pi(\hat{\rho})}^{\mathcal{A}} \mathbf{c}$. Combining this with expression for the quantum Fisher matrix, the separability criterion reads
\begin{equation}\label{eqW1}
W[\hat{\rho}, \hat{A}(\mathbf{c})]=\mathbf{c}^{T}\left(Q_{\hat{\rho}}^{\mathcal{A}}-4 \Gamma_{\Pi(\hat{\rho})}^{\mathcal{A}}\right) \mathbf{c} \leq0.
\end{equation}
An entanglement witness is therefore found when the matrix $Q_{\hat{\rho}}^{\mathcal{A}}-4 \Gamma_{\Pi(\hat{\rho})}^{\mathcal{A}}$ has at least one positive eigenvalue. The criterion (\ref{eqW1}) can be equivalently stated as $\lambda_{\mathrm{max}}(Q_{\hat{\rho}}^{\mathcal{A}}-4 \Gamma_{\Pi(\hat{\rho})}^{\mathcal{A}}) \leq 0$, where $\lambda_{\mathrm{max}}(M)$ denotes the largest eigenvalue of the matrix $M$.
For pure states $\hat{\rho}=|\Psi\rangle\langle\Psi|$, the quantum Fisher matrix coincides, up to a factor of 4 , with the covariance matrix, i.e., $Q_{|\Psi\rangle}^{\mathcal{A}}=4 \Gamma_{|\Psi\rangle}^{\mathcal{A}}$. Thus, according to Eq.~(\ref{eqW1}), every pure separable state must satisfy the condition
\begin{equation}
\Gamma_{|\Psi\rangle}^{\mathcal{A}}-\Gamma_{\Pi(|\Psi\rangle)}^{\mathcal{A}} \leq 0.
\end{equation}
A common choice for such a set is the local position operators $\hat{x}_j$ and momentum operators $\hat{p}_j$. As our state is non-Gaussian, its characteristics cannot be sufficiently uncovered by measuring linear observables. We need to extend the family of accessible operators by adding second-order nonlinear local operators: $\mathbf{\hat{A}}_j=[\hat{x}_j,\hat{p}_j,\hat{x}_j^2,\hat{p}_j^2,(\hat{x}_j\hat{p}_j+\hat{p}_j\hat{x}_j)/2]^T$.
The optimum is given by the maximum eigenvalue of the matrix $(Q_{\hat{\rho}}^{\mathcal{A}}-4 \Gamma_{\Pi(\hat{\rho})}^{\mathcal{A}})$, and the optimal direction is given by the corresponding eigenvector. The result is $\mathbf{c}_j=[0,0,1,1,0]$, which indicates that the optimal operators take the form $\hat{A}_j^{opt}=\hat{x}_j^2+\hat{p}_j^2$. The full optimal operator is $\hat{A}_{opt}=\sum_{j=1}^3 \hat{A}_j^{opt}=\hat{x}_1^2+\hat{p}_1^2+\hat{x}_2^2+\hat{p}_2^2+\hat{x}_3^2+\hat{p}_3^2=4(\hat{a}_1^{\dagger}\hat{a}_1+\hat{a}_2^{\dagger}\hat{a}_2+\hat{a}_3^{\dagger}\hat{a}_3)+6$, where we take
$\hat{x}_j=\hat{a}_j+\hat{a}_j^{\dagger}$, $\hat{p}_j=-i(\hat{a}_j-\hat{a}_j^{\dagger})$ for $j=1,2,3$.
As the constant in the operator $\hat{A}$ does not influence the optimal result, the optimal operator can be rewritten as $\hat{A}_{opt}=\hat{N}=\hat{N}_1+\hat{N}_2+\hat{N}_3$.
\section{The optimal squeezing parameter}\label{optsqueezing}
\subsection{Squeezing parameter}
The Fisher information has a lower bound, which is given by
\begin{equation}
\chi^{-2}[\hat{\rho}(\theta),\hat{M}] \leq F[\hat{\rho}(\theta),\hat{M}] \leq F_{Q}[\hat{\rho}, \hat{A}],
\end{equation}
where $\chi^{-2}[\hat{\rho}(\theta),\hat{M}]$ is the reciprocal of the squeezing parameter
and $\hat{\rho}({\theta})=e^{-i \hat{A} \theta} \hat{\rho} e^{i \hat{A} \theta}$ is obtained by applying a unitary evolution generated by $\hat{A}$ to the initial state $\hat{\rho}$.
The chain of inequalities is saturable by an optimal measurement observable $\hat{M}$. The squeezing parameter is also used to estimate an unknown phase $\theta$ encoded in a quantum state $\hat{\rho}(\theta)$ by the method of moments (see Ref.~\cite{RN01} for details). In the central limit ($n\gg1$), the phase uncertainty is given by $\left(\Delta \theta_{\text {est }}\right)^{2}=\chi^{2}[\hat{\rho}(\theta), \hat{M}] / n$, where $\chi^{2}[\hat{\rho}(\theta), \hat{M}]=$ $(\Delta \hat{M})_{\hat{\rho}(\theta)}^{2}\left(d\langle\hat{M}\rangle_{\hat{\rho}(\theta)} / d \theta\right)^{-2}$ is the squeezing parameter of $\hat{\rho}$ associated with the measurement of the observable $\hat{M}$. Specifically, for the unitary evolution $\hat{\rho}({\theta})=e^{-i \hat{A} \theta} \hat{\rho} e^{i \hat{A} \theta}$, the squeezing parameter is a property of the initial state $\hat{\rho}$, which takes the form
\begin{equation}
\chi^2[\hat{\rho},\hat{A},\hat{M}]=\frac{(\Delta\hat{M})_{\hat{\rho}}^2}{\left|\langle[\hat{A}, \hat{M}]\rangle_{\hat{\rho}}\right|^{2}},
\end{equation}
with the unitary evolution $\hat{A}$ and observable $\hat{M}$.
\subsection{The Optimal observable in nonlinear quadrature parameters}
The analytical optimization is over arbitrary linear combinations of accessible operators $(\hat{M}=\mathbf{n}\cdot \mathbf{\hat{M}}=\sum_l n_l \hat{M}_l)$, where $\mathbf{\hat{M}}$ collects the different observables.
If we only consider the first-order operators, the set $\mathbf{\hat{M}}$ can be written as
\begin{equation}
\mathbf{\hat{M}^{(1)}}=[\hat{x}_1,\hat{p}_1,\hat{x}_2,\hat{p}_2,\hat{x}_3,\hat{p}_3],
\end{equation}
If we add the second-order operators, the set $\mathbf{\hat{M}}$ can be expanded over the following 27 terms
\begin{equation}
\begin{aligned}
\mathbf{\hat{M}^{(2)}}=\{
&\mathbf{\hat{M}^{(1)}},[\hat{x}_1^2,\hat{p}_1^2,\hat{x}_2^2,\hat{p}_2^2,\hat{x}_3^2,\hat{p}_3^2,\hat{x}_1\hat{p}_1,\hat{x}_1 \hat{x}_2,\\
&\hat{x}_1 \hat{p}_2,\hat{x}_1\hat{x}_3,\hat{x}_1\hat{p}_3,
\hat{x}_2\hat{p}_1,\hat{p}_1\hat{p}_2,\hat{x}_3\hat{p}_1,\hat{p}_1\hat{p}_3,\\
&\hat{x}_2\hat{p}_2,\hat{x}_2\hat{x}_3,\hat{x}_2\hat{p}_3,\hat{p}_2\hat{x}_3,\hat{p}_2\hat{p}_3,\hat{x}_3\hat{p}_3]\} \;,
\end{aligned}
\end{equation}
If we add the third-order operators, the set can be written as the following 83 terms
\begin{equation}
\begin{aligned}
\hat{\mathbf{M}}^{(3)}=&\{\hat{\mathbf{M}}^{(1)}, \hat{\mathbf{M}}^{(2)},[\hat{x}_{1}^{3}, \hat{p}_{1}^{3}, \hat{x}_{2}^{3}, \hat{p}_{2}^{3}, \hat{x}_{3}^{3}, \hat{p}_{3}^{3}, \hat{x}_{1}^{2} \hat{p}_{1},\\
& \hat{x}_{1}^{2} \hat{x}_{2}, \hat{x}_{1}^{2} \hat{p}_{2}, \hat{x}_{1}^{2} \hat{x}_{3}, \hat{x}_{1}^{2} \hat{p}_{3}, \hat{p}_{1}^{2} \hat{x}_{1}, \hat{p}_{1}^{2} \hat{x}_{2}, \hat{p}_{1}^{2} \hat{p}_{2}, \\
& \hat{p}_{1}^{2} \hat{x}_{3}, \hat{p}_{1}^{2} \hat{p}_{3}, \hat{x}_{2}^{2} \hat{x}_{1}, \hat{x}_{2}^{2} \hat{p}_{1}, \hat{x}_{2}^{2} \hat{p}_{2}, \hat{x}_{2}^{2} \hat{x}_{3}, \hat{x}_{2}^{2} \hat{p}_{3}, \\
& \hat{p}_{2}^{2} \hat{x}_{1}, \hat{p}_{2}^{2} \hat{p}_{1}, \hat{p}_{2}^{2} \hat{x}_{2}, \hat{p}_{2}^{2} \hat{x}_{3}, \hat{p}_{2}^{2} \hat{p}_{3}, \hat{x}_{3}^{2} \hat{x}_{1}, \hat{x}_{3}^{2} \hat{p}_{1}, \\
& \hat{x}_{3}^{2} \hat{x}_{2}, \hat{x}_{3}^{2} \hat{p}_{2}, \hat{x}_{3}^{2} \hat{p}_{3}, \hat{p}_{3}^{2} \hat{x}_{1}, \hat{p}_{3}^{2} \hat{p}_{1}, \hat{p}_{3}^{2} \hat{x}_{2}, \hat{p}_{3}^{2} \hat{p}_{2}, \\
& \hat{p}_{3}^{2} \hat{x}_{3}, \hat{x}_{1} \hat{x}_{2} \hat{x}_{3}, \hat{x}_{1} \hat{x}_{2} \hat{p}_{1}, \hat{x}_{1} \hat{x}_{2} \hat{p}_{2}, \hat{x}_{1} \hat{x}_{2} \hat{p}_{3}, \\
& \hat{x}_{1} \hat{x}_{3} \hat{p}_{1}, \hat{x}_{1} \hat{x}_{3} \hat{p}_{2}, \hat{x}_{1} \hat{x}_{3} \hat{p}_{3}, \hat{x}_{1} \hat{p}_{1} \hat{p}_{2}, \hat{x}_{1} \hat{p}_{1} \hat{p}_{3}, \\
& \hat{x}_{1} \hat{p}_{2} \hat{p}_{3}, \hat{x}_{2} \hat{x}_{3} \hat{p}_{1}, \hat{x}_{2} \hat{x}_{3} \hat{p}_{2}, \hat{x}_{2} \hat{x}_{3} \hat{p}_{3}, \hat{x}_{2} \hat{p}_{1} \hat{p}_{2}, \\
& \hat{x}_{2} \hat{p}_{1} \hat{p}_{3}, \hat{x}_{2} \hat{p}_{2} \hat{p}_{3}, \hat{x}_{3} \hat{p}_{1} \hat{p}_{2}, \hat{x}_{3} \hat{p}_{1} \hat{p}_{3}, \hat{x}_{3} \hat{p}_{2} \hat{p}_{3}, \\
&\hat{p}_{1} \hat{p}_{2} \hat{p}_{3}]\} \;.
\end{aligned}
\end{equation}
From these we obtain:
\begin{equation}
\begin{aligned}
\chi^{-2}[\hat{\rho},\hat{A}_{opt},\hat{M}]&=\frac{\left|\langle[\hat{A}_{opt}, \hat{M}]\rangle_{\hat{\rho}}\right|^{2}}{\operatorname{Var}[ \hat{M}]_{\hat{\rho}}}\\
&=\mathbf{n}^T\mathbf{C}[\hat{\rho},\mathbf{\hat{M}}]\mathbf{\Gamma}[\hat{\rho},\mathbf{\hat{M}}]^{-1}\mathbf{n},
\end{aligned}
\end{equation}
where $\mathbf{\Gamma}[\hat{\rho},\mathbf{\hat{M}}]_{kl}=\operatorname{Cov}(\hat{M}_k,\hat{M}_l)$ and $\mathbf{C}[\hat{\rho},\mathbf{\hat{M}}]_{kl}=\langle[\hat{A}_{opt},\hat{M}_k]\rangle \langle[\hat{M}_l,\hat{A}_{opt}]\rangle$. The optimum is given by the maximum eigenvalue of the matrix $\mathbf{C}[\hat{\rho},\mathbf{\hat{M}}]\mathbf{\Gamma}[\hat{\rho},\mathbf{\hat{M}}]^{-1} $, and the optimal direction is given by the corresponding eigenvector. We show the results in Fig.~\ref{sfig1}, where we find that the optimal value is zero when considering only first- and second-order observables. Instead, if we add third-order measurements, the optimum of the nonlinear parameter $\chi^{-2}[\hat{\rho},\hat{A}_{opt},\hat{M}_{opt}]$ can be nearly equal to the QFI $F_Q[\hat{\rho},\hat{A}_{opt}]$, which indicates that a third-order $\hat{M}$ is the optimal measurement in this case. The optimal measurement takes the form
\begin{equation}
\hat{M}_{opt}=\hat{x}_1 \hat{x}_2 \hat{x}_3-\hat{x}_1 \hat{p}_2 \hat{p}_3-\hat{p}_1 \hat{x}_2 \hat{p}_3-\hat{p}_1 \hat{p}_2 \hat{x}_3.
\end{equation}
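The eigenvalue optimization described above can also be carried out numerically; the following is a minimal sketch, in which the single-mode toy state and the small operator set at the end are illustrative placeholders rather than the three-mode SPDC state:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def optimal_chi_inv2(rho, A, M_ops):
    """Maximize chi^{-2} = |<[A, M]>|^2 / Var(M) over M = sum_l n_l M_l.
    Builds C_kl = <[A, M_k]><[M_l, A]> and the symmetrized covariance matrix
    Gamma_kl = Cov(M_k, M_l) in the state rho, then solves the generalized
    eigenproblem C n = lambda Gamma n; the largest lambda is the optimal chi^{-2}."""
    ev = lambda O: np.trace(rho @ O)
    comm = np.array([ev(A @ M - M @ A) for M in M_ops])         # <[A, M_k]>
    C = np.outer(comm, comm.conj())                             # Hermitian, rank one
    m = len(M_ops)
    Gamma = np.zeros((m, m))
    for k in range(m):
        for l in range(m):
            Gamma[k, l] = (0.5 * ev(M_ops[k] @ M_ops[l] + M_ops[l] @ M_ops[k])
                           - ev(M_ops[k]) * ev(M_ops[l])).real
    vals, vecs = eigh(C, Gamma)                                 # ascending eigenvalues
    return vals[-1], vecs[:, -1]

# Single-mode toy demo (placeholder only, not the three-mode SPDC state).
d = 6
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
x, p, N = a + a.T, -1j * (a - a.T), a.T @ a
psi = np.zeros(d); psi[0] = psi[1] = 1 / np.sqrt(2)             # (|0> + |1>)/sqrt(2)
rho = np.outer(psi, psi)
chi_inv2, n_opt = optimal_chi_inv2(rho, N, [x, p, x @ x, p @ p])
print("optimal chi^{-2} =", chi_inv2)
print("coefficients n   =", np.round(n_opt, 3))
\end{verbatim}
In principle, supplying the 83 third-order operators listed above as \texttt{M\_ops}, together with the SPDC state, reproduces the optimization leading to $\hat{M}_{opt}$.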
\begin{figure}
\caption{Evolution of the optimal nonlinear parameters $\chi^{-2}$ obtained from first-, second-, and third-order observables, compared with the QFI $F_Q[\hat{\rho},\hat{A}_{opt}]$.}
\label{sfig1}
\end{figure}
\section{Bound of metrological useful resource}\label{C}
It has been clarified that nonclassicality is a necessary resource to achieve a quantum advantage in quantum metrology tasks~\cite{PhysRevLett.122.040503}. Therefore, we distinguish the states that are useful resources for quantum metrology with the help of nonclassicality.
To this aim, we will make use of the following two properties of the QFI:
(i) For pure states, such as coherent states $|\alpha\rangle$, the quantum Fisher information becomes proportional to the variance of the generator:
\begin{equation}
\begin{aligned}
F_{Q}(|\alpha\rangle, \hat{G})&=4\left(\Delta_{\alpha} \hat{G}\right)^{2}\\
&=4\left(\left\langle\alpha\left|\hat{G}^{2}\right| \alpha\right\rangle-\langle\alpha|\hat{G}| \alpha\rangle^{2}\right).
\end{aligned}
\end{equation}
(ii) The quantum Fisher information is convex in the state. Consider classical states
\begin{equation}
\rho_{\text {class }}=\int d^{2} \alpha P_{\text {class }}(\alpha)|\alpha\rangle\langle\alpha|,
\end{equation}
where $P_{\text {class }}(\alpha)$ is a non-negative function no more singular than a delta function.
Based on these properties, the bound for the quantum Fisher information of classical states reads~\cite{RN612}:
\begin{equation}\label{nonclass}
F_{Q}\left(\rho_{\text {class }}, \hat{G}\right) \leq \int d^{2} \alpha P_{\text {class }}(\alpha) F_{Q}(|\alpha\rangle, \hat{G}).
\end{equation}
Taking the generator $\hat{G}=\hat{a}^{\dagger}\hat{a}$ into account, the right-hand side of inequality~(\ref{nonclass}) is then given by the mean number of photons in the coherent states:
\begin{equation}
\begin{aligned}
F_{Q}\left(\left|\alpha_{\hat{\rho}}\right\rangle, \hat{a}^{\dagger} \hat{a}\right)&=4\Delta_{\alpha}^{2} (\hat{a}^{\dagger} \hat{a})\\
&=4\left\langle\alpha_{\hat{\rho}}\left|\hat{a}^{\dagger} \hat{a}\right| \alpha_{\hat{\rho}}\right\rangle\\
&=4 \operatorname{Tr}\left(\hat{\rho} \hat{a}^{\dagger} \hat{a}\right).
\end{aligned}
\end{equation}
So Eq.~(\ref{nonclass}) can be rewritten for classical states as
\begin{equation}
F_Q(\hat{\rho},\hat{a}^{\dagger}\hat{a})\leq 4\operatorname{Tr}(\hat{\rho}\hat{a}^{\dagger}\hat{a}).
\end{equation}
This approach is also suitable for a classical multimode system with the generators $\hat{G}=\hat{N}=\hat{a}_1^{\dagger}\hat{a}_1+\hat{a}_2^{\dagger}\hat{a}_2+\hat{a}_3^{\dagger}\hat{a}_3$, as the QFI for separable coherent states is given by:
\begin{equation}
\begin{aligned}
F_{Q}\left(\left|\alpha_1\alpha_2\alpha_3\right\rangle_{\hat{\rho}}, \hat{N}\right)&=
4\left(\Delta_{\hat{\rho}} \hat{N}\right)^{2}\\
&=
4\left\langle\alpha_1\alpha_2\alpha_3\left|\hat{N}\right| \alpha_1\alpha_2\alpha_3\right\rangle_{\hat{\rho}}\\
&=4 \operatorname{tr}\left(\hat{\rho} \hat{N}\right).
\end{aligned}
\end{equation}
So the bound for QFI of classical multimode system reads:
\begin{equation}\label{fqn}
F_Q(\hat{\rho},\hat{N})\leq 4\operatorname{Tr}(\hat{\rho}\hat{N}).
\end{equation}
The violation of Eq.~(\ref{fqn}) means the state is nonclassical, which indicates the state is a useful resource for quantum metrology.
\section{The simplified Squeezing Parameter}\label{simplesqueezing}
In the above, we obtained the optimal nonlinear squeezing parameter $\chi^{-2}(\hat{\rho},\hat{A}_{opt},\hat{M}_{opt})$, which requires at least four observables for $\hat{M}_{opt}$ and four for $[\hat{A}_{opt},\hat{M}_{opt}]$ in the three-mode SPDC systems:
\begin{align}
\hat{M}_{opt}=&\hat{x}_1\hat{x}_2\hat{x}_3-\hat{x}_1\hat{p}_2\hat{p}_3-\hat{p}_1 \hat{x}_2 \hat{p}_3-\hat{p}_1\hat{p}_2\hat{x}_3,\nonumber\\
[\hat{A}_{opt},\hat{M}_{opt}]=&3i(\hat{p}_1\hat{x}_2\hat{x}_3+\hat{x}_1\hat{p}_2\hat{x}_3+\hat{x}_1\hat{x}_2\hat{p}_3\\
&-\hat{p}_1\hat{p}_2\hat{p}_3). \nonumber
\end{align}
In the following, we want to simplify the squeezing parameter to be more easily accessible in experiments.
One collective measurement can be written as
$\hat{Q}_j=[\hat{x}_1\sin(\theta_1^j)+\hat{p}_1\cos(\theta_1^j)]\times[\hat{x}_2\sin(\theta_2^j)+\hat{p}_2\cos(\theta_2^j)]\times[\hat{x}_3\sin(\theta_3^j)+\hat{p}_3\cos(\theta_3^j)]$, where $\theta_j\in [0,2\pi)$.
Our aim is to use fewer collective measurements to obtain the squeezing parameter that can be used to witness entanglement.
To obtain the optimal simplified squeezing parameter, we need to consider both $\hat{M}$ and $[\hat{A}_{opt},\hat{M}]$, where $\hat{M}$ can be written in the form $\hat{M}=\sum_{j=1}^n \hat{Q}_j$ with $n$ collective measurements.
By optimizing the parameters $\theta_i^j$ in $\hat{M}$ for a fixed number of collective measurements, we find that
at least one observable is needed in $\hat{M}$. There are four optimal single-observable choices of $\hat{M}$, which are shown in the following:
\begin{equation}
\begin{aligned}
\hat{M}_1^{(1)}&=\hat{x}_1\hat{x}_2\hat{x}_3,\\
[\hat{A}_{opt},\hat{M}_1^{(1)}]&=i(\hat{p}_1\hat{x}_2\hat{x}_3+\hat{x}_1\hat{p}_2\hat{x}_3+\hat{x}_1\hat{x}_2\hat{p}_3);\\
\hat{M}_1^{(2)}&=\hat{x}_1\hat{p}_2\hat{p}_3,\\
[\hat{A}_{opt},\hat{M}_1^{(2)}]&=i(\hat{p}_1\hat{p}_2\hat{p}_3-\hat{x}_1\hat{x}_2\hat{p}_3-\hat{x}_1\hat{p}_2\hat{x}_3); \\
\hat{M}_1^{(3)}&=\hat{p}_1\hat{x}_2\hat{p}_3,\\
[\hat{A}_{opt},\hat{M}_1^{(3)}]&=i(\hat{p}_1\hat{p}_2\hat{p}_3-\hat{x}_1\hat{x}_2\hat{p}_3-\hat{p}_1\hat{x}_2\hat{x}_3); \\
\hat{M}_1^{(4)}&=\hat{p}_1\hat{p}_2\hat{x}_3,\\
[\hat{A}_{opt},\hat{M}_1^{(4)}]&=i(\hat{p}_1\hat{p}_2\hat{p}_3-\hat{p}_1\hat{x}_2\hat{x}_3-\hat{x}_1\hat{p}_2\hat{x}_3).
\end{aligned}
\end{equation}
In the main text, we have set $\hat{M}_1=\hat{M}_1^{(1)}$ and obtained the corresponding results. Please note that all of the simplified nonlinear parameters $\chi^{-2}(\hat{M}_1)$ behave the same when choosing $\hat{M}_1$ as each of the four optimal observables.
When considering two observables in $\hat{M}$, the optimal $\hat{M}$ is given by any two of the four terms in $\hat{M}_{opt}$; for example, one of the optimal results is:
\begin{align}
\hat{M}_2=&\hat{x}_1\hat{x}_2\hat{x}_3-\hat{x}_1\hat{p}_2\hat{p}_3,\nonumber\\
[\hat{A}_{opt},\hat{M}_2]=&i(\hat{p}_1\hat{x}_2\hat{x}_3+2\hat{x}_1\hat{p}_2\hat{x}_3+2\hat{x}_1\hat{x}_2\hat{p}_3\\
&-\hat{p}_1\hat{p}_2\hat{p}_3). \nonumber
\end{align}
When considering three observables, the optimal $\hat{M}$ is given by any three of the four terms in $\hat{M}_{opt}$; for example, one of the optimal results is:
\begin{align}
\hat{M}_3=&\hat{x}_1\hat{x}_2\hat{x}_3-\hat{x}_1\hat{p}_2\hat{p}_3-\hat{p}_1 \hat{x}_2 \hat{p}_3,\nonumber\\
[\hat{A}_{opt},\hat{M}_3]=&i(2\hat{p}_1\hat{x}_2\hat{x}_3+2\hat{x}_1\hat{p}_2\hat{x}_3+3\hat{x}_1\hat{x}_2\hat{p}_3\\&-2\hat{p}_1\hat{p}_2\hat{p}_3).\nonumber
\end{align}
The results of the simplified nonlinear parameters with different observables are shown in Fig.~\ref{fig1}.
\end{document}
|
\begin{document}
\date{\today}
\title{On a classification of Killing vector fields on a tangent bundle with
$g$-natural metric.}
\author{Stanis\l aw Ewert-Krzemieniewski (Szczecin)}
\maketitle
\begin{abstract}
The tangent bundle of a Riemannian manifold $(M,g)$ with a non-degenerate $g$-natural metric $G$ that admits a Killing vector field is investigated. Using Taylor's formula, $(TM,G)$ is decomposed into four classes that are investigated separately. The equivalence of the existence of a Killing vector field on $M$ and on $TM$ is proved.
\textbf{Mathematics Subject Classification:} Primary 53B20, 53C07; secondary 53B21, 55C25.
\textbf{Key words}: Riemannian manifold, tangent bundle, $g$-natural metric, Killing vector field, non-degenerate metric
\end{abstract}
\section{Introduction}
Geometry of the tangent bundle goes back to 1958, when Sasaki published his paper (\cite{S}). Given a Riemannian metric $g$ on a differentiable manifold $M,$ he constructed a Riemannian metric $G$ on the tangent bundle $TM$ of $M,$ known today as the Sasaki metric. Since then, many topics in the geometry of the tangent bundle have been studied by many geometers. The Killing vector fields on $(TM,G)$ were studied in (\cite{Abbassi 2003}), (\cite{Tanno 1}) and (\cite{Tanno 2}) with $G$ being the Cheeger-Gromoll metric $g^{CG}$, the complete lift $g^{c}$ and the Sasaki metric $g^{S},$ respectively, obtained from a metric $g$ on the base manifold $M$. Similar results were obtained independently in (\cite{P}). All these metrics belong to the class of metrics on $TM$ known as $g$-natural metrics, constructed in (\cite{KS}), see also (\cite{Abb 2005 b}). These metrics can be regarded as jets of a Riemannian metric $g$ on a manifold $M$ (\cite{A}).
In this paper we develop the method of Tanno (\cite{Tanno 2}) to investigate Killing vector fields on $TM$ with an arbitrary non-degenerate $g$-natural metric. The method applies Taylor's formula to the components of a vector field that is supposed to be an infinitesimal affine transformation, in particular an infinitesimal isometry. An infinitesimal affine transformation is determined by the values of its components and their first partial derivatives at a point (\cite{KN}, p. 232). It turns out that, by applying Taylor's formula, there are at most four ``generators'' of the infinitesimal isometry: two vectors and two tensors of type $(1,1).$
The paper is organized as follows. In Chapter 2 we describe the conventions and give the basic formulas we shall need. We also give a short resum\'{e} on the tangent bundle of a Riemannian manifold. In Chapter 3 we calculate the Lie derivative of a $g$-natural metric $G$ on $TM$ in terms of horizontal and vertical lifts of vector fields from $M$ to $TM.$ Furthermore, we obtain the Lie derivative of $G$ with respect to an arbitrary vector field in terms of an adapted frame. By applying Taylor's formula to the Killing vector field on a neighbourhood of the set $M\times \{0\}$ we obtain a series of conditions relating the components and their covariant derivatives. Finally, we prove some lemmas of a general character. It is worth mentioning that at this level there is a restriction requiring one of the generators to be non-zero. Further restrictions of this kind will appear later on.
In Chapter 4, making use of these conditions and lemmas, we split the non-degenerate $g$-natural metrics on $TM$ into four classes (Theorem \ref{Splitting theorem}). For each such class further properties are proved separately. Moreover, a complete structure of Killing vector fields on $TM$ for some subclasses is given (Theorems \ref{Structure 1} and \ref{Structure 2}).
As a consequence of the splitting theorem, together with Theorem \ref{Lift prop 2}, we obtain our main result:
\begin{theorem}
\label{mine}If the tangent bundle of a Riemannian manifold $(M,g)$, $\dim M>2,$
with a $g$-natural, non-degenerate metric $G$ admits a Killing vector
field, then there exists a Killing vector field on $M.$
Conversely, any Killing vector field $X$ on a Riemannian manifold $(M,g)$
gives rise to a Killing vector field $Z$ on its tangent bundle endowed
with a $g$-natural metric. Precisely, $Z$ is the complete lift of $X.$
\end{theorem}
In the next section some classical lifts of tensor fields from $(M,g)$ to $(TM,G)$ are discussed.
Finally, in the Appendix we collect some known facts and theorems that we
use throughout the paper and also prove lemmas of a general character.
Throughout the paper all manifolds under consideration are smooth and Hausdorff. The metric $g$ of the base manifold $M$ is always assumed to be a Riemannian one.
The computations in local coordinates were partially carried out and checked using MathTensor\texttrademark\ and Mathematica\textregistered\ software.
\section{Preliminaries}
\subsection{Conventions and basic formulas}
Let $(M,g)$ be a pseudo-Riemannian manifold of dimension $n$ with metric $g.$
The Riemann curvature tensor $R$ is defined by
\begin{equation*}
R(X,Y)=\nabla _{X}\nabla _{Y}-\nabla _{Y}\nabla _{X}-\nabla _{\left[ X,Y
\right] }.
\end{equation*}
In a local coordinate neighbourhood $(U,(x^{1},...,x^{n}))$ its components
are given by
\begin{multline*}
R(\partial _{i},\partial _{j})\partial _{k}=R(\partial _{i},\partial
_{j},\partial _{k})=R_{kji}^{r}\partial _{r}= \\
\left( \partial _{i}\Gamma _{jk}^{r}-\partial _{j}\Gamma _{ik}^{r}+\Gamma
_{is}^{r}\Gamma _{jk}^{s}-\Gamma _{js}^{r}\Gamma _{ik}^{s}\right) \partial
_{r},
\end{multline*}
where $\partial _{k}=\frac{\partial }{\partial x^{k}}$ and$\ \Gamma
_{jk}^{r} $ are the Christoffel symbols of the Levi-Civita connection $
\nabla .$ We have
\begin{equation}
\partial _{l}g_{hk}=g_{hk;l}=\Gamma _{hl}^{r}g_{rk}+\Gamma _{kl}^{r}g_{rh}.
\label{Conv3}
\end{equation}
The Ricci identity is
\begin{equation}
\nabla _{i}\nabla _{j}X_{k}-\nabla _{j}\nabla
_{i}X_{k}=X_{k,ji}-X_{k,ij}=-X^{s}R_{skji}. \label{Conv4}
\end{equation}
The Lie derivative of a metric tensor $g$ is given by
\begin{equation}
\left( L_{X}g\right) \left( Y,Z\right) =g\left( \nabla _{Y}X,Z\right)
+g\left( Y,\nabla _{Z}X\right) \label{Conv4a}
\end{equation}
for all vector fields $X,$ $Y,$ $Z$ on $M.$ In local coordinates $
(U,(x^{1},...,x^{n}))$\ we get
\begin{equation*}
\left( L_{X^{r}\partial _{r}}g\right) _{ij}=\nabla _{i}X_{j}+\nabla
_{j}X_{i},
\end{equation*}
where $X_{k}=g_{kr}X^{r}.$
We shall need the following properties of the Lie derivative
\begin{multline}
L_{X}\Gamma _{ji}^{h}=\nabla _{j}\nabla _{i}X^{h}+X^{r}R_{rjis}g^{sh}=
\label{Conv5} \\
\frac{1}{2}g^{hr}\left[ \nabla _{j}\left( L_{X}g_{ir}\right) +\nabla
_{i}\left( L_{X}g_{jr}\right) -\nabla _{r}\left( L_{X}g_{ji}\right) \right] .
\end{multline}
If $L_{X}\Gamma _{ji}^{h}=0,$ then $X$ is said to be an infinitesimal affine
transformation.
The vector field $X$ is said to be a Killing vector field or an infinitesimal
isometry if
\begin{equation*}
L_{X}g=0.
\end{equation*}
For a Killing vector field $X$ we have
\begin{equation*}
L_{X}\nabla =0,\quad L_{X}R=0,\quad L_{X}\left( \nabla R\right) =0,....
\end{equation*}
(\cite{Y}, p. 23 and 24).
\subsection{Tangent bundle}
Let $x$ be a point of a Riemannian manifold $(M,g),$ $\dim M=n,$ covered by coordinate neighbourhoods $(U,$ $(x^{j})),$ $j=1,...,n.$ Let $TM$ be the tangent bundle of $M$ and $\pi :TM\longrightarrow M$ be the natural projection onto $M.$ If $x\in U$ and $u=u^{r}\frac{\partial }{\partial x^{r}}_{\mid x}\in T_{x}M,$ then $(\pi ^{-1}(U),$ $((x^{r}),(u^{r}))),$ $r=1,...,n,$ is a coordinate neighbourhood on $TM.$
The space $T_{(x,u)}TM$ tangent to $TM$ at $(x,u)$ splits into direct sum
\begin{equation*}
T_{(x,u)}TM=H_{(x,u)}TM\oplus V_{(x,u)}TM.
\end{equation*}
$V_{(x,u)}TM$ is the kernel of the differential of the projection $\pi
:TM\longrightarrow M,$ i.e.
\begin{equation*}
V_{(x,u)}TM=Ker\left( d\pi _{\mid (x,u)}\right)
\end{equation*}
and is called the vertical subspace of $T_{(x,u)}TM.$
Let $V\subset M$ and $W\subset T_{x}M$ be open neighbourhoods of $x$ and $0$ respectively, diffeomorphic under the exponential mapping $exp_{x}:T_{x}M\longrightarrow M.$ Furthermore, let $S:\pi ^{-1}(V)\longrightarrow T_{x}M$ be the smooth mapping that translates every vector $Z\in \pi ^{-1}(V)$ from its foot point $y$ to the point $x$ in a parallel manner along the unique geodesic connecting $y$ and $x.$ Finally, for a given $u\in T_{x}M,$ let $R_{-u}:T_{x}M\longrightarrow T_{x}M$ be the translation by $-u,$ i.e. $R_{-u}(X_{x})=X_{x}-u.$ The connection map
\begin{equation*}
K_{(x,u)}:T_{(x,u)}TM\longrightarrow T_{x}M
\end{equation*}
of the Levi-Civita connection $\nabla $ \ is given by
\begin{equation*}
K_{(x,u)}(Z)=d(exp_{x}\circ R_{-u}\circ S)(Z)
\end{equation*}
for any $Z\in T_{(x,u)}TM.$
For any smooth vector field $Z:M\longrightarrow TM$ and $X_{x}\in T_{x}M$ we
have
\begin{equation*}
K(dZ_{x}(X_{x}))=\left( \nabla _{X}Z\right) _{x}.
\end{equation*}
Then $H_{(x,u)}TM=Ker(K_{(x,u)})$ is called the horizontal subspace of $
T_{(x,u)}TM.$
We have isomorphisms
\begin{equation*}
H_{(x,u)}TM\sim T_{x}M\sim V_{(x,u)}TM.
\end{equation*}
For any vector $X\in T_{x}M$ there exist unique vectors $X^{h}\in H_{(x,u)}TM$ given by $d\pi (X^{h})=X$ and $X^{v}\in V_{(x,u)}TM$ given by $X^{v}(df)=Xf$ for any function $f$ on $M.$ The vectors $X^{h}$ and $X^{v}$ are called the horizontal and the vertical lifts of $X$ to the point $(x,u)\in TM$.
The vertical lift of a vector field $X$ on $M$ is the unique vector field $X^{v}$ on $TM$ such that at each point $(x,u)\in TM$ its value is the vertical lift of $X_{x}$ to the point $(x,u).$ The horizontal lift of a vector field is defined similarly.
If $((x^{j}),$ $(u^{j})),$ $j=1,...,n,$ is a local coordinate system around
the point $(x,u)\in TM$ where $u\in T_{x}M$ and $X=X^{j}\frac{\partial }{
\partial x^{j}},$ then
\begin{equation*}
X^{h}=X^{j}\frac{\partial }{\partial x^{j}}-u^{r}X^{s}\Gamma _{rs}^{j}\frac{
\partial }{\partial u^{j}},\quad X^{v}=X^{j}\frac{\partial }{\partial u^{j}},
\end{equation*}
where $\Gamma _{rs}^{j}$ are Christoffel symbols of the Levi-Civita
connection $\nabla $ on $(M,g).$ We shall write $\partial _{k}=\frac{
\partial }{\partial x^{k}}$ and $\delta _{k}=\frac{\partial }{\partial u^{k}}
.$ Cf. \cite{Dombr} or \cite{Gudmundsson}. See also \cite{YI}.
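To illustrate these coordinate formulas, the lifts can be assembled symbolically. The following minimal sympy sketch is given only for illustration; the round $2$-sphere and the field $X=\partial /\partial x^{2}$ are merely example choices.
\begin{verbatim}
# Horizontal and vertical lifts in the induced coordinates (x^j, u^j) on TM:
#   X^h = X^j d/dx^j - u^r X^s Gamma^j_{rs} d/du^j,   X^v = X^j d/du^j.
# Example base manifold: round 2-sphere, g = diag(1, sin^2 x1).
import sympy as sp

x1, x2, u1, u2 = sp.symbols('x1 x2 u1 u2')
x, u, n = [x1, x2], [u1, u2], 2
g = sp.Matrix([[1, 0], [0, sp.sin(x1)**2]])
ginv = g.inv()

# Christoffel symbols Gamma^j_{rs} of the Levi-Civita connection of g
Gamma = [[[sum(ginv[j, m]*(sp.diff(g[m, r], x[s]) + sp.diff(g[m, s], x[r])
                           - sp.diff(g[r, s], x[m])) for m in range(n))/2
           for s in range(n)]
          for r in range(n)]
         for j in range(n)]

X = [sp.Integer(0), sp.Integer(1)]   # the coordinate field d/dx2 on M

# first n entries: coefficients of d/dx^j, last n entries: coefficients of d/du^j
X_h = X + [-sum(u[r]*X[s]*Gamma[j][r][s] for r in range(n) for s in range(n))
           for j in range(n)]
X_v = [sp.Integer(0)]*n + X
print(X_h)   # e.g. [0, 1, u2*sin(x1)*cos(x1), -u1*cos(x1)/sin(x1)]
print(X_v)   # [0, 0, 0, 1]
\end{verbatim}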
In the paper we shall frequently use the frame $(\partial _{k}^{h},\partial
_{l}^{v})=\left( \left( \frac{\partial }{\partial x^{k}}\right) ^{h},\left(
\frac{\partial }{\partial x^{l}}\right) ^{v}\right) $ known as the adapted
frame.
Every metric $g$ on $M$ defines a family of metrics on $TM.$ Among them, the class of so-called $g$-natural metrics is of special interest. The well-known Cheeger-Gromoll and Sasaki metrics are special cases of the $g$-natural metrics (\cite{KS}).
\begin{lemma}
\label{Lemma 8}(\cite{Abb 2005 b}, \cite{Abb 2005 c}) Let $(M,g)$ be a
Riemannian manifold and $G$ be a $g-$natural metric on $TM.$ There exist
functions $a_{j},$ $b_{j}:[0,\infty )\longrightarrow R,$ $j=1,2,3,$ such
that for every $X,$ $Y,$ $u\in T_{x}M$
\begin{eqnarray}
G_{(x,u)}(X^{h},Y^{h})
&=&(a_{1}+a_{3})(r^{2})g_{x}(X,Y)+(b_{1}+b_{3})(r^{2})g_{x}(X,u)g_{x}(Y,u),
\notag \\
G_{(x,u)}(X^{h},Y^{v})
&=&a_{2}(r^{2})g_{x}(X,Y)+b_{2}(r^{2})g_{x}(X,u)g_{x}(Y,u), \label{g1a} \\
G_{(x,u)}(X^{v},Y^{h})
&=&a_{2}(r^{2})g_{x}(X,Y)+b_{2}(r^{2})g_{x}(X,u)g_{x}(Y,u), \notag \\
G_{(x,u)}(X^{v},Y^{v})
&=&a_{1}(r^{2})g_{x}(X,Y)+b_{1}(r^{2})g_{x}(X,u)g_{x}(Y,u), \notag
\end{eqnarray}
where $r^{2}=g_{x}(u,u).$ For $\dim M=1$ the same holds for $b_{j}=0,$ $
j=1,2,3.$
\end{lemma}
Setting $a_{1}=1,$ $a_{2}=a_{3}=b_{j}=0$ we obtain the Sasaki metric, while
setting $a_{1}=b_{1}=\frac{1}{1+r^{2}},$ $a_{2}=b_{2}=0,$ $a_{1}+a_{3}=1,$
$b_{1}+b_{3}=1$ we get the Cheeger-Gromoll one.
Following (\cite{Abb 2005 b}) we put
\begin{enumerate}
\item $a(t)=a_{1}(t)\left( a_{1}(t)+a_{3}(t)\right) -a_{2}^{2}(t),$
\item $F_{j}(t)=a_{j}(t)+tb_{j}(t),$
\item $F(t)=F_{1}(t)\left[ F_{1}(t)+F_{3}(t)\right] -F_{2}^{2}(t)$
for all $t\in [0,\infty ).$
\end{enumerate}
We shall often abbreviate: $A=a_{1}+a_{3},$ $B=b_{1}+b_{3}.$
\begin{lemma}
\label{Lemma 9}(\cite{Abb 2005 b}, Proposition 2.7) The necessary and
sufficient conditions for a $g$-natural metric $G$ on the tangent bundle of
a Riemannian manifold $(M,g)$ to be non-degenerate are $a(t)\neq 0$ and $
F(t)\neq 0$ for all $t\in [0,\infty ).$ If $\dim M=1$ this is equivalent to $
a(t)\neq 0$ for all $t\in [0,\infty ).$
\end{lemma}
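For instance, the conditions of the above lemma are immediately verified for the Sasaki and Cheeger-Gromoll specializations given above. A minimal symbolic check in sympy (given here only for illustration) reads:
\begin{verbatim}
# Check a(t) != 0 and F(t) != 0 for the Sasaki and Cheeger-Gromoll metrics.
import sympy as sp

t = sp.symbols('t', nonnegative=True)

def a_and_F(a1, a2, a3, b1, b2, b3):
    a = a1*(a1 + a3) - a2**2
    F1, F2, F3 = a1 + t*b1, a2 + t*b2, a3 + t*b3
    F = F1*(F1 + F3) - F2**2
    return sp.simplify(a), sp.simplify(F)

# Sasaki metric: a1 = 1, all remaining coefficients zero
print(a_and_F(1, 0, 0, 0, 0, 0))              # (1, 1)

# Cheeger-Gromoll metric: a1 = b1 = 1/(1+t), a2 = b2 = 0, a1+a3 = b1+b3 = 1
a1 = b1 = 1/(1 + t)
print(a_and_F(a1, 0, 1 - a1, b1, 0, 1 - b1))  # (1/(t + 1), t + 1)
\end{verbatim}
Both pairs are non-vanishing for all $t\in [0,\infty ),$ so both metrics are non-degenerate, as expected.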
\begin{lemma}
The Lie brackets of vector fields on the tangent bundle of the
pseudo-Riemannian manifold $M$ are given by
\begin{eqnarray*}
\left[ X^{h},Y^{h}\right] _{\left( x,u\right) } &=&\left[ X,Y\right]
_{\left( x,u\right) }^{h}-v\left\{ R\left( X_{x},Y_{x}\right) u\right\} , \\
\left[ X^{h},Y^{v}\right] _{\left( x,u\right) } &=&\left( \nabla
_{X}Y\right) _{\left( x,u\right) }^{v}=\left( \nabla _{Y}X\right) _{\left(
x,u\right) }^{v}+\left[ X,Y\right] _{\left( x,u\right) }^{v}, \\
\left[ X^{v},Y^{v}\right] _{\left( x,u\right) } &=&0
\end{eqnarray*}
for all vector fields $X,$ $Y$ on $M.$
\end{lemma}
\section{Killing vector field}
\subsection{Lie derivative}
Applying the formula (\ref{Conv4a}) to the $g$-natural metric $G$ on $TM$ and to vertical and horizontal lifts of vector fields $X,$ $Y,$ $Z$ on $M,$ and using Proposition \ref{Connection}, we get
\begin{multline*}
\left( L_{X^{v}}G\right) \left( Y^{v},Z^{v}\right) = \\
b_{1}g(X,Z)g(Y,u)+b_{1}g(X,Y)g(Z,u)+ \\
2a_{1}^{\prime }g(Y,Z)g(X,u)+2b_{1}^{\prime }g(X,u)g(Y,u)g(Z,u),
\end{multline*}
\begin{equation*}
\left( L_{X^{h}}G\right) \left( Y^{v},Z^{v}\right) =0,
\end{equation*}
whence
\begin{multline*}
\left( L_{H^{a}\partial _{a}^{h}+V^{a}\partial _{a}^{v}}G\right) \left(
\partial _{k}^{v},\partial _{l}^{v}\right) =V^{a}\left( L_{\partial
_{a}^{v}}G\right) \left( \partial _{k}^{v},\partial _{l}^{v}\right)
+\partial _{k}^{v}H^{a}G\left( \partial _{a}^{h},\partial _{l}^{v}\right) +
\\
\partial _{k}^{v}V^{a}G\left( \partial _{a}^{v},\partial _{l}^{v}\right)
+\partial _{l}^{v}H^{a}G\left( \partial _{k}^{v},\partial _{a}^{h}\right)
+\partial _{l}^{v}V^{a}G\left( \partial _{k}^{v},\partial _{a}^{v}\right) .
\end{multline*}
Next we find
\begin{multline*}
\left( L_{X^{h}}G\right) \left( Y^{v},Z^{h}\right) = \\
-a_{1}R(Y,u,X,Z)+a_{2}g(\nabla _{Z}X,Y)+b_{2}g(\nabla _{Z}X,u)g(Y,u),
\end{multline*}
\begin{multline*}
\left( L_{X^{v}}G\right) \left( Y^{v},Z^{h}\right) =a_{1}g(\nabla
_{Z}X,Y)+b_{1}g(\nabla _{Z}X,u)g(Y,u)+ \\
b_{2}\left[ g(X,Z)g(Y,u)+g(X,Y)g(Z,u)\right] + \\
2a_{2}^{\prime }g(Y,Z)g(X,u)+2b_{2}^{\prime }g(X,u)g(Y,u)g(Z,u),
\end{multline*}
whence
\begin{multline*}
\left( L_{H^{a}\partial _{a}^{h}+V^{a}\partial _{a}^{v}}G\right) \left(
\partial _{k}^{v},\partial _{l}^{h}\right) = \\
H^{a}\left( L_{\partial _{a}^{h}}G\right) \left( \partial _{k}^{v},\partial
_{l}^{h}\right) +V^{a}\left( L_{\partial _{a}^{v}}G\right) \left( \partial
_{k}^{v},\partial _{l}^{h}\right) +\partial _{k}^{v}H^{a}G\left( \partial
_{a}^{h},\partial _{l}^{h}\right) + \\
\partial _{k}^{v}V^{a}G\left( \partial _{a}^{v},\partial _{l}^{h}\right)
+\partial _{l}^{h}H^{a}G\left( \partial _{k}^{v},\partial _{a}^{h}\right)
+\partial _{l}^{h}V^{a}G\left( \partial _{k}^{v},\partial _{a}^{v}\right) .
\end{multline*}
Finally, we have
\begin{multline*}
\left( L_{X^{h}}G\right) \left( Y^{h},Z^{h}\right) =A\left[ g(\nabla
_{Z}X,Y)+g(\nabla _{Y}X,Z)\right] + \\
B\left[ g(\nabla _{Z}X,u)g(Y,u)+g(\nabla _{Y}X,u)g(Z,u)\right] - \\
a_{2}\left[ R(Y,u,X,Z)+R(Z,u,X,Y)\right] ,
\end{multline*}
\begin{multline*}
\left( L_{X^{v}}G\right) \left( Y^{h},Z^{h}\right) =a_{2}\left[ g(\nabla
_{Z}X,Y)+g(\nabla _{Y}X,Z)\right] + \\
b_{2}\left[ g(\nabla _{Z}X,u)g(Y,u)+g(\nabla _{Y}X,u)g(Z,u)\right] + \\
B\left[ g(X,Y)g(Z,u)+g(X,Z)g(Y,u)\right] + \\
2A^{\prime }g(Y,Z)g(X,u)+2B^{\prime }g(X,u)g(Y,u)g(Z,u),
\end{multline*}
whence
\begin{multline*}
\left( L_{H^{a}\partial _{a}^{h}+V^{a}\partial _{a}^{v}}G\right) \left(
\partial _{k}^{h},\partial _{l}^{h}\right) = \\
H^{a}\left( L_{\partial _{a}^{h}}G\right) \left( \partial _{k}^{h},\partial
_{l}^{h}\right) +V^{a}\left( L_{\partial _{a}^{v}}G\right) \left( \partial
_{k}^{h},\partial _{l}^{h}\right) +\partial _{k}^{h}H^{a}G\left( \partial
_{a}^{h},\partial _{l}^{h}\right) + \\
\partial _{k}^{h}V^{a}G\left( \partial _{a}^{v},\partial _{l}^{h}\right)
+\partial _{l}^{h}H^{a}G\left( \partial _{k}^{h},\partial _{a}^{h}\right)
+\partial _{l}^{h}V^{a}G\left( \partial _{k}^{h},\partial _{a}^{v}\right) .
\end{multline*}
Suppose now that
\begin{equation*}
Z=Z^{a}\partial _{a}+\widetilde{Z}^{\alpha }\delta _{\alpha }=Z^{a}\partial
_{a}^{h}+(\widetilde{Z}^{\alpha }+Z^{a}u^{r}\Gamma _{ar}^{\alpha })\partial
_{\alpha }^{v}=H^{a}\partial _{a}^{h}+V^{\alpha }\partial _{\alpha }^{v}
\end{equation*}
is a vector field on $TM,$ $H^{a},$ $V^{a}$ being the horizontal and
vertical components of the vector field $Z$ on $TM$ respectively.
\begin{lemma}
\label{Lie Deriv}Let $G$ be a metric of the form (\ref{g1a}) defined on the
tangent bundle $TM$ of a manifold $(M,g);$ in particular, $G$ may be a $g$-natural metric on $TM.$ With respect to the adapted frame $\left( \partial
_{k}^{v},\partial _{l}^{h}\right) $ we have
\begin{multline}
\left( L_{H^{a}\partial _{a}^{h}+V^{\alpha }\partial _{\alpha }^{v}}G\right)
\left( \partial _{k}^{h},\partial _{l}^{h}\right) = \label{LD10} \\
-a_{2}\left[ R_{aklr}+R_{alkr}\right] H^{a}u^{r}+ \\
A\left[ \left( \partial _{k}^{h}H^{a}+H^{r}\Gamma _{rk}^{a}\right)
g_{al}+\left( \partial _{l}^{h}H^{a}+H^{r}\Gamma _{rl}^{a}\right) g_{ak}
\right] + \\
B\left[ \left( \partial _{k}^{h}H^{a}+H^{r}\Gamma _{rk}^{a}\right)
u_{a}u_{l}+\left( \partial _{l}^{h}H^{a}+H^{r}\Gamma _{rl}^{a}\right)
u_{a}u_{k}\right] + \\
a_{2}\left[ \left( \partial _{k}^{h}V^{a}+V^{r}\Gamma _{rk}^{a}\right)
g_{al}+\left( \partial _{l}^{h}V^{a}+V^{r}\Gamma _{rl}^{a}\right) g_{ak}
\right] + \\
b_{2}\left[ \left( \partial _{k}^{h}V^{a}+V^{r}\Gamma _{rk}^{a}\right)
u_{a}u_{l}+\left( \partial _{l}^{h}V^{a}+V^{r}\Gamma _{rl}^{a}\right)
u_{a}u_{k}\right] + \\
2A^{\prime }g_{kl}V^{b}u_{b}+2B^{\prime }V^{a}u_{a}u_{k}u_{l}+B\left(
V_{k}u_{l}+V_{l}u_{k}\right) ,
\end{multline}
\begin{multline}
\left( L_{H^{a}\partial _{a}^{h}+V^{\alpha }\partial _{\alpha }^{v}}G\right)
\left( \partial _{k}^{v},\partial _{l}^{h}\right) = \label{LD11} \\
-a_{1}R_{alkr}u^{r}H^{a}+\partial _{k}^{v}H^{a}\left(
Ag_{al}+Bu_{a}u_{l}\right) + \\
a_{2}\left( \partial _{l}^{h}H^{a}+H^{r}\Gamma _{rl}^{a}\right)
g_{ak}+b_{2}\left( \partial _{l}^{h}H^{a}+H^{r}\Gamma _{rl}^{a}\right)
u_{a}u_{k}+ \\
\partial _{k}^{v}V^{a}\left( a_{2}g_{al}+b_{2}u_{a}u_{l}\right) + \\
a_{1}\left( \partial _{l}^{h}V^{a}+V^{r}\Gamma _{rl}^{a}\right)
g_{ak}+b_{1}\left( \partial _{l}^{h}V^{a}+V^{r}\Gamma _{rl}^{a}\right)
u_{a}u_{k}+ \\
2a_{2}^{\prime }g_{kl}V^{b}u_{b}+2b_{2}^{\prime
}V^{a}u_{a}u_{k}u_{l}+b_{2}\left( V_{k}u_{l}+V_{l}u_{k}\right) ,
\end{multline}
\begin{multline}
\left( L_{H^{a}\partial _{a}^{h}+V^{\alpha }\partial _{\alpha }^{v}}G\right)
\left( \partial _{k}^{v},\partial _{l}^{v}\right) = \label{LD12} \\
a_{2}\left( \partial _{k}^{v}H^{a}g_{al}+\partial _{l}^{v}H^{a}g_{ak}\right)
+b_{2}\left( \partial _{k}^{v}H^{a}u_{a}u_{l}+\partial
_{l}^{v}H^{a}u_{a}u_{k}\right) + \\
b_{1}\left( V_{k}u_{l}+V_{l}u_{k}\right) +2a_{1}^{\prime
}g_{kl}V^{b}u_{b}+2b_{1}^{\prime }V^{b}u_{b}u_{k}u_{l}+ \\
a_{1}\left( \partial _{k}^{v}V^{a}g_{al}+\partial _{l}^{v}V^{a}g_{ak}\right)
+b_{1}\left( \partial _{k}^{v}V^{a}u_{a}u_{l}+\partial
_{l}^{v}V^{a}u_{a}u_{k}\right) .
\end{multline}
\end{lemma}
\subsection{Taylor's formula and coefficients}
Throughout the paper the following hypothesis will be used:
\begin{eqnarray}
&&(M,g)\text{ is a Riemannian manifold of dimension }n\text{ with metric }g,
\text{ } \TCItag{H} \label{H} \\
&&\text{covered by the coordinate system }(U,\text{ }(x^{r})). \notag \\
&&(TM,G)\text{ is the tangent bundle of }M\text{ with }g\text{-natural non-}
\notag \\
&&\text{degenerate metric }G,\text{ covered by a coordinate system } \notag
\\
&&(\pi ^{-1}(U),\text{ }(x^{r},u^{s}))\text{, }r,s\text{ run through the
range }\{1,...,n\}. \notag \\
&&Z\text{ is a Killing vector field on }TM\text{ with local components }
(Z^{r},\widetilde{Z}^{s})\text{ } \notag \\
&&\text{with respect to the local base }(\partial _{r},\delta _{s})\text{ }.
\notag
\end{eqnarray}
Let
\begin{multline}
H^{a}=Z^{a}=Z^{a}(x,u)= \label{Taylor1} \\
X^{a}+K_{p}^{a}u^{p}+\frac{1}{2}E_{pq}^{a}u^{p}u^{q}+\frac{1}{3!}
F_{pqr}^{a}u^{p}u^{q}u^{r}+\frac{1}{4!}G_{pqrs}^{a}u^{p}u^{q}u^{r}u^{s}+
\cdots ,
\end{multline}
\begin{multline}
\widetilde{Z}^{a}=\widetilde{Z}^{a}(x,u)= \label{Taylor2} \\
Y^{a}+\widetilde{P}_{p}^{a}u^{p}+\frac{1}{2}Q_{pq}^{a}u^{p}u^{q}+\frac{1}{3!}
S_{pqr}^{a}u^{p}u^{q}u^{r}+\frac{1}{4!}V_{pqrs}^{a}u^{p}u^{q}u^{r}u^{s}+
\cdots
\end{multline}
be expansions of the components $Z^{a}$ and $\widetilde{Z}^{a}$ by Taylor's
formula in a neighbourhood of \ a point $(x,0)\in TM.$ For each $a$ the
coefficients are values of partial derivatives of $Z^{a}$, $\widetilde{Z}
^{a} $ respectively$,$ taken at a point $(x,0)$ and therefore are symmetric
in all lower indices. For simplicity we have omitted the remainders.
\begin{lemma}
(\cite{Tanno 2}) The quantities
\begin{multline*}
X=\left( X^{a}(x)\right) =\left( Z^{a}(x,0)\right) , \\
Y=\left( Y^{a}\left( x\right) \right) =\left( \widetilde{Z}^{a}\left(
x,0\right) \right) , \\
K=\left( K_{p}^{a}\left( x\right) \right) =\left( \delta _{p}Z^{a}\left(
x,0\right) \right) , \\
E=\left( E_{pq}^{a}\left( x\right) \right) =\left( \delta _{p}\delta
_{q}Z^{a}\left( x,0\right) \right) , \\
P=\left( P_{p}^{a}\left( x\right) \right) =\left( \left( \delta _{p}
\widetilde{Z}^{a}\right) \left( x,0\right) -\partial _{p}\left( Z^{a}\left(
x,0\right) \right) \right)
\end{multline*}
are tensor fields on $M.$
\end{lemma}
Applying the operators $\partial _{k}^{v}$ and $\partial _{k}^{h}$ to the
horizontal components we get
\begin{equation*}
\partial _{k}^{v}H^{a}=K_{k}^{a}+E_{kq}^{a}u^{q}+\frac{1}{2}
F_{kpq}^{a}u^{p}u^{q}+\frac{1}{3!}G_{kpqr}^{a}u^{p}u^{q}u^{r}+\cdots ,
\end{equation*}
\begin{multline*}
\partial _{k}^{h}H^{a}=\Theta _{k}X^{a}+\Theta _{k}K_{p}^{a}u^{p}+ \\
\frac{1}{2}\Theta _{k}E_{pq}^{a}u^{p}u^{q}+\frac{1}{3!}\Theta
_{k}F_{pqr}^{a}u^{p}u^{q}u^{r}+\frac{1}{4!}\Theta
_{k}G_{pqrs}^{a}u^{p}u^{q}u^{r}u^{s}+\cdots .
\end{multline*}
on a neighbourhood of a point $(x,0)\in TM,$ where for any tensor $T$ of type $(1,s)$ we have put
\begin{equation*}
\Theta _{k}T_{hij...}^{a}=\nabla _{k}T_{hij...}^{a}-\Gamma
_{rk}^{a}T_{hij....}^{r}.
\end{equation*}
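In particular, writing out this definition for a tensor field $T$ of type $(1,1)$ gives
\begin{equation*}
\Theta _{k}T_{h}^{a}=\nabla _{k}T_{h}^{a}-\Gamma _{rk}^{a}T_{h}^{r}=\partial
_{k}T_{h}^{a}-\Gamma _{hk}^{r}T_{r}^{a},
\end{equation*}
so $\Theta _{k}$ may be viewed as a derivative that is covariant only with respect to the lower indices; the analogous formula holds for any number of lower indices.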
Moreover, if we put:
\begin{equation*}
S_{k}^{a}=\widetilde{P}_{k}^{a}+X^{b}\Gamma _{bk}^{a}=\widetilde{P}
_{k}^{a}-\partial _{k}X^{a}+\nabla _{k}X^{a}=P_{k}^{a}+\nabla _{k}X^{a},
\end{equation*}
\begin{equation*}
T_{kp}^{a}=Q_{kq}^{a}+K_{k}^{b}\Gamma _{bp}^{a}+K_{p}^{b}\Gamma _{bk}^{a},
\end{equation*}
\begin{eqnarray*}
F_{lkpq} &=&\delta _{k}\delta _{p}\delta _{q}Z^{a}(x,0)g_{al},\quad \\
W_{lkpq} &=&\left( \delta _{k}\delta _{p}\delta _{q}\widetilde{Z}
^{a}(x,0)+E_{pk}^{c}\Gamma _{cq}^{a}+E_{qk}^{c}\Gamma
_{cp}^{a}+E_{pq}^{c}\Gamma _{ck}^{a}\right) g_{al}= \\
&&\left( S_{kpq}^{a}+E_{pk}^{c}\Gamma _{cq}^{a}+E_{qk}^{c}\Gamma
_{cp}^{a}+E_{pq}^{c}\Gamma _{ck}^{a}\right) g_{al},
\end{eqnarray*}
\begin{equation*}
Z_{kpqr}^{a}=V_{kpqr}^{a}+F_{kpq}^{c}\Gamma _{cr}^{a}+F_{kqr}^{c}\Gamma
_{cp}^{a}+F_{krp}^{c}\Gamma _{cq}^{a}+F_{pqr}^{c}\Gamma _{ck}^{a},
\end{equation*}
then the vertical component reads
\begin{equation*}
V^{a}=Y^{a}+S_{p}^{a}u^{p}+\frac{1}{2!}T_{pq}^{a}u^{p}u^{q}+\frac{1}{3!}
W_{pqr}^{a}u^{p}u^{q}u^{r}+\frac{1}{4!}Z_{pqrs}^{a}u^{p}u^{q}u^{r}u^{s}+
\cdots
\end{equation*}
and
\begin{equation*}
\partial _{k}^{v}V^{a}=S_{k}^{a}+T_{kp}^{a}u^{p}+\frac{1}{2}
W_{kpq}^{a}u^{p}u^{q}+\frac{1}{3!}Z_{kpqr}^{a}u^{p}u^{q}u^{r}+...,
\end{equation*}
\begin{multline}
\partial _{k}^{h}V^{a}=\Theta _{k}Y^{a}+\Theta _{k}S_{p}^{a}u^{p}+
\label{Taylor7} \\
\frac{1}{2!}\Theta _{k}T_{pq}^{a}u^{p}u^{q}+\frac{1}{3!}\Theta
_{k}W_{pqr}^{a}u^{p}u^{q}u^{r}+\frac{1}{4!}\Theta
_{k}Z_{pqrs}^{a}u^{p}u^{q}u^{r}u^{s}+\cdots
\end{multline}
on a neighbourhood of a point \thinspace $(x,0)\in TM.$
We shall often use the following definitions and abbreviations:
\begin{equation*}
S_{p}^{a}=P_{p}^{a}+\nabla _{p}X^{a},\quad S_{kp}=S_{p}^{a}g_{ak},\quad
P_{lk}=P_{k}^{a}g_{al},
\end{equation*}
\begin{equation*}
K_{lp}=K_{p}^{a}g_{al},\quad E_{kpq}=E_{kqp}=E_{pq}^{a}g_{ak},\quad
T_{lkp}=T_{kp}^{a}g_{al}.
\end{equation*}
Substituting (\ref{Taylor1}) - (\ref{Taylor7}) into the right hand sides of (
\ref{LD10})-(\ref{LD12}) we obtain on some neighbourhood of $(x,0)$
expressions that are sums of polynomials in variables $u^{r}$ with
coefficients depending on $x^{t}$ multiplied by functions depending on $
r^{2}=g_{rs}u^{r}u^{s}$ plus terms that contain remainders. Suppose that $
Z=Z^{r}\partial _{r}+\widetilde{Z}^{r}\delta _{r}$ is a Killing vector field
on $TM.$ Then the left hand sides vanish and substituting $u=(u^{j})=0$ we
obtain on $M\times \{0\}$
\begin{equation}
A\left( \nabla _{k}X_{l}+\nabla _{l}X_{k}\right) +a_{2}\left( \nabla
_{k}Y_{l}+\nabla _{l}Y_{k}\right) =0, \tag{$I_{1}$} \label{I1}
\end{equation}
\begin{equation}
AK_{lk}+a_{2}\left( P_{lk}+\nabla _{k}X_{l}+\nabla _{l}X_{k}\right)
+a_{1}\nabla _{l}Y_{k}=0, \tag{$II_{1}$} \label{II1}
\end{equation}
\begin{equation}
a_{2}\left( K_{lk}+K_{kl}\right) +a_{1}\left( S_{lk}+S_{kl}\right) =0,
\tag{$III_{1}$} \label{III1}
\end{equation}
where $A=A(0),$ $a_{j}=a_{j}(0).$ Differentiating with respect to $u^{k}$ (i.e. applying $\delta _{k}$), making use of the property
\begin{equation*}
\delta _{k}f(r^{2})=2f^{\prime }(r^{2})g_{ks}u^{s}
\end{equation*}
and substituting $u^{j}=0$ we find
\begin{multline}
A\left( \nabla _{k}K_{lp}+\nabla _{l}K_{kp}\right) +a_{2}\left[ \nabla
_{k}S_{lp}+\nabla _{l}S_{kp}-X^{a}\left( R_{aklp}+R_{alkp}\right) \right] +
\tag{$I_{2}$} \label{I2} \\
2A^{\prime }g_{kl}Y_{p}+B\left( Y_{k}g_{lp}+Y_{l}g_{kp}\right) =0,
\end{multline}
\begin{multline}
AE_{lkp}+a_{1}\left( \nabla _{l}S_{kp}-X^{a}R_{alkp}\right) +a_{2}\left(
\nabla _{l}K_{kp}+T_{lkp}\right) + \tag{$II_{2}$} \label{II2} \\
2a_{2}^{\prime }g_{kl}Y_{p}+b_{2}\left( Y_{k}g_{lp}+Y_{l}g_{kp}\right) =0,
\end{multline}
\begin{equation}
a_{1}\left( T_{lkp}+T_{klp}\right) +a_{2}\left( E_{lkp}+E_{klp}\right)
+b_{1}\left( Y_{k}g_{lp}+Y_{l}g_{kp}\right) +2a_{1}^{\prime }g_{kl}Y_{p}=0,
\tag{$III_{2}$} \label{III2}
\end{equation}
on $M\times \{0\},$ where $A^{\prime }=A^{\prime }(0),$ $a_{j}^{\prime
}=a_{j}^{\prime }(0)$ etc.
For any $(0,2)$ tensor $T$ we put
\begin{equation*}
\overline{T}_{ab}=T_{ab}+T_{ba},\quad \widehat{T}_{ab}=T_{ab}-T_{ba}
\end{equation*}
It is easily seen that the quantities $F$ and $W$ are symmetric in the last three indices. Proceeding in the same way as before, we easily obtain expressions of the second order:
\begin{multline}
\left( L_{H^{a}\partial _{a}^{h}+V^{\alpha }\partial _{\alpha }^{v}}G\right)
\left( \partial _{k}^{h},\partial _{l}^{h}\right) _{pq}|_{(x,0)}=
\tag{$I_{3}$} \label{I3} \\
A\left( \nabla _{k}E_{lpq}+\nabla _{l}E_{kpq}\right) +a_{2}\left( \nabla
_{k}T_{lpq}+\nabla _{l}T_{kpq}\right) +2A^{\prime }g_{kl}\overline{S}_{pq}+
\\
B\left[ \left( \nabla _{k}X_{p}+S_{kp}\right) g_{ql}+\left( \nabla
_{k}X_{q}+S_{kq}\right) g_{pl}+\right. \\
\left. \left( \nabla _{l}X_{p}+S_{lp}\right) g_{qk}+\left( \nabla
_{l}X_{q}+S_{lq}\right) g_{pk}\right] + \\
b_{2}\left( \nabla _{k}Y_{p}g_{ql}+\nabla _{k}Y_{q}g_{pl}+\nabla
_{l}Y_{p}g_{qk}+\nabla _{l}Y_{q}g_{pk}\right) - \\
a_{2}\left[ K_{p}^{a}\left( R_{alkq}+R_{aklq}\right) +K_{q}^{a}\left(
R_{alkp}+R_{aklp}\right) \right] =0,
\end{multline}
\begin{multline}
\left( L_{H^{a}\partial _{a}^{h}+V^{\alpha }\partial _{\alpha }^{v}}G\right)
\left( \partial _{k}^{v},\partial _{l}^{h}\right) _{pq}|_{(x,0)}=
\tag{$II_{3}$} \label{II3} \\
AF_{lkpq}+a_{2}W_{lkpq}+a_{1}\nabla _{l}T_{kpq}+a_{2}\nabla
_{l}E_{kpq}+2a_{2}^{\prime }g_{kl}\overline{S}_{pq}- \\
a_{1}\left( K_{p}^{a}R_{alkq}+K_{q}^{a}R_{alkp}\right) +B\left(
K_{pk}g_{ql}+K_{qk}g_{pl}\right) + \\
b_{2}\left( \overline{S}_{pk}g_{ql}+\overline{S}
_{qk}g_{pl}+S_{lp}g_{qk}+S_{lq}g_{pk}+\nabla _{l}X_{p}g_{kq}+\nabla
_{l}X_{q}g_{kp}\right) + \\
b_{1}\left( \nabla _{l}Y_{p}g_{kq}+\nabla _{l}Y_{q}g_{kp}\right) =0,
\end{multline}
\begin{multline}
\left( L_{H^{a}\partial _{a}^{h}+V^{\alpha }\partial _{\alpha }^{v}}G\right)
\left( \partial _{k}^{v},\partial _{l}^{v}\right) _{pq}|_{(x,0)}=
\tag{$III_{3}$} \label{III3} \\
a_{2}\left( F_{lkpq}+F_{klpq}\right) +a_{1}\left( W_{lkpq}+W_{klpq}\right)
+2a_{1}^{\prime }g_{kl}\overline{S}_{pq}+ \\
b_{1}\left( \overline{S}_{kp}g_{ql}+\overline{S}_{kq}g_{pl}+\overline{S}
_{lp}g_{qk}+\overline{S}_{lq}g_{pk}\right) + \\
b_{2}\left( K_{pk}g_{ql}+K_{qk}g_{pl}+K_{pl}g_{qk}+K_{ql}g_{pk}\right) =0.
\end{multline}
Finally, expressions of the third order are:
\begin{multline}
\left( L_{H^{a}\partial _{a}^{h}+V^{\alpha }\partial _{\alpha }^{v}}G\right)
\left( \partial _{k}^{h},\partial _{l}^{h}\right) _{pqr}|_{(x,0)}=
\tag{$I_{4}$} \label{I4} \\
A\left[ \nabla _{k}F_{lpqr}+\nabla _{l}F_{kpqr}\right] +a_{2}\left[ \nabla
_{k}W_{lpqr}+\nabla _{l}W_{kpqr}\right] - \\
a_{2}\left[ E_{pq}^{a}\left( R_{alkr}+R_{aklr}\right) +E_{qr}^{a}\left(
R_{alkp}+R_{aklp}\right) +E_{rp}^{a}\left( R_{alkq}+R_{aklq}\right) \right] +
\\
B\left[ \nabla _{k}\overline{K}_{qp}g_{lr}+\nabla _{k}\overline{K}
_{rq}g_{lp}+\nabla _{k}\overline{K}_{pr}g_{lq}+\nabla _{l}\overline{K}
_{qp}g_{kr}+\nabla _{l}\overline{K}_{rq}g_{kp}+\nabla _{l}\overline{K}
_{pr}g_{kq}\right] + \\
b_{2}\left[ \nabla _{k}\overline{S}_{qp}g_{lr}+\nabla _{k}\overline{S}
_{rq}g_{lp}+\nabla _{k}\overline{S}_{pr}g_{lq}+\nabla _{l}\overline{S}
_{qp}g_{kr}+\nabla _{l}\overline{S}_{rq}g_{kp}+\nabla _{l}\overline{S}
_{pr}g_{kq}\right] + \\
B\left[
g_{lp}T_{kqr}+g_{lq}T_{krp}+g_{lr}T_{kpq}+g_{kp}T_{lqr}+g_{kq}T_{lrp}+g_{kr}T_{lpq}
\right] + \\
2B^{\prime }\left[ \left( g_{pk}g_{ql}+g_{qk}g_{pl}\right) Y_{r}+\left(
g_{qk}g_{rl}+g_{rk}g_{ql}\right) Y_{p}+\left(
g_{rk}g_{pl}+g_{pk}g_{rl}\right) Y_{q}\right] + \\
2A^{\prime }g_{kl}M_{pqr}=0,
\end{multline}
\begin{multline}
\left( L_{H^{a}\partial _{a}^{h}+V^{\alpha }\partial _{\alpha }^{v}}G\right)
\left( \partial _{k}^{v},\partial _{l}^{h}\right) _{pqr}|_{(x,0)}=
\tag{$II_{4}$} \label{II4} \\
AG_{lkpqr}+a_{2}Z_{lkpqr}+a_{2}\nabla _{l}F_{kpqr}+a_{1}\nabla _{l}W_{kpqr}-
\\
a_{1}\left[ E_{pq}^{a}R_{alkr}+E_{qr}^{a}R_{alkp}+E_{rp}^{a}R_{alkq}\right] +
\\
b_{2}\left[ \nabla _{l}\overline{K}_{qp}g_{kr}+\nabla _{l}\overline{K}
_{rq}g_{kp}+\nabla _{l}\overline{K}_{pr}g_{kq}\right] + \\
B\left[ g_{lr}\left( E_{qkp}+E_{pkq}\right) +g_{lp}\left(
E_{rkq}+E_{qkr}\right) +g_{lq}\left( E_{pkr}+E_{rkp}\right) \right] + \\
b_{1}\left[ \nabla _{l}\overline{S}_{qp}g_{kr}+\nabla _{l}\overline{S}
_{rq}g_{kp}+\nabla _{l}\overline{S}_{pr}g_{kq}\right] + \\
b_{2}\left[ g_{kp}T_{lqr}+g_{kq}T_{lrp}+g_{kr}T_{lpq}\right] +b_{2}\left[
g_{lp}M_{kqr}+g_{lq}M_{krp}+g_{lr}M_{kpq}\right] + \\
2b_{2}^{\prime }\left[ \left( g_{pk}g_{ql}+g_{qk}g_{pl}\right) Y_{r}+\left(
g_{qk}g_{rl}+g_{rk}g_{ql}\right) Y_{p}+\left(
g_{rk}g_{pl}+g_{pk}g_{rl}\right) Y_{q}\right] + \\
2a_{2}^{\prime }g_{kl}M_{pqr}=0,
\end{multline}
where $M_{pqr}=T_{pqr}+T_{qrp}+T_{rpq}$ and
\begin{equation*}
Z_{lkpqr}=\left( V_{kpqr}^{a}+F_{kpq}^{c}\Gamma _{cr}^{a}+F_{kqr}^{c}\Gamma
_{cp}^{a}+F_{krp}^{c}\Gamma _{cq}^{a}+F_{pqr}^{c}\Gamma _{ck}^{a}\right)
g_{al}
\end{equation*}
is symmetric in the last four lower indices.
Moreover, we have
\begin{multline}
\left( L_{H^{a}\partial _{a}^{h}+V^{\alpha }\partial _{\alpha }^{v}}G\right)
\left( \partial _{k}^{v},\partial _{l}^{v}\right) _{pqr}|_{(x,0)}=
\tag{$III_{4}$} \label{III4} \\
a_{2}\left( G_{lkpqr}+G_{klpqr}\right) +a_{1}\left(
Z_{lkpqr}+Z_{klpqr}\right) + \\
b_{2}\left[ g_{lr}\left( E_{qkp}+E_{pkq}\right) +g_{lp}\left(
E_{rkq}+E_{qkr}\right) +g_{lq}\left( E_{pkr}+E_{rkp}\right) \right] + \\
b_{2}\left[ g_{kr}\left( E_{qlp}+E_{plq}\right) +g_{kp}\left(
E_{rlq}+E_{qlr}\right) +g_{kq}\left( E_{plr}+E_{rlp}\right) \right] + \\
b_{1}\left[ g_{kp}M_{lqr}+g_{kq}M_{lrp}+g_{kr}M_{lpq}\right] +b_{1}\left[
g_{lp}M_{kqr}+g_{lq}M_{krp}+g_{lr}M_{kpq}\right] + \\
2b_{1}^{\prime }\left[ \left( g_{pk}g_{ql}+g_{qk}g_{pl}\right) Y_{r}+\left(
g_{qk}g_{rl}+g_{rk}g_{ql}\right) Y_{p}+\left(
g_{rk}g_{pl}+g_{pk}g_{rl}\right) Y_{q}\right] + \\
2a_{1}^{\prime }g_{kl}M_{pqr}=0.
\end{multline}
Let us recall once again that these identities hold on $M$ and therefore all the coefficients $a_{j},$ $b_{j},$ $a_{j}^{\prime },$ $b_{j}^{\prime }$ are considered to be constants.
\subsection{Lemmas}
\begin{lemma}
\label{Lemmas L5}Under hypothesis (\ref{H}) at a point $(x,0)\in TM$ we have:
\begin{equation}
a_{1}T_{lkp}+a_{2}E_{lkp}=a_{1}^{\prime }\left(
Y_{l}g_{kp}-Y_{k}g_{lp}-Y_{p}g_{kl}\right) -b_{1}Y_{l}g_{kp}, \label{L 5}
\end{equation}
\begin{equation}
AE_{lkp}+a_{2}T_{lkp}+a_{2}^{\prime }(g_{kl}Y_{p}+g_{pl}Y_{k})+\frac{1}{2}
b_{2}(2g_{kp}Y_{l}+g_{lp}Y_{k}+g_{kl}Y_{p})=0. \label{L5a}
\end{equation}
If $a\neq 0,$ then
\begin{multline}
aE_{lkm}=(a_{2}b_{1}-a_{1}b_{2}-a_{2}a_{1}^{\prime })g_{km}Y_{l}-
\label{L5a1} \\
\frac{1}{2}(a_{1}b_{2}-2a_{2}a_{1}^{\prime }+2a_{1}a_{2}^{\prime
})(g_{lm}Y_{k}+g_{lk}Y_{m}),
\end{multline}
\begin{equation}
aT_{lkm}=(Aa_{1}^{\prime }+a_{2}b_{2}-Ab_{1})g_{km}Y_{l}+\frac{1}{2}
(a_{2}b_{2}-2Aa_{1}^{\prime }+2a_{2}a_{2}^{\prime
})(g_{lm}Y_{k}+g_{lk}Y_{m}), \label{L5a2}
\end{equation}
\begin{equation}
aM_{lkm}=[2a_{2}(b_{2}+a_{2}^{\prime })-A(b_{1}+a_{1}^{\prime
})](g_{km}Y_{l}+g_{lk}Y_{m}+g_{ml}Y_{k}). \label{L5a2'}
\end{equation}
Moreover,
\begin{multline}
a_{2}\left[ \nabla _{k}\left( \nabla _{l}X_{p}+\nabla _{p}X_{l}\right)
+\nabla _{l}\left( \nabla _{k}X_{p}+\nabla _{p}X_{k}\right) -\nabla
_{p}\left( \nabla _{l}X_{k}+\nabla _{k}X_{l}\right) \right] + \label{L5b} \\
a_{1}\left( \nabla _{k}\nabla _{l}Y_{p}+\nabla _{l}\nabla _{k}Y_{p}\right)
=2A^{\prime }g_{kl}Y_{p}+B\left( Y_{k}g_{lp}+Y_{l}g_{kp}\right) ,
\end{multline}
\begin{multline}
a\left( \nabla _{k}K_{lp}+\nabla _{l}K_{kp}\right)
+(a_{2}b_{2}+2a_{1}A^{\prime }-2a_{2}a_{2}^{\prime })Y_{p}g_{kl}+
\label{L5c} \\
\frac{1}{2}(-a_{2}b_{2}+2a_{1}B+2a_{2}a_{2}^{\prime
})(Y_{k}g_{lp}+Y_{l}g_{kp})=0.
\end{multline}
\end{lemma}
\begin{proof}
Alternating ($III_{2}$) in $(l,p),$ then interchanging the indices $(p,k)$
and adding the resulting equation to ($III_{2}$)$,$ we obtain (\ref{L 5}).
Differentiating covariantly (\ref{III1}) we get
\begin{equation*}
a_{2}\left( \nabla _{k}K_{lp}+\nabla _{k}K_{pl}\right) +a_{1}\left( \nabla
_{k}S_{lp}+\nabla _{k}S_{pl}\right) =0.
\end{equation*}
Symmetrizing (\ref{II2}) in $(k,p)$ and subtracting the resulting equation
from the above one we find (\ref{L5a}).
Now (\ref{L5a1}) and (\ref{L5a2}) result immediately from (\ref{L 5}) and (
\ref{L5a}).
From (\ref{II1}) we easily get
\begin{equation*}
A\nabla _{k}K_{lp}+a_{2}\left( \nabla _{k}P_{lp}+\nabla _{k}\nabla
_{p}X_{l}+\nabla _{k}\nabla _{l}X_{p}\right) +a_{1}\nabla _{k}\nabla
_{l}Y_{p}=0,
\end{equation*}
whence, symmetrizing in $(k,l),$ subtracting from (\ref{I2}), by the use of
the Ricci identity, we obtain (\ref{L5b}).
To prove (\ref{L5c}) first we symmetrize (\ref{II2}) in $(k,l)$ and combine
it with (\ref{I2}) to obtain
\begin{multline*}
a\left( \nabla _{k}K_{lm}+\nabla _{l}K_{km}\right) -a_{2}\left[ A\left(
E_{lkm}+E_{klm}\right) +a_{2}\left( T_{lkm}+T_{klm}\right) \right] + \\
2\left( a_{1}A^{\prime }-2a_{2}a_{2}^{\prime }\right) g_{kl}Y_{m}+\left(
a_{1}B-2a_{2}b_{2}\right) \left( g_{lm}Y_{k}+g_{km}Y_{l}\right) =0.
\end{multline*}
On the other hand, symmetrizing (\ref{L5a}) in $(k,l)$ and subtracting from
the above we obtain (\ref{L5c}). This completes the proof.
\end{proof}
\begin{lemma}
Under hypothesis (\ref{H}) suppose $a\neq 0$ at a point $(x,0)\in TM.$ Then
we have
\begin{multline}
2a\nabla _{l}K_{km}=a_{1}^{2}Y^{r}R_{rmkl}-a_{1}Bg_{km}Y_{l}+ \label{L6a} \\
(-a_{1}B+a_{2}b_{2}-2a_{2}a_{2}^{\prime
})g_{lm}Y_{k}+(-a_{2}b_{2}-2a_{1}A^{\prime }+2a_{2}a_{2}^{\prime
})g_{kl}Y_{m},
\end{multline}
\begin{multline}
2a\left( \nabla _{l}S_{km}-X^{r}R_{rlkm}\right)
+a_{1}a_{2}Y^{r}R_{rmkl}-a_{2}Bg_{km}Y_{l}+ \label{L6b} \\
\left[ -a_{2}B+A\left( b_{2}-2a_{2}^{\prime }\right) \right] g_{lm}Y_{k}+
\left[ -2a_{2}A^{\prime }-A\left( b_{2}-2a_{2}^{\prime }\right) \right]
g_{kl}Y_{m}=0
\end{multline}
at the point.
\end{lemma}
\begin{proof}
From (\ref{II2}) we subtract (\ref{L5a}) to obtain
\begin{equation}
a_{2}\nabla _{l}K_{km}+a_{1}\left( \nabla _{l}S_{km}-X^{r}R_{rlkm}\right)
+\left( a_{2}^{\prime }-\frac{b_{2}}{2}\right) \left(
g_{kl}Y_{m}-g_{ml}Y_{k}\right) =0. \label{L6c}
\end{equation}
On the other hand, interchanging in (\ref{II1}) $k$ and $m,$ differentiating
covariantly with respect to $\partial _{k},$ alternating in $(k,l)$ and
applying the Ricci identity, we find
\begin{equation*}
A\left( \nabla _{k}K_{lm}-\nabla _{l}K_{km}\right) +a_{2}\left( \nabla
_{k}S_{lm}-\nabla _{l}S_{km}\right) +a_{2}X^{r}R_{rmkl}+a_{1}Y^{r}R_{rmkl}=0.
\end{equation*}
Subtracting from (\ref{I2}), in virtue of the Bianchi identity, we get
\begin{multline*}
2A\nabla _{l}K_{km}+2a_{2}\left( \nabla _{l}S_{km}-X^{r}R_{rlkm}\right) - \\
a_{1}Y^{r}R_{rmkl}+2A^{\prime }g_{kl}Y_{m}+B\left(
g_{lm}Y_{k}+g_{km}Y_{l}\right) =0.
\end{multline*}
The last equation together with (\ref{L6c}) yields the result.
\end{proof}
\begin{lemma}
\label{LE5}Under hypothesis (\ref{H}) suppose $\dim M>2.$ Then on $M\times
\{0\}$
\begin{equation}
T_{kl}=T_{lk}=2\left( b_{1}-a_{1}^{\prime }\right) \overline{S}_{kl}+b_{2}
\overline{K}_{kl}=0, \label{LE5-1}
\end{equation}
\begin{multline}
a_{2}F_{labk}+a_{1}W_{labk}+\frac{1}{2}b_{2}\left( \widehat{K}_{kl}g_{ab}+
\widehat{K}_{bl}g_{ak}+\widehat{K}_{al}g_{bk}+\overline{K}_{ak}g_{bl}\right)
+ \label{LE5-2} \\
b_{1}g_{bl}\overline{S}_{ak}+a_{1}^{\prime }(g_{kl}\overline{S}_{ab}+g_{al}
\overline{S}_{bk})=0.
\end{multline}
\end{lemma}
\begin{proof}
Replacing in (\ref{III3}) the indices $(p,q)$ with $(a,b),$ alternating in $
(a,l),$ then again in $(k,l)$ and adding to the first equation we get
\begin{multline*}
a_{2}F_{labk}+a_{1}W_{labk}+ \\
\frac{1}{2}b_{2}\left( \widehat{K}_{kl}g_{ab}+2K_{bl}g_{ak}+\widehat{K}
_{al}g_{bk}+\overline{K}_{ak}g_{bl}\right) + \\
b_{1}(g_{bl}\overline{S}_{ak}+g_{ak}\overline{S}_{bl})+a_{1}^{\prime
}(-g_{ak}\overline{S}_{bl}+g_{kl}\overline{S}_{ab}+g_{al}\overline{S}
_{bk})=0.
\end{multline*}
Alternating in $(a,b)$ we find
\begin{equation*}
g_{bl}T_{ak}-g_{bk}T_{al}-g_{al}T_{bk}+g_{ak}T_{bl}=0,
\end{equation*}
whence $(n-2)T_{ak}=0$ results. Then (\ref{LE5-2}) is obvious.
\end{proof}
\begin{lemma}
\label{AfterLE5}Under hypothesis (\ref{H}) suppose $\dim M>1$ and $a\neq 0.$
Then
\begin{equation*}
(n-1)\beta Y_{l}=0
\end{equation*}
on $M\times \{0\}$ holds, where
\begin{equation*}
\beta =2A(b_{1}^{2}-a_{1}^{\prime 2}-a_{1}b_{1}^{\prime
})+(a_{1}b_{2}-2a_{2}b_{1})(3b_{2}+2a_{2}^{\prime })+2a_{2}\left[
2a_{1}^{\prime }(b_{2}+a_{2}^{\prime })+a_{2}b_{1}^{\prime }\right] .
\end{equation*}
\end{lemma}
\begin{proof}
First, replace in (\ref{III4}) the indices $(p,q,r)$ with $(a,b,c).$ Alternating the equation obtained in this way in $(a,l),$ then in $(k,l),$ and adding the result to the first one, we get
\begin{multline*}
a_{2}G_{labck}+a_{1}Z_{labck}+ \\
\frac{1}{2}b_{2}\left[
(E_{kcl}-E_{lck})g_{ab}+(E_{kbl}-E_{lbk})g_{ac}+2(E_{bcl}+E_{cbl})g_{ak}+(E_{acl}-E_{lac})g_{bk}+\right.
\\
\left. \left( E_{ack}+2E_{cak}+E_{kac}\right)
g_{bl}+(E_{abl}-E_{lba})g_{ck}+\left( E_{abk}+2E_{bak}+E_{kab}\right) g_{cl}
\right] + \\
b_{1}(M_{bcl}g_{ak}+M_{ack}g_{bl}+M_{abk}g_{cl})+a_{1}^{\prime
}(-M_{bcl}g_{ak}+M_{bck}g_{al}+M_{abc}g_{kl})+ \\
b_{1}^{\prime }\left[ (g_{bl}g_{ck}+g_{bk}g_{cl})Y_{a}+2g_{ak}\left(
g_{cl}Y_{b}+g_{bl}Y_{c}\right) +(g_{ac}g_{bl}+g_{ab}g_{cl})Y_{k}-\right. \\
\left. (g_{ac}g_{bk}+g_{ab}g_{ck})Y_{l}\right] =0.
\end{multline*}
Alternating in $(k,b)$ and contracting with $g^{ab}g^{kc}$ we obtain
\begin{equation*}
b_{2}\left[ \left( n-2\right) E_{rls}+nE_{lrs}\right]
g^{rs}+(n-1)(b_{1}-a_{1}^{\prime })M_{rls}g^{rs}+(n+2)(n-1)b_{1}^{\prime
}Y_{l}=0,
\end{equation*}
which, using (\ref{L5a1}) and (\ref{L5a2'}), yields the thesis.
\end{proof}
\begin{remark}
For the Cheeger-Gromoll metric $g^{CG}$ on $TM,$ the vector field $Y$
vanishes everywhere on $M$.
\end{remark}
\begin{lemma}
\label{LE6}Under hypothesis (\ref{H})
\begin{multline}
3AF_{lkmn}+3a_{2}W_{lkmn}+B\left( g_{kl}\overline{K}_{mn}+g_{lm}\overline{K}
_{kn}+g_{ln}\overline{K}_{km}\right) + \label{LE6-1} \\
(b_{1}-a_{1}^{\prime })\left(
Y_{n,l}g_{km}+Y_{m,l}g_{kn}+Y_{k,l}g_{mn}\right) + \\
2(b_{2}+a_{2}^{\prime })\left( g_{kl}\overline{S}_{mn}+g_{lm}\overline{S}
_{kn}+g_{ln}\overline{S}_{km}\right) + \\
2b_{2}\left[ g_{km}\left( X_{n,l}+S_{ln}\right) +g_{kn}\left(
X_{m,l}+S_{lm}\right) +g_{mn}\left( X_{k,l}+S_{lk}\right) \right] =0
\end{multline}
is satisfied at a point $(x,0)\in TM$.
\end{lemma}
\begin{proof}
Differentiating covariantly (\ref{L 5}) and subtracting from (\ref{II3}) we
get
\begin{multline}
AF_{lkmn}+a_{2}W_{lkmn}+B\left( g_{lm}K_{nk}+g_{ln}K_{mk}\right) +
\label{LE6-2} \\
(b_{1}-a_{1}^{\prime })\left(
Y_{n,l}g_{km}+Y_{m,l}g_{kn}-Y_{k,l}g_{mn}\right) - \\
a_{1}\left( K_{n}^{r}R_{rlkm}+K_{m}^{r}R_{rlkn}\right) +2a_{2}^{\prime
}g_{kl}\overline{S}_{mn}+ \\
b_{2}\left[ g_{km}\left( X_{n,l}+S_{ln}\right) +g_{kn}\left(
X_{m,l}+S_{lm}\right) +g_{ln}\overline{S}_{km}+g_{lm}\overline{S}_{kn}\right]
=0.
\end{multline}
Antisymmetrizing in $(k,m)$ and symmetrizing in $(k,n)$ we have
\begin{multline}
B\left[ g_{kl}\left( K_{mn}-2K_{nm}\right) +g_{lm}\left(
K_{kn}+K_{nk}\right) +g_{ln}\left( K_{mk}-2K_{km}\right) \right] +
\label{like 78} \\
2(b_{1}-a_{1}^{\prime })\left(
2Y_{m,l}g_{kn}-Y_{n,l}g_{km}-Y_{k,l}g_{mn}\right) +3a_{1}\left(
K_{n}^{r}R_{rlmk}+K_{k}^{r}R_{rlmn}\right) + \\
b_{2}\left[ 2g_{kn}\left( X_{m,l}+S_{lm}\right) -g_{km}\left(
X_{n,l}+S_{ln}\right) -g_{mn}\left( X_{k,l}+S_{lk}\right) \right] + \\
(b_{2}-2a_{2}^{\prime })\left( 2g_{lm}\overline{S}_{kn}-g_{ln}\overline{S}
_{km}-g_{kl}\overline{S}_{mn}\right) =0.
\end{multline}
Exchanging in (\ref{LE6-2}) the indices $k$ and $m,$ multiplying the resulting equation by three and adding it to the last equation, we obtain (\ref{LE6-1}). This completes the proof.
\end{proof}
\begin{lemma}
\label{LE6'}Under hypothesis (\ref{H}) the relation
\begin{multline}
3a_{2}\left[ E_{bc}^{p}\left( R_{pkal}+R_{plak}\right) +E_{ac}^{p}\left(
R_{pkbl}+R_{plbk}\right) +E_{ab}^{p}\left( R_{pkcl}+R_{plck}\right)
\right] + \label{LE6'-1} \\
6A^{\prime
}g_{kl}(T_{abc}+T_{bca}+T_{cab})+g_{bc}K_{kal}+g_{ca}K_{kbl}+g_{ab}K_{kcl}+
\\
g_{cl}L_{abk}+g_{al}L_{bck}+g_{bl}L_{cak}+g_{ck}L_{abl}+g_{ak}L_{bcl}+g_{bk}L_{cal}=0
\end{multline}
holds on $M\times \{0\}$, where
\begin{multline}
K_{kal}=K_{lak}= \label{LE6'-2} \\
-2b_{2}\left( S_{ka,l}+S_{la,k}+X_{a,kl}+X_{a,lk}\right)
-(b_{1}-a_{1}^{\prime })(Y_{a,kl}+Y_{a,lk}),
\end{multline}
\begin{equation}
L_{abk}=L_{bak}=2B\overline{K}_{ab,k}+3BT_{kab}+(b_{2}-2a_{2}^{\prime })
\overline{S}_{ab,k}+3B^{\prime }(g_{ka}Y_{b}+g_{kb}Y_{a}). \label{LE6'-3}
\end{equation}
\end{lemma}
\begin{proof}
To prove the lemma it is enough to differentiate covariantly (\ref{LE6-1})
and eliminate covariant derivatives of $F$ and $W$ from (\ref{I4}).
\end{proof}
\begin{lemma}
\label{LE6''}Under hypothesis (\ref{H}) suppose $\dim M>2.$ Then the relation
\begin{multline*}
a_{1}\left[
2E_{ab}^{p}R_{plck}-E_{bk}^{p}R_{plac}+E_{bc}^{p}R_{plak}-E_{ak}^{p}R_{plbc}+E_{ac}^{p}R_{plbk}
\right] + \\
B\left[ \left( E_{ckb}-E_{kcb}\right) g_{al}+\left( E_{cak}-E_{kac}\right)
g_{bl}+\right. \\
\left. \left( E_{abk}+E_{bak}\right) g_{cl}-\left( E_{abc}+E_{bac}\right)
g_{kl}\right] + \\
(b_{1}-a_{1}^{\prime })\left[ \nabla _{l}\overline{S}_{bc}g_{ak}-\nabla _{l}
\overline{S}_{bk}g_{ac}\right] + \\
b_{2}\left[ \nabla _{l}\widehat{K}_{kc}g_{ab}+g_{ak}\left( \frac{3}{2}\nabla
_{l}K_{bc}+\frac{1}{2}\nabla _{l}K_{cb}\right) -g_{ac}\left( \frac{3}{2}
\nabla _{l}K_{bk}+\frac{1}{2}\nabla _{l}K_{kb}\right) \right] + \\
b_{2}\left( \nabla _{l}K_{ac}g_{bk}-\nabla _{l}K_{ak}g_{bc}\right) + \\
\left( b_{2}-2a_{2}^{\prime }\right) \left(
M_{abk}g_{cl}-M_{abc}g_{kl}\right) +b_{2}\left[
g_{bk}T_{lac}-g_{bc}T_{lak}+g_{ak}T_{lbc}-g_{ac}T_{lbk}\right] + \\
2b_{2}^{\prime }\left[ \left( g_{bk}g_{cl}-g_{bc}g_{kl}\right) Y_{a}+\left(
g_{ak}g_{cl}-g_{ac}g_{kl}\right) Y_{b}+\right. \\
\left. \left( g_{al}g_{bk}+g_{ak}g_{bl}\right) Y_{c}-\left(
g_{al}g_{bc}+g_{ac}g_{bl}\right) Y_{k}\right] =0
\end{multline*}
holds on $M\times \{0\}.$
\end{lemma}
\begin{proof}
Firstly, we change in (\ref{LE5-2}) the indices $(l,a,b,k)$ into $(k,a,b,c)$
and differentiate covariantly with respect to $\partial _{l}.$ Setting in (
\ref{II4}) $(a,b,c)$ instead of $(p,q,r),$ subtracting the just obtained
equation and, finally, alternating in $(k,c)$ we get the thesis.
\end{proof}
\begin{lemma}
\label{LE8}Under hypothesis (\ref{H}) relations
\begin{multline}
\mathbf{A}_{km}=(3a_{1}B-a_{2}b_{2})\nabla _{k}X_{m}+(-2a_{2}b_{1}+\frac{3}{2
}a_{1}b_{2}+2a_{2}a_{1}^{\prime }-3a_{1}a_{2}^{\prime })\nabla _{k}Y_{m}+ \\
a_{2}B(K_{km}-2K_{mk})+(3a_{1}B-2a_{2}b_{2}+2a_{2}a_{2}^{\prime })S_{km}+ \\
(-a_{2}b_{2}+2a_{2}a_{2}^{\prime })S_{mk}=0, \label{LE8a}
\end{multline}
\begin{multline*}
\mathbf{F}_{kl}+\mathbf{B}
_{kl}=2a_{2}b_{2}(L_{X}g)_{kl}+(4a_{2}b_{1}-3a_{1}b_{2}-4a_{2}a_{1}^{\prime
})(L_{Y}g)_{kl}+ \\
2\left( 3a_{2}b_{2}+3a_{1}A^{\prime }-4a_{2}a_{2}^{\prime }\right) \overline{
S}_{kl}+2a_{2}B\overline{K}_{kl}=0.
\end{multline*}
hold at a point $(x,0)\in TM.$
\end{lemma}
\begin{proof}
First, we change in (\ref{L5a}) the indices $(l,k,p)$ into $(l,m,n),$ then
differentiate covariantly with respect to $\partial _{k}$ and symmetrize in $
(k,l).$ Next, change in (\ref{I3}) the indices $(p,q)$ into $(m,n)$ and
subtract the former equality to obtain
\begin{multline*}
\frac{1}{2}\left( b_{2}-2a_{2}^{\prime }\right) \left(
Y_{n,l}g_{km}+Y_{m,l}g_{kn}+Y_{n,k}g_{lm}+Y_{m,k}g_{ln}\right)
-b_{2}(Y_{k,l}+Y_{l,k})g_{mn}- \\
a_{2}\left[ K_{n}^{r}\left( R_{rklm}+R_{rlkm}\right) +K_{m}^{r}\left(
R_{rkln}+R_{rlkn}\right) \right] +2A^{\prime }g_{kl}\overline{S}_{mn}+ \\
B\left[ g_{ln}\left( X_{m,k}+S_{km}\right) +g_{lm}\left(
X_{n,k}+S_{kn}\right) +\right. \\
\left. g_{kn}\left( X_{m,l}+S_{lm}\right) +g_{km}\left(
X_{n,l}+S_{ln}\right) \right] =0.
\end{multline*}
Eliminating between (\ref{like 78}) and the last equation the terms
containing curvature tensor we obtain
\begin{equation*}
g_{mn}\mathbf{B}_{kl}+g_{kl}\mathbf{F}_{mn}+g_{ln}\mathbf{A}_{km}+g_{kn}
\mathbf{A}_{lm}+g_{lm}\mathbf{A}_{kn}+g_{km}\mathbf{A}_{ln}=0,
\end{equation*}
where
\begin{equation*}
\mathbf{F}_{mn}=2a_{2}B\overline{K}_{mn}+2(2a_{2}b_{2}+3a_{1}A^{\prime
}-4a_{2}a_{2}^{\prime })\overline{S}_{mn},
\end{equation*}
\begin{equation*}
\mathbf{B}
_{kl}=2a_{2}b_{2}(L_{X}g)_{kl}+(4a_{2}b_{1}-3a_{1}b_{2}-4a_{2}a_{1}^{\prime
})(L_{Y}g)_{kl}+2a_{2}b_{2}\overline{S}_{kl}.
\end{equation*}
Now, the thesis is a simple consequence of Lemma \ref{Apen3}.
\end{proof}
\section{Classification}
To simplify further considerations put for a moment $\overline{X}=\nabla
_{k}X_{l}+\nabla _{l}X_{k},$ $\overline{Y}=\nabla _{k}Y_{l}+\nabla
_{l}Y_{k}, $ $\overline{S}=P_{kl}+P_{lk}+\nabla _{k}X_{l}+\nabla _{l}X_{k},$
$\overline{K}=K_{kl}+K_{lk}.$ Symmetrizing indices in (\ref{II1}) and taking
into consideration equations (\ref{I1}), (\ref{III1}) and (\ref{LE5-1}) we
obtain a homogeneous system of linear equations in $\overline{X},$ $
\overline{Y},$ $\overline{S},$ $\overline{K}:$
\begin{equation}
\begin{bmatrix}
A & a_{2} & 0 & 0 \\
a_{2} & a_{1} & a_{2} & A \\
0 & 0 & a_{1} & a_{2} \\
0 & 0 & 2b & b_{2}
\end{bmatrix}
\begin{bmatrix}
\overline{X} \\
\overline{Y} \\
\overline{S} \\
\overline{K}
\end{bmatrix}
=
\begin{bmatrix}
0 \\
0 \\
0 \\
0
\end{bmatrix}
, \label{macierz1}
\end{equation}
where $b=b_{1}-a_{1}^{\prime }.$ The homogeneous system has only the trivial solution if and only if
\begin{equation*}
a(2ba_{2}-a_{1}b_{2})\neq 0,
\end{equation*}
where $a=a_{1}A-a_{2}^{2}.$
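This condition can be checked directly: the determinant of the coefficient matrix in (\ref{macierz1}) factors, up to sign, as $a(2ba_{2}-a_{1}b_{2}).$ A minimal sympy sketch (given only for illustration) confirming this reads:
\begin{verbatim}
# Determinant of the coefficient matrix of the homogeneous system (macierz1).
import sympy as sp

A, a1, a2, b, b2 = sp.symbols('A a1 a2 b b2')
M = sp.Matrix([[A,  a2, 0,   0 ],
               [a2, a1, a2,  A ],
               [0,  0,  a1,  a2],
               [0,  0,  2*b, b2]])
a = a1*A - a2**2
print(sp.factor(M.det()))                         # product of (A*a1 - a2**2)
                                                  # and (a1*b2 - 2*a2*b)
print(sp.simplify(M.det() + a*(2*b*a2 - a1*b2)))  # 0, i.e. det = -a(2ba2 - a1b2)
\end{verbatim}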
Suppose $2ba_{2}-a_{1}b_{2}=0.$
If $a_{2}b_{2}\neq 0,$ then multiplying the third equation by $b_{2}$ and
the fourth one by $a_{2}$ we transform the whole system to
\begin{equation*}
\begin{bmatrix}
A & a_{2} & 0 \\
a_{2}^{2} & a_{1}a_{2} & -a \\
0 & 0 & a_{1}
\end{bmatrix}
\begin{bmatrix}
\overline{X} \\
\overline{Y} \\
\overline{S}
\end{bmatrix}
=
\begin{bmatrix}
0 \\
0 \\
-a_{2}\overline{K}
\end{bmatrix}
\end{equation*}
with determinant equal to $a_{1}a_{2}a.$
Therefore, if $a_{1}\neq 0$ and $a_{2}b_{2}\neq 0,$ we get
\begin{equation}
\overline{X}+\overline{S}=0,\quad \overline{Y}=\frac{A}{a_{2}}\overline{S}
,\quad \overline{K}=-\frac{a_{1}}{a_{2}}\overline{S}. \label{solution2}
\end{equation}
On the other hand, if $a_{1}=0$ and $a_{2}b_{2}\neq 0,$ then $b=0$ and (\ref
{macierz1}) yields
\begin{equation*}
\begin{bmatrix}
A & a_{2} & 0 & 0 \\
a_{2} & 0 & a_{2} & A \\
0 & 0 & 0 & a_{2} \\
0 & 0 & 0 & b_{2}
\end{bmatrix}
\begin{bmatrix}
\overline{X} \\
\overline{Y} \\
\overline{S} \\
\overline{K}
\end{bmatrix}
=
\begin{bmatrix}
0 \\
0 \\
0 \\
0
\end{bmatrix}
,
\end{equation*}
whence
\begin{equation*}
\overline{X}+\overline{S}=0,\quad A\overline{X}+a_{2}\overline{Y}=0,\quad
\overline{K}=0.
\end{equation*}
Now suppose $a_{2}=0.$ Then by $2ba_{2}-a_{1}b_{2}=0$ we have either $a_{1}=0$ or $b_{2}=0.$ But $a_{1}=a_{2}=0$ would give $a=0.$ On the other hand, $a_{2}=b_{2}=0$ reduces the system (\ref{macierz1}) to
\begin{equation*}
\begin{bmatrix}
A & 0 & 0 & 0 \\
0 & a_{1} & 0 & A \\
0 & 0 & a_{1} & 0 \\
0 & 0 & 2b & 0
\end{bmatrix}
\begin{bmatrix}
\overline{X} \\
\overline{Y} \\
\overline{S} \\
\overline{K}
\end{bmatrix}
=
\begin{bmatrix}
0 \\
0 \\
0 \\
0
\end{bmatrix}
.
\end{equation*}
Since $a_{2}=0$ and $a\neq 0$ imply $a_{1}A\neq 0,$ we obtain
\begin{equation*}
\overline{X}=0,\quad \overline{S}=0,\quad A\overline{K}+a_{1}\overline{Y}=0.
\end{equation*}
Finally, if $b_{2}=0$ but $a_{2}\neq 0,$ we have $b=b_{1}-a_{1}^{\prime }=0$
and from (\ref{macierz1}) we easily get (\ref{solution2}). Thus we have
proved
\begin{lemma}
Under assumption $a=a_{1}A-a_{2}^{2}\neq 0$ the system (\ref{macierz1}) has
the following solutions:
\begin{enumerate}
\item If $2ba_{2}-a_{1}b_{2}\neq 0,$ then $\overline{X}=\overline{Y}=
\overline{S}=\overline{K}=0.$
\item If $2ba_{2}-a_{1}b_{2}=0$ and either $a_{1}a_{2}b_{2}\neq 0$ or $
b_{2}=0$ and $a_{2}\neq 0$ and $b=0,$ then $\overline{X}+\overline{S}=0,\
\overline{Y}=\frac{A}{a_{2}}\overline{S},\ \overline{K}=-\frac{a_{1}}{a_{2}}
\overline{S}.$
\item If $a_{1}=b=0,$ then $\overline{X}+\overline{S}=0,\ A\overline{X}+a_{2}
\overline{Y}=0,\ \overline{K}=0.$
\item If $a_{2}=b_{2}=0,$ then $\overline{X}=0,\ \overline{S}=0,\ A\overline{
K}+a_{1}\overline{Y}=0.$
\end{enumerate}
Conversely, if $a\neq 0,$ then the above four cases give the only possible
solutions to (\ref{macierz1}).
\end{lemma}
Combining the above lemma with (\ref{I1}), (\ref{II1}), (\ref{III1}) and (
\ref{LE5-1}) we obtain the following
\begin{theorem}
\label{Splitting theorem}Let $(TM,G)$ be the tangent bundle of a Riemannian manifold $(M,g),$ $\dim M>2,$ with a $g$-natural metric $G$ such that $a=a_{1}A-a_{2}^{2}\neq 0$ on $M\times \{0\}.$ Let $Z$ be a Killing vector field on $TM$ with its Taylor series expansion around a point $(x,0)\in TM$ given by (\ref{Taylor1}). Then for each such point there exists a neighbourhood $U\subset M,$ $x\in U,$ such that one of the following cases occurs:
\begin{enumerate}
\item $2ba_{2}-a_{1}b_{2}\neq 0.$ Then
\begin{eqnarray}
\nabla _{k}X_{l}+\nabla _{l}X_{k} &=&0,\quad \nabla _{k}Y_{l}+\nabla
_{l}Y_{k}=0, \label{14} \\
P_{kl}+P_{lk} &=&0,\quad K_{kl}+K_{lk}=0. \label{15}
\end{eqnarray}
\item $2ba_{2}-a_{1}b_{2}=0$ and either $a_{1}a_{2}b_{2}\neq 0$ or $
a_{2}\neq 0$ and $b_{2}=0.$ Then
\begin{eqnarray}
P_{kl}+P_{lk}+2\left( \nabla _{k}X_{l}+\nabla _{l}X_{k}\right) &=&0,
\label{16} \\
a_{2}\left( \nabla _{k}Y_{l}+\nabla _{l}Y_{k}\right) +A\left( \nabla
_{k}X_{l}+\nabla _{l}X_{k}\right) &=&0, \label{17} \\
a_{2}\left( K_{kl}+K_{lk}\right) -a_{1}\left( \nabla _{k}X_{l}+\nabla
_{l}X_{k}\right) &=&0. \label{18}
\end{eqnarray}
\item $a_{2}b_{2}\neq 0$ and $a_{1}=b=0.$ Then
\begin{eqnarray}
P_{kl}+P_{lk}+2\left( \nabla _{k}X_{l}+\nabla _{l}X_{k}\right) &=&0,
\label{19} \\
a_{2}\left( \nabla _{k}Y_{l}+\nabla _{l}Y_{k}\right) +A\left( \nabla
_{k}X_{l}+\nabla _{l}X_{k}\right) &=&0, \label{20} \\
K_{kl}+K_{lk} &=&0. \label{21}
\end{eqnarray}
\item $a_{2}=b_{2}=0.$ Then
\begin{equation}
\nabla _{k}X_{l}+\nabla _{l}X_{k}=0,\quad P_{kl}+P_{lk}=0,\quad
AK_{lk}+a_{1}\nabla _{l}Y_{k}=0. \label{22}
\end{equation}
\end{enumerate}
\end{theorem}
In the above theorem we have put $a_{j}=a_{j}(r^{2})_{\mid (x,0)\in TM},$ $
b_{j}=b_{j}(r^{2})_{\mid (x,0)\in TM},$ $a_{j}^{\prime }=a_{j}^{\prime
}(r^{2})_{\mid (x,0)\in TM},$ $A=a_{1}+a_{3}.$
It is clear that the above results together with Proposition \ref{Lift prop
2} yield Theorem \ref{mine}.
\subsection{Case 1}
In this section we study relations between the $Y$ component of the Killing vector field on $TM$ and the base manifold $M$ (Theorems \ref{LEC1b}, \ref{LEC1c}). Various conditions for $Y$ to be non-zero and relations between $X,$ $Y,$ $P,$ $K$ are proved. Moreover, Theorem \ref{Structure 1} establishes an isomorphism between the algebras of Killing vector fields on $M$ and $TM$ for a large subclass of $g$-natural metrics.
\begin{lemma}
\label{LEC1a}Under hypothesis (\ref{H}) suppose $\dim M>2,$ $a\neq 0$ and $
2(b_{1}-a_{1}^{\prime })a_{2}-a_{1}b_{2}\neq 0$ at a point $(x,0)\in TM.$
Then
\begin{equation}
(B+A^{\prime })Y_{k}=0, \label{LEC1a1}
\end{equation}
\begin{equation}
2a\nabla _{l}K_{km}=\left[ 2a_{1}A^{\prime }+a_{2}(b_{2}-2a_{2}^{\prime })
\right] (g_{lm}Y_{k}-g_{lk}Y_{m}), \label{LEC1a2}
\end{equation}
\begin{equation}
2a\nabla _{l}P_{km}=-\left[ 2a_{2}A^{\prime }+A(b_{2}-2a_{2}^{\prime })
\right] (g_{lm}Y_{k}-g_{lk}Y_{m}), \label{LEC1a3}
\end{equation}
\begin{equation}
a_{1}\nabla _{m}\nabla _{l}Y_{k}=A^{\prime }(g_{ml}Y_{k}-g_{mk}Y_{l}),
\label{LEC1a4}
\end{equation}
\begin{equation}
a_{1}Y^{r}R_{rklm}=A^{\prime }(g_{km}Y_{l}-g_{kl}Y_{m}) \label{LAC1a5}
\end{equation}
hold at the point.
\end{lemma}
\begin{proof}
First suppose $a_{1}\neq 0.$ Symmetrizing (\ref{L6a}) in $(k,m),$ making use of the skew-symmetry of $K,$ then alternating in $(k,l)$ and applying the first Bianchi identity, we get
\begin{equation}
3a_{1}Y^{r}R_{rmkl}+(B-2A^{\prime })(g_{lm}Y_{k}-g_{km}Y_{l})=0.
\label{LEC1a6}
\end{equation}
Applying the last identity to (\ref{L6a}) we find
\begin{multline*}
6a\nabla _{l}K_{km}+2a_{1}(B+A^{\prime })g_{km}Y_{l}+3\left[ 2a_{1}A^{\prime
}+a_{2}(b_{2}-2a_{2}^{\prime })\right] g_{lk}Y_{m}+ \\
\left[ 2a_{1}(2B-A^{\prime })-3a_{2}(b_{2}-2a_{2}^{\prime })\right]
g_{lm}Y_{k}=0,
\end{multline*}
whence, symmetrizing in $(k,m),$ we obtain (\ref{LEC1a1}) and, consequently,
(\ref{LEC1a2}).
Suppose now $a_{1}=0.$ Substituting into (\ref{L6a}) we easily see that (\ref
{LEC1a2}) remains true. On the other hand, substituting $a_{1}=0$ into (\ref
{L6b}) and symmetrizing in $(k,m)$ we get
\begin{equation*}
2a_{2}Bg_{km}Y_{l}+a_{2}(B+2A^{\prime })(g_{lm}Y_{k}+g_{lk}Y_{m})=0,
\end{equation*}
whence, by contractions with $g^{km}$ and $g^{lm},$ we obtain
\begin{equation}
BY_{l}=0\text{ and }A^{\prime }Y_{l}=0 \label{LEC1a5}
\end{equation}
respectively since $a_{2}\neq 0$ must hold. Thus (\ref{LEC1a1}) holds good.
Since $X$ is a Killing vector field, (\ref{L6b}), (\ref{Conv5}), (\ref
{LEC1a1}) and (\ref{LEC1a6}) in the case $a_{1}\neq 0$ and (\ref{L6b}) and (
\ref{LEC1a5}) as well in the case $a_{1}=0$ yield (\ref{LEC1a3}).
Differentiating covariantly (\ref{II1}), using just obtained identities, we
get (\ref{LEC1a4}). Finally, alternating (\ref{LEC1a4}) in $(l,m),$ by the
use of the Ricci identity (\ref{Conv4}), we obtain (\ref{LAC1a5}). This
completes the proof.
\end{proof}
From (\ref{LAC1a5}) and Theorem \ref{Apen1} by Grycak we infer
\begin{theorem}
\label{LEC1b}Under hypothesis (\ref{H}) suppose $\dim M>2,$ $a\neq 0$ and $
2(b_{1}-a_{1}^{\prime })a_{2}-a_{1}b_{2}\neq 0$ on the set $M\times
\{0\}\subset TM.$ If the vector field $\frac{A^{\prime }}{a_{1}}
Y^{a}\partial _{a}$ does not vanish on a dense subset of $M$ and $M$ is
semisymmetric, i.e. $R\cdot R=0,$ (resp. the Ricci tensor $S$ is
semisymmetric, i.e. $R\cdot S=0$), then $M$ is a space of constant
curvature, (resp. $M$ is an Einstein manifold).
\end{theorem}
\begin{theorem}
\label{LEC1c}Under hypothesis (\ref{H}) suppose $\dim M>2,$ $a\neq 0$ and $
2(b_{1}-a_{1}^{\prime })a_{2}-a_{1}b_{2}\neq 0$ at a point $(x,0)\in TM.$
Then the $Y$ component of the Killing vector field on $TM$ satisfies
\begin{equation}
S_{1}Y\left[ a_{1}R+\frac{B}{2}g\wedge g\right] =0
\label{Th Eq II4 and III3}
\end{equation}
on $M.$
\end{theorem}
\begin{proof}
Suppose $a_{1}\neq 0.$ By (\ref{14}) and (\ref{15}) we have $\overline{S}
_{ab}=0.$ Applying this and (\ref{15}), (\ref{LEC1a1}), (\ref{LEC1a2}) and (
\ref{LEC1a5}) to Lemma \ref{LE6''}, after long computations we obtain
\begin{multline}
S_{1}\left[ 3(R_{blck}Y_{a}+R_{alck}Y_{b})+\left( R_{akbl}+R_{albk}\right)
Y_{c}-\left( R_{acbl}+R_{albc}\right) Y_{k}\right] + \label{Case 1-39} \\
S_{2}g_{ab}\left( g_{kl}Y_{c}-g_{cl}Y_{k}\right) +S_{3}\left[ \left(
g_{al}g_{bk}+g_{ak}g_{bl}\right) Y_{c}-\left(
g_{al}g_{bc}+g_{ac}g_{bl}\right) Y_{k}\right] + \\
S_{4}\left[ (g_{bk}g_{cl}-g_{bc}g_{kl})Y_{a}+(g_{ak}g_{cl}-g_{ac}g_{kl})Y_{b}
\right] =0,
\end{multline}
where
\begin{equation*}
S_{1}=a_{1}\left[ 2a_{2}a_{1}^{\prime }-a_{1}\left( b_{2}+2a_{2}^{\prime
}\right) \right] ,
\end{equation*}
\begin{multline*}
S_{2}=-2\left[ b_{2}\left( -Ab_{1}+3a_{2}b_{2}+5a_{1}A^{\prime
}-Aa_{1}^{\prime }-4a_{2}a_{2}^{\prime }\right) +2b_{1}(Aa_{2}^{\prime
}-a_{2}A^{\prime })\right. + \\
\left. 2(a_{1}A^{\prime }+Aa_{1}^{\prime }-2a_{2}a_{2}^{\prime
})a_{2}^{\prime }\right] = \\
-2\left[ b_{2}\left( -Ab_{1}+3\left( a_{2}b_{2}+a_{1}A^{\prime
}-Aa_{1}^{\prime }\right) +2a^{\prime }\right) +2b_{1}(Aa_{2}^{\prime
}-a_{2}A^{\prime })+2a^{\prime }a_{2}^{\prime }\right] ,
\end{multline*}
\begin{multline*}
S_{3}=-3a_{1}b_{2}A^{\prime }-2Ab_{2}a_{1}^{\prime }+2a_{2}A^{\prime
}a_{1}^{\prime }+4a_{2}a_{2}^{\prime }b_{2}-2a_{1}A^{\prime }a_{2}^{\prime
}+4ab_{2}^{\prime }= \\
2A^{\prime }(a_{2}a_{1}^{\prime }-a_{1}a_{2}^{\prime })-b_{2}(2a^{\prime
}+a_{1}A^{\prime })+4ab_{2}^{\prime },
\end{multline*}
\begin{multline*}
S_{4}=b_{2}\left( -2Ab_{1}+6a_{2}b_{2}+7a_{1}A^{\prime }-4Aa_{1}^{\prime
}-4a_{2}a_{2}^{\prime }\right) -4a_{2}b_{1}A^{\prime }+2a_{2}A^{\prime
}a_{1}^{\prime }+ \\
a_{2}^{\prime }\left( 4Ab_{1}+2a_{1}A^{\prime }+4Aa_{1}^{\prime
}-8a_{2}a_{2}^{\prime }\right) +4ab_{2}^{\prime }
\end{multline*}
and
\begin{equation*}
S_{2}-S_{3}+S_{4}=0
\end{equation*}
identically.
Symmetrizing (\ref{Case 1-39}) in $(a,b,l)$ we get
\begin{equation*}
(S_{2}+2S_{3})\left[ \left( g_{al}g_{bk}+g_{ak}g_{lb}+g_{ab}g_{kl}\right)
Y_{c}-\left( g_{al}g_{bc}+g_{ac}g_{lb}+g_{ab}g_{cl}\right) Y_{k}\right] =0,
\end{equation*}
whence, by contraction with $g^{al}g^{bk},$ we find $
(n-1)(n+2)(S_{2}+2S_{3})Y_{c}=0$. Therefore, symmetrizing (\ref{Case 1-39})
in $(a,b,c)$ and using the last result, we obtain
\begin{equation*}
Y_{a}T_{bckl}+Y_{b}T_{cakl}+Y_{c}T_{abkl}=0,
\end{equation*}
where
\begin{multline*}
T_{bckl}=T_{cbkl}=T_{klbc}= \\
2S_{1}(R_{bkcl}+R_{blck})-(S_{3}+S_{4})\left[ g_{bc}g_{kl}-\frac{1}{2}
(g_{bl}g_{ck}+g_{bk}g_{cl})\right] .
\end{multline*}
Hence, by the use of Walker's Lemma \ref{Apen2}, we get
\begin{equation}
Y_{a}T_{bckl}=0. \label{Case 1-46}
\end{equation}
Alternating (\ref{Case 1-46}) in $(l,c)$ and applying the Bianchi identity
we obtain
\begin{equation*}
Y_{a}\left[ 4S_{1}R_{bkcl}+\left( S_{3}+S_{4}\right) \left(
g_{bl}g_{kc}-g_{bc}g_{kl}\right) \right] =0.
\end{equation*}
Transvecting the last equation with $Y^{b},$ by the use of (\ref{LEC1a5}),
we easily get
\begin{equation*}
\left[ 4BS_{1}+a_{1}(S_{3}+S_{4})\right] Y_{a}=0,
\end{equation*}
whence (\ref{Th Eq II4 and III3}) results.
On the other hand, from the proof of Lemma \ref{LEC1a} it follows that $
a_{1}(0)=0$ implies $B(0)Y_{a}=0.$ Thus, by continuity, (\ref{Th Eq II4 and
III3}) holds good on $M.$
\end{proof}
\begin{corollary}
Under assumptions of the above theorem we have on $M:$
\begin{eqnarray*}
\left( S_{2}+2S_{3}\right) Y &=&0\text{ if }a_{1}\neq 0, \\
\left[ 4BS_{1}+a_{1}(S_{3}+S_{4})\right] Y &=&0.
\end{eqnarray*}
Notice that multiplying the first equation by $a_{1}$ and adding it to the
second one we obtain
\begin{equation*}
a_{1}\left( b_{2}a^{\prime }-2ab_{2}^{\prime }\right) Y=0.
\end{equation*}
\end{corollary}
\begin{lemma}
Under hypothesis (\ref{H}) suppose $\dim M>2,$ $a\neq 0$ and $
2(b_{1}-a_{1}^{\prime })a_{2}-a_{1}b_{2}\neq 0$ at a point $(x,0)\in TM.$
If $a_{1}a_{2}\neq 0,$ then
\begin{multline}
A_{km}=\left[ 2a_{2}\left( b_{1}-a_{1}^{\prime }\right) -\frac{3}{2}
a_{1}\left( b_{2}-2a_{2}^{\prime }\right) \right] Y_{k,m}+ \label{LEC1b1} \\
\left( 3a_{1}B-a_{2}b_{2}\right) P_{km}+3a_{2}BK_{km}=0.
\end{multline}
If $a_{2}=0$ and $a_{1}b_{2}\neq 0$ then
\begin{equation}
A_{km}=-\frac{1}{2}a_{1}\left( b_{2}-2a_{2}^{\prime }\right)
Y_{k,m}+a_{1}BP_{km}=0. \label{LEC1b2}
\end{equation}
If $a_{1}=0$ and $(b_{1}-a_{1}^{\prime })a_{2}\neq 0$ then
\begin{eqnarray}
(n+1)BK_{kn}-b_{2}P_{kn}+2(b_{1}-a_{1}^{\prime })Y_{k,n} &=&0,
\label{LEC1b3} \\
3BK_{ln}-(n-1)b_{2}P_{ln}+2(n-1)(b_{1}-a_{1}^{\prime })Y_{l,n} &=&0.
\label{LEC1b4}
\end{eqnarray}
\end{lemma}
\begin{proof}
If $a_{1}a_{2}\neq 0,$ we apply (\ref{14}) and (\ref{15}) to (\ref{LE8a}) to
obtain (\ref{LEC1b1}).
If $a_{2}=0$ but $a_{1}\neq 0,$ then necessarily $b_{2}\neq 0.$
Substituting $a_{2}=0$ into (\ref{LE8a}) and applying (\ref{14}) and (\ref
{15}) we get (\ref{LEC1b2}).
Finally, the last two identities are obtained by substituting $a_{1}=0$ into (
\ref{like 78}), contracting with $g^{km}$ and $g^{lm}$, and making use of (
\ref{14}) and (\ref{15}).
\end{proof}
Taking into account (\ref{LEC1b3}) and (\ref{LEC1b4}) together with the
equation (\ref{II1}) which, in virtue of (\ref{14}), reads
\begin{equation*}
AK_{km}+a_{2}P_{km}-a_{1}Y_{k,m}=0
\end{equation*}
we find that $B\neq 0$ implies $P=K=\nabla Y=0$ on $M.$ We conclude with the
following
\begin{theorem}
\label{Structure 1}Let $TM,$ $dimTM>4,$ be endowed with a $g$-natural metric
$G,$ such that $a_{1}=0,$ $(b_{1}-a_{1}^{\prime })a_{2}\neq 0$ and $B\neq 0$
on $M\times \{0\}\subset TM$. Let $V$ be an open subset of $TM$ such that $
M\times \{0\}\subset V.$ If $V$ admits a Killing vector field, then it is a
complete lift of a Killing vector field on $M.$ Consequently, Lie algebras
of Killing vector fields on $M$ and $V\subset TM$ are isomorphic.
\end{theorem}
Moreover, for $B=0,$ we have
\begin{theorem}
Let $TM,$ $dimTM>4,$ be endowed with a $g$-natural metric $G,$ such that $
a_{1}=0,$ $(b_{1}-a_{1}^{\prime })a_{2}\neq 0$ and $B=0$ on $M\times
\{0\}\subset TM$. Then
\begin{eqnarray*}
a_{2}P+AK &=&0, \\
b_{2}P-2(b_{1}-a_{1}^{\prime })\nabla Y &=&0
\end{eqnarray*}
hold on $M\times \{0\}\subset TM$.
\end{theorem}
Hence, for $B=0,$ $A\neq 0$ and $b_{2}\neq 0,$ a theorem similar to the
former one can be deduced.
The next theorem gives further necessary conditions for the vector $Y$ to be non-zero.
\begin{theorem}
\label{Th7C1}Under hypothesis (\ref{H}) suppose $\dim M>2,$ $a\neq 0$ and $
2(b_{1}-a_{1}^{\prime })a_{2}-a_{1}b_{2}\neq 0$ at a point $(x,0)\in TM.$ If
$a_{1}\neq 0,$ then the $Y$ component of the Killing vector field on $TM$
satisfies
\begin{equation*}
Q_{2}Y=\left\{ a_{1}b_{2}\left[ A(b_{2}-2a_{2}^{\prime })-2a_{2}B\right]
-4aB(b_{1}-a_{1}^{\prime })\right\} Y=0,
\end{equation*}
\begin{equation*}
B^{\prime }Y=0,
\end{equation*}
\begin{equation*}
B\left[ a_{1}a_{2}\left( b_{2}+2a_{2}^{\prime }\right) -2Aa_{1}a_{1}^{\prime
}+aa_{1}^{\prime }\right] Y=0.
\end{equation*}
\end{theorem}
\begin{proof}
We apply Lemma \ref{LE6'}. By the use of (\ref{14}), (\ref{15}), (\ref
{LEC1a1}) - (\ref{LEC1a4}) and (\ref{L5a2}) the components of the tensors $K$
and $L$ defined by (\ref{LE6'-2}) and (\ref{LE6'-3}) can be written as
\begin{equation*}
K_{kal}=\frac{\left[ aB(b_{1}-a_{1}^{\prime
})+2a_{1}a_{2}Bb_{2}-Aa_{1}b_{2}(b_{2}-2a_{2}^{\prime })\right] }{aa_{1}}
(2g_{kl}Y_{a}-g_{ka}Y_{l}-g_{la}Y_{k}),
\end{equation*}
\begin{multline*}
L_{abl}=3BT_{lab}+3B^{\prime }(g_{bl}Y_{a}+g_{al}Y_{b})= \\
-\frac{3B\left[ A(b_{1}-a_{1}^{\prime })-a_{2}b_{2}\right] }{a}g_{ab}Y_{l}+
\\
\frac{3\left[ B(a_{2}b_{2}-2Aa_{1}^{\prime }+2a_{2}a_{2}^{\prime
})+2aB^{\prime }\right] }{2a}\left( g_{al}Y_{b}+g_{bl}Y_{a}\right) .
\end{multline*}
Substituting into (\ref{LE6'-1}) and applying (\ref{L5a1}), (\ref{L5a2'})
and (\ref{LEC1a5}) we get
\begin{multline}
Q_{1}\left[ \left( R_{bkcl}+R_{blck}\right) Y_{a}+\left(
R_{ckal}+R_{clak}\right) Y_{b}+\left( R_{akbl}+R_{albk}\right) Y_{c}\right] +
\label{Th7C1-6} \\
Q_{2}\left[ \left( g_{al}g_{bc}+g_{bl}g_{ca}+g_{cl}g_{ab}\right)
Y_{k}+\left( g_{ak}g_{bc}+g_{bk}g_{ca}+g_{ck}g_{ab}\right) Y_{l}\right] + \\
Q_{3}g_{kl}\left( g_{bc}Y_{a}+g_{ca}Y_{b}+g_{ab}Y_{c}\right) + \\
Q_{4}\left[ \left( g_{bl}g_{kc}+g_{bk}g_{lc}\right) Y_{a}+\left(
g_{cl}g_{ka}+g_{ck}g_{la}\right) Y_{b}+\left(
g_{al}g_{kb}+g_{ak}g_{lb}\right) Y_{c}\right] =0,
\end{multline}
where
\begin{equation*}
Q_{1}=-\frac{3a_{2}\left( a_{1}b_{2}-2a_{2}a_{1}^{\prime
}+2a_{1}a_{2}^{\prime }\right) }{a},
\end{equation*}
\begin{equation*}
Q_{2}=\frac{\left[ a_{1}b_{2}(A(b_{1}-a_{1}^{\prime
})-2Ba_{2})-4aB(b_{1}-a_{1}^{\prime })\right] }{aa_{1}},
\end{equation*}
\begin{equation*}
Q_{3}=2\frac{4aB(b_{1}-a_{1}^{\prime })-a_{1}\left[ A(b_{2}-2a_{2}^{\prime
})+B(a_{2}b_{2}-6Aa_{1}^{\prime }+6a_{2}a_{2}^{\prime })\right] }{aa_{1}},
\end{equation*}
\begin{equation*}
Q_{4}=\frac{3\left[ B\left( a_{2}b_{2}-2Aa_{1}^{\prime }+2a_{2}a_{2}^{\prime
}\right) +2aB^{\prime }\right] }{a}.
\end{equation*}
Contracting (\ref{Th7C1-6}) with $g^{ab}$, by the use of (\ref{LEC1a5}), we
get
\begin{multline}
g_{kl}\left( -\frac{4BQ_{1}}{a_{1}}+(n+2)Q_{3}+2Q_{4}\right)
Y_{c}-2Q_{1}R_{kl}Y_{c}+ \label{Th7C1-7} \\
\left( \frac{2BQ_{1}}{a_{1}}+(n+2)Q_{2}+2Q_{4}\right)
(g_{cl}Y_{k}+g_{kc}Y_{l})=0.
\end{multline}
Symmetrizing in $(c,k,l)$ we obtain
\begin{equation*}
T_{kl}Y_{c}+T_{lc}Y_{k}+T_{ck}Y_{l}=0,
\end{equation*}
where
\begin{equation}
T_{kl}=T_{lk}=g_{kl}\left[ \left( n+2\right) \left( 2Q_{2}+Q_{3}\right)
+6Q_{4}\right] -2Q_{1}R_{kl}. \label{Th7C1-8}
\end{equation}
Then the Walker lemma yields $T_{kl}=0$ or $Y_{c}=0.$ Subtracting (\ref
{Th7C1-8}) from (\ref{Th7C1-7}) and contracting with $g^{kl}$ we get
\begin{equation}
\left[ a_{1}\left( (n+2)Q_{2}+2Q_{4}\right) +2BQ_{1}\right] Y_{c}=0.
\label{Th7C1-9}
\end{equation}
In the same way, by contraction of (\ref{Th7C1-6}) with $g^{cl},$ we find
\begin{equation}
\left\{ g_{bk}\left[ (n+5)Q_{2}+3Q_{3}+2(n+2)Q_{4}\right] +2Q_{1}R_{bk}
\right\} Y_{c}=0 \label{Th7C1-10}
\end{equation}
and
\begin{equation}
\left[ a_{1}\left( \left( n+3\right) Q_{2}+Q_{3}\right) -2BQ_{1}\right]
Y_{k}=0. \label{Th7C1-11}
\end{equation}
At last, by contraction of (\ref{Th7C1-6}) with $g^{kl},$ we obtain
\begin{equation}
\left[ g_{bc}\left( 2Q_{2}+nQ_{3}+2Q_{4}\right) -2Q_{1}R_{bc}\right] Y_{a}=0.
\label{Th7C1-12}
\end{equation}
Eliminating the Ricci tensor between (\ref{Th7C1-8}), (\ref{Th7C1-12}) and (
\ref{Th7C1-10}) we find
\begin{equation*}
\left[ 3(n+3)Q_{2}+(n+5)(Q_{3}+2Q_{4})\right] Y_{c}=0,
\end{equation*}
\begin{equation*}
\left[ (n+1)Q_{2}+2Q_{3}+2Q_{4}\right] Y_{c}=0.
\end{equation*}
The system consisting of (\ref{Th7C1-9}), (\ref{Th7C1-11}) and the above two
equations is equivalent to $Y=0$ or to $Q_{2}=0,$ $
2BQ_{1}+a_{1}Q_{3}=0$ and $Q_{3}+2Q_{4}=0.$ Hence $2Q_{2}+Q_{3}+2Q_{4}=0$
yields the second identity, while $a_{1}(Q_{3}+2Q_{4})-(2BQ_{1}+a_{1}Q_{3})=0$
gives the third one.
\end{proof}
\begin{remark}
From (\ref{Th7C1-6}) one can deduce the identity
\begin{equation*}
Q_{1}Y\left[ a_{1}R+\frac{B}{2}g\wedge g\right] =0.
\end{equation*}
\end{remark}
\subsection{Case 2}
The next theorem partially improves the result of Tanno (\cite{Tanno 1})
concerning Killing vector fields on $(TM,g^{C}),$ where the complete lift
$g^{C}$ of $g$ is a $g$-natural metric with $a_{2}=1$ and all other
coefficients equal to zero. (In Tanno's paper the Killing vector field on $(TM,g^{C})$ is of the form $
\iota C^{[X]}+X^{C}+Y^{v}+(u^{r}P_{r}^{t})\partial _{t}^{h},$ where $Y$ and $
P$ satisfy some additional conditions.) Furthermore, in this section we prove
some sufficient conditions for $X$ and $Y$ to be either an infinitesimal affine
transformation or an infinitesimal isometry.
\begin{theorem}
Let $X$ be an infinitesimal affine vector field on some open $U\subset M.$ If
\begin{equation*}
a_{2}=\mathrm{const}\neq 0,\quad b_{3}=\mathrm{const},\quad \text{all others equal }0
\end{equation*}
on $\pi ^{-1}(U)\subset TM,$ then $\iota C^{[X]}+X^{C}$ is a Killing vector
field on $\pi ^{-1}(U).$
\end{theorem}
\begin{proof}
It follows from the results of subsection \ref{Subsection543}.
\end{proof}
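For the reader's convenience we indicate the check explicitly. This is only a
sketch; it uses the standard relations $A=a_{1}+a_{3}$ and $B=b_{1}+b_{3}$ for
$g$-natural metrics, which we assume here rather than restate. Under the
hypotheses of the theorem,
\begin{equation*}
a_{1}=a_{3}=b_{1}=b_{2}=0,\qquad a_{2}^{\prime }=b_{3}^{\prime }=0,
\end{equation*}
so that $A=A^{\prime }=B^{\prime }=a_{1}^{\prime }=b_{1}^{\prime
}=a_{2}^{\prime }=b_{2}^{\prime }=0$ and $B=b_{3}.$ Every term of the three
components of $L_{\iota C^{[X]}+X^{C}}G$ displayed in subsection \ref
{Subsection543} contains one of these vanishing coefficients, so the Lie
derivative vanishes identically.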
\begin{lemma}
Under hypothesis (\ref{H}) suppose $\dim M>2,$ $a\neq 0$ and $
2(b_{1}-a_{1}^{\prime })a_{2}-a_{1}b_{2}=0$ at a point $(x,0)\in TM.$
Moreover, let either $a_{1}a_{2}b_{2}\neq 0$ or $a_{2}\neq 0,$ $b_{2}=0,$ $
b_{1}-a_{1}^{\prime }=0.$ Then
\begin{equation*}
(a_{1}B-2a_{2}b_{2}-3a_{1}A^{\prime }+4a_{2}a_{2}^{\prime })\left[ \left(
L_{X}g\right) -\frac{1}{n}Tr(L_{X}g)g\right] =0,
\end{equation*}
\begin{equation*}
a_{2}(b_{1}-a_{1}^{\prime })\left[ \left( L_{Y}g\right) -\frac{1}{n}
Tr(L_{Y}g)g\right] =0,
\end{equation*}
\begin{equation*}
a_{1}\left[ a_{2}^{\prime }\left( L_{Y}g\right) +A^{\prime }\left(
L_{X}g\right) \right] =0,
\end{equation*}
\begin{equation*}
\left[ a_{1}(B-3A^{\prime })+A(b_{1}-a_{1}^{\prime
})-2a_{2}(b_{2}-2a_{2}^{\prime })\right] \left( L_{X}g\right) =0.
\end{equation*}
\end{lemma}
\begin{proof}
First consider the case $a_{1}a_{2}b_{2}\neq 0.$ By the use of (\ref{16}) - (
\ref{18}) and the equality $a_{1}b_{2}=2a_{2}(b_{1}-a_{1}^{\prime })$ Lemma
\ref{LE8} yields
\begin{equation*}
\mathbf{F}=2(a_{1}B-2a_{2}b_{2}-3a_{1}A^{\prime }+4a_{2}a_{2}^{\prime
})(L_{X}g),
\end{equation*}
\begin{equation*}
\mathbf{B}=-2a_{2}(b_{1}-a_{1}^{\prime })(L_{Y}g),
\end{equation*}
whence, by Lemma \ref{Apen3}, the first two equalities result. Moreover, by
Lemma \ref{LE8} we have
\begin{equation}
\mathbf{F}+\mathbf{B}=-2a_{2}(b_{1}-a_{1}^{\prime
})(L_{Y}g)+2(a_{1}B-2a_{2}b_{2}-3a_{1}A^{\prime }+4a_{2}a_{2}^{\prime
})(L_{X}g)=0, \label{Case 2-30}
\end{equation}
and
\begin{multline*}
\mathbf{A}
_{km}=3a_{2}BK_{km}+(3a_{1}B-a_{2}b_{2})P_{km}+(a_{1}B-2a_{2}a_{2}^{\prime
})(L_{X}g)_{km}+ \\
\left[ a_{2}\left( b_{1}-a_{1}^{\prime }\right) -3a_{1}a_{2}^{\prime }\right]
\nabla _{k}Y_{m}=0.
\end{multline*}
Symmetrizing in $(k,m)$ and transforming the obtained equation in the same
manner as before we find
\begin{equation}
\left[ a_{2}\left( b_{1}-a_{1}^{\prime }\right) -3a_{1}a_{2}^{\prime }\right]
\left( L_{Y}g\right) -(a_{1}B-2a_{2}b_{2}+4a_{2}a_{2}^{\prime })(L_{X}g)=0.
\label{Case2-32}
\end{equation}
Now from (\ref{Case 2-30}) and (\ref{Case2-32}) we easily deduce the third
equality. Finally, the last one is obtained by applying (\ref{17}) to (\ref
{Case 2-30}).
A proof of the second case can be obtained in the same way. The statements
differ only in that $b_{2}=0.$
\end{proof}
\begin{corollary}
If $a_{1}(a_{2}A^{\prime }-a_{2}^{\prime }A)\neq 0,$ then $L_{X}g=0.$
\end{corollary}
\subsection{Case 3}
The main result of this section establishes an isomorphism between the algebras of
Killing vector fields on $M$ and $TM$ for a large subclass of $g$-natural metrics
(Theorem \ref{Structure 2}). Furthermore, necessary conditions for $Y$ to be non-zero
are proved.
\begin{lemma}
\label{Case3 Lemma 2}Under hypothesis (\ref{H}) suppose that $dimM>2$ and
the following conditions on $a_{j},$ $b_{j}$ at a point $(x,0)\in M$ are
satisfied: $a_{1}=0,$ $b_{1}=a_{1}^{\prime },$ $a_{2}\neq 0,$ $b_{2}\neq 0.$
Then the relations
\begin{equation}
\left( b_{2}-2a_{2}^{\prime }\right) L_{X}g=0,\quad \left(
b_{2}-2a_{2}^{\prime }\right) Tr\left( \nabla X\right) =0,\quad \left(
b_{2}-2a_{2}^{\prime }\right) TrP=0, \label{Case3-1-1}
\end{equation}
\begin{equation}
BK=0,\quad L_{X}g+P=0 \label{Case3-1-2}
\end{equation}
hold. Moreover $P$ is symmetric. Finally $a_{3}K=0.$
\end{lemma}
\begin{proof}
Substituting $a_{1}=0$ and $a_{1}^{\prime }=b_{1}$ into (\ref{like 78}),
then applying (\ref{21}) and (\ref{19}) we find
\begin{multline}
b_{2}\left[ 2g_{kn}\left( \left( L_{X}g\right) _{lm}+P_{lm}\right)
+g_{mn}\left( \left( L_{X}g\right) _{kl}+P_{kl}\right) -g_{km}\left( \left(
L_{X}g\right) _{ln}+P_{ln}\right) \right] + \label{Case3-1-3} \\
g_{ln}\left[ -3BK_{km}+\left( b_{2}-2a_{2}^{\prime }\right) \left(
L_{X}g\right) _{km}\right] + \\
g_{kl}\left[ 3BK_{mn}+\left( b_{2}-2a_{2}^{\prime }\right) \left(
L_{X}g\right) _{mn}\right] -2(b_{2}-2a_{2}^{\prime })g_{lm}\left(
L_{X}g\right) _{kn}=0.
\end{multline}
From (\ref{19}) it follows that $P_{a}^{a}+2X_{,a}^{a}=0.$ Thus contracting (
\ref{Case3-1-3}) with $g^{lm}$ and then with $g^{kn}$ we get (\ref{Case3-1-1}
) in turn. Consequently, contracting (\ref{Case3-1-3}) with $g^{kn},$ by the
use of (\ref{19}), (\ref{21}) and (\ref{Case3-1-1}), we obtain
\begin{equation*}
-3BK_{lm}+(n-1)b_{2}\left[ P_{lm}+(L_{X}g)_{lm}\right] =0.
\end{equation*}
In a similar way, contracting (\ref{Case3-1-3}) with $g^{kl},$ we find
\begin{equation*}
-(n+1)BK_{mn}+b_{2}\left[ P_{mn}+(L_{X}g)_{mn}\right] =0.
\end{equation*}
The last two equations yield (\ref{Case3-1-2}). The final statement is a
consequence of (\ref{Case3-1-2}), (\ref{II1}) and $a_{1}=0.$
\end{proof}
\begin{lemma}
\label{Case3 Lemma 4}Under assumptions of Lemma \ref{Case3 Lemma 2} relations
\begin{equation*}
\left[ \left( b_{2}-2a_{2}^{\prime }\right)
(2Ab_{1}-3a_{2}b_{2}-2a_{2}a_{2}^{\prime })-2a_{2}Bb_{1}\right] Y=0,
\end{equation*}
\begin{equation*}
\left[ a_{2}Bb_{1}+Ab_{1}b_{2}-2a_{2}\left( b_{2}a_{2}^{\prime
}-a_{2}b_{2}^{\prime }\right) \right] Y=0,
\end{equation*}
\begin{equation*}
(b_{1}b_{2}-a_{2}b_{1}^{\prime })Y=0
\end{equation*}
hold on $M\times \{0\}.$
\end{lemma}
\begin{proof}
We apply Lemma \ref{LE6''}. Substituting $a_{1}^{\prime }=b_{1},$ $a_{1}=0,$
contracting with $g^{ab}g^{cl}$ and applying (\ref{21}) we get
\begin{equation*}
-2b_{2}(n+2)K_{\ k,r}^{r}+2BE_{\ kr}^{r}-2BE_{k\ r}^{\
r}+(n-1)(b_{2}-2a_{2}^{\prime })M_{\ kr}^{r}=0,
\end{equation*}
whence, by the use of Lemma \ref{Lemmas L5} we obtain the first equality.
Similarly, contracting with $g^{al}g^{bc}$ we find
\begin{multline*}
-b_{2}(n+2)K_{\ k,r}^{r}+B(n+2)E_{\ kr}^{r}-B(n+2)E_{k\ r}^{\ r}-b_{2}nT_{\
kr}^{r}+b_{2}T_{k\ r}^{\ r}- \\
2(n+2)(n-1)Y_{k}=0,
\end{multline*}
whence the second equation results. Finally, the third one follows from
Lemma \ref{AfterLE5}.
\end{proof}
\begin{lemma}
Under assumptions of Lemma \ref{Case3 Lemma 2} suppose $L_{X}g=0.$ Then
\begin{equation*}
AY=BY=A^{\prime }Y=0
\end{equation*}
at each point $(x,0)\in TM.$
\end{lemma}
\begin{proof}
By (\ref{20}), $Y$ is a Killing vector field on $M.$ Moreover, (\ref{L5b})
reduces to
\begin{equation*}
2A^{\prime }g_{kl}Y_{p}+B\left( Y_{k}g_{lp}+Y_{l}g_{kp}\right) =0,
\end{equation*}
whence we easily deduce $BY=A^{\prime }Y=0.$ Since an infinitesimal isometry
is also an infinitesimal affine transformation, from (\ref{L6b}), by the use
of (\ref{Conv5}) and the above properties, we obtain $AY=0.$
\end{proof}
\begin{lemma}
Under assumptions of Lemma \ref{Case3 Lemma 2} suppose
\begin{equation}
P+L_{X}g=0. \label{Case3-8-1}
\end{equation}
Then $\nabla P=0$ if and only if $BY=A^{\prime }Y=0.$
\end{lemma}
\begin{proof}
Substituting into (\ref{L5b}), symmetrizing in $(k,p)$ and applying (\ref
{Case3-8-1}) we get
\begin{equation*}
a_{2}\nabla _{l}P_{kp}+B(2g_{kp}Y_{l}+g_{lp}Y_{k}+g_{kl}Y_{p})+2A^{\prime
}(g_{kl}Y_{p}+g_{lp}Y_{k})=0,
\end{equation*}
whence the thesis results.
\end{proof}
A complete lift of a Killing vector field on $M$ to $(TM,G)$ is always a
Killing vector field (see the next section). Thus we have proved
\begin{theorem}
\label{Structure 2}Let $TM,$ $dimTM>4,$ be endowed with a $g$-natural metric $G$ of the form
\begin{eqnarray*}
G_{(x,u)}(X^{h},Y^{h}) &=&A(r^{2})g_{x}(X,Y)+B(r^{2})g_{x}(X,u)g_{x}(Y,u), \\
G_{(x,u)}(X^{h},Y^{v})
&=&a_{2}(r^{2})g_{x}(X,Y)+b_{2}(r^{2})g_{x}(X,u)g_{x}(Y,u), \\
G_{(x,u)}(X^{v},Y^{h})
&=&a_{2}(r^{2})g_{x}(X,Y)+b_{2}(r^{2})g_{x}(X,u)g_{x}(Y,u), \\
G_{(x,u)}(X^{v},Y^{v}) &=&0
\end{eqnarray*}
where $a_{2}b_{2}\neq 0$ everywhere on $TM,$ while $
b_{2}-a_{2}^{\prime }$ and either $A$ or $B$ do not vanish on a dense subset
of $TM.$ If $Z$ is a Killing vector field on $TM,$ then there exists an open
subset $U$ containing $M$ such that $Z$ restricted to $U$ is a complete lift
of a Killing vector field on $M,$ i.e.
\begin{equation*}
Z_{|U}=X^{C}.
\end{equation*}
\end{theorem}
\subsection{Case 4}
The class under consideration contains the Sasaki metric $g^{S}$ and the
Cheeger-Gromoll one $g^{CG}.$ In (\cite{Tanno 2}) Tanno proved the following
\begin{theorem}
Let $(M,g)$ be a Riemannian manifold. Let $X$ be a Killing vector field on $
M,$ $P$ be a $(1,1)$ tensor field on $M$ that is skew-symmetric and parallel
and $Y$ be a vector field on $M$ that satisfies $\nabla _{k}\nabla
_{l}Y_{p}+\nabla _{l}\nabla _{k}Y_{p}=0$ and (\ref{LemmaCase4-3-1}). Then
the vector field $Z$ on $TM$ defined by
\begin{equation*}
Z=X^{C}+\iota P+Y^{\#}=(X^{r}-\nabla ^{r}Y_{s}u^{s})\partial
_{r}^{h}+(Y^{r}+S_{s}^{r}u^{s})\partial _{r}^{v}
\end{equation*}
is a Killing vector field on $(TM,g^{S}).$ Conversely, any Killing vector
field on $(TM,g^{S})$ is of this form.
\end{theorem}
A similar theorem holds for $(TM,g^{CG}),$ (\cite{Abbassi 2003}). However,
in virtue of Lemma \ref{AfterLE5} and the remark after it, the $Y$ component
vanishes.
We shall give a simple sufficient condition for $\iota P$ to be a Killing
vector field on $TM.$ The rest of the section is devoted to investigating
the properties of the $Y$ component.
Notice that $a\neq 0$ and $a_{2}=0$ require $a_{1}A\neq 0.$ From (\ref{L5b})
we get immediately
\begin{equation}
a_{1}\left( \nabla _{k}\nabla _{l}Y_{p}+\nabla _{l}\nabla _{k}Y_{p}\right)
=2A^{\prime }g_{kl}Y_{p}+B\left( Y_{k}g_{lp}+Y_{l}g_{kp}\right) .
\label{LEC4-0-1}
\end{equation}
Since $b_{2}=0,$ symmetrizing (\ref{II2}) in $\left( k,p\right) $ we get $
AE_{lkp}=a_{2}^{\prime }(g_{lk}Y_{p}+g_{lp}Y_{k}).$ Consequently, in
virtue of the properties of the Lie derivative (\ref{Conv5}), relations (\ref{L6b})
and (\ref{22}) yield
\begin{equation*}
a_{1}\nabla _{l}P_{kp}=a_{2}^{\prime }(g_{lp}Y_{k}-g_{lk}Y_{p}).
\end{equation*}
Moreover, because of $a_{2}=0,$ $b_{2}=0,$ $\overline{S}_{pq}=0$ and $\nabla
_{l}X_{q}+S_{lq}=P_{lq}=-P_{ql},$ identity (\ref{I3}) together with (\ref
{L5a})\ yields
\begin{equation*}
BP_{kp}=a_{2}^{\prime }\nabla _{k}Y_{p},
\end{equation*}
whence, since $P$ is skew-symmetric,
\begin{equation}
a_{2}^{\prime }L_{Y}g=0\text{ and }a_{2}^{\prime }Tr(\nabla Y)=0
\label{LEC4-0-3}
\end{equation}
result.
Next, Lemma \ref{LE8} yields
\begin{equation*}
B\nabla _{k}X_{m}-a_{2}^{\prime }\nabla _{k}Y_{m}+BP_{km}=0,
\end{equation*}
whence we find
\begin{equation*}
B\nabla X=0.
\end{equation*}
We conclude with
\begin{lemma}
Suppose (\ref{H}), $dimM>2,$ and $a_{2}=0,$ $b_{2}=0$ on $M\times \{0\}.$\
If $a_{2}^{\prime }=0$ on $M\times \{0\},$ then $BP=0$ and $\nabla P=0$ on $
M\times \{0\}.$
\end{lemma}
By Proposition \ref{jotaP} we obtain
\begin{theorem}
Suppose $a_{2}(r^{2})=0,$ $b_{2}(r^{2})=0$ and $B(r^{2})=0$ on $(TM,G).$ If $
M$ admits a non-trivial skew-symmetric and parallel $(0,2)$ tensor field $P,$
then its $\iota $-lift is a Killing vector field on $TM.$
\end{theorem}
\begin{lemma}
\label{LE12}If $a_{2}=0,$ $b_{2}=0$ at $(x,0)$ and $a\neq 0$ everywhere on $
TM,$ then
\begin{multline}
3a_{1}^{2}\left( \nabla ^{r}Y_{q}R_{rlkp}+\nabla ^{r}Y_{p}R_{rlkq}\right) =
\label{LE12-1} \\
a_{1}B\left[ \left( 2\nabla _{q}Y_{k}-\nabla _{k}Y_{q}\right) g_{pl}+\left(
2\nabla _{p}Y_{k}-\nabla _{k}Y_{p}\right) g_{ql}-\left( \nabla
_{p}Y_{q}+\nabla _{q}Y_{p}\right) g_{kl}\right] + \\
2A(b_{1}-a_{1}^{\prime })\left( 2\nabla _{l}Y_{k}g_{pq}-\nabla
_{l}Y_{p}g_{kq}-\nabla _{l}Y_{q}g_{kp}\right)
\end{multline}
and
\begin{multline*}
3a_{1}^{2}\nabla ^{q}Y_{p}R_{qlkr}u^{p}u^{r}= \\
2A(b_{1}-a_{1}^{\prime })\left[ Y_{k,l}r^{2}-Y_{p,l}u_{k}u^{p}\right] +a_{1}B
\left[ \left( 2Y_{k,p}-Y_{p,k}\right) u_{l}-g_{kl}Y_{p,q}u^{q}\right] u^{p}
\end{multline*}
hold at an arbitrary point $(x,0)\in TM.$
\end{lemma}
\begin{proof}
To prove the lemma it is enough to put $a_{2}=0,$ $b_{2}=0$ in (\ref{like 78}
), then multiply by $A$ and apply (\ref{22}). For convenience indices $(k,m)$
are interchanged after that.
\end{proof}
\begin{lemma}
\label{LemmaCase4-3}Suppose (\ref{H}), $dimM>2,$ $a\neq 0.$ If neither $
\nabla _{n}Y_{m}=0$ nor $\nabla _{n}Y_{m}=\frac{T}{n}g_{mn},$ then
\begin{equation}
\nabla ^{r}Y_{q}R_{rlkp}+\nabla ^{r}Y_{p}R_{rlkq}=0 \label{LemmaCase4-3-1}
\end{equation}
if and only if $B=b=0.$
\end{lemma}
\begin{proof}
The "only if " part is obvious. Put $T=Y_{r,s}g^{rs}.$ Suppose that (\ref
{LemmaCase4-3-1}) holds. Contracting the right hand side of (\ref{LE12-1}) \
in turn with $g^{kl},$ $g^{kl}g^{mn},$ $g^{lm}$ and $g^{kn}$ we get
respectively
\begin{equation*}
\left[ 2Ab+(n-1)a_{1}B\right] \left( Y_{m,n}+Y_{n,m}\right) -4AbTg_{mn}=0,
\end{equation*}
\begin{equation*}
(n-1)\left( 2Ab-a_{1}B\right) T=0,
\end{equation*}
\begin{equation*}
-\left[ 4Ab+(2n+1)a_{1}B\right] Y_{k,n}+\left[ 2Ab+(n+2)a_{1}B\right]
Y_{n,k}+2AbTg_{kn}=0,
\end{equation*}
\begin{equation*}
-a_{1}BY_{l,m}+2\left[ (n-1)Ab+a_{1}B\right] Y_{m,l}-a_{1}BTg_{lm}=0.
\end{equation*}
If $Y_{l,m}-Y_{m,l}\neq 0,$ then alternating the last two equations in
indices we obtain
\begin{eqnarray*}
2Ab+(n+1)a_{1}B &=&0, \\
2(n-1)Ab+3a_{1}B &=&0,
\end{eqnarray*}
whence $B=b=0$ for $n\neq 2$ results.
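A minimal check of this step (under the assumptions of this subsection,
namely $a_{2}=b_{2}=0$ and hence $a_{1}A\neq 0$): regarded as a linear system
in the quantities $Ab$ and $a_{1}B,$ the two relations above have determinant
\begin{equation*}
\det \begin{pmatrix} 2 & n+1 \\ 2(n-1) & 3 \end{pmatrix} =2(4-n^{2})\neq
0\qquad \text{for }n>2,
\end{equation*}
so $Ab=a_{1}B=0,$ and therefore $b=B=0.$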
If $Y_{l,m}-Y_{m,l}=0,$ then a suitable linear combination of these
equations gives
\begin{equation*}
(n-2)\left( 2Ab-a_{1}B\right) Y_{l,m}+\left( 2Ab-a_{1}B\right) Tg_{lm}=0.
\end{equation*}
By the second equation this yields $\left( 2Ab-a_{1}B\right) Y_{l,m}=0.$
Applying the last result to the first equality completes the proof.
\end{proof}
\begin{lemma}
\label{LemmaCase4-6}\label{LE13}Let $(TM,G)$ be a tangent bundle of a
manifold $(M,g),$ $dimM>2,$ with $g$-natural metric $G$ given by (\ref{g1a}
). Suppose there is given a Killing vector field $Z$ on $TM$ with Taylor
expansion (\ref{Taylor1}). If the coefficients $a_{2}(t),$ $b_{2}(t)$ vanish
along $M$ then $Y$ satisfies
\begin{equation}
A^{\prime }(2b_{1}+a_{1}^{\prime })Y=0, \label{LC4-1}
\end{equation}
\begin{equation}
\left\{ \left[ 2B(B+A^{\prime })-3AB^{\prime }\right]
a_{1}+AB(2b_{1}+a_{1}^{\prime })\right\} Y=0. \label{LC4-2}
\end{equation}
\end{lemma}
\begin{proof}
Recall that if $a_{2}(r_{0}^{2})=0,$ then necessarily $a_{1}A\neq 0$ on some
neighbourhood of $r_{0}^{2}.$
From (\ref{L5c}) we easily get
\begin{equation}
A(\overline{K}_{ab,c}+\overline{K}_{bc,a}+\overline{K}_{ca,b})+2(B+A^{\prime
})(g_{bc}Y_{a}+g_{ca}Y_{b}+g_{ab}Y_{c})=0. \label{LC4-4}
\end{equation}
From Lemma \ref{LE6'}, by the use of the assumptions on $a_{2}$ and $b_{2},$
we find
\begin{multline*}
2B\left[ \overline{K}_{ab,(k}g_{l)c}+\overline{K}_{bc,(k}g_{l)a}+\overline{K}
_{ca,(k}g_{l)b}\right] + \\
3B\left[ g_{c(l}T_{k)ab}+g_{a(l}T_{k)bc}+g_{b(l}T_{k)ca}\right] + \\
6A^{\prime }g_{kl}M_{abc}-b\left[ g_{ab}\left( Y_{c,kl}+Y_{c,lk}\right)
+g_{bc}\left( Y_{a,kl}+Y_{a,lk}\right) +g_{ca}\left(
Y_{b,kl}+Y_{b,lk}\right) \right] + \\
6B^{\prime }\left[ \left( g_{al}g_{kb}+g_{ak}g_{lb}\right) Y_{c}+\left(
g_{bl}g_{kc}+g_{bk}g_{lc}\right) Y_{a}+\left(
g_{al}g_{kc}+g_{ak}g_{lc}\right) Y_{b}\right] =0.
\end{multline*}
Applying (\ref{L5b}), (\ref{L5a2}) and (\ref{L5a2'}) we find
\begin{multline}
2B\left[ \overline{K}_{ab,(k}g_{l)c}+\overline{K}_{bc,(k}g_{l)a}+\overline{K}
_{ca,(k}g_{l)b}\right] - \label{LC4-5} \\
\frac{4bB}{a_{1}}\left[ \left( g_{al}g_{bc}+g_{bl}g_{ca}+g_{cl}g_{ab}\right)
Y_{k}+\left( g_{ak}g_{bc}+g_{bk}g_{ca}+g_{ck}g_{ab}\right) Y_{l}\right] - \\
\frac{4A^{\prime }(2b_{1}+a_{1}^{\prime })}{a_{1}}\left(
g_{ab}Y_{c}+g_{bc}Y_{a}+g_{ca}Y_{b}\right) g_{kl}+\frac{6(B^{\prime
}a_{1}-Ba_{1}^{\prime })}{a_{1}}\times \\
\left[ \left( g_{al}g_{kb}+g_{ak}g_{lb}\right) Y_{c}+\left(
g_{bl}g_{kc}+g_{bk}g_{lc}\right) Y_{a}+\left(
g_{al}g_{kc}+g_{ak}g_{lc}\right) Y_{b}\right] =0.
\end{multline}
Hence, contracting with $g^{kl},$ we obtain
\begin{multline}
B(\overline{K}_{ab,c}+\overline{K}_{bc,a}+\overline{K}_{ca,b})+
\label{LC4-6} \\
\left[ 3B^{\prime }-\frac{(B+nA^{\prime })(2b_{1}+a_{1}^{\prime })}{a_{1}}
\right] (g_{bc}Y_{a}+g_{ca}Y_{b}+g_{ab}Y_{c})=0.
\end{multline}
If $B\neq 0,$ then a linear combination of (\ref{LC4-4}) and (\ref{LC4-6})
yields $\psi Y=0$ where
\begin{equation}
\psi =2B(B+A^{\prime })-3AB^{\prime }+\frac{A(B+nA^{\prime
})(2b_{1}+a_{1}^{\prime })}{a_{1}}. \label{LC4-7}
\end{equation}
On the other hand, contractions of (\ref{LC4-5}) with $g^{ak}$ and then with
$g^{bl}$ yield respectively
\begin{multline*}
B\left[ (n+3)\overline{K}_{bc,l}+\overline{K}_{c,r}^{r}g_{bl}+\overline{K}
_{b,r}^{r}g_{cl}\right] = \\
\frac{2}{a_{1}}\left[ (n+3)bB+A^{\prime }(2b_{1}+a_{1}^{\prime })\right]
g_{bc}Y_{l}+ \\
\frac{1}{a_{1}}\left[ 2bB+3(n+2)(Ba_{1}^{\prime }-B^{\prime
}a_{1})+2A^{\prime }(2b_{1}+a_{1}^{\prime })\right] (g_{bl}Y_{c}+g_{cl}Y_{b})
\end{multline*}
and
\begin{equation*}
2B\overline{K}_{c,r}^{r}=\frac{1}{a_{1}}\left[ 4bB+3(n+1)(Ba_{1}^{\prime
}-B^{\prime }a_{1})+2A^{\prime }(2b_{1}+a_{1}^{\prime })\right] Y_{c}.
\end{equation*}
Hence we find
\begin{multline}
2(n+3)a_{1}B\overline{K}_{bc,l}=4\left[ (n+3)bB+A^{\prime
}(2b_{1}+a_{1}^{\prime })\right] g_{bc}Y_{l}+ \label{LC4-9} \\
\left[ 3(n+3)(Ba_{1}^{\prime }-B^{\prime }a_{1})+2A^{\prime
}(2b_{1}+a_{1}^{\prime })\right] (g_{bl}Y_{c}+g_{cl}Y_{b})
\end{multline}
and
\begin{multline*}
(n+3)a_{1}B(\overline{K}_{bc,l}+\overline{K}_{cl,b}+\overline{K}_{lb,c})- \\
\left[ (n+3)(B(2b_{1}+a_{1}^{\prime })-3a_{1}B^{\prime })+4A^{\prime
}(2b_{1}+a_{1}^{\prime })\right] (g_{bc}Y_{l}+g_{cl}Y_{b}+g_{lb}Y_{c})=0
\end{multline*}
If $B\neq 0,$ then combining the last relation with (\ref{LC4-6}) we obtain (
\ref{LC4-1}) and, as a consequence of (\ref{LC4-7}), equality (\ref{LC4-2}).
On the other hand, if $B(r_{0}^{2})=0,$ then contractions of (\ref{LC4-9})
with $g^{bc}$ and $g^{bl}$ yield either $Y^{a}=0$ or $B^{\prime }=0$ and $
A^{\prime }(2b_{1}+a_{1}^{\prime })=0.$ This completes the proof.
\end{proof}
\begin{lemma}
For an arbitrary $B$ we have
\begin{equation*}
a_{1}^{2}B(\nabla _{l}\nabla _{c}Y_{b}+\nabla _{l}\nabla
_{b}Y_{c})=-2ABbg_{bc}Y_{l}-\frac{3}{2}A(Ba_{1}^{\prime }-a_{1}B^{\prime
})(g_{bl}Y_{c}+g_{cl}Y_{b}),
\end{equation*}
\begin{multline*}
a_{1}^{2}B(\nabla _{l}\nabla _{c}Y_{b}-\nabla _{b}\nabla _{l}Y_{c})=-B\left(
a_{1}B+2Ab\right) g_{bc}Y_{l}+ \\
-\dfrac{1}{2}\left[ 4A^{\prime }a_{1}B+3A(Ba_{1}^{\prime }-a_{1}B^{\prime })
\right] g_{bl}Y_{c}-\dfrac{1}{2}\left[ 2a_{1}B^{2}+3A(Ba_{1}^{\prime
}-a_{1}B^{\prime })\right] g_{cl}Y_{b}.
\end{multline*}
\end{lemma}
\begin{proof}
We can suppose $B\neq 0.$ From (\ref{II1}) we get
\begin{equation*}
A\nabla _{l}\overline{K}_{km}=-a_{1}\left( \nabla _{l}\nabla
_{k}Y_{m}+\nabla _{l}\nabla _{m}Y_{k}\right) .
\end{equation*}
Combining this with (\ref{LC4-9}), by the use of (\ref{LC4-1}) we find the
first equality. Hence, by the use of (\ref{L5b}) and (\ref{LC4-1}), we get
the second one. On the other hand, if $BY=0,$ then the previous lemma yields
$B^{\prime }Y=0.$ This completes the proof.
\end{proof}
\begin{lemma}
\label{LEC4-10}Under hypothesis (\ref{H}) suppose $\dim M>2,$ $a\neq 0$ on $
TM$ and $a_{2}=0,$ $b_{2}=0$ on $M\times \{0\}\subset TM.$ Then
\begin{equation}
\left[ Aa_{2}^{\prime }(b_{1}+a_{1}^{\prime })-2a_{1}(Ba_{2}^{\prime
}+Ab_{2}^{\prime })\right] Y=0, \label{LEC4-10-1}
\end{equation}
\begin{equation}
Y\left[ a_{1}a_{2}^{\prime }R-\frac{(Ba_{2}^{\prime }+2Ab_{2}^{\prime })}{2}
g\wedge g\right] =0. \label{LEC4-10-2}
\end{equation}
If $a_{2}^{\prime }\neq 0,$ then
\begin{equation}
b_{2}^{\prime }\nabla Y=0,\quad (b_{1}-a_{1}^{\prime })\nabla Y=0
\label{LEC4-10-2-1}
\end{equation}
\end{lemma}
\begin{proof}
For the proof of the first part we apply Lemma \ref{LE6''}. Substituting $
a_{2}=0,$ $b_{2}=0,$ by the use of (\ref{22}), we get
\begin{multline*}
a_{1}\left[
2E_{ab}^{p}R_{plck}-E_{bk}^{p}R_{plac}+E_{bc}^{p}R_{plak}-E_{ak}^{p}R_{plbc}+E_{ac}^{p}R_{plbk}
\right] + \\
B\left[ \left( E_{ckb}-E_{kcb}\right) g_{al}+\left( E_{cak}-E_{kac}\right)
g_{bl}+\right. \\
\left. \left( E_{abk}+E_{bak}\right) g_{cl}-\left( E_{abc}+E_{bac}\right)
g_{kl}\right] -2a_{2}^{\prime }\left( M_{abk}g_{cl}-M_{abc}g_{kl}\right) + \\
2b_{2}^{\prime }\left[ \left( g_{bk}g_{cl}-g_{bc}g_{kl}\right) Y_{a}+\left(
g_{ak}g_{cl}-g_{ac}g_{kl}\right) Y_{b}+\right. \\
\left. \left( g_{al}g_{bk}+g_{ak}g_{bl}\right) Y_{c}-\left(
g_{al}g_{bc}+g_{ac}g_{bl}\right) Y_{k}\right] =0.
\end{multline*}
Applying (\ref{L5a1}) - (\ref{L5a2'}) and the Bianchi identity we find
\begin{multline*}
-a_{1}^{2}a_{2}^{\prime }\left[
3R_{blck}Y_{a}+3R_{alck}Y_{b}+(R_{akbl}+R_{bkal})Y_{c}-(R_{acbl}+R_{bcal})Y_{k}
\right] - \\
2a_{2}^{\prime }\left[ A(b_{1}+a_{1}^{\prime })-a_{1}B\right] g_{ab}\left(
g_{kl}Y_{c}-g_{cl}Y_{k}\right) - \\
\left[ 2Aa_{2}^{\prime }(b_{1}+a_{1}^{\prime })+a_{1}(2Ab_{2}^{\prime
}-Ba_{2}^{\prime })\right] \left[ \left( g_{bc}g_{kl}-g_{bk}g_{cl}\right)
Y_{a}+\left( g_{ac}g_{kl}-g_{ak}g_{cl}\right) Y_{b}\right] + \\
a_{1}(Ba_{2}^{\prime }+2Ab_{2}^{\prime })\left[ g_{bl}\left(
g_{ak}Y_{c}-g_{ac}Y_{k}\right) +g_{al}\left( g_{bk}Y_{c}-g_{bc}Y_{k}\right)
\right] =0.
\end{multline*}
Symmetrizing the last relation in $(a,b,l)$ we obtain (\ref{LEC4-10-1}).
Then, symmetrize in $(a,b,k).$ Since the coefficient of $Y_{c}$ vanishes by (
\ref{LEC4-10-1}), by the use of the Walker lemma and (\ref{LEC4-10-1})
we get either $Y=0$ or
\begin{equation*}
a_{1}a_{2}^{\prime }(R_{acbl}+R_{albc})=(Ba_{2}^{\prime }+2Ab_{2}^{\prime
})(g_{al}g_{bc}+g_{ac}g_{bl}-2g_{ab}g_{cl}),
\end{equation*}
whence, alternating in $(b,l),$ we easily obtain
\begin{equation*}
a_{1}a_{2}^{\prime }R_{acbl}=(Ba_{2}^{\prime }+2Ab_{2}^{\prime
})(g_{al}g_{bc}-g_{ab}g_{cl}).
\end{equation*}
Thus (\ref{LEC4-10-2}) is proved.
Suppose now $Y\neq 0$ and $a_{2}^{\prime }\neq 0$ on $M\times \{0\}.$
Applying (\ref{LEC4-10-2}) to (\ref{LE12-1}) and eliminating $B,$ by the use
of (\ref{LEC4-0-3}), we obtain
\begin{multline}
2(b_{1}-a_{1}^{\prime })\left(
Y_{n,l}g_{km}+Y_{m,l}g_{kn}-2Y_{k,l}g_{mn}\right) + \label{LEC4-10-6} \\
\left( 2\frac{a_{1}b_{2}^{\prime }}{a_{2}^{\prime }}-b_{1}-a_{1}^{\prime
}\right) (Y_{k,m}g_{ln}+Y_{k,n}g_{lm})- \\
\left( 4\frac{a_{1}b_{2}^{\prime }}{a_{2}^{\prime }}+b_{1}+a_{1}^{\prime
}\right) (Y_{m,k}g_{ln}+Y_{n,k}g_{lm})=0.
\end{multline}
Contracting (\ref{LEC4-10-6}) with $g^{lm},$ by the use of (\ref{LEC4-0-3})
we get
\begin{equation*}
\left[ (n+1)\frac{a_{1}b_{2}^{\prime }}{a_{2}^{\prime }}-(b_{1}-a_{1}^{
\prime })\right] Y_{k,n}=0.
\end{equation*}
On the other hand, by contraction with $g^{mn}$ we obtain
\begin{equation*}
\left[ \frac{a_{1}b_{2}^{\prime }}{a_{2}^{\prime }}-(n-1)(b_{1}-a_{1}^{
\prime })\right] Y_{k,l}=0.
\end{equation*}
Hence we easily get either $\nabla Y=0$ or both $b_{2}^{\prime }=0$ and $
b_{1}-a_{1}^{\prime }=0$ on $M\times \{0\}.$
\end{proof}
\begin{remark}
If $a_{2}^{\prime }\neq 0$ and $Y\neq 0,$ then equations (\ref{LEC4-10-2-1})
give a further restriction on the metric $G.$ Namely, if $\nabla Y=0,$ then
from (\ref{22}) we infer $K=0$ while from (\ref{LEC4-0-1}) we get $A^{\prime
}=0$ and $B=0$ on $M\times \{0\}.$ Consequently, (\ref{LC4-2}) yields $
B^{\prime }=0.$
On the other hand, substituting $b_{2}^{\prime }=0$ and $b_{1}=a_{1}^{\prime
}$ into Lemma \ref{AfterLE5} we get $b_{1}^{\prime }=0$ on $M\times \{0\}.$
\end{remark}
\section{Lifts properties}
\subsection{$X^{C}$ and $X^{v}$}
If $X=X^{r}\partial _{r}$ is a vector field on $M,$ then $
X^{C}=X^{r}\partial _{r}+u^{s}\partial _{s}X^{r}\delta _{r}=X^{r}\partial
_{r}^{h}+u^{s}\nabla _{s}X^{r}\partial _{r}^{v}$ is said to be the complete
lift of $X$ to $TM.$
\begin{lemma}
Let $X$ be a vector field on $(M,g)$ satisfying
\begin{equation}
L_{X}g=fg, \label{Kvf}
\end{equation}
$f$ being a function on $M,$ and $X^{C}$ be its complete lift to $(TM,G)$
with $g$-natural metric $G.$ Then
\begin{eqnarray*}
\left( L_{X^{C}}G\right) \left( \partial _{k}^{h},\partial _{l}^{h}\right)
&=&\left[ a_{2}\partial f+f(A+A^{\prime }r^{2})\right] g_{kl}+f(2B+B^{\prime
}r^{2})u_{k}u_{l}+ \\
&&\frac{1}{2}b_{2}r^{2}\left( \nabla _{k}fu_{l}+\nabla _{l}fu_{k}\right) ,
\end{eqnarray*}
\begin{eqnarray*}
\left( L_{X^{C}}G\right) \left( \partial _{k}^{v},\partial _{l}^{h}\right)
&=&\frac{1}{2}a_{1}\left( \nabla _{l}fu_{k}-\nabla _{k}fu_{l}+\partial
fg_{kl}\right) + \\
&&f(a_{2}+a_{2}^{\prime }r^{2})g_{kl}+f(2b_{2}+b_{2}^{\prime
}r^{2})u_{k}u_{l}+\frac{1}{2}b_{1}r^{2}\nabla _{l}fu_{k},
\end{eqnarray*}
\begin{equation*}
\left( L_{X^{C}}G\right) \left( \partial _{k}^{v},\partial _{l}^{v}\right)
=f(a_{1}+a_{1}^{\prime }r^{2})g_{kl}+f(2b_{1}+b_{1}^{\prime
}r^{2})u_{k}u_{l},
\end{equation*}
where $\partial f=u^{r}\nabla _{r}f.$
\end{lemma}
\begin{proof}
Straightforward calculations with the use of (\ref{LD10}) - (\ref{LD12}).
Relations (\ref{Conv3}) and (\ref{Conv5}) are useful.
\end{proof}
\begin{theorem}
\label{Lift prop 2}Let $X$ be a vector field on $(M,g)$ such that (\ref{Kvf}
) is satisfied. Then $X^{C}$ is a Killing vector field on $(TM,G)$ with
non-degenerated $g$-natural metric $G$ if and only if $f=0$ on $M.$
\end{theorem}
\begin{proof}
If $f=0,$ then the theorem is obvious by the previous lemma.
Suppose that $L_{X^{C}}G=0$ on $TM$ holds for some $f\neq 0.$ First,
contracting the third equation with $g^{kl},$ and next transvecting with $
u^{k}u^{l},$ we easily find
\begin{gather*}
f(a_{1}+a_{1}^{\prime }r^{2})=0, \\
f(2b_{1}+b_{1}^{\prime }r^{2})=0.
\end{gather*}
Hence we obtain $a_{1}=0$ and $b_{1}=0$ on $TM$ since the only smooth
solution to $h(r^{2})+r^{2}h^{\prime }(r^{2})=0$ that can be prolonged
smoothly on $[0,\infty )$ is $h(r^{2})=0.$ Then the second
equation gives $a_{2}=0$ on $TM,$ a contradiction to $a(r^{2})\neq 0.$ This
completes the proof.
\end{proof}
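For completeness we record the elementary argument behind the smoothness claim
used above (it will be invoked once more in the next proof); this is only a
side remark. Writing $t=r^{2},$
\begin{equation*}
h(t)+t\,h^{\prime }(t)=\frac{d}{dt}\bigl( t\,h(t)\bigr) =0\quad
\Longrightarrow \quad t\,h(t)=\mathrm{const},
\end{equation*}
and smoothness of $h$ at $t=0$ forces the constant to vanish, whence $h\equiv
0$ on $[0,\infty ).$ The same reasoning, applied to $2h(t)+t\,h^{\prime
}(t)=t^{-1}\bigl( t^{2}h(t)\bigr) ^{\prime }$ for $t>0,$ settles the equation
$f(2b_{1}+b_{1}^{\prime }r^{2})=0$ above and the equation $B+r^{2}B^{\prime
}=0$ appearing in the next proof.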
\begin{proposition}
The vertical lift $X^{v}=X^{r}\partial _{r}^{v}$ of a Killing vector field $
X=X^{r}\partial _{r}$ to $(TM,G)$ with $g$-natural metric $G$ is a Killing
vector field on $TM$ if and only if $a_{j}^{\prime }=0$ and $b_{j}=0$ on $
TM.$
\end{proposition}
\begin{proof}
Suppose $X^{v}$ is a Killing vector field. Since $X$ itself is a Killing
vector field, (\ref{LD10}) yields
\begin{equation*}
b_{2}(X_{r,k}u_{l}+X_{r,l}u_{k})u^{r}+B(u_{k}X_{l}+u_{l}X_{k})+2u^{r}X_{r}(A^{\prime }g_{kl}+B^{\prime }u_{k}u_{l})=0,
\end{equation*}
whence, by contraction with $g^{kl}$ and $u^{k}u^{l}$ we obtain
\begin{equation*}
2u^{r}X_{r}(B+nA^{\prime }+r^{2}B^{\prime })=0
\end{equation*}
and
\begin{equation*}
2r^{2}u^{r}X_{r}(B+A^{\prime }+r^{2}B^{\prime })=0
\end{equation*}
since $X$ is a Killing vector field on $M.$ Thus $A^{\prime }=0$ and the
only smooth solution to $B+r^{2}B^{\prime }=0$ on $TM$ is $B=0$. In a similar
manner, from (\ref{LD11}) and (\ref{LD12}) we deduce that $a_{1}^{\prime
}=a_{2}^{\prime }=0$ and $b_{1}=b_{2}=0$ on $TM.$ The "only if" part is
obvious. Thus the proposition is proved.
\end{proof}
\subsection{$V^{a}\partial _{a}^{v}=u^{p}\protect\nabla ^{r}Y_{p}\partial
_{r}^{v}$}
Let $Y$ be a non-parallel Killing vector field on $M$ and consider its lift $
u^{p}\nabla ^{r}Y_{p}\partial _{r}^{v}$ to $(TM,G).$ Then we have $\partial
_{k}^{v}V^{a}=\nabla ^{a}Y_{k},$ $\partial _{k}^{h}V^{a}=u^{p}\Theta
_{k}\left( \nabla ^{a}Y_{p}\right) $ and from (\ref{LD10}) - (\ref{LD12}) we
obtain
\begin{align*}
\left( L_{V^{a}\partial _{a}^{v}}G\right) \left( \partial _{k}^{h},\partial
_{l}^{h}\right) & =a_{2}(\nabla _{l}\nabla _{k}Y_{p}+\nabla _{k}\nabla
_{l}Y_{p})u^{p}+B(\nabla _{k}Y_{p}u^{p}u_{l}+\nabla _{l}Y_{p}u^{p}u_{k}), \\
\left( L_{V^{a}\partial _{a}^{v}}G\right) \left( \partial _{k}^{v},\partial
_{l}^{h}\right) & =a_{2}\nabla _{l}Y_{k}+a_{1}\nabla _{l}\nabla
_{k}Y_{p}u^{p}+b_{2}\nabla _{l}Y_{p}u^{p}u_{k}, \\
\left( L_{V^{a}\partial _{a}^{v}}G\right) \left( \partial _{k}^{v},\partial
_{l}^{v}\right) & =0.
\end{align*}
Hence we deduce
\begin{proposition}
Let $Y$ be a non-parallel Killing vector field on $M$ satisfying $\nabla
\nabla Y=0.$ Then $u^{p}\nabla ^{r}Y_{p}\partial _{r}^{v}$ is a Killing
vector field on $TM$ if and only if $a_{2}=b_{2}=B=0$ on $TM.$
\end{proposition}
\begin{proposition}
Let $Y$ be a non-parallel Killing vector field on $M.$ If $a_{2}=b_{2}=B=0$
on $TM$ and $u^{p}\nabla ^{r}Y_{p}\partial _{r}^{v}$ is a Killing vector
field on $TM$ then $\nabla \nabla Y=0$ on $M$.
\end{proposition}
\begin{proof}
It is enough to symmetrize the second equation.
\end{proof}
\subsection{$\protect\iota P$}
\begin{proposition}
\label{jotaP}Let $P$ be an arbitrary (0,2)-tensor field on $(M,g).$ Then its $
\iota $-lift $\iota P=u^{r}P_{r}^{a}\partial _{a}^{v}$ to $\left(
TM,G\right) $ with $g$-natural metric $G$ satisfies
\begin{multline*}
\left( L_{\iota P}G\right) \left( \partial _{k}^{h},\partial _{l}^{h}\right)
=a_{2}u^{r}\left( \nabla _{k}P_{lr}+\nabla _{l}P_{kr}\right)
+b_{2}u^{p}u^{r}\left( \nabla _{k}P_{pr}u_{l}+\nabla _{l}P_{pr}u_{k}\right) +
\\
2(A^{\prime }g_{kl}+B^{\prime }u_{k}u_{l})P_{pr}u^{p}u^{r}+Bu^{r}\left(
P_{kr}u_{l}+P_{lr}u_{k}\right) , \\
\left( L_{\iota P}G\right) \left( \partial _{k}^{v},\partial _{l}^{h}\right)
=a_{2}P_{lk}+b_{2}u^{r}P_{rk}u_{l}+a_{1}u^{r}\nabla _{l}P_{kr}+b_{1}\nabla
_{l}P_{r}^{a}u_{a}u^{r}u_{k}+ \\
2(a_{2}^{\prime }g_{kl}+b_{2}^{\prime
}u_{k}u_{l})P_{pr}u^{p}u^{r}+b_{2}u^{r}\left( P_{kr}u_{l}+P_{lr}u_{k}\right)
, \\
\left( L_{\iota P}G\right) \left( \partial _{k}^{v},\partial _{l}^{v}\right)
=a_{1}\left( P_{kl}+P_{lk}\right) +b_{1}\left[ u^{p}\left(
P_{kp}+P_{pk}\right) u_{l}+u^{p}\left( P_{lp}+P_{pl}\right) u_{k}\right] + \\
2(a_{1}^{\prime }g_{kl}+b_{1}^{\prime }u_{k}u_{l})P_{pr}u^{p}u^{r}.
\end{multline*}
\end{proposition}
\begin{proof}
We have $V^{a}=u^{r}P_{r}^{a},$ $\partial _{k}^{v}V^{a}=P_{k}^{a},$ $
\partial _{k}^{h}V^{a}=u^{p}\left( \partial _{k}P_{p}^{a}-\Gamma
_{pk}^{t}P_{t}^{a}\right) $ and $\partial _{k}^{h}V^{a}+V^{r}\Gamma
_{kr}^{a}=u^{p}\nabla _{k}P_{p}^{a}.$
\end{proof}
\begin{proposition}
Let $P$ be a skew-symmetric (0,2)-tensor field on $(M,g).$ Then its $\iota $
-lift $\iota P=u^{r}P_{r}^{a}\partial _{a}^{v}$ to $\left( TM,G\right) $
with $g$-natural metric $G$ satisfies
\begin{multline*}
\left( L_{\iota P}G\right) \left( \partial _{k}^{h},\partial _{l}^{h}\right)
=a_{2}\left( u^{r}\nabla _{k}P_{lr}+u^{r}\nabla _{l}P_{kr}\right) +B\left(
u^{r}P_{kr}u_{l}+u^{r}P_{lr}u_{k}\right) , \\
\left( L_{\iota P}G\right) \left( \partial _{k}^{v},\partial _{l}^{h}\right)
=a_{2}P_{lk}+b_{2}u^{r}P_{lr}u_{k}+a_{1}u^{r}\nabla _{l}P_{kr}, \\
\left( L_{\iota P}G\right) \left( \partial _{k}^{v},\partial _{l}^{v}\right)
=0.
\end{multline*}
\end{proposition}
\begin{proof}
By definition, $\iota P=u^{r}P_{r}^{t}\partial _{t}^{v},$ where $
P_{r}^{t}=P_{xr}g^{xt}.$ Thus it is enough to check the identities making
use of (\ref{LD10}) - (\ref{LD12}) with $H^{a}=0,$ $V^{a}=u^{r}P_{r}^{a},$ $
V_{k}=-u^{r}P_{rk}.$
\end{proof}
\subsubsection{$\protect\iota C^{\left[ X\right] }$}
Put $C^{\left[ X\right] }=\left( \left( C^{\left[ X\right] }\right)
_{k}^{h}\right) =\left( -g^{hr}\left( L_{X}g\right) _{rk}\right) =\left(
-\left( \nabla ^{h}X_{k}+\nabla _{k}X^{h}\right) \right) $ on $(M,g).$ Then
its $\iota $-lift $\iota C^{\left[ X\right] }=\left( 0,\ u^{k}\left( C^{
\left[ X\right] }\right) _{k}^{h}\right) =\left( 0,\ -u^{k}\left( \nabla
^{h}X_{k}+\nabla _{k}X^{h}\right) \right) \ $is a vertical vector field on $
TM.$ In adapted coordinates $\left( \partial _{k}^{v},\partial
_{l}^{h}\right) $ we have
\begin{equation*}
\iota C^{\left[ X\right] }=-u^{k}\left( \nabla ^{h}X_{k}+\nabla
_{k}X^{h}\right) \partial _{h}^{v}.
\end{equation*}
Hence, applying Lemma \ref{Lie Deriv}, we easily get
\begin{multline*}
\left( L_{\iota C^{\left[ X\right] }}G\right) \left( \partial
_{k}^{h},\partial _{l}^{h}\right) = \\
-a_{2}u^{p}\left[ \nabla _{k}\nabla _{l}X_{p}+\nabla _{l}\nabla
_{k}X_{p}+\nabla _{k}\nabla _{p}X_{l}+\nabla _{l}\nabla _{p}X_{k}\right] - \\
2b_{2}u^{p}u^{q}\left[ \nabla _{k}\nabla _{p}X_{q}u_{l}+\nabla _{l}\nabla
_{p}X_{q}u_{k}\right] -4\left( A^{\prime }g_{kl}+B^{\prime
}u_{k}u_{l}\right) u^{p}u^{q}\nabla _{p}X_{q}- \\
Bu^{p}\left[ \left( \nabla _{k}X_{p}+\nabla _{p}X_{k}\right) u_{l}+\left(
\nabla _{l}X_{p}+\nabla _{p}X_{l}\right) u_{k}\right] ,
\end{multline*}
\begin{multline*}
\left( L_{\iota C^{\left[ X\right] }}G\right) \left( \partial
_{k}^{v},\partial _{l}^{h}\right) = \\
-a_{2}\left( \nabla _{k}X_{l}+\nabla _{l}X_{k}\right) -b_{2}u^{p}\left[
2\left( \nabla _{k}X_{p}+\nabla _{p}X_{k}\right) u_{l}+\left( \nabla
_{l}X_{p}+\nabla _{p}X_{l}\right) u_{k}\right] - \\
a_{1}u^{p}\left( \nabla _{l}\nabla _{k}X_{p}+\nabla _{l}\nabla
_{p}X_{k}\right) -2b_{1}u^{p}u^{q}\nabla _{l}\nabla _{p}X_{q}u_{k}- \\
4\left( a_{2}^{\prime }g_{kl}+b_{2}^{\prime }u_{k}u_{l}\right)
u^{p}u^{q}\nabla _{p}X_{q},
\end{multline*}
\begin{multline*}
\left( L_{\iota C^{\left[ X\right] }}G\right) \left( \partial
_{k}^{v},\partial _{l}^{v}\right) = \\
-2a_{1}\left( \nabla _{k}X_{l}+\nabla _{l}X_{k}\right) -2b_{1}u^{p}\left[
\left( \nabla _{k}X_{p}+\nabla _{p}X_{k}\right) u_{l}+\left( \nabla
_{l}X_{p}+\nabla _{p}X_{l}\right) u_{k}\right] - \\
4\left( a_{1}^{\prime }g_{kl}+b_{1}^{\prime }u_{k}u_{l}\right)
u^{p}u^{q}\nabla _{p}X_{q}.
\end{multline*}
\subsubsection{Complete lift $X^{C}$ of $X$ to $(TM,G)$}
We have $X^{C}=(X^{r}\partial _{r})^{C}=X^{r}\partial _{r}+\partial
X^{r}\delta _{r}=X^{r}\partial _{r}^{h}+u^{p}\nabla _{p}X^{r}\partial
_{r}^{v}.$ Making use of Lemma \ref{Lie Deriv} we obtain
\begin{multline*}
\left( L_{X^{C}}G\right) \left( \partial _{k}^{h},\partial _{l}^{h}\right) =
\\
a_{2}u^{p}\left[ \nabla _{k}\nabla _{p}X_{l}+X^{r}R_{rkpl}+\nabla _{l}\nabla
_{p}X_{k}+X^{r}R_{rlpk}\right] + \\
b_{2}u^{p}u^{q}\left[ \left( \nabla _{k}\nabla
_{p}X_{q}+X^{r}R_{rkpq}\right) u_{l}+\left( \nabla _{l}\nabla
_{p}X_{q}+X^{r}R_{rlpq}\right) u_{k}\right] + \\
A\left( \nabla _{k}X_{l}+\nabla _{l}X_{k}\right) +Bu^{p}\left[ \left( \nabla
_{k}X_{p}+\nabla _{p}X_{k}\right) u_{l}+\left( \nabla _{l}X_{p}+\nabla
_{p}X_{l}\right) u_{k}\right] + \\
2\left( A^{\prime }g_{kl}+B^{\prime }u_{k}u_{l}\right) u^{p}u^{q}\nabla
_{p}X_{q},
\end{multline*}
\begin{multline*}
\left( L_{X^{C}}G\right) \left( \partial _{k}^{v},\partial _{l}^{h}\right) =
\\
a_{1}u^{p}\left[ \nabla _{l}\nabla _{p}X_{k}+X^{r}R_{rlpk}\right]
+a_{2}\left( \nabla _{k}X_{l}+\nabla _{l}X_{k}\right) + \\
b_{2}u^{p}\left[ \left( \nabla _{k}X_{p}+\nabla _{p}X_{k}\right)
u_{l}+\left( \nabla _{l}X_{p}+\nabla _{p}X_{l}\right) u_{k}\right] + \\
b_{1}u^{p}u^{q}\left( \nabla _{l}\nabla _{p}X_{q}+X^{r}R_{rlpq}\right)
u_{k}+2\left( a_{2}^{\prime }g_{kl}+b_{2}^{\prime }u_{k}u_{l}\right)
u^{p}u^{q}\nabla _{p}X_{q},
\end{multline*}
\begin{multline*}
\left( L_{X^{C}}G\right) \left( \partial _{k}^{v},\partial _{l}^{v}\right) =
\\
a_{1}\left( \nabla _{k}X_{l}+\nabla _{l}X_{k}\right) +b_{1}u^{p}\left[
\left( \nabla _{k}X_{p}+\nabla _{p}X_{k}\right) u_{l}+\left( \nabla
_{l}X_{p}+\nabla _{p}X_{l}\right) u_{k}\right] + \\
2\left( a_{1}^{\prime }g_{kl}+b_{1}^{\prime }u_{k}u_{l}\right)
u^{p}u^{q}\nabla _{p}X_{q}.
\end{multline*}
\subsubsection{\label{Subsection543}$\protect\iota C^{\left[ X\right]
}+X^{C} $ for an infinitesimal affine transformation}
Suppose that $X$ is an infinitesimal affine transformation on $M.$ Then by (
\ref{Conv5}) and the definition
\begin{equation*}
\nabla _{k}\nabla _{l}X_{p}+\nabla _{k}\nabla _{p}X_{l}=\nabla _{k}\nabla
_{l}X_{p}+X^{r}R_{rklp}+\nabla _{k}\nabla _{p}X_{l}+X^{r}R_{rkpl}=0
\end{equation*}
and
\begin{equation*}
u^{p}u^{q}\nabla _{k}\nabla _{p}X_{q}=-u^{p}u^{q}X^{r}R_{rkpq}=0.
\end{equation*}
Therefore, applying the results of the previous subsections, we find
\begin{equation*}
\left( L_{\iota C^{\left[ X\right] }+X^{C}}G\right) \left( \partial
_{k}^{h},\partial _{l}^{h}\right) =A\left( \nabla _{k}X_{l}+\nabla
_{l}X_{k}\right) -2\left( A^{\prime }g_{kl}+B^{\prime }u_{k}u_{l}\right)
u^{p}u^{q}\nabla _{p}X_{q},
\end{equation*}
\begin{multline*}
\left( L_{\iota C^{\left[ X\right] }+X^{C}}G\right) \left( \partial
_{k}^{v},\partial _{l}^{h}\right) = \\
-2\left( a_{2}^{\prime }g_{kl}+b_{2}^{\prime }u_{k}u_{l}\right)
u^{p}u^{q}\nabla _{p}X_{q}-b_{2}u^{p}\left( \nabla _{k}X_{p}+\nabla
_{p}X_{k}\right) u_{l},
\end{multline*}
\begin{multline*}
\left( L_{\iota C^{\left[ X\right] }+X^{C}}G\right) \left( \partial
_{k}^{v},\partial _{l}^{v}\right) =-a_{1}\left( \nabla _{k}X_{l}+\nabla
_{l}X_{k}\right) - \\
b_{1}u^{p}\left[ \left( \nabla _{k}X_{p}+\nabla _{p}X_{k}\right)
u_{l}+\left( \nabla _{l}X_{p}+\nabla _{p}X_{l}\right) u_{k}\right] -2\left(
a_{1}^{\prime }g_{kl}+b_{1}^{\prime }u_{k}u_{l}\right) u^{p}u^{q}\nabla
_{p}X_{q}.
\end{multline*}
\section{Appendix}
The Levi-Civita connection $\widetilde{\nabla }$ of the $g$-natural metric
$G$ on $TM$ was calculated and presented in (\cite{Abb 2005 a}, \cite{Abb
2005 b}, \cite{Abb 2005 c}). Unfortunately, those formulas contain some misprints and
omissions. Below, we present the corrected version, owing to the courtesy of
the authors of the aforementioned papers. In addition, it was checked
independently by the present author.
Moreover, observe that it is the Levi-Civita connection of a metric given by
(\ref{g1a}).
Let $T$ be a tensor field of type $(1,s)$ on $M.$ For any $
X_{1},...,X_{s}\in T_{x}M,$ $x\in M,$ we define horizontal and vertical
vectors at a point $(x,u)\in TM$ setting respectively
\begin{equation*}
h\left\{ T(X_{1},...,u,...,X_{s-1})\right\}
=\sum_{r=1}^{dimM}u^{r}[T(X_{1},...,\partial _{r},...,X_{s-1})]^{h},
\end{equation*}
\begin{equation*}
v\left\{ T(X_{1},...,u,...,X_{s-1})\right\}
=\sum_{r=1}^{dimM}u^{r}[T(X_{1},...,\partial _{r},...,X_{s-1})]^{v}.
\end{equation*}
By similar formulas we define
\begin{equation*}
h\left\{ T(X_{1},...,u,...,u,...,X_{s-1})\right\} \ \text{and}\ v\left\{
T(X_{1},...,u,...,u,...,X_{s-1})\right\} .
\end{equation*}
Moreover, we put $h\left\{ T(X_{1},...,X_{s})\right\} =\left(
T(X_{1},...,X_{s})\right) ^{h}$ and $v\left\{ T(X_{1},...,X_{s})\right\}
=\left( T(X_{1},...,X_{s})\right) ^{v}.$ Therefore $h\{X\}=X^{h}$ and $
v\{X\}=X^{v}$ (\cite{Abb 2005 a}, p. 22-23).
Finally, we write
\begin{equation*}
R(X,Y,Z)=R(X,Y)Z\ \text{and}\ R(X,Y,Z,V)=g(R(X,Y,Z),V)
\end{equation*}
for all $X,Y,Z,V\in T_{x}M.$
\begin{proposition}
\label{Connection}(\cite{Abb 2005 a}, \cite{Abb 2005 b}, \cite{Abb 2005 c})
Let $(M,g)$ be a Riemannian manifold, $\nabla $ its Levi-Civita connection
and $R$ its Riemann curvature tensor. If $G$ is a $g$-natural metric on $
TM,$ then the Levi-Civita connection $\widetilde{\nabla }$ of $(TM,G)$ at a
point $(x,u)\in TM$ is given by
\begin{eqnarray*}
\left( \widetilde{\nabla }_{X^{h}}Y^{h}\right) _{(x,u)} &=&\left( \nabla
_{X}Y\right) _{(x,u)}^{h}+h\left\{ A(u,X_{x},Y_{x})\right\} +v\left\{
B(u,X_{x},Y_{x})\right\} , \\
\left( \widetilde{\nabla }_{X^{h}}Y^{v}\right) _{(x,u)} &=&\left( \nabla
_{X}Y\right) _{(x,u)}^{v}+h\left\{ C(u,X_{x},Y_{x})\right\} +v\left\{
D(u,X_{x},Y_{x})\right\} , \\
\left( \widetilde{\nabla }_{X^{v}}Y^{h}\right) _{(x,u)} &=&h\left\{
C(u,Y_{x},X_{x})\right\} +v\left\{ D(u,Y_{x},X_{x})\right\} , \\
\left( \widetilde{\nabla }_{X^{v}}Y^{v}\right) _{(x,u)} &=&h\left\{
E(u,X_{x},Y_{x})\right\} +v\left\{ F(u,X_{x},Y_{x})\right\} ,
\end{eqnarray*}
for all vector fields $X,$ $Y$ on $M,$ where $P=a_{2}^{\prime }-\frac{b_{2}}{
2},$ $Q=a_{2}^{\prime }+\frac{b_{2}}{2}$ and
\begin{multline*}
A(u,X,Y)=-\frac{a_{1}a_{2}}{2a}\left[ R(X,u,Y)+R(Y,u,X)\right] + \\
\frac{a_{2}B}{2a}\left[ g(Y,u)X+g(X,u)Y\right] + \\
\frac{1}{aF}\QATOPD\{ . {{}}{{}}\left. a_{2}\left[ a_{1}\left(
F_{1}B-F_{2}b_{2}\right) +a_{2}\left( b_{1}a_{2}-b_{2}a_{1}\right) \right]
R(X,u,Y,u)+\right. \\
\left. \left[ aF_{2}B^{\prime }+B\left[
a_{2}(F_{2}b_{2}-F_{1}B)+A(a_{1}b_{2}-a_{2}b_{1})\right] \right]
g(X,u)g(Y,u)+\right. \\
\left. aF_{2}A^{\prime }g(X,Y)\right. \QATOPD. \} {{}}{{}}u,
\end{multline*}
\begin{multline*}
B(u,X,Y)=\frac{a_{2}^{2}}{a}R(X,u,Y)-\frac{a_{1}A}{2a}R(X,Y,u)- \\
\frac{AB}{2a}\left[ g(Y,u)X+g(X,u)Y\right] + \\
\frac{1}{aF}\QATOPD\{ .
{{}}{{}}a_{2}[a_{2}(F_{2}b_{2}-F_{1}B)+A(b_{2}a_{1}-b_{1}a_{2})]R(X,u,Y,u)+
\\
\left[ \frac{{}}{{}}\right. -a(F_{1}+F_{3})B^{\prime }+B\left[ A\left(
(F_{1}+F_{3})b_{1}-F_{2}b_{2}\right) +a_{2}\left( a_{2}B-b_{2}A\right)
\right] \left. \frac{{}}{{}}\right] g(X,u)g(Y,u) \\
-a(F_{1}+F_{3})A^{\prime }g(X,Y)\QATOPD. \} {{}}{{}}u,
\end{multline*}
\begin{multline*}
C(u,X,Y)=-\frac{a_{1}^{2}}{2a}R(Y,u,X)+\frac{a_{1}B}{2a}g(X,u)Y+ \\
\frac{1}{a}\left( a_{1}A^{\prime }-a_{2}P\right) g(Y,u)X+ \\
\frac{1}{aF}\QATOPD\{ . {{}}{{}}\frac{a_{1}}{2}\left[ a_{2}\left(
a_{2}b_{1}-a_{1}b_{2}\right) +a_{1}\left( F_{1}B-F_{2}b_{2}\right) \right]
R(X,u,Y,u)+ \\
a\left( \frac{F_{1}}{2}B+F_{2}P\right) g(X,Y)+ \\
\QATOPD[ . {{}}{{}}aF_{1}B^{\prime }+\left( A^{\prime }+\frac{B}{2}\right)
\left[ a_{2}\left( a_{1}b_{2}-a_{2}b_{1}\right) +a_{1}\left(
F_{2}b_{2}-BF_{1}\right) \right] + \\
P\left[ a_{2}\left( b_{1}\left( F_{1}+F_{3}\right) -b_{2}F_{2}\right)
-a_{1}\left( b_{2}A-a_{2}B\right) \right] \QATOPD. ]
{{}}{{}}g(X,u)g(Y,u)\QATOPD. \} {{}}{{}}u,
\end{multline*}
\begin{multline*}
D(u,X,Y)=\frac{1}{a}\left\{ \QATOP{{}}{{}}\frac{a_{1}a_{2}}{2}R(Y,u,X)-\frac{
a_{2}B}{2}g(X,u)Y\right. + \\
\left. \left( AP-a_{2}A^{\prime })g(Y,u)X\right) \QATOP{{}}{{}}\right\} + \\
\frac{1}{aF}\left\{ \frac{a_{1}}{2}\left[
A(a_{1}b_{2}-a_{2}b_{1})+a_{2}(F_{2}b_{2}-F_{1}B)\right] R(X,u,Y,u)\right. -
\\
a\left[ \frac{F_{2}}{2}B+(F_{1}+F_{3})P\right] g(X,Y)+ \\
\left[ \QATOP{{}}{{}}-aF_{2}B^{\prime }+\left( A^{\prime }+\frac{B}{2}
\right) \left[ A(a_{2}b_{1}-a_{1}b_{2})+a_{2}(F_{1}B-F_{2}b_{2})\right]
+\right. \\
\left. \left. P\left[ A(b_{2}F_{2}-b_{1}(F_{1}+F_{3}))+a_{2}(b_{2}A-a_{2}B)
\right] \QATOP{{}}{{}}\right] g(X,u)g(Y,u)\right\} u,
\end{multline*}
\begin{multline*}
E(u,X,Y)=\frac{1}{a}\left( a_{1}Q-a_{2}a_{1}^{\prime }\right) \left[
g(X,u)Y+g(Y,u)X\right] + \\
\frac{1}{aF}\left\{ \QATOP{{}}{{}}a\left[ F_{1}b_{2}-F_{2}(b_{1}-a_{1}^{
\prime })\right] g(X,Y)+\right. \\
\left[ \QATOP{{}}{{}}a(2F_{1}b_{2}^{\prime }-F_{2}b_{1}^{\prime
})+2a_{1}^{\prime }\left[
a_{1}(a_{2}B-b_{2}A)+a_{2}(b_{1}(F_{1}+F_{3})-b_{2}F_{2})\right] +\right. \\
\left. \left. 2Q\left[ a_{1}(F_{2}b_{2}-F_{1}B)+a_{2}(a_{1}b_{2}-a_{2}b_{1})
\right] \QATOP{{}}{{}}\right] g(X,u)g(Y,u)\QATOP{{}}{{}}\right\} u,
\end{multline*}
\begin{multline*}
F(u,X,Y)=\frac{1}{a}\left( Aa_{1}^{\prime }-a_{2}Q\right) \left[
g(X,u)Y+g(Y,u)X\right] + \\
\frac{1}{aF}\left\{ \QATOP{{}}{{}}a\left[ (F_{1}+F_{3})(b_{1}-a_{1}^{\prime
})-F_{2}b_{2}\right] g(X,Y)+\right. \\
\left[ \QATOP{{}}{{}}a((F_{1}+F_{3})b_{1}^{\prime }-2F_{2}b_{2}^{\prime
})+\right. 2a_{1}^{\prime }\left[
a_{2}(b_{2}A-a_{2}B)+A(b_{2}F_{2}-b_{1}(F_{1}+F_{3}))\right] + \\
\left. \left. 2Q\left[ a_{2}(F_{1}B-F_{2}b_{2})+A(a_{2}b_{1}-a_{1}b_{2})
\right] \QATOP{{}}{{}}\right] g(X,u)g(Y,u)\right\} u.
\end{multline*}
\end{proposition}
A $(0,4)$ tensor $B$ on a manifold $M$ is said to be a generalized curvature
tensor if
\begin{equation*}
B(V,X,Y,Z)+B(V,Y,Z,X)+B(V,Z,X,Y)=0
\end{equation*}
and
\begin{equation*}
B(V,X,Y,Z)=-B(X,V,Y,Z),\qquad B(V,X,Y,Z)=B(Y,Z,V,X)
\end{equation*}
for all vector fields $V,$ $X,$ $Y,$ $Z$ on $M$ (\cite{Nomizu}).
For a $(0,k)$ tensor $T,$ $k\geq 1,$ we define
\begin{equation*}
(R\cdot T)(X_{1},...,X_{k};X,Y)=\nabla _{Y}\nabla
_{X}T(X_{1},...,X_{k})-\nabla _{X}\nabla _{Y}T(X_{1},...,X_{k}).
\end{equation*}
For more details see for example (\cite{Belkhelfa}), (\cite{Ewert}).
The Kulkarni-Nomizu product of symmetric $(0,2)$ tensors $A$ and $B$ is
given by
\begin{multline*}
\left( A\wedge B\right) (U,X,Y,V)= \\
A(X,Y)B(U,V)-A(X,V)B(U,Y)+A(U,V)B(X,Y)-A(U,Y)B(X,V).
\end{multline*}
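As a quick illustration of this convention (a routine computation, recorded
only for the reader's convenience): taking both factors equal to $g$ gives
\begin{equation*}
\left( g\wedge g\right) (U,X,Y,V)=2\left[ g(X,Y)g(U,V)-g(U,Y)g(X,V)\right] ,
\end{equation*}
the curvature-type tensor built from the metric alone; this is the tensor
occurring in the combinations $a_{1}R+\frac{B}{2}g\wedge g$ and $B-\frac{TrB}{
2n(n-1)}g\wedge g$ used in this paper.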
\begin{theorem}
\label{Apen1}\cite{Grycak} Let $(M,g)$ be a semi-Riemannian manifold with
metric $g$, $dimM>2.$ For a vector field $X,$ let $g_{X}$ be the 1-form associated to $X$ by $g,$ i.e. $
g_{X}(Y)=g(Y,X)$ for any vector field $Y.$
If $B$ is a generalized curvature tensor having the property $R\cdot B=0$ and $
P$ is a one-form on $M$ satisfying
\begin{equation}
\left( R\cdot V\right) \left( X;Y,Z\right) =\left( P\wedge g_{X}\right)
\left( Y,Z\right) , \label{Ap1}
\end{equation}
for some 1-form $V,$ then
\begin{equation*}
P\left( B-\frac{TrB}{2n(n-1)}g\wedge g\right) =0.
\end{equation*}
If $A$ is a symmetric $(0,2)$-tensor on $M$ having the properties $R\cdot
A=0$ and (\ref{Ap1}), then
\begin{equation*}
P\left( A-\frac{TrA}{n}g\right) =0.
\end{equation*}
\end{theorem}
\begin{lemma}
\cite{Walker}\label{Apen2} Let $A_{l},$ $B_{hk}$ where $l,h,k=1,...,n$ be
numbers satisfying
\begin{equation*}
B_{hk}=B_{kh},\quad A_{l}B_{hk}+A_{h}B_{kl}+A_{k}B_{lh}=0.
\end{equation*}
Then either $A_{l}=0$ for all $l$ or $B_{hk}=0$ for all $h,k.$
\end{lemma}
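Although we use this lemma as a black box, its proof is short and we sketch
the standard argument for the reader's convenience. Suppose some $A_{l}\neq
0;$ after renumbering we may assume $A_{1}\neq 0.$ Setting $l=h=k=1$ gives $
3A_{1}B_{11}=0,$ so $B_{11}=0.$ Setting $h=k=1$ then gives
\begin{equation*}
A_{l}B_{11}+2A_{1}B_{1l}=2A_{1}B_{1l}=0,
\end{equation*}
so $B_{1l}=0$ for every $l.$ Finally, setting $l=1$ in the identity yields $
A_{1}B_{hk}+A_{h}B_{k1}+A_{k}B_{1h}=A_{1}B_{hk}=0,$ whence $B_{hk}=0$ for
all $h,k.$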
\begin{lemma}
\label{Apen3}Let $(0,2)$ tensors $A,$ $B,$ $F$ on a manifold $(M,g),$ $dimM>2,$
satisfy the condition
\begin{multline*}
g(X,Y)F(U,V)+g(U,V)B(X,Y)+ \\
g(Y,V)A(X,U)+g(X,V)A(Y,U)+g(Y,U)A(X,V)+g(X,U)A(Y,V)=0
\end{multline*}
for arbitrary vectors $X,Y,U,V.$
Then $F$ and $B$ are symmetric. Moreover, $A=0$, $B+F=0$ and $
nF-TrFg=nB-TrBg=0.$
\end{lemma}
\begin{proof}
In local coordinates $(U,(x^{a}))$ the condition reads
\begin{equation*}
g_{kl}F_{mn}+g_{mn}B_{kl}+g_{ln}A_{km}+g_{kn}A_{lm}+g_{lm}A_{kn}+g_{km}A_{ln}=0.
\end{equation*}
By contractions with $g^{kl},$ $g^{mn},$ $g^{km}$ we obtain in turn
\begin{eqnarray*}
2(A_{mn}+A_{nm})+nF_{mn}+B_{p}^{p}g_{mn} &=&0, \\
2(A_{kl}+A_{lk})+nB_{kl}+F_{p}^{p}g_{kl} &=&0, \\
(n+2)A_{ln}+B_{nl}+F_{ln}+A_{p}^{p}g_{ln} &=&0.
\end{eqnarray*}
Now, the symmetry of $F$ and $B$ results from the first two equations.
Contracting the first equation with $g^{mn}$ and the third one with $g^{ln}$
we get
\begin{eqnarray*}
4A_{p}^{p}+n(B_{p}^{p}+F_{p}^{p}) &=&0, \\
2(n+1)A_{p}^{p}+B_{p}^{p}+F_{p}^{p} &=&0,
\end{eqnarray*}
whence $TrA=TrF+TrB=0$ results. Applying these to the first system we easily
get
\begin{eqnarray*}
4(A_{mn}+A_{nm})+n(F_{mn}+B_{mn}) &=&0, \\
(n+2)(A_{mn}+A_{nm})+2(F_{mn}+B_{mn}) &=&0,
\end{eqnarray*}
whence $F+B=0$ and $A_{mn}+A_{nm}=0.$ Now the third equation yields $A=0.$
The further statements are obvious.
\end{proof}
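As a simple illustration of the lemma (not needed in the sequel; here $\lambda
$ denotes an arbitrary function, a notation used only in this remark), the
triple
\begin{equation*}
A=0,\qquad B=\lambda g,\qquad F=-\lambda g
\end{equation*}
satisfies the displayed condition as well as all the conclusions, which shows
that $B$ and $F$ need not vanish separately.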
Stanis\l aw Ewert-Krzemieniewski
West Pomeranian University of Technology Szczecin
School of Mathematics
Al. Piast\'{o}w 17
70-310 Szczecin
Poland
e-mail: [email protected]
\end{document}
\begin{document}
\title[Surgery Presentations for Coloured Knots]
{Surgery Presentations for Knots Coloured by Metabelian Groups}
\author{Daniel Moskovich}
\address{Department of Mathematics, University of Toronto, 40 St. George Street, Toronto, Ontario, Canada M5S 2E4}
\email{[email protected]}
\urladdr{http://www.sumamathematica.com/}
\thanks{The author would like to thank Tomotada Ohtsuki, Kazuo Habiro, Andrew Kricker, Julius Shaneson, Alexander Stoimenow, and Najmuddin Fakhruddin for helpful discussions, and also Charles Livingston, Kent Orr, Stefan Friedl and Steven Wallace for useful comments and for pointing out references. The bulk of this work was done with the support of a JSPS Research Fellowship for Young Scientists.}
\date{28th of December, 2010}
\subjclass{57M12, 57M25}
\begin{abstract}
A $G$--coloured knot $(K,\rho)$ is a knot $K$ together with a representation $\rho$ of its knot group onto $G$. Two $G$--coloured knots are said to be $\rho$--equivalent if they are related by surgery around $\pm1$--framed unknots in the kernels of their colourings. The induced local move is a $G$--coloured analogue of the crossing change. For certain families of metabelian groups $G$, we classify $G$--coloured knots up to $\rho$--equivalence. Our method involves passing to a problem about $G$--coloured analogues of Seifert matrices.
\end{abstract}
\maketitle
\section{Introduction}\label{S:Intro}
\subsection{Preamble}\label{SS:Results}
One of the fundamental facts in knot theory is that any knot can be untied by crossing changes, and that crossing changes are realized by surgery around $\pm1$--framed unknots. For $G$--coloured knots, where $G$ is a group, twist moves as in Figure \ref{F:FRMove} take the place of crossing changes, and these are realized by surgery around $\pm1$--framed unknots in the kernel of the $G$--colouring. Two $G$--coloured knots are said to be \emph{$\rho$--equivalent} if they are related, up to ambient isotopy, by a sequence of twist moves. How many $\rho$--equivalence classes of $G$--coloured knots are there? What distinguishes one from another?\par
In \cite{KM09}, Kricker and I considered the case of $G$ a dihedral group $D_{2n}=\mathcal{C}_2\ltimes\mathds{Z}/n\mathds{Z}$. We proved that the number of $\rho$--equivalence classes of $D_{2n}$-coloured knots is $n$. These are told apart by the \emph{coloured untying invariant}, an algebraic invariant of $\rho$--equivalence classes defined in terms of \emph{surface data} (see \cite{Mos06b}). Surface data is the analogue for a $G$--coloured knot of a Seifert matrix. Our proof was constructive, in the sense that it provided an explicit sequence of twist moves to relate each $D_{2n}$-coloured knot to a chosen representative of its $\rho$--equivalence class.\par
The purpose of this work is to expand the above result to knots coloured by a wider class of metabelian groups $G=\mathcal{C}_m\ltimes A$. We show that the results of \cite[Section 4]{KM09} extend to $G$--coloured knots for most metacyclic groups (Theorem \ref{T:metacyclic}), and for certain classes of metabelian groups with $\Rank(A)=2$ (Theorem \ref{T:R2M3Diag} and Theorem \ref{T:R2M3Non}). In particular, we classify $A_4$-coloured knots up to $\rho$--equivalence (Theorem \ref{T:A4Theorem}). In all cases, `the only obstruction to $\rho$--equivalence is the obvious one'. The obstruction to carrying out the same computations for metabelian groups with $\Rank(A)>2$ is identified by Theorem \ref{T:clasperprop}.\par
The starring role is played by the surface data. For a $G$--coloured knot, the surface data determines the $G$--colouring; moreover, the $S$--equivalence relation on Seifert matrices induces an $S$--equivalence relation on surface data (Section \ref{SS:S-equiv-matrix}). The relevant equivalence relation on $G$--coloured knots becomes \emph{$\bar\rho$--equivalence}, induced by a special kind of twist move called the \emph{null-twist} (Figure \ref{F:HosteMove}). To classify $G$--coloured knots up to $\rho$--equivalence, we first classify them up to $\bar\rho$--equivalence. When $\Rank(A)\leq 2$, two $G$-coloured knots with $S$--equivalent surface data must be $\bar\rho$--equivalent and therefore $\rho$--equivalent (Theorem \ref{T:clasperprop}). Thus, $\bar\rho$--equivalence classes are distinguished by invariants coming from surface data, which in turn have explicit linear algebraic formulae. Two such invariants are the \emph{surface untying invariant} (Section \ref{SS:su}) and the \emph{$S$--equivalence class of the colouring} (Section \ref{SS:sequiv}). To go further and to distinguish $\rho$--equivalence classes, we use the \emph{coloured untying invariant} (Section \ref{SS:cu}), also given in terms of surface data. To distinguish $\bar\rho$--equivalence classes when $\Rank{A}>2$, surface data alone turns out to be insufficient, and we must take into account also triple-linkage between bands (Section \ref{S:ClasperProof}).
\begin{figure}
\caption{The twist move.}
\label{F:FRMove}
\end{figure}
\subsection{Technical Summary}\label{SS:Method}
Let $G= \mathcal{C}_m\ltimes_\phi A$ be a fixed metabelian group, where $\mathcal{C}_m=\left\langle\left.\rule{0pt}{9pt}t\thinspace\right|t^m=1\right\rangle$ is a cyclic group, and $A$ is a finitely generated abelian group. A \emph{$G$--coloured knot} is a pair $(K,\rho)$ of an oriented knot with basepoint $K\colon\thinspace S^1\hookrightarrow S^3$, together with a surjective homomorphism $\rho$ of the knot group of $K$ onto $G$. Such $G$--coloured knots were previously studied by Hartley \cite{Har79}. Two $G$--coloured knots are said to be \emph{$\rho$--equivalent} if they are related up to ambient isotopy by a finite sequence of twist moves. We bound the number of $\rho$--equivalence classes from above and from below. In favourable cases these bounds agree. In Section \ref{S:r12}, we classify $G$--coloured knots up to $\rho$--equivalence in all such favourable cases, when the rank of $A$ is at most $2$.\par
A key idea is to introduce various weaker equivalence relations. The $G$--colouring $\rho$ induces:
\begin{itemize}
\item An $A$--colouring $\bar\rho$ of a Seifert surface exterior $E(F)$.
\item For $\tilde{G}= \mathcal{C}_0\ltimes_\phi A$, a $\tilde{G}$--colouring $\hat\rho$ of $K$.
\item An $A$--colouring $\tilde\rho$ of the $m$--fold branched cyclic cover $C_m(K)$.
\end{itemize}
Each of these colourings in turn induces an equivalence relation on $G$--coloured knots, which we call $\bar\rho$--equivalence, $\hat\rho$--equivalence, and $\tilde\rho$--equivalence correspondingly. Chief among these is $\bar\rho$--equivalence. Two (rigid) knots are \emph{tube equivalent} if they possess tube equivalent Seifert surfaces (Definition \ref{D:tubequiv}). Two $G$--coloured knots are $\bar\rho$--equivalent if they are related up to tube equivalence by null-twists (see Figure \ref{F:HosteMove}). As $\bar\rho$--equivalence is defined with respect to a colouring of a Seifert surface by an abelian group, its study is amenable to linear algebraic techniques. Our main effort is to classify $G$--coloured knots up to $\bar\rho$--equivalence. Such a classification leads to a classification of $G$--coloured knots up to $\rho$--equivalence if either all of the equivalence relations happen to coincide (as is the case for some metabelian groups in Section \ref{S:r12}), or if $G$ is simple enough that the remaining work can be done by hand (as for the case $G=A_4$ in Section \ref{S:A4}).
\begin{rem}
In a different context, the twist move is called the Fenn--Rourke move, and the null-twist is called the Hoste move (see \textit{e.g.} \cite{Hab06}).
\end{rem}
Both a twist move and a null-twist come from integral Dehn surgery, and the trace of such a surgery is a special kind of bordism (Proposition \ref{P:bordbarrho}). Therefore the order of the appropriately defined bordism group gives an upper bound on the number of possible $\rho$--equivalence classes of $G$--coloured knots. This upper bound was studied by Litherland and Wallace \cite{LiWal08} following work of Cochran, Gerges, and Orr \cite{CGO01}. Their result was that the number of $\rho$--equivalence classes of $G$--coloured knots is bounded above by the product of orders of certain homology groups. We tighten this upper bound by considering instead the $\bar\rho$--equivalence relation. We find that the order of $H_3(A;\mathds{Z})$ is an upper bound for the number of $\bar\rho$--equivalence classes (Corollary \ref{C:Wallacebound}).\par
\begin{figure}
\caption{The null-twist.}
\label{F:HosteMove}
\end{figure}
For lower bound calculations, the goal is to compile the longest possible list of non--$\rho$--equivalent $G$--coloured knots. Recall \cite[Definition 3]{KM09}.
\begin{defn}
A \emph{complete set of base-knots} for a group $G$ is a set $\Psi$ of $G$--coloured knots $(K_i,\rho_i)$, no two of which are $\rho$--equivalent, such that any $G$--coloured knot $(K,\rho)$ is $\rho$--equivalent to some $(K_i,\rho_i)\in\Psi$. An element of $\Psi$ is called a \emph{base-knot} (the term imitates `base-point').
\end{defn}
We remark that for the applications outlined in Section \ref{SS:Motivation}, base-knots should be chosen to be as ``nice'' as possible, in that they should be unknotting number $1$ knots whose irregular $G$--covers we know how to present explicitly.\par
The method of this paper consists of transforming the geometric-topology problem of finding a complete set of base-knots into a problem in linear algebra over a commutative ring, and then solving that problem for the relevant commutative rings. I arrived at this approach by thinking hard about the band-sliding algorithm in \cite[Section 4]{KM09} until I understood the underlying algebraic mechanism that makes it work.\par
Choose a Seifert surface $F$ for $K$ and a basis $x_1,\ldots,x_{2g}$ for $H_1(F)$, which induces an associated basis $\xi_1,\ldots,\xi_{2g}$ for $H_1(E(F))$. The $G$--colouring $\rho$ restricts to an $A$--colouring $\bar\rho\colon\thinspace H_1(E(F))\to A$ (Section \ref{SS:ASeif}). We obtain a Seifert matrix $M$ for $K$ and a \emph{colouring vector} $V\in A^{2g}$, whose entries are the $\bar\rho$--images of the $\xi_i$'s. Such a pair $(M,V)$ is called \emph{surface data} for $(K,\rho)$. Surface data is the analogue for $G$--coloured knots of a Seifert matrix (Section \ref{S:surfacedata}). In particular, it makes sense to discuss \emph{$S$--equivalence} of surface data (Section \ref{SS:S-equiv-matrix}); and moreover, when $\Rank(A)\leq 2$, $S$--equivalence of surface data implies $\bar\rho$--equivalence of $G$--coloured knots (Theorem \ref{T:clasperprop}). The implication is that rather than working with twist-moves on $G$--coloured knots, we may instead work with the induced equivalence relation on surface data. Matrices are simpler mathematical objects than knots, and for `simple enough' groups $G$ the induced problem solves itself.\par
To distinguish between $\bar\rho$--equivalence classes, we identify two $\bar\rho$--equivalence invariants coming from the surface data. The first of these, given in Section \ref{SS:su}, is an element of $A$ which is a version of the coloured untying invariant of \cite[Section 6]{Mos06b}, which we call the \emph{surface untying invariant}. It may be interpreted as a linking number of push-offs of curves naturally associated to the map $\bar\rho$. The second, which we call the \emph{$S$--equivalence class of the colouring}, is an element of $A\wedge A$ coming from the $S$--equivalence class of the surface data. These two invariants suffice to distinguish the base-knots presented in Sections \ref{S:r12} and \ref{S:A4} up to $\bar\rho$--equivalence. An extension of the coloured untying invariant (Section \ref{SS:cu}) is then used to distinguish these base-knots up to $\rho$--equivalence.\par
For a metacyclic group for which $2(\phi^{-3}-\mathrm{id})$ is invertible, two $G$--coloured knots are $\bar\rho$--equivalent if and only if they are $\rho$--equivalent, thus no extra work is required. Conversely, for $G=A_4$ the group of symmetries of an oriented tetrahedron, two $G$--coloured knots may even be ambient isotopic without being $\bar\rho$--equivalent! For this group, which is the smallest metabelian group with $\Rank(A)>1$ and is also a finite subgroup of $SO(3)$ and therefore interesting, we conclude the paper by showing `by hand' that the lower bound is sharp, \textit{i.e.} that the coloured untying invariant is a complete invariant of $\rho$--equivalence classes for $A_4$-coloured knots.\par
When $\Rank(A)>2$, an additional $\bigwedge^3 A$--valued obstruction to $\bar\rho$--equivalence emerges from triple-linkage between bands of the Seifert surface. This obstruction, which we call the \emph{$Y$--obstruction}, is the topic of Section \ref{S:ClasperProof}, where in Theorem \ref{T:clasperprop} we prove that two $S$--equivalent knots are $\bar\rho$--equivalent if and only if their $Y$--obstruction vanishes. Triple-linkage between bands detects information one step below the Alexander module in the derived series of the knot group \cite{Tur83,Tur84}.\par
The moral is that $\bar\rho$--equivalence is a useful equivalence relation to consider on $G$--coloured knots, because of its relationship to $S$--equivalence, and the fact that it is generated by a local move. Conceptually, it is a similar idea to null--equivalence \cite{GR04} and to $H_1$-bordism \cite{CM00}.\par
With $\Lk=0$ and \sout{Inn} as short-hands for ``admit only null-twists'' and ``admit only tube equivalence'', the following diagram summarizes the equivalence relations which this paper considers, and how they relate to one another.
\begin{equation}\label{E:equivrels}
\psfrag{a}[r]{$\rho$--equivalence}\psfrag{b}[c]{$\hat\rho$--equivalence}\psfrag{c}[c]{$\tilde\rho$--equivalence}\psfrag{d}[l]{$\bar\rho$--equivalence}
\psfrag{r}[c]{\footnotesize$\Lk=0$}\psfrag{s}[c]{\footnotesize \sout{Inn}}\psfrag{t}[c]{\footnotesize \sout{Inn}}\psfrag{u}[c]{\footnotesize$\Lk=0$}
\psfrag{1}[c]{$cu$,$s$}\psfrag{2}[c]{$\Omega$}\psfrag{3}[c]{$su$}
\begin{minipage}{300pt}\centering\includegraphics[width=260pt]{equivrels}\end{minipage}
\end{equation}
Had we used \emph{equivariant} homology and bordism, with respect to the action of $\mathcal{C}_m$ on $A$, we could have pushed the bordism upper bound $\Omega$, the surface untying invariant $\mathrm{su}$, and the $S$--equivalence class $s$ of the colouring, all `one step to the left', so as to try to classify $G$--coloured knots up to $\hat\rho$--equivalence.
\subsection{My motivation for studying $\rho$--equivalence}\label{SS:Motivation}
My motivation for studying $\rho$--equivalence is to construct quantum topological invariants associated to formal perturbative expansions around \emph{non-trivial} flat connections. Building on the results in this paper, I plan to mimic Garoufalidis and Kricker's construction of a rational Kontsevich invariant of a knot \cite{GK04} in the $G$--coloured setting. The $1$--loop part of the Garoufalidis--Kricker theory determines the Alexander polynomial, while the $2$--loop part contains the Casson invariant of cyclic branched coverings of a knot. Studying $G$--coloured analogues of the rational Kontsevich invariant might provide an avenue to attack the Volume Conjecture, by interpreting hyperbolic volume as $L^2$-torsion \cite[Theorem 4.3]{Lu02}, which has a formula in terms of Jacobians of the Fox matrix \cite[Theorem 4.9]{Lu02} and which should be closely related to the $1$--loop parts of our prospective invariants. This would seem to me to be a natural perturbative approach to proving conjectures about semiclassical limits of quantum invariants, because in physics the fundamental object is Witten's invariant rather than the LMO invariant--- the path integral over \emph{all} $SU(2)$--connections, as opposed to its perturbative expansion close to the trivial $SU(2)$--connection.\par
The LMO invariant and the rational Kontsevich invariant are built out of a surgery presentation for a knot, in the complement of a standard unknot (see \textit{e.g.} \cite[Chapter 10]{Oht02}). The analogue for $G$--coloured knots is a surgery presentation in the complement of a base-knot and in the kernel of its colouring. We will show in future work that, for sufficiently nice base-knots (the complete sets of base-knots in this paper are indeed `sufficiently nice'), a Kirby theorem-like result holds for such presentations, allowing us to prove invariance for quantum invariants coming from surgery. Thus, such surgery presentations provide a solid foundation on which to construct $G$--coloured rational Kontsevich invariants.\par
Invariants of $G$--coloured knots have proven useful in knot theory in that they detect information beyond $\pi/\pi^{\prime\prime}$. Classically, Reidemeister used the linking matrix of a knot's dihedral covering link to distinguish knots with the same Alexander polynomial (\cite{Rei29}, see also \textit{e.g.} \cite{Per74}). More recently, twisted Alexander polynomials have been receiving a lot of attention, particularly in the context of knot concordance (see \textit{e.g.} \cite{FV10}). For the groups in question, I hope and expect that these will be related to the ``$1$--loop part'' of the theory, which might lead in the direction of the Volume Conjecture. On the next level, Cappell and Shaneson \cite{CS75,CS84} found a formula for the Rokhlin invariant of a dihedral branched covering space, which provides an obstruction to a knot being ribbon. Presumably this will be related to the ``$2$--loop part'' of the theory.\par
An unrelated motivation is the study of faithful $G$--actions on a closed oriented connected smooth $3$--manifold $M$ by diffeomorphisms. The question is whether there exists a bordism $W$ and a handle decomposition of $W$ as $M_G\times I$ with $2$--handles attached, for some fixed standard $3$--manifold $M_G$, such that the $G$--action on $M$ extends to a smooth faithful $G$--action on $W$. If $G$ happens to be a finite subgroup of $SO(3)$, this is equivalent to the existence of a surgery presentation $L\subset S^3$ for $M$ which is invariant under the standard action of $G$ on $S^3$. This would imply that an invariant of $3$--manifolds which admits a surgery presentation must take on some symmetric form for such manifolds, as discussed by Przytycki and Sokolov \cite{PrzS01}. This was proven for cyclic groups in \cite{Sak01} following \cite{PrzS01}, and for free actions of dihedral groups in \cite{KM09}. In the same vein, the results of this paper will be used, in future work, to prove the above claim also for certain $A_4$ actions.
\subsection{Comparison with the literature}\label{SS:Compare}
The results of this paper generalize the results of my joint paper with Andrew Kricker \cite[Section 4]{KM09}, based in turn on \cite{Mos06b}, to a wider class of metabelian groups. The main innovation in our methodology is that \cite{KM09} works with knot diagrams, while we work with surface data.\par
Our bordism argument is based on \cite{LiWal08} and on Steven Wallace's thesis \cite{Wal08}.\par
The results of this paper imply that, for certain metabelian groups $G$, any $G$--coloured knot $(K,\rho)$ has a surgery presentation in the complement of a base-knot for any of our complete sets of base-knots, and that the components of that surgery presentation lie in $\ker\rho$. Such a surgery presentation of $(K,\rho)$ may be lifted to a surgery presentation of irregular covering spaces associated to $(K,\rho)$, containing embedded covering links. This construction was carried out for $D_{2n}$-coloured knots in \cite{KM09}. For the groups we consider, we defer the explicit construction of such surgery presentations to future work.\par
If our base-knots all have unknotting number $1$ then we can prove a Kirby Theorem-like result for surgery presentations of $(K,\rho)$, which we can then use to construct new invariants of $G$--coloured knots and of their covering spaces and covering links. Thus, our approach is well-suited to \emph{constructing} invariants. On the other hand, if we wanted to \emph{calculate} known invariants, then generalizing the surgery presentations of David Schorow's thesis \cite{SchorowPhD}, based on the explicit bordism constructed by Cappell and Shaneson \cite{CS84}, looks promising to me. His surgery presentation is constructed directly from a $G$--coloured knot diagram, without first having to reduce it to a base-knot by twist moves.
\subsection{Why this generality?}\label{SS:Generality}
In this paper, $\rho$--equivalence is studied by applying linear algebra to surface data. In particular, we need a Seifert surface in order to define surface data. The widest class of topological objects with Seifert matrices is homology boundary links in integral homology spheres \cite{Ko87}. With effort, the results of this paper should extend to that setting.\par
The methods in this paper are largely linear algebraic, and linear algebra can only be performed over a commutative ring. For $G$ metabelian, a $G$--colouring of a knot $(K,\rho)$ induces an $A$--colouring $\bar\rho$ of a Seifert surface complement, which allows us to encode $\rho$ as a colouring vector. If $G$ were not metabelian, the colouring would no longer correspond to a vector, and we would need more than linear algebra to bound from below the number of $\bar\rho$--equivalence classes.\par
If $A$ were not finitely generated, then $\bar\rho$ would not be surjective, and the arguments of Section \ref{S:ClasperProof} and of Section \ref{S:untyinginvariants} would fail.
\subsection{Contents of this paper}\label{SS:contents}
In Section \ref{S:Prelims} we recall the concept of a $G$--coloured knot and we establish conventions and notation. In Section \ref{S:surfacedata} we define surface data and prove that it satisfies analogous properties to the Seifert matrix. In particular, it admits an $S$--equivalence relation. In Section \ref{S:rhoequiv} we define the various flavours of $\rho$--equivalence, and show their relation with relative bordism and how they are generated by local moves. In Section \ref{S:ClasperProof} we prove Theorem \ref{T:clasperprop}, relating $S$--equivalence with $\bar\rho$--equivalence. In Section \ref{S:untyinginvariants} we identify invariants of $\rho$--equivalence classes and of $\bar\rho$--equivalence classes in terms of homology and surface data. In Section \ref{S:r12} we apply the results of the previous sections, matching upper and lower bounds, to classify $G$--coloured knots up to $\bar\rho$--equivalence and up to $\rho$--equivalence, for families of metabelian groups with $\Rank(A)\leq 2$. In Section \ref{S:A4} we go beyond the algebraic techniques of earlier sections, and beginning from the $\bar\rho$--equivalence classification of $A_4$-coloured knots, we work `by hand' to classify $A_4$-coloured knots up to $\rho$--equivalence. The paper concludes by listing some open problems in Section \ref{S:conclusion}.
\section{Preliminaries}\label{S:Prelims}
\subsection{The metabelian group $G$}\label{SS:metagroup}
A metabelian homomorph $G$ of a knot group is finitely generated, of weight one \cite{GA75,Joh80}, and is isomorphic to a semi-direct product $\mathcal{C}_m\ltimes_\phi A$ where $\mathcal{C}_m=\left\langle\left.\rule{0pt}{9pt}t\thinspace\right|t^m=1\right\rangle$ is a (possibly infinite) cyclic group, and $A$ is a finitely generated abelian group. The above notation means that the conjugation action of $\mathcal{C}_m$ on $A$ is $t^{-1}at=\phi(a)$. Write $A$ additively, and write conjugation by $t$ as left multiplication, using a dot, while we don't write the dot for multiplication in $G$, so that $t\cdot a$ stands for $t^{-1}at$.
\begin{example}
Dihedral groups are metabelian homomorphs of knot groups. They have presentation
\[D_{2n}\ass\ \left\langle t,s\left|\rule{0pt}{9.5pt}\ t^{2}=s^{n}=1,\ tst=s^{-1}\right.\right\rangle.\]
\end{example}
\begin{example}
The alternating group of order $4$ is another metabelian homomorph of knot groups, with presentation
\[A_4\ass \ \left\langle t,s_1,s_2\left|\rule{0pt}{9.5pt}\ t^3=s_1^2=s_2^2=1,\ t^2s_1t=s_2,\ t^2s_2t=s_1s_2\right.\right\rangle.\]
\end{example}
\subsection{$G$--coloured knots}\label{SS:ColKnots}
We adopt conventions that facilitate concrete discussion. None of our results depend essentially on these conventions.\par
In this paper, every $n$--sphere comes equipped with a fixed parametrization
\[\left\{(x_1,\ldots,x_{n+1})\in\mathds{R}^{n+1}\left|\rule{0pt}{9.5pt}\ x_1^2+\cdots+x_{n+1}^2=1\right.\right\}\to S^n\]
and each disk with a fixed parametrization $[-1,1]^{\times n}\to D^n$.\par
A knot is an embedding $K\colon\thinspace S^1\hookrightarrow S^3$ together with the orientation induced by the counter-clockwise orientation of $S^1$, and a basepoint $K|_{(0,1)}$. We parameterize a tubular neighbourhood of a knot $K$ as $N(K)\colon\thinspace D^2\times S^1\hookrightarrow S^3$ such that
$N(K)\left(\{(0,0)\}\times \{(x,y)\}\right)= K(x,y)$, and $\Link(K,\ell)=0$, where $\ell$ denotes $N(K)\left(\{(1,1)\}\times
S^1\right)$. Thus $K$ comes equipped with a distinguished meridian $\mu\ass N(K)\left(\partial D^2\times \{(0,1)\}\right)$ and with a canonical
longitude $\ell$.\par
The \emph{knot group} is $\pi\simeq \pi_1 E(K)$. A \emph{$G$--coloured knot} is a knot $K\subset S^3$ together with a surjective homomorphism $\rho\colon\thinspace\pi\twoheadrightarrow G$. We draw $G$--coloured knots by labeling arcs in a knot diagram by $\rho$--images of corresponding Wirtinger generators.\par
Because Wirtinger generators of a knot are all related by conjugation, they all map to elements of the same coset $t^a A$, where $a\neq 0$ because $\rho$ is surjective. By convention, set $a$ to be $1$, so that all Wirtinger generators map to elements of $t A$.
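As an illustrative aside (a direct computation in $G$, recorded here only for orientation and used nowhere in the sequel), conjugation of one element of $tA$ by another stays inside $tA$: writing $A$ additively as above,
\begin{equation*}
(tb)(ta)(tb)^{-1}=t\left(b+\phi^{-1}(a-b)\right)\qquad\text{and}\qquad (tb)^{-1}(ta)(tb)=t\left(b+\phi(a-b)\right)
\end{equation*}
for all $a,b\in A$. These are the two forms which a Wirtinger relation can take once every arc label has been normalized to lie in the coset $tA$.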
\begin{rem}
Our coloured knots are called \emph{based coloured knots} in \cite{LiWal08}.
\end{rem}
\begin{lem}\label{L:inneriso}
Consider $G$--colourings $\rho_{1,2}\colon\thinspace \pi\twoheadrightarrow G$ of a knot $K$. If there exists an inner automorphism $\psi$ of $G$ such that $\rho_1(x)=\psi(\rho_2(x))$ for all $x\in \pi$, then $(K,\rho_{1,2})$ are ambient isotopic.
\end{lem}
\begin{proof}
We summarize the argument in \cite[Page 678]{Mos06b} and \cite[Lemma 14]{KM09}. Because $\pi$ is normally generated by $\mu$, the group $G$ is normally generated by $\rho(\mu)$, so conjugation by any $g\in G$ corresponds to some composition of conjugations by labels of arcs of some knot diagram $D$ for $K$. For each such arc $\alpha$ in turn, create a kink in $\alpha$ by a Reidemeister \textrm{I} move, shrink the rest of the knot to lie inside a small ball, drag the knot through the kink (the effect is to conjugate the labels of all arcs in $D$ by the label of $\alpha$), and get rid of the kink by another Reidemeister \textrm{I} move. This sequence of Reidemeister moves brings us back to $D$, and its combined effect will have been to realize the action of $\psi$ on $\rho_1$ by ambient isotopy.
\end{proof}
\begin{example}\label{Ex:cyclic}
The degenerate case of a $G$--coloured knot is a $\mathcal{C}_n$-coloured knot. Any knot is canonically $\mathcal{C}_n$-coloured by the mod $n$ linking pairing, which with our conventions sends all of its meridians to $t$. Thus the set of $\mathcal{C}_n$--coloured knots is in bijective correspondence with
the set of knots.
\end{example}
\begin{example}\label{Ex:dihedral}
The simplest non-degenerate case of a $G$--coloured knot is a knot coloured by a dihedral group. Each Wirtinger generator is mapped to an element of the form $ts^i\in D_{2n}$, which depends only on $i\in\mathds{Z}/n\mathds{Z}\triangleleft D_{2n}$. Therefore a $D_{2n}$-colouring is encapsulated by a labeling of arcs of a knot diagram by elements in $\mathds{Z}/n\mathds{Z}$.
Such a knot diagram, labeled by integers or with colours standing in for those integers, was called an $n$--coloured knot by Fox, and this is the genesis of the term `coloured knots' \cite{Fox62}. There is no need to orient the knot diagram, because a $\rho$--image of a Wirtinger generator is its own inverse. See Figure \ref{F:Fan}.
\end{example}
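For concreteness (a standard verification, included for the reader's convenience), suppose the over-arc at a crossing is labelled $ts^i$ and one of the under-arcs is labelled $ts^j$. Since every element of the coset $tA\subset D_{2n}$ is an involution, both crossing signs give the same Wirtinger relation, and
\begin{equation*}
(ts^i)(ts^j)(ts^i)^{-1}=ts^{\,2i-j},
\end{equation*}
so the other under-arc is labelled $ts^{\,2i-j}$. This is precisely Fox's colouring rule: at each crossing, twice the label of the over-arc is congruent modulo $n$ to the sum of the labels of the two under-arcs.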
\begin{figure}
\caption{A knot diagram coloured in the sense of Fox.}
\label{F:Fan}
\end{figure}
\begin{example}\label{Ex:A_4}
The simplest example of a $G$--coloured knot for $G$ not metacyclic is a knot coloured by the alternating group. Each Wirtinger generator gets mapped to one of $\{t,ts_1,ts_2,ts_1s_2\}$. See Figure \ref{F:A4tref}.
\end{example}
\begin{figure}
\caption{An $A_4$--coloured trefoil.}
\label{F:A4tref}
\end{figure}
\section{Surface data}\label{S:surfacedata}
Let $G= \mathcal{C}_m\ltimes_\phi A$ be a fixed metabelian homomorph of a knot group.\par
In this section we define and explore \emph{surface data}. Surface data is an analogue for $G$--coloured knots of the Seifert matrix. In particular, it admits an $S$--equivalence relation (Section \ref{SS:S-equiv-matrix}).\par
We fix some linear algebra notation for the rest of the paper. The transpose of a matrix $M$ is denoted $M^{\thinspace T}$. We write
both column vectors and row vectors as rows, but we separate row vector elements with commas and column vector elements with
semicolons. Thus $\left(v_1;\,\ldots;v_n\right)$ denotes $\left(\begin{smallmatrix}v_1\\\vdots\\v_n\end{smallmatrix}\right)$.
The number $0$ denotes a zero matrix, whose size depends on its context. The direct sum of matrices $M\oplus N$ is
$\left(\begin{smallmatrix}M & 0\\ 0 & N\end{smallmatrix}\right)$. We denote the $n\times n$ unit matrix by $I_n$. We use square brackets
for matrices over $\mathds{Z}$, and round brackets for matrices over $A$.
\subsection{$A$-coloured Seifert surfaces and covering spaces}\label{SS:ASeif}
Let $(K,\rho)$ be a $G$--coloured knot, and let $F$ be a Seifert surface for $K$. For us, a Seifert surface comes equipped with a basepoint on its boundary, an
orientation (right-hand convention), and a fixed parametrization, for instance as a zero mean curvature ``soap bubble'' surface with the parameterized knot $K$ as its boundary. Let $E(F)$ denote the exterior of $F$, which inherits a basepoint $\star_F$ from $F$ by pushing off along the positive normal.\par
Let $C_{m}(K)$ be the $m$--fold branched covering space of $K$, obtained from $E(F)$ via the standard cut-and-paste construction
(see \textit{e.g.} \cite[Chapter 5C]{Rol90}). By convention $C_{0}(K)\ass C_\infty(K)$.\par
In this section we characterize the homomorphism $\bar\rho\colon\thinspace H_1\left(E(F)\right)\twoheadrightarrow A$ which arises from the restriction of $\rho$ to the complement of $F$, and the homomorphism $\tilde\rho\colon\thinspace H_1\left(C_m(K)\right)\twoheadrightarrow A$. This section generalizes \cite[Section 4.1.1]{KM09}, to which the reader is referred for details.\par
Write $\pi$ as a semidirect product $\mathds{Z}\ltimes\pi^{\prime}$. The abelianization map
$\Ab\colon\thinspace\ \pi\twoheadrightarrow\mathcal{C}_0$ is given by $\Ab(x)=t^{\Link(x,K)}$, where
$\Link(x,K)$ equals the algebraic intersection number of $x$ with $F$. Any based loop $x$ in the complement of $F$ does
not intersect $F$. So the image of the map $\iota_*\colon\thinspace \pi_1 E(F)\rightarrow \pi$ induced by the inclusion $\iota\colon\thinspace\ E(F) \hookrightarrow E(K)$ lies in $\pi^\prime$. Additionally, the group $G$ factors as $G=\rho(\mathds{Z})\ltimes \rho(\pi^\prime)$ with $\rho(\mathds{Z})=\mathcal{C}_m$ and
$\rho(\pi^\prime)=A$ (see for instance \cite[Proposition 14.2]{BZ03}). Combining these facts tells us that the image of $\rho\circ \iota_\ast$ is contained in $A$, and we obtain a map $\rho^{(1)}\colon\thinspace \pi_1 E(F)\twoheadrightarrow A$. Apply the abelianization map to the domain and to the range of $\rho^{(1)}$ to obtain a map $\bar\rho\colon\thinspace H_1 \left(E(F)\right)\twoheadrightarrow A$, which we call the \emph{restriction of $\rho$ to the complement of $F$}.\par
In another direction, for $G\ass\mathcal{C}_m\ltimes_\phi A$ a metabelian homomorph of a knot group, a $G$--colouring $\rho$ of a knot $K$ factors as follows (see \emph{e.g.} \cite[Proposition 14.3]{BZ03}):
\begin{equation}
\begin{CD}
\rho\colon\thinspace \pi= \mathds{Z}\ltimes_\tau \pi^\prime @>\beta_n>> \mathcal{C}_m\ltimes_{\psi^\prime}H_1(C_m(K)) @>\rho^\prime>> G\\
@. @VVV @VVV\\
\phantom{a} @. H_1(C_m(K)) @>\tilde{\rho}>> A
\end{CD}
\end{equation}
We will call $\tilde{\rho}$ the \emph{lift of $\rho$ to $C_m(K)$}.\par
The relationship between $\tilde\rho$ and $\bar\rho$ is as follows. Given a choice of $A$--coloured Seifert surface $(F,\bar\rho)$, construct $\mathrm{pr}\colon\thinspace C_{m}(K)\twoheadrightarrow E(K)$ by gluing together copies $R_0,\ldots,R_{m-1}$ of $E(F)$. A basis $\set{x_1,\ldots,x_{2g}}$ for $H_1(F)$ lifts to a generating set $\set{t^i\cdot x_1,\ldots,t^i\cdot x_{2g}}_{0\leq i\leq m-1}$ for $H_1(C_m(K))$. Choose indices such that $t^i\cdot x_j\in R_i$ for all $i=0,\ldots,m-1$ and $j=1,\ldots,2g$. This corresponds to a choice of a lift to $C_m(K)$ of $\star_F$. Then $\tilde\rho|_{R_0}=\bar\rho$. Conversely, given a choice of lift of $\star_F$, $\tilde\rho$ is recovered from $\bar\rho$ by setting $\tilde{\rho}(t^i\cdot x_j)\ass \phi^i \rho(x_j)$.\par
The discussion above is summarized by the commutative diagram below:
\begin{equation}
\begin{minipage}{300pt}\centering
\psfrag{a}[c]{$\pi$}\psfrag{b}[c]{$\pi^\prime$}\psfrag{c}[c]{$H_1(C_m(K))$}\psfrag{d}[c]{$G$}\psfrag{e}[c]{$\pi_1 E(F)$}\psfrag{f}[c]{$H_1(E(F))$}
\psfrag{g}[c]{$A$}\psfrag{r}[c]{\footnotesize$\rho$}\psfrag{s}[c]{\footnotesize$\tilde\rho$}\psfrag{t}[c]{\footnotesize$\bar\rho$}\psfrag{p}[r]{\footnotesize$\mathrm{pr}_\ast$}
\includegraphics[width=210pt]{Colourings}
\end{minipage}
\end{equation}
\noindent \rule{0pt}{12pt} Conditions for an $A$--colouring of $F$ to arise as a restriction of a knot colouring are given in Proposition \ref{P:HNN}, and conditions for an $A$--colouring of $C_m(K)$ to arise as a lift of a knot colouring are given in Proposition \ref{P:HNNlift}.\par
\begin{rem}\label{R:tubecomment}
Two Seifert surfaces of a knot are tube equivalent, \emph{i.e.} ambient isotopic up to addition or removal of tubes. See \textit{e.g.} \cite{BFK98,Lev65,Ric71}. However, two $A$--coloured Seifert surfaces of a $G$--coloured knot are only tube equivalent up to inner automorphism of the colouring as in Lemma \ref{L:inneriso}.
\end{rem}
\subsection{Definition of surface data}
\begin{defn}
A \emph{marked Seifert surface} for a knot $K$ is a Seifert surface $F$ for $K$, together with a choice of basis for $H_1(F)$.
\end{defn}
Let $(F,\bar\rho)$ be an $A$--coloured Seifert surface for a $G$--coloured knot $(K,\rho)$. A choice of basis $\set{x_1,\ldots,x_{2g}}$ for $H_1(F)$ induces an
\emph{associated basis} $\set{\xi_1,\ldots,\xi_{2g}}$ for $H_1\left(E(F)\right)$ which is uniquely characterized by the
condition that $\Link(x_i,\xi_j)=\delta_{ij}$ (see \textit{e.g.} \cite[Definition 13.2]{BZ03}). Let $\tau^{\pm}\colon\thinspace F\to
E(F)$ be the \emph{push-off maps} which take $x\in F$ to $(x,\pm1)\in F\times \{\pm 1\}\subset E(F)$. The group $A$ is abelian, and is therefore a $\mathds{Z}$--module in a unique way.
\begin{defn}
A pair $(M,V)$ is called \emph{surface data} for $(K,\rho)$ with respect to a marked Seifert surface $\left(F,\set{x_1,\ldots,x_{2g}}\right)$ for $K$ if:
\begin{itemize}
\item $M=(M_{ij})$ is the \emph{Seifert matrix} of $K$ defined by the equation
\begin{equation}\label{E:Seifertmatrix}\tau^-_\ast(x_i)=\sum_{j=1}^{2g}M_{ij}\xi_j.\end{equation}
\item $V$, called the \emph{colouring vector} of $(K,\rho)$ with respect to $\set{x_1,\ldots,x_{2g}}$, is defined by the equation
\begin{equation}
V\ass \left(v_1;\,\ldots;v_{2g}\right)\ass \left(\bar\rho(\xi_1);\,\ldots;\bar\rho(\xi_{2g})\right)\in A^{2g}.
\end{equation}
\end{itemize}
Conversely, a pair $(M,V)$ is called \emph{surface data} if there exists a $G$--coloured knot $(K,\rho)$ and a marked Seifert surface $\left(F,\set{x_1,\ldots,x_{2g}}\right)$ for $K$ with respect to which $(M,V)$ is the surface data of $(K,\rho)$.
\end{defn}
The following is a direct generalization of \cite[Proposition 8]{KM09}.
\begin{prop}[Proof in Section {\ref{SS:LinAlgProofs}}]\label{P:HNN}
Let $K$ be an oriented knot with marked Seifert surface $\left(F,\set{x_1,\ldots,x_{2g}}\right)$. Corresponding to this data, there are bijections
between three sets:
\begin{enumerate}
\item The set of epimorphisms $\left\{\rho\colon\thinspace\pi\twoheadrightarrow G\right\}$ with $\rho(\mu)=t$.
\item The set of epimorphisms $\left\{\psi\colon\thinspace H_1\left(E(F)\right)\twoheadrightarrow
A\right\}$ satisfying the condition that
$\psi\left(\tau^+_*(a)\right)=t\cdot\psi\left(\tau_*^-(a)\right)$ for
all $a\in H_1(F)$.
\item The set of vectors $\left\{V\ass\left(v_1;\,\ldots;v_{2g}\right)\in A^{2g}\right\}$ satisfying:
\begin{enumerate}
\item\label{I:vigen}The elements of the set $\{v_1,\ldots,v_{2g}\}$ together generate $A$.
\item The identity $M^{\thinspace T}\thinspace V=M\thinspace t\cdot V$ holds in $A^{2g}$.
\end{enumerate}
\end{enumerate}
\end{prop}
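As a worked illustration of the third description (under one common choice of conventions; it is not needed in the sequel), take $G=D_6$, so that $A=\mathds{Z}/3\mathds{Z}$ and $t\cdot v=-v$, and let $K$ be a trefoil with a genus-one Seifert surface whose Seifert matrix is $M=\left[\begin{smallmatrix}-1 & 1\\ 0 & -1\end{smallmatrix}\right]$. The identity $M^{\thinspace T}V=M\thinspace t\cdot V$ becomes $(M+M^{\thinspace T})V=0$ over $\mathds{Z}/3\mathds{Z}$, and since $M+M^{\thinspace T}\equiv\left[\begin{smallmatrix}1 & 1\\ 1 & 1\end{smallmatrix}\right]\pmod{3}$, the solutions are exactly the vectors $V=(v;\,-v)$ with $v\in\mathds{Z}/3\mathds{Z}$; any $v\neq 0$ makes the entries of $V$ generate $A$. This recovers the familiar fact that the trefoil is Fox $3$--colourable. Note that in this dihedral case the criterion involves only the symmetrized matrix $M+M^{\thinspace T}$, so it is insensitive to the usual sign and transposition ambiguities in the choice of Seifert matrix.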
A corollary is a simple necessary condition, which appears to be new, for a knot to be $G$--colourable.
\begin{cor}\label{C:rankbound}
If twice the genus of a knot $K$ is less than $\Rank(A)$, then there cannot exist a
surjective homomorphism $\rho\colon\thinspace\pi\twoheadrightarrow G$.
\end{cor}
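For example, if $\Rank(A)=3$ then no knot of genus one admits a $G$--colouring, since twice its genus is $2<3$.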
For $A$--coloured covering spaces we have:
\begin{prop}\label{P:HNNlift}
Let $K$ be an oriented knot equipped with a marked Seifert surface $\left(F,\set{x_1,\ldots,x_{2g}}\right)$. Corresponding to this data, there are bijections between three sets:
\begin{enumerate}
\item The set of epimorphisms $\left\{\rho\colon\thinspace\pi\twoheadrightarrow G\right\}$ with $\rho(\mu)=t$.
\item The set of epimorphisms $\left\{\psi\colon\thinspace H_1\left(E(F)\right)\twoheadrightarrow
A\right\}$ satisfying the condition that
$\psi(\tau(z))=t\cdot \psi(z)$ for
all $z\in H_1(F)$.
\item The set of vectors $\left\{V\ass\left(v_1;\,\ldots;v_{2g}\right)\in A^{2g}\right\}$
satisfying:
\begin{enumerate}
\item The elements of the set $\{v_1,\ldots,v_{2g}\}$ together generate $A$.
\item The vector $P\thinspace V$ vanishes in $A^{2g}$, where $P$ is a presentation matrix for $H_1(C_m(K))$ as a $\mathcal{C}_m$-module.
\end{enumerate}
\end{enumerate}
\end{prop}
This is the analogue of Proposition \ref{P:HNN} for lifts of $G$--colourings and it is proved in the same way \textit{mutatis mutandis}.
\subsection{$S$--equivalence}\label{SS:S-equiv-matrix}
Recall that two Seifert surfaces are \emph{tube equivalent} if they are ambient isotopic up to addition and removal of tubes. Tube equivalence is weaker than ambient isotopy, because we allow only ambient isotopy which preserves a Seifert surface (although we don't care which one).
\begin{defn}\label{D:tubequiv}
Two $G$--coloured knots $(K_{1,2},\rho_{1,2})$ are \emph{tube equivalent} if there exist tube equivalent $A$--coloured Seifert surfaces $(F_{1,2},\bar\rho_{1,2})$ for $(K_{1,2},\rho_{1,2})$ correspondingly.
\end{defn}
In this section, two ambient isotopic knots are considered the same, and two tube equivalent $G$--coloured knots are considered the same.\par
Two matrices $M_{1,2}$ are \emph{$S$--equivalent} if there exists a knot $K$ and a choice $\left(F_{1,2},\set{x^{1,2}_1,\ldots,x^{1,2}_{2g_{1,2}}}\right)$ of marked Seifert surfaces for $K$, such that the Seifert matrix of $K$ with respect to $\left(F_1,\set{x^1_1,\ldots,x^1_{2g_1}}\right)$ is $M_1$, and the
Seifert matrix with respect to $\left(F_2,\set{x^2_1,\ldots,x^2_{2g_2}}\right)$ is $M_2$ (this is equivalent to the more standard definition of
$S$--equivalence via moves on Seifert matrices \cite{Mur65,Ric71,Tro62}, as may be seen from \cite[Proposition 4.2]{GG08}). Two knots $K_{1,2}$ are \emph{$S$--equivalent} if they share the same Seifert matrix $M$ with respect to some choice of marked Seifert surfaces $\left(F_{1,2},\set{x^{1,2}_1,\ldots,x^{1,2}_{2g_{1,2}}}\right)$ correspondingly \cite{GG08,NS03}. This is a well-defined equivalence relation on knots modulo ambient isotopy.\par
These definitions extend to the $G$--coloured context.
\begin{defn}\label{D:Sequiv}
\begin{itemize}
\item Two surface data $(M_1,V_1)$ and $(M_2,V_2)$ are said to be \emph{$S$--equivalent} if there exists a $G$--coloured knot
$(K,\rho)$ together with a choice of marked Seifert surfaces $\left(F_{1,2},\set{x^{1,2}_1,\ldots,x^{1,2}_{2g_{1,2}}}\right)$ for $K$, such that the surface data of $(K,\rho)$ with respect to $(F_1,\set{x^1_1,\ldots,x^1_{2g_1}})$ is $(M_1,V_1)$, and the surface data with respect to $(F_2,\set{x^2_1,\ldots,x^2_{2g_2}})$ is $(M_2,V_2)$.
\item Two $G$--coloured knots $(K_{1,2},\rho_{1,2})$ are \emph{$S$--equivalent} if there exist Seifert surfaces $F_{1,2}$ for $K_{1,2}$
correspondingly, and bases for their first homology, with respect to which the surface data of $(K_{1},\rho_{1})$ is $S$--equivalent to the surface data of
$(K_{2},\rho_{2})$.
\end{itemize}
\end{defn}
$S$--equivalence is a well-defined equivalence relation on $G$--coloured knots modulo tube equivalence, by Naik and Stanford's proof \cite{NS03}, which is fleshed out in \cite{GG08}.
\begin{rem}\label{R:tubecom2}
$S$--equivalence would not be well-defined on $G$--coloured knots modulo ambient isotopy, because $A$--coloured Seifert surfaces corresponding to ambient isotopic $G$--coloured knots might not be tube equivalent. See Remark \ref{R:tubecomment}.
\end{rem}
Our definition of $S$--equivalence on surface data coincides with a definition in terms of moves on matrices.
\begin{prop}\label{P:Sequiv}
Two surface data are $S$--equivalent if and only if they are related by a finite
sequence of the following moves and their inverses:
\begin{description}
\item[$\Lambda_1$]\[
(M,V)\ \mapsto (U^{\thinspace T}\thinspace M U,U^{-1}V)
\]\noindent where $U$ is an integral square matrix such that $\det U=\pm1$ (such a matrix is said to be \emph{unimodular}).
\item[$\Lambda_2$]
\begin{multline*}
(M,V)\ \mapsto \left(\thinspace\begin{aligned}
\begin{bmatrix}
\ & \ & \ & c_1 & 0\\
\ & M & \ & \vdots & \vdots\\
\ & \ & \ & c_{2g} & 0\\
c_1 & \cdots & c_{2g} & 0 & -1\\
0 & \cdots & 0 & 0 & 0
\end{bmatrix}
\end{aligned}\ \scalebox{1.5}{,}\ \left(\begin{matrix}v_1\\ \vdots\\ v_{2g}\\ 0\\ \frac{t-1}{t}\cdot\left(\sum_{i=1}^{2g}c_i v_i\right)
\end{matrix}\right)\right)\ \mathrm{or}\\
\left(\thinspace\begin{aligned}
\begin{bmatrix}
\ & \ & \ & c_1 & 0\\
\ & M^{\thinspace T} & \ & \vdots & \vdots\\
\ & \ & \ & c_{2g} & 0\\
c_1 & \cdots & c_{2g} & 0 & 0\\
0 & \cdots & 0 & 1 & 0
\end{bmatrix}
\end{aligned}\ \scalebox{1.5}{,}\ \left(\begin{matrix} v_1\\ \vdots\\
v_{2g}\\ 0
\\ (t-1)\cdot\left(\sum_{i=1}^{2g}c_i v_i\right)
\end{matrix}\right)\right)\end{multline*}
\noindent with $c_1,\ldots,c_{2g}$ arbitrary integers.
\end{description}
\end{prop}
\begin{proof}
If $(M_1,V_1)$ and $(M_2,V_2)$ are related by a $\Lambda_1$-move, and if $(K,\rho)$ is a $G$--coloured knot with surface data $(M_1,V_1)$ with respect to a choice of Seifert surface $F$ for $K$ and some choice of basis $x_1,\ldots,x_{2g}$ for $H_1(F)$, then the action of $U$ on $H_1(F)$ induces a new basis
$y_1,\ldots,y_{2g}$ for $H_1(F)$, such that the surface data for $(K,\rho)$ with respect to $(F,\set{y_1,\ldots,y_{2g}})$ is $(M_2,V_2)$.\par
If $(M_2,V_2)$ is obtained from $(M_1,V_1)$ by a $\Lambda_2$-move, and if $(M_1,V_1)$ is surface data for a $G$--coloured knot $(K,\rho)$ with respect to a choice $(F,\set{x_1,\ldots,x_{2g}})$ of marked Seifert surface, then $(M_2,V_2)$ arises as surface data for $(K,\rho)$ with respect to a Seifert surface $F^{\prime}=F\cup\{\text{$1$--handle}\}$ and a basis $\set{x_1,\ldots,x_{2g},x_1^{\text{new}},x_2^{\text{new}}}$ for $H_1(F^{\prime})$ as
follows:
\begin{equation}\label{E:genincrease}
\begin{minipage}{100pt}
\includegraphics[width=100pt]{genincrease-1}
\end{minipage}\quad\ \ \ \overset{\text{stabilize}}{\begin{minipage}{18pt}\includegraphics[width=18pt]{fluffyarrow}\end{minipage}}\quad
\left\{\ \begin{array}{c}
\raisebox{20pt}{\begin{minipage}{130pt}
\psfrag{a}[c]{\tiny$x_1^{\text{new}}$}\psfrag{b}[c]{\tiny$x_2^{\text{new}}$}
\includegraphics[width=130pt]{genincrease2o}
\end{minipage}}\\[0.6cm]
\raisebox{20pt}{\begin{minipage}{130pt}
\psfrag{a}[c]{\tiny$x_1^{\text{new}}$}\psfrag{b}[c]{\tiny$x_2^{\text{new}}$}
\includegraphics[width=130pt]{genincrease2}
\end{minipage}}
\end{array}\right.
\end{equation}
Conversely, let $(M_1,V_1)$ and $(M_2,V_2)$ be surface data for a $G$--coloured knot $(K,\rho)$ with respect to choices $(F_1,\set{x_1,\ldots,x_{2g}})$ and $(F_2,\set{y_1,\ldots,y_{2g}})$ of marked Seifert surfaces. Then, in particular, $M_1$ and $M_2$ are related by a finite sequence of the following moves and their inverses:
\begin{description}
\item[$\Lambda_1$]\[
M\ \mapsto U^{\thinspace T}\thinspace M U
\]\noindent for $U$ a unimodular matrix.
\item[$\Lambda_2$]
\begin{multline}
M\ \mapsto
\begin{bmatrix}
\ & \ & \ & c_1 & 0\\
\ & M & \ & \vdots & \vdots\\
\ & \ & \ & c_{2g} & 0\\
c_1 & \cdots & c_{2g} & 0 & -1\\
0 & \cdots & 0 & 0 & 0
\end{bmatrix}
\ \mathrm{or}\
\begin{bmatrix}
\ & \ & \ & c_1 & 0\\
\ & M & \ & \vdots & \vdots\\
\ & \ & \ & c_{2g} & 0\\
c_1 & \cdots & c_{2g} & 0 & 0\\
0 & \cdots & 0 & 1 & 0
\end{bmatrix}
\end{multline}
\noindent with $c_1,\ldots,c_{2g}$ arbitrary integers.
\end{description}
For a proof, see \textit{e.g.} \cite[Theorem 5.4.1]{Mur96} or \cite[Theorem 2.3]{Ric71}. The $\Lambda_1$-move corresponds to a change of basis for $H_1(F)$, which induces the move $V\mapsto U^{-1}V$ on the colouring vector. The $\Lambda_2$-move corresponds to a $1$--handle attachment. Let $(v_1;\,\ldots;v_{2g};x;y)$ be the corresponding colouring vector. By the argument of \cite[Page 1371]{KM09}, for any colouring data $(M,V)$, the equation
$M^{\thinspace T}\thinspace V=M\thinspace t\cdot V\in A^{2g}$ holds.
Therefore:
\begin{multline}\label{E:colvecform}
\begin{bmatrix}
\ & \ & \ & c_1 & 0\\
\ & M & \ & \vdots & \vdots\\
\ & \ & \ & c_{2g} & 0\\
c_1 & \cdots & c_{2g} & 0 & -1\\
0 & \cdots & 0 & 0 & 0
\end{bmatrix}\cdot \left(\begin{matrix}t\cdot v_1\\ \vdots\\ t\cdot v_{2g}\\ t\cdot x\\
t\cdot y
\end{matrix}\right)-\begin{bmatrix}
\ & \ & \ & c_1 & 0\\
\ & M^{\thinspace T} & \ & \vdots & \vdots\\
\ & \ & \ & c_{2g} & 0\\
c_1 & \cdots & c_{2g} & 0 & 0\\
0 & \cdots & 0 & -1 & 0
\end{bmatrix}\cdot
\left(
\begin{matrix}v_1\\ \vdots\\ v_{2g}\\ x\\ y
\end{matrix}
\right)\\%
\rule{0pt}{17pt}=
\left(
\begin{matrix}M\thinspace t\cdot V-M^{\thinspace T}V+\left(c_1;\,\ldots;c_{2g}\right)(t-1)\cdot x\\\rule{0pt}{16pt}
(t-1)\cdot\left(\sum_{i=1}^{2g}c_i\thinspace v_i\right)-t\cdot y\\ x
\end{matrix}
\right)\ =
\left(\begin{matrix}0\\\vdots\\0\\ 0\\ 0\end{matrix}\right).
\end{multline}
The bottom row tells us that $x= 0$, while the second lowest row tells us that $y=\frac{t-1}{t}\cdot\left(\sum_{i=1}^{2g}c_iv_i\right)$ as
required. The remaining case is proved in the same way, \textit{mutatis mutandis}.
\end{proof}
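As an elementary cross-check (not needed for the proof above), one may verify directly that a $\Lambda_1$-move sends surface data to surface data: since $\phi$ is additive, the action of $t$ on $A^{2g}$ commutes with multiplication by any integral matrix, so $t\cdot(U^{-1}V)=U^{-1}(t\cdot V)$, and therefore
\begin{equation*}
(U^{\thinspace T}MU)^{\thinspace T}\thinspace(U^{-1}V)=U^{\thinspace T}M^{\thinspace T}V=U^{\thinspace T}M\thinspace t\cdot V=(U^{\thinspace T}MU)\thinspace t\cdot(U^{-1}V),
\end{equation*}
so the transformed pair again satisfies the identity of Proposition \ref{P:HNN}.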
Over an integral domain, any Seifert matrix is $S$--equivalent to a non-singular matrix or to zero \cite{Lev70,Tro62}.
\begin{prop}\label{P:TrotterProp}
If $A$ is isomorphic to a vector space over an integral domain, then for any surface data $(M,V)$, there exists surface data
$(M^{\prime},V^{\prime})$ which is $S$--equivalent to $(M,V)$, such that the matrix $M^{\prime}$ is non-singular.
\end{prop}
\begin{proof}
The argument of \cite[pages 484--485]{Tro62} shows that over an integral domain, any singular Seifert matrix is related by $\Lambda_1$-moves to a Seifert matrix of the form
\begin{equation}
\begin{bmatrix}
\ & \ & \ & c_1 & 0\\
\ & M & \ & \vdots & \vdots\\
\ & \ & \ & c_{2g} & 0\\
c_1 & \cdots & c_{2g} & 0 & 0\\
0 & \cdots & 0 & 1 & 0
\end{bmatrix}.
\end{equation}
Corresponding to this Seifert matrix, by Equation \ref{E:colvecform}, the colouring vector is of the form $\left(v_1;\,\ldots;v_{2g-2};0;(t-1)\cdot\left(\sum_{i=1}^{2g}c_iv_i\right)\right)$. As $v_1,\ldots,v_{2g}$ generate $A$ as a $\mathcal{C}_m$-module, this
implies that $g>2$, and we may obtain a smaller matrix $M^\prime$ such that $(M^\prime,\left(v_1;\,\ldots;v_{2g-2}\right))$ is $S$--equivalent to $(M,V)$ by an inverse $\Lambda_2$-move. Continue until a nonsingular matrix is reached.
\end{proof}
\subsection{Proof of Proposition {\ref{P:HNN}} and of Corollary
{\ref{C:rankbound}}}\label{SS:LinAlgProofs}
\begin{proof}[Proof of Proposition {\ref{P:HNN}}]
Note first that $\mu$ normally generates $\pi$, therefore $\rho(\mu)$ normally generates $G$, and so by an inner automorphism we may set $\rho(\mu)=t$.\par
The argument of \cite[Proof of Proposition 8]{KM09} shows that there
is a bijective correspondence between three sets:
\begin{enumerate}
\item{The set of epimorphisms $\left\{\rho\colon\thinspace\pi\twoheadrightarrow G\right\}$ with $\rho(\mu)=t$.}
\item{The set of maps $\left\{\psi\colon\thinspace H_1\left(E(F)\right)\rightarrow
A\right\}$ satisfying two conditions:
\begin{enumerate}
\item The image of $\psi$ generates $A$ as a $\mathcal{C}_m$-module.
\item For every $a\in H_1(F)$, we have
$\psi\left(\tau^+_*(a)\right)=t\cdot\psi\left(\tau_*^-(a)\right)$.
\end{enumerate}}
\item{\rule{0pt}{11pt} The set of vectors $\left\{V\ass\,\left(v_1;\,\ldots;v_{2g}\right)\in
A^{2g}\right\}$ satisfying:
\begin{enumerate}
\item{The elements of the set $\left\{t^k\cdot v_1,\ldots,t^k\cdot v_{2g}\right\}_{k\in\mathds{Z}}$ together generate
$A$.}
\item{
\begin{equation*}
M\thinspace t\cdot V=M^{\thinspace T}V\in A^{2g}.
\end{equation*}}
\end{enumerate}}
\end{enumerate}
Note that our choice of distinguished meridian for $K$ means that we don't have to mod out the first set by an equivalence relation. Let $I_V\subseteq A$ denote the ideal generated by $\left\{v_1,\ldots,v_{2g}\right\}$. It remains to prove that $I_V$ equals $A$. Equation \ref{E:Seifertmatrix} implies that
\begin{multline}\label{E:8.8Thm}
\bar\rho\left([\mu]^{-i}\tau_\ast^+(x_1;\,\ldots;x_{2g})[\mu]^{i}\right)=M^{\thinspace T}\thinspace t^i\cdot V\\= M\thinspace
t^{i+1}\cdot V=\bar\rho\left([\mu]^{-i-1}\tau_\ast^-(x_1;\,\ldots;x_{2g})[\mu]^{i+1}\right).
\end{multline}
Without loss of generality, take $i=0\in\mathds{Z}$.\par
Because $A$ is finitely generated, it may be given the structure of a principal ideal ring. It then follows from the Chinese remainder
theorem that any solution to
\begin{equation}\label{E:MWMV}
M\thinspace W=M^{\thinspace T}V
\end{equation}
\noindent must restrict to a solution of Equation \ref{E:MWMV} over each Sylow subgroup of $A$, and if $A$ is infinite, over the integers (we would like $W$ to become $t\cdot V$). We may therefore restrict to the case that $A$ is of the form $\mathcal{C}_{q}^{r}$ with $q$ prime or zero. The goal is to show that $W$ is unique. The ideal $I_V$, defined as the ideal generated by the entries of $V$, equals $A$ if and only if, for any surface data $(M^{\prime},V^{\prime})$ which is $S$--equivalent to $(M,V)$, we have $I_{V^{\prime}}=A$. If $A$ is isomorphic to a vector space over the integers, by Proposition \ref{P:TrotterProp}, $M$ must be $S$--equivalent to a non-singular Seifert matrix. This implies that $W$, which we know exists, is uniquely determined by Equation \ref{E:MWMV}.\par
Next, if $A$ is an abelian $p$--group, then the quotient $A/\Phi(A)$ is an elementary abelian group, where $\Phi(A)$ denotes the Frattini subgroup of $A$ (see \textit{e.g.} \cite[Section 10.4]{Hal76}). The group $A/\Phi(A)$ is isomorphic to a vector space over an integral domain (a field in fact), and we may uniquely solve Equation \ref{E:MWMV} over $A/\Phi(A)$ to give $W=M^{-1}M^{\thinspace T}V$.
The proposition is thus proven over an abelian $p$--group. We are finished, because by the Burnside Basis Theorem (see \textit{e.g.}
\cite[Theorem 12.2.1]{Hal76}), any lift of a solution to Equation \ref{E:MWMV} over $A/\Phi(A)$ whose entries generate $A/\Phi(A)$ will be a vector in $A^{2g}$
whose entries generate $A$.
\end{proof}
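To illustrate the final step (a standard instance of the Burnside Basis Theorem, included only for orientation): for $A=\mathds{Z}/4\mathds{Z}$ the Frattini subgroup is $\Phi(A)=2\mathds{Z}/4\mathds{Z}$, and an element of $A$ generates $A$ if and only if its image generates $A/\Phi(A)\simeq\mathds{Z}/2\mathds{Z}$. The Burnside Basis Theorem extends exactly this observation to tuples of elements of an arbitrary finite $p$--group.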
Recall that a square integral matrix $P$ is said to be \emph{unimodular} if $\det P=\pm 1$, and two matrices $M_{1,2} $ are said to be \emph{unimodular congruent} if $P^{\,T}\,M_1 P=M_2$ for some unimodular $P$.
\begin{proof}[Proof of Corollary {\ref{C:rankbound}}]
Because $S\ass M^{\thinspace T}-M$ is unimodular congruent to
$\left[\begin{smallmatrix}0 & 1\\ -1 &
0\end{smallmatrix}\right]^{\oplus g}$ (see e.g. \cite[Proposition 8.7]{BZ03}), it is invertible over any commutative ring. Rewrite
\begin{equation}
M\thinspace t\cdot V=M^{\thinspace T}V
\end{equation}
\noindent as
\begin{equation}\label{E:MVSV}
M \left(t-1\right)\cdot\thinspace V= S\thinspace V.
\end{equation}
\noindent by subtracting $M\thinspace V$ from both sides of the equation. Left multiply both sides by $S^{-1}$ to obtain
\begin{equation}\label{E:SMVV}
S^{-1} \thinspace M \thinspace \left(t-1\right)\cdot\thinspace V=V.
\end{equation}
Because $S$ is invertible and because $(t-1)$ induces an automorphism of $A$ (see \textit{e.g.} \cite[Proposition 14.2]{BZ03}), it follows that $\Rank(M)$ is bounded below by $\Rank(V)$, which in turn equals the minimal number of elements in a generating set for $A$ by Proposition
\ref{P:HNN}.
\end{proof}
\section{Surgery equivalence relations between $G$--coloured knots}\label{S:rhoequiv}
In Section \ref{SS:rhoequiv}, we define equivalence relations on $G$--coloured knots whose study is the focus of this paper. The relationship between these was described in Section \ref{SS:Method}. The $\bar\rho$--equivalence relation is put into the context of a big construction (relative bordism) by Proposition \ref{P:bordbarrho}.
\subsection{The equivalence relations}\label{SS:rhoequiv}
Recall the twist move and the null-twist from Section \ref{SS:Results}, Figures \ref{F:FRMove} and \ref{F:HosteMove}, and recall tube equivalence of $G$--coloured knots from Definition \ref{D:tubequiv}. Recall also the restriction $\bar\rho$ and the lift $\tilde\rho$ of the $G$--colouring $\rho$. Consider the infinite cyclic covering
\begin{equation}\tilde{G}\ass \mathcal{C}_0\ltimes_{\tilde{\phi}} A\overset{p}{\twoheadrightarrow} \mathcal{C}_m\ltimes_\phi A=G,\end{equation}
\noindent with $p(t^ia)\ass t^{i\bmod m}a$ for all $a\in A$. The $G$--colouring $\rho$ of $K$ pulls back to a $\tilde{G}$--colouring $\hat\rho$ of $K$, which we call the \emph{colift of $\rho$ to $\tilde{G}$}.\par
Define the following equivalence relations on the set of $G$--coloured knots.
\begin{defn}Two $G$--coloured knots $(K_{1,2},\rho_{1,2})$ are said to be:
\begin{itemize}
\item \emph{$\rho$--equivalent} if they are related up to ambient isotopy by twist moves.
\item \emph{$\hat\rho$--equivalent} if they are related up to ambient isotopy by null-twists.
\item \emph{$\bar\rho$--equivalent} if they are related up to tube equivalence by null-twists.
\item \emph{$\tilde\rho$--equivalent} if they are related up to tube equivalence by twist moves.
\end{itemize}
\end{defn}
The justification for these names is as follows. A null-twist respects a $\tilde{G}$--colouring such as $\hat\rho$, as does ambient isotopy. It may be realized as a twist move between bands of some Seifert surface by the tubing construction, and therefore it respects an $A$--colouring of the complement of a Seifert surface, such as $\bar\rho$. A twist move respects an $A$--colouring of $C_m(K)$ such as $\tilde\rho$. Forgetting the $\mathcal{C}_m$-module structure on both sides, $\tilde\rho$ descends to a homomorphism from $H_1(C_m(K))$ onto $A$, which we call $\check{\tilde{\rho}}$, and which is preserved by tube equivalence but not by ambient isotopy of $K$. In fact $\tilde\rho$--equivalence is what we should be calling $\check{\tilde{\rho}}$--equivalence.
\subsection{Relative bordism}\label{SS:relbord}
In this section we work in the smooth category, and write the unit interval as $I\ass [0,1]$. References for this section are Conner--Floyd \cite{CF64} and Cochran--Gerges--Orr \cite{CGO01}.
\begin{defn}
Consider two compact oriented $n$--manifolds $M_{1,2}$, whose boundaries $\partial M_{1,2}$ are compact
oriented $(n-1)$--manifolds. Fix a subgroup $H\subseteq G$, and let $f_{1,2}\colon\thinspace M_{1,2} \twoheadrightarrow K(G,1)$ be a pair of smooth maps which map $\partial M_{1,2}$ onto $K(H,1)$. The pairs $(M_1,f_1)$ and $(M_2, f_2)$ are said to be \emph{$(G,H)$--relative bordant} if there exists a compact oriented $n$--manifold $N$ called a \emph{connecting manifold}, a compact oriented $(n+1)$--manifold $W$, and a smooth map $F\colon\thinspace W\to K(G,1)$ such that:
\begin{itemize}
\item $\partial N=\partial M_1\cup -\partial M_2$ and $N\cap M_{1,2}=\partial M_{1,2}$ and $\partial W=\left(M_1\amalg M_2\right)\bigcup_{\partial N} N$.
\item $F\!\mid_{M_{1,2}}=f_{1,2}$ and $F$ maps $N$ onto $K(H,1)$.
\end{itemize}
We call $(W,F)$ a relative bordism between $(M_1,f_1)$ and $(M_2, f_2)$. The $n$th $(G,H)$--relative bordism group is denoted
$\Omega_n(G,H)$. See Figure \ref{F:bordism}.
\end{defn}
\begin{figure}
\caption{A relative bordism.}
\label{F:bordism}
\end{figure}
Relative bordism of knots is defined as relative bordism of knot complements. Namely, a $G$--colouring $\rho\colon\thinspace \pi\twoheadrightarrow G$ induces a smooth map $f\colon\thinspace E(K)\to K(G,1)$ such that $f(\partial E(K))\subseteq K(H,1)$, where $H\ass \langle\rho(\mu),\rho(\ell)\rangle$ is the $\rho$--image of the peripheral subgroup of $\pi=\pi_1 E(K)$. For $G$ metabelian, the $\rho$--image of the longitude is trivial, and the $\rho$--image of the distinguished meridian is a generator of $\mathcal{C}_m\simeq\Ab\, G$. This motivates the following definition.
\begin{defn}\label{D:bordismrels}
Two $G$--coloured knots $(K_{1,2},\rho_{1,2})$ are:
\begin{itemize}
\item \emph{$\rho$--bordant} if there exists a $(G,\mathcal{C}_m)$--relative bordism $(W,F)$ between them, with $F\!\mid_{E(K_{1,2})}=f_{1,2}$ smooth maps induced by $\rho_{1,2}$ correspondingly.
\item \emph{$\hat\rho$--bordant} if there exists a $(\tilde{G},\mathcal{C}_0)$--relative bordism $(W,F)$ between them, with $F\!\mid_{E(K_{1,2})}=f_{1,2}$ smooth maps induced by $\hat\rho_{1,2}$ correspondingly.
\item \emph{$\bar\rho$--bordant} if there exists a $(G,\mathcal{C}_m)$--relative bordism $(W,F)$ between them, and Seifert surfaces $F_{1,2}$ for $K_{1,2}$ correspondingly, with $F\!\mid_{E(F_{1,2})}=f_{1,2}$ smooth maps induced by $\bar\rho_{1,2}$ correspondingly.
\item \emph{$\tilde\rho$--bordant} if there exists a $(G,\mathcal{C}_m)$--relative bordism $(W,F)$ between them, with $F\!\mid_{E(K_{1,2})}=f_{1,2}$ smooth maps induced by $\check{\tilde{\rho}}_{1,2}$ correspondingly.
\end{itemize}
\end{defn}
\begin{example}
Two $\mathcal{C}_n$-coloured knots are $\Link$-bordant if and only if they are bordant.
\end{example}
\subsection{Surgery}\label{SS:surgery}
Given an $n$--manifold $X$ and an embedding $\varphi\colon\thinspace S^{n-i}\times D^{i}\hookrightarrow X$ with $1\leq i\leq n$, we may form a new $n$--manifold
\begin{equation}
X^{\prime}\ass\ \left(X-\mathrm{int}\,\mathrm{im}\varphi\right)\cup_{\varphi\mid_{S^{n-i}\times S^{i-1}}}\left(D^{n-i+1}\times S^{i-1}\right)\end{equation}
\noindent by cutting out $S^{n-i}\times D^{i}$ and gluing in $D^{n-i+1}\times S^{i-1}$. This process is called \emph{$i$-handle attachment}. In this paper, \emph{surgery} means $2$--handle attachment to a $3$--manifold (so by ``surgery'' we mean ``integral Dehn surgery''). The \emph{trace} of an $i$--handle attachment is the bordism
\begin{equation}
W^{\prime}\ass\ \left(X\times I\right)\cup_{S^{n-i}\times D^{i}\times\{1\}}\left(D^{n-i+1}\times D^{i}\right).
\end{equation}
\noindent Such a bordism is called \emph{elementary}. In the case of surgery, call $\varphi(S^1)$ with its induced framing a \emph{surgery component}, and call its image in the trace of the surgery the \emph{attaching curve} for the $2$--handle $D^2\times D^2\subset W^{\prime}$. By the Pontryagin construction, $X^\prime$ depends only on the attaching curve.\par
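For orientation we spell out the case used in this paper ($n=3$ and $i=2$): surgery along a framed circle $\varphi\colon\thinspace S^{1}\times D^{2}\hookrightarrow X$ produces
\begin{equation*}
X^{\prime}=\left(X-\mathrm{int}\,\mathrm{im}\varphi\right)\cup_{\varphi\mid_{S^{1}\times S^{1}}}\left(D^{2}\times S^{1}\right),
\end{equation*}
and its trace is the $4$--manifold $\left(X\times I\right)\cup_{S^{1}\times D^{2}\times\{1\}}\left(D^{2}\times D^{2}\right)$, that is, $X\times I$ with a single $2$--handle attached along the surgery component.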
By the fundamental theorem of Morse theory every bordism has a handle decomposition, and therefore can be represented as a union of elementary bordisms. To remind the reader, given a bordism $W$ between $n$-manifolds $M_{1,2}$, a handle decomposition is a diffeomorphism from $W$ to an $(n+1)$--manifold obtained by attaching handles to the cylinder $M_{1}\times I$, where the handles may be assumed to be attached in disjoint \emph{time slices} of the form $M_{1}\times [h,h+\varepsilon]$.\par
We pass to the relative setting.
\begin{defn}
A \emph{surgery description of $(M_2,f_2)$ in $(M_1,f_1)$} is a relative bordism $(W,F)$ between $(M_1,f_1)$ and $(M_2, f_2)$ such that $W$ is homeomorphic to the cylinder $M_1\times I$ with $2$--handles attached, and $F$ is an extension of $f_1$ over the cylinder and over the $2$--handles.
\end{defn}
\begin{example}
Any $\mathcal{C}_n$-coloured knot has a surgery description in the complement of the $\mathcal{C}_n$-coloured unknot. This is a special case of the Lickorish--Wallace Theorem, that every $3$--manifold has a surgery description, which in the bordism setting follows from the result of Rokhlin that the bordism group of $3$--manifolds is trivial (\cite{Rok51}, see also \cite{Rou85} for a pretty proof).
\end{example}
Each bordism equivalence relation in Definition \ref{D:bordismrels} has a corresponding surgery equivalence relation.
\begin{defn}\label{D:surgrels}
Let $\psi\in\{\rho,\hat\rho,\bar\rho,\tilde\rho\}$. Two $G$--coloured knots $(K_{1,2},\rho_{1,2})$ are \emph{$\psi$--surgery equivalent} if there is a $\psi$--bordism $(W,F)$ between them such that $W$ is homeomorphic to the cylinder $E(K_1)\times I$ with $2$--handles attached.
\end{defn}
\begin{rem}
In the language of \cite{KM09}, two $G$--coloured knots in $S^3$ are related by surgery in $\ker\rho$ if and only if they are $\rho$--surgery equivalent.
\end{rem}
\subsection{Relationships between equivalence relations}\label{SS:rhorels}
The following is the main proposition of Section \ref{S:rhoequiv}.
\begin{prop}\label{P:bordbarrho}
The following conditions are equivalent:
\begin{enumerate}
\item $\bar\rho$--bordism.
\item $\bar\rho$--surgery equivalence.
\item $\bar\rho$--equivalence.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{description}
\item[$1\Rightarrow 2$]
We mimic the arguments of \cite[Section 4.3]{LiWal08} and \cite[Proof of Theorem 4.2]{CGO01} (see either source for details). Let $(W,F)$ be a $\bar\rho$--bordism between $(K_{1,2},\rho_{1,2})$. Forgetting the Seifert surfaces, $(W,F)$ is in particular a $\hat\rho$--bordism. The boundary of the connecting manifold $N\subset W$ consists of two disjoint copies of $T^2$. The closed $3$--manifold $N\cup_{T^2\sqcup T^2}\left(T^{2}\times I\right)$ is an element of $\Omega_3(\mathcal{C}_0)\simeq \set{1}$. Therefore there exists a $\hat\rho$--bordism $W^\prime$ between $(K_{1,2},\rho_{1,2})$ with connecting manifold $T^2\times I$. Take a smooth handle decomposition of $W^\prime$ relative to the boundary as $\left(E(K_1)\times I\right)\cup\{\text{$2$--handles}\}$ by the standard argument (see \textit{e.g.} \cite[Section 5.4]{GS91}). This gives rise to a $\hat\rho$--surgery equivalence $(W^\prime,F^\prime)$. Choose Seifert surfaces $F_{1,2}$ for $K_{1,2}$ correspondingly. The induced restriction $\bar\rho_{2}^\prime$ of $\rho_2$ is related to $\bar\rho_2$ by an inner automorphism of $G$. Therefore $(K_{2},\bar\rho_2)$ and $(K_2,\bar\rho_2^\prime)$ are related by ambient isotopy (Lemma \ref{L:inneriso}), realized by a second $\hat\rho$--surgery equivalence $(W^{\prime\prime},F^{\prime\prime})$ with connecting manifold $T^2\times I$. Thus, \begin{equation}(W_{\text{srg}},F_{\text{srg}})\ass\left(W^\prime\cup_{E(K_2)}W^{\prime\prime},F^\prime\cup_{\bar\rho_2^\prime} F^{\prime\prime}\right)\end{equation}
\noindent becomes a $\bar\rho$--surgery equivalence between $(K_{1,2},\rho_{1,2})$.
\item[$2\Rightarrow 3$] We imitate the argument of \cite[Proof of Theorem 1.1]{LiWal08} and \cite[Proof of Theorem 4.2]{CGO01}.
``Filling in'' the connecting manifold $T^2\times I$ with a solid torus times an interval turns $W_{\text{srg}}$ into a surgery description of $S^3$. The Kirby Theorem implies that a surgery description of $S^3$ can be transformed to a $\pm1$--framed unlink by blow-ups and handle-slides, changing the handle decomposition of $W_{\text{srg}}$. Writing the unlink as $L\ass L_1\cup\cdots\cup L_\nu$, slide each $L_i$ (an attaching circle for a $2$--handle) to the time-slice $E(K_1)\times \left[\frac{i-1}{\nu},\frac{i}{\nu}\right]$. This induces a decomposition of $W_{\text{srg}}$ as a union of elementary $\bar\rho$--bordisms
\begin{equation}W_{\text{srg}}=\bigcup_{i=1}^\nu E(K_i)\times \left[\frac{i-1}{\nu},\frac{i}{\nu}\right]\cup_{L_i} H_i.\end{equation}
For $i=1$, the $G$--colouring $\rho_1$ induces $f_{1}\colon\thinspace E(K_1)\times \left[0,\frac{1}{\nu}\right]\to K(G,1)$ which extends over the $2$--handle $H_1$. Therefore $L_1$ represents an element in $\ker\rho$. We may represent $L_1$ as an unknot which rings $2r$ strands in $K_1$ by pushing $L_1$ down to $E(K_1)\times\set{0}$ (note that $\Link(K_1,L_1)=0$). Thus, surgery around $L_1$ is a null-twist. The same argument shows that surgeries around $L_2,\ldots,L_\nu$ are all null-twists.
\item[$3\Rightarrow 1$]
Figure \ref{F:surg-real}, and tubing, shows how to realize a null-twist as an (elementary) $\bar\rho$--bordism.
\begin{figure}
\caption{Realizing a null-twist as an elementary $\bar\rho$--bordism.}
\label{F:surg-real}
\end{figure}
\end{description}
\end{proof}
\begin{rem}\label{R:LW2}
Litherland and Wallace conjectured the analogue of Proposition \ref{P:bordbarrho}, replacing $\bar\rho$ by $\rho$.
\end{rem}
The above proposition helps us to understand $\bar\rho$--equivalence in two ways. First, it puts it in the framework of relative bordism, which is a ``bigger construction'', by showing that every $\bar\rho$--bordism can be `upgraded' to a surgery presentation. Relative bordism can be calculated homologically, because, for $i\leq 3$, the group $\Omega_i(G,H)$ is isomorphic to the relative homology group $H_i(G,H)$ (see {\textit{e.g.} \cite[{Theorem \textrm{IV}.7.37}]{Rud08}}). This leads to an upper bound of $\abs{H_3(G;\mathds{Z})}$ for the number of $\bar\rho$--equivalence classes. We calculate $H_3(G;\mathds{Z})$ by first applying the Lyndon--Hochschild--Serre spectral sequence (\textit{e.g.} \cite[{Chapter \textrm{VII}, Section 6}]{Bro82}) to identify it with $H_0(\mathds{Z};H_3(A;\mathds{Z}))\simeq H_3(A;\mathds{Z})$, and then calculating the latter following Cartan \cite{Car54}. Summarizing:
\begin{cor}\label{C:Wallacebound}
The number of $\bar\rho$--equivalence classes is bounded above by $\abs{H_3(A;\mathds{Z})}$.
\end{cor}
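To make this bound concrete in the simplest case (a hedged illustration, using only the standard homology of finite cyclic groups), suppose $A\simeq\mathds{Z}/n\mathds{Z}$, as happens for the dihedral colourings discussed in Remark \ref{R:LiWal} below. Then
\begin{equation*}
H_i(\mathds{Z}/n\mathds{Z};\mathds{Z})\simeq
\begin{cases}
\mathds{Z} & \text{for } i=0,\\
\mathds{Z}/n\mathds{Z} & \text{for } i \text{ odd},\\
0 & \text{for } i>0 \text{ even},
\end{cases}
\qquad\text{so}\qquad
\abs{H_3(A;\mathds{Z})}=n,
\end{equation*}
and Corollary \ref{C:Wallacebound} bounds the number of $\bar\rho$--equivalence classes by $n$.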
The local-move description of $\bar\rho$--equivalence is a ``small construction'' which is good for making explicit calculations.
\begin{rem}\label{R:LiWal}
The above argument, applied in the paper of Litherland and Wallace \cite{LiWal08}, would have led to a sharp upper bound of $n$ instead of $2n$ for the number of $\rho$--equivalence classes of $D_{2n}$--coloured knots. Two $\bar\rho$--equivalent $G$--coloured knots are $\rho$--equivalent, and $n$ is an upper bound for the number of $\bar\rho$--equivalence classes by the above homological calculation.
\end{rem}
\begin{rem}\label{R:tightenbordism}
The complex $(K(G,1),S^1)$ has a $\mathds{Z}$--action by conjugation by $t$, corresponding to ambient isotopy of the knot as in the proof of Lemma \ref{L:inneriso}. Equivariant bordisms with respect to this action would correspond to $\hat{\rho}$--equivalence, and so would lead to a tighter upper bound on the number of $\rho$--equivalence classes.
\end{rem}
\section{An algebraic characterization of $\bar\rho$--equivalence}\label{S:ClasperProof}
The finitely generated abelian group $A$ is given the structure of a principal ideal ring, which by abuse of notation we also call $A$.
\subsection{Result statement}
A celebrated result of Naik and Stanford states that the $\Delta$--move generates $S$--equivalence \cite{NS03}. Translated into the language of claspers (recalled in Section \ref{SS:clasper-review}), this is equivalent to saying that for any $S$--equivalent knots $K_{1,2}$ there exists a Seifert surface $F_1$ for $K_1$ and a set of $Y$--claspers $C=\set{Y_1,\ldots,Y_k}$ in the complement of $F_1$, such that surgery around $C$ gives $K_2$. In the $G$--coloured context, leaves $A^i_{1,2,3}$ of clasper $Y_i$ come equipped with colours $a^i_{1,2,3}\in A$ correspondingly, and we can associate to $(K_{1,2},\rho_{1,2})$ the sum of their triple wedge products in $\bigwedge^{ 3} A$--- the \emph{$Y$--obstruction}
$Y((K_1,\bar\rho_1),(K_{2},\bar\rho_2))$. The $Y$--obstruction is independent of the choices made in its construction. The goal of this section is to prove the following theorem.
\begin{thm}\label{T:clasperprop}
Two $S$--equivalent $G$--coloured knots $(K_{1,2},\rho_{1,2})$ are $\bar\rho$--equivalent if and only if their $Y$--obstruction vanishes.
\end{thm}
In the special case $\Rank(A)\leq 2$, the group $\bigwedge^{ 3} A$ vanishes, and Theorem \ref{T:clasperprop} becomes that $S$--equivalence implies $\bar\rho$--equivalence. We sketch a proof of this (simpler) claim for $\Rank(A)= 2$, as the rank $1$ case follows from analogous arguments. This offers a shortcut through this section for the reader interested only in such groups. Let $s_{1,2}$ be generators of $A$. Engineer a band projection for $F_1$ by Section \ref{SSS:Shorten} so that entries in the corresponding colouring vector are all elements of the set $\set{0,\pm s_{1},\pm s_{2}}$. Any $\Delta$--move between bands is then realized by null-twists, by the proofs of Lemmas \ref{L:(0,a,b)} and \ref{L:(a,a,b)}.
\subsection{Review of clasper calculus}\label{SS:clasper-review}
One use of clasper calculus is to provide a graphical language to prove theorems of the form ``two objects in class $C$ are related by a finite sequence of local moves $M$ if and only if they share homological information $I$''. Examples of such theorems are in \cite{GR04,Mas03,Mat87,MN89}. Theorem \ref{T:clasperprop} is of such form. Our definitions follow \cite[Section 2]{Hab00}, but are simplified because we require only a small segment of clasper calculus. Conventions which differ from those of Habiro are written in \textbf{bold} font.
A \emph{basic clasper} is defined to be a union of three oriented embedded objects $C\ass A_1\cup A_2\cup E \subset S^3$ with $A_{1,2}$ \textbf{zero-framed unknots bounding disjoint discs} and $E$ an oriented $\frac{1}{2}\mathds{Z}$--framed line segment such that $E\cap A_{1,2}$ are a pair of points in $S^3$. Framings $\frac{1}{2}$ and $-\frac{1}{2}$ on $E$ are graphically represented as \begin{minipage}{30pt}\includegraphics[width=30pt]{half-frame}\end{minipage} and
\begin{minipage}{30pt}\includegraphics[width=30pt]{half-frame-1}\end{minipage} correspondingly. Unknots $A_1$ and $A_2$ are called \emph{leaves} of $C$, while $E$ is called the \emph{edge} of $C$. Basic claspers provide a graphical notation for linkage as in Figure \ref{F:Habiro1}.
\begin{figure}
\caption{Basic claspers as a graphical notation for linkage.}
\label{F:Habiro1}
\end{figure}
A \emph{clasper} $C\ass\mathbf{A}\cup G \subset S^3$ is a collection $\mathbf{A}\ass\ A_1\cup\ldots\cup A_k$ of \textbf{zero-framed unknots bounding disjoint discs} together with an oriented embedded uni-trivalent graph $G$ whose trivalent vertices are \textbf{oriented counterclockwise} and each of whose edges is half-integer framed, such that $\mathbf{A}\cap G$ equals the set of $1$--valent vertices of $G$ in $S^3$, and each leaf $A_i\subset \mathbf{A}$ meets $G$ at a single point $l_i\in \mathbf{A}\cap G$. Thus, a \emph{simple clasper} is a clasper with two leaves.\par
\begin{figure}
\caption{A $Y$--clasper.}
\label{F:YClasper}
\end{figure}
Another useful class of claspers is \emph{$Y$--claspers}, interpreted in Figure \ref{F:YClasper}. Boxes are a useful graphical shorthand, as described in Figure \ref{F:Habiro2}.
\begin{figure}
\caption{The box notation.}
\label{F:Habiro2}
\end{figure}
\begin{figure}
\caption{The unite-box move.}
\label{F:unitebox}
\end{figure}
We make repeated use of Habiro's twelve moves \cite[Pages 14--15]{Hab00}\footnote{For easy reference, the reader might want to print out \cite{MosXX}.}, to which we add an additional \emph{unite-box move} described in Figure \ref{F:unitebox}.
\subsection{Review of $\Delta$--Moves}
The following proposition describes four equivalent ways to define the $\Delta$--move. It is well-known, but the author could find no reference for it in the literature.
\begin{prop}
The following local moves are equivalent:
\begin{subequations}\label{E:Delta}
\begin{equation}
\begin{minipage}{80pt}
\includegraphics[width=70pt]{clasporder-1}
\end{minipage}\quad\overset{\raisebox{2pt}{\scalebox{0.8}{$\Delta_1$}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=70pt]{clasporder-2}
\end{minipage}
\end{equation}
\begin{equation}
\psfrag{a}[c]{}\psfrag{b}[c]{}\psfrag{c}[c]{}
\begin{minipage}{80pt}
\includegraphics[width=75pt]{BorrBands-1}
\end{minipage}\quad\overset{\raisebox{2pt}{\scalebox{0.8}{$\Delta_2$}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=75pt]{BorrBands-4}
\end{minipage}
\end{equation}
\begin{equation}
\psfrag{a}[c]{}\psfrag{b}[c]{}\psfrag{c}[c]{}
\begin{minipage}{80pt}
\includegraphics[width=75pt]{DeltaY-1}
\end{minipage}\quad\overset{\raisebox{2pt}{\scalebox{0.8}{$\Delta_3$}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=75pt]{DeltaY-3}
\end{minipage}
\end{equation}
\begin{equation}
\begin{minipage}{80pt}
\includegraphics[width=80pt]{dblpass-1}
\end{minipage}\quad\overset{\raisebox{2pt}{\scalebox{0.8}{$\Delta_4$}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=80pt]{dblpass-f}
\end{minipage}
\end{equation}
\end{subequations}
Define the \emph{$\Delta$--move} to be any of the above.
\end{prop}
\begin{proof}
\begin{description}
\item[$\Delta_1\Rightarrow \Delta_2$]
\begin{multline*}
\psfrag{a}[c]{}\psfrag{b}[c]{}\psfrag{c}[c]{}
\begin{minipage}{80pt}
\includegraphics[width=75pt]{Borr-nbd-1}
\end{minipage}\ \ \ \overset{\text{zoom in}}{\begin{minipage}{30pt}\includegraphics[width=30pt]{fluffyarrow}\end{minipage}}\quad
\begin{minipage}{90pt}
\includegraphics[width=90pt]{Borr-nbd-2}
\end{minipage}
\ \ \ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{surgery}}}}{\Longleftrightarrow}\ \
\begin{minipage}{90pt}
\includegraphics[width=90pt]{Borr-nbd-3}
\end{minipage}\\
\overset{\raisebox{2pt}{\scalebox{0.8}{$\Delta_1$}}}{\Longleftrightarrow}\
\begin{minipage}{85pt}
\includegraphics[width=85pt]{Borr-nbd-4}
\end{minipage}\ \ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{surgery}}}}{\Longleftrightarrow}\
\begin{minipage}{85pt}
\includegraphics[width=85pt]{Borr-nbd-f}
\end{minipage} \ \ \overset{\text{zoom out}}{\begin{minipage}{30pt}\includegraphics[width=30pt]{fluffyarrow}\end{minipage}}\ \
\begin{minipage}{75pt}
\psfrag{a}[c]{}\psfrag{b}[c]{}\psfrag{c}[c]{}
\includegraphics[width=70.5pt]{BorrBands-4}
\end{minipage}
\end{multline*}
\item[$\Delta_2\Rightarrow \Delta_3$]
$$\psfrag{a}[c]{}\psfrag{b}[c]{}\psfrag{c}[c]{}
\begin{minipage}{80pt}
\includegraphics[width=75pt]{DeltaY-1}
\end{minipage}\quad\overset{\raisebox{2pt}{\scalebox{0.8}{$\Delta_2$}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=75pt]{DeltaY-2}
\end{minipage}
\quad\overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=75pt]{DeltaY-3}
\end{minipage}$$
\item[$\Delta_3\Rightarrow \Delta_4$]
\begin{multline*}
\begin{minipage}{80pt}
\includegraphics[width=80pt]{dblpass-1}
\end{minipage}\quad\overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=80pt]{dblpass-2}
\end{minipage}\\
\overset{\raisebox{2pt}{\scalebox{0.8}{$\Delta_3$}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=80pt]{dblpass-3}
\end{minipage}\quad\overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=80pt]{dblpass-f}
\end{minipage}
\end{multline*}
\item[$\Delta_4\Rightarrow \Delta_1$]\begin{multline*}
\begin{minipage}{80pt}
\includegraphics[width=70pt]{clasporder-1}
\end{minipage}\quad\overset{\raisebox{2pt}{\scalebox{0.8}{\text{surgery}}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=70pt]{clasporder-3}
\end{minipage}\quad\overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=70pt]{clasporder-4}
\end{minipage}\\
\overset{\raisebox{2pt}{\scalebox{0.8}{$\Delta_4$}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=70pt]{clasporder-5}
\end{minipage}
\quad\overset{\raisebox{2pt}{\scalebox{0.8}{\text{surgery}}}}{\Longleftrightarrow}\quad
\begin{minipage}{80pt}
\includegraphics[width=70pt]{clasporder-2}
\end{minipage}
\end{multline*}
\end{description}
\end{proof}
\subsection{The space $\mathcal{C}$ of $A$--coloured $Y$--claspers}
A $Y$--clasper with leaves $A_{1,2,3}$ in the complement of an $A$--coloured Seifert surface is \emph{coloured $(a_1,a_2,a_3)\in A^3$} if $\bar\rho(A_{1,2,3})=a_{1,2,3}$ correspondingly (recall that the trivalent vertex and the leaves are oriented counterclockwise). Write the set of $(a_1,a_2,a_3)$--coloured $Y$--claspers in $A$--coloured Seifert surface complements as $\lblY{a_1}{a_2}{a_3}{32}$. Inserting a half-twist in an edge corresponds to inverting the colour of the leaf adjacent to that edge. We may formally add (sets of) coloured claspers over $\mathds{N}$ by taking their disjoint union: $\lblY{a_1}{a_2}{a_3}{32}+\lblY{b_1}{b_2}{b_3}{32}$ denotes the set of pairs of claspers in $A$--coloured Seifert surface complements, one of which is coloured $(a_1,a_2,a_3)$, and the other $(b_1,b_2,b_3)$. The identity element is the empty $Y$--clasper, \textit{i.e.} nothing at all, written as $0\in\mathcal{C}$. This monoid of formal sums is denoted $\mathcal{C}$.\par
We write
\begin{equation}\sum_{i=1}^{N_1}n_i\lblY{a_i^1}{b^1_i}{c^1_i}{43}\,\sim_{\bar\rho}\,\sum_{i=1}^{N_2}n_i\lblY{a_i^2}{b^2_i}{c^2_i}{43}\end{equation}
\noindent if any $A$--coloured Seifert surface $(F,\bar\rho)$ is $\bar\rho$--equivalent to any $A$--coloured Seifert surface $(F^\prime,\bar\rho^\prime)$ obtained from $(F,\bar\rho)$ through a finite sequence of $Y$--clasper surgeries, deletion of an element in $\sum_{i=1}^{N_1}n_i\lblY{a_i^1}{b^1_i}{c^1_i}{43}$, and insertion of an element in $\sum_{i=1}^{N_2}n_i\lblY{a_i^2}{b^2_i}{c^2_i}{43}$, and also the converse.
Define a homomorphism
\begin{equation}
\begin{aligned}
\Phi\colon\thinspace \mathcal{C}\qquad &\longrightarrow\qquad \bigwedge\nolimits^{ 3} A\\
\sum_{i=1}^{k}n_i\lblY{a_1^i}{a_2^i}{a_3^i}{43} &\mapsto\ \ \sum_{i=1}^k n_i\left(a_1^i\wedge a_2^i\wedge a_3^i\right).
\end{aligned}
\end{equation}
By abuse of terminology, $\Phi(C)$ means $\Phi$ of its class in $\mathcal{C}$.\par
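As a hedged illustration of this definition (not needed in what follows): a single $Y$--clasper coloured $(a,b,c)$ has $\Phi$--image $a\wedge b\wedge c$, while the clasper obtained from it by inserting a half-twist in the edge adjacent to the first leaf is coloured $(-a,b,c)$ and has $\Phi$--image
\begin{equation*}
(-a)\wedge b\wedge c=-\,a\wedge b\wedge c,
\end{equation*}
so that the disjoint union of these two claspers lies in $\ker\Phi$.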
\begin{prop}[Proof in Section {\ref{SS:ClasperProof}}]\label{P:Y-barrho}
The relation $\sim_{\bar\rho}$ is an equivalence relation, and $\mathcal{C}/\sim_{\bar\rho}$ is an abelian group.
The map $\Phi$ descends to an isomorphism of abelian groups
\begin{equation}\hat{\Phi}\colon\thinspace\ \mathcal{C}/\sim_{\bar\rho}\quad \longrightarrow\quad \bigwedge\nolimits^3 A.\end{equation}
\end{prop}
\subsection{The $Y$--obstruction}
If for two $G$--coloured knots $(K_{1,2},\rho_{1,2})$ there exists a Seifert surface $F_{1}$ for $K_{1}$ and a set of $Y$--claspers $C\subset E(F_1)$ such that surgery on $C$ gives $(K_2,\rho_2)$, then the \emph{$Y$--obstruction of $(K_{1,2},\rho_{1,2})$} is defined to be
\begin{equation}Y((K_1,\bar\rho_1),(K_{2},\bar\rho_2))\ass \Phi(C).\end{equation}
\begin{lem}
The $Y$--obstruction $Y((K_1,\bar\rho_1),(K_{2},\bar\rho_2))$ does not depend on the choice of $Y$--clasper $C$ in its definition.
\end{lem}
\begin{proof}
If surgery around $C_1\subset E(F_1)$ and surgery around $C_2\subset E(F_1)$ both give $(F_2,\bar\rho_2)$, then surgery around $C_1\cup \bar{C}_2$ gives back $(F_1,\bar\rho_1)$, where $\bar{C}_2$ is the result of inserting a half twist in one edge of each $Y$--clasper in $C_2$. But by \cite[Lemma 3.2]{Mas03} (see also \cite[Section 4.3]{Tur84}), $[A_1^i]\wedge [A_2^i]\wedge [A_3^i]=0\in \bigwedge^3 H_1(E(F))$, where $[A^i_{1,2,3}]$ are homology classes representing leaves of $Y$--claspers in $C_1\cup \bar{C}_2$. A fortiori $\Phi(C_1\cup \bar{C}_2)=0$.
\end{proof}
In the remainder of this section we prove that the $Y$--obstruction is independent of the choice of Seifert surface used in its construction.\par
\begin{defn}
A \emph{weak band projection} of a knot $K$ is a Seifert surface $F$ for $K$ and a projection of an identification
\[D^2\cup B_1\cup\cdots \cup B_{2g}\to\ F\]
where $D^2$ and the $B_i$ are disks. Moreover, we require $B_i\cap B_j=\emptyset$ for $i\neq j$. We write $\partial B_i{\text{=\rm \raisebox{0.03ex}{:} }}\, \alpha_i\gamma_i\beta_i\gamma_i^{\prime -1}$ with $D^{2}\cap B_i{\text{=\rm \raisebox{0.03ex}{:} }}\, \alpha_i\cup\beta_i$. A weak band projection is called a \emph{band projection} (see \textit{e.g.} \cite[Chapter 8B]{BZ03}) if
\[
\partial D^2{\text{=\rm \raisebox{0.03ex}{:} }}\ \alpha_1\delta_1\beta_2^{-1}\delta_2\beta_1^{-1}\delta_3\alpha_2\delta_4\cdots\alpha_{2g-1}\delta_{4g-3}\beta_{2g}^{-1}\delta_{4g-2}\beta_{2g-1}^{-1}\delta_{4g-1}\alpha_{2g}\delta_{4g}.
\]
\noindent Note that the bands of a weak band projection are oriented, and that it induces a basis for $H_1(F)$, and therefore also for $H_1(E(F))$. See Figure \ref{F:bandproj}.
\end{defn}
\begin{figure}
\caption{A band projection.}
\label{F:bandproj}
\end{figure}
Any ambient isotopy of $F$ can be realized by a sequence of band slides for any weak band projection of $F$ (see \textit{e.g.} \cite{MosXX}). A dual basis element $\xi_i\in H_1(E(F))$ is associated to each band, and to it an entry $v_i=\bar\rho(\xi_i)$ of the colouring vector. If all orientations are counterclockwise (other cases are analogous), the band-slide of $B_1$ over $B_2$ is realized by the following local picture.
\begin{equation}\label{E:pileY}
\begin{minipage}{100pt}
\psfrag{A}[c]{\footnotesize$a$}\psfrag{B}[c]{\footnotesize$b$}
\includegraphics[width=100pt]{pileY-n1}
\end{minipage}\quad \overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\quad
\begin{minipage}{130pt}
\psfrag{A}[c]{\footnotesize$a$}\psfrag{B}[l]{\footnotesize$b-a$}
\includegraphics[width=100pt]{pileY-n2}
\end{minipage}.
\end{equation}
Zoom in:
\begin{equation}\label{E:pile}
\psfrag{B}[c]{$B_1$}\psfrag{C}[c]{$B_2$}
\raisebox{10pt}{\begin{minipage}{60pt}
\includegraphics[width=55pt]{pile8-1}
\end{minipage}}
\quad
\overset{\raisebox{2pt}{\scalebox{0.8}{\text{Move 8}}}}{\Longleftrightarrow}\quad
\ \raisebox{10pt}{\begin{minipage}{60pt}
\includegraphics[width=60pt]{pile8-2}
\end{minipage}}
\quad\ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{unzip}}}}{\Longleftrightarrow}\quad
\ \raisebox{10pt}{\begin{minipage}{60pt}
\includegraphics[width=60pt]{pile8-3}
\end{minipage}}
\end{equation}
\noindent where `unzip' refers to the move of \cite[Definition 3.12]{Hab00}.\par
For each $Y$--clasper in $\lblY{b}{c}{d}{27.5}$ whose leaf clasped $B_2$, we now have two $Y$--claspers in $\lblY{a}{c}{d}{27.5}$ and in
$\begin{minipage}{30pt}\psfrag{a}[c]{\small$b-a$}\psfrag{b}[c]{\small$c$}\psfrag{c}[c]{\small$d$}\includegraphics[width=30pt]{labeledYz}\end{minipage}$ correspondingly. The $\Phi$--image is unchanged.\par
We next show that the $Y$--obstruction is invariant under stabilization. A $1$--handle attachment to $F$ locally looks, up to reflection, as in Figure \ref{F:genincrease}. The only possible contributions to the $Y$--obstruction come from linkage with $B_1^{\text{new}}$. But the loop which rings around $B_1^{\text{new}}$ is in $\ker\bar\rho$, and so any $Y$--clasper which clasps $B_1^{\text{new}}$ is in $\ker\Phi$.
\begin{figure}
\caption{Local picture of a $1$--handle attachment (stabilization).}
\label{F:genincrease}
\end{figure}
\subsection{Null-twists don't change the $Y$--obstruction}\label{SS:Y-nulltwist}
Let $(K_{1,2},\rho_{1,2})$ be a pair of $S$--equivalent $G$--coloured knots which are related by a sequence of null-twists. The goal of this section is to show that $Y((K_1,\bar\rho_1),(K_2,\bar\rho_2))$ vanishes. Let $F_{1,2}$ be Seifert surfaces for $K_{1,2}$ correspondingly. By the tubing construction, we assume the null-twists to be between bands of $F_1$. As in Section \ref{SS:S-equiv-matrix}, we may assume without loss of generality that there exist bases $\set{x^{1,2}_1,\ldots,x^{1,2}_{2g}}$ for $H_1(F_{1,2})$ correspondingly, which give rise to identical Seifert matrices. In this section, each time we stabilize $F_1$ we automatically stabilize $F_2$ in the same way, and each time we change the basis of $H_1(F_1)$ we automatically change the basis of $H_1(F_2)$ in the same way. The colouring vectors with respect to $\left(F_{1,2},\set{x^{1,2}_1,\ldots,x^{1,2}_{2g}}\right)$ also coincide because null-twists don't change the colouring vector.
Define a \emph{$Y_0$-move} to be a set of $\Delta$--moves realized as surgery around a set of $Y$--claspers in $\ker\Phi$.\par
The proof consists of three steps. First, for a chosen basis $\mathcal{B}$ of $H_1(F_1)$, we arrange by tube-equivalence for all non-zero entries in the colouring vector to be elements of $\mathcal{B}$, up to sign. Next, gather the null-twists together into a local picture by $Y_0$-moves. Finally, trivialize this local picture by $Y_0$-moves.
\subsubsection{Step 1: Shorten Words}\label{SSS:Shorten}
The goal of this section is to present an algorithm to generate the following output from the following input.
\begin{description}
\item[Input] A band projection of a Seifert surface, together with an ordered basis $\mathcal{B}\ass \set{b_1,\ldots,b_r}$ for $A$.
\item[Output] A band projection of a Seifert surface, with every non-zero entry of the corresponding colouring vector in $\mathcal{B}$, up to sign.
\end{description}
Carry out the procedure as follows. Let $\bar{\mathcal{B}}$ denote $\set{\pm b_1,\ldots,\pm b_r}$. Write the word length of an element $a\in A$ with respect to $\bar{\mathcal{B}}$ as $w_{\bar{\mathcal{B}}}(a)$. Denote by $\mathcal{V}$ the set of colouring vectors coming from band projections. A colouring vector $V\in\mathcal{V}$ has a partition into pairs $\set{(v_{2i-1},v_{2i})}_{1\leq i\leq g}$. Define a partial order $\prec$ on $\mathcal{V}$ by ordering its elements first by the lexicographical partial order on the word lengths of their entries $w_{\bar{\mathcal{B}}}(v_i)$, and then by the lexicographical partial order on the total word lengths of their pairs $w_{\bar{\mathcal{B}}}(v_{2i-1})+w_{\bar{\mathcal{B}}}(v_{2i})$.
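As a hedged numerical illustration of the word length (assuming, say, $A\simeq\left(\mathds{Z}/5\mathds{Z}\right)^2$ with ordered basis $\mathcal{B}=\set{b_1,b_2}$):
\begin{equation*}
w_{\bar{\mathcal{B}}}(0)=0,\qquad
w_{\bar{\mathcal{B}}}(b_1)=w_{\bar{\mathcal{B}}}(-b_2)=1,\qquad
w_{\bar{\mathcal{B}}}(2b_1-b_2)=3,
\end{equation*}
so the elements of word length at most $1$ are precisely $0$ and the elements of $\bar{\mathcal{B}}$, matching the Output specification above.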
If $w_{\bar{\mathcal{B}}}(v_i)\leq 1$ for $i=1,\ldots,2g$ then we are done. Otherwise there exists an entry in the colouring vector, which we assume without loss of generality to be $v_{2g}$, such that $w_{\bar{\mathcal{B}}}(v_{2g})\geq w_{\bar{\mathcal{B}}}(v_j)$ for $j=1,\ldots,2g$, and $w_{\bar{\mathcal{B}}}(v_{2g})> 1$.
Choose an element $b\in\bar{\mathcal{B}}$ such that $w_{\bar{\mathcal{B}}}(v_{2g}+b)<w_{\bar{\mathcal{B}}}(v_{2g})$.\par
Recall that $ta$ (as opposed to $t\cdot a$) simply means ``left-multiply $a$ by $t$''. Because $\bar\rho$ is surjective, there exists an oriented based loop $C\in \pi$ bounding a disc $D$ with $\bar\rho(C)= \frac{t}{t-1}\cdot b$. Form a cylinder $Z\subset E(F)$ with $\partial Z=(C\times [0,1])\cup (D\times\{0,1\})$. One may imagine a bunch of bands passing through a pipe $C\times [0,1]$. Stabilize $F$ by adding bands $B_{1,2}^{\text{new}}$ where $B_1^{\text{new}}$ links $Z$, immediately to the right of $B_{2g}$, the band corresponding to $v_{2g}$.
\begin{equation}\label{E:stabilizecol}
\psfrag{e}[c]{\footnotesize$ta$}
\begin{minipage}{70pt}
\psfrag{Z}[c]{$Z$}\psfrag{C}[c]{$C$}
\includegraphics[height=45pt]{Lblsimple-2}
\end{minipage} \overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\
\begin{minipage}{105pt}
\psfrag{a}[c]{\footnotesize$ta$}\psfrag{d}[c]{\footnotesize$tatbt^{-1}$}
\includegraphics[height=45pt]{Lblsimple-1a}
\end{minipage}\overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\ \ \ \begin{minipage}{114pt}
\psfrag{y}[c]{\footnotesize\phantom{x}$b$}
\psfrag{B}[c]{$B_1^{\text{new}}$}\psfrag{A}[c]{$B_2^{\text{new}}$}
\includegraphics[height=45pt]{Lblsimple-0}
\end{minipage}
\end{equation}
Now slide bands as follows (compare with \cite[Section 4.2.2]{KM09})
\begin{gather*}
\begin{minipage}{160pt}
\scriptsize \psfrag{a}[r]{$v_{2g-1}$}\psfrag{b}{$v_{2g}$}\psfrag{c}{$b$}\psfrag{d}{$0$}
\includegraphics[width=160pt]{addlabels-1n}
\end{minipage}\quad \Longleftrightarrow\ \ \
\begin{minipage}{160pt}
\scriptsize \psfrag{a}[r]{$v_{2g-1}$}\psfrag{b}[c]{$b$}\psfrag{c}[c]{$v_{2g}+b$}\psfrag{d}{$0$}
\includegraphics[width=160pt]{addlabels-2n}
\end{minipage}\\
\Longleftrightarrow\quad
\begin{minipage}{145pt}\scriptsize
\scriptsize
\psfrag{b}[c]{$v_{2g-1}$}\psfrag{a}{$v_{2g}+b$}
\includegraphics[width=145pt]{addlabels-3n}
\end{minipage}\quad\Longleftrightarrow\quad
\begin{minipage}{145pt}\scriptsize
\psfrag{a}[r]{$v_{2g-1}$}\psfrag{b}{$v_{2g}+b$}\psfrag{c}{$b$}\psfrag{d}{$v_{2g-1}$}
\includegraphics[width=145pt]{addlabels-4n}
\end{minipage}
\end{gather*}
We obtain a colouring vector $V^{\text{new}}$ which satisfies $V^{\text{new}}\prec V$. Since word lengths are non-negative integers, this descent cannot continue indefinitely, so after finitely many iterations we are finished.
\subsubsection{Step 2: Bring null-twists together}
Parameterize each band $B_i$ in a band projection of $F$ as $I\times I$. We show that for each band, up to $Y_0$-moves, all null-twists may be assumed to take place in $I\times [\frac{1}{2},1]$, while everything else (linkage, twisting, and knotting) takes place in $I\times [0,\frac{1}{2})$.
\begin{lem}\label{L:clasprho}
A leaf may be moved past a null-twist by a $Y_0$-move. See Figure \ref{F:clasprho}.
\end{lem}
\begin{proof}
Write the null-twist, between bands $B_1,\ldots,B_k$ coloured $a_1,\ldots,a_k$ correspondingly, in terms of surgery on basic claspers.
Let $C\ass A_1\cup A_2\cup E$ be a basic clasper such that $A_2$ clasps $B_1$ and $A_1$ clasps a band $B_0$ with colour $a_0$.
Moving $A_2$ past a null-twist entails performing one $\Delta_1$-move for each clasper coming from the null-twist which clasps $B_1$. Each $\Delta$-move is realized by inserting a $Y$--clasper. The collective contribution of these $Y$--claspers to $\Phi$ is
\begin{equation}a_0\wedge a_1\wedge \sum_{i=2}^k a_i= -a_0\wedge a_1\wedge a_1 =0,\end{equation}
since the colours $a_1,\ldots,a_k$ of the bands involved in a null-twist sum to zero, so that $\sum_{i=2}^k a_i=-a_1$.
\end{proof}
\begin{figure}
\caption{Moving a leaf past a null-twist by a $Y_0$-move.}
\label{F:clasprho}
\end{figure}
\subsubsection{Step 3: Eliminate null-twists}
Having carried out the preceding steps, we arrive at a presentation of $(F_2,\bar\rho_2)$ in $(F_1,\bar\rho_1)$ by a collection of null-twists in a local picture which is a trivial braid between bands coloured by elements of $\bar{\mathcal{B}}\subset A$. The result of these null-twists is a braid in which every pair of bands has linking number zero. Our goal is to show that this braid is trivialized by $Y_0$-moves.\par
For a null-twist between bands $B_1,\ldots,B_k$ coloured $a_1,\ldots,a_k\in \bar{\mathcal{B}}$ correspondingly, there exists a partition $P$ of $\set{1,\ldots,k}$ such that for each part $S\in P$, both the sum $\sum_{i\in S}a_i$ vanishes, and also $a_i=\pm a_j$ for all $i,j\in S$. If for some null-twist $T$ this partition has more than $2$ parts, separate $T$ into smaller null-twists whose corresponding partitions have fewer parts, as in Figure \ref{F:threadsep}.
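As a hedged illustration of this condition: a null-twist between four bands coloured $(a_1,a_2,a_3,a_4)=(a,-a,b,-b)$ admits the partition
\begin{equation*}
P=\bigl\{\set{1,2},\,\set{3,4}\bigr\},\qquad a_1+a_2=0,\qquad a_3+a_4=0,
\end{equation*}
which has only two parts, so no separation is needed; for six bands coloured $(a,-a,b,-b,c,-c)$ with $c\neq\pm a,\pm b$, the corresponding partition has three parts, and the null-twist is separated as in Figure \ref{F:threadsep}.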
\begin{figure}
\caption{Separating a null-twist into smaller null-twists.}
\label{F:threadsep}
\end{figure}
Choose a pair of basis elements $a,b\in\mathcal{B}\cup\{0\}$. Using Lemma \ref{L:clasprho} and the fact that partitions corresponding to null-twists now have at most two parts, perform $Y_0$-moves to create a smaller local picture in which all null-twists are between bands labeled $\pm a$ and bands labeled $\pm b$. Because the wedge of any triple in $\set{\pm a,\pm b}$ vanishes, any $Y$--clasper which we insert in this local picture will be in $\ker\Phi$. Due to the vanishing of all linking numbers between bands, by Murakami--Nakanishi all crossings between such bands cancel up to $Y_0$-moves \cite{MN89}. Repeat for each pair of basis elements, until all null-twists are between bands which share the same colour. These cancel up to $Y_0$-moves, because the wedge of any triple in $\set{a,-a}$ is zero.
\subsection{Local Moves realized by null-twists}
To prove Theorem \ref{T:clasperprop}, it remains to show that any $Y_0$-move is realized by a null-twist. We adopt the typical clasper strategy of first identifying moves between $Y$--claspers which are realized by null-twists, and then proving that these suffice to realize any $Y_0$-move.\par
\begin{lem}\label{L:(0,a,b)}
\[\lblY{0}{a}{b}{27.5}\sim_{\bar\rho}\ 0.\]
\end{lem}
\begin{proof}
Realize the $\Delta_2$ move by the following sequence of ambient isotopy and null-twists (the dotted arc is labeled $0\in A$).
\begin{equation}
\psfrag{a}[c]{$a$}\psfrag{b}[c]{$b$}\psfrag{c}[c]{$0$}
\begin{minipage}{61pt}
\includegraphics[width=61pt]{BorrBands-1s}
\end{minipage}\ \ \Leftrightarrow\ \
\begin{minipage}{61pt}
\includegraphics[width=61pt]{BorrBands-2s}
\end{minipage}\ \ \Leftrightarrow\ \ \
\begin{minipage}{61pt}
\includegraphics[width=61pt]{BorrBands-3s}
\end{minipage}\ \ \Leftrightarrow\ \ \
\begin{minipage}{61pt}
\includegraphics[width=61pt]{BorrBands-4s}
\end{minipage}
\end{equation}
\end{proof}
\begin{lem}\label{L:(a,a,b)}
Setting $\bar{a}\ass a^{-1}$, we have $\lblY{\bar a}{a}{b}{30}\sim_{\bar\rho}\ 0$ and also $\lblY{a}{a}{b}{27.5}\sim_{\bar\rho}\ 0$.
\end{lem}
\begin{proof}
Realize the $\Delta_2$ move by the following sequence of ambient isotopy and null-twists.
\begin{equation}
\psfrag{a}[r]{$\pm a$}\psfrag{b}[c]{$a$}\psfrag{c}[c]{$b$}
\begin{minipage}{61pt}
\includegraphics[width=61pt]{BorrBands-1sr}
\end{minipage}\ \ \Leftrightarrow\quad\quad
\begin{minipage}{61pt}
\includegraphics[width=61pt]{BorrBands-2sr}
\end{minipage}\ \ \Leftrightarrow\quad\quad\ \begin{minipage}{61pt}
\includegraphics[width=61pt]{BorrBands-4sr}
\end{minipage}
\end{equation}
\end{proof}
\begin{lem}\label{L:clasp-pass}
The results of surgeries around the following two claspers in the complement of an $A$--coloured Seifert surface are $\bar\rho$--equivalent.
\begin{equation}
\begin{minipage}{130pt}
\includegraphics[width=130pt]{clasppass-1rn}
\end{minipage}\quad\overset{\raisebox{2pt}{\scalebox{0.8}{\text{clasp-pass}}}}{\Longleftrightarrow}\quad
\begin{minipage}{130pt}
\includegraphics[width=130pt]{clasppass-fn}
\end{minipage}
\end{equation}
This is Habiro's \emph{clasp-pass move}.
\end{lem}
\begin{proof}
\begin{multline}
\begin{minipage}{100pt}
\includegraphics[width=100pt]{clasppass-1rn}
\end{minipage}\quad\overset{\raisebox{2pt}{\scalebox{0.8}{\text{surgery}}}}{\Longleftrightarrow}\quad
\begin{minipage}{100pt}
\includegraphics[width=100pt]{clasppass-2a}
\end{minipage}\\
\overset{\raisebox{2pt}{\scalebox{0.8}{\text{null-twist}}}}{\Longleftrightarrow}\quad
\begin{minipage}{100pt}
\includegraphics[width=100pt]{clasppass-2b}
\end{minipage}\quad
\overset{\raisebox{2pt}{\scalebox{0.8}{\text{surgery}}}}{\Longleftrightarrow}\quad
\begin{minipage}{100pt}
\includegraphics[width=100pt]{clasppass-fn}
\end{minipage}
\end{multline}
\end{proof}
\begin{cor}\label{C:ClasperFramingReduce}
A full-twist in an edge of a clasper is realized by a null-twist.
\end{cor}
\begin{lem}\label{L:YPass}
The following local move is realized by null-twists.
$$
\begin{minipage}{40pt}
\includegraphics[height=35pt]{YPass-1}
\end{minipage}\Leftrightarrow\ \
\begin{minipage}{35pt}
\includegraphics[height=35pt]{YPass-f}
\end{minipage}
$$
\end{lem}
\begin{proof}
\begin{equation}
\begin{minipage}{30pt}
\includegraphics[height=30pt]{YPass-1}
\end{minipage}\overset{\raisebox{0.3pt}{\scalebox{0.6}{\text{Move 9}}}}{\Leftrightarrow}
\begin{minipage}{65pt}
\includegraphics[height=30pt]{YPass-2}
\end{minipage}\overset{\raisebox{0.3pt}{\scalebox{0.6}{\text{Lemma {\ref{L:(0,a,b)}}}}}}{\Leftrightarrow}\
\begin{minipage}{65pt}
\includegraphics[height=30pt]{YPass-3}
\end{minipage}\overset{\raisebox{0.3pt}{\scalebox{0.6}{\text{Move 9}}}}{\Leftrightarrow}\
\begin{minipage}{30pt}
\includegraphics[height=30pt]{YPass-f}
\end{minipage}
\end{equation}
\end{proof}
\subsection{Leaves clasping single bands}
\subsubsection{The leaf-shepherd procedure}\label{SSS:Shepherd}
The goal of this section is to present an algorithm to generate the following output from the following input.
\begin{description}
\item[Input] A band projection of an $A$--coloured Seifert surface $(F,\bar\rho)$, together with a pair of claspers $C_{1,2}\subset E(F)$ with distinguished leaves $A^{1,2}$ each of which rings a single band, such that the colours of the bands which $C_{1,2}$ clasp are either mutually inverse or the same.
\item[Output] A clasper $C^\prime$ with distinguished leaf $A^\prime$ which clasps a single band in the same band projection of $(F,\bar\rho)$, related to $C_{1,2}$ as in Figure \ref{F:shepherd}.
\end{description}
\begin{figure}
\caption{The leaf-shepherd procedure.}
\label{F:shepherd}
\end{figure}
Let $B_{1,2}$ denote the bands clasped by $A^{1,2}$ correspondingly, coloured $a_{1,2}\in A$ correspondingly. Assume without loss of generality that $B_2$ is the left band of a $1$--handle. If $B_{1,2}$ are different, and if they are not adjacent along $D^2$, then the first step is to bring them close together. Our graphical convention in what follows is to write the name of the leaf above its adjacent edge.
\newcounter{stepnum}
\begin{list}{\textbf{Step \arabic{stepnum}:}}
{\setlength{\leftmargin}{.5in}
\setlength{\rightmargin}{.1in}
\usecounter{stepnum}}
\item If $B_{1,2}$ are different, we find a mapping class $\tau\in\mathrm{MCG}(F)$ whose action gives a band projection for $F$ in which $C_{1,2}$ clasp the same band. Lemma \ref{L:(0,a,b)} is used to kill the excess $Y$--claspers we create along the way.
\newcounter{casenum}[stepnum]
\begin{list}{\textbf{Case (\roman{casenum}):}}
{\setlength{\leftmargin}{.5in}
\setlength{\rightmargin}{.1in}
\usecounter{casenum}}
\item If $B_1$ and $B_2$ are the two bands of the same handle, choose $\tau$ to be the following Dehn twist:
\begin{equation}
\psfrag{A}[c]{\footnotesize$B_1$}\psfrag{B}[c]{\footnotesize$\phantom{B}B_2$}\psfrag{a}[cb]{\footnotesize$A^1$}\psfrag{b}[cb]{\footnotesize$A^2$}
\begin{minipage}{100pt}
\includegraphics[width=100pt]{twinY-1}
\end{minipage}\ \ \ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\ \ \
\left\{\, \begin{array}{c}
\raisebox{20pt}{\begin{minipage}{130pt}
\includegraphics[width=100pt]{twinY-2}
\end{minipage}}\raisebox{1cm}{\text{\Small if $a_1=a_2$;}}\\[0.6cm]
\raisebox{20pt}{\begin{minipage}{130pt}
\includegraphics[width=130pt]{twinY-3}
\end{minipage}}\ \ \raisebox{1cm}{\text{\Small if $a_1+a_2=0$.}}
\end{array}\right.
\end{equation}
\item Otherwise, if $B_{1,2}$ belong to different $1$--handles, let $B$ denote the band left adjacent to the band $B_2$ clasped by $A^2$. Explicitly, if we write $D^2\cap B=\alpha\cup\beta$ and $D^2\cap B_{1,2}=\alpha_{1,2}\cup\beta_{1,2}$, then $\partial D^2$ contains a line segment of the form $x\delta\alpha_2\beta^\prime$ with $x=\alpha$ or $x=\beta^{-1}$. Repeat the following step, until $x=\alpha_1$ if $a_1=a_2$, or until $x=\beta_1^{-1}$ if $a_1=a_2^{-1}$. Slide $B$ over $B_2$'s $1$--handle as follows:
\scalebox{0.76}{\parbox{\textwidth}{
\begin{multline}
\psfrag{A}[c]{$B$}\psfrag{B}[c]{$B_2$}\psfrag{a}[c]{\phantom{a}$A^2$}
\begin{minipage}{138pt}
\includegraphics[height=65pt]{deltapair-1proc}
\end{minipage}
\hspace{-9pt}\overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\
\begin{minipage}{155pt}
\includegraphics[height=65pt]{deltapair-2proc}
\end{minipage}\\[0.3cm]
\psfrag{A}[c]{$B$}\psfrag{B}[c]{$B_2$}\psfrag{a}[c]{$\phantom{a}A^2$}
\ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{Move 8}}}}{\Longleftrightarrow}\quad
\begin{minipage}{155pt}
\includegraphics[height=65pt]{deltapair-3proc}
\end{minipage}
\end{multline}}}
\noindent Unzip the resulting clasper \cite[Definition 3.12]{Hab00}. Finally, when $B_1$ becomes adjacent to $B_2$ in the prescribed fashion, slide $B_2$ over $B_1$ (the diagram is of one possible configuration of the ends of the bands--- other possible configurations are handled analogously):
\begin{equation}
\psfrag{A}[c]{\footnotesize$B_1$}\psfrag{B}[c]{\footnotesize$B_2$}\psfrag{a}[cb]{\footnotesize$A^1$}\psfrag{b}[cb]{\footnotesize$A^2$}
\begin{minipage}{100pt}
\includegraphics[width=100pt]{pileY-1}
\end{minipage}\quad \overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\quad
\begin{minipage}{130pt}
\includegraphics[width=100pt]{pileY-3}
\end{minipage}.
\end{equation}
\end{list}
This slide sets the colour of $B_1$ to $0\in A$. We are left with the following local picture:
\begin{equation}
\raisebox{10pt}{\begin{minipage}{60pt}
\psfrag{A}[c]{\phantom{a}$A^1$}\psfrag{B}[c]{$B_1$}\psfrag{C}[c]{$B_2$}
\includegraphics[width=55pt]{pile8-1}
\end{minipage}}
\quad
\overset{\raisebox{2pt}{\scalebox{0.8}{\text{Move 8}}}}{\Longleftrightarrow}\quad
\psfrag{A}[c]{\phantom{a}$A^{1\prime}$}\psfrag{B}[c]{$B_1$}\psfrag{C}[c]{$B_2$}\psfrag{D}[c]{\phantom{a}$A^{1\prime\prime}$}
\ \raisebox{10pt}{\begin{minipage}{60pt}
\includegraphics[width=60pt]{pile8-2}
\end{minipage}}
\quad\ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{unzip}}}}{\Longleftrightarrow}\quad
\ \raisebox{10pt}{\begin{minipage}{60pt}
\includegraphics[width=60pt]{pile8-3}
\end{minipage}}
\end{equation}
\noindent Delete the clasper which contains $A^{1\prime}$ using Lemma \ref{L:(0,a,b)} and rename $A^1$ as $A^\prime$. We take $\tau$ to be the mapping class corresponding to this step.
\item Now that $A^{1,2}$ clasp a common band, shepherd them together:
\begin{equation}
\begin{minipage}{61pt}
\includegraphics[width=61pt]{sweep-1}
\end{minipage}\overset{\raisebox{2pt}{\scalebox{0.6}{\text{isotopy}}}}{\Leftrightarrow}
\begin{minipage}{61pt}
\includegraphics[width=61pt]{sweep-2}
\end{minipage}\overset{\raisebox{2pt}{\scalebox{0.6}{\text{Move 8}}}}{\Leftrightarrow}
\begin{minipage}{61pt}
\includegraphics[width=61pt]{sweep-3}
\end{minipage}\overset{\raisebox{2pt}{\scalebox{0.6}{\text{Lemma {\ref{L:(0,a,b)}}}}}}{\Leftrightarrow}
\begin{minipage}{61pt}
\includegraphics[width=61pt]{sweep-4}
\end{minipage}.
\end{equation}
\noindent Once $A^{1,2}$ are adjacent, create a box using Move 8.
\item Act by $\tau^{-1}$ to return to the band projection with which we started.
\end{list}
\subsubsection{Adding claspers geometrically}
\begin{lem}\label{L:monogamousclasper}
Let $C_{1,2}\ass\, A^{1,2}_{1,2,3}\cup E^{1,2}_{1,2,3}$ be a pair of $Y$--claspers in the complement of an $A$--coloured Seifert surface $(F,\bar\rho)$ in band projection, whose leaves $A_{1,2,3}^{1,2}$ clasp single bands $B^{1,2}_{1,2,3}$ correspondingly, with $(B_1^1,B_1^2,B_2^{1,2},B_3^{1,2})$ coloured $(a,b,c,d)$ correspondingly. There exists a $Y$--clasper $C_{3}\ass\,A^{3}_{1,2,3}\cup E^3_{1,2,3}$ in the complement of $(F,\bar\rho)$ whose leaves $A_{1,2,3}^3$ clasp bands $B^3_{1,2,3}$ coloured $(a+b)$, $c$, and $d$ correspondingly, such that surgery around $C_1\cup C_2$ and surgery around $C_3$ give $\bar\rho$--equivalent $A$--coloured Seifert surfaces.
\end{lem}
\begin{proof}
Shepherd leaves to bring together $A_{2,3}^{1,2}$ coloured $c$ and $d$ (Section \ref{SSS:Shepherd}). If any one of the edges $E_{1,2,3}^{1,2}$ crosses under an edge of another clasper, or under a band, use Lemma \ref{L:clasp-pass} or \ref{L:YPass} to change that crossing, to make $E_{1,2,3}^{1,2}$ cross over all edges and over all bands. Untie $E_{1,2,3}^{1,2}$ (Lemma \ref{L:clasp-pass}), and remove all full twists in them (Corollary \ref{C:ClasperFramingReduce}). Push the two boxes past the trivalent vertex as follows:
\begin{equation}
\psfrag{a}[c]{$a$}\psfrag{b}[c]{$b$}\psfrag{c}[c]{$c$}\psfrag{d}[c]{$d$}
\begin{minipage}{90pt}
\includegraphics[height=61pt]{YPush-1}
\end{minipage}\overset{\raisebox{2pt}{\scalebox{0.6}{\text{Lemma \ref{L:(0,a,b)}}}}}{\Longleftrightarrow}
\begin{minipage}{90pt}
\includegraphics[height=61pt]{YPush-2}
\end{minipage}\overset{\raisebox{2pt}{\scalebox{0.6}{\text{Move 11}}}}{\Longleftrightarrow}
\begin{minipage}{61pt}
\includegraphics[height=61pt]{YPush-3}
\end{minipage}.
\end{equation}
This unites the two pairs of leaves $A_2^{1,2}$ and $A_3^{1,2}$ into single leaves which we suggestively call $A^3_2$ and $A^3_3$ correspondingly, and the two pairs of edges $E_{2}^{1,2}$ and $E_3^{1,2}$ into single edges which we suggestively call $E^3_2$ and $E^3_3$ correspondingly.\par
As in Step 1 of Section \ref{SSS:Shepherd}, bring $A_1^{1,2}$ to adjacent positions along $D^2$. Slide $A_1^2$ over $A_2^2$, and resolve as follows (we draw the procedure in the case that $A_1^{1,2}$ belong to the same handle. The remaining case is analogous):
\begin{multline}
\psfrag{A}[c]{\footnotesize$B^1_1$}\psfrag{B}[c]{\phantom{a}\footnotesize$B^2_1$}\psfrag{a}[cb]{\footnotesize$A^1_1$}\psfrag{b}[cb]{\footnotesize$A^2_1$}
\begin{minipage}{100pt}
\includegraphics[width=100pt]{twinY-1}
\end{minipage}\quad\ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\quad
\begin{minipage}{100pt}
\includegraphics[width=100pt]{twinY-2}
\end{minipage}\\[0.4cm]
\psfrag{A}[c]{\footnotesize$B^1_1$}\psfrag{B}[c]{\footnotesize$\,B^2_1$}\psfrag{a}[cb]{\footnotesize$A^1_1$}\psfrag{b}[cb]{\footnotesize$A^2_1$}
\overset{\raisebox{2pt}{\scalebox{0.8}{\text{Move 8}}}}{\Longleftrightarrow}\quad
\begin{minipage}{100pt}
\includegraphics[width=100pt]{twinY-4a}
\end{minipage}
\ \ \ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{Lemma \ref{L:YPass}}}}}{\Longleftrightarrow}\quad
\begin{minipage}{110pt}
\includegraphics[width=110pt]{twinY-4}
\end{minipage}\\[0.4cm]
\overset{\raisebox{2pt}{\scalebox{0.8}{\text{Lemma \ref{L:(0,a,b)}}}}}{\Longleftrightarrow}\quad
\begin{minipage}{110pt}
\psfrag{A}[c]{\footnotesize$B^1_1$}\psfrag{B}[c]{\footnotesize$\,B^2_1$}\psfrag{a}[cb]{\footnotesize$A^1_1$}\psfrag{b}[cb]{\footnotesize$A^2_1$}
\includegraphics[width=110pt]{twinY-6}
\end{minipage}
\end{multline}
In the above sequence, $A^1_1$ was broken up into two leaves, which we sloppily collectively called $A_1^1$. This sloppiness causes no harm because of the next step.
Shepherd $A_{1}^1$ and $A_{1}^2$ together as in Step 2 of the procedure in Section \ref{SSS:Shepherd}, and manipulate the resulting local picture as follows:
\begin{equation}
\psfrag{E}[c]{\footnotesize$E^3_2$}\psfrag{F}[c]{\footnotesize$E^3_3$}\psfrag{A}[c]{\footnotesize$A_1^1$} \psfrag{B}[c]{\footnotesize$A_1^2$}
\begin{minipage}{61pt}
\includegraphics[width=61pt]{3boxfun-1}
\end{minipage}\overset{\raisebox{2pt}{\scalebox{0.6}{\text{unite-box}}}}{\Longleftrightarrow}
\begin{minipage}{61pt}
\includegraphics[width=61pt]{3boxfun-2}
\end{minipage}\ \ \overset{\raisebox{2pt}{\scalebox{0.6}{\text{Move 4}}}}{\Longleftrightarrow}
\begin{minipage}{61pt}
\includegraphics[width=61pt]{3boxfun-3}
\end{minipage}\ \ \overset{\raisebox{2pt}{\scalebox{0.6}{\text{Moves 1,3}}}}{\Longleftrightarrow}\!
\hspace{-0.6cm}\begin{minipage}{61pt}
\includegraphics[width=61pt]{3boxfun-4}
\end{minipage}
\end{equation}
\noindent Finally, we are left with a single $Y$--clasper with three leaves: $A_1^1$, which we relabel $A^3_1$, which clasps a band coloured $a+b$, and $A_{2,3}^3$ which clasp bands coloured $c$ and $d$ respectively.
\end{proof}
\subsection{Leaves clasping multiple bands}
\begin{lem}\label{L:3wedge}
If $a\wedge b\wedge c=0\in\bigwedge^3 A$, then
\[\lblY{a}{b}{c}{27.5}\sim_{\bar\rho}\,0.\]
\end{lem}
\begin{proof}
Consider a $Y$--clasper $C\in \lblY{a}{b}{c}{27.5}$. Shorten words (Section \ref{SSS:Shorten}) with respect to an ordered basis $\mathcal{B}$ for $A$ which contains a maximal independent subset $S\subset \set{a,b,c}$. Use Move 8 and unzip to split $C$ into a collection of claspers, each of whose leaves clasps a single band. Each clasper $C^\prime$ in this collection which has a leaf which clasps a band coloured $d\notin S$ has a counterpart $C^{\prime\prime}$ whose corresponding leaf clasps a band coloured $-d$, and these cancel by Lemma \ref{L:monogamousclasper} combined with Lemma \ref{L:(0,a,b)}. We are left with claspers whose leaves clasp bands all of whose colours are in $S$, which cancel by Lemma \ref{L:(a,a,b)} because $S$ has cardinality at most $2$.
\end{proof}
\begin{lem}\label{L:Ydistribute}
\[\begin{minipage}{30pt}
\psfrag{a}[c]{\small$a+b$}\psfrag{b}[c]{\small$c$}\psfrag{c}[c]{\small$d$}\includegraphics[width=30pt]{labeledYz}\end{minipage}\
\sim_{\bar\rho}\lblY{a}{c}{d}{27.5}+ \lblY{b}{c}{d}{27.5}\]
\end{lem}
\begin{proof}
We show that any $A$--coloured Seifert surface $(F,\bar\rho)$ is $\bar\rho$--equivalent to any $A$--coloured Seifert surface $(F^\prime,\bar\rho^\prime)$ obtained from $(F,\bar\rho)$ through a finite sequence of $Y$--clasper surgeries, deletion of an element in $\begin{minipage}{30pt}
\psfrag{a}[c]{\small$a+b$}\psfrag{b}[c]{\small$c$}\psfrag{c}[c]{\small$d$}\includegraphics[width=30pt]{labeledYz}\end{minipage}$, and insertion of an element in $\lblY{a}{c}{d}{27.5}+ \lblY{b}{c}{d}{27.5}$. The converse follows analogously.\par
Consider $Y$--claspers $C_{1,2}\ass\, A^{1,2}_{1,2,3}\cup E^{1,2}_{1,2,3}$ in $\lblY{a}{c}{d}{27.5}$ and in $\lblY{b}{c}{d}{27.5}$ correspondingly, such that the colours of $A^{1,2}_{2,3}$ are $c$ and $d$ correspondingly. By Lemma \ref{L:3wedge} we may assume that $c\wedge d\neq 0$. As in the proof of Lemma \ref{L:3wedge}, word shorten with respect to an ordered basis $\mathcal{B}$ for $A$ which contains a maximal independent subset $S\subseteq \set{a,b,c,d}$, and then use Move 8 and unzip to split $A^1_2$, splitting $C_1$ into a collection of claspers each of which has a distinguished leaf which clasps a single band. The distinguished leaf of each clasper $C^\prime$ in this collection which clasps a band coloured $x\neq c$ has a counterpart $C^{\prime\prime}$ whose corresponding leaf clasps a band coloured $-x$, and these cancel by Lemma \ref{L:monogamousclasper} combined with Lemma \ref{L:(0,a,b)}. Only one clasper $C_1^\prime$ survives, whose distinguished leaf clasps a band labeled $c$. Repeat the above procedure to replace $C_2$ by a corresponding $Y$--clasper $C_2^\prime$, and combine $C_{1,2}^\prime$ using Section \ref{SSS:Shepherd}. Repeat for $A_{3}^{1,2}$. Repeat again for $A_1^{1,2}$, except that this time $C_1$ turns into a clasper whose distinguished leaf clasps a band coloured $a$, while $C_2$ turns into either $n$ claspers whose distinguished leaves clasp single bands labeled $\pm a$ if $b=\pm na$ for $n\in\mathds{N}$, or into a single clasper whose distinguished leaf clasps a band coloured $b$ otherwise. Combine these using Lemma \ref{L:monogamousclasper} to obtain $C\in \begin{minipage}{30pt}
\psfrag{a}[c]{\small$a+b$}\psfrag{b}[c]{\small$c$}\psfrag{c}[c]{\small$d$}\includegraphics[width=30pt]{labeledYz}\end{minipage}$.
\end{proof}
\subsection{Proof of Theorem {\ref{T:clasperprop}}}\label{SS:ClasperProof}
Theorem \ref{T:clasperprop} is equivalent to the statement that two $A$--coloured Seifert surfaces sharing the same Seifert matrix are $\bar\rho$--equivalent if and only if they are related by inserting a $Y$--clasper in $\ker\Phi$ (a $Y_0$-move). This is implied by Proposition \ref{P:Y-barrho}, which we now prove.
\begin{proof}[Proof of Proposition \ref{P:Y-barrho}]
To prove that $\sim_{\bar\rho}$ is an equivalence relation we must prove that it is transitive. $\lblY{a}{b}{c}{27.5}\sim_{\bar\rho}\lblY{d}{e}{f}{28.5}\sim_{\bar\rho}\lblY{g}{h}{i}{27.5}$ implies that
\begin{equation}\lblY{d}{e}{f}{28.5}+\lblY{\bar a}{b}{c}{30}\sim_{\bar\rho} \lblY{g}{h}{i}{27.5}+\lblY{\bar a}{b}{c}{30}.\end{equation}
Adding $\lblY{a}{b}{c}{27.5}$ to both sides implies, by Lemma \ref{L:Ydistribute}, that $\lblY{d}{e}{f}{28.5}\sim_{\bar\rho}\lblY{g}{h}{i}{27.5}$ as required. By Lemma \ref{L:Ydistribute}, $\lblY{\bar a}{b}{c}{30}$ is the inverse of $\lblY{a}{b}{c}{27.5}$, making $\mathcal{C}/\sim_{\bar\rho}$ into an abelian group. The map $\hat{\Phi}$ is well defined by Section \ref{SS:Y-nulltwist}, clearly surjective, and injective by Lemma \ref{L:3wedge}; therefore it is an isomorphism.
\end{proof}
\section{Coloured untying invariants}\label{S:untyinginvariants}
We construct invariants of $\rho$--equivalence classes and of $\bar\rho$--equivalence classes. In Sections \ref{S:r12} and \ref{S:A4} these will be used to bound from below the number of such classes, and to determine whether or not two given $G$--coloured knots $(K_{1,2},\rho_{1,2})$ are $\rho$--equivalent or $\bar\rho$--equivalent. In Section \ref{SS:su} we identify an analogue for $A$--coloured surfaces of the coloured untying invariant \cite[Section 6]{Mos06b}, and in Section \ref{SS:cu} we generalize the definition of the coloured untying invariant for covering spaces. The homological algebra parallels the treatment of Lannes and Latour \cite{LaLa75}, using methods in Hatcher \cite{Hat02}, and is condensed. The finitely generated abelian group $A$ is given the structure of a principal ideal ring, which by abuse of notation we also call $A$.
\subsection{An untying invariant for surfaces}\label{SS:su}
Let $(K,\rho)$ be a $G$--coloured knot. Choose a marked Seifert surface $\left(F,\set{x_1,\ldots,x_{2g}}\right)$ for $K$. By the Universal Coefficient Theorem, the colouring $\bar\rho\colon\thinspace H_1(E(F))\twoheadrightarrow A$ corresponds to a cohomology class $\bar\alpha\in H^1(E(F); A)$. Let $r$ be the rank of $A$ as a $\mathds{Z}$--module with presentation
\begin{equation}\label{E:split}
0\longrightarrow\mathds{Z}^r\overset{\iota}{\longrightarrow}\mathds{Z}^r \overset{\mathrm{p}}{\longrightarrow} A \longrightarrow 0.
\end{equation}
\noindent If it happens to be the case that $A$ is of the form $\left(\mathds{Z}/n\mathds{Z}\right)^r$, then $\iota$ is represented by the matrix $nI_r$, and $p$ is the `modulo $n$' map. For $k\in\{1,2,\ldots\}$, the above maps extend by linearity:
\begin{equation}
0\longrightarrow\left(\mathds{Z}^r\right)^k\overset{\iota}{\longrightarrow}\left(\mathds{Z}^r\right)^k \overset{\mathrm{p}}{\longrightarrow} A^k \longrightarrow 0.
\end{equation}
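For concreteness (a hedged example of short exact sequence \ref{E:split}, not used elsewhere): for $A\simeq\mathds{Z}/2\mathds{Z}\oplus\mathds{Z}/4\mathds{Z}$ one may take $r=2$ and
\begin{equation*}
0\longrightarrow\mathds{Z}^2\overset{\iota}{\longrightarrow}\mathds{Z}^2 \overset{\mathrm{p}}{\longrightarrow} \mathds{Z}/2\mathds{Z}\oplus\mathds{Z}/4\mathds{Z} \longrightarrow 0,
\qquad
\iota=\begin{pmatrix}2&0\\0&4\end{pmatrix},
\end{equation*}
with $\mathrm{p}$ reducing the first coordinate modulo $2$ and the second modulo $4$; the homogeneous case $A\approx\left(\mathds{Z}/n\mathds{Z}\right)^r$ described above corresponds to $\iota=nI_r$.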
Short exact sequence \ref{E:split} gives rise to a long exact sequence on homology
\[
\cdots\rightarrow H_2(E(F);A)\overset{\raisebox{2pt}{\scalebox{0.8}{$\beta_2$}}}{\rightarrow} H_1(E(F);\mathds{Z}^r)\overset{\raisebox{2pt}{\scalebox{0.8}{$\iota_\ast$}}}{\rightarrow} H_1(E(F);\mathds{Z}^r) \overset{\raisebox{2pt}{\scalebox{0.8}{$\mathrm{p}_\ast$}}}{\rightarrow} H_1(E(F); A)\rightarrow\cdots
\]
\noindent where $\beta_\ast$ is the Bockstein homomorphism on homology; and to the long exact sequence on cohomology
\[
\cdots\rightarrow H^1(E(F);\mathds{Z}^r)\overset{\iota^\ast}{\rightarrow} H^1(E(F);\mathds{Z}^r)\overset{\mathrm{p}^\ast}{\rightarrow} H^1(E(F);A)\overset{\beta^1}{\rightarrow} H^2(E(F); \mathds{Z}^r)\rightarrow\cdots
\]
\noindent where $\beta^\ast$ is the Bockstein homomorphism on cohomology. We write $[E(F)]$ for the fundamental class of $E(F)$. Define the \emph{surface untying invariant} as
\begin{equation}\label{E:su}
\mathrm{su}(F,\bar\rho)\ass \left\langle\rule{0pt}{11.5pt} \bar\alpha\smile \beta^1\bar\alpha,[E(F)]\right\rangle\in A.
\end{equation}
\begin{prop}
The surface untying invariant is an invariant of $\bar\rho$--equivalence classes of $G$--coloured knots.
\end{prop}
\begin{proof}
Two $A$--coloured Seifert surfaces of a $G$--coloured knot $(K,\rho)$ have the same surface untying invariant, because they are related by tube equivalence, and a loop around a tube is contractible in $E(K)$.\par
The proof that the surface untying invariant is invariant under null-twists follows \cite[Proposition 17]{Mos06b}. Denote the Poincar\'{e} duality isomorphism by $D$. The Poincar\'{e} dual of $\mathrm{su}(F,\bar\rho)$ is the algebraic intersection number of $D\bar\alpha$ with $D\beta^1\bar\alpha$. A curve $L$ in $\ker\bar\rho$ vanishes in $H_1(E(F);A)$ because it vanishes in $H_1(E(F);aA)$ for each principal ideal $aA$ of $A$ (note that $aA$ is a cyclic ring). Therefore $L$ may be taken to be disjoint from $D\bar\alpha$ as an element of $H_1(E(F);A)$, and surgery on $L$ does not change $\mathrm{su}(F,\bar\rho)$.
\end{proof}
By Alexander duality, $(\tau^+-\tau^-)$ gives rise to an isomorphism from $H_1(F;A)$ to $H_1(E(F);A)$. We denote by $\bar{a}\in H_1(F;A)$ the Alexander dual of $\bar\alpha$, which satisfies
\[\widehat{\bar{a}}\ass\,(\tau^+-\tau^-) \bar{a} = D\,\mathrm{p}\iota\beta^1\bar\alpha.\]
\noindent In the dihedral case, a curve representing $\bar{a}$ was called a \emph{mod $p$ characteristic knot} in \cite{CS75,CS84}. Recall the homological definition for the self-linking number, as in \cite[Chapter 77]{ST34} or in \cite[Page 18]{LaLa75}. For $g\in A$, let $\tilde{g}$ denote $\mathrm{p}^\ast(g)$, the smallest element of $\mathds{Z}^r$ for which $\mathrm{p}(\tilde{g})=g$. The surface untying invariant is seen to be the self-linking number of $\widehat{\bar{a}}$ as follows:
\begin{equation}
\left\langle\rule{0pt}{11.5pt} \widehat{\bar\alpha}\smile\beta^1\widehat{\bar\alpha}\,,[E(F)]\right\rangle= \left\langle\rule{0pt}{11.5pt} (D\mathrm{p}\iota \beta^1)^\ast\,\widehat{\bar{a}}\smile D\,\widehat{\bar{a}} \,,[E(F)]\right\rangle
= \left\langle\rule{0pt}{11.5pt} (\mathrm{p}\iota\beta^1)^\ast D\,\widehat{\bar{a}}\,,\widehat{\bar{a}}\right\rangle.
\end{equation}
Let us calculate an explicit formula for the surface untying invariant of a $G$--coloured knot $(K,\rho)$ with surface data $(M,V)$ with respect to a marked Seifert surface $\left(F,\set{x_1,\ldots,x_{2g}}\right)$. Unraveling the definitions gives
\begin{equation}\label{E:su-formula1}
\mathrm{su}(F,\bar\rho)=\ \rule{0pt}{11pt}\varepsilon\,V^{\,T}(\mathrm{p}\iota\beta^1)^\ast\left(\rule{0pt}{10pt}M\,t\cdot V-M^{\,T}\,V\right),
\end{equation}
\noindent where $\varepsilon\colon\thinspace A^{2g}\to \mathds{Z}^{2g}$ is the augmentation map. For $g\in A$, write $\tilde{g}$ for the smallest element of $\mathds{N}^r\subset\mathds{Z}^r$ for which $\mathrm{p}(\tilde{g})=g$.
In the special case $A\approx\left(\mathds{Z}/n\mathds{Z}\right)^r$, Formula \ref{E:su-formula1} simplifies to
\begin{equation}\label{E:su-formula2}
\mathrm{su}(F,\bar\rho)=\ \varepsilon\,V^{\,T}\frac{M\,\widetilde{t\cdot V}-M^{\,T}\,\widetilde{V}}{n}\bmod n.
\end{equation}
\subsection{An untying invariant for covering spaces}\label{SS:cu}
We set up a parallel construction to the one in Section \ref{SS:su}. Set $\Lambda\ass \mathds{Z}[\mathcal{C}_{m}]$. Denote by $l$ the rank of $A$ as a $\Lambda$-module with presentation
\begin{equation}\label{E:cplit}
0\longrightarrow\Lambda^{l}\overset{\iota}{\longrightarrow}\Lambda^l \overset{\mathrm{p}}{\longrightarrow} A \longrightarrow 0.
\end{equation}
\noindent If it happens to be the case that $A$ is of the form $\left(\mathds{Z}/n\mathds{Z}\right)^r$, then $\iota$ is represented by the matrix $nI_l$, and $p$ assigns to each element of $\Lambda$ its $\mathcal{C}_m$ orbit, modulo $n$. For $k\in\{1,2,\ldots\}$, the above maps extend by linearity:
\begin{equation}
0\longrightarrow\left(\Lambda^l\right)^k\overset{\iota}{\longrightarrow}\left(\Lambda^l\right)^k \overset{\mathrm{p}}{\longrightarrow} A^k \longrightarrow 0.
\end{equation}
A $G$--colouring $\rho\colon\thinspace \pi\twoheadrightarrow G$ of $K$ lifts to an $A$--colouring $\tilde\rho\colon\thinspace H_1(C_m(K))\twoheadrightarrow A$ of its $m$--fold branched cyclic covering space $C_m(K)$, which corresponds to a cocycle $\alpha\in H^1(C_m(K);A)$ by the Universal Coefficient Theorem. The long exact sequences
\scalebox{0.92}{\parbox{\textwidth}{
\[
\cdots\rightarrow H_2(C_m(K);A)\overset{\raisebox{2pt}{\scalebox{0.8}{$\beta_2$}}}{\rightarrow} H_1(C_m(K);\Lambda^l)\overset{\raisebox{2pt}{\scalebox{0.8}{$\iota_\ast$}}}{\rightarrow} H_1(C_m(K);\Lambda^l)\overset{\raisebox{2pt}{\scalebox{0.8}{$\mathrm{p}_\ast$}}}{\rightarrow} H_1(C_m(K); A)\rightarrow\cdots
\]}}
\noindent and
\scalebox{0.92}{\parbox{\textwidth}{
\[
\cdots\rightarrow H^1(C_m(K);\Lambda^l)\overset{\iota^\ast}{\rightarrow} H^1(C_m(K);\Lambda^l)\overset{\mathrm{p}^\ast}{\rightarrow} H^1(C_m(K);A)\overset{\beta^1}{\rightarrow} H^2(C_m(K); \Lambda^l)\rightarrow\cdots
\]}}
\noindent are induced by short exact sequence \ref{E:cplit}. The \emph{coloured untying invariant} is defined by the formula
\begin{equation}\label{E:cu-defn}
\mathrm{cu}(K,\rho)\ass \left\langle\rule{0pt}{11.5pt} \alpha\smile \beta^1\alpha,[C_m(K)]\right\rangle\in A.
\end{equation}
The argument of \cite[Proof of Proposition 17]{Mos06b} shows the following:
\begin{prop}
The coloured untying invariant is an invariant of $\rho$--equivalence classes of $G$--coloured knots.
\end{prop}
The coloured untying invariant is the self-linking number of $a\ass D\,\mathrm{p}\iota\beta^1\alpha$ as is seen via
\begin{equation}\label{E:cu-is-sl}
\left\langle\rule{0pt}{11.5pt} \alpha\smile\beta^1\alpha\,,[C_m(K)]\right\rangle= \left\langle\rule{0pt}{11.5pt} (D\mathrm{p}\iota \beta^1)^\ast\, a\smile D a \,,[C_m(K)]\right\rangle
= \left\langle\rule{0pt}{11.5pt} (\mathrm{p}\iota \beta^1)^\ast\, Da\,,a\right\rangle.
\end{equation}
We would next like an explicit formula for the coloured untying invariant of a $G$--coloured knot $(K,\rho)$ with surface data $(M,V)$ with respect to a marked Seifert surface $\left(F,\set{x_1,\ldots,x_{2g}}\right)$. We work this out for $m>0$. It turns out that the easiest way to do this is in two stages, first by regarding the coloured untying invariant as a $\tilde\rho$--equivalence invariant by forgetting the action of $\mathcal{C}_m$ on $A$, then by obtaining an explicit formula for this invariant, and then by adding this $\mathcal{C}_m$ action back `by hand'. Note that the analogues of Equations \ref{E:cu-defn} and \ref{E:cu-is-sl} will continue to hold (with analogous proofs). Thus, having forgotten the covering transformations, $\tilde\rho$ corresponds to a cocycle $\tilde\alpha\in H^1(C_m(K);A)$, and for $\tilde{a}\ass D\,\mathrm{p}\iota\beta^1\tilde\alpha$ we have
\begin{equation}\label{E:cu-defn-2}
\widetilde{\mathrm{cu}}(C_m(K),\tilde\rho)\ass \left\langle\rule{0pt}{11.5pt} \tilde\alpha\smile \beta^1\tilde\alpha,[C_m(K)]\right\rangle=\left\langle\rule{0pt}{11.5pt} (\mathrm{p}\iota \beta^1)^\ast\, D\tilde{a}\,,\tilde{a}\right\rangle\in A,
\end{equation}
\noindent which is an invariant of $\tilde{\rho}$--equivalence classes of $G$--coloured knots.\par
For $m>0$, push a Seifert surface $F$ for $K$ into $D^4$. The intersection form of the $m$--fold branched cyclic cover of this manifold represents the linking form of its boundary, which is $C_m(K)$. Kauffman in \cite[Proposition 5.6]{Kau74} gives the matrix representing this linking form with respect to the basis $\set{t^jx_i}_{\substack{0\leq j\leq m-2;\\1\leq i\leq 2g.}}$ as
\[
L(M)\ass\ \left[
\begin{matrix}
M+M^{\,T} & M^{\,T} & 0 & 0 &\cdots & 0 & 0 & 0\\
M & M+M^{\,T} & M^{\,T} & 0 & \cdots & 0 & 0 & 0\\
0 & M & M+M^{\,T} & M^{\,T} & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 0 & M & M+M^{\,T}
\end{matrix}\right],
\]
\noindent where the sign and transpose differences are due to differences between our orientation conventions and the ones used by Kauffman.
Set
\[V_{(m)}\ass\left(V; t\cdot V;\ldots; t^{m-2}\cdot V\right).\]
We obtain the explicit formula
\begin{equation}\label{E:L-Equation-1}
\widetilde{\mathrm{cu}}(C_m(K),\tilde\rho)=\ \varepsilon\,V_{(m)}^{\,T}(\iota\beta^1)^\ast
\left(L(M)\widetilde{V}_{(m)}\right).
\end{equation}
In the special case $A\approx\left(\mathds{Z}/n\mathds{Z}\right)^r$, this simplifies to
\begin{equation}\label{E:L-equation-2}
\widetilde{\mathrm{cu}}(C_m(K),\tilde\rho)= \ \varepsilon\,V_{(m)}^{\,T}\frac{L(M)\widetilde{V}_{(m)}}{n}\bmod n.
\end{equation}
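The block matrix $L(M)$ entering this formula can be assembled entirely mechanically from $M$ and $m$. The following is a rough computational sketch (in Python, assuming \texttt{numpy}; the function name and the sample Seifert matrix are our own and purely illustrative), which builds the $(m-1)\times(m-1)$ block-tridiagonal matrix displayed above:
\begin{verbatim}
import numpy as np

def kauffman_linking_matrix(M, m):
    # (m-1) x (m-1) block-tridiagonal matrix: M + M^T on the diagonal,
    # M^T on the superdiagonal, M on the subdiagonal.
    k = M.shape[0]              # k = 2g
    b = m - 1                   # number of blocks
    L = np.zeros((b * k, b * k), dtype=int)
    for i in range(b):
        L[i*k:(i+1)*k, i*k:(i+1)*k] = M + M.T
        if i + 1 < b:
            L[i*k:(i+1)*k, (i+1)*k:(i+2)*k] = M.T
            L[(i+1)*k:(i+2)*k, i*k:(i+1)*k] = M
    return L

# hypothetical genus-one Seifert matrix, m = 4
print(kauffman_linking_matrix(np.array([[1, 0], [1, 1]]), 4))
\end{verbatim}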
Finally, notice that the action of $\mathcal{C}_m$ on $A$ sends $V_{(m)}$ to $t\cdot V_{(m)}$, leaving invariant the right hand side of Equation \ref{E:L-equation-2}. Thus,
\begin{equation}\label{E:L-equation-3}
\widetilde{\mathrm{cu}}(C_m(K),\tilde\rho)= \mathrm{cu}(K,\rho).
\end{equation}
\subsection{The $S$--equivalence class of the colouring}\label{SS:sequiv}
Let $(K,\rho)$ be a $G$--coloured knot, with surface data $(M,V)$ with respect to a choice $\left(F,\set{x_1,\ldots,x_{2g}}\right)$ of marked Seifert surface for $K$. Let $P$ be a unimodular matrix such that
\begin{equation}
P^{\thinspace T}\thinspace \left(M-M^{\thinspace T}\right)\thinspace P= \left[\begin{matrix}0 & -1\\
1 & 0\end{matrix}\right]^{\oplus g}.
\end{equation}
Write $P^{-1}V\ass \left(v_1;\,\ldots;v_{2g}\right)$, and define the \emph{$S$--equivalence class of the colouring}
\begin{equation}\label{E:s-equiv-formula}
\mathrm{s}(K,\rho)= \sum_{j=1}^{g}v_{2j-1}\wedge v_{2j}\in A\wedge A.
\end{equation}
As we defined $S$--equivalence for surface data, it can be defined for vectors. Two vectors $V_{1,2}$ are said to be
\emph{$S$--equivalent} if there exist matrices $M_{1,2}$ such that $(M_1,V_1)$ and $(M_2,V_2)$ are $S$--equivalent. The following proposition shows that the $S$--equivalence class of the colouring is a well-defined invariant of $\bar\rho$--equivalence classes of $G$--coloured knots, and it explains what it measures.
\begin{prop}\label{P:ColClass}
Given a pair of surface data $(M_{1,2},V_{1,2})$, the colouring vectors $V_{1,2}$ are $S$--equivalent if and only if, for any $G$--coloured knots $(K_{1,2},\rho_{1,2})$ with surface data $(M_{1,2},V_{1,2})$ correspondingly, with respect to a choice of marked Seifert surface for each, we have
\[
\mathrm{s}(K_{1},\rho_1)=\ \mathrm{s}(K_{2},\rho_2).
\]
\end{prop}
\begin{proof}[Proof of Proposition {\ref{P:ColClass}}]
Identify the symplectic group $\mathrm{Sp}(2g,\mathds{Z})$ with the group of integral square matrices $P$ satisfying
\begin{equation}
P^{\,T}\left[\begin{matrix}0 & -1\\
1 & 0\end{matrix}\right]^{\oplus g} P= \left[\begin{matrix}0 & -1\\
1 & 0\end{matrix}\right]^{\oplus g}.
\end{equation}
By an argument of Rice \cite{Ric71}, two Seifert matrices $M_{1,2}$ are $S$--equivalent if and only if there exist Seifert matrices $M_{3,4}$ which are $S$--equivalent to $M_{1,2}$ correspondingly such that $M_{3,4}-M_{3,4}^{\,T}=\left[\begin{smallmatrix}0 & -1\\ 1 & 0\end{smallmatrix}\right]^{\oplus g}$ and $M_{3}$ is $S$--equivalent to $M_{4}$ via a finite sequence of $\Lambda_2$-moves, and $\Lambda_1$-moves of the form $M\mapsto P^{\,T}\,M\thinspace P$, with $P \in \mathrm{Sp}(2g,\mathds{Z})$. We may therefore assume that $M$ satisfies $M-M^{\,T}=\left[\begin{smallmatrix}0 & -1\\ 1 & 0\end{smallmatrix}\right]^{\oplus g}$ without loss of generality.
The $S$--equivalence relation on symplectic matrices induces an equivalence relation on the corresponding colouring vectors. Let $A^{2g}_{\text{full}}$ denote the set of vectors in $A^{2g}$ whose entries together generate $A$. A $\Lambda_1$-move on surface data sends a colouring vector $V\in A^{2g}_{\text{full}}$ to a vector $P^{-1}V$, for $P\in \mathrm{Sp}(2g,\mathds{Z})$. A $\Lambda_2$-move sends a colouring vector $(v_{1};\,\ldots;v_{2g})\in A^{2g}_{\text{full}}$ to a colouring vector $(v_{1};\,\ldots;v_{2g};0;y)\in A^{2g+2}_{\text{full}}$ for any $y\in A$.\par
Define a map
\begin{equation}
\begin{aligned}
\varphi\colon\thinspace V\in \bigcup_{g\in\mathds{N}^\ast}A^{2g}_{\text{full}}\left/\rule{0pt}{11pt}\Lambda_{1,2}\right.&\longrightarrow A\wedge A\\
(v_1;\,\ldots;v_{2g}) &\mapsto\ \ \sum_{j=1}^{g}v_{2j-1}\wedge v_{2j}
\end{aligned}.
\end{equation}
\noindent We next show that $\varphi$ is well-defined. Because $a\wedge 0=0$ for any $a\in A$, the $\varphi$--image of a vector $V\in A^{2g}_{\text{full}}$ is not changed by a $\Lambda_2$-move. To see that it is not changed by a $\Lambda_1$-move either, use the fact that $\mathrm{Sp}(2g,\mathds{Z})$ is generated by
\begin{equation}
R^{\,T}\left[\begin{matrix} 0 & I_g\\
-I_g & 0
\end{matrix}\right]R,\ \ R^{\,T}\left[\begin{matrix} W & 0\\
0 & (W^{\,T})^{-1}\end{matrix}\right]R,\ \ R^{\,T}\left[\begin{matrix} I_g & 0\\
B & I_g
\end{matrix}\right]R,
\end{equation}
\noindent with $W\in \mathrm{GL}(g,\mathds{Z})$, and $B$ a symmetric integral matrix (see \textit{e.g.} \cite[Proposition A5]{Mum83}). Above, $R$ denotes the integral matrix satisfying
\begin{equation}
R^{\,T} \left[\begin{matrix}0 & -I_g\\ I_g & 0\end{matrix}\right] R= \left[\begin{matrix}0 & -1\\ 1 & 0\end{matrix}\right]^{\oplus g}.
\end{equation}
The reader may verify directly that the $\varphi$--image of a vector $V\in A^{2g}_{\text{full}}$ is not changed by left multiplication by any of the above basis elements.\par
Next, we construct the inverse map
\begin{equation}
\psi\colon\thinspace A\wedge A\longrightarrow \bigcup_{g\in\mathds{N}^\ast}A^{2g}_{\text{full}}\left/\rule{0pt}{11pt}\Lambda_{1,2}\right.
\end{equation}
Let $s_1,\ldots,s_r$ be a fixed basis for $A$, and let $X\ass \sum_{1\leq i<j\leq r}c_{i,j}\,s_{i}\wedge s_j$ be some element of $A\wedge A$. If $c_{1,2}>0$, we set the odd-indexed entries $v_1,v_3,\ldots,v_{2c_{1,2}-1}$ to $s_1$ and the even-indexed entries $v_{2},v_4,\ldots,v_{2c_{1,2}}$ to $s_2$. If $c_{1,3}>0$, we set $v_{2c_{1,2}+1},v_{2c_{1,2}+3},\ldots,v_{2c_{1,2}+2c_{1,3}-1}$ to $s_1$ and $v_{2c_{1,2}+2},v_{2c_{1,2}+4},\ldots,v_{2c_{1,2}+2c_{1,3}}$ to $s_3$, and so on lexicographically, until we finish with $c_{r-1,r}$. We conclude by setting $v_{2C+2k-1}$ to $0$ and setting $v_{2C+2k}$ to $s_k$ for $k=1,\ldots,r$, where $C$ denotes $\sum_{1\leq i<j\leq r}c_{i,j}$. By construction, the entries of this colouring vector, whose length is $2g\ass 2C+2r$, together generate $A$. For example, for $A$ generated by $s_1,s_2,s_3$, we would have $\psi(2s_1\wedge s_2)=(s_1;s_2;s_1;s_2;0;s_1;0;s_2;0;s_3)$.\par
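The construction of $\psi$ is completely algorithmic. The following minimal sketch (plain Python; the function name and the string encoding of the generators are our own) builds the colouring vector from the coefficients $c_{i,j}$ and reproduces the example $\psi(2s_1\wedge s_2)$ above:
\begin{verbatim}
def psi(c, r):
    # colouring vector for X = sum_{i<j} c[(i,j)] s_i ^ s_j,
    # entries encoded as the strings '0', 's1', ..., 'sr'
    vec = []
    for i in range(1, r + 1):            # lexicographic order on (i, j)
        for j in range(i + 1, r + 1):
            for _ in range(c.get((i, j), 0)):
                vec.extend(['s%d' % i, 's%d' % j])
    for k in range(1, r + 1):            # trailing (0; s_k) pairs
        vec.extend(['0', 's%d' % k])
    return vec

print(psi({(1, 2): 2}, 3))
# ['s1', 's2', 's1', 's2', '0', 's1', '0', 's2', '0', 's3']
\end{verbatim}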
To prove that $\psi$ is well-defined, identify $A\wedge A$ with the free commutative monoid over $A^2$ modulo moves $S_{1,2}$, where $S_1$ takes elements of the form $a\wedge (b+c)$ to elements of the form $a\wedge b + a\wedge c$, and $S_2$ takes elements of the form $a\wedge a$ to zero. We call this monoid $\mathcal{M}$.
First, for $X\in\mathcal{M}$, commutativity of $\mathcal{M}$ corresponds to a $\Lambda_1$-move on $\psi(X)$ with matrix $P=I_{2i}\oplus \left[\begin{smallmatrix}0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\end{smallmatrix}\right]$. The effect of an $S_1$-move is replicated in $A^{2g}_{\text{full}}\left/\rule{0pt}{11pt}\Lambda_{1,2}\right.$ by first applying a $\Lambda_2$-move
\begin{equation}
\psi(X)=(v_1;\,\ldots\,;v_{2g-2}\,; a\,; (b+c))\mapsto (v_1;\,\ldots\,;v_{2g-2}\,; a\,; (b+c)\, ; 0\, ; c),
\end{equation}
\noindent and then applying a $\Lambda_1$-move with matrix
\begin{equation}
P= I_{2g-2}\oplus \left[\begin{matrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 1\\ -1 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{matrix}\right].
\end{equation}
\noindent The result is the vector $(v_1;\,\ldots\,;v_{2g-2}\,; a\,; b\, ; a\, ; c)$, as desired. The effect of an $S_2$-move is replicated by a $\Lambda_1$-move with matrix $P= I_{2g}+E_{2g,g-1}$ to get
\begin{equation}\psi(X)=(v_1;\,\ldots\,;v_{2g-2}\,; a\,; a)\mapsto (v_1;\,\ldots\,;v_{2g-2}\,; 0\,; a),\end{equation}
\noindent after which a $\Lambda_2$-move erases the last two entries, and we obtain $(v_1;\,\ldots\,;v_{2g-2})$ as desired.\par
We have shown that both $\varphi$ and $\psi$ are well-defined, and following through the definitions shows that $\varphi(\psi(X))=X$ for any $X\in A\wedge A$. So $\varphi$ is invertible, and is therefore an isomorphism.
\end{proof}
Because null-twists don't change the colouring vector, Proposition \ref{P:ColClass} implies the following.
\begin{cor}
The element $\mathrm{s}(K,\rho)\in A\wedge A$ is an invariant of $\bar\rho$--equivalence classes of $G$--coloured knots.
\end{cor}
\begin{rem}
The proof that $\psi$ is well-defined is an algebraic version of the band sliding arguments of \cite[Section 4.2]{KM09}.
\end{rem}
\begin{rem}
If it were necessary, we could upgrade $\mathrm{s}(K,\rho)\in A\wedge A$ to a $\hat\rho$--invariant by considering $A\wedge A$ as a $\mathcal{C}_m$-module with respect to the diagonal action of $t$.
\end{rem}
\section{Groups whose commutator subgroup has small rank}\label{S:r12}
Armed with the tools of Section \ref{S:untyinginvariants}, we are now in a position to find complete sets of base-knots for some metabelian groups $G$ of the particularly simple form $G=\mathcal{C}_m\ltimes_\phi (\mathds{Z}/n\mathds{Z})^r$ for $r\leq 2$, where the order of $\phi$ is $m$. Then $\phi$ is represented by an integer matrix $N$. In Section \ref{SS:r1} we consider the case $r=1$, and we find a complete set of base-knots for metacyclic groups for which $2(\phi^{-3}-\mathrm{id})$ is invertible. This generalizes \cite[Sections 4.2, 4.3 and 5.1]{KM09}, where the $m=2$ case is treated. In Section \ref{SS:r2} we find complete sets of base knots for certain families of groups $G$ of the form $\mathcal{C}_m\ltimes_{\phi}(\mathds{Z}/n_1\mathds{Z}\times\mathds{Z}/n_2\mathds{Z})$.\par
The strategy is always the same. Relative bordism gives an upper bound on the number of $\bar\rho$--equivalence classes via the K\"{u}nneth Formula. To find a lower bound, choose a colouring vector to represent each $S$--equivalence class, and solve $MV N=M^{\,T}V$ (Proposition \ref{P:HNN}) for $M$ over $A$. If an entry of $M$ is not determined, set it to zero, if it is determined then set it to that value, and if the equation for that entry admits no solutions, then there are no $G$--coloured knots in that equivalence class. Finally, to get different values for the surface untying invariant (Equation \ref{E:su-formula2}), add `$A$--torsion' elements to $M$. This gives a list of surface data representing non-$\bar\rho$--equivalent $G$--coloured knots, and if the length of the list equals the upper bound then we are finished. For $\rho$--equivalence, check that these $G$--coloured knots all have different coloured untying invariants using Equation \ref{E:L-equation-2}.\par
Throughout this section, for $a\in \mathds{Z}/n\mathds{Z}$, let $\tilde{a}\in\mathds{N}=\{0,1,2,\ldots\}$ denote the smallest natural number such that $a=\tilde{a}\bmod n$ unless otherwise specified.
\subsection{The $r=1$ case}\label{SS:r1}
\subsubsection{$\bar\rho$--equivalence}\label{SSS:R1M0}
Groups of the form $\mathcal{C}_m\ltimes_{\phi}(\mathds{Z}/n\mathds{Z})$ are called \emph{metacyclic groups}. Note that, because both $\phi$ and $\phi-\mathrm{id}$ are invertible (\textit{e.g} \cite[Proposition 14.2]{BZ03}), it follows that $m,n>0$. The automorphism $\phi$ takes the form $\phi(s)=\xi s$ with respect to a fixed generator $s$ for $\mathds{Z}/n\mathds{Z}$, where $\xi^m=1\bmod n$. Both $\xi$ and $\xi-1$ are units.\par
The relative bordism upper bound $\abs{H_3(\mathds{Z}/n\mathds{Z};\mathds{Z})}=n$ for $\bar\rho$--equivalence classes coming from Corollary \ref{C:Wallacebound} coincides with the lower bound coming from Section \ref{S:untyinginvariants}, which is given as $\abs{\mathds{Z}/n\mathds{Z}\wedge \mathds{Z}/n\mathds{Z}}\abs{\mathds{Z}/n\mathds{Z}}=1\cdot n=n$. A complete set of base-knots with respect to $\bar\rho$--equivalence are the twist knots $(T_k,\rho_{k})$ of Figure \ref{F:metacycbase}, with surface data
\begin{figure}
\caption{The twist knots $(T_k,\rho_{k})$.}\label{F:metacycbase}
\end{figure}
\begin{equation}\label{E:metacycReps}
(M_k,V)\ =\ \genusoneknot{a+kn}{0}{1}{1}{s}{s^{\frac{\xi}{1-\xi}}},
\end{equation}
\noindent where $a\in\mathds{N}$ is the minimal natural number such that $a\bmod n= \frac{-\xi}{(1-\xi)^2}$ and $k=1,\ldots,n$. These $\bar\rho$-equivalence classes are distinguished by
$\mathrm{su}(F_k,\bar{\rho}_{k})=k(\xi-1)$ (plug the surface data into Equation \ref{E:su-formula2}), where $F_k$ is the obvious Seifert surface for $T_k$ in the projection of Figure \ref{F:metacycbase}.
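The numbers entering this surface data are simple modular computations. The following is a minimal sketch (plain Python, using the modular inverse provided by \texttt{pow} in version 3.8 and later); the values $n=5$, $\xi=2$ form a hypothetical example, chosen only so that $\xi$ and $\xi-1$ are units modulo $n$:
\begin{verbatim}
n, xi = 5, 2                          # hypothetical: xi**4 = 1 (mod 5)
inv = lambda u: pow(u % n, -1, n)     # modular inverse

x = (xi * inv(1 - xi)) % n            # exponent xi/(1-xi) of the second
                                      # colouring entry s**x
a = (-xi * inv((1 - xi) ** 2)) % n    # a = -xi/(1-xi)^2 mod n, so the
                                      # (1,1) Seifert entry is a + k*n
print(x, a)                           # 3 3
\end{verbatim}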
\begin{rem}
A more explicit way to establish the upper bound of $n$ for the number of $\rho$--equivalence classes would have been to apply the algorithm of \cite[Section 4]{KM09}. Given a $G$--coloured knot $(K,\rho)$, the arguments of Sections 4.2 and of 4.3.1 provide an algorithm to relate $(K,\rho)$ by an explicit sequence of null-twists to a genus $1$ knot with surface data
\begin{equation}\genusoneknot{a_{1,1}}{a_{1,2}}{a_{1,2}+1}{a_{2,2}}{s}{0}.\end{equation}
Moreover, $a_{2,2}$ can be made to vanish by null-twists, and one may add or subtract $n$ from $a_{1,2}$ and $n^2$ from $a_{1,1}$ by the arguments of \cite[Page 1382]{KM09}. Finally, Proposition \ref{P:HNN} tells us that $a_{1,1}\bmod n=0$, and that $a_{1,2}\bmod n =\frac{1}{\xi-1}$.
\end{rem}
\subsubsection{$\rho$--equivalence}\label{SSS:R1M3}
\begin{thm}\label{T:metacyclic}
For $G$ metacyclic, the number of $\rho$--equivalence classes of $G$--coloured knots is bounded from below by the order of $2(\phi^{-3}-\mathrm{id})$. In particular, if $2(\phi^{-3}-\mathrm{id})$ is invertible, then $\set{(T_k,\rho_{k})}_{1\leq k<n}$ is a complete set of base-knots for $G$.
\end{thm}
\begin{proof}
Two $\bar\rho$--equivalence knots are in particular $\rho$--equivalent, therefore it suffices to check that the coloured untying invariant distinguishes $(T_k,\rho_{k})$. We do this by plugging the surface data of Equation \ref{E:metacycReps} into Equation \ref{E:L-equation-2}. Decompose this equation as
\begin{equation}
\mathrm{cu}(T_k,\rho_k)= \ \varepsilon\,V_{(m)}^{\,T}\frac{\left(L\left[\begin{matrix}kn & 0\\ 0 & 0\end{matrix}\right]+L\left[\begin{matrix}a & 0\\ 1 & 1\end{matrix}\right]\right)\widetilde{V}_{(m)}}{n}\bmod n.
\end{equation}
The matrix $L\left[\begin{matrix}a & 0\\ 1 & 1\end{matrix}\right]$ does not depend on $k$, so it suffices to calculate
\begin{equation}
\left[\tilde\xi^0,\tilde\xi^1,\ldots,\tilde\xi^{m-2}\right]\,\frac{L[kn]}{n}\left(\xi^0;\xi^1;\ldots;\xi^{m-2}\right)=2k\sum_{i=0}^{2(m-2)}\xi^i,
\end{equation}
\noindent with $\tilde\xi\in \mathds{N}$ the smallest number such that $\tilde{\xi}^m=1\bmod n^2$ and $\xi= \tilde{\xi}\bmod n$. \par
Because $1-\xi$ is invertible in $\mathds{Z}/n\mathds{Z}$, the number of distinct numbers in the set $\left\{2k\sum_{i=0}^{2m-4}\xi^i\right\}_{1\leq k\leq n}\subset \mathds{Z}/n\mathds{Z}$ equals the order of
$2(1-\xi)\sum_{i=0}^{2m-4}\xi^i=2(1-\xi^{2m-3})$, which is equal to the order of $2(1-\xi^{-3})$ in $\mathds{Z}/n\mathds{Z}$.
\end{proof}
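The closed form used at the end of the proof is a purely algebraic identity: since $\xi^m=1\bmod n$ one has $\xi^{2m-3}=\xi^{-3}$, so that $2(1-\xi)\sum_{i=0}^{2m-4}\xi^i=2(1-\xi^{2m-3})=2(1-\xi^{-3})$ in $\mathds{Z}/n\mathds{Z}$. A quick numerical sanity check of this identity (plain Python, version 3.8 or later; the values are hypothetical):
\begin{verbatim}
n, m, xi = 5, 4, 2                 # hypothetical: xi**m = 1 (mod n)
assert pow(xi, m, n) == 1

lhs = (2 * (1 - xi) * sum(xi**i for i in range(2*m - 3))) % n
rhs = (2 * (1 - pow(xi, -3, n))) % n
assert lhs == rhs                  # both equal 3 here
\end{verbatim}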
This generalizes \cite[Theorem 3]{KM09} to all metacyclic groups with $2(1-\xi^{-3})$ invertible in $\mathds{Z}/n\mathds{Z}$. The simplest group $G$ for which we have not classified $G$--coloured knots up to $\rho$--equivalence is thus $G=\mathcal{C}_3\ltimes_{[2]} (\mathds{Z}/7\mathds{Z})$.
\subsection{The $r=2$ case}\label{SS:r2}
In Sections \ref{SSS:R2M0D} and \ref{SSS:R2M3D} we consider groups of the form $\mathcal{C}_m\ltimes_{\phi}(\mathds{Z}/n_1\mathds{Z}\times\mathds{Z}/n_2\mathds{Z})$. Matrix notation is misleading when $n_1\neq n_2$, but we'll use it anyway, with care. Given a basis $s_{1,2}\ass\left(s_{1,2}^1,s_{1,2}^2\right)$ for $A=\mathds{Z}/n_1\mathds{Z}\times\mathds{Z}/n_2\mathds{Z}$, there is a $2\times 2$ matrix $N$ such that $\phi(s_{1,2})=s_{1,2}N$.
\subsubsection{$\bar\rho$--equivalence; $N$ is diagonalisable}\label{SSS:R2M0D}
Again $n_{1,2}>0$. Relative bordism gives an upper bound of $n_1n_2\gcd(n_1,n_2)$ on the number of $\bar\rho$--equivalence classes by the K\"{u}nneth Formula, which here simplifies to the short exact sequence
\begin{equation}
0\to A\to H_3(A;\mathds{Z})\to \mathds{Z}/\gcd(n_1,n_2)\mathds{Z}\to 0 .
\end{equation}
The surface untying invariant will be seen to detect the `$A$' part, while the $S$--equivalence class of the colouring detects the `$\mathrm{Tor}$' part, noting for our groups that
\begin{equation}\mathrm{Tor}_1(\mathds{Z}/n_1\mathds{Z},\mathds{Z}/n_2\mathds{Z}) \approx \mathds{Z}/\gcd(n_1,n_2)\mathds{Z}\approx A\wedge A.\end{equation}
Choose a basis $s_{1,2}$ for $A$, with respect to which the matrix $N$ is of the form $\left[\begin{smallmatrix}\tilde{\xi}_1 & 0\\ 0 & \tilde{\xi}_2\end{smallmatrix}\right]$, with $\tilde\xi_{1,2}^m=1\bmod n_{1,2}^2$ correspondingly, and set $\xi_{1,2}\ass\tilde{\xi}_{1,2}\bmod n_{1,2}$. The $S$--equivalence classes of $G$--colourings are represented by the colouring vectors $(s_1;is_2)$ and $(s_1;0;s_2;0)$, where $i=1,\ldots,\gcd(n_1,n_2)-1$.\par
By explicitly solving $M\thinspace V N=M^{\thinspace T}\thinspace V$, we see that there exist $G$--coloured knots in the $S$--equivalence class represented by $(s_1;is_2)$ only if there exists a number $x\in \mathds{N}$ with $\frac{\xi_1}{1-\xi_1}=x\bmod n_1$ and with $\frac{1}{\xi_2-1}=x\bmod n_2$. For $n_1=n_2$, this condition would become $\xi_1=\xi_2^{-1}$, while for $n_{1,2}$ coprime it would be vacuous. The $\bar\rho$--equivalence classes for such knots are represented by $G$--coloured knots $(K_{k,l},\rho_{k,l,i})$ with surface data
\begin{equation}(M_{k,l},V_i)\ =\ \genusoneknot{kn_1}{x}{x+1}{ln_2}{s_1}{is_2},\end{equation}
\noindent with $x\in\mathds{N}$ being the minimal integer satisfying the above, and with $k=1,\ldots,n_1$ and $l=1,\ldots,n_2$ and $i=1,\ldots,\gcd(n_1,n_2)-1$.\par
These $\bar\rho$-equivalence classes are distinguished by the surface untying invariant. The simplest way to see this is to decompose $M$ as $\left[\begin{smallmatrix}kn_1 & 0\\ 0 & ln_{2}\end{smallmatrix}\right]+ \left[\begin{smallmatrix}0 & x\\ x+1 & 0\end{smallmatrix}\right]$ and observe that the contribution of the first summand to Equation \ref{E:su-formula2} is $\left((\xi_1-1)k, (\xi_2-1)l\right)$ which spans $A$, while the contribution of the second summand is constant.
The knots in the $S$--equivalence class represented by $(s_1;0;s_2;0)$ are represented by $G$--coloured knots $(K^\ast_{k,l},\rho^\ast_{k,l})$ with surface data
\begin{equation}(M^\ast_{k,l},V^\ast)\ =\ \left(\rule{0pt}{38pt}\begin{bmatrix} kn_1 & x_1 & 0 & 0\\ x_1+1 & 0 & 0 & 0\\
0 & 0 & ln_2 & x_2\\ 0 & 0 & x_2+1 & 0 \end{bmatrix}\raisebox{-5.5pt}{\huge{,}}\normalsize \begin{pmatrix} s_1\\ 0\\ s_2\\ 0 \end{pmatrix}\right),\end{equation}
\noindent where $x_{1,2}\in\mathds{N}$ are the smallest natural numbers such that
$\frac{\xi_{1,2}}{1-\xi_{1,2}}=x_{1,2}\bmod n_{1,2}$ correspondingly.\par
These $\bar\rho$-equivalence classes are distinguished by
\begin{equation}\mathrm{su}(F^\ast_{k,l},\bar\rho^\ast_{k,l})=
\left((\xi_1-1)k, (\xi_2-1)l\right).\end{equation}
\begin{rem}\label{R:AlgoFail}
The algorithm of \cite[Section 4]{KM09} would give an upper bound on the number of $\bar\rho$--equivalence classes, which would not be sharp for $\gcd(n_1,n_2)>1$. Given a $G$--coloured knot $(K,\rho)$, the arguments of \cite[Sections 4.2 and 4.3.1]{KM09} provide an algorithm to relate $(K,\rho)$ by an explicit sequence of null-twists to a knot $(K_0,\rho_0)$ of genus $\leq 2$. We can arrange for the colouring vector to take the form $(s_1;is_2)$ or $(s_1;0;s_2;0)$ by band slides \cite[Section 4.1.4 and Section 4.2.2]{KM09}, where $i=1,\ldots,\gcd(n_1,n_2)-1$. For genus $1$, write the Seifert matrix as $\left[\begin{smallmatrix}a_{1,1} & a_{1,2}\\
a_{1,2}+1 & a_{2,2}\end{smallmatrix}\right]$. Proposition \ref{P:HNN} determines the values of $a_{1,1}\bmod n_1$, of $a_{1,2}\bmod\gcd(n_1,n_2)$, and of $a_{2,2}\bmod n_2$. Moreover, one may add or subtract $n_1^2$ from $a_{1,1}$, and $\gcd(n_1,n_2)^2$ from $a_{1,2}$, and $n_2^2$ from $a_{2,2}$ by the arguments of \cite[Page 1382]{KM09}. For genus $2$ let the Seifert matrix be $\left[\begin{smallmatrix}a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4}\\
a_{1,2}+1 & a_{2,2} & a_{2,3} & a_{2,4}\\
a_{1,3} & a_{2,3} & a_{3,3} & a_{3,4}\\
a_{1,4} & a_{2,4} & a_{3,4}+1 & a_{4,4}\end{smallmatrix}\right]$. We may kill $a_{2,2}$, $a_{2,4}$, and $a_{4,4}$ by \cite[Equation 4.7]{KM09}. Proposition \ref{P:HNN} determines the values of $a_{1,1}\bmod n_1$, of $a_{3,3}\bmod n_2$, and of $a_{1,3}\bmod\gcd(n_1,n_2)$. The other entries are determined on the nose, because they are determined modulo either $n_1$ or $n_2$, and we can add or subtract either $n_1$ or $n_2$ from them by \cite[Equation 4.11]{KM09}. On the other hand, all we can add or subtract from $a_{1,1}$, $a_{3,3}$, and $a_{1,3}$ is $n_1^2$, $n_2^2$, and $\gcd(n_1,n_2)^2$ correspondingly. In summary, the upper bound which the algorithm gives is $n_1n_2\gcd(n_1,n_2)^2$, which in general is not sharp.
\end{rem}
\subsubsection{$\rho$--equivalence; $N$ is diagonalisable}\label{SSS:R2M3D}
For $A$ of rank $2$ and for $N$ diagonalisable, we obtain a complete set of base knots if $2(\phi^{-3}-\mathrm{id})$ is invertible. We remark that this would hold for $A$ of any rank if $N$ were diagonalisable, with pairwise coprime diagonal entries (in this case $A\wedge A$ and $A\wedge A\wedge A$ both vanish).
\begin{thm}\label{T:R2M3Diag}
\begin{itemize}
\item For each $1\leq l_0\leq n_2$, the number of non-$\rho$--equivalent $G$--coloured knots in the set $\left\{(K_{k,l_0},\rho_{k,l_0,i})\right\}_{1\leq k\leq n_1}$, and also the number of non-$\rho$--equivalent $G$--coloured knots in the set $\left\{(K^\ast_{k,l_0},\rho^\ast_{k,l_0})\right\}_{1\leq k\leq n_1}$, are bounded from below by the order of $2(1-\xi_{1}^{-3})\in\mathds{Z}/n_1\mathds{Z}$.
\item For each $1\leq k_0\leq n_1$, the number of non-$\rho$--equivalent $G$--coloured knots in the set $\left\{(K_{k_0,l},\rho_{k_0,l,i})\right\}_{1\leq l\leq n_2}$, and also the number of non-$\rho$--equivalent $G$--coloured knots in the set $\left\{(K^\ast_{k_0,l},\rho^\ast_{k_0,l})\right\}_{1\leq l\leq n_2}$, are bounded from below by the order of $2(1-\xi_{2}^{-3})\in\mathds{Z}/n_2\mathds{Z}$.
\end{itemize}
\end{thm}
\begin{proof}
Consider the claim for $G$--coloured knots in the $S$--equivalence class represented by $(s_1;s_2)$.
To prove the first assertion, decompose $M$ as $\left[\begin{smallmatrix}kn_1 & 0\\ 0 & 0\end{smallmatrix}\right]+\left[\begin{smallmatrix}0 & x\\ x+1 & l_0n_2\end{smallmatrix}\right]$. The matrix $\left[\begin{smallmatrix}0 & x\\ x+1 & l_0n_2\end{smallmatrix}\right]$ is independent of $k$. Thus, to show that the coloured untying invariant (Equation \ref{E:L-equation-2}) distinguishes our base knots, it suffices to calculate
\begin{equation}
\left[\tilde\xi_1^0,\tilde\xi_1^1,\ldots,\tilde\xi_1^{m-2}\right]\,\frac{L\left[kn_1\right]}{n_1}\,\left(\xi_1^0;\xi_1^1;\ldots;\xi_1^{m-2}\right)=
2k\sum_{j=0}^{2(m-2)}\xi_1^j.
\end{equation}
Because $1-\xi_1$ is invertible in $\mathds{Z}/n_1\mathds{Z}$, the number of distinct numbers in the set $\left\{2k\sum_{j=0}^{2m-4}\xi_1^j\right\}_{1\leq k\leq n_1}\subset \mathds{Z}/n_1\mathds{Z}$ equals the order of
$2(1-\xi_1)\sum_{j=0}^{2m-4}\xi^j_1=2(1-\xi_1^{2m-3})$, which is the order of $2(1-\xi_1^{-3})$ in $\mathds{Z}/n_1\mathds{Z}$.
The proof of the second assertion is analogous, as is the proof for $i>1$ and for $G$--coloured knots in the $S$--equivalence class represented by $(s_1;0;s_2;0)$.
\end{proof}
\subsubsection{$\bar\rho$--equivalence; $N$ is not diagonalisable}\label{SSS:R2M0N}
Set $n\ass n_1=n_2$. The bordism upper bound is $n^3$. Choose a basis $s_{1,2}$ for $A$ such that $\phi(s_1)=s_2$. With respect to such a basis, $N$ takes the form $\left[\begin{smallmatrix}0 & 1\\ N_{2,1} & N_{2,2}\end{smallmatrix}\right]$, such that $N^m=I_2\bmod n^2$. Because $N$ and $N-I_2$ are both invertible in $\mathds{Z}/n\mathds{Z}$, it follows that $\abs{N}=-N_{2,1}$ and $\abs{N-I_2}=N_{2,1}+N_{2,2}-1$ are both invertible modulo $n$. Let $\xi\in \mathds{Z}/n\mathds{Z}$ be an element satisfying $(1-N_{2,1}-N_{2,2})\xi=1$.\par
Solving $M\,V N=M^{\thinspace T}\thinspace V$ shows that there exist $G$--coloured knots in the $S$--equivalence class represented by $(s_1;is_2)$ for $1\leq i<n$ only if $N_{2,1}\bmod n=-1$, and that $\bar\rho$--equivalence classes for such knots are represented by $G$--coloured knots $(J_{k,l,i},\rho_{k,l,i})$ with surface data
\begin{equation}
(M_{k,l,i},V_i)\ =\ \left(\rule{0pt}{18pt}\tilde\xi\,\begin{bmatrix} \tilde{i}+kn & -1\\ 1-N^{\prime}_{2,2} & \widetilde{i^{-1}}+ln \end{bmatrix}\raisebox{-5.5pt}{\huge{,}}\normalsize \begin{pmatrix} s_1\\ is_2 \end{pmatrix}\right),
\end{equation}
\noindent where $N^{\prime}_{2,2}$ denotes the minimum integer which agrees modulo $n$ with $N_{2,2}$ for which $1-2\tilde\xi+\tilde\xi N^{\prime}_{2,2}\bmod n=0$. For this surface data
\begin{equation}
\mathrm{su}(F_{k,l,i},\bar\rho_{k,l,i})=
\left(k-N_{2,1}\tilde{i}l, (N_{2,2}-1)\tilde{i}l-k\right)\bmod n.
\end{equation}
\noindent Because $N_{2,1}+N_{2,2}-1$ is a unit modulo $n$, the number of distinct values of the surface untying invariant for these knots is $n\sum_{j=1}^{n-1} \frac{n}{\gcd(n,j)}$. In particular, if $n$ is prime then all possible values are realized.
Knots in the $S$--equivalence class represented by $(s_1;0;s_2;0)$ are represented by $G$--coloured knots $(J^\ast_{k,l},\rho^\ast_{k,l})$ with surface data
\begin{equation}
(M^\ast_{k,l},V^\ast)\ =\ \left(\rule{0pt}{38pt}\begin{bmatrix} kn & \tilde{\xi}N_{2,1} & 0 & \tilde\xi\\ \tilde\xi N_{2,1}+1 & 0 & \tilde\xi & 0\\
0 & \tilde{\xi} & ln & a-1\\ \tilde\xi & 0 & a & 0 \end{bmatrix}\raisebox{-5.5pt}{\huge{,}}\normalsize \begin{pmatrix} s_1\\ 0\\ s_2\\ 0 \end{pmatrix}\right),
\end{equation}
\noindent where $a\in\mathds{N}$ is the minimal natural number congruent modulo $n$ to $\frac{\xi}{N_{2,1}}$.\par
The surface untying invariant for these is
\begin{equation}\mathrm{su}(F^\ast_{k,l},\bar\rho^\ast_{k,l})=
\left(N_{2,1}l-k, k+l(N_{2,2}-1)\right)\bmod n.\end{equation}
For this $S$--equivalence class, the number of possible values of the surface untying invariant equals $n$ times the order of $(N_{2,2}-N_{2,1}+1)\bmod n$. If this number is a unit, then we have classified knots coloured by such groups up to $\bar\rho$--equivalence.
\begin{rem}
As in Remark \ref{R:AlgoFail}, the algorithm of \cite[Section 4]{KM09} gives a non-sharp upper bound of $n^4$ for the number of $\bar\rho$--equivalence classes.
\end{rem}
\subsubsection{$\rho$--equivalence; $N$ is not diagonalisable}\label{SSS:R2M3N}
Because $N$ is not diagonalisable, $m$ must be greater than $2$. We consider only the case $m=3$.
\begin{thm}\label{T:R2M3Non}
The number of non-$\rho$--equivalent $G$--coloured knots among elements of the set $\left\{(J_{k,l,i},\rho_{k,l,i})\right\}_{1\leq k,l\leq n}$ for each $1\leq i<n$, and also the number of non-$\rho$--equivalent $G$--coloured knots in the set $\left\{(J^\ast_{k,l},\rho^\ast_{k,l})\right\}_{1\leq k,l\leq n}$, are bounded from below by the order of $6(1+N_{2,2}+N^{2}_{2,2}-N_{2,1}^2)\bmod n$.
\end{thm}
\begin{proof}
Consider the claim for $G$--coloured knots in the $S$--equivalence class represented by $(s_1;s_2)$.
As in the proof of Theorem \ref{T:R2M3Diag}, it suffices to consider the quantity
\begin{multline}
\varepsilon\left(V,V N\right)\, L\left[\begin{matrix}kn & 0\\ 0 & ln\end{matrix}\right]\,\left(V;V N\right)
=\\\left(3,3\right)k+\left(N_{2,1}(1+2N_{2,1}+2N_{2,2}),2+N_{2,1}+N_{2,2}(2+2N_{2,1}+2N_{2,2})\right)l\bmod n.
\end{multline}
To see how many $\rho$--equivalence classes we can distinguish by the coloured untying invariant, we calculate
\[ (2+N_{2,1}+N_{2,2}(2+2N_{2,1}+2N_{2,2}))-N_{2,1}(1+2N_{2,1}+2N_{2,2})=2+2N_{2,2}+2N^{2}_{2,2}-2N_{2,1}^2.
\]
\noindent The theorem follows.\par
The proof is analogous for $G$--coloured knots in the other $S$--equivalence classes.
\end{proof}
\section{$A_4$-Coloured Knots}\label{S:A4}
To finish this paper, we go beyond the algebraic techniques of Section \ref{S:untyinginvariants},
to classify $G$--coloured knots up to $\rho$--equivalence for a specific small but interesting group.
\subsection{Setup}
The alternating group $A_4$ is the group of orientation preserving symmetries of an oriented tetrahedron. As a metabelian group it is of the form
\begin{equation}A_4 = \mathcal{C}_3\ltimes_{\phi} \left(\mathds{Z}/2\mathds{Z}\right)^{2},\end{equation}
\noindent where the matrix associated to $\phi$ is $N=\left[\begin{smallmatrix} 0 & 1\\
-1 & -1
\end{smallmatrix}\right]$.\par
The number of $\rho$--equivalence classes of $A_4$-coloured knots is bounded from above by the number of $\bar\rho$--equivalence classes of such knots equipped with marked Seifert surfaces, which is $8$ by the bordism upper bound of Corollary \ref{C:Wallacebound}. For the $S$--equivalence class represented by $(s_1;s_2)$, the four distinct $\bar\rho$--equivalence classes are represented by the knots in Figure \ref{F:A4s1s2}, which are denoted $3_1^l, 3_1^r, 4_1^l,$ and $4_1^r$ correspondingly.
\begin{figure}
\caption{The $A_4$-coloured knots $3_1^l$, $3_1^r$, $4_1^l$, and $4_1^r$.}\label{F:A4s1s2}
\end{figure}
We choose the colouring vector $(s_1;s_2;s_1;s_2)$ to represent the remaining $S$--equivalence class. The four distinct $\bar\rho$--equivalence classes of $A_4$-coloured knots with this colouring vector are represented by $3_1^l\Hash 3_1^l$, $3_1^l\Hash 4_1^l$, $3_1^l\Hash 4_1^r$, and $4_1^l\Hash 4_1^r$ (these are well-defined $A_4$-coloured knots by Lemma \ref{L:consumwelldef}).\par
On the other hand, the number of $\rho$--equivalence classes of $A_4$-coloured knots is bounded from below by $2$, because $3_1^l$ and $4_1^l$ are distinguished by their coloured untying invariants, which are $1$ and $s_1$ correspondingly. Notice first that $4_1^l$ and $4_1^r$ are ambient isotopic, and we therefore don't distinguish between them, and call them $4_1$ collectively. We finish this paper by showing that the lower bound of $2$ is sharp, by reducing each knot in our list to either $3_1^l$ or to $4_1$ by twist moves. Let $S$ denote the commutative semigroup of $\rho$--equivalence classes of $A_4$-coloured knots, equipped with the connect sum operation (see Section \ref{SS:A4Prelim}). Consider $\psi\colon\thinspace \mathcal{C}_2\to S$ which maps $0$ and $1$ to the $\rho$--equivalence classes of $3_1^l$ and of $4_1$ correspondingly.
\begin{thm}\label{T:A4Theorem}
The map $\psi$ is a bijection. In particular, $S$ is isomorphic to a group with two elements, which are distinguished by the coloured untying invariant.
\end{thm}
\subsection{Preliminaries}\label{SS:A4Prelim}
According to our conventions, $\rho$ sends Wirtinger generators to elements of the coset $t\left(\mathds{Z}/2\mathds{Z}\right)^{2}$. To simplify notation we write its elements $\left\{t,ts_1,ts_2,ts_1s_2\right\}$ as $\left\{a,b,c,d\right\}$ correspondingly. Let $Q$ denote the conjugation quandle whose elements are $\left\{a,b,c,d\right\}$ and whose quandle operation is given by Table \ref{T:A4Table}. This table is found also in \cite[Figure 2]{HN05}.
\begin{figure}
\caption{The quandle operation of $Q$ on $\left\{a,b,c,d\right\}$.}\label{T:A4Table}
\end{figure}
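For completeness, the quandle operation on the coset $t\left(\mathds{Z}/2\mathds{Z}\right)^{2}$ can be tabulated mechanically by realising $A_4$ as the even permutations of four points. The sketch below (plain Python) does this; the particular permutations chosen for $t$, $s_1$, $s_2$, the multiplication order, and the conjugation convention $x^y\ass yxy^{-1}$ are our own choices, so the resulting table need only agree with Table \ref{T:A4Table} up to relabelling:
\begin{verbatim}
from itertools import product

def compose(p, q):               # (p*q)[i] = p[q[i]], i.e. apply q, then p
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

t  = (1, 2, 0, 3)                # the 3-cycle (0 1 2)
s1 = (1, 0, 3, 2)                # (0 1)(2 3)
s2 = (2, 3, 0, 1)                # (0 2)(1 3)

elems = {'a': t, 'b': compose(t, s1), 'c': compose(t, s2),
         'd': compose(t, compose(s1, s2))}
names = {perm: name for name, perm in elems.items()}

for x, y in product('abcd', repeat=2):   # conjugation stays in the coset
    conj = compose(elems[y], compose(elems[x], inverse(elems[y])))
    print(x, '^', y, '=', names[conj])
\end{verbatim}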
The connect-sum $(K_1,\rho_1)\Hash(K_2,\rho_2)$ of $A_4$-coloured knots $(K_{1,2},\rho_{1,2})$ is well-defined, and does not depend on the choice of basepoints, as proven in \cite[Lemma 4]{Mos06b}. If one of the connect summands is an invertible knot (ambient isotopic to itself with the opposite orientation), and if its $A_4$-colouring is unique up to inner automorphism, then the connect sum is independent of the choice of orientations. This implies in particular the following.
\begin{lem}\label{L:consumwelldef}
If $K_{1,2}$ are connect sums of trefoil knots and of figure-eight knots, and if $\rho_{1,2}$ are their corresponding unique $A_4$-colourings, then $(K_1,\rho_1)\Hash(K_2,\rho_2)$ is independent of the orientations of $K_{1,2}$.
\end{lem}
\subsection{Proof of Theorem {\ref{T:A4Theorem}}}
We identify some $\rho$--equivalences between trefoils and figure-eight knots by explicitly finding sequences of twist moves which relate them. The notation $(K_1,\rho_1)\sim (K_2,\rho_2)$ means that $(K_{1,2},\rho_{1,2})$ are $\rho$--equivalent.
\begin{lem}\label{L:A4idents}
\begin{enumerate}
\item $3_1^l\Hash 4_1\sim 3_1^r$ and by reflection $3_1^r\Hash 4_1\sim 3_1^l$.
\item $3_1^r\Hash 3_1^r\sim 4_1$.
\item $3_1^l\Hash 3_1^r\sim 4_1$.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item
\begin{multline*}
\begin{minipage}{110pt}
\psfrag{1}[c]{$b$}\psfrag{2}[c]{$c$}\psfrag{4}[c]{$a$}\psfrag{5}[c]{$d$}
\includegraphics[width=110pt]{41rl-1}
\end{minipage}\ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{twist}}}}{\Longleftrightarrow}\ \
\begin{minipage}{110pt}
\psfrag{1}[c]{$b$}\psfrag{2}[c]{$c$}\psfrag{4}[b]{$a$}\psfrag{5}[c]{$d$}
\includegraphics[width=110pt]{41rl-2}
\end{minipage}
\overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\ \
\begin{minipage}{65pt}
\psfrag{a}[c]{$a$}\psfrag{b}[c]{$b$}
\includegraphics[width=65pt]{T32r}
\end{minipage}
\end{multline*}
\item
\begin{multline*}
\begin{minipage}{120pt}
\psfrag{1}[c]{$c$}\psfrag{2}[c]{$b$}\psfrag{3}[c]{$d$}\psfrag{5}[c]{$b$}\psfrag{6}[c]{$c$}\psfrag{7}[c]{$a$}
\includegraphics[width=120pt]{rr41-1}
\end{minipage}\ \ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{twist}}}}{\Longleftrightarrow}\ \
\begin{minipage}{120pt}
\psfrag{1}[c]{$c$}\psfrag{2}[c]{$b$}\psfrag{3}[c]{$d$}\psfrag{5}[c]{$b$}\psfrag{6}[c]{$c$}\psfrag{7}[c]{$a$}
\includegraphics[width=120pt]{rr41-2}
\end{minipage}\\[0.3cm]
\ \ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\ \
\begin{minipage}{90pt}
\psfrag{1}[c]{$a$}\psfrag{2}[c]{$b$}
\includegraphics[width=90pt]{Figure8col}
\end{minipage}
\end{multline*}
\item
\begin{multline*}
\begin{minipage}{110pt}
\psfrag{1}[c]{$a$}\psfrag{8}[c]{$d$}\psfrag{2}[c]{$b$}\psfrag{5}[c]{$b$}\psfrag{3}[c]{$c$}\psfrag{4}[c]{$a$}
\includegraphics[width=110pt]{trefkill-1b}
\end{minipage}\ \ \ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{twist}}}}{\Longleftrightarrow}\ \
\begin{minipage}{110pt}
\psfrag{1}[c]{$a$}\psfrag{6}[c]{$b$}\psfrag{8}[c]{$b$}\psfrag{9}[c]{$d$}\psfrag{2}[c]{$a$}
\includegraphics[width=110pt]{trefkill-2b}
\end{minipage}\\[0.3cm]
\overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\
\begin{minipage}{90pt}
\psfrag{a}[c]{$a$}\psfrag{b}[c]{$b$}
\includegraphics[width=90pt]{trefkill-3b}
\end{minipage}
\ \ \overset{\raisebox{2pt}{\scalebox{0.8}{{\ref{E:add4twist}}}}}{\Longleftrightarrow}\
\begin{minipage}{90pt}
\psfrag{a}[c]{$a$}\psfrag{b}[c]{$b$}
\includegraphics[width=90pt]{trefkill-4b}
\end{minipage}\\
\overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\ \ \begin{minipage}{80pt}
\psfrag{1}[c]{$a$}\psfrag{2}[c]{$b$}
\includegraphics[width=80pt]{Figure8cor}
\end{minipage}
\end{multline*}
\noindent where for the penultimate step we subtract four full twists via the following sequence of null-twists:
\begin{equation}\label{E:add4twist}
\begin{minipage}{20pt}
\includegraphics[height=120pt]{add4twist-0}
\end{minipage}\overset{\raisebox{2pt}{\scalebox{0.8}{\text{isotopy}}}}{\Longleftrightarrow}\ \ \
\begin{minipage}{60pt}
\includegraphics[height=120pt]{add4twist-1}
\end{minipage}\ \overset{\raisebox{2pt}{\scalebox{0.8}{\text{null-twist}}}}{\Longleftrightarrow}\
\begin{minipage}{60pt}
\includegraphics[height=120pt]{add4twist-2}
\end{minipage}
\overset{\raisebox{2pt}{\scalebox{0.8}{\text{null-twist}}}}{\Longleftrightarrow}\ \ \
\begin{minipage}{30pt}
\includegraphics[height=120pt]{add4twist-3}
\end{minipage}.
\end{equation}
\end{enumerate}
\end{proof}
\begin{proof}[Proof of Theorem {\ref{T:A4Theorem}}]
As a corollary to Lemma \ref{L:A4idents} we have
\begin{equation}
3_1^r\sim 3_1^l\Hash 4_1 \sim 3_1^r\Hash 3_1^l\Hash 3_1^r \sim 3_1^r\Hash 4_1\sim 3_1^l.
\end{equation}
Thus, up to $\rho$--equivalence, there is no need to distinguish between $3_1^r$ and $3_1^l$, and we may call them both $3_1$. By looking back at our list of representatives of $\bar\rho$--equivalence classes, we now know that any $A_4$ coloured knot is $\rho$--equivalent to one of $\left\{3_1,4_1,3_1\Hash 3_1,3_1\Hash 4_1,4_1\Hash 4_1\right\}$. The classes of $3_1\Hash 3_1$ and of $4_1$ are the same by Lemma \ref{L:A4idents}, as are the classes of $3_1\Hash 4_1$ and of $3_1$. Finally, the classes of $4_1\Hash 4_1$ and of $4_1$ are the same, because
\begin{equation}
4_1\Hash 4_1 \sim 4_1\Hash 3_1\Hash 3_1\sim 4_1\Hash 3_1\sim 4_1.
\end{equation}
Therefore the map $\psi$, which maps $0$ to $3_1$ and $1$ to $4_1$, is a bijection, and under it the connect sum on $S$ corresponds to the group operation on $\mathcal{C}_2$.
\end{proof}
\section{Additional questions}\label{S:conclusion}
We have classified $G$--coloured knots up to $\rho$--equivalence for a large class of metabelian groups $G=\mathcal{C}_m\ltimes A$ with $\Rank(A)\leq 2$. This work raises a number of additional questions.
\begin{enumerate}
\item Classify $G$--coloured knots up to $\rho$--equivalence for a wider class of groups. The particularly interesting cases seem to be:
\begin{itemize}
\item For metacyclic groups with $\Ab\,G\approx \mathcal{C}_3$, we have classified $G$--coloured knots up to $\bar\rho$--equivalence. However the coloured untying invariant is trivial, so we have no lower bound on the number of $\rho$--equivalence classes.
\item For metabelian groups $G$ with $\Rank(A)>2$, the techniques are the same but the matrices are bigger, and one must take the $Y$--obstruction into account. In general, can one determine for which groups our invariants classify $G$--coloured knots up to $\bar\rho$--equivalence?
\item Polycyclic groups. How can our methods be iterated?
\item The symmetric group $S_4$ and the alternating group $A_5$ are finite subgroups of $SO(3)$, and the classification of their $\rho$--equivalence classes looks interesting.
\item It makes sense to consider the $\rho$--equivalence classification problem not only for groups, but also for more general quandle colourings.
\end{itemize}
\item For $G$ metabelian, classify $G$--coloured links, perhaps in $3$--manifolds, up to $\rho$--equivalence. My guess is that one would need to work with the maximal abelian covering directly, instead of using a Seifert matrix.
\item In order to apply our classification results to the construction of quantum topological invariants, the base knots have to be sufficiently `nice'. What are the conditions on $G$ for each $G$--coloured knot to be $\rho$--equivalent to:
\begin{itemize}
\item A knot with unknotting number 1?
\item A fibred knot?
\end{itemize}
\item Find a conceptual reason that different flavours of $\rho$--equivalence should coincide for some groups but not for others. Can this be detected homologically?
\end{enumerate}
\end{document}
\begin{document}
\title{Clique immersion in graph products}
\author[Collins]{Karen L. Collins}
\address[Karen L. Collins]{Department of Mathematics and Computer Science, Wesleyan University, Middletown, CT, USA 06459} \email[Karen L. Collins]{[email protected]}
\author[Heenehan]{Megan E. Heenehan}
\address[Megan E. Heenehan]{Department of Mathematical Sciences, Eastern Connecticut State University, Willimantic, CT, USA 06226} \email[Megan Heenehan]{[email protected]}
\author[McDonald]{Jessica McDonald} \address[Jessica McDonald]{Department of Mathematics and Statistics, Auburn University, Auburn, AL, USA 36849}\email[Jessica McDonald]{[email protected]}
\thanks{The third author is supported in part by NSF grant DMS-1600551}
\date{}
\maketitle
\begin{abstract} Let $G,H$ be graphs and $G*H$ represent a particular graph product of $G$ and $H$. We define $\im(G)$ to be the largest $t$ such that $G$ has a $K_t$-immersion and ask: given $\im(G)=t$ and $\im(H)=r$, how large is $\im(G*H)$? Best possible lower bounds are provided when $*$ is the Cartesian or lexicographic product, and a conjecture is offered for each of the direct and strong products, along with some partial results.
\end{abstract}
\section{Introduction}
In this paper every graph is assumed to be simple.
Formally, a pair of adjacent edges $uv$ and $vw$ in a graph are \emph{split off} (or \emph{lifted}) from their common vertex $v$ by deleting the edges $uv$ and $vw$, and adding the edge $uw$. Given graphs $G, G'$, we say that $G$ has a \emph{$G'$-immersion} if a graph isomorphic to $G'$ can be obtained from a subgraph of $G$ by splitting off pairs of edges, and removing isolated vertices. We define the \emph{immersion number} of a graph $G$, denoted $\im(G)$, to be the largest value $t$ for which $G$ has a $K_t$-immersion. We call the $t$ vertices corresponding to those in the $K_t$-immersion the \emph{terminals} of the immersion.
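As a small concrete illustration of the splitting-off operation (a sketch only, assuming the \texttt{networkx} library; a multigraph is used so that any repeated edge created by a split is retained):
\begin{verbatim}
import networkx as nx

def split_off(G, u, v, w):
    # split off the adjacent edges uv and vw from v: delete them, add uw
    G.remove_edge(u, v)
    G.remove_edge(v, w)
    G.add_edge(u, w)

G = nx.MultiGraph([('a', 'b'), ('b', 'c')])
split_off(G, 'a', 'b', 'c')
print(list(G.edges()))   # [('a', 'c')]; 'b' is now isolated
\end{verbatim}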
Immersions have enjoyed increased interest in recent years (see e.g. \cite{CDKS, CH, DDFMMS, DW, DY, KK, V17, Wa, Wo}). A major factor in this was Robertson and Seymour's \cite{RS23} proof that graphs are well-quasi-ordered by immersion, published as part of their celebrated graph minors project (where they show that graphs are well-quasi-ordered by minors). Although graph minors and graph immersions are incomparable, it is interesting to ask the same questions about both. In the realm of minors, motivated by Hadwiger's conjecture~\cite{hadwiger}, authors have asked: what is the largest complete graph that is a minor of a given graph? In this paper we ask: what is the largest complete graph that is immersed in a given graph? Similar questions were also asked about subdivisions after Haj\'os~\cite{Haj} conjectured that a graph with chromatic number $n$ would have a subdivision of a $K_n$. However, this conjecture is false for $n\geq 7$ \cite{C}. Since every subdivision is an immersion, but not every immersion is a subdivision, we examine whether or not the counterexamples provided by Catlin in \cite{C} have immersion numbers of interest.
In this paper, we are interested in the immersion number of graph products. In particular, for graphs $G$ and $H$, we consider the four standard graph products: the \emph{lexicographic product} $G\circ H$, the \emph{Cartesian product} $G\Box H$, the \emph{direct product} $G\times H$, and the \emph{strong product} $G \boxtimes H$. The central question of this paper is the following.
\begin{quest}\label{ques}
Let $G$ and $H$ be graphs with $\im(G)=t$ and $\im(H)=r$. For each of the four standard graph products, $G*H$, is $\im(G*H)\geq\im(K_t*K_r)$?
\end{quest}
In this paper we determine that the answer to Question~\ref{ques} is yes for the lexicographic and Cartesian products. In addition we provide partial results for the direct product and conjecture that the answer is yes for the direct and strong products. In determining the immersion number for $K_t*K_r$, for any product *, we choose as our terminals a vertex and all of its neighbors. In trying to affirmatively answer Question~\ref{ques}, we use a similar strategy for choosing terminals.
We will now summarize the results in each section of the paper. In Section~\ref{prelims}, we describe necessary background, including an alternate definition of graph immersion that we use throughout the rest of the paper, and explain our strategy for choosing terminals in a graph product. In Section~\ref{Lexicographic}, we discuss the lexicographic product and affirmatively resolve Question~\ref{ques} for the lexicographic product of two or more graphs (Theorem~\ref{lex}). There is an appealing immersion-analog of the Haj\'os Conjecture \cite{Haj} by Abu-Khzam and Langston \cite{AKL}, namely, that $\chi(G)\geq t$ implies $\im(G)\geq t$. While the Haj\'os Conjecture was disproved by Catlin \cite{C} using lexicographic products as counterexamples, in Section~\ref{Lexicographic}, we show
that the lexicographic product does not yield smallest counterexamples to the Abu-Khzam and Langston conjecture.
In Section~\ref{Cartesian}, we discuss the Cartesian product. In Section~\ref{CartBound}, we affirmatively resolve Question~\ref{ques} for the Cartesian product of two graphs (Theorem~\ref{box}) and provide a contrasting example of an immersion number greater than that given in the theorem. In Section~\ref{CartPowers}, we extend our results to products with more than two factors. In particular, we show the immersion number of the $d$-dimensional hypercube, $Q_d$, is $d+1$ and the immersion number of the Hamming graph, $K_n^d$, is $d(n-1)+1$. For the Cartesian product of a path on $n$ vertices with itself $d$ times, denoted $P_n^d$, we show $\im(P_n^d)=2d+1$. We compare the results for hypercubes, Hamming graphs, and $P_n^d$ to results by Kotlov~\cite{Ko01} and Chandran and Sivadasan~\cite{ChSi07} for graph minors. In addition, we show we can do better than the bound of Theorem~\ref{box} by proving $\im(G\Box P_n)\geq t+2$ when $\im(G) =t$.
In Section~\ref{Direct}, we conjecture that the answer to Question~\ref{ques} is yes for the direct product of two graphs and provide partial results towards the proof of this conjecture (Conjecture~\ref{direct}). In Section~\ref{DirectComplete}, we find the immersion number of $K_t\times K_r$ (Theorem~\ref{KtTimesKr}) and thus that the conjecture holds when $G$ and $H$ contain $K_t$ and $K_r$ as subgraphs, respectively (Corollary~\ref{CompleteSubgraphTimes}). In addition, we prove that the conjecture holds when $r\geq 3$ and $K_r$ is a subgraph of $H$ (Theorem~\ref{GtimesKr}). In Section~\ref{PegParity}, we extend these results to $G$ and $H$ having immersions in which all of the paths have the same parity (Theorem~\ref{DirectParity} and Theorem~\ref{GTimesParity}). In Section~\ref{DirectEx}, we provide some examples. In Section~\ref{limitations}, we discuss the cases that remain to prove the conjecture and the limitations of our current proof techniques.
Finally in Section~\ref{final}, we end with some concluding remarks and a conjecture about the strong product.
\section{Preliminaries}\label{prelims}
In this paper all graphs are finite and simple. For graph products we follow the notation of \cite{HIK} and \cite{ImKl00}.
One definition of immersion was provided in the Introduction; an alternative definition for graph immersion is as follows. Given graphs $G$ and $G'$, $G$ has a $G'$-immersion if there is an injective function $\phi: V(G')\to V(G)$ such that for each edge $uv\in E(G')$, there is a path in $G$ joining vertices $\phi(u)$ and $\phi(v)$, and these paths are pairwise edge-disjoint over all $uv\in E(G')$. We denote the path joining $\phi(u)$ and $\phi(v)$ in $G$ by $P_{u,v}$. In this context we call the vertices of $\{\phi(v): v\in V(G')\}$ the \emph{terminals} (or \emph{corners}) of the $G'$-immersion, and we call internal vertices of the paths $\{P_{u,v}: uv\in E(G')\}$ the \emph{pegs} of the $G'$-immersion. In this paper we will often refer to the terminals and pegs of an immersion.
For the reader's convenience, we begin by providing a definition of each of the four standard graph products. Each graph product is defined to have vertex set $V(G)\times V(H)$. Two vertices $(g, h)$ and $(g', h')$ are defined to be adjacent if: $gg'\in E(G)$ or $g=g'$ and $hh' \in E(H)$ (lexicographic product); $g=g'$ and $hh' \in E(H)$, or $gg' \in E(G)$ and $h=h'$ (Cartesian product); $gg' \in E(G)$ and $hh' \in E(H)$ (direct product). The edge set of the strong product is defined to be $E(G \Box H) \cup E(G \times H)$.
In order to contain a $K_n$-immersion, a graph must not only have at least $n$ vertices, but it must have at least $n$ vertices whose degree is at least $n-1$. In particular, this gives the following observation.
\begin{remark}\label{Delta} For every graph $G$, $\im(G) \leq \Delta(G)+1$.
\end{remark}
Given Remark \ref{Delta} we make the following proposition for a bound on the immersion number of each product.
\begin{proposition}\label{MaxProducts} Given two graphs $G$ and $H$ where $n$ is the number of vertices in $H$,
\begin{enumerate}
\item $\im(G\circ H)\leq\Delta(H)+n\Delta(G)+1$,
\item $\im(G\Box H) \leq \Delta(G) + \Delta(H) +1$,
\item $\im(G\times H) \leq \Delta(G)\Delta(H) +1$, and
\item $\im(G\boxtimes H) \leq \Delta(G)\Delta(H) +\Delta(G) + \Delta(H)+1$.
\end{enumerate}
\end{proposition}
\begin{proof} Let $G$ and $H$ be graphs with maximum degrees $\Delta(G)$ and $\Delta(H)$ respectively such that $H$ has $n$ vertices.
\underline{Case 1:} By the definition of the lexicographic product, $\Delta(G\circ H) = \Delta(H) + n\Delta(G)$. Therefore, by Remark \ref{Delta}, $\im(G\circ H) \leq \Delta(H) + n\Delta(G) +1$.
\underline{Case 2:} By the definition of the Cartesian product, $\Delta(G\Box H) = \Delta(G) + \Delta(H)$. Therefore, by Remark \ref{Delta}, $\im(G\Box H) \leq \Delta(G) + \Delta(H) +1$.
\underline{Case 3:} By definition of the direct product $\Delta(G \times H) = \Delta(G)\Delta(H)$. Therefore, by Remark \ref{Delta}, $\im(G\times H) \leq \Delta(G)\Delta(H) +1$.
\underline{Case 4:} By definition of the strong product $\Delta(G \boxtimes H) = \Delta(G)\Delta(H)+\Delta(G)+\Delta(H)$. Therefore, by Remark \ref{Delta}, $\im(G\boxtimes H) \leq \Delta(G)\Delta(H) +\Delta(G)+\Delta(H)+1$.
\end{proof}
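These maximum-degree identities are easy to confirm numerically on small examples. A quick sanity check, assuming the product constructions in \texttt{networkx} (the test graphs are arbitrary):
\begin{verbatim}
import networkx as nx

G, H = nx.petersen_graph(), nx.path_graph(4)
dG = max(d for _, d in G.degree())
dH = max(d for _, d in H.degree())
n = H.number_of_nodes()

def max_deg(P):
    return max(d for _, d in P.degree())

assert max_deg(nx.lexicographic_product(G, H)) == dH + n * dG
assert max_deg(nx.cartesian_product(G, H)) == dG + dH
assert max_deg(nx.tensor_product(G, H)) == dG * dH        # direct product
assert max_deg(nx.strong_product(G, H)) == dG * dH + dG + dH
\end{verbatim}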
In trying to affirmatively answer Question~\ref{ques}, we use the same general strategy for each product. We consider graphs $G$ and $H$ with $\im(G) = t$ and $\im(H)=r$. We look at a $K_t$-immersion in $G$ with terminals $v_1, v_2, \ldots, v_t$ and a $K_r$-immersion in $H$ with terminals $u_1, u_2, \ldots, u_r$. We then consider $K_t*K_r$ with the vertices labeled $v_1, v_2, \ldots, v_t$ in $K_t$ and $u_1, u_2, \ldots, u_r$ in $K_r$. As terminals for our immersion in $G*H$ we take a vertex of $K_t*K_r$ (usually $(v_1, u_1)$) and all of its neighbors -- these are vertices in $G*H$ since each $v_i\in V(G)$ and each $u_j\in V(H)$. We are then able to use the $K_t$-immersion in $G$ and the $K_r$-immersion in $H$ along with the structure of the specific product to find paths in $G*H$ connecting these terminals.
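For instance, in the Cartesian product the closed neighborhood of $(v_1,u_1)$ in $K_t\,\Box\,K_r$ has exactly $t+r-1$ vertices, which is the candidate terminal set in that case. A two-line check of this count (again assuming \texttt{networkx}; the values of $t$ and $r$ are arbitrary):
\begin{verbatim}
import networkx as nx

t, r = 5, 4
P = nx.cartesian_product(nx.complete_graph(t), nx.complete_graph(r))
terminals = {(0, 0)} | set(P.neighbors((0, 0)))   # a vertex and its neighbors
print(len(terminals))                             # t + r - 1 = 8
\end{verbatim}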
We begin with a discussion of the lexicographic product.
\section{Lexicographic Products}\label{Lexicographic}
The lexicographic product is of particular interest because Catlin \cite{C} disproved the Haj\'os Conjecture \cite{Haj}, that is, if $\chi(G) = n$, then $G$ contains a subdivision of $K_n$, using lexicographic products of odd cycles and complete graphs. Every subdivision is an immersion, but not every immersion is a subdivision. Abu-Khzam and Langston conjectured that if $\chi(G) = n$, then $G$ contains an immersion of $K_n$ \cite{AKL}. Theorem~\ref{lex} implies that a lexicographic product is never a smallest counterexample to the Abu-Khzam and Langston conjecture, since if $G$ and $H$ satisfy the conjecture, then $G\circ H$ also satisfies the conjecture.
In the following theorem we prove a lower bound for the immersion number of the lexicographic product of two graphs.
\begin{theorem}\label{lex}
Let $G$ and $H$ be graphs with $\im(G)=t$ and $\im(H)=r$. Then $\im(G\circ H) \geq tr$.
\end{theorem}
The following global definition of the lexicographic product will be helpful in our proof of the bound. For graphs $G$ and $H$, $G\circ H$ is obtained from a copy of $G$ by replacing each vertex in $G$ with a copy of $H$, and replacing each edge in $G$ with a complete bipartite graph. Note that the lexicographic product is not commutative, so the roles of $G$ and $H$ in this global definition cannot be reversed.
\begin{proof}(Theorem~\ref{lex})
Fix a $K_t$-immersion in $G$ and a $K_r$-immersion in $H$. Consider $G\circ H$ (with the global description given above). In each of the $t$ copies of $H$ that correspond to terminals of the $K_t$-immersion in $G$, take the $r$ terminals of the $K_r$-immersion in that copy; these $tr$ vertices are the terminal vertices of our $K_{tr}$-immersion.
Within each copy of $H$ there is a $K_r$-immersion, which we use to get the required paths between the $r$ terminals that we have chosen in that copy (for our $K_{tr}$-immersion). Consider now, in $G\circ H$, two copies of $H$ that correspond to terminals $u$ and $w$ of the $K_t$-immersion in $G$. Let $H_v$ denote the copy of $H$ corresponding to a vertex $v$. Since there is a path between $u$ and $w$ in the $K_t$-immersion in $G$, $H_u$ is connected to $H_w$ by a sequence of copies of $H$, where each consecutive pair of copies yields a $K_{r,r}$ between the two copies. There are two cases: (i) $u$ and $w$ are adjacent and (ii) $u$ and $w$ are not adjacent. If $u$ and $w$ are adjacent, then each vertex in the set of $r$ terminals in $H_u$ is adjacent to each vertex in the set of $r$ terminals in $H_w$. If $u$ and $w$ are not adjacent, let the path between $u$ and $w$ in $G$ be $u, x_1, x_2, \ldots, x_s, w$. It is well known that $K_{r,r}$ has a proper edge-coloring with $r$ colors. Choose any such $r$-edge-coloring of $K_{r,r}$, and apply this coloring to the edges between $H_{x_i}$ and $H_{x_{i+1}}$ for $1\leq i\leq s-1$ and also to the edges between $H_{x_s}$ and $H_w$. Let the terminals in $H_u$ be $z_1, z_2, \ldots, z_r$. Color each edge $e$ between $H_u$ and $H_{x_1}$ with color $i$ if and only if $e$ is incident to $z_i$. The resulting edge-coloring gives an $i$-colored path from $z_i$ to each terminal in $H_w$. These edges provide the required edge-disjoint paths between our two copies of $H$. Therefore $G\circ H$ has a $K_{tr}$-immersion.
\end{proof}
Figure~\ref{K_4_4Lex} shows an example of the edge-coloring process described in the above proof, and the paths that are formed.
\begin{figure}
\caption{Illustration of the edge-coloring described in the proof of Theorem \ref{lex}.}
\label{K_4_4Lex}
\end{figure}
\begin{corollary} Given graphs $G_1, G_2, \ldots, G_\ell$, $$\im(G_1\circ G_2\circ \ldots \circ G_\ell) \geq \im(G_1)\im(G_2)\ldots\im(G_\ell).$$
\end{corollary}
\begin{proof} Let $G_1, G_2, \ldots ,G_\ell$ be graphs. When $\ell = 2$, Theorem \ref{lex} gives $\im(G_1\circ G_2) \geq \im(G_1)\im(G_2)$. Assume that for some $k\geq 2$, $\im(G_1\circ G_2\circ \ldots \circ G_k) \geq \im(G_1)\im(G_2)\ldots\im(G_k)$. When $\ell = k+1$, $G_1\circ G_2\circ \ldots \circ G_k\circ G_{k+1} = (G_1\circ G_2\circ \ldots \circ G_k)\circ G_{k+1}$. Thus, by Theorem \ref{lex} and induction, $\im(G_1\circ G_2\circ \ldots \circ G_k\circ G_{k+1}) \geq\im(G_1)\im(G_2)\ldots\im(G_{k+1})$.
\end{proof}
Together, Theorem \ref{lex} and Proposition \ref{MaxProducts} imply the following corollaries.
\begin{corollary} $\im(K_t\circ K_r)=tr$.
\end{corollary}
\begin{proof} By Theorem \ref{lex} $\im(K_t\circ K_r)\geq tr$ and by Proposition~\ref{MaxProducts}, $\im(K_t\circ K_r)\leq (r-1) + r(t-1)+1 = tr$. Therefore $\im(K_t\circ K_r)=tr$.
\end{proof}
\begin{corollary}\label{CycleLexComplete} For $n\geq 3$, $\im(C_n\circ K_r)=3r$.
\end{corollary}
\begin{proof} By Theorem \ref{lex} $\im(C_n\circ K_r)\geq 3r$ and by Proposition~\ref{MaxProducts}, $\im(C_n\circ K_r)\leq (r-1) + r(2)+1 = 3r$. Therefore $\im(C_n\circ K_r)=3r$.
\end{proof}
As an example where we can do better than the bound of Theorem \ref{lex}, consider $K_3\circ C_5$.
\begin{proposition} $\im(K_3\circ C_5) = 13$ \end{proposition}
\begin{proof}
By Proposition~\ref{MaxProducts}, $\im(K_3\circ C_5) \leq 2 + 5(2)+1 = 13$. We now describe the immersion. Label the vertices of $K_3$ as $v_1, v_2,$ and $v_3$. Label consecutive vertices in $C_5$ as $u_1, u_2, u_3, u_4,$ and $u_5$ so that $u_1$ and $u_5$ are adjacent. All vertices are used as terminals except $(v_2, u_1)$ and $(v_3, u_1)$. Terminals in different copies of $C_5$ are connected by an edge, and terminals that are consecutive on the same copy of $C_5$ are connected by a cycle edge. It remains to connect the non-consecutive terminals in the same copy of $C_5$ to each other. To complete the immersion we use the following paths.
$(v_1,u_1) - (v_2, u_1) - (v_1, u_4)$
$(v_1,u_2) - (v_2, u_1) - (v_1, u_5)$
$(v_1, u_2) - (v_3, u_1) - (v_1, u_4)$
$(v_1, u_3) - (v_3, u_1) - (v_1, u_5)$
$(v_1, u_1) - (v_3, u_1) - (v_2, u_1) - (v_1, u_3)$
$(v_2, u_2) - (v_3, u_1) - (v_2, u_4)$
$(v_2, u_3) - (v_3, u_1) - (v_2, u_5)$
$(v_3, u_2) - (v_2, u_1) - (v_3, u_4)$
$(v_3, u_3) - (v_2, u_1) - (v_3, u_5)$
$(v_2, u_2) - (v_2, u_1) - (v_2, u_5)$
$(v_3, u_2) - (v_3, u_1) - (v_3, u_5)$
\end{proof}
A similar strategy to the above may be used to show $\im(C_7 \circ C_5) = 13$.
Next we explore the Cartesian product.
\section{Cartesian products}\label{Cartesian}
The following global definition of the Cartesian product will be helpful in our proof of Theorem \ref{box}. Given graphs $G$ and $H$, the graph $G\Box H$ can be obtained from a copy of $H$ by replacing each vertex in $H$ with a copy of $G$, and replacing each edge in $H$ with a perfect matching that pairs identical vertices in the copies of $G$. Since the Cartesian product is commutative, we may switch the roles of $G$ and $H$ without changing the results. In Section~\ref{CartBound}, we discuss bounds for $\im(G\Box H)$. In Section~\ref{CartPowers}, we discuss the immersion numbers of several graph powers.
\subsection{Bounds on the immersion number}\label{CartBound}
We begin by affirmatively answering Question~\ref{ques} for the Cartesian product.
\begin{theorem}\label{box} Let $G$ and $H$ be graphs with $\im(G)=t$ and $\im(H)=r$. Then $\im(G\Box H) \geq t+r-1$.
\end{theorem}
\begin{proof}
Fix a $K_t$-immersion in $G$ and a $K_r$-immersion in $H$.
Consider $G\Box H$ with the global description described above. Suppose the terminals of the $K_t$-immersion in $G$ are $u_1, u_2, \ldots, u_t$, and suppose the terminals of the $K_r$-immersion in $H$ are $v_1, v_2, \ldots, v_r$. Choose a copy of $G$ that corresponds to a terminal $v_k$ of the $K_r$-immersion in $H$. For the terminals in our $K_{t+r-1}$-immersion, we choose the vertices that correspond to the terminals of the $K_t$-immersion in this copy of $G$,
\begin{equation}\label{vkterms}
(u_1,v_k), (u_2,v_k), \ldots, (u_t,v_k),
\end{equation}
along with the vertices \begin{equation}\label{ulterms} (u_l, v_1), (u_l, v_2), \ldots, (u_l, v_{k-1}), (u_l, v_{k+1}), \ldots, (u_l, v_r)
\end{equation}
where $u_l$ is some fixed terminal of the $K_t$-immersion in $G$.
For the paths between the terminals in set (\ref{vkterms}), use the edge-disjoint paths provided by the $K_t$-immersion in $G$ (keeping $v_k$ constant). For the paths between the terminals in set (\ref{ulterms}), use the paths provided by the $K_r$-immersion in $H$ (keeping $u_l$ constant, and joining $v_1, v_2, \ldots, v_{k-1}, v_{k+1}, \ldots, v_r$). It remains to connect $(u_i, v_k)$ to $(u_l, v_j)$ for $1 \leq i \leq t$ and $1 \leq j \leq r$ with $j\neq k$. For this, first use the path from $v_k$ to $v_j$ in the $K_r$-immersion, keeping $u_i$ constant, to get from $(u_i, v_k)$ to $(u_i, v_j)$. Then use the path from $u_i$ to $u_l$ in the $K_t$-immersion, keeping $v_j$ constant, to get from $(u_i, v_j)$ to $(u_l, v_j)$. Note that the first segment of this path is edge-disjoint from the paths we used to connect the vertices in (\ref{ulterms}), and the second segment of this path is the first time we have used edges within the $v_j$ copy of $G$. Hence we have built a $K_{t+r-1}$-immersion in $G\Box H$ and $\im(G\Box H)\geq t+r-1$.
\end{proof}
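For example, with $t=r=3$ and $k=l=1$, the construction above selects the $t+r-1=5$ terminals
$$(u_1,v_1),\ (u_2,v_1),\ (u_3,v_1),\ (u_1,v_2),\ (u_1,v_3),$$
namely the three terminals of the $K_3$-immersion in the $v_1$ copy of $G$ together with the vertex $u_1$ in the copies of $G$ corresponding to $v_2$ and $v_3$.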
The following corollary shows that the above bound is tight for the Cartesian product of two complete graphs.
\begin{corollary} $\im(K_t\Box K_r) = t+r-1$
\end{corollary}
\begin{proof} By Theorem~\ref{box}, $\im(K_t\Box K_r) \geq t+r-1$. By Proposition~\ref{MaxProducts}, $\im(K_t\Box K_r) \leq (t-1)+(r-1)+1 = t+r-1$. Therefore $\im(K_t\Box K_r) = t+r-1$.
\end{proof}
As an example where we can do better than the bound of Theorem~\ref{box}, we now prove $\im(G\Box P_n)\geq t+2$ for $n\geq5$, when $G\neq K_t$.
\begin{theorem}\label{BoxPath} Let $G$ be connected with $\im(G)=t$. Then $\im(G\Box P_n)\geq t+2$ for $n\geq5$ and $G\neq K_t$.
\end{theorem}
\begin{proof} Let $G\neq K_t$ be a connected graph with $\im(G)=t$ and let $v_1, v_2, \ldots, v_t$ be the terminals in a $K_t$-immersion in $G$. Since $G\neq K_t$ there is at least one non-terminal vertex in $G$, call it $x$. Let $u_1, u_2, \ldots, u_n$ be the vertices of $P_n$ in order along the path. We choose as our terminals of the $K_{t+2}$-immersion, $(v_1, u_3), (v_2, u_3), \ldots, (v_t, u_3)$, $(v_1, u_2)$, and $(v_1, u_4)$.
For the paths between $(v_i, u_3)$ and $(v_j, u_3)$ we use the paths provided by the $K_t$-immersion in the copy of $G$ corresponding to $u_3$. In order to complete the immersion we need edge-disjoint paths from $(v_1, u_2)$ and $(v_1, u_4)$ to each $(v_i, u_3)$, and between $(v_1, u_2)$ and $(v_1, u_4)$. For the paths from $(v_1, u_j)$, for $j=2$ or $4$, to $(v_i, u_3)$ use the edge-disjoint paths from $(v_1, u_j)$ to $(v_i, u_j)$ in the $u_j$ copy of $G$ followed by the edge $(v_i, u_j)-(v_i, u_3)$. For the path from $(v_1, u_2)$ to $(v_1, u_4)$ use the edge $(v_1, u_2)-(v_1, u_1)$ followed by any path to $(x, u_1)$ in the $u_1$ copy of $G$, then the path $(x, u_1)-(x, u_2)-(x, u_3)-(x, u_4)-(x, u_5)$, then any path from $(x, u_5)$ to $(v_1, u_5)$ in the $u_5$ copy of $G$, and finally the edge $(v_1, u_5)-(v_1, u_4)$. This completes the immersion of $K_{t+2}$.
\end{proof}
To explain why we require $G\neq K_t$ in Theorem~\ref{BoxPath}, we now show that $\im(K_t\Box P_n)=t+1$. This also provides an example of a graph that does not attain the bound of Proposition~\ref{MaxProducts}, since $\Delta(K_t\Box P_n)=t+1$.
\begin{theorem}\label{completeBoxpath} Given a complete graph on $t$ vertices, $K_t$, and a path on $n\geq2$ vertices, $P_n$, $\im(K_t\Box P_n)=t+1$.\end{theorem}
To prove Theorem~\ref{completeBoxpath} we use the following lemma which follows from the \emph{Corner Separating Lemma} found in \cite{CH}.
\begin{lemma}\label{edgecut} Let $G$ be a graph, $C$ a cutset of edges in $G$, and $M$ a connected component of $G-C$. If $G$ has a complete graph immersion in which $k$ terminals are in $G-M$ and $j$ terminals are in $M$, then $|C|\geq kj$.\end{lemma}
\begin{proof} Let $G$ be a graph, $C$ be a cutset of edges, and $M$ a connected component of $G-C$. Suppose $G$ has a complete graph immersion in which $k$ terminals are in $G-M$ and $j$ terminals are in $M$. Each of the terminals in $G-M$ must be connected by a path to each of the terminals in $M$, and these paths must be edge-disjoint. Therefore, there must be $kj$ edge-disjoint paths from the $k$ terminals in $G-M$ to the $j$ terminals in $M$. Since $C$ separates $M$ from $G-M$, each of these $kj$ paths must use a distinct edge of $C$. Thus, $|C|\geq kj$.
\end{proof}
\begin{proof} (Theorem \ref{completeBoxpath}) Consider the graph $K_t\Box P_n$ with $n\geq 2$. By Theorem~\ref{box}, $\im(K_t\Box P_n)\geq t+2-1=t+1$. Suppose, for a contradiction, that $K_t\Box P_n$ contains a $K_{t+2}$-immersion. By the global description of the Cartesian product, $K_t\Box P_n$ can be thought of as a sequence of copies of $K_t$ laid out like a path and connected by matchings along the path's edges. Therefore between consecutive copies of $K_t$ there is an edge cutset of size $t$. Since each copy of $K_t$ has only $t$ vertices and we have $t+2$ terminals in total, the terminals cannot all lie in a single copy of $K_t$. Starting at one end of the sequence of copies of $K_t$, find the first copy of $K_t$ that contains a terminal. Suppose this copy of $K_t$ contains $a$ terminals, where $1\leq a\leq t$; then there are $t+2-a$ terminals separated from this copy by an edge cut of size $t$. By Lemma \ref{edgecut}, $t \geq a(t+2-a)$, that is, $a^2-a(t+2)+t \geq 0$. However, at $a=1$ this quadratic equals $-1$ and at $a=t$ it equals $-t$, and since it opens upward its maximum on the interval $[1,t]$ is attained at an endpoint, so $a^2-a(t+2)+t < 0$ for all $1\leq a\leq t$. This contradiction shows that $K_t\Box P_n$ has no $K_{t+2}$-immersion, hence $\im(K_t\Box P_n)\leq t+1$ and therefore $\im(K_t\Box P_n)=t+1$.
\end{proof}
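As a small numerical illustration of the counting in this proof, take $t=3$: the edge cut between consecutive copies of $K_3$ has size $3$, while splitting the $t+2=5$ terminals as $a$ and $5-a$ would require, by Lemma~\ref{edgecut}, at least
$$a(5-a)=4,\ 6,\ 6 \quad\text{for } a=1,2,3$$
edge-disjoint paths across that cut, each of which exceeds $3$. Hence $K_3\Box P_n$ has no $K_5$-immersion and $\im(K_3\Box P_n)=4$.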
\subsection{Powers of graphs and immersion number}\label{CartPowers}
Theorem~\ref{box} combined with Proposition~\ref{MaxProducts} imply the following corollaries. Here $G^{\ell}$ is the Cartesian product of $G$ with itself $\ell$ times.
\begin{corollary}\label{G^d} Given graphs $G_1, G_2, \ldots, G_\ell$, $\im(G_1\Box G_2 \Box \ldots \Box G_{\ell})\geq \sum_{i=1}^\ell \im(G_i) - (\ell-1)$. Further, the bound is tight if $\Delta(G_i) = \im(G_i)-1$ for each $i$.\end{corollary}
\begin{proof} When $\ell=2$, Theorem~\ref{box} gives $\im(G_1\Box G_2) \geq \im(G_1)+\im(G_2)-1$. Assume $\im(G_1\Box G_2 \Box \ldots \Box G_k)\geq \sum_{i=1}^k \im(G_i) - (k-1)$. When $\ell=k+1$, $G_1\Box G_2 \Box \ldots \Box G_k\Box G_{k+1}=(G_1\Box G_2 \Box \ldots \Box G_k)\Box G_{k+1}$. Thus, by Theorem~\ref{box}, $\im(G_1\Box G_2 \Box \ldots \Box G_k\Box G_{k+1})\geq \sum_{i=1}^k \im(G_i) - (k-1) +\im(G_{k+1}) - 1= \sum_{i=1}^{k+1} \im(G_i) - k$. Therefore, $\im(G_1\Box G_2 \Box \ldots \Box G_{\ell})\geq \sum_{i=1}^\ell \im(G_i) - (\ell-1)$.
If $\Delta(G_i)=\im(G_i)-1$ for all $i$, then $\Delta(G_1\Box G_2 \Box \ldots \Box G_{\ell})=\sum_{i=1}^\ell\Delta(G_i)$, and Proposition~\ref{MaxProducts} gives $\im(G_1\Box G_2 \Box \ldots \Box G_{\ell})\leq \sum_{i=1}^\ell\Delta(G_i)+1 = \sum_{i=1}^\ell \im(G_i) - (\ell-1)$, so the bound is attained.
\end{proof}
\begin{remark}\label{hypercube} Since the $d$-dimensional hypercube, $Q_d$, is the Cartesian product of $K_2$ with itself $d$ times, Corollary~\ref{G^d} gives $\im(Q_d)=d+1$.
\end{remark}
\begin{remark}\label{hamming} Corollary~\ref{G^d} gives us the immersion number of the Hamming graph (the Cartesian product of $K_n$ with itself $d$ times): $\im(K^d_n)=d(n-1)+1$. \end{remark}
In contrast, when $\Delta(G) \neq \im(G)-1$ the bound need not be tight. As an example we find the immersion number of the Cartesian product of a path, $P_n$, with itself $d$ times. We begin by showing that $\im(P_6^2)=5$.
\begin{proposition}\label{BoxPath6} $\im(P_6^2)=5$.
\end{proposition}
\begin{proof}
Label consecutive vertices in the path $v_1, v_2, \ldots, v_6$. We use $(v_3, v_3)$ and its neighbors $(v_2, v_3), (v_3, v_2), (v_3, v_4), (v_4, v_3)$ as terminals in our immersion of $K_5$. Notice that $(v_3, v_3)$ is connected by an edge to all of the other terminals. To connect the remaining pairs of terminals we use the following paths (see also Figure~\ref{P_6BoxP_6}).
$(v_2, v_3) - (v_2, v_2) - (v_3, v_2)$
$(v_3, v_2) - (v_4, v_2) - (v_4, v_3)$
$(v_4, v_3) - (v_4, v_4) - (v_3, v_4)$
$(v_3, v_4) - (v_2, v_4) - (v_2, v_3)$
$(v_2, v_3) - (v_1, v_3) - (v_1, v_4) - (v_1, v_5) - (v_2, v_5) - (v_3, v_5) - (v_4, v_5) - (v_5, v_5) - $
$\hspace{.2in}(v_5, v_4) - (v_5, v_3) - (v_4, v_3)$
$(v_3, v_2) - (v_3, v_1) - (v_4, v_1) - (v_5, v_1) - (v_6, v_1) - (v_6, v_2) - (v_6, v_3) - (v_6, v_4) - $
$\hspace{.2in}(v_6, v_5) - (v_6, v_6) - (v_5, v_6) - (v_4, v_6) - (v_3, v_6) - (v_3, v_5) - (v_3, v_4)$
This completes the description of a $K_5$-immersion in $P_6^2$. Using Remark~\ref{Delta} and the fact that $\Delta(P_6^2)=4$ we can conclude $\im(P_6^2)=5$.
\end{proof}
\begin{figure}
\caption{An immersion of $K_5$ in $P_6^2$. Terminals are labeled and the edge-disjoint paths are highlighted.}
\label{P_6BoxP_6}
\end{figure}
\begin{corollary}\label{BoxPathd} Let $n\geq 6$ and $d>2$. Then $\im(P_n^d)=2d+1$.
\end{corollary}
\begin{proof}
First we prove that $\im(P_6^d)=2d+1$. Proposition~\ref{BoxPath6} shows $\im(P_6^2)=2(2)+1=5$.
Assume $\im(P_6^k)=2k+1$ for $k\geq 2$. When $d = k+1$, $P_6^{k+1} = P_6^{k}\Box P_6$, thus by Theorem~\ref{BoxPath} and induction, $\im(P_6^{k+1})\geq 2k + 1 + 2 = 2(k+1)+1$.
The maximum degree in $P_6^{k+1}$ is $2(k+1)$.
Therefore, $\im(P_6^{k+1})= 2(k+1)+1$.
Since $P_6^d$ is a subgraph of $P_n^d$ for $n\geq 6$, we have $\im(P_n^d)\geq \im(P_6^d)=2d+1$, and since $\Delta(P_n^d)=2d$, Remark~\ref{Delta} gives $\im(P_n^d)\leq 2d+1$. Therefore $\im(P_n^d)=2d+1$.
\end{proof}
As mentioned in the introduction, Kotlov~\cite{Ko01} and Chandran and Sivadasan~\cite{ChSi07} proved bounds for the Hadwiger number of products of the graphs discussed above. Let $G$ be a graph. The Hadwiger number of $G$, $h(G)$, is the maximum $m$ such that $G$ has a $K_m$-minor. For the $d$-dimensional hypercube, Kotlov proved $h(Q_d)\geq 2^{\frac{d+1}{2}}$ for $d$ odd and $h(Q_d)\geq 3\cdot2^{\frac{d-2}{2}}$ for $d$ even. Chandran and Sivadasan proved $h(Q_d)\leq 2^{\frac{d}{2}}\cdot\sqrt{d}+1$. These results contrast with the result in Remark~\ref{hypercube}, where we show $\im(Q_d)=d+1$. Chandran and Sivadasan also showed $n^{\lfloor\frac{d-1}{2}\rfloor}\leq h(K_n^d)\leq n^{\frac{d+1}{2}}\cdot\sqrt{d}+1$ and $n^{\lfloor{\frac{d-1}{2}}\rfloor}\leq h(P_n^d)\leq n^{\frac{d}{2}}\cdot\sqrt{2d}+1$. These results contrast with the results in Remark~\ref{hamming} and Corollary~\ref{BoxPathd}, where we show $\im(K_n^d)=d(n-1)+1$ and $\im(P_n^d)=2d+1$ respectively.
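To make the contrast concrete, taking $d=11$ in the bounds above gives $h(Q_{11})\geq 2^{6}=64$ while $\im(Q_{11})=12$: the largest clique minor in the hypercube grows exponentially in $d$, whereas the largest clique immersion grows only linearly.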
\section{Direct products}\label{Direct}
The bounds for the immersion numbers of the Cartesian and lexicographic products were relatively straightforward to prove. In this section we discuss the direct product. The structure of the direct product is quite different from that of the previous two products, which leads to challenges in proving a bound on the immersion number in the general case.
We begin by conjecturing that the answer to Question~\ref{ques} is yes for the direct product.
\begin{conjecture}\label{direct} Let $G$ and $H$ be graphs where $\im(G)=t$ and $\im(H)=r$. Then $\im(G\times H) \geq (t-1)(r-1)+1$.
\end{conjecture}
The global definition for the direct product of graphs $G$ and $H$ is to form $G\times H$ from a copy of $G$ by replacing each vertex in $G$ with an edgeless copy of $H$, and replacing each edge in $G$ with a set of edges joining vertices $h, h'$ in the two different copies of $H$ when $hh'\in E(H)$. Since the direct product is commutative, we may switch the roles of $G$ and $H$ without changing the results. In this section we present evidence towards the proof of Conjecture~\ref{direct}. In Section~\ref{DirectComplete}, we consider cases where the graphs are complete or have a subgraph of a complete graph of the same size as the immersion number. In Section~\ref{PegParity}, we consider cases involving the parity of the number of pegs on each path in an immersion.
\subsection{Direct products of complete graphs} \label{DirectComplete}
We begin by proving the immersion number for the direct product of two complete graphs.
\begin{theorem}\label{KtTimesKr} $\im(K_t\times K_r) = (t-1)(r-1)+1.$
\end{theorem}
\begin{proof} Observe that when $t=r=2$, $\im(K_2\times K_2) = 2$ since $K_2\times K_2$ is two disjoint edges.
We now consider the case when at least one of $t$ or $r$ is greater than $2$. By Proposition~\ref{MaxProducts}, $\im(K_t\times K_r)\leq (t-1)(r-1)+1$. To complete the proof we define a $K_{(t-1)(r-1)+1}$-immersion in $K_t\times K_r$.
Label the vertices of each complete graph $1, 2, \ldots, t$ or $r$. The $(t-1)(r-1)+1$ terminals of our clique immersion will be $(1,1)$ and all its neighbors. Let $N\left[(1,1)\right]=\{(i,j)\;|\; 2\leq i\leq t, 2\leq j\leq r\}$, the neighbors of $(1,1)$. Some pairs of these terminals are adjacent in $K_t\times K_r$; in that case we use the edge between them for the immersion. It remains to define a path between each pair of vertices in $N\left[(1,1)\right]$ that share a first coordinate or a second coordinate.
Define a graph $S$ with vertex set $N\left[(1,1)\right]$ where two vertices are adjacent if and only if they are not adjacent in $K_t\times K_r$. Note that a vertex $(x,y)\in S$ is part of one clique of size $r-1$, namely the clique with vertex set $\{(x, j): 2\leq j\leq r \}$, and part of one clique of size $t-1$, namely the clique with vertex set $\{(i, y): 2\leq i\leq t \}$; $(x,y)$ has no other adjacencies beyond these two cliques. Hence the edges of $S$ can be partitioned into $t-1$ copies of $K_{r-1}$ (one corresponding to each $i\in\{2, 3, \ldots, t\}$ in the first slot, which we call the $i$th copy of $K_{r-1}$) and $r-1$ copies of $K_{t-1}$ (one corresponding to each $j\in\{2, 3, \ldots, r\}$ in the second slot, which we call the $j$th copy of $K_{t-1}$).
In particular, $S$ is isomorphic to $K_{t-1}\Box K_{r-1}$.
For each edge in $S$, we must define a path in $K_t\times K_r$ between its endpoints. To do this we shall rely on edge-colorings of cliques, and associate colored edges in $S$ with particular paths to use in $K_t\times K_r$. A complete graph $K_{n}$ has maximum degree $n-1$ and so is $n$-edge-colorable by Vizing's Theorem. In the case that $n$ is odd and an $n$-edge-coloring using the colors $1, 2, \ldots n$ has been assigned to $K_n$, observe that each of these $n$ colors is missing at exactly one vertex. Remove the vertex that is missing an edge colored 1, and label the other vertices $2, 3, \ldots, n$ according to the color of its removed edge. We are now left with an $n$-edge-coloring of $K_{n-1}$ (an even clique) in which every vertex sees the color 1, and every other color is missing at exactly two vertices. In particular, this means that every vertex is missing exactly two of the colors $2, \ldots, n$, exactly one of which is its vertex label. We shall refer to this particular edge-coloring and vertex-labelling as our \emph{even clique assignment}; see Figure \ref{EvenCliqueAsst}. Given a copy of $K_{n-1}$ where $n$ is even (so $K_{n-1}$ is an odd clique), consider the $(n-1)$-edge-coloring of $K_{n-1}$ using the colors $2, 3, \ldots, n$. Each of these colors will be missing at exactly one vertex; consider each vertex to be labelled with its missing color. We shall refer to this particular edge-coloring and vertex-labelling as our \emph{odd clique assignment}; see Figure \ref{OddCliqueAsst}.
\begin{figure}
\caption{An \emph{even clique assignment}.}
\label{EvenCliqueAsst}
\end{figure}
\underline{Case 1:} $t, r$ are both even.
In this case, $K_{t-1}$ and $K_{r-1}$ are both odd cliques, and we use our odd-clique assignment on each of our $r-1$ copies of $K_{t-1}$ and each of our $t-1$ copies of $K_{r-1}$. We do this in such a way that vertex $(i,j)$ in $S$ is labelled $i$ in its copy of $K_{t-1}$ and labelled $j$ in its copy of $K_{r-1}$.
Suppose the color of the edge $(i,j) - (k,j)$ is $a$. Then we choose the path between these vertices to be
\[(i,j) -(a,1) -(k,j) .\]
Note that this is indeed a path in $K_t\times K_r$, as $a\neq i, k$ and $j\neq 1$. Since $a\neq 1$, we are not using any of the (already used) edges incident to $(1, 1)$. Moreover, the edges in the $j$th-copy of $K_{t-1}$ labelled $a$ form a matching, so these paths use each edge incident to $(a,1)$ at most once.
Similarly, suppose the color of the edge $(i,j)-(i,k)$ is $b$. Then $b\neq 1, j, k$ and $i\neq 1$, and we choose the path from $(i, j)$ to $(i,k)$ to be
\[(i,j) -(1,b) -(i,k) .\]
The edges in the $i$th copy of $K_{r-1}$ labelled $b$ form a matching, so we use each edge incident to $(1,b)$ at most once. These edges are disjoint from the edges incident to $(a, 1)$ because $a$ and $b$ are not 1. This completes the description of our desired immersion in $K_t\times K_r$.
\begin{figure}
\caption{An \emph{odd clique assignment}.}
\label{OddCliqueAsst}
\end{figure}
\underline{Case 2:} exactly one of $t,r$ is odd.
Suppose, without loss of generality, that $t$ is even and $r$ is odd. In this case, $K_{t-1}$ is an odd clique while $K_{r-1}$ is an even clique. We use our odd-clique assignment on each of our $r-1$ copies of $K_{t-1}$ and our even-clique assignment on each of our $t-1$ copies of $K_{r-1}$. We do this in such a way that vertex $(i,j)$ in $S$ is labelled $i$ in its copy of $K_{t-1}$ and labelled $j$ in its copy of $K_{r-1}$.
For edges in copies of $K_{t-1}$, we do the path-assignment according to colors exactly as in Case 1.
Consider now an edge $(i,j)-(i,k)$ in the $i$th copy of $K_{r-1}$, and suppose it has color $b$. If $b\neq 1$, we define the path as before, namely
\[(i,j) -(1,b) -(i,k) .\]
In the case $b=1$ however, we must proceed differently as all edges incident to $(1,1)$ have already been used. In this case, we look more closely at this copy of $K_{r-1}$. The vertex $(i,j)$ is missing exactly two colors in this copy, namely $j$ and a second color $c\neq 1$. The vertex $(i,k)$ is missing $k$ and a second color $d\neq 1$. We choose the path between these vertices to be
$$(i,j)-(1, c) -(i,1)-(1,d) -(i,k).$$
We haven't used the edges $(i,j) - (1,c) $ or $ (1,d)-(i,k)$ in the first step (dealing with edges not colored 1 in the $K_{r-1}$), because $c$ is missing at $(i,j)$ and $d$ is missing at $(i,k)$. We haven't used the edges $(1,c)-(i,1)$ or $(i,1)-(1,d)$ in the first step because $j,k\neq 1$. Moreover, these new paths do not overlap any of the edges used for our paths from the $K_{t-1}$'s (i.e. paths between vertices in one of the copies of $K_{t-1}$), because those edges were all of form $(x,y)- (a,1)$ where $x\neq 1$. This completes the description of our desired immersion in $K_t\times K_r$.
\underline{Case 3}: $t,r$ are both odd.
Let $t'=t-1$, so $t'$ is even. Define $S'$ to be the subgraph of $S$ obtained by deleting the vertices with $t$ in the first coordinate. Apply Case 2 to the pair $t', r$ to get paths corresponding to every edge in $S'$. Of these paths, we will change only the longest ones, that is, the paths of length 4. The paths of length 4 in Case 2 occur between vertices $(i,j)$ and $(i,k)$ when the color $b$ on the edge $(i,j)-(i,k)$ is 1. We will replace each such path with
$$(i,j)-(t,1)-(i,k).$$
Since $j,k\neq 1$ and since $t\neq i$, this is indeed a path. Since the paths we are replacing correspond to a matching (in fact a perfect matching in the $i$th copy of $K_{r-1}$) and since $t$ is a completely new value, these edges have not yet been used in the immersion.
It remains now to define paths between pairs of vertices in which at least one vertex has $t$ in the first coordinate. We will do this based on the edge-coloring of $S'$.
In particular, for a vertex $(t, j)$, $2\leq j \leq r$, we must define paths to its neighbors in $S'$ and to the other vertices with first coordinate $t$. Let $2\leq i\leq t-1$.
For the first type of path, we use
$$(t,j)-(i,1)-(1,c)-(i,j),$$
where $c$ is the color missing at $(i,j)$ (in addition to $j$) in the $i$th copy of $K_{r-1}$. Note that the path $(i,1)-(1,c)-(i,j)$ was precisely one half of the length 4 path between $(i,j)$ and $(i,k)$ that we deleted. Hence these edges are indeed available and form a path (note the other half of this length four path will be used to join $(t, j)$ to $(i,k)$). The first edge of the path, $(t, j) - (i, 1)$ is an edge because $t\neq i$ and $j\neq 1$.
We must now define paths between each pair of vertices with $t$ in the first coordinate.
Since we applied Case 2 to $S'$, each copy of $K_{r-1}$ in $S'$ has the same fixed coloring of its edges. Let $b$ be the color of the edge between $j$ and $k$ in $K_{r-1}$. If $b\neq 1$
then we define the path
\[(t,j) -(1,b) -(t,k) .\]
Note that no edges of the form $(t, x)-(1,y)$ with $x,y\neq 1$ have been previously used.
If $b=1$, we must proceed differently, as all edges incident to $(1,1)$ have already been used. In this case, we look more closely at the edge-coloring of $K_{r-1}$. Each vertex, $j$, in $K_{r-1}$ is missing exactly two colors, namely $j$ and a second color $c\neq 1$. Another vertex $k$ is missing $k$ and a second color $d\neq 1$. We choose the path to be
$$(t,j)-(1, c) -(t,1)-(1,d) -(t,k).$$
The edges $(t,j)-(1, c)$ and $(1,d)-(t,k)$ have not previously been used because $c$ is missing at $j$ and $d$ is missing at $k$ in the copy of $K_{r-1}$. We previously used edges incident to $(t,1)$ in paths of the form $(i,j)-(t,1)-(i,k)$, but there we know that $i\neq 1$, so we are not re-using any edges from those paths.
This completes the description of our desired immersion in $K_t\times K_r$.
\end{proof}
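For instance, Theorem~\ref{KtTimesKr} gives $\im(K_3\times K_3)=(3-1)(3-1)+1=5$; here the matching upper bound is already forced by Remark~\ref{Delta}, since $K_3\times K_3$ is $4$-regular.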
The following corollary follows directly from Theorem~\ref{KtTimesKr}.
\begin{corollary}\label{CompleteSubgraphTimes} Let $G$ and $H$ be graphs with $\im(G)=t$ and $\im(H)=r$, and suppose that $K_t$ is a subgraph of $G$ and $K_r$ is a subgraph of $H$. Then $\im(G\times H) \geq (t-1)(r-1)+1.$
\end{corollary}
\begin{proof} Since $K_t$ is a subgraph of $G$ and $K_r$ is a subgraph of $H$, $K_t\times K_r$ is a subgraph of $G\times H$, so the $K_{(t-1)(r-1)+1}$-immersion of Theorem~\ref{KtTimesKr} is also an immersion in $G\times H$. Therefore $\im(G\times H) \geq (t-1)(r-1)+1.$
\end{proof}
We now prove the conjecture for a general graph $G$ and a graph that contains $K_r$ as a subgraph. Here $r\geq 3$ since we use an $r\times r$ idempotent Latin Square in our proof of Case 3 and there is no $2\times 2$ idempotent Latin Square.
\begin{theorem} \label{GtimesKr} Let $G$ and $H$ be graphs with $\im(G)=t$ and $\im(H)=r$ where $r\geq 3$, and suppose $K_r$ is a subgraph of $H$. Then $\im(G\times H) \geq (t-1)(r-1)+1$.
\end{theorem}
\begin{proof}
Note that it suffices to prove the theorem for $H=K_r$. Let $v_1, v_2, \ldots, v_t$ be the terminals of a $K_t$-immersion in $G$, and let the vertices of $H=K_r$ be $1,2,\ldots, r$. The $(t-1)(r-1)+1$ terminals of our clique immersion will be $(v_1, 1)$ and $(v_2, k), (v_3, k), \ldots, (v_t, k)$ for each $k\in \{2, 3, \ldots, r\}$.
We use the same plan for routes between terminals as in the proof of Theorem~\ref{KtTimesKr}, where each vertex $v_i$ replaces $i$ for $1\leq i\leq t$. However, vertices that were adjacent in $K_t$ may now be connected by a path. In order to complete the immersion, we need to show how to replace each edge in any route used in the proof of Theorem~\ref{KtTimesKr} by a path.
Consider $(v_i,m)-(v_j,n)$ where $i<j$ and $m\neq n$. We want to describe a path in $G\times K_r$ from $(v_i,m)$ to $(v_j,n)$. If $v_i$ is adjacent to $v_j$ in $G$, then $(v_i,m)$ and $(v_j,n)$ are adjacent in $G\times K_r$ and we may use this edge. Otherwise, let $P_{i,j}$ be the path in $G$ between $v_i$ and $v_j$ in a fixed $K_t$-immersion. Let $P_{i,j} = v_i - p_1 - p_2 - p_3 - \cdots - p_a- v_j$.
\underline{Case 1:} The number of pegs, $a$, is even.
Then we use the route
\[(v_i, m) - (p_1,n)-(p_2,m)-(p_3,n) - \cdots - (p_{a-1}, n)- (p_a, m)- (v_j, n)\]
These paths will be edge-disjoint because the paths $P_{i,j}$ in $G$ are edge-disjoint and we alternate between the $m$th copy and the $n$th copy of $K_r$.
\underline{Case 2:} The number of pegs, $a$, is odd and $K_r$ is an odd clique.
Consider an edge coloring of $K_r$ using $r$ colors in which the color $k$ is missing at vertex $k$. This is possible because $K_r$ is an odd clique. In this coloring, suppose the color on the edge $mn$ is $\ell$, this means $\ell\neq m, n$. Then we use the route
\[(v_i, m) - (p_1,\ell)-(p_2,m)-(p_3,\ell) - \cdots - (p_{a-1}, m)- (p_a, \ell)- (v_j, n)\]
These paths will be edge-disjoint because the paths $P_{i,j}$ in $G$ are edge-disjoint and the color $\ell$ is missing at vertices $m$ and $n$ in $K_r$.
\underline{Case 3:} The number of pegs, $a$, is odd and $K_r$ is an even clique.
Let $A$ be an $r\times r$ idempotent Latin square, that is, a Latin square in which $a_{hh}=h$. Consider a copy of $K_r$ in which every edge is replaced with a directed $2$-cycle. We use $A$ to color this digraph: color the directed edge from $h$ to $k$ with $a_{hk}$.
Suppose $\ell$ is the color on the directed edge $mn$. Since $A$ is an idempotent Latin square, $\ell\neq m, n$. Then we use the route
\[(v_i, m) - (p_1,\ell)-(p_2,m)-(p_3,\ell) - \cdots - (p_{a-1}, m)- (p_a, \ell)- (v_j, n)\]
Using a Latin square ensures that the out-edges at a vertex all receive different colors and the in-edges at a vertex all receive different colors, because colors do not repeat in a row or column. This, combined with the fact that the paths $P_{i,j}$ are edge-disjoint in $G$, means these paths will be edge-disjoint.
\end{proof}
The proof technique from Theorem~\ref{GtimesKr} does not work for $r=2$ because in this case we may not be able to choose $(v_1, 1), (v_2, 2), (v_3, 2), \ldots, (v_t, 2)$ as our terminals. For example, consider the graph in Figure~\ref{GcrossK2}. In using our proof technique to find an immersion of $K_4$ in $G\times K_2$ we would choose $(v_1, 1), (v_2, 2), (v_3, 2), (v_4, 2)$ as our terminals. Each terminal has degree 3, and therefore every edge incident to a terminal must be used on a path connecting it to the other terminals. Every vertex in $G\times K_2$ has degree $2$ or $3$, so every non-terminal vertex can be used as a peg at most once. Since $(v_3, 2)$ and $(v_4, 2)$ have two neighbors in common, namely $(v_2, 1)$ and $(p_1, 1)$, at most one of these common neighbors can be used on the path from $(v_3, 2)$ to $(v_4, 2)$, meaning the other would have to serve as a peg twice, once for $(v_3, 2)$ and once for $(v_4, 2)$, contradicting the fact that it can be used as a peg at most once. Thus we are unable to complete the immersion. However, in Figure~\ref{GcrossK2}, we identify an immersion of $K_4$ in $G\times K_2$ using different terminals.
\begin{figure}
\caption{An example of $G\times K_2$. $G$ has a $K_4$-immersion with terminals $v_1, v_2, v_3,$ and $v_4$, and we have indicated a $K_4$-immersion in $G \times K_2$. The terminals are the larger vertices and the gray lines are the edges unused by our $K_4$-immersion. The vertices $(v_3, 2)$ and $(v_4, 2)$ are joined by the path of length two through $(p_1, 1)$, and there is a copy of $K_{2,2}$.}
\label{GcrossK2}
\end{figure}
Another issue with the example provided in Figure~\ref{GcrossK2} is that the immersion of $K_4$ in $G$ contains paths of different parity. We discuss this in the next section.
\subsection{Path parity}\label{PegParity}
In the proof of Theorem~\ref{KtTimesKr}, we found paths between two different kinds of pairs of vertices in $G\times H$: the first type is $(a,b)$ and $(c,d)$ where $a\neq c$ and $b\neq d$, and the second type is $(a,b)$ and $(a,d)$ or $(a,b)$ and $(c,b)$. In the first case, there were edges between the first coordinates in $G$, and between the second coordinates in $H$. In the second case, there were edges between one set of coordinates, and equality in the other coordinate. Any path between $(a,b)$ and $(a,d)$ requires a closed walk between the first coordinates and an open (i.e. not closed) walk in the second, and these walks must contain the same number of edges. If the open walk has an odd number of pegs, then the closed walk can alternate between $a$ and any neighbor of $a$, but if the open walk has an even number of pegs, then the closed walk must contain an odd number of edges, and hence contain an odd cycle. Any generalization of the theorem must therefore take into account the parity of the number of pegs between terminal vertices in the factors of a direct product.
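To illustrate, suppose the path from $b$ to $d$ in the $K_r$-immersion in $H$ is $b - q_1 - d$ (one peg, odd). Then
$$(a,b) - (x,q_1) - (a,d)$$
is a path in $G\times H$ for any neighbor $x$ of $a$ in $G$, the first coordinate simply alternating between $a$ and $x$. If instead the path is $b - q_1 - q_2 - d$ (two pegs, even), a corresponding path $(a,b)-(x_1,q_1)-(x_2,q_2)-(a,d)$ requires $ax_1, x_1x_2, x_2a \in E(G)$, that is, a triangle through $a$.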
In the next theorem we prove that if the parity of all paths in both immersions is the same, then our previous bound holds.
\begin{theorem}\label{DirectParity} Let $G$ and $H$ be graphs such that $\im(G)=t$ and $\im(H)=r$. If there is an immersion $I_1$ of $K_t$ in $G$ and an immersion $I_2$ of $K_r$ in $H$ such that every path between terminals in both of these immersions has the same parity, then $\im(G\times H) \geq (t-1)(r-1)+1$.
\end{theorem}
\begin{proof} Let $G$ and $H$ be graphs such that $\im(G)=t$ and $\im(H)=r$. Let $I_1$ be a $K_t$-immersion in $G$ and $I_2$ a $K_r$-immersion in $H$ such that every path between terminals in $I_1$ and in $I_2$ has the same parity. Let the terminals of $I_1$ be labeled $1, 2, \ldots, t$ and the terminals of $I_2$ be labeled $1, 2, \ldots, r$. The $(t-1)(r-1)+1$ terminals of our clique immersion will be $(1, 1)$ and $(j, k)$ for each $j \in \{ 2, \ldots, t\}$ and each $k \in \{2, \ldots, r\}$.
\underline{Case 1:} Suppose every path in $I_1$ and $I_2$ has an even number of pegs. We will use the proof method of Theorem~\ref{KtTimesKr}. In Theorem~\ref{KtTimesKr} we constructed a $K_{(t-1)(r-1)+1}$-immersion, $I$, in $K_t\times K_r$ with terminals $(1,1)$ and all of its neighbors. We are using the same terminals now in $G\times H$, but will replace each edge of $I$ by a path in $G\times H$.
Let $(a, b)-(c, d)$ be an edge in $I$, thus $a \neq c$ and $b \neq d$. Let the path from $a$ to $c$ in $I_1$ be
$$P_{a, c} = a - p_1 - p_2 - \ldots - p_k - c$$
and let the path from $b$ to $d$ in $I_2$ be
$$P_{b, d} = b - q_1 - q_2 - \ldots - q_l - d,$$
where $k$ and $l$ are even and, without loss of generality, $k\leq l$. We choose the path from $(a, b)$ to $(c, d)$ in $G\times H$ to be
$$(a, b) - (p_1, q_1) - (p_2, q_2) - \ldots - (p_k, q_k) - (p_{k-1}, q_{k+1}) - (p_k, q_{k+2}) - \ldots - (p_k, q_l) - (c, d).$$
We must now confirm that these newly defined paths in $G\times H$ are edge-disjoint. Note that the only times we will use $P_{a,c}$ and $P_{b,d}$ together is when connecting $(a, b)$ to $(c, d)$ or when connecting $(a, d)$ to $(c, b)$. Using the above, our path from $(a, d)$ to $(c, b)$ will be
$(a, d) - (p_1, q_l) - (p_2, q_{l-1}) - \ldots - (p_k, q_{l-k+1}) - (p_{k-1}, q_{l-k}) - (p_{k}, q_{l-k-1}) - \ldots - (p_k, q_1) - (c, b).$
This path is edge-disjoint from the path from $(a, b)$ to $(c, d)$ because all of the vertices are different (the sum of the subscripts of each vertex on the path from $(a, b)$ to $(c, d)$ is even, while the sum of the subscripts of each vertex on the path from $(a, d)$ to $(c, b)$ is odd). In the case where one of the paths, $P_{a,c}$ or $P_{b,d}$, is an edge, we give the first vertex a subscript of $0$ and the second vertex a subscript of $1$ and the above parity argument applies. Therefore we have defined a $K_{(t-1)(r-1)+1}$-immersion in $G\times H$.
\underline{Case 2:} Suppose every path in $I_1$ and $I_2$ has an odd number of pegs. We must define paths $(a, b) - (c, d)$ and $(a, b) - (1,1)$, where $a$ and $c$ are terminals in $I_1$ and $b$ and $d$ are terminals in $I_2$. Let the path from $a$ to $c$ in $I_1$ be
$$P_{a,c} = a - p_1 - p_2 - \ldots - p_k - c,$$
the path from $b$ to $d$ in $I_2$ be
$$P_{b,d} = b - q_1 - q_2 - \ldots - q_l - d,$$
the path from $1$ to $a$ in $I_1$ be
$$P_{1,a} = 1 - w_1 - w_2 -\ldots - w_m - a,$$
and the path from $1$ to $b$ in $I_2$ be
$$P_{1,b} = 1 - z_1 - z_2 - \ldots - z_n - b$$
where $k, l, m,$ and $n$ are odd and, without loss of generality, $k + 2j = l$ and $m + 2h = n$ for some nonnegative integers $j$ and $h$.
If $a \neq c$, $b\neq d$, then for our path from $(a,b)$ to $(c,d)$ we use the path
$(a, b) - (p_1, q_1) - (p_2, q_2) - \ldots - (p_k, q_k) - (p_{k-1}, q_{k+1}) - (p_k, q_{k+2}) - \ldots - (p_k, q_l) - (c, d).$
\noindent For our path from $(1, 1)$ to $(a, b)$ we use
$(1, 1) - (w_1, z_1) - (w_2, z_2) - \ldots - (w_m, z_m) - (w_{m-1}, z_{m+1}) - (w_m, z_{m+2}) - \ldots - (w_m, z_n) - (a, b).$
If $a = c$ then for our path from $(a, b)$ to $(a, d)$ we use
$$(a, b) - (w_m, q_1) - (a, q_2) - (w_m, q_3) - \ldots - (w_m, q_l) - (a, d).$$
Similarly, if $b = d$ then for our path from $(a,b)$ to $(c, d)$ we use
$$(a, b) - (p_1, z_n) - (p_2, b) - (p_3, z_n) - \ldots - (p_k, z_n) - (c, b).$$
We must confirm that none of the defined paths share any edges. Suppose for a contradiction that the edge $(x_1, x_2) - (y_1, y_2)$ is used on two different paths in our immersion where $x_1y_1$ is on $P_{e,f}$ in $I_1$ and $x_2y_2$ is on $P_{g,h}$ in $I_2$, where $e$ and $f$ are terminals in $I_1$ and $g$ and $h$ are terminals in $I_2$. This means that we used the paths $P_{e,f}$ and $P_{g,h}$ in two instances, i.e., when making the path from $(e, g)$ to $(f, h)$ and when making the path from $(e, h)$ to $(f, g)$. For the purpose of our argument, we label the paths $P_{e,f}$ and $P_{g,h}$ as follows
$$P_{e,f} = e - u_1 - u_2 - \ldots - x_1 - y_1 - \ldots - u_m - f$$ and
$$P_{g,h} = g - v_1 - v_2 - \ldots - x_2 - y_2 - \ldots - v_n - h$$
where $m$ and $n$ are odd and $m \leq n$.
If $e, f, g, h \neq 1$, then $e \neq f$ and $g \neq h$.
Then the path from $(e, g)$ to $(f, h)$ will be
$(e, g) - (u_1, v_1) - (u_2, v_2) - \ldots - (x_1, x_2) - (y_1, y_2) - \ldots - (u_m, v_m) - (u_{m-1}, v_{m+1}) - (u_m, v_{m+2}) - \ldots - (u_m, v_n) - (f, h)$
\noindent and the path from $(e, h)$ to $(f, g)$ will be
$(e, h) - (u_1, v_n) - (u_2, v_{n-1}) - \ldots - (x_1, y_2) - (y_1, x_2) - \ldots - (u_m, v_{n-m+1}) - (u_{m-1}, v_{n-m}) - \ldots (u_m, v_1) - (f, g)$.
\noindent As we can see, since the direction in which we traverse the path $P_{g,h}$ is different in each case, the edge $(x_1, x_2) - (y_1, y_2)$ is not actually repeated.
If $e=1$ then the paths we are considering are $(1, 1)$ to either $(f, g)$ or $(f, h)$ and $(f, g)$ to $(f, h)$. In each case the path $P_{e,f}$ is not used, so the edge $(x_1, x_2) - (y_1, y_2)$ will not be used.
We have shown that the paths we defined are edge disjoint and thus have defined an immersion of $K_{(t-1)(r-1)+1}$ in $G \times H$.
\end{proof}
If we know that one of the factors of $G\times H$ has an immersion in which every path has an even number of pegs, we can generalize the result of Theorem~\ref{DirectParity} as follows. To do this we again use the blueprint provided by the proof of Theorem~\ref{GtimesKr}.
\begin{theorem}\label{GTimesParity} Let $G$ and $H$ be graphs such that $\im(G)=t$ and $\im(H)=r$ where $r\geq 3$. If there is an immersion $I_2$ of $K_r$ in $H$ such that every path in the immersion has an even number of pegs, then $\im(G\times H) \geq (t-1)(r-1)+1$.
\end{theorem}
\begin{proof} Let $G$ and $H$ be graphs such that $\im(G)=t$ and $\im(H)=r$ and let $I_2$ be an immersion of $K_r$ in $H$ such that every path in the immersion has an even number of pegs. We will use the proof method of Theorem~\ref{GtimesKr}. In Theorem~\ref{GtimesKr} we constructed a $K_{(t-1)(r-1)+1}$-immersion, $I$ in $G\times K_r$ with terminals $(v_1, 1)$ and $(v_2, k), (v_3, k), \ldots, (v_t, k)$ for each $k\in \{2, 3, \ldots, r\}$, where $v_1, v_2, \ldots, v_t$ are terminals in a $K_t$-immersion in $G$. We use the same terminals now in $G\times H$, but will replace each edge of $I$ by a path in $G\times H$.
Let $(a, b)-(c, d)$ be an edge in $I$, thus $a \neq c$ and $b \neq d$ and $ac$ is an edge in $G$.
Let the path from $b$ to $d$ in $I_2$ be
$$P_{b,d} = b - q_1 - q_2 - \ldots - q_l - d,$$
where $l$ is even. We choose the path from $(a, b)$ to $(c, d)$ in $G\times H$ to be
$$(a, b) - (c, q_1) - (a, q_2) - \ldots - (a, q_l) - (c, d).$$
We must now confirm that these newly defined paths in $G\times H$ are edge-disjoint. Note that the only times we alternate $a$ and $c$ in the first coordinate while following $P_{b,d}$ in the second coordinate are when connecting $(a, b)$ to $(c, d)$ and when connecting $(a, d)$ to $(c, b)$. Using the above, our path from $(a, d)$ to $(c, b)$ will be
$(a, d) - (c, q_l) - (a, q_{l-1}) - \ldots - (a, q_1) - (c, b)$. This path is edge-disjoint from the path from $(a, b)$ to $(c, d)$ because in this path $c$ is paired with $q_i$ where $i$ is even, while in the path from $(a, b)$ to $(c, d)$, $c$ is paired with $q_j$ where $j$ is odd.
\end{proof}
\subsection{Examples}\label{DirectEx} As a consequence of our theorems, we now give the immersion numbers of the direct products of several specific families of graphs.
\begin{theorem} $\im(C_m\times K_r) = 2r-1$\end{theorem}
\begin{proof} When $r=2$ and $m$ is even, $C_m\times K_r$ is two disjoint copies of $C_m$ and thus $\im(C_m\times K_2) = 3$. When $r=2$ and $m$ is odd, $C_m\times K_r = C_{2m}$ and thus $\im(C_m\times K_2) = 3$. When $r\geq 3$, Theorem~\ref{GtimesKr} implies $\im(C_m\times K_r) \geq 2(r-1)+1=2r -1$ and Proposition~\ref{MaxProducts} implies $\im(C_m\times K_r) \leq 2(r-1)+1=2r -1$. Therefore $\im(C_m\times K_r) = 2r-1$.\end{proof}
\begin{theorem}\label{CmTimesCn} $\im(C_m\times C_n) = 5$.
\end{theorem}
\begin{proof} Since the direct product is commutative, if at least one of the cycles is odd we may assume it is $C_n$. We therefore have two cases: (1) $n$ is odd, and (2) $m$ and $n$ are both even. Since $\Delta(C_m\times C_n) = 4$, Proposition~\ref{MaxProducts} implies $\im(C_m\times C_n) \leq 5$. Therefore in each case we need only show $\im(C_m \times C_n)\geq5$. For both cases we will label the vertices of $C_m$ with $1, 2, \ldots, m$ and the vertices of $C_n$ with $1, 2, \ldots, n$ clockwise around the cycles.
\underline{Case 1:} Let $n$ be odd. By Theorem~\ref{GTimesParity}, if we can find an immersion $I_2$ of $K_3$ in $C_n$ in which every path has an even number of pegs, then $\im(C_m \times C_n) \geq 5$. We take vertices $1, 2,$ and $3$ as the terminals of our $K_3$-immersion in $C_n$. Since the cycle is odd, the number of pegs on each path in the immersion is even.
\underline{Case 2:} Let $m$ and $n$ be even. We divide this into three sub-cases, one where $m$ and $n$ are both greater than or equal to 6, one where $m=n=4$, and one where $m\geq 6$ and $n=4$.
\begin{enumerate}
\item[(i)] Let $m$ and $n$ both be greater than or equal to $6$. We can find an immersion $I_1$ of $K_3$ in $C_m$ and an immersion $I_2$ of $K_3$ in $C_n$ in which all of the paths have the same parity by taking vertices $1, 3,$ and $5$ as the terminals of our $K_3$-immersion in each cycle. Since the cycles are even, the number of pegs on each path in the immersions is odd. Therefore, by Theorem~\ref{DirectParity}, $\im(C_m \times C_n) \geq 5$.
\begin{figure}
\caption{A $K_5$-immersion in $C_{m}\times P_4$ ($m$ even, $m\geq 6$).}
\label{C2jcrossP4}
\end{figure}
\item[(ii)] Let $m = n = 4$. Since $C_4$ is bipartite, $C_4\times C_4$ has two isomorphic connected components. Vertices whose coordinate sum is even are in one component and those whose coordinate sum is odd are in the other. We will only describe the immersion for the even-sum component, where we use $(2, 2), (4, 2), (1, 3), (3, 3),$ and $(1, 1)$ as our terminals. We then use the following edges and paths to complete the immersion.
$(2,2) - (3, 1) - (4, 2)$
$(2,2) - (1,3)$
$(2, 2) - (3,3)$
$(2, 2) - (1, 1)$
$(4,2) - (1, 3)$
$(4,2) - (3, 3)$
$(4,2) - (1, 1)$
$(1, 3) - (2, 4) - (3,3)$
$(1, 3) - (4, 4) - (3,1) - (2, 4) - (1,1)$
$(3, 3) - (4, 4) - (1, 1)$
\item[(iii)] Let $m\geq 6$ and $n=4$. Since $C_m$ and $C_4$ are bipartite, $C_m\times C_4$ has two isomorphic connected components. We will only describe the immersion for one component, where we use $(2, 2), (4, 2), (6, 2), (1, 3),$ and $(3, 3)$ as the terminals of our $K_5$-immersion. We then use the following edges and paths to complete the immersion.
$(2,2) - (3, 1) - (4, 2)$
$(2, 2) - (1, 1) - (m, 2) - (m-1, 3) - (m-2, 2) - (m-3, 3) - \ldots - (6, 2)$
$(2,2) - (1,3)$
$(2, 2) - (3,3)$
$(4, 2) - (5, 1) - (6, 2)$
$(4, 2) - (5, 3) - (6, 4) - (7, 3) - (8, 4) - \ldots - (m-1, 3) - (m, 4) - (1, 3)$
$(4,2) - (3, 3)$
$(6, 2) - (5, 3) - (4, 4) - (3,3)$
$(6,2) - (7,1) - (8, 2) - (9, 1) - \ldots - (m -1, 1) - (m, 2) - (1, 3)$
$(1, 3) - (2, 4) - (3,3)$
\noindent See Figure~\ref{C2jcrossP4} for an illustration of this immersion.
\end{enumerate}
\end{proof}
As an example where the bound of Conjecture~\ref{direct} is not tight we prove Theorem~\ref{CmTimesPath}.
\begin{theorem}\label{CmTimesPath} For $m\geq 5$, $\im(C_m\times P_4)=5$.\end{theorem}
\begin{proof}
\underline{Case 1:} Let $m$ be even. In Case 2(iii) of Theorem~\ref{CmTimesCn} we did not use the full cycle in the second coordinate, so in fact this case proves that $\im(C_m\times P_4)=5$ for $m$ even and greater than or equal to $6$ (also illustrated in Figure~\ref{C2jcrossP4}).
\underline{Case 2:} Let $m$ be odd. Label the vertices of the cycle $1, 2, \ldots, m$ clockwise around the cycle and label consecutive vertices on the path $1, 2, 3, 4$. We choose $(1, 2), (1, 3), (2, 2), (2, 3),$ and $(3, 2)$ as the terminals of our $K_5$-immersion. We then use the following edges and paths to complete the immersion.
$(1, 2) - (m, 3) - (m-1, 4) - (m-2, 3) - (m-3, 4) - \cdots - (1, 3)$
$(1, 2) - (m, 1) - (m-1, 2) - (m-2, 1) - (m-3, 2) - \cdots - (3, 1) - (2, 2)$
$(1, 2) - (2, 3)$
$(1, 2) - (2, 1) - (3, 2)$
$(1, 3) - (2, 2)$
$(1, 3) - (m, 4) - (m-1, 3) - (m-2, 4) - (m-3, 3) - \cdots - (2, 3)$
$(1, 3) - (m, 2) - (m-1, 3) - (m-2, 2) - (m-3, 3) - \cdots - (3, 2)$
$(2, 2) - (3, 3) - (4, 2) - (5, 3) - \cdots - (m, 3) - (1, 4) - (2, 3)$
$(2, 2) - (1, 1) - (m, 2) - (m-1, 1) - (m-2, 2) - \cdots - (3, 2)$
$(2, 3) - (3, 2)$
Since $\Delta(C_m\times P_4)=4$, Remark~\ref{Delta} gives $\im(C_m\times P_4)\leq 5$, and therefore $\im(C_m\times P_4)=5$ for $m\geq 5$.
\end{proof}
\subsection{Limitations of proof techniques}\label{limitations} Using our proof techniques we cannot generalize the result of Theorem~\ref{GTimesParity} to an immersion in $H$ in which all paths have an odd number of pegs. We use the example in Figure~\ref{BipartiteEx} to illustrate this. Graphs $G$ and $H$ each have an immersion of $K_4$, and every path in $H$'s $K_4$-immersion has an odd number of pegs. Conjecture~\ref{direct} predicts that $G\times H$ will have an immersion of $K_{10}$. Given the degree of each vertex in $G\times H$ we see that the potential terminals for the $K_{10}$-immersion are
$$(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (2, 3), (2, 4),$$ $$(3, 1), (3, 2), (3, 3), (3, 4), (4, 1), (4, 2), (4, 3), (4, 4).$$
Since $G$ and $H$ are both bipartite graphs, the product $G\times H$ has two connected components. The potential terminals are separated into different components: $(1, 1)$, $(1, 2)$, $(1, 3)$, $(1,4)$ are in one component, and all of the other potential terminals are in the other component. This means an immersion of $K_{10}$ would have to be in the component where $1$ is not in the first coordinate. Our proof techniques would have us choose one vertex and all of its ``neighbors,'' in the sense of $K_4\times K_4$, but this is not possible because none of the potential terminals have all of their ``neighbors'' in the same component. Thus, when the paths in the two immersions have different parities, we cannot use our proof techniques. However, the example in Figure~\ref{BipartiteEx} is not a counterexample to Conjecture~\ref{direct}, as we can find an immersion of $K_{10}$ using vertices $$(2, 1), (2, 2), (2, 3), (2, 4), (3, 1), (3, 3), (3, 4), (4, 1), (4, 3), (4, 4)$$ as our terminals.
\begin{figure}
\caption{Graphs $G$ and $H$ are bipartite graphs (with vertices colored black and white to show the bipartitions), hence the product $G \times H$ has two connected components. Each of $G$ and $H$ has a $K_4$-immersion with terminals labeled $1, 2, 3, 4$ in the figure.
}
\label{BipartiteEx}
\end{figure}
\section{Final remarks}\label{final}
The last remaining product is the strong product. Recall that the edge set of the strong product is $E(G\Box H)\cup E(G\times H)$. Therefore, $K_t\boxtimes K_r=K_{tr}$ and $\im(K_t\boxtimes K_r)=tr$. This leads us to make the following conjecture about the immersion number of the strong product of two graphs.
\begin{conjecture}\label{boxtimes} Let $G$ and $H$ be graphs with $\im(G)=t$ and $\im(H)=r$. Then $\im(G\boxtimes H)\geq tr$.\end{conjecture}
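As a quick consistency check (not a proof), note that whenever $\Delta(G)=\im(G)-1=t-1$ and $\Delta(H)=\im(H)-1=r-1$, Proposition~\ref{MaxProducts} gives
$$\im(G\boxtimes H)\leq \Delta(G)\Delta(H)+\Delta(G)+\Delta(H)+1=(t-1)(r-1)+(t-1)+(r-1)+1=tr,$$
so for such graphs the conjectured bound would be best possible.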
We believe Conjecture~\ref{boxtimes} will be resolved once the remaining cases for Conjecture~\ref{direct} are resolved.
In this paper we have completely resolved Question~\ref{ques} for the lexicographic and Cartesian products. We have provided substantial evidence that the answer is yes for the direct product as well, but different proof techniques will be needed to fully resolve that case. For each product we were able to find examples where we could do better than the bound of $\im(K_t*K_r)$. For this reason, future work should include exploring the following question.
\begin{quest}\label{Nextques} Let $G$ and $H$ be graphs with $\im(G)=t$ and $\im(H)=r$. For each of the four standard graph products $G*H$, how large is $\im(G*H)$?
\end{quest}
\end{document}
\begin{document}
\title{On the Ruin Probability of the Generalised \\ Ornstein-Uhlenbeck Process in the Cram\'er Case$^\dagger$\footnote{$^\dagger${This
research was partially supported by ARC grant DP1092502.}}}
\authorone[Australian National University]{Damien Bankovsky}
\addressone{Mathematical Sciences Institute, Australian National University, Canberra, Australia, email: [email protected] }
\authortwo[Technische Universit\"at M\"unchen]{Claudia Kl\"uppelberg}
\addresstwo{Center for Mathematical Sciences, and Institute for Advanced Study, Technische Universit\"at M\"unchen, 85747 Garching, Germany,
email: [email protected]}
\authorthree[ Australian National University]{Ross Maller}
\addressthree{Mathematical Sciences Institute, and
School of Finance and Applied Statistics, Australian National University, Canberra, Australia, email: [email protected]}
\begin{abstract}
For a bivariate \Levy process $(\xi_t,\eta_t)_{t\ge 0}$ and initial value $V_0$ define the Generalised Ornstein-Uhlenbeck (GOU) process
\[
V_t:=e^{\xi_t}\Big(V_0+\int_0^t e^{-\xi_{s-}}\mathrm{d} \eta_s\Big),\quad t\ge0,\]
and the associated stochastic integral process
\[Z_t:=\int_0^t e^{-\xi_{s-}}\mathrm{d} \eta_s,\quad t\ge0.\]
Let
$T_z:=\inf\{t>0:V_t<0\mid V_0=z\}$ and
$\psi(z):=P(T_z<\infty)$ for $z\ge 0$ be
the ruin time and infinite horizon ruin probability of the GOU.
Our results extend previous work of
Nyrhinen (2001) and others
to give asymptotic estimates for $\psi(z)$ and the distribution
of $T_z$ as $z\to\infty$,
under very general, easily checkable, assumptions,
when $\xi$ satisfies a Cram\'er condition.
\end{abstract}
\keywords{exponential functionals of \Levy processes; generalised Ornstein-Uhlenbeck process; ruin probability; stochastic recurrence equation}
\ams{60H30;60J25;91B30}{60H25;91B28}
\section{Introduction}\label{s1}
Let $(\xi,\eta)=(\xi_t,\eta_t)_{t\ge 0}$ be a bivariate \Levy process on a filtered complete probability space $(\Omega, \mathscr{F},\mathbb{F},P)$ and define
a generalised Ornstein-Uhlenbeck (GOU) process by
\begin{equation}\label{GOU definition}
V_t:=e^{\xi_t}\Big(V_0+\int_0^t e^{-\xi_{s-}}\mathrm{d} \eta_s\Big),\ t\ge 0,
\end{equation}
and the associated stochastic integral process $Z=(Z_t)_{t\ge 0}$ by
\begin{equation}\label{Z definition}
Z_t:=\int_0^t e^{-\xi_{s-}}\mathrm{d} \eta_s.
\end{equation}
$V_0$ is a random variable (r.v.), not necessarily independent of $(V_t)_{t>0}$.
To avoid trivialities, assume that neither $\xi$ nor $\eta$ are
identically zero.
Such processes have attracted attention over the last decade as continuous time analogues of solutions to stochastic recurrence equations (SRE); cf. Carmona, Petit and Yor~\cite{CarmonaPetitYor97,CarmonaPetitYor01},
Erickson and Maller~\cite{EricksonMaller05}.
The link between SREs and the GOU was made in
de Haan and Karandikar~\cite{deHaanKarandikar89}.
GOU processes turn up naturally in
stochastic volatility models (e.g., the continuous time GARCH model of Kl\"uppelberg, Lindner and Maller~\cite{klm:2004}), but most prominently as insurance risk models
for perpetuities in life insurance or
when the insurance company receives some stochastic return on investment; such investigations started with Dufresne~\cite{Dufresne90} and Paulsen~\cite{Paulsen93}.
More references are given later.
This paper is intended to fill a gap left
between Bankovsky~\cite{Bankovsky09} and
Bankovsky and Sly~\cite{BankovskySly08}, where more details on
the insurance background can be found.
Define
$$
T_z:=\inf\{t>0:V_t<0 \mid V_0=z\}, \ z\ge 0,
$$
(with the convention throughout that $\inf\emptyset=\infty$),
and let
\begin{equation}\label{definition: ruin prob}
\psi(z):=P\Big(\inf_{t>0}V_t<0\mid V_0=z\Big)
=P\Big(\inf_{t>0}Z_t<-z\Big)=P\left(T_z<\infty\right),\ z\ge0,
\end{equation}
be the \emph{infinite horizon ruin probability} for the GOU.
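To see the middle equality in \eqref{definition: ruin prob}, note from \eqref{GOU definition} that, given $V_0=z$,
\[
V_t=e^{\xi_t}\big(z+Z_t\big),\qquad t\ge0,
\]
and since $e^{\xi_t}>0$, we have $V_t<0$ if and only if $Z_t<-z$; thus ruin for the GOU started at $z$ is equivalent to the integral process $Z$ falling below the level $-z$.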
Note that $\psi(z)$ is a nonincreasing function of $z$,
and we can ask how fast it decreases as $z\to\infty$.
Our main result, Theorem~\ref{theorem: my Nyrhinen result},
provides a very general asymptotic result for $\psi(z)$ as $z\to\infty$
for the case when
$\lim_{t\to\infty}Z_t$ exists as an a.s. finite r.v.
and shows that, under a Cram\'er-like condition on $\xi$,
$\psi(z)$ decreases approximately like a power law.
This is an extension of a similar asymptotic result of
Nyrhinen \cite{Nyrhinen01}, who, like us,
utilises a discrete time result of
Goldie \cite{goldie:1991} for proof.
We use more recent developments in the theory of discrete time
perpetuities and the continuous time GOU to update Nyrhinen's results.
In Section \ref{Examples} we provide some examples
which cannot be dealt with by the prior results
but satisfy the conditions of our theorem.
To conclude this introduction,
we describe some previous literature relating to the GOU and its ruin
probability, beginning with those papers which examine the GOU in
its full generality. The process appears implicitly in the work of
de Haan and Karandikar~\cite{deHaanKarandikar89} as a continuous
generalisation of an SRE. Basic properties
are given by Carmona et al. \cite{CarmonaPetitYor01}. A general
survey of the GOU and its applications is given by Maller, M\"uller
and Szimayer \cite{MallerMullerSzimayer07}.
Exact conditions for no ruin
($\psi(z)=0$ for some $z\ge 0$) are given by
Bankovsky and Sly \cite{BankovskySly08} whilst conditions for
certain ruin
($\psi(z)=1$ for some $z\ge 0$)
are examined by Bankovsky \cite{Bankovsky09}.
The study of the GOU is closely related to the study of integrals of
the form $Z,$ defined in (\ref{Z definition}).
It is shown in Lindner and Maller \cite{lindner:maller:2005} that
stationarity of $V$ is related to convergence of a
stochastic integral constructed from $(\xi,\eta)$ in a similar way to $Z.$
Among the few papers dealing with $Z$ in its full generality,
Erickson and Maller \cite{EricksonMaller05} give necessary and
sufficient conditions for the almost sure convergence of $Z_t$ to a
r.v. $Z_\infty$ as $t\rightarrow\infty$,
and Bertoin, Lindner and Maller
\cite{BertoinLindnerMaller07} present necessary and sufficient
conditions for the continuity of the distribution of
$Z_\infty$, when it exists.
Fasen~\cite{fasen:2009}, using point process methods, gives an account
of the extremal behaviour of a GOU process.
There are a larger number of papers dealing with $V$ and $Z$ when
$(\xi,\eta)$ is subject to restrictions. We discuss a selection of those
papers which are relevant to ruin probability.
Harrison \cite{Harrison77} presents results on the ruin probability
of $V$ when $\xi$ is a linear deterministic function and $\eta$ is a
\Levy process with finite variance.
His approach is based on an exponential martingale argument,
which corresponds to the Cram\'er case.
The heavy-tailed case is investigated in
Kl\"uppelberg and Stadtm\"uller~\cite{Klu:Stadtmueller:98} and
extended by Asmussen~\cite{Asmussen:1998}.
See also Maulik and Zwart~\cite{MaulikZwart} and Konstantinides and Mikosch \cite{KonstantinidesMikosch05}.
Paulsen \cite{Paulsen93}
generalises Harrison's results, and presents new ruin probability
results for $V$, when $\xi$ and $\eta$ are independent with
finite activities.
This independent case is also treated in Kalashnikov and Norberg
\cite{KalashnikovNorberg02} and Paulsen \cite{Paulsen98,Paulsen02}.
Chiu and Yin \cite{ChiuYin04} generalise some of Paulsen's results
to the case in which $\eta$ is a jump-diffusion process.
Cai \cite{Cai04} and Yuen et al. \cite{NgYuenWang04}
present results when $\eta$ is a compound Poisson process.
Most relevant works containing restrictions on $(\xi,\eta)$ focus on the case
when $Z_t$ converges to $Z_\infty$ as $t\rightarrow\infty$; cf.
Yor \cite{Yor01} and Carmona et al.
\cite{CarmonaPetitYor97}. Gjessing and Paulsen
\cite{GjessingPaulsen97} study the distribution of $Z_\infty$ when
$\xi$ and $\eta$ are independent with finite activity,
and obtain exact distributions in some special cases. Hove and
Paulsen \cite{PaulsenHove99} use Markov chain Monte Carlo methods
to find the distribution of $Z_\infty$ in some special cases. Kl\"uppelberg and Kostadinova
\cite{KluppelbergKostadinova08} and Brokate et al.
\cite{Malleretal08} provide results on the tail of the distribution of
$Z_\infty$ when
$\eta$ is a compound Poisson process plus drift, independent of $\xi$.
\section{Main Results}{\lambda}bel{s2}
Our main results apply under a
Cram\'er-like condition on $\xi$: assume that
{\beta}gin{equation}{\lambda}bel{cramer}
Ee^{-w\xi_1}=1 \mbox{ for some } w>0.
\end{equation}
The following consequences of \eqref{cramer}
are well known and easily verified.
Condition \eqref{cramer} implies that $E\xi_1$ is well defined,
with $E\xi_1^-<\infty$, $E\xi_1^+\in(0,\infty]$,
and $E\xi_1\in (0, \infty]$, and so
$\lim_{t \to \infty} \xi_t=\infty$ a.s.
Further,
$Ee^{-{\alpha}pha \xi_1}$ is finite and nonzero for all ${\alpha}pha\in[0,w]$,
and
$c({\alpha}pha):= \ln Ee^{-{\alpha}pha \xi_1}$
is finite at least for all ${\alpha}pha\in[0,w)$.
The derivatives $c'({\alpha}pha)$ and $c''({\alpha}pha)$ are
finite at least for all ${\alpha}pha\in[0,w)$,
and $c''({\alpha}pha)\in(0,\infty]$ for all ${\alpha}pha\ge 0$.
So $c({\alpha}pha)$ is strictly convex for ${\alpha}pha\in[0,\infty)$
and $\mu^*:= c'(w)=-E[\xi_1e^{-w\xi_1}]\in(0,\infty]$.
We will need the {\em Fenchel-Legendre transform} of $c$, defined as
{\beta}gin{eqnarray}{\lambda}bel{fenchel}
c^*(v):=\sup\{{\alpha}pha
v-c({\alpha}pha):{\alpha}pha\in\mathbb{R}\}, \ v\in\mathbb{R}.
\end{eqnarray}\noindent
Next, let
{\beta}gin{equation}{\lambda}bel{t0}
{\alpha}pha_0:= \sup\left\{{\alpha}pha\in\mathbb{R}:c({\alpha}pha)<\infty,
E|Z_1|^{\alpha}pha<\infty
\right\}\in[0,\infty],
\end{equation}
and define the constant
{\beta}gin{equation}{\lambda}bel{xdef}
x_0:=\lim_{{\alpha}pha\rightarrow
{\alpha}pha_0-} (1/c'({\alpha}pha))\in[0,\infty].
\end{equation}
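To illustrate these quantities in a simple special case (purely for illustration; $\eta$ is left unspecified), suppose that $\xi_t=mt+\sigma B_t$ is a Brownian motion with drift $m>0$ and volatility $\sigma>0$. Then
\[
c(\alpha)=-\alpha m+\tfrac12\alpha^2\sigma^2,\qquad w=\frac{2m}{\sigma^2},\qquad
\mu^*=c'(w)=m,\qquad c^*(v)=\frac{(v+m)^2}{2\sigma^2},
\]
so that $xc^*(1/x)=(1+mx)^2/(2\sigma^2 x)$ decreases to the value $w$ as $x$ increases to $1/\mu^*=1/m$; the value of $x_0$ depends in addition on the moments of $Z_1$, and hence on $\eta$.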
A distribution is \emph{spread out}
if it has a convolution power with an absolutely continuous component.
{\beta}gin{thm}{\lambda}bel{theorem: my Nyrhinen result}
Suppose that the following conditions hold:\\
{\rm Condition A: } $\psi(z)>0$ for all $z\ge 0$,\\
{\rm Condition B: } there exists $w>0$ such that $Ee^{-w\xi_1} = 1$ (i.e. \eqref{cramer} holds),\\
{\rm Condition C: }
there exist $\varepsilon>0$
and $p,q>1$ with $1/p+1/q=1$ such that
{\beta}gin{equation}{\lambda}bel{equation: first moment condition}
E[e^{-\max\{1,w+\varepsilon\}p\xi_1}]<\infty\quad\mbox{and}\quad
E[|\eta_1|^{\max\{1,w+\varepsilon\}q}]<\infty.
\end{equation}
Then $0\le x_0<1/\mu^*<\infty$, the function
\[
R(x):= \left\{ {\beta}gin{array}{ll}
xc^*(1/x) & \textrm{for~}x\in(x_0,1/\mu^*), \\
w &\textrm{for~}x\ge 1/\mu^*,
\end{array} \right.\,
\]
is finite and continuous on $(x_0,\infty)$ and strictly decreasing
on $(x_0, 1/\mu^*)$, and we have
{\beta}gin{equation}{\lambda}bel{equation: Nyrhinen result 1}
\lim_{z\rightarrow\infty}(\ln z)^{-1} \ln P(T_z\le x\ln z) = -R(x)
\end{equation}
for every $x>x_0.$
In addition,
{\beta}gin{equation}{\lambda}bel{equation: Nyrhinen result 2}
\lim_{z\rightarrow\infty}(\ln z)^{-1} \ln \psi(z)=-w.
\end{equation}
If, further, the distribution of $\xi_1$ is spread out, then there
exist constants $C_->0$ and $\kappa>0$ such that
{\beta}gin{equation}{\lambda}bel{equation: Goldie
result}z^w\psi(z)=C_- + o(z^{-\kappa})~~\mathrm{as~}z\rightarrow\infty.
\end{equation}
\end{thm}
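In words, \eqref{equation: Nyrhinen result 2} states that $\psi(z)=z^{-w+o(1)}$ as $z\to\infty$, while \eqref{equation: Goldie result} sharpens this to the exact power-law asymptotic $\psi(z)\sim C_-z^{-w}$.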
{\beta}gin{rem}\rm
(i) $\psi(z)>0$ for all $z\ge 0$ is of course a logical assumption
to make in the context of Theorem~\ref{theorem: my Nyrhinen result},
though not necessarily easy to verify.
Necessary and sufficient conditions for it
in terms of the L\'evy measure of $(\xi,\eta)$
are given in \cite{BankovskySly08}.
The moment conditions in Theorem~\ref{theorem: my Nyrhinen result}
are also easily expressed in terms of the L\'evy measure of
$(\xi,\eta)$, cf. Sato \cite{sato:1999}, p.~159.
They imply that
$E[\sup_{0\le t\le 1}\left|Z_t\right|^{\max\{1,w+\varepsilon \}}]<\infty$
(see Lemma \ref{sup proposition} below).
We also have $E[\ln(\max\{1,|\eta_1|\})]<\infty$ in Theorem~\ref{theorem: my Nyrhinen result},
and $\lim_{t \to \infty} \xi_t=\infty$ a.s., so
$Z_t$ converges a.s. to a finite r.v.
$Z_\infty$ as $t\rightarrow\infty$
by Proposition 2.4 of \cite{lindner:maller:2005}
or Theorem 2 of \cite{EricksonMaller05}.\\
(ii)
Let
${\overlineerline Z}_t:= Z_t-\inf_{0\le s\le t}Z_s$ be the process
reflected in its minimum, and set
{\beta}gin{equation}{\lambda}bel{MQL}
(M,Q,{\overlineerline L}):=\big(e^{-\xi_1},Z_1,-e^{\xi_1}{\overlineerline Z}_1\big).
\end{equation}
Then the value $C_-$ in
(\ref{equation: Goldie result}) is given by the formula
in (2.19) of Goldie \cite{goldie:1991}, namely
{\beta}gin{equation}C_-=\frac{1}{w
\mu^*}E\Big[
\Big(Q+M\min\big\{{\overlineerline L},\inf_{t>0}Z_t\big\}^-\Big)^w-
\Big(\big(M\inf_{t>0}Z_t\big)^-\Big)^w\Big].
\end{equation}
When $\xi$ and $\eta$
are independent, it was pointed out by Paulsen \cite{Paulsen02}
that this constant can be written in a slightly different form,
which, by
Theorem 4 of \cite{BankovskySly08}, is also true
in the dependent case. Namely, let
$G(z):=P(Z_\infty\le z)$,
$h(z):=E[G(-V_{T_z})\mid T_z<\infty]\in[0,1]$,
and
$h:=\lim_{z\rightarrow\infty} h(z)$.
Then
\[
C_-=\frac{1}{w\mu^* h}
E\Big[\left(\left(Q+MZ_\infty\right)^-\right)^w-
\left(\left(MZ_\infty\right)^-\right)^w\Big].
\]
(iii) The requirement that $\xi_1$ is
spread out can be replaced
with the less restrictive requirement that $\xi_T$ be spread out, where
$T$ is uniformly distributed on $[0,1]$ and independent of $\xi.$
We omit details of this, which can be carried out as in
\cite{Paulsen02}.
\end{rem}
\section{Examples}{\lambda}bel{Examples}
In this section we provide examples of \Levy processes for
which Conditions A, B and C of Theorem~\ref{theorem: my Nyrhinen result}
are satisfied.
Note that conditions B and C only involve the marginal processes $\xi$ and $\eta$ and they apply to all examples treated in the literature so far; cf. Kl\"uppelberg and Kostadinova~\cite{KluppelbergKostadinova08} for detailed references.
The only condition which may involve dependence between $\xi$ and $\eta$ is
Condition A.
We denote the characteristic triplet of $(\xi,\eta)$ by
$((\tilde{{\gamma}mma}_\xi,\tilde{{\gamma}mma}_\eta),\Sigma_{\xi,\eta},\Pi_{\xi,\eta}).$
The characteristic triplet of the marginal process $\xi$ is denoted by $({\gamma}mma_\xi,{\sigma}gma_\xi^2,\Pi_\xi),$ where
{\beta}gin{equation}{\lambda}bel{first 2 dim to 1 dim equation}
{\gamma}mma_\xi=\tilde{\gamma}mma_\xi+
\int_{\{|x|< 1\}\cap\{x^2+y^2\ge 1\}}x\Pi_{\xi,\eta}(\mathrm{d}(x,y)),
\end{equation}
and ${\sigma}gma_\xi^2$ is the upper left entry in the matrix $\Sigma_{\xi,\eta}$.
Similarly for $\eta$.
The random jump measure and Brownian motion
components of $(\xi,\eta)$ will be denoted respectively by
$N_{\xi,\eta}$ and $(B_\xi,B_\eta)$;
see Section 1.1 of \cite{BankovskySly08} for further details.
{\beta}gin{example}\rm [Bivariate compound Poisson process with drift]{\lambda}bel{3.1}
\\
Let $(N_t)_{t\ge0}$ be a Poisson process with intensity ${\lambda}>0$,
and, independent of it, $(X_i,Y_i)_{i\in{\Bbb N}}$ an iid sequence of random 2-vectors.
For ${\gamma}mma_\xi,{\gamma}mma_\eta\in{\Bbb R}$ set
\[
(\xi_t,\eta_t):=({\gamma}mma_\xi,{\gamma}mma_\eta)\,t+\sum_{i=1}^{N_t}(X_i,Y_i),\quad t\ge0,
\]
with
$E|X_1|<\infty$ and ${\lambda}mbda$, ${\gamma}mma_\xi$ and $EX_1$ such that ${\gamma}mma_\xi+{\lambda}mbda EX_1>0$.
For this process,
\[
c({\alpha})=\ln Ee^{-{\alpha}pha\xi_1}=-{\alpha} {\gamma}mma_\xi-{\lambda}mbda \big(1-Ee^{-{\alpha} X_1}\big) < \infty
\]
for ${\alpha}\in\mathbb{R}$ such that $Ee^{-{\alpha}pha X_1}$ is finite,
with $c'(0)=-{\gamma}mma_\xi-{\lambda}mbda EX_1<0$.
We consider the special case where $(X_1,Y_1)$ is bivariate Gaussian with mean
$(m_X,m_Y)$ and positive definite covariance matrix
\[ \Sigma_{X,Y}:=
\left(
{\beta}gin{array}{cc}
{\sigma}gma_X^2 & {\sigma}gma_{X,Y} \\
{\sigma}gma_{X,Y} & {\sigma}gma_Y^2 \\
\end{array}
\right).
\]
Then Condition C obviously holds.
For Condition B, note that
{\beta}gin{equation}{\lambda}bel{ca}
c({\alpha}pha)=
-{\alpha} {\gamma}mma_\xi - {\lambda} \big(1-e^{-m_X {\alpha}+{\sigma}_X^2{\alpha}^2/2}\big) \to \infty\,\mbox{ as } {\alpha}pha\to\infty.
\end{equation}
Since $c$ is convex with $c(0)=0$ and $c'(0)<0$, a Lundberg coefficient consequently exists and Condition B is satisfied.
To establish Condition A we note that $(\xi,\eta)$ is a finite variation process and invoke Remark 2(2) of \cite{BankovskySly08},
also using the notation from that paper.
In fact, by that Remark 2(2), $\psi(z)=0$ for some $z>0$ would imply that
$P_{X,Y}(A_3)=P(X_1\le 0, Y_1\le 0)=0$, which obviously
is not the case. So Condition A holds.
\end{example}
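The Lundberg coefficient in Example~\ref{3.1} is easily computed numerically.
The following sketch (the parameter values are our own, purely illustrative choices) locates the unique positive zero of the convex function $c$ by root finding.
\begin{verbatim}
# Numerical sketch (illustrative parameters): the Lundberg coefficient w of
# Condition B for the compound Poisson model with Gaussian jumps.
import numpy as np
from scipy.optimize import brentq

gamma_xi, lam = 0.5, 1.0      # drift of xi and Poisson intensity
m_X, sigma_X = 0.5, 1.0       # mean and standard deviation of the jumps X_i

def c(alpha):
    # c(alpha) = ln E exp(-alpha * xi_1), cf. the formula in Example 3.1
    return -alpha * gamma_xi - lam * (1.0 - np.exp(-m_X * alpha + 0.5 * sigma_X**2 * alpha**2))

assert -gamma_xi - lam * m_X < 0   # c'(0) < 0, so xi_t -> +infinity a.s.
w = brentq(c, 1e-8, 10.0)          # unique positive zero of the convex function c
print("Lundberg coefficient w =", round(w, 4))   # psi(z) then decays roughly like z^{-w}
\end{verbatim}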
{\beta}gin{example}\rm A Brownian motion with drift, i.e., with
\[
(\xi_t,\eta_t)=({\gamma}mma_\xi,{\gamma}mma_\eta)\,t+(B_{\xi,t},B_{\eta,t}),\quad t\ge0,
\]
where ${\gamma}mma_\xi>0$ and
$(B_\xi,B_\eta)_t$ is bivariate Brownian motion with mean 0 and positive definite
covariance matrix, is easily seen to satisfy Conditions A, B, C.
\end{example}
{\beta}gin{example}\rm [Jump diffusion $\xi$ and Brownian motion $\eta$]\\
Let $(B_t)_{t\ge0}$ be Brownian motion with mean zero and variance ${\sigma}^2$, $(N_t)_{t\ge0}$ a
Poisson process with intensity ${\lambda}>0$, and $(X_i)_{i\in{\Bbb N}}$ iid r.v.s, all independent.
Set
$$
(\xi_t,\eta_t) = ({\gamma}mma_\xi,{\gamma}mma_\eta)t + \big(B_t+\sum_{i=1}^{N_t} X_i, B_t\big),\quad t\ge0,
$$
where ${\gamma}mma_\xi>0$, and
assume that ${\gamma}mma_\xi+{\lambda}mbda EX_1>0$.
Condition A holds, since the Gaussian covariance matrix of $(\xi,\eta)$ is of the form
{\beta}gin{eqnarray}{\lambda}bel{matrix}
\Sigma_{\xi,\eta}:=
\left(
{\beta}gin{array}{cc}
{\sigma}gma^2+ {\lambda}mbda EX_1^2 & 1 \\
1 & {\sigma}gma^2 \\
\end{array}
\right),
\end{eqnarray}\noindent
and, hence, is not of the form excluded by Theorem~1 of \cite{BankovskySly08}.
Moreover, $c({\alpha})$ is the same as in \eqref{ca} with the addition of a term
$ {\alpha}^2{\sigma}gma^2/2$,
so again $c'(0)= - {\gamma}mma_\xi - {\lambda} EX_1 <0$.
\\[2mm]
(a) \, Now assume that $X_1$ is, as in the
Merton model, normally distributed with mean $m_X$ and variance ${\sigma}_X^2$.
Then
Conditions B and C are satisfied just as in Example \ref{3.1}.
\\[2mm]
(b) \,
The picture changes slightly when we consider Laplace distributed $X$
with density $f(x)=\rho e^{-\rho |x|}/2$ for $x\in{\Bbb R}$, $\rho>0$.
Then $Ee^{-{\alpha} X}=\rho\left((\rho+{\alpha})^{-1}+(\rho-{\alpha})^{-1}\right)/2$
for $-\rho<{\alpha}<\rho$ with singularities at $-\rho$ and $\rho$.
Moreover,
$$
c'({\alpha})=-{\gamma}mma_\xi +{\alpha}{\sigma}gma^2 +{\lambda}\frac{\rho}2\Big(\frac1{(\rho-{\alpha})^2}
-\frac1{(\rho+{\alpha})^2}\Big),
$$
implying that $c'(0)=-{\gamma}mma_\xi<0$.
So a Lundberg coefficient $w>0$ exists.
Since the normal r.v. $B_1$ has absolute moments of every order,
for Condition C to hold it suffices that $w<\rho$, which is guaranteed,
since $\rho$ is a singularity of $c$.
\end{example}
{\beta}gin{example}\rm [Subordinated Brownian motion $\xi$ and spectrally positive $\eta$]\\
Let $(B_t)_{t\ge 0}$ be a standard Brownian motion and $(S_t)_{t\ge 0}$
a driftless subordinator with $\Pi_S\{\mathbb{R}\}=\infty$.
For constants $\mu$, ${\gamma}mma_\xi$, ${\gamma}mma_\eta$,
define
$$(\xi_t,\eta_t) = ({\gamma}mma_\xi,{\gamma}mma_\eta) t + (B(S_t)+\mu S_t,S_t),\quad t\ge0.$$
Subordinated Brownian motions play an important role in financial modeling; cf. Cont and Tankov~\cite{CT}, Ch. 4.
The bivariate process above has joint Laplace transform
{\beta}gin{eqnarray*}
e^{({\alpha}pha_1{\gamma}mma_\xi+{\alpha}pha_2{\gamma}mma_\eta)t}
E[e^{{\alpha}pha_1 (B(S_t)+\mu S_t)+{\alpha}pha_2S_t} ]
&=&
e^{({\alpha}pha_1{\gamma}mma_\xi+{\alpha}pha_2{\gamma}mma_\eta)t}
E[ e^{\Psi_B({\alpha}pha_1) S_t + ({\alpha}pha_1\mu +{\alpha}pha_2 )S_t}]\\
&=&
e^{t[\Psi_S(\Psi_B({\alpha}pha_1)+{\alpha}pha_1\mu+{\alpha}pha_2)+
{\alpha}pha_1{\gamma}mma_\xi+{\alpha}pha_2{\gamma}mma_\eta]},
\end{eqnarray*}\noindent
where $\Psi_B$ and $\Psi_S$ are the Laplace exponents of $B$ and
$S$, respectively. Thus $\Psi_B({\alpha}pha)={\alpha}pha^2/2$.
By setting ${\alpha}pha_2=0$ and $t=1$ we obtain
{\beta}gin{eqnarray*}
c({\alpha}pha)= \ln E e^{-{\alpha}\xi_1}
= \Psi_S(\Psi_B(-{\alpha}pha) -{\alpha}pha\mu )-{\alpha}pha {\gamma}mma_\xi
= \Psi_S({\alpha}^2/2 -{\alpha}pha\mu)-{\alpha}pha {\gamma}mma_\xi.
\end{eqnarray*}\noindent
Consider the variance gamma model with parameters $c,{\lambda}>0$, where $S$ is a gamma
subordinator with L\'evy density
$\rho(x)=cx^{-1}e^{-{\lambda} x}$ for $x>0$ and
Laplace transform
$Ee^{-uS_t} = (1+u/{\lambda})^{-ct}$.
Assume ${\gamma}mma_\xi+c\mu/{\lambda}mbda>0$ and ${\gamma}mma_\eta\le 0$.
Now, $\Psi_S(u)=-c\ln(1-u/{\lambda})$,
giving
$$
c({\alpha})=-{\alpha}pha {\gamma}mma_\xi-c\ln\Big(1+\frac{{\alpha}\mu}{{\lambda}}-\frac{{\alpha}^2}{2{\lambda}}\Big).
$$
$c({\alpha})$ is well defined for ${\alpha}pha\in(\mu-\sqrt{\mu^2+2{\lambda}},\mu+\sqrt{\mu^2+2{\lambda}})$, which includes 0, and $c'(0)=-{\gamma}mma_\xi-c\mu/{\lambda}<0$.
Then, since $c(\mu+\sqrt{\mu^2+2{\lambda}})=\infty$, the Lundberg coefficient $w$ exists.
In order to check Condition A, we have, in the notation
of Theorem 1 of \cite{BankovskySly08},
$\Pi_{\xi,\eta}(A_2)=\Pi_{\xi,\eta}(A_3)=0$,
since $\eta$ has only positive jumps, and
$\theta_2=0$.
Now with $u\ge 0$,
$A_4^u=\{x\le 0,y\ge 0:y<u(e^{-x}-1)\}=\{x\ge 0,y\ge 0:y<u(e^x-1)\}$.
Since $\Pi_\eta(\mathbb{R})=\infty$, $\eta$ has jumps arbitrarily close to 0, and we have
$\Pi_{\xi,\eta}(A_4^u)>0$ for $u>0$, while $\Pi_{\xi,\eta}(A_4^0)=0$.
Thus $\theta_4:= \inf\{u\ge 0: \Pi_{\xi,\eta}(A_4^u)>0\}=0$.
There is no Gaussian component, so ${\sigma}gma^2_\xi=0$, which puts us in the situation
of the second item of Theorem 1 of \cite{BankovskySly08},
and to verify that $\psi(z)>0$ for all $z\ge 0$ we only need
(since $\theta_2=\theta_4=0$)
{\beta}gin{equation}{\lambda}bel{gcr}
g(0)=\widetilde {\gamma}mma_\eta-\int_{x^2+y^2\le 1}y\Pi_{\xi,\eta}({\rm d} x,{\rm d} y)<0.
\end{equation}
But by \eqref{first 2 dim to 1 dim equation},
\[
\widetilde {\gamma}mma_\eta={\gamma}mma_\eta
-\int_{0\le y\le1,x^2+y^2>1}y\Pi_{\xi,\eta}({\rm d} x,{\rm d} y)\le{\gamma}mma_\eta,
\]
thus
$ g(0)<{\gamma}mma_\eta\le 0$,
since we chose ${\gamma}mma_\eta\le 0$.
Hence Condition A holds in this model.
\end{example}
\section{Discrete Time Background and Preliminaries}{\lambda}bel{s4}
Our continuous time asymptotic results
will be transferred across from discrete time versions, and
our first task in the present section is to show how $(V_t)_{t\ge0}$ can be
expressed as a solution of one of two SREs, and give the associated discrete
stochastic series for $(Z_t)_{t\ge0}$.
Earlier papers in this area also adopted this approach and we
will tap into some of their results in proving
Theorem~\ref{theorem: my Nyrhinen result}.
We begin by describing the discrete time setup we use. For
$n\in{\Bbb N}$ consider the SRE
{\beta}gin{equation}{\lambda}bel{definition: first difference equation for V}
Y_n=A_n Y_{n-1}+B_n,
\end{equation}
where $(A_n,B_n)_{n\in{\Bbb N}}$ is an iid sequence of $\mathbb{R}^2$-valued random
vectors independent of an initial r.v. $Y_0.$ The
recursion in \eqref{definition: first difference equation for V} can
be solved in the form
{\beta}gin{equation}{\lambda}bel{definition: first discrete series for V}
Y_n=Y_0\prod_{j=1}^n A_j+\sum_{i=1}^n\prod_{j=i+1}^n A_jB_i
\end{equation}
(with $\prod_{j=n+1}^n=1$).
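As a quick numerical illustration (ours, not part of the argument), iterating the recursion \eqref{definition: first difference equation for V} and evaluating the explicit form \eqref{definition: first discrete series for V} give the same value:
\begin{verbatim}
# Sanity check: the recursion Y_n = A_n Y_{n-1} + B_n versus its closed form.
import numpy as np

rng = np.random.default_rng(0)
n, Y0 = 10, 1.0
A = rng.lognormal(mean=-0.1, sigma=0.3, size=n)   # positive multipliers, e.g. A_j = exp(xi_j - xi_{j-1})
B = rng.normal(size=n)

Y = Y0
for a, b in zip(A, B):                            # iterate the recursion
    Y = a * Y + b

# closed form: Y_n = Y_0 * prod_j A_j + sum_i (prod_{j>i} A_j) * B_i
closed = Y0 * np.prod(A) + sum(np.prod(A[i + 1:]) * B[i] for i in range(n))
assert np.isclose(Y, closed)
\end{verbatim}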
{}From (\ref{GOU definition}) we can write, for $n\in{\Bbb N}$
{\beta}gin{equation}{\lambda}bel{formula: expansion of V}V_n=e^{\xi_n-\xi_{n-1}}\Big(e^{\xi_{n-1}}\big(V_0+\int_0^{n-1}e^{-\xi_{s-}}\mathrm{d} \eta_s\big)\Big)
+e^{\xi_n}\int_{(n-1)+}^n e^{-\xi_{s-}}\mathrm{d} \eta_s.
\end{equation}
Thus, if we let $Y_0=V_0$ and define the $\mathbb{R}^2$-valued random vectors
{\beta}gin{equation}{\lambda}bel{definition: first A and B}
(A_n,B_n):=\Big(e^{\xi_n-\xi_{n-1}},e^{\xi_n}\int_{(n-1)+}^n
e^{-\xi_{s-}}\mathrm{d} \eta_s\Big),
\end{equation}
then
$V_n$ satisfies (\ref{definition: first difference equation
for V}).
An alternative formulation considers for $n\in{\Bbb N}$ the SRE
{\beta}gin{equation}{\lambda}bel{definition: second difference equation for V}
Y_n=C_n Y_{n-1}+C_n D_n\,,
\end{equation}
where $(C_n,D_n)_{n\in{\Bbb N}}$ is an iid
sequence independent of $Y_0.$
The solution is
{\beta}gin{equation}{\lambda}bel{definition: second discrete series for V}
Y_n=Y_0\prod_{j=1}^n C_j+\sum_{i=1}^n\prod_{j=i}^n C_jD_i.
\end{equation}
Using (\ref{formula: expansion of V}) it is clear that
$V_n$ is a solution of (\ref{definition: second difference equation for V})
if we let $V_0=Y_0$ and define
{\beta}gin{equation}{\lambda}bel{definition: second C and D}
(C_n,D_n):=\Big(e^{\xi_n-\xi_{n-1}},e^{\xi_{n-1}}\int_{(n-1)+}^n
e^{-\xi_{s-}}\mathrm{d} \eta_s\Big).
\end{equation}
Then it is easily verified that
{\beta}gin{equation}{\lambda}bel{definition: second discrete series for Z}
Z_n=\sum_{i=1}^n\prod_{j=1}^{i-1} C_j^{-1}D_i
\end{equation}
(with $\prod_{j=1}^0=1$).
Note that even when $\xi$
and $\eta$ are independent, the r.v.s $A_n$ and $B_n$ may
be dependent, and similarly for $C_n$ and $D_n$.
But we have
{\beta}gin{lem}{\lambda}bel{lemma: iid}
{\lambda}bel{Appendix} $(A_n,B_n)_{n\in{\Bbb N}}$ and $(C_n,D_n)_{n\in{\Bbb N}}$ are
iid sequences.
\end{lem}
{\beta}gin{proof}
We begin by proving that the sequence $(C_n,D_n)_{n\in{\Bbb N}}$ is iid.
Fix $n\in\mathbb{N}$ and define the new \Levy process
$
(\bar{\xi}_s,\bar{\eta}_s) :=(\xi_{n-1+s}-\xi_{n-1},\eta_{n-1+s}-\eta_{n-1})$ for $s\ge0.
$
Thus
$(\bar{\xi}_s,\bar{\eta}_s)_{s\ge 0}=_D(\xi_s,\eta_s)_{s\ge 0}$.
Note that we can bring the term $e^{\xi_{n-1}}$ through the integral
sign in \eqref{definition: second C and D} and write
$D_n=\int_{(n-1)+}^n e^{-(\xi_{s-}-\xi_{n-1})}\mathrm{d} \eta_s.$ $(\xi,\eta)$ has independent increments, so
$(C_n,D_n)$ is independent of $(C_m,D_m)$ for every $n\neq m.$
Now
{\beta}gin{eqnarray*}
(C_n,D_n)&=&\Big(e^{\xi_n-\xi_{n-1}},\int_{(n-1)+}^n e^{-\left(\xi_{s-}-\xi_{n-1}\right)}\mathrm{d}\eta_s\Big)\\
&=&\Big(e^{\bar{\xi}_1},\int_{0+}^1e^{-\bar{\xi}_{s-}}\mathrm{d}\bar{\eta}_s\Big)
\, =_D \, \Big(e^{\xi_1},\int_{0+}^1e^{-\xi_{s-}}\mathrm{d}\eta_s\Big)=(C_1,D_1).
\end{eqnarray*}
Thus we have proved that $(C_n,D_n)_{n\in{\Bbb N}}$ is an iid sequence. This implies that $(C_n,C_nD_n)$ is also an iid sequence,
and then $(A_n,B_n)_{n\in{\Bbb N}}$ is also an iid sequence since
\[
(C_n,C_nD_n)=\Big(e^{\xi_n-\xi_{n-1}},e^{\xi_n}\int_{(n-1)+}^n
e^{-\xi_{s-}}\mathrm{d} \eta_s\Big)=(A_n,B_n).
\]\vskip-0.8cm
\quad
\mbox{$\Box$}
\end{proof}
In order to directly access particular results from previous papers,
when discretizing $V$ we will use the approach via the recursion
(\ref{definition: first difference equation for V}) and the
sequence (\ref{definition: first discrete series for V}), whereas
when discretizing $Z$ we will use the approach via the series
(\ref{definition: second discrete series for Z}).
There has been significant attention paid to
sequences of the form (\ref{definition: first
discrete series for V}) and (\ref{definition: second discrete series
for Z}), and they are linked via the fixed point of the same SRE, see
Vervaat \cite{vervaat:1979} and Goldie and Maller \cite{goldie:maller:2000}.
Next we describe two important papers relating to the GOU
and its ruin time.
In them, $\xi$ and $\eta$
are general \Levy processes, possibly dependent. The
relevant papers are Nyrhinen \cite{Nyrhinen01} and Paulsen
\cite{Paulsen02}, which are very closely related to Theorem
\ref{theorem: my Nyrhinen result}.
Nyrhinen \cite{Nyrhinen01} contains asymptotic ruin probability
results for the GOU, in which $(\xi,\eta)$ is allowed to be an
arbitrary bivariate \Levy process.
He discretizes the stochastic integral process $Z$ and deduces asymptotic results in the continuous time setting from similar discrete time results.
We describe Nyrhinen's results in some
detail, and then make some comments.
Let $(M_n,Q_n,L_n)_{n\in{\Bbb N}}$ be iid random vectors
with $P(M>0)=1$ and
$(M,Q,L)\equiv(M_1,Q_1,L_1)$.
Define the sequence
$(X_n)_{n\in{\Bbb N}}$ by
{\beta}gin{equation}{\lambda}bel{definition: Nyrhinens series}
X_n=\sum_{i=1}^n\prod_{j=1}^{i-1}M_jQ_i+\prod_{j=1}^nM_jL_n, \ {\rm
with}\ X_0=0.
\end{equation}
For $u>0$ define the passage time
$\tau_u^X:=\inf\{n\in{\Bbb N}:X_n>u\}$
and the function
$c_M({\alpha}pha):=\ln EM^{\alpha}pha$.
Assume there is a $w^+>0$ such that $EM^{w^+}=1$.
Define
{\beta}gin{equation}{\lambda}bel{t0+}
{\alpha}pha_0^+:= \sup\left\{{\alpha}pha\in\mathbb{R}:c_M({\alpha}pha)<\infty,
~E|Q|^{\alpha}pha<\infty,
~E(ML^+)^{\alpha}pha<\infty\right\}\in[0,\infty].
\end{equation}
Also let
{\beta}gin{equation}{\lambda}bel{Nyybar}
\bar{y}:=\sup\Big\{y\in\mathbb{R}:P\big(\sup_{n\in\mathbb{N}}
X_n>y\big)>0\Big\}\in(-\infty,\infty].
\end{equation}
Nyrhinen provides asymptotic results for $X_n$ under the following
\\[2mm]
{\bf Hypothesis H}: Suppose that $0<w^+<{\alpha}pha_0^+\le\infty$ and
$\bar{y}=\infty.$\\[2mm]
Under Hypothesis H,
and assuming that
$P(M>1)>0,$ the
following quantities are well-defined:
$\mu^+:=c_M'(w^+)\in(0,\infty)$ and
$x_0^+:=\lim_{t\rightarrow {\alpha}pha_0^+-} (1/{c_M'(t)})\in[0,\infty)$.
Let $c_M^*(v)$
be the Fenchel-Legendre transform of $c_M$ as in \eqref{fenchel}.
Define the function
$R:(x_0^+,\infty)\rightarrow\mathbb{R}\cup\{\pm\infty\}$ by
\[
R(x):= \left\{ {\beta}gin{array}{ll}
xc_M^*(1/x) & \textrm{for~}x\in(x_0^+,1/\mu^+), \\
w^+ & \textrm{for~}x\ge 1/\mu^+.
\end{array} \right.\,
\]
In our
situation, $R$ is finite and continuous on $(x_0^+,\infty)$ and
strictly decreasing on $(x_0^+,1/\mu^+)$.
{\beta}gin{prop}{\lambda}bel{theorem: Nyrhinens main theorem}{\rm [Nyrhinen's main discrete results, \cite{Nyrhinen01}, Theorems~2 and~3]}\newline
Assume Hypothesis H. Then the following hold.\\
(i) \, For every $x>x_0^+$,
{\beta}gin{equation}{\lambda}bel{4.15}
\lim_{u\rightarrow\infty}(\ln u)^{-1}\ln P(\tau_u^X\le x\ln u)=-R(x)
\end{equation}
and
{\beta}gin{equation}{\lambda}bel{equation: Nyrhinens finite time}
\lim_{u\rightarrow\infty}(\ln u)^{-1}\ln P(\tau_u^X<\infty)=-w^+.
\end{equation}
(ii) \, If the distribution of $\ln M$ is spread out,
there are constants $C_+>0$ and $\kappa>0$ such that
{\beta}gin{equation}{\lambda}bel{equation: Nyrhinens Goldie}
u^{w^+}P(\tau_u^X<\infty)=C_+ +o(u^{-\kappa}), \ {\rm as}\ u\to\infty.
\end{equation}
\end{prop}
$C_+$ can be obtained from the formula in Theorem 6.2 and (2.18) of Goldie
\cite{goldie:1991}. Nyrhinen continues in his Theorem 3 to give
equivalences for the condition $\bar{y}=\infty$, but they are difficult to verify, as he admits. We discuss these more fully later.
Nyrhinen's continuous result is obtained by applying his discrete
results to the case
{\beta}gin{eqnarray}{\lambda}bel{definition: M allocation}
(M_n, Q_n)&=&
\Big(e^{-(\xi_n-\xi_{n-1})},
e^{\xi_{n-1}}\int_{(n-1)+}^ne^{-\xi_{s-}} \mathrm{d} \eta_s\Big)
=(C_n^{-1}, D_n)
\quad {\rm (cf.\ \eqref{definition: second C and D})},\
\nonumber\\
{\rm and}\
L_n:&=&
e^{\xi_n}\Big(\sup_{n-1< t\leq n}\int_{(n-1)+}^t
e^{-\xi_{s-}}\mathrm{d}\eta_s-\int_{(n-1)+}^n e^{-\xi_{s-}} \mathrm{d} \eta_s\Big).
\end{eqnarray}
$(M_n,Q_n,L_n)_{n\in{\Bbb N}}$ is an iid sequence,
as follows by an easy extension of our proof of
Lemma~\ref{Appendix}. With these allocations $Z_n$ can be
written via (\ref{definition: second discrete series for Z})
in the form
{\beta}gin{eqnarray}{\lambda}bel{Z}
Z_n=\sum_{i=1}^n\prod_{j=1}^{i-1}M_jQ_i=X_n-L_n\prod_{j=1}^nM_j.
\end{eqnarray}\noindent
Nyrhinen proves the following result with equality in distribution; in fact, as the proof below shows, the equality holds pathwise:
{\beta}gin{prop}{\lambda}bel{pro}
Let $(M_n,Q_n,L_n)$ and $Z_n$ be as defined in
\eqref{definition: M allocation} and \eqref{Z}.
Define $X_n$ as in \eqref{definition: Nyrhinens series}.
Then
\[\sup_{n-1<t\le n}Z_t=X_n\quad\mbox{and}\quad
\sup_{0\le t\le n} Z_t=\max_{m=1,\ldots,n}X_m.\]
\end{prop}
{\beta}gin{proof}
For $n\in{\Bbb N}$ we have
{\beta}gin{eqnarray*}
\sup_{n-1< t\leq n}Z_t
&=& Z_{n-1}+\sup_{n-1< t\leq n}\int_{(n-1)+}^t e^{-\xi_{s-}}\mathrm{d}\eta_s
\nonumber\\
&=& Z_{n-1}+
\int_{(n-1)+}^n e^{-\xi_{s-}}\mathrm{d}\eta_s+e^{-\xi_n}L_n\\
& = &
X_n-\prod_{j=1}^n M_j L_n+e^{-\xi_n}L_n
\, = \, X_n.
\end{eqnarray*}
This further implies that
$\sup_{0\le t\le n} Z_t=\max_{m=1,\ldots,n}X_m$.
\quad
\mbox{$\Box$}
\end{proof}
Define the first passage time of $Z$ above $u>0$ by
$\tau_u^Z:=\inf\{t\ge 0:Z_t>u\}$.
Then Proposition \ref{pro} implies that for all $t>0$,
\[
P(\tau_u^Z\le t)=P(\tau_u^X\le t)
\quad {\rm and}\quad
P(\tau_u^Z<\infty)=P(\tau_u^X<\infty).
\]
So \eqref{4.15} and \eqref{equation: Nyrhinens finite time}
hold with $\tau_u^X$ replaced by $\tau_u^Z$,
when Hypothesis H is satisfied for the associated values of
$(M_n,Q_n,L_n).$ If, further, the distribution of $\ln M$ is spread
out, then \eqref{equation: Nyrhinens Goldie}
holds with $\tau_u^X$ replaced by $\tau_u^Z.$ This is the content of
Theorem 4 and Corollary 5 of \cite{Nyrhinen01}.
{\beta}gin{rem} We make some comments on Nyrhinen \cite{Nyrhinen01}.
(i) We begin with the discrete results.
Firstly, the sequence $X_n$ defined in (\ref{definition:
Nyrhinens series}) converges as $n\to\infty$ a.s. to a finite r.v. under Hypothesis H. To see this, note that if we
choose $L_n=L$ then $X_n$ is the inner iteration sequence $I_n(L)$
for the random equation
$\phi(t)=Mt+Q.$ Goldie and Maller \cite{goldie:maller:2000} prove that
$I_n(L)$ converges a.s. to a finite r.v. iff
$\prod_{j=1}^n M_j\rightarrow 0$ a.s. as $n\rightarrow\infty$ and
$I_{M,Q}<\infty,$ where $I_{M,Q}$ is an integral involving the
marginal distributions of $M$ and $Q.$ Since
these conditions have no
dependence on the distribution of $L$, it is clear that they
are precisely those under which $X_n$ converges a.s. for
iid $(M_n,Q_n,L_n).$ We now show that these conditions are in
fact satisfied under Hypothesis H, and thus the sequences $X_n$ and
$\sum_{i=1}^n\prod_{j=1}^{i-1}M_jQ_i$ converge a.s.,
and to the same finite r.v..
Under Hypothesis H and our assumption $P(M=0)=0$,
$E\ln M$ is well-defined and $E\ln M\in[-\infty,0)$.
Hence the random walk
$S_n:=\sum_{j=1}^n(-\ln M_j)=-\ln\prod_{j=1}^n M_j$ drifts to
$\infty$ a.s., and it follows that $\prod_{j=1}^n M_j\rightarrow 0$
a.s. as $n\rightarrow\infty$.
Since ${\alpha}pha_0^+>0$ there exists $s>0$ such that $E|Q|^s<\infty$,
thus $E\ln^+|Q|<\infty$. Hence Corollary~4.1 of
\cite{goldie:maller:2000} implies that the integral condition
$I_{M,Q}<\infty$ is satisfied and the sequence
$\sum_{i=1}^n\prod_{j=1}^{i-1}M_jQ_i$ converges a.s.
(ii) Nyrhinen transfers his discrete results into continuous time, but
the corresponding results are difficult to apply in general.
The most problematic assumption is his condition
$\bar{y}=\infty$ (see \eqref{Nyybar}). In our notation, this is
equivalent to the condition $\psi(z)>0$ for all $z\ge 0.$ Theorem 1
of \cite{BankovskySly08} gives
necessary and sufficient
conditions on the \Levy measure of $(\xi,\eta)$ for this,
which are amenable to verification in special cases,
as we showed in Section \ref{Examples}.
Verifying Nyrhinen's condition $0<w^+<{\alpha}pha_0^+\le\infty$
requires finiteness of powers of $E|Z_1|$ and
$E[\sup_{0<t\le 1}|Z_t|]$.
These conditions would be more conveniently
stated in terms of the characteristic
triplet of $(\xi,\eta)$ or (at least) the marginal distributions of
$\xi$ and $\eta.$
In the special case that $\xi$ and $\eta$ are
independent \Levy processes, Theorem 3.2 of Paulsen
\cite{Paulsen02} does exactly that. However, problems remain.
In \cite{Paulsen02}, the condition $\bar y=\infty$ is assumed to
be true whenever $\xi$ and $\eta$ are independent and $\eta$ is not
a subordinator. However, this claim is false.\footnote{To see this,
let $(\xi,\eta)_t:=(t+N_t,-t)$ where
$N$ is a Poisson process with jump times $0<\tau_1<\tau_2<\cdots$. This
example trivially satisfies all the conditions in Paulsen's Theorem~3.2. However, using It\^o's formula for semimartingales and some
simple manipulation we obtain
$Z_t=-1+(e-1)\sum_{i=1}^{N_t}e^{-\tau_i-i}+e^{-t-N_t}$, and hence
$\inf_{t>0}Z_t\ge-1$ a.s.}
(It does hold if extra conditions are
imposed, in line with Remark 2(3) of \cite{BankovskySly08}.)
Finally, it would be desirable to remove the finite mean assumption
for $\xi$ in \cite{Paulsen02}
and replace the moment conditions in \cite{Paulsen02},
which are sufficient
for convergence of $Z_t$, with the precise necessary and sufficient
conditions given in Goldie and Maller~\cite{goldie:maller:2000}.
Our Theorem \ref{theorem: my Nyrhinen result} addresses all of the
above concerns in the most general setting.
\end{rem}
\section{Proof of Theorem \ref{theorem: my Nyrhinen result}}
The proof requires the
following lemma, which was stated but not proved in \cite{Bankovsky09}.
{\beta}gin{lem}{\lambda}bel{sup proposition} Suppose there exist
$r>0$ and $p,q>1$ with $1/p+1/q=1$ such that $E e^{-\max \{1,r
\}p\xi_1} <\infty$ and $E|\eta_1| ^{\max\{1,r
\}q}<\infty.$
Then
{\beta}gin{equation}{\lambda}bel{equation: sup proposition}
E\Big[\sup_{0\le t\le 1}\left|Z_t\right|^{\max\{1,r \}}\Big]
=
E\Big[\sup_{0\le t\le 1}
\Big|\int_0^t e^{-\xi_{s-}}\mathrm{d}\eta_s\Big|^{\max\{1,r \}}\Big]
<\infty.
\end{equation}
\end{lem}
{\beta}gin{proof}
For ease of notation let $k:=\max\{1,r\}.$ Assume there exists $r>0$
and $p,q>1$ with $1/p+1/q=1$ such that
$Ee^{-kp\xi_1}<\infty$ and $E|\eta_1|^{kq}<\infty.$
We prove the lemma first for the case
in which $E\eta_1=0.$ Since $\eta$ is a \Levy process this implies that $\eta$ is a \cadlag martingale. Since $\xi$
is \cadlag $e^{-\xi}$ is a locally bounded process and hence $Z$ is
a local martingale for $\mathbb{F}$ by the construction of the stochastic integral (see e.g. Protter~\cite{protter}). Since additionally $Z_0=0,$ the Burkholder-Davis-Gundy inequalities
ensure that for our choices of $p,q$ and $k$ there
exists $b>0$ such that
{\beta}gin{eqnarray*}E\Big[\sup_{0\le t\le 1}\Big|\int_0^t
e^{-\xi_{s-}}\mathrm{d}\eta_s\Big|^k\Big]
&\le& bE\Big[\Big[\int_0^z e^{-\xi_{s-}}\mathrm{d}\eta_s,\int_0^z e^{-\xi_{s-}}\mathrm{d}\eta_s\Big]_{z=1}^{k/2}\Big]\\
= \, bE\Big[\Big(\int_0^1 e^{-2\xi_{s-}}\mathrm{d}
[\eta,\eta]_s\Big)^{k/2}\Big]
&\le& bE\Big[\Big(\int_0^1 \sup_{0\le t\le 1}e^{-2\xi_t}\mathrm{d}
[\eta,\eta]_s\Big)^{k/2} \Big],
\end{eqnarray*}
where in the second inequality recall that
$[\eta,\eta]_s$ is increasing.
(The notation $[\cdot,\cdot]$ denotes the quadratic variation process.)
The last expression equals
\[
bE\Big[\sup_{0\le t\le 1}e^{-k\xi_{t}}[\eta,\eta]_1^{k/2}\Big]
\le b\Big(E\Big[\sup_{0\le t\le 1}e^{-pk\xi_{t}}\Big]
\Big)^{1/p}\Big(E\big[[\eta,\eta]_1^{qk/2}\big]\Big)^{1/q},
\]
where the inequality follows for
our choices of $p$ and $q$ by H\"{o}lder's inequality.
Since $k\ge1$, $q>1$, the Burkholder-Davis-Gundy inequalities give
the existence of $c>0$ such that (using Doob's inequality for the second inequality)
{\beta}gin{eqnarray*}
E\left[[\eta,\eta]_1^{qk/2}\right] \leq \frac{1}{c}E\Big[\sup_{0\le
t\le 1}|\eta_t|^{qk}\Big]
\le\frac{8}{c}E\left[|\eta_1|^{qk}\right]<\infty.
\end{eqnarray*}
Thus it suffices to prove
$E\left[\sup_{0\le t\le 1}e^{-pk\xi_{t}}\right]<\infty$.
Now $Y_t:=e^{-pk\xi_t}/c^t$, where $c:=Ee^{-pk\xi_1} \in(0,\infty)$, is a non-negative martingale, and
it follows by Doob's maximal inequality that
{\beta}gin{equation*}
E\Big[\sup_{0\le t\le1}e^{-pk\xi_{t}}\Big]\le\max\{1,c\}
E\Big[\sup_{0\le t\le 1}
\frac{e^{-pk\xi_t}}{c^t} \Big]
\le
\max\Big\{\frac1{c},1\Big\}
\Big(\frac{pk}{pk-1}\Big)^{pk}
Ee^{-pk\xi_1}<\infty.
\end{equation*}
Hence the lemma is proved for the case in which $E(\eta_1)=0.$
In general, write
{\beta}gin{eqnarray*}
&&E\Big[\sup_{0 \le t\le
1}\Big|\int_0^t e^{-\xi_{s-}}\mathrm{d} \eta_s\Big|^k\Big]
= E\Big[\sup_{0 \le t\le 1}\Big|\int_0^t e^{-\xi_{s-}}\mathrm{d}(
\eta_s-sE\eta_1+sE\eta_1)\Big|^k
\Big]\\
&&\le E\Big[\Big(\sup_{0 \le t\le 1}\Big|\int_0^t
e^{-\xi_{s-}}\mathrm{d}( \eta_s-sE\eta_1)\Big|+
|E\eta_1|\sup_{0 \le t\le 1}\Big|\int_0^t
e^{-\xi_{s-}}\mathrm{d} s\Big| \Big)^k\Big],
\end{eqnarray*}
in which the first term on the right-hand side is finite by the first
part of the proof.
An application of Minkowski's inequality to the second term on the right-hand side completes the proof.
\quad
\mbox{$\Box$}
\end{proof}
{\beta}gin{rem}\rm
If $\xi$ and $\eta$ are independent, then H\"{o}lder's
inequality is not required in the proof of Lemma \ref{sup
proposition}, and a simpler independence argument shows that
(\ref{equation: sup proposition}) holds if $Ee^{-\max
\{1,r \}\xi_1}<\infty$ and $E|\eta_1| ^{\max\{1,r
\}}<\infty$ for some $r>0.$ We can put further restrictions
on $\xi$ and $\eta,$ such as in the example in Section 3 of Nyrhinen
\cite{Nyrhinen01}, which assumes $\xi$ is continuous and $\eta$ is
compound Poisson plus drift, which render the use of the
Burkholder-Davis-Gundy inequalities unnecessary and further simplify
the conditions. For general \Levy $(\xi,\eta)$ the above inequality
is the sharpest we have found.
\end{rem}
{\em Proof of Theorem \ref{theorem: my Nyrhinen result}: }
We aim to use Proposition \ref{theorem: Nyrhinens main theorem}
for passage below rather than above.
We can do this by replacing $\eta$ by $-\eta$. Note that for $z>0$,
\[
T_z=\inf\{t>0:Z_t<-z\} =\inf\{t>0:-Z_t>z\}
=\inf\{t>0:{\widehat Z}_t>z\},
\]
where we denote $Z_t$, when $\eta$ is replaced by $-\eta$, by ${\widehat Z}_t$
and similarly for the other quantities.
Thus ${\widehat Z}_t=-Z_t$, and it is easily checked that,
with $(M_n,Q_n)$ as in \eqref{definition: M allocation},
$({\widehat M}_n,{\widehat Q}_n)= (M_n,-Q_n)$, and,
with $L_n$ as in \eqref{definition: M allocation},
${\widehat L}_n=-{\overlineerline L}_n$, where
{\beta}gin{equation}{\lambda}bel{Ltilde}
{\overlineerline L}_n:=-e^{\xi_n}\Big(
\int_{(n-1)+}^n e^{-\xi_{s-}} \mathrm{d} \eta_s-
\inf_{n-1< t\leq n}\int_{(n-1)+}^te^{-\xi_{s-}}\mathrm{d}\eta_s
\Big).
\end{equation}
{}From \eqref{definition: Nyrhinens series} we get
${\widehat X}_n({\widehat L}_n)=-X_n({\overlineerline L}_n)$.
Then Proposition \ref{theorem: Nyrhinens main theorem}
ensures that
(\ref{equation: Nyrhinen result 1}) and
(\ref{equation: Nyrhinen result 2})
hold, if we can
prove that the relevant conditions are satisfied for $({\widehat M},{\widehat Q},{\widehat L})$;
i.e., we must show that Hypothesis H holds for the hat variables.
The corresponding
$\widehat {\overlineerline y}$ (see \eqref{Nyybar}) is
{\beta}gin{eqnarray*}
\sup\Big\{y\in\mathbb{R}:
P\Big(\sup_{n\in\mathbb{N}}{\widehat X}_n({\widehat L}_n)>y\Big)>0\Big\}
&=&
\sup\Big\{z\in\mathbb{R}:
P\Big(\inf_{t>0}Z_t<-z\Big)>0\Big\},
\end{eqnarray*}
so $\widehat {\overlineerline y}=\infty$
if and only if $\psi(z)>0$ for all $z\ge 0,$ which we have assumed.
We need a $w^+>0$ such that $E{\widehat M}^{w^+}=1$, and this is the case
with $w^+=w$ under
\eqref{cramer} since ${\widehat M}=M=e^{-\xi_1}$.
Also, ${\widehat c}_M({\alpha}pha)= \ln E{\widehat M}^{\alpha}pha=c({\alpha}pha)$, so that
${\alpha}pha_0^+$ in \eqref{t0+} here equals
$ {\alpha}pha_0$ as defined in \eqref{t0}.
Note that the extra term
$E({\widehat M}{\widehat L}^+)^{\alpha}pha=E(M{\overlineerline L}^-)^{\alpha}pha$
required in \eqref{t0+} is superfluous here, since
$E(M{\overlineerline L}^-)^{\alpha}pha= E{\overlineerline Z}_1^{\alpha}pha$, and this is finite for
${\alpha}pha\ge 0$ if and only if $E|Z_1|^{\alpha}pha<\infty$.
Under the moment conditions of
Theorem \ref{theorem: my Nyrhinen result},
the conditions of Lemma \ref{sup proposition} hold with $r=w^++\varepsilon$,
so $E|Z_1|^{\alpha}pha<\infty$ for ${\alpha}pha=\max\{1,w+\varepsilon\}$,
and hence ${\alpha}pha_0^+\ge w^++\varepsilon>w^+$.
Thus indeed Hypothesis H is fulfilled in the present situation and
Proposition \ref{theorem: Nyrhinens main theorem}
applies to give
(\ref{equation: Nyrhinen result 1}) and
(\ref{equation: Nyrhinen result 2}).
Also ${\alpha}pha_0^+\ge w^++\varepsilon>w^+$ implies
$c'({\alpha}pha_0-)>c'(w)=\mu^*=-E\xi_1e^{-w\xi_1}$,
and this is finite since $Ee^{-(w+\varepsilon)\xi_1}$ is.
So $0\le x_0<1/\mu^*<\infty$.
Suppose, further, that $\xi_1$ is spread out. Then the dual
version of (\ref{equation: Goldie result}) follows
from Nyrhinen's comments in \cite{Nyrhinen01}, which we expressed as Proposition \ref{theorem: Nyrhinens main theorem}.
\quad
\mbox{$\Box$}
{\beta}gin{thebibliography}{99}
\footnotesize
\bibitem{Asmussen:1998}
S.~Asmussen.
\newblock Subexponential asymptotics for stochastic processes: extremal
behavior, stationary distributions and first passage probabilities.
\newblock {\em Ann. Appl. Prob.}, 8:354--374, 1998.
\bibitem{Bankovsky09}
D.~Bankovsky.
\newblock Conditions for certain ruin for the generalised
{O}rnstein-{U}hlenbeck process and the structure of the upper and lower
bounds.
\newblock {\em Stoch. Proc. Appl.}, 120:255--280, 2010.
\bibitem{BankovskySly08}
D.~Bankovsky and A.~Sly.
\newblock Exact conditions for no ruin for the generalised
{O}rnstein-{U}hlenbeck process.
\newblock {\em Stoch. Proc. Appl.}, 119:2544--2562, 2009.
\bibitem{BertoinLindnerMaller07}
J.~Bertoin, A.~Lindner, and R.~Maller.
\newblock On continuity properties of the law of integrals of {L}\'evy
processes.
\newblock In: {\em S\'eminaire de Probabilit\'es XLI}, LNM 1934, pp. 137--160.
Springer, Berlin, 2008.
\bibitem{Malleretal08}
M.~Brokate, C.~Kl\"uppelberg, R.~Kostadinova, R.~Maller, R.S. Seydel.
\newblock On the distribution tail of an integrated risk model: a numerical
approach.
\newblock {\em Insurance: Math. \& Econ.}, 42:101--106, 2008.
\bibitem{Cai04}
Jun Cai.
\newblock Ruin probabilities and penalty functions with stochastic rates of
interest.
\newblock {\em Stoch. Proc. Appl.}, 112:53--78, 2004.
\bibitem{CarmonaPetitYor97}
P.~Carmona, F.~Petit, and M.~Yor.
\newblock On the distribution and asymptotic results for exponential
functionals of {L}\'evy processes.
\newblock M.~Yor, Ed., {\em Exponential Functionals and Principal Values
Related to Brownian Motion}, 73--126.
Biblio. de la Rev. Mat. Ibero-Americana, 1997.
\bibitem{CarmonaPetitYor01}
P.~Carmona, F.~Petit, and M.~Yor.
\newblock Exponential functionals of {L}\'evy processes.
\newblock O.E. Barndorff-Nielsen, T.~Mikosch, S.I. Resnick, Eds,
{\em L\'evy {P}rocesses: {T}heory and {A}pplications}, 41--55.
Birkh\"auser, Boston, 2001.
\bibitem{ChiuYin04}
S.~N. Chiu and C.~Yin.
\newblock A diffusion perturbed risk process with stochastic return on
investments.
\newblock {\em Stochastic Anal. Appl.}, 22:341--353, 2004.
\bibitem{CT}
R.~Cont and P.~Tankov.
\newblock {\em Financial Modelling with Jump Processes.}
\newblock Chapman \& Hall/CRC Financial Mathematics Series, Boca Raton, FL, 2004.
\bibitem{deHaanKarandikar89}
L.~de~Haan and R.~L. Karandikar.
\newblock Embedding a stochastic difference equation into a continuous-time
process.
\newblock {\em Stoch. Proc. Appl.}, 32:225--235, 1989.
\bibitem{Dufresne90}
D.~Dufresne.
\newblock The distribution of a perpetuity, with applications to risk theory
and pension funding.
\newblock {\em Scand. Actuar. J.}, (1-2):39--79, 1990.
\bibitem{EricksonMaller05}
K.~B. Erickson and R.~A. Maller.
\newblock Generalised {O}rnstein-{U}hlenbeck processes and the convergence of
{L}\'evy integrals.
\newblock In {\em S\'eminaire de Probabilit\'es XXXVIII}, Lecture Notes in
Mathematics 1857, pp. 70--94. Springer, Berlin, 2005.
\bibitem{fasen:2009}
V.~Fasen.
\newblock Extremes of continuous-time processes.
\newblock In T.G. Andersen, R.A. Davis, J.-P. Kreiss, T.~Mikosch, Eds,
{\em Handbook of Financial Time Series.}, 653--667. Springer, Berlin,
2009.
\bibitem{GjessingPaulsen97}
H.~K. Gjessing and J.~Paulsen.
\newblock Present value distributions with applications to ruin theory and
stochastic equations.
\newblock {\em Stoch. Proc. Appl.}, 71:123--144, 1997.
\bibitem{goldie:1991}
C.~M. Goldie.
\newblock Implicit renewal theory and tails of solutions of random equations.
\newblock {\em Ann. Appl. Prob.}, 1:126--166, 1991.
\bibitem{goldie:maller:2000}
C.~M. Goldie and R.A. Maller.
\newblock Stability of perpetuities.
\newblock {\em Ann. Prob.}, 28:1195--1218, 2000.
\bibitem{Harrison77}
J.~M. Harrison.
\newblock Ruin problems with compounding assets.
\newblock {\em Stoch. Proc. Appl.}, 5:67--79, 1977.
\bibitem{PaulsenHove99}
A.~Hove and J.~Paulsen.
\newblock Markov chain {M}onte {C}arlo simulation of the distribution of some
perpetuities.
\newblock {\em Adv. Appl. Prob.}, 31:112--134, 1999.
\bibitem{KalashnikovNorberg02}
V.~Kalashnikov and R.~Norberg.
\newblock Power tailed ruin probabilities in the presence of risky investments.
\newblock {\em Stoch. Proc. Appl.}, 98:211--228, 2002.
\bibitem{KluppelbergKostadinova08}
C.~Kl\"uppelberg and R.~Kostadinova.
\newblock Integrated insurance risk models with exponential {L}\'evy
investment.
\newblock {\em Insurance: Math \& Econ.}, 42:560--577, 2008.
\bibitem{klm:2004}
C.~Kl{\"u}ppelberg, A.~Lindner, and R.~Maller.
\newblock A continuous time {GARCH} process driven by a {L}\'evy process:
stationarity and second order behaviour.
\newblock {\em J. Appl. Prob.}, 41:601--622, 2004.
\bibitem{Klu:Stadtmueller:98}
C.~Kl\"uppelberg and U.~Stadtm\"uller.
\newblock Ruin probabilities in the presence of heavy-tails and interest rates.
\newblock {\em Scand. Actuar. J.}, 1:49--58, 1998.
\bibitem{KonstantinidesMikosch05}
D.~G. Konstantinides and T.~Mikosch.
\newblock Large deviations and ruin probabilities for solutions to stochastic
recurrence equations with heavy-tailed innovations.
\newblock {\em Ann. Prob.}, 33:1992--2035, 2005.
\bibitem{lindner:maller:2005}
A.~Lindner and R.A. Maller.
\newblock L\'evy integrals and the stationarity of generalised
{O}rnstein-{U}hlenbeck processes.
\newblock {\em Stoch. Proc. Appl.}, 115:1701--1722, 2005.
\bibitem{MallerMullerSzimayer07}
R.A. Maller, G.~M\"uller, and A.~Szimayer.
\newblock Ornstein-{U}hlenbeck processes and extensions.
\newblock In T.G. Andersen, R.A. Davis, J.-P. Kreiss, and T.~Mikosch, Eds,
{\em Handbook of Financial Time Series.}, 421--438. Springer, Berlin,
2009.
\bibitem{MaulikZwart}
K.~Maulik and B.~Zwart.
\newblock Tail asymptotics for exponential functionals of {L}\'evy processes.
\newblock {\em Stoch. Proc. Appl.}, 116:156--177, 2006.
\bibitem{Nyrhinen01}
H.~Nyrhinen.
\newblock Finite and infinite time ruin probabilities in a stochastic economic
environment.
\newblock {\em Stoch. Proc. Appl.}, 92:265--285, 2001.
\bibitem{Paulsen93}
J.~Paulsen.
\newblock Risk theory in a stochastic economic environment.
\newblock {\em Stoch. Proc. Appl.}, 46:327--361, 1993.
\bibitem{Paulsen98}
J.~Paulsen.
\newblock Sharp conditions for certain ruin in a risk process with stochastic
return on investments.
\newblock {\em Stoch. Proc. Appl.}, 75:135--148, 1998.
\bibitem{Paulsen02}
J.~Paulsen.
\newblock On {C}ram\'er-like asymptotics for risk processes with stochastic
return on investments.
\newblock {\em Ann. Appl. Probab.}, 12:1247--1260, 2002.
\bibitem{protter}
P.E. Protter.
\newblock {\em Stochastic Integration and Differential Equations.}
\newblock Springer, Berlin, 2nd ed., 2004.
\bibitem{sato:1999}
K.-I. Sato.
\newblock {\em L\'evy Processes and Infinitely Divisible Distributions.}
\newblock Cambridge University Press, Cambridge, UK, 1999.
\bibitem{vervaat:1979}
W.~Vervaat.
\newblock On a stochastic difference equation and a representation of
non-negative infinitely divisible random variables.
\newblock {\em Adv. Appl. Prob.}, 11:750--783, 1979.
\bibitem{Yor01}
M.~Yor.
\newblock {\em Exponential functionals of {B}rownian motion and related
processes}.
\newblock Springer, Berlin, 2001.
\bibitem{NgYuenWang04}
K.~C. Yuen, G.~Wang, and K.~W. Ng.
\newblock Ruin probabilities for a risk process with stochastic return on
investments.
\newblock {\em Stoch. Proc. Appl.}, 110:259--274, 2004.
\end{thebibliography}
\end{document}
\begin{document}
\title{The Parameterized Complexity of~$s$-Club with Triangle and Seed Constraints }
\titlerunning{The Parameterized Complexity of~$s$-Club with Triangle and Seed Constraints}
\authorrunning{J.~Garvardt, C.~Komusiewicz, F.~Sommer}
\author{Jaroslav Garvardt \and Christian Komusiewicz\orcidID{0000-0003-0829-7032} \and Frank Sommer \thanks{Supported by the Deutsche Forschungsgemeinschaft (DFG), project EAGR (KO 3669/6-1).}
\orcidID{0000-0003-4034-525X}}
\institute{Fachbereich Mathematik und Informatik, Philipps-Universit\"at Marburg,\\ Marburg, Germany \\ \email{\{garvardt,komusiewicz,fsommer\}@informatik.uni-marburg.de}}
\maketitle
\begin{abstract}
The \textsc{s-Club} problem asks, for a given undirected graph~$G$, whether~$G$ contains a vertex set~$S$ of size at least $k$ such that~$G[S]$, the subgraph of~$G$ induced by~$S$, has diameter at most~$s$.
We consider variants of \textsc{$s$-Club} where one additionally demands that each vertex of~$G[S]$ is contained in at least~$\ell$ triangles in~$G[S]$, that~$G[S]$ contains a spanning subgraph~$G'$ such that each edge of~$E(G')$ is contained in at least $\ell$~triangles in~$G'$, or that~$S$ contains a given set~$W$ of seed vertices.
We show that in general these variants are W[1]-hard when parameterized by the solution size~$k$, making them significantly harder than the unconstrained~\textsc{$s$-Club} problem.
On the positive side, we obtain some FPT algorithms for the case when~$\ell=1$ and for the case when~$G[W]$, the graph induced by the set of seed vertices, is a clique.
\end{abstract}
\section{Introduction}
Finding cohesive subgroups in social or biological networks is a fundamental task
in network analysis. A classic formulation of cohesiveness is based on the
observation that cohesive groups have small diameter. This observation led to the~$s$-club model originally proposed by Mokken~\cite{Mok79}. An \emph{$s$-club} in a graph~$G=(V,E)$ is a set of vertices~$S$ such that~$G[S]$, the subgraph of~$G$ induced by~$S$, has diameter at most~$s$. The 1-clubs are thus precisely the cliques and the larger the value of~$s$, the more the clique-defining constraint of having diameter one is relaxed. In the \textsc{$s$-Club} problem we aim to decide whether~$G$ contains an~$s$-club of size at least~$k$.
A big drawback of $s$-clubs is that the largest $s$-clubs are often not very cohesive with
respect to other cohesiveness measures such as density or minimum degree. This behavior is particularly pronounced for~$s=2$: the largest
$2$-club in a graph is often the vertex~$v$ of maximum degree together with its neighbors~\cite{HKN15}.
To avoid these so-called hub-and-spoke structures, it has been proposed to
augment the $s$-club definition with additional constraints~\cite{CA17,KNNP19,PYB13,VB12}.
One of these augmented models, proposed by Carvalho and Almeida~\cite{CA17}, asks that every vertex
is part of a triangle~\cite{CA17}. This property was later generalized to the \emph{vertex-$\ell$-triangle} property, which asks that every vertex of~$S$ is in at
least~$\ell$ triangles in~$G[S]$~\cite{AB19}. \problemdef{Vertex Triangle~$s$-Club} {An undirected graph~$G$ = $(V,E)$, and
two integers~$k,\ell\geq 1$.} {Does~$G$ contain an $s$-club~$S$ of size at least~$k$
that fulfills the vertex-$\ell$-triangle property?}
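As an illustration of this definition (a sketch of our own; the function name and the use of the \texttt{networkx} library are our choices, not part of the results), the following snippet checks whether a given vertex set is an $s$-club fulfilling the vertex-$\ell$-triangle property, and verifies the example with two cliques discussed below.
\begin{verbatim}
# Check whether S induces an s-club in which every vertex is in >= ell triangles.
import networkx as nx

def is_vertex_triangle_s_club(G, S, s, ell):
    H = G.subgraph(S)
    if H.number_of_nodes() == 0 or not nx.is_connected(H):
        return False
    if nx.diameter(H) > s:
        return False
    # nx.triangles counts, for every vertex, the triangles of H containing it
    return all(t >= ell for t in nx.triangles(H).values())

# Two cliques of size d+1 joined by one edge: a vertex-binom(d,2)-triangle 3-club.
d = 4
G = nx.disjoint_union(nx.complete_graph(d + 1), nx.complete_graph(d + 1))
G.add_edge(0, d + 1)
print(is_vertex_triangle_s_club(G, list(G), 3, d * (d - 1) // 2))   # True
\end{verbatim}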
The vertex-$\ell$-triangle
constraint entails some desirable properties for cohesive subgraphs. For example, in a
vertex-$\ell$-triangle~$s$-club, the minimum degree is larger
than~$\sqrt{2\ell}$. However, some undesirable behavior of hub-and-spoke structures
remains. For example, the graph consisting of two cliques of size~$d+1$ that are connected
via one edge is a vertex-$\binom{d}{2}$-triangle 3-club but it can be made disconnected
via one edge deletion. Thus, vertex-$\ell$-triangle $s$-clubs are not robust with respect to edge deletions.
To overcome this problem, we introduce a new model where we put triangle constraints on the edges of the~$s$-club instead of the vertices.
More precisely, we say that a vertex set~$S$ of a graph~$G$ fulfills the \emph{edge-$\ell$-triangle} property if~$G[S]$ contains a spanning subgraph~$G'\coloneqq (S,E')$ such that every edge in~$E(G')$ is in at least~$\ell$ triangles in~$G'$ and the diameter of~$G'$ is at most~$s$.
Next, we introduce the related problem.
\problemdef{Edge Triangle~$s$-Club}
{An undirected graph~$G=(V,E)$, and two integers~$k,\ell\geq 1$.}
{Does~$G$ contain a vertex set~$S$ of size at least~$k$ that fulfills the edge-$\ell$-triangle property?}
Note that in this definition, the triangle and diameter constraints are imposed on a spanning subgraph of~$G[S]$.
In contrast, for \textsc{Vertex Triangle~$s$-Club}, they are imposed directly on~$G[S]$.
The reason for this distinction is that we would like to have properties that are closed under edge insertions.
Properties which are closed under edge insertions are also well-motivated from an application point of view since adding a new connection within a group should not destroy this group.
If we would impose the triangle constraint on the induced subgraph~$G[S]$ instead, then an edge-$\ell$-triangle~$s$-club~$S$ would not be robust to edge additions.
For example, consider a graph~$G$ consisting of a clique~$C$ to which two vertices~$u$ and~$v$ are attached in such a way that both~$u$ and~$v$ have exactly two neighbors in~$C$ and these neighbor sets are disjoint.
Then~$V(G)$ is an edge-$1$-triangle~$3$-club, but after adding the edge~$uv$, the edge~$uv$ is contained in no triangle and thus any edge-$1$-triangle~$3$-club cannot contain both~$u$ and~$v$.
Observe that every set that fulfills the edge-$\ell$-triangle property also fulfills the vertex-$\ell$-triangle property.
Also note that the converse is not true:
A vertex-$\ell$-triangle~$s$-club is not necessarily also an edge-$\ell$-triangle~$s$-club.
For this, consider the above-mentioned graph consisting of two cliques of size~$d+1$ that are connected via one edge~$e$.
This graph is a vertex-$\binom{d}{2}$-triangle $3$-club.
However, the edge~$e$ is contained in no triangle and, since~$e$ is a bridge, every connected spanning subgraph of the graph contains~$e$; hence its vertex set is not an edge-$\ell$-triangle $3$-club for any~$\ell\ge 1$, in particular not for~$\ell=\binom{d}{2}$.
Moreover, each vertex~$v\in S$ of an edge-$\ell$-triangle $s$-club~$S$ with~$|S|\ge 2$ has at least~$\ell+1$ neighbors in~$S$:
Consider an arbitrary edge~$uv$ of the spanning subgraph~$G'$.
Since~$uv$ is contained in at least~$\ell$ triangles~$\{u,v,w_1\},\ldots,\{u,v,w_\ell\}$ in~$G'$, both~$u$ and~$v$ are adjacent to each other and to~$w_1,\ldots,w_\ell$, and thus each have at least~$\ell+1$ neighbors in~$S$.
We can show an even stronger statement: an edge-$\ell$-triangle $ s $-club $ S $ is robust against up to $ \ell $ edge deletions, as desired.
\begin{proposition}
\label{lem:edge-triangle-club-robust-edge-deletions}
Let~$G=(V,E)$ be a graph and let~$S$ be an edge-$\ell$-triangle $ s $-club in~$G$.
More precisely, let~$G'$ be a spanning subgraph of~$G[S]$ such that every edge in~$E(G')$ is in at least~$\ell$ triangles in~$G'$ and the diameter of~$G'$ is at most~$s$.
If $ \ell $ edges are removed from~$G'$, then~$S$ is still an $ (s+\ell)$-club and a~$(2s)$-club.
\end{proposition}
\begin{proof}
We show that if $ \ell $ edges are removed from $ G' $, the diameter of the resulting graph $ \widetilde{G} $ increases by at most $ \ell $.
Let $x$ and $y$ be two vertices of~$S$ and let $P=(v_1,\ldots,v_{r+1})$ with $v_1=x$ and $v_{r+1}=y$ be a shortest path from~$x$ to~$y$ in~$G'$; thus $r\le s$.
Since every edge of~$G'$ is part of at least $\ell$~triangles in~$G'$, for each edge $v_i v_{i+1}$ of~$P$ there are $\ell$ distinct vertices $w_1,\ldots,w_\ell$ such that $\{v_i,v_{i+1},w_j\}$ is a triangle in~$G'$; these triangles pairwise share only the edge~$v_i v_{i+1}$.
Hence, if $v_i v_{i+1}$ is one of the at most~$\ell$ removed edges, then the at most~$\ell-1$ other removed edges destroy at most~$\ell-1$ of these triangles, so some triangle $\{v_i,v_{i+1},w_j\}$ survives and provides a path of length two from~$v_i$ to~$v_{i+1}$ in~$\widetilde{G}$.
Thus, $\dist_{\widetilde{G}}(v_i,v_{i+1}) \le 2$ for every edge of~$P$, and $\dist_{\widetilde{G}}(v_i,v_{i+1}) = 1$ if~$v_i v_{i+1}$ is not removed.
Since at most $\ell$ of the edges in~$P$ are removed, we have $\dist_{\widetilde{G}}(x,y) \leq \dist_{\widetilde{G}}(v_1,v_2) + \ldots + \dist_{\widetilde{G}}(v_r,v_{r+1}) \leq r + \ell \leq s + \ell$.
Since each of the at most~$s$ summands is at most two, we also have $\dist_{\widetilde{G}}(x,y) \leq 2s$.
Thus, after deleting $ \ell $ edges in $ G' $,~$S$ is an $ (s+\ell) $-club and a~$(2s)$-club.
\qed
\end{proof}
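As a practical aside (this is our own observation, not a claim from the literature discussed here), whether a fixed vertex set~$S$ fulfills the edge-$\ell$-triangle property can be tested by repeatedly deleting edges of~$G[S]$ that lie in fewer than~$\ell$ triangles: spanning subgraphs in which every edge lies in at least~$\ell$ triangles are closed under taking unions, so this peeling procedure yields the unique maximal such subgraph, and~$S$ fulfills the property if and only if that subgraph has diameter at most~$s$. A sketch:
\begin{verbatim}
# Sketch (ours): decide the edge-ell-triangle property of a vertex set S in G.
import networkx as nx

def is_edge_triangle_s_club(G, S, s, ell):
    H = nx.Graph(G.subgraph(S))      # mutable copy of G[S] on the vertex set S
    changed = True
    while changed:                   # peel edges lying in fewer than ell triangles
        changed = False
        for u, v in list(H.edges()):
            if len(set(H[u]) & set(H[v])) < ell:
                H.remove_edge(u, v)
                changed = True
    if H.number_of_nodes() <= 1:     # a single vertex trivially has diameter 0
        return H.number_of_nodes() == 1
    return nx.is_connected(H) and nx.diameter(H) <= s

# The two cliques joined by a bridge (see above) fail the test already for ell = 1.
d = 4
G = nx.disjoint_union(nx.complete_graph(d + 1), nx.complete_graph(d + 1))
G.add_edge(0, d + 1)
print(is_edge_triangle_s_club(G, list(G), 3, 1))   # False
\end{verbatim}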
The following further variant of \textsc{$s$-Club} is also practically motivated but not necessarily by concerns about the robustness of the $s$-club. Here the difference to the standard problem is simply that we are given a set of seed vertices~$W$ and aim to find a large $s$-club that contains all seed vertices.
\problemdef{Seeded~$s$-Club}
{An undirected graph~$G$ = $(V,E)$, a subset~$W\subseteq V$, and an integer~$k\geq 1$.}
{Does~$G$ contain an $s$-club~$S$ of size at least~$k$ such that~$W\subseteq S$?}
This variant has applications in community detection, where we are often interested in finding communities containing some set of fixed vertices~\cite{Kan14,WGD13}.
In this work, we study the parameterized complexity of the three above-mentioned problems with
respect to the standard parameter solution size~$k$. Our goal is to determine whether FPT results for~\textsc{$s$-Club}~\cite{CHLS13,SKMN12} transfer to these practically motivated problem variants.
\paragraph{Known Results.}
The \textsc{$s$-Club} problem is NP-hard for all~$s\ge 1$~\cite{BLP02}, even when the input graph has diameter~$s+1$~\cite{BBT05}. For~$s=1$, \textsc{$s$-Club} is equivalent to \textsc{Clique} and thus W[1]-hard with respect to~$k$.
In contrast, for every~$s>1$,
\textsc{$s$-Club} is fixed-parameter tractable (FPT) with respect to the solution
size~$k$~\cite{CHLS13,SKMN12}.
This fixed-parameter tractability can be shown via a Turing kernel with $\ensuremath{\mathcal{O}}(k^2)$~vertices for even~$s$ and~$\ensuremath{\mathcal{O}}(k^3)$~vertices for odd~$s$~\cite{SKMN12,CHLS13}. The complexity of~\textsc{$s$-Club} has also been studied with respect to different classes of input graphs~\cite{GHKR14} and with respect to structural parameters such as the degeneracy of the input graph~\cite{HKNS15}. The \textsc{$s$-Club} problem can be solved
efficiently in practice, in particular for~$s=2$~\cite{BLP02,CHLS13,HKN15}. For example, the \textsc{$2$-Club} problem admits efficient branch-and-bound algorithms~\cite{CHLS13,HKN15} which can compute optimal solutions on very large sparse graphs.
\textsc{Vertex Triangle $s$-Club}{} is NP-hard for all~$s\ge 1$ and for all~$\ell\ge 1$~\cite{CA17,AB19}. We are not aware of any algorithmic studies of \textsc{Edge Triangle $s$-Club}{} or~\textsc{Seeded $s$-Club}.
NP-hardness of \textsc{Edge Triangle $s$-Club}{} for~$\ell=1$ can be shown via the reduction for \textsc{Vertex Triangle $s$-Club}{} for~$\ell=1$~\cite{AB19}.
Also, the NP-hardness of~\textsc{Seeded $s$-Club}{} for~$W\ne\emptyset$ follows directly from the fact that an algorithm for the case where~$|W|=1$ can be used as a black box to solve \textsc{$s$-Club} by trying each vertex of the input graph as the seed. Further robust models of $s$-clubs, which are not considered in this work, include
$t$-hereditary $s$-clubs~\cite{PYB13}, $t$-robust $s$-clubs~\cite{VB12}, and $t$-connected $s$-clubs~\cite{YPB17,KNNP19}. For an
overview on clique relaxation models and complexity issues for the corresponding subgraph
problems we refer to the relevant surveys~\cite{K16,PYB13}.
\paragraph{Our Results.} An overview of our results is given in Table~\ref{tab:results}.
For \textsc{Vertex Triangle $s$-Club}{} and \textsc{Edge Triangle $s$-Club}{}, we provide a complexity dichotomy for all interesting combinations of~$s$ and~$\ell$, that is, for all~$s\ge 2$ and~$\ell\ge 1$,
into cases that are FPT or W[1]-hard with
respect to~$k$, respectively.
Our W[1]-hardness reduction for \textsc{Edge Triangle $s$-Club}{} for~$\ell\ge 2$ also shows the NP-hardness of this case.
The FPT-algorithms are obtained via adaptions of the Turing kernelization for \textsc{$s$-Club}.
Interestingly, \textsc{Vertex Triangle $s$-Club}~ with~$\ell=1$ is
FPT only for larger~$s$, whereas \textsc{Edge Triangle $s$-Club}~with~$\ell=1$ is FPT for all~$s$.
In our opinion, this means that the edge-$\ell$-triangle property is preferable not only from a modelling standpoint but also from an algorithmic standpoint as it allows one to employ Turing kernelization as part of the solving procedure, at least for~$\ell=1$.
It is easy to see that standard problem kernels of polynomial size are unlikely to exist for \textsc{Vertex Triangle $s$-Club}{} and \textsc{Edge Triangle $s$-Club}{}: $s$-clubs are necessarily connected and thus taking the disjoint union of graphs gives a trivial or-composition and, therefore, a polynomial problem kernel implies coNP $\subseteq$ NP/poly~\cite{BDFH09}.
All of our hardness results for \textsc{Vertex Triangle $s$-Club}{} and \textsc{Edge Triangle $s$-Club}{} are shown by a reduction from \textsc{Clique} which is W[1]-hard with respect to~$k$~\cite{Cyg+15,DF13}.
The idea is to replace each vertex of the \textsc{Clique} instance by a vertex gadget.
These gadgets are constructed in such a way that if one of these vertices is part of a vertex/edge-$\ell$-triangle~$2$-club~$S$, then the entire vertex gadget is part of~$S$.
We can then use the distance constraint to make sure that full vertex gadgets are chosen only if the corresponding vertices are adjacent.
While this idea is very natural, using the triangle constraint without creating many vertices that are too close to each other turned out to be technically challenging.
For \textsc{Seeded $s$-Club}, we provide a kernel with respect to~$k$ for clique seeds~$W$ and W[1]-hardness with respect to~$k$ for some other cases.
For~$s=2$, our results provide a dichotomy into FPT and W[1]-hardness with respect to~$k$ in terms of the structure of the seed.
\begin{table}[t]
\caption{Overview of our results of the parameterized complexity of the three problems with respect to the parameter solution size~$k$.}
\begin{tabularx}{\textwidth}{ltty}
\toprule
& \textsc{Vertex Triangle~~~~~\linebreak $s$-Club}~ & \textsc{Edge Triangle~~~~~\linebreak $s$-Club}~ & \textsc{Seeded $s$-Club} \\
\midrule
FPT &\ $\ell=1$ and~$s\ge 4$ & $\ell=1$ for each~$s$ & $W$ is a clique\\
\midrule
W[1]-h &\ $\ell=1$ and $s\le 3$ & $\ell\ge 2$ for each~$s$ & $s=2$ and~$G[W]$ contains at least two non-adjacent vertices \\[6.5ex]
&\ $\ell\ge 2$ for each~$s$ & & $s\ge 3$ and $G[W]$ contains at\linebreak least~$2$ connected components\\
\bottomrule
\end{tabularx}
\label{tab:results}
\end{table}
The W[1]-hardness of \textsc{Seeded $s$-Club}{} is provided by two reductions from \textsc{Clique}.
One reduction is for the case~$s=2$ and any seed that contains at least two non-adjacent vertices~$u$ and~$z$.
The other reduction is for the case~$s\ge 3$ and any seed that contains at least two connected components~$U$ and~$Z$.
In both cases we add two copies~$X_1$ and~$X_2$ of the graph of the \textsc{Clique} instance to the new instance of \textsc{Seeded $s$-Club}{} such that each vertex of~$X_1$ has distance at most~$s$ to~$u$ or~$U$ if its copy in~$X_2$ is also part of the solution.
We show a similar property for~$X_2$ and~$z$ or~$Z$.
This ensures that~$p_1\in X_1$ is in the solution if and only if~$p_2\in X_2$ is in the solution.
Furthermore, the reductions have the property that vertex~$p_1$ in~$X_1$ has distance at most~$s$ to vertex~$q_2$ in~$X_2$ if and only if~$pq$ is an edge of the \textsc{Clique} instance.
This feature will then ensure that the same clique has to be chosen from both copies.
Our W[1]-hardness results, in particular those for \textsc{Seeded $s$-Club}, show that the FPT results for \textsc{$s$-Club} are quite brittle: the standard argument that we may assume~$k\ge\Delta$ fails, and adding even simple further constraints makes finding small-diameter subgraphs much harder.
\paragraph{Preliminaries.}
For integers~$p,q$, we denote~$[p,q]\coloneqq \{p,p+1, \ldots ,q\}$ and~$[q]\coloneqq [1,q]$.
For a graph~$G$, we let~$V(G)$ denote its vertex set and~$E(G)$ its edge set.
We let~$n$ and~$m$ denote the order of~$G$ and the number of edges in~$G$, respectively.
A \emph{path of length~$p$} is a sequence of pairwise distinct vertices~$v_1,\ldots, v_{p+1}$ such that~$v_iv_{i+1}\in E(G)$ for each~$i\in[p]$.
The \emph{distance}~$\dist_G(u,v)$ is the length of a shortest path between vertices~$u$ and~$v$.
Furthermore, we define~$\dist_G(u,W)\coloneqq \min_{w\in W}\dist_G(u,w)$.
We denote by~$\diam(G)\coloneqq \max_{u,v\in V(G)}\dist_G(u,v)$ the \emph{diameter} of~$G$.
Let~$S\subseteq V(G)$ be a vertex set.
We denote by~$N_i(S)\coloneqq \{u \in V \mid \dist(u,S) = i \}$
the \emph{open~$i$-neighborhood} of~$S$ and by~$N_i[S]\coloneqq \bigcup_{j\le i}N_j(S)\cup S$ the \emph{closed~$i$-neighborhood} of~$S$. For a vertex~$v\in V(G)$, we write~$N_i(v)\coloneqq N_i(\{v\})$ and~$N_i[v]\coloneqq N_i[\{v\}]$.
A graph~$G'\coloneqq (V',E')$ with~$V'\subseteq V$ and~$E'\subseteq E(G[V'])$ is a \emph{subgraph} of~$G$.
By~$G[S]\coloneqq (S,\{uv\in E(G)\mid u,v\in S\})$ we denote the \emph{subgraph induced by}~$S$.
Furthermore, by~$G-S\coloneqq G[V\setminus S]$ we denote the induced subgraph obtained after the deletion of the vertices in~$S$.
A vertex set such that each pair of vertices is adjacent is called a \emph{clique} and a clique consisting of three vertices is a \textit{triangle}.
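To illustrate the neighborhood notation used throughout this work, the following minimal sketch (an illustration only, not part of the formal development) computes~$\dist_G(u,S)$ and~$N_i[S]$ by breadth-first search; the adjacency-dictionary representation of the graph is an assumption of the sketch.
\begin{verbatim}
from collections import deque

def distances_from_set(adj, S):
    """Return dist(v, S) for every vertex reachable from the set S."""
    dist = {v: 0 for v in S}
    queue = deque(S)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def closed_i_neighborhood(adj, S, i):
    """N_i[S]: all vertices at distance at most i from S (S included)."""
    dist = distances_from_set(adj, S)
    return {v for v, d in dist.items() if d <= i}

# Example: on the path 1-2-3-4 we have N_2[{1}] = {1, 2, 3}.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert closed_i_neighborhood(adj, {1}, 2) == {1, 2, 3}
\end{verbatim}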
For the definitions of parameterized complexity theory, we refer to the standard monographs~\cite{Cyg+15,DF13}.
All of our hardness results are shown by a reduction from \textsc{Clique}.
\problemdef{Clique}
{An undirected graph~$G=(V,E)$ and an integer~$k$.}
{Does~$G$ contain a clique of size at least~$k$?}
\textsc{Clique} is W[1]-hard with respect to~$k$~\cite{Cyg+15,DF13}.
\section{Vertex Triangle~\texorpdfstring{$s$}{s}-Club}
In this section, we settle the parameterized complexity of \textsc{Vertex Triangle~$s$-Club} with respect to the solution size~$k$.
First, we show that this problem is fixed-parameter tractable when~$\ell=1$ and~$s\ge 4$.
Afterwards, we show W[1]-hardness for all remaining cases, that is, for~$\ell\ge 2$ and~$s\ge 2$, and also for~$\ell=1$ and~$s\in\{2,3\}$.
\subsection{FPT-Algorithms}
The overall approach is based on the idea behind the Turing kernel for \textsc{$2$-Club}, that is, bounding the size of~$N_s[v]$ for each vertex~$v\in V(G)$.
The first step is to remove all vertices which are not in a triangle.
\begin{rrule}
\label{rr-1-vertex-triangle-s-club-remove-vertices-in-no-triangle}
Let~$(G,k)$ be an instance of \textsc{Vertex Triangle~$s$-Club}.
Delete all vertices from~$G$ which are not part of any triangle.
\end{rrule}
Clearly, Reduction Rule~\ref{rr-1-vertex-triangle-s-club-remove-vertices-in-no-triangle} is correct and can be exhaustively applied in polynomial time.
The application of Reduction Rule~\ref{rr-1-vertex-triangle-s-club-remove-vertices-in-no-triangle} has the following effect: if some vertex~$v$ is close to many vertices, then~$(G,k)$ is a trivial yes-instance.
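The exhaustive application of Reduction Rule~\ref{rr-1-vertex-triangle-s-club-remove-vertices-in-no-triangle} can be sketched as follows; this is an illustration only, and the adjacency-set representation and function names are assumptions of the sketch. Note that deleting a vertex may destroy the triangles of other vertices, so the rule is applied until no further vertex can be removed.
\begin{verbatim}
def in_triangle(adj, v):
    """True if v has two adjacent neighbors, i.e., v lies in a triangle."""
    neigh = list(adj[v])
    return any(neigh[j] in adj[neigh[i]]
               for i in range(len(neigh))
               for j in range(i + 1, len(neigh)))

def apply_reduction_rule_1(adj):
    """Exhaustively delete vertices that are not contained in any triangle."""
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if not in_triangle(adj, v):
                for w in adj[v]:
                    adj[w].discard(v)   # remove v from its neighbors' sets
                del adj[v]
                changed = True
    return adj
\end{verbatim}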
\begin{lemma}
\label{lem-1-vertex-triangle-s-club-neighborhood}
Let~$(G,k)$ be an instance of \textsc{Vertex Triangle~$s$-Club} with~$\ell=1$ and~$s\ge 4$ to which Reduction Rule~\ref{rr-1-vertex-triangle-s-club-remove-vertices-in-no-triangle} is applied.
Then,~$(G,k)$ is a yes-instance if~$|N_{\lfloor s/2\rfloor-1}[v]|\ge k$ for some vertex~$v\in V(G)$.
\end{lemma}
\begin{proof}
Let~$v\in V(G)$ be a vertex such that~$|N_{\lfloor s/2\rfloor-1}[v]|\ge k$.
We construct a vertex-$1$-triangle~$s$-club~$T$ of size at least~$|N_{\lfloor s/2\rfloor-1}[v]|\ge k$.
Initially, we set~$T\coloneqq N_{\lfloor s/2\rfloor-1}[v]$.
Now, for each vertex~$w\in N_{\lfloor s/2\rfloor-1}(v)$ we do the following:
Since Reduction Rule~\ref{rr-1-vertex-triangle-s-club-remove-vertices-in-no-triangle} is applied, we conclude that there exist two vertices~$x$ and~$y$ such that~$G[\{ w,x,y\}]$ is a triangle.
We add~$x$ and~$y$ to the set~$T$.
We call the set of vertices added in this step the \emph{$T$-expansion}.
Next, we show that~$T$ is indeed a vertex-$1$-triangle~$s$-club for~$s\ge 4$.
Observe that each vertex in~$T$ is either in~$N_{\lfloor s/2\rfloor-1}[v]$ or a neighbor of a vertex in~$N_{\lfloor s/2\rfloor-1}(v)$.
Hence, each vertex in~$T$ has distance at most~$\lfloor s/2\rfloor$ to vertex~$v$.
Thus,~$T$ is an~$s$-club.
It remains to show that each vertex of~$T$ is in a triangle.
Observe that for each vertex~$w\in N_{\lfloor s/2\rfloor-2}[v]$ we have~$N(w)\subseteq N_{\lfloor s/2\rfloor-1}[v]$.
Recall that since Reduction Rule~\ref{rr-1-vertex-triangle-s-club-remove-vertices-in-no-triangle} is applied, each vertex in~$G$ is contained in a triangle.
Thus, each vertex of~$N_{\lfloor s/2\rfloor-2}[v]$ is contained in a triangle in~$T$.
Furthermore, all vertices in~$N_{\lfloor s/2\rfloor-1}(v)\cup (T\setminus N_{\lfloor s/2\rfloor-1}[v])$ are in a triangle because of the~$T$-expansion.
Since~$|T|\ge |N_{\lfloor s/2\rfloor-1}[v]|\ge k$, the statement follows.\qed
\end{proof}
Next, we show that Lemma~\ref{lem-1-vertex-triangle-s-club-neighborhood} implies the existence of a Turing kernel for~$s\ge 4$.
We do this by showing that~$N_s[v]$ is bounded for every vertex in the graph.
This in turn implies that the problem is fixed-parameter tractable.
It is sufficient to bound the size of~$N_s[v]$ for each~$v\in V(G)$: any solution containing~$v$ is a subset of~$N_s[v]$, so we can query the oracle on~$G[N_s[v]]$ for each vertex~$v$ for a solution of size~$k$.
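The resulting Turing kernelization can be organized as the following loop (a sketch only; the oracle \texttt{solve\_small\_instance} for bounded-size instances is an assumption of this illustration, and \texttt{closed\_i\_neighborhood} is the BFS helper from the sketch in the preliminaries).
\begin{verbatim}
def turing_kernel_driver(adj, k, s, solve_small_instance):
    # Any solution containing v lies inside N_s[v], so it suffices to query
    # the (assumed) oracle on each induced subgraph G[N_s[v]], whose size is
    # polynomially bounded in k on non-trivial instances.
    for v in adj:
        ball = closed_i_neighborhood(adj, {v}, s)      # N_s[v]
        sub = {u: adj[u] & ball for u in ball}         # induced subgraph
        if solve_small_instance(sub, k):
            return True
    return False
\end{verbatim}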
\begin{theorem}
\label{thm-1-vertex-triangle-s-club-fpt}
\textsc{Vertex Triangle~$s$-Club} for~$\ell=1$ admits a~$k^4$-vertex Turing kernel if~$s=4$ or~$s=7$, a~$k^5$-vertex Turing kernel if~$s=5$, and a~$k^3$-vertex Turing kernel if~$s=6$ or~$s\ge 8$.
\end{theorem}
\begin{proof}
First, we apply Reduction Rule~\ref{rr-1-vertex-triangle-s-club-remove-vertices-in-no-triangle}.
Because of Lemma~\ref{lem-1-vertex-triangle-s-club-neighborhood} we conclude that~$(G,k)$ is a trivial yes-instance if~$|N_{\lfloor s/2\rfloor-1}[v]|\ge k$ for some vertex~$v\in V(G)$.
Thus, in the following we can assume that~$|N_{\lfloor s/2\rfloor-1}[v]|<k$ for each vertex~$v\in V(G)$.
We use this fact to bound the size of~$N_s[v]$ in non-trivial instances.
For~$s\in\{4,5\}$ we have~$\lfloor s/2\rfloor-1=1$.
Hence, from~$|N_{\lfloor s/2\rfloor-1}[v]|<k$ we obtain that the closed neighborhood of each vertex contains fewer than~$k$ vertices.
Since~$N_4[v]\subseteq N_1[N_1[N_1[N_1[v]]]]$ and~$N_5[v]\subseteq N_1[N_1[N_1[N_1[N_1[v]]]]]$, we obtain a~$k^4$-vertex Turing kernel for~$s=4$ and a~$k^5$-vertex Turing kernel for~$s=5$.
Furthermore, if~$s=7$ we have~$\lfloor s/2\rfloor-1=2$.
Thus, we obtain a~$k^4$-vertex Turing kernel for~$s=7$ since~$N_7[v]\subseteq N_8[v]=N_2[N_2[N_2[N_2[v]]]]$.
If~$s=6$ or~$s\ge 8$, then~$\lfloor s/2\rfloor-1\ge \lceil s/3\rceil$.
Observe that~$N_s[v]$ is contained in~$N_{\lceil s/3\rceil}[N_{\lceil s/3\rceil}[N_{\lceil s/3\rceil}[v]]]$.
Thus, we obtain a~$k^3$-vertex Turing kernel for~$s=6$ or~$s\ge 8$.\qed
\end{proof}
Note that~$s\ge4$ is necessary to ensure~$\lfloor s/2\rfloor-1\ge 1$.
In our arguments to obtain a Turing kernel~$\ell=1$ is necessary for the following reason:
if~$\ell\ge 2$, then the remaining vertices of the other triangles of a vertex in the~$T$-expansion may be contained in~$N_{\lfloor s/2\rfloor+1}(v)$ and, thus, adding them will not necessarily give an~$s$-club.
Also note that using~$N_t[v]$ for some~$t<\lfloor s/2\rfloor-1$ does not help:
The remaining vertices of the other triangles of a vertex in the~$T$-expansion may be contained in~$N_{t+1}(v)$.
But now, another~$T$-expansion for the vertices in~$N_{t+1}$ is necessary.
This may lead to a cascade of~$T$-expansions where eventually, we add vertices with distance at least~$s+1$ to~$v$.
Thus, the constructed set is no longer an~$s$-club.
\subsection{Parameterized Hardness}
In the following, we prove W[1]-hardness for \textsc{Vertex Triangle~$s$-Club} parameterized by the solution size~$k$ for all cases not covered by Theorem~\ref{thm-1-vertex-triangle-s-club-fpt}, that is,~$\ell\ge 2$ and~$s\ge 2$, and also for~$\ell=1$ and $s\in\{2,3\}$.
\begin{theorem}
\label{thm-w1-hardness-vertex-variant}
\textsc{Vertex Triangle~$s$-Club} is W[1]-hard for parameter~$k$ if~$\ell\ge 2$, and if~$\ell=1$ and~$s\in\{2,3\}$.
\end{theorem}
For some combinations of~$s$ and~$\ell$ we provide hardness for restricted input graphs.
More precisely, we prove that \textsc{Vertex Triangle~$s$-Club} is W[1]-hard even if each vertex~$v\in V(G)$ is contained in \emph{exactly}~$\ell$ triangles in the input graph.
In other words, the hardness does not depend on the fact that we could choose different triangles.
We provide this hardness for the case~$s\ge 3$ and arbitrary~$\ell$ and also for~$s=2$ when~$\ell=\binom{c-1}{2}$ for some integer~$c$.
We prove the theorem by considering four subcases.
The proofs for the four cases all use a reduction from the W[1]-hard \textsc{Clique} problem.
In these constructions, each vertex~$v$ of the \textsc{Clique} instance is replaced by a vertex gadget~$T^v$ such that every vertex-$\ell$-triangle~$s$-club~$S$ either contains~$T^v$ completely or contains no vertex of~$T^v$.
This property is obtained since each vertex in~$T^v$ will be in exactly $\ell$~triangles and each of these triangles is within~$T^v$.
The idea is that if~$uv\notin E(G)$ then there exists a vertex~$x\in T^u$ and a vertex~$y\in T^v$ such that~$\dist(x,y)\ge s+1$.
\paragraph{Vertex Triangle~$2$-Club.}
First, we handle the case~$s=2$.
\begin{const}
\label{const-l-triangle-2-club}
Let~$(G,k)$ be an instance of \textsc{Clique} and let~$c$ be the smallest number such that~$\binom{c-1}{2}\ge\ell$.
We construct an instance~$(G',c(k+1))$ of \textsc{Vertex Triangle~$2$-Club} as follows.
\begin{itemize}
\item For each vertex~$v\in V(G)$, we add a clique~$T^v\coloneqq \{x_1^v, \ldots , x_c^v\}$ of size~$c$ to~$G'$.
\item For each edge~$vw\in E(G)$, we connect the cliques~$T^v$ and~$T^w$ by adding the edges~$x^v_{2i-1}x^w_{2i}$ and~$x^w_{2i-1}x^v_{2i}$ to~$G'$ for each~$i\in[\lfloor c/2\rfloor]$.
\item Furthermore, we add a clique~$Y\coloneqq \{y_1, \ldots, y_c\}$ of size~$c$ to~$G'$.
\item We also add, for each~$i\in[c]$ and each~$v\in V(G)$, the edge~$x^v_iy_i$ to~$G'$.
\end{itemize}
\end{const}
Note that the clique size~$c$ ensures that each vertex~$x\in V(G')$ is contained in at least~$\binom{c-1}{2}\ge\ell$ triangles in~$G'$.
Furthermore, note that the clique~$Y$ is only necessary when~$c$ is odd to ensure that the vertices~$x^v_c$ and~$x^w_c$ have a common neighbor.
We add the clique~$Y$ in both cases to unify the construction and the correctness proof.
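For concreteness, the following sketch builds the graph~$G'$ of Construction~\ref{const-l-triangle-2-club} from a \textsc{Clique} instance; it is an illustration only, and the tuple encoding of vertices as well as the sentinel label \texttt{'Y'} for the clique~$Y$ are assumptions of the sketch.
\begin{verbatim}
from itertools import combinations
from math import comb

def build_construction_1(V, E, ell):
    c = 3
    while comb(c - 1, 2) < ell:          # smallest c with binom(c-1,2) >= ell
        c += 1
    edges = set()
    for v in list(V) + ['Y']:            # cliques T^v and the clique Y
        edges |= {((v, i), (v, j))
                  for i, j in combinations(range(1, c + 1), 2)}
    for (v, w) in E:                     # connector edges for every edge vw of G
        for i in range(1, c // 2 + 1):
            edges.add(((v, 2 * i - 1), (w, 2 * i)))
            edges.add(((w, 2 * i - 1), (v, 2 * i)))
    for v in V:                          # edges x^v_i y_i
        for i in range(1, c + 1):
            edges.add(((v, i), ('Y', i)))
    return edges, c                      # ask for a solution of size c*(k+1)
\end{verbatim}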
Next, we show that for each vertex gadget~$T^v$ the intersection with each vertex-$\ell$-triangle~$2$-club is either empty or~$T^v$.
\begin{lemma}
\label{lem-l-triangle-2-club-property-empty-or-complete} Let~$S$ be a vertex-$\ell$-triangle~$2$-club in~$G'$. Then,
\begin{itemize}
\item[a)] $S\cap T^v\neq\emptyset\Leftrightarrow T^v\subseteq S$, and
\item[b)] $S'\coloneqq S\cup Y$ is also a vertex-$\ell$-triangle~$2$-club in~$G'$.
\end{itemize}
\end{lemma}
\begin{proof}
First, we show statement~$a)$.
Assume that for some~$v\in V(G)$ there is a vertex~$z\in T^v$ with~$z\in S$, where~$S$ is a vertex-$\ell$-triangle~$2$-club.
Note that~$T^v$ contains all vertices which form a triangle with vertex~$z$.
Since~$c$ is minimal such that~$\binom{c-1}{2}\ge\ell$ and since~$T^v$ is a clique, we conclude that~$T^v\subseteq S$ to fulfill the property that vertex~$z$ is contained in at least~$\ell$ triangles in~$G[S]$.
Thus,~$T^v\subseteq S$.
Second, we show statement~$b)$.
Since each vertex~$y\in Y$ forms triangles only with vertices in~$Y$ and since~$Y$ is a clique of size~$c$, where~$c$ is minimal with~$\binom{c-1}{2}\ge\ell$, we conclude that~$Y\subseteq S^*\Leftrightarrow S^*\cap Y\ne\emptyset$ for each vertex-$\ell$-triangle~$2$-club~$S^*$.
In the following, let~$S$ be a vertex-$\ell$-triangle~$2$-club such that~$Y\cap S=\emptyset$.
From statement~$a)$ we conclude that~$S= \bigcup_{v\in P}T^v$ for some set~$P\subseteq V(G)$.
Next, we show that~$S'\coloneqq S\cup Y$ is also a vertex-$\ell$-triangle~$2$-club.
Since each vertex is contained in a clique of size~$c$ in~$S$, it is in sufficiently many triangles.
Thus, it remains to prove that~$S'$ is a~$2$-club.
Hence, consider some vertex~$x^v_i$ and some vertex~$y_j$ for some~$i,j\in[c]$ and~$v\in P$.
Then,~$y_i$ is a common neighbor of~$x^v_i$ and~$y_j$.
Hence,~$S'$ is a vertex-$\ell$-triangle~$2$-club and thus~$b)$ holds.\qed
\end{proof}
Now, we prove the correctness of Construction~\ref{const-l-triangle-2-club}.
\begin{lemma}
\label{lem-l-triangle-2-club-hardness}
For each~$\ell\in\mathds{N}$, the \textsc{Vertex Triangle~$2$-Club} problem parameterized by~$k$ is W[1]-hard.
\end{lemma}
\begin{proof}
We prove that~$G$ contains a clique of size~$k$ if and only if~$G'$ contains a vertex-$\ell$-triangle~$2$-club of size at least~$c(k+1)$.
Let~$C$ be a clique of size at least~$k$ in~$G$.
We argue that~$S\coloneqq Y\cup\bigcup_{v\in C}T^v$ is a vertex-$\ell$-triangle~$2$-club of size at least~$c(k+1)$ in~$G'$.
Note that~$|Y|=c$ and that~$|T^v|=c$ for each~$v\in C$, so~$|S|\ge c(k+1)$.
Since~$T^v$ is a clique, we conclude that each vertex in~$T^v$ is contained in exactly~$\binom{c-1}{2}\ge\ell$ triangles.
The same is true for each vertex in~$Y$.
Hence, each vertex in~$S$ is contained in at least~$\ell$ triangles.
Thus, it remains to show that~$S$ is a~$2$-club.
Consider the vertices~$x^v_i$ and~$x^w_j$ for~$v,w\in C$,~$i\in[c-1]$, and~$j\in[c]$.
If~$i$ is odd, then~$x^w_{i+1}\in N(x^v_i)\cap N(x^w_j)$.
Otherwise, if~$i$ is even,~$x^w_{i-1}\in N(x^v_i)\cap N(x^w_j)$.
In both cases, we obtain~$\dist(x^v_i,x^w_j)\le 2$.
Next, consider two vertices~$x^v_c$ and~$x^w_c$ in~$S$.
Observe that~$y_c\in N(x^v_c)\cap N(x^w_c)$.
Since~$Y$ is a clique, it remains to consider vertices~$x^v_i$ and~$y_j$ in~$S$ for~$i\in[c]$ and~$j\in[c]$.
Observe that~$x^v_j\in N(y_j)\cap N[x^v_i]$.
Thus,~$G'$ contains a vertex-$\ell$-triangle~$2$-club of size at least~$c(k+1)$.
Conversely, suppose that~$G'$ contains a vertex-$\ell$-triangle~$2$-club~$S$ of size at least~$c(k+1)$.
By Lemma~\ref{lem-l-triangle-2-club-property-empty-or-complete}, we can assume that~$Y\subseteq S$ and for each vertex gadget~$T^v\in G'$ we either have~$T^v\subseteq S$ or~$T^v\cap S=\emptyset$.
Hence,~$S$ contains at least~$k$ cliques of the form~$T^v$.
Assume towards a contradiction that~$S$ contains two cliques~$T^v$ and~$T^w$ such that~$vw\notin E(G)$ and consider the two vertices~$x^v_1\in T^v$ and~$x^w_2\in T^w$.
Note that these vertices always exist since~$c\ge 3$.
Observe that~$N[x^v_1]=T^v\cup\{x^u_2\mid uv\in E(G)\}\cup\{y_1\}$ and~$N[x^w_2]=T^w\cup\{x^u_1\mid uw\in E(G)\}\cup\{y_2\}$.
Thus,~$\dist(x^v_1,x^w_2)\ge 3$, a contradiction.
Hence, for each two distinct vertex gadgets~$T^v$ and~$T^w$ that are contained in~$S$, we observe that~$vw\in E(G)$.
Consequently, the set~$\{v \mid T^v\subseteq S\}$ is a clique of size at least~$k$ in~$G$.\qed
\end{proof}
If~$\ell=\binom{c-1}{2}$ for some integer~$c$, then Lemma~\ref{lem-l-triangle-2-club-hardness} also holds for the restriction that each vertex is contained in exactly~$\ell$ triangles in the input graph~$G'$.
\paragraph{Vertex Triangle~$s$-Club for~$s=3$ and for~$\ell\ge 2$ and~$s\ge 4$.}
Now, we provide hardness for the remaining cases.
We consider three subcases.
Case~$1$ deals with odd~$s$.
Case~$2$ covers the case that~$s$ is even and~$\ell\ge 3$.
Case~$3$ deals with the case that~$s$ is even and~$\ell=2$.
All three cases use the same vertex gadget.
Only the edges between these gadgets, called \emph{connector edges}, differ.
The idea is to construct the vertex gadgets~$T^v$ in such a way that there are pairs of vertices in~$T^v$ at distance~$2s^*$, which is almost~$s$.
Thus, in a vertex-$\ell$-triangle~$s$-club, the distance between two different vertex gadgets must be small.
The first part of the following construction describes the vertex gadget which is used in all three cases.
For an illustration of this construction see Fig.~\ref{fig-visulation-vertex-triangle}.
\begin{figure}
\caption{$a)$ The vertex gadgets~$T^v$ and~$T^w$ for~$s\in\{7,8\}$.}
\label{fig-visulation-vertex-triangle}
\end{figure}
\begin{const}
\label{const-l-triangle-3-or-4-club}
We set~$s^*\coloneqq \lfloor (s-1)/2\rfloor$.
Let~$(G,k)$ be an instance of \textsc{Clique}.
We construct an equivalent instance~$(G',3\ell ks^*)$ of \textsc{Vertex Triangle~$s$-Club}.
Recall that~$\ell\ge 1$ and~$s=3$, or~$\ell\ge 2$ and~$s\ge 4$.
For each vertex~$v\in V(G)$ we construct a vertex gadget~$T^v$.
This vertex gadget is used for each reduction of the three cases.
\begin{itemize}
\item We add the vertices~$p^v_i$ and~$q^v_i$ and the edge~$p^v_iq^v_i$ for each~$i\in[\ell]$ to~$G'$.
\item We add a vertex~$x^v_{j,i}$ for each~$i\in[\ell]$ and each~$j\in[s^*]$ to~$G'$.
\item We add an edge~$y^v_{t,i}z^v_{t,i}$ for each~$i\in[\ell]$ and each~$t\in[s^*-1]$ to~$G'$.
Note that these vertices only exist if~$s\ge 5$.
\end{itemize}
The vertices~$x^v_{j,i}$,~$y^v_{t,i}$, and~$z^v_{t,i}$ are referred to as the \emph{cascading vertices}.
They are used to ensure that all vertices in~$T^v$ are in exactly~$\ell$ triangles and that there are vertex pairs of distance~$2s^*$ within~$T^v$.
Note that since~$s\ge 3$ we create at least~$\ell$ many~$x$-vertices.
We connect these vertices as follows:
\begin{itemize}
\item We add the edges~$p^v_ix^v_{1,j}$ and~$q^v_ix^v_{1,j}$ for each~$i\in[\ell-1]$ and each~$j\in[\ell]$ to~$G'$.
\item We add the edges~$p^v_\ell x^v_{s^*,j}$ and~$q^v_\ell x^v_{s^*,j}$ for each~$j\in[\ell]$ to~$G'$.
\item We add the edges~$y^v_{t,i}x^v_{t,i}$ and~$z^v_{t,i}x^v_{t,i}$ for each~$i\in[\ell]$, and each~$t\in[s^*-1]$ to~$G'$.
\item We add the edges~$y^v_{t,i}x^v_{t+1,j}$ and~$z^v_{t,i}x^v_{t+1,j}$ for each~$i\in[\ell]$, each~$j\in[\ell]\setminus\{i\}$, and each~$t\in[s^*-1]$ to~$G'$.
\end{itemize}
Note that if~$s=3$ or~$s=4$, the gadget~$T^v$ is a non-induced biclique where one partite set consists of the vertices~$\{x^v_{1,j}\mid j\in[\ell]\}$ and the other partite set consists of the vertices~$\{p^v_i,q^v_i\mid i\in[\ell]\}$.
Furthermore, the only additional edges are the edges~$p^v_iq^v_i$ for each~$i\in[\ell]$.
Also, observe that each vertex gadget~$T^v$ consists of exactly~$3\ell s^*$ vertices.
From now on, the construction for the three cases differs.
We now connect these vertex gadgets by introducing the \emph{connector edges}:
For each edge~$vw\in E(G)$ we add edges between the vertex gadgets~$T^v$ and~$T^w$ of the corresponding vertices.
We distinguish the three cases.
\begin{enumerate}[label=\textbf{Case \Roman*:},leftmargin=*]
\item \textbf{$s$ is odd.} We add the edges~$p^v_iq^w_i$ and~$q^v_ip^w_i$ for each~$i\in[\ell]$ to~$G'$.
\item \textbf{$s$ is even and~$\ell\ge 3$.} We add the edges~$p^v_1q^w_1$,~$q^v_1p^w_1$,~$p^v_\ell q^w_\ell$, and~$q^v_\ell p^w_\ell$ to~$G'$.
\item \textbf{$s$ is even and~$\ell=2$.} We add the edges~$p^v_1x^w_{s^*,1}$ and~$p^w_1x^v_{s^*,1}$ to~$G'$.
\end{enumerate}
\end{const}
We make the following observation about the connector edges between different vertex gadgets:
If~$s$ is odd (Case~I), or if~$s$ is even and~$\ell\ge 3$ (Case~II), we have~$N(p^v_i)\setminus T^v\subseteq\{q^w_i\mid vw\in E(G)\}$ and also~$N(q^v_i)\setminus T^v\subseteq\{p^w_i\mid vw\in E(G)\}$ for each~$v\in V(G)$ and each~$i\in[\ell]$.
Otherwise, if~$s$ is even and~$\ell=2$ (Case~III), observe that we have~$N(p^v_1)\setminus T^v=\{x^w_{s^*,1}\mid vw\in E(G)\}$ and also~$N(x^v_{s^*,1})\setminus T^v=\{p^w_{1}\mid vw\in E(G)\}$ for each~$v\in V(G)$.
Since only the connector edges have endpoints in different vertex gadgets, we thus observe the following.
\begin{observation}
\label{obs-l-vertex-triangle-s-club-triangles-in-vertex-gadgets}
All three vertices of each triangle in~$G'$ are contained in the same vertex gadget.
\end{observation}
Next, we show that each vertex in a vertex gadget~$T^v$ is contained in exactly~$\ell$ triangles.
Together with Observation~\ref{obs-l-vertex-triangle-s-club-triangles-in-vertex-gadgets} this implies that each vertex in~$G'$ is contained in exactly~$\ell$ triangles.
\begin{lemma}
\label{lem-l-vertex-triangle-s-club-each-vertex-in-l-triangles}
Let~$T^v$ be a vertex gadget.
Each vertex in~$T^v$ is contained in exactly~$\ell$ triangles.
\end{lemma}
\begin{proof}
We make a case distinction, that is, for each vertex~$a\in T^v$ we present exactly~$\ell$ triangles containing vertex~$a$.
We distinguish the different vertices of a vertex gadget.
\textbf{Case 1:} Consider vertex~$p^v_d$ for some~$d\in[\ell-1]$.
Because of Observation~\ref{obs-l-vertex-triangle-s-club-triangles-in-vertex-gadgets} we only have to consider the neighbors of~$p^v_d$ in~$T^v$.
By construction we have~$N(p^v_d)\cap T^v=\{q^v_d\}\cup\{x^v_{1,j}\mid j\in[\ell]\}$.
Observe that the only edges within~$N(p^v_d)\cap T^v$ are the edges~$q^v_dx^v_{1,j}$ for each~$j\in[\ell]$.
Thus,~$p^v_d$ is contained in the~$\ell$ triangles~$\{\{p^v_d,q^v_d,x^v_{1,i}\}\mid i\in[\ell]\}$.
By similar arguments the same is true for vertex~$q^v_d$.
\textbf{Case 2:} Consider vertex~$p^v_\ell$.
By construction we have~$N(p^v_\ell)= \{q^v_\ell\}\cup\{x^v_{s^*,i}\mid i\in[\ell]\}\cup\{q^w_\ell\mid vw\in E(G)\}$ if~$s$ is odd, or if~$s$ is even and~$\ell\ge 3$.
If~$s$ is even and~$\ell=2$ we have~$N(p^v_\ell)=\{q^v_\ell\}\cup\{x^v_{s^*,i}\mid i\in[\ell]\}$.
Observe that the only edges within~$N(p^v_\ell)$ are the edges~$q^v_\ell x^v_{s^*,i}$ for each~$i\in[\ell]$.
Thus,~$p^v_\ell$ is contained in the~$\ell$ triangles~$\{\{p^v_\ell,q^v_\ell,x^v_{s^*,i}\}\mid i\in[\ell]\}$.
By similar arguments the same is true for vertex~$q^v_\ell$.
\textbf{Case 3:} Now, consider vertices~$x^v_{1,i}$ and~$x^v_{s^*,i}$ for some~$i\in[\ell]$.
Here we have to distinguish if~$s\in\{3,4\}$ or~$s\ge 5$ since~$y^v_{t,i}$ exists only in the second case.
First, consider the case~$s=3$ or~$s=4$.
Note that we now have~$x^v_{1,i}=x^v_{s^*,i}$.
By construction we have~$N(x^v_{1,i})=\{p^v_j\mid j\in[\ell]\}\cup\{q^v_j\mid j\in[\ell]\}$.
Note that the only edges within~$N(x^v_{1,i})$ have the form~$p^v_jq^v_j$ for each~$j\in[\ell]$.
Thus,~$x^v_{1,i}$ is contained in the~$\ell$ triangles~$\{\{x^v_{1,i},p^v_j,q^v_j\}\mid j\in[\ell]\}$.
Second, consider the case~$s\ge 5$.
First, we investigate vertex~$x^v_{1,i}$.
By construction we have~$N(x^v_{1,i})=\{p^v_j\mid j\in[\ell-1]\}\cup\{q^v_j\mid j\in[\ell-1]\}\cup\{y^v_{1,i},z^v_{1,i}\}$.
The only edges within~$N(x^v_{1,i})$ are the edge~$p^v_jq^v_j$ for each~$j\in[\ell-1]$ and the edge~$y^v_{1,i}z^v_{1,i}$.
Thus,~$x^v_{1,i}$ is contained in the~$\ell-1$ triangles~$\{\{x^v_{1,i},p^v_j,q^v_j\}\mid j\in[\ell-1]\}$, and in the triangle~$\{x^v_{1,i},y^v_{1,i},z^v_{1,i}\}$.
Now, consider vertex~$x^v_{s^*,i}$.
Because of Observation~\ref{obs-l-vertex-triangle-s-club-triangles-in-vertex-gadgets} we only have to consider the neighbors of~$x^v_{s^*,i}$ in~$T^v$.
By construction we observe that~$N(x^v_{s^*,i})\cap T^v\subseteq \{p^v_\ell,q^v_\ell\}\cup\{y^v_{s^*-1,j},z^v_{s^*-1,j}\mid j\in[\ell]\setminus\{i\}\}$.
Note that the only edges within~$N(x^v_{s^*,i})$ are the edge~$p^v_\ell q^v_\ell$ and the edge~$y^v_{s^*-1,j}z^v_{s^*-1,j}$ for each~$j\in[\ell]\setminus\{i\}$.
Thus,~$x^v_{s^*,i}$ is contained in the triangle~$\{x^v_{s^*,i},p^v_\ell,q^v_\ell\}$ and in the~$\ell-1$ triangles $\{\{x^v_{s^*,i},y^v_{s^*-1,j},z^v_{s^*-1,j}\}\mid j\in[\ell]\setminus\{i\}\}$.
\textbf{Case 4:} Consider vertex~$x^v_{r,i}$ for some~$i\in[\ell]$ and some~$r\in[2,s^*-1]$.
Recall that these vertices only exist if~$s\ge 7$.
By construction~$N(x^v_{r,i})=\{y^v_{r,i},z^v_{r,i}\}\cup\{y^v_{r-1,j},z^v_{r-1,j}\mid j\in[\ell]\setminus\{i\}\}$.
The only edges within~$N(x^v_{r,i})$ have the form~$y^v_{r,i}z^v_{r,i}$ and~$y^v_{r-1,j}z^v_{r-1,j}$ for each~$j\in[\ell]\setminus\{i\}$.
Thus,~$x^v_{r,i}$ is contained in the triangle~$\{x^v_{r,i},y^v_{r,i},z^v_{r,i}\}$ and in the~$\ell-1$ triangles~$\{\{x^v_{r,i},y^v_{r-1,j},z^v_{r-1,j}\}\mid j\in[\ell]\setminus\{i\}\}$.
\textbf{Case 5:} Finally, consider vertex~$y^v_{t,i}$ for some~$i\in[\ell]$ and some~$t\in[s^*-1]$.
Recall that these vertices only exist if~$s\ge 5$.
By construction we have~$N(y^v_{t,i})=\{x^v_{t,i},z^v_{t,i}\}\cup\{x^v_{t+1,j}\mid j\in[\ell]\setminus\{i\}\}$.
The only edges within~$N(y^v_{t,i})$ are the edge~$x^v_{t,i}z^v_{t,i}$ and the edges~$x^v_{t+1,j}z^v_{t,i}$ for each~$j\in[\ell]\setminus\{i\}$.
Thus,~$y^v_{t,i}$ is contained in the triangle~$\{y^v_{t,i},z^v_{t,i},x^v_{t,i}\}$ and in the~$\ell-1$ triangles~$\{\{y^v_{t,i},z^v_{t,i}, x^v_{t+1,j}\}\mid j\in[\ell]\setminus\{i\}\}$.
By similar arguments the same holds for vertex~$z^v_{t,i}$.
Thus, each vertex in~$T^v$ is contained in exactly~$\ell$ triangles.\qed
\end{proof}
From Lemma~\ref{lem-l-vertex-triangle-s-club-each-vertex-in-l-triangles} and Observation~\ref{obs-l-vertex-triangle-s-club-triangles-in-vertex-gadgets}, we conclude the following.
\begin{observation}
\label{obs-l-triangle-3-or-4-club-empty-or-complete}
Let~$S$ be a vertex-$\ell$-triangle~$s$-club in~$G'$ for~$s=3$ and~$\ell\ge 1$, or for~$s\ge 4$ and~$\ell\ge 2$.
Then,~$S\cap T^v\neq\emptyset\Leftrightarrow T^v\subseteq S$.
\end{observation}
Now, we prove the correctness of Construction~\ref{const-l-triangle-3-or-4-club}.
\begin{lemma}
\label{lem-l-triangle-3-club-hardness}
For each~$s=3$ and~$\ell\ge 1$, and also for each~$s\ge 4$ and~$\ell\ge 2$ the \textsc{Vertex Triangle~$s$-Club} problem parameterized by~$k$ is W[1]-hard, even if each vertex in the input graph is contained in exactly~$\ell$ triangles.
\end{lemma}
\begin{proof}
We show that~$G$ contains a clique of size at least~$k$ if and only if~$G'$ contains a vertex-$\ell$-triangle~$s$-club of size at least~$3\ell ks^*$.
Let~$K$ be a clique of size at least~$k$ in~$G$.
We argue that~$S\coloneqq \bigcup_{v\in K}T^v$ is a vertex-$\ell$-triangle~$s$-club of size at least~$3\ell ks^*$ in~$G'$.
The size bound follows from the fact that each~$T^v$ consists of exactly~$3\ell s^*$ vertices.
Furthermore, by Lemma~\ref{lem-l-vertex-triangle-s-club-each-vertex-in-l-triangles} each vertex in~$T^v$ for some~$v\in V(G)$ is contained in exactly~$\ell$ triangles in~$G'[T^v]$.
Hence, it remains to show that~$S$ is an~$s$-club.
To do so, we first prove the following two claims.
To formulate the claims, we need some further notation.
We define~$T^v_0\coloneqq \{p^v_1, \ldots, p^v_{\ell-1}\}\cup\{q^v_1, \ldots, q^v_{\ell-1}\}$ and~$T^v_\ell\coloneqq \{p^v_\ell,q^v_\ell\}$.
Recall that if~$\ell=1$, then~$T^v_0=\emptyset$.
Otherwise, both sets are nonempty.
The first claim bounds distances within a single vertex gadget.
\begin{claim}
\label{claim-distances-in-tv}
For~$\ell\ge 2$, we have~$\dist_{G'}(u,a)+\dist_{G'}(u,b)=2s^*$ for each vertex~$a\in T^v_0$, each vertex~$b\in T^v_\ell$, and for each vertex~$u\in T^v\setminus(T^v_0\cup T^v_\ell)$.
\end{claim}
\begin{claimproof}
By~$X^v_1\coloneqq \{x^v_{1,j}\mid j\in[\ell]\}$ we denote the neighbors of~$T^v_0$.
We define the sets~$X^v_2, \ldots, X^v_{s^*}$ and~$Y^v_1, Z^v_1, \ldots, Y^v_{s^*-1},Z^v_{s^*-1}$ analogously.
We first make some observations about the neighborhoods of these vertex sets, which then allow us to obtain upper and lower bounds on the lengths of the relevant paths.
\begin{itemize}
\item $T^v_0\cup\{y^v_{1,i},z^v_{1,i}\}\subseteq N(x^v_{1,i})$ for each~$i\in[\ell]$.
Hence~$N(X^v_1)=T^v_0\cup Y^v_1\cup Z^v_1$.
\item $T^v_\ell\cup\{y^v_{s^*-1,j},z^v_{s^*-1,j}\}\subseteq N(x^v_{s^*,i})$ for each~$i\in[\ell]$ and each~$j\in[\ell]\setminus\{i\}$.
Hence,~$N(X^v_{s^*})=T^v_\ell\cup Y^v_{s^*-1}\cup Z^v_{s^*-1}$.
\item $\{y^v_{t,i},y^v_{t-1,j},z^v_{t,i},z^v_{t-1,j}\}\subseteq N(x^v_{t,i})$ for each~$t\in[2,s^*-1]$, each~$i\in[\ell]$ and each~$j\in[\ell]\setminus\{i\}$.
Hence,~$N(X^v_t)=Y^v_t\cup Z^v_t\cup Y^v_{t-1}\cup Z^v_{t-1}$ for~$t\in[2,s^*-1]$.
\item $x^v_{t,i}, x^v_{t+1,j},z^v_{t,i}\in N(y^v_{t,i})$ and also~$x^v_{t,i}, x^v_{t+1,j},y^v_{t,i}\in N(z^v_{t,i})$ for each~$t\in[s^*-1]$, each~$i\in[\ell]$ and each~$j\in[\ell]\setminus\{i\}$.
Hence,~$N(Y^v_t)=X^v_t\cup Z^v_t\cup Y^v_{t-1}$ and~$N(Z^v_t)=X^v_t\cup Y^v_t\cup Y^v_{t-1}$ for each~$t\in[s^*-1]$.
\end{itemize}
From the above we obtain, for a vertex~$u\in X^v_z$, a path to~$a$ of length~$2z-1$ by using subsequent vertices from~$Y^v_{z-1},X^v_{z-1},\ldots, Y^v_1,X^v_1$.
A similar observation can be made for a path from~$u$ to~$b$.
The case~$u\in Y^v_z\cup Z^v_z$ is treated similarly.
This implies that~$\dist_{G'}(u,a)+\dist_{G'}(u,b)\le 2s^*$ for each vertex~$a\in T^v_0$, each vertex~$b\in T^v_\ell$, and for each vertex~$u\in T^v\setminus(T^v_0\cup T^v_\ell)$.
Furthermore, note that along any path, the first sub-index changes by at most one in every two steps.
Hence~$\dist_{G'}(u,a)+\dist_{G'}(u,b)\ge 2s^*$.
Thus,~$\dist_{G'}(u,a)+\dist_{G'}(u,b)=2s^*$.
\end{claimproof}
Now, we use Claim~\ref{claim-distances-in-tv} to show that~$T^v$ is a~$(2s^*)$-club.
\begin{claim}
\label{claim-tv-is-club}
If~$s$ is odd, then~$T^v$ is an~$(s-1)$-club and if~$s$ is even, then~$T^v$ is an~$(s-2)$-club for each vertex~$v\in V(G)$.
\end{claim}
\begin{claimproof}
Recall that for~$s\in\{3,4\}$ the gadget~$T^v$ is a non-induced biclique.
Hence, the statement is true in this case.
In the following, we assume that~$s\ge 5$.
Note that this implies that~$\ell\ge 2$.
We only consider the case that~$s$ is odd.
The case that~$s$ is even follows analogously.
Consider a pair of vertices~$u,\widetilde{u}$ of~$T^v\setminus(T^v_0\cup T^v_\ell)$.
To bound the distance of~$u$ and~$\widetilde{u}$, we consider one path via a vertex in~$T^v_0$ and one via a vertex in~$T^v_\ell$.
We have $\dist_{G'}(u,\widetilde{u})\le\min(\dist_{G'}(u,a)+\dist_{G'}(a,\widetilde{u}),\dist_{G'}(u,b)+\dist_{G'}(b,\widetilde{u}))$ for each vertex~$a\in T^v_0$, and each vertex~$b\in T^v_\ell$.
From Claim~\ref{claim-distances-in-tv} we know that~$\dist_{G'}(u,a)+\dist_{G'}(u,b)=2s^*$ and that~$\dist_{G'}(\widetilde{u},a)+\dist_{G'}(\widetilde{u},b)=2s^*$.
Hence,~$\dist_{G'}(u,\widetilde{u})\le 2s^*\le s-1$.
Thus,~$T^v$ is indeed an~$(s-1)$-club.
\end{claimproof}
Now, we show that for two vertices~$v$ and~$w$ in~$K$, a vertex~$u\in T^v$, and a vertex~$\widetilde{u}\in T^w$ we have~$\dist_{G'}(u,\widetilde{u})\le s$.
We consider the three cases:
\textbf{Case I:~$s$ is odd.}
Observe that since~$vw\in E(G)$, each vertex~$u\in (T^v_0\cup T^v_\ell)$ has one neighbor in~$T^w$.
Since~$T^w$ is an~$(s-1)$-club by Claim~\ref{claim-tv-is-club}, we obtain that for each vertex~$\widetilde{u}\in T^w$ we have~$\dist_{G'}(u,\widetilde{u})\le s$ if~$u\in T^v_0\cup T^v_\ell$.
Hence, it remains to consider the case that~$u\in T^v\setminus(T^v_0\cup T^v_\ell)$ and that~$\widetilde{u}\in T^w\setminus(T^w_0\cup T^w_\ell)$.
For this, let~$u^v_1\coloneqq \dist_{G'}(u,p^v_1), \widetilde{u}^w_1\coloneqq \dist_{G'}(q^w_1,\widetilde{u}), u^v_\ell\coloneqq \dist_{G'}(u,p^v_\ell)$, and~$\widetilde{u}^w_\ell\coloneqq \dist_{G'}(q^w_\ell,\widetilde{u})$.
Note that
\begin{align*}
\dist_{G'}(u,\widetilde{u}) &\le\min(u^v_1+1+\widetilde{u}^w_1,u^v_\ell+1+\widetilde{u}^w_\ell)\\
&=1+\min(u^v_1+\widetilde{u}^w_1,u^v_\ell+\widetilde{u}^w_\ell).
\end{align*}
In this inequality, the '$+1$' is the result of the fact that we have to use an edge to get from gadget~$T^v$ to gadget~$T^w$.
By Claim~\ref{claim-distances-in-tv} we know that~$u^v_1+u^v_\ell= 2s^*$ and that~$\widetilde{u}^w_1+\widetilde{u}^w_\ell= 2s^*$.
Hence,~$\min(u^v_1+\widetilde{u}^w_1,u^v_\ell+\widetilde{u}^w_\ell)\le 2s^*$ and, since~$s$ is odd, we obtain that~$\dist_{G'}(u,\widetilde{u})\le 1 + 2s^*=s$.
Thus,~$S$ is a vertex-$\ell$-triangle~$s$-club of size~$3\ell ks^*$.
\textbf{Case II:~$s$ is even and~$\ell\ge 3$.} For each vertex pair~$u\in T^v\setminus T^v_0$ and~$\widetilde{u}\in T^w\setminus T^w_0$ the proof of~$\dist_{G'}(u,\widetilde{u})\le s$ is analogous to the proof in Case~I handling odd values of~$s$.
Hence, it remains to show that each vertex~$u\in T^v_0$ has distance at most~$s$ to each vertex~$\widetilde{u}\in T^w$.
Observe that~$\dist_{G'}(u,p^v_1)\le 2$ since~$T^v_0\subseteq N(x^v_{1,1})$ and~$x^v_{1,1}$ is a neighbor of~$p^v_1$.
Thus,~$\dist_{G'}(u,q^w_1)\le 3$ since~$vw\in E(G)$.
Furthermore, for each vertex~$\widetilde{u}\in T^w\setminus T^w_\ell$ we have~$\dist_{G'}(\widetilde{u},q^w_1)\le s-3$ by the proof of Claim~\ref{claim-tv-is-club}.
Hence,~$\dist(u,\widetilde{u})\le s$.
Thus, it remains to consider the case that~$u\in T^v_0$ and that~$\widetilde{u}\in T^w_\ell=\{p^w_\ell,q^w_\ell\}$.
Since~$vw\in E(G)$ we conclude that~$\widetilde{u}$ has a neighbor in~$T^v$.
Since~$T^v$ is an~$(s-2)$-club by Claim~\ref{claim-tv-is-club} we conclude that~$\dist_{G'}(u,\widetilde{u})\le s-1$.
Hence,~$S$ is a vertex-$\ell$-triangle~$s$-club of size~$3\ell ks^*$.
\textbf{Case III:~$s$ is even and~$\ell=2$.}
Let~$x^w\coloneqq x_{s^*,1}^w$ and~$x^v\coloneqq x_{s^*,1}^v$.
Furthermore, let~$u^v_1\coloneqq \dist_{G'}(u,p^v_1), \widetilde{u}^w_x\coloneqq \dist_{G'}(x^w,\widetilde{u}), u^v_x\coloneqq \dist_{G'}(u,x^v)$, and~$\widetilde{u}^w_1\coloneqq \dist_{G'}(p^w_1,\widetilde{u})$.
We have
\begin{align*}
\dist_{G'}(u,\widetilde{u})&\le \min(u^v_1+1+\widetilde{u}^w_x,u^v_x+1+\widetilde{u}^w_1)\\
&=1+\min(u^v_1+\widetilde{u}^w_x,u^v_x+\widetilde{u}^w_1).
\end{align*}
Again, the '$+1$' results from the fact that we have to use an edge to go from gadget~$T^v$ to gadget~$T^w$.
Consider the following claim.
\begin{claim}
\label{claim-s-even-distances}
For each vertex~$u\in T^v$ we have~$u^v_1+u^v_x\le s-1$.
\end{claim}
Claim~\ref{claim-s-even-distances} directly implies that $$\min(u^v_1+\widetilde{u}^w_x,u^v_x+\widetilde{u}^w_1)\le s-1$$ and thus~$\dist_{G'}(u,\widetilde{u})\le s$.
This in turn implies that ~$S$ is a vertex-$\ell$-triangle~$s$-club of size~$3\ell ks^*$.
Thus, it remains to prove Claim~\ref{claim-s-even-distances}.
\begin{claimproof}
Since~$T^v$ is an~$(s-2)$-club by Claim~\ref{claim-tv-is-club}, each vertex~$u\in (N[p^v_1]\cup N[x^v_{s^*,1}])\cap T^v$ satisfies~$u^v_1+u^v_x\le 1+(s-2)=s-1$.
Hence, it remains to show the claim for each vertex~$u\in T^v\setminus(N[p^v_1]\cup N[x^v_{s^*,1}])$.
If~$s=4$, then~$T^v\subseteq (N[p^v_1]\cup N[x^v_{s^*,1}])$ and hence the claim is proven.
Hence, in the following, we consider the case~$s\ge 6$.
Next, we consider the case that~$u\in T^v_0$.
Since~$T^v_0\subseteq N(x^v_{1,1})$ we conclude that~$u^v_1\le 2$.
By the proof of Claim~\ref{claim-tv-is-club} we conclude that~$u^v_x\le s-3$.
Hence, Claim~\ref{claim-s-even-distances} is true in this case.
For all remaining cases, it is sufficient to prove the claim for each vertex~$u\in T^v\setminus(T^v_0\cup T^v_\ell)$.
First, we consider the case that~$u\coloneqq x^v_{t,i}$ for some~$t\in[s^*]$ and some~$i\in[\ell]$.
Then we have~$\dist_{G'}(x^v_{t,i},p^v_1)\le 2t-1$ and~$\dist_{G'}(x^v_{t,i},x^v_{s^*,1})\le 2(s^*-t)+2$, and hence~$\dist_{G'}(x^v_{t,i},p^v_1)+\dist_{G'}(x^v_{t,i},x^v_{s^*,1})\le 2t-1+2(s^*-t)+2=2s^*+1\le s-1$.
Second, we consider the case that~$u\coloneqq y^v_{t,i}$ for some~$t\in[s^*-1]$ and some~$i\in[\ell]$.
Then we have~$\dist_{G'}(y^v_{t,i},p^v_1)\le 2t$ and~$\dist_{G'}(y^v_{t,i},x^v_{s^*,1})\le 2(s^*-t)+1$, and hence we obtain that~$\dist_{G'}(y^v_{t,i},p^v_1)+\dist_{G'}(y^v_{t,i},x^v_{s^*,1})\le 2t+2(s^*-t)+1=2s^*+1\le s-1$.
The case that~$u\coloneqq z^v_{t,i}$ follows by the same argumentation.
\end{claimproof}
Conversely, suppose that~$G'$ contains a vertex-$\ell$-triangle~$s$-club~$S$ of size at least~$3\ell ks^*$.
Because of Observation~\ref{obs-l-triangle-3-or-4-club-empty-or-complete} for each vertex-gadget~$T^v$ we either have $T^v\subseteq S$ or~$T^v\cap S=\emptyset$.
Hence,~$S$ contains at least~$k$ vertex gadgets.
We assume towards a contradiction that~$S$ contains two vertex gadgets~$T^v$ and~$T^w$ such that~$vw\notin E(G)$.
In each case, we will determine a vertex~$u_v\in T^v$ and a vertex~$u_w\in T^w$ such that~$\dist_{G'}(u_v,u_w)\ge s+1$.
This contradiction to the $s$-club property allows us to conclude that the set~$\{v \mid T^v\subseteq S\}$ is a clique of size at least~$k$ in~$G$.
\textbf{Case I:~$s$ is odd.}
We define vertex~$u_v$ as follows.
The vertex~$u_w$ is defined analogously.
Recall that in this case we have~$s^*=(s-1)/2$.
\begin{itemize}
\item If~$s\equiv 3\mod 4$, we set~$u_v\coloneqq x^v_{(s+1)/4,1}$.
\item Otherwise, if~$s\equiv 1\mod 4$, we set~$u_v\coloneqq y^v_{(s-1)/4,1}$.
\end{itemize}
Observe that for each vertex~$u\in T^v_0\cup T^v_\ell$ we have~$\dist_{G'}(u_v,u)=(s-1)/2$.
Furthermore, recall that the vertices in~$T^v_0\cup T^v_\ell$ are the only vertices in~$T^v$ with neighbors in other vertex gadgets.
Similarly, for each vertex~$u'\in T^w_0\cup T^w_\ell$ we have~$\dist_{G'}(u_w,u')=(s-1)/2$.
But since~$vw\notin E(G)$, there are no edges between~$T^v$ and~$T^w$, and hence each path from~$u_v$ to~$u_w$ must leave~$T^v$ at some vertex~$u\in T^v_0\cup T^v_\ell$, use at least two further edges outside the two gadgets, and enter~$T^w$ at some vertex~$u'\in T^w_0\cup T^w_\ell$. Thus,~$\dist_{G'}(u_v,u_w)\ge\dist_{G'}(u_v,u)+2+\dist_{G'}(u',u_w)=(s-1)/2+2+(s-1)/2=s+1$, a contradiction.
\textbf{Case II:~$s$ is even and~$\ell\ge 3$.}
We set~$u_v\coloneqq p^v_2$ and~$u_w\coloneqq x^w_{s^*,1}$.
Note that~$u_v\in T^v_0$ since~$\ell\ge 3$.
By the construction we obtain that for each vertex~$u_0\in\{p^v_1,q^v_1\}$ we have~$\dist_{G'}(u_v,u_0)=2$ since~$T^v_0\subseteq N(x^v_{1,1})$.
Furthermore, for each vertex~$u_\ell\in\{p^v_\ell, q^v_\ell\}$ we have~$\dist_{G'}(u_v,u_\ell)=s-2$.
Next, observe that for each vertex~$u'_0\in\{p^w_1, q^w_1\}$ we have~$\dist_{G'}(u_w,u'_0)=s-3$ and for each vertex~$u'_\ell\in\{p^w_\ell, q^w_\ell\}$ we have~$\dist(u_w,u'_\ell)=1$.
Since~$vw\notin E(G)$, there are no edges between the gadgets~$T^v$ and~$T^w$.
Hence, $\dist(u_v,u_w)\ge\min(\dist(u_v, u_0)+2+\dist_{G'}(u'_0,u_w),\dist_{G'}(u_v, u_\ell)+2+\dist_{G'}(u'_\ell,u_w))$ for each vertex~$u_0\in\{p^v_1, q^v_1\}$, each~$u'_0\in\{p^w_1,q^w_1\}$, each~$u_\ell\in T^v_\ell$, and each~$u'_\ell\in T^w_\ell$.
By the above argumentation we obtain~$\dist_{G'}(u_v,u_w)\ge\min(2+2+s-3,s-2+2+1)=s+1$, a contradiction.
\textbf{Case III:~$s$ is even and~$\ell=2$.}
We define the vertices~$u_v$ and~$u_w$ as follows:
\begin{itemize}
\item If~$s=4$, we set~$u_v\coloneqq x^v_{1,2}$ and~$u_w\coloneqq p^w_2$.
\item If~$s\equiv 0\mod 4$ and~$s\ge 8$, we set~$u_v\coloneqq x^v_{s/4,1}$ and~$u_w\coloneqq y^w_{s/4,1}$.
\item If~$s\equiv 2\mod 8$, we set~$u_v\coloneqq y^v_{(s-2)/4,2}$ and~$u_w\coloneqq x^w_{(s+2)/4,1}$.
\item If~$s\equiv 6\mod 8$, we set~$u_v\coloneqq y^v_{(s-2)/4,1}$ and~$u_w\coloneqq x^w_{(s+2)/4,2}$.
\end{itemize}
From the definition of these vertices we obtain that~$\dist_{G'}(u_v,p^v_1)=s/2-1$, $\dist_{G'}(u_v,x^v_{s^*,1})=s/2$,~$\dist_{G'}(u_w,p^w_1)=s/2$, and that~$\dist_{G'}(u_w,x^w_{s^*,1})=s/2-1$.
Recall that the vertices~$p^u_1$ and~$x^u_{s^*,1}$ are the only vertices in~$T^u$ which have neighbors outside~$T^u$ for each~$u\in V(G)$.
Furthermore, observe that all neighbors of~$p^u_1$ which are not contained in~$T^u$ are the vertices~$x^b_{s^*,1}$ where~$ub\in E(G)$.
Similarly, all neighbors of~$x^u_{s^*,1}$ which are not in~$T^u$ are of the form~$p^b_1$ where~$ub\in E(G)$.
We conclude that~$\dist_{G'}(u_v,u_w)\ge \dist_{G'}(u_v,p^v_1)+3+\dist_{G'}(x^w_{s^*,1},u_w)$.
Here, the '$+3$' results from the fact that at least~$3$ edges between vertex gadgets have to be used: one is not sufficient since~$vw\notin E(G)$, and two are also not sufficient since in two such steps one can only reach a vertex~$p^c_1$ for some~$c\in V(G)$ from~$p^v_1$, but no vertex~$x^d_{s^*,1}$ for~$d\in V(G)$.
Hence,~$\dist_{G'}(u_v,u_w)\ge s/2-1+3+s/2-1=s+1$, a contradiction.\qed
\end{proof}
\section{Edge Triangle~\texorpdfstring{$s$}{s}-Club}
In this section we settle the parameterized complexity of \textsc{Edge Triangle~$s$-Club} with respect to the solution size~$k$.
Recall that a vertex set~$S$ is an edge-$\ell$-triangle~$s$-club if~$G[S]$ contains a spanning subgraph~$G'=(S,E')$ such that each edge in~$E(G')$ is contained in at least~$\ell$ triangles within~$G'$ and the diameter of~$G'$ is at most~$s$.
First, we show that \textsc{Edge Triangle~$s$-Club} is FPT with respect to~$k$ when~$\ell=1$ irrespective of the value of~$s$ by providing a Turing kernel.
To show this, it is sufficient to delete edges which are not part of a triangle.
Afterwards, we prove W[1]-hardness of \textsc{Edge Triangle~$s$-Club} with respect to~$k$ for all fixed~$\ell\ge 2$.
\subsection{Edge Triangle~\texorpdfstring{$s$}{s}-Club with~\texorpdfstring{$\ell=1$}{\ell=1}}
Now, we prove that \textsc{Edge Triangle~$s$-Club} for~$\ell=1$ admits a Turing kernel with respect to~$k$ implying that the problem is FPT.
To obtain the kernel we need the following reduction rule which removes edges which are in no triangle.
\begin{rrule}
\label{rr-1-edge-triangle-s-club-remove-edges-in-no-triangle}
Let~$(G,k)$ be an instance of \textsc{Edge Triangle~$s$-Club}.
Delete all edges from~$G$ which are not part of any triangle.
\end{rrule}
It is clear that Reduction Rule~\ref{rr-1-edge-triangle-s-club-remove-edges-in-no-triangle} is correct and can be applied in polynomial time.
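A sketch of the exhaustive application is given below (an illustration only; the adjacency-set representation and comparable vertex labels are assumptions of the sketch). An edge lies in a triangle exactly if its endpoints have a common neighbor, and deleting an edge may destroy triangles of other edges, so the rule is iterated until no further edge can be removed.
\begin{verbatim}
def apply_reduction_rule_2(adj):
    """Exhaustively delete edges whose endpoints have no common neighbor."""
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            for v in list(adj[u]):
                if u < v and not (adj[u] & adj[v]):   # uv is in no triangle
                    adj[u].discard(v)
                    adj[v].discard(u)
                    changed = True
    return adj
\end{verbatim}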
The idea is that after Reduction Rule~\ref{rr-1-edge-triangle-s-club-remove-edges-in-no-triangle} is applied, we can bound the size of the neighborhood of each vertex.
Next, we prove that after the application of Reduction Rule~\ref{rr-1-edge-triangle-s-club-remove-edges-in-no-triangle} each edge with both endpoints in the closed neighborhood of a vertex is contained in a triangle within that neighborhood.
\begin{lemma}
\label{lem-1-edge-triangle-2-club-neighborhood}
Let~$(G,k)$ be an instance of \textsc{Edge Triangle~$s$-Club} with~$\ell=1$ to which Reduction Rule~\ref{rr-1-edge-triangle-s-club-remove-edges-in-no-triangle} is applied.
For each vertex~$v\in V(G)$ each edge in~$G[N[v]]$ is contained in at least one triangle in~$G[N[v]]$.
\end{lemma}
\begin{proof}
First, we consider edges of the form~$uv$ where~$u$ is a neighbor of~$v$.
Since Reduction Rule~\ref{rr-1-edge-triangle-s-club-remove-edges-in-no-triangle} is applied, there exists another vertex~$w\in V(G)$ such that $G[\{u,v,w\}]$ is a triangle.
Observe that~$w\in N(v)$.
Second, each edge~$uw$ with~$u,w\in N(v)$ is in a triangle with vertex~$v$.\qed
\end{proof}
Next, we show that any instance such that sufficiently many vertices are close to some vertex~$v$ is a yes-instance.
\begin{lemma}
\label{lem-1-edge-triangle-2-club-neighborhood-s-half}
Let~$(G,k)$ be an instance of \textsc{Edge Triangle~$s$-Club} with~$\ell=1$ to which Reduction Rule~\ref{rr-1-edge-triangle-s-club-remove-edges-in-no-triangle} is applied.
Then,~$(G,k)$ is a yes-instance if~$|N_{\lfloor s/2\rfloor}[v]|\ge k$ for some vertex~$v\in V(G)$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem-1-edge-triangle-2-club-neighborhood} each edge in~$G[N_{\lfloor s/2\rfloor}[v]]$ is contained in at least one triangle since~$N_{\lfloor s/2\rfloor}[v]= \bigcup_{w\in N_{\lfloor s/2\rfloor-1}[v]}N[w]$.
Furthermore, each vertex in~$N_{\lfloor s/2\rfloor}[v]$ has distance at most~$\lfloor s/2\rfloor$ to vertex~$v$.
Hence,~$N_{\lfloor s/2\rfloor}[v]$ is an~$s$-club and by definition~$|N_{\lfloor s/2\rfloor}[v]|\ge k$.\qed
\end{proof}
Lemma~\ref{lem-1-edge-triangle-2-club-neighborhood-s-half} implies a Turing kernel for~$k$ which implies that the problem is fixed-parameter tractable.
\begin{theorem}
\label{thm-1-edge-triangle-s-club-fpt}
\textsc{Edge Triangle~$s$-Club} for~$\ell=1$ admits a~$k^2$-vertex Turing kernel if~$s$ is even and a~$k^3$-vertex Turing kernel if~$s$ is odd and~$s\ge 3$.
\end{theorem}
\begin{proof}
First, we apply Reduction Rule~\ref{rr-1-edge-triangle-s-club-remove-edges-in-no-triangle} exhaustively.
Because of Lemma~\ref{lem-1-edge-triangle-2-club-neighborhood-s-half} we conclude that~$(G,k)$ is a trivial yes-instance, if~$|N_{\lfloor s/2\rfloor}[v]|\ge k$ for some~$v\in V(G)$.
Hence, in the following we can assume that~$k> |N_{\lfloor s/2\rfloor}[v]|$ for each vertex~$v\in V(G)$.
First, we consider the case that~$s$ is even.
Then~$\lfloor s/2\rfloor=s/2$ and we obtain that~$N_s[v]\subseteq N_{s/2}[N_{s/2}[v]]$ for each~$v\in V(G)$.
Thus,~$|N_s[v]|\le k^2$.
Second, we consider the case that~$s$ is odd.
Observe that we have $N_s[v]\subseteq N_{\lfloor s/2\rfloor}[N_{\lfloor s/2\rfloor}[N_{\lfloor s/2\rfloor}[v]]]$ for each~$v\in V(G)$.
Thus,~$|N_s[v]|\le k^3$.\qed
\end{proof}
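To illustrate the bound in the even case, consider~$s=6$, so that~$\lfloor s/2\rfloor=3$: on a non-trivial instance we have~$|N_3[w]|<k$ for every vertex~$w$, and hence
\[
|N_6[v]|\le \sum_{w\in N_3[v]}|N_3[w]| < k\cdot k = k^2,
\]
so the oracle is only queried on induced subgraphs with fewer than~$k^2$ vertices.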
\subsection{Edge Triangle~\texorpdfstring{$s$}{s}-Club for~\texorpdfstring{$\ell\ge 2$}{\ell\ge 2}}
Now we show W[1]-hardness for the remaining cases.
\begin{theorem}
\label{thm-ell-edge-triangle-s-club-w-hard}
\textsc{Edge Triangle~$s$-Club} is W[1]-hard for parameter~$k$ if~$\ell\ge 2$.
\end{theorem}
Next, we describe the construction of the reduction to prove Theorem~\ref{thm-ell-edge-triangle-s-club-w-hard}.
We reduce from \textsc{Clique}.
The idea is to construct one vertex gadget for each vertex of the \textsc{Clique} instance and to add edges between two different vertex gadgets if and only if the two corresponding vertices are adjacent, in such a way that all these edges are in exactly~$\ell$ triangles.
For an illustration of this construction see Fig.~\ref{fig-visulation-edge-triangle}.
\begin{const}
\label{const-ell-triangle-s-club-w-hardness}
Let~$(G,k)$ be an instance of \textsc{Clique} with~$k\ge 3$.
We construct an equivalent instance~$(G',k')$ of \textsc{Edge-Triangle~$s$-Club} for some fixed~$\ell\ge 2$ as follows.
Let~$\ell^*\coloneqq \lceil\ell/2\rceil$ and let~$x\coloneqq 6\cdot \ell^* (s-1)+\lfloor\ell/2\rfloor$.
For each vertex~$v\in V(G)$, we construct the following vertex gadget~$T^v$.
For better readability, all sub-indices of the vertices in~$T^v$ are considered modulo~$x$.
Our construction distinguishes even and odd values of~$\ell$.
First, we describe the part of the construction which both cases have in common.
\begin{enumerate}
\item \label{item-edge-variant-1}
We add vertex sets~$A_v\coloneqq \{a^v_i\mid i\in[0,x]\}$ and~$B_v\coloneqq \{b^v_i\mid i\in[0,x]\}$ to~$G'$.
\item \label{item-edge-variant-2}
We add the edges~$a^v_ia^v_{i+j}$, and~$b^v_ib^v_{i+j}$ for each~$i\in[0,x]$ and each~$j\in[-3\ell^*,3\ell^*]\setminus\{0\}$ to~$G'$.
\item \label{item-edge-variant-3}
We add the edge~$a^v_ib^v_{i+j}$ for each~$i\in[0,x]$ and each~$j\in[-3\ell^*,3\ell^*]$ to~$G'$.
\end{enumerate}
In other words, an edge~$a^v_i b^v_j$ is added if the indices differ by at most~$3\ell^*$.
For even~$\ell$, this completes the construction of~$T^v$.
For odd~$\ell$, we extend~$T^v$ as follows:
\begin{enumerate}[label=0-\arabic*.,leftmargin=*]
\item \label{item-edge-variant-4}
We add the vertex set~$C_v\coloneqq \{c^v_{i}\mid i\in[0,x] \text{ and } i\equiv 0\mod\ell^*\}$ to~$G'$.
Note that~$C_v$ consists of exactly~$6s-5$ vertices.
\item \label{item-edge-variant-5}
We add the edges~$c^v_ia^v_{i+j}$ and~$c^v_ib^v_{i+j}$ for each~$i\in[0,x]$ such that~$i\equiv 0\mod\ell^*$ and each~$j\in[-3\ell^*,3\ell^*]$ to~$G'$.
\item \label{item-edge-variant-6}
Also, we add the edge~$c^v_ic^v_{i+j}$ to~$G'$ for each~$i\in[0,x]$ such that~$i\equiv 0\mod\ell^*$ and each~$j\in[-3\ell^*,3\ell^*]\setminus\{0\}$ to~$G'$ if the corresponding vertex~$c^v_{i+j}$ exists.
\end{enumerate}
In other words, an edge between~$c^v_i$ and~$a^v_j$,~$b^v_j$, or~$c^v_j$ is added if the indices differ by at most~$3\ell^*$.
Now, for each edge~$uv\in E(G)$, we add the following to~$G'$:
\begin{enumerate}[label=0-\arabic*.,leftmargin=*, start=4]
\item \label{item-edge-variant-7}
We add the edges~$a^v_ib^u_{i+j}$ and~$a^u_ib^v_{i+j}$ for each~$i\in[0,x]$ and~$j\in[0,\lfloor \ell/2\rfloor]$.
\item \label{item-edge-variant-8}
If~$\ell$ is odd, we also add the edges~$c^v_ib^u_{i+j}$ and~$c^u_ib^v_{i+j}$ for each~$i\in[0,x]$ such that~$i\equiv 0\mod\ell^*$ and each~$j\in[0,\lfloor \ell/2\rfloor]$ to~$G'$.
Observe that each vertex~$b^u_{i+j}$ is adjacent to \emph{exactly} one vertex in~$C_v$.
\end{enumerate}
In other words, an edge between~$a^v_i$ or~$c^v_i$ and~$b^u_j$ is added if~$j$ exceeds~$i$ by at most~$\lfloor\ell/2\rfloor$.
Finally, if~$\ell$ is even, we set~$k'\coloneqq 2(x+1)k=(\ell(6s-5)+2)\cdot k$, and if~$\ell$ is odd, we set~$k'\coloneqq (2(x+1)+6s-5)k=(\ell+2)(6s-5)\cdot k$.
\end{const}
\begin{figure}
\caption{Construction for Theorem~\ref{thm-ell-edge-triangle-s-club-w-hard}.}
\label{fig-visulation-edge-triangle}
\end{figure}
Construction~\ref{const-ell-triangle-s-club-w-hardness} has two key mechanisms:
First, if~$uv\notin E(G)$ then for each vertex~$a\in A_v$ there is at least one vertex~$b\in B_u$ such that~$\dist(a,b)>s$.
Second, each edge with one endpoint in~$A_v$ and one endpoint in~$B_u$ is contained in \emph{exactly}~$\ell$ triangles.
Furthermore, if~$\ell$ is odd, then this also holds for each edge with one endpoint in~$C_v$ and one in~$B_u$.
Consider an edge-$\ell$-triangle~$s$-club~$S$ and let~$\widetilde{G}=(S,\widetilde{E})$ be a spanning subgraph of~$G[S]$ with the maximum number of edges such that each edge of~$\widetilde{E}$ is contained in at least~$\ell$ triangles in~$\widetilde{G}$ and the diameter of~$\widetilde{G}$ is at most~$s$.
As we will show, the two mechanisms ensure that an edge with one endpoint in~$A_v$ (or~$C_v$) and the other endpoint in~$B_u$ is contained in~$\widetilde{E}$ if and only if~$S$ contains all vertices of~$A_v$ (and~$C_v$) and~$B_u$.
We call this the \emph{enforcement property}.
Next, we formalize this property.
To this end, we introduce the following notation.
By~$E_{uv}$ we denote the set of all edges with one endpoint in~$A_v$ (or~$C_v$ if~$\ell$ is odd) and the other endpoint in~$B_u$.
\begin{lemma}
\label{lem-ell-triangle-s-club-av-also-bu}
Let~$S$ be an edge-$\ell$-triangle~$s$-club in the graph~$G'$ constructed in Construction~\ref{const-ell-triangle-s-club-w-hardness}.
More precisely, let~$\widetilde{G}=(S,\widetilde{E})$ be a maximal subgraph of~$G[S]$ such that each edge in~$E(\widetilde{G})$ is contained in at least~$\ell$ triangles within~$\widetilde{G}$ and the diameter of~$\widetilde{G}$ is at most~$s$.
Let~$e\in E_{uv}$.
Then~$e\in E(\widetilde{G})$ if and only if~$A_v,B_u\subseteq S$ (and~$C_v\subseteq S$, if~$\ell$ is odd).
\end{lemma}
\begin{proof}
Before we show the two implications, we prove the following cascading property of edge-$\ell$-triangle~$s$-clubs which contain at least one edge of~$E_{uv}$.
\begin{claim}
\label{claim-edge-variant-cascading-av-bu}
If~$a^v_ib^u_j\in E(\widetilde{G})$ or~$c^v_ib^u_j\in E(\widetilde{G})$, then~$E_{uv}\subseteq E(\widetilde{G})$.
\end{claim}
\begin{claimproof}
First, we consider even values of~$\ell$.
Note that~$\ell^*=\lfloor\ell/2\rfloor=\ell/2=\lceil\ell/2\rceil$.
By construction we have
$$N[a^v_i]= \{a^v_{i+i'}, b^v_{i+i'}\mid i'\in[-3\ell/2,3\ell/2]\}\cup\{b^w_{i+i'}\mid i'\in[0,\ell/2] \text{ and }vw\in E(G)\},$$
and similar
$$N[b^u_j]=\{a^u_{j+i'},b^u_{j+i'}\mid i'\in[-3\ell/2,3\ell/2]\}\cup\{a^w_{j-i'}\mid i'\in[0,\ell/2] \text{ and }uw\in E(G)\}.$$
Since~$a^v_ib^u_j\in E(G')$ we obtain by Part~\ref{item-edge-variant-7} of Construction~\ref{const-ell-triangle-s-club-w-hardness} that~$j=i+z$ for some~$z\in[0,\ell/2]$.
Let~$y\coloneqq\ell/2-z$ and observe that~$N(a^v_i)\cap N(b^u_j)=\{a^v_{i+i'}\mid i'\in[-y,z]\setminus\{0\}\}\cup\{b^u_{j+i'} \mid i'\in[-z,y]\setminus\{0\}\}$.
Thus, the edge~$a^v_ib^u_j$ is contained in exactly~$\ell$ triangles whose vertex sets are all contained in~$G'[A_v\cup B_u]$.
Since~$a^v_ib^u_j\in E(\widetilde{G})$, we thus conclude that all vertices and edges which form these $\ell$~triangles are contained in~$\widetilde{G}$.
In particular, we obtain that~$a^v_{i+1}b^u_{j+1}\in E(\widetilde{G})$ and thus also that~$a^v_{i+1},b^u_{j+1}\in V(\widetilde{G})$.
Now, since~$a^v_{i+1}b^u_{j+1}\in E(\widetilde{G})$ we can repeat the above argumentation for the edge~$a^v_{i+1}b^u_{j+1}$ and inductively for the edges~$a^v_{i+q}b^u_{j+q}$ for all~$q\in [x]$.
We then have verified that~$A_v\cup B_u\subseteq V(\widetilde{G})$ and that each edge in~$E_{uv}$ is contained in~$E(\widetilde{G})$.
Recall that~$A_v$ and~$B_u$ have size~$x+1$.
Second, we consider odd values of~$\ell$, that is~$\ell=2t+1$ for some integer~$t$.
Note that~$\lfloor\ell/2\rfloor=t$ and that~$\ell^*=\lceil\ell/2\rceil=t+1$.
Furthermore, observe that vertex~$b^u_j$ has exactly one neighbor~$c^v_{j+i'}$ in~$C_v$ for some~$i'\in[0,t]$ such that~$j+i'\mod (t+1)=0$.
Hence, by construction we have
\begin{itemize}
\item $N[a^v_i]=N[c^v_i]=\{a^v_{i+i'},b^v_{i+i'}\mid i'\in[-3(t+1),3(t+1)]\}\cup\{c^v_{i+i'}\mid i'\in[-3(t+1),3(t+1)] \text{ and } (i+i')\mod (t+1)=0\}\cup\{b^w_{i+i'}\mid i'\in[0,t] \text{ and }vw\in E(G)\}$,
\item $N[b^u_j]=\{a^u_{j+i'},b^u_{j+i'}\mid i'\in[-3(t+1),3(t+1)]\}\cup\{c^u_{j+i'}\mid i'\in[-3(t+1),3(t+1)] \text{ and } (j+i')\mod (t+1)=0\}\cup\{a^w_{j-i'}\mid i'\in[0,t] \text{ and }uw\in E(G)\}\cup \{c^w_{j-i'}\mid i'\in [0,t] \text{ and } (j-i')\mod (t+1)=0 \text{ and }uw\in E(G)\}$.
\end{itemize}
Since~$a^v_ib^u_j\in E(G')$ we obtain by Part~\ref{item-edge-variant-7} of Construction~\ref{const-ell-triangle-s-club-w-hardness} that~$j=i+z$ for some~$z\in[0,t]$.
Now, let~$y\coloneqq t-z$ and let~$c^v_{i'}$ be the unique neighbor of~$b^u_j$ in~$C_v$.
We conclude that~$N(a^v_i)\cap N(b^u_j)=\{a^v_{i+j'}\mid j'\in[-y,z]\setminus\{0\}\}\cup\{b^u_{j+j'} \mid j'\in[-z,y]\setminus\{0\}\}\cup\{c^v_{i'}\}$.
Hence, both vertices have exactly~$t+t+1=\ell$ common neighbors; if~$a^v_ib^u_j\in E(\widetilde{G})$, then all of these common neighbors must be contained in~$S$, that is,~$N(a^v_i)\cap N(b^u_j)\subseteq S$.
An analogous statement can be shown for the edge~$c^v_ib^u_j$ by similar arguments.
Thus, the edges~$a^v_ib^u_j$ and~$c^v_ib^u_j$ are contained in exactly~$\ell$ triangles whose vertex sets are contained in~$G'[A_v\cup C_v\cup B_u]$.
Since~$a^v_ib^u_j\in E(\widetilde{G})$ we thus conclude that all vertices and edges which form these~$\ell$ triangles are contained in~$\widetilde{G}$.
In particular, we obtain that~$a^v_{i+1}b^u_{j+1}\in E(\widetilde{G})$ and also that~$a^v_{i+1},b^u_{j+1}\in V(\widetilde{G})$.
Now, since~$a^v_{i+1}b^u_{j+1}\in E(\widetilde{G})$ we can repeat the above argumentation for the edge~$a^v_{i+1}b^u_{j+1}$ and inductively for the edge~$a^v_{i+q}b^u_{j+q}$ for all~$q\in [x]$.
We have thus verified that~$A_v\cup B_u\cup C_v\subseteq V(\widetilde{G})$ and that each edge in~$E_{uv}$ is contained in~$E(\widetilde{G})$.
\end{claimproof}
Now, we are ready to prove the two implications.
The implication that if~$e\in E(\widetilde{G})$ then~$A_v,B_u\subseteq S$ (and~$C_v\subseteq S$, if~$\ell$ is odd) directly follows from Claim~\ref{claim-edge-variant-cascading-av-bu}.
It remains to show the other implication.
From Claim~\ref{claim-edge-variant-cascading-av-bu} we conclude that either~$E_{uv}\subseteq E(\widetilde{G})$ or~$E_{uv}\cap E(\widetilde{G})=\emptyset$.
If~$E_{uv}\subseteq E(\widetilde{G})$, then we are done, so assume towards a contradiction that~$E_{uv}\cap E(\widetilde{G})=\emptyset$.
Recall that~$A_v\cup B_u\subseteq S$ (and also~$C_v\subseteq S$ if~$\ell$ is odd).
Furthermore, recall that~$\widetilde{G}$ is maximal, that is, no spanning subgraph of~$G'[S]$ with more edges than~$\widetilde{G}$ has diameter at most~$s$ while each of its edges is contained in at least~$\ell$ triangles.
Hence, the graph obtained from~$\widetilde{G}$ by adding all edges in~$E_{uv}$ still witnesses that~$S$ is an edge-$\ell$-triangle~$s$-club, a contradiction to the maximality of~$\widetilde{G}$.\qed
\end{proof}
Now, we prove the correctness of the reduction for Theorem~\ref{thm-ell-edge-triangle-s-club-w-hard}.
\begin{proof}[of Theorem~\ref{thm-ell-edge-triangle-s-club-w-hard}]
We show that~$G$ contains a clique of size at least~$k$ if and only if~$G'$ contains an edge-$\ell$-triangle~$s$-club of size at least~$k'$.
Let~$K$ be a clique of size~$k$ in~$G$.
Recall that~$T^v$ is the gadget of vertex~$v\in V(G)$.
We verify that~$S\coloneqq\{u\in V(T^v)\mid v\in K\}$ is an edge-$\ell$-triangle~$s$-club of size at least~$k'$.
More precisely, we show that~$\widetilde{G}\coloneqq G'[S]$ fulfills all properties of being an edge-$\ell$-triangle~$s$-club.
Since~$|T^v|=2\ell^*(6s-5)+2$ if~$\ell$ is even and~$|T^v|=(2\ell^*+1)(6s-5)$ if~$\ell$ is odd for each~$v\in K$ and since~$|K|\ge k$, we have~$|S|=|V(\widetilde{G})|\ge k'$.
It remains to show that~$\widetilde{G}$ is an edge-$\ell$-triangle~$s$-club.
\textbf{Next, we show that~$\widetilde{G}$ is an~$s$-club.}
First, we show that~$T^v$ is an~$s$-club.
To this end, consider the vertex pair $\{a^v_i,a^v_j\}$ for some~$v\in K$.
Observe that~$P\coloneqq (a^v_i,a^v_{i+1},\ldots, a^v_{i+p})$ for~$i+p=j$ is a path of length~$p$ from~$a^v_i$ to~$a^v_j$ and that~$Q\coloneqq (a^v_i,a^v_{i-1},\ldots, a^v_{i-q})$ for~$i-q=j$ is a path of length~$q$ from~$a^v_i$ to~$a^v_j$.
Clearly,~$p+q=x+1$.
Hence,~$\min(p,q)\le (x+1)/2\le 3\ell^*(s-1)+\lfloor\ell/2\rfloor$.
Without loss of generality, assume that the minimum is achieved by path~$P$ and assume that~$p=\alpha \cdot(3\ell^*)+\beta$ for some~$\alpha\in[s-1]$ and some~$\beta<3\ell^*$.
Recall that by Part~\ref{item-edge-variant-2} of Construction~\ref{const-ell-triangle-s-club-w-hardness},~$a^v_{i'}a^v_{j'}\in E(G')$ if and only if~$j'=i'+z$ for some~$z\in[-3\ell^*,3\ell^*]\setminus\{0\}$.
Hence, $$(a^v_i,a^v_{i+1\cdot(3\ell^*)}, \ldots, a^v_{i+\alpha\cdot(3\ell^*)}, a^v_{i+\alpha\cdot(3\ell^*)+\beta})$$ is a path of length at most~$(s-1)+1=s$ from~$a^v_i$ to~$a^v_j$.
These arguments also apply symmetrically to the vertex pairs~$\{b^v_i,b^v_j\}$ and $\{a^v_i,b^v_j\}$ for each~$v\in K$.
Furthermore, if~$\ell$ is odd, observe that the above argumentation can also be used to show that the vertex pairs~$\{c^v_i,a^v_j\}$,~$\{c^v_i,b^v_j\}$, and~$\{c^v_i,c^v_j\}$ have distance at most~$s$ to each other.
Second, we show that~$a^v_i$ has distance at most~$s$ to~$b^u_j$.
Note that by Part~\ref{item-edge-variant-7} of Construction~\ref{const-ell-triangle-s-club-w-hardness},~$a^v_i$ has neighbors~$b^u_i, \ldots , b^u_{i+\lfloor \ell/2\rfloor}$ since~$uv\in E(G)$.
In the following, we assume that~$j\ne i+z$ for all~$z\in[0,\lfloor \ell/2\rfloor]$.
Consider the paths~$P\coloneqq (a^v_i,b^u_{i+\lfloor \ell/2\rfloor},b^u_{i+\lfloor \ell/2\rfloor+1},\ldots,b^u_{i+\lfloor \ell/2\rfloor+p})$ for~$i+\lfloor \ell/2\rfloor+p= j$ of length~$p+1$ and~$Q\coloneqq (a^v_i,b^u_i,b^u_{i-1}, \ldots, b^u_{i-q})$ for~$i-q= j$ of length~$q+1$.
Observe that~$(p+1)+(q+1)=(x+3)-\lfloor \ell/2\rfloor$.
Thus,~$p+q=(x+1)-\lfloor \ell/2\rfloor=6\ell^*(s-1)+1$.
Since~$p$ and~$q$ are integers we have~$\min(p,q)\le ((x+1)-\lfloor \ell/2\rfloor)/2=3\ell^*(s-1)$.
Without loss of generality assume that the minimum is achieved by path~$P$ and assume that~$p=\alpha \cdot(3\ell^*)+\beta$ for some~$\alpha\in[s-2]$ and some~$\beta\le 3\ell^*$.
Recall that by Part~\ref{item-edge-variant-2} of Construction~\ref{const-ell-triangle-s-club-w-hardness}, we have~$b^u_{i'}b^u_{j'}\in E(G')$ if and only if~$j'=i'+z$ for some~$z\in[-3\ell^*,3\ell^*]\setminus\{0\}$.
Now, observe that $$(a^v_i,b^u_{i+\lfloor\ell/2\rfloor},b^u_{i+\lfloor\ell/2\rfloor+1\cdot(3\ell^*)}, \ldots, b^u_{i+\lfloor\ell/2\rfloor+\alpha\cdot(3\ell^*)}, b^u_{i+\lfloor\ell/2\rfloor+\alpha\cdot(3\ell^*)+\beta})$$ is a path of length at most~$1+(s-2)+1=s$ from~$a^v_i$ to~$b^u_j$.
Furthermore, if~$\ell$ is odd, observe that the above argumentation can also be used to show that the vertex pairs~$\{c^v_i,b^u_j\}$ have distance at most~$s$ to each other by replacing~$a^v_i$ with~$c^v_i$ in the paths~$P$ and~$Q$.
The fact that the vertices~$a^v_i$ and~$a^u_j$, and~$b^v_i$ and~$b^u_j$, respectively, have distance at most~$s$ to each other can be proven similarly to the above argument for~$a^v_i$ and~$b^u_j$: observe that~$a^v_i$ has distance~$2$ to each vertex~$a^u_{i+z}$ with~$z\in [-3\ell^*,\lfloor\ell/2\rfloor+3\ell^*]$, since~$a^v_i$ has neighbors~$b^u_i,\ldots, b^u_{i+\lfloor\ell/2\rfloor}$ and since, by Part~\ref{item-edge-variant-2} of Construction~\ref{const-ell-triangle-s-club-w-hardness}, we have~$b^u_{i'}a^u_{j'}\in E(G')$ if and only if~$j'=i'+z'$ for some~$z'\in[-3\ell^*,3\ell^*]$.
Furthermore, if~$\ell$ is odd, observe that the above argumentation can also be used to show that the vertex pairs~$\{c^v_i,a^u_j\}$ and~$\{c^v_i,c^u_j\}$ have distance at most~$s$ to each other by replacing~$a^v_i$ with~$c^v_i$ and replacing~$a^u_j$ with~$c^u_j$, respectively, in the paths~$P$ and~$Q$.
Hence,~$\widetilde{G}$ is indeed an~$s$-club.
\textbf{Next, we show that each edge in~$E(\widetilde{G})$ is contained in at least~$\ell$ triangles which are contained in~$\widetilde{G}$.}
Consider the edge~$a^v_ia^v_{i+j}$ for some~$j\in[-3\ell^*,3\ell^*]\setminus\{0\}$.
Without loss of generality, assume that~$j>0$.
By Part~\ref{item-edge-variant-2} of Construction~\ref{const-ell-triangle-s-club-w-hardness}, both vertices are adjacent to each vertex~$a^v_{i+i'}$ with~$i'\in[3\ell^*]\setminus\{j\}$.
Hence, the edge~$a^v_ia^v_{i+j}$ is contained in at least~$3\ell^*-1\ge\ell$ triangles.
Furthermore, the statement can be shown analogously for the edges~$b^v_ib^v_{i+j}$ (and $c^v_ic^v_{i+j}$ if~$\ell$ is odd) for some~$j\in[-3\ell^*,3\ell^*]$.
Also, the statement can be shown analogously for the edge~$a^v_ib^v_{i+j}$ for some $j\in[-3\ell^*,3\ell^*]$.
If~$j\ge 0$, then~$a^v_{i+z}$ for each~$z\in[3\ell^*]$ is a common neighbor of both vertices and thus the edge~$a^v_ib^v_{i+j}$ is contained in at least~$\ell$ triangles.
The case~$j<0$ can be shown analogously.
For odd~$\ell$, the statement can be shown analogously for the edges $a^v_ic^v_{i+j}$, $b^v_ic^v_{i+j}$, and~$c^v_ic^v_{i+j}$ for some~$j\in[-3\ell^*,3\ell^*]$.
The fact that the edges~$a^v_ib^u_{i+j}$ and~$c^v_ib^u_{i+j}$ are contained in exactly~$\ell$ triangles follows from the proof of Lemma~\ref{lem-ell-triangle-s-club-av-also-bu}.
We conclude that each edge in~$E(\widetilde{G})$ is contained in at least~$\ell$ triangles in~$\widetilde{G}$.
Thus,~$S$ is indeed an edge-$\ell$-triangle~$s$-club of size at least~$k'$.
Conversely, let~$S$ be an edge-$\ell$-triangle~$s$-club of size at least~$k'$ in~$G'$.
More precisely, let~$\widetilde{G}$ be a maximal spanning subgraph of~$G'[S]$ which has diameter at most~$s$ and such that each edge in~$E(\widetilde{G})$ is contained in at least~$\ell$ triangles in~$\widetilde{G}$.
We show that~$G$ contains a clique of size at least~$k$.
First, we show that for each vertex~$x\in A_v\cup B_v\cup C_v$ there exists a vertex~$y\in A_u\cup B_u\cup C_u$ such that~$\dist(x,y)\ge s+1$ if~$uv\notin E(G)$.
For this, recall that by construction each two vertices with sub-indices~$i'$ and~$j'$ are not adjacent if their difference (modulo~$x$) is larger than~$3\ell^*$.
\begin{claim}
\label{claim-ell-triangle-high-distance-uv-notin-eg'}
In~$G'$ we have~$\dist(x_i,y_{j})\ge s+1$ for each~$i\in[0,x]$,~$j\coloneqq i+\lfloor\ell/2\rfloor+3\ell^*(s-1)$,~$x_i\in\{a^v_i,b^v_i,c^v_i\}$, and~$y_j\in\{a^u_j,b^u_j,c^u_j\}$ if~$uv\notin E(G)$.
\end{claim}
\begin{claimproof}
There are two possible paths from~$x_i$ to~$y_{i+\lfloor\ell/2\rfloor+3\ell^*(s-1)}$ with respect to the indices.
First, there is a subsequence of the indices which is increasing ($i,i+1,\ldots,i+\lfloor\ell/2\rfloor+3\ell^*(s-1)$).
This path has length~$\lfloor\ell/2\rfloor+3\ell^*(s-1)$.
Second, there is a subsequence of the indices which is decreasing ($i,i-1,\ldots,i-i'$, where the indices are taken cyclically and~$i-i'\equiv j$).
This path has length~$3\ell^*(s-1)+1$.
Hence, each path from~$x_i$ to~$y_{i+\lfloor\ell/2\rfloor+3\ell^*(s-1)}$ has to bridge an index difference of at least~$3\ell^*(s-1)+1$.
Observe that whenever an edge between~$A_p$ (or~$C_p$) and~$B_q$ for~$p,q\in V(G)$ with~$pq\in E(G)$ is traversed, by construction the index can increase/decrease by at most~$\lfloor\ell/2\rfloor$.
Now, we use the fact that~$uv\notin E(G)$:
There are no edges between the vertex gadgets~$T^v$ and~$T^u$.
Thus, at least two times such a traversal of at most~$\lfloor\ell/2\rfloor$ indices has to be done.
Hence, the index~$i$ can increase or decrease by at most~$2\cdot\lfloor\ell/2\rfloor+3\ell^*(s-2)<3\ell^*(s-1)+1$ if at least~$2$ edge traversals between different vertex gadgets are necessary; the inequality holds since~$2\lfloor\ell/2\rfloor\le 3\ell^*$.
Thus, both vertices have distance at least~$s+1$.
\end{claimproof}
The following statement directly follows from Claim~\ref{claim-ell-triangle-high-distance-uv-notin-eg'} and Lemma~\ref{lem-ell-triangle-s-club-av-also-bu}.
\begin{claim}
\label{claim-ell-triangle-high-distance-uv-notin-eg'-consequenz}
If~$A_v\subseteq S$ (or~$C_v\subseteq S$ if~$\ell$ is odd) and~$B_u\subseteq S$ then~$uv\in E(G)$.
\end{claim}
We now use Claims~\ref{claim-ell-triangle-high-distance-uv-notin-eg'} and~\ref{claim-ell-triangle-high-distance-uv-notin-eg'-consequenz} to show that~$G$ contains a clique of size at least~$k$.
We distinguish two cases: either~$S$ contains one of the sets~$A_v$,~$B_v$, or~$C_v$ only partially, or~$S$ completely contains each set~$A_v$,~$B_v$, or~$C_v$ that it intersects.
First, assume that for some vertex~$v\in V(G)$ we have~$A_v\cap S\ne\emptyset$ and~$A_v\not\subseteq S$.
In the following, we show that~$S$ only contains vertices of gadget~$T^v$ and from gadgets~$T^u$ such that~$uv\in E(G)$.
Since~$A_v\not\subseteq S$, we conclude that in~$\widetilde{G}$ we have~$N_{\widetilde{G}}(A_v\cap S)\subseteq (B_v\cup C_v)$:
Otherwise, some vertex~$a\in A_v\cap S$ has a neighbor~$b\in B_u$ for some~$u\in V(G)$, and by Lemma~\ref{lem-ell-triangle-s-club-av-also-bu} we would obtain~$A_v\subseteq S$, a contradiction to the assumption~$A_v\not\subseteq S$.
If~$B_v\not\subseteq S$, then by Lemma~\ref{lem-ell-triangle-s-club-av-also-bu} no vertex in~$B_v$ can have a neighbor~$a^w_i$ or~$c^w_i$ in~$\widetilde{G}$ for any~$w\ne v$.
Hence,~$S\cap T^v$ would be a connected component of size at most~$3(x+1)$, a contradiction to the size of~$S$ since~$k\ge 3$.
Thus, we may assume that~$B_v\subseteq S$.
Observe that if~$a^w_i\in S$ or~$c^w_i\in S$ for some~$w\in V(G)$ with~$vw\in E(G)$, then we have~$A_w\subseteq S$ and~$C_w\subseteq S$ by Lemma~\ref{lem-ell-triangle-s-club-av-also-bu}, since each vertex~$a^w_i$ and~$c^w_i$ has a neighbor in~$B_v$.
Let~$W\coloneqq \{w_1,\ldots, w_t\}$ denote the set of vertices~$w_j$ such that~$vw_j\in E(G)$ and~$A_{w_j},C_{w_j}\subseteq S$.
If~$w_xw_y\notin E(G)$ for some~$x,y\in[t]$ with~$x\ne y$, then~$a^{w_x}_0$ and~$a^{w_y}_{\lfloor\ell/2\rfloor+3\ell^*(s-1)}$ have distance at least~$s+1$ by Claim~\ref{claim-ell-triangle-high-distance-uv-notin-eg'}.
Thus~$w_xw_y\in E(G)$ for each~$x,y\in[t]$ with~$x\ne y$.
Assume towards a contradiction that~$a^p_i\in S$ or~$c^p_i\in S$ for some~$p\in V(G)\setminus W$ with~$p\ne v$.
Note that~$pv\notin E(G)$ since otherwise~$p\in W$ by the definition of~$W$.
Observe that since~$B_v\subseteq S$ we also have~$b^v_{i+\lfloor\ell/2\rfloor+3\ell^*(s-1)}\in S$.
But since~$pv\notin E(G)$ we obtain from Claim~\ref{claim-ell-triangle-high-distance-uv-notin-eg'} that~$\dist(z_i,b^v_{i+\lfloor\ell/2\rfloor+3\ell^*(s-1)})\ge s+1$ for~$z_i=a^p_i$ or~$z_i=c^p_i$, a contradiction.
We conclude that~$S$ does not contain any vertex~$a^p_i$ or~$c^p_i$ with~$p\ne v$ and~$p\notin W$.
Next, assume towards a contradiction that~$b^p_i\in S$ for some~$p\in V(G)$ with $p\ne v$ and~$p\notin W$.
If~$pv\notin E(G)$, then~$b^p_i$ and~$b^v_{i+\lfloor\ell/2\rfloor+3\ell^*(s-1)}$ have distance at least~$s+1$ again by Claim~\ref{claim-ell-triangle-high-distance-uv-notin-eg'}.
Thus, we can assume that~$pv\in E(G)$.
Recall that~$b^v_{i+\lfloor\ell/2\rfloor+3\ell^*(s-1)}\in S$.
By the argument in the proof of Claim~\ref{claim-ell-triangle-high-distance-uv-notin-eg'}, each shortest path from~$b^p_i$ to~$b^v_{i+\lfloor\ell/2\rfloor+3\ell^*(s-1)}$ can swap at most once between different vertex gadgets.
In this case, there is exactly one swap from~$T^p$ to~$T^v$.
From the above we know that~$(A_p\cup C_p)\cap S=\emptyset$.
Thus, each shortest path from~$b^p_i$ to~$b^v_{i+\lfloor\ell/2\rfloor+3\ell^*(s-1)}$ uses at least one vertex in~$A_v\cup C_v$.
Since at least one edge with one endpoint in~$A_v\cup C_v$ is thus contained in~$E(\widetilde{G})$, we conclude from Lemma~\ref{lem-ell-triangle-s-club-av-also-bu} that~$A_v\cup C_v\subseteq S$, a contradiction to the assumption~$A_v\not\subseteq S$.
Hence, there is no vertex~$p\ne v$ and~$p\notin W$ such that~$T^p\cap S\ne\emptyset$.
In other words,~$S$ contains only vertices from the gadget~$T^v$ and from gadgets~$T^u$ with~$vu\in E(G)$.
Thus,~$S\subseteq T^v\cup\bigcup_{j=1}^t T^{w_j}$.
By definition of~$k'$, we have~$t\ge k-1$ and we conclude that~$G$ contains a clique of size at least~$k$.
The case that we have~$B_v\cap S\ne\emptyset$ and~$B_v\not\subseteq S$ or~$C_v\cap S\ne\emptyset$ and~$C_v\not\subseteq S$ for some vertex~$v\in V(G)$ can be handled similarly.
Second, consider the case that for each set~$A_v$ with~$A_v\cap S\ne\emptyset$ we have~$A_v\subseteq S$, that for each set~$B_v$ with~$B_v\cap S\ne\emptyset$ we have~$B_v\subseteq S$, and that for each set~$C_v$ with~$C_v\cap S\ne\emptyset$ we have~$C_v\subseteq S$.
Let~$W_A\coloneqq \{w_A^j\mid A_{w_j}\subseteq S\}$,~$W_B\coloneqq \{w_B^j\mid B_{w_j}\subseteq S\}$, and~$W_C\coloneqq \{w_C^j\mid C_{w_j}\subseteq S\}$.
If~$W_A=\emptyset$,~$W_B=\emptyset$, or~$W_C=\emptyset$, then each connected component in~$G'[S]$ has size at most~$2(x+1)<k'$.
Thus, we may assume that~$W_A\ne\emptyset$,~$W_B\ne\emptyset$, and~$W_C\ne\emptyset$.
By Claim~\ref{claim-ell-triangle-high-distance-uv-notin-eg'-consequenz}, we have~$w^i_Aw^j_B\in E(G)$ for each~$w^i_A\in W_A$ and~$w^j_B\in W_B$ and also~$w^i_Cw^j_B\in E(G)$ for each~$w^i_C\in W_C$ and~$w^j_B\in W_B$.
Furthermore, by Claim~\ref{claim-ell-triangle-high-distance-uv-notin-eg'}, we have~$w^j_Bw^{j'}_B\in E(G)$ for~$w^j_B,w^{j'}_B\in W_B$,~$w^i_Aw^{i'}_A\in E(G)$ for~$w^i_A,w^{i'}_A\in W_A$,~$w^i_Cw^{i'}_C\in E(G)$ for~$w^i_C,w^{i'}_C\in W_C$, and also~$w^i_Aw^{i'}_C\in E(G)$ for~$w^i_A\in W_A$ and~$w^{i'}_C\in W_C$.
Hence, we obtain that~$\min(|W_A|,|W_B|,|W_C|)\ge k$ and thus~$G$ contains a clique of size~$k$.
\qed
\end{proof}
\section{Seeded~\texorpdfstring{$s$}{s}-Club}
In this section we study the parameterized complexity of \textsc{Seeded~$s$-Club} with respect to the standard parameter solution size~$k$.
Recall that in this problem we aim to find an~$s$-club containing a given seed of vertices.
Here, we assume that~$|W|<k$ since otherwise the problem can be solved in polynomial time.
\subsection{Tractable Cases}
For clique seeds, we provide the following kernel.
Note that here we present a kernel and not only a Turing kernel.
\begin{theorem}
\label{thm-w-seeded-tractable-cases}
\textsc{Seeded~$s$-Club} admits a kernel with $\ensuremath{\mathcal{O}}(k^{2|W|+1})$ vertices if~$W$ is a clique.
\end{theorem}
Note that this kernel has polynomial size if~$W$ has constant size.
In the following, assume that~$G[W]$ is a clique.
To prove the kernel, we first remove all vertices with distance at least~$s+1$ to any vertex in~$W$.
Second, we show that if the remaining graph, that is,~$N_s[W]$, is sufficiently large, then~$(G,W,k)$ is a trivial yes-instance.
\begin{rrule}
\label{rr-w-seesed-s-club-remove-vertices-large-distance}
Let~$(G,W,k)$ be an instance of \textsc{Seeded~$s$-Club}.
If~$G$ contains a vertex~$u$ such that~$\dist(u,w)\ge s+1$ for some~$w\in W$, then remove~$u$.
\end{rrule}
Clearly, Reduction Rule~\ref{rr-w-seesed-s-club-remove-vertices-large-distance} is correct and can be applied in polynomial time.
Next, we show that if the remaining graph is sufficiently large then~$(G,W,k)$ is a yes-instance of \textsc{Seeded~$s$-Club}.
\begin{lemma}
\label{lem-w-seeded-bound-of-ti-small}
An instance~$(G,W,k)$ of \textsc{Seeded~$s$-Club} with~$|N_{s-1}[W]|\ge k^2$ is a yes-instance.
\end{lemma}
To prove Lemma~\ref{lem-w-seeded-bound-of-ti-small} for~$s\ge 3$ we need the following technical lemmas.
\begin{lemma}
\label{lem-seeded-s-club-clique-sge3-small}
An instance~$(G,W,k)$ of \textsc{Seeded~$s$-Club} with~$s\ge 3$ is a yes-instance if~$|N_{\lfloor (s+1)/2\rfloor-1}[W]|\ge k$.
\end{lemma}
\begin{proof}
By definition~$W\subseteq N_{\lfloor (s+1)/2\rfloor-1}[W]$ and~$|N_{\lfloor (s+1)/2\rfloor-1}[W]|\ge k$.
Thus, it remains to show that~$N_{\lfloor (s+1)/2\rfloor-1}[W]$ is an~$s$-club.
To this end, consider a pair of vertices~$u, v\in N_{\lfloor (s+1)/2\rfloor-1}[W]$.
Observe that by definition~$\dist(u,W)\le \lfloor (s+1)/2\rfloor-1$ and~$\dist(v,W)\le \lfloor (s+1)/2\rfloor-1$.
Since~$W$ is a clique, we have~$\dist(u,v)\le (\lfloor (s+1)/2\rfloor-1)+1+(\lfloor (s+1)/2\rfloor-1)\le s$.
Hence, the lemma follows.\qed
\end{proof}
Note that the assumption~$s\ge 3$ in Lemma~\ref{lem-seeded-s-club-clique-sge3-small} is necessary to guarantee that~$\lfloor (s+1)/2\rfloor-1\ge 1$.
Next, we show that if a vertex in~$N_{\lfloor (s+1)/2\rfloor-1}(W)$ has many vertices close to it, then~$(G,W,k)$ is a yes-instance.
\begin{lemma}
\label{lem-seeded-s-club-clique-sge3-big}
An instance~$(G,W,k)$ of \textsc{Seeded~$s$-Club} in which~$s\ge 3$ and $|N_{\lfloor s/2\rfloor}(v)|\ge k$ for some vertex~$v\in N_{\lfloor (s+1)/2\rfloor-1}(W)$ is a yes-instance.
\end{lemma}
\begin{proof}
Let~$v$ be a vertex as specified in the lemma.
By definition of~$v$, there exists a path~$P\coloneqq (q_0,q_1,\ldots, q_{\lfloor (s+1)/2\rfloor-1})$ of length~$\lfloor (s+1)/2\rfloor-1$ in~$G$ such that~$q_{\lfloor (s+1)/2\rfloor-1}=v$,~$q_0\in W$, and~$q_i\in N_i(q_0)$.
We show that~$S\coloneqq N_{\lfloor s/2\rfloor}(v)\cup W\cup P$ is an~$s$-club of size at least~$k$ containing~$W$.
Clearly,~$W\subseteq S$ and~$|S|\ge k$.
Thus, it remains to show that~$S$ is an~$s$-club.
Consider a vertex~$w\in W$.
Vertex~$w$ has distance at most~$i+1$ to vertex~$q_i$, since~$\dist(w,q_0)\le 1$ and~$q_i\in N_i(q_0)$.
In particular,~$\dist(w,v)\le \lfloor (s+1)/2\rfloor$.
Since each vertex~$u\in N_{\lfloor s/2\rfloor}[v]$ has distance at most~$\lfloor s/2\rfloor$ to~$v$ we obtain that~$\dist(w,u)\le \lfloor (s+1)/2\rfloor+\lfloor s/2\rfloor=s$.
By similar arguments we can also show that vertex~$q_i$ for~$i\in [\lfloor (s+1)/2\rfloor-1]$ has distance at most~$s$ to each vertex in~$S$.
Finally, consider two vertices~$x,y\in N_{\lfloor s/2\rfloor}[v]$.
Note that~$\dist(x,v)\le \lfloor s/2\rfloor$ and also~$\dist(y,v)\le \lfloor s/2\rfloor$ and thus~$\dist(x,y)\le s$.
Thus,~$S$ is indeed an~$s$-club.\qed
\end{proof}
With those two lemmas we are now able to prove Lemma~\ref{lem-w-seeded-bound-of-ti-small}.
\begin{proof}[Proof of Lemma~\ref{lem-w-seeded-bound-of-ti-small}]
First, we consider the case~$s=2$.
It is sufficient to show that~$(G,W,k)$ is a yes-instance if~$|N[w]|\ge k$ for some~$w\in W$: such a vertex~$w$ exists by the pigeonhole principle, since~$|W|\le k$ and since~$|N[W]|\ge k^2$ by our assumption.
Since all vertices in~$N(w)$ have the common neighbor~$w$, we conclude that~$N[w]$ is a~$2$-club.
Also, since~$W$ is a clique, we have~$W\subseteq N[w]$.
The required size bound follows from~$|N[w]|\ge k$.
Thus,~$(G,W,k)$ is a yes-instance.
Second, we consider the case~$s\ge 3$.
Observe that $$N_{s-1}[W]= N_{\lfloor (s+1)/2\rfloor-1}[W]\cup\bigcup_{v\in N_{\lfloor (s+1)/2\rfloor-1}[W]}N_{\lfloor s/2\rfloor}[v].$$
By Lemma~\ref{lem-seeded-s-club-clique-sge3-small},~$(G,W,k)$ is a yes-instance if~$|N_{\lfloor (s+1)/2\rfloor-1}[W]|\ge k$ and, by Lemma~\ref{lem-seeded-s-club-clique-sge3-big}, $(G,W,k)$ is a yes-instance if~$|N_{\lfloor s/2\rfloor}(v)|\ge k$ for some~$v\in N_{\lfloor (s+1)/2\rfloor-1}(W)$.
Thus, by the above equality we conclude that~$(G,W,k)$ is a yes-instance.\qed
\end{proof}
Finally, we bound the size of~$N_s(W)$.
Here, we may assume that~$|N_{s-1}[W]|<k^2$ by Lemma~\ref{lem-w-seeded-bound-of-ti-small} and that Reduction Rule~\ref{rr-w-seesed-s-club-remove-vertices-large-distance} has been applied exhaustively.
\begin{lemma}
\label{lem-w-seeded-bound-of-ti-s}
An instance~$(G,W,k)$ of \textsc{Seeded~$s$-Club} with~$|N_s(W)|\ge k^{2|W|+1}$ which is reduced with respect to Reduction Rule~\ref{rr-w-seesed-s-club-remove-vertices-large-distance} is a yes-instance.
\end{lemma}
\begin{proof}
Since Reduction Rule~\ref{rr-w-seesed-s-club-remove-vertices-large-distance} has been applied exhaustively, each vertex~$p\in N_s(W)$ has distance exactly~$s$ to each vertex in~$W$.
In other words, for each vertex~$w_\ell\in W$ there exists a vertex~$u^{\ell}_{s-1}\in N_{s-1}(w_\ell)$ such that~$pu^\ell_{s-1}\in E(G)$.
Note that~$N_{s-1}(w_\ell)\subseteq N_{s-1}[W]$.
Moreover, by Lemma~\ref{lem-w-seeded-bound-of-ti-small} we may assume that $|N_{s-1}[W]|<k^2$.
In particular, we have~$|N_{s-1}(W)|<k^2$.
Since~$|N_s(W)|\ge k^{2|W|+1}$, by the pigeonhole principle there exists a set~$\{u^1_{s-1},u^2_{s-1},\ldots, u^{|W|}_{s-1}\}$ with $u^\ell_{s-1}\in N_{s-1}(w_\ell)$ for~$\ell\in[|W|]$ such that the set $P\coloneqq N_s(W)\cap\bigcap_{\ell\in[|W|]}N(u^\ell_{s-1})$ has size at least~$k$.
The size bound follows from the observation that each~$N_{s-1}(w_\ell)$ has size at most~$k^2$ and we have exactly~$|W|$ many of these sets.
By the definition of vertex~$u^\ell_{s-1}$, there exists for each~$i\in[s-2]$ a vertex~$u_i^\ell\in N_i(w_\ell)$ such that~$w_\ell, u_1^\ell, \ldots , u_{s-1}^\ell$ is a path of length~$s-1$ in~$G$.
We define the set~$U\coloneqq \{u_i^\ell\mid\ell\in[|W|], i\in[s-1]\}$.
Next, we show that~$Z\coloneqq P\cup W\cup U$ induces an~$s$-club.
First, observe that all vertices in~$P$ have distance at most~$2$ to each other since they have the common neighbor~$u^1_{s-1}$.
Second, note that the vertices~$w_\ell$, $u^\ell_1,\ldots, u^\ell_{s-1}$,~$p,u^j_{s-1},\ldots, u^j_1,w_j$ form a cycle with~$2s+1$ vertices, for each~$p\in P$ and each two indices~$j,\ell\in[|W|]$.
Each vertex in this cycle has distance at most~$s$ to each other vertex in that cycle.
Hence,~$Z$ is indeed an~$s$-club.\qed
\end{proof}
Recall that Lemma~\ref{lem-w-seeded-bound-of-ti-small} shows that we may assume the number of vertices with distance \emph{at most}~$s-1$ to~$W$ to be less than~$k^2$.
Together with Lemma~\ref{lem-w-seeded-bound-of-ti-s}, this completes the proof of Theorem~\ref{thm-w-seeded-tractable-cases}.
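The resulting kernelization can be summarized as follows; the listing below is a minimal Python sketch (assuming the \texttt{networkx} library, with the returned string \texttt{"YES"} standing for a trivial yes-instance) and is meant purely as an illustration of the steps above, not as part of the formal proof.
\begin{verbatim}
import networkx as nx

def seeded_s_club_kernel(G, W, k, s):
    # Reduction Rule: remove every vertex at distance >= s+1 from some seed vertex.
    keep = set(G.nodes)
    for w in W:
        reach = nx.single_source_shortest_path_length(G, w, cutoff=s)
        keep &= set(reach)
    H = G.subgraph(keep).copy()

    # Distance of every remaining vertex to the seed set W.
    dist_W = {v: min(nx.single_source_shortest_path_length(H, w).get(v, s + 1)
                     for w in W) for v in H.nodes}
    N_sm1 = {v for v, d in dist_W.items() if d <= s - 1}   # N_{s-1}[W]
    N_s = {v for v, d in dist_W.items() if d == s}         # N_s(W)

    # Large neighborhoods certify a yes-instance (by the two lemmas above).
    if len(N_sm1) >= k ** 2 or len(N_s) >= k ** (2 * len(W) + 1):
        return "YES"
    # Otherwise the reduced instance has O(k^{2|W|+1}) vertices.
    return H, W, k
\end{verbatim}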
\subsection{Intractable Cases}
Now, we show hardness for some of the remaining cases.
\begin{theorem}
Let~$H$ be a fixed graph. \textsc{Seeded~$s$-Club} is W[1]-hard parameterized by~$k$ even if~$G[W]$ is isomorphic to~$H$, when
\begin{itemize}
\item $s=2$ and~$H$ contains at least two non-adjacent vertices, or if
\item $s\ge 3$ and~$H$ contains at least two connected components.
\end{itemize}
\end{theorem}
\paragraph{Hardness for $s=2$.} First, we prove hardness for~$s=2$ when~$H$ contains at least one non-edge.
For an illustration of Construction~\ref{const-w-seeded-s-club-s-equals-two} we refer to Figure~\ref{fig-examples-seeded-club}.
\begin{figure}
\caption{Illustration of Construction~\ref{const-w-seeded-s-club-s-equals-two}.}
\label{fig-examples-seeded-club}
\end{figure}
\begin{const}
\label{const-w-seeded-s-club-s-equals-two}
Let~$(G,k)$ be an instance of \textsc{Clique}.
We construct an equivalent instance~$(G',W,k')$ of \textsc{Seeded~$s$-Club} as follows.
Initially, we add the set~$W$ to~$G'$, and add edges such that~$G'[W]$ is isomorphic to~$H$.
Since~$H$ is not a clique, there exist two vertices~$u,v\in V(H)$ such that~$uv\notin E(H)$.
Let~$R\coloneqq W\setminus\{u,v\}$.
Next, we add two copies~$G_u$ and~$G_v$ of~$G$ to~$G'$, make~$u$ adjacent to each vertex in~$V(G_u)$, and make~$v$ adjacent to each vertex in~$V(G_v)$.
We denote with~$x_u$, and~$x_v$ the copies of~$x$ in~$G_u$ and~$G_v$, respectively.
Next, we add the edge~$x_ux_v$ for each~$x\in V(G)$.
Furthermore, we add a new vertex~$p$ and make it adjacent to each vertex in~$W$.
Next, we add a new vertex~$u^*$ adjacent to~$p$, each vertex in~$V(G_u)$, and each vertex in~$R$.
Analogously, we add a new vertex~$v^*$ which is adjacent to~$p$, each vertex in~$V(G_v)$, and each vertex in~$R$.
Finally, we set~$k'\coloneqq 2k+|W|+3$.
\end{const}
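For concreteness, the following Python sketch (assuming the \texttt{networkx} library and an arbitrary but fixed copy of~$H$ on the vertex set~$W$) builds the graph of Construction~\ref{const-w-seeded-s-club-s-equals-two}; it is only an illustration of the construction and not part of the reduction itself.
\begin{verbatim}
import networkx as nx

def construct_seeded_2_club_instance(G, k, H):
    Gp = nx.Graph()
    W = [("w", h) for h in H.nodes]
    Gp.add_nodes_from(W)
    Gp.add_edges_from((("w", a), ("w", b)) for a, b in H.edges)  # G'[W] isomorphic to H

    # Two non-adjacent seed vertices u, v exist because H is not a clique.
    hu, hv = next((a, b) for a in H.nodes for b in H.nodes
                  if a != b and not H.has_edge(a, b))
    u, v = ("w", hu), ("w", hv)
    R = [w for w in W if w not in (u, v)]

    # Copies G_u and G_v of G, attached to u and v, respectively.
    for side in ("Gu", "Gv"):
        Gp.add_nodes_from((side, x) for x in G.nodes)
        Gp.add_edges_from(((side, a), (side, b)) for a, b in G.edges)
    Gp.add_edges_from((u, ("Gu", x)) for x in G.nodes)
    Gp.add_edges_from((v, ("Gv", x)) for x in G.nodes)
    Gp.add_edges_from((("Gu", x), ("Gv", x)) for x in G.nodes)   # edges x_u x_v

    # Vertex p and the two additional vertices u*, v*.
    Gp.add_edges_from(("p", w) for w in W)
    Gp.add_edges_from([("u*", "p")] + [("u*", ("Gu", x)) for x in G.nodes]
                      + [("u*", r) for r in R])
    Gp.add_edges_from([("v*", "p")] + [("v*", ("Gv", x)) for x in G.nodes]
                      + [("v*", r) for r in R])

    return Gp, set(W), 2 * k + len(W) + 3
\end{verbatim}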
Now, we prove the correctness of Construction~\ref{const-w-seeded-s-club-s-equals-two}.
\begin{lemma}
\label{lemma-w-seeded-2-club-hardness}
For any graph~$H$ which is not a clique, \textsc{Seeded~$2$-Club} parameterized by~$k$ is W[1]-hard if~$G'[W]$ is isomorphic to~$H$.
\end{lemma}
\begin{proof}
We prove that~$G$ contains a clique of size~$k$ if and only if~$G'$ contains a~$2$-club~$S$ containing~$W$ of size at least~$k'=2k+|W|+3$.
Let~$K$ be a clique of size~$k$ in~$G$ and let~$K_u$ and~$K_v$ be the copies of~$K$ in~$G_u$ and~$G_v$.
We argue that~$S\coloneqq K_u\cup K_v\cup W\cup \{u^*,v^*,p\}$ is a~$2$-club of size at least~$k'$ containing~$W$.
Clearly,~$|S|=k'$ and~$S$ contains~$W$.
Thus, it remains to show that~$G'[S]$ is a~$2$-club.
First, we show that each vertex in~$R$ has distance at most~$2$ to each vertex in~$S$:
All vertices in~$R$ have the common neighbors~$p$,~$u^*$, and~$v^*$.
Since~$u$ and~$v$ are neighbors of~$p$, since each vertex in~$V(G_u)$ is a neighbor of~$u^*$, and since each vertex in~$V(G_v)$ is a neighbor of~$v^*$, we conclude that each vertex in~$R$ has distance at most~$2$ to any vertex in~$S$.
Second, we show that each vertex in~$K_u$ has distance at most~$2$ to each vertex in~$S\setminus R$.
Observe that~$\{u,u^*,x_v\}\cup K_u\subseteq N[x_u]$ for each vertex~$x_u\in K_u$.
Hence,~$x_u$ has distance at most~$2$ to~$p$ (via~$u^*$), to~$v$ and~$v^*$ (via~$x_v$), and to each vertex in~$K_v$ (via the corresponding vertex in~$K_u$).
By symmetric arguments the statement also holds for each vertex in~$K_v$.
Finally, each pair of vertices of~$\{p,u,u^*,v,v^*\}$ has distance at most~$2$ to each other since~$u,u^*,v,v^*\in N(p)$.
Thus,~$S$ is indeed a~$2$-club.
Conversely, suppose that~$G'$ contains a~$2$-club~$S$ of size at least~$2k+|W|+3$ which contains all vertices of~$W$.
Observe that we have~$N(x_u)\cap N(v)=\{x_v\}$ for each vertex~$x_u\in V(G_u)$, and symmetrically~$N(x_v)\cap N(u)=\{x_u\}$ for each vertex~$x_v\in V(G_v)$.
Hence,~$x_u\in S$ if and only if~$x_v\in S$.
Let~$K_u\coloneqq S\cap V(G_u)$.
By definition of~$k'$ we obtain that~$|K_u|\ge k$.
Assume towards a contradiction that~$K_u$ contains a pair of nonadjacent vertices~$x_u$ and~$y_u$.
By the argumentation above we obtain~$y_v\in S$.
Now, observe that~$N(x_u)=\{u,u^*,x_v\}\cup\{z_u\mid xz\in E(G)\}$ and that~$N(y_v)=\{v,v^*,y_u\}\cup\{z_v\mid yz\in E(G)\}$.
Since~$xy\notin E(G)$ we thus obtain~$N(x_u)\cap N(y_v)=\emptyset$; as~$x_u$ and~$y_v$ are also non-adjacent, they have distance at least~$3$ in~$G'$, a contradiction.
Thus,~$G$ contains a clique of size~$k$.\qed
\end{proof}
\paragraph{Hardness for seeds with at least two connected components and~$s\ge 3$.}
Now, we show W[1]-hardness for the case~$s\ge 3$ when the seed contains at least two connected components.
Fix a graph~$H$ with at least two connected components.
We show W[1]-hardness for~$s\ge 3$ even if~$G[W]$ is isomorphic to~$H$.
\begin{const}
\label{const-w-seeded-s-club-s-atleast3-many-conn-compos}
Let~$(G,k)$ be an instance of \textsc{Clique}.
We construct an equivalent instance~$(G',W,k')$ of \textsc{Seeded~$s$-Club} as follows.
Initially, we add the set~$W$ to~$G'$, and add edges such that~$G'[W]$ is isomorphic to~$H$.
Let~$D_1$ be one connected component of~$G'[W]$.
By assumption,~$D_2\coloneqq W\setminus D_1$ is not empty.
Next, we add two copies~$G_1$ and~$G_2$ of~$G$ to~$G'$. Then, we add edges to~$G'$ such that each vertex in~$D_1$ is adjacent to each vertex in~$V(G_1)$ and such that each vertex in~$D_2$ is adjacent to each vertex in~$V(G_2)$.
Furthermore, we add a path~$(p_1, \ldots , p_{s-1})$ consisting of exactly~$s-1$ new vertices to~$G'$, make~$p_1$ adjacent to each~$u\in D_1$, and make~$p_{s-1}$ adjacent to each~$v\in D_2$.
By~$P\coloneqq \{p_i\mid i\in[s-1]\}$ we denote the set of these newly added vertices.
Now, for each~$x\in V(G)$ we do the following.
Consider the copies~$x_1\in V(G_1)$ and~$x_2\in V(G_2)$ of vertex~$x\in V(G)$.
We add a path~$(x_1, q_1^x, \ldots , q_{s-2}^x, x_2)$ consisting of~$s-2$ new vertices to~$G'$.
By~$Q_x\coloneqq \{q^x_i\mid i\in[s-2]\}$ we denote the set of the new internal path vertices.
Finally, we set~$k'\coloneqq sk+|W|+s-1$.
\end{const}
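Analogously to the previous construction, the following Python sketch (again assuming \texttt{networkx} and a fixed copy of~$H$ on~$W$) makes the gadget structure of Construction~\ref{const-w-seeded-s-club-s-atleast3-many-conn-compos} explicit; it is only an illustration.
\begin{verbatim}
import networkx as nx

def construct_seeded_s_club_instance(G, k, H, s):
    Gp = nx.Graph()
    W = [("w", h) for h in H.nodes]
    Gp.add_nodes_from(W)
    Gp.add_edges_from((("w", a), ("w", b)) for a, b in H.edges)  # G'[W] isomorphic to H

    comps = list(nx.connected_components(H))
    D1 = [("w", h) for h in comps[0]]            # one connected component of G'[W]
    D2 = [w for w in W if w not in D1]           # nonempty since H has >= 2 components

    for side in ("G1", "G2"):                    # two copies of G
        Gp.add_nodes_from((side, x) for x in G.nodes)
        Gp.add_edges_from(((side, a), (side, b)) for a, b in G.edges)
    Gp.add_edges_from((d, ("G1", x)) for d in D1 for x in G.nodes)
    Gp.add_edges_from((d, ("G2", x)) for d in D2 for x in G.nodes)

    P = [("p", i) for i in range(1, s)]          # path p_1, ..., p_{s-1}
    nx.add_path(Gp, P)
    Gp.add_edges_from((P[0], d) for d in D1)
    Gp.add_edges_from((P[-1], d) for d in D2)

    for x in G.nodes:                            # paths (x_1, q_1^x, ..., q_{s-2}^x, x_2)
        nx.add_path(Gp, [("G1", x)]
                    + [("q", x, i) for i in range(1, s - 1)] + [("G2", x)])

    return Gp, set(W), s * k + len(W) + s - 1
\end{verbatim}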
Now, we prove the correctness of Construction~\ref{const-w-seeded-s-club-s-atleast3-many-conn-compos}.
\begin{lemma}
\label{lem-w-seeded-s-club-s-atleast3-many-conn-compos}
Let~$H$ be a fixed graph with at least two connected components.
\textsc{Seeded~$s$-Club} parameterized by~$k$ is W[1]-hard even if~$G[W]$ is isomorphic to~$H$.
\end{lemma}
\begin{proof}
We show that~$G$ contains a clique of size~$k$ if and only if~$G'$ contains a~$W$-seeded~$s$-club of size at least~$k'=sk+|W|+s-1$.
Let~$K$ be a clique of size~$k$ in~$G$.
Furthermore, let~$K_1$ and~$K_2$ denote the copies of~$K$ in~$G_1$ and~$G_2$, respectively.
We show that~$S\coloneqq W\cup P\cup K_1\cup K_2\cup\bigcup_{x\in K}Q_x$ is a~$W$-seeded~$s$-club of size at least~$k'$.
Clearly,~$|S|=k'$ and~$S$ contains~$W$.
Thus, it remains to verify that~$G[S]$ is an~$s$-club.
Note that since each vertex in~$V(G_1)$ is adjacent to each vertex in~$D_1$, each two vertices in~$D_1$ have distance at most~$2$, and similarly each two vertices in~$V(G_1)$ have distance at most~$2$.
Analogously, we can show that each two vertices in~$D_2$ and each two vertices in~$V(G_2)$ have distance at most~$2$.
Furthermore, the vertices~$(x,p_1,p_2,\ldots,p_{s-1},y,u_2,q^u_{s-2},\ldots, q^u_1,u_1)$ for each vertex~$x\in D_1$, each vertex~$y\in D_2$, and each vertex~$u\in K$ form a~$C_{2s+1}$, a cycle with~$2s+1$ vertices.
Observe that also the vertices~$(u_1,q^u_1,\ldots,q^u_{s-2},u_2,v_2,q^v_{s-2},\ldots, q^v_1,v_1)$ form a~$C_{2s}$ for each two vertices~$u,v\in K$.
Since all remaining distances are covered by these two cycles, we conclude that~$S$ is indeed an~$s$-club.
Conversely, suppose that~$G'$ contains a~$W$-seeded~$s$-club~$S$ of size at least~$k'$.
Let~$Q'_v\coloneqq \{v_1,q^v_1,\ldots, q^v_{s-2},v_2\}$ for each~$v\in V(G)$.
We show that~$Q'_v\cap S\ne\emptyset$ if and only if~$Q'_v\subseteq S$.
Assume towards a contradiction, that~$Q'_v\cap S\ne\emptyset$ for some~$v\in V(G)$ such that~$Q'_v\not\subseteq S$.
If~$v_1\notin S$ and also~$v_2\notin S$, then no vertex in~$S\cap Q'_v$ is connected to any vertex in~$S\setminus Q'_v$, contradicting the fact that~$S$ is an~$s$-club containing~$W$.
Hence, we can assume without loss of generality that~$v_1\in S$.
Note that~$N(D_2)=V(G_2)\cup\{p_{s-1}\}$.
Furthermore, observe that~$\dist(v_1,p_{s-1})=s$, that~$\dist(v_1,q^v_{s-2})=s-2$, and that~$\dist(v_1,q^u_{s-2})\ge s-1$ for each~$u\in V(G)\setminus\{v\}$.
Thus, each path of length at most~$s$ from~$v_1$ to a vertex of~$D_2$ contains all vertices in~$Q'_v$.
Hence,~$Q'_v\cap S\ne\emptyset$ if and only if~$Q'_v\subseteq S$.
By the definition of~$k'$ we may thus conclude that~$Q'_v\subseteq S$ for at least~$k$ vertices~$v\in V(G)$.
Now, assume towards a contradiction that~$Q'_u\subseteq S$ and~$Q'_v\subseteq S$ such that~$uv\notin E(G)$.
We consider the vertices~$v_1$ and~$u_2$.
Observe that by construction each path from~$v_1$ to~$u_2$ containing any vertex~$p_i$ has length at least~$s+1$.
Hence, each shortest path from~$v_1$ to~$u_2$ contains the vertex set of~$Q'_w$ for some~$w\in V(G)$.
Since the path induced by each~$Q'_w$ has length~$s-1$, we conclude that~$w=u$ or~$w=v$.
Assume without loss of generality that~$w=v$.
Hence, the~$(s-1)$th vertex on the path from~$v_1$ is~$v_2$.
Since~$uv\notin E(G)$ we have by construction that~$u_2v_2\notin E(G')$.
Hence,~$\dist(v_1,u_2)\ge s+1$, a contradiction to the fact that~$S$ is an~$s$-club.
Thus,~$\{v\mid Q'_v\subseteq S\}$ is a clique of size at least~$k$ in~$G$.\qed
\end{proof}
\section{Conclusion}
We provided a complexity dichotomy for \textsc{Vertex Triangle $s$-Club}{} and \textsc{Edge Triangle $s$-Club}{} for the standard parameter solution size~$k$ with respect to~$s$ and~$\ell$.
Furthermore, we also provided a complexity dichotomy for \textsc{Seeded~$2$-Club} for~$k$ in terms of the structure of~$G[W]$.
For \textsc{Seeded~$s$-Club} with~$s\ge 3$ we provided an FPT-algorithm with respect to~$k$ when~$G[W]$ is a clique and we showed W[1]-hardness for~$k$ when~$G[W]$ contains at least 2 connected components.
Hence, an immediate open question is the parameterized complexity of \textsc{Seeded~$s$-Club} for~$s\ge3$ when~$G[W]$ is connected but not a clique.
One aim should be to also provide a dichotomy for~$k$ for \textsc{Seeded~$s$-Club} with~$s\ge 3$ for all possible structures of~$G[W]$.
It is particularly interesting to study seeds of constant size since this seems to be the most interesting case for applications.
For future work, it seems interesting to study the complexity of the considered variants of \textsc{$s$-Club} with respect to further parameters, for example with respect to structural parameters of the input graph~$G$ such as the treewidth of~$G$.
Additionally, the parameterized complexity of further robust variants of \textsc{$s$-Club} such as~\textsc{$t$-Hereditary $s$-Club}~\cite{PYB13,KNNP19} with respect to~$k$ remains open.
It is also interesting to study other problems for detecting communities with seed constraints.
One prominent example is \textsc{$s$-Plex}.
This problem is also NP-hard for~$W\ne\emptyset$, since an algorithm for the case when~$|W|=1$ can be used as a black box to solve the unseeded variant.
From a practical perspective, we plan to implement combinatorial algorithms for all three problem variants for the most important special case~$s=2$.
Based on experience with previous implementations for \textsc{$2$-Club}~\cite{HKN15} and some of its robust variants~\cite{KNNP19} we are optimistic that these problems can be solved efficiently on sparse real-world instances.
\end{document}
\begin{document}
\sloppy
\title{The Intersection of Two Halfspaces Has \\
High Threshold Degree}
\date{}
\author{
{\sc Alexander~A.~Sherstov}\thanks{Department of Computer Sciences,
University of Texas at Austin, TX 78757 USA.\newline
{\Large \Letter}~$\mathtt{[email protected]}$}
}
\setcounter{page}{-1}
\maketitle
\thispagestyle{empty}
\pagestyle{plain}
\hyphenpenalty=200
\clubpenalty=100000
\widowpenalty=100000
\begin{abstract}
The \emph{threshold degree} of a Boolean function $f\colon\zoon\to\moo$
is the least degree of a real polynomial $p$ such that $f(x)\equiv\sign
p(x).$ We construct two halfspaces on $\zoon$ whose intersection
has threshold degree $\Theta(\sqrt n),$ an exponential improvement
on previous lower bounds. This solves an open problem due to
Klivans (2002) and rules out the use of perceptron-based
techniques for PAC learning the intersection of two halfspaces, a
central unresolved challenge in computational learning. We also
prove that the intersection of two majority functions has
threshold degree $\Omega(\log n),$ which is tight and settles a
conjecture of O'Donnell and Servedio (2003).
Our proof consists of two parts. First, we show that for any
nonconstant Boolean functions $f$ and $g,$ the intersection $f(x)\wedge g(y)$
has threshold degree $O(d)$ if and only if $\|f-F\|_\infty +
\|g-G\|_\infty < 1$ for some rational functions $F,$ $G$ of
degree $O(d).$ Second, we settle the least degree
required for approximating a halfspace and a majority function to
any given accuracy by rational functions.
Our technique further allows us to make progress on
Aaronson's challenge (2008) and contribute strong direct product
theorems for polynomial representations of composed Boolean functions
of the form $F(f_1,...,f_n).$ In particular, we give an improved
lower bound on the approximate degree of the AND-OR tree.
\end{abstract}
\thispagestyle{empty}
\tableofcontents
\section{Introduction}
\label{sec:intro}
Representations of Boolean functions by real polynomials play
an important role in theoretical computer science, with applications
ranging from complexity theory to quantum computing and
learning theory. The surveys
in~\mbox{\cite{beigel93polynomial-method,saks93slicing,buhrman-dewolf02DT-survey,dual-survey}} offer
a glimpse into the diversity of these results and techniques.
We study one such representation scheme known as
\emph{sign-representation}. Specifically, fix a Boolean
function $f\colon X\to\moo$ for some finite set $X\subset \Re^n,$ such as
the hypercube $X=\moon.$
The \emph{threshold degree} of $f,$ denoted $\degthr(f),$ is the
least degree of a polynomial $p(x_1,\dots,x_n)$ such that
\[ f(x) = \sign p(x) \]
for each $x\in X.$ In other words, the threshold degree of $f$ is
the least degree of a real polynomial that represents $f$ in sign.
The formal study of this complexity measure and of sign-representations in
general began in 1969 with the seminal work of Minsky and
Papert~\cite{minsky88perceptrons}, who examined the threshold degree of several
common functions. Since then, sign-representations have
found a variety of applications in theoretical computer science.
Paturi and Saks~\cite{paturi-saks94rational} and later Siu et
al.~\cite{siu-roy-kailath94rational} used Boolean functions with
high threshold degree to obtain size-depth trade-offs for threshold
circuits. The well-known result, due to Beigel et
al.~\cite{beigel91rational}, that $\PP$ is closed under intersection
is also naturally interpreted in terms of threshold degree. In
another development, Aspnes et al.~\cite{aspnes91voting} used the
notion of threshold degree and its relaxations to obtain oracle
separations for $\mathsf{PP}$ and to give an insightful new proof
of classical lower bounds for $\mathsf{AC}^0.$ Krause and
Pudl{\'a}k~\cite{krause94depth2mod,KP98threshold} used random
restrictions to show that the threshold degree gives lower
bounds on the \emph{weight} and \emph{density} of perceptrons and
their generalizations, which are well-studied computational models.
Learning theory is another area in which the threshold degree of
Boolean functions is of considerable interest. Specifically,
functions with low threshold degree can be efficiently PAC
learned under arbitrary distributions via linear programming.
The current fastest algorithm for PAC learning polynomial-size
DNF formulas, due to Klivans and Servedio~\cite{KS01dnf}, is an
illustrative example: it is based precisely on an upper bound on
the threshold degree of this concept class.
The threshold degree has recently become a versatile
tool in communication complexity. The starting point in this line
of work is the Degree/Discrepancy
Theorem~\cite{sherstov07ac-majmaj,sherstov07quantum}, which states
that any Boolean function with high threshold degree induces a
communication problem with low \emph{discrepancy} and thus high
communication complexity in almost all models. This result was used
in~\cite{sherstov07ac-majmaj} to show the optimality of Allender's
simulation of $\AC^0$ by majority
circuits~\cite{allender89ac0tc0}, thus solving an open
problem of Krause and Pudl{\'ak}~\cite{krause94depth2mod}.
Known lower bounds on the threshold degree have
played an important role in recent
progress~\cite{sherstov07symm-sign-rank,RS07dc-dnf} on
\emph{unbounded-error} communication complexity, which is
considerably more powerful than the models above.
In summary, the threshold degree has a variety of applications in
circuit complexity, learning theory, and communication
complexity.
Nevertheless, analyzing the threshold degree has remained a difficult
task, and Minsky and Papert's \emph{symmetrization} technique
from 1969 has been essentially the only method available.
Unfortunately, symmetrization only applies to
symmetric Boolean functions and certain derivations thereof. In
a recent tutorial presented at the FOCS'08
conference, Aaronson~\cite{aaronson08tutorial} re-posed the
challenge of developing new analytic techniques for
multivariate real polynomials that represent Boolean functions.
We make significant progress on this challenge in the context of
sign-representation, contributing a number of strong direct
product theorems for the threshold degree. As an application, we
construct two halfspaces on $\zoon$ whose intersection has
threshold degree $\Omega(\sqrt n),$ which solves an open problem
due to Klivans~\cite{klivans-thesis} and rules out the use of
perceptron-based techniques for PAC learning the intersection of
even two halfspaces (a central unresolved challenge in
computational learning theory). We give a detailed description of
our results in
Sections~\ref{sec:general-compositions}--\ref{sec:results-hshs},
followed by a discussion of our techniques in
Section~\ref{sec:techniques}.
\subsection{Results for general compositions}
\label{sec:general-compositions}
Our first result is a general direct product theorem for the threshold
degree of composed functions.
\begin{theorem}[Threshold degree]
\label{thm:thrdeg-dp}
Consider functions
$f\colon X\to\moo$ and $F\colon \mook\to\moo,$ where
$X\subset\Re^n$ is a finite set. Then
\begin{align*}
\degthr(F(f,\dots,f)) \geq \degthr(F)\degthr(f).
\end{align*}
\end{theorem}
Theorem~\ref{thm:thrdeg-dp} gives the best possible
lower bound that depends on $\degthr(F)$ and $\degthr(f)$ alone. In
particular, the bound is tight whenever $F=\PARITY$ or
$f=\PARITY.$ To our knowledge, the only previous direct product
theorem of any kind for the threshold degree was the XOR lemma
in~\cite{odonnell03degree}, which states that the XOR of $k$
copies of a given function $f\colon X\to\moo$ has threshold
degree $k\degthr(f).$
We are able to generalize Theorem~\ref{thm:thrdeg-dp} to the notion
of \emph{$\epsilon$-approximate degree} $\degeps(F),$ which is the
least degree of a real polynomial $p$ with $\|F-p\|_\infty\leq\epsilon.$
This notion plays a fundamental role in complexity theory, learning
theory, and quantum computing
and was also re-posed as an analytic challenge in Aaronson's
tutorial~\cite{aaronson08tutorial}. We have:
\begin{theorem}[Approximate degree]
\label{thm:main-approx-dp}
Fix functions
$f\colon X\to\moo$ and $F\colon \mook\to\moo,$ where
$X\subset\Re^n$ is a finite set. Then for
$0<\epsilon<1,$
\begin{align*}
\deg_{\epsilon}(F(f,\dots,f)) \geq \deg_{\epsilon}(F)\degthr(f).
\end{align*}
\end{theorem}
Again, Theorem~\ref{thm:main-approx-dp} gives the best lower
bound that depends on $\deg_\eps(F)$ and $\degthr(f)$ alone. For
example, the stated bound is tight for any function $F$ when $f=\PARITY.$
In Section~\ref{sec:dp}, we prove various other results involving
bounded-error and small-bias approximation, as well as
compositions of the form $F(f_1,\dots,f_k)$ where $f_1,\dots,f_k$
may all be distinct.
We use Theorem~\ref{thm:main-approx-dp} to
obtain an improved lower bound on the approximate degree
of the well-studied AND-OR tree, given by
\begin{align}
f(x)=\bigvee_{i=1}^n
\bigwedge_{j=1}^n x_{ij}.
\label{eqn:and-or-def}
\end{align}
Prior to this work, the best lower bound was
$\Omega(n^{0.66\dots}),$ due to
Ambainis~\cite{ambainis05collision}. Preceding it were lower
bounds of $\Omega(\sqrt n)$ due to Nisan and
Szegedy~\cite{nisan-szegedy94degree} and $\Omega(\sqrt{n\log n})$
due to Shi~\cite{shi-linear}. We improve the standing lower bound
from $\Omega(n^{0.66\dots})$ to $\Omega(n^{0.75}),$ the best
upper bound being $O(n)$
due to H{\o}yer et al.~\cite{hoyer-mosca-dewolf03and-or-tree}.
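As a purely illustrative aside (not taken from the original analysis), under the convention that $-1$ represents ``true'' and $+1$ represents ``false,'' the AND-OR tree~(\ref{eqn:and-or-def}) can be evaluated as a minimum of row maxima:
\begin{verbatim}
# AND-OR tree on an n-by-n matrix x with entries in {-1,+1}, where -1 = "true":
# coordinatewise, AND = max and OR = min in this convention.
def and_or_tree(x):
    return min(max(row) for row in x)

assert and_or_tree([[+1, +1], [-1, -1]]) == -1   # some row is all -1 ("true")
assert and_or_tree([[-1, +1], [+1, -1]]) == +1   # no row is all -1
\end{verbatim}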
\begin{theorem}[AND-OR Tree]
\label{thm:main-and-or}
Define $f\colon \moo^{n^2}\to\moo$ by
\textup{(\ref{eqn:and-or-def})}.
Then
\begin{align*}
\deg_{1/3}(f) = \Omega(n^{0.75}).
\end{align*}
\end{theorem}
\noindent
Furthermore, the proof of Theorem~\ref{thm:main-and-or} is simpler
and more modular than the previous lower bound~\cite{ambainis05collision},
which was based on the collision and element distinctness problems.
\subsection{Results for specific compositions}
\label{sec:results-conj}
While Theorems~\ref{thm:thrdeg-dp}
and~\ref{thm:main-approx-dp} give the best lower bounds that depend on
$\degthr(F),$ $\degthr(f),$ and $\degeps(F)$ alone, much stronger
lower bounds can be derived in some cases by exploiting additional
structure of $F$ and $f.$
Consider the special but illustrative
case of the conjunction of two functions.
In other words, we are given functions $f\colon X\to\moo$ and
$g\colon Y\to\moo$ for some finite sets $X,Y\subset\Re^n$ and would like
to determine the threshold degree of their conjunction, $(f\wedge
g)(x,y) = f(x)\wedge g(y).$ A simple and elegant method for
sign-representing $f\wedge g,$ due to Beigel et
al.~\cite{beigel91rational}, is to use rational approximation.
Specifically, let $p_1(x)/q_1(x)$ and $p_2(y)/q_2(y)$ be rational
functions of degree $d$ that approximate $f$ and $g,$ respectively,
in the following sense:
\begin{align}
\label{eqn:approx-f-g}
\max_{x\in X} \left| f(x) - \frac{p_1(x)}{q_1(x)}\right| \,+ \,
\max_{y\in Y} \left| g(y) - \frac{p_2(y)}{q_2(y)}\right| \,<\, 1.
\end{align}
Letting $-1$ and $+1$ correspond to ``true'' and ``false,'' respectively,
we obtain:
\begin{align}
\label{eqn:pq}
f(x)\wedge g(y) &\equiv \sign\{1 + f(x)+g(y)\}
\rule{0mm}{7mm}\equiv \sign\BRACES{ 1 + \frac{p_1(x)}{q_1(x)} +
\frac{p_2(y)}{q_2(y)} }.
\end{align}
Multiplying the last expression in braces by the positive quantity
$q_1(x)^2q_2(y)^2$ gives
\begin{multline*}
\quad f(x)\wedge g(y) \equiv \sign\left\{ q_1(x)^2q_2(y)^2 \right.
\\\left.+p_1(x)q_1(x)q_2(y)^2 + p_2(y)q_1(x)^2q_2(y)\right\},\quad
\end{multline*}
whence $\degthr(f\wedge g) \leq 4d.$ In summary, if $f$ and $g$ can
be approximated as in (\ref{eqn:approx-f-g}) by rational functions of
degree at most $d,$ then the conjunction $f\wedge g$ has
threshold degree at most $4d.$
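As a quick numerical sanity check of this step (an illustration, not part of the argument), the following Python snippet verifies that $\sign(1+a+b)$ recovers $f(x)\wedge g(y)$ whenever $a$ and $b$ approximate the $\pm1$ values $f(x)$ and $g(y)$ with total error strictly less than~$1$:
\begin{verbatim}
import itertools, random

def conj(f, g):            # conjunction in the -1/+1 convention (-1 = "true")
    return -1 if (f == -1 and g == -1) else +1

def sgn(t):
    return -1 if t < 0 else +1

random.seed(0)
for f, g in itertools.product([-1, +1], repeat=2):
    for _ in range(10000):
        e1 = random.uniform(0, 0.999)        # total error budget below 1,
        e2 = random.uniform(0, 0.999 - e1)   # split between the two approximations
        a = f + random.choice([-1, +1]) * e1
        b = g + random.choice([-1, +1]) * e2
        assert sgn(1 + a + b) == conj(f, g)
\end{verbatim}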
It is natural to ask whether there exists a better
construction. After all, given a sign-representing polynomial
$p(x,y)$ for $f(x)\wedge g(y),$ there is no reason to expect
that $p$ arises from the sum of two independent rational functions
as in~(\ref{eqn:pq}). Indeed, $x$ and $y$ can be tightly coupled
inside $p(x,y)$ and can interact in complicated ways. Our next
result is that, surprisingly, no such interactions
can beat the simple construction above. In other words, the
sign-representation based on rational functions always achieves
the optimal degree, up to a small constant factor.
\begin{theorem}[Conjunctions of functions]
\label{thm:main-two}
Let $f\colon X\to\moo$ and $g\colon Y\to\moo$ be given functions, where
$X,Y\subset\Re^n$ are arbitrary finite sets.
Assume that $f$ and $g$ are not identically false. Let
$d=\degthr(f\wedge g).$ Then there exist degree-$4d$ rational
functions
\[ \frac{p_1(x)}{q_1(x)}, \quad
\frac{p_2(y)}{q_2(y)} \]
that satisfy \textup{(\ref{eqn:approx-f-g})}.
\end{theorem}
Via repeated applications of Theorem~\ref{thm:main-two}, we are
able to obtain analogous results for conjunctions $f_1\wedge f_2\wedge
\cdots \wedge f_k$ for any Boolean functions $f_1,f_2,\dots,f_k$
and any $k.$ Our results further extend to compositions $F(f_1,\dots,f_k)$
for various $F$ other than $F=\AND,$ such as
halfspaces and read-once AND/OR/NOT formulas.
We defer a more detailed description of these extensions
to Section~\ref{sec:h}, limiting this overview to the following
representative special case.
\begin{theorem}[Extension to multiple functions]
\label{thm:main-h}
Let $f_1,f_2,\dots,f_k$ be nonconstant Boolean functions on finite
sets $X_1,X_2,\dots,X_k\subset\Re^n,$ respectively. Let
$F\colon \mook\to\moo$ be a halfspace or a read-once AND/OR/NOT
formula. Assume that $F$ depends on
all of its $k$ inputs and that the composition $F(f_1,f_2,\dots,f_k)$
has threshold degree $d.$ Then there is a degree-$D$ rational
function $p_i/q_i$ on $X_i,$ $i=1,2,\dots,k,$ such that
\[ \sum_{i=1}^k \;
\max_{x_i\in X_i} \left| f_i(x_i) -
\frac{p_i(x_i)}{q_i(x_i)}\right|<1,\]
where $D=8d\log(2k).$
\end{theorem}
\noindent
Theorem~\ref{thm:main-h} is close to
optimal. For example, when $F=\AND,$ the upper bound on
$D$ is tight up to a factor of $\Theta(k\log k)$; for all $F$
in the statement of the theorem, it is tight up to a polynomial
in $k.$ See Remark~\ref{rem:tightness} for details.
Theorems~\ref{thm:main-two} and~\ref{thm:main-h} contribute a
strong technique for proving lower bounds on the threshold degree,
via rational approximation. Prior to this paper, it was a substantial
challenge to analyze the threshold degree even for
compositions of the form $f\wedge g.$ Indeed, we are only aware of
the work in~\cite{minsky88perceptrons,odonnell03degree}, where
the threshold degree of $f\wedge g$ was studied for the special
case $f=g=\text{\sc majority}.$ The main difficulty in those previous
works was analyzing the unintuitive interactions between $f$ and
$g.$ Our results remove this difficulty, even in the general
setting of compositions $F(f_1,f_2,\dots,f_k)$ for arbitrary
$f_1,f_2,\dots,f_k$ and various combining functions $F.$ Specifically,
Theorems~\ref{thm:main-two} and~\ref{thm:main-h} make it possible
to study the base functions $f_1,f_2,\dots,f_k$ individually,
in isolation. Once their rational approximability is understood,
one immediately obtains lower bounds on the threshold degree of
$F(f_1,f_2,\dots,f_k).$
\subsection{Results for intersections of two halfspaces}
\label{sec:results-hshs}
As an application of our direct product theorems in
Section~\ref{sec:results-conj}, we obtain the first strong lower bounds on
the threshold degree of
intersections of halfspaces, i.e., intersections of functions of
the form
$f(x)=\sign(\sum \alpha_i x_i-\theta)$ for some reals
$\alpha_1,\dots,\alpha_n,\theta.$
In light of Theorem~\ref{thm:main-two}, this task amounts to
proving that rational functions of low degree cannot approximate
a given halfspace. We accomplish this in the following theorem,
where the notation $\rdeg_\eps(f)$ stands for the least degree of a
rational function $A$ with $\|f-A\|_\infty\leq\epsilon.$
\begin{theorem}[Approximation of a halfspace]
\label{thm:main-approx-hs}
Let $f\colon\moo^{n^2}\to\moo$ be given by
\begin{align}
\label{eqn:halfspace-defined}
f(x) = \sign\PARENS{1 + \sum_{i=1}^{\phantom{A}n\phantom{A}}
\sum_{j=1}^n 2^i x_{ij}}.
\end{align}
Then for $1/3<\epsilon<1,$
\begin{align*}
\rdeg_\epsilon(f) = \Theta\PARENS{1 + \frac
{\phantom{A}n\phantom{A}}{\log\{1/(1-\epsilon)\}}}.
\end{align*}
Furthermore, for all $\epsilon>0,$
\begin{align*}
\rdeg_\epsilon(f) \leq 64 n\lceil \log_2 n \rceil +1.
\end{align*}
\end{theorem}
The function (\ref{eqn:halfspace-defined}) is known
as the \emph{canonical halfspace}.
Thus, Theorem~\ref{thm:main-approx-hs} shows that a rational
function of degree $\Theta(n)$ is necessary and sufficient for
approximating the canonical halfspace within $1/3.$ The upper bound
in this theorem follows readily from classical work by
Newman~\cite{newman64rational}, and it is
the lower bound that has required of us technical
novelty and effort. The best previous degree lower bound for
constant-error approximation for any halfspace was $\Omega(\log
n/\log\log n),$ obtained implicitly in~\cite{odonnell03degree}.
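For concreteness (the snippet below is merely an illustration of the definition, not of any result), the canonical halfspace~(\ref{eqn:halfspace-defined}) can be evaluated as follows:
\begin{verbatim}
# Canonical halfspace on an n-by-n matrix x with entries in {-1,+1};
# row i (1-based) carries weight 2^i, as in the definition above.
# The sum 1 + ... is always odd, hence never zero, so the sign is well defined.
def canonical_halfspace(x):
    t = 1 + sum(2 ** (i + 1) * x[i][j]
                for i in range(len(x)) for j in range(len(x)))
    return -1 if t < 0 else +1
\end{verbatim}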
We complement Theorem~\ref{thm:main-approx-hs} with a full solution for
another common halfspace, the majority function.
\begin{theorem}[Approximation of majority]
\label{thm:main-approx-maj}
Let $\MAJ_n\colon \moon\to\moo$ denote the majority function. Then
\begin{align*}
\rdeg_\epsilon(\MAJ_n)=
\ccases{
\displaystyle
\Theta\PARENS{
\log \BRACES{\frac{2n}{\log(1/\epsilon)}}
\cdot
\log \frac1\epsilon
},
&\qquad 2^{-n}<\epsilon<1/3,\\
\rule{0mm}{10mm}
\displaystyle
\Theta\PARENS{1 + \frac{\log n}{\log\{1/(1-\epsilon)\}}},
&\qquad 1/3\leq \epsilon<1.
}
\end{align*}
\end{theorem}
\noindent
Again, the upper bound in Theorem~\ref{thm:main-approx-maj} is
relatively straightforward. Indeed, an upper bound
of $O(\log\{1/\epsilon\}\log n)$ for $0<\epsilon<1/3$ was known and
used in the complexity literature long before our
work~\cite{paturi-saks94rational,
siu-roy-kailath94rational, beigel91rational, KOS:02}, and we only
somewhat tighten that upper bound and extend it to all $\epsilon.$
Our primary contribution in Theorem~\ref{thm:main-approx-maj},
then, is a matching \emph{lower} bound on the degree, which requires
considerable effort. The closest previous line of research concerns
\emph{continuous} approximation of the sign function on
$[-1,-\epsilon]\cup[\epsilon,1],$ which unfortunately gives no
insight into the discrete case. For example, the lower bound derived
by Newman~\cite{newman64rational} in the continuous setting is based
on the integration of relevant rational functions with respect to
a suitable weight function, which has no meaningful discrete analogue. We
discuss our solution in greater detail at the end of the introduction.
Our first application of these lower bounds for rational
approximation is to construct an intersection of two halfspaces
with high threshold degree. In what follows, the symbol $f\wedge f$
denotes the conjunction of two independent copies of a given
function $f.$
\begin{theorem}[Intersection of two halfspaces]
\label{thm:main-sign-hs}
Let $f\colon\moo^{n^2}\to\moo$ be given by
\textup{(\ref{eqn:halfspace-defined})}.
Then
\begin{align*}
\degthr(f\wedge f) = \Omega(n).
\end{align*}
\end{theorem}
The lower bound in Theorem~\ref{thm:main-sign-hs} is tight and
matches the construction by Beigel~et
al.~\cite{beigel91rational}. Prior to our work,
only an $\Omega(\log n/\log\log
n)$ lower bound was known on the threshold degree of the
intersection of two halfspaces, due to O'Donnell and
Servedio~\cite{odonnell03degree}, preceded in turn by an
$\omega(1)$ lower bound of Minsky and
Papert~\cite{minsky88perceptrons}. Note that
Theorem~\ref{thm:main-sign-hs} requires the difficult part of
Theorem~\ref{thm:main-approx-hs}, namely, the lower bound for the
rational approximation of a halfspace.
Theorem~\ref{thm:main-sign-hs} solves an open problem in
computational learning theory, due to
Klivans~\cite{klivans-thesis}. In more detail, recall that
Boolean functions with low threshold degree can be efficiently
PAC learned under arbitrary distributions, by expressing an
unknown function as a perceptron with unknown weights and solving
the associated linear program~\cite{KS01dnf,KOS:02}. Now, a
central challenge in the area is PAC learning the intersection of
two halfspaces under arbitrary distributions, which remains
unresolved despite much effort and solutions to some restrictions
of the problem, e.g.,~\cite{KwekPitt:98, Vempala:97, KOS:02,
KlivansServedio:04coltmargin}. Prior to this paper, it was
unknown whether intersections of two halfspaces on $\zoon$ are amenable to
learning via perceptron-based techniques. Specifically,
Klivans~\cite[\S7]{klivans-thesis} asked for a lower bound of
$\Omega(\log n)$ or better on the threshold degree of the
intersection of two halfspaces. We solve this problem with a
lower bound of $\Omega(\sqrt n),$ thereby ruling out the use of
perceptron-based techniques for learning the intersection of two
halfspaces in subexponential time. To our knowledge,
Theorem~\ref{thm:main-sign-hs} is the first unconditional,
structural lower bound for PAC learning the intersection of two
halfspaces; all previous hardness results for the problem were
based on complexity-theoretic assumptions~\cite{blum92trainingNN,
ABFKP:04, focs06hardness, khot-saket08hs-and-hs}. We
complement Theorem~\ref{thm:main-sign-hs} as follows.
\begin{theorem}[Mixed intersection]\quad
\label{thm:main-sign-mixed}
Let $f\colon\moo^{n^2}\to\moo$ be given by
\textup{(\ref{eqn:halfspace-defined})}. Let $g\colon\moo^{\lceil\sqrt
n\rceil}\to\moo$ be the majority function. Then
\begin{align*}
\degthr(f\wedge g) = \Theta(\sqrt n).
\end{align*}
\end{theorem}
\noindent
In words, even if one of the halfspaces in Theorem~\ref{thm:main-sign-hs}
is replaced by a majority function, the threshold degree will remain
high, resulting in a challenging learning problem. Finally, we have:
\begin{theorem}[Intersection of two majorities]
\label{thm:main-sign-maj}
Consider the majority function
$\MAJ_n\colon\moon\to\moo.$
Then
\begin{align*}
\degthr(\MAJ_n \wedge \MAJ_n) = \Omega(\log n).
\end{align*}
\end{theorem}
\noindent
Theorem~\ref{thm:main-sign-maj} is tight, matching the
construction of Beigel et al.~\cite{beigel91rational}. It
settles a conjecture of O'Donnell and
Servedio~\cite{odonnell03degree}, who gave a lower bound of
$\Omega(\log n/\log \log n)$ with completely different techniques
and conjectured that the true answer was $\Omega(\log n).$
Theorems~\ref{thm:main-sign-hs}--\ref{thm:main-sign-maj} are of
course also valid for disjunctions rather than
conjunctions. Furthermore, Theorems~\ref{thm:main-sign-hs}
and~\ref{thm:main-sign-maj} remain tight with respect to
conjunctions of any constant number of functions.
Finally, we believe that the lower bounds for rational approximation in
Theorems~\ref{thm:main-approx-hs} and~\ref{thm:main-approx-maj}
are of independent interest. Rational functions are classical
objects with various applications in
theoretical computer science~\cite{beigel91rational,
paturi-saks94rational,
siu-roy-kailath94rational,
KOS:02,
aaronson05postselection},
and yet our ability to prove strong lower bounds for
the rational approximation of Boolean functions has seen little
progress since the seminal work in 1964 by
Newman~\cite{newman64rational}. To illustrate some of the
counterintuitive phenomena involved in rational approximation,
consider the familiar function $\OR_n\colon\zoon\to\moo,$ given
by $\OR_n(x)=1 \Leftrightarrow x=0.$ A well-known result of Nisan
and Szegedy~\cite{nisan-szegedy94degree} states that
$\deg_{1/3}(\OR_n)=\Theta(\sqrt n),$ meaning that a polynomial of
degree $\Theta(\sqrt n)$ is required for approximation within
$1/3.$ At the same time, we claim that $\rdeg_\epsilon(\OR_n)=1$ for
all $0<\epsilon<1.$ Indeed, let
\begin{align*}
A_M(x) = \frac{1 - M\sum x_i}{1+M\sum x_i}.
\end{align*}
Then $\|\OR_n-A_M\|_\infty\to0$ as $M\to\infty.$
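As a purely illustrative sanity check (a sketch, not part of the formal
development), one can evaluate $A_M$ exhaustively on $\zoon$ for small $n$;
the uniform error equals $2/(M+1)$ and indeed tends to $0$:
\begin{verbatim}
# Evaluate the degree-1 rational approximant A_M to OR_n on {0,1}^n,
# where OR_n(x) = +1 iff x = 0 and -1 otherwise.
from itertools import product

def or_n(x):
    return 1 if sum(x) == 0 else -1

def A(x, M):
    s = sum(x)
    return (1 - M * s) / (1 + M * s)

n = 6
for M in (10, 100, 1000, 10000):
    err = max(abs(or_n(x) - A(x, M)) for x in product((0, 1), repeat=n))
    print(f"M = {M:6d}   max error = {err:.6f}")   # equals 2/(M+1)
\end{verbatim}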
This example illustrates that proving lower bounds for rational
functions can be a difficult and unintuitive task. We hope that
Theorems~\ref{thm:main-approx-hs} and~\ref{thm:main-approx-maj}
in this paper will spur further progress on the rational
approximation of Boolean functions.
\subsection{Our techniques \label{sec:techniques}}
We use one set of techniques to obtain our direct product
theorems for the threshold degree
(Sections~\ref{sec:general-compositions}
and~\ref{sec:results-conj}) and another, unrelated set of
techniques to analyze the rational approximation of halfspaces
(Section~\ref{sec:results-hshs}). We will give a separate overview
of the technical development in each case.
\noindent
\emph{Direct product theorems.}
Earlier lower bounds on the threshold degree of specific functions rely on
\emph{symmetrization}: one takes an assumed multivariate polynomial $p$ that
sign-represents a given symmetric function and converts $p$ into
a univariate polynomial, which is amenable to direct analysis. No
such approach works for the function compositions of this paper,
whose sign-representing polynomials can have complicated
structure and will not simplify in a meaningful way. This leads
us to pursue a completely different approach.
Specifically, our results are based on a thorough
study of the \emph{linear programming dual} of the sign-representation
problems at hand. The challenge in our work is to bring out, through
the dual representation, analytic properties that will obey a
direct product theorem. Depending on the context
(Theorem~\ref{thm:thrdeg-dp},~\ref{thm:main-approx-dp},
or~\ref{thm:main-two}), the property in question can be nonnegativity,
correlation, orthogonality, certain quotient structure, or a
combination of several of these. A strength of this approach
is that it works with the sign-representation problem itself (over which
we have considerable control) rather than an assumed sign-representing
polynomial (whose structure we can no longer control in a meaningful
way). We are confident that this approach will find other
applications.
As a concrete illustration, we briefly describe the idea behind
Theorem~\ref{thm:main-two}. The dual object with which we work there
is a certain problem of finding, in the positive spans of two given
matrices, two vectors whose corresponding entries have comparable
magnitude. By an analytic argument, we are able to prove that this
intermediate problem has the sought direct-product property, giving
the missing link between sign-representation and rational approximation.
Thus, by working with the dual, we \emph{implicitly} decompose any
sign-representation $p(x,y)$ of the function $f(x)\wedge g(y)$
into individual rational approximants for $f$ and $g,$ regardless
of how tightly the $x$ and $y$ parts are coupled inside $p.$
\noindent
\emph{Rational approximation.}
Our proof of Theorem~\ref{thm:main-approx-hs}
is built around two key ideas.
The first is a new technique for placing
lower bounds on the degree of a given polynomial
$p\in\Re[x_1,x_2,\dots,x_n]$ with prescribed approximate behavior,
whereby one constructs a degree-nonincreasing linear map
$M\colon\Re[x_1,x_2,\dots,x_n]\to\Re[x]$ and argues that $Mp$ has
high degree. This technique is crucial to proving
Theorem~\ref{thm:main-approx-hs}, which
is not amenable to standard techniques such
as symmetrization. As applied in this work, the technique amounts
to constructing random variables $\xbold_1,\xbold_2,\dots,\xbold_n$ in
Euclidean space that, on the one hand, satisfy the linear dependence
$\sum 2^i\xbold_i\equiv \zbold$ for a suitably fixed vector $\zbold$ and,
on the other hand, in expectation look independent to any low-degree
polynomial $p\in\Re[x_1,x_2,\dots,x_n].$ We pass, then, from $p$ to a
univariate polynomial by observing that
$\Exp[p(\xbold_1,\dots,\xbold_n)]=q(\zbold)$ for some univariate polynomial
$q$ of degree no greater than the degree of $p.$ This technique is
a substantial departure from previous methods and shows promise
on other problems involving approximation by polynomials or rational
functions.
Second, we are able to prove that the rational approximation of the
sign function has a self-reducibility property on the discrete
domain. More specifically, we are able to give an explicit solution
to the \emph{dual} of the rational approximation problem by
distributing the nodes as in known positive results. What makes
this program possible in the first place is our ability to zero out
the dual object on the complementary domain, which is where the
above map $M\colon\Re[x_1,x_2,\dots,x_n]\to\Re[x]$ plays a crucial
role. This dual approach, too, departs entirely from previous
analyses. In particular, recall that Newman's lower-bound analysis
is specialized to the continuous domain and does not extend
to the setting of Theorem~\ref{thm:main-approx-maj}, let alone
Theorem~\ref{thm:main-approx-hs}.
\subsection*{Recent progress}
A recent follow-up paper~\cite{sherstov09opthshs} proves that
the intersection of two halfspaces on $\zoon$ has threshold
degree $\Theta(n),$ improving on the lower bound of $\Omega(\sqrt
n)$ in this work.
We have also learned that the inequality
$\deg_\eps(F(f,\dots,f))\geq\degeps(F)\degthr(f)$ was derived
independently by Lee~\cite{lee09formulas} in a recent work on
read-once Boolean formulas.
\section{Preliminaries}
Throughout this work, the symbol $t$ refers to a real variable, whereas
$u,$ $v,$ $w,$ $x,$ $y,$ $z$ refer to vectors in $\Re^n$ and in
particular in $\moon.$ We adopt the following standard definition
of the sign function:
\begin{align*}
\sign t =
\ccases{
-1, &t<0, \\
0, &t=0, \\
1, &t>0.
}
\end{align*}
We will also have occasion to use the following modified sign function:
\[
\Sign t =
\ccases{
-1, &t<0, \\
1, &t\geq0.
}
\]
Equations and inequalities involving vectors in $\Re^n,$
such as $x<y$ or $x\geq0,$ are to be interpreted component-wise, as
usual.
Throughout this manuscript, we view Boolean functions as mappings
$f\colon X\to\moo$ for some finite set $X,$ where $-1$ and $+1$ correspond to
``true'' and ``false,'' respectively. If $\mu_1,\dots,\mu_k$ are
probability distributions on finite sets $X_1,\dots, X_k,$ respectively,
then $\mu_1\times\cdots\times\mu_k$ stands for the probability
distribution on $X_1\times\cdots\times X_k$ given by
\[ (\mu_1\times\cdots\times\mu_k)(x_1,\dots,x_k) = \prod_{i=1}^k\mu_i(x_i).
\]
The majority function on $n$ bits, $\MAJ_n\colon\moon\to\moo,$ is given
by
\[ \MAJ_n(x) = \ccases{
1, & \sum x_i > 0,\\
-1, &\text{otherwise.}
}\]
The symbol $P_k$ stands for the family of all univariate real polynomials
of degree up to $k.$ The following combinatorial identity is well-known.
\begin{fact}
\label{fact:comb}
For every integer $n\geq1$ and every polynomial $p\in P_{n-1},$
\[ \sum_{i=0}^n {n\choose i} (-1)^i p(i) = 0. \]
\end{fact}
\noindent
This fact can be verified by repeated differentiation
of the real function
\[(t-1)^n = \sum_{i=0}^n {n\choose i} (-1)^{n-i} t^i\]
at $t=1,$ as explained in~\cite{odonnell03degree}.
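As a quick illustrative instance (ours, not from the cited source): take $n=3$
and $p(t)=t^2.$ Then
\[ \sum_{i=0}^3 {3\choose i} (-1)^i i^2 = 0 - 3\cdot 1 + 3\cdot 4 - 1\cdot 9 = 0, \]
in agreement with Fact~\ref{fact:comb}, since $\deg p=2\leq n-1.$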
For a real function $f$ on a finite set $X,$ we write
$\|f\|_\infty=\max_{x\in X} |f(x)|.$
For a subset $X\subseteq\Re^n,$ we adopt the notation $-X=\{-x : x\in
X\}.$ We say that a set $X\subseteq\Re^n$ is \emph{closed under
negation} if $X=-X.$ Given a function
$f\colon X\to\Re,$ where $X\subseteq\Re^n$ is closed under
negation, we say that $f$ is
\emph{odd} (respectively, \emph{even}) if $f(-x)=-f(x)$ for all
$x\in X$ (respectively, $f(-x)=f(x)$ for all $x\in X$).
Given functions $f\colon X\to\moo$ and $g\colon Y\to\moo,$ recall that the
function $f\wedge g\colon X\times Y\to\moo$ is given by $(f\wedge g)(x,y)
= f(x)\wedge g(y).$ The function $f\vee g$ is defined analogously.
Observe that in this notation, \mbox{$f\wedge f$} and $f$ are completely
different functions, the former having domain $X\times X$ and the
latter $X.$ These conventions extend in the obvious way to any
number of functions. For example, $f_1\wedge f_2\wedge \cdots\wedge
f_k$ is a Boolean function with domain $X_1\times X_2\times\cdots\times
X_k,$ where $X_i$ is the domain of $f_i.$
Generalizing further, we let the symbol $F(f_1,\dots,f_k)$
denote the Boolean function on $X_1\times
X_2\times\cdots\times X_k$ obtained by composing a given function
$F\colon \mook\to\moo$ with the functions $f_1,f_2,\dots,f_k.$
Finally, recall that the negated function $\overline f\colon X\to\moo$
is given by $\overline f(x)=-f(x).$
\subsection{Sign-representation and approximation by polynomials}
By the \emph{degree} of a multivariate polynomial $p$ on $\Re^n,$
denoted $\deg p,$ we shall always mean the total degree of $p,$
i.e., the greatest total degree of any monomial of $p.$ The
degree of a rational function $p(x)/q(x)$ is the maximum of $\deg
p$ and $\deg q.$
Given a function $f\colon X\to\moo,$ where $X\subset\Re^n$
is a finite set, the \emph{threshold degree} $\degthr(f)$ of
$f$ is defined as the least degree of a multivariate polynomial $p$
such that $f(x)p(x)>0$ for all $x\in X.$ In words, the threshold
degree of $f$ is the least degree of a polynomial that represents
$f$ in sign. Equivalent terms in the literature
include ``strong degree''~\cite{aspnes91voting}, ``voting polynomial
degree''~\cite{krause94depth2mod}, ``polynomial threshold
function degree''~\cite{OS-extremal-ptf}, and ``sign
degree''~\cite{buhrman07pp-upp}. Crucial to understanding the
threshold degree is the following result, which is a well-known
corollary to Gordan's transposition theorem~\cite{gordan1873lp}.
\begin{theorem}[Gordan~\cite{gordan1873lp}]
\label{thm:gordan}
Let $X\subset\Re^n$ be a finite set, $f\colon X\to\moo$ a given
function. Then $\degthr(f)>d$ if and only if there exists a
probability distribution $\mu$ on $X$ such that
\begin{align*}
\sum_{x\in X}\mu(x)f(x)p(x)=0
\end{align*}
for every polynomial $p$ of degree up to $d.$
Equivalently, $\degthr(f)>d$ if and only if there exists a map
$\psi\colon X\to\Re,$ $\psi\not\equiv 0,$ such that $f(x)\psi(x)\geq
0$ on $X$ and
\begin{align*}
\sum_{x\in X}\psi(x)p(x)=0
\end{align*}
for every polynomial $p$ of degree up to $d.$
\end{theorem}
\noindent
Theorem~\ref{thm:gordan} has a short proof using linear
programming duality, as explained
in~\cite[\S2.2]{sherstov07ac-majmaj}.
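To make this duality concrete, consider a small illustration (ours, not taken
from the works cited above). For odd $n,$ the degree-$1$ polynomial
$x_1+\cdots+x_n$ sign-represents $\MAJ_n,$ so $\degthr(\MAJ_n)=1.$ By
contrast, for the parity function $f(x_1,x_2)=x_1x_2$ on $\moo^2,$ the uniform
distribution $\mu$ is a dual witness in the sense of
Theorem~\ref{thm:gordan}:
\[ \sum_{x\in\moo^2}\mu(x)\,x_1x_2\cdot 1 = 0, \qquad
\sum_{x\in\moo^2}\mu(x)\,x_1x_2\cdot x_j = 0 \quad (j=1,2), \]
since every nonconstant monomial averages to zero under $\mu.$ Hence
$\degthr(x_1x_2)>1.$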
The threshold degree is closely related to another analytic
notion. Let $f\colon X\to\moo$ be given, for a finite subset
$X\subset\Re^n.$ The \emph{$\epsilon$-approximate degree} of $f,$
denoted $\degeps(f),$ is the least degree of a polynomial $p$
such that $\abs{f(x)-p(x)}\leq \epsilon$ for all $x\in X.$ The
relationship between the threshold degree and approximate degree
is an obvious one:
\begin{align}
\degthr(f) = \lim_{\epsilon\nearrow1} \deg_{\epsilon}(f).
\label{eqn:adeg-thrdeg}
\end{align}
We will need the following dual characterization of the
approximate degree.
\begin{theorem}
\label{thm:dual-approx}
Fix $\epsilon\geq0.$ Let $f\colon X\to\moo$ be given, $X\subset\Re^n$ a
finite set. Then $\degeps(f)>d$ if and only if there exists a
function $\psi\colon X\to\Re$ such that
\begin{align*}
&\sum_{x\in X}|\psi(x)| =1, \\
&\sum_{x\in X}\psi(x)f(x) > \epsilon,\\
\intertext{and, for every polynomial $p$ of degree up to $d,$}
&\sum_{x\in X}\psi(x)p(x)=0.
\end{align*}
\end{theorem}
Theorem~\ref{thm:dual-approx} follows readily from linear programming
duality, as explained in~\cite[\S3]{sherstov07quantum}.
Theorem~\ref{thm:gordan} can be derived from Theorem~\ref{thm:dual-approx}
in view of (\ref{eqn:adeg-thrdeg}).
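For intuition, the quantity $\degeps(f)$ can be computed for toy functions
directly from the primal linear program underlying
Theorem~\ref{thm:dual-approx}: minimize the uniform error over polynomials of
degree at most $d.$ The Python sketch below is illustrative only (it assumes
\texttt{scipy} is available) and is not part of the formal development:
\begin{verbatim}
# Least uniform error achievable by a degree-d polynomial approximant to a
# Boolean function f on {-1,1}^n, via the primal LP whose dual is the
# characterization above.
from itertools import combinations, product
import numpy as np
from scipy.optimize import linprog

def least_error(f, n, d):
    points = list(product((-1, 1), repeat=n))
    basis = [S for k in range(d + 1) for S in combinations(range(n), k)]
    A_ub, b_ub = [], []
    for x in points:
        chi = [float(np.prod([x[i] for i in S])) for S in basis]
        A_ub.append(chi + [-1.0])                 #  p(x) - t <=  f(x)
        b_ub.append(f(x))
        A_ub.append([-c for c in chi] + [-1.0])   # -p(x) - t <= -f(x)
        b_ub.append(-f(x))
    c_obj = [0.0] * len(basis) + [1.0]            # minimize the error t
    bounds = [(None, None)] * len(basis) + [(0, None)]
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

# OR on 4 bits in the -1/+1 convention of this paper (-1 encodes "true").
OR4 = lambda x: 1 if all(v == 1 for v in x) else -1
for d in range(5):
    print(d, round(least_error(OR4, 4, d), 4))
\end{verbatim}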
\subsection{Approximation by rational functions}
Consider a function $f\colon X\to\moo,$ where $X\subseteq \Re^n$
is an arbitrary set. For $d\geq0,$ we define
\[ R(f,d) \,= \,\inf_{\rule{0pt}{7pt}p,q} \,\sup_{x\in X}
\left\lvert f(x) - \frac{p(x)}{q(x)} \right\rvert,\]
where the infimum is over multivariate polynomials $p$ and $q$ of degree up
to $d$ such that $q$ does not vanish on $X.$
In words,
$R(f,d)$ is the least error in an approximation of $f$ by a
multivariate rational function of degree up to $d.$
We will also take an interest in the related quantity
\[ R^+(f,d) \,= \,\inf_{\rule{0pt}{7pt}p,q} \,\sup_{x\in X}
\left\lvert f(x) - \frac{p(x)}{q(x)} \right\rvert,\]
where the infimum is over multivariate polynomials $p$ and $q$ of
degree up to $d$ such that $q$ is positive on $X.$ These two
quantities are related in a straightforward way:
\begin{equation}
\label{eqn:rational-positive-denominator}
R^+(f,2d) \leq R(f,d) \leq R^+(f,d).
\end{equation}
The second inequality here is trivial. The first follows from the fact that
every rational approximant $p(x)/q(x)$ of degree $d$ gives rise to a
degree-$2d$ rational approximant with the same error and a positive
denominator, namely, $\{p(x)q(x)\}/q(x)^2.$
The infimum in the definitions of $R(f,d)$ and $R^+(f,d)$ cannot
in general be replaced by a minimum~\cite{rivlin-book}, even when
$X$ is a finite subset of $\Re.$ This is in contrast to the more
familiar setting of a finite-dimensional normed linear space,
where least-error approximants are guaranteed to exist.
We now recall Newman's classical construction of a rational
approximant to the sign function~\cite{newman64rational}.
\begin{theorem}[Newman]
\label{thm:newman-approx}
Fix $N>1.$ Then for every integer $k\geq1,$ there is a rational
function $S(t)$ of degree $k$ such that
\begin{align}
\max_{1\leq|t|\leq N} |\sign t - S(t)|\leq 1-N^{-1/k}
\label{eqn:newman-approx}
\end{align}
and the denominator of $S$ is positive on $[-N,-1]\cup[1,N].$
\end{theorem}
\begin{proof}
[Proof \textup{(adapted from Newman~\cite{newman64rational})}]
Consider the univariate polynomial
\[p(t) = \prod_{i=1}^k \big(t+N^{(2i-1)/(2k)}\big).\]
By examining every interval $[N^{i/(2k)},N^{(i+1)/(2k)}],$ where
$i=0,1,\dots,2k-1,$ one sees that
\begin{align}
p(t) \geq\frac{N^{1/(2k)}+1}{N^{1/(2k)}-1}\, \lvert p(-t)\rvert,
\qquad 1\leq t\leq N.
\label{eqn:newman-balance}
\end{align}
Letting
\begin{align*}
S(t)=N^{-1/(2k)}\cdot \frac{p(t)-p(-t)}{p(t)+p(-t)},
\end{align*}
one has (\ref{eqn:newman-approx}). The positivity of the
denominator of $S$ on $[-N,-1]\cup[1,N]$ is a consequence of
(\ref{eqn:newman-balance}).
\end{proof}
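As an illustrative numerical check of Theorem~\ref{thm:newman-approx} (a
sketch only, not part of the argument), one can evaluate Newman's approximant
on a fine grid; computing the ratio $p(-t)/p(t)$ factor by factor keeps the
arithmetic stable:
\begin{verbatim}
# Numerical check of Newman's rational approximant to sign(t) on [1, N]
# (the approximant is odd, so the error on [-N,-1] is identical):
# the observed error should not exceed 1 - N**(-1/k).
from math import prod

def newman_error(N, k, grid=20000):
    a = [N ** ((2 * i - 1) / (2 * k)) for i in range(1, k + 1)]
    def S(t):
        r = prod((-t + ai) / (t + ai) for ai in a)   # r = p(-t)/p(t), |r| < 1 here
        return N ** (-1 / (2 * k)) * (1 - r) / (1 + r)
    ts = [1.0 + (N - 1.0) * j / (grid - 1) for j in range(grid)]
    err = max(abs(1.0 - S(t)) for t in ts)
    return err, 1 - N ** (-1 / k)

for N, k in [(10, 2), (100, 3), (1000, 5)]:
    err, bound = newman_error(N, k)
    print(f"N = {N:5d}, k = {k}:  error {err:.4f}  <=  bound {bound:.4f}")
\end{verbatim}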
A useful consequence of Newman's theorem is the following general
statement on decreasing the error in rational approximation.
\begin{theorem}
\label{thm:error-boosting}
Let $f\colon X\to\moo$ be given, where $X\subseteq\Re^n.$ Let $d$ be a given integer,
$\epsilon=R(f,d).$ Then for $k=1,2,3,\dots,$
\begin{align*}
R(f,kd) \leq 1 - \PARENS{\frac{1-\epsilon}{1+\epsilon}}^{1/k}.
\end{align*}
\end{theorem}
\begin{proof}
We may assume that $\epsilon<1,$ the theorem being trivial
otherwise. Let $S$ be the degree-$k$ rational approximant to the
sign function for $N=(1+\epsilon)/(1-\epsilon),$ as constructed
in Theorem~\ref{thm:newman-approx}. Let
$A_1,A_2,\dots,A_m,\dots$ be a sequence of rational functions on
$X$ of degree at most $d$ such that $\sup_X|f-A_m|\to\epsilon$ as
$m\to\infty.$ The theorem follows by considering the sequence of
approximants $S(A_m(x)/\{1-\epsilon\})$ as $m\to\infty.$
\end{proof}
\subsection{Symmetrization}
Let $S_n$ denote the symmetric group on $n$ elements. For $\sigma\in
S_n$ and $x\in\Re^n$, we denote $\sigma
x=(x_{\sigma(1)},\ldots,x_{\sigma(n)})\in\Re^n.$ The following is
a generalized form of Minsky and Papert's \emph{symmetrization
argument}~\cite{minsky88perceptrons}, as formulated in~\cite{RS07dc-dnf}.
\begin{proposition}[cf.~Minsky and Papert]
\label{prop:symm-argument}
Let $n_1,\dots,n_k$ be positive integers. Let
$\phi\colon\zoo^{n_1}\times\cdots\times\zoo^{n_k}\to\Re$ be a
polynomial of degree $d.$ Then there is a polynomial $p$ on $\Re^k$
of degree at most $d$ such that for all $x$ in the domain of $\phi,$
\begin{align*}
\Exp_{\sigma_1\in S_{n_1},\dots,\sigma_k\in S_{n_k}}
\left[\phi\big(\sigma_1x_1,\dots,\sigma_kx_k\big)\right] =
p\big(\dots,x_{i,1}+\cdots+x_{i,n_i},\dots\big).
\end{align*}
\end{proposition}
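For a small illustration (ours) with $k=1$: take $\phi(x_1,x_2)=x_1x_2$ on
$\zoo^2.$ Averaging over $S_2$ leaves $\phi$ unchanged, and since $x_i^2=x_i$
on $\zoo,$
\[ \Exp_{\sigma\in S_2}\left[\phi(\sigma x)\right] = x_1x_2
   = \frac{(x_1+x_2)^2-(x_1+x_2)}{2} = p(x_1+x_2)
   \quad\text{for } p(s)=\frac{s^2-s}{2}, \]
a univariate polynomial of degree $2=\deg\phi$ in the sum $x_1+x_2,$ exactly
as Proposition~\ref{prop:symm-argument} prescribes.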
We now obtain a form of the symmetrization argument for
rational approximation.
\begin{proposition}
\label{prop:symm-rational}
Let $n_1,\dots,n_k$ be positive integers, and $\alpha,\beta$ distinct
reals. Let
$G\colon\{\alpha,\beta\}^{n_1}\times\cdots\times\{\alpha,\beta\}^{n_k}\to\moo$
be a function such that $G(x_1,\dots,x_k)\equiv G(\sigma_1
x_1,\dots,\sigma_k x_k)$ for all $\sigma_1\in S_{n_1},\dots,\sigma_k\in
S_{n_k}.$ Let $d$ be a given integer. Then for each $\epsilon>R^+(G,d),$
there exists a rational function $p/q$ on $\Re^k$ of degree at most
$d$ such that for all $x$ in the domain of $G,$ one has
\begin{align*}
\left\lvert G(x)-\frac
{p(\dots,x_{i,1}+\cdots+x_{i,n_i},\dots)}
{q(\dots,x_{i,1}+\cdots+x_{i,n_i},\dots)}
\right\rvert<\epsilon
\end{align*}
and $q(\dots,x_{i,1}+\cdots+x_{i,n_i},\dots)>0.$
\end{proposition}
\begin{proof}
Clearly, we may assume that $\epsilon<1.$
Using the linear bijection $(\alpha,\beta)\leftrightarrow (0,1)$ if
necessary, we may further assume that $\alpha=0$ and $\beta=1.$
Since $\epsilon>R^+(G,d),$ there are polynomials $P,Q$
of degree up to $d$ such that for all $x$ in the domain of $G,$ one
has $Q(x)>0$ and
\begin{align*}
(1-\epsilon)Q(x)<G(x)P(x)<(1+\epsilon)Q(x).
\end{align*}
By Proposition~\ref{prop:symm-argument}, there exist polynomials
$p,q$ on $\Re^k$ of degree at most $d$ such that
\begin{align*}
\Exp_{\sigma_1\in S_{n_1},\ldots,\sigma_k\in S_{n_k}}
\left[P\big(\sigma_1x_1,\dots,\sigma_kx_k\big)\right] =
p\big(\dots,x_{i,1}+\cdots+x_{i,n_i},\dots\big)
\end{align*}
and
\begin{align*}
\Exp_{\sigma_1\in S_{n_1},\ldots,\sigma_k\in S_{n_k}}
\left[Q\big(\sigma_1x_1,\dots,\sigma_kx_k\big)\right] =
q\big(\dots,x_{i,1}+\cdots+x_{i,n_i},\dots\big)
\end{align*}
for all $x$ in the domain of $G.$ Then the required properties of
$p$ and $q$ follow immediately from the corresponding properties of $P$ and
$Q.$
\end{proof}
\section{Direct product theorems}
In the several subsections that follow, we prove our direct
product theorems for polynomial representations of
composed Boolean functions. General compositions are treated in
Section~\ref{sec:dp}, followed by a study of conjunctions and
other specific compositions in
Sections~\ref{sec:auxiliary}--\ref{sec:additional-conj}.
\subsection{General compositions}
\label{sec:dp}
We begin our study with general
compositions of the form $F(f_1,\dots,f_k).$ Our focus in this
section will be on results that depend only on the threshold or
approximate degrees of $F,f_1,\dots,f_k.$ In later sections, we will
exploit additional structure of the functions involved. The following result
settles Theorems~\ref{thm:thrdeg-dp} and~\ref{thm:main-approx-dp}
from the Introduction.
\begin{theorem}
\label{thm:thrdeg-adeg-dp}
Let $f\colon X\to\moo$ and $F\colon \mook\to\moo$ be given functions, where
$X\subset\Re^n$ is a finite set. Then for $0<\epsilon<1,$
\begin{align}
\deg_\eps(F(f,\dots,f)) \geq \deg_\eps(F)\degthr(f).
\label{eqn:adeg-dp}
\end{align}
In particular,
\begin{align}
\degthr(F(f,\dots,f)) \geq \degthr(F)\degthr(f).
\label{eqn:thrdeg-dp}
\end{align}
\end{theorem}
\begin{proof}
Recall that the threshold degree is a limiting case of the
approximate degree, as given by (\ref{eqn:adeg-thrdeg}).
Hence, one obtains (\ref{eqn:thrdeg-dp}) by letting $\epsilon\nearrow1$
in (\ref{eqn:adeg-dp}). In the remainder of the proof, we focus
on (\ref{eqn:adeg-dp}) alone.
Put $D=\degeps(F)$ and $d=\degthr(f).$ By
Theorem~\ref{thm:dual-approx}, there
exists a map $\Psi\colon \mook\to\Re$ such that
\begin{align}
&\sum_{z\in\mook}\abs{\Psi(z)} =1, \label{eqn:Psi-bounded}\\
&\sum_{z\in\mook}\Psi(z)F(z) > \epsilon \label{eqn:Psi-correl},
\end{align}
and $\sum\Psi(z)p(z)=0$ for every polynomial $p$ of
degree less than $D.$ By Theorem~\ref{thm:gordan},
there exists a distribution $\mu$ on $X$ such that
$\sum\mu(x)f(x)p(x)=0$ for every polynomial $p$ of
degree less than $d.$
Now, define $\zeta\colon X^k\to\Re$ by
\begin{align*}
\zeta(\dots,x_i,\dots)
= 2^k\Psi(\dots,f(x_i),\dots) \prod_{i=1}^k \mu(x_i).
\end{align*}
We claim that
\begin{align}
\label{eqn:zeta-orthogonality}
\sum_{X^k} \zeta(\dots,x_i,\dots)p(\dots,x_i,\dots) = 0
\end{align}
for every polynomial $p$ of degree less than $Dd.$ By linearity,
it suffices to consider a polynomial $p$ of the form $p(\dots,x_i,\dots)
= \prod p_i(x_i),$ where $\sum \deg p_i<Dd.$ Since $\Psi$ is
orthogonal on $\mook$ to all polynomials of degree less than $D,$
we have the representation
\begin{align*}
\Psi(z) = \sum_{\substack{S\subseteq\{1,\dots,k\},\\|S|\geq D}} \hat\Psi(S)
\prod_{i\in S} z_i
\end{align*}
for some reals $\hat\Psi(S).$ As a result,
\begin{multline}
\sum_{X^k} \zeta(\dots,x_i,\dots)p(\dots,x_i,\dots) \\
=2^k \sum_{|S|\geq D} \hat\Psi(S)
\prod_{i\in S} \undercbrace{\PARENS{\sum_{x_i\in
X}^{\phantom{A}} \mu(x_i)f(x_i)p_i(x_i)}}
\prod_{i\notin S}\PARENS{\sum_{x_i\in X}^{\phantom{A}}
\mu(x_i)p_i(x_i)}.
\label{eqn:big-sum}
\end{multline}
Since $\sum \deg p_i<Dd,$ the pigeonhole principle implies that
$\deg p_i<d$ for more than $k-D$ indices $i\in\{1,\dots,k\}.$ As a
result, for each set $S$ in the outer summation of (\ref{eqn:big-sum}),
at least one of the underbraced factors vanishes (recall that
$f$ is orthogonal on $X$ with respect to $\mu$
to all polynomials of degree less than
$d$). This gives (\ref{eqn:zeta-orthogonality}).
We may assume that $f$
is not a constant function, the theorem being trivial otherwise.
It follows that $\degthr(f)\geq1$ and $\sum_X \mu(x)f(x)=0.$
Now, define a product distribution $\lambda$ on $X^k$ by
$\lambda(\dots,x_i,\dots)=\prod \mu(x_i).$ Since $\sum_X
\mu(x)f(x)=0,$ it follows that the string
$(\dots,f(x_i),\dots)$ is distributed uniformly on $\mook$
when $(\dots,x_i,\dots)\sim\lambda.$ As a result,
\begin{align}
\sum_{X^k} |\zeta(\dots,x_i,\dots)|
= 2^k \Exp_{z\in\mook}[\abs{\Psi(\dots,z_i,\dots)}]=1,
\label{eqn:zeta-bounded}
\end{align}
where the last equality holds by (\ref{eqn:Psi-bounded}).
Similarly,
\begin{multline}
\sum_{X^k} \zeta(\dots,x_i,\dots)F(\dots,f(x_i),\dots)\\
= 2^k
\Exp_{z\in\mook}[\Psi(\dots,z_i,\dots)F(\dots,z_i,\dots)]>\epsilon,
\qquad
\label{eqn:zeta-correl}
\end{multline}
where the inequality holds by (\ref{eqn:Psi-correl}).
Now (\ref{eqn:adeg-dp}) follows from
(\ref{eqn:zeta-orthogonality}), (\ref{eqn:zeta-bounded}),
(\ref{eqn:zeta-correl}), and Theorem~\ref{thm:dual-approx}.
\end{proof}
\begin{remark*}
In Theorem~\ref{thm:thrdeg-adeg-dp} and elsewhere in this paper,
we consider Boolean functions on finite subsets of $\Re^n,$ which
is the setting of primary interest in computational complexity.
It is useful to keep in mind, however, that approximation and
sign-representation problems on compact infinite sets and other
well-behaved infinite sets are easily reduced to the finite case.
\end{remark*}
We now
consider the so-called AND-OR tree, given by
$f(x)=\bigvee_{i=1}^n \bigwedge_{j=1}^n x_{ij}.$ We improve the
standing lower bound on the approximate degree of $f$ from
$\Omega(n^{0.66\dots})$ to $\Omega(n^{0.75}),$ the best upper
bound being $O(n).$
\begin{restatetheorem}{thm:main-and-or}{\textsc{restated}}
Let $f\colon \moo^{n^2}\to\moo$ be given by
$
f(x) = \bigvee_{i=1}^n \bigwedge_{j=1}^n x_{ij}.
$
Then
\begin{align*}
\deg_{1/3}(f) = \Omega(n^{0.75}).
\end{align*}
\end{restatetheorem}
\begin{proof}
Without loss of generality, assume that $n=4m^2$ for some integer $m.$
Define $g\colon\moo^{4m^3}\to\moo$ by
\begin{align*}
g(x) = \bigvee_{i=1}^m \bigwedge_{j=1}^{4m^2} x_{ij}.
\end{align*}
Let $G\colon\moo^{4m}\to\moo$ be given by $G(x)=x_1\vee\cdots\vee
x_{4m}.$
A well-known result of Minsky and Papert~\cite{minsky88perceptrons}
states that $\degthr(g)=m.$ Also, Nisan and
Szegedy~\cite{nisan-szegedy94degree} proved that
$\deg_{1/3}(G)=\Theta(\sqrt m).$ Since $f=G(g,\dots,g),$ it follows
by Theorem~\ref{thm:thrdeg-adeg-dp} that $\deg_{1/3}(f)=\Omega(m\sqrt
m),$ as desired.
\end{proof}
We now further develop the ideas of
Theorem~\ref{thm:thrdeg-adeg-dp} to
obtain a more general result on the approximation of composed
functions by polynomials. This generalization is based on a
combinatorial property of Boolean functions known as
\emph{certificate complexity}. For a string $x\in\mook$ and a
set $S\subseteq\onetok$ whose distinct elements are
$i_1<i_2<\cdots<i_{|S|},$ we adopt the notation $x|_S =
(x_{i_1},x_{i_2},\dots,x_{i_{|S|}})\in\zoo^{|S|}.$ For a Boolean
function $F\colon\mook\to\moo$ and a point $x\in\mook,$ the
\emph{certificate complexity of $F$ at $x,$} denoted $C_x(F),$ is
the minimum size of a subset $S\subseteq\onetok$ such that
$F(x)=F(y)$ for all $y\in\mook$ with $x|_S=y|_S.$ The
\emph{certificate complexity of $F,$} denoted $C(F),$ is the
maximum $C_x(F)$ over all $x.$ In the degenerate case when $F$ is
constant, we have $C(F)=0.$ At the other extreme, the parity
function $F\colon\mook\to\moo$ satisfies $C(F)=k,$ which is the
maximum possible. The following proposition is immediate from
the definition of certificate complexity.
\begin{proposition}
\label{prop:certificate}
Let $F\colon\mook\to\moo$ be a given Boolean function. Let
$y\in\mook$ be a random string whose $i$th bit is
set to $-1$ with probability $\alpha_i$ and to $+1$ otherwise,
independently for each $i.$
Then for every $x\in\mook,$
\begin{align*}
\Prob_{y}[F(x_1,\dots,x_k)= F(x_1y_1,\dots,x_ky_k)] \geq
\min_{i_1<i_2<\cdots<i_{C_x(F)}}
\prod_{j=1}^{C_x(F)}(1-\alpha_{i_j}).
\end{align*}
\end{proposition}
\begin{proof}
Fix a set $S\subseteq\onetok$ of cardinality $C_x(F)$ such that
$F(x)=F(y)$ whenever $x|_S=y|_S.$ Then clearly
$\Prob_{y}[F(\dots,x_i,\dots)=F(\dots,x_iy_i,\dots)] \geq
\Prob_{y}[y|_S=(1,1,\dots,1)],$ and the bound follows.
\end{proof}
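For small $k,$ the quantity $C(F)$ can be computed by brute force directly
from the definition; the sketch below (illustrative only) does so for two toy
functions:
\begin{verbatim}
# Brute-force certificate complexity C(F) for F on {-1,1}^k:
# C_x(F) is the size of a smallest set S of coordinates such that fixing
# x on S already determines the value of F; C(F) is the maximum over x.
from itertools import combinations, product

def certificate_complexity(F, k):
    points = list(product((-1, 1), repeat=k))
    def C_x(x):
        for size in range(k + 1):
            for S in combinations(range(k), size):
                if all(F(y) == F(x) for y in points
                       if all(y[i] == x[i] for i in S)):
                    return size
        return k
    return max(C_x(x) for x in points)

AND3 = lambda z: -1 if all(v == -1 for v in z) else 1    # -1 encodes "true"
PARITY3 = lambda z: z[0] * z[1] * z[2]
print(certificate_complexity(AND3, 3))      # prints 3
print(certificate_complexity(PARITY3, 3))   # prints 3
\end{verbatim}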
We can now state and prove the desired generalization of
Theorem~\ref{thm:thrdeg-adeg-dp}.
\begin{theorem}
\label{thm:approx-dp}
Let $f\colon X\to\moo$ and $F\colon \mook\to\moo$ be given functions, where
$X\subset\Re^n$ is a finite set. Then for each $\epsilon,\delta>0,$
\begin{align}
\deg_{\epsilon+\eta-2+2(1-\delta)^{C(F)}}(F(f,\dots,f)) &\geq \deg_{\epsilon}(F)
\deg_{1-\delta}(f)
\label{eqn:approx-dp}
\end{align}
for some $\eta=\eta(\epsilon,F)>0.$
\end{theorem}
\begin{remark}
One recovers Theorem~\ref{thm:thrdeg-adeg-dp}
by letting $\delta\searrow0$ in (\ref{eqn:approx-dp}).
We also note that (\ref{eqn:approx-dp}) is
considerably stronger than Theorem~\ref{thm:thrdeg-adeg-dp}: functions
$\mook\to\moo$ are known, such as {\sc odd-max-bit}~\cite{beigel94perceptrons},
with threshold degree~$1$ and \mbox{$(1-\delta)$}-approximate degree
$k^{\Omega(1)}$ for $\delta$ as small as
$\delta=\exp\{-k^{\Omega(1)}\}.$ Another advantage of
Theorem~\ref{thm:approx-dp} is that the $(1-\delta)$-approximate
degree is easier to bound from below than the threshold
degree~\cite{beigel94perceptrons, vereshchagin95weight, mlj07sq,
podolskii07perceptrons, podolskii08perceptrons}, even for
$\delta$ exponentially small. For $\delta$ small, the
$(1-\delta)$-approximate degree is essentially equivalent to a
notion known as \emph{perceptron
weight}~\cite{minsky88perceptrons, beigel94perceptrons,
vereshchagin95weight, KP98threshold, KOS:02,
klivans-servedio06decision-lists, mlj07sq, buhrman07pp-upp,
podolskii07perceptrons, podolskii08perceptrons}.
\end{remark}
\begin{proof}[Proof of Theorem~\textup{\ref{thm:approx-dp}}]
Let $D=\degeps(F)$ and $d=\deg_{1-\delta}(f)>0.$
Theorem~\ref{thm:dual-approx} provides
a map $\Psi\colon \mook\to\Re$ such that
\begin{align}
&\sum_{z\in\mook}\abs{\Psi(z)} =1, \label{eqn:Psi-bounded2}\\
&\sum_{z\in\mook}\Psi(z)F(z) > \epsilon+\eta
\label{eqn:Psi-correl2}
\end{align}
for some $\eta=\eta(\epsilon,F)>0,$
and $\sum_{z\in\mook}\Psi(z)p(z)=0$ for every polynomial $p$ of
degree less than $D.$ Analogously, there
exists a map $\psi\colon X\to\Re$ such that
\begin{align}
&\sum_{x\in X}|\psi(x)| =1, \label{eqn:psi-bounded2}\\
&\sum_{x\in X}\psi(x)f(x) > 1-\delta, \label{eqn:psi-correl2}
\end{align}
and $\sum_{x\in X}\psi(x)p(x)=0$ for every polynomial $p$ of degree
less than $d.$
Define $\zeta\colon X^k\to\Re$ by
\begin{align*}
\zeta(\dots,x_i,\dots)
= 2^k\,\Psi(\dots,\Sign \psi(x_i),\dots) \prod_{i=1}^k \abs{\psi(x_i)}.
\end{align*}
By the same argument as in Theorem~\ref{thm:thrdeg-adeg-dp}, we have
\begin{align}
\sum_{X^k} \zeta(\dots,x_i,\dots)p(\dots,x_i,\dots) = 0
\label{eqn:zeta-orthogonality2}
\end{align}
for every polynomial $p$ of degree less than $Dd.$
Let $\mu$ be the distribution on $X^k$ given by $\mu(\dots,x_i,\dots)=\prod
\, \abs{\psi(x_i)}.$ Since $\psi$ is orthogonal to the constant
polynomial $1,$ the string $(\dots,\Sign\psi(x_i),\dots)$ is
distributed uniformly over $\mook$ when one samples
$(\dots,x_i,\dots)$ according to $\mu.$ As a result,
\begin{align}
\sum_{X^k}\, \abs{\zeta(\dots,x_i,\dots)}
= \sum_{z\in\mook}\abs{\Psi(z)}
= 1,
\label{eqn:zeta-bounded2}
\end{align}
where the final equality uses (\ref{eqn:Psi-bounded2}).
Define
$A_{+1}=\{x\in X:\psi(x)>0, f(x)=-1\}$ and
$A_{-1}=\{x\in X:\psi(x)<0, f(x)=+1\}.$ Since $\psi$ is orthogonal to the
constant polynomial~$1,$ it follows from (\ref{eqn:psi-bounded2}) that
\begin{align*}
\sum_{x:\psi(x)<0} \abs{\psi(x)} =
\sum_{x:\psi(x)>0} \abs{\psi(x)} = \frac12.
\end{align*}
In light of (\ref{eqn:psi-correl2}), we see that
$\sum_{x\in A_{+1}} \abs{\psi(x)} < \delta/2$ and
$\sum_{x\in A_{-1}} \abs{\psi(x)} < \delta/2.$ Now, for any given
$z\in\mook,$ the following two random variables are identically
distributed:
\begin{itemize}
\item the string $(\dots,f(x_i),\dots)$ when one chooses
$(\dots,x_i,\dots)\sim\mu$ and conditions on the event
that $(\dots,\Sign\psi(x_i),\dots)=z$;
\item the string $(\dots,y_iz_i,\dots),$ where $y\in\mook$ is a
random string whose $i$th bit independently takes on $-1$
with probability $2\sum_{x\in A_{z_i}}\abs{\psi(x)}<\delta.$
\end{itemize}
Proposition~\ref{prop:certificate}
now implies that for each $z\in\mook,$
\begin{multline}
\left\lvert\Exp_{\mu}\Big[F(\dots,f(x_i),\dots) \mid (\dots,\Sign
\psi(x_i),\dots)=z\Big]\right.\\
\left.\phantom{\Exp_{\mu}}-F(\dots,\Sign\psi(x_i),\dots)\right\rvert
\leq 2-2(1-\delta)^{C(F)}.\qquad
\label{eqn:error-prob}
\end{multline}
We are now prepared to complete the proof. We have
\begin{align}
\sum_{X^k} &\zeta(\dots,x_i,\dots)F(\dots,f(x_i),\dots)
\nonumber\\
&=
2^k\Exp_{\mu}\Big[\Psi(\dots,\Sign\psi(x_i),\dots)F(\dots,f(x_i),\dots)\Big]
\nonumber\\
&\geq \sum_{z\in\mook}\Psi(z)F(z) - 2\{1-(1-\delta)^{C(F)}\}
\sum_{z\in\mook}\abs{\Psi(z)}
&&\nonumber\\
&> \epsilon +\eta- 2 + 2(1-\delta)^{C(F)},
\label{eqn:zeta-correlated2}
\end{align}
where the last two inequalities use (\ref{eqn:error-prob}),
(\ref{eqn:Psi-bounded2}), and (\ref{eqn:Psi-correl2}). In
view of Theorem~\ref{thm:dual-approx}, the exhibited properties
(\ref{eqn:zeta-orthogonality2}), (\ref{eqn:zeta-bounded2}), and
(\ref{eqn:zeta-correlated2}) of $\zeta$ force (\ref{eqn:approx-dp}).
\end{proof}
Theorems~\ref{thm:thrdeg-adeg-dp}
and~\ref{thm:approx-dp} complement known \emph{upper} bounds for
the approximation of composed functions. The following theorem is
due to Buhrman et al.~\cite{BNRW05robust}, who
studied the approximation of Boolean functions with perturbed
inputs. We include the proof from~\cite{BNRW05robust} and
slightly generalize it to any given parameters.
\begin{theorem}[cf. Buhrman et al.]
\label{thm:approx-upper-F-f}
Fix functions $F\colon\mook\to\moo$ and $f\colon X\to\moo,$ where
$X\subset\Re^n$ is finite. Then for all $\Delta,\delta\geq 0,$
\begin{align}
\deg_{\eta(\Delta,\delta)}(F(f,\dots,f))
\leq \deg_\Delta(F)\deg_\delta(f),
\label{eqn:Delta-delta}
\end{align}
where
\begin{align}
\eta(\Delta,\delta) =
\Delta + 2-2\left(1-\frac{\delta}{1+\delta}\right)^{C(F)}.
\label{eqn:approx-upper-F-f-nu}
\end{align}
In particular,
\begin{multline}
\deg_{1/3}(F(f,\dots,f)) \\\leq \deg_{1/3}(F)\deg_{1/3}(f)\cdot O(\log
\{1+\deg_{1/3}(F)\}).\qquad
\label{eqn:approx-upper}
\end{multline}
\end{theorem}
\begin{proof}[Proof \textup{(adapted from Buhrman et al.)}.]
Fix polynomials $P$ and $p$ on $\mook$ and $X,$
respectively. As usual, $P$ may be assumed to be multilinear in
view of its domain.
Define $\Phi\colon X^k\to\Re$ by
\begin{align*}
\Phi(\dots,x_i,\dots) = P\PARENS{\dots,
\frac1{1+\|f-p\|_\infty}p(x_i),\dots}.
\end{align*}
Fix any input $(\dots,x_i,\dots)\in X^k$ and
consider a random variable $y\in\mook$ whose $i$th bit takes on
$-1$ with probability
\begin{align*}
\alpha_i = \frac12 - \frac{f(x_i)p(x_i)}{2(1+\|f-p\|_\infty)}
\leq \frac{\|f-p\|_\infty}{1+\|f-p\|_\infty},
\end{align*}
independently for each $i.$ Then
\begin{align*}
\lvert\Phi(\dots,x_i,\dots) &- F(\dots,f(x_i),\dots)\rvert\\
&= \left|\Exp_y[P(\dots,y_if(x_i),\dots) -
F(\dots,f(x_i),\dots)]\right|\\
&\leq \|P-F\|_\infty +
\left|\Exp_y[F(\dots,y_if(x_i),\dots) -
F(\dots,f(x_i),\dots)]\right|\\
& \leq \|P-F\|_\infty +
2 -
2\PARENS{1-\frac{\|f-p\|_\infty}{1+\|f-p\|_\infty}}^{C(F)},
\end{align*}
where the first and last steps in the derivation follow by the
multilinearity of $P$ and by Proposition~\ref{prop:certificate},
respectively. This completes the proof of
(\ref{eqn:Delta-delta}).
Taking $\Delta=1/6$ and $\delta=1/(12C(F))$ in
(\ref{eqn:Delta-delta}) gives
\begin{align*}
\deg_{1/3}(F(f,\dots,f)) \leq \deg_{1/6}(F)\deg_{1/(12C(F))}(f).
\end{align*}
Basic approximation theory~\cite{eremenko07sign} shows
that for each $\epsilon>0,$ there exists a univariate polynomial
of degree $O(\log \frac1\epsilon)$ that sends $[-\frac43,-\frac23]\to
[-1-\epsilon,-1+\epsilon]$ and $[\frac23,\frac43] \to
[1-\epsilon,1+\epsilon].$ As a result, we obtain
\begin{align*}
\deg_{1/3}(F(f,\dots,f)) \leq
\deg_{1/3}(F)\deg_{1/3}(f)\cdot O(\log\{1+C(F)\}),
\end{align*}
which is equivalent to (\ref{eqn:approx-upper}) because $C(F)$ is
known to be within a polynomial of $\deg_{1/3}(F)$ for every
Boolean function $F\colon\mook\to\moo,$ as discussed in detail in
the survey article~\cite{buhrman-dewolf02DT-survey}.
\end{proof}
\paragraph{Compositions with \emph{k} distinct functions.}
We now consider compositions of the form $F(f_1,\dots,f_k),$ where the
functions $f_1,\dots,f_k$ may all be distinct.
For a function $F\colon \mook\to\Re$ and a vector
$v=(v_1,\dots,v_k)$ of nonnegative integers, define the
\emph{$(\epsilon,v)$-approximate degree} $\deg_{\epsilon,v}(F)$ to be
the least $D$ for which there is a polynomial $P(x_1,\dots,x_k)$
with
\begin{align*}
P\in\Span\BRACES{\prod_{i\in S} x_i
:S\subseteq\onetok, \; \sum_{i\in S}^{\phantom{S}} v_i\leq D}
\end{align*}
and $\|F-P\|_\infty\leq\epsilon.$ Note that the
$\epsilon$-approximate degree of $F$ is the
$(\epsilon,v)$-approximate degree of $F$ for $v=(1,1,\dots,1).$
It is clear that
\begin{align*}
\deg_{\eps,v}(F) \geq \min_{i_1<i_2<\cdots<i_{\deg_{\eps}(F)}}
\{v_{i_1}+v_{i_2}+\cdots+v_{i_{\deg_{\eps}(F)}}\},
\end{align*}
with an arbitrary gap achievable between the right and left
members of the inequality.
We will also need the following generalized version of
Theorem~\ref{thm:dual-approx}, due to
Ioffe and Tikhomirov~\cite{ioffe-tikhomirov68duality}.
\begin{theorem}[Ioffe and Tikhomirov]
\label{thm:berr-morth}
Let $X$ be a finite set. Fix any family $\Phi$ of functions
$X\to\Re$ and an additional function
$f\colon X\to\Re.$ Then
\begin{align*}
\min_{\phi\in\Span(\Phi)} \|f - \phi\|_\infty =
\max_\psi \left\{ \sum_{x\in X} f(x)\psi(x) \right\},
\end{align*}
where the maximum is over all functions $\psi\colon X\to\Re$ such that
\begin{align*}
\sum_{x\in X} |\psi(x)| \leq 1
\end{align*}
and, for each $\phi\in\Phi,$
\begin{align*}
\sum_{x\in X}\phi(x)\psi(x)=0.
\end{align*}
\end{theorem}
\noindent
A short proof of Theorem~\ref{thm:berr-morth}
can be found, e.g., in~\cite[\S3]{sherstov07quantum}.
With this setup in place, we obtain the following analogues
of Theorems~\ref{thm:approx-dp} and~\ref{thm:approx-upper-F-f}
for compositions of the form $F(f_1,\dots,f_k).$
\begin{theorem}
\label{thm:approx-dp-generalized}
Fix nonconstant functions $F\colon\mook\to\moo$ and $f_i\colon X_i\to\moo,$
$i=1,2,\dots,k,$ where each $X_i\subset\Re^n$ is finite.
Then for $\epsilon,\delta>0,$ one has
\begin{align}
\deg_{\epsilon+\eta-2+2(1-\delta)^{C(F)}}(F(f_1,\dots,f_k))
&\geq \deg_{\epsilon,v}(F)
\label{eqn:approx-dp-generalized}
\end{align}
for some $\eta=\eta(\epsilon,F)>0,$
where $v=(\deg_{1-\delta}(f_1),\dots,\deg_{1-\delta}(f_k)).$
\end{theorem}
\begin{proof}
Let $D=\deg_{\eps,v}(F)$ and $d_i=\deg_{1-\delta}(f_i).$
Theorem~\ref{thm:berr-morth} provides
a map $\Psi\colon \mook\to\Re$ such that
\begin{align}
&\sum_{z\in\mook}\abs{\Psi(z)} =1, \label{eqn:Psi-bounded3}\\
&\sum_{z\in\mook}\Psi(z)F(z) > \epsilon+\eta
\nonumber
\end{align}
for some $\eta=\eta(\epsilon,F)>0,$
and
\begin{align*}
\Psi(z) = \sum_{S\in\mathcal{S}} \hat\Psi(S)\prod_{i\in S} z_i
\end{align*}
for some reals $\hat\Psi(S),$ where $\mathcal{S} =
\{S\subseteq\onetok: \sum_{i\in S}d_i\geq D\}.$
Analogously, there are maps $\psi_i\colon X_i\to\Re,$
$i=1,2,\dots,k,$ such that
\begin{align*}
&\sum_{x_i\in X_i}|\psi_i(x_i)| =1, \\%\label{eqn:psi-bounded3}\\
&\sum_{x_i\in X_i}\psi_i(x_i)f_i(x_i) > 1-\delta,
\end{align*}
and $\sum_{x_i\in X_i}\psi_i(x_i)p(x_i)=0$ for every polynomial $p$ of degree
less than $d_i.$
Define $\zeta\colon X_1\times\cdots\times X_k\to\Re$ by
\begin{align*}
\zeta(\dots,x_i,\dots)
= 2^k\,\Psi(\dots,\Sign \psi_i(x_i),\dots) \prod_{i=1}^k \abs{\psi_i(x_i)}.
\end{align*}
By an argument analogous to that in
Theorem~\ref{thm:thrdeg-adeg-dp}, we have
\begin{align}
\sum_{X_1\times\cdots\times X_k} \zeta(\dots,x_i,\dots)p(\dots,x_i,\dots) = 0
\label{eqn:zeta-orthogonality3}
\end{align}
for every polynomial $p$ of degree less than $D.$
Let $\mu$ be the distribution on $X_1\times\cdots\times X_k$
given by $\mu(\dots,x_i,\dots)=\prod
\, \abs{\psi_i(x_i)}.$ Since each $\psi_i$ is orthogonal to the constant
polynomial $1,$ the string $(\dots,\Sign\psi_i(x_i),\dots)$ is
distributed uniformly over $\mook$ when one samples
$(\dots,x_i,\dots)$ according to $\mu.$ As a result,
\begin{align}
\sum_{X_1\times\cdots\times X_k}\, \abs{\zeta(\dots,x_i,\dots)}
= \sum_{z\in\mook}\abs{\Psi(z)}
= 1,
\label{eqn:zeta-bounded3}
\end{align}
where the final equality uses (\ref{eqn:Psi-bounded3}).
By an argument analogous to that in Theorem~\ref{thm:approx-dp}, we obtain
\begin{align}
\sum_{X_1\times\cdots\times X_k} &\zeta(\dots,x_i,\dots)F(\dots,f_i(x_i),\dots)
> \epsilon +\eta- 2 + 2(1-\delta)^{C(F)}.
\label{eqn:zeta-correlated3}
\end{align}
In view of Theorem~\ref{thm:dual-approx}, the exhibited properties
(\ref{eqn:zeta-orthogonality3}), (\ref{eqn:zeta-bounded3}), and
(\ref{eqn:zeta-correlated3}) of $\zeta$ complete the proof.
\end{proof}
\begin{remark}
Analogous to the earlier development,
taking $\delta\searrow0$ in
Theorem~\ref{thm:approx-dp-generalized} yields the lower bound
$
\deg_{\epsilon}(F(f_1,\dots,f_k))
\geq \deg_{\epsilon,v}(F)
$
for each $\epsilon>0,$ where
$v=(\degthr(f_1),\dots,\degthr(f_k)).$
\end{remark}
\begin{theorem}
\label{thm:approx-upper-F-f-generalized}
Fix functions $F\colon\mook\to\moo$ and $f_i\colon X_i\to\moo,$
$i=1,2,\dots,k,$ where each $X_i\subset\Re^n$ is finite. Then
for all $\Delta,\delta\geq 0,$
\begin{align*}
\deg_{\eta(\Delta,\delta)}(F(f_1,\dots,f_k))
\leq \deg_{\Delta,v}(F),
\end{align*}
where
$v = (\deg_\delta(f_1),\dots,\deg_\delta(f_k))$ and
\begin{align}
\eta(\Delta,\delta) &=
\Delta + 2-2\left(1-\frac{\delta}{1+\delta}\right)^{C(F)}.
\label{eqn:approx-upper-F-f-generalized}
\end{align}
In particular,
\begin{align}
\deg_{1/3}(F(f_1,\dots,f_k)) = \deg_{{1/3},v}(F)\cdot O(\log
\{1+\deg_{1/3}(F)\})
\label{eqn:approx-upper-generalized}
\end{align}
for $v=(\deg_{1/3}(f_1),\dots,\deg_{1/3}(f_k)).$
\end{theorem}
\begin{proof}
Fix a real polynomial $P$ on $\mook$ and polynomials
$p_i$ on $X_i,$ respectively.
As usual, $P$ may be assumed to be multilinear in
view of its domain.
Define $\Phi\colon X_1\times\cdots\times X_k\to\Re$ by
\begin{align*}
\Phi(\dots,x_i,\dots) = P\PARENS{\dots,
\frac1{1+\|f_i-p_i\|_\infty}p_i(x_i),\dots}.
\end{align*}
The remainder of the proof is analogous to that of
Theorem~\ref{thm:approx-upper-F-f}, with the obvious notational
changes and an optimal choice of approximants $P,p_1,\dots,p_k.$
\end{proof}
\paragraph{Bounds using block sensitivity.}
Several results above can be sharpened somewhat using
the notion of \emph{block sensitivity}, denoted $\bs(F)$ for a
function $F\colon\mook\to\moo$ and defined as the maximum number
of nonempty disjoint subsets $S_1,S_2,S_3,\dots\subseteq\onetok$
such that on some input $x\in\mook,$ flipping the bits in any one
set $S_i$ changes the value of the function. We have:
\begin{proposition}
\label{prop:certificate-bs}
Let $F\colon\mook\to\moo$ be a given Boolean function. Let
$y\in\mook$ be a random string whose $i$th bit is
set to $-1$ with probability at most $\alpha,$
independently for each $i.$
Then for every $x\in\mook,$
\begin{align*}
\Prob_{y}[F(x_1,\dots,x_k)\ne F(x_1y_1,\dots,x_ky_k)] \leq
2\alpha\bs(F).
\end{align*}
\end{proposition}
\begin{proof}
By monotonicity, we may assume that each bit of $y$ takes on
$-1$ with probability exactly $\alpha.$
For a fixed integer $r$ and a uniformly random string $y\in\mook$ with
$\abs{\{i:y_i=-1\}}=r,$ the probability that $F(\dots,x_i,\dots)\ne
F(\dots,x_iy_i,\dots)$ is clearly at most $\bs(F)/\lfloor
k/r\rfloor\leq 2r\bs(F)/k.$ Averaging over $r$ gives the sought
bound.
\end{proof}
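Similarly, $\bs(F)$ can be computed by exhaustive search for toy functions;
the sketch below (again purely illustrative) maximizes over packings of
pairwise disjoint sensitive blocks:
\begin{verbatim}
# Brute-force block sensitivity bs(F) for F on {-1,1}^k: the maximum, over
# inputs x, number of pairwise disjoint blocks each of whose flips changes F(x).
from itertools import combinations, product

def flip(x, block):
    return tuple(-v if i in block else v for i, v in enumerate(x))

def block_sensitivity(F, k):
    points = list(product((-1, 1), repeat=k))
    blocks = [frozenset(S) for r in range(1, k + 1)
              for S in combinations(range(k), r)]
    def packing(sensitive, used):
        # largest number of pairwise disjoint sensitive blocks avoiding `used`
        return max([1 + packing(sensitive, used | B)
                    for B in sensitive if not (B & used)] + [0])
    return max(packing([B for B in blocks if F(flip(x, B)) != F(x)],
                       frozenset())
               for x in points)

OR3 = lambda z: -1 if any(v == -1 for v in z) else 1    # -1 encodes "true"
MAJ3 = lambda z: 1 if sum(z) > 0 else -1
print(block_sensitivity(OR3, 3))    # prints 3
print(block_sensitivity(MAJ3, 3))   # prints 2
\end{verbatim}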
Since by definition $C(F)\geq\bs(F)$ for every function
$F\colon\mook\to\moo,$ use of
Proposition~\ref{prop:certificate-bs} instead of
Proposition~\ref{prop:certificate} can lead to sharper bounds in
some results of this section. Specifically,
Theorems~\ref{thm:approx-dp}, \ref{thm:approx-upper-F-f},
\ref{thm:approx-dp-generalized}, and
\ref{thm:approx-upper-F-f-generalized} remain valid
with (\ref{eqn:approx-dp}) replaced by
\begin{align}
\deg_{\epsilon+\eta-4\delta\bs(F)}(F(f,\dots,f)) &\geq \deg_{\epsilon}(F)
\deg_{1-\delta}(f);
\end{align}
with (\ref{eqn:approx-upper-F-f-nu}) and
(\ref{eqn:approx-upper-F-f-generalized}) replaced by
\begin{align}
\eta(\Delta,\delta) =
\Delta + \frac{4\delta\bs(F)}{1+\delta};
\end{align}
and with (\ref{eqn:approx-dp-generalized}) replaced by
\begin{align}
\deg_{\epsilon+\eta-4\delta\bs(F)}(F(f_1,\dots,f_k))
&\geq \deg_{\epsilon,v}(F).
\end{align}
In particular, we obtain from Theorem~\ref{thm:approx-dp} that
\begin{align*}
\deg_{1/3}(F(f,\dots,f))&\geq \deg_{2/3}(F)\deg_{1-(12\bs(F))^{-1}}(f)
\nonumber\\
&\geq
\deg_{1/3}(F)\deg_{1/3}(f)\cdot \Omega\left(\frac1{1+\bs(F)}\right).
\end{align*}
\subsection{Auxiliary results on rational approximation
\label{sec:auxiliary}}
In this section, we prove a number of auxiliary facts about
uniform approximation and sign-representation. This preparatory
work will set the stage for our analysis of conjunctions of
functions. We start by spelling out the exact relationship
between the rational approximation and sign-representation of a
Boolean function.
\begin{theorem}
\label{thm:trivial-approx}
Let $f\colon X\to\moo$ be a given function, where $X\subset\Re^n$
is finite. Then for every integer $d,$
\[ \degthr(f)\leq d \quad \Leftrightarrow \quad R^+(f,d)<1. \]
\end{theorem}
\begin{proof}
For the forward implication, let $p$ be a polynomial of degree at most $d$
such that $f(x)p(x)>0$ for every $x\in X.$ Letting
$M=\max_{x\in X} |p(x)|$ and $m=\min_{x\in X}|p(x)|,$ we have
\begin{align*}
R^+(f,d)\leq \max_{x\in
X}\left|f(x)-\frac{p(x)}{M}\right|\leq 1-\frac mM <1.
\end{align*}
For the converse, fix a degree-$d$ rational function $p(x)/q(x)$
with $q(x)>0$ on $X$ and $\max_X\abs{f(x) - \{p(x)/q(x)\}}
< 1.$ Then clearly $f(x)p(x)>0$ on $X.$
\end{proof}
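As a toy illustration of the forward direction (ours): the polynomial
$p(x)=x_1+x_2+x_3$ sign-represents $\MAJ_3,$ with $M=3$ and $m=1$ in the
notation above, so $R^+(\MAJ_3,1)\leq 1-\frac13=\frac23<1.$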
Our next observation amounts to reformulating the rational approximation
of Boolean functions in a way that is more analytically pleasing.
\begin{theorem}
\label{thm:balance}
Let $f\colon X\to\moo$ be a given function, where $X\subset\Re^n$
is finite. Then for every integer $d\geq\degthr(f),$ one has
\begin{align*}
R^+(f,d) = \inf_{c\geq1} \; \frac{c^2-1}{c^2+1},
\end{align*}
where the infimum is over all $c\geq1$ for which there exist
polynomials $p,q$ of degree up to $d$ such that
$0<\frac1c q(x) \leq f(x)p(x) \leq cq(x)$ on $X.$
\end{theorem}
\begin{proof}
In view of Theorem~\ref{thm:trivial-approx}, the quantity $R^+(f,d)$
is the infimum over all $\epsilon<1$ for which there exist polynomials
$p$ and $q$ of degree up to $d$ such that $0<(1-\epsilon)q(x)\leq
f(x)p(x)\leq (1+\epsilon)q(x)$ on $X.$ Equivalently, one may require
that
\begin{align*}
0<\frac{1-\epsilon}{\sqrt{1-\epsilon^2}}\, q(x)
\leq f(x)p(x)\leq
\frac{1+\epsilon}{\sqrt{1-\epsilon^2}}\, q(x).
\end{align*}
Letting $c=c(\epsilon)=\sqrt{(1+\epsilon)/(1-\epsilon)},$ the
theorem follows.
\end{proof}
We will now show that if a degree-$d$ rational approximant
achieves error $\epsilon$ in approximating a given Boolean
function, then a degree-$2d$ approximant can achieve error as
small as $\epsilon^2.$ Note that this result is a refinement of
Theorem~\ref{thm:error-boosting} for small $k.$
\begin{theorem}
\label{thm:accuracy-boost}
Let $f\colon X\to\moo$ be a given function, where $X\subseteq\Re^n.$
Let $d$ be a given integer. Then
\[ R^+(f,2d) \leq \PARENS{\frac{\epsilon}{1 + \sqrt{1-\epsilon^2}}}^2,\]
where $\epsilon = R(f,d).$
\end{theorem}
\begin{proof}
The theorem is clearly true for $\epsilon=1.$ For $0\leq
\epsilon<1,$ consider the univariate rational function
\[ S(t) = \frac{4\sqrt{1-\epsilon^2}}{1+\sqrt{1-\epsilon^2}}\cdot
\frac{t}{t^2+(1-\epsilon^2)}. \]
Calculus shows that
\[ \max_{\rule{0mm}{8pt}1-\epsilon\leq |t|\leq 1+\epsilon}
|\sign t - S(t)| =
\PARENS{\frac{\epsilon}{1 + \sqrt{1-\epsilon^2}}}^2.\]
Fix a sequence $A_1,A_2,\dots$ of rational functions of degree at
most $d$ such that $\sup_{x\in X}|f(x) - A_m(x)|\to\epsilon$ as
$m\to\infty.$ Then $S(A_1(x)),S(A_2(x)),\dots$ is the sought
sequence of approximants to $f,$ each a rational function of
degree at most $2d$ with a positive denominator.
\end{proof}
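The univariate calculus step in this proof is easy to verify numerically; the
sketch below (illustrative only) evaluates $S$ on a fine grid and compares the
result against the stated error:
\begin{verbatim}
# Check the univariate step in the proof: on 1-eps <= |t| <= 1+eps, the
# approximant S has error (eps / (1 + sqrt(1 - eps^2)))**2 against sign(t).
from math import sqrt

def check(eps, grid=200001):
    s = sqrt(1 - eps ** 2)
    S = lambda t: (4 * s / (1 + s)) * t / (t ** 2 + (1 - eps ** 2))
    ts = [1 - eps + 2 * eps * j / (grid - 1) for j in range(grid)]
    err = max(max(abs(1 - S(t)) for t in ts),
              max(abs(-1 - S(-t)) for t in ts))
    return err, (eps / (1 + s)) ** 2

for eps in (0.1, 0.3, 0.5, 0.9):
    err, claimed = check(eps)
    print(f"eps = {eps:.1f}:  observed {err:.6f}   claimed {claimed:.6f}")
\end{verbatim}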
\begin{corollary}
\label{cor:general-amplify}
Let $f\colon X\to\moo$ be a given function, where $X\subseteq\Re^n.$
Then for all integers $d\geq1$ and reals $t\geq2,$
\[ R^+(f,td)\leq R(f,d)^{t/2}. \]
\end{corollary}
\begin{proof}
If $t=2^k$ for some integer $k\geq1,$ then repeated applications of
Theorem~\ref{thm:accuracy-boost} yield
$R^+(f,2^kd) \leq R(f,2^{k-1}d)^2 \leq \cdots \leq R(f,d)^{2^k}.$
The general case follows because $2^{\lfloor\log t\rfloor}\geq t/2.$
\end{proof}
\subsection{Conjunctions of functions
\label{sec:intersection-two}}
In this section, we prove our direct product theorems for
conjunctions of Boolean
functions. Recall that a key challenge
will be, given a sign-representation $\phi(x,y)$ of a
composite function $f(x)\wedge g(y),$ to suitably break down $\phi$
and recover individual rational approximants of $f$ and $g.$ We
now present an ingredient of our solution, namely, a certain fact
about pairs of matrices based on Farkas' Lemma. For the time being,
we will formulate this fact in a clean and abstract way.
\begin{theorem}
\label{thm:zero-sum-game}
Fix matrices $A,B\in\Re^{m\times n}$ and a real $c\geq1.$ Consider
the following system of linear inequalities in $u,v\in\Re^n${\rm :}
\begin{equation}
\LEFTRIGHT.\rcbrace{
\hspace{3cm}
\begin{aligned}
\frac{1}{c}\, Au \leq &Bv \leq cAu,\\
u&\geq 0, \\
v&\geq 0. \\
\end{aligned}
\hspace{3cm}}
\label{eqn:matrix-system}
\end{equation}
If $u=v=0$ is the only solution to {\rm (\ref{eqn:matrix-system}),} then
there exist vectors $w\geq0$ and $z\geq0$ such that
\[ w\tr A + z\tr B > c (z\tr A + w\tr B). \]
\end{theorem}
\begin{proof}
If $u=v=0$ is the only solution to (\ref{eqn:matrix-system}), then
linear programming duality implies the existence of vectors $w\geq0$
and $z\geq 0$ such that $w\tr A > cz\tr A$ and $z\tr B > cw\tr B.$
Adding the last two inequalities completes the proof.
\end{proof}
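A minimal illustration (ours): take $m=n=1,$ $A=[1],$ $B=[-1],$ and $c=1.$
Then (\ref{eqn:matrix-system}) reads $u\leq -v\leq u$ with $u,v\geq0,$ forcing
$u=v=0.$ The conclusion of the theorem is witnessed by $w=1,$ $z=0$: indeed,
$w\tr A + z\tr B = 1 > -1 = c\,(z\tr A + w\tr B).$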
For clarity of exposition, we first prove the main result of this
section for the case of \emph{two} Boolean functions at least one of which
is \emph{odd}. While this case seems restricted, we will see that
it captures the full complexity of the problem.
\begin{theorem}
\label{thm:main-finite-odd}
Let $f\colon X\to\moo$ and $g\colon Y\to\moo$ be given functions, where
$X,Y\subset\Re^n$ are arbitrary finite sets. Assume that $f\not\equiv1$
and $g\not\equiv1.$ Let $d=\degthr(f\wedge g).$ If $f$ is odd, then
\[ R^+(f,2d) + R^+(g,d) < 1. \]
\end{theorem}
\begin{proof}
We first collect some basic observations. Since $f\not\equiv1$
and $g\not\equiv1,$ we have $\degthr(f)\leq d$ and $\degthr(g)\leq d.$
Therefore, Theorem~\ref{thm:trivial-approx} implies that
\begin{equation}
R^+(f,d)<1,\qquad R^+(g,d)<1.
\label{eqn:both-nontrivial}
\end{equation}
In particular, the theorem holds if $R^+(g,d)=0.$ In the
remainder of the proof, we assume that $R^+(g,d)=\epsilon,$ where
$0<\epsilon<1.$
By hypothesis, there exists a degree-$d$ polynomial $\phi$
such that $f(x)\wedge g(y) = \sign \phi(x,y)$ for all $x\in X,$ $y\in Y.$
Define
\[ X^- = \{ x\in X : f(x)=-1 \}. \]
Since $X$ is closed under negation and $f$ is odd, we have $f(x)=1
\Leftrightarrow -x\in X^-.$ We will make several uses of this fact
in what follows, without further mention.
Put
\[ c = \SQRT{\frac{1 + (1-\delta)\epsilon}{1-(1-\delta)\epsilon}}, \]
where $\delta\in(0,1)$ is sufficiently small.
Since $R^+(g,d) > (c^2-1)/(c^2+1),$ we know by Theorem~\ref{thm:balance}
that there cannot exist polynomials $p,q$ of degree up to $d$ such that
\begin{equation}
0 < \frac1c q(y) \leq g(y)p(y) \leq cq(y), \qquad y\in Y.
\label{eqn:qpq}
\end{equation}
We claim, then, that there cannot exist reals $a_x\geq0,$ $x\in X,$ not all
zero, such that
\[ \frac1c \sum_{x\in X^-} a_{-x}\phi(-x,y)
\leq g(y) \sum_{x\in X^-} a_x \phi(x,y)
\leq c\sum_{x\in X^-} a_{-x} \phi(-x,y), \quad y\in Y. \]
Indeed, if such reals $a_x$ were to exist, then (\ref{eqn:qpq}) would
hold for the polynomials $p(y) = \sum_{x\in X^-}
a_x\phi(x,y)$ and $q(y)=\sum_{x\in X^-} a_{-x}\phi(-x,y).$
In view of the nonexistence of the $a_x,$ Theorem~\ref{thm:zero-sum-game}
applies to the matrices
\[ \Big[\phi(-x,y)\Big]_{y\in Y,\, x\in X^-},\qquad
\Big[g(y)\phi(x,y)\Big]_{y\in Y,\, x\in X^-} \]
and guarantees the existence of nonnegative reals $\lambda_y,\mu_y$ for
$y\in Y$ such that
\begin{multline}
\sum_{y\in Y} \lambda_y \phi(-x,y) +
\sum_{y\in Y} \mu_y g(y) \phi(x,y)\\
> c\PARENS{
\sum_{y\in Y}^{~} \mu_y \phi(-x,y) +
\sum_{y\in Y} \lambda_y g(y) \phi(x,y) }, \qquad x\in X^-.
\qquad
\label{eqn:lambda-mu-claim}
\end{multline}
Define polynomials $\alpha,\beta$ on $X$ by
\begin{align*}
\alpha(x) &=
\sum_{y\in g^{-1}(-1)} \{\lambda_y \phi(-x,y) - \mu_y\phi(x,y)\}, \\
\beta(x) &=
\sum_{y\in g^{-1}(1)\phantom{-}} \{\lambda_y \phi(-x,y) + \mu_y\phi(x,y)\}.
\end{align*}
Then (\ref{eqn:lambda-mu-claim}) can be restated as
\[ \alpha(x) + \beta(x) > c\{-\alpha(-x) + \beta(-x)\}, \qquad x\in X^-. \]
Both members of this inequality are nonnegative, and thus
$\{\alpha(x) + \beta(x)\}^2 > c^2\{-\alpha(-x) + \beta(-x)\}^2$ for
$x\in X^-.$
Since in addition $\alpha(-x)\leq0$ and $\beta(-x)\geq0$ for $x\in X^-,$ we have
\[ \{\alpha(x) + \beta(x)\}^2 > c^2\{\alpha(-x) + \beta(-x)\}^2,
\qquad x\in X^-. \]
Letting $\gamma(x) = \{\alpha(x) + \beta(x)\}^2,$ we see that
\[ R^+(f,2d) \leq \max_{x\in X}
\left| f(x) - \frac{c^2+1}{c^2} \cdot
\frac{\gamma(-x) - \gamma(x)}{\gamma(-x)+\gamma(x)}\right|
\leq \frac1{c^2}<1-\epsilon, \]
where the final inequality holds for all $\delta\in(0,1)$ small enough.
\end{proof}
\begin{remark*}
In Theorem~\ref{thm:main-finite-odd} and elsewhere in this paper,
the degree of a multivariate polynomial $p(x_1,x_2,\dots,x_n)$ is
defined as the greatest total degree of any monomial of $p.$ A
related notion is the \emph{partial degree} of $p,$ which is the
maximum degree of $p$ in any one of the variables $x_1,x_2,\dots,x_n.$
One readily sees that the proof of Theorem~\ref{thm:main-finite-odd}
applies unchanged to this alternate notion. Specifically, if the
conjunction $f(x)\wedge g(y)$ can be sign-represented by a polynomial
of partial degree $d,$ then there exist rational functions $F(x)$
and $G(y)$ of partial degree $2d$ such that $\|f-F\|_\infty +
\|g-G\|_\infty<1.$ In the same way, the program of Section~\ref{sec:h}
carries over, with cosmetic changes, to the notion of partial degree.
Analogously, our proofs apply to hybrid definitions of degree, such
as partial degree over blocks of variables. Other, more abstract
notions of degree can also be handled. In the remainder of the
paper, we will maintain our focus on total degree and will not
elaborate further on its generalizations.
\end{remark*}
As promised, we will now remove the assumption, made in
Theorem~\ref{thm:main-finite-odd}, about one of the functions being
odd. The result that we are about to prove settles
Theorem~\ref{thm:main-two} from the Introduction.
\begin{theorem}
\label{thm:main-finite}
Let $f\colon X\to\moo$ and $g\colon Y\to\moo$ be given functions, where
$X,Y\subset\Re^n$ are arbitrary finite sets. Assume that $f\not\equiv1$
and $g\not\equiv1.$ Let $d=\degthr(f\wedge g).$ Then
\begin{align}
R^+(f,4d) + R^+(g,2d) < 1
\label{eqn:42}
\end{align}
and, by symmetry,
\begin{align*}
R^+(f,2d) + R^+(g,4d) < 1.
\end{align*}
\end{theorem}
\begin{proof}
It suffices to prove (\ref{eqn:42}).
Define $X'\subset\Re^{n+1}$ by $X'= \{(x,1),(-x,-1) : x\in X\}.$
It is clear that $X'$ is closed under negation. Let $f'\colon X'\to\moo$ be
the odd Boolean function given by
\[ f'(x,b) = \ccases{
f(x), &b=1,\\
-f(-x), &b=-1.
}
\]
Let $\phi$ be a polynomial of degree no greater than $d$
such that $f(x)\wedge g(y)\equiv
\sign \phi(x,y).$ Fix an input $\tilde x\in X$ such that
$f(\tilde x)=-1.$ Then
$f'(x,b) \wedge g(y) \equiv \sign\BRACES{
K(1+b)\phi(x,y) + \phi(-x,y)\phi(\tilde x,y)
}
$ for a large enough constant $K\gg1,$ whence
\[ \degthr(f'\wedge g) \leq 2d. \]
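For completeness, the displayed sign-representation can be verified directly
(throughout, the conjunction of two $\pm1$-valued quantities equals $-1$
precisely when both equal $-1$). For $b=1,$ the expression is
$2K\phi(x,y)+\phi(-x,y)\phi(\tilde x,y),$ whose sign equals
$\sign\phi(x,y)=f(x)\wedge g(y)=f'(x,1)\wedge g(y)$ once $K$ is large enough,
the sets $X$ and $Y$ being finite. For $b=-1$ (so that $-x\in X$), the first
term vanishes and the sign equals
$\{f(-x)\wedge g(y)\}\{f(\tilde x)\wedge g(y)\}=\{f(-x)\wedge g(y)\}\,g(y)$
because $f(\tilde x)=-1$; checking the two possible values of $g(y)$ shows
that this product equals $(-f(-x))\wedge g(y)=f'(x,-1)\wedge g(y).$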
Theorem~\ref{thm:main-finite-odd} now yields
$R^+(f',4d) + R^+(g,2d) < 1.$
Since $R^+(f,4d) \leq R^+(f',4d)$ by definition, the proof is complete.
\end{proof}
Finally, we obtain an analogue of Theorem~\ref{thm:main-finite} for
a conjunction of three and more functions.
\begin{theorem}
\label{thm:main-finite-multi}
Let $f_1,f_2,\dots,f_k$ be given Boolean functions on finite sets
$X_1,X_2,\dots,X_k$ $\subset \Re^n,$ respectively.
Assume that $f_i\not\equiv1$ for $i=1,2,\dots,k.$
Let $d=\degthr(f_1\wedge f_2\wedge \cdots \wedge f_k).$ Then
\[ \sum_{i=1}^k R^+(f_i,D) < 1 \]
for $D=8d\log2k.$
\end{theorem}
\begin{proof}
Since $f_1,f_2,\dots,f_k\not\equiv 1,$ it follows that
for each pair of indices $i<j,$ the function $f_i\wedge f_j$ is a
subfunction of $f_1\wedge f_2\wedge \cdots \wedge f_k.$
Theorem~\ref{thm:main-finite} now shows that for each $i<j,$
\begin{align}
R^+(f_i,4d) + R^+(f_j,4d) < 1. \label{eqn:ij}
\end{align}
Without loss of generality,
$R^+(f_1,4d)=\max_{i=1,\dots,k} R^+(f_i,4d).$
Abbreviate $\epsilon=R^+(f_1,4d).$
By (\ref{eqn:ij}),
\[R^+(f_i,4d) < \min\BRACES{1-\epsilon,\frac12},
\qquad i=2,3,\dots,k.\]
Now Corollary~\ref{cor:general-amplify} implies that
\begin{align*}
\sum_{i=1}^k R^+(f_i,D)
\leq \epsilon + \sum_{i=2}^k R^+(f_i,4d)^{1 + \log k}
< 1.
\tag*{\qedhere}
\end{align*}
\end{proof}
\subsection{Other combining functions \label{sec:h}}
As we will now see, the development in Section~\ref{sec:intersection-two}
applies to many combining functions other than conjunctions.
Disjunctions are an illustrative starting point. Consider two
Boolean functions $f\colon X\to\moo$ and $g\colon Y\to\moo,$ where
$X,Y\subset\Re^n$ are finite sets and $f,g\not\equiv-1.$ Let
$d=\degthr(f\vee g).$ Then, we claim that
\begin{equation}
R^+(f,4d) + R^+(g,4d) < 1.
\label{eqn:f-or-g}
\end{equation}
To see this, note first that the function $f\vee g$ has the same
threshold degree as its negation, $\overline f\wedge \overline g.$
Applying Theorem~\ref{thm:main-finite} to the latter function shows
that
\[
R^+(\overline f,4d) + R^+(\overline g,4d) < 1.
\]
This is equivalent to (\ref{eqn:f-or-g}) since
approximating a function is the same as approximating its negation:
$R^+(\overline f,4d)=R^+(f,4d)$ and
$R^+(\overline g,4d)=R^+(g,4d).$ As in the case
of conjunctions, (\ref{eqn:f-or-g}) can be strengthened to
\[
R^+(f,2d) + R^+(g,2d) < 1
\]
if at least one of $f,g$ is known to be odd. These observations
carry over to disjunctions of multiple functions,
$f_1\vee f_2\vee \cdots\vee f_k.$
The above discussion is still too specialized. In what follows, we
consider composite functions $h(f_1,f_2,\dots,f_k),$
where $h\colon\mook\to\moo$ is any given Boolean function. We will
shortly see that the results of the previous sections hold for
various $h$ other than $h=\AND$ and $h=\OR.$
We start with some notation and definitions.
Let $f,h\colon \mook\to\moo$
be given Boolean functions. Recall that $f$ is called a \emph{subfunction}
of $h$ if for some fixed strings $y,z\in\mook,$ one has
\[ f(x) = h(\dots,(x_i\wedge y_i) \vee z_i,\dots) \]
for each $x\in\mook.$ In words, $f$ can
be obtained from $h$ by replacing some of the variables
$x_1,x_2,\dots,x_k$ with fixed values ($-1$ or $+1$).
\begin{definition}
\label{def:and-reducible}
A function $F\colon \mook\to\moo$ is \emph{\AND-reducible} if for each
pair of indices $i,j,$ where $1\leq i\leq j\leq k,$ at least
one of the eight functions
\[
\begin{aligned}
x_i&\wedge x_j,\\
x_i&\wedge \overline {x_j},\\
\overline {x_i}&\wedge x_j,\\
\overline {x_i}&\wedge \overline {x_j},
\end{aligned}
\quad\qquad
\begin{aligned}
x_i&\vee x_j,\\
x_i&\vee \overline {x_j},\\
\overline {x_i}&\vee x_j,\\
\overline {x_i}&\vee \overline {x_j}
\end{aligned}
\]
is a subfunction of $F(x).$
\end{definition}
\begin{theorem}
\label{thm:and-reducible}
Let $f_1,f_2,\dots,f_k$ be nonconstant Boolean functions on finite
sets $X_1,X_2,\dots,X_k\subset\Re^n,$ respectively. Let $F\colon \mook\to\moo$
be an \AND-reducible function. Put
$d=\degthr(F(f_1,f_2,\dots,f_k)).$ Then
\[ \sum_{i=1}^k R^+(f_i,D) < 1 \]
for $D=8d\log 2k.$
\end{theorem}
\begin{proof}
Since $F$ is \AND-reducible, it follows that for each pair of indices
$i<j,$ one of the following eight functions is a subfunction of
$F(f_1,\dots,f_k)$:
\[
\begin{aligned}
f_i&\wedge f_j,\\
f_i&\wedge \overline {f_j},\\
\overline {f_i}&\wedge f_j,\\
\overline {f_i}&\wedge \overline {f_j},
\end{aligned}
\quad\qquad
\begin{aligned}
f_i&\vee f_j,\\
f_i&\vee \overline {f_j},\\
\overline {f_i}&\vee f_j,\\
\overline {f_i}&\vee \overline {f_j}.
\end{aligned}
\]
By Theorem~\ref{thm:main-finite} (and the opening remarks of this
section),
\[ R^+(f_i,4d) + R^+(f_j,4d) < 1. \]
The remainder of the proof is identical to the proof of
Theorem~\ref{thm:main-finite-multi}, starting at equation (\ref{eqn:ij}).
\end{proof}
In summary, the development in Section~\ref{sec:intersection-two}
naturally extends to compositions
$F(f_1,f_2,\dots,f_k)$ for various $F.$ For a function
$F\colon \mook\to\moo$ to be \AND-reducible, $F$ must clearly depend
on all of its inputs. This necessary condition
is often sufficient, for example when $F$ is a read-once
AND/OR/NOT formula or a halfspace. Hence, Theorem~\ref{thm:main-h}
from the Introduction is a corollary of Theorem~\ref{thm:and-reducible}.
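To illustrate the definition, the majority function $\MAJ_3(x_1,x_2,x_3)$ is
\AND-reducible: fixing any one of the three variables to ``true'' leaves the
disjunction of the other two, so every pair of indices $(i,j)$ yields
$x_i\vee x_j$ as a subfunction.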
\begin{remark*}
If more information is available about the combining function $F,$
Theorem~\ref{thm:and-reducible} can be generalized to let some of
$f_1,\dots,f_k$ be constant functions. For example, some or all of
the functions $f_1,\dots,f_k$ in Theorem~\ref{thm:main-finite-multi} can
be identically true. Another direction for generalization is as
follows. In Definition~\ref{def:and-reducible}, one considers all
the ${k\choose 2}$ distinct pairs of indices $(i,j).$ If one happens
to know that $f_1$ is harder to approximate than $f_2,\dots,f_k,$
then one can relax Definition~\ref{def:and-reducible} to examine
only the $k-1$ pairs $(1,2),(1,3),\dots,(1,k).$ We do not formulate
these extensions as theorems, the fundamental technique being already
clear.
\end{remark*}
\subsection{Additional observations}
\label{sec:additional-conj}
Analogous to Section~\ref{sec:dp},
our results here can be viewed as a technique for proving lower
bounds on the threshold degree of composite functions
$F(f_1,f_2,\dots,f_k).$ We make this view explicit in the following
statement, which is the contrapositive of Theorem~\ref{thm:and-reducible}.
\begin{theorem}
\label{thm:and-reducible-degthr-technique}
Let $f_1,f_2,\dots,f_k$ be nonconstant Boolean functions on finite
sets $X_1,X_2,\dots,X_k\subset\Re^n,$ respectively. Let $F\colon \mook\to\moo$
be an $\AND$-reducible function. Suppose that
$\sum R^+(f_i,D) \geq 1$
for some integer $D.$ Then
\begin{equation}
\degthr(F(f_1,f_2,\dots,f_k))>\frac D{8\log 2k}.
\label{eqn:lower-bound-degthr}
\end{equation}
\end{theorem}
\begin{remark}[On the tightness of
Theorem~\textup{\ref{thm:and-reducible-degthr-technique}}]
\label{rem:tightness}
Theorem~\ref{thm:and-reducible-degthr-technique} is close to
optimal. For example, when $F=\AND,$ the lower bound
in~(\ref{eqn:lower-bound-degthr}) is tight up to a factor of
$\Theta(k\log k).$ This can be seen by the well-known
argument~\cite{beigel91rational} described in the Introduction.
Specifically, fix an integer $D$ such that $\sum R^+(f_i,D) <1.$
Then there exists a rational function $p_i(x_i)/q_i(x_i)$ on $X_i,$
for $i=1,2,\dots,k,$ such that $q_i$ is positive on $X_i$ and
\[ \sum_{i=1}^k \;\max_{x_i\in X_i} \left| f_i(x_i) -
\frac{p_i(x_i)}{q_i(x_i)} \right| < 1. \]
As a result,
\begin{align*}
\bigwedge_{i=1}^k f_i(x_i)
\equiv \sign\PARENS{k-1 + \sum_{i=1}^k f_i(x_i) }
\equiv \sign\PARENS{ k-1 + \sum_{i=1}^k
\frac{p_i(x_i)}{q_i(x_i)}}.
\end{align*}
Multiplying by $\prod q_i(x_i)$ yields
\begin{align*}
\bigwedge_{i=1}^k f_i(x_i) \equiv \sign\PARENS{
(k-1)\prod_{i=1}^k q_i(x_i) +
\sum_{i=1}^k p_i(x_i)\prod_{j\in\{1,\dots,k\}\setminus\{i\}}{q_j(x_j)}
},
\end{align*}
whence $\degthr(f_1\wedge f_2\wedge \cdots \wedge f_k) \leq k D.$
This settles our claim regarding $F=\AND.$ For arbitrary
$\AND$-reducible functions $F\colon \mook\to\moo,$ a similar
argument (cf.~Theorem~31 of Klivans et al.~\cite{KOS:02}) shows
that the lower bound in~(\ref{eqn:lower-bound-degthr}) is tight up
to a polynomial in $k.$
\end{remark}
We close this section with one additional result.
\begin{theorem}
\label{thm:2-to-k}
Let $f\colon X\to\moo$ be a given function, where $X\subset\Re^n$ is
finite. Then for every integer $k\geq2,$
\begin{equation}
\degthr(\undercbrace{f\wedge f\wedge \cdots\wedge f}_k)
\leq (8k\log k)\cdot \degthr(f\wedge f).
\label{eqn:2-to-k}
\end{equation}
\end{theorem}
\begin{proof}
Put $d=\degthr(f\wedge f).$ Theorem~\ref{thm:main-finite} implies
that $R^+(f,4d)< 1/2,$ whence $R^+(f,8d\log k)<1/k$ by
Corollary~\ref{cor:general-amplify}. By
the argument in Remark~\ref{rem:tightness}, this proves the theorem.
\end{proof}
To illustrate, let $\Ccal$ be a given class of functions on
$\moon,$ such as halfspaces. Theorem~\ref{thm:2-to-k} shows that
the task of constructing a sign-representation for the
intersections of up to $k$ members from $\Ccal$ reduces to the
case $k=2.$ In other words, solving the problem for $k=2$
essentially solves it for all $k.$ The dependence on $k$ in
(\ref{eqn:2-to-k}) is tight up to a factor of $16\log k,$ even in
the simple case when $f$ is the OR
function~\cite{minsky88perceptrons}.
\section{Rational approximation of a halfspace}
In this section, we determine how well a rational function of any
given degree can approximate the canonical halfspace.
The lower bounds in Theorem~\ref{thm:main-approx-hs},
the main result to be proved in this section,
are considerably more involved than the upper bounds.
To help build some intuition in the former case, we first obtain
the upper bounds (Section~\ref{sec:small-error}) and only then
prove the lower bounds (Sections~\ref{sec:preparatory}
and~\ref{sec:rational}).
\subsection{Upper bounds}
\label{sec:small-error}
As shown in the Introduction, the OR function on $n$ bits has
$R^+(\OR,1)=0.$ A similar example is the ODD-MAX-BIT
function $f\colon\zoon\to\moo,$ due to Beigel~\cite{beigel94perceptrons},
defined by
\begin{align*}
f(x) = \sign\left(1 + \sum_{i=1}^n (-2)^i x_i\right).
\end{align*}
Indeed, letting
\begin{align*}
A_M(x) = \frac{1 + \sum_{i=1}^n (-M)^i x_i}
{1 + \sum_{i=1}^n M^i x_i},
\end{align*}
we have $\|f - A_M\|_\infty\to0$ as $M\to\infty$: if $i^*$ is the largest
index with $x_{i^*}=1$ (the case $x=0$ being immediate), then the numerator
and denominator of $A_M(x)$ are dominated by their $i^*$th terms, so
$A_M(x)\to(-1)^{i^*}=f(x).$ Thus,
$R^+(f,1) = 0.$ With this construction in mind, we now turn to
the canonical halfspace. We start with an auxiliary result that
generalizes the argument just given.
\begin{lemma}
\label{lem:012-dfa}
Let $f\colon\{0,\pm1,\pm2\}^n\to\moo$ be the function
given by $f(z) = \sign(1+\sum_{i=1}^n
2^iz_i).$ Then
\begin{align*}
R^+(f,64) = 0.
\end{align*}
\end{lemma}
\begin{proof}
Consider the deterministic finite automaton in
Figure~\ref{fig:dfa}.
The automaton has two terminal states (labeled ``$+$'' and ``$-$'')
and three nonterminal states (the start state and two additional
states). We interpret the output of the automaton to be $+1$ and
$-1$ at the two terminal states, respectively, and $0$ otherwise.
A string $z=(z_n,z_{n-1},\dots,z_1,0)\in\{0,\pm1,\pm2\}^{n+1},$
when read by the automaton left to right, forces it to output
exactly $\sign(\sum_{i=1}^n 2^i z_i).$ If the automaton is
currently at a nonterminal state, this state is determined
uniquely by the last two symbols read. Hence, the output of the
automaton on input
$z=(z_n,z_{n-1},\dots,z_1,0)\in\{0,\pm1,\pm2\}^{n+1}$ is given by
\begin{align*}
\sign\PARENS{\sum_{i=0}^n 2^i \alpha(z_{i+2},z_{i+1},z_i)}
\end{align*}
for a suitable map $\alpha\colon\{0,\pm1,\pm2\}^3\to\{0,-1,+1\},$
where we adopt the shorthand $z_{n+1}=z_{n+2}=z_0=0.$
Put
\begin{align*}
A_M(z) = \frac{ 1 + \sum_{i=0}^n M^{i+1} \alpha(z_{i+2},z_{i+1},z_i) }
{ 1 + \sum_{i=0}^n M^{i+1} \abs{\alpha(z_{i+2},z_{i+1},z_i)} }.
\end{align*}
By interpolation, the numerator and denominator of $A_M$ can be represented
by polynomials of degree no more than $4\times 4\times 4=64.$ On the
other hand, we have $\|f-A_M\|_\infty\to 0$ as $M\to\infty.$
\end{proof}
\begin{figure}
\caption{Finite automaton for the proof of Lemma~\ref{lem:012-dfa}.
\label{fig:dfa}}
\end{figure}
We are now prepared to prove our desired upper bounds for halfspaces.
\begin{theorem}
\label{thm:upper-fnk}
Let $f\colon\moo^{nk}\to\moo$ be the function
given by
\begin{align}
f(x) = \sign\left(1+\sum_{i=1}^n\sum_{j=1}^k 2^ix_{ij}\right).
\label{eqn:f-n-k}
\end{align}
Then
\begin{align}
R^+(f,64k\lceil \log k\rceil+1) =0.
\label{eqn:R0}
\end{align}
In addition, for all integers $d\geq1,$
\begin{align}
R^+(f,d) \leq 1 - (k2^{n+1})^{-1/d}.
\label{eqn:Rd}
\end{align}
\end{theorem}
\noindent
In particular, Theorem~\ref{thm:upper-fnk} settles all upper bounds on
$\rdeg_\epsilon(f)$ in Theorem~\ref{thm:main-approx-hs}.
\begin{proof}[Proof of Theorem \textup{\ref{thm:upper-fnk}}]
Theorem~\ref{thm:newman-approx} immediately implies
(\ref{eqn:Rd}) in view of the representation (\ref{eqn:f-n-k}).
It remains to prove (\ref{eqn:R0}). In the degenerate case $k=1,$ we have
$f\equiv x_{n1}$ and thus (\ref{eqn:R0}) holds. In what follows, we assume
that $k\geq2$ and put $\Delta=\lceil\log k\rceil.$
We adopt the convention that $x_{ij}\equiv 0$ for $i>n.$ For
$\ell=0,1,2,\dots,$ define
\begin{align*}
S_\ell
= \sum_{i=1}^{\Delta}\sum_{j=1}^k 2^{i-1} x_{\ell\Delta+i,j}.
\end{align*}
Then
\begin{multline}
\sum_{i=1}^n \sum_{j=1}^k 2^{i-1}x_{ij} =
\PARENS{
S_0 + 2^{2\Delta} S_2 + 2^{4\Delta} S_4 + 2^{6\Delta} S_6 +
\cdots}\\
+\PARENS{
2^{\Delta}S_1 + 2^{3\Delta} S_3 + 2^{5\Delta} S_5 + 2^{7\Delta} S_7 +
\cdots}.
\label{eqn:integer-sum}
\end{multline}
Now, each $S_\ell$ is an integer in $[-2^{2\Delta}+1,2^{2\Delta}-1]$
and therefore admits a representation as
\begin{align*}
S_\ell = z_{\ell,1} + 2z_{\ell,2} + 2^2z_{\ell,3} + \cdots +
2^{2\Delta-1}z_{\ell,2\Delta},
\end{align*}
where $z_{\ell,1},\dots,z_{\ell,2\Delta}\in\{-1,0,+1\}.$
Furthermore, each $S_\ell$ only depends on $k\Delta$ of the
original variables $x_{ij},$ whence $z_{\ell,1},\dots,z_{\ell,2\Delta}$
can all be viewed as polynomials of degree at most $k\Delta$ in the
original variables. Rewriting (\ref{eqn:integer-sum}),
\begin{align*}
\sum_{i=1}^n \sum_{j=1}^k 2^{i-1}x_{ij}
&= \left(\sum_{i\geq 1} 2^{i-1} z_{\ell(i),j(i)}\right)
+ \left(\sum_{i\geq \Delta+1} 2^{i-1} z_{\ell'(i),j'(i)}\right)
\end{align*}
for appropriate indexing functions $\ell(i),\ell'(i),j(i),j'(i).$ Thus,
\begin{align*}
f(x) \equiv \sign\left(
1 + \sum_{i=1}^\Delta 2^i \undercbrace{z_{\ell(i),j(i)}}
+ \sum_{i\geq \Delta+1}2^i \undercbrace{\left(z_{\ell(i),j(i)} +
z_{\ell'(i),j'(i)}\right)}
\right).
\end{align*}
Since the underbraced expressions range in $\{0,\pm1,\pm2\}$ and are
polynomials of degree at most $k\Delta$ in the original variables,
Lemma~\ref{lem:012-dfa} implies (\ref{eqn:R0}).
\end{proof}
\subsection{Preparatory work} \label{sec:preparatory}
This section sets the stage for our rational approximation
lower bounds with some preparatory
results about halfspaces. It will be convenient to establish some
additional notation, for use in this section only. Here, we
typeset real vectors in boldface ($\xbold_1,\xbold_2,\zbold,\vbold$) to better
distinguish them from scalars. The $i$th component of a vector
$\xbold\in \Re^n$ is denoted by $(\xbold)_i,$ while the symbol $\xbold_i$
is reserved for another \emph{vector} from some enumeration. In
keeping with this convention, we let $\ebold_i$ denote the vector with
$1$ in the $i$th component and zeroes everywhere else. For
$\xbold,\ybold\in\Re^n,$ the vector $\xbold\ybold\in\Re^n$ is given by
$(\xbold\ybold)_i\equiv (\xbold)_i(\ybold)_i.$ More generally, for a polynomial
$p$ on $\Re^k$ and vectors $\xbold_1,\dots,\xbold_k\in\Re^n,$ we define
$p(\xbold_1,\dots,\xbold_k)\in\Re^n$ by $(p(\xbold_1,\dots,\xbold_k))_i =
p((\xbold_1)_i,\dots,(\xbold_k)_i).$ The expectation of a random variable
$\xbold\in\Re^n$ is defined componentwise, i.e., the vector
$\Exp[\xbold]\in\Re^n$ is given by $(\Exp[\xbold])_i\equiv \Exp[(\xbold)_i].$
For convenience, we adopt the notational shorthand $\alpha^0=1$
for all $\alpha\in\Re.$
In particular, if $\xbold\in \Re^n$ is a
given vector, then $\xbold^0 = (1,1,\dots,1)\in \Re^n.$
A scalar $\alpha\in\Re,$ when interpreted as a vector, stands for
$(\alpha,\alpha,\dots,\alpha).$ This shorthand allows one to speak of
$\Span\{1,\zbold,\zbold^2,\dots,\zbold^k\},$ for example, where $\zbold\in\Re^n$ is
a given vector.
\begin{theorem}
\label{thm:mu-b}
Let $N$ and $m$ be positive integers. Then reals
$\alpha_0,\alpha_1,\dots,\alpha_{4m}$ exist with the following
property: for each $\bbold\in\zoo^N,$ there is a probability
distribution $\mu_\bbold$ on $\{0,\pm1,\dots,\pm m\}^N$ such that
\begin{align*}
\Exp_{\vbold\sim \mu_\bbold}[(2\vbold + \bbold)^d] =
(\alpha_d,\alpha_d,\dots,\alpha_d),
\qquad d=0,1,2,\dots,4m.
\end{align*}
\end{theorem}
\begin{proof}
Let $\lambda_0$ and $\lambda_1$ be the distributions on
$\{0,\pm1,\dots,\pm m\}$ given by
\begin{align*}
\lambda_0(t) = 16^{-m}{4m+1 \choose 2m+2t}, \qquad
\lambda_1(t) = 16^{-m}{4m+1 \choose 2m+2t+1}.
\end{align*}
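(Both $\lambda_0$ and $\lambda_1$ are probability distributions: as $t$
ranges over $\{0,\pm1,\dots,\pm m\},$ the indices $2m+2t$ and $2m+2t+1$ range
over the even and the odd elements of $\{0,1,\dots,4m+1\},$ respectively, and
each of the two binomial sums equals $2^{4m}=16^{m}.$)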
Then for $d=0,1,\dots,4m,$ one has
\begin{multline}
\Exp_{t\sim\lambda_0}[(2t)^d] - \Exp_{t\sim\lambda_1}[(2t+1)^d]\\
= 16^{-m} \sum_{t=0}^{4m+1} (-1)^t {4m+1\choose t}(t-2m)^d
=0, \qquad
\label{eqn:indisting}
\end{multline}
where (\ref{eqn:indisting}) holds by Fact~\ref{fact:comb}.
Now, let
$
\mu_\bbold=
\lambda_{(\bbold)_1} \times
\lambda_{(\bbold)_2} \times
\cdots
\times \lambda_{(\bbold)_N}.
$
Then in view of (\ref{eqn:indisting}), the theorem holds by letting
$\alpha_d = \Exp_{\lambda_0}[(2t)^d]$ for $d=0,1,2,\dots,4m.$
\end{proof}
Using the previous theorem, we will now establish another auxiliary
result pertaining to halfspaces.
\begin{theorem}
\label{thm:distribution}
Put $\zbold=(-2^n,-2^{n-1},\dots,-2^0,2^0,\dots,2^{n-1},2^n)\in \Re^{2n+2}.$
There are random
variables $\xbold_1,\xbold_2,\dots,\xbold_{n+1}\in
\{0,\pm1,\pm2,\dots,\pm (3n+1)\}^{2n+2}$ such that:
\begin{align}
\sum_{i=1}^{n+1} 2^{i-1}\xbold_i \equiv \zbold
\label{eqn:support}
\end{align}
and
\begin{align}
\Exp\left[\prod_{i=1}^n \xbold_i^{d_i}\right]
\in\Span\{(1,1,\dots,1)\}
\label{eqn:span-property}
\end{align}
for $d_1,\dots,d_n\in \{0,1,\dots,4n\}.$
\end{theorem}
\begin{proof}
Let
\begin{align*}
\xbold_i = 2\ybold_i-\ybold_{i-1} +\ebold_{n+1+i}-\ebold_{n+2-i}, \qquad
i=1,2,\dots,n+1,
\end{align*}
where $\ybold_0,\ybold_1,\dots,\ybold_{n+1}$ are suitable random variables
with $\ybold_0\equiv\ybold_{n+1}\equiv 0.$ Then property~(\ref{eqn:support})
is immediate. We will construct $\ybold_0,\ybold_1,\dots,\ybold_{n+1}$
such that the remaining property~(\ref{eqn:span-property}) holds as well.
Let $N=2n+2$ and $m=n$ in Theorem~\ref{thm:mu-b}. Then reals
$\alpha_0,\alpha_1,\dots,\alpha_{4n}$ exist with the property that for each
$\bbold\in\zoo^{2n+2},$ a probability distribution $\mu_\bbold$ can be
found on $\{0,\pm1,\dots,\pm n\}^{2n+2}$ such that
\begin{align}
\Exp_{\vbold\sim\mu_\bbold} [(2\vbold +\bbold)^d]=\alpha_d(1,1,\dots,1),
\qquad d=0,1,\dots,4n.
\label{eqn:alpha_d}
\end{align}
Now, we will specify the distribution of $\ybold_0,\ybold_1,\dots,\ybold_n$ by
giving an algorithm for generating $\ybold_i$ from $\ybold_{i-1}.$ First, recall
that $\ybold_0\equiv \ybold_{n+1}\equiv 0.$
The algorithm for generating $\ybold_{i}$ given
$\ybold_{i-1}$ $(i=1,2,\dots,n)$ is as follows.
\begin{itemize}
\item [\textup{(1)}] Let $\ubold$ be the unique integer vector
such that $2\ubold - \ybold_{i-1} + \ebold_{n+1+i} - \ebold_{n+2-i} \in\zoo^{2n+2}.$
\item [\textup{(2)}] Draw a random vector $\vbold\sim \mu_{\bbold},$
where $\bbold=2\ubold - \ybold_{i-1} + \ebold_{n+1+i} - \ebold_{n+2-i}.$
\item [\textup{(3)}] Set $\ybold_i = \vbold + \ubold.$
\end{itemize}
One easily verifies that
$\ybold_0,\ybold_1,\dots,\ybold_{n+1}\in\{0,\pm1,\dots,\pm 3n\}^{2n+2}.$
Let $R$ denote the resulting joint distribution of
$(\ybold_0,\ybold_1,\dots,\ybold_{n+1}).$ Let $i\leq n.$ Then conditioned
on any fixed value of $(\ybold_0,\ybold_1,\dots,\ybold_{i-1})$ in the support
of $R,$ the random variable $\xbold_i$ is by definition independent of
$\xbold_1,\dots,\xbold_{i-1}$ and is distributed identically to $2\vbold
+\bbold,$ for some fixed vector $\bbold\in\zoo^{2n+2}$ and a random
variable $\vbold\sim \mu_\bbold.$ In view of (\ref{eqn:alpha_d}), we
conclude that
\begin{align*}
\Exp\left[\prod_{i=1}^n \xbold_i^{d_i}\right]
= (1,1,\dots,1) \prod_{i=1}^n \alpha_{d_i}
\end{align*}
for all $d_1,d_2,\dots,d_n\in\{0,1,\dots,4n\},$
which establishes (\ref{eqn:span-property}). It remains to note that
$\xbold_1,\xbold_2,\dots,\xbold_n\in\{-2n,-2n+1,\dots,-1,0,1,\dots,
2n,2n+1\}^{2n+2},$ whereas $\xbold_{n+1}= -\ybold_n
+\ebold_{2n+2}-\ebold_1\in\{0,\pm1,\dots,\pm(3n+1)\}^{2n+2}.$
\end{proof}
At last, we arrive at the main theorem of this section, which will play a
crucial role in our analysis of the rational approximation of halfspaces.
\begin{theorem}
\label{thm:aux-thr-deg}
For $i=0,1,2,\dots,n,$ define
\begin{align*}
A_i &= \BRACES{(x_1,\dots,x_{n+1})\in\{0,\pm1,\dots,\pm(3n+1)\}^{n+1}
\colon \quad
\sum_{j=1}^{n+1} 2^{j-1}x_j = 2^i}.
\end{align*}
Let $p(x_1,\dots,x_{n+1})$ be a real polynomial with sign $(-1)^i$
throughout $A_i$ $(i=0,1,2,\dots,n)$ and sign $(-1)^{i+1}$ throughout
$-A_i$ $(i=0,1,2,\dots,n).$ Then
\begin{align*}
\deg p \geq 2n+1.
\end{align*}
\end{theorem}
\begin{proof}
For the sake of contradiction, suppose that $p$ has degree no greater
than $2n.$ Put $\zbold=(-2^n,-2^{n-1},\dots,-2^0,2^0,\dots,2^{n-1},2^n).$
Let $\xbold_1,\dots,\xbold_{n+1}$ be the random variables constructed in
Theorem~\ref{thm:distribution}. By (\ref{eqn:span-property}) and
the identity $\xbold_{n+1}\equiv2^{-n}\zbold - \sum_{i=1}^n 2^{i-n-1}\xbold_i,$
we have
\begin{align*}
\Exp[p(\xbold_1,\dots,\xbold_{n+1})] \in \Span\{1,\zbold,\zbold^2,\dots,\zbold^{2n}\},
\end{align*}
whence
$
\Exp[p(\xbold_1,\dots,\xbold_{n+1})] = q(\zbold)
$
for a univariate polynomial $q\in P_{2n}.$
In view of (\ref{eqn:support}) and the
assumed sign behavior of $p,$ we have $\sign q(2^i) = (-1)^i$ and $\sign
q(-2^i) = (-1)^{i+1},$ for $i=0,1,2,\dots,n.$ Therefore, $q$ has
at least $2n+1$ roots. Since $q\in P_{2n},$ we arrive at a
contradiction. It follows that the assumed polynomial $p$ does
not exist.
\end{proof}
\begin{remark}
The passage $p\mapsto q$ in the proof of Theorem~\ref{thm:aux-thr-deg}
is precisely the linear degree-nonincreasing map
$M\colon\Re[x_1,x_2,\dots,x_{n+1}]\to\Re[x]$ described previously
in the Introduction.
\end{remark}
\subsection{Lower bounds}
\label{sec:rational}
The purpose of this section is to prove that the canonical halfspace
cannot be approximated well by a rational function of low degree. A
starting point in our discussion is a criterion for inapproximability
by low-degree rational functions, which is applicable not only to
halfspaces but to any odd Boolean function on Euclidean space.
\begin{theorem}[Criterion for inapproximability]
\label{thm:rational-degree-criterion-hs}
Fix a nonempty finite subset $S\subset\Re^m$ with $S\cap -S=\varnothing.$
Define $f\colon S\cup -S\to\moo$ by
\begin{align*}
f(x) = \ccases{
+1, & x\in S, \\
-1, & x\in -S.
}
\end{align*}
Let $\psi$ be a real function such that
\begin{align}
\psi(x) >
\delta |\psi(-x)|, && x\in S,
\label{eqn:r-unbalanced-hs}
\end{align}
for some $\delta\in(0,1)$ and
\begin{align}
\sum_{S\cup -S} \psi(x) u(x) = 0
\label{eqn:r-orthog}
\end{align}
for every polynomial $u$ of degree at most $d.$ Then
\begin{align*}
R^+(f,d) \geq \frac{2\delta}{1 + \delta}.
\end{align*}
\end{theorem}
\begin{proof}
Fix polynomials $p,q$ of degree at most $d$ such that $q$ is positive on
$S\cup -S.$ Put
\[ \epsilon = \max_{S\cup -S}
\left| f(x) - \frac{p(x)}{q(x)} \right|. \]
We assume that $\epsilon<1$ since otherwise there is nothing to show.
For $x\in S,$
\begin{align}
(1-\epsilon)q(x) \leq p(x) \leq (1+\epsilon)q(x)
\label{eqn:pqeps1-hs}
\end{align}
and
\begin{align}
(1-\epsilon)q(-x) \leq -p(-x) \leq (1+\epsilon)q(-x).
\label{eqn:pqeps2-hs}
\end{align}
Consider the polynomial $u(x) = q(x) + q(-x) + p(x) - p(-x).$
Equations (\ref{eqn:pqeps1-hs}) and (\ref{eqn:pqeps2-hs}) show
that for $x\in S,$ one has
$u(x) \geq (2-\epsilon) \{ q(x) + q(-x) \}$
and
$|u(-x)| \leq \epsilon \{ q(x) + q(-x) \},$
whence
\begin{align}
u&(x) \geq \PARENS{\frac 2\epsilon -1} |u(-x)|, &&x\in S.
\label{eqn:u-unbalanced-hs}
\intertext{We also note that}
u&(x) >0, &&x\in S.
\label{eqn:u-positive-hs}
\end{align}
Since $u$ has degree at most $d,$ we have by (\ref{eqn:r-orthog}) that
\begin{align*}
\sum_{x\in S} \{ \psi(x) u(x) + \psi(-x) u(-x)\}
= \sum_{S\cup -S} \psi(x) u(x)
= 0,
\end{align*}
whence
\begin{align*}
\psi(x) u(x) \leq |\psi(-x) u(-x)|
\end{align*}
for some $x\in S.$ At the same time, it follows from
(\ref{eqn:r-unbalanced-hs}),
(\ref{eqn:u-unbalanced-hs}), and (\ref{eqn:u-positive-hs}) that
\begin{align*}
\psi(x) u(x) > \delta\PARENS{\frac 2\epsilon -1}
\lvert \psi(-x) u(-x)\rvert, && x\in S.
\end{align*}
We immediately obtain
$\delta(2/\epsilon -1)< 1,$ i.e., $\epsilon > 2\delta/(1+\delta),$
as was to be shown.
\end{proof}
\begin{remark}
The method of Theorem~\ref{thm:rational-degree-criterion-hs} amounts
to reformulating (\ref{eqn:u-unbalanced-hs}) and (\ref{eqn:u-positive-hs})
as a linear program and exhibiting a solution to its dual. The
presentation above does not explicitly use the language of linear
programs or appeal to duality, however, because our goal is solely
to prove the correctness of our method and not its completeness.
\end{remark}
Using the criterion of Theorem~\ref{thm:rational-degree-criterion-hs} and our
preparatory work in Section~\ref{sec:preparatory}, we now establish a
key lower bound for the rational approximation of halfspaces within constant
error.
\begin{theorem}
\label{thm:rtl-approx-of-halfspace}
Let $f\colon \{0,\pm1,\dots,\pm(3n+1)\}^{n+1}\to\moo$ be given by
\begin{align*}
f(x) = \sign\PARENS{1 + \sum_{i=1}^{n+1}2^ix_i}.
\end{align*}
Then
\begin{align*}
R^+(f,n)=\Omega(1).
\end{align*}
\end{theorem}
\begin{proof}
Let $A_0,A_1,\dots,A_n$ be as defined in Theorem~\ref{thm:aux-thr-deg}. Put
$A=\bigcup A_i$ and define $g\colon A\cup -A\to\moo$ by
\begin{align*}
g(x) =
\ccases{
(-1)^i,&x\in A_i,\\
(-1)^{i+1},&x\in -A_i.
}
\end{align*}
Then $\degthr(f)>2n$ by Theorem~\ref{thm:aux-thr-deg}. As a result,
Theorem~\ref{thm:gordan} guarantees the existence of a function
$\phi\colon A\cup -A\to\Re,$ not identically zero, such that
\begin{align}
\phi(x)g(x)\geq 0, \qquad x\in A\cup -A, \label{eqn:signrep}
\end{align}
and
\begin{align}
\sum_{A\cup -A} \phi(x)u(x) = 0 \label{eqn:phiorthog}
\end{align}
for every polynomial $u$ of degree at most $2n.$ Put
\begin{align*}
p(x) = \prod_{j=0}^{n-1}
\PARENS{ - 2^j\sqrt 2 + \sum_{i=1}^{n+1} 2^{i-1}x_i}
\end{align*}
and
\begin{align*}
\psi(x) = (-1)^n\{\phi(x)-\phi(-x)\}p(x).
\end{align*}
Define $S=A\setminus\psi^{-1}(0).$ Then $S\ne\varnothing$
by (\ref{eqn:signrep}) and the fact that
$\phi$ is not identically zero on $A\cup -A.$
For $x\in S,$ we have $\psi(-x)\ne 0$ and
\begin{align*}
\frac{\lvert\psi(x)\rvert}{\lvert\psi(-x)\rvert}
= \frac{\lvert p(x)\rvert}{\lvert p(-x)\rvert}
> \PARENS{\prod_{i=1}^\infty \frac{2^{i/2}-1}{2^{i/2}+1}}^2
> \exp(-9\sqrt 2),
\end{align*}
where the final step uses the bound $(a-1)/(a+1)>\exp(-2.5/a),$
valid for $a\geq\sqrt 2.$
It follows from (\ref{eqn:signrep}) and the definition of $p$
that $\psi$ is positive on $S.$ Hence,
\begin{align}
\psi(x) > \exp(-9\sqrt 2)\; \lvert\psi(-x)\rvert, \qquad x\in S.
\label{eqn:psibalance}
\end{align}
For any polynomial $u$ of degree no greater than $n,$ we infer from
(\ref{eqn:phiorthog}) that
\begin{align}
\sum_{S\cup -S} \psi(x)u(x) =
(-1)^n\sum_{A\cup -A}
\{\phi(x)-\phi(-x)\} u(x) p(x) = 0.
\label{eqn:psiorthog}
\end{align}
Since $f$ is positive on $S$ and negative on $-S,$
the proof is now complete in view of (\ref{eqn:psibalance}),
(\ref{eqn:psiorthog}), and Theorem~\ref{thm:rational-degree-criterion-hs}.
\end{proof}
We have reached the main result of this section, which extends
Theorem~\ref{thm:rtl-approx-of-halfspace} to any subconstant
approximation error and to halfspaces on the hypercube.
\begin{theorem}
\label{thm:main-halfspace}
Let $F\colon\moo^{m^2}\to\moo$ be given by
\begin{align*}
F(x) = \sign\PARENS{1 + \sum_{i=1}^m\sum_{j=1}^m 2^i x_{ij}}.
\end{align*}
Then for $d<m/14,$
\begin{align}
R(F,d) \geq 1 - 2^{-\Theta(m/d)}.
\label{eqn:approx-halfspace-lower}
\end{align}
\end{theorem}
Observe that Theorem~\ref{thm:main-halfspace} settles
the lower bounds in Theorem~\ref{thm:main-approx-hs} from the
Introduction.
\begin{proof}[Proof of Theorem~\textup{\ref{thm:main-halfspace}}.]
We may assume that $m\geq14,$ the claim being trivial otherwise.
Consider the function $G\colon
\moo^{(n+1)(6n+2)}\to\moo$
given by
\begin{align*}
G(x) = \sign\PARENS{1 + \sum_{i=1}^{n+1}
\; \sum_{j=1}^{6n+2}2^ix_{ij}},
\end{align*}
where $n=\lfloor (m-2)/6 \rfloor.$ For every $\epsilon>R^+(G,n),$
Proposition~\ref{prop:symm-rational} provides a rational function
$A$ on $\Re^{n+1}$ of degree at most $n$ such that, on the domain
of $G,$
\begin{align*}
\left\lvert G(x)-A\PARENS{\dots,\sum_{j=1}^{6n+2}
x_{ij},\dots}\right\rvert < \epsilon
\end{align*}
and the denominator of $A$ is positive.
Letting $f$ be the function in
Theorem~\ref{thm:rtl-approx-of-halfspace}, it follows that
$| f(x_1,\dots,x_{n+1}) - A(2x_1,\dots,2x_{n+1}) |<\epsilon$
on the domain of $f,$ whence
\begin{align}
R^+(G,n) = \Omega(1).
\label{eqn:RnG}
\end{align}
We now claim that either $G(x)$ or $-G(-x)$ is a subfunction of
$F.$ For example, consider the following substitution for the variables
$x_{ij}$ for which $i>n+1$ or $j>6n+2$:
\begin{align*}
&x_{mj}\gets (-1)^j, &&(1\leq j\leq m),\\
&x_{ij}\gets (-1)^{j+1}, &&(n+1<i<m,\quad 1\leq j\leq m),\\
&x_{ij}\gets (-1)^{j+1}, &&(1\leq i\leq n+1, \quad \, j>6n+2).
\end{align*}
After this substitution, $F$ is a function of the remaining variables
$x_{ij}$ and is equivalent to $G(x)$ if $m$ is even, and to $-G(-x)$
if $m$ is odd. In either case, (\ref{eqn:RnG}) implies that
\begin{align}
R^+(F,n) = \Omega(1).
\label{eqn:R+nF}
\end{align}
Theorem~\ref{thm:error-boosting} shows that
\begin{align*}
R(F,n/2)\leq 1 - \PARENS{\frac{1 - R(F,d)}{2}}^{1/\lfloor n/(2d)\rfloor}
\end{align*}
for $d=1,2,\dots,\lfloor n/2\rfloor,$ which yields
(\ref{eqn:approx-halfspace-lower}) in light of
(\ref{eqn:rational-positive-denominator}) and (\ref{eqn:R+nF}).
\end{proof}
\section{Rational approximation of the majority function}
\label{sec:rational-approx-maj}
The goal of this section is to determine $R^+(\MAJ_n,d)$ for each
integer $d,$ i.e., to determine the least error to which a degree-$d$
multivariate rational function can approximate the majority function.
As is frequently the case with symmetric Boolean functions such as
majority, the multivariate problem of analyzing
$R^+(\MAJ_n,d)$ is equivalent to a univariate question. Specifically, given an
integer $d$ and a finite set $S\subset\Re,$ we define
\[ R^+(d,S) \,= \,\inf_{p,q}\, \max_{t\in S}
\left\lvert \sign t - \frac{p(t)}{q(t)} \right\rvert,\]
where the infimum ranges over $p,q\in P_d$ such that $q$ is positive on
$S.$ In other words, we study how well a rational
function of a given degree can approximate
the sign function over a finite support. We give a
detailed answer to this question in the following theorem:
\begin{theorem}[Rational approximation of \textsc{majority}]
\label{thm:R+}
Let $n,d$ be positive integers. Abbreviate
$R=R^+(d,\{\pm1,\pm2,\dots,\pm n\}).$
For $1\leq d\leq\log n,$
\[\exp\BRACES{-\Theta\PARENS{\frac1{n^{1/(2d)}}}}
\leq R < \exp\BRACES{-\frac1{n^{1/d}}}. \]
For $\log n < d <n,$
\[ R=\exp\BRACES{-\Theta\PARENS{\frac{d}{\log (2n/d)}}}. \]
For $d\geq n,$ \[ R=0.\]
Moreover, the rational approximant is constructed explicitly in
each case.
\end{theorem}
Theorem~\ref{thm:R+} is the main result of this section. We establish
it in the next two subsections, giving separate treatment to
the cases $d\leq\log n$
and $d>\log n$ (see Theorems~\ref{thm:rational-small-degree}
and~\ref{thm:rational-high-degree}, respectively). In the concluding
subsection, we give the promised proof that
$R^+(d,\{\pm1,\dots,\pm n\})$
and $R^+(\MAJ_n,d)$ are essentially equivalent.
\subsection{Low-degree approximation}
We start by specializing the criterion of
Theorem~\ref{thm:rational-degree-criterion-hs} to the problem of
approximating the sign function on the set $\{\pm1,\pm2,\dots,\pm
n\}.$
\begin{theorem}
\label{thm:rational-degree-criterion-maj}
Let $d$ be an integer, $0\leq d\leq 2n-1.$
Fix a nonempty subset $S\subseteq\{1,2,\dots,n\}.$
Suppose that there exists a real $\delta\in(0,1)$
and a polynomial $r\in P_{\,2n-d-1}$ that vanishes on
$\{-n,\dots,n\}\setminus (S\cup -S)$ and obeys
\begin{equation}
(-1)^t r(t) > \delta \lvert r(-t)\rvert,\qquad t\in S.
\label{eqn:r-unbalanced-maj}
\end{equation}
Then
\begin{align}
R^+(d,S\cup -S) \geq \frac{2\delta}{1+\delta}.
\label{eqn:maj-criterion}
\end{align}
\end{theorem}
\begin{proof}
Define $f\colon S\cup-S\to\moo$ by $f(t)=\sign t.$ Define $\psi\colon S\cup
-S\to\Re$ by
$\psi(t) = (-1)^t {2n\choose n+t} r(t).$
Then (\ref{eqn:r-unbalanced-maj}) takes on the form
\begin{align}
\psi(t) > \delta \lvert \psi(-t)\rvert, \qquad t\in S.
\label{eqn:maj-psi-unbalanced}
\end{align}
For every polynomial $u$ of degree at most $d,$ we have
\begin{align}
\sum_{S\cup -S} \psi(t)u(t) =
\sum_{t=-n}^n (-1)^t {2n\choose n+t} r(t)u(t)=0
\label{eqn:maj-psi-orthogonal}
\end{align}
by Fact~\ref{fact:comb}. Now (\ref{eqn:maj-criterion}) is immediate
from (\ref{eqn:maj-psi-unbalanced}), (\ref{eqn:maj-psi-orthogonal}),
and Theorem~\ref{thm:rational-degree-criterion-hs}.
\end{proof}
Using Theorem~\ref{thm:rational-degree-criterion-maj}, we will
now determine the optimal error in the approximation of the majority
function by rational functions of degree up to $\log n.$ The case
of higher degrees will be settled in the next subsection.
\begin{theorem}[Low-degree rational approximation of \textsc{majority}]
\label{thm:rational-small-degree}
Let $d$ be an integer, $1\leq d\leq \log n.$ Then
\[\exp\BRACES{-\Theta\PARENS{\frac1{n^{1/(2d)}}}}
\leq R^+(d,\{\pm1,\pm2,\dots,\pm n\})
< \exp\BRACES{-\frac1{n^{1/d}}}. \]
\end{theorem}
\begin{proof}
The upper bound is immediate from Newman's Theorem~\ref{thm:newman-approx}.
For the lower bound, put $\Delta = \lfloor n^{1/d}\rfloor\geq2$ and
$S = \{1,\Delta,\Delta^2,\dots,\Delta^d\}.$
Define $r\in P_{\,2n-d-1}$ by
\[ r(t) = (-1)^n\prod_{i=0}^{d-1} (t-\Delta^i\sqrt\Delta)
\prod_{i\in \{-n,\dots,n\}\setminus (S\cup-S)} (t-i). \]
For $j=0,1,2,\dots,d,$
\begin{align*}
\frac{|r(\Delta^j)|}{|r(-\Delta^j)|} &=
\prod_{i=0}^{j-1}
\frac{\Delta^j - \Delta^i\sqrt\Delta}
{\Delta^j + \Delta^i\sqrt\Delta}
\;
\prod_{i=j}^{d-1}
\frac{\Delta^i\sqrt\Delta - \Delta^j}
{\Delta^i\sqrt\Delta + \Delta^j}
> \PARENS{\prod_{i=1}^\infty
\frac{\Delta^{i/2}- 1}{\Delta^{i/2}+ 1}}^2\\
&> \exp\BRACES{-5\sum_{i=1}^\infty
\frac1{\Delta^{i/2}}}
>\exp\BRACES{- \frac{18}{\sqrt\Delta}},
\end{align*}
where we used the bound $(a-1)/(a+1)>\exp(-2.5/a),$
valid for $a\geq\sqrt2.$
Since $\sign r(t)=(-1)^t$ for $t\in S,$ we conclude that
\[ (-1)^t r(t) >
\exp\BRACES{-\frac{18}{\sqrt\Delta}}
\lvert r(-t)\rvert, \qquad
t\in S.\]
Since in addition $r$ vanishes on $\{-n,\dots,n\}\setminus (S\cup-S),$
we infer from Theorem~\ref{thm:rational-degree-criterion-maj} that
$R^+(d,S\cup -S)\geq\exp\{-18/\sqrt \Delta\}.$ Since $S\cup -S\subseteq\{\pm1,\pm2,\dots,\pm n\}$ and $\sqrt\Delta=\Theta(n^{1/(2d)}),$ the claimed lower bound follows.
\end{proof}
\subsection{High-degree approximation}
In the previous subsection, we determined the least error in
approximating the majority function by rational functions of degree
up to $\log n.$ Our goal here is to solve the case of higher degrees.
We start with some preparatory work. First, we need to
accurately estimate products of the form $\prod_i
(\Delta^i+1)/(\Delta^i-1)$ for all $\Delta>1.$
A suitable \emph{lower} bound was already given by
Newman~\cite[Lem.~1]{newman64rational}:
\begin{lemma}[Newman]
\label{lem:newman-exp-product}
For all $\Delta>1,$
\[ \prod_{i=1}^n \frac{\Delta^i+1}{\Delta^i-1} >
\exp\BRACES{\frac{2(\Delta^n-1)}{\Delta^n(\Delta-1)}}. \]
\end{lemma}
\begin{proof}
Immediate from the bound $(a+1)/(a-1)>\exp(2/a),$ which is
valid for $a>1.$
\end{proof}
We will need a corresponding upper bound:
\begin{lemma}
\label{lem:infinite-product}
For all $\Delta>1,$
\[ \prod_{i=1}^\infty \frac{\Delta^i + 1}{\Delta^i - 1} <
\exp\BRACES{\frac 4{\Delta-1}}.
\]
\end{lemma}
\begin{proof}
Let $k\geq0$ be an integer.
By the binomial theorem, $\Delta^i\geq(\Delta-1)i+1$ for integers $i\geq0.$
As a result,
\begin{align*}
\prod_{i=1}^k \frac{\Delta^i + 1}{\Delta^i - 1}
\leq \prod_{i=1}^k \frac{1}{i}\PARENS{i+ \frac{2}{\Delta-1}} \leq
{k+\left\lceil\frac{2}{\Delta-1}\right\rceil \choose k}.
\end{align*}
Also,
\begin{align*}
\prod_{i=k+1}^\infty \frac{\Delta^i + 1}{\Delta^i - 1}
<\prod_{i=0}^\infty \PARENS{1 + \frac{2}{(\Delta^{k+1}-1)\Delta^i}}
<\exp\BRACES{\frac{2\Delta}{(\Delta^{k+1}-1)(\Delta-1)}}.
\end{align*}
Setting $k=k(\Delta)=\left\lfloor \frac2{\Delta-1}\right\rfloor,$
we conclude that
\[ \prod_{i=1}^\infty \frac{\Delta^i + 1}{\Delta^i - 1}
< \exp\BRACES{\frac C{\Delta-1} }, \]
where
\begin{align*}
C=\sup_{\Delta>1}\BRACES{ (\Delta-1)\ln
{k(\Delta)+\left\lceil\frac{2}{\Delta-1}\right\rceil \choose k(\Delta)}
+ \frac{2\Delta}{\Delta^{k(\Delta)+1}-1}}<4.
\tag*{\qedhere}
\end{align*}
\end{proof}
We will also need the following binomial estimate.
\begin{lemma}
\label{lem:binomial-ratio}
Put $p(t) = \prod_{i=1}^n \left(t-i-\frac12\right).$ Then
\[ \max_{t=1,2,\dots,n+1} \left\lvert \frac{p(-t)}{p(t)}\right\rvert \leq
\Theta(16^n). \]
\end{lemma}
\begin{proof}
For $t=1,2,\dots,n+1,$ we have
\begin{align*}
|p(t)| = \frac{(2t-2)! (2n-2t+2)!}{4^n (t-1)!(n-t+1)!},\quad
|p(-t)| = \frac{t!(2n+2t+1)!}{4^n (2t+1)!(n+t)!}.
\end{align*}
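As a quick check of these expressions, take $n=1$ and $t=1$: then
$p(t)=t-\frac32,$ and the formulas give
$|p(1)|=\frac{0!\,2!}{4\cdot 0!\,1!}=\frac12$ and
$|p(-1)|=\frac{1!\,5!}{4\cdot 3!\,2!}=\frac52,$ as expected.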
As a result,
\begin{align*}
\left\lvert\frac{p(-t)}{p(t)}\right\rvert
&= \frac t{2t+1} \cdot
\frac{ \displaystyle {2n+2t+1\choose 2t}{2n+1\choose n+t}}
{ \displaystyle {2t -2\choose t-1}{2n-2t+2 \choose n-t+1}}
\leq \frac{\displaystyle \Theta\PARENS{\frac{2^{4n}}{\sqrt n}}
\Theta\PARENS{\frac{2^{2n}}{\sqrt n}}}
{\displaystyle \Theta\PARENS{\frac{2^{2n}}n}},
\end{align*}
which gives the sought bound.
\end{proof}
Our construction requires one additional ingredient.
\begin{lemma}
\label{lemma:floors}
Let $n,d$ be integers, $1\leq d\leq n/55.$ Consider the
polynomial
$p(t) = \prod_{i=1}^{d-1} (t-d \Delta^i\sqrt \Delta),$
where $\Delta=(n/d)^{1/d}.$ Then
\[ \min_{j=1,\dots,d} \left|
\frac{p(\lfloor d\Delta^j\rfloor)}{p(-\lfloor d\Delta^j\rfloor)} \right|
>\exp\BRACES{-\frac{4\ln3d}{\ln(n/d)} -\frac{8}{\sqrt \Delta-1}}.
\]
\end{lemma}
\begin{proof}
Fix $j=1,2,\dots,d.$ Then for each $i=1,2,\dots,j-1,$
\[ d\Delta^j - d\Delta^i\sqrt \Delta \geq
d\left(\Delta^{j-i-\frac12}-1\right)
\geq \frac12\, (j-i) \ln\frac nd, \]
and thus
\begin{align}
\prod_{i=1}^{j-1} \PARENS{1 - \frac1{d\Delta^j - d\Delta^i\sqrt \Delta}}
&\geq \exp\BRACES{-\frac{4}{\ln(n/d)}\sum_{i=1}^{j-1}
\frac{1}{j-i}
}\nonumber\\
&\geq \exp\BRACES{-\frac{4\ln 3d}{\ln(n/d)} }.
\label{eqn:linearization}
\end{align}
For brevity, let $\xi$ stand for the final expression in
(\ref{eqn:linearization}).
Since $1\leq d\leq n/55,$ we have $\lfloor d\Delta^j\rfloor
-d\Delta^{j-1}\sqrt \Delta>1.$
As a result,
\begin{align*}
\left|\frac{p(\lfloor d\Delta^j\rfloor)}{p(-\lfloor d\Delta^j\rfloor)}\right|
&\geq \prod_{i=1}^{j-1}
\frac{d\Delta^j-1-d\Delta^i\sqrt \Delta}{d\Delta^j+d\Delta^i\sqrt \Delta}
\;\; \prod_{i=j}^{d-1}
\frac{d\Delta^i\sqrt \Delta-d\Delta^j}{d\Delta^i\sqrt \Delta+d\Delta^j}\\
&\geq \xi
\prod_{i=1}^{j-1}
\frac{d\Delta^j-d\Delta^i\sqrt \Delta}{d\Delta^j+d\Delta^i\sqrt \Delta}
\;\;\prod_{i=j}^{d-1}
\frac{d\Delta^i\sqrt \Delta-d\Delta^j}{d\Delta^i\sqrt \Delta+d\Delta^j}
&&\text{by (\ref{eqn:linearization})}\\
&> \xi
\PARENS{\prod_{i=1}^\infty \frac{\Delta^{i/2} - 1}{\Delta^{i/2} +1}}^2\\
&\geq \xi \exp\BRACES{-\frac 8{\sqrt \Delta-1}},
\end{align*}
where the last inequality holds by Lemma~\ref{lem:infinite-product}.
\end{proof}
We have reached the main result of this subsection.
\begin{theorem}[High-degree rational approximation of \textsc{majority}]
\label{thm:rational-high-degree}
Let $d$ be an integer, $\log n<d\leq n-1.$ Then
\begin{align*}
R^+(d,\{\pm1,\pm2,\dots,\pm n\}) =
\exp\BRACES{-\Theta\PARENS{\frac{d}{\log (2n/d)}}}.
\end{align*}
Also,
\[ R^+(n,\{\pm1,\pm2,\dots,\pm n\}) =0. \]
\end{theorem}
\begin{proof}
The final statement in the theorem follows at once by considering
the rational function $\{p(t)-p(-t)\}/\{p(t)+p(-t)\},$ where $p(t)
= \prod_{i=1}^n (t+i).$
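(Indeed, for $t\in\{1,2,\dots,n\}$ we have $p(-t)=0$ and $p(t)>0,$ so this
degree-$n$ rational function equals $1=\sign t$ there; by symmetry it equals
$-1$ for $t\in\{-n,\dots,-1\},$ and its denominator $p(t)+p(-t)$ is positive
throughout $\{\pm1,\pm2,\dots,\pm n\}.$)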
Now assume that $\log n < d < n/55.$ Let
\[ k = \left\lceil \frac{d}{\log (n/d)} \right\rceil, \qquad
\Delta=\PARENS{\frac nd}^{1/d}. \]
Define sets
\begin{align*}
S_1 &= \{1,2,\dots,k\}, \\
S_2 &= \rule{0mm}{4mm} \{\lfloor d \Delta^i\rfloor\,:\, i=1,2,\dots,d \},\\
S_{\phantom{1}} &= \rule{0mm}{4mm} S_1\cup S_2.
\end{align*}
Consider the polynomial
\[ r(t) = (-1)^n r_1(t) r_2(t)
\prod_{i\in\{-n,\dots,n\}\setminus (S\cup -S)} (t-i),
\]
where
\[ r_1(t) =
\prod_{i=1}^{k} \PARENS{t-i-\frac12}, \qquad
r_2(t) = \prod_{i=1}^{d-1} (t-d \Delta^i\sqrt \Delta).
\]
We have:
\begin{align*}
\min_{t\in S} \left| \frac{r(t)}{r(-t)} \right|
&\geq
\min_{i=1,\dots,k+1} \left| \frac{r_1(i)}{r_1(-i)} \right|
\cdot
\min_{i=1,\dots,d} \left|
\frac{r_2(\lfloor d\Delta^i\rfloor)}{r_2(-\lfloor d\Delta^i\rfloor)} \right|\\
&> \exp\BRACES{-\frac{Cd}{\log (n/d)}}
\end{align*}
by Lemmas~\ref{lem:binomial-ratio} and~\ref{lemma:floors},
where $C>0$ is an absolute constant. Since $\sign r(t)=(-1)^t$ for
$t\in S,$ we can restate this result as follows:
\[ (-1)^t r(t) >
\exp\BRACES{-\frac{Cd}{\log (n/d)}}
|r(-t)|, \qquad t\in S. \]
Since $r$ vanishes on $\{-n,\dots,n\}\setminus (S\cup -S)$
and has degree $\leq 2n-1-d,$ we infer from
Theorem~\ref{thm:rational-degree-criterion-maj} that
$ R^+(d,S\cup-S) \geq \exp\BRACES{-Cd/\log (n/d)}. $
This proves the lower bound for the case $\log n < d < n/55.$
To handle the case $n/55\leq d \leq n-1,$ a different argument is
needed. Let
\[ r(t)=(-1)^n\,t\,\prod_{i=1}^d \PARENS{t-i-\frac12}
\prod_{i=d+2}^n (t^2-i^2). \]
By Lemma~\ref{lem:binomial-ratio},
there is an absolute constant $C>1$ such that
\[ \left\lvert\frac{r(t)}{r(-t)}\right\rvert
> C^{-d}, \qquad t=1,2,\dots,d+1.
\]
Since $\sign r(t)=(-1)^t$ for $t=1,2,\dots,d+1,$ we conclude that
\[ (-1)^t r(t) > C^{-d} |r(-t)|, \qquad t=1,2,\dots,d+1. \]
Since the polynomial $r$
vanishes on $\{-n,\dots,n\}\setminus \{\pm1,\pm2,\dots,\pm (d+1)\}$
and has degree $2n-1-d,$
we infer from Theorem~\ref{thm:rational-degree-criterion-maj} that
\[ R^+(d,\{\pm1,\pm2,\dots,\pm (d+1)\}) \geq C^{-d}. \]
This settles the lower bound for the case $n/55\leq d\leq n-1.$
It remains to prove the upper bound for the case $\log n<d\leq n-1.$ Here
we always have $d\geq2.$
Letting $k=\lfloor d/2\rfloor$ and $\Delta=(n/k)^{1/k},$
define $p\in P_{\,2k}$ by
\[ p(t) = \prod_{i=1}^k (t+i) \prod_{i=1}^k (t+k\Delta^i). \]
Fix any point $t\in \{1,2,\dots,n\}$ with $p(-t)\ne0.$ Letting $i^*$ be the
integer with $k \Delta^{i^*}<t<k \Delta^{i^*+1},$ we have:
\begin{align*}
\frac{p(t)}{|p(-t)|}
&> \prod_{i=0}^{i^*} \frac{k\Delta^{i^*+1} + k\Delta^i}{k\Delta^{i^*+1}-k\Delta^i}
\prod_{i=i^*+1}^{k} \frac{k\Delta^i + k\Delta^{i^*}}{k\Delta^i-k\Delta^{i^*}}
\geq \prod_{i=1}^k \frac{\Delta^i + 1}{\Delta^i-1}\\
&> \exp\BRACES{\frac{2(\Delta^k-1)}{\Delta^k(\Delta-1)}},
\end{align*}
where the last inequality holds by Lemma~\ref{lem:newman-exp-product}.
Substituting $\Delta=(n/k)^{1/k}$ and recalling that
$k\geq\Theta(\log n),$ we obtain
$p(t) > A |p(-t)|$ for $t=1,2,\dots,n,$ where
\[ A = \exp\BRACES{\Theta\PARENS{\frac{k}{\log (n/k)}}}. \]
As a result, $R^+(2k,\{\pm1,\pm2,\dots,\pm n\}) \leq2A/(A^2+1),$
the approximant in question being
\begin{align*}
\frac{A^2-1}{A^2+1}\cdot \frac{p(t)-p(-t)}{p(t)+p(-t)}.
\tag*{\qedhere}
\end{align*}
\end{proof}
\subsection{Equivalence of the majority and sign functions}
It remains to prove the promised equivalence of the majority and sign
functions, from the standpoint of approximating them by rational functions
on the discrete domain. We have:
\begin{theorem}
\label{thm:maj-vs-sign}
For every integer $d,$
\begin{align}
R^+(\MAJ_n,d) &\leq R^+(d-2,\{\pm1,\pm2,\dots,\pm \lceil n/2\rceil\}),
\label{eqn:upper-bound-R} \\
R^+(\MAJ_n,d) &\geq R^+(d,\{\pm1,\pm2,\dots,\pm \lfloor n/2\rfloor\}).
\label{eqn:lower-bound-R}
\end{align}
\end{theorem}
\begin{proof}
We prove (\ref{eqn:upper-bound-R}) first. Fix a degree-$(d-2)$
approximant $p(t)/q(t)$ to $\sign t$ on $S=\{\pm1,\dots,\pm\lceil
n/2\rceil\},$ where $q$ is positive on $S.$ For small $\delta>0,$
define
\[ A_\delta(t) = \frac{t^2p(t)-\delta}{t^2q(t)+\delta}. \]
Then $A_\delta$ is a rational function of degree at most $d$ whose
denominator is positive on $S\cup \{0\}.$ Finally, we have
$A_\delta(0)=-1$ and
\[ \lim_{\delta\to0} \max_{t\in S} |\sign t - A_\delta(t)|
= \max_{t\in S} \left|\sign t - \frac{p(t)}{q(t)}\right|.
\]
Then $A_\delta(\frac12\sum (x_i+1) - \lfloor n/2\rfloor)$
is the desired approximant for $\MAJ_n(x_1,\dots,x_n).$
We now turn to the lower bound, (\ref{eqn:lower-bound-R}).
For every $\epsilon>R^+(\MAJ_n,d),$
Proposition~\ref{prop:symm-rational}
gives a univariate rational function $p(t)/q(t)$ of degree at most $d$
such that for all $x\in\moon,$ one has
\begin{align*}
\left\lvert \MAJ_n(x) - \frac{p(\sum x_i)}{q(\sum x_i)}\right\rvert
<\epsilon
\end{align*}
and $q(\sum x_i)>0.$
Then
\begin{align*}
\max_{t=\pm1,\pm2,\dots,\pm\lfloor n/2\rfloor}
\left| \sign t - \frac{p(2t+n-2\lfloor n/2\rfloor)}{q(2t+n-2\lfloor n/2\rfloor)}
\right|< \epsilon,
\end{align*}
completing the proof of (\ref{eqn:lower-bound-R}).
\end{proof}
Note that (\ref{eqn:rational-positive-denominator})
and Theorems~\ref{thm:rational-small-degree},
\ref{thm:rational-high-degree}, and~\ref{thm:maj-vs-sign}
immediately imply Theorem~\ref{thm:main-approx-maj} from the
Introduction.
\begin{remark}
The proof that we gave for the upper bound, (\ref{eqn:upper-bound-R}),
illustrates a useful property of univariate rational approximants
$A(t)=p(t)/q(t)$ on a finite set $S.$ Specifically, given such an
approximant and a point $t^*\notin S,$ there exists an
approximant $A'$ with $A'(t^*)=a$ for any prescribed value $a$ and
$A'\approx A$ everywhere on $S.$ One such construction is
\[ A'(t)=\frac{(t-t^*)p(t)+a\delta}{(t-t^*)q(t)+\delta}\]
for an arbitrarily small constant $\delta>0.$ Note that $A'$ has
degree only $1$ higher than the degree of the original approximant,
$A.$ This phenomenon is in sharp contrast to approximation by
polynomials, which do not possess this corrective ability.
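To verify the construction above: $A'(t^*)=a\delta/\delta=a,$ while for each
fixed $t\in S$ we have $A'(t)\to p(t)/q(t)=A(t)$ as $\delta\to0.$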
\end{remark}
\section{Intersections of halfspaces
\label{sec:maj-maj}}
In this section, we prove our main theorems on the sign-representation
of intersections of halfspaces and majority functions.
In the two subsections that follow, we give results for
the threshold degree as well as \emph{threshold density}, another
key complexity measure of a sign-representation.
\subsection{Lower bounds on the threshold degree}
We start by formalizing the elegant observation due to Beigel et
al.~\cite{beigel91rational}, already described briefly in the
Introduction.
\begin{theorem}[Beigel, Reingold, and Spielman]
\label{thm:rational-is-possible}
Let $f\colon X\to\moo$ and $g\colon Y\to\moo$ be given functions, where
$X,Y\subset \Re^n$ are finite sets. Let $d$ be an integer with
$R^+(f,d) + R^+(g,d)<1.$
Then
\begin{align*}
\degthr(f\wedge g) \leq 2d.
\end{align*}
\end{theorem}
\begin{proof}
Fix rational functions $p_1(x)/q_1(x)$ and $p_2(y)/q_2(y)$ of degree
at most $d$ such that $q_1$ and $q_2$ are positive on $X$ and $Y,$
respectively, and
\begin{align*}
\max_{x\in X} \left| f(x) - \frac{p_1(x)}{q_1(x)}\right| +
\max_{y\in Y} \left| g(y) - \frac{p_2(y)}{q_2(y)}\right| < 1.
\end{align*}
Then
\begin{align*}
f(x)\wedge g(y) \equiv \sign\{1 + f(x)+g(y)\}
\equiv \sign\BRACES{ 1 + \frac{p_1(x)}{q_1(x)} +
\frac{p_2(y)}{q_2(y)} }.
\end{align*}
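(The first equivalence holds because $1+f(x)+g(y)\in\{-1,1,3\}$ and equals
$-1$ precisely when $f(x)=g(y)=-1.$)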
Multiplying the last expression by the positive quantity $q_1(x)q_2(y),$
we obtain $f(x)\wedge g(y) \equiv \sign\{ q_1(x)q_2(y) + p_1(x)q_2(y)
+ p_2(y)q_1(x)\}.$
\end{proof}
Recall that Theorem~\ref{thm:main-finite} gives an essentially
exact converse to Theorem~\ref{thm:rational-is-possible}.
We are now in a position to prove our main results on the
threshold degree.
\begin{theorem}[restatement of Theorems~\ref{thm:main-sign-hs}
and~\ref{thm:main-sign-maj}]
\label{thm:intersections}
Consider the function $f\colon\moo^{n^2}\to\moo$ given by
\begin{align*}
f(x) = \sign\PARENS{1 + \sum_{i=1}^n\sum_{j=1}^n 2^i x_{ij}}.
\end{align*}
Let $g\colon\moon\to\moo$ be the majority function on $n$ bits.
Then
\begin{align}
\degthr(f\wedge f) &= \Omega(n), \label{eqn:hs-hs} \\
\degthr(\,g\, \wedge\, g\,) &= \Omega(\log n). \label{eqn:maj-maj}
\end{align}
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:main-halfspace}, we have $R^+(f,\epsilon n)\geq1/2$ for
some constant $\epsilon>0,$ which settles
(\ref{eqn:hs-hs}) in view of Theorem~\ref{thm:main-finite}.
Analogously, Theorems~\ref{thm:R+} and~\ref{thm:maj-vs-sign} show
that $R^+(g,\epsilon\log n)\geq 1/2$ for some constant $\epsilon>0,$
which settles (\ref{eqn:maj-maj}) in view of
Theorem~\ref{thm:main-finite}.
\end{proof}
\begin{remark}
The lower bounds (\ref{eqn:hs-hs}) and (\ref{eqn:maj-maj}) are tight
and match the constructions due to Beigel et
al.~\cite{beigel91rational}. These
matching upper bounds can be seen as follows. By
Theorem~\ref{thm:upper-fnk}, we have $R^+(f,Cn)<1/2$ for some
constant $C>0,$ which shows that \mbox{$\degthr(f\wedge f)=O(n)$}
in view of Theorem~\ref{thm:rational-is-possible}. Analogously,
Theorems~\ref{thm:R+} and~\ref{thm:maj-vs-sign} imply that
$R^+(g,C\log n)<1/2$ for some constant $C>0,$ which shows that
\mbox{$\degthr(g\wedge g) =O(\log n)$} in view of
Theorem~\ref{thm:rational-is-possible}.
Furthermore, Theorem~\ref{thm:rational-is-possible} generalizes
immediately to conjunctions of $k=3$ and more functions.
In particular, the lower bounds in (\ref{eqn:hs-hs}) and
(\ref{eqn:maj-maj}) remain tight for intersections \mbox{$f\wedge
f\wedge\cdots\wedge f$} and \mbox{$g\wedge g\wedge\cdots\wedge
g$} featuring any constant number of functions.
\end{remark}
We give one additional result, featuring the intersection of the
canonical halfspace with a majority function.
\begin{restatetheorem}{thm:main-sign-mixed}{\textsc{restated}}
Let $f\colon\moo^{n^2}\to\moo$ be given by
\begin{align*}
f(x) = \sign\PARENS{1 + \sum_{i=1}^n\sum_{j=1}^n 2^i x_{ij}}.
\end{align*}
Let $g\colon\moo^{\lceil\sqrt n\rceil}\to\moo$
be the majority function on $\lceil\sqrt n\rceil$ bits.
Then
\begin{align}
\degthr(f\wedge\, g\,) &= \Theta(\sqrt n).
\label{eqn:f-and-g}
\end{align}
\end{restatetheorem}
\begin{proof}
We prove the lower bound first.
Let $\epsilon>0$ be a suitably small constant.
By Theorem~\ref{thm:main-halfspace}, we have $R^+(f,\epsilon \sqrt n)
\geq 1-2^{-\sqrt n}.$
By Theorems~\ref{thm:R+} and~\ref{thm:maj-vs-sign}, we have
$R^+(g,\epsilon\sqrt n)\geq 2^{-\sqrt n}.$ In view of
Theorem~\ref{thm:main-finite}, these two facts imply that
$\degthr(f\wedge g\,)=\Omega(\sqrt n).$
We now turn to the upper bound. It is clear that $R^+(g,\lceil\sqrt
n\rceil)=0$ and $R^+(f,1)<1.$ It follows by
Theorem~\ref{thm:rational-is-possible} that $\degthr(f\wedge
g)=O(\sqrt n).$
\end{proof}
\subsection{Lower bounds on the threshold density}
In addition to threshold degree, several other complexity
measures are of interest when sign-representing Boolean functions
by real polynomials. One such complexity measure is
\emph{density,} i.e., the number of distinct monomials in any
polynomial that sign-represents a given function. Formally, for
a given function $f\colon\moon\to\moo,$ the \emph{threshold
density} $\dns(f)$ is the minimum $k$ such that
\begin{align*}
f(x) \equiv \sign\PARENS{\sum_{i=1}^k\lambda_i\prod_{j\in S_i}
x_j}
\end{align*}
for some sets $S_1,\dots,S_k\subseteq\oneton$ and some reals
$\lambda_1,\dots,\lambda_k.$ We will show that intersections of
two halfspaces not only have high threshold degree but also high
threshold density.
We start with the conjunction of two majority functions.
Constructions in~\cite{beigel91rational} show
that the function $f(x,y)=\MAJ_n(x)\wedge \MAJ_n(y)$ can be
sign-represented by a linear combination of $n^{O(\log n)}$ monomials,
namely, the monomials of degree up to $O(\log n).$
Klivans and Sherstov~\cite[Thm.~1.2]{mlj07sq} complement
this with a lower bound of $n^{\Omega(\log
n/\log \log n)}$ on the number of distinct monomials needed. Our
next result improves this lower bound to a tight $n^{\Theta(\log n)}.$
\begin{theorem}
\label{thm:dns-majmaj}
Let $f\colon\moon\times\moon\to\moo$ be given by $f(x,y)=\MAJ_n(x_1,\dots,x_n)
\wedge \MAJ_n(y_1,\dots,y_n).$ Then
\begin{align*}
\dns(f) = n^{\Omega(\log n)}.
\end{align*}
\end{theorem}
\begin{proof}
Identical to the proof of Klivans and Sherstov~\cite[\S3.3,
Thm.~1.2]{mlj07sq}, with the only difference that
Theorem~\ref{thm:main-sign-maj} should be invoked in place of O'Donnell
and Servedio's earlier result~\cite{odonnell03degree} that
$\degthr(f)=\Omega(\log n/\log\log n).$
\end{proof}
We will now derive an exponential lower bound on the threshold
density of the intersection of two halfspaces. For this, we
recall an elegant procedure for converting Boolean functions with
high threshold degree into Boolean functions with high threshold
density, discovered by Krause and
Pudl{\'a}k~\cite{krause94depth2mod}.
Their construction maps a given function
$f\colon\moon\to\moo$ to the function
$f^\text{\rm KP}\colon(\moon)^3\to\moo$ given by
\begin{align*}
f^\text{\rm KP}(x,y,z) =
f(\dots, (\overline{z_i}\wedge x_i)\vee(z_i\wedge y_i), \dots).
\end{align*}
We have:
\begin{theorem}[{Krause and
Pudl{\'a}k~\cite[Prop.~2.1]{krause94depth2mod}}]
\label{thm:degree-length}
For every function $f\colon\moon\to\moo,$
\[ \dns(f^\text{\rm KP}) \geq 2^{\degthr(f)}. \]
\end{theorem}
Another ingredient in our analysis is the following observation.
\begin{lemma}[Klivans and Sherstov~\cite{mlj07sq}]
\label{lem:ptf-reduction}
Let $f\colon\moon\to\moo$ be a given function. Consider any
function $F\colon\moo^m\to\moo$ given by
$F(x) = f(\chi_1(x),\dots,\chi_n(x)),$ where each
$\chi_i$ is a parity function $\moom\to\moo$ or the negation of a
parity function. Then
\[ \dns(f)\geq \dns(F). \]
\end{lemma}
\begin{proof}[Proof \textup{(Klivans and
Sherstov~\cite{mlj07sq})}.]
Immediate from the definition of threshold density and the fact
that the product of parity functions is another parity function.
\end{proof}
We are now in a position to prove the desired result for
halfspaces.
\begin{theorem}
\label{thm:dns-hshs}
Let $f_n\colon\moo^{n^2}\to\moo$ be given by
\begin{align*}
f_n(x) = \sign\PARENS{1 +
\sum_{i=1}^n\sum_{j=1}^{\phantom{A}n\phantom{A}} 2^i x_{ij}}.
\end{align*}
Then
\begin{align}
\dns(f_n\wedge f_n)
&= \exp\{\Omega(n)\}, \label{eqn:dns-hs-hs} \\
\dns(f_n\wedge \MAJ_{\lceil \sqrt n\rceil})
&= \exp\{\Omega(\sqrt n)\}.
\label{eqn:dns-hs-maj}
\end{align}
\end{theorem}
\begin{remark}
\label{rem:f-g-ff-gg}
In the proof below, it will be useful to keep in mind the
following straightforward observation. Fix functions
$f,g\colon\mook\to\moo$ and define functions
$f',g'\colon\mook\to\moo$ by $f'(x)=-f(-x)$ and $g'(y)=-g(-y).$
Then we have $f'(x)\wedge g'(y)\equiv -(f(-x)\wedge g(-y)) f(-x)
g(-y),$ whence $\dns(f'\wedge g') \leq \dns(f\wedge
g)\dns(f)\dns(g)$ and
thus
\begin{align}
\dns(f\wedge g) \geq \frac{\dns(f'\wedge g')}{\dns(f)\dns(g)}.
\label{eqn:f-g-ff-gg}
\end{align}
Similarly, we have
$f(x)\wedge g'(y)\equiv (f(x)\wedge g(-y))f(x),$ whence
\begin{align}
\dns(f\wedge g) \geq \frac{\dns(f\wedge g')}{\dns(f)}.
\label{eqn:f-g-f-gg}
\end{align}
To summarize, (\ref{eqn:f-g-ff-gg}) and (\ref{eqn:f-g-f-gg})
allow one to analyze the threshold density of $f\wedge g$ by
analyzing the threshold density of $f'\wedge g'$ or $f'\wedge g$
instead. Such a transition will be helpful in our case.
\end{remark}
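As a quick check of the first identity, added here for the reader's convenience and assuming the convention that $-1$ encodes ``true'' and $+1$ encodes ``false'' (the opposite convention leads to the same conclusion): then $a\wedge b=\tfrac{1}{2}(1+a+b-ab)$ for $a,b\in\moo,$ and writing $u=f(-x),$ $v=g(-y)$ and using $u^2=v^2=1,$
\begin{align*}
f'(x)\wedge g'(y) &= (-u)\wedge(-v) = \tfrac{1}{2}(1-u-v-uv) \\
&= -uv\cdot\tfrac{1}{2}(1+u+v-uv) = -(f(-x)\wedge g(-y))\,f(-x)\,g(-y),
\end{align*}
as claimed; the second identity is verified in the same way.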
\begin{proof}
[Proof of Theorem~\textup{\ref{thm:dns-hshs}}.]
Put $m=\lfloor n/4\rfloor.$
The function
${f_m}^\text{\rm KP}\colon(\moo^{m^2})^3\to\moo$ has the
representation
\begin{align*}
{f_m}^\text{\rm KP}(x,y,z) = \sign\PARENS{1 +
\sum_{i=1}^m\sum_{j=1}^{\phantom{A}m\phantom{A}} 2^i
(x_{ij}+y_{ij}+x_{ij}z_{ij}-y_{ij}z_{ij})}.
\end{align*}
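(A brief justification of this representation, assuming, as is standard in this context, that $-1$ encodes ``true'' and $+1$ encodes ``false'': the selector in the Krause--Pudl{\'a}k construction obeys $(\overline{z_{ij}}\wedge x_{ij})\vee(z_{ij}\wedge y_{ij})=\frac{1}{2}(x_{ij}+y_{ij}+x_{ij}z_{ij}-y_{ij}z_{ij}),$ so substituting into the definition of $f_m$ gives the argument $A=1+\sum_{i,j}2^{i}u_{ij},$ where $u_{ij}\in\moo$ denotes the selector's value; the argument displayed above equals $2A-1,$ and since $A$ is an odd integer, $\sign(2A-1)=\sign(A).$)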
As a result,
\begin{align*}
\dns(f_{4m}\wedge f_{4m})
&\geq\dns({f_m}^\text{\rm KP}\wedge{f_m}^\text{\rm KP})
&&\text{by Lemma~\ref{lem:ptf-reduction}}\\
&=\dns((f_m\wedge f_m)^\text{\rm KP}) \\
&\geq 2^{\degthr(f_m\wedge f_m)}
&& \text{by Theorem~\ref{thm:degree-length}} \\
&\geq\exp\{\Omega(m)\}
&& \text{by Theorem~\ref{thm:intersections}.}
\end{align*}
By the same argument as in Theorem~\ref{thm:main-halfspace}, the
function $f_{4m}$ is a subfunction of $f_{n}(x)$ or $-f_n(-x).$
In the former case, (\ref{eqn:dns-hs-hs}) is immediate from the
lower bound on $\dns(f_{4m}\wedge f_{4m}).$ In the
latter case, (\ref{eqn:dns-hs-hs}) follows from the
lower bound on $\dns(f_{4m}\wedge f_{4m})$ and
Remark~\ref{rem:f-g-ff-gg}.
The proof of (\ref{eqn:dns-hs-maj}) is entirely analogous.
\end{proof}
Krause and Pudl{\'a}k's method in Theorem~\ref{thm:degree-length}
naturally generalizes to linear combinations of
conjunctions rather than parity functions. In
other words, if a function $f\colon\moon\to\moo$ has threshold
degree $d$ and $f^\text{\rm KP}(x,y,z)\equiv \sign(\sum_{i=1}^N\lambda_i
T_i(x,y,z))$ for some conjunctions $T_1,\dots,T_N$ of
the literals $x_1,y_1,z_1,\dots,x_n,y_n,z_n, \neg x_1,\neg
y_1,\neg z_1,\dots,\neg x_n,\neg y_n,\neg z_n,$ then $N\geq
2^{\Omega(d)}.$ With this remark in mind,
Theorems~\ref{thm:dns-majmaj} and~\ref{thm:dns-hshs} and their
proofs adapt easily to this alternate definition of density.
\addcontentsline{toc}{section}{Acknowledgments}
\section*{Acknowledgments}
I would like to thank Dima Gavinsky,
Adam Klivans, Ryan O'Donnell, Ronald de Wolf,
and the anonymous reviewers
for their very helpful comments on an earlier version of this manuscript.
I am also thankful to Ronald for telling me about applications of
rational approximation to quantum query complexity. I gratefully
acknowledge Scott Aaronson's tutorial on the polynomial
method, which motivated me to work on direct product theorems
for real polynomials. This research was supported by Adam Klivans' NSF
CAREER Award and NSF Grant CCF-0728536.
{\small
\addcontentsline{toc}{section}{References}
\bibliography{
/Users/sasha/bib/general,
/Users/sasha/bib/fourier,
/Users/sasha/bib/cc,
/Users/sasha/bib/learn,
/Users/sasha/bib/my}
}
\end{document}
|
\begin{document}
\title{Solutions to polynomial congruences in well-shaped sets}
\author{Bryce Kerr}
\address{Department of Computing, Macquarie University, Sydney, NSW 2109, Australia}
\email{[email protected]}
\begin{abstract}
We use a generalization of Vinogradov's mean value theorem of S. Parsell, S. Prendiville and T. Wooley and ideas of W. Schmidt to give nontrivial bounds for the number of solutions to polynomial congruences, when the solutions lie in a very general class of sets, including all convex sets.
\end{abstract}
\maketitle
\section{Introduction}
Given an integer $m$, a polynomial $F(X_1,\dots,X_d)\in \mathbb{Z}_m[X_1,\dots, X_d]$ and a set $\Omega \subseteq [0,1]^{d}$, we let $N_F(\Omega)$ denote the number of solutions $\mathbf{x}=(x_1,\dots, x_d)\in \mathbb{Z}^d$ to the congruence
\begin{equation}
\label{solutions}
F(\mathbf{x})\equiv 0 \text{ \ (mod $m$)}
\ \text{ \ with \ } \
\left(\frac{x_1}{m},\dots, \frac{x_d}{m}\right)\in \Omega.
\end{equation}
Questions concerning the distribution of solutions to polynomial congruences have been considered in a number of works (for example \cite{CCGHSZ,GrShZa,Shp1,Zum}). In \cite{Fouv} Fouvry gives an asymptotic formula for the number of solutions to systems of polynomial congruences in small cubic boxes for a wide class of systems (see also \cite{FoKa,Luo,ShpSk,Skor}). Shparlinski \cite{Sp} uses the results of \cite{Fouv} and ideas of \cite{Schm} to obtain an asymptotic formula for the number of solutions to the same systems when the solutions lie in a very general class of sets. For the case of a single polynomial, $F$ in $d$ variables, Shparlinski \cite{Sp} shows that for suitable $\Omega$, when the modulus $m=p$ is prime,
\begin{equation}
\label{shp}
N_F(\Omega)=p^{d-1}(\mu(\Omega)+O(p^{-1/4}\log{p}))
\end{equation}
provided $F$ is irreducible over $\mathbb{C}$, where $\mu$ denotes the Lebesgue measure on $[0,1]^d$. This gives an asymptotic formula for $N_F(\Omega)$ provided $\mu(\Omega)\geq p^{-1/4+\varepsilon}$ and a nontrivial upper bound for $N_F(\Omega)$ when $\mu(\Omega)\ge p^{-5/4+\varepsilon}$. We follow the method of \cite{Sp} to give an upper bound for $N_F(\Omega)$ without any restrictions on our polynomial $F$ when the modulus $m$ is composite. We first establish an upper bound for $N_F(\Omega)$ when $\Omega$ is a cube. This gives a generalization of Theorem $1$ of \cite{cp}. Although we follow the same argument, the difference is our use of a multidimensional version of Vinogradov's mean value theorem (Theorem 1.1 of \cite{mdws}). To extend the bound from cubes to more general sets $\Omega$, we approximate $\Omega$ by cubes using ideas based on Theorem $2$ of \cite{Schm}. \\
\section{Definitions}
\indent We let $\mu$ denote the Lebesgue measure on $[0,1]^d$, $||.||$ the Euclidean norm and define the distance between $\mathbf{x}\in [0,1]^d$ and $\Omega\subseteq [0,1]^d$ to be
$$\mathrm{dist}(\mathbf{x},\Omega)=\displaystyle\inf_{\mathbf{y}\in \Omega}||\mathbf{x}-\mathbf{y}||.$$
As in \cite{Sp}, we say that $\Omega \subseteq [0,1]^{d}$ is \emph{well-shaped} if there exists $C=C(\Omega)$ such that for every $\varepsilon>0$ the measures of the sets
$$
\Omega_\varepsilon^{+} = \left\{ \mathbf{u} \in [0,1]^d \backslash
\Omega \ : \ \mathrm{dist}(\mathbf{u},\Omega) < \varepsilon \right\},
$$
$$
\Omega_\varepsilon^{-} = \left\{ \mathbf{u} \in \Omega \ : \
\mathrm{dist}(\mathbf{u},[0,1]^d \backslash \Omega ) < \varepsilon \right\}
$$
exist and satisfy
\begin{equation}
\label{well-shaped}
\mu(\Omega_{\varepsilon}^{\pm})\leq C\varepsilon.
\end{equation}
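As a simple illustration of this definition (not needed in the sequel), take $\Omega=[0,\frac{1}{2}]\times[0,1]^{d-1}$. For every $\varepsilon>0$ we have $\Omega_{\varepsilon}^{+}\subseteq(\frac{1}{2},\frac{1}{2}+\varepsilon)\times[0,1]^{d-1}$ and $\Omega_{\varepsilon}^{-}\subseteq(\frac{1}{2}-\varepsilon,\frac{1}{2}]\times[0,1]^{d-1}$, so that $\mu(\Omega_{\varepsilon}^{\pm})\leq \varepsilon$ and (\ref{well-shaped}) holds with $C=1$.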
From Lemma 1 of \cite{Schm} all convex subsets of $[0,1]^d$ are well-shaped and from equation $(2)$ of \cite{Weyl}, if the boundary of $\Omega$ is a manifold of dimension $d-1$ with bounded surface area then $\Omega$ is well-shaped, for suitably chosen $C.$ \\
\indent For $\mathbf{x}=(x_1,\dots x_d)$ we write $a\leq \mathbf{x} \leq b$ if $a \leq x_1, \dots, x_d \leq b$. Given a $d$-tuple of non-negative integers
$\mathbf{i}=(i_1,i_2,\dots i_d)$, we set $\mathbf{x}^{\mathbf{i}}=x_{1}^{i_1}x_{2}^{i_2}\dots x_{d}^{i_d}$
and $|\mathbf{i}|=i_1+i_2+\dots+i_d$. We let $r$ denote the number of distinct $d$-tuples $\mathbf{i}$ with
$1\leq |\mathbf{i}|\leq k,$ so that
\begin{equation}
\label{r}
r=\binom{k+d}{d}-1.
\end{equation}
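(For example, when $d=2$ and $k=2$ we have $r=\binom{4}{2}-1=5$, the relevant tuples being $(1,0),(0,1),(2,0),(1,1),(0,2)$, i.e.\ the monomials $x_1,x_2,x_1^2,x_1x_2,x_2^2$.)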
\indent We will always suppose $m$ is an integer greater than $2$. Given $F\in \mathbb{Z}_m[X_1,\dots,X_d]$, we let $k$ denote the degree of $F$ and $d$ the number of variables. Writing
$$F(\mathbf{x})=\displaystyle\sum_{0\leq |\mathbf{i}| \leq k}\beta_{\mathbf{i}}\mathbf{x}^{\mathbf{i}}, \quad \beta_{\mathbf{i}}\in \mathbb{Z}_m$$
we define $$g_F=\displaystyle\min_{|\mathbf{i}|=k}\gcd(m,\beta_{\mathbf{i}}).$$
\indent We use $g(t) \ll f(t)$ and $g(t)=O(f(t))$ to mean that there exists some absolute constant $\alpha$ such that $|g(t)|\leq \alpha f(t)$ for all values of $t$ within some specified range. Whenever we use $\ll$ and $O$, unless stated otherwise the implied constant will depend only on $d$, $k$ and the particular $C$ in $(\ref{well-shaped}).$ Similarly $o(1)$ denotes a term which is sufficiently small when our parameter is large enough in terms of $d$, $k$ and $C$.
\section{Main Results}
We can now present our main results:
\begin{theorem}
\label{multi}
For positive $K_1, \dots, K_d, L,H,R\ge 1$, integer $m$ and
$$F(\mathbf{x})=\displaystyle\sum_{0\leq |\mathbf{i}| \leq k}\beta_{\mathbf{i}}\mathbf{x}^{\mathbf{i}}\in \mathbb{Z}_m[X_1,\dots, X_d ]$$ of degree $k\ge2$ with $g_F=1$, let $M_F(H,R)$ denote the number of solutions to the congruence
\begin{equation}
\label{polynomial equation 1}
F(\mathbf{x})\equiv y \ \ (\mathrm{mod} \ m)
\end{equation}
with
$$(\mathbf{x},y)\in [K_1+1,K_1+H]\times \dots \times [K_d+1,K_d+H]\times [L+1,L+R].$$
Then uniformly over all $K_1,\dots,K_d,L\ge 1$
$$M_F(H,R)\le H^d\left(\left(\frac{R}{H^k}\right)^{1/2r(k+1)}+\left(\frac{R}{m}\right)^{1/2r(k+1)}\right)m^{o(1)}$$
as $H\rightarrow \infty.$
\end{theorem}
Arguing from heuristics, we expect the bound for $M_F(H,R)$ to be around
$$M_F(H,R)\le H^d\left(\frac{R}{m}\right)m^{o(1)}$$
which can be directly compared with Theorem~\ref{multi}. Similarly, by considering the first term in Theorem~\ref{multi} we immediately see when this bound for $M_F(H,R)$ is worse than the trivial bound $M_F(H,R)\le H^d$. \newline \indent
Also, if $m=p$ is prime and $F[X_1,\dots, X_d]$ is not multilinear, i.e.\ $F$ is not linear in each of its variables, then Theorem~\ref{multi} is trivial. This may be seen by the following argument. First we may show by slightly adjusting the proof of Theorem~1 of~\cite{cp} that for $G \in \mathbb{Z}_p[X]$ of degree $k\ge 2$
\begin{equation}
\label{aabb}
M_G(H,R)\le H\left(\left(\frac{R}{H^k}\right)^{1/2k(k+1)}+\left(\frac{R}{p}\right)^{1/2k(k+1)}\right)p^{o(1)}.
\end{equation}
Supposing $F\in \mathbb{Z}_p[X_1,\dots,X_{d}]$ is not multilinear, then after re-ordering the variables we may suppose for some $k_0 \ge 2$ that
\begin{equation}
\label{induction step}
F[X_1,\dots,X_{d}]=\sum_{i=0}^{k_0}X_{d}^iF_{i}[X_1,\dots,X_{d-1}]
\end{equation}
with $F_{k_0}\neq 0$ and consider separately the values of $X_1,\dots, X_{d-1}$ such that
\begin{equation*}
\label{equiv1}
F_{k_0}[X_1,\dots,X_{d-1}]\equiv 0 \pmod p
\end{equation*}
and
\begin{equation*}
\label{notequiv1}
F_{k_0}[X_1,\dots,X_{d-1}] \not\equiv 0 \pmod p.
\end{equation*}
For the first case we use the assumption that $p$ is prime and induction on $d$ to bound the number of values $X_1,\dots,X_{d-1}$ such that
$F_{k_0}[X_1,\dots,X_{d-1}]\equiv 0 \pmod p$ by $O(H^{d-2})$ and bound the number of solutions to
$$F[X_1,\dots,X_{d}]\equiv y \pmod p$$
in remaining variables $X_{d},Y$ trivially by $RH$. \newline \indent For the second case, we bound the number of $X_1,\dots, X_{d-1}$ such that
$F_{k_0}[X_1,\dots,X_{d-1}] \not\equiv 0 \pmod p$ trivially by $H^{d-1}$ and bound the number of solutions in the remaining variables $X_{d},Y$ by applying~\eqref{aabb} to the equation~\eqref{induction step}. Combining the above two cases gives
$$M_F(H,R)\le H^d\left(\frac{R}{H}+\left(\frac{R}{H^k}\right)^{1/2k(k+1)}+\left(\frac{R}{m}\right)^{1/2k(k+1)}\right)p^{o(1)}$$
which can be compared with Theorem~\ref{multi}. \newline
\indent Taking $R=1$ in Theorem~\ref{multi} we get,
\begin{corollary}
\label{cubes}
For any cube $B\subseteq [0,1]^{d}$ of side length $\frac{1}{h}$, $F \in \mathbb{Z}_m[X_1,\dots, X_d ]$ of degree $k\ge 2$ with $g_F=1$ we have
$$N_F(B)\le \left(\frac{m}{h}\right)^{d-k/2r(k+1)+o(1)}+m^{d-1/2r(k+1)+o(1)}\left(\frac{1}{h}\right)^{d+o(1)}$$
as \ $\dfrac{m}{h} \rightarrow \infty.$
\end{corollary}
Taking $R=H$ in Theorem~\ref{multi} we get,
\begin{corollary}
\label{cubes 1}
Suppose $F \in \mathbb{Z}_m[X_1,\dots, X_d ]$ of degree $k\ge 2$ with $g_F=1$ is of the form,
$$F(X_1,\dots, X_d)=G(X_1,\dots X_{d-1})-X_{d}$$
for some $G \in \mathbb{Z}_m[X_1,\dots,X_{d-1}]$, then
for any cube $B\subseteq [0,1]^{d}$ of side length $\frac{1}{h}$, we have
\begin{align*}
N_F(B)&\le \left(\frac{m}{h}\right)^{d-1-(k-1)/2r(k+1)+o(1)}+ m^{d-1+o(1)}\left(\frac{1}{h}\right)^{d-1+1/2r(k+1)+o(1)}
\end{align*}
as \ $\dfrac{m}{h} \rightarrow \infty$, where $r$ corresponds to $d-1$ in the definition~\eqref{r}.
\end{corollary}
We use the above Corollaries to estimate $N_F(\Omega)$ for well-shaped $\Omega$.
\begin{theorem}
\label{multi vws}
Suppose $F \in \mathbb{Z}_m[X_1,\dots, X_d ]$ satisfies the conditions of Corollary~\ref{cubes} and $\Omega \subset [0,1]^{d}$ is well-shaped with
$\mu(\Omega)\geq m^{-1}$. Then we have
$$N_F(\Omega)\leq m^{d-k/2r(k+1)+o(1)}\mu(\Omega)^{1-k/2r(k+1)}+m^{d-1/2r(k+1)+o(1)}\mu(\Omega)$$
as $m \rightarrow \infty.$
\end{theorem}
\begin{theorem}
Suppose $F\in \mathbb{Z}_m[X_1,X_2,\dots X_d]$ satisfies the conditions of Corollary~\ref{cubes 1} and $\Omega \subset [0,1]^{d}$ is well-shaped. Then we have
$$
N_F(\Omega) \leq \begin{cases}m^{d-1+o(1)}\mu(\Omega)^{1/2r(k+1)}, \quad \mu(\Omega)\ge m^{-1+1/k} \\
m^{d-1-(k-1)/2r(k+1)+o(1)}\mu(\Omega)^{-(k-1)/2r(k+1)}, \quad m^{-1}\le \mu(\Omega)<m^{-1+1/k}.
\end{cases}
$$
as $m\rightarrow \infty.$
\end{theorem}
\section{Proof of Theorem 3.1}
Making a change of variables we may assume $(\mathbf{K},L)=(0,\dots,0)$. Suppose for integer $s$ we have $\mathbf{x}_1,\mathbf{x}_2,\dots, \mathbf{x}_{2s}$ satisfying (\ref{polynomial equation 1}) with \\
$\mathbf{x}_j=(x_{j,1},x_{j,2},\dots,x_{j,d}).$ Then
$$ F(\mathbf{x}_1)+ F(\mathbf{x}_2)+\dots+ F(\mathbf{x}_s)- F(\mathbf{x}_{s+1})-\dots - F(\mathbf{x}_{2s})\equiv z \text{\ (mod $m$)} $$
for some \ $-sR \leq z \leq sR.$ Hence there exists $-sR \leq u \leq sR$ \ such that
\begin{equation}
\label{M bound}
M_F(H,R)^{2s}\leq (1+2sR)T(u,H)
\end{equation}
with $T(u,H)$ equal to the number of solutions to the congruence
\begin{equation}
\label{polynomial equation 2}
F(\mathbf{x}_1)+ F(\mathbf{x}_2)+\dots+ F(\mathbf{x}_s)- F(\mathbf{x}_{s+1})-\dots - F(\mathbf{x}_{2s})\equiv u \text{\ (mod $m$)}
\end{equation}
with each co-ordinate of $\mathbf{x}_j$ between $1$ and $H.$ \\ \\ Since
$$F(\mathbf{x})=\displaystyle\sum_{0\leq |\mathbf{i}| \leq k}\beta_{\mathbf{i}}\mathbf{x}^{\mathbf{i}}, \text{ \ for some\ $\beta_{\mathbf{i}}\in \mathbb{Z}_m$} $$
we may write (\ref{polynomial equation 2}) in the form
\begin{equation}
\label{linear}
\displaystyle\sum_{1\leq |\mathbf{i}| \leq k}\beta_{\mathbf{i}}\lambda_{\mathbf{i}}\equiv u \text{ \ (mod $m$)}
\end{equation}
with
\begin{equation}
\label{lambda}
\lambda_{\mathbf{i}}=\mathbf{x}_1^{\mathbf{i}}+\dots+\mathbf{x}_s^{\mathbf{i}}
-\mathbf{x}_{s+1}^{\mathbf{i}}-\dots -\mathbf{x}_{2s}^{\mathbf{i}}.
\end{equation}
Since $g_F=1$, we choose $\mathbf{i}_0$ with $|\mathbf{i}_0|=k$ and $\gcd(\beta_{\mathbf{i}_0},m)=1$. Considering (\ref{linear}) as a linear equation in $\lambda_{\mathbf{i}}$, if we let $\lambda_{\mathbf{i}}$, $\mathbf{i}\neq \mathbf{i}_0$ take arbitrary values then $\lambda_{\mathbf{i}_0}$ is determined uniquely $\pmod m$.
Since we have
\begin{equation}
\label{lambda bound}
-sH^{|\mathbf{i}|}\leq \lambda_{\mathbf{i}}\leq sH^{|\mathbf{i}|}
\end{equation}
there are at most
\begin{equation}
\label{T bound step 1}
\left(1+(2s+1)\dfrac{H^{k}}{m}\right)\displaystyle\prod_{\substack{\mathbf{i}\neq \mathbf{i}_0\\ 1\leq |\mathbf{i}|\leq k }}(2s+1)H^{|\mathbf{i}|}=(2s+1)^{r-1}H^{K-k}\left(1+(2s+1)\dfrac{H^{k}}{m}\right)
\end{equation}
solutions to (\ref{linear}) in integer variables $\lambda_{\mathbf{i}},$ with
$$K=\displaystyle\sum_{1\leq |\mathbf{i}| \leq k}|\mathbf{i}|=\frac{d}{d+1}(r+1)k.$$
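(As a quick check of this identity in a small case: for $d=2$ and $k=2$ the tuples $(1,0),(0,1),(2,0),(1,1),(0,2)$ give $K=1+1+2+2+2=8$, which agrees with $\frac{d}{d+1}(r+1)k=\frac{2}{3}\cdot 6\cdot 2=8$ since $r=5$.)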
For \ $U=(u_{\mathbf{i}})_{1\leq |\mathbf{i}|\leq k}$ with each $u_{\mathbf{i}}\in \mathbb{Z},$ \ let \
$J_{s,k,d}(U, H)$ \ denote the number of solutions in integers, $\lambda_{\mathbf{i}}$, to
\begin{equation}
\label{equation over Z}
\lambda_{\mathbf{i}}=u_{\mathbf{i}}, \text{ \ $1\leq |\mathbf{i}|\leq k$}
\end{equation}
with each $\mathbf{x}_j$ having components between $1$ and $H$ and we write $J_{s,k,d}(U, H)=J_{s,k,d}(H)$ when $U=(0)_{1\leq |\mathbf{i}|\leq k}.$ Let ${\mathcal U}$ be the collection of sets $U=(u_{\mathbf{i}})_{1\leq |\mathbf{i}|\leq k}$ such that $|u_{\mathbf{i}}|\leq sH^{|\mathbf{i}|}$ and
$$\displaystyle\sum_{1\leq |\mathbf{i}| \leq k}\beta_{\mathbf{i}}u_{\mathbf{i}}\equiv u \text{ \ (mod $m$)}$$
so that the cardinality of ${\mathcal U}$ is bounded by~\eqref{T bound step 1}.
We see that
\begin{equation}
\label{T bound step 2}
T(u,H)\leq \displaystyle\sum_{U \in {\mathcal U}}J_{s,k,d}(U,H),
\end{equation}
since if $\mathbf{x}_{0,1} \dots \mathbf{x}_{0,2s}$ is a solution to~\eqref{polynomial equation 2}, then the integers $\lambda_{0,\mathbf{i}}$, defined by
$$\lambda_{0,\mathbf{i}}=\mathbf{x}_{0,1}^{\mathbf{i}}+\dots+\mathbf{x}_{0,s}^{\mathbf{i}}
-\mathbf{x}_{0,s+1}^{\mathbf{i}}-\dots -\mathbf{x}_{0,2s}^{\mathbf{i}}, \ \ \ \ \ 1\leq |\mathbf{i}|\leq k$$
are a solution to~\eqref{linear} and the $\mathbf{x}_{0,1} \dots \mathbf{x}_{0,2s}$ are a solution to
$$\lambda_{\mathbf{i}}=\lambda_{0,\mathbf{i}}, \text{ \ $1\leq |\mathbf{i}| \leq k$}.$$
So if we let $U_0=(\lambda_{0,\mathbf{i}})_{1\leq |\mathbf{i}|\leq k}$, then we see that the solution to \eqref{polynomial equation 2}, $\mathbf{x}_{0,1} \dots \mathbf{x}_{0,2s},$ is counted by the term $J_{s,k,d}(U_0,H)$ in \eqref{T bound step 2}. By~\eqref{T bound step 1} and~\eqref{T bound step 2}, we have
\begin{equation}
\label{t bound}
T(u,H)\leq (2s+1)^{r-1}H^{K-k}\left(1+(2s+1)\dfrac{H^{k}}{m}\right)J_{s,k,d}(V , H)
\end{equation}
for some $V\in {\mathcal U}$. \ Moreover, for any $U \in {\mathcal U}$ we have the inequality
$$J_{s,k,d}(U, H)\leq J_{s,k,d}(H).$$
Indeed, if we let $\boldsymbol{\alpha}=(\alpha_\mathbf{i})_{1\leq |\mathbf{i}| \leq k}$ and
$$S(\boldsymbol{\alpha})=\displaystyle\sum_{1\leq \mathbf{x}\leq H}\exp\left(2\pi i \displaystyle\sum_{1\leq |\mathbf{i}|\leq k}\alpha_\mathbf{i} \mathbf{x}^{\mathbf{i}}\right)$$
then for $\lambda_{\mathbf{i}}$ defined as in~\eqref{lambda} we have
\begin{align*}
J_{s,k,d}(U, H)&=\displaystyle\sum_{1\leq \mathbf{x}_1,\dots \mathbf{x}_{2s}\leq H}
\displaystyle\int_{[0,1]^{r}}\exp\left(2\pi i \displaystyle\sum_{1\leq |\mathbf{i}|\leq k}\alpha_{\mathbf{i}} (\lambda_{\mathbf{i}}
-u_{\mathbf{i}})\right)d\boldsymbol{\alpha}
\\ &=
\displaystyle\int_{[0,1]^{r}}|S(\boldsymbol{\alpha})|^{2s}\exp\left(-2\pi i\displaystyle\sum_{1\leq |\mathbf{i}| \leq k}\alpha_{\mathbf{i}}u_{\mathbf{i}}\right)d\boldsymbol{\alpha} \\
&\leq \displaystyle\int_{[0,1]^{r}}|S(\boldsymbol{\alpha})|^{2s}d\boldsymbol{\alpha}=J_{s,k,d}(H)
\end{align*}
where the integral is over the variables $\alpha_{\mathbf{i}}$, $1\leq |\mathbf{i}| \leq k$. Hence by~\eqref{M bound} and~\eqref{t bound} we have
\begin{equation}
\label{M bound 2}
M_F(H,R)^{2s}\leq (1+2sR) (2s+1)^{r-1}H^{K-k}\left(1+(2s+1)\dfrac{H^{k}}{m}\right)J_{s,k,d}( H).
\end{equation}
By Theorem 1.1 of \cite{mdws} we have for $s\geq r(k+1)$
$$J_{s,k,d}(H) \ll H^{2sd-K+\varepsilon}$$
for any $\varepsilon>0$ provided $H$ is sufficiently large in terms of $k,d$ and $s$. Inserting this bound into~\eqref{M bound 2} gives
$$M_F(H,R)^{2s}\ll RH^{K-k}\left(1+\dfrac{H^k}{m}\right)H^{2sd-K+\varepsilon}$$
and the result follows taking $s=r(k+1)$. \\
\qed
\section{Proof of Theorem 3.4}
As in \cite{Schm} we begin with choosing $\mathbf{a}=(a_1,\dots a_{d})$ with each co-ordinate irrational. For integer $j$ let $\mathfrak{C}(j)$ be the set of cubes of the form
\begin{equation}
\label{cuubes}
\left[a_1+\frac{u_1}{j},a_1+\frac{u_1+1}{j}\right]\times \dots \times \left[a_{d}+\frac{u_{d}}{j},a_{d}+\frac{u_{d}+1}{j}\right], \ \ \ u_i \in \mathbb{Z}.
\end{equation}
Since each $a_i$ is irrational, no point counted in (\ref{solutions}) lies in two distinct cubes of the form (\ref{cuubes}). Given integer $M>0$, let $\varepsilon=2d^{\frac{1}{2}}/2^M$ and consider the set
$$\Omega_{\varepsilon}=\Omega \cup \Omega_{\varepsilon}^{+}.$$
Since $\Omega$ is well-shaped, we have
\begin{equation}
\label{omega big}
\mu(\Omega_{\varepsilon})=\mu(\Omega)+O\left(\frac{1}{2^M}\right).
\end{equation}
Let ${\mathcal C}(j)$ be the cubes of $\mathfrak{C}(j)$ lying inside $\Omega_\varepsilon$ and we suppose $j\leq 2^M.$ Then by (\ref{omega big}) we obtain,
\begin{equation}
\label{c upper bound}
\# {\mathcal C}(j)\leq j^d\mu(\Omega_\varepsilon)\leq j^{d}\mu(\Omega)+O\left(\frac{j^d}{2^M}\right)=
j^{d}\mu(\Omega)+O\left(j^{d-1}\right).
\end{equation}
\begin{figure}
\caption{The sets $\Omega_{\varepsilon}$.}
\end{figure}
Also, since a cube of side length $1/j$ has diameter $\varepsilon_j=d^{\frac{1}{2}}/j$, we see that the cubes of ${\mathcal C}(j)$ cover $\Omega_\varepsilon \setminus (\Omega_\varepsilon)_{\varepsilon_j}^{-}$ and hence
$$ \# {\mathcal C}(j)\geq j^d\left(\mu(\Omega_\varepsilon)-\mu((\Omega_\varepsilon)_{\varepsilon_j}^{-})\right).$$
But for $j \leq 2^M$, we have
$$(\Omega_\varepsilon)_{\varepsilon_j}^{-}\subseteq \Omega_{\varepsilon_j}^{-}\cup \Omega_{\varepsilon}^{+}$$
and since $\Omega$ is well-shaped
$$\mu((\Omega_\varepsilon)_{\varepsilon_j}^{-})\leq \mu(\Omega_{\varepsilon_j}^{-})+ \mu(\Omega_{\varepsilon}^{+})
\ll \frac{1}{j}$$
so we get
$$\# {\mathcal C}(j)\geq j^d\mu(\Omega_\varepsilon)+O(j^{d-1}).$$
Combining this with (\ref{c upper bound}) gives
\begin{equation}
\label{C bound}
\# {\mathcal C}(j)=j^{d}\mu(\Omega)+O(j^{d-1}) \text{ \ for $j\leq 2^M.$}
\end{equation}
Let ${\mathcal B}_1={\mathcal C}(2)$ and for $2\leq i \leq M$ we let ${\mathcal B}_i$ be the set of cubes from ${\mathcal C}(2^i)$ that are not contained in any cubes from ${\mathcal C}(2^{i-1}).$ Then we have $\# {\mathcal B}_1 =\# {\mathcal C}(2)$ and for $2\leq i \leq M,$ the cubes from both ${\mathcal B}_i$ and ${\mathcal C}(2^{i-1})$ are contained in $\Omega_\varepsilon$. This gives
\begin{align*}
\# {\mathcal B}_i + 2^d\# {\mathcal C}(2^{i-1})\leq 2^{id}\mu(\Omega)+O\left(\frac{2^{id}}{2^M}\right)\leq 2^{id}\mu(\Omega)+O\left(2^{i(d-1)}\right)
\end{align*}
and hence by (\ref{C bound})
\begin{equation}
\label{B bound}
\# {\mathcal B}_i \ll 2^{i(d-1)}.
\end{equation}
We have
\begin{equation}
\label{omega in}
\Omega \subseteq \bigcup_{i=1}^M \bigcup_{\Gamma \in {\mathcal B}_i}\Gamma
\end{equation}
since if $\mathbf{x} \in \Omega$ then $$\mathrm{dist}(\mathbf{x},[0,1]^d \setminus \Omega_\varepsilon)\geq \varepsilon.$$
Moreover, $\mathbf{x} \in \Gamma$ for some $\Gamma \in \mathfrak{C}(2^M)$ and since $\Gamma$ has diameter $\varepsilon/2$ we have $\Gamma \in {\mathcal C}(2^M).$ Since the union of the cubes from ${\mathcal C}(2^{i-1})$ is contained in the union from ${\mathcal C}(2^{i})$ we get (\ref{omega in}). Hence
$$N_F(\Omega)\leq \displaystyle\sum_{i=1}^{M}\displaystyle\sum_{\Gamma \in {\mathcal B}_i}N_F(\Gamma)$$
and using Corollary \ref{cubes}, as $m2^{-M}\rightarrow \infty$
\begin{align*}
\displaystyle\sum_{i=1}^{M}\displaystyle\sum_{\Gamma \in {\mathcal B}_i}N_F(\Gamma) &\ll
\displaystyle\sum_{i=1}^{M}\displaystyle\sum_{\Gamma \in {\mathcal B}_i}
\left(\frac{m}{2^{i}}\right)^{d-k/2r(k+1)+o(1)} \\ & \ \ \ +\displaystyle\sum_{i=1}^{M}\displaystyle\sum_{\Gamma \in {\mathcal B}_i}m^{d-1/2r(k+1)+o(1)}2^{-i(d+o(1))} \\
&\ll m^{d-k/2r(k+1)+o(1)}2^{o(M)}\displaystyle\sum_{i=1}^{M}2^{ik/2r(k+1)} \frac{\# {\mathcal B}_i }{2^{id}} \\ & \ \ \ + m^{d-1/2r(k+1)+o(1)}2^{o(M)}\displaystyle\sum_{i=1}^{M}\frac{\# {\mathcal B}_i}{2^{id}}.
\end{align*}
We use (\ref{omega big}) to bound
\begin{equation*}
\displaystyle\sum_{i=1}^{M} \frac{\# {\mathcal B}_i}{2^{id}}\leq \mu(\Omega_{\varepsilon})=\mu(\Omega)+O\left(\frac{1}{2^M}\right)
\end{equation*}
and from (\ref{B bound}), for $N\leq M$
\begin{align*}
\displaystyle\sum_{i=1}^{M}2^{ik/2r(k+1)} \frac{\# {\mathcal B}_i }{2^{id}} &= \nonumber
\displaystyle\sum_{i=1}^{N}2^{ik/2r(k+1)} \frac{\# {\mathcal B}_i }{2^{id}}+\displaystyle\sum_{i=N+1}^{M}2^{ik/2r(k+1)} \frac{\# {\mathcal B}_i }{2^{id}} \\ &\ll 2^{Nk/2r(k+1)}\displaystyle\sum_{i=1}^{N}\frac{\# {\mathcal B}_i }{2^{id}}
+\displaystyle\sum_{i=N+1}^{M}2^{ik/2r(k+1)}\frac{2^{i(d-1)}}{2^{id}} \\
&\ll 2^{Nk/2r(k+1)}\left(\mu(\Omega)+2^{-M}\right)+2^{-N(1-k/2r(k+1))} \\
&\ll 2^{Nk/2r(k+1)}\left(\mu(\Omega)+2^{-N}\right).
\end{align*}
Hence we get
\begin{align}
\label{optimize}
N_F(\Omega)&\leq m^{d-k/2r(k+1)+o(1)} 2^{Nk/2r(k+1)+o(M)}\left(\mu(\Omega)+2^{-N}\right) \\
& \ \ \ +2^{o(M)}m^{d-1/2r(k+1)+o(1)}\left(\mu(\Omega)+2^{-M}\right). \nonumber
\end{align}
Recalling that $\mu(\Omega)\geq m^{-1},$ to balance the two terms involving $N$, we choose
$$2^{-N}\leq \mu(\Omega) \log{m} <2^{-N+1}.$$
Substituting this choice into (\ref{optimize}) gives,
\begin{align*}
N_F(\Omega) &\leq m^{d-k/2r(k+1)}2^{o(M)}\mu(\Omega)^{1-k/2r(k+1)} \\
& \ \ \ +m^{d-1/2r(k+1)+o(1)}2^{o(M)}\left(\mu(\Omega)+2^{-M}\right).
\end{align*}
An analogous choice for $M$ is essentially optimal:
\begin{equation}
\label{M choice}
2^{-M}\leq m^{-1}\log{m}\leq 2^{-M+1}.
\end{equation}
This gives
$$N_F(\Omega)\leq m^{d-k/2r(k+1)+o(1)}\mu(\Omega)^{1-k/2r(k+1)}+m^{d-1/2r(k+1)+o(1)}\mu(\Omega)$$
where we have replaced $2^{o(M)}$ with $m^{o(1)}$ since $\mu(\Omega)\geq m^{-1}.$ Theorem 3.4 follows since for the choice of $M$ in (\ref{M choice}), for $\mu(\Omega)\geq m^{-1}$
$$m2^{-M}\gg m\cdot m^{-1}\log{m}= \log{m}$$
which tends to infinity as $m\rightarrow \infty.$ \qed \\ \
\section{Proof of Theorem 3.5}
Using the same constructions as in the proof of Theorem 3.4, we have
$$N_F(\Omega)\leq \displaystyle\sum_{i=1}^{M}\displaystyle\sum_{\Gamma \in {\mathcal B}_i}N_F(\Gamma).$$
Hence by Corollary~\ref{cubes 1}
\begin{align}
\label{nf}
N_F(\Omega)&\leq 2^{o(M)}m^{d-1-(k-1)/2r(k+1)+o(1)}\displaystyle\sum_{i=1}^{M}2^{i(1+(k-1)/2r(k+1))}\frac{\#{\mathcal B}_i}{2^{id}} \nonumber \\ & \ \ \ +2^{o(M)}m^{d-1+o(1)}\displaystyle\sum_{i=1}^{M}2^{i(1-1/2r(k+1))}\frac{\#{\mathcal B}_i}{2^{id}}.
\end{align}
For the first sum, by~\eqref{B bound},
\begin{align*}
\displaystyle\sum_{i=1}^{M}2^{i(1+(k-1)/2r(k+1))}\frac{\#{\mathcal B}_i}{2^{id}}\ll \displaystyle\sum_{i=1}^{M}2^{i(k-1)/2r(k+1)}\ll 2^{M(k-1)/2r(k+1)}.
\end{align*}
For the second sum,
\begin{align*}
\displaystyle\sum_{i=1}^{M}2^{i(1-1/2r(k+1))}\frac{\#{\mathcal B}_i}{2^{id}}&=\displaystyle\sum_{i=1}^{N}2^{i(1-1/2r(k+1))}\frac{\#{\mathcal B}_i}{2^{id}} +\displaystyle\sum_{i=N+1}^{M}2^{i(1-1/2r(k+1))}\frac{\#{\mathcal B}_i}{2^{id}} \\
&\ll 2^{N(1-1/2r(k+1))}\left(\mu(\Omega)+\frac{1}{2^M}\right)+2^{-N/2r(k+1)} \\
&\ll 2^{N(1-1/2r(k+1))}\mu(\Omega)+2^{-N/2r(k+1)}.
\end{align*}
Substituting the above bounds into~\eqref{nf} gives
\begin{align*}
N_F(\Omega)&\leq 2^{o(M)}m^{d-1-(k-1)/2r(k+1)+o(1)}2^{M(k-1)/2r(k+1)} \\ & \ \ \ + 2^{o(M)}m^{d-1+o(1)}\left(2^{N(1-1/2r(k+1))}\mu(\Omega)+2^{-N/2r(k+1)}\right).
\end{align*}
For $\mu(\Omega)\ge m^{-1+1/k}$ we choose $N$ to balance the first and last terms then choose $M$ to balance the remaining terms, so that
$$2^{M-1}< \mu(\Omega)^{1/(k-1)}m\le 2^M$$
$$2^{-N}\le 2^{M(k-1)}m^{-(k-1)}<2^{-N+1}$$
which gives $N\le M$ and
$$N_F(\Omega)\le m^{d-1+o(1)}\mu(\Omega)^{1/2r(k+1)}.$$
If $m^{-1}\le \mu(\Omega) <m^{-1+1/k}$ then we choose $N$ to balance the last two terms and take $M$ as small as possible subject to the condition $N\le M$. This gives $$2^{-M}\le \mu(\Omega)<2^{-M+1}$$ $$N=M$$
and
\begin{align*}
N_F(\Omega) &\le m^{d-1-(k-1)/2r(k+1)+o(1)}\mu(\Omega)^{-(k-1)/2r(k+1)} \\
& \ \ \ +m^{d-1+o(1)}\mu(\Omega)^{1/2r(k+1)}.
\end{align*}
Combining the above two bounds completes the proof. \qed
\section{Comments}
Using the methods of Theorem 3.4 and Theorem 3.5, we have not been able to give bounds for $N_F(\Omega)$ which are nontrivial when $\mu(\Omega)\leq m^{-1}$. This seems to be caused by two factors: the bound from Corollary \ref{cubes} and the bounds for $\mu(\Omega_{\varepsilon}^{\pm})$, which affect the estimates (\ref{omega big}) and (\ref{B bound}). For certain cases with prime modulus we may be able to do better than Theorem 3.5. For example, the same method may be combined with other bounds replacing Corollary \ref{cubes 1} for more specific families of polynomials. This has the potential to obtain sharper estimates for such polynomials and also to increase the range of values of $\mu(\Omega)$ for which an analogue of Theorem 3.5 would apply. For example, Bourgain, Garaev, Konyagin and Shparlinski~\cite{BGKS1} consider the number
$J_{\nu}(p,h,s;\lambda)$ of solutions to the congruence
\begin{equation*}
(x_1+s)\dots(x_{\nu}+s)\equiv \lambda \ \ (\text{mod} \ p), \ \ 1\leq x_1, \dots , x_{\nu} \leq h.
\end{equation*}
They show that if $h<p^{1/(\nu^2-1)}$ then we have the bound
\begin{equation}
\label{b1}
J_{\nu}(p,h,s;\lambda)\leq \exp \left(c(\nu)\frac{\log{h}}{\log\log{h}}\right)
\end{equation}
for some constant $c(\nu)$ depending only on $\nu$ (Lemma 2.33 of \cite{BGKS1}). \\
In \cite{BGKS2}, the same authors consider the number
$K_\nu(p,h,s)$ of solutions to the congruence
\begin{equation*}
(x_1+s)\dots(x_\nu+s)\equiv (y_1+s)\dots(y_\nu+s) \not\equiv 0 \ \ (\text{mod} \ p),
\end{equation*}
$$1\leq x_1, \dots, x_\nu, y_1, \dots ,y_\nu \leq h$$
and show that
\begin{equation}
\label{b2}
K_\nu(p,h,s)\leq \left(\frac{h^{\nu}}{p^{\nu/e_{\nu}}}+1\right)h^{\nu}\exp\left(c(\nu)\frac{\log{h}}{\log\log{h}}\right)
\end{equation}
for some constants $e_{\nu}$ and $c(\nu)$ depending only on $\nu$ (Theorem 17 of \cite{BGKS2}). \\ \indent Another possible way to improve on our results for certain classes of well-shaped sets is to use Weyl's formula for tubes (equation (2) of \cite{Weyl}) and Steiner's formula for convex bodies (equation (4.2.27) of \cite{Schn}) to give an explicit constant
in (\ref{well-shaped}) for certain subsets of $[0,1]^d$ for which these formulae are valid. This would have the effect of improving on the bounds (\ref{omega big}) and (\ref{B bound}) and hence the bound for $N_F(\Omega)$ and possibly the range of values of $\mu(\Omega)$ for which this bound would be valid.
\end{document}
|
\begin{document}
\title{Multistate models as a framework for estimand specification in clinical trials of complex processes}
\begin{abstract}
Intensity-based multistate models provide a useful framework for characterizing disease processes, the introduction of interventions,
loss to followup, and other complications arising in the conduct of randomized trials studying complex life history processes. Within this framework we discuss the issues involved in the specification of estimands and show the limiting values of common estimators of marginal process features based on cumulative incidence function regression models.
When intercurrent events arise we stress the need to carefully define the target estimand and the importance of avoiding targets of inference
that are not interpretable in the real world. This has implications for analyses, but also the design of clinical trials where protocols may help in the interpretation of estimands based on marginal features.
\end{abstract}
\keywords{estimands, intercurrent events, semi-competing risks, generalized transformation model, robustness}
\section{Introduction}
\subsection{Overview}
The randomized clinical trial is widely regarded as the best study design for evaluating an experimental intervention compared to standard care
on disease processes.
However when the processes involve multiple types of events, recurrent events, or competing terminal events, it is often unclear how best to summarize treatment effects. In particular it is challenging to specify a sufficiently informative one-dimensional estimand to summarize effects and be used as a basis for tests. In cancer trials, for example, patients are at risk of cancer progression and death, and there has been considerable discussion about the merits of responses based on progression alone, progression-free survival and overall survival \citep{booth2012}.
In cardiovascular trials, interest lies in prolonging overall survival but with recent advances in medical therapies and surgical interventions mortality rates are now relatively low. This has led to increased interest in using responses based on recurrent myocardial infarction, stroke, and hospitalization, in addition to time to cardiovascular-related death; non-cardiovascular deaths are typically handled as competing risks \citep{Rufibach2019,Furberg2021, Toenges2021}. The conceptualization and observation of treatment effects can be further complicated by intercurrent events \citep{qu2021}, defined as events that prevent observation of the primary response or otherwise interfere with the process of interest.
Early withdrawal from a clinical trial is an example of the former, while the introduction of rescue therapy is an example of the latter. Ways to define estimands and make treatment comparisons in the presence of intercurrent events have received a great deal of attention in recent years \citep{Rufibach2019}. Numerous researchers and working groups have proposed estimands and guidelines for specific settings involving competing risks \citep{Rufibach2019, Stensrud2020, Young2020, Nevo2020, Poythress2020},
recurrent and terminal events \citep{Andersen2019, Schmidli2021, wei2021},
introduction of rescue therapies \citep{ster2020} or treatment switching \citep{watkins2013,Manitz2022}; see also \cite{casey2021} and \cite{stensrud2021}.
Discussion in the estimands literature regarding challenges in conceptualizing and specifying treatment effects often lacks detail concerning models and assumptions, which makes interpretation of the recommended estimands difficult. The challenges arise from the complexity of disease processes and related processes involved with the management of subjects in a randomized clinical trial, and from the desire to make causal statements concerning treatment effects that are based on comparisons of marginal process features, which are not dependent on post-randomization events. In addition, estimands that reflect causal effects without a meaningful interpretation in the real world in which the trial is conducted are frequently considered. The use of potential outcomes has proven powerful and popular for formalizing causal reasoning and analysis, but can lead to causal statements about effects in randomized trials that do not reflect the actual trial or patient care.
Our goals are two-fold. First, we argue for the importance of models that reflect the disease process and related events that impact the conduct of the trial in both the planning and analysis stages. Time is a basic element in trials on disease processes and so models based on stochastic processes are crucial; we discuss the particular utility of multistate models \citep{Andersen1993, Cook2018}. Our second goal is to present guiding principles for the specification of estimands for clinical trials involving complex disease processes. We consider settings involving competing risks and semi-competing risks in some detail, along with more complex processes involving multiple and possibly recurrent events. For the planning stage of a trial when primary and secondary analysis strategies are considered, we emphasize the important role of models that account for the
disease process, study protocol, and clinical interventions.
In settings involving intercurrent events, we show how multistate models
can be used to jointly represent disease and intercurrent event processes.
The structure offered by this multistate representation can
clarify the interpretation of potential target estimands,
reveal what is needed to estimate them, and illuminate whether they have an interpretation in the real world. We also highlight the value of specifying utilities for different disease states. While some may view these as subjective and undesirable for assessing the effects of experimental treatments in clinical trials, they are used to great effect in
quality of life and health economic analyses \citep{Gelber1989, Torrance1986}. Moreover, utilities are often adopted implicitly; for example,
progression-free survival and other composite time-to-event responses equally weight the component outcomes, and relative utilities of different disease paths are implied when computing win ratios \citep{oakes2016}.
Explicit specification and incorporation of utilities into the evaluation of treatments both enables the handling of complex processes and makes explicit the relative importance of different events.
The paper is organized as follows. In Section \ref{sec1.2} we describe challenges arising in two illustrative multicenter clinical trials.
In Section \ref{sec2.0} we introduce notation for multistate models using illness-death and competing risks processes as illustrations,
define intensity functions, and give examples of functionals of process intensities which may serve as a basis for defining estimands.
Principles for the specification of estimands are given in Section \ref{sec2.1}. In Section \ref{sec3} we discuss these processes in more detail, and provide some numerical results concerning the effects of assumptions,
model misspecification and the interpretation of estimands. Section \ref{sec4} illustrates the utility of multistate models for dealing with more complex processes including those with intercurrent events. Section \ref{sec5} contains remarks on potential outcomes and on utility-based estimands, and
concluding remarks are given in Section \ref{sec6}.
\subsection{Some motivating and illustrative studies} \label{sec1.2}
\subsubsection{Skeletal complications in patients with cancer metastatic to bone} \label{bonemets-study}
Many individuals with cancer experience skeletal metastases which can in turn cause fractures, bone pain, and the need for therapeutic or surgical
interventions \citep{coleman2006}. Prophylactic treatment with a bisphosphonate can strengthen bone and reduce the risk of skeletal-related events and so is often used to mitigate complications and improve quality of life in affected individuals. \cite{Theriault1999} report on an international multicenter randomized trial involving patients with stage IV breast cancer having at least two predominantly lytic bone lesions greater than one centimeter in diameter. Eighty-five sites in the United States, Canada, Australia and New Zealand recruited patients who were randomized within strata defined by ECOG status to receive a 90 mg infusion of pamidronate every four weeks ($n=182$) or a placebo control ($n=189$). Skeletal complications of primary interest included nonvertebral and vertebral fractures, the need for surgery to treat or prevent fractures, and
the need for radiation for the treatment of bone pain.
Individuals with metastatic cancer are at high risk of death and here the skeletal-related event process is terminated by death,
creating a semi-competing risks problem for the analysis of the time to the first skeletal event.
If interest lies in assessing the effect of pamidronate on the occurrence of recurrent skeletal complications then the
recurrent event process is terminated by death.
Figure \ref{sre} is a multistate diagram depicting the possible occurrence of up to $K$ skeletal-related events while accommodating death from any of the transient
states.
Intensity-based analyses are natural for modeling such disease processes but marginal rate-based methods are more suited for primary analysis in clinical trials; we address this in detail in subsequent sections.
\cite{Cook2009} discuss methods for nonparametric estimation of rate and mean functions in this setting, while accommodating the possible impact of
dependent censoring; we discuss approaches to analysis of the recurrent and terminal event processes in Section \ref{sec4.3} but discuss issues for the
simpler illness-death process in Section \ref{sec3}.
\begin{figure}
\caption{A multistate model for up to $K$ recurrent skeletal complications and death for \cite{Theriault1999}.}
\label{sre}
\end{figure}
\subsubsection{Assessing carotid endarterectomy versus medical care in stroke prevention} \label{nascet-intro}
Atherosclerosis involves the development of arterial plaque and stenosis of the carotid arteries putting individuals at increased risk of stroke. \cite{Barnett1998} report on a multicenter clinical trial designed to evaluate the effect of a surgical intervention called carotid endarterectomy,
which involves surgical removal of plaque to increase the diameter of the carotid artery and enhance blood flow, compared to usual medical therapy
(e.g. use of platelet inhibitors and antihypertensive drugs).
The goal of both treatment strategies is the prevention of stroke and stroke-related death. To be eligible for recruitment individuals must have experienced at least one transient ischemic attack or minor stroke and have at least a 30\% narrowing of the carotid artery on the same side as (ipsilateral to) the event, as determined by central angiographic examination. Consenting patients were randomized to either carotid endarterectomy and best medical care, or best medical care alone; as part of best medical care, aspirin was recommended for all patients at a dose of 1,300 mg/day and blood pressure was carefully controlled through regular monitoring. The endpoints of interest include
i) any stroke arising from the same side as the one designated at the time of recruitment as a possible site for surgery,
ii) any stroke,
iii) the composite event of stroke or stroke-related death, and
iv) the composite event of any stroke or death.
For iii), deaths unrelated to stroke are a competing risk which should be addressed when assessing the effect of surgery.
\begin{figure}
\caption{A multistate model depicting the occurrence of stroke, switch from medical care to carotid endarterectomy, stroke-related death and non-stroke related death
for the stroke prevention trial reported on by \cite{Barnett1998}.}
\label{nascet}
\end{figure}
There were two strata of particular interest defined by whether the degree of carotid stenosis at randomization was moderate (stenosis of 30-69\%) or
severe (stenosis of 70-99\%). An interim analysis led to early termination of recruitment to the severe stenosis stratum and publication of the finding that carotid endarterectomy was highly effective in the prevention of stroke among these patients \citep{NASCET-NEJM1991}. The recruitment and follow-up of the moderate stenosis stratum continued, but many individuals randomized to medical care ultimately underwent carotid endarterectomy,
further complicating analysis and interpretation of this data.
There were various reasons reported for crossover from the medical to surgical interventions including atherosclerotic progression from
moderate stenosis at randomization to severe stenosis during follow-up, at which point the criteria for carotid endarterectomy in the publication based on the severe
stenosis stratum were met.
Physicians treating patients experiencing non-fatal stroke may also recommend that they undergo carotid endarterectomy to reduce the risk of future stroke.
In what follows we focus on the occurrence of the first stroke of any type and death due to stroke, and consider strategies for dealing with complications
due to both the competing risk of death unrelated to stroke and participants randomized to medical care undergoing carotid endarterectomy.
Figure \ref{nascet} contains a multistate diagram depicting the occurrence of the first event among stroke, stroke-related death, death unrelated to stroke,
and crossover to surgery; the latter event can only occur among patients randomized to medical care.
Additional states can be entered upon the occurrence of subsequent events; see Figure \ref{nascet}.
We revisit this example in Section \ref{secnascet}.
\section{Estimands for interventions in event history processes} \label{sec2}
\subsection{Multistate models, intensities and associated functionals} \label{sec2.0}
In a typical phase III clinical trial, treatment groups are formed by randomizing individuals to receive either an experimental or control treatment. Although the disease process may be complex, a primary objective for estimation and testing of treatment efficacy is to compare some process feature in the two treatment groups. A process feature is typically related to some outcome or event, for example, the total time spent in a given state or the time of entry to a specific state. In that case a feature is based on the response distribution, for example, the mean or median time to an event or the probability it occurs by a specific time.
We define an estimand as a one-dimensional measure of the difference in a process feature in the experimental and control groups.
The central clinical challenge involves specification of the process feature of greatest relevance; a further clinical and statistical challenge is then how to specify an estimand comparing the feature in the two groups.
\begin{figure}
\caption{Multistate diagrams for illness-death and competing risks processes.}
\label{multistateDiagram}
\end{figure}
We consider disease processes that can be represented by multistate models and will
first define notation, using the important illness-death process in Figure \ref{multistateDiagram}(a) with state space $\mathcal{S} = \{ 0, 1, 2 \}$ for illustration \citep{Andersen1993, Cook2018}. The related competing risks process shown in Figure \ref{multistateDiagram}(b) is discussed later. The illness-death process is widely applicable to oncology trials (e.g. \citealp{carey2021}), with state 0 representing a cancer-free state following initial treatment, state 1 representing events such as recurrence, relapse or progression, and state 2 representing death; individuals may enter state $2$ with or without having experienced the non-fatal event.
We let $Z(t)$ denote the state occupied by a generic individual at time $t \geq 0$ and assume that the process $\{Z(t), t \ge 0\}$ begins in state $0$ at $t=0$,
the time of randomized treatment assignment.
We let $Y_{k}(t) = I(Z(t^-)=k)$ be a state occupancy indicator that equals $1$ if an individual is in state $k \in \mathcal{S}$ at time $t^-$ and $0$ otherwise, and let $T := \inf \{t>0: Z(t) \neq 0 \}$, $T_1 :=\inf \{t>0: Z(t)=1 \}$ and $T_2 := \inf \{t>0: Z(t) = 2 \}$ represent the event-free survival time (or time spent in state $0$), the time to the non-fatal event and the overall survival time (or time to death), respectively.
Let $X$ be a binary variable indicating whether an individual was randomized to receive the experimental ($X=1$) or control ($X=0$) treatment and let $V$ denote
other observable covariates that may affect the disease process.
For simplicity we initially assume that $V$ is fixed, but we later consider settings where it may have time-varying components. If $\mathcal{H}(t)=\{Z(u): 0 \leq u < t, X,V \}$ is the history of the process up to time $t^{-}$ plus the treatment indicator $X$ and covariates $V$, the transition intensity function for a $k \longrightarrow l$ transition is defined as
\begin{align}
\lim_{\Delta t \downarrow 0} \dfrac{P(Z((t+\Delta t)^{-})=l \ | \ Z(t^{-})=k, \mathcal{H}(t))}{\Delta t} = \lambda_{kl}(t \ | \ \mathcal{H}(t)), \ \ \ \ \ \ \ \ k,l \in \mathcal{S}, \ \ k<l \; .
\end{align}
The stochastic nature of the multistate process is fully defined by the specification of all transition intensities.
For ease of discussion we consider Markov processes, in which case the intensities depend only on $t$ and the state occupied at $t^{-}$:
$\lambda_{kl}(t \ | \ \mathcal{H}(t)) = Y_k(t)\lambda_{kl}(t \ | \ X, V)$.
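As a concrete illustration (the coefficients $\beta_{kl}$, $\gamma_{kl}$ and baseline intensity $\lambda_{kl,0}$ are notation introduced only for this example), a standard multiplicative specification takes $\lambda_{kl}(t \ | \ X, V) = \lambda_{kl,0}(t)\exp(\beta_{kl}X + \gamma_{kl}^{\top}V)$; the model used for numerical illustration in Section \ref{sec2.3} is the time-homogeneous special case of this form in which the baseline intensities are constant.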
We use the term \textit{process feature} to mean any functional of the set of intensities \citep{Andersen2012.1}.
The survivor function for the event-free survival time is, for example, given by
\begin{equation}
\label{EFS}
P(T > t|X,V) = P_{00}(0,t|X,V) = \exp \biggl( - \int_{0}^{t} (\lambda_{01}(u|X,V) + \lambda_{02}(u|X,V))du \biggr),
\end{equation}
where $P_{kl}(s,t \ | \ X,V) = P(Z(t)=l \ | \ Z(s)=k, X,V)$ for $s \le t$.
In addition $P(T_2 > t|X,V) = P_{00}(0,t|X,V) + P_{01}(0,t|X,V)$ where $P_{01}(0,t|X,V)$ is
\begin{align*}
\int_{0}^{t} P_{00}(0,u|X, V) \; \lambda_{01}(u|X,V) \exp \biggl( - \int_{u}^{t} \lambda_{12}(s|X,V)ds \biggr) du \; .
\end{align*}
The distribution function for $T_1$ is often called the cumulative incidence function for the non-fatal event and is given by
\begin{equation}
\label{CIF}
F_{1}(t|X,V) = P(T_1 \leq t|X,V) = \int_{0}^{t} P_{00}(0,u|X,V) \lambda_{01}(u|X,V)du \; .
\end{equation}
It should be noted that $F_1(t|X,V)$ approaches a limit less than one as $t$ increases.
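For instance, in the time-homogeneous special case with constant intensities $\lambda_{01}(u|X,V)=\lambda_{01}$ and $\lambda_{02}(u|X,V)=\lambda_{02}$ (used here purely as an illustration), (\ref{EFS}) gives $P_{00}(0,u|X,V)=\exp(-(\lambda_{01}+\lambda_{02})u)$ and (\ref{CIF}) evaluates to
\begin{equation*}
F_1(t|X,V)=\frac{\lambda_{01}}{\lambda_{01}+\lambda_{02}}\left\{1-\exp(-(\lambda_{01}+\lambda_{02})t)\right\} \ \rightarrow \ \frac{\lambda_{01}}{\lambda_{01}+\lambda_{02}}<1
\end{equation*}
as $t\rightarrow\infty$; the limit is the probability that the non-fatal event occurs before death.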
\subsection{Principles for defining estimands} \label{sec2.1}
{
\cite{Andersen2012.1} provide important guidance on the analysis of life history processes to ensure interpretable results.
This has shaped our views on the specification of estimands and is recommended reading for those working in life history analysis in observational or
experimental settings.
The three main tenets of \cite{Andersen2012.1} are:
\textit{don't condition on the future}, \textit{don't condition on having reached an absorbing state} and \textit{stick to the real world}.
These tenets are recurring themes in this paper.
To deal with the practicalities of actual trials we use models that represent the disease process as well as features such as intercurrent events. Responses and features refer to particular aspects of a process that are of key interest. For example, the three-state diagram of Figure \ref{multistateDiagram}(a) may represent possible disease paths, but in a study aiming to reduce the occurrence of the intermediate event the time of entry to state 1 may be the response of interest. Figure~\ref{nascet} portrays a much more complicated setting where intercurrent events may occur.
We begin with an idealized setting in which individuals in a trial experience illness-death processes, with no premature dropouts before a common end of followup time $C$. The first step in specifying an estimand involves identifying an observable process $\{Z(t), t > 0\}$ and specifying a process feature on which treatment comparisons will be based.
This can be surprisingly challenging in complex settings and will depend on the clinical meaning of the different states and the primary goals for the experimental intervention. In randomized trials of palliative therapies treatments might aim to reduce the occurrence of an adverse non-fatal event; for example \cite{Cook2009} and Section 1.2.1 above describe trials involving the reduction of fractures and other skeletal events in patients with advanced breast cancer. In this case the response of interest could be entry to state 1, with the associated feature the $0 \rightarrow 1$ intensity function $\lambda_{01}(t)$ or the cumulative incidence function $F_1(t)$. In oncology trials aiming to prolong survival in patients with advanced cancer, state 1 might represent disease progression, and the overall survival time $T_2$ or disease-free survival time $T$ is often considered as the response of interest \citep{Rittmeyer2017, Powles2018}.
Once a process feature has been identified, the second step is the specification of an estimand $\beta$.
One option is to take a specific time $\tau$ and a feature such as $F_1(\tau|X)$, and to define $\beta$ as the ratio or difference in
$F_1(\tau|X=1)$ and $F_1(\tau|X=0)$.
This requires minimal assumptions, but estimands will naturally vary according to the chosen value of $\tau$ which may be contentious.
If the process feature is a function of time such as $F_1(t|X)$ or $\lambda_{01}(t|X)$, a second option is to adopt modeling assumptions that provide a one-dimensional estimand. In the case of entry to state 1, an estimand $\beta$ is frequently defined by assuming $F_1(t|X=1)=\exp(\beta) F_1(t|X=0)$ or $\lambda_{01}(t|X=1)=\exp(\beta) \lambda_{01}(t|X=0)$.
This is not strictly necessary since we could, for example, define $\beta$ as the maximum of $|F_1(t|X=1)$ - $F_1(t|X=0)|$ over $(0,\tau]$.
However, power calculations for tests of no treatment effect used in planning studies are most conveniently formulated in terms of model-based estimands, and we will focus on them.
Process features and related estimands can be distinguished according to whether they do or do not condition on previous process history. We refer to the former as dynamic features; process intensity functions are of this type. We term features that do not condition on previous history as marginal features; the cumulative incidence function $F_1(t|X)$ is an example. We will also refer to features as having
a \textit{marginal} interpretation or
a \textit{dynamic} interpretation.
Causal inference for treatment effects based on marginal features is in principle straightforward: it is facilitated
by the random allocation of treatment to trial participants at time $t=0$.
Transition intensities are defined conditional on the process history and estimands such as intensity ratios are geared towards process dynamics \citep{Aalen2012}. Intensity functions are crucial to a full understanding of a disease process, and to an understanding of specific marginal features, but
for reasons we expand on below, are not suited for simple causal inference based on randomization.
They are, however, important for a deeper understanding of causal mechanisms which in processes must address time-varying factors and random events.
For example, in view of the relationship (\ref{CIF}), the effects of $X$ on both process intensities $\lambda_{01}(t|X)$ and $\lambda_{02}(t|X)$ must be
considered in order to understand what has produced an observed effect in $F_1(t|X=0)$ and $F_1(t|X=1)$. {Chapter 9 in \cite{Aalen2008} provides a thoughtful survey of aspects and approaches to causality in the context of event history processes.}
We support the common position that primary assessment of treatment effects in randomized trials should be based on marginal features and estimands. Beyond this, we argue that they should possess three fundamental properties:
\begin{enumerate}
\item
An estimand should represent the difference between treatment groups with respect to a clinically relevant marginal feature.
\item
Features and estimands should be interpretable in the real world, meaning the response involved should be an element of the observable process.
\item
Estimands should not be sensitive to uncheckable assumptions;
models on which an estimand is based should be assessed using available data.
\end{enumerate}
The \textit{first principle} ensures that any findings will have clear relevance for treatment decisions.
While this appears a straightforward objective it can be challenging to satisfy this principle in the face of intercurrent events.
The \textit{second principle}, that estimands be interpretable ``in the real world'', may seem self-evident but estimands that do not satisfy this condition are often adopted. For a competing risks process the subdistribution hazard function (see \citealp{Fine1999}) corresponding to $F_1(t|X)$ is a well-known functional that violates our second principle.
Individuals who have previously experienced event-free death are included in the risk set at time $t$ when defining and estimating the sub-distribution hazard but
such individuals are not genuinely at risk of the non-fatal event in the real world \citep{Andersen2012.1, Putter2020}.
Moreover, most of the recent work in causal inference defines estimands based on a potential outcome for each individual subject, corresponding to each of the treatments under study; in the real world of most trials,
only one of the treatments is received by an individual, making the other potential outcomes ``counterfactual''. In this case intra-individual treatment effects such as the difference in the two potential outcomes are not observable in the real world.
Many proposed solutions to challenges with intercurrent events also fail to satisfy principle 2: they involve specification of higher dimensional potential outcomes, taking us further away
from the real world.
Estimands of this type include the survivor average causal effect and other estimands in \cite{Comment2019}, \cite{Stensrud2020}, \cite{Xu2020} and \cite{Young2020}. The \textit{third principle} ensures that one can assess the adequacy of an underlying model and related assumptions, and their effect on the validity of conclusions.
Of course the more complex the disease and disease management processes are, the more difficult it is to select a single primary estimand that adequately summarizes treatment effects.
Secondary analyses that enhance understanding of intervention effects and more thoroughly characterize the overall response to treatment are an important aspect of clinical trials.
}
\subsection{Problems with conditioning on features of the process history}
\label{sec2.3}
In Sections \ref{sec3} and \ref{sec4} we discuss marginal estimands for specific processes, but first we review problems with causal interpretations of intensity functions and other conditional process features.
In particular, estimands such as intensity ratios for experimental versus control treatment groups do not have a simple causal interpretation.
Although treatment $X$ is randomly assigned at time $t=0$ and is thus independent of other covariates $V$ affecting the disease process, at a later time $t$ the
random allocation of $X$ and independence of $V$ no longer holds among those at risk for the event in question.
Confounding induced by conditioning on post-randomization events that may also be responsive to the treatment is a well-known phenomenon;
it has been comprehensively discussed by \cite{Hernan2010}, \cite{Aalen2015} and \cite{Martinussen2020} for the standard survival case, where the hazard function at time $t$ conditions on being alive then. In illness-death processes, time-dependent confounding is more involved; for example the intensity function for the non-fatal event conditions on being alive and event-free, and confounding depends on the effect of $V$ on both event types. Hazard or intensity ratios nevertheless remain popular in the analysis of randomized trials, so we illustrate here the impact of time-related confounding in an illness-death process.
For simplicity, we let $V$ be a binary covariate with $P(V=1)=0.5$ and independent of $X$ due to randomization ($X \perp V$).
We let $P(X=1)=P(X=0)=0.5$ and assume that the true illness-death process has intensity functions $\lambda_{0k}(t | X, V) = \lambda_{k} \exp(\gamma_{kx}X + \gamma_{kv}V)$ for $k=1,2$;
thus $V$ may affect the intensity for both the non-fatal event and death, and both intensities are of proportional hazards form.
The conditional joint distribution of $X$ and $V$ for subjects in state 0 at time $t^-$ is
$$P(X, V | Z(t^-)=0) = \dfrac{P(Z(t^-)=0 | X, V)P(X)P(V)}{\mathbb{E}_{XV}(P(Z(t^-)=0 | X, V))}$$ and this cannot be factored into separate functions
of $X$ and $V$, so $X \not \perp V | Z(t^-)=0$.
In addition, the marginal intensity function ``averaged'' over $V$ has the form
\begin{align*}
\lambda_{01}(t | X=x, Z(t^-)=0) &= \mathbb{E}_{V}(\lambda_{01}(t | X=x, V) | X=x, Z(t^-)=0) \\
&= \lambda_{1}\exp(\gamma_{1x}x) \mathbb{E}_{V}(\exp(\gamma_{1v}V) | X=x, Z(t^-)=0),
\end{align*}
where $P(V | X=x, Z(t^-)=0) = P(Z(t^-)=0 | X=x, V)P(V)/\mathbb{E}_V(P(Z(t^-)=0 | X=x, V))$.
As a result the marginal intensity ratio $IR_{01}(t)=\lambda_{01}(t | X=1, Z(t^-)=0)/\lambda_{01}(t | X=0, Z(t^-)=0)$ is given by
\begin{equation} \label {marg-int-ratio}
\exp(\gamma_{1x}) \biggl[ \dfrac{\exp(\gamma_{1v})P(V=1 | X=1, Z(t^-)=0) +
P(V=0 | X=1, Z(t^-)=0)}{\exp(\gamma_{1v})P(V=1 | X=0, Z(t^-)=0) + P(V=0 | X=0, Z(t^-)=0)} \biggr].
\end{equation}
If $\gamma_{1v} \neq 0$, this is a function of time $t$, so the intensity ratio for experimental versus control subjects is not constant.
Two key points follow:
(i) the marginal intensity ratio at a given time $t>0$, $IR_{01}(t)$, cannot be interpreted causally if confounders $V$ are omitted, and
(ii) even if the true intensities conditional on both $X$ and $V$ are proportional, so that there is a scalar treatment effect $\gamma_{1x}$ that applies at all times $t$,
the marginal intensity ratio is time-dependent and so does not yield a scalar estimand.
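Before turning to the specific settings considered below, the following minimal sketch (in R, with illustrative parameter values rather than those used later) evaluates the marginal intensity ratio (\ref{marg-int-ratio}) under the exponential intensities above, using $P(Z(t^-)=0 | X, V) = \exp\{-(\lambda_{01}(t|X,V)+\lambda_{02}(t|X,V))t\}$ and Bayes' rule for $P(V | X, Z(t^-)=0)$.
\begin{verbatim}
## Minimal sketch (R): evaluate the marginal intensity ratio IR_01(t) under
## lambda_{0k}(t|X,V) = lambda_k * exp(g_kx X + g_kv V), k = 1, 2.
## All parameter values below are illustrative placeholders.
marginal_IR01 <- function(t, lambda = c(1, 1),
                          g1x = log(0.75), g2x = log(0.9),
                          g1v = log(2), g2v = log(2), pV1 = 0.5) {
  ## total hazard out of state 0 given (X = x, V = v)
  Lam <- function(x, v) lambda[1] * exp(g1x * x + g1v * v) +
                        lambda[2] * exp(g2x * x + g2v * v)
  ## P(V = v | X = x, Z(t^-) = 0) by Bayes' rule (exponential sojourn in state 0)
  pV_cond <- function(x, v) {
    num <- exp(-Lam(x, v) * t) * ifelse(v == 1, pV1, 1 - pV1)
    den <- exp(-Lam(x, 1) * t) * pV1 + exp(-Lam(x, 0) * t) * (1 - pV1)
    num / den
  }
  ## marginal 0 -> 1 intensity, averaged over V within the time-t risk set
  lam01 <- function(x) lambda[1] * exp(g1x * x) *
    (exp(g1v) * pV_cond(x, 1) + pV_cond(x, 0))
  lam01(1) / lam01(0)
}
## IR_01(t) differs from exp(g1x) = 0.75 and varies with t:
sapply(c(0.01, 0.25, 0.5, 1), marginal_IR01)
\end{verbatim}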
To illustrate this numerically we set $\gamma_{1x}=\log(0.75), \gamma_{2x}=\log(0.9)$ and considered three values $0, \log(2.0)$ and $\log(3.0)$ for each of
$\gamma_{1v}$ and $\gamma_{2v}$.
We chose the administrative censoring time to be $C=1$ and for each set of regression coefficients determined $\lambda_1$ and $\lambda_2$ so that $P(T \leq 1)=0.6$
and so that the conditional probability of entry to state $1$ given $T \leq 1$ was either $0.05, 0.4, 0.6, 0.8$ or $1.0$.
We incorporated loss to follow-up by introducing an independent random right-censoring time $R$ which followed an exponential distribution with rate $\rho$, with $\rho$ set to satisfy $\pi_R=P(R < T_1 | T_1 < \min(T_2, 1))=0.2$.
Thus 20\% of the non-fatal events occurring before the administrative censoring time $C$ are censored due to early withdrawal.
As is common in primary analyses of clinical trials, we consider a cause-specific hazards model for the non-fatal event of the proportional hazards form
$\lambda_{01}(t | X) = \tilde{\lambda}_{1}(t)\exp(\phi_1 X)$ where $\phi_1$ is estimated by maximizing a Cox partial likelihood. As we describe below, the true hazard ratio varies with time $t$ so this model is misspecified. However,
the maximum partial likelihood estimator $\hat{\phi}_1$ converges in probability to a limit $\phi^{*}_1$ as the sample size grows, and we first consider these limiting values; see Appendix \ref{app-Cox} for details on how $\phi^{*}_1$ can be obtained.
Figure $\ref{BiasCoxIntensityAnalysis}$ plots $100(\phi_1^{*}-\gamma_{1x})/\gamma_{1x}$,
the percent relative difference between $\phi^{*}_1$ and $\gamma_{1x}$, the effect of $X$ in the true process conditional on $X$ and $V$,
as a function of $\exp(\gamma_{1v})$ (upper panel) and $\exp(\gamma_{2v})$ (lower panel).
When $V$ affects the fatal event only (i.e. $\gamma_{1v}=0$), $\phi^{*}_1$ equals the conditional treatment effect $\gamma_{1x}$ in the true process;
see (\ref{marg-int-ratio}) and the bottom set of panels in Figure \ref{BiasCoxIntensityAnalysis}.
For all other scenarios, the limiting value $\phi^{*}_1$ differs from $\gamma_{1x}$.
In settings where the non-fatal event occurs most often (right hand panels, bottom row), the relative difference is about $10 \%$.
The magnitude of the difference increases with stronger effects of $V$ on the fatal and non-fatal event intensities.
The relative difference depends on the probability of being event-free at time $t^-$, which is given by $(\ref{EFS})$, and on the proportion of non-fatal events.
In the extreme case of $P(T_1 < T_2 | T \leq 1) =1$, $\lambda_{02}(t|X,V)$ is zero; the competing risks setting then coincides with the standard survival
case and our results align with the points made in \cite{Aalen2015}.
If on the other hand the non-fatal event happens rarely, the relative difference is smaller but its dependence on $\gamma_{2v}$ becomes stronger.
As noted by \cite{Struthers1986}, we also find that $\phi_{1}^{*}$ depends on the censoring distribution.
Figure $\ref{MarginalHR}$ depicts the true marginal intensity ratio $IR_{01}(t)$ over the time interval $[0,1]$ for different values of $\gamma_{1v}, \gamma_{2v}$ and
$P(T_1 < T_2 | T \leq 1)$.
As noted earlier, unless $\gamma_{1v}=0$, $IR_{01}(t)$
varies with time so that the marginal proportional hazards model is misspecified; this further complicates the interpretation of the effective estimand $\phi^{*}_1$.
The magnitude of variation in the intensity ratio over time depends on the effects of $V$ on the intensity functions for the non-fatal and fatal event;
the effects on $\phi^{*}_1$ seen in Figure \ref{BiasCoxIntensityAnalysis} are related to this.
To summarize, intensity-based comparison of treatment groups is not recommended for the specification of estimands and primary analysis. Even if models adequately describe the observed data, intensities condition on post-randomization events and do not permit randomization-based causal interpretations. In addition, the almost inevitable presence of other factors $V$ that affect the disease process further complicates a causal interpretation of treatment effects. For more comprehensive marginal models or intensity-based models used for secondary analysis, known baseline covariates $V$ can be included, with inferences about the effect of $X$ now adjusted for $V$. Even then there may exist unknown or unobserved covariates that affect the process, so some caution should be exercised when interpreting treatment effects.
\begin{sidewaysfigure}
\scalebox{0.59}{
\includegraphics{Bias_IntensityAnalysis_new1.pdf}
}
\scalebox{0.59}{
\includegraphics{Bias_IntensityAnalysis_new2.pdf}
}
\caption{Asymptotic percent relative difference $100(\phi^{*}_1 - \gamma_{1x})/\gamma_{1x}$ where $\phi^{*}_1$ is the limiting value of the estimator from a cause-specific Cox model for the $0 \longrightarrow 1$ transition when the covariate $V$ is omitted; $C=1, P(X=1)=0.5, P(V=1)=0.5,P(T \leq C)=0.6, \gamma_{1x}=\log(0.75), \gamma_{2x}=\log(0.9).$}
\label{BiasCoxIntensityAnalysis}
\end{sidewaysfigure}
\begin{figure}
\caption{Plots of the true marginal intensity ratio $IR_{01}(t)$ over the time interval $[0,1]$ for different values of $\gamma_{1v}$, $\gamma_{2v}$ and $P(T_1 < T_2 | T \leq 1)$.}
\label{MarginalHR}
\end{figure}
\subsection{General frameworks for expressing marginal treatment effects}
\label{GeneralEstimandFramework}
We now consider frameworks for specification of marginal effects and discuss estimability and model assumptions.
Most proposed estimands are based on the distribution of the time to some event; for an illness-death process,
the times $T_1, T_2$ and $T$ are all used in certain settings.
Quantiles or expected values of these times in ``restricted'' form are also used;
the restricted mean time event-free over a specified time interval $(0,\tau]$ is $RMT(\tau)=\mathbb{E}(\min(T,\tau))$.
Estimation of state occupancy probabilities by treatment group, $P(Z(t)=k|X)$, is also appealing in many cases and can be related to distributions of
event times.
For example, $P(T>t|X)=P(Z(t)=0|X)$ and $RMT(\tau|X)= \int_{0}^{\tau} P(Z(u)=0|X)du$.
Similarly, in an illness-death process $P(T_2 > t|X)=P(Z(t) \in \{0, 1\}|X)$.
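The restricted mean identity above follows from writing the expectation of a nonnegative variable as the integral of its survivor function, a standard step worth recording:
\begin{equation*}
\mathbb{E}\{\min(T,\tau) | X\} = \int_{0}^{\infty} P(\min(T,\tau) > u | X)\, du = \int_{0}^{\tau} P(T > u | X)\, du = \int_{0}^{\tau} P(Z(u)=0 | X)\, du .
\end{equation*}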
A third related framework is utility-based: we assign utility scores to each state, and then consider the average cumulative utility over a specified time period.
This has been used in quality of life comparisons in oncology trials \citep{Gelber1989, Gelber1995, Glasziou1990, Cook2003}.
We will briefly comment on utility-based estimands in Section \ref{sec5.2}.
In the time-to-event framework $P(T_E \le t)=F_E(t)$ denotes the distribution function for the time $T_E$ to a defined event $E$.
With a specified time horizon, one can estimate $F_E(\tau|X)= P(T_E \le \tau|X)$ separately for each treatment arm and compare them; this can be done nonparametrically.
A more common approach for specifying an estimand $\beta$ is through transformation models of the form
\begin{equation}
\label{EventTime}
g(F_E(t|X)) = \alpha_{0}(t) + X \beta,
\end{equation}
where $g$ is a known differentiable monotonic function on $(0,1)$ and $\alpha_{0}(t)=g(F_E(t|X=0))$ is a monotonic
function with $\alpha_0(t) \downarrow - \infty$ as $t \downarrow 0$ \citep{Scheike2007, Scheike2008}.
Such a model makes a strong assumption about the form of any difference in $F_{E}(t|X=1)$ and $F_{E}(t|X=0)$ and should be checked in a given setting.
We note that the common practice of assuming a Cox proportional hazards model for times $T_E$ produces a model of the form (\ref{EventTime}); as discussed, we argue that the treatment effect $\beta$ in such a model should be interpreted for causal purposes in terms of (\ref{EventTime}) and not as a hazard ratio.
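To make this connection explicit, note that under a proportional hazards model for $T_E$ with baseline cumulative hazard $\Lambda_0(t)$ (notation introduced here only for this illustration),
\begin{equation*}
F_E(t | X) = 1 - \exp\{-\Lambda_0(t)\, e^{X\beta}\}, \qquad \text{so that} \qquad \log\{-\log(1 - F_E(t | X))\} = \log \Lambda_0(t) + X\beta,
\end{equation*}
which is of the form (\ref{EventTime}) with $g$ the $cloglog$ transform and $\alpha_0(t)=\log\Lambda_0(t)$.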
We discuss special cases of model (\ref{EventTime}) in the context of a competing risk process in Section \ref{sec3}.
Finally, tests of no treatment effect are a key component of primary analysis, and power calculations are important in planning trials. Hypothesis tests can be
based on nonparametric estimates of process features but to address power we usually want a parametric assumption that gives an ordering of alternative hypotheses.
We assume in further discussion that hypotheses can be expressed in terms of a parametric estimand $\beta$.
\section{Illness-death and competing risk processes} \label{sec3}
\subsection{Estimands based on marginal process features}
We now take a closer look at estimands based on marginal features of a process. In this section we consider illness-death and competing risks processes, given their wide applicability. More general multistate processes involving several intermediate health states, recurrent events or reversible conditions are considered in the next section. Additional issues complicating the interpretation and selection of estimands arise when post-randomization events involving compliance, treatment switching, rescue therapy, or loss to followup occur; these are also discussed in Section 4.
We first consider illness-death settings where times $T_1$ or $T$ are responses of interest. In their case it is sufficient to focus on the associated competing risks process $\{Z(t), t \geq 0 \}$ in Figure \ref{multistateDiagram}(b), where $1 \rightarrow 2$ transitions are not considered. The exit time from state $0$ is $T = \inf\{t>0:Z(t)\neq 0 \}$ and the indicator $\varepsilon = Z(T) \in \{1, 2 \}$ records the
type of event. We let $X$ and $V$ denote treatment indicator and other covariates, as before. The intensity functions in this case are often called cause-specific hazard functions:
$$ \lim_{\Delta t \downarrow 0} \dfrac{P(T \in [t, t+ \Delta t), \varepsilon=k | T \geq t, X,V)}{\Delta t} = \lambda_{0k}(t | X,V), \ \ \ \ \ \ \ \ k=1,2 $$
and they completely determine the competing risks process. Models for the intensities have to be chosen,
but methodology and software for fitting and checking models is widely available \citep{Cook2018}. The event time $T$ has survivor function $S(t|X,V)$ given by (\ref{EFS})
and the cumulative incidence function for $k=1$ is given by (\ref{CIF}); each
depends on both cause-specific hazard functions.
To illustrate the specification of estimands based on marginal features, we consider quantifying the difference between $F_1(t|X=1)$ and $F_1(t|X=0)$; for
convenience we continue to denote the cumulative incidence function as $F_1$, though we condition now on $X$ alone.
One approach is to estimate the two functions nonparametrically
\citep[Section 3.2]{Cook2018}
and then to use some one-dimensional measure of their difference. \cite{Zhang2008} proposed comparison at a specified time point $\tau > 0$.
Such estimands do not provide a full picture of the effect of treatment on $F_1(t)$, nor do estimands such as differences in restricted mean time or
the maximum of $|F_1(t|X=1)-F_1(t|X=0)|$ over a time period $(0,\tau]$.
A second way to obtain a one-dimensional estimand is by making parametric assumptions about the difference between $F_1(t|X=1)$ and $F_1(t|X=0)$.
The most common approach has been to use models that are of generalized linear form, as in (\ref{EventTime}).
The function $\alpha_{0}(t)=g(F_1(t|X=0))$ may be modelled parametrically or nonparametrically and the regression
coefficient $\beta= g(F_{1}(t \ | \ X=1))-g(F_{1}(t \ | \ X=0))$ is an estimand.
Several functions $g(u)$ have been considered in the literature \citep{Fine1999, Gerds2012}; common choices are
$g(u)=\log(u)$, ${\rm logit}(u)$ and $\log(-\log(1-u))$, often called the complementary log-log, or cloglog transform.
Assumed models should of course adequately represent treatment group differences; if this is not the case the interpretation of estimates $\hat{\beta}$ is
affected, as we illustrate in the next section.
Parametric or semi-parametric estimation for arbitrary transformation models for $F_1(t|X)$ can be based on so-called direct binomial estimating
functions \citep{Scheike2007} or on pseudo-value methods \citep{Klein2005}.
These methods avoid modeling $F_2(t|X)$, whereas maximum likelihood estimation requires such a model when some individuals are still in state $0$ at the end of
followup.
The most widely used model is based on the $cloglog$ function;
it can be used in analysis based on a weighted partial likelihood estimating function targeting the hazard function for the sub-distribution $F_1(t|X)$ \citep{Fine1999}.
As noted previously, this ``hazard'' function does not have a real world interpretation \citep{Andersen2012.1}; however, although the associated estimand $\beta$
cannot be interpreted as a hazard ratio, it can be interpreted in terms of the $cloglog$ transformation model, and the estimation procedure of \cite{Fine1999} is valid in this context. These methods are reviewed by \citet[Section 4.1]{Cook2018}, who provide references to software. In addition to packages mentioned there, the R-functions coxph and cifreg now handle the Fine-Gray method.
The adequacy of a transformation model for $F_1(t|X)$ can be checked in a given setting by plotting $g$-transformed nonparametric Aalen-Johansen estimates for each treatment group.
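A minimal sketch of this check, assuming the cmprsk package and hypothetical variable names (ftime, fstatus with 1 = non-fatal event, 2 = death, 0 = censored, and treatment indicator X):
\begin{verbatim}
## Plot cloglog-transformed nonparametric cumulative incidence estimates by arm;
## roughly parallel (vertically shifted) curves support the cloglog model.
library(cmprsk)
cloglog <- function(u) log(-log(1 - u))
fit <- cuminc(ftime, fstatus, group = X)   # estimates of F_1(t|X) and F_2(t|X)
## list components are named "group cause"; here cause 1 within each arm
with(fit[["0 1"]], plot(time[est > 0], cloglog(est[est > 0]), type = "s",
     xlab = "t", ylab = "cloglog of F1(t | X)"))
with(fit[["1 1"]], lines(time[est > 0], cloglog(est[est > 0]), type = "s", lty = 2))
\end{verbatim}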
We have found that the $cloglog$ model represents the results quite well for many oncology trials in which state $1$ represents disease progression or the onset
of complications. One can explore different transformations through parametric families of
functions $g(u, \nu_k)$ specified by an unknown parameter $\nu_{k}$; this is easier if parametric assumptions are also made about the function $\alpha_0(t)$. \cite{Jeong2007} considered a generalized Burr form
\begin{equation}
\label{parametricCIF}
F_{k}(t \ | \ X) = 1 - \biggl(1+\dfrac{\exp(\alpha_{0k}(t, \phi_k))\exp(X\beta_k)}{\nu_k}\biggr)^{-\nu_k}
\end{equation}
for $k=1,2$ and developed maximum likelihood procedures. Model $(\ref{parametricCIF})$ includes parametric versions of the $logit$ and $cloglog$ transformation
models ($\nu_{k}=1$ and $\nu_{k} \longrightarrow \infty$ respectively) as special cases.
Since $F_k(t|X) < 1$ we need $\alpha_{0k}(t)$ to approach a finite limit as $t$ increases; $\beta_k$ must also be constrained to keep $F_k(t|X=1) < 1$.
In addition $F_1(t|X) + F_2(t|X)$ cannot exceed one, which may require additional constraints.
In practice a transformation model should provide an adequate representation up to some maximum followup time $\tau$ and these constraints may not have much
impact in some settings.
Other families of models have also been proposed, some of which automatically constrain the cumulative incidence functions to sum to
one (e.g. \citealp{Gerds2012}).
For some illness-death settings the time $T_2$ of death or failure may be the preferred feature, regardless of whether or not an individual passes through state 1.
Marginal models for $F_D(t|X)=P(T_2 \le t|X)$ such as (\ref{EventTime}) can be applied, and permit descriptive causal interpretations of treatment effects.
Cox proportional hazards models are often fitted for $T_2$ given $X$, with the regression coefficient $\beta$ for $X$ a common estimand.
Once again, this should be interpreted in terms of the associated $cloglog$ transformation model satisfied by the distribution function $F_D(t|X)$ under the Cox model;
the model can be checked as described above.
Finally, given the importance of estimands based on marginal process features in primary estimation and testing of treatment effects,
we recommend that the study planning stage include careful consideration of process intensities, and how treatment may affect them.
This promotes an understanding of what types of marginal features and models to employ and informs decisions concerning sample size and duration of followup.
In the next section we provide some numerical illustrations of the connection between intensities and marginal features, and
Section \ref{sec4} contains illustrations involving more complex settings with intercurrent events.
\subsection{Illustrative calculations for cumulative incidence function regression}\label{sec3.2}
To provide insight into the impact of process intensities on marginal process features, we consider some illustrative calculations.
A pragmatic point of view is that models only approximate reality, and when we define an estimand $\beta$ based on assumptions about a process (that is, a model), along with a method of estimating it, the true estimand is actually the limiting value $\beta^*$ of the estimator $\widehat{\beta}$ as the number of subjects $n$ becomes arbitrarily large. The value $\beta^*$ is sometimes referred to as the least false parameter \citep{Grambauer2010},
but more precisely it is the parameter value for which the assumed model family is ``closest'' (in the expected log likelihood or Kullback-Leibler sense) to the true process or distribution in question. Equivalently, it is the value for which the estimating function used to obtain the estimator of $\beta$ has expectation zero with respect to the true data generation process.
We present a simple illustration for the illness-death setting, with the true process assumed to have intensity functions
$\lambda_{0k}(t|X)= \lambda_k \exp(\gamma_k X)$ for $k=1,2$.
We focus on the cumulative incidence function $F_1(t|X)$ for the non-fatal event, and consider transformation models (\ref{EventTime})
where $g(u)= \log(u)$ or $\log(-\log(1-u))$.
The interpretation (and relevance) of $\beta$ depends on whether the transformation model adequately approximates the difference in $F_1(t|X=0)$ and $F_1(t|X=1)$.
We can also ask how $\beta^*$ is related to the baseline intensities and treatment effects in the true process. We let $P(X=1)=P(X=0)=0.5$ and considered true processes with $\gamma_1=\log(0.75)$ and $\gamma_2=\log(0.9,1.0,1.1)$; thus the treatment reduces the intensity for the non-fatal event by one quarter and gives either a mild decrease, no change, or a mild increase in the intensity for the fatal event.
We took the administrative censoring time to be $C=1$ and for each value for $\gamma_2$, set $\lambda_1$ and $\lambda_2$ so that $P(T \leq 1)$ was either 0.2 or 0.6 and so that the probability of entry to state 1, given $T \le 1$, was either 0.4, 0.6 or 0.8. We note that the assumed form for the intensities is for illustration; as we saw in Section \ref{sec2.3}, the presence of other covariates $V$ that affect intensities can produce non-proportional intensities when only $X$ is modeled.
\begin{figure}
\caption{Plots of $g(F_1(s_r | X=1))-g(F_1(s_r | X=0))$ for $20$ equi-spaced values of $s_r$ in $(0,1)$ and different values of $P(T \leq C), P(T_1 < T_2 | T \leq C)$ and $\gamma_2$ under the $log$ and $cloglog$ transformation models; $C=1, P(X=1)=0.5, \gamma_1=\log(0.75)=-0.288$.}
\label{PlotAdequacy}
\end{figure}
We consider first the adequacy of the transformation models by computing the true values of $F_1(s_r|X)$ for 20 equi-spaced values of $s_r$ in (0,1) and $X=0,1$.
For each transformation, we calculated $g(F_1(s_r|X=1))-g(F_1(s_r|X=0))$ for $r=1, \ldots,20$ and display the results in Figure \ref{PlotAdequacy}.
Departures from a constant line are indicative of model misspecification.
In most cases the $cloglog$ transformation model is a fairly good approximation, especially when the treatment has no effect on the fatal event, whereas the $\log$
transformation model approximates the differences in $F_1(t | X)$ less well.
It can be shown that at $t=0$ the difference in the transformed functions is $\gamma_1= \log(0.75)= - 0.288$ under either model.
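A brief justification of this limit: as $t \downarrow 0$,
\begin{equation*}
F_1(t | X) = \int_0^t \lambda_{01}(u | X)\, S(u | X)\, du = \lambda_1 e^{\gamma_1 X}\, t + o(t),
\end{equation*}
and since $\log\{-\log(1-u)\} = \log(u) + o(1)$ as $u \downarrow 0$, the difference $g(F_1(t|X=1)) - g(F_1(t|X=0))$ tends to $\gamma_1$ under both transformations.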
For $g(u)=\log(-\log(1-u))$, adequacy of the transformation model varies with the probability of entry to state $1$ and the treatment effect $\gamma_2$
on the fatal event intensity; the model improves with increasing $P(T_1 < T_2 | T \leq 1)$ and as the magnitude of $\gamma_2$ decreases.
For $g(u)=\log(u)$, adequacy of the model likewise varies with the parameter setting.
We have found that these results persist across various plausible settings, and we restrict our attention to $g(u)=\log(-\log(1-u))$ in the remainder of this discussion.
We considered estimation of $\beta$ for the $cloglog$ transformation model using the two main methods:
(a) the method of \cite{Fine1999} and
(b) direct binomial estimation \citep{Scheike2007}.
Competing risks data under the true process with intensity functions $\lambda_{0k}(t |X) = \lambda_{k}\exp(\gamma_k X)$ as specified above were simulated as follows:
for each of $X=0,1$ the event time $T | X$ was generated from an exponential distribution with rate
$\lambda_{01}(t | X) + \lambda_{02}(t | X)$, and given $T=t$, the event type was drawn from a single binomial trial with non-fatal event probability
$\lambda_{01}(t | X)/(\lambda_{01}(t | X)+\lambda_{02}(t | X))$.
As in Section $\ref{sec2.3}$, we also considered a random withdrawal time $R$, which was generated from an exponential distribution with rate $\rho$.
We set $\rho$ so that $\pi_R=P(R < T_1 | T_1 < \min(T_2, 1))$ was either $0.0, 0.1$ or $0.2$, with $\pi_R=0.0$ corresponding to administrative censoring only.
In scenarios with $\pi_R \neq 0$, the net censoring time was $\min(R, C)$.
We simulated data sets with $n$ = 1000 observations and for each we obtained estimates of $\beta$ using the Fine-Gray (FG) approach with and without
weight stabilization, and by direct binomial (DB) estimation based on $6$ equi-spaced time points in $(0,1)$.
We used the R-functions crr and comp.risk to fit the models, combined with unstratified Kaplan-Meier estimation to get an estimate of $P(R > t)$.
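A minimal sketch of this data-generation and fitting step (in R, assuming the cmprsk package; the values of $\lambda_1$, $\lambda_2$ and $\rho$ below are illustrative placeholders rather than the values solved for above):
\begin{verbatim}
## Generate one competing risks data set under lambda_{0k}(t|X) = lambda_k exp(gamma_k X)
## and fit the Fine-Gray model for beta in the cloglog transformation model.
library(cmprsk)
set.seed(1)
n <- 1000; C <- 1
lambda <- c(1.0, 0.5); gam <- c(log(0.75), log(0.9)); rho <- 0.25
X  <- rbinom(n, 1, 0.5)
h1 <- lambda[1] * exp(gam[1] * X)        # cause-specific hazard, non-fatal event
h2 <- lambda[2] * exp(gam[2] * X)        # cause-specific hazard, death
Tev <- rexp(n, rate = h1 + h2)           # exit time from state 0
eps <- 1 + rbinom(n, 1, h2 / (h1 + h2))  # event type: 1 = non-fatal, 2 = death
R   <- rexp(n, rate = rho)               # random withdrawal time
cens    <- pmin(R, C)                    # net censoring time
ftime   <- pmin(Tev, cens)
fstatus <- ifelse(Tev <= cens, eps, 0)   # 0 = censored
fit <- crr(ftime, fstatus, cov1 = cbind(X), failcode = 1, cencode = 0)
fit$coef                                 # estimate of beta
\end{verbatim}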
Results are shown in Table $\ref{simulation}$ for $N$ = 2000 simulation runs. For each estimation procedure we report the limiting value ($\beta^*$), the mean of the estimates $\hat{\beta}$, their empirical standard error (ESE), the average model-based standard error (ASE) and the empirical coverage probability (ECP) for $\beta^*$ of the nominal $95 \%$ confidence intervals. In Appendix \ref{appendix-FG} and \ref{appendix-DB} we briefly describe how the limiting values $\beta^{*}$ under either estimation procedure can be obtained.
\begin{table}
\centering
\scalebox{0.70}{
\begin{tabular}{ c c c cccccc l ccccc}
\hline
\multirow{3}{*}{$P(T \leq C)$} & \multirow{3}{*}{$P(T_1 < T_2 | T \leq C)$} & \multirow{3}{*}{$\pi_R$} & \multicolumn{12}{c}{\textit{\textbf{Method}}} \\ \cline{4-15}
& & & \multicolumn{6}{c}{\textit{\textbf{Fine-Gray}}} & \multirow{11}{*}{} & \multicolumn{5}{c }{\textit{\textbf{Direct Binomial Regression}}} \\
& & & $\beta^{FG*}$ & $\beta_{stab}^{FG*}$ & $\text{mean}(\widehat{\beta}^{FG}_{stab})$ & ESE & ASE & ECP & & $\beta^{DB*}$ & $\text{mean}(\widehat{\beta}^{DB})$ & ESE & ASE & ECP \\ \cline{1-9} \cline{11-15}
\multirow{9}{*}{0.6} & \multirow{3}{*}{0.4} & 0.0 & -0.2532 & -0.2532 & -0.2550 & 0.1308 & 0.1300 & 95.2 & & -0.2702 & -0.2672 & 0.1437 & 0.1429 & 95.2 \\
& & 0.1 & -0.2532 & -0.2551 & -0.2579 & 0.1341 & 0.1369 & 95.6 & & -0.2702 & -0.2693 & 0.1479 & 0.1486 & 95.1 \\
& & 0.2 & -0.2532 & -0.2572 & -0.2574 & 0.1474 & 0.1451 & 95.0 & & -0.2702 & -0.2645 & 0.1567 & 0.1558 & 95.2 \\ \cline{2-9} \cline{11-15}
& \multirow{3}{*}{0.6} & 0.0 & -0.2601 & -0.2601 & -0.2594 & 0.1067 & 0.1059 & 94.6 & & -0.2742 & -0.2704 & 0.1180 & 0.1161 & 94.4 \\
& & 0.1 & -0.2601 & -0.2617 & -0.2630 & 0.1113 & 0.1116 & 95.1 & & -0.2742 & -0.2727 & 0.1199 & 0.1214 & 95.8 \\
& & 0.2 & -0.2601 & -0.2635 & -0.2660 & 0.1172 & 0.1182 & 95.3 & & -0.2742 & -0.2737 & 0.1278 & 0.1280 & 95.2 \\ \cline{2-9} \cline{11-15}
& \multirow{3}{*}{0.8} & 0.0 & -0.2708 & -0.2708 & -0.2685 & 0.0923 & 0.0917 & 94.6 & & -0.2798 & -0.2759 & 0.1014 & 0.1002 & 95.3 \\
& & 0.1 & -0.2708 & -0.2719 & -0.2721 & 0.0955 & 0.0966 & 95.4 & & -0.2798 & -0.2781 & 0.1045 & 0.1055 & 95.5 \\
& & 0.2 & -0.2708 & -0.2730 & -0.2681 & 0.1020 & 0.1024 & 95.4 & & -0.2798 & -0.2715 & 0.1102 & 0.1119 & 95.6 \\ \hline
\end{tabular}
}
\caption{Asymptotic and empirical properties of Fine-Gray (FG) and direct binomial (DB) regression estimators for $\beta$ in $cloglog$ transformation models; $C=1, P(X=1)=0.5, \gamma_{1}=\log(0.75)= -0.288, \gamma_{2}=\log(0.9), n=1000$ subjects, $N=2000$ simulation runs. Weights in the FG and DB approaches assume covariate-independent
censoring and a correctly specified censoring model.}
\label{simulation}
\end{table}
Since $\beta^{FG*}, \beta^{FG*}_{stab}$ and $\beta^{DB*}$ solve different expected estimating equations, we do not expect these limiting values to be the same. However, they are in close agreement, and the size and direction of their departure from $\gamma_1$ is consistent with the departures from a horizontal line seen in Figure \ref{PlotAdequacy}.
In all scenarios, the average estimates under both estimation methods closely approximate the corresponding limiting values, which are close to the value
$\gamma_1=\log(0.75)$, and Figure \ref{PlotAdequacy} provides a simple basis for the interpretation of estimates $\hat{\beta}$, unlike so-called average hazard ratios.
We also see close agreement between the empirical and the average model-based standard errors, and that the empirical coverage probabilities are close to the
nominal level. Thus for the scenarios considered here, the $cloglog$ transformation model provides a reasonable approximation for the effect of $X$ on $T_1$, and both FG and DB estimation of $\beta$ in the $cloglog$ model perform well.
\section{Defining estimands for more complex settings} \label{sec4}
\subsection{Challenges involving estimands with intercurrent events} \label{sec4.1}
Marginal features, and hence the treatment effect, are usually conceptualized in an idealized setting where individuals under study are compliant, receive medical care according to a prescribed (deterministic or well-characterized stochastic) treatment strategy, and complete followup. Under such circumstances the differences between treatment arms with respect to, say, the proportion of individuals experiencing an event by some time $\tau$ can be attributed to the treatment assigned at randomization.
In practice, events such as termination of followup due to inefficacy or adverse reactions to treatment, the introduction of rescue medication, or treatment discontinuation or switching
may occur over the course of followup, making the resulting process incompatible with the idealized setting.
Such events are examples of ``intercurrent events'', defined in the ICH E9 (R1) guidance document \citep{iche9-2017} as
``events occurring after treatment initiation that affect either the interpretation or the existence
of the measurements associated with the clinical question of interest''.
We refer to intercurrent events which preclude observation of the event of interest as type 1 intercurrent events, and intercurrent events which do not preclude their observation but change the
interpretation of the clinical events (due to the condition under which they arise) as type 2 intercurrent events.
A type 1 intercurrent event (IE) can be sub-classified as a type 1A IE if it simply \textit{impacts the observation} of the clinical event of interest
because of loss to followup, or as a type 1B IE if it \textit{precludes the occurrence} of the clinical event (e.g. death).
Examples of type 2 intercurrent events are treatment discontinuation without termination of followup, treatment switching, or introduction of rescue therapy that is prohibited by the protocol.
The core challenge with intercurrent events is that marginal process features differ from those under the idealized setting, making the intended causal analyses challenging.
Consider a clinical trial where individuals are randomized to receive an experimental treatment ($X=1$) or standard care ($X = 0$).
To simplify discussion we consider a trial involving an illness-death process depicted in
Figure \ref{fig-intensity-based-models}(a),
where states 0, 1 and 2 represent being alive and event-free, alive post-event, and dead, respectively.
We assume individuals begin in state 0 at $t=0$ with followup planned until an administrative censoring time $\tau$.
Let $Z(t)$ denote the state occupied at time $t$ with $Y_k(t)=I(Z(t^-)=k)$, $k=0, 1$, and let $H(t)$ be the process history up to time $t^-$.
The $k \rightarrow l$ transition intensity is denoted by $\lambda_{kl}(t | H(t))$ and we also let
\begin{equation}
\label{qkl}
\lim_{\Delta t \downarrow 0}
\frac{P( Z(t + \Delta t^-) = l | Z(t^-)=k, X)}{\Delta t} = q_{kl}(t| X)\; ,
\end{equation}
denote the transition rate given $X$ for $(k, l) \in \{ (0,1), (0,2), (1,2) \}$.
The target estimand could be any of those discussed in Sections \ref{sec2} and \ref{sec3} including,
for example,
\begin{itemize}
\item[\textit{i)}] $\beta(\tau) = F_1(\tau | X = 1) - F_1(\tau | X = 0)$ given in Section \ref{sec2.1},
\item[\textit{ii)}] $\beta(\tau) = S(\tau | X = 1) - S(\tau | X = 0)$ where $S(t|X=x) = P(Z(t) < 2|X=x)$
is the survival function for time to death, given $X = x$.
\end{itemize}
These quantities can all be estimated with Aalen-Johansen methods using separate nonparametric estimates of (\ref{qkl}) for each treatment arm.
Estimands based on transformation models of cumulative incidence functions as in (\ref{EventTime}) are also possible but we focus here on those listed above since they require minimal modeling assumptions.
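A minimal sketch of such Aalen-Johansen estimation, assuming the survival package and a hypothetical data frame dat in counting-process form (variables id, tstart, tstop, to_state and X, with to_state a factor whose first level denotes censoring):
\begin{verbatim}
## Nonparametric Aalen-Johansen estimates of state occupancy by arm
library(survival)
dat$to_state <- factor(dat$to_state, levels = c("cens", "1", "2"))
fit <- survfit(Surv(tstart, tstop, to_state) ~ X, data = dat, id = id)
tau <- 2                                 # illustrative time horizon
summary(fit, times = tau)$pstate         # P(Z(tau) = k | X) by arm and state
## differences between arms in these probabilities give estimands such as i) and ii)
\end{verbatim}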
Multistate models such as this can be expanded to incorporate intercurrent events; this facilitates a clear discussion of marginal features, the interpretation of possible estimands, and offers a framework for
comparing analysis strategies.
Figure \ref{fig-intensity-based-models}(b) depicts an expanded state space for a joint model involving the illness-death process of
Figure \ref{fig-intensity-based-models}(a) and a type 1 intercurrent event that precludes observation of transitions after it.
Figure \ref{fig-intensity-based-models}(c) represents a joint model with a type 2 intercurrent event that does not preclude further followup:
state $0'$ is entered if the intercurrent event occurs in an individual who is event-free, while $1'$ is occupied if an individual is alive but has experienced both the
non-fatal clinical event and the intercurrent event; state $2'$ is entered upon death by individuals who have experienced the intercurrent event.
We let $\{ Z^\circ(s), 0 < s \}$ denote either the five-state process depicted in Figure \ref{fig-intensity-based-models}(b) or the six-state process depicted in Figure \ref{fig-intensity-based-models}(c), $H^\circ(t)=\{Z^\circ(s), 0< s< t, X\}$ the corresponding process history and define
$Y_k^\circ(t) = I(Z^\circ(t^-)=k)$ for $k=0,1$ or $k=0, 1, 0', 1'$, respectively.
We let $\lambda_{kl}^\circ(t | H^\circ(t))$ denote the intensity function for $k \rightarrow l$ transitions within Figure \ref{fig-intensity-based-models}(b) or
Figure \ref{fig-intensity-based-models}(c), with the corresponding rate functions
\begin{equation}
\label{qklcirc}
\lim_{\Delta t \downarrow 0}
\frac{P( Z^\circ(t + \Delta t^-) = l | Z^\circ(t^-)=k, X)}{\Delta t} = q^\circ_{kl}(t| X)\; ,
\end{equation}
for $(k,l) \in \{ (0,1), (0,2),(0, 0'), (1,2), (1, 1')\}$ or $(k, l) \in \{ (0,1), (0,2), (0, 0'), (1,2), (1, 1'), (0', 1'), (0',2'), (1', 2') \}$.
In the next section we examine three types of scenarios involving intercurrent events:
loss to followup,
discontinuation of the randomized treatment without loss to followup, and
introduction of rescue treatment without loss to followup.
In some clinical trials individuals may switch from the randomized treatment to the treatment of the other arm.
For each setting we will discuss issues arising from these events, including possible targets of inference (estimands) and strategies for estimating these quantitites.
We stress the need for careful thought about interpretation of the target estimand and the strength of assumptions required to estimate it.
We focus on estimands relevant to the real world, which invariably leads to analyses within the intention-to-treat (ITT) framework \citep{montori2001}.
\subsection{Some illustrative examples involving intercurrent events} \label{sec4.2}
\subsubsection{Loss to followup } \label{sec4.2.1}
\begin{figure}
\caption{Multistate diagrams for a simple illness-death process (panel (a)) and
joint models for an illness-death process and type 1 (panel (b)) and type 2 (panel (c)) intercurrent events.}
\label{fig-intensity-based-models}
\end{figure}
Premature loss to followup (LTF) is a type 1 IE (or results from a type 1 IE) and Figure \ref{fig-intensity-based-models}(b) depicts the illness-death process with states added to represent termination due to LTF. For a more thorough analysis we need to also consider the joint model of Figure \ref{fig-intensity-based-models}(c). In this framework,
\cite{Lawless2019} define conditionally independent LTF for multistate processes observed in cohort studies. The condition in their equation (4a) can be restated in the present notation as
\begin{equation}
\label{ind1}
\lim_{\Delta t \downarrow 0}
\frac{P( Z^\circ(t + \Delta t^-) = l | Z^\circ(t^-)=k, H^\circ(t))}{\Delta t}
= \lambda_{kl}(t| H(t))\; ,
\end{equation}
for $(k,l) \in \{ (0, 1), (0, 2), (1, 2)\}$.
This condition states that the transition intensities between states $0, 1$ and $2$ are the same for the process under observation as they are for the process of interest depicted in Figure \ref{fig-intensity-based-models}(a).
If baseline covariates or marker processes governing the $j \rightarrow j'$ transitions are present, the intensities for $j \rightarrow j'$ transitions can be modeled to give insights into the types of individuals becoming lost to followup.
These intensities can in turn be used for the construction of inverse probability of censoring weights for partially conditional rate-based analyses \citep{Datta2002, Cook2009}.
This enables consistent estimation of the $q_{kl}(t|X)$ in (\ref{qkl}) governing the process
in Figure \ref{fig-intensity-based-models}(a) and from this, weighted Aalen-Johansen estimates of state occupancy probabilities $P(Z(t)=k|X)$ can be obtained as described by \citet[Section 3.4.2.]{Cook2018}. This allows estimation of estimands such as \textit{i)}- \textit{ii)} described above.
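A minimal sketch of the weight construction under covariate-independent loss to followup, assuming the survival package and hypothetical variable names (time is the end of followup and ltf indicates loss to followup, as opposed to an event or administrative censoring):
\begin{verbatim}
## Kaplan-Meier based inverse probability of censoring weights
library(survival)
Gfit <- survfit(Surv(time, ltf) ~ 1)         # estimate G(t) = P(R > t)
Ghat <- stepfun(Gfit$time, c(1, Gfit$surv))  # G-hat(t) as a step function
ipcw <- function(t) 1 / Ghat(t)              # weight applied at observed transition times
\end{verbatim}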
We note that an additional more subtle requirement for conditionally independent loss to followup given in (4b) of \cite{Lawless2019} is expressed here in terms of Figure \ref{fig-intensity-based-models}(c) as
\begin{equation}
\label{ind2}
\lambda_{k' l'}(t | H^\circ(t)) = \lambda_{kl}(t | H(t)) \; ,
\end{equation}
\noindent
for $(k, l) \in \{(0,1), (0,2), (1,2) \}$.
This assumption implies that among those recruited and randomized to treatment, the intensities of the illness-death process are the same for those under
followup as those who have been lost to followup.
This assumption cannot be verified in the absence of data following loss to followup.
Such data can sometimes be obtained through tracing studies \citep{Lawless2019}, but these are seldom done in clinical trials. Moreover, subjects in a trial receive care and treatment by protocols which do not apply after withdrawal and so one would not expect disease dynamics to remain the same as if the person had remained in the trial. Thus, the main objective should be to protect against dependent loss to followup through violation of (\ref{ind1}) by using weights as described above.
We stress that when some degree of premature LTF is a feature of a trial, estimands such as \textit{i)}- \textit{ii)} above must address LTF. In particular, consider the commonly used event times $T,T_1,T_2$ associated with Figure \ref{fig-intensity-based-models}(a); here $T$ is the exit time of state $0$ and $T_1,T_2$ are the times of entry to states $1$ and $2$ respectively. Interpretation of $P(T > t|X)=P(Z(t)=0|X)$ is clear in the setting of Figure \ref{fig-intensity-based-models}(a), but when the observable process is that in Figure \ref{fig-intensity-based-models}(b), $P(T > t|X) = P(Z^{\circ}(t) = 0|X)$ represents the probability that an individual is event-free, alive, and not lost to followup at time $t$.
If LTF arises because of treatment discontinuation due to lack of efficacy or adverse effects, this composite interpretation represents a reasonable strategy.
If LTF is driven by fixed or time-varying biomarkers associated with event occurrence, then LTF is dependent and incorporating LTF into a composite event
(implicit in modeling the sojourn time in state 0 of Figure \ref{fig-intensity-based-models}(b)) is one solution.
If there is no evidence that LTF is marker-dependent then LTF can simply be treated as independent right censoring and analyses can be based on
Figure \ref{fig-intensity-based-models}(a).
Similar issues arise when considering $P(T_1 \le t|X)$ and $P(T_2 \le t|X)$, which in Figure \ref{fig-intensity-based-models}(b) represent
the probability that a subject has entered states 1 or 2 respectively prior to time $t$, while under followup.
\subsubsection{Treatment discontinuation, no loss to followup } \label{sec4.2.2}
In some settings the intercurrent event may be a toxicity-related event leading to discontinuation of the assigned treatment, but with the subject remaining under followup, perhaps with a change in treatment.
This constitutes a type 2 IE with observed data as in Figure \ref{fig-intensity-based-models}(c).
In this case, as in Section \ref{sec4.2.1}, we should incorporate this expanded process when defining features and estimands of interest.
For event-free survival time $T$, for example, we should consider $P(T>t)=P(Z^\circ(t) \in \{0,0'\})$. For treatment group comparisons that relate to
observable processes, we support intention-to-treat comparisons \citep{montori2001}, which here might for example be based on $\beta(\tau) =
P(T>\tau|X=1) - P(T>\tau|X=0)$, with $X$ representing treatment assigned at randomization. This in our opinion gives a more relevant estimand concerning treatment efficacy in most settings than another option that has been proposed, which is to artificially censor subjects when they cease the initial treatment; in that case, Figure \ref{fig-intensity-based-models}(b) would be used.
In many settings decisions about treatment switches are related to biomarkers used in monitoring subjects, and in this case expanded models may include a marker process. As an illustration, we consider a time-dependent marker $\{W(t), 0< t\}$ where $W(t)$ reflects some aspect of disease severity, for example a marker of bone formation or destruction (e.g. bone alkaline phosphatase) in cancer patients with skeletal metastases.
We let $H(t) = \{Z(s), 0 < s< t\}$ be the history of the illness-death process in Figure \ref{fig-intensity-based-models}(a) and
${\cal W}(t) = \{W(s), 0 < s< t\}$ be the history of the marker process alone.
The expanded history ${\cal H}(t) = \{Z(s), W(s), 0 < s< t, X\}$ includes the multistate and marker process histories along with the treatment covariate.
We let $\lambda_{kl}(t|{\cal H}(t))$ denote the transition intensities for $(k,l)$ in ${\cal S}$ for the illness-death process of
Figure \ref{fig-intensity-based-models}(a), now accommodating dependence on the marker process.
Likewise we let $\lambda^\circ_{kl}(t|{\cal H}^\circ(t))$ denote the intensities for transitions among the pairs of illness-death states of
Figure \ref{fig-intensity-based-models}(c) for $(k,l) \in \{(0,1), (0, 2), (1, 2), (0',1'), (0', 2'), (1', 2')\}$ where ${\cal H}^\circ(t) = \{Z^\circ(s), W(s), 0 < s< t, X\}$.
Modeling the intensities $\lambda_{jj'}(t|{\cal H}^\circ(t))$ for $j \rightarrow j'$ transitions corresponding to the occurrence of the IE can offer important insights concerning the types of individuals who cannot tolerate the study medication. A major difficulty in this situation, however, is the need to model the marker process in order to define and calculate marginal process features such as $P(T > t|X)$ that can be used to define estimands. A discussion of this area is beyond our present scope; \cite{Cook2022} provide an illustration.
\subsubsection{Introduction of rescue treatment, no loss to followup} \label{sec4.2.3}
In cancer clinical trials it is common for individuals to receive rescue therapy, often after evidence of disease progression, with followup continuing after the
rescue therapy has been introduced.
The decision to prescribe rescue therapy will often be guided by marker processes, perhaps in combination with information on the illness-death process.
In the case where rescue therapy is only introduced upon disease progression (entry to state 1) the $0 \rightarrow 0'$ transition intensity in
Figure \ref{fig-intensity-based-models}(c) is zero.
In such a setting estimation of event-free survival probabilities or cumulative incidence of disease progression is unaffected but overall survival probabilities
are affected.
Modeling the $0 \rightarrow 0'$ and $1 \rightarrow 1'$ intensity functions will provide insight into the kinds of individuals who are prescribed rescue therapy.
As in Section \ref{sec4.2.2} the intention-to-treat principle considers the full process in
Figure \ref{fig-intensity-based-models}(c) and is preferred for treatment comparison.
In some trials individuals may be switched from their assigned treatment to the treatment of the other arm.
Most often individuals in the control arm are prescribed the experimental treatment; in cancer trials this is often done following cancer progression \citep{watkins2013}.
Figure \ref{fig-marker-trt2}(a) is a multistate diagram for an illness-death process $\{Z(t), 0 < t\}$ with separate states for two treatment groups, with overall survival the
feature of interest.
\cite{henshall2016}, \cite{latimer2019b} and \cite{latimer2019} for example consider this setting, though not with our expanded model.
Our preference is again to formulate intensity-based models of observable event times and make clear and explicit assumptions about the disease and treatment process.
In Figure \ref{fig-marker-trt2}(b) we add a state $1''$ which can be entered from state $1'$ (indicating progression under the control therapy)
to reflect the introduction of the experimental treatment, resulting in the six-state multistate process $\{Z^{\circ}(t), 0 < t \}$.
As in Sections \ref{sec4.2.2} and \ref{sec4.2.3} the intensity functions for the $1' \rightarrow 1''$ transition
give insights into the kinds of individuals switching from the control to the treatment arm.
In this setting we again favour use of the intention-to-treat principle in conjunction with nonparametric estimation of $P(Z^{\circ}(t)=2|X)$.
\begin{figure}
\caption{Multistate models for a process of interest (panel (a))
and an expanded process incorporating a post-randomization intervention
for the control arm (panel (b)).}
\label{fig-marker-trt2}
\end{figure}
We note that it is common to see analysis of individual or composite event times such as $T,T_1$ and $T_2$ based on Cox proportional hazards models, with the
estimand $\beta$ defined as the log hazard ratio for treatment versus control groups.
Even if we interpret $\beta$ causally through an analogous transformation model as discussed earlier, this should be accompanied by an assessment of the
model's adequacy.
With either a model-based or nonparametric estimand as in \textit{i)}- \textit{ii)}, secondary intensity-based analysis is crucial to an understanding of factors
producing an observed marginal effect.
Another type of secondary analysis is to examine conditional probabilities such as
$P(Z^\circ(t)=0| Z^\circ(t) \in \{0,1,2\},X)$ or $P(Z^\circ(t)=2| Z^\circ(t) \in \{0,1,2\},X)$;
these summarize event occurrence up to time $t$ for persons not experiencing the IE by that time.
Such probabilities can be estimated nonparametrically and although they are not suitable for causal interpretation because they condition on events that
may be dependent on treatment or unobserved confounders, they and corresponding estimates of $P(Z^\circ(t) \in \{0,1,2\}|X)$ provide useful summary information.
There has been discussion in the literature and in the ICH E9 (R1) guidance document \citep{iche9-2017} about the use of principal stratification and estimation of the so-called
survivor average causal effect.
These methods condition on similar events but use counterfactuals, so do not meet our objective of real world interpretation; Section \ref{sec5.1} discusses this further.
\subsubsection{Surgical prevention of stroke-related events in the NASCET study}
\label{secnascet}
Here we consider data from the NASCET trial \citep{Barnett1998,NASCET-STROKE1991}
in which we focus on individuals randomized to medical care $(n=1118)$ or carotid endarterectomy $(n=1087)$ in the stratum with moderate ($< 70$\%) stenosis.
We consider for illustration an analysis involving the response of stroke-related events (i.e. stroke or stroke-related death).
Figure \ref{fig-nascet-ms-death}(a) shows the multistate diagram for a competing risks process $\{Z(s), 0 < s\}$ where
state 0 represents the condition of being stroke-free and alive,
state 1 is entered upon the occurrence of a stroke or stroke-death, and
state 2 is entered upon a non-stroke death.
Note that entry to state 1 due to a non-fatal stroke may be followed by a stroke death or non-stroke death but we consider this simplified model, which is
relevant for an analysis of the composite event of non-fatal stroke or stroke death; then the only competing event is non-stroke death.
\begin{figure}
\caption{Multistate diagrams for analyses of stroke or stroke-related death and non-stroke related death.}
\label{fig-nascet-ms-death}
\end{figure}
As noted in Section \ref{nascet-intro}, a number of individuals randomized to receive best available medical care crossed over to undergo carotid endarterectomy.
The reasons recorded for this included non-fatal stroke, further narrowing of the carotid artery based on angiograph examination,
both stroke and angiographic progression, and other reasons\footnote{Unfortunately, while this information was recorded at the time
of the trial, these data are not available.}.
Figure \ref{fig-nascet-ms-death}(b) shows the more complex multistate process relevant for individuals in the medical arm
which accommodates the crossover to carotid endarterectomy.
Under the intention-to-treat framework the cumulative incidence functions for the model in Figure \ref{fig-nascet-ms-death}(a)
can be estimated for both the surgical and medical arms;
these are given in
Figure \ref{fig-nascet-death-cif}(a)
for the composite event of stroke or stroke-related death and
Figure \ref{fig-nascet-death-cif}(b) for non-stroke death.
Carotid endarterectomy can dislodge plaque into the circulatory system and cause stroke, so there is a perioperative period of high risk of stroke reflected
by the steep increase in the cumulative incidence function estimate in Figure \ref{fig-nascet-death-cif}(a) for the surgical arm.
Following this perioperative period the cumulative risk of stroke or stroke-death is lower in the surgical arm, so that ultimately there is
a 6-8\% absolute reduction in the risk of the composite event in the surgical arm by 8 years.
Figure \ref{fig-nascet-death-cif}(b) shows very similar cumulative incidence functions for non-stroke death in the two arms.
The intention-to-treat principle is normally justified on the basis that any interventions following randomization are part of routine care.
To help in understanding and communicating the relevance of trial findings, it is necessary to clearly describe what constitutes routine care.
For the NASCET study involving individuals deemed at moderate risk, the situation is more complicated.
We focus here on the moderate risk stratum of NASCET which was treated as a separate, parallel study to one involving
individuals designated as high risk due to having greater than 70\% stenosis of the carotid artery at the time of randomization.
The trial involving high risk patients started at the same time as the trial of moderate risk patients, but was stopped early when strong evidence
emerged at an interim analysis of a benefit of carotid endarterectomy -- this occurred during the conduct of the trial for the moderate risk patients.
A second point is that individuals at moderate risk at the time of randomization to medical care may have experienced a progression of their carotid stenosis
over the course of followup to the point that it exceeded 70\% -- this then qualified them as high risk patients.
Following the publication of trial results for high risk patients, the standard of care changed to include carotid endarterectomy.
As a result some patients randomized to medical care in the trial of moderate risk individuals may have experienced stroke when carotid endarterectomy
was not part of standard of care, whereas
others may have been randomized, progressed to qualify as high risk, and received carotid endarterectomy while stroke-free.
The risk of stroke following surgery in this latter group would be influenced by the surgical procedure.
The results of an intention-to-treat analysis are difficult to interpret when the standard of care changes during the course of a study -- a
more detailed analysis is warranted to understand risks and associated effects.
Figure \ref{fig-nascet-ms-death}(b) shows the multistate diagram for a more detailed characterization of the course following randomization.
States 1 and 2 again correspond to stroke or stroke-death (state 1) and non-stroke death (state 2) but state 3 is entered upon medical-surgical crossover for
individuals in the medical arm.
By orienting the states in this competing risk arrangement we note that state 1 is now entered only if the stroke or stroke-death occurs prior to surgical crossover,
and likewise for the non-stroke death.
Medical-surgical crossover can be followed by stroke/stroke-death or non-stroke death so states $1'$ and $2'$ represent these events.
Letting $\{Z^{\circ}(s), 0< s\}$ denote the multistate process in Figure \ref{fig-nascet-ms-death}(b), we note that the Aalen-Johansen estimate of the
transition probability matrix facilitates a decomposition of the cumulative incidence function estimates in Figures \ref{fig-nascet-death-cif}(a) and (b).
If $X=1$ for an individual randomized to receive surgery and $X=0$ if they are randomized to medical care, then we note that $P(Z^{\circ}(t)=1|X=0) + P(Z^{\circ}(t)=1'|X=0)$
is the cumulative incidence function for stroke/stroke-death estimated in Figure \ref{fig-nascet-death-cif}(a).
The separate Aalen-Johansen estimates of $P(Z^{\circ}(t)=1|X=0)$ and $P(Z^{\circ}(t)=1'|X=0)$ in Figure \ref{fig-nascet-death-cif}(c) show that the excess risk of
stroke/stroke-death in the medical arm evident in Figure \ref{fig-nascet-death-cif}(a)
is made up of risk for those not crossing over, and risk from those crossing over to receive surgery.
We cannot attribute this added risk to the surgery itself due to confounding by indication; we may expect an elevation in risk shortly after surgery and a reduction
in risk following a perioperative period, but a critical point is that those crossing over to receive surgery are selected in a dynamic way in response to their disease
course.
To explore this more fully we plot Nelson-Aalen estimates of the cumulative transition intensities in Figure \ref{fig-nascet-death-cumint} for the multistate models of
Figure \ref{fig-nascet-ms-death}.
The Nelson-Aalen estimates in Figures \ref{fig-nascet-death-cumint}(a) and (b) correspond to those of Figure \ref{fig-nascet-ms-death}(a) and
we again see evidence of the perioperative period of elevated risk following surgery, with similar cumulative intensities for non-stroke death.
The Nelson-Aalen estimates of the cumulative transition intensities in Figure \ref{fig-nascet-death-cumint}(c)
are for transitions in Figure \ref{fig-nascet-ms-death}(b).
Interestingly, the slope of the estimated
$0 \rightarrow 1$ cumulative intensity for the surgical arm and the
$3 \rightarrow 1'$ cumulative intensity for those in the medical arm following surgical-crossover are very similar;
the steepest cumulative intensity estimate is for the medical patients who did not crossover.
A similar decomposition is given in Figure
\ref{fig-nascet-death-cumint}(d) for the endpoint of non-stroke death.
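A minimal sketch of how such cause-specific cumulative intensity estimates can be obtained (in R, assuming the survival package and a hypothetical data frame dat with variables ftime, fstatus and X; exits to other states are treated as censoring for the transition of interest):
\begin{verbatim}
## Nelson-Aalen type estimate of the cumulative 0 -> 1 transition intensity by arm
library(survival)
fit01 <- survfit(Surv(ftime, fstatus == 1) ~ X, data = dat)
plot(fit01, fun = "cumhaz", lty = 1:2,
     xlab = "Years since randomization", ylab = "Cumulative 0 -> 1 intensity")
\end{verbatim}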
Intensity-based treatment comparisons could be based on regression modeling but the crossing cumulative intensities seen in Figure \ref{fig-nascet-death-cumint}(a)
mean that proportional cause-specific hazards models will not be suitable.
A detailed analysis would make use of a model allowing crossover between treatment arms and this would ideally incorporate information on the disease course including
blood pressure, cholesterol measurements and angiographic assessment of carotid stenosis.
Data on such variables are unfortunately not available so we do not explore this, but note that there
is much ongoing work on the assessment of randomized treatments when hazards or cumulative incidence functions cross, and when
treatment switching occurs.
\begin{figure}
\caption{Estimated cumulative incidence functions for stroke/stroke-death and non-stroke death based on the multistate models in Figure \ref{fig-nascet-ms-death}.}
\label{fig-nascet-death-cif}
\end{figure}
\begin{figure}
\caption{Nelson-Aalen estimates of the cumulative intensities for
selected transitions based on Figure \ref{fig-nascet-ms-death}.}
\label{fig-nascet-death-cumint}
\end{figure}
\subsection{Estimands for recurrent and terminal events based on marginal rate functions} \label{sec4.3}
Other disease processes involve non-fatal events that may occur repeatedly.
In cardiovascular research, for example, events such as myocardial infarction (MI), stroke and admission to hospital due to heart failure may each recur \citep{Schmidli2021, Toenges2021,Furberg2021}.
Competing terminal events such as death or loss to followup are also common.
The risk of recurrent skeletal complications in the bone metastases trial in Section \ref{bonemets-study} is, for instance, terminated by death; see Figure \ref{sre}.
Section 6.4 of \cite{Cook2007} and \cite{Cook2009} give a thorough treatment of such processes involving recurrent and terminal events;
for a recent review of methods in the context of randomized trials see \cite{Furberg2021}, \cite{Toenges2021} and \cite{Mao2021b}.
In such settings, marginal estimands based on cumulative counts of recurrent non-fatal events, or on counts that combine fatal and non-fatal events, are widely used. In connection with the LEADER (Liraglutide Effect and Action in Diabetes: Evaluation of Cardiovascular Outcome Results) trial,
\cite{Furberg2021} consider a recurrent composite event with components consisting of non-fatal stroke, non-fatal MI or cardiovascular (CV) death.
Such a composite recurrent event is motivated by a desire to synthesize information about treatment effects across multiple types of events, but
as with any composite endpoint clear interpretation requires modeling treatment effects on the component event types.
{
If primary interest lies in the effect of treatment on non-fatal recurrent events, the proportional means model of \cite{Ghosh2002} can be used.
It is based on the marginal rate function for the non-fatal event, recognizing that further non-fatal events
cannot occur after a terminal event; it is analogous to a cumulative incidence function in this way, but accommodates recurrent events.
The ratio of rate or mean functions satisfies the principles for estimands given in Section \ref{sec2.1} and has a descriptive causal interpretation.
\cite{Mao2016} proposed a multiplicative model based on the rate function and corresponding mean function for the composite counting process for fatal and non-fatal events combined.
Estimands based on the ratio of such mean or rate functions also satisfy our principles of Section \ref{sec2.1} and have a descriptive causal interpretation.
By including terminal events in the composite counting process, the Mao-Lin approach may be suited to settings where treatment is needed to avoid both serious non-fatal events and terminal events.
There are however the usual caveats:
(a) intensity-based models should be fitted in secondary analyses to gain understanding of the mechanisms
leading to any observed treatment effect; Cox models and other regression models for terminal event intensities are useful in understanding connections between non-fatal and fatal events, and
(b) the proportionality of rate or mean functions inherent to the Ghosh-Lin and Mao-Lin models should be checked. In applications and ongoing numerical investigations we have found that both the Ghosh-Lin and Mao-Lin models provide a reasonable summary of
treatment effects in a range of settings \citep{buhler22}.
}
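To make the marginal rate and mean functions discussed above concrete, the following is a minimal numerical sketch, in the spirit of the Ghosh-Lin formulation, of a nonparametric estimate of the marginal mean number of non-fatal events in the presence of a terminal event; the data structure, function names and the simple treatment of ties are illustrative assumptions rather than features of any particular software implementation.
\begin{verbatim}
import numpy as np

# Sketch of a nonparametric marginal mean function for recurrent events with a
# terminal event: mu_hat(t) = sum_{u <= t} S_hat(u-) dR_hat(u), where S_hat is the
# Kaplan-Meier estimate of freedom from the terminal event and dR_hat(u) is the
# observed recurrent-event rate among subjects still alive and uncensored at u.

def km_surv(follow_up, died, t):
    """Kaplan-Meier estimate of P(no terminal event by time t)."""
    s = 1.0
    for u in np.unique(follow_up[(died == 1) & (follow_up <= t)]):
        at_risk = np.sum(follow_up >= u)
        deaths = np.sum((follow_up == u) & (died == 1))
        s *= 1.0 - deaths / at_risk
    return s

def marginal_mean(rec_times, follow_up, died, t):
    """Estimated marginal mean number of non-fatal events by time t."""
    obs = np.concatenate([np.asarray(r)[np.asarray(r) <= fu]
                          for r, fu in zip(rec_times, follow_up)])
    mu = 0.0
    for u in np.unique(obs[obs <= t]):
        at_risk = np.sum(follow_up >= u)      # alive and uncensored at u
        d_events = np.sum(obs == u)           # recurrent events recorded at u
        mu += km_surv(follow_up, died, u - 1e-9) * d_events / at_risk
    return mu

# Toy data: recurrent event times, end of follow-up, terminal event indicator.
rec_times = [[0.5, 1.2], [0.8], [], [0.3, 0.9, 1.5]]
follow_up = np.array([2.0, 1.0, 1.8, 1.6])
died = np.array([1, 0, 0, 1])
print(round(marginal_mean(rec_times, follow_up, died, 1.5), 3))
\end{verbatim}
The ratio of such estimates between treatment arms is the kind of marginal summary discussed above.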
\section{Additional remarks} \label{sec5}
\subsection{Further remarks on potential outcomes} \label{sec5.1}
{
Potential outcomes have played a central role in the development of causal inference theory and methods over the last several
decades with their origins in cross-sectional or simple longitudinal settings \citep{rubin2005, hernan2020}.
The potential outcome framework
conceptualizes the existence of outcomes for each treatment under consideration for each individual.
In a two treatment trial this leads to a pair of potential outcomes which could, conceptually, be compared for each individual in the trial.
In trials where each individual receives just one treatment, the pairs of
potential outcomes and ``causal'' measures such as their difference are thus not observable in the real world.
The potential outcome corresponding to the treatment received is revealed, while the other is counterfactual.
Some find potential outcomes useful for conceptualizing independence conditions, and this is reasonable in settings where one could,
both in theory and in practice, randomize individuals to one of two treatments under study \citep{hernan2020}.
We note that thinking in terms of potential outcomes is increasingly common in connection with clinical practice, but in our view this does not align with how clinicians think in most settings. For example,
\cite{bornkamp2021} say
``Assume a treating physician is deciding on a treatment to prescribe.
Ideally she would make that decision based on knowledge on what the outcome for the patient would be if given the control treatment, $\ldots$, and what the outcome
would be under test treatments$\ldots$''. The term ``ideally'' notwithstanding, our view is that physicians think and counsel patients not in terms of fixed potential outcomes under alternative
treatment options but in terms of probabilities that are
based on data from trials and observational studies. For example, a physician might inform a breast cancer patient following surgery that the (estimated) probability of a recurrence during the next five years is $0.10$ with no adjuvant hormone therapy and $0.08$ with adjuvant therapy.
We believe that the counterfactual framework can also lead to specification of target estimands of dubious scientific relevance when used in conjunction with the concept of principal strata.
Consider the problem of assessing the effect of a new intervention versus standard care on an outcome that can only be measured in individuals
who are alive; an example is the measurement of neonatal outcomes among infants born in studies of different techniques for in vitro
fertilization \citep{snowden2020}.
\cite{rubin2006} proposed estimands based on women who would experience a live birth under either of two treatments under study; this is referred to as the survivor average causal effect, with those surviving under either treatment comprising one of the four so-called ``principal strata''.
Aside from concerns about counterfactual outcomes not being observable in the real world, such subgroups are not identifiable (observable) from the available data. Such issues have been discussed by others.
As noted by \cite{lipkovich2022} for example, ``a major challenge of using principal stratification $\ldots$ is its counterfactual nature
that requires strong assumptions to be able to identify and estimate treatment effect within principal strata''.
Moreover, as noted by \cite{scharfstein2019}, the survivor average causal effect does not convey the effect of treatment on
all of those randomized, and the subset of the population in the principal stratum of interest may be small.
Issues of competing risks also arise in settings with intercurrent events \citep{iche9-2017}
such as early withdrawal,
introduction of rescue medication,
crossover to the complementary arm, and death.
\cite{hernan2018} provide a thoughtful discussion of the ICH Addendum and cautionary remarks about potential outcomes and principal strata.
Finally, \cite{Andersen2012.1} caution against conditioning on the future in the analysis of life history processes if interest
lies in maintaining a clear real-world interpretation of estimands; the survivor average causal effect violates this tenet.
Preferable strategies for dealing with intercurrent events include ignoring their occurrence in an intention-to-treat analysis
targeting the effect of a ``treatment policy'' (i.e.\ the effect of prescribing one treatment versus the other at the time of study entry).
The intercurrent event can also be incorporated into a composite endpoint, which can make sense if the event of interest and the intercurrent event
are both undesirable.
Disease processes and patient management are by nature dynamic, and
time plays a central role. \cite{aalen2012-2} among others have highlighted the importance of time, and of modeling data in ways that recognize
the evolutionary nature of disease processes. This way of thinking is aligned with the principle of ``staying in the real world'' but, as we have discussed, complicates the identification of relevant marginal features which can deliver causal inferences based on randomization to treatment.
The dynamic nature of processes also makes the notion of pre-existing counterfactual outcomes artificial; probabilistic models are needed for real world processes.
}
\subsection{The utility of utilities in defining causal estimands} \label{sec5.2}
In pharmaceutical research a considerable amount of discussion typically takes place between companies and regulators regarding study protocols, with much of it related to the choice of primary endpoint, specification of the estimand, and the associated analyses. As we have discussed, it can be challenging to define estimands when individuals in a clinical trial are at risk of different types of clinically important events, some of which may be recurrent.
Evidence must be synthesized to make a decision about the relative value of an experimental treatment, but
initial discussions usually aim to set a protocol and analysis plan wherein the criteria for trial success and the path to regulatory approval for a treatment is clear. Trial planning is facilitated by considering a marginal estimand for which observed treatment effects can be given a causal interpretation.
We have stressed that secondary, possibly intensity-based analyses provide a fuller understanding of the response to treatment. Conclusions about a treatment, and its approval, should be based on the totality of evidence based on the primary analysis and important secondary analyses.
Various approaches for combining information across multiple event types have been proposed. Compound events composed of two or more types are common; this was discussed in Section \ref{sec4.3} for the case of recurrent non-fatal and fatal events. A recently proposed approach with complex processes is the win ratio \citep{pocock2012}.
It involves a rule by which two process outcomes (or ``paths'') can be compared, with either one path being adjudged superior (the ``winner'') or them being declared tied.
With such a rule we can consider all possible pairs $(i,j)$ of subjects, where $i$ is from one treatment group and $j$ from the other, and declare either a winner or a tie.
We can then obtain the proportion of (non-tied) pairs where the winner was in the experimental arm; the win ratio is the ratio of this proportion and its complement.
The rule for ranking process outcomes or paths depends on the context. In an illness-death process, for example, if one subject dies after the other, they are the winner; if neither subject dies then the one who enters the non-fatal event (illness) state later is the winner. Subjects who are both in the healthy state at the administrative end of followup are treated as tied. We will return briefly to this setting at the end of this section.
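As an illustration of this pairwise rule, the following hedged sketch computes a win ratio for an illness-death setting with a common administrative censoring time $C$; the function names, the handling of exact ties, and the toy data are assumptions made purely for illustration.
\begin{verbatim}
import numpy as np

# Illustrative win-ratio sketch for an illness-death process with a common
# administrative censoring time C. For a pair (i from experimental arm, j from
# control arm): if at least one subject is observed to die before C, the one
# dying later (or surviving) wins; otherwise the one entering the illness state
# later (or not at all) wins; pairs with no distinguishing event are tied.

def compare(ill_i, die_i, ill_j, die_j, C):
    """Return +1 if i wins, -1 if j wins, 0 if tied."""
    di, dj = min(die_i, C), min(die_j, C)
    if die_i < C or die_j < C:               # a death is observed before C
        return 0 if di == dj else (1 if di > dj else -1)
    ii, ij = min(ill_i, C), min(ill_j, C)    # neither dies before C
    return 0 if ii == ij else (1 if ii > ij else -1)

def win_ratio(exp_arm, ctl_arm, C):
    wins = losses = 0
    for (ill_i, die_i) in exp_arm:
        for (ill_j, die_j) in ctl_arm:
            c = compare(ill_i, die_i, ill_j, die_j, C)
            wins += (c == 1)
            losses += (c == -1)
    return wins / losses

# Toy data: (illness time, death time), np.inf if the event does not occur.
exp_arm = [(np.inf, np.inf), (2.5, np.inf), (1.0, 4.0)]
ctl_arm = [(0.8, 3.0), (np.inf, np.inf), (1.5, 2.0)]
print(round(win_ratio(exp_arm, ctl_arm, C=5.0), 2))
\end{verbatim}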
Stratified versions of the win ratio and other extensions have also been
developed \citep{luo2015, oakes2016, mao2019, mao2021}.
An alternative approach in complex disease processes is to use utilities. This involves defining a numerical utility score associated with events or with time spent in specific states, and then computing the total utility score for each subject across the duration of the trial. This approach has the advantage of transparency and it produces marginal features and estimands that admit descriptive causal interpretations. A disadvantage is the need to specify utility scores; there may be a lack of consensus.
We note that many other approaches implicitly involve utilities or assumptions about relative utility scores. This is true of win ratios as well as composite endpoints.
In a composite endpoint, for example, the component events are implicitly of equal importance if a failure event is deemed to have taken place when any one of the component
events occurs.
\begin{figure}
\caption{A four-state multistate model distinguishing the phases of toxicity from chemotherapy, the time without toxicity or symptoms, relapse and death, for computation
of the Q-TWiST measure of \cite{Gelber1989}.
\label{QoL}}
\end{figure}
Utilities have frequently been used to address quality of life outcomes. An example was in a randomized clinical trial of adjuvant chemotherapy for breast cancer patients conducted by the
International Breast Cancer Study Group (IBCSG) \citep{LBCSG1988}.
A total of 1,229 patients were randomized to receive either short duration ($n=413$) or long duration ($n=816$) chemotherapy and followed for a median of about
seven years.
Patients experience toxicity during chemotherapy but intervals during which they are toxicity-free and relapse-free typically follow the completion of chemotherapy;
patients may of course relapse and die following such an interval.
To assess the impact of short versus long duration chemotherapy, overall survival analyses can be conducted.
The therapeutic issue under investigation here is, however, whether the greater anti-tumour effect of longer duration chemotherapy outweighs the protracted
periods of toxicity-related symptoms and poorer quality of life. To explore the impact of treatment on the sojourn times in the different stages of response, \cite{Gelber1989} proposed partitioning the survival time into different stages according to a four-state progressive model where
state 1 is occupied during the period of toxicity due to chemotherapy,
state 2 is occupied following chemotherapy when individuals are toxicity-free and symptom-free, state 3 is entered when symptoms reemerge upon relapse, and
state 4 is the absorbing state entered upon death; see Figure \ref{QoL}.
The challenge in analysis of such data is to summarize the effect of treatment on the sojourn times in the different states or the state entry time distributions
in a robust way that facilitates simple comparisons between the treatment arms.
In \cite{Gelber1989}, and a series of papers that followed by Gelber and colleagues,
this was achieved by assigning utilities to each day in each state and accumulating this over time.
The summary measure was called the Quality-adjusted Time Without Symptoms of disease and Toxicity to treatment, abbreviated as Q-TWiST \citep{Gelber1989}.
In such a utility-based approach we specify a function $U(Z(t), t)$ which is the utility value or score (for example a cost or quality of life score) for an individual who is in state $Z(t)$ at time $t$. The cumulative utility score up to time $\tau$ is $U(\tau)= \int_{0}^{\tau} U(Z(t),t)dt$. Often it is assumed that $U(k, t) = U_{k}$ is a fixed
utility rate for state $k$. In this case the cumulative utility score reduces to $U(\tau) = \sum_{k \in \mathcal{S}} \int_{0}^{\tau} U_{k} I(Z(t)=k)dt$, with expectation
\begin{equation}
\label{TotalUtilityScore}
\mu(\tau \ | \ X) = \mathbb{E}(U(\tau) \ | \ X) = \sum_{k \in \mathcal{S}} \int_{0}^{\tau} U_{k} P(Z(t)=k \ | \ X)dt.
\end{equation}
An estimand can be defined as $\beta(\tau) = \mu(\tau \ | \ X=1) - \mu(\tau \ | \ X=0)$ or alternatively as a ratio, and can be given a descriptive causal interpretation
because $U(\tau)$ is a marginal feature. A straightforward approach is to use nonparametric Aalen-Johansen estimates of state occupancy probabilities $P(Z(t)=k|X)$ for each treatment group to give robust estimates for $\mu(\tau|X=1)-\mu(\tau|X=0)$ or for analogous ratios, with variance estimates based on a nonparametric bootstrap. \cite{Cook2003} and \citet[Section 4.2]{Cook2018} illustrate this approach.
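The following minimal sketch illustrates the calculation in $(\ref{TotalUtilityScore})$ from estimated state occupancy probabilities; the utility rates, the time grid and the occupancy curves (which in practice would be Aalen-Johansen estimates computed separately in each arm) are purely illustrative assumptions.
\begin{verbatim}
import numpy as np

# Sketch of mu(tau | X) = sum_k U_k int_0^tau P(Z(t)=k | X) dt via trapezoidal
# integration over a time grid; the occupancy curves stand in for Aalen-Johansen
# estimates and the utility rates are illustrative.
def mean_utility(grid, occupancy, utility_rates, tau):
    keep = grid <= tau
    return sum(u_k * np.trapz(occupancy[k][keep], grid[keep])
               for k, u_k in utility_rates.items())

grid = np.linspace(0.0, 5.0, 501)
utility_rates = {1: 0.5, 2: 1.0, 3: 0.5, 4: 0.0}   # toxicity, TWiST, relapse, death

# Illustrative occupancy probabilities for one arm (they sum to 1 at every t).
occ = {1: np.exp(-2.0 * grid),
       2: np.exp(-0.3 * grid) - np.exp(-2.0 * grid),
       3: 0.6 * (np.exp(-0.1 * grid) - np.exp(-0.3 * grid))}
occ[4] = 1.0 - occ[1] - occ[2] - occ[3]

mu_hat = mean_utility(grid, occ, utility_rates, tau=5.0)
print("estimated mean cumulative utility:", round(mu_hat, 3))
\end{verbatim}
Repeating the calculation with the occupancy estimates from the other arm and differencing (or taking a ratio) gives an estimate of the contrast defined above.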
In spite of the challenge of specifying the utility functions \citep{Thomas1984, Torrance1986, Torrance1987},
this approach is attractive in balancing and combining multiple outcomes. Utility-based secondary analysis can in addition be useful when considering the implications of treatment assignment.
{
We reiterate that the utility formulation $(\ref{TotalUtilityScore})$ covers features based on times to event $T_E$ as special cases where utility rates are either $0$ or $1$ for each state. The restricted mean time to failure $RMT(\tau|X)$ is, for instance, given by (\ref{TotalUtilityScore}) with $U_0=1$, $U_1=U_2=0$. The cumulative utility framework also includes the win ratio method as a special case. In an illness-death process, for example, if $T_{1A}$ and $T_{2A}$ are the times of entry to states $1$ and $2$ for path A, and $T_{1B}$ and $T_{2B}$ are the respective entry times for path B,
then one rule would be to declare that path A is preferable (i.e.\ ``wins'') if and only if either
$T_{2B} < \min(T_{2A}, C)$, or
$C < \min(T_{2A}, T_{2B})$ and $T_{1B}<\min(T_{1A},C)$, where $C$ is a common censoring time. If we assign utility rates $U_0=1$ and $U_1= 1-\epsilon$ to states 0 and 1, with $\epsilon$ a small positive number, then the cumulative utility score up to time $C$ is $U(C)=W_0 + (1-\epsilon)W_1$, where $W_j$ is the total time spent in state $j \in \{0, 1\}$ over $[0, C]$. As $\epsilon$ approaches $0$, the probability that $U_A(C)>U_B(C)$ approximates the probability that A ``wins''.
The utility-based approach is more flexible and tractable, however, and can handle problems such as variable and dependent censoring times $C$.
}
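A small Monte Carlo check of the preceding observation can be carried out as follows; the illness-death intensities, the censoring time $C$ and the value of $\epsilon$ are illustrative assumptions, and the progressive model used here forces illness to precede death.
\begin{verbatim}
import numpy as np

# Monte Carlo check of the utility/win-ratio connection: with U_0 = 1, U_1 = 1 - eps,
# U(C) = W_0 + (1 - eps) W_1, and for small eps the comparison U_A(C) > U_B(C)
# reproduces the pairwise "win" rule above. Illness and death times are drawn from
# an illustrative exponential illness-death model; all parameter values are assumed.
rng = np.random.default_rng(1)
C, eps, n = 5.0, 1e-6, 10000

def sample_path():
    t1 = rng.exponential(1 / 0.3)            # time of illness (0 -> 1)
    t2 = t1 + rng.exponential(1 / 0.5)       # time of death after illness (1 -> 2)
    w0 = min(t1, C)                          # time spent in state 0 before C
    w1 = max(min(t2, C) - min(t1, C), 0.0)   # time spent in state 1 before C
    return t1, t2, w0, w1

agree = 0
for _ in range(n):
    a, b = sample_path(), sample_path()
    util_win = a[2] + (1 - eps) * a[3] > b[2] + (1 - eps) * b[3]
    rule_win = (b[1] < min(a[1], C)) or (C < min(a[1], b[1]) and b[0] < min(a[0], C))
    agree += (util_win == rule_win)
print("agreement between utility comparison and win rule:", agree / n)
\end{verbatim}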
\section{Discussion} \label{sec6}
Randomized controlled trials that address disease processes are challenging to plan, conduct and analyze. In the simplest situations an intervention is designed to
target a specific adverse event, but in many settings it might also affect other events.
In this case there may be alternative choices for the primary event or outcome used for treatment comparisons. In addition, other disease-related events may prevent
or interfere with the observation of the primary outcome, or necessitate a change in treatment.
A main message of this paper is the importance of formulating comprehensive models that incorporate features of the disease process as well as intercurrent events
related to followup and interventions.
Such models, in conjunction with medical information, aid in the formulation of protocols and specification of estimands.
In addition, they are crucial for secondary intensity-based analysis which provides a deeper understanding of treatment effects and interpretation of estimands
based on marginal process features.
{
The challenge of dealing with intercurrent events can sometimes be lessened by good study design and execution -- for example if one can follow individuals to an
administrative censoring time then issues of loss to followup are avoided.
When the standard of care can be defined unambiguously, problems in the interpretation of study findings might be avoided, but in international multicenter trials
investigators may view variation in the standard of care as good for generalizability.
In such cases study findings are interpretable in a population with a similar mix of standard-of-care strategies, which may be difficult to use in a given setting.
}
Reference is often made to robustness and sensitivity analysis in conjunction with estimands and their estimation.
Our view is that it is the model assumptions and probability limit $\beta^*$ of the estimator for an estimand $\beta$ that are crucial.
For causal interpretation of a marginal model-based treatment effect the model should be consistent with the observed data. Models naturally just approximate reality and we prefer,
as in some of our numerical illustrations, to consider $\beta^*$ as the de facto estimand; it can be interpreted as the parameter value in the best
approximating (or least false) model (see \citealp{Grambauer2010}).
In this framework we would apply the term \textit{robust} to a family of marginal models and an estimand whose value $\beta^*$ does not change much under certain changes
in the intensities for the disease or followup process.
Numerical studies such as those provided here can be viewed as sensitivity analyses concerning robustness.
In this sense, the $cloglog$ transformation model for the cumulative incidence of a non-fatal event in competition with a fatal event is quite robust across a
range of plausible scenarios, whereas the $log$ transformation model is less robust.
Another aspect that is sometimes considered as a form of robustness, but is slightly different, is whether a model is collapsible with respect to the
effect of treatment when other covariates $V$ affecting treatment are removed (e.g. \citealp{Aalen2015}).
This means that the treatment effect has the same interpretation in models with or without the inclusion of $V$.
Aside from certain types of linear models, most models are not collapsible. Proportional hazards and other relative risk models are in particular generally non-collapsible.
We note in the same vein that causal interpretation of a marginal treatment effect does not rely on the ``correctness'' of the assumed model.
It requires only that there is a well defined limiting estimand $\beta^*$ which is marginal in the sense that it is based on a marginal feature that involves no
conditioning on events after randomization to treatment.
An understanding of the treatment effect does however depend on model adequacy, as well as its source in the effects of treatment on process intensities.
For disease processes associated with multiple types of events there may be no optimal primary outcome. For example, in the LEADER trial the
effects of treatment on cardiovascular events such as stroke, MI and CV death were of interest \citep{Furberg2021}.
One approach to specification of a primary estimand is to consider a composite outcome, for example, the time to a first stroke, MI or CV death or the cumulative
number of events of all types.
Another is to use a cumulative utility score that recognizes the burden of various events or states.
We encourage wider consideration of this approach, including discussion and research on utility scoring systems.
Cumulative utilities can accommodate intercurrent events, besides addressing disease burden and quality of life. This is facilitated by including states that deal with disease severity and effects of treatment in a model. In an oncology trial involving alternative chemotherapies, we could for example split a relapse-free state into two states - one occupied while a subject is undergoing chemotherapy and one occupied once the toxic effects of the therapy have subsided \citep{Cook2003}. The first of the two states would have a lower utility (quality of life) score than the second. In a cardiovascular trial, the severity and after-effects of strokes or MIs could likewise be represented with multiple states, as opposed to simply recognizing the occurrence of the event.
Tests for treatment effects are a crucial aspect of randomized trials and inform decisions concerning sample sizes and trial duration. A thorough discussion of testing is beyond our present scope. Our position is that tests should normally be related to estimands, and based on Wald or score statistics that employ robust variance estimates. This provides validity when the assumed model is misspecified to some degree, but in saying this we point out that the null hypothesis being tested is $H_0: \{ \beta^*=0 \}$. A stronger null hypothesis is that treatment does not affect the disease process but it is usually unclear how it could be efficiently tested. Moreover, in trials where intercurrent events or multiple disease-related events are common, it would often be viewed as an unrealistic hypothesis.
\appendix
\section{Limiting values of estimators under model misspecification \label{app1}}
\subsection{Impact of covariate omission in Cox intensity-based model}
\label{app-Cox}
To derive the limiting value $\phi_1^{*}$ given in Section \ref{sec2.3} we let $\lambda_{01}(t | X) = \tilde{\lambda}_{1}\exp(\phi_1 X)$ be the misspecified intensity model for the non-fatal event. The score equation under the Cox model is
$$ U(\phi_1) = \sum_{i=1}^{n} \int_{0}^{\tau} \overline{Y}_{i}(t) \biggl \{ X_i - \dfrac{S^{(1)}(t, \phi_1)}{S^{(0)}(t, \phi_1)} \biggr \} dN_{i1}(t), $$
where $\overline{Y}_{i}(t) = Y_{i0}(t)I(\min(C, R_i) \geq t), dN_{i1}(t) = I(T_{i1} = t)$ and $S^{(r)}(t, \phi_1) = \sum_{i=1}^{n} \overline{Y}_{i}(t) X_{i}^r \exp(\phi_1 X_i)$ for $r=0,1$. The limiting value $\phi_1^{*}$ is the solution to the equation $\mathbb{E}(U(\phi_1))=0$, where
$$\mathbb{E}(U(\phi_1)) = \int_{0}^{\tau} \biggl \{ \mathbb{E} \biggl( \sum_{i=1}^{n} \overline{Y}_{i}(t) X_i dN_{i1}(t) \biggr) - \dfrac{\mathbb{E}(S^{(1)}(t, \phi_1))}{\mathbb{E}(S^{(0)}(t, \phi_1))} \mathbb{E} \biggl( \sum_{i=1}^{n} \overline{Y}_{i}(t) dN_{i1}(t) \biggr) \biggr \}, $$
and where expectations are taken with respect to the true illness-death, censoring and covariate processes. Since $\{Z(t), t>0 \}$ is independent of $R$ given $X$ and $V$, $P(R > t |X, V)=P(R > t)$ and $X \perp V$ due to randomization, the expectations needed to compute this are
\begin{align*}
& \mathbb{E}(S^{(1)}(t, \phi_1)) = nP(R > t) P(X=1) \exp(\phi_1) \sum_{v \in \{0, 1\}} P(V=v) P(Z(t^-)=0 | X=1, V=v) \\
&\mathbb{E}(S^{(0)}(t, \phi_1)) = nP(R > t) \sum_{x \in \{0, 1\}} \sum_{v \in \{0, 1\}} \exp(\phi_1 x) P(X=x) P(V=v) P(Z(t^-)=0 | X=x, V=v)\\
&\mathbb{E} \biggl( \sum_{i=1}^{n} \overline{Y}_{i}(t) X_i dN_{i1}(t) \biggr) = nP(R > t)P(X=1) \cdot \\
& \qquad \qquad \biggl[ \ \sum_{v \in \{0, 1\}} P(V=v) \lambda_{01}(t | X=1, V=v) P(Z(t^-)=0 | X=1, V=v) \ \biggr] \\
&\mathbb{E} \biggl( \sum_{i=1}^{n} \overline{Y}_{i}(t) dN_{i1}(t) \biggr) = nP(R > t) \cdot \\
& \qquad \qquad \biggl[ \ \sum_{x \in \{0, 1\}}\sum_{v \in \{0, 1\}} P(X=x) P(V=v) \lambda_{01}(t | X=x, V=v) P(Z(t^-)=0 | X=x, V=v)\ \biggr].
\end{align*}
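For concreteness, the limiting value $\phi_1^{*}$ can be computed numerically once true intensities are specified. The sketch below does so under an assumed time-homogeneous Markov illness-death model with no random loss to followup ($P(R>t)=1$ on $[0,\tau]$), so that $P(Z(t^-)=0|X,V)=\exp\{-(\lambda_{01}+\lambda_{02})t\}$; the intensity values, covariate distributions and horizon $\tau$ are hypothetical choices for illustration only.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical true cause-specific intensities (time-homogeneous; illustration only)
lam01 = lambda x, v: 0.10 * np.exp(-0.5 * x + 0.8 * v)   # 0 -> 1 (non-fatal event)
lam02 = lambda x, v: 0.05 * np.exp(-0.2 * x + 0.5 * v)   # 0 -> 2 (death without event)
pX1, pV1, tau = 0.5, 0.5, 5.0                             # P(X=1), P(V=1), horizon

def p0(t, x, v):
    # P(Z(t-) = 0 | X=x, V=v) under the assumed Markov model
    return np.exp(-(lam01(x, v) + lam02(x, v)) * t)

def EU(phi1):
    # E(U(phi1)) from the expressions above, with P(R > t) = 1 so it cancels
    px = {0: 1 - pX1, 1: pX1}
    pv = {0: 1 - pV1, 1: pV1}
    def integrand(t):
        s1 = px[1] * np.exp(phi1) * sum(pv[v] * p0(t, 1, v) for v in (0, 1))
        s0 = sum(px[x] * np.exp(phi1 * x) * pv[v] * p0(t, x, v)
                 for x in (0, 1) for v in (0, 1))
        a = px[1] * sum(pv[v] * lam01(1, v) * p0(t, 1, v) for v in (0, 1))
        b = sum(px[x] * pv[v] * lam01(x, v) * p0(t, x, v)
                for x in (0, 1) for v in (0, 1))
        return a - (s1 / s0) * b
    return quad(integrand, 0.0, tau)[0]

phi1_star = brentq(EU, -5.0, 5.0)   # limiting value of the misspecified Cox estimator
print("phi1* =", round(phi1_star, 4))
\end{verbatim}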
\subsection{Limiting value of the Fine-Gray estimator under misspecification}
\label{appendix-FG}
The inverse probability of censoring weighted estimating equation for $\beta$ under the Fine and Gray estimation method is
\begin{align}
\label{score-FG}
U(\beta^{FG}) = \sum_{i=1}^{n} \int_{0}^{\infty} w_{i}(t)Y_{i}^{\dagger}(t)\biggl[ X_{i} - \dfrac{S^{(1)}(t, \beta^{FG})}{S^{(0)}(t, \beta^{FG})} \biggr] dN_{i1}(t),
\end{align}
where $Y_{i}^{\dagger}(t)=I(T_{1i} \geq t), dN_{1i}(t)=I(T_{1i}=t), w_i(t)= I(C_{i} \geq \min(T_{i}, t))/G_{i}(\min(T_{i}, t))$, and $S^{(r)}(t, \beta^{FG}) = \sum_{i=1}^{n} w_{i}(t) Y_{i}^{\dagger}(t) X_{i}^{r}\exp(\beta^{FG} X_{i})$ for $r=0,1$ \citep[Section 4.1]{Cook2018}. Here $C_i=\min(C, R_i)$ is the net censoring time for subject $i=1,2,...,n$ with $C$ the administrative censoring time and $R_i$ the random censoring time. Setting (\ref{score-FG}) equal to zero and solving gives $\widehat{\beta}^{FG}$ following estimation of the censoring distribution $G_i(t)=P(R_i > t)$. The estimand $\beta^{FG*}$ is the limiting value solving
\begin{align}
\label{limiting-value-FG}
\int_{0}^{\infty} \biggl\{ \mathbb{E}\biggl(\sum_{i=1}^{n} w_{i}(t) Y_{i}^{\dagger}(t) X_{i} dN_{1i}(t) \biggr) - \dfrac{s^{(1)}(t, \beta^{FG})}{s^{(0)}(t, \beta^{FG})} \mathbb{E}\biggl( \sum_{i=1}^{n} w_{i}(t) Y_{i}^{\dagger}(t) dN_{1i}(t) \biggr) \biggr\} = 0,
\end{align}
where $s^{(r)}(t, \beta^{FG}) = \mathbb{E}(S^{(r)}(t, \beta^{FG}))$ for $r=0,1$ and where the expectations are taken with respect to the true competing risks, censoring and covariate processes. The stabilized Fine-Gray estimator $\widehat{\beta}^{FG}_{stab}$ and the corresponding estimand $\beta^{FG*}_{stab}$ in Section \ref{sec3.2} can be found as the solutions to (\ref{score-FG}) and (\ref{limiting-value-FG}) with $w_{i}(t)$ replaced by $I(C_{i} \geq \min(T_{i}, t))G_{i}(t)/G_{i}(\min(T_{i}, t))$, respectively \citep[Section 4]{Fine1999}. If the censoring model is correctly specified and thus $\widehat{G}_{i}(t) \longrightarrow G_{i}(t)$, the limiting value $\beta^{FG*}$ does not depend on the censoring distribution. The limiting value $\beta^{FG*}_{stab}$, on the other hand, varies with the censoring distribution due to weight stabilization. The Fine-Gray approach with weight stabilization is implemented in the R-function crr.
\subsection{Limiting value of the direct binomial estimator under misspecification}
\label{appendix-DB}
Using the same notation as in Section \ref{appendix-FG} and assuming estimation is conducted at several times $s_1 < s_2 < ... < s_R$ over the time horizon $[0, C]$, the estimators $\widehat{\theta}=(\widehat{\alpha}_0(s_1),..., \widehat{\alpha}_0(s_R),\widehat{\beta}^{DB})$ are the solution to the generalized estimating equation \citep{Scheike2008}
\begin{align}
\label{limiting-value-DB}
U^{DB}(\theta) = \sum_{i=1}^{n}
\begin{pmatrix}
U_{i}(\alpha_{0}(s_1)) \\
... \\
U_{i}(\alpha_{0}(s_R)) \\
U_{i}(\beta^{DB}) \\
\end{pmatrix}
= \sum_{i=1}^{n} D_{i} V^{-1}_{i} \Bigg[
\begin{pmatrix}
\widetilde{N}_{1i}(s_1) \\
... \\
\widetilde{N}_{1i}(s_R) \\
\end{pmatrix}
-
\begin{pmatrix}
F_{1}(s_1 | X_{i}) \\
... \\
F_{1}(s_R | X_{i}) \\
\end{pmatrix} \Bigg] =
\begin{pmatrix}
0 \\
... \\
0 \\
0 \\
\end{pmatrix} \; ,
\end{align}
where $\theta=(\alpha_{0}(s_1), ..., \alpha_{0}(s_R), \beta^{DB})$ is an $(R+1)$-dimensional vector of unknown regression coefficients, $\widetilde{N}_{1i}(t) = I(C_{i} \geq \min(T_{i}, t))N_{1i}(t) / G_{i}(\min(T_i, t))$ is a weighted response, $F_{1}(s_r | X_{i}) = g^{-1}(\alpha_{0}(s_r) + \beta^{DB} X_{i})$ is the cumulative incidence function under the cloglog transformation model at time point $s_r$ ($r=1,...,R$), $g(u)=\log(-\log(1-u))$ is the link function with $g^{-1}(u)=h(u)=1-\exp(-\exp(u))$, $V_{i}=\text{diag} \bigl( \{ F_1(s_r |X_{i})(1-F_{1}(s_r|X_{i})) \}, \ r=1,...,R\bigr)$ is an $(R \times R)$ working independence covariance matrix, and $D_{i}$ is an $((R+1) \times R)$ matrix of derivatives of the form
\begin{align*}
\begin{pmatrix}
\dfrac{\partial F_{1}(s_1|X_i)}{\partial \alpha_{0}(s_1)} & 0 & \ldots & \ldots & 0 & 0\\
0 & \dfrac{\partial F_{1}(s_2|X_i)}{\partial \alpha_{0}(s_2)} & 0 & \ldots & 0 & 0\\
\vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\
0 & \ldots & \ldots & \ldots & \dfrac{\partial F_{1}(s_{R-1}|X_i)}{\partial \alpha_{0}(s_{R-1})} & 0 \\
0 & \ldots & \ldots & \ldots & 0 & \dfrac{\partial F_{1}(s_R|X_i)}{\partial \alpha_{0}(s_{R})}\\
\dfrac{\partial F_{1}(s_1|X_i)}{\partial \beta^{DB}} & \dfrac{\partial F_{1}(s_2|X_i)}{\partial \beta^{DB}} & \ldots & \ldots & \dfrac{\partial F_{1}(s_{R-1}|X_i)}{\partial \beta^{DB}} & \dfrac{\partial F_{1}(s_R|X_i)}{\partial \beta^{DB}} \\
\end{pmatrix} \; ,
\end{align*}
with $\dfrac{\partial F_{1}(s_r|X_i)}{\partial \alpha_{0}(s_r)}=h'(\alpha_{0}(s_r)+\beta^{DB} X_{i})$ and $\dfrac{\partial F_{1}(s_r|X_i)}{\partial \beta^{DB}}=h'(\alpha_{0}(s_r)+\beta^{DB} X_{i})X_i$. This estimation procedure also requires modeling of the censoring distribution $G_i(t)=P(R_i > t)$. If the censoring model is correctly specified, the limiting values $\theta^{*} = (\alpha_0^{*}(s_1),..., \alpha_0^{*}(s_R),\beta^{DB*})$ can be found as the solution to $\mathbb{E}(U^{DB}(\theta))=(0, 0,..., 0)$, where
\begin{align}
\label{score-DB1}
\mathbb{E}(U_{i}(\alpha_{0}(s_r))) &= \sum_{x=0,1} \dfrac{P(X=x)h'(\alpha_{0}(s_r)+\beta^{DB}x)}{h(\alpha_0(s_r)+\beta^{DB} x)(1-h(\alpha_0(s_r)+\beta^{DB} x))} (F_1^{true}(s_r | X=x) - h(\alpha_0(s_r) + \beta^{DB} x)) \; ,
\end{align}
and
\begin{align}
\label{score-DB2}
\mathbb{E}(U_{i}(\beta^{DB})) &= \sum_{s_r: \ r=1,.., R} \dfrac{P(X=1)h'(\alpha_0(s_r)+\beta^{DB})}{h(\alpha_0(s_r)+\beta^{DB})(1-h(\alpha_0(s_r)+\beta^{DB}))} (F^{true}_1(s_r | X=1) - h(\alpha_0(s_r) + \beta^{DB}) ) \; .
\end{align}
Here $F^{true}_1(s_r | X=1)$ is the true cumulative incidence function given in (\ref{CIF}) and the expectations are taken with respect to the true competing risks, censoring and covariate processes.
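As with the Cox case, the limiting values can be obtained numerically once a true data-generating model is specified. The sketch below profiles out $\alpha_0^{*}(s_r)$ from (\ref{score-DB1}) for a candidate $\beta^{DB}$ and then solves (\ref{score-DB2}); since the true cumulative incidence function (\ref{CIF}) is not reproduced here, a hypothetical competing risks model with constant cause-specific hazards mixed over an omitted binary covariate is used, and all numerical values are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Hypothetical true competing-risks model (constant cause-specific hazards mixed over
# an omitted binary covariate V); used only to illustrate solving (score-DB1) and
# (score-DB2) numerically for the limiting values alpha_0*(s_r) and beta^{DB*}.
lam1 = lambda x, v: 0.10 * np.exp(-0.5 * x + 0.8 * v)      # cause of interest
lam2 = lambda x, v: 0.05 * np.exp(-0.2 * x + 0.5 * v)      # competing cause
pX1, pV1 = 0.5, 0.5
s_grid = [1.0, 2.0, 3.0, 4.0]                               # s_1 < ... < s_R

def F1_true(t, x):
    # assumed true cumulative incidence of the cause of interest given X = x
    out = 0.0
    for v, pv in ((0, 1.0 - pV1), (1, pV1)):
        tot = lam1(x, v) + lam2(x, v)
        out += pv * (lam1(x, v) / tot) * (1.0 - np.exp(-tot * t))
    return out

h = lambda u: 1.0 - np.exp(-np.exp(u))                      # inverse cloglog link
w = lambda u: np.exp(u) / (1.0 - np.exp(-np.exp(u)))        # h'(u) / [h(u)(1 - h(u))]

def alpha_star(s, beta):
    # solve E(U_i(alpha_0(s_r))) = 0, equation (score-DB1), given beta
    eq = lambda a: sum(px * w(a + beta * x) * (F1_true(s, x) - h(a + beta * x))
                       for x, px in ((0, 1.0 - pX1), (1, pX1)))
    return brentq(eq, -8.0, 3.0)

def score_beta(beta):
    # equation (score-DB2) evaluated at the profiled intercepts alpha_0*(s_r)
    return sum(pX1 * w(alpha_star(s, beta) + beta)
               * (F1_true(s, 1) - h(alpha_star(s, beta) + beta)) for s in s_grid)

beta_db_star = brentq(score_beta, -3.0, 3.0)
print("limiting value beta_DB* =", round(beta_db_star, 4))
\end{verbatim}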
\end{document}
\begin{document}
\title{On the Benjamin-Bona-Mahony equation with a localized damping}
\author{Lionel Rosier \thanks{Centre Automatique et Syst\`emes,
MINES ParisTech, PSL Research University, 60 Boulevard Saint-Michel,
75272 Paris Cedex 06, France
Email: {\tt [email protected]}.}}
\date{\empty}
\maketitle
\begin{center}
{\bf Abstract}
\end{center}
We introduce several mechanisms to dissipate the energy in the Benjamin-Bona-Mahony (BBM) equation. We consider
either a distributed (localized) feedback law, or a boundary feedback law.
In each case,
we prove the global wellposedness of the system and the convergence towards a solution of the BBM equation which is null on
a band. If the Unique Continuation Property holds for the BBM equation, this implies that the origin is
asymptotically stable for the damped BBM equation.
\vskip0.3cm\noindent {\it AMS Subject Classification:} 35Q53, 93D15.
\vskip0.3cm\noindent {\it Keywords:} Benjamin-Bona-Mahony equation, unique
continuation property, internal stabilization, boundary stabilization.
\section{Introduction}\label{sec1}
The Benjamin-Bona-Mahony equation
\begin{equation}
\label{A1}
v_t-v_{xxt}+v_x+vv_x=0,
\end{equation}
was proposed in \cite{BBM} as an alternative to the Korteweg-de Vries (KdV) equation as a model for the propagation of
one-dimensional, unidirectional, small amplitude long waves in nonlinear dispersive media. In the context of shallow water waves,
$v=v(x,t)$ stands for the displacement of the water surface (from rest position) at location $x$ and time $t$.
In the paper, we shall assume that either $x\in \mathbb{R}$, or $x\in (0,L)$ or $x\in \mathbb{T}=\mathbb{R} /(2\pi \mathbb{Z} )$ (the one-dimensional torus).
The dispersive term $-v_{xxt}$ produces a strong smoothing effect for the time regularity, thanks to which the wellposedness
theory of \eqref{A1} is easier than for KdV (see \cite{BT,roumegoux}). Solutions of \eqref{A1} turn out to be analytic in time.
On the other hand, control theory for BBM is still at an early stage (for the control properties of KdV, we refer the reader to
the recent survey \cite{RZ2009}). S. Micu investigated in \cite{micu}
the boundary controllability of the {\em linearized} BBM equation, and noticed that the exact controllability fails due to the existence of a {\em limit point} in the spectrum of the adjoint equation.
The author and B.-Y. Zhang introduced in \cite{RZ2013} a {\em moving control} and derived with such a control both the exact
controllability and the exponential stability of the full BBM equation. For a distributed control with a {\em fixed support},
the exact controllability of the linearized BBM equation fails, so that the study of the controllability of the full BBM equation seems hard.
However, it is reasonable to expect that some stability properties could be derived by incorporating some dissipation
in a fixed subdomain or at the boundary. The aim of this paper is to propose several dissipation mechanisms leading to systems
for which one has both the global existence of solutions and a nonincreasing $H^1$-norm.
The conclusion that all the trajectories emanating from data close to the origin are indeed attracted by the origin
is valid provided that the following conjecture is true: \\[3mm]
{\bf Unique Continuation Property (UCP) Conjecture:} There exists some number $\delta >0$ such that for any
$v_0\in H^1(\mathbb{T})$ with $\Vert v_0\Vert _{H^1(\mathbb{T} )} <\delta$, if the solution
$v=v(x,t)$ of
\begin{equation}
\label{A2}
\left\{
\begin{array}{ll}
v_t-v_{xxt}+v_x+vv_x =0 , \quad &x\in \mathbb{T} ,\\
v(x,0)=v_0(x), \quad &x\in\mathbb{T}
\end{array}
\right.
\end{equation}
satisfies
\begin{equation}
\label{A3}
v(x,t)=0 \, \quad \forall (x,t)\in \omega \times (0,T)
\end{equation}
for some nonempty open set $\omega \subset \mathbb{T}$ and some time $T>0$, then $v_0=0$ (and hence $v\equiv 0$).
To the best of the author's knowledge, the UCP for BBM as stated in the above conjecture is still open. The main difficulty
comes from the fact that the lines $x=0$ are {\em characteristic} for BBM, so that the ``information'' does not propagate well
in the $x$-direction. For some UCP for BBM (with additional assumptions) see \cite{RZ2013,yamamoto}. See also \cite{MORZ, MP} for
control results for some Boussinesq systems of BBM-BBM type.
The following result is a {\em conditional} UCP in which it is assumed that the initial data is small in
the $L^\infty$-norm and has a {\em nonnegative} mean value. Its proof is based on the analyticity in time of the trajectories and
on the use of some Lyapunov function.
\begin{thm} \cite{RZ2013}
Let $v_0\in H^1(\mathbb{T} ) $ be such that
\begin{equation}
\label{AA10}
\int_\mathbb{T} v_0(x)\, dx \ge 0 ,
\end{equation}
and
\begin{equation}
\label{AA11}
\Vert v_0\Vert _{L^\infty (\mathbb{T} ) } <3.
\end{equation}
Assume that the solution $v$ to \eqref{A2} satisfies \eqref{A3}. Then $v_0=0$.
\end{thm}
As it was noticed in \cite{RZ2013}, the UCP for BBM cannot hold for arbitrary states in $L^2(\mathbb{T})$, since any initial data $v_0$ with values in
$\{ -2, 0 \}$ gives a trivial (stationary) solution of BBM. Thus, either a regularity assumption ($v_0\in H^1(\mathbb{T})$), or a bound on the norm
of the initial data has to be imposed for the UCP to hold.
The paper is outlined as follows. In Section 2, we incorporate a simple localized damping in BBM equation and investigate
the corresponding Cauchy problem. In Section 3, we consider another dissipation mechanism involving one derivative. The last
section is concerned with the introduction of nonhomogeneous boundary conditions leading again to the dissipation of the $H^1$-norm.
\section{Stabilization of the BBM equation}
\subsection{Internal stabilization with a simple feedback law}
We consider the BBM equation on $\mathbb{T}$ with a localized damping
\begin{eqnarray}
u_t-u_{xxt}+u_x+uu_x +a(x)u =0 \quad && x\in \mathbb{T},\ t\ge 0,\label{A20}\\
u(x,0)=u_0(x) \quad && x\in \mathbb{T} , \label{A21}
\end{eqnarray}
where $a$ is a smooth, nonnegative function on $\mathbb{T}$ with $a(x)>0$ on a given
open set $\omega \subset \mathbb{T}$.
We write \eqref{A20}-\eqref{A21} in its integral form
\begin{equation}
\label{A22}
u(t)=u_0 - \int_0^t (1-\partial _x^2)^{-1}
[a(x)u + ( u + \frac{u^2}{2} )_x ] (\tau )d\tau
\end{equation}
where $(1-\partial _x^2)^{-1}f$ denotes for $f\in L^2(\mathbb{T} )$ the unique solution $v\in H^2(\mathbb{T} )$ of $(1-\partial _x^2)v=f$.
Define the norm $\Vert \cdot \Vert _s$ in $H^s(\mathbb{T} )$ as
\[
\Vert \sum_{n\in \mathbb{Z}} c_ne^{inx} \Vert _s^2 = \sum_{n\in \mathbb{Z}} \vert c_n\vert ^2 (1+\vert n\vert ^2)^s.
\]
We have the following result.
\begin{thm}
\label{stab1}
Let $s\ge 0$ be given. For any $u_0\in H^s(\mathbb{T} )$, there exist
$T>0$ and a unique solution $u$ of \eqref{A22} in
$C([0,T],H^s(\mathbb{T} ))$. Moreover, the correspondence $u_0\in H^s(\mathbb{T} )\mapsto
u\in C([0,T],H^s(\mathbb{T} ))$ is Lipschitz continuous. If $s=1$, the solution exists for every $T>0$, and the energy
$\Vert u ( t ) \Vert _{1} ^2$ is nonincreasing. Finally,
if $v_0$ denotes any weak limit in $H^1(\mathbb{T} )$ of a sequence
$u(t_n)$ with $t_n\nearrow +\infty$, then the solution $v$ of system
\eqref{A2} satisfies \eqref{A3}. In particular, if the UCP conjecture holds,
then $v_0=0$, so that $u(t)\to 0$ weakly in $H^1(\mathbb{T})$ as $t\to +\infty$, hence
strongly in $H^s(\mathbb{T} )$ for all $s<1$.
\end{thm}
\noindent
{\em Proof.} We proceed as in \cite{BCS2} using the estimate
\begin{equation}
\label{A15}
||fg||_s \le C ||f||_{s+1} ||g||_{s+1} \qquad
\forall s\ge -1,\ \forall f,g \in H^{s+1} (\mathbb{T} ).
\end{equation}
The estimate \eqref{A15} follows from a similar estimate proved in \cite{BCS2} for functions defined on $\mathbb{R}$, namely
\begin{equation}
\label{A31}
||\tilde f \tilde g ||_{H^s(\mathbb{R} )}
\le C ||\tilde f ||_{H^{s+1}(\mathbb{R} )} ||\tilde g ||_{H^{s+1}(\mathbb{R} )}\qquad
\forall s\ge -1,\ \forall f,g \in H^{s+1}(\mathbb{R} ),
\end{equation}
by letting $\tilde f(x) =\varphi (x) f(x)$, $\tilde g(x)=\varphi (x)g(x)$,
where $f$ and $g$ are viewed as $2\pi$-periodic functions, and $\varphi
\in C^\infty_0(\mathbb{R} )$ denotes a cut-off function such that $\varphi (x)=1$ on
$[0,2\pi ]$. Indeed, we notice that
\[
||f||_{H^s(\mathbb{T} )} \le ||\tilde f||_{H^s(\mathbb{R} )} \le C || f ||_{H^s(\mathbb{T} )}
\]
for some constant $C>0$.
Note that for any $s \ge 0$
\[
||(1-\partial _x^2)^{-1} \partial _x (fg)||_s \le C ||fg||_{s-1}
\le C ||f||_s||g||_s.
\]
Pick any $u_0\in H^s (\mathbb{T} )$. Let us introduce the operator
\[
(\Gamma u)(t) := u_0 - \int_0^t (1-\partial _x ^2)^{-1}
[a(x)u + (u + \frac{u^2}{2})_x ] (\tau ) d\tau .
\]
Then
\begin{eqnarray*}
\sup_{0\le t\le T } ||(\Gamma u-\Gamma v)(t)||_s
&\le & C_2 \int_0^T [||a(u-v)||_{s-2} + ||u-v||_{s-1}+||u-v||_s||u+v||_s] d\tau \\
&\le & C_3T (1+2R) ||u-v||_{C([0,T], H^s(\mathbb{T} ))}
\end{eqnarray*}
if we assume that $u$ and $v$ are in the closed ball $B_R$ of radius $R$
centered at $0$ in $C([0,T],H^s(\mathbb{T} ))$. We pick
$T:=[ 2C_3(1+2R)]^{-1}$ so that
\[
||\Gamma u - \Gamma v||_{C([0,T],H^s(\mathbb{T} ))} \le \frac{1}{2} ||u-v||_{C([0,T],H^s(\mathbb{T} ))}\cdot
\]
On the other hand
\[
||\Gamma u||_{C([0,T],H^s(\mathbb{T} ))}
\le ||u_0||_s + ||\Gamma u -\Gamma 0||_{C([0,T],H^s(\mathbb{T} ))} \le ||u_0||_s +\frac{R}{2}\le R
\]
for the choice $R=2||u_0||_s$. It follows that the map $\Gamma $ contracts in $B_R$, hence it
admits a unique fixed point $u$ in $B_R$ which solves the integral equation
\eqref{A22}. Furthermore, given any $\rho >0$ and any
$u_0,v_0\in H^s(\mathbb{T} )$ with $||u_0||_s\le \rho$, $||v_0||_s\le \rho$, one easily sees
that for
\begin{equation}
\label{A30}
T=[2C_3(1+4\rho )]^{-1}
\end{equation}
one has
\begin{equation}
\label{ABC}
||u-v||_{C([0,T],H^s(\mathbb{T} ))} \le 2||u_0-v_0||_s.
\end{equation}
Finally assume that $s=1$. Scaling in \eqref{A20} by $u$ yields
\begin{equation}
\label{A32}
\frac{1}{2}||u(T)||^2_1 -\frac{1}{2}||u_0||^2_1
+\int_0^T\!\!\!\int_{\mathbb{T} }a(x)|u(x,t)|^2dxdt =0
\end{equation}
(Note that $u_t= -(1-\partial _x^2)^{-1}[a(x)u + (u+\frac{u^2}{2})_x]
\in C([0,T], H^2(\mathbb{T} ))$ so that each term in \eqref{A20} belongs to
$C([0,T],L^2(\mathbb{T} ))$, and the integrations by parts are valid.)
It follows that the map $t\mapsto ||u(t) || _1$ is nonincreasing, hence it admits
a nonnegative limit $l$ as $t\to \infty$, and that the solution
of \eqref{A20}-\eqref{A21} is defined for all $t\ge 0$. Let $T$ be as in
\eqref{A30} with $\rho = ||u_0||_1$. Note that
$||u(t)||_1\le ||u_0||_1$ for all $t\ge 0$. Let $v_0$ be any weak limit
of $\{ u(t) \} _{t\ge 0}$ in $H^1(\mathbb{T} )$, i.e. we have that
$u(t_n)\to v_0$ weakly in $H^1(\mathbb{T} )$ for some sequence $t_n\to + \infty$. Extracting a subsequence if needed, we
may assume that
\begin{equation}
\label{Atn}
t_{n+1}-t_n\ge T\qquad \text{ for } n\ge 0.
\end{equation}
From
\[
\frac{1}{2} ||u(t_{n+1})||_1^2 - \frac{1}{2} ||u(t_n)||_1^2
+\int_{t_n}^{t_{n+1}} \!\!\!\int_{\mathbb{T}} a(x)|u(x,t)|^2dxdt =0,
\]
we obtain that
\begin{equation}
\label{A35}
\lim_{n\to +\infty} \int_{t_n}^{t_{n+1}} \!\!\! \int_{\mathbb{T}} a(x)|u(x,t)|^2 dxdt=0.
\end{equation}
Since $u(t_n)\to v_0$ in $H^s(\mathbb{T} )$ for $s<1$, and
$||u(t_n)||_s \le ||u(t_n)||_1 \le \rho$, we have from \eqref{ABC} that
for all $s\in [0,1[$
\begin{equation}
\label{A36}
u(t_n+\cdot ) \to v\qquad \text{ in } C([0,T],H^s(\mathbb{T} ))\qquad
\text{ as } n\to +\infty ,
\end{equation}
where $v=v(x,t)$ denotes the solution of
\begin{eqnarray*}
&& v_t - v_{xxt} + v_x + vv_x + a(x) v =0,\qquad x\in \mathbb{T}, \ t\ge 0, \\
&& v(x,0)=v_0(x),\qquad x\in \mathbb{T} .
\end{eqnarray*}
Notice that $v\in C([0,T];H^1(\mathbb{T} )) $, for $v_0\in H^1(\mathbb{T} )$.
Combining \eqref{A35} with \eqref{Atn} and \eqref{A36} yields
\begin{equation}
\label{A37}
\int_0^T\!\!\!\int_{\mathbb{T} }a(x)|v|^2 dxdt=0.
\end{equation}
Therefore $v\in C([0,T];H^1(\mathbb{T} ))$ solves
\begin{eqnarray*}
&&v_t-v_{xxt}+v_x+vv_x =0,\qquad x\in \mathbb{T},\ t\in (0,T),\\
&&v(x,0)=v_0(x) \qquad x\in \mathbb{T} ,\\
&&v(x,t)=0 \qquad x\in \omega,\ t\in (0,T).
\end{eqnarray*}
If the UCP conjecture is true, we have that $v\equiv 0$ on $\mathbb{T} \times (0,T)$. It follows that
$v_0\equiv 0$, and that as $t\to + \infty$
\begin{eqnarray*}
&&u(t)\to 0 \qquad \text{ weakly in } H^1(\mathbb{T} ),\\
&&u(t)\to 0 \qquad \text{ strongly in } H^s(\mathbb{T} ) \text{ for any } s<1.
\end{eqnarray*}
\cqfd
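The dissipation identity \eqref{A32} can also be observed numerically. The following minimal sketch integrates the integral form \eqref{A22} with a Fourier spectral discretization of $(1-\partial_x^2)^{-1}$ on $\mathbb{T}$ and classical RK4 time stepping; the grid size, time step, damping profile $a(x)$ and initial datum are illustrative assumptions and are not taken from the analysis above.
\begin{verbatim}
import numpy as np

# Spectral sketch of the damped BBM equation (A20) on the torus; the printed
# H^1 norm should be nonincreasing in accordance with (A32).
N, dt, n_steps = 256, 1e-3, 30000
x = 2 * np.pi * np.arange(N) / N
n = np.fft.fftfreq(N, d=1.0 / N)                # integer wavenumbers
a = np.where((x > 1.0) & (x < 2.0), 1.0, 0.0)   # damping supported on omega = (1, 2)
u = 0.5 * np.cos(x)                             # initial datum u_0

def rhs(u):
    # u_t = -(1 - d_xx)^{-1} [ a(x) u + (u + u^2/2)_x ], computed in Fourier space
    fh = np.fft.fft(a * u)
    gh = np.fft.fft(u + 0.5 * u ** 2)
    return np.real(np.fft.ifft(-(fh + 1j * n * gh) / (1.0 + n ** 2)))

def h1_norm_sq(u):
    c = np.fft.fft(u) / N
    return float(np.sum((1.0 + n ** 2) * np.abs(c) ** 2))

for step in range(n_steps):                     # classical RK4 time stepping
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    if step % 5000 == 0:
        print("t = %6.2f   ||u||_1^2 = %.6f" % (step * dt, h1_norm_sq(u)))
\end{verbatim}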
\subsection{Internal stabilization with one derivative in the feedback law}
We now pay attention to the stabilization of the BBM equation by means of a
``stronger'' feedback law involving one derivative. More precisely, we consider the system
\begin{eqnarray}
&&u_t-u_{xxt}+u_x+uu_x -(a(x)u_x)_x =0, \qquad x\in \mathbb{T}, \ t\ge 0, \label{A50}\\
&&u(x,0) = u_0(x), \qquad x\in \mathbb{T}. \label{A51}
\end{eqnarray}
where $a=a(x)$ denotes again any nonnegative smooth function on $\mathbb{T}$
with $a(x)>0$ on a given open set $\omega \subset \mathbb{T}$.
Scaling in \eqref{A50} by $u$ yields (at least formally)
\begin{equation}
\label{A52}
\frac{1}{2} ||u(T)||^2_1 -\frac{1}{2}||u_0||_1^2
+\int_0^T\!\!\!\int_{\mathbb{T}} a(x)|u_x(x,t)|^2 dxdt =0.
\end{equation}
The decay of the energy is quantified by an integral term involving
the square of a localized $H^1$-norm in \eqref{A52}. By contrast, the integral term in \eqref{A32}
involved the square of a localized $L^2$-norm. This suggests that the damping mechanism
involved in \eqref{A50} acts in a {\em much stronger way} than in \eqref{A20}.
As a matter of fact, in the trivial situation when $a(x)\ge C>0$ for all $x\in \mathbb{T}$
in \eqref{A50}, it is a simple exercise to establish the exponential stability in $H^1(\mathbb{T})$ for
both the linearized equation and the nonlinear BBM equation for states with zero means.
In the general case when the function $a(x)$ is supported in a subdomain of $\mathbb{T}$, we obtain the following result.
\begin{thm}
\label{stab2}
Let $s\ge 0$ be given. For any $u_0\in H^s(\mathbb{T} )$, there exist
$T>0$ and a unique solution $u$ of \eqref{A50}-\eqref{A51} in
$C([0,T],H^s(\mathbb{T} ))$. Moreover, the correspondence $u_0\in H^s(\mathbb{T} )\mapsto
u\in C([0,T],H^s(\mathbb{T} ))$ is Lipschitz continuous. If $s=1$, one can pick any
$T>0$ and $\Vert u(t) -[u_0]\Vert _{H^1(\mathbb{T} )} $ is nonincreasing. If the UCP conjecture remains valid
when \eqref{A3} is replaced by $v=C$ on $\omega \times (0,T)$, then $u(t)\to [u_0]=(2\pi)^{-1} \int _\mathbb{T} u_0(x)dx$ weakly in $H^1(\mathbb{T})$,
hence strongly in $H^s(\mathbb{T} )$ for $s<1$, as $t\to \infty$.
\end{thm}
\noindent
{\em Proof.} As the proof is very similar to that of Theorem \ref{stab1}, we limit ourselves to
pointing out the only differences. For the wellposedness, we use the estimate valid
for $s\ge 0$
\[
|| ( 1- \partial _x ^2 )^{-1} (au_x)_x||_s \le C ||u||_s.
\]
For $s=1$, we claim that \eqref{A52} is justified by noticing that for $u\in H^1(\mathbb{T} )$
\[
\langle -(au_x)_x , u \rangle _{H^{-1},H^1} = \langle au_x, u_x \rangle
_{L^2,L^2}.
\]
Then the wellposedness statement is established as in Theorem \ref{stab1}.
Let us proceed to the asymptotic behavior. Pick any
$u_0\in H^1(\mathbb{T} )$ and any $v_0\in H^1(\mathbb{T} )$ which is the weak limit in $H^1(\mathbb{T} )$ of a sequence $u(t_n)$ with
$t_n\to \infty$ and $t_{n+1}-t_n\ge T$. Let us still denote by
$v$ the solution of \eqref{A2}.
Equation \eqref{A37} has to be replaced by
\begin{equation}
\label{A56}
\int_0^T\!\!\! \int_{\mathbb{T} } a(x) |v_x(x,t)|^2dxdt=0.
\end{equation}
To prove \eqref{A56}, we notice first that $u(t_n+\cdot )\to v$ in
$C([0,T],H^s(\mathbb{T} ))$ for $s<1$, and that $||u(t_n+\cdot )||_{L^2(0,T;H^1(\mathbb{T} ))}
\le const$, so that, extracting a subsequence if needed, we have that
\[
u(t_n+\cdot ) \to v \qquad \text{ weakly in } L^2(0,T; H^1(\mathbb{T} )).
\]
This yields, with \eqref{A52},
\[
\int_0^T\!\!\!\int_{\mathbb{T}} a(x)|v_x|^2dxdt
\le \liminf_{n\to \infty}\int_{t_n}^{t_n+T}\!\!\!\int_{\mathbb{T}} a(x) |u_x(x,t)|^2dxdt=0.
\]
Therefore $v$ solves
\begin{eqnarray}
&& v_t-v_{xxt} +v_x+vv_x =0,\qquad x\in\mathbb{T},\ t\in (0,T),\label{A60}\\
&& v_x=0,\qquad x\in\omega, \ t\in (0,T), \label{A61}\\
&& v\in C([0,T]; H^s(\mathbb{T} ))\qquad \text{ for } s<1. \label{A62}
\end{eqnarray}
Equations \eqref{A60} and \eqref{A61} yield $v_t=v_x=0$ in $\omega \times (0,T)$, hence
\[
v(x,t)=C\qquad \text{ for } (x,t)\in \omega\times (0,T)
\]
for some constant $C\in \mathbb{R}$. If the UCP Conjecture is still valid when $v$ is constant on the band $\omega \times (0,T)$,
then
$v\equiv C$. As $d[u(t)]/dt=0$, it follows that $C=[u_0]$ and
that as $t\to \infty$, $u(t)\to [u_0]$ weakly in $H^1(\mathbb{T})$ and strongly in $H^s(\mathbb{T})$ for any $s<1$. \cqfd
\begin{rmk}
Similar results, but with convergences towards 0, hold for the system
\begin{eqnarray*}
&&u_t-u_{xxt}+u_x+uu_x-(a(x)u_x)_x=0,\\
&&u(0,t)=u(2\pi, t)=0,\\
&&u(x,0)=u_0(x)
\end{eqnarray*}
provided that $a(x)>0$ for $x\in (0,\varepsilon )\cup (2\pi -\varepsilon ,2\pi )$ for some $\varepsilon >0$ and the UCP Conjecture holds.
\end{rmk}
\subsection{Boundary stabilization of BBM}
In this section, we are concerned with the boundary stabilization of the BBM
equation. We consider the following system
\begin{eqnarray}
&& u_t - u_{xxt} +u_x + uu_x =0, \qquad x\in (0,L),\ t\ge 0,
\label{B1}\\
&& u_{tx}(0,t)=\alpha u(0,t) + \frac{1}{3} u^2(0,t), \label{B2}\\
&& u_{tx}(L,t)=\beta u(L,t) + \frac{1}{3} u^2(L,t), \label{B3}\\
&& u(x,0)=u_0(x), \label{B4}
\end{eqnarray}
where $\alpha$ and $\beta$ are some real constants chosen so that
\begin{equation}
\alpha >\frac{1}{2} \quad \text{ and }\quad \beta < \frac{1}{2}\cdot \label{B5}
\end{equation}
Scaling in \eqref{B1} by $u$ yields (at least formally)
\begin{eqnarray}
\frac{d}{dt} \frac{1}{2} \int_0^L (|u|^2 + |u_x|^2)dx
&=& \left[ uu_{tx}\right] _0^L -\left[
\frac{u^2}{2} +\frac{u^3}{3}\right] _0^L \nonumber\\
&=& ( \beta -\frac{1}{2}) |u(L,t)|^2 +(\frac{1}{2}-\alpha )|u(0,t)|^2,
\label{B6}
\end{eqnarray}
hence $||u(t)||_{H^1}$ is nonincreasing if $\alpha$ and $\beta$
fulfill \eqref{B5}. The wellposedness of \eqref{B1}-\eqref{B4}
and the asymptotic behavior are described in the following result.
\begin{thm}
Let $s\in (1/2,5/2)$ and $u_0\in H^s(0,L)$. Then there exist a time
$T>0$ and a unique solution $u\in C([0,T],H^s(0,L))$ of
\eqref{B1}-\eqref{B4}. Furthermore, if $s=1$, then $T$ may be taken arbitrarily
large, and if the UCP Conjecture holds, as $t\to \infty$
\begin{eqnarray}
&&u(t)\to 0 \quad \text{\rm weakly in } H^1(0,L) \label{B7}\\
&&u(t)\to 0 \quad \text{\rm strongly in } H^s(0,L) \text{ for all } s<1.\label{B8}
\end{eqnarray}
\end{thm}
{\em Proof.} Let us begin with the well-posedness part.
Pick any $u_0\in H^s(0,L)$ with $s>\frac{1}{2}$. Let $v=u_t$.
Then $v$ solves the elliptic problem
\begin{eqnarray}
&&(1-\partial _x ^2)v=f, \quad x\in (0,L), \label{B9}\\
&&v_x(0)=a,\ v_x(L)=b \label{B10}
\end{eqnarray}
with $f:=-u_x-uu_x$,
$a:=\alpha u(0,t) + \frac{1}{3} u^2(0,t)$,
$b:=\beta u(L,t) + \frac{1}{3} u^2(L,t)$. Note that the solution
$v$ of \eqref{B9}-\eqref{B10} may be written as
\[
v=w+g
\]
where
$g(x)=ax+ \frac{b-a}{2L}x^2$ and
$w=(1-\partial _x^2)_N^{-1}(f-(1-\partial ^2 _x)g)$ solves
\begin{eqnarray*}
&&(1-\partial _x ^2)w=f-(1-\partial _x^2)g\\
&&w_x(0)=w_x(L)=0.
\end{eqnarray*}
Thus
\begin{equation}
u_t=v=-(1-\partial _x^2)_N^{-1} (u_x+uu_x)
+ (1-(1-\partial _x^2)_N^{-1} (1-\partial_x^2))g.
\label{B15}
\end{equation}
We seek $u$ as a fixed point of the integral equation
\begin{eqnarray}
u(t)={\Gamma }(u)(t) &:=& u_0 + \int_0^t
\bigg\{
-(1-\partial _x^2)_N^{-1}(u_x+uu_x)(\tau) \nonumber \\
&&\ \left. +(1-(1-\partial _x ^2)^{-1}_N (1-\partial _x^2))
\bigg[ [ \alpha u(0,\tau) +\frac{1}{3}u^2(0,\tau) ] x \right. \nonumber\\
&& +(2L)^{-1} [ \beta u(L,\tau ) + \frac{1}{3} u^2(L,\tau )
-\alpha u(0,\tau ) -\frac{1}{3} u^2(0, \tau ) ] x^2 \bigg]\bigg\} d\tau . \quad\ \label{B16}
\end{eqnarray}
Note that ${\mathcal D}((1-\partial _x^2)_N^{\frac{s}{2}}) =H^s(0,L)$ for
$-1/2<s<3/2$.
Let $R>0$ be given and let $B_R$ denote the closed ball in $C([0,T],H^s(0,L))$
of center $0$ and radius $R$. For $1/2<s<5/2$ and $u\in B_R,v\in B_R$, we have that
\begin{eqnarray*}
||{\Gamma }(u)(t)-{\Gamma }(v)(t)||_{H^s(0,L)}
&\le& C\int_0^t (1+R) ||u(\tau ) -v(\tau )||_{H^s(0,L)} d\tau \\
&\le& CT(1+R) ||u-v||_{C([0,T],H^s(0,L))}
\end{eqnarray*}
and
\[
||{\Gamma }(u)(t)||_{H^s(0,L)} \le ||u_0||_{H^s(0,L)}
+CT (1+R) ||u||_{C([0,T],H^s(0,L))}\cdot
\]
Pick $R=2||u_0||_{H^s(0,L)}$ and $T=(2C(1+R))^{-1}$. Then
$\Gamma $ is a contraction in $B_R$, hence
it admits a unique fixed point which solves \eqref{B16}, or
\eqref{B15} and \eqref{B4}.
Assume now that $s=1$. It follows from \eqref{B15} that
$u_t\in C([0,T],H^2(0,L))$, hence we can scale by $u$ in \eqref{B1} to derive
\eqref{B6}. Thus $||u(t) ||_{H^1(0,L)}$ is nonincreasing, and $T$ may be taken as
large as desired. Let us turn our attention to the asymptotic behavior.
Let $(t_n)_{n\ge 0}$ be any sequence with $t_n\to \infty$ as $n\to \infty$.
Extracting a subsequence if needed, we can assume that $t_{n+1}-t_n\ge T$
for each $n$ and that, for some $v_0\in H^1(0,L)$, $u(t_n)\to v_0$ weakly in $H^1(0,L)$ (hence strongly in $H^s(0,L)$ for $s<1$) as $n\to \infty$. The
continuity of the flow map (which follows at once from the fact that the map
$\Gamma$ is a contraction) yields
\[
u(t_n+\cdot ) \to v(\cdot ) \qquad \text{\rm in }\
C([0,T],H^s(0,L)) \quad \text{\rm for } s<1,
\]
where $v$ denotes the solution of \eqref{B1}-\eqref{B4}
issued from $v_0$ at $t=0$. Integrating \eqref{B6} on $[t_n,t_{n+1}]$
yields
\[
\frac{1}{2} ||u(t_{n+1} ) ||^2_{H^1(0,L)} -\frac{1}{2}||u(t_n) ||^2_{H^1(0,L)}
+(\frac{1}{2} -\beta ) \int_{t_n}^{t_{n+1}} |u(L,t)|^2dt
+(\alpha -\frac{1}{2})\int_{t_n}^{t_{n+1}} |u(0,t)|^2 dt = 0.
\]
Letting $n\to \infty$ yields
\begin{equation}
\label{B30}
(\frac{1}{2}-\beta ) \int_0^T |v(L,t)|^2 dt
+(\alpha - \frac{1}{2} ) \int_0^T |v(0,t)|^2 dt =0.
\end{equation}
Extending $v(x,t)$ by $0$ for $x\in \mathbb{R} \setminus (0,L)$ and $t\in (0,T)$,
we infer from \eqref{B1}, \eqref{B2}, \eqref{B3} (for $v$) and
\eqref{B30} that
\[
v_t - v_{txx} + v_x +vv_x =0 \qquad \text{ for } x\in \mathbb{R} ,\ t\in (0,T)
\]
with
\[
v(x,t)=0 \qquad \text{ for } x\in \mathbb{R} \setminus (0,L),\ t\in (0,T).
\]
Since $v\in C([0,T],H^s(\mathbb{R} ))$ with $1/2 <s<1$, we infer from
the UCP Conjecture (if true) that $v\equiv 0$, hence $v_0=0$. \cqfd
\begin{thebibliography}{99}
\bibitem{BBM} T.B. Benjamin, J.L. Bona, J.J. Mahony, {\em Model equations for long waves in nonlinear dispersive systems}, Philos. Trans.
R. Soc. Lond. {\bf 272} (1972) 47--78.
\bibitem{BCS1} J.L. Bona, M. Chen, J.-C. Saut, {\em Boussinesq equations and other systems for small-amplitude long waves
in nonlinear dispersive media. I. Derivation and linear theory}, J. Nonlinear Sci. {\bf 12} (2002), 282-318.
\bibitem{BCS2} J.L. Bona, M. Chen, J.-C. Saut, {\em Boussinesq equations and other systems for small-amplitude long waves
in nonlinear dispersive media. II. Nonlinear theory}, Nonlinearity, {\bf 17} (2004), 925--952.
\bibitem {BT} J.L. Bona, N. Tzvetkov, {\em Sharp well-posedness for the BBM equation}, Discrete Contin. Dyn. Syst. {\bf 23} (4)
(2009) 1241--1252.
\bibitem{micu} S. Micu, {\em On the controllability of the linearized Benjamin-Bona-Mahony equation}, SIAM J. Control Optim.,
{\bf 39} (6) (2001) 1677--1696.
\bibitem{MORZ} S. Micu, J.H. Ortega, L. Rosier, B.-Y. Zhang, {\em Control and stabilization of a family
of Boussinesq systems}, Discrete Contin. Dyn. Syst. {\bf 24} (2009), no. 2, 273--313.
\bibitem{MP} S. Micu, A.F. Pazoto, {\em Stabilization of a Boussinesq system with localized damping},
Journal d'Analyse Math\'ematique, to appear.
\bibitem{rosier} L. Rosier, {\em Exact controllability for the Korteweg-de Vries equation on a bounded domain}, ESAIM Control Optim.
Calc. Var. {\bf 2} (1997) 33--55.
\bibitem{roumegoux} D. Roum\'egoux, {\em A symplectic non-squeezing theorem for BBM equation}, Dyn. Partial Differ.
Equ. {\bf 7} (4) (2010) 289--305.
\bibitem{RZ2009} L. Rosier, B.-Y. Zhang, {\em Control and stabilization of the Korteweg-de Vries equation: recent progresses}, J. Syst. Sci. Complex. {\bf 22} (2009), no. 4, 647--682.
\bibitem{RZ2013} L. Rosier, B.-Y. Zhang, {\em Unique continuation property and control for the Benjamin-Bona-Mahony equation},
J. Differential Equations {\bf 254} (2013) 141--178.
\bibitem{yamamoto} M. Yamamoto, {\em One unique continuation for a linearized Benjamin-Bona-Mahony equation}, J. Inverse Ill-Posed Probl.
{\bf 11} (5) (2003) 537--543.
\end{thebibliography}
\end{document}
\begin{document}
\title{Transformed Fay-Herriot Model with Measurement Error in Covariates}
\begin{abstract}
Statistical agencies are often asked to produce small area estimates (SAEs) for positively skewed variables.
When domain sample sizes are too small to support direct estimators, effects of skewness of the response variable can be large. As such, it is important to appropriately account for the distribution of the response variable given available auxiliary information. Motivated by this issue and in order to stabilize the skewness and achieve normality in the response variable, we propose an area-level log-measurement error model on the response variable.
Then, under our proposed modeling framework, we derive an empirical Bayes (EB) predictor of positive small area quantities when the covariates contain measurement error. We propose corresponding estimators of the mean squared prediction error (MSPE) of the EB predictor using both a jackknife and a bootstrap method. We show that the order of the bias is $O(m^{-1})$, where $m$ is the number of small areas.
Finally, we investigate the performance of our methodology using both design-based and model-based simulation studies.
\noindent {KEYWORDS: Small area estimation; official statistics; Bayesian methods; jackknife; parametric bootstrap; applied statistics; simulation studies.}
\end{abstract}
\section{Introduction} \label{sec:introduction}
Typically, in small area measurement error models, both the response variable and the covariate can be any real number (see Ybarra and Lohr (\citeyear{ybarra2008small}), Arima et al. (\citeyear{arima2017})). However, statistical agencies are often asked to produce small area estimates (SAEs) for variables that are skewed and take values in $\mathbb{R}^{+}$. For instance, the Census of Governments (CoG) provides information on roads, tolls, airports, and other similar information at the local-government level as defined by the United States Census Bureau (USCB). Another example includes the United States National Agricultural Statistics Service (NASS), which
provides estimates regarding crop harvests (see Bellow and Lahiri (\citeyear{bellow2011empirical})). The United States Natural Resources Conservation Service (NRCS) provides estimates regarding roads at the county-level (e.g., Wang and Fuller (\citeyear{wang2003mean})), and the Australian Agricultural and Grazing Industries Survey provides estimates of the total expenditures of Australian farms (e.g., Chandra and
Chambers (\citeyear{chandra2011small})).
When domain sample sizes are too small to support direct estimators, the effect of skewness can be quite large, and it is critical to account for the distribution of the response variable given the auxiliary information at hand. For a review of the SAE literature, we refer to recent work by Rao and Molina (\citeyear{rao2015small}) and Pfeffermann (\citeyear{pfeffermann2013new}).
The case of positively skewed response variables corresponds to a Box-Cox transformation whose governing parameter is zero, i.e., the logarithmic transformation. The setting in which the covariate may also be positively skewed and contain measurement error has received less attention in the literature.
Throughout this paper, we explain why this problem goes beyond a simple substitution and address some of its difficulties.
\subsection{Census of Governments}
\label{sec:cog}
As mentioned in Sec.~\ref{sec:introduction}, our proposed framework is motivated by data that is positively skewed. One such data set is the Census of Governments (CoG), which is survey data collected periodically by the United States Census Bureau (USCB) and provides comprehensive statistics about governments and governmental activities. Data is reported on government organizations, finances, and employment. For example, data on organizations refer to the location, type, and characteristics of local governments and officials. Data on finances and employment refer to revenue, expenditure, debt, assets, employees, payroll, and benefits.
We utilize data from the CoG from 2007 and 2012 (\url{https://www.census.gov/econ/overview/go0100.html}). In the CoG, the small areas consist of the 48 states of the contiguous United States. These 48 areas contain 86,152 local governments defined by the USCB, such as airports, toll roads, bridges, and other federal government corporations. The parameter of interest is the average number of full-time employees per government at the state level from the 2012 data set, which can be defined as the total number of full-time employees from all local governments divided by the total number of local governments per state. The covariate of interest is the average number of full-time employees per government at the state level from the 2007 data set. After studying residual plots and histograms, we observe skewed patterns in the average number of full-time employees in both the 2007 and 2012 data sets, which partially motivate our proposed framework.
\subsection{Our Contribution}
\label{sec:contribution}
Motivated by issues that statistical agencies face with skewed response variables, we make several contributions to the literature. In order to stabilize the skewness and achieve normality in the response variable, we propose an area-level log-measurement error model on the response variable (Eq. (\ref{eqn:FHlog})). In addition, we propose a log-measurement error model on the covariates (Eq. (\ref{eqn:ME})).
Next, under our proposed modeling framework, we derive an EB predictor of positive small area quantities subject to the covariates containing measurement error. In addition, we propose a corresponding estimate of the MSPE using a jackknife and a parametric bootstrap, where we illustrate that the order of the bias is $O(m^{-1})$ under standard regularity conditions. We illustrate the performance of our methodology in both model-based simulation and design-based simulation studies. We summarize our conclusions and provide directions for future work.
The article is organized as follows. Sec.~\ref{sec:prior work} details the prior work related to our proposed methodology. In Sec.~\ref{sec:area-level-log}, we propose a log-measurement error
model for the response variable. In addition, we consider a measurement error model of the covariates with a log transformation.
Further, we derive the EB predictor under our framework.
Sec.~\ref{sec:mspe} provides the MSPE for our EB predictor. We provide a decomposition of the MSPE to include the uncertainty of the EB predictor through unknown parameters. Sec.~\ref{sec:estimation-mspe} provides two estimators of the MSPE, namely a jackknife and a parametric bootstrap, where we prove that the order of the bias is $O(m^{-1})$ under standard regularity conditions. Sec.~\ref{sec:experiments} provides both design-based and model-based simulation studies. Sec.~\ref{sec:dis} provides a discussion and directions for future work.
\subsection{Prior Work}
\label{sec:prior work}
In this section, we review the prior literature most relevant to our proposed work.
There is a rich literature on the area-level Fay-Herriot model, where various additive measurement error models have been proposed on the covariates. Ybarra and Lohr (\citeyear{ybarra2008small}) proposed the first additive measurement error model on the covariates. More specifically, the authors considered covariate information from another survey that was independent of the response variable. More recently, Berg and Chandra (\citeyear{berg2014small}) have proposed an EB predictor and an approximately unbiased MSE estimator under a unit-level log-normal model, where no measurement error is assumed present in the covariates. Turning to the Bayesian literature, Arima et al. (\citeyear{arima2017}), Arima et al. (\citeyear{arima2015bayesianbook}), and Arima et al. (\citeyear{arima2015bayesian}) have provided fully Bayesian solutions to the measurement error problem for both unit-level and area-level small area estimation problems.
Next, we discuss related literature regarding the proposed jackknife and parametric bootstrap estimator of the MSPE of the Bayes estimators, where the order of the bias is $O(m^{-1}),$ under standard regularity conditions. Our proposed jackknife estimator of the MSPE contrasts that of Jiang et al. (\citeyear{jiang2002unified}), who proposed an MSE using an orthogonal decomposition, where the leading term in the MSE does not depend on the area-specific response and is nearly unbiased. Given that the authors can make an
orthogonal decomposition, they can show that the order of the bias of the MSE is $o(m^{-1}),$ which contrasts our proposed approach. Under our approach, the leading term
depends on the area-specific response, and thus, the bias is of order $O(m^{-1})$. Turning to the bootstrap, we utilize methods similar to Butar and Lahiri (\citeyear{butar2003measures}). Using this approach, we propose a parametric bootstrap estimator of the MSPE of our estimator. In a similar manner to the jackknife, the order of the bias for the parametric bootstrap estimator of the MSPE is $O(m^{-1})$.
\section{Area-Level Logarithmic Model with Measurement Error}
\label{sec:area-level-log}
Consider $m$ small areas and let $Y_i$ ($i=1,\ldots,m$) denote the population characteristic of interest in area $i$,
where often the information of interest is a population mean or proportion. A primary survey provides a direct estimator $y_i$ of $Y_i$ for some or all of the $m$ small areas.
In this section, we propose a measurement error model suitable for inference on a positively skewed response variable $y_i$. To achieve normality in the response variable, we propose an area-level log-measurement error model on $Y_i.$ In the rest of this section, we explain our model and the desired predictor.
Consider the following model:
\begin{align} \label{eqn:FHlog}
z_i=\theta_i + e_i,
\end{align}
where $z_i:=\log y_i$, $\theta_i:=\log Y_i$, and $e_i$ is the sampling error distributed as $e_i \sim N(0,\psi_i)$. Assume
\begin{equation*}
\theta_i=\sum_{k=1}^{p}\beta_k \log X_{ik} + \nu_i,
\end{equation*}
where $X_{ik}$ is the $k$-th covariate of the $i$-th small area, which is unknown but is observed by $x_{ik}$. The regression coefficient $\beta_k$ is unknown and must be estimated, and $\nu_i$ is the random effect distributed as $\nu_i \sim N(0,\sigma^2_{\nu})$, where $\sigma^2_{\nu}$ is unknown.
Our measurement error model for the case of positively skewed $X_{ik}$'s is proposed as
\begin{equation*}
w_{ik}:= \log x_{ik}=\log X_{ik} + \eta_{ik}, \qquad k=1,...,p,
\end{equation*}
or in a vector form
\begin{equation} \label{eqn:ME}
\boldsymbol{w}_i = \boldsymbol{W}_i + \boldsymbol{\eta}_i, \qquad \boldsymbol{\eta}_i \sim N_p(\boldsymbol{0},\Sigma_i),
\end{equation}
where $\boldsymbol{w}_i=(w_{i1}, ..., w_{ip})^\top$ and $\boldsymbol{W}_i=(W_{i1}, ..., W_{ip})^\top$ for $W_{ik}=\log X_{ik}$.
Note that in Eq. (\ref{eqn:ME}), $\boldsymbol{W}_i$ is non-stochastic within the
class of functional measurement error models (c.f. Fuller (\citeyear{fuller2006measurement})).
We assume $\Sigma_i$ is known, and if it is unknown, it can be estimated using microdata or from another independent survey. We refer to Arima et al. (\citeyear{arima2017}) for further details of estimating $\Sigma_i$.
Now, one can write
\begin{align*}
\begin{cases}
z_i=\boldsymbol{W}_i^\top \boldsymbol{\beta}+ \nu_i + \boldsymbol{\beta}^\top \boldsymbol{\eta}_i+ e_i \\
\theta_i= \boldsymbol{W}_i^\top \boldsymbol{\beta} + \nu_i + \boldsymbol{\beta}^\top \boldsymbol{\eta}_i
\end{cases}
\end{align*}
where $\boldsymbol{\beta}=(\beta_1, ..., \beta_p)^\top$. Thus, for the pair $(z_i,\theta_i)$, we have the following joint normal distribution
\begin{equation*}
\begin{pmatrix}
z_i \\ \theta_i
\end{pmatrix} \sim N_2
\Bigg[ \begin{pmatrix}
\boldsymbol{W}_i^\top \boldsymbol{\beta} \\ \boldsymbol{W}_i^\top \boldsymbol{\beta}
\end{pmatrix},
\begin{pmatrix}
\boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta}+\sigma^2_{\nu}+\psi_i & \boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta} + \sigma^2_{\nu} \\ \boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta} + \sigma^2_{\nu} & \boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta} + \sigma^2_{\nu}
\end{pmatrix}
\Bigg].
\end{equation*}
We assume all the sources of errors $(e_i,\nu_i,\boldsymbol{\eta}_i)$ for $i=1,...,m$ are mutually independent throughout the rest of the paper.
\begin{remark}
Eq. (\ref{eqn:FHlog}) is a Fay-Herriot model for $z_i$; however, the parameter of interest is $Y_i:=\exp(\theta_i)$ rather than $\theta_i.$ Slud and Maiti (\citeyear{slud2006mean}) and Ghosh et al. (\citeyear{ghosh2015benchmarked}) used a similar model in the absence of measurement errors in the covariates.
\end{remark}
Next, we give the conditional distribution $[\theta_i|z_i]$, which later justifies our Bayesian interpretation of the unknown parameter of interest $Y_i$:
\begin{equation*}
\theta_i | z_i \sim N \Big[ \boldsymbol{W}_i^\top \boldsymbol{\beta} + \frac{\boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta} + \sigma^2_{\nu}}{\boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta} + \sigma^2_{\nu}+\psi_i} (z_i-\boldsymbol{W}_i^\top \boldsymbol{\beta}), \boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta} + \sigma^2_{\nu} - \frac{(\boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta} + \sigma^2_{\nu})^2}{\boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta} + \sigma^2_{\nu}+\psi_i} \Big],
\end{equation*}
i.e.
\begin{equation*}
\theta_i | z_i \sim N \Big(\gamma_i z_i + (1-\gamma_i) \boldsymbol{W}_i^\top \boldsymbol{\beta} , \gamma_i \psi_i \Big),
\end{equation*}
where $\gamma_i = (\boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta} + \sigma^2_{\nu})/(\boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta} + \sigma^2_{\nu}+\psi_i)$.
Recall that the parameter of interest is
$Y_i:=\exp(\theta_i)$ after transforming from the logarithmic scale back to the original scale. Therefore, the corresponding Bayes predictor is given by $\hat{Y}_i:=E(Y_i|z_i)$. By using the moment generating function of the normal distribution of $\theta_i|z_i$, the Bayes predictor has the form of $\hat{Y}_i=\exp\{\gamma_i z_i+(1-\gamma_i)\boldsymbol{W}_i^\top \boldsymbol{\beta}+\gamma_i\psi_i/2\}$.
In practice, $\boldsymbol{W}_{i}$ is unobserved, and since $E(\boldsymbol{w}_i)=\boldsymbol{W}_i$, we can replace it with the observed $\boldsymbol{w}_i$. Also, $\boldsymbol{\beta}$ and $\sigma^2_{\nu}$ are unknown, and we need to replace them with their consistent estimators. Therefore, the EB predictor of $Y_i$ is
\begin{align} \label{eqn:EBpredictor}
\hat{Y}_i^{\text{EB}} = \exp\Big\{\hat{\gamma}_i z_i+(1-\hat{\gamma}_i) \boldsymbol{w}_i^\top \hat{\boldsymbol{\beta}}+ \frac{\hat{\gamma}_i \psi_i}{2} \Big\}.
\end{align}
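As an illustration only (and not part of the estimation methodology developed below), Eq. (\ref{eqn:EBpredictor}) can be evaluated directly once $\hat{\boldsymbol{\beta}}$ and $\hat{\sigma}^2_{\nu}$ are available. The following Python sketch assumes generic array inputs; the function and variable names are hypothetical.
\begin{verbatim}
import numpy as np

def eb_predictor(z, w, Sigma, psi, beta_hat, sigma2_nu_hat):
    """EB predictor of Y_i = exp(theta_i) under the log-measurement-error model.

    z: (m,) direct estimates on the log scale, z_i = log y_i
    w: (m, p) observed log-covariates; Sigma: (m, p, p) matrices Sigma_i
    psi: (m,) known sampling variances psi_i
    """
    # shrinkage factor gamma_i = (b'Sigma_i b + s2) / (b'Sigma_i b + s2 + psi_i)
    bSb = np.einsum('j,ijk,k->i', beta_hat, Sigma, beta_hat)
    gamma = (bSb + sigma2_nu_hat) / (bSb + sigma2_nu_hat + psi)
    # exp{gamma*z + (1-gamma)*w'beta + gamma*psi/2}, cf. Eq. (eqn:EBpredictor)
    return np.exp(gamma * z + (1.0 - gamma) * (w @ beta_hat) + gamma * psi / 2.0)
\end{verbatim}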
\subsection{Estimation of Unknown Parameters}
\label{sec:parameters}
In this section, we discuss estimation of the unknown parameters $\boldsymbol{\beta}$ and $\sigma^2_{\nu}.$ First, an estimator of $\boldsymbol{\beta}$ is obtained by solving the equation
\begin{equation} \label{eqn:bb}
\sum_{i=1}^{m} \Big[ D_i \Big(\boldsymbol{w}_i \boldsymbol{w}_i^\top - \Sigma_i \Big) \Big] \boldsymbol{\beta} = \sum_{i=1}^{m} D_i \boldsymbol{w}_i z_i.
\end{equation}
The justification for Eq. (\ref{eqn:bb}) is as follows. Let $\boldsymbol{z}=(z_1,...,z_m)^\top$ and $\boldsymbol{W}^\top = (\boldsymbol{W}_1, ..., \boldsymbol{W}_m)$. Then, $\boldsymbol{z} \sim N_m(\boldsymbol{W} \boldsymbol{\beta}, D^{-1})$ where $D^{-1}= \text{diag}(D_1^{-1}, ..., D_m^{-1})$ and $D_i^{-1}= \boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta} + \sigma^2_{\nu}+\psi_i$. Hence, an estimator of $\boldsymbol{\beta}$ is obtained by solving
\begin{equation*}
\boldsymbol{\beta} = \Big(\boldsymbol{W}^\top D \boldsymbol{W}\Big)^{-1} \boldsymbol{W}^\top D \boldsymbol{z}= \Big(\sum_{i=1}^{m} D_i \boldsymbol{W}_i \boldsymbol{W}_i^\top \Big)^{-1} \sum_{i=1}^{m} D_i \boldsymbol{W}_i z_i.
\end{equation*}
Now, notice that $E(\boldsymbol{w}_i \boldsymbol{w}_i^\top)= \boldsymbol{W}_i \boldsymbol{W}_i^\top + \Sigma_i$ and $E(\boldsymbol{w}_i)=\boldsymbol{W}_i$. Hence, we estimate $\boldsymbol{\beta}$ from
\begin{equation*}
\sum_{i=1}^{m} \Big[ D_i \Big(\boldsymbol{w}_i \boldsymbol{w}_i^\top - \Sigma_i \Big) \Big] \boldsymbol{\beta} = \sum_{i=1}^{m} D_i \boldsymbol{w}_i z_i.
\end{equation*}
However, $D_i$ is not known as both $\boldsymbol{\beta}$ and $\sigma^2_{\nu}$ are unknown. Take $E(z_i-\boldsymbol{w}_i^\top \boldsymbol{\beta})^2 = \sigma^2_{\nu}+\psi_i$. Then $\sigma^2_{\nu}$ can be estimated from
\begin{equation} \label{eqn:sigma}
m^{-1} \sum_{i=1}^{m} \Big(z_i-\boldsymbol{w}_i^\top \boldsymbol{\beta} \Big)^2 - m^{-1} \sum_{i=1}^{m} \psi_i.
\end{equation}
If the above is less than zero, estimate $\sigma_\nu^2$ as zero.
One can estimate $\boldsymbol{\beta}$ and $\sigma^2_{\nu}$ by iteratively solving the Eqs. (\ref{eqn:bb}) and (\ref{eqn:sigma}).
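For concreteness, a minimal computational sketch of this iterative scheme (in Python, with hypothetical names, and intended only to illustrate Eqs. (\ref{eqn:bb}) and (\ref{eqn:sigma})) is as follows.
\begin{verbatim}
import numpy as np

def estimate_beta_sigma(z, w, Sigma, psi, n_iter=50, tol=1e-8):
    """Iteratively solve Eq. (bb) for beta and Eq. (sigma) for sigma^2_nu."""
    m, p = w.shape
    beta = np.linalg.lstsq(w, z, rcond=None)[0]  # naive start ignoring Sigma_i
    sigma2_nu = 1.0
    for _ in range(n_iter):
        # D_i^{-1} = beta' Sigma_i beta + sigma2_nu + psi_i
        bSb = np.einsum('j,ijk,k->i', beta, Sigma, beta)
        D = 1.0 / (bSb + sigma2_nu + psi)
        # sum_i D_i (w_i w_i' - Sigma_i) beta = sum_i D_i w_i z_i
        lhs = np.einsum('i,ij,ik->jk', D, w, w) - np.einsum('i,ijk->jk', D, Sigma)
        rhs = np.einsum('i,ij,i->j', D, w, z)
        beta_new = np.linalg.solve(lhs, rhs)
        # moment estimator of sigma^2_nu, truncated at zero
        resid = z - w @ beta_new
        sigma2_new = max(float(np.mean(resid ** 2) - np.mean(psi)), 0.0)
        done = np.linalg.norm(beta_new - beta) + abs(sigma2_new - sigma2_nu) < tol
        beta, sigma2_nu = beta_new, sigma2_new
        if done:
            break
    return beta, sigma2_nu
\end{verbatim}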
\subsection{Mean Squared Prediction Error of the EB Predictor}
\label{sec:mspe}
In this section, we first define the MSPE of the EB predictor $\hat{Y}_i^{\text{EB}}.$
Second, we show that the cross-product term of the MSPE of the EB predictor $\hat{Y}_i^{\text{EB}}$ is exactly zero. Now, we introduce notation that will be used throughout the rest of the paper. Let
\begin{align*}
M_{1i} & :=E[(\hat{Y}_i-Y_i)^2|z_i] \\
& =\exp\Big\{\psi_i\gamma_i\Big\}\Big[\exp\Big\{\psi_i\gamma_i\Big\}-1\Big] \exp\Big\{2\Big[\gamma_i z_i+(1-\gamma_i) \boldsymbol{W}_i^\top \boldsymbol{\beta} \Big]\Big\} \\
M_{2i} & := E[(\hat{Y}_i^{\text{EB}}-\hat{Y}_i)^2|z_i], \quad M_{3i}:= E[(\hat{Y}_i^{\text{EB}}-\hat{Y}_i)(\hat{Y}_i-Y_i)|z_i].
\end{align*}
Note that we estimate $\boldsymbol{W}_i$ with $\boldsymbol{w}_i$, and that the term $M_{1i}$ depends on the area-specific response variable $z_i$, unlike in Jiang et al. (\citeyear{jiang2002unified}); consequently, its estimator has a bias of order $O(m^{-1})$. Since we wish to include the uncertainty of the EB predictor $\hat{Y}_i^{\text{EB}}$ with respect to the unknown parameters $\boldsymbol{\beta}$ and $\sigma^2_{\nu}$, we decompose the MSPE into three terms using Definition \ref{eqn:mspe}.
\begin{definition}
\label{eqn:mspe}
The MSPE of the EB predictor $\hat{Y}^{\text{EB}}_i$ is
\begin{align*}
\text{MSPE}(\hat{Y}^{\text{EB}}_i) & =E[(\hat{Y}_i^{\text{EB}}-Y_i)^2|z_i]\notag \\
& \equiv E[(\hat{Y}_i-Y_i)^2|z_i] + E[(\hat{Y}_i^{\text{EB}}-\hat{Y}_i)^2|z_i] + 2 E[(\hat{Y}_i^{\text{EB}}-\hat{Y}_i)(\hat{Y}_i-Y_i)|z_i] \notag \\
& = M_{1i}+M_{2i}+2M_{3i},
\end{align*}
where we show below that $M_{3i} = 0.$
\end{definition}
To show that the cross-product term $M_{3i}$ is zero, recall that the Bayes predictor is $$\hat{Y}_i = E[Y_i \mid z_i], \quad \text{so that} \quad E[\hat{Y}_i-Y_i \mid z_i] = 0.$$ Consider
\begin{align*}
M_{3i} &= E[(\hat{Y}_i^{\text{EB}}-\hat{Y}_i)(\hat{Y}_i-Y_i)|z_i] \\
& = E\Big\{(\hat{Y}_i^{\text{EB}}-\hat{Y}_i) E\Big((\hat{Y}_i-Y_i)\mid z_i\Big)\Big|z_i\Big\}=0.
\end{align*}
\section{Jackknife and Parametric Bootstrap Estimators of the MSPE}
\label{sec:estimation-mspe}
In this section, we propose two estimators for the MSPE of the EB predictor $\hat{Y}_i^{\text{EB}}.$
First, we propose a jackknife estimator of the MSPE. Second, we propose a parametric bootstrap estimator of the MSPE. The expectation of the proposed measure of uncertainty based on both methods is correct up to the order $O(m^{-1})$ for the EB predictor.
\subsection{Jackknife Estimator of the MSPE}
\label{sec:jack}
In this section, we propose a jackknife estimator of the MSPE of the EB predictor $\hat{Y}_i^{\text{EB}},$ denoted by $\text{mspe}_J(\hat{Y}_i^{\text{EB}}).$
We prove that the bias of $\text{mspe}_J(\hat{Y}_i^{\text{EB}})$ is of order $O(m^{-1})$ under six regularity conditions.
We propose the following jackknife estimator:
\begin{align} \label{eqn:jack-estimator}
\text{mspe}_J(\hat{Y}^{\text{EB}}_i)=\hat{M}_{1i,J}+\hat{M}_{2i,J} \quad \text{where}
\end{align}
\begin{align*}
\hat{M}_{1i,J}=
\hat{M}_{1i}-\frac{m-1}{m}\sum\limits_{j=1}^{m}(\hat{M}_{1i}-\hat{M}_{1i(-j)}) \quad
\text{and} \quad
\hat{M}_{2i,J}=\frac{m-1}{m}\sum\limits_{j=1}^{m}(\hat{Y}^{\text{EB}}_i-\hat{Y}^{\text{EB}}_{i(-j)})^2,
\end{align*}
where $(-j)$ denotes all areas except the $j$-th area. Therefore, let
\begin{align*} \label{eq:M1ihat}
\hat{M}_{1i} & :=M_{1i}(\hat{\sigma}^2_{\nu},\hat{\boldsymbol{\beta}})\\
& =\exp\Big\{\psi_i\hat{\gamma}_i\Big\}\Big[\exp\Big\{\psi_i\hat{\gamma}_i\Big\}-1\Big] \exp\Big\{2\Big[\hat{\gamma}_i z_i+(1-\hat{\gamma}_i) \boldsymbol{w}_i^\top \hat{\boldsymbol{\beta}}\Big]\Big\}, \numberthis \\
\hat{M}_{1i(-j)} & = \exp\Big\{\psi_i\hat{\gamma}_{i(-j)}\Big\}\Big[\exp\Big\{\psi_i\hat{\gamma}_{i(-j)}\Big\}-1\Big] \exp\Big\{2\Big[\hat{\gamma}_{i(-j)} z_i+(1-\hat{\gamma}_{i(-j)}) \boldsymbol{w}_i^\top \hat{\boldsymbol{\beta}}_{(-j)} \Big]\Big\}, \quad \text{and} \\
\hat{Y}^{\text{EB}}_{i(-j)} & = \exp \Big\{\hat{\gamma}_{i(-j)} z_i + (1-\hat{\gamma}_{i(-j)}) \boldsymbol{w}_i^\top \hat{\boldsymbol{\beta}}_{(-j)} + \frac{\psi_i \hat{\gamma}_{i(-j)}}{2} \Big\}.
\end{align*}
Note that in all $[\cdot]_{(-j)}$ quantities, the estimators of $\phi=(\boldsymbol{\beta},\sigma^2_{\nu})^\top$ are computed from the data of all areas other than area $j$ before being plugged into the corresponding expressions.
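For concreteness, the bookkeeping behind Eq. (\ref{eqn:jack-estimator}) can be sketched as follows in Python; the helper functions \texttt{fit}, \texttt{eb\_pred}, and \texttt{m1\_hat} (computing the parameter estimates, $\hat{Y}^{\text{EB}}_i$, and $\hat{M}_{1i}$, respectively) are hypothetical placeholders, and the sketch is an illustration rather than a reference implementation.
\begin{verbatim}
import numpy as np

def jackknife_mspe(z, w, Sigma, psi, i, fit, eb_pred, m1_hat):
    """mspe_J(Y_i^EB) = M1_{i,J} + M2_{i,J} for a single area i."""
    m = len(z)
    beta, s2 = fit(z, w, Sigma, psi)
    M1 = m1_hat(z[i], w[i], Sigma[i], psi[i], beta, s2)
    Y_eb = eb_pred(z[i], w[i], Sigma[i], psi[i], beta, s2)
    d_M1, d_Y2 = 0.0, 0.0
    for j in range(m):                      # delete area j, keep (z_i, w_i) fixed
        keep = np.arange(m) != j
        beta_j, s2_j = fit(z[keep], w[keep], Sigma[keep], psi[keep])
        d_M1 += M1 - m1_hat(z[i], w[i], Sigma[i], psi[i], beta_j, s2_j)
        d_Y2 += (Y_eb - eb_pred(z[i], w[i], Sigma[i], psi[i], beta_j, s2_j)) ** 2
    M1_J = M1 - (m - 1) / m * d_M1          # bias-corrected hat{M}_{1i}
    M2_J = (m - 1) / m * d_Y2               # variability of the EB predictor
    return M1_J + M2_J
\end{verbatim}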
We now define some notation and then establish six regularity conditions used in Theorem \ref{thm:thm1}.
Let $\ell_i(\cdot|z_i)$ denote the conditional likelihood function for area $i$. We define the corresponding first, second, and third derivatives of the conditional likelihood function by $\ell_i^{'}(\phi| z_i)$, $\ell_i^{''}(\phi| z_i),$ and $\ell_i^{'''}(\phi| z_i)$, respectively.
Now, assume the following six regularity conditions:
\noindent\textit{Condition 1.} Define $\phi^\top=(\boldsymbol{\beta},\sigma^2_{\nu}) \in \Theta$ where $\Theta$ is a compact set such that $\Theta \subseteq (\mathbb{R}^{p},\mathbb{R}^{+})$.
\noindent\textit{Condition 2.} Assume $\hat{\phi}$ is a consistent estimator for $\phi,$ i.e. $\hat{\phi} \xrightarrow{p} \phi$.
\noindent\textit{Condition 3.} Assume $\ell_i^{'}(\phi| z_i)$ and $\ell_i^{''}(\phi| z_i)$ both exist for $i = 1,\ldots, m$, almost surely in probability.
\noindent\textit{Condition 4.} Assume $E\{\ell_i^{'}(\phi|z_i)| \phi\}=0$ for $i = 1,\ldots, m$.
\noindent\textit{Condition 5.} Assume $\ell_i^{''}(\phi| z_i)$ is a continuous function of $\phi$ for $i = 1, ..., m$, almost surely in probability, where $E\{\ell_i^{''}(\phi|z_i)\}$ is positive definite, uniformly bounded away from $0$, and is a measurable function of $z_i$.
\noindent\textit{Condition 6.} Assume $E\{|\ell_i^{'}(\phi|z_i)|^{4+\delta}\}$, $E\{|\ell_i^{''}(\phi|z_i)|^{4+\delta}\}$, and $E\{\sup_{c \in (-\epsilon,\epsilon)}|\ell_i^{'''}(\phi+c|z_i)|^{4+\delta}\}$ are uniformly bounded for $i = 1,\ldots, m$, for some $\epsilon > 0$ and $\delta > 0$.
\begin{theorem}
\label{thm:thm1}
Assume Conditions 1--6 hold. Then
$$E[\text{mspe}_J(\hat{Y}_i^{\text{EB}})]=\text{MSPE}(\hat{Y}_i^{\text{EB}})+O(m^{-1}).$$
\end{theorem}
\begin{proof}
Define
\begin{align*}
E(\text{mspe}_J(\hat{Y}_i^{\text{EB}})) & \equiv E(\hat{M}_{1i,J}+\hat{M}_{2i,J}) \\
& = E \Big(\hat{M}_{1i}-\frac{m-1}{m} \sum_{j=1}^{m} [(\hat{M}_{1i}-\hat{M}_{1i(-j)})|z_i] \Big) \\
& \quad + \frac{m-1}{m} E \Big(\sum_{j=1}^{m} [(\hat{Y}_i^{\text{EB}}-\hat{Y}_{i(-j)}^{\text{EB}})^2|z_i] \Big).
\end{align*}
\noindent Also, define a remainder term $r_i$ that is bounded in absolute value by $R_i$ such that,
$$|r_i| \leq \max\{1,|\ell^{'}(\phi|z_i)|^3,|\ell^{''}(\phi|z_i)|^3,|\ell^{'''}(\phi|z_i)|^3\} \equiv R_i.$$
First, we prove $\hat{M}_{1i,J}$ has a bias of order $O(m^{-1}).$
Using a Taylor series expansion, we find that
\begin{align*}
\hat{M}_{1i}=M_{1i}+M_{1i}^{' \top}(\phi) (\hat{\phi}-\phi) + \frac{1}{2} M_{1i}^{'' \top}(\phi) (\hat{\phi}-\phi)^2 + \frac{1}{6} M_{1i}^{''' \top}(\phi^*) (\hat{\phi}-\phi)^3,
\end{align*}
for $\phi^*$ between $\phi$ and $\hat{\phi}$. Also, $M_{1i}^{' \top}(\phi)$, $M_{1i}^{'' \top}(\phi)$, and $M_{1i}^{''' \top}(\phi^*)$ stand for the first, second, and third derivatives of $M_{1i}$ with respect to $\phi$.
Let $\hat{\phi}^{\top}~=~(\hat{\boldsymbol{\beta}},\hat{\sigma}^2_{\nu})$, and it follows that
\begin{align*}
\hat{M}_{1i}-\hat{M}_{1i(-j)}=\hat{M}^{' \top}_{1i}(\hat{\phi})(\hat{\phi}-\hat{\phi}_{(-j)})+\frac{1}{2} \hat{M}_{1i}^{'' \top}(\hat{\phi}) (\hat{\phi}-\hat{\phi}_{(-j)})^2+\frac{1}{6} \hat{M}_{1i}^{''' \top}(\hat{\phi}^*_{(-j)}) (\hat{\phi}-\hat{\phi}_{(-j)})^3,
\end{align*}
for some $\hat{\phi}^*_{(-j)}$ between $\hat{\phi}_{(-j)}$ and $\hat{\phi}$.
In order to approximate the solution $\hat{\phi}$ of the equation $f(\tau)=\sum_{i=1}^{m}\ell'(\tau|z_i)=0$ in iteration $(\xi+1)$, we use Householder's method (Householder (\citeyear{householder1970numerical}), Theorem 4.4.1). See also Theorem 1 of Lohr and Rao (\citeyear{lohr2009jackknife}):
\begin{align*}
\tau_{\xi+1}= \tau_{\xi}-\frac{f(\tau_{\xi})}{f'(\tau_{\xi})} \Big[1+\frac{\tau_{\xi} f''(\tau_{\xi})}{2 \{f'(\tau_{\xi})\}^2} \Big].
\end{align*}
By taking the initial value $\tau_{\xi}=\phi$, we have
\begin{align*}
\hat{\phi}-\phi & =-\frac{\sum\limits_{i=1}^{m}\ell_i^{'}(\phi|z_i)}{\sum\limits_{i=1}^{m} \ell_i^{''}(\phi|z_i)}\Bigg\{1+\frac{\sum\limits_{k=1}^{m}\ell^{'}_{k}(\phi| z_k) \sum\limits_{r=1}^{m}\ell^{'''}_{r}(\phi|z_r)}{2(\sum\limits_{k=1}^{m}\ell^{''}_{k}(\phi| z_k))^2}\Bigg\}+O_p(|\hat{\phi}-\phi|^3), \quad \text{and} \\
\hat{\phi}-\hat{\phi}_{(-j)} & =\frac{\ell^{'}_j(\hat{\phi}|z_j)}{\sum\limits_{k \neq j}^{m}\ell^{''}_k(\hat{\phi}|z_k)} \Bigg[ 1-\frac{\ell^{'}_j(\hat{\phi}|z_j) \sum\limits_{k \neq j}^{m} \ell^{'''}_k(\hat{\phi}| z_k)}{2( \sum\limits_{k \neq j}^{m} \ell^{''} _k(\hat{\phi}| z_k) )^2} \Bigg]+ O_p(|\hat{\phi}-\hat{\phi}_{(-j)}|^3).
\end{align*}
By taking conditional expectation and using Theorem 2.1 of Jiang et al. (\citeyear{jiang2002unified}), we find that
\begin{align*}
E(\hat{\phi}-\phi|z_i)=\frac{-\ell_i^{'}(\phi|z_i)+\varphi}{\sum\limits_{i=1}^{m}E\{\ell^{''}_i(\phi|z_i)\}}+r_i \, o(m^{-1}),
\end{align*}
where
\begin{align*}
\varphi= \frac{\sum\limits_{j=1}^{m}E[\ell^{'}_j(\phi|z_j) \ell^{''}_j(\phi|z_j)]}{\sum\limits_{j=1}^{m}E\{\ell^{''}_j(\phi|z_j)\}}-\frac{\sum\limits_{j=1}^{m}\sum\limits_{k=1}^{m}E{[\ell^{'}_j(\phi|z_j)]^2}E(\ell^{'''}_k(\phi|z_k))}{2(\sum\limits_{j=1}^{m}E\{\ell^{''}_j(\phi|z_j)\})^2},
\end{align*}
\begin{align*}
\sum\limits_{j \neq i}^{m}E(\hat{\phi}-\hat{\phi}_{(-j)}|z_i) & =\frac{-\ell^{'}_i(\phi|z_i)+\varphi}{\sum\limits_{j=1}^{m}E\{\ell^{''}_j(\phi|z_j)\}}+r_i \, o(m^{-1}), \quad \text{and} \\
E(\hat{\phi}_{(-i)}-\hat{\phi}|z_i) & = \frac{\ell^{'}_i(\phi|z_i)}{\sum\limits_{j=1}^{m}E\{\ell^{''}_j(\phi|z_j)\}}+r_i \, o(m^{-1}).
\end{align*}
By combining the above results, we find that
\begin{align*}
E\{\hat{M}_{1i,J}-M_{1i}|z_i\} &= -M_{1i}^{'}(\phi|z_i) \ell^{'}_i(\phi|z_i)/ \varphi +r_i \, o(m^{-1}).\\
\text{Hence,} \quad E(\hat{M}_{1i,J}) & = M_{1i}+O(m^{-1}).
\end{align*}
Second, we prove $\hat{M}_{2i,J}$ has a bias of order $o(m^{-1})$. Let
\begin{align*}
\hat{Y}_i^{\text{EB}}-\hat{Y}_{i(-j)}^{\text{EB}} := h(\hat{\phi}|z_i)-h(\hat{\phi}_{(-j)}|z_i),
\end{align*}
and $h(\phi|z_i)=E(Y_i^{\text{EB}}|z_i,\phi)$. Using a Taylor series expansion, we find that
\begin{align*}
\hat{Y}_i^{\text{EB}}-\hat{Y}_{i(-j)}^{\text{EB}}=h^{' \top} (\hat{\phi}|z_i) (\hat{\phi}-\hat{\phi}_{(-j)})+\frac{1}{2}h^{'' \top}(\hat{\phi}^*_{(-j)}|z_i) (\hat{\phi}-\hat{\phi}_{(-j)})^2,
\end{align*}
where
\begin{align*}
h^{' \top}(\hat{\phi}|z_i)=\Big(\frac{\partial h(\hat{\phi}|z_i)}{\partial \boldsymbol{\beta}}, \frac{\partial h(\hat{\phi}|z_i)}{\partial \sigma^2_{\nu}}\Big), \quad h^{'' \top}(\hat{\phi}|z_i)=\Big(\frac{\partial(\partial h(\hat{\phi}|z_i))}{\partial^2 \boldsymbol{\beta}}, \frac{\partial(\partial h(\hat{\phi}|z_i))}{\partial^2 \sigma^2_{\nu}}\Big),
\end{align*}
and $\hat{\phi}^*_{(-j)}$ is between $\hat{\phi}_{(-j)}$ and $\hat{\phi}.$
Using an additional Taylor series expansion, we find that
\begin{align*}
\sum\limits_{j=1}^{m}E\big\{(\hat{Y}_{i(-j)}^{\text{EB}}-\hat{Y}_i^{\text{EB}})^2|z_i\big\} & =\big\{h^{' \top}(\phi|z_i)\big\}^2 \times \frac{\sum\limits_{j=1}^{m}E\{(\ell^{'}_j(\phi|z_j))^2\}}{\varphi^2}+r_i \, o(m^{-1}). \quad \text{Similarly,}\\
E\big\{(\hat{Y}_{i}^{\text{EB}}-\hat{Y}_i)^2|z_i\big\} & =\big\{h^{' \top}(\phi|z_i)\big\}^2 \times \frac{\sum\limits_{j=1}^{m}E\{(\ell^{'}_j(\phi|z_j))^2\}}{\varphi^2}+r_i \, o(m^{-1}).
\end{align*}
By combining the above results, we find that
\begin{align*}
E(\hat{M}_{2i,J}) &= M_{2i}+o(m^{-1}).
\end{align*}
Finally,
\begin{align*}
E(\text{mspe}_J(\hat{Y}_i^{\text{EB}})) & = E(\hat{M}_{1i,J})+E(\hat{M}_{2i,J}) \\
& = \{M_{1i}+O(m^{-1}) \} + \{ M_{2i}+o(m^{-1}) \} \\
& = M_{1i}+M_{2i} + O(m^{-1}).\\
\end{align*}
Hence, $E[\text{mspe}_J(\hat{Y}_i^{\text{EB}})] =\text{MSPE}(\hat{Y}_i^{\text{EB}})+O(m^{-1})$.
\end{proof}
\subsection{Parametric Bootstrap Estimator of the MSPE}
\label{sec:boot}
In this section, we propose a parametric bootstrap estimator of the MSPE of the EB predictor $\hat{Y}_i^{\text{EB}}$, which we denote by $\text{mspe}_B (\hat{Y}_i^{\text{EB}}).$ We prove that the bias is of order $O(m^{-1}).$ Specifically, we extend Butar and Lahiri (\citeyear{butar2003measures}) to obtain a parametric bootstrap estimator for our proposed EB predictor.
To introduce the parametric bootstrap method, consider the following bootstrap model:
\begin{align*}
\label{eqn:Bootmodel}
z_i^{\star}|\boldsymbol{w}_i^{\star}, \nu_i^{\star} & \stackrel{ind}{\sim} N(\boldsymbol{w}_i^{{\star} \top} \hat{\boldsymbol{\beta}} +\nu_i^{\star}, \psi_i)\\
\boldsymbol{w}_i^{\star} & \stackrel{ind}{\sim} N_p(\boldsymbol{W}_i, \Sigma_i) \\
\nu_i^{\star} & \stackrel{ind}{\sim} N(0, \hat{\sigma}^2_{\nu}). \numberthis
\end{align*}
Recall that from Definition \ref{eqn:mspe}, $\text{MSPE}(\hat{Y}_i^{\text{EB}}) = M_{1i} + E[(\hat{Y}_i^{\text{EB}} - \hat{Y}_i)^2|z_i]$ since $M_{3i}=0$.
We use the parametric bootstrap twice. First, we use it to estimate $M_{1i}$ in order to correct the bias of $\hat{M}_{1i}:=M_{1i}(\hat{\sigma}^2_{\nu}, \hat{\boldsymbol{\beta}})$ (see Eq. (\ref{eq:M1ihat})). Second, we use it to estimate $E[(\hat{Y}_i^{\text{EB}}-\hat{Y}_i)^2|z_i]$. More specifically,
we propose to estimate $M_{1i}$ by
$2{M}_{1i}(\hat{\sigma}^2_{\nu},\hat{\boldsymbol{\beta}})-E_{\star}[M_{1i}(\hat{\sigma}^{\star 2}_{\nu},\hat{\boldsymbol{\beta}}^{\star})|z_i^{\star}]$,
and $E[(\hat{Y}_i^{\text{EB}}-\hat{Y}_i)^2|z_i]$ by $E_{\star}[(\hat{Y}_i^{\text{EB} \star}-\hat{Y}_i^{\text{EB}})^2|z_i^{\star}]$, where $E_{\star}$ denotes that the expectation is computed with respect to model in Eq. (\ref{eqn:Bootmodel}) and $\hat{Y}_i^{\text{EB} \star}=\exp \{\hat{\gamma}_i^{\star}z_i+(1-\hat{\gamma}_i^{ \star}) \boldsymbol{w}_i^\top \hat{\boldsymbol{\beta}}^{\star} +\psi_i \hat{\gamma}_i^{ \star}/2 \}$.
In addition, $\hat{\gamma}_i^{ \star}=(\hat{\sigma}^{\star 2}_{\nu}+\hat{\boldsymbol{\beta}}^{\star \top} \Sigma_i \hat{\boldsymbol{\beta}}^{\star})/(\hat{\sigma}^{\star 2}_{\nu}+\hat{\boldsymbol{\beta}}^{\star \top} \Sigma_i \hat{\boldsymbol{\beta}}^{\star}+\psi_i),$ where $\hat{\boldsymbol{\beta}}^{\star}$ and $\hat{\sigma}^{\star 2}_{\nu}$ are estimators of $\boldsymbol{\beta}$ and $\sigma^2_{\nu}$ with respect to the parametric bootstrap model in Eq. (\ref{eqn:Bootmodel}).
Our proposed estimator of $\text{MSPE}(\hat{Y}_i^{\text{EB}})$ is
\begin{equation} \label{eq:3.4}
\text{mspe}_B(\hat{Y}_i^\text{EB}) = 2{M}_{1i}(\hat{\sigma}^2_{\nu},\hat{\boldsymbol{\beta}})-E_{\star}[M_{1i}(\hat{\sigma}^{\star 2}_{\nu},\hat{\boldsymbol{\beta}}^{\star})|z_i^{\star}]+E_{\star}[(\hat{Y}_i^{\text{EB} \star}-\hat{Y}_i^{\text{EB}})^2|z_i^{\star}],
\end{equation}
which has bias of order $O(m^{-1})$, as shown in Theorem \ref{thm:thm2}.
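Before turning to the proof, we note that the two $E_{\star}$ terms in Eq. (\ref{eq:3.4}) are approximated by Monte Carlo in practice. The following Python sketch reuses the hypothetical helper functions introduced above and treats the observed $\boldsymbol{w}_i$ as a proxy for the unknown $\boldsymbol{W}_i$ when generating from Eq. (\ref{eqn:Bootmodel}); it is an illustration of this approximation only.
\begin{verbatim}
import numpy as np

def bootstrap_mspe(z, w, Sigma, psi, i, fit, eb_pred, m1_hat, B=1000, seed=0):
    """Monte Carlo approximation of mspe_B(Y_i^EB) in Eq. (3.4)."""
    rng = np.random.default_rng(seed)
    m = len(z)
    beta, s2 = fit(z, w, Sigma, psi)
    M1 = m1_hat(z[i], w[i], Sigma[i], psi[i], beta, s2)
    Y_eb = eb_pred(z[i], w[i], Sigma[i], psi[i], beta, s2)
    M1_star, M2_star = 0.0, 0.0
    for _ in range(B):
        # bootstrap model: w* ~ N(w_i, Sigma_i), nu* ~ N(0, s2),
        # z*|w*, nu* ~ N(w*'beta + nu*, psi_i)
        w_star = np.array([rng.multivariate_normal(w[k], Sigma[k]) for k in range(m)])
        nu_star = rng.normal(0.0, np.sqrt(s2), size=m)
        z_star = w_star @ beta + nu_star + rng.normal(0.0, np.sqrt(psi))
        beta_b, s2_b = fit(z_star, w_star, Sigma, psi)
        M1_star += m1_hat(z[i], w[i], Sigma[i], psi[i], beta_b, s2_b)
        M2_star += (eb_pred(z[i], w[i], Sigma[i], psi[i], beta_b, s2_b) - Y_eb) ** 2
    return 2.0 * M1 - M1_star / B + M2_star / B
\end{verbatim}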
\begin{theorem}
\label{thm:thm2}
Assume $E_{\star}(\hat{\sigma}_{\nu}^{\star 2}-\hat{\sigma}^2_{\nu})=O_p(m^{-1})$ and $E_{\star}(\hat{\boldsymbol{\beta}}^{\star}-\hat{\boldsymbol{\beta}})=O_p(m^{-1})$. The bootstrap estimator of the MSPE has bias of order $O(m^{-1})$, i.e.
$$E[\text{mspe}_B(\hat{Y}_i^{\text{EB}})]= \text{MSPE}(\hat{Y}_i^{\text{EB}}) + O(m^{-1}).$$
\end{theorem}
\begin{proof}
First, note that
\begin{equation*}
E_{\star}[M_{1i}(\hat{\sigma}^{\star 2}_{\nu},\hat{\boldsymbol{\beta}}^{\star})|z_i^{\star}] = M_{1i}(\hat{\sigma}^2_{\nu},\hat{\boldsymbol{\beta}}) + O_p(m^{-1}).
\end{equation*}
Let $R^{\star}_m$ be a remainder term with $R^{\star}_m=O_{p^{\star}}(m^{-1})$, i.e., $mR^{\star}_m$ is bounded in probability under the parametric bootstrap model in Eq. (\ref{eqn:Bootmodel}). Consider the following Taylor series expansion:
\begin{equation*}
\hat{Y}_i^{\text{EB} \star} - \hat{Y}_i^{\text{EB}} = (\hat{\phi}^{\star} - \hat{\phi})^\top h'(\hat{\phi}| z_i) + R^{\star}_m,
\end{equation*}
such that $\hat{\phi}^{\star \top}=(\hat{\boldsymbol{\beta}}^{\star},\hat{\sigma}^{\star 2}_{\nu})$.
Using an argument similar to the proof of Theorem \ref{thm:thm1},
\begin{align}
\label{eqn:star}
E_{\star}[(\hat{Y}_i^{\text{EB} \star}-\hat{Y}_i^{\text{EB}})^2|z_i^{\star}]&=\hat{M}_{2i}+o_p(m^{-1}) \quad \text{and} \quad
E_{\star}[\hat{M}^{\star}_{1i}|z_i^{\star}] =\hat{M}_{1i}+O_p(m^{-1}).
\end{align}
Substituting Eq. (\ref{eqn:star}) into Eq. (\ref{eq:3.4}), we find that
\begin{align*}
\text{mspe}_B(\hat{Y}_i^{\text{EB}})& =2\hat{M}_{1i}-[\hat{M}_{1i}+O_p(m^{-1})]+\hat{M}_{2i}+o_p(m^{-1}) \\
& = \hat{M}_{1i}+\hat{M}_{2i}+O_p(m^{-1}).
\end{align*}
This suggests that
$$ E[\text{mspe}_B(\hat{Y}_i^{\text{EB}})]= \text{MSPE}(\hat{Y}_i^{\text{EB}})+O(m^{-1}). $$
\end{proof}
\section{Experiments}
\label{sec:experiments}
In this section, we investigate the performance of the EB predictors in comparison to the direct estimators through design-based and model-based simulation studies. In addition, we evaluate the MSPE estimators using both a jackknife and parametric bootstrap.
\subsection{Design-Based Simulation Study}
\label{sec:design}
In this section, we consider a design-based simulation study using the CoG data set as described in Sec. \ref{sec:cog}.
\subsubsection{Design-Based Simulation Setup}
We describe the design-based simulation setup. The parameter of interest is the average number of full-time employees per government at the state level from the 2012 data set. The covariate
is the average number of full-time employees per government at the state level from the 2007 data set. We observe skewed patterns in the average number of full-time employees in both 2007 and 2012, which motivates our proposed framework.
For the response variable, we select a total sample of 7,000 governmental units proportionally allocated to the states; for the covariates, we select a total sample of 70,000 units, and the survey-weighted averages are then calculated. The measurement error variance $\Sigma_i$ is obtained from a Taylor series approximation, where $\text{Var}(x_i)$ is estimated from the formula for the variance under simple random sampling without replacement within each state. The $\psi_i$'s are estimated by a Generalized Variance Function (GVF) method (see Fay and Herriot (\citeyear{fay1979estimates})). We assume the sampling variances to be known throughout the estimation procedure.
For the design-based simulation, we draw 1,000 samples and estimate the parameters from each sample. We evaluate our proposed predictors by the empirical MSE for each state $i$:
\begin{align*}
\text{EMSE}(\hat{Y}_i) =\frac{1}{R}\sum\limits_{r=1}^{R}\Big[\hat{Y}_i^{(r)}-Y_i^{(r)}\Big]^2,
\end{align*}
where $R=1,000$ is the total number of replications, and $\hat{Y}_i$ is the estimator of $Y_i$.
In addition, when the parametric bootstrap is used, we take $B=1,000$ bootstrap samples. We use the same number of replications and bootstrap samples in the design-based and model-based simulation studies.
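For completeness, the empirical MSE above is simply the Monte Carlo average of squared prediction errors over the $R$ replications; a short Python sketch (with hypothetical array names) is given below.
\begin{verbatim}
import numpy as np

def empirical_mse(Y_hat, Y_true):
    """EMSE_i = (1/R) sum_r (Yhat_i^(r) - Y_i^(r))^2 for every area i.

    Y_hat, Y_true: arrays of shape (R, m) over R replications and m areas.
    """
    return np.mean((Y_hat - Y_true) ** 2, axis=0)
\end{verbatim}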
\subsubsection{Design-Based Simulation Results}
In this section, we provide the results of the design-based simulation study.
\paragraph{Investigating the performance of the proposed estimators}
Recall that the covariate of interest is the average number of full-time employees per government at the state level from the 2007 data set, and we wish to predict the average number of full-time employees per government at the state level in 2012. To do so, we report the predictors for each state as well as their corresponding EMSEs in Tables \ref{table1} and \ref{table2}. More specifically, we compare the following three estimators:
\begin{itemize}
\item[1)] $y_i$: the direct estimator,
\item[2)] $\tilde{Y}_i$: the EB predictor that treats the observed covariate $w_i$ as if it were measured without error, ignoring $\Sigma_i$ in our model,
\item[3)] $\hat{Y}_i^{\text{EB}}$: the EB predictor that accounts for measurement error in the observed covariate $w_i$, with $\Sigma_i$ included in our model.
\end{itemize}
We observe that in most cases the $\text{EMSE}(\hat{Y}_i^{\text{EB}})$ is smaller than the $\text{EMSE}(\tilde{Y}_i)$. However, we observe that our proposed EB predictor does not always outperform the direct estimator, which we further explore in our model-based simulation studies in Sec.~\ref{sec:model}.
\begin{table}[ht]
\centering
\caption{Estimators and their empirical MSEs from CoG. Note that $n_i$ is the sample size per state, and the MSEs are rescaled logarithmically.}
\label{table1}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{@{} ccccccccccc @{}}
\hline
$i$ & State & $n_i$ & $y_i$ & $\tilde{Y}_i$ & $\hat{Y}_i^{\text{EB}}$ & $\text{EMSE}(y_i)$ & $\text{EMSE}(\tilde{Y}_i)$ & $\text{EMSE}(\hat{Y}_i^{\text{EB}})$ \\ \specialrule{.1em}{.05em}{.05em}
1 & RI & 10 & 191.641 & 202.928 & 204.907 & 6.523 & 5.390 & 5.103 \\
2 & AK & 14 & 132.301 & 120.112 & 123.684 & 4.501 & 6.153 & 5.793 \\
3 & NV & 15 & 420.299 & 422.992 & 431.912 & 8.077 & 8.170 & 8.449 \\
4 & MD & 19 & 824.939 & 784.779 & 794.784 & 8.050 & 5.524 & 6.503 \\
5 & DE & 27 & 64.273 & 72.674 & 73.130 & 3.003 & 5.113 & 5.182 \\
6 & LA & 40 & 363.838 & 342.056 & 343.344 & 6.017 & 0.845 & -2.863 \\
7 & VA & 40 & 560.881 & 526.612 & 530.160 & 6.669 & 8.265 & 8.148 \\
8 & NH & 44 & 80.695 & 83.645 & 83.857 & -3.994 & 2.254 & 2.387 \\
9 & UT & 47 & 126.045 & 117.729 & 118.512 & 2.827 & 2.873 & 2.461 \\
10 & AZ & 49 & 332.610 & 349.704 & 350.915 & 6.270 & 7.380 & 7.439 \\
11 & CT & 49 & 187.500 & 191.494 & 192.054 & 4.851 & 5.457 & 5.528 \\
12 & SC & 53 & 231.790 & 221.881 & 222.574 & 5.158 & 6.279 & 6.218 \\
13 & WV & 53 & 83.568 & 84.071 & 84.315 & 2.754 & 2.482 & 2.336 \\
14 & WY & 55 & 43.084 & 39.629 & 39.795 & 2.493 & 3.873 & 3.824 \\
15 & VT & 59 & 27.717 & 27.529 & 27.574 & 0.474 & 0.751 & 0.688 \\
16 & ME & 65 & 38.892 & 40.713 & 40.789 & 2.966 & 1.899 & 1.840 \\
17 & NM & 67 & 93.817 & 93.360 & 93.913 & 3.496 & 3.331 & 3.529 \\
18 & MA & 70 & 237.066 & 231.213 & 231.999 & 4.218 & 1.741 & 2.310 \\
19 & TN & 72 & 245.317 & 232.133 & 233.405 & 4.257 & 6.144 & 6.023 \\
20 & NC & 76 & 372.264 & 357.791 & 359.375 & 5.846 & 6.997 & 6.700 \\
21 & MS & 78 & 137.977 & 132.143 & 132.454 & 3.891 & 0.299 & 0.774 \\
22 & ID & 92 & 46.450 & 45.642 & 45.784 & 2.451 & 1.910 & 2.016 \\
23 & AL & 93 & 162.841 & 160.389 & 160.818 & 3.356 & 2.131 & 2.406 \\
24 & MT & 99 & 23.296 & 22.457 & 22.508 & 0.461 & 1.481 & 1.433 \\
25 & KY & 104 & 119.415 & 121.105 & 121.576 & 1.935 & 2.928 & 3.134 \\
26 & NJ & 108 & 235.215 & 245.163 & 245.595 & 3.898 & 5.663 & 5.713 \\
27 & GA & 109 & 255.141 & 243.862 & 244.539 & 5.686 & 6.696 & 6.648 \\
\hline
\end{tabular}
\end{table}
\begin{table}[ht]
\centering
\caption{Estimators and their empirical MSEs from CoG. Note that $n_i$ is the sample size per state, and the MSEs are rescaled logarithmically (continued).}
\label{table2}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{@{} ccccccccccc @{}}
\hline
$i$ & State & $n_i$ & $y_i$ & $\tilde{Y}_i$ & $\hat{Y}_i^{\text{EB}}$ & $\text{EMSE}(y_i)$ & $\text{EMSE}(\tilde{Y}_i)$ & $\text{EMSE}(\hat{Y}_i^{\text{EB}})$ \\ \specialrule{.1em}{.05em}{.05em}
28 & AR & 118 & 68.911 & 65.223 & 65.362 & -3.160 & 2.719 & 2.646 \\
29 & OR & 120 & 72.363 & 72.483 & 72.653 & 1.376 & 1.493 & 1.648 \\
30 & FL & 124 & 377.822 & 365.035 & 366.658 & 7.658 & 8.148 & 8.092 \\
31 & OK & 144 & 76.686 & 74.610 & 74.803 & -2.447 & 1.155 & 0.926 \\
32 & WA & 145 & 93.681 & 91.238 & 91.476 & 3.077 & 3.920 & 3.852 \\
33 & SD & 153 & 14.757 & 14.078 & 14.142 & -1.338 & -3.583 & -4.546 \\
34 & IA & 155 & 56.219 & 55.686 & 55.790 & -4.924 & -0.962 & -1.332 \\
35 & CO & 184 & 74.373 & 71.579 & 71.908 & 0.789 & 2.907 & 2.746 \\
36 & NE & 197 & 33.713 & 31.017 & 31.215 & 1.326 & -0.561 & -1.167 \\
37 & IN & 217 & 75.994 & 78.712 & 78.830 & 0.619 & 0.609 & 0.775 \\
38 & ND & 217 & 8.336 & 7.421 & 7.469 & -1.704 & -1.436 & -1.644 \\
39 & MI & 236 & 85.314 & 97.710 & 97.705 & -1.220 & 4.945 & 4.944 \\
40 & WI & 246 & 61.304 & 61.320 & 61.451 & 2.486 & 2.495 & 2.569 \\
41 & NY & 270 & 280.982 & 278.784 & 283.973 & 6.457 & 6.275 & 6.681 \\
42 & MO & 278 & 61.972 & 62.846 & 62.946 & 0.093 & 1.307 & 1.409 \\
43 & MN & 289 & 42.025 & 45.747 & 45.727 & -1.572 & 2.367 & 2.355 \\
44 & OH & 302 & 108.261 & 111.756 & 111.905 & 2.364 & 3.820 & 3.864 \\
45 & KS & 309 & 30.804 & 30.253 & 30.321 & 1.859 & 2.253 & 2.208 \\
46 & CA & 338 & 238.284 & 248.558 & 248.800 & 6.991 & 6.244 & 6.223 \\
47 & TX & 374 & 221.105 & 204.983 & 205.689 & 2.570 & 5.965 & 5.892 \\
48 & PA & 386 & 81.653 & 83.551 & 83.668 & 2.888 & 3.628 & 3.666 \\
49 & IL & 567 & 64.650 & 67.278 & 67.172 & 1.490 & 3.110 & 3.064 \\
\hline
\end{tabular}
\end{table}
\paragraph{Jackknife versus parametric bootstrap estimators}
Next, we consider the performance of the MSPE estimators, i.e., the jackknife and the bootstrap, with respect to the true MSE, i.e., $\text{EMSE}(\hat{Y}_i^{\text{EB}})$, in Figure \ref{figure1}. The results are given on the logarithmic scale, and we observe that the distribution of the jackknife estimator is closer to the distribution of the true MSE than that of the bootstrap.
Therefore, we recommend the jackknife, given that it only slightly overestimates the true MSE.
As already mentioned, given that our proposed estimator does not uniformly beat the direct estimator in terms of the EMSE, we conduct a model-based simulation study in Sec.~\ref{sec:model} to investigate this and provide further insight.
\begin{figure}
\caption{Left: the jackknife and the bootstrap estimators versus the true MSE ($\text{EMSE}(\hat{Y}_i^{\text{EB}})$), on the logarithmic scale.}
\label{figure1}
\end{figure}
\subsection{Model-Based Simulation Study}
\label{sec:model}
In this section, we describe our model-based simulation study. First, we further investigate the performance of the proposed EB predictor $\hat{Y}_i^{\text{EB}}.$ Second, we compare the proposed jackknife and parametric bootstrap estimators, $\text{mspe}_J (\hat{Y}_i^{\text{EB}})$ and $\text{mspe}_B (\hat{Y}_i^{\text{EB}}).$
Third, we investigate how often the variance estimates $\hat{\sigma}_{\nu}^2$ are zero. Finally, we investigate how the regression parameter changes when $\Sigma_i$ is misspecified.
Our goal through this model-based simulation study is to understand how one could improve the EB predictor through future research, and to further understand its underlying behavior.
\subsubsection{Model-Based Simulation Setup}
\label{sec:setup}
In this section, we provide the setup of our model-based simulation study in Table \ref{table3}. This setup follows Eqs. (\ref{eqn:FHlog}) and (\ref{eqn:ME}).
We are interested in comparing the following four estimators:
\begin{itemize}
\item[1)] $y_i$: the direct estimator,
\item[2)] $\hat{Y}_i$: the EB predictor computed with the true covariate $W_i$,
\item[3)] $\tilde{Y}_i$: the EB predictor that treats the observed covariate $w_i$ as if it were measured without error, ignoring $\Sigma_i$ in our model,
\item[4)] $\hat{Y}_i^{\text{EB}}$: the EB predictor that accounts for measurement error in the observed covariate $w_i$, with $\Sigma_i$ included in our model.
\end{itemize}
We compare these four estimators (for each area $i$) using the empirical MSE:
$$\text{EMSE}(\hat{Y}_i) = \frac{1}{R}\sum\limits_{r=1}^{R}\Big[\hat{Y}_{i}^{(r)}-Y_{i}^{(r)}\Big]^2,$$
where $\hat{Y}_i$ is the estimator of $Y_i$.
\begin{table}[hb]
\centering
\caption{Model-based simulation setup with definition of parameters and distributions} \label{table3}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{l}
\\\hline
\textbf{Simulation Setup:}\\
Generate $W_i$ from a Normal(5,9)
and $\psi_i$ from a Gamma(4.5,2)\\
Take $\theta_i = 3 W_i+\nu_i$, $z_i=\theta_i+e_i$, and $w_i=W_i+\eta_i$\\
$\nu_i \sim \text{Normal}(0,\sigma^2_{\nu})$,
$e_i \sim \text{Normal}(0,\psi_i)$, and $\eta_i \sim \text{Normal}(0,\Sigma_i)$\\
Take $y_i=\exp(z_i)$ and $Y_i=\exp(\theta_i)$\\
\hline\hline
\textbf{Parameter Definition:} \\
Let $m=20,50,100,$ and $500$ (number of small areas)\\
Let $\sigma^2_{\nu}=2$ (for all cases) \\
Let $k \in \{0, 20, 50, 80, 100 \}$ \\
$\Sigma_i \in \{0, d\}$, where $d = 2$ or $4$ \\
Allow $k\%$ of the $\Sigma_i$'s, chosen at random, to receive $d$
and the rest $0$. \\
\hline
\end{tabular}
\end{table}
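As a complement to Table \ref{table3}, one replication of this data-generating process can be sketched as follows in Python. The parameterizations Normal$(5,9)$ (mean $5$, variance $9$) and Gamma$(4.5,2)$ (shape $4.5$, scale $2$) are assumptions, since the table does not state them explicitly, and the sketch covers the single-covariate case used in the simulations.
\begin{verbatim}
import numpy as np

def generate_replication(m, k, d, sigma2_nu=2.0, beta=3.0, seed=None):
    """One replication of the model-based simulation setup in Table 3."""
    rng = np.random.default_rng(seed)
    W = rng.normal(5.0, 3.0, size=m)          # sd = sqrt(9) (assumed)
    psi = rng.gamma(4.5, 2.0, size=m)         # shape 4.5, scale 2 (assumed)
    Sigma = np.zeros(m)                       # k% of areas receive d, the rest 0
    idx = rng.choice(m, size=int(round(k / 100 * m)), replace=False)
    Sigma[idx] = d
    nu = rng.normal(0.0, np.sqrt(sigma2_nu), size=m)
    e = rng.normal(0.0, np.sqrt(psi))
    eta = rng.normal(0.0, np.sqrt(Sigma))
    theta = beta * W + nu                     # theta_i = 3 W_i + nu_i
    z = theta + e                             # z_i = theta_i + e_i
    w_obs = W + eta                           # w_i = W_i + eta_i
    return np.exp(z), np.exp(theta), w_obs, psi, Sigma  # y_i, Y_i, w_i, psi_i, Sigma_i
\end{verbatim}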
In order to evaluate the jackknife and parametric bootstrap estimators of $\hat{Y}_i^{\text{EB}}$, we consider the relative bias, denoted by $\text{RB}_J(\hat{Y}_i^{\text{EB}})$ and $\text{RB}_B(\hat{Y}_i^{\text{EB}})$, respectively. More specifically, the relative biases are defined as follows for each area $i$:
\begin{align*}
\text{RB}_J(\hat{Y}_i^{\text{EB}}) &= \Big\{\frac{1}{R} \sum_{r=1}^{R} \text{mspe}_J^{(r)} (\hat{Y}_i^{\text{EB}(r)}) - \text{EMSE}(\hat{Y}_i^{\text{EB}}) \Big\} \Big/\text{EMSE}(\hat{Y}_i^{\text{EB}}), \\
\text{RB}_B(\hat{Y}_i^{\text{EB}}) &= \Big\{\frac{1}{R} \sum_{r=1}^{R} \text{mspe}_B^{(r)} (\hat{Y}_i^{\text{EB}(r)}) - \text{EMSE}(\hat{Y}_i^{\text{EB}}) \Big\} \Big/\text{EMSE}(\hat{Y}_i^{\text{EB}}).
\end{align*}
\subsubsection{Model-Based Simulation Results} \label{sec:model-results}
In this section, we summarize our results of the model-based simulation study.
\paragraph{Investigating the performance of the proposed estimators}
In this section, we investigate the performance of the proposed estimators.
Table \ref{table4} provides the four estimators given in Sec.~\ref{sec:setup} with their empirical MSEs, where we average the results over all the small areas and re-scale them using the logarithmic scale. When $k=0$, the MSEs for all EB predictors are the same since the term $\Sigma_i$ vanishes and $w_i$ is the same as $W_i$. Overall, as the value of $k$ increases, the empirical MSE increases for almost all predictors. We observe that there are cases in which the EB predictors cannot outperform the direct estimators based on the simulation results.
Table \ref{table4} illustrates that there are cases in which the $\text{EMSE}(\hat{Y}_i^{\text{EB}})$ is larger than the $\text{EMSE}(y_i).$ In these cases, the EB predictors cannot outperform the direct estimators because of errors propagated through the term $\boldsymbol{\beta}^\top \Sigma_i \boldsymbol{\beta},$ which enters the EB predictors through the shrinkage factor $\gamma_i$; see expression (\ref{eqn:EBpredictor}). Therefore, as the measurement error variance $\Sigma_i$ increases, the MSE of our proposed EB predictors can also increase. This is the main point one should keep in mind when using a log-model with measurement error. In order to prevent such behavior, a further adjustment should be made to the EB predictors, which we discuss in Sec.~\ref{sec:dis}.
\begin{table}[h!]
\centering
\caption{Estimators and their empirical MSEs from model-based simulations. The results are averaged over all the small areas and re-scaled logarithmically.} \label{table4}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{@{} cccccccccc @{}}
\\\hline
m & k & $y_i$ & $\hat{Y}_i$ & $\tilde{Y}_i$ & $\hat{Y}_i^{\text{EB}}$ & $\text{EMSE}(y_i)$ & $\text{EMSE}(\hat{Y}_i)$ & $\text{EMSE}(\tilde{Y}_i)$ & $\text{EMSE}(\hat{Y}_i^{\text{EB}})$ \\
\specialrule{.1em}{.05em}{.05em}
20 &0 & 46.766 & 50.626 & 50.626 & 50.626 & 102.088 & 111.09 & 111.09 & 111.09 \\
&20 & 53.054 & 50.139 & 49.229 & 49.864 & 115.974 & 109.338 & 104.398 & 108.425\\
&50 & 42.232 & 41.174 & 42.464 & 43.110 & 93.496 & 91.163 & 92.948 & 94.223 \\
&80 & 42.519 & 44.285 & 43.469 & 44.794 & 93.881 & 97.557 & 95.152 & 97.495\\
&100 & 44.682 & 41.81 & 45.073 & 46.331 & 99.13 & 92.161 & 99.938 & 102.48 \\
\hline
50&0 & 49.624 & 47.732 & 47.732 & 47.732 & 110.061 & 106.167 & 106.167 & 106.167 \\
&20 & 44.292 & 42.851 & 44.702 & 45.098 & 97.644 & 94.541 & 99.321 & 100.255 \\
&50 & 45.512 & 44.677 & 46.071 & 47.59 & 99.615 & 98.56 & 101.351 & 105.547\\
&80 & 42.703 & 41.773 & 45.289 & 46.469 & 94.339 & 93.268 & 101.068 & 103.615\\
&100 & 43.83 & 43.201 & 44.779 & 45.68 & 97.144 & 95.961 & 98.347 & 100.319 \\
\hline
100&0 & 42.635 & 42.241 & 42.241 & 42.241 & 94.625 & 92.802 & 92.802 & 92.802\\
&20 & 46.216 & 45.601 & 46.179 & 47.427 & 103.264 & 101.412 & 103.035 & 106.172\\
&50 & 50.93 & 45.982 & 49.347 & 48.519 & 113.343 & 103.08 & 109.718 & 108.214\\
&80 & 50.132 & 46.586 & 48.137 & 48.966 & 111.618 & 103.055 & 106.712 & 108.454\\
&100 & 44.925 & 44.711 & 48.009 & 49.031 & 100.996 & 100.764 & 107.472 & 109.515 \\
\hline
500 & 0 & 47.338 & 45.253 & 45.253 & 45.253 & 107.275 & 103.465 & 103.465 & 103.465 \\
& 20 & 46.382 & 45.369 & 47.575 & 47.652 & 104.607 & 102.867 & 107.896 & 108.169 \\
& 50 & 53.045 & 46.854 & 50.662 & 49.126 & 119.208 & 106.396 & 114.378 & 110.006 \\
& 80 & 47.766 & 44.868 & 47.706 & 49.950 & 108.289 & 104.921 & 107.454 & 112.805 \\
& 100 & 48.586 & 45.313 & 49.372 & 50.449 & 109.795 & 103.378 & 111.069 & 113.218 \\
\hline
\end{tabular}
\end{table}
\paragraph{Jackknife versus parametric bootstrap estimators}
We compare the jackknife MSPE estimator of the EB predictor $\hat{Y}^{\text{EB}}_i$ to that of the bootstrap using the relative bias (see Table \ref{table6}).
In addition, we consider box plots for the jackknife and bootstrap MSPE estimators of the EB predictor $\hat{Y}^{\text{EB}}_i,$ where we compare these to box plots of the true values (see Figure \ref{figure3}). Both Table \ref{table6} and Figure \ref{figure3} illustrate that the bootstrap produces a large number of negative values, which is due to the construction of $\hat{M}_{1i}.$
We find that the bootstrap grossly underestimates the true values, whereas the jackknife only slightly overestimates them. This could be due to generating data from the normal distribution and the non-linear transformation in the model. Thus, we recommend the jackknife in practice.
\paragraph{Proportion of zero estimates of $\hat{\sigma}_{\nu}^2$} Here, we investigate the proportion of zero estimates of $\sigma^2_{\nu}$ obtained when iteratively solving Eqs. (\ref{eqn:bb}) and (\ref{eqn:sigma}). Figure \ref{figure2} illustrates that as the number of small areas increases, the proportion of zero estimates decreases. More specifically, we observe when $m=20$ and as $k$ increases, $\hat{Y}_i^{\text{EB}}$ and $\hat{Y}_i$ tend to have a proportion of zero estimates of $\sigma^2_{\nu}$ between 0.3 and 0.5.
When $m=50$ and as $k$ increases, $\hat{Y}_i^{\text{EB}}$ and $\hat{Y}_i$ tend to have a proportion of zero estimates of $\sigma^2_{\nu}$ between 0.15 and 0.4.
When $m=100$ and as $k$ increases, $\hat{Y}_i^{\text{EB}}$ and $\hat{Y}_i$ tend to have a proportion of zero estimates of $\sigma^2_{\nu}$ between 0.05 and 0.3.
When $m=500$ and as $k$ increases, $\hat{Y}_i^{\text{EB}}$ and $\hat{Y}_i$ tend to have a proportion of zero estimates of $\sigma^2_{\nu}$ between 0 and 0.05.
One should be cautious of this in practical applications, and adjusting for this is of the interest of future work.
\begin{figure}
\caption{The proportion of zero estimates of $\sigma^2_{\nu}$ as the number of small areas $m$ and the measurement-error percentage $k$ vary.}
\label{figure2}
\end{figure}
\paragraph{The effect of misspecification of $\Sigma_i$ on $\beta$} \label{sec:beta}
We investigate the effect of mis-specifying the variance $\Sigma_i$ on the estimation of the regression parameter $\beta.$ To accomplish this, we
conduct an empirical study based on the proposed model-based simulation study in Table \ref{table3} for the EB predictor $\hat{Y}^{\text{EB}}_i$.
Assume $\beta=3,$ and we consider two sets of experiments for each value of $k,$ which are summarized in Table \ref{table7}. Recall that $\Sigma_i \in \{0,d \}.$
Denote the first set of experiments by
$A1, B1, C1, D1$ and $E1,$ where we assume $d=2.$ Denote the misspecified value of $d$ by $d_{\text{mis}} = 4.$ Denote the second set of experiments by
$A2, B2, C2, D2$ and $E2,$ where we assume $d=4$ and $d_{\text{mis}} = 2.$
We conduct both sets of experiments for $m=20$ and $500$.
For each experiment, we estimate the unknown parameter $\beta$ under the following two settings:
(1) the true value of $d$, yielding the estimate $\hat{\beta}$, and (2) the misspecified value $d_{\text{mis}}$, yielding the estimate $\hat{\beta}_{\text{mis}}$.
Then we compute the average absolute difference between the respective $\beta$'s by considering the following:
$$ 100 \times \frac{1}{R}\sum\limits_{r=1}^{R}\Big|\hat{\beta}^{(r)}-\hat{\beta}_{\text{mis}}^{(r)}\Big|.$$
In addition, we compute the magnitude of bias related to $\hat{\beta}$ and $\hat{\beta}_{\text{mis}}$ with respect to the true value of $\beta=3$ as follows
$$100 \times \frac{1}{R} \sum_{r=1}^{R} \Big(\hat{\beta}^{(r)} -3 \Big) \quad \text{and} \quad 100 \times \frac{1}{R} \sum_{r=1}^{R} \Big( \hat{\beta}^{(r)}_{\text{mis}}-3\Big). $$
Table \ref{table7} illustrates that misspecification of $\Sigma_i$ leads to bias in the estimate of $\beta.$ When the magnitude of measurement error is zero (i.e., $k=0$), there is no difference between the estimates of $\beta$ obtained using $d$ or $d_{\text{mis}}$. On the other hand, when the magnitude of $k$ increases and we have more uncertainty in the error variance $\Sigma_i$, the values of $\hat{\beta}$ and $\hat{\beta}_{\text{mis}}$ diverge further from one another, and the magnitude of the bias increases. Also, we observe that as the number of small areas increases, the magnitude of the bias decreases.
One could address this bias by constructing an adaptive estimator for $\hat{\beta}_{\text{mis}}$ whose bias is corrected through techniques such as the bootstrap, and by developing a test of parameter specification.
\begin{table}[ht]
\centering
\caption{Comparison of the proposed jackknife and bootstrap estimators from model-based simulations. The results are averaged over all small areas and rescaled by the logarithm of the absolute value. Note that ``*'' denotes that the original value is negative.}
\label{table6}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{@{} ccccccc @{}}
\\\hline
m & k & $\text{EMSE}(\hat{Y}_i^{\text{EB}})$ & $\text{mspe}_J(\hat{Y}_i^{\text{EB}})$ & $\text{mspe}_B(\hat{Y}_i^{\text{EB}})$ & $\text{RB}_J(\hat{Y}_i^{\text{EB}})$ & $\text{RB}_B(\hat{Y}_i^{\text{EB}})$ \\\specialrule{.1em}{.05em}{.05em}
20 & 0 & 111.09 & 120.731 & 118.478* & 9.641 & 7.389* \\
&20 & 108.425 & 115.222 & 115.884* & 6.796 & 7.459*\\
&50 & 94.223 & 107.245 & 111.255* & 13.021 & 17.032*\\
&80 & 97.495 & 113.06 & 129.772* & 15.565 & 32.277*\\
&100 & 102.48 & 115.536* & 119.84* & 13.056* & 17.36*\\
\hline
50 &0 & 106.167 & 112.318* & 116.383* & 6.154* & 10.216* \\
&20 & 100.255 & 110.449 & 106.743* & 10.194 & 6.489* \\
&50 & 105.547 & 115.365* & 118.838* & 9.818* & 13.291*\\
&80 & 103.615 & 114.127 & 112.717* & 10.512 & 9.102*\\
&100 & 100.319 & 110.454* & 112.552* & 10.134* & 12.233*\\
\hline
100&0 & 92.802 & 98.67 & 102.932* & 5.866 & 10.13*\\
&20 & 106.172 & 106.788* & 108.623* & 1.048* & 2.534* \\
&50 & 108.214 & 119.666 & 120.032* & 11.452 & 11.819*\\
&80 & 108.454 & 117.223* & 119.418* & 8.769* & 10.964*\\
&100 & 109.515 & 119.028 & 117.071* & 9.513 & 7.557*\\
\hline
500& 0 & 103.465 & 108.38 & 110.455* & 4.908 & 6.991* \\
& 20 & 108.169 & 119.672 & 115.892 & 11.502 & 7.722 \\
& 50 & 110.006 & 121.191 & 125.754* & 11.184 & 15.747*\\
& 80 & 112.805 & 125.672* & 129.761* & 12.867* & 16.955* \\
& 100 & 113.217 & 123.037* & 124.597* & 9.819* & 11.379* \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\caption{Comparing the distribution of the jackknife and bootstrap estimators with respect to the true MSE ($\text{EMSE}(\hat{Y}_i^{\text{EB}})$).}
\label{figure3}
\end{figure}
\begin{table}[ht]
\centering
\caption{Percentage of bias related to the consequences of misspecifying the error variance $\Sigma_i$ on $\beta$ in the EB predictor $\hat{Y}^{\text{EB}}_i$ from model-based simulations. For all cases, we assume the true value of $\beta$ is 3. Also, $\sigma^2_{\nu}=2$, and $m=20$ (the smallest case) and $m=500$ (the largest case).}
\label{table7}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{@{} cccccc @{}}
\hline
m & k & Experiment & $\frac{1}{R}\sum\limits_{r=1}^{R}\Big|\hat{\beta}^{(r)}-\hat{\beta}_{\text{mis}}^{(r)}\Big|$ & $\frac{1}{R}\sum\limits_{r=1}^{R}\Big(\hat{\beta}^{(r)}-3\Big)$ & $\frac{1}{R}\sum\limits_{r=1}^{R}\Big(\hat{\beta}_{\text{mis}}^{(r)}-3\Big)$\\ \specialrule{.1em}{.05em}{.05em}
20 & $0$ & $A1 (d=2, d_{\text{mis}}=4)$ & $0$ & $-0.026$ & $-0.026$ \\
&& $A2 (d=4, d_{\text{mis}}=2)$ & $0$ & $-0.237$ & $-0.237$\\
& $20$ & $B1 (d=2, d_{\text{mis}}=4)$ & $6.034$ & $0.314$ & $-0.046$\\
&& $B2 (d=4, d_{\text{mis}}=2)$ & $5.814$ & $-0.634$ & $-0.591$\\
&$50$ & $C1 (d=2, d_{\text{mis}}=4)$ & $11.287$ & $-0.184$ & $-0.864$\\
&& $C2 (d=4, d_{\text{mis}}=2)$ & $10.977$ & $-0.157$ & $-0.394$\\
&$80$ & $D1 (d=2, d_{\text{mis}}=4)$ & $19.641$ & $0.842$ & $2.177$\\
&& $D2 (d=4, d_{\text{mis}}=2)$ & $18.722$ & $0.712$ & $0.327$\\
&$100$ & $E1 (d=2, d_{\text{mis}}=4)$ & $25.774$ & $2.650$ & $4.285$\\
&& $E2 (d=4, d_{\text{mis}}=2)$ & $25.967$ & $4.020$ & $2.937$\\
\hline
500 & $0$ & $A1 (d=2, d_{\text{mis}}=4)$ & $0$ & $0.185$ & $0.185$ \\
&& $A2 (d=4, d_{\text{mis}}=2)$ & $0$ & $-0.082$ & $-0.082$\\
& $20$ & $B1 (d=2, d_{\text{mis}}=4)$ & $1.136$ & $0.174$ & $0.118$\\
&& $B2 (d=4, d_{\text{mis}}=2)$ & $1.107$ & $0.010$ & $0.009$\\
&$50$ & $C1 (d=2, d_{\text{mis}}=4)$ & $2.200$ & $-0.138$ & $0.026$\\
&& $C2 (d=4, d_{\text{mis}}=2)$ & $2.182$ & $-0.044$ & $0.006$\\
&$80$ & $D1 (d=2, d_{\text{mis}}=4)$ & $3.365$ & $0.204$ & $-0.067$\\
&& $D2 (d=4, d_{\text{mis}}=2)$ & $3.386$ & $-0.139$ & $-0.090$\\
&$100$ & $E1 (d=2, d_{\text{mis}}=4)$ & $5.086$ & $0.174$ & $0.152$\\
&& $E2 (d=4, d_{\text{mis}}=2)$ & $4.941$ & $-0.022$ & $0.117$\\
\hline
\end{tabular}
\end{table}
\section{Discussion}
\label{sec:dis}
In this paper, in order to stabilize the skewness and achieve approximate normality in the response variable, we have first proposed an area-level log-measurement error model on the response variable, together with a measurement error model on the covariates. Second, under our proposed modeling framework, we derived the EB predictor of positive small area quantities when the covariates are subject to measurement error. Third, we proposed corresponding estimators of the MSPE using a jackknife and a bootstrap method, where we illustrated that the order of the bias is $O(m^{-1})$, with $m$ the number of small areas. Fourth, we illustrated the performance of our methodology in both design-based and model-based simulation studies, where the EMSE of the proposed EB predictor is not always uniformly better than that of the direct estimator. Our model-based simulation studies provide further investigation of and guidance on this behavior. One fruitful area of future research would be to provide a correction to the EB predictor that avoids such behavior. One way to address this issue is to estimate $\phi=(\boldsymbol{\beta},\sigma^2_{\nu})^\top$ in such a way that its order of bias is smaller than $O(m^{-1})$; this could help reduce the amount of error propagated into the EB predictor $\hat{Y}_i^{\text{EB}}$. Another way, which is less theoretically burdensome, is to estimate the covariate, since we have assumed it follows a functional measurement error model rather than a structural one. Finally, we studied the MSPE of the EB predictor using both the jackknife and the bootstrap in simulation studies, where we showed that the jackknife estimator performs better than the bootstrap one under our log model.
\end{document}
\begin{document}
\title{Local in time solution to Kolmogorov's two-equation model of turbulence}
\author{Przemys\l aw Kosewski${}^{\ast}$, Adam Kubica\footnote{Department of Mathematics and Information Sciences, Warsaw University of Technology, ul. Koszykowa 75,
00-662 Warsaw, Poland, E-mail addresses: [email protected], [email protected]} }
\maketitle
\abstract{We prove the existence of a local in time solution to Kolmogorov's two-equation model of turbulence in a three-dimensional domain with periodic boundary conditions. We apply the Galerkin method to an appropriate truncated problem. Next, we obtain estimates for the limit of the approximate solutions to ensure that it satisfies the original problem. }
\noindent Keywords: Kolmogorov's two-equation model of turbulence, local in time solution, Galerkin method.
\noindent AMS subject classifications (2010): 35Q35, 76F02.
\section{Introduction}
Firstly, we will provide a short introduction to turbulence modeling. We present the idea behind RANS (Reynolds Averaged Navier--Stokes, see \cite{WILCOX}, \cite{RANS4}, \cite{RANS7}, \cite{TurbMod}) and explain the necessity of incorporating additional equations to model turbulence. Next, we will introduce Kolmogorov's two-equation model and its connection to currently used turbulence models.
Turbulent flow is a fluid motion characterized by rapid changes in velocity and pressure. These fluctuations cause difficulties mainly in finding solutions using numerical methods, which require a dense mesh and very short time steps to properly reproduce the turbulent flow. Additionally, turbulence appears to be self-similar and displays chaotic behaviour. This bolsters the need for precise simulations.
The simplest idea for decreasing the apparent fluctuations of solutions is to consider the average values of the velocity and of the pressure. This is the case in RANS, where the average is taken with respect to time. Now, let us decompose the velocity $v$ and \m{pressure $p$:}
\[
v(x,t) = \overline{v}(x,t) + \widetilde{v}(x,t) , \hspace{0.2cm} \hspace{0.2cm}
p(x,t) = \overline{p}(x,t) + \widetilde{p}(x,t) ,
\]
where $\overline{v}$, $\overline{p}$ are time-averaged values and $\widetilde{v}$, $\widetilde{p}$ are fluctuations. We substitute the decomposed functions into the Navier--Stokes system and obtain (for details see chapter 2 of \cite{WILCOX})
\[
\partial_{t} \overline{v}
+ \overline{v} \cdot \nabla \overline{v}
- \nu \operatorname{div} D \overline{v}
+ \nabla \overline{p}
=
- \operatorname{div} \n{ \overline{\widetilde{v} \cdot \widetilde{v}}}.
\]
The last term on the right hand side can be approximated by Boussinesq approximation (see \cite{WILCOX})
\[
-\overline{\widetilde{v} \cdot \widetilde{v}}
=
\nu_T (\nabla \overline{v} + \nabla^T \overline{v} ) - \frac{2}{3} k I,
\]
where $ \nu_T = \frac{k}{\omega}$, $k$ is the turbulent kinetic energy and $\omega$ is the dissipation rate. Finally, we obtain
\eqq{
\partial_{t} \overline{v}
+ \overline{v} \cdot \nabla \overline{v}
- \nabla \cdot \left( ( \nu + \nu_T)D \overline{v} \right)
+ \nabla \left( \overline{p} + \frac{2}{3} k \right)
=
0.
}{ransEqv}
We see that to close the system we need to introduce additional equations for $\omega$ and $k$.
For further details see \cite{WILCOX} and \cite{RANS7}.
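For concreteness, the short sketch below evaluates the Boussinesq closure described above for one hypothetical mean-velocity gradient; the numerical values of the mean gradient, $k$ and $\omega$ are invented for illustration only and are not taken from any simulation discussed here.
\begin{verbatim}
import numpy as np

# hypothetical mean-velocity gradient, turbulent kinetic energy and
# dissipation rate (illustrative values only)
grad_v = np.array([[0.0, 1.2, 0.0],
                   [0.3, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
k, omega = 0.5, 4.0

nu_T = k / omega                     # eddy viscosity nu_T = k / omega
# Boussinesq approximation of the (negative) Reynolds stress tensor
stress = nu_T * (grad_v + grad_v.T) - (2.0 / 3.0) * k * np.eye(3)
print("nu_T =", nu_T)
print("Boussinesq approximation of -bar(v' v'):\n", stress)
\end{verbatim}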
Nowadays, $k-\varepsilon$ and $k - \omega$ are two of the most commonly used models to calculate $k$ and $\omega$.
They bear a strong resemblance to Kolmogorov's turbulence model in the way they deal with the diffusive terms. In both models, the equation for $k$ uses the squared matrix norm of the symmetric gradient as a source term.
In 1941 A. N. Kolmogorov introduced the following system of equations describing turbulent flow (\cite{Kolmog}; English translation in Appendix A of \cite{Spal})
\eqq{\partial_{t} v + \operatorname{div}(v\otimes v) - 2 \nu_{0} \operatorname{div} \n{ \bno D(v)} = - \nabla p , }{pa}
\eqq{ \partial_{t} \omega + \operatorname{div}(\omega v ) - \kappa_{1} \operatorname{div} \n{ \bno \nabla \omega } = - \kappa_{2} \omega^{2}, }{pb}
\eqq{\partial_{t} b + \operatorname{div}(b v ) - \kappa_{3} \operatorname{div} \n{ \bno \nabla b } = - b \omega + \kappa_{4} \bno |D(v)|^{2}, }{pc}
\eqq{\operatorname{div}{v} =0,}{pd}
where $v$ is the mean velocity, $\omega$ is the dissipation rate, $b$ represents $2/3$ of the mean kinetic energy, and $p$ is the sum of the mean pressure and $b$. The novelty of Kolmogorov's formulation is that it no longer requires prior knowledge of the length scale (the size of large eddies); it can be calculated as $\frac{\sqrt{b}}{\omega}$. Let us notice that the proposed equation for the velocity closely resembles equation (\ref{ransEqv}), which appeared in RANS. The $k-\varepsilon$ and $k - \omega$ systems provide similar equations for $\omega$ and $b$, with the addition of a source term in the equation for $\omega$.
The physical motivation of the proposed system can be found in \cite{Spal} and \cite{BuM}. A mathematical analysis of the difficulties that occur in proving the existence of solutions of such a system can also be found in \cite{BuM}.
Now, we would like to discuss the known mathematical results related to Kolmogorov's two-equation model of turbulence. There are two recent results devoted to this problem: \cite{BuM} and \cite{MiNa} (see the announcement \cite{MiNaa}), and our result is inspired by them. In the first one, the Authors consider the system in a bounded $C^{1,1}$ domain with mixed boundary conditions for $b$ and $\omega$ and a stick-slip boundary condition for the velocity $v$. In order to overcome the difficulties related to the last term on the right-hand side of (\ref{pc}), the problem is reformulated and the quantity $E:=\frac{1}{2}|v|^{2}+ \frac{2\nu_{0}}{\kappa_{4}}b$ is introduced. Then, the equation (\ref{pc}) is replaced by
\[
\partial_{t}E+ \operatorname{div}(v(E+p))- 2\nu_{0}\operatorname{div}\left( \frac{\kappa_{3} b}{\kappa_{4}\omega}\nabla b+ \frac{b}{\omega} D(v)v \right)+\frac{2\nu_{0}}{\kappa_{4}}b\omega=0.
\]
The existence of a global-in-time weak solution of the reformulated problem is established. It is also worth mentioning that in \cite{BuM} the assumption related to the initial value of $b$ tolerates the vanishing of $b_{0}$ at some points of the domain. More precisely, the existence of a weak solution is proved under the conditions $b_{0}\in L^{1}$, $b_{0}>0$ a.e. and $\ln{b_{0}}\in L^{1}$.
In the article \cite{MiNa} the Authors consider the system (\ref{pa})-(\ref{pd}) in a periodic domain. The existence of a global-in-time weak solution is proved, but due to the presence of the strongly nonlinear term $\bno |D(v)|^{2}$, the weak form of equation (\ref{pc}) has to be corrected by a positive measure $\mu$, which is zero if the weak solution is sufficiently regular. There are also estimates for $\omega$ and $b$ (see (4.2) in \cite{MiNa}). These observations are crucial in our reasoning presented below. Concerning the initial value of $b$, the assumption is that $b_{0}$ is uniformly positive.
\section{Notation and main result.}
Assume that $\Omega = \prod_{i=1}^{3}(0,L_{i}) $, \hspace{0.2cm} $L_{i}$, $T>0$ and $\Omega^{T}=\Omega \times (0,T)$. We shall consider the following problem
\eqq{\partial_{t} v + \operatorname{div}(v\otimes v) - \nu_{0} \operatorname{div} \n{ \bno D(v)} = - \nabla p , }{a}
\eqq{ \partial_{t} \omega + \operatorname{div}(\omega v ) - \kappa_{1} \operatorname{div} \n{ \bno \nabla \omega } = - \kappa_{2} \omega^{2}, }{b}
\eqq{\partial_{t} b + \operatorname{div}(b v ) - \kappa_{3} \operatorname{div} \n{ \bno \nabla b } = - b \omega + \kappa_{4} \bno |D(v)|^{2}, }{c}
\eqq{\operatorname{div}{v} =0,}{d}
in $\Omega^{T}$ with periodic boundary conditions on $\partial \Omega$ and initial conditions
\eqq{v_{|t=0}= v_{0}, \hspace{0.2cm} \hspace{0.2cm} \omega_{|t=0}= \omega_{0}, \hspace{0.2cm} \hspace{0.2cm} b_{|t=0}= b_{0}.}{e}
Here $\nu_{0}, \kappa_{1}, \dots, \kappa_{4}$ are positive constants. For simplicity, we assume further that all constants except $\kappa_{2}$ are equal to one. The reason is that the constant $\kappa_{2}$ plays an important role in the a priori estimates.
\noindent We shall show the local-in-time existence of a regular solution of problem (\ref{a})-(\ref{e}) under some assumptions imposed on the initial data. Namely, suppose that there exist positive numbers $b_{\min}$, $\omegai$, $\omegaa$ such that
\eqq{0<b_{\min}\leq b_{0}(x) , }{f}
\eqq{0<\omegai\leq \omega_{0}(x) \leq \omegaa }{g}
on $\Omegaega$ and we set
\eqq{
\begin{array}{c}
b_{\min}^{t} = \frac{b_{\min}}{(1+\kappa_{2} \omegaa t)^{\frac{1}{\kappa_{2}}}}, \hspace{0.2cm} \hspace{0.2cm}
\omegat = \frac{\omegai}{1+\kappa_{2} \omegai t}, \hspace{0.2cm} \\ \\
\omegamt = \frac{\omegaa}{1+\kappa_{2} \omegaa t}, \hspace{0.2cm} \hspace{0.2cm}
\mu^{t}_{\min} = \frac{1}{4}\frac{b_{\min}^{t}}{\omegamt}.
\end{array} }{h}
If $m\in \mathbb{N}$, then by $\mathcal{V}^{m}$ we denote the space of restrictions to $\Omega$ of the functions, which belong to the space
\eqq{\{u \in H^{m}_{loc}(\mathbb{R}^{3}): \hspace{0.2cm} u(\cdot + kL_{i}e_{i}) = u(\cdot ) \hspace{0.2cm} \m{ for } \hspace{0.2cm} k\in \mathbb{Z}, \hspace{0.2cm} i=1,2,3 \},}{j}
where $\{ e_{i}\}_{i=1}^{3}$ form a standard basis in $\mathbb{R}^{3}$. Next, we define
\eqq{\mathcal{V}kd^{m} = \{v\in \mathcal{V}^{m}:\hspace{0.2cm} \operatorname{div} v =0, \hspace{0.2cm} \int_{\Omega} vdx =0 \}.}{k}
We shall find the solution of the system (\ref{a})-(\ref{d}) such that $(v,\omega,b)\in \mathcal{X}(T)$, where
\eqq{\mathcal{X}(T)= \left( L^{2}(0,T;\mathcal{V}kdt)\times L^{2}(0,T;\mathcal{V}t)\times L^{2}(0,T;\mathcal{V}t) \right) \cap \left( H^{1}(0,T;H^{1}(\Omega)) \right)^{5}. }{i}
We shall denote by $\| \cdot \|_{k,2}$ the norm in the Sobolev space, i.e.
\eqq{
\| f \|_{k,2} = (\ndk{\nabla^{k} f } + \ndk{f})^{\frac{1}{2}},
}{norSob}
where $\| \cdot \|_{2}$ is $L^{2}$ norm on $\Omega$.
Now, we introduce the notion of solution to the system (\ref{a})-(\ref{d}). We shall show that for any $v_{0}\in \mathcal{V}kd^{2}$ and strictly positive $\omega_{0}$, $b_{0}\in \mathcal{V}^{2}$ there exist positive $T$ and $(v, \omega, b)\in \mathcal{X}(T)$ such that
\eqq{(\partial_{t} v, w) - (v\otimes v,\nabla w) + \n{ \mu D(v), D(w)} = 0 \hspace{0.2cm} \m{ for } \hspace{0.2cm} w\in \mathcal{V}kd^{1}, }{aa}
\eqq{ (\partial_{t} \omega, z) - (\omega v, \nabla z ) + \n{ \mu \nabla \omega, \nabla z } = - \kappa_{2} (\omega^{2}, z ) \hspace{0.2cm} \m{ for } \hspace{0.2cm} z\in \mathcal{V}^{1}, }{ab}
\eqq{(\partial_{t} b, q) - (b v, \nabla q ) +\n{ \mu \nabla b, \nabla q } = - (b \omega , q) + ( \mu |D(v)|^{2}, q) \hspace{0.2cm} \m{ for } \hspace{0.2cm} q\in \mathcal{V}^{1}, }{ac}
for a.a. $t\in (0,T)$, where $\mu= \frac{b}{\omega}$ and (\ref{e}) holds. Recall that $D(v)$ denotes the symmetric part of $\nabla v$ and $(\cdot , \cdot )$ is the inner product in $L^{2}(\Omega)$.
Our main result concerning the existence of local in time regular solutions is as follows.
\begin{theorem}
Suppose that $\omega_{0}$, $b_{0}\in \mathcal{V}^{2}$, $v_{0}\in \mathcal{V}kd^{2}$ and (\ref{f}), (\ref{g}) are satisfied. Then there exist positive $t^{*}$ and $(v,\omega , b)\in \mathcal{X}(t^{*})$ such that (\ref{aa})-(\ref{ac}) hold for a.a. $t\in (0,t^{*})$ and (\ref{e}) is satisfied.
Furthermore, for each $(x,t)\in \Omega\times [0,t^{*})$ the following estimates
\eqq{\frac{\omegai}{1+\kappa_{2} \omegai t} \leq \omega(x,t) \leq \frac{\omegaa}{1+\kappa_{2} \omegaa t},}{newc}
\eqq{\frac{b_{\min}}{(1+\kappa_{2} \omegaa t)^{\frac{1}{\kappa_{2}}}} \leq b(x,t) }{newd}
hold. The time of existence of the solution is estimated from below in the following sense: for each positive $\delta$ and compact $K\subseteq \{ (a,b,c): 0<a\leq b, \hspace{0.2cm} 0<c \}$ there exists positive $t^{*}kd$, which depends only on $\kappa_{2}, \Omega, \delta$ and $K$ such that if
\eqq{ \nsodk{v_{0}}+\nsodk{\omega_{0}}+\nsodk{b_{0}}\leq \delta \hspace{0.2cm} \m{ and } \hspace{0.2cm} (\omegai, \omegaa, b_{\min})\in K,}{uniformly}
then $t^{*}\geq t^{*}kd$. The Sobolev norm is defined by (\ref{norSob}).
\lambdabel{main}
\end{theorem}
We note that the last part of the theorem is needed for proving the existence of global in time solution for small data. We address this issue in another paper.
In the next section we prove the above theorem by applying the Galerkin method to an appropriate truncated problem. We obtain a priori estimates for the sequence of approximate solutions and, by a weak-compactness argument, we get a solution of the truncated problem. Finally, after proving some bounds for $\omega$ and $b$, we deduce that the obtained solution satisfies the original system of equations.
\section{Proof of the main result}
The proof of theorem~\ref{main} is based on the Galerkin method. Hence, we need a basis of the spaces $\mathcal{V}^{1}$ and $\mathcal{V}kd^{1}$. Let $\{ w_{i}\}_{i\in \mathbb{N}}$ be a system of eigenfunctions of the Stokes operator in $\mathcal{V}kd^{1}$, which
is complete and orthogonal in $\mathcal{V}kd^{1}$ and orthonormal in $L^{2}(\Omega)$ (see chap.~II.6 in \cite{Temam}). In particular, the $\{ w_{i} \}_{i\in \mathbb{N}}$ are smooth (see formula (6.17), chap. II in \cite{Temam}). By $\{ \lambda_{i} \}_{i\in \mathbb{N}}$ we denote the corresponding system of eigenvalues. Similarly, let $\{ z_{i} \}_{i\in \mathbb{N}}$ be a complete and orthogonal system in $\mathcal{V}^{1}$ which is orthonormal in $L^{2}(\Omega)$, obtained by taking the eigenvectors of the minus Laplace operator. The system of corresponding eigenvalues is denoted by $\{ \lambdaf_{i} \}_{i\in \mathbb{N}}$.
We shall find approximate solutions of (\ref{aa})-(\ref{ac}) in the following form
\eqq{v^{l}(t, x)= \sum_{i=1}^{l} c_{i}^{l}(t)w_{i}(x), \hspace{0.2cm} \hspace{0.2cm} \om^{l}(t, x)= \sum_{i=1}^{l} e_{i}^{l}(t)z_{i}(x), \hspace{0.2cm} \hspace{0.2cm} b^{l}(t, x)= \sum_{i=1}^{l} d_{i}^{l}(t)z_{i}(x).}{ad}
We have to determine the coefficients $\{ c_{i}^{l} \}_{i=1}^{l}$, $\{ e_{i}^{l} \}_{i=1}^{l}$ and $\{ d_{i}^{l} \}_{i=1}^{l}$. In order to define an approximate problem we have to introduce a few auxiliary functions. For fixed $t>0$ we denote by $\Psi_{t}= \Psi_{t}(x)$ a smooth function such that
\eqq{
\Psi_{t}(x)= \left\{
\begin{array}{rll}
\frac{1}{2}b_{\min}^{t} & \m{ for } & x<\frac{1}{2}b_{\min}^{t}, \\ x & \m{ for } & x\geq b_{\min}^{t},\\
\end{array} \right. }{defPsi}
where $b_{\min}^{t}$ is defined by (\ref{h}). We assume that the function $\Psi_{t} $ also satisfies
\eqq{ 0\leq \Psi_{t}'(x)\leq c_{0}, \hspace{0.2cm} |\Psi_{t}''(x)|\leq c_{0} (b_{\min}^{t})^{-1}, }{estiPsi}
where $c_{0}$ is a constant independent of $x$ and $t$ (see the appendix for details, formula (\ref{defPsik})). We also need smooth functions $\Phi_{t}$, $\psi_{t}$ and $\phi_{t}$ such that
\eqq{ \Phi_{t}(x)=\left\{
\begin{array}{rll}
\frac{1}{2}\omegat & \m{ for } & x< \frac{1}{2}\omegat,\\
x & \m{ for } & x \in [ \omegat, \omegamt], \\
2\omegamt & \m{ for } & x > 2\omegamt,\\
\end{array} \right. }{defPhi}
\eqq{
\psi_{t}(x)= \left\{
\begin{array}{rll}
0 & \m{ for } & x<\frac{1}{2}b_{\min}^{t}, \\ x & \m{ for } & x \geq b_{\min}^{t},\\
\end{array} \right. }{defpsi}
\eqq{ \phi_{t}(x)=\left\{
\begin{array}{rll}
0 & \m{ for } & x< \frac{1}{2}\omegat,\\
x & \m{ for } & x\geq \omegat.\\
\end{array} \right. }{defphi}
We assume that these functions additionally satisfy
\eqq{ 0\leq \Phi_{t}'(x)\leq c_{0}, \hspace{0.2cm} |\Phi_{t}''(x)|\leq c_{0} (\omegat)^{-1}, }{estiPhi}
\eqq{\psi_{t}(x)\leq x \hspace{0.2cm} \m{ for } \hspace{0.2cm} x\geq 0 , \hspace{0.2cm} \hspace{0.2cm} 0\leq \psi_{t}'(x)\leq c_{0} \hspace{0.2cm} \m{ for } \hspace{0.2cm} x\in \mathbb{R} , }{estipsi}
\eqq{\phi_{t}(x)\leq x \hspace{0.2cm} \m{ for } \hspace{0.2cm} x\geq 0 , \hspace{0.2cm} \hspace{0.2cm} 0\leq \phi_{t}'(x)\leq c_{0} \hspace{0.2cm} \m{ for } \hspace{0.2cm} x\in \mathbb{R} , }{estiphi}
for some constant $c_{0}$ (the construction of $\Phi_{t}$, $\psi_{t}$ and $\phi_{t}$ is similar to the argument from the appendix).
An approximate solution will be found in the form (\ref{ad}), where the coefficients $\{ c_{i}^{l} \}_{i=1}^{l}$, $\{ e_{i}^{l} \}_{i=1}^{l}$ and $\{ d_{i}^{l} \}_{i=1}^{l}$ are determined by the following truncated system
\eqq{(v^{l}t, w_{i}) - (v^{l}\otimes v^{l},\nabla w_{i}) + \n{ \mu^{l} D(v^{l}), D(w_{i})} = 0 , }{ca}
\eqq{ (\om^{l}t, z_{i}) - (\om^{l} v^{l}, \nabla z_{i} ) + \n{ \mu^{l} \nabla \om^{l}, \nabla z_{i} } = - \kappa_{2} (\phi_{t}^{2}(\om^{l}), z_{i} ), }{cb}
\eqq{(b^{l}t, z_{i}) - (b^{l} v^{l}, \nabla z_{i} ) +\n{ \mu^{l} \nabla b^{l}, \nabla z_{i} } = - (\psi_{t}(b^{l})\phi_{t}( \om^{l}) , z_{i}) + ( \mu^{l} |D(\vl)|^{2}, z_{i}), }{cc}
\[
c^{l}_{i}(0)= (v_{0},w_{i}), \hspace{0.2cm} e^{l}_{i}(0)= (\omega_{0},z_{i}), \hspace{0.2cm} d^{l}_{i}(0)= (b_{0},z_{i}),
\]
where $i\in\{1,\dots , l \}$ and we denote
\eqq{\mu^{l} = \frac{\Psi_{t}(b^{l})}{\Phi_{t}(\om^{l})}. }{cd}
In the computations below, the exponent $l$ systematically refers to this Galerkin approximation.
\begin{rem}
We emphasize that in order to control the second derivatives of the approximate solutions we need the conditions (\ref{estiPhi})-(\ref{estiphi}). In particular, we cannot use piecewise linear truncation functions.
\end{rem}
Firstly, we note that $\mu^{l}$ is positive and then, by standard ODE theory, the system (\ref{ca})-(\ref{cc}) has a local-in-time solution. Now, we shall obtain an estimate independent of $l$.
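As a purely illustrative aside, and not as part of the proof, the following minimal sketch shows how a Galerkin truncation of this type reduces a scalar diffusion equation with a variable positive coefficient to a finite system of ODEs for the coefficients, which can then be integrated by standard means. The one-dimensional periodic toy problem, the number of retained modes and all numerical values are hypothetical and unrelated to the system (\ref{ca})-(\ref{cc}).
\begin{verbatim}
import numpy as np

# Toy problem: u_t = (mu(x) u_x)_x on a 2*pi-periodic interval,
# truncated to the first N Fourier modes (the analogue of the index l).
n_grid, N, L = 256, 16, 2 * np.pi
x = np.linspace(0.0, L, n_grid, endpoint=False)
wavenum = np.fft.rfftfreq(n_grid, d=L / n_grid) * 2 * np.pi
mu = 1.0 + 0.5 * np.sin(x)            # positive "diffusive coefficient"
u0 = np.exp(np.cos(x))                # smooth periodic initial datum

def rhs(c):
    # coefficients of the retained modes -> right-hand side of the ODE system
    full = np.zeros(n_grid // 2 + 1, dtype=complex)
    full[:N] = c
    u_x = np.fft.irfft(1j * wavenum * full, n=n_grid)
    return (1j * wavenum * np.fft.rfft(mu * u_x))[:N]

c = np.fft.rfft(u0)[:N].copy()        # projection of the initial datum
dt, T = 1.0e-4, 0.5
for _ in range(int(T / dt)):          # explicit Euler time stepping
    c = c + dt * rhs(c)

print("norm of retained coefficients at t =", T, ":", np.linalg.norm(c))
\end{verbatim}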
\begin{lem}
The approximate solutions obtained above satisfy the following estimates
\eqq{ \ddt \ndk{v^{l} } + 2\mu^{t}_{\min} \ndk{ D(v^{l})}\leq 0,}{cf}
\eqq{ \ddt \ndk{\om^{l} } + 2\mu^{t}_{\min} \ndk{ \nabla \om^{l}}\leq 0 ,}{ch}
\eqq{\ddt \ndk{b^{l} } + 2\mu^{t}_{\min} \ndk{ \nabla b^{l}}\leq 2\nif{ b^{l}} \nif{ \mu^{l}} \ndk{ \nabla v^{l} }, }{ci}
where $\mu^{t}_{\min}$ is defined by (\ref{h}).
\lambdabel{energyesti}
\end{lem}
\begin{proof}
We multiply (\ref{ca}) by $c_{i}^{l}$, sum over $i$ and we obtain
\[
\jd \ddt \ndk{v^{l} } + (\mu^{l} D(v^{l}), D(v^{l}))=0,
\]
where we used (\ref{ad}). Applying the properties of functions $\Psi_{t}$, $\Phi_{t}$ and (\ref{h}) we get
\eqq{
\jd \ddt \ndk{v^{l} } + \mu^{t}_{\min} \ndk{ D(v^{l})}\leq 0.
}{ce} Similarly, we multiply (\ref{cb}) by $e_{i}^{l}$ and we obtain
\[
\jd \ddt \ndk{ \om^{l}} + (\mu^{l} \nablala \om^{l} , \nablala \om^{l}) = - \kappa_{2} (\phi_{t}^{2}(\om^{l}), \om^{l}).
\]
By the properties of $\phi_{t}$ the right-hand side is non-positive thus, we obtain (\ref{ch}). Finally, after multiplying (\ref{cc}) by $d_{i}^{l}$ we get
\[
\jd \ddt \ndk{ b^{l}} + (\mu^{l} \nablala b^{l} , \nablala b^{l} )= - (\psi_{t}(b^{l})\phi_{t}(\om^{l}), b^{l}) + (\mu^{l} |D(\vl)|^{2}, b^{l}).
\]
We note that $\psi_{t}(b^{l})\phi_{t}(\om^{l}) b^{l} \geq 0$ hence, we obtain
\[
\jd \ddt \ndk{ b^{l}} + \mu^{t}_{\min} \ndk{ \nablala b^{l}} \leq (\mu^{l} |D(\vl)|^{2}, b^{l} ) \leq \nif{ b^{l}} \nif{ \mu^{l}} \ndk{ \nabla v^{l} }
\]
and the proof is finished.
\end{proof}
We also need the higher order estimates.
\begin{lem}
There exist positive $t^{*}$ and $C_{*}$, which depend on $b_{\min}$, $\omegai$, $\omegaa$, $\Omega$, $\kappa_{2}$, $c_{0}$, $\nsod{v_{0}}$, $\nsod{\omegaega_{0}}$ and $\nsod{b_{0}}$ such that for each $l\in \mathbb{N}$ the following estimate
\eqq{\| v^{l}, \om^{l}, b^{l} \|_{L^{\infty}(0, t^{*}; H^{2}(\Omega))}+\| v^{l}, \om^{l}, b^{l} \|_{L^{2}(0,t^{*};H^{3}(\Omega))} +
\| v^{l}t, \om^{l}t, b^{l}t \|_{L^{2}(0,t^{*};H^{1}(\Omega))}
\leq C_{*}}{fa}
holds.
Furthermore, for each positive $\delta$ and compact \m{$K\subseteq \{ (a,b,c): 0<a\leq b, \hspace{0.2cm} 0<c \}$} there exists positive $t^{*}kd$, which depends only on $\kappa_{2}, \Omega, \delta$ and $K$ such that if
\eqq{ \nsodk{v_{0}}+\nsodk{\omega_{0}}+\nsodk{b_{0}}\leq \delta \hspace{0.2cm} \m{ and } \hspace{0.2cm} (\omegai, \omegaa, b_{\min})\in K,}{uniformly}
then $t^{*}\geq t^{*}kd$.
\lambdabel{energyhigher}
\end{lem}
Before we go to the proof of Lemma~\ref{energyhigher} we present
its idea. First, we test the equation for the approximate solution by its bi-Laplacian. Next, after integration by parts we obtain (\ref{cff}), (\ref{cg}) and (\ref{chh}). Further, we apply the lower bound for the ``diffusive coefficient'' $\mu^{l}$ (see (\ref{estimu})) and use the H\"older and Gagliardo-Nirenberg inequalities, which leads to (\ref{sumvlolbl}). To estimate the $H^{2}$-norm of $\mu^{l}$ we use the properties of $\Psi_{t}$ and $\Phi_{t}$. After applying the energy estimates from Lemma~\ref{energyesti} we obtain (\ref{caloscGro}), which leads to a uniform bound of the $H^{2}$-norm of the sequence of approximate solutions on the interval $(0,t^{*})$ for some positive $t^{*}$ (see (\ref{eg})). This immediately gives a bound in $L^{2}H^{3}$. The last step is the $l$-independent estimate of the time derivative of the approximate solutions.
\begin{proof}
We multiply the equality (\ref{ca}) by $\lambda_{i}^{2}c_{i}^{l}$ and sum over $i$
\[
(v^{l}t, \Deltak v^{l} ) - (v^{l} \partial_{t} \omegaegaimes v^{l} , \nablala \Deltak v^{l} )+ (\mu^{l} D(v^{l} ), D(\Deltak v^{l} ))=0.
\]
After integrating by parts we obtain
\[
(v^{l}t, \Deltak v^{l} ) = \jd \ddt \ndk{ \Delta v^{l} },
\]
\[
(v^{l} \partial_{t} \omegaegaimes v^{l} , \nablala \Deltak v^{l} )= ( \Delta (v^{l} \partial_{t} \omegaegaimes v^{l}) , \nablala \Delta v^{l} ),
\]
\[
(\mu^{l} D(v^{l} ), D(\Deltak v^{l} )) = ( \Delta \mu^{l} D(v^{l}), \Delta D(v^{l}))
\]
\[
+ 2 (\nabla \mu^{l} \cdot \nabla D(\vl) ,\Delta D(\vl)) + (\mu^{l} \Delta D(\vl) ,\Delta D(\vl) ).
\]
Thus, we get
\[
\jd \ddt \ndk{\Delta v^{l} } + \int_{\Omega} \mu^{l} |\Delta D(\vl) |^{2}dx
\]
\[
= -( \Delta (v^{l} \partial_{t} \omegaegaimes v^{l}) , \nablala \Delta v^{l} ) - ( \Delta \mu^{l} D(v^{l}), \Delta D(v^{l})) - 2 (\nabla \mu^{l} \cdot \nabla D(\vl) ,\Delta D(\vl)).
\]
We estimate the right-hand side
\[
|( \Delta (v^{l} \partial_{t} \omegaegaimes v^{l}) , \nablala \Delta v^{l} ) |\leq \nif{v^{l} } \nd{ \nabladv^{l} } \nd{ \nablatv^{l} } + \nck{ \nabla v^{l} } \nd{\nablat v^{l} } .
\]
Proceeding analogously we obtain
\[
\jd \ddt \ndk{ \Delta v^{l} } + \int_{\Omega} \mu^{l} | \Delta D(\vl) |^{2}dx
\]
\[
\leq \nif{v^{l} } \nd{ \nabladv^{l} } \nd{ \nablatv^{l} } + \nck{ \nabla v^{l} } \nd{\nablat v^{l} }
\]
\eqq{
+ \Big( \nd{ \Delta \mu^{l} D(\vl) } + 2 \nd{\nabla \mu^{l} \cdot \nabla D(\vl) } \Big)\nd{\Delta D(\vl)}.
}{cff}
Now, we multiply the equation (\ref{cb}) by $\lambdaf_{i}^{2} e_{i}^{l}$ and we obtain
\[
(\om^{l}t, \Deltak \om^{l}) - (\om^{l} v^{l}, \nablala \Deltak \om^{l} ) + \n{ \mu^{l} \nablala \om^{l}, \nablala \Deltak \om^{l} } = - \kappa_{2}(\phi_{t}^{2}(\om^{l}), \Deltak \om^{l} ).
\]
After integrating by parts we get
\[
(\om^{l}t, \Deltak \om^{l}) = \jd \ddt \ndk{ \Delta \om^{l} },
\]
\[
(\om^{l} v^{l}, \nablala \Deltak \om^{l} ) = (\Delta \om^{l} v^{l} , \nablala \Delta \om^{l})+ 2(\nabla v^{l} \nabla \om^{l} , \nablala \Delta \om^{l}) + (\om^{l} \Delta v^{l} , \nablala \Delta \om^{l}),
\]
\[
\n{ \mu^{l} \nablala \om^{l}, \nabla \Deltak \om^{l} } = \n{\Delta \mu^{l} \nabla \om^{l}, \nablala \Delta \om^{l} } + 2\n{ \nablak \om^{l} \nabla \mu^{l} , \nablala \Delta \om^{l} }
+ \n{ \mu^{l} \nablala \Delta \om^{l}, \nablala \Delta \om^{l} },
\]
\eqns{
-(\phi_{t}^{2}(\om^{l}), \Deltak \om^{l} )
= 2\n{ \pht( \ol) \pht( \ol)p \nabla \om^{l}, \nablala \Delta \om^{l} }
}
Thus, we may write
\[
\jd \ddt \ndk{\Delta \om^{l} } + \int_{\Omega} \mu^{l} \bk{\nablala \Delta \om^{l} } dx
\]
\[
\leq \left( \nd{\Delta \om^{l} v^{l} }+ \nd{ \nabla v^{l} \nabla \om^{l} } +\nd{\om^{l} \Delta v^{l} }+
\nd{\Delta \mu^{l} \nabla \om^{l} } \right. \hspace{4cm}
\]
\eqq{
\left. \hspace{2cm} + 2 \nd{\nablak \om^{l} \nabla \mu^{l} } + 2\kappa_{2}\nd{\pht( \ol) \pht( \ol)p \nabla \om^{l}} \right) \nd{ \nablala \Delta \om^{l}}.
}{cg}
Finally, after multiplying (\ref{cc}) by $\lambdaf^{2}_{i}d_{i}^{l}$ we obtain
\[
(b^{l}t, \Deltak b^{l}) - (b^{l} v^{l}, \Deltak \nabla b^{l} ) +\n{ \mu^{l} \nabla b^{l}, \nabla \Deltak b^{l} }
\]
\[
= - (\psi_{t}(b^{l})\phi_{t}( \om^{l}) , \Deltak b^{l}) + ( \mu^{l} |D(\vl)|^{2}, \Deltak b^{l}).
\]
We deal with the terms on the left hand-side as earlier and for the right-hand side terms we get
\[
-(\psi_{t}(b^{l})\phi_{t}( \om^{l}) , \Deltak b^{l}) = \left( \pst( \bl)p \pht( \ol) \nabla b^{l}, \nablala \Delta b^{l} \right)
+ \left( \pst( \bl) \pht( \ol)p \nabla \om^{l} , \nablala \Delta b^{l} \right),
\]
\[
( \mu^{l} |D(\vl)|^{2}, \Deltak b^{l}) = -(|D(\vl)|^{2} \nabla \mu^{l} , \nablala \Delta b^{l} ) - (\mu^{l} \nabla (|D(\vl)|^{2}) , \nablala \Delta b^{l}).
\]
Therefore, we obtain the inequality
\[
\jd \ddt \ndk{\Delta b^{l} } + \int_{\Omega} \mu^{l} \bk{\nablala \Delta b^{l} } dx
\]
\[
\leq \Big( \nd{\Delta b^{l} v^{l} }+ 2\nd{\nabla v^{l} \nabla b^{l} } + \nd{ b^{l} \Delta v^{l} } +
\nd{\Delta \mu^{l} \nabla b^{l} } + 2\nd{\nablak b^{l} \nabla \mu^{l} } + \nd{\pht( \ol) \pst( \bl)p \nabla b^{l}}
\]
\eqq{
+ \nd{\pst( \bl) \pht( \ol)p \nabla \om^{l}} + \nd{ \nabla \mu^{l} \bk{ D(\vl) } } + \nd{\mu^{l} |D(\vl)| |\nabla D(\vl)| }
\Big) \nd{ \nablala \Delta b^{l}}.
}{chh}
We note that
\eqq{\int_{\Omega} \bk{ \Delta D(\vl) } dx = \frac{1}{2} \int_{\Omega} \bk{ \nablat v^{l} } dx. }{Korn}
Indeed, integrating by parts yields
\[
2\int_{\Omega} \bk{ \Delta D(\vl) } dx = \sum_{k,m}\int_{\Omega} \bk{ \Delta v^{l}_{k,x_{m}} } dx + \int_{\Omega} \Delta v^{l}_{k,x_{m}} \cdot \Delta v^{l}_{m,x_{k}} dx
\]
\[
=\sum_{k,m,p,q}\int_{\Omega} v^{l}_{k,x_{m}x_{p}x_{p}} \cdot v^{l}_{k,x_{m}x_{q}x_{q}} dx + \sum_{k,m,p,q}\int_{\Omega} \Delta v^{l}_{k,x_{k}} \cdot \Delta v^{l}_{m,x_{m}} dx
\]
\[
=\sum_{k,m,p,q}\int_{\Omega} \bk{ v^{l}_{k,x_{m}x_{p}x_{q}} } dx,
\]
where we applied the condition $\operatorname{div}{ v^{l}} =0$ and used the tensor notation for components and derivatives. After applying (\ref{h}), (\ref{defPsi}), (\ref{defPhi}) and (\ref{cd}) we get
\eqq{
\mu^{t}_{\min} \leq \mu^{l}
}{estimu}
for each $l$ thus, (\ref{cff}) together with (\ref{Korn}) and (\ref{estimu}) give
\[
\ddt \ndk{\Delta v^{l} } + \mu^{t}_{\min} \ndk{ \Delta D(\vl) }
\]
\eqq{
\leq \frac{32}{\mu^{t}_{\min}} \Big( \nifk{v^{l} } \ndk{ \nablakv^{l} } + \ncc{ \nabla v^{l} } + \ndk{ \Delta \mu^{l} D(\vl) } + \ndk{\nabla \mu^{l} \cdot \nabla D(\vl)} \Big).
}{cfa}
Applying Gagliardo-Nirenberg interpolation inequality
\eqq{\nif{ \nabla v^{l}} \leq C \nd{ \nablat v^{l} }^{\frac{1}{2}} \ns{ \nabla v^{l} }^{\frac{1}{2}} }{Gaginfi}
and Sobolev embedding inequality we get
\[
\ndk{ \Delta \mu^{l} D(\vl) } \leq \ndk{ \Delta \mu^{l}} \nifk{ D(\vl) } \leq C \nd{ \nablat v^{l} } \nsod{ v^{l} } \nsod{\mu^{l}}^{2}
,
\]
where $C$ depends only on $\Omegaega$. Again, by Gagliardo-Nirenberg inequality
\eqq{\nt{ \nablad v^{l} } \leq C \nd{ \nablat v^{l} }^{\frac{1}{2}} \nd{ \nablak v^{l} }^{\frac{1}{2}} }{Gagl3}
and H\"older inequality we have
\[
\ndk{\nabla \mu^{l} \cdot \nabla D(\vl)} \leq \nsk{\nabla \mu }\ntk{\nablak v^{l} } \leq C \nd{ \nablat v^{l} } \nsod{ v^{l} } \nsodk{ \mu^{l}}.
\]
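For the reader's convenience, we recall the form of the Young inequality used in the next step (a standard fact, stated here only to fix notation): since $\frac{1}{2}+\frac{1}{6}+\frac{1}{3}=1$, for nonnegative $x$, $y$, $z$ and any $\varepsilon>0$ we have
\[
xyz \leq \varepsilon x^{2} + C(\varepsilon)\big( y^{6}+z^{3} \big),
\]
which below is applied with $x$ equal to the highest-order norm and $y$, $z$ equal to the lower-order factors.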
Thus, applying the Young inequality with exponents $(2,6,3)$ we get
\eqq{
\ndk{ \Delta \mu^{l} D(\vl) }+ \ndk{\nabla \mu^{l} \cdot \nabla D(\vl)} \leq \varepsilon \ndk{ \nablat v^{l} }+\frac{C}{\varepsilon } ( \nsod{ v^{l} }^{6} + \nsod{\mu^{l}}^{6}),
}{zGag}
where $\varepsilon>0$ and $C$ depends only on $\Omega$. Applying the above inequality and (\ref{Korn}) in (\ref{cfa}) we obtain
\eqq{
\ddt \ndk{\nablak v^{l} } + \mu^{t}_{\min} \ndk{ \nablat v^{l} }
\leq \frac{C}{\mu^{t}_{\min}} \Big( \| v^{l} \|^{4}_{2,2}+ (\mu^{t}_{\min})^{-2}( \| v^{l} \|_{2,2}^{6}+ \| \mu^{l} \|_{2,2}^{6}) \Big),
}{zcfa}
where $C=C(\Omega)$. Now, we proceed similarly with (\ref{cg}) and we obtain
\[
\ddt \ndk{\Delta \om^{l} } + \mu^{t}_{\min} \ndk{ \nablala \Delta \om^{l}} \leq \frac{C}{\mu^{t}_{\min}}
\Big(
\nifk{ v^{l}} \ndk{\nablak \om^{l}} + \nck{ \nabla v^{l}} \nck{ \nabla \om^{l}} + \nifk{\om^{l}} \ndk{ \nablak v^{l}}
\]
\eqq{+\ndk{ \Delta \mu^{l} \nabla \om^{l}} + \ndk{\nablak \om^{l} \nabla \mu^{l} }
+ \kappa_{2}^{2} c_{0}^{2} \nifk{\om^{l}} \ndk{\nabla \om^{l}} \Big),}{cgg}
where we applied (\ref{estiphi}). We repeat the reasoning leading to (\ref{zGag}) and we obtain
\[
\ndk{ \Delta \mu^{l} \nabla \om^{l}} + \ndk{\nablak \om^{l} \nabla \mu^{l} } \leq \varepsilon \ndk{ \nablat \om^{l} }+\frac{C}{\varepsilon } ( \nsod{ \om^{l} }^{6} + \nsod{\mu^{l}}^{6}).
\]
Thus, the above inequality and (\ref{cgg}) give
\[
\ddt \ndk{\nablak \om^{l} } + \mu^{t}_{\min} \ndk{ \nablat \om^{l}}
\]
\eqq{
\leq \frac{C}{\mu^{t}_{\min}}
\Big(
\| v^{l} \|_{2,2}^{4}+ (1+\kappa_{2}^{4}c_{0}^{4} ) \| \om^{l} \|_{2,2}^{4}+ (\mu^{t}_{\min})^{-2}( \| \om^{l} \|_{2,2}^{6}+ \| \mu^{l} \|_{2,2}^{6}) \Big),}{zcgg}
where $C=C(\Omega)$. Further, from (\ref{chh}) we get
\[
\ddt \ndk{\Delta b^{l} } + \mu^{t}_{\min} \ndk{ \nablala \Delta b^{l} } \leq \frac{C}{\mu^{t}_{\min}} \Big( \nifk{ v^{l} } \ndk{ \nablak b^{l} } + \nck{ \nabla v^{l}} \nck{ \nabla b^{l} } + \nifk{b^{l}} \ndk{ \nablak v^{l}}
\]
\[
+
\ndk{\nablak \mu^{l} \nabla b^{l} } + \ndk{\nablak b^{l} \nabla \mu^{l} } + c_{0}^{2} \nifk{ \om^{l}} \ndk{ \nabla b^{l}}
\]
\[
+ c_{0}^{2} \nifk{ b^{l}} \ndk{ \nabla \om^{l}} + \ndk{ \nabla \mu^{l} |D(\vl)|^{2} } + \ndk{\mu^{l} \nabla(|D(\vl)|^{2}) }\Big),
\]
where we applied (\ref{estipsi}) and (\ref{estiphi}). Integrating by parts and applying the Sobolev embedding theorem we get
\[
\ddt \ndk{\nablak b^{l} } + \mu^{t}_{\min} \ndk{ \nablat b^{l} } \leq \frac{C}{\mu^{t}_{\min}} \Big( \nsod{ v^{l} }^{4}+ \nsod{ b^{l} }^{4} +
\ndk{\nablak \mu^{l} \nabla b^{l} } + \ndk{\nablak b^{l} \nabla \mu^{l} }
\]
\eqq{
+ c_{0}^{4} \nsod{ \om^{l}}^{4} +\nsod{\mu^{l} }^{6} +\nsod{v^{l} }^{6} + \ntk{\nablak v^{l} } \nsod{ \mu^{l} }^{2} \nsod{ v^{l} }^{2} \Big),
}{nabzGag}
Applying again the Gagliardo-Nirenberg inequality and Young inequality we get
\[
\ndk{\nablak \mu^{l} \nabla b^{l} } + \ndk{\nablak b^{l} \nabla \mu^{l} } \leq \varepsilon \| \nablat b^{l} \|_{2}^{2} + \frac{C}{\varepsilon}( \| b^{l} \|_{2,2}^{6}+ \| \mu^{l} \|_{2,2}^{6}).
\]
From (\ref{Gagl3}) we get
\[
\ntk{\nablak v^{l} } \nsod{ v^{l} }^{2} \nsod{ \mu^{l} }^{2}\leq C\nd{\nablat v^{l} }\nsod{ v^{l} }^{3} \nsod{ \mu^{l} }^{2} \leq \varepsilon \ndk{\nablat v^{l} }+\frac{C}{\varepsilon}( \nsod{ v^{l} }^{10}+ \nsod{ \mu^{l} }^{10}).
\]
Hence, from (\ref{nabzGag}) we obtain the following estimate
\[
\ddt \ndk{\nablak b^{l} } + \mu^{t}_{\min} \ndk{ \nablat b^{l} } \leq \frac{C}{\mu^{t}_{\min}} \Big( \nsod{ v^{l} }^{4}+ \nsod{ b^{l} }^{4} + c_{0}^{4} \nsod{ \om^{l}}^{4} +\nsod{\mu^{l} }^{6} +\nsod{v^{l} }^{6} \Big)
\]
\eqq{ +\frac{C}{(\mu^{t}_{\min})^{3}} \Big( \nsod{b^{l}}^{6}+ \nsod{\mu^{l}}^{6}+ \nsod{v^{l}}^{10}+ \nsod{\mu^{l}}^{10} \Big) +\frac{\mu^{t}_{\min}}{2} \ndk{\nablat v^{l} }, }{nablzGag}
where $C=C(\Omega)$. We sum the inequalities (\ref{zcfa}), (\ref{zcgg}), (\ref{nablzGag}) and we obtain
\[
\ddt \Big( \ndk{\nablak v^{l} } +\ndk{\nablak \om^{l} } + \ndk{\nablak b^{l} } \Big) + \mu^{t}_{\min} \Big( \ndk{\nablat v^{l} } +\ndk{\nablat \om^{l} } + \ndk{\nablat b^{l} } \Big)
\]
\[
\leq \frac{C}{\mu^{t}_{\min}} \Big( \nsod{ v^{l} }^{4}+ \nsod{ b^{l} }^{4} + (1+c_{0}^{4}+c_{0}^{4}\kappa_{2}^{4}) \nsod{ \om^{l}}^{4} +\nsod{\mu^{l} }^{6} +\nsod{v^{l} }^{6} \Big)
\]
\eqq{+\frac{C}{(\mu^{t}_{\min})^{3}} \Big( \nsod{ v^{l} }^{6}+ \nsod{ b^{l} }^{6} + \nsod{ \om^{l}}^{6} +\nsod{\mu^{l} }^{6} +\nsod{v^{l} }^{10}+\nsod{\mu^{l} }^{10} \Big) }{sumvlolbl_old}
for some $C$, which depends only on $\Omega$. We note that
\eqq{
\mu^{t}_{\min} = \frac{1}{4} \frac{b_{\min}}{\omegaa}(1+\kappa_{2} \omegaa t )^{1-\frac{1}{\kappa_{2}}}
}{mutmineq}
hence, we have
\[
\ddt \Big( \ndk{\nablak v^{l} } +\ndk{\nablak \om^{l} } + \ndk{\nablak b^{l} } \Big) + \mu^{t}_{\min} \Big( \ndk{\nablat v^{l} } +\ndk{\nablat \om^{l} } + \ndk{\nablat b^{l} } \Big)
\]
\eqq{
\leq C \n{\frac{\omegaa}{b_{\min}}+ \n{\frac{\omegaa}{b_{\min}}}^{3} }\n{1 + \kappa_{2} \omegaa t}^{\beta } \Big( 1 + \nsod{ b^{l} }^{6} + \nsod{ \om^{l}}^{6} +\nsod{\mu^{l} }^{10} +\nsod{v^{l} }^{10} \Big),
}{sumvlolbl}
where $\beta= \max \{ \frac{1}{\kappa_{2}}-1, \frac{3}{\kappa_{2}}-3 \}$ and $C$ depends only on $\Omega$, $c_{0}$ and $\kappa_{2}$.
Now, we shall estimate $\mu^{l}$ in terms of $\om^{l}$ and $b^{l}$. Firstly, we note that from (\ref{defPsi}) and (\ref{defPhi}) we have
\eqq{\Pst( \bl) \leq \max\{\jd b_{\min}^{t} , b^{l} \}, \hspace{0.2cm} \hspace{0.2cm} \Pht( \ol) \geq \jd \omegat.}{da}
Hence, by definition (\ref{cd}) we get
\eqq{0<\mu^{l} \leq 2 (\omegat)^{-1} \max\{b_{\min}^{t}, b^{l} \}\leq c_1(\Omega) \frac{1}{\omegai} \n{1 + \kappa_{2} \omegai t} \n{ b_{\min} + |b^{l}| } ,}{mulestibe}
where $c_{1}$ depends only on $\Omega$. Thus, we obtain
\eqq{
\nd{\mu^{l}} \leq c_{1}\frac{1}{\omegai} \n{1 + \kappa_{2} \omegai t} (b_{\min} + \nd{b^{l}}). }{mulifty}
Now, we have to estimate the derivatives of $\mu^{l}$. Direct calculation gives
\[
| \nablak \mu^{l}| = \be{\nablak \n{ \Pst( \bl) \cdot (\Pht( \ol))^{-1} }} \leq (\Pht( \ol))^{-1} \be{\nablak (\Pst( \bl))}
\]
\[
+2(\Pht( \ol))^{-2} \be{\nabla (\Pst( \bl))} \be{\nabla( \Pht( \ol)) }
\]
\eqq{
+2\Pst( \bl) (\Pht( \ol))^{-3} \be{\nabla( \Pht( \ol)) }^2 + \Pst( \bl) (\Pht( \ol))^{-2} \be{\nablak( \Pht( \ol)) } .
}{muleq}
Using (\ref{estiPsi}) and (\ref{estiPhi}) we may estimate the derivatives
\eqq{\be{\nabla (\Pst( \bl) ) } \leq c_{0} \be{ \nabla b^{l} }, \hspace{0.2cm} \hspace{0.2cm} \be{\nabla (\Pht( \ol) ) } \leq c_{0} \be{ \nabla \om^{l} }, }{db}
\eqq{\begin{array}{l}
\be{\nablak (\Pst( \bl) ) } \leq c_{0}(b_{\min}^{t})^{-1} \bk{ \nabla b^{l} } + c_{0} \be{ \nablak b^{l} }, \hspace{0.2cm} \hspace{0.2cm} \\ \be{\nablak (\Pht( \ol) ) } \leq c_{0}(\omegat)^{-1} \bk{ \nabla \om^{l} } + c_{0} \be{ \nablak \om^{l} }. \end{array} }{dc}
If we apply estimates (\ref{da}), (\ref{db}) and (\ref{dc}) in (\ref{muleq}) then we obtain
\[
\be{\nablak \mu^{l} } \leq c_{2} Q_{1} \n{1+ \kappa_{2} \omegaa t }^{\max \{ 3, 1 + \frac{1}{\kappa_{2} } \} } \left[ \be{\nabla b^{l} }^{2} + \be{\nablak b^{l} } +|b^{l} | \be{ \nabla \om^{l}}^{2}
\right.
\]
\eqq{
\left.+ \be{\nabla b^{l} }+ \be{\nabla \om^{l} }+ \be{\nabla \om^{l} }^{2} + \be{b^{l} \nablak \om^{l}} + \be{\nablak \om^{l}} \right] }{mulesta}
where $c_{2}$ depends only on $c_{0}$ and $Q_{1}=\frac{b_{\min}}{\omegai}\n{1+b_{\min}^{-3}+\omegai^{-3} } $. Thus, we obtain
\[
\nd{\nablak \mu^{l} }\leq c_{2}Q_{1}\n{1+ \kappa_{2} \omegaa t }^{\max \{ 3,1 + \frac{1}{\kappa_{2} } \} } \left[ \nck{\nablab^{l} } + \nd{\nablak b^{l} } \right.
\]
\eqq{ \left. + \nif{b^{l}} \nck{\nabla \om^{l} } + \nck{ \nabla \om^{l}} + \nd{ \nablak \om^{l} } + \nif{ b^{l} } \nd{\nablak \om^{l} } \right] . }{estimuldiff}
If we take into account (\ref{mulifty}) then we get
\eqq{\nsod{ \mu^{l}} \leq c_{3} Q_{1} \n{1+ \kappa_{2} \omegaa t }^{\max \{ 3,1 + \frac{1}{\kappa_{2} } \} } \left( \nsodt{ b^{l}} + \nsodt{ \om^{l}}+1 \right), }{ea}
where $c_{3}=c_{3}( c_{0}, \Omegaega )$. Applying the above estimate in (\ref{sumvlolbl}) we obtain
\[
\ddt \Big( \ndk{\nablak v^{l} } +\ndk{\nablak \om^{l} } + \ndk{\nablak b^{l} } \Big) + \mu^{t}_{\min} \Big( \ndk{\nablat v^{l} } +\ndk{\nablat \om^{l} } + \ndk{\nablat b^{l} } \Big)
\]
\eqq{
\le C Q_{2} \n{1+ \kappa_{2} \omegaa t }^{\bar{\beta} } \Big( 1+ \nsod{ v^{l} }^{2}+ \nsod{ b^{l} }^{2} + \nsod{ \om^{l}}^{2} \Big)^{15}, }{sumvlolblbezmu}
where
\[
Q_{2}= \left[ 1+\n{\frac{\omegaa}{b_{\min}}}^{3} \right] \left[ \frac{b_{\min}}{\omegai}(1+b_{\min}^{-3} +\omegai^{-3} )^{10} +1\right], \hspace{0.2cm} \bar{\beta}= 10\max\{1+\frac{1}{\kappa_{2}},3\}+\beta
\]
and $C$ depends only on $\Omega$, $c_{0}$ and $\kappa_{2}$.
If we take into account the estimates (\ref{cf})-(\ref{ci}) then we have
\[
\ddt \Big( \nsodk{ v^{l} } +\nsodk{ \om^{l} } + \nsodk{ b^{l} } \Big) + \mu^{t}_{\min} \Big( \nsotk{ v^{l} } +\nsotk{ \om^{l} } + \nsotk{ b^{l} } \Big)
\]
\eqq{\leq C Q_{3}\n{1+ \kappa_{2} \omegaa t }^{\bar{\beta} } \Big( 1+ \nsod{ v^{l} }^{2}+ \nsod{ b^{l} }^{2} + \nsod{ \om^{l}}^{2} \Big)^{15}, }{caloscGro}
where $C=C( c_{0}, \Omega, \kappa_{2} )$ and $Q_{3}=Q_{1}^{2}+Q_{2}+1$. If we divide both sides by the last term and then integrate with respect to the time variable, we get
\[
\Big( 1+ \nsod{ v^{l}(t) }^{2}+ \nsod{ b^{l}(t) }^{2} + \nsod{ \om^{l}(t)}^{2} \Big)^{-14}
\]
\[
\geq \Big( 1+ \nsod{ v^{l}(0) }^{2}+ \nsod{ b^{l}(0) }^{2} + \nsod{ \om^{l}(0)}^{2} \Big)^{-14} - \frac{14CQ_{3}}{(\bar{\beta}+1) \kappa_{2} \omegaa } \n{ (1+\kappa_{2} \omegaa t )^{\bar{\beta}+1} -1}
\]
\eqq{
\geq \Big( 1+ \nsod{ v_{0} }^{2}+ \nsod{ b_{0} }^{2} + \nsod{ \omega_{0}}^{2} \Big)^{-14} - \frac{14CQ_{3}}{(\bar{\beta}+1) \kappa_{2} \omegaa } \n{ (1+\kappa_{2} \omegaa t )^{\bar{\beta}+1} -1},
}{dodefts}
where the last estimate is a consequence of the Bessel inequality. Now, we define the time $t^{*}$ as the unique solution of the equality
\eqq{\Big( 1+ \nsod{ v_{0} }^{2}+ \nsod{ b_{0} }^{2} + \nsod{ \omega_{0}}^{2} \Big)^{-14} = \frac{15CQ_{3}}{(\bar{\beta}+1) \kappa_{2} \omegaa } \n{ (1+\kappa_{2} \omegaa t^{*} )^{\bar{\beta}+1} -1}.}{defts}
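For later reference, (\ref{defts}) can be rearranged to give $t^{*}$ explicitly (this is a direct algebraic consequence of the definition):
\[
t^{*}=\frac{1}{\kappa_{2}\omegaa}\left[\left( 1+ \frac{(\bar{\beta}+1)\kappa_{2}\omegaa}{15CQ_{3}}\Big( 1+ \nsod{ v_{0} }^{2}+ \nsod{ b_{0} }^{2} + \nsod{ \omega_{0}}^{2} \Big)^{-14}\right)^{\frac{1}{\bar{\beta}+1}}-1\right].
\]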
We note that $t^{*} $ is positive and depends on $\nsod{ v_{0} }^{2}+ \nsod{ b_{0} }^{2} + \nsod{ \omega_{0}}^{2}$, $\kappa_{2}$, $\Omega$, $c_{0}$, $\omegai$, $\omegaa$ and $b_{\min}$. It is evident that $t^{*}$ is a decreasing function of $\nsod{ v_{0} }^{2}+ \nsod{ b_{0} }^{2} + \nsod{ \omega_{0}}^{2} $. Moreover, for any $\delta>0$ and compact $K\subseteq \{(a,b,c): \hspace{0.2cm} 0<a\leq b, \hspace{0.2cm} 0<c \}$ there exists $t^{*}kd >0$ such that $t^{*} \geq t^{*}kd$ for any initial data satisfying $\nsod{ v_{0} }^{2}+ \nsod{ b_{0} }^{2} + \nsod{ \omega_{0}}^{2} \leq \delta$ and $(\omegai, \omegaa, b_{\min})\in K$. From (\ref{defts}) we deduce that $t^{*}kd$ depends only on $\delta$, $K$, $\Omega$, $\kappa_{2}$ and $ c_{0}$.
From (\ref{dodefts}) and (\ref{defts}) we have
\[
\Big( 1+ \nsod{ v^{l}(t) }^{2}+ \nsod{ b^{l}(t) }^{2} + \nsod{ \om^{l}(t)}^{2} \Big)^{-14} \geq \frac{CQ_{3}}{(\bar{\beta}+1) \kappa_{2} \omegaa } \n{ (1+\kappa_{2} \omegaa t )^{\bar{\beta}+1} -1}
\]
for $t\in [0,t^{*}]$ hence,
\eqq{
\nsod{ v^{l}(t) }^{2}+ \nsod{ b^{l}(t) }^{2} + \nsod{ \om^{l}(t)}^{2} \leq \left[ \frac{CQ_{3}}{(\bar{\beta}+1) \kappa_{2} \omegaa } \n{ (1+\kappa_{2} \omegaa t^{*} )^{\bar{\beta}+1} -1}\right]^{-\frac{1}{14}}
}{defCs}
for $t\in [0,t^{*}]$. In particular, there exists $C^{*}=C^{*}(t^{*})$ such that
\eqq{\| v^{l} \|_{L^{\infty}(0, t^{*}; \mathcal{V}kdd)}+\| \om^{l} \|_{L^{\infty}(0, t^{*}; \mathcal{V}d)}+ \| b^{l} \|_{L^{\infty}(0, t^{*}; \mathcal{V}d)} \leq C^{*}}{eg}
uniformly with respect to $l\in \mathbb{N}$. Next, from (\ref{mutmineq}), (\ref{caloscGro}) and (\ref{eg}) we get the bound
\eqq{\| v^{l} \|_{L^{2}(0, t^{*}; \mathcal{V}kdt)}+\| \om^{l} \|_{L^{2}(0, t^{*}; \mathcal{V}t)}+ \| b^{l} \|_{L^{2}(0, t^{*}; \mathcal{V}t)} \leq C_{*},}{gh}
where $C_{*} $ depends on $t^{*}, \kappa_{2}, $ $b_{\min}$, $\omegaa$ and $C^{*}$.
It remains to show the estimate of the time derivative of the solution. We do this by multiplying the equality (\ref{ca}) by $c_{i}^{l}t$ and, after summing over $i$, we get
\[
(v^{l}t, v^{l}t ) - (v^{l} \partial_{t} \omegaegaimes v^{l} , \nablala v^{l}t )+ (\mu^{l} D(v^{l} ), D(v^{l}t ))=0.
\]
Thus, after integration by parts and applying the H\"older inequality we have
\[
\ndk{v^{l}t} \le \nd{\operatorname{div} (v^{l} \partial_{t} \omegaegaimes v^{l})} \nd{v^{l}t} + \nd{\nabla \n{\mu^{l} D(v^{l} )}} \nd{v^{l}t}.
\]
By applying Young inequality we get
\[
\ndk{v^{l}t} \le 2\ndk{\operatorname{div} (v^{l} \partial_{t} \omegaegaimes v^{l})} + 2\ndk{\nabla \n{\mu^{l} D(v^{l} )}} .
\]
Next, H\"older inequality gives us
\[
\ndk{v^{l}t} \le C \Big(
\nck{\nabla v^{l}} \nck{v^{l}}
+ \nck{\nabla \mu^{l}} \nck{ D(v^{l} )}
+ \nif{\mu^{l}}^2 \ndk{\nabla D(v^{l} )}
\Big).
\]
Finally, Sobolev embedding theorem leads us to the following inequality
\[
\ndk{v^{l}t} \le C \Big(
\nsod{v^{l}}^4
+ \nsodk{\mu^{l}} \nsodk{v^{l}}
\Big),
\]
where $C$ depends only on $\Omega$. If we apply (\ref{ea}) and (\ref{eg}) then we get
\eqq{\| v^{l}t \|_{L^{\infty}(0,t^{*};L^{2}(\Omega))} \leq C_{*}, }{gi}
where $C_{*} $ depends on $\Omega, c_{0}, t^{*}, \kappa_{2}, $ $b_{\min}$, $\omegaa$ and $C^{*}$.
Now, we shall consider (\ref{cb}). Proceeding as earlier we get
\[
\ndk{\om^{l}t} \leq 4\ndk{ \nabla \om^{l} \cdot v^{l} }+ 4\ndk{ \nabla ( \mu^{l} \nabla \om^{l})}+ 4\kappa_{2}\ndk{ \phi_{t}^{2} (\om^{l})}
\]
\[
\leq 4 \nifk{ v^{l} } \ndk{ \nabla \om^{l} } + 8\nck{ \nabla \mu^{l} } \nck{ \nabla \om^{l} } + 8\nifk{ \mu^{l} } \ndk{\nablak \om^{l}}+ 4\kappa_{2}\ncc{\om^{l}},
\]
where we applied (\ref{estiphi}). Thus, using (\ref{ea}) and (\ref{eg}) we get
\eqq{\| \om^{l}t \|_{L^{\infty}(0,t^{*};L^{2}(\Omega))} \leq C_{*}, }{gl}
where $C_{*}$ is as earlier. It remains to deal with (\ref{cc}). In similar way we obtain
\[
\ndk{ b^{l}t} \leq 4\ndk{ \nabla b^{l} v^{l} } + 4\ndk{ \nabla (\mu^{l} \nabla b^{l} )}+ 4\ndk{\pst( \bl) \pht( \ol) } + 4\ndk{ \mu^{l} | D(\vl)|^{2}}
\]
\[
\leq 4\ndk{\nabla b^{l} } \nifk{ v^{l}} + 8\nck{ \nabla \mu^{l}} \nck{ \nabla b^{l} } + 8\nifk{ \mu^{l} }\ndk{ \nablak b^{l} }+ 4\nifk{ b^{l} }\ndk{ \om^{l}} + 4\nifk{ \mu^{l} }\ncc{ \nabla v^{l}}.
\]
Applying again (\ref{ea}) and (\ref{eg}) we obtain
\eqq{\| b^{l}t \|_{L^{\infty}(0,t^{*};L^{2}(\Omega))} \leq C_{*}, }{gm}
where $C_{*} $ depends on $\Omega, c_{0}, t^{*}, \kappa_{2}, $ $b_{\min}$, $\omegaa$ and $C^{*}$.
Now, we prove the higher order estimates for the time derivative of the approximate solution. Firstly, we multiply the equality (\ref{ca}) by $-\lambda_{i} c_{i}^{l}t$ and sum over $i$
\[
(v^{l}t, -\Delta v^{l}t ) + (v^{l} \partial_{t} \omegaegaimes v^{l} , \nablala \Delta v^{l}t )- (\mu^{l} D(v^{l} ), D(\Delta v^{l}t ))=0.
\]
After integration by parts we get
\[
\ndk{\nabla v^{l}t }
=
- \n{\Delta \n{ v^{l} \partial_{t} \omegaegaimes v^{l} }, \nablala v^{l}t }
+ \n{\Delta \n{\mu^{l} D(v^{l} )}, D(v^{l}t )}.
\]
If we apply H\"older and Young inequalities, then we get
\[
\ndk{\nabla v^{l}t }
\le
2\nd{\Delta \n{ v^{l} \partial_{t} \omegaegaimes v^{l} }}^2
+ \nd{\Delta \n{\mu^{l} D(v^{l} )}}^2,
\]
where we used the equality $ 2\ndk{D(v^{l}t )} = \ndk{\nabla v^{l}t} $. We estimate further
\[
\ndk{\nabla v^{l}t }
\le
8\nif{v^{l}}^2\nd{\nablak v^{l}}^2
+ 8\ncc{\nabla v^{l}} + 4\nif{\mu^{l}}^2 \nd{\Delta D(v^{l} )}^2
\]
\[
+ 16\nt{\nabla \mu^{l}}^2 \ns{\nabla D(v^{l} )}^2
+ 4\nd{\Delta \mu^{l}}^2 \nif{D(v^{l} )}^2.
\]
Using Sobolev embedding we obtain
\[
\ndk{\nabla v^{l}t }
\le C \Big(
\nsod{ v^{l} }^4
+ \nsodk{\mu^{l}} \nsodk{v^{l}}
+ \nsodk{\mu^{l}} \nsot{v^{l}}^2
\Big),
\]
where $C$ depends only on $\Omega$. Applying (\ref{ea}), (\ref{eg}) and (\ref{gh}) we get
\eqnsl{
\| \nabla v^{l}t \|_{L^{2}(0, t^{*}; L^{2}(\Omega))} \le C_{*},
}{vtEst_2}
where $C_{*} $ depends on $c_{0}, \Omega, t^{*}, \kappa_{2}, $ $b_{\min}$, $\omegaa$ and $C^{*}$. Proceeding analogously we get
\eqnsl{
\| \nabla \om^{l}t \|_{L^{2}(0, t^{*}; L^{2}(\Omega))} \le C_{*}.
}{oltEst_1}
It remains to estimate $\nabla b^{l}t$. If we multiply the equality (\ref{cc}) by $-\lambdaf_{i} d_{i}^{l}t$ and sum over $i$, then we get
\[
(b^{l}t, - \Delta b^{l}t) + (b^{l} v^{l}, \nabla \Delta b^{l}t )
- \n{ \mu^{l} \nablala b^{l}, \nablala \Delta b^{l}t }
\]
\[
=
(\psi_{t}(b^{l})\phi_{t}( \om^{l}) , \Delta b^{l}t)
- ( \mu^{l} |D(\vl)|^{2}, \Delta b^{l}t).
\]
Integrating by parts and H\"older inequality lead to
\[
\ndk{\nabla b^{l}t}
\le
\nd{\Delta \n{b^{l} v^{l}}} \nd{\nabla b^{l}t}
+ \nd{ \Delta \n{\mu^{l} \nablala b^{l}}} \nd{\nabla b^{l}t}
\]
\[
+ \nd{\nabla \n{\psi_{t}(b^{l})\phi_{t}( \om^{l})}} \nd{\nabla b^{l}t }
+ \nd{\nabla \n{ \mu^{l} |D(\vl)|^{2}}} \nd{\nabla b^{l}t}.
\]
After applying Young inequality we get
\[
\ndk{\nabla b^{l}t}
\]
\[
\le
4 \nd{\Delta \n{b^{l} v^{l}}}^2
+ 4 \nd{ \Delta \n{\mu^{l} \nablala b^{l}}}^2
+ 4 \nd{\nabla \n{\psi_{t}(b^{l})\phi_{t}( \om^{l})}}^2
+ 4 \nd{\nabla \n{ \mu^{l} |D(\vl)|^{2}}}^2.
\]
Using H\"older inequality we obtain
\eqnsl{
\ndk{\nabla b^{l}t}
& \le
16 \nd{\Delta b^{l}}^2 \nif{v^{l}}^2
+ 32 \nc{\nabla b^{l}}^2 \nc{\nabla v^{l}}^2
+ 16\nif{b^{l}}^2 \nd{\nablak v^{l}}^2 \\
&
+ 16 \nd{ \Delta \mu^{l}}^2 \nif{\nablala b^{l}}^2
+ 32 \nc{ \nabla \mu^{l}}^2 \nc{\nablak b^{l}}^2
+ 16 \nif{\mu^{l}}^2 \nd{\nablala \Delta b^{l}}^2 \\
&
+ 8 \nd{\nabla ( \psi_{t}(b^{l}))}^2\nif{\phi_{t}( \om^{l})}^2
+ 8 \nif{\psi_{t}(b^{l})}^2\nd{\nabla ( \phi_{t}( \om^{l}))}^2 \\
&
+ 8 \ns{\nabla \mu^{l}}^2 \ns{D(\vl)}^4
+ 16 \nif{\mu^{l}}^2 \nt{D(\vl)}^2\ns{\nabla D(\vl)}^2.
}{estB_1__}
After applying (\ref{estipsi}) and (\ref{estiphi}) we get $\nif{\psi_{t}(b^{l})} \le \nif{b^{l}}$, \hspace{0.2cm} $\nif{\phi_{t}(\om^{l})} \le \nif{\om^{l}}$ and
\[
\nd{\nabla (\phi_{t}(\om^{l})) } = \nd{\pht( \ol)p\nabla \om^{l}} \le c_0 \nd{\nabla \om^{l}},
\]
\[
\nd{\nabla (\psi_{t}(b^{l}))} = \nd{\pst( \bl)p\nabla b^{l}} \le c_0 \nd{\nabla b^{l}}.
\]
Using these inequalities in (\ref{estB_1__}) we obtain
\[
\ndk{\nabla b^{l}t}
\le C \Big(
\nsodk{b^{l}} \nsodk{v^{l}}
+ \nsodk{ \mu^{l}} \nsotk{b^{l}}
+ \ndk{\nabla b^{l}} \nsodk{\om^{l}}
+ \ndk{\nabla \om^{l}} \nsodk{b^{l}}
\]
\[
+ \nsodk{\mu^{l}} \nsod{v^{l}}^4
+ \nsodk{\mu^{l}} \nsodk{v^{l}} \nsotk{v^{l}}
\Big),
\]
where $ C = C(\Omegaega,c_0)$. Finally, from (\ref{ea}), (\ref{eg}) and (\ref{gh}) we obtain
\eqnsl{
\| \nabla b^{l}t \|_{L^{2}(0, t^{*}; L^{2}(\Omega))} \le C_{*} ,
}{btEst_4}
where $C_{*} $ depends on $c_{0}, \Omega, t^{*}, \kappa_{2}, $ $b_{\min}$, $\omegaa$ and $C^{*}$. The estimates (\ref{eg})-(\ref{gm}), (\ref{vtEst_2}), (\ref{oltEst_1}) and (\ref{btEst_4}) give (\ref{fa}) and the proof of lemma~\ref{energyhigher} is finished.
\end{proof}
Now, we sketch the idea of the remaining part of the proof of theorem~\ref{main}. From the $l$-independent estimate (\ref{fa}) we deduce the existence of a subsequence which converges weakly in appropriate spaces (see (\ref{fb})-(\ref{fd})). Next, by applying the Aubin-Lions lemma we get strong convergence of the approximate solutions, see (\ref{fe}), (\ref{ff}). Further, we prove the convergence of the ``diffusive coefficient'' $\mu^{l}$ (\ref{ga}), which allows us to pass to the limit in the approximate problem. As a result, we obtain (\ref{gb})-(\ref{ccg}). In the last step we prove a series of inequalities (\ref{gc})-(\ref{le}), (\ref{ge}), (\ref{gf}), which show that the truncated problem is in fact the original one.
Having the estimate (\ref{fa}) from lemma~\ref{energyhigher} we may apply weak-compactness argument to the sequence of approximate solutions and we obtain a subsequence (still numerated by superscript $l$) weakly convergent in appropriate spaces. To be more precise, there exist $v$, $\omega$ and $b$ such that
\[
v \in L^{2}(0,t^{*};\mathcal{V}kdt)\cap L^{\infty}(0,t^{*};\mathcal{V}kdd), \hspace{0.2cm} \partial_{t} v \in L^{2}(0,t^{*};H^{1}(\Omega))
\]
\[
\omega, b \in L^{2}(0,t^{*};\mathcal{V}t)\cap L^{\infty}(0,t^{*};\mathcal{V}d), \hspace{0.2cm} \partial_{t} \omegaega , \partial_{t} b \in L^{2}(0,t^{*};H^{1}(\Omega))
\]
and
\eqq{v^{l} \rightharpoonup v \m{ in } L^{2}(0,t^{*};\mathcal{V}kdt), \hspace{0.2cm} \hspace{0.2cm} v^{l} \slg v \m{ in } L^{\infty}(0,t^{*};\mathcal{V}kdd), \hspace{0.2cm} \hspace{0.2cm} v^{l}t \rightharpoonup \partial_{t} v \m{ in } L^{2}(0,t^{*};H^{1}(\Omega)), }{fb}
\eqq{ (\om^{l}, b^{l} ) \rightharpoonup (\omega, b) \m{ in } L^{2}(0,t^{*};\mathcal{V}t), \hspace{0.2cm} (\om^{l}, b^{l} ) \slg (\omega, b ) \m{ in } L^{\infty}(0,t^{*};\mathcal{V}d), }{fc}
\eqq{(\om^{l}t, b^{l}t) \rightharpoonup (\partial_{t} \omegaega , \partial_{t} b ) \m{ in } L^{2}(0,t^{*};H^{1}(\Omega)).}{fd}
Thus, by the Aubin-Lions lemma there exists a subsequence (again denoted by $l$) such that
\eqq{(v^{l}, \om^{l}, b^{l} ) \longrightarrow (v, \omega , b ) \hspace{0.2cm} \m{ in } \hspace{0.2cm} L^{2}(0,t^{*};H^{s}(\Omega)) \hspace{0.2cm} \m{ for } \hspace{0.2cm} s< 3,}{fe}
and
\eqq{(v^{l}, \om^{l}, b^{l} ) \longrightarrow (v, \omega , b ) \hspace{0.2cm} \m{ in } \hspace{0.2cm} C([0,t^{*}];H^{q}(\Omega)) \hspace{0.2cm} \m{ for } \hspace{0.2cm} q < 2.}{ff}
Now, we characterize the limits of nonlinear terms. Firstly, we note that for fixed $(x,t)$ we may write
\[
\Psi_{t}(b^{l} (x,t)) - \Psi_{t}(b(x, t) )= \int_{0}^{1} \frac{d}{ds} \left[ \Psi_{t} \left(s b^{l} (x,t) +(1-s) b(x, t) \right) \right] ds
\]
\[
= \int_{0}^{1} \Psi_{t}' (s b^{l} (x,t ) +(1-s) b(x, t) ) ds \cdot [b^{l} (x,t)- b(x,t)].
\]
Taking into account (\ref{estiPsi}) we get
\[
|\Psi_{t}(b^{l} (x,t)) - \Psi_{t}(b(x, t) )|\leq c_{0} |b^{l} (x,t)- b(x,t)|.
\]
Similarly we obtain
\[
|\Phi_{t}(\om^{l} (x,t)) - \Phi_{t}(\omega(x, t) )|\leq c_{0} |\om^{l} (x,t)- \omega(x,t)|.
\]
and
\[
|\Psi_{t} (b(x,t))| \leq c_{0} (|b(x,t)|+ b_{\min}^{t}).
\]
Therefore, applying (\ref{defPhi}) we obtain
\[
\left| \frac{\Pst( \bl)}{\Pht( \ol)}- \frac{\Psi_{t}( b)}{ \Phi_{t}( \omega) }\right|\leq 4\omegatd \left[ |\Phi_{t}( \omega) | | \Pst( \bl) - \Psi_{t} (b)|+ |\Psi_{t}(b )| |\Phi_{t} ( \omega) - \Pht( \ol)| \right]
\]
\[
\leq 4 \omegatd \left[ 2 \omegaa |b^{l} - b |+ c_{0} (|b|+ b_{\min}^{t})|\omega - \om^{l}| \right].
\]
From (\ref{ff}) and the above estimate we have
\eqq{\mu^{l} \longrightarrow \mu_{\Pst \Pht}\equiv \frac{\Psi_{t}( b)}{ \Phi_{t}( \omega)} \hspace{0.2cm} \m{ uniformly on } \hspace{0.2cm} \overline{\Omega } \times [0,t^{*}]. }{ga}
Now, we shall take the limit $l\rightarrow \infty$ in the system (\ref{ca})-(\ref{cc}). First, we multiply (\ref{ca}) by $a_{i}$ and sum over $i \in \{1,\dots , l \}$ and after integrating with respect time variable we get
\[
\int_{0}^{t} (v^{l}t, w)dt - \int_{0}^{t} (v^{l}\partial_{t} \omegaegaimes v^{l},\nablala w)dt + \int_{0}^{t} \n{ \mu^{l} D(v^{l}), D(w)}dt = 0,
\]
where $w= \sum\limits_{i=1}^{l}a_{i}w_{i}$ and $t\in (0,t^{*})$. We note that from (\ref{ff}) we have, for some $ \lambda > 0$,
\eqq{(v^{l}, \om^{l}, b^{l} ) \longrightarrow (v, \omega , b ) \hspace{0.2cm} \m{ in } \hspace{0.2cm} C([0,t^{*}];C^{0,\lambda}(\overline{\Omega}))}{ffa}
hence, (\ref{fd}), (\ref{ff}) and (\ref{ga}) imply that
\[
\int_{0}^{t} (\partial_{t} v, w)dt - \int_{0}^{t} (v\partial_{t} \omegaegaimes v,\nablala w)dt + \int_{0}^{t} \n{ \mu_{\Pst \Pht} D(v), D(w)}dt = 0
\]
for $t\in (0,t^{*})$ and $w= \sum\limits_{i=1}^{l}a_{i}w_{i}$. By density, the above identity holds for $w\in \mathcal{V}kdj$. As a consequence, we obtain
\[
\int_{t_{1}}^{t_{2}} (\partial_{t} v, w)dt - \int_{t_{1}}^{t_{2}} (v\partial_{t} \omegaegaimes v,\nablala w)dt + \int_{t_{1}}^{t_{2}} \n{ \mu_{\Pst \Pht} D(v), D(w)}dt = 0
\]
for $0<t_{1}<t_{2}<t^{*}$ and $w\in \mathcal{V}kdj$. After dividing both sides by $|t_{2}-t_{1}|$ and taking the limit $t_{2}\rightarrow t_{1}$ we get
\eqq{(\partial_{t} v , w) - (v\partial_{t} \omegaegaimes v,\nablala w) + \n{ \mu_{\Pst \Pht} D(v), D(w)} =0 \hspace{0.2cm} \m{ for } \hspace{0.2cm} w\in \mathcal{V}kdj}{gb}
for a.a. $t\in (0,t^{*})$. Further, we have
\[
\pst( \bl) \longrightarrow \psi_{t}( b), \hspace{0.2cm} \hspace{0.2cm} \pht( \ol) \longrightarrow \phi_{t} (\omega) \hspace{0.2cm} \m{ uniformly on } \hspace{0.2cm} \overline{\Omega } \times [0,t^{*}]
\]
thus, using (\ref{cb}) and (\ref{cc}) and arguing as earlier we obtain
\eqq{ (\partial_{t} \omegaega , z) - (\omega v, \nablala z ) + \n{ \mu_{\Pst \Pht} \nablala \omega, \nablala z } = - \kappa_{2}(\phi_{t}^{2}(\omega), z ) \hspace{0.2cm} \m{ for } \hspace{0.2cm} z\in \mathcal{V}j, }{cbg}
\eqq{(\partial_{t} b , q) - (b v, \nabla q ) +\n{ \mu_{\Pst \Pht} \nablala b, \nablala q } = - (\psi_{t}(b)\phi_{t}( \omega) , q) + ( \mu_{\Pst \Pht} \bk{D(v)}, q) \m{ for } q\in \mathcal{V}j }{ccg}
for a.a. $t\in (0,t^{*})$.
Now, we shall prove the bounds for $b$ and $\omega$. The proof is similar to the one found in \cite{MiNa}. We denote by $b_{+} $ (resp. $b_{-}$) the positive (resp. negative) part of $b$. Then $b=b_{+} + b_{-} $. We shall show that
\eqq{b\geq 0 \hspace{0.2cm} \m{ in } \hspace{0.2cm} \overline{\Omega } \times [0,t^{*}].}{gc}
For this purpose we test the equation (\ref{ccg}) by $b_{-}$ and we obtain
\[
(\partial_{t} b , b_{-} ) - (b v, \nabla b_{-} ) +\n{ \mu_{\Pst \Pht} \nablala b, \nablala b_{-} } = - (\psi_{t}(b)\phi_{t}( \omega) , b_{-} ) + ( \mu_{\Pst \Pht} \bk{D(v)}, b_{-} ).
\]
We note that from (\ref{ga}) we have $0\leq \mu_{\Pst \Pht}$ and by (\ref{defpsi}) we obtain $\psi_{t}(b)b_{-}\equiv 0$ thus, we get
\[
({\partial_{t} b_{-}}, b_{-} ) - (b_{-} v, \nabla b_{-} ) +\n{ \mu_{\Pst \Pht} \nablala b_{-}, \nablala b_{-} } \leq 0
\]
and then
\[
\ddt \ndk{ b_{-} } \leq 0.
\]
By the assumption (\ref{f}) the negative part of initial value of $b$ is zero hence, $b_{-} \equiv 0$ and we obtained (\ref{gc}).
Proceeding similarly we introduce the decomposition $\omega = \om_{+} + \om_{-}$ and test the equation (\ref{cbg}) by $\om_{-}$
\[
(\partial_{t} \omegaega, \om_{-}) - (\omega v, \nablala \om_{-} ) + \n{ \mu_{\Pst \Pht} \nablala \omega, \nablala \om_{-} } = - (\phi_{t}^{2}(\omega), \om_{-} ).
\]
We note that by $(\ref{defphi})$ the right-hand side of the above equality vanishes thus, we get $\ddt \ndk{ \om_{-} } \leq 0$ and by assumption (\ref{g})
\eqq{\omega\geq 0 \hspace{0.2cm} \m{ in } \hspace{0.2cm} \overline{\Omega } \times [0,t^{*}].}{gd}
Now, we shall prove that
\eqq{\omega(x,t)\geq \omegaitt \hspace{0.2cm} \m{ for } \hspace{0.2cm} (x,t)\in \overline{\Omega } \times [0,t^{*}]. }{le}
We test the equation (\ref{cbg}) by $(\omega - \omegat)_{-}$ and we obtain
\[
(\partial_{t} \omegaega, (\omega - \omegat)_{-}) - (\omega v, \nablala (\omega - \omegat)_{-} ) + \n{ \mu_{\Pst \Pht} \nabla \omega, \nabla \n{\omega - \omegat}_{-} }
\]
\eqq{
= - \kappa_{2} (\phi_{t}^{2}(\omega), (\omega - \omegat)_{-} ).
}{rra}
Using (\ref{h}) we get
\[
(\partial_{t} \omegaega, (\omega - \omegat)_{-}) = \jd \ddt \ndk{(\omega - \omegat)_{-}} - \kappa_{2} \n{ (\omegat)^{2} ,(\omega - \omegat)_{-}}
\]
hence, using inequality $ 0\leq \mu_{\Pst \Pht} $ and $\operatorname{div} v=0$ in (\ref{rra}) we obtain
\[
\jd \ddt \ndk{(\omega - \omegat)_{-}} - \kappa_{2} \n{ (\omegat)^{2} ,(\omega - \omegat)_{-}} \leq - \kappa_{2} (\phi_{t}^{2}(\omega), (\omega - \omegat)_{-} ) .
\]
We write the above inequality in the form
\[
\jd \ddt \ndk{(\omega - \omegat)_{-}} \leq -\kappa_{2} (( \phi_{t}(\omega)-\omegat)(\phi_{t}(\omega)+\omegat), (\omega - \omegat)_{-} ).
\]
We note that $-\kappa_{2} ((\phi_{t}(\omega)+\omegat), (\omega - \omegat)_{-} )$ is nonnegative; thus, using the bound $\phi_{t}(\omega)\leq \omega$ from (\ref{estiphi}), we have
\[
\jd \ddt \ndk{(\omega - \omegat)_{-}} \leq -\kappa_{2} (( \omega-\omegat)(\phi_{t}(\omega)+\omegat), (\omega - \omegat)_{-} )
\]
\[
=-\kappa_{2} \big( (\phi_{t}(\omega)+\omegat), \bk{(\omega - \omegat)_{-}} \big)\leq 0.
\]
Therefore, we obtain $\ddt \ndk{ (\omega - \omegat)_{-} }\leq 0$ and by (\ref{g}) we get (\ref{le}).\\
Now, we shall prove that
\eqq{\omega(x,t)\leq \omegaat \hspace{0.2cm} \m{ for } \hspace{0.2cm} (x,t)\in \overline{\Omega } \times [0,t^{*}]. }{ge}
Indeed, we first note that from (\ref{h}), (\ref{defphi}) and (\ref{le}) we have
\eqq{\phi_{t}(\omega)=\omega}{newa}
hence, if we test the equation (\ref{cbg}) by $(\omega - \omegamt)_{+}$ then we obtain
\[
(\partial_{t} \omega, (\omega - \omegamt)_{+}) - (\omega v, \nabla (\omega - \omegamt)_{+} ) + \n{ \mu_{\Pst \Pht} \nabla \omega, \nabla \n{\omega - \omegamt}_{+} }
\]
\[
= - \kappa_{2} (\omega^{2}, (\omega - \omegamt)_{+} ).
\]
Proceeding as earlier, we get
\[
\jd \ddt \ndk{(\omega - \omegamt)_{+}} -\kappa_{2} \n{ (\omegamt)^{2} ,(\omega - \omegamt)_{+} }
\le - \kappa_{2}(\omega^{2}, (\omega - \omegamt)_{+} )
\]
and
\[
\jd \ddt \ndk{(\omega - \omegamt)_{+}} \leq -\kappa_{2} (( \omega-\omegamt)(\omega+\omegamt), (\omega - \omegamt)_{+} )
\]
\[
= - \kappa_{2} ((\omega+\omegamt), |(\omega-\omegamt)_{+}|^{2} )
\]
hence, we obtain
\eqns{
\jd & \ddt \ndk{(\omega - \omegamt)_{+}} \le 0.
}
By (\ref{g}) we get (\ref{ge}).
We shall prove that
\eqq{b(x,t) \geq b_{\min}^{t} \hspace{0.2cm} \m{ for } \hspace{0.2cm} (x,t)\in \overline{\Omega } \times [0,t^{*}]. }{gf}
For this purpose we test the equation (\ref{ccg}) by $(b - b_{\min}^{t})_{-}$. Then we get
\[
(\partial_{t} b, (b - b_{\min}^{t})_{-}) - (b v, \nabla ((b - b_{\min}^{t})_{-} ) ) +\n{ \mu_{\Pst \Pht} \nabla b, \nabla ((b - b_{\min}^{t})_{-}) }
\]
\[
= - (\psi_{t}(b) \omega , (b - b_{\min}^{t})_{-}) + ( \mu_{\Pst \Pht} \bk{D(v)}, (b - b_{\min}^{t})_{-}).
\]
The first term on the left-hand side is equal to
\[
\jd \ddt \ndk{ (b - b_{\min}^{t})_{-}} - \n{\frac{\omegaa b_{\min}}{\n{1 + \omegaa \kappa_{2} t}^{\frac{1}{\kappa_{2}} + 1}}, (b - b_{\min}^{t})_{-}}.
\]
The second term on the left-hand side vanishes and the third is nonnegative. Thus, we get
\[
\jd \ddt \ndk{ (b - b_{\min}^{t})_{-}}
- \n{\frac{\omegaa b_{\min}}{\n{1 + \omegaa \kappa_{2} t}^{\frac{1}{\kappa_{2}} + 1}}, (b - b_{\min}^{t})_{-}}
\leq - (\psi_{t}(b) \omega , (b - b_{\min}^{t})_{-}).
\]
Using (\ref{ge}) we get
\[
\jd \ddt \ndk{ (b - b_{\min}^{t})_{-}}
- \n{\frac{\omegaa b_{\min}}{\n{1 + \omegaa \kappa_{2} t}^{\frac{1}{\kappa_{2}} + 1}}, (b - b_{\min}^{t})_{-}}
\]
\[
\leq - \frac{\omegaa}{1 + \omegaa \kappa_{2} t} (\psi_{t}(b) , (b - b_{\min}^{t})_{-})
\]
and by definition (\ref{h}) we obtain
\[
\jd \ddt \ndk{ (b - b_{\min}^{t})_{-}}
\leq - \frac{\omegaa}{1 + \omegaa \kappa_{2} t} (\psi_{t}(b) - b_{\min}^{t}, (b - b_{\min}^{t})_{-}).
\]
From (\ref{gc}) and (\ref{estipsi}) we have $\psi_{t}(b)\leq b$; thus, we obtain
\[
\jd \ddt \ndk{ (b - b_{\min}^{t})_{-}}
\leq - \frac{\omegaa}{1 + \omegaa \kappa_{2} t} (b - b_{\min}^{t}, (b - b_{\min}^{t})_{-})
\]
\[
= - \frac{\omegaa}{1 + \omegaa \kappa_{2} t} \ndk{ (b-b_{\min}^{t})_{-} }
\]
and then $\ddt \ndk{ (b - b_{\min}^{t})_{-} }\leq 0$. Using (\ref{f}) and (\ref{h}) we get (\ref{gf}).
Note that from (\ref{defpsi}) and (\ref{gf}) we get
\eqq{\psi_{t} (b)=b.
}{newb}
Further, (\ref{defPsi}) and (\ref{gf}) give $\Psi_{t} (b)=b$. Finally, (\ref{h}), (\ref{defPhi}), (\ref{le}) and (\ref{ge}) yield $\Phi_{t}(\omega) = \omega$. Thus,
\eqq{
\mu_{\Pst \Pht} = \frac{\Psi_{t} (b)}{\Phi_{t} (\omega)} = \frac{b}{\omega}.
}{ha}
Applying (\ref{newa}), (\ref{newb}) and (\ref{ha}) we deduce that system (\ref{gb})-(\ref{ccg}) has the following form
\eqq{(\partial_{t} v, w) - (v\otimes v,\nabla w) + \n{ \frac{b}{\omega} D(v), D(w)} =0 \hspace{0.2cm} \m{ for } \hspace{0.2cm} w\in \mathcal{V}kdj,}{gbh}
\eqq{ (\partial_{t} \omega, z) - (\omega v, \nabla z ) + \n{ \frac{b}{\omega} \nabla \omega, \nabla z } = - \kappa_{2} (\omega^{2}, z ) \hspace{0.2cm} \m{ for } \hspace{0.2cm} z\in \mathcal{V}j, }{cbgh}
\eqq{(\partial_{t} b, q) - (b v, \nabla q ) +\n{ \frac{b}{\omega} \nabla b, \nabla q } = - (b \omega , q) + \left( \frac{b}{\omega} \bk{D(v)}, q\right) \hspace{0.2cm} \m{ for } \hspace{0.2cm} q\in \mathcal{V}j }{ccgno}
for a.a. $t\in (0,t^{*})$.
\section{Appendix }
The function $\Psi_{t} $ may be defined as follows. We set $f(x)= e^{-1/x}$ for $x>0$ and zero elsewhere. We put $g(x)= x-e^{-1/x}$ for $x<0$ and $g(x)=x$ for $x>0$. Then we set
\[
\tilde{\eta}(x) = \frac{1}{c} \int_{0}^{x} f(y)f(-y+1)dy,
\]
where $c=\int_{0}^{1} f(y)f(-y+1)dy$. The function $\tilde{\eta } $ is a smooth function which vanishes for negative $x$ and is equal to one for $x>1$. Next, we put
\[
\eta(x)= \tilde{\eta}(2(x-\frac{1}{4})), \hspace{0.2cm} \hspace{0.2cm} h(x)= (1- \eta(x))f(x)+ \eta(x)g(x).
\]
Finally, we define
\eqq{\Psi_{t}(x) = \frac{b_{\min}^{t}}{2} + \frac{b_{\min}^{t}}{2} h\left( \frac{2}{b_{\min}^{t}} \left( x- \frac{b_{\min}^{t}}{2} \right) \right). }{defPsik}
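The construction above is fully explicit, so it can be checked numerically. The following short Python sketch (illustrative only, not part of the analysis) evaluates $h$ and $\Psi_{t}$; the quadrature routine and the sample value of $b_{\min}^{t}$ are assumptions made purely for illustration.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def f(x):            # f(x) = exp(-1/x) for x > 0, zero elsewhere
    return np.exp(-1.0 / x) if x > 0 else 0.0

def g(x):            # g(x) = x - exp(-1/x) for x < 0, g(x) = x for x > 0
    return x - np.exp(-1.0 / x) if x < 0 else x

c, _ = quad(lambda y: f(y) * f(1.0 - y), 0.0, 1.0)   # normalisation constant

def eta_tilde(x):    # smooth step: 0 for x <= 0, 1 for x >= 1
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    val, _ = quad(lambda y: f(y) * f(1.0 - y), 0.0, x)
    return val / c

def eta(x):
    return eta_tilde(2.0 * (x - 0.25))

def h_fun(x):        # h(x) = (1 - eta(x)) f(x) + eta(x) g(x)
    return (1.0 - eta(x)) * f(x) + eta(x) * g(x)

def Psi_t(x, bmin_t):
    return bmin_t / 2.0 + bmin_t / 2.0 * h_fun(2.0 / bmin_t * (x - bmin_t / 2.0))

bmin_t = 0.3         # illustrative value of b_min^t
# Psi_t stays >= bmin_t/2 and coincides with the identity for large x:
for x in [0.0, 0.1, 0.2, 0.3, 1.0, 2.0]:
    print(x, Psi_t(x, bmin_t))
\end{verbatim}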
{\bf{Acknowledgements}} The authors would like to thank the anonymous referee for valuable remarks, which significantly improved the paper.
\end{document}
\begin{document}
\title{{\Huge Second-order linear structure-preserving modified finite volume schemes for the regularized long-wave equation}}
\author{Qi Hong$^{a}$, ~Jialing Wang$^{b}$, ~Yuezheng Gong$^{c,*}$ \\ \\
$^{a}$ Graduate School of China Academy of Engineering Physics,\\
, Beijing 100088, China\\
$^{b}$ School of Mathematics and Statistics, \\
Nanjing University of Information Science and Technology,\\
Nanjing 210044 , China\\
$^{c}$ College of Science, Nanjing University of Aeronautics and Astronautics\\
Nanjing 210016, China\\}
\date{}
\maketitle
\begin{abstract}
In this paper, based on the weak form of the Hamiltonian formulation of the regularized long-wave equation and a novel approach of transforming the original Hamiltonian energy into a quadratic functional, one fully implicit and three linear-implicit energy-conserving numerical schemes are proposed. The resulting numerical schemes are proved theoretically to satisfy the energy conservation law at the discrete level. Moreover, the linear-implicit schemes are efficient in practical computation because only a linear system needs to be solved at each time step.
The proposed schemes are second-order accurate in both time and space.
Numerical experiments are presented to show that all the proposed schemes provide accurate solutions and possess the remarkable energy-preserving property.
\end{abstract}
{\bf Keywords:} modified finite volume method, discrete variational derivative method, linear scheme, regularized long-wave equation, conservation laws, quadratic invariant.
\pagestyle{myheadings}
\markboth{\hfil
\hfil \hbox{}}
{\hbox{} \hfil
Qi Hong, Jialing Wang and Yuezheng Gong \hfil}
\section{Introduction}\label{sec;introduction}
In this paper, we consider the regularized long-wave (RLW) type equation
\begin{align}\label{model-eq}
u_t+au_x-\sigma u_{xxt}+(F^{'}(u))_x=0,\quad F(u)=\dfrac{\gamma}{6}u^3,
\end{align}
where $a$, $\sigma$ and $\gamma$ are positive constants.
The RLW equation was proposed first by Peregrine
\cite{Peregrine1966}. Benjamin, Bona, and Mahony derived it as an alternative to the Korteweg--de Vries equation for describing the unidirectional propagation of weakly nonlinear dispersive long waves \cite{Benjamin1972}.
The RLW equation is very important in physical media since it describes phenomena with weak nonlinearity and dispersion,
including nonlinear transverse waves in shallow water, ion-acoustic and magnetohydrodynamic waves in plasma, and phonon packets in nonlinear crystals.
It admits three conservation laws \cite{Olver1979} given by
\begin{align*}
\mathcal{I}_1=\int_{-\infty}^{\infty}udx,\quad
\mathcal{I}_2=\int_{-\infty}^{\infty}\left(u^2+\sigma u_x^2\right)dx,\quad
\mathcal{I}_3=\int_{-\infty}^{\infty}\left(\dfrac{\gamma}{6}u^3+\dfrac{a}{2}u^2\right)dx,
\end{align*}
which correspond to mass, momentum and energy, respectively. As the RLW equation and its variants can be solved analytically only for a restricted set of boundary and initial conditions, their numerical solution has been the subject of a large body of literature
\cite{Dag2004,Dehghan2011,Dogan2002,Gao2015,Gu2008,Guo1988,Lu2018,Luo1999,Mei2015,MeiChen2012,Meigaocheng2015,ZakiSI2001}.
However, some numerical algorithms obtained using standard approaches fail to inherit certain
invariant quantities of the continuous dynamical system. A numerical method which can
preserve at least some of the structural properties of a system is called a geometric integrator or structure-preserving
algorithm. Nowadays, this has become a criterion for judging the success of a numerical simulation.
When discretizing such a conservative system in space and time, it is a natural
idea to design numerical schemes that rigorously preserve a discrete invariant equivalent to
the continuous one, since they often yield physically correct results and
numerical stability \cite{Bubb2003Book}.
To our knowledge, Sun and Qin \cite{Sun2004} constructed a multi-symplectic Preissman scheme by using the implicit midpoint rule both in space and time. Cai \cite{Cai2009} developed a 6-point multi-symplectic Preissman scheme. An explicit 10-point multi-symplectic Euler-box scheme for the RLW equation was proposed in \cite{02Cai2009}, and the author applied this scheme to solve the modified
RLW in \cite{Cai2010}.
Based on the multi-symplectic Euler box scheme and Preissman box scheme, Li and Sun \cite{Li2013} proposed a new multi-symplectic Euler box scheme for RLW equation with the accuracy of $\mathcal{O}(h^2+\tau^2)$.
In some fields, it is sometimes more convenient to construct numerical algorithms that preserve the energy conservation law rather than the symplectic or multi-symplectic structure \cite{Hairer2006Book}.
Fortunately, energy-preserving methods have been developed for numerical partial differential equations (PDEs) (see \cite{Eidnes2017,Gong2014,Matsuo2007,Quispel2008} for examples).
Furihata \cite{Furihata1999} presented the discrete variational derivative method (DVDM) for a large class of PDEs that possess an energy conservation or dissipation property. Matsuo and Furihata \cite{Furihata2001} generalized the DVDM to complex-valued nonlinear PDEs.
However, most existing energy-preserving methods for the RLW equation are fully implicit (see, e.g., \cite{CaiHong2017,Hammad2015}). Thus, an iteration technique is needed to evaluate the numerical solution. Recently, Dahlby and Owren proposed a general framework
for deriving linear-implicit integral-preserving numerical methods for PDEs with polynomial invariants \cite{Dahlby2011}. Inspired by the contribution in \cite{Dahlby2011}, we will develop linear energy-conserving schemes for the RLW equation.
In this article, we design four energy-preserving numerical methods based on a variational technique (a finite volume method) for the spatial semi-discretization. Note that this approach guarantees the conservation of a semi-discrete energy which is a discrete approximation of the original one in the continuous conservative system. It is then quite natural to seek a fully discrete scheme that preserves this energy as accurately as possible.
To this end, we use the DVDM in time, which can preserve the semi-discrete energy exactly. First, we apply the one-point DVDM in the temporal direction, but the resulting numerical scheme is fully implicit and an
iterative technique is needed in numerical computations. To overcome this disadvantage, a novel linear energy-conserving scheme is proposed via the DVDM based on two-point numerical solutions.
In addition, we propose a new idea of transforming the original Hamiltonian energy into a quadratic functional to derive energy-stable schemes. Based on this strategy, we again discretize the weak form of the equivalent formulation of the RLW equation in space by the so-called modified finite volume method (mFVM), which is obtained from the linear finite volume method while the second-order derivative is approximated by a central finite difference. Then, we apply the linear-implicit Crank-Nicolson and leap-frog schemes to discretize the conservative semi-discrete system in the temporal direction, and two second-order linear energy-preserving schemes are obtained readily. In our proposed linear-implicit methods, only one linear system needs to be
solved at each time step, which is much less expensive than a nonlinear
scheme. Numerical tests comparing with several existing works confirm this conclusion. Finally, we strictly prove that the four schemes mentioned above preserve the discrete energy exactly, and numerical results demonstrating the advantages of the methods are presented.
The paper is organized as follows. In Section 2, some elementary notations and definitions are briefly introduced first, and then we describe in detail the idea of the mFVM discretization in space. Section 3 is devoted to designing energy-preserving discretization schemes which preserve the discrete energy exactly. The conservation of the quadratic invariant is developed in Section 4. Numerical examples are presented to show the validity of the theoretical results in Section 5. Finally, we give concluding remarks in Section 6.
\section{Structure-preserving Spatial Discretization based on mFVM }
In this section, we construct a semi-discrete scheme by mFVM for the RLW equation in a domain $\Omega=[a,b]$. The semi-discrete scheme preserves the corresponding energy conservation law at the semi-discrete level.
The equation \eqref{model-eq} can be rewritten in the following formulation
\begin{align}\label{RLW-Hamilton-eq1}
&(1-\sigma\partial^2_x)u_t=-\partial_x\dfrac{\delta \mathcal{H}}{\delta u},
\end{align}
where $\frac{\delta \mathcal{H}}{\delta u}$ is a variational derivative of the Hamiltonian function
\begin{align*}
\mathcal{H}(u,u_x)=\int H(u,u_x)dx,\quad H(u,u_x)=\frac{\gamma}{6}u^3+\frac{a}{2}u^2.
\end{align*}
Let $H_p^1(\Omega)=\left\{u\in H^1(\Omega):u(x)=u(x+b-a)\right\}$. Further, $\forall\;u,v\in L^2(\Omega)$, $(u,v)=\int_{\Omega}uvdx$.
A weak formulation of the Galerkin discretization for \eqref{RLW-Hamilton-eq1} starts from finding $u\in H_p^1(\Omega)$ such that
\begin{align}\label{weak-form-eq}
\Big((1-\sigma\partial_x^2)u_t,v\Big)=-\Big(\partial_x\dfrac{\delta\mathcal{H}}{\delta u},v\Big),\quad \forall\; v\in H_p^{1}(\Omega).
\end{align}
Obviously, the $\mathcal{H}$-conservation law can be explicitly obtained by formal calculation
\begin{align*}
\dfrac{d}{dt}\mathcal H=\dfrac{d}{dt}\int Hdx=(\dfrac{\partial H}{\partial u},u_t)
=(\dfrac{\delta \mathcal{H}}{\delta u},-(1-\sigma\partial^2_x)^{-1}\partial_x\dfrac{\delta \mathcal{H}}{\delta u})=0,
\end{align*}
where the second equality is just the chain rule, and the skew-adjointness of the operator $(1-\sigma\partial^2_x)^{-1}\partial_x$ has been used in the last step.
Suppose that $\Omega$ is partitioned into a number of non-overlapping cells that form the so-called primary grid $\Omega_h=\left\{\Omega_i\,|\,\Omega_i=[x_i,x_{i+1}],\; i=0, 1, \cdots, N-1 \right\}$ with the mesh size $h=x_{i+1}-x_{i}=(b-a)/N$. Denote by $x_{i+1/2}$ the center of the cell $[x_i,x_{i+1}]$. In this paper, the model is equipped with a periodic boundary condition. Accordingly, we define a dual grid $\Omega_h^*$ as
\begin{align*}
\Omega_h^*=\{\Omega_0^*\cup\Omega_i^*:i=1,2,\cdots N-1\},
\end{align*}
where $\Omega_0^*=[x_0,x_{1/2}]\cup [x_{N-1/2},x_N]$ and $\Omega_i^*=[x_{i-1/2},x_{i+1/2}]$.
The trial function space $U_h$ is taken as the linear element space and the corresponding basis functions are given by
\begin{align*}
\psi_0=
\begin{cases}
(x_1-x)/h,\quad &x_0\leq x\leq x_1,\\
(x-x_{N-1})/h,\quad &x_{N-1}\leq x\leq x_N,\\
0,&\mathrm{otherwise},
\end{cases}
\quad
\psi_i(x)=
\begin{cases}
(x-x_{i-1})/h,\quad &x_{i-1}\leq x\leq x_{i},\\
(x_{i+1}-x)/h,\quad &x_{i}\leq x\leq x_{i+1},\\
0,&\mathrm{otherwise},
\end{cases}
\end{align*}
where $i=1,2,\cdots,N-1$.
For any $u_h\in U_h$, we have
\begin{align*}
u_h=\sum_{i=0}^{N-1}u_i\psi_i(x),
\end{align*}
where $u_i=u_h(x_i,t)$.
Define the test function space $V_h$ as
\begin{align*}
V_h=\mathrm{Span}\left\{\chi_i(x)|i=0,1,\cdots,N-1\right\},
\end{align*}
where $\chi_i$ denotes the characteristic function of dual cell $\Omega_i^*$ associated with $x_i$. We have
\begin{align*}
v_h=\sum_{i=0}^{N-1}v_i\chi_i,\quad \forall\; v_h\in V_h,
\end{align*}
where $v_i=v_h(x_i)$. We also define an interpolation operator $I_h:L^2(\Omega) \rightarrow U_h$ by
$$I_hu=\sum_{i=0}^{N-1}u(x_i,t)\psi_i(x),\quad\forall\;u\in L^2(\Omega).$$
Throughout this paper, the hollow letters $\mathbb{A}, \mathbb{B}, \mathbb{C}, \cdots$
will be used to denote rectangular matrices with a number of columns greater than one, while the bold ones $\mathbf{u}, \mathbf{v}, \mathbf{ w}, \cdots$ represent vectors.
For a Galerkin approximation of \eqref{weak-form-eq}, we look for $u_h(x,t)\in U_h$ such that, for any $v_h\in V_h$,
\begin{align}\label{semi-discrete-eq}
\Big((1-\sigma\partial_x^2)(u_h)_t,v_h\Big)=-\left(\partial_x\dfrac{\delta \mathcal{H}}{\delta u},v_h\right),
\end{align}
where
$\delta\mathcal{H}/\delta u=\gamma I_hu^2/2+au_h$.
Let $X_h=\{\mathbf{u}:\mathbf{u}=(u_0,u_1,\cdots,u_{N-1})^T\}$ be the space of grid function on $\Omega_h$.
Based on the above results and by some simple calculations, we get
\begin{align*}
\int_{\Omega}(u_h)_t\chi_idx&=\int_{x_{i-\frac{1}{2}}}^{x_i}\partial_t u_{i-1}\psi_{i-1}dx+\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}\partial_t u_i\psi_idx+\int_{x_i}^{x_{i+\frac{1}{2}}}\partial_t u_{i+1}\psi_{i+1}dx\\
&=\dfrac{h}{8}(\partial_t u_{i-1}+6\partial_t u_{i}+\partial_t u_{i+1}),\\
\int_{\Omega}\partial_x^2(u_h)_t\chi_idx&\approx\int_{\Omega}\delta_x^2(\partial_tu_i)\chi_idx=
\dfrac{\partial_t u_{i+1}-2\partial_t u_i+\partial_t u_{i-1}}{h},\\
\int_{\Omega}\partial_x(\dfrac{\delta \mathcal{H}}{\delta u})\chi_idx&=(\dfrac{\delta\mathcal{H}}{\delta u})_{i+\frac{1}{2}}-(\dfrac{\delta\mathcal{H}}{\delta u})_{i-\frac{1}{2}}=\dfrac{1}{2}\big[(\dfrac{\delta\mathcal{H}}{\delta u})_{i+1}-(\dfrac{\delta\mathcal{H}}{\delta u})_{i-1}\big],
\end{align*}
where the operator $\delta_x^2$ denotes the second-order central difference.
The corresponding discrete inner product and discrete $L^2$ norm are given by
\begin{align*}
(\mathbf{u},\mathbf{v})_h=h\sum_{j=0}^{N-1}u_jv_j,\quad\|\mathbf{u}\|_h=(\mathbf{u},\mathbf{u})_h^{1/2},\quad\forall\;\mathbf{u},\mathbf{v}\in X_h.
\end{align*}
Finally, we define another operator ``$\odot$'' for element-by-element multiplication between two arrays of the same size as
\begin{align*}
(\mathbf{v}\odot\mathbf{w})_j=(\mathbf{w}\odot\mathbf{v})_j=v_jw_j,
\end{align*}
where $\mathbf{v}, \mathbf{w}\in X_h$.
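In a vectorised implementation these discrete operations are immediate. The following minimal Python sketch (illustrative only; the paper itself does not prescribe an implementation) realises the discrete inner product, the discrete norm and the operator ``$\odot$'':
\begin{verbatim}
import numpy as np

def inner_h(u, v, h):
    """Discrete inner product (u, v)_h = h * sum_j u_j v_j."""
    return h * np.dot(u, v)

def norm_h(u, h):
    """Discrete L2 norm ||u||_h."""
    return np.sqrt(inner_h(u, u, h))

def odot(u, v):
    """Componentwise product (u odot v)_j = u_j v_j."""
    return u * v
\end{verbatim}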
In fact, \eqref{semi-discrete-eq} is equivalent to the following system
\begin{align}\label{semi-discrete-eq1}
(\mathbb{A}-\sigma\mathbb{B})\dfrac{d\mathbf{u}}{dt}=-\mathbb{C}\dfrac{\delta \mathcal{H}_h}{\delta \mathbf{u}},
\end{align}
where $\mathbf{u}=(u_0(t),u_1(t),\cdots,u_{N-1}(t))^T$, $\mathcal{H}_h=h\sum\limits_{j=0}^{N-1}\big(\dfrac{\gamma u_j^3}{6}+\dfrac{au_j^2}{2}\big)$ and
\begin{align*}
\mathbb{A}=\dfrac{h}{8}\left(
\begin{array}{cccccc}
6& 1& 0& \cdots& 1\\
1& 6& 1& \cdots& 0\\
\vdots& \vdots& \vdots& \vdots& \vdots \\
1& 0& \cdots& 1& 6
\end{array}
\right),\quad
\mathbb{B}=\dfrac{1}{h}\left(
\begin{array}{ccccc}
-2& 1& 0& \cdots& 1\\
1& -2& 1& \cdots& 0\\
\vdots& \vdots& \vdots& \vdots& \vdots \\
1& 0& \cdots&\ 1& -2
\end{array}
\right),\quad
\mathbb{C}=\dfrac{1}{2}\left(
\begin{array}{ccccc}
0& 1& 0& \cdots& -1\\
-1& 0& 1& \cdots& 0\\
\vdots& \vdots& \vdots& \vdots& \vdots \\
1&\ 0& \cdots& -1& 0
\end{array}
\right).
\end{align*}
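For completeness, the following short Python sketch (illustrative only, not part of the derivation) assembles the circulant matrices $\mathbb{A}$, $\mathbb{B}$ and $\mathbb{C}$ for a given number of cells $N$ and mesh size $h$, and checks their symmetry properties:
\begin{verbatim}
import numpy as np

def assemble_matrices(N, h):
    """Assemble the circulant matrices A, B, C of the semi-discrete system."""
    A = np.zeros((N, N)); B = np.zeros((N, N)); C = np.zeros((N, N))
    for i in range(N):
        ip, im = (i + 1) % N, (i - 1) % N        # periodic neighbours
        A[i, i] = 6.0 * h / 8.0; A[i, ip] = h / 8.0; A[i, im] = h / 8.0
        B[i, i] = -2.0 / h;      B[i, ip] = 1.0 / h; B[i, im] = 1.0 / h
        C[i, ip] = 0.5;          C[i, im] = -0.5
    return A, B, C

# sanity check: A and B are symmetric, C is antisymmetric
A, B, C = assemble_matrices(8, 0.1)
assert np.allclose(A, A.T) and np.allclose(B, B.T) and np.allclose(C, -C.T)
\end{verbatim}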
\begin{lemma}\label{integ-by-parts-lem}\cite{Gong2017}
For any real square matrix $\mathbb{A}$ and $\mathbf{u},\mathbf{v}\in X_h$, we have
\begin{align*}
(\mathbb{A}\mathbf{u},\mathbf{v})_h=(\mathbf{u},\mathbb{A}^T\mathbf{v})_h.
\end{align*}
\end{lemma}
\begin{theorem}
The semi-discrete scheme \eqref{semi-discrete-eq1} satisfies the discrete energy conservation law
\begin{align*}
\dfrac{d}{dt}\mathcal{H}_h=0,
\end{align*}
where $\mathcal{H}_h=h\sum\limits_{j=0}^{N-1}\big(\dfrac{\gamma u_j^3}{6}+\dfrac{au_j^2}{2}\big)$.
\end{theorem}
\begin{proof}
Taking the discrete inner product of both sides of \eqref{semi-discrete-eq1} with $\delta \mathcal{H}_h/\delta \mathbf{u}$, we have
\begin{align}\label{semi-disc-theom-eq1}
\left(\dfrac{d\mathbf{u}}{dt},\dfrac{\delta \mathcal{H}_h}{\delta \mathbf{u}}\right)_h=-\left((\mathbb{A}-\sigma \mathbb{B})^{-1}\mathbb{C}\dfrac{\delta \mathcal{H}_h}{\delta \mathbf{u}},\dfrac{\delta \mathcal{H}_h}{\delta \mathbf{u}}\right)_h.
\end{align}
Using the antisymmetry of the matrix $(\mathbb{A}-\sigma \mathbb{B})^{-1}\mathbb{C}$ and Lemma \ref{integ-by-parts-lem}, the right-hand side of \eqref{semi-disc-theom-eq1} becomes
\begin{align}\label{semi-disc-theom-eq2}
\left((\mathbb{A}-\sigma \mathbb{B})^{-1}\mathbb{C}\dfrac{\delta \mathcal{H}_h}{\delta \mathbf{u}},\dfrac{\delta \mathcal{H}_h}{\delta \mathbf{u}}\right)_h=0.
\end{align}
Note that
\begin{align}\label{semi-disc-theom-eq3}
\left(\dfrac{d\mathbf{u}}{dt},\dfrac{\delta \mathcal{H}_h}{\delta \mathbf{u}}\right)_h=\dfrac{d}{dt}\mathcal{H}_h.
\end{align}
Combining \eqref{semi-disc-theom-eq1} and \eqref{semi-disc-theom-eq2} with \eqref{semi-disc-theom-eq3} yields the desired result and completes the proof.
\end{proof}
\section{Energy conservation scheme for the RLW equation}
In this section, we construct two energy-preserving schemes for the RLW equation by the DVDM. One is a fully implicit energy-preserving (FIEP) scheme, and the other is a linear-implicit energy-preserving (LIEP) scheme.
For a positive integer $N_t$, we denote time step $\tau=T/N_t$, $t_n=n\tau,\ 0\leq n\leq N_t$. We define
\begin{align*}
&\delta_t^+{\mathbf u}^n=\dfrac{\mathbf u^{n+1}-\mathbf u^n}{\tau},\quad \delta_t\mathbf{u}^n=\dfrac{\mathbf{u}^{n+1}-\mathbf{u}^{n-1}}{2\tau},\\[0.3cm]
&\mathbf{\overline{u}}^{n+\frac{1}{2}}=\dfrac{3\mathbf{u}^{n}-\mathbf{u}^{n-1}}{2},\quad
\mathbf{u}^{n+\frac{1}{2}}=\dfrac{\mathbf{u}^{n+1}+\mathbf{u}^n}{2},\quad A_t\mathbf{u}^n=\dfrac{\mathbf u^{n+1}+\mathbf{u}^{n-1}}{2}.
\end{align*}
\subsection{Fully-implicit energy-preserving (FIEP) scheme for the RLW equation}
We define a discrete form of the energy and its partial derivative as
\begin{align*}
H_h^n(u_j^n)=\dfrac{\gamma}{6}(u_j^n)^3+\dfrac{a}{2}(u_j^n)^2,\quad
{\mathcal{H}}_h^n=h\sum_{j=0}^{N-1}H_h^n(u^n_j),
\end{align*}
and
\begin{align*}
\dfrac{\partial H_h^n}{\partial(u_j^n,u_j^{n+1})}&=\dfrac{\gamma}{6}\dfrac{(u_j^{n+1})^3-(u_j^n)^3}{u_j^{n+1}-u_j^n}
+\dfrac{a}{2}\dfrac{(u_j^{n+1})^2-(u_j^n)^2}{u_j^{n+1}-u_j^n}\\[0.3cm]
&=\gamma \dfrac{(u_j^{n+1})^2+u_j^{n+1}u_j^n+(u_j^n)^2}{6}+
a\dfrac{u_j^{n+1}+u_j^n}{2}.
\end{align*}
We approximate $\delta\mathcal{H}/\delta u$ by the discrete version of the variational
derivative, i.e.,
\begin{align*}
\dfrac{\delta{\mathcal{H}}_h^n}{\delta(u_j^n,u_j^{n+1})}=\dfrac{\partial H^n_h}{\partial(u_j^n,u_j^{n+1})}-\dfrac{\partial}{\partial x}\left(\dfrac{\partial H_h^n}{\partial((u_x)_j^n,(u_x)_j^{n+1})}\right)=\dfrac{\partial H^n_h}{\partial(u_j^n,u_j^{n+1})}.
\end{align*}
We discretize the above semi-discrete system \eqref{semi-discrete-eq} in time by the DVDM; thus, the fully discrete scheme is given by
\begin{align*}
\Big((1-\sigma \partial_x^2)\delta_t^{+}u^n_h,v_h\Big)=-\left(\partial_x\left(\dfrac{\delta {\mathcal{H}}_h^n}{\delta(u^{n+1},u^n)}\right),v_h\right).
\end{align*}
For simplicity, it can be written as the following compact representation
\begin{align}\label{matrix-form-eq1}
\delta_t^+\mathbf{u}^n=-(\mathbb{A}-\sigma \mathbb{B})^{-1}\mathbb{C}F(\mathbf{u}^n,\mathbf{u}^{n+1}),
\end{align}
where
\begin{align*}
F(\mathbf{u}^n,\mathbf{u}^{n+1})=\dfrac{\gamma}{6}(\mathbf{u}^{n+1}\odot\mathbf{u}^{n+1}
+\mathbf{u}^{n}\odot\mathbf{u}^{n+1}+\mathbf{u}^n\odot\mathbf{u}^n)+\dfrac{a}{2}(\mathbf{u}^{n+1}+\mathbf{u}^n).
\end{align*}
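Since \eqref{matrix-form-eq1} is nonlinear in $\mathbf{u}^{n+1}$, an iteration is required at each time step. The following Python sketch (illustrative only; the paper does not prescribe a particular nonlinear solver) performs one FIEP step with a simple fixed-point iteration, assuming the matrices $\mathbb{A}$, $\mathbb{B}$, $\mathbb{C}$ are assembled as in the sketch of Section 2:
\begin{verbatim}
import numpy as np

def F(un, unp1, gamma, a):
    """Discrete variational derivative F(u^n, u^{n+1}) of the FIEP scheme."""
    return gamma / 6.0 * (unp1 * unp1 + un * unp1 + un * un) \
           + a / 2.0 * (unp1 + un)

def fiep_step(un, tau, A, B, C, sigma, gamma, a, tol=1e-12, maxit=100):
    """One FIEP step, solved here by a simple fixed-point iteration."""
    M = A - sigma * B
    unp1 = un.copy()                       # initial guess
    for _ in range(maxit):
        rhs = M @ un - tau * (C @ F(un, unp1, gamma, a))
        new = np.linalg.solve(M, rhs)
        if np.linalg.norm(new - unp1) < tol:
            return new
        unp1 = new
    return unp1
\end{verbatim}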
\begin{theorem}\label{theorem-FIEP}
Under the periodic boundary condition, the FIEP scheme preserves the discrete
energy conservation law
\begin{align*}
\mathcal{H}_h^{n+1}=\mathcal{H}_h^n,\quad \forall\; n\geq 0,
\end{align*}
where $\mathcal{H}_h^n=h\sum\limits_{j=0}^{N-1}\big(\dfrac{\gamma (u^n_j)^3}{6}+\dfrac{a(u^n_j)^2}{2}\big)$ is called the total energy at the discrete level.
\end{theorem}
\begin{proof}
Noticing that $(\mathbb{A}-\sigma\mathbb{B})^{-1}\mathbb{C}$ is a skew-symmetric matrix and using Lemma \ref{integ-by-parts-lem}, we have
\begin{align*}
\left((\mathbb{A}-\sigma\mathbb{B})^{-1}\mathbb{C}F(\mathbf{u}^n,\mathbf{u}^{n+1}),F(\mathbf{u}^n,\mathbf{u}^{n+1})\right)_h=0.
\end{align*}
Therefore, taking the discrete inner product of both sides of \eqref{matrix-form-eq1} with $F(\mathbf{u}^n,\mathbf{u}^{n+1})$, we have
\begin{align*}
0=(\delta_t^{+}\mathbf{u}^n,F(\mathbf{u}^n,\mathbf{u}^{n+1}))_h=\dfrac{1}{\tau}(\mathcal{H}_h^{n+1}-\mathcal{H}_h^{n}).
\end{align*}
The proof is complete.
\end{proof}
\subsection{Linear-implicit energy-preserving (LIEP) scheme for the RLW equation}
Different from the previous one, we define a new discrete energy and its partial derivative as follows:
\begin{align*}
{\mathcal{H}}_h^n=h\sum_{j=0}^{N-1} H_h^n(u_j^n,u_j^{n+1}),\quad
H_h^n(u_j^n,u_j^{n+1})=\gamma\dfrac{u_j^{n+1}u_j^n(u_j^{n+1}+u_j^n)}{12}+a\dfrac{u_j^nu_j^{n+1}}{2},
\end{align*}
and
\begin{align*}
\dfrac{\partial H_h^n}{\partial(u_j^{n-1},u_j^n,u_j^{n+1})}=\dfrac{ H_h^n(u_j^{n+1},u_j^n)- H_h^{n-1}(u_j^n,u_j^{n-1})}{\frac{1}{2}(u_j^{n+1}-u_j^{n-1})}.
\end{align*}
Therefore, the approximation of $\delta\mathcal{H}/\delta u$ can be given by
\begin{align*}
\dfrac{\delta\mathcal{H}^n_h}{\delta(u_j^{n-1},u_j^n,u_j^{n+1})}=\dfrac{\partial H_h^n}{\partial(u_j^{n-1},u_j^n,u_j^{n+1})}=au_j^n+\gamma\dfrac{u_j^n(u_j^{n-1}+u_j^n+u_j^{n+1})}{6}.
\end{align*}
Applying the novel DVDM mentioned above in the temporal direction to the semi-discrete form \eqref{semi-discrete-eq}, we can obtain the fully discrete scheme
\begin{align}\label{scheme2-eq1}
\Big((1-\sigma \partial_x^2)\delta_tu^n_h,v_h\Big)=-\left(\partial_x\left(\dfrac{\delta\mathcal{H}_h^n}{\delta(u^{n-1},u^n,u^{n+1})}\right),v_h\right).
\end{align}
Obviously, we can rewrite \eqref{scheme2-eq1} in the following compact form
\begin{align}\label{scheme2-eq2}
(\mathbb{A}-\sigma \mathbb{B})\delta_t\mathbf{u}^n=-\mathbb{C}G(\mathbf{u}^{n-1},\mathbf{u}^n,\mathbf{u}^{n+1}),
\end{align}
where
\begin{align*}
G(\mathbf{u}^{n-1},\mathbf{u}^n,\mathbf{u}^{n+1})=a\mathbf{u}^n+\dfrac{\gamma}{6}\mathbf{u}^n\odot(\mathbf{u}^{n+1}+\mathbf{u}^n+\mathbf{u}^{n-1}).
\end{align*}
Note that the above system is a three-level scheme, where $\mathbf{u}^1$ can be given by a suitable two-level scheme
\begin{align*}
(\mathbb{A}-\sigma \mathbb{B})\delta_t^+\mathbf{u}^0=-\mathbb{C}F(\mathbf{u}^0,\mathbf{u}^1),
\end{align*}
where
\begin{align*}
F(\mathbf{u}^0,\mathbf{u}^1)=\dfrac{\gamma}{6}\big(\mathbf{u}^{1}\odot\mathbf{u}^{1}+\mathbf{u}^{1}\odot\mathbf{u}^0+\mathbf{u}^0\odot\mathbf{u}^0\big)+\dfrac{a}{2}(\mathbf{u}^{1}+\mathbf{u}^0).
\end{align*}
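Although \eqref{scheme2-eq2} couples three time levels, it is linear in $\mathbf{u}^{n+1}$, so only one linear system has to be solved per step. A minimal Python sketch of one LIEP step (illustrative only, with the same matrices as above) is:
\begin{verbatim}
import numpy as np

def liep_step(unm1, un, tau, A, B, C, sigma, gamma, a):
    """One LIEP step: a single linear solve for u^{n+1}."""
    M = A - sigma * B
    # implicit part of G(u^{n-1}, u^n, u^{n+1}) is (gamma/6) diag(u^n) u^{n+1}
    L = M / (2.0 * tau) + gamma / 6.0 * (C @ np.diag(un))
    rhs = M @ unm1 / (2.0 * tau) - C @ (a * un + gamma / 6.0 * un * (un + unm1))
    return np.linalg.solve(L, rhs)
\end{verbatim}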
\begin{theorem}
Under the periodic boundary condition, the LIEP scheme preserves the discrete
energy conservation law
\begin{align*}
\mathcal{H}_h^{n+1}=\mathcal{H}^n_h,
\end{align*}
where
\begin{align*}
\mathcal{H}_h^n=h\sum\limits_{j=0}^{N-1}\Big(\gamma\dfrac{u_j^{n+1}u_j^n(u_j^{n+1}+u_j^n)}{12}+a\dfrac{u_j^nu_j^{n+1}}{2}\Big).
\end{align*}
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem \ref{theorem-FIEP} and is thus omitted.
\end{proof}
\section{The conservation of the quadratic invariant}
In order to develop an equivalent system with a quadratic energy functional, we introduce a new intermediate variable $v=u^2$ and rewrite the RLW equation \eqref{model-eq} in the following equivalent formulation:
\begin{align*}
&(1-\sigma\partial_x^2)u_t=-\partial_x\left(\dfrac{\gamma}{6}v+au+\dfrac{\gamma}{3}u^2\right),\\[0.3cm]
&(1-\sigma\partial_x^2)v_t=-2u\partial_x\left(\dfrac{\gamma}{6}v+au+\dfrac{\gamma}{3}u^2\right),
\end{align*}
with the corresponding quadratic energy conservation law
\begin{align*}
\mathcal{H}(t)=\int\left(\dfrac{\gamma}{6}uv+\dfrac{a}{2}u^2\right)dx\equiv \mathcal{H}(0).
\end{align*}
The so-called modified finite volume discretization in space of this system is based on the Galerkin formulation; thus, we have
\begin{align*}
\begin{array}{ll}
\Big((1-\sigma\partial_x^2)(u_h)_t,w_1\Big)=-\left(\partial_x\big(\dfrac{\gamma}{6}v_h+au_h+\dfrac{\gamma}{3}u_h^2\big),w_1\right),\quad&\forall\;w_1\in V_h,\\[0.3cm]
\Big((1-\sigma\partial_x^2)(v_h)_t,w_2\Big)=-\left(2u_h\partial_x\big(\dfrac{\gamma}{6}v_h+au_h+\dfrac{\gamma}{3}u_h^2\big),w_2\right),\quad&\forall\;w_2\in V_h.
\end{array}
\end{align*}
The semi-discrete system is equivalent to the following ODE system
\begin{align}\label{new-semi-discrete-eq}
\begin{split}
&(\mathbb{A}-\sigma \mathbb{B})\dfrac{d}{dt}\mathbf{u}=-\mathbb{C}\left(\dfrac{\gamma}{6}\mathbf{v}+a\mathbf{u}\right)-\dfrac{\gamma}{3}\mathbb{C}(\mathbf{u}\odot\mathbf{u}),\\[0.3cm]
&(\mathbb{A}-\sigma\mathbb{B})\dfrac{d}{dt}\mathbf{v}=-2\mathbf{u}\odot \mathbb{C}\left(\dfrac{\gamma}{6}\mathbf{v}+a\mathbf{u}\right)-\dfrac{2\gamma}{3}\mathbf{u}\odot\mathbb{C}(\mathbf{u}\odot\mathbf{u}).
\end{split}
\end{align}
\subsection{Linear-implicit Crank-Nicolson (LICN) scheme for the RLW equation}
Applying the linear-implicit Crank-Nicolson scheme in temporal direction for the semi-discrete system \eqref{new-semi-discrete-eq}, we have
\begin{align}
&(\mathbb{A}-\sigma\mathbb{B})\delta_t^+\mathbf{u}^n=-\mathbb{C}\left(\dfrac{\gamma}{6}\mathbf{v}^{n+\frac{1}{2}}+a\mathbf{u}^{n+\frac{1}{2}}\right)
-\dfrac{\gamma}{3}\mathbb{C}\mathrm{diag}(\mathbf{\bar{u}}^{n+\frac{1}{2}})\mathbf{u}^{n+\frac{1}{2}},\label{linear-scheme-eq1}\\[0.3cm]
&(\mathbb{A}-\sigma\mathbb{B})\delta_t^+\mathbf{v}^n=-2\mathrm{diag}(\mathbf{\bar{u}}^{n+\frac{1}{2}})\mathbb{C}\left(\dfrac{\gamma}{6}\mathbf{v}^{n+\frac{1}{2}}
+a\mathbf{u}^{n+\frac{1}{2}}\right)-\dfrac{2\gamma}{3}\mathrm{diag}(\mathbf{\bar u}^{n+\frac{1}{2}})\mathbb{C}\mathrm{diag}(\mathbf{\bar u}^{n+\frac{1}{2}})\mathbf{u}^{n+\frac{1}{2}},\notag
\end{align}
where $n\geq 1$. Obviously, \eqref{linear-scheme-eq1} is also a three-level scheme which cannot start by itself. Thus, the first step $\mathbf{u}^1$ can be computed by a proper two-level scheme as follows
\begin{align}\label{start-linear-scheme-eq}
\begin{split}
&(\mathbb{A}-\sigma\mathbb{B})\delta_t^+\mathbf{u}^0=-\mathbb{C}\left(\dfrac{\gamma}{6}\mathbf{v}^{\frac{1}{2}}+a\mathbf{u}^{\frac{1}{2}}\right)-\dfrac{\gamma}{3}\mathbb{C}\mathrm{diag}(\mathbf{u}^{0})\mathbf{u}^{\frac{1}{2}},\\[0.3cm]
&(\mathbb{A}-\sigma\mathbb{B})\delta_t^+\mathbf{v}^0=-2\mathrm{diag}(\mathbf{u}^{0})\mathbb{C}\left(\dfrac{\gamma}{6}\mathbf{v}^{\frac{1}{2}}+a\mathbf{u}^{\frac{1}{2}}\right)-\dfrac{2\gamma}{3}\mathrm{diag}(\mathbf{u}^{0})\mathbb{C}\mathrm{diag}(\mathbf{u}^{0})\mathbf{u}^{\frac{1}{2}},\\[0.3cm]
&\mathbf{v}^0=\mathbf{u}^0\odot\mathbf{u}^0.
\end{split}
\end{align}
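One LICN step amounts to a single linear solve for the pair $(\mathbf{u}^{n+1},\mathbf{v}^{n+1})$. The following Python sketch (illustrative only; it follows the displayed equations and uses a dense solver for clarity, whereas a practical code would exploit the sparse circulant structure) performs one step of \eqref{linear-scheme-eq1}:
\begin{verbatim}
import numpy as np

def licn_step(unm1, un, vn, tau, A, B, C, sigma, gamma, a):
    """One LICN step: solve a (2N)x(2N) linear system for (u^{n+1}, v^{n+1})."""
    N = un.size
    M = A - sigma * B
    D = np.diag((3.0 * un - unm1) / 2.0)          # diag(u-bar^{n+1/2})
    A11 = M / tau + a / 2.0 * C + gamma / 6.0 * (C @ D)
    A12 = gamma / 12.0 * C
    A21 = a * (D @ C) + gamma / 3.0 * (D @ C @ D)
    A22 = M / tau + gamma / 6.0 * (D @ C)
    lhs = np.block([[A11, A12], [A21, A22]])
    rhs = np.concatenate([2.0 * (M @ un) / tau - A11 @ un - A12 @ vn,
                          2.0 * (M @ vn) / tau - A21 @ un - A22 @ vn])
    sol = np.linalg.solve(lhs, rhs)
    return sol[:N], sol[N:]
\end{verbatim}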
\begin{theorem}
The scheme \eqref{linear-scheme-eq1}, together with the starting scheme \eqref{start-linear-scheme-eq}, satisfies the discrete
energy conservation law
\begin{align}\label{energy-conservation-law}
\mathcal{H}_h^n=\mathcal{H}_h^0,\quad\forall\;n\geq 0,
\end{align}
where $\mathcal{H}_h^n=h\sum\limits_{j=0}^{N-1}\big(\dfrac{\gamma u^n_jv^n_j}{6}+\dfrac{a(u^n_j)^2}{2}\big)$.
\end{theorem}
\begin{proof}
Taking the discrete inner product of the first equation of \eqref{linear-scheme-eq1} with $\frac{\gamma}{6}\mathbf{v}^{n+\frac{1}{2}}+a\mathbf{u}^{n+\frac{1}{2}}$ yields
\begin{align}\label{quadratic-theorem-eq1}
\left(\delta_t^+\mathbf{u}^n,\dfrac{\gamma}{6}\mathbf{v}^{n+\frac{1}{2}}+a\mathbf{u}^{n+\frac{1}{2}}\right)_h
&=-\dfrac{\gamma^2}{18}\left((\mathbb{A}-\sigma\mathbb{B})^{-1}\mathbb{C}\mathrm{diag}(\bar{\mathbf{u}}^{n+\frac{1}{2}})\mathbf{u}^{n+\frac{1}{2}},\mathbf{v}^{n+\frac{1}{2}}\right)_h.
\end{align}
Here the antisymmetry of $(\mathbb{A}-\sigma\mathbb{B})^{-1}\mathbb{C}$ has been used to eliminate the first term on the right-hand side.
In a similar way, taking the discrete inner product of the second equation of \eqref{linear-scheme-eq1} with $\frac{\gamma}{6}\mathbf{u}^{n+\frac{1}{2}}$, we have
\begin{align}\label{inner-produce-eq2}
\left(\delta_t^+\mathbf{v}^n,\dfrac{\gamma}{6}\mathbf{u}^{n+\frac{1}{2}}\right)_h
=-\dfrac{\gamma^2}{18}\left(\mathrm{diag}(\bar{\mathbf{u}}^{n+\frac{1}{2}})(\mathbb{A}-\sigma\mathbb{B})^{-1}\mathbb{C}\mathbf{v}^{n+\frac{1}{2}},\mathbf{u}^{n+\frac{1}{2}}\right)_h
\end{align}
due to the antisymmetry of $\mathbb{C}$.
Using Lemma \ref{integ-by-parts-lem}, we have
\begin{align*}
\left((\mathbb{A}-\sigma\mathbb{B})^{-1}\mathbb{C}\mathrm{diag}(\bar{\mathbf{u}}^{n+\frac{1}{2}})\mathbf{u}^{n+\frac{1}{2}},\mathbf{v}^{n+\frac{1}{2}}\right)_h
+\left(\mathrm{diag}(\bar{\mathbf{u}}^{n+\frac{1}{2}})(\mathbb{A}-\sigma\mathbb{B})^{-1}\mathbb{C}\mathbf{v}^{n+\frac{1}{2}},\mathbf{u}^{n+\frac{1}{2}}\right)_h=0.
\end{align*}
So, adding \eqref{quadratic-theorem-eq1} and \eqref{inner-produce-eq2}, we obtain
\begin{align}
\left(\delta_t^+\mathbf{u}^n,\dfrac{\gamma}{6}\mathbf{v}^{n+\frac{1}{2}}+a\mathbf{u}^{n+\frac{1}{2}}\right)_h+\left(\delta_t^+\mathbf{v}^n,\dfrac{\gamma}{6}\mathbf{u}^{n+\frac{1}{2}}\right)_h=0,
\end{align}
i.e.,
\begin{align*}
&\dfrac{\gamma}{12\tau}\left((\mathbf{u}^{n+1})^T\mathbf{v}^{n+1}-(\mathbf{u}^{n})^T\mathbf{v}^{n}+(\mathbf{u}^{n+1})^T\mathbf{v}^n-(\mathbf{u}^n)^T\mathbf{v}^{n+1}\right)
+\dfrac{a}{2\tau}\left((\mathbf{u}^{n+1})^T\mathbf{u}^{n+1}-(\mathbf{u}^n)^T\mathbf{u}^n\right)\\
&+\dfrac{\gamma}{12\tau}\left((\mathbf{u}^{n+1})^T\mathbf{v}^{n+1}-(\mathbf{u}^{n})^T\mathbf{v}^{n}+(\mathbf{u}^{n})^T\mathbf{v}^{n+1}-(\mathbf{u}^{n+1})^T\mathbf{v}^{n}\right)=0.
\end{align*}
Then, we have
\begin{align*}
\dfrac{\gamma}{6}(\mathbf{u}^{n+1})^T\mathbf{v}^{n+1}+\dfrac{a}{2}(\mathbf{u}^{n+1})^T\mathbf{u}^{n+1}-\dfrac{\gamma}{6}(\mathbf{u}^{n})^T\mathbf{v}^{n}-\dfrac{a}{2}(\mathbf{u}^{n})^T\mathbf{u}^{n}=0,
\end{align*}
which implies
\begin{align}\label{theorem-result-eq1}
\mathcal{H}_h^{n+1}=\mathcal{H}_h^n,\quad\forall\;n\geq 1.
\end{align}
Similarly, we apply the same technique to \eqref{start-linear-scheme-eq} and obtain
\begin{align}\label{theorem-result-eq2}
\mathcal{H}_h^{1}=\mathcal{H}_h^0.
\end{align}
Combining \eqref{theorem-result-eq1} with \eqref{theorem-result-eq2} leads to \eqref{energy-conservation-law} and concludes the proof.
\end{proof}
\subsection{Linear-implicit leap-frog (LILF) scheme for the RLW equation}
Applying the linear-implicit leap-frog scheme in time to \eqref{new-semi-discrete-eq}, we obtain
\begin{align}\label{linear-scheme-eq2}
\begin{split}
&(\mathbb{A}-\sigma\mathbb{B})\delta_t\mathbf{u}^n=-\mathbb{C}\left(\dfrac{\gamma}{6}A_t\mathbf{v}^{n}+aA_t\mathbf{u}^{n}\right)-\dfrac{\gamma}{3}\mathbb{C}\mathrm{diag}(\mathbf{u}^{n})A_t\mathbf{u}^{n},\\[0.3cm]
&(\mathbb{A}-\sigma\mathbb{B})\delta_t\mathbf{v}^n=-2\mathrm{diag}(\mathbf{u}^{n})\mathbb{C}\left(\dfrac{\gamma}{6}A_t\mathbf{v}^{n}+aA_t\mathbf{u}^{n}\right)-\dfrac{2\gamma}{3}\mathrm{diag}(\mathbf{u}^{n})\mathbb{C}\mathrm{diag}(\mathbf{u}^{n})A_t\mathbf{u}^{n},
\end{split}
\end{align}
where the starting values $\mathbf{u}^1$ and $\mathbf{v}^1$ are still computed by \eqref{start-linear-scheme-eq}.
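The LILF step has the same block structure as the LICN step; only the time increment, the frozen coefficient $\mathrm{diag}(\mathbf{u}^{n})$ and the data $(\mathbf{u}^{n-1},\mathbf{v}^{n-1})$ change. A minimal Python sketch (illustrative only) is:
\begin{verbatim}
import numpy as np

def lilf_step(unm1, vnm1, un, tau, A, B, C, sigma, gamma, a):
    """One LILF step: a single (2N)x(2N) linear solve for (u^{n+1}, v^{n+1})."""
    N = un.size
    M = A - sigma * B
    D = np.diag(un)                               # frozen coefficient diag(u^n)
    A11 = M / (2.0 * tau) + a / 2.0 * C + gamma / 6.0 * (C @ D)
    A12 = gamma / 12.0 * C
    A21 = a * (D @ C) + gamma / 3.0 * (D @ C @ D)
    A22 = M / (2.0 * tau) + gamma / 6.0 * (D @ C)
    lhs = np.block([[A11, A12], [A21, A22]])
    rhs = np.concatenate([(M @ unm1) / tau - A11 @ unm1 - A12 @ vnm1,
                          (M @ vnm1) / tau - A21 @ unm1 - A22 @ vnm1])
    sol = np.linalg.solve(lhs, rhs)
    return sol[:N], sol[N:]
\end{verbatim}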
\begin{theorem}
The scheme \eqref{linear-scheme-eq2}, together with the starting scheme \eqref{start-linear-scheme-eq}, satisfies the discrete
energy conservation law
\begin{align}\label{energy-conservation-law2}
\mathcal{H}_h^n=\mathcal{H}_h^0,\quad\forall\;n\geq 0,
\end{align}
where
$\mathcal{H}_h^n=h\sum\limits_{j=0}^{N-1}\big(\dfrac{\gamma u^n_jv^n_j}{6}+\dfrac{a(u^n_j)^2}{2}\big).$
\end{theorem}
\begin{proof}
The proof of this theorem is similar to the previous one and thus it is omitted.
\end{proof}
\section{Numerical experiments}
As discussed above, conservation laws play an important role in the migration of solitary waves. Therefore, we pay attention to the efficiency and conservation properties of our proposed new schemes.
\noindent
The numerical errors in the $L^2$ and $L^{\infty}$ norms are defined by
\begin{align*}
\|E_u\|_{0,h}=\left(h\sum_{j=0}^{N-1}|u(x_j,t_n)-u_j^n|^2\right)^{1/2},\quad
\|E_u\|_{\infty}=\max_{0\leq j\leq N-1}|u(x_j,t_n)-u_j^n|.
\end{align*}
The convergence order is calculated with the formula
\begin{align*}
\mathrm{Order}=\dfrac{\log(error_1/error_2)}{\log(\delta_1/\delta_2)},
\end{align*}
where $\delta_j$ and $error_j$ $(j=1,2)$ denote the step sizes and the corresponding errors obtained with step size $\delta_j$, respectively.
In order to show the preservation of the invariants at the $n$-th time level, we monitor the changes in mass and energy, $\mathcal{M}_h^n-\mathcal{M}_h^0$ and $\mathcal{H}_h^n-\mathcal{H}_h^0$, respectively.
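The following Python sketch (illustrative only) collects these diagnostic quantities; the discrete mass is taken here as $\mathcal{M}_h^n=h\sum_j u_j^n$, a natural choice that is assumed rather than stated explicitly in the text:
\begin{verbatim}
import numpy as np

def l2_error(u_exact, u_num, h):
    """Discrete L2 error ||u(., t_n) - u^n||_{0,h}."""
    return np.sqrt(h * np.sum((u_exact - u_num) ** 2))

def linf_error(u_exact, u_num):
    """Discrete maximum-norm error."""
    return np.max(np.abs(u_exact - u_num))

def conv_order(error1, error2, delta1, delta2):
    """Convergence order computed from two errors at step sizes delta1, delta2."""
    return np.log(error1 / error2) / np.log(delta1 / delta2)

def mass_energy(u, v, h, gamma, a):
    """Discrete mass (assumed h*sum u_j) and quadratic energy H_h."""
    return h * np.sum(u), h * np.sum(gamma / 6.0 * u * v + a / 2.0 * u ** 2)
\end{verbatim}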
\noindent
$\mathbf{Example\; 1}$ (Motion of a single solitary wave)
The RLW equation has an analytic solution of the form
\begin{align*}
u(x,t)=3c\mathrm{sech}^2(m(x-(\gamma c+a)t-x_0)),
\end{align*}
which corresponds to the motion of a single solitary wave with amplitude $3c$, initial center at $x_0$, wave velocity $v=a+\gamma c$ and width parameter $m=\sqrt{\gamma c/(v\sigma)}/2$. For this problem, the theoretical values of the invariants are
\begin{align*}
M=\dfrac{6c}{m},\quad H=\dfrac{6c^2}{m}+\dfrac{24c^3}{5m},
\end{align*}
which correspond to the mass and energy \cite{ZakiSI2001}, respectively. Next, we solve
the RLW equation with initial condition
\begin{align*}
u(x,0)=3c\mathrm{sech}^2(mx).
\end{align*}
All computations are done with the parameters $x_0=0$, $a=1$, $\sigma=1$ and $\gamma=1$.
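For reference, the following Python sketch (illustrative only) evaluates the exact solitary-wave solution and the theoretical invariants used in this example:
\begin{verbatim}
import numpy as np

def exact_solitary(x, t, c, a, gamma, sigma, x0=0.0):
    """Exact solution u(x,t) = 3c sech^2(m(x - v t - x0))."""
    v = a + gamma * c                              # wave velocity
    m = 0.5 * np.sqrt(gamma * c / (v * sigma))     # width parameter
    return 3.0 * c / np.cosh(m * (x - v * t - x0)) ** 2

def theoretical_invariants(c, a, gamma, sigma):
    """Theoretical mass M = 6c/m and energy H = 6c^2/m + 24c^3/(5m)."""
    v = a + gamma * c
    m = 0.5 * np.sqrt(gamma * c / (v * sigma))
    return 6.0 * c / m, 6.0 * c ** 2 / m + 24.0 * c ** 3 / (5.0 * m)

# setting of the first test: c = 1/3, a = sigma = gamma = 1, x in [-60, 200]
x = np.linspace(-60.0, 200.0, 2601)                # h = 0.1
u0 = exact_solitary(x, 0.0, 1.0 / 3.0, 1.0, 1.0, 1.0)
print(theoretical_invariants(1.0 / 3.0, 1.0, 1.0, 1.0))
\end{verbatim}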
First, the proposed schemes are run with $x\in[-60,200]$ and $c=1/3$ up to $T=75$. Fig. \ref{fig:error-momentum} shows the errors in mass and energy for the four proposed schemes FIEP, LIEP, LICN and LILF. The mass and energy are both conserved up to roundoff error, supporting our theoretical results.
Second, we test the convergence order of the solution numerically and display the computational efficiency by considering the initial condition with $c=1$;
the run of the algorithm is continued up to time $T=1$
over the problem domain $[-40,60]$. The accuracy of the numerical solutions is shown in Fig. \ref{fig:accuracy-numrical-solution}, where second-order convergence can be explicitly observed. The CPU costs of the four schemes are presented in Fig. \ref{fig:CPU-time}, which shows that the schemes LIEP, LICN and LILF are more efficient than the FIEP scheme.
Finally, we investigate the numerical values of mass and energy, and the $L^2$ and $L^{\infty}$ error norms of the numerical solutions are given in Tables \ref{tab:FIEP}--\ref{tab:LILF}. As seen from these tables, the numerical values of mass and energy coincide with their analytical values. Indeed, the mass and energy each remain almost constant, and the error norms in $L^2$ and $L^{\infty}$ are satisfactorily small. In addition, we compare the errors of the numerical solutions in $L^2$ and $L^{\infty}$ and the computation time among the proposed four schemes and some existing schemes \cite{CaiHong2017, Caijiaxiang2011, Dogan2002} in Table \ref{tab:Numer-Compari}. On the one hand, our proposed implicit scheme FIEP is more efficient and effective than the existing schemes NC-II \cite{CaiHong2017} and AMC-CN \cite{Caijiaxiang2011}. On the other hand, our proposed linear schemes LIEP, LICN and LILF all produce higher-precision solutions than the existing scheme Linear-CN \cite{Dogan2002}. In a word, our schemes are efficient and reliable.
\begin{figure}
\caption{\small The errors in mass (left) and energy (right) of the four schemes with $c=1/3$, $\tau=0.05$, $h=0.1$ and $x\in[-60,200]$ until $T=75$.}
\label{fig:error-momentum}
\end{figure}
\begin{figure}
\caption{\small The accuracy of numerical solutions in the $L^2$ and $L^{\infty}$ norms.}
\label{fig:accuracy-numrical-solution}
\end{figure}
\begin{figure}
\caption{\small Comparison of $L^2$ and $L^{\infty}$ errors versus CPU time for the four schemes.}
\label{fig:CPU-time}
\end{figure}
\begin{table}[!htbp]
{\caption{ The invariants and errors of numerical solutions for the scheme FIEP with $c=0.1$, $\tau=0.1$, $h=0.125$ in $[-40,60]$.}\label{tab:FIEP}}
\begin{center}
\begin{tabular}{c c c c c}\hline
\small Time &$M$ &$H$ &$L^2$ error &$L^{\infty}$ error \\\hline
\small Analytical &$3.97995$ &$0.42983$ &-- &-- \\ \hline
0 &3.97993 &0.42983 &-- & -- \\
4 &3.97993 &0.42983 &8.291e-5 & 3.357e-5 \\
8 &3.97993 &0.42983 &1.633e-4 &6.721e-5 \\
12 &3.97993 &0.42983 &2.404e-4 &9.791e-5 \\
16&3.97993 &0.42983 &3.138e-4 &1.255e-4 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!htbp]
{\caption{ The invariants and errors of numerical solutions for the scheme LIEP with $c=0.1$, $\tau=0.1$, $h=0.125$ in $[-40,60]$.}\label{tab:LIEP}}
\begin{center}
\begin{tabular}{c c c c c}\hline
\small Time &$M$ &$H$ &$L^2$ error &$L^{\infty}$ error \\\hline
\small Analytical &$3.97995$ &$0.42983$ &-- &-- \\ \hline
0 &3.97993 &0.42979 &-- & -- \\
4 &3.97993 &0.42979 &4.020e-5 &1.455e-5 \\
8 &3.97993 &0.42979 &8.265e-5 &3.124e-5 \\
12 &3.97993 &0.42979 &1.224e-4 &4.673e-5 \\
16 &3.97993 &0.42979 &1.614e-4 &6.131e-5 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!htbp]
{\caption{ The invariants and errors of numerical solutions for the scheme LICN with $c=0.1$, $\tau=0.1$, $h=0.125$ in $[-40,60]$.}\label{tab:LICN}}
\begin{center}
\begin{tabular}{c c c c c}\hline
\small Time &$M$ &$H$ &$L^2$ error &$L^{\infty}$ error \\\hline
\small Analytical &$3.97995$ &$0.42983$ &-- &-- \\ \hline
0 &3.97993 &0.42983 &-- & -- \\
4 &3.97993 &0.42983 &6.485e-5 &2.651e-5 \\
8 &3.97993 &0.42983 &1.270e-4 &5.174e-5 \\
12 &3.97993 &0.42983 &1.854e-4 &7.413e-5 \\
16 &3.97993 &0.42983 &2.416e-4 &9.502e-5 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!htbp]
{\caption{ The invariants and errors of numerical solutions for the scheme LILF with $c=0.1$, $\tau=0.1$, $h=0.125$ in $[-40,60]$.}\label{tab:LILF}}
\begin{center}
\begin{tabular}{c c c c c}\hline
\small Time &$M$ &$H$ &$L^2$ error &$L^{\infty}$ error \\\hline
\small Analytical &$3.97995$ &$0.42983$ &-- &-- \\ \hline
0 &3.97993 &0.42983 &-- & -- \\
4 &3.97993 &0.42983 &1.998e-4 &7.882e-5 \\
8 &3.97993 &0.42983 &3.936e-4 &1.574e-4 \\
12 &3.97993 &0.42983 &5.842e-4 &2.332e-4 \\
16 &3.97993 &0.42983 &7.671e-4 &3.021e-4 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!htbp]
{\caption{Numerical comparison at $T=10$ with $c=0.1$, $\tau=0.1$ and $-40\leq x\leq 60$.}\label{tab:Numer-Compari}}
\begin{center}
\begin{tabular}{l l l l l l l}\hline
\multirow{2}{*}{Method} &\multicolumn{3}{c}{$h=0.125$} &\multicolumn{3}{c}{$h=0.0625$}\\ \cmidrule {2-4} \cmidrule{5-7}
&$L^2$ error &$L^{\infty}$ error &CPU(s) &$L^2$ error &$L^{\infty}$ error &CPU(s) \\ \hline
FIEP&2.023e-4&8.298e-5&1.398&1.363e-4&5.520e-5&1.796 \\
LIEP&1.035e-4&3.946e-5&0.993&1.687e-4&6.716e-5&1.268 \\
LICN&1.566e-4&6.316e-5&1.073&9.172e-5&3.545e-5&1.076 \\
LILF&4.896e-4&1.962e-4&1.006&4.241e-4&1.684e-4&1.128 \\
NC-II \cite{CaiHong2017} &2.088e-4&7.532e-5&1.755&1.234e-4&4.236e-5&2.137 \\
AMC-CN \cite{Caijiaxiang2011} &3.944e-4&1.581e-4&1.506&1.828e-4&7.307e-5&1.988\\
Linear-CN \cite{Dogan2002} &2.648e-4&1.088e-4&1.046&7.945e-4&2.957e-4&1.108 \\
\hline
\end{tabular}
\end{center}
\end{table}
\noindent
$\mathbf{Example\; 2}$ (Interaction of three positive solitary waves) This example shows the interaction of three solitary waves with different amplitudes and travelling in the same direction for the RLW equation with parameters $\gamma=1$, $\sigma=1$ and $a=1$. We consider the initial condition $u(x,0)=\sum_{i=1}^33c_i\mathrm{sech}^2(m_i(x-x_i))$, where $-200\leq x\leq400$, $c_1=1$, $c_2=0.5$, $c_3=0.25$, $x_1=-20$, $x_2=15$, $x_3=45$ and $m_i=\frac{1}{2}\sqrt{\frac{\gamma c_i}{(1+\gamma c_i)\sigma}}$. The simulation is performed with $h=0.25$ and $\tau=0.05$ until $T=400$. Due to the space restrictions, we just display the interaction of three solitary waves as time evolves using the scheme LICN. Fig. \ref{fig:inter-LICN} shows the process of interaction of three positive solitary waves as time evolves. It is clear that the three solitary waves travel forward, then interact, and finally depart without
any changes in their own shapes. We also plot the errors in mass and energy in Fig. \ref{fig:mass-energy}. These results show that the errors in mass and energy computed by the schemes LIEP, LICN and LILF remain at the level of roundoff error throughout the interaction simulation, and are noticeably smaller than those of FIEP.
\begin{figure}
\caption{\small The interaction of three solitary waves at different times using the scheme LICN.}
\label{fig:inter-LICN}
\end{figure}
\begin{figure}
\caption{\small The errors in mass (left) and energy (right) of the four schemes with $\tau=0.05$, $h=0.25$ and $x\in[-200,400]$ until $T=400$.}
\label{fig:mass-energy}
\end{figure}
\noindent
$\textbf{Example\; 3}$ (The Maxwellian pulse) In this part, we examine the evolution of an initial Maxwellian pulse into solitary waves for various values of the parameter $\sigma$. Take the initial condition
\begin{align*}
u(x,0)=\exp(-(x-7)^2),\quad -40\leq x\leq 100.
\end{align*}
All simulations are done with $\gamma=1$, $a=1$, $\tau=0.05$ and $h=0.05$. We mainly discuss the following cases: (i) $\sigma=0.04$, (ii) $\sigma=0.01$ and (iii) $\sigma=0.001$. Due to space limitations, the evolution of the RLW equation at $T=55$ is simulated only by the scheme LILF. For case (i), only one solitary wave is generated, as shown in Fig. \ref{fig:evol-LILF} (a); for case (ii), three stable solitary waves are generated, as shown in Fig. \ref{fig:evol-LILF} (b); and for case (iii), the Maxwellian initial condition decays into about six solitary waves, as shown in Fig. \ref{fig:evol-LILF} (c). The errors in mass and energy are shown in Fig. \ref{fig:mass-energy-3}. The results imply that the mass and energy are captured more accurately by the schemes LIEP, LICN and LILF than by FIEP.
\begin{figure}
\caption{\small The evolution of the RLW equation using the scheme LILF at $T=55$.}
\label{fig:evol-LILF}
\end{figure}
\begin{figure}
\caption{\small The errors in mass (left) and energy (right) of the four schemes with $\sigma=0.01$, $\tau=0.05$, $h=0.05$ and $x\in[-40,100]$ until $T=55$.}
\label{fig:mass-energy-3}
\end{figure}
\noindent
$\mathbf{Example\; 4}$ (The undular bore propagation) As our last test problem, we consider the development of an undular bore with the initial condition
\begin{align*}
u(x,0)=\dfrac{U_0}{2}\left[1-\tanh(\dfrac{x-x_0}{d})\right],
\end{align*}
and boundary conditions
\begin{align*}
u(a,t)=U_0,\quad u(b,t)=0,
\end{align*}
where $u(x,0)$ denotes the elevation of the water above the equilibrium surface at time $t=0$, $U_0$ represents the magnitude of the change in water level, which is centered on $x=x_0$, and $d$ represents the slope between the still water and the deeper water. Under the above physical boundary conditions, the mass and energy are not constant but increase linearly throughout the simulation at the following rates \cite{ZakiSI2001}
\begin{align}\label{linear-behaviour-eq}
\begin{split}
M_1&=\dfrac{d}{dt}M=\dfrac{d}{dt}\int udx=U_0+\dfrac{1}{2}U_0^2,\\[0.3cm]
M_3&=\dfrac{d}{dt}H=\dfrac{d}{dt}\int\left(\dfrac{\gamma}{6}u^3+\dfrac{1}{2}u^2\right)dx=\dfrac{1}{2}U_0^2+\dfrac{\gamma}{2}U_0^3+\dfrac{\gamma}{8}U_0^4.
\end{split}
\end{align}
For the simulation, computations use the parameters $\sigma=1/6$, $\gamma=1.5$, $a=1$, $x_0=0$, $h=0.24$, $\tau=0.1$ and $U_0=0.1$. The development of the undular bore at different times from $T=0$ to $T=250$ for $d=2$ and $d=5$ is represented in Fig. \ref{fig:d2-d5}. It is observed that both waves are stable without any numerical perturbations. The rate of growth of the amplitudes of the undulations seems to be fast in the beginning. The reason is that
the formation of the undular bore depends on the form of the initial undulation.
To see the effect of the initial undulation, the formation of the leading undulation for both the steep and the gentle slope is illustrated by plotting the maximal value of $u$ versus time $t$ in Fig. \ref{fig:mass-energy-d2-d5} (a). As seen from this picture, the magnitudes of the leading undulations become very close after a certain time. For the steep slope, the rate of
growth of the undulation is sharp at first and then decreases slowly. Fig. \ref{fig:mass-energy-d2-d5} (b) and (c) show the linear behaviour of the mass and energy in time for $d=2$ and $d=5$, respectively. We find that these results are consistent with \eqref{linear-behaviour-eq} very well.\\
\begin{figure}
\caption{\small Initial and undulation profiles with slopes $d=2$ (top) and $d=5$ (bottom) at different times using the scheme LILF.}
\label{fig:d2-d5}
\end{figure}
\begin{figure}
\caption{\small (a) Development of the first undulation from $t=0$ to $t=250$, and the behavior of the invariants for (b) $d=2$ and (c) $d=5$ by LILF.}
\label{fig:mass-energy-d2-d5}
\end{figure}
\section{Conclusions}
In this paper, combining the modified finite volume method with the discrete variational derivative method and with the linear-implicit Crank-Nicolson and leap-frog schemes, we proposed and analyzed one fully implicit and three linear-implicit energy-preserving schemes for the RLW equation. In this study, we considered the weak formulation and presented the modified finite volume method as the framework of the schemes. Moreover, we strictly proved that the semi-discrete system preserves a semi-discrete energy. For the time discretization, the DVDM based on one-point and two-point numerical solutions is used to preserve the semi-discrete energy obtained in the first step. The combination of the mFVM and the DVDM results in a fully implicit scheme and a linear-implicit scheme for the RLW equation. In addition, we developed a new idea of transforming the energy into a quadratic functional
to derive energy-stable schemes. Based on this strategy, we again discretized the weak form of the
equivalent formulation of the RLW equation by the so-called modified finite volume method
in space. Then, applying the linear-implicit Crank-Nicolson and leap-frog schemes in the temporal direction, two second-order
linear energy-preserving numerical schemes are obtained readily. Finally, numerical results show that our proposed numerical schemes have excellent performance
in providing accurate solutions and preserving the discrete invariants. Compared with the scheme FIEP, the linear-implicit schemes LIEP, LICN and LILF improve the computational efficiency. In the future, a linear momentum-preserving scheme and the corresponding error analysis will be among our ongoing projects.
\end{document}